
Customer Journey Maps – Walking a Mile in Your Customer’s Shoes – IDF

Perhaps the biggest buzzword in customer relationship management is “engagement”. Engagement is a funny thing, in that it is not measured in likes, clicks, or even purchases. It’s a measure of how much customers feel they are in a relationship with a product, business or brand. It focuses on harmony and how your business, product or brand becomes part of a customer’s life. As such, it is pivotal in UX design. One of the best tools for examining engagement is the customer journey map.

As the old saying in the Cherokee tribe goes, “Don’t judge a man until you have walked a mile in his shoes” (although the saying was actually promoted by Harper Lee of To Kill a Mockingbird fame). The customer journey map lets you walk that mile.

“Your customer doesn’t care how much you know until they know how much you care.”

– Damon Richards, Marketing & Strategy expert

Copyright holder: Alain Thys, Flickr. Copyright terms and license: CC BY-ND 2.0

Customer journey maps don’t need to be literal journeys, but they can be. Creativity in determining how you represent a journey is fine.

What is a Customer Journey Map?

A customer journey map is a research-based tool. It examines the story of how a customer relates to the business, brand or product over time. As you might expect – no two customer journeys are identical. However, they can be generalized to give an insight into the “typical journey” for a customer as well as providing insight into current interactions and the potential for future interactions with customers.

Customer journey maps can be useful beyond the UX design and marketing teams. They can help facilitate a common business understanding of how every customer should be treated across all sales, logistics, distribution, care, etc. channels. This in turn can help break down “organizational silos” and start a process of wider customer-focused communication in a business.

They may also be employed to educate stakeholders as to what customers perceive when they interact with the business. They help them explore what customers think, feel, see, hear and do and also raise some interesting “what ifs” and the possible answers to them.

Adam Richardson of Frog Design, writing in Harvard Business Review says: “A customer journey map is a very simple idea: a diagram that illustrates the steps your customer(s) go through in engaging with your company, whether it be a product, an online experience, retail experience, or a service, or any combination. The more touchpoints you have, the more complicated — but necessary — such a map becomes. Sometimes customer journey maps are “cradle to grave,” looking at the entire arc of engagement.”

Copyright holder: Stefano Maggi, Flickr. Copyright terms and license: CC BY-ND 2.0

Here, we see a customer journey laid out based on social impact and brand interaction with that impact.

What Do You Need to Do to Create a Customer Journey Map?

Firstly, you will need to do some preparation prior to beginning your journey maps; ideally you should have:

  • User personas. If you can’t tell a typical user’s story, how will you know if you’ve captured their journey?
  • A timescale. Customer journeys can take place in a week, a year, a lifetime, etc., and knowing what length of journey you will measure before you begin is very useful indeed.
  • A clear understanding of customer touchpoints. What are your customers doing and how are they doing it?
  • A clear understanding of the channels in which actions occur. Channels are the places where customers interact with the business – from Facebook pages to retail stores. This helps you understand what your customers are actually doing.
  • An understanding of any other actors who might alter the customer experience. For example, friends, family, colleagues, etc. may influence the way a customer feels about any given interaction.
  • A plan for “moments of truth” – these are the positive interactions that create good feelings in customers and which you can use at touchpoints where frustrations exist.

Copyright holder: Hans Põldoja, Copyright terms and license: CC BY-SA 4.0

User personas are incredibly useful tools when it comes to putting together any kind of user research. If you haven’t developed them already, they should be a priority for you, given that they will play such a pivotal role in the work that you, and any UX teams you join in the future, will produce.

Once you’ve done your preparation, you can follow a simple 8-point process to develop your customer journey maps:

  • Review Organization Objectives – what are your goals for this mapping exercise? What organizational needs do you intend to meet?
  • Review Current User Research – the more user research you have at your fingertips, the easier this exercise will be. Be creative, and if you don’t have the right research to define the journey, then consider how you can carry that research out.
  • Review Touchpoints and Channels – the next step is to ensure that you effectively map touchpoints and channels. A touchpoint is a step in the journey where the user interacts with a company or product, and a channel is the means by which the user does this. So, for example, a touchpoint could be “pay this invoice” and channels could be “online”, “retail”, “over the phone”, “mail”, etc. It can also help to brainstorm at this stage and see if there are any touchpoints or channels you’ve missed in your original data collection exercise.
  • Create an Empathy Map. An empathy map examines how the customer feels during each interaction – you want to concentrate on how the customer feels and thinks as well as what he/she will say, do, hear, etc. in any given situation.
  • Build an affinity diagram. The idea here is first to brainstorm around each concept you’ve touched on and then to create a diagram which relates all these concepts, feelings, etc. together. This is best achieved by grouping ideas in categories and labeling them. You can eliminate concepts and the like which don’t seem to have any impact on customer experience at this stage, too.
  • Sketch the customer journey. How you do this is up to you; you can build a nice timeline map that brings together the journey over the course of time. You could also turn the idea into a video or an audio clip or use a completely different style of diagram. The idea is simply to show the motion of a customer through touchpoints and channels across your time frame and how that customer feels about each interaction on that journey. The map should include the outputs of your empathy map and affinity diagram.
  • Iterate and produce. Then, take your sketches and make them into something useful; keep refining the content and then produce something that is visually appealing and useful to stakeholders, team members, etc. Don’t be afraid to rope in a graphic designer at this stage if you’re not good at making things look awesome.
  • Distribute and utilize. The journey map serves no purpose sitting on your hard drive or in your desk drawer – you need to get it out there to people and explain why it’s important. Then, it needs to be put to use; you should be able to define KPIs around the ideal journey, for example, and then measure future success as you improve the journey.

Copyright holder: Rosenfeld Media. Copyright terms and license: CC BY 2.0

A complete customer journey map by Adaptive Path for the experience of interacting with railway networks.

Anatomy of a Customer Journey Map

A customer journey map can take any form or shape you like, but let’s take a look at how you can use the Interaction Design Foundation’s template (link below).

Copyright holder: The Interaction Design Foundation. Copyright terms and license: CC BY-SA

A basic customer journey map template.

The map here is split into several sections: In the top zone, we show which persona this journey refers to and the scenario which is described by the map.

The middle zone has to capture the thoughts, actions and emotional experiences for the user, at each step during the journey. These are based on our qualitative user research data and can include quotes, images or videos of our users during that step. Some of these steps are “touchpoints” – i.e., situations where the customer interacts with our company or product. It’s important to describe the “channels” in each touchpoint – i.e., how that interaction takes place (e.g., in person, via email, by using our website, etc.).

In the bottom zone, we can identify the insights and barriers to progressing to the next step, the opportunities which arise from these, and possibly an assignment for internal team members to handle.
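The three zones described above lend themselves to a simple data model. The sketch below is purely illustrative (the class and field names are my own assumptions, not part of the IDF template); it shows one way a team might store a journey map so that touchpoints and channels can be queried programmatically:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class JourneyStep:
    name: str
    thoughts: List[str] = field(default_factory=list)  # quotes from user research
    actions: List[str] = field(default_factory=list)
    emotion: str = "neutral"
    touchpoint: Optional[str] = None  # set when the customer interacts with us
    channel: Optional[str] = None     # e.g. "in person", "email", "website"

@dataclass
class JourneyMap:
    persona: str   # top zone: which persona this journey refers to
    scenario: str  # top zone: the situation being mapped
    steps: List[JourneyStep] = field(default_factory=list)  # middle zone
    insights: List[str] = field(default_factory=list)       # bottom zone
    opportunities: List[str] = field(default_factory=list)  # bottom zone

    def touchpoints(self) -> List[JourneyStep]:
        """Only the steps where the customer interacts with the company."""
        return [s for s in self.steps if s.touchpoint is not None]
```

A real map would also carry media (quotes, images, videos of users), but even this much is enough to generate a touchpoint-by-channel table for stakeholders.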

The Take Away

Creating customer journeys (including those exploring current and future states) doesn’t have to be a massively time-consuming process – most journeys can be mapped in less than a day. The effort put in is worthwhile because it enables a shared understanding of the customer experience and offers each stakeholder and team member the chance to contribute to improving that experience. Taking this “day in the life of a customer” approach will yield powerful insights into and intimate knowledge of what “it’s like” from the user’s angle. Seeing the details in sharp relief will give you the chance to translate your empathy into a design that better accommodates your users’ needs and removes (or alleviates) as many pain points as possible.

References & Where to Learn More

Hero Image: Copyright holder: Espen Klem, Flickr. Copyright terms and license: CC BY 2.0

Boag, P. (2015). Customer Journey Mapping: Everything You Need to Know.

Designing CX. The Customer Experience Journey Mapping Toolkit.

Kaplan, K. (2016). When and How to Create Customer Journey Maps.

Richardson, A. (2010). Using Customer Journey Maps to Improve Customer Experience. Harvard Business Review.

Nielsen Norman Group also publishes guidelines for designing customer journey maps.


GitHub’s Top 100 Most Valuable Repositories Out of 96 Million – Hackernoon

GitHub is not just a code hosting service with version control — it’s also an enormous developer network.

The sheer size of GitHub at over 30 million accounts, more than 2 million organizations, and over 96 million repositories translates into one of the world’s most valuable development networks.

How do you quantify the value of this network? And is there a way to get the top repositories?

Here at U°OS, we ran the GitHub network through a simplified version¹ of our reputation algorithm and produced the top 100 most valuable repositories.

The result is as fascinating as it is eclectic, and it feels like a good reflection of our society’s interest in technology and where it is heading.
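The article doesn’t spell out the algorithm (footnote [1] points to the U°OS repository for the actual calculation), but reputation over a network of accounts and repositories is commonly computed with a PageRank-style iteration. The sketch below is a generic illustration of that idea, not U°OS’s actual method:

```python
def reputation(graph, damping=0.85, iterations=50):
    """Toy PageRank-style reputation over a directed interaction graph,
    given as {node: [nodes it endorses]} -- e.g. an account "endorses"
    the repositories it contributes to."""
    nodes = set(graph) | {t for targets in graph.values() for t in targets}
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for src in nodes:
            targets = graph.get(src, [])
            if targets:
                share = damping * score[src] / len(targets)
                for t in targets:
                    new[t] += share
            else:  # dangling node: spread its mass evenly
                for n in nodes:
                    new[n] += damping * score[src] / len(nodes)
        score = new
    return score
```

On the real GitHub graph the edges would presumably be weighted by contribution activity, which is where the eclectic top-100 ordering comes from.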

There are the big proprietary players with open source projects — Google, Apple, Microsoft, Facebook, and even Baidu. And at the same time, there’s a Chinese anti-censorship tool.

There’s Bitcoin for cryptocurrency.

There’s a particle detector for CERN’s Large Hadron Collider.

There are gaming projects like Space Station 13 and Cataclysm: Dark Days Ahead and a gaming engine Godot.

There are education projects like freeCodeCamp, Open edX, and Oppia.

There are web and mobile app building projects like WordPress, Joomla, and Flutter to publish your content on.

There are databases to store your content for the web like Ceph and CockroachDB.

And there’s a search engine to navigate through the content — Elasticsearch.

There are also, perhaps unsurprisingly, jailbreak projects like Cydia compatibility manager for iOS and Nintendo 3DS custom firmware.

And there’s a smart home system — Home Assistant.

All in all, it’s really a great outlook for the technology world: we learn, build stuff to broadcast our unique voices, use crypto, break free from proprietary software on our hardware, and in our spare time we game in our automated homes. And the big companies open-source their projects.

Before I proceed with the list: running the Octoverse through the reputation algorithm also produced a value score for every individual GitHub contributor. So, if you have a GitHub account and are curious, you can get your score and convert it to a Universal Portable Reputation.

Top 100 projects & repositories

Out of over 96 million repositories

  1. Google Kubernetes
    Container scheduling and management
  2. Apache Spark
    A unified analytics engine for large-scale data processing
  3. Microsoft Visual Studio Code
    A source-code editor
  4. NixOS Package Collection
    A collection of packages for the Nix package manager
  5. Rust
    Programming language
  6. Firehol IP Lists
    Blacklists for Firehol, a firewall builder
  7. Red Hat OpenShift
    A community distribution of Kubernetes optimized for continuous application development and multi-tenant deployment
  8. Ansible
    A deployment automation platform
  9. Automattic WordPress Calypso
    A JavaScript and API powered front-end for
  10. Microsoft .NET CoreFX
    Foundational class libraries for .NET Core
  11. Microsoft .NET Roslyn
    .NET compiler
  12. Node.js
    A JavaScript runtime built on Chrome’s V8 JavaScript engine
  13. TensorFlow
    Google’s machine learning framework
  14. freeCodeCamp
    Code learning platform
  15. Space Station 13
    A round-based roleplaying game
  16. Apple Swift
    Apple’s programming language
  17. Elasticsearch
    A search engine
  18. Moby
    An open framework to assemble specialized container systems
  19. CockroachDB
    A cloud-native SQL database
  20. Cydia Compatibility Checker
    A compatibility checker for Cydia — a package manager for iOS jailbroken devices
  21. Servo
    A web browser engine
  22. Google Flutter
    Google’s mobile app SDK to create interfaces for iOS and Android
  23. macOS Homebrew Package Manager
    Default formulae for the missing package manager for macOS
  24. Home Assistant
    Home automation software
  25. Microsoft .NET CoreCLR
    Runtime for .NET Core
  26. CocoaPods Specifications
    Specifications for CocoaPods, a Cocoa dependency manager
  27. Elastic Kibana
    An analytics and search dashboard for Elasticsearch
  28. Julia Language
    A technical computing language
  29. Microsoft TypeScript
    A superset of JavaScript that compiles to plain JavaScript
  30. Joomla
    A content management system
  31. DefinitelyTyped
    A repository for TypeScript type definitions
  32. Homebrew Cask
    A CLI workflow for the administration of macOS applications distributed as binaries
  33. Ceph
    A distributed object, block, and file storage platform
  34. Go
    Programming language
  35. AMP HTML Builder
    A way to build pages for Google AMP
  36. Open edX
    An online education platform
  37. Pandas
    A data analysis and manipulation library for Python
  38. Istio
    A platform to manage microservices
  39. ManageIQ
    A containers, virtual machines, networks, and storage management platform
  40. Godot Engine
    A multi-platform 2D and 3D game engine
  41. Gentoo Repository Mirror
    A Gentoo ebuild repository mirror
  42. Odoo
    A suite of web based open source business apps
  43. Azure Documentation
    Documentation of Microsoft Azure
  44. Magento
    An eCommerce platform
  45. Saltstack
    Software to automate the management and configuration of any infrastructure or application at scale
  46. AdGuard Filters
    Ad blocking filters for AdGuard
  47. Symfony
    A PHP framework
  48. CMS Software for the Large Hadron Collider
    Particle detector software components for CERN’s Large Hadron Collider
  49. Red Hat OpenShift
    OpenShift installation and configuration management
  50. ownCloud
    Personal cloud software
  51. gRPC
    A remote procedure call (RPC) framework
  52. Liferay
    An enterprise web platform
  53. CommCare HQ
    A mobile data collection platform
  54. WordPress Gutenberg
    An editor plugin for WordPress
  55. PyTorch
    A Python package for Tensor computation and deep neural networks
  56. Kubernetes Test Infrastructure
    A test-infra repository for Kubernetes
  57. Keybase
    Keybase client repository
  58. Facebook React
    A JavaScript library for building user interfaces
    Code learning resource
  60. Bitcoin Core
    Bitcoin client software
  61. Arm Mbed OS
    A platform operating system for the Internet of Things
  62. scikit-learn
    A Python module for machine learning
  63. Nextcloud
    A self-hosted productivity platform
  64. Helm Charts
    A curated list of applications for Kubernetes
  65. Terraform
    An infrastructure management tool
  66. Ant Design
    A UI design language
  67. Phalcon Framework Documentation
    Documentation for Phalcon, a PHP framework
  68. Documentation for CMS Software for the Large Hadron Collider
    Documentation for CMS Software for CERN’s Large Hadron Collider
  69. Apache Kafka Mirror
    A mirror for Apache Kafka, a distributed streaming platform
  70. Electron
    A framework to write cross-platform desktop applications using JavaScript, HTML and CSS
  71. Zephyr Project
    A real-time operating system
  72. The web-platform-tests Project
    A cross-browser testsuite for the Web-platform stack
  73. Marlin Firmware
    Optimized firmware for RepRap 3D printers based on the Arduino platform
  74. Apache MXNet
    A library for deep learning
  75. Apache Beam
    A unified programming model
  76. Fastlane
    A build and release automation tool for iOS and Android apps
  77. Kubernetes Website and Documentation
    A repository for the Kubernetes website and documentation
  78. Ruby on Rails
    A web-application framework
  79. Zulip
    Team chat software
  80. Laravel
    A web application framework
  81. Baidu PaddlePaddle
    Baidu’s deep learning framework
  82. Gatsby
    A web application framework
  83. Rust Crate Registry
    Rust’s community package registry
  84. Nintendo 3DS Custom Firmware
    A complete guide to 3DS custom firmware
  85. TiDB
    A NewSQL database
  86. Angular CLI
    CLI tool for Angular, a Google web application framework
  87. MAPS.ME
    Offline OpenStreetMap maps for iOS and Android
  88. Eclipse Che
    A cloud IDE for Eclipse
  89. Brave Browser
    A browser with native BAT cryptocurrency
  90. Patchwork
    A repository to learn Git
  91. Angular Material
    Component infrastructure and Material Design components for Angular, a Google web application framework
  92. Python
    Programming language
  93. Space Station 13
    A round-based roleplaying game
  94. Cataclysm: Dark Days Ahead
    A turn-based survival game
  95. Material-UI
    React components that implement Google’s Material Design
  96. Ionic
    A Progressive Web Apps development framework
  97. Oppia
    A tool for collaboratively building interactive lessons
  98. Alluxio
    A virtual distributed storage system
  99. XX Net
    A Chinese web proxy and anti-censorship tool
  100. Microsoft .NET CLI
    A CLI tool for .NET

[1] The explanation of the calculation of the simplified version is at the U°OS Network GitHub repository.



Improving the Accuracy of Automatic Speech Recognition Models for Broadcast News – Appen

In their paper entitled English Broadcast News Speech Recognition by Humans and Machines, a team from IBM and Appen sets out to identify techniques that close the gap between automatic speech recognition (ASR) and human performance.

Where does the data come from?

IBM’s initial work in the voice recognition space was done as part of the U.S. government’s Defense Advanced Research Projects Agency (DARPA) Effective Affordable Reusable Speech-to-Text (EARS) program, which led to significant advances in speech recognition technology. The EARS program produced about 140 hours of supervised BN training data and around 9,000 hours of very lightly supervised training data from closed captions from television shows. By contrast, EARS produced around 2,000 hours of highly supervised, human-transcribed training data for conversational telephone speech (CTS).

Lost in translation?

Because so much training data is available for CTS, the team from IBM and Appen endeavored to apply similar speech recognition strategies to BN to see how well those techniques translate across applications. To understand the challenge the team faced, it’s important to call out some important differences between the two speech styles:

Broadcast news (BN)

  • Clear, well-produced audio quality
  • Wide variety of speakers with different speaking styles
  • Varied background noise conditions — think of reporters in the field
  • Wide variety of news topics

Conversational telephone speech (CTS)

  • Often poor audio quality with sound artifacts
  • Unscripted
  • Interspersed with moments where speech overlaps between participants
  • Interruptions, sentence restarts, and background confirmations between participants, i.e. “okay”, “oh”, “yes”

How the team adapted speech recognition models from CTS to BN

The team adapted the speech recognition systems that were so successfully used for the EARS CTS research: multiple long short-term memory (LSTM) and ResNet acoustic models trained on a range of acoustic features, along with word and character LSTMs and convolutional WaveNet-style language models. This strategy had produced word error rates between 5.1% and 9.9% for CTS in a previous study, specifically the HUB5 2000 English Evaluation conducted by the Linguistic Data Consortium (LDC). The team tested a simplified version of this approach on the BN data set, which wasn’t human-annotated, but rather created using closed captions.

Instead of adding all the available training data, the team carefully selected a reliable subset, then trained LSTM and residual network-based acoustic models with a combination of n-gram and neural network language models on that subset. In addition to automatic speech recognition testing, the team benchmarked the automatic system against an Appen-produced high-quality human transcription. The primary language model training text for all these models consisted of a total of 350 million words from different publicly available sources suitable for broadcast news.

Getting down to business

In the first set of experiments the team separately tested the LSTM and ResNet models in conjunction with the n-gram and FF-NNLM before combining scores from the two acoustic models in comparison with the results obtained on the older CTS evaluation. Unlike results observed on original CTS testing, no significant reduction in the word error rate (WER) was achieved after scores from both the LSTM and ResNet models were combined. The LSTM model with an n-gram LM individually performs quite well and its results further improve with the addition of the FF-NNLM.

For the second set of experiments, word lattices were generated after decoding with the LSTM+ResNet+n-gram+FF-NNLM model. The team generated n-best lists from these lattices and rescored them with the LSTM1-LM. LSTM2-LM was also used to rescore word lattices independently. Significant WER gains were observed after using the LSTM LMs. This led the researchers to hypothesize that the secondary fine-tuning with BN-specific data is what allows LSTM2-LM to perform better than LSTM1-LM.
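N-best rescoring of the kind described above usually boils down to re-ranking hypotheses by a weighted combination of acoustic and language model scores. The sketch below is a minimal illustration of that idea; the function name and the simple linear interpolation are assumptions for exposition, not IBM’s exact setup:

```python
def rescore_nbest(hypotheses, lm_score, lm_weight=0.7):
    """Re-rank an n-best list: combined score = acoustic + weight * LM.

    `hypotheses` is a list of (text, acoustic_log_prob) pairs and
    `lm_score` maps a text to its language-model log-probability
    (in practice, the output of an LSTM LM)."""
    best_text, _best = max(
        ((text, acoustic + lm_weight * lm_score(text))
         for text, acoustic in hypotheses),
        key=lambda pair: pair[1])
    return best_text
```

The interpolation weight is typically tuned on held-out data; a stronger LM (like the BN-fine-tuned LSTM2-LM) earns a larger share of the combined score.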

The results

Our ASR results have clearly improved state-of-the-art performance, and significant progress has been made compared to systems developed over the last decade. When compared to the human performance results, the absolute ASR WER is about 3% worse. Although the machine and human error rates are comparable, the ASR system has much higher substitution and deletion error rates.

Looking at the different error types and rates, the research produced interesting takeaways:

  • There’s a significant overlap in the words that ASR and humans delete, substitute, and insert.
  • Humans seem to be careful about marking hesitations: %hesitation was the most inserted symbol in these experiments. Hesitations seem to be important in conveying meaning to the sentences in human transcriptions. The ASR systems, however, focus on blind recognition and were not successful in conveying the same meaning.
  • Machines have trouble recognizing short function words: “the”, “and”, “of”, “a” and “that” get deleted the most. Humans, on the other hand, seem to catch most of them. It seems likely that these words aren’t fully articulated, so the machine fails to recognize them, while humans are able to infer these words naturally.
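Since WER drives all of these comparisons, it may help to see how it is computed: the word-level edit distance (substitutions + deletions + insertions) divided by the number of reference words. A minimal sketch:

```python
def wer(ref, hyp):
    """Word error rate: (substitutions + deletions + insertions) / N,
    computed as a word-level Levenshtein distance."""
    r, h = ref.split(), hyp.split()
    # d[i][j] = minimum edits turning r[:i] into h[:j]
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i               # i deletions
    for j in range(len(h) + 1):
        d[0][j] = j               # j insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            substitute = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(substitute, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)
```

Tracing back through the same alignment (not shown here) is what yields the separate substitution, deletion, and insertion rates the paper compares between machines and humans.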


The experiments show that speech ASR techniques can be transferred across domains to provide highly accurate transcriptions. For both acoustic and language modeling, the LSTM- and ResNet-based models proved effective and human evaluation experiments kept us honest. That said, while our methods keep improving, there is still a gap to close between human and machine performance, demonstrating a continued need for research on automatic transcription for broadcast news.



Consulting or con-$ulting – Hackernoon

The article by The Register about Hertz suing Accenture over their failed website revamp has gained a lot of attention on social media, creating a lot of discussion around failed software projects and IT consulting giants such as Accenture.

What I found saddest in the article is that the part about Accenture completely fumbling a huge website project doesn’t surprise me one bit: I stumble upon articles about large enterprise IT projects failing and going well over budget on a weekly basis. What was more striking about the article is that Hertz is suing Accenture, and going public with it. This tells us something about the state of the IT consulting business, and you don’t have to be an expert to tell that there is a huge flaw somewhere in how large software projects are sold by consultancies, and especially in how they are purchased and handled by their clients.

Just by reading through the article, one might think that the faults were made completely on Accenture’s side, but there is definitely more to it. Hertz, too, has clearly made a lot of mistakes during crucial phases of the project: in purchasing, service design and development. I’ll try to bite into the most critical and prominent flaws.

If we dig into the actual lawsuit document we start getting a better picture of what actually went down, and what led to tens of millions of dollars going down the drain on a service that is unusable.

Siloed service design & abandoning ownership

Reading through points 2. and 3. of the legal complaint we get a small glimpse into the initial service design process:

2. Hertz spent months planning the project. It assessed the current state of its ecommerce activities, defined the goals and strategy for its digital business, and developed a roadmap that would allow Hertz to realize its vision.

3. Hertz did not have the internal expertise or resources to execute such a massive undertaking; it needed to partner with a world-class technology services firm. After considering proposals from several top-tier candidates, Hertz narrowed the field of vendors to Accenture and one other.

Hertz first “planned the project, defined the goals and strategy and developed the roadmap”. Then, after realising they “don’t have the internal expertise or resources”, they started looking for a vendor who would be able to carry out their vision.

This was the first large mistake. If the initial plan, goals and vision are set before the vendor (the party responsible for realising that vision) is involved, you will most likely end up in a ‘broken telephone’ situation where the vision and goals are not properly transferred from the initial planners and designers to the implementers.

This is a very dangerous starting situation. What makes it even worse is this:

6. Hertz relied on Accenture’s claimed expertise in implementing such a digital transformation. Accenture served as the overall project manager. Accenture gathered Hertz’s requirements and then developed a design to implement those requirements. Accenture served as the product owner, and Accenture, not Hertz, decided whether the design met Hertz’s requirements.

Hertz made Accenture the product owner, thus ceding ownership of the service to Accenture. This, if anything, tells us that Hertz did not have the required expertise and maturity to undertake this project in the first place. Making a consulting company, one with no deep insight into your specific domain, business and needs, the owner and main visionary of your service is usually not a good idea. Especially when you consider that it might not be in the consulting company’s interest to finish the project within the initial budget, but rather to extend the project to generate more sales and revenue.

Having the vendor as a product owner is not a rare occurrence, and it can sometimes work if the vendor has deep enough knowledge of the client’s organisation, business & domain. However, when working in such a large project and for a huge organisation like Hertz, it’s impossible for the consulting company to have the necessary insight and experience of Hertz’s business.

Lack of transparency & communication

Moving on to the development phase of the project:

7. Accenture committed to delivering an updated, redesigned, and re-engineered website and mobile apps that were ready to “go-live” by December 2017.

8. Accenture began working on the execution phase of the project in August 2016 and it continued to work until its services were terminated in May 2018. During that time, Hertz paid Accenture more than $32 million in fees and expenses. Accenture never delivered a functional website or mobile app. Because Accenture failed to properly manage and perform the services, the go-live date was postponed twice, first until January 2018, and then until April 2018. By that point, Hertz no longer had any confidence that Accenture was capable of completing the project, and Hertz terminated Accenture.

Hertz finally lost its confidence in Accenture roughly five months after the initial planned go-live date, and seemingly at least a full year after kicking off the project partnership with them.

If it took Hertz around 1½ years to realise that Accenture couldn’t deliver, it’s safe to say that Hertz and Accenture had both been working in their own silos with minimal transparency into each other’s work, and that critical information was not moving between the organisations. My best guess is that Hertz and Accenture met only once in a while to assess the status of the project and share updates. But a software project like this should be an ongoing collaborative process, with constant daily discussion between the parties. In a well-functioning organisation, the client and vendor are one united team pushing the product out together.

The lack of communication infrastructure is a common problem in large-scale software projects between a company and its vendor. It’s hard to say whose responsibility it should be to organise the required tools, processes, meetings and environments to make sure that the necessary discussions are being had and that knowledge is shared. But the consulting company is often the one with a more modern take on communication, and it can provide the framework and tools for it much more easily.

We get a deeper glimpse into the lack of transparency, especially on the technical side, when we go through points 36 to 42 of the legal complaint, e.g. number 40:

40. Accenture’s Java code did not follow the Java standard, displayed poor logic, and was poorly written and difficult to maintain.

Right. Accenture’s code quality and technical competence were not at a satisfying level, and that is on Accenture, as they were hired to be the technical experts on the project. But if Hertz had had even one technical person working on the project, with visibility into the codebase, they could’ve caught this problem right from the first commit, instead of noticing it after over a year of Accenture delivering bad-quality code. If you are buying software for tens of millions, you must have an in-house technical expert as part of the software development process, even if only as a spectator.

The lack of transparency and technical expertise, combined with the lack of ownership and responsibility, was ultimately the reason why Hertz managed to blow tens of millions of dollars instead of just a couple. If Hertz had had the technical know-how and been more deeply involved in the work, they could have assessed early on that the way Accenture was doing things was flawed. Perhaps some people at Hertz saw that the situation was bad early on, but since ownership of the product was on Accenture’s side, it must have been hard for those people to speak up as they saw the issues. This resulted in Accenture being allowed to do unsuitable work for over a year, until the initial ‘go-live’ date was long past and it was already too late.

And finally… Crony contracts & short-term thinking

There have been rumours that in 2016 Hertz leadership fired its entire well-performing in-house software development talent, replaced it with an off-shore workforce from IBM and made crony ‘golf course’ deals with Accenture, and that the Hertz CIO secured a $7 million bonus for the short-term ‘savings’ made by those changes. I’d recommend taking these Hacker News comments with a grain of salt, but I wouldn’t be at all surprised if the allegations were more or less true.

These kinds of crony contracts are a huge problem in the enterprise software industry in general, and the news we see about them is only the tip of the iceberg. But that is a subject for a whole other blog post.

To wrap it up

It’s important to keep in mind that the lawsuit text doesn’t really tell us the whole truth: a lot of things must have happened during those years that we will never know about. However, it’s quite clear that some of the mistakes that constantly happen in consulting projects happened here too, and that the ball was dropped by both parties involved.

It’s going to be interesting to see how the lawsuit plays out, as it will work as a real-life example to both consulting companies and their clients on what could happen when their expensive software projects go south.

For a company that is considering buying software, the most important lessons to take away from this mess are:

  • Before buying software, make sure your organisation is ready for it and the required expertise is there.
  • Include the vendor from the very beginning in the planning, goal defining and service design process. Make sure you and the vendor are working as a unified team with a shared goal.
  • Make sure that the contracts are well thought out and prepare the business for worst-case scenarios.
  • Keep ownership of the project in your own hands, unless you are absolutely sure that the vendor has deep enough knowledge of your organisation and its business and domain.
  • Make sure the necessary communication & transparency is present both ways. The communication between you and the vendor should be constant, natural, open and wide. Include all people involved in the project, not just the managers. You must have full transparency into and understanding of the vendor’s development process.

Also, it’s worth noting that many companies that have had bad experiences with large enterprise consultancies have turned to smaller, truly agile software consultancies instead of giants like Accenture. Smaller companies are better at taking responsibility for their work, and they have the motivation to actually deliver quality, as they appreciate the chance to tackle a large project. For a small company, the impact of delivering a project well and keeping the client happy is much greater than it is for an already well-established giant.

Hopefully by learning from history and the mistakes of others, we can avoid going through the hell that the people at Hertz had to!


Geothermal Making Inroads as Baseload Power

It’s energy that has been around forever, used for years as a heating source across the world, particularly in areas with volcanic activity. Today, geothermal has surfaced as another renewable resource, with advancements in drilling technology bringing down costs and opening new areas to development.

Renewable energy continues to increase its share of the world’s power generation. Solar and wind power receive most of the headlines, but another option is increasingly being recognized as an important carbon-free resource.

Geothermal, accessing heat from the earth, is considered a sustainable and environmentally friendly source of renewable energy. In some parts of the world, the heat that can be used for geothermal is easily accessible, while in other areas, access is more challenging. Areas with volcanic activity, such as Hawaii—where the recently restarted Puna Geothermal Venture supplies about 30% of the electricity demand on the island of Hawaii—are well-suited to geothermal systems.

“What we need to do as a renewable energy industry is appreciate that we need all sources of renewable power to be successful and that intermittent sources of power need the baseload sources to get to a 100% renewable portfolio,” Will Pettitt, executive director of the Geothermal Resources Council (GRC), told POWER. “Geothermal therefore needs to be collaborating with the solar, wind, and biofuel industries to make this happen.”

1. The Nesjavellir Geothermal Power Station is located near the Hengill volcano in Iceland. The 120-MW plant contributes to the country’s 750 MW of installed geothermal generation capacity. Courtesy: Gretar Ívarsson

The U.S. Department of Energy (DOE) says the U.S. leads the world in geothermal generation capacity, with about 3.8 GW. Indonesia is next at about 2 GW, with the Philippines at about 1.9 GW. Turkey and New Zealand round out the top five, followed by Mexico, Italy, Iceland (Figure 1), Kenya, and Japan.

Research and Development

Cost savings from geothermal when compared to other technologies is part of its allure. The DOE is funding research into clean energy options, including up to $84 million in its 2019 budget to advance geothermal energy development.


2. This graphic produced by AltaRock Energy, a geothermal development and management company, shows the energy-per-well equivalent for shale gas, conventional geothermal, an enhanced geothermal system (EGS) well, and a “super hot” EGS well. Courtesy: AltaRock Energy / National Renewable Energy Laboratory

Introspective Systems, a Portland, Maine-based company that develops distributed grid management software, in February received a Small Business Innovation Research award from the DOE in support of the agency’s Enhanced Geothermal Systems’ (EGS) project. At EGS (Figure 2) sites, a fracture network is developed, and water is pumped into hot rock formations thousands of feet below the earth’s surface. The heated water is then recovered to drive conventional steam turbines. Introspective Systems is developing monitoring software that enables EGS systems to be cost-competitive.

Kay Aikin, Introspective Systems’ CEO, was among business leaders selected by the Clean Energy Business Network (CEBN)—a group of more than 3,000 business leaders from all 50 states working in the clean energy economy—to participate in meetings with members of Congress in March to discuss the need to protect and grow federal funding for the DOE and clean energy innovation overall.

Aikin told POWER that EGS technology is designed to overcome the problem of solids coming “out of the liquids and filling up all the pores,” or cracks in rock through which heated water could flow. The Introspective Systems’ software uses “algorithms to find the sites [suitable for a geothermal system]. We can track those cracks and pores, and that is what we are proposing to do.”


“In my view there are three technology pieces that need to come together for EGS to be successful,” said the GRC’s Pettitt. “Creating and maintaining the reservoir so as to ensure sufficient permeability without short-circuiting; bringing costs down on well drilling and construction; [and] high-temperature downhole equipment for zonal isolation and measurements. These technologies all have a lot of crossover opportunities to helping conventional geothermal be more efficient.”

Aikin noted a Massachusetts Institute of Technology report on geothermal [The Future of Geothermal Energy: Impact of Enhanced Geothermal Systems (EGS) on the United States in the 21st Century] “that was the basis for this funding from DOE,” she said. Aikin said current goals for geothermal would “offset about 6.1% of CO2 emissions, about a quarter of the Paris climate pledge. Because it’s base[load] power, it will offset coal and natural gas. We’re talking about roughly 1,500 new geothermal plants by 2050, and they can be sited almost anywhere.”

NREL Takes Prominent Role

Kate Young, manager of the geothermal program at the National Renewable Energy Laboratory (NREL) in Golden, Colorado, talked to POWER about the biggest things that the industry is focusing on. “DOE has been working with the national labs the past several years to develop the GeoVision study, that is now in the final stages of approval,” she said.

The GeoVision study explores potential geothermal growth scenarios across multiple market sectors for 2020, 2030, and 2050. NREL’s research focuses on things such as:

    ■ Geothermal resource potential – hydrothermal, coproduction, and near-field and greenfield enhanced geothermal systems.
    ■ Techno-economic characteristics – the costs and technical issues of advanced technologies and potential future impacts and calculating geothermal capacity.
    ■ Market penetration – modeling of dozens of scenarios, including multiple reference scenarios.
    ■ Non-technical barriers – factors that create delays, increase risk, or increase the cost of project development.

The study started with analyses spearheaded by several DOE labs in areas such as exploration; reservoir development and management; non-technical barriers; hybrid systems; and thermal applications (see sidebar). NREL then synthesized the analyses from the labs in market deployment models for the electricity and heating/cooling sectors.

Geothermal Is Big Business in Boise

The first U.S. geothermal district heating system began operating in 1892 in Boise, Idaho. The city still relies on geothermal, with the largest system of its kind in the U.S., and the sixth-largest worldwide, according to city officials. The current system, which began operating in 1983, heats 6 million square feet of real estate—about a third of the city’s downtown (Figure 3)—in the winter. The city last year got the go-ahead from the state Department of Water Resources to increase the amount of water it uses, and Public Works Director Steve Burgos told POWER the city wants to connect more downtown buildings to the system.

3. This plaque, designed by artist Ward Hooper, adorns buildings across downtown Boise, Idaho, denoting properties that use geothermal energy. Courtesy: City of Boise

Burgos said it costs the city about $1,000 a month to pump the water out of the ground and into the system, and about another $1,000 a month for the electricity used to inject the water back into the aquifer. Burgos said the water “comes out at 177 degrees,” and the city is able to reuse the water in lower-temperature (110 degrees) scenarios, such as at laundry facilities. The city’s annual revenue from the system is $650,000 to $750,000.
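To put those figures in perspective, a quick back-of-the-envelope calculation using the round numbers quoted above (so the result is only indicative):

```python
# Rough annual economics of Boise's geothermal district heating system,
# based on the round figures quoted by Public Works Director Steve Burgos.
pumping_cost_per_month = 1_000    # pumping water out of the ground ($)
injection_cost_per_month = 1_000  # electricity to reinject water into the aquifer ($)

annual_operating_cost = 12 * (pumping_cost_per_month + injection_cost_per_month)
annual_revenue_low, annual_revenue_high = 650_000, 750_000

print(annual_operating_cost)                       # 24000
print(annual_revenue_low - annual_operating_cost)  # 626000
```

In other words, even at the low end of the revenue range, the quoted pumping and injection costs are a small fraction of what the system brings in each year.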

“We have approximately 95 buildings using the geothermal system,” said Burgos. “About 2% of the city’s energy use is supplied by geothermal. We’re very proud of it. It’s a source of civic pride. Most of the buildings that are hooked up use geothermal for heating. Some of the buildings use geothermal for snow melt. There’s no outward sign of the system, there’s no steam coming out of the ground.”

Colin Hickman, the city’s communication manager for public works, told POWER that Boise “has a downtown YMCA, that has a huge swimming pool, that is heated by geothermal.” He and Burgos both said the system is an integral part of the city’s development.

“We’re currently looking at a strategic master plan for the geothermal,” Burgos said. “We definitely want to expand the system. Going into suburban areas is challenging, so we’re focusing on the downtown core.” Burgos said the city about a decade ago put in an injection well to help stabilize the aquifer. Hickman noted the city last year received a 25% increase in its water rights.

Boise State University (BSU) has used the system since 2013 to heat several of its buildings, and the school’s curriculum includes the study of geothermal physics. The system at BSU was expanded about a year and a half ago—it’s currently used in 11 buildings—and another campus building currently under construction also will use geothermal.

Boise officials tout the city’s Central Addition project, part of its LIV District initiative (Lasting Environments, Innovative Enterprises and Vibrant Communities). Among the LIV District’s goals is to “integrate renewable and clean geothermal energy” as part of the area’s sustainable infrastructure.

“This is part of a broader energy program for the city,” Burgos said, “as the city is looking at a 100% renewable goal, which would call for an expansion of the geothermal energy program.” Burgos noted that Idaho Power, the state’s prominent utility, has a goal of 100% clean energy by 2045.

As Boise grows, Burgos and Hickman said the geothermal system will continue to play a prominent role.

“We actively go out and talk about it when we know a new business is coming in,” Burgos said. “And as building ownership starts to change hands, we want to have a relationship with those folks.”

Said Hickman: “It’s one of the things we like as a selling point” for the city.

Young told POWER: “The GeoVision study looked at different pathways to reduce the cost of geothermal and at ways we can expand access to geothermal resources so that it can be a 50-state technology, not limited to the West. When the study is released, it will be a helpful tool in showing the potential for geothermal in the U.S.”

Young said of the DOE: “Their next big initiative is to enable EGS, using the FORGE site,” referring to the Frontier Observatory for Research in Geothermal Energy, a location “where scientists and engineers will be able to develop, test, and accelerate breakthroughs in EGS technologies and techniques,” according to DOE. The agency last year said the University of Utah “will receive up to $140 million in continued funding over the next five years for cutting-edge geothermal research and development” at a site near Milford, Utah, which will serve as a field laboratory.

“The amount of R&D money that’s been invested in geothermal relative to other technologies has been small,” Young said, “and consequently, the R&D improvement has been proportionally less than for other technologies. The potential, however, for geothermal technology and cost improvement is significant; investment in geothermal could bring down costs and help to make it a 50-state technology – which could have a positive impact on the U.S. energy industry.”

For those who question whether geothermal would work in some areas, Young counters: “The temperatures are lower in the Eastern U.S., but the reality is, there’s heat underground everywhere. The core of the earth is as hot as the surface of the sun, but a lot closer. DOE is working to be able to access that heat from anywhere – at low cost.”

Investors Stepping Up

Geothermal installations are often found at tectonic plate boundaries, or at places where the Earth’s crust is thin enough to let heat through. The Pacific Rim, known as the Ring of Fire for its many volcanoes, has several of these places, including in California, Oregon, and Alaska, as well as northern Nevada.

Geothermal’s potential has not gone unnoticed. Some of the world’s wealthiest people, including Microsoft founder Bill Gates, Amazon founder and CEO Jeff Bezos, and Alibaba co-founder Jack Ma, are backing Breakthrough Energy Ventures, a firm that invests in companies developing decarbonization technologies. Breakthrough recently invested $12.5 million in Baseload Capital, a geothermal project development company that provides funding for geothermal power plants using technology developed by Climeon, its Swedish parent company.

Climeon was founded in 2011; it formed Baseload Capital in 2018. The two focus on geothermal, shipping, and heavy industry, in the latter two sectors turning waste heat into electricity. Climeon’s geothermal modules are scalable, and available for both new and existing geothermal systems. Climeon in March said it had an order backlog of about $88 million for its modules.

“We believe that a baseload resource such as low-temperature geothermal heat power has the potential to transform the energy landscape. Baseload Capital, together with Climeon’s innovative technology, has the potential to deliver [greenhouse gas-free] electricity at large scale, economically and efficiently,” Carmichael Roberts of Breakthrough Energy Ventures said in a statement.

Climeon says its modules reduce the need for drilling new wells and enable the reuse of older wells, along with speeding the development time of projects. The company says the compact and modular design is scalable from 150-kW modules up to 50-MW systems. Climeon says it can be connected to any heat source, and has just three moving parts in each module: two pumps, and a turbine.
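Taking the company’s stated range at face value, the modular scaling works out as follows (a rough illustration only; real systems would be sized per site):

```python
# Rough module count implied by Climeon's stated range:
# 150-kW modules combining into systems of up to 50 MW.
module_kw = 150
system_mw = 50

modules_needed = system_mw * 1_000 / module_kw
print(round(modules_needed))  # 333
```

So the top end of the range corresponds to a few hundred modules operating together rather than a single large turbine.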

4. The Sonoma Plant operated by Calpine is one of more than 20 geothermal power plants sited at The Geysers, the world’s largest geothermal field, located in Northern California.  Courtesy: Creative Commons / Stepheng3

Breakthrough Energy’s investment in Baseload Capital is its second into geothermal energy. Breakthrough last year backed Fervo Energy, a San Francisco, California-based company that says its technology can produce geothermal energy at a cost of 5¢/kWh to 7¢/kWh. Fervo CEO and co-founder Tim Latimer said the money from Breakthrough would be used for field testing of EGS installations. Fervo’s other co-founder, Jack Norbeck, was a reservoir engineer at The Geysers in California (Figure 4), the world’s largest geothermal field, located north of Santa Rosa and just south of the Mendocino National Forest.

Most of the nearly two dozen geothermal plants at The Geysers are owned and operated by Calpine, though not all are operating. The California Energy Commission says there are more than 40 operating geothermal plants in the state, with installed capacity of about 2,700 MW.

Geothermal “is something we have to do,” said Aikin of Introspective Systems. “We have to find new baseload power. Our distribution technology can get part of the way there, toward 80% renewables, but we need base power. [Geothermal] is a really good ‘all of the above’ direction to go in.”


Making Simulation Accessible to the Masses – American Composites Manufacturers Association

Composites simulation tools aren’t just for mega corporations. Small and mid-sized companies can reap their benefits, too.

In 2015, Solvay Composite Materials began using simulation tools from MultiMechanics to simplify testing of materials used in high-performance applications. The global business unit of Solvay recognized the benefits of conducting computer-simulated tests to accurately predict the behavior of advanced materials, such as resistance to extreme temperatures and loads. Two years later, Solvay invested $1.9 million in MultiMechanics to expedite development of the Omaha, Neb.-based startup company’s material simulation software platform, which Solvay predicts could reduce the time and cost of developing new materials by 40 percent.

Commitment to – and investment in – composites simulation tools isn’t unusual for a large company like Solvay, which recorded net sales of €10.3 billion (approximately $11.6 billion) in 2018 and has 27,000 employees working at 125 sites throughout 62 countries. What may be more surprising is the impact composites simulation can have on small to mid-sized companies. “Simulation tools are for everyone,” asserts Flavio Souza, Ph.D., president and chief technology officer of MultiMechanics.

The team at Guerrilla Gravity would agree. The 7-year-old mountain bike manufacturer in Denver began using simulation software from Altair more than a year ago to develop a new frame technology made from thermoplastic resins and carbon fiber. “We were the first ones to figure out how to create a hollow structural unit with a complex geometry out of thermoplastic materials,” says Will Montague, president of Guerrilla Gravity.

That probably wouldn’t have been possible without composites simulation tools, says Ben Bosworth, director of composites engineering at Guerrilla Gravity. Using topology optimization, which essentially finds the ideal distribution of material based on goals and constraints, the company was able to maximize use of its materials and conduct testing with confidence that the new materials would pass on the first try. (They did.) Afterward, the company was able to design its product for a specific manufacturing process – automated fiber placement.

“There is a pretty high chance that if we didn’t utilize composites simulation software, we would have been far behind schedule on our initial target launch date,” says Bosworth. Guerrilla Gravity introduced its new frame, which can be used on all four of its full-suspension mountain bike models, on Jan. 31, 2019.

The Language of Innovation
There are dozens of simulation solutions, some geared specifically to the composites industry and others offering general finite element analysis (FEA) tools. But they all share the common end goal of helping companies bring pioneering products to market faster – whether those companies are Fortune 500 corporations or startup ventures.

“Composites simulation is going to be the language of innovation,” says R. Byron Pipes, executive director of the Composites Manufacturing & Simulation Center at Purdue University. “Without it, a company’s ability to innovate in the composites field is going to be quite restricted.”

Those innovations can be at the material level or within end-product applications. “If you really want to improve the micromechanics of your materials, you can use simulation to tweak the properties of the fibers, the resin, the combination of the two or even the coating of fibers,” says Souza. “For those who build parts, simulation can help you innovate in terms of the shape of the part and the manufacturing process.”

One of the biggest advantages that design simulation has over the traditional engineering approach is time, says Jeff Wollschlager, senior director of composites technology at Altair. He calls conventional engineering the “build and bust” method, where companies make samples, then break them to test their viability. It’s a safe method, producing solid – although often conservative – designs. “But the downside of traditional approaches is they take a lot more time and many more dollars,” says Wollschlager. “And everything in this world is about time and money.”

In addition, simulation tools allow companies to know more about the materials they use and the products they make, which in turn facilitates the manufacturing of more robust products. “You have to augment your understanding of your product with something else,” says Wollschlager. “And that something else is simulation.”

A Leap Forward in Manufacturability
Four years ago, Montague and Matt Giaraffa, co-founder and chief engineer of Guerrilla Gravity, opted to pursue carbon fiber materials to make their bike frames lighter and sturdier. “We wanted to fundamentally improve on what was out there in the market. That required rethinking and analyzing not only the material, but how the frames are made,” says Montague.

The company also was committed to manufacturing its products in the United States. “To produce the frames in-house, we had to make a big leap forward in manufacturability of the frames,” says Montague. “And thermoplastics allow for that.” Once Montague and Giaraffa selected the material, they had to figure out exactly how to make the frames. That’s when Bosworth – and composites simulation – entered the picture.

Bosworth has more than a decade of experience with simulation software, beginning as an undergraduate student in mechanical engineering as a member of his college’s Formula SAE® team to design, build and test a vehicle for competition. While creating the new frame for Guerrilla Gravity, he used Altair’s simulation tools extensively, beginning with early development to prove the material feasibility for the application.

“We had a lot of baseline data from our previous aluminum frames, so we had a really good idea about how strong the frames needed to be and what performance characteristics we wanted,” says Bosworth. “Once we introduced the thermoplastic carbon fiber, we were able to take advantage of the software and use it to its fullest potential.” He began with simple tensile test samples and matched those with physical tests. Next, he developed tube samples using the software and again matched those to physical tests.

“It wasn’t until I was much further down the rabbit hole that I actually started developing the frame model,” says Bosworth. Even then, he started small, first developing a computer model for the front triangle of the bike frame, then adding in the rear triangle. Afterward, he integrated the boundary conditions and the load cases and began doing the optimization.

“You need to start simple, get all the fundamentals down and make sure the models are working in the way you intend them to,” says Bosworth. “Then you can get more advanced and grow your understanding.” At the composite optimization stage, Bosworth was able to develop a high-performing laminate schedule for production and design for automated fiber placement.

Even with all his experience, developing the bike frame still presented challenges. “One of the issues with composites simulation is there are so many variables to getting an accurate result,” admits Bosworth. “I focused on not coming up with a 100 percent perfect answer, but using the software as a tool to get us as close as we could as fast as possible.”

He adds that composites simulation tools can steer you in the right direction, but without many months of simulation and physical testing, it’s still very difficult to get completely accurate results. “One of the biggest challenges is figuring out where your time is best spent and what level of simulation accuracy you want to achieve with the given time constraints,” says Bosworth.

Wading into the Simulation Waters
The sophistication and expense of composites simulation tools can be daunting, but Wollschlager encourages people not to be put off by the technology. “The tools are not prohibitive to small and medium-sized companies – at least not to the level people think they are,” he says.

Cost is often the elephant in the room, but Wollschlager says it’s misleading to think packages will cost a fortune. “A proper suite provides you simulation in all facets of composite life cycles – in the concept, design and manufacturing phases,” he says. “The cost of such a suite is approximately 20 to 25 percent of the yearly cost of an average employee. Looking at it in those terms, I just don’t see the barrier to entry for small to medium-sized businesses.”
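Wollschlager’s rule of thumb can be made concrete with an assumed fully loaded employee cost; the $120,000 figure below is purely illustrative and not from the article:

```python
# Illustrative price of a full composites-simulation suite, applying
# Wollschlager's "20 to 25 percent of the yearly cost of an average
# employee" rule of thumb. The employee cost is an assumption.
employee_cost_per_year = 120_000  # assumed fully loaded cost ($)

suite_cost_low = employee_cost_per_year * 20 // 100
suite_cost_high = employee_cost_per_year * 25 // 100
print(suite_cost_low, suite_cost_high)  # 24000 30000
```

Under that assumption, a full suite lands in the $24,000–$30,000 per year range – a meaningful but hardly prohibitive line item for a small engineering firm.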

As you wade into the waters of simulation, consider the following:

Assess your goals before searching for a package. Depending on what you are trying to accomplish, you may need a comprehensive suite of design and analysis tools or only a module or two to get started. “If you want a simplified methodology because you don’t feel comfortable with a more advanced one, there are mainstream tools I would recommend,” says Souza. “But if you really want to innovate and be at the cutting-edge of your industry trying to understand how materials behave and reduce costs, then I would go with a more advanced package.” Decide upfront if you want tools to analyze materials, conduct preliminary designs, optimize the laminate schedule, predict the life of composite materials, simulate thermo-mechanical behaviors and so on.

Find programs that fit your budget. Many companies offer programs for startups and small businesses that include discounts on simulation software and a limited number of hours of free consulting. Guerrilla Gravity purchased its simulation tools through Altair’s Startup Program, which is designed for privately-held businesses less than four years old with revenues under $10 million. The program made it fiscally feasible for the mountain bike manufacturer to create a high-performing solution, says Bosworth. “If we had not been given that opportunity, we probably would’ve gone with a much more rudimentary design – probably an isotropic, black aluminum material just to get us somewhere in the ballpark of what we were trying to do,” he says.

Engage with vendors to expedite the learning curve. Don’t just buy simulation tools from suppliers. Most companies offer initial training, plus extra consultation and access to experts as needed. “We like to walk hand-in-hand with our customers,” says Souza. “For smaller companies that don’t have a lot of resources, we can work as a partnership. We help them create the models and teach them the technology behind the product.”

Start small, and take it slow. “I see people go right to the final step, trying to make a really advanced model,” says Bosworth. “Then they get frustrated because nothing is working right and the joints aren’t articulating. They end up troubleshooting so many issues.” Instead, he recommends users start simple, as he did with the thermoplastic bike frame.

Don’t expect to do it all with simulation. “We don’t advocate for 100 percent simulation. There is no such thing. We also don’t advocate for 100 percent experimentation, which is the traditional approach to design,” says Wollschlager. “The trick is that it’s somewhere in the middle, and we’re all struggling to find the perfect percentage. It’s problem-dependent.”

Put the right people in place to use the tools. “Honestly, I don’t know much about FEA software,” admits Montague. “So it goes back to hiring smart people and letting them do their thing.” Bosworth was the “smart hire” for Guerrilla Gravity. And, as an experienced user, he agrees it takes some know-how to work with simulation tools. “I think it would be hard for someone who doesn’t have basic material knowledge and a fundamental understanding of stress and strain and boundary conditions to utilize the tools no matter how basic the FEA software is,” he says. For now, simulation is typically handled by engineers, though that may change.

Perhaps the largest barrier to implementation is ignorance – not of individuals, but industry-wide, says Pipes. “People don’t know what simulation can do for them – even many top level senior managers in aerospace,” he says. “They still think of simulation in terms of geometry and performance, not manufacturing. And manufacturing is where the big payoff is going to be because that’s where all the economics lie.”

Pipes wants to “stretch people into believing what you can and will be able to do with simulation.” As the technology advances, that includes more and more each day – not just for mega corporations, but for small and mid-sized companies, too.

“As the simulation industry gets democratized, prices are going to come down due to competition, while the amount you can do will go through the roof,” says Wollschlager. “It’s a great time to get involved in simulation.”

Which New Business Models Will Be Unleashed By Web 3.0? – Fabric

The forthcoming wave of Web 3.0 goes far beyond the initial use case of cryptocurrencies. Through the richness of interactions now possible and the global scope of counter-parties available, Web 3.0 will cryptographically connect data from individuals, corporations and machines, with efficient machine learning algorithms, leading to the rise of fundamentally new markets and associated business models.

The future impact of Web 3.0 makes undeniable sense, but the question remains, which business models will crack the code to provide lasting and sustainable value in today’s economy?

A history of Business Models across Web 1.0, Web 2.0 and Web 3.0

We will dive into the native business models that have been and will be enabled by Web 3.0, but first briefly revisit the quickly forgotten and often arduous journeys that led to the unexpected and unpredictable business models which succeeded in Web 2.0.

To set the scene anecdotally for Web 2.0’s business model discovery process, let us not forget the journey Google went through from its launch in 1998 to 2002, before going public in 2004:

  • In 1999, while enjoying good traffic, they were clearly struggling with their business model. Their lead investor Mike Moritz (Sequoia Capital) openly stated “we really couldn’t figure out the business model, there was a period where things were looking pretty bleak”.
  • In 2001, Google was making $85m in revenue while their rival Overture was making $288m in revenue, as CPM based online advertising was falling away post dot-com crash.
  • In 2002, adopting Overture’s ad model, Google went on to launch AdWords Select: its own pay-per-click, auction-based search-advertising product.
  • Two years later, in 2004, Google hit 84.7% of all internet searches and went public at a valuation of $23.2 billion, with annualised revenues of $2.7 billion.

After four years of struggle, a single small modification to its business model launched Google into orbit, on its way to becoming one of the world’s most valuable companies.

Looking back at the wave of Web 2.0 Business Models

Streaming
The earliest iterations of online content merely involved the digitisation of existing newspapers and phone books … and yet, we’ve now seen Roma (Alfonso Cuarón) receive 10 Academy Award nominations for a movie distributed via the subscription streaming giant Netflix.

Marketplaces
Amazon started as an online bookstore that nobody believed could become profitable … and yet, it is now the behemoth of marketplaces covering anything from gardening equipment to healthy food to cloud infrastructure.

Open Source Software

Open source software development started off with hobbyists and an idealist view that software should be a freely-accessible common good … and yet, the entire internet runs on open source software today, creating $400B of economic value a year; GitHub was acquired by Microsoft for $7.5B, and Red Hat makes $3.4B in yearly revenues providing services for Linux.

SaaS
In the early days of Web 2.0, when companies were spending massively on proprietary infrastructure, it might have seemed inconceivable that delivering business software via a browser could be economically viable … and yet, today the large majority of B2B businesses run on SaaS models.

Sharing Economy

It was hard to believe that anyone would be willing to climb into a stranger’s car or rent out their couch to travellers … and yet, Uber and AirBnB have become the largest taxi operator and accommodation providers in the world, without owning any cars or properties.

Advertising
While Google and Facebook might have gone into hyper-growth early on, they didn’t have a clear plan for revenue generation for the first half of their existence … and yet, the advertising model turned out to fit them almost too well: they now generate 58% of global digital advertising revenues ($111B in 2018), and advertising has become the dominant business model of Web 2.0.

Emerging Web 3.0 Business Models

Looking at Web 3.0 over the past 10 years, its initial business models have tended not to be repeatable or scalable, or have simply tried to replicate Web 2.0 models. We are convinced that, while there is some scepticism about their viability, continuous experimentation by some of the smartest builders will lead to incredibly valuable models being built over the coming years.

By exploring both the more established and the more experimental Web 3.0 business models, we aim to understand how some of them will accrue value over the coming years.

  • Issuing a native asset
  • Holding the native asset, building the network
  • Taxation on speculation (exchanges)
  • Payment tokens
  • Burn tokens
  • Work tokens
  • Other models

Issuing a native asset:

Bitcoin came first. Proof of Work coupled with Nakamoto Consensus created the first Byzantine Fault Tolerant, fully open peer-to-peer network. Its intrinsic business model relies on its native asset: BTC, a provably scarce digital token paid out to miners as block rewards. Others, including Ethereum, Monero and ZCash, have followed down this path, issuing ETH, XMR and ZEC.

These native assets are necessary for the functioning of the network and derive their value from the security they provide: a high enough incentive for honest miners to contribute hashing power makes the cost of an attack grow alongside the price of the native asset, and the added security in turn drives further demand for the currency, further increasing its price and value. The value accrued in these native assets has been analysed and quantified at length.
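The security argument rests on proof of work being expensive to produce but cheap to verify. Here is a minimal sketch of that asymmetry; it is a toy, not Bitcoin’s actual block format or difficulty encoding:

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Search for a nonce whose SHA-256 hash has `difficulty` leading zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Finding the nonce takes ~16**4 hash attempts on average at difficulty 4 ...
nonce = mine("block 1: alice -> bob: 5 BTC", difficulty=4)
# ... but anyone can verify the result with a single hash.
digest = hashlib.sha256(f"block 1: alice -> bob: 5 BTC{nonce}".encode()).hexdigest()
```

Raising the difficulty raises the expected cost of rewriting history, which is the sense in which the block reward, denominated in the native asset, buys security.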

Holding the native asset, building the network:

Some of the earliest companies that formed around crypto networks had a single mission: make their respective networks more successful & valuable. Their resultant business model can be condensed to “increase their native asset treasury; build the ecosystem”. Blockstream, acting as one of the largest maintainers of Bitcoin Core, relies on creating value from its balance sheet of BTC. Equally, ConsenSys has grown to a thousand employees building critical infrastructure for the Ethereum ecosystem, with the purpose of increasing the value of the ETH it holds.

While this perfectly aligns the companies with the networks, the model is hard to replicate beyond the first handful of companies: amassing a meaningful enough balance of native assets becomes impossible after a while, and the blood, toil, tears and sweat of launching and sustaining a company cannot be justified without a large enough stake for exponential returns. As an illustration, it wouldn’t be rational for any business other than a central bank (e.g. a US remittance provider) to base its business purely on holding large sums of USD while working to make the US economy more successful.

Taxing the Speculative Nature of these Native Assets:

The subsequent generation of business models focused on building the financial infrastructure for these native assets: exchanges, custodians & derivatives providers. They were all built with a simple business objective — providing services for users interested in speculating on these volatile assets. While the likes of Coinbase, Bitstamp & Bitmex have grown into billion-dollar companies, they do not have a fully monopolistic nature: they provide convenience & enhance the value of their underlying networks. The open & permissionless nature of the underlying networks makes it impossible for companies to lock in a monopolistic position by virtue of providing “exclusive access”, but their liquidity and brands provide defensible moats over time.

Payment Tokens:

With The Rise of the Token Sale, a new wave of projects in the blockchain space based their business models on payment tokens within networks: often creating two-sided marketplaces and enforcing the use of a native token for any payments made. The assumption is that as the network’s economy grows, demand for the limited native payment token will increase, leading to a rise in the token’s value. While the value accrual of such a token model is debated, the increased friction for the user is clear: what could have been paid in ETH or DAI now requires additional exchanges on both sides of a transaction. While this model was widely used during the 2017 token mania, its friction-inducing characteristics have rapidly removed it from the forefront of development over the past 9 months.
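The friction argument can be made concrete with a toy calculation; the fee level and amounts below are invented for illustration, not any specific exchange’s:

```python
def effective_payment(amount: float, exchange_fee: float, legs: int) -> float:
    """Value received after `legs` exchange conversions, each charging a fee."""
    for _ in range(legs):
        amount *= 1 - exchange_fee
    return amount

# Paying 100 DAI directly: no conversion needed.
direct = effective_payment(100.0, exchange_fee=0.005, legs=0)

# Forcing a native payment token: the buyer converts into the token and the
# seller converts back out, paying the exchange fee on both legs.
via_token = effective_payment(100.0, exchange_fee=0.005, legs=2)
# direct is 100.0; via_token is 99.0025 -- friction borne entirely by the users
```

The gap compounds with every forced conversion, which is why projects have drifted back toward accepting ETH or stablecoins directly.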

Burn Tokens:

Revenue-generating communities, companies and projects with a token might not always be able to pass profits on to token holders in a direct manner. A model that garnered a lot of interest, as one of the characteristics of the Binance (BNB) and MakerDAO (MKR) tokens, is the buyback / token burn. As revenues flow into the project (from trading fees for Binance and stability fees for MakerDAO), native tokens are bought back from the public market and burned, decreasing the supply of tokens, which should lead to an increase in price. It’s worth exploring Arjun Balaji’s evaluation (The Block), in which he argues that the Binance token burning mechanism doesn’t actually amount to the equivalent of an equity buyback: as no dividends are paid out at all, the “earnings per token” remain at $0.
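A back-of-the-envelope sketch of the burn mechanic follows. The numbers are invented, and it makes the simplistic assumption that the burn leaves market cap unchanged:

```python
def burn_from_revenue(supply: float, price: float, revenue: float):
    """Buy tokens back at the market price and destroy them.

    Toy assumption: market cap (supply * price) stays constant, so the
    price per remaining token rises proportionally to the burn.
    """
    market_cap = supply * price
    burned = revenue / price           # tokens bought back and burned
    new_supply = supply - burned
    new_price = market_cap / new_supply
    return new_supply, new_price

# 200M tokens at $10; $50M of fee revenue is used for a buyback-and-burn
new_supply, new_price = burn_from_revenue(200_000_000, 10.0, revenue=50_000_000)
# 5M tokens are burned; supply drops to 195M and the modelled price rises ~2.6%
```

Balaji’s critique fits this frame: the mechanism changes supply, but holders still receive no cash flow.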

Work Tokens:

One of the business models for crypto-networks that we are seeing ‘hold water’ is the work token: a model that focuses exclusively on the revenue-generating supply side of a network in order to reduce friction for users. Good examples include Augur’s REP and Keep Network’s KEEP tokens. A work token model operates similarly to classic taxi medallions: it requires service providers to stake / bond a certain amount of native tokens in exchange for the right to perform profitable work for the network. One of the most powerful aspects of the work token model is the ability to incentivise actors with both carrot (rewards for the work) and stick (stake that can be slashed). Beyond securing the network by incentivising service providers to execute honest work (they have locked skin in the game, denominated in the work token), such tokens can also be evaluated through the predictable future cash flows to the collective of service providers (we have previously explored the benefits and valuation methods for such tokens in this blog). In brief, such tokens should be valued based on the future expected cash flows attributable to all the service providers in the network, which can be modelled from assumptions about the pricing and usage of the network.
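The valuation idea in that last sentence reduces to a plain discounted cash flow over the service providers’ aggregate earnings. A sketch, with all figures hypothetical:

```python
def network_value(annual_cash_flows, discount_rate):
    """Present value of the cash flows accruing to all service providers."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(annual_cash_flows, start=1))

# Projected network fees paid out to staked service providers, years 1-5
flows = [1e6, 2e6, 4e6, 6e6, 8e6]
value = network_value(flows, discount_rate=0.30)   # steep rate for a risky network
per_token = value / 10_000_000                     # assumed fixed supply of 10M work tokens
```

The discount rate and usage projections do all the work here, which is why such valuations vary wildly in practice even when analysts agree on the model.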

A wide array of other models are being explored and worth touching upon:

  • Dual-token models such as MKR/DAI and SPANK/BOOTY, where one asset absorbs the volatile upside and downside of usage and the other is kept stable for optimal transacting.
  • Governance tokens, which provide the ability to influence parameters such as fees and development prioritisation, and can be valued as insurance against a fork.
  • Tokenised securities: digital representations of existing assets (shares, commodities, invoices or real estate), valued based on the underlying asset with a potential premium for divisibility and borderless liquidity.
  • Transaction fees, as in the models BloXroute and Aztec Protocol have been exploring, where a treasury takes a small fee on each transaction in exchange for its enhancements (scalability and privacy respectively).
  • Tech 4 Tokens, as proposed by the Starkware team, who wish to provide their technology as an investment in exchange for tokens, effectively building a treasury of all the projects they work with.
  • Providing UX/UI for protocols, as Veil and Guesser are doing for Augur and Balance is doing for the MakerDAO ecosystem, relying on small fees or referrals and commissions.
  • Network-specific services, which currently include staking providers, CDP managers (e.g. topping off MakerDAO CDPs before they become undercollateralised) and marketplace management services such as OB1 on OpenBazaar, which can charge traditional fees (subscriptions or a % of revenues).
  • Liquidity providers operating in applications that don’t have revenue-generating business models; for example, Uniswap is an automated market maker where providing liquidity pairs is the only route to generating revenues.
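As an aside on that last bullet: Uniswap-style automated market makers price trades with a constant-product rule, and the fee retained in the pool is what liquidity providers earn. A simplified sketch (real pool contracts also handle integer arithmetic, slippage limits, and LP share accounting):

```python
def swap_out(x_reserve: float, y_reserve: float, dx: float, fee: float = 0.003) -> float:
    """Constant-product swap: deposit dx of asset X, withdraw dy of asset Y.

    The product of reserves never decreases; the 0.3% fee stays in the
    pool, accruing to liquidity providers.
    """
    k = x_reserve * y_reserve
    dx_after_fee = dx * (1 - fee)
    dy = y_reserve - k / (x_reserve + dx_after_fee)
    return dy

# Pool holding 100 ETH and 20,000 DAI (spot price 200 DAI per ETH)
dy = swap_out(100.0, 20_000.0, dx=1.0)  # sell 1 ETH; receive a bit under 200 DAI
```

The shortfall versus the 200 DAI spot price is the combination of price impact and the fee, and that fee is the liquidity providers’ revenue.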

With this wealth of new business models arising and being explored, it becomes clear that while there is still room for traditional venture capital, the roles of the investor and of capital itself are evolving. Capital morphs into a native asset within the network, where it has a specific role to fulfil. From passive network participation, to bootstrapping networks after a financial investment (e.g. computational work or liquidity provision), to direct injections of subjective work into the networks (e.g. governance or CDP risk evaluation), investors will have to reposition themselves for this new organisational mode driven by trust-minimised decentralised networks.

When looking back, we realise Web 1.0 & Web 2.0 took exhaustive experimentation to find the appropriate business models, which have created the tech titans of today. We are not ignoring the fact that Web 3.0 will have to go on an equally arduous journey of iterations, but once we find adequate business models, they will be incredibly powerful: in trust minimised settings, both individuals and enterprises will be enabled to interact on a whole new scale without relying on rent-seeking intermediaries.

Today we see thousands of incredibly talented teams pushing forward implementations of some of these models or discovering completely new viable business models. As these models might not fit traditional frameworks, investors may have to adapt by taking on new roles and providing both work and capital (a journey we have already started at Fabric Ventures). But as long as we can see predictable and rational value accrual, it makes sense to double down, as the execution risk is getting smaller every day.


Unlocking the Potential for Successful Technology Transfer – Vicki A. Barbu

Technology Transfer is defined as “the process of transferring technology from its origination to a wider distribution among more people and places.” Various communities such as business, academia and government are routinely involved in these initiatives including across international borders, both formally and informally.

The primary desire is to share expertise, knowledge, technologies, methodologies, facilities, and capabilities among governments, universities and other institutions, to ensure that scientific and technological developments are accessible to users who can then pursue development, robustification and design for manufacturability, and exploit the technology in new products, processes, applications, materials or services. There are several types of return: first, on the stakeholder’s investment in the research itself; second, the creation of new job opportunities; and lastly, a new product or service likely to have an impact on health and viability on a global scale.

The U.S. government invests some $135B each year to advance science and technology (S&T) as the basis for breakthrough knowledge development and new innovations, of which around 20 to 30 percent is invested in successful Technology Transfer. The federal S&T budget is a sizeable sum. In fact, the federal laboratory ecosystem is home to several hundred thousand scientists and engineers working to solve some of the most significant scientific challenges on a national and global scale. The national laboratories alone annually produce 11,000 peer-reviewed publications and over 1,700 reported inventions, and hold 6,000 active technology license agreements. However, the primary mission of the federal laboratory ecosystem is to perform basic research for scientific discovery, to support national defense and other missions, and to perform research and development in spaces where industry is not yet ready to lead.

Unlike both public and private commercial companies, the federal laboratories perform R&D with neither specific products nor services directly in mind. Most work in the public interest, and are often trusted advisors of the government. They understand the mission space, the requirements, and the gaps that need to be closed to improve safety and security of the nation.

The primary customers of the output from federal laboratory research and development efforts are often the federal agencies directly funding the work; commercial transfer then brings private funds to bear to take products to market with government-funded intellectual property inside. It is also an expectation, as part of the charter given to the federal laboratories in 1986, that successful commercial outcomes result from high-performance research programs.

What are the perceived barriers of Technology Transfer?

There are several constructs that impact success of Technology Transfer, and not least the uniqueness of the Intellectual Property (IP) involved, for example:

  • Is it leading-edge and breakthrough?
  • Is it disruptive?
  • Is it easily “copied”?
  • Are there competing technologies?
  • Is there a work-around?

All these factors contribute to the ultimate value positioning opportunity for transfer. In addition, federal laboratories are not evaluated directly by their sponsors on the commercial impacts of their research initiatives, and are in many cases discouraged from “picking winners and losers” in their effort to remain the unbiased and trusted advisors of the government. Due to the nature of their funding, federal laboratory research outputs are rarely complete product solutions and are most often at an early stage of development. Companies to which the outcomes are transitioned must provide additional resources to develop research results into commercial, robust, sustainable products and services and, in the case of environmental or medical technologies, seek appropriate regulatory approvals and, where necessary, conduct clinical trials.

These additional steps consume further investment dollars, can dilute internal company efforts, and seriously hinder the attractiveness of the transfer opportunity. Furthermore, most successful products combine multiple innovations from a variety of sources to meet customer needs.  A single technology license rarely provides a complete solution. These circumstances are especially true for the output from federal programs. The ability to deliver a final product is rare, and outputs are routinely seen as components ready for embedding into other more complex offerings.

Another barrier, in the case of federal R&D, is that initiatives emanating from the federal laboratories are perceived to be difficult for companies to access. This is largely because researchers must obtain funding for all their labor hours, and relatively few resources are available to support sustained collaborations with companies unless they are negotiated within the license agreement itself. Frequently, early-stage companies pursuing technology transfer opportunities require assistance and mentorship not available at federal laboratories.

The hope is that some of the newly created and established accelerators and incubators, with their associated mentoring and guidance, will make for a more seamless transition route. Good ideas emerging from the discovery phase at the federal labs are often touted as moments away from widespread distribution, yet that rarely turns out to be the case. There is still a good deal of additional development, robustification, design for manufacturability, and even market positioning necessary before an “idea” evolves into a fully-fledged commercial opportunity. To streamline this process, encouraging entrepreneurs, investors and IP licensors to communicate a clearer set of “go-to-market” requirements would eliminate the mismatch of expectations between the research and commercial communities.

Many existing reports, white papers and articles outline the barriers to successful Technology Transfer, and they inevitably focus on the existence of the “valley of death”  defined as the phase directly after discovery yet before commercialization. Many articles describe the “push” perspective as the process by which technology moves from research to commercialization. However, the Technology Scouts, now a common formal position in many established companies, spend most of their time searching for technology in a market “pull” process to enhance and supplement a competitive, long-term company business strategy.

Photo Credit: The MITRE Corporation

Conditions for success

There are three conditions that must be met for successful Technology Transfer. It is insightful to list these “pull” conditions so that those frustrated by perceived barriers on the “push” side can reassess their approaches and increase the yield of successful transfers. The conditions are as follows:

  • Alignment of Mission: The technology must enhance, simplify, and supplement the mission and the strategy being pursued by the “scout.” It must be the answer to a problem that the scout is charged with solving by his or her stakeholders. When technology “pushers” try to convince scouts that they should be solving a different problem, the pushers, and the deal, will fail!
  • Resources and Time to Market: The cost to innovate is immediate and certain, yet the value of the innovation is future and uncertain. There is an entire industry dedicated to predicting the value of future innovation, yet it is not an exact science and the elusiveness of the return and when it will be seen can be a killer!
  • Company Exclusivity: Technology companies (and their owners) scale quickly when they have a superior value proposition and a sustainable competitive advantage. IP is a critical element in building a sustainable competitive advantage. For technology providers, this exclusivity model is not always good business, since the investment in IP pays back only if and when an exclusive partner successfully commercializes and scales; if that does not occur, the upfront costs of innovation are not recovered.
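The tension in the second condition, certain cost now versus uncertain value later, is essentially an expected net-present-value problem. A sketch with invented numbers:

```python
def expected_npv(upfront_cost: float, payoff: float,
                 prob_success: float, years: int, discount_rate: float) -> float:
    """Certain cost today against a risky payoff arriving years from now."""
    discounted_payoff = prob_success * payoff / (1 + discount_rate) ** years
    return discounted_payoff - upfront_cost

# $2M to license and mature a lab technology now; a 30% chance of a $20M
# payoff in 5 years, discounted at 15% per year
npv = expected_npv(2e6, 20e6, prob_success=0.3, years=5, discount_rate=0.15)
# Positive here, but small shifts in the probability or timeline flip the sign
```

That sensitivity is the “killer” elusiveness the text describes: the inputs are guesses, and the go/no-go answer swings with them.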

What needs to change to improve Technology Transfer outcomes?

Important in the process of Technology Transfer is the need for companies and the federal laboratory scientists to have a shared understanding of the resources provided by federal laboratories. The federal laboratories are tremendous sources of innovation and technical expertise, yet they cannot provide everything a company will need to develop and commercialize a product or technology offering.

Programs exist to help companies access the capabilities of the federal laboratories. Notably, Cooperative Research and Development Agreements (CRADAs) provide mechanisms for companies to collaborate with national laboratories, and Strategic Partnership Projects (SPP) enable companies to sponsor research and development at national laboratories directly.

Both programs require companies to invest private resources, either as in-kind contributions to collaborations or as direct project funding.

One overarching opportunity, then, is increased federal investment in collaborations with private sector partners to make the laboratories more accessible to companies with limited financial resources. Argonne National Laboratory, for example, has acted on this insight with its Executive in Residence Program, in which company-employed scientists work in close proximity at the federal lab during the later stages of technical development. The opportunity is then readily positioned for “spinning off” into its own entity, or to support future strategic initiatives in a well-established company.

Additionally, there are other programs that offer extension or expansion pilot programs to support Technology Transfer, such as the Small Business Voucher Program (SBV), the Technology Commercialization Fund (TCF) and the various Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR)  programs. These programs target collaborations with federal laboratories that enable increased access to the laboratories and facilitate joint projects that can result in new products and subsequently the creation of new jobs. Expansion of other programs, such as the DOE Energy Investor Center and the DHS Transition to Practice (TTP) program, increase the visibility of the innovations and capabilities of the federal laboratories. At the same time, they help raise awareness of the commercialization opportunities that exist in the federal laboratories.

Photo Credit: The MITRE Corporation

Viewing the start-up realistically

If a project is pitched as perfection but delivers great, the project fails. If a project is pitched as good and delivers great, it succeeds. Start-ups need to rein in how much perfection-pitching they perform. Investors, partners and acquirers, in turn, need to view start-ups realistically rather than expecting overnight transformations.

Aligning definitions of success and support across the stages of technology progression is critical to achieving positive outcomes. Mission alignment needs to continually improve. An increasing amount of research is being performed by universities and government, so a protocol needs to be developed upfront to allow alignment with the established companies and start-ups that ultimately commercialize the technology and discover market positioning for the resulting product.

Terms and agreements are also critical components for Technology Transfer because with investment in research, costs occur immediately yet revenue is future and uncertain. Any licensing terms need to share this inevitable risk, and provide the freedom for the licensor to pursue other licensees in cases where a commercialization effort does not meet certain financial goals.

What kind of regulatory/mandatory changes need to take place?

Federal laboratory scientists are often willing to help companies adopt their technologies, as evidenced by the success of the Argonne Laboratory’s Executive in Residence Program, as well as MITRE’s first-hand experience of partnering with them to develop and nurture opportunities. It takes a team to deliver Technology Transfer successfully.

However, federal labs have limited funding available to support engagements with private sector companies unless incorporated directly into the license itself. Mandating availability of increased funding of this kind, as well as reducing the administrative burden associated with accessing those funds, would drive greater private sector engagement with the federal laboratories, and thereby increase the commercial impacts of federal laboratory research and development.

In terms of mandatory changes to facilitate the delivery of, and derive benefits from, new innovative products in healthcare, for example, supported by a connected digital IT architecture, investment in a standard interoperability framework would be highly significant. Currently the landscape is diverse, particularly so in healthcare, where hospitals each operate individually with different system installations that limit the ability to interface seamlessly across institutions. Under these circumstances, new innovations cannot easily reach all points of the domain for lack of an interoperability mindset. In addition, encouraging a more patient-centered architecture would lead to an increasingly robust innovation environment for healthcare; indeed, it has been shown that involving patients in their healthcare improves results and lowers costs.

What can government agencies do to enhance opportunities?

Federal laboratories are ultimately driven by the goals and objectives of their funding agencies and offices, and they remain the bedrock for delivering outcomes for national defense as well as national safety and security. Technology Transfer Offices can help companies access the innovations and capabilities of the federal laboratories by increasing the programmatic value they place on such engagements, and actively encourage or support the interactions. In effect, this approach will benefit their needs, too, by making a product or service readily available, in a robust way, at economically viable price points.

There are also more likely to be further and future advancements of the technology available in due course driven by the product development efforts of the commercial company. This outcome would undoubtedly reduce sustainability costs for the agency as their needs would continue to be serviced directly from private funds. As noted earlier, extending and expanding programs such as SBV and TCF would likely increase private sector engagement with the federal laboratories.

Government agencies can also help with mission alignment. A good example of this approach was the space race in the 1960s, and there are several other examples where the government has been the catalyst for successful technologies that generate commercial breakthrough opportunities. Agencies should be setting goals and metrics and providing financial incentives for academia, federal labs, and the private sector to work together to meet these goals. Nevertheless, the government needs to avoid picking winners and losers, because only the market can determine the future value of any technology. Once the “macro” level goals are set for alignment, individuals (scientists, innovators and engineers) need to be trained on the behavioral science of how better to understand the “micro” level needs of the others in the chain.

The NSF (National Science Foundation) I-Corps and the Fed-Tech program deliver value by helping innovators and entrepreneurs understand product market fit through experiential training in discovering needs. Similar programs, designed to align fundamental research to commercialization, would go a long way towards improving the situation. The Innovation Research Interchange (formerly known as the Industrial Research Institute) is helping to support match-making initiatives through its Federal Laboratory Activity Group (FLAG). Specific areas of focus are: Energy/Sustainability, Advanced Materials/Manufacturing, Cyber Security/Data Analytics and Robotics/Automation.

The government makes a good partner because it is a natural convener of new discoveries, can sustain much longer term strategies compared with industry, and is not under the demands of shareholders. Rather, it is often neutral and can enable even typically competitive organizations to collaborate for the greater good of society. While governments are not expected to over-regulate, their ambiguous guidelines can sometimes lead to fragmentation if the industry does not reach consensus, as evidenced by the lack of interoperability in the healthcare segment. If the government actively engages industry, then further fragmentation would be avoided and the associated longer term problems likely minimized.

What can and how can we help entrepreneurs to aid the process to success?

To support the entrepreneurial process, federal laboratories are encouraged to focus on some new approaches, namely:

  • Increase visibility of their capabilities and ensure innovations are readily available
  • Provide clearer guidance on what the laboratories can provide and, equally important, what they cannot provide
  • Host a series of technology focused workshops to raise awareness of available programs and opportunities
  • Award grants to entrepreneurs to support their programs
  • Create more opportunities for innovation bridges, so that challenges are solved together from the onset

Supporting entrepreneurs to quickly reach a clear “Go” or “No-Go” verdict on future investment is critical. For example, a start-up entity often has 12 to 18 months of runway, during which time it needs to quickly succeed or fail (and pivot, if appropriate). With limited resources, the team cannot afford to spread itself thin and must remain focused on its target goal. A “maybe” response is a killer; it burns resources and does not help entrepreneurs understand clearly whether their product is providing true value. Being harsh but factually quantitative enables a better outcome for all.

Entrepreneurs themselves fall into distinct groups, determined by where they focus their efforts. Entrepreneurs focus on target-market fit and the sustainable advantages of their products to attract investors. They need exclusivity yet have limited funds for licensing. For many start-ups, future equity is their only currency, so they need financial resources to help solve the problem.

What can well-established companies do to improve interaction, integration and chances of success?

Investing time with the federal laboratories to learn more about ongoing research activities and outputs is one way to improve outcomes. Most research results are complex and are “works in progress.”  While it is relatively rare to find a nearly commercially-ready technology solution in the laboratories, the laboratories have deep expertise and capabilities and can help companies quickly solve complex challenges.

Additionally, companies need to resist the urge to negotiate the terms and conditions of collaboration agreements with federal laboratories. Most laboratories can quickly implement standard agreements, yet must seek multiple levels of federal approval for non-standard agreements, significantly increasing the time required to put an agreement in place. Furthermore, federal laws and policies limit the extent to which partnering agreements can be substantively changed, so lengthy negotiations rarely result in significant changes to agreement terms.

Defining success at the stages of discovery, development, deployment and distribution is key to having projects reach positive outcomes. Without this expectation setting, project timing will be misaligned and it will be challenging to provide the stage-appropriate support needed to achieve real business value.

Again, the three conditions associated with barriers to success apply—namely, alignment of mission, resource needs and time to market, together with company exclusivity. However, where a start-up may be heavily dependent on IP as a sustainable competitive advantage, large companies have other factors contributing to that competitive advantage, for example, brand, supply chain, scale, and channels. Large corporations will tend to “engineer” around patents in their commercialization process, and will acquire IP through licensing only when it is core and foundational and they cannot overcome the barrier. They will also buy or license IP only in times of disruption or transition. Generally, this outcome is achieved by acquiring a start-up that has commercialized a proven product market fit. In effect, established corporations are looking for products, not research, when they need technology.

How can the VC communities and start-ups take advantage of outcomes from federally funded programs?

Interactions between venture capital (VC) communities and start-ups present several areas for improvement. For example, enabling both to interface and work routinely with universities and other programs would increase their familiarity and comfort level with federally funded initiatives. It is also important to note, however, that writing a successful grant application is very different from preparing a strong business pitch deck.

Encouraging portfolio companies to visit and engage with the federal laboratories to learn about available technologies and collaboration opportunities would certainly drive enhanced relationships leading to technology transfer. Allied Minds is one such company that routinely interfaces with several federal entities with the primary objective of accessing and gaining exposure to early-stage IP. In the main, Technology Transfer offices are happy to coordinate visits from prospective collaborators.

VCs are essentially risk managers and are unlikely to accept more risk to increase the flow of IP. VCs need to see their investments explode—or fail fast. Return on investment from technology transfer out of the federal labs would undoubtedly increase if a lab can define the path to commercialization, even if it cannot execute that path due to its mission. Quantified data linking research to customers will attract VCs. As such, the NSF I-Corps, Fed-Corp and DHS TTP initiatives are helpful programs. If a technology has an assessed product market fit through a customer discovery process using scientific methods, in addition to the science of the invention, there is less risk. VCs will take advantage of this type of program in their investment decisions.


There is currently a mass of untapped technical potential and IP sitting on shelves within the federal laboratory ecosystem that has been funded by federal agencies. We know that those concepts which do make it to market, such as laser technology from the 1960s, have compelling impacts, solve national and global problems, provide a catalyst for greater success by industry alone, and drive the economy and GDP of the country. The laser is only one such technology; the Internet is another, having begun life on the ARPANET, whose earliest nodes included the Stanford Research Institute (SRI). And there are also many technologies we rely on today that emanated from the space race.

The results of all these programs are generally visible to everyone, whereas with private investment only those making the investment typically benefit and the outcomes are less visible to society. The advancements and discoveries of industry therefore have comparatively limited impact when set against outcomes delivered from government-funded programs. Recommendations to further support unlocking the potential from federally funded R&D are as follows:

  • Increase funding to support the transition of technology to entrepreneurs
  • Enable federal laboratories to better understand business world needs
  • Engage teams with market positioning early on so that modifications can be built in accordingly
  • Create more programs like the DHS TTP to showcase early, impactful technologies
  • Encourage and find ways to showcase opportunities at all of the federal labs

With these changes implemented, there will likely be:

  • Increased technology transition to entrepreneurial and well-established companies
  • New opportunities generated for discoveries that make an impact on the national and global landscape
  • Economical and viable options delivered to support widespread government use of a technology
  • Technology advancement at private expense that will be available to government
  • A return on the initial investment by enabling economic development from growth of a new industry

Source :

Secondaries market growth underscores role of valuations – Robert Tribuiani

It’s not only limited partners scrutinizing valuations; regulators have also shown an interest amid the rise of the secondaries market and a growing propensity for GP-led transactions

By Robert A. Tribuiani, Managing Director, Head of Business Development, Murray Devine Valuation Advisors

For years, the secondaries market had been considered a small, sometimes-overlooked niche within private equity. After a year in which secondary investors amassed a record $74 billion of transaction volume, the momentum and growing impact of this “niche” can now be felt across the more expansive global PE landscape.

When the secondaries market first took root, the appeal was that it allowed investors to either sell limited partnership interests in existing funds or unload direct investments in companies within a captive portfolio. The upshot, of course, was that secondaries were able to offer liquidity in an otherwise illiquid asset class.

But as the secondaries market has grown in size, both limited partners and general partners are increasingly using the market to optimize their portfolio management. LPs, for instance, will use secondaries to continually refine their PE allocations, whether to minimize the J curve, improve diversification, or redeploy capital into newer funds with potential for more upside. GPs, alternatively, are turning to secondaries to effect restructurings in which existing LPs can exit a fund, knowing that new commitments bring additional capital and more time to fully realize remaining investments.

In a sense, secondaries have helped to further mainstream private equity and have brought a level of flexibility that appeals to newer investors and the most sophisticated LPs alike. An unintended consequence, however, is that the growth of the secondaries market has also brought more scrutiny to valuations. This is particularly the case as GP-led transactions become more common.

As the proportion of GP-led secondaries has grown – by nearly a third year-over-year, according to Greenhill & Co. – and as these deals occur earlier in a fund’s life, the potential for conflicts of interest has investors and regulators scrutinizing valuations that much more closely. This should further dissuade GPs from attempting to smooth over any near-term volatility in their approach to valuation.

A recent Secondaries Investor article, for instance, highlighted some of the concerns raised by Oregon’s State Treasury in its annual private equity review. The pension noted that innovation in the secondaries market is translating into more complexity. Recaps involving more nascent partnerships can disrupt the alignment between GPs and LPs.

“While all of this has a place in a maturing private equity industry,” the pension noted, “the aggressive pace of innovation may suggest that secondary buyers have more appetite for deals than the current market can satisfy.” A corollary — one that brings to mind Ockham’s Razor – is that the innovation to accommodate this demand can also breed “complex and conflict-riddled transaction proposals,” the pension added.

That’s not to say that GP-led secondaries, on their own, are worthy of special attention or are in any way misleading. Still, many investors will use these transactions as an opportunity to revisit the mark-to-market valuations that usually serve as the basis for pricing secondary transactions.

Regulators, too, are paying close attention. Four years ago, just as GP-led secondary transactions were becoming more common, the SEC’s then director of the Office of Compliance Inspections and Examinations, Marc Wyatt, noted in a speech that the agency was indeed watching and addressing “issues such as zombie advisers and fund restructurings.” Since then, at least two GPs, in 2016 and last fall, have faced enforcement actions related to either valuations used in secondary transactions or potential conflicts that should have been disclosed.

By and large, though, the rise of the secondaries market should be celebrated as an advance for the asset class. As the growth and increasing utility of the secondaries market continues in PE, so too does the transparency that aligns GP and LP interests. Ultimately, beyond providing confidence to investors and helping all constituencies manage risk, third-party independent valuations can also support ongoing growth of the secondaries market by imparting trust in valuations, which is helpful in navigating the added complexity.

Robert A. Tribuiani leads the overall business development efforts for Murray Devine Valuation Advisors. In this role, he is responsible for new business and works closely with the valuation client services team to support existing clients. Before joining Murray Devine, Rob worked as a senior business development executive for SolomonEdwards, a leading professional services firm headquartered in suburban Philadelphia, and in similar roles for Longview Solutions and VerticalNet, both private equity backed companies. Rob is a graduate of Villanova University and earned a Bachelor of Science degree in Business Administration with a concentration in Finance.

Source :

Why are Machine Learning Projects so Hard to Manage? – Lukas Biewald

I’ve watched lots of companies attempt to deploy machine learning — some succeed wildly and some fail spectacularly. One constant is that machine learning teams have a hard time setting goals and setting expectations. Why is this?

1. It’s really hard to tell in advance what’s hard and what’s easy.

Is it harder to beat Kasparov at chess or pick up and physically move the chess pieces? Computers beat the world champion chess player over twenty years ago, but reliably grasping and lifting objects is still an unsolved research problem. Humans are not good at evaluating what will be hard for AI and what will be easy. Even within a domain, performance can vary wildly. What’s good accuracy for predicting sentiment? Movie reviews contain a lot of text, and writers tend to be fairly clear about what they think, so these days 90–95% accuracy is expected. On Twitter, two humans might only agree on the sentiment of a tweet 80% of the time. It might be possible to get 95% accuracy on the sentiment of tweets about certain airlines by just always predicting that the sentiment is going to be negative.
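
The airline-tweet point can be made concrete with a toy majority-class baseline (the labels below are made up for illustration, not drawn from a real dataset): if 95% of tweets in a sample are negative, a "model" that always predicts negative scores 95% accuracy while learning nothing.

```python
# Toy illustration of a majority-class baseline: always predicting the
# most common label. High accuracy on a skewed dataset says little about
# whether a model has actually learned anything.
from collections import Counter

def majority_baseline_accuracy(labels):
    """Accuracy of a classifier that always predicts the most common label."""
    most_common_count = Counter(labels).most_common(1)[0][1]
    return most_common_count / len(labels)

tweet_labels = ["neg"] * 19 + ["pos"]  # skewed sample: 95% negative
print(majority_baseline_accuracy(tweet_labels))  # → 0.95
```

Any proposed model's accuracy number is only meaningful when compared against this kind of trivial baseline.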

Metrics can also increase a lot in the early days of a project and then suddenly hit a wall. I once ran a Kaggle competition where thousands of people competed around the world to model my data. In the first week, the accuracy went from 35% to 65%, but then over the next several months it never got above 68%. 68% accuracy was clearly the limit of the data with the best, most up-to-date machine learning techniques. Those people competing in the Kaggle competition worked incredibly hard to get that 68% accuracy, and I’m sure it felt like a huge achievement. But for most use cases, 65% vs 68% is totally indistinguishable. If that had been an internal project, I would have definitely been disappointed by the outcome.

My friend Pete Skomoroch was recently telling me how frustrating it was to do engineering standups as a data scientist working on machine learning. Engineering projects generally move forward, but machine learning projects can completely stall. It’s possible, even common, for a week spent on modeling data to result in no improvement whatsoever.

2. Machine Learning is prone to fail in unexpected ways.

Machine learning generally works well as long as you have lots of training data *and* the data you’re running on in production looks a lot like your training data. Humans are so good at generalizing from training data that we have terrible intuitions about this. I built a little robot with a camera and a vision model trained on the millions of images of ImageNet, which were taken off the web. I preprocessed the images from my robot camera to look like the images from the web, but the accuracy was much worse than I expected. Why? Images off the web tend to frame the object in question. My robot wouldn’t necessarily look right at an object in the same way a human photographer would. Humans would likely not even notice the difference, but modern deep learning networks suffered a lot. There are ways to deal with this phenomenon, but I only noticed it because the degradation in performance was so jarring that I spent a lot of time debugging it.

Much more pernicious are the subtle differences that lead to degraded performance that are hard to spot. Language models trained on the New York Times don’t generalize well to social media texts. We might expect that. But apparently, models trained on text from 2017 experience degraded performance on text written in 2018. Upstream distributions shift over time in lots of ways. Fraud models break down completely as adversaries adapt to what the model is doing.

3. Machine Learning requires lots and lots of relevant training data.

Everyone knows this and yet it’s such a huge barrier. Computer vision can do amazing things, provided you are able to collect and label a massive amount of training data. For some use cases, the data is a free byproduct of some business process. This is where machine learning tends to work really well. For many other use cases, training data is incredibly expensive and challenging to collect. A lot of medical use cases seem perfect for machine learning — crucial decisions with lots of weak signals and clear outcomes — but the data is locked up due to important privacy issues or not collected consistently in the first place.

Many companies don’t know where to start in investing in collecting training data. It’s a significant effort and it’s hard to predict a priori how well the model will work.

What are the best practices to deal with these issues?

1. Pay a lot of attention to your training data.
Look at the cases where the algorithm is misclassifying data that it was trained on. These are almost always mislabels or strange edge cases. Either way, you really want to know about them. Make everyone working on building models look at the training data and label some of the training data themselves. For many use cases, it’s very unlikely that a model will do better than the rate at which two independent humans agree.
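
A minimal sketch of this audit, with hypothetical toy data (in practice the predictions would come from running the trained model back over its own training set):

```python
# Audit training-set errors: examples the model gets wrong on the data it
# was trained on. These are usually mislabels or strange edge cases and
# should be inspected by hand.

def training_set_errors(examples, y_true, y_pred):
    """Return (index, example, human label, model prediction) for each
    training example the model misclassifies."""
    return [(i, examples[i], y_true[i], y_pred[i])
            for i in range(len(examples))
            if y_true[i] != y_pred[i]]

texts  = ["great movie", "terrible plot", "loved it", "awful acting"]
labels = ["pos", "neg", "pos", "neg"]  # human labels
preds  = ["pos", "neg", "neg", "neg"]  # model output on its own training data

for i, text, truth, pred in training_set_errors(texts, labels, preds):
    print(f"example {i}: {text!r} labeled {truth}, predicted {pred}")
```

Here the audit surfaces example 2, which is either a model blind spot or a candidate mislabel worth a second look.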

2. Get something working end-to-end right away, then improve one thing at a time.
Start with the simplest thing that might work and get it deployed. You will learn a ton from doing this. Additional complexity at any stage in the process always improves models in research papers but it seldom improves models in the real world. Justify every additional piece of complexity.

Getting something into the hands of the end user helps you get an early read on how well the model is likely to work and it can bring up crucial issues like a disagreement between what the model is optimizing and what the end user wants. It also may make you reassess the kind of training data you are collecting. It’s much better to discover those issues quickly.
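
As a sketch of “the simplest thing that might work,” here is a keyword-list sentiment baseline that could be deployed end to end before any real modeling; the word lists are illustrative assumptions, not a recommended lexicon:

```python
# Simplest-thing-that-might-work baseline for sentiment: count keyword
# hits. Deployable immediately, easy to reason about, and a floor that
# any later model must beat. (Word lists are toy examples.)
POSITIVE = {"great", "loved", "excellent", "good"}
NEGATIVE = {"terrible", "awful", "bad", "worst"}

def baseline_sentiment(text):
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "pos"
    if score < 0:
        return "neg"
    return "unknown"  # ambiguous case: flag for human review

print(baseline_sentiment("great movie, loved it"))  # → "pos"
```

Shipping even something this crude surfaces the end-to-end issues above, such as a mismatch between what is being optimized and what users actually want.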

3. Look for graceful ways to handle the inevitable cases where the algorithm fails.
Nearly all machine learning models fail a fair amount of the time, and how this is handled is absolutely crucial. Models often have a reliable confidence score that you can use. With batch processes, you can build human-in-the-loop systems that send low-confidence predictions to an operator, making the system work reliably end to end and collecting high-quality training data. With other use cases, you might be able to present low-confidence predictions in a way that flags potential errors or makes them less annoying to the end user.
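
A minimal sketch of such a human-in-the-loop split, using an assumed confidence threshold of 0.8 and made-up batch data:

```python
# Human-in-the-loop routing on model confidence: high-confidence
# predictions are accepted automatically, low-confidence ones go to a
# human operator. (Threshold and field layout are illustrative.)
REVIEW_THRESHOLD = 0.8

def route(predictions):
    """Split (item, label, confidence) triples into an auto-accept queue
    and a human-review queue."""
    auto, review = [], []
    for item, label, confidence in predictions:
        if confidence >= REVIEW_THRESHOLD:
            auto.append((item, label))
        else:
            # The operator's corrected label doubles as fresh,
            # high-quality training data.
            review.append((item, label, confidence))
    return auto, review

batch = [("doc1", "fraud", 0.97), ("doc2", "ok", 0.55), ("doc3", "ok", 0.91)]
auto, review = route(batch)
```

The choice of threshold trades operator workload against end-to-end error rate, and is best tuned on held-out data rather than fixed up front.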

What’s Next?

The original goal of machine learning was mostly around smart decision making, but more and more we are trying to put machine learning into products we use. As we start to rely more and more on machine learning algorithms, machine learning becomes an engineering discipline as much as a research topic. I’m incredibly excited about the opportunity to build completely new kinds of products, but worried about the lack of tools and best practices. So much so that I started a company called Weights and Biases to help with this. If you’re interested in learning more, check out what we’re up to.

Source :
