In today’s fast-paced market — where major funding or exit announcements seem to roll in daily — we at Sapphire Partners like to take a step back, ask big picture questions, and then find concrete data to answer them.
One of our favorite areas to explore is: as a venture investor, do your odds of making better returns improve if you only invest in either enterprise or consumer companies? Or do you need a mix of both to maximize your returns? And how should recent investment and exit trends influence your investing strategy, if at all?
To help answer questions like these, we’ve collected and analyzed exit data for years. What we’ve found is pretty intriguing: portfolio value creation in enterprise tech is often driven by a cohort of exits, while value creation in consumer tech is generally driven by large, individual exits.
In general, this trend has held for several years and has led to the belief that if you are a consumer investor, the clear goal is not to miss that “one deal” with a huge spike in exit value (easier said than done, of course). And if you’re an enterprise investor, you want to build a “basket of exits” in your portfolio.
2019 has been a powerhouse year for consumer exit value, buoyed by Uber and Lyft’s IPOs (their recent declines in stock price notwithstanding). The first three quarters of 2019 alone surpassed every year since 1995 for consumer exit value – and we’re not done yet. If the consumer exit pace continues at this scale, we will be on track for the most value created at exit in 25 years, according to our analysis.
Source: S&P Capital IQ, Pitchbook
Since 1995, the number of enterprise exits has consistently outpaced consumer exits (blue line versus green line above), but 2019 is the closest to seeing those lines converge in over two decades (223 enterprise vs 208 consumer exits in the first three quarters of 2019). Notably, in five of the past nine years, the value generated by consumer exits has exceeded enterprise exits.[1]
At Sapphire, we observe the following:
While the valuation at IPO serves as a proxy for an exit for venture investors, most investors face the lockup period.[2] 2019 has generated a tremendous amount of value through IPOs, roughly $223 billion. However, after trading in the public markets, the aggregate value of those IPOs has decreased by $81 billion as of November 1, 2019.[3] On an absolute value basis, this decrease is driven by Uber and Lyft, which account for roughly 66% of the markdown over the same period, according to our figures. Over half of the IPO exits in 2019 have been consumer, and despite these stock price changes, consumer exits are still outperforming enterprise exits YTD given the enormous alpha they generated initially.
As we noted in the introduction, historical data since 1995 shows that years of high value creation in enterprise technology are often driven by a cohort of exits, whereas consumer value creation is often driven by large, individual exits. The chart below illustrates this, showing a side-by-side comparison of exits and value creation.
Source: Pitchbook
At Sapphire, we observe the following:
The value generated by the top five consumer companies is 3.5x greater than that of enterprise companies.
While total value of enterprise companies exited since 1995 ($884B) exceeds that of consumer exits ($773B), in the last 15 years, consumer returns have been making a comeback. Specifically, total consumer value exited ($538B) since 2004 exceeds that of enterprise exits ($536B). This difference has become more stark in the past 10 years, with total consumer value exited ($512B) surpassing that of enterprise ($440B). As seen in the chart below, the rolling 10-year total enterprise exit value exceeded that of consumer until the decade between 2003-2012, when consumer exit value took the lead.
Note: Data from S&P Capital IQ and Pitchbook
Source: S&P Capital IQ, Pitchbook
We believe the size of, and inevitable hype around, consumer IPOs can cloud investor judgment, since the volume of successful deals is not increasing. The data clearly shows that the surge in outsized returns comes from the outliers in consumer.
As exhibited below, large consumer outliers since 2011, such as Facebook, Uber, and Snap, often account for more than the sum of enterprise exits in any given year. For example, in the first three quarters of 2019, there have been 15 enterprise exits valued at over $1B for a total of $96B. Over the same period, there have been nine consumer exits valued at over $1B for a total of $139B. Anecdotally, this can be seen in four of the past five years being headlined by a consumer exit. While 2016 was headlined by an enterprise exit, it was a particularly quiet exit year.
Source: S&P Capital IQ, Pitchbook
While consumer deals have taken the lead in IPO value in recent years, on the M&A front, enterprise still has the clear edge. Since 1995 there have been 76 exits of $1 billion or more in value, of which 49 are enterprise companies and 27 are consumer companies. The vast majority of value from M&A has come from enterprise companies since 1995 — more than 2x that of consumer.
Similar to the IPO chart above, acquisition value of enterprise companies outpaced that of consumer companies until recently, with 2010-2014 being the exception.
Source: S&P Capital IQ, Pitchbook
Of course, looking only at outcomes with $1 billion or more in value covers only a fraction of where most VC exits occur. Slightly less than half of all exits in both enterprise and consumer are $50 million or under in size, and more than 70 percent of all exits are under $200 million. Moreover, in the distribution chart below, we capture only the percentage of companies for which we have exit values. If we change the denominator to all exits captured in our database (i.e. measure the percentage of $1 billion-plus exits by using a higher denominator), the percentage of outcomes drops to around 3 percent of all outcomes for both enterprise and consumer.
Source: S&P Capital IQ, Pitchbook
There’s an enormous volume of information available on startup exits, and at Sapphire Partners, we ground our analyses and theses in the numbers. At the same time, once we’ve dug into the details, it’s equally important to zoom out and think about what our findings mean for our GPs and fellow LPs. Here are some clear takeaways from our perspective:
In a nutshell, as LPs we like to see both consumer and enterprise deals in our underlying portfolio as they each provide different exposures and return profiles. However, when these investments get rolled up as part of a venture fund’s portfolio, success is often then contingent on the fund’s overall portfolio construction… but that’s a question to explore in another post.
NOTE: Total Enterprise Value (“TEV”) presented throughout analysis considers information from CapIQ when available, and supplements information from Pitchbook last round valuation estimates when CapIQ TEV is not available. TEV (Market Capitalization + Total Debt + Total Preferred Equity + Minority Interest – Cash & Short Term Investments) is as of the close price for the initial date of trading. Classification of “Enterprise” and “Consumer” companies presented herein is internally assigned by Sapphire. Company logos shown in various charts presented herein reflect the top (4) companies of any particular time period that had a TEV of $1BN or greater at the time of IPO, with the exception of chart titled “Exits by Year, 1995- Q3 2019”, where logos shown in all charts presented herein reflect the top (4) companies of any particular year that had a TEV of $7.5BN or greater at the time of IPO. During a time period in which less than (4) companies had such exits, the absolute number of logos is shown that meet described parameters. Since 1995 refers to the time period of 1/1/1995 – 9/30/2019 throughout this article.
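To make the TEV definition in the note above concrete, here is a minimal sketch of the calculation in Python; the figures passed in are illustrative placeholders, not data from this analysis.

```python
def total_enterprise_value(market_cap, total_debt, preferred_equity,
                           minority_interest, cash_and_st_investments):
    """TEV as defined in the note above, taken at the close of the first trading day."""
    return (market_cap + total_debt + preferred_equity
            + minority_interest - cash_and_st_investments)

# Illustrative placeholder figures in $B (not actual company data):
print(total_enterprise_value(20.0, 2.5, 0.0, 0.1, 3.0))  # 19.6
```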
[1] Includes the first three quarters of 2019. IPO exit values refer to the total enterprise value of a company at the end of the first day of trading according to S&P Capital IQ. Analysis considers a combination of Pitchbook and S&P Capital IQ to analyze US venture-backed companies that exited through acquisition or IPO between 1/1/1995 – 9/30/2019.
[2] Lockup period is a predetermined amount of time following an initial public offering (“IPO”) where large shareholders, such as company executives and investors representing considerable ownership, are restricted from selling their shares.
[3] Total enterprise value at the end of 10/15/2019 according to S&P Capital IQ.
Source: https://sapphireventures.com/blog/openlp-series-which-investments-generate-the-greatest-value-in-venture-consumer-or-enterprise/
There was little doubt four years ago that Conagra Brands’ frozen portfolio was full of iconic items that had grown tired and, according to its then-new CEO Sean Connolly, were “trapped in time.”
While products such as Healthy Choice — with its heart-healthy message — and Banquet — popular for its $2 turkey and gravy and salisbury steak entrees — were still generating revenue, the products looked much the same as decades before. The result: sales sharply fell as consumers turned to trendier flavors and better-for-you options.
Executives realized the decades-old process used to create and test products wasn’t translating into meaningful sales. Simply introducing new flavors or boosting advertising was no longer enough to entice consumers to buy. If Conagra maintained the status quo, the CPG giant risked exacerbating the slide and putting its portfolio of brands further behind the competition.
“We were doing all this work into what I would call validation insights, and things weren’t working,” Bob Nolan, senior vice president of demand sciences at Conagra, told Food Dive. “How could it not work if we asked consumers what they wanted, and we made it, and then it didn’t sell? …That’s when the journey started. Is there a different way to approach this?”
Nolan and other officials at Conagra eventually decided to abandon traditional product testing and market research in favor of buying huge quantities of behavioral data. Executives were convinced the data could do a better job of predicting eventual product success than consumers sitting in an artificial setting offering feedback.
Conagra now spends about $15 million less on testing products than it did three years ago, with much of the money now going toward buying data in food service, natural products, consumption at home, grocery retail and loyalty cards. When Nolan started working at Conagra in 2012, he estimated 90% of his budget at the company was spent on traditional validation research such as testing potential products, TV advertisements or marketing campaigns. Today, money spent on those methods has been cut to zero.
While most food and beverage companies have not changed how they go about testing their products as much as Conagra, CPG businesses throughout the industry are collectively making meaningful changes to their own processes.
With more data available now than ever before, companies can change their testing protocol to answer questions they might previously not have had the budget or time to address. They’re also turning to technology such as videos and smartphones to immediately engage with consumers or to see firsthand how they would respond to their prototype products in real-life settings, like their own homes.
As food manufacturers scramble to remain competitive and meet the shopper’s insatiable demand for new tastes and experiences, changing how they go about testing can increase the likelihood that a product succeeds — enabling corporations to reap more revenue and avoid being one of the tens of thousands of products that fail every year.
For Conagra, the new approach already is paying off. One success story came in the development of the company’s frozen Healthy Choice Korean-Inspired Beef Power Bowl. By combing data collected from the natural food channel and specialty stores like Whole Foods and Sprouts Farmers Market, the CPG giant found people were eating more of their food in bowls — a contrast to offerings in trays.
“How could it not work if we asked consumers what they wanted, and we made it, and then it didn’t sell? …That’s when the journey started. Is there a different way to approach this?”
Bob Nolan
Senior vice president of demand sciences, Conagra
At the same time, information gathered from restaurants showed Korean was the fastest-growing cuisine. The data also indicated the most popular flavors within that ethnic category. Nolan said without the data it would have been hard to instill confidence at Conagra that marketing a product like that would work, and executives would have been more likely to focus on flavors the company was already familiar with.
Since then, Conagra rebranded Healthy Choice around cleaner label foods with recognizable, modern ingredients that were incorporated into innovations such as the Power Bowl. The overhaul helped rejuvenate the 34-year-old brand, with sales jumping 20% during the last three years after declining about 10% during the prior decade, according to the company.
Conagra has experienced similar success by innovating its other frozen brands, including Banquet and Marie Callender’s. For a company whose frozen sales total $5.1 billion annually, the segment is an important barometer for success at Conagra.
For years, food companies would come up with product ideas using market research approaches that dated back to the 1950s. Executives would sit in a room and mull over ways to grow a brand. They would develop prototypes before testing and retesting a few of them to find the one that would have the best chance of resonating with consumers. Data used was largely cultivated through surveys or focus groups to support or debunk a company idea.
“It’s an old industry and innovation has been talked about before but it’s never been practiced, and I think now it’s starting to get very serious because CPG companies are under a lot of pressure to innovate and get to market faster,” Sean Bisceglia, CEO of Curion, told Food Dive. “I really fear the ones that aren’t embracing it and practicing it … may damage their brand and eventually damage their sales.”
Information on nearly every facet of a consumer’s shopping habits and preferences can be easily obtained. There is data showing how often people shop and where they go. Tens of millions of loyalty cards reveal which items were purchased at what store, and even the checkout lane the person was in. Data is available on a broader level showing how products are selling, but CPGs can drill down on an even more granular level to determine the growth rate of non-GMO or organic, or even how a specific ingredient like turmeric is performing.
Market research firms such as Nielsen and Mintel collect reams of valuable data, including when people eat, where and how they consume their food, how much time they spend eating it and even how it was prepared, such as by using a microwave, oven or blender.
To help its customers who want fast results for a fraction of the cost, Bisceglia said Curion has created a platform in which a product can be tried out among a random population group — as opposed to a specifically targeted audience made up of specific attributes, like stay-at-home moms in their 30s with two kids — with the data given to the client without the traditional in-depth analysis. It can cost a few thousand dollars with results available in a few days, compared to a far more complicated and robust testing process over several months that can sometimes cost hundreds of thousands of dollars, he said.
Curion, which has tested an estimated 8,000 products on 700,000 people during the last decade, is creating a database that could allow companies to avoid testing altogether.
For example, a business creating a mango-flavored yogurt could initially use data collected by a market research firm or someone else showing how the variety performed nationwide or by region. Then, as product development is in full swing, the company could use Curion’s information to show how mango yogurt performed with certain ages, income levels and ethnicities, or even how certain formulations or strength of mango flavor are received by consumers.
“What’s actually going to be able to predict if someone is going to buy something, and are they going to buy it again and again and again? You have to get smart on what is the payoff at the end of all of the data. And just figure out what the key measures are that you need and stop collecting, if you can, all this other ancillary stuff.”
Lori Rothman
Owner, Lori Rothman Consulting
Lori Rothman, who runs her own consulting firm to advise companies with their product testing, worked much of the last 30 years at companies including Kraft and Kellogg to determine the most effective way to test a product and then design the corresponding trial. She used to have days or weeks to review data and consumer comments before plotting out the best way to move forward, she said.
In today’s marketplace, there is sometimes pressure to deliver within a day or even immediately. Some companies are even reacting in real time as information comes in — a precedent Rothman warned can be dangerous because of the growing amount of data available and the inherent complexity in understanding it.
“It’s continuing toward more data. It’s just going to get more and more and we just have to get better at knowing what to do with it, and how to use it, and what’s actually important. What’s actually going to be able to predict if someone is going to buy something, and are they going to buy it again and again and again?” Rothman said. “You have to get smart on what is the payoff at the end of all of the data. And just figure out what the key measures are that you need and stop collecting, if you can, all this other ancillary stuff.”
Ferrara Candy, the maker of SweeTarts, Nerds and Brach’s, estimated that it considers more than 100 product ideas each year. An average of five typically make it to market.
To help whittle down the list, the candy company owned by Nutella-maker Ferrero conducts an array of tests with consumers, nearly all of them done without the customary focus group or in-person interview.
Daniel Hunt, director of insights and analytics for Ferrara, told Food Dive that rather than working with outside vendors to conduct research, as the company would have a decade ago, it now handles the majority of testing itself.
In the past, the company might have spent $20,000 to run a major test. It would have paid a market research firm to write an initial set of questions to ask consumers, then refine them, run the test and then analyze the information collected.
Today, Hunt said Ferrara’s own product development team, most of whom have a research background, does most of the work creating new surveys or modifying previously used ones — all for a fraction of the cost. And what might have taken a few months to carry out in the past can sometimes be completed in as little as a few weeks.
“Now when we launch a new product, it’s not much of a surprise what it does, and how it performs, and where it does well, and where it does poorly. I think a lot of that stuff you’ve researched to the point where you know it pretty well,” Hunt told Food Dive. “Understanding what is going to happen to a product is more important — and really understanding that early in the cycle, being able to identify what are the big potential items two years ahead of launching it, so you can put your focus really where it’s most important.”
Increasingly, technology is playing a bigger part in enabling companies such as Ferrara not only to do more of their own testing, but also to choose from more options for how best to carry it out.
Data can be collected from message boards, chat rooms and online communities popular with millennials and Gen Zers. But technology does have its limits. Ferrara aims to keep the time commitment for its online surveys to fewer than seven minutes because Hunt said the quality of responses tends to diminish for longer ones, especially among people who do them on their smartphones.
Other research can be far more rigorous, depending on how the company plans to use the information.
“I don’t think that (testing is) going away or becoming less prevalent, but certainly the way that we’re testing things from a product standpoint is changing and evolving. If anything, we’re doing more testing and research than before, but maybe just in a slightly different way than we did in the past.”
Daniel Hunt
Director of insights and analytics, Ferrara
Last summer, Ferrara created an online community of 20 people to help it develop a chewy option for its SweeTarts brand. As part of a three-week program, participants submitted videos showing them opening boxes of candies with different sizes, shapes, flavors, tastes and textures sent to them by Ferrara. Some of the products were its own candies, while others came from competitors such as Mars Wrigley’s Skittles or Starburst. Ferrara wanted to watch each individual’s reaction as he or she tried the products.
Participants were asked what they liked or disliked, or where there were market opportunities for chewy candy, to help Ferrara better hone its product development. These consumers were asked to design their own products.
Ferrara also had people either video record themselves shopping or write down their experience. This helped researchers get a feel for everything from when people make decisions that are impulsive or more thought out, to what would make a shopper decide not to purchase a product. As people provided feedback, Ferrara could immediately engage with them to expound on their responses.
“All of those things have really helped us get information that is more useful and helpful,” Hunt said. “I don’t think that (testing is) going away or becoming less prevalent, but certainly the way that we’re testing things from a product standpoint is changing and evolving. If anything, we’re doing more testing and research than before, but maybe just in a slightly different way than we did in the past.”
Getting people to change isn’t easy. To help execute on its vision, Conagra spent four years overhauling the way it went about developing and testing products — a lengthy process in which one of the biggest challenges was convincing employees used to doing things a certain way for much of their career to embrace a different way of thinking.
Conagra brought in data scientists and researchers to provide evidence to show how brands grow and what consumer behavior was connected to that increase. Nolan’s team had senior management participate in training courses “so people realize this isn’t just a fly-by-night” idea, but one based on science.
The CPG giant assembled a team of more than 50 individuals — many of whom had not worked with food before — to parse the complex data and find trends. This marked a dramatic new way of thinking, Nolan said.
While people with food and market research backgrounds would have been picked to fill these roles in the past, Conagra knew it would be hard to retrain them in the company’s new way of thinking. Instead, it turned to individuals who had experience in data technology, hospitality and food service, even if it took them time to get up to speed on Conagra-specific information, like the brands in its portfolio or how they were manufactured.
Conagra’s reach extended further outside its own doors, too. The company now occasionally works with professors at the University of Chicago, just 8 miles south of its headquarters, to help assess whether it is properly interpreting how people will behave.
“In the past, we were just like everybody else,” Nolan said. “There are just so many principles that we have thrown out that it is hard for people to adjust.”
Mars Wrigley has taken a different approach, maintaining the customary consumer testing while incorporating new tools, technology and ways of thinking that weren’t available or accepted even a few years ago.
“I have not seen a technology that replicates people actually trying the product and getting their honest reaction to it. At the end of the day, this is food.”
Lisa Saxon Reed
Director of global sensory, Mars Wrigley
Lisa Saxon Reed, director of global sensory at Mars Wrigley, told Food Dive the sweets maker was recently working to create packaging for its Extra mega-pack with 35 pieces of gum, improving upon a version developed for its Orbit brand years before. This time around, the company — which developed more than 30 prototypes — found customers wanted a recyclable plastic container they believed would keep the unchewed gum fresh.
Shoppers also wanted to feel and hear the packaging close securely, with an auditory “click.” Saxon Reed, who was not involved with the earlier form of the package, speculated it didn’t resonate with consumers because it was made of paperboard, throwing into question freshness and whether the package would survive as long as the gum did.
The new packaging, which hit shelves in 2016 after about a year of development, has been a success, becoming the top selling gum product at Walmart within 12 months of its launch, according to Saxon Reed. Mars Wrigley also incorporated the same packaging design for a mega pack of its 5 gum brand because it was so successful.
“If we would not have made a range of packaging prototypes and had people use them in front of us, we would have absolutely missed the importance of these sensory cues and we would have potentially failed again in the marketplace,” Saxon Reed said. “If I would have done that online, I’m not sure how I would have heard those cues. …I don’t think those would have come up and we would have missed an opportunity to win.”
The new approach extends to the product itself, too. Saxon Reed said Mars Wrigley was looking to expand its Extra gum line into a cube shape in fall 2017. Early in the process, Mars Wrigley asked consumers to compile an online diary with words, pictures and collages showing how they defined refreshment. The company wanted to customize the new offering to U.S. consumers, and not just import the cube-shaped variety already in China.
After Mars Wrigley noticed people using the color blue or drawing waterfalls, showers or water to illustrate a feeling of refreshment, product developers went about incorporating those attributes into its new Extra Refreshers line through the color, flavor or characteristics that feel cool or fresh to the mouth. They later tested the product on consumers who liked gum, including through the age-old testing process where people were given multiple samples to try and asked which they preferred.
Extra Refreshers hit shelves earlier this year and is “off to a strong start,” Saxon Reed said.
“I don’t see it as an ‘either-or’ when it comes to technology and product testing. I really see it as a ‘yes-and,’ ” she said. “How can technology really help us better understand the reactions that we are getting? But at this point, I have not seen a technology that replicates people actually trying the product and getting their honest reaction to it. At the end of the day, this is food.”
Regardless of what process large food and beverage companies use, how much money and time they spend testing out their products, or even how heavily involved consumers are, CPG companies and product testing firms agreed that an item’s success is heavily defined by one thing that hasn’t and probably never will change: taste.
“Everybody can sell something once in beautiful packaging with all the data, but if it tastes terrible it’s not going to sell again,” Bisceglia said.
Hilco Streambank is seeking offers to acquire the patent portfolio and related assets of Anki, a leading AI-enabled, cloud-connected home robotics and entertainment developer. The patent portfolio covers broad claims related to autonomously controlled devices incorporating artificial intelligence and adaptive data analytics. Available assets also include trademarks and the Anki.com domain name.
45 issued utility patents, including 35 U.S. patents
11 published patent applications
39 pending patent applications
3 utility patents in the National Phase (PCT)
73 issued design patents
Territories Covered: U.S., E.U., China, Germany, Canada, Japan, South Korea, among others.
Large Addressable Market: The patents have been utilized in the consumer electronics and gaming space, and the addressable market extends to the smart home, security, healthcare, manufacturing, and warehousing industries, among others.
Offers to acquire some or all of the patents and additional assets are due by a set bid deadline, although the seller will entertain offers received prior to that date.
Source: https://www.hilcostreambank.com/acquisition-opportunities/anki
As participants in a rapidly changing industry, those of us in the restaurant business understand the importance of innovation. From the introduction of self-service digital experiences to the emergence of third-party delivery, technology innovation has continuously proven to be a powerful force in multi-unit restaurants’ ability to drive and respond to guest behavior. However, innovation done right isn’t easy; and it is even more difficult when that innovation needs to take place in a non-standard environment.
The truth of the matter is that many multi-unit restaurant brands, especially those that are franchised, are non-standard. While regional and market variations in menus, store layouts, and technology can provide a unique, tailored experience for guests residing in a specific area, these variations also present a challenge when it comes to implementing a brand-wide technology innovation strategy.
Here to discuss the best practices for overcoming the obstacles associated with non-standard technology environments is Michael Chachula, Head of IT for IHOP Restaurants.
Q: Where does innovation come from in IHOP?
Chachula: “Most of the innovation that happens here at IHOP comes from one of two places. The first is customer demand: we continuously engage with our guests to understand the points of friction in their experience or areas where we can surprise and delight. Many of our guests have begun expecting a similar technology experience with IHOP that they have had with not only other restaurant brands but with technology providers like Uber or Apple. We hold this feedback close when forming our technology strategies. The second is analysis around the in-restaurant journey. We recognize that our guests’ most valuable currency is their time, and as a result, we continuously aim to test new technologies that make their time with us more efficient, more enjoyable, and more memorable.”
Q: What is the key to being successful when you are evaluating a new technology solution for a non-standard operational environment?
Chachula: “The word to pay attention to here is standardization. Standardization is important to enabling scalability, but that standardization cannot stem creativity. For those that are currently battling this challenge, they should look to introduce a modular, flexible, and extensible technology platform that is easy to support, but configurable enough to allow creativity in their operations community. Configurability should always be one of the top five considerations when evaluating new technology solutions for a diverse multi-unit brand; that is where technology meets operations. On top of that, those decisions should be validated through partnerships with industry experts who can help confirm that the investment that you spend on a solution won’t be an investment wasted.”
Q: What is the right way to implement new technology in this type of environment?
Chachula: “What I have found is that most of our operators share about 80% of their needs and wants when it comes to technology. With that said, the first step in preparing for a successful implementation of new technology is identifying that 20% of functionality or uniqueness that may be required from one operator to another. Once that is done, and you place those unique requirements and their operational requests into logical groupings, you can begin working on how to ensure that the new technology is configured and supported properly for each one of those different groups. In this model, you are essentially creating several different configuration ‘schemas’ aligned with each of these groups. This allows increased supportability and ease of implementation when it comes to putting this new technology into the field in a fast-paced environment like an IHOP.”
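As an illustration of the “schema” idea Chachula describes, the sketch below layers per-group overrides on top of a shared base configuration. The group names and settings are invented for illustration and are not IHOP’s actual configuration.

```python
# Hypothetical sketch: a shared base config covers the ~80% of common needs;
# per-group "schemas" layer the ~20% of operator-specific settings on top.
BASE_CONFIG = {
    "online_ordering": True,
    "kiosk_enabled": False,
    "menu_region": "national",
}

GROUP_SCHEMAS = {
    "urban_high_volume": {"kiosk_enabled": True},
    "travel_center": {"menu_region": "travel", "open_24h": True},
    "standard": {},
}

def build_config(group: str) -> dict:
    """Merge the base configuration with a group's overrides."""
    return {**BASE_CONFIG, **GROUP_SCHEMAS.get(group, {})}

print(build_config("travel_center"))
```

The point is that operators get their slice of configurability while the brand supports a small, known set of schemas rather than hundreds of one-off setups.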
A significant share of architectural energy is spent on reducing or avoiding lock-in. That’s a rather noble objective: architecture is meant to give us options and lock-in does the opposite. However, lock-in isn’t a simple true-or-false matter: avoiding being locked into one aspect often locks you into another. Also, popular notions, such as open source automagically eliminating lock-in, turn out to be not entirely true. Time to have a closer look at lock-in, so you don’t get locked up into avoiding it!
One of an architect’s major objectives is to create options. Those options make systems change-tolerant, so we can defer decisions until more information becomes available or react to unforeseen events. Lock-in does the opposite: it makes switching from one solution to another difficult. Many architects may therefore consider it their archenemy while they view themselves as the guardians of the free world of IT systems where components are replaced and interconnected at will.
Lock-in – an architect’s archenemy?
But architecture is rarely that simple – it’s a business of trade-offs. Experienced architects know that there’s more behind lock-in than proclaiming that it must be avoided. Lock-in has many facets and can even be the favored solution. So, let’s get in the Architect Elevator to have a closer look at lock-in.
The platforms we are deploying software on these days are becoming ever more powerful – modern cloud platforms not only tell us whether our photo shows a puppy or a muffin, they also compile our code, deploy it, configure the necessary infrastructure, and store our data.
This great convenience and productivity booster also brings a whole new form of lock-in. Hybrid/multi-cloud setups, which seem to attract many architects’ attention these days, are a good example of the kind of things you’ll have to think of when dealing with lock-in. Let’s say you have an application that you’d like to deploy to the cloud. Easy enough to do, but from an architect’s point of view, there are many choices and even more trade-offs, especially related to lock-in.
You might want to deploy your application in containers. That sounds good, but should you use AWS’ Elastic Container Service (ECS) to run them? After all, it’s proprietary to Amazon’s cloud. Prefer Kubernetes? It’s open source and runs on most environments, including on premises. Problem solved? Not quite – now you are tied to Kubernetes – think of all those precious YAML files! So you traded one lock-in for another, didn’t you? And if you use a managed Kubernetes services such as Google’s GKE or Amazon’s EKS, you may also be tied to a specific version of Kubernetes and proprietary extensions.
If you need your software to run on premises, you could also opt for AWS Outposts, so you do have some options. But that again is proprietary. It integrates with VMWare, which you are likely already locked into, so does it really make a difference? Google’s equivalent, freshly minted Anthos, is built from open-source components, but nevertheless a proprietary offering: you can move applications to different clouds – as long as you keep using Anthos. Now that’s the very definition of lock-in, isn’t it?
Alternatively, if you neatly separate your deployment automation from your application run-time, doesn’t that make it fairly easy to switch infrastructure, reducing the effect of all that lock-in? Hey, there are even cross-platform infrastructure-as-code tools. Aren’t those supposed to make these concerns go away altogether?
For your storage needs, how about AWS S3? Other cloud providers offer S3-compatible APIs, so can S3 be considered multi-cloud compatible and lock-in free, even though it’s proprietary? You could also wrap all your data access behind an abstraction layer and thus localize any dependency. Is that a good idea?
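One way to picture that abstraction-layer idea is a minimal storage interface that localizes the S3 dependency behind a neutral API. This is only a sketch, not a recommendation: the class and method names are invented, and the boto3 usage assumes standard AWS credentials are configured.

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """Neutral storage interface; application code depends only on this."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class S3BlobStore(BlobStore):
    """S3-specific adapter; the boto3 dependency stays localized here."""
    def __init__(self, bucket: str):
        import boto3  # assumes AWS credentials are already configured
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()
```

Whether a layer like this pays off is exactly the trade-off discussed below: it reduces switching cost, but it hides S3-specific features and adds code you have to maintain.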
It looks like avoiding lock-in isn’t quite so easy and might even get you locked up into trying to escape from it. To highlight that cloud architecture is fun nevertheless, I defer to Simon Wardley’s take on hybrid cloud.
Lock-in isn’t an all-or-nothing affair.
Elevator Architects (those who ride the Architect Elevator up and down) see shades of gray where many only see black and white. When thinking about system design, they realize that common attributes like lock-in or coupling aren’t binary. Two systems aren’t just coupled or decoupled just like you aren’t simply locked into a product or not. Both properties have many nuances. For example, lock-in breaks down into numerous dimensions:
Open source software isn’t a magic cure for lock-in.
In summary, lock-in is far from an all-or-nothing affair, so understanding the different flavors can help you make more conscious architecture decisions. The list also debunks common myths, such as the idea that using open source software magically eliminates lock-in. Open source can reduce vendor lock-in, but most of the other types of lock-in remain. This doesn’t mean open source is bad, but it isn’t a magic cure for lock-in.
Experienced architects not only see more shades of gray, they also practice good decision discipline. That’s important because we are much worse decision makers than we commonly like to believe – a quick read of Kahneman’s Thinking, Fast and Slow is in order if you have any doubt.
One of the most effective ways to improve your decision making is to use models. Even, or especially, simple models are surprisingly effective at improving decision making:
Simple but evocative models are the signature of the great scientist, but over-elaboration and over-parameterization is often the mark of mediocrity.
That’s why you shouldn’t laugh at the famed two-by-two matrix that’s so beloved by management consultants. It’s one of the simplest and therefore most effective models as we shall soon discover.
The more uncertain the environment, the more structured models can help you make better decisions.
There’s a second important point about models: a common belief tells us that in the face of uncertainty you pretty much have to “shoot from the hip” – after all, everything is in flux anyway. The opposite is actually true: our generally poor decision making only gets worse when we have to deal with many interdependencies, high degrees of uncertainty, and small probabilities. Therefore, this is where models help the most to bring much-needed structure and discipline into our decision-making. Deciding whether and to what degree to accept lock-in falls well into this category, so let’s use some models.
A simple model can help us get past the “lock-in = bad” stigma. First, we have to realize that it’s difficult not to be locked into anything, so some amount of lock-in is inevitable. Second, we may happily accept some amount of lock-in if we get a commensurate pay-off, for example in the form of a unique feature or utility that’s not offered by competing products.
Let’s express these factors in a very simple model – a two-by-two matrix:
The matrix outlines our choices along the following axes:
We can now consider each of the four quadrants:
While the model is admittedly simple, placing your software (and perhaps hardware) components into this matrix is a worthwhile exercise. It not only visualizes your exposure but also communicates your decisions well to a variety of stakeholders.
For an every-day example of the four quadrants, you may have decided to use following items, which give you varying amounts of lock-in and utility (counter-clockwise from top-right):
A unique product feature doesn’t always translate into unique utility for you.
One word of caution on the unique utility: every vendor is going to give you some form of unique feature – that’s how they differentiate. However, what counts here is whether that feature translates into a concrete and unique value for you and your organization. For example, some cloud providers run Billion-user services over their amazing global network. That’s impressive and unique, but unlikely to be a utility for the average enterprise who’s quite happy to serve 1 million customers and may be restricted to doing business in a single country. Some people still buy Ferraris in small countries with strict speed limits, so apparently not all decision making is entirely rational, but perhaps a Ferrari gives you utility in more ways than a cloud platform can.
Because this simple matrix was so useful, let’s do another one. The previous matrix treats switching cost as a single element (or dimension). A good architect can see that it breaks down into two dimensions:
The matrix differentiates between the cost of making the switch from the likelihood that you’ll have (or want) to make the switch. Things that have a low likelihood and a low cost shouldn’t bother you much while the opposite end, the ones with high switching cost and a high chance of switch, are no good and should be addressed. On the other diagonal, you are taking your chances on those options that will cost you, but are unlikely to occur – that’s where you’ll want to buy some insurance, for example by limiting the scope of change or by padding your maintenance budget. You could also accept the risk – how often would you really need to migrate off Oracle onto DB2, or vice versa? Lastly, if switches are likely but cheap, you achieved agility – you embrace change and designed your system for low cost of executing it. Oddly, this quadrant often gets less attention than the top left despite many small changes adding up quickly. That’s our poor decision making at work: the unlikely drama gets more attention because what if!
When discussing the likelihood of lock-in, you’ll want to consider a variety of scenarios that’ll make you switch: a vendor may go out of business, raise prices, or may no longer be able to support your scale or functional needs. Interestingly, the desire to reduce lock-in sometimes comes in the form of a negotiation tool: when negotiating license renewals you can hint to your vendor that you architected your system such that switching away from their product is realistic and inexpensive. This may help you negotiate a lower price because you’ve communicated that you have a credible BATNA – a Best Alternative To a Negotiated Agreement. This is an architecture option that’s not really meant to be used – it’s a deterrent, sort of like a stockpile of weapons in a cold war. You might be able to fake it and not actually reduce lock-in, but you’d better be a good poker player in case the vendor calls your bluff, e.g. by chatting with your developers at the water cooler.
Pulling in our options analogy from the very beginning once more: if avoiding lock-in gives you options, then the cost of making the switch is the option’s strike price – it’s how much you pay to execute the option. The lower the switching cost you want to achieve, the higher the option’s value and therefore its price. While we’d dream of having all systems in the “green boxes” with minimal switching cost, the necessary investment may not actually pay off.
Minimizing switching costs may not be the most economical choice.
For example, many architects favor not being locked into a database vendor or cloud provider. However, how likely is a switch, really? Maybe 5%, or even lower? How much will it cost you to bring that switching cost down from, let’s say, $50,000 (for a semi-manual migration) to near zero? Likely a lot more than the $2,500 ($50,000 x 5%) you can expect to save. Therefore, minimizing the switching cost isn’t the sole goal and can easily lead to over-investment. It’s the equivalent of being over-insured: paying a huge premium to bring the deductible down to zero may give you peace of mind, but it’s often not the most economical, and therefore rational, choice.
A final model (for once not a matrix) can help you decide how much you should invest into reducing the cost of making a switch. The following diagram shows your liability, defined as the product of the switching cost times the likelihood that it occurs, in relation to the up-front investment you need to make (blue line).
By investing in options, you can surely reduce your liability, either by reducing the likelihood of a switch or by reducing the cost of executing it. For example, using an Object-relational Mapping (ORM) framework like Hibernate is a small investment that can reduce database vendor lock-in. You could also create a meta-language that is translated into each database vendor’s native stored procedure syntax. It’ll allow you to fully exploit the database’s performance without being dependent on a single vendor, but it’s going to take a lot of up-front effort for a relatively unlikely scenario.
The interesting function therefore is the red line, the one that adds the up-front investment to the potential liability. That’s your total cost and the thing you should be minimizing. In most cases, with increasing up-front investment, you’ll move towards an optimum range. Additional investment into reducing lock-in beyond that point actually leads to higher total cost. The reason is simple: the returns on investment diminish, especially for switches that carry a small probability. If we make our architecture ever-so-flexible, we are likely stuck in this zone of over-investment. The Yagni (you ain’t gonna need it) folks may aim for the other end of the spectrum – as so often, the trick is to find the happy medium.
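To make the model concrete, here is a small sketch using the illustrative numbers from the database example above (a $50,000 semi-manual migration and a 5% likelihood of switching). The assumption that each additional $10,000 of up-front work roughly halves the remaining switching cost is invented purely for illustration.

```python
LIKELIHOOD = 0.05          # assumed 5% chance of ever making the switch
BASE_SWITCH_COST = 50_000  # cost of a semi-manual migration (from the example above)

def remaining_switch_cost(invest: float) -> float:
    # Invented assumption: each additional $10k of up-front work roughly
    # halves the remaining switching cost (diminishing returns).
    return BASE_SWITCH_COST * 0.5 ** (invest / 10_000)

def total_cost(invest: float) -> float:
    # Total cost = up-front investment + expected liability (cost x likelihood).
    return invest + LIKELIHOOD * remaining_switch_cost(invest)

for invest in range(0, 50_001, 10_000):
    print(f"up-front ${invest:>6,}: total expected cost ${total_cost(invest):>9,.0f}")
```

With a likelihood this low, the minimum sits at or near zero additional investment, echoing the over-insurance point above; rerun the numbers with a higher likelihood of switching and the optimum shifts toward more up-front investment.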
Now that we have a pretty good grip on the costs and potential pay-offs of being locked in, we need to have a closer look at the total cost of avoiding lock-in. In the previous model we assumed that avoiding lock-in is a simple cost. In reality, though, this cost can be broken down into several components:
Complexity can be the biggest price you pay for reducing lock-in.
When calculating the cost of avoiding lock-in, an architect should make a quick run down this list to avoid blind spots. Also, be aware that attempts at avoiding lock-in can be leaky, very much like leaky abstractions. For example, Terraform is a fine tool, but its scripts use many vendor-specific constructs. Implementation details thus “leak” through, rendering the switching cost from one cloud to another decidedly non-zero.
With so much theory, let’s look at a few concrete examples.
I worked with a company who packages much of their code into Docker containers that they deploy to AWS ECS. Thus they are locked into AWS. Should they invest into replacing their container orchestration with Kubernetes, which is open source? Given that feature velocity is their main concern and the current ECS solution works well for them, I don’t think a migration would pay off. The likelihood of having to switch to another cloud provider is low and they have “bigger fish to fry”.
Recommendation: accept lock-in.
Many applications use a relational database that can be provided by numerous vendors and open source alternatives. However, SQL dialects, stored procedures, and bespoke management consoles all contribute to database lock-in. How much should you invest into avoiding this lock-in? For most languages and run-times common mapping frameworks such as Hibernate provide some level of database neutrality at a low cost. If you want to further minimize your strike price, you’d also need to avoid SQL functions and stored procedures, which may make your product less performant or require you to spend more on hardware.
Recommendation: use low-effort mechanisms to reduce lock-in. Don’t aim for zero switching cost.
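Hibernate is a Java-specific example; as a rough Python analogue (a sketch, not the article’s prescription), an ORM such as SQLAlchemy keeps most queries vendor-neutral, so switching databases is largely a connection-string change, provided you avoid vendor-specific SQL and stored procedures.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customers"
    id = Column(Integer, primary_key=True)
    name = Column(String(100))

# Switching vendors is (mostly) a connection-URL change,
# e.g. "postgresql://..." instead of "sqlite:///...".
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Customer(name="Ada"))
    session.commit()
    print(session.query(Customer).count())
```

This is the “low-effort” level: you still accept some lock-in, both to the ORM itself and to whatever database features you do use.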
Rather than switching from one database vendor to another, you may be more interested in moving your application, including its database, to the cloud. Besides technical considerations, you’ll need to be careful with some vendors’ licensing agreements that may make such a move uneconomical. In these cases, it’s wise to opt for an open source database.
Recommendation: select an open source database if it can meet your operational and support needs, but accept some degree of lock-in.
Many enterprises are fascinated by the idea of portable multi-cloud deployments and come up with ever more elaborate and complex (and expensive) plans that’ll ostensibly keep them free of cloud provider lock-in. However, most of these approaches negate the very reason you’d want to go to the cloud: low friction and the ability to use hosted services like storage or databases.
Recommendation: Exercise caution. Read my article on multi-cloud.
It may seem that one can put an enormous amount of time contemplating lock-in. Some may even dismiss our approach as “academic”, a word which I repeatedly fail to see as something bad because that’s where most of us got our education. Still, isn’t the old black-or-white method of architecture simpler and, perhaps, more efficient?
Architectural thinking is actually surprisingly fast if you focus and stick to simple models.
In reality, architectural thinking happens extremely fast. Running through all the models shown in this article may really just take a few minutes and yields well-documented decisions. No fancy tooling besides a piece of paper or a whiteboard is required. The key ingredient in fast architectural thinking is merely the ability to focus.
Compare that to the effort of preparing elaborate slide decks for lengthy steering committee meetings that are scheduled many weeks in advance and usually don’t include anyone with the actual expertise to make an informed decision.
A growth strategy isn’t just a set of functions you plug into your business to boost your product’s growth—it’s also the way in which you organize and rally as a team.
If growth is “more of a mindset than a toolkit,” as Ryan Holiday said, then it’s a collective mindset.
Successful growth strategies are the product of engineering, marketing, leadership, design, and product management. Whether your team consists of 2 co-founders or a skyscraper full of employees, your growth hacking strategies will only be effective if you’re able to affix them to your organization, apply a workflow, and use the results of experiments to make intelligent decisions.
In short, there’s no plugin for growth. To increase your product’s user base and activation rate, your company will need to be methodical and tailor the strategies you read about to your unique product, problem, and target audience.
Before we dive into specific examples of growth strategies, let’s take a moment to establish a proper growth strategy definition:
A growth strategy is a plan of action that allows you to achieve a higher level of market share than you currently have. Contrary to popular belief, a growth strategy is not necessarily focused on short-term earnings—growth strategies can be long-term, too. Let’s keep that in mind with the following examples.
Another thing to keep in mind is that there are typically 4 types of strategies that roll up into a growth strategy. You might use one or all of the following:
Below, we’ll explore 21 growth strategy examples from teams that have achieved massive growth in their companies. Many examples use one or more of the 4 classic growth strategies, but others are outside of the box. These out-of-the-box approaches are often called “growth hacking strategies”.
Each of these examples should be understood in the context of the company where they were executed. While you can’t copy and paste their success onto your own unique product, there’s a lesson to be learned and leveraged from each one.
Now let’s get to it!
Clearbit‘s APIs allow you to do amazing things—like enrich trial sign-ups on your homepage—but to use them effectively, you need a developer’s touch. Clearbit needed to get developers to try their tool in order to grow. Their strategy involved dedicating their own developer time to creating free tools, APIs, and browser extensions that would give other developers a chance to play.
They experimented with creating free APIs for very specific purposes. One of the most successful was their free Logo API which allowed companies to quickly imprint their brand stamp onto pages of their website. Clearbit launched the API on ProductHunt and spread the word to their developer communities and email list—within a week, the Logo API had received 60,000 views and word-of-mouth traction had grown rapidly.
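Part of the Logo API’s appeal was its near-zero integration cost: a logo is just an image URL keyed by a company domain. A minimal sketch of how a developer might try it, assuming the public logo.clearbit.com endpoint is still available and its terms permit your use:

```python
import requests

def clearbit_logo_url(domain: str) -> str:
    # The public endpoint is keyed by company domain.
    return f"https://logo.clearbit.com/{domain}"

resp = requests.get(clearbit_logo_url("spotify.com"), timeout=10)
print(resp.status_code, resp.headers.get("Content-Type"))  # expect 200 and an image/* type
```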
Clearbit made a bite-sized version of their overall product. The Logo API represents Clearbit at large—it’s a flexible and easy-to-implement way for companies to integrate data into their workflows.
Offering a bite-sized version of your product that provides value for free creates an incredible first impression. It validates that what you’re making really works and drives testers to commit to your main product. And it can be an incredibly effective source of acquisition—Clearbit’s free APIs have driven over 100,000 inbound leads for the company.
As a customer analytics tool, Segment practices what it preaches when it comes to acquisition. The Segment team has developed a data-driven, experimental approach to identify its most successful acquisition channels and double down on those strategies.
In an AMA, their head of marketing Diana Smith told the audience that they’d recently been experimenting with which paid channels worked for them. “In a nutshell, we’ve learned that retargeting definitely works and search does not,” Smith explained.
Segment learned that their marketing efforts were more effective when they reached out to users who’d viewed their site before versus when they relied on users finding them through search. So they set out to refine their retargeting strategy. They started customizing their Facebook and Twitter ads to visitors who’d viewed particular pages: to visitors who’d viewed their docs, they sent API-related messages; to visitors who’d looked at pricing, they sent free trial messages.
By narrowing your acquisition strategy, you can dramatically increase ROI on paid acquisition, increasing conversions while minimizing CAC.
Tinder famously found success by gamifying dating. But to get its growth started, Tinder needed a strategy that would allow potential users to play the game and find a willing dating pool on the other side of the app.
In order to validate their product, people needed to see it in action. Tinder’s strategy was surprisingly high touch—they sent a team to visit potential users and demonstrate the product’s value in person.
To find the right growth strategy for your product, you have to understand what it will take for users to see it working. Tinder’s in-person pitches were a massive success because they helped users see value faster by populating the 2-sided app with more relevant connections.
Zapier is all about integrations—it brings together tools across a user’s tech stack, allowing events in one tool to trigger events in another, from Asana to HubSpot to Buffer. The beauty of Zapier is that it sort of disappears behind these other tools. But that raises an interesting question: How do you market an invisible tool?
Zapier’s strategy was to leverage its multifaceted product personality through content marketing. The team takes every new integration on Zapier as a new opportunity to build authority on search and to appeal to a new audience.
The blog reads like a collective guide to hundreds of tools, with specific titles like “How to Quickly Append Text to a Note in Evernote or OneNote from Your Browser” and “How to Automatically Generate Charts and Reports in Google Sheets and Docs.” Zapier’s strategy is to sneakily make itself a content destination for the audiences of all these different tools.
This strategy helped their blog grow from scratch to over 600,000 readers in just 3 years, and the blog continues to grow as new tools and integrations are added to Zapier.
If you have a product with multiple use cases and integrations, try targeting your content marketing to specific audiences, rather than aiming for a catch-all approach.
Andy Johns arrived at Twitter as a product manager in 2010, when the platform already had over 30 million active users. But according to Johns, growth was slowing. So the Twitter user growth team got creative and tried a new growth experiment every day—the team would pick an area in which to engage more users, create an experiment, and nudge the needle up by as much as 60,000 users in a day.
One crucial user growth strategy that worked for Twitter was to coax users into following more people during the onboarding. They started suggesting 10 accounts to new users shortly after signup.
Because users never had to encounter an empty Twitter feed, they were able to experience the product’s value much faster.
Your users’ first aha moment—whether it’s connecting with friends, sending messages, or sharing files—should serve to give them a secure footing in your product and nudge your network effect into action one user at a time.
LinkedIn was designed to connect users. But in the very beginning, most users still had only a few connections and needed help making more.
LinkedIn's strategy was to capitalize on high user motivation just after signup. In what became known as the "Reconnect Flow," LinkedIn asked new users a single question during onboarding: "Where did you used to work?"
Based on this input, LinkedIn then displayed a list of possible connections from the user's former workplace. This jogged new users' memories and reduced the effort required to reconnect with old colleagues. Once they had taken this step, users were more likely to make further connections on their own.
Thanks to this simple prompt, LinkedIn’s pageviews increased by 41%, searches jumped up 33%, and users’ profiles became richer with 38% more work positions listed.
If you notice your users aren’t making the most of your product on their own, help them out while you have their attention. Use the momentum of your onboarding to help your users become engaged.
Facebook’s active user base surpassed 1 billion in 2012. It’s easy to look at the massive growth of Facebook and see it as a sort of big bang effect—a natural event difficult to pick apart for its separate catalysts. But Facebook’s growth can be pinned down to several key strategies.
Again and again, Facebook carved out growth by maintaining a steely focus on user behavior data. They’ve identified markers of user success and used those markers as North Star metrics to guide their product decisions.
Facebook used analytics to compare cohorts of users—those who were still engaged in the site and those who'd left shortly after signing up. They found that the clearest indicator of retention was whether or not users connected with 7 friends within 10 days.
Once Facebook had identified their activation metric, they crafted the onboarding experience to nudge users up to the magic number.
By focusing on a metric that correlates with stickiness, your team can take a scientific approach to growing engagement and retention, and measuring its progress.
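As an illustration of how a team might test an activation threshold like "7 friends in 10 days" against retention, here is a hedged pandas sketch; the file names and columns (users.csv with user_id, signup_date, retained_day_30; friend_connections.csv with user_id, connected_at) are hypothetical stand-ins for whatever your analytics warehouse actually exposes.

import pandas as pd

# Hypothetical exports: one row per user, one row per friend connection.
users = pd.read_csv("users.csv", parse_dates=["signup_date"])
friends = pd.read_csv("friend_connections.csv", parse_dates=["connected_at"])

# Count friends each user added within 10 days of signup.
merged = friends.merge(users[["user_id", "signup_date"]], on="user_id")
within_10d = merged[merged["connected_at"] <= merged["signup_date"] + pd.Timedelta(days=10)]
friend_counts = within_10d.groupby("user_id").size().rename("friends_in_10d")

cohort = users.set_index("user_id").join(friend_counts).fillna({"friends_in_10d": 0})
cohort["activated"] = cohort["friends_in_10d"] >= 7

# Compare 30-day retention for activated vs. non-activated users.
print(cohort.groupby("activated")["retained_day_30"].mean())

If the gap between the two groups is large and stable across cohorts, the threshold is a candidate North Star; if not, you keep hunting.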
Slack has grown by watching how teams interact with their product. Their own team was the very first test case, and from then on they've refined their product by engaging companies to act as testers.
To understand patterns of retention and churn, Slack peered into their user data. They found that teams who’d sent 2,000 or more messages almost never dropped out of the product. That’s a lot of messages—you only get to that number by really playing around with the product and integrating it into your routine.
Slack knew they had to give new users as many reasons as possible to send messages through the platform. They started plotting interactions with users in a way that encouraged multiple message sending.
For example, Slack’s onboarding experience simulates how a seasoned Slack user behaves. New users are introduced to the platform through interactions with the Slackbot, and are encouraged to upload files, use keyboard shortcuts, and start new conversations.
Find what success means for your product by watching loyal users closely. Mirror that behavior for new users, and encourage them to get into a pattern that leads to long-term retention.
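To find a "2,000 messages"-style threshold in your own data, one approach is to bucket accounts by usage and look for the point where retention jumps. The sketch below is illustrative only; the export and column names (team_usage.csv, messages_sent, still_active_after_90d) are assumptions.

import pandas as pd

# Hypothetical per-team usage export.
teams = pd.read_csv("team_usage.csv")

# Bucket teams by total messages sent and inspect where retention jumps.
buckets = [0, 100, 500, 1000, 2000, 5000, float("inf")]
teams["usage_bucket"] = pd.cut(teams["messages_sent"], bins=buckets)
print(teams.groupby("usage_bucket")["still_active_after_90d"].agg(["mean", "count"]))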
In early 2013, self-employed e-book writer Nathan Barry publicly set himself an unusual resolution. He announced the “Web App Challenge”—he wanted to build an app from scratch and get to $5,000+ in monthly recurring revenue within 6 months.
Though he didn't quite make it to that $5,000 mark, he did build a product—ConvertKit—with validated demand that went on to reach $125,000 in monthly recurring revenue.
Barry experimented with a lot of growth strategies over the first three years, but the one he kept turning back to was direct communication with potential customers. Through personalized emails, Barry found tons of people who loved the idea of ConvertKit but said it was too much trouble for them to think about switching tools—all their contacts and drafts were set up in their existing tools.
So Barry developed a “concierge migration service.” The ConvertKit team would literally go into whichever tool the blogger was using, scrape everything out, and settle the new customer into ConvertKit. Just 15 months after initiating this strategy, ConvertKit was making $125,000 in MRR.
By actively reaching out and listening to your target users, you'll be better able to identify precise barriers to entry and come up with creative solutions to help them overcome these hurdles.
When Yahoo doubled their mobile revenue between 2012 and 2013, it wasn’t just the product that evolved. Yahoo had hired a new leader for its Mobile and Emerging Products, Adam Cahan. As soon as Cahan arrived, he set to work making organizational changes that allowed Yahoo’s mobile division to get experimental, iterate, and develop new products quickly.
In 2 years, Cahan grew Yahoo’s mobile division from 150 million mobile users to 550 million. By hiring the right people and enabling them to focus on solving problems for users, he had opened the doors for organic growth.
Payment processing platform Stripe always knew that developers were the key to adoption of their service. Founders John and Patrick Collison started Stripe to address a very specific problem—developers were sorely in need of a payment solution they could adapt to different merchant needs and match the speed and complexity of the buyer side of the ecommerce interface.
Merchants started clamoring for Stripe because their developers were raving about it—today, Stripe commands 15.34% of the market share for payment processing. That's thanks in large part to Stripe's strategy of prioritizing the needs of developers first and foremost. For instance:
Know your audience. By focusing on the people that are most directly affected by your problem, you can generate faster and more valuable word-of-mouth.
In 2013, help desk tool Groove was experiencing a worryingly high churn rate of 4.5%. They were acquiring new users just fine, but people were leaving as fast as they came. So they set out to get to know these users better. It was a strategy that would allow them to reduce churn from 4.5% to 1.6%. “Your customers probably won’t tell you when they hit a snag,” says Alex Turnbull, founder and CEO of Groove. “Dig into your data and look for creative ways to find those customers having trouble, and help them.”
By using analytics, you can identify behaviors that drive engagement vs. churn, then proactively reach out to customers when you spot these behaviors in action. By getting ahead of individual cases of churn, you can drive engagement up.
PayPal was growth hacking referrals before it was cool. When PayPal launched, they were introducing a new type of payment method—and they knew that they needed to build trust and authority in order to grow. Their strategy involved getting early adopters to refer users to the platform.
“We must have spent tens of millions in signup and referral bonuses the first year,” says David Sacks, original COO at PayPal. But that initial investment worked—PayPal’s radical first iteration of their referral program allowed them to grow to 5 million daily users in only a few months.
Incentivize your users in a way that makes sense for your business. If users adore your product, the initial cost of setting up a referral program can be recouped many times over as your users become advocates.
In 2016, the on-demand delivery service Postmates reached 1 million monthly deliveries. They also launched a subscription service called Postmates Plus Unlimited.
With growing demand, Postmates focused on developing products that are highly accessible and easy to use. At the same time, they gathered funding. In October 2016, they gained another $140 million investment, taking their post-money valuation to $600 million. But to cope with this growth, Postmates needed to scale their growth team.
According to Siqi Chen, VP of Growth at Postmates, the company had “an incredibly scrappy, hard working team who did the best they could with the tools given, but it’s very hard to make growth work at Postmates scale without dedicated engineering and product support.”
So the team shifted to include engineering and product at every level. Now, Postmates’ growth team has 3 arms of its own—“growth product,” “growth marketing,” and “user acquisition”—each one with its own engineering support.
By connecting their growth team directly to the technical decision makers, Postmates created a team that can scale with the company.
BuzzFeed is a constantly churning content machine, publishing hundreds of pieces a day, and getting over 9 billion content views per month. BuzzFeed’s key growth strategy has been to define virality, and pursue it in everything they do.
The lesson? To go viral, you need to give the people what they want, and that means striking a balance between consistency and novelty.
Airbnb's origin story is one of growth hacking's most infamous tales. Founders Brian Chesky and Joe Gebbia knew their potential audience was already using Craigslist, so they engineered their own integration, allowing hosts to double post their ads to Airbnb and Craigslist at the same time.
But it’s their review strategy that has enabled Airbnb to keep growing, once this short-term tactic wore out its effectiveness. Reviews enrich the Airbnb platform. For 50% of bookings, guests visit a host profile at least once before booking a trip, and hosts with more than 10 reviews are 10X more likely to receive bookings.
Airbnb growth hacked their network effect by making reviewing really easy:
By making reviews easier and more honest, Airbnb grew the number of reviews on the site, which in turn grew its authority. You can growth hack your shareability by identifying barriers to trust and smoothing out points of friction along the way.
AdRoll has a great MailChimp integration—it allows users to retarget ads to their email subscribers in MailChimp. But they found that very few users were actually making use of this feature.
Peter Clark, head of Growth at AdRoll, wanted to experiment with in-app messaging in order to target the right AdRoll users more effectively.
But growth experiments like this require rapid iteration. His engineers were better suited to longer development cycles, and he didn't want to disrupt the flow of his organization. So Peter and his team started using Appcues to create custom modal windows quickly and easily—and without input from their technical team members.
With a code-free solution, AdRoll’s growth team could design and implement however many windows they needed to drive adoption of the features they were working on. Here’s how it worked for the MailChimp integration:
This single experiment yielded thousands of conversions and ended up increasing adoption rate of the integration to 60%. The experiment is so easy to replicate that Clark and the team now use modal windows for all kinds of growth experiments.
GitHub began as a tool built on top of Git, the open-source version control system. It was designed to solve a problem its coder founders were having: making it easier for multiple developers to work together on a single project. But it was the discussion around Git—what the founders nicknamed "the Github"—that became the tool's core value.
GitHub's founders realized that the problem of collaboration wasn't just a practical software problem—the whole developer community was missing a communal factor. So they focused on growing the community side of the product, creating a freemium product with free hosting for open-source repositories where coders could come together to discuss projects and solve problems with a collective mindset.
They created the ability to follow projects and track contributions, so there's both an element of camaraderie and an element of competitiveness. This turned GitHub into a sort of social network for coding. A little over a year after launch, GitHub had gained its first 100,000 users. In July of 2012, GitHub secured $100M in venture capital.
By catalyzing the network effect, it's possible to turn a tool into a culture. For GitHub, the more developers got involved, the better the tool became. Find a community for your product and give them a place to come together.
It’s relatively easy for a consumer review site to get drive-by traffic. What makes Yelp different, and allows it to draw return visitors and community members, is that it has strategically grown the social aspect of its platform.
This is what has earned Yelp 176 million unique monthly visitors in Q2 2019 and has allowed them to overtake competitors by creating their own category of service. Yelp set out to amplify its existing network effect by rewarding users for certain behaviors.
By making reviews into a status symbol, Yelp turned itself into a community with active members who feel a sense of belonging there—and who feel motivated to use the platform more often.
Etsy reached IPO with a $2 billion valuation in 2015, ten years after the startup was founded. Today, the company boasts 42.7 million active buyers and 2.3 million active sellers who made $3.9 billion in annual gross merchandise sales in 2018. Not too shabby (chic)!
The key to their success was Etsy's creation of a "community-centric" platform. Rather than building a simple ecommerce site, Etsy set out to create a community of like-minded craft-makers. One of the ways they did this was to boost organic new-user growth by actively encouraging sellers to share their wares on social media.
If your product involves a 2-sided market, focus on one side of that equation first. What can you do to enable those people to become an acquisition channel in and of themselves?
As cloud-based software has taken off, traditional hardware technology companies have struggled. IBM has been proactive in its efforts to redefine its brand and product offering for an increasingly mobile audience.
Faced with an increasingly competitive, cloud-based landscape, IBM decided that it was time to start telling a different story. This legacy giant began acting more like a nascent startup, as the company aggressively reinvented its portfolio.
Their strategy for reinvigorating growth and achieving startup-like mentality has been to take a product-led approach.
No matter what your team looks like—whether it’s a nimble 10-person startup or an enterprise with low flexibility—you can turn your organizational structure into a space where growth can thrive. Of course, that achievement is not without its struggles. But as Nancy Hensley, Chief Digital Officer of Data and AI at IBM says:
“There’s always pain in transformation. That’s how you know you’re transforming!”
None of these growth spurts happened by changing a whole company all at once. Instead, these teams found something—something small, a way in, a loophole, a detail—and carved out that space so growth could follow.
Whether you find that a single feature in your product is the key to engaging users, or you discover a north star metric that allows you to replicate success—pinpoint your area for growth and dig into it.
Pay attention. Listen to your users and notice what's happening in your product and what could be happening better. That learning is your next growth strategy.
Imagine a world where you had a personal board of advisors — the people you most admire and respect — and you gave them upside in your future earnings in exchange for helping you (e.g., our good friend Mr. Mike Merrill).
Imagine if there was a “Kickstarter for people” where you could support up-and-coming artists, developers, entrepreneurs — when they need the cash the most, and most importantly, you’d only profit when they profit.
Imagine if you could diversify by pooling 1% of your future income with your ten smartest friends.
Now think about how much you’d go out of your way to help, say, your brother-in-law or step-siblings. Probably much more than a stranger. Why is that?
To pose a thought experiment: If you didn’t know your cousins were related to you, you might treat them like any other person. But because we have this social context of an “extended family,” you have a sort of genetic equity in them — a feeling that your fates are shared and it’s your responsibility to support them.
This begs the question: How can we create the social context needed for people to truly care about others outside of their extended family?
If you believe that markets and trade have helped the world become a less violent place — because why hurt someone when it’ll also take money out of your pocket? — then you should believe that adding more markets (with proper safeguards) will make the world even less violent.
This is the hope of income share agreements (ISAs).
ISAs align economic incentives in ways that encourage us to help others beyond our extended family, give people economic opportunity who don’t have it today, and free people from the shackles of debt.
What are these ISAs you speak of?
An Income Share Agreement is a financial arrangement where an individual or organization provides something of value to a recipient, who, in exchange, agrees to pay back a percentage of their income for a certain period of time.
In the context of education, ISAs are a debt-free alternative to loans.
Rather than go into debt, students receive interest-free funding from an investor or benefactor. In exchange, the student agrees to share a percentage of future income with their counterparty. They come in different shapes and sizes, but almost always with terms that take into account a plethora of potential scenarios.
"Part of the elegance of an ISA is that the lender only wants a share of income when the borrower is getting a regular income. If you're unemployed or underemployed, they're not interested… you're automatically getting a suspension of payments when you're not doing well."
– Mark Kantrowitz, a leading national expert on student loans who has testified before Congress about student aid policy.
There is a long and storied history of income share agreements, but they’ve only recently become popular due to the rise of Lambda School, a school that lets students attend for free and, if they do well after school, pay a percentage of their income until they pay Lambda back.
Wait, a popular meme sarcastically asks, did you just invent taxes?
No. Lambda gets paid if and only if the student earns a certain amount after graduation. In other words, incentives are aligned. The student is the customer. Not the government. Not the state. Not the parents.
To be sure, it’s early days for ISAs: Adverse selection, legalization, concerns about individuals being corporations (derivatives? Shorting people?!) — there’s a lot left to figure out.
Still, it’s an idea that once you see, you can’t unsee.
Here’s a hypothetical story to help you picture how ISAs work:
Picture Janet, a Senior at Davidson High School. She has a 4.0 GPA, is captain of the debate team and star center forward of the Varsity Soccer team. She's a shoo-in for a top 20 university, but her parents can't afford it even with a scholarship, so she's not even going to apply, and is headed for State. Then she learns from a news article that she's a pretty good bet as someone who's going to succeed down the road, and that betting might allow her to put some much-needed cash towards her education. She goes for it, makes a profile on an ISA platform, and sure enough, a few strangers bet $50,000 on her college education! She immediately gets to work filling out Ivy League scholarship applications.
Throughout college, she keeps in touch with her investors, they give her advice, and because of her interest in politics, one even helps her get an internship with a governor's election campaign over the summer. Once she graduates, she knows the clock is ticking — at 23 she'll need to start paying back the investors 5% of her after-tax income, so she hustles to work her way through the ranks.
From age 23 to 33, the payback period, Janet becomes a lawyer at a top-tier firm, and the investors make a 3x cash-on-cash return.
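A quick back-of-envelope check of the numbers in that story (purely illustrative, using only the figures given above):

investment = 50_000          # what the strangers put in
share = 0.05                 # 5% of after-tax income
years = 10                   # payback period, age 23 to 33
target_multiple = 3          # the 3x cash-on-cash return in the story

total_paid = target_multiple * investment            # $150,000 over the decade
required_avg_income = total_paid / (share * years)   # after-tax income needed per year
print(required_avg_income)   # 300000.0

In other words, the 3x outcome quietly assumes Janet averages roughly $300,000 a year in after-tax income over the decade, which is why the next line matters.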
The above is purely hypothetical.
ISAs for traditional higher education are much more complicated than, say, vocational training, where there is a more direct skills-to-job pathway for students. But the beauty of ISAs is in their flexibility, so there is lots of room for innovation.
ISAs and other related instances of securitizing human capital have been tried. Here’s a brief history:
In modern times, the first notable mention of the concept of ISAs was by Nobel-prize winning economist Milton Friedman in his 1955 essay The Role of Government in Education.
In a section devoted specifically to vocational and professional education, Friedman proposed that an investor could buy a share in a student’s future earning prospects.
It’s worth noting that the barriers to adoption that Friedman identified back in the 1950s still hold true today:
Society might not have been ready for ISAs in the 1950s, but 16 years later, another Nobel Prize-winning economist, James Tobin, would help launch the first ISA option for college students at Yale University.
In the 1970s, Yale University ran an experiment called the Tuition Postponement Option (“TPO”). The TPO was a student loan program that enabled groups of undergraduates to pay off loans as a “cohort” by committing a portion of their future annual income.
Students who signed up for the program (3,300 in total) were to pay 0.4 percent of their annual income for every $1,000 borrowed until the entire group's debt had been paid off. High earners could buy out early, paying 150% of what was borrowed plus interest.
Within each cohort, many low earners defaulted, while the highest earners bought out early, leaving a disproportionate debt burden for the remaining graduates.
Administrators also did not account for the changes to the tax code and skyrocketing inflation in the 1980s, which only exacerbated the inequitable arrangement.
"We're all glad it's come to an end. It was an experiment that had good intentions but several design flaws." — Yale President Richard Levin.
While the TPO is generally considered a failure, it was the first instance of a major university offering ISAs and a useful example for how not to structure ISAs — specifically, pooling students by cohort and allowing the highest earning students to buy out early.
It would be decades after Yale’s failed experiment before universities started experimenting again with ISAs, but today a company called Vemo Education is leading the way.
This is a crucial point: Vemo isn't competing directly with loans, but instead is unlocking other sorts of value (i.e., helping students better choose their college). The key here is that Vemo links an individual's fortunes to the institution's fortunes. It helps universities offer ISAs that signal a willingness to better align the cost of a degree with the value it delivers.
The first institution that Vemo partnered with to offer ISAs was Purdue University.
In 2016, Purdue University began partnering with Vemo Education to offer students an ISA tuition option through its “Back a Boiler” ISA Fund. They started with a $2 million fund, and since then have raised another $10.2 million and have issued 759 contracts totaling $9.5 million to students.
Purdue markets its ISA offering as an alternative to private student loans and Parent PLUS Loans. Students of any major can get $10,000 per year in ISA funding at rates that vary between 1.73% and 5.00% of their monthly income. Purdue caps payments at 2.5x the ISA amount that students take out and payment is waived for students making less than $20,000 in annual income.
In the last few years, Vemo has emerged as the leading partner for higher education institutions looking to develop, launch and implement ISAs. In 2017, Vemo powered $23M of ISAs for college students across the US.
Fintech company Upstart initially launched with a model of “crowdfunding for education”. However, they eventually pivoted to offering traditional loans when they realized that their initial model was simply not viable.
Why? Not enough supply.
The fact that only accredited investors (over $1m in net worth) could invest severely limited the total potential funders on the site. And yet, while Upstart never got enough traction (they pivoted successfully), they paved the way for a platform like it to eventually be built.
While Upstart failed to gain traction with its original model, technical education bootcamps have seen tremendous growth offering their students ISAs to finance their education.
And Lambda School is leading the way.
Lambda School is an online bootcamp that trains students to become software engineers at no upfront cost. Instead of paying tuition, students agree to pay 17% of their income for the first two years that they’re employed. Lambda School includes a $50,000 minimum income threshold and caps total payments at an aggregate $30,000. They also give students the option to pay $20,000 upfront if they’d rather not receive an ISA.
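Using the terms just described (17% of income for the first two years of employment, a $50,000 income threshold and a $30,000 aggregate cap), a simplified sketch of the payment logic might look like the following; it deliberately ignores details such as monthly proration and deferment rules, which vary by agreement.

def isa_payment(annual_income: float, already_paid: float,
                share: float = 0.17, threshold: float = 50_000,
                cap: float = 30_000) -> float:
    """Yearly payment under simplified Lambda-style ISA terms.

    Nothing is owed below the income threshold, and total payments
    never exceed the aggregate cap.
    """
    if annual_income < threshold:
        return 0.0
    owed = share * annual_income
    return min(owed, max(cap - already_paid, 0.0))

# Example: a graduate earning $100k owes $17,000 in year one,
# then only $13,000 in year two because the $30k aggregate cap kicks in.
year_one = isa_payment(100_000, already_paid=0)
year_two = isa_payment(100_000, already_paid=year_one)
print(year_one, year_two)   # 17000.0 13000.0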
Lambda School students enroll for nine months and end up with 1,500–2,000 hours of training, comparable to the level of training they'd receive during the CS-focused portion of a four-year degree.
“Lambda School looks like a charity from the outside, but we’re really more like a hedge fund.
We bet that smart, hardworking people are fundamentally undervalued, and we can apply some cash and leverage to fix that, taking a cut.” — Austin Allred (Lambda School CEO)
In our opinion, Lambda is legitimizing ISAs and may just be the wedge that makes ISAs mainstream.
Given where we are today, and with the potential for this type of financial innovation, what might the future look like?
There are three major themes in particular that get us excited for the future of ISAs: aggregation, novel incentive structures, and crypto.
We believe that it's possible to pool together various segments of people to decrease the overall risk of the pool and, in turn, offer more to each individual person.
If we assume that each individual's outcome is fairly independent of the others', pooling should work: the variance of the pooled return shrinks as the pool grows. As that risk declines, investors need less of a risk premium to participate, so more investors and ISA providers will likely jump in to provide even more capital for more people.
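A toy Monte Carlo simulation makes the diversification effect visible; the income distribution, income share and pool sizes below are invented purely for illustration.

import numpy as np

rng = np.random.default_rng(0)

def pooled_return_std(pool_size: int, n_trials: int = 10_000) -> float:
    """Std dev of the average payout across a pool of independent ISA recipients."""
    # Made-up lognormal income distribution, median around $60k.
    incomes = rng.lognormal(mean=11.0, sigma=0.6, size=(n_trials, pool_size))
    payouts = 0.05 * incomes                 # 5% income share
    return payouts.mean(axis=1).std()        # variability of the pooled outcome

for n in (1, 10, 100):
    print(n, round(pooled_return_std(n), 2))
# The spread of outcomes shrinks roughly with the square root of the pool size.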
“There is no reason you have to do this at the individual level. Most likely, it will first occur in larger aggregated groups — based on either geography, education, or other group characteristics. As with the housing market, it is important to aggregate enough individual sample points to reduce risk.” — Dave McClure
Another take on aggregation could be an individual electing to group together with their close friends or peers.
This can have the magical benefit of further aligning incentives with those around you, increasing the value of cooperation, lowering downside risk, and promoting more potential risk taking or thinking outside the box, all of which should have the benefit of increasing economic growth.
In addition to that, being able to take a more active role in a friend’s life (helping when need be, sharing in their wins, supporting in their losses, etc.) can be an extremely rewarding experience. That said, there are some definite downsides and risks to be aware of with these types of arrangements.
How can we create financial products to incentivize service providers (e.g. teachers, doctors, etc.) who indirectly have massive impacts on the future income of the people they serve?
Just imagine the difference it could make if every teacher were able to take even a tiny percentage of every one of their students' future earnings. Teachers today unfortunately don't make nearly as much money as they should, given the significant consequences they have on future generations. A great teacher can create the spark for the next Einstein or Elon Musk. A terrible teacher could damage a potential Einstein or Elon Musk enough that they never realize their potential. Imagine how many more incredible people we could have.
There will always be incredible teachers regardless of monetary return, but we bet there could be more. It all comes down to aligning incentives.
This same thinking can be applied to other service providers like doctors. Currently, doctors are paid the same amount (all else equal) whether or not they succeed in a life-saving surgery. But what if the service provider also took a tiny fraction of future earnings from their patient? Incentives would be more aligned. That doctor may not even realize it, but they likely would work a bit harder knowing what's at stake.
Crypto can securitize so much more than we currently do; in essence, we could tokenize ourselves and all future income. Once those personal tokens exist, they can be traded instantly anywhere in the world with infinite divisibility. Arbitrageurs and professional traders could create new financial products (i.e. ISA aggregations) and buy / sell with each other to price things to near perfection.
We’d love to continue the conversation! This is a fascinating space with a ton of opportunity. If you’re thinking about or building anything here, feel free to leave your comments or reach out to talk more.
Special shoutout to David Weinstein & Jake Hallac for their help writing as well as Ray Batra, Dani Grant, Zander Adell, Dave McClure, Sam Lessin and Alex Marcus for their help reviewing / editing!
***
Quick refresher: Indentured servants were immigrants who bargained away their labor (and freedom) for four-to-seven years in exchange for passage to the British colonies, room, board and freedom dues (a prearranged severance). Most of these immigrants were English men who came to British colonies in the 17th century.
On the surface this seems like a decent deal, but not so fast. They could be sold, lent out or inherited. Only 40% of indentured servants lived to complete the terms of their contracts. Masters traded laborers as property and disciplined them with impunity, all lawful at the time.
Rebuttal: We are in no way advocating a return to indentured servitude (voluntary or otherwise). Modern-day ISAs must be structured to have proper governance, ensure alignment of interests and contain legal covenants that protect both parties.
We are advocating for ISAs that (i) are voluntary, (ii) do not force the recipient to work for the investor, and (iii) are a promise to share future income, not an obligation to repay a debt.
Our Response: ISAs offered by Lambda School, Holberton School and other companies are legal under current US law. To the best of our knowledge, all companies offering ISAs operate according to best practices (i.e., consumer disclosure and borrower protections) as set forth in proposed federal legislation.
The Investing in Student Success Act (H.R.3432, S.268) has been proposed in both the US House of Representatives and the US Senate. Under this legislation, ISAs would be classified as qualified education loans (rather than equity or debt securities), making them dischargeable in bankruptcy. Furthermore, the bill would exempt ISAs from being considered an investment company under the Investment Company Act of 1940.
Importantly, the bill includes consumer protections (i.e., required disclosures, payback periods, payback caps, and limits on income share amounts). The bill also includes tax stipulations that preclude ISA recipients from owing any taxes and limit taxes for investors to profits earned from ISAs.
Quick refresher: Adverse selection describes a situation in which one party has information that the other does not have. To fight adverse selection, insurance companies reduce exposure to large claims by limiting coverage or raising premiums.
Our Response: In September 2018, Purdue University published a research study that looked into adverse selection in ISAs. The study concluded that there was no adverse selection by student ability among borrowers. However, ISA providers need to properly structure the ISA so as not to cap a recipient’s upside by too much. In addition, this risk can be mitigated by (i) offering a structured educational curriculum for high-income jobs and (ii) an application process that ensures that students have the ability and motivation to complete a given vocational program.
Our Response: Properly structured ISAs paired with effective offerings (i.e., skills-based training, career development assistance) have the potential to mitigate inequality and discriminatory practices. ISA programs like Lambda School require students to be motivated to succeed and have enough income to complete the program, but in no way discriminate based on age, gender or ethnicity.
However, as ISAs become more common, new legislation must include explicit protections to guard against discrimination in administration of ISAs (especially given that it’s unclear whether the Equal Credit Opportunity Act would apply to ISAs since they aren’t technically loans).
Our Response: ISA providers like Lambda School are already starting to negotiate directly with employers to ensure that students have a job after completing the curriculum. These relationships mitigate the risk of a student refusing to pay. Lambda School is able to do this because it’s developed such a strong curriculum. Furthermore, students face reputation risk should they try to avoid meeting their obligations to the ISA provider.
Future legislation should address instances where a student avoids payment or chooses to take a job with no salary (e.g., a student completes a coding bootcamp, but has a change of heart and goes to work at a non-profit that pays below the minimum income threshold).
Our Response: ISAs are not for everyone. ISAs are best suited for people with greater expected volatility in their future earnings (instead of people with a strong likelihood of a certain salary). This is similar to new businesses choosing between equity investment and debt to finance their operations. Businesses with clear expectations of future cash flows generally benefit more from debt than equity. Individuals looking to finance their education are no different. Similarly, ISAs don't need to be all or nothing. Individuals can choose to capitalize their education with a mix of student loans and ISAs to get a more optimal mix.
Source: https://medium.com/@eriktorenberg_/life-capital-9e5028c0ea12
Perhaps the biggest buzzword in customer relationship management is “engagement”. Engagement is a funny thing, in that it is not measured in likes, clicks, or even purchases. It’s a measure of how much customers feel they are in a relationship with a product, business or brand. It focuses on harmony and how your business, product or brand becomes part of a customer’s life. As such, it is pivotal in UX design. One of the best tools for examining engagement is the customer journey map.
As the old saying in the Cherokee tribe goes, “Don’t judge a man until you have walked a mile in his shoes” (although the saying was actually promoted by Harper Lee of To Kill a Mockingbird fame). The customer journey map lets you walk that mile.
“Your customer doesn’t care how much you know until they know how much you care.”
– Damon Richards, Marketing & Strategy expert
Customer journey maps don't need to be literal journeys, but they can be. Creativity in determining how you represent a journey is fine. (Image: Alain Thys, Flickr; CC BY-ND 2.0)
A customer journey map is a research-based tool. It examines the story of how a customer relates to the business, brand or product over time. As you might expect – no two customer journeys are identical. However, they can be generalized to give an insight into the “typical journey” for a customer as well as providing insight into current interactions and the potential for future interactions with customers.
Customer journey maps can be useful beyond the UX design and marketing teams. They can help facilitate a common business understanding of how every customer should be treated across all sales, logistics, distribution, care, etc. channels. This in turn can help break down “organizational silos” and start a process of wider customer-focused communication in a business.
They may also be employed to educate stakeholders as to what customers perceive when they interact with the business. They help them explore what customers think, feel, see, hear and do and also raise some interesting “what ifs” and the possible answers to them.
Adam Richardson of Frog Design, writing in Harvard Business Review says: “A customer journey map is a very simple idea: a diagram that illustrates the steps your customer(s) go through in engaging with your company, whether it be a product, an online experience, retail experience, or a service, or any combination. The more touchpoints you have, the more complicated — but necessary — such a map becomes. Sometimes customer journey maps are “cradle to grave,” looking at the entire arc of engagement.”
Here, we see a customer journey laid out based on social impact and brand interaction with that impact. (Image: Stefano Maggi, Flickr; CC BY-ND 2.0)
What Do You Need to Do to Create a Customer Journey Map?
Firstly, you will need to do some preparation prior to beginning your journey maps; ideally you should have:
(Image: Hans Põldoja, slideshare.net; CC BY-SA 4.0)
User personas are incredibly useful tools when it comes to putting together any kind of user research. If you haven’t developed them already, they should be a priority for you, given that they will play such a pivotal role in the work that you, and any UX teams you join in the future, will produce.
Once you’ve done your preparation, you can follow a simple 8-point process to develop your customer journey maps:
A complete customer journey map by Adaptive Path for the experience of interacting with railway networks. (Image: Rosenfeld Media; CC BY 2.0)
A customer journey map can take any form or shape you like, but let’s take a look at how you can use the Interaction Design Foundation’s template (link below).
A basic customer journey map template. (Image: The Interaction Design Foundation; CC BY-SA)
The map here is split into several sections: In the top zone, we show which persona this journey refers to and the scenario which is described by the map.
The middle zone has to capture the thoughts, actions and emotional experiences for the user, at each step during the journey. These are based on our qualitative user research data and can include quotes, images or videos of our users during that step. Some of these steps are “touchpoints” – i.e., situations where the customer interacts with our company or product. It’s important to describe the “channels” in each touchpoint – i.e., how that interaction takes place (e.g., in person, via email, by using our website, etc.).
In the bottom zone, we can identify the insights and barriers to progressing to the next step, the opportunities which arise from these, and possibly an assignment for internal team members to handle.
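If your team keeps journey maps in a tool or shared repository rather than on a whiteboard, the zones described above translate naturally into a small data structure. The sketch below is just one possible arrangement under those assumptions; the field names are not a standard.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Step:
    name: str
    thoughts: List[str] = field(default_factory=list)     # middle zone: what the user thinks
    actions: List[str] = field(default_factory=list)      # what the user does
    emotion: str = ""                                      # e.g. "frustrated", "relieved"
    is_touchpoint: bool = False                            # does the user interact with us here?
    channels: List[str] = field(default_factory=list)      # e.g. "email", "website", "in person"
    insights: List[str] = field(default_factory=list)      # bottom zone: barriers and learnings
    opportunities: List[str] = field(default_factory=list)
    owner: Optional[str] = None                            # internal team member assigned to act

@dataclass
class JourneyMap:
    persona: str                                           # top zone
    scenario: str
    steps: List[Step] = field(default_factory=list)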
Creating customer journeys (including those exploring current and future states) doesn’t have to be a massively time-consuming process – most journeys can be mapped in less than a day. The effort put in is worthwhile because it enables a shared understanding of the customer experience and offers each stakeholder and team member the chance to contribute to improving that experience. Taking this “day in the life of a customer” approach will yield powerful insights into and intimate knowledge of what “it’s like” from the user’s angle. Seeing the details in sharp relief will give you the chance to translate your empathy into a design that better accommodates your users’ needs and removes (or alleviates) as many pain points as possible.
Boag, P. (2015). Customer Journey Mapping: Everything You Need to Know. https://www.sailthru.com/marketing-blog/written-customer-journey-mapping-need-to-know/
Designing CX. The Customer Experience Journey Mapping Toolkit. http://designingcx.com/cx-journey-mapping-toolkit/
Kaplan, K. (2016). When and How to Create Customer Journey Maps. https://www.nngroup.com/articles/customer-journey-mapping/
Richardson, A. (2010). Using Customer Journey Maps to Improve Customer Experience. Harvard Business Review. https://hbr.org/2010/11/using-customer-journey-maps-to/
You can see Nielsen Norman Group’s guidelines for designing customer journey maps here:
https://www.nngroup.com/articles/customer-journey-mapping/
Source: https://www.interaction-design.org/literature/article/customer-journey-maps-walking-a-mile-in-your-customer-s-shoes?r=dianne_rees
The article by The Register about Hertz suing Accenture over their failed website revamp deal has gained a lot of attention on social media, creating a lot of discussion around failed software projects and IT consulting giants such as Accenture.
What I found saddest in the article is that the part about Accenture completely fumbling a huge website project doesn’t surprise me one bit: I stumble upon articles about large enterprise IT projects failing and going well over their budgets on a weekly basis. What was more striking about the article is that Hertz is suing Accenture, and going public with it. This tells us something about the state of the IT consulting business, and you don’t have to be an expert to tell that there is a huge flaw somewhere in the process of how large software projects are sold by consultancies, and especially how they are purchased and handled by their clients.
Just by reading through the article, one might think that the faults were made completely on Accenture's side, but there is definitely more to it. Hertz too has clearly made a lot of mistakes during crucial phases of the project: in purchasing, service design and development. I'll try to bite into the most critical and prominent flaws.
If we dig into the actual lawsuit document we start getting a better picture of what actually went down, and what led to tens of millions of dollars going down the drain on a service that is unusable.
Reading through points 2. and 3. of the legal complaint we get a small glimpse into the initial service design process:
2. Hertz spent months planning the project. It assessed the current state of its ecommerce activities, defined the goals and strategy for its digital business, and developed a roadmap that would allow Hertz to realize its vision.
3. Hertz did not have the internal expertise or resources to execute such a massive undertaking; it needed to partner with a world-class technology services firm. After considering proposals from several top-tier candidates, Hertz narrowed the field of vendors to Accenture and one other.
Hertz first “planned the project, defined the goals and strategy and developed the roadmap”. Then after realising they “don’t have the internal expertise or resources”, they started looking for a vendor who could be able to carry out their vision.
This was the first large mistake. If the initial plan, goals and vision are set before the vendor (the party responsible for realising that vision) is involved, you will most likely end up in a 'broken telephone' situation where the vision and goals are not properly transferred from the initial planners and designers to the implementers.
This is a very dangerous starting situation. What makes it even worse is this:
6. Hertz relied on Accenture’s claimed expertise in implementing such a digital transformation. Accenture served as the overall project manager. Accenture gathered Hertz’s requirements and then developed a design to implement those requirements. Accenture served as the product owner, and Accenture, not Hertz, decided whether the design met Hertz’s requirements.
Hertz made Accenture the product owner, thus ceding ownership of the service to Accenture. This, if anything, tells us that Hertz did not have the required expertise and maturity to undertake this project in the first place. Making a consulting company, one with no deep insight into your specific domain, business & needs, the owner & main visionary of your service is usually not a good idea. Especially when you consider that it might not be in the interest of the consulting company to finish the project within the initial budget, but rather to extend the project to generate more sales and revenue.
Having the vendor as a product owner is not a rare occurrence, and it can sometimes work if the vendor has deep enough knowledge of the client's organisation, business & domain. However, when working on such a large project for a huge organisation like Hertz, it's impossible for the consulting company to have the necessary insight into and experience of Hertz's business.
Moving on to the development phase of the project:
7. Accenture committed to delivering an updated, redesigned, and re-engineered website and mobile apps that were ready to “go-live” by December 2017.
8. Accenture began working on the execution phase of the project in August 2016 and it continued to work until its services were terminated in May 2018. During that time, Hertz paid Accenture more than $32 million in fees and expenses. Accenture never delivered a functional website or mobile app. Because Accenture failed to properly manage and perform the services, the go-live date was postponed twice, first until January 2018, and then until April 2018. By that point, Hertz no longer had any confidence that Accenture was capable of completing the project, and Hertz terminated Accenture.
Hertz finally lost its confidence in Accenture roughly 5 months after the initially planned go-live date, seemingly at least a full year after kicking off the project partnership with them.
If it took Hertz around 1½ years to realise that Accenture couldn't deliver, it's safe to say that Hertz & Accenture had both been working in their own silos with minimal transparency into each other's work, and critical information was not moving between the organisations. My best guess is that Hertz & Accenture met only once in a while to assess the status of the project and share updates. But a software project like this should be an ongoing collaborative process, with constant daily discussion between the parties. In a well-functioning setup, the client and vendor are one united team pushing the product out together.
The lack of communication infrastructure is a common problem in large-scale software projects between a company and its vendor. It's hard to say whose responsibility it should be to organise the required tools, processes, meetings and environments to make sure that the necessary discussions are being had and that knowledge is shared. But often the consulting company is the one with a more modern take on communication, and they can provide the framework and tools for it much more easily.
We get a deeper glimpse into the lack of transparency, especially regarding the technical side, when we go through points 36–42 of the legal complaint, e.g. number 40:
40. Accenture’s Java code did not follow the Java standard, displayed poor logic, and was poorly written and difficult to maintain.
Right. Accenture's code quality and technical competence were not at a satisfactory level, and that is on Accenture, as they were hired to be the technical experts on the project. But if Hertz had had even one technical person working on the project with visibility into the codebase, they could've caught this problem right from the first commit, instead of noticing it after over a year of Accenture delivering bad-quality code. If you are buying software for tens of millions, you must have an in-house technical expert as part of the software development process, even if only as a spectator.
The lack of transparency and technical expertise, combined with the lack of ownership and responsibility, was ultimately the reason why Hertz managed to blow tens of millions of dollars instead of just a couple. If Hertz had had the technical know-how and had been more deeply involved in the work, they could've assessed early on that the way Accenture was doing things was flawed. Perhaps some people at Hertz saw that the situation was bad early on, but since the ownership of the product was on Accenture's side, it must have been hard for those people to speak up as they saw the issues. This resulted in Accenture being allowed to do unsuitable work for over a year, until the initial 'go-live' date was long past and it was already too late.
There have been rumours of Hertz leadership firing its entire well-performing in-house software development team, replacing it with an off-shore workforce from IBM and making crony 'golf course' deals with Accenture in 2016, and of the Hertz CIO securing a $7 million bonus for the short-term 'savings' made by those changes. I'd recommend taking these Hacker News comments with a grain of salt, but I wouldn't be at all surprised if the allegations were more or less true.
These kinds of crony contracts are a huge problem in the enterprise software industry in general, and the news we see about them is only the tip of the iceberg. But that is a subject for a whole other blog post.
It's important to keep in mind that the lawsuit text doesn't really tell us the whole truth: a lot of things must have happened during those years that we will never know of. However, it's quite clear that some common mistakes that constantly happen in consulting projects happened here too, and that the ball was dropped by both parties involved.
It’s going to be interesting to see how the lawsuit plays out, as it will work as a real-life example to both consulting companies and their clients on what could happen when their expensive software projects go south.
For a company which is considering buying software, the most important learnings to take out of this mess are:
Also, one thing to note is that many companies that have had bad experiences with large enterprise consultancies have turned to smaller, truly agile software consultancies instead of giants like Accenture. Smaller companies are better at taking responsibility for their work, and they have the required motivation to actually deliver quality, as they appreciate the chance to tackle a large project. For a small company, the impact of delivering a project well and keeping the client happy is much more important than it is for an already well-established giant.
Hopefully by learning from history and the mistakes of others, we can avoid going through the hell that the people at Hertz had to!
Source :
The forthcoming wave of Web 3.0 goes far beyond the initial use case of cryptocurrencies. Through the richness of interactions now possible and the global scope of counter-parties available, Web 3.0 will cryptographically connect data from individuals, corporations and machines, with efficient machine learning algorithms, leading to the rise of fundamentally new markets and associated business models.
The future impact of Web 3.0 makes undeniable sense, but the question remains, which business models will crack the code to provide lasting and sustainable value in today’s economy?
We will dive into native business models that have been and will be enabled by Web 3.0, while first briefly touching upon the quickly forgotten but often arduous journeys that led to the unexpected & unpredictable successful business models that emerged in Web 2.0.
To set the scene anecdotally for Web 2.0’s business model discovery process, let us not forget the journey that Google went through from their launch in 1998 to 2002 before going public in 2004:
After struggling for 4 years, Google made a single small modification to its business model that launched the company into orbit, on the way to becoming one of the world's most valuable companies.
The earliest iterations of online content merely involved the digitisation of existing newspapers and phone books … and yet, we've now seen Roma (Alfonso Cuarón) receive 10 Academy Award nominations for a movie distributed via the subscription streaming giant Netflix.
Amazon started as an online bookstore that nobody believed could become profitable … and yet, it is now the behemoth of marketplaces covering anything from gardening equipment to healthy food to cloud infrastructure.
Open source software development started off with hobbyists and an idealist view that software should be a freely-accessible common good … and yet, the entire internet runs on open source software today, creating $400b of economic value a year; GitHub was acquired by Microsoft for $7.5b, while Red Hat makes $3.4b in yearly revenues providing services for Linux.
In the early days of Web 2.0, it might have been inconceivable that after massively spending on proprietary infrastructure one could deliver business software via a browser and become economically viable … and yet, today the large majority of B2B businesses run on SaaS models.
It was hard to believe that anyone would be willing to climb into a stranger's car or rent out their couch to travellers … and yet, Uber and AirBnB have become the largest taxi operator and accommodation provider in the world, without owning any cars or properties.
While Google and Facebook might have gone into hyper-growth early on, they didn’t have a clear plan for revenue generation for the first half of their existence … and yet, the advertising model turned out to fit them almost too well, and they now generate 58% of the global digital advertising revenues ($111B in 2018) which has become the dominant business model of Web 2.0.
Taking a look at Web 3.0 over the past 10 years, initial business models tend not to be repeatable or scalable, or simply try to replicate Web 2.0 models. We are convinced that while there is some scepticism about their viability, the continuous experimentation by some of the smartest builders will lead to incredibly valuable models being built over the coming years.
By exploring both the more established and the more experimental Web 3.0 business models, we aim to understand how some of them will accrue value over the coming years.
Bitcoin came first. Proof of Work coupled with Nakamoto Consensus created the first Byzantine Fault Tolerant & fully open peer to peer network. Its intrinsic business model relies on its native asset: BTC — a provable scarce digital token paid out to miners as block rewards. Others, including Ethereum, Monero and ZCash, have followed down this path, issuing ETH, XMR and ZEC.
These native assets are necessary for the functioning of the network and derive their value from the security they provide: by providing a high enough incentive for honest miners to provide hashing power, the cost for malicious actors to perform an attack grows alongside the price of the native asset, and in turn, the added security drives further demand for the currency, further increasing its price and value. The value accrued in these native assets has been analysed & quantified at length.
Some of the earliest companies that formed around crypto networks had a single mission: make their respective networks more successful & valuable. Their resultant business model can be condensed to “increase their native asset treasury; build the ecosystem”. Blockstream, acting as one of the largest maintainers of Bitcoin Core, relies on creating value from its balance sheet of BTC. Equally, ConsenSys has grown to a thousand employees building critical infrastructure for the Ethereum ecosystem, with the purpose of increasing the value of the ETH it holds.
While this perfectly aligns the companies with the networks, the model is hard to replicate beyond the first handful of companies: amassing a meaningful enough balance of native assets becomes impossible after a while … and the blood, toil, tears and sweat of launching & sustaining a company cannot be justified without a large enough stake for exponential returns. As an illustration, it wouldn't be rational for any business other than a central bank — e.g. a US remittance provider — to base their business purely on holding large sums of USD while working on making the US economy more successful.
The subsequent generation of business models focused on building the financial infrastructure for these native assets: exchanges, custodians & derivatives providers. They were all built with a simple business objective: providing services for users interested in speculating on these volatile assets. While the likes of Coinbase, Bitstamp & BitMEX have grown into billion-dollar companies, they are not natural monopolies: they provide convenience & enhance the value of the networks they serve. The open & permissionless nature of those networks makes it impossible for any company to lock in a monopolistic position by providing “exclusive access”, but their liquidity and brands do provide defensible moats over time.
With The Rise of the Token Sale, a new wave of projects in the blockchain space based their business models on payment tokens within networks: often creating two-sided marketplaces and enforcing the use of a native token for any payments made. The assumption is that as the network's economy grows, demand for the limited supply of native payment tokens increases, which should lead to an increase in the token's value. While the value accrual of such a token model is debated, the increased friction for the user is clear: what could have been paid in ETH or DAI now requires additional exchanges on both sides of a transaction. While this model was widely used during the 2017 token mania, its friction-inducing characteristics have rapidly removed it from the forefront of development over the past 9 months.
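One common framing of why the value accrual is debated, not named in the original but widely used, is the equation of exchange (network value M = PQ / V). The sketch below uses hypothetical transaction volume and velocity figures purely to illustrate how higher velocity shrinks the value the payment token needs to capture.

```python
# Minimal sketch of the (debated) equation-of-exchange argument behind payment
# tokens: required network value M = PQ / V, where PQ is the annual USD volume
# transacted through the network and V is the token's velocity.

def required_network_value(annual_transaction_volume_usd, velocity):
    """Aggregate value the payment token must hold to support the volume."""
    return annual_transaction_volume_usd / velocity

pq = 1_000_000_000  # $1B of payments routed through the network per year (assumption)
for velocity in (5, 20, 100):
    m = required_network_value(pq, velocity)
    print(f"velocity {velocity:>3} -> token network value of ~${m:,.0f}")

# The friction critique: because users can swap in and out of the payment token
# instantly, velocity tends to be high, shrinking the value the token captures.
```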
Revenue-generating communities, companies and projects with a token might not always be able to pass profits on to token holders in a direct manner. A model that garnered a lot of interest as one of the characteristics of the Binance (BNB) and MakerDAO (MKR) tokens is the buyback / token burn. As revenues flow into the project (from trading fees for Binance and from stability fees for MakerDAO), native tokens are bought back from the public market and burned, decreasing the supply of tokens, which should lead to an increase in price. It's worth exploring Arjun Balaji's evaluation (The Block), in which he argues the Binance token burning mechanism doesn't actually result in the equivalent of an equity buyback: as no dividends are paid out at all, the “earnings per token” remains at $0.
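The mechanics are simple enough to sketch. The supply, price and revenue figures below are hypothetical and not taken from Binance or MakerDAO; the sketch only shows how a revenue-funded burn shrinks supply without ever paying cash to holders.

```python
# Minimal sketch (hypothetical figures) of a revenue-funded token burn:
# protocol fees are used to buy tokens on the open market and destroy them.

circulating_supply = 200_000_000   # tokens (assumption)
token_price = 2.00                 # USD per token (assumption)
quarterly_revenue = 10_000_000     # USD of fees earmarked for the burn (assumption)

tokens_burned = quarterly_revenue / token_price
supply_after_burn = circulating_supply - tokens_burned

print(f"tokens burned this quarter: {tokens_burned:,.0f}")
print(f"supply shrinks by {tokens_burned / circulating_supply:.2%}")

# Balaji's critique in a nutshell: no cash ever reaches holders, so "earnings
# per token" stays at $0; any benefit comes purely from the supply reduction.
```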
One of the business models for crypto networks that we are seeing ‘hold water’ is the work token: a model that focuses exclusively on the revenue-generating supply side of a network in order to reduce friction for users. Good examples include Augur’s REP and Keep Network’s KEEP tokens. A work token model operates similarly to classic taxi medallions: it requires service providers to stake / bond a certain amount of native tokens in exchange for the right to perform profitable work for the network. One of the most powerful aspects of the work token model is the ability to incentivise actors with both carrot (rewards for the work) and stick (stake that can be slashed). Beyond providing security to the network by incentivising service providers to execute honest work (as they have locked skin in the game denominated in the work token), these tokens can also be evaluated by the predictable future cash flows to the collective of service providers (we have previously explored the benefits and valuation methods for such tokens in this blog). In brief, such tokens should be valued based on the future expected cash flows attributable to all the service providers in the network, which can be modelled out based on assumptions about pricing and usage of the network.
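As a rough illustration of that valuation approach, here is a minimal discounted cash flow sketch. The fee level, growth rate, discount rate and horizon are all illustrative assumptions rather than figures for any particular network.

```python
# Minimal DCF sketch for a work token network: value the network as the
# discounted sum of fees expected to flow to staked service providers.
# All fee, growth and discount assumptions are illustrative.

def discounted_provider_cash_flows(year_one_fees, growth_rate, discount_rate, years):
    """Present value of the fees earned by all service providers."""
    total = 0.0
    fees = year_one_fees
    for t in range(1, years + 1):
        total += fees / (1 + discount_rate) ** t
        fees *= 1 + growth_rate
    return total

network_value = discounted_provider_cash_flows(
    year_one_fees=5_000_000,   # USD paid to service providers in year 1 (assumption)
    growth_rate=0.40,          # annual fee growth (assumption)
    discount_rate=0.30,        # high discount rate reflecting execution risk (assumption)
    years=10,
)
print(f"implied value of the staked-work network: ~${network_value:,.0f}")
```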
A wide array of other models is being explored and worth touching upon:
With this wealth of new business models arising and being explored, it becomes clear that while there is still room for traditional venture capital, the role of the investor, and of capital itself, is evolving. Capital morphs into a native asset within the network, with a specific role to fulfil. Whether through passive network participation, bootstrapping networks after a financial investment (e.g. computational work or liquidity provision), or direct injections of subjective work into the networks (e.g. governance or CDP risk evaluation), investors will have to reposition themselves for this new organisational mode driven by trust-minimised decentralised networks.
Looking back, we realise that Web 1.0 & Web 2.0 took exhaustive experimentation to find the business models that created the tech titans of today. We are not ignoring the fact that Web 3.0 will have to go through an equally arduous journey of iteration, but once adequate business models are found, they will be incredibly powerful: in trust-minimised settings, both individuals and enterprises will be able to interact on a whole new scale without relying on rent-seeking intermediaries.
Today we see thousands of incredibly talented teams pushing forward implementations of some of these models or discovering completely new viable business models. As these models might not fit the traditional frameworks, investors might have to adapt by taking on new roles and providing work as well as capital (a journey we have already started at Fabric Ventures), but as long as we can see predictable and rational value accrual, it makes sense to double down, as every day the execution risk gets smaller and smaller.
Source: https://medium.com/fabric-ventures/which-new-business-models-will-be-unleashed-by-web-3-0-4e67c17dbd10