In today’s fast-paced market — where major funding or exit announcements seem to roll in daily — we at Sapphire Partners like to take a step back, ask big picture questions, and then find concrete data to answer them.
One of our favorite areas to explore is: as a venture investor, do your odds of making better returns improve if you only invest in either enterprise or consumer companies? Or do you need a mix of both to maximize your returns? And how should recent investment and exit trends influence your investing strategy, if at all?
To help answer questions like these, we’ve collected and analyzed exit data for years. What we’ve found is pretty intriguing: portfolio value creation in enterprise tech is often driven by a cohort of exits, while value creation in consumer tech is generally driven by large, individual exits.
In general, this trend has held for several years and has led to the general belief that if you are a consumer investor, the clear goal is not to miss that “one deal” with a huge spike in exit value (easier said than done, of course). And if you’re an enterprise investor, you want to create a “basket of exits” in your portfolio.
2019 has been a powerhouse year for consumer exit value, buoyed by Uber and Lyft’s IPOs (their recent declines in stock price notwithstanding). The first three quarters of 2019 alone surpassed every year since 1995 for consumer exit value – and we’re not done yet. If the consumer exit pace continues at this scale, we will be on track for the most value created at exit in 25 years, according to our analysis.
Source: S&P Capital IQ, Pitchbook
Since 1995, the number of enterprise exits has consistently outpaced consumer exits (blue line versus green line above), but 2019 is the closest to seeing those lines converge in over two decades (223 enterprise vs 208 consumer exits in the first three quarters of 2019). Notably, in five of the past nine years, the value generated by consumer exits has exceeded enterprise exits.
At Sapphire, we observe the following:
While the valuation at IPO serves as a proxy for an exit for venture investors, most investors face a lockup period. 2019 has generated a tremendous amount of value through IPOs, roughly $223 billion. However, after trading in the public markets, the aggregate value of those IPOs had decreased by $81 billion as of November 1, 2019. On an absolute-value basis, this decrease is driven by Uber and Lyft, which account for roughly 66% of the markdown over the same period, according to our figures. Over half of the IPO exits in 2019 have been consumer, and despite these stock price changes, consumer exits are still outperforming enterprise exits YTD given the enormous alpha they generated initially.
As we noted in the introduction, historical data since 1995 shows that years of high value creation from enterprise technology are often driven by a cohort of exits, whereas consumer value creation is often driven by large, individual exits. The chart below illustrates this, showing a side-by-side comparison of exits and value creation.
At Sapphire, we observe the following:
The value generated by the top five consumer companies is 3.5x greater than that of enterprise companies.
While total value of enterprise companies exited since 1995 ($884B) exceeds that of consumer exits ($773B), in the last 15 years, consumer returns have been making a comeback. Specifically, total consumer value exited ($538B) since 2004 exceeds that of enterprise exits ($536B). This difference has become more stark in the past 10 years with total consumer value exited ($512B) surpassing that of enterprise ($440B). As seen in the chart below, the rolling 10-year total enterprise exit value exceeded that of consumer, until the decade between 2003-2012 where consumer exit value took the lead.
Source: S&P Capital IQ, Pitchbook
We believe the size of, and the inevitable hype around, consumer IPOs have the potential to cloud investor judgment, since the volume of successful deals is not increasing. The data clearly shows that the surge in outsized returns comes from the outliers in consumer.
As exhibited below, large consumer outliers since 2011, such as Facebook, Uber, and Snap, often account for more than the sum of enterprise exits in any given year. For example, in the first three quarters of 2019, there have been 15 enterprise exits valued at over $1B for a total of $96B. Over the same period, there have been nine consumer exits valued at over $1B for a total of $139B. Anecdotally, four of the past five years have been headlined by a consumer exit; while 2016 showcased an enterprise exit, it was a particularly quiet exit year.
Source: S&P Capital IQ, Pitchbook
While consumer deals have taken the lead in IPO value in recent years, on the M&A front, enterprise still has the clear edge. Since 1995 there have been 76 exits of $1 billion or more in value, of which 49 are enterprise companies and 27 are consumer companies. The vast majority of value from M&A has come from enterprise companies since 1995 — more than 2x that of consumer.
Similar to the IPO chart above, acquisition value of enterprise companies outpaced that of consumer companies until recently, with 2010-2014 being the exception.
Source: S&P Capital IQ, Pitchbook
Of course, looking only at outcomes with $1 billion or more in value covers only a fraction of where most VC exits occur. Slightly less than half of all exits in both enterprise and consumer are $50 million or under in size, and more than 70 percent of all exits are under $200 million. Moreover, in the distribution chart below, we capture only the percentage of companies for which we have exit values. If we change the denominator to all exits captured in our database (i.e. measure the percentage of $1 billion-plus exits by using a higher denominator), the percentage of outcomes drops to around 3 percent of all outcomes for both enterprise and consumer.
Source: S&P Capital IQ, Pitchbook
There’s an enormous volume of information available on startup exits, and at Sapphire Partners, we ground our analyses and theses in the numbers. At the same time, once we’ve dug into the details, it’s equally important to zoom out and think about what our findings mean for our GPs and fellow LPs. Here are some clear takeaways from our perspective:
In a nutshell, as LPs we like to see both consumer and enterprise deals in our underlying portfolio as they each provide different exposures and return profiles. However, when these investments get rolled up as part of a venture fund’s portfolio, success is often then contingent on the fund’s overall portfolio construction… but that’s a question to explore in another post.
NOTE: Total Enterprise Value (“TEV”) presented throughout analysis considers information from CapIQ when available, and supplements information from Pitchbook last round valuation estimates when CapIQ TEV is not available. TEV (Market Capitalization + Total Debt + Total Preferred Equity + Minority Interest – Cash & Short Term Investments) is as of the close price for the initial date of trading. Classification of “Enterprise” and “Consumer” companies presented herein is internally assigned by Sapphire. Company logos shown in various charts presented herein reflect the top (4) companies of any particular time period that had a TEV of $1BN or greater at the time of IPO, with the exception of chart titled “Exits by Year, 1995- Q3 2019”, where logos shown in all charts presented herein reflect the top (4) companies of any particular year that had a TEV of $7.5BN or greater at the time of IPO. During a time period in which less than (4) companies had such exits, the absolute number of logos is shown that meet described parameters. Since 1995 refers to the time period of 1/1/1995 – 9/30/2019 throughout this article.
Note: Figures include the first three quarters of 2019. IPO exit values refer to the total enterprise value of a company at the end of the first day of trading according to S&P Capital IQ. The analysis considers a combination of Pitchbook and S&P Capital IQ data to analyze US venture-backed companies that exited through acquisition or IPO between 1/1/1995 and 9/30/2019. The lockup period is a predetermined amount of time following an initial public offering (“IPO”) during which large shareholders, such as company executives and investors representing considerable ownership, are restricted from selling their shares. Total enterprise value is as of the end of 10/15/2019 according to S&P Capital IQ.
Source: https://sapphireventures.com/blog/openlp-series-which-investments-generate-the-greatest-value-in-venture-consumer-or-enterprise/
As participants in a rapidly changing industry, those of us in the restaurant business understand the importance of innovation. From the introduction of self-service digital experiences to the emergence of third-party delivery, technology innovation has continuously proven to be a powerful force in multi-unit restaurants’ ability to drive and respond to guest behavior. However, innovation done right isn’t easy; and it is even more difficult when that innovation needs to take place in a non-standard environment.
The truth of the matter is that many multi-unit restaurant brands, especially those that are franchised, are non-standard. While regional and market variations in menus, store layouts, and technology can provide a unique, tailored experience for guests residing in a specific area, these variations also present a challenge when it comes to implementing a brand-wide technology innovation strategy.
Here to discuss the best practices for overcoming the obstacles associated with non-standard technology environments is Michael Chachula, Head of IT for IHOP Restaurants.
Q: Where does innovation come from in IHOP?
Chachula: “Most of the innovation that happens here at IHOP comes from one of two places. The first is customer demand. We continuously engage with our guests to understand the points of friction in their experience or areas where we can surprise and delight. Many of our guests have begun expecting the same technology experience from IHOP that they have had not only with other restaurant brands but with technology providers like Uber or Apple. We hold this feedback close when forming our technology strategies. The second is analysis of the in-restaurant journey. We recognize that our guests’ most valuable currency is their time, and as a result, we continuously aim to test new technologies that make their time with us more efficient, more enjoyable, and more memorable.”
Q: What is the key to being successful when you are evaluating a new technology solution for a non-standard operational environment?
Chachula: “The word to pay attention to here is standardization. Standardization is important for enabling scalability, but it cannot be allowed to stem creativity. Those currently battling this challenge should look to introduce a modular, flexible, and extensible technology platform that is easy to support but configurable enough to allow creativity in the operations community. Configurability should always be one of the top five considerations when evaluating new technology solutions for a diverse multi-unit brand; that is where technology meets operations. On top of that, those decisions should be validated through partnerships with industry experts who can help confirm that the investment you make in a solution won’t be an investment wasted.”
Q: What is the right way to implement new technology in this type of environment?
Chachula: “What I have found is that most of our operators share about 80% of their needs and wants when it comes to technology. With that said, the first step in preparing for a successful implementation of new technology is identifying the 20% of functionality or uniqueness that may be required from one operator to another. Once that is done and you place those unique requirements and operational requests into logical groupings, you can begin working on how to ensure that the new technology is configured and supported properly for each of those different groups. In this model, you are essentially creating several different configuration ‘schemas’ aligned with each of these groups. This allows increased supportability and ease of implementation when it comes to putting new technology into the field in a fast-paced environment like an IHOP.”
A significant share of architectural energy is spent on reducing or avoiding lock-in. That’s a rather noble objective: architecture is meant to give us options and lock-in does the opposite. However, lock-in isn’t a simple true-or-false matter: avoiding being locked into one aspect often locks you into another. Also, popular notions, such as open source automagically eliminating lock-in, turn out to be not entirely true. Time to have a closer look at lock-in, so you don’t get locked up into avoiding it!
One of an architect’s major objectives is to create options. Those options make systems change-tolerant, so we can defer decisions until more information becomes available or react to unforeseen events. Lock-in does the opposite: it makes switching from one solution to another difficult. Many architects may therefore consider it their archenemy while they view themselves as the guardians of the free world of IT systems where components are replaced and interconnected at will.
Lock-in – an architect’s archenemy?
But architecture is rarely that simple – it’s a business of trade-offs. Experienced architects know that there’s more behind lock-in than proclaiming that it must be avoided. Lock-in has many facets and can even be the favored solution. So, let’s get in the Architect Elevator to have a closer look at lock-in.
The platforms we are deploying software on these days are becoming ever more powerful – modern cloud platforms not only tell us whether our photo shows a puppy or a muffin, they also compile our code, deploy it, configure the necessary infrastructure, and store our data.
This great convenience and productivity booster also brings a whole new form of lock-in. Hybrid/multi-cloud setups, which seem to attract many architects’ attention these days, are a good example of the kind of things you’ll have to think of when dealing with lock-in. Let’s say you have an application that you’d like to deploy to the cloud. Easy enough to do, but from an architect’s point of view, there are many choices and even more trade-offs, especially related to lock-in.
You might want to deploy your application in containers. That sounds good, but should you use AWS’ Elastic Container Service (ECS) to run them? After all, it’s proprietary to Amazon’s cloud. Prefer Kubernetes? It’s open source and runs on most environments, including on premises. Problem solved? Not quite – now you are tied to Kubernetes – think of all those precious YAML files! So you traded one lock-in for another, didn’t you? And if you use a managed Kubernetes service such as Google’s GKE or Amazon’s EKS, you may also be tied to a specific version of Kubernetes and proprietary extensions.
If you need your software to run on premises, you could also opt for AWS Outposts, so you do have some options. But that again is proprietary. It integrates with VMware, which you are likely already locked into, so does it really make a difference? Google’s equivalent, freshly minted Anthos, is built from open-source components but is nevertheless a proprietary offering: you can move applications to different clouds – as long as you keep using Anthos. Now that’s the very definition of lock-in, isn’t it?
Alternatively, if you neatly separate your deployment automation from your application run-time, doesn’t that make it fairly easy to switch infrastructure, reducing the effect of all that lock-in? Hey, there are even cross-platform infrastructure-as-code tools. Aren’t those supposed to make these concerns go away altogether?
For your storage needs, how about AWS S3? Other cloud providers offer S3-compatible APIs, so can S3 be considered multi-cloud compatible and lock-in free, even though it’s proprietary? You could also wrap all your data access behind an abstraction layer and thus localize any dependency. Is that a good idea?
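The abstraction-layer idea can be sketched in a few lines. Below is a minimal, hypothetical Python sketch — the class and method names are invented for illustration, not any real library’s API — showing how a narrow storage interface localizes the vendor dependency behind it:

```python
from abc import ABC, abstractmethod


class BlobStore(ABC):
    """Narrow storage interface: application code depends only on this."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class InMemoryStore(BlobStore):
    """Stand-in backend; an S3- or GCS-backed class would implement
    the same two methods, keeping vendor calls behind the interface."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]


def save_report(store: BlobStore, report: bytes) -> None:
    # Call sites see only the interface; swapping vendors means
    # writing one new subclass, not touching application code.
    store.put("reports/latest", report)
```

Whether this is a good idea is exactly the trade-off the rest of this article examines: the layer reduces switching cost, but it also costs effort and typically hides the vendor’s most valuable proprietary features.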
It looks like avoiding lock-in isn’t quite so easy and might even get you locked up into trying to escape from it. To highlight that cloud architecture is fun nevertheless, I defer to Simon Wardley’s take on hybrid cloud.
Lock-in isn’t an all-or-nothing affair.
Elevator Architects (those who ride the Architect Elevator up and down) see shades of gray where many see only black and white. When thinking about system design, they realize that common attributes like lock-in or coupling aren’t binary. Two systems aren’t simply coupled or decoupled, just as you aren’t simply locked into a product or not. Both properties have many nuances. For example, lock-in breaks down into numerous dimensions:
Open source software isn’t a magic cure for lock-in.
In summary, lock-in is far from an all-or-nothing affair, so understanding the different flavors can help you make more conscious architecture decisions. The list also debunks common myths, such as the idea that using open source software magically eliminates lock-in. Open source can reduce vendor lock-in, but most of the other types of lock-in remain. This doesn’t mean open source is bad, but it isn’t a magic cure for lock-in.
Experienced architects not only see more shades of gray, they also practice good decision discipline. That’s important because we are much worse decision makers than we commonly like to believe – a quick read of Kahneman’s Thinking, Fast and Slow is in order if you have any doubt.
One of the most effective ways to improve your decision making is to use models. Even, or especially, simple models are surprisingly effective at improving decision making:
Simple but evocative models are the signature of the great scientist, but over-elaboration and over-parameterization is often the mark of mediocrity.
That’s why you shouldn’t laugh at the famed two-by-two matrix that’s so beloved by management consultants. It’s one of the simplest and therefore most effective models as we shall soon discover.
The more uncertain the environment, the more structured models can help you make better decisions.
There’s a second important point about models: a common belief tells us that in the face of uncertainty you pretty much have to “shoot from the hip” – after all, everything is in flux anyway. The opposite is actually true: our generally poor decision making only gets worse when we have to deal with many interdependencies, high degrees of uncertainty, and small probabilities. Therefore, this is where models help the most to bring much-needed structure and discipline into our decision making. Deciding whether and to what degree to accept lock-in falls well into this category, so let’s use some models.
A simple model can help us get past the “lock-in = bad” stigma. First, we have to realize that it’s difficult not to be locked into anything, so some amount of lock-in is inevitable. Second, we may happily accept some amount of lock-in if we get a commensurate pay-off, for example in the form of a unique feature or utility that’s not offered by competing products.
Let’s express these factors in a very simple model – a two-by-two matrix:
The matrix outlines our choices along the following axes:
We can now consider each of the four quadrants:
While the model is admittedly simple, placing your software (and perhaps hardware) components into this matrix is a worthwhile exercise. It not only visualizes your exposure but also communicates your decisions well to a variety of stakeholders.
For an everyday example of the four quadrants, you may have decided to use the following items, which give you varying amounts of lock-in and utility (counter-clockwise from top right):
A unique product feature doesn’t always translate into unique utility for you.
One word of caution on unique utility: every vendor is going to give you some form of unique feature – that’s how they differentiate. However, what counts here is whether that feature translates into concrete and unique value for you and your organization. For example, some cloud providers run billion-user services over their amazing global networks. That’s impressive and unique, but unlikely to be a utility for the average enterprise that’s quite happy to serve 1 million customers and may be restricted to doing business in a single country. Some people still buy Ferraris in small countries with strict speed limits, so apparently not all decision making is entirely rational, but perhaps a Ferrari gives you utility in more ways than a cloud platform can.
Because this simple matrix was so useful, let’s do another one. The previous matrix treats switching cost as a single element (or dimension). A good architect can see that it breaks down into two dimensions:
The matrix differentiates between the cost of making the switch and the likelihood that you’ll have (or want) to make it. Things with a low likelihood and a low cost shouldn’t bother you much, while the opposite corner – high switching cost and a high chance of switching – is no good and should be addressed. On the other diagonal, you are taking your chances on options that will cost you but are unlikely to occur – that’s where you’ll want to buy some insurance, for example by limiting the scope of change or by padding your maintenance budget. You could also accept the risk – how often would you really need to migrate off Oracle onto DB2, or vice versa? Lastly, if switches are likely but cheap, you have achieved agility – you embrace change and have designed your system for a low cost of executing it. Oddly, this quadrant often gets less attention than the top left despite many small changes adding up quickly. That’s our poor decision making at work: the unlikely drama gets more attention because what if!
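The two dimensions translate directly into a decision rule. Here is a toy Python sketch of the quadrants — the $10,000 cost threshold and 20% likelihood threshold are arbitrary illustrations, not recommendations from the article:

```python
def lock_in_strategy(switching_cost: float, likelihood: float) -> str:
    """Map a component's switching cost (in dollars) and switch
    likelihood (0..1) to the four quadrant strategies described above.
    Thresholds are illustrative only."""
    costly = switching_cost > 10_000
    likely = likelihood > 0.2
    if costly and likely:
        return "danger zone: reduce switching cost or likelihood"
    if costly and not likely:
        return "buy insurance: limit scope, pad the budget, or accept the risk"
    if not costly and likely:
        return "agility: embrace the change, it's cheap"
    return "don't worry about it"
```

Running your component inventory through even a crude rule like this forces the conversation away from “lock-in = bad” and toward which quadrant each dependency actually sits in.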
When discussing the likelihood of a switch, you’ll want to consider a variety of scenarios that could make you switch: a vendor may go out of business, raise prices, or no longer be able to support your scale or functional needs. Interestingly, the desire to reduce lock-in sometimes comes in the form of a negotiation tool: when negotiating license renewals you can hint to your vendor that you architected your system such that switching away from their product is realistic and inexpensive. This may help you negotiate a lower price because you’ve communicated that you have an attractive BATNA – a Best Alternative To a Negotiated Agreement. This is an architecture option that’s not really meant to be used – it’s a deterrent, sort of like a stockpile of weapons in a cold war. You might be able to fake it and not actually reduce lock-in, but you had better be a good poker player in case the vendor calls your bluff, e.g. by chatting with your developers at the water cooler.
Pulling in our options analogy from the very beginning once more: if avoiding lock-in gives you options, then the cost of making the switch is the option’s strike price – it’s how much you pay to exercise the option. The lower the switching cost you want to achieve, the higher the option’s value and therefore its price. While we’d dream of having all systems in the “green boxes” with minimal switching cost, the necessary investment may not actually pay off.
Minimizing switching costs may not be the most economical choice.
For example, many architects favor not being locked into a database vendor or cloud provider. However, how likely is a switch really? Maybe 5%, or even lower? How much will it cost you to bring that switching cost down from, let’s say, $50,000 (for a semi-manual migration) to near zero? Likely a lot more than the $2,500 ($50,000 x 5%) you can expect to save. Therefore, minimizing the switching cost isn’t the sole goal and can easily lead to over-investment. It’s the equivalent of being over-insured: paying a huge premium to bring the deductible down to zero may give you peace of mind, but it’s often not the most economical, and therefore rational, choice.
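The arithmetic behind this over-insurance argument is worth making explicit. A quick sketch using the numbers from the example above:

```python
def expected_savings(switching_cost: float, likelihood: float) -> float:
    """Expected liability you would eliminate by driving the
    switching cost all the way to zero."""
    return switching_cost * likelihood


# Numbers from the example: a $50,000 semi-manual migration
# with roughly a 5% chance of ever being needed.
savings = expected_savings(50_000, 0.05)  # $2,500
# Any lock-in-avoidance effort costing more than this expected
# saving is over-insurance: the premium exceeds the risk.
```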
A final model (for once not a matrix) can help you decide how much you should invest in reducing the cost of making a switch. The following diagram shows your liability, defined as the product of the switching cost and the likelihood that it occurs, in relation to the up-front investment you need to make (blue line).
By investing in options, you can surely reduce your liability, either by reducing the likelihood of a switch or by reducing the cost of executing it. For example, using an Object-relational Mapping (ORM) framework like Hibernate is a small investment that can reduce database vendor lock-in. You could also create a meta-language that is translated into each database vendor’s native stored procedure syntax. It’ll allow you to fully exploit the database’s performance without being dependent, but it’s going to take a lot of up-front effort for a relatively unlikely scenario.
The interesting function therefore is the red line, the one that adds the up-front investment to the potential liability. That’s your total cost and the thing you should be minimizing. In most cases, with increasing up-front investment, you’ll move towards an optimum range. Beyond that range, additional investment in reducing lock-in actually leads to higher total cost. The reason is simple: the returns on investment diminish, especially for switches that carry a small probability. If we make our architecture ever-so-flexible, we are likely stuck in this zone of over-investment. The Yagni (you ain’t gonna need it) folks may aim for the other end of the spectrum – as so often, the trick is to find the happy medium.
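The red-line argument can be made concrete in a few lines. Assuming — purely for illustration — that each extra dollar of up-front investment cuts the residual switching cost with diminishing (here, exponential) returns, the total cost is minimized at an interior point rather than at maximum flexibility:

```python
import math


def total_cost(invest: float, base_switch_cost: float = 50_000,
               likelihood: float = 0.05) -> float:
    """Up-front investment plus residual expected liability.

    Illustrative assumption: the residual switching cost decays
    exponentially in the up-front investment (diminishing returns);
    the decay constant and dollar figures are invented for the sketch.
    """
    residual = base_switch_cost * math.exp(-invest / 2_000)
    return invest + likelihood * residual


# Sweep investment levels: the minimum total cost sits at a moderate
# spend, not at the maximally flexible (high-investment) extreme.
best = min(range(0, 20_001, 100), key=total_cost)
```

The exact optimum depends entirely on the assumed decay, but the qualitative shape — a dip followed by a climb into the zone of over-investment — matches the red line in the diagram.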
Now that we have a pretty good grip on the costs and potential pay-offs of being locked in, we need to have a closer look at the total cost of avoiding lock-in. In the previous model we assumed that avoiding lock-in is a simple cost. In reality, though, this cost can be broken down into several components:
Complexity can be the biggest price you pay for reducing lock-in.
When calculating the cost of avoiding lock-in, an architect should make a quick run down this list to avoid blind spots. Also, be aware that attempts at avoiding lock-in can be leaky, very much like leaky abstractions. For example, Terraform is a fine tool, but its scripts use many vendor-specific constructs. Implementation details thus “leak” through, rendering the switching cost from one cloud to another decidedly non-zero.
With so much theory, let’s look at a few concrete examples.
I worked with a company that packages much of its code into Docker containers deployed to AWS ECS; it is thus locked into AWS. Should the company invest in replacing its container orchestration with Kubernetes, which is open source? Given that feature velocity is its main concern and the current ECS solution works well, I don’t think a migration would pay off. The likelihood of having to switch to another cloud provider is low, and the team has “bigger fish to fry”.
Recommendation: accept lock-in.
Many applications use a relational database that can be provided by numerous vendors and open source alternatives. However, SQL dialects, stored procedures, and bespoke management consoles all contribute to database lock-in. How much should you invest into avoiding this lock-in? For most languages and run-times common mapping frameworks such as Hibernate provide some level of database neutrality at a low cost. If you want to further minimize your strike price, you’d also need to avoid SQL functions and stored procedures, which may make your product less performant or require you to spend more on hardware.
Recommendation: use low-effort mechanisms to reduce lock-in. Don’t aim for zero switching cost.
Rather than switching from one database vendor to another, you may be more interested in moving your application, including its database, to the cloud. Besides technical considerations, you’ll need to be careful with some vendors’ licensing agreements that may make such a move uneconomical. In these cases, it’s wise to opt for an open source database.
Recommendation: select an open source database if it can meet your operational and support needs, but accept some degree of lock-in.
Many enterprises are fascinated by the idea of portable multi-cloud deployments and come up with ever more elaborate, complex, and expensive plans that’ll ostensibly keep them free of cloud provider lock-in. However, most of these approaches negate the very reason you’d want to go to the cloud: low friction and the ability to use hosted services like storage or databases.
Recommendation: Exercise caution. Read my article on multi-cloud.
It may seem that one could spend an enormous amount of time contemplating lock-in. Some may even dismiss our approach as “academic”, a word I repeatedly fail to see as something bad, because that’s where most of us got our education. Still, isn’t the old black-or-white method of architecture simpler and, perhaps, more efficient?
Architectural thinking is actually surprisingly fast if you focus and stick to simple models.
In reality, this thinking happens extremely fast. Running through all the models shown in this article may take just a few minutes and yields well-documented decisions. No fancy tooling besides a piece of paper or a whiteboard is required. The key ingredient in fast architectural thinking is merely the ability to focus.
Compare that to the effort of preparing elaborate slide decks for lengthy steering committee meetings that are scheduled many weeks in advance and usually aren’t attended by anyone with the actual expertise to make an informed decision.
Imagine a world where you had a personal board of advisors — the people you most admire and respect — and you gave them upside in your future earnings in exchange for helping you (e.g., our good friend Mr. Mike Merrill).
Imagine if there was a “Kickstarter for people” where you could support up-and-coming artists, developers, entrepreneurs — when they need the cash the most, and most importantly, you’d only profit when they profit.
Imagine if you could diversify by pooling 1% of your future income with your ten smartest friends.
Now think about how much you’d go out of your way to help, say, your brother-in-law or step-siblings. Probably much more than a stranger. Why is that?
To pose a thought experiment: If you didn’t know your cousins were related to you, you might treat them like any other person. But because we have this social context of an “extended family,” you have a sort of genetic equity in them — a feeling that your fates are shared and it’s your responsibility to support them.
This raises the question: How can we create the social context needed for people to truly care about others outside of their extended family?
If you believe that markets and trade have helped the world become a less violent place — because why hurt someone when it’ll also take money out of your pocket? — then you should believe that adding more markets (with proper safeguards) will make the world even less violent.
This is the hope of income share agreements (ISAs).
ISAs align economic incentives in ways that encourage us to help others beyond our extended family, give people economic opportunity who don’t have it today, and free people from the shackles of debt.
What are these ISAs you speak of?
An Income Share Agreement is a financial arrangement where an individual or organization provides something of value to a recipient, who, in exchange, agrees to pay back a percentage of their income for a certain period of time.
In the context of education, ISAs are a debt-free alternative to loans.
Rather than go into debt, students receive interest-free funding from an investor or benefactor. In exchange, the student agrees to share a percentage of future income with their counterparty. They come in different shapes and sizes, but almost always with terms that take into account a plethora of potential scenarios.
“Part of the elegance of an ISA is that the lender only wants a share of income when the borrower is getting a regular income. If you’re unemployed or underemployed, they’re not interested… you’re automatically getting a suspension of payments when you’re not doing well.”
– Mark Kantrowitz, a leading national expert on student loans who has testified before Congress about student aid policy.
There is a long and storied history of income share agreements, but they’ve only recently become popular due to the rise of Lambda School, a school that lets students attend for free and, if they do well after school, pay a percentage of their income until they pay Lambda back.
Wait, a popular meme sarcastically asks, did you just invent taxes?
No. Lambda gets paid if and only if the student earns a certain amount after graduation. In other words, incentives are aligned. The student is the customer. Not the government. Not the state. Not the parents.
To be sure, it’s early days for ISAs: adverse selection, legal status, concerns about treating individuals like corporations (derivatives? Shorting people?!) — there’s a lot left to figure out.
Still, it’s an idea that once you see, you can’t unsee.
Here’s a hypothetical story to help you picture how ISAs work:
Picture Janet, a senior at Davidson High School. She has a 4.0 GPA, is captain of the debate team, and is the star center forward of the varsity soccer team. She’s a shoo-in for a top-20 university, but her parents can’t afford it even with a scholarship, so she isn’t even going to apply and is headed for State. Then she learns from a news article that she’s a pretty good bet to succeed down the road, and that this might let her raise some much-needed cash for her education. She goes for it, creates a profile on an ISA platform, and sure enough, a few strangers bet $50,000 on her college education! She immediately gets to work filling out Ivy League scholarship applications.
Throughout college, she keeps in touch with her investors, they give her advice, and because of her interest in politics, one even helps her get an internship with a governor’s election campaign over the summer. Once she graduates, she knows the clock is ticking — at 23 she’ll need to start paying back the investors 5% of her after tax income, so she hustles to work her way through the ranks.
From age 23 to 33, the payback period, Janet works as a lawyer at a top-tier firm, and the investors make a 3x cash-on-cash return.
The above is purely hypothetical.
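Running the numbers on Janet’s story shows they roughly hang together. The sketch below is a back-of-the-envelope check: the 5% share, ten-year window, and $50,000 investment come from the story, while the income path (starting at $150k after tax and growing 15% a year) is purely an assumption for illustration.

```python
# Back-of-the-envelope check on the hypothetical ISA above.
# The income path is an assumption; the 5% share, 10-year window,
# and $50,000 investment come from the story.

investment = 50_000
income_share = 0.05        # share of after-tax income
payback_years = 10         # ages 23 to 33

# Assumed after-tax income path: $150k growing 15% a year.
incomes = [150_000 * 1.15 ** year for year in range(payback_years)]

total_paid = income_share * sum(incomes)
multiple = total_paid / investment

print(f"total paid back: ${total_paid:,.0f}")
print(f"cash-on-cash multiple: {multiple:.1f}x")
```

Under these assumptions the investors end up with roughly the 3x multiple the story describes; a flatter income path would, of course, produce far less.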
ISAs for traditional higher education are much more complicated than, say, vocational training, where there is a more direct ‘skills-to-job’ pathway for students. But the beauty of ISAs is in their flexibility, so there is lots of room for innovation.
ISAs and other related instances of securitizing human capital have been tried. Here’s a brief history:
In modern times, the first notable mention of the concept of ISAs was by Nobel-prize winning economist Milton Friedman in his 1955 essay The Role of Government in Education.
In a section devoted specifically to vocational and professional education, Friedman proposed that an investor could buy a share in a student’s future earning prospects.
It’s worth noting that the barriers to adoption that Friedman identified back in the 1950s still hold true today:
Society might not have been ready for ISAs in the 1950s, but 16 years later, another Nobel Prize-winning economist, James Tobin, would help launch the first ISA option for college students at Yale University.
In the 1970s, Yale University ran an experiment called the Tuition Postponement Option (“TPO”). The TPO was a student loan program that enabled groups of undergraduates to pay off loans as a “cohort” by committing a portion of their future annual income.
Students who signed up for the program (3,300 in total) were to pay four percent of their annual income for every $1,000 borrowed until the entire group’s debt had been paid off. High earners could buy out early, paying 150% of what was borrowed plus interest.
Within each cohort, many low earners defaulted, while the highest earners bought out early, leaving a disproportionate debt burden for the remaining graduates.
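The arithmetic behind that unraveling is easy to see. The sketch below uses the share and buyout terms as stated above; the income figure and years in the cohort are assumptions for illustration.

```python
# Why Yale's TPO cohorts unraveled: under the stated terms, buying out
# was vastly cheaper for high earners. The income figure is an assumption;
# the share and buyout multiple come from the program description above.

borrowed = 1_000
income_share = 0.04           # share of annual income per $1,000 borrowed
buyout_cost = 1.5 * borrowed  # 150% of principal (interest ignored here)

def total_if_staying(annual_income, years_in_cohort):
    """Payments made by a graduate who stays until the cohort's debt clears."""
    return income_share * (borrowed / 1_000) * annual_income * years_in_cohort

stay = total_if_staying(annual_income=80_000, years_in_cohort=15)
print(f"stay in cohort: ${stay:,.0f}  vs  buy out: ${buyout_cost:,.0f}")
```

With numbers like these, every high earner rationally exits early, leaving the remaining graduates to carry the cohort’s debt: exactly the failure mode described.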
Administrators also did not account for the changes to the tax code and skyrocketing inflation in the 1980s, which only exacerbated the inequitable arrangement.
“We’re all glad it’s come to an end. It was an experiment that had good intentions but several design flaws.” — Yale President Richard Levin.
While the TPO is generally considered a failure, it was the first instance of a major university offering ISAs and a useful example for how not to structure ISAs — specifically, pooling students by cohort and allowing the highest earning students to buy out early.
It would be decades after Yale’s failed experiment before universities started experimenting again with ISAs, but today a company called Vemo Education is leading the way.
This is a crucial point: Vemo isn’t competing directly with loans; instead, it unlocks other sorts of value (i.e., helping students better choose their college). The key is that Vemo links an individual’s fortunes to the institution’s fortunes: by offering ISAs, a university signals that it wants to better align the cost of its programs with the value they deliver.
The first institution that Vemo partnered with to offer ISAs was Purdue University.
In 2016, Purdue University began partnering with Vemo Education to offer students an ISA tuition option through its “Back a Boiler” ISA Fund. They started with a $2 million fund, and since then have raised another $10.2 million and have issued 759 contracts totaling $9.5 million to students.
Purdue markets its ISA offering as an alternative to private student loans and Parent PLUS Loans. Students of any major can get $10,000 per year in ISA funding at rates that vary between 1.73% and 5.00% of their monthly income. Purdue caps payments at 2.5x the ISA amount that students take out and payment is waived for students making less than $20,000 in annual income.
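As a sketch of how those terms play out in practice: the cap and income floor below come from the description above, while the 3.5% rate and the sample salaries are assumptions (the rate is picked from within the quoted 1.73%–5.00% range).

```python
# Illustrative sketch of the "Back a Boiler" terms quoted above.
# The income-share rate is an assumption within the quoted 1.73%-5.00% range.

isa_amount = 10_000
income_share = 0.035
payment_cap = 2.5 * isa_amount   # total payments never exceed $25,000
income_floor = 20_000            # below this annual income, payments are waived

def monthly_payment(annual_income):
    """Monthly ISA payment for a given annual income (simplified)."""
    if annual_income < income_floor:
        return 0.0
    return income_share * annual_income / 12

print(monthly_payment(60_000))   # a hypothetical $60k salary
print(monthly_payment(15_000))   # below the floor: payment waived
```

A real contract also tracks cumulative payments against the cap month by month; this sketch only shows the per-month mechanics.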
In the last few years, Vemo has emerged as the leading partner for higher education institutions looking to develop, launch and implement ISAs. In 2017, Vemo powered $23M of ISAs for college students across the US.
Fintech company Upstart initially launched with a model of “crowdfunding for education”. However, they eventually pivoted to offering traditional loans when they realized that their initial model was simply not viable.
Why? Not enough supply.
The fact that only accredited investors (over $1M in net worth) could invest severely limited the pool of potential funders on the site. And while Upstart never got enough traction with the original model (the pivot itself succeeded), it paved the way for a platform like it to eventually be built.
While Upstart failed to gain traction, technical educational bootcamps have seen tremendous growth while offering their students ISAs to finance their education.
And Lambda School is leading the way.
Lambda School is an online bootcamp that trains students to become software engineers at no upfront cost. Instead of paying tuition, students agree to pay 17% of their income for the first two years that they’re employed. Lambda School includes a $50,000 minimum income threshold and caps total payments at an aggregate $30,000. They also give students the option to pay $20,000 upfront if they’d rather not receive an ISA.
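Using the terms just described, the total a graduate pays is easy to compute. The sketch below is a simplification (it ignores month-by-month employment gaps), and the sample salaries are assumptions.

```python
# Total ISA payments under the Lambda School terms described above,
# simplified to a flat two-year salary. Sample salaries are assumptions.

income_share = 0.17
share_years = 2
income_threshold = 50_000   # no payments below this annual income
payment_cap = 30_000        # aggregate cap on total payments

def total_payments(annual_income):
    """Total paid over the share period, assuming a constant salary."""
    if annual_income < income_threshold:
        return 0.0
    return min(income_share * annual_income * share_years, payment_cap)

for salary in (40_000, 70_000, 120_000):
    print(salary, total_payments(salary))
```

Note how the cap binds at higher salaries: a graduate earning $120k pays the $30,000 maximum, not 17% of everything, which is what keeps the deal from becoming open-ended.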
Lambda School students enroll for nine months and end up with 1,500–2,000 hours of training, comparable to the CS-focused portion of a four-year degree.
“Lambda School looks like a charity from the outside, but we’re really more like a hedge fund.
We bet that smart, hardworking people are fundamentally undervalued, and we can apply some cash and leverage to fix that, taking a cut.” — Austin Allred (Lambda School CEO)
In our opinion, Lambda is legitimizing ISAs and may just be the wedge that makes ISAs mainstream.
Given where we are today, and with the potential for this type of financial innovation, what might the future look like?
There are three major themes in particular that get us excited for the future of ISAs: aggregation, novel incentive structures, and crypto.
We believe that it’s possible to pool together various segments of people to decrease overall risk of that population and provide more to each individual person.
If we assume that each individual’s income is fairly independent of the others’, this should be possible. As risk declines, risk-adjusted expected returns improve. And as expected returns improve, more investors and ISA providers will likely jump in to provide even more capital for more people.
“There is no reason you have to do this at the individual level. Most likely, it will first occur in larger aggregated groups — based on either geography, education, or other group characteristics. As with the housing market, it is important to aggregate enough individual sample points to reduce risk.” — Dave McClure
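A toy simulation makes the diversification point concrete. The payoff distribution below is entirely invented, but the variance-reduction effect it demonstrates holds for any roughly independent outcomes.

```python
# Toy simulation of the pooling argument above: if individual incomes are
# roughly independent, a pooled ISA's return varies far less than any
# single one. The payoff distribution here is invented for illustration.
import random
import statistics

random.seed(0)

def single_isa_return():
    # Hypothetical multiples: some default, most repay ~1x-2x, a few hit a cap.
    return random.choice([0.0, 0.5, 1.0, 1.5, 2.0, 3.0])

def pooled_return(pool_size):
    return statistics.mean(single_isa_return() for _ in range(pool_size))

singles = [single_isa_return() for _ in range(10_000)]
pools = [pooled_return(50) for _ in range(10_000)]

print("single ISA stdev:", statistics.stdev(singles))
print("pool of 50 stdev:", statistics.stdev(pools))
```

The expected return is identical in both cases; what pooling buys is a far narrower spread of outcomes, which is what lets more risk-averse capital participate.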
Another take on aggregation could be an individual electing to group together with their close friends or peers.
This can have the magical benefit of further aligning incentives with those around you, increasing the value of cooperation, lowering downside risk, and promoting more potential risk taking or thinking outside the box, all of which should have the benefit of increasing economic growth.
In addition to that, being able to take a more active role in a friend’s life (helping when need be, sharing in their wins, supporting in their losses, etc.) can be an extremely rewarding experience. That said, there are some definite downsides and risks to be aware of with these types of arrangements.
How can we create financial products to incentivize service providers (i.e., teachers, doctors, etc.) who indirectly have massive impacts on the incomes of future generations?
Just imagine the difference it could make if every teacher were able to take even a tiny percentage of each of their students’ future earnings. Teachers today unfortunately don’t make nearly as much money as they should, given the significant influence they have on future generations. A great teacher can create the spark for the next Einstein or Elon Musk. A terrible teacher could damage a potential Einstein or Elon Musk enough that they never realize their potential. Imagine how many more incredible people we could have.
There will always be incredible teachers regardless of monetary return, but we bet there could be more. It all comes down to aligning incentives.
This same thinking can be applied to other service providers like doctors. Currently, doctors are paid the same amount (all else equal) whether they succeed or not in a life-saving surgery. But what if the service provider also took a tiny fraction of future earnings from their patient? Incentives are more aligned. That doctor may not even realize it, but they likely would work a bit harder knowing what’s at stake.
Crypto can securitize far more than we currently do; in essence, we could tokenize ourselves and all of our future income. Once those personal tokens exist, they can be traded instantly anywhere in the world with infinite divisibility. Arbitrageurs and professional traders could create new financial products (i.e., ISA aggregations) and buy and sell with each other to price things to near perfection.
We’d love to continue the conversation! This is a fascinating space with a ton of opportunity. If you’re thinking about or building anything here, feel free to leave your comments or reach out to talk more.
Quick refresher: Indentured servants were immigrants who bargained away their labor (and freedom) for four-to-seven years in exchange for passage to the British colonies, room, board and freedom dues (a prearranged severance). Most of these immigrants were English men who came to British colonies in the 17th century.
On the surface this seems like a decent deal, but not so fast. They could be sold, lent out or inherited. Only 40% of indentured servants lived to complete the terms of their contracts. Masters traded laborers as property and disciplined them with impunity, all lawful at the time.
Rebuttal: We are in no way advocating a return to indentured servitude (voluntary or otherwise). Modern-day ISAs must be structured to have proper governance, ensure alignment of interests and contain legal covenants that protect both parties.
We are advocating for ISAs that (i) are voluntary, (ii) do not force the recipient to work for the investor, and (iii) are a promise to share future income, not an obligation to repay a debt.
Our Response: ISAs offered by Lambda School, Holberton School and other companies are legal under current US law. To the best of our knowledge, all companies offering ISAs operate according to best practices (i.e., consumer disclosure and borrower protections) as set forth in proposed federal legislation.
The Investing in Student Success Act (H.R.3432, S.268) has been proposed in both the US House of Representatives and the US Senate. Under this legislation, ISAs would be classified as qualified education loans (rather than equity or debt securities), making them, like other qualified education loans, non-dischargeable in bankruptcy. Furthermore, the bill would exempt funds that invest in ISAs from being considered investment companies under the Investment Company Act of 1940.
Importantly, the bill includes consumer protections (i.e., required disclosures, payback periods, payback caps, and limits on income share amounts). The bill also includes tax stipulations ensuring that ISA recipients do not owe taxes on the funding they receive and that investors are taxed only on profits earned from ISAs.
Quick refresher: Adverse selection describes a situation in which one party to a transaction has information that the other does not. To fight adverse selection, insurance companies reduce exposure to large claims by limiting coverage or raising premiums.
Our Response: In September 2018, Purdue University published a research study that looked into adverse selection in ISAs. The study concluded that there was no adverse selection by student ability among borrowers. However, ISA providers need to properly structure the ISA so as not to cap a recipient’s upside by too much. In addition, this risk can be mitigated by (i) offering a structured educational curriculum for high-income jobs and (ii) an application process that ensures that students have the ability and motivation to complete a given vocational program.
Our Response: Properly structured ISAs paired with effective offerings (i.e., skills-based training, career development assistance) have the potential to mitigate inequality and discriminatory practices. ISA programs like Lambda School require students to be motivated to succeed and have enough income to complete the program, but in no way discriminate based on age, gender or ethnicity.
However, as ISAs become more common, new legislation must include explicit protections to guard against discrimination in administration of ISAs (especially given that it’s unclear whether the Equal Credit Opportunity Act would apply to ISAs since they aren’t technically loans).
Our Response: ISA providers like Lambda School are already starting to negotiate directly with employers to ensure that students have a job after completing the curriculum. These relationships mitigate the risk of a student refusing to pay. Lambda School is able to do this because it’s developed such a strong curriculum. Furthermore, students face reputation risk should they try to avoid meeting their obligations to the ISA provider.
Future legislation should address instances where a student avoids payment or chooses to take a job with no salary (i.e., a student completes a coding bootcamp, but has a change of heart and goes to work at a non-profit that pays below the minimum income threshold).
Our Response: ISAs are not for everyone. ISAs are best suited for people with greater expected volatility in their future earnings (rather than people with a strong likelihood of a certain salary). This is similar to new businesses choosing between equity investment and debt to finance their operations: businesses with clear expectations of future cash flows generally benefit more from debt than equity. Individuals looking to finance their education are no different. Similarly, ISAs don’t need to be all or nothing: individuals can capitalize their education with a mix of student loans and ISAs to get a more optimal blend.
The forthcoming wave of Web 3.0 goes far beyond the initial use case of cryptocurrencies. Through the richness of interactions now possible and the global scope of counterparties available, Web 3.0 will cryptographically connect data from individuals, corporations and machines with efficient machine learning algorithms, leading to the rise of fundamentally new markets and associated business models.
The future impact of Web 3.0 makes undeniable sense, but the question remains, which business models will crack the code to provide lasting and sustainable value in today’s economy?
We will dive into native business models that have been and will be enabled by Web 3.0, while first briefly touching upon the quick-forgotten but often arduous journeys leading to the unexpected & unpredictable successful business models that emerged in Web 2.0.
To set the scene anecdotally for Web 2.0’s business model discovery process, let us not forget the journey that Google went through from their launch in 1998 to 2002 before going public in 2004:
After struggling for four years, a single small modification to their business model launched Google into orbit to become one of the world’s most valuable companies.
The earliest iterations of online content merely involved the digitisation of existing newspapers and phone books … and yet, we’ve now seen Roma (Alfonso Cuarón) receive 10 Academy Awards Nominations for a movie distributed via the subscription streaming giant Netflix.
Amazon started as an online bookstore that nobody believed could become profitable … and yet, it is now the behemoth of marketplaces covering anything from gardening equipment to healthy food to cloud infrastructure.
Open source software development started off with hobbyists and an idealist view that software should be a freely-accessible common good … and yet, the entire internet runs on open source software today, creating $400B of economic value a year; GitHub was acquired by Microsoft for $7.5B, and Red Hat makes $3.4B in yearly revenues providing services for Linux.
In the early days of Web 2.0, it might have been inconceivable that after massively spending on proprietary infrastructure one could deliver business software via a browser and become economically viable … and yet, today the large majority of B2B businesses run on SaaS models.
It was hard to believe that anyone would be willing to climb into a stranger’s car or rent out their couch to travellers … and yet, Uber and AirBnB have become the largest taxi operator and accommodation providers in the world, without owning any cars or properties.
While Google and Facebook might have gone into hyper-growth early on, they didn’t have a clear plan for revenue generation for the first half of their existence … and yet, the advertising model turned out to fit them almost too well, and they now generate 58% of the global digital advertising revenues ($111B in 2018) which has become the dominant business model of Web 2.0.
Taking a look at Web 3.0 over the past 10 years, initial business models tend not to be repeatable or scalable, or simply try to replicate Web 2.0 models. We are convinced that while there is some scepticism about their viability, the continuous experimentation by some of the smartest builders will lead to incredibly valuable models being built over the coming years.
By exploring both the more established and the more experimental Web 3.0 business models, we aim to understand how some of them will accrue value over the coming years.
Bitcoin came first. Proof of Work coupled with Nakamoto Consensus created the first Byzantine Fault Tolerant & fully open peer to peer network. Its intrinsic business model relies on its native asset: BTC — a provable scarce digital token paid out to miners as block rewards. Others, including Ethereum, Monero and ZCash, have followed down this path, issuing ETH, XMR and ZEC.
These native assets are necessary for the functioning of the network and derive their value from the security they provide: by providing a high enough incentive for honest miners to provide hashing power, the cost for malicious actors to perform an attack grows alongside the price of the native asset, and in turn, the added security drives further demand for the currency, further increasing its price and value. The value accrued in these native assets has been analysed & quantified at length.
Some of the earliest companies that formed around crypto networks had a single mission: make their respective networks more successful & valuable. Their resultant business model can be condensed to “increase their native asset treasury; build the ecosystem”. Blockstream, acting as one of the largest maintainers of Bitcoin Core, relies on creating value from its balance sheet of BTC. Equally, ConsenSys has grown to a thousand employees building critical infrastructure for the Ethereum ecosystem, with the purpose of increasing the value of the ETH it holds.
While this perfectly aligns the companies with the networks, the model is hard to replicate beyond the first handful of companies: amassing a meaningful enough balance of native assets becomes impossible after a while … and the blood, toil, tears and sweat of launching & sustaining a company cannot be justified without a large enough stake for exponential returns. As an illustration, it wouldn’t be rational for any business other than a central bank — say, a US remittance provider — to base its business purely on holding large sums of USD while working to make the US economy more successful.
The subsequent generation of business models focused on building the financial infrastructure for these native assets: exchanges, custodians & derivatives providers. They were all built with a simple business objective — providing services for users interested in speculating on these volatile assets. While the likes of Coinbase, Bitstamp & Bitmex have grown into billion-dollar companies, they do not have a fully monopolistic nature: they provide convenience & enhance the value of their underlying networks. The open & permissionless nature of the underlying networks makes it impossible for companies to lock in a monopolistic position by virtue of providing “exclusive access”, but their liquidity and brands provide defensible moats over time.
With The Rise of the Token Sale, a new wave of projects in the blockchain space based their business models on payment tokens within networks: often creating two-sided marketplaces and enforcing the use of a native token for any payments made. The assumption is that as the network’s economy grows, demand for the limited native payment token increases, which should lead to an increase in the token’s value. While the value accrual of such a token model is debated, the increased friction for the user is clear — what could have been paid in ETH or DAI now requires additional exchanges on both sides of a transaction. While this model was widely used during the 2017 token mania, its friction-inducing characteristics have rapidly removed it from the forefront of development over the past 9 months.
Revenue generating communities, companies and projects with a token might not always be able to pass the profits on to the token holders in a direct manner. A model that garnered a lot of interest as one of the characteristics of the Binance (BNB) and MakerDAO (MKR) tokens was the idea of buybacks / token burns. As revenues flow into the project (from trading fees for Binance and from stability fees for MakerDAO), native tokens are bought back from the public market and burned, resulting in a decrease of the supply of tokens, which should lead to an increase in price. It’s worth exploring Arjun Balaji’s evaluation (The Block), in which he argues the Binance token burning mechanism doesn’t actually result in the equivalent of an equity buyback: as there are no dividends paid out at all, the “earning per token” remains at $0.
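Mechanically, a buyback-and-burn is simple. All figures in this sketch are invented for illustration; it shows only the supply effect, which is exactly why (as Balaji notes) it is not equivalent to an equity buyback with dividends.

```python
# Sketch of the buyback-and-burn mechanics described above.
# All figures are invented for illustration.

supply = 200_000_000           # tokens outstanding
price = 2.00                   # market price in USD
quarterly_revenue = 10_000_000 # protocol revenue this quarter
burn_fraction = 0.20           # share of revenue used to buy and burn tokens

tokens_burned = burn_fraction * quarterly_revenue / price
supply_after = supply - tokens_burned

print(f"burned {tokens_burned:,.0f} tokens; supply now {supply_after:,.0f}")
```

Each holder’s fractional claim on the network grows slightly, but no cash ever reaches holders directly, which is the crux of the "earnings per token remains $0" critique.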
One of the business models for crypto-networks that we are seeing ‘hold water’ is the work token: a model that focuses exclusively on the revenue-generating supply side of a network in order to reduce friction for users. Some good examples include Augur’s REP and Keep Network’s KEEP tokens. A work token model operates similarly to classic taxi medallions, as it requires service providers to stake / bond a certain amount of native tokens in exchange for the right to provide profitable work to the network. One of the most powerful aspects of the work token model is the ability to incentivise actors with both carrot (rewards for the work) & stick (stake that can be slashed). Beyond providing security to the network by incentivising the service providers to execute honest work (as they have locked skin in the game denominated in the work token), such tokens can also be evaluated based on predictable future cash flows to the collective of service providers (we have previously explored the benefits and valuation methods for such tokens in this blog). In brief, such tokens should be valued based on the future expected cash flows attributable to all the service providers in the network, which can be modelled out based on assumptions about pricing and usage of the network.
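The valuation approach just described can be sketched as a simple discounted-cash-flow model over the fees flowing to service providers. Every input below (fee level, growth rate, discount rate, token supply) is an assumption chosen purely for illustration.

```python
# Minimal DCF sketch of the work-token valuation idea described above.
# All inputs are illustrative assumptions, not data about any real network.

def work_token_value(annual_fees, growth, discount_rate, years, token_supply):
    """Per-token value as the NPV of fees accruing to service providers."""
    npv = 0.0
    fees = annual_fees
    for year in range(1, years + 1):
        fees *= 1 + growth                      # fees grow with network usage
        npv += fees / (1 + discount_rate) ** year
    return npv / token_supply

value = work_token_value(
    annual_fees=5_000_000,    # fees paid to service providers today
    growth=0.25,              # assumed growth in network usage
    discount_rate=0.40,       # high rate reflecting execution risk
    years=10,
    token_supply=100_000_000,
)
print(f"implied value per token: ${value:.4f}")
```

The high discount rate is doing a lot of work here: early-stage networks carry enough execution risk that most of the modelled value sits in the nearest few years of cash flow.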
A wide array of other models is being explored and worth touching upon:
With this wealth of new business models arising and being explored, it becomes clear that while there is still room for traditional venture capital, the role of the investor and of capital itself is evolving. The capital itself morphs into a native asset within the network which has a specific role to fulfil. From passive network participation to bootstrap networks post financial investment (e.g. computational work or liquidity provision) to direct injections of subjective work into the networks (e.g. governance or CDP risk evaluation), investors will have to reposition themselves for this new organisational mode driven by trust minimised decentralised networks.
When looking back, we realise Web 1.0 & Web 2.0 took exhaustive experimentation to find the appropriate business models, which have created the tech titans of today. We are not ignoring the fact that Web 3.0 will have to go on an equally arduous journey of iterations, but once we find adequate business models, they will be incredibly powerful: in trust minimised settings, both individuals and enterprises will be enabled to interact on a whole new scale without relying on rent-seeking intermediaries.
Today we see 1000s of incredibly talented teams pushing forward implementations of some of these models or discovering completely new viable business models. As the models might not fit the traditional frameworks, investors might have to adapt by taking on new roles and providing work as well as capital (a journey we have already started at Fabric Ventures), but as long as we can see predictable and rational value accrual, it makes sense to double down, as every day the execution risk is getting smaller and smaller.
Source: https://medium.com/fabric-ventures/which-new-business-models-will-be-unleashed-by-web-3-0-4e67c17dbd10
Open-source software (OSS) is a catalyst for growth and change in the IT industry, and one can’t overstate its importance to the sector. Quoting Mike Olson, co-founder of Cloudera, “No dominant platform-level software infrastructure has emerged in the last ten years in closed-source, proprietary form.”
Apart from independent OSS projects, an increasing number of companies, including the blue chips, are opening their source code to the public. They start by distributing their internally developed products for free, giving rise to widespread frameworks and libraries that later become an industry standard (e.g., React, Flow, Angular, Kubernetes, TensorFlow, V8, to name a few).
Adding to this momentum, there has been a surge in venture capital dollars being invested into the sector in recent years. Several high profile funding rounds have been completed, with multimillion dollar valuations emerging (Chart 1).
But are these valuations justified? And more importantly, can the business perform, both growth-wise and profitability-wise, as venture capitalists expect? OSS companies typically monetize with a business model based around providing support and consulting services. How well does this model translate to the traditional VC growth model? Is the OSS space in a VC-driven bubble?
In this article, I assess the questions above, and find that the traditional monetization model for OSS companies based on providing support and consulting services doesn’t seem to lend itself well to the venture capital growth model, and that OSS companies likely need to switch their pricing and business models in order to justify their valuations.
By definition, open source software is free. This of course generates obvious advantages to consumers, and in fact, a 2008 study by The Standish Group estimates that “free open source software is [saving consumers] $60 billion [per year in IT costs].”
While providing free software is obviously good for consumers, it still costs money to develop. Very few companies are able to live on donations and sponsorships. And with fierce competition from proprietary software vendors, growing R&D costs, and ever-increasing marketing requirements, providing a “free” product necessitates a sustainable path to market success.
As a result of the above, a commonly seen structure related to OSS projects is the following: A “parent” commercial company that is the key contributor to the OSS project provides support to users, maintains the product, and defines the product strategy.
Latched on to this are the monetization strategies, the most common being the following:
Historically, the vast majority of OSS projects have pursued the first monetization strategy (support and consulting), but at their core, all of these models allow a company to earn money on their “bread and butter” and feed the development team as needed.
An interesting recent development has been the huge inflows of VC/PE money into the industry. Going back to 2004, only nine firms producing OSS had raised venture funding, but by 2015, that number had exploded to 110, raising over $7 billion from venture capital funds (Chart 2).
Underpinning this development is the large addressable market that OSS companies benefit from. Akin to other “platform” plays, OSS allows companies (in theory) to rapidly expand their customer base, with the idea that at some point in the future they can leverage this growth by beginning to tack-on appropriate monetization models in order to start translating their customer base into revenue, and profits.
At the same time, we’re also seeing an increasing number of reports about potential IPOs in the sector. Several OSS commercial companies, some of them unicorns with $1B+ valuations, have been rumored to be mulling a public markets debut (MongoDB, Cloudera, MapR, Alfresco, Automattic, Canonical, etc.).
With this in mind, the obvious question is whether the OSS model works from a financial standpoint, particularly for VC and PE investors. After all, the venture funding model necessitates rapid growth in order to comply with their 7-10 year fund life cycle. And with a product that is at its core free, it remains to be seen whether OSS companies can pin down the correct monetization model to justify the number of dollars invested into the space.
Answering this question is hard, mainly because most of these companies are private and therefore do not disclose their financial performance. Usually, the only sources of information that can be relied upon are the estimates of industry experts and management interviews where unaudited key performance metrics are sometimes disclosed.
Nevertheless, in this article, I take a look at the evidence from the only two public OSS companies in the market, Red Hat and Hortonworks, and use their publicly available information to try and assess the more general question of whether the OSS model makes sense for VC investors.
Red Hat is an example of a commercial company that pioneered the open source business model. Founded in 1993, the company went public in 1999 at the height of the Dot Com Bubble, achieving what was then the eighth-biggest first-day share price gain in the history of Wall Street.
At the time of its IPO, Red Hat was not a profitable company, but it has since managed to post solid financial results, as detailed in Table 1.
Instead of chasing multifold annual growth, Red Hat has followed the “boring” path of gradually building a sustainable business. Over the last ten years, the company increased its revenues tenfold from $200 million to $2 billion with no significant change in operating and net income margins. G&A and marketing expenses never exceeded 50% of revenue (Chart 3).
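The trajectory described above implies a steady but unspectacular growth rate. As a rough sketch, using the article's round numbers (roughly $200 million to $2 billion over ten years), the implied compound annual growth rate works out to about 26%:

```python
# Implied compound annual growth rate (CAGR) for Red Hat's revenue ramp
# described above: roughly $200M to $2B over ten years. Inputs are the
# article's round figures, not exact financials.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate as a fraction, e.g. 0.26 for 26%."""
    return (end / start) ** (1 / years) - 1

rate = cagr(200e6, 2e9, 10)
print(f"Implied CAGR: {rate:.1%}")  # ~25.9% per year
```

Healthy for a public company, but well below the multifold annual growth that venture-stage investors typically underwrite.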
The above indicates therefore that OSS companies do have a chance to build sustainable and profitable business models. Red Hat’s approach of focusing primarily on offering support and consulting services has delivered gradual but steady growth, and the company is hardly facing any funding or solvency problems, posting decent profitability metrics when compared to peers.
However, what is clear from the Red Hat case study is that such a strategy can take time—many years, in fact. While this is a perfectly reasonable situation for most companies, the issue is that it doesn’t sit well with venture capital funds who, by the very nature of their business model, require far more rapid growth profiles.
More troubling still for venture capital investors is that the OSS model may in and of itself not allow for the type of growth that such funds require. As MySQL CEO Mårten Mickos put it, MySQL’s goal was “to turn the $10 billion a year database business into a $1 billion one.”
In other words, the open source approach limits the market size from the get-go by focusing the company only on enterprise customers who are able to pay for support, foregoing revenue from the long tail of SME and retail clients. That may help explain Red Hat’s less-than-exciting stock price performance post-IPO (Chart 4).
If such a conclusion were true, this would spell trouble for those OSS companies that have raised significant amounts of VC dollars along with the funds that have invested in them.
To further assess our overarching question of OSS’s viability as a venture capital investment, I took a look at another public OSS company: Hortonworks.
The Hadoop vendors’ market is an interesting one because it is built entirely around the “open core” idea (a comparable market being the NoSQL database space, with MongoDB, Datastax, and Couchbase).
All three of the largest Hadoop vendors—Cloudera, Hortonworks, and MapR—are based on essentially the same OSS stack (with some specific differences) but interestingly have different monetization models. In particular, Hortonworks—the only public company among them—is the only player that provides all of its software for free and charges only for support, consulting, and training services.
At first glance, Hortonworks’ post-IPO path appears to differ considerably from Red Hat’s in that it seems to be a story of rapid growth and success. The company was founded in 2011, tripled its revenue every year for three consecutive years, and went public in 2014.
Immediate reception in the public markets was strong, with the stock popping 65% in the first few days of trading. Nevertheless, the company’s story since IPO has turned decisively sour. In January 2016, the company was forced to access the public markets again for a secondary public offering, a move that prompted a 60% share price fall within a month (Chart 5).
Underpinning all this is the fact that despite top-line growth, the company continues to incur substantial, and growing, operating losses. It’s evident from the financial statements that its operating performance has worsened over time, mainly because operating expenses have grown faster than revenue, leading to increasing losses as a percent of revenue (Table 2).
In every period in question, Hortonworks spent more on sales and marketing than it earned in revenue. On top of that, the company incurred significant R&D and G&A expenses as well (Table 2).
On average, Hortonworks is burning around $100 million cash per year (less than its operating loss because of stock-based compensation expenses and changes in deferred revenue booked on the Balance Sheet). This amount is very significant when compared to its $630 million market capitalization and circa $350 million raised from investors so far. Of course, the company can still raise debt (which it did, in November 2016, to the tune of a $30 million loan from SVB), but there’s a natural limit to how often it can tap the debt markets.
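A back-of-the-envelope check shows why that burn rate matters. The sketch below uses the rough figures cited above (~$100 million annual burn, ~$350 million raised, plus the $30 million SVB loan) and assumes, generously, that every dollar ever raised were still in the bank:

```python
# Rough runway check for the Hortonworks figures cited above. All
# inputs are the article's approximate numbers, not audited financials.

def runway_years(cash_available: float, annual_burn: float) -> float:
    """Years of runway at a constant burn rate."""
    return cash_available / annual_burn

# ~$350M raised + $30M SVB loan, burning ~$100M/year: even under the
# unrealistic assumption that none of it has been spent yet, the
# ceiling is under four years of runway.
print(runway_years(350e6 + 30e6, 100e6))  # 3.8
```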
All of this might of course be justified if the marketing expense served an important purpose. One such purpose could be the company’s need to diversify its customer base. In fact, when Hortonworks first launched, the company was heavily reliant on a few major clients (Yahoo and Microsoft, the latter accounting for 37% of revenues in 2013). This has now changed, and by 2016, the company reported 1000 customers.
But again, even if this were the reason, one cannot ignore the costs required to achieve it. After all, marketing expenses increased eightfold between 2013 and 2015. And how valuable are the clients that Hortonworks has acquired? Unfortunately, the company reports little information on the makeup of its client base, so it’s hard to assess other important metrics such as client “stickiness.” But in a competitive OSS market where “rival developers could build the same tools—and make them free—essentially stripping the value from the proprietary software,” strong doubts loom.
With all this in mind, returning to our original question of whether the OSS model makes for good VC investments, while the Hortonworks growth story certainly seems to counter Red Hat’s—and therefore sustain the idea that such investments can work from a VC standpoint—I remain skeptical. Hortonworks seems to be chasing market share at exorbitant and unsustainable costs. And while this conclusion is based on only two companies in the space, it is enough to raise serious doubts about the overall model’s fit for VC.
Given the above, it seems questionable that OSS companies make for good VC investments. So with this in mind, why do venture capital funds continue to invest in such companies?
Apart from going public and growing organically, an OSS company may find a strategic buyer to provide a good exit opportunity for its early stage investors. And in fact, the sector has seen several high profile acquisitions over the years (Table 3).
What makes an OSS company a good target? In general, the underlying strategic rationale for an acquisition might be as follows:
What about the financial rationale? The standard transaction-multiples valuation approach breaks down completely when it comes to the OSS market. Multiples reach 20x and even 50x price/sales, and are therefore largely irrelevant, leading to the obvious conclusion that such deals are not financially but strategically motivated, and that the financial health of the target is more of a “nice to have.”
With this in mind, would a strategy of investing in OSS companies with the eventual aim of a strategic sale make sense? After all, there seems to be a decent track record to go off of.
My assessment is that this strategy on its own is not enough. Pursuing such an approach from the start is risky—there are not enough exits in the history of OSS to justify the risks.
While the promise of a lucrative strategic sale may be enough to motivate VC funds to put money to work in the space, as discussed above, it remains a risky path. As such, it feels like the rationale for such investments must be reliant on other factors as well. One such factor could be returning to basics: building profitable companies.
But as we have seen in the case studies above, this strategy doesn’t seem to be working out so well, certainly not within the timeframes required for VC investors. Nevertheless, it is important to point out that both Red Hat and Hortonworks primarily focus on monetizing through offering support and consulting services. As such, it would be wrong to dismiss OSS monetization prospects altogether. More likely, monetization models focused on support and consulting are inappropriate, but others may work better.
In fact, the SaaS business model might be the answer. As per Peter Levine’s analysis, “by packaging open source into a service, […] companies can monetize open source with a far more robust and flexible model, encouraging innovation and ongoing investment in software development.”
Why is SaaS a better model for OSS? There are several reasons for this, most of which are applicable not only to OSS SaaS, but to SaaS in general.
First, SaaS opens the market for the long tail of SME clients. Smaller companies usually don’t need enterprise support and on-premises installation, but may already have sophisticated needs from a technology standpoint. As a result, it’s easier for them to purchase a SaaS product and pay a relatively low price for using it.
As MongoDB’s VP of Strategy, Kelly Stirman, put it: “Where we have a suite of management technologies as a cloud service, that is geared for people that we are never going to talk to and it’s at a very attractive price point—$39 a server, a month. It allows us to go after this long tail of the market that isn’t Fortune 500 companies, necessarily.”
Second, SaaS scales well. SaaS creates economies of scale for clients by allowing them to save money on infrastructure and operations through aggregation of resources and a combination and centralization of customer requirements, which improves manageability.
This, therefore, makes it an attractive model for clients who, as a result, will be more willing to lock themselves into monthly payment plans in order to reap the benefits of the service.
Finally, SaaS businesses are more difficult to replicate. In the traditional OSS model, everyone has access to the source code, so the support and consulting business model hardly has protection for the incumbent from new market entrants.
In the SaaS OSS case, the investment required for building the infrastructure upon which clients rely is fairly onerous. This, therefore, builds bigger barriers to entry, and makes it more difficult for competitors who lack the same amount of funding to replicate the offering.
Importantly, OSS SaaS companies can be financially viable on their own. GitHub is a good example of this.
Founded in 2008, GitHub was able to bootstrap the business for four years without any external funding. The company has reportedly always been cash-flow positive (except for 2015) and generated estimated revenues of $100 million in 2016. In 2012, they accepted $100 million in funding from Andreessen Horowitz and later in 2015, $250 million from Sequoia with an implied $2 billion valuation.
Another well-known successful OSS company is Databricks, which provides commercial support for Apache Spark but—more importantly—allows its customers to run Spark in the cloud. The company has raised $100 million from Andreessen Horowitz, Data Collective, and NEA. Unfortunately, we don’t have much insight into its profitability, but it is reported to be performing strongly and already had more than 500 companies using the technology as of 2015.
Generally, many OSS companies are in one way or another gradually drifting towards the SaaS model or other types of cloud offerings. For instance, Red Hat is moving to PaaS over support and consulting, as evidenced by OpenShift and the acquisition of AnsibleWorks.
Different ways of mixing support and consulting with SaaS are common too. We unfortunately don’t have detailed statistics on Elastic’s on-premises vs. cloud product mix, but we can see from the presentation of its closest competitor, Splunk, that Splunk’s SaaS offering is gaining scale: its share of revenue is expected to triple by 2020 (Chart 6).
To conclude, while recent years have seen an influx of venture capital dollars poured into OSS companies, there are strong doubts that such investments make sense if the monetization models being used remain focused on the traditional support and consulting model. Such a model can work (as seen in the Red Hat case study) but cannot scale at the pace required by VC investors.
Of course, VC funds may always hope for a lucrative strategic exit, and there have been several examples of such transactions. But relying on this alone is not enough. OSS companies need to innovate around monetization strategies in order to build profitable and fast-growing companies.
The most plausible answer to this conundrum may come from switching to SaaS as a business model. SaaS allows one to tap into a longer-tail of SME clients and improve margins through better product offerings. Quoting Peter Levine again, “Cloud and SaaS adoption is accelerating at an order of magnitude faster than on-premise deployments, and open source has been the enabler of this transformation. Beyond SaaS, I would expect there to be future models for open source monetization, which is great for the industry”.
Whatever ends up happening, the sheer amount of venture investment into OSS companies means that smarter monetization strategies will be needed to keep the open source dream alive.
Source: https://www.toptal.com/finance/venture-capital-consultants/open-source-software-investable-business-model-or-not
There are nearly 300 industrial-focused companies within the Fortune 1,000. The median revenue for these economy-anchoring firms is nearly $4.9 billion, and together they account for over $9 trillion in market capitalization.
Due to the boring nature of some of these industrial verticals and the complexity of the value chain, venture-related events tend to get lost within our traditional VC news channels. But entrepreneurs (and VCs willing to fund them) are waking up to the potential rewards of gaining access to these markets.
Just how active is the sector now?
That’s right: Last year nearly $6 billion went into Series A, B & C startups within the industrial, engineering & construction, power, energy, mining & materials, and mobility segments. Venture capital dollars deployed to these sectors is growing at a 30 percent annual rate, up from ~$750 million in 2010.
And while $6 billion invested is notable due to the previous benchmarks, this early stage investment figure still only equates to ~0.2 percent of the revenue for the sector and ~1.2 percent of industry profits.
The number of deals in the space shows a similarly strong growth trajectory. But there are some interesting trends beginning to emerge: The capital deployed to the industrial technology market is growing at a faster clip than the number of deals. These differing growth trajectories mean that the average deal size has grown by 45 percent in the last eight years, from $18 to $26 million.
Median Series A deal size in 2018 was $11 million, representing a modest 8 percent increase in size versus 2012/2013. But Series A deal volume is up nearly 10x since then!
Median Series B deal size in 2018 was $20 million, an 83 percent growth over the past five years and deal volume is up about 4x.
Median Series C deal size in 2018 was $33 million, representing an enormous 113 percent growth over the past five years. But Series C deals have appeared to reach a plateau in the low 40s, so investors are becoming pickier in selecting the winners.
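The deal-size growth figures above can be sanity-checked directly from the averages quoted earlier: $18 million growing to $26 million over eight years, close to the ~45 percent total cited. A quick sketch:

```python
# Sanity check on the industrial-tech deal-size figures quoted above:
# average deal size growing from $18M to $26M over eight years.

def pct_growth(old: float, new: float) -> float:
    """Total growth as a fraction of the starting value."""
    return (new - old) / old

def implied_cagr(old: float, new: float, years: int) -> float:
    """Annualized growth rate consistent with the two endpoints."""
    return (new / old) ** (1 / years) - 1

print(f"Total growth: {pct_growth(18, 26):.1%}")              # ~44.4%
print(f"Implied annual rate: {implied_cagr(18, 26, 8):.1%}")  # ~4.7%
```

So average deal size has grown a few percent a year, while the dollars deployed to the sector have compounded at ~30 percent: most of the growth is coming from deal volume, not deal size.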
These graphs show that the Series A investors have stayed relatively consistent and that the overall 46 percent increase in sector deal size growth primarily originates from the Series B and Series C investment rounds. With bigger rounds, how are valuation levels adjusting?
The data shows that valuations have increased even faster than the round sizes have grown themselves. This means management teams are not feeling any incremental dilution by raising these larger rounds.
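The no-extra-dilution point follows from simple round arithmetic: the fraction of the company sold is the round size divided by the post-money valuation, so if valuations rise faster than round sizes, founders sell a smaller slice despite raising more. The numbers below are hypothetical, chosen only to illustrate the mechanic:

```python
# Dilution from a priced round is round size / post-money valuation.
# The figures below are hypothetical illustrations, not data from the
# article: they show how a bigger round at a much higher valuation can
# mean LESS dilution for the management team.

def dilution(round_size: float, post_money: float) -> float:
    """Fraction of the company sold in the round."""
    return round_size / post_money

print(f"{dilution(11, 44):.0%}")   # $11M on $44M post-money  -> 25%
print(f"{dilution(20, 100):.0%}")  # $20M on $100M post-money -> 20%
```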
Source: https://venturebeat.com/2019/01/22/industrial-tech-may-not-be-sexy-but-vcs-are-loving-it/
In 1776, Adam Smith released his magnum opus, An Inquiry into the Nature and Causes of the Wealth of Nations, in which he outlined his fundamental economic theories. Front and center in the book — in fact in Book 1, Chapter 1 — is his realization of the productivity improvements made possible through the “Division of Labour”:
It is the great multiplication of the production of all the different arts, in consequence of the division of labour, which occasions, in a well-governed society, that universal opulence which extends itself to the lowest ranks of the people. Every workman has a great quantity of his own work to dispose of beyond what he himself has occasion for; and every other workman being exactly in the same situation, he is enabled to exchange a great quantity of his own goods for a great quantity, or, what comes to the same thing, for the price of a great quantity of theirs. He supplies them abundantly with what they have occasion for, and they accommodate him as amply with what he has occasion for, and a general plenty diffuses itself through all the different ranks of society.
Smith identified that when men and women specialize their skills, and also importantly “trade” with one another, the end result is a rise in productivity and standard of living for everyone. In 1817, David Ricardo published On the Principles of Political Economy and Taxation where he expanded upon Smith’s work in developing the theory of Comparative Advantage. What Ricardo proved mathematically, is that if one country has simply a comparative advantage (not even an absolute one), it still is in everyone’s best interest to embrace specialization and free trade. In the end, everyone ends up in a better place.
There are two key requirements for these mechanisms to take force. First and foremost, you need free and open trade. It is quite bizarre to see modern-day politicians throw caution to the wind and ignore these fundamental tenets of economic science. Time and time again, the fact patterns show that when countries open borders and freely trade, the end result is increased economic prosperity. The second, and less discussed, requirement is for the two parties that should trade to be aware of one another’s goods or services. Unfortunately, both information asymmetry and physical distance, with its resulting distribution costs, can cut against the economic advantages that would otherwise arise for all.
Fortunately, the rise of the Internet, and specifically of Internet marketplace models, acts as an accelerant to the productivity benefits of the division of labour AND comparative advantage by reducing information asymmetry and increasing the likelihood of a perfect match with regard to the exchange of goods or services. In his 2005 book, The World Is Flat, Thomas Friedman recognizes that the Internet has the ability to create a “level playing field” for all participants, one where geographic distances become less relevant. The core reason that Internet marketplaces are so powerful is that, in connecting economic traders that would otherwise not be connected, they unlock economic wealth that otherwise would not exist. In other words, they literally create “money out of nowhere.”
Any discussion of Internet marketplaces begins with the first quintessential marketplace, eBay(*). Pierre Omidyar founded AuctionWeb in September of 1995, and its rise to fame is legendary. What started as a web site to trade laser pointers and Beanie Babies (the Pez dispenser origin story is quite literally a legend), today enables transactions of approximately $100B per year. Over its twenty-plus year lifetime, just over one trillion dollars in goods have traded hands across eBay’s servers. These transactions, and the profits realized by the sellers, were truly “unlocked” by eBay’s matching and auction services.
In 1999, Jack Ma created Alibaba, a Chinese-based B2B marketplace for connecting small and medium enterprise with potential export opportunities. Four years later, in May of 2003, they launched Taobao Marketplace, Alibaba’s answer to eBay. By aggressively launching a free to use service, Alibaba’s Taobao quickly became the leading person-to-person trading site in China. In 2018, Taobao GMV (Gross Merchandise Value) was a staggering RMB2,689 billion, which equates to $428 billion in US dollars.
There have been many other successful goods marketplaces that have launched post eBay & Taobao — all providing a similar service of matching those who own or produce goods with a distributed set of buyers who are particularly interested in what they have to offer. In many cases, a deeper focus on a particular category or vertical allows these marketplaces to distinguish themselves from broader marketplaces like eBay.
With the launch of Airbnb in 2008 and Uber(*) in 2009, these two companies established a new category of marketplaces known as the “sharing economy.” Homes and automobiles are the two most expensive items that people own, and in many cases the ability to own the asset is made possible through debt — mortgages on houses and car loans or leases for automobiles. Despite this financial exposure, for many people these assets are materially underutilized. Many extra rooms and second homes are vacant most of the year, and the average car is used less than 5% of the time. Sharing economy marketplaces allow owners to “unlock” earning opportunities from these underutilized assets.
Airbnb was founded by Joe Gebbia and Brian Chesky in 2008. Today there are over 5 million Airbnb listings in 81,000 cities. Over two million people stay in an Airbnb each night. In November of this year, the company announced that it had achieved “substantially” more than $1B in revenue in the third quarter. Assuming a marketplace rake of something like 11%, this would imply gross room revenue of over $9B for the quarter — which would be $36B annualized. As the company is still growing, we can easily guess that in 2019-2020 time frame, Airbnb will be delivering around $50B per year to home-owners who were previously sitting on highly underutilized assets. This is a major “unlocking.”
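The gross-revenue estimate above is a take-rate calculation: dividing net revenue by the assumed rake yields the implied gross merchandise value (GMV). A minimal sketch with the figures in question ($1B in quarterly revenue and an assumed ~11% rake):

```python
# Implied gross marketplace volume (GMV) from net revenue and an
# assumed take rate ("rake"), as in the Airbnb estimate above. The 11%
# rake is the article's assumption, not a disclosed figure.

def implied_gmv(net_revenue: float, take_rate: float) -> float:
    """GMV such that GMV * take_rate == net_revenue."""
    return net_revenue / take_rate

quarterly = implied_gmv(1e9, 0.11)
print(f"Implied quarterly GMV: ${quarterly / 1e9:.1f}B")  # ~$9.1B
print(f"Annualized: ${4 * quarterly / 1e9:.0f}B")         # ~$36B
```

The same arithmetic, run in reverse, gives the ride-sharing estimate in the next paragraph: GMV times one minus the rake is what flows to the supply side.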
When Garrett Camp and Travis Kalanick founded Uber in 2009, they hatched the industry now known as ride-sharing. Today over 3 million people around the world use their time and their underutilized automobiles to generate extra income. Without the proper technology to match people who wanted a ride with people who could provide that service, taxi and chauffeur companies were drastically underserving the potential market. As an example, we estimate that ride-sharing revenues in San Francisco are well north of 10X what taxis and black cars were providing prior to the launch of ride-sharing. These numbers will go even higher as people increasingly forgo the notion of car ownership altogether. We estimate that the global GMV for ride sharing was over $100B in 2018 (including Uber, Didi, Grab, Lyft, Yandex, etc) and still growing handsomely. Assuming a 20% rake, this equates to over $80B that went into the hands of ride-sharing drivers in a single year — and this is an industry that did not exist 10 years ago. The matching made possible with today’s GPS and Internet-enabled smart phones is a massive unlocking of wealth and value.
While it is a lesser-known category, using your own backyard and home to host dog guests as an alternative to a kennel is a large and growing business. Once again, this is an asset against which the marginal cost to host a dog is near zero. By combining their time with this otherwise unused asset, dog sitters are able to offer a service that is quite compelling for consumers. Rover.com(*) in Seattle, which was founded by Greg Gottesman and Aaron Easterly in 2011, is the leading player in this market. (Benchmark is an investor in Rover through a merger with DogVacay in 2017.) You may be surprised to learn that this is already a massive industry. In less than a decade since the company started, Rover has already paid out over half a billion dollars to hosts that participate on the platform.
While not as well known as the goods exchanges or sharing economy marketplaces, there is a growing and exciting increase in the number of marketplaces that help match specifically skilled labor with key opportunities to monetize their skills. The most noteworthy of these is likely Upwork(*), a company that formed from the merger of Elance and Odesk. Upwork is a global freelancing platform where businesses and independent professionals can connect and collaborate remotely. Popular categories include web developers, mobile developers, designers, writers, and accountants. In the 12 months ended June 30, 2018, the Upwork platform enabled $1.56 billion of GSV (gross services revenue) across 2.0 million projects between approximately 375,000 freelancers and 475,000 clients in over 180 countries. These labor matches represent the exact “world is flat” reality outlined in Friedman’s book.
Other noteworthy and emerging labor marketplaces:
These vertical labor marketplaces are to LinkedIn what companies like Zillow, Expedia, and GrubHub are to Google search. Through a deeper understanding of a particular vertical, a much richer perspective on the quality and differentiation of the participants, and the enablement of transactions — you create an evolved service that has much more value to both sides of the transaction. And for those professionals participating in these markets, your reputation on the vertical service matters way more than your profile on LinkedIn.
Having been a fortunate investor in many of the previously mentioned companies (*), Benchmark remains extremely excited about future marketplace opportunities that will unlock wealth on the Internet. Here are two examples of such companies that we have funded in the past few years.
The New York Times describes Hipcamp as “The Sharing Economy Visits the Backcountry.” Hipcamp(*) was founded in 2013 by Alyssa Ravasio as an engine to search across the dozens and dozens of State and National park websites for campsite availability. As Hipcamp gained traction with campers, landowners with land near many of the National and State parks started to reach out to Hipcamp asking if they could list their land on Hipcamp too. Hipcamp now offers access to more than 350k campsites across public and private land, and their most active private land hosts make over $100,000 per year hosting campers. This is a pretty amazing value proposition for both land owners and campers. If you are a rural landowner, here is a way to create “money out of nowhere” with very little capital expenditures. And if you are a camper, what could be better than to camp at a unique, bespoke campsite in your favorite location.
Instawork(*) is an on-demand staffing app for gig workers (professionals) and hospitality businesses (partners). These working professionals seek economic freedom and a better life, and Instawork gives them both — an opportunity to work as much as they like, but on their own terms with regard to when and where. On the business partner side, small business owners/managers/chefs do not have access to reliable sources to help them with talent sourcing and high turnover, and products like LinkedIn are more focused on white-collar workers. Instawork was cofounded by Sumir Meghani in San Francisco and was a member of the 2015 Y-Combinator class. 2018 was a break-out year for Instawork, with 10X revenue growth and 12X growth in Professionals on the platform. The average Instawork Professional is highly engaged on the platform, typically opening the Instawork app ten times a day. This results in 97% of gigs being matched in less than 24 hours — which is powerfully important to both sides of the network. Also noteworthy, Professionals on Instawork average 150% of minimum wage, significantly higher than on many other labor marketplaces. This higher income allows Instawork Professionals like Jose to begin to accomplish their dreams.
As you can see, these numerous marketplaces are a direct extension of the productivity enhancers first uncovered by Adam Smith and David Ricardo. Free trade, specialization, and comparative advantage are all enhanced when we can increase the matching of supply and demand of goods and services as well as eliminate inefficiency and waste caused by misinformation or distance. As a result, productivity naturally improves.
Specific benefits of global internet marketplaces:
Source: http://abovethecrowd.com/2019/02/27/money-out-of-nowhere-how-internet-marketplaces-unlock-economic-wealth/
At a recent KPMG Robotic Innovations event, Futurist and friend Gerd Leonhard delivered a keynote titled “The Digital Transformation of Business and Society: Challenges and Opportunities by 2020”. I highly recommend viewing the video of his presentation. As Gerd describes, he is a Futurist focused on foresight and observations — not on predicting the future. We are at a point in history where every company needs a Gerd Leonhard. For many of the reasons presented in the video, future thinking is rapidly growing in importance. As Gerd so rightly points out, we are still vastly under-estimating the sheer velocity of change.
With regard to future thinking, Gerd used my future scenario slide to describe both the exponential and combinatorial nature of future scenarios — not only do we need to think exponentially, but we also need to think in a combinatorial manner. Gerd mentioned Tesla as a company that really knows how to do this.
He then described our current pivot point of exponential change: a point in history where humanity will change more in the next twenty years than in the previous 300. With that as a backdrop, he encouraged the audience to look five years into the future and spend 3 to 5% of their time focused on foresight. He quoted Peter Drucker (“In times of change the greatest danger is to act with yesterday’s logic”) and stated that leaders must shift from a focus on what is, to a focus on what could be. Gerd added that “wait and see” means “wait and die” (love that by the way). He urged leaders to focus on 2020 and build a plan to participate in that future, emphasizing the question is no longer what-if, but what-when. We are entering an era where the impossible is doable, and the headline for that era is: exponential, convergent, combinatorial, and inter-dependent — words that should be a key part of the leadership lexicon going forward. Here are some snapshots from his presentation:
Source: B. Joseph Pine II and James Gilmore: The Experience Economy
Gerd then summarized the session as follows:
The future is exponential, combinatorial, and interdependent: the sooner we can adjust our thinking (lateral) the better we will be at designing our future.
My take: Gerd hits on a key point. Leaders must think differently. There is very little in a leader’s collective experience that can guide them through the type of change ahead — it requires us all to think differently
When looking at AI, consider trying IA first (intelligent assistance / augmentation).
My take: These considerations allow us to create the future in a way that avoids unintended consequences. Technology as a supplement, not a replacement
Efficiency and cost reduction based on automation, AI/IA, and robotization are good stories but not the final destination: we need to go beyond the 7-ations and inevitable abundance to create new value that cannot be easily automated.
My take: Future thinking is critical for us to be effective here. We have to have a sense of where all of this is heading if we are to effectively create new sources of value.
We won’t just need better algorithms — we will also need stronger “humarithms,” i.e. values, ethics, standards, principles, and social contracts.
My take: Gerd is an evangelist for creating our future in a way that avoids hellish outcomes — and kudos to him for being that voice.
“The best way to predict the future is to create it” (Alan Kay).
My take: When we think about the future, our mental context places it years away, and that is simply no longer the case. What we think will take ten years is likely to happen in two. We can’t create the future if we don’t focus on it through an exponential lens.
Source : https://medium.com/@frankdiana/digital-transformation-of-business-and-society-5d9286e39dbf
Having founded my own startup a few years ago, I am familiar with why founders go through the pain & grit of building their own company. The statistics on startup survival rates show that the risk is high, but the potential reward, both financial & emotional, is also significant.
In my case, risk was defined by the amount of money I invested in the venture plus the opportunity cost in case the startup went nowhere. The latter relates to the fact that I earned no salary at the beginning & that, by committing to that specific idea, I was instantly saying “no” to many other opportunities and potential career advancements. The reward was two-fold too: the first part was the attractive financial outcome of a potential exit; the second was the freedom to chase opportunities as they appeared, doing what I want and how I want it.
Once I raised capital from investors, I essentially traded reward for reduced risk: I started paying myself a small salary and anticipated that more resources would increase the startup’s likelihood of success.
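The trade-off above can be made concrete with a back-of-the-envelope sketch. All the figures below are hypothetical assumptions of mine, not numbers from my own venture:

```python
# Illustrative sketch of the founder risk calculation described above:
# downside = cash invested + opportunity cost (salary forgone elsewhere).
# All amounts are made-up examples.

def founder_risk(capital_invested: float, annual_salary_gap: float, years: int) -> float:
    """Total downside if the startup goes nowhere: cash in plus forgone income."""
    return capital_invested + annual_salary_gap * years

# Bootstrapped: invest 50k and forgo an 80k market salary for two years.
bootstrapped = founder_risk(50_000, 80_000, 2)

# After raising capital: a 40k founder salary shrinks the yearly gap to 40k,
# so the same two years carry less personal downside.
funded = founder_risk(50_000, 80_000 - 40_000, 2)

print(bootstrapped)  # 210000.0 in this example
print(funded)        # 130000.0 in this example
```

The point of the sketch is simply that outside capital shifts part of the personal downside onto investors, which is the "trading reward for reduced risk" described above.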
This pattern of weighing risk against reward was crystal clear in my mind… until I joined the arena of corporate venture building. During one of my first projects, I was tasked with creating a startup for a blue-chip corporate client, and I was immediately puzzled by the reasoning behind the endeavor.
Ultimately, corporate decisions are also guided by weighing risk against reward: if corporates don’t take risks and innovate, they might be left behind and, in some cases, join the once-great-now-extinct corporate hall of shame. That’s why they invest in research and development, spend hard-earned cash on mergers and acquisitions, and start innovation programs. But my interest was at a more micro level: what reasoning does my corporate client follow to decide whether and how to found a specific new venture?
Having thought about it a lot, I believe that at the micro level corporates weigh investment against control. Investment is the level of capital, manpower & political will the corporate provides to propel the venture towards exit, break-even or strategic relevance. Control is the ability to steer the venture towards the strategic goals the leadership team has in mind while defining the boundaries of what can & cannot be done.
In the startup case, risk/reward is typically shared between the founders and external investors. In a corporate venture building case, investment/control can be shared between the corporate, an empowered founder team and, potentially, external investors.
I am still in the middle of the corporate decision-making process, but I wanted to share the scenarios we are using to guide the discussions on how to structure the new venture. Before I do, I would like to mention that the weighing of investment vs. control takes place at three different stages of the venture’s existence:
• Incubation: develop & validate idea
• Acceleration: validate business model incl. product, operations & customer acquisition (find the winning formula)
• Growth: replicate the formula to grow exponentially
Based on that, three main scenarios are being considered to found the new venture.
In the first scenario, the corporate keeps everything in-house during the early stages. By definition, the incubation and acceleration stages are less capital-intensive and are the moment when the key strategic decisions that shape the future business are made. In these stages, the corporate is interested in maintaining full control of the venture while absorbing the whole investment. Only when the venture enters the capital-intensive growth stage does it become necessary to “share the burden” with other institutional or strategic investors. This scenario is suitable for ventures of high strategic value, especially the ones leveraging core assets and know-how of the corporate mothership.
In the second scenario, the corporate initiator empowers a founder team and joins the project much like an external investor would at the Seed and Series A stages of a startup. They agree on a broad vision, provide the funding and retain a share of the equity, with shareholder meetings in between to track progress. Beyond that, they let the founder team do their thing. External investors can join at any funding round to share the investment tickets. The corporate has lower control and investment from the get-go and can increase its influence only when new funding rounds are required or via an acquisition offer. This scenario is suitable for ventures in which the corporate can act as the first client or use its network to manufacture, market or distribute the product or service.
In the third scenario, the venture is initially built by a founder team or external partners (often a consultancy). Only once they have successfully completed the incubation and acceleration stages does the corporate have the right or obligation to absorb the business. Unlike scenario 2, the corporate gains stronger control over the trajectory of the business during its initial stages by defining what a “transfer” event looks like. The investment needed to put together a strong founder team is reduced by the reward of a pre-defined & short-term exit event. The initial investment can be further reduced by the participation of business angels, who are also motivated by a clear path to exit and access to a new source of deal flow. This scenario is suitable for ventures closely linked to the core business of the corporate, where speed & excellence of execution are key.
There is obviously no right or wrong here. Each scenario can make sense depending on the corporate’s end goal, and there are surely new scenarios and variations of the above. What is important, in my opinion, is to openly discuss which road to take. If the client can’t discern the alternatives and their consequences, you risk a “best of both worlds” mindset where expectations regarding investment & control don’t match. If that happens, you are in for a tough ride.
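To make the discussion with a client concrete, the three scenarios can be laid out side by side on the investment/control axes. The shorthand labels and qualitative ratings below are my own reading of the trade-offs, not terms from any specific engagement:

```python
# Rough side-by-side of the three corporate venture building scenarios
# discussed above. Labels and ratings are illustrative shorthand.

scenarios = {
    "scenario 1 (corporate-owned)": {
        "early investment": "full",
        "early control": "full",
        "external capital": "only at the growth stage",
        "best fit": "high strategic value, leverages core assets",
    },
    "scenario 2 (empowered founder team)": {
        "early investment": "shared",
        "early control": "low (shareholder-level only)",
        "external capital": "any funding round",
        "best fit": "corporate acts as first client or channel",
    },
    "scenario 3 (build-then-transfer)": {
        "early investment": "reduced (angels may co-invest)",
        "early control": "via pre-defined transfer terms",
        "external capital": "pre-transfer angels",
        "best fit": "close to core business, execution speed is key",
    },
}

# Print a simple comparison for a workshop handout.
for name, traits in scenarios.items():
    print(name)
    for axis, rating in traits.items():
        print(f"  {axis}: {rating}")
```

A one-page comparison like this is mostly a conversation tool: it forces the "best of both worlds" expectations into the open before the venture is structured.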
Source : https://medium.com/@cbgf/a-corporate-venture-building-dilemma-investment-vs-control-a703b9c19c94