Category: Silicon Valley

Which investments generate the greatest value in venture: Consumer or Enterprise? – Sapphire

A Dive into Enterprise vs Consumer Exit Activity

In today’s fast-paced market — where major funding or exit announcements seem to roll in daily — we at Sapphire Partners like to take a step back, ask big-picture questions, and then find concrete data to answer them.

One of our favorite areas to explore is: as a venture investor, do your odds of making better returns improve if you only invest in either enterprise or consumer companies?  Or do you need a mix of both to maximize your returns? And how should recent investment and exit trends influence your investing strategy, if at all? 

To help answer questions like these, we’ve collected and analyzed exit data for years.  What we’ve found is pretty intriguing: portfolio value creation in enterprise tech is often driven by a cohort of exits, while value creation in consumer tech is generally driven by large, individual exits. 

In general, this trend has held for several years and has led to the general belief that if you are a consumer investor, the clear goal is not to miss that “one deal” with a huge exit valuation (easier said than done, of course). And if you’re an enterprise investor, you want to create a “basket of exits” in your portfolio.

What Creates More Portfolio Value: Consumer or Enterprise?

2019 has been a powerhouse year for consumer exit value, buoyed by Uber and Lyft’s IPOs (their recent declines in stock price notwithstanding). The first three quarters of 2019 alone surpassed every year since 1995 for consumer exit value – and we’re not done yet. If the consumer exit pace continues at this scale, we will be on track for the most value created at exit in 25 years, according to our analysis.

Source: S&P Capital IQ, Pitchbook

Since 1995, the number of enterprise exits has consistently outpaced consumer exits (blue line versus green line above), but 2019 is the closest to seeing those lines converge in over two decades (223 enterprise vs 208 consumer exits in the first three quarters of 2019). Notably, in five of the past nine years, the value generated by consumer exits has exceeded enterprise exits.[1]

At Sapphire, we observe the following:

  • Venture-backed enterprise tech companies have generated $884B in value since 1995; $349B from M&A and $535B from IPOs.
  • Venture-backed consumer tech companies have generated $773B in value since 1995; $153B from M&A and $620B from IPOs.
  • In total, there were 5,600+ venture-backed exits in enterprise tech and 3,300+ exits in consumer tech.

While the valuation at IPO serves as a proxy for an exit for venture investors, most investors face the lockup period.[2] 2019 has generated a tremendous amount of value through IPOs, roughly $223 billion. However, after trading in the public markets, the aggregate value of those IPOs has decreased by $81 billion as of November 1, 2019.[3] This decrease is driven by Uber and Lyft on an absolute value basis, which together account for roughly 66% of this markdown over the same period, according to our figures. Over half of the IPO exits in 2019 have been consumer, and despite these stock price changes, consumer exits are still outperforming enterprise exits YTD given the enormous alpha they generated initially.

As we noted in the introduction, historical data since 1995 shows that years of high value creation from enterprise technology are often driven by a cohort of exits, whereas consumer value creation is often driven by large, individual exits. The chart below illustrates this, showing a side-by-side comparison of exits and value creation.

Source: Pitchbook

At Sapphire, we observe the following:

  • The top five enterprise companies with the largest exits account for $79B in value creation, or 9% of the $884B generated in the enterprise category since 1995.
  • The top five consumer companies with largest exits account for $276B in value creation, or 36% of the $773B generated in the consumer category since 1995.

The value generated by the top five consumer companies is 3.5x that of the top five enterprise companies.

Understanding the Consumer Comeback

While the total value of enterprise companies exited since 1995 ($884B) exceeds that of consumer exits ($773B), over the last 15 years consumer returns have been making a comeback. Specifically, total consumer value exited since 2004 ($538B) exceeds that of enterprise exits ($536B). The difference has become more stark in the past 10 years, with total consumer value exited ($512B) surpassing that of enterprise ($440B). As seen in the chart below, the rolling 10-year total enterprise exit value exceeded that of consumer until the 2003–2012 decade, when consumer exit value took the lead.


Source: S&P Capital IQ, Pitchbook

We believe the size of, and the inevitable hype around, consumer IPOs can cloud investor judgment, since the volume of successful deals is not increasing. The data clearly shows that the surge in outsized returns comes from the outliers in consumer.

As exhibited below, large consumer outliers since 2011, such as Facebook, Uber, and Snap, often account for more than the sum of enterprise exits in any given year. For example, in the first three quarters of 2019 there were 15 enterprise exits valued at over $1B, for a total of $96B. Over the same period, there were nine consumer exits valued at over $1B, for a total of $139B. Anecdotally, four of the past five years have been headlined by a consumer exit; while 2016 was headlined by an enterprise exit, it was a particularly quiet exit year.

  • 2015 – Consumer: Fitbit ($6B)
  • 2016 – Enterprise: Nutanix ($5B)
  • 2017 – Consumer: Snap ($27B)
  • 2018 – Consumer: Dropbox ($11B)
  • First 3 quarters of 2019 – Consumer: Uber ($85B)

Source: S&P Capital IQ, Pitchbook

Enterprise Deals Still Rule in M&A

While consumer deals have taken the lead in IPO value in recent years, on the M&A front, enterprise still has the clear edge. Since 1995 there have been 76 exits of $1 billion or more in value, of which 49 are enterprise companies and 27 are consumer companies. The vast majority of value from M&A has come from enterprise companies since 1995 — more than 2x that of consumer. 

Similar to the IPO chart above, acquisition value of enterprise companies outpaced that of consumer companies until recently, with 2010-2014 being the exception.

Source: S&P Capital IQ, Pitchbook

Of course, looking only at outcomes with $1 billion or more in value covers only a fraction of where most VC exits occur. Slightly less than half of all exits in both enterprise and consumer are $50 million or under in size, and more than 70 percent of all exits are under $200 million. Moreover, in the distribution chart below, we capture only the percentage of companies for which we have exit values. If we change the denominator to all exits captured in our database (i.e. measure the percentage of $1 billion-plus exits by using a higher denominator), the percentage of outcomes drops to around 3 percent of all outcomes for both enterprise and consumer.

Source: S&P Capital IQ, Pitchbook

What Does All of this Mean for Venture Investors?

There’s an enormous volume of information available on startup exits, and at Sapphire Partners, we ground our analyses and theses in the numbers. At the same time, once we’ve dug into the details, it’s equally important to zoom out and think about what our findings mean for our GPs and fellow LPs. Here are some clear takeaways from our perspective:

  • Consumer exits have surpassed enterprise over the past 15 years.
  • Consumer exit value is highly concentrated in the top deals.
  • There are more billion-dollar enterprise exits than billion-dollar consumer exits, so you may have more opportunities for a unicorn enterprise outcome than a consumer one.
  • However, if you happen to invest in one of the outlier consumer exits, you could experience significant returns.  

In a nutshell, as LPs we like to see both consumer and enterprise deals in our underlying portfolio, as they each provide different exposures and return profiles. However, when these investments get rolled up as part of a venture fund’s portfolio, success often becomes contingent on the fund’s overall portfolio construction… but that’s a question to explore in another post.


NOTE: Total Enterprise Value (“TEV”) presented throughout analysis considers information from CapIQ when available, and supplements information from Pitchbook last round valuation estimates when CapIQ TEV is not available. TEV (Market Capitalization + Total Debt + Total Preferred Equity + Minority Interest – Cash & Short Term Investments) is as of the close price for the initial date of trading. Classification of “Enterprise” and “Consumer” companies presented herein is internally assigned by Sapphire. Company logos shown in various charts presented herein reflect the top (4) companies of any particular time period that had a TEV of $1BN or greater at the time of IPO, with the exception of chart titled “Exits by Year, 1995- Q3 2019”, where logos shown in all charts presented herein reflect the top (4) companies of any particular year that had a TEV of $7.5BN or greater at the time of IPO. During a time period in which less than (4) companies had such exits, the absolute number of logos is shown that meet described parameters. Since 1995 refers to the time period of 1/1/1995 – 9/30/2019 throughout this article.

[1] Includes the first three quarters of 2019. IPO exit values refer to the total enterprise value of a company at the end of the first day of trading according to S&P Capital IQ. Analysis considers a combination of Pitchbook and S&P Capital IQ to analyze US venture-backed companies that exited through acquisition or IPO between 1/1/1995 – 9/30/2019.
[2] Lockup period is a predetermined amount of time following an initial public offering (“IPO”) where large shareholders, such as company executives and investors representing considerable ownership, are restricted from selling their shares.
[3] Total enterprise value at the end of 10/15/2019 according to S&P Capital IQ.

Source : https://sapphireventures.com/blog/openlp-series-which-investments-generate-the-greatest-value-in-venture-consumer-or-enterprise/

Deep Dive into the Past, Present, & Future of Income Share Agreements – Erik

Introduction

Imagine a world where you had a personal board of advisors — the people you most admire and respect — and you gave them upside in your future earnings in exchange for helping you (e.g., our good friend Mr. Mike Merrill).

Imagine if there was a “Kickstarter for people” where you could support up-and-coming artists, developers, entrepreneurs — when they need the cash the most, and most importantly, you’d only profit when they profit.

Imagine if you could diversify by pooling 1% of your future income with your ten smartest friends.

Now think about how much you’d go out of your way to help, say, your brother-in-law or step-siblings. Probably much more than a stranger. Why is that?

To pose a thought experiment: If you didn’t know your cousins were related to you, you might treat them like any other person. But because we have this social context of an “extended family,” you have a sort of genetic equity in them — a feeling that your fates are shared and it’s your responsibility to support them.

This begs the question: How can we create the social context needed for people to truly care about others outside of their extended family?

If you believe that markets and trade have helped the world become a less violent place — because why hurt someone when it’ll also take money out of your pocket? — then you should believe that adding more markets (with proper safeguards) will make the world even less violent.

This is the hope of income share agreements (ISAs).

ISAs align economic incentives in ways that encourage us to help others beyond our extended family, give people economic opportunity who don’t have it today, and free people from the shackles of debt.

What are these ISAs you speak of?

An Income Share Agreement is a financial arrangement where an individual or organization provides something of value to a recipient, who, in exchange, agrees to pay back a percentage of their income for a certain period of time.

In the context of education, ISAs are a debt-free alternative to loans.

Rather than go into debt, students receive interest-free funding from an investor or benefactor. In exchange, the student agrees to share a percentage of future income with their counterparty. They come in different shapes and sizes, but almost always with terms that take into account a plethora of potential scenarios.

“Part of the elegance of an ISA is that the lender only wants a share of income when the borrower is getting a regular income. If you’re unemployed or underemployed, they’re not interested… you’re automatically getting a suspension of payments when you’re not doing well.”

– Mark Kantrowitz, a leading national expert on student loans who has testified before Congress about student aid policy.
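To make the payment mechanics concrete, here is a minimal sketch in Python. All terms are hypothetical for illustration (a 10% income share over four years with a $40,000 income floor); real ISAs layer on caps, buyouts, and other protections:

```python
# Minimal ISA cash-flow sketch. All terms here are hypothetical:
# a 10% income share for four years, suspended below a $40,000 floor.

def isa_payment(annual_income, share=0.10, income_floor=40_000):
    """Yearly payment: a fixed share of income, waived below the floor."""
    return share * annual_income if annual_income >= income_floor else 0.0

incomes = [0, 35_000, 60_000, 90_000]  # hypothetical post-program incomes
print(sum(isa_payment(y) for y in incomes))  # 15000.0 -- nothing owed in lean years
```

The key property, as Kantrowitz notes, is that the payment automatically suspends itself in the unemployed and low-income years.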

There is a long and storied history of income share agreements, but they’ve only recently become popular due to the rise of Lambda School, a school that lets students attend for free and, if they do well after school, pay a percentage of their income until they pay Lambda back.

Wait, a popular meme sarcastically asks, did you just invent taxes?

No. Lambda gets paid if and only if the student earns a certain amount after graduation. In other words, incentives are aligned. The student is the customer. Not the government. Not the state. Not the parents.

To be sure, it’s early days for ISAs: adverse selection, legalization, concerns about treating individuals like corporations (derivatives? Shorting people?!) — there’s a lot left to figure out.

Still, it’s an idea that once you see, you can’t unsee.

Here’s a hypothetical story to help you picture how ISAs work:


Picture Janet, a senior at Davidson High School. She has a 4.0 GPA, is captain of the debate team, and is the star center forward of the varsity soccer team. She’s a shoo-in for a top-20 university, but her parents can’t afford it even with a scholarship, so she’s not even going to apply and is headed for State. Then she learns from a news article that she’s a pretty good bet as someone who’s going to succeed down the road, and that might allow her to put some much-needed cash towards her education. She goes for it, creates a profile on an ISA platform, and sure enough, a few strangers bet $50,000 on her college education! She immediately gets to work filling out Ivy League scholarship applications.

Throughout college, she keeps in touch with her investors, they give her advice, and because of her interest in politics, one even helps her get an internship with a governor’s election campaign over the summer. Once she graduates, she knows the clock is ticking — at 23 she’ll need to start paying back the investors 5% of her after-tax income, so she hustles to work her way through the ranks.

From age 23 to 33, the payback period, Janet becomes a lawyer at a top-tier firm, and the investors make a 3x cash-on-cash return.


The above is purely hypothetical.
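As a quick sanity check on the story’s arithmetic: a 3x return on $50,000 means $150,000 of total payments, which at 5% over ten years implies a healthy but plausible top-tier-law income:

```python
# Back out the income implied by the hypothetical story above.
invested = 50_000           # what the strangers put into Janet's ISA
multiple = 3                # 3x cash-on-cash return for the investors
share, years = 0.05, 10     # 5% of after-tax income from age 23 to 33

total_payments = invested * multiple        # $150,000
print(total_payments / (share * years))     # 300000.0 -- average after-tax income
```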

ISAs for traditional higher education are much more complicated than, say, vocational training, where there is a more direct “skills-to-job” pathway for students. But the beauty of ISAs is their flexibility, so there is lots of room for innovation.

So: this is the dream — why hasn’t it happened yet?

ISAs and other related instances of securitizing human capital have been tried. Here’s a brief history:

Economist Milton Friedman Proposes Use of ISAs in Education —

In modern times, the first notable mention of the concept of ISAs was by Nobel Prize-winning economist Milton Friedman in his 1955 essay The Role of Government in Education.

In a section devoted specifically to vocational and professional education, Friedman proposed that an investor could buy a share in a student’s future earning prospects.

It’s worth noting that the barriers to adoption that Friedman identified back in the 1950s still hold true today:

  1. The potential high costs of administration;
  2. The sheer novelty of the idea;
  3. The reluctance to think of investments in human beings as comparable to investments in physical assets; and
  4. Legal and conventional limitations on suitable financial intermediaries.

Society might not have been ready for ISAs in the 1950s, but 16 years later, another Nobel Prize-winning economist, James Tobin, would help launch the first ISA option for college students at Yale University.

Yale experiments with ISAs —

In the 1970s, Yale University ran an experiment called the Tuition Postponement Option (“TPO”). The TPO was a student loan program that enabled groups of undergraduates to pay off loans as a “cohort” by committing a portion of their future annual income.

Students who signed up for the program (3,300 in total) were to pay 0.4 percent of their annual income for every $1,000 borrowed until the entire group’s debt had been paid off. High earners could buy out early, paying 150% of what was borrowed plus interest.

Within each cohort, many low earners defaulted, while the highest earners bought out early, leaving a disproportionate debt burden for the remaining graduates.
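A toy simulation (hypothetical numbers, ignoring interest for simplicity) shows how the cohort structure concentrated the burden on ordinary earners once high earners bought out and low earners defaulted:

```python
# Toy model of the TPO cohort mechanic; all figures are hypothetical
# and interest is ignored for simplicity.
BORROWED = 5_000                      # principal per student
SHARE = 0.004 * (BORROWED / 1_000)    # 0.4% of income per $1,000 -> 2% here

cohort_debt = 3 * BORROWED            # a three-student cohort
cohort_debt -= 1.5 * BORROWED         # the high earner buys out at 150%
# the low earner defaults, so the remaining middle earner's 2% share
# must clear the rest of the cohort balance alone
income, years = 60_000, 0
while cohort_debt > 0:
    cohort_debt -= SHARE * income
    years += 1
print(years)  # 7 -- years of payments to retire a $5,000 loan
```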

Administrators also did not account for the changes to the tax code and skyrocketing inflation in the 1980s, which only exacerbated the inequitable arrangement.

“We’re all glad it’s come to an end. It was an experiment that had good intentions but several design flaws.” — Yale President Richard Levin

While the TPO is generally considered a failure, it was the first instance of a major university offering ISAs and a useful example for how not to structure ISAs — specifically, pooling students by cohort and allowing the highest earning students to buy out early.

ISAs as a Financial Aid Option —

It would be decades after Yale’s failed experiment before universities started experimenting again with ISAs, but today a company called Vemo Education is leading the way.

This is a crucial point: Vemo isn’t competing directly with loans, but instead is unlocking other sorts of value (i.e., helping students better choose their college). The key here is that Vemo links an individual’s fortunes to the institution’s fortunes: by helping universities offer ISAs, it lets them signal that they want to better align the cost of a degree with the value it delivers.

The first institution that Vemo partnered with to offer ISAs was Purdue University.

In 2016, Purdue University began partnering with Vemo Education to offer students an ISA tuition option through its “Back a Boiler” ISA Fund. They started with a $2 million fund, and since then have raised another $10.2 million and have issued 759 contracts totaling $9.5 million to students.

Purdue markets its ISA offering as an alternative to private student loans and Parent PLUS Loans. Students of any major can get $10,000 per year in ISA funding at rates that vary between 1.73% and 5.00% of their monthly income. Purdue caps payments at 2.5x the ISA amount that students take out and payment is waived for students making less than $20,000 in annual income.

In the last few years, Vemo has emerged as the leading partner for higher education institutions looking to develop, launch and implement ISAs. In 2017, Vemo powered $23M of ISAs for college students across the US.

Upstart: A Short-Lived Attempt at “Kickstarter for People” —

Fintech company Upstart initially launched with a model of “crowdfunding for education”. However, they eventually pivoted to offering traditional loans when they realized that their initial model was simply not viable.

Why? Not enough supply.

The fact that only accredited investors (over $1m in net worth) could invest severely limited the total potential funders on the site. And yet, while Upstart never got enough traction (they pivoted successfully), they paved the way for a platform like it to eventually be built.

ISAs for Vocational Training —

While Upstart failed to gain traction, technical educational bootcamps have seen tremendous growth while offering their students ISAs to finance their education.

And Lambda School is leading the way.

Lambda School is an online bootcamp that trains students to become software engineers at no upfront cost. Instead of paying tuition, students agree to pay 17% of their income for the first two years that they’re employed. Lambda School includes a $50,000 minimum income threshold and caps total payments at an aggregate $30,000. They also give students the option to pay $20,000 upfront if they’d rather not receive an ISA.
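Based on the terms described above, here is a rough sketch of what a Lambda graduate would pay under different salary outcomes (simplified to flat annual payments):

```python
# Rough sketch of Lambda-style ISA terms as described above, simplified
# to annual payments: 17% of income for the first two employed years,
# owed only at or above a $50k salary, capped at $30k in total.

def total_paid(annual_salary, share=0.17, years=2,
               min_salary=50_000, cap=30_000):
    if annual_salary < min_salary:
        return 0.0
    return min(share * annual_salary * years, cap)

print(total_paid(45_000))   # 0.0     -- below the income threshold
print(total_paid(70_000))   # 23800.0
print(total_paid(120_000))  # 30000.0 -- the cap binds
```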

Lambda School students enroll for nine months and end up with 1,500–2,000 hours of training, comparable to the level of training they’d receive during the CS-focused portion of a four-year degree.

“Lambda School looks like a charity from the outside, but we’re really more like a hedge fund.

We bet that smart, hardworking people are fundamentally undervalued, and we can apply some cash and leverage to fix that, taking a cut.” — Austin Allred (Lambda School CEO)

In our opinion, Lambda is legitimizing ISAs and may just be the wedge that makes ISAs mainstream.

An Outlook for the Future of ISAs

Given where we are today, and with the potential for this type of financial innovation, what might the future look like?

There are three major themes in particular that get us excited for the future of ISAs: aggregation, novel incentive structures, and crypto.

Aggregation —

We believe that it’s possible to pool together various segments of people to decrease overall risk of that population and provide more to each individual person.

If we assume that each individual is fairly independent from each other, this should be a possibility. As risk declines, your expected return should increase. And as your expected return increases, more investors and ISA providers will likely jump in to provide even more capital for more people.

“There is no reason you have to do this at the individual level. Most likely, it will first occur in larger aggregated groups — based on either geography, education, or other group characteristics. As with the housing market, it is important to aggregate enough individual sample points to reduce risk.” — Dave McClure
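A quick simulation sketch (with a made-up income distribution) illustrates the point: the average of many independent incomes is far less volatile than any single income, so a pooled ISA is a much safer bet:

```python
# Why pooling reduces risk: the mean of n independent incomes has roughly
# 1/sqrt(n) the spread of a single income. The distribution here is made up.
import random

random.seed(0)

def income():
    return random.lognormvariate(11.0, 0.6)   # median income ~ $60k

def pool_average(n):
    return sum(income() for _ in range(n)) / n

def stdev(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

print(stdev([pool_average(1) for _ in range(1_000)]))   # single person: wide
print(stdev([pool_average(50) for _ in range(1_000)]))  # pool of 50: ~7x tighter
```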

Another take on aggregation could be an individual electing to group together with their close friends or peers.

This can have the magical benefit of further aligning incentives with those around you, increasing the value of cooperation, lowering downside risk, and promoting more risk taking and out-of-the-box thinking, all of which should help increase economic growth.

In addition to that, being able to take a more active role in a friend’s life (helping when need be, sharing in their wins, supporting in their losses, etc.) can be an extremely rewarding experience. That said, there are some definite downsides and risks to be aware of with these types of arrangements.

Novel Incentive Structures —

How can we create financial products to incentivize service providers (i.e., teachers, doctors, etc.) who indirectly have massive impacts on the incomes of future generations?

Just imagine the difference it could make if every teacher were able to take even a tiny percentage of every one of their students’ future earnings. Teachers today unfortunately don’t make nearly as much money as they should, given the significant consequences they have on future generations. A great teacher can create the spark for the next Einstein or Elon Musk. A terrible teacher could damage a potential Einstein or Elon Musk enough that they never realize their potential. Imagine how many more incredible people we could have.

There will always be incredible teachers regardless of monetary return, but we bet there could be more. It all comes down to aligning incentives.

This same thinking can be applied to other service providers like doctors. Currently, doctors are paid the same amount (all else equal) whether they succeed or not in a life-saving surgery. But what if the service provider also took a tiny fraction of future earnings from their patient? Incentives are more aligned. That doctor may not even realize it, but they likely would work a bit harder knowing what’s at stake.

Crypto —

Crypto can securitize so much more than we currently do; in essence, we could tokenize ourselves and all future income. Once those personal tokens exist, they can be traded instantly anywhere in the world with infinite divisibility. Arbitrageurs and professional traders could create new financial products (i.e., ISA aggregations) and buy/sell with each other to price things to near perfection.

What’s next?

We’d love to continue the conversation! This is a fascinating space with a ton of opportunity. If you’re thinking about or building anything here, feel free to leave your comments or reach out to talk more.

Special shoutout to David Weinstein & Jake Hallac for their help writing as well as Ray Batra, Dani Grant, Zander Adell, Dave McClure, Sam Lessin and Alex Marcus for their help reviewing / editing!

***

Appendix: Addressing Common Concerns —

Isn’t giving up the legal right to a portion of future income equivalent to modern-day indentured servitude?

Quick refresher: Indentured servants were immigrants who bargained away their labor (and freedom) for four-to-seven years in exchange for passage to the British colonies, room, board and freedom dues (a prearranged severance). Most of these immigrants were English men who came to British colonies in the 17th century.

On the surface this seems like a decent deal, but not so fast. They could be sold, lent out or inherited. Only 40% of indentured servants lived to complete the terms of their contracts. Masters traded laborers as property and disciplined them with impunity, all lawful at the time.

Rebuttal: We are in no way advocating a return to indentured servitude (voluntary or otherwise). Modern-day ISAs must be structured to have proper governance, ensure alignment of interests and contain legal covenants that protect both parties.

We are advocating for ISAs that (i) are voluntary, (ii) do not force the recipient to work for the investor, and (iii) are a promise to share future income, not an obligation to repay a debt.

ISAs are unregulated. How do we structure and enforce ISAs without a legal framework to rely on?

Our Response: ISAs offered by Lambda School, Holberton School and other companies are legal under current US law. To the best of our knowledge, all companies offering ISAs operate according to best practices (i.e., consumer disclosure and borrower protections) as set forth in proposed federal legislation.

The Investing in Student Success Act (H.R. 3432 / S. 268) has been proposed in both the US House of Representatives and the US Senate. Under this legislation, ISAs would be classified as qualified education loans (rather than equity or debt securities), making them dischargeable in bankruptcy. Furthermore, the bill would exempt ISAs from being considered an investment company under the Investment Company Act of 1940.

Importantly, the bill includes consumer protections (i.e., required disclosures, payback periods, payback caps, and limits on income share amounts). The bill also includes tax stipulations that preclude ISA recipients from owing any taxes and limit investors’ taxes to profits earned from ISAs.

Given that ISAs are riskier than student loans, but don’t require the same qualifications, aren’t ISAs prone to adverse selection?

Quick refresher: Adverse selection describes a situation in which one party has information that the other does not have. To fight adverse selection, insurance companies reduce exposure to large claims by limiting coverage or raising premiums.

Our Response: In September 2018, Purdue University published a research study that looked into adverse selection in ISAs. The study concluded that there was no adverse selection by student ability among borrowers. However, ISA providers need to properly structure the ISA so as not to cap a recipient’s upside by too much. In addition, this risk can be mitigated by (i) offering a structured educational curriculum for high-income jobs and (ii) an application process that ensures that students have the ability and motivation to complete a given vocational program.

Couldn’t ISAs result in lack of diversity and discriminatory practices?

Our Response: Properly structured ISAs paired with effective offerings (i.e., skills-based training, career development assistance) have the potential to mitigate inequality and discriminatory practices. ISA programs like Lambda School require students to be motivated to succeed and have enough income to complete the program, but in no way discriminate based on age, gender or ethnicity.

However, as ISAs become more common, new legislation must include explicit protections to guard against discrimination in administration of ISAs (especially given that it’s unclear whether the Equal Credit Opportunity Act would apply to ISAs since they aren’t technically loans).

Can’t students simply refuse to pay once they start earning income after graduation?

Our Response: ISA providers like Lambda School are already starting to negotiate directly with employers to ensure that students have a job after completing the curriculum. These relationships mitigate the risk of a student refusing to pay. Lambda School is able to do this because it’s developed such a strong curriculum. Furthermore, students face reputation risk should they try to avoid meeting their obligations to the ISA provider.

Future legislation should address instances where a student avoids payment or chooses to take a job with no salary (i.e., a student completes a coding bootcamp, but has a change of heart and goes to work at a non-profit that pays below the minimum income threshold).

Equity is expensive (relative to debt), so wouldn’t students be better off sticking with traditional debt financing?

Our Response: ISAs are not for everyone. ISAs are best suited for people with greater expected volatility in their future earnings (instead of people with a strong likelihood of a certain salary). This is similar to new businesses choosing between equity investment and debt to finance their operations: businesses with clear expectations of future cashflows generally benefit more from debt than equity. Individuals looking to finance their education are no different. Similarly, ISAs don’t need to be all or nothing. Individuals can choose to capitalize their education with a mix of student loans and ISAs.
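A small sketch (entirely hypothetical terms) of the trade-off: the fixed loan payment is cheaper when income turns out high, while the ISA protects the downside:

```python
# Fixed loan payment vs. ISA payment under two income outcomes.
# All terms are hypothetical, for illustration only.
loan_payment = 6_000            # fixed annual debt service
isa_share, income_floor = 0.08, 30_000

for income in (25_000, 120_000):
    isa = isa_share * income if income >= income_floor else 0.0
    print(f"income={income}: loan={loan_payment}, isa={isa}")
# low income:  the loan still costs $6,000; the ISA costs $0
# high income: the ISA costs $9,600 -- "equity" is expensive when you do well
```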

Source: https://medium.com/@eriktorenberg_/life-capital-9e5028c0ea12

Open Source Software – Investable Business Model or Not? – Natallia Chykina

Open-source software (OSS) is a catalyst for growth and change in the IT industry, and one can’t overestimate its importance to the sector. Quoting Mike Olson, co-founder of Cloudera, “No dominant platform-level software infrastructure has emerged in the last ten years in closed-source, proprietary form.”

Apart from independent OSS projects, an increasing number of companies, including the blue chips, are opening their source code to the public. They start by distributing their internally developed products for free, giving rise to widespread frameworks and libraries that later become an industry standard (e.g., React, Flow, Angular, Kubernetes, TensorFlow, V8, to name a few).

Adding to this momentum, there has been a surge in venture capital dollars being invested into the sector in recent years. Several high profile funding rounds have been completed, with multimillion dollar valuations emerging (Chart 1).

But are these valuations justified? And more importantly, can the business perform, both growth-wise and profitability-wise, as venture capitalists expect? OSS companies typically monetize with a business model based around providing support and consulting services. How well does this model translate to the traditional VC growth model? Is the OSS space in a VC-driven bubble?

In this article, I assess the questions above, and find that the traditional monetization model for OSS companies based on providing support and consulting services doesn’t seem to lend itself well to the venture capital growth model, and that OSS companies likely need to switch their pricing and business models in order to justify their valuations.

OSS Monetization Models

By definition, open source software is free. This of course generates obvious advantages to consumers, and in fact, a 2008 study by The Standish Group estimates that “free open source software is [saving consumers] $60 billion [per year in IT costs].”

While providing free software is obviously good for consumers, it still costs money to develop. Very few companies are able to live on donations and sponsorships. And with fierce competition from proprietary software vendors, growing R&D costs, and ever-increasing marketing requirements, providing a “free” product necessitates a sustainable path to market success.

As a result of the above, a commonly seen structure related to OSS projects is the following: A “parent” commercial company that is the key contributor to the OSS project provides support to users, maintains the product, and defines the product strategy.

Latched on to this are the monetization strategies, the most common being the following:

  • Extra charge for enterprise services, support, and consulting. The classic model targeted at large enterprise clients with sophisticated needs. Examples: MySQL, Red Hat, Hortonworks, DataStax
  • Freemium (advanced features/products/add-ons). A custom licensed product on top of the OSS might generate a lavish revenue stream, but it requires a lot of R&D cost and time to build. Example: Cloudera, which provides the basic version for free and charges the customers for Cloudera Enterprise
  • SaaS/PaaS business model: The modern way to monetize the OSS products that assumes centrally hosting the software and shifting its maintenance costs to the provider. Examples: Elastic, GitHub, Databricks, SugarCRM

Historically, the vast majority of OSS projects have pursued the first monetization strategy (support and consulting), but at their core, all of these models allow a company to earn money on their “bread and butter” and feed the development team as needed.

Influx of VC Dollars

An interesting recent development has been the huge inflows of VC/PE money into the industry. Going back to 2004, only nine firms producing OSS had raised venture funding, but by 2015, that number had exploded to 110, raising over $7 billion from venture capital funds (chart 2).

Underpinning this development is the large addressable market that OSS companies benefit from. Akin to other “platform” plays, OSS allows companies (in theory) to rapidly expand their customer base, with the idea that at some point in the future they can leverage this growth by beginning to tack-on appropriate monetization models in order to start translating their customer base into revenue, and profits.

At the same time, we’re also seeing an increasing number of reports about potential IPOs in the sector. Several OSS commercial companies, some of them unicorns with $1B+ valuations, have been rumored to be mulling a public markets debut (MongoDB, Cloudera, MapR, Alfresco, Automattic, Canonical, etc.).

With this in mind, the obvious question is whether the OSS model works from a financial standpoint, particularly for VC and PE investors. After all, the venture funding model necessitates rapid growth in order to comply with their 7-10 year fund life cycle. And with a product that is at its core free, it remains to be seen whether OSS companies can pin down the correct monetization model to justify the number of dollars invested into the space.

Answering this question is hard, mainly because most of these companies are private and therefore do not disclose their financial performance. Usually, the only sources of information that can be relied upon are the estimates of industry experts and management interviews where unaudited key performance metrics are sometimes disclosed.

Nevertheless, in this article, I take a look at the evidence from the only two public OSS companies in the market, Red Hat and Hortonworks, and use their publicly available information to try and assess the more general question of whether the OSS model makes sense for VC investors.

Case Study 1: Red Hat

Red Hat is an example of a commercial company that pioneered the open source business model. Founded in 1993 and going public in 1999, right before the Dot Com Bubble burst, they achieved the 8th biggest first-day gain in share price in the history of Wall Street at that time.

At the time of their IPO, Red Hat was not a profitable company, but since then has managed to post solid financial results, as detailed in Table 1.

Instead of chasing multifold annual growth, Red Hat has followed the “boring” path of gradually building a sustainable business. Over the last ten years, the company increased its revenues tenfold from $200 million to $2 billion with no significant change in operating and net income margins. G&A and marketing expenses never exceeded 50% of revenue (Chart 3).

The above indicates therefore that OSS companies do have a chance to build sustainable and profitable business models. Red Hat’s approach of focusing primarily on offering support and consulting services has delivered gradual but steady growth, and the company is hardly facing any funding or solvency problems, posting decent profitability metrics when compared to peers.

However, what is clear from the Red Hat case study is that such a strategy can take time—many years, in fact. While this is a perfectly reasonable situation for most companies, the issue is that it doesn’t sit well with venture capital funds who, by the very nature of their business model, require far more rapid growth profiles.

More troubling than that, for venture capital investors, is that the OSS model may in and of itself not allow for the type of growth that such funds require. As longtime MySQL CEO Marten Mickos put it, MySQL’s goal was “to turn the $10 billion a year database business into a $1 billion one.”

In other words, the open source approach limits the market size from the get-go by making the company focus only on enterprise customers who are able to pay for support, and foregoing revenue from a long tail of SME and retail clients. That may help explain the company’s less than exciting stock price performance post-IPO (Chart 4).

If such a conclusion were true, this would spell trouble for those OSS companies that have raised significant amounts of VC dollars along with the funds that have invested in them.

Case Study 2: Hortonworks

To further assess our overarching question of OSS’s viability as a venture capital investment, I took a look at another public OSS company: Hortonworks.

The Hadoop vendors’ market is an interesting one because it is completely built around the “open core” idea (another comparable market being the NoSQL databases space with MongoDB, Datastax, and Couchbase OSS).

All three of the largest Hadoop vendors—Cloudera, Hortonworks, and MapR—are based on essentially the same OSS stack (with some specific differences) but interestingly have different monetization models. In particular, Hortonworks—the only public company among them—is the only player that provides all of its software for free and charges only for support, consulting, and training services.

At first glance, Hortonworks’ post-IPO path appears to differ considerably from Red Hat’s in that it seems to be a story of a rapid growth and success. The company was founded in 2011, tripled its revenue every year for three consecutive years, and went public in 2014.

Immediate reception in the public markets was strong, with the stock popping 65% in the first few days of trading. Nevertheless, the company’s story since IPO has turned decisively sour. In January 2016, the company was forced to access the public markets again for a secondary public offering, a move that prompted a 60% share price fall within a month (Chart 5).

Underpinning all this is the fact that, despite top-line growth, the company continues to incur substantial, and growing, operating losses. It’s evident from the financial statements that its operating performance has worsened over time, mainly because operating expenses are growing faster than revenue, leading to increasing losses as a percent of revenue (Table 2).

In every period in question, Hortonworks spent more on sales and marketing than it earned in revenue. Adding to that, the company incurred significant R&D and G&A expenses as well (Table 2).

On average, Hortonworks is burning around $100 million cash per year (less than its operating loss because of stock-based compensation expenses and changes in deferred revenue booked on the Balance Sheet). This amount is very significant when compared to its $630 million market capitalization and circa $350 million raised from investors so far. Of course, the company can still raise debt (which it did, in November 2016, to the tune of a $30 million loan from SVB), but there’s a natural limit to how often it can tap the debt markets.
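To see why cash burn can run below the operating loss, here is an illustrative reconciliation with hypothetical round numbers (not Hortonworks’ actual figures):

```python
# Non-cash expenses and up-front customer cash both shrink the burn
# relative to the accounting loss. The figures below are hypothetical.
operating_loss       = -180e6  # income-statement loss
stock_based_comp     =   60e6  # expense that never leaves the bank account
deferred_revenue_chg =   20e6  # cash collected before revenue is recognized

cash_burn = operating_loss + stock_based_comp + deferred_revenue_chg
print(cash_burn / 1e6)  # -100.0 -> roughly $100M of cash burned per year
```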

All of this might of course be justified if the marketing expense served an important purpose. One such purpose could be the company’s need to diversify its customer base. In fact, when Hortonworks first launched, the company was heavily reliant on a few major clients (Yahoo and Microsoft, the latter accounting for 37% of revenues in 2013). This has now changed, and by 2016, the company reported 1000 customers.

But again, even if this were to have been the reason, one cannot ignore the costs required to achieve it. After all, marketing expenses increased eightfold between 2013 and 2015. And how valuable are the clients that Hortonworks has acquired? Unfortunately, the company reports little information on the makeup of its client base, so it’s hard to assess other important metrics such as client “stickiness”. But in a competitive OSS market where “rival developers could build the same tools—and make them free—essentially stripping the value from the proprietary software,” strong doubts loom.

With all this in mind, returning to our original question of whether the OSS model makes for good VC investments, while the Hortonworks growth story certainly seems to counter Red Hat’s—and therefore sustain the idea that such investments can work from a VC standpoint—I remain skeptical. Hortonworks seems to be chasing market share at exorbitant and unsustainable costs. And while this conclusion is based on only two companies in the space, it is enough to raise serious doubts about the overall model’s fit for VC.

Why are VCs Investing in OSS Companies?

Given the above, it seems questionable that OSS companies make for good VC investments. So with this in mind, why do venture capital funds continue to invest in such companies?


Good Fit for a Strategic Acquisition

Apart from going public and growing organically, an OSS company may find a strategic buyer to provide a good exit opportunity for its early stage investors. And in fact, the sector has seen several high profile acquisitions over the years (Table 3).

What makes an OSS company a good target? In general, the underlying strategic rationale for an acquisition might be as follows:

  • Getting access to the client base. Sun is reported to have been motivated by this when it acquired MySQL. They wanted to access the SME market and cross-sell other products to smaller clients. Simply forking the product or developing a competing technology internally wouldn’t deliver the customer base and would have made Sun incur additional customer acquisition costs.
  • Getting control over the product. The ability to influence further development of the product is a crucial factor for a strategic buyer. This allows it to build and expand its own product offering based on the acquired products without worrying about sudden substantial changes in it. Example: Red Hat acquiring Ansible, KVM, Gluster, Inktank (Ceph), and many more
  • Entering adjacent markets. Acquiring open source companies in adjacent market segments, again, allows a company to expand the product offering, which makes vendor lock-in easier, and scales the business further. Example: Citrix acquiring XenSource
  • Acquiring the team. This is more relevant for smaller and younger projects than for larger, more well-established ones, but is worth mentioning.

What about the financial rationale? The standard transaction-multiples valuation approach completely breaks down when it comes to the OSS market. Multiples reach 20x and even 50x price/sales, and are therefore largely irrelevant, leading to the obvious conclusion that such deals are not financially but strategically motivated, and that the financial health of the target is more of a “nice to have.”

With this in mind, would a strategy of investing in OSS companies with the eventual aim of a strategic sale make sense? After all, there seems to be a decent track-record to go off of.

My assessment is that this strategy on its own is not enough. Pursuing such an approach from the start is risky—there are not enough exits in the history of OSS to justify the risks.

A Better Monetization Model: SaaS

While the promise of a lucrative strategic sale may be enough to motivate VC funds to put money to work in the space, as discussed above, it remains a risky path. As such, it feels like the rationale for such investments must be reliant on other factors as well. One such factor could be returning to basics: building profitable companies.

But as we have seen in the case studies above, this strategy doesn’t seem to be working out so well, certainly not within the timeframes required for VC investors. Nevertheless, it is important to point out that both Red Hat and Hortonworks primarily focus on monetizing through offering support and consulting services. As such, it would be wrong to dismiss OSS monetization prospects altogether. More likely, monetization models focused on support and consulting are inappropriate, but others may work better.

In fact, the SaaS business model might be the answer. As per Peter Levine’s analysis, “by packaging open source into a service, […] companies can monetize open source with a far more robust and flexible model, encouraging innovation and ongoing investment in software development.”

Why is SaaS a better model for OSS? There are several reasons for this, most of which are applicable not only to OSS SaaS, but to SaaS in general.

First, SaaS opens the market for the long tail of SME clients. Smaller companies usually don’t need enterprise support and on-premises installation, but may already have sophisticated needs from a technology standpoint. As a result, it’s easier for them to purchase a SaaS product and pay a relatively low price for using it.

Citing MongoDB’s VP of Strategy, Kelly Stirman, “Where we have a suite of management technologies as a cloud service, that is geared for people that we are never going to talk to and it’s at a very attractive price point—$39 a server, a month. It allows us to go after this long tail of the market that isn’t Fortune 500 companies, necessarily.”

Second, SaaS scales well. It creates economies of scale for clients: they save money on infrastructure and operations because the provider aggregates resources and centralizes customer requirements, which improves manageability.

This, therefore, makes it an attractive model for clients who, as a result, will be more willing to lock themselves into monthly payment plans in order to reap the benefits of the service.

Finally, SaaS businesses are more difficult to replicate. In the traditional OSS model, everyone has access to the source code, so the support and consulting business model offers the incumbent little protection from new market entrants.

In the SaaS OSS case, the investment required for building the infrastructure upon which clients rely is fairly onerous. This, therefore, builds bigger barriers to entry, and makes it more difficult for competitors who lack the same amount of funding to replicate the offering.

Success Stories for OSS with SaaS

Importantly, OSS SaaS companies can be financially viable on their own. GitHub is a good example of this.

Founded in 2008, GitHub was able to bootstrap the business for four years without any external funding. The company has reportedly always been cash-flow positive (except for 2015) and generated estimated revenues of $100 million in 2016. In 2012, they accepted $100 million in funding from Andreessen Horowitz and later in 2015, $250 million from Sequoia with an implied $2 billion valuation.

Another well-known successful OSS company is Databricks, which provides commercial support for Apache Spark but—more importantly—allows its customers to run Spark in the cloud. The company has raised $100 million from Andreessen Horowitz, Data Collective, and NEA. Unfortunately, we don’t have a lot of insight into their profitability, but they are reported to be performing strongly and already had more than 500 companies using the technology as of 2015.

Generally, many OSS companies are in one way or another gradually drifting towards the SaaS model or other types of cloud offerings. For instance, Red Hat is moving to PaaS over support and consulting, as evidenced by OpenShift and the acquisition of AnsibleWorks.

Different ways of mixing support and consulting with SaaS are common too. We, unfortunately, don’t have detailed statistics on Elastic’s on-premises vs. cloud installation product offering, but we can see from the presentation of its closest competitor Splunk that their SaaS offering is gaining scale: Its share in revenue is expected to triple by 2020 (chart 6).

Investable Business Model or Not?

To conclude, while recent years have seen an influx of venture capital dollars poured into OSS companies, there are strong doubts that such investments make sense if the monetization models being used remain focused on the traditional support and consulting model. Such a model can work (as seen in the Red Hat case study) but cannot scale at the pace required by VC investors.

Of course, VC funds may always hope for a lucrative strategic exit, and there have been several examples of such transactions. But relying on this alone is not enough. OSS companies need to innovate around monetization strategies in order to build profitable and fast-growing companies.

The most plausible answer to this conundrum may come from switching to SaaS as a business model. SaaS allows one to tap into a longer-tail of SME clients and improve margins through better product offerings. Quoting Peter Levine again, “Cloud and SaaS adoption is accelerating at an order of magnitude faster than on-premise deployments, and open source has been the enabler of this transformation. Beyond SaaS, I would expect there to be future models for open source monetization, which is great for the industry”.

Whatever ends up happening, the sheer amount of venture investment into OSS companies means that smarter monetization strategies will be needed to keep the open source dream alive.

Source : https://www.toptal.com/finance/venture-capital-consultants/open-source-software-investable-business-model-or-not

Industrial tech may not be sexy, but VCs are loving it – John Tough

There are nearly 300 industrial-focused companies within the Fortune 1,000. The median revenue for these economy-anchoring firms is nearly $4.9 billion, and together they represent over $9 trillion in market capitalization.

Due to the boring nature of some of these industrial verticals and the complexity of the value chain, venture-related events tend to get lost within our traditional VC news channels. But entrepreneurs (and VCs willing to fund them) are waking up to the potential rewards of gaining access to these markets.

Just how active is the sector now?

That’s right: Last year nearly $6 billion went into Series A, B & C startups within the industrial, engineering & construction, power, energy, mining & materials, and mobility segments. Venture capital dollars deployed to these sectors are growing at a 30 percent annual rate, up from ~$750 million in 2010.

And while $6 billion invested is notable due to the previous benchmarks, this early stage investment figure still only equates to ~0.2 percent of the revenue for the sector and ~1.2 percent of industry profits.

The number of deals in the space shows a similarly strong growth trajectory. But there are some interesting trends beginning to emerge: The capital deployed to the industrial technology market is growing at a faster clip than the number of deals. These differing growth trajectories mean that the average deal size has grown by 45 percent in the last eight years, from $18 million to $26 million.
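That growth figure is easy to verify from the two averages quoted:

```python
# Check the quoted growth in average deal size.
old, new = 18e6, 26e6
print((new - old) / old)  # 0.444... -> roughly 45 percent
```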

Detail by stage of financing

Median Series A deal size in 2018 was $11 million, representing a modest 8 percent increase in size versus 2012/2013. But Series A deal volume is up nearly 10x since then!

Median Series B deal size in 2018 was $20 million, an 83 percent growth over the past five years and deal volume is up about 4x.

Median Series C deal size in 2018 was $33 million, representing an enormous 113 percent growth over the past five years. But Series C deal counts appear to have reached a plateau in the low 40s, so investors are becoming pickier in selecting the winners.

These graphs show that the Series A investors have stayed relatively consistent and that the overall 46 percent increase in sector deal size growth primarily originates from the Series B and Series C investment rounds. With bigger rounds, how are valuation levels adjusting?

Above: Growth in pre-money valuation particularly acute in later stage deals

The data shows that valuations have increased even faster than the round sizes have grown themselves. This means management teams are not feeling any incremental dilution by raising these larger rounds.

  • The average Series A round now buys about 24 percent, slightly less than five years ago
  • The average Series B round now buys about 22 percent of the company, down from 26 percent five years ago
  • The average Series C round now buys approximately 20 percent, down from 23 percent five years ago.
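The mechanics behind these numbers are straightforward: ownership sold equals round size divided by post-money valuation, so valuations rising faster than round sizes means less dilution. A sketch using the Series B medians quoted earlier (the pre-money figures are implied for illustration, not reported):

```python
# Ownership sold in a round = round size / (pre-money + round size).
def ownership_sold(round_size, pre_money):
    return round_size / (pre_money + round_size)

# Series B medians from above; pre-money valuations are implied/hypothetical.
print(ownership_sold(11e6, 31e6))  # ~0.26 -- five years ago
print(ownership_sold(20e6, 71e6))  # ~0.22 -- 2018
```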

Some conclusions

  • Dollars invested remain a small portion of industry revenue and profit, which allows for further capital commitments.
  • There is a growing appreciation for the industrial sales cycle. Investor willingness to wait for reduced risk to deploy even more capital in the perceived winners appears to be driving this trend.
  • Entrepreneurs that can successfully de-risk their enterprise through revenue, partnerships, and industry hires will gain access to outsized capital pools. The winners in this market tend to compound, as later customers look to early adopters.
  • Uncertainty still remains about exit opportunities for technology companies that serve these industries. While there are a few headline-grabbing acquisitions (PlanGrid, Kurion, OSIsoft), we are not hearing about a sizable exit from this market on a weekly or monthly cadence. This means we won’t know for a few years about the returns impact of these rising valuations. Grab your hard hat!

Source : https://venturebeat.com/2019/01/22/industrial-tech-may-not-be-sexy-but-vcs-are-loving-it/

Predicting a Startup Valuation with Data Science – Sebastian Quintero

The following is a condensed and slightly modified version of a Radicle working paper on the startup economy in which we explore post-money valuations by venture capital stage classifications. We find that valuations have interesting distributional properties and then go on to describe a statistical model for estimating an undisclosed valuation with considerable ease. In conjunction with this post, we are releasing a free tool for estimating startup valuations. To use the tool and to download the full PDF of the working paper, go here, but please read the entirety of this post before doing so. This is not magic and the details matter. With that said, grab some coffee and get comfortable––we’re going deep.

Introduction

It’s often difficult to comprehend the significance of numbers thrown around in the startup economy. If a company raises a $550M Series F at a valuation of $4 billion [3] — how big is that really? How does that compare to other Series F rounds? Is that round approximately average when compared to historical financing events, or is it an anomaly?

At Radicle, a disruption research company, we use data science to better understand the entrepreneurial ecosystem. In our quest to remove opacity from the startup economy, we conducted an empirical study to better understand the nature of post-money valuations. While it’s popularly accepted that seed rounds tend to be at valuations somewhere in the $2m to $10m range [18], there isn’t much data to back this up, nor is it clear what valuations really look like at subsequent financing stages. Looking back at historical events, however, we can see some anecdotally interesting similarities.

Google and Facebook, before they were household names, raised Series A rounds at valuations of $98m and $100m, respectively. More recently, Instacart, the grocery delivery company, and Medium, the social publishing network on which you’re currently reading this, raised Series B rounds with valuations of $400m and $457m, respectively. Instagram wasn’t too dissimilar at that stage, with a Series B valuation of $500m before its acquisition by Facebook in 2012. Moving one step further, Square (NYSE: SQ), Shopify (NYSE: SHOP), and Wish, the e-commerce company that is mounting a challenge against Amazon, all raised Series C rounds with valuations of exactly $1 billion. Casper, the privately held direct-to-consumer startup disrupting the mattress industry, raised a similar Series C with a post-money valuation of $920m. Admittedly, these are probably only systematic similarities in hindsight because human minds are wired to see patterns even when there aren’t any, but that still makes us wonder if there exists some underlying trend. Our research suggests that there is, but why is this important?

We think entrepreneurs, venture capitalists, and professionals working in corporate innovation or M&A would benefit greatly from having an empirical view of startup valuations. New company financings are announced on a daily cadence, and having more data-driven publicly available research helps anyone that engages with startups make better decisions. That said, this research is solely for informational purposes and our online tool is not a replacement for the intrinsic, from the ground up, valuation methods and tools already established by the venture capital community. Instead, we think of this body of research as complementary — removing information asymmetries and enabling more constructive conversations for decision-making around valuations.

Making Sense of Startup Valuations

We obtained data for this analysis from Crunchbase, a venture capital database that aggregates funding events and associated meta-data about the entrepreneurial ecosystem. Our sample consists of 8,812 financing events since the year 2010 with publicly disclosed valuations and associated venture stage classifications. Table I below provides summary statistics.

The sample size for the median amount of capital raised at each stage is much higher [N=84k] because round sizes are more frequently disclosed and publicly available.

To better understand the nature of post-money valuations, we assessed their distributional properties using kernel density estimation (KDE), a non-parametric approach commonly used to approximate the probability density function (PDF) of a continuous random variable [8]. Put simply, KDE draws the distribution for a variable of interest by analyzing the frequency of events much like a histogram does. Non-parametric is just a fancy way of saying that the method does not make any assumption about the data being normally distributed, which makes it perfect for exercises where we want to draw a probability distribution but have no prior knowledge about what it actually looks like.
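
As a rough illustration of the method (a sketch, not Radicle’s actual pipeline), the density curves can be drawn with an off-the-shelf KDE on log-scaled valuations; the DataFrame, column names, and toy values below are all hypothetical:

```python
# Minimal KDE sketch: estimate and plot the density of log-scaled
# post-money valuations per stage, with a dashed line at each median.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

df = pd.DataFrame({  # toy data, valuations in $M
    "stage": ["Seed", "Seed", "Seed", "Series A", "Series A", "Series A"],
    "post_money": [2.2, 4.0, 7.0, 12.0, 16.0, 30.0],
})

for stage, group in df.groupby("stage"):
    log_vals = np.log10(group["post_money"])
    kde = gaussian_kde(log_vals)  # non-parametric density estimate
    grid = np.linspace(log_vals.min() - 1, log_vals.max() + 1, 200)
    plt.plot(grid, kde(grid), label=stage)
    plt.axvline(np.median(log_vals), linestyle="--")  # median marker

plt.xlabel("log10(post-money valuation, $M)")
plt.ylabel("density")
plt.legend()
plt.show()
```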

The two plots immediately above and further down below show the valuation probability density functions for venture capital stages on a logarithmic scale, with vertical lines indicating the median for each class. Why on a logarithmic scale? Well, post-money valuations are power-law distributed, as most things are in the venture capital domain [5], which means that the majority of valuations are at low values but there’s a long tail of rare but exceptionally high valuation events. Technically speaking, post-money valuations can also be described as being log-normally distributed, which just means that taking the natural logarithm of valuations produces the bell curves we’re all so familiar with. Series A, B, and C valuations may be argued as being bimodal log-normal distributions, and seed valuations may be approaching multimodality (more on that later), but technical fuss aside, this detail is important because log-normal distributions are easy for us to understand using the common language of mean, median, and standard deviation — even if we have to exponentiate the terms to put them in dollar signs. More importantly, this allows us to consider classical statistical methods that only work when we make strong assumptions about normality.

Founders that seek venture capital to get their company off the ground usually start by raising an angel or a seed round. An angel round consists of capital raised from their friends, family members, or wealthy individuals, while seed rounds are usually a startup’s first round of capital from institutional investors [18]. The median valuation for both angel and seed is $2.2m USD, while the median valuation for pre-seed is $1.9m USD. While we anticipated some overlap between angel, pre-seed and seed valuations, we were surprised to find that the distributions for these three classes of rounds almost completely overlap. This implies that these early-stage classifications are remarkably similar in reality. That said, we think it’s possible that the angel sample is biased towards the larger events that get reported, so we remain slightly skeptical of the overlap. And as mentioned earlier, the distribution of seed stage valuations appears to be approaching multimodality, meaning it has multiple modes. This may be due to the changing definition of a seed round and the recent institutionalization of pre-seed rounds, which are equal to or less than $1m in total capital raised and have only recently started being classified as “Pre-seed” in Crunchbase (and hence the small sample size). There’s also a clear mode in the seed valuation distribution around $7m USD, which overlaps with the Series A distribution, suggesting, as others recently have, that some subset of seed rounds are being pushed further out and resemble what Series A rounds were 10 years ago [1].

Around 21 percent of seed stage companies move on to raise a Series A [16] about 18 months after raising their seed — with approximately 50 percent of Series A companies moving on to a Series B a further 18–21 months out [17]. In that time the median valuation jumps to $16m at the Series A and leaps to $130m at the Series B stage. Valuations climb further to a median of $500m at Series C. In general, we think it’s interesting to see the bimodal nature as well as the extent of overlap between the Series A, B, and C valuation distributions. It’s possible that the overlap stems from changes in investor behavior, with the general size and valuation at each stage continuously redefined. Just like some proportion of seed rounds today are what Series A rounds were 10 years ago, the data suggests, for instance, that some proportion of Series B rounds today are what Series C rounds used to be. This was further corroborated when we segmented the data by decades going back to the year 2000 and compared the resulting distributions. We would note, however, that the changes are very gradual, and not as sensational as is often reported [12].

The median valuation for startups reaches $1b between the Series D and E stages, and $1.65 billion at Series F. This answers our original question, putting Peloton’s $4 billion appraisal at the 81st percentile of valuations at the Series F stage, far above the median, and indeed above the median $2.4b valuation for Series G companies. From there we see a considerable jump to the median Series H and Series I valuations of $7.7b and $9b, respectively. The Series I distribution has a noticeably lower peak in density and higher variance due to a smaller sample size. We know companies rarely make it that far, so that’s expected. Lyft and SpaceX, at valuations of $15b and $27b, respectively, are recent examples of companies that have made it to the Series I stage. (Note: In December 2018 SpaceX raised a Series J round, which is a classification not analyzed in this paper.)

We classified each stage into higher level classes using the distributions above, as one of Early (Angel, Pre-Seed, Seed), Growth (Series A, B, C), Late (Series D, E, F, G), or Private IPO (Series H, I). With these aggregate classifications, we further investigated how valuations have fared across time and found that the medians (and means) have been more or less stable on a logarithmic scale. What has changed, since 2013, is the appearance of the “Private IPO” [11, 13]. These rounds, described above with companies such as SpaceX, Lyft, and others such as Palantir Technologies, are occurring later and at higher valuations than have previously existed. These late-stage private rounds are at such high valuations that future IPOs, if they ever occur, may end up being down rounds [22].

Approximating an Undisclosed Valuation

Given the above, we designed a simple statistical model to predict a round’s post-money valuation by its stage classification and the amount of capital raised. Why might this be useful? Well, the relationship between capital raised and post-money valuation is true by mathematical definition, so we’re not interested in claiming to establish a causal relationship in the classical sense. A startup’s post-money valuation is equal to an intrinsic pre-money valuation calculated by investors at the time of investment plus the amount of new capital raised [19, 21]. However, pre-money valuations are often not disclosed, so a statistical model for estimating an undisclosed valuation would be helpful when the size of a financing round is available and its stage is either disclosed as well or easily inferred.

We formulated an ordinary least squares log-log regression model after considering that we did not have enough stage classifications and complete observations at each stage for multilevel modeling and that it would be desirable to build a model that could be easily understood and utilized by founders, investors, executives, and analysts. Formally, our model is of the form:
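
Reconstructed from the definitions that follow, the model is approximately

$$\log(y) = \alpha + \sum_{i} \beta_i \log(c \cdot r_i) + \epsilon$$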

where y is the output post-money valuation, c is the amount of capital raised, r is a binary term that indicates the financing stage, and epsilon is the error term. log(c · r) is, therefore, an interaction term that specifies the amount of capital raised at a specific stage. The model we present does not include stage main effects because the model remains the same, whether they’re left in or pulled out, while the coefficients become reparameterizations of the original estimates [23]. In other words, boolean stage main effects adjust the constant and coefficients while maintaining equivalent summed values — increasing the mental gymnastics required for interpretation without adding any statistical power to the regression. Capital main effects are not included because domain knowledge and the distributions above suggest that financing events are always indicative of a company’s stage, so the effect is not fixed, and therefore including capital by itself results in a misspecified model alongside interaction terms. Of course, whether or not a stage classification is agreed upon by investors and founders and specified on the term sheet is another matter.

As is standard practice, we used heteroscedasticity robust standard errors to estimate the beta coefficients, and residual analysis via a fitted values versus residuals plot confirms that the model validates the general assumptions of ordinary least squares regression. There is no multicollinearity between the variables, and a Q-Q plot further confirmed that the data is log-normally distributed. The results are statistically significant at the p < 0.001 level for all terms with an adjusted R² of 89 percent and an F-Statistic of 5,900 (p < 0.001). Table II outlines the results. Monetary values in the model are specified in millions, USD.
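
For readers who want to reproduce the setup, a minimal sketch of such a log-log regression with per-stage interaction terms and robust standard errors might look like the following; the toy data and statsmodels’ HC3 covariance are assumptions, not the paper’s exact choices:

```python
# Log-log OLS with stage interactions (r_i * log(c)) and no stage main
# effects, fit with heteroscedasticity-robust standard errors.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({  # hypothetical rounds, in $M
    "stage": ["Seed", "Seed", "Series A", "Series A", "Series B", "Series B"],
    "capital": [1.5, 2.0, 8.0, 12.0, 30.0, 35.0],
    "valuation": [8.0, 9.4, 60.0, 90.0, 180.0, 224.0],
})

# One interaction column per stage: the stage dummy times log(capital).
X = pd.get_dummies(df["stage"]).astype(float).mul(np.log(df["capital"]), axis=0)
X = sm.add_constant(X)
y = np.log(df["valuation"])

result = sm.OLS(y, X).fit(cov_type="HC3")  # robust (HC3) standard errors
print(result.summary())
```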

The model can be interpreted by solving for y and differentiating with respect to c to get the marginal effect. Therefore, we can think of percentage increases in c as leading to some percentage increase in y. At the seed stage, for example, for a 10 percent increase in money raised a company can expect a 6.6 percent increase in their post-money valuation, ceteris paribus. That premium increases as companies make their way through the venture capital funnel, peaking at the Series I stage with a 12.4 percent increase in valuation per 10 percent increase in capital raised. In practice, an analyst could approximate an unknown post-money valuation by specifying the amount of capital raised at the appropriate stage in the model, exponentiating the constant and the beta term, and multiplying the values, such that:
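
From that description, the point estimate is approximately

$$\hat{y} = e^{\hat{\alpha}} \cdot e^{\hat{\beta}_i \log(c)} = e^{\hat{\alpha}} \cdot c^{\hat{\beta}_i}$$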

Using the first equation and the values in Table II, the estimated undisclosed post-money valuation of a startup after a $2m seed round is approximately $9.4m USD — for a $35m Series B, it’s $224m — and for a $200m Series D, it’s $1.7b. Subtracting the amount of capital raised from the estimated post-money valuation would yield an estimated pre-money valuation.
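
As a sketch of that back-of-envelope calculation (the coefficients below are placeholders back-solved from the quoted $2m-seed example, not the actual Table II estimates, which are not reproduced in this excerpt):

```python
# Estimate an undisclosed post-money valuation as exp(alpha) * c^beta.
import math

def estimate_post_money(capital_m: float, alpha: float, beta: float) -> float:
    """Point estimate of the post-money valuation, with capital in $M."""
    return math.exp(alpha) * capital_m ** beta

ALPHA_SEED = 1.78  # hypothetical constant, back-solved from the $2m example
BETA_SEED = 0.66   # ~6.6% valuation lift per 10% more capital at seed

post = estimate_post_money(2.0, ALPHA_SEED, BETA_SEED)
print(f"post-money ~= ${post:.1f}m")        # -> ~$9.4m
print(f"pre-money  ~= ${post - 2.0:.1f}m")  # subtract capital raised
```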

Can it really be that simple? Well, that depends entirely on your use case. If you want to approximate a valuation and don’t have the tools to do so, and can’t get on the phone with the founders of the company, then the calculations above should be good enough for that purpose. If instead, you’re interested in purchasing a company, this is a good starting point for discussions, but you probably want to use other valuation methods, too. As mentioned earlier, this research is not meant to supplant existing valuation methodologies established by the venture capital community.

As far as estimation errors go, you can infer from the scatter plot above that, for predictions at the early stages, you can expect valuations to be off by a few million dollars — for growth-stage companies, a few hundred million — and in the late and private IPO stages, being off by a few billion would be reasonable. Of course, the accuracy of any prediction depends on the reliability of the estimated means, i.e., the credible intervals of the posterior distributions under a Bayesian framework [6], as well as the size of the error from omitted variable bias — which is not insignificant. We can reformulate our model in a directly comparable probabilistic Bayesian framework, in vector notation, as:
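
In vector notation, the reformulated likelihood is approximately

$$\log(y) \mid X \sim \mathcal{N}\!\left(X\beta,\ \sigma^2 I\right)$$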

where the distribution of log(y) given X, an n · k matrix of interaction terms, is normal with a mean that is a linear function of X, observation errors are independent and of equal variance σ², and I represents an n · n identity matrix. We fit the model with a non-informative flat prior using the No-U-Turn Sampler (NUTS), an extension of the Hamiltonian Monte Carlo MCMC algorithm [9], for which our model converges appropriately and has the desirable hairy caterpillar sampling properties [6].
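
A minimal sketch of how such a model could be specified in PyMC (our assumption for illustration; the authors do not say which library they used), with flat priors on the coefficients and PyMC’s default NUTS sampler:

```python
# Bayesian log-log regression: log(y) | X ~ Normal(X @ beta, sigma^2).
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))  # stand-in for the interaction matrix
log_y = X @ np.array([0.7, 0.9, 1.1]) + rng.normal(scale=0.3, size=50)

with pm.Model():
    beta = pm.Flat("beta", shape=X.shape[1])   # non-informative flat prior
    sigma = pm.HalfNormal("sigma", sigma=1.0)  # observation noise scale
    mu = pm.math.dot(X, beta)                  # linear mean function
    pm.Normal("obs", mu=mu, sigma=sigma, observed=log_y)
    idata = pm.sample()  # NUTS, an HMC extension, is the default sampler
```

Inspecting the trace plots in `idata` for the “hairy caterpillar” shape is the usual quick convergence check.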

The 95 percent credible intervals in Figure V suggest that posterior distributions from Angel to Series E, excluding pre-seed, have stable ranges of highly probable values around our original OLS coefficients. However, the distributions become more uncertain at the later stages, particularly for Series F, G, H, and I. This should be obvious, considering our original sample sizes for the pre-seed class and for the later stages. Because the data needs to be transformed back to its original scale for appropriate estimation, and because the magnitudes of late-stage rounds tend to be very high, such changes in the exponent will lead to dramatically different prediction results. As with any simple tool then, your mileage may vary. For more accurate and precise estimates, we’d suggest hiring a data scientist to build a more sophisticated machine learning algorithm or Bayesian model to account for more features and hierarchy. If your budget doesn’t allow for it, the simple calculation using the estimates in Table II will get you in the ballpark.

Concluding Remarks

This paper provides an empirical foundation for how to think about startup valuations and introduces a statistical model as a simple tool to help practitioners working in venture capital approximate an undisclosed post-money valuation. That said, the information in this paper is not investment advice, and is provided solely for educational purposes from sources believed to be reliable. Historical data is a great indicator but never a guarantee of the future, and statistical models are never correct — only useful [2]. This paper also makes no comment on whether current valuation practices result in accurate representations of a startup’s fair market value, as that is an entirely separate discussion [7].

This research may also serve as a starting point for others to pursue their own applied machine learning research. We translated the model presented in this article into a more powerful learning algorithm [8] with more features that fills in the missing post-money valuations in our own database. These estimates are then passed to Startup Anomaly Detection™, an algorithm we’ve developed to estimate the plausibility that a venture-backed startup will have a liquidity event such as an IPO or acquisition given the current state of knowledge about them. Our machine learning system appears to have some similarities with others recently disclosed by GV [15], Google’s venture capital arm, and Social Capital [14], with the exception that our probability estimates are available as part of Radicle’s research products.

Companies will likely continue raising even later and larger rounds in the coming years, and valuations at each stage may continue being redefined, but now we have a statistical perspective on valuations as well as greater insight into their distributional properties, which gives us a foundation for understanding disruption as we look forward.

Source : https://towardsdatascience.com/making-sense-of-startup-valuations-with-data-science-1dededaf18bb

Digital Transformation of Business and Society: Challenges and Opportunities by 2020 – Frank Diana

At a recent KPMG Robotic Innovations event, Futurist and friend Gerd Leonhard delivered a keynote titled “The Digital Transformation of Business and Society: Challenges and Opportunities by 2020”. I highly recommend viewing the video of his presentation. As Gerd describes, he is a Futurist focused on foresight and observations — not predicting the future. We are at a point in history where every company needs a Gerd Leonhard. For many of the reasons presented in the video, future thinking is rapidly growing in importance. As Gerd so rightly points out, we are still vastly under-estimating the sheer velocity of change.

With regard to future thinking, Gerd used my future scenario slide to describe both the exponential and combinatorial nature of future scenarios — not only do we need to think exponentially, but we also need to think in a combinatorial manner. Gerd mentioned Tesla as a company that really knows how to do this.

Our Emerging Future

He then described our current pivot point of exponential change: a point in history where humanity will change more in the next twenty years than in the previous 300. With that as a backdrop, he encouraged the audience to look five years into the future and spend 3 to 5% of their time focused on foresight. He quoted Peter Drucker (“In times of change the greatest danger is to act with yesterday’s logic”) and stated that leaders must shift from a focus on what is, to a focus on what could be. Gerd added that “wait and see” means “wait and die” (love that by the way). He urged leaders to focus on 2020 and build a plan to participate in that future, emphasizing the question is no longer what-if, but what-when. We are entering an era where the impossible is doable, and the headline for that era is: exponential, convergent, combinatorial, and inter-dependent — words that should be a key part of the leadership lexicon going forward. Here are some snapshots from his presentation:

  • Because of exponential progression, it is difficult to imagine the world in 5 years, and although the industrial era was impactful, it will not compare to what lies ahead. The danger of vastly under-estimating the sheer velocity of change is real. For example, in just three months, the projection for the number of autonomous vehicles sold in 2035 went from 100 million to 1.5 billion
  • Six years ago Gerd advised a German auto company about the driverless car and the implications of a sharing economy — and they laughed. Think of what’s happened in just six years — it’s hard to imagine anyone is laughing now. He witnessed something similar as a veteran of the music business, where he tried to guide the industry through digital disruption: an industry that shifted from selling $20 CDs to making a fraction of a penny per play. Gerd’s experience in the music business is a lesson we should learn from: you can’t stop people who see value from extracting that value. Protectionist behavior did not work, as the industry lost 71% of its revenue in 12 years. Streaming music will be huge, but the winners are not the traditional players; they are Spotify, Apple, Facebook, Google, etc. This scenario likely plays out across every industry, as new businesses are emerging, but traditional companies are not running them. Gerd stressed that we can’t let this happen across these other industries
  • Anything that can be automated will be automated: truck drivers and pilots go away, as robots don’t need unions. There is just too much to be gained not to automate. For example, 60% of the cost in the system could be eliminated by interconnecting logistics, possibly realized via a Logistics Internet as described by economist Jeremy Rifkin. But the drive towards automation will have unintended consequences and some science fiction scenarios could play out. Humanity and technology are indeed intertwining, but technology does not have ethics. A self-driving car would need ethics, as we make difficult decisions while driving all the time. How does a car decide to hit a frog versus swerving and hitting a mother and her child? Speaking of science fiction scenarios, Gerd predicts that when these things come together, humans and machines will have converged:
  • Gerd has been using the term “Hellven” to represent the two paths technology can take. Is it 90% heaven and 10% hell (unintended consequences), or can this equation flip? He asks the question: Where are we trying to go with this? He used the real example of drones used to benefit society (heaven), but people buying guns to shoot them down (hell). As we pursue exponential technologies, we must do it in a way that avoids negative consequences. Will we allow humanity to move down a path where by 2030, we will all be human-machine hybrids? Will hacking drive chaos, as hackers gain control of a vehicle? A recent recall of 1.4 million Jeeps underscores the possibility. A world of super intelligence requires super humanity — technology does not have ethics, but society depends on it. Is this Ray Kurzweil’s vision what we want?
  • Is society truly ready for human-machine hybrids, or even advancements like the driverless car that may be closer to realization? Gerd used a very effective video to make the point
  • Followers of my blog know I’m a big believer in the coming shift to value ecosystems. Gerd described this as a move away from Egosystems, where large companies are running large things, to interdependent Ecosystems. I’ve talked about the blurring of industry boundaries and the movement towards ecosystems. We may ultimately move away from the industry construct and end up with a handful of ecosystems like mobility, shelter, resources, wellness, growth, money, maker, and comfort
  • Our kids will live to 90 or 100 as the default. We are gaining 8 hours of longevity per day — one third of a year per year. Genetic engineering is likely to eradicate disease, impacting longevity and global population. DNA editing is becoming a real possibility in the next 10 years, and at least 50 Silicon Valley companies are focused on ending aging and eliminating death. One such company is Human Longevity Inc., which was co-founded by Peter Diamandis of Singularity University. Gerd used a quote from Peter to help the audience understand the motivation: “Today there are six to seven trillion dollars a year spent on healthcare, half of which goes to people over the age of 65. In addition, people over the age of 65 hold something on the order of $60 trillion in wealth. And the question is what would people pay for an extra 10, 20, 30, 40 years of healthy life. It’s a huge opportunity”
  • Gerd described the growing need to focus on the right side of our brain. He believes that algorithms can only go so far. Our right brain characteristics cannot be replicated by an algorithm, making a human-algorithm combination — or humarithm as Gerd calls it — a better path. The right brain characteristics that grow in importance and drive future hiring profiles are:
  • Google is on the way to becoming the global operating system — an Artificial Intelligence enterprise. In the future, you won’t search, because as a digital assistant, Google will already know what you want. Gerd quotes Ray Kurzweil in saying that by 2027, the capacity of one computer will equal that of the human brain — at which point we shift from an artificial narrow intelligence, to an artificial general intelligence. In thinking about AI, Gerd flips the paradigm to IA, or intelligent assistant. For example, Schwab already has an intelligent portfolio. He indicated that every bank is investing in intelligent portfolios that deal with simple investments that robots can handle. This leads to a 50% replacement of financial advisors by robots and AI
  • This intelligent assistant race has just begun, as Siri, Google Now, Facebook MoneyPenny, and Amazon Echo vie for intelligent assistant positioning. Intelligent assistants could eliminate the need for actual assistants in five years, and creep into countless scenarios over time. Police departments are already capable of determining who is likely to commit a crime in the next month, and there are examples of police taking preventative measures. Augmentation adds another dimension, as an officer wearing glasses can identify you upon seeing you and have your records displayed in front of them. There are over 100 companies focused on augmentation, and a number of intelligent assistant examples surrounding IBM Watson; the most discussed being the effectiveness of doctor assistance. An intelligent assistant is the likely first role in the autonomous vehicle transition, as cars step in to provide a number of valuable services without completely taking over. There are countless examples emerging
  • Gerd took two polls during his keynote. Here is the first: how do you feel about the rise of intelligent digital assistants? Answers 1 and 2 below received the lion’s share of the votes
  • Collectively, automation, robotics, intelligent assistants, and artificial intelligence will reframe business, commerce, culture, and society. This is perhaps the key take away from a discussion like this. We are at an inflection point where reframing begins to drive real structural change. How many leaders are ready for true structural change?
  • Gerd likes to refer to the 7-ations: Digitization, De-Materialization, Automation, Virtualization, Optimization, Augmentation, and Robotization. Consequences of the exponential and combinatorial growth of these seven include dependency, job displacement, and abundance. Whereas these seven are tools for dramatic cost reduction, they also lead to abundance. Examples are everywhere, from the 16 million songs available through Spotify, to the 3D printed cars that require only 50 parts. As supply exceeds demand in category after category, we reach abundance. As Gerd put it, in five years’ time, genome sequencing will be cheaper than flushing the toilet and abundant energy will be available by 2035 (2015 will be the first year that a major oil company will leave the oil business to enter the abundance of the renewable business). Other things to consider regarding abundance:
  • Efficiency and business improvement are a path, not a destination. Gerd estimates that total efficiency will be reached in 5 to 10 years, creating value through productivity gains along the way. However, after total efficiency is met, value comes from purpose. Purpose-driven companies have an aspirational purpose that aims to transform the planet; referred to as a massive transformative purpose in a recent book on exponential organizations. When you consider the value that the millennial generation places on purpose, it is clear that successful organizations must excel at both technology and humanity. If we allow technology to trump humanity, business would have no purpose
  • In the first phase, the value lies in the automation itself (productivity, cost savings). In the second phase, the value lies in those things that cannot be automated. Anything that is human about your company cannot be automated: purpose, design, and brand become more powerful. Companies must invent new things that are only possible because of automation
  • Technological unemployment is real this time — and exponential. Gerd pointed to a recent study by The Economist that describes how robotics and artificial intelligence will increasingly be used in place of humans to perform repetitive tasks. On the other side of the spectrum is a demand for better customer service and greater skills in innovation driven by globalization and falling barriers to market entry. Therefore, creativity and social intelligence will become crucial differentiators for many businesses; jobs will increasingly demand skills in creative problem-solving and constructive interaction with others
  • Gerd described a basic income guarantee that may be necessary if some of these unemployment scenarios play out. Something like this is already on the ballot in Switzerland, and it is not the first time this has been talked about:
  • In the world of automation, experience becomes extremely valuable — and you can’t, nor should you attempt to, automate experiences. We clearly see an intense focus on customer experience, and we had a great discussion on the topic on an August 26th Game Changers broadcast. Innovation is critical to both the service economy and experience economy. Gerd used a visual to describe the progression of economic value:
Source: B. Joseph Pine II and James Gilmore: The Experience Economy
  • Gerd used a second poll to sense how people would feel about humans becoming artificially intelligent. Here again, the audience leaned towards the first two possible answers:

Gerd then summarized the session as follows:

The future is exponential, combinatorial, and interdependent: the sooner we can adjust our thinking (lateral) the better we will be at designing our future.

My take: Gerd hits on a key point. Leaders must think differently. There is very little in a leader’s collective experience that can guide them through the type of change ahead — it requires us all to think differently

When looking at AI, consider trying IA first (intelligent assistance / augmentation).

My take: These considerations allow us to create the future in a way that avoids unintended consequences. Technology as a supplement, not a replacement

Efficiency and cost reduction based on automation, AI/IA and Robotization are good stories but not the final destination: we need to go beyond the 7-ations and inevitable abundance to create new value that cannot be easily automated.

My take: Future thinking is critical for us to be effective here. We have to have a sense as to where all of this is heading, if we are to effectively create new sources of value

We won’t just need better algorithms — we also need stronger humarithms i.e. values, ethics, standards, principles and social contracts.

My take: Gerd is an evangelist for creating our future in a way that avoids hellish outcomes — and kudos to him for being that voice

“The best way to predict the future is to create it” (Alan Kay).

My take: our context when we think about the future puts it years away, and that is just not the case anymore. What we think will take ten years is likely to happen in two. We can’t create the future if we don’t focus on it through an exponential lens

Source : https://medium.com/@frankdiana/digital-transformation-of-business-and-society-5d9286e39dbf

When, which … Design Thinking, Lean, Design Sprint, Agile? – Geert Claes

Confusion galore!

A lot of people are — understandably so — very confused when it comes to innovation methodologies, frameworks, and techniques. Questions like: “When should we use Design Thinking?”, “What is the purpose of a Design Sprint?”, “Is Lean Startup just for startups?”, “Where does Agile fit in?”, “What happens after the <some methodology> phase?” are all very common questions.

(How) does it all connect?

When browsing the Internet for answers, one notices quickly that others too are struggling to understand how it all works together.

Gartner (as well as numerous others) tried to visualise how methodologies like Design Thinking, Lean, Design Sprint and Agile flow nicely from one to the next. Most of these visualisations have a number of nicely coloured and connected circles, but for me they seem to miss the mark. The place where one methodology flows into the next is very debatable, because there are too many similar techniques and there is just too much overlap.

The innovation spectrum

It probably makes more sense to just look at Design Thinking, Lean, Design Sprint & Agile as a bunch of tools and techniques in one’s toolbox, rather than argue for one over the other, because they can all add value somewhere on the innovation spectrum.

Innovation initiatives can range from exploring an abstract problem space, to experimenting with a number of solutions, before continuously improving a very concrete solution in a specific market space.

Business model

An aspect that often seems to be omitted is the business model maturity axis. For established products as well as adjacent ones (think McKinsey’s Horizon 1 and 2), the business models are often very well understood. For startups and disruptive innovations within an established business, however, the business model will need to be validated through experiments.

Methodologies

Design Thinking

Design Thinking really shines when we need to better understand the problem space and identify the early adopters. There are various flavors of design thinking, but they all roughly follow the double-diamond flow. Simplistically, the first diamond starts by diverging: gathering lots of insights through talking to our target stakeholders, then converging: clustering these insights and identifying key pain-points, problems, or jobs to be done. The second diamond starts with a diverging exercise to ideate a large number of potential solutions before prototyping and testing the most promising ideas. Design Thinking is mainly focussed on qualitative rather than quantitative insights.

Lean Startup

The slight difference with Design Thinking is that the entrepreneur (or intrapreneur) often already has a good understanding of the problem space. Lean considers everything to be a hypothesis or assumption until validated, so even that good understanding of the problem space is just an assumption. Lean tends to start by specifying your assumptions on a customer-focussed (lean) canvas and then prioritizing and validating those assumptions according to highest risk for the entire product. The process to validate assumptions is to create an experiment (build), test it (measure), and learn whether the assumption or hypothesis still stands. Lean uses qualitative insights early on but later forces you to define actionable quantitative data to measure how effectively the solution addresses the problem and whether the growth strategy is on track. The “Get out of the building” phrase is often associated with Lean Startup, but the same principle of reaching out to customers obviously also applies to Design Thinking (… and Design Sprint … and Agile).

Design Sprint

It appears that the Google Ventures-style Design Sprint method could have its roots in a technique described in the Lean UX book. The key strength of a Design Sprint is to share insights, ideate, prototype, and test a concept all in a 5-day sprint. Given the short timeframe, Design Sprints only focus on part of the solution, but it’s an excellent way to learn really quickly whether you are on the right track or not.

Agile

Just like dealing with the uncertainty of our problem, solution and market assumptions, agile development is a great way to cope with uncertainty in product development. No need to specify every detail of a product up-front, because here too there are plenty of assumptions and uncertainty. Agile is a great way to build-measure-learn and validate assumptions whilst creating a Minimum Viable Product in Lean Startup parlance. We should define and prioritize a backlog of value to be delivered and work in short sprints, delivering and testing the value as part of each sprint.

Conclusion

Probably not really the answer you were looking for, but there is no clear rule on when to start where. There is also no obvious handover point because there is just too much overlap, and this significant overlap could explain why some people claim methodology <x> is better than <y>.

Anyhow, most innovation methodologies can add great value and it’s really up to the team to decide where to start and when to apply which methods and techniques. The common ground most can agree on is to avoid falling in love with your own solution and to listen to qualitative as well as quantitative customer feedback.

Innovation Spectrum

Some great books: Creative Confidence, Lean Startup, Running Lean, Sprint, Dual Transformation, Lean UX, Lean Enterprise, Scaling Lean … and a nice video on Innovation@50x

Update: minor update in the innovation canvas, moving the top axis of problem-solution-market to the side

Source : https://medium.com/@geertwlclaes/when-which-design-thinking-lean-design-sprint-agile-a4614fa778b9

Former Google CEO Eric Schmidt listed the ‘3 big failures’ he sees in tech startups today – Business Insider

Former Google CEO Eric Schmidt has listed the three “big failures” in tech entrepreneurship around the world.

Schmidt outlined the failings in a speech he gave at the Centre for Entrepreneurs in London this week. He later expanded on his thoughts in an interview with former BBC News boss James Harding.

Below are the three mistakes he outlined, with quotes taken from both a draft of his speech seen by Business Insider, and comments he delivered on the night.

1. People stick to who and what they know

“Far too often, we invest mostly in people we already know, who are working in very narrow disciplines,” Schmidt wrote in his draft.

In his speech, Schmidt pegged this point closely to a need for diversity and inclusion. He said companies need to be open to bringing in people from other countries and backgrounds.

He said entrepreneurship won’t flourish if people are “going to one institution, hiring only those people, and only — if I can be blunt — only white males.”

During the Q&A, Schmidt specifically addressed the gender imbalance in the tech industry. He said there’s a reason to be optimistic about women’s representation in tech improving, predicting that tech’s gender imbalance will vanish in one generation.

2. Too much focus on product and not on platforms

“We frequently don’t build the best technology platforms to tackle big social challenges, because often there is no immediate promise of commercial return,” Schmidt wrote in his draft.

“There are a million e-commerce apps but not enough speciality platforms for safely sharing and analyzing data on homelessness, climate change or refugees.”

Schmidt omitted this mention of socially conscious tech from his final speech, but did say that he sees a lot of innovation coming out of network platforms, which allow people to connect and pool data, because “the barrier to entry for these startups is very, very low.”

3. Companies aren’t partnering up early enough

Finally, Schmidt wrote in his draft that tech startups don’t partner enough with other companies in the modern, hyper-connected world. “It’s impossible to think about any major challenge for society in a silo,” he wrote.

He said in his speech that tech firms have to be ready to partner “fairly early.” He gave the example of a startup that wants to build homecare robots.

“The market for homecare robots is going to be very, very large. The problem is that you need visual systems, and machine learning systems, and listening systems, and motor systems, and so forth. You’re not going to be able to do it with three people,” he said.

After detailing the three failures he sees in tech entrepreneurship, Schmidt laid out what he views as the solution. He referred back to the Renaissance in Europe, when people turned their hand to all sorts of disciplines, from science, to art, to business.

Source : https://www.businessinsider.com/eric-schmidt-3-big-failures-he-sees-in-tech-entrepreneurship-2018-11

6 Biases Holding You Back From Rational Thinking – Robert Greene

Emotions are continually affecting our thought processes and decisions, below the level of our awareness. And the most common emotion of them all is the desire for pleasure and the avoidance of pain. Our thoughts almost inevitably revolve around this desire; we simply recoil from entertaining ideas that are unpleasant or painful to us. We imagine we are looking for the truth, or being realistic, when in fact we are holding on to ideas that bring a release from tension and soothe our egos, make us feel superior. This pleasure principle in thinking is the source of all of our mental biases. If you believe that you are somehow immune to any of the following biases, it is simply an example of the pleasure principle in action. Instead, it is best to search and see how they continually operate inside of you, as well as learn how to identify such irrationality in others.

These biases, by distorting reality, lead to the mistakes and ineffective decisions that plague our lives. Being aware of them, we can begin to counterbalance their effects.

1) Confirmation Bias

I look at the evidence and arrive at my decisions through more or less rational processes.

To hold an idea and convince ourselves we arrived at it rationally, we go in search of evidence to support our view. What could be more objective or scientific? But because of the pleasure principle and its unconscious influence, we manage to find that evidence that confirms what we want to believe. This is known as confirmation bias.

We can see this at work in people’s plans, particularly those with high stakes. A plan is designed to lead to a positive, desired objective. If people considered the possible negative and positive consequences equally, they might find it hard to take any action. Inevitably they veer towards information that confirms the desired positive result, the rosy scenario, without realizing it. We also see this at work when people are supposedly asking for advice. This is the bane of most consultants. In the end, people want to hear their own ideas and preferences confirmed by an expert opinion. They will interpret what you say in light of what they want to hear; and if your advice runs counter to their desires, they will find some way to dismiss your opinion, your so-called expertise. The more powerful the person, the more they are subject to this form of the confirmation bias.

When investigating confirmation bias in the world, take a look at theories that seem a little too good to be true. Statistics and studies, which are not very difficult to find once you are convinced of the rightness of your argument, are trotted out to prove them. On the Internet, it is easy to find studies that support both sides of an argument. In general, you should never accept the validity of people’s ideas because they have supplied “evidence.” Instead, examine the evidence yourself in the cold light of day, with as much skepticism as you can muster. Your first impulse should always be to find the evidence that disconfirms your most cherished beliefs and those of others. That is true science.

2) Conviction Bias

I believe in this idea so strongly. It must be true.

We hold on to an idea that is secretly pleasing to us, but deep inside we might have some doubts as to its truth, and so we go the extra mile to convince ourselves — to believe in it with great vehemence, and to loudly contradict anyone who challenges us. How can our idea not be true, we tell ourselves, if it brings out such energy in us to defend it? This bias is revealed even more clearly in our relationship to leaders — if they express an opinion with heated words and gestures, colorful metaphors and entertaining anecdotes, and a deep well of conviction, it must mean they have examined the idea carefully and therefore express it with such certainty. Those on the other hand who express nuances, whose tone is more hesitant, reveal weakness and self-doubt. They are probably lying, or so we think. This bias makes us prone to salesmen and demagogues who display conviction as a way to convince and deceive. They know that people are hungry for entertainment, so they cloak their half-truths with dramatic effects.

3) Appearance Bias

I understand the people I deal with; I see them just as they are.

We do not see people as they are, but as they appear to us. And these appearances are usually misleading. First, people have trained themselves in social situations to present the front that is appropriate and that will be judged positively. They seem to be in favor of the noblest causes, always presenting themselves as hardworking and conscientious. We take these masks for reality. Second, we are prone to fall for the halo effect — when we see certain negative or positive qualities in a person (social awkwardness, intelligence), other positive or negative qualities are implied that fit with this. People who are good looking generally seem more trustworthy, particularly politicians. If a person is successful, we imagine they are probably also ethical, conscientious and deserving of their good fortune. This obscures the fact that many people who get ahead have done so by doing less than moral actions, which they cleverly disguise from view.

4) The Group Bias

My ideas are my own. I do not listen to the group. I am not a conformist.

We are social animals by nature. The feeling of isolation, of difference from the group, is depressing and terrifying. We experience tremendous relief to find others who think the same way as we do. In fact, we are motivated to take up ideas and opinions because they bring us this relief. We are unaware of this pull and so imagine we have come to certain ideas completely on our own. Look at people that support one party or the other, one ideology — a noticeable orthodoxy or correctness prevails, without anyone saying anything or applying overt pressure. If someone is on the right or the left, their opinions will almost always follow the same direction on dozens of issues, as if by magic, and yet few would ever admit this influence on their thought patterns.

5) The Blame Bias

I learn from my experience and mistakes.

Mistakes and failures elicit the need to explain. We want to learn the lesson and not repeat the experience. But in truth, we do not like to look too closely at what we did; our introspection is limited. Our natural response is to blame others, circumstances, or a momentary lapse of judgment. The reason for this bias is that it is often too painful to look at our mistakes. It calls into question our feelings of superiority. It pokes at our ego. We go through the motions, pretending to reflect on what we did. But with the passage of time, the pleasure principle rises and we forget what small part in the mistake we ascribed to ourselves. Desire and emotion will blind us yet again, and we will repeat exactly the same mistake and go through the same mild recriminating process, followed by forgetfulness, until we die. If people truly learned from their experience, we would find few mistakes in the world, and career paths that ascend ever upward.

6) Superiority Bias

I’m different. I’m more rational than others, more ethical as well.

Few would say this to people in conversation. It sounds arrogant. But in numerous opinion polls and studies, when asked to compare themselves to others, people generally express a variation of this. It’s the equivalent of an optical illusion — we cannot seem to see our faults and irrationalities, only those of others. So, for instance, we’ll easily believe that those in the other political party do not come to their opinions based on rational principles, but those on our side have done so. On the ethical front, few will ever admit that they have resorted to deception or manipulation in their work, or have been clever and strategic in their career advancement. Everything they’ve got, or so they think, comes from natural talent and hard work. But with other people, we are quick to ascribe to them all kinds of Machiavellian tactics. This allows us to justify whatever we do, no matter the results.

We feel a tremendous pull to imagine ourselves as rational, decent, and ethical. These are qualities highly promoted in the culture. To show signs otherwise is to risk great disapproval. If all of this were true — if people were rational and morally superior — the world would be suffused with goodness and peace. We know, however, the reality, and so some people, perhaps all of us, are merely deceiving ourselves. Rationality and ethical qualities must be achieved through awareness and effort. They do not come naturally. They come through a maturation process.

Source : https://medium.com/the-mission/6-biases-holding-you-back-from-rational-thinking-f2eddd35fd0f

Building safe artificial intelligence: specification, robustness, and assurance – DeepMind

Building a rocket is hard. Each component requires careful thought and rigorous testing, with safety and reliability at the core of the designs. Rocket scientists and engineers come together to design everything from the navigation course to control systems, engines and landing gear. Once all the pieces are assembled and the systems are tested, we can put astronauts on board with confidence that things will go well.

If artificial intelligence (AI) is a rocket, then we will all have tickets on board some day. And, as in rockets, safety is a crucial part of building AI systems. Guaranteeing safety requires carefully designing a system from the ground up to ensure the various components work together as intended, while developing all the instruments necessary to oversee the successful operation of the system after deployment.

At a high level, safety research at DeepMind focuses on designing systems that reliably function as intended while discovering and mitigating possible near-term and long-term risks. Technical AI safety is a relatively nascent but rapidly evolving field, with its contents ranging from high-level and theoretical to empirical and concrete. The goal of this blog is to contribute to the development of the field and encourage substantive engagement with the technical ideas discussed, and in doing so, advance our collective understanding of AI safety.

In this inaugural post, we discuss three areas of technical AI safety: specification, robustness, and assurance. Future posts will broadly fit within the framework outlined here. While our views will inevitably evolve over time, we feel these three areas cover a sufficiently wide spectrum to provide a useful categorisation for ongoing and future research.

Three AI safety problem areas. Each box highlights some representative challenges and approaches. The three areas are not disjoint but rather aspects that interact with each other. In particular, a given specific safety problem might involve solving more than one aspect.

Specification: define the purpose of the system

You may be familiar with the story of King Midas and the golden touch. In one rendition, the Greek god Dionysus promised Midas any reward he wished for, as a sign of gratitude for the king having gone out of his way to show hospitality and graciousness to a friend of Dionysus. In response, Midas asked that anything he touched be turned into gold. He was overjoyed with this new power: an oak twig, a stone, and roses in the garden all turned to gold at his touch. But he soon discovered the folly of his wish: even food and drink turned to gold in his hands. In some versions of the story, even his daughter fell victim to the blessing that turned out to be a curse.

This story illustrates the problem of specification: how do we state what we want? The challenge of specification is to ensure that an AI system is incentivised to act in accordance with the designer’s true wishes, rather than optimising for a poorly-specified goal or the wrong goal altogether. Formally, we distinguish between three types of specifications:

  • ideal specification (the “wishes”), corresponding to the hypothetical (but hard to articulate) description of an ideal AI system that is fully aligned to the desires of the human operator;
  • design specification (the “blueprint”), corresponding to the specification that we actually use to build the AI system, e.g. the reward function that a reinforcement learning system maximises;
  • and revealed specification (the “behaviour”), which is the specification that best describes what actually happens, e.g. the reward function we can reverse-engineer from observing the system’s behaviour using, say, inverse reinforcement learning. This is typically different from the one provided by the human operator because AI systems are not perfect optimisers or because of other unforeseen consequences of the design specification.

A specification problem arises when there is a mismatch between the ideal specification and the revealed specification, that is, when the AI system doesn’t do what we’d like it to do. Research into the specification problem of technical AI safety asks the question: how do we design more principled and general objective functions, and help agents figure out when goals are misspecified? Problems that create a mismatch between the ideal and design specifications are in the design subcategory above, while problems that create a mismatch between the design and revealed specifications are in the emergent subcategory.

For instance, in our AI Safety Gridworlds* paper, we gave agents a reward function to optimise, but then evaluated their actual behaviour on a “safety performance function” that was hidden from the agents. This setup models the distinction above: the safety performance function is the ideal specification, which was imperfectly articulated as a reward function (design specification), and then implemented by the agents, producing a specification that is implicitly revealed through their resulting policy.

*N.B.: in our AI Safety Gridworlds paper, we provided a different definition of specification and robustness problems from the one presented in this post.

From Faulty Reward Functions in the Wild by OpenAI: a reinforcement learning agent discovers an unintended strategy for achieving a higher score.

As another example, consider the boat-racing game CoastRunners analysed by our colleagues at OpenAI (see Figure above from “Faulty Reward Functions in the Wild”). For most of us, the game’s goal is to finish a lap quickly and ahead of other players — this is our ideal specification. However, translating this goal into a precise reward function is difficult, so instead, CoastRunners rewards players (design specification) for hitting targets laid out along the route. Training an agent to play the game via reinforcement learning leads to a surprising behaviour: the agent drives the boat in circles to capture re-populating targets while repeatedly crashing and catching fire rather than finishing the race. From this behaviour we infer (revealed specification) that something is wrong with the game’s balance between the short-circuit rewards and the full-lap rewards. There are many more examples like this of AI systems finding loopholes in their objective specification.
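
To make the three specifications concrete, here is a toy sketch in the spirit of the CoastRunners example; the policies, scores, and scoring functions are entirely invented for illustration:

```python
# Toy specification gap: the policy that maximises the proxy reward
# (design specification) scores worst on the hidden safety performance
# function (standing in for the ideal specification).
policies = {
    "finish_race": {"targets_hit": 3, "laps": 1, "crashes": 0},
    "loop_targets": {"targets_hit": 12, "laps": 0, "crashes": 7},
}

def design_reward(outcome):      # what the agent is actually trained on
    return outcome["targets_hit"]

def safety_performance(outcome): # what the designer actually wanted
    return 10 * outcome["laps"] - 2 * outcome["crashes"]

best = max(policies, key=lambda name: design_reward(policies[name]))
print(best)                                # -> loop_targets
print(safety_performance(policies[best]))  # -> -14, far from ideal
```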

Robustness: design the system to withstand perturbations

There is an inherent level of risk, unpredictability, and volatility in real-world settings where AI systems operate. AI systems must be robust to unforeseen events and adversarial attacks that can damage or manipulate such systems. Research on the robustness of AI systems focuses on ensuring that our agents stay within safe limits, regardless of the conditions encountered. This can be achieved by avoiding risks (prevention) or by self-stabilisation and graceful degradation (recovery). Safety problems resulting from distributional shift, adversarial inputs, and unsafe exploration can be classified as robustness problems.

To illustrate the challenge of addressing distributional shift, consider a household cleaning robot that typically cleans a petless home. The robot is then deployed to clean a pet-friendly office, where it encounters a pet during its cleaning operation. Never having seen a pet before, the robot proceeds to wash it with soap, leading to undesirable outcomes (Amodei and Olah et al., 2016). This is an example of a robustness problem that can arise when the data distribution encountered at test time shifts away from the distribution encountered during training.

From AI Safety Gridworlds. During training the agent learns to avoid the lava; but when we test it in a new situation where the location of the lava has changed, it fails to generalise and runs straight into the lava.
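The same failure mode can be shown with a toy classifier: a decision threshold fitted on the training distribution degrades sharply once the test distribution moves. All numbers below are synthetic and purely illustrative:

```python
# Minimal numpy sketch of distributional shift (synthetic data).
import numpy as np

rng = np.random.default_rng(0)

# Training: class 0 ~ N(-1, 1), class 1 ~ N(+1, 1), so a threshold at 0 works well.
x_train = np.concatenate([rng.normal(-1, 1, 500), rng.normal(1, 1, 500)])
y_train = np.concatenate([np.zeros(500), np.ones(500)])
threshold = 0.0                      # the rule the "robot" learned at home

def accuracy(x, y):
    return np.mean((x > threshold) == y)

print("train accuracy:", accuracy(x_train, y_train))   # ~0.84

# Deployment: both classes shift by +2 (a new environment), so almost
# everything now lands above the old threshold and class 0 is misclassified.
x_test = np.concatenate([rng.normal(1, 1, 500), rng.normal(3, 1, 500)])
y_test = np.concatenate([np.zeros(500), np.ones(500)])
print("test accuracy:", accuracy(x_test, y_test))      # ~0.58
```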

Adversarial inputs are a specific case of distributional shift in which inputs are specially designed to trick an AI system.

An adversarial input, overlaid on a typical image, can cause a classifier to miscategorise a sloth as a race car. The two images differ by at most 0.0078 in each pixel. The first one is classified as a three-toed sloth with >99% confidence. The second one is classified as a race car with >99% probability.
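A minimal numpy sketch of the underlying mechanism, in the style of the fast gradient sign method (Goodfellow et al., 2015): perturb the input along the sign of the loss gradient, keeping each pixel change within a small budget. We use a stand-in linear classifier rather than the image model from the figure:

```python
# FGSM-style sketch on a stand-in logistic model (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)      # weights of a hypothetical "trained" linear model
x = 3 * w / (w @ w)           # input constructed so the clean logit w @ x is +3
y = 1.0                       # true label: class 1

def predict(x):
    """P(class 1) under the logistic model."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

# For logistic loss, the gradient w.r.t. the input is (p - y) * w;
# FGSM perturbs each pixel by exactly eps along the sign of that gradient.
eps = 0.1
x_adv = x + eps * np.sign((predict(x) - y) * w)

print("clean prediction:      ", predict(x))               # ~0.95, class 1
print("adversarial prediction:", predict(x_adv))           # ~0.00, flipped
print("max per-pixel change:  ", np.abs(x_adv - x).max())  # == eps
```

As in the sloth example, the perturbation is tiny per pixel, yet it flips a confident decision.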

Unsafe exploration can result when a system seeks to maximise its performance and attain its goals without safety guarantees that constraints will not be violated while it learns and explores in its environment. An example would be the household cleaning robot putting a wet mop in an electrical outlet while learning optimal mopping strategies (García and Fernández, 2015; Amodei and Olah et al., 2016).
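One common mitigation is to constrain exploration itself. The sketch below filters random exploration through a hand-written safety predicate, so the agent can still try out mopping strategies without ever attempting known-catastrophic actions. The action names and the predicate are hypothetical stand-ins for domain knowledge or a learned safety model:

```python
# Minimal sketch of constrained (safe) exploration via action filtering.
import random

ACTIONS = ["mop_floor", "wring_mop", "rinse_mop", "put_mop_in_outlet"]

def is_known_unsafe(action: str) -> bool:
    # Catastrophic actions are never worth trying, even once.
    return action == "put_mop_in_outlet"

def safe_explore(q_values: dict, epsilon: float = 0.1) -> str:
    allowed = [a for a in ACTIONS if not is_known_unsafe(a)]
    if random.random() < epsilon:
        return random.choice(allowed)          # explore, but only safely
    return max(allowed, key=q_values.get)      # exploit the best safe action

q = {a: 0.0 for a in ACTIONS}
print(safe_explore(q))   # never returns "put_mop_in_outlet"
```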

Assurance: monitor and control system activity

Although careful safety engineering can rule out many safety risks, it is difficult to get everything right from the start. Once AI systems are deployed, we need tools to continuously monitor and adjust them. Our last category, assurance, addresses these problems from two angles: monitoring and enforcing.

Monitoring comprises all the methods for inspecting systems in order to analyse and predict their behaviour, both via human inspection (of summary statistics) and automated inspection (to sweep through vast amounts of activity records). Enforcement, on the other hand, involves designing mechanisms for controlling and restricting the behaviour of systems. Problems such as interpretability and interruptibility fall under monitoring and enforcement respectively.

AI systems are unlike us, both in their embodiments and in their way of processing data. This creates problems of interpretability: well-designed measurement tools and protocols allow us to assess the quality of the decisions made by an AI system (Doshi-Velez and Kim, 2017). For instance, a medical AI system would ideally issue a diagnosis together with an explanation of how it reached that conclusion, so that doctors can inspect the reasoning process before approving it (De Fauw et al., 2018). Furthermore, to understand more complex AI systems we might even employ automated methods for constructing models of behaviour using machine theory of mind (Rabinowitz et al., 2018).

ToMNet discovers two subspecies of agents and predicts their behaviour (from “Machine Theory of Mind”)
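Returning to the medical example, here is a deliberately simple monitoring tool: for a linear diagnostic model, we can report the prediction together with each feature’s contribution to the decision, giving a clinician something concrete to inspect. The feature names, weights, and patient record below are hypothetical:

```python
# Sketch of a per-feature attribution report for a linear model (illustrative).
import numpy as np

features = ["blood_pressure", "heart_rate", "age", "biomarker_x"]
w = np.array([0.8, -0.2, 0.1, 1.5])    # assumed trained weights
x = np.array([1.2, 0.3, 0.5, 2.0])     # one standardised patient record

logit = w @ x
print(f"P(condition) = {1 / (1 + np.exp(-logit)):.2f}")
# For a linear model, w_i * x_i is an exact decomposition of the logit,
# so sorting by magnitude shows which inputs drove the decision.
for name, contribution in sorted(zip(features, w * x), key=lambda t: -abs(t[1])):
    print(f"  {name:15s} {contribution:+.2f}")
```

For deep models this kind of exact decomposition is not available, which is precisely why interpretability is an open research problem rather than a solved one.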

Finally, we want to be able to turn off an AI system whenever necessary. This is the problem of interruptibility. Designing a reliable off-switch is very challenging: for instance, because a reward-maximising AI system typically has strong incentives to prevent being switched off (Hadfield-Menell et al., 2017), and because such interruptions, especially frequent ones, end up changing the original task, leading the AI system to draw the wrong conclusions from experience (Orseau and Armstrong, 2016).

A problem with interruptions: human interventions (i.e. pressing the stop button) can change the task. In the figure, the interruption adds a transition (in red) to the Markov decision process that changes the original task (in black). See Orseau and Armstrong, 2016.
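A minimal sketch of the problem in the figure: wrapping an environment with a human interruption adds a new transition to the MDP, so an agent learning from interrupted experience is effectively solving a different task. The two-room layout and interruption probability below are hypothetical:

```python
# Sketch: interruptions change the transition dynamics the agent observes.
import random

def step(state: str, action: str) -> str:
    # Original MDP (black transitions): a hypothetical two-room task.
    return "room_B" if (state == "room_A" and action == "go") else state

def interrupted_step(state: str, action: str, p_interrupt: float = 0.3) -> str:
    if state == "room_A" and random.random() < p_interrupt:
        return "switched_off"   # the red transition added by the stop button
    return step(state, action)

# From the agent's perspective, "go" now only sometimes reaches room_B, so a
# model (and policy) learned from this data differs from the uninterrupted task.
outcomes = [interrupted_step("room_A", "go") for _ in range(10_000)]
print({s: outcomes.count(s) / len(outcomes) for s in set(outcomes)})
# ~{'switched_off': 0.3, 'room_B': 0.7}
```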

Looking ahead

We are building the foundations of a technology that will be used in many important applications in the future. It is worth bearing in mind that design decisions which are not safety-critical at the time of deployment can still have a large impact when the technology becomes widely used. Choices that are convenient at the time can, once irreversibly integrated into important systems, present very different tradeoffs, and we may find they cause problems that are hard to fix without a complete redesign.

Two examples from the history of programming are the null pointer, which Tony Hoare calls his “billion-dollar mistake”, and the gets() routine in C. If early programming languages had been designed with security in mind, progress might have been slower, but computer security today would probably be in a much stronger position.

With careful thought and planning now, we can avoid building in analogous problems and vulnerabilities. We hope the categorisation outlined in this post will serve as a useful framework for methodically planning in this way. Our intention is to ensure that AI systems of the future are not just ‘hopefully safe’ but robustly, verifiably safe — because we built them that way!

We look forward to continuing to make exciting progress in these areas, in close collaboration with the broader AI research community, and we encourage individuals across disciplines to consider entering or contributing to the field of AI safety research.

Source: https://medium.com/@deepmindsafetyresearch/building-safe-artificial-intelligence-52f5f75058f1
