Category: Silicon Valley

Open Source Software – Investable Business Model or Not? – Natallia Chykina

Open-source software (OSS) is a catalyst for growth and change in the IT industry, and one can’t overestimate its importance to the sector. Quoting Mike Olson, co-founder of Cloudera, “No dominant platform-level software infrastructure has emerged in the last ten years in closed-source, proprietary form.”

Apart from independent OSS projects, an increasing number of companies, including the blue chips, are opening their source code to the public. They start by distributing their internally developed products for free, giving rise to widespread frameworks and libraries that later become an industry standard (e.g., React, Flow, Angular, Kubernetes, TensorFlow, V8, to name a few).

Adding to this momentum, there has been a surge in venture capital dollars being invested into the sector in recent years. Several high profile funding rounds have been completed, with multimillion dollar valuations emerging (Chart 1).

But are these valuations justified? And more importantly, can the business perform, both growth-wise and profitability-wise, as venture capitalists expect? OSS companies typically monetize with a business model based around providing support and consulting services. How well does this model translate to the traditional VC growth model? Is the OSS space in a VC-driven bubble?

In this article, I assess the questions above, and find that the traditional monetization model for OSS companies based on providing support and consulting services doesn’t seem to lend itself well to the venture capital growth model, and that OSS companies likely need to switch their pricing and business models in order to justify their valuations.

OSS Monetization Models

By definition, open source software is free. This of course generates obvious advantages to consumers, and in fact, a 2008 study by The Standish Group estimates that “free open source software is [saving consumers] $60 billion [per year in IT costs].”

While providing free software is obviously good for consumers, it still costs money to develop. Very few companies are able to live on donations and sponsorships. And with fierce competition from proprietary software vendors, growing R&D costs, and ever-increasing marketing requirements, providing a “free” product necessitates a sustainable path to market success.

As a result of the above, a commonly seen structure related to OSS projects is the following: A “parent” commercial company that is the key contributor to the OSS project provides support to users, maintains the product, and defines the product strategy.

Latched on to this are the monetization strategies, the most common being the following:

  • Extra charge for enterprise services, support, and consulting. The classic model targeted at large enterprise clients with sophisticated needs. Examples: MySQL, Red Hat, Hortonworks, DataStax
  • Freemium (advanced features/products/add-ons). A custom licensed product on top of the OSS can generate a lavish revenue stream, but it requires significant R&D spend and time to build. Example: Cloudera, which provides the basic version for free and charges customers for Cloudera Enterprise
  • SaaS/PaaS business model. The modern way to monetize OSS products: the software is hosted centrally, and its maintenance costs shift to the provider. Examples: Elastic, GitHub, Databricks, SugarCRM

Historically, the vast majority of OSS projects have pursued the first monetization strategy (support and consulting), but at their core, all of these models allow a company to earn money on their “bread and butter” and feed the development team as needed.

Influx of VC Dollars

An interesting recent development has been the huge inflows of VC/PE money into the industry. Going back to 2004, only nine firms producing OSS had raised venture funding, but by 2015, that number had exploded to 110, raising over $7 billion from venture capital funds (Chart 2).

Underpinning this development is the large addressable market that OSS companies benefit from. Akin to other “platform” plays, OSS allows companies (in theory) to rapidly expand their customer base, with the idea that at some point in the future they can leverage this growth by tacking on appropriate monetization models to start translating their customer base into revenue and profits.

At the same time, we’re also seeing an increasing number of reports about potential IPOs in the sector. Several OSS commercial companies, some of them unicorns with $1B+ valuations, have been rumored to be mulling a public markets debut (MongoDB, Cloudera, MapR, Alfresco, Automattic, Canonical, etc.).

With this in mind, the obvious question is whether the OSS model works from a financial standpoint, particularly for VC and PE investors. After all, the venture funding model necessitates rapid growth in order to comply with their 7-10 year fund life cycle. And with a product that is at its core free, it remains to be seen whether OSS companies can pin down the correct monetization model to justify the number of dollars invested into the space.

Answering this question is hard, mainly because most of these companies are private and therefore do not disclose their financial performance. Usually, the only sources of information that can be relied upon are the estimates of industry experts and management interviews where unaudited key performance metrics are sometimes disclosed.

Nevertheless, in this article, I take a look at the evidence from the only two public OSS companies in the market, Red Hat and Hortonworks, and use their publicly available information to try and assess the more general question of whether the OSS model makes sense for VC investors.

Case Study 1: Red Hat

Red Hat is an example of a commercial company that pioneered the open source business model. Founded in 1993 and taken public in 1999, right before the Dot Com Bubble burst, the company achieved the 8th biggest first-day gain in share price in the history of Wall Street at that time.

At the time of their IPO, Red Hat was not a profitable company, but since then has managed to post solid financial results, as detailed in Table 1.

Instead of chasing multifold annual growth, Red Hat has followed the “boring” path of gradually building a sustainable business. Over the last ten years, the company increased its revenues tenfold from $200 million to $2 billion with no significant change in operating and net income margins. G&A and marketing expenses never exceeded 50% of revenue (Chart 3).

The above indicates therefore that OSS companies do have a chance to build sustainable and profitable business models. Red Hat’s approach of focusing primarily on offering support and consulting services has delivered gradual but steady growth, and the company is hardly facing any funding or solvency problems, posting decent profitability metrics when compared to peers.

However, what is clear from the Red Hat case study is that such a strategy can take time—many years, in fact. While this is a perfectly reasonable situation for most companies, the issue is that it doesn’t sit well with venture capital funds who, by the very nature of their business model, require far more rapid growth profiles.

More troubling than that, for venture capital investors, is that the OSS model may in and of itself not allow for the type of growth that such funds require. As MySQL founder Marten Mickos put it, MySQL’s goal was “to turn the $10 billion a year database business into a $1 billion one.”

In other words, the open source approach limits the market size from the get-go by making the company focus only on enterprise customers who are able to pay for support, foregoing revenue from the long tail of SME and retail clients. That may help explain Red Hat’s less than exciting stock price performance post-IPO (Chart 4).

If such a conclusion were true, this would spell trouble for those OSS companies that have raised significant amounts of VC dollars along with the funds that have invested in them.

Case Study 2: Hortonworks

To further assess our overarching question of OSS’s viability as a venture capital investment, I took a look at another public OSS company: Hortonworks.

The Hadoop vendors’ market is an interesting one because it is completely built around the “open core” idea (another comparable market being the NoSQL database space, with MongoDB, DataStax, and Couchbase).

All three of the largest Hadoop vendors—Cloudera, Hortonworks, and MapR—are based on essentially the same OSS stack (with some specific differences) but interestingly have different monetization models. In particular, Hortonworks—the only public company among them—is the only player that provides all of its software for free and charges only for support, consulting, and training services.

At first glance, Hortonworks’ post-IPO path appears to differ considerably from Red Hat’s in that it seems to be a story of a rapid growth and success. The company was founded in 2011, tripled its revenue every year for three consecutive years, and went public in 2014.

Immediate reception in the public markets was strong, with the stock popping 65% in the first few days of trading. Nevertheless, the company’s story since IPO has turned decisively sour. In January 2016, the company was forced to access the public markets again for a secondary public offering, a move that prompted a 60% share price fall within a month (Chart 5).

Underpinning all this is the fact that despite top-line growth, the company continues to incur substantial, and growing, operating losses. It’s evident from the financial statements that its operating performance has worsened over time, mainly because operating expenses have grown faster than revenue, leading to increasing losses as a percent of revenue (Table 2).

In every period in question, Hortonworks spent more on sales and marketing than it earned in revenue. Adding to that, the company incurred significant R&D and G&A expenses as well (Table 2).

On average, Hortonworks is burning around $100 million cash per year (less than its operating loss because of stock-based compensation expenses and changes in deferred revenue booked on the Balance Sheet). This amount is very significant when compared to its $630 million market capitalization and circa $350 million raised from investors so far. Of course, the company can still raise debt (which it did, in November 2016, to the tune of a $30 million loan from SVB), but there’s a natural limit to how often it can tap the debt markets.

All of this might of course be justified if the marketing expense served an important purpose. One such purpose could be the company’s need to diversify its customer base. In fact, when Hortonworks first launched, the company was heavily reliant on a few major clients (Yahoo and Microsoft, the latter accounting for 37% of revenues in 2013). This has now changed, and by 2016, the company reported 1000 customers.

But again, even if this were the reason, one cannot ignore the costs required to achieve it. After all, marketing expenses increased eightfold between 2013 and 2015. And how valuable are the clients that Hortonworks has acquired? Unfortunately, the company reports little information on the makeup of its client base, so it’s hard to assess other important metrics such as client “stickiness”. But in a competitive OSS market where “rival developers could build the same tools—and make them free—essentially stripping the value from the proprietary software,” strong doubts loom.

With all this in mind, returning to our original question of whether the OSS model makes for good VC investments, while the Hortonworks growth story certainly seems to counter Red Hat’s—and therefore sustain the idea that such investments can work from a VC standpoint—I remain skeptical. Hortonworks seems to be chasing market share at exorbitant and unsustainable costs. And while this conclusion is based on only two companies in the space, it is enough to raise serious doubts about the overall model’s fit for VC.

Why are VCs Investing in OSS Companies?

Given the above, it seems questionable that OSS companies make for good VC investments. So with this in mind, why do venture capital funds continue to invest in such companies?

Good Fit for a Strategic Acquisition

Apart from going public and growing organically, an OSS company may find a strategic buyer to provide a good exit opportunity for its early stage investors. And in fact, the sector has seen several high profile acquisitions over the years (Table 3).

What makes an OSS company a good target? In general, the underlying strategic rationale for an acquisition might be as follows:

  • Getting access to the client base. Sun is reported to have been motivated by this when it acquired MySQL. They wanted to access the SME market and cross-sell other products to smaller clients. Simply forking the product or developing a competing technology internally wouldn’t deliver the customer base and would have made Sun incur additional customer acquisition costs.
  • Getting control over the product. The ability to influence further development of the product is a crucial factor for a strategic buyer. This allows it to build and expand its own product offering based on the acquired products without worrying about sudden substantial changes in it. Example: Red Hat acquiring Ansible, KVM, Gluster, Inktank (Ceph), and many more
  • Entering adjacent markets. Acquiring open source companies in adjacent market segments, again, allows a company to expand the product offering, which makes vendor lock-in easier, and scales the business further. Example: Citrix acquiring XenSource
  • Acquiring the team. This is more relevant for smaller and younger projects than for larger, more well-established ones, but is worth mentioning.

What about the financial rationale? The standard transaction multiples valuation approach breaks down completely when it comes to the OSS market. Multiples reach 20x and even 50x price/sales, and are therefore largely irrelevant, leading to the obvious conclusion that such deals are not financially but strategically motivated, and that the financial health of the target is more of a “nice to have.”

With this in mind, would a strategy of investing in OSS companies with the eventual aim of a strategic sale make sense? After all, there seems to be a decent track record to go off of.

My assessment is that this strategy on its own is not enough. Pursuing such an approach from the start is risky—there are not enough exits in the history of OSS to justify the risks.

A Better Monetization Model: SaaS

While the promise of a lucrative strategic sale may be enough to motivate VC funds to put money to work in the space, as discussed above, it remains a risky path. As such, it feels like the rationale for such investments must be reliant on other factors as well. One such factor could be returning to basics: building profitable companies.

But as we have seen in the case studies above, this strategy doesn’t seem to be working out so well, certainly not within the timeframes required for VC investors. Nevertheless, it is important to point out that both Red Hat and Hortonworks primarily focus on monetizing through offering support and consulting services. As such, it would be wrong to dismiss OSS monetization prospects altogether. More likely, monetization models focused on support and consulting are inappropriate, but others may work better.

In fact, the SaaS business model might be the answer. As per Peter Levine’s analysis, “by packaging open source into a service, […] companies can monetize open source with a far more robust and flexible model, encouraging innovation and ongoing investment in software development.”

Why is SaaS a better model for OSS? There are several reasons for this, most of which are applicable not only to OSS SaaS, but to SaaS in general.

First, SaaS opens the market for the long tail of SME clients. Smaller companies usually don’t need enterprise support and on-premises installation, but may already have sophisticated needs from a technology standpoint. As a result, it’s easier for them to purchase a SaaS product and pay a relatively low price for using it.

Citing MongoDB’s VP of Strategy, Kelly Stirman, “Where we have a suite of management technologies as a cloud service, that is geared for people that we are never going to talk to and it’s at a very attractive price point—$39 a server, a month. It allows us to go after this long tail of the market that isn’t Fortune 500 companies, necessarily.”

Second, SaaS scales well. SaaS creates economies of scale for clients by allowing them to save money on infrastructure and operations through aggregated resources and the centralization of customer requirements, which improves manageability.

This, therefore, makes it an attractive model for clients who, as a result, will be more willing to lock themselves into monthly payment plans in order to reap the benefits of the service.

Finally, SaaS businesses are more difficult to replicate. In the traditional OSS model, everyone has access to the source code, so the support and consulting business model offers the incumbent little protection from new market entrants.

In the SaaS OSS case, the investment required for building the infrastructure upon which clients rely is fairly onerous. This, therefore, builds bigger barriers to entry, and makes it more difficult for competitors who lack the same amount of funding to replicate the offering.

Success Stories for OSS with SaaS

Importantly, OSS SaaS companies can be financially viable on their own. GitHub is a good example of this.

Founded in 2008, GitHub was able to bootstrap the business for four years without any external funding. The company has reportedly always been cash-flow positive (except for 2015) and generated estimated revenues of $100 million in 2016. In 2012, they accepted $100 million in funding from Andreessen Horowitz and later in 2015, $250 million from Sequoia with an implied $2 billion valuation.

Another well-known successful OSS company is Databricks, which provides commercial support for Apache Spark but—more importantly—allows its customers to run Spark in the cloud. The company has raised $100 million from Andreessen Horowitz, Data Collective, and NEA. Unfortunately, we don’t have a lot of insight into their profitability, but they are reported to be performing strongly and already had more than 500 companies using the technology as of 2015.

Generally, many OSS companies are in one way or another gradually drifting towards the SaaS model or other types of cloud offerings. For instance, Red Hat is moving to PaaS over support and consulting, as evidenced by OpenShift and the acquisition of AnsibleWorks.

Different ways of mixing support and consulting with SaaS are common too. We, unfortunately, don’t have detailed statistics on Elastic’s on-premises vs. cloud installation product offering, but we can see from the presentation of its closest competitor Splunk that their SaaS offering is gaining scale: Its share in revenue is expected to triple by 2020 (Chart 6).

Investable Business Model or Not?

To conclude, while recent years have seen an influx of venture capital dollars poured into OSS companies, there are strong doubts that such investments make sense if the monetization models being used remain focused on the traditional support and consulting model. Such a model can work (as seen in the Red Hat case study) but cannot scale at the pace required by VC investors.

Of course, VC funds may always hope for a lucrative strategic exit, and there have been several examples of such transactions. But relying on this alone is not enough. OSS companies need to innovate around monetization strategies in order to build profitable and fast-growing companies.

The most plausible answer to this conundrum may come from switching to SaaS as a business model. SaaS allows one to tap into a longer tail of SME clients and improve margins through better product offerings. Quoting Peter Levine again, “Cloud and SaaS adoption is accelerating at an order of magnitude faster than on-premise deployments, and open source has been the enabler of this transformation. Beyond SaaS, I would expect there to be future models for open source monetization, which is great for the industry”.

Whatever ends up happening, the sheer amount of venture investment into OSS companies means that smarter monetization strategies will be needed to keep the open source dream alive.

Source : https://www.toptal.com/finance/venture-capital-consultants/open-source-software-investable-business-model-or-not

Industrial tech may not be sexy, but VCs are loving it – John Tough

There are nearly 300 industrial-focused companies within the Fortune 1,000. The median revenue for these economy-anchoring firms is nearly $4.9 billion, and together they represent over $9 trillion in market capitalization.

Due to the boring nature of some of these industrial verticals and the complexity of the value chain, venture-related events tend to get lost within our traditional VC news channels. But entrepreneurs (and VCs willing to fund them) are waking up to the potential rewards of gaining access to these markets.

Just how active is the sector now?

That’s right: Last year, nearly $6 billion went into Series A, B, and C startups within the industrial, engineering & construction, power, energy, mining & materials, and mobility segments. Venture capital dollars deployed to these sectors are growing at a 30 percent annual rate, up from ~$750 million in 2010.

And while $6 billion invested is notable due to the previous benchmarks, this early stage investment figure still only equates to ~0.2 percent of the revenue for the sector and ~1.2 percent of industry profits.

The number of deals in the space shows a similarly strong growth trajectory. But there are some interesting trends beginning to emerge: The capital deployed to the industrial technology market is growing at a faster clip than the number of deals. These differing growth trajectories mean that the average deal size has grown by 45 percent in the last eight years, from $18 to $26 million.

Detail by stage of financing

Median Series A deal size in 2018 was $11 million, representing a modest 8 percent increase in size versus 2012/2013. But Series A deal volume is up nearly 10x since then!

Median Series B deal size in 2018 was $20 million, an 83 percent growth over the past five years and deal volume is up about 4x.

Median Series C deal size in 2018 was $33 million, representing an enormous 113 percent growth over the past five years. But Series C deal counts appear to have plateaued in the low 40s, so investors are becoming pickier in selecting the winners.

These graphs show that the Series A investors have stayed relatively consistent and that the overall 46 percent increase in sector deal size growth primarily originates from the Series B and Series C investment rounds. With bigger rounds, how are valuation levels adjusting?

Above: Growth in pre-money valuation particularly acute in later stage deals

The data shows that valuations have increased even faster than the round sizes have grown themselves. This means management teams are not feeling any incremental dilution by raising these larger rounds.

  • The average Series A round now buys about 24 percent, slightly less than five years ago
  • The average Series B round now buys about 22 percent of the company, down from 26 percent five years ago
  • The average Series C round now buys approximately 20 percent, down from 23 percent five years ago.

Some conclusions

  • Dollars invested remain a small portion of industry revenue and profit, which leaves room for further capital commitments.
  • There is a growing appreciation for the industrial sales cycle. Investor willingness to wait for reduced risk to deploy even more capital in the perceived winners appears to be driving this trend.
  • Entrepreneurs that can successfully de-risk their enterprise through revenue, partnerships, and industry hires will gain access to outsized capital pools. The winners in this market tend to compound, as later customers look to early adopters.
  • Uncertainty still remains about exit opportunities for technology companies that serve these industries. While there are a few headline-grabbing acquisitions (PlanGrid, Kurion, OSIsoft), we are not hearing about a sizable exit from this market on a weekly or monthly cadence. This means we won’t know for a few years about the returns impact of these rising valuations. Grab your hard hat!

Source : https://venturebeat.com/2019/01/22/industrial-tech-may-not-be-sexy-but-vcs-are-loving-it/

Predicting a Startup Valuation with Data Science – Sebastian Quintero

The following is a condensed and slightly modified version of a Radicle working paper on the startup economy in which we explore post-money valuations by venture capital stage classifications. We find that valuations have interesting distributional properties and then go on to describe a statistical model for estimating an undisclosed valuation with considerable ease. In conjunction with this post, we are releasing a free tool for estimating startup valuations. To use the tool and to download the full PDF of the working paper, go here, but please read the entirety of this post before doing so. This is not magic and the details matter. With that said, grab some coffee and get comfortable––we’re going deep.

Introduction

It’s often difficult to comprehend the significance of numbers thrown around in the startup economy. If a company raises a $550M Series F at a valuation of $4 billion [3]— how big is that really? How does that compare to other Series F rounds? Is that round approximately average when compared to historical financing events, or is it an anomaly?

At Radicle, a disruption research company, we use data science to better understand the entrepreneurial ecosystem. In our quest to remove opacity from the startup economy, we conducted an empirical study to better understand the nature of post-money valuations. While it’s popularly accepted that seed rounds tend to be at valuations somewhere in the $2m to the $10m valuation range [18], there isn’t much data to back this up, nor is it clear what valuations really look like at subsequent financing stages. Looking back at historical events, however, we can see some anecdotally interesting similarities.

Google and Facebook, before they were household names, each raised Series A rounds with valuations of $98m and $100m, respectively. More recently, Instacart, the grocery delivery company, and Medium, the social publishing network on which you’re currently reading this, raised Series B rounds with valuations of $400m and $457m, respectively. Instagram wasn’t too dissimilar at that stage, with a Series B valuation of $500m before its acquisition by Facebook in 2012. Moving one step further, Square (NYSE: SQ), Shopify (NYSE: SHOP), and Wish, the e-commerce company that is mounting a challenge against Amazon, all raised Series C rounds with valuations of exactly $1 billion. Casper, the privately held direct-to-consumer startup disrupting the mattress industry, raised a similar Series C with a post-money valuation of $920m. Admittedly, these are probably only systematic similarities in hindsight because human minds are wired to see patterns even when there aren’t any, but that still makes us wonder if there exists some underlying trend. Our research suggests that there is, but why is this important?

We think entrepreneurs, venture capitalists, and professionals working in corporate innovation or M&A would benefit greatly from having an empirical view of startup valuations. New company financings are announced on a daily cadence, and having more data-driven publicly available research helps anyone that engages with startups make better decisions. That said, this research is solely for informational purposes and our online tool is not a replacement for the intrinsic, from the ground up, valuation methods and tools already established by the venture capital community. Instead, we think of this body of research as complementary — removing information asymmetries and enabling more constructive conversations for decision-making around valuations.

Making Sense of Startup Valuations

We obtained data for this analysis from Crunchbase, a venture capital database that aggregates funding events and associated meta-data about the entrepreneurial ecosystem. Our sample consists of 8,812 financing events since the year 2010 with publicly disclosed valuations and associated venture stage classifications. Table I below provides summary statistics.

The sample size for the median amount of capital raised at each stage is much higher [N=84k] because round sizes are more frequently disclosed and publicly available.

To better understand the nature of post-money valuations, we assessed their distributional properties using kernel density estimation (KDE), a non-parametric approach commonly used to approximate the probability density function (PDF) of a continuous random variable [8]. Put simply, KDE draws the distribution for a variable of interest by analyzing the frequency of events much like a histogram does. Non-parametric is just a fancy way of saying that the method does not make any assumption about the data being normally distributed, which makes it perfect for exercises where we want to draw a probability distribution but have no prior knowledge about what it actually looks like.
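
To make this concrete, here is a minimal sketch (not Radicle’s actual code) of estimating one of these densities with KDE in Python. The input file and the column names "stage" and "post_money" (in $m USD) are hypothetical stand-ins for the Crunchbase sample described above.

```python
# Sketch: non-parametric density of Series A post-money valuations via KDE.
# "valuations.csv" and its columns are hypothetical placeholders.
import numpy as np
import pandas as pd
from scipy.stats import gaussian_kde

rounds = pd.read_csv("valuations.csv")
series_a = rounds.loc[rounds["stage"] == "Series A", "post_money"].dropna()

# Work on a log scale, since valuations are heavy-tailed / roughly log-normal.
log_vals = np.log10(series_a)

kde = gaussian_kde(log_vals)                           # kernel density estimate
grid = np.linspace(log_vals.min(), log_vals.max(), 200)
density = kde(grid)                                    # estimated PDF on the log scale

median_val = 10 ** np.median(log_vals)                 # back-transform to dollars
print(f"Median Series A post-money valuation: ${median_val:.1f}m")
```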

The two plots immediately above and further down below show the valuation probability density functions for venture capital stages on a logarithmic scale, with vertical lines indicating the median for each class. Why on a logarithmic scale? Well, post-money valuations are power-law distributed, as most things are in the venture capital domain [5], which means that the majority of valuations are at low values but there’s a long tail of rare but exceptionally high valuation events. Technically speaking, post-money valuations can also be described as being log-normally distributed, which just means that taking the natural logarithm of valuations produces the bell curves we’re all so familiar with. Series A, B, and C valuations may be argued as being bimodal log-normal distributions, and seed valuations may be approaching multimodality (more on that later), but technical fuss aside, this detail is important because log-normal distributions are easy for us to understand using the common language of mean, median, and standard deviation — even if we have to exponentiate the terms to put them in dollar signs. More importantly, this allows us to consider classical statistical methods that only work when we make strong assumptions about normality.

Founders that seek venture capital to get their company off the ground usually start by raising an angel or a seed round. An angel round consists of capital raised from their friends, family members, or wealthy individuals, while seed rounds are usually a startup’s first round of capital from institutional investors [18]. The median valuation for both angel and seed is $2.2m USD, while the median valuation for pre-seed is $1.9m USD. While we anticipated some overlap between angel, pre-seed and seed valuations, we were surprised to find that the distributions for these three classes of rounds almost completely overlap. This implies that these early-stage classifications are remarkably similar in reality. That said, we think it’s possible that the angel sample is biased towards the larger events that get reported, so we remain slightly skeptical of the overlap. And as mentioned earlier, the distribution of seed stage valuations appears to be approaching multimodality, meaning it has multiple modes. This may be due to the changing definition of a seed round and the recent institutionalization of pre-seed rounds, which are equal to or less than $1m in total capital raised and have only recently started being classified as “Pre-seed” in Crunchbase (and hence the small sample size). There’s also a clear mode in the seed valuation distribution around $7m USD, which overlaps with the Series A distribution, suggesting, as others recently have, that some subset of seed rounds are being pushed further out and resemble what Series A rounds were 10 years ago [1].

Around 21 percent of seed stage companies move on to raise a Series A [16] about 18 months after raising their seed — with approximately 50 percent of Series A companies moving on to a Series B a further 18–21 months out [17]. In that time the median valuation jumps to $16m at the Series A and leaps to $130m at the Series B stage. Valuations climb further to a median of $500m at Series C. In general, we think it’s interesting to see the bimodal nature as well as the extent of overlap between the Series A, B, and C valuation distributions. It’s possible that the overlap stems from changes in investor behavior, with the general size and valuation at each stage continuously redefined. Just like some proportion of seed rounds today are what Series A rounds were 10 years ago, the data suggests, for instance, that some proportion of Series B rounds today are what Series C rounds used to be. This was further corroborated when we segmented the data by decades going back to the year 2000 and compared the resulting distributions. We would note, however, that the changes are very gradual, and not as sensational as is often reported [12].

The median valuation for startups reaches $1b between the Series D and E stages, and $1.65 billion at Series F. This answers our original question, putting Peloton’s $4 billion appraisal at the 81st percentile of valuations at the Series F stage, far above the median, and indeed above the median $2.4b valuation for Series G companies. From there we see a considerable jump to the median Series H and Series I valuations of $7.7b and $9b, respectively. The Series I distribution has a noticeably lower peak in density and higher variance due to a smaller sample size. We know companies rarely make it that far, so that’s expected. Lyft and SpaceX, at valuations of $15b and $27b, respectively, are recent examples of companies that have made it to the Series I stage. (Note: In December 2018 SpaceX raised a Series J round, which is a classification not analyzed in this paper.)

We classified each stage into higher-level classes using the distributions above, as one of Early (Angel, Pre-Seed, Seed), Growth (Series A, B, C), Late (Series D, E, F, G), or Private IPO (Series H, I). With these aggregate classifications, we further investigated how valuations have fared over time and found that the medians (and means) have been more or less stable on a logarithmic scale. What has changed, since 2013, is the appearance of the “Private IPO” [11, 13]. These rounds, described above with companies such as SpaceX, Lyft, and others such as Palantir Technologies, are occurring later and at higher valuations than have previously existed. These late-stage private rounds are at such high valuations that future IPOs, if they ever occur, may end up being down rounds [22].

Approximating an Undisclosed Valuation

Given the above, we designed a simple statistical model to predict a round’s post-money valuation by its stage classification and the amount of capital raised. Why might this be useful? Well, the relationship between capital raised and post-money valuation is true by mathematical definition, so we’re not interested in claiming to establish a causal relationship in the classical sense. A startup’s post-money valuation is equal to an intrinsic pre-money valuation calculated by investors at the time of investment plus the amount of new capital raised [19, 21]. However, pre-money valuations are often not disclosed, so a statistical model for estimating an undisclosed valuation would be helpful when the size of a financing round is available and its stage is either disclosed as well or easily inferred.
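
As a quick worked example of that identity (with hypothetical numbers): a startup with an $8m pre-money valuation that raises $2m of new capital has a post-money valuation of $8m + $2m = $10m; conversely, when only the round size and post-money figure are disclosed, the pre-money valuation can be recovered as $10m − $2m = $8m.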

We formulated an ordinary least squares log-log regression model after considering that we did not have enough stage classifications and complete observations at each stage for multilevel modeling and that it would be desirable to build a model that could be easily understood and utilized by founders, investors, executives, and analysts. Formally, our model is of the form:

log(y) = α + Σ_i β_i · log(c · r_i) + ε

where y is the output post-money valuation, c is the amount of capital raised, r is a binary term that indicates the financing stage, and epsilon is the error term. log(c · r) is, therefore, an interaction term that specifies the amount of capital raised at a specific stage. The model we present does not include stage main effects because the model remains the same, whether they’re left in or pulled out, while the coefficients become reparameterizations of the original estimates [23]. In other words, boolean stage main effects adjust the constant and coefficients while maintaining equivalent summed values — increasing the mental gymnastics required for interpretation without adding any statistical power to the regression. Capital main effects are not included because domain knowledge and the distributions above suggest that financing events are always indicative of a company’s stage, so the effect is not fixed, and therefore including capital by itself results in a misspecified model alongside interaction terms. Of course, whether or not a stage classification is agreed upon by investors and founders and specified on the term sheet is another matter.

As is standard practice, we used heteroscedasticity-robust standard errors to estimate the beta coefficients, and residual analysis via a fitted values versus residuals plot confirms that the model validates the general assumptions of ordinary least squares regression. There is no multicollinearity between the variables, and a Q-Q plot further confirmed that the data is log-normally distributed. The results are statistically significant at the p < 0.001 level for all terms, with an adjusted R² of 89 percent and an F-statistic of 5,900 (p < 0.001). Table II outlines the results. Monetary values in the model are specified in millions, USD.
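
For readers who want to reproduce this kind of specification on their own data, a minimal sketch of the log-log OLS regression with robust standard errors might look like the following. This is not the paper’s actual code, and the input file and column names ("post_money", "capital_raised", "stage") are hypothetical.

```python
# Sketch: log-log OLS with one capital-by-stage interaction term per stage,
# no stage or capital main effects, and heteroscedasticity-robust errors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("valuations.csv").dropna(
    subset=["post_money", "capital_raised", "stage"]
)

# One regressor per stage: log(capital) when the round belongs to that stage,
# 0 otherwise (i.e., the interaction log(c * r) with r binary).
for stage in df["stage"].unique():
    col = "log_c_" + stage.replace(" ", "_").replace("-", "_").lower()
    df[col] = np.where(df["stage"] == stage, np.log(df["capital_raised"]), 0.0)

predictors = [c for c in df.columns if c.startswith("log_c_")]
formula = "np.log(post_money) ~ " + " + ".join(predictors)

model = smf.ols(formula, data=df).fit(cov_type="HC3")  # robust (HC3) errors
print(model.summary())
```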

The model can be interpreted by solving for y and differentiating with respect to c to get the marginal effect. Therefore, we can think of percentage increases in c as leading to some percentage increase in y. At the seed stage, for example, for a 10 percent increase in money raised a company can expect a 6.6 percent increase in their post-money valuation, ceteris paribus. That premium increases as companies make their way through the venture capital funnel, peaking at the Series I stage with a 12.4 percent increase in valuation per 10 percent increase in capital raised. In practice, an analyst could approximate an unknown post-money valuation by specifying the amount of capital raised at the appropriate stage in the model, exponentiating the constant and the beta term, and multiplying the values, such that:

ŷ = exp(α) · exp(β_i · log(c)) = exp(α) · c^β_i

Using the first equation and the values in Table II, the estimated undisclosed post-money valuation of a startup after a $2m seed round is approximately $9.4m USD — for a $35m Series B, it’s $224m — and for a $200m Series D, it’s $1.7b. Subtracting the amount of capital raised from the estimated post-money valuation would yield an estimated pre-money valuation.
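
As a back-of-the-envelope illustration of that calculation, a small script along the following lines would do. The intercept and most of the stage coefficients below are hypothetical placeholders rather than the published Table II estimates (only the seed elasticity of roughly 0.66 mirrors the "6.6 percent per 10 percent" figure quoted above), so substitute the real values before relying on the output.

```python
# Sketch: plug-in estimate of an undisclosed post-money valuation.
# ALPHA and the non-seed betas are hypothetical placeholders, NOT Table II.
import math

ALPHA = 1.8                  # hypothetical intercept on the log scale
BETA = {
    "seed": 0.66,            # elasticity implied by the text above
    "series_b": 1.0,         # hypothetical placeholder
    "series_d": 1.05,        # hypothetical placeholder
}

def estimate_post_money(capital_m: float, stage: str) -> float:
    """Estimated post-money valuation in $m: exp(alpha) * capital ** beta."""
    return math.exp(ALPHA) * capital_m ** BETA[stage]

post = estimate_post_money(2.0, "seed")    # e.g., a $2m seed round
pre = post - 2.0                           # implied pre-money valuation
print(f"Estimated post-money: ${post:.1f}m, pre-money: ${pre:.1f}m")
```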

Can it really be that simple? Well, that depends entirely on your use case. If you want to approximate a valuation and don’t have the tools to do so, and can’t get on the phone with the founders of the company, then the calculations above should be good enough for that purpose. If instead, you’re interested in purchasing a company, this is a good starting point for discussions, but you probably want to use other valuation methods, too. As mentioned earlier, this research is not meant to supplant existing valuation methodologies established by the venture capital community.

As far as estimation errors, you can infer from the scatter plot above that, for the predictions at the early stages, you can expect valuations to be off by a few million dollars — for growth-stage companies, a few hundred million — and in the late and private IPO stages, being off by a few billion would be reasonable. Of course, the accuracy of any prediction depends on the reliability of the estimated means, i.e., the credible intervals of the posterior distributions under a Bayesian framework [6], as well as the size of the error from omitted variable bias — which is not insignificant. We can reformulate our model in a directly comparable probabilistic Bayesian framework, in vector notation, as:

log(y) | X, β, σ² ~ N(Xβ, σ²·I)

where the distribution of log(y) given X, an n × k matrix of interaction terms, is normal with a mean that is a linear function of X, observation errors are independent and of equal variance, and I represents an n × n identity matrix. We fit the model with a non-informative flat prior using the No-U-Turn Sampler (NUTS), an extension of the Hamiltonian Monte Carlo MCMC algorithm [9], for which our model converges appropriately and has the desirable hairy caterpillar sampling properties [6].
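
A minimal, illustrative sketch of this Bayesian reformulation (not the paper’s actual model code) could be written with PyMC, whose default sampler is NUTS. It reuses the hypothetical df and predictors interaction matrix from the OLS sketch above.

```python
# Sketch: Bayesian log-log regression with non-informative flat priors,
# sampled with NUTS (PyMC's default). Inputs are the hypothetical df and
# predictors defined in the earlier OLS sketch.
import arviz as az
import numpy as np
import pymc as pm

X = df[predictors].to_numpy()                 # n x k interaction matrix
log_y = np.log(df["post_money"].to_numpy())

with pm.Model() as model:
    alpha = pm.Flat("alpha")                  # flat (non-informative) priors
    beta = pm.Flat("beta", shape=X.shape[1])
    sigma = pm.HalfFlat("sigma")              # flat prior on sigma > 0
    mu = alpha + pm.math.dot(X, beta)
    pm.Normal("log_y", mu=mu, sigma=sigma, observed=log_y)
    trace = pm.sample(2000, tune=1000)        # NUTS under the hood

print(az.summary(trace, hdi_prob=0.95))       # 95% credible intervals
```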

The 95 percent credible intervals in Figure V suggest that the posterior distributions from Angel to Series E, excluding Pre-seed, have stable ranges of highly probable values around our original OLS coefficients. However, the distributions become more uncertain at the later stages, particularly for Series F, G, H, and I. This should be no surprise, considering our original sample sizes for the Pre-seed class and for the later stages. Because the estimates must be transformed back to their original scale, and because the magnitudes of late-stage rounds tend to be very high, this uncertainty in the exponent can lead to dramatically different prediction results. As with any simple tool, then, your mileage may vary. For more accurate and precise estimates, we’d suggest hiring a data scientist to build a more sophisticated machine learning algorithm or Bayesian model to account for more features and hierarchy. If your budget doesn’t allow for it, the simple calculation using the estimates in Table II will get you in the ballpark.

Concluding Remarks

This paper provides an empirical foundation for how to think about startup valuations and introduces a statistical model as a simple tool to help practitioners working in venture capital approximate an undisclosed post-money valuation. That said, the information in this paper is not investment advice, and is provided solely for educational purposes from sources believed to be reliable. Historical data is a great indicator but never a guarantee of the future, and statistical models are never correct — only useful [2]. This paper also makes no comment on whether current valuation practices result in accurate representations of a startup’s fair market value, as that is an entirely separate discussion [7].

This research may also serve as a starting point for others to pursue their own applied machine learning research. We translated the model presented in this article into a more powerful learning algorithm [8] with more features that fills in the missing post-money valuations in our own database. These estimates are then passed to Startup Anomaly Detection™, an algorithm we’ve developed to estimate the plausibility that a venture-backed startup will have a liquidity event such as an IPO or acquisition given the current state of knowledge about them. Our machine learning system appears to have some similarities with others recently disclosed by GV [15], Google’s venture capital arm, and Social Capital [14], with the exception that our probability estimates are available as part of Radicle’s research products.

Companies will likely continue raising even later and larger rounds in the coming years, and valuations at each stage may continue being redefined, but now we have a statistical perspective on valuations as well as greater insight into their distributional properties, which gives us a foundation for understanding disruption as we look forward.

Source : https://towardsdatascience.com/making-sense-of-startup-valuations-with-data-science-1dededaf18bb

Digital Transformation of Business and Society: Challenges and Opportunities by 2020 – Frank Diana

At a recent KPMG Robotic Innovations event, Futurist and friend Gerd Leonhard delivered a keynote titled “The Digital Transformation of Business and Society: Challenges and Opportunities by 2020”. I highly recommend viewing the Video of his presentation. As Gerd describes, he is a Futurist focused on foresight and observations — not predicting the future. We are at a point in history where every company needs a Gerd Leonhard. For many of the reasons presented in the video, future thinking is rapidly growing in importance. As Gerd so rightly points out, we are still vastly under-estimating the sheer velocity of change.

With regard to future thinking, Gerd used my future scenario slide to describe both the exponential and combinatorial nature of future scenarios — not only do we need to think exponentially, but we also need to think in a combinatorial manner. Gerd mentioned Tesla as a company that really knows how to do this.

Our Emerging Future

He then described our current pivot point of exponential change: a point in history where humanity will change more in the next twenty years than in the previous 300. With that as a backdrop, he encouraged the audience to look five years into the future and spend 3 to 5% of their time focused on foresight. He quoted Peter Drucker (“In times of change the greatest danger is to act with yesterday’s logic”) and stated that leaders must shift from a focus on what is, to a focus on what could be. Gerd added that “wait and see” means “wait and die” (love that by the way). He urged leaders to focus on 2020 and build a plan to participate in that future, emphasizing the question is no longer what-if, but what-when. We are entering an era where the impossible is doable, and the headline for that era is: exponential, convergent, combinatorial, and inter-dependent — words that should be a key part of the leadership lexicon going forward. Here are some snapshots from his presentation:

  • Because of exponential progression, it is difficult to imagine the world in 5 years, and although the industrial era was impactful, it will not compare to what lies ahead. The danger of vastly under-estimating the sheer velocity of change is real. For example, in just three months, the projection for the number of autonomous vehicles sold in 2035 went from 100 million to 1.5 billion
  • Six years ago Gerd advised a German auto company about the driverless car and the implications of a sharing economy — and they laughed. Think of what’s happened in just six years — can’t imagine anyone is laughing now. He witnessed something similar as a veteran of the music business where he tried to guide the industry through digital disruption; an industry that shifted from selling $20 CDs to making a fraction of a penny per play. Gerd’s experience in the music business is a lesson we should learn from: you can’t stop people who see value from extracting that value. Protectionist behavior did not work, as the industry lost 71% of their revenue in 12 years. Streaming music will be huge, but the winners are not traditional players. The winners are Spotify, Apple, Facebook, Google, etc. This scenario likely plays out across every industry, as new businesses are emerging, but traditional companies are not running them. Gerd stressed that we can’t let this happen across these other industries
  • Anything that can be automated will be automated: truck drivers and pilots go away, as robots don’t need unions. There is just too much to be gained not to automate. For example, 60% of the cost in the system could be eliminated by interconnecting logistics, possibly realized via a Logistics Internet as described by economist Jeremy Rifkin. But the drive towards automation will have unintended consequences and some science fiction scenarios could play out. Humanity and technology are indeed intertwining, but technology does not have ethics. A self-driving car would need ethics, as we make difficult decisions while driving all the time. How does a car decide to hit a frog versus swerving and hitting a mother and her child? Speaking of science fiction scenarios, Gerd predicts that when these things come together, humans and machines will have converged:
  • Gerd has been using the term “Hellven” to represent the two paths technology can take. Is it 90% heaven and 10% hell (unintended consequences), or can this equation flip? He asks the question: Where are we trying to go with this? He used the real example of Drones used to benefit society (heaven), but people buying guns to shoot them down (hell). As we pursue exponential technologies, we must do it in a way that avoids negative consequences. Will we allow humanity to move down a path where by 2030, we will all be human-machine hybrids? Will hacking drive chaos, as hackers gain control of a vehicle? A recent Jeep recall of 1.4 million jeeps underscores the possibility. A world of super intelligence requires super humanity — technology does not have ethics, but society depends on it. Is this Ray Kurzweil vision what we want?
  • Is society truly ready for human-machine hybrids, or even advancements like the driverless car that may be closer to realization? Gerd used a very effective Video to make the point
  • Followers of my Blog know I’m a big believer in the coming shift to value ecosystems. Gerd described this as a move away from Egosystems, where large companies are running large things, to interdependent Ecosystems. I’ve talked about the blurring of industry boundaries and the movement towards ecosystems. We may ultimately move away from the industry construct and end up with a handful of ecosystems like: mobility, shelter, resources, wellness, growth, money, maker, and comfort
  • Our kids will live to 90 or 100 as the default. We are gaining 8 hours of longevity per day — one third of a year per year. Genetic engineering is likely to eradicate disease, impacting longevity and global population. DNA editing is becoming a real possibility in the next 10 years, and at least 50 Silicon Valley companies are focused on ending aging and eliminating death. One such company is Human Longevity Inc., which was co-founded by Peter Diamandis of Singularity University. Gerd used a quote from Peter to help the audience understand the motivation: “Today there are six to seven trillion dollars a year spent on healthcare, half of which goes to people over the age of 65. In addition, people over the age of 65 hold something on the order of $60 trillion in wealth. And the question is what would people pay for an extra 10, 20, 30, 40 years of healthy life. It’s a huge opportunity”
  • Gerd described the growing need to focus on the right side of our brain. He believes that algorithms can only go so far. Our right brain characteristics cannot be replicated by an algorithm, making a human-algorithm combination — or humarithm as Gerd calls it — a better path. The right brain characteristics that grow in importance and drive future hiring profiles are:
  • Google is on the way to becoming the global operating system — an Artificial Intelligence enterprise. In the future, you won’t search, because as a digital assistant, Google will already know what you want. Gerd quotes Ray Kurzweil in saying that by 2027, the capacity of one computer will equal that of the human brain — at which point we shift from an artificial narrow intelligence, to an artificial general intelligence. In thinking about AI, Gerd flips the paradigm to IA or intelligent Assistant. For example, Schwab already has an intelligent portfolio. He indicated that every bank is investing in intelligent portfolios that deal with simple investments that robots can handle. This leads to a 50% replacement of financial advisors by robots and AI
  • This intelligent assistant race has just begun, as Siri, Google Now, Facebook MoneyPenny, and Amazon Echo vie for intelligent assistant positioning. Intelligent assistants could eliminate the need for actual assistants in five years, and creep into countless scenarios over time. Police departments are already capable of determining who is likely to commit a crime in the next month, and there are examples of police taking preventative measures. Augmentation adds another dimension, as an officer wearing glasses can identify you upon seeing you and have your records displayed in front of them. There are over 100 companies focused on augmentation, and a number of intelligent assistant examples surrounding IBM Watson; the most discussed being the effectiveness of doctor assistance. An intelligent assistant is the likely first role in the autonomous vehicle transition, as cars step in to provide a number of valuable services without completely taking over. There are countless Examples emerging
  • Gerd took two polls during his keynote. Here is the first: how do you feel about the rise of intelligent digital assistants? Answers 1 and 2 below received the lion share of the votes
  • Collectively, automation, robotics, intelligent assistants, and artificial intelligence will reframe business, commerce, culture, and society. This is perhaps the key take away from a discussion like this. We are at an inflection point where reframing begins to drive real structural change. How many leaders are ready for true structural change?
  • Gerd likes to refer to the 7-ations: Digitization, De-Materialization, Automation, Virtualization, Optimization, Augmentation, and Robotization. Consequences of the exponential and combinatorial growth of these seven include dependency, job displacement, and abundance. Whereas these seven are tools for dramatic cost reduction, they also lead to abundance. Examples are everywhere, from the 16 million songs available through Spotify, to the 3D printed cars that require only 50 parts. As supply exceeds demand in category after category, we reach abundance. As Gerd put it, in five years’ time, genome sequencing will be cheaper than flushing the toilet and abundant energy will be available by 2035 (2015 will be the first year that a major oil company will leave the oil business to enter the abundance of the renewable business). Other things to consider regarding abundance:
  • Efficiency and business improvement is a path not a destination. Gerd estimates that total efficiency will be reached in 5 to 10 years, creating value through productivity gains along the way. However, after total efficiency is met, value comes from purpose. Purpose-driven companies have an aspirational purpose that aims to transform the planet; referred to as a massive transformative purpose in a recent book on exponential organizations. When you consider the value that the millennial generation places on purpose, it is clear that successful organizations must excel at both technology and humanity. If we allow technology to trump humanity, business would have no purpose
  • In the first phase, the value lies in the automation itself (productivity, cost savings). In the second phase, the value lies in those things that cannot be automated. Anything that is human about your company cannot be automated: purpose, design, and brand become more powerful. Companies must invent new things that are only possible because of automation
  • Technological unemployment is real this time — and exponential. Gerd talked to a recent study by the Economist that describes how robotics and artificial intelligence will increasingly be used in place of humans to perform repetitive tasks. On the other side of the spectrum is a demand for better customer service and greater skills in innovation driven by globalization and falling barriers to market entry. Therefore, creativity and social intelligence will become crucial differentiators for many businesses; jobs will increasingly demand skills in creative problem-solving and constructive interaction with others
  • Gerd described a basic income guarantee that may be necessary if some of these unemployment scenarios play out. Something like this is already on the ballot in Switzerland, and it is not the first time this has been talked about:
  • In the world of automation, experience becomes extremely valuable — and you can’t (nor should you attempt to) automate experiences. We clearly see an intense focus on customer experience, and we had a great discussion on the topic on an August 26th Game Changers broadcast. Innovation is critical to both the service economy and the experience economy. Gerd used a visual to describe the progression of economic value:
Source: B. Joseph Pine II and James Gilmore: The Experience Economy
  • Gerd used a second poll to sense how people would feel about humans becoming artificially intelligent. Here again, the audience leaned towards the first two possible answers

Gerd then summarized the session as follows:

The future is exponential, combinatorial, and interdependent: the sooner we can adjust our thinking (lateral), the better we will be at designing our future.

My take: Gerd hits on a key point. Leaders must think differently. There is very little in a leader’s collective experience that can guide them through the type of change ahead; it requires us all to think differently.

When looking at AI, consider trying IA first (intelligent assistance / augmentation).

My take: These considerations allow us to create the future in a way that avoids unintended consequences. Technology should be a supplement, not a replacement.

Efficiency and cost reduction based on automation, AI/IA and Robotization are good stories but not the final destination: we need to go beyond the 7-ations and inevitable abundance to create new value that cannot be easily automated.

My take: Future thinking is critical for us to be effective here. We have to have a sense of where all of this is heading if we are to effectively create new sources of value.

We won’t just need better algorithms — we will also need stronger humarithms, i.e., values, ethics, standards, principles, and social contracts.

My take: Gerd is an evangelist for creating our future in a way that avoids hellish outcomes — and kudos to him for being that voice.

“The best way to predict the future is to create it” (Alan Kay).

My take: Our thinking about the future tends to place it years away, and that is just not the case anymore. What we think will take ten years is likely to happen in two. We can’t create the future if we don’t focus on it through an exponential lens.

Source : https://medium.com/@frankdiana/digital-transformation-of-business-and-society-5d9286e39dbf

When, which … Design Thinking, Lean, Design Sprint, Agile? – Geert Claes

Confusion galore!

A lot of people are — understandably so — very confused when it comes to innovation methodologies, frameworks, and techniques. Questions like: “When should we use Design Thinking?”, “What is the purpose of a Design Sprint?”, “Is Lean Startup just for startups?”, “Where does Agile fit in?”, “What happens after the <some methodology> phase?” are all very common questions.

(How) does it all connect?

When browsing the Internet for answers, one notices quickly that others too are struggling to understand how it all works together.

Gartner (as well as numerous others) tried to visualise how methodologies like Design Thinking, Lean, Design Sprint and Agile flow nicely from one to the next. Most of these visualisations have a number of nicely coloured and connected circles, but for me they seem to miss the mark. The place where one methodology flows into the next is very debatable, because there are too many similar techniques and there is just too much overlap.

The innovation spectrum

It probably makes more sense to just look at Design Thinking, Lean, Design Sprint & Agile as a bunch of tools and techniques in one’s toolbox, rather than argue for one over the other, because they can all add value somewhere on the innovation spectrum.

Innovation initiatives can range from exploring an abstract problem space, to experimenting with a number of solutions, before continuously improving a very concrete solution in a specific market space.

Business model

An aspect that often seems to be omitted is the business model maturity axis. For established products as well as adjacent ones (think McKinsey’s Horizon 1 and 2), the business models are often very well understood. For startups and disruptive innovations within an established business however, the business model will need to be validated through experiments.

Methodologies

Design Thinking

Design Thinking really shines when we need to better understand the problem space and identify the early adopters. There are various flavors of design thinking, but they all roughly follow the double-diamond flow. Simplistically, the first diamond starts by diverging, gathering lots of insights through talking to our target stakeholders, followed by converging, clustering those insights and identifying key pain points, problems, or jobs to be done. The second diamond starts with a diverging exercise to ideate a large number of potential solutions before prototyping and testing the most promising ideas. Design Thinking is mainly focussed on qualitative rather than quantitative insights.

Lean Startup

The slight difference from Design Thinking is that the entrepreneur (or intrapreneur) often already has a good understanding of the problem space. Lean considers everything to be a hypothesis or assumption until validated, so even that good understanding of the problem space is just an assumption. Lean tends to start by specifying your assumptions on a customer-focussed (lean) canvas and then prioritizing and validating them according to the highest risk for the entire product. The process to validate assumptions is to create an experiment (build), test it (measure), and learn whether the assumption or hypothesis still stands. Lean uses qualitative insights early on but later forces you to define actionable quantitative data to measure how effectively the solution addresses the problem and whether the growth strategy is on track. The “Get out of the building” phrase is often associated with Lean Startup, but the same principle of reaching out to customers obviously also applies to Design Thinking (… and Design Sprint … and Agile).

Design Sprint

It appears that the Google Ventures-style Design Sprint method could have its roots in a technique described in the Lean UX book. The key strength of a Design Sprint is to share insights, ideate, prototype and test a concept all in a 5-day sprint. Given the short timeframe, Design Sprints only focus on part of the solution, but it’s an excellent way to learn really quickly whether you are on the right track or not.

Agile

Just as we must deal with uncertainty in our problem, solution, and market assumptions, agile development is a great way to cope with uncertainty in product development. There is no need to specify every detail of a product up front, because here too there are plenty of assumptions and uncertainty. Agile is a great way to build-measure-learn and validate assumptions whilst creating a Minimum Viable Product, in Lean Startup parlance. We should define and prioritize a backlog of value to be delivered and work in short sprints, delivering and testing that value as part of each sprint.

Conclusion

Probably not really the answer you were looking for, but there is no clear rule on when to start where. There is also no obvious handover point because there is just too much overlap, and this significant overlap may explain why some people claim methodology <x> is better than <y>.

Anyhow, most innovation methodologies can add great value, and it’s really up to the team to decide where to start and when to apply which methods and techniques. The common ground most can agree on is to avoid falling in love with your own solution and to listen to qualitative as well as quantitative customer feedback.

Innovation Spectrum

Some great books: Creative Confidence, Lean Startup, Running Lean, Sprint, Dual Transformation, Lean UX, Lean Enterprise, Scaling Lean … and a nice video on Innovation@50x

Update: minor update in the innovation canvas, moving the top axis of problem-solution-market to the side

Source : https://medium.com/@geertwlclaes/when-which-design-thinking-lean-design-sprint-agile-a4614fa778b9

Former Google CEO Eric Schmidt listed the ‘3 big failures’ he sees in tech startups today – Business Insider

Former Google CEO Eric Schmidt has listed the three “big failures” in tech entrepreneurship around the world.

Schmidt outlined the failings in a speech he gave at the Centre for Entrepreneurs in London this week. He later expanded on his thoughts in an interview with former BBC News boss James Harding.

Below are the three mistakes he outlined, with quotes taken from both a draft of his speech seen by Business Insider, and comments he delivered on the night.

1. People stick to who and what they know

“Far too often, we invest mostly in people we already know, who are working in very narrow disciplines,” Schmidt wrote in his draft.

In his speech, Schmidt pegged this point closely to a need for diversity and inclusion. He said companies need to be open to bringing in people from other countries and backgrounds.

He said entrepreneurship won’t flourish if people are “going to one institution, hiring only those people, and only — if I can be blunt — only white males.”

During the Q&A, Schmidt specifically addressed the gender imbalance in the tech industry. He said there’s a reason to be optimistic about women’s representation in tech improving, predicting that tech’s gender imbalance will vanish in one generation.

2. Too much focus on product and not on platforms

“We frequently don’t build the best technology platforms to tackle big social challenges, because often there is no immediate promise of commercial return,” Schmidt wrote in his draft.

“There are a million e-commerce apps but not enough speciality platforms for safely sharing and analyzing data on homelessness, climate change or refugees.”

Schmidt omitted this mention of socially conscious tech from his final speech, but did say that he sees a lot of innovation coming out of network platforms, which allow people to connect and pool data, because “the barrier to entry for these startups is very, very low.”

3. Companies aren’t partnering up early enough

Finally, Schmidt wrote in his draft that tech startups don’t partner enough with other companies in the modern, hyper-connected world. “It’s impossible to think about any major challenge for society in a silo,” he wrote.

He said in his speech that tech firms have to be ready to partner “fairly early.” He gave the example of a startup that wants to build homecare robots.

“The market for homecare robots is going to be very, very large. The problem is that you need visual systems, and machine learning systems, and listening systems, and motor systems, and so forth. You’re not going to be able to do it with three people,” he said.

After detailing the failures he sees in tech entrepreneurship, Schmidt laid out what he views as the solution. He referred back to the Renaissance in Europe, saying people turned their hands to all sorts of disciplines, from science to art to business.

Source : https://www.businessinsider.com/eric-schmidt-3-big-failures-he-sees-in-tech-entrepreneurship-2018-11

6 Biases Holding You Back From Rational Thinking – Robert Greene

Emotions are continually affecting our thought processes and decisions, below the level of our awareness. And the most common emotion of them all is the desire for pleasure and the avoidance of pain. Our thoughts almost inevitably revolve around this desire; we simply recoil from entertaining ideas that are unpleasant or painful to us. We imagine we are looking for the truth, or being realistic, when in fact we are holding on to ideas that bring a release from tension and soothe our egos, make us feel superior. This pleasure principle in thinking is the source of all of our mental biases. If you believe that you are somehow immune to any of the following biases, it is simply an example of the pleasure principle in action. Instead, it is best to search and see how they continually operate inside of you, as well as learn how to identify such irrationality in others.

These biases, by distorting reality, lead to the mistakes and ineffective decisions that plague our lives. Being aware of them, we can begin to counterbalance their effects.

1) Confirmation Bias

I look at the evidence and arrive at my decisions through more or less rational processes.

To hold an idea and convince ourselves we arrived at it rationally, we go in search of evidence to support our view. What could be more objective or scientific? But because of the pleasure principle and its unconscious influence, we manage to find that evidence that confirms what we want to believe. This is known as confirmation bias.

We can see this at work in people’s plans, particularly those with high stakes. A plan is designed to lead to a positive, desired objective. If people considered the possible negative and positive consequences equally, they might find it hard to take any action. Inevitably they veer towards information that confirms the desired positive result, the rosy scenario, without realizing it. We also see this at work when people are supposedly asking for advice. This is the bane of most consultants. In the end, people want to hear their own ideas and preferences confirmed by an expert opinion. They will interpret what you say in light of what they want to hear; and if your advice runs counter to their desires, they will find some way to dismiss your opinion, your so-called expertise. The more powerful the person, the more they are subject to this form of the confirmation bias.

When investigating confirmation bias in the world take a look at theories that seem a little too good to be true. Statistics and studies are trotted out to prove them, which are not very difficult to find, once you are convinced of the rightness of your argument. On the Internet, it is easy to find studies that support both sides of an argument. In general, you should never accept the validity of people’s ideas because they have supplied “evidence.” Instead, examine the evidence yourself in the cold light of day, with as much skepticism as you can muster. Your first impulse should always be to find the evidence that disconfirms your most cherished beliefs and those of others. That is true science.

2) Conviction Bias

I believe in this idea so strongly. It must be true.

We hold on to an idea that is secretly pleasing to us, but deep inside we might have some doubts as to its truth and so we go an extra mile to convince ourselves — to believe in it with great vehemence, and to loudly contradict anyone who challenges us. How can our idea not be true if it brings out of us such energy to defend it, we tell ourselves? This bias is revealed even more clearly in our relationship to leaders — if they express an opinion with heated words and gestures, colorful metaphors and entertaining anecdotes, and a deep well of conviction, it must mean they have examined the idea carefully and therefore express it with such certainty. Those on the other hand who express nuances, whose tone is more hesitant, reveal weakness and self-doubt. They are probably lying, or so we think. This bias makes us prone to salesmen and demagogues who display conviction as a way to convince and deceive. They know that people are hungry for entertainment, so they cloak their half-truths with dramatic effects.

3) Appearance Bias

I understand the people I deal with; I see them just as they are.

We do not see people as they are, but as they appear to us. And these appearances are usually misleading. First, people have trained themselves in social situations to present the front that is appropriate and that will be judged positively. They seem to be in favor of the noblest causes, always presenting themselves as hardworking and conscientious. We take these masks for reality. Second, we are prone to fall for the halo effect — when we see certain negative or positive qualities in a person (social awkwardness, intelligence), other positive or negative qualities are implied that fit with this. People who are good looking generally seem more trustworthy, particularly politicians. If a person is successful, we imagine they are probably also ethical, conscientious and deserving of their good fortune. This obscures the fact that many people who get ahead have done so by doing less than moral actions, which they cleverly disguise from view.

4) The Group Bias

My ideas are my own. I do not listen to the group. I am not a conformist.

We are social animals by nature. The feeling of isolation, of difference from the group, is depressing and terrifying. We experience tremendous relief to find others who think the same way as we do. In fact, we are motivated to take up ideas and opinions because they bring us this relief. We are unaware of this pull and so imagine we have come to certain ideas completely on our own. Look at people that support one party or the other, one ideology — a noticeable orthodoxy or correctness prevails, without anyone saying anything or applying overt pressure. If someone is on the right or the left, their opinions will almost always follow the same direction on dozens of issues, as if by magic, and yet few would ever admit this influence on their thought patterns.

5) The Blame Bias

I learn from my experience and mistakes.

Mistakes and failures elicit the need to explain. We want to learn the lesson and not repeat the experience. But in truth, we do not like to look too closely at what we did; our introspection is limited. Our natural response is to blame others, circumstances, or a momentary lapse of judgment. The reason for this bias is that it is often too painful to look at our mistakes. It calls into question our feelings of superiority. It pokes at our ego. We go through the motions, pretending to reflect on what we did. But with the passage of time, the pleasure principle rises and we forget what small part in the mistake we ascribed to ourselves. Desire and emotion will blind us yet again, and we will repeat exactly the same mistake and go through the same mild recriminating process, followed by forgetfulness, until we die. If people truly learned from their experience, we would find few mistakes in the world, and career paths that ascend ever upward.

6) Superiority Bias

I’m different. I’m more rational than others, more ethical as well.

Few would say this to people in conversation. It sounds arrogant. But in numerous opinion polls and studies, when asked to compare themselves to others, people generally express a variation of this. It’s the equivalent of an optical illusion — we cannot seem to see our faults and irrationalities, only those of others. So, for instance, we’ll easily believe that those in the other political party do not come to their opinions based on rational principles, but those on our side have done so. On the ethical front, few will ever admit that they have resorted to deception or manipulation in their work, or have been clever and strategic in their career advancement. Everything they’ve got, or so they think, comes from natural talent and hard work. But with other people, we are quick to ascribe to them all kinds of Machiavellian tactics. This allows us to justify whatever we do, no matter the results.

We feel a tremendous pull to imagine ourselves as rational, decent, and ethical. These are qualities highly promoted in the culture. To show signs otherwise is to risk great disapproval. If all of this were true — if people were rational and morally superior — the world would be suffused with goodness and peace. We know, however, the reality, and so some people, perhaps all of us, are merely deceiving ourselves. Rationality and ethical qualities must be achieved through awareness and effort. They do not come naturally. They come through a maturation process.

Source : https://medium.com/the-mission/6-biases-holding-you-back-from-rational-thinking-f2eddd35fd0f

Building safe artificial intelligence: specification, robustness, and assurance – DeepMind

Building a rocket is hard. Each component requires careful thought and rigorous testing, with safety and reliability at the core of the designs. Rocket scientists and engineers come together to design everything from the navigation course to control systems, engines and landing gear. Once all the pieces are assembled and the systems are tested, we can put astronauts on board with confidence that things will go well.

If artificial intelligence (AI) is a rocket, then we will all have tickets on board some day. And, as in rockets, safety is a crucial part of building AI systems. Guaranteeing safety requires carefully designing a system from the ground up to ensure the various components work together as intended, while developing all the instruments necessary to oversee the successful operation of the system after deployment.

At a high level, safety research at DeepMind focuses on designing systems that reliably function as intended while discovering and mitigating possible near-term and long-term risks. Technical AI safety is a relatively nascent but rapidly evolving field, with its contents ranging from high-level and theoretical to empirical and concrete. The goal of this blog is to contribute to the development of the field and encourage substantive engagement with the technical ideas discussed, and in doing so, advance our collective understanding of AI safety.

In this inaugural post, we discuss three areas of technical AI safety: specification, robustness, and assurance. Future posts will broadly fit within the framework outlined here. While our views will inevitably evolve over time, we feel these three areas cover a sufficiently wide spectrum to provide a useful categorisation for ongoing and future research.

Three AI safety problem areas. Each box highlights some representative challenges and approaches. The three areas are not disjoint but rather aspects that interact with each other. In particular, a given specific safety problem might involve solving more than one aspect.

Specification: define the purpose of the system

You may be familiar with the story of King Midas and the golden touch. In one rendition, the Greek god Dionysus promised Midas any reward he wished for, as a sign of gratitude for the king having gone out of his way to show hospitality and graciousness to a friend of Dionysus. In response, Midas asked that anything he touched be turned into gold. He was overjoyed with this new power: an oak twig, a stone, and roses in the garden all turned to gold at his touch. But he soon discovered the folly of his wish: even food and drink turned to gold in his hands. In some versions of the story, even his daughter fell victim to the blessing that turned out to be a curse.

This story illustrates the problem of specification: how do we state what we want? The challenge of specification is to ensure that an AI system is incentivised to act in accordance with the designer’s true wishes, rather than optimising for a poorly-specified goal or the wrong goal altogether. Formally, we distinguish between three types of specifications:

  • ideal specification (the “wishes”), corresponding to the hypothetical (but hard to articulate) description of an ideal AI system that is fully aligned to the desires of the human operator;
  • design specification (the “blueprint”), corresponding to the specification that we actually use to build the AI system, e.g. the reward function that a reinforcement learning system maximises;
  • and revealed specification (the “behaviour”), which is the specification that best describes what actually happens, e.g. the reward function we can reverse-engineer from observing the system’s behaviour using, say, inverse reinforcement learning. This is typically different from the one provided by the human operator because AI systems are not perfect optimisers or because of other unforeseen consequences of the design specification.

A specification problem arises when there is a mismatch between the ideal specification and the revealed specification, that is, when the AI system doesn’t do what we’d like it to do. Research into the specification problem of technical AI safety asks the question: how do we design more principled and general objective functions, and help agents figure out when goals are misspecified? Problems that create a mismatch between the ideal and design specifications are in the design subcategory above, while problems that create a mismatch between the design and revealed specifications are in the emergent subcategory.

For instance, in our AI Safety Gridworlds* paper, we gave agents a reward function to optimise, but then evaluated their actual behaviour on a “safety performance function” that was hidden from the agents. This setup models the distinction above: the safety performance function is the ideal specification, which was imperfectly articulated as a reward function (design specification), and then implemented by the agents producing a specification which is implicitly revealed through their resulting policy.

*N.B.: in our AI Safety Gridworlds paper, we provided a different definition of specification and robustness problems from the one presented in this post.

From Faulty Reward Functions in the Wild by OpenAI: a reinforcement learning agent discovers an unintended strategy for achieving a higher score.

As another example, consider the boat-racing game CoastRunners analysed by our colleagues at OpenAI (see Figure above from “Faulty Reward Functions in the Wild”). For most of us, the game’s goal is to finish a lap quickly and ahead of other players — this is our ideal specification. However, translating this goal into a precise reward function is difficult, so instead, CoastRunners rewards players (design specification) for hitting targets laid out along the route. Training an agent to play the game via reinforcement learning leads to a surprising behaviour: the agent drives the boat in circles to capture re-populating targets while repeatedly crashing and catching fire rather than finishing the race. From this behaviour we infer (revealed specification) that something is wrong with the game’s balance between the short-circuit’s rewards and the full lap rewards. There are many more examples like this of AI systems finding loopholes in their objective specification.
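
To make the gap between these specification types concrete, here is a minimal, hypothetical Python sketch (not taken from the DeepMind or OpenAI work): a learner that only ever sees a proxy reward for hitting targets will happily choose a policy that scores zero on the objective we actually care about. The policy names and reward values are invented for illustration.

```python
# Minimal illustration (hypothetical, not the CoastRunners environment):
# an agent that greedily maximises a proxy reward (design specification)
# can score poorly on the objective we actually care about (ideal specification).

def proxy_reward(policy: str) -> float:
    """Design specification: points for hitting targets along the route."""
    return {"finish_lap": 40.0, "loop_for_targets": 95.0}[policy]

def true_score(policy: str) -> float:
    """Ideal specification: finish the lap quickly, ahead of other players."""
    return {"finish_lap": 100.0, "loop_for_targets": 0.0}[policy]

policies = ["finish_lap", "loop_for_targets"]

# A reward-maximising learner only ever sees the proxy.
chosen = max(policies, key=proxy_reward)

print(f"policy chosen by the agent:  {chosen}")                # loop_for_targets
print(f"proxy reward (design spec):  {proxy_reward(chosen)}")  # 95.0
print(f"true score (ideal spec):     {true_score(chosen)}")    # 0.0
# The gap between the two is the specification problem: the revealed
# behaviour tells us the design specification was mis-stated.
```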

Robustness: design the system to withstand perturbations

There is an inherent level of risk, unpredictability, and volatility in real-world settings where AI systems operate. AI systems must be robust to unforeseen events and adversarial attacks that can damage or manipulate such systems. Research on the robustness of AI systems focuses on ensuring that our agents stay within safe limits, regardless of the conditions encountered. This can be achieved by avoiding risks (prevention) or by self-stabilisation and graceful degradation (recovery). Safety problems resulting from distributional shift, adversarial inputs, and unsafe exploration can be classified as robustness problems.

To illustrate the challenge of addressing distributional shift, consider a household cleaning robot that typically cleans a petless home. The robot is then deployed to clean a pet-friendly office, and encounters a pet during its cleaning operation. The robot, never having seen a pet before, proceeds to wash the pets with soap, leading to undesirable outcomes (Amodei and Olah et al., 2016). This is an example of a robustness problem that can result when the data distribution encountered at test time shifts from the distribution encountered during training.
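
The same failure mode shows up in supervised learning. The following is a toy sketch (illustrative only, using NumPy; not an example from the paper) in which a trivial nearest-mean classifier is fit on one data distribution and its accuracy degrades when the test distribution shifts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training distribution: class 0 centred at -2, class 1 at +2 (1-D features).
x_train = np.concatenate([rng.normal(-2, 1, 500), rng.normal(+2, 1, 500)])
y_train = np.concatenate([np.zeros(500), np.ones(500)])

# A trivial "model": classify by which class mean the input is closer to.
means = np.array([x_train[y_train == c].mean() for c in (0, 1)])
predict = lambda x: np.argmin(np.abs(x[:, None] - means[None, :]), axis=1)

def accuracy(x, y):
    return (predict(x) == y).mean()

# In-distribution test set: same generative process as training.
x_iid = np.concatenate([rng.normal(-2, 1, 500), rng.normal(+2, 1, 500)])
y_iid = np.concatenate([np.zeros(500), np.ones(500)])

# Shifted test set: class 0 has drifted to +1, overlapping class 1's region.
x_shift = np.concatenate([rng.normal(+1, 1, 500), rng.normal(+2, 1, 500)])
y_shift = y_iid

print(f"accuracy on i.i.d. test data:  {accuracy(x_iid, y_iid):.2f}")      # close to 1.0
print(f"accuracy on shifted test data: {accuracy(x_shift, y_shift):.2f}")  # drops sharply
```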

From AI Safety Gridworlds. During training the agent learns to avoid the lava; but when we test it in a new situation where the location of the lava has changed, it fails to generalise and runs straight into the lava.

Adversarial inputs are a specific case of distributional shift in which inputs to an AI system are deliberately crafted to trick it.

An adversarial input, overlaid on a typical image, can cause a classifier to miscategorise a sloth as a race car. The two images differ by at most 0.0078 in each pixel. The first one is classified as a three-toed sloth with >99% confidence. The second one is classified as a race car with >99% probability.
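
The sketch below shows, schematically, how such a bounded perturbation can be constructed with the Fast Gradient Sign Method (FGSM) in PyTorch. The tiny untrained network and random image are placeholders, not the classifier or sloth image from the example above; a real attack would target a trained model.

```python
import torch
import torch.nn as nn

# Stand-in classifier (untrained); a real attack would target a trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

def fgsm(image: torch.Tensor, label: torch.Tensor, eps: float = 0.0078) -> torch.Tensor:
    """Fast Gradient Sign Method: perturb each pixel by at most `eps`
    in the direction that increases the classifier's loss."""
    image = image.clone().requires_grad_(True)
    loss = loss_fn(model(image), label)
    loss.backward()
    adversarial = image + eps * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

x = torch.rand(1, 3, 32, 32)   # placeholder image
y = torch.tensor([0])          # its correct label
x_adv = fgsm(x, y)

print((x_adv - x).abs().max())                     # per-pixel change bounded by eps
print(model(x).argmax(1), model(x_adv).argmax(1))  # predictions may now differ
```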

Unsafe exploration can result from a system that seeks to maximise its performance and attain goals without having safety guarantees that will not be violated during exploration, as it learns and explores in its environment. An example would be the household cleaning robot putting a wet mop in an electrical outlet while learning optimal mopping strategies (García and Fernández, 2015; Amodei and Olah et al., 2016).

Assurance: monitor and control system activity

Although careful safety engineering can rule out many safety risks, it is difficult to get everything right from the start. Once AI systems are deployed, we need tools to continuously monitor and adjust them. Our last category, assurance, addresses these problems from two angles: monitoring and enforcing.

Monitoring comprises all the methods for inspecting systems in order to analyse and predict their behaviour, both via human inspection (of summary statistics) and automated inspection (to sweep through vast amounts of activity records). Enforcement, on the other hand, involves designing mechanisms for controlling and restricting the behaviour of systems. Problems such as interpretability and interruptibility fall under monitoring and enforcement respectively.

AI systems are unlike us, both in their embodiments and in their way of processing data. This creates problems of interpretability; well-designed measurement tools and protocols allow the assessment of the quality of the decisions made by an AI system (Doshi-Velez and Kim, 2017). For instance, a medical AI system would ideally issue a diagnosis together with an explanation of how it reached the conclusion, so that doctors can inspect the reasoning process before approval (De Fauw et al., 2018). Furthermore, to understand more complex AI systems we might even employ automated methods for constructing models of behaviour using Machine theory of mind (Rabinowitz et al., 2018).

ToMNet discovers two subspecies of agents and predicts their behaviour (from “Machine Theory of Mind”)

Finally, we want to be able to turn off an AI system whenever necessary. This is the problem of interruptibility. Designing a reliable off-switch is very challenging: for instance, because a reward-maximising AI system typically has strong incentives to prevent this from happening (Hadfield-Menell et al., 2017); and because such interruptions, especially when they are frequent, end up changing the original task, leading the AI system to draw the wrong conclusions from experience (Orseau and Armstrong, 2016).

A problem with interruptions: human interventions (i.e. pressing the stop button) can change the task. In the figure, the interruption adds a transition (in red) to the Markov decision process that changes the original task (in black). See Orseau and Armstrong, 2016.
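
The toy calculation below (illustrative only, not from Orseau and Armstrong's paper) shows how frequent interruptions can change the task a reward-maximising agent is effectively solving: once the short path past the stop button is often interrupted, the agent learns to route around the button. The path lengths, rewards, and probabilities are invented for illustration.

```python
# Illustrative only: interruptions change the effective decision problem
# that a reward-maximising agent is solving.

GOAL_REWARD = 10.0
STEP_COST = 1.0

def expected_return(path_length: int, p_interrupt_per_step: float) -> float:
    """Expected return of a path when each step may be interrupted
    (episode ends with no goal reward) with some probability.
    Approximation: the full step cost is paid even if interrupted early."""
    p_survive = (1.0 - p_interrupt_per_step) ** path_length
    return p_survive * GOAL_REWARD - path_length * STEP_COST

paths = {"short_path_near_stop_button": 2, "long_detour": 5}

for p in (0.0, 0.4):
    best = max(
        paths,
        key=lambda name: expected_return(paths[name], p if "button" in name else 0.0),
    )
    print(f"interruption probability {p:.1f}: agent prefers {best}")
# With p = 0.0 the short path wins; with frequent interruptions the agent
# "learns" to route around the stop button: the task it optimises has changed.
```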

Looking ahead

We are building the foundations of a technology which will be used for many important applications in the future. It is worth bearing in mind that design decisions which are not safety-critical at the time of deployment can still have a large impact when the technology becomes widely used. Although convenient at the time, once these design choices have been irreversibly integrated into important systems the tradeoffs look different, and we may find they cause problems that are hard to fix without a complete redesign.

Two examples from the history of programming are the null pointer, which Tony Hoare refers to as his ‘billion-dollar mistake’, and the gets() routine in C. If early programming languages had been designed with security in mind, progress might have been slower, but computer security today would probably be in a much stronger position.

With careful thought and planning now, we can avoid building in analogous problems and vulnerabilities. We hope the categorisation outlined in this post will serve as a useful framework for methodically planning in this way. Our intention is to ensure that AI systems of the future are not just ‘hopefully safe’ but robustly, verifiably safe — because we built them that way!

We look forward to continuing to make exciting progress in these areas, in close collaboration with the broader AI research community, and we encourage individuals across disciplines to consider entering or contributing to the field of AI safety research.

Source : https://medium.com/@deepmindsafetyresearch/building-safe-artificial-intelligence-52f5f75058f1

 

How 20 big-name US VC firms invest at Series A & B – Pitchbook

NEA is one of the most well-known investors around, and the firm also takes the crown as the most active VC investor in Series A and B rounds in the US so far in 2018. Andreessen Horowitz, Accel, and plenty of the other usual early-stage suspects are on the list, too.

Also included is a pair of names that have been in the news this year for backing away from the traditional VC model: Social Capital and SV Angel. The two are on the list thanks to deals completed earlier in the year.

Just how much are these prolific investors betting on Series A and Series B rounds? And at what valuation? We’ve used data from the PitchBook Platform to highlight a collection of the top venture capital investors in the US (excluding accelerators) and provide information about the Series A and B rounds they’ve joined so far this year. Click on the graphic below to open a PDF.

Source : https://pitchbook.com/news/articles/how-20-big-name-us-vc-firms-invest-at-series-a-b

Lyft – Geofencing San Francisco Valencia Street – Greater investment in loading zones is needed for this to be more effective

Creating a Safer Valencia Street

San Francisco is known for its famous neighborhoods and commercial corridors — and the Mission District’s Valencia Street takes it to the next level. For Lyft, Valencia Street is filled with top destinations that our passengers frequent: trendy cafes, hipster clothing stores, bars, and live music.

To put it simply, there’s a lot happening along Valencia Street. Besides the foot traffic, many of its restaurants are popular choices on the city’s growing network of courier services, providing on-demand food delivery via cars and bicycles. Residents of the Mission are increasingly relying on FedEx, Amazon, and UPS for stuff. Merchants welcome commercial trucks to deliver their goods. In light of a recent road diet on Mission Street to create much needed dedicated lanes to improve MUNI bus service, many vehicles have been re-routed to parallel streets like Valencia. And of course, Valencia Street is also one of the most heavily trafficked bicycling corridors in the City, with 2,100 cyclists commuting along Valencia Street each day.

Source: SFMTA

With so many different users of the street and a street design that has largely remained unchanged, it’s no surprise that the corridor has experienced growing safety concerns — particularly around increased traffic, double parking, and bicycle dooring.

Valencia Street is part of the City’s Vision Zero High-Injury Network, the 13% of city streets that account for 75% of severe and fatal collisions. From January 2012 to December 2016, there were 204 people injured and 268 reported collisions along the corridor, of which one was fatal.

As the street has become more popular and the need to act has become more apparent, community organizers have played an important role in rallying City forces to commit to a redesign. The San Francisco Bicycle Coalition has been a steadfast advocate for the cycling community’s needs: going back to the 1990s when they helped bring painted bike lanes to the corridor, to today’s efforts to upgrade to a protected bike lane. The People Protected Bike Lane Protests have helped catalyze the urgency of finding a solution. And elected officials, including Supervisor Ronen and former Supervisor Sheehy, have been vocal about the need for change.

Earlier this spring, encouraged by the SFMTA’s first steps in bringing new, much-needed infrastructure to the corridor, we began conducting an experiment to leverage our technology as part of the solution. As we continue to partner closely with the SFMTA as they work on a new design for the street, we want to report back what we’ve learned.

Introduction

As we began our pilot, we set out with the following goals:

  1. Promote safety on the busiest parts of Valencia Street for the most vulnerable users by helping minimize conflict for bicyclists, pedestrians, and transit riders.
  2. Continue to provide a good experience for drivers and passengers to help ensure overall compliance with the pilot.
  3. Understand the effectiveness of geofencing as a tool to manage pickup activity.
  4. Work collaboratively with city officials and the community to improve Valencia Street.

To meet these goals, we first examined Lyft ride activity in the 30-block project area: Valencia Street between Market Street and Cesar Chavez.

Within this project area, we found that the most heavily traveled corridors were Valencia between 16th and 17th Street, 17th and 18th Street, and 18th and 19th Street. We found that these three blocks make up 27% of total Lyft rides along the Valencia corridor.

We also wanted to understand the top destinations along the corridor. To do this, we looked at ride history where passengers typed in the location they wanted to get picked up from.

Next, we looked at how demand for Lyft changed over time of day and over the course of the week. This would help answer questions such as “how does demand for Lyft differ on weekends vs. weeknights” or “what times of day do people use Lyft to access the Valencia corridor?”

We found that Lyft activity on Valencia Street was highest on weekends and in the evenings. Demand is fairly consistent on weekdays, with major spikes of activity on Fridays, Saturdays, and Sundays. The nighttime hours of 8 PM to 2 AM are also the busiest time for trips, making up 44% of all rides. These findings suggest the important role Lyft plays as a reliable option when transit service doesn’t run as frequently, or as a safe alternative to driving under the influence (a phenomenon we are observing around the country).

The Pilot

Our hypothesis was that, because of the increased need for curb space among multiple on-demand services, as well as the unsafe experience of double parking or crossing over the bike lane to reach passengers, improvements in the Lyft app could help create a better experience for everyone.

To test this, our curb access pilot program was conducted as an “A/B experiment”, where subjects were randomly assigned to a control or treatment group and statistical analysis was used to determine which variation performed better. Half of the riders requesting rides within the pilot area continued to have the same experience: they could get picked up wherever they wanted. The other half were shown the experiment scenario, which asked them to walk to a dedicated pickup spot.
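
As a rough illustration of how this kind of experiment can be analysed, the sketch below generates hypothetical per-ride loading times for the two groups and applies a two-sample t-test with SciPy. The numbers are invented (they only loosely echo the averages reported later in this post) and this is not Lyft's actual data or methodology.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical per-ride loading times in seconds; entirely made-up data.
control = rng.normal(loc=25.0, scale=8.0, size=5000)    # pick up anywhere
treatment = rng.normal(loc=28.0, scale=8.0, size=5000)  # walk to a hot spot

# Two-sample Welch t-test: is the difference in mean loading time larger
# than we would expect from random assignment alone?
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

print(f"control mean:   {control.mean():.1f}s")
print(f"treatment mean: {treatment.mean():.1f}s")
print(f"p-value:        {p_value:.4f}")  # small p-value => unlikely due to chance
```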

Geofencing and Venues

Screenshot from the Lyft app showing our Valencia “Venue” between 17th and 18th Street. Passengers requesting a ride are re-directed to a dedicated pickup spot on a side street (depicted as a purple dot). During the pilot, we created these hot spots on Valencia Street between 16th St and 19th St.

Our pilot was built using a Lyft feature called “Venues”, a geospatial tool designed to recommend pre-set pickup locations to passengers. When a user tries to request a ride from an area that has been mapped with a Venue, they are unable to manually control the area in which they’d like to be picked up. Rather, the Venue feature automatically redirects them to a pre-established location. This forced geofencing feature helps ensure that passengers request rides from safe locations and builds reliability and predictability for both passengers and drivers as they find each other.

Given our understanding of ride activity and demand, we decided to create Venues on Valencia Street between 16th Street and 19th Street. We prioritized creating pickup zones along side streets in areas of lower traffic. Where possible, we tried to route pickups to existing loading zones; however, a major finding of the pilot was that existing curb space is insufficient and that the city needs more loading zones. To support better routing and reduce mid-block U-turns or other unsafe driving behavior, we tried to put pickup spots on side streets that allowed for both westbound and eastbound directionality.
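
For readers curious about the mechanics, here is a generic sketch of the underlying idea, not Lyft's Venues implementation: a ray-casting point-in-polygon test decides whether a requested pickup falls inside a geofence, and if so the request is snapped to the nearest pre-set pickup spot. The coordinates and spot names are illustrative, not real zone data.

```python
import math

# Hypothetical geofence polygon (lon, lat) roughly spanning Valencia St
# between 16th and 19th; coordinates are illustrative only.
VALENCIA_FENCE = [(-122.4224, 37.7648), (-122.4214, 37.7648),
                  (-122.4206, 37.7599), (-122.4216, 37.7599)]

# Pre-set pickup spots on side streets (name, lon, lat) -- also illustrative.
PICKUP_SPOTS = [("17th St spot", -122.4222, 37.7633),
                ("18th St spot", -122.4219, 37.7617)]

def inside_fence(lon: float, lat: float, polygon) -> bool:
    """Ray-casting point-in-polygon test."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

def resolve_pickup(lon: float, lat: float):
    """Return the requested point unchanged, or redirect to the nearest spot."""
    if not inside_fence(lon, lat, VALENCIA_FENCE):
        return ("requested location", lon, lat)
    return min(PICKUP_SPOTS, key=lambda s: math.hypot(s[1] - lon, s[2] - lat))

print(resolve_pickup(-122.4216, 37.7625))  # inside the fence -> redirected
print(resolve_pickup(-122.4300, 37.7700))  # outside -> left untouched
```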

Findings

Our pilot ran for three months, from March 2018 to June 2018. Although our initial research focused on rideshare activity during hours of peak demand (i.e. nights and weekends), to support our project goals of increasing overall safety along the corridor and to create an easy and intuitive experience for passengers, we ultimately decided to run the experiments 24/7.

The graphic below illustrates where passengers were standing when they requested a ride, and which hotspot they were redirected to. We found that the top hot spots were on 16th Street. This finding suggests the need for continued coordination with the City to make sure that the dedicated pickup spots to protect cyclists on Valencia Street don’t interrupt on-time performance for the 55–16th Street or 22–Fillmore Muni bus routes.

Loading Time

Loading time, the period when a driver has pulled over and is waiting for a passenger to get in or out of the car, was important for us to look at in terms of traffic flow. It is similar to the transportation planning metric of dwell time.

Currently, our metric for loading time looks at the time between when a driver arrives at the pickup location and when they press the “I have picked up my passenger” button. However, this is an imperfect measurement for dwell time, as drivers may press the button before the passenger gets in the vehicle. Based on our pilot, we have identified this as an area for further research.
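
A small pandas sketch of this measurement, using a hypothetical event log (Lyft's actual event schema is not public), computes the interval between the driver's arrival and the pickup confirmation for each ride:

```python
import pandas as pd

# Hypothetical ride event log; timestamps and IDs are made up.
events = pd.DataFrame({
    "ride_id":   [1, 1, 2, 2, 3, 3],
    "event":     ["arrived", "picked_up"] * 3,
    "timestamp": pd.to_datetime([
        "2018-04-01 20:01:00", "2018-04-01 20:01:27",
        "2018-04-01 20:05:10", "2018-04-01 20:05:32",
        "2018-04-01 20:09:45", "2018-04-01 20:10:14",
    ]),
})

# Loading time = time between "driver arrived" and "driver pressed
# 'I have picked up my passenger'" for each ride.
wide = events.pivot(index="ride_id", columns="event", values="timestamp")
loading_seconds = (wide["picked_up"] - wide["arrived"]).dt.total_seconds()

print(loading_seconds)         # per-ride loading times: 27s, 22s, 29s
print(loading_seconds.mean())  # average loading time across rides
```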

Going into our experiment, we expected to see a slight increase in loading time, as passengers would need to get used to walking to the pickup spot. This hypothesis was correct: during the pilot, we saw loading time increase from an average of 25 seconds per ride to 28 seconds. To help speed up the process of drivers and passengers finding each other, we recommend the addition of wayfinding and signage in popular loading areas.

We also wanted to understand the difference between pickups and drop-offs. Generally, we found that pickups have a longer loading time than drop-offs.

Post Pilot Recommendations

Ridesharing is one part of the puzzle to creating a more organized streetscape along the Valencia corridor, so sharing information and coordinating with city stakeholders was critical. After our experiment, we sat down with elected officials, project staff from the SFMTA, WalkSF, and the San Francisco Bicycle Coalition to discuss the pilot findings and collaborate on how our work could support other initiatives underway across the city. We are now formally engaged with the SFMTA’s Valencia Bikeway Improvement Project and look forward to continuing to support this initiative.

Given the findings of this pilot program and our commitment to creating sustainable streets (including our acquisition of the leading bikeshare company Motivate and introduction of bike and scooter sharing to the Lyft platform), we decided to move our project from a pilot to a permanent feature within the Lyft app. This means that currently, anyone requesting a ride on Valencia Street between 16th Street and 19th Street will be redirected to a pickup spot on a side street.

Based on the learnings of our pilot, we recommend the following:

  1. The city needs more loading zones to support increased demand for curbside loading.
  2. Valencia Street can best support all users of the road by building infrastructure like protected bike lanes that offer physical separation from motor vehicle traffic.
  3. Ridesharing is one of many competing uses for curb space. The City needs to take a comprehensive approach to curb space management.
  4. Geofencing alone does not solve a space allocation problem. Lyft’s digital solutions are best leveraged when the necessary infrastructure (i.e. loading zones) is in place. The digital and physical environments should reinforce each other.
  5. Wayfinding and signage can inform a user’s trip-making process before someone opens their app. Having clear and concise information that directs both passengers and riders can help ensure greater compliance.
  6. Collaboration is key. Keeping various stakeholders (public agencies, the private sector, community and advocacy groups, merchants associations, etc.) aware and engaged in ongoing initiatives can help create better outcomes.

Technology is Not a Silver Bullet

We know that ridesharing is just one of the many competing uses of Valencia Street, and that technology alone will not solve the challenges of pickups and drop-offs: adequate infrastructure like protected bike lanes and loading zones will be necessary to achieve Vision Zero.

Looking ahead, we know there’s much to be done on this front. To start with, we are excited to partner with civic engagement leaders like Streetmix, whose participatory tools ensure that public spaces and urban design support safe streets. By bringing infrastructure designs like parking-protected bike lanes or ridesharing loading zones into Streetmix, planners can begin to have the tools to engage community groups on what they’d like their streets to look like.

We’ve also begun partnering with Together for Safer Roads to support local bike and pedestrian advocacy groups and share Lyft performance data to help improve safety on some of the nation’s most dangerous street corridors. And finally, through our application to the SFMTA to become a permitted scooter operator in the City, we are committing $1 per day per scooter to support expansion of the City’s protected bike lane network. We know that this kind of infrastructure is critical to making safer streets for everyone.

Our work on Valencia Street is a continuation of our commitment to rebuild our transportation network and place people, not cars, at the center of our communities.

We know that this exciting work ahead cannot be done alone: we look forward to bringing this type of work to other cities around the country and to working together to achieve this vision.

Source : https://medium.com/@debsarctica/creating-a-safer-valencia-street-54c25a75b753

 
