Rethinking the Professional Services Organization Post-2020 – Constellation
In 2020, professional services organizations (PSOs) are contending with at least three different and substantial disruptions to their business, often alongside several secondary ones as well.
First, the arrival of COVID-19 earlier this year triggered lockdowns around the globe that have curtailed client demand and hampered project delivery. Those same lockdowns have largely confined project staff to their homes, slowing client projects and cutting into billable work. Finally, a deepening economic downturn is wreaking havoc on client budgets and on PSO firms’ own finances, leading to urgent calls for cost cutting and new efficiencies.
It’s the proverbial perfect storm of major challenges, and it is taking a dramatic toll on customer success and on the morale of the talent base in most PSOs. The result has been a widespread push within firms to rapidly rethink and update their operating models, determine the right mitigations, and respond effectively.
Except, as many have found, responding quickly and effectively to
these challenges is hard to do when so much uncontrolled change is
currently taking place.
In fact, many PSOs will be tempted to focus first on cost control to ensure short-term survival. While this is a natural response that offers immediate and tangible control over a once-in-a-lifetime event filled with uncertainty, organizations must also remain mindful of the vital characteristics of professional services firms. Overly enthusiastic responses to this year’s disruptions can adversely impact the organization’s strategic operating model over the long term.
Sustaining a Bridge to a Better Future
The risk lies in damaging the overarching characteristics of professional services firms, in particular the two stand-out characteristics that make them unique in the industry. One is the highly bespoke nature of the work they do, tailored to each client regardless of tools, services model, or data. The second is the special craft of cultivating successful long-term client relationships. Both of these characteristics require intensive and skilled delivery capabilities. Underpinning both is the highly leveraged human capital model that determines both revenue and profit for the PSO, and which must be finely tuned across the many layers of the organization.
This was a painful lesson learned across the PSO industry in the 2008-2009 financial crisis. Decisions made too expediently damaged firms long after the crisis had passed. Studies have shown that decisions to reduce talent or cut compensation and billable time hurt client relationships as well as brand image for many years. Conversely, the firms that weathered the short-term
pressure and managed to keep hard-to-replace human capital prospered as
the economy recovered. In this same vein, organizations with a clear
sense of the type of PSO that must emerge from the veil of 2020 — and
what it will take to thrive in the resulting market conditions — will be
in the best position to prosper.
Figure 1: The Pre-2020 Model of PSOs is Giving Way to a New Client/Talent Focused Model
Priorities First: Update the Firm to Reflect New Realities
It’s therefore paramount that any cost control discussion be viewed through the constructive lens of the organization’s business and talent strategy, along with its future operating model. To do that, the business must first recalibrate its core strategy and execution with today’s fresh realities in mind:
Clients are going to be much more selective and demanding about projects going forward
Deal flows and talent sourcing will be more turbulent until well after the pandemic passes
Delivery talent will require better enablement and support for their new daily work realities
Major opportunities now exist to create a more holistic and dynamic PSO operating model that costs less, loses less talent, and actually increases margin
Significant new types of business and growth opportunities have come within reach to offset recent revenue impacts
In other words, there are major prospects to do more than just survive through brute-force cost cutting. More elegant solutions present themselves if the PSO first engages in a rapid rethinking grounded in the current art-of-the-possible: in essence, a combined business and digital transformation. This transformation will generally consist of a combination of bold new ideas, better integration and consolidation of operational activities, and powerful new technology tools, including automation, holistic user experience upgrades, and potent new concepts from the realm of digital business.
The Evolution of the PSO Through Proactive, Targeted Transformation
As it turns out, the typical PSO has been experiencing quite a bit of change over the last couple of years anyway. Trends like more dynamic staffing approaches, better automation of delivery, and richer project analytics and diagnostics have all led to improved services, higher margins, and greater customer success. Often led initially by technology, and accompanied by a parallel re-imagining of PSO operating models around these new capabilities, a new type of PSO is emerging that is more agile, lean, digitally infused, and experience-centric.
Driven by technology innovations and changes in the wider world, below are the key shifts being seen in PSO organizations as a result of the events of 2020. These trends are grouped into three categories, focusing on the business and its clients, the worker, and the overall health and wellbeing of all PSO stakeholders.
The Business of PS: Trends
Predictive operations. Projects are now instrumented well enough, and sufficient historical project baseline data is now available, to routinely predict risks and anticipate opportunities before they actually happen (a minimal sketch of this idea follows at the end of this set of business trends). Acting on these insights can lead to significant cost savings, higher success rates, and quality improvements.
M&A support. A wave of mergers and
acquisitions will inevitably occur out of the events of 2020,
particularly of smaller PS firms. PSOs that have sophisticated
infrastructure and processes for managing the financial, operational,
and structural merging of clients will have an advantage.
Large portfolio management. Most PSOs are not taking advantage of the ability to manage large portfolios of projects across a client to maximize talent reuse, achieve economies of scale, and improve delivery.
Next-generation client engagement. As the industry becomes hypercompetitive, the time is right for a more engaging, sustained, informative, and transparent connection to the client using a combination of technology, user experience, and real-time data flows. This higher-quality delivery approach will result in increased project share within clients relative to other PSO firms.
New growth models. Most PSOs have readily accessible untapped growth opportunities which they can add to their existing portfolios to increase sales.
New business models. The time is ripe for PSOs to move laterally into adjacent business models such as subscriptions, IP licensing, strategic data services, and annual recurring revenue, which can provide vital new green fields for resilience and expansion.
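As referenced under Predictive operations above, here is a minimal, hypothetical sketch of what predicting project risk from historical baselines might look like. The feature names, figures, and model choice are illustrative assumptions, not a description of any particular PSO’s tooling.

```python
# Hypothetical sketch: flagging at-risk projects from historical baselines.
# All feature names and figures are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a completed project: [budget variance %, staff churn %,
# milestone slippage in days]; label 1 means the project overran.
X = np.array([
    [2.0,  5.0,  1.0],
    [15.0, 20.0, 12.0],
    [4.0,  8.0,  3.0],
    [22.0, 18.0, 20.0],
    [1.0,  3.0,  0.0],
    [18.0, 25.0, 15.0],
])
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# Score an in-flight project to surface risk before it materializes.
in_flight = np.array([[12.0, 15.0, 9.0]])
print(f"Estimated overrun risk: {model.predict_proba(in_flight)[0, 1]:.0%}")
```

In practice the value comes as much from the instrumentation as from the model: consistent baselines across projects are what make risks comparable and predictable.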
The PS Career: Trends
Smart recruitment. New models exist for recruiting
and project matching via AI, while talent screening and the
pre/onboarding process can be made more intelligent and automated. This
will drive bottom-line business benefits while also increasing
acquisition and retention.
Better talent experience. The top workers will
have expectations of a general return to the quality of work life they
had prior to 2020. PSOs that proactively deliver on this in remote work
scenarios while also uplevelling the overall worker experience will have
significant retention benefits.
Learning the future of PS. The existential changes
and new opportunities in the PSO world must be better communicated to
workers, so they can help realize as well as reap the benefits
enumerated here.
Hybrid talent sourcing. New dynamic staffing
models — aka the Gig Economy for professional services — will mix with
full-time employment to create much stronger teams that are also more
cost contoured while attracting new types of diverse talent.
Health and Wellbeing: Trends
Delivery team engagement. Creating enabling, more connected working environments in the new remote-work situation, particularly for delivery teams, is essential to preserve a connection to the “mothership” while also nurturing workers through tough and challenging times.
Wellbeing tracking. Tools and processes that track the physical, mental, and psychological health of PSO stakeholders — from project staff and back office to clients — and provide appropriate assistance when needed will be increasingly expected, and have already become a hallmark of best-in-class employers.
In summary, PSOs have a historic opportunity to pivot in response to the significant disruptions they have faced so far in 2020. By adopting an updated operating model and quickly delivering on it with clients and talent using new solutions, PSOs can avoid the most damaging types of cost cutting while positioning themselves for growth in 2021 and beyond. That is, as long as they are willing to think outside the box and adopt sensible yet far-reaching shifts in their strategies, tools, and operating models.
Which investments generate the greatest value in venture: Consumer or Enterprise? – Sapphire
A Dive into Enterprise vs Consumer Exit Activity
In today’s fast-paced market — where major funding or exit announcements seem to roll in daily — we at Sapphire Partners like to take a step back, ask big picture questions, and then find concrete data to answer them.
One of our favorite areas to explore
is: as a venture investor, do your odds of making better returns improve
if you only invest in either enterprise or consumer companies? Or do
you need a mix of both to maximize your returns? And how should recent
investment and exit trends influence your investing strategy, if at
all?
To help answer questions like these, we’ve collected and analyzed exit data for years. What we’ve found is pretty intriguing: portfolio
value creation in enterprise tech is often driven by a cohort of exits,
while value creation in consumer tech is generally driven by large,
individual exits.
In general, this trend has held for several years and has led to the general belief that if you are a consumer investor, the clear goal is to not miss that “one deal” with a huge spike in exit value creation (easier said than done, of course). And if you’re an enterprise investor, you want to create a “basket of exits” in your portfolio.
What Creates More Portfolio Value: Consumer or Enterprise?
2019 has been a powerhouse year for
consumer exit value, buoyed by Uber and Lyft’s IPOs (their recent
declines in stock price notwithstanding). The first three quarters of
2019 alone surpassed every year since 1995 for consumer exit value – and
we’re not done yet. If the consumer exit pace continues at this scale,
we will be on track for the most value created at exit in 25 years,
according to our analysis.
Source: S&P Capital IQ, Pitchbook
Since 1995, the number of enterprise
exits has consistently outpaced consumer exits (blue line versus green
line above), but 2019 is the closest to seeing those lines converge in
over two decades (223 enterprise vs 208 consumer exits in the first
three quarters of 2019). Notably, in five of the past nine years, the
value generated by consumer exits has exceeded enterprise exits.[1]
At Sapphire, we observe the following:
Venture-backed enterprise tech companies have generated $884B in value since 1995; $349B from M&A and $535B from IPOs.
Venture-backed consumer tech companies have generated $773B in value since 1995; $153B from M&A and $620B from IPOs.
In total, there were 5,600+ venture-backed exits in enterprise tech and 3,300+ exits in consumer tech.
While the valuation at IPO serves as a proxy for an exit for venture investors, most investors face the lockup period.[2] 2019 has generated a tremendous amount of value through IPOs, roughly $223 billion. However, after trading in the public markets, the aggregate value of those IPOs has decreased by $81 billion as of November 1, 2019.[3] This decrease is driven by Uber and Lyft on an absolute value basis, which together account for roughly 66% of this markdown over the same period, according to our figures. Over half of the IPO exits in 2019 have been consumer, and despite these stock price changes, consumer exits are still outperforming enterprise exits YTD given the enormous alpha they generated initially.
As we noted in the introduction, historical data since 1995 shows that years of high value creation in enterprise technology are often driven by a cohort of exits, whereas consumer value creation is often driven by large, individual exits. The chart below illustrates this, showing a side-by-side comparison of exits and value creation.
Source: Pitchbook
At Sapphire, we observe the following:
The top
five enterprise companies with the largest exits account for $79B in
value creation, or 9% of the $884B generated in the enterprise category
since 1995.
The top
five consumer companies with largest exits account for $276B in value
creation, or 36% of the $773B generated in the consumer category since
1995.
The value generated by the top five consumer companies is roughly 3.5x that of the top five enterprise companies.
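These concentration figures are internally consistent, as a quick back-of-the-envelope check shows:

```python
# Sanity check of the top-five concentration figures cited above ($B).
top5_enterprise, total_enterprise = 79, 884
top5_consumer, total_consumer = 276, 773

print(f"Enterprise top-5 share: {top5_enterprise / total_enterprise:.0%}")       # ~9%
print(f"Consumer top-5 share: {top5_consumer / total_consumer:.0%}")             # ~36%
print(f"Consumer vs. enterprise top-5: {top5_consumer / top5_enterprise:.1f}x")  # ~3.5x
```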
Understanding the Consumer Comeback
While the total value of enterprise companies exited since 1995 ($884B) exceeds that of consumer exits ($773B), over the last 15 years consumer returns have been making a comeback. Specifically, total consumer value exited since 2004 ($538B) exceeds that of enterprise exits ($536B). This difference has become more stark in the past 10 years, with total consumer value exited ($512B) surpassing that of enterprise ($440B). As seen in the chart below, the rolling 10-year total enterprise exit value exceeded that of consumer until the decade spanning 2003-2012, when consumer exit value took the lead.
Source: S&P Capital IQ, Pitchbook
We believe the size of, and the inevitable hype around, consumer IPOs has the potential to cloud investor judgment, since the volume of successful deals is not increasing. The data clearly shows the surge in outsized returns comes from the outliers in consumer.
As exhibited below, large consumer outliers since 2011 such as Facebook, Uber, and Snap often account for more value than the sum of all enterprise exits in a given year. For example, in the first three quarters of 2019, there were 15 enterprise exits valued at over $1B, for a total of $96B. In the same period, there were nine consumer exits valued at over $1B, for a total of $139B. Anecdotally, this can be seen in four of the past five years being headlined by a consumer exit. While 2016 was headlined by an enterprise exit, it was a particularly quiet exit year.
2015 – Consumer: Fitbit ($6B)
2016 – Enterprise: Nutanix ($5B)
2017 – Consumer: Snap ($27B)
2018 – Consumer: Dropbox ($11B)
First 3 quarters of 2019 – Consumer: Uber ($85B)
Source: S&P Capital IQ, Pitchbook
Enterprise Deals Still Rule in M&A
While consumer deals have taken the lead in IPO value in recent years, on the M&A front enterprise still has the clear edge. Since 1995 there have been 76 M&A exits of $1 billion or more in value, of which 49 are enterprise companies and 27 are consumer companies. The vast majority of M&A value since 1995 has come from enterprise companies — more than 2x that of consumer.
Similar to the IPO chart above,
acquisition value of enterprise companies outpaced that of consumer
companies until recently, with 2010-2014 being the exception.
Source: S&P Capital IQ, Pitchbook
Of course, looking only at outcomes of $1 billion or more in value covers only a small fraction of VC exits. Slightly less than half of all exits in both enterprise and consumer are $50 million or under in size, and more than 70 percent of all exits are under $200 million. Moreover, the distribution chart below captures only the companies for which we have exit values. If we change the denominator to all exits captured in our database (i.e., measure the percentage of $1 billion-plus exits against a higher denominator), $1 billion-plus outcomes drop to around 3 percent of all outcomes for both enterprise and consumer.
Source: S&P Capital IQ, Pitchbook
What Does All of this Mean for Venture Investors?
There’s an enormous volume of
information available on startup exits, and at Sapphire Partners, we
ground our analyses and theses in the numbers. At the same time, once
we’ve dug into the details, it’s equally important to zoom out and think
about what our findings mean for our GPs and fellow LPs. Here are some
clear takeaways from our perspective:
Consumer exits have surpassed enterprise over the past 15 years.
Consumer exit value is highly concentrated in the top deals.
There are more billion-dollar enterprise exits than billion-dollar consumer exits, so you may have more opportunities for a unicorn enterprise outcome than a consumer one.
However, if you happen to invest in one of the outlier consumer exits, you could experience significant returns.
In a nutshell, as LPs we like to see both consumer and enterprise deals in our underlying portfolios, as they each provide different exposures and return profiles. However, when these investments get rolled up as part of a venture fund’s portfolio, success is often contingent on the fund’s overall portfolio construction… but that’s a question to explore in another post.
NOTE: Total Enterprise Value (“TEV”) presented throughout this analysis uses information from CapIQ when available, supplemented with Pitchbook last-round valuation estimates when CapIQ TEV is not available. TEV (Market Capitalization + Total Debt + Total Preferred Equity + Minority Interest – Cash & Short Term Investments) is as of the close price on the initial date of trading. The classification of “Enterprise” and “Consumer” companies presented herein is internally assigned by Sapphire. Company logos shown in various charts reflect the top four companies of any particular time period that had a TEV of $1BN or greater at the time of IPO, with the exception of the chart titled “Exits by Year, 1995 - Q3 2019”, where logos reflect the top four companies of any particular year that had a TEV of $7.5BN or greater at the time of IPO. For any time period in which fewer than four companies had such exits, logos are shown only for the companies meeting the described parameters. “Since 1995” refers to the period 1/1/1995 – 9/30/2019 throughout this article.
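For readers who want to reproduce the figures, the TEV definition in the note above reduces to simple arithmetic; the numbers below are illustrative placeholders, not data for any real company:

```python
# TEV as defined in the note above, measured at the first-day close.
# All inputs are invented placeholders for illustration.
def total_enterprise_value(market_cap, total_debt, preferred_equity,
                           minority_interest, cash_and_st_investments):
    return (market_cap + total_debt + preferred_equity
            + minority_interest - cash_and_st_investments)

tev = total_enterprise_value(
    market_cap=8_000_000_000,
    total_debt=1_200_000_000,
    preferred_equity=0,
    minority_interest=50_000_000,
    cash_and_st_investments=900_000_000,
)
print(f"TEV: ${tev / 1e9:.2f}B")  # TEV: $8.35B
```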
[1] Includes the first three quarters of 2019. IPO exit values refer to the total enterprise value of a company at the end of the first day of trading according to S&P Capital IQ. Analysis considers a combination of Pitchbook and S&P Capital IQ to analyze US venture-backed companies that exited through acquisition or IPO between 1/1/1995 – 9/30/2019.
[2] Lockup period is a predetermined amount of time following an initial public offering (“IPO”) during which large shareholders, such as company executives and investors representing considerable ownership, are restricted from selling their shares.
[3] Total enterprise value at the end of 10/15/2019 according to S&P Capital IQ.
Behind the scenes: Data and technology bring food product R&D into the 21st century – Food Dive
With CPG companies under pressure to develop items faster and stretch their spending, Conagra, Mars Wrigley and Ferrara are rethinking the decades-old way of creating new things for consumers.
There was little doubt four years ago that Conagra Brands’ frozen portfolio was full of iconic items that had grown tired and, according to its then-new CEO Sean Connolly, were “trapped in time.”
While products such as Healthy Choice — with its heart-healthy message — and Banquet — popular for its $2 turkey and gravy and Salisbury steak entrees — were still generating revenue, the products looked much the same as decades before. The result: sales sharply fell as consumers turned to trendier flavors and better-for-you options.
Executives realized the decades-old process used to create and test products wasn’t translating into meaningful sales. Simply introducing new flavors or boosting advertising was no longer enough to entice consumers to buy. If Conagra maintained the status quo, the CPG giant risked exacerbating the slide and putting its portfolio of brands further behind the competition.
“We were doing all this work into what I would call validation insights, and things weren’t working,” Bob Nolan, senior vice president of demand sciences at Conagra, told Food Dive. “How could it not work if we asked consumers what they wanted, and we made it, and then it didn’t sell? …That’s when the journey started. Is there a different way to approach this?”
Credit: Conagra
Nolan and other officials at Conagra eventually decided to abandon traditional product testing and market research in favor of buying huge quantities of behavioral data. Executives were convinced the data could do a better job of predicting eventual product success than consumers sitting in an artificial setting offering feedback.
Conagra now spends about $15 million less on testing products than it did three years ago, with much of the money now going toward buying data on food service, natural products, consumption at home, grocery retail and loyalty cards. When Nolan started working at Conagra in 2012, he estimated 90% of his budget at the company was spent on traditional validation research such as testing potential products, TV advertisements or marketing campaigns. Today, money spent on those methods has been cut to zero.
While most food and beverage companies have not changed how they go about testing their products as much as Conagra, CPG businesses throughout the industry are collectively making meaningful changes to their own processes.
With more data available now than ever before, companies can change their testing protocols to answer questions they might previously not have had the budget or time to address. They’re also turning to technology such as videos and smartphones to immediately engage with consumers or to see firsthand how they would respond to prototype products in real-life settings, like their own homes.
As food manufacturers scramble to remain competitive and meet the shopper’s insatiable demand for new tastes and experiences, changing how they go about testing can increase the likelihood that a product succeeds — enabling corporations to reap more revenue and avoid being one of the tens of thousands of products that fail every year.
For Conagra, the new approach is already paying off. One success story came in the development of the company’s frozen Healthy Choice Korean-Inspired Beef Power Bowl. By combing through data collected from the natural food channel and specialty stores like Whole Foods and Sprouts Farmers Market, the CPG giant found people were eating more of their food in bowls — a contrast to offerings in trays.
“How could it not work if we asked consumers what they wanted, and we made it, and then it didn’t sell? …That’s when the journey started. Is there a different way to approach this?”
Bob Nolan
Senior vice president of demand sciences, Conagra
At the same time, information gathered from restaurants showed Korean was the fastest-growing cuisine. The data also indicated the most popular flavors within that ethnic category. Nolan said without the data it would have been hard to instill confidence at Conagra that marketing a product like that would work, and executives would have been more likely to focus on flavors the company was already familiar with.
Since then, Conagra has rebranded Healthy Choice around cleaner-label foods with recognizable, modern ingredients that were incorporated into innovations such as the Power Bowl. The overhaul helped rejuvenate the 34-year-old brand, with sales jumping 20% during the last three years after declining about 10% during the prior decade, according to the company.
Conagra has experienced similar success by innovating its other frozen brands, including Banquet and Marie Callender’s. For a company whose frozen sales total $5.1 billion annually, the segment is an important barometer of success at Conagra.
A decades-old approach
For years, food companies would come up with product ideas using market research approaches that dated back to the 1950s. Executives would sit in a room and mull over ways to grow a brand. They would develop prototypes before testing and retesting a few of them to find the one that would have the best chance of resonating with consumers. Data used was largely cultivated through surveys or focus groups to support or debunk a company idea.
“It’s an old industry and innovation has been talked about before, but it’s never been practiced, and I think now it’s starting to get very serious because CPG companies are under a lot of pressure to innovate and get to market faster,” Sean Bisceglia, CEO of Curion, told Food Dive. “I really fear the ones that aren’t embracing it and practicing it … may damage their brand and eventually damage their sales.”
Credit: Curion
Information on nearly every facet of a consumer’s shopping habits and preferences can be easily obtained. There is data showing how often people shop and where they go. Tens of millions of loyalty cards reveal which items were purchased at what store, and even the checkout lane the person was in. Data is available on a broader level showing how products are selling, but CPGs can drill down on an even more granular level to determine the growth rate of non-GMO or organic, or even how a specific ingredient like turmeric is performing.
Market research firms such as Nielsen and Mintel collect reams of valuable data, including when people eat, where and how they consume their food, how much time they spend eating it and even how it was prepared, such as by using a microwave, oven or blender.
To help its customers who want fast results for a fraction of the cost, Bisceglia said Curion has created a platform in which a product can be tried out among a random population group — as opposed to a specifically targeted audience made up of specific attributes, like stay-at-home moms in their 30s with two kids — with the data given to the client without the traditional in-depth analysis. It can cost a few thousand dollars with results available in a few days, compared to a far more complicated and robust testing process over several months that can sometimes cost hundreds of thousands of dollars, he said.
Curion, which has tested an estimated 8,000 products on 700,000 people during the last decade, is creating a database that could allow companies to avoid testing altogether.
For example, a business creating a mango-flavored yogurt could initially use data collected by a market research firm or someone else showing how the variety performed nationwide or by region. Then, as product development is in full swing, the company could use Curion’s information to show how mango yogurt performed with certain ages, income levels and ethnicities, or even how certain formulations or strength of mango flavor are received by consumers.
“What’s actually going to be able to predict if someone is going to buy something, and are they going to buy it again and again and again? You have to get smart on what is the payoff at the end of all of the data. And just figure out what the key measures are that you need and stop collecting, if you can, all this other ancillary stuff.”
Lori Rothman
Owner, Lori Rothman Consulting
Lori Rothman, who runs her own consulting firm advising companies on their product testing, worked much of the last 30 years at companies including Kraft and Kellogg, determining the most effective way to test a product and then designing the corresponding trial. She used to have days or weeks to review data and consumer comments before plotting out the best way to move forward, she said.
In today’s marketplace, there is sometimes pressure to deliver within a day or even immediately. Some companies are even reacting in real time as information comes in — a precedent Rothman warned can be dangerous because of the growing amount of data available and the inherent complexity in understanding it.
“It’s continuing toward more data. It’s just going to get more and more and we just have to get better at knowing what to do with it, and how to use it, and what’s actually important. What’s actually going to be able to predict if someone is going to buy something, and are they going to buy it again and again and again?” Rothman said. “You have to get smart on what is the payoff at the end of all of the data. And just figure out what the key measures are that you need and stop collecting, if you can, all this other ancillary stuff.”
Sweet relief
Ferrara Candy, the maker of SweeTarts, Nerds and Brach’s, estimates it considers more than 100 product ideas each year. An average of five typically make it to market.
To help whittle down the list, the candy company owned by Nutella-maker Ferrero conducts an array of tests with consumers, nearly all of them done without the customary focus group or in-person interview.
Daniel Hunt, director of insights and analytics for Ferrara, told Food Dive that rather than working with outside vendors to conduct research, as the company would have a decade ago, it now handles the majority of testing itself.
In the past, the company might have spent $20,000 to run a major test. It would have paid a market research firm to write an initial set of questions to ask consumers, then refine them, run the test and then analyze the information collected.
Today, Hunt said Ferrara’s own product development team, most of whom have a research background, does most of the work creating new surveys or modifying previously used ones — all for a fraction of the cost. And what might have taken a few months to carry out in the past can sometimes be completed in as little as a few weeks.
Credit: Ferrara
“Now when we launch a new product, it’s not much of a surprise what it does, and how it performs, and where it does well, and where it does poorly. I think a lot of that stuff you’ve researched to the point where you know it pretty well,” Hunt told Food Dive. “Understanding what is going to happen to a product is more important — and really understanding that early in the cycle, being able to identify what are the big potential items two years ahead of launching it, so you can put your focus really where it’s most important.”
Increasingly, technology is playing a bigger part in enabling companies such as Ferrara not only to do more of their own testing, but also to choose from more options for how best to carry it out.
Data can be collected from message boards, chat rooms and online communities popular with millennials and Gen Zers. But technology does have its limits. Ferrara aims to keep the time commitment for its online surveys to fewer than seven minutes because Hunt said the quality of responses tends to diminish for longer ones, especially among people who do them on their smartphones.
Other research can be far more rigorous, depending on how the company plans to use the information.
“I don’t think that (testing is) going away or becoming less prevalent, but certainly the way that we’re testing things from a product standpoint is changing and evolving. If anything, we’re doing more testing and research than before, but maybe just in a slightly different way than we did in the past.”
Daniel Hunt
Director of insights and analytics, Ferrara
Last summer, Ferrara created an online community of 20 people to help it develop a chewy option for its SweeTarts brand. As part of a three-week program, participants submitted videos showing them opening boxes of candies with different sizes, shapes, flavors, tastes and textures sent to them by Ferrara. Some of the products were its own candies, while others came from competitors such as Mars Wrigley’s Skittles or Starburst. Ferrara wanted to watch each individual’s reaction as he or she tried the products.
Participants were asked what they liked or disliked, or where there were market opportunities for chewy candy, to help Ferrara better hone its product development. These consumers were also asked to design their own products.
Ferrara also had people either video record themselves shopping or write down their experience. This helped researchers get a feel for everything from when people make decisions that are impulsive or more thought out, to what would make a shopper decide not to purchase a product. As people provided feedback, Ferrara could immediately engage with them to expound on their responses.
“All of those things have really helped us get information that is more useful and helpful,” Hunt said. “I don’t think that (testing is) going away or becoming less prevalent, but certainly the way that we’re testing things from a product standpoint is changing and evolving. If anything, we’re doing more testing and research than before, but maybe just in a slightly different way than we did in the past.”
Convincing people to change
Getting people to change isn’t easy. To help execute on its vision, Conagra spent four years overhauling the way it went about developing and testing products — a lengthy process in which one of the biggest challenges was convincing employees used to doing things a certain way for much of their career to embrace a different way of thinking.
Conagra brought in data scientists and researchers to provide evidence to show how brands grow and what consumer behavior was connected to that increase. Nolan’s team had senior management participate in training courses “so people realize this isn’t just a fly-by-night” idea, but one based on science.
The CPG giant assembled a team of more than 50 individuals — many of whom had not worked with food before — to parse the complex data and find trends. This marked a dramatic new way of thinking, Nolan said.
While people with food and market research backgrounds would have been picked to fill these roles in the past, Conagra knew it would be hard to retrain them in the company’s new way of thinking. Instead, it turned to individuals who had experience in data technology, hospitality and food service, even if it took them time to get up to speed on Conagra-specific information, like the brands in its portfolio or how they were manufactured.
Conagra’s reach extended further outside its own doors, too. The company now occasionally works with professors at the University of Chicago, just 8 miles south of its headquarters, to help assess whether it is properly interpreting how people will behave.
“In the past, we were just like everybody else,” Nolan said. “There are just so many principles that we have thrown out that it is hard for people to adjust.”
Mars Wrigley has taken a different approach, maintaining the customary consumer testing while incorporating new tools, technology and ways of thinking that weren’t available or accepted even a few years ago.
“I have not seen a technology that replicates people actually trying the product and getting their honest reaction to it. At the end of the day, this is food.”
Lisa Saxon Reed
Director of global sensory, Mars Wrigley
Lisa Saxon Reed, director of global sensory at Mars Wrigley, told Food Dive the sweets maker was recently working to create packaging for its Extra mega-pack with 35 pieces of gum, improving upon a version developed for its Orbit brand years before. This time around, the company — which developed more than 30 prototypes — found customers wanted a recyclable plastic container they believed would keep the unchewed gum fresh.
Shoppers also wanted to feel and hear the packaging close securely, with an auditory “click.” Saxon Reed, who was not involved with the earlier form of the package, speculated it didn’t resonate with consumers because it was made of paperboard, throwing into question freshness and whether the package would survive as long as the gum did.
The new packaging, which hit shelves in 2016 after about a year of development, has been a success, becoming the top selling gum product at Walmart within 12 months of its launch, according to Saxon Reed. Mars Wrigley also incorporated the same packaging design for a mega pack of its 5 gum brand because it was so successful.
“If we would not have made a range of packaging prototypes and had people use them in front of us, we would have absolutely missed the importance of these sensory cues and we would have potentially failed again in the marketplace,” Saxon Reed said. “If I would have done that online, I’m not sure how I would have heard those clues. …I don’t think those would have come up and we would have missed an opportunity to win.”
The new approach extends to the product itself, too. Saxon Reed said Mars Wrigley was looking to expand its Extra gum line into a cube shape in fall 2017. Early in the process, Mars Wrigley asked consumers to compile an online diary with words, pictures and collages showing how they defined refreshment. The company wanted to customize the new offering to U.S. consumers, and not just import the cube-shaped variety already in China.
Credit: Mars Wrigley
After Mars Wrigley noticed people using the color blue or drawing waterfalls, showers or water to illustrate a feeling of refreshment, product developers went about incorporating those attributes into its new Extra Refreshers line through color, flavor or characteristics that feel cool or fresh to the mouth. They later tested the product on consumers who liked gum, including through the age-old testing process where people were given multiple samples to try and asked which they preferred.
Extra Refreshers hit shelves earlier this year and is “off to a strong start,” Saxon Reed said.
“I don’t see it as an ‘either-or’ when it comes to technology and product testing. I really see it as a ‘yes-and,’ ” she said. “How can technology really help us better understand the reactions that we are getting? But at this point, I have not seen a technology that replicates people actually trying the product and getting their honest reaction to it. At the end of the day, this is food.”
Regardless of what process large food and beverage companies use, how much money and time they spend testing out their products, or even how heavily involved consumers are, CPG companies and product testing firms agreed that an item’s success is heavily defined by one thing that hasn’t and probably never will change: taste.
“Everybody can sell something once in beautiful packaging with all the data, but if it tastes terrible it’s not going to sell again,” Bisceglia said.
Profitability Challenge for Challenger Banks – Fincog
The rise of challenger banks
Over the past years, we have witnessed a steady rise of challenger banks, or neobanks. These newly established retail and SME banks are challenging the established banks with modern banking propositions tailored to the digital world. In the aftermath of the financial crisis, many were founded with the vision of creating a better and fairer banking experience for customers.
Starting from scratch, they have collectively managed to secure their position in the market and make a sizable impact. Our database of over 150 challenger banks worldwide currently counts a collective base of over 200 million customers, and it is still growing every month. Similarly, our Fincog Challenger Bank Index has grown almost 8x since 2015, representing growth of 55% per year.
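Those two index figures are consistent with each other, as a quick check shows; the roughly 4.75-year span from early 2015 to the time of writing is our assumption:

```python
# Sanity check: an index at ~8x its 2015 level implies ~55% annual growth.
# The ~4.75-year span (early 2015 to late 2019) is an assumption.
growth_multiple = 8
years = 4.75
cagr = growth_multiple ** (1 / years) - 1
print(f"Implied compound annual growth: {cagr:.0%}")  # ~55%
```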
One of the biggest success stories is Revolut from the UK. The company was founded in June 2013 and launched in July 2015 with foreign exchange services. Over time, it gradually expanded its offering to include, among other things, current accounts and cryptocurrency trading. It now boasts a client base of 7 million customers.
Another success story is the Brazilian bank Nubank. It was founded in 2013 with the vision of bringing simple and efficient financial services to Brazilian consumers, freeing them from high fees and unnecessary complexity. Nubank offers retail customers a free current account with a credit card and personal loans, combined with innovative financial management features. Since its initial launch in 2014, it has reached over 12 million customers in Brazil and is currently valued at $10 billion.
These success stories do not stand on their own. Challengers have appeared all over the world: for example Chime and Acorns from the USA, Toss and kakaobank from South Korea, Judo Bank from Australia and WeBank from China, amongst others.
These challengers share some important commonalities. First, they have a strong focus on the digital world and deliver advanced mobile apps with modern banking features – often exclusively available through a mobile app. Not only the front-end but also the back-end is largely automated, with minimal human interaction.
Second, they offer a great customer experience. The account opening process is simple and quick, daily banking services are easy to use and intuitive, and pricing is transparent. In addition, many offer financial management services (i.e. financial overview, savings tools) and seamless payments (i.e. instant P2P payments, mobile payments). Neobanks tend to focus on a specific customer segment or product, offering a better solution in areas that are underserved or overpriced by incumbent players. Monese, for example, enables migrant workers to easily open a bank account without the need for a postal address, which they may lack.
Third, they typically offer very competitive pricing to compete with established players. For example, many offer a free payment account, free or low cost international money transfers and travel money, and top rates on lending and deposits.
As opposed to incumbent players, neobanks are not hindered by legacy IT systems, large organizations, or physical distribution networks. Neither are they subject to the same regulatory requirements, as they often only provide a subset of banking services or operate under a different license (instead of a full banking license). In addition, they bring a fresh view and a new culture to banking, while focusing on the customer experience.
Collectively, the neobanks are making a permanent impact on the market, driving innovation and competition, setting the benchmark for incumbent players.
Low levels of income and profitability
While these challengers are successful in attracting large numbers of customers, many of them have yet to turn a profit. Moreover, the larger they grow, the larger the losses.
We have performed a benchmark on a selection of leading challengers internationally that are centered around payments (see infographic below). Over the years, they collectively secured a customer base of over 28 million Retail & Business customers. With a combined total funding of USD 2.9 billion, they are valued at USD 17.8 billion.
We have benchmarked them on their profile, propositions, pricing and financial results. What we observe is that all have negative profitability, losing money on every customer they serve. Monzo sits at the bottom of the ranking with a total net loss of USD 58 mln (GBP 47.2 mln) in YE Feb-19, equivalent to a loss of USD 18.71 per customer. And as Monzo grows, the losses only increase; the net loss rose from USD 37.6 mln (GBP 30.5 mln) in 2018, a rise of 54% YoY. Monzo is not alone; Revolut, N26 and the others also saw dramatic rises in their losses.
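The Monzo figures above can be cross-checked against each other; a small sketch using only the numbers cited in this post:

```python
# Cross-checking the cited Monzo figures (USD).
net_loss_ye_feb19 = 58_000_000
loss_per_customer = 18.71
net_loss_2018 = 37_600_000

implied_customers = net_loss_ye_feb19 / loss_per_customer
yoy_rise = net_loss_ye_feb19 / net_loss_2018 - 1
print(f"Implied customer base: ~{implied_customers / 1e6:.1f} mln")  # ~3.1 mln
print(f"YoY rise in net loss: {yoy_rise:.0%}")                       # ~54%
```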
In terms of losses per customer, Nubank seems to be the closest to break-even, with a loss of (only) USD 2 per customer. The company has the highest number of customers in our sample, the most funding and highest valuation. Nubank’s income predominantly originates from credit cards, in which it secured its position thanks to a competitive interest rate, in combination with beneficial economic circumstances that drove customers to use credit cards as an alternative for consumer finance.
The large losses are predominantly driven by the banks’ low level of income. In our benchmark we measure total income net of the cost of sales, subtracting, for example, interest expense and commission expense from total income. Mogo and Bunq achieve the highest income per customer, at USD 32.38 and USD 19.72 respectively. The income of the others is lower, and for some even negative.
The low level of income can be explained by various factors. First of all, pricing is generally very competitive, with thin margins. Second, the challengers all offer only a subset of banking services, which limits their revenue potential. Third, while they generally have a large customer base, a relatively large share of it is inactive and too often uses the neobank as a secondary account.
The benchmark shows some important differences with incumbent players. Lloyds Banking Group (LBG), for example, is one of the largest UK banks, serving over 30 million Retail and Business customers in the country with a full range of financial services through a combination of digital and physical channels. Performing the same benchmark, it does have a much larger cost base, with operational cost of $335 per customer. However, with an income of $728 per customer, it achieves a profit of $180 per customer. Among other factors, this is driven by a broader product portfolio and larger customer balances (i.e. around $18,200 in loans and $17,050 in deposits per customer).
This shows that the challengers still have a long way to go to deepen the customer relationship, to grow the revenue per customer and achieve profitability.
Some fundamental obstacles, but long-term success feasible
There are various reasons for the low profitability of the neobanks. First of all, many of the challengers are focused on customer growth over profitability. Similar to the strategy of earlier Tech Giants, this approach assumes they will find a way to capitalize on the large customer base later on.
For example, N26 caught lots of attention after its latest funding round in July 2019 when co-founder Maximilian Tayenthal stated that profitability was not one of their core metrics. Tayenthal said: “We want to build a global financial services company… In the years to come we won’t see profitability, we’re not aiming to reach profitability. The good news is we have a lot of investors that have very deep pockets and that share our deep vision.”
Two years ago, Monzo CEO Tom Blomfield shared a similar view: “The more you grow, the more you lose and you have to turn that corner at some point… Getting to profitability is not a goal we are prioritising over delivering customers real value. If that takes 10 years, we are committed to it.”
It is true that most challengers are still at a relatively early stage and operating at subscale. They require large initial investments to build the company and to market themselves to attract customers. Once they have established their foundation with sufficient economies of scale, they should be better positioned to become profitable.
What makes this more difficult to achieve is that the challengers must compete with existing banking infrastructure and banking relationships. Churn rates in banking are rather low, typically around 2-5% per year depending on the market. Many challengers struggle to secure the primary customer relationship, which is the stickiest and most profitable one, and offers the best opportunities for cross-sell. Instead, customers more often use the neobanks as secondary accounts for specific services or features.
This requires the challengers to offer customers large incentives to switch banks, whether better service or better pricing. And in fact that is the strategy of most challengers, who offer very competitive pricing, with free payment accounts, low-cost international transfers and so on. Moreover, charging customers for certain services is considered unfair, seen as ‘ripping off customers’. This leaves many of them with tiny – or even negative – margins on their services.
Last, most challengers still offer a rather limited product portfolio, typically centered around the payment account only. Especially when the core services are offered for free, this provides few options to generate substantial income. Many of the UK challengers, for instance, have largely relied on interchange fees on card payments, but this now seems to be an insufficient source of income on its own. This contrasts with most incumbent players, which offer a full range of banking services and can benefit from cross-sell opportunities to achieve much higher revenue per customer.
Overall, the less-active customer base combined with a limited product portfolio at lower margins, leaves many challengers with rather low income per customer and often negative profitability.
This may spark the question of whether these challengers are able to survive in the long run, achieve sufficient scale and become profitable. Surely not all will survive; in fact, we have already witnessed the end of various players, such as Hufsy (Denmark), which recently ceased operations.
However, we believe that with the right strategic choices challenger banks should be able to enhance their profitability, achieving a sustainable business model with a lasting positive impact on customers. In our next blog we will share some insights on challengers that are profitable, what we can learn from them and how to improve your own profitability.
21 innovative growth strategies used by top growth teams – Appcues
A growth strategy isn’t just a set of functions you plug in to your business to boost your product’s growth—it’s also the way in which you organize and rally as a team.
If growth is “more of a mindset than a toolkit,” as Ryan Holiday said, then it’s a collective mindset.
Successful growth strategies are the product of engineering, marketing, leadership, design, and product management. Whether your team consists of 2 co-founders or a skyscraper full of employees, your growth hacking strategies will only be effective if you’re able to affix them to your organization, apply a workflow, and use the results of experiments to make intelligent decisions.
In short, there’s no plugin for growth. To increase your product’s user base and activation rate, your company will need to be methodical and tailor the strategies you read about to your unique product, problem, and target audience.
What is a growth strategy?
Before we dive into specific examples of growth strategies, let’s take a moment to establish a proper growth strategy definition:
A growth strategy is a plan of action that allows you to achieve a higher level of market share than you currently have. Contrary to popular belief, a growth strategy is not necessarily focused on short-term earnings—growth strategies can be long-term, too. Let’s keep that in mind with the following examples.
Another thing to keep in mind is that there are typically 4 types of strategies that roll up into a growth strategy. You might use one or all of the following:
Product development strategy—growing your market share by developing new products to serve that market. These new products should either solve a new problem or extend the solution to a problem your product already solves.
Market development strategy—growing your market share by developing new segments of the market, expanding your user base, or expanding your current users’ usage of your product.
Market penetration strategy—growing your market share by bundling products, lowering prices, and advertising—basically everything you can do through marketing after your product is created. This strategy is often confused with market development strategy.
Diversification strategy—growing your market share by entering entirely new markets.
Below, we’ll explore 21 growth strategy examples from teams that have achieved massive growth in their companies. Many examples use one or more of the 4 classic growth strategies, but others are outside of the box. These out-of-the-box approaches are often called “growth hacking strategies”.
Growth strategy examples
Each of these examples should be understood in the context of the company where they were executed. While you can’t copy and paste their success onto your own unique product, there’s a lesson to be learned and leveraged from each one.
Now let’s get to it!
1. How Clearbit drove 100k inbound leads by giving away free tools
Clearbit‘s APIs allow you to do amazing things—like enrich trial sign-ups on your homepage—but to use them effectively, you need a developer’s touch. Clearbit needed to get developers to try their tool in order to grow. Their strategy involved dedicating their own developer time to creating free tools, APIs, and browser extensions that would give other developers a chance to play.
They experimented with creating free APIs for very specific purposes. One of the most successful was their free Logo API, which allowed companies to quickly imprint their brand stamp onto pages of their website. Clearbit launched the API on ProductHunt and spread the word to their developer communities and email list—within a week, the Logo API had received 60,000 views and word-of-mouth traction had grown rapidly.
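As a rough illustration of why the Logo API spread so quickly, here is a minimal sketch of how a developer might pull a logo by domain. It assumes the publicly known https://logo.clearbit.com/{domain} endpoint pattern and is not taken from Clearbit’s own docs:

```python
# Minimal sketch: fetch a company's logo by domain and save it locally.
# Assumes the https://logo.clearbit.com/{domain} endpoint pattern.
import requests

def fetch_logo(domain: str, out_path: str) -> None:
    response = requests.get(f"https://logo.clearbit.com/{domain}", timeout=10)
    response.raise_for_status()  # surfaces errors for unknown domains
    with open(out_path, "wb") as f:
        f.write(response.content)

fetch_logo("segment.com", "segment_logo.png")
```

A few lines of code in exchange for a polished visual is exactly the kind of bite-sized value exchange described below.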
Clearbit made a bite-sized version of their overall product. The Logo API represents Clearbit at large—it’s a flexible and easy-to-implement way for companies to integrate data into their workflows.
Offering a bite-sized version of your product that provides value for free creates an incredible first impression. It validates that what you’re making really works and drives testers to commit to your main product. And it can be an incredibly effective source of acquisition—Clearbit’s free APIs have driven over 100,000 inbound leads for the company.
2. How Segment increased conversions by experimenting with paid acquisition
As a customer analytics tool, Segment practices what it preaches when it comes to acquisition. The Segment team has developed a data-driven, experimental approach to identify its most successful acquisition channels and double down on those strategies.
In an AMA, their head of marketing Diana Smith told the audience that they’d recently been experimenting with which paid channels worked for them. “In a nutshell, we’ve learned that retargeting definitely works and search does not,” Smith explained.
Segment learned that their marketing efforts were more effective when they reached out to users who’d viewed their site before versus when they relied on users finding them through search. So they set out to refine their retargeting strategy. They started customizing their Facebook and Twitter ads to visitors who’d viewed particular pages: to visitors who’d viewed their docs, they sent API-related messages; to visitors who’d looked at pricing, they sent free trial messages.
By narrowing your acquisition strategy, you can dramatically increase ROI on paid acquisition, increasing conversions while minimizing CAC.
3. How Tinder tripled its user base by reaching target users in person
Tinder famously found success by gamifying dating. But to get its growth started, Tinder needed a strategy that would allow potential users to play the game and find a willing dating pool on the other side of the app.
In order to validate their product, people needed to see it in action. Tinder’s strategy was surprisingly high touch—they sent a team to visit potential users and demonstrate the product’s value in person.
They invested in a tour of sororities and fraternities at colleges to manually recruit signups from their target audience: millennials. It was a move that increased their user base from less than 5,000 users to over 15,000.
First, they helped groups of women install the app, guiding them past initial install friction.
Then they did the same pitch to a group of men. Both cohorts were able to see value quickly because the app was now used by people who had something important in common—they all went to the same school.
To find the right growth strategy for your product, you have to understand what it will take for users to see it working. Tinder’s in-person pitches were a massive success because it helped users see value faster by populating the 2-sided app with more relevant connections.
4. How Zapier growth hacked signups by writing about other products
Zapier is all about integrations—it brings together tools across a user’s tech stack, allowing events in one tool to trigger events in another, from Asana to HubSpot to Buffer. The beauty of Zapier is that it sort of disappears behind these other tools. But that raises an interesting question: How do you market an invisible tool?
Zapier’s strategy was to leverage its multifaceted product personality through content marketing. The team takes every new integration on Zapier as a new opportunity to build authority on search and to appeal to a new audience.
This strategy helped their blog grow from scratch to over 600,000 readers in just 3 years, and the blog continues to grow as new tools and integrations are added to Zapier.
If you have a product with multiple use cases and integrations, try targeting your content marketing to specific audiences, rather than aiming for a catch-all approach.
5. How Twitter strengthened their network effect with onboarding suggestions
Andy Johns arrived at Twitter as a product manager in 2010, when the platform already had over 30 million active users. But according to Johns, growth was slowing. So the Twitter user growth team got creative and tried a new growth experiment every day—the team would pick an area in which to engage more users, create an experiment, and nudge the needle up by as much as 60,000 users in a day.
One crucial user growth strategy that worked for Twitter was to coax users into following more people during the onboarding. They started suggesting 10 accounts to new users shortly after signup.
Because users never had to encounter an empty Twitter feed, they were able to experience the product’s value much faster.
Your users’ first aha moment—whether it’s connecting with friends, sending messages, or sharing files—should serve to give them a secure footing in your product and nudge your network effect into action one user at a time.
6. How LinkedIn growth hacked connections by asking a simple question
LinkedIn was designed to connect users. But in the very beginning, most users still had only a few connections and needed help making more.
LinkedIn's strategy was to capitalize on high user motivation just after signup. In a flow nicknamed the "Reconnect Flow," LinkedIn asked new users a single question during onboarding: "Where did you used to work?"
Based on this input, LinkedIn then displayed a list of possible connections from the user's former workplace. This jogged new users' memories and reduced the effort required to reconnect with old colleagues. Once they had made this step, users were more likely to make further connections on their own.
Thanks to this simple prompt, LinkedIn’s pageviews increased by 41%, searches jumped up 33%, and users’ profiles became richer with 38% more work positions listed.
If you notice your users aren’t making the most of your product on their own, help them out while you have their attention. Use the momentum of your onboarding to help your users become engaged.
7. How Facebook increased week 1 retention by finding its north star metric
Facebook’s active user base surpassed 1 billion in 2012. It’s easy to look at the massive growth of Facebook and see it as a sort of big bang effect—a natural event difficult to pick apart for its separate catalysts. But Facebook’s growth can be pinned down to several key strategies.
Again and again, Facebook carved out growth by maintaining a steely focus on user behavior data. They’ve identified markers of user success and used those markers as North Star metrics to guide their product decisions.
Once Facebook had identified their activation metric (famously, getting a new user to 7 friends within 10 days), they crafted the onboarding experience to nudge users up to that magic number.
By focusing on a metric that correlates with stickiness, your team can take a scientific approach to growing engagement and retention, and measuring its progress.
8. How Slack got users to stick around by mirroring successful teams
Slack has grown by watching how teams interact with their product. Their own team was the very first test case and from then on, they’ve refined their product by engaging companies to act as testers.
To understand patterns of retention and churn, Slack peered into their user data. They found that teams who’d sent 2,000 or more messages almost never dropped out of the product. That’s a lot of messages—you only get to that number by really playing around with the product and integrating it into your routine.
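Slack hasn't published the analysis itself, but a sketch of how you might surface that kind of threshold from your own usage data (with an assumed CSV schema) could look like this:

```python
import pandas as pd

# One row per team: total messages sent and whether the team was still
# active some months later. The schema and file name are illustrative.
teams = pd.read_csv("teams.csv")  # columns: team_id, messages_sent, retained

# Bucket teams by message volume and compare retention per bucket.
buckets = pd.cut(
    teams["messages_sent"],
    bins=[0, 100, 500, 1000, 2000, float("inf")],
)
print(teams.groupby(buckets, observed=True)["retained"].mean())
# A sharp jump in retention between buckets (here, around 2,000 messages)
# is a candidate activation threshold to design onboarding around.
```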
Slack knew they had to give new users as many reasons as possible to send messages through the platform. They started plotting interactions with users in a way that encouraged multiple message sending.
For example, Slack’s onboarding experience simulates how a seasoned Slack user behaves. New users are introduced to the platform through interactions with the Slackbot, and are encouraged to upload files, use keyboard shortcuts, and start new conversations.
Find what success means for your product by watching loyal users closely. Mirror that behavior for new users, and encourage them to get into a pattern that leads to long-term retention.
9. How ConvertKit grew to $125,000 MRR by helping users switch tools
In early 2013, self-employed e-book writer Nathan Barry publicly set himself an unusual resolution. He announced the “Web App Challenge”—he wanted to build an app from scratch and get to $5,000+ in monthly recurring revenue within 6 months.
Though he didn't quite make it to that $5,000 mark, he did build a product—ConvertKit—with validated demand that went on to reach $125,000 in MRR.
Barry experimented with a lot of growth strategies over the first three years, but the one he kept turning back to was direct communication with potential customers. Through personalized emails, Barry found tons of people who loved the idea of ConvertKit but said it was too much trouble for them to think about switching tools—all their contacts and drafts were set up in their existing tools.
So Barry developed a “concierge migration service.” The ConvertKit team would literally go into whichever tool the blogger was using, scrape everything out, and settle the new customer into ConvertKit. Just 15 months after initiating this strategy, ConvertKit was making $125,000 in MRR.
By actively reaching out and listening to your target users, you'll be better able to identify precise barriers to entry and come up with creative solutions to help them overcome these hurdles.
10. How Yahoo doubled mobile revenue by rearranging their team
When Yahoo doubled their mobile revenue between 2012 and 2013, it wasn’t just the product that evolved. Yahoo had hired a new leader for its Mobile and Emerging Products, Adam Cahan. As soon as Cahan arrived, he set to work making organizational changes that allowed Yahoo’s mobile division to get experimental, iterate, and develop new products quickly.
First, he encouraged elements of a startup environment. Cahan brought together talented individuals from different disciplines—design, product management, engineering—and encouraged them to work like a founding team to focus solely on developing mobile products that would grow.
Cahan maintained that collaborative environment even as the division grew to 50 members. By making every member of the team focused on user experience before all else, he removed some of the bottlenecks and divisions that often build up in a large tech company. He gave the team a mission to discover how to make Yahoo better for customers, even if that meant dismantling the status quo or abandoning older software.
In 2 years, Cahan grew Yahoo’s mobile division from 150 million mobile users to 550 million. By hiring the right people and enabling them to focus on solving problems for users, he had opened the doors for organic growth.
11. How Stripe grew by looking after developers first
Payment processing platform Stripe always knew that developers were the key to adoption of their service. Founders John and Patrick Collison started Stripe to address a very specific problem—developers were sorely in need of a payment solution they could adapt to different merchant needs and match the speed and complexity of the buyer side of the ecommerce interface.
Merchants started clamoring for Stripe because their developers were raving about it—today, Stripe commands 15.34% of the market share for payment processing. That's due in large part to Stripe's strategy of prioritizing the needs of developers first and foremost. For instance:
Code could only get Stripe so far—so in order to drive adoption, they focused on creating clear, comprehensive documentation so that developers could pick up Stripe products and run with them.
Stripe created a library of docs that lead the user through each product. There’s more plain English in these docs than code, bridging the gap for new users.
There’s a “Try Now” section where users can see what it takes to tokenize a credit card with Stripe.
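That "Try Now" flow boils down to tokenization. Here is a minimal sketch using Stripe's Python library and its standard test card; the classic Token API is shown for brevity, though newer Stripe integrations favor Elements and PaymentMethods:

```python
import stripe

stripe.api_key = "sk_test_..."  # test-mode secret key (placeholder)

# Exchange raw card details for a single-use token so the card number
# never needs to touch your own servers again.
token = stripe.Token.create(
    card={
        "number": "4242424242424242",  # Stripe's standard test card
        "exp_month": 12,
        "exp_year": 2030,
        "cvc": "123",
    }
)
print(token.id)  # "tok_..."; attach it to a charge or customer next
```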
Know your audience. By focusing on the people that are most directly affected by your problem, you can generate faster and more valuable word-of-mouth.
12. How Groove turned high churn around with targeted emails
In 2013, help desk tool Groove was experiencing a worryingly high churn rate of 4.5%. They were acquiring new users just fine, but people were leaving as fast as they came. So they set out to get to know these users better. It was a strategy that would allow them to reduce churn from 4.5% to 1.6%. “Your customers probably won’t tell you when they hit a snag,” says Alex Turnbull, founder and CEO of Groove. “Dig into your data and look for creative ways to find those customers having trouble, and help them.”
Groove used Kissmetrics to examine customer data. They identified who was leaving and who was staying in the app.
They compared the user behavior of both cohorts and found that staying in the app was strongly correlated with performing certain key actions—like being able to create a support widget in 2 to 3 minutes. Users who churned were taking far longer, meaning that for some reason they weren’t able to get a grasp of the tool.
Groove was then able to send highly targeted emails to this second cohort, bringing them back into the app and helping them achieve value.
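Groove ran this analysis in Kissmetrics, but the trigger logic itself is simple. In code it might look something like this, with the 10-minute cutoff, CSV schema, and email step all illustrative stand-ins:

```python
import csv

# Healthy users created their first support widget in 2-3 minutes, so
# treat anything much slower as a signal the user is stuck.
AT_RISK_MINUTES = 10

with open("widget_creation_times.csv") as f:
    for row in csv.DictReader(f):  # columns: user_id, minutes_to_create
        if float(row["minutes_to_create"]) > AT_RISK_MINUTES:
            # Stand-in for your email/automation tool of choice.
            print(f"queue 'need a hand?' email for user {row['user_id']}")
```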
By using analytics, you can identify behaviors that drive engagement vs. churn, then proactively reach out to customers when you spot these behaviors in action. By getting ahead of individual cases of churn, you can drive engagement up.
13. How PayPal paid users to growth hack for them
PayPal was growth hacking referrals before it was cool. When PayPal launched, they were introducing a new type of payment method—and they knew that they needed to build trust and authority in order to grow. Their strategy involved getting early adopters to refer users to the platform.
As users grew accustomed to the idea of PayPal, signup bonuses were decreased to $10, then $5, then were phased out—but by that time, their user base had started to grow organically.
“We must have spent tens of millions in signup and referral bonuses the first year,” says David Sacks, original COO at PayPal. But that initial investment worked—PayPal’s radical first iteration of their referral program allowed them to grow to 5 million daily users in only a few months.
Incentivize your users in a way that makes sense for your business. If users adore your product, the initial cost of setting up a referral program can be recouped many times over as your users become advocates.
14. How Postmates reached 1 million deliveries by baking growth into engineering and product
In 2016, the on-demand delivery service Postmates reached 1 million monthly deliveries. They also launched a subscription service called Postmates Plus Unlimited.
With growing demand, Postmates focused on developing products that are highly accessible and easy to use. At the same time, they gathered funding. In October 2016, they gained another $140 million investment taking their post-money valuation to $600 million. But to cope with this growth in valuation, Postmates needed to scale their growth team.
According to Siqi Chen, VP of Growth at Postmates, the company had “an incredibly scrappy, hard working team who did the best they could with the tools given, but it’s very hard to make growth work at Postmates scale without dedicated engineering and product support.”
So the team shifted to include engineering and product at every level. Now, Postmates’ growth team has 3 arms of its own—“growth product,” “growth marketing,” and “user acquisition”—each one with its own engineering support.
By connecting their growth team directly to the technical decision makers, Postmates created a team that can scale with the company.
15. How BuzzFeed grew to 9 billion monthly content views with their "golden rules of shareability"
BuzzFeed is a constantly churning content machine, publishing hundreds of pieces a day and getting over 9 billion content views per month. BuzzFeed's key growth strategy has been to define virality and pursue it in everything they do.
Jonah Peretti, BuzzFeed's CEO, shut off the noise and started listening to readers. He found that readers were more concerned about their communities than about the content—they were disappointed when they didn't find something to share with their friends. The most important metrics the BuzzFeed team could judge themselves by were social shares and traffic from social sites.
BuzzFeed created the Golden Rules of Shareability to further refine their criteria, and analyzed their viral content to create a formula for what makes something inherently shareable. This is important, because it makes it possible for Team BuzzFeed to take leaps into new topics and areas.
BuzzFeed’s focus has followed its social crowd and has been able to adapt to changing reading patterns and platforms. The company has also upped its political arm, and has made big investments in branded video.
The lesson? To go viral, you need to give the people what they want, and that means striking a balance between consistency and novelty.
16. How Airbnb continued to scale by simplifying user reviews
Airbnb's origin story is one of growth hacking's most infamous tales. Founders Brian Chesky and Joe Gebbia knew their potential audience was already using Craigslist, so they engineered their own integration, allowing hosts to double-post their ads to Airbnb and Craigslist at the same time.
But it’s their review strategy that has enabled Airbnb to keep growing, once this short-term tactic wore out its effectiveness. Reviews enrich the Airbnb platform. For 50% of bookings, guests visit a host profile at least once before booking a trip, and hosts with more than 10 reviews are 10X more likely to receive bookings.
They made the review process double-blind, so feedback isn’t visible until both traveler and host have filled out the form. This not only ensures more honest reviews, but removes a key source of friction from the review process.
They also enabled private feedback and reduced the timeline for leaving a review to 14 days, making reviewing more spontaneous and authentic.
By making reviews easier and more honest, Airbnb grew the number of reviews on the site, which in turn grew its authority. You can growth hack your shareability by identifying barriers to trust and smoothing out points of friction along the way.
17. How AdRoll used Appcues modal windows to increase adoption to 60%
AdRoll has a great MailChimp integration—it allows users to retarget ads to their email subscribers in MailChimp. But they found that very few users were actually making use of this feature.
Peter Clark, head of Growth at AdRoll, wanted to experiment with in-app messaging in order to target the right AdRoll users more effectively.
But growth experiments like this require rapid iteration. His engineers were better suited to longer development cycles, and he didn't want to disrupt the flow of his organization. So Peter and his team started using Appcues to create custom modal windows quickly and easily—and without input from their technical team members.
With a code-free solution, AdRoll’s growth team could design and implement however many windows they needed to drive adoption of the features they were working on. Here’s how it worked for the MailChimp integration:
The team first used a tool called Datanyze to isolate users who used both AdRoll and MailChimp.
They copied this list into Appcues and created a modal window targeted to appear only to users with both tools, who could take immediate advantage of the integration.
They set the modal to appear as users arrived logged in to their dashboards—the core area of the AdRoll tool, in which users are already poised to take action on their ad campaigns.
This single experiment yielded thousands of conversions and ended up increasing the adoption rate of the integration to 60%. The experiment is so easy to replicate that Clark and the team now use modal windows for all kinds of growth experiments.
18. How GitHub grew to 100,000 users in a year by nurturing its network effect
GitHub grew out of Git, an open-source version control tool designed to let multiple developers work together on a single project. But it was the discussion around Git—what GitHub's founders nicknamed "the Github"—that became the product's core value.
GitHub's founders realized that the problem of collaboration wasn't just a practical software problem—the whole developer community was missing a communal factor. So they focused on growing the community side of the product, creating a freemium product with an open-source repository where coders could come together to discuss projects and solve problems with a collective mindset.
They created the ability to follow projects and track contributions, so there's both an element of camaraderie and an element of competitiveness. This turned GitHub into a sort of social network for coding. A little over a year after launch, GitHub had gained its first 100,000 users. In July of 2012, GitHub secured $100M in venture capital.
By catalyzing the network effect, it's possible to turn a tool into a culture. For GitHub, the more developers got involved, the better the tool became. Find a community for your product and give them a place to come together.
19. How Yelp reached 176 million unique monthly visits by gamifying reviews
It’s relatively easy for a consumer review site to get drive-by traffic. What makes Yelp different, and allows it to draw return visitors and community members, is that it has strategically grown the social aspect of its platform.
This is what has earned Yelp 176 million unique monthly visitors in Q2 2019 and has allowed them to overtake competitors by creating their own category of service. Yelp set out to amplify its existing network effect by rewarding users for certain behaviors.
They created user levels—users could achieve "Elite" status by frequently writing good reviews and by voting and commenting on other users' reviews.
Yelp judged reviews based on several factors, including level of detail and how many votes of approval they received. All of these factors helped to make Yelp more shareable. Essentially, they were teaching loyal users to be better content creators by rewarding them for upping the quality of Yelp’s content.
By making reviews into a status symbol, Yelp turned itself into a community with active members who feel a sense of belonging there—and who feel motivated to use the platform more often.
20. How Etsy grew to 42.7 million active buyers by empowering sellers
Etsy reached IPO with a $2 billion valuation in 2015, ten years after the startup was founded. Today, the company boasts 42.7 million active buyers and 2.3 million active sellers who made $3.9 billion in annual gross merchandise sales in 2018. Not too shabby (chic)!
The key to their success was Etsy's creation of a "community-centric" platform. Rather than building a simple ecommerce site, Etsy set out to create a community of like-minded craft-makers. One of the ways they did this was by boosting organic new user growth by actively encouraging sellers to share their wares on social media.
First, Etsy’s strategy was to focus on the seller side of its user acquisition. They gave their sellers tons of support but also tons of independence to promote and curate their businesses—which ultimately gave sellers a sense of ownership over their own success. Thanks to this approach, Etsy sellers were motivated to recruit their own buyers, who then visited Etsy and got hooked on the site itself.
Etsy’s seller handbook is basically a course in how to operate a small online business—hashtags and all. Vendors create their own regulars, and drum up their own new business through social sharing, while Etsy positions itself as the supportive platform.
If your product involves a 2-sided market, focus on one side of that equation first. What can you do to enable those people to become an acquisition channel in and of themselves?
21. How IBM created a growth hacking team to spur startup-level growth
As cloud-based software has taken off, traditional hardware technology companies have struggled. IBM has been proactive in its efforts to redefine its brand and product offering for an increasingly mobile audience.
Faced with an increasingly competitive, cloud-based landscape, IBM decided that it was time to start telling a different story. This legacy giant began acting more like a nascent startup, as the company aggressively reinvented its portfolio.
Their strategy for reinvigorating growth and achieving startup-like mentality has been to take a product-led approach.
In 2014, IBM created a growth hacking team. Already a large corporation, IBM didn’t need to climb the initial hill of growth to get its product off the ground. But by building this focused team, it aimed to grow into new areas and new audiences with “data-driven creativity,” by using the small business strategies it was seeing in the startup scene.
IBM now essentially has startup-sized teams within its massive team, working in a lab style with the autonomy to test marketing strategies.
No matter what your team looks like—whether it’s a nimble 10-person startup or an enterprise with low flexibility—you can turn your organizational structure into a space where growth can thrive. Of course, that achievement is not without its struggles. But as Nancy Hensley, Chief Digital Officer of Data and AI at IBM says:
“There’s always pain in transformation. That’s how you know you’re transforming!”
Listen up before you get loud
None of these growth spurts happened by changing a whole company all at once. Instead, these teams found something—something small, a way in, a loophole, a detail—and carved out that space so growth could follow.
Whether you find that a single feature in your product is the key to engaging users, or you discover a north star metric that allows you to replicate success—pinpoint your area for growth and dig into it.
Pay attention. Listen to your users and notice what's happening in your product, and what could be happening better. That learning is your next growth strategy.
Comments Off on GitHub’s Top 100 Most Valuable Repositories Out of 96 Million – Hackernoon
GitHub is not just a code hosting service with version control — it’s also an enormous developer network.
The sheer size of GitHub at over 30 million accounts, more than 2 million organizations, and over 96 million repositories translates into one of the world’s most valuable development networks.
How do you quantify the value of this network? And is there a way to get the top repositories?
Here at U°OS, we ran the GitHub network through a simplified version¹ of our reputation algorithm and produced the top 100 most valuable repositories.
The result is as fascinating as it is eclectic, and it feels like a good reflection of our society's interest in technology and where it is heading.
There are the big proprietary players with open source projects — Google, Apple, Microsoft, Facebook, and even Baidu. And at the same time, there’s a Chinese anti-censorship tool.
There’s Bitcoin for cryptocurrency.
There’s a particle detector for CERN’s Large Hadron Collider.
There are gaming projects like Space Station 13 and Cataclysm: Dark Days Ahead, and the game engine Godot.
There are education projects like freeCodeCamp, Open edX, Oppia, and Code.org.
There are web and mobile app-building projects like WordPress, Joomla, and Flutter for publishing your content.
There are storage systems and databases like Ceph and CockroachDB to hold that content for the web.
And there’s a search engine to navigate through the content — Elasticsearch.
There are also, perhaps unsurprisingly, jailbreak projects like Cydia compatibility manager for iOS and Nintendo 3DS custom firmware.
And there’s a smart home system — Home Assistant.
All in all, it's really a great outlook for the technology world: we learn, build stuff to broadcast our unique voices, we use crypto, break free from proprietary software on our hardware, and in our spare time we game in our automated homes. And the big companies open-source their projects.
Before I proceed with the list: running the Octoverse through the reputation algorithm also produced a value score for every individual GitHub contributor. So, if you have a GitHub account and are curious, you can get your score at https://u.community/github and convert it to a Universal Portable Reputation.
Comments Off on Improving the Accuracy of Automatic Speech Recognition Models for Broadcast News – Appen
In their paper entitled English Broadcast News Speech Recognition by Humans and Machines, a team from IBM and Appen sets out to identify techniques that close the gap between automatic speech recognition (ASR) and human performance.
Where does the data come from?
IBM's initial work in the voice recognition space was done as part of the U.S. government's Defense Advanced Research Projects Agency (DARPA) Effective Affordable Reusable Speech-to-Text (EARS) program, which led to significant advances in speech recognition technology. The EARS program produced about 140 hours of supervised broadcast news (BN) training data and around 9,000 hours of very lightly supervised training data from the closed captions of television shows. By contrast, EARS produced around 2,000 hours of highly supervised, human-transcribed training data for conversational telephone speech (CTS).
Lost in translation?
Because so much training data is available for CTS, the team from IBM and Appen endeavored to apply similar speech recognition strategies to BN to see how well those techniques translate across applications. To understand the challenge the team faced, it’s important to call out some important differences between the two speech styles:
Broadcast news (BN)
Clear, well-produced audio quality
Wide variety of speakers with different speaking styles
Varied background noise conditions — think of reporters in the field
Wide variety of news topics
Conversational telephone speech (CTS)
Often poor audio quality with sound artifacts
Unscripted
Interspersed with moments where speech overlaps between participants
Interruptions, sentence restarts, and background confirmations between participants, e.g., "okay", "oh", "yes"
How the team adapted speech recognition models from CTS to BN
The team adapted the speech recognition systems that were so successfully used for the EARS CTS research: multiple long short-term memory (LSTM) and ResNet acoustic models trained on a range of acoustic features, along with word and character LSTMs and convolutional WaveNet-style language models. This strategy had produced word error rates between 5.1% and 9.9% for CTS in a previous study, specifically the HUB5 2000 English Evaluation conducted by the Linguistic Data Consortium (LDC). The team tested a simplified version of this approach on the BN data set, which wasn't human-annotated, but rather created using closed captions.
Instead of adding all the available training data, the team carefully selected a reliable subset, then trained LSTM and residual network-based acoustic models with a combination of n-gram and neural network language models on that subset. In addition to automatic speech recognition testing, the team benchmarked the automatic system against an Appen-produced high-quality human transcription. The primary language model training text for all these models consisted of a total of 350 million words from different publicly available sources suitable for broadcast news.
Getting down to business
In the first set of experiments, the team separately tested the LSTM and ResNet models in conjunction with the n-gram and feed-forward neural network language models (FF-NNLM) before combining scores from the two acoustic models, comparing against the results obtained on the older CTS evaluation. Unlike the results observed in the original CTS testing, no significant reduction in the word error rate (WER) was achieved after scores from both the LSTM and ResNet models were combined. The LSTM model with an n-gram LM individually performs quite well, and its results further improve with the addition of the FF-NNLM.
For the second set of experiments, word lattices were generated after decoding with the LSTM+ResNet+n-gram+FF-NNLM model. The team generated n-best lists from these lattices and rescored them with one LSTM language model (LSTM1-LM); a second LSTM LM (LSTM2-LM), further fine-tuned on BN-specific data, was used to rescore the word lattices independently. Significant WER gains were observed after using the LSTM LMs, and the researchers hypothesize that the secondary fine-tuning with BN-specific data is what allows LSTM2-LM to perform better than LSTM1-LM.
The results
Our ASR results have clearly improved state-of-the-art performance, and significant progress has been made compared to systems developed over the last decade. When compared to the human performance results, the absolute ASR WER is about 3% worse. Although the machine and human error rates are comparable, the ASR system has much higher substitution and deletion error rates.
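For reference, WER is the word-level edit distance between the reference transcript and the ASR hypothesis, normalized by the length of the reference; substitutions, deletions, and insertions all count. A minimal sketch (not the paper's code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deleting all of ref[:i]
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # inserting all of hyp[:j]
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(dp[i - 1][j] + 1,   # deletion
                           dp[i][j - 1] + 1,   # insertion
                           substitution)
    return dp[-1][-1] / len(ref)

print(word_error_rate("the cat sat on the mat", "the cat sat on mat"))  # ~0.167
```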
Looking at the different error types and rates, the research produced interesting takeaways:
There’s a significant overlap in the words that ASR and humans delete, substitute, and insert.
Humans seem to be careful about marking hesitations: %hesitation was the most inserted symbol in these experiments. Hesitations seem to be important in conveying meaning to the sentences in human transcriptions. The ASR systems, however, focus on blind recognition and were not successful in conveying the same meaning.
Machines have trouble recognizing short function words: "the", "and", "of", "a", and "that" get deleted the most. Humans, on the other hand, seem to catch most of them. It seems likely that these words aren't fully articulated, so the machine fails to recognize them, while humans are able to infer them naturally.
Conclusion
The experiments show that speech ASR techniques can be transferred across domains to provide highly accurate transcriptions. For both acoustic and language modeling, the LSTM- and ResNet-based models proved effective and human evaluation experiments kept us honest. That said, while our methods keep improving, there is still a gap to close between human and machine performance, demonstrating a continued need for research on automatic transcription for broadcast news.
Comments Off on Which New Business Models Will Be Unleashed By Web 3.0? – Fabric
The forthcoming wave of Web 3.0 goes far beyond the initial use case of cryptocurrencies. Through the richness of interactions now possible and the global scope of counter-parties available, Web 3.0 will cryptographically connect data from individuals, corporations and machines, with efficient machine learning algorithms, leading to the rise of fundamentally new markets and associated business models.
The future impact of Web 3.0 makes undeniable sense, but the question remains, which business models will crack the code to provide lasting and sustainable value in today’s economy?
A history of Business Models across Web 1.0, Web 2.0 and Web 3.0
We will dive into the native business models that have been and will be enabled by Web 3.0, but first let's briefly touch upon the quickly forgotten and often arduous journeys that led to the unexpected and unpredictable business models of Web 2.0.
To set the scene anecdotally for Web 2.0’s business model discovery process, let us not forget the journey that Google went through from their launch in 1998 to 2002 before going public in 2004:
In 1999, while enjoying good traffic, they were clearly struggling with their business model. Their lead investor Mike Moritz (Sequoia Capital) openly stated “we really couldn’t figure out the business model, there was a period where things were looking pretty bleak”.
In 2001, Google was making $85m in revenue while their rival Overture was making $288m in revenue, as CPM based online advertising was falling away post dot-com crash.
In 2002, adopting Overture’s ad model, Google went on to launch AdWords Select: its own pay-per-click, auction-based search-advertising product.
After struggling for 4 years, Google found that a single small modification to its business model launched it into orbit to become one of the world's most valuable companies.
Looking back at the wave of Web 2.0 Business Models
Content
The earliest iterations of online content merely involved the digitisation of existing newspapers and phone books … and yet, we’ve now seen Roma (Alfonso Cuarón) receive 10 Academy Awards Nominations for a movie distributed via the subscription streaming giant Netflix.
Marketplaces
Amazon started as an online bookstore that nobody believed could become profitable … and yet, it is now the behemoth of marketplaces covering anything from gardening equipment to healthy food to cloud infrastructure.
Open Source Software
Open source software development started off with hobbyists and an idealist view that software should be a freely-accessible common good … and yet, the entire internet runs on open source software today, creating $400b of economic value a year; GitHub was acquired by Microsoft for $7.5b, while Red Hat makes $3.4b in yearly revenues providing services for Linux.
SaaS
In the early days of Web 2.0, it might have been inconceivable that after massively spending on proprietary infrastructure one could deliver business software via a browser and become economically viable … and yet, today the large majority of B2B businesses run on SaaS models.
Sharing Economy
It was hard to believe that anyone would be willing to climb into a stranger's car or rent out their couch to travellers … and yet, Uber and Airbnb have become the largest taxi operator and accommodation provider in the world, without owning any cars or properties.
Advertising
While Google and Facebook might have gone into hyper-growth early on, they didn’t have a clear plan for revenue generation for the first half of their existence … and yet, the advertising model turned out to fit them almost too well, and they now generate 58% of the global digital advertising revenues ($111B in 2018) which has become the dominant business model of Web 2.0.
Emerging Web 3.0 Business Models
Taking a look at Web 3.0 over the past 10 years, initial business models tend not to be repeatable or scalable, or simply try to replicate Web 2.0 models. We are convinced that while there is some scepticism about their viability, the continuous experimentation by some of the smartest builders will lead to incredibly valuable models being built over the coming years.
By exploring both the more established and the more experimental Web 3.0 business models, we aim to understand how some of them will accrue value over the coming years.
Issuing a native asset
Holding the native asset, building the network
Taxation on speculation (exchanges)
Payment tokens
Burn tokens
Work Tokens
Other models
Issuing a native asset:
Bitcoin came first. Proof of Work coupled with Nakamoto Consensus created the first Byzantine Fault Tolerant & fully open peer-to-peer network. Its intrinsic business model relies on its native asset: BTC — a provably scarce digital token paid out to miners as block rewards. Others, including Ethereum, Monero and ZCash, have followed down this path, issuing ETH, XMR and ZEC.
These native assets are necessary for the functioning of the network and derive their value from the security they provide: by providing a high enough incentive for honest miners to provide hashing power, the cost for malicious actors to perform an attack grows alongside the price of the native asset, and in turn, the added security drives further demand for the currency, further increasing its price and value. The value accrued in these native assets has been analysed & quantified at length.
Holding the native asset, building the network:
Some of the earliest companies that formed around crypto networks had a single mission: make their respective networks more successful & valuable. Their resultant business model can be condensed to “increase their native asset treasury; build the ecosystem”. Blockstream, acting as one of the largest maintainers of Bitcoin Core, relies on creating value from its balance sheet of BTC. Equally, ConsenSys has grown to a thousand employees building critical infrastructure for the Ethereum ecosystem, with the purpose of increasing the value of the ETH it holds.
While this perfectly aligns the companies with the networks, the model is hard to replicate beyond the first handful of companies: amassing a meaningful enough balance of native assets becomes impossible after a while … and the blood, toil, tears and sweat of launching & sustaining a company cannot be justified without a large enough stake for exponential returns. As an illustration, it wouldn't be rational for any business other than a central bank — say, a US remittance provider — to base its business purely on holding large sums of USD while working on making the US economy more successful.
Taxing the Speculative Nature of these Native Assets:
The subsequent generation of business models focused on building the financial infrastructure for these native assets: exchanges, custodians & derivatives providers. They were all built with a simple business objective — providing services for users interested in speculating on these volatile assets. While the likes of Coinbase, Bitstamp & Bitmex have grown into billion-dollar companies, they do not have a fully monopolistic nature: they provide convenience & enhance the value of their underlying networks. The open & permissionless nature of the underlying networks makes it impossible for companies to lock in a monopolistic position by virtue of providing “exclusive access”, but their liquidity and brands provide defensible moats over time.
Payment Tokens:
With The Rise of the Token Sale, a new wave of projects in the blockchain space based their business models on payment tokens within networks: often creating two-sided marketplaces, and enforcing the use of a native token for any payments made. The assumption is that as the network's economy grows, demand for the limited native payment token will increase, which should lead to a rise in the token's value. While the value accrual of such a token model is debated, the increased friction for the user is clear — what could have been paid in ETH or DAI now requires additional exchanges on both sides of a transaction. While this model was widely used during the 2017 token mania, its friction-inducing characteristics have rapidly removed it from the forefront of development over the past 9 months.
Burn Tokens:
Revenue generating communities, companies and projects with a token might not always be able to pass the profits on to the token holders in a direct manner. A model that garnered a lot of interest as one of the characteristics of the Binance (BNB) and MakerDAO (MKR) tokens was the idea of buybacks / token burns. As revenues flow into the project (from trading fees for Binance and from stability fees for MakerDAO), native tokens are bought back from the public market and burned, resulting in a decrease of the supply of tokens, which should lead to an increase in price. It’s worth exploring Arjun Balaji’s evaluation (The Block), in which he argues the Binance token burning mechanism doesn’t actually result in the equivalent of an equity buyback: as there are no dividends paid out at all, the “earning per token” remains at $0.
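To make the mechanics concrete, here is a toy buyback-and-burn model. Every figure is invented, and it deliberately ignores the price impact of the buying itself:

```python
# Each period, a fixed share of revenue buys tokens at market price and
# destroys them, shrinking circulating supply. All numbers are invented.
supply = 200_000_000      # circulating tokens
price = 2.50              # market price, assumed constant for simplicity
burn_share = 0.20         # share of revenue committed to the burn

for quarter, revenue in enumerate([40e6, 45e6, 50e6, 55e6], start=1):
    burned = (revenue * burn_share) / price
    supply -= burned
    print(f"Q{quarter}: burned {burned:,.0f} tokens, supply {supply:,.0f}")

# Whether a shrinking supply translates into price appreciation is exactly
# what critiques like Balaji's dispute, since no cash flows reach holders.
```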
Work Tokens:
One of the business models for crypto-networks that we are seeing 'hold water' is the work token: a model that focuses exclusively on the revenue generating supply side of a network in order to reduce friction for users. Some good examples include Augur's REP and Keep Network's KEEP tokens. A work token model operates similarly to classic taxi medallions, as it requires service providers to stake / bond a certain amount of native tokens in exchange for the right to provide profitable work to the network. One of the most powerful aspects of the work token model is the ability to incentivise actors with both carrot (rewards for the work) & stick (stake that can be slashed). Beyond providing security to the network by incentivising the service providers to execute honest work (as they have locked skin in the game denominated in the work token), they can also be evaluated by predictable future cash-flows to the collective of service providers (we have previously explored the benefits and valuation methods for such tokens in this blog). In brief, such tokens should be valued based on the future expected cash flows attributable to all the service providers in the network, which can be modelled out based on assumptions on pricing and usage of the network.
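As a sketch of that valuation logic, with the discount rate, fee projections, and token supply all stated assumptions rather than data, the calculation might look like:

```python
# Value the network as the present value of fees expected to flow to all
# service providers, then divide by token supply. Every input here is an
# assumption to be debated, not data.
def present_value(cash_flows, discount_rate):
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

projected_fees = [2e6, 6e6, 12e6, 20e6, 30e6]  # network-wide, by year
network_value = present_value(projected_fees, discount_rate=0.30)
token_supply = 10_000_000
print(f"network value ${network_value:,.0f}; "
      f"implied ${network_value / token_supply:.2f} per staked token")
```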
A wide array of other models are being explored and worth touching upon:
Dual token model such as MKR/DAI & SPANK/BOOTY where one asset absorbs the volatile up- & down-side of usage and the other asset is kept stable for optimal transacting.
Governance tokens which provide the ability to influence parameters such as fees and development prioritisation and can be valued from the perspective of an insurance against a fork.
Tokenised securities as digital representations of existing assets (shares, commodities, invoices or real estate) which are valued based on the underlying asset with a potential premium for divisibility & borderless liquidity.
Transaction fees, as explored by BloXroute & Aztec Protocol, where a treasury takes a small transaction fee in exchange for the enhancements it provides (e.g. scalability & privacy respectively).
Tech 4 Tokens as proposed by the Starkware team who wish to provide their technology as an investment in exchange for tokens — effectively building a treasury of all the projects they work with.
Providing UX/UI for protocols, such as Veil & Guesser are doing for Augur and Balance is doing for the MakerDAO ecosystem, relying on small fees or referrals & commissions.
Network specific services which currently include staking providers (e.g. Staked.us), CDP managers (e.g. topping off MakerDAO CDPs before they become undercollateralised) or marketplace management services such as OB1 on OpenBazaar, which can charge traditional fees (subscription or as a % of revenues).
Liquidity providers operating in applications that don’t have revenue generating business models. For example, Uniswap is an automated market maker, in which the only route to generating revenues is providing liquidity pairs.
With this wealth of new business models arising and being explored, it becomes clear that while there is still room for traditional venture capital, the role of the investor and of capital itself is evolving. The capital itself morphs into a native asset within the network which has a specific role to fulfil. From passive network participation to bootstrap networks post financial investment (e.g. computational work or liquidity provision) to direct injections of subjective work into the networks (e.g. governance or CDP risk evaluation), investors will have to reposition themselves for this new organisational mode driven by trust minimised decentralised networks.
When looking back, we realise Web 1.0 & Web 2.0 took exhaustive experimentation to find the appropriate business models, which have created the tech titans of today. We are not ignoring the fact that Web 3.0 will have to go on an equally arduous journey of iterations, but once we find adequate business models, they will be incredibly powerful: in trust minimised settings, both individuals and enterprises will be enabled to interact on a whole new scale without relying on rent-seeking intermediaries.
Today we see 1000s of incredibly talented teams pushing forward implementations of some of these models or discovering completely new viable business models. As the models might not fit the traditional frameworks, investors might have to adapt by taking on new roles and providing work and capital (a journey we have already started at Fabric Ventures), but as long as we can see predictable and rational value accrual, it makes sense to double down, as every day the execution risk is getting smaller and smaller.
Comments Off on Rise of deep tech startups in Japan – Norihiko Isawa
Although the news was inconspicuous at the time in Japan, it was indeed remarkable that Airbus Ventures participated as a lead investor in the $7.3 million Series A round for Infostellar, a Tokyo-based satellite antenna sharing platform, in September 2017. The financing was effectively the first lead investment by the venture capital arm of a prestigious European multinational corporation in Japanese startup history.
In addition to Infostellar, Airbus Ventures participated in the $11 million Series A round for Trillium Secure, a now Sunnyvale-headquartered cyber security technology startup originally established in Tokyo. The July 2018 financing, led by JAFCO Japan, one of the largest VC firms in the country, is another funding round that involved the venture capital arm of an eminent European MNC, this time Deutsche Bahn Digital Ventures.
The two recent investments above are an early sign that European capital has gradually started to flow towards Japan. However, contrary to the large market sizes of, and trading volumes between, Europe and Japan, both the number and amount of bilateral investments remain small for the time being. Major reasons behind this limited interaction are considered to be the low visibility of Japanese startups and a lack of the information necessary for investment examination in Europe. Therefore, this article principally aims at introducing the latest ecosystem and rising deep tech startups in Japan to MNCs and VC firms in Europe.
Overview of the Recent Japanese Startup Ecosystem
The global top 8 VC investment markets in 2017 from top to bottom are the United States ($71.9 billion), China ($40.0 billion), the United Kingdom ($5.8 billion), Israel ($3.9 billion), Germany ($2.9 billion), Japan ($2.5 billion), France ($2.4 billion) and Sweden ($1.7 billion). So, in terms of a VC investment amount, Japan is ranked 6th in the world, behind Germany and ahead of France.
The VC investment amounts in Japan rapidly increased after the inception of the new economic growth policy "Abenomics" introduced by the Abe Administration in 2013. The quantitative easing policy increased the amount of capital in the market and encouraged private-sector investment, resulting in the dramatic change in VC investment from $0.8 billion in 2013 to $2.5 billion in 2017, a CAGR of 35.1%.
Unlike other countries, there is a notable characteristic in Japan that corporations, instead of VC firms or institutional investors, have been the principal player in the market. Indeed, shares of VC investment amounts by players in 2017 from top to bottom are corporations (41.0%), financial institutions (16.1%), independent VC firms (14.8%), governmental agencies and universities (12.5%) and others (15.7%).
A major reason behind this characteristic is the large amount of retained earnings as cash and deposits within Japanese corporations. The overall amounts of cash and deposits within Japanese corporations have continuously increased from $1.4 trillion in 2008 to $2.2 trillion in 2017. And the richest Japanese corporations such as Toyota, Sony and Mitsubishi have as much cash and deposits as the US “Tech Giants” such as Apple, Microsoft and Alphabet do.
Large corporations also play an important role as limited partner (LP) investors or sponsors of major VC funds. For example, the LP investors of World Innovation Lab, or WiL for short, the largest and most prestigious VC firm in Japan, are 28 leading corporations in major industries ranging from automobile and TMT to transportation and financial services. In addition to WiL, the lead LP investors of Global Brain, another VC giant, include large corporations such as KDDI, a major mobile operator, and Mitsui Fudosan, the largest real estate developer.
Large corporations strategically invest in VC funds in order to search for promising startups as potential business partners, especially those in frontier domains. In the past couple of years, investment in deep tech startups has significantly increased.
Rise of Deep Tech Startups in Japan
VC investments in deep tech fields, such as but not limited to autonomous driving, robotics and UAVs, dramatically increased in 2017. Among the top 50 largest financing rounds, the number of deep tech deals was 3 in 2016, 18 in 2017 and 9 in 2018. ispace, a space resource exploration company, raised $90 million, the largest financing in Japanese startup history, from large corporations such as Japan Airlines, KDDI, Suzuki Motor, Dentsu and Konica Minolta as well as VC funds such as Mirai Creation Fund, the CVC of Toyota. In addition to ispace, GROOVE X, a consumer robot developer, raised a significant amount of approximately $40 million, mostly owing to the fame of its founder Mr. Hayashi, the product manager of Pepper at Softbank.
Autonomous driving is a core field of deep tech that has recently attracted a large amount of investment. Indeed, Preferred Networks, an AI-for-IoT startup spun out from the University of Tokyo, has raised $130 million in total to date, and its lead investor is Toyota, which invested $8.2 million in 2015 and $95 million in 2017. The company is notable for its brilliant founders; they are among the most intelligent researchers in the field of AI and participated in the prominent International Collegiate Programming Contest by the Association for Computing Machinery. Preferred Networks has developed big data analysis infrastructures and deep learning frameworks, and now focuses on the development of deep learning technologies for IoT. In this context, Toyota Research Institute, Toyota's AI laboratory, partnered with Preferred Networks in 2017 to co-develop AI for autonomous vehicles.
Another notable deep tech startup is Tier IV, a Nagoya University-origin startup established in 2013, which develops and provides corporations with Autoware, open source software for autonomous driving, as well as the hardware necessary for demonstration experiments. Tier IV has partnered with NVIDIA and intends to become a global de-facto standard in the field of autonomous driving software through the free distribution of Autoware. The company has raised over $25 million in total from large corporations such as KDDI, Sony, Yamaha and Aisan Technology, a surveying technology company. KDDI, Aisan Technology and Tier IV started a demonstration project of autonomous driving in February 2019.
In the field of Urban Air Mobility, the hoverbike developer A.L.I. Technologies is remarkable. The startup was established in 2016 by students from the famed Department of Aeronautics and Astronautics at the University of Tokyo. It has been funded by Drone Fund, a fund dedicated to investments in UAVs, and itself in turn invests in drone startups to construct a technology portfolio of drone-related IPs. The startup now develops a hoverbike called Speeder and relevant traffic control systems for UAVs.
In addition to large corporations, it is funded by KSK Angel Fund, a family office of Keisuke Honda, a Japanese professional football player who once played for VVV-Venlo, AC Milan and CSKA Moscow in Europe.
There exist several reasons why deep tech startups have successfully raised vast amounts of capital from and partnered with large corporations.
In short, there is a clear win-win collaboration opportunity between large corporations and startups. Large corporations have long focused their in-house R&D activities on strategic themes directly related to their core businesses, in order to increase ROI (return on investment) and other efficiency indices. Therefore, they have hardly been able to deal with frontier technologies that are far from their existing businesses, resulting in a lack of innovations from Japanese corporations. This inconvenient mechanism in corporate R&D activities has given large corporations a motive to seek out deep tech startups as co-development partners. Since Japanese high-tech corporations require frontier technologies which cannot be developed in-house, startups spun out from universities, which usually have more advanced technologies, become their main targets.
In general, technologies in leading universities are considered to be 10 to 20 years more advanced than technologies in large corporations. However, deep tech startups spun-out from universities tend to lack know-how and capabilities to commercialize. Therefore, they are motivated to collaborate with and look for corporations which can support them to transform their technologies into viable products and services.
Thus, the clear win-win collaboration opportunity has led to the recent tide of deep tech investments and corporate venturing.
Global Expansion by Japanese Deep Tech Startups
It is remarkable that some Japanese deep tech startups are already expanding globally. For example, ispace has an office in Luxembourg with a staff of 12. In addition, Rapyuta Robotics, a developer of cloud-connected low-cost multi-robot systems for security and inspection, is a spin-out from ETH Zurich. Furthermore, Xtreme-D, a developer of plug-in computing for next generation high-performance computing, has already opened an office in Silicon Valley with the cooperation of WiL. Tier IV has also opened an office in Silicon Valley. And one of the co-founders of Ascent Robotics, a startup for automation algorithms, is from the United States.
As already mentioned, Infostellar and Trillium Secure have been funded by Airbus and Deutsche Bahn. In addition to the two, Floadia Corporation and Cerebrex, which are both semiconductor startups, have been funded by UMC Capital, the CVC arm of UMC, the world's second largest semiconductor foundry, based in Taiwan. There exist only a few examples of such globally-funded startups so far, but they are an early sign that more will follow soon.
In terms of hiring, some deep tech startups consist of multinational teams. Startups such as Ascent Robotics actively recruit staff, especially engineers, globally, and their job descriptions are prepared in English.
Potential Collaborations between European MNCs and Japanese Deep Tech Startups
As described above, Japanese deep tech startups have rapidly gained power over the last couple of years, through constant capital injection from, and co-development with, large corporations. At the same time, European MNCs such as Airbus and Deutsche Bahn have also reached out to and invested in some Japanese deep tech startups. These recent trends indicate that there is further potential for collaboration between Europe and Japan; Japanese deep tech startups could be relevant business partners and promising investees for European high-tech MNCs. This hypothesis is backed by four reasons.
Firstly, quite a few Japanese deep tech startups are ventures spun out from leading universities, and thus their technologies are among the foremost of their kind and generally more advanced than those in large corporations, which are themselves front-runners in the world. This means that Japanese deep tech startups are worth partnering with in terms of technology acquisition for European high-tech MNCs.
Secondly, European and Japanese large corporations have similar organizational cultures and behaviors. For example, conservative senses of values, relatively slow and bureaucratic decision-making processes and high quality standards can be seen in both European and Japanese large corporations. This means that Japanese deep tech startups which have been co-working and therefore are now familiar with cultures and behaviors of Japanese large corporations could easily adapt to those of European ones. From the viewpoint of European high-tech MNCs, hurdles to collaborate with Japanese deep tech startups are fairly low now.
Thirdly, economic and trade relationships between Europe and Japan are expected to strengthen thanks to the Agreement between the European Union and Japan for an Economic Partnership, which came into effect on February 1st, 2019. The EPA will further encourage trade in goods and services and investment between the two parties. In addition to these national-level policies, the Tokyo Metropolitan Government has been promoting the global expansion of Tokyo-based startups through the X-Hub program, in which selected startups receive business support from mentors such as venture capitalists and seed accelerators in partner countries. Indeed, Tier IV and Ascent Robotics have been selected for the program's Germany track.
Lastly, Japanese deep tech startups have recently become more open to the global market but often lack suitable business partners in Europe. This leaves white space for European high-tech MNCs to step in as those partners.
Comments Off on Open Source Software – Investable Business Model or Not? – Natallia Chykina
Open-source software (OSS) is a catalyst for growth and change in the IT industry, and one can’t overestimate its importance to the sector. Quoting Mike Olson, co-founder of Cloudera, “No dominant platform-level software infrastructure has emerged in the last ten years in closed-source, proprietary form.”
Apart from independent OSS projects, an increasing number of companies, including the blue chips, are opening their source code to the public. They start by distributing their internally developed products for free, giving rise to widespread frameworks and libraries that later become industry standards (e.g., React, Flow, Angular, Kubernetes, TensorFlow, V8, to name a few).
Adding to this momentum, there has been a surge in venture capital dollars being invested into the sector in recent years. Several high profile funding rounds have been completed, with multimillion dollar valuations emerging (Chart 1).
But are these valuations justified? And more importantly, can the business perform, both growth-wise and profitability-wise, as venture capitalists expect? OSS companies typically monetize with a business model based around providing support and consulting services. How well does this model translate to the traditional VC growth model? Is the OSS space in a VC-driven bubble?
In this article, I assess the questions above, and find that the traditional monetization model for OSS companies based on providing support and consulting services doesn’t seem to lend itself well to the venture capital growth model, and that OSS companies likely need to switch their pricing and business models in order to justify their valuations.
OSS Monetization Models
By definition, open source software is free. This of course generates obvious advantages to consumers, and in fact, a 2008 study by The Standish Group estimates that “free open source software is [saving consumers] $60 billion [per year in IT costs].”
While providing free software is obviously good for consumers, it still costs money to develop. Very few companies are able to live on donations and sponsorships. And with fierce competition from proprietary software vendors, growing R&D costs, and ever-increasing marketing requirements, providing a “free” product necessitates a sustainable path to market success.
As a result of the above, a commonly seen structure related to OSS projects is the following: A “parent” commercial company that is the key contributor to the OSS project provides support to users, maintains the product, and defines the product strategy.
Latched on to this are the monetization strategies, the most common being the following:
Extra charge for enterprise services, support, and consulting: The classic model, targeted at large enterprise clients with sophisticated needs. Examples: MySQL, Red Hat, Hortonworks, DataStax
Freemium (advanced features/products/add-ons): A custom licensed product on top of the OSS might generate a lavish revenue stream, but it requires a lot of R&D costs and time to build. Example: Cloudera, which provides the basic version for free and charges customers for Cloudera Enterprise
SaaS/PaaS: The modern way to monetize OSS products, which assumes centrally hosting the software and shifting its maintenance costs to the provider. Examples: Elastic, GitHub, Databricks, SugarCRM
Historically, the vast majority of OSS projects have pursued the first monetization strategy (support and consulting), but at their core, all of these models allow a company to earn money on their “bread and butter” and feed the development team as needed.
Influx of VC Dollars
An interesting recent development has been the huge inflows of VC/PE money into the industry. Going back to 2004, only nine firms producing OSS had raised venture funding, but by 2015, that number had exploded to 110, raising over $7 billion from venture capital funds (Chart 2).
Underpinning this development is the large addressable market that OSS companies benefit from. Akin to other “platform” plays, OSS allows companies (in theory) to rapidly expand their customer base, with the idea that at some point in the future they can leverage this growth by beginning to tack-on appropriate monetization models in order to start translating their customer base into revenue, and profits.
At the same time, we’re also seeing an increasing number of reports about potential IPOs in the sector. Several OSS commercial companies, some of them unicorns with $1B+ valuations, have been rumored to be mulling a public markets debut (MongoDB, Cloudera, MapR, Alfresco, Automattic, Canonical, etc.).
With this in mind, the obvious question is whether the OSS model works from a financial standpoint, particularly for VC and PE investors. After all, the venture funding model necessitates rapid growth in order to comply with their 7-10 year fund life cycle. And with a product that is at its core free, it remains to be seen whether OSS companies can pin down the correct monetization model to justify the number of dollars invested into the space.
Answering this question is hard, mainly because most of these companies are private and therefore do not disclose their financial performance. Usually, the only sources of information that can be relied upon are the estimates of industry experts and management interviews where unaudited key performance metrics are sometimes disclosed.
Nevertheless, in this article, I take a look at the evidence from the only two public OSS companies in the market, Red Hat and Hortonworks, and use their publicly available information to try and assess the more general question of whether the OSS model makes sense for VC investors.
Case Study 1: Red Hat
Red Hat is an example of a commercial company that pioneered the open source business model. Founded in 1993, it went public in 1999, right before the dot-com bubble burst, achieving what was at the time the 8th-biggest first-day share price gain in the history of Wall Street.
At the time of their IPO, Red Hat was not a profitable company, but since then has managed to post solid financial results, as detailed in Table 1.
Instead of chasing multifold annual growth, Red Hat has followed the “boring” path of gradually building a sustainable business. Over the last ten years, the company increased its revenues tenfold from $200 million to $2 billion with no significant change in operating and net income margins. G&A and marketing expenses never exceeded 50% of revenue (Chart 3).
The above indicates therefore that OSS companies do have a chance to build sustainable and profitable business models. Red Hat’s approach of focusing primarily on offering support and consulting services has delivered gradual but steady growth, and the company is hardly facing any funding or solvency problems, posting decent profitability metrics when compared to peers.
However, what is clear from the Red Hat case study is that such a strategy can take time—many years, in fact. While this is a perfectly reasonable situation for most companies, the issue is that it doesn’t sit well with venture capital funds who, by the very nature of their business model, require far more rapid growth profiles.
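To put numbers on this mismatch, the back-of-envelope sketch below compares Red Hat's realized growth rate, using the figures quoted above, with the growth rate a typical venture timeline implies. The 10x return target and 8-year holding period on the VC side are illustrative assumptions of mine, not figures from Red Hat or any particular fund.

```python
# Back-of-envelope: Red Hat's realized revenue CAGR vs. the growth a
# typical VC fund timeline implies. VC-side inputs (10x target return,
# 8-year hold) are illustrative assumptions, not sourced figures.

def cagr(start: float, end: float, years: float) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# Red Hat: revenue grew roughly tenfold, from ~$200M to ~$2B, in ten years.
red_hat_cagr = cagr(200, 2_000, 10)

# Hypothetical VC requirement: a 10x outcome within an 8-year hold.
vc_implied_cagr = cagr(1, 10, 8)

print(f"Red Hat realized CAGR:    {red_hat_cagr:.1%}")    # ~25.9% per year
print(f"VC-implied CAGR (10x/8y): {vc_implied_cagr:.1%}")  # ~33.4% per year
```

Even on these rough numbers, a steady support-and-consulting business compounds well below the trajectory a fund's life cycle demands.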
More troubling than that, for venture capital investors, is that the OSS model may in and of itself not allow for the type of growth that such funds require. As MySQL’s long-time CEO Marten Mickos put it, MySQL’s goal was “to turn the $10 billion a year database business into a $1 billion one.”
In other words, the open source approach limits the market size from the get-go by making the company focus only on enterprise customers who are able to pay for support, and foregoing revenue from a long tail of SME and retail clients. That may help explain the company’s less than exciting stock price performance post-IPO (Chart 4).
If such a conclusion were true, this would spell trouble for those OSS companies that have raised significant amounts of VC dollars along with the funds that have invested in them.
Case Study 2: Hortonworks
To further assess our overarching question of OSS’s viability as a venture capital investment, I took a look at another public OSS company: Hortonworks.
The Hadoop vendors’ market is an interesting one because it is completely built around the “open core” idea (another comparable market being the NoSQL database space, with MongoDB, DataStax, and Couchbase).
All three of the largest Hadoop vendors—Cloudera, Hortonworks, and MapR—are based on essentially the same OSS stack (with some specific differences) but interestingly have different monetization models. In particular, Hortonworks—the only public company among them—is the only player that provides all of its software for free and charges only for support, consulting, and training services.
At first glance, Hortonworks’ post-IPO path appears to differ considerably from Red Hat’s in that it seems to be a story of a rapid growth and success. The company was founded in 2011, tripled its revenue every year for three consecutive years, and went public in 2014.
Immediate reception in the public markets was strong, with the stock popping 65% in the first few days of trading. Nevertheless, the company’s story since IPO has turned decisively sour. In January 2016, the company was forced to access the public markets again for a secondary public offering, a move that prompted a 60% share price fall within a month (Chart 5).
Underpinning all this is the fact that, despite top-line growth, the company continues to incur substantial, and growing, operating losses. It’s evident from the financial statements that its operating performance has worsened over time, mainly because operating expenses have grown faster than revenue, leading to increasing losses as a percent of revenue (Table 2).
In every period in question, Hortonworks spent more on sales and marketing than it earned in revenue. On top of that, the company incurred significant R&D and G&A expenses as well (Table 2).
On average, Hortonworks is burning around $100 million cash per year (less than its operating loss because of stock-based compensation expenses and changes in deferred revenue booked on the Balance Sheet). This amount is very significant when compared to its $630 million market capitalization and circa $350 million raised from investors so far. Of course, the company can still raise debt (which it did, in November 2016, to the tune of a $30 million loan from SVB), but there’s a natural limit to how often it can tap the debt markets.
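As a rough sanity check on those figures, the sketch below shows how quickly a burn rate of that size consumes capital. It deliberately overstates the runway by pretending every dollar ever raised were still in the bank; the actual cash balance is not disclosed in the sources used here.

```python
# Rough runway arithmetic from the figures quoted above. The company's
# actual cash balance is not disclosed, so this is an upper bound.

annual_burn = 100    # $M per year, approximate cash burn
total_raised = 350   # $M of equity raised from investors so far
debt_raised = 30     # $M loan from SVB, November 2016

# Upper bound on runway: as if no cash had been spent yet.
max_runway_years = (total_raised + debt_raised) / annual_burn
print(f"Runway upper bound: {max_runway_years:.1f} years")  # ~3.8 years

# Each year of burn relative to the $630M market capitalization.
market_cap = 630  # $M
print(f"Annual burn vs. market cap: {annual_burn / market_cap:.0%}")  # ~16%
```

Even in this best case, the runway is under four years, and each year of burn consumes roughly a sixth of the market cap, underlining how dependent the model is on continued external funding.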
All of this might of course be justified if the marketing expense served an important purpose. One such purpose could be the company’s need to diversify its customer base. In fact, when Hortonworks first launched, the company was heavily reliant on a few major clients (Yahoo and Microsoft, the latter accounting for 37% of revenues in 2013). This has now changed, and by 2016, the company reported 1000 customers.
But again, even if this were the reason, one cannot ignore the costs required to achieve it. After all, marketing expenses increased eightfold between 2013 and 2015. And how valuable are the clients that Hortonworks has acquired? Unfortunately, the company reports little information on the makeup of its client base, so it’s hard to assess other important metrics such as client “stickiness.” But in a competitive OSS market where “rival developers could build the same tools—and make them free—essentially stripping the value from the proprietary software,” strong doubts loom.
With all this in mind, returning to our original question of whether the OSS model makes for good VC investments, while the Hortonworks growth story certainly seems to counter Red Hat’s—and therefore sustain the idea that such investments can work from a VC standpoint—I remain skeptical. Hortonworks seems to be chasing market share at exorbitant and unsustainable costs. And while this conclusion is based on only two companies in the space, it is enough to raise serious doubts about the overall model’s fit for VC.
Why are VCs Investing in OSS Companies?
Given the above, it seems questionable that OSS companies make for good VC investments. So with this in mind, why do venture capital funds continue to invest in such companies?
Good Fit for a Strategic Acquisition
Apart from going public and growing organically, an OSS company may find a strategic buyer to provide a good exit opportunity for its early stage investors. And in fact, the sector has seen several high profile acquisitions over the years (Table 3).
What makes an OSS company a good target? In general, the underlying strategic rationale for an acquisition might be as follows:
Getting access to the client base. Sun is reported to have been motivated by this when it acquired MySQL: it wanted to access the SME market and cross-sell other products to smaller clients. Simply forking the product or developing a competing technology internally would not have delivered the customer base and would have forced Sun to incur additional customer acquisition costs.
Getting control over the product. The ability to influence further development of the product is a crucial factor for a strategic buyer. This allows it to build and expand its own product offering based on the acquired products without worrying about sudden substantial changes in it. Example: Red Hat acquiring Ansible, KVM, Gluster, Inktank (Ceph), and many more
Entering adjacent markets. Acquiring open source companies in adjacent market segments, again, allows a company to expand the product offering, which makes vendor lock-in easier, and scales the business further. Example: Citrix acquiring XenSource
Acquiring the team. This is more relevant for smaller and younger projects than for larger, more well-established ones, but is worth mentioning.
What about the financial rationale? The standard transaction multiples valuation approach completely breaks down when it comes to the OSS market. Multiples reach 20x and even 50x price/sales and are therefore largely irrelevant, leading to the obvious conclusion that such deals are motivated strategically rather than financially, and that the financial health of the target is more of a “nice to have.”
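To make the disconnect concrete, here is a tiny illustration of what such multiples imply; the $20 million revenue figure is hypothetical and does not describe any specific target.

```python
# Illustrative only: what 20x and 50x price/sales multiples imply for
# an acquisition price. The $20M revenue figure is hypothetical.

revenue = 20  # $M, hypothetical target's annual revenue

for multiple in (20, 50):
    price = revenue * multiple
    print(f"At {multiple}x P/S: ${price}M acquisition price")
# At 20x: $400M. At 50x: $1,000M. Prices like these only make sense if
# the buyer values the asset strategically rather than financially.
```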
With this in mind, would a strategy of investing in OSS companies with the eventual aim of a strategic sale make sense? After all, there seems to be a decent track-record to go off of.
My assessment is that this strategy on its own is not enough. Pursuing such an approach from the start is risky—there are not enough exits in the history of OSS to justify the risks.
A Better Monetization Model: SaaS
While the promise of a lucrative strategic sale may be enough to motivate VC funds to put money to work in the space, as discussed above, it remains a risky path. As such, it feels like the rationale for such investments must be reliant on other factors as well. One such factor could be returning to basics: building profitable companies.
But as we have seen in the case studies above, this strategy doesn’t seem to be working out so well, certainly not within the timeframes required for VC investors. Nevertheless, it is important to point out that both Red Hat and Hortonworks primarily focus on monetizing through offering support and consulting services. As such, it would be wrong to dismiss OSS monetization prospects altogether. More likely, monetization models focused on support and consulting are inappropriate, but others may work better.
In fact, the SaaS business model might be the answer. As per Peter Levine’s analysis, “by packaging open source into a service, […] companies can monetize open source with a far more robust and flexible model, encouraging innovation and ongoing investment in software development.”
Why is SaaS a better model for OSS? There are several reasons for this, most of which are applicable not only to OSS SaaS, but to SaaS in general.
First, SaaS opens the market for the long tail of SME clients. Smaller companies usually don’t need enterprise support and on-premises installation, but may already have sophisticated needs from a technology standpoint. As a result, it’s easier for them to purchase a SaaS product and pay a relatively low price for using it.
Citing MongoDB’s VP of Strategy, Kelly Stirman, “Where we have a suite of management technologies as a cloud service, that is geared for people that we are never going to talk to and it’s at a very attractive price point—$39 a server, a month. It allows us to go after this long tail of the market that isn’t Fortune 500 companies, necessarily.”
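That quote hints at the unit economics: at a low per-server price, a long tail of small customers still compounds into meaningful recurring revenue. A minimal sketch, in which the customer and server counts are purely assumed for illustration:

```python
# Illustrative long-tail SaaS arithmetic built on the $39/server/month
# price point quoted above. Customer and server counts are assumptions.

price_per_server_month = 39  # USD, from the MongoDB quote above

sme_customers = 5_000          # hypothetical long tail of small clients
avg_servers_per_customer = 3   # hypothetical average fleet size

monthly_revenue = price_per_server_month * sme_customers * avg_servers_per_customer
annual_revenue = monthly_revenue * 12
print(f"Annual recurring revenue: ${annual_revenue / 1e6:.2f}M")  # ~$7.02M
```

Revenue of this kind arrives with no enterprise sales cycle attached, which is exactly why the long tail is attractive.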
Second, SaaS scales well. It creates economies of scale for clients, allowing them to save money on infrastructure and operations through the aggregation of resources and the centralization of customer requirements, which improves manageability.
This, therefore, makes it an attractive model for clients who, as a result, will be more willing to lock themselves into monthly payment plans in order to reap the benefits of the service.
Finally, SaaS businesses are more difficult to replicate. In the traditional OSS model, everyone has access to the source code, so the support and consulting business model offers the incumbent little protection from new market entrants.
In the SaaS OSS case, the investment required for building the infrastructure upon which clients rely is fairly onerous. This, therefore, builds bigger barriers to entry, and makes it more difficult for competitors who lack the same amount of funding to replicate the offering.
Success Stories for OSS with SaaS
Importantly, OSS SaaS companies can be financially viable on their own. GitHub is a good example of this.
Founded in 2008, GitHub bootstrapped the business for four years without any external funding. The company has reportedly always been cash-flow positive (except for 2015) and generated an estimated $100 million in revenue in 2016. In 2012, it accepted $100 million in funding from Andreessen Horowitz and later, in 2015, $250 million from Sequoia at an implied $2 billion valuation.
Another well-known successful OSS company is Databricks, which provides commercial support for Apache Spark, but—more importantly—allows its customers to run Spark in the cloud. The company has raised $100 million from Andreessen Horowitz, Data Collective, and NEA. Unfortunately, we don’t have a lot of insight into their profitability, but they are reported to be performing strongly and already had more than 500 companies using the technology as of 2015.
Generally, many OSS companies are in one way or another gradually drifting towards the SaaS model or other types of cloud offerings. For instance, Red Hat is moving to PaaS over support and consulting, as evidenced by OpenShift and the acquisition of AnsibleWorks.
Different ways of mixing support and consulting with SaaS are common too. We unfortunately don’t have detailed statistics on Elastic’s on-premises vs. cloud product split, but we can see from the presentations of its closest competitor, Splunk, that its SaaS offering is gaining scale: its share of revenue is expected to triple by 2020 (Chart 6).
Investable Business Model or Not?
To conclude, while recent years have seen an influx of venture capital dollars poured into OSS companies, there are strong doubts that such investments make sense if the monetization models being used remain focused on the traditional support and consulting model. Such a model can work (as seen in the Red Hat case study) but cannot scale at the pace required by VC investors.
Of course, VC funds may always hope for a lucrative strategic exit, and there have been several examples of such transactions. But relying on this alone is not enough. OSS companies need to innovate around monetization strategies in order to build profitable and fast-growing companies.
The most plausible answer to this conundrum may come from switching to SaaS as a business model. SaaS allows one to tap into a longer-tail of SME clients and improve margins through better product offerings. Quoting Peter Levine again, “Cloud and SaaS adoption is accelerating at an order of magnitude faster than on-premise deployments, and open source has been the enabler of this transformation. Beyond SaaS, I would expect there to be future models for open source monetization, which is great for the industry”.
Whatever ends up happening, the sheer amount of venture investment into OSS companies means that smarter monetization strategies will be needed to keep the open source dream alive.