News & Press

Follow our news and trends to stay up to date.

Rethinking the Professional Services Organization Post-2020 – Constellation

In 2020, professional services organizations (PSOs) are experiencing at least three distinct and substantial disruptions to their business, often compounded by several secondary ones.

The arrival of COVID-19 earlier this year led to lockdowns around the globe that have curtailed client demand and hampered project delivery. Those same lockdowns have also largely confined project staff to their homes, slowing down client projects and impacting billable work. Finally, a deepening economic downturn is wreaking havoc on client budgets as well as on PSO firms’ own finances, leading to urgent calls for cost cutting and new efficiencies.

It’s the proverbial perfect storm of major challenges, and it is dramatically affecting customer success as well as the morale of the talent base at most PSOs. The result has been a widespread push within firms to rapidly rethink and update their operating models, determine the right mitigations, and respond effectively.

Except, as many have found, responding quickly and effectively to these challenges is hard to do when so much uncontrolled change is currently taking place.

In fact, many PSOs will be tempted to focus first on cost control to ensure short-term survival. While this is a natural response that offers immediate and tangible control over a once-in-a-lifetime event filled with uncertainty, organizations must also remain mindful of the vital characteristics of professional services firms. Overly enthusiastic responses to this year’s disruptions can adversely impact the organization’s strategic operating model over the long term.

Sustaining a Bridge to a Better Future

The risk lies in damaging the overarching characteristics of professional services firms, in particular the two stand-out characteristics that make them unique in the industry. One is the nature of the highly bespoke work they do, tailored for each client regardless of tools, services model, or data. The second is the special nature of cultivating successful long-term client relationships. Both of these characteristics require intensive and skilled delivery capabilities. Underpinning both is the highly leveraged human capital model that determines both revenue and profit for the PSO, and which must be finely tuned across the many layers of the organization.

This was a painful lesson the PSO industry learned from the 2008-2009 financial crisis: decisions made too expediently negatively impacted firms long after the crisis had passed. Studies have shown that decisions to reduce talent or cut compensation and billable time affected firms’ client relationships as well as their brand image for many years. Conversely, the firms that weathered the short-term pressure and managed to keep hard-to-replace human capital prospered as the economy recovered. In the same vein, organizations with a clear sense of the type of PSO that must emerge from the veil of 2020 — and what it will take to thrive in the resulting market conditions — will be in the best position to prosper.

The Rapidly Shifting Model of Professional Services Organizations in the Pandemic Age

Figure 1: The Pre-2020 Model of PSOs is Giving Way to a New Client/Talent Focused Model

Priorities First: Update the Firm to Reflect New Realities

It’s therefore paramount that any cost control discussions be viewed through the constructive lens of the organization’s business and talent strategy, along with its future operating model. To do that, the business must first recalibrate its core strategy and execution with today’s fresh realities in mind:

  • Clients are going to be much more selective and demanding about projects going forward
  • Deal flows and talent sourcing will be more turbulent until well after the pandemic passes
  • Delivery talent will require better enablement and support for their new daily work realities
  • Major opportunities now exist to create a more holistic and dynamic PSO operating model that costs less, loses less talent, and actually increases margin
  • Significant new types of business and growth opportunities have come within reach to offset recent revenue impacts

In other words, there are major opportunities to do more than just survive through brute-force cost cutting. More elegant solutions present themselves if the PSO first engages in a rapid rethinking grounded in the current art of the possible: in essence, a combined business and digital transformation. This transformation will generally consist of a combination of bold new ideas, better integration and consolidation of operational activities, and powerful new technology tools, including automation, holistic user experience upgrades, and potent new concepts from the realm of digital business.

The Evolution of the PSO Through Proactive, Targeted Transformation

As it turns out, the typical PSO has already been experiencing quite a bit of change over the last couple of years. Trends like more dynamic staffing approaches, better automation of delivery, and more project analytics and diagnostics have all led to improved services, higher margins, and greater customer success. Led initially by technology, which has enabled simultaneous advances in re-imagining PSO operating models, a new type of PSO is emerging that is more agile, lean, digitally infused, and experience-centric.

Driven by industry trends, technology innovations, and changes in the world, below are the key shifts being seen in PSOs as a result of the events of 2020. These trends are grouped into three categories, focusing on the business and its clients, the worker, and the overall health and wellbeing of all PSO stakeholders.

The Business of PS: Trends

  • Predictive operations. Projects are now instrumented well enough, and sufficient historical project baseline data is now available, to routinely predict risks and anticipate opportunities before they actually happen (a minimal sketch follows this list). Using these insights can lead to significant cost savings, higher success rates, and quality improvements.
  • M&A support. A wave of mergers and acquisitions will inevitably follow the events of 2020, particularly among smaller PS firms. PSOs that have sophisticated infrastructure and processes for managing the financial, operational, and structural merging of acquired firms will have an advantage.
  • Large portfolio management. Most PSOs are not taking advantage of the ability to manage large portfolios of projects across a client to maximize talent reuse, achieve economies of scale, and improve delivery.
  • Next-generation client engagement. As the industry becomes hypercompetitive, the time is right for a more engaging, sustained, informative, and transparent connection to the client using a combination of technology, user experience, and real-time data flows. This higher-quality delivery approach will result in increased project share within clients relative to other PSO firms.
  • New growth models. Most PSOs have readily accessible untapped growth opportunities which they can add to their existing portfolios to increase sales.
  • New business models. The time is ripe for PSOs to move laterally into adjacent business models such as subscriptions, IP licensing, strategic data services, and annual recurring revenue, which can provide vital new green fields for resilience and expansion.
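
To make the “predictive operations” idea in the first bullet concrete, here is a minimal, purely illustrative sketch of scoring an in-flight project’s risk from historical baseline data. This is not Constellation’s method: the features, the synthetic data, and the choice of a scikit-learn logistic regression are all assumptions made for brevity.

    # Illustrative sketch only: predicting project risk from historical
    # baseline metrics. All feature names and data here are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)

    # Hypothetical historical baseline: planned hours, share of senior
    # staff, scope changes per month -> did the project overrun (1/0)?
    X = rng.normal(size=(500, 3))
    # Synthetic ground truth: more scope churn and less senior staffing
    # means more overrun risk.
    y = (X[:, 2] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    model = LogisticRegression().fit(X, y)

    # Score an in-flight project before trouble actually materializes.
    new_project = np.array([[0.2, -1.0, 1.5]])  # low seniority, high churn
    print("overrun risk:", model.predict_proba(new_project)[0, 1])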

The PS Career: Trends

  • Smart recruitment. New models exist for recruiting and project matching via AI, while talent screening and the pre/onboarding process can be made more intelligent and automated. This will drive bottom-line business benefits while also increasing acquisition and retention.
  • Better talent experience. The top workers will have expectations of a general return to the quality of work life they had prior to 2020. PSOs that proactively deliver on this in remote work scenarios while also uplevelling the overall worker experience will have significant retention benefits.
  • Learning the future of PS. The existential changes and new opportunities in the PSO world must be better communicated to workers, so they can help realize as well as reap the benefits enumerated here.
  • Hybrid talent sourcing. New dynamic staffing models — aka the Gig Economy for professional services — will mix with full-time employment to create much stronger teams that are also more cost contoured while attracting new types of diverse talent.

Health and Wellbeing: Trends

  • Delivery team engagement. Creating enabling, more connected working environments in the new remote work situation, particularly for delivery teams, is essential to preserve a connection to the “mothership” while also nurturing workers through tough and challenging times.
  • Wellbeing tracking. Tools and processes that track the physical, mental, and psychological health of PSO stakeholders — from project staff to back office to clients — and provide appropriate assistance when needed will be increasingly expected, and have already become a hallmark of best-in-class employers.

In summary, PSOs have a historic opportunity to adapt to the significant disruptions they have faced so far in 2020. By adopting an updated operating model and quickly delivering on it with clients and talent using new solutions, PSOs can avoid the most damaging types of cost cutting while positioning themselves for growth in 2021 and beyond. That is, as long as they are willing to think outside the box and adopt sensible yet far-reaching shifts in their strategies, tools, and operating models.

Source : https://www.constellationr.com/blog-news/rethinking-professional-services-organization-post-2020

Which investments generate the greatest value in venture: Consumer or Enterprise? – Sapphire

A Dive into Enterprise vs Consumer Exit Activity

In today’s fast-paced market — where major funding or exit announcements seem to roll in daily — we at Sapphire Partners like to take a step back, ask big-picture questions, and then find concrete data to answer them.

One of our favorite areas to explore is: as a venture investor, do your odds of making better returns improve if you invest only in enterprise or only in consumer companies? Or do you need a mix of both to maximize your returns? And how should recent investment and exit trends influence your investing strategy, if at all?

To help answer questions like these, we’ve collected and analyzed exit data for years.  What we’ve found is pretty intriguing: portfolio value creation in enterprise tech is often driven by a cohort of exits, while value creation in consumer tech is generally driven by large, individual exits. 

In general, this trend has held for several years and has led to the general belief that, if you are a consumer investor, the clear goal is not to miss that “one deal” with a huge spike in exit value creation (easier said than done, of course), and that if you’re an enterprise investor, you want to build a “basket of exits” in your portfolio.

What Creates More Portfolio Value: Consumer or Enterprise?

2019 has been a powerhouse year for consumer exit value, buoyed by Uber and Lyft’s IPOs (their recent declines in stock price notwithstanding). The first three quarters of 2019 alone surpassed every year since 1995 for consumer exit value – and we’re not done yet. If the consumer exit pace continues at this scale, we will be on track for the most value created at exit in 25 years, according to our analysis.

Source: S&P Capital IQ, Pitchbook

Since 1995, the number of enterprise exits has consistently outpaced consumer exits (blue line versus green line above), but 2019 is the closest to seeing those lines converge in over two decades (223 enterprise vs 208 consumer exits in the first three quarters of 2019). Notably, in five of the past nine years, the value generated by consumer exits has exceeded enterprise exits.[1]

At Sapphire, we observe the following:

  • Venture-backed enterprise tech companies have generated $884B in value since 1995; $349B from M&A and $535B from IPOs.
  • Venture-backed consumer tech companies have generated $773B in value since 1995; $153B from M&A and $620B from IPOs.
  • In total, there were 5,600+ venture-backed exits in enterprise tech and 3,300+ exits in consumer tech.

While the valuation at IPO serves as a proxy for an exit for venture investors, most investors face the lockup period.[2] 2019 has generated a tremendous amount of value through IPOs, roughly $223 billion. However, after trading in the public markets, the aggregate value of those IPOs has decreased by $81 billion as of November 1, 2019.[3] This decrease is driven largely by Uber and Lyft, which on an absolute-value basis account for roughly 66% of this markdown (about $53 billion), according to our figures. Over half of the IPO exits in 2019 have been consumer, and despite these stock price changes, consumer exits are still outperforming enterprise exits YTD given the enormous alpha they generated initially.

As we noted in the introduction, historical data since 1995 shows that years of high value creation in enterprise technology are often driven by a cohort of exits, whereas consumer value creation is often driven by large, individual exits. The chart below illustrates this, showing a side-by-side comparison of exits and value creation.

Source: Pitchbook

At Sapphire, we observe the following:

  • The top five enterprise companies with the largest exits account for $79B in value creation, or 9% of the $884B generated in the enterprise category since 1995.
  • The top five consumer companies with the largest exits account for $276B in value creation, or 36% of the $773B generated in the consumer category since 1995.

The value generated by the top five consumer companies is 3.5x that generated by the top five enterprise companies.
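
These concentration figures follow directly from the numbers above; as a quick sanity check, here they are recomputed in a few lines of Python.

    # Recomputing the top-five concentration figures quoted above.
    top5_enterprise, total_enterprise = 79, 884  # $B since 1995
    top5_consumer, total_consumer = 276, 773     # $B since 1995

    print(f"enterprise top-5 share: {top5_enterprise / total_enterprise:.0%}")  # ~9%
    print(f"consumer top-5 share: {top5_consumer / total_consumer:.0%}")        # ~36%
    print(f"consumer/enterprise top-5 ratio: {top5_consumer / top5_enterprise:.1f}x")  # ~3.5x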

Understanding the Consumer Comeback

While the total value of enterprise companies exited since 1995 ($884B) exceeds that of consumer exits ($773B), consumer returns have been making a comeback over the last 15 years. Specifically, total consumer value exited since 2004 ($538B) exceeds that of enterprise exits ($536B). The difference has become starker in the past 10 years, with total consumer value exited ($512B) surpassing that of enterprise ($440B). As seen in the chart below, the rolling 10-year total enterprise exit value exceeded that of consumer until the decade spanning 2003-2012, when consumer exit value took the lead.

Source: S&P Capital IQ, Pitchbook

We believe the size of, and the inevitable hype around, consumer IPOs can cloud investor judgment, since the volume of successful deals is not increasing. The data clearly shows that the surge in outsized returns comes from the outliers in consumer.

As exhibited below, large consumer outliers since 2011, such as Facebook, Uber, and Snap, often account for more than the sum of all enterprise exits in a given year. For example, in the first three quarters of 2019, there were 15 enterprise exits valued at over $1B, for a total of $96B. Over the same period, there were nine consumer exits valued at over $1B, for a total of $139B. Anecdotally, this can be seen in four of the past five years being headlined by a consumer exit. While 2016 was headlined by an enterprise exit, it was a particularly quiet exit year.

  • 2015 – Consumer: Fitbit ($6B)
  • 2016 – Enterprise: Nutanix ($5B)
  • 2017 – Consumer: Snap ($27B)
  • 2018 – Consumer: Dropbox ($11B)
  • First 3 quarters of 2019 – Consumer: Uber ($85B)

Source: S&P Capital IQ, Pitchbook

Enterprise Deals Still Rule in M&A

While consumer deals have taken the lead in IPO value in recent years, enterprise still has the clear edge on the M&A front. Since 1995 there have been 76 M&A exits of $1 billion or more in value, of which 49 are enterprise companies and 27 are consumer companies. The vast majority of M&A value since 1995 has come from enterprise companies — more than 2x that of consumer.

Similar to the IPO chart above, acquisition value of enterprise companies outpaced that of consumer companies until recently, with 2010-2014 being the exception.

Source: S&P Capital IQ, Pitchbook

Of course, looking only at outcomes of $1 billion or more in value covers only a fraction of where most VC exits occur. Slightly less than half of all exits in both enterprise and consumer are $50 million or under in size, and more than 70 percent of all exits are under $200 million. Moreover, the distribution chart below captures only the companies for which we have exit values. If we change the denominator to all exits captured in our database (i.e., measure the percentage of $1 billion-plus exits against this higher denominator), $1 billion-plus outcomes drop to around 3 percent of all outcomes for both enterprise and consumer.

Source: S&P Capital IQ, Pitchbook

What Does All of this Mean for Venture Investors?

There’s an enormous volume of information available on startup exits, and at Sapphire Partners, we ground our analyses and theses in the numbers. At the same time, once we’ve dug into the details, it’s equally important to zoom out and think about what our findings mean for our GPs and fellow LPs. Here are some clear takeaways from our perspective:

  • Consumer exit value has surpassed enterprise exit value over the past 15 years.
  • Consumer exit value is highly concentrated in the top deals.
  • There are more billion-dollar enterprise IPOs than billion-dollar consumer exits, so you may have more opportunities for a unicorn enterprise outcome than for a consumer one.
  • However, if you happen to invest in one of the outlier consumer exits, you could experience significant returns.  

In a nutshell, as LPs we like to see both consumer and enterprise deals in our underlying portfolio, as they each provide different exposures and return profiles. However, when these investments get rolled up as part of a venture fund’s portfolio, success is then often contingent on the fund’s overall portfolio construction… but that’s a question to explore in another post.


NOTE: Total Enterprise Value (“TEV”) presented throughout analysis considers information from CapIQ when available, and supplements information from Pitchbook last round valuation estimates when CapIQ TEV is not available. TEV (Market Capitalization + Total Debt + Total Preferred Equity + Minority Interest – Cash & Short Term Investments) is as of the close price for the initial date of trading. Classification of “Enterprise” and “Consumer” companies presented herein is internally assigned by Sapphire. Company logos shown in various charts presented herein reflect the top (4) companies of any particular time period that had a TEV of $1BN or greater at the time of IPO, with the exception of chart titled “Exits by Year, 1995- Q3 2019”, where logos shown in all charts presented herein reflect the top (4) companies of any particular year that had a TEV of $7.5BN or greater at the time of IPO. During a time period in which less than (4) companies had such exits, the absolute number of logos is shown that meet described parameters. Since 1995 refers to the time period of 1/1/1995 – 9/30/2019 throughout this article.

[1] Includes the first three quarters of 2019. IPO exit values refer to the total enterprise value of a company at the end of the first day of trading according to S&P Capital IQ. Analysis considers a combination of Pitchbook and S&P Capital IQ to analyze US venture-backed companies that exited through acquisition or IPO between 1/1/1995 – 9/30/2019.
[2] Lockup period is a predetermined amount of time following an initial public offering (“IPO”) where large shareholders, such as company executives and investors representing considerable ownership, are restricted from selling their shares.
[3] Total enterprise value at the end of 10/15/2019 according to S&P Capital IQ.
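
The TEV definition in the note above translates directly into a simple calculation. The sketch below merely restates that formula in Python; the sample figures are invented for illustration and are not any company’s actual financials.

    def total_enterprise_value(market_cap, total_debt, preferred_equity,
                               minority_interest, cash_and_st_investments):
        """TEV as defined in the note above, taken as of the close price
        on the initial day of trading."""
        return (market_cap + total_debt + preferred_equity
                + minority_interest - cash_and_st_investments)

    # Hypothetical figures in $ millions:
    print(total_enterprise_value(10_000, 1_500, 0, 200, 2_700))  # -> 9000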

Source : https://sapphireventures.com/blog/openlp-series-which-investments-generate-the-greatest-value-in-venture-consumer-or-enterprise/

Behind the scenes: Data and technology bring food product R&D into the 21st century – Food Dive

With CPG companies under pressure to develop items faster and stretch their spending, Conagra, Mars Wrigley and Ferrara are rethinking the decades-old way of creating new things for consumers.

There was little doubt four years ago that Conagra Brands’ frozen portfolio was full of iconic items that had grown tired and, according to its then-new CEO Sean Connolly, were “trapped in time.” 

While products such as Healthy Choice — with its heart-healthy message — and Banquet — popular for its $2 turkey and gravy and Salisbury steak entrees — were still generating revenue, the products looked much the same as decades before. The result: sales fell sharply as consumers turned to trendier flavors and better-for-you options.

Executives realized the decades-old process used to create and test products wasn’t translating into meaningful sales. Simply introducing new flavors or boosting advertising was no longer enough to entice consumers to buy. If Conagra maintained the status quo, the CPG giant risked exacerbating the slide and putting its portfolio of brands further behind the competition.

“We were doing all this work into what I would call validation insights, and things weren’t working,” Bob Nolan, senior vice president of demand sciences at Conagra, told Food Dive. “How could it not work if we asked consumers what they wanted, and we made it, and then it didn’t sell? … That’s when the journey started. Is there a different way to approach this?”

Credit: Conagra 

Nolan and other officials at Conagra eventually decided to abandon traditional product testing and market research in favor of buying huge quantities of behavioral data. Executives were convinced the data could do a better job of predicting eventual product success than consumers sitting in an artificial setting offering feedback.

Conagra now spends about $15 million less on testing products than it did three years ago, with much of the money now going toward buying data on food service, natural products, at-home consumption, grocery retail and loyalty cards. When Nolan started working at Conagra in 2012, he estimated 90% of his budget at the company was spent on traditional validation research such as testing potential products, TV advertisements or marketing campaigns. Today, money spent on those methods has been cut to zero.

While most food and beverage companies have not changed how they go about testing their products as much as Conagra, CPG businesses throughout the industry are collectively making meaningful changes to their own processes.

With more data available now than ever before, companies can change their testing protocols to answer questions they previously might not have had the budget or time to address. They’re also turning to technology such as video and smartphones to engage with consumers immediately, or to see firsthand how they would respond to prototype products in real-life settings, like their own homes.

As food manufacturers scramble to remain competitive and meet shoppers’ insatiable demand for new tastes and experiences, changing how they go about testing can increase the likelihood that a product succeeds — enabling corporations to reap more revenue and avoid being one of the tens of thousands of products that fail every year.

For Conagra, the new approach is already paying off. One success story came in the development of the company’s frozen Healthy Choice Korean-Inspired Beef Power Bowl. By combing through data collected from the natural food channel and specialty stores like Whole Foods and Sprouts Farmers Market, the CPG giant found people were eating more of their food in bowls — a contrast to offerings in trays.

At the same time, information gathered from restaurants showed Korean was the fastest-growing cuisine. The data also indicated the most popular flavors within that ethnic category. Nolan said without the data it would have been hard to instill confidence at Conagra that marketing a product like that would work, and executives would have been more likely to focus on flavors the company was already familiar with.

Since then, Conagra has rebranded Healthy Choice around cleaner-label foods with recognizable, modern ingredients that were incorporated into innovations such as the Power Bowl. The overhaul helped rejuvenate the 34-year-old brand, with sales jumping 20% during the last three years after declining about 10% during the prior decade, according to the company.

Conagra has experienced similar success by innovating its other frozen brands, including Banquet and Marie Callender’s. For a company whose frozen sales total $5.1 billion annually, the segment is an important barometer of success.

A decades-old approach

For years, food companies would come up with product ideas using market research approaches that dated back to the 1950s. Executives would sit in a room and mull over ways to grow a brand. They would develop prototypes before testing and retesting a few of them to find the one that would have the best chance of resonating with consumers. Data used was largely cultivated through surveys or focus groups to support or debunk a company idea.

“It’s an old industry and innovation has been talked about before but it’s never been practiced, and I think now it’s starting to get very serious because CPG companies are under a lot of pressure to innovate and get to market faster,” Sean Bisceglia, CEO of Curion, told Food Dive. “I really fear the ones that aren’t embracing it and practicing it … may damage their brand and eventually damage their sales.”

Credit: Curion 

Information on nearly every facet of a consumer’s shopping habits and preferences can be easily obtained. There is data showing how often people shop and where they go. Tens of millions of loyalty cards reveal which items were purchased at what store, and even the checkout lane the person was in. Data is available on a broader level showing how products are selling, but CPGs can drill down on an even more granular level to determine the growth rate of non-GMO or organic, or even how a specific ingredient like turmeric is performing.

Market research firms such as Nielsen and Mintel collect reams of valuable data, including when people eat, where and how they consume their food, how much time they spend eating it and even how it was prepared, such as by using a microwave, oven or blender. 

To help customers who want fast results for a fraction of the cost, Bisceglia said Curion has created a platform in which a product can be tried out on a random population group — as opposed to a specifically targeted audience defined by certain attributes, like stay-at-home moms in their 30s with two kids — with the data given to the client without the traditional in-depth analysis. It can cost a few thousand dollars, with results available in a few days, compared to a far more complicated and robust testing process over several months that can sometimes cost hundreds of thousands of dollars, he said.

Curion, which has tested an estimated 8,000 products on 700,000 people during the last decade, is creating a database that could allow companies to avoid testing altogether.

For example, a business creating a mango-flavored yogurt could initially use data collected by a market research firm or someone else showing how the variety performed nationwide or by region. Then, as product development is in full swing, the company could use Curion’s information to show how mango yogurt performed with certain ages, income levels and ethnicities, or even how certain formulations or strength of mango flavor are received by consumers.

Lori Rothman, who runs her own consulting firm advising companies on their product testing, worked much of the last 30 years at companies including Kraft and Kellogg, determining the most effective way to test a product and then designing the corresponding trial. She used to have days or weeks to review data and consumer comments before plotting out the best way to move forward, she said.

In today’s marketplace, there is sometimes pressure to deliver within a day or even immediately. Some companies are even reacting in real time as information comes in — a practice Rothman warned can be dangerous because of the growing amount of data available and the inherent complexity in understanding it.

“It’s continuing toward more data. It’s just going to get more and more and we just have to get better at knowing what to do with it, and how to use it, and what’s actually important. What’s actually going to be able to predict if someone is going to buy something, and are they going to buy it again and again and again?” Rothman said. “You have to get smart on what is the payoff at the end of all of the data. And just figure out what the key measures are that you need and stop collecting, if you can, all this other ancillary stuff.”

Sweet relief

Ferrara Candy, the maker of SweeTarts, Nerds and Brach’s, estimates it considers more than 100 product ideas each year. An average of five typically make it to market.

To help whittle down the list, the candy company owned by Nutella-maker Ferrero conducts an array of tests with consumers, nearly all of them done without the customary focus group or in-person interview.

Daniel Hunt, director of insights and analytics for Ferrara, told Food Dive that rather than working with outside vendors to conduct research, as the company would have a decade ago, it now handles the majority of testing itself.

In the past, the company might have spent $20,000 to run a major test. It would have paid a market research firm to write an initial set of questions to ask consumers, then refine them, run the test and then analyze the information collected.

Today, Hunt said Ferrara’s own product development team, most of whom have a research background, does most of the work creating new surveys or modifying previously used ones — all for a fraction of the cost. And what might have taken a few months to carry out in the past can sometimes be completed in as little as a few weeks.

Credit: Ferrara 

“Now when we launch a new product, it’s not much of a surprise what it does, and how it performs, and where it does well, and where it does poorly. I think a lot of that stuff you’ve researched to the point where you know it pretty well,” Hunt told Food Dive. “Understanding what is going to happen to a product is more important — and really understanding that early in the cycle, being able to identify what are the big potential items two years ahead of launching it, so you can put your focus really where it’s most important.”

Increasingly, technology is playing a bigger part in enabling companies such as Ferrara not only to do more of their own testing, but also to give them more options for how best to carry it out.

Data can be collected from message boards, chat rooms and online communities popular with millennials and Gen Zers. But technology does have its limits. Ferrara aims to keep the time commitment for its online surveys to fewer than seven minutes because Hunt said the quality of responses tends to diminish for longer ones, especially among people who do them on their smartphones. 

Other research can be far more rigorous, depending on how the company plans to use the information.

Last summer, Ferrara created an online community of 20 people to help it develop a chewy option for its SweeTarts brand. As part of a three-week program, participants submitted videos showing them opening boxes of candies with different sizes, shapes, flavors, tastes and textures sent to them by Ferrara. Some of the products were its own candies, while others came from competitors such as Mars Wrigley’s Skittles or Starburst. Ferrara wanted to watch each individual’s reaction as he or she tried the products.

Participants were asked what they liked or disliked, and where there were market opportunities for chewy candy, to help Ferrara better hone its product development. These consumers were asked to design their own products.

Ferrara also had people either video record themselves shopping or write down their experience. This helped researchers get a feel for everything from when people make decisions that are impulsive or more thought out, to what would make a shopper decide not to purchase a product. As people provided feedback, Ferrara could immediately engage with them to have them expand on their responses.

“All of those things have really helped us get information that is more useful and helpful,” Hunt said. “I don’t think that (testing is) going away or becoming less prevalent, but certainly the way that we’re testing things from a product standpoint is changing and evolving. If anything, we’re doing more testing and research than before, but maybe just in a slightly different way than we did in the past.”

Convincing people to change

Getting people to change isn’t easy. To help execute on its vision, Conagra spent four years overhauling the way it went about developing and testing products — a lengthy process in which one of the biggest challenges was convincing employees used to doing things a certain way for much of their career to embrace a different way of thinking.

Conagra brought in data scientists and researchers to provide evidence to show how brands grow and what consumer behavior was connected to that increase. Nolan’s team had senior management participate in training courses “so people realize this isn’t just a fly-by-night” idea, but one based on science.

The CPG giant assembled a team of more than 50 individuals — many of whom had not worked with food before — to parse the complex data and find trends. This marked a dramatic new way of thinking, Nolan said.

While people with food and market research backgrounds would have been picked to fill these roles in the past, Conagra knew it would be hard to retrain them in the company’s new way of thinking. Instead, it turned to individuals who had experience in data technology, hospitality and food service, even if it took them time to get up to speed on Conagra-specific information, like the brands in its portfolio or how they were manufactured.

Conagra’s reach extended further outside its own doors, too. The company now occasionally works with professors at the University of Chicago, just 8 miles south of its headquarters, to help assess whether it is properly interpreting how people will behave. 

“In the past, we were just like everybody else,” Nolan said. “There are just so many principles that we have thrown out that it is hard for people to adjust.”

Mars Wrigley has taken a different approach, maintaining the customary consumer testing while incorporating new tools, technology and ways of thinking that weren’t available or accepted even a few years ago.

Lisa Saxon Reed, director of global sensory at Mars Wrigley, told Food Dive the sweets maker was recently working to create packaging for its Extra mega-pack with 35 pieces of gum, improving upon a version developed for its Orbit brand years before. This time around, the company — which developed more than 30 prototypes — found customers wanted a recyclable plastic container they believed would keep the unchewed gum fresh.

Shoppers also wanted to feel and hear the packaging close securely, with an auditory “click.” Saxon Reed, who was not involved with the earlier form of the package, speculated it didn’t resonate with consumers because it was made of paperboard, throwing into question freshness and whether the package would survive as long as the gum did.

The new packaging, which hit shelves in 2016 after about a year of development, has been a success, becoming the top selling gum product at Walmart within 12 months of its launch, according to Saxon Reed. Mars Wrigley also incorporated the same packaging design for a mega pack of its 5 gum brand because it was so successful.

“If we would not have made a range of packaging prototypes and had people use them in front of us, we would have absolutely missed the importance of these sensory cues and we would have potentially failed again in the marketplace,” Saxon Reed said. “If I would have done that online, I’m not sure how I would have heard those cues. … I don’t think those would have come up and we would have missed an opportunity to win.”

The new approach extends to the product itself, too. Saxon Reed said Mars Wrigley was looking to expand its Extra gum line into a cube shape in fall 2017. Early in the process, Mars Wrigley asked consumers to compile an online diary with words, pictures and collages showing how they defined refreshment. The company wanted to customize the new offering to U.S. consumers, and not just import the cube-shaped variety already in China.

Credit: Mars Wrigley 

After Mars Wrigley noticed people using the color blue or drawing waterfalls, showers or water to illustrate a feeling of refreshment, product developers went about incorporating those attributes into its new Extra Refreshers line through color, flavor or characteristics that feel cool or fresh to the mouth. They later tested the product on consumers who liked gum, including through the age-old testing process where people were given multiple samples to try and asked which they preferred.

Extra Refreshers hit shelves earlier this year and is “off to a strong start,” Saxon Reed said.

“I don’t see it as an ‘either-or’ when it comes to technology and product testing. I really see it as a ‘yes-and,’ ” she said. “How can technology really help us better understand the reactions that we are getting? But at this point, I have not seen a technology that replicates people actually trying the product and getting their honest reaction to it. At the end of the day, this is food.”

Regardless of what process large food and beverage companies use, how much money and time they spend testing out their products, or even how heavily involved consumers are, CPG companies and product testing firms agreed that an item’s success is heavily defined by one thing that hasn’t and probably never will change: taste. 

“Everybody can sell something once in beautiful packaging with all the data, but if it tastes terrible it’s not going to sell again,” Bisceglia said.

Source : https://www.fooddive.com/news/behind-the-scenes-data-and-technology-bring-food-product-rd-into-the-21st/565760/

Patent Portfolio On Sale

Hilco Streambank is seeking offers to acquire the patent portfolio and related assets of Anki, a leading AI-enabled, cloud-connected home robotics and entertainment developer. The patent portfolio covers broad claims related to autonomously controlled devices incorporating artificial intelligence and adaptive data analytics. Available assets also include trademarks and the Anki.com domain name.

Overdrive
  • The world’s most intelligent battle racing system
  • A combination of hardware and software working together to create a unique gaming experience
  • Users control cars through Overdrive app on mobile devices
  • Autonomous cars are equipped with a camera and infrared sensor to read information on the track, two engines, and a 50 MHz processor that processes 500 transactions a second

Cozmo
  • AI-powered robot that gets smarter and evolves with the user
  • Incorporates facial recognition technology, integrates with Cozmo app, users can create content for Cozmo by accessing its core functionality while learning the basics of coding
  • #1 best-selling toy on Amazon (U.S.) in 2016 and 2017
  • Target audience: youth ages 8-14
  • Control what Cozmo says and how it moves

Vector
  • Home robot with a personality, Vector can read the room, express the weather, recognize people and objects while detecting and avoiding obstacles, take a picture and more
  • Self-charging capability
  • Connected to the cloud via WiFi, meaning Vector is always getting smarter
  • Target audience: techies, gadget lovers

Overdrive Assets

  • 45 issued utility patents, including 35 U.S. patents
  • 11 published patent applications
  • 39 pending patent applications
  • 3 utility patents in the National Phase (PCT)
  • 73 issued design patents

Territories Covered
U.S., E.U., China, Germany, Canada, Japan, South Korea, among others.

Large Addressable Market
The patents have been utilized in the consumer electronics and gaming space, and the addressable market extends to the smart home, security, healthcare, manufacturing, and warehousing industries, among others.

Additional Assets
  • Supporting Trademarks:
    Anki, Cozmo, Vector, Anki Overdrive, more trademarks related to product lines in development
  • Anki.com Domain Name
Sale Process

The seller is accepting offers to acquire some or all of the patents and additional assets, and will entertain offers received prior to the bid deadline.

Source : https://www.hilcostreambank.com/acquisition-opportunities/anki

How IHOP Innovates in a Non-Standard IT Environment – Hospitality Tech

As participants in a rapidly changing industry, those of us in the restaurant business understand the importance of innovation. From the introduction of self-service digital experiences to the emergence of third-party delivery, technology innovation has continuously proven to be a powerful force in multi-unit restaurants’ ability to drive and respond to guest behavior. However, innovation done right isn’t easy, and it is even more difficult when that innovation needs to take place in a non-standard environment.

The truth of the matter is that many multi-unit restaurant brands, especially those that are franchised, are non-standard. While regional and market variations in menus, store layouts, and technology can provide a unique, tailored experience for guests residing in a specific area, these variations also present a challenge when it comes to implementing a brand-wide technology innovation strategy.

Here to discuss the best practices for overcoming the obstacles associated with non-standard technology environments is Michael Chachula, Head of IT for IHOP Restaurants.

Q: Where does innovation come from at IHOP?

Chachula: “Most of the innovation that happens here at IHOP comes from one of two places. The first is customer demand: we continuously engage with our guests to understand the points of friction in their experience or areas where we can surprise and delight. Many of our guests have begun expecting a similar technology experience with IHOP that they have had with not only other restaurant brands but with technology providers like Uber or Apple. We hold this feedback close when forming our technology strategies. The second is analysis around the in-restaurant journey. We recognize that our guests’ most valuable currency is their time, and as a result, we continuously aim to test new technologies that make their time with us more efficient, more enjoyable, and more memorable.”

Q: What is the key to being successful when you are evaluating a new technology solution for a non-standard operational environment?

Chachula: “The word to pay attention to here is standardization. Standardization is important to enabling scalability, but that standardization cannot stem creativity. For those that are currently battling this challenge, they should look to introduce a modular, flexible, and extensible technology platform that is easy to support, but configurable enough to allow creativity in their operations community. Configurability should always be one of the top five considerations when evaluating new technology solutions for a diverse multi-unit brand; that is where technology meets operations. On top of that, those decisions should be validated through partnerships with industry experts who can help confirm that the investment that you spend on a solution won’t be an investment wasted.”

Q: What is the right way to implement new technology in this type of environment?

Chachula: “What I have found is that most of our operators share about 80% of their needs and wants when it comes to technology. With that said, the first step in preparing for a successful implementation of new technology is identifying the 20% of functionality or uniqueness that may be required from one operator to another. Once that is done, and you place those unique requirements and their operational requests into logical groupings, you can begin working on how to ensure that the new technology is configured and supported properly for each one of those different groups. In this model, you are essentially creating several different configuration ‘schemas’ aligned with each of these groups. This allows increased supportability and ease of implementation when it comes to putting this new technology into the field in a fast-paced environment like an IHOP.”
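
To illustrate the configuration “schemas” Chachula describes, here is a minimal, hypothetical sketch: one shared baseline covering the roughly 80% of common needs, plus small per-group overrides for the remaining 20%. Every setting and group name below is invented for illustration and does not reflect IHOP’s actual systems.

    # Hypothetical configuration schemas: a shared baseline plus
    # per-group overrides, merged per restaurant group.
    BASELINE = {
        "online_ordering": True,
        "kiosk_checkout": False,
        "menu_region": "national",
    }

    GROUP_OVERRIDES = {
        "airport_locations": {"kiosk_checkout": True},
        "regional_menu_markets": {"menu_region": "southwest"},
    }

    def config_for(group: str) -> dict:
        """Merge the shared baseline with one group's overrides."""
        return {**BASELINE, **GROUP_OVERRIDES.get(group, {})}

    print(config_for("airport_locations"))  # baseline + kiosk override
    print(config_for("standard"))           # falls back to the baseline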

Profitability Challenge for Challenger Banks – Fincog

The rise of challenger banks

Over the past few years, we have witnessed a steady rise of challenger banks, or neobanks. These newly established retail and SME banks are challenging the established banks with modern banking propositions tailored to the digital world. Many were founded in the aftermath of the financial crisis with the vision of creating a better and fairer banking experience for customers.

Starting from scratch, they have collectively managed to secure a position in the market and make a sizable impact. Our database of over 150 challenger banks worldwide currently counts a collective base of over 200 million customers, and it is still growing every month. Similarly, our Fincog Challenger Bank Index has grown almost 8x since 2015, representing growth of roughly 55% per year.
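
Those two figures are mutually consistent: at 55% per year, compounding to roughly 8x takes a little under five years, which matches the period since 2015. A quick check in Python:

    import math

    # How long does an index growing 55% per year take to reach 8x?
    annual_growth = 0.55
    years_to_8x = math.log(8) / math.log(1 + annual_growth)
    print(f"{years_to_8x:.1f} years")  # ~4.7 years, i.e. roughly 2015 to 2019/2020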

One of the biggest success stories is Revolut from the UK. The company was founded in June 2013 and launched in July 2015 with foreign exchange services. Over time, it gradually expanded its offering to include, amongst others, current accounts and cryptocurrency trading. Nowadays it boasts a client base of 7 million customers.

Another success story is the Brazilian bank Nubank. It was founded in 2013 with the vision of bringing simple and efficient financial services to Brazilian consumers, freeing them from high fees and unnecessary complexity. Nubank offers retail customers a free current account with a credit card and personal loans, combined with innovative financial management features. Since its initial launch in 2014, it has reached over 12 million customers in Brazil and is currently valued at $10 billion.

These success stories do not stand on their own. Challengers have appeared all over the world: for example, Chime and Acorns from the USA, Toss and kakaobank from South Korea, Judo Bank from Australia and WeBank from China, amongst others.

These challengers share some important commonalities. First, they have a strong focus on the digital world and deliver advanced mobile apps with modern banking features – often exclusively available through a mobile app. Not only the front-end but also the back-end is largely automated, with minimal human interaction.

Second, they offer a great customer experience. The account opening process is simple and quick, daily banking services are easy to use and intuitive, and pricing is transparent. In addition, many offer financial management services (i.e. financial overview, savings tools) and seamless payments (i.e. instant P2P payments, mobile payments). Neobanks tend to focus on a specific customer segment or product, typically areas that are underserved or overpriced by incumbent players, and serve them with a better solution. Monese, for example, enables migrant workers to easily open a bank account without the need for a postal address – which migrant workers may lack.

Third, they typically offer very competitive pricing to compete with established players. For example, many offer a free payment account, free or low cost international money transfers and travel money, and top rates on lending and deposits.

As opposed to incumbent players, neobanks are not hindered by legacy IT systems, large organizations, or physical distribution networks. Neither are they subject to the same regulatory requirements, as they often only provide a subset of banking services or operate under a different license (instead of a full banking license). In addition, they bring a fresh view and a new culture to banking, while focusing on the customer experience.

Collectively, the neobanks are making a permanent impact on the market, driving innovation and competition, setting the benchmark for incumbent players.

Low levels of income and profitability

While these challengers have been successful in attracting large numbers of customers, many of them have yet to make a profit. Moreover, the larger they get, the larger the losses.

We have performed a benchmark on a selection of leading challengers internationally that are centered around payments (see infographic below). Over the years, they have collectively secured a base of over 28 million Retail & Business customers. With a combined total funding of USD 2.9 billion, they are valued at USD 17.8 billion.

We have benchmarked them on their profile, propositions, pricing and financial results. What we observe is that all have negative profitability, losing money on every customer they serve. Monzo sits at the bottom of the ranking with a total net loss of USD 58 mln (GBP 47.2 mln) in YE Feb-19, equivalent to a loss of USD 18.71 per customer. And as Monzo grows, the losses only increase: the net loss rose from USD 37.6 mln (GBP 30.5 mln) in 2018, a rise of 54% YoY. Monzo is not alone; Revolut, N26 and the others also saw dramatic rises in their losses.
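
These figures hang together arithmetically; here is a quick back-of-the-envelope check using only the numbers quoted above.

    # Monzo, YE Feb-19 figures as quoted above (USD).
    net_loss = 58e6
    loss_per_customer = 18.71
    prior_year_loss = 37.6e6

    print(f"implied customer base: {net_loss / loss_per_customer / 1e6:.1f}m")  # ~3.1m
    print(f"YoY rise in losses: {net_loss / prior_year_loss - 1:.0%}")          # ~54%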

In terms of losses per customer, Nubank seems to be the closest to break-even, with a loss of (only) USD 2 per customer. The company has the highest number of customers in our sample, the most funding and highest valuation. Nubank’s income predominantly originates from credit cards, in which it secured its position thanks to a competitive interest rate, in combination with beneficial economic circumstances that drove customers to use credit cards as an alternative for consumer finance.

The large losses are predominantly driven by the banks’ low levels of income. In our benchmark we measure total income net of the cost of sales, for example subtracting interest expense and commission expense from total income. Mogo and Bunq achieve the highest income per customer, at USD 32.38 and USD 19.72 respectively. The income of the others is lower, and for some even negative.

The low level of income can be explained by various factors. First of all, pricing is generally very competitive, with thin margins. Second, the challengers all offer only a subset of banking services, which limits their revenue potential. Third, while they generally do have a large customer base, a relatively large share of customers is inactive or uses the neobank only as a secondary account.

The benchmark shows some important differences with incumbent players. Lloyds Banking Group (LBG), for example, is one of the largest UK banks, serving over 30 million Retail and Business customers in the country with a full range of financial services through a combination of digital and physical channels. Performing the same benchmark, it does have a much larger cost base, with operational cost of $335 per customer. However, with an income of $728 per customer, it achieves a profit of $180 per customer. Amongst other factors, this is driven by a broader product portfolio and larger customer balances (i.e. around $18,200 in loans and $17,050 in deposits per customer).

This shows that the challengers still have a long way to go to deepen the customer relationship, grow revenue per customer and achieve profitability.

Some fundamental obstacles, but long-term success feasible

There are various reasons for the low profitability of the neobanks. First of all, many of the challengers focus on customer growth over profitability. Similar to the strategy of the earlier Tech Giants, this approach assumes that they will find a way to capitalize on the large customer base later on.

For example, N26 caught lots of attention after its latest funding round in July 2019 when co-founder Maximilian Tayenthal stated that profitability was not one of their core metrics. Tayenthal said: “We want to build a global financial services company… In the years to come we won’t see profitability, we’re not aiming to reach profitability. The good news is we have a lot of investors that have very deep pockets and that share our deep vision.”

Two years ago, Monzo CEO Tom Blomfield shared a similar view: “The more you grow, the more you lose and you have to turn that corner at some point… Getting to profitability is not a goal we are prioritising over delivering customers real value. If that takes 10 years, we are committed to it.”

It is true that most challengers are still at a relatively early stage and operating at subscale. They require large initial investments to build the company and to market it to attract customers. Once they have established their foundation with sufficient economies of scale, they should be better positioned to become profitable.

What makes this more difficult is that the challengers must compete with existing banking infrastructure and banking relationships. Churn rates in banking are rather low, typically around 2-5% per year depending on the market. Many challengers struggle to secure the primary customer relationship, which is the stickiest and most profitable one, and which offers the best opportunities for cross-sell. Instead, customers more often use the neobanks as secondary accounts for specific services or features.

This requires the challengers to offer customers large incentives to switch banks, whether a better service or a better price. And that is in fact the strategy of most challengers, who offer very competitive pricing, with free payment accounts, low-cost international transfers and so on. Moreover, charging customers for certain services is considered unfair, akin to ‘ripping off customers’. This leaves many of them with tiny – or even negative – margins on their services.

Last, most challengers still offer a rather limited product portfolio, typically centered around the payment account only. Especially when the core services are offered for free, this provides few options to generate substantial income. Many of the UK challengers, for instance, have largely relied on interchange fees on card payments, but this now seems to be an insufficient source of income on its own. This contrasts with most incumbent players, which offer a full range of banking services and can benefit from cross-sell opportunities to achieve much higher revenue per customer.

Overall, a less-active customer base combined with a limited product portfolio at lower margins leaves many challengers with rather low income per customer and often negative profitability.

This raises the question of whether these challengers can survive in the long run, achieve sufficient scale, and become profitable. Surely not all will survive; in fact, we have already witnessed the end of various players, such as Hufsy (Denmark), which recently ceased operations.

However, we believe that with the right strategic choices challenger banks should be able to enhance their profitability, achieving a sustainable business model with a lasting positive impact on customers. In our next blog we will share some insights on challengers that are profitable, what we can learn from them, and how to improve your own profitability.

Source : https://fincog.nl/blog/15/the-profitability-challenge-for-challenger-banks

Don’t get locked up into avoiding lock-in – MartinFowler

A significant share of architectural energy is spent on reducing or avoiding lock-in. That’s a rather noble objective: architecture is meant to give us options and lock-in does the opposite. However, lock-in isn’t a simple true-or-false matter: avoiding being locked into one aspect often locks you into another. Also, popular notions, such as open source automagically eliminating lock-in, turn out to be not entirely true. Time to have a closer look at lock-in, so you don’t get locked up into avoiding it!

One of an architect’s major objectives is to create options. Those options make systems change-tolerant, so we can defer decisions until more information becomes available or react to unforeseen events. Lock-in does the opposite: it makes switching from one solution to another difficult. Many architects may therefore consider it their archenemy while they view themselves as the guardians of the free world of IT systems where components are replaced and interconnected at will.

Lock-in – an architect’s archenemy?

But architecture is rarely that simple – it’s a business of trade-offs. Experienced architects know that there’s more behind lock-in than proclaiming that it must be avoided. Lock-in has many facets and can even be the favored solution. So, let’s get in the Architect Elevator to have a closer look at lock-in.

Open-source-hybrid-multi-cloud == lock-in free?

The platforms we are deploying software on these days are becoming ever more powerful – modern cloud platforms not only tell us whether our photo shows a puppy or a muffin, they also compile our code, deploy it, configure the necessary infrastructure, and store our data.

This great convenience and productivity booster also brings a whole new form of lock-in. Hybrid/multi-cloud setups, which seem to attract many architects’ attention these days, are a good example of the kind of things you’ll have to think of when dealing with lock-in. Let’s say you have an application that you’d like to deploy to the cloud. Easy enough to do, but from an architect’s point of view, there are many choices and even more trade-offs, especially related to lock-in.

You might want to deploy your application in containers. That sounds good, but should you use AWS’ Elastic Container Service (ECS) to run them? After all, it’s proprietary to Amazon’s cloud. Prefer Kubernetes? It’s open source and runs on most environments, including on premises. Problem solved? Not quite – now you are tied to Kubernetes – think of all those precious YAML files! So you traded one lock-in for another, didn’t you? And if you use a managed Kubernetes service such as Google’s GKE or Amazon’s EKS, you may also be tied to a specific version of Kubernetes and proprietary extensions.

If you need your software to run on premises, you could also opt for AWS Outposts, so you do have some options. But that again is proprietary. It integrates with VMware, which you are likely already locked into, so does it really make a difference? Google’s equivalent, freshly minted Anthos, is built from open-source components, but is nevertheless a proprietary offering: you can move applications to different clouds – as long as you keep using Anthos. Now that’s the very definition of lock-in, isn’t it?

Alternatively, if you neatly separate your deployment automation from your application run-time, doesn’t that make it fairly easy to switch infrastructure, reducing the effect of all that lock-in? Hey, there are even cross-platform infrastructure-as-code tools. Aren’t those supposed to make these concerns go away altogether?

For your storage needs, how about AWS S3? Other cloud providers offer S3-compatible APIs, so can S3 be considered multi-cloud compatible and lock-in free, even though it’s proprietary? You could also wrap all your data access behind an abstraction layer and thus localize any dependency. Is that a good idea?
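To make the “abstraction layer” idea concrete, here is a minimal sketch in Python. The names (BlobStore and friends) are illustrative, not taken from any particular library; the point is that application code depends only on a narrow interface, so an S3-backed implementation can later be swapped in behind it:

```python
from abc import ABC, abstractmethod
from pathlib import Path

class BlobStore(ABC):
    """Narrow interface that localizes the storage dependency."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class LocalBlobStore(BlobStore):
    """Filesystem-backed implementation; handy for tests and on premises."""

    def __init__(self, root: str) -> None:
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def put(self, key: str, data: bytes) -> None:
        # Keys are treated as flat names here, for simplicity.
        (self.root / key).write_bytes(data)

    def get(self, key: str) -> bytes:
        return (self.root / key).read_bytes()

# An S3BlobStore would implement the same interface with an S3(-compatible)
# client; the rest of the application never learns which one it's talking to.
```

Whether that extra moving part is worth it is exactly the kind of trade-off examined below.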

It looks like avoiding lock-in isn’t quite so easy and might even get you locked up into trying to escape from it. To highlight that cloud architecture is fun nevertheless, I defer to Simon Wardley’s take on hybrid cloud.

Shades of lock-in

Lock-in isn’t an all-or-nothing affair.

Elevator Architects (those who ride the Architect Elevator up and down) see shades of gray where many only see black and white. When thinking about system design, they realize that common attributes like lock-in or coupling aren’t binary. Two systems aren’t simply coupled or decoupled, just as you aren’t simply locked into a product or not. Both properties have many nuances. For example, lock-in breaks down into numerous dimensions:

  • Vendor Lock-in: This is the kind that IT folks generally mean when they mention “lock-in”. It describes the difficulty of switching from one vendor to a competitor. For example, if migrating from Siebel CRM to SalesForce CRM or from an IBM DB2 database to an Oracle one will cost you an arm and a leg, you are “locked in”. This type of lock-in is common as vendors generally (more or less visibly) benefit from it. This lock-in includes commercial arrangements, such as long-term licensing and support agreements that earned you a discount off the license fees back then.
  • Product Lock-in: Related, but different is being locked into a product. When migrating from one vendor’s product to another vendor’s, you are usually changing both vendor and product, so the two are easily conflated. Open source products may avoid the vendor lock-in, but they don’t remove product lock-in: if you are using Kubernetes or Cassandra, you are certainly locked into a specific product’s APIs, configurations, and features. If you work in a professional (and especially enterprise) environment, you will also need commercial support, which will again lock you into a vendor contract – see above. Heavy customization, integration points, and proprietary extensions are forms of product lock-in: they make it difficult to switch to another product, even if it’s open source.
  • Version lock-in: Besides being locked into a product, you may even be locked into a specific version. Version upgrades can be costly if they break existing customizations and extensions you have built (SAP, anyone?). Other version upgrades essentially require you to rewrite your application – AngularJS vs. Angular 2 comes to mind. To make matters worse, version lock-in propagates: a certain product version may require a certain (often outdated) operating system version and so on, which turns any migration attempt into a Yak-shaving exercise. You feel this lock-in particularly badly when a vendor decides to deprecate your version or discontinues the whole product line: you have to choose between being out of support or doing a major overhaul. And things can get even worse, for example, if a major security vulnerability is found in your old version and patches aren’t provided.
  • Architecture lock-in: You may also be locked into a specific kind of architecture. For example, when you use Kubernetes extensively, you are likely building small-ish services that expose APIs and can be deployed as containers. If you want to migrate to a serverless architecture, you’ll want to change the granularity of your services closer to single functions, externalize state management, utilize an event-driven architecture, and probably change a few more things. Such changes aren’t minor, but imply a major overhaul of your application architecture.
  • Platform lock-in: A special flavor of product lock-in is being locked into a platform, especially cloud platforms. Such platforms not only run your applications, but they may also hold your user accounts and associated access rights, security policies, infrastructure segmentations and many more aspects. They also provide application-level services such as storage or machine learning services, which are generally proprietary. Staying away from these services might seem like a way to reduce platform lock-in but it’d negate one of the major motivations for moving to the cloud in the first place. Non-software people call this finding yourself between a rock and a hard place.
  • Skills lock-in: As your developers are becoming familiar with a certain type of product or architecture, you’ll have skills lock-in: it’ll take you time to re-train (or hire) developers for a different product or technology. As skills availability is one of the major constraints in today’s IT shops, this type of lock-in is very real. Some niche enterprise products have a particularly limited supply of developers, causing your cost for developers to go up. This effect is particularly visible for products that employ custom languages or, somewhat ironically, for “config only” / no-code frameworks.
  • Legal lock-in: You may be locked into a specific solution for legal reasons, such as compliance. For example, you might not be able to migrate your data to another cloud provider’s data center if it’s located outside your country. Your software provider’s license may also not allow you to move your systems to the cloud even though they’d run perfectly fine there. If you decide to do it anyway, you’ll be in violation of the licensing terms. Legal aspects permeate more facets of engineering than we’d commonly assume: your small-engine aircraft is likely to be powered by an engine that was designed back in the 1970s and burns heavily leaded fuel, because new engine designs face high legal liabilities.
  • Mental Lock-in: The most subtle, but also the most dangerous type of lock-in is the one that affects your thinking. After working with a certain set of vendors and architectures, you are likely to absorb assumptions into your decision making, which may lead you to reject alternative options. For example, you may reject scale-out architectures as inefficient because they don’t scale linearly (you don’t get twice the performance when doubling the hardware). While technically accurate, this way of thinking ignores the fact that scalability, not efficiency, is the main driver. Or you may resent short release cycles as you have observed frequent changes leading to more defects. And surely you’ve been told that coding is expensive, time-consuming, and error-prone, so you’d be better off doing everything via configuration.

Open source software isn’t a magic cure for lock-in.

In summary, lock-in is far from an all-or-nothing affair, so understanding the different flavors can help you make more conscious architecture decisions. The list also debunks common myths, such as the notion that open source software magically eliminates lock-in. Open source can reduce vendor lock-in, but most of the other types of lock-in remain. This doesn’t mean open source is bad, but it isn’t a magic cure for lock-in.

Making better decisions using models

Experienced architects not only see more shades of gray, they also practice good decision discipline. That’s important because we are much worse decision makers than we commonly like to believe – a quick read of Kahneman’s Thinking, Fast and Slow is in order if you have any doubt.

One of the most effective ways to improve your decision making is to use models. Even, or especially, simple models are surprisingly effective at improving decision making:

Simple but evocative models are the signature of the great scientist, but over-elaboration and over-parameterization is often the mark of mediocrity.

— George Box

That’s why you shouldn’t laugh at the famed two-by-two matrix that’s so beloved by management consultants. It’s one of the simplest and therefore most effective models as we shall soon discover.

The more uncertain the environment, the more structured models can help you make better decisions.

There’s a second important point about models: a common belief tells us that in the face of uncertainty you pretty much have to “shoot from the hip” – after all, everything is in flux anyway. The opposite is actually true: our generally poor decision making only gets worse when we have to deal with many interdependencies, high degrees of uncertainty, and small probabilities. Therefore, this is where models help the most to bring much-needed structure and discipline into our decision making. Deciding whether and to what degree to accept lock-in falls well into this category, so let’s use some models.

Lock-in as a two-by-two matrix

A simple model can help us get past the “lock-in = bad” stigma. First, we have to realize that it’s difficult to not be locked into anything, so some amount of lock-in is inevitable. Second, we may happily accept some amount of lock-in if we get a commensurate pay-off, for example in form of a unique feature or utility that’s not offered by competitive products.

Let’s express these factors in a very simple model – a two-by-two matrix:

The matrix outlines our choices along the following axes:

  • switching cost (aka “lock-in”): how difficult will it be for us to move to another solution?
  • unique utility: how much are we gaining from the solution compared to alternatives?

We can now consider each of the four quadrants:

  • Disposable: Components that don’t have a unique utility and are easy to replace are the ones we may have to worry about the least. We can leave them as is or, if we face any issues, we can easily replace them. Not a bad place to be for run-of-the-mill stuff. For example, most developer IDEs (EMACS likely being a notable exception!) fall into this category: mix and match as you please and don’t get too attached to them. Cloud storage for all your photos and other personal data has also largely moved your smartphone device into this box, but more on this later.
  • Accepted Lock-in: across the diagonal are the components that lock you into a specific product or vendor, but in return give you a unique feature or utility. While we generally prefer less lock-in, this trade-off may well be acceptable. You may use a product like Google Cloud BigQuery or AWS Bare Metal Instances, knowing well that you are locked in, having made a conscious decision based on the pay-off you’re getting. For a small application, you may also happily use native AWS services because a migration is unlikely and the reduction in development and operations effort is very welcome.
  • Caution: the least favorable box is the one that locks you in but doesn’t give you a lot of unique utility. Your traditional relational database may fall into this box – does using any proprietary database really increase your revenue? Not really. However, migrating off can be a lot of effort, so you better be sure that there’s a low likelihood you’re going to need to do that. If you selected a particular hardware for your embedded system that you launched into outer space, that’s likely OK – the chances of a migration are rather low.
  • Ideal: the best stuff is the one that gives you a unique utility but at the same time is easy to switch away from. While that sounds like the ideal to strive for, you’ll have to acknowledge that this box is a bit of an oxymoron: if a solution gives you unique utility, by definition competitive products won’t have it, making a migration difficult. S3 may be a suitable example for this category – multiple cloud vendors have adopted the same APIs, making a switch to, let’s say, GCP relatively easy. Still, each implementation has some distinct advantages regarding locality, performance, etc. To protect this kind of portability across differentiated products, it’s important that we don’t allow APIs to be copyrighted or patented.

While the model is admittedly simple, placing your software (and perhaps hardware) components into this matrix is a worthwhile exercise. It not only visualizes your exposure but also communicates your decisions well to a variety of stakeholders.
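If you prefer to keep such an inventory in code rather than on a whiteboard, a toy classifier is all it takes. This is a minimal sketch with made-up component names and scores, assuming a simple 0-to-1 scale for both axes:

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    switching_cost: float  # 0..1: higher means harder to leave
    unique_utility: float  # 0..1: higher means more differentiated value

def quadrant(c: Component, threshold: float = 0.5) -> str:
    """Place a component into one of the four lock-in quadrants."""
    locked = c.switching_cost >= threshold
    useful = c.unique_utility >= threshold
    if locked and useful:
        return "Accepted Lock-in"
    if locked:
        return "Caution"
    if useful:
        return "Ideal"
    return "Disposable"

portfolio = [
    Component("managed ML service", 0.8, 0.9),
    Component("proprietary RDBMS", 0.7, 0.3),
    Component("S3-compatible storage", 0.3, 0.7),
    Component("developer IDE", 0.2, 0.2),
]
for c in portfolio:
    print(f"{c.name:>22}: {quadrant(c)}")
```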

For an every-day example of the four quadrants, consider the following items, which give you varying amounts of lock-in and utility (counter-clockwise from top-right):

  • Your beloved iPhone locks you into a vendor ecosystem, but it also gives unique utility, so you are likely OK to have this Accepted Lock-in.
  • Your mobile provider contract locks you into a single network, but doesn’t really provide much utility over other networks. It’s better to exercise Caution.
  • Your phone charger has a standard connector. Sadly, many iPhones don’t, but luckily an adapter cable still makes this gadget Disposable.
  • Many of your apps, such as messaging, give you utility, such as having your friends on it, but they are still designed to make it easy to switch, for example by using your phone’s contact list. That’s Ideal.

A unique product feature doesn’t always translate into unique utility for you.

One word of caution on the unique utility: every vendor is going to give you some form of unique feature – that’s how they differentiate. However, what counts here is whether that feature translates into a concrete and unique value for you and your organization. For example, some cloud providers run billion-user services over their amazing global networks. That’s impressive and unique, but unlikely to be a utility for the average enterprise that’s quite happy to serve 1 million customers and may be restricted to doing business in a single country. Some people still buy Ferraris in small countries with strict speed limits, so apparently not all decision making is entirely rational, but perhaps a Ferrari gives you utility in more ways than a cloud platform can.

The actual cost of lock-in

Because this simple matrix was so useful, let’s do another one. The previous matrix treats switching cost as a single element (or dimension). A good architect can see that it breaks down into two dimensions:

The matrix differentiates the cost of making the switch from the likelihood that you’ll have (or want) to make the switch. Things that have a low likelihood and a low cost shouldn’t bother you much, while the opposite corner – high switching cost and a high likelihood of switching – is no good and should be addressed. On the other diagonal, you are taking your chances on those options that will cost you but are unlikely to occur – that’s where you’ll want to buy some insurance, for example by limiting the scope of change or by padding your maintenance budget. You could also accept the risk – how often would you really need to migrate off Oracle onto DB2, or vice versa? Lastly, if switches are likely but cheap, you have achieved agility – you embrace change and have designed your system for a low cost of executing it. Oddly, this quadrant often gets less attention than the top left despite many small changes adding up quickly. That’s our poor decision making at work: the unlikely drama gets more attention because what if!

When discussing the likelihood of lock-in, you’ll want to consider a variety of scenarios that could force a switch: a vendor may go out of business, raise prices, or no longer be able to support your scale or functional needs. Interestingly, the desire to reduce lock-in sometimes comes in the form of a negotiation tool: when negotiating license renewals you can hint to your vendor that you architected your system such that switching away from their product is realistic and inexpensive. This may help you negotiate a lower price because you’ve communicated that you have a strong BATNA – Best Alternative To a Negotiated Agreement. This is an architecture option that’s not really meant to be used – it’s a deterrent, sort of like a stockpile of weapons in a cold war. You might be able to fake it and not actually reduce lock-in, but you had better be a good poker player in case the vendor calls your bluff, e.g. by chatting with your developers at the water cooler.

Reducing lock-in: The strike price

Pulling in our options analogy from the very beginning once more: if avoiding lock-in gives you options, then the cost of making the switch is the option’s strike price – it’s how much you pay to exercise the option. The lower the switching cost you want to achieve, the higher the option’s value and therefore its price. While we’d dream of having all systems in the “green boxes” with minimal switching cost, the necessary investment may not actually pay off.

Minimizing switching costs may not be the most economical choice.

For example, many architects favor not being locked into a database vendor or cloud provider. However, how likely is a switch really? Maybe 5%, or even lower? How much will it cost you to bring that switching cost down from, let’s say, $50,000 (for a semi-manual migration) to near zero? Likely a lot more than the $2,500 ($50,000 x 5%) you can expect to save. Therefore, minimizing the switching cost isn’t the sole goal and can easily lead to over-investment. It’s the equivalent of being over-insured: paying a huge premium to bring the deductible down to zero may give you peace of mind, but it’s often not the most economical, and therefore rational, choice.
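The arithmetic is simple enough to sanity-check in a few lines of Python; a back-of-the-envelope sketch using the example numbers above (the $10,000 investment figure is made up for illustration):

```python
def expected_saving(switching_cost: float, likelihood: float) -> float:
    """Expected liability removed by driving the switching cost to zero."""
    return switching_cost * likelihood

# The example above: a 5% chance of a $50,000 semi-manual migration.
saving = expected_saving(50_000, 0.05)
print(f"expected saving: ${saving:,.0f}")  # expected saving: $2,500

# Any investment larger than the expected saving is over-insurance.
invest = 10_000  # hypothetical cost of driving the switching cost to zero
print("worth it?", invest < saving)  # worth it? False
```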

A final model (for once not a matrix) can help you decide how much you should invest in reducing the cost of making a switch. The following diagram shows your liability, defined as the product of the switching cost times the likelihood that it occurs, in relation to the up-front investment you need to make (blue line).

By investing in options, you can surely reduce your liability, either by reducing the likelihood of a switch or by reducing the cost of executing it. For example, using an Object-Relational Mapping (ORM) framework like Hibernate is a small investment that can reduce database vendor lock-in. You could also create a meta-language that is translated into each database vendor’s native stored procedure syntax. It would allow you to fully exploit the database’s performance without being dependent on a single vendor, but it’s going to take a lot of up-front effort for a relatively unlikely scenario.

The interesting function therefore is the red line, the one that adds the up-front investment to the potential liability. That’s your total cost and the thing you should be minimizing. In most cases, with increasing up-front investment, you’ll move towards an optimum range. Beyond it, additional investment into reducing lock-in actually leads to higher total cost. The reason is simple: the returns on investment diminish, especially for switches that carry a small probability. If we make our architecture ever-so-flexible, we are likely stuck in this zone of over-investment. The Yagni (you ain’t gonna need it) folks may aim for the other end of the spectrum – as so often, the trick is to find the happy medium.
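For the curious, the red line is easy to reproduce. The sketch below assumes, purely for illustration, that the residual switching cost decays exponentially with investment; the real curve will differ, but any diminishing-returns shape produces the same fall-then-rise pattern:

```python
import math

def total_cost(invest: float, switch_cost: float, likelihood: float,
               efficiency: float = 1e-4) -> float:
    """Up-front investment plus the remaining expected liability.

    Assumes diminishing returns: each extra dollar invested removes a
    shrinking share of the remaining switching cost. All numbers are
    illustrative, not from the article.
    """
    residual = switch_cost * math.exp(-efficiency * invest)
    return invest + likelihood * residual

# A high-stakes scenario: a 50% chance of a $100,000 migration.
for invest in range(0, 40_001, 5_000):
    print(f"invest ${invest:>6,} -> total cost ${total_cost(invest, 100_000, 0.5):>9,.0f}")
# Total cost falls at first, bottoms out (here around $16k of investment),
# then rises again: the zone of over-investment.
```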

The total cost of avoiding lock-in

Now that we have a pretty good grip on the costs and potential pay-offs of being locked in, we need to have a closer look at the total cost of avoiding lock-in. In the previous model we assumed that avoiding lock-in is a simple cost. In reality, though, this cost can be broken down into several components:

Complexity can be the biggest price you pay for reducing lock-in.

  • Effort: This is the additional work to be done in terms of person-hours. If we opt to deploy in containers on top of Kubernetes in order to reduce cloud provider lock-in, this item would include the effort to learn a new tool, write Dockerfiles, configure Kubernetes, etc.
  • Expense: This is the additional cash expense, e.g. for product licenses, to hire external providers, or to attend KubeCon.
  • Underutilization: This indirect cost occurs because avoiding lock-in often disallows you from using vendor-specific features. As a result, you get less utility out of the software you use. This in turn can mean more effort for you to build the missing features or it can cause a weakness in your product.
  • Complexity: Complexity is a core element of the equation, and too often ignored. Many efforts to reduce lock-in introduce an additional layer of abstraction: JDBC, Containers, common APIs. While all useful tools, such a layer adds another moving part, increasing the overall system complexity. This in turn increases the learning effort for new team members and the chance of systemic errors.
  • New Lock-ins: Avoiding one lock-in often comes at the expense of another one. For example, you may opt to avoid AWS CloudFormation and instead use a multi-cloud tool such as HashiCorp’s Terraform or Pulumi, both of which support multiple cloud providers. However, now you are locked into another product from an additional vendor and need to figure out whether that’s OK for you.

When calculating the cost of avoiding lock-in, an architect should make a quick run down this list to avoid blind spots. Also, be aware that attempts at avoiding lock-in can be leaky, very much like leaky abstractions. For example, Terraform is a fine tool, but its scripts use many vendor-specific constructs. Implementation details thus “leak” through, rendering the switching cost from one cloud to another decidedly non-zero.

Bringing it back together

After all that theory, let’s look at a few concrete examples.

Deploying Containers

I worked with a company that packages much of its code into Docker containers deployed to AWS ECS. Thus they are locked into AWS. Should they invest in replacing their container orchestration with Kubernetes, which is open source? Given that feature velocity is their main concern and the current ECS solution works well for them, I don’t think a migration would pay off. The likelihood of having to switch to another cloud provider is low and they have “bigger fish to fry”.

Recommendation: accept lock-in.

Relational database access

Many applications use a relational database, which can be provided by numerous vendors and open source alternatives. However, SQL dialects, stored procedures, and bespoke management consoles all contribute to database lock-in. How much should you invest in avoiding this lock-in? For most languages and run-times, common mapping frameworks such as Hibernate provide some level of database neutrality at a low cost. If you want to further minimize your strike price, you’d also need to avoid SQL functions and stored procedures, which may make your product less performant or require you to spend more on hardware.

Recommendation: use low-effort mechanisms to reduce lock-in. Don’t aim for zero switching cost.
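To make the “low-effort mechanism” tangible, here is a minimal sketch using SQLAlchemy, Python’s rough analogue of the Hibernate approach described above (a sketch, not a prescription). Application code talks only to the ORM, so switching databases is largely a matter of changing the connection URL:

```python
from sqlalchemy import String, create_engine
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

class Base(DeclarativeBase):
    pass

class Customer(Base):
    __tablename__ = "customer"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str] = mapped_column(String(100))

# Swapping this URL (e.g. to "postgresql+psycopg://...") is most of the
# migration story, as long as the code sticks to the ORM layer.
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Customer(name="Ada"))
    session.commit()
```

The moment you reach for a vendor-specific SQL function or a stored procedure, that neutrality erodes, which is precisely the trade-off described above.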

Migrating to the cloud

Rather than switching from one database vendor to another, you may be more interested in moving your application, including its database, to the cloud. Besides technical considerations, you’ll need to be careful with some vendors’ licensing agreements that may make such a move uneconomical. In these cases, it’s wise to opt for an open source database.

Recommendation: select an open source database if it can meet your operational and support needs, but accept some degree of lock-in.

Multi-cloud

Many enterprises are fascinated by the idea of portable multi-cloud deployments and come up with ever more elaborate and complex (and expensive) plans that’ll ostensibly keep them free of cloud provider lock-in. However, most of these approaches negate the very reason you’d want to go to the cloud in the first place: low friction and the ability to use hosted services like storage or databases.

Recommendation: Exercise caution. Read my article on multi-cloud.

Architecture at the speed of thought

It may seem that one can spend an enormous amount of time contemplating lock-in. Some may even dismiss our approach as “academic”, a word which I repeatedly fail to see as something bad because that’s where most of us got our education. Still, isn’t the old black-or-white method of architecture simpler and, perhaps, more efficient?

Architectural thinking is actually surprisingly fast if you focus and stick to simple models.

In reality, this kind of thinking happens extremely fast. Running through all the models shown in this article may take just a few minutes and yields well-documented decisions. No fancy tooling besides a piece of paper or a whiteboard is required. The key ingredient in fast architectural thinking is merely the ability to focus.

Compare that to the effort of preparing elaborate slide decks for lengthy steering committee meetings that are scheduled many weeks in advance and usually don’t have anyone attending who has the actual expertise to make an informed decision.

Source : https://martinfowler.com/articles/oss-lockin.html

21 innovative growth strategies used by top growth teams – Appcues

A growth strategy isn’t just a set of functions you plug in to your business to boost your product’s growth—it’s also the way in which you organize and rally as a team.

If growth is “more of a mindset than a toolkit,” as Ryan Holiday said, then it’s a collective mindset. 

Successful growth strategies are the product of engineering, marketing, leadership, design, and product management. Whether your team consists of 2 co-founders or a skyscraper full of employees, your growth hacking strategies will only be effective if you’re able to affix them to your organization, apply a workflow, and use the results of experiments to make intelligent decisions. 

In short, there’s no plugin for growth. To increase your product’s user base and activation rate, your company will need to be methodical and tailor the strategies you read about to your unique product, problem, and target audience.

What is a growth strategy?

Before we dive into specific examples of growth strategies, let’s take a moment to establish a proper growth strategy definition:

A growth strategy is a plan of action that allows you to achieve a higher level of market share than you currently have. Contrary to popular belief, a growth strategy is not necessarily focused on short-term earnings—growth strategies can be long-term, too. Let’s keep that in mind with the following examples.

Another thing to keep in mind is that there are typically 4 types of strategies that roll up into a growth strategy. You might use one or all of the following:

  1. Product development strategy—growing your market share by developing new products to serve that market. These new products should either solve a new problem or add to the way your product solves the one it already addresses.
  2. Market development strategy—growing your market share by developing new segments of the market, expanding your user base, or expanding your current users’ usage of your product.
  3. Market penetration strategy—growing your market share by bundling products, lowering prices, and advertising—basically everything you can do through marketing after your product is created. This strategy is often confused with market development strategy.
  4. Diversification strategy—growing your market share by entering entirely new markets.

Below, we’ll explore 21 growth strategy examples from teams that have achieved massive growth in their companies. Many examples use one or more of the 4 classic growth strategies, but others are outside of the box. These out-of-the-box approaches are often called “growth hacking strategies”.

Growth strategy examples

Each of these examples should be understood in the context of the company where they were executed. While you can’t copy and paste their success onto your own unique product, there’s a lesson to be learned and leveraged from each one. 

Now let’s get to it!

1. How Clearbit drove 100k inbound leads by giving away free tools

Clearbit‘s APIs allow you to do amazing things—like enrich trial sign-ups on your homepage—but to use them effectively, you need a developer’s touch. Clearbit needed to get developers to try their tool in order to grow. Their strategy involved dedicating their own developer time to creating free tools, APIs, and browser extensions that would give other developers a chance to play. 

They experimented with creating free APIs for very specific purposes. One of the most successful was their free Logo API which allowed companies to quickly imprint their brand stamp onto pages of their website. Clearbit launched the API on ProductHunt and spread the word to their developer communities and email list—within a week, the Logo API had received 60,000 views and word-of-mouth traction had grown rapidly.

[Image: Clearbit’s free Logo API, one of the free tools that helped Clearbit generate inbound leads]

Clearbit made a bite-sized version of their overall product. The Logo API represents Clearbit at large—it’s a flexible and easy-to-implement way for companies to integrate data into their workflows. 

Offering a bite-sized version of your product that provides value for free creates an incredible first impression. It validates that what you’re making really works and drives testers to commit to your main product. And it can be an incredibly effective source of acquisition—Clearbit’s free APIs have driven over 100,000 inbound leads for the company.

2. How Segment increased conversions by experimenting with paid acquisition

As a customer analytics tool, Segment practices what it preaches when it comes to acquisition. The Segment team has developed a data-driven, experimental approach to identify its most successful acquisition channels and double down on those strategies. 

In an AMA, their head of marketing Diana Smith told the audience that they’d recently been experimenting with which paid channels worked for them. “In a nutshell, we’ve learned that retargeting definitely works and search does not,” Smith explained.

Segment learned that their marketing efforts were more effective when they reached out to users who’d viewed their site before versus when they relied on users finding them through search. So they set out to refine their retargeting strategy. They started customizing their Facebook and Twitter ads to visitors who’d viewed particular pages: to visitors who’d viewed their docs, they sent API-related messages; to visitors who’d looked at pricing, they sent free trial messages. 

By narrowing your acquisition strategy, you can dramatically increase ROI on paid acquisition, increasing conversions while minimizing CAC.

3. How Tinder tripled its user base by reaching target users in person

Tinder famously found success by gamifying dating. But to get its growth started, Tinder needed a strategy that would allow potential users to play the game and find a willing dating pool on the other side of the app.

In order to validate their product, people needed to see it in action. Tinder’s strategy was surprisingly high touch—they sent a team to visit potential users and demonstrate the product’s value in person.

  • They invested in a tour of sororities and fraternities at colleges to manually recruit signups from their target audience: millennials. It was a move that increased their user base from less than 5,000 users to over 15,000.
  • First, they helped groups of women install the app, guiding them past initial install friction.
  • Then they did the same pitch to a group of men. Both cohorts were able to see value quickly because the app was now used by people who had something important in common—they all went to the same school.

To find the right growth strategy for your product, you have to understand what it will take for users to see it working. Tinder’s in-person pitches were a massive success because they helped users see value faster by populating the 2-sided app with more relevant connections.

4. How Zapier growth hacked signups by writing about other products

Zapier is all about integrations—it brings together tools across a user’s tech stack, allowing events in one tool to trigger events in another, from Asana to HubSpot to Buffer. The beauty of Zapier is that it sort of disappears behind these other tools. But that raises an interesting question: How do you market an invisible tool?

Zapier’s strategy was to leverage its multifaceted product personality through content marketing. The team takes every new integration on Zapier as a new opportunity to build authority on search and to appeal to a new audience. 

The blog reads like a collective guide to hundreds of tools, with specific titles like “How to Quickly Append Text to a Note in Evernote or OneNote from Your Browser” and “How to Automatically Generate Charts and Reports in Google Sheets and Docs.” Zapier’s strategy is to sneakily make itself a content destination for the audiences of all these different tools. 

[Image: screenshot from a Zapier blog article about OneNote]

This strategy helped their blog grow from scratch to over 600,000 readers in just 3 years, and the blog continues to grow as new tools and integrations are added to Zapier.

If you have a product with multiple use cases and integrations, try targeting your content marketing to specific audiences, rather than aiming for a catch-all approach.

5. How Twitter strengthened their network effect with onboarding suggestions

Andy Johns arrived at Twitter as a product manager in 2010, when the platform already had over 30 million active users. But according to Johns, growth was slowing. So the Twitter user growth team got creative and tried a new growth experiment every day—the team would pick an area in which to engage more users, create an experiment, and nudge the needle up by as much as 60,000 users in a day. 

One crucial user growth strategy that worked for Twitter was to coax users into following more people during onboarding. They started suggesting 10 accounts to new users shortly after signup.

Because users never had to encounter an empty Twitter feed, they were able to experience the product’s value much faster.

[Image: Twitter’s mobile onboarding, suggesting accounts to follow]

Your users’ first aha moment—whether it’s connecting with friends, sending messages, or sharing files—should serve to give them a secure footing in your product and nudge your network effect into action one user at a time.

6. How LinkedIn growth hacked connections by asking a simple question

LinkedIn was designed to connect users. But in the very beginning, most users still had only a few connections and needed help making more.

LinkedIn’s strategy was to capitalize on high user motivation just after signup. Nicknamed the “Reconnect Flow,” LinkedIn implemented a single question to new users during onboarding: “Where did you used to work?” 

Based on this input, LinkedIn then displayed a list of possible connections from the user’s former workplace. This jogged new users’ memories and reduced the effort required to reconnect with old colleagues. Once they had taken this step, users were more likely to make further connections on their own.

Thanks to this simple prompt, LinkedIn’s pageviews increased by 41%, searches jumped up 33%, and users’ profiles became richer with 38% more work positions listed.

If you notice your users aren’t making the most of your product on their own, help them out while you have their attention. Use the momentum of your onboarding to help your users become engaged.

7. How Facebook increased week 1 retention by finding its north star metric

Facebook’s active user base surpassed 1 billion in 2012. It’s easy to look at the massive growth of Facebook and see it as a sort of big bang effect—a natural event difficult to pick apart for its separate catalysts. But Facebook’s growth can be pinned down to several key strategies.

Again and again, Facebook carved out growth by maintaining a steely focus on user behavior data. They’ve identified markers of user success and used those markers as North Star metrics to guide their product decisions. 

Facebook used analytics to compare cohorts of users—those who were still engaged in the site and those who’d left shortly after signing up. They found that the clearest indicator of retention was whether or not users connected with 7 friends within 10 days.

Once Facebook had identified their activation metric, they crafted the onboarding experience to nudge users up to the magic number. 

By focusing on a metric that correlates with stickiness, your team can take a scientific approach to growing engagement and retention, and measuring its progress.

8. How Slack got users to stick around by mirroring successful teams

Slack has grown by watching how teams interact with their product. Their own team was the very first test case and from then on, they’ve refined their product by engaging companies to act as testers. 

To understand patterns of retention and churn, Slack peered into their user data. They found that teams who’d sent 2,000 or more messages almost never dropped out of the product. That’s a lot of messages—you only get to that number by really playing around with the product and integrating it into your routine. 

Slack knew they had to give new users as many reasons as possible to send messages through the platform. They started plotting interactions with users in a way that encouraged multiple message sending. 

For example, Slack’s onboarding experience simulates how a seasoned Slack user behaves. New users are introduced to the platform through interactions with the Slackbot, and are encouraged to upload files, use keyboard shortcuts, and start new conversations.

[Image: Slack’s new-user onboarding, showing a new channel and the Slackbot introduction]

Find what success means for your product by watching loyal users closely. Mirror that behavior for new users, and encourage them to get into a pattern that leads to long-term retention.

9. How ConvertKit grew $125,000 MRR by helping users switch tools

In early 2013, self-employed e-book writer Nathan Barry publicly set himself an unusual resolution. He announced the “Web App Challenge”—he wanted to build an app from scratch and get to $5,000+ in monthly recurring revenue within 6 months. 

Though he didn’t quite make it to that $5,000 mark, he did build a product—ConvertKit—with validated demand that went on to reach $125,000 in monthly recurring revenue.

Barry experimented with a lot of growth strategies over the first 3 years, but the one he kept turning back to was direct communication with potential customers. Through personalized emails, Barry found tons of people who loved the idea of ConvertKit but said it was too much trouble for them to think about switching tools—all their contacts and drafts were set up in their existing tools.

So Barry developed a “concierge migration service.” The ConvertKit team would literally go into whichever tool the blogger was using, scrape everything out, and settle the new customer into ConvertKit. Just 15 months after initiating this strategy, ConvertKit was making $125,000 in MRR. 

By actively reaching out and listening to your target users, you’ll be better able to identify precise barriers to entry and come up with creative solutions to help them overcome these hurdles.

10. How Yahoo doubled mobile revenue by rearranging their team

When Yahoo doubled their mobile revenue between 2012 and 2013, it wasn’t just the product that evolved. Yahoo had hired a new leader for its Mobile and Emerging Products, Adam Cahan. As soon as Cahan arrived, he set to work making organizational changes that allowed Yahoo’s mobile division to get experimental, iterate, and develop new products quickly.

  • First, he encouraged elements of a startup environment. Cahan brought together talented individuals from different disciplines—design, product management, engineering—and encouraged them to work like a founding team to focus solely on developing mobile products that would grow.
  • Cahan maintained that collaborative environment even as the division grew to 50 members. By making every member of the team focused on user experience before all else, he removed some of the bottlenecks and divisions that often build up in a large tech company. He gave the team a mission to discover how to make Yahoo better for customers, even if that meant dismantling the status quo or abandoning older software.

In 2 years, Cahan grew Yahoo’s mobile division from 150 million mobile users to 550 million. By hiring the right people and enabling them to focus on solving problems for users, he had opened the doors for organic growth.

11. How Stripe grew by looking after developers first

Payment processing platform Stripe always knew that developers were the key to adoption of their service. Founders John and Patrick Collison started Stripe to address a very specific problem—developers were sorely in need of a payment solution they could adapt to different merchant needs and match the speed and complexity of the buyer side of the ecommerce interface. 

Merchants started clamoring for Stripe because their developers were raving about it—today, Stripe commands 15.34% of the market share for payment processing. That’s due in large part to Stripe’s strategy of prioritizing the needs of developers first and foremost. For instance:

  • Code could only get Stripe so far—so in order to drive adoption, they focused on creating clear, comprehensive documentation so that developers could pick up Stripe products and run with them.
  • Stripe created a library of docs that lead the user through each product. There’s more plain English in these docs than code, bridging the gap for new users.
  • There’s a “Try Now” section where users can see what it takes to tokenize a credit card with Stripe. 
[Image: Stripe’s developer help documentation]

Know your audience. By focusing on the people that are most directly affected by your problem, you can generate faster and more valuable word-of-mouth. 

12. How Groove turned high churn around with targeted emails

In 2013, help desk tool Groove was experiencing a worryingly high churn rate of 4.5%. They were acquiring new users just fine, but people were leaving as fast as they came. So they set out to get to know these users better. It was a strategy that would allow them to reduce churn from 4.5% to 1.6%. “Your customers probably won’t tell you when they hit a snag,” says Alex Turnbull, founder and CEO of Groove. “Dig into your data and look for creative ways to find those customers having trouble, and help them.”

  • Groove used Kissmetrics to examine customer data. They identified who was leaving and who was staying in the app.
  • They compared the user behavior of both cohorts and found that staying in the app was strongly correlated with performing certain key actions—like being able to create a support widget in 2 to 3 minutes. Users who churned were taking far longer, meaning that for some reason they weren’t able to get a grasp of the tool.
  • Groove was then able to send highly targeted emails to this second cohort, bringing them back into the app and helping them achieve value.

By using analytics, you can identify behaviors that drive engagement vs. churn, then proactively reach out to customers when you spot these behaviors in action. By getting ahead of individual cases of churn, you can drive engagement up.

13. How PayPal paid users to growth hack for them

PayPal was growth hacking referrals before it was cool. When PayPal launched, they were introducing a new type of payment method—and they knew that they needed to build trust and authority in order to grow. Their strategy involved getting early adopters to refer users to the platform. 

  • PayPal paid its first users to sign up. They literally gave them free money. These bonuses began at $20 for signing up.
  • As users grew accustomed to the idea of PayPal, signup bonuses were decreased to $10, then $5, then were phased out—but by that time, their user base had started to grow organically.

“We must have spent tens of millions in signup and referral bonuses the first year,” says David Sacks, original COO at PayPal. But that initial investment worked—PayPal’s radical first iteration of their referral program allowed them to grow to 5 million daily users in only a few months.

Incentivize your users in a way that makes sense for your business. If users adore your product, the initial cost of setting up a referral program can be recouped many times over as your users become advocates.

14. How Postmates reached 1 million deliveries by baking growth into engineering and product

In 2016, the on-demand delivery service Postmates reached 1 million monthly deliveries. They also launched a subscription service called Postmates Plus Unlimited.

With growing demand, Postmates focused on developing products that are highly accessible and easy to use. At the same time, they gathered funding. In October 2016, they gained another $140 million in investment, taking their post-money valuation to $600 million. But to cope with this growth, Postmates needed to scale their growth team.

According to Siqi Chen, VP of Growth at Postmates, the company had “an incredibly scrappy, hard working team who did the best they could with the tools given, but it’s very hard to make growth work at Postmates scale without dedicated engineering and product support.”

So the team shifted to include engineering and product at every level. Now, Postmates’ growth team has 3 arms of its own—“growth product,” “growth marketing,” and “user acquisition”—each one with its own engineering support.

By connecting their growth team directly to the technical decision makers, Postmates created a team that can scale with the company.

15. How BuzzFeed grew to 9 billion monthly visitors with their “golden rules of shareability”

BuzzFeed is a constantly churning content machine, publishing hundreds of pieces a day and getting billions of content views per month. BuzzFeed’s key growth strategy has been to define virality and pursue it in everything they do.

  • Jonah Peretti, BuzzFeed’s CEO, shut off the noise and started listening to readers. He found that readers were more concerned about their communities than about the content—they were disappointed when they didn’t find something to share with their friends. The most important metrics the BuzzFeed team could judge themselves by were social shares and traffic from social sites.
  • BuzzFeed created the Golden Rules of Shareability to further refine their criteria, and analyzed their viral content to create a formula for what makes something inherently shareable. This is important, because it makes it possible for Team BuzzFeed to take leaps into new topics and areas.
  • BuzzFeed’s focus has followed its social crowd and has been able to adapt to changing reading patterns and platforms. The company has also upped its political arm, and has made big investments in branded video.

The lesson? To go viral, you need to give the people what they want, and that means striking a balance between consistency and novelty. 

16. How Airbnb continued to scale by simplifying user reviews

Airbnb’s origin story is one of the most famous growth hacking tales. Founders Brian Chesky and Joe Gebbia knew their potential audience was already using Craigslist, so they engineered their own integration, allowing hosts to double-post their ads to Airbnb and Craigslist at the same time.

But it’s their review strategy that has enabled Airbnb to keep growing once this short-term tactic wore out its effectiveness. Reviews enrich the Airbnb platform. For 50% of bookings, guests visit a host profile at least once before booking a trip, and hosts with more than 10 reviews are 10X more likely to receive bookings.

Airbnb growth hacked their network effect by making reviewing really easy:

  • They made the review process double-blind, so feedback isn’t visible until both traveler and host have filled out the form. This not only ensures more honest reviews, but removes a key source of friction from the review process.
[Image: Airbnb’s double-blind review process, with new review notifications]
  • They also enabled private feedback and reduced the timeline for leaving a review to 14 days, making reviewing more spontaneous and authentic.

By making reviews easier and more honest, Airbnb grew the number of reviews on the site, which in turn grew its authority. You can growth hack your shareability by identifying barriers to trust and smoothing out points of friction along the way.

17. How AdRoll used Appcues modal windows to increase adoption to 60%

AdRoll has a great MailChimp integration—it allows users to retarget ads to their email subscribers in MailChimp. But they found that very few users were actually making use of this feature.

Peter Clark, head of growth at AdRoll, wanted to experiment with in-app messaging in order to target the right AdRoll users more effectively.

But growth experiments like this require rapid iteration. His engineers were better suited to longer development cycles, and he didn’t want to disrupt the flow of his organization. So Peter and his team started using Appcues to create custom modal windows quickly and easily—and without input from their technical team members.

With a code-free solution, AdRoll’s growth team could design and implement however many windows they needed to drive adoption of the features they were working on. Here’s how it worked for the MailChimp integration:

  • The team first used a tool called Datanyze to isolate users who used both AdRoll and MailChimp.
  • They copied this list into Appcues and created the modal window below, targeting it to appear only to users with both tools who could take immediate advantage of the integration.
[Image: the MailChimp/AdRoll integration announcement modal window, made with Appcues]
  • They set the modal to appear as users logged in to their dashboards—the core area of the AdRoll tool, where users are already poised to take action on their ad campaigns.

This single experiment yielded thousands of conversions and ended up increasing adoption rate of the integration to 60%. The experiment is so easy to replicate that Clark and the team now use modal windows for all kinds of growth experiments.

18. How GitHub grew to 100,000 users in a year by nurturing its network effect

GitHub began as a collaboration layer around Git, a version-control tool that enables multiple developers to work together on a single project. The founders, coders themselves, built it to solve a problem they were having. But it was the discussion around Git—what the founders nicknamed “the Github”—that became the tool’s core value.

Github’s founders realized that the problem of collaboration wasn’t just a practical software problem—the whole developer community was missing a communal factor. So they focused on growing the community side of the product, creating a freemium product with an open-source repository where coders could come together to discuss projects and solve problems with a collective mindset.

They created the ability to follow projects and track contributions, so there’s both an element of camaraderie and an element of competitiveness. This turned GitHub into a sort of social network for coding. A little over a year after launch, GitHub had gained its first 100,000 users. In July 2012, GitHub secured $100M in venture capital.

By catalyzing the network effect, it’s possible to turn a tool into a culture. For GitHub, the more developers got involved, the better the tool became. Find a community for your product and give them a place to come together.

19. How Yelp reached 176 million unique monthly visits by gamifying reviews

It’s relatively easy for a consumer review site to get drive-by traffic. What makes Yelp different, and allows it to draw return visitors and community members, is that it has strategically grown the social aspect of its platform. 

This is what has earned Yelp 176 million unique monthly visitors in Q2 2019 and has allowed them to overtake competitors by creating their own category of service. Yelp set out to amplify its existing network effect by rewarding users for certain behaviors.

  • They created user levels—users could achieve “Elite” status by frequently writing good reviews and by voting and commenting on other users’ reviews.
  • Yelp judged reviews based on several factors, including level of detail and how many votes of approval they received. All of these factors helped to make Yelp more shareable. Essentially, they were teaching loyal users to be better content creators by rewarding them for upping the quality of Yelp’s content.
[Image: a Yelp review from an Elite reviewer, showing the user’s friend and review counts and buttons to rate the review as useful, funny, or cool]

By making reviews into a status symbol, Yelp turned itself into a community with active members who feel a sense of belonging there—and who feel motivated to use the platform more often. 

20. How Etsy grew to 42.7 million active buyers by empowering sellers 

Etsy reached IPO with a $2 billion valuation in 2015, ten years after the startup was founded. Today, the company boasts 42.7 million active buyers and 2.3 million active sellers who made $3.9 billion in annual gross merchandise sales in 2018. Not too shabby (chic)!

The key to their success was Etsy’s creation of a “community-centric” platform. Rather than building a simple ecommerce site, Etsy set about creating a community of like-minded craft-makers. One way they did this was to boost organic new-user growth by actively encouraging sellers to share their wares on social media.

  • First, Etsy’s strategy was to focus on the seller side of its user acquisition. They gave their sellers tons of support but also tons of independence to promote and curate their businesses—which ultimately gave sellers a sense of ownership over their own success. Thanks to this approach, Etsy sellers were motivated to recruit their own buyers, who then visited Etsy and got hooked on the site itself.
  • Etsy’s seller handbook is basically a course in how to operate a small online business—hashtags and all. Vendors create their own regulars, and drum up their own new business through social sharing, while Etsy positions itself as the supportive platform.
[Image: The Etsy seller handbook dashboard]

If your product involves a 2-sided market, focus on one side of that equation first. What can you do to enable those people to become an acquisition channel in and of themselves?

21. How IBM created a growth hacking team to spur startup-level growth

As cloud-based software has taken off, traditional hardware technology companies have struggled. IBM has been proactive in its efforts to redefine its brand and product offering for an increasingly mobile audience.

Faced with an increasingly competitive, cloud-based landscape, IBM decided that it was time to start telling a different story. This legacy giant began acting more like a nascent startup, as the company aggressively reinvented its portfolio. 

Their strategy for reinvigorating growth and achieving a startup-like mentality has been to take a product-led approach:

  • In 2014, IBM created a growth hacking team. Already a large corporation, IBM didn’t need to climb the initial hill of growth to get its product off the ground. But by building this focused team, it aimed to grow into new areas and new audiences with “data-driven creativity,” by using the small business strategies it was seeing in the startup scene.
  • IBM now essentially has startup-sized teams within its massive organization, working in a lab style with the autonomy to test marketing strategies.

No matter what your team looks like—whether it’s a nimble 10-person startup or an enterprise with low flexibility—you can turn your organizational structure into a space where growth can thrive. Of course, that achievement is not without its struggles. But as Nancy Hensley, Chief Digital Officer of Data and AI at IBM says:

“There’s always pain in transformation. That’s how you know you’re transforming!”

Listen up before you get loud

None of these growth spurts happened by changing a whole company all at once. Instead, these teams found something—something small, a way in, a loophole, a detail—and carved out that space so growth could follow. 

Whether you find that a single feature in your product is the key to engaging users, or you discover a north star metric that allows you to replicate success—pinpoint your area for growth and dig into it. 

Pay attention. Listen to your users and notice what’s happening in your product and what could be happening better. That learning is your next growth strategy.

Deep Dive into the Past, Present, & Future of Income Share Agreements – Erik

Introduction

Imagine a world where you had a personal board of advisors, the people you most admire and respect, and you gave them upside in your future earnings in exchange for helping you (e.g., our good friend Mr. Mike Merrill).

Imagine if there was a “Kickstarter for people” where you could support up-and-coming artists, developers, entrepreneurs — when they need the cash the most, and most importantly, you’d only profit when they profit.

Imagine if you could diversify by pooling 1% of your future income with your ten smartest friends.

Now think about how much you’d go out of your way to help, say, your brother-in-law or step-siblings. Probably much more than a stranger. Why is that?

To pose a thought experiment: If you didn’t know your cousins were related to you, you might treat them like any other person. But because we have this social context of an “extended family,” you have a sort of genetic equity in them — a feeling that your fates are shared and it’s your responsibility to support them.

This raises the question: How can we create the social context needed for people to truly care about others outside of their extended family?

If you believe that markets and trade have helped the world become a less violent place — because why hurt someone when it’ll also take money out of your pocket? — then you should believe that adding more markets (with proper safeguards) will make the world even less violent.

This is the hope of income share agreements (ISAs).

ISAs align economic incentives in ways that encourage us to help others beyond our extended family, give people economic opportunity who don’t have it today, and free people from the shackles of debt.

What are these ISAs you speak of?

An Income Share Agreement is a financial arrangement where an individual or organization provides something of value to a recipient, who, in exchange, agrees to pay back a percentage of their income for a certain period of time.

In the context of education, ISAs are a debt-free alternative to loans.

Rather than go into debt, students receive interest-free funding from an investor or benefactor. In exchange, the student agrees to share a percentage of future income with their counterparty. They come in different shapes and sizes, but almost always with terms that take into account a plethora of potential scenarios.
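
To make those mechanics concrete, here is a minimal sketch of how one year’s payment could be computed under a simple ISA. The income share, income floor, and payment cap below are illustrative placeholders, not any provider’s actual terms:

```python
# A minimal sketch of ISA payment mechanics. Every number below is an
# illustrative placeholder, not any provider's actual terms.

def isa_payment(annual_income: float,
                share: float = 0.10,           # hypothetical 10% income share
                income_floor: float = 40_000,  # no payments below this income
                total_cap: float = 25_000,     # lifetime payment ceiling
                paid_so_far: float = 0.0) -> float:
    """Return the payment owed for one year under a simple ISA."""
    if annual_income < income_floor:
        return 0.0  # payments pause automatically when income is low
    owed = share * annual_income
    return min(owed, max(total_cap - paid_so_far, 0.0))

# A recipient earning $65,000 with nothing paid yet would owe $6,500 this year.
print(isa_payment(65_000))  # 6500.0
```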

“Part of the elegance of an ISA is that the lender only wants a share of income when the borrower is getting a regular income. If you’re unemployed or underemployed, they’re not interested… you’re automatically getting a suspension of payments when you’re not doing well.”

– Mark Kantrowitz, a leading national expert on student loans who has testified before Congress about student aid policy.

There is a long and storied history of income share agreements, but they’ve only recently become popular due to the rise of Lambda School, a school that lets students attend for free and, if they do well after school, pay a percentage of their income until they pay Lambda back.

Wait, a popular meme sarcastically asks, did you just invent taxes?

No. Lambda gets paid if and only if the student earns a certain amount after graduation. In other words, incentives are aligned. The student is the customer. Not the government. Not the state. Not the parents.

To be sure, it’s early days for ISAs: adverse selection, legalization, concerns about treating individuals like corporations (derivatives? Shorting people?!); there’s a lot left to figure out.

Still, it’s an idea that once you see, you can’t unsee.

Here’s a hypothetical story to help you picture how ISAs work:


Picture Janet, a senior at Davidson High School. She has a 4.0 GPA, is captain of the debate team, and is the star center forward of the varsity soccer team. She’s a shoo-in for a top-20 university, but her parents can’t afford it even with a scholarship, so she’s not even going to apply and is headed for State. Then she learns from a news article that she’s a pretty good bet as someone who’s going to succeed down the road, and that this might allow her to put some much-needed cash towards her education. She goes for it, makes a profile on an ISA platform, and sure enough, a few strangers bet $50,000 on her college education! She immediately gets to work filling out Ivy League scholarship applications.

Throughout college, she keeps in touch with her investors, who give her advice, and because of her interest in politics, one even helps her get an internship with a governor’s election campaign over the summer. Once she graduates, she knows the clock is ticking: at 23 she’ll need to start paying back the investors 5% of her after-tax income, so she hustles to work her way through the ranks.

From age 23 to 33, the payback period, Janet becomes a lawyer at a top-tier firm, and the investors make a 3x cash-on-cash return.


The above is purely hypothetical.
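
A quick back-of-the-envelope check shows what that hypothetical 3x outcome implies:

```python
# Back-of-the-envelope check of the hypothetical story above.
investment = 50_000          # what the strangers bet on Janet
target_multiple = 3          # the 3x cash-on-cash return
payback_years = 10           # ages 23 to 33
income_share = 0.05          # 5% of after-tax income

total_repaid = investment * target_multiple        # $150,000
avg_annual_payment = total_repaid / payback_years  # $15,000 per year
implied_income = avg_annual_payment / income_share
print(f"${implied_income:,.0f}")  # $300,000 average after-tax income
```

An implied average after-tax income of $300,000 over the payback period is an unusually strong outcome, which underscores that investor returns like these depend on the recipient doing very well.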

ISAs for traditional higher education are much more complicated than, say, vocational training, where the ‘skills-development-to-job’ pathway is more directly aligned for students. But the beauty of ISAs is in their flexibility, so there is lots of room for innovation.

So: this is the dream — why hasn’t it happened yet?

ISAs and other related instances of securitizing human capital have been tried. Here’s a brief history:

Economist Milton Friedman Proposes Use of ISAs in Education —

In modern times, the first notable mention of the concept of ISAs was by Nobel-prize winning economist Milton Friedman in his 1955 essay The Role of Government in Education.

In a section devoted specifically to vocational and professional education, Friedman proposed that an investor could buy a share in a student’s future earning prospects.

It’s worth noting that the barriers to adoption that Friedman identified back in the 1950s still hold true today:

  1. The potential high costs of administration;
  2. The sheer novelty of the idea;
  3. The reluctance to think of investments in human beings as comparable to investments in physical assets; and
  4. Legal and conventional limitations on suitable financial intermediaries.

Society might not have been ready for ISAs in the 1950s, but 16 years later, another Nobel Prize-winning economist, James Tobin, would help launch the first ISA option for college students at Yale University.

Yale experiments with ISAs —

In the 1970s, Yale University ran an experiment called the Tuition Postponement Option (“TPO”). The TPO was a student loan program that enabled groups of undergraduates to pay off loans as a “cohort” by committing a portion of their future annual income.

Students who signed up for the program (3,300 in total) were to pay 0.4 percent of their annual income for every $1,000 borrowed until the entire group’s debt had been paid off. High earners could buy out early, paying 150% of what was borrowed plus interest.

Within each cohort, many low earners defaulted, while the highest earners bought out early, leaving a disproportionate debt burden for the remaining graduates.

Administrators also did not account for the changes to the tax code and skyrocketing inflation in the 1980s, which only exacerbated the inequitable arrangement.

“We’re all glad it’s come to an end. It was an experiment that had good intentions but several design flaws.” — Yale President Richard Levin

While the TPO is generally considered a failure, it was the first instance of a major university offering ISAs and a useful example for how not to structure ISAs — specifically, pooling students by cohort and allowing the highest earning students to buy out early.

ISAs as a Financial Aid Option —

It would be decades after Yale’s failed experiment before universities started experimenting again with ISAs, but today a company called Vemo Education is leading the way.

This is a crucial point: Vemo isn’t competing directly with loans, but instead is unlocking other sorts of value (e.g., helping students better choose their college). The key here is that Vemo links an individual’s fortunes to the institution’s fortunes: it helps universities offer ISAs that signal a willingness to better align the cost of a higher education program with the value it delivers.

The first institution that Vemo partnered with to offer ISAs was Purdue University.

In 2016, Purdue University began partnering with Vemo Education to offer students an ISA tuition option through its “Back a Boiler” ISA Fund. They started with a $2 million fund, and since then have raised another $10.2 million and have issued 759 contracts totaling $9.5 million to students.

Purdue markets its ISA offering as an alternative to private student loans and Parent PLUS Loans. Students of any major can get $10,000 per year in ISA funding at rates that vary between 1.73% and 5.00% of their monthly income. Purdue caps payments at 2.5x the ISA amount that students take out and payment is waived for students making less than $20,000 in annual income.

In the last few years, Vemo has emerged as the leading partner for higher education institutions looking to develop, launch and implement ISAs. In 2017, Vemo powered $23M of ISAs for college students across the US.

Upstart: A Short-Lived Attempt at “Kickstarter for People” —

Fintech company Upstart initially launched with a model of “crowdfunding for education”. However, they eventually pivoted to offering traditional loans when they realized that their initial model was simply not viable.

Why? Not enough supply.

The fact that only accredited investors (over $1M in net worth) could invest severely limited the pool of potential funders on the site. And while Upstart’s original model never got enough traction (the company pivoted successfully), it paved the way for a platform like it to eventually be built.

ISAs for Vocational Training —

While Upstart failed to gain traction, technical educational bootcamps have seen tremendous growth while offering their students ISAs to finance their education.

And Lambda School is leading the way.

Lambda School is an online bootcamp that trains students to become software engineers at no upfront cost. Instead of paying tuition, students agree to pay 17% of their income for the first two years that they’re employed. Lambda School includes a $50,000 minimum income threshold and caps total payments at an aggregate $30,000. They also give students the option to pay $20,000 upfront if they’d rather not receive an ISA.
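
Applying those published terms to a hypothetical salary shows how the income threshold and the payment cap interact (the $70,000 salary is our own assumption):

```python
# Lambda School's published terms applied to a hypothetical salary.
income_share, years, income_floor, cap = 0.17, 2, 50_000, 30_000

salary = 70_000  # hypothetical post-graduation salary, our assumption
total = min(income_share * salary * years, cap) if salary >= income_floor else 0.0
print(f"${total:,.0f}")  # $23,800 over two years, under the $30,000 cap
```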

Lambda School students enroll for nine months and end up with 1,500–2,000 hours of training, comparable to the training they’d receive during the CS-focused portion of a four-year degree.

“Lambda School looks like a charity from the outside, but we’re really more like a hedge fund.

We bet that smart, hardworking people are fundamentally undervalued, and we can apply some cash and leverage to fix that, taking a cut.” — Austin Allred (Lambda School CEO)

In our opinion, Lambda is legitimizing ISAs and may just be the wedge that makes ISAs mainstream.

An Outlook for the Future of ISAs

Given where we are today, and with the potential for this type of financial innovation, what might the future look like?

There are three major themes in particular that get us excited for the future of ISAs: aggregation, novel incentive structures, and crypto.

Aggregation —

We believe that it’s possible to pool together various segments of people to decrease the overall risk of that population and provide more to each individual person.

If each individual’s income is fairly independent of the others’, pooling should work: the spread of outcomes narrows while the expected payoff stays roughly the same, so risk-adjusted returns improve. And as risk-adjusted returns improve, more investors and ISA providers will likely jump in to provide even more capital for more people.

“There is no reason you have to do this at the individual level. Most likely, it will first occur in larger aggregated groups — based on either geography, education, or other group characteristics. As with the housing market, it is important to aggregate enough individual sample points to reduce risk.” — Dave McClure
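
A toy simulation makes the diversification point concrete. The payoff distribution below is invented purely for illustration; the takeaway is only that a pool’s average payoff stays roughly constant while its spread shrinks roughly as 1/sqrt(n):

```python
# A toy simulation of pooling ISAs. The payoff distribution is invented
# purely for illustration; only the diversification effect matters.
import random

random.seed(0)

def single_isa_payoff() -> float:
    """Hypothetical payoff multiple on one ISA: some zeros (recipients
    below the income floor), many modest outcomes, a few large wins."""
    r = random.random()
    if r < 0.30:
        return 0.0
    elif r < 0.95:
        return random.uniform(0.5, 2.0)
    return random.uniform(3.0, 5.0)

def pooled_payoff(n: int) -> float:
    """Average payoff across a pool of n independent ISAs."""
    return sum(single_isa_payoff() for _ in range(n)) / n

# The mean payoff stays roughly constant; the spread shrinks ~ 1/sqrt(n).
for n in (1, 10, 100):
    samples = [pooled_payoff(n) for _ in range(2_000)]
    mean = sum(samples) / len(samples)
    std = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
    print(f"pool of {n:3d}: mean={mean:.2f}, std={std:.2f}")
```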

Another take on aggregation could be an individual electing to group together with their close friends or peers.

This can have the magical benefit of further aligning incentives with those around you, increasing the value of cooperation, lowering downside risk, and promoting more potential risk taking or thinking outside the box, all of which should have the benefit of increasing economic growth.

In addition to that, being able to take a more active role in a friend’s life (helping when need be, sharing in their wins, supporting in their losses, etc.) can be an extremely rewarding experience. That said, there are some definite downsides and risks to be aware of with these types of arrangements.

Novel Incentive Structures —

How can we create financial products to incentivize service providers (i.e., teachers, doctors, etc.) whose work indirectly has a massive impact on the income of future generations?

Just imagine the difference it could make if every teacher were able to take even a tiny percentage of each of their students’ future earnings. Teachers today unfortunately don’t make nearly as much money as they should, given the significant effect they have on future generations. A great teacher can create the spark for the next Einstein or Elon Musk. A terrible teacher could damage a potential Einstein or Elon Musk enough that they never realize their potential. Imagine how many more incredible people we could have.

There will always be incredible teachers regardless of monetary return, but we bet there could be more. It all comes down to aligning incentives.
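
To get a rough sense of the magnitude, here is a purely illustrative calculation; every number in it is a made-up assumption, not data:

```python
# Purely illustrative: every number here is a made-up assumption.
students_per_year = 30
teaching_years = 20
share_per_student = 0.0005   # 0.05% of each student's future income
avg_student_income = 60_000  # hypothetical average annual earnings

total_students = students_per_year * teaching_years  # 600 former students
annual_flow = total_students * share_per_student * avg_student_income
print(f"${annual_flow:,.0f} per year once all cohorts are earning")  # $18,000
```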

This same thinking can be applied to other service providers, like doctors. Currently, a doctor is paid the same amount (all else equal) whether or not a life-saving surgery succeeds. But what if the service provider also took a tiny fraction of the patient’s future earnings? Incentives would be more aligned. The doctor may not even realize it, but they would likely work a bit harder knowing what’s at stake.

Crypto —

Crypto can securitize much more than we currently do; in essence, we could tokenize ourselves and all our future income. Once those personal tokens exist, they can be traded instantly anywhere in the world with infinite divisibility. Arbitrageurs and professional traders could create new financial products (e.g., ISA aggregations) and buy and sell with each other to price things to near perfection.

What’s next?

We’d love to continue the conversation! This is a fascinating space with a ton of opportunity. If you’re thinking about or building anything here, feel free to leave your comments or reach out to talk more.

Special shoutout to David Weinstein & Jake Hallac for their help writing as well as Ray Batra, Dani Grant, Zander Adell, Dave McClure, Sam Lessin and Alex Marcus for their help reviewing / editing!

***

Appendix: Addressing Common Concerns —

Isn’t giving up the legal right to a portion of future income equivalent to modern-day indentured servitude?

Quick refresher: Indentured servants were immigrants who bargained away their labor (and freedom) for four-to-seven years in exchange for passage to the British colonies, room, board and freedom dues (a prearranged severance). Most of these immigrants were English men who came to British colonies in the 17th century.

On the surface this seems like a decent deal, but not so fast. They could be sold, lent out or inherited. Only 40% of indentured servants lived to complete the terms of their contracts. Masters traded laborers as property and disciplined them with impunity, all lawful at the time.

Rebuttal: We are in no way advocating a return to indentured servitude (voluntary or otherwise). Modern-day ISAs must be structured to have proper governance, ensure alignment of interests and contain legal covenants that protect both parties.

We are advocating for ISAs that (i) are voluntary, (ii) do not force the recipient to work for the investor, and (iii) are a promise to share future income, not an obligation to repay a debt.

ISAs are unregulated. How do we structure and enforce ISAs without a legal framework to rely on?

Our Response: ISAs offered by Lambda School, Holberton School and other companies are legal under current US law. To the best of our knowledge, all companies offering ISAs operate according to best practices (i.e., consumer disclosure and borrower protections) as set forth in proposed federal legislation.

The Investing in Student Success Act (H.R. 3432 / S. 268) has been proposed in both the US House of Representatives and the US Senate. Under this legislation, ISAs would be classified as qualified education loans (rather than equity or debt securities), making them dischargeable in bankruptcy. Furthermore, the bill would exempt ISAs from being considered an investment company under the Investment Company Act of 1940.

Importantly, the bill includes consumer protections (i.e., required disclosures, payback periods, payback caps, and limits on income share amounts). The bill also includes tax stipulations that preclude ISA recipients from owing any taxes on the funding they receive and limit investors’ taxes to the profits earned from ISAs.

Given that ISAs are riskier than student loans, but don’t require the same qualifications, aren’t ISAs prone to adverse selection?

Quick refresher: Adverse selection describes a situation in which one party has information that the other does not. To fight adverse selection, insurance companies reduce exposure to large claims by limiting coverage or raising premiums.

Our Response: In September 2018, Purdue University published a research study that looked into adverse selection in ISAs. The study concluded that there was no adverse selection by student ability among borrowers. However, ISA providers need to properly structure the ISA so as not to cap a recipient’s upside by too much. In addition, this risk can be mitigated by (i) offering a structured educational curriculum for high-income jobs and (ii) an application process that ensures that students have the ability and motivation to complete a given vocational program.

Couldn’t ISAs result in lack of diversity and discriminatory practices?

Our Response: Properly structured ISAs paired with effective offerings (i.e., skills-based training, career development assistance) have the potential to mitigate inequality and discriminatory practices. ISA programs like Lambda School require students to be motivated to succeed and have enough income to complete the program, but in no way discriminate based on age, gender or ethnicity.

However, as ISAs become more common, new legislation must include explicit protections to guard against discrimination in administration of ISAs (especially given that it’s unclear whether the Equal Credit Opportunity Act would apply to ISAs since they aren’t technically loans).

Can’t students simply refuse to pay once they start earning income after graduation?

Our Response: ISA providers like Lambda School are already starting to negotiate directly with employers to ensure that students have a job after completing the curriculum. These relationships mitigate the risk of a student refusing to pay. Lambda School is able to do this because it’s developed such a strong curriculum. Furthermore, students face reputation risk should they try to avoid meeting their obligations to the ISA provider.

Future legislation should address instances where a student avoids payment or chooses to take a job with no salary (e.g., a student completes a coding bootcamp, but has a change of heart and goes to work at a non-profit that pays below the minimum income threshold).

Equity is expensive (relative to debt), so wouldn’t students be better off sticking with traditional debt financing?

Our Response: ISAs are not for everyone. ISAs are best suited to people with greater expected volatility in their future earnings (rather than people with a strong likelihood of earning a certain salary). This is similar to new businesses choosing between equity investment and debt to finance their operations: businesses with clear expectations of future cash flows generally benefit more from debt than from equity. Individuals looking to finance their education are no different. Similarly, ISAs don’t need to be all or nothing: individuals can capitalize their education with a mix of student loans and ISAs to get a more optimal blend.

Source: https://medium.com/@eriktorenberg_/life-capital-9e5028c0ea12

Customer Journey Maps – Walking a Mile in Your Customer’s Shoes – IDF

Perhaps the biggest buzzword in customer relationship management is “engagement”. Engagement is a funny thing, in that it is not measured in likes, clicks, or even purchases. It’s a measure of how much customers feel they are in a relationship with a product, business or brand. It focuses on harmony and how your business, product or brand becomes part of a customer’s life. As such, it is pivotal in UX design. One of the best tools for examining engagement is the customer journey map.

As the old saying goes, “Don’t judge a man until you have walked a mile in his shoes” (often attributed to the Cherokee, though it was actually popularized by Harper Lee of To Kill a Mockingbird fame). The customer journey map lets you walk that mile.

“Your customer doesn’t care how much you know until they know how much you care.”

– Damon Richards, Marketing & Strategy expert

Copyright holder: Alain Thys, Flickr. Copyright terms and license: CC BY-ND 2.0

Customer journey maps don’t need to be literal journeys, but they can be. Creativity in determining how you represent a journey is fine.

What is a Customer Journey Map?

A customer journey map is a research-based tool. It examines the story of how a customer relates to the business, brand or product over time. As you might expect, no two customer journeys are identical. However, they can be generalized to give an insight into the “typical journey” for a customer as well as providing insight into current interactions and the potential for future interactions with customers.

Customer journey maps can be useful beyond the UX design and marketing teams. They can help facilitate a common business understanding of how every customer should be treated across all sales, logistics, distribution, care, etc. channels. This in turn can help break down “organizational silos” and start a process of wider customer-focused communication in a business.

They may also be employed to educate stakeholders as to what customers perceive when they interact with the business. They help stakeholders explore what customers think, feel, see, hear and do, and also raise some interesting “what ifs” and possible answers to them.

Adam Richardson of Frog Design, writing in Harvard Business Review says: “A customer journey map is a very simple idea: a diagram that illustrates the steps your customer(s) go through in engaging with your company, whether it be a product, an online experience, retail experience, or a service, or any combination. The more touchpoints you have, the more complicated — but necessary — such a map becomes. Sometimes customer journey maps are “cradle to grave,” looking at the entire arc of engagement.”

Copyright holder: Stefano Maggi, Flickr. Copyright terms and license: CC BY-ND 2.0

Here, we see a customer journey laid out based on social impact and brand interaction with that impact.

What Do You Need to Do to Create a Customer Journey Map?

Firstly, you will need to do some preparation prior to beginning your journey maps; ideally you should have:

  • User personas. If you can’t tell a typical user’s story, how will you know if you’ve captured their journey?
  • A timescale. Customer journeys can take place in a week, a year, a lifetime, etc., and knowing what length of journey you will measure before you begin is very useful indeed.
  • A clear understanding of customer touchpoints. What are your customers doing and how are they doing it?
  • A clear understanding of the channels in which actions occur. Channels are the places where customers interact with the business – from Facebook pages to retail stores. This helps you understand what your customers are actually doing.
  • An understanding of any other actors who might alter the customer experience. For example, friends, family, colleagues, etc. may influence the way a customer feels about any given interaction.
  • A plan for “moments of truth” – these are the positive interactions that create good feelings in customers and which you can use at touchpoints where frustrations exist.

Copyright holder: Hans Põldoja, slideshare.net. Copyright terms and license: CC BY-SA 4.0

User personas are incredibly useful tools when it comes to putting together any kind of user research. If you haven’t developed them already, they should be a priority for you, given that they will play such a pivotal role in the work that you, and any UX teams you join in the future, will produce.

Once you’ve done your preparation, you can follow a simple 8-point process to develop your customer journey maps:

  • Review Organization Objectives – what are your goals for this mapping exercise? What organizational needs do you intend to meet?
  • Review Current User Research – the more user research you have at your fingertips, the easier this exercise will be. Be creative, and if you don’t have the right research to define the journey, then consider how you can carry that research out.
  • Review Touchpoints and Channels – the next step is to ensure that you effectively map touchpoints and channels. A touchpoint is a step in the journey where the user interacts with a company or product, and a channel is the means by which the user does this. So, for example, a touchpoint could be “pay this invoice” and channels could be “online”, “retail”, “over the phone”, “mail”, etc. It can also help to brainstorm at this stage and see if there are any touchpoints or channels you’ve missed in your original data collection exercise.
  • Create an Empathy Map. An empathy map examines how the customer feels during each interaction – you want to concentrate on how the customer feels and thinks as well as what he/she will say, do, hear, etc. in any given situation.
  • Build an affinity diagram. The idea here is first to brainstorm around each concept you’ve touched on and then to create a diagram which relates all these concepts, feelings, etc. together. This is best achieved by grouping ideas in categories and labeling them. You can eliminate concepts and the like which don’t seem to have any impact on customer experience at this stage, too.
  • Sketch the customer journey. How you do this is up to you; you can build a nice timeline map that brings together the journey over the course of time. You could also turn the idea into a video or an audio clip or use a completely different style of diagram. The idea is simply to show the motion of a customer through touchpoints and channels across your time frame and how that customer feels about each interaction on that journey. The map should include the outputs of your empathy map and affinity diagram.
  • Iterate and produce. Then, take your sketches and make them into something useful; keep refining the content and then produce something that is visually appealing and useful to stakeholders, team members, etc. Don’t be afraid to rope in a graphic designer at this stage if you’re not good at making things look awesome.
  • Distribute and utilize. The journey map serves no purpose sitting on your hard drive or in your desk drawer – you need to get it out there to people and explain why it’s important. Then, it needs to be put to use; you should be able to define KPIs around the ideal journey, for example, and then measure future success as you improve the journey.

Copyright holder: Rosenfeld Media. Copyright terms and license: CC BY 2.0

A complete customer journey map by Adaptive Path for the experience of interacting with railway networks.

Anatomy of a customer journey map

A customer journey map can take any form or shape you like, but let’s take a look at how you can use the Interaction Design Foundation’s template.

Copyright holder: The Interaction Design Foundation. Copyright terms and license: CC BY-SA

A basic customer journey map template.

The map here is split into several sections: In the top zone, we show which persona this journey refers to and the scenario which is described by the map.

The middle zone has to capture the thoughts, actions and emotional experiences for the user, at each step during the journey. These are based on our qualitative user research data and can include quotes, images or videos of our users during that step. Some of these steps are “touchpoints” – i.e., situations where the customer interacts with our company or product. It’s important to describe the “channels” in each touchpoint – i.e., how that interaction takes place (e.g., in person, via email, by using our website, etc.).

In the bottom zone, we can identify the insights and barriers to progressing to the next step, the opportunities which arise from these, and possibly an assignment for internal team members to handle.
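
For teams that want to keep journey maps alongside other design artifacts, the template’s three zones translate naturally into a simple data structure. Here is a minimal sketch in Python; the field names are our own shorthand, not official Interaction Design Foundation terminology:

```python
# A minimal sketch of the template's three zones as a data structure.
# Field names are our own shorthand, not official IDF terminology.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Step:
    action: str                  # what the user does at this step
    thoughts: str                # what the user is thinking
    emotion: str                 # e.g. "uncertain", "delighted"
    touchpoint: bool = False     # does the user interact with us here?
    channels: List[str] = field(default_factory=list)   # e.g. ["website"]
    insights: List[str] = field(default_factory=list)   # bottom zone
    barriers: List[str] = field(default_factory=list)
    opportunities: List[str] = field(default_factory=list)
    owner: Optional[str] = None  # internal team member assigned

@dataclass
class JourneyMap:
    persona: str                 # top zone: who this journey refers to
    scenario: str                # top zone: the situation being mapped
    steps: List[Step] = field(default_factory=list)     # middle zone

journey = JourneyMap(
    persona="Alex, a first-time customer",
    scenario="Paying an invoice online",
    steps=[Step(action="Opens the billing page",
                thoughts="Where do I click?",
                emotion="uncertain",
                touchpoint=True,
                channels=["website"],
                barriers=["unclear navigation"],
                opportunities=["simplify the payment flow"])],
)
```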

The Take Away

Creating customer journeys (including those exploring current and future states) doesn’t have to be a massively time-consuming process – most journeys can be mapped in less than a day. The effort put in is worthwhile because it enables a shared understanding of the customer experience and offers each stakeholder and team member the chance to contribute to improving that experience. Taking this “day in the life of a customer” approach will yield powerful insights into and intimate knowledge of what “it’s like” from the user’s angle. Seeing the details in sharp relief will give you the chance to translate your empathy into a design that better accommodates your users’ needs and removes (or alleviates) as many pain points as possible.

References & Where to Learn More

Hero Image: Copyright holder: Espen Klem, Flickr. Copyright terms and license: CC BY 2.0

Boag, P. (2015). Customer Journey Mapping: Everything You Need to Know. https://www.sailthru.com/marketing-blog/written-customer-journey-mapping-need-to-know/

Designing CX. The customer experience journey mapping toolkit. http://designingcx.com/cx-journey-mapping-toolkit/

Kaplan, K. (2016). When and How to Create Customer Journey Maps. https://www.nngroup.com/articles/customer-journey-mapping/

Richardson, A. (2010). Using Customer Journey Maps to Improve Customer Experience. Harvard Business Review. https://hbr.org/2010/11/using-customer-journey-maps-to/


Source : https://www.interaction-design.org/literature/article/customer-journey-maps-walking-a-mile-in-your-customer-s-shoes?r=dianne_rees
