Which investments generate the greatest value in venture: Consumer or Enterprise? – Sapphire

A Dive into Enterprise vs Consumer Exit Activity

In today’s fast-paced market — where major funding or exit announcements seem to roll in daily — we at  Sapphire Partners like to take a step back, ask big picture questions, and then find concrete data to answer them. 

One of our favorite areas to explore is: as a venture investor, do your odds of making better returns improve if you only invest in either enterprise or consumer companies?  Or do you need a mix of both to maximize your returns? And how should recent investment and exit trends influence your investing strategy, if at all? 

To help answer questions like these, we’ve collected and analyzed exit data for years.  What we’ve found is pretty intriguing: portfolio value creation in enterprise tech is often driven by a cohort of exits, while value creation in consumer tech is generally driven by large, individual exits. 

In general, this trend has held for several years and has led to the general belief that if you are a consumer investor, the clear goal is not to miss that “one deal” that creates a huge spike in exit value (easier said than done, of course). And if you’re an enterprise investor, you want to create a “basket of exits” in your portfolio.

What Creates More Portfolio Value: Consumer or Enterprise?

2019 has been a powerhouse year for consumer exit value, buoyed by Uber and Lyft’s IPOs (their recent declines in stock price notwithstanding). The first three quarters of 2019 alone surpassed every year since 1995 for consumer exit value – and we’re not done yet. If the consumer exit pace continues at this scale, we will be on track for the most value created at exit in 25 years, according to our analysis.

Source: S&P Capital IQ, Pitchbook

Since 1995, the number of enterprise exits has consistently outpaced consumer exits (blue line versus green line above), but 2019 is the closest to seeing those lines converge in over two decades (223 enterprise vs 208 consumer exits in the first three quarters of 2019). Notably, in five of the past nine years, the value generated by consumer exits has exceeded enterprise exits.[1]

At Sapphire, we observe the following:

  • Venture-backed enterprise tech companies have generated $884B in value since 1995; $349B from M&A and $535B from IPOs.
  • Venture-backed consumer tech companies have generated $773B in value since 1995; $153B from M&A and $620B from IPOs.
  • In total, there were 5,600+ venture-backed exits in enterprise tech and 3,300+ exits in consumer tech.

While the valuation at IPO serves as a proxy for an exit for venture investors, most investors face a lockup period.[2] 2019 has generated a tremendous amount of value through IPOs, roughly $223 billion. However, after trading in the public markets, the aggregate value of those IPOs has decreased by $81 billion as of November 1, 2019.[3] Uber and Lyft drive this decrease on an absolute-value basis, accounting for roughly 66% of the markdown over the same period, according to our figures. Over half of the IPO exits in 2019 have been consumer, and despite these stock price declines, consumer exits are still outperforming enterprise exits YTD given the enormous alpha they generated initially.
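
As a quick sanity check on the arithmetic above, the figures can be combined as follows (a minimal sketch using only the aggregates stated in the paragraph):

```python
# Sanity check of the 2019 IPO markdown figures quoted above.
ipo_value_2019 = 223      # $B created at IPO in 2019
markdown = 81             # $B decline after trading, per the text
uber_lyft_share = 0.66    # share of the markdown attributed to Uber & Lyft

print(f"Aggregate value after markdown: ${ipo_value_2019 - markdown}B")              # $142B
print(f"Markdown attributable to Uber + Lyft: ~${markdown * uber_lyft_share:.0f}B")  # ~$53B
```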

As we noted in the introduction, historical data since 1995 shows that years of high value creation in enterprise technology are often driven by a cohort of exits, whereas consumer value creation is often driven by large, individual exits. The chart below illustrates this, showing a side-by-side comparison of exits and value creation.

Source: Pitchbook

At Sapphire, we observe the following:

  • The top five enterprise companies with the largest exits account for $79B in value creation, or 9% of the $884B generated in the enterprise category since 1995.
  • The top five consumer companies with largest exits account for $276B in value creation, or 36% of the $773B generated in the consumer category since 1995.

The value generated by the top five consumer companies is 3.5x greater than that of enterprise companies. 
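
These concentration figures follow directly from the bullets above; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the top-five concentration figures.
enterprise_total, enterprise_top5 = 884, 79    # $B since 1995
consumer_total, consumer_top5 = 773, 276       # $B since 1995

print(f"Enterprise top-5 share: {enterprise_top5 / enterprise_total:.0%}")      # 9%
print(f"Consumer top-5 share: {consumer_top5 / consumer_total:.0%}")            # 36%
print(f"Consumer vs enterprise top-5: {consumer_top5 / enterprise_top5:.1f}x")  # 3.5x
```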

Understanding the Consumer Comeback

While the total value of enterprise companies exited since 1995 ($884B) exceeds that of consumer exits ($773B), consumer returns have been making a comeback over the last 15 years. Specifically, total consumer value exited since 2004 ($538B) exceeds that of enterprise exits ($536B). This difference has become more stark in the past 10 years, with total consumer value exited ($512B) surpassing that of enterprise ($440B). As seen in the chart below, the rolling 10-year total enterprise exit value exceeded that of consumer until the decade spanning 2003-2012, when consumer exit value took the lead.

Source: S&P Capital IQ, Pitchbook

We believe the size of, and the inevitable hype around, consumer IPOs has the potential to cloud investor judgment, since the volume of successful deals is not increasing. The data clearly shows the surge in outsized returns comes from the outliers in consumer.

As exhibited below, large consumer outliers since 2011, such as Facebook, Uber, and Snap, often account for more than the sum of enterprise exits in any given year. For example, in the first three quarters of 2019, there have been 15 enterprise exits valued at over $1B for a total of $96B. Over the same period, there have been nine consumer exits valued at over $1B for a total of $139B. Anecdotally, four of the past five years have been headlined by a consumer exit; while 2016 showcased an enterprise exit, it was a particularly quiet exit year.

  • 2015 – Consumer: Fitbit ($6B)
  • 2016 – Enterprise: Nutanix ($5B)
  • 2017 – Consumer: Snap ($27B)
  • 2018 – Consumer: Dropbox ($11B)
  • First 3 quarters of 2019 – Consumer: Uber ($85B)

Source: S&P Capital IQ, Pitchbook

Enterprise Deals Still Rule in M&A

While consumer deals have taken the lead in IPO value in recent years, on the M&A front, enterprise still has the clear edge. Since 1995 there have been 76 exits of $1 billion or more in value, of which 49 are enterprise companies and 27 are consumer companies. The vast majority of value from M&A has come from enterprise companies since 1995 — more than 2x that of consumer. 

Similar to the IPO chart above, acquisition value of enterprise companies outpaced that of consumer companies until recently, with 2010-2014 being the exception.

Source: S&P Capital IQ, Pitchbook

Of course, looking only at outcomes with $1 billion or more in value covers only a fraction of where most VC exits occur. Slightly less than half of all exits in both enterprise and consumer are $50 million or under in size, and more than 70 percent of all exits are under $200 million. Moreover, in the distribution chart below, we capture only the percentage of companies for which we have exit values. If we change the denominator to all exits captured in our database (i.e. measure the percentage of $1 billion-plus exits by using a higher denominator), the percentage of outcomes drops to around 3 percent of all outcomes for both enterprise and consumer.

Source: S&P Capital IQ, Pitchbook

What Does All of this Mean for Venture Investors?

There’s an enormous volume of information available on startup exits, and at Sapphire Partners, we ground our analyses and theses in the numbers. At the same time, once we’ve dug into the details, it’s equally important to zoom out and think about what our findings mean for our GPs and fellow LPs. Here are some clear takeaways from our perspective:

  • Consumer exits have surpassed enterprise over the past 15 years.
  • Consumer exits value is highly concentrated in the top deals.
  • There are more billion-dollar enterprise exits than billion-dollar consumer exits, so you may have more opportunities for a unicorn enterprise outcome than a consumer one.
  • However, if you happen to invest in one of the outlier consumer exits, you could experience significant returns.  

In a nutshell, as LPs we like to see both consumer and enterprise deals in our underlying portfolio, as they each provide different exposures and return profiles. However, when these investments get rolled up as part of a venture fund’s portfolio, success is often contingent on the fund’s overall portfolio construction… but that’s a question to explore in another post.


NOTE: Total Enterprise Value (“TEV”) presented throughout analysis considers information from CapIQ when available, and supplements information from Pitchbook last round valuation estimates when CapIQ TEV is not available. TEV (Market Capitalization + Total Debt + Total Preferred Equity + Minority Interest – Cash & Short Term Investments) is as of the close price for the initial date of trading. Classification of “Enterprise” and “Consumer” companies presented herein is internally assigned by Sapphire. Company logos shown in various charts presented herein reflect the top (4) companies of any particular time period that had a TEV of $1BN or greater at the time of IPO, with the exception of chart titled “Exits by Year, 1995- Q3 2019”, where logos shown in all charts presented herein reflect the top (4) companies of any particular year that had a TEV of $7.5BN or greater at the time of IPO. During a time period in which less than (4) companies had such exits, the absolute number of logos is shown that meet described parameters. Since 1995 refers to the time period of 1/1/1995 – 9/30/2019 throughout this article.
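
For readers who prefer the TEV definition above as an explicit calculation, here is a minimal sketch (the function name and the illustrative inputs are ours, not part of the original analysis):

```python
def total_enterprise_value(market_cap, total_debt, preferred_equity,
                           minority_interest, cash_and_st_investments):
    """TEV as defined in the note above, taken as of the close price
    for the initial date of trading."""
    return (market_cap + total_debt + preferred_equity
            + minority_interest - cash_and_st_investments)

# Illustrative (made-up) inputs, in $M:
print(total_enterprise_value(8_000, 1_200, 0, 50, 750))  # 8500
```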

[1] Includes the first three quarters of 2019. IPO exit values refer to the total enterprise value of a company at the end of the first day of trading according to S&P Capital IQ. Analysis considers a combination of Pitchbook and S&P Capital IQ to analyze US venture-backed companies that exited through acquisition or IPO between 1/1/1995 – 9/30/2019.

[2] Lockup period is a predetermined amount of time following an initial public offering (“IPO”) where large shareholders, such as company executives and investors representing considerable ownership, are restricted from selling their shares.

[3] Total enterprise value at the end of 10/15/2019 according to S&P Capital IQ.

Source : https://sapphireventures.com/blog/openlp-series-which-investments-generate-the-greatest-value-in-venture-consumer-or-enterprise/

Behind the scenes: Data and technology bring food product R&D into the 21st century – Food Dive

With CPG companies under pressure to develop items faster and stretch their spending, Conagra, Mars Wrigley and Ferrara are rethinking the decades-old way of creating new things for consumers.

There was little doubt four years ago that Conagra Brands’ frozen portfolio was full of iconic items that had grown tired and, according to its then-new CEO Sean Connolly, were “trapped in time.” 

While products such as Healthy Choice — with its heart-healthy message — and Banquet — popular for its $2 turkey and gravy and Salisbury steak entrees — were still generating revenue, the products looked much the same as decades before. The result: sales fell sharply as consumers turned to trendier flavors and better-for-you options.

Executives realized the decades-old process used to create and test products wasn’t translating into meaningful sales. Simply introducing new flavors or boosting advertising was no longer enough to entice consumers to buy. If Conagra maintained the status quo, the CPG giant risked exacerbating the slide and putting its portfolio of brands further behind the competition.

“We were doing all this work into what I would call validation insights, and things weren’t working,” Bob Nolan, senior vice president of demand sciences at Conagra, told Food Dive. “How could it not work if we asked consumers what they wanted, and we made it, and then it didn’t sell? …That’s when the journey started. Is there a different way to approach this?”

Credit: Conagra 

Nolan and other officials at Conagra eventually decided to abandon traditional product testing and market research in favor of buying huge quantities of behavioral data. Executives were convinced the data could do a better job of predicting eventual product success than consumers sitting in an artificial setting offering feedback.

Conagra now spends about $15 million less on testing products than it did three years ago, with much of the money now going toward buying data on food service, natural products, consumption at home, grocery retail and loyalty cards. When Nolan started working at Conagra in 2012, he estimated 90% of his budget at the company was spent on traditional validation research such as testing potential products, TV advertisements or marketing campaigns. Today, money spent on those methods has been cut to zero.

While most food and beverage companies have not changed how they go about testing their products as much as Conagra, CPG businesses throughout the industry are collectively making meaningful changes to their own processes.

With more data available now than ever before, companies can change their testing protocols to answer questions they might previously not have had the budget or time to address. They’re also turning to technology such as videos and smartphones to immediately engage with consumers or to see firsthand how they would respond to prototype products in real-life settings, like their own homes.

As food manufacturers scramble to remain competitive and meet the shopper’s insatiable demand for new tastes and experiences, changing how they go about testing can increase the likelihood that a product succeeds — enabling corporations to reap more revenue and avoid being one of the tens of thousands of products that fail every year.

For Conagra, the new approach already is paying off. One success story came in the development of the company’s frozen Healthy Choice Korean-Inspired Beef Power Bowl. By combing data collected from the natural food channel and specialty stores like Whole Foods and Sprouts Farmers Market, the CPG giant found people were eating more of their food in bowls — a contrast to offerings in trays.

At the same time, information gathered from restaurants showed Korean was the fastest-growing cuisine. The data also indicated the most popular flavors within that ethnic category. Nolan said without the data it would have been hard to instill confidence at Conagra that marketing a product like that would work, and executives would have been more likely to focus on flavors the company was already familiar with.

Since then, Conagra rebranded Healthy Choice around cleaner-label foods with recognizable, modern ingredients that were incorporated into innovations such as the Power Bowl. The overhaul helped rejuvenate the 34-year-old brand, with sales jumping 20% during the last three years after declining about 10% during the prior decade, according to the company.

Conagra has experienced similar success by innovating its other frozen brands, including Banquet and Marie Callender’s. For a company whose frozen sales total $5.1 billion annually, the segment is an important barometer for success at Conagra.

A decades-old approach

For years, food companies would come up with product ideas using market research approaches that dated back to the 1950s. Executives would sit in a room and mull over ways to grow a brand. They would develop prototypes before testing and retesting a few of them to find the one that would have the best chance of resonating with consumers. Data used was largely cultivated through surveys or focus groups to support or debunk a company idea.

“It’s an old industry and innovation has been talked about before, but it’s never been practiced, and I think now it’s starting to get very serious because CPG companies are under a lot of pressure to innovate and get to market faster,” Sean Bisceglia, CEO of Curion, told Food Dive. “I really fear the ones that aren’t embracing it and practicing it … may damage their brand and eventually damage their sales.”

Credit: Curion 

Information on nearly every facet of a consumer’s shopping habits and preferences can be easily obtained. There is data showing how often people shop and where they go. Tens of millions of loyalty cards reveal which items were purchased at what store, and even the checkout lane the person was in. Data is available on a broader level showing how products are selling, but CPGs can drill down on an even more granular level to determine the growth rate of non-GMO or organic, or even how a specific ingredient like turmeric is performing.

Market research firms such as Nielsen and Mintel collect reams of valuable data, including when people eat, where and how they consume their food, how much time they spend eating it and even how it was prepared, such as by using a microwave, oven or blender. 

To help its customers who want fast results for a fraction of the cost, Bisceglia said Curion has created a platform in which a product can be tried out among a random population group — as opposed to a specifically targeted audience made up of specific attributes, like stay-at-home moms in their 30s with two kids — with the data given to the client without the traditional in-depth analysis. It can cost a few thousand dollars with results available in a few days, compared to a far more complicated and robust testing process over several months that can sometimes cost hundreds of thousands of dollars, he said.

Curion, which has tested an estimated 8,000 products on 700,000 people during the last decade, is creating a database that could allow companies to avoid testing altogether.

For example, a business creating a mango-flavored yogurt could initially use data collected by a market research firm or someone else showing how the variety performed nationwide or by region. Then, as product development is in full swing, the company could use Curion’s information to show how mango yogurt performed with certain ages, income levels and ethnicities, or even how certain formulations or strength of mango flavor are received by consumers.

Lori Rothman, who runs her own consulting firm advising companies on their product testing, worked much of the last 30 years at companies including Kraft and Kellogg, determining the most effective way to test a product and then designing the corresponding trial. She used to have days or weeks to review data and consumer comments before plotting out the best way to move forward, she said.

In today’s marketplace, there is sometimes pressure to deliver within a day or even immediately. Some companies are even reacting in real time as information comes in — a precedent Rothman warned can be dangerous because of the growing amount of data available and the inherent complexity in understanding it.

“It’s continuing toward more data. It’s just going to get more and more and we just have to get better at knowing what to do with it, and how to use it, and what’s actually important. What’s actually going to be able to predict if someone is going to buy something, and are they going to buy it again and again and again?” Rothman said. “You have to get smart on what is the payoff at the end of all of the data. And just figure out what the key measures are that you need and stop collecting, if you can, all this other ancillary stuff.”

Sweet relief

Ferrara Candy, the maker of SweeTarts, Nerds and Brach’s, estimates the company considers more than 100 product ideas each year. An average of five typically make it to market.

To help whittle down the list, the candy company owned by Nutella-maker Ferrero conducts an array of tests with consumers, nearly all of them done without the customary focus group or in-person interview.

Daniel Hunt, director of insights and analytics for Ferrara, told Food Dive rather than working with outside vendors to conduct research, like the company would have a decade ago, it now handles the majority of testing itself.

In the past, the company might have spent $20,000 to run a major test. It would have paid a market research firm to write an initial set of questions to ask consumers, then refine them, run the test and then analyze the information collected.

Today, Hunt said Ferrara’s own product development team, most of whom have a research background, does most of the work creating new surveys or modifying previously used ones — all for a fraction of the cost. And what might have taken a few months to carry out in the past can sometimes be completed in as little as a few weeks.

Credit: Ferrara 

“Now when we launch a new product, it’s not much of a surprise what it does, and how it performs, and where it does well, and where it does poorly. I think a lot of that stuff you’ve researched to the point where you know it pretty well,” Hunt told Food Dive. “Understanding what is going to happen to a product is more important — and really understanding that early in the cycle, being able to identify what are the big potential items two years ahead of launching it, so you can put your focus really where it’s most important.”

Increasingly, technology is playing a bigger part in enabling companies such as Ferrara not only to do more of their own testing, but also to choose from more options for how best to carry it out.

Data can be collected from message boards, chat rooms and online communities popular with millennials and Gen Zers. But technology does have its limits. Ferrara aims to keep the time commitment for its online surveys to fewer than seven minutes because Hunt said the quality of responses tends to diminish for longer ones, especially among people who do them on their smartphones. 

Other research can be far more rigorous, depending on how the company plans to use the information.

Last summer, Ferrara created an online community of 20 people to help it develop a chewy option for its SweeTarts brand. As part of a three-week program, participants submitted videos showing them opening boxes of candies with different sizes, shapes, flavors, tastes and textures sent to them by Ferrara. Some of the products were its own candies, while others came from competitors such as Mars Wrigley’s Skittles or Starburst. Ferrara wanted to watch each individual’s reaction as he or she tried the products.

Participants were asked what they liked or disliked, or where there were market opportunities for chewy candy, to help Ferrara better hone its product development. These consumers were also asked to design their own products.

Ferrara also had people either video record themselves shopping or write down their experience. This helped researchers get a feel for everything from when people make decisions that are impulsive or more thought out, to what would make a shopper decide not to purchase a product. As people provided feedback, Ferrara could immediately engage with them to expound on their responses.

“All of those things have really helped us get information that is more useful and helpful,” Hunt said. “I don’t think that (testing is) going away or becoming less prevalent, but certainly the way that we’re testing things from a product standpoint is changing and evolving. If anything, we’re doing more testing and research than before, but maybe just in a slightly different way than we did in the past.”

Convincing people to change

Getting people to change isn’t easy. To help execute on its vision, Conagra spent four years overhauling the way it went about developing and testing products — a lengthy process in which one of the biggest challenges was convincing employees used to doing things a certain way for much of their career to embrace a different way of thinking.

Conagra brought in data scientists and researchers to provide evidence to show how brands grow and what consumer behavior was connected to that increase. Nolan’s team had senior management participate in training courses “so people realize this isn’t just a fly-by-night” idea, but one based on science.

The CPG giant assembled a team of more than 50 individuals — many of whom had not worked with food before — to parse the complex data and find trends. This marked a dramatic new way of thinking, Nolan said.

While people with food and market research backgrounds would have been picked to fill these roles in the past, Conagra knew it would be hard to retrain them in the company’s new way of thinking. Instead, it turned to individuals who had experience in data technology, hospitality and food service, even if it took them time to get up to speed on Conagra-specific information, like the brands in its portfolio or how they were manufactured.

Conagra’s reach extended further outside its own doors, too. The company now occasionally works with professors at the University of Chicago, just 8 miles south of its headquarters, to help assess whether it is properly interpreting how people will behave. 

“In the past, we were just like everybody else,” Nolan said. “There are just so many principles that we have thrown out that it is hard for people to adjust.”

Mars Wrigley has taken a different approach, maintaining the customary consumer testing while incorporating new tools, technology and ways of thinking that weren’t available or accepted even a few years ago.

Lisa Saxon Reed, director of global sensory at Mars Wrigley, told Food Dive the sweets maker was recently working to create packaging for its Extra mega-pack with 35 pieces of gum, improving upon a version developed for its Orbit brand years before. This time around, the company — which developed more than 30 prototypes — found customers wanted a recyclable plastic container they believed would keep the unchewed gum fresh.

Shoppers also wanted to feel and hear the packaging close securely, with an auditory “click.” Saxon Reed, who was not involved with the earlier form of the package, speculated it didn’t resonate with consumers because it was made of paperboard, throwing into question freshness and whether the package would survive as long as the gum did.

The new packaging, which hit shelves in 2016 after about a year of development, has been a success, becoming the top selling gum product at Walmart within 12 months of its launch, according to Saxon Reed. Mars Wrigley also incorporated the same packaging design for a mega pack of its 5 gum brand because it was so successful.

“If we would not have made a range of packaging prototypes and had people use them in front of us, we would have absolutely missed the importance of these sensory cues and we would have potentially failed again in the marketplace,” Saxon Reed said. “If I would have done that online, I’m not sure how I would have heard those cues. …I don’t think those would have come up and we would have missed an opportunity to win.”

The new approach extends to the product itself, too. Saxon Reed said Mars Wrigley was looking to expand its Extra gum line into a cube shape in fall 2017. Early in the process, Mars Wrigley asked consumers to compile an online diary with words, pictures and collages showing how they defined refreshment. The company wanted to customize the new offering to U.S. consumers, and not just import the cube-shaped variety already in China.

Credit: Mars Wrigley 

After Mars Wrigley noticed people using the color blue or drawing waterfalls, showers or water to illustrate a feeling of refreshment, product developers went about incorporating those attributes into its new Extra Refreshers line through the color, flavor or characteristics that feel cool or fresh to the mouth. They later tested the product on consumers who liked gum, including through the age-old testing process where people were given multiple samples to try and asked which they preferred.

Extra Refreshers hit shelves earlier this year and is “off to a strong start,” Saxon Reed said.

“I don’t see it as an ‘either-or’ when it comes to technology and product testing. I really see it as a ‘yes-and,’ ” she said. “How can technology really help us better understand the reactions that we are getting? But at this point, I have not seen a technology that replicates people actually trying the product and getting their honest reaction to it. At the end of the day, this is food.”

Regardless of what process large food and beverage companies use, how much money and time they spend testing out their products, or even how heavily involved consumers are, CPG companies and product testing firms agreed that an item’s success is heavily defined by one thing that hasn’t and probably never will change: taste. 

“Everybody can sell something once in beautiful packaging with all the data, but if it tastes terrible it’s not going to sell again,” Bisceglia said.

Source : https://www.fooddive.com/news/behind-the-scenes-data-and-technology-bring-food-product-rd-into-the-21st/565760/

Improving the Accuracy of Automatic Speech Recognition Models for Broadcast News – Appen

In their paper entitled English Broadcast News Speech Recognition by Humans and Machines, a team from IBM and Appen proposes to identify techniques that close the gap between automatic speech recognition (ASR) and human performance.

Where does the data come from?

IBM’s initial work in the voice recognition space was done as part of the U.S. government’s Defense Advanced Research Projects Agency (DARPA) Effective Affordable Reusable Speech-to-Text (EARS) program, which led to significant advances in speech recognition technology. The EARS program produced about 140 hours of supervised broadcast news (BN) training data and around 9,000 hours of very lightly supervised training data from the closed captions of television shows. By contrast, EARS produced around 2,000 hours of highly supervised, human-transcribed training data for conversational telephone speech (CTS).

Lost in translation?

Because so much training data is available for CTS, the team from IBM and Appen endeavored to apply similar speech recognition strategies to BN to see how well those techniques translate across applications. To understand the challenge the team faced, it’s important to call out some important differences between the two speech styles:

Broadcast news (BN)

  • Clear, well-produced audio quality
  • Wide variety of speakers with different speaking styles
  • Varied background noise conditions — think of reporters in the field
  • Wide variety of news topics

Conversational telephone speech (CTS)

  • Often poor audio quality with sound artifacts
  • Unscripted
  • Interspersed with moments where speech overlaps between participants
  • Interruptions, sentence restarts, and background confirmations between participants, i.e. “okay”, “oh”, “yes”

How the team adapted speech recognition models from CTS to BN

The team adapted the speech recognition systems that were used so successfully for the EARS CTS research: multiple long short-term memory (LSTM) and ResNet acoustic models trained on a range of acoustic features, along with word and character LSTMs and convolutional WaveNet-style language models. This strategy had produced word error rates between 5.1% and 9.9% for CTS in a previous study, specifically the HUB5 2000 English Evaluation conducted by the Linguistic Data Consortium (LDC). The team tested a simplified version of this approach on the BN data set, which wasn’t human-annotated, but rather created using closed captions.

Instead of adding all the available training data, the team carefully selected a reliable subset, then trained LSTM and residual network-based acoustic models with a combination of n-gram and neural network language models on that subset. In addition to automatic speech recognition testing, the team benchmarked the automatic system against an Appen-produced high-quality human transcription. The primary language model training text for all these models consisted of a total of 350 million words from different publicly available sources suitable for broadcast news.

Getting down to business

In the first set of experiments, the team separately tested the LSTM and ResNet models in conjunction with the n-gram and feed-forward neural network language models (FF-NNLM) before combining scores from the two acoustic models, comparing against the results obtained in the older CTS evaluation. Unlike the results observed in the original CTS testing, no significant reduction in the word error rate (WER) was achieved after scores from both the LSTM and ResNet models were combined. The LSTM model with an n-gram LM individually performs quite well, and its results further improve with the addition of the FF-NNLM.

For the second set of experiments, word lattices were generated after decoding with the LSTM+ResNet+n-gram+FF-NNLM model. The team generated n-best lists from these lattices and rescored them with the LSTM1-LM. LSTM2-LM was also used to rescore word lattices independently. Significant WER gains were observed after using the LSTM LMs. This led the researchers to hypothesize that the secondary fine-tuning with BN-specific data is what allows LSTM2-LM to perform better than LSTM1-LM.
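
As a rough illustration of this rescoring step, here is a minimal sketch of n-best rescoring — combining each hypothesis’s acoustic-model score with a language-model score and keeping the best. This is not the authors’ actual system; the interpolation weight and the toy language model are assumptions:

```python
# Minimal sketch of n-best rescoring: combine each hypothesis's
# acoustic-model score with a language-model score, keep the best.
def rescore_nbest(nbest, lm_score_fn, lm_weight=0.7):
    """nbest: list of (hypothesis, acoustic_score) pairs, higher = better.
    lm_score_fn: maps a hypothesis string to a (higher = better) log-probability."""
    best_hyp, best_score = None, float("-inf")
    for hyp, am_score in nbest:
        score = am_score + lm_weight * lm_score_fn(hyp)
        if score > best_score:
            best_hyp, best_score = hyp, score
    return best_hyp

# Toy usage with a stand-in LM that penalizes longer hypotheses:
nbest = [("the news tonight", -12.0), ("then use tonight", -11.5)]
print(rescore_nbest(nbest, lambda h: -len(h.split())))
```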

The results

Our ASR results have clearly improved state-of-the-art performance, and significant progress has been made compared to systems developed over the last decade. When compared to the human performance results, the absolute ASR WER is about 3% worse. Although the machine and human error rates are comparable, the ASR system has much higher substitution and deletion error rates.
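
For reference, WER counts the substitutions, deletions and insertions needed to turn the system output into the reference transcript, divided by the reference length. A minimal sketch of the standard computation (not the paper’s actual scoring tool):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length,
    via a standard Levenshtein alignment over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                                  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                                  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the news at six", "the use at six"))  # 0.25
```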

Looking at the different error types and rates, the research produced interesting takeaways:

  • There’s a significant overlap in the words that ASR and humans delete, substitute, and insert.
  • Humans seem to be careful about marking hesitations: %hesitation was the most inserted symbol in these experiments. Hesitations seem to be important in conveying meaning to the sentences in human transcriptions. The ASR systems, however, focus on blind recognition and were not successful in conveying the same meaning.
  • Machines have trouble recognizing short function words: the, and, of, a, that and these get deleted the most. Humans, on the other hand, seem to catch most of them. It seems likely that these words aren’t fully articulated, so the machine fails to recognize them, while humans are able to infer these words naturally.

Conclusion

The experiments show that speech ASR techniques can be transferred across domains to provide highly accurate transcriptions. For both acoustic and language modeling, the LSTM- and ResNet-based models proved effective and human evaluation experiments kept us honest. That said, while our methods keep improving, there is still a gap to close between human and machine performance, demonstrating a continued need for research on automatic transcription for broadcast news.

Source : https://appen.com/blog/improving-the-accuracy-of-automatic-speech-recognition-models-for-broadcast-news/

 

Which New Business Models Will Be Unleashed By Web 3.0? – Fabric

The forthcoming wave of Web 3.0 goes far beyond the initial use case of cryptocurrencies. Through the richness of interactions now possible and the global scope of counter-parties available, Web 3.0 will cryptographically connect data from individuals, corporations and machines, with efficient machine learning algorithms, leading to the rise of fundamentally new markets and associated business models.

The future impact of Web 3.0 makes undeniable sense, but the question remains, which business models will crack the code to provide lasting and sustainable value in today’s economy?

A history of Business Models across Web 1.0, Web 2.0 and Web 3.0

We will dive into native business models that have been and will be enabled by Web 3.0, while first briefly touching upon the quickly forgotten but often arduous journeys leading to the unexpected & unpredictable successful business models that emerged in Web 2.0.

To set the scene anecdotally for Web 2.0’s business model discovery process, let us not forget the journey that Google went through from their launch in 1998 to 2002 before going public in 2004:

  • In 1999, while enjoying good traffic, they were clearly struggling with their business model. Their lead investor Mike Moritz (Sequoia Capital) openly stated “we really couldn’t figure out the business model, there was a period where things were looking pretty bleak”.
  • In 2001, Google was making $85m in revenue while their rival Overture was making $288m in revenue, as CPM based online advertising was falling away post dot-com crash.
  • In 2002, adopting Overture’s ad model, Google went on to launch AdWords Select: its own pay-per-click, auction-based search-advertising product.
  • Two years later, in 2004, Google hits 84.7% of all internet searches and goes public with a valuation of $23.2 billion with annualised revenues of $2.7 billion.

After struggling for four years, a single small modification to their business model launched Google into orbit to become one of the world’s most valuable companies.

Looking back at the wave of Web 2.0 Business Models

Content

The earliest iterations of online content merely involved the digitisation of existing newspapers and phone books … and yet, we’ve now seen Roma (Alfonso Cuarón) receive 10 Academy Awards Nominations for a movie distributed via the subscription streaming giant Netflix.

Marketplaces

Amazon started as an online bookstore that nobody believed could become profitable … and yet, it is now the behemoth of marketplaces covering anything from gardening equipment to healthy food to cloud infrastructure.

Open Source Software

Open source software development started off with hobbyists and an idealist view that software should be a freely-accessible common good … and yet, the entire internet runs on open source software today, creating $400b of economic value a year and Github was acquired by Microsoft for $7.5b while Red Hat makes $3.4b in yearly revenues providing services for Linux.

SaaS

In the early days of Web 2.0, it might have been inconceivable that after massively spending on proprietary infrastructure one could deliver business software via a browser and become economically viable … and yet, today the large majority of B2B businesses run on SaaS models.

Sharing Economy

It was hard to believe that anyone would be willing to climb into a stranger’s car or rent out their couch to travellers … and yet, Uber and AirBnB have become the largest taxi operator and accommodation providers in the world, without owning any cars or properties.

Advertising

While Google and Facebook might have gone into hyper-growth early on, they didn’t have a clear plan for revenue generation for the first half of their existence … and yet, the advertising model turned out to fit them almost too well, and they now generate 58% of the global digital advertising revenues ($111B in 2018) which has become the dominant business model of Web 2.0.

Emerging Web 3.0 Business Models

Taking a look at Web 3.0 over the past 10 years, initial business models tend not to be repeatable or scalable, or simply try to replicate Web 2.0 models. We are convinced that while there is some scepticism about their viability, the continuous experimentation by some of the smartest builders will lead to incredibly valuable models being built over the coming years.

By exploring both the more established and the more experimental Web 3.0 business models, we aim to understand how some of them will accrue value over the coming years.

  • Issuing a native asset
  • Holding the native asset, building the network
  • Taxation on speculation (exchanges)
  • Payment tokens
  • Burn tokens
  • Work Tokens
  • Other models

Issuing a native asset:

Bitcoin came first. Proof of Work coupled with Nakamoto Consensus created the first Byzantine Fault Tolerant & fully open peer to peer network. Its intrinsic business model relies on its native asset: BTC — a provable scarce digital token paid out to miners as block rewards. Others, including Ethereum, Monero and ZCash, have followed down this path, issuing ETH, XMR and ZEC.

These native assets are necessary for the functioning of the network and derive their value from the security they provide: by providing a high enough incentive for honest miners to provide hashing power, the cost for malicious actors to perform an attack grows alongside the price of the native asset, and in turn, the added security drives further demand for the currency, further increasing its price and value. The value accrued in these native assets has been analysed & quantified at length.

Holding the native asset, building the network:

Some of the earliest companies that formed around crypto networks had a single mission: make their respective networks more successful & valuable. Their resultant business model can be condensed to “increase their native asset treasury; build the ecosystem”. Blockstream, acting as one of the largest maintainers of Bitcoin Core, relies on creating value from its balance sheet of BTC. Equally, ConsenSys has grown to a thousand employees building critical infrastructure for the Ethereum ecosystem, with the purpose of increasing the value of the ETH it holds.

While this perfectly aligns the companies with the networks, the model is hard to replicate beyond the first handful of companies: amassing a meaningful enough balance of native assets becomes impossible after a while … and the blood, toil, tears and sweat of launching & sustaining a company cannot be justified without a large enough stake for exponential returns. As an illustration, it wouldn’t be rational for any business other than a central bank — e.g. a US remittance provider — to base their business purely on holding large sums of USD while working on making the US economy more successful.

Taxing the Speculative Nature of these Native Assets:

The subsequent generation of business models focused on building the financial infrastructure for these native assets: exchanges, custodians & derivatives providers. They were all built with a simple business objective — providing services for users interested in speculating on these volatile assets. While the likes of Coinbase, Bitstamp & Bitmex have grown into billion-dollar companies, they do not have a fully monopolistic nature: they provide convenience & enhance the value of their underlying networks. The open & permissionless nature of the underlying networks makes it impossible for companies to lock in a monopolistic position by virtue of providing “exclusive access”, but their liquidity and brands provide defensible moats over time.

Payment Tokens:

With The Rise of the Token Sale, a new wave of projects in the blockchain space based their business models on payment tokens within networks: often creating two sided marketplaces, and enforcing the use of a native token for any payments made. The assumptions are that as the network’s economy would grow, the demand for the limited native payment token would increase, which would lead to an increase in value of the token. While the value accrual of such a token model is debated, the increased friction for the user is clear — what could have been paid in ETH or DAI, now requires additional exchanges on both sides of a transaction. While this model was widely used during the 2017 token mania, its friction-inducing characteristics have rapidly removed it from the forefront of development over the past 9 months.

Burn Tokens:

Revenue generating communities, companies and projects with a token might not always be able to pass the profits on to the token holders in a direct manner. A model that garnered a lot of interest as one of the characteristics of the Binance (BNB) and MakerDAO (MKR) tokens was the idea of buybacks / token burns. As revenues flow into the project (from trading fees for Binance and from stability fees for MakerDAO), native tokens are bought back from the public market and burned, resulting in a decrease of the supply of tokens, which should lead to an increase in price. It’s worth exploring Arjun Balaji’s evaluation (The Block), in which he argues the Binance token burning mechanism doesn’t actually result in the equivalent of an equity buyback: as there are no dividends paid out at all, the “earning per token” remains at $0.
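
A stylized sketch of the buyback-and-burn mechanics described above (all figures are hypothetical; real implementations live in each project’s contracts and fee schedules):

```python
# Stylized buyback-and-burn with hypothetical numbers.
supply = 200_000_000            # circulating tokens
price = 2.00                    # $ per token (held constant here)
revenue_for_burn = 10_000_000   # $ of fees earmarked for the burn

tokens_burned = revenue_for_burn / price
supply -= tokens_burned
print(f"Burned {tokens_burned:,.0f} tokens; new supply {supply:,.0f}")
# Note Balaji's critique: a shrinking supply is not a dividend — the
# "earning per token" remains $0 unless cash is actually distributed.
```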

Work Tokens:

One of the business models for crypto-networks that we are seeing ‘hold water’ is the work token: a model that focuses exclusively on the revenue-generating supply side of a network in order to reduce friction for users. Some good examples include Augur’s REP and Keep Network’s KEEP tokens. A work token model operates similarly to classic taxi medallions, as it requires service providers to stake / bond a certain amount of native tokens in exchange for the right to provide profitable work to the network. One of the most powerful aspects of the work token model is the ability to incentivise actors with both carrot (rewards for the work) & stick (stake that can be slashed). Beyond providing security to the network by incentivising the service providers to execute honest work (as they have locked skin in the game denominated in the work token), they can also be evaluated by predictable future cash-flows to the collective of service providers (we have previously explored the benefits and valuation methods for such tokens in this blog). In brief, such tokens should be valued based on the future expected cash flows attributable to all the service providers in the network, which can be modelled out based on assumptions on pricing and usage of the network.
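
Following that valuation logic, a minimal discounted-cash-flow sketch for a work token: project the cash flows expected to accrue to the network’s service providers, discount them, and divide by token supply (every parameter below is a hypothetical assumption):

```python
# Hypothetical DCF for a work token.
def work_token_value(annual_cash_flows, discount_rate, token_supply):
    """Present value of service-provider cash flows, per token."""
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(annual_cash_flows, start=1))
    return pv / token_supply

# $ cash flows to service providers over five years, 30% discount rate:
flows = [1e6, 3e6, 8e6, 15e6, 25e6]
print(f"${work_token_value(flows, 0.30, 10_000_000):.2f} per token")  # ~$1.82
```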

A wide array of other models is being explored and worth touching upon:

  • Dual token model such as MKR/DAI & SPANK/BOOTY where one asset absorbs the volatile up- & down-side of usage and the other asset is kept stable for optimal transacting.
  • Governance tokens which provide the ability to influence parameters such as fees and development prioritisation and can be valued from the perspective of an insurance against a fork.
  • Tokenised securities as digital representations of existing assets (shares, commodities, invoices or real estate) which are valued based on the underlying asset with a potential premium for divisibility & borderless liquidity.
  • Transaction fees for features such as the models BloXroute & Aztec Protocol have been exploring with a treasury that takes a small transaction fee in exchange for its enhancements (e.g. scalability & privacy respectively).
  • Tech 4 Tokens as proposed by the Starkware team who wish to provide their technology as an investment in exchange for tokens — effectively building a treasury of all the projects they work with.
  • Providing UX/UI for protocols, such as Veil & Guesser are doing for Augur and Balance is doing for the MakerDAO ecosystem, relying on small fees or referrals & commissions.
  • Network specific services which currently include staking providers (e.g. Staked.us), CDP managers (e.g. topping off MakerDAO CDPs before they become undercollateralised) or marketplace management services such as OB1 on OpenBazaar which can charge traditional fees (subscription or as a % of revenues)
  • Liquidity providers operating in applications that don’t have revenue generating business models. For example, Uniswap is an automated market maker, in which the only route to generating revenues is providing liquidity pairs.

With this wealth of new business models arising and being explored, it becomes clear that while there is still room for traditional venture capital, the role of the investor and of capital itself is evolving. The capital itself morphs into a native asset within the network which has a specific role to fulfil. From passive network participation to bootstrap networks post financial investment (e.g. computational work or liquidity provision) to direct injections of subjective work into the networks (e.g. governance or CDP risk evaluation), investors will have to reposition themselves for this new organisational mode driven by trust minimised decentralised networks.

When looking back, we realise Web 1.0 & Web 2.0 took exhaustive experimentation to find the appropriate business models, which have created the tech titans of today. We are not ignoring the fact that Web 3.0 will have to go on an equally arduous journey of iterations, but once we find adequate business models, they will be incredibly powerful: in trust minimised settings, both individuals and enterprises will be enabled to interact on a whole new scale without relying on rent-seeking intermediaries.

Today we see thousands of incredibly talented teams pushing forward implementations of some of these models or discovering completely new viable business models. As the models might not fit the traditional frameworks, investors might have to adapt by taking on new roles and providing work and capital (a journey we have already started at Fabric Ventures), but as long as we can see predictable and rational value accrual, it makes sense to double down, as every day the execution risk is getting smaller and smaller.

Source : https://medium.com/fabric-ventures/which-new-business-models-will-be-unleashed-by-web-3-0-4e67c17dbd10

Money Out of Nowhere: How Internet Marketplaces Unlock Economic Wealth – Bill Gurley

In 1776, Adam Smith released his magnum opus, An Inquiry into the Nature and Causes of the Wealth of Nations, in which he outlined his fundamental economic theories. Front and center in the book — in fact in Book 1, Chapter 1 — is his realization of the productivity improvements made possible through the “Division of Labour”:

It is the great multiplication of the production of all the different arts, in consequence of the division of labour, which occasions, in a well-governed society, that universal opulence which extends itself to the lowest ranks of the people. Every workman has a great quantity of his own work to dispose of beyond what he himself has occasion for; and every other workman being exactly in the same situation, he is enabled to exchange a great quantity of his own goods for a great quantity, or, what comes to the same thing, for the price of a great quantity of theirs. He supplies them abundantly with what they have occasion for, and they accommodate him as amply with what he has occasion for, and a general plenty diffuses itself through all the different ranks of society.

Smith identified that when men and women specialize their skills, and also importantly “trade” with one another, the end result is a rise in productivity and standard of living for everyone. In 1817, David Ricardo published On the Principles of Political Economy and Taxation, where he expanded upon Smith’s work in developing the theory of Comparative Advantage. What Ricardo proved mathematically is that if one country has simply a comparative advantage (not even an absolute one), it still is in everyone’s best interest to embrace specialization and free trade. In the end, everyone ends up in a better place.

There are two key requirements for these mechanisms to take force. First and foremost, you need free and open trade. It is quite bizarre to see modern-day politicians throw caution to the wind and ignore these fundamental tenets of economic science. Time and time again, the fact patterns show that when countries open borders and freely trade, the end result is increased economic prosperity. The second, and less discussed, requirement is for the two parties that should trade to be aware of one another’s goods or services. Unfortunately, information asymmetry on the one hand, and physical distance with its resulting distribution costs on the other, can both cut against the economic advantages that would otherwise arise for all.

Fortunately, the rise of the Internet, and specifically Internet marketplace models, act as accelerants to the productivity benefits of the division of labour AND comparative advantage by reducing information asymmetry and increasing the likelihood of a perfect match with regard to the exchange of goods or services. In his 2005 book, The World Is Flat, Thomas Friedman recognizes that the Internet has the ability to create a “level playing field” for all participants, and one where geographic distances become less relevant. The core reason that Internet marketplaces are so powerful is because in connecting economic traders that would otherwise not be connected, they unlock economic wealth that otherwise would not exist. In other words, they literally create “money out of nowhere.”

EXCHANGE OF GOODS MARKETPLACES

Any discussion of Internet marketplaces begins with the first quintessential marketplace, eBay(*). Pierre Omidyar founded AuctionWeb in September of 1995, and its rise to fame is legendary. What started as a web site to trade laser pointers and Beanie Babies (the Pez dispenser start is quite literally a legend), today enables transactions of approximately $100B per year. Over its twenty-plus year lifetime, just over one trillion dollars in goods have traded hands across eBay’s servers. These transactions, and the profits realized by the sellers, were truly “unlocked” by eBay’s matching and auction services.

In 1999, Jack Ma created Alibaba, a Chinese-based B2B marketplace for connecting small and medium enterprise with potential export opportunities. Four years later, in May of 2003, they launched Taobao Marketplace, Alibaba’s answer to eBay. By aggressively launching a free to use service, Alibaba’s Taobao quickly became the leading person-to-person trading site in China. In 2018, Taobao GMV (Gross Merchandise Value) was a staggering RMB2,689 billion, which equates to $428 billion in US dollars.

There have been many other successful goods marketplaces that have launched post eBay & Taobao — all providing a similar service of matching those who own or produce goods with a distributed set of buyers who are particularly interested in what they have to offer. In many cases, a deeper focus on a particular category or vertical allows these marketplaces to distinguish themselves from broader marketplaces like eBay.

  • In 2000, Eric Baker and Jeff Fluhr founded StubHub, a secondary ticket exchange marketplace. The company was acquired by eBay in January 2007. In its most recent quarter, StubHub’s GMV reached $1.4B, and for the entire year 2018, StubHub had GMV of $4.8B.
  • Launched in 2005, Etsy is a leading marketplace for the exchange of vintage and handmade items. In its most recent quarter, the company processed the exchange of $923 million of sales, which equates to a $3.6B annual GMV.
  • Founded by Michael Bruno in Paris in 2001, 1stdibs(*) is the world’s largest online marketplace for luxury one-of-a-kind antiques, high-end modern furniture, vintage fashion, jewelry, and fine art. In November 2011, David Rosenblatt took over as CEO and has been scaling the company ever since. Over the past few years dealers, galleries, and makers have matched billions of dollars in merchandise to trade buyers and consumer buyers on the platform.
  • Poshmark was founded by Manish Chandra in 2011. The website, which is an exchange for new and used clothing, has been remarkably successful. Over 4 million sellers have earned over $1 billion transacting on the site.
  • Julie Wainwright founded The Real Real in 2011. The company is an online marketplace for authenticated luxury consignment. In 2017, the company reported sales of over $500 million.
  • In 2015, Eddy Lu and Daishin Sugano launched GOAT, a marketplace for the exchange of sneakers. Despite this narrow focus, the company has been remarkably successful. The estimated annual GMV of GOAT and its leading competitor Stock X is already over $1B per year (on a combined basis).

SHARING ECONOMY MARKETPLACES

The launches of Airbnb in 2008 and Uber(*) in 2009 established a new category of marketplaces known as the “sharing economy.” Homes and automobiles are the two most expensive items that people own, and in many cases ownership of the asset is made possible through debt — mortgages on houses and car loans or leases for automobiles. Despite this financial exposure, for many people these assets are materially underutilized. Many extra rooms and second homes sit vacant most of the year, and the average car is used less than 5% of the time. Sharing economy marketplaces allow owners to “unlock” earning opportunities from these underutilized assets.

Airbnb was founded by Joe Gebbia and Brian Chesky in 2008. Today there are over 5 million Airbnb listings in 81,000 cities. Over two million people stay in an Airbnb each night. In November of this year, the company announced that it had achieved “substantially” more than $1B in revenue in the third quarter. Assuming a marketplace rake of something like 11%, this would imply gross room revenue of over $9B for the quarter — which would be $36B annualized. As the company is still growing, we can easily guess that in the 2019-2020 time frame, Airbnb will be delivering around $50B per year to homeowners who were previously sitting on highly underutilized assets. This is a major “unlocking.”

When Garrett Camp and Travis Kalanick founded Uber in 2009, they hatched the industry now known as ride-sharing. Today over 3 million people around the world use their time and their underutilized automobiles to generate extra income. Without the proper technology to match people who wanted a ride with people who could provide that service, taxi and chauffeur companies were drastically underserving the potential market. As an example, we estimate that ride-sharing revenues in San Francisco are well north of 10X what taxis and black cars were providing prior to the launch of ride-sharing. These numbers will go even higher as people increasingly forgo the notion of car ownership altogether. We estimate that the global GMV for ride-sharing was over $100B in 2018 (including Uber, Didi, Grab, Lyft, Yandex, etc.) and still growing handsomely. Assuming a 20% rake, this equates to over $80B that went into the hands of ride-sharing drivers in a single year — and this is an industry that did not exist 10 years ago. The matching made possible with today’s GPS and Internet-enabled smartphones is a massive unlocking of wealth and value.
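
The back-of-envelope math in the two paragraphs above follows a single pattern: divide marketplace revenue by the assumed rake to estimate GMV, and multiply GMV by one minus the rake to estimate what flows to the supply side. Here is a minimal sketch of that arithmetic, using the rough figures and assumed take rates above; the helper functions are purely illustrative, not reported financials:

```python
# Back-of-envelope marketplace math: the rakes and revenue figures here are
# rough assumptions from the text above, not reported financials.

def gmv_from_revenue(revenue: float, rake: float) -> float:
    """Estimate gross merchandise value implied by marketplace revenue."""
    return revenue / rake

def supplier_payout(gmv: float, rake: float) -> float:
    """Estimate what flows to the supply side after the marketplace's cut."""
    return gmv * (1 - rake)

# Airbnb: >$1B quarterly revenue at an assumed ~11% rake
airbnb_gmv_quarter = gmv_from_revenue(1.0e9, 0.11)
print(f"Implied room revenue: ${airbnb_gmv_quarter / 1e9:.1f}B/quarter, "
      f"${airbnb_gmv_quarter * 4 / 1e9:.0f}B annualized")

# Ride-sharing: ~$100B global GMV at an assumed 20% rake
print(f"Driver payout: ${supplier_payout(100e9, 0.20) / 1e9:.0f}B/year")
```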

While it is a lesser-known category, using your own backyard and home to host dog guests as an alternative to a kennel is a large and growing business. Once again, this is an asset against which the marginal cost to host a dog is near zero. By combining their time with this otherwise unused asset, dog sitters are able to offer a service that is quite compelling for consumers. Rover.com(*) in Seattle, which was founded by Greg Gottesman and Aaron Easterly in 2011, is the leading player in this market. (Benchmark is an investor in Rover through a merger with DogVacay in 2017.) You may be surprised to learn that this is already a massive industry. In less than a decade since the company started, Rover has already paid out half a billion dollars to hosts that participate on the platform.

EXCHANGE OF LABOR MARKETPLACES

While not as well known as the goods exchanges or sharing economy marketplaces, there is an exciting increase in the number of marketplaces that match specifically skilled labor with opportunities to monetize those skills. The most noteworthy of these is likely Upwork(*), a company formed from the merger of Elance and oDesk. Upwork is a global freelancing platform where businesses and independent professionals can connect and collaborate remotely. Popular categories include web developers, mobile developers, designers, writers, and accountants. In the 12 months ended June 30, 2018, the Upwork platform enabled $1.56 billion of GSV (gross services volume) across 2.0 million projects between approximately 375,000 freelancers and 475,000 clients in over 180 countries. These labor matches represent the exact “world is flat” reality outlined in Friedman’s book.

Other noteworthy and emerging labor marketplaces:

  • HackerOne(*) is the leading global marketplace that coordinates the world’s largest corporate “bug bounty” programs with a network of the world’s leading hackers. The company was founded in 2012 by Michiel Prins, Jobert Abma, Alex Rice, and Merijn Terheggen, and today serves the needs of over 1,000 corporate bug bounty programs. On top of that, the HackerOne network of over 300,000 hackers (adding 600 more each day) has resolved over 100K confirmed vulnerabilities, resulting in over $46 million in awards to these individuals. There is an obvious network effect at work when you bring together the world’s leading programs and the world’s leading hackers on a single platform. The Fortune 500 is quickly learning that having a bug bounty program is an essential step in fighting cyber crime, and that HackerOne is the best place to host their program.
  • Wyzant is a leading Chicago-based marketplace that connects tutors with students around the country. The company was founded by Andrew Geant and Mike Weishuhn in 2005. The company has over 80,000 tutors on its platform and has paid out over $300 million to these professionals. The company started matching students with tutors for in-person sessions, but increasingly these are done “virtually” over the Internet.
  • Stitch Fix (*) is a leading provider of personalized clothing services that was founded by Katrina Lake in 2011. While the company is not primarily a marketplace, each order is hand-curated by a work-at-home “stylist” who works part-time on their own schedule from the comfort of their own home. Stitch Fix’s algorithms match the perfect stylist with each and every customer to help ensure the optimal outcome for each client. As of the end of 2018, Stitch Fix has paid out well over $100 million to their stylists.
  • Swing Education was founded in 2015 with the objective of creating a marketplace for substitute teachers. While it is still early in the company’s journey, they have already established themselves as the leader in the U.S. market. Swing is now at over 1,200 school partners and has filled over 115,000 teacher-absence days. They helped 2,000 substitute teachers get into the classroom in 2018, including 400 educators who earned permits, which Swing willingly financed. While it seems obvious in retrospect, having all substitutes on a single platform creates massive efficiency in a market where previously every single school had to keep its own list and make last-minute calls when it had vacancies. And substitutes only have to go through one Swing setup process to get access to subbing opportunities at dozens of local schools and districts.
  • RigUp was founded by Xuan Yong and Mike Witte in Austin, Texas in March of 2014. RigUp is a leading labor marketplace focused on the oilfield services industry. “The company’s platform offers a large network of qualified, insured and compliant contractors and service providers across all upstream, midstream and downstream operations in every oil and gas basin, enabling companies to hire quickly, track contractor compliance, and minimize administrative work.” According to the company, GMV for 2017 was an impressive $150 million, followed by an astounding $600 million in 2018. Investors often miss out on vertically focused companies like RigUp because they find themselves overly anxious about TAM (total addressable market). As you can see, that can be a big mistake.
  • VIPKid, which was founded in 2013 by Cindy Mi, is a truly amazing story. The idea is simple and simultaneously brilliant. VIPKid links students in China who want to learn English with native English-speaking tutors in the United States and Canada. All sessions are done over the Internet, once again epitomizing Friedman’s very flat world. In November of 2018, the company reported having 60,000 teachers contracted to teach over 500,000 students. Many people believe the company is now well north of a US$1B run rate, which implies that around $1B will change hands from Chinese parents to Western teachers in 2019. That is quite a bit of supplemental income for U.S.-based teachers.

These vertical labor marketplaces are to LinkedIn what companies like Zillow, Expedia, and GrubHub are to Google search. Through a deeper understanding of a particular vertical, a much richer perspective on the quality and differentiation of the participants, and the enablement of transactions — you create an evolved service that has much more value to both sides of the transaction. And for those professionals participating in these markets, your reputation on the vertical service matters way more than your profile on LinkedIn.

NEW EMERGING MARKETPLACES

Having been a fortunate investor in many of the previously mentioned companies (*), Benchmark remains extremely excited about future marketplace opportunities that will unlock wealth on the Internet. Here are two examples of such companies that we have funded in the past few years.

The New York Times describes Hipcamp as “The Sharing Economy Visits the Backcountry.” Hipcamp(*) was founded in 2013 by Alyssa Ravasio as an engine to search across the dozens and dozens of State and National park websites for campsite availability. As Hipcamp gained traction with campers, landowners with land near many of the National and State parks started to reach out to Hipcamp asking if they could list their land on Hipcamp too. Hipcamp now offers access to more than 350k campsites across public and private land, and its most active private-land hosts make over $100,000 per year hosting campers. This is a pretty amazing value proposition for both landowners and campers. If you are a rural landowner, here is a way to create “money out of nowhere” with very little capital expenditure. And if you are a camper, what could be better than camping at a unique, bespoke campsite in your favorite location?

Instawork(*) is an on-demand staffing app for gig workers (professionals) and hospitality businesses (partners). These working professionals seek economic freedom and a better life, and Instawork gives them both — an opportunity to work as much as they like, but on their own terms with regard to when and where. On the business partner side, small business owners/managers/chefs do not have access to reliable sources to help them with talent sourcing and high turnover, and products like LinkedIn are more focused on white-collar workers. Instawork was cofounded by Sumir Meghani in San Francisco and was a member of the 2015 Y Combinator class. 2018 was a break-out year for Instawork, with 10X revenue growth and 12X growth in Professionals on the platform. The average Instawork Professional is highly engaged on the platform and typically opens the Instawork app ten times a day. This results in 97% of gigs being matched in less than 24 hours — which is powerfully important to both sides of the network. Also noteworthy, Professionals on Instawork average 150% of minimum wage, significantly higher than many other labor marketplaces. This higher income allows Instawork Professionals like Jose to begin to accomplish their dreams.

THE POWER OF THESE PLATFORMS

As you can see, these numerous marketplaces are a direct extension of the productivity enhancers first uncovered by Adam Smith and David Ricardo. Free trade, specialization, and comparative advantage are all enhanced when we can increase the matching of supply and demand of goods and services as well as eliminate inefficiency and waste caused by misinformation or distance. As a result, productivity naturally improves.

Specific benefits of global Internet marketplaces:

    1. Increase wealth distribution (all examples)
    2. Unlock the wasted potential of assets (Uber, Airbnb, Rover, and Hipcamp)
    3. Better match specific workers with specific opportunities (Upwork, Wyzant, RigUp, VIPKid, Instawork)
    4. Make specific assets reachable and findable (eBay, Etsy, 1stdibs, Poshmark, GOAT)
    5. Allow for increased specialization (Etsy, Upwork, RigUp)
    6. Enhance supplemental labor opportunities (Uber, Stitch Fix, Swing Education, Instawork, VIPKid), where the worker is in control of when and where they work
    7. Reduce forfeiture by enhancing the utilization of debt-financed assets (mortgages, car loans, etc.) (Uber, Airbnb, Rover, Hipcamp)

Source: http://abovethecrowd.com/2019/02/27/money-out-of-nowhere-how-internet-marketplaces-unlock-economic-wealth/

Digital Transformation of Business and Society: Challenges and Opportunities by 2020 – Frank Diana

At a recent KPMG Robotic Innovations event, Futurist and friend Gerd Leonhard delivered a keynote titled “The Digital Transformation of Business and Society: Challenges and Opportunities by 2020”. I highly recommend viewing the video of his presentation. As Gerd describes, he is a Futurist focused on foresight and observations — not on predicting the future. We are at a point in history where every company needs a Gerd Leonhard. For many of the reasons presented in the video, future thinking is rapidly growing in importance. As Gerd so rightly points out, we are still vastly under-estimating the sheer velocity of change.

With regard to future thinking, Gerd used my future scenario slide to describe both the exponential and combinatorial nature of future scenarios — not only do we need to think exponentially, but we also need to think in a combinatorial manner. Gerd mentioned Tesla as a company that really knows how to do this.

Our Emerging Future

He then described our current pivot point of exponential change: a point in history where humanity will change more in the next twenty years than in the previous 300. With that as a backdrop, he encouraged the audience to look five years into the future and spend 3 to 5% of their time focused on foresight. He quoted Peter Drucker (“In times of change the greatest danger is to act with yesterday’s logic”) and stated that leaders must shift from a focus on what is to a focus on what could be. Gerd added that “wait and see” means “wait and die” (love that, by the way). He urged leaders to focus on 2020 and build a plan to participate in that future, emphasizing that the question is no longer what-if, but what-when. We are entering an era where the impossible is doable, and the headline for that era is: exponential, convergent, combinatorial, and interdependent — words that should be a key part of the leadership lexicon going forward. Here are some snapshots from his presentation:

  • Because of exponential progression, it is difficult to imagine the world in 5 years, and although the industrial era was impactful, it will not compare to what lies ahead. The danger of vastly under-estimating the sheer velocity of change is real. For example, in just three months, the projection for the number of autonomous vehicles sold in 2035 went from 100 million to 1.5 billion
  • Six years ago Gerd advised a German auto company about the driverless car and the implications of a sharing economy — and they laughed. Think of what’s happened in just six years — can’t imagine anyone is laughing now. He witnessed something similar as a veteran of the music business where he tried to guide the industry through digital disruption; an industry that shifted from selling $20 CDs to making a fraction of a penny per play. Gerd’s experience in the music business is a lesson we should learn from: you can’t stop people who see value from extracting that value. Protectionist behavior did not work, as the industry lost 71% of their revenue in 12 years. Streaming music will be huge, but the winners are not traditional players. The winners are Spotify, Apple, Facebook, Google, etc. This scenario likely plays out across every industry, as new businesses are emerging, but traditional companies are not running them. Gerd stressed that we can’t let this happen across these other industries
  • Anything that can be automated will be automated: truck drivers and pilots go away, as robots don’t need unions. There is just too much to be gained not to automate. For example, 60% of the cost in the system could be eliminated by interconnecting logistics, possibly realized via a Logistics Internet as described by economist Jeremy Rifkin. But the drive towards automation will have unintended consequences, and some science fiction scenarios could play out. Humanity and technology are indeed intertwining, but technology does not have ethics. A self-driving car would need ethics, as we make difficult decisions while driving all the time. How does a car decide to hit a frog versus swerving and hitting a mother and her child? Speaking of science fiction scenarios, Gerd predicts that humans and machines will have converged once these things come together
  • Gerd has been using the term “Hellven” to represent the two paths technology can take. Is it 90% heaven and 10% hell (unintended consequences), or can this equation flip? He asks the question: where are we trying to go with this? He used the real example of drones used to benefit society (heaven), but people buying guns to shoot them down (hell). As we pursue exponential technologies, we must do it in a way that avoids negative consequences. Will we allow humanity to move down a path where, by 2030, we will all be human-machine hybrids? Will hacking drive chaos, as hackers gain control of a vehicle? A recent recall of 1.4 million Jeeps underscores the possibility. A world of super intelligence requires super humanity — technology does not have ethics, but society depends on it. Is this the Ray Kurzweil vision that we want?
  • Is society truly ready for human-machine hybrids, or even advancements like the driverless car that may be closer to realization? Gerd used a very effective video to make the point.
  • Followers of my Blog know I’m a big believer in the coming shift to value ecosystems. Gerd described this as a move away from Egosystems, where large companies are running large things, to interdependent Ecosystems. I’ve talked about the blurring of industry boundaries and the movement towards ecosystems. We may ultimately move away from the industry construct and end up with a handful of ecosystems like: mobility, shelter, resources, wellness, growth, money, maker, and comfort
  • Our kids will live to 90 or 100 as the default. We are gaining 8 hours of longevity per day — one third of a year per year. Genetic engineering is likely to eradicate disease, impacting longevity and global population. DNA editing is becoming a real possibility in the next 10 years, and at least 50 Silicon Valley companies are focused on ending aging and eliminating death. One such company is Human Longevity Inc., which was co-founded by Peter Diamandis of Singularity University. Gerd used a quote from Peter to help the audience understand the motivation: “Today there are six to seven trillion dollars a year spent on healthcare, half of which goes to people over the age of 65. In addition, people over the age of 65 hold something on the order of $60 trillion in wealth. And the question is what would people pay for an extra 10, 20, 30, 40 years of healthy life. It’s a huge opportunity”
  • Gerd described the growing need to focus on the right side of our brain. He believes that algorithms can only go so far. Our right-brain characteristics cannot be replicated by an algorithm, making a human-algorithm combination — or humarithm, as Gerd calls it — a better path. These right-brain characteristics will grow in importance and will drive future hiring profiles
  • Google is on the way to becoming the global operating system — an Artificial Intelligence enterprise. In the future, you won’t search, because as a digital assistant, Google will already know what you want. Gerd quotes Ray Kurzweil in saying that by 2027, the capacity of one computer will equal that of the human brain — at which point we shift from an artificial narrow intelligence to an artificial general intelligence. In thinking about AI, Gerd flips the paradigm to IA, or Intelligent Assistant. For example, Schwab already has an intelligent portfolio. He indicated that every bank is investing in intelligent portfolios that deal with simple investments that robots can handle, which could lead to robots and AI replacing 50% of financial advisors
  • This intelligent assistant race has just begun, as Siri, Google Now, Facebook MoneyPenny, and Amazon Echo vie for intelligent assistant positioning. Intelligent assistants could eliminate the need for actual assistants in five years, and creep into countless scenarios over time. Police departments are already capable of determining who is likely to commit a crime in the next month, and there are examples of police taking preventative measures. Augmentation adds another dimension, as an officer wearing glasses can identify you on sight and have your records displayed in front of them. There are over 100 companies focused on augmentation, and a number of intelligent assistant examples surrounding IBM Watson; the most discussed being the effectiveness of doctor assistance. An intelligent assistant is the likely first role in the autonomous vehicle transition, as cars step in to provide a number of valuable services without completely taking over. There are countless examples emerging
  • Gerd took two polls during his keynote. The first asked: how do you feel about the rise of intelligent digital assistants? The first two possible answers received the lion’s share of the votes
  • Collectively, automation, robotics, intelligent assistants, and artificial intelligence will reframe business, commerce, culture, and society. This is perhaps the key take away from a discussion like this. We are at an inflection point where reframing begins to drive real structural change. How many leaders are ready for true structural change?
  • Gerd likes to refer to the 7-ations: Digitization, De-Materialization, Automation, Virtualization, Optimization, Augmentation, and Robotization. Consequences of the exponential and combinatorial growth of these seven include dependency, job displacement, and abundance. While these seven are tools for dramatic cost reduction, they also lead to abundance. Examples are everywhere, from the 16 million songs available through Spotify to the 3D-printed cars that require only 50 parts. As supply exceeds demand in category after category, we reach abundance. As Gerd put it, in five years’ time genome sequencing will be cheaper than flushing the toilet, and abundant energy will be available by 2035 (2015 will be the first year that a major oil company leaves the oil business to enter the abundance of the renewables business)
  • Efficiency and business improvement are a path, not a destination. Gerd estimates that total efficiency will be reached in 5 to 10 years, creating value through productivity gains along the way. However, after total efficiency is met, value comes from purpose. Purpose-driven companies have an aspirational purpose that aims to transform the planet; referred to as a massive transformative purpose in a recent book on exponential organizations. When you consider the value that the millennial generation places on purpose, it is clear that successful organizations must excel at both technology and humanity. If we allow technology to trump humanity, business will have no purpose
  • In the first phase, the value lies in the automation itself (productivity, cost savings). In the second phase, the value lies in those things that cannot be automated. Anything that is human about your company cannot be automated: purpose, design, and brand become more powerful. Companies must invent new things that are only possible because of automation
  • Technological unemployment is real this time — and exponential. Gerd pointed to a recent study by The Economist describing how robotics and artificial intelligence will increasingly be used in place of humans to perform repetitive tasks. On the other side of the spectrum is a demand for better customer service and greater skills in innovation, driven by globalization and falling barriers to market entry. Therefore, creativity and social intelligence will become crucial differentiators for many businesses; jobs will increasingly demand skills in creative problem-solving and constructive interaction with others
  • Gerd described a basic income guarantee that may become necessary if some of these unemployment scenarios play out. Something like this is already on the ballot in Switzerland, and it is not the first time the idea has been raised
  • In the world of automation, experience becomes extremely valuable — and you can’t, nor should you attempt to, automate experiences. We clearly see an intense focus on customer experience, and we had a great discussion on the topic on an August 26th Game Changers broadcast. Innovation is critical to both the service economy and the experience economy. Gerd used a visual to describe the progression of economic value:
Source: B. Joseph Pine II and James Gilmore: The Experience Economy
  • Gerd used a second poll to sense how people would feel about humans becoming artificially intelligent. Here again, the audience leaned towards the first two possible answers:

Gerd then summarized the session as follows:

The future is exponential, combinatorial, and interdependent: the sooner we can adjust our thinking (laterally), the better we will be at designing our future.

My take: Gerd hits on a key point. Leaders must think differently. There is very little in a leader’s collective experience that can guide them through the type of change ahead — it requires us all to think differently.

When looking at AI, consider trying IA first (intelligent assistance / augmentation).

My take: These considerations allow us to create the future in a way that avoids unintended consequences. Technology as a supplement, not a replacement.

Efficiency and cost reduction based on automation, AI/IA and Robotization are good stories but not the final destination: we need to go beyond the 7-ations and inevitable abundance to create new value that cannot be easily automated.

My take: Future thinking is critical for us to be effective here. We have to have a sense as to where all of this is heading if we are to effectively create new sources of value.

We won’t just need better algorithms — we also need stronger humarithms, i.e. values, ethics, standards, principles, and social contracts.

My take: Gerd is an evangelist for creating our future in a way that avoids hellish outcomes — and kudos to him for being that voice.

“The best way to predict the future is to create it” (Alan Kay).

My take: our context when we think about the future puts it years away, and that is just not the case anymore. What we think will take ten years is likely to happen in two. We can’t create the future if we don’t focus on it through an exponential lens.

Source: https://medium.com/@frankdiana/digital-transformation-of-business-and-society-5d9286e39dbf

Data-driven transformation of the life sciences industry – RockHealth

Digital health innovation continues moving full-force in transforming the business of healthcare. For pharma and medtech companies in particular, this ongoing shift has pushed them to identify ways to create value for patients beyond the drugs themselves. From new partnerships between digital health and life science companies to revamped commercial models, collecting and extracting insights from data is at the core of these growth opportunities. But navigating the rapidly evolving terrain is no simple task.

To help these companies effectively incorporate and utilize digital health tools, Rock Health partner ZS Associates draws on over 30 years of industry expertise to guide them through the complex digital health landscape. We chatted with Principal Pete Masloski to discuss how he works with clients to help identify, develop, and commercialize digital health solutions within their core businesses—and where he sees patients benefiting the most as a result.

Note: This interview has been lightly edited for clarity.

Where does ZS see the promise of data- and analytics-enabled digital health tools leading to in the next five years, 10 years, and beyond?

Data and analytics will play a central role in the digital health industry’s growth over the next five to ten years. Startups are able to capture larger, novel sets of data in a way that large life science companies historically have not been able to. As a result, consumers will be better informed about their health choices; physicians will have more visibility into what treatment options work best for whom under what circumstances; health plans will have a better understanding of treatment choices; and pharmaceutical and medical device companies will be able to strategically determine which products and services to build.

We see personalized medicine, driven by genomics and targeted therapies, rapidly expanding over the next few years. Pharmaceutical discovery and development will also transition to become more digitally enabled. The ability to match patients with clinical trials and improve the patient experience will result in lower costs, faster completion, and more targeted therapies. The increase in real-world evidence will be used to demonstrate the efficacy of therapeutics and devices in different populations, which assures payers and providers that outcomes from studies can be replicated in the real world.

How is digital health helping life sciences companies innovate their commercial models? What is the role of data and analytics in these new models?

The pharmaceutical industry continues to face a number of challenges, including increasingly competitive markets, growing biosimilar competition, and overall scrutiny on pricing. We’ve seen excitement around solutions that integrate drugs with meaningful outcomes and solutions that address gaps in care delivery and promote medication adherence.

Solving these problems creates new business model opportunities for the industry through fresh revenue sources and new ways of structuring agreements with customers. For example, risk-based contracts with health plans, employers, or integrated delivery networks (IDNs) become more feasible when you can demonstrate an increased likelihood of better outcomes for more patients. We see this coming to fruition when pharma companies integrate comprehensive digital adherence solutions focused on patient behavior change around a specific drug, as in HealthPrize’s partnership with Boehringer Ingelheim. In medtech, digital health tools can both differentiate core products and create new profitable software or services businesses. Integrating data collection technology and connectivity into devices and adding software-enabled services can support a move from traditional equipment sales to pay-per-use. This allows customers to access new equipment technology without paying a large sum up front — and ensures manufacturers have a more predictable, ongoing source of revenue.

That said, data and analytics remain at the core of these new models. In some cases, such as remote monitoring, the data itself is the heart of the solution; in others, the data collected helps establish effectiveness and value as a baseline for measuring impact. Digital ambulatory blood pressure monitors capture an individual’s complete blood pressure profile throughout the day, which provides a previously unavailable and reliable “baseline.” Because in-office only readings may be skewed by “white coat hypertension,” or stress-induced blood pressure readings, having a more comprehensive look at this data can lead to deeper understandings of user behaviors or conditions. Continuous blood pressure readings can help with diagnoses of stress-related drivers of blood pressure spikes, for example. These insights become the catalyst for life science companies’ new product offerings and go-to-market strategies.
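
To make the idea of an ambulatory “baseline” concrete, here is a minimal sketch with invented readings; the 10 mmHg office-versus-baseline gap used to flag a possible white-coat effect is an illustrative threshold, not clinical guidance:

```python
import pandas as pd

# Invented 24-hour ambulatory readings for one person; real monitors sample
# far more densely, and the flagging threshold below is illustrative only.
bp = pd.DataFrame({
    "hour":     [8, 10, 12, 15, 18, 22, 2, 5],
    "systolic": [128, 126, 131, 148, 133, 121, 110, 112],
    "setting":  ["home", "home", "home", "office",
                 "home", "home", "sleep", "sleep"],
})

baseline = bp.loc[bp["setting"] != "office", "systolic"].mean()
office = bp.loc[bp["setting"] == "office", "systolic"].mean()
print(f"Ambulatory baseline: {baseline:.0f} mmHg, office: {office:.0f} mmHg")
if office - baseline > 10:  # illustrative tolerance
    print("Possible white-coat effect: rely on the full profile, not one visit")
```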

What are some examples of how data sets gathered from partnerships with digital health companies can be leveraged to uncover new value for patients and address their unmet needs?

As digital health companies achieve a certain degree of scale, their expansive data sets become more valuable because of the insights that can be harnessed to improve outcomes and business decisions. Companies like 23andMe, for example, have focused on leveraging their data for research into targeted therapies. Flatiron Health is another great example of a startup that created a foundational platform (EMR) whose clinical data from diverse sources (e.g., laboratories, research repositories, and payer networks) became so highly valued in cancer therapy development that Roche acquired it earlier this year for close to $2B.

It’s exciting to think about the wide array of digital health solutions and the actionable insight that can be gleaned from them. One reason partnerships are important for the industry is that few innovators who are collecting data have the capabilities and resources to fully capitalize on its use on their own. Pharma companies and startups must work together to achieve all of this at scale. Earlier this year, Fitbit announced a new partnership with Google to make the data collected from its devices available to doctors. Google’s API can directly link heart rate and fitness activity to the EMR, allowing doctors to easily review and analyze larger amounts of data. This increase in visibility provides physicians with more insight into how patients are doing in between visits, and therefore can also help with decision pathways.
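
As a rough illustration of what linking device data to an EMR can look like at the data level, the sketch below posts a heart-rate sample as a standard FHIR R4 Observation (LOINC code 8867-4 for heart rate). The endpoint and token are placeholders; this is not the actual Fitbit or Google integration:

```python
import requests  # assumes the `requests` package is installed

# Placeholder endpoint and token; a real integration also handles consent,
# OAuth scopes, and patient matching.
FHIR_BASE = "https://emr.example.com/fhir"

# A wearable heart-rate sample expressed as a standard FHIR R4 Observation.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "category": [{"coding": [{
        "system": "http://terminology.hl7.org/CodeSystem/observation-category",
        "code": "vital-signs"}]}],
    "code": {"coding": [{"system": "http://loinc.org",
                         "code": "8867-4", "display": "Heart rate"}]},
    "subject": {"reference": "Patient/example"},
    "effectiveDateTime": "2018-06-30T07:15:00Z",
    "valueQuantity": {"value": 62, "unit": "beats/minute",
                      "system": "http://unitsofmeasure.org", "code": "/min"},
    "device": {"display": "Consumer fitness tracker"},
}

resp = requests.post(f"{FHIR_BASE}/Observation", json=observation,
                     headers={"Authorization": "Bearer <token>"})
resp.raise_for_status()
```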

Another example announced earlier this year is a partnership between Evidation Health and Tidepool, who are conducting a new research study, called the T1D Sleep Pilot, to study real-world data from Type 1 diabetics. With Evidation’s data platform and Tidepool’s device-agnostic consumer software, the goal is to better understand the dynamics of sleep and diabetes by studying data from glucose monitors, insulin pumps, and sleep and activity trackers. The data collected from sleep and activity trackers in particular allows us to better understand possible correlations between specific chronic conditions, like diabetes, and the impact of sleep—which in the past has been challenging to monitor. These additional insights provide a more comprehensive understanding of a patient’s condition and can lead to changes in treatment decisions—and ultimately, better outcomes.

How do you assess the quality and reliability of the data generated by digital health companies? What standards are you measuring them against?

Data quality management (DQM) is how leading companies evaluate the quality and reliability of data sources. ISO 9000 defines quality as “the degree to which a set of inherent characteristics fulfills requirements.” At ZS, we have a very robust DQM methodology, and our definition goes beyond the basics to include both the accuracy and the value of the data. Factors such as accuracy, absence of errors, and fulfilling specifications (business rules, designs, etc.) are foundational, but in our experience it is most important to also assess value, completeness, and lack of bias, because gaps in these areas often lead to misleading or inaccurate insights from analysis of that data.

However, assessing the value of a new data source is not easy, and it presents an entirely different set of challenges. One very important one is the actual interpretation of the data being collected. How do you know when someone is shaking their phone or Fitbit to inflate their steps, or how do you interpret that the device has been taken off and is not tracking activity? How do you account for that and go beyond the data to understand what is really happening? As we get more experience with IoT devices and algorithms get smarter, we will get better at interpreting what these devices are collecting and be more forgiving of underlying data quality.
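
The kinds of checks described here (completeness, business-rule validity, spotting a shaken or removed device) can be expressed as simple rules. Below is a minimal sketch with made-up data; the thresholds are illustrative assumptions, not ZS’s DQM methodology:

```python
import pandas as pd

# Made-up hourly step counts from a wearable; thresholds are illustrative.
readings = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2],
    "hour":    [9, 10, 11, 9, 10],
    "steps":   [412, 35000, 0, 801, None],
})

report = {
    # Completeness: share of non-missing values
    "completeness": float(1 - readings["steps"].isna().mean()),
    # Business-rule validity: an hourly count this high suggests the device
    # was shaken, so flag it for review rather than trusting it
    "suspect_spikes": int((readings["steps"] > 8000).sum()),
    # Absence of signal: zeros may mean the device was taken off
    "zero_readings": int((readings["steps"] == 0).sum()),
}
print(report)
```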

What are the ethical implications or issues (such as data ownership, privacy, and bias) you’ve encountered thus far, or anticipate encountering in the near future?

The ethical stewardship and protection of personal health data are just as essential for the long-term sustainability of the digital health industry as the data itself. The key question is, how can the industry realize the full value from this data without crossing the line? Protecting personal data in an increasingly digitized world—where we’ve largely become apathetic to the ubiquitous “terms and conditions” agreements—is a non-negotiable. How digital health and life science companies collect, manage, and protect users’ information will remain a big concern.

There are also ethical issues around what the captured data is used for. Companies need to carefully establish how to appropriately leverage the data without crossing the line. For example, using de-identified data for research purposes with the goal of improving products or services is aligned with creating a better experience for the patient, as opposed to leveraging the data for targeted marketing purposes.

Companies also face the issue of potential biases that may emerge when introducing AI and machine learning into decision-making processes around treatment or access to care. Statistical models are only as good as the data used to train them. Companies introducing these models need to test datasets and their AI model outputs to ensure gaps are eliminated from training data, that the algorithms don’t learn to introduce bias, and that there is a process for evaluating bias as the models continue to learn and evolve.
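
One concrete version of such a test is comparing a model’s positive-decision rates across groups. Here is a minimal sketch with toy data; the demographic parity metric and the 0.1 tolerance are illustrative choices, not a complete bias audit:

```python
import pandas as pd

# Toy model decisions; demographic parity difference is one of several
# possible fairness metrics, and the 0.1 tolerance is an assumption.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1, 1, 0, 1, 0, 0],
})

# Positive-decision rate per group, and the gap between the extremes
rates = decisions.groupby("group")["approved"].mean()
parity_gap = rates.max() - rates.min()
print(f"Approval rates: {rates.to_dict()}, gap: {parity_gap:.2f}")
if parity_gap > 0.1:
    print("Flag: decision rates diverge across groups; audit the training data")
```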

Source: https://rockhealth.com/the-data-driven-transformation-of-the-life-sciences-industry-a-qa-with-zs-associates-pete-masloski/

How redesigning an enterprise product taught me to extend myself – Instacart

As designers, we want to work on problems that are intriguing and “game-changing”. All too often, we limit the “game-changing” category to a handful of consumer-facing mobile apps and social networks. The truth is: enterprise software gives designers a unique set of complex problems to solve. Enterprise platforms usually have a savvy set of users with very specific needs — needs that, when addressed, often affect a business’s bottom line.

One of my first projects as a product designer here at Instacart was to redesign elements of our inventory management tool for retailers (e.g. Kroger, Publix, Safeway, Costco, etc.). As I worked on the project more and more, I learned that enterprise tools are full of gnarly complexity and often present opportunities to practice deep thought. As Jonathan, one of our current enterprise platform designers, said —

The greater the complexity, the greater the opportunity to find elegance.

New login screen

As we scoped the project we found that the existing product wasn’t enabling retailers to manage their inventories as concisely and efficiently as they could. We found retailer users were relying on customer support to help carry out smaller tasks. Our goal with the redesign was to build and deliver a better experience that would enable retailers to manage their inventory more easily and grow their business with Instacart.

The first step in redesigning was to understand the flow of the current product. We mapped out the journey of a partner going through the tool and spoke with the PMs to figure out what we could incorporate into the roadmap.

Overview of the older version of the retailer tool

Once we had a good understanding of the lay of the land, engineering resources, and retailers’ needs, we got into the weeds. Here are a few improvements we made to the tool —

Aisle and department management for Retailers

We used the department tiles feature from our customer-facing product as the catalog’s landing page (1.0 above). With this, we worked to:

  • Refine our visual style
  • Present retailers with an actionable page from the get-go
  • Make it quick and easy to add, delete, and modify items
New Departments page for the Partner Tool. Responsive tiles allow partners to view and edit their Aisles and Departments quickly.

Establishing Overall Hierarchy

Older item search page
Beverages > Coffee returns a list of coffees from the retailer’s catalog

Our solution simplified a few things:

  • A search bar rests atop the product to help find and add items without having to be on this specific page. It pops up a modal that offers a search and add experience. This was visually prioritized since it’s the most common action taken by retailers
  • Decoupled search flow and “Add new product” flow to streamline the workflows
  • Pagination, which was originally on the top and bottom, is now pinned to the bottom of the page for easy navigation
  • We also rethought the information hierarchy on this page. In the example below, the retailer is in the “Beverages” aisle under the “Coffee” item category, which is on the top left. They are editing or adding the item “Eight O’Clock Coffee,” which is the page title. This title is bigger to anchor the user on the page and improve navigation throughout the platform
Focused view of top bar. The “New Product” button is disabled since this is a view to add products

Achieving Clarity

While it’s great that the older Item Details page was partitioned into sections, from an IA perspective, it offered challenges for two reasons:

  1. The category grouping didn’t make sense to retailers
  2. Retailers had to read the information vertically but digest it horizontally and vertically
Older version of Item Details page

To address this, we broke down the sections into what’s truly necessary. From there, we identified four main categories of information that the data fell under:

  1. Images — This is first to encourage retailers to add product photos
  2. Basic Info — Name, brand, size, and unit
  3. Item description — Below the item description field, we offered the description seen on the original package (where the data was available) to help guide them as they wrote
  4. Product attributes — help better categorize the product (e.g. Kosher)

Sources now pop up on the top right of the input fields so the editor knows who last made changes.


Takeaways

Seeking validation through numbers is always fantastic. We did a small beta launch of this product and saw an increase in weekly engagement and decrease in support requests.

I learned that designing enterprise products helps you extend yourself as a visual designer and deep product thinker. I approached this project as an opportunity to break down complex interactions and bring visual elegance to a product through thoughtful design. To this day, it remains one of my favorite projects at Instacart, as it stretched my thinking and enhanced my visual design chops. Most importantly, it taught me to look at enterprise tools in a new light; now when I look at them, I am able to appreciate the complexity within.

Source: https://tech.instacart.com/how-redesigning-an-enterprise-product-taught-me-to-extend-myself-8f83d72ebcdf

Augmented reality, the state of the art in the industry – Miscible

Miscible.io attended the Augmented World Expo Europe in Munich in October 2018; here is my report.

What a great #AWE2018 show in Munich, with a strong focus on industry usage; and, of course, the German automotive industry was well represented. There were some new, simple but efficient AR devices, and plenty of good use cases with confirmed ROI. This edition was PRAGMATIC.

Here are my six takeaways from this edition. Enjoy!

1 – The return on investment of AR solutions

The use of XR by automotive companies, big pharma, and teachers confirmed good ROI with “ready to use” solutions in several domains.

2 – These are still the early days of AR, and improvements are expected to address some drawbacks

  • Hardware: field of view, contrast/brightness, 3D asset resolution
  • Some AR headsets are heavy to wear, which can have consequences for operator comfort and safety.
  • Accuracy of the overlay and recognition between virtual content and reality
  • Automating the process from the authoring software to a finished end-user solution

3 – The challenge of authoring

To create specific and advanced AR apps, there are still challenges with content authoring and with integration with legacy systems to retrieve master data and 3D assets. Automated, integrated AR apps need some ingenious development.

An interesting use case from Boeing (using HoloLens to assist with the mounting of cables) shows how they built an integrated and automated AR app. Their AR solution architecture has four blocks, sketched in code after this list:

  • A web service to design the new AR app (UX and workflow)
  • A call to legacy systems to collect master data and 3D data/assets
  • Creation of an integrated data package (an asset bundle) for the AR app
  • Creation of the specific AR app (Vuforia/Unity), transferred to the standalone device, the HoloLens
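
A rough sketch of how those four blocks might be wired together follows; every endpoint, payload shape, and name below is invented for illustration and is not Boeing’s actual system:

```python
import json
import requests  # assumes the `requests` package is installed

# Invented endpoints standing in for the four blocks described above.
AUTHORING_SVC = "https://authoring.example.com/api"  # block 1: app design
LEGACY_SVC = "https://plm.example.com/api"           # block 2: master data

def build_asset_bundle(app_id: str) -> dict:
    """Blocks 1-3: gather workflow, master data, and 3D assets into a bundle."""
    workflow = requests.get(f"{AUTHORING_SVC}/apps/{app_id}/workflow").json()
    master_data = requests.get(f"{LEGACY_SVC}/master-data/{app_id}").json()
    assets = requests.get(f"{LEGACY_SVC}/assets/{app_id}").json()
    return {"workflow": workflow, "master_data": master_data, "assets": assets}

def export_for_device(bundle: dict, path: str) -> None:
    """Block 4: serialize the bundle for the Vuforia/Unity build that targets
    the standalone HoloLens device."""
    with open(path, "w") as f:
        json.dump(bundle, f, indent=2)

export_for_device(build_asset_bundle("cable-mounting"), "bundle.json")
```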

4 – The concept of the 3D asset as master data

The usage of AR and VR is becoming more important in many domains, from design to maintenance and sales (configurators, catalogs, etc.).

The consequence is that original CAD files are transformed and reused across different processes in your company. It becomes a challenge to bring high-polygon models from CAD applications into other 3D/VR/AR applications, which need lighter 3D assets, often with texture and rendering adjustments.

glTF can be a solution: glTF defines an extensible, common publishing format for 3D content tools and services that streamlines authoring workflows and enables interoperable use of content across the industry.
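
As a minimal sketch of that conversion step, the snippet below loads a tessellated CAD export and writes a glTF 2.0 binary using the open-source trimesh library; the file names are placeholders, and a production pipeline would also decimate geometry and rework textures:

```python
import trimesh  # assumes the `trimesh` package is installed

# File names are placeholders; a real pipeline would also reduce polygon
# counts and adjust materials/textures for the AR/VR target device.
mesh = trimesh.load("cad_export.stl")  # tessellated geometry from CAD
print(f"{len(mesh.faces)} faces loaded")
mesh.export("cad_export.glb")          # glTF 2.0 binary, interoperable
```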

The main challenge is to implement a good, centralised, and integrated 3D asset management strategy, treating 3D assets as being as important as your other key master data.

5 – Service companies and experts to design advanced AR/VR solutions, integrated into the enterprise information system

Designing advanced and integrated AR solutions for large companies requires a new kind of expert, combining knowledge of 3D applications with experience in system integration.

These projects need new types of information system architecture that take AR technologies into account.

PTC looks like a leader in providing efficient and scalable tools for large companies. PTC, the owner of Vuforia, also excels with other 3D/PLM management solutions like Windchill, smoothly integrating 3D management into all the processes and IT of the enterprise.

Sopra Steria, the French IS integration company, is also taking on this role, bringing its system integration experience to the new AR/VR usages in industry.

If you don’t want to invest in this kind of complex project, for a first step into AR/VR or for some quick wins on a low budget, new content authoring solutions exist to build your AR app with simple user interfaces and workflows: Skylight by Upskill and WorkLink by Scope AR.

6 – The need for an open AR Cloud

“A real-time 3D (or spatial) map of the world, the AR cloud, will be the single most important software infrastructure in computing. Far more valuable than Facebook’s social graph, or Google’s PageRank index,” says Ori Inbar, co-founder and CEO of AugmentedReality.ORG. A promising prediction.

The AR cloud provides a persistent, multiuser, and cross-device AR landscape. It allows people to share experiences and collaborate. The best-known AR cloud experience so far is the famous Pokémon Go game.

So far, AR maps work using GPS, image recognition, or a local point cloud covering a limited space such as a building. The dream is to copy the world as a point cloud for a global AR cloud landscape: a real-time system that could also be used by robots, drones, and more.

The AWE exhibition presented some interesting AR cloud initiatives:

  • The Open AR Cloud Initiative launched at the event and had its first working session.
  • Some good SDKs are now available for building your own local AR cloud: Wikitude and Immersal

Source: https://www.linkedin.com/pulse/augmented-reality-state-art-industry-fr%C3%A9d%C3%A9ric-niederberger/

 

How sustainable is the food packaging industry? – Food Dive

As more consumers search for sustainable packaging options, food and beverage companies are forced to make tough decisions about their products.

Shoppers and investors are increasingly looking for companies and brands to take the initiative on environmental issues. A Horizon Media study found 81% of millennials expect companies to make public commitments to good corporate citizenship and 66% of consumers will pay more for products from brands committed to environmentally friendly practices, according to the Nielsen Global Corporate Sustainability Report.

But more eco-friendly practices haven’t been easy for the food packaging industry. Designing eco-friendly packaging that can keep products fresh and endure temperature changes that come with cooking can be a challenge. Packaging companies told Food Dive they recently made moves to offer sustainable options with water-based ink and more compostable packaging, but have faced obstacles along the way. While some brands are aiming to only appear more sustainable, others are making slow efforts to be eco-friendly with new innovations and products.

For major food and beverage companies, the higher cost of sustainable materials and the struggle to keep food fresh are barriers. Production costs for sustainable options can be about 25% higher than for traditional packaging. These materials also tend to be less effective at maintaining freshness, since packaging companies say plastic can have a tighter seal and keep out air better than other materials.

“That’s their compromise, it looks eco-friendly — but it’s not.” Damon Leach – Account representative at Green Rush Packaging

Some companies have found a way around the high costs. Damon Leach, an account representative at Green Rush Packaging, told Food Dive that a solution for some food companies has been to use material that looks recyclable to shoppers, but in reality, is not.

Instead of paying more for eco-friendly materials, companies have been picking material, like kraft paper, that looks more sustainable to consumers, he added. Leach said the products that appear to be more green do sell better.

Although Leach said more suppliers and consumers theoretically want sustainable packaging, those materials typically don’t have a long shelf life and consumers don’t want to pay the extra money. But some companies are still making an effort to pay more for eco-friendly packaging despite the challenges.

When will sustainable be the norm?

From producers and companies to retailers, consumers and recycling organizations, packaging can affect the whole supply chain. So the challenge for packaging manufacturers becomes determining what new innovations and materials are the best investments.

Randall LaVeau, business development manager at Interpress Technologies, which manufactures formed paperboard and plastic food packaging products, told Food Dive there is a huge push for more recycling in the marketplace. But he said it is hard to get an eco-friendly material that holds water but isn’t plastic and doesn’t degrade — a necessity for microwavable products that need water to cook.

Many companies are now working to develop recyclable packaging that can withstand heat and hold liquids, but LaVeau said there is still a lot of research and development to go before it is widespread.

“Everybody is in the shop trying to figure it out,” LaVeau said. “People have been working on it for the last 10 years or longer, they just haven’t had a good success for it.” 


For companies that have made sustainability goals, the clock is ticking. Mondelez just announced plans to make all of its packaging recyclable by 2025. Nestlé, Unilever and PepsiCo have agreed to phase in packaging made from recyclable, compostable and biodegradable materials with more recycled content by 2025, but haven’t released specific details about their plans. In fact, a recent report identified Coca-Cola, PepsiCo and Nestlé as the businesses contributing most to pollution.

But as these big companies push for more development on sustainable materials, that means cost could continue to be an obstacle. Although consumers say they are willing to pay more for sustainability, they don’t always pick up the more expensive options in stores.

“Just like anything else, when something new comes out… it is more expensive until they can work with it in time and maximize their efficiencies for the cost to come down,” LaVeau said.

New innovations in sustainable packages

Several companies have developed more sustainable options this year. For example, HelloFresh is rolling out more sustainable packaging for its meal kits with recyclable liners created by sustainable design company TemperPack.

And some new developments haven’t come into the mainstream yet. U.S. Department of Agriculture researchers have developed an edible, biodegradable packaging film made of casein, a milk protein, that can be wrapped around food to prevent spoilage.

Other companies are working to find new ways to help the environment. Wayne Shilka, vice president of innovation and technical support at Eagle Flexible Packaging, a printer of packaging in Chicago, has prioritized offering more sustainable options to customers. Eagle Flexible Packaging uses a water-based ink because it doesn’t create volatile organic compounds that then go out into the atmosphere, making it more environmentally friendly, Shilka said.

“We are finding that sustainable packaging is getting more and more and more interest.” Wayne Shilka – Vice president of Eagle Flexible Packaging

Six years ago, Eagle Flexible Packaging put together a compostable material for packaging, and about 100 companies discussed the option with them. Only one customer ended up using the compostable product because it cost more than any other packaging option the company offered. Every year since, a few more customers have worked with them to outfit their products with compostable material, Shilka said.

As more companies turn to compostable and sustainable packaging, the price will come down and make it more appealing, Shilka added.

“It continues growing to the point that it’s becoming, not mainstream, but much more routine that we have people who are calling and are interested and are actually doing something sustainable,” Shilka said.

‘Recyclable to an extent’

While some companies work to find new recyclable materials, others are satisfied with current packaging options. Flexible packaging — which is any package whose shape can be readily changed, such as bags and pouches — is popular. Representatives at packaging companies said flexible packaging can be an issue for sustainability since it has multilayer films with plastic and paper that need to be separated to be recycled.

LaVeau said most of his products are “recyclable to an extent” because of the layers. Certain recycling mills can handle his products, but at others, consumers need to separate the packaging for recycling — which doesn’t always happen.

Green Rush Packaging has the same issue.

“We got to get the end users to separate and recycle better instead of just facilities otherwise it is just waste, bad for the environment,” Leach said. 

Flexible packaging can also provide a higher product-to-package ratio, which creates fewer emissions during transportation and ultimately uses less space in landfills.

Some companies stand by their use of packages that aren’t fully sustainable. Robert Reinders, president of Performance Packaging, a family-owned corrugated box plant founded in 1995 by packaging professionals, told Food Dive that about 5% of his products are recyclable. He said flexible packaging is a sustainable option because it uses less energy and prolongs the shelf life of the food, so it eliminates food waste.

“There is all kinds of great benefits to flexible packaging that gets drowned out by the recycle, compostable needs,” Reinders said.

Falling behind other countries’ sustainable goals

In the last two years, more than 70 bills have been introduced in state legislatures regarding plastic bags — encompassing bans, fees and recycling programs. However, many of those laws have not impacted the food packaging industry.


In comparison, countries across the globe are increasing their efforts and goals when it comes to sustainability for both food and beverage product packaging. But U.S. companies are still in the development stage on many of their innovations.

The Singapore Packaging Agreement — a joint initiative by government, industry and NGOs to reduce packaging waste — has averted about 46,000 metric tons of packaging waste during the past 11 years, according to Eco-Business. In Australia, national, state and territory environment ministers have agreed that 100% of Australian packaging will be recyclable, compostable or reusable by 2025.

Vancouver, Canada, has adopted a ban on the distribution of polystyrene foam cups and containers, as well as restrictions on disposable cups and plastic shopping bags. The U.K. also plans to eliminate plastic waste by 2042.

As countries around the world change their packaging to adjust to these sustainability goals, Reinders said U.S. companies will likely adopt more changes. And as more CPG makers start mass producing sustainable options around the world, he said it will drive prices down globally.

“I was at Nestlé headquarters in Switzerland and they are currently making the efforts to find different materials and different processes so they can be recyclable,” Reinders said. “It’s all starting now. The more the big guys get into it, the better it will be.”

Source: https://www.fooddive.com/news/how-sustainable-is-the-food-packaging-industry/539089/

 
