HBR – How Netflix Redesigned Board Meetings

The board of directors of a public company has a long list of oversight responsibilities, but it is not always the case that directors receive the information they need to make fully informed decisions on key matters, such as strategy, succession, and performance monitoring. We recently studied a novel approach to information sharing at Netflix that provides a model for overcoming this governance shortfall.

At most companies, directors have a less complete understanding of the company than executives because of their limited exposure to day-to-day activities. The format of the information they receive does little to overcome this information deficit. The typical board book of a large corporation is a dense PowerPoint presentation spanning hundreds of pages. Some directors find these presentations heavy on data but light on analysis.

Furthermore, boardroom dynamics impede information flow, particularly in settings where the CEO maintains strict control over the content presented, when presentations are carefully scripted, and when presentations are made by only a limited number of executives.

Netflix takes a radically different approach. It incorporates two unique practices. First, board members periodically attend (in an observing capacity only) monthly and quarterly senior management meetings. What’s more, communication with the board comes in the form of a short, online memo that allows directors to ask questions and comment within the document. Executives can amend the text and answer questions in what is essentially a living document. We believe these two innovations meaningfully contributed to Netflix’s extraordinary performance in recent years.

Governance by Walking About

Unlike the stiff, formalized approach to most director-executive interactions, Netflix encourages its board members to spend time watching the company operate “in the wild.” The company holds three regularly scheduled executive meetings that board members attend:

  • Staff meetings (R-Staff) are monthly meetings of the top seven leaders.
  • Executive Staff meetings (E-Staff) are quarterly meetings of the top 90 executives.
  • Quarterly Business Reviews (QBR) are two-day gatherings of the top 500 employees.

One board member attends R-Staff meetings, one or two attend E-Staff meetings, and between two and four attend Quarterly Business Reviews. Directors who attend these meetings are expected to observe but not influence or participate in the discussion. The purpose of their attendance is primarily educational: By directly observing management, directors gain a greater understanding of the range of issues facing the company, the analytical approach that underpins managerial decisions, and the full scope of the tradeoffs involved. Ultimately, the aspiration is that this will translate to significantly more confidence in management and its choices. While warning that directors must be disciplined and exercise self-restraint about influencing decisions outside the boardroom, Netflix Founder and CEO Reed Hastings told us that providing deep access to management discussion “is an efficient way for the board to understand the company better.”

One director describes the benefit of attending management meetings: “You see a different level of dynamic of the executive team. You really see how different individuals contribute, you see their expertise, you see the voice that they have around the table, and you see the dynamic with the CEO. You see how the topics that have been discussed, resolved, and reported on in a board meeting actually got processed.”

Netflix directors believe that direct exposure to active strategic discussions gives them substantially deeper knowledge of the company than orchestrated visits to company offices or facilities. They also believe that frequent interaction with the senior executive team positions the board well for an eventual CEO succession. One small hazard: Hastings cautions that directors granted this level of access to management discussion and documentation need to exercise self-restraint about influencing decisions outside the boardroom.

A New Way to Write Board Memos

The second innovation is that Netflix’s board communications are structured as approximately 30-page online memos in narrative form that not only include links to supporting analysis but also allow open access to all data and information on the company’s internal shared systems. This includes the ability for directors to ask clarifying questions of the subject authors. This quarterly memo is written by and shared with the top 90 executives as well as the board.

The memo itself consists of written text that highlights business performance, industry trends, competitive developments, and other strategic and organizational issues. High-level data is summarized in charts and graphs, but the memo’s emphasis is primarily on the written discussion and analysis of issues. Embedded links within each section of the memo connect the reader to supplemental analysis, data, and details that support and expand the written discussion.

Board members receive the memo a few days prior to board meetings and are self-directed in reviewing the material and clicking through to review supplemental analysis on topics or issues they believe are most important, interesting, or require the most attention from a fiduciary standpoint. Directors estimate they spend four to six hours in preparation. They have the ability to pose questions or ask for clarification directly within the digital memo, to which senior management responds prior to the meeting. Directors take active advantage of this capability.

Because directors are extensively prepared, board meetings themselves are significantly more efficient, with a focus on questions and discussion rather than presentation. Meetings are only three to four hours in length (compared to all day or multiple days at many large corporations). Senior executives attend board meetings and answer questions if needed.

The Netflix approach to board governance is rooted in and reflective of the company’s culture and leadership. The Netflix culture emphasizes individual initiative, the sharing of information, and a focus on results rather than processes. In the words of one director, “A lot of CEOs like the notion of transparency. The difference is that Reed has decided to put mechanisms in place … that actually make it happen.”

Netflix directors believe that these processes give them confidence in management, particularly when facing challenges. Examples include its fierce competition with Blockbuster for dominance in the DVD-subscription market, costly decisions to invest in content for its website, international expansion, and the significant cost and risk of producing original content. According to one director, “It’s hard to imagine we could have done it without the intimate knowledge of the operations and the people.”

Directors of major companies take their oversight job seriously—but too often they are handicapped by a lack of transparency and usable information. Netflix provides a model for companies looking to overcome this challenge.

https://hbr.org/2018/05/how-netflix-redesigned-board-meetings


Chiefmartec – 54 marketing stacks from The Stackies 2018: Marketing Tech Stack Awards

Wow, I am just in awe of the 54 entries we received into The Stackies 2018: Marketing Tech Stack Awards. The quality of the illustrations this year was particularly remarkable. These are marketers who clearly have a lot of passion for their infrastructure.

(How’s that for something you wouldn’t have heard 5 years ago?)

If you’re new to this, a “marketing stack” is the collection of software that a marketing team uses to ply their trade. The Stackies is a fun awards program at the MarTech conference where we invite marketers to send in a single slide that illustrates their marketing stack in some conceptual way.

We share all entries with the community, because the real purpose of The Stackies is to give all of us lots of real-world examples of marketing stacks at different companies across different industries. We learn a lot about the reality of martech from those who are willing to contribute.

As a way of expressing our gratitude to those contributors, we make a donation on behalf of every entry to charity. This year, we decided to donate $100 for each entry to Girls in Tech — a total of $5,400.

We (the editorial teams at chiefmartec.com, MarTech Today, and the MarTech conference) select 5 winners to receive trophies. The winners are chosen based on the quality of their slide illustration — but also based on the innovations they bring in the way they visualize their stack, giving us a new way of looking at them.

Of course, the test of a real “winning” marketing stack is how well it performs. So while the winning illustrations are a fun way to celebrate The Stackies, all of our entrants are winners if their stack delivers for them.

The Stackies 2018 Winners: 5 Amazing Marketing Stacks

But officially, the 5 winners of The Stackies 2018 are:

1. BlackRock Marketing Tech Stack

Blackrock Marketing Tech Stack

This is a spectacular illustration — the SimCity of Marketing Stacks — of “MarTechtropolis.” On visual production values alone, this slide by BlackRock is impressive.

But it’s more than a pretty picture. The four sections of MarTechtropolis capture the cyclical process that BlackRock uses to iterate on marketing strategy and execution:

  1. Discover City — defining the opportunity, value, or user
  2. Concept Park — defining the direction or idea
  3. Plan-Ville — assembling the team and prioritizing work
  4. Do Town — executing on the vision and plan

In each of these sections, they’ve identified the key technologies they use and, in the legend on the left, they’ve annotated which marketing capability they use those tools for.

2. Earth Networks Marketing Tech Stack

Earth Networks Marketing Tech Stack

Earth Networks, a meteorological data and weather intelligence company, takes top honors for “best brand metaphor” with their marketing stack.

But their stack-as-a-weather-system illustration conveys real, meaningful structure:

  • A “jet stream” of core platforms that run throughout the entire marketing process.
  • Three “fronts” of marketing efforts and their tools: attract, engage, and analyze.
  • Color coding of tools to categorize them in one of five different capabilities: content creation (blue), distribution (orange), customer experience (green), website (yellow), and data management (magenta).

One last shout-out for their metaphor: I love that opportunity and revenue are both “highs” and churn is a “low.” And that the whole thing is called a “marketological forecasting stack.”

Well-played, Earth Networks. Well-played.

3. Cisco Marketing Tech Stacks (2017 to 2018)

Cisco Marketing Tech Stack

As you may know, Cisco entered The Stackies last year and won an award for sharing their marketing stack with 39 marketing technologies. They had a clever way of illustrating how martech was applied for different stakeholders through different stages of a customer’s lifecycle.

This year they entered again — and won again.

Now, there wasn’t much that changed about the design of their entry this year. So you might be wondering if we just really, really liked that design enough to award it a Stackie trophy twice in a row.

Well, we do love their design. But they won this year because, by submitting an update to their stack one year later, they’ve given us an amazing example of how an enterprise marketing stack evolves over time.

Cisco 2017 to 2018 Marketing Tech Stack

Some products were removed. Some new ones were added. They tweaked the organization of their foundational layer. They even let us in on their work-in-progress, noting that they’re currently evaluating customer data platforms and showing where they think that could fit in their stack’s overall architecture.

For this incredible transparency into the real-world evolution of a truly world-class enterprise marketing stack, Cisco definitely deserves their second trophy and the gratitude of the martech community.

4. Janus Henderson Marketing Tech Stack

Janus Henderson Marketing Tech Stack

In the same vein of martech evolution, investment firm Janus Henderson won a Stackie for illustrating their marketing stack as it evolved through the merger of Janus Capital Group and Henderson Group, showing us their pre-merger stack and their post-merger stack.

It’s a terrific example of “rationalizing” a larger stack with overlapping solutions into a slimmer, more unified stack. We get to see what stayed, what went, and what was added.

The elements of their stack are well organized too, shown with four clusters of technologies — production, awareness, engagement, and analysis — grouped into three levels using Gartner’s pace layering architecture:

  • Systems of Record — common ideas, change infrequently
  • Systems of Differentiation — better ideas, change occasionally
  • Systems of Innovation — new ideas, change frequently

No flashy metaphor, but a solid marketing stack — and they’ve let us witness its evolution through a major company transition. That’s a wonderful contribution.

5. Element Three Marketing Tech Stack

Element Three Marketing Tech Stack

Speaking of flashy metaphors, this was clearly a bumper crop year for them in The Stackies. This stack from marketing agency Element Three is a particularly fun one.

They grouped different functional areas of their stack and labeled them with riffs on popular movie titles, such as:

  • Minority Report[ing] (Analytics)
  • Enemy of the Stack (Security & Performance)
  • Catch Me If You Can (Lead Capture and Nurturing)
  • How to Generate a Lead in 10 Days (Ad Tech)

Element Three Martech Stack Legend

But they didn’t win for their clever movie titles. They won because they introduced a novel insight into their marketing stack illustration that shows the amount of time they spend in each tool.

From a small amount of time (a ticket) up to a lot of time (a big tub of popcorn), they’re giving us a sense of how their team operationally uses these different tools.

That’s an interesting view into a marketing stack, and it’s an exercise that others may want to follow as they think about investing in staff training and tool-specific process optimization.

Marketing Stacks from Non-Tech Companies

It was really hard for the editorial teams to choose the winners this year because there were so many great entries.

However, I was particularly impressed by — and grateful for — the number of non-tech (or at least non-martech) companies who shared their stacks this year. It’s a sign of how mainstream marketing technology is becoming that firms from so many different industries are carefully building their martech infrastructure (and sharing it in The Stackies).

Marketing Stacks of Non-Tech Companies

In addition to some of the winners, here are a few of the non-tech company marketing tech stacks that stood out to me:

Airstream Marketing Tech Stack

Airstream Martech Stack

Space NK Marketing Tech Stack

Space NK Martech Stack

Stantec Martech Stack

Stantec Martech Stack

Tennant Martech Stack

Tennant Martech Stack

All 54 marketing stacks entered into The Stackies

Thank you to everyone who contributed to The Stackies this year. Here are all 54 stacks that were entered on SlideShare, yours to download and study:

https://www.slideshare.net/sjbrinker/the-stackies-2018-marketing-tech-stack-awards-95242883

Rami Branitzky – How Startups Power PepsiCo’s Innovation Strategy

Innovation in large enterprises once occurred over the course of decades, but today, that’s a luxury many enterprises no longer have. In 1965, the average company remained on the S&P 500 for 33 years. By 1990, that tenure had shrunk to 20 years, and by 2026, it’s expected to shrink to 14 years.

Rapid innovation is a prerequisite for survival.

Yet, many say enterprises don’t have what it takes. They take too long to adopt solutions and get bogged down by legacy systems. Their progress is incremental rather than disruptive.

But the biggest companies in the world aren’t sitting still. They can be catalysts for innovation and first adopters of new technology, if they understand how to create a framework for innovation within their company. At Sapphire, we collaborate regularly with corporate innovators that seek to navigate dynamic new ecosystems often populated by disruptive startups and emerging technologies.

Shakti Jauhar, head of Global HR Operations and Shared Services at PepsiCo, talks about the importance of constant innovation and has created a program that helps his team evaluate and bring in new technology innovations from startups in the HR space. Called the 90/90, the program has seen early success, so I sat down with Shakti to learn more about the framework he uses to speed up startup collaboration — one that any enterprise can leverage to make fast-moving innovation part of its ethos.

Below is an excerpt from our conversation in which Shakti shares the initial steps a company should take to create a framework for working with startups.

Step 1: Create Alignment and Agree on Objectives

This first step may seem obvious, but is often overlooked. Misalignment can and will kill every attempt to innovate in the enterprise. Enterprises are complex machines that rely on many systems running in tandem. If the legal team, IT department and procurement each have conflicting priorities, it will be difficult to succeed. With the increasing trend of business driving tech adoption independent of IT, CXOs would also do well to align closely with CIOs and IT leadership on questions of specific innovation priorities, where to partner vs. build, speed of adoption, appetite for technological risk and so on.

At PepsiCo, an important alignment step is to identify a need or areas of opportunity and then present them to the startup and innovation communities for solutions. Problem statements have ranged from CoEs looking to implement a new program to efficiency plays. Every six to nine months, the team identifies a small group of startups and invites them in to build alignment with the stakeholders behind the agreed-upon problem statements. This alignment is a key enabler in the eventual success of startups graduating through the program.

Achieving alignment will put in place a realistic understanding both of what is possible and how it will play out across an organization. Working out internal problems is the foundation of an internal framework for innovation and CXOs should do this well before they bring startups into the equation.

Step 2: Ready Internal Infrastructure and Platforms

Another critical step is reviewing the infrastructure that a company has in place and updating it if necessary. As a key first step, PepsiCo has re-architected its core HR system onto a single platform across 83 countries for ~260,000 employees. This, along with other technology deployments enabled it to create the equivalent of a “plug and play” system, where new solutions could be adopted into the core platform.

Allowing some experimentation on this platform can also be an enabler of startup success. For example, platform partners have adopted some of these ideas, launching an app store or making an environment available for startups to write their own APIs into an HR platform. Taking a platform-based approach has been a holy grail in the enterprise for some time, and for PepsiCo HR this infrastructure is a key ingredient for serving up innovation at scale to employees.

Step 3: Build a Blueprint

The next step is to create a blueprint which enables finding, incorporating and scaling new processes. This allows enterprises to lock in their ability to innovate for years to come and continually work with the best emerging startups in their field.

As part of the 90/90 program, participating startups commit 90 days to both deploy their solution within PepsiCo and demonstrate their ROI. This provides a clear framework for all parties to quickly evaluate success. That means PepsiCo is evaluating solutions based on how they drive broader business goals and address problem statements. For the startups, that means quickly assessing their readiness to scale to enterprise grade.

To assemble a system for scaling innovation by partnering with startups, enterprises should:

  • Use their connections with VC firms, founders and angel investors to scout partnership opportunities. For example, PepsiCo’s partnership with Sapphire Ventures has exposed the company to a wide range of startups and emerging technologies that fuel its innovation roadmap.
  • Specify a hard timeline for testing innovation and partnerships. This helps focus the system on accomplishing set goals. It also standardizes the process for bringing on new tech, making it repeatable.
  • Focus on finding fit. When dealing with a shorter timeframe, like the 90/90 framework, big investments are not necessary. The real ROI might come from finding something that continually pays for itself in a short time.

The goal in making a blueprint for a framework like 90/90 is to keep things moving for the enterprise and to make partnerships easier by laying out a clear vision of how successful adoption of new technology will work out.

Step 4: Lean All The Way In

Setting the wheels of innovation in motion is only half of the work in a program like this. The other half is building long-term relationships with the best new companies out there. The companies that find success in a startup-enterprise relationship are open, proactive and willing to make an investment beyond the short-term.

Enterprises also need to keep a close eye on the startups in their industry. But when so many startups fail, enterprises can be wary of spending too much time trying to dissect the space.

That’s a huge mistake. Yes, many startups don’t survive. But, over time, startups will evolve the way organizations think about innovation and agility. And ultimately one of them will end up disrupting business in a way that will be unprecedented. Leaders need to be paying close attention to their market to stay on that curve.

Set Up for Success

It’s up to large enterprises to carve out their own future. In today’s world, that means finding ways to innovate at high speeds. Although they certainly have more to coordinate than smaller companies, this doesn’t mean they’re doomed to lag behind.

Instead, savvy global enterprises like PepsiCo are putting themselves at the forefront of innovation in their industries. They’re building long-term partnerships within the startup and venture communities, and creating a way for innovation to regularly cycle through their companies. They’re streamlining their internal processes to scale novel solutions, and as they do these things, they’re securing their company’s legacy for years to come.

https://medium.com/sapphire-ventures-perspectives/how-startups-power-pepsicos-innovation-strategy-52fea23ade30

Medici – Paths to Market Dominance – Two Main Strategies Used by Tech Corporations

There are two recognizable patterns of development for technology companies in 2018, with most corporations being a hybrid of the two. Nonetheless, some are closer in their approach to the ecosystem model, where technology giants aim to become an operating system of one’s life. Others are closer to the model of infrastructural dominance, building solutions that will thread through the future of industries.

Ecosystem approach of e-commerce giants focused on empowering users

The ecosystem approach is a fairly straightforward one and leads to market consolidation and concentration of data, money, and market power within a closed loop of highly integrated, tightly connected platforms. Ecosystems tend to have a customer/user at the center while operating a tight ship of merchants/vendors at the back end to serve the customer within their ecosystem.

The barriers to entry into the ecosystem for the customer/user are virtually non-existent, while the cost of exit is often daunting enough to keep them loyal. That is largely due to highly atomized competition, which cannot offer a comprehensive enough solution in one place and instead inconveniences the user into signing up for a disparate set of unconnected services.

An overwhelming grip on user behavior in ecosystems is reached through having an answer to every question the user may have at any moment – search for products/services, payment for a product/service online and offline, calendar, contacts book, P2P payments, recommendation engines, news feeds, social platforms, etc. Comprehensiveness and complete integration of the elements of the ecosystem keep competition outside while locking the user inside.

Amazon, Alibaba, and Apple are all close to the ecosystem end of the spectrum. Amazon, in particular, is one aggressive wave. The company wants to be everything for everyone, with its Prime program being a masterpiece of marketing and commerce. Amazon does have a business-focused side, but the company is more focused on wrapping its world of services around the user.

Facebook is more of a mixed case, containing hallmarks of the second model. While Facebook does operate a thought-through, integrated network of social platforms, the company recently started pivoting toward a commercial focus, enabling businesses to reach their potential customers on the social network and interact with them through Messenger chatbots, for example.

It’s inevitable that ecosystems will collide in the long term because all of them are converging on the same spectrum of solutions to empower an individual. Ecosystems aim to become the ultimate operating system of one’s life, touching and enabling a wide range of activities.

Infrastructural dominance focused on empowering businesses and entrepreneurs

Another interesting strategy involves the ability to become a universal connective tissue across industries. IBM and Google are probably some of the most vivid examples, but UPI can also be seen using that strategy.

One of the most important hallmarks of the strategy of infrastructural dominance is the ability to cover every aspect of back-end operations for businesses – no matter the size.

Startup with Google and Google for Entrepreneurs perfectly exemplify this strategy, as does the company’s latest announcement of Shopping Actions, which allows retailers to list their products across Google Search, in its Google Express shopping service, and in the Google Assistant app for smartphones and on smart speakers like the Google Home. The program offers online shoppers a universal cart whether they’re shopping on mobile, desktop, or via a voice-powered device.

Google is working with a range of top retailers on the new effort, including Target, Walmart, Ulta Beauty, Costco, and Home Depot. The company reports that its retail partners saw the average size of a customer’s shopping basket increase by 30% after joining the program. Ulta, for example, saw average order values increase 35%. Target reported that its Google Express shopping baskets increased nearly 20%, on average.

While Google does invest in enabling technologies for businesses and individuals, IBM is a more enterprise-focused example following a similar strategy. Instead of hammering on the competition with marketing resources and brand name to promote a proprietary solution, IBM, for example, offers the toolbox for businesses to build their own virtual assistants. IBM’s clients can train their assistants using their own data sets, and IBM says it’s easier to add relevant actions and commands than with other assistant tech. Each integration of Watson Assistant keeps its data to itself, meaning big tech companies aren’t pooling information on users’ activities across multiple domains.

If we were to zoom out a bit on Google, Alphabet would become a mixed case. Alphabet itself would likely be a hybrid model, as certain elements of it are tightly integrated to serve the needs of the user, while others are built to underpin business operations across industries. But let’s go back to IBM.

IBM has its own version of Google’s stack for entrepreneurs and startups – a portal called developerWorks – a source of how-tos, tools, code, and communities to help solve the complex problems that one faces as a developer in an enterprise organization. At developerWorks, developers can learn about, try, and quickly gain skills in the latest IBM products and open standards technologies.

Overall, both Google and IBM are quite similar in their approach to development – they focus on underlying technologies that will constitute the infrastructure of the future. Those businesses are gradually moving toward dominance at the back end. Google is not an e-commerce company, but the ability to leverage its solutions is vital for any e-commerce business. IBM is not a bank, not an insurer, and not a RegTech company, but it offers technology to power various aspects of business operations across those industries.

As companies pivot, merge and acquire, as well as develop new offerings, we’ll continue to observe the differences in success rates and long-term implications of each approach.

https://gomedici.com/paths-to-market-dominance-two-main-strategies-used-by-tech-corporations/

AgFunder – Entrepreneurs Weigh in on Corporate vs VC Investor Debate

Corporate venture capital can help agrifood tech startups scale up and expand to new markets, but it can also be difficult to work with, according to agrifood entrepreneurs attending the Seeds & Chips conference in Milan last week.

While corporate investment can bring industry knowledge, technical know-how and distribution channels, some entrepreneurs are wary of taking their money, especially if their objectives do not align.

If a corporate VC has been founded to defend the parent company’s market share or to scout for new acquisition opportunities, there might be a conflict of interest with the companies they are trying to invest in, some entrepreneurs argued.

“(Corporate VC) can probably bring a lot of value. They have a lot of expertise, money, etc. They basically have everything we need. But sometimes I feel like they want us to be their company,” Alvyn Severien, CEO of Algama, the microalgae-based food startup, told AgFunderNews on the sidelines of the event. “The more we discuss, they want some kind of exclusivity; they want to own us.”

Of course, there are corporate VCs who do not operate that way and aim to invest in startups as a commercial venture capital fund would, added Severien. “I think this is a good attitude to have, and we are open to talk to these people.”

Another agrifood entrepreneur highlighted the different value propositions for a startup in commercial and corporate venture capital.

“Commercial VCs are very professional and understand how to create a solid business model, as well as how to guide a startup through a capital raising or an exit. However, their value beyond that can be limited. For example, they might say they have a network that can help set up a distribution infrastructure, but in reality, it will only help to a certain point. Corporate VCs, however, are not as professional and do not have the experience of deals and rounds because most of them have not been around for a long time. But on the flipside, they can bring a lot of added value to startups, for example by providing access to their distribution network or R&D capabilities.”

The different value propositions can sometimes be exploited to accelerate the growth of startups. Dan Altschuler Malek, venture partner at New Crop Capital, mentioned during a panel discussion that he enjoyed co-investing with Tyson Ventures, the venture arm of Tyson Foods, in cultured meat startup Memphis Meats. Together they can help Memphis Meats on multiple dimensions, but he also highlighted how important it was that Tyson Ventures has a good understanding of where the meat industry is going, and that it is seeing cultured meat as a complementary product to what they are currently offering.

Sami Vekkeli, CEO of Nordic Insect Economy, talked about what he is looking for in investors.

“In our case, we have a sustainability mission. Of course, we provide an opportunity; it is like any investment. But we are really focusing on the sustainability aspect. And most of our investors are really interested in that as well.”

Ultimately, it is all about sharing the same vision. Commercial VCs can more easily go along with the vision of their portfolio companies. A corporate’s primary interest is to ensure growth and good performance of its core business, which might not align with the mission of a startup.

Entrepreneurs should look at their mission and strategic needs to determine what kind of investor would suit them.

https://agfundernews.com/agrifood-entrepreneurs-weigh-corporate-vs-vc-investor-debate.html

Mission – Greatest Sales Pitch I’ve Seen All Year

A few weeks ago, I met a CMO named Yvette in the office kitchen at OpenView Venture Partners. She was chewing on a bagel during a lunch break from the VC firm’s all-day speaker event, and she was clearly upset.

“How in the world,” Yvette said, reaching for the cream cheese, “am I going to inform my team that our entire approach to marketing is wrong?”

The CEO of another company, overhearing Yvette, chimed in. “Right? I just texted my VP of sales that the way we’re selling is obsolete.”

In fact, virtually every CEO, sales exec, and marketing VP in attendance seemed suddenly overwhelmed by an urgent desire to change the way they worked.

The reason?

They had just been on the receiving end of the best sales pitch I’ve seen all year.

The 5 Elements of Drift’s Powerful Strategic Narrative

There were many great speakers at OpenView’s Boston headquarters that morning — JetBlue’s VP of marketing, senior execs from OpenView’s portfolio—yet none moved the crowd quite like Drift director of marketing Dave Gerhardt. By the time Gerhardt was finished, the only attendees who weren’t plotting to secure budget for Drift’s platform were the ones humble-bragging about how they’d already implemented it.

How did Gerhardt do it? For that matter, how has Drift—a web-based, live-chat tool for salespeople and marketers—managed to differentiate itself in a market crowded with similar offerings? (The company recently raised a $32 million round of Series B capital led by General Catalyst, with participation from Sequoia, and boasts over 40,000 businesses on its platform.)

The answer to both starts with a brilliant strategic narrative, championed by Drift CEO David Cancel, that has transformed the company into something more like a movement. In fact, two weeks after hearing Gerhardt speak, I saw Cancel pitch a new feature at Drift’s day-long Hypergrowth event, and he told virtually the same story, to similar effect.

Here, then, are the 5 elements of every compelling strategic story, and how Drift is leveraging them to achieve breakout success. If you’re pitching anything to anyone, lay them out in exactly this order:

(I have no financial stake in Drift. However, I saw Cancel pitch at Hypergrowth because I also spoke at the conference. The images below are a mix of Gerhardt’s OpenView slides and Cancel’s from Hypergrowth.)

#1. Start with a big, undeniable change that creates stakes

No matter what you’re selling, your most formidable obstacle is prospects’ adherence to the status quo. Your primary adversary, in other words, is a voice inside people’s heads that goes, We’ve gotten along just fine without it, and we’ll always be fine without it.

How do you overcome that? By demonstrating that the world has changed in such a fundamental way that prospects have to change, too.

Drift kicks off its strategic narrative with a dramatic change in the life of a typical business buyer. She’s so connected now that she practically sleeps with her phone:

Drift’s change in the world (1)

And her preferred way to interact—professionally and socially—is through always-on messaging platforms like Facebook Messenger, SMS, Instagram, and Slack:

Drift’s change in the world (2)

Note that the change Drift talks about is (1) undeniably happening and (2) happening independently of Drift — that is, whether Drift exists or not. It also (3) gives rise to stakes. All three must be true if you want prospects’ trust as you lead them down the path of questioning their love for the status quo.

While Drift’s slides don’t name the stakes explicitly, Cancel and Gerhardt’s voiceovers make them clear enough: Interact with prospects through these new channels—in real time—or don’t interact with them at all.

#2. Name the enemy

Luke fought Vader. Moana battled the Lava Monster. Marc Benioff squared off against software.

One of the most powerful ways to turn prospects into aspiring heroes is to pit them against an antagonist. What’s stopping marketers and salespeople—the heroes of Drift’s strategic story—from reaching prospects in the new, changed world?

According to Drift, it’s tools of the trade like lead forms— “fill in your name, company, and title, and maybe we’ll get back to you”— designed for a bygone era:

The enemy: tools of the trade designed for a bygone era

Drift even steals a page from Salesforce’s “no software” playbook, except here it’s those “forms” that play the role of villain:

Drift’s “no forms” icon

Naming your customer’s enemy differentiates you — not directly in relation to competitors (which comes off as “salesy”), but in relation to the old world that your competitors represent. To be sure, “circle-slash” isn’t the only way to do that, but once you indoctrinate audiences with your story, icons like this can serve as a powerful shorthand. (I bet the first time you saw Benioff’s “no software” image, you had no idea what he was talking about; once you heard the story, it spoke volumes.)

Incidentally, both Cancel and Gerhardt execute a total ninja move by including themselves in the legions of marketers and salespeople seduced by the enemy—that is, brought up to believe that phone calls, forms and email are how you reach prospects. (Cancel was, after all, chief product officer at HubSpot, a big enabler of forms.) In their story, it’s not you who needs help, but we:

Drift isn’t pointing an accusatory finger at your problem. Instead, they’re inviting you to join them in a revolution, to fight with them against a common foe.

#3. Tease the “Promised Land”

In declaring the old way to be a losing path, Drift plants a question in audiences’ minds: OK, so how do I win?

It’s tempting to answer that question by jumping right to your product and its capabilities, but you’ll be wise to resist that urge. Otherwise audiences will lack context for why your capabilities matter, and they’ll tune out.

Instead, first present a glimpse of the “Promised Land” — the state of winning in the new world. Remember, winning is not having your product but the future that’s possible thanks to having your product:

Drift lays out the Promised Land

It’s wildly effective to introduce your Promised Land, as Drift does, so it feels like we’re watching you think it through (“what we realized was”). However you do it, your Promised Land should be both (a) desirable (obviously) and (b) difficult for prospects to reach without you. Otherwise, why do you exist?

#4. Position capabilities as “magic” for slaying “monsters”

Once audiences buy into your Promised Land, they’re ready to hear about your capabilities. It’s the same dynamic that plays out in epic films and fairy tales: We value Obi-Wan’s gift of a lightsaber precisely because we understand the role it can play in Luke’s struggle to destroy the Death Star.

So yes, you’re Obi-Wan and your product (service, proposal, whatever) is a lightsaber that helps Luke battle stormtroopers. You’re Moana’s grandmother, Tala, and your product is the ancient wisdom that propels Moana to defeat the Lava Monster.

Drift’s “magic” for annihilating forms is an always-on chat box that prospects see when they visit your website:

Slaying forms: Drift’s always-on chat box

Cancel’s keynote doubled as a launch announcement for a new feature called Drift Email, so he transitioned next to another monster that’s keeping people from reaching Drift’s conversational Promised Land:

A new monster

Email, obviously, would be a way to facilitate that. But wait, wasn’t email the enemy—or at least an evil henchman?

Well, if the Terminator can be resurrected as a force for good (Terminator 2), then email can be too. Cancel lays out three “mini-monsters” blocking that transformation:

Then he introduces Drift Email—not as a set of context-free features or even benefits, but as a collection of magic for slaying the monsters. With Drift Email, your prospects can react to an email instantly by, for instance, booking a demo without waiting for you to email them back:

After a prospect returns to your website by responding to an email, you can continue the conversation in a relevant way, rather than by bombarding the prospect with more emails:

And no matter what channel the prospect wanted to talk through next, the context (history, etc.) persists:

#5. Present your best evidence

Of course, even if you’ve laid out the story perfectly, audiences will be skeptical. As they should be, since your Promised Land is by definition difficult to reach!

So you must present evidence of your ability to deliver happily-ever-after. The best evidence is stories about people—told in their own voices—who say you helped them reach the Promised Land:

Evidence: Drift got us to the Promised Land

What if you’re so early that you’re not yet blessed with a stack of glowing success stories and testimonials? That must have been the case with Drift Email, so Cancel had his team dogfood it (use it themselves) and showed the results:

The results of “dogfooding” Drift Email

Drift’s Story Works Because They Tell It Everywhere—and Commit to Making It Come True

Whether I’m running a strategic messaging and positioning project for a CEO and his/her leadership team, or training larger sales and marketing groups, I use the framework above to help them craft a customer-centric narrative, as Cancel has. But that’s just the first step.

To achieve what Drift has, the CEO (or whatever the leader is called) must commit to telling the story over and over through all-hands talks, recruiting pitches, investor presentations, social channels — everywhere — and to making it come true.

That’s what Cancel does. His entire team seems maniacally focused on getting customers to the Promised Land — through sales conversations, customer interactions, marketing collateral, success stories, events, and podcasts. Even product: Once you pinpoint the Promised Land, you see monsters everywhere, each an opportunity for profitable new features. Among the questions Gerhardt received from the audience at OpenView: How do we schedule salespeople for live-chat duty? How do we qualify prospects if every one of them gets to chat with you? If Drift hasn’t already conjured up magic for taming those beasts, I’m guessing it’s on its way.

The Rise of Story-Led Differentiation

If I had any doubt that Cancel believes, as I do, that it’s more important now than ever to differentiate through a customer-centric strategic narrative, it was erased when, a few days later, he posted this on LinkedIn:

Cancel’s LinkedIn update

Some commenters disagreed, but most wanted to know what he meant by “act accordingly.” Cancel never responded, but his actions—and Drift’s success—scream his answer.

Product differentiation, by itself, has become indefensible because today’s competitors can copy your better, faster, cheaper features virtually instantly. Now, the only thing they can’t replicate is the trust that customers feel for you and your team. Ultimately, that’s born not of a self-centered mission statement like “We want to be best-in-class X” or “to disrupt Y,” but of a culture whose beating heart is a strategic story that casts your customer as the world-changing hero.

That’s the big, undeniable shift in the world that I spoke about as the final speaker at OpenView that day, several hours after Gerhardt left the stage.

https://medium.com/the-mission/the-best-sales-pitch-ive-seen-all-year-7fa92afaa248

HubSpot – The Hard Truth About Acquisition Costs (and How Your Customers Can Save You)

Trust in businesses is eroding, and so is patience. Marketing and sales are getting harder, and the math behind most companies’ acquisition strategy is simply unworkable.

The best point of leverage you have to combat these changes? An investment in customer service.


Consumers don’t trust businesses anymore

The way people interact with businesses has changed — again. The internet’s rise three decades ago did more to change the landscape of business than anyone could have imagined in the 1990s. And now it’s happening again.

Rapid spread of misinformation, concerns over how online businesses collect and use personal data, and a deluge of branded content all contribute to a fundamental shift — we just don’t trust businesses anymore.

1-trust-in-business

  • 81% trust their friends and family’s advice over advice from a business
  • 55% no longer trust the companies they buy from as much as they used to
  • 65% do not trust company press releases
  • 69% do not trust advertisements, and 71% do not trust sponsored ads on social networks

We used to trust salespeople, seek out company case studies, and ask companies to send us their customer references. But not anymore. Today, we trust friends, family, colleagues, and look to third-party review sites like Yelp, G2Crowd, and Glassdoor to help us choose the businesses we patronize, the software we buy, and even the places we work.

Consumers are also becoming more impatient, more demanding, and more independent.

2-consumers-are-impatient

In a survey of 1,000 consumers in the United States, United Kingdom, Australia, and Singapore, we found that 82% rated an immediate response as “important” or “very important” when they were looking to buy from a company, speak with a salesperson, or ask a question about a product or service. That number rises to 90% when looking for customer service support.

But what does “immediate” mean? Over half (59%) of buyers expect a response within 30 minutes when they want to learn more about a business’ product or service. That number rises to 72% when they’re looking for customer support and 75% when they want to speak with a salesperson.

Modern consumers are also unafraid to tell the world what they think. Nearly half (49%) reported sharing an experience they had with a company on social media, good or bad. While buyers are fairly evenly split between being more likely to share a good experience (49%) vs. a bad one (51%), every customer interaction you have is an opportunity to generate buzz — or risk public shaming.

The hard truth is that your customers need you a lot less than they used to. They learn from friends, not salespeople. They trust other customers, not marketers. They’d rather help themselves than call you.

Acquisition is getting harder

The erosion of consumer trust is a difficult issue for companies to grapple with on its own. But as if that wasn’t enough, the internet, which has already fundamentally transformed the traditional go-to-market strategy, is moving the goalposts again.

Let’s break this down into two functions: Marketing and sales.

Marketing is getting more expensive

We’ve taught inbound marketing to tens of thousands of companies and built software to help them execute it. Inbound marketing accelerated business growth through a repeatable formula: Create a website, create search-optimized content that points to gated content, then use prospects’ contact information to nurture them to a point of purchase.

This still works — but the market is experiencing four trends that, combined, have made it harder for growing businesses to compete with long-established, better-resourced companies.

Trend 1: Google is taking back its own real estate

Much of modern marketing is dependent on getting found online. Without the multimillion-dollar brand awareness and advertising budgets of consumer goods titans, the best way a growing business can compete is by creating content specific to its niche and optimizing it for search.

Google, the arbiter of online content discoverability, has made significant changes in the last few years that make it harder for marketers to run this model at scale without a financial investment.

First, through featured snippets and “People Also Ask” boxes, Google is reclaiming its own traffic.

A featured snippet is a snippet of text that Google serves on the search engine results page (SERP). You’ve likely been served a featured snippet when you were searching for a definition, or something that involved a step-by-step explanation.

Here’s an example of a featured snippet. It’s designed to pull information onto the SERP itself so there’s no need to click into the full recipe, hosted on another website.

3-featured-snippet

“People also ask” boxes are a different permutation of a featured snippet. These display questions related to your original search, live on the SERP, and are expandable with a click, like so:

4-people-also-ask

Each time you expand a “People Also Ask” section, Google adds 2-4 queries to the end of the list.

The combined effect of featured snippets and “People Also Ask” boxes? It depends. If your site is the first result and gets featured in the snippet, your traffic should increase. But if you don’t win the featured snippet, even if your post is ranked at position 1, it’s likely your overall traffic will decrease.

Second, Google’s also changed its search engine results page (SERP), moving search ads from a sidebar to the top four slots. Organic results fall much further down the page, and on a mobile device they disappear entirely.

Search won’t ever become purely pay-to-play. But in a world where screen real estate is increasingly dominated by sponsored content, marketers need to factor paid tactics into any organic strategy.

Voice search adds a third wrinkle to these shifts — the winner-take-all market. As the use of voice search has proliferated, it’s become more and more important to become the answer, as voice assistants only provide one result when asked a question.

On Google, featured snippets demonstrate this necessity. Amazon has also introduced “Amazon’s Choice” products, the first items suggested when consumers order items via voice assistant. It’s not hard to imagine a future where all Amazon’s Choice products are also Amazon-branded, manufactured, and distributed.

Trend 2: Social media sites are walled gardens

A decade ago, social media sites were promotion channels that served as a path between users and the poster’s site. The borders between different sites were fluid — people would discover content on Facebook, Twitter, and LinkedIn, then click through to content (usually hosted on another site).

Today, social media sites are walled gardens. Algorithms have been rewritten to favor onsite content created specifically for that platform. Facebook Messenger and evolving paid tools like Lead Ads are becoming table-stakes marketing channels, meaning businesses can’t just “be on Facebook” — they must recreate their marketing motion in a second place.

Facebook and LinkedIn have also deprioritized showing content that links offsite in favor of family and friends’ content (on Facebook) and onsite video and text (on LinkedIn). Not only does your branded content have a harder time competing with other brands, it also has to compete for attention with your prospects’ personal networks. Twitter’s investment in streaming video partnerships with entertainment and news networks is a nod to bringing consumers content they’d watch anyway in a platform-owned experience.

Sites like Amazon and Facebook are also becoming starting points for search. Over half of product searches (52%) begin on Amazon, while 48% of searches for content originate on Facebook — almost equivalent to Google’s reach (52%). And both Amazon and Facebook sell targeted advertising space.

5-top-content-channels

Why is any of this important?

These algorithm changes reflect these companies’ desire to keep the audiences they own on their own sites. As long as they can monetize their traffic, they have no incentive to move back to the old passthrough model.

Increasingly, Facebook is a destination. Twitter is a destination. LinkedIn is a destination. It’s no longer enough to create a piece of content for your own site, then schedule out promotion across channels that point back to that content.

Savvy marketers know their ideas must be channel-agnostic and channel-specific at the same time. To get the most mileage out of a piece of content, its core concept must perform well across multiple channels, but marketers have to do more upfront work to create separate versions of this content to best suit the channel on which it’s appearing.

Trend 3: It’s getting more expensive to do marketing

Search and social media titans have moved their goalposts to create a more competitive content discovery landscape. At the same time, barriers to entry on these platforms are getting higher in two ways:

1. Organic acquisition costs are rising.

According to ProfitWell, overall customer acquisition costs (CAC) have been steadily rising for B2B and B2C companies.

Over the last five years, overall CAC has risen almost 50% — and while paid CAC is still higher than content marketing (organic) CAC, organic costs are rising at a faster rate.

(source: ProfitWell)

2. Content marketers are commanding higher salaries.

It’s not only harder to get value from content, it’s getting more expensive to create it. ProfitWell’s study examined the rise of content marketers’ salaries by location — median salary has risen 24.9% in metropolitan areas and 18.9% for remote workers in the last five years.


(source: ProfitWell)

This rise is partially explained by changes in the content marketing profession. Google’s changing algorithm requires more specialized knowledge than ever. Not only are there specific optimization best practices to win featured snippets, Google’s current algorithmic model favors sites that are architected using the topic cluster model. Depending on the size of your site, this can be a massive undertaking — at HubSpot, it took us over six months to fully organize our blog content by this model.

Trend 4: GDPR

The following is not legal advice for your company to use in complying with EU data privacy laws like the GDPR. Instead, it provides background information to help you better understand the GDPR. This legal information is not the same as legal advice, where an attorney applies the law to your specific circumstances, so we insist that you consult an attorney if you’d like advice on your interpretation of this information or its accuracy. In a nutshell, you may not rely on this as legal advice, or as a recommendation of any particular legal understanding.

The General Data Protection Regulation recently passed by the European Union (EU) imposes new regulations on how businesses are allowed to obtain, store, manage, or process personal data of EU citizens.

At a high level, here’s what GDPR means for marketing teams:

  • Businesses collecting prospect data must explicitly state how that data will be used, and may only collect what’s necessary for that stated purpose
  • Businesses may only use that data for the specified purposes above, and ensure it’s stored according to GDPR provisions
  • Businesses may only keep personal data for as long as is necessary to fulfill the intended purpose of collection
  • EU citizens may request that businesses delete their personal data at any time, and businesses must comply

GDPR doesn’t go into effect until May 25, 2018, so it’s hard to predict the exact impact it will have on lead generation and collection. But we feel confident that GDPR is the first step toward more regulation of how businesses interact with consumers globally, further limiting your marketing team’s power.

In combination, these four trends mean that:

  • It’s harder to stand out in a crowded internet
  • It’s more expensive to find talent and produce content
  • Algorithmic changes will force investment in a multichannel marketing strategy

So it’s getting harder to get prospects to your site. But once you get them in the door, it should be standard operating procedure to getting those deals closed, right? Turns out … not quite.

Sales is getting harder, too

Every year, HubSpot surveys thousands of marketers and salespeople to identify the primary trends and challenges they face. And year after year, salespeople report that their jobs are becoming more difficult.

Consider this chart (a preview of State of Inbound 2018).

[Chart: Prospecting is getting harder (State of Inbound 2018)]

A whopping 40% of respondents reported that getting a response from prospects has gotten harder, while 29% and 28% respectively identify phone connections and prospecting as pain points.

Almost a third (31%) have to engage with multiple decision makers to move a single deal forward, and just as many find it difficult to close those deals.

Salespeople have to overcome an additional challenge on top of these sobering statistics: They aren’t trusted. Year over year, consumers report that salespeople are their least trusted source of information when making purchase decisions.

[Chart: Buyers’ most trusted sources of information]

And even when there’s no purchase decision being made, salespeople don’t have a great reputation. A 2016 HubSpot study found that sales and marketing are two of the least trusted professions, ranking only above stockbrokers, car salesmen, politicians, and lobbyists.

[Chart: Trust in marketing and sales compared with other professions]

For software companies, sales is becoming a more technical field. Buyers contact sales later in the process — more prefer a “try before you buy” approach through free trials or “freemium” versions of paid products. At these companies, the actual onboarding flow and user experience of the product are often more important than the sales team, as most customers become free users before ever speaking with a human.

In the same way that a big chunk of sales work was consumed by marketers 10 years ago, a big chunk of sales work today is being consumed by developers and growth marketers.

The implications are clear. Buyers no longer rely on salespeople to steward them through a purchasing process, preferring to do independent research or lean on their networks for an opinion. The inherent distrust of the profession is diluting salespeople’s influence in the purchasing process, making your acquisition strategy less and less reliable.

This is scary stuff. But there’s a bright side. Within the pain of change lies opportunity, and your business boasts a huge, overlooked source of growth you probably haven’t invested in at all — your customers.

Your customers are your best growth opportunity

When you’re growing a business, two numbers matter more than anything else:

  1. How much it costs to acquire a new customer (or “CAC”)
  2. That customer’s lifetime value — how much they’ll spend with you over their lifetime (or “LTV”)

For many years, most businesses (us included) focused on lowering CAC. Inbound marketing made this relatively easy, but the new rules of the internet mean this is getting harder. As Facebook, Amazon, and Google tighten their grips on content, the big opportunity for today’s companies is raising LTV.
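
If you like to see the math, here is a minimal sketch of how a subscription business might compute these two numbers. Every figure in it (the $400 CAC, $100 monthly revenue, 80% gross margin, and 3% monthly churn) is hypothetical, not HubSpot data.

    # A quick sketch of the two metrics above, using purely hypothetical numbers.

    def lifetime_value(monthly_revenue: float, gross_margin: float, monthly_churn: float) -> float:
        """Simple subscription LTV: margin-adjusted monthly revenue divided by monthly churn."""
        return monthly_revenue * gross_margin / monthly_churn

    cac = 400.0  # assumed cost to acquire one customer
    ltv = lifetime_value(monthly_revenue=100.0, gross_margin=0.80, monthly_churn=0.03)

    print(f"LTV: ${ltv:,.0f}")                # -> LTV: $2,667
    print(f"LTV:CAC ratio: {ltv / cac:.1f}")  # -> LTV:CAC ratio: 6.7

The lever matters: in this sketch, cutting churn raises LTV (and the ratio) without spending another acquisition dollar.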

If your customers are unhappy, you might be in trouble. But if you’ve invested in their experience, you’re well-poised to grow from their success.

When you have a base of successful customers who are willing and able to spread the good word about your business, you create a virtuous cycle.

Happy customers transform your go-to-market strategy from a funnel into a flywheel. By promoting your brand, they supplement your in-house acquisition efforts, so post-sale investments like customer service actually feed “top of the funnel” activities.

[Diagram: The flywheel model]

Buyers trust people over brands, and brands are getting crowded out of their traditional spaces, so why throw more money at the same go-to-market strategy when you could activate a group of people who already know and trust you?

Customers are a source of growth you already own, and a better and more trusted way for prospects to learn about your business. The happier your customers, the more willing they are to promote your brand, the faster your flywheel spins, and the faster your business grows. Not only is this the right thing to do by your customers, it’s the financially savvy thing to do for your business. It’s a win-win-win.

At some point, your acquisition math will break

More and more businesses are moving to a recurring-revenue, or subscription-based model. A recurring revenue model means customers pay a monthly fee for membership or access to products.

A recurring revenue model makes it easy to project expected revenue over a set period of time. Understanding how money moves in and out of the business makes headcount planning, expansion planning, and R&D efforts far easier.

Luckily it doesn’t matter if your company is subscription-based or not — a recurring revenue model contains lessons that apply to all businesses. Before we dive in, there are three core assumptions this model relies on.

[Chart: The three assumptions behind the recurring revenue model]

First, every business has a defined total addressable market, or TAM. Your TAM is the maximum potential of your market. It can be bound by geography, profession, age, and more — but in general, every product serves a finite market.

Second, every company aims to create repeat customers — not just subscription-based ones. All of the examples below are businesses that benefit from recurring revenue, even if it’s not formalized through contracts or subscription fees:

  • A beauty products store where customers typically purchase refills once every three months
  • A hotel chain that becomes the default choice for a frequent traveler
  • A neighborhood restaurant that’s cornered the market on Saturday date nights

Third, the key to growth is to retain the customers you already have, while expanding into the portion of your TAM that you haven’t won yet.

The easiest way to understand why it’s valuable to think about your business the way a subscription-based company does is to walk through the following hypothetical example — let’s call it Minilytics Inc.

[Chart: When acquisition math breaks]

Minilytics starts with a customer base of 10 people, and a churn rate of 30% — meaning three of their customers will not buy from them again. Each of Minilytics’ salespeople can sell five new customers per month. Because Minilytics’ customer base is so small, they only need to hire one salesperson to grow.

Fast forward several months, and Minilytics now has 50 customers, 15 of whom churn. To grow, Minilytics’ CEO has to bring on three more salespeople, who create additional overhead cost — their salaries.

You can probably see where this is going. At 100 or 1,000 customers, Minilytics’ CEO simply cannot hire enough salespeople to grow. The sheer cost of paying a staff to simply maintain a business that’s losing 30% of its customers each month will shutter most businesses on its own.
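
To make the Minilytics math concrete, here is a rough sketch of how the replacement burden scales. The 30% churn rate and five-deals-per-rep figures come from the example above; everything else is just arithmetic.

    # Rough sketch of the Minilytics example: with 30% monthly churn and reps who
    # each close 5 new customers a month, how many reps are needed just to stand still?

    import math

    CHURN_RATE = 0.30      # share of customers lost each month (from the example)
    DEALS_PER_REP = 5      # new customers each salesperson closes per month (from the example)

    for customers in (10, 50, 100, 1000):
        churned = customers * CHURN_RATE
        reps_needed = math.ceil(churned / DEALS_PER_REP)
        print(f"{customers:>5} customers -> {churned:>4.0f} lost/month -> {reps_needed} reps just to replace them")

At 1,000 customers, that’s 60 salespeople hired only to stay flat — before a single net-new customer is added.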

While Minilytics is struggling to plug the leaks in its business, something else is happening that will tank the company — even with an army of salespeople.

Remember TAM? While Minilytics’ CEO was hiring salespeople to replace churned customers, the company was also rapidly burning through its TAM. Generally, customers that churn do not come back — it’s hard enough to gain a consumer’s trust. To break trust through a poor experience, then try to rebuild it, is nearly impossible.

Even if Minilytics is able to afford a rapidly expanding sales team, it’s been rapidly churning through its TAM. Eventually, Minilytics will run through its entire total addressable market — and there will be no room to grow.

Luckily, Minilytics isn’t destined for this fate. Let’s rewind to that first month and explore what they could have done differently.

Building a good customer experience is the foundation of growth

Growing a sustainable company is all about leverage.

In plain English, if you can identify the parts of your business model that require great effort but provide little reward, then re-engineer them to cost you less effort or provide more reward, you’ve identified a point of leverage.

Most companies hunt for leverage in their go-to-market strategy, which usually involves pouring money into marketing or sales efforts. Customer service, customer success, customer support — or whatever you call it (at HubSpot, we have a separate team dedicated to each function, but we’re the exception) — has traditionally been viewed as a cost center, not a profit center.

It’s not hard to understand why. The ROI of sales and marketing investment is immediately tangible, while investment in customer service is a long game.

But most companies mistakenly try to optimize for fewer customer interactions, which just means issues don’t get addressed. Because they’re thinking short-term, it ends up costing them dearly in the long term. Too many businesses think once a sale is made and the check’s cleared, it’s on to acquire the next new customer.

That doesn’t work anymore. The hardest part of the customer lifecycle isn’t attracting their attention or closing the deal — it’s the journey that begins post-sale.

Once your customers are out in the wild with your product, they’re free to do, say, and share whatever they want about it. Unfortunately, that’s also when many companies drop the ball, providing little guidance and bare-bones or difficult-to-navigate customer support. This approach, quite frankly, makes no sense.

Think about it this way: You control every part of your marketing and sales experience. Your marketing team carefully crafts campaigns to reach the right audiences. Your sales team follows a playbook when prospecting, qualifying, and closing customers. Those processes were put in place for a reason — because they’re a set of repeatable, teachable activities you know lead to consistent acquisition outcomes.

Once a customer has your product in their hands, one of two things will happen: Either they will see success, or they will not. If they’re a new customer or first-time user, they might need help understanding how to use it, or want to learn from other people who have used your product, or want recommendations on how best to use it. Regardless of what roadblocks they run into, one thing is for sure: There’s no guarantee they’ll achieve what they want to achieve.

This is a gaping hole in your business. No one is better positioned to teach your customers about your product than you. No one has more data on what makes your customers successful than you. And no one stands to lose more from getting the customer experience wrong than you.

Let me say that one more time, because it’s important: Nobody has more skin in this game than you. In our survey, 80% of respondents said they’d stopped doing business with a company because of a poor customer experience. If your customers are dissatisfied, they can — and will — switch to another provider.

There are very few businesses in today’s market with no competitors. Once you lose a customer, you are most likely not getting them back. If you fail to make your customers successful, you will fail too.

Not convinced? Here’s another way to think about how to best allocate money between marketing, sales, and customer service.

Consider Minilytics and Biglytics, both with a CAC of $10 and a budget of $100.

[Chart: How customer service spending affects revenue]

Minilytics hasn’t invested in a well-staffed or well-trained customer service team, so their churn rate is 30%. Three customers churn, so they spend $30 replacing them. All of the remaining $70 is spent on acquisition, ending with 17 customers.

At Biglytics, things are different. Customer service isn’t the biggest part of the budget, but the team is paid well, trained well, and knowledgeable enough to coach customers who need help.

Because Biglytics has proactively spent $10 of its budget on customer service, their churn rate is much lower, at 10% (for the record, a churn rate of 10% is terrible — we just chose it to keep the numbers simple). Biglytics replaces their single churned customer for $10 and spends the remaining $80 on eight new customers, netting out at 18 customers.

A one-customer difference doesn’t seem significant. But that $10 Biglytics invested in their customer service team has been working in the background. Customers they brought on last year have seen success with the product because of great customer service, and have been talking Biglytics up to their friends, family, and colleagues. Through referrals and recommendations, Biglytics brings on five more customers without much extra work from the sales team.

This means Biglytics has not only brought on six more customers than Minilytics in the same timespan, it has also brought its average CAC down to $7.14.
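
If you want to check the math, here is a minimal sketch that reproduces the comparison. The helper function and its parameter names are invented for illustration; the budget, CAC, churn rates, and referral count come from the example above.

    # Reproducing the Minilytics vs. Biglytics comparison above.

    def end_of_period(customers, budget, cac, churn_rate, service_spend=0, referrals=0):
        churned = round(customers * churn_rate)
        new_customers = (budget - service_spend) // cac      # paid acquisitions, incl. replacing churn
        blended_cac = budget / (new_customers + referrals)   # total spend per customer added
        return customers - churned + new_customers + referrals, blended_cac

    print(end_of_period(10, 100, 10, churn_rate=0.30))                                 # -> (17, 10.0)
    print(end_of_period(10, 100, 10, churn_rate=0.10, service_spend=10, referrals=5))  # -> (23, ~7.14)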

Which company would you rather bet on? I’m guessing it’s Biglytics.

This is why investment in customer service is so powerful. Taking the long view enables you to grow more. It costs anywhere from 5 to 25 times more to acquire a new customer than to retain an existing one.

Prioritizing short-term growth at the expense of customer happiness is a surefire way to ensure you’ll be pouring money into the business just to stay in maintenance mode.

The 4 points of leverage in the customer experience

A good customer experience goes beyond hiring support staff — it starts pre-sale. Below are the four points of leverage we’d recommend you start working on today.

[Diagram: The four points of leverage in the customer experience]

1) Pre-sale: Understand customer goals

People buy products to fix a problem or improve their lives — to move from their current state closer to an ideal one.

Your job is to help them get there. Depending on what you sell, much of the work required to make your customers successful might not be done until post-sale through coaching and customer support. But if you understand the most common goals your customers have, you can reverse-engineer your acquisition strategy.

Emily Haahr, VP of global support and services at HubSpot, explains how this works:

“Your best customers stay with you because they get value from your products. Dissect your most successful customers and trace back to how they found you in the first place.

What marketing brought them to your site? What free tool or piece of content converted them to a lead? What type of onboarding did they receive and with who? What steps did they take in onboarding? And so on…

Once you have this information, you can identify and target the best fits for your product earlier, then proactively guide your customers down a path of success, instead of trying to save them once they’ve reached the point of no return.”

2) Pre-sale: Make it easier for your customers to buy

Consider whether your sales process could be easier to navigate. Today’s buyers don’t want to talk to a salesperson or pay money before they know how well a product works. Empower them to buy on their own terms.

If possible, take a page out of “freemium” companies’ playbook. Can you give away part of your product or service at scale so prospects can try before they buy? This way, they’ll qualify themselves and learn how to use your product before you ever have to lift a finger. Anecdotally, HubSpot has seen the most rapid growth in our acquisition through self-service purchases.

Also evaluate what parts of your marketing and sales process can be automated. The more you can take off your marketers’ and salespeople’s plates, the better — and you’ll be giving your buyers more control over their purchase at the same time.

This change is already happening. Think about Netflix, Spotify, and Uber. All three companies disrupted industries whose go-to-market models had friction built in.

[Diagram: How Netflix, Spotify, and Uber disrupted inconvenience]

People wanted to watch movies, but they didn’t want to pay late fees. Hello, Netflix.

People wanted to listen to music, but they didn’t want to pay for individual songs or albums. Hello, Spotify.

People wanted to be driven between Point A and Point B, but they didn’t want to wait for cabs in the rain. Hello, Uber.

Today’s biggest disruptors got to where they are by disrupting inconvenience. Hurdles are the enemy — remove as many as you can.

3) Post-sale: Invest in your customers’ success

Having scaled the HubSpot customer service team to over 250 employees, I’ve learned there are a few things you can do to make your customers happier (and your employees’ lives better):

Gather feedback — NPS® or otherwise

As early as possible, start surveying your customer base to understand how likely they are to recommend your product to a friend. You can also send out post-case surveys to customers whose issues your team has helped resolve.

At HubSpot, we track Net Promoter Score (or NPS) maniacally — it’s a company-level metric that we all work toward improving. This helps us:

  • Identify holes in our customer service early
  • Track customer sentiment over time — the trend of NPS is far more useful than one raw number
  • Quantify the value of customer happiness — when we changed a customer from a detractor to a promoter, that change increased LTV by 10-15%

Start small, with a post-support case NPS so you know whether immediate issues were resolved. You can build up to a quarterly or monthly NPS survey of your full customer base that focuses on their general experience with the product.
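
For reference, here is a minimal sketch of how an NPS figure is typically computed from 0-10 survey responses: the percentage of promoters (scoring 9-10) minus the percentage of detractors (scoring 0-6). The sample responses below are made up.

    # Minimal sketch of the standard NPS calculation.

    def net_promoter_score(scores):
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        return 100 * (promoters - detractors) / len(scores)

    responses = [10, 9, 9, 8, 7, 7, 6, 5, 10, 9]         # hypothetical post-case survey scores
    print(f"NPS: {net_promoter_score(responses):.0f}")   # -> NPS: 30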

Build up a lightweight knowledge base

Self-service is the name of the game. Identify your most commonly asked customer questions or encountered issues, then write up the answers into a simple FAQ page or the beginnings of a searchable knowledge base. This will enable your customers to search for their own solutions, instead of waiting on hold to get human support. As an added bonus, it will take work off your team’s plate.

4) Post-success: Activate happy customers into advocates

Once you have happy and successful customers, it’s time to put them to work for you.

Take a look at this chart again. Notice anything interesting?

[Chart: Buyers’ most trusted sources of information]

Buyers report that their two most trusted sources of information when making a purchase decision are word-of-mouth referrals and customer references. They’re listening to your customers, not you. Use your customers as a source of referrals, as social proof for your business (through testimonials, case studies, and references), and for brand amplification.

The key to successful customer advocacy is to not ask for anything too early. Don’t try to extract value from your customers until you’ve provided value — asking for five referrals a week after they’ve signed a contract is inappropriate. Your primary goal should always be to make your customers successful. After you’ve done that, you can ask for something in return.

Putting it all together: The inbound service framework

This methodology is a direct result of my eight years at HubSpot. We’ve made a lot of mistakes, learned even more, and built a repeatable playbook for leading your customers to success and eventually turning them into promoters.

We call it the inbound service framework.

[Diagram: The inbound service framework]

Step 1: Engage

Good customer service is the foundation for everything else — that’s why “Engage” is the first part of this framework. At this stage, your only concern should be understanding the breadth and depth of customer questions, and resolving them.

When you’re just getting started with the customer service function, cast a wide net. Engage with any customer, wherever, about whatever they want. Be on all the channels, try to solve whatever problems come your way, and help anyone who needs it.

Above all, make sure you’re easy to interact with. At HubSpot, we found that customers who submitted at least one support ticket a month retained at a rate around 10% higher than customers who didn’t, and were 9-10 times more likely to renew with HubSpot year over year. Not getting support tickets does not mean your product has no issues — every product has them. It means your customers are silently suffering.

As your team gets more sophisticated, you’ll be able to refine your approach, but this initial operating system helps you gather lots of data very quickly. At this stage, your objective should be to learn as much as possible about the following:

  • FAQs requiring customized guidance
  • FAQs that can be addressed with a canned response
  • Most confusing parts of your product/service
  • When support issues arise — do people require implementation help or do they encounter issues three months after purchase?
  • Commonalities of customers who need the most help
  • Your customers’ preferred support channels

This information will empower you to identify huge points of leverage in your customer service motion. For example, if you find that 30% of customer queries have quick, one-and-done answers (i.e. “How do I change my password?”, “How do I track my order?”, “What is your return policy?”), stand up a simple FAQ page to direct customers to. Boom — you’ve freed up 30% of your team’s time to work on more complicated, specific issues.

Empower your customer team to make noise about the problems they see, early and often, and turn their insights into action.

Are your sales and marketing teams overpromising to your customers? Your customer team will hear these complaints first. Trace the points of confusion back to their origin, and change your sales talk tracks and marketing collateral to reflect reality.

Is there something about your product that causes mass confusion? Your customer team will know which parts of your product/service are most difficult to navigate and why. Use this information to improve your product/service itself, eliminating these problems at the source.

Do certain types of customers run into issues frequently? Do they usually churn or do they just require a little extra love to get over the hump? If it’s the former, build an “anti-persona” your sales and marketing teams should avoid marketing and selling to. If it’s the latter, dive into that cohort of customers to understand whether the extra help they need is justified by their lifetime value.

As you learn more about your customers, you’ll also learn how to best optimize your own process. Identify the most effective support channels for your team and create a great experience for those, then establish a single queue to manage all inquiries.

In this stage, measure success by how fast you solve problems and post-case customer satisfaction. You can do the latter through a post-case NPS survey, which gives you instant feedback on how effective your customer team actually is.

Step 2: Guide

In the “Guide” stage, your goal is to turn your relationship with your customers from a reactive, transactional model into a proactive partnership. It’s time to level your customer team up from a supportive function into a customer success-driven organization. (The reactive part of your customer service organization will never go away. But as you grow, it should become part of a multifunctional group.)

What does it mean to be proactive?

First, it means anticipating common issues and challenges and building resources to prevent them. This includes things like a knowledge base or FAQ, as well as re-engineering parts of your offering to be more user-friendly and intuitive.

Second, it means partnering with your customers to help them get to their stated goals. Guide them through key milestones, provide tasks to keep them on track, and connect them with peers so they can crowdsource answers if necessary. Create frameworks and tutorials where you can.

It’s better to be proactive than reactive for a few reasons:

  1. It saves time, and time is money. Imagine all the hours you’ll save your support team if you can get repetitive queries off their plates. That’s time they should spend on complex, higher-level issues that could reveal even larger points of leverage in your business, and so on.
  2. It makes your customers happier. Even if you’re able to resolve issues at a 100% success rate with 100% satisfaction, you’ve still built an experience filled with roadblocks. Aim to build a world where you’ve anticipated your customers’ challenges and solved them at the source.
  3. It builds a trusting relationship. Customers may not see all the issues you proactively guard against (after all, a problem prevented is an invisible type of value), but they’ll recognize a relatively issue-free customer experience. Buyers are more likely to trust the company that rarely lets them down, over a brand that’s constantly scrambling to fix the next issue.
  4. It’s a competitive advantage. Proactive guidance takes the wealth of knowledge you have about what makes customers successful and puts it into your customers’ brains. You know what your best customers have in common and the mistakes your least successful ones make. That knowledge is a core part of what you’re selling, even if it’s positioned as customer service.

As you move from reactive support into proactive guidance, you become a teacher, not a vendor. Other companies might be able to build a product as good as yours, but it’ll be difficult for them to replicate the trust you have with your customers.

Guidance is an iterative process. As in the early days of your customer organization, collect as much data as you can on the customer lifecycle, and continuously update your guidance to reflect current best practices. Pay attention to the formats and channels that work best, which issues have the highest impact for your customers once solved, and update your process accordingly.

Step 3: Grow

Happy customers want to support the businesses they love. 90% of consumers say they’re more likely to purchase more, and 93% are more likely to be repeat customers, at companies with excellent customer service.

At the same time, 77% of consumers shared positive experiences with their friends or on social media and review sites in the last year.

Your happy buyers want to help you. That’s what the “Grow” stage is all about: Turning that desire into action.

There are three ways to activate your customer base into promoters, says Laurie Aquilante, HubSpot’s director of customer marketing: Social proof, brand amplification, and referrals. Let’s review each play.

1) Social proof

Buyers are more likely to trust and do business with companies their networks trust. The following are all different ways your customers can create social proof for your product:

  • Sharing positive experiences on social media or review sites
  • Providing referrals (more on this later)
  • Testimonials/case studies
  • Customer references in the sales process

Activating social proof is all about keeping a close watch on your customers. It’s also not a pick-one-and-done kind of thing, Aquilante says. Case studies and customer references, for example, are useful at different points in your sales motion. While you could use the same customer for both, it’s probably more useful to have a stable of customers on hand that can speak to a diverse range of experiences.

Encourage social proof by proactively reaching out to your most satisfied customers, who will likely be excited to help you. You can also provide incentives for sharing content or writing online reviews.

2) Brand amplification

When someone shares your content on social media, contributes content for a campaign, or otherwise interacts with what you publish, they’re amplifying your brand.

“To make this happen, you have to provide a ‘what’s in it for me?’,” Aquilante says. “Either create such engaging content and be so remarkable that your customers can’t help but amplify your message, or provide an incentive, like points toward future rewards or something more transactional, like getting a gift card for sharing content a certain amount of times.”

3) Referrals

Referrals have the most immediate monetary value for your business. B2C companies are masters at the referral game, usually awarding the referrer credit to their account or even monetary rewards.

B2B referrals are a bit trickier. B2B purchases tend to be more expensive than B2C purchases, and usually involve multiple stakeholders and a longer sales process. So your customer likely has to do some selling upfront to feel comfortable sending a contact’s information to you. It’s not impossible to get this right, but it’s crucial to offer your customer something valuable enough to incentivize them to do this work on your behalf.

“Get in your customer’s head, figure out what matters to them, and make sure that you’ve got a good exchange of value,” Aquilante says. “They’re offering you something of very high value if they’re referring your business, and you’ve got to offer something in return.”

4) Upsells and cross-sells

A note from our lawyers: These results contain estimates by HubSpot and are intended for informational purposes only. As past performance does not guarantee future results, the estimates provided in this report may have no bearing on, and may not be indicative of, any returns that may be realized through participation in HubSpot’s offerings.

Aside from promoting your brand and bringing you business, your customers are themselves a source of net new revenue, if you have multiple products or services. Your customer team is your not-so-secret weapon in unlocking this revenue.

In late 2017, we piloted the concept of “Support-Qualified Leads” at HubSpot. Our sales team owns selling new business and upselling/cross-selling those accounts. But our support team is the one actually speaking with customers day in and day out, so it’s intuitive that they have a better understanding of when customers reach a point where their needs grow past the products they currently have. When a customer has a new business need and the budget to expand their offerings with HubSpot, a customer success representative passes the lead to the appropriate salesperson, who takes over the sales conversation.

The Support-Qualified Lead model is powerful because it closes the loop of communication between sales and support, and it works — in its first month, the pilot generated almost $20,000 in annual recurring revenue just from cross-sells and upsells. Since we’ve rolled this out, we’ve generated over $470,000 in annual recurring revenue just from this model — nothing to sneeze at.

Growth has always been hard. If you’re just starting out, it’s hard to imagine ever competing with the top companies in your industry.

Customer service is the key to this equation. If you provide an excellent customer experience and can create a community of people who are willing to promote your business on your behalf, you’re laying the groundwork for sustainable, long term growth. And in a world where acquisition is hard and getting harder, who wouldn’t want that?

https://research.hubspot.com/customer-acquisition-study

Ecology – Futurecasting ecological research: the rise of technoecology

Abstract

Increasingly complex research questions and global challenges (e.g., climate change and biodiversity loss) are driving rapid development, refinement, and uses of technology in ecology. This trend is spawning a distinct sub‐discipline, here termed “technoecology.” We highlight recent ground‐breaking and transformative technological advances for studying species and environments: bio‐batteries, low‐power and long‐range telemetry, the Internet of things, swarm theory, 3D printing, mapping molecular movement, and low‐power computers. These technologies have the potential to revolutionize ecology by providing “next‐generation” ecological data, particularly when integrated with each other, and in doing so could be applied to address a diverse range of requirements (e.g., pest and wildlife management, informing environmental policy and decision making). Critical to technoecology’s rate of advancement and uptake by ecologists and environmental managers will be fostering increased interdisciplinary collaboration. Ideally, such partnerships will span the conception, implementation, and enhancement phases of ideas, bridging the university, public, and private sectors.

Introduction

Ecosystems are complex and dynamic, and the relationships among their many components are often difficult to measure (Bolliger et al. 2005, Ascough et al. 2008). Ecologists often rely on technology to quantify ecological phenomena (Keller et al. 2008). Technological advancements have often been the catalyst for enhanced understanding of ecosystem function and dynamics (Fig. 1, Table 1), which in turn aids environmental management. For example, the inception of VHF telemetry to track animals in the 1960s allowed ecologists to remotely monitor the physiology, movement, resource selection, and demographics of wild animals for the first time (Tester et al. 1964). However, advancements in GPS and satellite communications technology have largely supplanted most uses for VHF tracking. Compared with VHF, GPS can log locations and offers higher recording frequency, greater accuracy and precision, and less researcher interference with the animals, leading to an enhanced, more detailed understanding of species habitat use and interactions (Rodgers et al. 1996). This has assisted in species management by not only highlighting important areas to protect (Pendoley et al. 2014), but also identifying key resources such as individual plants instead of general areas of vegetation.

Fig. 1. Illustrative timeline of new technologies in ecology and environmental science (see Table 1 for technology descriptions).
Table 1. Timeline of new technologies in ecology and environmental science, to accompany information in Fig. 1
Technology: Description

Past
Sonar: Sonar first used to locate and record schools of fish
Automated sensors: Automated sensors specifically used to measure and log environmental variables
Camera traps: Camera traps first implemented to record wildlife presence and behavior
Sidescan sonar: Sidescan sonar is used to efficiently create an image of large areas of the sea floor
Mainframe computers: Computers able to undertake ecological statistical analysis of large datasets
VHF tracking: Radio tracking, allowing ecologists to remotely monitor wild animals
Landsat imagery: The first space-based, land-remote sensing data
Sanger sequencing: The first method to sequence DNA, based on the selective incorporation of chain-terminating dideoxynucleotides by DNA polymerase during in vitro DNA replication
LiDAR: Remote sensors that measure distance by illuminating a target with a laser and analyzing the reflected light
Multispectral Landsat: Satellite imagery with different wavelength bands along the spectrum, allowing for measurements through water and vegetation
Thermal bio-loggers: Surgically implanted devices to measure animal body temperature
GPS tracking: Satellite tracking of wildlife with higher recording frequency, greater accuracy and precision, and less researcher interference than VHF
Thematic Landsat: A whisk-broom scanner operating across seven wavelengths, able to measure global warming and climate change
Infrared camera traps: Able to sense animal movement in the dark and take images without a visible flash
Multibeam sonar: Transmitting broad, fan-shaped acoustic pulses to establish a full water-column profile
Video traps: Video instead of still imagery, able to determine animal behavior as well as identification

Present
Accelerometers: Measure animal movement (acceleration) irrespective of satellite reception (geographic position)
3D LiDAR: Accurate measurement of 3D ecosystem structure
Autonomous vehicles: Unmanned sensor platforms to collect ecological data automatically and remotely, including in terrain that is difficult and/or dangerous for humans to access
3D tracking: The use of inertial measurement unit devices in conjunction with GPS data to create real-time animal movement tracks
ICARUS: The International Cooperation for Animal Research Using Space (ICARUS) Initiative aims to observe global migratory movements of small animals through a satellite system
Next-gen sequencing: Millions of fragments of DNA from a single sample can be sequenced in unison
Long-range, low-power telemetry: Low-voltage, low-amperage transfer of data over several kilometers

Future
Internet of things: A network of devices that can communicate with one another, transferring information and processing data
Low-power computers: Small computers with the ability to connect an array of sensors and, in some cases, run algorithms and statistical analyses
Swarm theory: The autonomous but coordinated use of multiple unmanned sensor platforms to complete ecological surveys or tasks without human intervention
3D printing: The construction of custom equipment and of animal analogues for behavioral studies
Mapping molecular movement: Cameras that can display images at a sub-cellular level without the need for electron microscopes
Biotic gaming: Human players control a paramecium as in a video game, which could aid the understanding of microorganism behavior
Bio-batteries: Electro-biochemical devices that can run on compounds such as starch, allowing sensors and devices to be powered for extended periods in remote locations where more traditional energy sources such as solar power may be unreliable (e.g., rainforests)
Kinetic batteries: Batteries charged via movement that are able to power microcomputers

Ecological advances to date are driven by technology primarily relating to enhanced data capture. Expanding technologies have focused on the collection of high spatial and temporal resolution information. For example, small, unmanned aircraft can currently map landscapes with sub‐centimeter resolution (Anderson and Gaston 2013), while temperature, humidity, and light sensors can be densely deployed (hundreds per hectare) to record micro‐climatic variations (Keller et al. 2008). Such advances in data acquisition technologies have delivered knowledge of the natural environment unthinkable just a decade ago. But what does the future hold?

Here, we argue that ecology could be on the precipice of a revolution in data acquisition. It will occur within three concepts: supersize (the expansion of current practice), step‐change (the ability to use technology to address questions we previously could not), and radical change (exploring questions we could not previously imagine). Technologies, both current and emerging, have the capacity to spawn this “next‐generation” ecological data that, if harnessed effectively, will transform our understanding of the ecological world (Snaddon et al. 2013). What we term “technoecology” is the hardware side of “big data” (Howe et al. 2008), focused on the employment of cutting edge physical technology to acquire new volumes and forms of ecological data. Such data can help address complex and pressing global issues of ecological and conservation concern (Pimm et al. 2015). However, the pace of this revolution will be determined in part by how quickly ecologists embrace these technologies. The purpose of this article is to bring to the attention of ecologists some examples of current, emerging, and conceptual technologies that will be at the forefront of this revolution, in order to hasten the uptake of these more recent developments in technoecology.

Technoecology’s Application and Potential

Bio‐loggers: recording the movement of animals

Bio‐logging technology is not new to ecology, incorporating sensors such as heart rate loggers, as well as VHF and GPS technology. Instead, bio‐logging technology is being supersized, expanding the current practices with new technology. Accelerometers are being used to record fine‐scale animal movement in real time, something which was only possible previously via direct observation (Shamoun‐Baranes et al. 2012). Using accelerometry, we can calculate an animal’s rate of energy expenditure (Wilson et al. 2006), allowing ecologists to attribute a “cost” to different activities and in relation to environmental variation.

Bio-loggers are also causing a step-change in the questions we can explore in animal movement. Real-time three-dimensional animal movement tracks can now be recreated from data collected by inertial measurement units, which incorporate accelerometers, gyroscopes, magnetometers, and barometers. This technology has been used to examine the movements of cryptic animals such as birds (Aldoumani et al. 2016) and whales (Lopez et al. 2016) to determine both how they move and how they respond to external stimuli. The incorporation of GPS technology would allow animal movement to be placed spatially within 3D-rendered environments and allow for the examination of how individuals respond to each other, creating a radical change in the discipline of animal movement. Over the last 50 yr, we have gone from simply locating animals to reconstructing behavioral states and estimating energy expenditure by using these technological advancements.

Bio‐batteries: plugging‐in to trees to run field equipment

Bio‐batteries are new generation fuel cells that will supersize both the volume and the scale of data that can be collected. Bio‐batteries convert chemical energy into electricity using low‐cost biocatalyst enzymes. Also known as enzymatic fuel cells, electro‐biochemical devices can run on compounds such as starch in plants, which is the most widely used energy‐storage compound in nature (Zhu et al. 2014). While still in early development, bio‐batteries have huge potential for research. Enzymatic fuel cells containing a 15% (wt/v) maltodextrin solution have an energy‐storage density of 596 Ah/kg, which is one order of magnitude higher than that of lithium‐ion batteries. Imagine future ecologists “plugging‐in” to trees, receiving continuous electricity supply to run long‐term sampling and monitoring equipment such as temperature probes and humidity sensors. Further, the capabilities of bio‐batteries combined with low‐power radio communication devices (see Next‐generation Ecology) could revolutionize field‐based data acquisition.
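
As a back-of-the-envelope illustration of what such an energy density could mean in the field, the sketch below compares hypothetical sensor runtimes per kilogram of battery. The 596 Ah/kg figure is quoted above; the lithium-ion figure and the sensor’s average current draw are illustrative assumptions, not measured values.

    # Back-of-the-envelope comparison of sensor runtime per kilogram of battery.

    BIO_BATTERY_AH_PER_KG = 596     # enzymatic fuel cell figure quoted above
    LITHIUM_ION_AH_PER_KG = 60      # roughly one order of magnitude lower (assumed)
    SENSOR_DRAW_MA = 5              # hypothetical duty-cycled logger, average draw

    def days_of_runtime(capacity_ah, draw_ma):
        return capacity_ah / (draw_ma / 1000) / 24

    print(f"Bio-battery, 1 kg: {days_of_runtime(BIO_BATTERY_AH_PER_KG, SENSOR_DRAW_MA):,.0f} days")
    print(f"Lithium-ion, 1 kg: {days_of_runtime(LITHIUM_ION_AH_PER_KG, SENSOR_DRAW_MA):,.0f} days")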

Bio-batteries could greatly aid current technoecological projects such as large-scale environmental monitoring. For example, Cama et al. (2013) are undertaking permanent monitoring of the Napo River in the Amazon using data transfer over the Wi-Fi network already in place. The Wi-Fi towers are powered via solar panels, but within the dense rainforest canopy there is not enough light to run electronics on solar power. If sensor arrays within the rainforest could be powered continuously via the trees, the project could run without having to avoid regions that lack sunlight or sending staff out to regularly replace batteries.

Low‐power, long‐range telemetry: transmitting data from the field to the laboratory

Ecological data collection often occurs in locations that are difficult or hazardous to traverse, meaning that practical methods of data retrieval often influence sensor placement and limit the data collected. But what if the data could be sent from remote sensors back to a central location for easy collection? Ecological projects such as monitoring the Amazon environment already do so using Wi-Fi towers (Cama et al. 2013), but Wi-Fi transmission range is limited (approximately 30 m). This range can be extended with larger antennas and increased transmission power, but doing so consumes much more electricity. Other technologies are capable of transmitting data via either satellite (Lidgard et al. 2014) or the cell phone network (Sundell et al. 2006), but are likewise limited to locations with coverage or are prohibitively expensive. Low-power networks offer great promise for data transfer over large distances (kilometers), including the increasingly popular LoRa system (Talla et al. 2017). Long-range telemetry is already being used commercially for reading water meters, where water usage data are sent to hubs that transmit hourly, and a single battery can last over a decade (e.g., Taggle Systems; http://www.taggle.com.au/). Integrating such technology into ecological research would allow sensor deployment in remote areas where other communication methods are infeasible, for example, dense forests, high mountain ranges, swamps, and deep canyons. Such devices could also be used to transmit information to a base station, resulting in faster data collection and more convenient data retrieval.
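
A decade-long battery life is plausible when the radio spends almost all of its time asleep. The sketch below shows the underlying duty-cycle arithmetic; all current, timing, and capacity figures are chosen purely for illustration and are not the specifications of any particular LoRa or water-metering device.

    # Duty-cycle arithmetic behind multi-year battery life for a long-range,
    # low-power transmitter. All figures are illustrative assumptions.

    BATTERY_MAH = 3000          # single primary cell (assumed)
    SLEEP_CURRENT_MA = 0.005    # deep-sleep draw (assumed)
    TX_CURRENT_MA = 40          # draw while transmitting (assumed)
    TX_SECONDS_PER_HOUR = 2     # one short uplink per hour (assumed)

    # Time-weighted average current over one hour
    avg_ma = (TX_CURRENT_MA * TX_SECONDS_PER_HOUR
              + SLEEP_CURRENT_MA * (3600 - TX_SECONDS_PER_HOUR)) / 3600

    years = BATTERY_MAH / avg_ma / 24 / 365
    print(f"Average draw: {avg_ma:.3f} mA; estimated battery life: {years:.1f} years")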

The Internet of things: creating “smart” environments

It is now possible to wirelessly connect devices to one another so they can share information automatically. This is known as the Internet of things (IoT), in which a variety of “things” or objects can interact and co-operate with their neighbors (Gershenfeld et al. 2004). Each device is still capable of acting independently, but it can also communicate with others to gain additional information. Expanding on the use of low-power, long-range telemetry, IoT could be used to set up peer-to-peer networking that transfers data from one device to the next until reaching a location with Internet access or cell coverage, where more traditional means of transmission are possible. An early attempt at such peer-to-peer transfer in ecology is ZebraNet: a system of GPS devices attached to animals (zebras) that transfer each individual’s GPS data between one another when in close proximity (Juang et al. 2002). Using this design, retrieving a device attached to one animal also provides the data from all other animals.
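
The logic behind such delay-tolerant, peer-to-peer collection can be sketched very simply. The following is an illustrative data structure only, not the actual ZebraNet protocol, and all names in it are invented.

    # Each collar stores GPS fixes (its own and those received from peers); when two
    # animals meet, the collars exchange whatever the other is missing.

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Fix:
        animal_id: str
        timestamp: int
        lat: float
        lon: float

    @dataclass
    class Collar:
        animal_id: str
        store: dict = field(default_factory=dict)   # (animal_id, timestamp) -> Fix

        def log_fix(self, fix):
            self.store[(fix.animal_id, fix.timestamp)] = fix

        def sync_with(self, other):
            """Called when two collars come into radio range: merge both stores."""
            self.store.update(other.store)
            other.store.update(self.store)

    a, b = Collar("zebra-A"), Collar("zebra-B")
    a.log_fix(Fix("zebra-A", 1, -1.30, 36.80))
    b.log_fix(Fix("zebra-B", 1, -1.31, 36.82))
    a.sync_with(b)
    print(len(a.store), len(b.store))   # -> 2 2: retrieving either collar yields both tracks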

The applications of IoT go beyond the simple transfer of data. IoT technology effectively creates “smart environments,” in which hundreds of networked devices, such as temperature sensors, wildlife camera traps, and acoustic monitors, are connected wirelessly and are able to transmit data to central nodes. Using bio-batteries, such devices could run “indefinitely” (not literally, as components will eventually fail due to wear and tear in field conditions, which can be severe in some environments, e.g., very high/low temperatures, humidity, and/or salinity). From there, fully automated digital asset management systems can query and analyze the data. Automated processes are increasingly pertinent as long-term, continuously recording sensor networks multiply (e.g., the National Ecological Observatory Network [NEON]). NEON is composed of multiple sensors measuring environmental parameters such as the concentration of CO2 and ozone, or soil moisture, all recording remotely and continuously at high temporal resolution, creating ever-expanding environmental datasets (Keller et al. 2008). Making best use of such data requires analysis at high temporal resolution, which is not feasible to do manually but is possible with machine learning algorithms and other advanced statistical approaches.

Swarm theory for faster and safer data acquisition, and dynamic ecological survey

Swarm theory is a prime example of the complementary nature of technology and ecology. In essence, swarm theory refers to individuals self-organizing to work collectively to accomplish goals. Swarm theory relates to both natural and artificial life, and mathematicians have studied the organization of ant colonies (Dorigo et al. 1999) and the flocking behavior of birds and insects (Li et al. 2013) in an attempt to understand this phenomenon. Swarm theory is already being used with unmanned autonomous vehicles for first response to disasters, investigating potentially dangerous situations, search and rescue, and military purposes (http://bit.ly/1Pel9Qz). Exciting applications of swarm theory include faster data acquisition and communication over large geographic scales and dynamic ecological survey.

Swarm theory is directly applicable to the collection of remotely sensed data by multiple unmanned vehicles, whether aerial, water surface, or underwater. Unmanned aerial vehicles (UAVs) are already being used for landscape mapping and wildlife identification (Anderson and Gaston 2013, Humle et al. 2014, Lucieer et al. 2014), and the data collected can be processed at high resolution (<10 cm) to characterize the variability in terrain and vegetation density (Friedman et al. 2013, Lucieer et al. 2014). So far, however, such vehicles have been used individually. By employing swarm theory, data collection could be completed faster by using several vehicles working simultaneously and collaboratively. Moreover, if vehicles were enabled to communicate with each other, data transfer would also be improved. Given the comparatively low costs of unmanned versus manned vehicles, such implementation would dramatically increase the efficiency of data collection while also reducing safety issues. This efficiency could, in turn, allow for more repeated and systematic surveys, improving the statistical power of, and inference from, time-series analyses.

Even more exciting than swarms simply being used to advance our capabilities in data acquisition is the prospect of deploying them as more active tools for quantifying biotic interactions. The ability of a swarm to locate and then track individuals of different species in real time could revolutionize our understanding of key ecological phenomena such as dispersal, animal migration, competition, and predation. Swarms could initially sweep large areas; then, as individual drones detect the species or individuals of interest, they could inform other drones, refining search areas based on this geographic information and detecting and tracking the behavior of additional animals in real time. An increased capacity to detect and measure species interactions, and to assess marine and terrestrial landscape change, would enhance our understanding of fundamental ecological and geological processes, ultimately assisting to further ecological theory and improve biodiversity conservation (Williams et al. 2012).

This technology will however require careful consideration of the societal and legislative context, as is the case for UAVs (see Allan et al. 2015).

3D printing for unique and precise equipment

While 3D printing has existed since the 1980s, its use in ecology has primarily been for teaching aids. For example, journals such as PeerJ offer the ability to download blueprints of 3D images (http://bit.ly/1MBPn1d). However, 3D printing has many more applications. These include (1) building specialized equipment cheaply and relatively easily by using the design tools included with many 3D printers or by scanning and modifying products that already exist (Rangel et al. 2013); (2) building small organic molecules, mimicking the production of molecules in nature (Li et al. 2015), with the potential even to create small organic molecules in the laboratory (Service 2015); and (3) printing realistic high-definition, full-color designs in a number of different materials (http://www.3dsystems.com/). Using such models, ecologists are able to print specialized platforms for sensor equipment (e.g., GPS collars) that fit animals better. The use of 3D printing could go a step further, however, and create true-color, structurally complex analogues of vegetation or other animals for behavioral studies. For example, Dyer et al. (2006) explored whether bee attraction was based on color alone or may also be associated with flower temperature. Flowers of intricate and exact shape and color, with heating elements embedded, could be printed more easily and realistically than they could be built by hand.

Mapping molecular movement for non‐destructive analysis of nature

New developments in optical resolution and image processing have led to cameras that can display images at a sub-cellular level without the need for electron microscopes. Originally developed to scan silicon wafers for defects, this new technology is now being used to examine molecular transport and the exchange between muscle, cartilage, and bone in living tissue (http://bit.ly/1DlIYkD). The development also highlights what can be achieved by cross-disciplinary and cross-institutional collaboration, in this case between the optical and industrial measurement manufacturer Zeiss, Google, the Cleveland Clinic, and Brown, Stanford, and New South Wales universities. Together, they have also created a “zoom-able” model that can go from the centimeter level down to nanometer-sized molecules, creating terabytes of data.

This technology’s ecological and environmental applications are substantial, paramount of which is the non-destructive nature of the analysis, allowing for time-series analyses of molecular transfer. For instance, Clemens et al. (2002) examined the hyper-accumulation of toxic metals by specific plant species. Understanding how some plants can absorb toxic metals has promise for soil decontamination, but as stated by Clemens et al. (2002), “molecularly, the factors governing differential metal accumulation and storage are unknown.” The ability not only to observe the molecular transport of heavy metals in plant tissue, but also to change the observational scale, will greatly advance our knowledge of nutrient uptake and storage in plants.

Low‐power computers for automated data analysis

Low-power microcomputers and microcontrollers exist in products such as Raspberry Pi, Arduino, and Beagleboard. In ecology, low-power computers have been used to build custom equipment such as underwater stereo-camera traps, automated weather stations, and GPS tracking collars (Williams et al. 2014, Greenville and Emery 2016). Notably, following a surge in hobbyists embracing the adaptability of low-cost, low-power, high-performance microcontrollers, large companies such as Intel have also joined the marketplace with microcontrollers like Edison (http://intel.ly/1yekvNP). Edison is low-power but has a dual-core CPU, Wi-Fi, Bluetooth, data storage, an inbuilt real-time clock, and the ability to connect a plethora of sensors, from GPS receivers to infrared cameras (http://bit.ly/1qHdor2; Intel 2014). Cell phones and wearable devices are already integrating this technology. As an example, the Samsung Galaxy S8 cellular phone contains an eight-core processor with 4GB of RAM, cameras, GPS, accelerometers, a heart rate monitor, and fingerprint, proximity, and pressure sensors (http://bit.ly/2ni8KRD). Using microcontrollers such as these, it is possible to run high-level algorithms and statistical analyses, such as image recognition and machine learning, on the device itself. Not all microcontrollers are capable of running such complex data processes, and other options (e.g., microprocessors) will be required instead; this situation is likely to improve, however, with further development of the technology.

The ability to process data onboard has huge potential for technology’s ecological application, such as remote camera traps and acoustic sensors. By running pattern-recognition algorithms in the equipment itself, species identification from either images or calls could be achieved both automatically and immediately. This information could be processed, records tabulated, and a decision taken as to whether to conserve, delete, or flag the recorded data for later manual observation, or even transmit the data back to the laboratory. This removes the need to store huge volumes of raw photographs or audio files; instead, only tabulated summary results need be kept. The equipment could be programmed to specifically keep photographs and acoustics of species of interest (e.g., rare or invasive species, or species that cannot be identified with high certainty) while deleting those that are not, and/or to save any data with a recognition confidence below a designated threshold for manual inspection. In terms of direct application to conservation, it is possible that this technology would allow intelligent poison bait stations to be built. Poison baiting is widely used to control pest species (Buckmaster et al. 2014), but the consumption of baits by non-target species can have unintended consequences ranging from incapacitation to death, limiting the efficacy of the control program (Doherty and Ritchie 2017). Using real-time image recognition software built into custom-designed bait dispensers, we could program poison bait release only when pest animals are present (e.g., grooming traps, https://bit.ly/2IKAYAD), reducing harm to non-target species.
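
The on-board triage described here reduces to a simple decision rule. The sketch below illustrates one possible form of it; the species list, confidence threshold, and the classify() placeholder are all hypothetical, not part of any existing device.

    # Sketch of an on-board triage rule for a camera trap or acoustic sensor.

    SPECIES_OF_INTEREST = {"invasive_fox", "endangered_quoll"}   # e.g., pest or rare species (assumed)
    CONFIDENCE_THRESHOLD = 0.80                                  # assumed cut-off for automatic decisions

    def classify(recording):
        """Placeholder for an on-board recognition model (e.g., a small neural network)."""
        return "invasive_fox", 0.92   # (label, confidence), hypothetical output

    def triage(recording):
        label, confidence = classify(recording)
        if confidence < CONFIDENCE_THRESHOLD:
            return "flag"                  # keep the raw file for later manual inspection
        if label in SPECIES_OF_INTEREST:
            return "keep"                  # tabulate the record and retain or transmit the file
        return "tabulate_and_delete"       # count the detection, discard the raw file

    print(triage(recording=None))          # -> keep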

Technological Developments Flowing into Ecology

The technological developments from outside ecology that flow into the discipline offer great potential for theoretical advances and environmental applications. Two examples include personal satellites and neural interface research.

Personal satellites are an upcoming technology in the world of ecology. Like UAVs before them, miniature satellites promise transformative data gathering and transmission opportunities. Projects such as CubeSat were created by California Polytechnic State University, San Luis Obispo, and Stanford University’s Space Systems Development Lab in 1999, and focused on affordable access to space. These satellites are designed to achieve low Earth orbit (LEO), approximately 125 to 500 km above the Earth. Measuring only 10 cm per side, the CubeSats can house sensors and communications arrays that enable operators to study the Earth from space, as well as space around the Earth. Open‐source development kits are already available (http://www.cubesatkit.com/). However, NASA estimates it currently costs approximately US $10,000 to launch ~0.5 kg of payload into LEO (NASA 2017), meaning it is still cost prohibitive, and the capabilities of such satellites are currently limited. Given the rapid expansion of commercial space missions and pace of evolving technology, however, private satellites to examine ecosystem function and dynamics may not be too far over the horizon.

Neural interface research aims to create a link between the nervous system and the outside world by stimulating or recording from neural tissue (Hatsopoulos and Donoghue 2009). Currently, this technology is focused on biomedical science, recording neural signals to decipher movement intentions with the aim of assisting paralyzed people. Recent experiments have surgically implanted a thumbtack-sized array of electrodes able to record the electrical activity of neurons in the brain. Using wireless technology, scientists were able to link epidural electrical stimulation with leg motor cortex activity in real time to alleviate gait deficits after a spinal cord injury in rhesus monkeys (Macaca mulatta; Capogrosso et al. 2016). Restoration of volitional movement may at first appear limited in its relevance to ecology, but the recording and analysis of neural activity is not. To restore volitional movement, mathematical algorithms are used to interpret neural coding and brain behavior to determine the intent to move. This technology may make it possible in the future to record and understand how animals make decisions, based on neural activity and as affected by their surrounding environment. Such information could greatly advance the field of movement ecology and related theory (e.g., species niches, dispersal, meta-populations, trophic interactions) and aid improved conservation planning for species (e.g., reserve design) based on how they perceive their environment and make decisions.

Next‐generation Ecology

The technologies listed above clearly provide exciting opportunities in data capture for ecologists. However, transformation of data acquisition in ecology will be most hastened by their use in combination, through the integration of multiple emerging technologies into next‐generation ecological monitoring (Marvin et al. 2016). For instance, imagine research stations fitted with remote cameras and acoustic recorders equipped with low‐power computers for image and call recognition and powered by trees via bio‐batteries. These devices could use low‐power, long‐range telemetry both to communicate with each other in a network, potentially tracking animal movement from one location to the next, and to transmit data to a central location. Swarms of UAVs working together (swarm theory) could then be deployed to both map the landscape and collect the data from the central location wirelessly without landing. The UAVs could then land in a location with Wi‐Fi and send all the data via the Internet into cloud‐based storage, accessible from any Internet‐equipped computer in the world (Fig. 2, Table 2). While a system with this much integration might still be theoretical, it is not outside the possibilities of the next 5–10 yrs.

Fig. 2. Visualization of a future "smart" research environment, integrating multiple ecological technologies. The red lines indicate data transfer via the Internet of things (IoT), in which multiple technologies communicate with one another. The gray lines indicate more traditional data transfer. Broken lines indicate data transferred over long distances. Once initiated, this environment would require minimal researcher input. (See Table 2 for descriptions of the numbered technologies.)
Table 2. Description of elements of a future “smart” research environment, as illustrated in Fig. 2
  1. Bio-batteries: In locations where solar power is not an option (closed canopies), data-recording technology such as camera traps and acoustic sensors could run on bio-batteries, eliminating the need for conventional batteries.
  2. The Internet of things (IoT): Autonomous unmanned vehicles could use the IoT to wirelessly communicate with and collect data from recording technologies (camera traps) located in dangerous or difficult-to-access locations.
  3. Swarm theory: Autonomous vehicles such as unmanned aerial vehicles could self-organize to efficiently collect and transfer data.
  4. Long-range, low-power telemetry: Technologies "talking" to each other, transferring information over several kilometers.
  5. Solar power: Environmental sensors, such as weather stations, could be powered via solar power and transfer data to autonomous vehicles for easy data retrieval.
  6. Low-power computer: A field server designed to wirelessly collect and analyze data from all the technology in the environment.
  7. Data transfer via satellite: There is potential to autonomously transfer data from central hubs in the environment back to researchers, without the need for visiting the research sites.
  8. Bioinformatics: With the ability to collect vast quantities of high-resolution spatial and temporal data via permanent and perpetual environmental data-recording technologies, the development of methods to manage and analyze the data collected will become much more pertinent.

Bioinformatics will play a large role in the use of "next-generation" ecological data that technoecology produces. Datasets will be very large and complex, meaning that manual processing and traditional computing hardware and statistical approaches will be insufficient to process such information. For example, the data captured on a 1-km² UAV survey for high-resolution image mosaics and 3D construction run to tens of gigabytes, so at a landscape scale datasets can reach terabytes. Such datasets are known as "big data" (Howe et al. 2008), and bioinformatics will be required to develop methods for sorting, analyzing, categorizing, and storing these data, combining the fields of ecology, computer science, statistics, mathematics, and engineering.

Multi-disciplinary collaboration will also play a major role in developing future technologies in ecology (Joppa 2015). Ecological applications of cutting-edge technology most often develop through multi-disciplinary collaboration between scientists from different fields or between the public and private sectors. For instance, the Princeton ZebraNet project is a collaboration between engineers and biologists (Juang et al. 2002), while the development of the molecular microscope involved the private-sector companies Zeiss and Google. Industries may already have the technology and knowledge to answer certain ecological questions, but might be unaware of such applications. Ecologists should also look to collaborate on convergent design; much of what we do as ecologists and environmental scientists has applications in agriculture, search and rescue, health, or sport science, and vice versa, so opportunities to share and reduce research and development costs exist. Finally, ecologists should be given opportunities for technology-based training and placement programs early in their careers as a way to raise awareness of what could be done.

In the coming decades, a technology‐based revolution in ecology, akin to what has already occurred in genetics (Elmer‐DeWitt and Bjerklie 1994), seems likely. The pace of this revolution will be dictated, in part, by the speed at which ecologists embrace and integrate new technologies as they arise. It is worth remembering, “We still do not know one thousandth of one percent of what nature has revealed to us”—Albert Einstein.

Acknowledgments

We would like to thank the corresponding editor for his excellent suggestions for improving our manuscript and the anonymous reviewer who suggested the addition of the supersize, step‐change, and radical change conceptual framework.

https://esajournals.onlinelibrary.wiley.com/doi/full/10.1002/ecs2.2163

MedCity – As digital health companies proliferate, it’s getting tougher to spot the strongest businesses

The digital health rocket seems to have gotten supercharged lately, at least when it comes to fundraising.  Depending on who you ask, either $1.62 billion (Rock Health’s count) or $2.5 billion (Mercom) or $2.8 billion (Startup Health’s count) was plowed into digital health companies in just the first three months of 2018.  By any measure Q1 2018 was the most significant quarter yet for digital health funding.  This headline has been everywhere.  Digital health:  to infinity and beyond!  But what is the significance of this? Should investors and customers of these companies be excited or worried?  It’s a little hard to tell.

But if you dig a little deeper, there are some interesting things to notice.

Another thing to notice: some deals are way more equal than others, to misquote a book almost everyone was forced to read in junior high. Megadeals have come to digital health (whatever that is), defined as companies getting more than $100 million dropped on them in a single deal. For instance, according to Mercom Capital, just five deals together accounted for approximately $936 million, or more than a third of the entire quarter's funding (assuming you're using the Mercom numbers). If you use the Rock Health numbers, which include only three of the megadeals, we are talking $550 million for the best in class (bank account-wise, anyway). Among the various megadeal high fliers are Heartflow ($240 million raised), Helix ($200 million raised), SomaLogic ($200 million raised), PointClickCare ($186 million raised), and Collective Health ($110 million raised); three others raised $100 million each.

First of all, the definition of "digital health" is getting murkier and murkier. Some sweep in things that others might consider life sciences or genomics. Others include things that may generally be considered health services, in that they are more about people than technology. Rock Health excludes companies that are primarily health services, such as One Medical, or primarily insurance companies, such as Oscar, including only "health companies that build and sell technologies—sometimes paired with a service, but only when the technology is, in and of itself, the service." In contrast, Startup Health and Mercom Capital clearly have more expansive views, though I couldn't find precise definitions. My solution is this: stop using the term "digital health". Frankly, it's all healthcare, and if I were in charge of the world I would use the following four categories and ditch the new-school monikers: 1) drugs/therapeutics; 2) diagnostics, whether in vivo, in vitro, digital, or otherwise; 3) medical devices, with and without sensors; and 4) everything else. But I'm not in charge of the world and it isn't looking likely anytime soon, so the number and nomenclature games continue. My kingdom for a common ontology!

It used to be conventional wisdom that the reason healthcare IT deals were appealing, at least compared to medical devices and biotech, is because they needed far less capital to get to the promised land.  Well, property values in the digital health promised land have risen faster than those in downtown San Francisco, so conventional wisdom be damned.  These technology-focused enterprises are giving biotech deals a run for their money, literally.

But here's what it really means: if you take out the top 10 deals in Rock Health's count, which take up 55 percent of the first quarter's capital, the remainder are averaging about $10.8 million in incoming capital per deal. If you use the Mercom numbers, the average non-megadeal got $8.6 million. This is a far more "normalish" amount of capital for any type of Series A or Series B venture deal, so somewhere in the universe, there is the capacity to reason.

Another note on this topic: the gap between the haves and the have-nots is widening dramatically. Mercom reports that the total number of deals for Q1 2018 was 187, which is the lowest total number of digital health deals in five quarters. Rock Health claims there were 77 deals in the quarter; Startup Health, always the most enthusiastic, claims there were 191 deals in the digital health category. I don't know who has the "right" definition of digital health; what I do know is that either way, this is a lot of companies.
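
As a rough check, those per-deal averages follow directly from the totals quoted above (Rock Health: $1.62 billion over 77 deals, with the top 10 taking 55 percent; Mercom: $2.5 billion over 187 deals, with five megadeals totaling $936 million). A quick back-of-envelope calculation:

```python
# Back-of-envelope check of the non-megadeal averages, using each firm's own
# figures quoted above (all amounts in $ millions).
rock_total, rock_deals, rock_top10_share = 1620, 77, 0.55
mercom_total, mercom_deals, mercom_mega_sum, mercom_mega_deals = 2500, 187, 936, 5

rock_avg = rock_total * (1 - rock_top10_share) / (rock_deals - 10)
mercom_avg = (mercom_total - mercom_mega_sum) / (mercom_deals - mercom_mega_deals)
print(round(rock_avg, 1), round(mercom_avg, 1))   # roughly 10.9 and 8.6
```

Both land close to the figures above, so the "normalish" characterization holds up.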

I think that the phenomenon of companies proliferating like bunnies on Valentine’s Day has another implication: too many damn companies.  Perhaps it’s only me, but it’s getting harder and harder to tell the difference between the myriad of entrants in a variety of categories.  Medication adherence deals seem to be proliferating faster than I can log them.   Companies claiming to improve consumer engagement, whatever that is, are outnumbering articles about AI, and that’s saying something.  Companies claiming to use AI to make everything better, whether it’s care delivery or drug development or French Fries are so numerous that it’s making me artificially stupider.  I think that this excess of entrepreneurship is actually bad for everyone in that it makes it much harder for any investor to pick the winner(s) and makes it nearly impossible for customers to figure out the best dance partner.  It’s a lot easier for customers to simply say no than to take a chance and pick the wrong flavor of the month.  It’s just become too darn easy to start a company.

And with respect to well-constructed clinical studies to demonstrate efficacy, nothing could be more important for companies trying to stand out from the crowd. We keep seeing articles like this one, which talks about how digital health products often fail to deliver on the promise of better, faster, cheaper, or any part thereof. And there's this one by a disappointed Dr. Eric Topol, a physician who has committed a significant amount of his professional life to the pursuit of high-quality digital health initiatives – a true believer, as it were, but one who has seen his share of disappointment when it comes to the claims of digital health products. I'm definitely of the belief that there are some seriously meaningful products out there that make a difference. But there is so much chaff around the wheat that it's hard to find the good stuff.

Digital health has become the world's biggest Oreo with the world's thinnest cream center. But well-constructed, two-arm studies can make one Oreo stand out in a crowd of would-be Newman-Os. One way that investors and buyers are distinguishing the good from the not-so-much is by looking for those who have made the effort to get an FDA approval and who have made an investment in serious clinical trials to prove value. Mercom Capital reports that there were over 100 FDA approvals of digital health products in 2017. Considering that there were at least 345 digital health deals in 2017 (taking the low-end Rock Health numbers), and that the companies raising money that year are only a fraction of all those out there, it is interesting to think that only a minority of companies are bothering to take the FDA route.

Now, this is a SWAG at best, but it feels about right to me.  I often hear digital health entrepreneurs talking about the lengths they are going to in order to avoid needing FDA approval and I almost always respond by saying that I disagree with the approach.  Yes, there are clearly companies that don’t ever need the FDA’s imprimatur (e.g., administrative products with no clinical component), but if you have a clinically-oriented product and hope to claim that it makes a difference, the FDA could be your best friend.  Having an FDA approval certainly conveys a sense of value and legitimacy to many in the buyer and investor community.

It will be interesting to see if the gravity-defying digital health activity will continue ad infinitum or whether the sector will come into contact with the third of Newton’s Laws. Investors are, by definition, in it for the money.  If you can’t exit you shouldn’t enter.  In 2017 there were exactly zero digital health IPOs.  This year there has, so far, been one IPO: Chinese fitness tracker and smartwatch maker, Huami, which raised $110 million and is now listed on the New York Stock Exchange, per Mercom Capital.  In 2017 there were about 119 exits via merger or acquisition, which was down from the prior year.  This year has started off with a faster M&A run rate (about 37 companies acquired in Q1 2018), but what we don’t know is whether the majority of these company exits will look more like Flatiron (woo hoo!) or Practice Fusion (yikes!).  Caveat Emptor: Buyer beware is all I have to say about that.

https://medcitynews.com/2018/05/as-digital-health-companies-proliferate-its-getting-tougher-to-spot-the-strongest-businesses/

McKinsey – AI frontier: Analysis of more than 400 use cases across 19 industries and nine business functions

An analysis of more than 400 use cases across 19 industries and nine business functions highlights the broad use and significant economic potential of advanced AI techniques.

Artificial intelligence (AI) stands out as a transformational technology of our digital age—and its practical application throughout the economy is growing apace. For this briefing, Notes from the AI frontier: Insights from hundreds of use cases (PDF–446KB), we mapped both traditional analytics and newer “deep learning” techniques and the problems they can solve to more than 400 specific use cases in companies and organizations. Drawing on McKinsey Global Institute research and the applied experience with AI of McKinsey Analytics, we assess both the practical applications and the economic potential of advanced AI techniques across industries and business functions. Our findings highlight the substantial potential of applying deep learning techniques to use cases across the economy, but we also see some continuing limitations and obstacles—along with future opportunities as the technologies continue their advance. Ultimately, the value of AI is not to be found in the models themselves, but in companies’ abilities to harness them.

It is important to highlight that, even as we see economic potential in the use of AI techniques, the use of data must always take into account concerns including data security, privacy, and potential issues of bias.

  1. Mapping AI techniques to problem types
  2. Insights from use cases
  3. Sizing the potential value of AI
  4. The road to impact and value

 

Mapping AI techniques to problem types

As artificial intelligence technologies advance, so does the definition of which techniques constitute AI. For the purposes of this briefing, we use AI as shorthand for deep learning techniques that use artificial neural networks. We also examined other machine learning techniques and traditional analytics techniques (Exhibit 1).

AI analytics techniques

Neural networks are a subset of machine learning techniques. Essentially, they are AI systems based on simulating connected “neural units,” loosely modeling the way that neurons interact in the brain. Computational models inspired by neural connections have been studied since the 1940s and have returned to prominence as computer processing power has increased and large training data sets have been used to successfully analyze input data such as images, video, and speech. AI practitioners refer to these techniques as “deep learning,” since neural networks have many (“deep”) layers of simulated interconnected neurons.

We analyzed the applications and value of three neural network techniques:

  • Feed forward neural networks: The simplest type of artificial neural network. In this architecture, information moves in only one direction, forward, from the input layer, through the "hidden" layers, to the output layer. There are no loops in the network. The first single-neuron network was proposed as early as 1958 by AI pioneer Frank Rosenblatt. While the idea is not new, advances in computing power, training algorithms, and available data have led to higher levels of performance than previously possible. (A minimal sketch follows this list.)
  • Recurrent neural networks (RNNs): Artificial neural networks whose connections between neurons include loops, well-suited for processing sequences of inputs. In November 2016, Oxford University researchers reported that a system based on recurrent neural networks (and convolutional neural networks) had achieved 95 percent accuracy in reading lips, outperforming experienced human lip readers, who tested at 52 percent accuracy.
  • Convolutional neural networks (CNNs): Artificial neural networks in which the connections between neural layers are inspired by the organization of the animal visual cortex, the portion of the brain that processes images, well suited for perceptual tasks.
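
As a concrete illustration of the feed-forward idea referenced above, here is a minimal sketch of a forward pass through a small network. The layer sizes and random weights are arbitrary assumptions; a real network would learn its weights from labeled training data.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)                      # simple non-linearity between layers

def feed_forward(x, hidden_layers, output_layer):
    # Information flows one way: input -> hidden layers -> output, with no loops.
    for W, b in hidden_layers:
        x = relu(x @ W + b)
    W, b = output_layer
    return x @ W + b

rng = np.random.default_rng(0)
# Illustrative shape: 4 inputs -> two hidden layers of 8 units -> 1 output.
hidden = [(rng.normal(size=(4, 8)), np.zeros(8)),
          (rng.normal(size=(8, 8)), np.zeros(8))]
output = (rng.normal(size=(8, 1)), np.zeros(1))
print(feed_forward(rng.normal(size=(1, 4)), hidden, output))
```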

For our use cases, we also considered two other techniques—generative adversarial networks (GANs) and reinforcement learning—but did not include them in our potential value assessment of AI, since they remain nascent techniques that are not yet widely applied.

Generative adversarial networks (GANs) use two neural networks contesting with each other in a zero-sum game framework (thus "adversarial"). GANs can learn to mimic various distributions of data (for example, text, speech, and images) and are therefore valuable in generating test data sets when these are not readily available.
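
A minimal sketch of that adversarial setup, written with PyTorch (a framework choice that is my assumption; the briefing does not prescribe one). The generator learns to produce samples resembling a toy one-dimensional target distribution while the discriminator learns to tell real samples from generated ones.

```python
import torch
import torch.nn as nn

# Toy generator and discriminator for 1-D data; sizes are illustrative only.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
real_label, fake_label = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(1000):
    real = torch.randn(64, 1) + 4.0            # "real" samples from the target N(4, 1)
    fake = G(torch.randn(64, 8))               # generated samples from random noise

    # Discriminator update: label real samples 1 and generated samples 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), real_label) + bce(D(fake.detach()), fake_label)
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make the discriminator call its fakes "real".
    opt_g.zero_grad()
    g_loss = bce(D(fake), real_label)
    g_loss.backward()
    opt_g.step()

print(float(G(torch.randn(256, 8)).mean()))    # should drift toward the target mean of 4
```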

Reinforcement learning is a subfield of machine learning in which systems are trained by receiving virtual “rewards” or “punishments”, essentially learning by trial and error. Google DeepMind has used reinforcement learning to develop systems that can play games, including video games and board games such as Go, better than human champions.
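
The trial-and-error idea can be illustrated with tabular Q-learning, one of the simplest reinforcement learning algorithms. This is a deliberately tiny toy example, not drawn from the briefing's use cases.

```python
import random

# Tabular Q-learning on a toy corridor: states 0..4, the goal (reward +1) is state 4.
# Actions: 0 = move left, 1 = move right. Purely illustrative.
n_states, n_actions, goal = 5, 2, 4
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2          # learning rate, discount, exploration rate

def step(state, action):
    nxt = max(0, min(goal, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == goal else 0.0)  # reward only when the goal is reached

for episode in range(300):
    s = 0
    for _ in range(200):                       # cap episode length
        if random.random() < epsilon:          # explore occasionally...
            a = random.randrange(n_actions)
        else:                                  # ...otherwise exploit the best known action
            best = max(Q[s])
            a = random.choice([i for i in range(n_actions) if Q[s][i] == best])
        s2, r = step(s, a)
        # Trial-and-error update: nudge the estimate toward reward + discounted future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
        if s == goal:
            break

print(Q)  # after training, "move right" scores higher than "move left" in states 0-3
```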

 

Section 2

Insights from use cases

We collated and analyzed more than 400 use cases across 19 industries and nine business functions. They provided insight into the areas within specific sectors where deep neural networks can potentially create the most value, the incremental lift that these neural networks can generate compared with traditional analytics (Exhibit 2), and the voracious data requirements—in terms of volume, variety, and velocity—that must be met for this potential to be realized. Our library of use cases, while extensive, is not exhaustive, and may overstate or understate the potential for certain sectors. We will continue refining and adding to it.

Advanced deep learning AI techniques can be applied across industries

Examples of where AI can be used to improve the performance of existing use cases include:

  • Predictive maintenance: the power of machine learning to detect anomalies. Deep learning's capacity to analyze very large amounts of high-dimensional data can take existing preventive maintenance systems to a new level. By layering in additional data, such as audio and image data, from other sensors—including relatively cheap ones such as microphones and cameras—neural networks can enhance and possibly replace more traditional methods. AI's ability to predict failures and allow planned interventions can be used to reduce downtime and operating costs while improving production yield. For example, AI can extend the life of a cargo plane beyond what is possible using traditional analytic techniques by combining plane model data, maintenance history, IoT sensor data such as anomaly detection on engine vibration data, and images and video of engine condition. (A simplified sketch follows this list.)
  • AI-driven logistics optimization can reduce costs through real-time forecasts and behavioral coaching. Application of AI techniques such as continuous estimation to logistics can add substantial value across sectors. AI can optimize routing of delivery traffic, thereby improving fuel efficiency and reducing delivery times. One European trucking company has reduced fuel costs by 15 percent, for example, by using sensors that monitor both vehicle performance and driver behavior; drivers receive real-time coaching, including when to speed up or slow down, optimizing fuel consumption and reducing maintenance costs.
  • AI can be a valuable tool for customer service management and personalization challenges. Improved speech recognition in call center management and call routing as a result of the application of AI techniques allows a more seamless experience for customers—and more efficient processing. The capabilities go beyond words alone. For example, deep learning analysis of audio allows systems to assess a customer's emotional tone; in the event a customer is responding badly to the system, the call can be rerouted automatically to human operators and managers. In other areas of marketing and sales, AI techniques can also have a significant impact. Combining customer demographic and past transaction data with social media monitoring can help generate individualized product recommendations. "Next product to buy" recommendations that target individual customers—as companies such as Amazon and Netflix have successfully been doing—can lead to a twofold increase in the rate of sales conversions.
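
Referring back to the predictive-maintenance item above, here is a deliberately simplified stand-in for the anomaly detection it describes: a statistical threshold on simulated vibration readings rather than a deep network, just to make the flag-unusual-behavior idea concrete. The window size, threshold, and simulated fault are assumptions.

```python
import numpy as np

def flag_anomalies(vibration, baseline_window=100, z_threshold=4.0):
    """Flag readings that deviate strongly from the recent healthy baseline."""
    baseline = vibration[:baseline_window]
    mu, sigma = baseline.mean(), baseline.std()
    z = np.abs((vibration - mu) / sigma)           # how unusual each reading is
    return np.where(z > z_threshold)[0]            # indices worth a maintenance check

# Simulated engine-vibration signal with a fault developing near the end.
rng = np.random.default_rng(1)
signal = rng.normal(1.0, 0.05, size=1000)
signal[950:] += 0.5
print(flag_anomalies(signal))                      # flags the final, faulty readings
```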

Two-thirds of the opportunities to use AI are in improving the performance of existing analytics use cases

In 69 percent of the use cases we studied, deep neural networks can be used to improve performance beyond that provided by other analytic techniques. Cases in which only neural networks can be used, which we refer to here as “greenfield” cases, constituted just 16 percent of the total. For the remaining 15 percent, artificial neural networks provided limited additional performance over other analytics techniques, among other reasons because of data limitations that made these cases unsuitable for deep learning (Exhibit 3).

AI improves the performance of existing analytics techniques

Greenfield AI solutions are prevalent in business areas such as customer service management, as well as among some industries where the data are rich and voluminous and at times integrate human reactions. Among industries, we found many greenfield use cases in healthcare, in particular. Some of these cases involve disease diagnosis and improved care, and rely on rich data sets incorporating image and video inputs, including from MRIs.

On average, our use cases suggest that modern deep learning AI techniques have the potential to provide a boost in additional value above and beyond traditional analytics techniques ranging from 30 percent to 128 percent, depending on industry.

In many of our use cases, however, traditional analytics and machine learning techniques continue to underpin a large percentage of the value creation potential in industries including insurance, pharmaceuticals and medical products, and telecommunications, with the potential of AI limited in certain contexts. In part this is due to the way data are used by these industries and to regulatory issues.

Data requirements for deep learning are substantially greater than for other analytics

Making effective use of neural networks in most applications requires large labeled training data sets alongside access to sufficient computing infrastructure. Furthermore, these deep learning techniques are particularly powerful in extracting patterns from complex, multidimensional data types such as images, video, and audio or speech.

Deep-learning methods require thousands of data records for models to become relatively good at classification tasks and, in some cases, millions for them to perform at the level of humans. By one estimate, a supervised deep-learning algorithm will generally achieve acceptable performance with around 5,000 labeled examples per category and will match or exceed human-level performance when trained with a data set containing at least 10 million labeled examples. In some cases where advanced analytics is currently used, so much data are available—millions or even billions of rows per data set—that AI usage is the most appropriate technique. However, if a threshold of data volume is not reached, AI may not add value to traditional analytics techniques.

These massive data sets can be difficult to obtain or create for many business use cases, and labeling remains a challenge. Most current AI models are trained through "supervised learning," which requires humans to label and categorize the underlying data. However, promising new techniques are emerging to overcome these data bottlenecks, such as reinforcement learning, generative adversarial networks, transfer learning, and "one-shot learning," which allows a trained AI model to learn about a subject based on a small number of real-world demonstrations or examples—and sometimes just one.

Organizations will have to adopt and implement strategies that enable them to collect and integrate data at scale. Even with large datasets, they will have to guard against “overfitting,” where a model too tightly matches the “noisy” or random features of the training set, resulting in a corresponding lack of accuracy in future performance, and against “underfitting,” where the model fails to capture all of the relevant features. Linking data across customer segments and channels, rather than allowing the data to languish in silos, is especially important to create value.
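
A small sketch of the overfitting/underfitting trade-off described above, using scikit-learn polynomial regression as a stand-in for any model family (the data set and polynomial degrees are arbitrary assumptions): as model complexity grows, training error keeps falling while validation error eventually worsens.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures

# Noisy sine data; half for training, half held out for validation.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.3, size=40)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.5, random_state=0)

for degree in (1, 3, 15):
    poly = PolynomialFeatures(degree)
    model = LinearRegression().fit(poly.fit_transform(X_tr), y_tr)
    err_tr = np.mean((model.predict(poly.transform(X_tr)) - y_tr) ** 2)
    err_va = np.mean((model.predict(poly.transform(X_va)) - y_va) ** 2)
    # degree 1 underfits (both errors high); degree 15 chases the training noise,
    # so training error collapses while validation error typically worsens.
    print(degree, round(err_tr, 3), round(err_va, 3))
```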

Realizing AI’s full potential requires a diverse range of data types including images, video, and audio

Neural AI techniques excel at analyzing image, video, and audio data types because of their complex, multidimensional nature, known by practitioners as “high dimensionality.” Neural networks are good at dealing with high dimensionality, as multiple layers in a network can learn to represent the many different features present in the data. Thus, for facial recognition, the first layer in the network could focus on raw pixels, the next on edges and lines, another on generic facial features, and the final layer might identify the face. Unlike previous generations of AI, which often required human expertise to do “feature engineering,” these neural network techniques are often able to learn to represent these features in their simulated neural networks as part of the training process.
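
The layered feature hierarchy described above can be made concrete with a toy convolutional network (a PyTorch sketch; the layer sizes and the face/no-face task are assumptions, not taken from the briefing).

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Toy convolutional network: each block learns progressively higher-level features."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # edges, simple textures
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # parts (eyes, mouths)
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),  # whole-face features
        )
        self.classifier = nn.Linear(64, 2)         # face / not a face

    def forward(self, x):
        x = self.features(x)                       # learned hierarchy, no manual feature engineering
        return self.classifier(x.flatten(1))

logits = TinyCNN()(torch.randn(1, 3, 64, 64))      # one 64x64 RGB image
print(logits.shape)                                # torch.Size([1, 2])
```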

Along with issues around the volume and variety of data, velocity is also a requirement: AI techniques require models to be retrained to match potential changing conditions, so the training data must be refreshed frequently. In one-third of the cases, the model needs to be refreshed at least monthly, and almost one in four cases requires a daily refresh; this is especially the case in marketing and sales and in supply chain management and manufacturing.

 

Section 3

Sizing the potential value of AI

We estimate that the AI techniques we cite in this briefing together have the potential to create between $3.5 trillion and $5.8 trillion in value annually across nine business functions in 19 industries. This constitutes about 40 percent of the overall $9.5 trillion to $15.4 trillion annual impact that could potentially be enabled by all analytical techniques (Exhibit 4).

AI has the potential to create value across sectors

Per industry, we estimate that AI’s potential value amounts to between one and nine percent of 2016 revenue. The value as measured by percentage of industry revenue varies significantly among industries, depending on the specific applicable use cases, the availability of abundant and complex data, as well as on regulatory and other constraints.

These figures are not forecasts for a particular period, but they are indicative of the considerable potential for the global economy that advanced analytics represents.

From the use cases we have examined, we find that the greatest potential value impact from using AI is in both top-line-oriented functions, such as marketing and sales, and bottom-line-oriented operational functions, including supply chain management and manufacturing.

Consumer industries such as retail and high tech will tend to see more potential from marketing and sales AI applications because frequent and digital interactions between business and customers generate larger data sets for AI techniques to tap into. E-commerce platforms, in particular, stand to benefit. This is because of the ease with which these platforms collect customer information such as click data or time spent on a web page and can then customize promotions, prices, and products for each customer dynamically and in real time.

AI's impact is likely to be most substantial in marketing and sales, supply-chain management, and manufacturing

Here is a snapshot of three sectors where we have seen AI's impact (Exhibit 5):

  • In retail, marketing and sales is the area with the most significant potential value from AI, and within that function, pricing and promotion and customer service management are the main value areas. Our use cases show that using customer data to personalize promotions, for example, including tailoring individual offers every day, can lead to a one to two percent increase in incremental sales for brick-and-mortar retailers alone.
  • In consumer goods, supply-chain management is the key function that could benefit from AI deployment. Among the examples in our use cases, we see how forecasting based on underlying causal drivers of demand rather than prior outcomes can improve forecasting accuracy by 10 to 20 percent, which translates into a potential five percent reduction in inventory costs and revenue increases of two to three percent.
  • In banking, particularly retail banking, AI has significant value potential in marketing and sales, much as it does in retail. However, because of the importance of assessing and managing risk in banking, for example for loan underwriting and fraud detection, AI has much higher value potential to improve performance in risk in the banking sector than in many other industries.

 

Section 4

The road to impact and value

Artificial intelligence is attracting growing amounts of corporate investment, and as the technologies develop, the potential value that can be unlocked is likely to grow. So far, however, only about 20 percent of AI-aware companies are currently using one or more of its technologies in a core business process or at scale.

For all their promise, AI technologies have plenty of limitations that will need to be overcome. They include the onerous data requirements listed above, but also five other limitations:

  • First is the challenge of labeling training data, which often must be done manually and is necessary for supervised learning. Promising new techniques are emerging to address this challenge, such as reinforcement learning and in-stream supervision, in which data can be labeled in the course of natural usage.
  • Second is the difficulty of obtaining data sets that are sufficiently large and comprehensive to be used for training; for many business use cases, creating or obtaining such massive data sets can be difficult—for example, limited clinical-trial data to predict healthcare treatment outcomes more accurately.
  • Third is the difficulty of explaining in human terms results from large and complex models: why was a certain decision reached? Product certifications in healthcare and in the automotive and aerospace industries, for example, can be an obstacle; among other constraints, regulators often want rules and choice criteria to be clearly explainable.
  • Fourth is the generalizability of learning: AI models continue to have difficulties in carrying their experiences from one set of circumstances to another. That means companies must commit resources to train new models even for use cases that are similar to previous ones. Transfer learning—in which an AI model is trained to accomplish a certain task and then quickly applies that learning to a similar but distinct activity—is one promising response to this challenge (see the sketch after this list).
  • The fifth limitation concerns the risk of bias in data and algorithms. This issue touches on concerns that are more social in nature and which could require broader steps to resolve, such as understanding how the processes used to collect training data can influence the behavior of models they are used to train. For example, unintended biases can be introduced when training data is not representative of the larger population to which an AI model is applied. Thus, facial recognition models trained on a population of faces corresponding to the demographics of AI developers could struggle when applied to populations with more diverse characteristics. A recent report on the malicious use of AI highlights a range of security threats, from sophisticated automation of hacking to hyper-personalized political disinformation campaigns.
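
Picking up the transfer-learning point from the fourth limitation above, a common pattern is to reuse a network pretrained on a large data set and retrain only a small task-specific head. The sketch below assumes a recent torchvision installation and a hypothetical five-class target task.

```python
import torch.nn as nn
from torchvision import models

# Reuse an ImageNet-pretrained backbone and retrain only a new head.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False                    # keep the general-purpose visual features

backbone.fc = nn.Linear(backbone.fc.in_features, 5)   # new head for a 5-class task
# Only backbone.fc.parameters() are then passed to the optimizer and trained on the
# (much smaller) data set for the new, related task.
```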

Organizational challenges around technology, processes, and people can slow or impede AI adoption

Organizations planning to adopt significant deep learning efforts will need to consider a spectrum of options about how to do so. The range of options includes building a complete in-house AI capability, outsourcing these capabilities, or leveraging AI-as-a-service offerings.

Based on the use cases they plan to build, companies will need to create a data plan that produces results and predictions, which can be fed either into designed interfaces for humans to act on or into transaction systems. Key data engineering challenges include data creation or acquisition, defining data ontology, and building appropriate data “pipes.” Given the significant computational requirements of deep learning, some organizations will maintain their own data centers, because of regulations or security concerns, but the capital expenditures could be considerable, particularly when using specialized hardware. Cloud vendors offer another option.

Process can also become an impediment to successful adoption unless organizations are digitally mature. On the technical side, organizations will have to develop robust data maintenance and governance processes, and implement modern software disciplines such as Agile and DevOps. Even more challenging, in terms of scale, is overcoming the “last mile” problem of making sure the superior insights provided by AI are instantiated in the behavior of the people and processes of an enterprise.

On the people front, much of the construction and optimization of deep neural networks remains something of an art, requiring real experts to deliver step-change performance increases. Demand for these skills far outstrips supply at present; according to some estimates, fewer than 10,000 people have the skills necessary to tackle serious AI problems, and competition for them is fierce among the tech giants.

AI can seem an elusive business case

Where AI techniques and data are available and the value is clearly proven, organizations can already pursue the opportunity. In some areas, the techniques today may be mature and the data available, but the cost and complexity of deploying AI may simply not be worthwhile, given the value that could be generated. For example, an airline could use facial recognition and other biometric scanning technology to streamline aircraft boarding, but the value of doing so may not justify the cost and issues around privacy and personal identification.

Similarly, we can see potential cases where the data and the techniques are maturing, but the value is not yet clear. The most unpredictable scenario is where either the data (both the types and volume) or the techniques are simply too new and untested to know how much value they could unlock. For example, in healthcare, if AI were able to build on the superhuman precision we are already starting to see with X-ray analysis and broaden that to more accurate diagnoses and even automated medical procedures, the economic value could be very significant. At the same time, the complexities and costs of arriving at this frontier are also daunting. Among other issues, it would require flawless technical execution and resolving issues of malpractice insurance and other legal concerns.

Societal concerns and regulations can also constrain AI use. Regulatory constraints are especially prevalent in use cases related to personally identifiable information. This is particularly relevant at a time of growing public debate about the use and commercialization of individual data on some online platforms. Use and storage of personal information is especially sensitive in sectors such as banking, health care, and pharmaceutical and medical products, as well as in the public and social sector. In addition to addressing these issues, businesses and other users of data for AI will need to continue to evolve business models related to data use in order to address societies' concerns. Furthermore, regulatory requirements and restrictions can differ from country to country, as well as from sector to sector.

Implications for stakeholders

As we have seen, it is a company’s ability to execute against AI models that creates value, rather than the models themselves. In this final section, we sketch out some of the high-level implications of our study of AI use cases for providers of AI technology, appliers of AI technology, and policy makers, who set the context for both.

  • For AI technology provider companies: Many companies that develop or provide AI to others have considerable strength in the technology itself and the data scientists needed to make it work, but they can lack a deep understanding of end markets. Understanding the value potential of AI across sectors and functions can help shape the portfolios of these AI technology companies. That said, they shouldn’t necessarily only prioritize the areas of highest potential value. Instead, they can combine that data with complementary analyses of the competitor landscape, of their own existing strengths, sector or function knowledge, and customer relationships, to shape their investment portfolios. On the technical side, the mapping of problem types and techniques to sectors and functions of potential value can guide a company with specific areas of expertise on where to focus.
  • For companies adopting AI: Many companies seeking to adopt AI in their operations have started machine learning and AI experiments across their business. Before launching more pilots or testing solutions, it is useful to step back and take a holistic approach to the issue, moving to create a prioritized portfolio of initiatives across the enterprise, including AI and the wider analytic and digital techniques available. For a business leader to create an appropriate portfolio, it is important to develop an understanding of which use cases and domains have the potential to drive the most value for a company, as well as which AI and other analytical techniques will need to be deployed to capture that value. This portfolio ought to be informed not only by where the theoretical value can be captured, but by the question of how the techniques can be deployed at scale across the enterprise. The question of how analytical techniques are scaling is driven less by the techniques themselves and more by a company's skills, capabilities, and data. Companies will need to consider efforts on the "first mile," that is, how to acquire and organize data, as well as on the "last mile," or how to integrate the output of AI models into work flows ranging from clinical trial managers and sales force managers to procurement officers. Previous MGI research suggests that AI leaders invest heavily in these first- and last-mile efforts.
  • Policy makers will need to strike a balance between supporting the development of AI technologies and managing any risks from bad actors. They have an interest in supporting broad adoption, since AI can lead to higher labor productivity, economic growth, and societal prosperity. Their tools include public investments in research and development as well as support for a variety of training programs, which can help nurture AI talent. On the issue of data, governments can spur the development of training data directly through open data initiatives. Opening up public-sector data can spur private-sector innovation. Setting common data standards can also help. AI is also raising new questions for policy makers to grapple with for which historical tools and frameworks may not be adequate. Therefore, some policy innovations will likely be needed to cope with these rapidly evolving technologies. But given the scale of the beneficial impact on business, the economy, and society, the goal should not be to constrain the adoption and application of AI, but rather to encourage its beneficial and safe use.

https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-applications-and-value-of-deep-learning
