News & Press

Follow our latest news and stay up to date.

Mission – Greatest Sales Pitch I’ve Seen All Year

A few weeks ago, I met a CMO named Yvette in the office kitchen at OpenView Venture Partners. She was chewing on a bagel during a lunch break from the VC firm’s all-day speaker event, and she was clearly upset.

“How in the world,” Yvette said, reaching for the cream cheese, “am I going to inform my team that our entire approach to marketing is wrong?”

The CEO of another company, overhearing Yvette, chimed in. “Right? I just texted my VP of sales that the way we’re selling is obsolete.”

In fact, virtually every CEO, sales exec, and marketing VP in attendance seemed suddenly overwhelmed by an urgent desire to change the way they worked.

The reason?

They had just been on the receiving end of the best sales pitch I’ve seen all year.

The 5 Elements of Drift’s Powerful Strategic Narrative

There were many great speakers at OpenView’s Boston headquarters that morning — JetBlue’s VP of marketing, senior execs from OpenView’s portfolio—yet none moved the crowd quite like Drift director of marketing Dave Gerhardt. By the time Gerhardt was finished, the only attendees who weren’t plotting to secure budget for Drift’s platform were the ones humble-bragging about how they’d already implemented it.

How did Gerhardt do it? For that matter, how has Drift—a web-based, live-chat tool for salespeople and marketers—managed to differentiate itself in a market crowded with similar offerings? (The company recently raised a $32 million round of Series B capital led by General Catalyst, with participation from Sequoia, and boasts over 40,000 businesses on its platform.)

The answer to both starts with a brilliant strategic narrative, championed by Drift CEO David Cancel, that has transformed the company into something more like a movement. In fact, two weeks after hearing Gerhardt speak, I saw Cancel pitch a new feature at Drift’s day-long Hypergrowth event, and he told virtually the same story, to similar effect.

Here, then, are the 5 elements of every compelling strategic story, and how Drift is leveraging them to achieve breakout success. If you’re pitching anything to anyone, lay them out in exactly this order:

(I have no financial stake in Drift. However, I saw Cancel pitch at Hypergrowth because I also spoke at the conference. The images below are a mix of Gerhardt’s OpenView slides and Cancel’s from Hypergrowth.)

#1. Start with a big, undeniable change that creates stakes

No matter what you’re selling, your most formidable obstacle is prospects’ adherence to the status quo. Your primary adversary, in other words, is a voice inside people’s heads that goes, We’ve gotten along just fine without it, and we’ll always be fine without it.

How do you overcome that? By demonstrating that the world has changed in such a fundamental way that prospects have to change, too.

Drift kicks off its strategic narrative with a dramatic change in the life of a typical business buyer. She’s so connected now that she practically sleeps with her phone:

Drift’s change in the world (1)

And her preferred way to interact—professionally and socially—is through always-on messaging platforms like Facebook Messenger, SMS, Instagram, and Slack:

Drift’s change in the world (2)

Note that the change Drift talks about is (1) undeniably happening and (2) happening independently of Drift — that is, whether Drift exists or not. It also (3) gives rise to stakes. All three must be true if you want prospects’ trust as you lead them down the path of questioning their love for the status quo.

While Drift’s slides don’t name the stakes explicitly, Cancel and Gerhardt’s voiceovers make them clear enough: Interact with prospects through these new channels—in real time—or don’t interact with them at all.

#2. Name the enemy

Luke fought Vader. Moana battled the Lava Monster. Marc Benioff squared off against software.

One of the most powerful ways to turn prospects into aspiring heroes is to pit them against an antagonist. What’s stopping marketers and salespeople—the heroes of Drift’s strategic story—from reaching prospects in the new, changed world?

According to Drift, it’s tools of the trade like lead forms — “fill in your name, company, and title, and maybe we’ll get back to you” — designed for a bygone era:

The enemy: tools of the trade designed for a bygone era

Drift even steals a page from Salesforce’s “no software” playbook, except here it’s those “forms” that play the role of villain:

Drift’s “no forms” icon

Naming your customer’s enemy differentiates you — not directly in relation to competitors (which comes off as “salesy”), but in relation to the old world that your competitors represent. To be sure, “circle-slash” isn’t the only way to do that, but once you indoctrinate audiences with your story, icons like this can serve as a powerful shorthand. (I bet the first time you saw Benioff’s “no software” image, you had no idea what he was talking about; once you heard the story, it spoke volumes.)

Incidentally, both Cancel and Gerhardt execute a total ninja move by including themselves in the legions of marketers and salespeople seduced by the enemy—that is, brought up to believe that phone calls, forms, and email are how you reach prospects. (Cancel was, after all, chief product officer at HubSpot, a big enabler of forms.) In their story, it’s not you who needs help, but we:

Drift isn’t pointing an accusatory finger at your problem. Instead, they’re inviting you to join them in a revolution, to fight with them against a common foe.

#3. Tease the “Promised Land”

In declaring the old way to be a losing path, Drift plants a question in audiences’ minds: OK, so how do I win?

It’s tempting to answer that question by jumping straight to your product and its capabilities, but you’d be wise to resist that urge. Otherwise audiences will lack context for why your capabilities matter, and they’ll tune out.

Instead, first present a glimpse of the “Promised Land” — the state of winning in the new world. Remember, winning is not having your product but the future that’s possible thanks to having your product:

Drift lays out the Promised Land

It’s wildly effective to introduce your Promised Land, as Drift does, so it feels like we’re watching you think it through (“what we realized was”). However you do it, your Promised Land should be both (a) desirable (obviously) and (b) difficult for prospects to reach without you. Otherwise, why do you exist?

#4. Position capabilities as “magic” for slaying “monsters”

Once audiences buy into your Promised Land, they’re ready to hear about your capabilities. It’s the same dynamic that plays out in epic films and fairy tales: We value Obi-Wan’s gift of a lightsaber precisely because we understand the role it can play in Luke’s struggle to destroy the Death Star.

So yes, you’re Obi-Wan and your product (service, proposal, whatever) is a lightsaber that helps Luke battle stormtroopers. You’re Moana’s grandmother, Tala, and your product is the ancient wisdom that propels Moana to defeat the Lava Monster.

Drift’s “magic” for annihilating forms is an always-on chat box that prospects see when they visit your website:

Slaying forms: Drift’s always-on chat box

Cancel’s keynote doubled as a launch announcement for a new feature called Drift Email, so he transitioned next to another monster that’s keeping people from reaching Drift’s conversational Promised Land:

A new monster

Email, obviously, would be a way to facilitate that. But wait, wasn’t email the enemy—or at least an evil henchman?

Well, if the Terminator can be resurrected as a force for good (Terminator 2), then email can be too. Cancel lays out three “mini-monsters” blocking that transformation:

Then he introduces Drift Email—not as a set of context-free features or even benefits, but as a collection of magic for slaying the monsters. With Drift Email, your prospects can react to an email instantly by, for instance, booking a demo without waiting for you to email them back:

After a prospect returns to your website by responding to an email, you can continue the conversation in a relevant way, rather than by bombarding the prospect with more emails:

And no matter what channel the prospect wants to talk through next, the context (history, etc.) persists:

#5. Present your best evidence

Of course, even if you’ve laid out the story perfectly, audiences will be skeptical. As they should be, since your Promised Land is by definition difficult to reach!

So you must present evidence of your ability to deliver happily-ever-after. The best evidence is stories about people—told in their own voices—who say you helped them reach the Promised Land:

Evidence: Drift got us to the Promised Land

What if you’re so early that you’re not yet blessed with a stack of glowing success stories and testimonials? That must have been the case with Drift Email, so Cancel had his team dogfood it (use it themselves) and showed the results:

The results of “dogfooding” Drift Email

Drift’s Story Works Because They Tell It Everywhere—and Commit to Making It Come True

Whether I’m running a strategic messaging and positioning project for a CEO and his/her leadership team, or training larger sales and marketing groups, I use the framework above to help them craft a customer-centric narrative, as Cancel has. But that’s just the first step.

To achieve what Drift has, the CEO (or whatever the leader is called) must commit to telling the story over and over through all-hands talks, recruiting pitches, investor presentations, social channels — everywhere — and to making it come true.

That’s what Cancel does. His entire team seems maniacally focused on getting customers to the Promised Land — through sales conversations, customer interactions, marketing collateral, success stories, events, and podcasts. Even product: Once you pinpoint the Promised Land, you see monsters everywhere, each an opportunity for profitable new features. Among the questions Gerhardt received from the audience at OpenView: How do we schedule salespeople for live-chat duty? How do we qualify prospects if every one of them gets to chat with you? If Drift hasn’t already conjured up magic for taming those beasts, I’m guessing it’s on its way.

The Rise of Story-Led Differentiation

If I had any doubt that Cancel believes, as I do, that it’s more important now than ever to differentiate through a customer-centric strategic narrative, it was erased when, a few days later, he posted this on LinkedIn:

Cancel’s LinkedIn update

Some commenters disagreed, but most wanted to know what he meant by “act accordingly.” Cancel never responded, but his actions—and Drift’s success—scream his answer.

Product differentiation, by itself, has become indefensible because today’s competitors can copy your better, faster, cheaper features virtually instantly. Now, the only thing they can’t replicate is the trust that customers feel for you and your team. Ultimately, that’s born not of a self-centered mission statement like “We want to be best-in-class X” or “to disrupt Y,” but of a culture whose beating heart is a strategic story that casts your customer as the world-changing hero.

That’s the big, undeniable shift in the world that I spoke about as the final speaker at OpenView that day, several hours after Gerhardt left the stage.

https://medium.com/the-mission/the-best-sales-pitch-ive-seen-all-year-7fa92afaa248

Hubspot – The Hard Truth About Acquisition Costs (and How Your Customers Can Save You)

Trust in businesses is eroding, and so is patience. Marketing and sales are getting harder, and the math behind most companies’ acquisition strategy is simply unworkable.

The best point of leverage you have to combat these changes? An investment in customer service.


Consumers don’t trust businesses anymore

The way people interact with businesses has changed — again. The internet’s rise three decades ago did more to change the landscape of business than anyone could have imagined in the 1990s. And now it’s happening again.

Rapid spread of misinformation, concerns over how online businesses collect and use personal data, and a deluge of branded content all contribute to a fundamental shift — we just don’t trust businesses anymore.

1-trust-in-business

  • 81% trust their friends and family’s advice over advice from a business
  • 55% no longer trust the companies they buy from as much as they used to
  • 65% do not trust company press releases
  • 69% do not trust advertisements, and 71% do not trust sponsored ads on social networks

We used to trust salespeople, seek out company case studies, and ask companies to send us their customer references. But not anymore. Today, we trust friends, family, colleagues, and look to third-party review sites like Yelp, G2Crowd, and Glassdoor to help us choose the businesses we patronize, the software we buy, and even the places we work.

Consumers are also becoming more impatient, more demanding, and more independent.

2-consumers-are-impatient

In a survey of 1,000 consumers in the United States, United Kingdom, Australia, and Singapore, we found that 82% rated an immediate response as “important” or “very important” when they were looking to buy from a company, speak with a salesperson, or ask a question about a product or service. That number rises to 90% when looking for customer service support.

But what does “immediate” mean? Over half (59%) of buyers expect a response within 30 minutes when they want to learn more about a business’ product or service. That number rises to 72% when they’re looking for customer support and 75% when they want to speak with a salesperson.

Modern consumers are also unafraid to tell the world what they think. Nearly half (49%) reported sharing an experience they had with a company on social media, good or bad. While buyers are fairly evenly split between being more likely to share a good experience (49%) vs. a bad one (51%), every customer interaction you have is an opportunity to generate buzz — or risk public shaming.

The hard truth is that your customers need you a lot less than they used to. They learn from friends, not salespeople. They trust other customers, not marketers. They’d rather help themselves than call you.

Acquisition is getting harder

The erosion of consumer trust is a difficult issue for companies to grapple with on its own. But as if that wasn’t enough, the internet, which has already fundamentally transformed the traditional go-to-market strategy, is moving the goalposts again.

Let’s break this down into two functions: Marketing and sales.

Marketing is getting more expensive

We’ve taught inbound marketing to tens of thousands of companies and built software to help them execute it. Inbound marketing accelerated business growth through a repeatable formula: Create a website, create search-optimized content that points to gated content, then use prospects’ contact information to nurture them to a point of purchase.

This still works — but the market is experiencing four trends that, combined, have made it harder for growing businesses to compete with long-established, better-resourced companies.

Trend 1: Google is taking back its own real estate

Much of modern marketing is dependent on getting found online. Without the multimillion-dollar brand awareness and advertising budgets of consumer goods titans, the best way a growing business can compete is by creating content specific to its niche and optimizing it for search.

Google, the arbiter of online content discoverability, has made significant changes in the last few years that make it harder for marketers to run this model at scale without a financial investment.

First, through featured snippets and “People Also Ask” boxes, Google is reclaiming its own traffic.

A featured snippet is a snippet of text that Google serves on the search engine results page (SERP). You’ve likely been served a featured snippet when you were searching for a definition, or something that involved a step-by-step explanation.

Here’s an example of a featured snippet. It’s designed to pull information onto the SERP itself so there’s no need to click into the full recipe, hosted on another website.

3-featured-snippet

“People also ask” boxes are a different permutation of a featured snippet. These display questions related to your original search, live on the SERP, and are expandable with a click, like so:

4-people-also-ask

Each time you expand a “People Also Ask” section, Google adds 2-4 queries to the end of the list.

The combined effect of featured snippets and “People Also Ask” boxes? It depends. If your site is the first result and gets featured in the snippet, your traffic should increase. But if you don’t win the featured snippet, even if your post is ranked at position 1, it’s likely your overall traffic will decrease.

Second, Google’s also changed its search engine results page (SERP), moving search ads from a sidebar to the top four slots. Organic results fall much further down the page, and on a mobile device they disappear entirely.

Search won’t ever become purely pay-to-play. But in a world where screen real estate is increasingly dominated by sponsored content, marketers need to factor paid tactics into any organic strategy.

Voice search adds a third wrinkle to these shifts — the winner-take-all market. As the use of voice search has proliferated, it’s become more and more important to become the answer, as voice assistants only provide one result when asked a question.

On Google, featured snippets demonstrate this necessity. Amazon has also introduced “Amazon’s Choice” products, the first items suggested when consumers order items via voice assistant. It’s not hard to imagine a future where all Amazon’s Choice products are also Amazon-branded, manufactured, and distributed.

Trend 2: Social media sites are walled gardens

A decade ago, social media sites were promotion channels that served as a path between users and the poster’s site. The borders between different sites were fluid — people would discover content on Facebook, Twitter, and LinkedIn, then click through to content (usually hosted on another site).

Today, social media sites are walled gardens. Algorithms have been rewritten to favor onsite content created specifically for that platform. Facebook Messenger and evolving paid tools like Lead Ads are becoming table-stakes marketing channels, meaning businesses can’t just “be on Facebook” — they must recreate their marketing motion in a second place.

Facebook and LinkedIn have also deprioritized showing content that links offsite in favor of family and friends’ content (on Facebook) and onsite video and text (on LinkedIn). Not only does your branded content have a harder time competing with other brands, it will also have to compete for attention with your prospects’ personal network. Twitter’s investment in streaming video partnerships with entertainment and news networks is a nod to bringing consumers content they’d watch anyway in a platform-owned experience.

Sites like Amazon and Facebook are also becoming starting points for search. Over half of product searches (52%) begin on Amazon, while 48% of searches for content originate on Facebook — almost equivalent to Google’s reach (52%). And both Amazon and Facebook sell targeted advertising space.

5-top-content-channels

Why is any of this important?

These algorithm changes reflect these companies’ desire to keep the audiences they own on their own sites. As long as they can monetize their traffic, they have no incentive to move back to the old passthrough model.

Increasingly, Facebook is a destination. Twitter is a destination. LinkedIn is a destination. It’s no longer enough to create a piece of content for your own site, then schedule out promotion across channels that point back to that content.

Savvy marketers know their ideas must be channel-agnostic and channel-specific at the same time. To get the most mileage out of a piece of content, its core concept must perform well across multiple channels, but marketers have to do more upfront work to create separate versions of this content to best suit the channel on which it’s appearing.

Trend 3: It’s getting more expensive to do marketing

Search and social media titans have moved their goalposts to create a more competitive content discovery landscape. At the same time, barriers to entry on these platforms are getting higher in two ways:

1. Organic acquisition costs are rising.

According to ProfitWell, overall customer acquisition costs (CAC) have been steadily rising for B2B and B2C companies.

Over the last five years, overall CAC has risen almost 50% — and while paid CAC is still higher than content marketing (organic) CAC, organic costs are rising at a faster rate.

Profitwell_RisingCAC (source: ProfitWell)

2. Content marketers are commanding higher salaries.

It’s not only harder to get value from content, it’s getting more expensive to create it. ProfitWell’s study examined the rise of content marketers’ salaries by location — median salary has risen 24.9% in metropolitan areas and 18.9% for remote workers in the last five years.

Profitwell_CMSalaries

(source: ProfitWell)

This rise is partially explained by changes in the content marketing profession. Google’s changing algorithm requires more specialized knowledge than ever. Not only are there specific optimization best practices to win featured snippets, Google’s current algorithmic model favors sites that are architected using the topic cluster model. Depending on the size of your site, this can be a massive undertaking — at HubSpot, it took us over six months to fully organize our blog content by this model.

Trend 4: GDPR

The following is not legal advice for your company to use in complying with EU data privacy laws like the GDPR. Instead, it provides background information to help you better understand the GDPR. This legal information is not the same as legal advice, where an attorney applies the law to your specific circumstances, so we insist that you consult an attorney if you’d like advice on your interpretation of this information or its accuracy. In a nutshell, you may not rely on this as legal advice, or as a recommendation of any particular legal understanding.

The General Data Protection Regulation recently passed by the European Union (EU) imposes new regulations on how businesses are allowed to obtain, store, manage, or process personal data of EU citizens.

At a high level, here’s what GDPR means for marketing teams:

  • Businesses collecting prospect data must explicitly state how that data will be used, and may only collect what’s necessary for that stated purpose
  • Businesses may only use that data for the specified purposes above, and ensure it’s stored according to GDPR provisions
  • Businesses may only keep personal data for as long as is necessary to fulfill the intended purpose of collection
  • EU citizens may request that businesses delete their personal data at any time, and businesses must comply

GDPR doesn’t go into effect until May 25, 2018, so it’s hard to predict the exact impact it will have on lead generation and collection. But we feel confident that GDPR is the first step toward more regulation of how businesses interact with consumers globally, further limiting your marketing team’s power.

In combination, these four trends mean that:

  • It’s harder to stand out in a crowded internet
  • It’s more expensive to find talent and produce content
  • Algorithmic changes will force investment in a multichannel marketing strategy

So it’s getting harder to get prospects to your site. But once you get them in the door, it should be standard operating procedure to get those deals closed, right? Turns out … not quite.

Sales is getting harder, too

Every year, HubSpot surveys thousands of marketers and salespeople to identify the primary trends and challenges they face. And year after year, salespeople report that their jobs are becoming more difficult.

Consider this chart (a preview of State of Inbound 2018).

8-prospecting-is-harder

A whopping 40% of respondents reported that getting a response from prospects has gotten harder, while 29% and 28% respectively identify phone connections and prospecting as pain points.

Almost a third (31%) have to engage with multiple decision makers to move a single deal forward, and just as many find it difficult to close those deals.

Salespeople have to overcome an additional challenge on top of these sobering statistics: They aren’t trusted. Year over year, consumers report that salespeople are their least trusted source of information when making purchase decisions.

9-sources-of-information

And even when there’s no purchase decision being made, salespeople don’t have a great reputation. A 2016 HubSpot study found that sales and marketing are among the least trusted professions — ranking only above stockbrokers, car salesmen, politicians, and lobbyists.

10-marketing-sales-trust

For software companies, sales is becoming a more technical field. Buyers contact sales later in the process — more prefer a “try before you buy” approach through free trials or “freemium” versions of paid products. At these companies, the actual onboarding flow and user experience of the product are often more important than the sales team, as most customers become free users before ever speaking with a human.

In the same way that a big chunk of sales work was consumed by marketers 10 years ago, a big chunk of sales work today is being consumed by developers and growth marketers.

The implications are clear. Buyers no longer rely on salespeople to steward them through a purchasing process, preferring to do independent research or lean on their networks for an opinion. The inherent distrust of the profession is diluting salespeople’s influence in the purchasing process, making your acquisition strategy less and less reliable.

This is scary stuff. But there’s a bright side. Within the pain of change lies opportunity, and your business boasts a huge, overlooked source of growth you probably haven’t invested in at all — your customers.

Your customers are your best growth opportunity

When you’re growing a business, two numbers matter more than anything else:

  1. How much it costs to acquire a new customer (or “CAC”)
  2. That customer’s lifetime value — how much they’ll spend with you over their lifetime (or “LTV”)

For many years, most businesses (us included) focused on lowering CAC. Inbound marketing made this relatively easy, but the new rules of the internet mean this is getting harder. As Facebook, Amazon, and Google tighten their grips on content, the big opportunity for today’s companies is raising LTV.
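To make those two numbers concrete, here is a minimal sketch of the comparison; the dollar figures and the simple churn-based lifetime formula are illustrative assumptions, not figures from this study:

```python
# Illustrative sketch: comparing CAC to LTV for a subscription-style business.
# All numbers below are hypothetical.

def lifetime_value(monthly_revenue_per_customer: float, monthly_churn_rate: float) -> float:
    """Approximate LTV as monthly revenue divided by monthly churn
    (a customer churning at 5%/month stays roughly 20 months on average)."""
    return monthly_revenue_per_customer / monthly_churn_rate

cac = 10.0  # assumed cost to acquire one customer
ltv = lifetime_value(monthly_revenue_per_customer=5.0, monthly_churn_rate=0.05)

print(f"LTV: ${ltv:.2f}")                 # ~$100 over the customer's lifetime
print(f"LTV:CAC ratio: {ltv / cac:.1f}")  # raising LTV improves this ratio without cheaper acquisition
```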

If your customers are unhappy, you might be in trouble. But if you’ve invested in their experience, you’re well-poised to grow from their success.

When you have a base of successful customers who are willing and able to spread the good word about your business, you create a virtuous cycle.

Happy customers transform your business from a funnel-based go-to-market strategy into a flywheel. Through promoting your brand, they’re supplementing your in-house acquisition efforts. This creates a flywheel where post-sale investments like customer service actually feed “top of the funnel” activities.

flywheel-product

Buyers trust people over brands, and brands are getting crowded out of their traditional spaces, so why throw more money at the same go-to-market strategy when you could activate a group of people who already know and trust you?

Customers are a source of growth you already own, and a better and more trusted way for prospects to learn about your business. The happier your customers, the more willing they are to promote your brand, the faster your flywheel spins, and the faster your business grows. Not only is this the right thing to do by your customers, it’s the financially savvy thing to do for your business. It’s a win-win-win.

At some point, your acquisition math will break

More and more businesses are moving to a recurring-revenue, or subscription-based model. A recurring revenue model means customers pay a monthly fee for membership or access to products.

A recurring revenue model makes it easy to project expected revenue over a set period of time. Understanding how money moves in and out of the business makes headcount planning, expansion planning, and R&D efforts far easier.
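As a quick illustration of why projection is straightforward under recurring revenue, here is a minimal sketch; the starting revenue, churn, and new-business figures are hypothetical:

```python
# Hypothetical projection of monthly recurring revenue (MRR) under a subscription model.
# Each month, a share of existing revenue churns out and new business is layered on top.

starting_mrr = 10_000.0    # assumed current MRR
monthly_churn = 0.03       # assumed 3% of MRR lost each month
new_mrr_per_month = 800.0  # assumed new business signed each month

mrr = starting_mrr
for month in range(1, 13):
    mrr = mrr * (1 - monthly_churn) + new_mrr_per_month
    print(f"Month {month:2d}: projected MRR ${mrr:,.0f}")
```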

Luckily it doesn’t matter if your company is subscription-based or not — a recurring revenue model contains lessons that apply to all businesses. Before we dive in, there are three core assumptions this model relies on.

12-recurring-revenue-assumptions

First, every business has a defined total addressable market, or TAM. Your TAM is the maximum potential of your market. It can be bound by geography, profession, age, and more — but in general, every product serves a finite market.

Second, every company aims to create repeat customers — not just subscription-based ones. All of the examples below are businesses that benefit from recurring revenue, even if it’s not formalized through contracts or subscription fees:

  • A beauty products store where customers typically purchase refills once every three months
  • A hotel chain that becomes the default choice for a frequent traveler
  • A neighborhood restaurant that’s cornered the market on Saturday date nights

Third, the key to growth is to retain the customers you already have, while expanding into the portion of your TAM that you haven’t won yet.

The easiest way to understand why thinking about your business like a subscription-based company is valuable is to walk through the following hypothetical example — let’s call it Minilytics Inc.

13-acquisition-math-broken

Minilytics starts with a customer base of 10 people, and a churn rate of 30% — meaning three of their customers will not buy from them again. Each of Minilytics’ salespeople can sell five new customers per month. Because Minilytics’ customer base is so small, they only need to hire one salesperson to grow.

Fast forward several months, and Minilytics now has 50 customers, 15 of whom churn. To grow, Minilytics’ CEO has to bring on three more salespeople, who create additional overhead cost — their salaries.

You can probably see where this is going. At 100 or 1,000 customers, Minilytics’ CEO simply cannot hire enough salespeople to grow. The sheer cost of paying a staff to simply maintain a business that’s losing 30% of its customers each month will shutter most businesses on its own.

While Minilytics is struggling to plug the leaks in its business, something else is happening that will tank the company — even with an army of salespeople.

Remember TAM? While Minilytics’ CEO was hiring salespeople to replace churned customers, the company was also rapidly burning through its TAM. Generally, customers that churn do not come back — it’s hard enough to gain a consumer’s trust. To break trust through a poor experience, then try to rebuild it, is nearly impossible.

Even if Minilytics can afford a rapidly expanding sales team, it has been burning through its TAM. Eventually, Minilytics will exhaust its entire total addressable market — and there will be no room left to grow.
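Here is a rough simulation in the spirit of the Minilytics example; the churn rate and five-customers-per-rep figures come from the example above, while the TAM size and hiring rule are assumptions added for illustration:

```python
import math

# Rough simulation of the Minilytics example: 30% monthly churn, each salesperson
# signs five new customers per month, and a finite total addressable market (TAM).
# The TAM size and the hiring rule are illustrative assumptions.

TAM = 1_000            # assumed total addressable market, in customers
PER_REP = 5            # new customers each salesperson signs per month
CHURN_RATE = 0.30      # monthly churn, as in the example

customers, reps = 10, 1
prospects_left = TAM - customers

for month in range(1, 25):
    churned = round(customers * CHURN_RATE)
    new = min(reps * PER_REP, prospects_left)   # can't sign customers who don't exist
    prospects_left -= new                       # churned customers rarely come back
    customers += new - churned
    # Hire just enough reps to replace next month's expected churn, plus one to grow.
    reps = max(reps, math.ceil(customers * CHURN_RATE / PER_REP) + 1)
    print(f"Month {month:2d}: {customers:4d} customers, {reps:3d} reps, "
          f"{prospects_left:4d} prospects left in the TAM")
```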

Luckily, Minilytics isn’t destined for this fate. Let’s rewind to that first month and explore what they could have done differently.

Building a good customer experience is the foundation of growth

Growing a sustainable company is all about leverage.

In plain English, if you can identify the parts of your business model that require great effort but provide little reward, then re-engineer them to cost you less effort or provide more reward, you’ve identified a point of leverage.

Most companies hunt for leverage in their go-to-market strategy, which usually involves pouring money into marketing or sales efforts. Customer service, customer success, customer support — or whatever you call it (at HubSpot, we have a separate team dedicated to each function, but we’re the exception) — has traditionally been viewed as a cost center, not a profit center.

It’s not hard to understand why. The ROI of sales and marketing investment is immediately tangible, while investment in customer service is a long game.

But most companies mistakenly try to optimize for fewer customer interactions, which just means issues don’t get addressed. Because they’re thinking short-term, it ends up costing them dearly in the long term. Too many businesses think once a sale is made and the check’s cleared, it’s on to acquire the next new customer.

That doesn’t work anymore. The hardest part of the customer lifecycle isn’t attracting customers’ attention or closing the deal — it’s the journey that begins post-sale.

Once your customers are out in the wild with your product, they’re free to do, say, and share whatever they want about it. Yet that’s exactly when many companies drop the ball, providing little guidance and bare-bones or difficult-to-navigate customer support. This approach, quite frankly, makes no sense.

Think about it this way: You control every part of your marketing and sales experience. Your marketing team carefully crafts campaigns to reach the right audiences. Your sales team follows a playbook when prospecting, qualifying, and closing customers. Those processes were put in place for a reason — because they’re a set of repeatable, teachable activities you know lead to consistent acquisition outcomes.

Once a customer has your product in their hands, one of two things will happen: Either they will see success, or they will not. If they’re a new customer or first-time user, they might need help understanding how to use it, or want to learn from other people who have used your product, or want recommendations on how best to use it. Regardless of what roadblocks they run into, one thing is for sure: There’s no guarantee they’ll achieve what they want to achieve.

This is a gaping hole in your business. No one is better positioned to teach your customers about your product than you. No one has more data on what makes your customers successful than you. And no one stands to lose more from getting the customer experience wrong than you.

Let me say that one more time, because it’s important: Nobody has more skin in this game than you. In our survey, 80% of respondents said they’d stopped doing business with a company because of a poor customer experience. If your customers are dissatisfied, they can — and will — switch to another provider.

There are very few businesses in today’s market with no competitors. Once you lose a customer, you are most likely not getting them back. If you fail to make your customers successful, you will fail too.

Not convinced? Here’s another way to think about how to best allocate money between marketing, sales, and customer service.

Consider Minilytics and Biglytics, both with a CAC of $10 and a budget of $100.

14-customer-service-revenue

Minilytics hasn’t invested in a well-staffed or well-trained customer service team, so their churn rate is 30%. Three customers churn, so they spend $30 replacing them. All of the remaining $70 is spent on acquisition, ending with 17 customers.

At Biglytics, things are different. Customer service isn’t the biggest part of the budget, but the team is paid well, trained well, and knowledgeable enough to coach customers who need help.

Because Biglytics has proactively spent $10 of its budget on customer service, their churn rate is much lower, at 10% (for the record, a churn rate of 10% is terrible — we just chose it to keep numbers simple). Biglytics replaces their single churned customer for $10 and spends the remaining $80 on eight new customers, netting out at 18 customers.

A one-customer difference doesn’t seem significant. But that $10 Biglytics invested in their customer service team has been working in the background. Customers they brought on last year have seen success with the product because of great customer service, and have been talking Biglytics up to their friends, family, and colleagues. Through referrals and recommendations, Biglytics brings on five more customers without much extra work from the sales team.

This means Biglytics has not only brought on six more customers than Minilytics in the same timespan, it has also brought its average CAC down to $7.14.
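The arithmetic behind that comparison is easy to check in a few lines; the churn rates, budget, CAC, and the five referral customers are taken from the example above, and the rest follows from them:

```python
# Reproducing the Minilytics vs. Biglytics budget math from the example above.

BUDGET = 100.0
CAC = 10.0
STARTING_CUSTOMERS = 10

def end_of_period(churn_rate: float, service_spend: float, referrals: int = 0):
    """Replace churned customers, spend what's left of the budget on acquisition,
    and add any referral-driven customers that arrive at no acquisition cost."""
    churned = round(STARTING_CUSTOMERS * churn_rate)
    acquisition_budget = BUDGET - service_spend - churned * CAC
    new_customers = int(acquisition_budget // CAC)
    ending = STARTING_CUSTOMERS + new_customers + referrals   # churned customers are replaced 1:1
    acquired = churned + new_customers + referrals
    return ending, BUDGET / acquired

minilytics = end_of_period(churn_rate=0.30, service_spend=0.0)
biglytics = end_of_period(churn_rate=0.10, service_spend=10.0, referrals=5)

print(f"Minilytics: {minilytics[0]} customers, average CAC ${minilytics[1]:.2f}")  # 17 customers, $10.00
print(f"Biglytics:  {biglytics[0]} customers, average CAC ${biglytics[1]:.2f}")    # 23 customers, $7.14
```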

Which company would you rather bet on? I’m guessing it’s Biglytics.

This is why investment in customer service is so powerful. Taking the long view enables you to grow more. It costs anywhere from 5 to 25 times more to acquire a new customer than to retain an existing one.

Prioritizing short-term growth at the expense of customer happiness is a surefire way to ensure you’ll be pouring money into the business just to stay in maintenance mode.

The 4 points of leverage in the customer experience

A good customer experience goes beyond hiring support staff — it starts pre-sale. Here are the four points of leverage we’d recommend you start working on today.

15-points-of-leverage

1) Pre-sale: Understand customer goals

People buy products to fix a problem or improve their lives — to get closer to an ideal state, from their current state.

Your job is to help them get there. Depending on what you sell, much of the work required to make your customers successful might not be done until post-sale through coaching and customer support. But if you understand the most common goals your customers have, you can reverse-engineer your acquisition strategy.

Emily Haahr, VP of global support and services at HubSpot, explains how this works:

“Your best customers stay with you because they get value from your products. Dissect your most successful customers and trace back to how they found you in the first place.

What marketing brought them to your site? What free tool or piece of content converted them to a lead? What type of onboarding did they receive and with who? What steps did they take in onboarding? And so on…

Once you have this information, you can identify and target the best fits for your product earlier, then proactively guide your customers down a path of success, instead of trying to save them once they’ve reached the point of no return.”

2) Pre-sale: Make it easier for your customers to buy

Consider whether your sales process could be easier to navigate. Today’s buyers often don’t want to talk to a salesperson or pay money before they know how well a product works, so empower them to evaluate and buy on their own terms.

If possible, take a page out of “freemium” companies’ playbook. Can you give away part of your product or service at scale so prospects can try before they buy? This way, they’ll qualify themselves and learn how to use your product before you ever have to lift a finger. Anecdotally, we at HubSpot have seen the most rapid growth in our acquisition through self-service purchases.

Also evaluate what parts of your marketing and sales process can be automated. The more you can take off your marketers’ and salespeople’s plates, the better — and you’ll be giving your buyers more control over their purchase at the same time.

This change is already happening. Think about Netflix, Spotify, and Uber. All three companies disrupted industries that had friction built into their go-to-market.

16-disruptors

People wanted to watch movies, but they didn’t want to pay late fees. Hello, Netflix.

People wanted to listen to music, but they didn’t want to pay for individual songs or albums. Hello, Spotify.

People wanted to be driven between Point A and Point B, but they didn’t want to wait for cabs in the rain. Hello, Uber.

Today’s biggest disruptors got to where they are by disrupting inconvenience. Hurdles are the enemy — remove as many as you can.

3) Post-sale: Invest in your customers’ success

I scaled the HubSpot customer service team to over 250 employees, and there are a few things you can do to make your customers happier (and your employees’ lives better):

Gather feedback — NPS® or otherwise

As early as possible, start surveying your customer base to understand how likely they are to recommend your product to a friend. You can also send out post-case surveys to customers whose issues your team has helped resolve.

At HubSpot, we track Net Promoter Score (or NPS) maniacally — it’s a company-level metric that we all work toward improving. This helps us:

  • Identify holes in our customer service early
  • Track customer sentiment over time — the trend of NPS is far more useful than one raw number
  • Quantify the value of customer happiness — when we changed a customer from a detractor to a promoter, that change increased LTV by 10-15%

Start small, with a post-support case NPS so you know whether immediate issues were resolved. You can build up to a quarterly or monthly NPS survey of your full customer base that focuses on their general experience with the product.
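If you are setting up NPS for the first time, the scoring itself is simple: promoters (9-10) minus detractors (0-6), as a percentage of all respondents. Here is a minimal sketch with made-up responses:

```python
# Minimal NPS calculation: respondents rate 0-10 how likely they are to recommend you.
# 9-10 = promoter, 7-8 = passive, 0-6 = detractor; NPS = % promoters - % detractors.

def net_promoter_score(ratings: list[int]) -> float:
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical batch of post-support-case survey responses.
responses = [10, 9, 8, 6, 10, 7, 9, 3, 10, 8]
print(f"NPS: {net_promoter_score(responses):.0f}")  # 5 promoters, 2 detractors -> NPS 30
```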

Build up a lightweight knowledge base

Self-service is the name of the game. Identify your most commonly asked customer questions or encountered issues, then write up the answers into a simple FAQ page or the beginnings of a searchable knowledge base. This will enable your customers to search for their own solutions, instead of waiting on hold to get human support. As an added bonus, it will take work off your team’s plate.
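As a starting point, even a tiny keyword-matched FAQ can deflect common questions; here is a minimal sketch in which the questions, answers, and stop-word list are hypothetical placeholders:

```python
# A tiny keyword-searchable FAQ as the seed of a knowledge base.
# The questions, answers, and stop words below are hypothetical placeholders.

FAQ = {
    "How do I change my password?": "Go to Settings > Account > Change password.",
    "How do I track my order?": "Use the tracking link in your confirmation email.",
    "What is your return policy?": "Items can be returned within 30 days of delivery.",
}

STOP_WORDS = {"how", "do", "i", "my", "can", "what", "is", "your", "the", "a", "to"}

def search_faq(query: str) -> list[tuple[str, str]]:
    """Return FAQ entries whose question shares a meaningful keyword with the query."""
    query_words = set(query.lower().split()) - STOP_WORDS
    return [
        (question, answer)
        for question, answer in FAQ.items()
        if query_words & (set(question.lower().rstrip("?").split()) - STOP_WORDS)
    ]

for question, answer in search_faq("how can I change my password"):
    print(f"Q: {question}\nA: {answer}\n")
```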

4) Post-success: Activate happy customers into advocates

Once you have happy and successful customers, it’s time to put them to work for you.

Take a look at this chart again. Notice anything interesting?

9-sources-of-information

Buyers report that their two most trusted sources of information when making a purchase decision are word-of-mouth referrals and customer references. They’re listening to your customers, not you. Use your customers as a source of referrals, as social proof for your business (through testimonials, case studies, and references), and for brand amplification.

The key to successful customer advocacy is to not ask for anything too early. Don’t try to extract value from your customers until you’ve provided value — asking for five referrals a week after they’ve signed a contract is inappropriate. Your primary goal should always be to make your customers successful. After you’ve done that, you can ask for something in return.

Putting it all together: The inbound service framework

This methodology is a direct result of my eight years at HubSpot. We’ve made a lot of mistakes but learned even more about how to build a repeatable playbook for leading your customers to success and eventually turning them into promoters.

We call it the inbound service framework.

services-framework

Step 1: Engage

Good customer service is the foundation for everything else — that’s why “Engage” is the first part of this framework. At this stage, your only concern should be understanding the breadth and depth of customer questions, and resolving them.

When you’re just getting started with the customer service function, cast a wide net. Engage with any customer, wherever, about whatever they want. Be on all the channels, try to solve whatever problems come your way, and help anyone who needs it.

Above all, make sure you’re easy to interact with. At HubSpot, we found that customers who submit at least one support ticket a month retain at a rate around 10% higher than customers who don’t, and are 9-10 times more likely to renew with HubSpot year over year. Not getting support tickets does not mean your product has no issues — every product does. It means your customers are silently suffering.

As your team gets more sophisticated, you’ll be able to refine your approach, but this initial operating system helps you gather lots of data very quickly. At this stage, your objective should be to learn as much as possible about the following:

  • FAQs requiring customized guidance
  • FAQs that can be addressed with a canned response
  • Most confusing parts of your product/service
  • When support issues arise — do people require implementation help or do they encounter issues three months after purchase?
  • Commonalities of customers who need the most help
  • Your customers’ preferred support channels

This information will empower you to identify huge points of leverage in your customer service motion. For example, if you find that 30% of customer queries have quick, one-and-done answers (i.e. “How do I change my password?”, “How do I track my order?”, “What is your return policy?”), stand up a simple FAQ page to direct customers to. Boom — you’ve freed up 30% of your team’s time to work on more complicated, specific issues.

Empower your customer team to make noise about the problems they see, early and often, and turn their insights into action.

Are your sales and marketing teams overpromising to your customers? Your customer team will hear these complaints first. Trace the points of confusion back to their origin, and change your sales talk tracks and marketing collateral to reflect reality.

Is there something about your product that causes mass confusion? Your customer team will know which parts of your product/service are most difficult to navigate and why. Use this information to improve your product/service itself, eliminating these problems at the source.

Do certain types of customers run into issues frequently? Do they usually churn or do they just require a little extra love to get over the hump? If it’s the former, build an “anti-persona” your sales and marketing teams should avoid marketing and selling to. If it’s the latter, dive into that cohort of customers to understand whether the extra help they need is justified by their lifetime value.

As you learn more about your customers, you’ll also learn how to best optimize your own process. Identify the most effective support channels for your team and create a great experience for those, then establish a single queue to manage all inquiries.

In this stage, measure success by how fast you solve problems and post-case customer satisfaction. You can do the latter through a post-case NPS survey, which gives you instant feedback on how effective your customer team actually is.

Step 2: Guide

In the “Guide” stage, your goal is to turn your relationship with your customers from a reactive, transactional model into a proactive partnership. It’s time to level your customer team up from a supportive function into a customer success-driven organization. (The reactive part of your customer service organization will never go away. But as you grow, it should become part of a multifunctional group.)

What does it mean to be proactive?

First, it means anticipating common issues and challenges and building resources to prevent them. This includes things like a knowledge base or FAQ, as well as re-engineering parts of your offering to be more user-friendly and intuitive.

Second, it means partnering with your customers to help them get to their stated goals. Guide them through key milestones, provide tasks to keep them on track, and connect them with peers so they can crowdsource answers if necessary. Create frameworks and tutorials where you can.

It’s better to be proactive than reactive for a few reasons:

  1. It saves time, and time is money. Imagine all the hours you’ll save your support team if you can get repetitive queries off their plates. That’s time they should spend on complex, higher-level issues that could reveal even larger points of leverage in your business, and so on.
  2. It makes your customers happier. Even if you’re able to resolve issues at a 100% success rate with 100% satisfaction, you’ve still built an experience filled with roadblocks. Aim to build a world where you’ve anticipated your customers’ challenges and solved them at the source.
  3. It builds a trusting relationship. Customers may not see all the issues you proactively guard against (after all, a problem prevented is an invisible type of value), but they’ll recognize a relatively issue-free customer experience. Buyers are more likely to trust the company that rarely lets them down, over a brand that’s constantly scrambling to fix the next issue.
  4. It’s a competitive advantage. Proactive guidance takes the wealth of knowledge you have about what makes customers successful and puts it into your customers’ brains. You know what your best customers have in common and the mistakes your least successful ones make. That knowledge is a core part of what you’re selling, even if it’s positioned as customer service.

As you move beyond reactive support into proactive guidance, you become a teacher, not a vendor. Other companies might be able to build a product as good as yours, but it’ll be difficult for them to replicate the trust you have with your customers.

Guidance is an iterative process. As in the early days of your customer organization, collect as much data as you can on the customer lifecycle, and continuously update your guidance to reflect current best practices. Pay attention to the formats and channels that work best, which issues have the highest impact for your customers once solved, and update your process accordingly.

Step 3: Grow

Happy customers want to support the businesses they love. At companies with excellent customer service, 90% of consumers are more likely to purchase more, and 93% are more likely to be repeat customers.

At the same time, 77% of consumers have shared positive experiences with friends, or on social media or review sites, in the last year.

Your happy buyers want to help you. That’s what the “Grow” stage is all about: Turning that desire into action.

There are three ways to activate your customer base into promoters, says Laurie Aquilante, HubSpot’s director of customer marketing: Social proof, brand amplification, and referrals. Let’s review each play.

1) Social proof

Buyers are more likely to trust and do business with companies their networks trust. The following are all different ways your customers can create social proof for your product:

  • Sharing positive experiences on social media or review sites
  • Providing referrals (more on this later)
  • Testimonials/case studies
  • Customer references in the sales process

Activating social proof is all about keeping a close watch on your customers. It’s also not a pick-one-and-done kind of thing, Aquilante says. Case studies and customer references, for example, are useful at different points in your sales motion. While you could use the same customer for both, it’s probably more useful to have a stable of customers on hand that can speak to a diverse range of experiences.

Encourage social proof by proactively reaching out to your most satisfied customers, who will likely be excited to help you. You can also provide incentives for sharing content or writing online reviews.

2) Brand amplification

When someone shares your content on social media, helps contribute content for a campaign, or interacts with your content, they’re amplifying your brand.

“To make this happen, you have to provide a ‘what’s in it for me?’,” Aquilante says. “Either create such engaging content and be so remarkable that your customers can’t help but amplify your message, or provide an incentive, like points toward future rewards or something more transactional, like getting a gift card for sharing content a certain amount of times.”

3) Referrals

Referrals have the most immediate monetary value for your business. B2C companies are masters at the referral game, usually awarding the referrer credit to their account or even monetary rewards.

B2B referrals are a bit trickier. B2B purchases tend to be more expensive than B2C purchases, and usually involve multiple stakeholders and a longer sales process. So your customer likely has to do some selling upfront to feel comfortable sending a contact’s information to you. It’s not impossible to get this right, but it’s crucial to offer your customer something valuable enough to incentivize them to do this work on your behalf.

“Get in your customer’s head, figure out what matters to them, and make sure that you’ve got a good exchange of value,” Aquilante says. “They’re offering you something of very high value if they’re referring your business, and you’ve got to offer something in return.”

4) Upsells and cross-sells

A note from our lawyers: These results contain estimates by HubSpot and are intended for informational purposes only. As past performance does not guarantee future results, the estimates provided in this report may have no bearing on, and may not be indicative of, any returns that may be realized through participation in HubSpot’s offerings.

Aside from promoting your brand and bringing you business, your customers are themselves a source of net new revenue, if you have multiple products or services. Your customer team is your not-so-secret weapon in unlocking this revenue.

In late 2017, we piloted the concept of “Support-Qualified Leads” at HubSpot. Our sales team owns selling new business and upselling/cross-selling those accounts. But our support team is the one actually speaking with customers day in and day out, so it’s intuitive that they have a better understanding of when customers reach a point where their needs grow past the products they currently have. When a customer has a new business need and has the budget to expand their offerings with HubSpot, a customer success representative passes the lead to the appropriate salesperson, who takes over the sales conversation.

The Support-Qualified Lead model is powerful because it closes the loop of communication between sales and support, and it works — in its first month, the pilot generated almost $20,000 in annual recurring revenue just from cross-sells and upsells. Since we’ve rolled this out, we’ve generated over $470,000 in annual recurring revenue just from this model — nothing to sneeze at.

Growth has always been hard. If you’re just starting out, it’s hard to imagine ever competing with the top companies in your industry.

Customer service is the key to this equation. If you provide an excellent customer experience and can create a community of people who are willing to promote your business on your behalf, you’re laying the groundwork for sustainable, long term growth. And in a world where acquisition is hard and getting harder, who wouldn’t want that?

https://research.hubspot.com/customer-acquisition-study

Ecology – Futurecasting ecological research: the rise of technoecology

Abstract

Increasingly complex research questions and global challenges (e.g., climate change and biodiversity loss) are driving rapid development, refinement, and uses of technology in ecology. This trend is spawning a distinct sub‐discipline, here termed “technoecology.” We highlight recent ground‐breaking and transformative technological advances for studying species and environments: bio‐batteries, low‐power and long‐range telemetry, the Internet of things, swarm theory, 3D printing, mapping molecular movement, and low‐power computers. These technologies have the potential to revolutionize ecology by providing “next‐generation” ecological data, particularly when integrated with each other, and in doing so could be applied to address a diverse range of requirements (e.g., pest and wildlife management, informing environmental policy and decision making). Critical to technoecology’s rate of advancement and uptake by ecologists and environmental managers will be fostering increased interdisciplinary collaboration. Ideally, such partnerships will span the conception, implementation, and enhancement phases of ideas, bridging the university, public, and private sectors.

Introduction

Ecosystems are complex and dynamic, and the relationships among their many components are often difficult to measure (Bolliger et al. 2005, Ascough et al. 2008). Ecologists often rely on technology to quantify ecological phenomena (Keller et al. 2008). Technological advancements have often been the catalyst for enhanced understanding of ecosystem function and dynamics (Fig. 1, Table 1), which in turn aids environmental management. For example, the inception of VHF telemetry to track animals in the 1960s allowed ecologists to remotely monitor the physiology, movement, resource selection, and demographics of wild animals for the first time (Tester et al. 1964). However, advancements in GPS and satellite communications technology have largely supplanted most uses for VHF tracking. Compared with VHF, GPS has the ability to log locations, as well as higher recording frequency, greater accuracy and precision, and less researcher interference with the animals, leading to an enhanced, more detailed understanding of species habitat use and interactions (Rodgers et al. 1996). This has assisted in species management by not only highlighting important areas to protect (Pendoley et al. 2014), but also identifying key resources such as individual plants instead of general areas of vegetation.

Illustrative timeline of new technologies in ecology and environmental science (see Table 1 for technology descriptions).
Table 1. Timeline of new technologies in ecology and environmental science, to accompany information in Fig. 1
Technology: Description

Past
Sonar: First used to locate and record schools of fish
Automated sensors: Sensors specifically used to measure and log environmental variables
Camera traps: First implemented to record wildlife presence and behavior
Sidescan sonar: Used to efficiently create an image of large areas of the sea floor
Mainframe computers: Computers able to undertake ecological statistical analysis of large datasets
VHF tracking: Radio tracking, allowing ecologists to remotely monitor wild animals
Landsat imagery: The first space-based, land-remote sensing data
Sanger sequencing: The first method to sequence DNA, based on the selective incorporation of chain-terminating dideoxynucleotides by DNA polymerase during in vitro DNA replication
LiDAR: Remote sensors that measure distance by illuminating a target with a laser and analyzing the reflected light
Multispectral Landsat: Satellite imagery with different wavelength bands along the spectrum, allowing for measurements through water and vegetation
Thermal bio-loggers: Surgically implanted devices to measure animal body temperature
GPS tracking: Satellite tracking of wildlife with higher recording frequency, greater accuracy and precision, and less researcher interference than VHF
Thematic Landsat: A whisk-broom scanner operating across seven wavelengths, able to measure indicators of global warming and climate change
Infrared camera traps: Able to sense animal movement in the dark and take images without a visible flash
Multibeam sonar: Transmits broad, fan-shaped acoustic pulses to establish a full water-column profile
Video traps: Video instead of still imagery, able to determine animal behavior as well as identification

Present
Accelerometers: Measure animal movement (acceleration) independently of satellite reception (geographic position)
3D LiDAR: Accurate measurement of 3D ecosystem structure
Autonomous vehicles: Unmanned sensor platforms that collect ecological data automatically and remotely, including in terrain that is difficult and/or dangerous for humans to access
3D tracking: The use of inertial measurement unit devices in conjunction with GPS data to create real-time animal movement tracks
ICARUS: The International Cooperation for Animal Research Using Space (ICARUS) Initiative, which aims to observe global migratory movements of small animals through a satellite system
Next-gen sequencing: Millions of fragments of DNA from a single sample can be sequenced in unison
Long-range, low-power telemetry: Low-voltage, low-amperage transfer of data over several kilometers

Future
Internet of things: A network of devices that can communicate with one another, transferring information and processing data
Low-power computers: Small computers with the ability to connect an array of sensors and, in some cases, run algorithms and statistical analyses
Swarm theory: The autonomous but coordinated use of multiple unmanned sensor platforms to complete ecological surveys or tasks without human intervention
3D printing: The construction of custom equipment and of animal analogues for behavioral studies
Mapping molecular movement: Cameras that can display images at a sub-cellular level without the need for electron microscopes
Biotic gaming: Human players control a paramecium much as in a video game, which could aid the understanding of microorganism behavior
Bio-batteries: Electro-biochemical devices that can run on compounds such as starch, allowing sensors and devices to be powered for extended periods in remote locations where more traditional energy sources such as solar power may be unreliable (e.g., rainforests)
Kinetic batteries: Batteries charged via movement that are able to power microcomputers

Ecological advances to date are driven by technology primarily relating to enhanced data capture. Expanding technologies have focused on the collection of high spatial and temporal resolution information. For example, small, unmanned aircraft can currently map landscapes with sub‐centimeter resolution (Anderson and Gaston 2013), while temperature, humidity, and light sensors can be densely deployed (hundreds per hectare) to record micro‐climatic variations (Keller et al. 2008). Such advances in data acquisition technologies have delivered knowledge of the natural environment unthinkable just a decade ago. But what does the future hold?

Here, we argue that ecology could be on the precipice of a revolution in data acquisition. It will occur within three concepts: supersize (the expansion of current practice), step‐change (the ability to use technology to address questions we previously could not), and radical change (exploring questions we could not previously imagine). Technologies, both current and emerging, have the capacity to spawn this “next‐generation” ecological data that, if harnessed effectively, will transform our understanding of the ecological world (Snaddon et al. 2013). What we term “technoecology” is the hardware side of “big data” (Howe et al. 2008), focused on the employment of cutting edge physical technology to acquire new volumes and forms of ecological data. Such data can help address complex and pressing global issues of ecological and conservation concern (Pimm et al. 2015). However, the pace of this revolution will be determined in part by how quickly ecologists embrace these technologies. The purpose of this article is to bring to the attention of ecologists some examples of current, emerging, and conceptual technologies that will be at the forefront of this revolution, in order to hasten the uptake of these more recent developments in technoecology.

Technoecology’s Application and Potential

Bio‐loggers: recording the movement of animals

Bio-logging technology is not new to ecology; it already incorporates sensors such as heart rate loggers, as well as VHF and GPS technology. Rather, bio-logging is being supersized, expanding current practice with new technology. Accelerometers are being used to record fine-scale animal movement in real time, something which was previously possible only via direct observation (Shamoun-Baranes et al. 2012). Using accelerometry, we can calculate an animal’s rate of energy expenditure (Wilson et al. 2006), allowing ecologists to attribute a “cost” to different activities and to relate that cost to environmental variation.
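As an illustration of how raw accelerometry translates into an energy proxy, one widely used metric in this literature is overall dynamic body acceleration (ODBA), in the spirit of Wilson et al. (2006). The sketch below is a minimal, hypothetical implementation; the column names, sampling rate, and smoothing window are assumptions rather than values from any particular study.

```python
import pandas as pd

def odba(acc: pd.DataFrame, fs_hz: int = 25, window_s: int = 2) -> pd.Series:
    """Overall dynamic body acceleration (ODBA) from a tri-axial accelerometer trace.

    acc      : DataFrame with columns 'ax', 'ay', 'az' in units of g (assumed names).
    fs_hz    : logger sampling frequency (assumed 25 Hz).
    window_s : running-mean window (s) used to estimate the static, gravitational component.
    """
    axes = acc[["ax", "ay", "az"]]
    static = axes.rolling(fs_hz * window_s, center=True, min_periods=1).mean()
    dynamic = (axes - static).abs()          # dynamic (movement-related) acceleration
    return dynamic.sum(axis=1)               # ODBA per sample

# Averaging ODBA over observer-scored bouts gives a relative energetic "cost" per activity:
# acc.groupby("behaviour").apply(lambda bout: odba(bout).mean())
```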

Bio-loggers are also causing a step-change in the questions we can explore in animal movement. Real-time three-dimensional animal movement tracks can now be recreated from data collected by inertial measurement units, which incorporate accelerometers, gyroscopes, magnetometers, and barometers. This technology has been used to examine the movements of cryptic animals such as birds (Aldoumani et al. 2016) and whales (Lopez et al. 2016) to determine both how they move and how they respond to external stimuli. Incorporating GPS technology would allow animal movement to be placed spatially within 3D-rendered environments and permit examination of how individuals respond to each other, creating a radical change in the discipline of animal movement. Over the last 50 yr, we have gone from simply locating animals to reconstructing behavioral states and estimating energy expenditure using these technological advancements.

Bio‐batteries: plugging‐in to trees to run field equipment

Bio-batteries are new-generation fuel cells that will supersize both the volume and the scale of data that can be collected. Bio-batteries convert chemical energy into electricity using low-cost biocatalyst enzymes. Also known as enzymatic fuel cells, these electro-biochemical devices can run on compounds such as starch in plants, the most widely used energy-storage compound in nature (Zhu et al. 2014). While still in early development, bio-batteries have huge potential for research. Enzymatic fuel cells containing a 15% (wt/v) maltodextrin solution have an energy-storage density of 596 Ah/kg, one order of magnitude higher than that of lithium-ion batteries. Imagine future ecologists “plugging in” to trees, receiving a continuous electricity supply to run long-term sampling and monitoring equipment such as temperature probes and humidity sensors. Further, the capabilities of bio-batteries combined with low-power radio communication devices (see Next-generation Ecology) could revolutionize field-based data acquisition.
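To put the quoted energy-storage density in context, a back-of-envelope comparison is sketched below. The lithium-ion figure is an assumption (a generic 18650-type cell), not a number from the paper, so the ratio should be read as indicative only.

```python
# Back-of-envelope check of the "one order of magnitude" comparison above.
# The lithium-ion figures are assumed (a generic ~3 Ah, ~46 g 18650 cell), not from the paper.
bio_battery_ah_per_kg = 596                      # maltodextrin fuel cell, as quoted (Zhu et al. 2014)
li_ion_ah_per_kg = 3.0 / 0.046                   # ~65 Ah/kg for the assumed cell

ratio = bio_battery_ah_per_kg / li_ion_ah_per_kg
print(f"bio-battery vs. Li-ion: ~{ratio:.0f}x")  # ~9x, i.e., roughly one order of magnitude
```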

Bio-batteries could greatly aid current technoecological projects such as large-scale environmental monitoring. For example, Cama et al. (2013) are undertaking permanent monitoring of the Napo River in the Amazon using data transfer over the Wi-Fi network already in place. The Wi-Fi towers are powered via solar panels, but within the dense rainforest canopy there is not enough light to run electronics on solar power. If sensor arrays within the rainforest could be powered continuously via the trees, the project could run without avoiding regions that lack sunlight or sending staff out to regularly replace batteries.

Low‐power, long‐range telemetry: transmitting data from the field to the laboratory

Ecological data collection often occurs in locations that are difficult or hazardous to traverse, meaning that practical methods of data retrieval often influence sensor placement and limit the data collected. But what if the data could be sent from remote sensors back to a central location for easy collection? Ecological projects such as monitoring the Amazon environment already do so using Wi-Fi towers (Cama et al. 2013), but Wi-Fi transmission range is limited (approximately 30 m). Range can be extended with larger antennas and higher transmission power, but doing so consumes much more electricity. Other technologies are capable of transmitting data via satellite (Lidgard et al. 2014) or the cell phone network (Sundell et al. 2006), but these are limited to locations with coverage or are prohibitively expensive. Low-power networks offer great promise for data transfer over large distances (kilometers), including the increasingly popular LoRa system (Talla et al. 2017). Long-range telemetry is already being used commercially for reading water meters: usage data are sent to hubs hourly, and a single battery can last over a decade (e.g., Taggle Systems; http://www.taggle.com.au/). Integrating such technology into ecological research would allow sensor deployment in remote areas where other communication methods are infeasible, for example, dense forests, high mountain ranges, swamps, and deep canyons. Such devices could also transmit information to a base station, resulting in faster and more convenient data retrieval.
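The decade-long battery life quoted for commercial metering networks is plausible on a simple duty-cycle calculation. The figures in the sketch below (battery capacity, sleep and transmit currents, airtime per report) are illustrative assumptions only, not specifications from Taggle or the LoRa standard.

```python
# Rough duty-cycle estimate for an hourly-reporting, low-power telemetry node.
# Every electrical figure below is an illustrative assumption.
battery_mah = 19000          # e.g., one D-size lithium thionyl chloride primary cell
sleep_ma = 0.015             # ~15 microamps while asleep
tx_ma = 120.0                # current draw during a transmission burst
tx_s_per_hour = 4.0          # a few seconds of airtime per hourly report

avg_ma = sleep_ma + tx_ma * tx_s_per_hour / 3600.0     # ~0.15 mA average draw
lifetime_yr = battery_mah / avg_ma / (24 * 365)
print(f"average draw ~{avg_ma:.2f} mA, lifetime ~{lifetime_yr:.0f} yr")   # ~15 yr before derating
# Even after generous derating for self-discharge and temperature, a decade of service is plausible.
```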

The Internet of things: creating “smart” environments

It is now possible to wirelessly connect devices to one another so they can share information automatically. This is known as the Internet of things (IoT), in which a variety of “things” or objects can interact and cooperate with their neighbors (Gershenfeld et al. 2004). Each device is still capable of acting independently, but it can also communicate with others to gain additional information. Expanding on the use of low-power, long-range telemetry, IoT could be used to set up peer-to-peer networking to transfer data from one device to the next until reaching a location with Internet access or cell coverage, where more traditional means of transmission are possible. One attempt at such peer-to-peer transfer in ecology is ZebraNet: a system of GPS devices attached to animals (zebras) that transfer each individual’s GPS data to one another when in close proximity (Juang et al. 2002). With this design, retrieving a device attached to one animal also provides the data from all other animals.

The applications of IoT go beyond the simple transfer of data. IoT technology effectively creates “smart environments,” in which hundreds of networked devices, such as temperature sensors, wildlife camera traps, and acoustic monitors, are connected wirelessly and able to transmit data to central nodes. Using bio-batteries, such devices could run “indefinitely” (not literally, as components will eventually fail from wear and tear in field conditions, which can be severe in some environments, e.g., very high or low temperatures, humidity, and/or salinity). From there, fully automated digital asset management systems can query and analyze the data. Automated processes become increasingly pertinent as long-term, continuously recording sensor networks expand (e.g., the National Ecological Observatory Network [NEON]). NEON comprises multiple sensors measuring environmental parameters such as CO2 and ozone concentrations and soil moisture, all recording continuously and remotely at high temporal resolution, creating ever-expanding environmental datasets (Keller et al. 2008). Making best use of such data requires analysis at high temporal resolution, which is not feasible for researchers to do manually but is possible with machine learning algorithms and other advanced statistical approaches.
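As a toy illustration of the kind of automated screening such networks require, the sketch below flags unusual readings in a high-frequency sensor stream using a rolling z-score. It is a deliberately simple stand-in for the machine learning pipelines mentioned above, and the window length and threshold are assumptions.

```python
import pandas as pd

def flag_anomalies(series: pd.Series, window: int = 288, z_thresh: float = 4.0) -> pd.Series:
    """Flag points that deviate strongly from recent behaviour in a continuously recorded stream.

    series   : one sensor variable (e.g., soil moisture sampled every 5 minutes).
    window   : samples in the rolling baseline (288 = one day at 5-minute sampling; assumed).
    z_thresh : rolling standard deviations that count as anomalous (assumed).
    """
    baseline = series.rolling(window, min_periods=window // 4)
    z = (series - baseline.mean()) / baseline.std()
    return z.abs() > z_thresh

# A central node could run this over every incoming stream and alert researchers
# (or trigger denser sampling) only when the mask turns True.
```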

Swarm theory for faster and safer data acquisition, and dynamic ecological survey

Swarm theory is a prime example of the complementary nature of technology and ecology. In essence, swarm theory refers to individuals self-organizing to work collectively to accomplish goals. Swarm theory relates to both natural and artificial life, and mathematicians have studied the organization of ant colonies (Dorigo et al. 1999) and the flocking behavior of birds and insects (Li et al. 2013) in an attempt to understand this phenomenon. Swarm theory is already being used with unmanned autonomous vehicles for first response to disasters, investigating potentially dangerous situations, search and rescue, and military purposes (http://bit.ly/1Pel9Qz). Exciting applications of swarm theory include faster data acquisition and communication over large geographic scales and dynamic ecological survey.

Swarm theory is directly applicable to the collection of remotely sensed data by multiple unmanned vehicles, whether aerial, water surface, or underwater. Unmanned aerial vehicles (UAVs) are already being used for landscape mapping and wildlife identification (Anderson and Gaston 2013, Humle et al. 2014, Lucieer et al. 2014), and the data collected can be processed into high-resolution (<10 cm) models that characterize variability in terrain and vegetation density (Friedman et al. 2013, Lucieer et al. 2014). So far, however, such vehicles are used individually. By employing swarm theory, data collection could be completed faster by several vehicles working simultaneously and collaboratively. Moreover, if vehicles were enabled to communicate with each other, data transfer would also be improved. Given the comparatively low costs of unmanned versus manned vehicles, such implementation would dramatically increase the efficiency of data collection while also eliminating safety issues. This efficiency could, in turn, allow for more repeated and systematic surveys, improving the statistical power of, and inference from, time-series analyses.

Even more exciting than swarms simply advancing our capabilities in data acquisition is the prospect of deploying them as more active tools for quantifying biotic interactions. The ability of a swarm to locate and then track individuals of different species in real time could revolutionize our understanding of key ecological phenomena such as dispersal, animal migration, competition, and predation. Swarms could initially sweep large areas; as individual drones detect the species or individuals of interest, they could inform other drones, refining search areas based on this geographic information, and go on to detect and track the behavior of additional animals in real time. An increased capacity to detect and measure species interactions, and to assess marine and terrestrial landscape change, would enhance our understanding of fundamental ecological and geological processes, ultimately helping to advance ecological theory and improve biodiversity conservation (Williams et al. 2012).

This technology will, however, require careful consideration of the societal and legislative context, as is the case for UAVs (see Allan et al. 2015).

3D printing for unique and precise equipment

While 3D printing has existed since the 1980s, its use in ecology has primarily been for teaching aids. For example, journals such as PeerJ offer the ability to download blueprints of 3D images (http://bit.ly/1MBPn1d). However, 3D printing has many more applications. These include (1) building specialized equipment cheaply and relatively easily, either with the design tools included with many 3D printers or by scanning and modifying products that already exist (Rangel et al. 2013); (2) building small organic molecules, mimicking the production of molecules in nature (Li et al. 2015), with printing at the molecular level even having the potential to create small organic molecules in the laboratory (Service 2015); and (3) printing realistic, high-definition, full-color designs in a number of different materials (http://www.3dsystems.com/). Using such models, ecologists can print specialized platforms for sensor equipment (e.g., GPS collars) that fit animals better. The use of 3D printing could go a step further, however, and create true-color, structurally complex analogues of vegetation or other animals for behavioral studies. For example, Dyer et al. (2006) explored whether bee attraction was based on color alone or may also be associated with flower temperature. Flowers of intricate and exact shape and color, with heating elements embedded, could be printed more easily and realistically than they could be built by hand.

Mapping molecular movement for non‐destructive analysis of nature

New developments in optical resolution and image processing have led to cameras that can display images at a sub-cellular level without the need for electron microscopes. Originally developed to scan silicon wafers for defects, this technology is now being used to examine molecular transport and exchange between muscle, cartilage, and bone in living tissue (http://bit.ly/1DlIYkD). The development also highlights what can be achieved by cross-disciplinary and institutional collaboration, in this case between the optical and industrial measurement manufacturer Zeiss, Google, the Cleveland Clinic, and Brown, Stanford, and New South Wales universities. Together, they have also created a “zoom-able” model that can go from the centimeter level down to nanometer-sized molecules, creating terabytes of data.

This technology’s ecological and environmental applications are substantial, foremost among them the non-destructive nature of the analysis, which allows for time-series analyses of molecular transfer. For instance, Clemens et al. (2002) examined the hyper-accumulation of toxic metals by specific plant species. Understanding how some plants can absorb toxic metals has promise for soil decontamination, but as stated by Clemens et al. (2002), “molecularly, the factors governing differential metal accumulation and storage are unknown.” The ability not only to observe the molecular transport of heavy metals in plant tissue, but also to change the observational scale, will greatly advance our knowledge of nutrient uptake and storage in plants.

Low‐power computers for automated data analysis

Low-power microcomputers and microcontrollers exist in products such as the Raspberry Pi, Arduino, and BeagleBoard. In ecology, low-power computers have been used to build custom equipment such as underwater stereo-camera traps, automated weather stations, and GPS tracking collars (Williams et al. 2014, Greenville and Emery 2016). Notably, following a surge in hobbyists embracing the adaptability of low-cost, low-power, high-performance microcontrollers, large companies such as Intel have also joined the marketplace with products like the Edison (http://intel.ly/1yekvNP). The Edison is low-power but has a dual-core CPU, Wi-Fi, Bluetooth, data storage, an inbuilt real-time clock, and the ability to connect a plethora of sensors, from GPS receivers to infrared cameras (http://bit.ly/1qHdor2; Intel 2014). Cell phones and wearable devices already integrate this technology. As an example, the Samsung Galaxy S8 contains an eight-core processor with 4 GB of RAM, cameras, GPS, accelerometers, a heart rate monitor, and fingerprint, proximity, and pressure sensors (http://bit.ly/2ni8KRD). Using microcontrollers such as these, it is possible to run high-level algorithms and statistical analyses, such as image recognition and machine learning, on the device itself. Not all microcontrollers are capable of running such complex data processes, and other options (e.g., microprocessors) will be required instead; this situation is likely to improve, however, as the technology develops.

The ability to process data onboard has huge potential for technology’s ecological applications, such as remote camera traps and acoustic sensors. By running pattern recognition algorithms in the equipment itself, species identification from either images or calls could be achieved automatically and immediately. This information could be processed, records tabulated, and a decision taken as to whether to keep, delete, or flag the recorded data for later manual review, or even transmit it back to the laboratory. This removes the need to store huge volumes of raw photographs or audio files; instead, only tabulated summary results are retained. The equipment could be programmed to keep photographs and recordings of species of interest (e.g., rare or invasive species, or species that cannot be identified with high certainty) while deleting those that are not, and/or to save any data with a recognition confidence below a designated threshold for manual inspection. In terms of direct application to conservation, this technology could allow intelligent poison bait stations to be built. Poison baiting is widely used to control pest species (Buckmaster et al. 2014), but the consumption of baits by non-target species can have unintended consequences ranging from incapacitation to death, limiting the efficacy of the control program (Doherty and Ritchie 2017). Using real-time image recognition software built into custom-designed bait dispensers, we could program poison bait release only when pest animals are present (e.g., grooming traps, https://bit.ly/2IKAYAD), reducing harm to non-target species.
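The decision logic described above is straightforward to express in code. The sketch below assumes a hypothetical on-device classifier, classify_image, that returns a species label and a confidence score; the watch list and threshold are placeholders, not values from any deployed system.

```python
# Minimal keep/delete/flag logic for an on-device camera trap (illustrative only).
SPECIES_OF_INTEREST = {"feral_cat", "red_fox", "night_parrot"}   # hypothetical watch list
CONFIDENCE_THRESHOLD = 0.80                                      # assumed cut-off

def handle_capture(image_path: str, classify_image) -> str:
    """Decide what to do with a new image, given an on-device classifier.

    classify_image(path) is a hypothetical function assumed to return
    (species_label, confidence). Returns 'keep', 'flag_for_review', or 'delete'.
    """
    species, confidence = classify_image(image_path)
    if confidence < CONFIDENCE_THRESHOLD:
        return "flag_for_review"   # uncertain identification: save for manual inspection
    if species in SPECIES_OF_INTEREST:
        return "keep"              # rare or pest species: keep the image and tabulate a record
    return "delete"                # common non-target species: log a count, drop the raw file
```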

Technological Developments Flowing into Ecology

The technological developments from outside ecology that flow into the discipline offer great potential for theoretical advances and environmental applications. Two examples include personal satellites and neural interface research.

Personal satellites are an upcoming technology in the world of ecology. Like UAVs before them, miniature satellites promise transformative data gathering and transmission opportunities. The CubeSat project, created by California Polytechnic State University, San Luis Obispo, and Stanford University’s Space Systems Development Lab in 1999, focused on affordable access to space. These satellites are designed to achieve low Earth orbit (LEO), approximately 125 to 500 km above the Earth. Measuring only 10 cm per side, CubeSats can house sensors and communications arrays that enable operators to study the Earth from space, as well as the space around the Earth. Open-source development kits are already available (http://www.cubesatkit.com/). However, NASA estimates it currently costs approximately US$10,000 to launch ~0.5 kg of payload into LEO (NASA 2017), meaning the approach is still cost-prohibitive, and the capabilities of such satellites are currently limited. Given the rapid expansion of commercial space missions and the pace of evolving technology, however, private satellites for examining ecosystem function and dynamics may not be too far over the horizon.

Neural interface research aims to create a link between the nervous system and the outside world by stimulating or recording from neural tissue (Hatsopoulos and Donoghue 2009). Currently, this technology is focused on biomedical science, recording neural signals to decipher movement intentions with the aim of assisting paralyzed people. Recent experiments have surgically implanted a thumbtack-sized array of electrodes able to record the electrical activity of neurons in the brain. Using wireless technology, scientists were able to link epidural electrical stimulation with leg motor cortex activity in real time to alleviate gait deficits after a spinal cord injury in rhesus monkeys (Macaca mulatta; Capogrosso et al. 2016). Restoration of volitional movement may at first appear of limited relevance to ecology, but the recording and analysis of neural activity is not. To restore volitional movement, mathematical algorithms are used to interpret neural coding and brain behavior to determine the intent to move. This technology may make it possible in the future to record and understand how animals make decisions based on neural activity, and how those decisions are affected by the surrounding environment. Such information could greatly advance the field of movement ecology and related theory (e.g., species niches, dispersal, meta-populations, trophic interactions) and aid improved conservation planning for species (e.g., reserve design) based on how they perceive their environment and make decisions.

Next‐generation Ecology

The technologies listed above clearly provide exciting opportunities in data capture for ecologists. However, the transformation of data acquisition in ecology will be most hastened by their use in combination, through the integration of multiple emerging technologies into next-generation ecological monitoring (Marvin et al. 2016). For instance, imagine research stations fitted with remote cameras and acoustic recorders that are equipped with low-power computers for image and call recognition and powered by trees via bio-batteries. These devices could use low-power, long-range telemetry both to communicate with each other in a network, potentially tracking animal movement from one location to the next, and to transmit data to a central location. Swarms of UAVs working together (swarm theory) could then be deployed to map the landscape and collect the data from the central location wirelessly without landing. The UAVs could then land in a location with Wi-Fi and send all the data via the Internet into cloud-based storage, accessible from any Internet-equipped computer in the world (Fig. 2, Table 2). While a system with this much integration is still largely theoretical, it is not outside the realm of possibility within the next 5–10 yr.

Visualization of a future “smart” research environment, integrating multiple ecological technologies. The red lines indicate data transfer via the Internet of things (IoT), in which multiple technologies are communicating with one another. The gray lines indicate more traditional data transfer. Broken lines indicate data transferred over long distances. Once initiated, this environment would require minimal researcher input. (See Table 2 for descriptions of numbered technologies.)
Table 2. Description of elements of a future “smart” research environment, as illustrated in Fig. 2
1. Bio-batteries: In locations where solar power is not an option (closed canopies), data-recording technology such as camera traps and acoustic sensors could run on bio-batteries, eliminating the need for conventional batteries
2. The Internet of things (IoT): Autonomous unmanned vehicles could use IoT to wirelessly communicate with and collect data from recording technologies (camera traps) located in dangerous or difficult-to-access locations
3. Swarm theory: Autonomous vehicles such as unmanned aerial vehicles could self-organize to efficiently collect and transfer data
4. Long-range, low-power telemetry: Devices “talking” to one another, transferring information over several kilometers
5. Solar power: Environmental sensors, such as weather stations, could be powered by solar panels and transfer data to autonomous vehicles for easy retrieval
6. Low-power computer: A field server designed to wirelessly collect and analyze data from all the technology in the environment
7. Data transfer via satellite: There is potential to autonomously transfer data from central hubs in the environment back to researchers, without the need to visit the research sites
8. Bioinformatics: With the ability to collect vast quantities of high-resolution spatial and temporal data via permanent and perpetual environmental data-recording technologies, the development of methods to manage and analyze the data collected will become much more pertinent

Bioinformatics will play a large role in the use of the “next-generation” ecological data that technoecology produces. Datasets will be very large and complex, meaning that manual processing, traditional computing hardware, and conventional statistical approaches will be insufficient to handle such information. For example, the data captured in a 1-km² UAV survey for high-resolution image mosaics and 3D reconstruction run to tens of gigabytes, so at the landscape scale datasets can reach terabytes. Such datasets are known as “big data” (Howe et al. 2008), and bioinformatics will be required to develop methods for sorting, analyzing, categorizing, and storing these data, combining the fields of ecology, computer science, statistics, mathematics, and engineering.

Multi-disciplinary collaboration will also play a major role in developing future technologies in ecology (Joppa 2015). Ecological applications of cutting-edge technology most often develop through multi-disciplinary collaboration between scientists from different fields or between the public and private sectors. For instance, the Princeton ZebraNet project is a collaboration between engineers and biologists (Juang et al. 2002), while the development of the molecular microscope involved the private-sector companies Zeiss and Google. Industries may already have the technology and knowledge to answer certain ecological questions, but be unaware of such applications. Ecologists should also look to collaborate on convergent design; much of what we do as ecologists and environmental scientists has applications in agriculture, search and rescue, health, or sport science, and vice versa, so opportunities exist to share and reduce research and development costs. Finally, ecologists should be given opportunities for technology-based training and placement programs early in their careers as a way to raise awareness of what could be done.

In the coming decades, a technology‐based revolution in ecology, akin to what has already occurred in genetics (Elmer‐DeWitt and Bjerklie 1994), seems likely. The pace of this revolution will be dictated, in part, by the speed at which ecologists embrace and integrate new technologies as they arise. It is worth remembering, “We still do not know one thousandth of one percent of what nature has revealed to us”—Albert Einstein.

Acknowledgments

We would like to thank the corresponding editor for his excellent suggestions for improving our manuscript and the anonymous reviewer who suggested the addition of the supersize, step‐change, and radical change conceptual framework.

https://esajournals.onlinelibrary.wiley.com/doi/full/10.1002/ecs2.2163

MedCity – As digital health companies proliferate, it’s getting tougher to spot the strongest businesses

The digital health rocket seems to have gotten supercharged lately, at least when it comes to fundraising.  Depending on who you ask, either $1.62 billion (Rock Health’s count) or $2.5 billion (Mercom) or $2.8 billion (Startup Health’s count) was plowed into digital health companies in just the first three months of 2018.  By any measure Q1 2018 was the most significant quarter yet for digital health funding.  This headline has been everywhere.  Digital health:  to infinity and beyond!  But what is the significance of this? Should investors and customers of these companies be excited or worried?  It’s a little hard to tell.

But if you dig a little deeper, there are some interesting things to notice.

Another thing to notice: some deals are way more equal than others, to misquote a book almost everyone was forced to read in junior high. Megadeals have come to digital health (whatever that is), defined as companies getting more than $100 million dropped on them in a single deal. For instance, according to Mercom Capital, just five deals together accounted for approximately $936 million, or more than a third of the entire quarter’s funding (assuming you’re using the Mercom numbers). If you use the Rock Health numbers, which include only three of the megadeals, we are talking $550 million for the best in class (bank account-wise, anyway). Among the various megadeal high fliers are Heartflow ($240 million raised), Helix ($200 million raised), SomaLogic ($200 million raised), PointClickCare ($186 million raised), and Collective Health ($110 million raised); three others raised $100 million each.

First of all, the definition of “digital health” is getting murkier and murkier.  Some sweep in things that others might consider life sciences or genomics. Others include things that may generally be considered health services, in that they are more people than technology. Rock Health excludes companies that are primarily health services, such as One Medical or primarily insurance companies, such as Oscar, including only “health companies that build and sell technologies—sometimes paired with a service, but only when the technology is, in and of itself, the service.”  In contrast, Startup Health and Mercom Capital clearly have more expansive views though I couldn’t find precise definitions.  My solution is this: stop using the term “digital health”.  Frankly, it’s all healthcare and if I were in charge of the world I would use the following four categories and ditch the new school monikers:  1) drugs/therapeutics 2) diagnostics in vivo, in vitro, digital or otherwise; 3) medical devices with and without sensors; and 4) everything else.  But I’m not in charge of the world and it isn’t looking likely anytime soon, so the number and nomenclature games continue.  My kingdom for a common ontology!

It used to be conventional wisdom that the reason healthcare IT deals were appealing, at least compared to medical devices and biotech, is because they needed far less capital to get to the promised land.  Well, property values in the digital health promised land have risen faster than those in downtown San Francisco, so conventional wisdom be damned.  These technology-focused enterprises are giving biotech deals a run for their money, literally.

But here’s what it really means: if you take out the top 10 deals in Rock Health’s count, which account for 55 percent of the first quarter’s capital, the remaining deals average about $10.8 million in incoming capital each. If you use the Mercom numbers, the average non-megadeal got $8.6 million. This is a far more “normalish” amount of capital for any type of Series A or Series B venture deal, so somewhere in the universe, there is the capacity to reason. Another note on this topic: the gap between the haves and have-nots is widening dramatically. Mercom reports that the total number of deals for Q1 2018 was 187, the lowest total number of digital health deals in five quarters. Rock Health claims there were 77 deals in the quarter; Startup Health, always the most enthusiastic, claims there were 191 deals in the digital health category. I don’t know who has the “right” definition of digital health; what I do know is that either way, this is a lot of companies.
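For readers who want to check the per-deal math, the averages reproduce from the figures quoted in this piece:

```python
# Rock Health: $1.62B total, top 10 deals ~55% of the capital, 77 deals in the quarter.
rock_avg_m = 1620 * (1 - 0.55) / (77 - 10)
print(f"Rock Health non-megadeal average: ~${rock_avg_m:.1f}M")   # ~$10.9M, in line with the ~$10.8M above

# Mercom: $2.5B total, five megadeals worth ~$936M, 187 deals in the quarter.
mercom_avg_m = (2500 - 936) / (187 - 5)
print(f"Mercom non-megadeal average: ~${mercom_avg_m:.1f}M")      # ~$8.6M
```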

I think that the phenomenon of companies proliferating like bunnies on Valentine’s Day has another implication: too many damn companies. Perhaps it’s only me, but it’s getting harder and harder to tell the difference between the myriad entrants in a variety of categories. Medication adherence deals seem to be proliferating faster than I can log them. Companies claiming to improve consumer engagement, whatever that is, are outnumbering articles about AI, and that’s saying something. Companies claiming to use AI to make everything better, whether it’s care delivery or drug development or French fries, are so numerous that it’s making me artificially stupider. I think that this excess of entrepreneurship is actually bad for everyone in that it makes it much harder for any investor to pick the winner(s) and makes it nearly impossible for customers to figure out the best dance partner. It’s a lot easier for customers to simply say no than to take a chance and pick the wrong flavor of the month. It’s just become too darn easy to start a company.

And with respect to well-constructed clinical studies to demonstrate efficacy, nothing could be more important for companies trying to stand out from the crowd. We keep seeing articles like this one, which talks about how digital health products often fail to deliver on the promise of better, faster, cheaper, or any part thereof. And there’s this one by a disappointed Dr. Eric Topol, a physician who has committed a significant amount of his professional life to the pursuit of high-quality digital health initiatives – a true believer, as it were, but one who has seen his share of disappointment when it comes to the claims of digital health products. I’m definitely of the belief that there are some seriously meaningful products out there that make a difference. But there is so much chaff around the wheat that it’s hard to find the good stuff.

Digital health has become the world’s biggest Oreo with the world’s thinnest cream center. But well-constructed, two-arm studies can make one Oreo stand out in a crowd of would-be Newman-Os. One way that investors and buyers are distinguishing the good from the not-so-much is by looking for those who have made the effort to get an FDA approval and who have made an investment in serious clinical trials to prove value. Mercom Capital reports that there were over 100 FDA approvals of digital health products in 2017. Considering that there were at least 345 digital health deals in 2017 (taking the low-end Rock Health numbers), and that companies raising money in a given year are only a fraction of all the digital health companies out there, it is interesting to note that only a minority of companies are bothering to take the FDA route.

Now, this is a SWAG at best, but it feels about right to me.  I often hear digital health entrepreneurs talking about the lengths they are going to in order to avoid needing FDA approval and I almost always respond by saying that I disagree with the approach.  Yes, there are clearly companies that don’t ever need the FDA’s imprimatur (e.g., administrative products with no clinical component), but if you have a clinically-oriented product and hope to claim that it makes a difference, the FDA could be your best friend.  Having an FDA approval certainly conveys a sense of value and legitimacy to many in the buyer and investor community.

It will be interesting to see if the gravity-defying digital health activity will continue ad infinitum or whether the sector will come into contact with the third of Newton’s Laws. Investors are, by definition, in it for the money.  If you can’t exit you shouldn’t enter.  In 2017 there were exactly zero digital health IPOs.  This year there has, so far, been one IPO: Chinese fitness tracker and smartwatch maker, Huami, which raised $110 million and is now listed on the New York Stock Exchange, per Mercom Capital.  In 2017 there were about 119 exits via merger or acquisition, which was down from the prior year.  This year has started off with a faster M&A run rate (about 37 companies acquired in Q1 2018), but what we don’t know is whether the majority of these company exits will look more like Flatiron (woo hoo!) or Practice Fusion (yikes!).  Caveat Emptor: Buyer beware is all I have to say about that.

https://medcitynews.com/2018/05/as-digital-health-companies-proliferate-its-getting-tougher-to-spot-the-strongest-businesses/

McKinsey – AI frontier: Analysis of more than 400 use cases across 19 industries and nine business functions

An analysis of more than 400 use cases across 19 industries and nine business functions highlights the broad use and significant economic potential of advanced AI techniques.

Artificial intelligence (AI) stands out as a transformational technology of our digital age—and its practical application throughout the economy is growing apace. For this briefing, Notes from the AI frontier: Insights from hundreds of use cases (PDF–446KB), we mapped both traditional analytics and newer “deep learning” techniques and the problems they can solve to more than 400 specific use cases in companies and organizations. Drawing on McKinsey Global Institute research and the applied experience with AI of McKinsey Analytics, we assess both the practical applications and the economic potential of advanced AI techniques across industries and business functions. Our findings highlight the substantial potential of applying deep learning techniques to use cases across the economy, but we also see some continuing limitations and obstacles—along with future opportunities as the technologies continue their advance. Ultimately, the value of AI is not to be found in the models themselves, but in companies’ abilities to harness them.

It is important to highlight that, even as we see economic potential in the use of AI techniques, the use of data must always take into account concerns including data security, privacy, and potential issues of bias.

  1. Mapping AI techniques to problem types
  2. Insights from use cases
  3. Sizing the potential value of AI
  4. The road to impact and value

 

Mapping AI techniques to problem types

As artificial intelligence technologies advance, so does the definition of which techniques constitute AI. For the purposes of this briefing, we use AI as shorthand for deep learning techniques that use artificial neural networks. We also examined other machine learning techniques and traditional analytics techniques (Exhibit 1).

AI analytics techniques

Neural networks are a subset of machine learning techniques. Essentially, they are AI systems based on simulating connected “neural units,” loosely modeling the way that neurons interact in the brain. Computational models inspired by neural connections have been studied since the 1940s and have returned to prominence as computer processing power has increased and large training data sets have been used to successfully analyze input data such as images, video, and speech. AI practitioners refer to these techniques as “deep learning,” since neural networks have many (“deep”) layers of simulated interconnected neurons.

We analyzed the applications and value of three neural network techniques:

  • Feed forward neural networks: the simplest type of artificial neural network. In this architecture, information moves in only one direction, forward, from the input layer, through the “hidden” layers, to the output layer. There are no loops in the network. The first single-neuron network was proposed as early as 1958 by AI pioneer Frank Rosenblatt. While the idea is not new, advances in computing power, training algorithms, and available data have led to higher levels of performance than previously possible.
  • Recurrent neural networks (RNNs): artificial neural networks whose connections between neurons include loops, making them well suited to processing sequences of inputs. In November 2016, Oxford University researchers reported that a system based on recurrent neural networks (and convolutional neural networks) had achieved 95 percent accuracy in reading lips, outperforming experienced human lip readers, who tested at 52 percent accuracy.
  • Convolutional neural networks (CNNs): artificial neural networks in which the connections between neural layers are inspired by the organization of the animal visual cortex, the portion of the brain that processes images, making them well suited to perceptual tasks (a minimal code sketch of all three architectures follows this list).
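For readers who prefer code to prose, the sketch below shows how compactly each of these three architectures can be declared in a modern framework (PyTorch here); the layer sizes and input dimensions are arbitrary placeholders, not configurations from the briefing.

```python
import torch.nn as nn

# Feed-forward network: information flows one way, input -> hidden layers -> output, no loops.
feed_forward = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

# Recurrent network: an LSTM keeps internal state (loops), so it can process sequences of inputs.
recurrent = nn.LSTM(input_size=32, hidden_size=64, num_layers=2, batch_first=True)

# Convolutional network: convolution and pooling layers, loosely inspired by the visual cortex,
# suited to perceptual tasks such as image classification.
convolutional = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),   # assumes 32x32 input images and 10 output classes
)
```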

For our use cases, we also considered two other techniques—generative adversarial networks (GANs) and reinforcement learning—but did not include them in our potential value assessment of AI, since they remain nascent techniques that are not yet widely applied.

Generative adversarial networks (GANs) use two neural networks that contest with each other in a zero-sum game framework (thus “adversarial”). GANs can learn to mimic various distributions of data (for example, text, speech, and images) and are therefore valuable in generating test datasets when these are not readily available.

Reinforcement learning is a subfield of machine learning in which systems are trained by receiving virtual “rewards” or “punishments”, essentially learning by trial and error. Google DeepMind has used reinforcement learning to develop systems that can play games, including video games and board games such as Go, better than human champions.
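A tabular Q-learning loop is the simplest concrete illustration of this reward-and-punishment cycle. The sketch below is generic textbook Q-learning, not DeepMind’s method, and the environment interface it assumes (reset and step methods returning states, rewards, and a done flag) is hypothetical.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, n_actions=4, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Minimal tabular Q-learning: learn action values purely by trial and error.

    env is assumed to expose reset() -> hashable state and
    step(action) -> (next_state, reward, done); this interface is an assumption.
    """
    q = defaultdict(lambda: [0.0] * n_actions)            # Q[state][action]
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Explore occasionally; otherwise exploit the best-known action.
            if random.random() < epsilon:
                action = random.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: q[state][a])
            next_state, reward, done = env.step(action)   # reward is the virtual "reward/punishment"
            # Move the estimate toward the observed reward plus discounted future value.
            q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
            state = next_state
    return q
```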

 

Section 2

Insights from use cases

We collated and analyzed more than 400 use cases across 19 industries and nine business functions. They provided insight into the areas within specific sectors where deep neural networks can potentially create the most value, the incremental lift that these neural networks can generate compared with traditional analytics (Exhibit 2), and the voracious data requirements—in terms of volume, variety, and velocity—that must be met for this potential to be realized. Our library of use cases, while extensive, is not exhaustive, and may overstate or understate the potential for certain sectors. We will continue refining and adding to it.

Advanced deep learning AI techniques can be applied across industries

Examples of where AI can be used to improve the performance of existing use cases include:

  • Predictive maintenance: the power of machine learning to detect anomalies. Deep learning’s capacity to analyze very large amounts of high-dimensional data can take existing preventive maintenance systems to a new level. By layering in additional data, such as audio and image data, from other sensors—including relatively cheap ones such as microphones and cameras—neural networks can enhance and possibly replace more traditional methods (see the sketch after this list). AI’s ability to predict failures and allow planned interventions can be used to reduce downtime and operating costs while improving production yield. For example, AI can extend the life of a cargo plane beyond what is possible using traditional analytic techniques by combining plane model data, maintenance history, IoT sensor data such as anomaly detection on engine vibration data, and images and video of engine condition.
  • AI-driven logistics optimization can reduce costs through real-time forecasts and behavioral coaching. Application of AI techniques such as continuous estimation to logistics can add substantial value across sectors. AI can optimize routing of delivery traffic, thereby improving fuel efficiency and reducing delivery times. One European trucking company has reduced fuel costs by 15 percent, for example, by using sensors that monitor both vehicle performance and driver behavior; drivers receive real-time coaching, including when to speed up or slow down, optimizing fuel consumption and reducing maintenance costs.
  • AI can be a valuable tool for customer service management and personalization challenges. Improved speech recognition in call center management and call routing, enabled by AI techniques, allows a more seamless experience for customers—and more efficient processing. The capabilities go beyond words alone. For example, deep learning analysis of audio allows systems to assess a customer’s emotional tone; if a customer is responding badly to the system, the call can be rerouted automatically to human operators and managers. In other areas of marketing and sales, AI techniques can also have a significant impact. Combining customer demographic and past transaction data with social media monitoring can help generate individualized product recommendations. “Next product to buy” recommendations that target individual customers (as companies such as Amazon and Netflix have successfully been doing) can lead to a twofold increase in the rate of sales conversions.
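As a concrete, if simplified, version of the anomaly detection idea in the predictive maintenance bullet above, the sketch below fits an Isolation Forest to features derived from sensor readings gathered during healthy operation and flags unusual windows for inspection. It is a generic scikit-learn illustration, not the approach of any company cited in the briefing, and the feature names are assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def fit_vibration_monitor(healthy_windows: np.ndarray) -> IsolationForest:
    """healthy_windows: one row of summary features per time window recorded during normal
    operation (e.g., RMS vibration, dominant frequency, bearing temperature; names assumed)."""
    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(healthy_windows)
    return model

def needs_inspection(model: IsolationForest, new_window: np.ndarray) -> bool:
    # predict() returns -1 for anomalies and +1 for inliers.
    return model.predict(new_window.reshape(1, -1))[0] == -1

# Windows flagged as anomalous would trigger a planned maintenance intervention
# before the component actually fails, rather than a fixed-interval service.
```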

Two-thirds of the opportunities to use AI are in improving the performance of existing analytics use cases

In 69 percent of the use cases we studied, deep neural networks can be used to improve performance beyond that provided by other analytic techniques. Cases in which only neural networks can be used, which we refer to here as “greenfield” cases, constituted just 16 percent of the total. For the remaining 15 percent, artificial neural networks provided limited additional performance over other analytics techniques, among other reasons because of data limitations that made these cases unsuitable for deep learning (Exhibit 3).

AI improves the performance of existing analytics techniques

Greenfield AI solutions are prevalent in business areas such as customer service management, as well as among some industries where the data are rich and voluminous and at times integrate human reactions. Among industries, we found many greenfield use cases in healthcare, in particular. Some of these cases involve disease diagnosis and improved care, and rely on rich data sets incorporating image and video inputs, including from MRIs.

On average, our use cases suggest that modern deep learning AI techniques have the potential to provide a boost in additional value above and beyond traditional analytics techniques ranging from 30 percent to 128 percent, depending on industry.

In many of our use cases, however, traditional analytics and machine learning techniques continue to underpin a large percentage of the value creation potential in industries including insurance, pharmaceuticals and medical products, and telecommunications, with the potential of AI limited in certain contexts. In part this is due to the way data are used by these industries and to regulatory issues.

Data requirements for deep learning are substantially greater than for other analytics

Making effective use of neural networks in most applications requires large labeled training data sets alongside access to sufficient computing infrastructure. Furthermore, these deep learning techniques are particularly powerful in extracting patterns from complex, multidimensional data types such as images, video, and audio or speech.

Deep-learning methods require thousands of data records for models to become relatively good at classification tasks and, in some cases, millions for them to perform at the level of humans. By one estimate, a supervised deep-learning algorithm will generally achieve acceptable performance with around 5,000 labeled examples per category and will match or exceed human-level performance when trained with a data set containing at least 10 million labeled examples. In some cases where advanced analytics is currently used, so much data are available—millions or even billions of rows per data set—that AI is the most appropriate technique. However, if a threshold of data volume is not reached, AI may not add value to traditional analytics techniques.

These massive data sets can be difficult to obtain or create for many business use cases, and labeling remains a challenge. Most current AI models are trained through “supervised learning,” which requires humans to label and categorize the underlying data. However, promising new techniques are emerging to overcome these data bottlenecks, such as reinforcement learning, generative adversarial networks, transfer learning, and “one-shot learning,” which allows a trained AI model to learn about a subject from a small number of real-world demonstrations or examples—and sometimes just one.

Organizations will have to adopt and implement strategies that enable them to collect and integrate data at scale. Even with large datasets, they will have to guard against “overfitting,” where a model too tightly matches the “noisy” or random features of the training set, resulting in a corresponding lack of accuracy in future performance, and against “underfitting,” where the model fails to capture all of the relevant features. Linking data across customer segments and channels, rather than allowing the data to languish in silos, is especially important to create value.
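The overfitting/underfitting trade-off is easy to see in a toy experiment: a model that is too simple misses the signal, while one with too much freedom memorizes the noise and degrades on held-out data. The sketch below is a standard scikit-learn illustration, unrelated to any specific use case in the briefing.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 30).reshape(-1, 1)
y = np.sin(2 * np.pi * x).ravel() + rng.normal(0, 0.2, 30)      # a noisy nonlinear signal
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=0)

for degree in (1, 4, 15):                                       # underfit, reasonable, overfit
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x_train, y_train)
    print(f"degree={degree:2d}  "
          f"train MSE={mean_squared_error(y_train, model.predict(x_train)):.3f}  "
          f"test MSE={mean_squared_error(y_test, model.predict(x_test)):.3f}")
# The high-degree model drives training error toward zero by fitting the noise (overfitting),
# typically at the cost of much worse test error, while degree 1 misses the curve (underfitting).
```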

Realizing AI’s full potential requires a diverse range of data types including images, video, and audio

Neural AI techniques excel at analyzing image, video, and audio data types because of their complex, multidimensional nature, known by practitioners as “high dimensionality.” Neural networks are good at dealing with high dimensionality, as multiple layers in a network can learn to represent the many different features present in the data. Thus, for facial recognition, the first layer in the network could focus on raw pixels, the next on edges and lines, another on generic facial features, and the final layer might identify the face. Unlike previous generations of AI, which often required human expertise to do “feature engineering,” these neural network techniques are often able to learn to represent these features in their simulated neural networks as part of the training process.

Along with issues around the volume and variety of data, velocity is also a requirement: AI techniques require models to be retrained to match potential changing conditions, so the training data must be refreshed frequently. In one-third of the cases, the model needs to be refreshed at least monthly, and almost one in four cases requires a daily refresh; this is especially the case in marketing and sales and in supply chain management and manufacturing.

 

Section 3

Sizing the potential value of AI

We estimate that the AI techniques we cite in this briefing together have the potential to create between $3.5 trillion and $5.8 trillion in value annually across nine business functions in 19 industries. This constitutes about 40 percent of the overall $9.5 trillion to $15.4 trillion annual impact that could potentially be enabled by all analytical techniques (Exhibit 4).

AI has the potential to create value across sectors

Per industry, we estimate that AI’s potential value amounts to between one and nine percent of 2016 revenue. The value as measured by percentage of industry revenue varies significantly among industries, depending on the specific applicable use cases, the availability of abundant and complex data, as well as on regulatory and other constraints.

These figures are not forecasts for a particular period, but they are indicative of the considerable potential for the global economy that advanced analytics represents.

From the use cases we have examined, we find that the greatest potential value impact from using AI lies both in top-line-oriented functions, such as marketing and sales, and in bottom-line-oriented operational functions, including supply chain management and manufacturing.

Consumer industries such as retail and high tech will tend to see more potential from marketing and sales AI applications because frequent and digital interactions between business and customers generate larger data sets for AI techniques to tap into. E-commerce platforms, in particular, stand to benefit. This is because of the ease with which these platforms collect customer information such as click data or time spent on a web page and can then customize promotions, prices, and products for each customer dynamically and in real time.

AI's impact is likely to be most substantial in M&S, supply-chain management, and manufacturing

Here is a snapshot of three sectors where we have seen AI’s impact (Exhibit 5):

  • In retail, marketing and sales is the area with the most significant potential value from AI, and within that function, pricing and promotion and customer service management are the main value areas. Our use cases show that using customer data to personalize promotions, for example, including tailoring individual offers every day, can lead to a one to two percent increase in incremental sales for brick-and-mortar retailers alone.
  • In consumer goods, supply-chain management is the key function that could benefit from AI deployment. Among the examples in our use cases, we see how forecasting based on underlying causal drivers of demand rather than prior outcomes can improve forecasting accuracy by 10 to 20 percent, which translates into a potential five percent reduction in inventory costs and revenue increases of two to three percent.
  • In banking, particularly retail banking, AI has significant value potential in marketing and sales, much as it does in retail. However, because of the importance of assessing and managing risk in banking, for example for loan underwriting and fraud detection, AI has much higher value potential to improve performance in risk in the banking sector than in many other industries.

 

Section 4

The road to impact and value

Artificial intelligence is attracting growing amounts of corporate investment, and as the technologies develop, the potential value that can be unlocked is likely to grow. So far, however, only about 20 percent of AI-aware companies are currently using one or more of its technologies in a core business process or at scale.

For all their promise, AI technologies have plenty of limitations that will need to be overcome. They include the onerous data requirements listed above, but also five other limitations:

  • First is the challenge of labeling training data, which often must be done manually and is necessary for supervised learning. Promising new techniques are emerging to address this challenge, such as reinforcement learning and in-stream supervision, in which data can be labeled in the course of natural usage.
  • Second is the difficulty of obtaining data sets that are sufficiently large and comprehensive to be used for training; for many business use cases, creating or obtaining such massive data sets can be difficult—for example, limited clinical-trial data to predict healthcare treatment outcomes more accurately.
  • Third is the difficulty of explaining in human terms results from large and complex models: why was a certain decision reached? Product certifications in healthcare and in the automotive and aerospace industries, for example, can be an obstacle; among other constraints, regulators often want rules and choice criteria to be clearly explainable.
  • Fourth is the generalizability of learning: AI models continue to have difficulties in carrying their experiences from one set of circumstances to another. That means companies must commit resources to train new models even for use cases that are similar to previous ones. Transfer learning, in which an AI model trained to accomplish a certain task quickly applies that learning to a similar but distinct activity, is one promising response to this challenge (a minimal code sketch follows this list).
  • The fifth limitation concerns the risk of bias in data and algorithms. This issue touches on concerns that are more social in nature and which could require broader steps to resolve, such as understanding how the processes used to collect training data can influence the behavior of models they are used to train. For example, unintended biases can be introduced when training data is not representative of the larger population to which an AI model is applied. Thus, facial recognition models trained on a population of faces corresponding to the demographics of AI developers could struggle when applied to populations with more diverse characteristics. A recent report on the malicious use of AI highlights a range of security threats, from sophisticated automation of hacking to hyper-personalized political disinformation campaigns.
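To make the transfer-learning idea concrete, here is a minimal, generic sketch in Python (ours, not the report's): a network pretrained on a large generic image dataset is reused, its pretrained layers are frozen, and only a small new output layer is trained for a related task. The library, model, and five-class setup are illustrative assumptions.

```python
# A minimal transfer-learning sketch (illustrative only, not from the report):
# reuse a network pretrained on ImageNet and retrain only a new final layer
# for a related task with, say, 5 classes.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)  # newer torchvision uses weights=... instead

# Freeze the pretrained layers so their learned features carry over unchanged.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classification layer for the new task (5 classes assumed).
model.fc = nn.Linear(model.fc.in_features, 5)

# Only the new layer's parameters are optimized during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# ...a short training loop over the new task's (much smaller) labeled dataset goes here.
```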

Organizational challenges around technology, processes, and people can slow or impede AI adoption

Organizations planning to adopt significant deep learning efforts will need to consider a spectrum of options about how to do so. The range of options includes building a complete in-house AI capability, outsourcing these capabilities, or leveraging AI-as-a-service offerings.

Based on the use cases they plan to build, companies will need to create a data plan that produces results and predictions, which can be fed either into designed interfaces for humans to act on or into transaction systems. Key data engineering challenges include data creation or acquisition, defining data ontology, and building appropriate data “pipes.” Given the significant computational requirements of deep learning, some organizations will maintain their own data centers, because of regulations or security concerns, but the capital expenditures could be considerable, particularly when using specialized hardware. Cloud vendors offer another option.

Process can also become an impediment to successful adoption unless organizations are digitally mature. On the technical side, organizations will have to develop robust data maintenance and governance processes, and implement modern software disciplines such as Agile and DevOps. Even more challenging, in terms of scale, is overcoming the “last mile” problem of making sure the superior insights provided by AI are instantiated in the behavior of the people and processes of an enterprise.

On the people front, much of the construction and optimization of deep neural networks remains something of an art, requiring real experts to deliver step-change performance increases. Demand for these skills far outstrips supply at present; according to some estimates, fewer than 10,000 people have the skills necessary to tackle serious AI problems, and competition for them is fierce among the tech giants.

AI can seem an elusive business case

Where AI techniques and data are available and the value is clearly proven, organizations can already pursue the opportunity. In some areas, the techniques today may be mature and the data available, but the cost and complexity of deploying AI may simply not be worthwhile, given the value that could be generated. For example, an airline could use facial recognition and other biometric scanning technology to streamline aircraft boarding, but the value of doing so may not justify the cost and issues around privacy and personal identification.

Similarly, we can see potential cases where the data and the techniques are maturing, but the value is not yet clear. The most unpredictable scenario is where either the data (both the types and volume) or the techniques are simply too new and untested to know how much value they could unlock. For example, in healthcare, if AI were able to build on the superhuman precision we are already starting to see with X-ray analysis and broaden that to more accurate diagnoses and even automated medical procedures, the economic value could be very significant. At the same time, the complexities and costs of arriving at this frontier are also daunting. Among other issues, it would require flawless technical execution and resolving issues of malpractice insurance and other legal concerns.

Societal concerns and regulations can also constrain AI use. Regulatory constraints are especially prevalent in use cases related to personally identifiable information. This is particularly relevant at a time of growing public debate about the use and commercialization of individual data on some online platforms. Use and storage of personal information is especially sensitive in sectors such as banking, health care, and pharmaceutical and medical products, as well as in the public and social sector. In addition to addressing these issues, businesses and other users of data for AI will need to continue to evolve business models related to data use in order to address societies’ concerns. Furthermore, regulatory requirements and restrictions can differ from country to country, as well as from sector to sector.

Implications for stakeholders

As we have seen, it is a company’s ability to execute against AI models that creates value, rather than the models themselves. In this final section, we sketch out some of the high-level implications of our study of AI use cases for providers of AI technology, appliers of AI technology, and policy makers, who set the context for both.

  • For AI technology provider companies: Many companies that develop or provide AI to others have considerable strength in the technology itself and the data scientists needed to make it work, but they can lack a deep understanding of end markets. Understanding the value potential of AI across sectors and functions can help shape the portfolios of these AI technology companies. That said, they shouldn’t necessarily only prioritize the areas of highest potential value. Instead, they can combine that data with complementary analyses of the competitor landscape, of their own existing strengths, sector or function knowledge, and customer relationships, to shape their investment portfolios. On the technical side, the mapping of problem types and techniques to sectors and functions of potential value can guide a company with specific areas of expertise on where to focus.
  • For companies adopting AI: Many companies seeking to adopt AI in their operations have started machine learning and AI experiments across their business. Before launching more pilots or testing solutions, it is useful to step back and take a holistic approach to the issue, moving to create a prioritized portfolio of initiatives across the enterprise, including AI and the wider analytic and digital techniques available. For a business leader to create an appropriate portfolio, it is important to develop an understanding of which use cases and domains have the potential to drive the most value for a company, as well as which AI and other analytical techniques will need to be deployed to capture that value. This portfolio ought to be informed not only by where the theoretical value can be captured, but by the question of how the techniques can be deployed at scale across the enterprise. How analytical techniques scale is driven less by the techniques themselves and more by a company’s skills, capabilities, and data. Companies will need to consider efforts on the “first mile,” that is, how to acquire and organize data, as well as on the “last mile,” how to integrate the output of AI models into workflows ranging from those of clinical-trial managers and sales-force managers to procurement officers. Previous MGI research suggests that AI leaders invest heavily in these first- and last-mile efforts.
  • Policy makers will need to strike a balance between supporting the development of AI technologies and managing any risks from bad actors. They have an interest in supporting broad adoption, since AI can lead to higher labor productivity, economic growth, and societal prosperity. Their tools include public investments in research and development as well as support for a variety of training programs, which can help nurture AI talent. On the issue of data, governments can spur the development of training data directly through open data initiatives. Opening up public-sector data can spur private-sector innovation. Setting common data standards can also help. AI is also raising new questions for policy makers to grapple with for which historical tools and frameworks may not be adequate. Therefore, some policy innovations will likely be needed to cope with these rapidly evolving technologies. But given the scale of the beneficial impact on business, the economy, and society, the goal should not be to constrain the adoption and application of AI, but rather to encourage its beneficial and safe use.

https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-applications-and-value-of-deep-learning

Glossy : Blockchain, Internet of Things and AI: What the newest luxury startup accelerators are investing in

Blockchain, artificial intelligence, and sustainability and raw materials are the top new technology priorities at a batch of recently formed fashion accelerators.

LVMH announced earlier this month that it will be launching an annual startup accelerator at Station F, a business incubator in Paris, and will choose 50 startups to join two six-month programs. On Friday, Farfetch announced the launch of Dream Assembly, its first in-house accelerator in partnership with Burberry and 500 Startups, a venture capital firm. It’s currently taking applications from companies and will choose up to 10 to participate in a 12-week program. Kering, a founding partner of the accelerator Plug and Play, started its second 12-week season with 15 new startups in March.

Helping to foster new industry startups is a way for retailers and conglomerates to lend their support and cash to companies they think will prove lucrative and beneficial down the line, as well as demonstrate an interest in the broad umbrella of innovation. Retail and accelerator partnerships have popped up throughout the industry: Target’s Techstars Retail accelerator will start its third program this summer; the R/GA Accelerator has partnered with brands and retailers like Nike and Westfield; beauty retailers like Sephora and L’Oréal have launched in-house accelerators to harbor relevant technology companies within their own walls. Early successes within the world of luxury accelerators include Orchard Mile, incubated by an early version of LVMH’s accelerator, and The Modist, which has received funding from Farfetch.

“It’s very important for new companies to continue to form luxury in all shapes and sizes,” said Morty Singer, CEO of business development firm Marvin Traub Associates. “LVMH’s service explores the technology and innovation out there, where they may not be ready to invest in-house, and you can really see where the industry is headed by looking at the companies that they’re supporting.”

In luxury, startups working on supply-chain logistics technology, fabric sourcing and sustainable materials, personalization and customization through artificial intelligence, e-commerce solutions and mobile capabilities are all present across the accelerators, but blockchain and the Internet of Things make up one of the newest trends to emerge among startup initiatives. It’s one of the seven “main ideas” at LVMH’s new accelerator, titled La Maison des Startups; JD.com launched a blockchain-specific accelerator in February; last year, Kering’s sustainability-centric accelerator included blockchain startups like A Transparent Company, as the blockchain can be used to verify where materials are sourced from and where items are produced. It’s also playing a role in anti-counterfeiting efforts in online luxury.

“This isn’t something that a luxury fashion house is born knowing how to do. Brands would have to build out new teams to deliver new products and new content for connected consumers, and be continually investing in new content and experiences,” said Lauren Nutt Bello, the managing partner at digital agency Ready Set Rocket. “Already, there’s an overloading amount of data that brands don’t know how to make sense of, and some luxury companies are brands that don’t even have full e-commerce stores. They need outside help.”

The goal is to scout technology startups that could be absorbed at the brand level. While other fashion accelerators like the Fashion Tech Accelerator and XRC Labs offer potential retail partnerships as a perk of the program, retail-founded accelerators build brand partnerships in as a guarantee. At LVMH’s La Maison des Startups, the companies invited to participate will be working on projects with individual fashion houses owned by the conglomerate to incorporate new technologies and practices at the brand level. Burberry is the first brand partner of Farfetch’s Dream Assembly, and new brands can get involved down the line. Kering brands all have the option of participating in the Plug and Play accelerator, and the company’s sustainability initiative is an open-source resource for all of its owned brands. None of the accelerators promise investments to the startups, and terms for potential investments aren’t disclosed.

But building up a startup in such close proximity to brands — in some cases, legacy brands — can invite competing agendas. Incubators like XRC Labs offer a more mixed variety of insight from experts, mentors and judges from across the industry, whereas in-house retail accelerators can have a one-track mind focused on their own goals and limit the potential for investment to a narrower group. Partnerships with unbiased players in the space, like Station F and 500 Startups, can help. The big trade-off, of course, is access.

“When I started my company, I needed exposure to boutiques and to brands; I needed consumers I could learn from; I needed technology support and mentoring,” said Jose Neves, the founder and CEO of Farfetch.

https://www.glossy.co/evolution-of-luxury/blockchain-internet-of-things-and-ai-what-the-newest-luxury-startup-accelerators-are-investing-in

Teads – Real-life AWS infrastructure cost optimization strategy

The cloud computing opportunity and its traps

One of the advantages of cloud computing is its ability to fit the infrastructure to your needs: you only pay for what you really use. That is how most hyper-growth startups have managed their incredible ascents.

Most companies migrating to the cloud embrace the “lift & shift” strategy, replicating what was once on premises.

You most likely won’t save a penny with this first step.

The main reasons are:

  • Your applications do not support elasticity yet,
  • Your applications rely on complex backends (RabbitMQ, Cassandra, Galera clusters, etc.) that you need to migrate along with them,
  • Your code relies on being executed in a known network environment and most likely uses NFS as a distributed storage mechanism.

Once in the cloud, you need to “cloudify” your infrastructure.

Then, and only then, will you have access to virtually infinite computing power and storage.

Watch out, this apparent freedom can lead to very serious drift: over-provisioning, under-optimizing your code, or even forgetting to “turn off the lights” by letting that small PoC run longer than necessary on that very nice r3.8xlarge instance.

Essentially, you have just replaced your need for capacity planning with a need for cost monitoring and optimization.

The dark side of cloud computing

At Teads we were “born in the cloud” and we are very happy about it.

One of our biggest pain points today with our cloud providers is the complexity of their pricing.

It is designed to look very simple at first glance (usually based on simple metrics like $/GB/month, $/hour or, more recently, $/second), but as you expand into a multi-region infrastructure mixing lots of products, you will have a hard time tracking the ever-growing cost of your cloud infrastructure.

For example, the cost of putting a file on S3 and serving it from there includes four different lines of billing:

  • Actual storage cost (80% of your bill)
  • Cost of the HTTP PUT request (2% of your bill)
  • Cost of the many HTTP GET requests (3% of your bill)
  • Cost of the data transfer (15% of your bill)

Our take on Cost Optimization

  • Focus on structural costs – Never block a short-term cost increase that would speed up the business or enable a technical migration.
  • Everyone is responsible – Provide tooling to each team to make them autonomous on their cost optimization.

The limit of cost optimization for us is when it drives more complexity in the code and less agility in the future, for a limited ROI.
This way of thinking also helps us tackle cost optimization in our day-to-day development work.

Overall we can extend this famous quote from Kent Beck:

“Make it work, make it right, make it fast” … and then cost efficient.

Billing Hygiene

It is of the utmost importance to maintain strict billing hygiene and know your daily spend.

In some cases, it will help you identify suspicious uptrends, like a service stuck in a loop writing a huge volume of logs to S3, or a developer who left their test infrastructure up and running over a weekend.

You need to arm yourself with a detailed monitoring of your costs and spend time looking at it every day.

You have several options to do so, starting with AWS’s own tools:

  • Billing Dashboard, giving a high level view of your main costs (Amazon S3, Amazon EC2, etc.) and a rarely accurate forecast, at least for us. Overall, it’s not detailed enough to be of use for serious monitoring.
  • Detailed Billing Report, this feature has to be enabled in your account preferences. It sends you a daily gzipped .csv file containing one line per billable item since the beginning of the month (e.g., instance A sent X Mb of data on the Internet).
    The detailed billing is an interesting source of data once you have added custom tags to your services so that you can group your costs by feature / application / part of your infrastructure.
    Be aware that this file is accurate within a delay of approximately two days as it takes time for AWS to compute the files.
  • Trusted Advisor, available at the business and enterprise support level, also includes a cost section with interesting optimization insights.
Trusted Advisor cost section – Courtesy of AWS
  • Cost Explorer, an interesting tool since its update in August 2017. It can be used to quickly identify trends, but it is still limited as you cannot build complete dashboards with it. It is mainly a reporting tool (though see the sketch after this list).
Example of a Cost Explorer report — AWS documentation
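As a complement to the console tools above, Cost Explorer data can also be pulled programmatically. The following short boto3 sketch is our own illustration, not part of the original article: it retrieves daily unblended cost grouped by service for the last week. Note that AWS bills each Cost Explorer API request separately.

```python
# Pull daily unblended cost per service for the last 7 days via the
# Cost Explorer API (boto3 "ce" client). Illustrative sketch only.
from datetime import date, timedelta
import boto3

ce = boto3.client("ce")
end = date.today()
start = end - timedelta(days=7)

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)
for day in resp["ResultsByTime"]:
    for group in day["Groups"]:
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        if amount > 1:  # skip tiny line items to keep the output readable
            print(day["TimePeriod"]["Start"], group["Keys"][0], round(amount, 2))
```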

Then you have several other external options to monitor the costs of your infrastructure:

  • SaaS products like Cloudyn / Cloudhealth. These solutions are really well made and will tell you how to optimize your infrastructure. Their pricing model is based on a percentage of your annual AWS bill, not on the savings the tools help you make, which was a showstopper for us.
  • The open source project Ice, initially developed by Netflix for its own use. Recently, leadership of this project was transferred to the French startup Teevity, which also offers a SaaS version for a fixed fee. This could be a great option as it also handles GCP and Azure.

Building our own monitoring solution

At Teads we decided to go DIY using the detailed billing files.

We built a small Lambda function that ingests the detailed billing file into Redshift every day. This tool helps us slice and dice our data along numerous dimensions to dive deeper into our costs. We also use it to spot suspicious usage uptrends, down to the service level.
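As an illustration of that pipeline, here is a minimal sketch of such a Lambda function, not Teads’ actual code: it reacts to the billing file landing in S3 and loads it into Redshift with a COPY command. The bucket, table, IAM role, and environment variable names are hypothetical, and psycopg2 would need to be packaged with the function.

```python
# Hypothetical Lambda handler: load the gzipped detailed billing CSV from S3
# into a Redshift table using COPY. Names and credentials are placeholders.
import os
import psycopg2

def handler(event, context):
    # S3 put event for the detailed billing file
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    conn = psycopg2.connect(
        host=os.environ["REDSHIFT_HOST"],
        port=5439,
        dbname=os.environ["REDSHIFT_DB"],
        user=os.environ["REDSHIFT_USER"],
        password=os.environ["REDSHIFT_PASSWORD"],
    )
    copy_sql = """
        COPY billing.detailed_line_items
        FROM 's3://{bucket}/{key}'
        IAM_ROLE '{role}'
        CSV IGNOREHEADER 1 GZIP TIMEFORMAT 'auto';
    """.format(bucket=bucket, key=key, role=os.environ["REDSHIFT_COPY_ROLE"])

    with conn, conn.cursor() as cur:
        # In practice you would first delete the current month's rows, since
        # AWS republishes the full month-to-date file daily; omitted for brevity.
        cur.execute(copy_sql)
    conn.close()
```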

This is an example of our daily dashboard built with chart.io; each color corresponds to a service we tagged
When zoomed on a specific service, we can quickly figure out what is expensive

On top of that, we still use a spreadsheet to integrate the reservation upfronts in order to get a complete overview and the full daily costs.

Now that we have the data, how to optimize?

Here are the 5 pillars of our cost optimization strategy.

1 – Reserved Instances (RIs)

First things first, you need to reserve your instances. Technically speaking, RIs will only make sure that you have access to the reserved resources.

At Teads our reservation strategy is based on bi-annual reservation batches, and we are also evaluating higher frequencies (3 to 4 batches per year).

The right frequency should be determined by the best compromise between flexibility (handling growth, having leaner financial streams) and the ability to manage the reservations efficiently.
In the end, managing reservations is a time consuming task.

Reservation is mostly a financial tool; you commit to pay for resources for one or three years and get a discount over the on-demand price:

  • You have two types of reservations, standard or convertible. Convertible reservations let you change the instance family but come with a smaller discount than standard ones (roughly 75% on average for standard vs. 54% for convertible). They are the best option for leveraging future instance families in the long run.
  • Reservations come with three different payment options: Full Upfront, Partial Upfront, and No Upfront. With partial and no upfront, you pay the remaining balance monthly over the term. We prefer partial upfront since its discount rate is very close to the full-upfront one (e.g., 56% vs. 55% for a convertible three-year term with partial upfront); the sketch after this list shows how such discounts translate into a simple break-even comparison.
  • Don’t forget that you can reserve a lot of things and not only Amazon EC2 instances: Amazon RDS, Amazon Elasticache, Amazon Redshift, Amazon DynamoDB, etc.
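To see why reservations are mostly a financial decision, here is a toy comparison with hypothetical prices (not current AWS rates) between on-demand and a partial-upfront reservation over a one-year term, including the utilization level at which the reservation breaks even.

```python
# Toy on-demand vs. partial-upfront reservation comparison. All prices are
# made up for illustration; plug in real rates from the AWS pricing pages.
HOURS_PER_YEAR = 24 * 365

on_demand_hourly = 0.10   # $/hour, hypothetical
ri_upfront = 300.00       # partial upfront payment, hypothetical
ri_hourly = 0.035         # hourly charge with partial upfront, hypothetical

on_demand_cost = on_demand_hourly * HOURS_PER_YEAR
reserved_cost = ri_upfront + ri_hourly * HOURS_PER_YEAR

print(f"On-demand for a year:       ${on_demand_cost:,.0f}")
print(f"Reserved (partial upfront): ${reserved_cost:,.0f}")
print(f"Effective discount:         {1 - reserved_cost / on_demand_cost:.0%}")

# Break-even utilization: the fraction of the year the instance must run
# for the reservation to beat paying on-demand.
break_even_hours = ri_upfront / (on_demand_hourly - ri_hourly)
print(f"Break-even after ~{break_even_hours / HOURS_PER_YEAR:.0%} of the year")
```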

2 – Optimize Amazon S3

The second source of optimization is object management on S3. Storage is cheap and infinite, but that is not a valid reason to keep all your data there forever. Many companies do not clean up their data on S3, even though several trivial mechanisms could be used:

The Object Lifecycle option enables you to set simple rules for objects in a bucket:

  • Infrequent Access Storage (IAS): for application logs, set the object storage class to Infrequent Access Storage after a few days.
    IAS will cut the storage cost by a factor of two but comes with a higher cost for requests.
    The main drawback of IAS is that it bills a minimum object size of 128 KB, so storing a lot of smaller objects ends up more expensive than standard storage.
  • Glacier: Amazon Glacier is a very long-term archiving service, also called cold storage.
    Here is a nice article from Cloudability if you want to dig deeper into optimizing storage costs and compare the different options.

Also, don’t forget to set up a delete (expiration) policy for files you know you won’t need anymore; the sketch below combines lifecycle transitions with such an expiration rule.
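As an example of these rules, here is a minimal boto3 sketch (ours, with a hypothetical bucket name and prefix) that transitions application logs to Infrequent Access after 30 days, to Glacier after 90 days, and deletes them after a year.

```python
# Apply a lifecycle configuration to a (hypothetical) log bucket:
# Standard -> Standard-IA after 30 days -> Glacier after 90 days -> delete after 365.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-app-logs",          # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-old-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```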

Finally, enabling a VPC Endpoint for your Amazon S3 buckets will eliminate the data transfer costs between Amazon S3 and your instances.

3 – Leverage the Spot market

Spot instances enable you to use AWS’s spare computing power at a heavily discounted price. This can be very interesting depending on your workloads.

Spot instances are bought through a kind of auction: if your bid is above the spot market rate, you get the instance and only pay the market price. However, these instances can be reclaimed if the market price exceeds your bid.

At Teads, we usually bid the on-demand price to be sure we get the instance. We still only pay the “market” rate, which gives us a discount of up to 90%.

It is worth noting that:

  • You get a two-minute termination notice before your Spot instance is reclaimed, but you need to poll for it (see the sketch after this list).
  • Spot Instances are easy to use for non-critical batch workloads and interesting for data processing; they are a very good match for Amazon Elastic MapReduce.
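As an illustration of watching for that notice, here is a small polling loop of our own, not taken from the article. The drain() hook is hypothetical and stands for whatever your application does to stop taking new work before the instance disappears.

```python
# Poll the EC2 instance metadata endpoint for a spot termination notice.
# The endpoint returns 404 until a termination is scheduled, then 200 with
# the termination timestamp roughly two minutes in advance.
import time
import requests

TERMINATION_URL = "http://169.254.169.254/latest/meta-data/spot/termination-time"

def drain():
    # Hypothetical hook: deregister from the load balancer, finish in-flight work, etc.
    print("Termination notice received, draining...")

while True:
    resp = requests.get(TERMINATION_URL, timeout=1)
    if resp.status_code == 200:
        drain()
        break
    time.sleep(5)
```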

4 – Data transfer

Back in the physical world, you were used to paying for the network link between your data center and the Internet.

Whatever data you sent through that link was free of charge.

In the cloud, data transfer can grow to become really expensive.

You are charged for data transfer from your services to the Internet but also in-between AWS Availability Zones.

This can quickly become an issue when using distributed systems like Kafka and Cassandra, which need to be deployed across different zones to be highly available and constantly exchange data over the network.

Some advice:

  • If you have instances communicating with each other, you should try to locate them in the same AZ
  • Use managed services like Amazon DynamoDB or Amazon RDS, as their inter-AZ replication cost is built into their pricing
  • If you serve more than a few hundred terabytes per month, you should discuss it with your account manager
  • Use Amazon CloudFront (AWS’s CDN) as much as you can when serving static files. The data transfer out rates are cheaper from CloudFront and free between CloudFront and EC2 or S3.

5 – Unused infrastructure

With a growing infrastructure, you can rapidly forget to turn off unused and idle things:

  • Detached Elastic IPs (EIPs): they are free when attached to an EC2 instance, but you pay for them when they are not.
  • Block store (EBS) volumes created with your EC2 instances are preserved when you stop the instances. As you will rarely re-attach a root EBS volume, you can delete them. Snapshots also tend to pile up over time; look into those as well.
  • A load balancer (ELB) with no traffic is easy to detect and obviously useless. Still, it will cost you about $20 per month.
  • Instances with no network activity over the last week; in a cloud context, keeping them running rarely makes sense.

Trusted Advisor can help you in detecting these unnecessary expenses.
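A small script can flag some of these items as well. The boto3 sketch below, ours rather than the article’s, lists Elastic IPs that are not attached to anything and EBS volumes sitting in the unattached “available” state; run it once per region.

```python
# Flag detached Elastic IPs and unattached EBS volumes in one region.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # region is an example

# Detached Elastic IPs: an address with no association and no instance.
for addr in ec2.describe_addresses()["Addresses"]:
    if "AssociationId" not in addr and "InstanceId" not in addr:
        print(f"Detached EIP: {addr.get('PublicIp')}")

# Unattached EBS volumes are in the "available" state.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]
for vol in volumes:
    print(f"Unattached EBS volume: {vol['VolumeId']} ({vol['Size']} GiB)")
```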


Key takeaways

https://medium.com/teads-engineering/real-life-aws-cost-optimization-strategy-at-teads-135268b0860f

Shopify – 67 Key Performance Indicators (KPIs) for Ecommerce

Key performance indicators (KPIs) are like milestones on the road to online retail success. Monitoring them will help ecommerce entrepreneurs identify progress toward sales, marketing, and customer service goals.

KPIs should be chosen and monitored depending on your unique business goals. Certain KPIs support some goals while they’re irrelevant for others. With the idea that KPIs should differ based on the goal being measured, it’s possible to consider a set of common performance indicators for ecommerce.

Here is the definition of key performance indicators, types of key performance indicators, and 67 examples of ecommerce key performance indicators.

What is a performance indicator?

A performance indicator is a quantifiable measurement or data point used to gauge performance relative to some goal. As an example, some online retailers may have a goal to increase site traffic 50% in the next year.

Relative to this goal, a performance indicator might be the number of unique visitors the site receives daily or which traffic sources send visitors (paid advertising, search engine optimization, brand or display advertising, a YouTube video, etc.)

What is a key performance indicator?

For most goals there could be many performance indicators — often too many — so people usually narrow them down to just two or three impactful data points known as key performance indicators. KPIs are those measurements that most accurately and succinctly show whether or not a business is progressing toward its goal.

Why are key performance indicators important?

KPIs are important just like strategy and goal setting are important. Without KPIs, it’s difficult to gauge progress over time. You’d be making decisions based on gut instinct, personal preference or belief, or other unfounded hypotheses. KPIs tell you more information about your business and your customers, so you can make informed and strategic decisions.

But KPIs aren’t important on their own. The real value lies in the actionable insights you take away from analyzing the data. You’ll be able to more accurately devise strategies to drive more online sales, as well as understand where there may be problems in your business.

Plus, the data related to KPIs can be distributed to the larger team. This can be used to educate your employees and come together for critical problem-solving.

What is the difference between a SLA and a KPI?

SLA stands for service level agreement, while a KPI is a key performance indicator. A service level agreement in ecommerce establishes the scope of the working relationship between an online retailer and a vendor. For example, you might have an SLA with your manufacturer or digital marketing agency. A KPI, as we know, is a metric or data point related to some business operation. These are often quantifiable, but KPIs may also be qualitative.

Types of key performance indicators

There are many types of key performance indicators. They may be qualitative, quantitative, predictive of the future, or revealing of the past. KPIs also touch on various business operations. When it comes to ecommerce, KPIs generally fall into one of the following five categories:

  1. Sales
  2. Marketing
  3. Customer service
  4. Manufacturing
  5. Project management

67 key performance indicator examples for ecommerce

Note: The performance indicators listed below are in no way an exhaustive list. There are an almost infinite number of KPIs to consider for your ecommerce business.

What are key performance indicators for sales?

Sales key performance indicators are measures that tell you how your business is doing in terms of conversions and revenue. You can look at sales KPIs related to a specific channel, time period, team, employee, etc. to inform business decisions.

Examples of key performance indicators for sales include:

  • Sales: Ecommerce retailers can monitor total sales by the hour, day, week, month, quarter, or year.
  • Average order size: Sometimes called average market basket, the average order size tells you how much a customer typically spends on a single order.
  • Gross profit: Calculate this KPI by subtracting the total cost of goods sold from total sales.
  • Average margin: Average margin, or average profit margin, is a percentage that represents your profit margin over a period of time.
  • Number of transactions: This is the total number of transactions. Use this KPI in conjunction with average order size or total number of site visitors for deeper insights.
  • Conversion rate: The conversion rate, also a percentage, is the rate at which users on your ecommerce site are converting (or buying). Calculate it by dividing the total number of conversions by the total number of visitors (to a site, page, category, or selection of pages), as shown in the sketch after this list.
  • Shopping cart abandonment rate: The shopping cart abandonment rate tells you how many users are adding products to their shopping cart but not checking out. The lower this number, the better. If your cart abandonment rate is high, there may be too much friction in the checkout process.
  • New customer orders vs. returning customer orders: This metric shows a comparison between new and repeat customers. Many business owners focus only on customer acquisition, but customer retention can also drive loyalty, word of mouth marketing, and higher order values.
  • Cost of goods sold (COGS): COGS tells you how much you’re spending to sell a product. This includes manufacturing, employee wages, and overhead costs.
  • Total available market relative to a retailer’s share of market: Tracking this KPI will tell you how much your business is growing compared to others within your industry.
  • Product affinity: This KPI tells you which products are purchased together. This can and should inform cross-promotion strategies.
  • Product relationship: This is which products are viewed consecutively. Again, use this KPI to formulate effective cross-selling tactics.
  • Inventory levels: This KPI could tell you how much stock is on hand, how long product is sitting, how quickly product is selling, etc.
  • Competitive pricing: It’s important to gauge your success and growth against yourself and against your competitors. Monitor your competitors’ pricing strategies and compare them to your own.
  • Customer lifetime value (CLV): The CLV tells you how much a customer is worth to your business over the course of their relationship with your brand. You want to increase this number over time through strengthening relationships and focusing on customer loyalty.
  • Revenue per visitor (RPV): RPV gives you an average of how much a person spends during a single visit to your site. If this KPI is low, you can view website analytics to see how you can drive more online sales.
  • Churn rate: For an online retailer, the churn rate tells you how quickly customers are leaving your brand or canceling/failing to renew a subscription with your brand.
  • Customer acquisition cost (CAC): CAC tells you how much your company spends on acquiring a new customer. This is measured by looking at your marketing spend and how it breaks down per individual customer.
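To make a few of these definitions concrete, here is a toy calculation with hypothetical numbers for conversion rate, average order size, gross profit, average margin, and cart abandonment rate.

```python
# Toy sales-KPI calculations; all input figures are made up for illustration.
visitors = 48_000
orders = 1_200
revenue = 96_000.00
cogs = 57_600.00          # cost of goods sold
carts_created = 3_000

conversion_rate = orders / visitors            # conversions / visitors
average_order_value = revenue / orders         # a.k.a. average order size
gross_profit = revenue - cogs
average_margin = gross_profit / revenue
cart_abandonment = 1 - (orders / carts_created)

print(f"Conversion rate:       {conversion_rate:.1%}")       # 2.5%
print(f"Average order size:    ${average_order_value:,.2f}")  # $80.00
print(f"Gross profit:          ${gross_profit:,.2f}")         # $38,400.00
print(f"Average margin:        {average_margin:.0%}")         # 40%
print(f"Cart abandonment rate: {cart_abandonment:.0%}")       # 60%
```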

What are key performance indicators for marketing?

Key performance indicators for marketing tell you how well you’re doing in relation to your marketing and advertising goals. These also impact your sales KPIs. Marketers use KPIs to understand which products are selling, who’s buying them, how they’re buying them, and why they’re buying them. This can help you market more strategically in the future and inform product development.

Examples of key performance indicators for marketing include:

  • Site traffic: Site traffic refers to the total number of visits to your ecommerce site. More site traffic means more users are hitting your store.
  • New visitors vs. returning visitors: New site visitors are first-time visitors to your site. Returning visitors, on the other hand, have been to your site before. While looking at this metric alone won’t reveal much, it can help ecommerce retailers gauge success of digital marketing campaigns. If you’re running a retargeted ad, for example, returning visitors should be higher.
  • Time on site: This KPI tells you how much time visitors are spending on your website. Generally, more time spent means they’ve had deeper engagements with your brand. Usually, you’ll want to see more time spent on blog content and landing pages and less time spent through the checkout process.
  • Bounce rate: The bounce rate tells you how many users exit your site after viewing only one page. If this number is high, you’ll want to investigate why visitors are leaving your site instead of exploring.
  • Pageviews per visit: Pageviews per visit refers to the average number of pages a user will view on your site during each visit. Again, more pages usually means more engagement. However, if it’s taking users too many clicks to find the products they’re looking for, you want to revisit your site design.
  • Average session duration: The average amount of time a person spends on your site during a single visit is called the average session duration.
  • Traffic source: The traffic source KPI tells you where visitors are coming from or how they found your site. This will show you which channels are driving the most traffic, such as organic search, paid ads, or social media.
  • Mobile site traffic: Monitor the total number of users who use mobile devices to access your store and make sure your site is optimized for mobile.
  • Day part monitoring: Looking at when site visitors come can tell you which are peak traffic times.
  • Newsletter subscribers: The number of newsletter subscribers refers to how many users have opted into your email marketing list. If you have more subscribers, you can reach more consumers. However, you’ll also want to look at related data, such as the demographics of your newsletter subscribers, to make sure you’re reaching your target audience.
  • Texting subscribers: Newer to digital marketing than email, ecommerce brands can reach consumers through SMS-based marketing. Texting subscribers refers to the number of customers on your text message contact list. To get started with your own text-based marketing, browse these SMS Shopify apps.
  • Subscriber growth rate: This tells you how quickly your subscriber list is growing. Pairing this KPI with the total number of subscribers will give you good insight into this channel.
  • Email open rate: This KPI tells you the percentage of subscribers that open your email. If you have a low email open rate, you could test new subject lines, or try cleaning your list for inactive or irrelevant subscribers.
  • Email click-through rate (CTR): While the open rate tells you the percentage of subscribers who open the email, the click-through rate tells you the percentage of those who actually clicked on a link after opening (see the sketch after this list). This is arguably more important than the open rate because without clicks, you won’t drive any traffic to your site.
  • Unsubscribes: You can look at both the total number and the rate of unsubscriptions for your email list.
  • Chat sessions initiated: If you have live chat functionality on your ecommerce store, the number of chat sessions initiated tells you how many users engaged with the tool to speak to a virtual aide.
  • Social followers and fans: Whether you’re on Facebook, Instagram, Twitter, Pinterest, or Snapchat (or a combination of a few), the number of followers or fans you have is a useful KPI to gauge customer loyalty and brand awareness. Many of those social media networks also have tools that ecommerce businesses can use to learn more about their social followers.
  • Social media engagement: Social media engagement tells you how actively your followers and fans are interacting with your brand on social media.
  • Clicks: The total number of clicks a link gets. You could measure this KPI almost anywhere: on your website, social media, email, display ads, PPC, etc.
  • Average CTR: The average click-through rate tells you the percentage of users on a page (or asset) who click on a link.
  • Average position: The average position KPI tells you about your site’s search engine optimization (SEO) and paid search performance. This demonstrates where you are on search engine results pages. Most online retailers have the goal of being number one for their targeted keywords.
  • Pay-per-click (PPC) traffic volume: If you’re running PPC campaigns, this tells you how much traffic you’re successfully driving to your site.
  • Blog traffic: You can find this KPI by simply creating a filtered view in your analytics tool. It’s also helpful to compare blog traffic to overall site traffic.
  • Number and quality of product reviews: Product reviews are great for a number of reasons: They provide social proof, they can help with SEO, and they give you valuable feedback for your business. The quantity and content of product reviews are important KPIs to track for your ecommerce business.
  • Banner or display advertising CTRs: The CTRs for your banner and display ads will tell you the percentage of viewers who have clicked on the ad. This KPI will give you insight into your copy, imagery, and offer performance.
  • Affiliate performance rates: If you engage in affiliate marketing, this KPI will help you understand which channels are most successful.
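Here is a toy email-funnel calculation with hypothetical numbers for the open-rate and click-through-rate KPIs above. It also shows both ways clicks are commonly measured, against delivered emails and against opens (click-to-open rate), since tools differ on which one they report.

```python
# Toy email-marketing KPI calculations; all input figures are made up.
delivered = 20_000
opens = 4_400
clicks = 660
unsubscribes = 40

open_rate = opens / delivered
ctr_of_delivered = clicks / delivered
click_to_open_rate = clicks / opens
unsubscribe_rate = unsubscribes / delivered

print(f"Open rate:          {open_rate:.1%}")          # 22.0%
print(f"CTR (of delivered): {ctr_of_delivered:.1%}")   # 3.3%
print(f"Click-to-open rate: {click_to_open_rate:.1%}") # 15.0%
print(f"Unsubscribe rate:   {unsubscribe_rate:.2%}")   # 0.20%
```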

What are key performance indicators for customer service?

Customer service KPIs tell you how effective your customer service is and whether you’re meeting expectations. You might be wondering what the KPIs should be in your call center, for your email support team, for your social media support team, and so on. Measuring and tracking these KPIs will help you ensure you’re providing a positive customer experience.

Key performance indicators for customer service include:

  • Customer satisfaction (CSAT) score: The CSAT KPI is typically measured by customer responses to a very common survey question: “How satisfied were you with your experience?” This is usually answered with a numbered scale.
  • Net promoter score (NPS): Your NPS KPI provides insight into your customer relationships and loyalty by telling you how likely customers are to recommend your brand to someone in their network.
  • Hit rate: Calculate your hit rate by taking the total number of sales of a single product and dividing it by the number of customers who have contacted your customer service team about said product.
  • Customer service email count: This is the number of emails your customer support team receives.
  • Customer service phone call count: Rather than email, this is how frequently your customer support team is reached via phone.
  • Customer service chat count: If you have live chat on your ecommerce site, you may have a customer service chat count.
  • First response time: First response time is the average amount of time it takes a customer to receive the first response to their query. Aim low!
  • Average resolution time: This is the amount of time it takes for a customer support issue to be resolved, starting from the point at which the customer reached out about the problem.
  • Active issues: The total number of active issues tells you how many queries are currently in progress.
  • Backlogs: Backlogs are when issues are getting backed up in your system. This could be caused by a number of factors.
  • Concern classification: Beyond the total number of customer support interactions, look at quantitative data around trends to see if you can be proactive and reduce customer support queries. You’ll classify the customer concerns which will help identify trends and your progress in solving issues.
  • Service escalation rate: The service escalation rate KPI tells you how many times a customer has asked a customer service representative to redirect them to a supervisor or other senior employee. You want to keep this number low.

What are key performance indicators for manufacturing?

Key performance indicators for manufacturing are, predictably, related to your supply chain and production processes. These may tell you where efficiencies and inefficiencies are, as well as help you understand productivity and expenses.

Key performance indicators for manufacturing in ecommerce include:

  • Cycle time: The cycle time manufacturing KPI tells you how long it takes to manufacture a single product from start to finish. Monitoring this KPI will give you insight into production efficiency.
  • Overall equipment effectiveness (OEE): The OEE KPI provides ecommerce businesses with insight into how well manufacturing equipment is performing.
  • Overall labor effectiveness (OLE): Just as you’ll want insight into your equipment, the OLE KPI will tell you how productive the staff operating the machines are.
  • Yield: Yield is a straightforward manufacturing KPI. It is the number of products you have manufactured. Consider analyzing the yield variance KPI in manufacturing, too, as that will tell you how much you deviate from your average.
  • First time yield (FTY) and first time through (FTT): FTY, also referred to as first pass yield, is a quality-based KPI. It tells you how wasteful your production processes are. To calculate FTY, divide the number of successfully manufactured units by the total number of units that started the process.
  • Number of non-compliance events or incidents: In manufacturing, there are several sets of regulations, licenses, and policies businesses must comply with. These are typically related to safety, working conditions, and quality. You’ll want to reduce this number to ensure you’re operating within the mandated guidelines.

What are key performance indicators for project management?

Key performance indicators for project management give you insight into how well your teams are performing and completing specific tasks. Each project or initiative within your ecommerce business has different goals, and must be managed with different processes and workflows. Project management KPIs tell you how well each team is working to achieve their respective goals and how well their processes are working to help them achieve those goals.

Key performance indicators for project management include:

  • Hours worked: The total hours worked tells you how much time a team put into a project. Project managers should also assess the variance in estimated vs. actual hours worked to better predict and resource future projects.
  • Budget: The budget indicates how much money you have allocated for the specific project. Project managers and ecommerce business owners will want to make sure that the budget is realistic; if you’re repeatedly over budget, some adjustments to your project planning need to be made.
  • Return on investment (ROI): The ROI KPI for project management tells you how much your efforts earned your business. The higher this number, the better. The ROI accounts for all of your expenses and earnings related to a project.
  • Cost variance: Just as it’s helpful to compare real vs. predicted timing and hours, you should examine the total cost against the predicted cost. This will help you understand where you need to reel it in and where you may want to invest more.
  • Cost performance index (CPI): The CPI for project management, like ROI, tells you how much your resource investment is worth. The CPI is calculated by dividing the earned value by the actual costs. If you come in under one, there’s room for improvement.
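As a quick worked example with hypothetical figures, the sketch below computes cost variance, CPI, and ROI from a budgeted (earned value) cost, an actual cost, and the project’s earnings.

```python
# Toy project-management KPI calculations; all figures are hypothetical.
budgeted_cost = 50_000.00    # planned cost of the work performed (earned value)
actual_cost = 62_500.00      # what the project actually spent
project_earnings = 90_000.00

cost_variance = budgeted_cost - actual_cost    # negative means over budget
cpi = budgeted_cost / actual_cost              # earned value / actual cost
roi = (project_earnings - actual_cost) / actual_cost

print(f"Cost variance: ${cost_variance:,.0f}")  # -$12,500 (over budget)
print(f"CPI:           {cpi:.2f}")              # 0.80 -> under 1, room for improvement
print(f"ROI:           {roi:.0%}")              # 44%
```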

How do I create a KPI?

Selecting your KPIs begins with clearly stating your goals and understanding which areas of business impact those goals. Of course, KPIs for ecommerce can and should differ for each of your goals, whether they’re related to boosting sales, streamlining marketing, or improving customer service.

Key performance indicator templates

Here are a few key performance indicator templates, with examples of goals and the associated KPIs.

GOAL 1: Boost sales 10% in the next quarter.

KPI examples:

  • Daily sales.
  • Conversion rate.
  • Site traffic.

GOAL 2: Increase conversion rate 2% in the next year.

KPI examples:

  • Conversion rate.
  • Shopping cart abandonment rate.
  • Competitive pricing.

GOAL 3: Grow site traffic 20% in the next year.

KPI examples:

  • Site traffic.
  • Traffic sources.
  • Promotional click-through rates.
  • Social shares.
  • Bounce rates.

GOAL 4: Reduce customer service calls by half in the next 6 months.

KPI examples:

  • Service call classification.
  • Pages visited immediately before call.

There are many performance indicators and the value of those indicators is directly tied to the goal measured. Monitoring which page someone visited before initiating a customer service call makes sense as a KPI for GOAL 4 since it could help identify areas of confusion that, when corrected, would reduce customer service calls. But that same performance indicator would be useless for GOAL 3.

Once you have set goals and selected KPIs, monitoring those indicators should become an everyday exercise. Most importantly: Performance should inform business decisions and you should use KPIs to drive actions.

Gary Silberg – Are CFOs ready for a changing business model?

Imagine 25,000 automobile parts sourced in wildly different locations throughout the world, magically converging upon an assembly plant to create a vehicle. Consider the complexity for the CFO — the billions of dollars spent for design, for building plants, and for marketing and advertising.

Consider the attention necessary to address taxation and regulation and the complex economic rationale derived from vehicle line profitability, return on invested capital, cash flow and optimal capital structure.

But if anything, the life of the CFO is about to become even more complex.

For the carmaker and supplier, the financial implications largely end when the vehicle leaves the factory. The car then becomes mostly someone else’s business, except for auto financing and parts supply. The sale is made to the dealer and the revenue recognized.

However, a vast range of new technologies and technological abilities — graphic processing unit chips, LIDAR, mapping software, deep learning and artificial intelligence — are transforming consumer behavior and revolutionizing the way we lead our lives, including how we use our automobiles.

New realities

This doesn’t mean the traditional carmaking business is going away anytime soon, but car sales will decline as mobility services reduce the need to own automobiles.

Thus, car companies must change to accommodate a world where revenue comes more from providing services. That is a dilemma for the CFO, a dilemma of business models: the need to serve multiple innovation paces at once.

CFOs must maintain the traditional pace of the business that reflects the sale of cars — consumer interaction once every three to five years — but must also accommodate new business realities for the faster-paced transactions necessary for emerging, service-oriented markets — indeed, consumer interactions as often as many times per day.

Great potential

Those emerging markets have great potential: mobility services, power provision, fuel services, data aggregation and insurance, for example. This will produce a trillion-dollar market for mobility services alone, thus changing the auto industry.

But adding service businesses requires far-reaching strategic decisions affecting complex revenue models, balance sheets, capital structure, taxation and governance.

This sea change in the industry requires new key performance indicators for the CFO to set a new drumbeat for measuring growth, profitability and sustainability. Some metrics that will be important: revenue passenger miles, recurring revenue and number of active customers.

New indicators for profitability are: passenger revenue per available seat mile, revenue per available seat mile, cost per available seat mile, customer acquisition costs and recurring margins.

Entering the mobility service or any service business market is a profound change.

For the office of the CFO, it will mean a radical difference in how it operates. Both the metrics for strategic drivers and key finance considerations require rethinking, restaffing, and reinvesting in an infrastructure to accommodate an entirely different kind of business model.

While it can seem overwhelming, it’s important for CFOs to be thoughtful and lead by looking outside the industry to find innovative solutions and business models to meet these challenges.

The question for CFOs is not whether the business model will change, but whether they are ready for the drastic changes coming to the automotive industry.

Caitlin Stanway – Innovation Is Too Important To Be Left Siloed In The “Innovation” Department

Linda has spent 17 years with W. L. Gore & Associates serving multiple roles. In the Medical Products Division, Linda led new product development teams from ideation to commercial launch; drove technical resourcing for manufacturing engineering; and owns two patents. Linda dedicated the past two years to the creation and launch of the Gore Innovation Center in Silicon Valley, where she took it from its original concept to the facility’s execution, completion, and launch. Now, she’s working to jointly develop products, technologies, and business models where Gore materials can uniquely add value.

How do you encourage a culture of innovation?

Gore’s unique culture encourages Associates to pursue questions, ideas and innovations as part of their daily commitments. Associates are encouraged to use their “dabble time” to explore areas that they believe may be of value to Gore, even if they are outside their current division. Gore’s history of innovation has resulted in important problem solving and business creation as a result of genuinely curious Associates who came up with an idea and had the passion to pursue it.

For example, the Elixir Guitar Strings were born out of the W. L. Gore concept of “dabble time.” During a testing session of slender cables for Disney puppets, a group of Gore engineers noticed the cables were too difficult to control and the prototype failed. Instead of giving up on the project, Gore encouraged them to use some “dabble time” to think of alternative solutions. The group decided they needed a smoother, lower friction cable and realized they could use guitar strings as a substitute for the prototype. Once the strings became integrated into that process, the engineers realized they could create a stronger and longer lasting guitar string by combining the existing strings with Gore polymers.

While this is one proof point for fostering an innovative mindset, we also believe in the power of creating an internal accelerator: a small team that is available to pursue ideas that germinate from employees. Most employee-generated ideas go nowhere because there are insufficient resources with the right perspective. A broad set of business-building skills and an adequate resource pool are required to take a good idea to a minimum viable product and then build it into a successful business.

Do you have any tips on how companies can have a more innovative mindset?

Innovation is too important to be left siloed in the “innovation” department. Innovation can come from anywhere inside or outside the company. For any company, it is key to be open to the ideas from any source and then take the time to flesh out and prioritize the ideas. Once priorities have been established, the company needs to devote the time and resources necessary to make the ideas successful, then announce the success across the company. A more innovative mindset across the entire company can build from one employee-generated success. Employees are highly motivated by such a success, which improves morale and promotes a supportive culture of innovation across the company.

How do you encourage a culture of innovation in a small company versus a large company?

In all companies, regardless of size, employees need to understand their contribution to company innovation goals. So, leadership is the key. Leaders must communicate their expectations of employees and put infrastructure in place to enable employees to pursue their ideas and curiosities. Companies can give employees a percentage of their time to pursue ideas or hold employee idea contests. Companies see the most success when they have taken the time to educate employees on design thinking, lean innovation, business model innovation, open innovation and more. Leadership, setting expectations and providing infrastructure that supports those expectations, works the same across both large and small companies. Gore started as two people in a garage 60 years ago. It is now a $3 billion company with over 9,500 Associates. Gore grew up with a culture of innovation and took the necessary steps to ensure that as the company grew, it stayed true to the roots of its culture, made changes when necessary, and allowed Associates to be free to innovate. Leadership was and continues to be critical to this journey.

How important is workplace diversity to innovation?

Innovation comes not just from breadth of experience and deep technical knowledge, but through the involvement of diverse teams. Pioneering ideas result when all those involved — everyone from engineers to customers — tap into their individual talents and experiences. Gore is a stronger enterprise because we foster an environment that is inclusive of all, regardless of race, sex, gender identity, sexual orientation or other personal identifiers.

Are companies having to innovate faster than they have had to before? If so, what tools are helping them do this?

Yes. Over time, the focus of innovation has shifted, for some companies, from internal-only efforts to bringing in external ideas and working with external partners. This shifts the pace of innovation, as startups move on a faster timeline than corporations. To ensure we are working with the best startups and getting them what they need, we need to move faster. One thing that facilitates faster innovation is to ‘deconstruct’ corporate practices in legal, procurement, supply chain, and other functions that aren’t built for speed but for reducing risk in core businesses. The innovation team embraces risk when it explores new opportunities, and speed is critical to these explorations. Internal processes that worked in the past sometimes only hinder innovation today. Applying a ‘deconstruction’ mindset puts the innovation leader in the position to rewrite policies that accelerate innovation, just like a startup CEO writes policies that support the startup’s mission.

How do you prepare for disruption?

We work closely with startups, universities, and customers to understand emerging technologies and business models. Taking a cross-sector approach allows us to capitalize on best practices from a variety of fields. Disruption at its best capitalizes on the agile nature of startups, the expertise and infrastructure of established corporations, and the exploratory mindset of academic institutions, all while focusing on the problem that needs to be solved. Innovation for innovation’s sake means nothing unless it truly makes a difference – addressing a challenge, improving a life, increasing efficiency, etc. Preparing for disruption means factoring in all these inputs to improve the status quo.

How is the nature of innovation and organizations’ approaches to it set to evolve over the next five years?

One challenge facing companies is finding the right balance across all three business creation phases, Ideation-Incubation-Scaleup. Many companies invest in one of these areas while under-investing in the others, resulting in a large number of projects failing to move the needle for the company. In the next few years, companies will gain more insights from data about where their innovation programs fail to support promising projects, and companies will fix the gaps by balancing investments across all three business creation phases.

In addition, new tools like machine learning and artificial intelligence will continue to shape the way we develop businesses and processes. The potential for increasing efficiency on many fronts and across industries is huge. Organizations that build business models around these disruptive tools will realize success in a way that inflexible institutions are unable to.
