In November, we told you about Farmers Business Network, a social network for farmers that invites them to share their data, pool their know-how and bargain more effectively for better pricing from manufacturing companies. At the time, FBN, as it’s known, had just closed on $110 million in new funding in a round that brought its funding to roughly $200 million altogether.
That kind of financial backing might dissuade newcomers to the space, but a months-old startup called AgVend has just raised $1.75 million in seed funding on the premise that, well, FBN is doing it wrong. Specifically, AgVend’s pitch is that manufacturers aren’t so crazy about FBN getting between their offerings and their end users — in large part because FBN is able to secure group discounts on those users’ behalf.
AgVend is instead planning to work directly with manufacturers and retailers, selling their goods through its own site as well as helping them develop their own web shops. The idea is to “protect their channel pricing power,” explains CEO Alexander Reichert, who previously spent more than four years with Euclid Analytics, a company that helps brands monitor and understand their foot traffic. In Reichert’s framing, AgVend is the retailers’ white knight, saving them from being disrupted out of business. “Why cut them out of the equation?” he asks.
Whether farmers will go along is the question. Those who’ve joined FBN can ostensibly save money on seeds, fertilizers, pesticides and more by being invited to comparison shop through FBN’s own online store. It’s not the easiest sell, though. FBN charges farmers $600 per year to access its platform, which is presumably a hurdle for some.
AgVend meanwhile is embracing good-old-fashioned opacity. While it invites farmers to search for products at its own site based on the farmers’ needs and location, it’s only after someone has purchased something that the retailer who sold the items is revealed. The reason: retailers don’t necessarily want to put all of their pricing online and be bound to those numbers, explains Reichert.
Naturally, AgVend insists that it’s not just better for retailers and the manufacturers standing behind them. For one thing, says Reichert, AgVend’s farming customers are sometimes offered rebates. Customers are also better informed about the products they’re buying because the information is coming from the retailers and not a third party, he insists. “When a third party like FBN comes in and tries going around the retailers, the manufacturers can’t guarantee that FBN is giving the right guidance about their products.”
In the end, its customers will decide. But the market looks big enough to support a number of players if they figure out how to play it. According to USDA data from last year, U.S. farms spent an estimated $346.9 billion on production expenses in 2016.
That’s a lot of feed and fertilizer. It’s no wonder that founders, and the VCs who are writing them checks, see fertile ground. This particular deal was led by 8VC and included the participation of Precursor Ventures, Green Bay Ventures, FJ Labs and House Fund, among others.
Naval Ravikant recently shared this thought:
“The dirty secrets of blockchains: they don’t scale (yet), aren’t really decentralized, distribute wealth poorly, lack killer apps, and run on a controlled Internet.”
In this post, I want to dive into his fourth observation that blockchains “lack killer apps” and understand just how far away we are from real applications (not tokens, not store of value, etc.) being built on top of blockchains.
Thanks to Dappradar, I was able to analyze the top decentralized applications (DApps) built on top of Ethereum, the largest decentralized application platform. My research focuses on live public DApps that are deployed and usable today; it does not include any future or potential applications that have not yet been deployed.
If you look at a broad overview of the 312 DApps created, the main broad categories are:
I. Decentralized Exchanges
II. Games (Largely collectible type games, excluding casino/games of chance)
III. Casino Applications
IV. Other (we’ll revisit this category later)
On closer examination, it becomes clear only a few individual DApps make up the majority of transactions within their respective category:
Diving into the “Other” category, the largest individual DApps in this category are primarily pyramid schemes: PoWH 3D, PoWM, PoWL, LockedIn, etc. (*Please exercise caution, all of these projects are actual pyramid schemes.)
These top DApps are all still very small relative to traditional consumer web and mobile applications.
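The category roll-up described above can be sketched in a few lines of pandas. The DApp names below are real, but the transaction counts and column names are invented for illustration, not actual DappRadar figures:

```python
# Sketch of the category roll-up, with a hypothetical hand-entered
# sample of DApp transaction counts (not real DappRadar data).
import pandas as pd

dapps = pd.DataFrame(
    {
        "dapp": ["IDEX", "ForkDelta", "CryptoKitties", "Etheroll", "PoWH 3D"],
        "category": ["Exchange", "Exchange", "Game", "Casino", "Other"],
        "daily_txns": [2500, 1100, 600, 400, 900],  # made-up figures
    }
)

# Total transactions per category, largest first.
by_category = (
    dapps.groupby("category")["daily_txns"].sum().sort_values(ascending=False)
)

# Share of each category's volume captured by its single biggest DApp --
# this quantifies the "a few DApps dominate each category" observation.
top_share = (
    dapps.groupby("category")["daily_txns"].max()
    / dapps.groupby("category")["daily_txns"].sum()
)

print(by_category)
print(top_share)
```

With real tracker data, the same two group-bys would reproduce both the category chart and the concentration finding.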
Further trends emerge on closer inspection of the transactions of DApps tracked here:
Where we are and what it means for protocols and the ecosystem:
After looking through the data, my personal takeaways are:
What kind of DApps do you think we as a community should be building? Would love to hear your takeaways and thoughts about the state of DApps, feel free to comment below or tweet @mccannatron.
Also, if there are any DApps or UI/UX tools I should be paying attention to, let me know — I would love to check them out.
For those who have not recently driven down the Ohio Turnpike, images of abandoned steel mills and shuttered factories may come to mind. But the view from the car window today looks far different — one can stop by the Youngstown Business Incubator to see how additive manufacturing startups are utilizing 3D printers, or check out the construction underway at the newly launched Bounce Innovation Hub in Akron.
The Ohio Turnpike is just a small slice of Interstate 80, which connects our nation’s coastal communities from New Jersey to California, cutting straight through Ohio. One of the original routes of the Interstate Highway System, I-80 catalyzed economic growth 60 years ago by bringing together Americans from all corners of the country, creating a center of gravity for a number of industries.
In February, we took to the highway, bound for several stops across the Midwest. Our goal was not to promote investment in the Midwest’s highways, but investment in its burgeoning entrepreneurial ecosystem. We were joined by more than a dozen venture capital investors from Silicon Valley and New York to take part in a “Comeback Cities Tour” through Youngstown and Akron, Ohio; Detroit and Flint, Michigan; and South Bend, Indiana.
Why the Midwest and why us? Rep. Ryan represents Northeast Ohio and led this tour to show VCs that his community, and communities that look like his, are open for business. For experienced VCs like Nitin and Patrick, this trip was an opportunity they could not miss. The Midwest startup ecosystem has been experiencing a bit of a renaissance, attracting increased investment and showing significant results as the area experienced 37 company exits valued at over $5.1 billion in 2017, up from $1.6 billion in 2016. Together, the members on the trip represent the growing percentage of the VC community interested in learning on the ground and bringing resources to areas of the US other than just Silicon Valley and New York. Most importantly, we know that investing is not transactional — it’s a relationship, which requires showing up in person.
Meeting dozens of entrepreneurs on the tour, we recognized similarities among them: passion for building impactful companies and a desire to see their cities once again thrive as business epicenters. The Midwest has long been a source of talent, and it makes sense that people located around research universities and iconic industries will create innovative companies.
Standing in the way of further progress is the lack of a more developed network with reliable sources of early stage capital, connections to broader networks, and companies spanning different stages of development. Though some local startup capital is available, the risk appetite and access to venture resources and growth capital are limited. Through Nitin’s work with Unshackled Ventures, Patrick’s experience building companies, and Rep. Ryan’s political leadership in the Midwest, we know firsthand how important each component is for success.
Silicon Valley already has a robust network of investors at each stage of a company’s growth. Our goal is to build access points from Silicon Valley inward to states including Ohio, Michigan, and Indiana. This starts with building relationships and trust between investors, business leaders, corporations, incubators, and accelerators. Next, we need help from the local business ecosystem to understand and tout local strengths that set this region apart.
Business communities in Rep. Ryan’s district, like Akron and Youngstown, offer investors and employers an attractive business environment that includes a low cost of living, proximity to outstanding universities, solid infrastructure, and clusters of local enterprise customers. Packaged together, it’s an enticing offer for the outside investor community. Similarly, large family foundations and funds can be access points, fostering the flow of investments and knowledge in both directions. Partnering with local investors and business leaders will be very helpful to identify, evaluate, and leverage these strengths — and that’s what we started to do in February.
The Comeback Cities Tour is already paying dividends for entrepreneurs in the Midwest. Those on the tour immediately recognized critical masses of customers in these cities, and conversations have already started between Silicon Valley startups, VC funds, and the regional customer base. This progress is on top of $75,000 pledged by Patrick in partnerships spanning Ohio. Over the coming months, we will deepen the relationships developed during the tour, to drive opportunities and flow of capital in both directions. And we will hold ourselves accountable on follow-up action.
Our Comeback Cities Tour started the dialog, but more needs to be done to grow partnerships of trust. So where do we go from here? There is no better model than Interstate 80. The interstate of the future will connect talent, ideas, and capital as much as the road system of the past connected physical commerce. With the right coordination and relationships, Midwest innovation combined with coastal capital and business experience will drive economic growth from the center of our country outward.
Because entrepreneurship is not a zero-sum game, we all stand to win by working together.
Retailers are regularly mocked about being terrible at personalization. Late last year, Bloomberg stroked brands with one hand while giving a slap with the other when it published a column titled “Personalization Helps Retailers; Too Bad They’re Terrible at It.” This was a blanket accusation. Others get more specific.
Every so often an article will come out featuring a story like this: Person looks at a pair of pants on, say, the J.Crew website. Person buys those pants in a J.Crew store. Person is then retargeted online with an ad for the same pair of pants. The article’s conclusion? J.Crew (or other retailer du jour) is terrible at knowing its customer.
Despite the glaring implications of these articles, retailers aren’t stupid; personalization is just really hard.
Knowing that a specific customer looked at something online and then bought it in a physical store is difficult enough to pull off on its own. But to then feed that information to an ad network so it can stop serving up retargeting ads featuring the item the customer just purchased? That’s no easy feat.
But it doesn’t mean retailers shouldn’t try.
Currently, there are companies that attempt to solve this problem by working with retailers to track individual customers across every touchpoint and channel with which they interact. It’s a noble task, but not one that’s easily, or even usually, pulled off successfully.
The first, most basic step these companies might recommend is identifying each customer and prospective customer through data from customer relationship marketing (CRM) systems, data management platforms, the devices they use, the social media they participate in and a variety of other sources.
That’s a tall order, but we’re just getting started. The next steps involve knowing what customers buy, view and consume, why they make their decisions and who and where they are. Next, it’s time to make personalized recommendations based on their actions, preferences and interests and deliver these messages in the context of where they are, the recent events around them ― oh, and the time of day and year.
Rather than mocking those who are doing it wrong, an easier task might be to look at who’s doing it right, and what ‘it’ even is.
When people talk about personalization, they’re usually referring to technologies that enable A/B testing or purchase recommendation engines. However, these activities are outcomes that offer tactical ways for brands to deliver distinct messages to individuals. They aren’t personalization. True personalization strategies come from a position of deep knowledge, and a brand’s deepest, most easily grasped knowledge is what it knows about its products.
Take the clothing shopping service Stitch Fix, which assigns each of its garments 100 or so different attributes (things such as material, color, season, garment type and so on) to get a deep understanding of the variables to which different people respond. Stitch Fix then combines this knowledge with feedback that customers give to their stylists about what items they like and don’t like. Data science then kicks in to understand patterns between things the customer likes across items and pinpoint the exact attributes to which they’re consistently drawn. The result is a dynamic recommendation capability that allows the company to present apparel more likely to please any given shopper.
That’s a very different strategy than, say, throwing products that are supposed to appeal to young professional males in a monthly package and hoping for the best.
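The attribute-affinity idea behind the Stitch Fix approach can be sketched in miniature. Everything below is invented for illustration (the items, attributes, and feedback values are hypothetical, and Stitch Fix’s actual models are far more sophisticated):

```python
# Toy sketch of attribute-level affinity scoring: learn which attributes
# a customer responds to from item-level feedback, then score new items.
from collections import defaultdict

# Each item is tagged with a handful of attributes (Stitch Fix uses ~100).
items = {
    "silk blouse":  {"material:silk", "color:navy", "season:spring"},
    "wool sweater": {"material:wool", "color:gray", "season:winter"},
    "linen dress":  {"material:linen", "color:navy", "season:summer"},
}

# Customer feedback: +1 for kept/liked, -1 for returned/disliked.
feedback = {"silk blouse": +1, "wool sweater": -1}

# Accumulate a score per attribute from the feedback.
scores = defaultdict(int)
for item, verdict in feedback.items():
    for attr in items[item]:
        scores[attr] += verdict

def item_score(name):
    """Sum the learned attribute scores for an item the customer hasn't rated."""
    return sum(scores[attr] for attr in items[name])

# The linen dress shares "color:navy" with the liked blouse and none of
# the disliked sweater's attributes, so it scores positively.
print(item_score("linen dress"))
```

The real system replaces these integer tallies with statistical models and stylist input, but the core move is the same: learn at the attribute level, recommend at the item level.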
Another highly effective personalization strategy comes from Netflix. Todd Yellin, vice president of product at Netflix, likes to say his company has a “three-legged stool” approach to helping people find shows and movies they’re likely to enjoy. According to Yellin, “The three legs of this stool would be: Netflix members, taggers who understand everything about the content, and our machine-learning algorithms that take all of the data and put things together.”
Netflix is in a unique position because its data, its communications, its product and the customer’s experience with all of these things reside in the same place. Retailers, on the other hand, don’t usually see their customers daily, so they have to prioritize the personalization of outbound interactions, such as email or online ads, that bring customers back to engage and buy.
Email’s potential for personalization is particularly high for a couple of reasons: 1) shoppers have deliberately opted in to receive communications, and 2) it allows for a cohesive series of messages that retailers can use to create an ongoing narrative with customers over time.
Personalization becomes a lot more interesting and effective when brands start thinking of it in these terms rather than as a blunt instrument for re-selling the customer on an item they’ve engaged with — or worse, an item the retailer simply wants to offload.
What seems like a new approach is actually in line with how marketing teams were structured before the digital revolution. For people who worked in the pre-Internet era, marketing channels were just that ― channels. They weren’t strategies.
Take marketers who wanted to sell, say, Cocoa Puffs cereal (and you thought those chocolatey poofs sold themselves!). They wouldn’t talk about a television strategy or a magazine strategy. They would start off by asking, “Who buys Cocoa Puffs?” and answering, “Moms who have busy days.” Based on that, they’d advertise in women’s magazines or on daytime television during soap operas, all while talking about getting kids to eat a good breakfast.
Then they’d ask “Who influences the purchase?” The answer would be kids, so they’d talk about how delicious Cocoa Puffs are and they’d go out with a memorable commercial starring a crazy bird who goes cuckoo for Cocoa Puffs. They’d run that spot during cartoons and have ads in kids’ magazines or around schools and rec centers.
But that’s not the way advertising works today. Today, rather than following a top-down strategy where all channels are working toward a common and unified thought, retailers seem to take a bottom-up approach where each channel has its own rules and those rules don’t necessarily influence or get affected by other channels.
The rules of the road
An email team, then, is limited by some arbitrary rules around how often someone should receive emails — rules that someone truly believes are the right rules for all prospects. They might mean well, but that’s not good enough.
Steve Madden is a good example of what can happen when a brand rethinks its personalized customer contact strategy and unchains itself from arbitrary email rules. Before we began working with the brand a couple years ago, Steve Madden’s strongest personalized efforts were triggered cart abandonment reminder emails. But even those had limitations: the system only allowed these messages to be sent once a day, and then only to site visitors who were logged in at the time they abandoned their cart.
Since then, Steve Madden has worked to reconfigure its cart abandonment emails to send a designated time after the activity ― not just once a day. But that was still just the beginning: the marketing team tested things like which product categories customers had the highest affinity for and algorithms that could predict the likelihood of conversions and unsubscribes.
The impact of these efforts became clear when the Steve Madden team decided to run a test on the effectiveness of these models. The team sent the same email featuring its line of Freebird shoes to two different groups: an audience of past purchasers and an audience who had a high-predicted affinity for the line of shoes despite not having purchased them in the past.
To the surprise of everyone, the group of customers with a predicted affinity for the shoes spent twice as much as the group of past purchasers. The Steve Madden initiative demonstrated that personalization can go beyond triggers to reflect consumer interactions with a specific product. By pairing product attributes with customer affinity insights, the brand was able to deliver the right messages to an audience that needed and wanted Freebird products ― an audience a traditional marketing team would have overlooked.
Personalization is powerful, but that power can be used for good or evil. Done well, it will boost engagement, responses and sales. Done poorly, or without the right data, it can give a brand a bad rap for not knowing their customers and challenge its hard-won reputation as a reliable source of information on what consumers will like. Fortunately, marketers have exactly what they need to do it well right at their fingertips: knowledge of their products’ attributes, an understanding of their customers and the ability to determine where those two intersect.
The first quarter of 2018 came in roaring for the tech industry but ended up a little rough around the edges.
With the U.S. president doing battle with Amazon, social networks’ privacy policies coming under greater public scrutiny, dreams of fully autonomous electric cars colliding with technical limitations, and a cold trade war growing hotter by the tweet, it’d be easy to think that Q1 2018 was, at best, so-so. And for many big tech companies, particularly those trading on public markets, that’d be a fair assessment.
But the global venture capital market seemed to pay no heed to the choppy waters downstream. According to projected data from Crunchbase, global venture capital deal and dollar volume in Q1 2018 eclipsed previous highs from Q3 2017, setting fresh quarterly records for post-Dot Com startup investment.
Like in previous quarters, we at Crunchbase News venture into a cavern of data from the first quarter. Here, we’ll focus primarily on investment into startups. But, fear not, we’ll follow up shortly with our analysis of startup liquidity in Q1.
Before diving in, here are two key takeaways to keep in mind.
Without further ado, let’s figure out what happened in the world of VC during the first quarter of the new year.
Around the globe, venture capitalists kicked off 2018 where 2017 left off: by setting new records.
In this section, we’re taking a look at the global venture capital market from a relatively high vantage point. We’re going to evaluate some key metrics for the market overall – including the overall size and quantity of venture deals – before digging into the stage-by-stage numbers in the next major section.
By taking a look into the recent past, we’re able to see how last quarter stacks up compared to the past year. And, apart from the Q4 hiccup from last year, the trend is generally upward.
The chart below plots projected data from Crunchbase for venture dollar volume in Q1 2018 in addition to the previous four quarters. (For more information about Crunchbase’s projections and methodology, see the Methodology section at the end of this report.)
On both a sequential quarterly and year-over-year basis, global venture deal volume is up. With an overall quarter-on-quarter expansion of over twelve percent, the market made up for ground lost in Q4.
As we’ll see in our stage-by-stage analysis shortly, most of those gains in deal volume were driven by growth at two different ends of the funding spectrum. Some of the most impressive gains, from a percentage perspective, came from late-stage deals which pushed total dollar volume higher. However, since angel and seed-stage deals make up such a large proportion of overall deal volume, a rising tide there raises numbers for the whole market.
Overall venture capital dollar volume follows a similar pattern, except instead of angel and seed-stage deals pushing the new, record highs, it’s a jump in late-stage funding that pushed the overall metric to a local maximum. In other words, since late-stage deals account for the lion’s share of global dollar volume, growth (or contraction) there drives the numbers for the market overall.
The chart below shows Crunchbase’s projections for venture dollar volume, subdivided by funding stage.
On both a quarterly and year-over-year basis, venture dollar volume is up at all stages except “technology growth.” Q1 2018 delivered one of the largest percentage-based jumps in dollar volume in recent memory. And with a projected total of nearly $77 billion worth of venture deals last quarter, dollar volume was over twice that of the same quarter last year.
And for some added perspective on just how big $77 billion in quarterly investment is, at least in relative terms, Crunchbase’s projections show that about $150 billion was invested around the world in all of 2015.
Now that we’ve explored the contours of the global startup funding market for last quarter, let’s take a look at who’s leading the charge. In venture, leadership is an important skill for many reasons, not least of which is the ability to source deals and organize funding rounds.
In some, but not all rounds with investors attached, Crunchbase designates which investor led the round. And based on an analysis of reported data for 4,951 venture funding rounds from the last quarter, we identified around 1,940 distinct investors – both individual and institutional – that led at least one round in Q1. The chart below shows some of the most prolific round-leading investors in the market last quarter.
The ballooning size of YC’s seasonal batches aside, the makeup of this list is more or less in line with two broad groups you’d expect to see:
But there are a few investors which stand out from the rest in this ranking, both with interesting angles into the venture space:
It should go without saying that there is a very long tail on this chart (again, nearly 2,000 investors total) and that the ranking is subject to change as more deals from Q1 are added to Crunchbase over time. Regardless, the names at the top here, and just below the threshold for making the chart, are mostly the usual suspects.
Now let’s see what’s going on within each stage.
Earlier we promised a section where we’ll go over some of the global VC market’s internals in greater depth. Well, congrats, we made it here together.
There’s a lot of data to cover in this section, so we’ll try to move fairly quickly.
As we’ve done in previous quarters, we’ll start fairly “close to the metal” by analyzing angel and seed-stage deals, and move on to later stages from there.
The first check of outside funding is among the most difficult a startup will raise. Q1 2018 appears to be another banner quarter for angel and seed-stage deals. In Crunchbase, this is mostly comprised of angel and seed rounds, smaller convertible notes, and equity crowdfunding rounds.
The chart below shows projected deal and dollar volume for angel and seed-stage deals in Q1 2018 and a prior year’s worth of quarterly data.
Projected angel and seed-stage investments make up 58 percent of the total deal volume in Q1 2018 but just four percent of the total dollar volume of venture investment. On both a sequential quarterly and annual basis, both metrics are up, with dollar volume leading the way.
That’s due in no small part to a rise in funding round size over the past year leading up to Q1. Below you’ll find a chart revealing an uptick in reported average and median round size of angel and seed-stage deals over time.
Here too, both metrics are either flat or positive both quarter-on-quarter and year-over-year. As we’ll see throughout the remaining funding stages, this is something of a common thread.
And who were some of the most active investors in Q1? From reported rounds data for the quarter, we identified 1,856 unique individual and institutional investors connected to angel and seed-stage deals, worldwide. The top-ranked startup backers are displayed in the chart below.
It should come as no surprise that the most active investors in angel and seed-stage deals are, for the most part, accelerator programs and dedicated seed funds.
A few groups stand out:
It’s at the early stage of the funding cycle (primarily Series A, Series B, and certain large convertible notes and equity crowdfunding rounds) when we start talking about real money. With 33 percent of global deal volume and 32 percent of the total dollar volume, ebbs and flows in early-stage deal-making can make a serious impact on the market overall.
And considering that many of the companies raising early-stage deals today could go on to raise late-stage deals in the future, a close look at this stage gives a peek at future deal flow.
To see how early-stage funding in Q1 stacks up against the last year, see the chart plotting projected deal and dollar volume below.
Relative to both Q4 2017 and Q1 2017, early-stage deal and dollar volume are up markedly. Nearly twice as much capital was invested in early-stage deals in Q1 2018, relative to the same period last year. And while the number of deals is also up year-on-year, dollar volume grew faster and thus continues to push the average size of early-stage rounds higher.
Below, you’ll find a chart of average and median early-stage rounds – based on reported data in Crunchbase – in Q1 2018 and the four prior quarters.
Early-stage rounds around the world were larger in Q1 2018 than the prior quarter and last year. Because the median figure is on the rise, it’s likely we’re seeing a population-wide trend here; in other words, it’s not just a few very large rounds skewing the average upward.
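The median-versus-average reasoning above is worth making concrete. A quick sketch with invented round sizes (in millions of dollars, not real Crunchbase data) shows how a single mega-round moves the average while barely touching the median:

```python
# Why a rising median matters: one outsized round inflates the mean
# but leaves the median nearly unchanged. All figures are invented ($M).
from statistics import mean, median

typical_quarter = [5, 6, 7, 8, 9, 10]       # no mega-rounds
outlier_quarter = [5, 6, 7, 8, 9, 10, 500]  # one huge late-stage round added

print(mean(typical_quarter), median(typical_quarter))  # 7.5 7.5
print(mean(outlier_quarter), median(outlier_quarter))  # ~77.9 8
```

So when the median climbs alongside the average, as in the chart above, the growth is more plausibly population-wide rather than an artifact of a few outliers.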
Despite rising average check size, plenty of investors continue to pump lots of capital into early-stage deals. In the chart below, we plot the most active among them.
There’s nothing much interesting to report on in the above ranking, as the funds included are about what you’d expect to see. That said, there is one tidbit to keep in mind. Five of the eleven firms listed in this chart have a direct connection to China:
Once we account for the Business Growth Fund, an active investor in U.K. startups, we find that primarily U.S.-focused venture firms are in the minority of this particular ranking.
All the companies that didn’t fail, sell out, or just stop raising capital after Series B graduate to late-stage ventures. In Q1 2018, late-stage deals (mostly Series C, Series D, and beyond) accounted for just eight percent of total deal volume but a whopping sixty percent of the dollar volume, giving this stage of deals a lot of sway over aggregate dollar figures for the quarter.
In the chart below, we’ve plotted Crunchbase projections for total late-stage deal action for Q1 and the prior year.
Late-stage deal and dollar volume are definitely on the rise, showing fairly consistent quarterly growth over the last year or so, with the exception of a single quarterly decline in deal volume between Q3 2017 and Q4 2017. Growth of late-stage dollar volume – both raw figures and on a percentage basis – and deal volume (just on a percentage basis) outpaced all earlier stages quarterly and year-on-year.
To get an idea of what might be driving dollar volume growth, let’s see how the size of late-stage rounds have changed, globally, over the past five quarters.
Despite some modest growth in median round size over time, the average is growing much faster. So while Q1 2018’s late-stage deals, as a population, may be slightly larger than the same time a year ago, it’s likely that a few very large rounds per quarter are skewing averages higher, faster.
Q1 has plenty of examples of really, really big late-stage rounds. Here are just a few:
And here are the firms which invested in the most late-stage deals in the last quarter.
It’s not #BreakingNews that established, generally well-regarded venture firms with lots of capital under management tend to invest in a lot of late-stage deals, either as de novo positions or by exercising follow-on rights.
What is worth noting, though, is that many of the firms listed above are participants in the Q1 trend of announcing or launching really, really, big new funds. Here’s just a sample from the chart above:
And it’s possible that other firms on this list will be raising new funds later this year. (Andreessen Horowitz, for example, has historically raised a new $900 million-$1 billion fund every two years since 2012. The firm’s last publicly-disclosed fund – its fifth, just a hair under $1 billion – was closed in June 2016.)
As a category of funding rounds, “technology growth” is a bit of a strange one. The idea here is to capture super-late-stage funding deals, typically struck with companies headed toward going public.
Longtime readers of Crunchbase News’s quarterly reports may remember that this category presented some vexing challenges over time, particularly concerning definitions.
For our Q1, Q2, and Q3 reports for 2017, technology growth rounds were defined as “any ‘private equity’ round in which a ‘venture’ investor also participated.” This didn’t work for a few reasons, chief among them being that many of these rounds have only one investor, a private equity fund.
Starting in Q4 2017, and here for Q1 2018, technology growth deals are defined, in plain English, as “any ‘private equity’ round raised by a company that has previously raised ‘venture’ financing in a prior round, such as a seed round or Series C.” By focusing on the company’s funding history, rather than how its investors are labeled, the News team believes it’s capturing a more accurate picture of growth equity investments by PE firms in technology companies.
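The plain-English rule above translates directly into code. The field names and round-type labels below are hypothetical stand-ins, not the actual Crunchbase schema:

```python
# The Q4 2017 definition as a predicate: a "technology growth" round is a
# private equity round raised by a company with any prior venture round.
# Round-type labels here are hypothetical, not Crunchbase's real taxonomy.

VENTURE_TYPES = {"angel", "seed", "series_a", "series_b", "series_c"}

def is_tech_growth(round_type, prior_round_types):
    """Classify by the company's funding history, not the investors' labels."""
    return round_type == "private_equity" and bool(
        VENTURE_TYPES.intersection(prior_round_types)
    )

# A PE round following a seed and a Series C qualifies...
print(is_tech_growth("private_equity", ["seed", "series_c"]))  # True
# ...but a PE round into a company with no venture history does not.
print(is_tech_growth("private_equity", []))  # False
```

The key design choice, as the report notes, is that the classification keys off the company’s history rather than how its current backers are labeled.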
Just like in prior quarters, deal and dollar volume for tech growth rounds are kind of all over the place, as the chart below shows.
For reasons we’ll discuss shortly, we believe it’s best to focus on deal volume inside of this category. Technology growth deal volume has grown both since last quarter and since the same time last year. This signals continued investor interest in very late-stage private companies, matched by companies’ interest in raising from private markets.
This being said, there hasn’t been much change in the size of tech growth rounds overall, apart from some outliers that push the average up. The chart below shows average and median round size of tech growth deals.
First off, the size of technology growth deals is quite variable. As examples:
With much more variability in round size just within the past quarter, it’s difficult to make any definitive claims about the state of tech growth funding in the last quarter. It might be back to the drawing board here.
And with that, we’ve covered the world of startup capital inflows in the first quarter of the year, at least in broad strokes.
On a global scale, the venture capital market in Q1 is a microcosm of a number of salient trends.
Some may take solace in the fact that much of this is just an acceleration of historic trends. But at the same time, there are very few mechanisms to point to which can slow this train down, and investors don’t seem keen on pumping the brakes. After all, things are just now picking up from a sluggish Q4. So much for taking an extended breather.
The data contained in this report comes directly from Crunchbase, and in two varieties: projected data and reported data.
Crunchbase uses projections for global and U.S. trend analysis. Projections are based on historical patterns in late reporting, which are most pronounced at the earliest stages of venture activity. Using projected data helps prevent undercounting or reporting skewed trends that only correct over time. All projected values are noted accordingly.
Certain metrics, like mean and median reported round sizes, were generated using only reported data. Unlike with projected data, Crunchbase calculates these kinds of metrics based only on the data it currently has. Just like with projected data, reported data will be properly indicated.
Please note that all funding values are given in U.S. dollars unless otherwise noted. Crunchbase converts foreign currencies to U.S. dollars at the prevailing spot rate on the date funding rounds, acquisitions, IPOs, and other financial events were reported. Even if those events were added to Crunchbase long after they were announced, foreign-currency transactions are converted at the historical spot price.
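As a rough illustration of that conversion rule (the rate table, values, and function are made up for the example; they are not actual Crunchbase data or APIs):

```python
# Illustrative sketch: convert foreign-currency amounts to USD at the
# historical spot rate from the event's reported date, even if the event
# was entered into the database much later. Rates below are invented.

HISTORICAL_USD_RATES = {                # USD per 1 unit of currency
    ("EUR", "2018-01-15"): 1.22,
    ("GBP", "2018-02-01"): 1.42,
}

def to_usd(amount, currency, reported_on):
    """Use the spot rate from the date the event was reported,
    not the date the record was added to the database."""
    if currency == "USD":
        return amount
    return amount * HISTORICAL_USD_RATES[(currency, reported_on)]

print(to_usd(50_000_000, "EUR", "2018-01-15"))  # 61000000.0
```

The design choice matters for trend analysis: converting at today’s rate instead would let exchange-rate drift masquerade as funding growth.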
The second most common reason why VCs pass on an investment is some version of “it’s not big enough.” For a VC to generate a great fund-level return, they typically need to invest in at least one company that has billions of dollars of enterprise value. To do that, most VCs decide that each one of their investments needs to have the potential to exit at or above that amount, even if it’s very unlikely to be the reality for every single investment.
The problem is, most really exciting companies seem “not big enough” to a lot of investors, especially really early on. These startups are often going after markets that don’t currently exist or seem like a niche opportunity (but in reality, are much bigger).
So if you’re a founder choosing to take the VC path, how can you counter investors’ objections about market size?
Below are some different approaches. Keep in mind that some of these are left-brain sort of approaches and others are more right-brain. Both are important and could be effective for different sorts of investors (and different sorts of founders). And if you gravitate towards one, keep in mind that investors that make team decisions will come at this question from multiple angles.
Most market sizes are top-down. “The market for marketing software is $X billion, so it’s big enough to support some really big companies.” It’s the simplest way to think about market size, so most investors will gravitate that way, especially if you are building a company that is going after an EXISTING market.
One way to augment this is to essentially take the same approach but show brick-by-brick how your market opportunity may be bigger than it seems. This means showing:
You will still need to be going after a pretty large core business for this to resonate in any way. But doing a build up like this can be effective when a prospective investor does believe that the market is somewhat big but would love to see more upside to get fully comfortable.
The previous approach completely fails when you’re talking about markets that don’t quite exist yet or when an investor is not at least on the fence about market size.
Another approach is to do a bottoms-up analysis to demonstrate the scale of market demand for a service like yours. Start with the total number of potential end-users, and use reasonable estimates around customer demand, pricing, market share, etc. The key things that you’ll be pushed on with this sort of analysis are a) how you are defining the reasonable scope and segmentation of the potential customers, b) how realistic your market share assumptions are, and c) the fact that this is really all conjecture.
One way to address c) is to include solid data points that lend credibility to your assumptions (like a reasonable estimate of how much customers already spend to solve a similar problem or some ROI analysis on your product/service that can be used to estimate reasonable pricing and the “no-brainer-ness” of what you are proposing). Also keep in mind the “vitamin vs. pain-killer” analogy. Bottoms-up approaches tend to work better for “pain-killers” than “vitamins,” even if the ROI of the vitamin seems to hold together.
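To make the bottoms-up arithmetic concrete, here is a toy sizing where every input is exactly the kind of assumption you would be pushed on (the numbers are invented for illustration):

```python
# A toy bottoms-up market sizing. Each input is an assumption to defend:
# the customer universe, the reachable share, and the price point.

potential_customers = 200_000  # e.g. businesses in the target segment
realistic_share = 0.05         # 5% share at maturity (the contested number)
annual_price = 12_000          # backed by an ROI / willingness-to-pay estimate

annual_revenue = potential_customers * realistic_share * annual_price
print(f"${annual_revenue:,.0f}")  # $120,000,000
```

A model like this is only as persuasive as its weakest input, which is why anchoring the price to what customers already spend on the problem is so valuable.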
Being in lock-step with a broad mega-trend is another way that investors get over a seemingly small market. This means that the investor (consciously or not) believes that the mega-trend will either a) drive massive market growth or b) drive the new company to have unusually high market share.
A simple example of this was the shift of enterprise software to the cloud. Once investors believed this was happening, it became more reasonable to think that a new software product in a specific vertical might enjoy extremely rapid adoption and enough market share to build to $100M+ in revenue and $1B+ in enterprise value reasonably quickly. Without this mega-trend, it’s harder to believe this because the pace of adoption may be too slow and it would be too difficult to dislodge existing players with a similar approach without being 10X better, faster, or cheaper.
Another example is IoT. Historically, investors have hated the idea of investing in consumer electronic products. Any early investor or operator at Ring will tell you that early on, almost nobody believed that a smart doorbell company could be “big enough.” But as a mega-trend emerged in this category, we saw more suspension of disbelief in this area for a period of time, for better or worse.
Using analogies can be tricky because they may not land. But when they do, investors often get fixated on an analogy, and that alone can build conviction. When doing this, though, it’s important to not just list out similar companies or big exits in the space, but to internalize what those analogies communicate.
For example, if there have been some large exits in a seemingly small market, this can be a blessing or a curse. Yes, those analogies exist, but how well do investors know the comp that you are citing? Was it actually a really teeny business bought for pure strategic reasons? Are there actually only one or two buyers who would pay that kind of premium? How many investors would take that bet?
Productivity software is in this category. One could point to companies like Evernote, Sunrise, Acompli, etc. as examples of companies with really nice exits or private market valuations. But looking at this another way, one could say, “Wow, outside of Microsoft, who will pay a premium? The best companies only exited for at most a couple hundred million? Wow, doesn’t Evernote show that it’s really tough to be a truly venture-scale, independent company in this category?”
I find that the best analogies are ones that tend to connect to one of two things. Either, it ties to a mega-trend. For example, “Just like the shift to the cloud allowed for the rise of great companies in different categories, the shift to mobile computing in the enterprise will do the same. So this application that does X is the beginning of a mobile-first HR product that will be like Workday but for mobile.”
The second analogy is to connect yourself to a company with a similar ethos or founders with the same super-power. This is a lot harder to do, and probably happens by inception more than through direct argument. You would probably not say “We are just like the Airbnb founders, so you should believe we can make this work.” But I have heard investors who have gotten to know founders over some time say something like “Wow, these founders are unbelievably obsessed with design and user experience in a way I haven’t seen since (person X). Maybe they really can pull it off!”
This is some version of “Today we are doing X, but that just puts us in a great position to do Y, which is obviously huge.” There are a couple flavors of this.
The first is the bank-shot. This is where X is actually not the foundation of a great sustainable business but could be a gateway to more. A lot of VCs have a hard time with bank-shots, unless you are already demonstrating some really remarkable traction. Usually, the right approach here is to focus on growth and scale as quickly and efficiently as possible when accomplishing X, and make most of your money doing Y down the road when you have a network effect, customer lock-in, or can provide a valuable service that no one else could provide without your scale.
The second version of this is when X is actually pretty decent. Maybe it won’t be “the next Facebook,” but it could certainly get you to a pretty attractive place. Usually, this works well when the underlying business could be profitable and decently large without being too capital-intensive, which gives you more freedom to pursue the bigger opportunity as a next step. This allows an investor to say to themselves, “I could reasonably get a 5–10X on the core business, and there is some small probability that this could actually be a 20X or more.” Usually, this means that the company is in a market that has decent prospects for future funding or M&A, such that if the business hits a double but not a home run, it still could be a good outcome.
One additional approach that I’ve seen founders use quite successfully is what I’ll call “the future bet.” The approach here is to deflect discussions about current market size and focus the discussion on a single, simple bet about the future.
For example, this can be used in almost any rental or sharing economy company (clothing, transportation, equipment, etc.). Even though most rental markets aren’t very large, the bet goes something like: “Do you really believe that people are going to continue spending thousands of dollars on products that sit unused 90+% of the time? Our bet is that consumers are rapidly moving away from ownership toward sharing and renting, and those multiple billions of dollars are going to shift toward the companies that get this right.”
First, don’t forget about what margins mean for market potential. High-margin businesses like software or marketplaces (when revenue is correctly accounted for) can support 10X+ revenue multiples. So the bar for a large scale opportunity is the potential to generate hundreds of millions of dollars in revenue to be worth billions of dollars down the road. For low-margin businesses, the revenue bar for a larger scale opportunity is higher. So when you are talking about how your business can build using a bottoms-up analysis or comparatives, make sure you keep this in mind.
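A quick back-of-the-envelope, using illustrative multiples rather than market data, shows how margins move the revenue bar:

```python
# Toy numbers showing why margins change the revenue bar for a ~$1B outcome.
# The multiples are assumptions for illustration, not market data.

target_value = 1_000_000_000

high_margin_multiple = 10  # e.g. software at ~10x revenue
low_margin_multiple = 2    # e.g. thin-margin commerce at ~2x revenue

print(target_value / high_margin_multiple)  # 100000000.0 -> ~$100M revenue
print(target_value / low_margin_multiple)   # 500000000.0 -> ~$500M revenue
```

Under these assumptions, the low-margin business must reach five times the revenue to justify the same valuation, which is why the same bottoms-up model can clear the bar for one business and fall short for another.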
Second, the landscape of potential acquirers plays into this discussion as well. Generally, I don’t recommend founders spend too much time talking about buyers and M&A opportunities, and we don’t obsess over it much here at NextView. But when you are a company that may very well find that the market opportunity is not as big as one thought or hoped, it’s comforting to be in a category with a strong set of folks who would buy you for a reasonable amount. Most investors don’t really focus on downside protection. But psychologically, this could make a difference when one is on the fence because of market size or the risk associated with a bank shot / scope-expansion strategy.
Here at Bowery Capital, we continue to outline where CxOs are in their current enterprise technology upgrade cycle, a trend that lies at the core of our investment thesis. As a result of this shift, we expect “next-generation” technology spend to hit $468B over the next ten years as legacy technologies are swapped out for new. In addition, we see some of this $468B coming from net new areas of spend where humans are displaced by new process, new system, or new automation software. Taking a look at exit data since our last update on this, we continue to believe that we’re early in this cycle. Cumulative revenues of exited next-generation companies amount to roughly $60-70B of replacement, which in our models represents only about 15-20% of the estimated market opportunity. Over the last few years, a number of key themes have emerged corroborating our view that most of the enterprise opportunity lies ahead of us and that enterprise technology spend shifts are upcoming.
1. The Displacement Of On-Premise Solutions Is Early. There’s a cloud option out there for almost every IT workload, but a survey from the Uptime Institute indicates that about two-thirds of enterprise computing is still done in company-owned data centers. While projections vary, product end markets in SaaS infrastructure and SaaS applications are still growing 40%+ year over year, and there remains plenty of opportunity ahead. The story is not fully written, and enterprise technology spend shifts will continue to occur here.
2. Traditional Vendors Are Increasingly Investing In Next-Gen Offerings. Startups aren’t the only ones taking advantage of this massive shift in spending. The “tech titans” are fully offering as-a-service versions of their own products. Cloud revenue at Microsoft, Oracle, SAP, and others continues to grow at a huge clip. For each of the past three years, IDC has correctly predicted that about 20%+ of all new business software purchases would be SaaS, benefitting long-standing tech leaders and startups alike.
3. Next-Gen Solutions Are Growing The Overall Market. By replacing spend previously allocated to services or personnel, next-gen solutions are bringing new dollars into the tech market. Yesterday’s personnel or consulting expenditures become tomorrow’s SaaS or IoT revenues. This is the core message behind Marc Andreessen’s now-famous “Software Is Eating The World” piece. The key takeaway here is that even Fortune 500 companies are starting to understand that every non-core function has the potential to be replaced by newer, easier-to-adopt alternatives; and their spending behavior is beginning to reflect that mindset as companies work with increasingly young and innovative vendors.
4. Greenfield Opportunities In Vertical Markets Remain Untapped. Several cloud software companies have already made waves serving vertical markets: Veeva Systems, Fleetmatics, Guidewire, and MindBody are just a few public-company examples. Per Bain research, however, fewer than 15% of companies in transportation, energy, manufacturing, and several other sectors view themselves as being active cloud software adopters. As we detail in another recent report of ours (“Opportunities In Vertical Software”), the Bowery team expects many more vertical SaaS success stories as the specialization of enterprise tech continues, and we will continue to make investments here beyond portfolio companies like Transfix (trucking), CredSimple (healthcare), and Fero Labs (manufacturing).
5. As Consumers Migrate To Mobile, Companies Need Next-Gen Tools To Follow. It’s already well known that consumers are rapidly adopting mobile to manage nearly every aspect of their lives, including how they buy products and services. Certain categories of commerce moved online seemingly overnight (e.g. flowers, office supplies), and the same is happening in mobile. In order to market to, track, engage with and support customers on new platforms like mobile, enterprises must employ next-gen solutions that can transact in new forms of data. Connected industrial sensors, in-store beacons, mobile marketing attribution and mobile CDN are just a few examples.
The enterprise spend shift to cloud software is underway, and by our measure, we’re still early in this cycle. Over the next few years, growth in enterprise tech upgrades will likely overshadow anything we’ve seen to date. And that means unprecedented opportunity for smart investors, corporate innovators, and startup founders alike who want to get in on these enterprise technology spend shifts.
Consumer companies are the ones that drive the headlines, generate the most clicks on TechCrunch, and are top of mind for many in the tech industry. So I’d like to celebrate this brief point in time where the enterprise strikes back. While Facebook, one of the darlings of the last 10 years, is getting pummeled, the enterprise market is back in the spotlight.
Look at the Dropbox IPO, which priced above its initial range and came out white hot at the end of one of the worst weeks in stock market performance. Couple that with Mulesoft being bought for 21x TTM revenue (see Tomasz Tunguz’s analysis) at $6.5 billion and Pivotal’s recent S-1 filing, and you can see why the enterprise market has everyone’s attention again. However, I’ve been around the markets long enough to know that this too shall pass.
The real story in my mind is about what’s next. It’s true that Salesforce and Workday have created some of the biggest returns in recent enterprise memory. And with that, VC money poured into every category imaginable as every VC and entrepreneur scrambled to create a new system of record…until there were no more new systems of record to be created. My view is that we will see many more of these application layer companies go public in the next couple of years and that will be awesome for sure. There will also still be some amazing companies that raise their Series C, D and beyond funding rounds with scaling metrics. There will also be the few new SaaS app founders who have incredible domain expertise reinventing pieces of the old guard public SaaS companies.
However as a first check investor in enterprise startups, the companies that truly get my attention are more of the infrastructure layer companies like Mulesoft and Pivotal. We are at the beginning stages of one of the biggest IT shifts in history as legacy workloads in the enterprise continue to move to a cloud-native architecture. Being in NYC working with many of the 52 Fortune 500 companies who are undergoing their own migrations and challenges makes us even more excited about what’s ahead. The problem is that as an investor in infrastructure, it’s quite scary to enter a world where AWS commoditizes every bit of infrastructure and elephants like Microsoft and Google are not far behind. Despite that, it’s also hard to ignore the following facts:
and many more threads which can create new billion-dollar outcomes. The key here is tying this all to a business problem to solve, and not just having infrastructure for infrastructure’s sake.
Salesforce clearly sees the future, and it’s in moving a layer deeper into the infrastructure stack, combining the world of applications with the back-end and cloud with on-prem. The irony is that the company that led the “no software” movement is the one that bought Mulesoft, a company where half of its revenue comes from software installed on-premise. What Salesforce clearly understands is that in the world of enterprise, integration becomes king as organizations constantly look to get disparate applications, databases and other systems to talk to each other.
“Every digital transformation starts and ends with the customer,” Salesforce CEO Marc Benioff said in a statement. “Together, Salesforce and MuleSoft will enable customers to connect all of the information throughout their enterprise across all public and private clouds and data sources — radically enhancing innovation.”
It’s a digital transformation journey, one that every Fortune 1000 is undergoing. In a world where Gartner predicts that “75% of new applications supporting digital businesses will be built, not bought” by 2020, you can see why Mulesoft’s integration platform helps Salesforce future-proof itself and embed itself in a future where developers rule.
If you are looking for a story about how large enterprises digitally transform themselves into agile software organizations (to the extent they can), then I suggest reading Pivotal’s S-1, filed on Friday. Their ascent over the last 5 years mirrors many of the trends we are hearing about on a daily basis: cloud in all forms (public, private, hybrid, and multi); agility; the rise of developers; the move from monolithic apps to microservices; containers; continuous integration/deployment; abstraction of ops and infrastructure; and the fact that every Fortune 500 is a software company in disguise. Their growth to over $509mm of revenue, from $281mm two years ago, is a case in point. What Pivotal understood early is that there is no digital transformation and agile application development without infrastructure spend. Benioff clearly understands this, which is why he paid such a high multiple for Mulesoft.
For those that don’t know what Pivotal does, here is what they do in a nutshell:
PCF accelerates and streamlines software development by reducing the complexity of building, deploying and operating modern applications. PCF integrates an expansive set of critical, modern software technologies to provide a turnkey cloud-native platform. PCF combines leading open-source software with our robust proprietary software to meet the exacting enterprise-grade requirements of large organizations, including the ability to operate and manage software across private and public cloud environments, such as Amazon Web Services, Microsoft Azure, Google Cloud Platform, VMware vSphere and OpenStack. PCF is sold on a subscription basis.
I’ve been fortunate to watch this closely since my first check into Greenplum many moons ago, a company that ultimately sold to EMC and spun back out as Pivotal (along with some VMware assets). I also remember the journey the founders took on when they decided to sell into P&L units at Fortune 500s charged with making a more agile company. Instead of selling infrastructure to IT, they were able to sell a vision of how P&L units could deliver on their goals faster. Difficult in the beginning, but proven out over time. These P&L units were the ones charged with creating the bank of the future, the hotel of the future, the insurance company of the future, all centered around a better customer experience driven off of one platform that allowed developers to be more productive and delivered on any cloud.
My only fear about all of this enterprise infrastructure excitement is that like the SaaS markets of yesteryear, this attention will attract way too much venture capital, driving up prices, and reducing opportunities to create meaningful exits. It’s great that enterprise infrastructure is top of mind, but part of me prefers for it to stay in the background, stealthily delivering amazing results.
Is your experimentation program experiencing push-back from other departments? Marketers and designers who own the brand? Developers with myriad other priorities? Product owners who’ve spent months developing a new feature?
The reality is that experimentation programs often lose steam because they are operating within a silo.
Problems arise when people outside of the optimization team don’t understand the why behind experimentation. When test goals aren’t aligned with other teams’ KPIs. When experimentation successes aren’t celebrated within the organization-at-large.
Optimization champions can struggle to scale their experimentation programs from department initiatives to an organizational strategy. Because to scale, you need buy-in.
Most companies have a few people who are optimizers by nature, interest, or experience. Some may even have a “growth team.” But what really moves the dial is when everyone in the company is on board and thinks this way reflexively, with full support from C-level leaders. – Krista Seiden, Global Analytics Education Lead, Google
But getting buy-in for any initiative – especially one that challenges norms, like experimentation – is no easy task. Particularly if your organization suffers from silo mentality.
In this post, we outline a 5-step process for blasting through the silo-mentality blocks in your organization to create a culture of experimentation.
Our 5-step process for destroying silos so you can scale your experimentation program:
At WiderFunnel, we often hear questions like: How can I get other people on board with our experimentation program? How can I create an organizational culture of experimentation, when our team seems to be working in a bubble?
When a company operates in silos, people have fewer opportunities to understand the priorities of other departments. Teams can become more insular. They may place greater emphasis on their own KPIs, rather than working with the team-at-large towards the organization’s business goals.
But it’s not silos that are necessarily the problem, it’s silo mentality.
And when an experimentation mindset is only adopted by the optimization team, silo mentality can be a major barrier to scaling your program to create a culture of experimentation.
Silo mentality causes people to withhold information, ignore external priorities, and delay processes where other teams are involved. All in an effort to ensure their team’s success over another’s.
Within a silo, members can suffer from groupthink, emphasizing conformity over confrontation and allowing weak ideas or processes to go unchallenged. They rely on intuition to guide their practices, and resist change because it’s new, uncomfortable, and different.
At its worst, silo mentality points to adversarial dynamics between teams and their leads: internal conflict among managers as they fight over limited resources or compete to rise to the upper echelons of your organization.
Silo mentality often comes down to leaders, who are creating the goals and priorities for their teams. If team leads experience conflict, this us-against-them mentality can trickle down to their reports.
Managers, particularly under stress, may feel threatened by another manager’s initiatives. This is because silos often form in organizations where leaders are competing. Or, they appear in organizations where there is a clear divide between upper management and employees.
Unfortunately, silo mentality is a pain point for many optimization champions. But every department is a stakeholder in your organization’s growth. And to enable a strong organizational culture of experimentation, every department needs to understand the value of testing—the why.
So, let’s dive in and explore our 5-step process for breaking down silo mentality. At the heart of this process is creating an understanding of what experimentation can do for the individual, the department, and the organization-at-large.
You may be thinking: What does a “culture of experimentation” even look like?
That’s a great question.
A culture of experimentation is about humility and fearlessness. It means that your organization will use testing as a way to let your customers guide your decision making.
Ask yourself these questions to create a vision for your experimentation program:
In traditional business settings, leadership often takes a top-down approach to communication. But experimentation flips this dynamic on its head. Instead of the HiPPO (highest paid person’s opinion) calling all the shots, all ideas must be tested before initiatives can be implemented.
To me, a culture of experimentation is simply measured by the number of people in an organization who are able to admit, ‘I don’t know the answer to that question, but I know how to get it’.
If people within your organization are telling you ‘This is what our customers want’ (without the data to back it up) then you have a problem. Organizations that excel at experimenting aren’t better at knowing what customers want, they are just better at knowing how to find out. – Mike St. Laurent, WiderFunnel Senior Optimization Strategist
The most effective way to persuade others to adopt an experimentation mindset is to live your vision yourself. You need to demonstrate the test-and-learn values of an Optimization Champion. Values like:
We listen to our gut, then test what it says.
We gather market research, then test it.
We create best practices, then test them.
We listen to our opinions, then test them.
We hear the advice of others, then test it.
We hear the advice of experts, then test it.
We believe in art and science, creativity and discipline, intuition and evidence, and continuous improvement.
We aim for marketing insights.
We aim to improve business results.
We test because it works.
Scientific testing is our crucible for decision-making.
– Chris Goward in “The Optimization Champion’s Handbook”
Once you have clarified your vision, write it down in a declarative statement. And be ready to communicate your vision over. And over. And over.
You can’t achieve a culture of experimentation by yourself. You need testing allies.
Other department leads can help create momentum, acting as internal influencers, inspiring others to adopt an experimentation mindset in their workflow. They can help spread the gospel of a test-and-learn culture.
When executives embrace failure as an inherent and integral part of the learning process, there is a trickle-down effect on the entire enterprise. Increasingly, more employees from more departments are eager to learn about the customer experience and contribute their ideas. With more individuals invested and involved, it’s easier for a company to gain a deeper understanding of its customer base and make informed decisions that drive business value.– Optimizely’s “Creating a Culture of Experimentation”
To do this, you need to understand what will motivate stakeholders to fully adopt an experimentation mindset, and how to incentivize them to champion the cause of experimentation. And of course, not everyone will subscribe to your vision.
At least not right away. It may take some finesse. In her Growth & Conversion Virtual Summit presentation, Gretchen Gary, Product Manager at HP, outlined three different types of stakeholders that may have difficulty engaging in a testing culture.
The underlying emotions for all three types of stakeholders are the same:
Your job is to inspire them to overcome these emotions. You need to communicate the possibilities of experimentation to each department in a way that makes sense for them – particularly in terms of their own performance.
What’s in it for your stakeholders?
You, the Optimization Champion, will need to navigate different perspectives, opinions, and levels of testing knowledge. You’ll want to:
The best thing you can do is try to familiarize yourself with [other team’s] KPIs so you can speak their language and know what might drive them to be more involved with your program.– Gretchen Gary
Support your vision of experimentation by building a business case. Leverage existing case studies to demonstrate how similar organizations have achieved growth. And show, through real-world examples, how different internal teams — from product development to marketing, from branding to IT — have incorporated experimentation into their workflows.
It’s important to create an experimentation protocol so that people across your organization understand how and when they can contribute to the experimentation program.
Remove bottlenecks and unify siloed and understaffed teams by formalizing an optimization methodology, empowering individuals and groups to take ownership to execute it.– Hudson Arnold, Senior Strategy Consultant at Optimizely
A standard process enables any employee to know when they can take ownership over a test and when they’ll need to collaborate with other stakeholders.
Building a test protocol is essential. If I’ve learned anything over the last six years, it is that you really have to have formal test protocol so everyone is aware of how the testing tool works, how a test is operated and performed, and then how you’re reading out your results. You will always get questions about the credibility of the result, so the more education you can do there, the better.– Gretchen Gary
First, evaluate how your experimentation program is currently structurally organized. And think about the ideal structure for your organization and business needs.
Experimentation programs often fall into one of the following organizational structures:
Regardless of how you structure your program, education is a major part of ensuring success when experimentation is a company-wide initiative. Anyone involved in testing should understand the ultimate goals, the experimentation methodology, and how to properly design tests to reveal growth and insights.
When clarifying your organization’s experimentation methodology, you should:
“Every department should have complete access to and be encouraged to submit ideas for experimentation. But this should only be done when the company is also confident it can complete the feedback loop and provide explanation as to the acceptance or rejection of every single idea,” Mike St. Laurent explains.
“An incomplete feedback loop – where people’s ideas get lost in a black hole – is one of the most detrimental things that can happen to the testing culture. Until a feedback loop can be established, it is better for a more isolated testing team to prove the value of the program, without the stressors caused by other parts of the organization getting involved.”
Different departments in your organization offer unique insight, experience, and expertise that can lead to experiment hypotheses. Experimentation protocol should communicate why your organization is testing, and how and when people can contribute.
If silo mentality is limiting your experimentation program, cross-functional teams may be an ideal solution. On cross-functional teams, each member has a different area of expertise and can bring a unique perspective to testing.
“Eliminate the territoriality of small teams,” advises Deborah Wahl, CMO of Cadillac and former CMO of McDonald’s. “[Leverage] small, cross-functional teams rather than teams at-large and really get people committed to working towards the same goal.”
When you form cross-functional teams, everyone benefits by gaining a deeper understanding of what drives other teams, what KPIs measure their success, and how experimentation can help everyone solve real business problems. They can also generate a wealth of experiment ideas.
Hypothesis volume is (after all) one of the biggest roadblocks that organizations experience in their optimization programs.
Cross-functional teams can channel the conflict associated with silo mentality toward innovative solutions, since they help break down the groupthink that silos tend to breed.
How to move from groupthink to think tank
Bruce Tuckman’s theory of group development provides a unique lens for the problem of collaboration within teams. He breaks down the four stages of group development:
In the first stage, forming, a team comes together to learn about the goals of other team members and they become acquainted with the goals of the group. In this case, the goal is growth through experimentation.
Everyone is more polite in this stage, but they are primarily still oriented toward their own desires for an outcome. They are invested in their own KPIs, rather than aligning on a common goal. And that’s fine, because they’re just getting to know each other.
In the second stage, storming, the group learns to trust each other. And conflict starts to rear its head in group discussions, either when members offer different perspectives or when different members make power plays based on title, role, or status within the organization.
But for the team to work, people need to work outside the norms of hierarchy and seniority in favor of collaboration.
In this stage, members feel the excitement of pursuing the goals of the team, but they also may feel suspicion or anxiety over other members’ contributions. You want to make sure this stage happens so that people feel comfortable raising unconventional or even controversial perspectives.
In the context of experimentation, one person’s opinion won’t win out over another person’s opinion. Rather both opinions can be put to the test.
“I find [experimentation] has been a great way to settle disputes over experience and priorities. Basically you just need to find out what people want to know, and offer answers via testing. And that in itself is gaining trust through collaboration. And to do so you need to deliver value to all KPIs, not just the KPIs that your program will be measured on. Aligning on common goals for design, support, operations, and others will really help to drive relevancy of your program,” explains Gretchen Gary.
It’s important to enable the right kind of conflict—the kind that can propel your experimentation program toward new ideas and solutions.
The third stage, norming, is when members of the group start to accept the challenge of meeting their shared goal. They understand the perspectives of others and become tolerant of different working or communication styles. They start to find momentum in the ideation process, and start working out solutions to the problems that arise.
And the last stage, performing, is when the team becomes self-sufficient. They are now committed to the goal and competent at decision-making. And conflict, when it arises, is effectively channeled through to workable solutions.
Teams may go through these stages again and again. And it’s necessary that they do so.
Because you want weak ideas to be challenged. And you want innovative ideas to be applied in your experimentation program.
Free-flowing internal communication is essential in maintaining and scaling experimentation at your organization.
You should be spreading experiment research, results, and learnings across departments. Those learnings can inform other team’s initiatives, and plant the seed for further marketing hypotheses.
Information has a shelf-life in this era of rapid change. So, the more fluid your internal communication, the more central and accessible your data, the more likely it will be put to use.
How are customer learnings and insights shared at your organization?
One method for sharing information is to create an “intelligence report.”
An intelligence report combines data generated from your organization and data derived from external sources. Paired with stories and insights about experimentation, an intelligence report can be a helpful tool for inciting creativity and generating experimentation ideas.
Another method is to provide regular company-wide results presentations. This creates an opportunity for team members and leaders to hear recent results and customer insights, and be inspired to adopt the test-and-learn mindset. It also provides a space for individuals to express their objections, which is essential in breaking down the silo mindset.
But sharing insights can also be more informal.
WiderFunnel Strategist Dennis Pavlina shares how one of his clients posts recent test results in the bathroom stalls of their office building to encourage engagement.
A new idea doesn’t get anywhere unless someone champions it, but it’s championship without ownership. Keep it fun and find a way to celebrate the failures. Every failure has a great nugget in it, so how do you pull those out and show people what they gain from it, because that’s what makes the next phase successful.– Deborah Wahl
Whatever tactic you find most effective for your organization, information dissemination is key. As is giving credit for experiment wins! At WiderFunnel, we credit every single contributor – ideator, strategist, designer, developer, project manager, and more – when we share our experiment results. Because it takes a team to truly drive growth with testing.
A lot of what we talked about in this post is about building trust.
People need to trust systems, procedures and methodologies for them to work. And every initiative in breaking down silos should be geared towards earning that trust.
Because trust is buy-in. It’s a commitment to the process.
Creating and maintaining a culture of experimentation doesn’t happen in a straightforward, sequential manner. It’s an iterative process. For example, you’ll want to:
Because a culture of experimentation is about continuous exploration and validation. And it’s about testing and optimizing what you’ve learned as an organization. Which means you’ll need to apply these concepts over and over.
Make the terms a part of your vocabulary. Make the steps a part of your routine. Day in and day out.
Lights-out manufacturing refers to factories that operate autonomously and require no human presence. These robot-run settings often don’t even require lighting, and can consist of several machines functioning in the dark.
While this may sound futuristic, these types of factories have been a reality for more than 15 years.
Famously, the Japanese robotics maker FANUC has been operating a “lights-out” factory since 2001, where robots are building other robots completely unsupervised for nearly a month at a time.
“Not only is it lights-out,” said FANUC VP Gary Zywiol, “we turn off the air conditioning and heat too.”
To imagine a world where robots do all the physical work, one simply needs to look at the most ambitious and technology-laden factories of today.
For example, the Dongguan City, China-based phone part maker Changying Precision Technology Company has created an unmanned factory.
Everything in the factory — from machining equipment to unmanned transport trucks to warehouse equipment — is operated by computer-controlled robots. The technical staff monitors activity of these machines through a central control system.
Where it once required about 650 workers to keep the factory running, robot arms have cut Changying’s human workforce to less than a tenth of that, down to just 60 workers. A general manager for the company said that it aims to reduce that number to 20 in the future.
As industrial technology grows increasingly pervasive, this wave of automation and digitization is being labelled “Industry 4.0,” as in the fourth industrial revolution.
So, what does the future of factories hold?
To answer this, we took a deep dive into 8 different steps of the manufacturing process, to see how they are starting to change:
Manufacturers predict overall efficiency to grow annually over the next five years at 7x the rate of growth seen since 1990. And despite representing 11.7% of US GDP and employing 8.5% of Americans, manufacturing remains an area of relatively low digitization — meaning there’s plenty of headroom for automation and software-led improvements.
Manufacturing is deeply changing with new technology, and nearly every manufacturing vertical — from cars, to electronics, to pharmaceuticals — is implicated. The timelines and technologies will vary by sector, but most steps in nearly every vertical will see improvement.
Read on for a closer look at how technology is transforming each step of the manufacturing process.
From drug production to industrial design, the planning stage is crucial for mass-production. Across industries, designers, chemists, and engineers are constantly hypothesis testing.
Will this design look right? Does this compound fit our needs? Testing and iterating is the essence of research and development. And the nature of mass-production makes last-minute redesigns costly.
Major corporations across drugs, technology, aerospace, and more pour billions of dollars each year into R&D. General Motors alone spent upwards of $8B on new development last year.
In the highly-scientific world of R&D, high-caliber talent is distributed across the globe. Now, software is helping companies tap into that pool.
When it comes to networking untapped talent in data science and finance, platforms like Kaggle, Quantopian, and Numerai are democratizing “quant” work and compensating their collaborators. The concept has already taken off in pharmaceutical R&D and is growing elsewhere as well. On-demand science platforms like Science Exchange work across R&D verticals, allowing corporations to quickly solve for a lack of on-site talent by outsourcing R&D.
While R&D scientists may seem non-essential to the manufacturing process, they are increasingly critical for delivering the latest and greatest technology, especially in high-tech manufacturing.
Companies are exploring robotics, 3D printing, and artificial intelligence as avenues to improve the R&D process and reduce uncertainty when going into production. But the process of hypothesis testing has room for improvement, and tightening iteration time will translate to faster and better discoveries.
Accelerating product development is the No. 1 priority for firms using 3D printing, according to a recent industry survey. Moreover, 57% of all 3D printing work done is in the first phases for new product development (i.e. proof of concept and prototyping).
3D printing is already a staple in any design studio. Before ordering thousands of physical parts, designers can use 3D printing to see what a future product will look like.
Similarly, robotics is automating the physical process of trial-and-error across a wide array of verticals.
In R&D for synthetic biology, for example, robotics is making a big impact for companies like Zymergen and Ginkgo Bioworks, which manufacture custom chemicals from yeast microbes. Finding the perfect microbe requires testing up to 4,000 different variants concurrently, which translates to a lot of wet lab work.
Using automatic pipette systems and robotics arms, liquid handling robots permit high-throughput experimentation to arrive at a winning combination faster and with less human error.
Below are Counsyl’s robot gene tester (left), used for transferring samples, and Zymergen’s pipetting robot (right), which automates microbe culture testing.
“Materials engineering is the ability to detect a very small particle — something like a 10-nanometer particle on a 300-millimeter wafer. That is really equivalent to finding an ant in the city of Seattle.” — Om Nalamasu, CTO at Applied Materials
Looking beyond biotech, material science has played a pivotal role in computing and electronics.
Notably, chip manufacturers like Intel and Samsung are among the largest R&D spenders in the world. As semiconductors get ever-smaller, working at nanoscale requires precision beyond human ability, making robotics the preferred option.
Tomorrow’s scientific tools will be increasingly automated and precise enough to handle work at the micro-scale.
Thomas Edison is well-known for highlighting materials science as a process of elimination: “I have not failed 10,000 times. I have not failed once. I have succeeded in proving that those 10,000 ways will not work.”
The spirit of Edison persists in today’s R&D labs, although R&D is still less digitized and software-enabled than one might expect (the National Academy of Sciences says developing new materials is often the longest stage of developing new products). Better digitization of the scientific method will be crucial to developing new products and materials and then manufacturing them at scale.
Currently, the hottest area for deals to AI startups is healthcare, as companies employ AI for drug discovery pipelines. Pharma companies are pouring cash into drug R&D startups such as Recursion Pharmaceuticals and twoXAR, and it’s only a matter of time until this approach takes off elsewhere.
One company working in chemistry and materials science is Citrine Informatics (below, left). Citrine runs AI on its massive materials database, and claims it helps organizations hit R&D and manufacturing milestones in half the time. Similarly, Deepchem (right) develops a Python library for applying deep learning to chemistry.
In short, manufacturers across sectors — industrial biotech, drugs, cars, electronics, or other material goods — are relying on robotic automation and 3D printing to remain competitive and tighten the feedback loop in bringing a product to launch.
Already, startups developing or commercializing complex materials are taking off in the 3D printing world. Companies like MarkForged employ carbon fiber composites, while others like BMF are developing composites with rare nanostructures and exotic physical properties.
Certainly, manufacturers of the future will be relying on intelligent software to make their R&D discoveries.
Currently, manufacturers of all types rely on prototyping with computer aided design (CAD) software. In future manufacturing processes, augmented and virtual reality could play a greater role in R&D, and could effectively “abstract away” the desktop PC for industrial designers, possibly eliminating the need for 3D printed physical models.
Autodesk, the software developer of AutoCAD, is a bellwether for the future of prototyping and collaboration technology. The company has been no stranger to investing in cutting-edge technology such as 3D printing, including a partnership with health AI startup Atomwise on a “confidential project.” Recently, Autodesk’s exploration into making an AR/VR game engine foreshadows the larger role it envisions for immersive computing in the design process.
Autodesk’s game engine, called Stingray, has added support for the HTC Vive and Oculus Rift headsets. Additionally, game and VR engine maker Unity has announced a partnership with Autodesk to increase interoperability.
Similarly, Apple has imagined AR/VR facilitating the design process in combination with 3D printing. Using the CB Insights database, we surfaced an Apple patent that envisions AR “overlaying computer-generated virtual information” onto real-world views of existing objects, effectively allowing industrial designers to make 3D-printed “edits” onto existing or unfinished objects.
The patent envisions using AR through “semi-transparent glasses,” but also mentions a “mobile device equipped with a camera,” hinting at potential 3D printing opportunities for using ARKit on an iPhone.
A researcher at Cornell has recently demonstrated the ability to sketch with AR/VR while 3D printing. Eventually, the human-computer interface could be so seamless that 3D models can be sculpted in real time.
Tomorrow’s R&D team will be exploring AR and VR, and testing how it works in combination with 3D printing, as well as the traditional prototyping stack.
Once a product design is finalized, the next step is planning how it will be made at production scale. Typically, this requires gathering a web of parts suppliers, basic materials makers, and contract manufacturers to fulfill a large-scale build of the product. But finding suppliers and gaining trust is a difficult and time-consuming process.
The vacuum maker Dyson, for example, took up to two years to find suppliers for its new push into the auto industry: “Whether you’re a Dyson or a Toyota it takes 18 months to tool for headlights,” a worker on their project reported.
In 2018, assembly lines are so lean they’re integrating a nearly real-time inflow of parts and assembling them as fast as they arrive. Honda’s UK-based assembly factory, for example, only keeps one hour’s worth of parts ready to go. After Brexit, the company reported longer holdups for incoming parts at the border, and said that each 15 minute delay translates to £850,000 per year.
We looked at how technology is improving this complicated sourcing process.
Decentralized manufacturing may be one impending change that helps manufacturers handle demand for parts orders.
Distributed or decentralized manufacturing employs a network of geographically dispersed facilities that are coordinated with IT. Parts orders, especially for making medium- or small-run items like 3D printed parts, can be fulfilled at scale using distributed manufacturing platforms.
Companies like Xometry and Maketime offer on-demand additive manufacturing and CNC-milling (a subtractive method that carves an object out of a block), fulfilling parts orders across their networks of workshops.
Xometry’s site allows users to simply upload a 3D file and get quotes on milling, 3D printing, or even injection molding for parts. Right now, the company allows up to 10,000 injection-molded parts to be ordered on-demand, so it can handle builds done by larger manufacturers.
Xometry isn’t alone in offering printing services: UPS is also embracing the movement, offering services for 3D printed plastic parts like nozzles and brackets in 60 locations and using its logistics network to deliver orders globally.
As mass-customization takes off, so could reliance on decentralized networks of parts suppliers.
Enterprise resource planning (ERP) software tracks resource allocation from raw material procurement all the way through customer relationship management (CRM).
Yet a manufacturing business can have so many disparate ERP systems and siloed data that, ironically, the ERP “stack” (which is intended to simplify things) can itself become a tangled mess of cobbled-together software.
In fact, a recent PwC report found that many large industrial manufacturers have as many as 100 different ERP systems.
Blockchain and distributed ledger technologies (DLT) projects aim to unite data from a company’s various processes and stakeholders into a universal data structure. Many corporate giants are piloting blockchain projects, often specifically aiming to reduce the complexity and disparities of their siloed databases.
Last year, for example, British Airways tested blockchain technology to maintain a unified database of information on flights and stop conflicting flight information from appearing at gates, on airport monitors, at airline websites, and in customer apps.
When it comes to keeping track of the sourcing of parts and raw materials, blockchain can manage the disparate inflows to a factory. With blockchain, as products change hands across a supply chain from manufacture to sale, the transactions can be documented on a permanent decentralized record — reducing time delays, added costs, and human errors.
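The core idea behind these supply-chain ledgers — each custody transfer appended to a tamper-evident record — can be sketched with a minimal hash chain in Python. This is a toy illustration of the data structure, not any vendor’s actual implementation; the record fields and function names are invented for the example:

```python
import hashlib
import json

def add_record(chain, record):
    """Append a custody record, linking it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"record": record, "prev_hash": prev_hash}
    # Hash the record plus the previous hash, so history can't be edited silently.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return chain

def verify(chain):
    """Recompute every hash; any edited record breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {"record": entry["record"], "prev_hash": entry["prev_hash"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

chain = []
add_record(chain, {"part": "bracket-7", "from": "supplier", "to": "factory"})
add_record(chain, {"part": "bracket-7", "from": "factory", "to": "retailer"})
assert verify(chain)

chain[0]["record"]["to"] = "somewhere-else"  # tamper with history
assert not verify(chain)
```

Production systems add consensus and distribution across parties, but the tamper-evidence that makes the record trustworthy comes from exactly this chaining of hashes.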
Viant, a project out of the Ethereum-based startup studio Consensys, works on a number of capital-intensive areas that serve manufacturers. And Provenance is building a traceability system for materials and products, enabling businesses to engage consumers at the point of sale with information gathered collaboratively from suppliers all along the supply chain.
Going forward, we can expect more blockchain projects to build supply chain management (SCM) software, handle machine-to-machine (M2M) communication and payments, and promote cybersecurity by keeping a company’s data footprint smaller.
Presumably, tomorrow’s manufacturing process will eventually look like one huge, self-sustaining cyber-physical organism that only intermittently requires human intervention. But across sectors, the manufacturing process has a long way to go before we get there.
According to lean manufacturing metrics (measured by overall equipment effectiveness, or OEE), world-class manufacturing sites are working at 85% of their theoretical capacity. Yet the average factory is only at about 60%, meaning there’s vast room for improvement in terms of how activities are streamlined.
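The OEE figure cited above is conventionally the product of three ratios: availability, performance, and quality. A minimal sketch of that standard calculation (the function and sample numbers are illustrative, not taken from the article):

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness: the product of three ratios.

    availability: run time / planned production time
    performance:  actual output rate / ideal output rate
    quality:      good units / total units produced
    """
    return availability * performance * quality

# A plant running 90% of planned time, at 80% of ideal speed,
# with 95% of units passing inspection:
print(round(oee(0.90, 0.80, 0.95), 3))  # 0.684
```

Because the three factors multiply, a plant can look healthy on each individual metric and still land well below the 85% world-class benchmark.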
Industry 4.0’s maturation over the next two decades will first require basic digitization.
Initially, we’ll see a wave of machines become more digital-friendly. Later, that digitization could translate into predictive maintenance and true predictive intelligence.
Large capital goods have evolved to a “power by the hour” business model that guarantees uptime. Power by the hour (or performance-based contracting) is now fairly common in the manufacturing world, especially in mission-critical areas like semiconductors, aerospace, and defense.
The idea dates back to the 1960s, when jet engine manufacturers like GE Aviation, Rolls Royce, and Pratt & Whitney began selling “thrust hours,” as opposed to one-off engine sales. This allows engine makers to escape the commodity trap and to focus on high-margin maintenance and digital platforms. Nowadays, GE is incentivized to track every detail of its engine, because it only gets paid if the engine is working properly.
Despite a guarantee of uptime, a machine’s owner is responsible for optimizing usage (just like airlines that buy jet engines still need to put them to good use). In short, factory owners still “own” the output risk across the chain of machines.
Without digitizing every step, efficiency is being left on the table. Yet there are serious barriers for manufacturers to take on the new burden of analytics.
Shop floors typically contain old machines that still have decades of production left in them. Beyond the significant cost of retrofitting, sensors that track temperature and vibration aren’t made with any particular machine in mind, which lengthens calibration and limits their effectiveness.
When Harley-Davidson’s manufacturing plant went through an IIoT sensor retrofit, Mike Fisher, a general manager at the company, said sensors “make the equipment more complicated, and they are themselves complicated. But with the complexity comes opportunity.”
To put it simply, operational technology (or OT) is similar to traditional IT, but tailored for the “uncarpeted areas.” Where the typical IT stack includes desktops, laptops, and connectivity for knowledge work and proprietary data, OT manages the direct control or monitoring of physical devices.
For manufacturers, the OT stack typically includes:
In a way, IT and OT are two sides of the same coin, and as manufacturing becomes more digitized, the boundary between them will continue to blur.
Today, the “brain” of most industrial machines is the programmable logic controller (PLC), a ruggedized computer. Industrial giants like Siemens, ABB, Schneider, and Rockwell Automation all offer high-priced PLCs, which can be unnecessarily expensive for smaller manufacturing firms.
This has created an opportunity for startups like Oden Technologies to bring off-the-shelf computing hardware that can plug into most machines directly, or integrate with existing PLCs. This, in turn, allows small- and medium-sized businesses to run leaner and analyze their efficiency in real time.
As digitization becomes ubiquitous, the next wave in tech efficiency improvements will be about predictive analytics. Today’s narrative around the Internet of Things has suggested that everything — every conveyor and robotic actuator — will have a sensor, but not all factory functions are of equal value.
Slapping cheap IoT sensors on everything isn’t a cure-all, and it’s entirely possible that more value gets created from a smaller number of more specialized, highly accurate IoT sensors. Augury, for example, uses AI-equipped sensors to listen to machines and predict failure.
Cost-conscious factory owners will recognize that highly accurate sensors will deliver greater ROI than needless IoT.
Computing done at the “edge,” or closer to the sensor, is a new trend within IIoT architecture.
Building on innovations in AI and smarter hardware, Peter Levine of a16z anticipates an end to cloud computing for AVs, drones, and advanced IoT objects.
Connected machines in future factories should be no different.
Companies like Saguna Networks specialize in edge computing (close to the point of collection), whereas a company like Foghorn Systems does fog computing (think a lower-hanging cloud that’s done on-site like a LAN). Both methods allow mission-critical devices to operate safely without the latency of transmitting all data to a cloud, a process that can save big on bandwidth.
In the near future, advances in AI and hardware will allow IoT as we know it to be nearly independent of centralized clouds.
This is important because in the short term, it means that rural factories don’t need to send 10,000 machine messages relaying “I’m OK,” which expends costly bandwidth and compute. Instead, they can just send anomalies to a centralized server and mostly handle the decision-making locally.
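The filtering logic behind that design can be sketched in a few lines: a hypothetical edge gateway keeps routine in-range readings local and forwards only the out-of-range ones upstream (the thresholds, sample values, and function name are all illustrative):

```python
def filter_anomalies(readings, low, high):
    """Split sensor readings into routine values (kept locally)
    and anomalies (forwarded to the central server)."""
    local, forwarded = [], []
    for r in readings:
        (local if low <= r <= high else forwarded).append(r)
    return local, forwarded

# Vibration readings from one machine; only the spike leaves the factory.
local, forwarded = filter_anomalies([0.2, 0.3, 0.25, 4.8, 0.28], 0.0, 1.0)
print(forwarded)  # [4.8]
```

Real edge stacks use statistical or learned models rather than fixed thresholds, but the bandwidth saving comes from the same principle: the decision about what is worth transmitting is made next to the sensor.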
Additionally, cloud computing latency has drastic downsides in manufacturing. Mission-critical systems such as connected factories can’t afford the delay of sending packets to off-site cloud databases. Cutting power to a machine a split-second too late is the difference between avoiding and incurring physical damage.
And in the longer term, edge computing lays down the rails for the autonomous factory. The AI software underpinning the edge will be the infrastructure that allows factory machines to make decisions independently.
In sum, devices that leverage greater computing at the edge of the network are poised to usher in a new, decentralized wave of factory devices.
One paradox of IIoT is that factories bear significant downside risk, yet are barely investing in protection: 28% of the manufacturers in a recent survey said they saw a loss of revenue due to cybersecurity attacks in the past year, but only 30% of executives said they’ll increase IT spend.
Cyber attacks can be devastating to heavy industry, where cyber-physical systems can be compromised. The WannaCry ransomware attack caused shutdowns at the Renault-Nissan auto plants in Europe. And in 2014, a sophisticated cyber attack resulted in physical damage at a German steel plant when an outage prevented a blast furnace from being shut down correctly.
Consequently, critical infrastructure is a growing segment within cybersecurity, and many startups like Bayshore Networks are offering IoT gateways (which bridge the disparate protocols for connected sensors) to allow manufacturers across many verticals to monitor their IIoT networks. Other gateway-based security companies like Xage are even employing blockchain’s tamperproof ledgers so industrial sensors can share data securely.
Similarly, adding connected IoT objects and Industrial Control System (ICS) sensors has opened up new vulnerabilities at the endpoint.
Additionally, several of the most active enterprise cybersecurity investors are corporates with interests in OT computing. The venture arms of Dell (which makes industrial IoT gateways), as well as Google, GE, Samsung, and Intel are among the most active in this space.
Managing the ICS and IIoT systems securely will continue to be a critical area for investment, especially as hack after hack proves OT’s vulnerability.
In a recent write-up about furniture maker Steelcase’s production line, humans were described as being solely present to guide automation technology.
Steelcase’s “vision tables,” which are computerized workstations that dictate step-by-step instructions, eliminate human error in assembling furniture. Using sound cues and overhead scanners to track assembly, the system won’t let workers proceed if a step is done incorrectly. Scanners also allow off-site operations engineers to analyze progress in real time.
The New Yorker wrote about Steelcase’s labor management, “A decade ago, industrial robots assisted workers in their tasks. Now workers — those who remain — assist the robots in theirs.”
What manufacturing looks like has changed drastically in a short time. As a retired Siemens executive recently said, “People on the plant floor need to be much more skilled than they were in the past. There are no jobs for high school graduates at Siemens today.”
But better digitization and cyber-physical technologies are augmenting workers’ efficiency and capacity. Here’s how emerging technologies like augmented reality (AR), wearables, and exosuits are fitting in.
Augmented reality will be able to boost the skills of industrial workers.
In addition to being a hands-free “browser” that can communicate factory performance indicators and assign work, AR can analyze complicated machine environments and use computer vision to map out a machine’s parts, like a real-time visual manual. This makes highly skilled labor like field service a “downloadable” skill (in a manner not unlike The Matrix).
Daqri and Atheer are well-funded headset makers that focus on industrial settings. Upskill‘s Skylight platform (below) makes AR for the industrial workforce using Google Glass, Vuzix, ODG, and Realwear headsets. The company raised nearly $50M from the corporate venture arms of Boeing and GE, among other investors.
Many AR makers envision the tech working like a hands-free “internet browser” that lets workers see real-time stats and other relevant information. Realwear‘s wearable display doesn’t aspire to true augmented reality like a Daqri headset, but even a small display in the corner of the eye is fairly capable.
Others like Scope AR do similar work in field service using mobile and iPad cameras, employing AR to highlight parts on industrial equipment and connecting to support experts in real time. This saves on the travel costs of flying people out to repair broken equipment.
Parsable, which works with mobile phones, is a workflow platform that gives out tasks and digitizes data collection, something that is often done with pencil and paper in industrial environments.
As the maxim goes, “what gets measured gets managed,” and in an area where robots are a constant competitive pressure, manufacturing organizations will invest in technologies that digitize human efforts down to each movement.
Exoskeleton technology is finally becoming a reality on factory floors, which could drastically reduce the physical toll of repetitive work. Startups here are making wearable high-tech gear that bears the load alongside a worker’s limbs and back.
Ekso Bionics, seen below, is piloting its EksoVest suit at Ford Motor Company’s Michigan assembly plants, and workers using the suit have reported less neck stress in their daily demands. The EksoVest reduces wear from repetitive motion and, unlike some competing products, provides lift assistance without batteries or robotics. Ekso’s CTO has said the long-term strategy is to get workers accustomed to the technology before eventually moving into powered exoskeletons.
Sarcos is another well-known exosuit maker, which has raised funding from corporate investors including Schlumberger, Caterpillar, and the venture arms of Microsoft and GE. Sarcos is more strictly focused on remote-controlled robotics and powered exoskeletons, which can lift 200 lbs repeatedly. Delta Air Lines recently said it would join Sarcos’ Technical Advisory Group to pilot the technology.
In similar territory is Strong Arm Technologies, which makes posture-measuring and lift-assisting wearables. Strong Arm touts predictive power to intervene before risk of injury or incident, and is positioned as a labor-focused risk management platform.
Where humans are still needed for dirty and dangerous tasks, wearables and exoskeletons will augment workers’ ability to do the job while also promoting safety.
Automation is coming for dirty, dull, and dangerous jobs first.
Already, many human jobs within the mass-production assembly line have been crowded out by automation. Cyber-physical systems like industrial robotics and 3D printing are increasingly common in the modern factory. Robots have gotten cheaper, more accurate, safer, and more prevalent alongside humans.
Consumer tastes have also broadened, and manufacturers are trying to keep up with increasing demands for customization and variety.
Visions for Industry 4.0 involve a completely intelligent factory where networked machines and products communicate through IoT technology, and not only prototype and assemble a specific series of products, but also iterate on those products based on consumer feedback and predictive information.
Before we reach a world where humans are largely uninvolved with manufacturing, modular design can help existing factories become more flexible.
Modularity allows the factory to be more streamlined for customization, as opposed to the uniformity that’s traditional for the assembly line. Modularity could come in the form of smaller parts, or modules, that go into a more customizable product. Or it could be equipment, such as swappable end-effectors on robots and machines, allowing for a greater variety of machining.
Mass production is already refashioning itself to handle consumer demand for greater customization and variety. 90% of auto makers in a BCG survey said they expect a modular line setup to be relevant in final assembly by 2030. Modular equipment will allow more models to come off the same lines.
Startups are capitalizing on the push toward modular parts.
Seed-stage company Vention makes custom industrial equipment on-demand. Choosing from Vention’s modular parts, all a firm needs to do is upload a CAD design of the equipment they want, and then wait 3 days to be sent specialized tooling or robot equipment. Many existing factories have odd jobs that can be done by a simple cobot (collaborative robot) arm or custom machine, and these solutions will gain momentum as factories everywhere search for ways to improve efficiency.
Modular production will impact any sector offering increased product customization. Personalized medicine, for example, is driving demand for smaller and more targeted batches. In pharmaceutical manufacturing, modularity allows processors to produce a variety of products, with faster changeovers.
Industrial robotics are responsible for eroding manufacturing jobs, which have been on the decline for decades. As a report by Bank of America Merrill Lynch explains: “long robots, short humans.”
But the latest wave of robotics seems to be augmenting what a human worker can accomplish.
Cobots (collaborative robots) are programmable through assisted movement: they “learn” by first being guided through a task manually and then repeating the motion on their own. These robots are considered collaborative because they can work safely alongside humans.
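Conceptually, this teaching workflow is record-and-replay: waypoints captured while the operator hand-guides the arm become the program. A toy sketch under that assumption (real cobots record full joint trajectories, forces, and timing, not bare coordinates):

```python
class Cobot:
    """Toy record-and-replay model of cobot teaching: waypoints captured
    during hand-guiding are replayed verbatim on every production cycle."""
    def __init__(self):
        self.program = []

    def record(self, waypoints):
        # Store the positions the operator moved the arm through.
        self.program = list(waypoints)

    def replay(self, cycles=1):
        # Yield the taught waypoints once per production cycle.
        for _ in range(cycles):
            for wp in self.program:
                yield wp
```

Re-teaching is just another `record` call, which is why cobots suit short, changing production runs better than traditional robots that require offline programming.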
Whether these are truly collaborative or simply rendering human labor redundant remains to be seen. After a Nissan plant in Tennessee added autonomous guided vehicles, no material handlers were laid off despite the increased productivity. European aircraft manufacturer Airbus also uses a mobile robot that works alongside humans to drill thousands of holes into passenger jets.
While even the best robots still have limitations, economists fear that automation will eventually lead to a drastic restructuring of labor.
Due to rising labor costs worldwide, robotics are presently causing a new wave of re-shoring — the return of manufacturing to the United States.
In a 2015 survey by BCG, 24% of US-based manufacturers surveyed said that they were actively shifting production back to the US from China, or were planning to do so over the next two years — up from only 10% in 2012. The majority said lower automation costs have made the US more competitive.
Robotics have become invaluable for monotonous jobs such as packaging, sorting, and repetitive lifting. Cobot manufacturer Universal Robots says some of its robot arms pay for themselves in 195 days on average. As a whole, collaborative robots are priced at an average of $24,000 apiece.
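Those two figures imply the rough arithmetic behind a purchase decision. A quick sketch, using the article's numbers purely as illustrative inputs:

```python
def payback_days(robot_cost, daily_savings):
    """Days until cumulative savings cover the robot's purchase price."""
    return robot_cost / daily_savings

# Illustrative only: a ~$24,000 average cobot price and a ~195-day payback
# together imply roughly $123/day in labor and throughput savings.
implied_daily_savings = 24_000 / 195   # ~123.08 dollars per day
```

Any factory whose displaced labor cost per station exceeds that daily figure clears the same payback bar even faster, which is one reason adoption accelerates as cobot prices fall.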
We’ve previously identified more than 80 robotics startups, but for heavy-duty machining, significant market share is held by big industrial players like ABB, Mitsubishi, Fanuc, and Yaskawa.
In the near term, the reprogrammable nature of cobots will allow manufacturing firms to become more customized and work in parallel with existing equipment and employees. On a longer time horizon, however, robotics will be the engine for moving towards “lights-out” manufacturing.
For certain mass-produced items, 3D printing will never beat the economies of scale seen in injection molding. But for smaller runs, fulfillment using additive manufacturing will make sense.
Using metal additive manufacturing for one-third of components, GE made an engine that burns 15% less fuel than previous designs. GE says testing of the Cessna Denali engine will begin in 2018, ahead of potential flight tests.
Manufacturers will increasingly turn to 3D printing as mass-customization takes off within certain consumer products.
Shoes have become one popular use case to watch. For example, Adidas has partnered with Carbon to mass-print custom athletic shoes. Additionally, other 3D printing services companies like Voxel8 and Wiiv have positioned themselves specifically for the shoe use case.
Just a few years from now, it may be more commonplace to see mass-customized parts in consumer electronics, apparel, and other accessories — all brought to you by 3D printing. Additionally, if rocket-printing startup Relativity Space is any indication, the technology will also be applied to large-scale industrial print jobs.
Industrial 3D printing is the hottest segment within the broader space, and many startups are aiming to deliver advanced materials that include carbon fiber or other metals with exotic properties.
As the factory gets digitized, quality assurance will become increasingly embedded in the organization’s codebase. Machine learning-powered data platforms like Fero, Sight Machine, and Uptake, among a host of others, will be able to codify lean manufacturing principles into systems’ inner workings.
Computer vision and blockchain technologies are already on the scene, and offer some compelling alternative methods for tracking quality.
In mass production, checking whether every product is to specification is a very dull job that is limited by human fallibility. In contrast, future factories will employ machine vision to scan for imperfections that the human eye might miss.
Venture-backed startups like Instrumental are training AI to spot manufacturing issues. And famed AI researcher Andrew Ng has a new manufacturing-focused startup called Landing.ai that is already working with Foxconn, an electronics contract manufacturer. (Below is a view inside Landing.ai’s module for identifying defects.)
Many imperfections in electronics aren’t even visible to the human eye. Being able to instantaneously identify and categorize flaws will automate quality control, making factories more adaptive.
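At its simplest, automated visual inspection compares each unit against a known-good reference and flags deviations beyond a tolerance. The sketch below is a deliberately naive pixel-difference version; production systems like those mentioned above use trained models rather than raw thresholds.

```python
def find_defects(reference, sample, threshold=30):
    """Flag (x, y) positions where a grayscale sample image deviates from
    a golden reference by more than `threshold` gray levels (0-255 scale).
    Images are represented as nested lists of pixel intensities."""
    defects = []
    for y, (ref_row, smp_row) in enumerate(zip(reference, sample)):
        for x, (r, s) in enumerate(zip(ref_row, smp_row)):
            if abs(r - s) > threshold:
                defects.append((x, y))
    return defects
```

Even this crude approach never fatigues and checks every unit, which is the baseline advantage machine vision holds over human spot-checking; learned models then add robustness to lighting, alignment, and part-to-part variation.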
In August 2017, Walmart, Kroger, Nestle, and Unilever, among others, partnered with IBM to use blockchain to improve food safety through enhanced supply chain tracking. Walmart has been working with IBM since 2016, and said that blockchain technology helped reduce the time required to track mango shipments from 7 days to 2.2 seconds.
With 9 other big food suppliers joining the IBM project, the food industry — where collaboration is rare — could also be better aligned for safety recalls.
Similarly, factories employing blockchains or distributed ledgers could be better positioned in the event of recall. In factories where food or automobiles are processed, a single system for managing recalls could more swiftly figure out the origin of faulty parts or contaminated batches, possibly saving lives and money.
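The property that makes a ledger useful here is tamper evidence: each record commits to the hash of the one before it, so rewriting any batch's history breaks every later hash. A minimal single-party sketch in Python (a real deployment would be replicated across supply chain participants, which is what "distributed" adds):

```python
import hashlib
import json

def add_record(chain, batch_data):
    """Append a tamper-evident record; each entry hashes its predecessor."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"data": batch_data, "prev": prev_hash}
    # Hash is computed over the data and the previous hash only.
    body["hash"] = hashlib.sha256(
        json.dumps({"data": body["data"], "prev": body["prev"]},
                   sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return chain

def verify(chain):
    """Recompute every hash; any altered record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        expected = hashlib.sha256(
            json.dumps({"data": rec["data"], "prev": rec["prev"]},
                       sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True
```

In a recall, a verified chain lets investigators walk a contaminated batch back to its origin record with confidence that no intermediary quietly rewrote its history.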
Lights-out warehouses may come even faster than lights-out factories.
With the rise of e-commerce, demand for warehouse space has exploded. The average warehouse ceiling height is up 21% compared to 2001, and spending on new warehouse construction hit a peak in October 2017, with $2.3B spent in that month alone.
Amazon’s historic $775M acquisition of Kiva Systems is said to have set off an arms race among robotics makers. Riding the e-commerce wave and the industry-wide pressure to deliver orders on time, we’ve witnessed an explosion of robotics startups focused on making fulfillment more efficient.
Some startups such as Ready Robotics and Locus have applied the classic robotic arm to package e-commerce orders, though their collaborative nature makes them suited for a number of industrial tasks. We’ve previously looked at industrial robotics companies that could be targets for large corporates.
Manufacturers and hardware-focused investors will continue to hunt for the next robotics maker that’s 10x better than the status quo. And the economics of cheaper and more agile robots may mean we’ll see more robots alongside humans in the short term.
As computer vision melds with enterprise resource planning, fewer people and clipboards will be needed in sorting, scanning, and spotting defects.
Aquifi, for example, uses computer vision inside fixed IIoT and handheld scanners. Machine vision can measure product dimensions, count the number of boxes in a pallet, and inspect the quality of boxes. Today, this work is often done with clipboards, eyeballing, and intermittent scanning.
Vision will be increasingly crucial for IIoT to “abstract away” a real-time picture of what’s happening inside a warehouse. Closing the loop, so to speak, between the physical world and bits and bytes is essential to creating the autonomous warehouse.
Once the product is packaged and palletized, getting it out the door efficiently is a daunting task. With thousands of SKUs and orders to manage, the complexity can be astounding, and enterprise resource planning (ERP) software has proliferated to handle it.
But there’s still room for IoT and blockchain to get even more granular with real-time supply chains.
In general, there is poor awareness about where items are in real time throughout the supply chain.
The fleet telematics field saw several large exits in recent years, with Verizon acquiring both FleetMatics and Telogis. IoT and software for shipments will only grow more important as supply chains decentralize and get automated.
Farther out, the advent of autonomous trucks could mean that autonomous systems will deliver, depalletize, and charge upon receipt of a Bill of Lading. This will bring greener, more efficient movement, as well as more simplified accounting.
Uber and Tesla both have high-profile plans for autonomous semi-trucks, and Starsky Robotics (below) recently raised nearly $20M from Y Combinator, Sam Altman, and Data Collective, among others, specifically for long-haul trucking.
As mentioned above, a number of DLT pilots and blockchain startups are trying to put supply chain management software into a distributed ledger.
The willingness to explore these technologies indicates digitization here is long overdue. The highly fragmented nature of supply chains is a fitting use case for decentralized technologies and could be part of a larger trend for eliminating the inefficiencies of global commerce.
Shipping giant Maersk, for example, is working with Hyperledger on a platform that aims to help shippers, ports, customs offices, and banks in global supply chains track freight. Maersk’s goal is to replace related paperwork with tamper-resistant digital records.
Meanwhile Pemex, the Mexican state-owned petroleum company, is assisting Petroteq in developing oil-specific supply chain management software. The Petroteq project — an enterprise-grade, blockchain-based platform called PetroBLOQ — will enable oil and gas companies to conduct global transactions.
In the future, manufacturers will explore decentralized technologies to make their organizations more autonomous and their goods, inbound and outbound, more digitized in real time. Blockchain not only promises to simplify SCM, but could also make payments more frictionless.
Manufacturing is becoming more efficient, customized, modular, and automated. But factories remain in flux. Manufacturers are known to be slow adopters of technology, and many may resist making new investments. But as digitization becomes the new standard in industry, competitive pressure will escalate the incentive to evolve.
The most powerful levers manufacturers can pull will come in the form of robotics, AI, and basic IoT digitization. Richer data and smart robotics will maximize a factory’s output while minimizing cost and defects. At one unmanned factory in Dongguan, China, employing robotics dropped the defect rate from 25% to less than 5%.
Meanwhile, as cutting-edge categories like blockchain and AR are being piloted in industrial settings, manufacturing could eventually be taken to unprecedented levels of frictionless production and worker augmentation.
In the words of Henry Ford: “If you always do what you always did, you’ll always get what you always got.” To reach its full potential, the manufacturing industry will need to continue to embrace new technology.