Month: February 2019

Predicting a Startup Valuation with Data Science – Sebastian Quintero

The following is a condensed and slightly modified version of a Radicle working paper on the startup economy in which we explore post-money valuations by venture capital stage classifications. We find that valuations have interesting distributional properties and then go on to describe a statistical model for estimating an undisclosed valuation with considerable ease. In conjunction with this post, we are releasing a free tool for estimating startup valuations. To use the tool and to download the full PDF of the working paper, go here, but please read the entirety of this post before doing so. This is not magic and the details matter. With that said, grab some coffee and get comfortable, because we're going deep.

Introduction

It’s often difficult to comprehend the significance of numbers thrown around in the startup economy. If a company raises a $550M Series F at a valuation of $4 billion [3], how big is that really? How does that compare to other Series F rounds? Is that round approximately average when compared to historical financing events, or is it an anomaly?

At Radicle, a disruption research company, we use data science to better understand the entrepreneurial ecosystem. In our quest to remove opacity from the startup economy, we conducted an empirical study to better understand the nature of post-money valuations. While it’s popularly accepted that seed rounds tend to be at valuations somewhere in the $2m to $10m range [18], there isn’t much data to back this up, nor is it clear what valuations really look like at subsequent financing stages. Looking back at historical events, however, we can see some anecdotally interesting similarities.

Google and Facebook, before they were household names, each raised Series A rounds with valuations of $98m and $100m, respectively. More recently, Instacart, the grocery delivery company, and Medium, the social publishing network on which you’re currently reading this, raised Series B rounds with valuations of $400m and $457m, respectively. Instagram wasn’t too dissimilar at that stage, with a Series B valuation of $500m before its acquisition by Facebook in 2012. Moving one step further, Square (NYSE: SQ), Shopify (NYSE: SHOP), and Wish, the e-commerce company that is mounting a challenge against Amazon, all raised Series C rounds with valuations of exactly $1 billion. Casper, the privately held direct-to-consumer startup disrupting the mattress industry, raised a similar Series C with a post-money valuation of $920m. Admittedly, these are probably only systematic similarities in hindsight because human minds are wired to see patterns even when there aren’t any, but that still makes us wonder if there exists some underlying trend. Our research suggests that there is, but why is this important?

We think entrepreneurs, venture capitalists, and professionals working in corporate innovation or M&A would benefit greatly from having an empirical view of startup valuations. New company financings are announced on a daily cadence, and having more data-driven, publicly available research helps anyone who engages with startups make better decisions. That said, this research is solely for informational purposes and our online tool is not a replacement for the intrinsic, ground-up valuation methods and tools already established by the venture capital community. Instead, we think of this body of research as complementary — removing information asymmetries and enabling more constructive conversations for decision-making around valuations.

Making Sense of Startup Valuations

We obtained data for this analysis from Crunchbase, a venture capital database that aggregates funding events and associated meta-data about the entrepreneurial ecosystem. Our sample consists of 8,812 financing events since the year 2010 with publicly disclosed valuations and associated venture stage classifications. Table I below provides summary statistics.

The sample size for the median amount of capital raised at each stage is much higher [N=84k] because round sizes are more frequently disclosed and publicly available.

To better understand the nature of post-money valuations, we assessed their distributional properties using kernel density estimation (KDE), a non-parametric approach commonly used to approximate the probability density function (PDF) of a continuous random variable [8]. Put simply, KDE draws the distribution for a variable of interest by analyzing the frequency of events much like a histogram does. Non-parametric is just a fancy way of saying that the method does not assume the data follows any particular distribution (such as the normal), which makes it perfect for exercises where we want to draw a probability distribution but have no prior knowledge about what it actually looks like.
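
To make the method concrete, a minimal sketch of this kind of estimate in Python might look like the following; the DataFrame and column names (rounds, stage, post_money_usd) are placeholders for illustration rather than our actual Crunchbase extract.

    import numpy as np
    import pandas as pd
    from scipy.stats import gaussian_kde

    def stage_density(rounds: pd.DataFrame, stage: str, grid_points: int = 200):
        """Return a grid of log10 valuations and the KDE density for one venture stage."""
        vals = rounds.loc[rounds["stage"] == stage, "post_money_usd"].dropna()
        log_vals = np.log10(vals)                     # densities are drawn on a log scale
        kde = gaussian_kde(log_vals)                  # non-parametric density estimate
        grid = np.linspace(log_vals.min(), log_vals.max(), grid_points)
        return grid, kde(grid)

    # grid, density = stage_density(rounds, "Series B")
    # median_usd = rounds.loc[rounds["stage"] == "Series B", "post_money_usd"].median()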

The two plots immediately above and further down below show the valuation probability density functions for venture capital stages on a logarithmic scale, with vertical lines indicating the median for each class. Why on a logarithmic scale? Well, post-money valuations are power-law distributed, as most things are in the venture capital domain [5], which means that the majority of valuations are at low values but there’s a long tail of rare but exceptionally high valuation events. Technically speaking, post-money valuations can also be described as being log-normally distributed, which just means that taking the natural logarithm of valuations produces the bell curves we’re all so familiar with. Series A, B, and C valuations can arguably be described as bimodal log-normal distributions, and seed valuations may be approaching multimodality (more on that later), but technical fuss aside, this detail is important because log-normal distributions are easy for us to understand using the common language of mean, median, and standard deviation — even if we have to exponentiate the terms to put them in dollar signs. More importantly, this allows us to consider classical statistical methods that only work when we make strong assumptions about normality.
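
For reference, the exponentiation works out as follows: if the natural log of valuations at a given stage is approximately normal with mean μ and standard deviation σ, then

median(valuation) = e^μ and mean(valuation) = e^(μ + σ²/2),

so a log-scale median of about 16.6, for example, corresponds to roughly e^16.6 ≈ $16m, the Series A median reported below.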

Founders who seek venture capital to get their company off the ground usually start by raising an angel or a seed round. An angel round consists of capital raised from friends, family members, or wealthy individuals, while seed rounds are usually a startup’s first round of capital from institutional investors [18]. The median valuation for both angel and seed is $2.2m USD, while the median valuation for pre-seed is $1.9m USD. While we anticipated some overlap between angel, pre-seed, and seed valuations, we were surprised to find that the distributions for these three classes of rounds almost completely overlap. This implies that these early-stage classifications are remarkably similar in reality. That said, we think it’s possible that the angel sample is biased towards the larger events that get reported, so we remain slightly skeptical of the overlap. And as mentioned earlier, the distribution of seed stage valuations appears to be approaching multimodality, meaning it has multiple modes. This may be due to the changing definition of a seed round and the recent institutionalization of pre-seed rounds, which are equal to or less than $1m in total capital raised and have only recently started being classified as "Pre-seed" in Crunchbase (and hence the small sample size). There’s also a clear mode in the seed valuation distribution around $7m USD, which overlaps with the Series A distribution, suggesting, as others recently have, that some subset of seed rounds are being pushed further out and resemble what Series A rounds were 10 years ago [1].

Around 21 percent of seed stage companies move on to raise a Series A [16] about 18 months after raising their seed — with approximately 50 percent of Series A companies moving on to a Series B a further 18–21 months out [17]. In that time the median valuation jumps to $16m at the Series A and leaps to $130m at the Series B stage. Valuations climb further to a median of $500m at Series C. In general, we think it’s interesting to see the bimodal nature as well as the extent of overlap between the Series A, B, and C valuation distributions. It’s possible that the overlap stems from changes in investor behavior, with the general size and valuation at each stage being continuously redefined. Just like some proportion of seed rounds today are what Series A rounds were 10 years ago, the data suggests, for instance, that some proportion of Series B rounds today are what Series C rounds used to be. This was further corroborated when we segmented the data by decades going back to the year 2000 and compared the resulting distributions. We would note, however, that the changes are very gradual, and not as sensational as is often reported [12].

The median valuation for startups reaches $1b between the Series D and E stages, and $1.65 billion at Series F. This answers our original question, putting Peloton’s $4 billion appraisal at the 81st percentile of valuations at the Series F stage, far above the median, and indeed above the median $2.4b valuation for Series G companies. From there we see a considerable jump to the median Series H and Series I valuations of $7.7b and $9b, respectively. The Series I distribution has a noticeably lower peak in density and higher variance due to a smaller sample size. We know companies rarely make it that far, so that’s expected. Lyft and SpaceX, at valuations of $15b and $27b, respectively, are recent examples of companies that have made it to the Series I stage. (Note: In December 2018 SpaceX raised a Series J round, which is a classification not analyzed in this paper.)

We classified each stage into higher-level classes using the distributions above: Early (Angel, Pre-Seed, Seed), Growth (Series A, B, C), Late (Series D, E, F, G), or Private IPO (Series H, I). With these aggregate classifications, we further investigated how valuations have fared over time and found that the medians (and means) have been more or less stable on a logarithmic scale. What has changed, since 2013, is the appearance of the “Private IPO” [11, 13]. These rounds, described above with companies such as SpaceX, Lyft, and others such as Palantir Technologies, are occurring later and at higher valuations than have previously existed. These late-stage private rounds are at such high valuations that future IPOs, if they ever occur, may end up being down rounds [22].

Approximating an Undisclosed Valuation

Given the above, we designed a simple statistical model to predict a round’s post-money valuation by its stage classification and the amount of capital raised. Why might this be useful? Well, the relationship between capital raised and post-money valuation is true by mathematical definition, so we’re not interested in claiming to establish a causal relationship in the classical sense. A startup’s post-money valuation is equal to an intrinsic pre-money valuation calculated by investors at the time of investment plus the amount of new capital raised [19, 21]. However, pre-money valuations are often not disclosed, so a statistical model for estimating an undisclosed valuation would be helpful when the size of a financing round is available and its stage is either disclosed as well or easily inferred.

We formulated an ordinary least squares log-log regression model after considering that we did not have enough stage classifications and complete observations at each stage for multilevel modeling, and that it would be desirable to build a model that could be easily understood and utilized by founders, investors, executives, and analysts. Formally, our model is of the form:

log(y) = α + Σᵢ βᵢ log(c · rᵢ) + ε

where y is the output post-money valuation, c is the amount of capital raised, rᵢ is a binary term that indicates financing stage i, and ε is the error term. log(c · rᵢ) is, therefore, an interaction term that specifies the amount of capital raised at a specific stage. The model we present does not include stage main effects because the model remains the same whether they’re left in or pulled out, while the coefficients become reparameterizations of the original estimates [23]. In other words, boolean stage main effects adjust the constant and coefficients while maintaining equivalent summed values, increasing the mental gymnastics required for interpretation without adding any statistical power to the regression. Capital main effects are not included because domain knowledge and the distributions above suggest that financing events are always indicative of a company’s stage, so the effect is not fixed, and including capital by itself alongside the interaction terms would therefore misspecify the model. Of course, whether or not a stage classification is agreed upon by investors and founders and specified on the term sheet is another matter.

As is standard practice, we used heteroscedasticity-robust standard errors to estimate the beta coefficients, and residual analysis via a fitted-values-versus-residuals plot confirms that the model satisfies the general assumptions of ordinary least squares regression. There is no multicollinearity between the variables, and a Q-Q plot further confirmed that the data is log-normally distributed. The results are statistically significant at the p < 0.001 level for all terms, with an adjusted R² of 89 percent and an F-statistic of 5,900 (p < 0.001). Table II outlines the results. Monetary values in the model are specified in millions, USD.
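
For readers who want to replicate this kind of specification, a minimal sketch with statsmodels is shown below. The column names are placeholders, HC3 is one common choice of robust covariance estimator, and the snippet illustrates the approach rather than reproducing our exact code.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def fit_valuation_model(rounds: pd.DataFrame):
        """OLS log-log regression using only stage-by-capital interaction terms."""
        df = rounds.dropna(subset=["post_money_usd", "raised_usd", "stage"]).copy()
        log_capital = np.log(df["raised_usd"] / 1e6)      # capital raised, millions USD
        stage_dummies = pd.get_dummies(df["stage"])        # binary stage indicators
        X = stage_dummies.mul(log_capital, axis=0)         # interaction: stage x log(capital)
        X = sm.add_constant(X)                             # single constant, no stage main effects
        y = np.log(df["post_money_usd"] / 1e6)             # post-money valuation, millions USD
        return sm.OLS(y, X).fit(cov_type="HC3")            # heteroscedasticity-robust errors

    # result = fit_valuation_model(rounds)
    # print(result.summary())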

The model can be interpreted by solving for y and differentiating with respect to c to get the marginal effect. Because the model is log-log, the coefficients are elasticities: we can think of percentage increases in capital raised as leading to some percentage increase in the post-money valuation. At the seed stage, for example, for a 10 percent increase in money raised a company can expect a 6.6 percent increase in their post-money valuation, ceteris paribus. That premium increases as companies make their way through the venture capital funnel, peaking at the Series I stage with a 12.4 percent increase in valuation per 10 percent increase in capital raised. In practice, an analyst could approximate an unknown post-money valuation by specifying the amount of capital raised at the appropriate stage in the model, exponentiating the constant and the beta term, and multiplying the values, such that the estimated valuation is ŷ = exp(α) · c^β, using the constant and the beta for the stage in question.

Using the first equation and the values in Table II, the estimated undisclosed post-money valuation of a startup after a $2m seed round is approximately $9.4m USD — for a $35m Series B, it’s $224m — and for a $200m Series D, it’s $1.7b. Subtracting the amount of capital raised from the estimated post-money valuation would yield an estimated pre-money valuation.
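
The seed-stage arithmetic can be checked in a few lines. The intercept below is backed out from the $2m to $9.4m seed example and the 0.66 seed elasticity quoted above, purely for illustration; it is not a substitute for the estimates in Table II.

    import numpy as np

    # Illustration only: the seed elasticity (0.66) and the $2m -> ~$9.4m example
    # come from the text above; the intercept is backed out from them.
    beta_seed = 0.66
    alpha = np.log(9.4) - beta_seed * np.log(2.0)          # ~1.78

    def predict_post_money(capital_mm, beta, alpha=alpha):
        """Estimated post-money valuation (millions USD) from capital raised (millions USD)."""
        return np.exp(alpha) * capital_mm ** beta

    print(round(predict_post_money(2.0, beta_seed), 1))    # ~9.4, matching the seed example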

Can it really be that simple? Well, that depends entirely on your use case. If you want to approximate a valuation and don’t have the tools to do so, and can’t get on the phone with the founders of the company, then the calculations above should be good enough for that purpose. If instead, you’re interested in purchasing a company, this is a good starting point for discussions, but you probably want to use other valuation methods, too. As mentioned earlier, this research is not meant to supplant existing valuation methodologies established by the venture capital community.

As for estimation errors, you can infer from the scatter plot above that, for the predictions at the early stages, you can expect valuations to be off by a few million dollars — for growth-stage companies, a few hundred million — and in the late and private IPO stages, being off by a few billion would be reasonable. Of course, the accuracy of any prediction depends on the reliability of the estimated means, i.e., the credible intervals of the posterior distributions under a Bayesian framework [6], as well as the size of the error from omitted variable bias — which is not insignificant. We can reformulate our model in a directly comparable probabilistic Bayesian framework, in vector notation, as:

log(y) | β, σ², X ~ Normal(Xβ, σ²I)

where the distribution of log(y) given X, an n · k matrix of interaction terms, is normal with a mean that is a linear function of X, observation errors are independent and of equal variance, and I represents an n · n identity matrix. We fit the model with a non-informative flat prior using the No-U-Turn Sampler (NUTS), an extension of the Hamiltonian Monte Carlo MCMC algorithm [9], for which our model converges appropriately and has the desirable hairy caterpillar sampling properties [6].
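
A minimal sketch of that Bayesian formulation in PyMC3 is shown below; the library choice, the HalfNormal prior on the noise scale, and the variable names X_matrix and log_y (the design matrix of interaction terms plus a constant, and the log valuations, prepared as in the OLS sketch above) are illustrative assumptions rather than our exact setup.

    import pymc3 as pm

    # X_matrix: n x k design matrix of interaction terms plus a constant column;
    # log_y: log post-money valuations. Both assumed to be NumPy arrays.
    with pm.Model() as valuation_model:
        beta = pm.Flat("beta", shape=X_matrix.shape[1])     # non-informative flat priors
        sigma = pm.HalfNormal("sigma", sigma=2.0)           # noise scale (an assumption)
        mu = pm.math.dot(X_matrix, beta)                    # mean is a linear function of X
        pm.Normal("log_valuation", mu=mu, sigma=sigma, observed=log_y)
        trace = pm.sample(2000, tune=1000)                  # NUTS is the default sampler

    # pm.summary(trace)  # credible intervals for each stage coefficient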

The 95 percent credible intervals in Figure V suggest that the posterior distributions from angel to Series E, excluding pre-seed, have stable ranges of highly probable values around our original OLS coefficients. However, the distributions become more uncertain at the later stages, particularly for Series F, G, H, and I. This should not be surprising, considering our original sample sizes for the pre-seed class and for the later stages. Because predictions have to be transformed back to the original dollar scale, and because the magnitudes of late-stage rounds tend to be very high, small changes in the exponent lead to dramatically different prediction results. As with any simple tool then, your mileage may vary. For more accurate and precise estimates, we’d suggest hiring a data scientist to build a more sophisticated machine learning algorithm or Bayesian model to account for more features and hierarchy. If your budget doesn’t allow for it, the simple calculation using the estimates in Table II will get you in the ballpark.

Concluding Remarks

This paper provides an empirical foundation for how to think about startup valuations and introduces a statistical model as a simple tool to help practitioners working in venture capital approximate an undisclosed post-money valuation. That said, the information in this paper is not investment advice, and is provided solely for educational purposes from sources believed to be reliable. Historical data is a great indicator but never a guarantee of the future, and statistical models are never correct — only useful [2]. This paper also makes no comment on whether current valuation practices result in accurate representations of a startup’s fair market value, as that is an entirely separate discussion [7].

This research may also serve as a starting point for others to pursue their own applied machine learning research. We translated the model presented in this article into a more powerful learning algorithm [8] with more features that fills in the missing post-money valuations in our own database. These estimates are then passed to Startup Anomaly Detection™, an algorithm we’ve developed to estimate the plausibility that a venture-backed startup will have a liquidity event, such as an IPO or acquisition, given the current state of knowledge about them. Our machine learning system appears to have some similarities with others recently disclosed by GV [15], Google’s venture capital arm, and Social Capital [14], with the exception that our probability estimates are available as part of Radicle’s research products.

Companies will likely continue raising even later and larger rounds in the coming years, and valuations at each stage may continue being redefined, but now we have a statistical perspective on valuations as well as greater insight into their distributional properties, which gives us a foundation for understanding disruption as we look forward.

Source : https://towardsdatascience.com/making-sense-of-startup-valuations-with-data-science-1dededaf18bb

Key to any successful industrial digitalisation project – Manufacturer

Intelligent use of real-time data is critical to successful industrial digitalisation. However, ensuring that data flows effectively is just as critical to success. Todd Gurela explains the importance of getting your manufacturing network right.

Industrial digitalisation, including the Industrial Internet of Things (IIoT), offers great promise for manufacturers looking to optimise business operations.

By bringing together the machines, processes, people and data on your plant floor through a secure Ethernet network, IIoT makes it possible to design, develop, and fabricate products faster, safer, and with less waste.

For example, one automotive parts supplier eliminated network downtime, saving around £750,000 in the process simply by deploying a new wireless network across the factory floor.

The time it took for the company to completely recoup their investment in the project? Just nine months.

The key to any successful industrial digitalisation project is factory data

Without data – extracted from multiple sources and delivered to the right application, at the right time – little optimisation can happen.

And there is a multitude of meaningful data held in factory equipment. Consider how real-time access to condition, performance, and quality data – across every machine on the floor – would help you make better business and production decisions.

Imagine the following. A machine sensor detects that volume is low for a particular part on your assembly line. Data analysis determines, based on real-time production speed and previous output totals, that the part needs to be re-stocked in one hour.

With this information, your team can arrange for replacement parts to arrive before you run out, and avoid a production stoppage.
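
As a toy illustration of the calculation behind such an alert (every number here is invented), the logic is little more than a rate and a threshold:

    # Toy illustration of the re-stocking alert; all values are invented.
    parts_remaining = 480          # reported by the machine sensor
    parts_per_minute = 8           # current real-time production speed
    reorder_lead_time_min = 45     # time for replacement parts to reach the line

    minutes_to_stockout = parts_remaining / parts_per_minute       # 60 minutes
    order_now = minutes_to_stockout <= reorder_lead_time_min + 15  # 15-minute safety margin

    print(f"Stock-out in {minutes_to_stockout:.0f} min; trigger re-order: {order_now}")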

This scenario may be theoretical, but it illustrates a genuine truth. Manufacturers need reliable, scalable, secure factory networks so they can focus on their most important task: making whatever they make more efficiently, at higher quality levels, and at lower costs.

At the heart of this truth is the factory network. So, while the key to a successful Industry 4.0 project is data, the key to meaningful, accurate data is the network. And manufacturers need to plan carefully to ensure their network can deliver on their needs.

Five key network characteristics

There are five characteristics manufacturers should look for in a factory network before selecting a vendor.

In no particular order, they are:

Interoperability – this ability allows for the ‘flattening’ of the industrial network to improve data sharing, and usually includes Ethernet as a standard.

Automation – for ‘plug and play’ network deployment to streamline processes and drive productivity.

Simplicity – the network infrastructure should be simple, as should the management.

Security – your network should be secure and provide visibility into and control of your data to reduce risk, protect intellectual property, and ensure production integrity.

Intelligence – you need a network that makes it possible to analyse data, and take action quickly, even at the network edge.

Manufacturers need solutions with these features to help aggregate, visualise, and analyse data from connected machines and equipment, and to assure the reliable, rapid, and secure delivery of data. Anything less will leave them wanting, and with subpar results.

These five characteristics are explained in more detail below, along with a real-world case study of a British manufacturer who recently modernised its network and is now expanding globally. 

1. Interoperability

Network interoperability allows manufacturers to seamlessly pull data from anywhere in their facility. An emerging standard in this area is Time Sensitive Networking (TSN).

Although not yet widely adopted, TSN provides a common communications pathway for your machines. With TSN, the future of industrial networks will be a single, open Ethernet network across the factory floor that enables manufacturers to access data with ease and efficiency.

Most important, TSN opens up critical control applications such as robot control, drive control, and vision systems to the Industrial Internet of Things (IIoT), making it possible for manufacturers to identify areas for optimisation and cost reduction.

With the OPC-UA protocol now running over TSN, it also becomes possible to have standard and secure communication from sensor to cloud. In fact, TSN fills an important gap in standard networking by protecting critical traffic.

How so? Automation and control applications require consistent delivery of data from sensors, to controllers and actuators.

TSN ensures that critical traffic flows promptly, securing bandwidth and time in the network infrastructure for critical applications, while supporting all other forms of traffic.

And because TSN is delivered over standard Industrial Ethernet, control networks can take advantage of the security built into the technology.

TSN eliminates network silos that block reachability to critical plant areas, so that you can extract real-time data for analytics and business insights.

This is key to the future of factory networks, as TSN will drive the interoperability required for manufacturers to maximise the value from Industry 4.0 projects.

One leading manufacturer estimated that unscheduled downtime cost them more than £16,000/minute in lost profits and productivity. That’s almost £1m per hour if production stops. Could your organisation survive a stoppage like that?


2. Automation

Network automation is critical for manufacturers who have growing network demands. This includes adding new machines or integrating operational controls into existing infrastructure, as well as net-new deployments.

Network uptime becomes increasingly important as the network expands. Ask yourself whether your network and its supporting tools have the capability for ‘plug and play’ network deployments that greatly reduce downtime if – and when – failure occurs.

It’s essential that factories leverage networks that automate certain tasks – to automatically set correct switch settings, for example – to meet Industry 4.0 objectives. The task is too overwhelming otherwise.


3. Simplicity

Like automation, network simplicity is an essential component of the factory network. Choosing a single network infrastructure, capable of handling TSN, Ethernet IP, Profinet, and CCLink traffic can significantly simplify installation, reduce maintenance expense, and reduce downtime.

It also makes it possible to get all your machine controls, from any of the top worldwide automation vendors, to talk through the same network hardware.

Consider also that you want a network that can be managed by operations and IT professionals. Avoid solutions that are too IT-centric and look for user-friendly tools that operations can use to troubleshoot network issues quickly.

Tools that visualise the network topology for operations professionals can be especially useful in this regard.

For example, knowing which PLC (including firmware data) is connected to which port, and which I/O is connected to the same switch, can help speed commissioning and troubleshooting.

Last, validated network designs are essential to factory success. These designs help manufacturers quickly roll out new network deployments and maintain the performance of automation equipment. Make sure this is part of the service your network vendor can provide.


4. Security

Cybersecurity is critically important on the factory floor. As manufacturing networks grow, so does the attack surface, and with it the number of vectors for malicious activity such as a ransomware attack.

According to the Cisco 2017 Midyear Cybersecurity Report, nearly 50% of manufacturers use six or more security vendors in their facilities. This mix and match of security products and vendors can be difficult to manage for even the most seasoned security expert.

No single product, technology or methodology can fully secure industrial operations. However, there are vendors that can provide comprehensive network security solutions in their plant network infrastructure that include simple protections for physical assets, such as blocking access to ports in unmanaged switches or using managed switches.

Protecting critical manufacturing assets requires a holistic defence-in-depth security approach that uses multiple layers of defence to address different types of threats. It also requires a network design that leverages industrial security best practices such as ‘Demilitarized Zones’ (DMZs) to provide pervasive security across the entire plant.


5. Intelligence

Consider for a moment how professional athletes react to their surroundings. They interpret what is happening in real-time, and make split-second decisions based on what is going on around them.

Part of what makes those decisions possible is how the players have been coached to react in certain situations. If players needed to ask their coach for advice before taking every shot, tackling the opposition, or sprinting for victory…well, the results wouldn’t be very good.

Just as a team’s performance improves when players can take in their surroundings and perform an appropriate action, the factory performs better when certain network data can be processed and actioned upon immediately – without needing to travel to the data centre first.

Processing data in this way is called ‘edge’, or ‘fog’, computing. It entails running applications right on your network hardware to make more intelligent, faster decisions.

Manufacturers need to access information quickly, filter it in real-time, then use that data to better understand processes and areas for improvement.
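
To make the idea concrete, here is a small, hypothetical sketch of that kind of edge filtering in Python; the threshold and function names are illustrative, not a specific vendor's API.

    from statistics import mean

    VIBRATION_LIMIT_MM_S = 4.5   # hypothetical alarm threshold

    def process_at_edge(readings_mm_s):
        """Summarise a window of vibration readings locally and flag anomalies."""
        summary = {
            "mean": round(mean(readings_mm_s), 2),
            "peak": max(readings_mm_s),
            "alarm": max(readings_mm_s) > VIBRATION_LIMIT_MM_S,
        }
        return summary   # only this small summary, not the raw stream, leaves the edge

    print(process_at_edge([3.1, 3.4, 3.2, 5.1, 3.3]))
    # {'mean': 3.62, 'peak': 5.1, 'alarm': True}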

Processing data at the edge is key to unlocking networking intelligence, so it’s important to ask yourself whether your factory network can support edge applications before beginning a project. And if it can’t, it’s time to consider a new network.

A final note on network intelligence. Once you deploy edge applications, make sure you have the tools to manage and implement them with confidence, at scale. Managing massive amounts of data can quickly become a problem, so you’ll need systems that can extract, compute, and move data to the right places at the right time.

The opportunity for manufacturers who invest in Industry 4.0 solutions is massive (and it’s time that leaders from the top floor and shop floor realised it). But before any Industry 4.0 project can get off the ground, the right foundation needs to be in place.

The factory (or industrial) network is that foundation… and manufacturers owe it to themselves to select the best one available.

Case Study:

SAS International is a leading British manufacturer of quality metal ceilings and bespoke architectural metalwork. Installed in iconic, landmark buildings worldwide, SAS products lead through innovation, cutting-edge design and technical acoustic expertise.

Their success is built on continued investment in manufacturing and achieving value for clients through world-class engineered solutions.

In the UK, SAS operates factories in Bridgend, Birmingham and Maybole, with headquarters and warehouse facilities in Reading. The company has recently expanded its export markets and employs nearly 1,000 staff internationally.

However, the IT infrastructure was operating on ageing equipment with connectivity, visibility and security constraints.

The company’s IT team recently modernised its network, upgrading from commercial-grade wireless to a new network solution with a unified dashboard that allows them to remotely manage distributed sites.

They now have instant visibility and control over the network devices, as well as the mobile devices used by employees daily.

Results

During the initial deployment, the IT team was able to identify cabling issues that previously they would not have been alerted to or been able to investigate.

With upcoming projects and continual work to optimise solutions such as cloud storage, the network is now robust and reliable enough to support future IT needs.

SAS is retrofitting numerous manufacturing machines with computers. This retrofit, partnered with the new network, allows remote communications between the machines and the designers without having to manually input data at the machines themselves.

The robust wireless infrastructure is replacing the manual printing and checking of stock with handheld scanners, creating a more efficient and cost-effective product flow.

Fault mitigation and anomaly detection have been huge benefits of the solution. For example, the IT team was able to quickly identify a bandwidth issue when a phenomenal amount of data was generated from an automated transfer to a shop machine.

They were able to spot the issue, identify the machine, and fix the problem. Before, they would merely have seen there was a network slowdown, but wouldn’t have been able to identify or resolve the problem.

The SAS team will continue to benefit from the included firmware updates and new feature releases that are integrated into the solution, providing them with a future-proof platform as they expand to global sites.

Source : https://www.themanufacturer.com/articles/the-key-to-any-successful-industrial-digitalisation-project/

Why Blockchain Differs From Traditional Technology Life Cycles – Daniel Heyman

Why another bubble is likely and what the blockchain space should focus on now

In the aftermath of the 2001 internet bubble, Carlota Perez published her influential book Technological Revolutions and Financial Capital. This seminal work provides a framework for how new technologies create both opportunity and turmoil in society. I originally learned about Perez’s work through venture capitalist Fred Wilson, who credits it as a key intellectual underpinning of his investment theses.

In the wake of the 2018 ICO bubble and with the purported potential of blockchain, many people have drawn parallels to the 2001 bubble. I recently reread Perez’s work to think through if there are any lessons for the world of blockchain, and to understand the parallels and differences between then and now. As Mark Twain may or may not have said, “History doesn’t repeat itself, but it does rhyme.”

Framework Overview

In Technological Revolutions and Financial Capital, Carlota Perez analyzes five “surges of development” that have occurred over the last 250 years, each through the diffusion of a new technology and associated way of doing business. These surges are still household names today: the Industrial Revolution, the railway boom, the age of steel, the age of mass production and, of course, the information age. Each one created a burst of development, new ways of doing business, and generated a new class of successful entrepreneurs (from Carnegie to Ford to Jobs). Each one created an economic common sense and set of business models that supported the new technology, which Perez calls a ‘techno-economic paradigm’. Each surge also displaced old industries, drove bubbles to burst, and led to significant social turmoil.

Technology Life Cycles

Perez provides a framework for how new technologies first take hold in society and then transform society. She calls the initial phase of this phenomenon “installation.” During installation, technologies demonstrate new ways of doing business and achieving financial gains. This usually creates a frenzy of investment in the new technology which drives a bubble and also intense experimentation in the technology. When the bubble bursts, the subsequent recession (or depression) is a turning point to implement social and regulatory changes to take advantage of the infrastructure created during the frenzy. If changes are made, a “golden age” typically follows as the new technology is productively deployed. If not, a “gilded age” follows where only the rich benefit. In either case, the technology eventually reaches maturity and additional avenues for investment and returns in the new technology dwindle. At this point, the opportunity for a new technology to irrupt onto the scene emerges.

Image from Technological Revolutions and Financial Capital

Inclusion-Exclusion

Within Perez’s framework, new techno-economic paradigms both encourage and discourage innovation, through an inclusion-exclusion process. This means that as new techno-economic paradigms are being deployed, they provide opportunities for entrepreneurs to mobilize and new modes of business to create growth, and at the same time, they exclude alternative technologies because entrepreneurs and capital are following the newly proven path provided by the techno-economic paradigm. When an existing technology reaches maturity and investment opportunities diminish, capital and talent go in search of new technologies and techno-economic paradigms.

Technologies Combine

One new technology isn’t enough for a new techno-economic paradigm. The age of mass production was created by combining oil and the combustion engine. Railways required the steam engine. The information age required the microprocessor, the internet, and much more. Often, a technology will, as Perez says, “gestate” as a small improvement to the existing techno-economic paradigm, until complementary technologies are created and the exclusion process of the old paradigm ends. Technologies can exist in this gestation period for quite some time until the technologies and opportunities are aligned for the installation period to begin.

Frenzies and Bubbles

In many ways, the bubbles created by the frenzy in the installation phase make it possible for the new technology to succeed. The bubble creates a burst of (over-)investment in the infrastructure of the new technology (railways, canals, fiber optic cables, etc.). This infrastructure makes it possible for the technology to successfully deploy after the bubble bursts. The bubbles also encourage a spate of experimentation with new business models and new approaches to the technologies, enabling future entrepreneurs to follow proven paths and avoid common pitfalls. While the bubble creates a lot of financial losses and economic pain, it can be crucial in the adoption of new technologies.

Connecting the Dots

A quick look at Perez’s framework would lead one to assume that 2018 was the blockchain frenzy and bubble, so we must be entering blockchain’s “turning point.” This would be a mistake.

My analysis of Perez’s framework suggests that blockchain is actually still in the gestation period, in the early days of a technology life cycle before the installation period. 2018 was not a Perez-style frenzy and bubble because it did not include key outcomes that are necessary to reach a turning point: significant infrastructure improvements and replicable business models that can serve as a roadmap during the deployment period. The bubble came early because blockchain technology enabled liquidity earlier in its life cycle.

There are three main implications of remaining in the gestation period. First, another blockchain-based frenzy and bubble is likely to come before the technology matures. In fact, multiple bubbles may be ahead of us. Second, the best path to success is to work through, rather than against, the existing technology paradigm. Third, the ecosystem needs to heavily invest in infrastructure for a new blockchain-based paradigm to emerge.

The ICO Bubble Doesn’t Match Up

2018 did show many of the signs of a Perez-style ‘frenzy period’ entering into a turning point. The best way (and ultimately the worst way) to make money was speculation. ‘Fundamentals’ of projects rarely mattered in their valuations or growth. Wealth was celebrated and individual prophets gained recognition. Expectations went through the roof. Scams and fraud were prevalent. Retail investors piled in for fear of missing out. The frenzy had all the tell-tale signs of a classic bubble.

Although there are no “good bubbles,” bubbles can have good side effects. During Canal Mania and Railway Mania, canals and railways were built that had little hope of ever being profitable. Investors lost money, but after the bubble, these canals and railways were still there. This new infrastructure made future endeavors cheaper and easier. After the internet bubble burst in 2001, fiber optic cables were selling for pennies on the dollar. Investors did terribly, but the fiber optics infrastructure created value for consumers and made it possible for the next generation of companies to be built. This over-investment in infrastructure is often necessary for the successful deployment of new technologies.

The ICO bubble, however, did not have the good side effects of a Perez-style bubble. It didn’t produce nearly enough infrastructure to help the blockchain ecosystem move forward.

Compared to previous bubbles, the cryptosphere’s investment in infrastructure was minimal and likely to be obsolete very soon. The physical infrastructure — in mining operations, for example — is unlikely to be useful. Additional mining power on a blockchain has significantly decreasing marginal returns and different characteristics to traditional infrastructure. Unlike a city getting a new fiber optic cable or a new canal, new people do not gain access to blockchain because of additional miners. Additionally, proof of work mining is unlikely to be the path blockchain takes moving forward.

The non-physical infrastructure was also minimal. The tools that can be best described as “core blockchain infrastructure” did not have easy access to the ICO market. Dev tools, wallets, software clients, user-friendly smart contract languages, and cloud services (to name a few) are the infrastructure that will drive blockchain technology toward maturity and full deployment. The cheap capital provided through ICOs primarily flowed to the application layer (even though the whole house has been built on an immature foundation). This created incentives for people to focus on what was easily fundable rather than most needed. These perverse incentives may have actually hurt the development of key infrastructure and splintered the ecosystem.

I don’t want to despair about the state of the ecosystem. Some good things came out of the ICO bubble. Talent has flooded the field. Startups have been experimenting with different use cases to see what sticks. New blockchains were launched incorporating a wide range of new technologies and approaches. New technologies have come to market. Many core infrastructure projects raised capital and made significant technical progress. Enterprises have created their blockchain strategies. Some very successful companies were born, which will continue to fund innovation in the space. The ecosystem as a whole continues to evolve at breakneck speed. As a whole, however, the bubble did not leave in its wake the infrastructure one would expect after a Perez-style bubble.

Liquidity Came Early

The 2018 ICO bubble happened early in blockchain technology’s life-cycle, during its gestation period, which is much earlier than Perez’s framework would predict. This is because the technology itself enabled liquidity earlier in the life-cycle. The financial assets became liquid before the underlying technology matured.

In the internet bubble, it took companies many years to go public, and as such there was some quality threshold and some reporting required. This process enabled the technology to iterate and improve before the liquidity arrived. Because blockchain enabled liquid tokens that were virtually free to issue, the rush was on to create valuable tokens rather than valuable companies or technologies. You could create a liquid asset without any work on the underlying technology. The financial layer jumped straight into a liquid state while the technology was left behind. The resulting tokens existed in very thin markets that were highly driven by momentum.

Because of the early liquidity, the dynamics of a bubble were able to start early relative to the technology’s maturity. After all, this was not the first blockchain bubble (bitcoin already has a rich history of bubbles and crashes). The thin markets in which these assets existed likely accelerated the dynamics of the bubble.

What the Blockchain Space Needs to Focus on Now

In the fallout of a bubble, Perez outlines two necessary components to successfully deploy new and lasting technologies: proven, replicable business models and easy-to-use infrastructure. Blockchain hasn’t hit these targets yet, and so it’s a pretty obvious conclusion that blockchain is not yet at a “turning point.”

While protocol development is happening at a rapid clip, blockchain is not yet ready for mass deployment into a new techno-economic paradigm. We don’t have the proven, replicable business models that can expand industry to industry. Exchanges and mining companies, the main success stories of blockchain, are not replicable business models and do not cross industries. We don’t yet have the infrastructure for mass adoption. Moreover, the use cases that are gaining traction are mostly in support of the existing economic system. Komgo is using blockchain to improve an incredibly antiquated industry (trade finance) but it is still operating within the legacy economic paradigm.

Blockchain, therefore, is still in the “gestation period.” Before most technologies could enter the irruption phase and transform the economy, they were used to augment the existing economy. In blockchain, this looks like private and consortium chain solutions.

Some people in blockchain see this as a bad result. I see it as absolutely crucial. Without these experiments, blockchain risks fading out as a technological movement before it’s given the chance to mature and develop. In fact, one area where ConsenSys is not given the credit I believe it deserves is in bringing enterprises into the Ethereum blockchain space. This enterprise interest brings in more talent, lays the seeds for additional infrastructure, and adds credibility to the space. I am more excited by enterprise usage of blockchain today than by any other short-term development.

The Future of Blockchain Frenzy

This was not the first blockchain bubble. I don’t expect it to be the last (though hopefully some lessons will be learned from the last 12 months). Perez’s framework predicts that when the replicable business model is found in blockchain, another period of frenzied investment will occur, likely leading to a bubble. As Fred Wilson writes, “Carlota Perez [shows] ‘nothing important happens without crashes.’” Given the amount of capital available, I think this is a highly likely outcome. Given the massive potential of blockchain technology, the bubble is likely to involve more capital at risk than the 2018 one.

This next frenzy will have the same telltale signs of the previous one. Fundamentals will decrease in importance; retail investors will enter the market for fear of missing out; fraud will increase; and so on.

Lessons for Blockchain Businesses

Perez’s framework offers two direct strategic lessons for PegaSys and for any serious protocol development project in the blockchain space. First, we should continue to work with traditional enterprises. Working with enterprises will enable the technology to evolve and will power some experimentation of business models. This is a key component of the technology life-cycle and the best bet to help the ecosystem iterate.

Second, we must continue investing in infrastructure and diverse technologies for the ecosystem to succeed. This might sound obvious at first, but the point is that we will miss out on the new techno-economic paradigm if we only focus on the opportunities that are commercially viable today. Our efforts in Ethereum 1.x and 2.0 are directly born from our goal of helping the ecosystem mature and evolve. The work other groups in Ethereum and across blockchain are doing also drives towards this goal. We are deeply committed to the Ethereum roadmap and at the same time recognize the value that innovations outside Ethereum bring to the space. Ethereum’s roadmap has learned lessons from other blockchains, just as those chains have been inspired by Ethereum. This is how technologies evolve and improve.

Source : https://hackernoon.com/why-blockchain-differs-from-traditional-technology-life-cycles-95f0deabdf85
