Category: Enterprise

Why every organisation needs a Head of Problems – Zevae Zaheer

It is disheartening to see so many problem-less projects, with a solution – be it a technology, a policy, a feature, an intervention or training – being developed before there is an articulation, common understanding or qualification of the (real) problem it’s meant to solve.

More often than not, failure is not a result of poorly executed solutions but a result of poorly defined problems. You could argue that the Pareto principle applies here: 80% of the ‘new’ innovation work you or your organisation is doing is probably solving the wrong problems.

Before outlining the job spec of the Head of Problems, I’ll connect four dots – a mix of observations, frustrations, and insights – which, taken together, were the catalyst for penning this post.

Dot one – Deconstructing the Chief Innovation Officer role: There is no shortage of quirky job titles out there – from innovation guerilla to innovation designer – but very few are well defined, or defined in relation to other existing roles. One of the initial reasons I started writing these posts was that I saw the typical tenure of a Chief Innovation Officer lasting, on average, just 18 months. I hypothesise that one reason for this is that the role is trying to do too much. An alternative is to deconstruct the Chief Innovation Officer role into key roles, best seen as a funnel, to make success not just possible, but more probable.

●     The Head of Problems’ job is to identify, qualify and build an engaged community around the problem hypotheses.

●     The Head of Experimentation’s job is to take the problem hypotheses and run multiple experiments with either a PoC, pilot, prototype, or MVP.

●     The Head of Innovation’s job is to take the successful experiments and scaffold them into a sustainable solution.

●     The Head of Blitzscaling’s job (source: Reid Hoffman) is to scale fast.

Each of these roles requires different skill-sets, mindsets, KPIs, resources, and methodologies.

Of course, there are risks and concerns with this configuration too, most notably around the handovers. I don’t see these as distinct functions but as distinct roles within the same team, which avoids any hard handover. There is also a wider question of who they all report to; I would argue that they need to report to a Chief Growth Officer (but that’s for another post). Finally, there is a concern that most organisations won’t be able to create the business case for all these ‘new’ roles. But what’s the alternative: keep going with high rates of innovation failure and low ROI?

Dot two – We are not incentivised to spend enough time in the problem space: Having had the pleasure and the pain of many failed (and a few successful) projects, strategy offsites, tech platforms, and community-building initiatives, I can see on reflection that the failures occurred because we paid lip service to the problem and moved quickly into the solution space (thinking, doing, drafting, creating). We’re not rewarded, supported, or evaluated on how well we define a problem. It can be perceived as not ‘action-oriented’ enough. We’re judged on how quickly we can move to the solution phase and into the real world – even though most of what we put out there will later fail, partly because we didn’t define the (right) problem to solve.

Dot three – We don’t like the word ‘problem’: ‘Problem’ sounds negative. Too much ‘problem’ talk makes you sound like a pessimist. Some may suggest the softer language of challenges or aspirational language of opportunities. I would say the latter language is best reserved for the ‘solution developers’. The ‘problem developers’ need to have uncomfortable conversations others are not willing to have. As Brian Bates’ research outlined, the more successful architects had the trait of ‘playing’ with the problems more than the ‘average’ architects who jumped faster into designing solutions: “It was almost childlike, like when a child gets utterly absorbed in a problem.”

The framing and choice of words matter. I would argue that:

●     An unfulfilled (customer) need, if significant, is a problem.

●     An opportunity, especially a high-value one, that is not capitalised on is a problem. Going after an opportunity not rooted in a validated problem is a recipe for failure.

●     A challenge is a problem.

●     A pain-point is a problem.

●     A missed target can be a symptom of a broader problem.

Tristan raises an interesting point – if people are not trying to solve it, then it’s not really a problem. Essentially, we shouldn’t be cavalier in our use of the word ‘problem’. Yet, I feel that it’s our job to sell the problem by articulating the consequences of not solving it. Only when the consequences are significant does it become a problem.

As Judy Estrin notes, “A problem without a name cannot command attention, understanding, or resources—three essential ingredients of change.”

Dot four – Pitch problems downstream: On reflection, after many years in many different types of organisations, I realise that we spend an inordinate amount of time and mental energy selling concepts, ideas, and solutions higher up the chain (to executives). Most of the time we have no real clue what ideas will stick, and which ones will fail to yield any traction. Even if my immediate boss likes the idea, it usually goes up at least two to three levels. Each time you massage and craft the problem so that it looks good internally it gets further removed from reality. And the ideas that do stick often don’t end up having much of an impact because they are inherently safe. I propose that we do the reverse. Executives, as owners of the strategic problems, should pitch problems and then invite teams (inside and outside the organisation) to solve them.

Now, for the job spec.

The Head of Problems’ job is to create a repeatable process to a) identify, b) qualify, and c) build a community around the problems

Ultimately, the goal for the Head of Problems is to develop a good ‘customer-problem-company’ fit. That is to say, to define problems that customers have which the company is uniquely positioned to go after.

Before going into detail, let’s play devil’s advocate for a moment:

  • You may be thinking that solving problems is everyone’s job. Yes, it is. But the Head of Problems’ job is not to pick operational problems (e.g. we missed a sales target, which sits at the role level) but those with the potential to deliver strategic growth.
  • Another view is: aren’t these just new labels for existing roles, for example Head of Strategy? But look at most Head of Strategy specs and you will see little if any mention of the word problem. Or should it be the role of Head of Research? But this role goes beyond researching and documenting insights, especially when it comes to building a community around the prioritised problems.
  • I’ve heard many people say something along the lines of ‘we start our workshops with problem definition’. Firstly, any problem definition done in a closed room is rarely an accurate reflection of reality. Even if it were, this still covers only one third of the role – identifying and framing the problem.

To emphasise, all three jobs of identifying, qualifying, and building an engaged community around the problems are needed – not as a one-time activity but as a daily effort. Hence the need for a role to build the structures, incentives, and processes that will enable these three jobs to run effectively. This role is by no means easy. It requires a different mindset from the one required to validate solutions.

1. Identify

Problems are everywhere. And much has already been written on techniques and tools, from design thinking to customer journey mapping to ethnographic research, to help identify them. The job of the Head of Problems is to break the current habit of treating problem definition as a one-off activity and to create a repeatable system for identifying problems. Problems must be sought in a wide variety of places:

●     Find problems that are hidden and locked in data – data from call centres, complaints, churn, and website forums. In this article, the hotel uses data to find problems (and humans to fix them).

●     Find problems by going through the experience(s) of products daily (not just quarterly). They can be found in the waiting queues, and in the pre- and post-failures of products/solutions.

●     Find problems through people who’ve found workarounds – the positive deviance examples.

●     Find problems in the collective intelligence of staff. Instead of the ‘submit your idea’ website, give them tools to ‘identify the problem’. Because the person framing the problem is not attached, required, or incentivised to come up with solutions, the theory goes, they will ask better questions by listening and observing.

●     Find problems directly from existing customers. Not by giving them a 50-question survey but by asking powerful questions such as ‘what’s one thing that you would change?’ Or, create ‘customer incubators’.

●     Find problems from an external community. Within the organisation there will always be inherent internal biases and constraints, which limit the view of the problems and subsequent solutions. The Head of Problems can source externals such as entrepreneurs-in-residence to identify the problems, without the organisation-first lens.

This identification process is inherently messy, non-linear, and uncertain – not knowing when and where the next strategic problem will be uncovered.

Problem identification also includes ‘framing of the problem’. There is no single right way to do this, but the starting point is crucial. It is a strong determinant of whether you will get an incremental solution or a ‘10x’ disruptive solution at the end. “In most cases people solve problems by copying what other people do with slight variations,” Musk told us. “I operate on the physics approach of analysis by first principles, where you boil things down to the most fundamental truths in a particular area and then you reason up from there.”

2. Qualify

As Head of Problems, once you’ve created a repeatable process to identify problems, the next step is to qualify them. Very few of us are good judges of solutions that are pitched to us, especially solutions that are conjecture or hypotheses and require a few mental leaps to visualise success. With problem qualification, it’s a lot easier to be objective about the problem and therefore to pick higher-potential, urgent, and important problems to solve.

There is no single perfect tool to qualify problems. There are others who have written about this, so here I’ll merely highlight three additional tools: problem-spider, problem-hypotheses, and problem-portfolio.

Problem-spider

The Head of Problems should be asking questions that touch on all these dimensions, to get a holistic understanding of the problem.

The problem-spider is a visual representation of how confident you feel that you have the answers along each of the dimensions. On the right-hand side are the external views of the problem and on the left the internal views. The further out the placement of the dot, the more evidence you have to back it up.

Customer view – a customer backwards view:

●     Frequency of it – Does this problem occur on a daily, monthly or infrequent basis?

●     Cost of solving the problem – How much effort is currently spent on the unsolved problem? Do customers/users put together a partial solution to minimise the problem or solve it by making a workaround that compromises on cost, quality, time, usability, etc.? What are the costs of existing solutions (including manual workarounds, competitors etc.)?

●     Cost of not solving it today – Quantify the consequences of this problem not being solved from the customers’ perspective.

●     Actively solving it – Identify a particular use-case not a generic definition of the problem. It’s a lot cheaper to sell the solution that solves the problem to a user than to educate the user about the problem in the first place.

●     How long it takes to solve the problem – The problem may occur daily, but if it only takes a minute to solve, it is less important than a problem that consumes 15 minutes every day. Time is one measure; you may look at other measures.

Org view – an organisation-forward view:

●     Urgency of solving it – What’s the level of urgency, in terms of existing strategy alignment, resource allocation, prioritisation, etc. to solve this problem?

●     Cost of solving it – What’s the internal cost to staff and budget to at least begin experimenting to solve this problem?

●     Total addressable problem – How big is the market size for this particular problem? Is it large enough for a CxO, a Business Unit Director, a product manager, or sales staff to care about?

●     Feasibility of solving it – How feasible is it in terms of existing skill sets, mindsets, networks etc. to prioritise solving it?

●     Complexity of problem – Does it require one person to buy and use, or is it more complicated, e.g. a buyer, a user, and influencers? These are all different people.

This is a living, breathing visual. If it’s not being updated throughout all the phases of solution development, then its impact is reduced.

Getting to this relatively simple view of the problem involves doing uncomfortable and sometimes time-consuming activities. For example, living or ‘apprenticing with the problem’, solving the problem with pure human intervention (no product/service development), and post-mortems of past failures and pre-mortems of the current problem definition.

Problem hypotheses

There is never one right view or frame of the problem. The Head of Problems can use the problem-matrix (below) to frame the problem from different perspectives – mapping different levels of risk (what should be tested first during the search rather than at the start or scale stage) against different types of problem hypotheses (desirability, feasibility, viability, efficacy, etc.).

Problem portfolio

As Peter Drucker said: “There is nothing so useless as doing efficiently that which should not be done at all.” As Head of Problems, once you’ve qualified problems, you need to develop a problem portfolio. Categorise problems according to core, adjacent, and disruptive customer needs, rank them according to value, strategic alignment, etc. A tool for this will be shared in my forthcoming book.

3. Build an engaged community around the problem

Finally, the third part of the job spec is to resource the problem with a community – a diverse set of ‘who’ vested in the problem. It’s not enough for the Head of Problems to ‘fall in love with the problem, not the solution’; a whole community needs to, too. All too often the reason the solution-developers don’t spend time with customers/users is that it’s outside their habits and norms, or the customers are not easily accessible.

Internal community: In every one of our organisations we are faced with many orphan problems – those that continue to persist because they have no owner. The job of the Head of Problems is not just to find an executive ‘problem sponsor’, but also to build an internal community around the problem. They can find ‘problem advocates’ whose roles can be tangentially impacted if the problem is solved, ‘problem followers’ who are keen to see it progress and will jump in when there is traction, ‘problem blockers’, and so on. Each of these will advance the problem thesis, and eventually solve it, through different conversations.

Problem-solving (external) networks: Too often when trying to solve a complex problem, the necessary skill sets, mindsets, permissions, etc. won’t all be available in-house. The Head of Problems’ job is to build or harness the value of networks that form around problems – networks made up of people from public and private organisations of all sizes.

Customer community: Tristan is astute in his observation that “When entrepreneurs start talking about problems, they stop talking about people”. That’s why the Head of Problems’ job is to ensure that customers facing the problem, most likely the early adopters, are easily accessible and part of the journey from the beginning – not just at the end, when trying to sell the solution to them.

In summary, the Head of Problems’ job is to create a repeatable process to a) identify, b) qualify, and c) build a community around the problems that are worth solving.

Source : https://www.linkedin.com/pulse/why-every-organisation-needs-head-problems-zevae-m-zaheer/

Money Out of Nowhere: How Internet Marketplaces Unlock Economic Wealth – Bill Gurley

In 1776, Adam Smith released his magnum opus, An Inquiry into the Nature and Causes of the Wealth of Nations, in which he outlined his fundamental economic theories. Front and center in the book — in fact in Book 1, Chapter 1 — is his realization of the productivity improvements made possible through the “Division of Labour”:

It is the great multiplication of the production of all the different arts, in consequence of the division of labour, which occasions, in a well-governed society, that universal opulence which extends itself to the lowest ranks of the people. Every workman has a great quantity of his own work to dispose of beyond what he himself has occasion for; and every other workman being exactly in the same situation, he is enabled to exchange a great quantity of his own goods for a great quantity, or, what comes to the same thing, for the price of a great quantity of theirs. He supplies them abundantly with what they have occasion for, and they accommodate him as amply with what he has occasion for, and a general plenty diffuses itself through all the different ranks of society.

Smith identified that when men and women specialize their skills, and also importantly “trade” with one another, the end result is a rise in productivity and standard of living for everyone. In 1817, David Ricardo published On the Principles of Political Economy and Taxation, where he expanded upon Smith’s work in developing the theory of Comparative Advantage. What Ricardo proved mathematically is that if one country has simply a comparative advantage (not even an absolute one), it is still in everyone’s best interest to embrace specialization and free trade. In the end, everyone ends up in a better place.

There are two key requirements for these mechanisms to take force. First and foremost, you need free and open trade. It is quite bizarre to see modern-day politicians throw caution to the wind and ignore these fundamental tenets of economic science. Time and time again, the fact patterns show that when countries open borders and freely trade, the end result is increased economic prosperity. The second, and less discussed, requirement is for the two parties that should trade to be aware of one another’s goods or services. Unfortunately, information asymmetry, as well as physical distance and the resulting distribution costs, can cut against the economic advantages that would otherwise arise for all.

Fortunately, the rise of the Internet, and specifically of Internet marketplace models, acts as an accelerant to the productivity benefits of the division of labour AND comparative advantage by reducing information asymmetry and increasing the likelihood of a perfect match with regard to the exchange of goods or services. In his 2005 book, The World Is Flat, Thomas Friedman recognizes that the Internet has the ability to create a “level playing field” for all participants, one where geographic distances become less relevant. The core reason that Internet marketplaces are so powerful is that in connecting economic traders that would otherwise not be connected, they unlock economic wealth that otherwise would not exist. In other words, they literally create “money out of nowhere.”

EXCHANGE OF GOODS MARKETPLACES

Any discussion of Internet marketplaces begins with the first quintessential marketplace, eBay(*). Pierre Omidyar founded AuctionWeb in September of 1995, and its rise to fame is legendary. What started as a web site to trade laser pointers and Beanie Babies (the Pez dispenser start is quite literally a legend), today enables transactions of approximately $100B per year. Over its twenty-plus year lifetime, just over one trillion dollars in goods have traded hands across eBay’s servers. These transactions, and the profits realized by the sellers, were truly “unlocked” by eBay’s matching and auction services.

In 1999, Jack Ma created Alibaba, a Chinese-based B2B marketplace for connecting small and medium enterprises with potential export opportunities. Four years later, in May of 2003, they launched Taobao Marketplace, Alibaba’s answer to eBay. By aggressively launching a free-to-use service, Alibaba’s Taobao quickly became the leading person-to-person trading site in China. In 2018, Taobao GMV (Gross Merchandise Value) was a staggering RMB2,689 billion, which equates to $428 billion in US dollars.

There have been many other successful goods marketplaces that have launched post eBay & Taobao — all providing a similar service of matching those who own or produce goods with a distributed set of buyers who are particularly interested in what they have to offer. In many cases, a deeper focus on a particular category or vertical allows these marketplaces to distinguish themselves from broader marketplaces like eBay.

  • In 2000, Eric Baker and Jeff Fluhr founded StubHub, a secondary ticket exchange marketplace. The company was acquired by eBay in January 2007. In its most recent quarter, StubHub’s GMV reached $1.4B, and for the entire year 2018, StubHub had GMV of $4.8B.
  • Launched in 2005, Etsy is a leading marketplace for the exchange of vintage and handmade items. In its most recent quarter, the company processed the exchange of $923 million of sales, which equates to a $3.6B annual GMV.
  • Founded by Michael Bruno in Paris in 2001, 1stdibs(*) is the world’s largest online marketplace for luxury one-of-a-kind antiques, high-end modern furniture, vintage fashion, jewelry, and fine art. In November 2011, David Rosenblatt took over as CEO and has been scaling the company ever since. Over the past few years dealers, galleries, and makers have matched billions of dollars in merchandise to trade buyers and consumer buyers on the platform.
  • Poshmark was founded by Manish Chandra in 2011. The website, which is an exchange for new and used clothing, has been remarkably successful. Over 4 million sellers have earned over $1 billion transacting on the site.
  • Julie Wainwright founded The Real Real in 2011. The company is an online marketplace for authenticated luxury consignment. In 2017, the company reported sales of over $500 million.
  • In 2015, Eddy Lu and Daishin Sugano launched GOAT, a marketplace for the exchange of sneakers. Despite this narrow focus, the company has been remarkably successful. The estimated annual GMV of GOAT and its leading competitor Stock X is already over $1B per year (on a combined basis).

SHARING ECONOMY MARKETPLACES

With the launch of Airbnb in 2008 and Uber(*) in 2009, these two companies established a new category of marketplaces known as the “sharing economy.” Homes and automobiles are the two most expensive items that people own, and in many cases the ability to own the asset is made possible through debt — mortgages on houses and car loans or leases for automobiles. Despite this financial exposure, for many people these assets are materially underutilized. Many extra rooms and second homes are vacant most of the year, and the average car is used less than 5% of the time. Sharing economy marketplaces allow owners to “unlock” earning opportunities from these underutilized assets.

Airbnb was founded by Joe Gebbia and Brian Chesky in 2008. Today there are over 5 million Airbnb listings in 81,000 cities. Over two million people stay in an Airbnb each night. In November of this year, the company announced that it had achieved “substantially” more than $1B in revenue in the third quarter. Assuming a marketplace rake of something like 11%, this would imply gross room revenue of over $9B for the quarter — which would be $36B annualized. As the company is still growing, we can easily guess that in 2019-2020 time frame, Airbnb will be delivering around $50B per year to home-owners who were previously sitting on highly underutilized assets. This is a major “unlocking.”

When Garrett Camp and Travis Kalanick founded Uber in 2009, they hatched the industry now known as ride-sharing. Today over 3 million people around the world use their time and their underutilized automobiles to generate extra income. Without the proper technology to match people who wanted a ride with people who could provide that service, taxi and chauffeur companies were drastically underserving the potential market. As an example, we estimate that ride-sharing revenues in San Francisco are well north of 10X what taxis and black cars were providing prior to the launch of ride-sharing. These numbers will go even higher as people increasingly forgo the notion of car ownership altogether. We estimate that the global GMV for ride sharing was over $100B in 2018 (including Uber, Didi, Grab, Lyft, Yandex, etc) and still growing handsomely. Assuming a 20% rake, this equates to over $80B that went into the hands of ride-sharing drivers in a single year — and this is an industry that did not exist 10 years ago. The matching made possible with today’s GPS and Internet-enabled smart phones is a massive unlocking of wealth and value.
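For readers who want to sanity-check the rake arithmetic in the Airbnb and ride-sharing estimates above, here is a minimal Python sketch; the take rates are the assumptions stated in the text, not disclosed figures.

    # Airbnb: from quarterly revenue and an assumed take rate to implied GMV.
    quarterly_revenue_b = 1.0                 # "substantially more than $1B", in $B
    airbnb_rake = 0.11                        # assumed marketplace take rate
    quarterly_gmv_b = quarterly_revenue_b / airbnb_rake   # ~9.1 -> "over $9B"
    annualized_gmv_b = quarterly_gmv_b * 4                # ~36  -> "$36B annualized"

    # Ride-sharing: from estimated GMV and an assumed take rate to driver earnings.
    rideshare_gmv_b = 100.0                   # estimated 2018 global GMV, in $B
    rideshare_rake = 0.20                     # assumed platform take rate
    driver_payout_b = rideshare_gmv_b * (1 - rideshare_rake)   # ~80 -> "over $80B"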

While it is a lesser known category, using your own backyard and home to host dog guests as an alternative to a kennel is a large and growing business. Once again, this is an asset against which the marginal cost to host a dog is near zero. By combining their time with this otherwise unused asset, dog sitters are able to offer a service that is quite compelling for consumers. Rover.com (*) in Seattle, which was founded by Greg Gottesman and Aaron Easterly in 2011, is the leading player in this market. (Benchmark is an investor in Rover through a merger with DogVacay in 2017). You may be surprised to learn that this is already a massive industry. In less than a decade since the company started, Rover has already paid out over half a billion dollars to hosts that participate on the platform.

EXCHANGE OF LABOR MARKETPLACES

While not as well known as the goods exchanges or sharing economy marketplaces, there is a growing and exciting increase in the number of marketplaces that help match specifically skilled labor with key opportunities to monetize their skills. The most noteworthy of these is likely Upwork(*), a company that formed from the merger of Elance and Odesk. Upwork is a global freelancing platform where businesses and independent professionals can connect and collaborate remotely. Popular categories include web developers, mobile developers, designers, writers, and accountants. In the 12 months ended June 30, 2018, the Upwork platform enabled $1.56 billion of GSV (gross services revenue) across 2.0 million projects between approximately 375,000 freelancers and 475,000 clients in over 180 countries. These labor matches represent the exact “world is flat” reality outlined in Friedman’s book.

Other noteworthy and emerging labor marketplaces:

  • HackerOne(*) is the leading global marketplace that coordinates the world’s largest corporate “bug bounty” programs with a network of the world’s leading hackers. The company was founded in 2012 by Michiel Prins, Jobert Abma, Alex Rice, and Merijn Terheggen, and today serves the needs of over 1,000 corporate bug bounty programs. On top of that, the HackerOne network of over 300,000 hackers (adding 600 more each day) has resolved over 100K confirmed vulnerabilities which resulted in over $46 million in awards to these individuals. There is an obvious network effect at work when you bring together the world’s leading programs and the world’s leading hackers on a single platform. The Fortune 500 is quickly learning that having a bug bounty program is an essential step in fighting cyber crime, and that HackerOne is the best place to host their program.
  • Wyzant is a leading Chicago-based marketplace that connects tutors with students around the country. The company was founded by Andrew Geant and Mike Weishuhn in 2005. The company has over 80,000 tutors on its platform and has paid out over $300 million to these professionals. The company started matching students with tutors for in-person sessions, but increasingly these are done “virtually” over the Internet.
  • Stitch Fix (*) is a leading provider of personalized clothing services that was founded by Katrina Lake in 2011. While the company is not primarily a marketplace, each order is hand-curated by a work-at-home “stylist” who works part-time on their own schedule from the comfort of their own home. Stitch Fix’s algorithms match the perfect stylist with each and every customer to help ensure the optimal outcome for each client. As of the end of 2018, Stitch Fix has paid out well over $100 million to their stylists.
  • Swing Education was founded in 2015 with the objective of creating a marketplace for substitute teachers. While it is still early in the company’s journey, they have already established themselves as the leader in the U.S. market. Swing is now at over 1,200 school partners and has filled over 115,000 teacher absence days. They have helped 2,000 substitute teachers get in the classroom in 2018, including 400 educators who earned permits, which Swing willingly financed. While it seems obvious in retrospect, having all substitutes on a single platform creates massive efficiency in a market where previously every single school had to keep their own list and make last minute calls when they had vacancies. And their subs just have to deal with one Swing setup process to get access to subbing opportunities at dozens of local schools and districts.
  • RigUp was founded by Xuan Yong and Mike Witte in Austin, Texas in March of 2014. RigUp is a leading labor marketplace focused on the oilfield services industry. “The company’s platform offers a large network of qualified, insured and compliant contractors and service providers across all upstream, midstream and downstream operations in every oil and gas basin, enabling companies to hire quickly, track contractor compliance, and minimize administrative work.” According to the company, GMV for 2017 was an impressive $150 million, followed by an astounding $600 million in 2018. Often, investors miss out on vertically focused companies like RigUp as they find themselves overly anxious about TAM (total available market). As you can see, that can be a big mistake.
  • VIPKid, which was founded in 2013 by Cindy Mi, is a truly amazing story. The idea is simple and simultaneously brilliant. VIPKid links students in China who want to learn English with native English speaking tutors in the United States and Canada. All sessions are done over the Internet, once again epitomizing Friedman’s very flat world. In November of 2018, the company reported having 60,000 teachers contracted to teach over 500,000 students. Many people believe the company is now well north of a US$1B run rate, which implies that around $1B will pass hands from Chinese parents to western teachers in 2019. That is quite a bit of supplemental income for U.S.-based teachers.

These vertical labor marketplaces are to LinkedIn what companies like Zillow, Expedia, and GrubHub are to Google search. Through a deeper understanding of a particular vertical, a much richer perspective on the quality and differentiation of the participants, and the enablement of transactions — you create an evolved service that has much more value to both sides of the transaction. And for those professionals participating in these markets, your reputation on the vertical service matters way more than your profile on LinkedIn.

NEW EMERGING MARKETPLACES

Having been a fortunate investor in many of the previously mentioned companies (*), Benchmark remains extremely excited about future marketplace opportunities that will unlock wealth on the Internet. Here are two examples of such companies that we have funded in the past few years.

The New York Times describes Hipcamp as “The Sharing Economy Visits the Backcountry.” Hipcamp(*) was founded in 2013 by Alyssa Ravasio as an engine to search across the dozens and dozens of State and National park websites for campsite availability. As Hipcamp gained traction with campers, landowners with land near many of the National and State parks started to reach out to Hipcamp asking if they could list their land on Hipcamp too. Hipcamp now offers access to more than 350k campsites across public and private land, and their most active private land hosts make over $100,000 per year hosting campers. This is a pretty amazing value proposition for both land owners and campers. If you are a rural landowner, here is a way to create “money out of nowhere” with very little capital expenditures. And if you are a camper, what could be better than to camp at a unique, bespoke campsite in your favorite location?

Instawork(*) is an on-demand staffing app for gig workers (professionals) and hospitality businesses (partners). These working professionals seek economic freedom and a better life, and Instawork gives them both — an opportunity to work as much as they like, but on their own terms with regard to when and where. On the business partner side, small business owners/managers/chefs do not have access to reliable sources to help them with talent sourcing and high turnover, and products like LinkedIn are more focused on white-collar workers. Instawork was cofounded by Sumir Meghani in San Francisco and was a member of the 2015 Y-Combinator class. 2018 was a break-out year for Instawork with 10X revenue growth and 12X growth in Professionals on the platform. The average Instawork Professional is highly engaged on the platform, and typically opens the Instawork app ten times a day. This results in 97% of gigs being matched in less than 24 hours — which is powerfully important to both sides of the network. Also noteworthy, the Professionals on Instawork average 150% of minimum wage, significantly higher than many other labor marketplaces. This higher income allows Instawork Professionals like Jose to begin to accomplish their dreams.

THE POWER OF THESE PLATFORMS

As you can see, these numerous marketplaces are a direct extension of the productivity enhancers first uncovered by Adam Smith and David Ricardo. Free trade, specialization, and comparative advantage are all enhanced when we can increase the matching of supply and demand of goods and services as well as eliminate inefficiency and waste caused by misinformation or distance. As a result, productivity naturally improves.

Specific benefits of global internet marketplaces:

    1. Increase wealth distribution (all examples)
    2. Unlock wasted potential of assets (Uber, Airbnb, Rover, and Hipcamp)
    3. Better match specific workers with specific opportunities (Upwork, WyzAnt, RigUp, VIPKid, Instawork)
    4. Make specific assets reachable and findable (eBay, Etsy, 1stDibs, Poshmark, GOAT)
    5. Allow for increased specialization (Etsy, Upwork, RigUp)
    6. Enhance supplemental labor opportunities (Uber, Stitch Fix, Swing Education, Instawork, VIPKid), where the worker is in control of when and where they work
    7. Reduce forfeiture by enhancing utilization of financed assets (mortgages, car loans, etc.) (Uber, Airbnb, Rover, Hipcamp)

Source : http://abovethecrowd.com/2019/02/27/money-out-of-nowhere-how-internet-marketplaces-unlock-economic-wealth/

Predicting a Startup Valuation with Data Science – Sebastian Quintero

The following is a condensed and slightly modified version of a Radicle working paper on the startup economy in which we explore post-money valuations by venture capital stage classifications. We find that valuations have interesting distributional properties and then go on to describe a statistical model for estimating an undisclosed valuation with considerable ease. In conjunction with this post, we are releasing a free tool for estimating startup valuations. To use the tool and to download the full PDF of the working paper, go here, but please read the entirety of this post before doing so. This is not magic and the details matter. With that said, grab some coffee and get comfortable––we’re going deep.

Introduction

It’s often difficult to comprehend the significance of numbers thrown around in the startup economy. If a company raises a $550M Series F at a valuation of $4 billion [3] — how big is that really? How does that compare to other Series F rounds? Is that round approximately average when compared to historical financing events, or is it an anomaly?

At Radicle, a disruption research company, we use data science to better understand the entrepreneurial ecosystem. In our quest to remove opacity from the startup economy, we conducted an empirical study to better understand the nature of post-money valuations. While it’s popularly accepted that seed rounds tend to be at valuations somewhere in the $2m to $10m valuation range [18], there isn’t much data to back this up, nor is it clear what valuations really look like at subsequent financing stages. Looking back at historical events, however, we can see some anecdotally interesting similarities.

Google and Facebook, before they were household names, each raised Series A rounds with valuations of $98m and $100m, respectively. More recently, Instacart, the grocery delivery company, and Medium, the social publishing network on which you’re currently reading this, raised Series B rounds with valuations of $400m and $457m, respectively. Instagram wasn’t too dissimilar at that stage, with a Series B valuation of $500m before its acquisition by Facebook in 2012. Moving one step further, Square (NYSE: SQ), Shopify (NYSE: SHOP), and Wish, the e-commerce company that is mounting a challenge against Amazon, all raised Series C rounds with valuations of exactly $1 billion. Casper, the privately held direct-to-consumer startup disrupting the mattress industry, raised a similar Series C with a post-money valuation of $920m. Admittedly, these are probably only systematic similarities in hindsight because human minds are wired to see patterns even when there aren’t any, but that still makes us wonder if there exists some underlying trend. Our research suggests that there is, but why is this important?

We think entrepreneurs, venture capitalists, and professionals working in corporate innovation or M&A would benefit greatly from having an empirical view of startup valuations. New company financings are announced on a daily cadence, and having more data-driven publicly available research helps anyone that engages with startups make better decisions. That said, this research is solely for informational purposes and our online tool is not a replacement for the intrinsic, from the ground up, valuation methods and tools already established by the venture capital community. Instead, we think of this body of research as complementary — removing information asymmetries and enabling more constructive conversations for decision-making around valuations.

Making Sense of Startup Valuations

We obtained data for this analysis from Crunchbase, a venture capital database that aggregates funding events and associated meta-data about the entrepreneurial ecosystem. Our sample consists of 8,812 financing events since the year 2010 with publicly disclosed valuations and associated venture stage classifications. Table I below provides summary statistics.

The sample size for the median amount of capital raised at each stage is much higher [N=84k] because round sizes are more frequently disclosed and publicly available.

To better understand the nature of post-money valuations, we assessed their distributional properties using kernel density estimation (KDE), a non-parametric approach commonly used to approximate the probability density function (PDF) of a continuous random variable [8]. Put simply, KDE draws the distribution for a variable of interest by analyzing the frequency of events much like a histogram does. Non-parametric is just a fancy way of saying that the method does not make any assumption about the data being normally distributed, which makes it perfect for exercises where we want to draw a probability distribution but have no prior knowledge about what it actually looks like.

The two plots immediately above and further down below show the valuation probability density functions for venture capital stages on a logarithmic scale, with vertical lines indicating the median for each class. Why on a logarithmic scale? Well, post-money valuations are power-law distributed, as most things are in the venture capital domain [5], which means that the majority of valuations are at low values but there’s a long tail of rare but exceptionally high valuation events. Technically speaking, post-money valuations can also be described as being log-normally distributed, which just means that taking the natural logarithm of valuations produces the bell curves we’re all so familiar with. Series A, B, and C valuations may be argued to be bimodal log-normal distributions, and seed valuations may be approaching multimodality (more on that later), but technical fuss aside, this detail is important because log-normal distributions are easy for us to understand using the common language of mean, median, and standard deviation — even if we have to exponentiate the terms to put them in dollar signs. More importantly, this allows us to consider classical statistical methods that only work when we make strong assumptions about normality.
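As an illustration of the method, here is a minimal KDE sketch in Python, assuming `valuations` holds the post-money valuations (in $m) for a single stage; working in log10 space mirrors the logarithmic axes described above.

    import numpy as np
    from scipy.stats import gaussian_kde

    def valuation_density(valuations, n_points=200):
        """Estimate the density of log10 post-money valuations for one stage."""
        log_vals = np.log10(np.asarray(valuations, dtype=float))
        kde = gaussian_kde(log_vals)              # non-parametric density estimate
        grid = np.linspace(log_vals.min(), log_vals.max(), n_points)
        return grid, kde(grid)                    # x in log10($m), y = estimated density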

Founders that seek venture capital to get their company off the ground usually start by raising an angel or a seed round. An angel round consists of capital raised from their friends, family members, or wealthy individuals, while seed rounds are usually a startup’s first round of capital from institutional investors [18]. The median valuation for both angel and seed is $2.2m USD, while the median valuation for pre-seed is $1.9m USD. While we anticipated some overlap between angel, pre-seed and seed valuations, we were surprised to find that the distributions for these three classes of rounds almost completely overlap. This implies that these early-stage classifications are remarkably similar in reality. That said, we think it’s possible that the angel sample is biased towards the larger events that get reported, so we remain slightly skeptical of the overlap. And as mentioned earlier, the distribution of seed stage valuations appears to be approaching multimodality, meaning it has multiple modes. This may be due to the changing definition of a seed round and the recent institutionalization of pre-seed rounds, which are equal to or less than $1m in total capital raised and have only recently started being classified as ‘Pre-seed’ in Crunchbase (and hence the small sample size). There’s also a clear mode in the seed valuation distribution around $7m USD, which overlaps with the Series A distribution, suggesting, as others recently have, that some subset of seed rounds are being pushed further out and resemble what Series A rounds were 10 years ago [1].

Around 21 percent of seed stage companies move on to raise a Series A [16] about 18 months after raising their seed — with approximately 50 percent of Series A companies moving on to a Series B a further 18–21 months out [17]. In that time the median valuation jumps to $16m at the Series A and leaps to $130m at the Series B stage. Valuations climb further to a median of $500m at Series C. In general, we think it’s interesting to see the bimodal nature as well as the extent of overlap between the Series A, B, and C valuation distributions. It’s possible that the overlap stems from changes in investor behavior, with the general size and valuation at each stage continuously redefined. Just like some proportion of seed rounds today are what Series A rounds were 10 years ago, the data suggests, for instance, that some proportion of Series B rounds today are what Series C rounds used to be. This was further corroborated when we segmented the data by decades going back to the year 2000 and compared the resulting distributions. We would note, however, that the changes are very gradual, and not as sensational as is often reported [12].

The median valuation for startups reaches $1b between the Series D and E stages, and $1.65 billion at Series F. This answers our original question, putting Peloton’s $4 billion appraisal at the 81st percentile of valuations at the Series F stage, far above the median, and indeed above the median $2.4b valuation for Series G companies. From there we see a considerable jump to the median Series H and Series I valuations of $7.7b and $9b, respectively. The Series I distribution has a noticeably lower peak in density and higher variance due to a smaller sample size. We know companies rarely make it that far, so that’s expected. Lyft and SpaceX, at valuations of $15b and $27b, respectively, are recent examples of companies that have made it to the Series I stage. (Note: In December 2018 SpaceX raised a Series J round, which is a classification not analyzed in this paper.)
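The percentile claims above are easy to reproduce once you have the stage-level valuations. A minimal sketch follows, with purely illustrative numbers since the underlying Crunchbase sample is not reproduced here.

    from scipy.stats import percentileofscore

    # Illustrative Series F post-money valuations in $m (not the paper's data).
    series_f_valuations = [950, 1200, 1650, 2100, 3400, 5600, 7800]

    # Percentile rank of a $4B ($4,000m) Series F valuation within that sample.
    print(percentileofscore(series_f_valuations, 4000))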

We classified each stage into higher level classes using the distributions above, as one of Early (Angel, Pre-Seed, Seed), Growth (Series A, B, C), Late (Series D, E, F, G), or Private IPO (Series H, I). With these aggregate classifications, we further investigated how valuations have fared across time and found that the medians (and means) have been more or less stable on a logarithmic scale. What has changed, since 2013, is the appearance of the “Private IPO” [11, 13]. These rounds, described above with companies such as SpaceX, Lyft, and others such as Palantir Technologies, are occurring later and at higher valuations than have previously existed. These late-stage private rounds are at such high valuations that future IPOs, if they ever occur, may end up being down rounds [22].

Approximating an Undisclosed Valuation

Given the above, we designed a simple statistical model to predict a round’s post-money valuation by its stage classification and the amount of capital raised. Why might this be useful? Well, the relationship between capital raised and post-money valuation is true by mathematical definition, so we’re not interested in claiming to establish a causal relationship in the classical sense. A startup’s post-money valuation is equal to an intrinsic pre-money valuation calculated by investors at the time of investment plus the amount of new capital raised [19, 21]. However, pre-money valuations are often not disclosed, so a statistical model for estimating an undisclosed valuation would be helpful when the size of a financing round is available and its stage is either disclosed as well or easily inferred.

We formulated an ordinary least squares log-log regression model after considering that we did not have enough stage classifications and complete observations at each stage for multilevel modeling and that it would be desirable to build a model that could be easily understood and utilized by founders, investors, executives, and analysts. Formally, our model is of the form:

\log(y) = \beta_0 + \sum_i \beta_i \log(c \cdot r_i) + \epsilon

where y is the output post-money valuation, c is the amount of capital raised, r is a binary term that indicates the financing stage, and epsilon is the error term. log(c · r) is, therefore, an interaction term that specifies the amount of capital raised at a specific stage. The model we present does not include stage main effects because the model remains the same, whether they’re left in or pulled out, while the coefficients become reparameterizations of the original estimates [23]. In other words, boolean stage main effects adjust the constant and coefficients while maintaining equivalent summed values — increasing the mental gymnastics required for interpretation without adding any statistical power to the regression. Capital main effects are not included because domain knowledge and the distributions above suggest that financing events are always indicative of a company’s stage, so the effect is not fixed, and therefore including capital by itself results in a misspecified model alongside interaction terms. Of course, whether or not a stage classification is agreed upon by investors and founders and specified on the term sheet is another matter.

As is standard practice, we used heteroscedasticity robust standard errors to estimate the beta coefficients, and residual analysis via a fitted values versus residuals plot confirms that the model validates the general assumptions of ordinary least squares regression. There is no multicollinearity between the variables, and a Q-Q plot further confirmed that the data is log-normally distributed. The results are statistically significant at the p < 0.001 level for all terms with an adjusted R² of 89 percent and an F-Statistic of 5,900 (p < 0.001). Table II outlines the results. Monetary values in the model are specified in millions, USD.
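For readers who want to reproduce the estimation, here is a minimal statsmodels sketch, assuming a dataframe with `post_money` and `capital` columns (both in $m) and a categorical `stage` column; the formula mirrors the specification described above (an intercept plus log(capital)-by-stage interactions, no main effects), and the HC1 covariance option corresponds to heteroscedasticity-robust standard errors.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    def fit_log_log_ols(df: pd.DataFrame):
        """OLS of log(post-money valuation) on log(capital) interacted with stage."""
        model = smf.ols("np.log(post_money) ~ np.log(capital):C(stage)", data=df)
        return model.fit(cov_type="HC1")   # heteroscedasticity-robust standard errors

The fitted parameters then hold the constant and one elasticity per stage, which can be exponentiated back into dollar terms.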

The model can be interpreted by solving for y and differentiating with respect to x to get the marginal effect. Therefore, we can think of percentage increases in x as leading to some percentage increase in y. At the seed stage, for example, for a 10 percent increase in money raised a company can expect a 6.6 percent increase in their post-money valuation, ceteris paribus. That premium increases as companies make their way through the venture capital funnel, peaking at the Series I stage with a 12.4 percent increase in valuation per 10 percent increase in capital raised. In practice, an analyst could approximate an unknown post-money valuation by specifying the amount of capital raised at the appropriate stage in the model, exponentiating the constant and the beta term, and multiplying the values, such that:

\hat{y} = e^{\beta_0} \cdot c^{\beta_i}

Using the first equation and the values in Table II, the estimated undisclosed post-money valuation of a startup after a $2m seed round is approximately $9.4m USD — for a $35m Series B, it’s $224m — and for a $200m Series D, it’s $1.7b. Subtracting the amount of capital raised from the estimated post-money valuation would yield an estimated pre-money valuation.
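As a worked sketch of that calculation: Table II is not reproduced in this post, so the constant below is back-solved from the worked examples in the text and the seed elasticity comes from the quoted 6.6 percent figure; treat the numbers as illustrative placeholders for the actual Table II estimates.

    import numpy as np

    INTERCEPT = 1.784                                    # beta_0, log scale (illustrative)
    ELASTICITY = {"seed": 0.66, "series_b": 1.02, "series_d": 1.07}   # illustrative betas

    def estimate_post_money(capital_m, stage):
        """Approximate post-money valuation ($m) from capital raised ($m) and stage."""
        return np.exp(INTERCEPT) * capital_m ** ELASTICITY[stage]

    print(estimate_post_money(2, "seed"))         # ~9.4    -> roughly $9.4m
    print(estimate_post_money(35, "series_b"))    # ~224    -> roughly $224m
    print(estimate_post_money(200, "series_d"))   # ~1,700  -> roughly $1.7b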

Can it really be that simple? Well, that depends entirely on your use case. If you want to approximate a valuation and don’t have the tools to do so, and can’t get on the phone with the founders of the company, then the calculations above should be good enough for that purpose. If instead, you’re interested in purchasing a company, this is a good starting point for discussions, but you probably want to use other valuation methods, too. As mentioned earlier, this research is not meant to supplant existing valuation methodologies established by the venture capital community.

As far as estimation errors, you can infer from the scatter plot above that, for the predictions at the early stages, you can expect valuations to be off by a few million dollars — for growth-stage companies, a few hundred million — and in the late and private IPO stages, being off by a few billion would be reasonable. Of course, the accuracy of any prediction depends on the reliability of the estimated means, i.e., the credible intervals of the posterior distributions under a Bayesian framework [6], as well as the size of the error from omitted variable bias — which is not insignificant. We can reformulate our model in a directly comparable probabilistic Bayesian framework, in vector notation, as:

\log(y) \mid X \sim \mathcal{N}(X\beta, \sigma^2 I)

where the distribution of log(y) given X, an n · k matrix of interaction terms, is normal with a mean that is a linear function of X, observation errors are independent and of equal variance, and I represents an n · n identity matrix. We fit the model with a non-informative flat prior using the No-U-Turn Sampler (NUTS), an extension of the Hamiltonian Monte Carlo MCMC algorithm [9], for which our model converges appropriately and has the desirable hairy caterpillar sampling properties [6].
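A minimal PyMC3 sketch of this Bayesian reformulation, assuming `X` is the n × k design matrix of interaction terms (with a constant column) and `log_y` the vector of logged post-money valuations; the flat prior on the coefficients follows the text, while the prior on the noise scale is an assumption of this sketch.

    import numpy as np
    import pymc3 as pm

    def fit_bayesian_log_log(X: np.ndarray, log_y: np.ndarray):
        with pm.Model() as model:
            beta = pm.Flat("beta", shape=X.shape[1])    # non-informative flat prior
            sigma = pm.HalfCauchy("sigma", beta=2.5)    # noise scale (assumed prior)
            mu = pm.math.dot(X, beta)                   # linear mean, X @ beta
            pm.Normal("log_valuation", mu=mu, sigma=sigma, observed=log_y)
            trace = pm.sample(2000, tune=1000)          # NUTS is the default sampler
        return model, trace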

The 95 percent credible intervals in Figure V suggest that the posterior distributions from angel to Series E, excluding pre-seed, have stable ranges of highly probable values around our original OLS coefficients. However, the distributions become more uncertain at the later stages, particularly for Series F, G, H, and I. This should be obvious, considering our original sample sizes for the pre-seed class and for the later stages. Since the data needs to be transformed back to its original scale for appropriate estimation, and since the magnitudes of late-stage rounds tend to be very high, such changes in the exponent will lead to dramatically different prediction results. As with any simple tool, then, your mileage may vary. For more accurate and precise estimates, we’d suggest hiring a data scientist to build a more sophisticated machine learning algorithm or Bayesian model to account for more features and hierarchy. If your budget doesn’t allow for it, the simple calculation using the estimates in Table II will get you in the ballpark.

Concluding Remarks

This paper provides an empirical foundation for how to think about startup valuations and introduces a statistical model as a simple tool to help practitioners working in venture capital approximate an undisclosed post-money valuation. That said, the information in this paper is not investment advice, and is provided solely for educational purposes from sources believed to be reliable. Historical data is a great indicator but never a guarantee of the future, and statistical models are never correct — only useful [2]. This paper also makes no comment on whether current valuation practices result in accurate representations of a startup’s fair market value, as that is an entirely separate discussion [7].

This research may also serve as a starting point for others to pursue their own applied machine learning research. We translated the model presented in this article into a more powerful learning algorithm [8] with more features that fills in the missing post-money valuations in our own database. These estimates are then passed to Startup Anomaly Detection™, an algorithm we’ve developed to estimate the plausibility that a venture-backed startup will have a liquidity event, such as an IPO or acquisition, given the current state of knowledge about them. Our machine learning system appears to have some similarities with others recently disclosed by GV [15], Google’s venture capital arm, and Social Capital [14], with the exception that our probability estimates are available as part of Radicle’s research products.

Companies will likely continue raising even later and larger rounds in the coming years, and valuations at each stage may continue being redefined, but now we have a statistical perspective on valuations as well as greater insight into their distributional properties, which gives us a foundation for understanding disruption as we look forward.

Source : https://towardsdatascience.com/making-sense-of-startup-valuations-with-data-science-1dededaf18bb

Key to any successful industrial digitalisation project – Manufacturer

Intelligent use of real-time data is critical to successful industrial digitalisation. However, ensuring that data flows effectively is just as critical to success. Todd Gurela explains the importance of getting your manufacturing network right.

Industrial digitalisation, including the Industrial Internet of Things (IIoT), offers great promise for manufacturers looking to optimise business operations.

By bringing together the machines, processes, people and data on your plant floor through a secure Ethernet network, IIoT makes it possible to design, develop, and fabricate products faster, safer, and with less waste.

For example, one automotive parts supplier eliminated network downtime, saving around £750,000 in the process simply by deploying a new wireless network across the factory floor.

The time it took for the company to completely recoup their investment in the project? Just nine months.

The key to any successful industrial digitalisation project is factory data

Without data – extracted from multiple sources and delivered to the right application, at the right time – little optimisation can happen.

And there is a multitude of meaningful data held in factory equipment. Consider how real-time access to condition, performance, and quality data – across every machine on the floor – would help you make better business and production decisions.

Imagine the following. A machine sensor detects that volume is low for a particular part on your assembly line. Data analysis determines, based on real-time production speed and previous output totals, that the part needs to be re-stocked in one hour.

With this information, your team can arrange for replacement parts to arrive before you run out, and avoid a production stoppage.
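As a rough illustration of the arithmetic behind that scenario, the Python sketch below (with made-up numbers) shows how a line-side application could turn real-time production speed and remaining stock into a re-order deadline.

    current_stock = 240           # parts remaining in the line-side bin (made up)
    consumption_rate = 4.0        # parts used per minute, from real-time production speed
    replenishment_lead_time = 45  # minutes for replacement parts to reach the line

    minutes_until_empty = current_stock / consumption_rate   # 60 minutes in this example
    if minutes_until_empty <= replenishment_lead_time:
        print("Re-order immediately to avoid a production stoppage.")
    else:
        print(f"Re-order within {minutes_until_empty - replenishment_lead_time:.0f} minutes.")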

This scenario may be theoretical, but it illustrates a genuine truth. Manufacturers need reliable, scalable, secure factory networks so they can focus on their most important task: making whatever they make more efficiently, at higher quality levels, and at lower costs.

At the heart of this truth is the factory network. So, while the key to a successful Industry 4.0 project is data, the key to meaningful, accurate data is the network. And manufacturers need to plan carefully to ensure their network can deliver on their needs.

Five key network characteristics

There are five characteristics manufacturers should look for in a factory network before selecting a vendor.

In no particular order, they are:

Interoperability – this ability allows for the ‘flattening’ of the industrial network to improve data sharing, and usually includes Ethernet as a standard.

Automation – for ‘plug and play’ network deployment to streamline processes and drive productivity.

Simplicity – the network infrastructure should be simple, as should the management.

Security – your network should be secure and provide visibility into and control of your data to reduce risk, protect intellectual property, and ensure production integrity.

Intelligence – you need a network that makes it possible to analyse data, and take action quickly, even at the network edge.

Manufacturers need solutions with these features to help aggregate, visualise, and analyse data from connected machines and equipment, and to assure the reliable, rapid, and secure delivery of data. Anything less will leave them wanting, and with subpar results.

These five characteristics are explained in more detail below, along with a real-world case study of a British manufacturer that recently modernised its network and is now expanding globally.

1. Interoperability

Network interoperability allows manufacturers to seamlessly pull data from anywhere in their facility. An emerging standard in this area is Time Sensitive Networking (TSN).

Although not yet widely adopted, TSN provides a common communications pathway for your machines. With TSN, the future of industrial networks will be a single, open Ethernet network across the factory floor that enables manufacturers to access data with ease and efficiency.

Most important, TSN opens up critical control applications such as robot control, drive control, and vision systems to the Industrial Internet of Things (IIoT), making it possible for manufacturers to identify areas for optimisation and cost reduction.

With the OPC-UA protocol now running over TSN, it also becomes possible to have standard, secure communication from sensor to cloud. In fact, TSN fills an important gap in standard networking by protecting critical traffic.

How so? Automation and control applications require consistent delivery of data from sensors to controllers and actuators.

TSN ensures that critical traffic flows promptly, securing bandwidth and time in the network infrastructure for critical applications, while supporting all other forms of traffic.

And because TSN is delivered over standard Industrial Ethernet, control networks can take advantage of the security built into the technology.

TSN eliminates network silos that block reachability to critical plant areas, so that you can extract real-time data for analytics and business insights.

This is key to the future of factory networks, as TSN will drive the interoperability required for manufacturers to maximise the value from Industry 4.0 projects.
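As a purely conceptual illustration of ‘securing bandwidth and time’, the toy Python sketch below reserves a fixed slice of every transmission cycle for critical control frames and lets best-effort traffic use only what is left. It is not an implementation of TSN (IEEE 802.1Qbv) itself; the cycle length, frame sizes, and queue contents are invented for the example.

    from collections import deque

    CYCLE_US = 1000              # 1 ms transmission cycle (invented)
    CRITICAL_WINDOW_US = 300     # slice reserved for critical control traffic (invented)
    FRAME_US = 100               # assume every frame takes 100 microseconds to send

    critical = deque(f"ctrl-{i}" for i in range(4))
    best_effort = deque(f"bulk-{i}" for i in range(20))

    for cycle in range(3):
        sent, used = [], 0
        # 1) critical control frames go first, inside their reserved window
        while critical and used + FRAME_US <= CRITICAL_WINDOW_US:
            sent.append(critical.popleft())
            used += FRAME_US
        # 2) best-effort frames fill whatever is left of the cycle
        while best_effort and used + FRAME_US <= CYCLE_US:
            sent.append(best_effort.popleft())
            used += FRAME_US
        print(f"cycle {cycle}: {sent}")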

One leading manufacturer estimated that unscheduled downtime cost them more than £16,000/minute in lost profits and productivity. That’s almost £1m per hour if production stops. Could your organisation survive a stoppage like that?


2. Automation

Network automation is critical for manufacturers with growing network demands. This includes adding new machines or integrating operational controls into existing infrastructure, as well as net-new deployments.

Network uptime becomes increasingly important as the network expands. Ask yourself whether your network and its supporting tools have the capability for ‘plug and play’ network deployments that greatly reduce downtime if – and when – failure occurs.

It’s essential that factories leverage networks that automate certain tasks – to automatically set correct switch settings, for example – to meet Industry 4.0 objectives. The task is too overwhelming otherwise.


3. Simplicity

Like automation, network simplicity is an essential component of the factory network. Choosing a single network infrastructure capable of handling TSN, EtherNet/IP, Profinet, and CC-Link traffic can significantly simplify installation, reduce maintenance expense, and reduce downtime.

It also makes it possible to get all your machine controls, from any of the top worldwide automation vendors, to talk through the same network hardware.

Consider also that you want a network that can be managed by operations and IT professionals. Avoid solutions that are too IT-centric and look for user-friendly tools that operations can use to troubleshoot network issues quickly.

Tools that visualise the network topology for operations professionals can be especially useful in this regard.

For example, knowing which PLC (including firmware data) is connected to which port, and which I/O is connected to the same switch, can help speed commissioning and troubleshooting.

Last, validated network designs are essential to factory success. These designs help manufacturers quickly roll out new network deployments and maintain the performance of automation equipment. Make sure this is part of the service your network vendor can provide.


4. Security

Cybersecurity is critically important on the factory floor. As manufacturing networks grow, so do the attack surface and the number of vectors for malicious activity such as a ransomware attack.

According to the Cisco 2017 Midyear Cybersecurity Report, nearly 50% of manufacturers use six or more security vendors in their facilities. This mix and match of security products and vendors can be difficult to manage for even the most seasoned security expert.

No single product, technology or methodology can fully secure industrial operations. However, there are vendors that can provide comprehensive network security solutions within the plant network infrastructure, including simple protections for physical assets such as blocking access to ports on unmanaged switches or using managed switches.

Protecting critical manufacturing assets requires a holistic defence-in-depth security approach that uses multiple layers of defence to address different types of threats. It also requires a network design that leverages industrial security best practices such as ‘Demilitarized Zones’ (DMZs) to provide pervasive security across the entire plant.


5. Intelligence

Consider for a moment how professional athletes react to their surroundings. They interpret what is happening in real-time, and make split-second decisions based on what is going on around them.

Part of what makes those decisions possible is how the players have been coached to react in certain situations. If players needed to ask their coach for advice before taking every shot, tackling the opposition, or sprinting for victory…well, the results wouldn’t be very good.

Just as a team’s performance improves when players can take in their surroundings and perform an appropriate action, the factory performs better when certain network data can be processed and actioned upon immediately – without needing to travel to the data centre first.

Processing data in this way is called ‘edge’, or ‘fog’, computing. It entails running applications right on your network hardware to make more intelligent, faster decisions.

Manufacturers need to access information quickly, filter it in real-time, then use that data to better understand processes and areas for improvement.
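As a hypothetical sketch of what ‘filtering at the edge’ can look like, the Python snippet below summarises a batch of raw sensor readings on the network hardware and forwards only a compact summary plus any out-of-limit samples; the threshold and readings are invented for the example.

    VIBRATION_LIMIT = 4.5  # mm/s, invented alarm threshold

    def process_at_edge(readings):
        """Summarise raw samples locally; only the summary leaves the plant floor."""
        alerts = [r for r in readings if r > VIBRATION_LIMIT]
        return {
            "samples": len(readings),
            "avg": round(sum(readings) / len(readings), 2),
            "max": max(readings),
            "alerts": alerts,
        }

    raw = [2.1, 2.4, 2.2, 5.1, 2.3, 2.2, 4.9, 2.1]
    print(process_at_edge(raw))   # forward this summary instead of every sample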

Processing data at the edge is key to unlocking networking intelligence, so it’s important to ask yourself whether your factory network can support edge applications before beginning a project. And if it can’t, it’s time to consider a new network.

A final note on network intelligence. Once you deploy edge applications, make sure you have the tools to manage and implement them with confidence, at scale. Managing massive amounts of data can quickly become a problem, so you’ll need systems that can extract, compute, and move data to the right places at the right time.

The opportunity for manufacturers who invest in Industry 4.0 solutions is massive (and it’s time that leaders from the top floor and shop floor realised it). But before any Industry 4.0 project can get off the ground, the right foundation needs to be in place.

The factory (or industrial) network is that foundation… and manufacturers owe it to themselves to select the best one available.

Case Study:

SAS International is a leading British manufacturer of quality metal ceilings and bespoke architectural metalwork. Installed in iconic, landmark buildings worldwide, SAS products lead through innovation, cutting-edge design and technical acoustic expertise.

Their success is built on continued investment in manufacturing and achieving value for clients through world-class engineered solutions.

In the UK, SAS operates factories in Bridgend, Birmingham and Maybole, with headquarters and warehouse facilities in Reading. The company has recently expanded its export markets and employs nearly 1,000 staff internationally.

However, the IT infrastructure was operating on ageing equipment with connectivity, visibility and security constraints.

The company’s IT team recently modernised its network, upgrading from commercial-grade wireless to a new network solution with a unified dashboard that allows them to remotely manage distributed sites.

They now have instant visibility and control over the network devices, as well as the mobile devices used by employees daily.

Results

During the initial deployment, the IT team was able to identify cabling issues that previously they would not have been alerted to or been able to investigate.

With upcoming projects and continued work to optimise solutions such as cloud storage, the network is now robust and reliable enough to support future IT needs.

SAS is retrofitting numerous manufacturing machines with computers. This retrofit, partnered with the new network, allows remote communications between the machines and the designers without having to manually input data at the machines themselves.

The robust wireless infrastructure is replacing the manual printing and checking of stock with handheld scanners, creating a more efficient and cost-effective product flow.

Fault mitigation and anomaly detection have been huge benefits of the solution. For example, the IT team was able to quickly identify a bandwidth issue when a phenomenal amount of data was generated from an automated transfer to a shop machine.

They were able to spot the issue, identify the machine, and fix the problem. Before, they would merely have seen there was a network slowdown, but wouldn’t have been able to identify or resolve the problem.

The SAS team will continue to benefit from the included firmware updates and new feature releases that are integrated into the solution, providing a future-proof platform as the company expands to global sites.

Source : https://www.themanufacturer.com/articles/the-key-to-any-successful-industrial-digitalisation-project/

Why Blockchain Differs From Traditional Technology Life Cycles – Daniel Heyman

Why another bubble is likely and what the blockchain space should focus on now

In the aftermath of the 2001 internet bubble, Carlota Perez published her influential book Technological Revolutions and Financial Capital. This seminal work provides a framework for how new technologies create both opportunity and turmoil in society. I originally learned about Perez’s work through venture capitalist Fred Wilson, who credits it as a key intellectual underpinning of his investment theses.

In the wake of the 2018 ICO bubble and with the purported potential of blockchain, many people have drawn parallels to the 2001 bubble. I recently reread Perez’s work to think through if there are any lessons for the world of blockchain, and to understand the parallels and differences between then and now. As Mark Twain may or may not have said, “History doesn’t repeat itself, but it does rhyme.”

Framework Overview

In Technological Revolutions and Financial Capital, Carlota Perez analyzes five “surges of development” that have occurred over the last 250 years, each through the diffusion of a new technology and associated way of doing business. These surges are still household names hundreds of years later: the Industrial Revolution, the railway boom, the age of steel, the age of mass production and, of course, the information age. Each one created a burst of development, new ways of doing business, and generated a new class of successful entrepreneurs (from Carnegie to Ford to Jobs). Each one created an economic common sense and set of business models that supported the new technology, which Perez calls a ‘techno-economic paradigm’. Each surge also displaced old industries, drove bubbles to burst, and led to significant social turmoil.

Technology Life Cycles

Perez provides a framework for how new technologies first take hold in society and then transform society. She calls the initial phase of this phenomenon “installation.” During installation, technologies demonstrate new ways of doing business and achieving financial gains. This usually creates a frenzy of investment in the new technology which drives a bubble and also intense experimentation in the technology. When the bubble bursts, the subsequent recession (or depression) is a turning point to implement social and regulatory changes to take advantage of the infrastructure created during the frenzy. If changes are made, a “golden age” typically follows as the new technology is productively deployed. If not, a “gilded age” follows where only the rich benefit. In either case, the technology eventually reaches maturity and additional avenues for investment and returns in the new technology dwindle. At this point, the opportunity for a new technology to irrupt onto the scene emerges.

Image from Technological Revolutions and Financial Capital

Inclusion-Exclusion

Within Perez’s framework, new techno-economic paradigms both encourage and discourage innovation, through an inclusion-exclusion process. This means that as new techno-economic paradigms are being deployed, they provide opportunities for entrepreneurs to mobilize and new modes of business to create growth, and at the same time, they exclude alternative technologies because entrepreneurs and capital are following the newly proven path provided by the techno-economic paradigm. When an existing technology reaches maturity and investment opportunities diminish, capital and talent go in search of new technologies and techno-economic paradigms.

Technologies Combine

One new technology isn’t enough for a new techno-economic paradigm. The age of mass production was created by combining oil and the combustion engine. Railways required the steam engine. The information age required the microprocessor, the internet, and much more. Often, a technology will, as Perez says, “gestate” as a small improvement to the existing techno-economic paradigm, until complementary technologies are created and the exclusion process of the old paradigm ends. Technologies can exist in this gestation period for quite some time, until the technologies and opportunities are aligned for the installation period to begin.

Frenzies and Bubbles

In many ways, the bubbles created by the frenzy in the installation phase make it possible for the new technology to succeed. The bubble creates a burst of (over-)investment in the infrastructure of the new technology (railways, canals, fiber optic cables, etc.). This infrastructure makes it possible for the technology to successfully deploy after the bubble bursts. The bubbles also encourage a spate of experimentation with new business models and new approaches to the technologies, enabling future entrepreneurs to follow proven paths and avoid common pitfalls. While the bubble creates a lot of financial losses and economic pain, it can be crucial in the adoption of new technologies.

Connecting the Dots

A quick look at Perez’s framework would lead one to assume that 2018 was the blockchain frenzy and bubble, so we must be entering blockchain’s “turning point.” This would be a mistake.

My analysis of Perez’s framework suggests that blockchain is actually still in the gestation period, in the early days of a technology life cycle before the installation period. 2018 was not a Perez-style frenzy and bubble because it did not include key outcomes that are necessary to reach a turning point: significant infrastructure improvements and replicable business models that can serve as a roadmap during the deployment period. The bubble came early because blockchain technology enabled liquidity earlier in its life cycle.

There are three main implications of remaining in the gestation period. First, another blockchain-based frenzy and bubble is likely to come before the technology matures. In fact, multiple bubbles may be ahead of us. Second, the best path to success is to work through, rather than against, the existing technology paradigm. Third, the ecosystem needs to heavily invest in infrastructure for a new blockchain-based paradigm to emerge.

The ICO Bubble Doesn’t Match Up

2018 did show many of the signs of a Perez-style ‘frenzy period’ entering into a turning point. The best way (and ultimately the worst way) to make money was speculation. ‘Fundamentals’ of projects rarely mattered in their valuations or growth. Wealth was celebrated and individual prophets gained recognition. Expectations went through the roof. Scams and fraud were prevalent. Retail investors piled in for fear of missing out. The frenzy had all the tell-tale signs of a classic bubble.

Although there are no “good bubbles,” bubbles can have good side effects. During Canal Mania and Railway Mania, canals and railways were built that had little hope of ever being profitable. Investors lost money, but after the bubble, these canals and railways were still there. This new infrastructure made future endeavors cheaper and easier. After the internet bubble burst in 2001, fiber optic cables were selling for pennies on the dollar. Investors did terribly, but the fiber optics infrastructure created value for consumers and made it possible for the next generation of companies to be built. This over-investment in infrastructure is often necessary for the successful deployment of new technologies.

The ICO bubble, however, did not have the good side effects of a Perez-style bubble. It didn’t produce nearly enough infrastructure to help the blockchain ecosystem move forward.

Compared to previous bubbles, the cryptosphere’s investment in infrastructure was minimal and likely to be obsolete very soon. The physical infrastructure — in mining operations, for example — is unlikely to be useful. Additional mining power on a blockchain has significantly decreasing marginal returns and different characteristics to traditional infrastructure. Unlike a city getting a new fiber optic cable or a new canal, new people do not gain access to blockchain because of additional miners. Additionally, proof of work mining is unlikely to be the path blockchain takes moving forward.

The non-physical infrastructure was also minimal. The tools that can be best described as “core blockchain infrastructure” did not have easy access to the ICO market. Dev tools, wallets, software clients, user-friendly smart contract languages, and cloud services (to name a few) are the infrastructure that will drive blockchain technology toward maturity and full deployment. The cheap capital provided through ICOs primarily flowed to the application layer (even though the whole house has been built on an immature foundation). This created incentives for people to focus on what was easily fundable rather than most needed. These perverse incentives may have actually hurt the development of key infrastructure and splintered the ecosystem.

I don’t want to despair about the state of the ecosystem. Some good things came out of the ICO bubble. Talent has flooded the field. Startups have been experimenting with different use cases to see what sticks. New blockchains were launched incorporating a wide range of new technologies and approaches. New technologies have come to market. Many core infrastructure projects raised capital and made significant technical progress. Enterprises have created their blockchain strategies. Some very successful companies were born, which will continue to fund innovation in the space. The ecosystem as a whole continues to evolve at breakneck speed. As a whole, however, the bubble did not leave in its wake the infrastructure one would expect after a Perez-style bubble.

Liquidity Came Early

The 2018 ICO bubble happened early in blockchain technology’s life-cycle, during its gestation period, which is much earlier than Perez’s framework would predict. This is because the technology itself enabled liquidity earlier in the life-cycle. The financial assets became liquid before the underlying technology matured.

In the internet bubble, it took companies many years to go public, and as such there was some quality threshold and some reporting required. This process enabled the technology to iterate and improve before the liquidity arrived. Because blockchain enabled liquid tokens that were virtually free to issue, the rush was on to create valuable tokens rather than valuable companies or technologies. You could create a liquid asset without any work on the underlying technology. The financial layer jumped straight into a liquid state while the technology was left behind. The resulting tokens existed in very thin markets that were highly driven by momentum.

Because of the early liquidity, the dynamics of a bubble were able to start early for the space in relationship to the technology. After all, this was not the first blockchain bubble (bitcoin already has a rich history of bubbles and crashes). The thin markets in which these assets existed likely accelerated the dynamics of the bubble.

What the Blockchain Space Needs to Focus on Now

In the fallout of a bubble, Perez outlines two necessary components to successfully deploy new and lasting technologies: proven, replicable business models and easy-to-use infrastructure. Blockchain hasn’t hit these targets yet, and so it’s a pretty obvious conclusion that blockchain is not yet at a “turning point.”

While protocol development is happening at a rapid clip, blockchain is not yet ready for mass deployment into a new techno-economic paradigm. We don’t have the proven, replicable business models that can expand from industry to industry. Exchanges and mining companies, the main success stories of blockchain, are not replicable business models and do not cross industries. We don’t yet have the infrastructure for mass adoption. Moreover, the use cases that are gaining traction are mostly in support of the existing economic system. Komgo is using blockchain to improve an incredibly antiquated industry (trade finance), but it is still operating within the legacy economic paradigm.

Blockchain, therefore, is still in the “gestation period.” Before most technologies could enter the irruption phase and transform the economy, they were used to augment the existing economy. In blockchain, this looks like private and consortium chain solutions.

Some people in blockchain see this as a bad result. I see it as absolutely crucial. Without these experiments, blockchain risks fading out as a technological movement before it’s given the chance to mature and develop. In fact, one area where ConsenSys is not given the credit I believe it deserves is in bringing enterprises into the Ethereum blockchain space. This enterprise interest brings in more talent, lays the seeds for additional infrastructure, and adds credibility to the space. I am more excited by enterprise usage of blockchain today than any other short-term developments.

The Future of Blockchain Frenzy

This was not the first blockchain bubble. I don’t expect it to be the last (though hopefully some lessons will be learned from the last 12 months). Perez’s framework predicts that when the replicable business model is found in blockchain, another period of frenzied investment will occur, likely leading to a bubble. As Fred Wilson writes, “Carlota Perez [shows] ‘nothing important happens without crashes.’ ” Given the amount of capital available, I think this is a highly likely outcome. Given the massive potential of blockchain technology, the bubble is likely to involve more capital at risk than the 2018 one.

This next frenzy will have the same telltale signs of the previous one. Fundamentals will decrease in importance; retail investors will enter the market for fear of missing out; fraud will increase; and so on.

Lessons for Blockchain Businesses

Perez’s framework offers two direct strategic lessons for PegaSys and for any serious protocol development project in the blockchain space. First, we should continue to work with traditional enterprises. Working with enterprises will enable the technology to evolve and will power some experimentation of business models. This is a key component of the technology life-cycle and the best bet to help the ecosystem iterate.

Second, we must continue investing in infrastructure and diverse technologies for the ecosystem to succeed. This might sound obvious at first, but the point is that we will miss out on the new techno-economic paradigm if we only focus on the opportunities that are commercially viable today. Our efforts in Ethereum 1.x and 2.0 are directly born from our goal of helping the ecosystem mature and evolve. The work other groups in Ethereum and across blockchain are doing also drives towards this goal. We are deeply committed to the Ethereum roadmap and at the same time recognize the value that innovations outside Ethereum bring to the space. Ethereum’s roadmap has learned lessons from other blockchains, just as those chains have been inspired by Ethereum. This is how technologies evolve and improve.

Source : https://hackernoon.com/why-blockchain-differs-from-traditional-technology-life-cycles-95f0deabdf85

How The CIO Role Must Change Due To Digital Transformation – Peter Bendor-Samuel

Digital transformation is sweeping through businesses, giving rise to new business models, new and different constraints, and a need for more focused organizational attention and resources. It is also upending the C-suite, bringing in new corporate titles and functions such as the Chief Security Officer, Chief Digital Officer and Chief Data Officer. These new roles seemingly pose an existential threat to existing roles – for example, the CIO.

As companies invent new business models through digital transformation and bring new organizations into being, they do more than cover new ground. They also carve new roles out of existing organizations (the CIO organization, for instance). Other digital threats potentially affect the CIO role:

  • Recognition that digital transformation now makes technology THE business, rather than technology supporting the business; therefore, IT and CIO roles are much more vital to growth in sales.
  • Competing through new digital models and digital platforms, focusing on redefining the customer experience and employee experience to create and deliver new value.

At Everest Group, we investigated the question of “Will the role of the CIO go away?” As a result of that investigation, we come back strongly with “no.” In fact, here’s what is happening to the role of the CIO: the CIO charter is changing, and thus changing – but strengthening – the role.

Reasons For Changes In The CIO Charter

The focus of the CIO charter is increasingly changing – matching the new corporate charter for competitive repositioning. The prior focus was on the plumbing (infrastructure, ensuring applications are maintained and in compliance, etc.). Although those functions remain, the new charter focuses on building out and operating the new digital platforms and new digital operating models that are reshaping the competitive landscape.

The reason the CIO role is changing with the new corporate charter is that, in most organizations, the CIO is the only function that has these necessary capabilities for digital transformation:

  • Breadth of vision that sees the entire organization and all its workings
  • Depth of resources and ability to drive transformation projects and apply technology across silos, functions and divisions.

Digital transformation inevitably forces new operating models that have no respect for traditional functional organizations. Digital platforms and digital operating models collapse marketing and operations, for instance, spanning these functions and groups to achieve a superb end-to-end customer experience.

The new models force much tighter integration and often a realignment of organizations. The CIO organization has the breadth of vision and depth of resources to drive the transformation and support the new operating model that inevitably emerges from it.

How The CIO Role Must Change For The New Charter

Meeting the goals of the new charter for the CIO role will not come without CIOs changing their organizations and, in many cases, changing personally. Seizing the opportunities in the new charter, as well as shaping it, requires substantial change in (a) modernizing IT, (b) the orientation and mind-set of the IT organization, and (c) the organizational structure.

To support digital transformation agendas, CIOs face a set of journeys in which they need to dramatically modernize their traditional functions. They first must think about their relationship with the business. Meeting the needs of the business in a much more intimate, proactive and deeper way requires more investment and organizations with deeper industry domain knowledge and relationships. They need to move talent from remote centers back onshore, close to the business, so that they can understand its needs more deeply and act on them quickly.

Second, the IT operating model needs to change from its historical structures so that it can deliver a seamless operating environment. The waterfall structures that still permeate IT need to change into a DevOps model with persistent teams that don’t change, teams that sit close to the business. IT also needs to accelerate the company’s journey to automation and cloud.

One thing companies quickly find about operating models is that they can’t get to a well-functioning DevOps team without migrating to a cloud-based infrastructure. And they can’t get to a cloud-based infrastructure without transforming their network and network operations model.

To meet the new charter, the CIO organization also needs to change in the following aspects:

  • Change its mind-set
  • Ensure deeper business knowledge
  • Increase agility and speed

The modernizations I mentioned above then call into question the historical organizational structure of IT with functions such as network, infrastructure, security, apps development, apps maintenance, etc. In the new digital charter, these functions inevitably start to collapse into pods or functions aligned by business services.

As I’ve described above, substantial technology and organizational change is required within the CIO’s organization to live up to the new mandate. I can’t overemphasize the fact that the change is substantial, nor can I overemphasize the need for it. In upcoming blog posts, I’ll further discuss the CIO’s role in reorienting the charter from plumbing to transformation and supporting the new digital operating models.

Source : https://www.forbes.com/sites/peterbendorsamuel/2019/01/30/how-the-cio-role-must-change-due-to-digital-transformation/#24f9952f68be

API Metrics and Status – A Regulatory Requirement or a Strategic Concern? – John Heaton-Armstrong

TL;DR – those discussing what should be appropriate regulatory benchmarks for API performance and availability under PSD2 are missing a strategic opportunity. Any bank that simply focusses on minimum, mandatory product will rule itself out of commercial agreements with those relying parties who have the wherewithal to consume commercial APIs at scale.

Introduction

As March approaches, those financial institutions in the UK and Ireland impacted by PSD2 are focussed on readiness for full implementation. The Open Banking Implementation Entity (OBIE) has been consulting on Operational Guidelines which give colour to the regulatory requirements found in the Directive and the Regulatory Technical Standards which support it. The areas covered are not unique to the UK, and whilst they are part of an OBIE-specific attestation process, the guidelines could prove useful to any ASPSP impacted by PSD2.

Regulatory Requirements

The EBA, at guidelines 2.2-4, is clear on the obligations for ASPSPs. These are supplemented by the RTS – “[ASPSPs must] ensure that the dedicated interface offers at all times the same level of availability and performance, including support, as the interfaces made available to the payment service user for directly accessing its payment account online…” and “…define transparent key performance indicators and service level targets, at least as stringent as those set for the interface used by their payment service users both in terms of availability and of data provided in accordance with Article 36” (RTS Arts. 32(1) and (2)).

This places the market in a quandary – it is extremely difficult to compare, even at a theoretical level, the performance of two interfaces where one (PSU) is designed for human interaction and the other (API) for machine. Some suggested during the EBA’s consultation period that a more appropriate comparison might be between the APIs which support the PSU interface and those delivered in response to PSD2. Those in the game of reverse engineering confirm that there is broad comparability between the functions these support – unfortunately this proved too much technical detail for the EBA.

To fill the gap, OB surveyed developers, reviewed those existing APIs already delivered by financial institutions, and settled on an average of 99% availability (c.22hrs downtime per quarter) and a response time of 1,000ms per 1MB of payload (this is a short summary and more detail can be read on the same). A quick review of the API Performance page OB publish will show that, with average availability of 96.34% across the brands in November, and only Bank of Scotland, Lloyds and the HSBC brands achieving >99% availability, there is a long way to go before this target is met, made no easier by a significant amount of change to platforms as their functional scope expands over the next 6-8 months. This will also be in the face of increasing demand volumes, as those organisations which currently rely on screen scraping for access to data begin to transfer their integrations onto APIs. In short, ASPSPs are facing a perfect storm to achieve these goals.
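The arithmetic behind those headline figures is worth sanity-checking. Assuming a 91-day quarter, the short Python calculation below reproduces the c.22 hours of permitted downtime at 99% availability and shows what November’s 96.34% average actually implies.

    HOURS_PER_QUARTER = 24 * 91   # assuming a 91-day quarter

    def downtime_hours(availability_pct: float) -> float:
        """Hours of permitted/implied downtime per quarter for a given availability."""
        return HOURS_PER_QUARTER * (1 - availability_pct / 100)

    print(f"99.00% availability -> {downtime_hours(99.00):.1f} hours of downtime per quarter")  # ~21.8
    print(f"96.34% availability -> {downtime_hours(96.34):.1f} hours of downtime per quarter")  # ~79.9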

Knowledge and Reporting

At para 2.3.1 of their guidelines, the OBIE expands on the EBA’s reporting guidelines, and provides a useful template for this purpose, but this introduces a conundrum. All of the data published to date has been the banks reporting on themselves – i.e. the technical solutions to generate this data sit inside their domains, so quite apart from the obvious issue of self-reporting, there have already been clear instances where services haven’t been functioning correctly, and the bank in question simply hasn’t known this to be the case until so informed by a TPP. One of the larger banks in the UK recently misconfigured a load balancer to the effect that 50% of the traffic it received was misdirected and received no response, but without its knowledge. A clear case of downtime that almost certainly went unreported – if an API call goes unacknowledged in the woods, does anyone care?

Banks have a challenge, in that risk and compliance departments typically baulk at any services they own being placed in the cloud, or indeed anywhere outside their physical infrastructure. Yet monitoring from outside their own estate is exactly what is required for their support teams to have a true understanding of how their platforms are functioning, and to generate reliable data for their regulatory reporting requirements.
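A minimal sketch of what such outside-in monitoring could look like is below, in Python using the third-party requests library. The endpoint URL is a placeholder, and a real PSD2 API call would additionally need eIDAS certificates and a consented access token; the point is simply that availability and latency are sampled from a vantage point outside the bank’s own infrastructure.

    import time
    import requests  # third-party HTTP client: pip install requests

    ENDPOINT = "https://api.examplebank.com/open-banking/v3.1/aisp/accounts"  # placeholder URL

    def probe(url: str, timeout: float = 5.0) -> dict:
        """Sample availability and latency for one endpoint from an external vantage point."""
        start = time.monotonic()
        try:
            response = requests.get(url, timeout=timeout)
            available = response.status_code < 500
        except requests.RequestException:
            available = False
        return {"available": available, "latency_ms": round((time.monotonic() - start) * 1000, 1)}

    print(probe(ENDPOINT))  # aggregate samples like this over time to report availability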

[During week commencing 21st Jan, the Market Data Initiative will announce a free/open service to solve some of these issues. This platform monitors the performance and availability of API platforms using donated consents, with the aim of establishing a clear, independent view of how the market is performing, without prejudicial comment or reference to benchmarks. Watch this space for more on that.]

Regulatory or strategic concern?

For any TPP seeking investment, where their business model necessitates consuming open APIs at scale, one of the key questions they’re likely to face is how reliable these services are, and what remedies are available in the event of non-performance. In the regulatory space, some of this information is available (see above) but is hardly transparent or independently produced, and even with those caveats it does not currently make for happy reading. For remedy, TPPs are reliant on regulators and a quarterly reporting cycle for the discovery of issues. Even in the event that the FCA decided to take action, the most significant step it could take would be to instruct an ASPSP to implement a fall-back interface, and given that the ASPSP would have a period of weeks to build this, it is likely that any relying party’s business would have suffered significant detriment before it could even start testing such a facility. The consequence of this framework is that, for the open APIs, performance, availability and the transparency of information will have to improve dramatically before any commercial services rely on them.

Source : https://www.linkedin.com/pulse/api-metrics-status-regulatory-requirement-strategic-john?trk=portfolio_article-card_title

7 Big Lessons We Learned on How to Sell a Patent – Sammy Abdullah

In 2017, we had a death in the portfolio. Once all the employees left, the only remaining assets were some patents, servers, domains, and a lot of code. Recently, we managed to sell the patents and code. Here is what we learned about how to sell a patent:

How to sell a patent in 7 steps

1. Set expectations when selling patents

The value of IP is a small fraction of what the company was once valued at; it’s maybe 1 to 5 cents on the dollar. Any acquirer of the IP is unlikely to do an all-cash deal, so don’t be surprised if the final consideration is a blend of cash, stock, royalty, earn out, or some other creative structure that reduces the acquirer’s upfront risk.

Selling a patent is going to take a year or more with legal taking 6 to 9 months alone (we recommend specialized counsel that has M&A experience and experience in bankruptcy/winding down entities).

It’s also going to take some cash along the way as you foot the bill for legal, preparing the code, and other unforeseen expenses that have to be paid well ahead of the close. With those expectations in mind, you need to seriously consider whether it is worth the work to sell the IP, what you will really recover, and what the probability of success really is.

2. Reach out to everyone

If you’ve decided it’s worth it to try and recover something for the IP, reach out to absolutely everyone you know. That includes old customers, prospects, former customers, anyone who has ever solicited you for acquisition, your cousin, your aunt, etc.

The point is: don’t eliminate anyone as a potential acquirer, since you don’t know what’s on someone’s product roadmap, and be shameless about reaching out to your entire network. The acquirer of the IP in our dead company was a prospect who never actually became a customer. We also had interest from very random firms that weren’t remotely adjacent to our space.

3. You need the CTO

In order to transfer code to an acquirer, you’re going to need the CTO or whoever built a majority of the code to assist. No acquirer is going to take the code as-is unless you want them to massively discount the price to hedge their risk.

They’re going to want it cleaned up and packaged specifically to their needs. In our case, it took a founding developer 3 months of hard work to get the code packaged just right for our acquirer, and of course, we paid him handsomely for successful delivery.

4. You need great counsel

The code was once part of a company, and that company has liabilities, creditors, equity owners, former employees, and various other obligations. All of those parties are probably pretty upset with you that things didn’t work out. Before you embark on a path to sell the IP, consult with an attorney that can tell you who has a right to any proceeds collected, what the waterfall of recipients looks like, who can potentially block a deal, who you need to get approval from, whether patents are in good standing, etc.

You’ll need to pay the attorney up front for his work and as you progress through the deal, so it takes money to make money from selling IP.

5. Utilize GitHub

Put the code on GitHub. Have potential acquirers sign a very tight and punitive NDA before allowing them to see the code. It also may be advisable to only give acquirers access to portions of the code. GitHub is the best $7 a month you’ll ever spend when it comes to selling IP.

6. Get all the assets

Make sure you have access to all the assets. This includes all code, training modules, patents, domains, actual servers and hardware, trademarks, logos, etc. An acquirer is going to want absolutely everything even if there are some things he can’t necessarily use.

7. Make sure the acquirer is fair

The acquirer has to be someone that is negotiating fairly and in good faith with you. We got very lucky that our acquirer had an upstanding and reputable CEO. If you don’t trust the acquirer or if they’re being shifty, move on. In our case, had the acquirer been a bad guy, there were many times when he could have screwed us such as changing the terms of the deal before the close, among other things.

Given the limited recourse you often have in situations like this, ‘bad boy’ acquirers do it all the time. We got lucky finding an acquirer who was honest, forthright and kept his word. You’ll need to do the same.

Takeaways on how to sell a patent

Selling patents is incredibly challenging. In our case the recovery was very small relative to capital invested, the process took nearly a year, and there were a lot of people involved to make it happen. We also spent tens of thousands of dollars on legal fees, data scientist consulting, patent reinstatement and recovery, shipping of servers, etc.

A lot of that expenditure came along the way, so we had to put more money at risk for the possibility of recovering some cash from the sale of the IP. Learning how to sell a patent wasn’t easy, but it got done. Hopefully, we never have to do it again, and neither do you.

Source: https://about.crunchbase.com/blog/how-sell-patent/

Digital Transformation of Business and Society: Challenges and Opportunities by 2020 – Frank Diana

At a recent KPMG Robotic Innovations event, Futurist and friend Gerd Leonhard delivered a keynote titled “The Digital Transformation of Business and Society: Challenges and Opportunities by 2020”. I highly recommend viewing the Video of his presentation. As Gerd describes, he is a Futurist focused on foresight and observations — not predicting the future. We are at a point in history where every company needs a Gerd Leonhard. For many of the reasons presented in the video, future thinking is rapidly growing in importance. As Gerd so rightly points out, we are still vastly under-estimating the sheer velocity of change.

With regard to future thinking, Gerd used my future scenario slide to describe both the exponential and combinatorial nature of future scenarios — not only do we need to think exponentially, but we also need to think in a combinatorial manner. Gerd mentioned Tesla as a company that really knows how to do this.

Our Emerging Future

He then described our current pivot point of exponential change: a point in history where humanity will change more in the next twenty years than in the previous 300. With that as a backdrop, he encouraged the audience to look five years into the future and spend 3 to 5% of their time focused on foresight. He quoted Peter Drucker (“In times of change the greatest danger is to act with yesterday’s logic”) and stated that leaders must shift from a focus on what is, to a focus on what could be. Gerd added that “wait and see” means “wait and die” (love that by the way). He urged leaders to focus on 2020 and build a plan to participate in that future, emphasizing the question is no longer what-if, but what-when. We are entering an era where the impossible is doable, and the headline for that era is: exponential, convergent, combinatorial, and inter-dependent — words that should be a key part of the leadership lexicon going forward. Here are some snapshots from his presentation:

  • Because of exponential progression, it is difficult to imagine the world in 5 years, and although the industrial era was impactful, it will not compare to what lies ahead. The danger of vastly under-estimating the sheer velocity of change is real. For example, in just three months, the projection for the number of autonomous vehicles sold in 2035 went from 100 million to 1.5 billion
  • Six years ago Gerd advised a German auto company about the driverless car and the implications of a sharing economy — and they laughed. Think of what’s happened in just six years — can’t imagine anyone is laughing now. He witnessed something similar as a veteran of the music business where he tried to guide the industry through digital disruption; an industry that shifted from selling $20 CDs to making a fraction of a penny per play. Gerd’s experience in the music business is a lesson we should learn from: you can’t stop people who see value from extracting that value. Protectionist behavior did not work, as the industry lost 71% of their revenue in 12 years. Streaming music will be huge, but the winners are not traditional players. The winners are Spotify, Apple, Facebook, Google, etc. This scenario likely plays out across every industry, as new businesses are emerging, but traditional companies are not running them. Gerd stressed that we can’t let this happen across these other industries
  • Anything that can be automated will be automated: truck drivers and pilots go away, as robots don’t need unions. There is just too much to be gained not to automate. For example, 60% of the cost in the system could be eliminated by interconnecting logistics, possibly realized via a Logistics Internet as described by economist Jeremy Rifkin. But the drive towards automation will have unintended consequences and some science fiction scenarios could play out. Humanity and technology are indeed intertwining, but technology does not have ethics. A self-driving car would need ethics, as we make difficult decisions while driving all the time. How does a car decide to hit a frog versus swerving and hitting a mother and her child? Speaking of science fiction scenarios, Gerd predicts that when these things come together, humans and machines will have converged:
  • Gerd has been using the term “Hellven” to represent the two paths technology can take. Is it 90% heaven and 10% hell (unintended consequences), or can this equation flip? He asks the question: Where are we trying to go with this? He used the real example of Drones used to benefit society (heaven), but people buying guns to shoot them down (hell). As we pursue exponential technologies, we must do it in a way that avoids negative consequences. Will we allow humanity to move down a path where by 2030, we will all be human-machine hybrids? Will hacking drive chaos, as hackers gain control of a vehicle? A recent Jeep recall of 1.4 million jeeps underscores the possibility. A world of super intelligence requires super humanity — technology does not have ethics, but society depends on it. Is this Ray Kurzweil vision what we want?
  • Is society truly ready for human-machine hybrids, or even advancements like the driverless car that may be closer to realization? Gerd used a very effective Video to make the point
  • Followers of my Blog know I’m a big believer in the coming shift to value ecosystems. Gerd described this as a move away from Egosystems, where large companies are running large things, to interdependent Ecosystems. I’ve talked about the blurring of industry boundaries and the movement towards ecosystems. We may ultimately move away from the industry construct and end up with a handful of ecosystems like: mobility, shelter, resources, wellness, growth, money, maker, and comfort
  • Our kids will live to 90 or 100 as the default. We are gaining 8 hours of longevity per day — one third of a year per year. Genetic engineering is likely to eradicate disease, impacting longevity and global population. DNA editing is becoming a real possibility in the next 10 years, and at least 50 Silicon Valley companies are focused on ending aging and eliminating death. One such company is Human Longevity Inc., which was co-founded by Peter Diamandis of Singularity University. Gerd used a quote from Peter to help the audience understand the motivation: “Today there are six to seven trillion dollars a year spent on healthcare, half of which goes to people over the age of 65. In addition, people over the age of 65 hold something on the order of $60 trillion in wealth. And the question is what would people pay for an extra 10, 20, 30, 40 years of healthy life. It’s a huge opportunity”
  • Gerd described the growing need to focus on the right side of our brain. He believes that algorithms can only go so far. Our right brain characteristics cannot be replicated by an algorithm, making a human-algorithm combination — or humarithm as Gerd calls it — a better path. The right brain characteristics that grow in importance and drive future hiring profiles are:
  • Google is on the way to becoming the global operating system — an Artificial Intelligence enterprise. In the future, you won’t search, because as a digital assistant, Google will already know what you want. Gerd quotes Ray Kurzweil in saying that by 2027, the capacity of one computer will equal that of the human brain — at which point we shift from an artificial narrow intelligence, to an artificial general intelligence. In thinking about AI, Gerd flips the paradigm to IA or intelligent Assistant. For example, Schwab already has an intelligent portfolio. He indicated that every bank is investing in intelligent portfolios that deal with simple investments that robots can handle. This leads to a 50% replacement of financial advisors by robots and AI
  • This intelligent assistant race has just begun, as Siri, Google Now, Facebook MoneyPenny, and Amazon Echo vie for intelligent assistant positioning. Intelligent assistants could eliminate the need for actual assistants in five years, and creep into countless scenarios over time. Police departments are already capable of determining who is likely to commit a crime in the next month, and there are examples of police taking preventative measures. Augmentation adds another dimension, as an officer wearing glasses can identify you upon seeing you and have your records displayed in front of them. There are over 100 companies focused on augmentation, and a number of intelligent assistant examples surrounding IBM Watson; the most discussed being the effectiveness of doctor assistance. An intelligent assistant is the likely first role in the autonomous vehicle transition, as cars step in to provide a number of valuable services without completely taking over. There are countless Examples emerging
  • Gerd took two polls during his keynote. Here is the first: how do you feel about the rise of intelligent digital assistants? Answers 1 and 2 below received the lion’s share of the votes
  • Collectively, automation, robotics, intelligent assistants, and artificial intelligence will reframe business, commerce, culture, and society. This is perhaps the key take away from a discussion like this. We are at an inflection point where reframing begins to drive real structural change. How many leaders are ready for true structural change?
  • Gerd likes to refer to the 7-ations: Digitization, De-Materialization, Automation, Virtualization, Optimization, Augmentation, and Robotization. Consequences of the exponential and combinatorial growth of these seven include dependency, job displacement, and abundance. Whereas these seven are tools for dramatic cost reduction, they also lead to abundance. Examples are everywhere, from the 16 million songs available through Spotify, to the 3D printed cars that require only 50 parts. As supply exceeds demand in category after category, we reach abundance. As Gerd put it, in five years’ time, genome sequencing will be cheaper than flushing the toilet and abundant energy will be available by 2035 (2015 will be the first year that a major oil company will leave the oil business to enter the abundance of the renewable business). Other things to consider regarding abundance:
  • Efficiency and business improvement is a path not a destination. Gerd estimates that total efficiency will be reached in 5 to 10 years, creating value through productivity gains along the way. However, after total efficiency is met, value comes from purpose. Purpose-driven companies have an aspirational purpose that aims to transform the planet; referred to as a massive transformative purpose in a recent book on exponential organizations. When you consider the value that the millennial generation places on purpose, it is clear that successful organizations must excel at both technology and humanity. If we allow technology to trump humanity, business would have no purpose
  • In the first phase, the value lies in the automation itself (productivity, cost savings). In the second phase, the value lies in those things that cannot be automated. Anything that is human about your company cannot be automated: purpose, design, and brand become more powerful. Companies must invent new things that are only possible because of automation
  • Technological unemployment is real this time — and exponential. Gerd talked to a recent study by the Economist that describes how robotics and artificial intelligence will increasingly be used in place of humans to perform repetitive tasks. On the other side of the spectrum is a demand for better customer service and greater skills in innovation driven by globalization and falling barriers to market entry. Therefore, creativity and social intelligence will become crucial differentiators for many businesses; jobs will increasingly demand skills in creative problem-solving and constructive interaction with others
  • Gerd described a basic income guarantee that may be necessary if some of these unemployment scenarios play out. Something like this is already on the ballot in Switzerland, and it is not the first time this has been talked about:
  • In the world of automation, experience becomes extremely valuable — and you can’t, nor should you attempt to, automate experiences. We clearly see an intense focus on customer experience, and we had a great discussion on the topic on an August 26th Game Changers broadcast. Innovation is critical to both the service economy and experience economy. Gerd used a visual to describe the progression of economic value:
Source: B. Joseph Pine II and James Gilmore: The Experience Economy
  • Gerd used a second poll to sense how people would feel about humans becoming artificially intelligent. Here again, the audience leaned towards the first two possible answers.

Gerd then summarized the session as follows:

The future is exponential, combinatorial, and interdependent: the sooner we can adjust our thinking (laterally), the better we will be at designing our future.

My take: Gerd hits on a key point. Leaders must think differently. There is very little in a leader's collective experience that can guide them through the type of change ahead; it requires us all to think differently.

When looking at AI, consider trying IA first (intelligent assistance / augmentation).

My take: These considerations allow us to create the future in a way that avoids unintended consequences. Technology as a supplement, not a replacement.

Efficiency and cost reduction based on automation, AI/IA and Robotization are good stories but not the final destination: we need to go beyond the 7-ations and inevitable abundance to create new value that cannot be easily automated.

My take: Future thinking is critical for us to be effective here. We have to have a sense of where all of this is heading if we are to effectively create new sources of value.

We won't just need better algorithms; we will also need stronger humarithms, i.e. values, ethics, standards, principles, and social contracts.

My take: Gerd is an evangelist for creating our future in a way that avoids hellish outcomes, and kudos to him for being that voice.

“The best way to predict the future is to create it” (Alan Kay).

My take: our default context when we think about the future places it years away, and that is just not the case anymore. What we think will take ten years is likely to happen in two. We can't create the future if we don't focus on it through an exponential lens.

Source : https://medium.com/@frankdiana/digital-transformation-of-business-and-society-5d9286e39dbf

Corporate venture building dilemma: investment vs. control – Carlos Borges

Having founded my startup a few years ago, I am familiar with why founders go through the pain & grit of building their own company. The statistics around startup survival rates show that the risk is high, but the potential reward, both financially & emotionally, is also significant.

In my case, risk was defined by the amount of money I invested in the venture plus the opportunity cost in case the startup went nowhere. The latter relates to the fact that I earned no salary at the beginning & that, when I committed to that specific idea, I was instantly saying "no" to many other opportunities and potential career advancements. The reward was two-fold too: the first was the attractive financial outcome of a potential exit; the second was the freedom to chase opportunities as they appeared, doing what I wanted, how I wanted it.

Once I raised capital from investors, I basically traded reward for reduced risk. I started paying myself a small salary and anticipated that more resources would increase the startup's likelihood of success.

This pattern of weighing risk against reward was crystal clear in my mind… until I joined the arena of corporate venture building. During one of my first projects, I was tasked with the creation of a startup for a blue-chip corporate client. I was immediately puzzled by the reasoning behind this endeavor.

Ultimately, corporate decisions are also guided by risk against reward: if corporates don't take risks and innovate, they might be left behind and, in some cases, join the once-great-now-extinct corporate hall of shame. That's why they invest in research and development, spend hard-earned cash on mergers and acquisitions, and start innovation programs. But my interest was at a more micro level: what reasoning does my corporate client follow to decide if and how to found a specific new venture?

Having thought about it a lot, I believe that at the micro level corporates weigh investment against control. Investment is the level of capital, manpower & political will provided by the corporate to propel the venture towards exit, break-even or strategic relevance. Control is the ability to steer the venture towards the strategic goals the leadership team has in mind while defining the boundaries of what can & cannot be done.

In the startup case, the risk/reward is typically shared between the founders and external investors. In a corporate venture building case, the investment/control can be shared between the corporate, an empowered founder team and external investors.

I am still in the middle of the corporate decision-making process but wanted to share with you the scenarios we are using to guide the discussions on how to structure the new venture. But before I do, I would like to mention that the consideration of investment vs. control takes place at three different stages of the venture's existence:

• Incubation: develop & validate idea
• Acceleration: validate business model incl. product, operations & customer acquisition (find the winning formula)
• Growth: replicate the formula to grow exponentially

Based on that, three main scenarios are being considered to found the new venture.

Scenario 1: Control & Grow

  • Full investment & control during incubation & acceleration
  • Shared investment & control during the growth stage

By definition, the incubation and acceleration stages are less capital-intensive and are when the key strategic decisions that shape the future business are made. In these stages, the corporate is interested in maintaining full control of the venture while absorbing the full investment. Only when the venture enters the capital-intensive growth stage does it become necessary to "share the burden" with other institutional or strategic investors. This scenario is suitable for ventures of high strategic value, especially ones that leverage core assets and know-how of the corporate mothership.

Scenario 2: Spread the Bets

  • Lower investment & control during all stages

In this case, the corporate initiator empowers a founder team and joins the project almost as an external investor would at the Seed and Series A stages of a startup. They agree on a broad vision, provide the funding and retain a part of the shares, with shareholder meetings in between to track progress. Beyond that, they let the founder team do their thing. External investors can join at any funding round to share the investment tickets. The corporate has lower control and investment from the get-go and can increase its influence only when new funding rounds are required or via an acquisition offer. This scenario is suitable for ventures in which the corporate can function as the first client or use its network to manufacture, market or distribute the product or service.

Scenario 3: Build, Operate & Transfer

  • Lower investment & control during incubation & acceleration
  • Full investment & control during the growth stage

The venture is initially built by a founder team or external partners (often a consultancy). Only once they have successfully completed the incubation and acceleration stages does the corporate have the right or obligation to absorb the business. Unlike scenario 2, the corporate gains stronger control over the trajectory of the business during its initial stages by defining what a "transfer" event looks like. The investment necessary to put together a strong founder team is reduced by the reward of a pre-defined & short-term exit event. The initial investment can be further reduced by the participation of business angels, who are also motivated by a clear path to exit and access to a new source of deal flow. This scenario is suitable for ventures closely linked to the core business of the corporate and where speed & excellence of execution are key.

There is obviously no right or wrong. Each scenario can make sense depending on the end goal of the corporate. Furthermore, there are surely new scenarios and variations of the above. What is important, in my opinion, is to openly discuss which road to take. If the client can't discern the alternatives and consequences, you risk a "best of both worlds" mindset where expectations regarding investment & control don't match. If that is the case, you will be in for a tough ride.

Source : https://medium.com/@cbgf/a-corporate-venture-building-dilemma-investment-vs-control-a703b9c19c94
