Don’t get locked up into avoiding lock-in – martinfowler.com

A significant share of architectural energy is spent on reducing or avoiding lock-in. That’s a rather noble objective: architecture is meant to give us options, and lock-in does the opposite. However, lock-in isn’t a simple true-or-false matter: avoiding being locked into one aspect often locks you into another. Also, popular notions, such as open source automagically eliminating lock-in, turn out not to be entirely true. Time to have a closer look at lock-in, so you don’t get locked up into avoiding it!

One of an architect’s major objectives is to create options. Those options make systems change-tolerant, so we can defer decisions until more information becomes available or react to unforeseen events. Lock-in does the opposite: it makes switching from one solution to another difficult. Many architects may therefore consider it their archenemy while they view themselves as the guardians of the free world of IT systems where components are replaced and interconnected at will.

Lock-in – an architect’s archenemy?

But architecture is rarely that simple – it’s a business of trade-offs. Experienced architects know that there’s more behind lock-in than proclaiming that it must be avoided. Lock-in has many facets and can even be the favored solution. So, let’s get in the Architect Elevator to have a closer look at lock-in.

Open-source-hybrid-multi-cloud == lock-in free?

The platforms we are deploying software on these days are becoming ever more powerful – modern cloud platforms not only tell us whether our photo shows a puppy or a muffin, they also compile our code, deploy it, configure the necessary infrastructure, and store our data.

This great convenience and productivity booster also brings a whole new form of lock-in. Hybrid/multi-cloud setups, which seem to attract many architects’ attention these days, are a good example of the kind of things you’ll have to think about when dealing with lock-in. Let’s say you have an application that you’d like to deploy to the cloud. Easy enough to do, but from an architect’s point of view, there are many choices and even more trade-offs, especially related to lock-in.

You might want to deploy your application in containers. That sounds good, but should you use AWS’ Elastic Container Service (ECS) to run them? After all, it’s proprietary to Amazon’s cloud. Prefer Kubernetes? It’s open source and runs on most environments, including on premises. Problem solved? Not quite – now you are tied to Kubernetes – think of all those precious YAML files! So you traded one lock-in for another, didn’t you? And if you use a managed Kubernetes service such as Google’s GKE or Amazon’s EKS, you may also be tied to a specific version of Kubernetes and proprietary extensions.

If you need your software to run on premises, you could also opt for AWS Outposts, so you do have some options. But that again is proprietary. It integrates with VMware, which you are likely already locked into, so does it really make a difference? Google’s equivalent, the freshly minted Anthos, is built from open-source components but is nevertheless a proprietary offering: you can move applications to different clouds – as long as you keep using Anthos. Now that’s the very definition of lock-in, isn’t it?

Alternatively, if you neatly separate your deployment automation from your application run-time, doesn’t that make it fairly easy to switch infrastructure, reducing the effect of all that lock-in? Hey, there are even cross-platform infrastructure-as-code tools. Aren’t those supposed to make these concerns go away altogether?
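
To make this concrete, here’s a minimal sketch of such a tool in action, using Pulumi’s Python SDK (the bucket name is a made-up example). Note that while the tool itself runs against many clouds, the resources you declare remain provider-specific, so the lock-in largely just moves:

```python
# Cross-platform IaC tool, but the declared resources remain AWS-specific.
import pulumi
from pulumi_aws import s3

# An S3 bucket: targeting another cloud means rewriting this resource
# against that provider's own package (e.g., pulumi_gcp), not flipping
# a switch.
bucket = s3.Bucket("app-artifacts")

# Export the bucket name so other stacks or tooling can reference it.
pulumi.export("bucket_name", bucket.id)
```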

For your storage needs, how about AWS S3? Other cloud providers offer S3-compatible APIs, so can S3 be considered multi-cloud compatible and lock-in free, even though it’s proprietary? You could also wrap all your data access behind an abstraction layer and thus localize any dependency. Is that a good idea?
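
Here’s a sketch of what such an abstraction layer could look like in Python (the class names are illustrative, not a prescribed design): the application codes against a neutral interface, and only one class knows about S3 and boto3:

```python
from abc import ABC, abstractmethod

import boto3  # AWS SDK, confined to a single class below


class BlobStore(ABC):
    """Storage interface the rest of the application codes against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class S3BlobStore(BlobStore):
    """S3-backed implementation: the only place S3 specifics live."""

    def __init__(self, bucket: str) -> None:
        self._bucket = bucket
        self._client = boto3.client("s3")

    def put(self, key: str, data: bytes) -> None:
        self._client.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        obj = self._client.get_object(Bucket=self._bucket, Key=key)
        return obj["Body"].read()
```

Switching providers then means writing one new subclass. The flip side: the layer itself is an extra moving part you now own, a cost we’ll return to below.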

It looks like avoiding lock-in isn’t quite so easy and might even get you locked up into trying to escape from it. To highlight that cloud architecture is fun nevertheless, I defer to Simon Wardley’s take on hybrid cloud.

Shades of lock-in

Lock-in isn’t an all-or-nothing affair.

Elevator Architects (those who ride the Architect Elevator up and down) see shades of gray where many only see black and white. When thinking about system design, they realize that common attributes like lock-in or coupling aren’t binary: two systems aren’t simply coupled or decoupled, just as you aren’t simply locked into a product or not. Both properties have many nuances. For example, lock-in breaks down into numerous dimensions:

  • Vendor Lock-in: This is the kind that IT folks generally mean when they mention “lock-in”. It describes the difficulty of switching from one vendor to a competitor. For example, if migrating from Siebel CRM to SalesForce CRM or from an IBM DB2 database to an Oracle one will cost you an arm and a leg, you are “locked in”. This type of lock-in is common as vendors generally (more or less visibly) benefit from it. This lock-in includes commercial arrangements, such as long-term licensing and support agreements that earned you a discount off the license fees back then.
  • Product Lock-in: Related, but different, is being locked into a product. When migrating from one vendor’s product to another vendor’s, you are usually changing both vendor and product, so the two are easily conflated. Open source products may avoid vendor lock-in, but they don’t remove product lock-in: if you are using Kubernetes or Cassandra, you are certainly locked into a specific product’s APIs, configurations, and features. If you work in a professional (and especially enterprise) environment, you will also need commercial support, which will again lock you into a vendor contract – see above. Heavy customization, integration points, and proprietary extensions are further forms of product lock-in: they make it difficult to switch to another product, even if it’s open source.
  • Version lock-in: Besides being locked into a product, you may even be locked into a specific version. Version upgrades can be costly if they break existing customizations and extensions you have built (SAP, anyone?). Other version upgrades essentially require you to rewrite your application – AngularJS vs. Angular 2 comes to mind. To make matters worse, version lock-in propagates: a certain product version may require a certain (often outdated) operating system version and so on, which turns any migration attempt into a Yak-shaving exercise. You feel this lock-in particularly badly when a vendor decides to deprecate your version or discontinues the whole product line: you have to choose between being out of support or doing a major overhaul. And things can get even worse, for example, if a major security vulnerability is found in your old version and patches aren’t provided.
  • Architecture lock-in: You may also be locked into a specific kind of architecture. For example, when you use Kubernetes extensively, you are likely building small-ish services that expose APIs and can be deployed as containers. If you want to migrate to a serverless architecture, you’ll want to change the granularity of your services closer to single functions, externalize state management, utilize an event-driven architecture, and probably change a few more things. Such changes aren’t minor, but imply a major overhaul of your application architecture.
  • Platform lock-in: A special flavor of product lock-in is being locked into a platform, especially cloud platforms. Such platforms not only run your applications, but they may also hold your user accounts and associated access rights, security policies, infrastructure segmentation, and many other aspects. They also provide application-level services such as storage or machine learning services, which are generally proprietary. Staying away from these services might seem like a way to reduce platform lock-in, but it’d negate one of the major motivations for moving to the cloud in the first place. Non-software people call this finding yourself between a rock and a hard place.
  • Skills lock-in: As your developers become familiar with a certain type of product or architecture, you’ll have skills lock-in: it’ll take you time to re-train (or hire) developers for a different product or technology. As skills availability is one of the major constraints in today’s IT shops, this type of lock-in is very real. Some niche enterprise products have a particularly limited supply of developers, causing your cost for developers to go up. This effect is particularly visible for products that employ custom languages or, somewhat ironically, for “config only” / no-code frameworks.
  • Legal lock-in: You may be locked into a specific solution for legal reasons, such as compliance. For example, you might not be able to migrate your data to another cloud provider’s data center if it’s located outside your country. Your software provider’s license may also not allow you to move your systems to the cloud even though they’d run perfectly fine there. If you decide to do it anyway, you’ll be in violation of the licensing terms. Legal aspects permeate more facets of engineering than we’d commonly assume: your small-engine aircraft is likely to be powered by an engine that was designed back in the 1970s and burns heavily leaded fuel, because new engine designs face high legal liabilities.
  • Mental Lock-in: The most subtle, but also the most dangerous type of lock-in is the one that affects your thinking. After working with a certain set of vendors and architectures, you are likely to absorb assumptions into your decision making, which may lead you to reject alternative options. For example, you may reject scale-out architectures as inefficient because they don’t scale linearly (you don’t get twice the performance when doubling the hardware). While technically accurate, this way of thinking ignores the fact that scalability, not efficiency, is the main driver. Or you may resent short release cycles as you have observed frequent changes leading to more defects. And surely you’ve been told that coding is expensive, time-consuming, and error-prone, so you’d be better off doing everything via configuration.

Open source software isn’t a magic cure for lock-in.

In summary, lock-in is far from an all-or-nothing affair, so understanding the different flavors can help you make more conscious architecture decisions. The list also debunks common myths, such as the idea that using open source software magically eliminates lock-in. Open source can reduce vendor lock-in, but most of the other types of lock-in remain. This doesn’t mean open source is bad, but it isn’t a magic cure for lock-in.

Making better decisions using models

Experienced architects not only see more shades of gray, they also practice good decision discipline. That’s important because we are much worse decision makers than we commonly like to believe – a quick read of Kahneman’s Thinking, Fast and Slow is in order if you have any doubt.

One of the most effective ways to improve your decision making is to use models. Even, or especially, simple models are surprisingly effective at improving decision making:

Simple but evocative models are the signature of the great scientist, but over-elaboration and over-parameterization is often the mark of mediocrity.

— George Box

That’s why you shouldn’t laugh at the famed two-by-two matrix that’s so beloved by management consultants. It’s one of the simplest and therefore most effective models as we shall soon discover.

The more uncertain the environment, the more structured models can help you make better decisions.

There’s a second important point about models: a common belief tells us that in the face of uncertainty you pretty much have to “shoot from the hip” – after all, everything is in flux anyway. The opposite is actually true: our generally poor decision making only gets worse when we have to deal with many interdependencies, high degrees of uncertainty, and small probabilities. Therefore, this is where models help the most to bring much needed structure and discipline into our decision-making. Deciding on whether and to what degree to accept lock-in falls well into this category, so let’s use some models.

Lock-in as a two-by-two matrix

A simple model can help us get past the “lock-in = bad” stigma. First, we have to realize that it’s difficult to not be locked into anything, so some amount of lock-in is inevitable. Second, we may happily accept some amount of lock-in if we get a commensurate pay-off, for example in the form of a unique feature or utility that’s not offered by competing products.

Let’s express these factors in a very simple model – a two-by-two matrix:

The matrix outlines our choices along the following axes:

  • switching cost (aka “lock-in”): how difficult will it be for us to move to another solution?
  • unique utility: how much are we gaining from the solution compared to alternatives?

We can now consider each of the four quadrants:

  • Disposable: Components that don’t have a unique utility and are easy to replace are the ones we may have to worry about the least. We can leave them as is or, if we face any issues, we can easily replace them. Not a bad place to be for run-of-the-mill stuff. For example, most developer IDEs (EMACS likely being a notable exception!) fall into this category: mix and match as you please and don’t get too attached to them. Cloud storage for all your photos and other personal data has also largely moved your smartphone into this box, but more on this later.
  • Accepted Lock-in: across the diagonal are the components that lock you into a specific product or vendor, but in return give you a unique feature or utility. While we generally prefer less lock-in, this trade-off may well be acceptable. You may use a product like Google Cloud BigQuery or AWS Bare Metal Instances, knowing well that you are locked in, having made a conscious decision based on the pay-off you’re getting. For a small application, you may also happily use native AWS services because a migration is unlikely and the reduction in development and operations effort is very welcome.
  • Caution: the least favorable box is the one that locks you in but doesn’t give you a lot of unique utility. Your traditional relational database may fall into this box – does using any proprietary database really increase your revenue? Not really. However, migrating off it can be a lot of effort, so you had better be sure that there’s a low likelihood you’re going to need to do that. If you selected particular hardware for an embedded system that you launched into outer space, that’s likely OK – the chances of a migration are rather low.
  • Ideal: the best components are those that give you a unique utility but at the same time are easy to switch away from. While that sounds like the ideal to strive for, you’ll have to acknowledge that the box is a bit of an oxymoron: if a solution gives you unique utility, by definition competing products won’t have it, making a migration difficult. S3 may be a suitable example for this category – multiple cloud vendors have adopted the same APIs, making a switch to, say, GCP relatively easy. Still, each implementation has some distinct advantages regarding locality, performance, etc. To protect this kind of portability across differentiated products, it’s important that we don’t allow APIs to be copyrighted or patented.

While the model is admittedly simple, placing your software (and perhaps hardware) components into this matrix is a worthwhile exercise. It not only visualizes your exposure but also communicates your decisions well to a variety of stakeholders.
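
As a quick illustration, the exercise fits in a few lines of Python. The placements below echo the examples discussed above; the ratings are judgment calls, not measurements:

```python
def quadrant(switching_cost: str, unique_utility: str) -> str:
    """Map low/high ratings onto the four quadrant names."""
    if unique_utility == "high":
        return "Ideal" if switching_cost == "low" else "Accepted Lock-in"
    return "Disposable" if switching_cost == "low" else "Caution"

# (switching cost, unique utility) per component; illustrative ratings
portfolio = {
    "developer IDEs": ("low", "low"),                # Disposable
    "proprietary data warehouse": ("high", "high"),  # Accepted Lock-in
    "relational database": ("high", "low"),          # Caution
    "S3-compatible storage": ("low", "high"),        # Ideal
}

for component, (cost, utility) in portfolio.items():
    print(f"{component}: {quadrant(cost, utility)}")
```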

For an every-day example of the four quadrants, you may have decided to use the following items, which give you varying amounts of lock-in and utility (counter-clockwise from top-right):

  • Your beloved iPhone locks you into a vendor ecosystem, but it also gives unique utility, so you are likely OK to have this Accepted Lock-in.
  • Your mobile provider contract locks you into a single network, but doesn’t really provide much utility over other networks. It’s better to exercise Caution.
  • Your phone charger has a standard connector. Sadly, many iPhones don’t, but luckily an adapter cable still makes this gadget Disposable.
  • Many of your apps, such as messaging, give you utility, such as having your friends on it, but they are still designed to make it easy to switch, for example by using your phone’s contact list. That’s Ideal.

A unique product feature doesn’t always translate into unique utility for you.

One word of caution on the unique utility: every vendor is going to give you some form of unique feature – that’s how they differentiate. However, what counts here is whether that feature translates into a concrete and unique value for you and your organization. For example, some cloud providers run billion-user services over their amazing global networks. That’s impressive and unique, but unlikely to be a utility for the average enterprise that’s quite happy to serve 1 million customers and may be restricted to doing business in a single country. Some people still buy Ferraris in small countries with strict speed limits, so apparently not all decision making is entirely rational, but perhaps a Ferrari gives you utility in more ways than a cloud platform can.

The actual cost of lock-in

Because this simple matrix was so useful, let’s do another one. The previous matrix treats switching cost as a single element (or dimension). A good architect can see that it breaks down into two dimensions:

The matrix differentiates the cost of making the switch from the likelihood that you’ll have (or want) to make the switch. Things that have a low likelihood and a low cost shouldn’t bother you much, while the opposite end, the ones with high switching cost and a high chance of switching, are no good and should be addressed. On the other diagonal, you are taking your chances on those options that will cost you but are unlikely to occur – that’s where you’ll want to buy some insurance, for example by limiting the scope of change or by padding your maintenance budget. You could also accept the risk – how often would you really need to migrate off Oracle onto DB2, or vice versa? Lastly, if switches are likely but cheap, you’ve achieved agility – you embrace change and have designed your system for a low cost of executing it. Oddly, this quadrant often gets less attention than the top left despite many small changes adding up quickly. That’s our poor decision making at work: the unlikely drama gets more attention because what if!

When discussing the likelihood of lock-in, you’ll want to consider a variety of scenarios that’ll make you switch: a vendor may go out of business, raise prices, or may no longer be able to support your scale or functional needs. Interestingly, the desire to reduce lock-in sometimes comes in the form of a negotiation tool: when negotiating license renewals you can hint to your vendor that you architected your system such that switching away from their product is realistic and inexpensive. This may help you negotiate a lower price because you’ve communicated that you have a strong BATNA – a Best Alternative To a Negotiated Agreement. This is an architecture option that’s not really meant to be used – it’s a deterrent, sort of like a stockpile of weapons in a cold war. You might be able to fake it and not actually reduce lock-in, but you had better be a good poker player in case the vendor calls your bluff, e.g. by chatting with your developers at the water cooler.

Reducing lock-in: The strike price

Pulling in our options analogy from the very beginning once more, if avoiding lock-in gives you options, then the cost of making the switch is the option’s strike price: it’s how much you pay to execute the option. The lower the switching cost you want to achieve, the higher the option’s value and therefore its price. While we’d dream of having all systems in the “green boxes” with minimal switching cost, the necessary investment may not actually pay off.

Minimizing switching costs may not be the most economical choice.

For example, many architects favor not being locked into a database vendor or cloud provider. However, how likely is a switch really? Maybe 5%, or even lower? How much will it cost you to bring that switching cost down from, let’s say, $50,000 (for a semi-manual migration) to near zero? Likely a lot more than the $2,500 ($50,000 x 5%) you can expect to save. Therefore, minimizing the switching cost isn’t the sole goal and can easily lead to over-investment. It’s the equivalent of being over-insured: paying a huge premium to bring the deductible down to zero may give you peace of mind, but it’s often not the most economical, and therefore rational, choice.
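
Spelled out as a back-of-the-envelope calculation (using the illustrative figures above, not benchmarks):

```python
switching_cost = 50_000  # semi-manual migration, as assumed above
likelihood = 0.05        # chance you'll ever make the switch

expected_saving = switching_cost * likelihood
# 2500.0: the ceiling on what investing in zero switching cost
# can rationally return in this scenario
print(expected_saving)
```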

A final model (for once not a matrix) can help you decide how much you should invest into reducing the cost of making a switch. The following diagram shows your liability, defined as the product of the switching cost times the likelihood that it occurs, in relation to the up-front investment you need to make (blue line).

By investing in options, you can surely reduce your liability, either by reducing the likelihood of a switch or by reducing the cost of executing it. For example, using an Object-Relational Mapping (ORM) framework like Hibernate is a small investment that can reduce database vendor lock-in. You could also create a meta-language that is translated into each database vendor’s native stored procedure syntax. It’d allow you to fully exploit the database’s performance without being dependent on a single vendor, but it’s going to take a lot of up-front effort for a relatively unlikely scenario.

The interesting function therefore is the red line, the one that adds the up-front investment to the potential liability. That’s your total cost and the thing you should be minimizing. In most cases, increasing up-front investment moves you towards an optimum range; beyond it, additional investment into reducing lock-in actually leads to higher total cost. The reason is simple: the returns on investment diminish, especially for switches that carry a small probability. If we make our architecture ever-so-flexible, we are likely stuck in this zone of over-investment. The YAGNI (you ain’t gonna need it) folks may aim for the other end of the spectrum – as so often, the trick is to find the happy medium.
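
Here’s a numeric sketch of that trade-off, with a made-up diminishing-returns curve standing in for the blue line (all numbers are illustrative, not data from a real project):

```python
def switching_cost(invest: float) -> float:
    # Assumed curve: every extra 10k invested up front halves the
    # eventual migration bill, i.e., diminishing returns built in.
    return 500_000 * 0.5 ** (invest / 10_000)

LIKELIHOOD = 0.2  # assumed chance the switch ever happens

def total_cost(invest: float) -> float:
    # The "red line": up-front investment plus expected liability.
    return invest + LIKELIHOOD * switching_cost(invest)

# Scan investment levels to locate the optimum range.
best = min(range(0, 100_001, 1_000), key=total_cost)
print(f"optimal investment ~ {best}, total cost ~ {total_cost(best):,.0f}")
```

Past that optimum, every additional dollar spent on flexibility buys back less than a dollar of expected liability.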

The total cost of avoiding lock-in

Now that we have a pretty good grip on the costs and potential pay-offs of being locked in, we need to have a closer look at the total cost of avoiding lock-in. In the previous model we assumed that avoiding lock-in is a simple cost. In reality, though, this cost can be broken down into several components:

Complexity can be the biggest price you pay for reducing lock-in.

  • Effort: This is the additional work to be done in terms of person-hours. If we opt to deploy in containers on top of Kubernetes in order to reduce cloud provider lock-in, this item would include the effort to learn a new tool, write Docker files, configure Kubernetes, etc.
  • Expense: This is the additional cash expense, e.g. for product licenses, to hire external providers, or to attend KubeCon.
  • Underutilization: This indirect cost occurs because avoiding lock-in often keeps you from using vendor-specific features. As a result, you get less utility out of the software you use. This in turn can mean more effort for you to build the missing features, or it can cause a weakness in your product.
  • Complexity: Complexity is a core element of the equation, and one that’s too often ignored. Many efforts to reduce lock-in introduce an additional layer of abstraction: JDBC, containers, common APIs. While these are all useful tools, such a layer adds another moving part, increasing the overall system complexity. This in turn increases the learning effort for new team members and the chance of systemic errors.
  • New Lock-ins: Avoiding one lock-in often comes at the expense of another one. For example, you may opt to avoid AWS CloudFormation and instead use Hashicorp’s Terraform or Pulumi, which both support multiple cloud providers. However, now you are locked into another product from an additional vendor and need to figure out whether that’s OK for you.

When calculating the cost of avoiding lock-in, an architect should make a quick run down this list to avoid blind spots. Also, be aware that attempts at avoiding lock-in can be leaky, very much like leaky abstractions. For example, Terraform is a fine tool, but its scripts use many vendor-specific constructs. Implementation details thus “leak” through, rendering the switching cost from one cloud to another decidedly non-zero.

Bringing it back together

After so much theory, let’s look at a few concrete examples.

Deploying Containers

I worked with a company that packages much of its code into Docker containers deployed to AWS ECS. Thus they are locked into AWS. Should they invest in replacing their container orchestration with Kubernetes, which is open source? Given that feature velocity is their main concern and the current ECS solution works well for them, I don’t think a migration would pay off. The likelihood of having to switch to another cloud provider is low, and they have “bigger fish to fry”.

Recommendation: accept lock-in.

Relational database access

Many applications use a relational database, which can be provided by numerous vendors, with open source alternatives available. However, SQL dialects, stored procedures, and bespoke management consoles all contribute to database lock-in. How much should you invest in avoiding this lock-in? For most languages and run-times, common mapping frameworks such as Hibernate provide some level of database neutrality at a low cost, as sketched below. If you want to further minimize your strike price, you’d also need to avoid SQL functions and stored procedures, which may make your product less performant or require you to spend more on hardware.
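
Hibernate is the Java-world example; here’s the same idea sketched in Python with SQLAlchemy (the connection URLs are placeholders). The application code stays put while the framework emits each database’s SQL dialect:

```python
from sqlalchemy import create_engine, select
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, Session


class Base(DeclarativeBase):
    pass


class Customer(Base):
    __tablename__ = "customers"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]


# The only line that changes when you switch databases, e.g. to
# "postgresql://user:pass@host/db" (placeholder URL).
engine = create_engine("sqlite:///app.db")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Customer(name="Ada"))
    session.commit()
    # The ORM generates the target database's dialect for this query.
    print(session.scalars(select(Customer.name)).all())
```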

Recommendation: use low-effort mechanisms to reduce lock-in. Don’t aim for zero switching cost.

Migrating to the cloud

Rather than switching from one database vendor to another, you may be more interested in moving your application, including its database, to the cloud. Besides technical considerations, you’ll need to be careful with some vendors’ licensing agreements that may make such a move uneconomical. In these cases, it’s wise to opt for an open source database.

Recommendation: select an open source database if it can meet your operational and support needs, but accept some degree of lock-in.

Multi-cloud

Many enterprises are fascinated by the idea of portable multi-cloud deployments and come up with ever more elaborate and complex (and expensive) plans that’ll ostensibly keep them free of cloud provider lock-in. However, most of these approaches negate the very reason you’d want to go to the cloud: low friction and the ability to use hosted services like storage or databases.

Recommendation: Exercise caution. Read my article on multi-cloud.

Architecture at the speed of thought

It may seem that one can spend an enormous amount of time contemplating lock-in. Some may even dismiss our approach as “academic”, a word I repeatedly fail to see as something bad because that’s where most of us got our education. Still, isn’t the old black-or-white method of architecture simpler and, perhaps, more efficient?

Architectural thinking is actually surprisingly fast if you focus and stick to simple models.

In reality, this kind of thinking happens extremely fast. Running through all the models shown in this article may really just take a few minutes and yields well-documented decisions. No fancy tooling besides a piece of paper or a whiteboard is required. The key ingredient of fast architectural thinking is merely the ability to focus.

Compare that to the effort of preparing elaborate slide decks for lengthy steering committee meetings that are scheduled many weeks in advance and usually aren’t attended by anyone with the actual expertise to make an informed decision.

Source: https://martinfowler.com/articles/oss-lockin.html

21 innovative growth strategies used by top growth teams – Appcues

A growth strategy isn’t just a set of functions you plug in to your business to grow your product—it’s also the way in which you organize and rally as a team.

If growth is “more of a mindset than a toolkit,” as Ryan Holiday said, then it’s a collective mindset. 

Successful growth strategies are the product of engineering, marketing, leadership, design, and product management. Whether your team consists of 2 co-founders or a skyscraper full of employees, your growth hacking strategies will only be effective if you’re able to affix them to your organization, apply a workflow, and use the results of experiments to make intelligent decisions. 

In short, there’s no plugin for growth. To increase your product’s user base and activation rate, your company will need to be methodical and tailor the strategies you read about to your unique product, problem, and target audience.

What is a growth strategy?

Before we dive into specific examples of growth strategies, let’s take a moment to establish a proper growth strategy definition:

A growth strategy is a plan of action that allows you to achieve a higher level of market share than you currently have. Contrary to popular belief, a growth strategy is not necessarily focused on short-term earnings—growth strategies can be long-term, too. Let’s keep that in mind with the following examples.

Another thing to keep in mind is that there are typically 4 types of strategies that roll up into a growth strategy. You might use one or all of the following:

  1. Product development strategy—growing your market share by developing new products to serve that market. These new products should either solve a new problem or add to the solution of the problem your product already solves.
  2. Market development strategy—growing your market share by developing new segments of the market, expanding your user base, or expanding your current users’ usage of your product.
  3. Market penetration strategy—growing your market share by bundling products, lowering prices, and advertising—basically everything you can do through marketing after your product is created. This strategy is often confused with market development strategy.
  4. Diversification strategy—growing your market share by entering entirely new markets.

Below, we’ll explore 21 growth strategy examples from teams that have achieved massive growth in their companies. Many examples use one or more of the 4 classic growth strategies, but others are outside of the box. These out-of-the-box approaches are often called “growth hacking strategies”.

Growth strategy examples

Each of these examples should be understood in the context of the company where they were executed. While you can’t copy and paste their success onto your own unique product, there’s a lesson to be learned and leveraged from each one. 

Now let’s get to it!

1. How Clearbit drove 100k inbound leads by giving away free tools

Clearbit’s APIs allow you to do amazing things—like enrich trial sign-ups on your homepage—but to use them effectively, you need a developer’s touch. Clearbit needed to get developers to try their tool in order to grow. Their strategy involved dedicating their own developer time to creating free tools, APIs, and browser extensions that would give other developers a chance to play.

They experimented with creating free APIs for very specific purposes. One of the most successful was their free Logo API, which allowed companies to quickly imprint their brand stamp onto pages of their website. Clearbit launched the API on ProductHunt and spread the word to their developer communities and email list—within a week, the Logo API had received 60,000 views and word-of-mouth traction had grown rapidly.

Clearbit Logo API free API example that helped Clearbit generate inbound leads

Clearbit made a bite-sized version of their overall product. The Logo API represents Clearbit at large—it’s a flexible and easy-to-implement way for companies to integrate data into their workflows. 

Offering a bite-sized version of your product that provides value for free creates an incredible first impression. It validates that what you’re making really works and drives testers to commit to your main product. And it can be an incredibly effective source of acquisition—Clearbit’s free APIs have driven over 100,000 inbound leads for the company.
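
Part of the Logo API’s pull was that the entire “integration” is a URL keyed by a company’s domain. A minimal sketch (this assumes Clearbit’s public logo.clearbit.com/<domain> endpoint pattern; the domain below is an arbitrary example):

```python
import urllib.request

domain = "segment.com"  # arbitrary example domain
url = f"https://logo.clearbit.com/{domain}"

# Download the company's logo image; that's the whole integration.
with urllib.request.urlopen(url) as response:
    with open("logo.png", "wb") as f:
        f.write(response.read())
```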

2. How Segment increased conversions by experimenting with paid acquisition

As a customer analytics tool, Segment practices what it preaches when it comes to acquisition. The Segment team has developed a data-driven, experimental approach to identify its most successful acquisition channels and double down on those strategies. 

In an AMA, their head of marketing Diana Smith told the audience that they’d recently been experimenting with which paid channels worked for them. “In a nutshell, we’ve learned that retargeting definitely works and search does not,” Smith explained.

Segment learned that their marketing efforts were more effective when they reached out to users who’d viewed their site before versus when they relied on users finding them through search. So they set out to refine their retargeting strategy. They started customizing their Facebook and Twitter ads to visitors who’d viewed particular pages: to visitors who’d viewed their docs, they sent API-related messages; to visitors who’d looked at pricing, they sent free trial messages. 

By narrowing your acquisition strategy, you can dramatically increase ROI on paid acquisition, increasing conversions while minimizing CAC.

3. How Tinder tripled its user base by reaching target users in person

Tinder famously found success by gamifying dating. But to get its growth started, Tinder needed a strategy that would allow potential users to play the game and find a willing dating pool on the other side of the app.

In order to validate their product, people needed to see it in action. Tinder’s strategy was surprisingly high touch—they sent a team to visit potential users and demonstrate the product’s value in person.

  • They invested in a tour of sororities and fraternities at colleges to manually recruit signups from their target audience: millennials. It was a move that increased their user base from less than 5,000 users to over 15,000.
  • First, they helped groups of women install the app, guiding them past initial install friction.
  • Then they did the same pitch to a group of men. Both cohorts were able to see value quickly because the app was now used by people who had something important in common—they all went to the same school.

To find the right growth strategy for your product, you have to understand what it will take for users to see it working. Tinder’s in-person pitches were a massive success because they helped users see value faster by populating the 2-sided app with more relevant connections.

4. How Zapier growth hacked signups by writing about other products

Zapier is all about integrations—it brings together tools across a user’s tech stack, allowing events in one tool to trigger events in another, from Asana to HubSpot to Buffer. The beauty of Zapier is that it sort of disappears behind these other tools. But that raises an interesting question: How do you market an invisible tool?

Zapier’s strategy was to leverage its multifaceted product personality through content marketing. The team takes every new integration on Zapier as a new opportunity to build authority on search and to appeal to a new audience. 

The blog reads like a collective guide to hundreds of tools, with specific titles like “How to Quickly Append Text to a Note in Evernote or OneNote from Your Browser” and “How to Automatically Generate Charts and Reports in Google Sheets and Docs.” Zapier’s strategy is to sneakily make itself a content destination for the audiences of all these different tools. 

screenshot from zapier blog article about onenote

This strategy helped their blog grow from scratch to over 600,000 readers in just 3 years, and the blog continues to grow as new tools and integrations are added to Zapier.

If you have a product with multiple use cases and integrations, try targeting your content marketing to specific audiences, rather than aiming for a catch-all approach.

5. How Twitter strengthened their network effect with onboarding suggestions

Andy Johns arrived at Twitter as a product manager in 2010, when the platform already had over 30 million active users. But according to Johns, growth was slowing. So the Twitter user growth team got creative and tried a new growth experiment every day—the team would pick an area in which to engage more users, create an experiment, and nudge the needle up by as much as 60,000 users in a day. 

One crucial user growth strategy that worked for Twitter was to coax users into following more people during onboarding. They started suggesting 10 accounts to new users shortly after signup.

Because users never had to encounter an empty Twitter feed, they were able to experience the product’s value much faster.

mobile screenshot of twitter onboarding suggestions for accounts to follow

Your users’ first aha moment—whether it’s connecting with friends, sending messages, or sharing files—should serve to give them a secure footing in your product and nudge your network effect into action one user at a time.

6. How LinkedIn growth hacked connections by asking a simple question

LinkedIn was designed to connect users. But in the very beginning, most users still had only a few connections and needed help making more.

LinkedIn’s strategy was to capitalize on high user motivation just after signup. With a flow nicknamed the “Reconnect Flow,” LinkedIn asked new users a single question during onboarding: “Where did you used to work?”

Based on this input, LinkedIn then displayed a list of possible connections from the user’s former workplace. This jogged new users’ memories and reduced the effort required to reconnect with old colleagues. Once they had made this step, users were more likely to make further connections on their own.

Thanks to this simple prompt, LinkedIn’s pageviews increased by 41%, searches jumped up 33%, and users’ profiles became richer with 38% more work positions listed.

If you notice your users aren’t making the most of your product on their own, help them out while you have their attention. Use the momentum of your onboarding to help your users become engaged.

7. How Facebook increased week 1 retention by finding its north star metric

Facebook’s active user base surpassed 1 billion in 2012. It’s easy to look at the massive growth of Facebook and see it as a sort of big bang effect—a natural event difficult to pick apart for its separate catalysts. But Facebook’s growth can be pinned down to several key strategies.

Again and again, Facebook carved out growth by maintaining a steely focus on user behavior data. They’ve identified markers of user success and used those markers as North Star metrics to guide their product decisions. 

Facebook used analytics to compare cohorts of users—those who were still engaged in the site and those who’d left shortly after signing up. They found that the clearest indicator of retention was whether or not users connected with 7 friends within 10 days.

Once Facebook had identified their activation metric, they crafted the onboarding experience to nudge users up to the magic number. 

By focusing on a metric that correlates with stickiness, your team can take a scientific approach to growing engagement and retention, and measuring its progress.

8. How Slack got users to stick around by mirroring successful teams

Slack has grown by watching how teams interact with their product. Their own team was the very first test case and from then on, they’ve refined their product by engaging companies to act as testers. 

To understand patterns of retention and churn, Slack peered into their user data. They found that teams who’d sent 2,000 or more messages almost never dropped out of the product. That’s a lot of messages—you only get to that number by really playing around with the product and integrating it into your routine. 

Slack knew they had to give new users as many reasons as possible to send messages through the platform. They started designing interactions with users in a way that encouraged sending multiple messages.

For example, Slack’s onboarding experience simulates how a seasoned Slack user behaves. New users are introduced to the platform through interactions with the Slackbot, and are encouraged to upload files, use keyboard shortcuts, and start new conversations.

slack new user onboarding screenshot with a new channel and slackbot introduction

Find what success means for your product by watching loyal users closely. Mirror that behavior for new users, and encourage them to get into a pattern that leads to long-term retention.

9. How ConvertKit grew $125,000 MRR by helping users switch tools

In early 2013, self-employed e-book writer Nathan Barry publicly set himself an unusual resolution. He announced the “Web App Challenge”—he wanted to build an app from scratch and get to $5,000+ in monthly recurring revenue within 6 months. 

Though he didn’t quite make it to that $5,000 mark, he did build a product—ConvertKit—with validated demand that went on to reach $125,000 in MRR.

Barry experimented with a lot of growth strategies over the first three years, but the one he kept turning back to was direct communication with potential customers. Through personalized emails, Barry found tons of people who loved the idea of ConvertKit but said it was too much trouble for them to think about switching tools—all their contacts and drafts were set up in their existing tools.

So Barry developed a “concierge migration service.” The ConvertKit team would literally go into whichever tool the blogger was using, scrape everything out, and settle the new customer into ConvertKit. Just 15 months after initiating this strategy, ConvertKit was making $125,000 in MRR. 

By actively reaching out and listening to your target users, you’ll be better able to identify precise barriers to entry and come up with creative solutions to help them overcome these hurdles.

10. How Yahoo doubled mobile revenue by rearranging their team

When Yahoo doubled their mobile revenue between 2012 and 2013, it wasn’t just the product that evolved. Yahoo had hired a new leader for its Mobile and Emerging Products, Adam Cahan. As soon as Cahan arrived, he set to work making organizational changes that allowed Yahoo’s mobile division to get experimental, iterate, and develop new products quickly.

  • First, he encouraged elements of a startup environment. Cahan brought together talented individuals from different disciplines—design, product management, engineering—and encouraged them to work like a founding team to focus solely on developing mobile products that would grow.
  • Cahan maintained that collaborative environment even as the division grew to 50 members. By making every member of the team focused on user experience before all else, he removed some of the bottlenecks and divisions that often build up in a large tech company. He gave the team a mission to discover how to make Yahoo better for customers, even if that meant dismantling the status quo or abandoning older software.

In 2 years, Cahan grew Yahoo’s mobile division from 150 million mobile users to 550 million. By hiring the right people and enabling them to focus on solving problems for users, he had opened the doors for organic growth.

11. How Stripe grew by looking after developers first

Payment processing platform Stripe always knew that developers were the key to adoption of their service. Founders John and Patrick Collison started Stripe to address a very specific problem—developers were sorely in need of a payment solution they could adapt to different merchant needs and match the speed and complexity of the buyer side of the ecommerce interface. 

Merchants started clamoring for Stripe because their developers were raving about it—today, Stripe commands 15.34% of the market share for payment processing. That’s in large part due to Stripe’s strategy of prioritizing the needs of developers first and foremost. For instance:

  • Code could only get Stripe so far—so in order to drive adoption, they focused on creating clear, comprehensive documentation so that developers could pick up Stripe products and run with them.
  • Stripe created a library of docs that lead the user through each product. There’s more plain English in these docs than code, bridging the gap for new users.
  • There’s a “Try Now” section where users can see what it takes to tokenize a credit card with Stripe. 
stripe help documentation. stripe is a great example of a company with excellent developer help docs

Know your audience. By focusing on the people that are most directly affected by your problem, you can generate faster and more valuable word-of-mouth. 

12. How Groove turned high churn around with targeted emails

In 2013, help desk tool Groove was experiencing a worryingly high churn rate of 4.5%. They were acquiring new users just fine, but people were leaving as fast as they came. So they set out to get to know these users better. It was a strategy that would allow them to reduce churn from 4.5% to 1.6%. “Your customers probably won’t tell you when they hit a snag,” says Alex Turnbull, founder and CEO of Groove. “Dig into your data and look for creative ways to find those customers having trouble, and help them.”

  • Groove used Kissmetrics to examine customer data. They identified who was leaving and who was staying in the app.
  • They compared the user behavior of both cohorts and found that staying in the app was strongly correlated with performing certain key actions—like being able to create a support widget in 2 to 3 minutes. Users who churned were taking far longer, meaning that for some reason they weren’t able to get a grasp of the tool.
  • Groove was then able to send highly targeted emails to this second cohort, bringing them back into the app and helping them achieve value.

By using analytics, you can identify behaviors that drive engagement vs. churn, then proactively reach out to customers when you spot these behaviors in action. By getting ahead of individual cases of churn, you can drive engagement up.

13. How PayPal paid users to growth hack for them

PayPal was growth hacking referrals before it was cool. When PayPal launched, they were introducing a new type of payment method—and they knew that they needed to build trust and authority in order to grow. Their strategy involved getting early adopters to refer users to the platform. 

  • PayPal paid its first users to sign up. They literally gave them free money. These bonuses began at $20 for signing up.
  • As users grew accustomed to the idea of PayPal, signup bonuses were decreased to $10, then $5, then were phased out—but by that time, their user base had started to grow organically.

“We must have spent tens of millions in signup and referral bonuses the first year,” says David Sacks, original COO at PayPal. But that initial investment worked—PayPal’s radical first iteration of their referral program allowed them to grow to 5 million daily users in only a few months.

Incentivize your users in a way that makes sense for your business. If users adore your product, the initial cost of setting up a referral program can be recouped many times over as your users become advocates.

14. How Postmates reached 1 million deliveries by baking growth into engineering and product

In 2016, the on-demand delivery service Postmates reached 1 million monthly deliveries. They also launched a subscription service, called Postmates Plus Unlimited.

With growing demand, Postmates focused on developing products that are highly accessible and easy to use. At the same time, they gathered funding. In October 2016, they gained another $140 million investment, taking their post-money valuation to $600 million. But to cope with this growth in valuation, Postmates needed to scale their growth team.

According to Siqi Chen, VP of Growth at Postmates, the company had “an incredibly scrappy, hard working team who did the best they could with the tools given, but it’s very hard to make growth work at Postmates scale without dedicated engineering and product support.”

So the team shifted to include engineering and product at every level. Now, Postmates’ growth team has 3 arms of its own—“growth product,” “growth marketing,” and “user acquisition”—each one with its own engineering support.

By connecting their growth team directly to the technical decision makers, Postmates created a team that can scale with the company.

15. How BuzzFeed grew to 9 billion monthly visitors with their “golden rules of shareability”

BuzzFeed is a constantly churning content machine, publishing hundreds of pieces a day and getting over 9 billion content views per month. BuzzFeed’s key growth strategy has been to define virality, and pursue it in everything they do.

  • Jonah Peretti, BuzzFeed’s CEO, shut off the noise and started listening to readers. He found that readers were more concerned about their communities than about the content—they were disappointed when they didn’t find something to share with their friends. The most important metrics the BuzzFeed team could judge themselves by were social shares and traffic from social sites.
  • BuzzFeed created the Golden Rules of Shareability to further refine their criteria, and analyzed their viral content to create a formula for what makes something inherently shareable. This is important, because it makes it possible for Team BuzzFeed to take leaps into new topics and areas.
  • BuzzFeed’s focus has followed its social crowd and has been able to adapt to changing reading patterns and platforms. The company has also upped its political arm, and has made big investments in branded video.

The lesson? To go viral, you need to give the people what they want, and that means striking a balance between consistency and novelty. 

16. How Airbnb continued to scale by simplifying user reviews

Airbnb’s origin story is one of the most famous growth hacking tales. Founders Brian Chesky and Joe Gebbia knew their potential audience was already using Craigslist, so they engineered their own integration, allowing hosts to double-post their ads to Airbnb and Craigslist at the same time.

But it’s their review strategy that has enabled Airbnb to keep growing, once this short-term tactic wore out its effectiveness. Reviews enrich the Airbnb platform. For 50% of bookings, guests visit a host profile at least once before booking a trip, and hosts with more than 10 reviews are 10X more likely to receive bookings. 

Airbnb growth hacked their network effect by making reviewing really easy:

  • They made the review process double-blind, so feedback isn’t visible until both traveler and host have filled out the form. This not only ensures more honest reviews, but removes a key source of friction from the review process.
airbnb double-blind review process with new review notifications
  • They also enabled private feedback and reduced the timeline for leaving a review to 14 days, making reviewing more spontaneous and authentic.

By making reviews easier and more honest, Airbnb grew the number of reviews on the site, which in turn grew its authority. You can growth hack your shareability by identifying barriers to trust and smoothing out points of friction along the way.

17. How AdRoll used Appcues modal windows to increase adoption to 60%

AdRoll has a great MailChimp integration—it allows users to retarget ads to their email subscribers in MailChimp. But they found that very few users were actually making use of this feature.

Peter Clark, head of Growth at AdRoll, wanted to experiment with in-app messaging in order to target the right AdRoll users more effectively.

But growth experiments like this require rapid iteration. His engineers were better suited to longer development cycles, and he didn’t want to disrupt the flow of his organization. So Peter and his team started using Appcues to create custom modal windows quickly and easily—and without input from their technical team members.

With a code-free solution, AdRoll’s growth team could design and implement however many windows they needed to drive adoption of the features they were working on. Here’s how it worked for the MailChimp integration:

  • The team first used a tool called Datanyze to isolate users who used both AdRoll and MailChimp.
  • They copied this list into Appcues and created the modal window below, targeting it to appear only to users with both tools who could take immediate advantage of the integration.
mailchimp and adroll integration feature announcement modal window made with appcues
  • They set the modal to appear as users arrived logged in to their dashboards—the core area of the AdRoll tool, in which users are already poised to take action on their ad campaigns.

This single experiment yielded thousands of conversions and ended up increasing adoption rate of the integration to 60%. The experiment is so easy to replicate that Clark and the team now use modal windows for all kinds of growth experiments.

18. How GitHub grew to 100,000 users in a year by nurturing its network effect

GitHub grew up around Git, a version control tool designed to let multiple developers work together on a single project. But it was the discussion around Git—what the founders nicknamed “the Github”—that became the tool’s core value.

GitHub’s founders realized that the problem of collaboration wasn’t just a practical software problem—the whole developer community was missing a communal factor. So they focused on growing the community side of the product, creating a freemium product with an open-source repository where coders could come together to discuss projects and solve problems with a collective mindset.

They created the ability to follow projects and track contributions, so there’s both an element of camaraderie and an element of competitiveness. This turned GitHub into a sort of social network for coding. A little over a year after launch, GitHub had gained its first 100,000 users. In July of 2012, GitHub secured $100M in venture capital.

By catalyzing the network effect, it’s possible to turn a tool into a culture. For GitHub, the more developers got involved, the better the tool became. Find a community for your product and give them a place to come together.

19. How Yelp reached 176 million unique monthly visits by gamifying reviews

It’s relatively easy for a consumer review site to get drive-by traffic. What makes Yelp different, and allows it to draw return visitors and community members, is that it has strategically grown the social aspect of its platform. 

This is what has earned Yelp 176 million unique monthly visitors in Q2 2019 and has allowed them to overtake competitors by creating their own category of service. Yelp set out to amplify its existing network effect by rewarding users for certain behaviors.

  • They created user levels—users could achieve “Elite” status by writing good reviews frequently and by voting and commenting on other users’ reviews.
  • Yelp judged reviews based on several factors, including level of detail and how many votes of approval they received. All of these factors helped to make Yelp more shareable. Essentially, they were teaching loyal users to be better content creators by rewarding them for upping the quality of Yelp’s content.
yelp review from an elite reviewer, showing user profile friend and review count and buttons to rate a yelp review as useful, funny, or cool

By making reviews into a status symbol, Yelp turned itself into a community with active members who feel a sense of belonging there—and who feel motivated to use the platform more often. 

20. How Etsy grew to 42.7 million active buyers by empowering sellers 

Etsy reached IPO with a $2 billion valuation in 2015, ten years after the startup was founded. Today, the company boasts 42.7 million active buyers and 2.3 million active sellers who made $3.9 billion in annual gross merchandise sales in 2018. Not too shabby (chic)!

The key to their success was Etsy’s creation of a “community-centric” platform. Rather than building a simple ecommerce site, Etsy set about creating a community of like-minded craft-makers. One of the ways they did this was to boost organic new user growth by actively encouraging sellers to share their wares on social media.

  • First, Etsy’s strategy was to focus on the seller side of its user acquisition. They gave their sellers tons of support but also tons of independence to promote and curate their businesses—which ultimately gave sellers a sense of ownership over their own success. Thanks to this approach, Etsy sellers were motivated to recruit their own buyers, who then visited Etsy and got hooked on the site itself.
  • Etsy’s seller handbook is basically a course in how to operate a small online business—hashtags and all. Vendors create their own regulars, and drum up their own new business through social sharing, while Etsy positions itself as the supportive platform.
[Image: the Etsy seller handbook dashboard]

If your product involves a 2-sided market, focus on one side of that equation first. What can you do to enable those people to become an acquisition channel in and of themselves?

21. How IBM created a growth hacking team to spur startup-level growth

As cloud-based software has taken off, traditional hardware technology companies have struggled. IBM has been proactive in its efforts to redefine its brand and product offering for an increasingly mobile audience.

Faced with an increasingly competitive, cloud-based landscape, IBM decided that it was time to start telling a different story. This legacy giant began acting more like a nascent startup, as the company aggressively reinvented its portfolio. 

Their strategy for reinvigorating growth and achieving a startup-like mentality has been to take a product-led approach.

  • In 2014, IBM created a growth hacking team. Already a large corporation, IBM didn’t need to climb the initial hill of growth to get its product off the ground. But by building this focused team, it aimed to grow into new areas and new audiences with “data-driven creativity,” by using the small business strategies it was seeing in the startup scene.
  • IBM now essentially has startup-sized teams within its massive organization, working in a lab style with the autonomy to test marketing strategies.

No matter what your team looks like—whether it’s a nimble 10-person startup or an enterprise with low flexibility—you can turn your organizational structure into a space where growth can thrive. Of course, that achievement is not without its struggles. But as Nancy Hensley, Chief Digital Officer of Data and AI at IBM says:

“There’s always pain in transformation. That’s how you know you’re transforming!”

Listen up before you get loud

None of these growth spurts happened by changing a whole company all at once. Instead, these teams found something—something small, a way in, a loophole, a detail—and carved out that space so growth could follow. 

Whether you find that a single feature in your product is the key to engaging users, or you discover a north star metric that allows you to replicate success—pinpoint your area for growth and dig into it. 

Pay attention. Listen to your users and notice what’s happening in your product and what could be happening better. That learning is your next growth strategy.

GitHub’s Top 100 Most Valuable Repositories Out of 96 Million – Hackernoon

GitHub is not just a code hosting service with version control — it’s also an enormous developer network.

The sheer size of GitHub at over 30 million accounts, more than 2 million organizations, and over 96 million repositories translates into one of the world’s most valuable development networks.

How do you quantify the value of this network? And is there a way to get the top repositories?

Here at U°OS, we ran the GitHub network through a simplified version[1] of our reputation algorithm and produced the top 100 most valuable repositories.

The result is as fascinating as it is eclectic: it feels like a good reflection of our society’s interest in technology and where it is heading.

There are the big proprietary players with open source projects — Google, Apple, Microsoft, Facebook, and even Baidu. And at the same time, there’s a Chinese anti-censorship tool.

There’s Bitcoin for cryptocurrency.

There’s a particle detector for CERN’s Large Hadron Collider.

There are gaming projects like Space Station 13 and Cataclysm: Dark Days Ahead and a gaming engine Godot.

There are education projects like freeCodeCamp, Open edX, Oppia, and Code.org.

There are web and mobile app building projects like WordPress, Joomla, and Flutter to publish your content on.

There are databases to store your content for the web like Ceph and CockroachDB.

And there’s a search engine to navigate through the content — Elasticsearch.

There are also, perhaps unsurprisingly, jailbreak projects like Cydia compatibility manager for iOS and Nintendo 3DS custom firmware.

And there’s a smart home system — Home Assistant.

All in all, it’s a great outlook for the technology world: we learn, build stuff to broadcast our unique voices, use crypto, break free from proprietary software on our hardware, and in our spare time we game in our automated homes. And the big companies open-source their projects.

Before I proceed with the list: running the Octoverse through the reputation algorithm also produced a value score for every individual GitHub contributor. So, if you have a GitHub account and are curious, you can get your score at https://u.community/github and convert it to a Universal Portable Reputation.

Top 100 projects & repositories

Out of over 96 million repositories

  1. Google Kubernetes
    Container scheduling and management
    Repository: https://github.com/kubernetes/kubernetes
    Website: https://kubernetes.io/
  2. Apache Spark
    A unified analytics engine for large-scale data processing
    Repository: https://github.com/apache/spark
    Website: http://spark.apache.org/
  3. Microsoft Visual Studio Code
    A source-code editor
    Repository: https://github.com/Microsoft/vscode
    Website: https://code.visualstudio.com/
  4. NixOS Package Collection
    A collection of packages for the Nix package manager
    Repository: https://github.com/NixOS/nixpkgs
    Website: https://nixos.org
  5. Rust
    Programming language
    Repository: https://github.com/rust-lang/rust
    Website: https://www.rust-lang.org/
  6. Firehol IP Lists
    Blacklists for Firehol, a firewall builder
    Repository: https://github.com/firehol/blocklist-ipsets
    Website: https://iplists.firehol.org/
  7. Red Hat OpenShift
    A community distribution of Kubernetes optimized for continuous application development and multi-tenant deployment
    Repository: https://github.com/openshift/origin
    Website: https://www.openshift.com/
  8. Ansible
    A deployment automation platform
    Repository: https://github.com/ansible/ansible
    Website: https://www.ansible.com/
  9. Automattic WordPress Calypso
    A JavaScript and API powered front-end for WordPress.com
    Repository: https://github.com/Automattic/wp-calypso
    Website: https://developer.wordpress.com/calypso/
  10. Microsoft .NET CoreFX
    Foundational class libraries for .NET Core
    Repository: https://github.com/dotnet/corefx
    Website: https://docs.microsoft.com/en-us/dotnet/core/
  11. Microsoft .NET Roslyn
    .NET compiler
    Repository: https://github.com/dotnet/roslyn
    Website: https://docs.microsoft.com/en-us/dotnet/csharp/roslyn-sdk/
  12. Node.js
    A JavaScript runtime built on Chrome’s V8 JavaScript engine
    Repository: https://github.com/nodejs/node
    Website: https://nodejs.org/en/
  13. TensorFlow
    Google’s machine learning framework
    Repository: https://github.com/tensorflow/tensorflow
    Website: https://www.tensorflow.org/
  14. freeCodeCamp
    Code learning platform
    Repository: https://github.com/freeCodeCamp/freeCodeCamp
    Website: https://www.freecodecamp.org/
  15. Space Station 13
    A round-based roleplaying game
    Repository: https://github.com/tgstation/tgstation
    Website: https://www.tgstation13.org/
  16. Apple Swift
    Apple’s programming language
    Repository: https://github.com/apple/swift
    Website: https://swift.org/
  17. Elasticsearch
    A search engine
    Repository: https://github.com/elastic/elasticsearch
    Website: https://www.elastic.co/products/elasticsearch
  18. Moby
    An open framework to assemble specialized container systems
    Repository: https://github.com/moby/moby
    Website: https://mobyproject.org/
  19. CockroachDB
    A cloud-native SQL database
    Repository: https://github.com/cockroachdb/cockroach
    Website: https://www.cockroachlabs.com/
  20. Cydia Compatibility Checker
    A compatibility checker for Cydia — a package manager for iOS jailbroken devices
    Repository: https://github.com/jlippold/tweakCompatible
    Website: https://jlippold.github.io/tweakCompatible/
  21. Servo
    A web browser engine
    Repository: https://github.com/servo/servo
    Website: https://servo.org/
  22. Google Flutter
    Google’s mobile app SDK to create interfaces for iOS and Android
    Repository: https://github.com/flutter/flutter
    Website: https://flutter.dev/
  23. macOS Homebrew Package Manager
    Default formulae for the missing package manager for macOS
    Repository: https://github.com/homebrew/homebrew-core
    Website: https://brew.sh/
  24. Home Assistant
    Home automation software
    Repository: https://github.com/home-assistant/home-assistant
    Website: https://www.home-assistant.io/
  25. Microsoft .NET CoreCLR
    Runtime for .NET Core
    Repository: https://github.com/dotnet/coreclr
    Website: https://docs.microsoft.com/en-us/dotnet/core/
  26. CocoaPods Specifications
    Specifications for CocoaPods, a Cocoa dependency manager
    Repository: https://github.com/CocoaPods/Specs
    Website: https://cocoapods.org/
  27. Elastic Kibana
    An analytics and search dashboard for Elasticsearch
    Repository: https://github.com/elastic/kibana
    Website: https://www.elastic.co/products/kibana
  28. Julia Language
    A technical computing language
    Repository: https://github.com/JuliaLang/julia
    Website: https://julialang.org/
  29. Microsoft TypeScript
    A superset of JavaScript that compiles to plain JavaScript
    Repository: https://github.com/Microsoft/TypeScript
    Website: https://www.typescriptlang.org/
  30. Joomla
    A content management system
    Repository: https://github.com/joomla/joomla-cms
    Website: https://www.joomla.org/
  31. DefinitelyTyped
    A repository for TypeScript type definitions
    Repository: https://github.com/DefinitelyTyped/DefinitelyTyped
    Website: http://definitelytyped.org/
  32. Homebrew Cask
    A CLI workflow for the administration of macOS applications distributed as binaries
    Repository: https://github.com/Homebrew/homebrew-cask
    Website: https://brew.sh/
  33. Ceph
    A distributed object, block, and file storage platform
    Repository: https://github.com/ceph/ceph
    Website: https://ceph.com/
  34. Go
    Programming language
    Repository: https://github.com/golang/go
    Website: https://golang.org/
  35. AMP HTML Builder
    A way to build pages for Google AMP
    Repository: https://github.com/ampproject/amphtml
    Website: https://amp.dev/
  36. Open edX
    An online education platform
    Repository: https://github.com/edx/edx-platform
    Website: https://open.edx.org/
  37. Pandas
    A data analysis and manipulation library for Python
    Repository: https://github.com/pandas-dev/pandas
    Website: https://pandas.pydata.org/
  38. Istio
    A platform to manage microservices
    Repository: https://github.com/istio/istio
    Website: https://istio.io/
  39. ManageIQ
    A management platform for containers, virtual machines, networks, and storage
    Repository: https://github.com/ManageIQ/manageiq
    Website: http://manageiq.org/
  40. Godot Engine
    A multi-platform 2D and 3D game engine
    Repository: https://github.com/godotengine/godot
    Website: https://godotengine.org/
  41. Gentoo Repository Mirror
    A Gentoo ebuild repository mirror
    Repository: https://github.com/gentoo/gentoo
    Website: https://www.gentoo.org/
  42. Odoo
    A suite of web-based open source business apps
    Repository: https://github.com/odoo/odoo
    Website: https://www.odoo.com/
  43. Azure Documentation
    Documentation of Microsoft Azure
    Repository: https://github.com/MicrosoftDocs/azure-docs
    Website: https://docs.microsoft.com/azure
  44. Magento
    An eCommerce platform
    Repository: https://github.com/magento/magento2
    Website: https://magento.com/
  45. Saltstack
    Software to automate the management and configuration of any infrastructure or application at scale
    Repository: https://github.com/saltstack/salt
    Website: https://www.saltstack.com/
  46. AdGuard Filters
    Ad blocking filters for AdGuard
    Repository: https://github.com/AdguardTeam/AdguardFilters
    Website: https://adguard.com/en/welcome.html
  47. Symfony
    A PHP framework
    Repository: https://github.com/symfony/symfony
    Website: https://symfony.com/
  48. CMS Software for the Large Hadron Collider
    Particle detector software components for CERN’s Large Hadron Collider
    Repository: https://github.com/cms-sw/cmssw
    Website: http://cms-sw.github.io/
  49. Red Hat OpenShift
    OpenShift installation and configuration management
    Repository: https://github.com/openshift/openshift-ansible
    Website: https://www.openshift.com/
  50. ownCloud
    Personal cloud software
    Repository: https://github.com/owncloud/core
    Website: https://owncloud.org/
  51. gRPC
    A remote procedure call (RPC) framework
    Repository: https://github.com/grpc/grpc
    Website: https://grpc.io/
  52. Liferay
    An enterprise web platform
    Repository: https://github.com/brianchandotcom/liferay-portal
    Website: https://www.liferay.com/
  53. CommCare HQ
    A mobile data collection platform
    Repository: https://github.com/dimagi/commcare-hq
    Website: https://www.commcarehq.org/
  54. WordPress Gutenberg
    An editor plugin for WordPress
    Repository: https://github.com/WordPress/gutenberg
    Website: https://wordpress.org/gutenberg/
  55. PyTorch
    A Python package for Tensor computation and deep neural networks
    Repository: https://github.com/pytorch/pytorch
    Website: https://pytorch.org/
  56. Kubernetes Test Infrastructure
    A test-infra repository for Kubernetes
    Repository: https://github.com/kubernetes/test-infra
    Website: https://kubernetes.io/
  57. Keybase
    Keybase client repository
    Repository: https://github.com/keybase/client
    Website: https://keybase.io/
  58. Facebook React
    A JavaScript library for building user interfaces
    Repository: https://github.com/facebook/react
    Website: https://reactjs.org/
  59. Code.org
    Code learning resource
    Repository: https://github.com/code-dot-org/code-dot-org
    Website: https://code.org/
  60. Bitcoin Core
    Bitcoin client software
    Repository: https://github.com/bitcoin/bitcoin
    Website: https://bitcoincore.org/
  61. Arm Mbed OS
    A platform operating system for the Internet of Things
    Repository: https://github.com/ARMmbed/mbed-os
    Website: https://www.mbed.com
  62. scikit-learn
    A Python module for machine learning
    Repository: https://github.com/scikit-learn/scikit-learn
    Website: https://scikit-learn.org
  63. Nextcloud
    A self-hosted productivity platform
    Repository: https://github.com/nextcloud/server
    Website: https://nextcloud.com/
  64. Helm Charts
    A curated list of applications for Kubernetes
    Repository: https://github.com/helm/charts
    Website: https://kubernetes.io/
  65. Terraform
    An infrastructure management tool
    Repository: https://github.com/hashicorp/terraform
    Website: https://www.terraform.io/
  66. Ant Design
    A UI design language
    Repository: https://github.com/ant-design/ant-design
    Website: https://ant.design/
  67. Phalcon Framework Documentation
    Documentation for Phalcon, a PHP framework
    Repository: https://github.com/phalcon/docs
    Website: https://docs.phalconphp.com
  68. Documentation for CMS Software for the Large Hadron Collider
    Documentation for CMS Software for CERN’s Large Hadron Collider
    Repository: https://github.com/cms-sw/cms-sw.github.io
    Website: http://cms-sw.github.io/
  69. Apache Kafka Mirror
    A mirror for Apache Kafka, a distributed streaming platform
    Repository: https://github.com/apache/kafka
    Website: https://kafka.apache.org/
  70. Electron
    A framework to write cross-platform desktop applications using JavaScript, HTML and CSS
    Repository: https://github.com/electron/electron
    Website: https://electronjs.org/
  71. Zephyr Project
    A real-time operating system
    Repository: https://github.com/zephyrproject-rtos/zephyr
    Website: https://www.zephyrproject.org/
  72. The web-platform-tests Project
    A cross-browser test suite for the web-platform stack
    Repository: https://github.com/web-platform-tests/wpt
    Website: https://www.w3.org/
  73. Marlin Firmware
    Optimized firmware for RepRap 3D printers based on the Arduino platform
    Repository: https://github.com/MarlinFirmware/Marlin
    Website: http://marlinfw.org/
  74. Apache MXNet
    A library for deep learning
    Repository: https://github.com/apache/incubator-mxnet
    Website: https://mxnet.apache.org/
  75. Apache Beam
    A unified programming model
    Repository: https://github.com/apache/beam
    Website: https://beam.apache.org/
  76. Fastlane
    A build and release automation tool for iOS and Android apps
    Repository: https://github.com/fastlane/fastlane
    Website: https://fastlane.tools/
  77. Kubernetes Website and Documentation
    A repository for the Kubernetes website and documentation
    Repository: https://github.com/kubernetes/website
    Website: https://kubernetes.io
  78. Ruby on Rails
    A web-application framework
    Repository: https://github.com/rails/rails
    Website: https://rubyonrails.org/
  79. Zulip
    Team chat software
    Repository: https://github.com/zulip/zulip
    Website: https://zulipchat.com/
  80. Laravel
    A web application framework
    Repository: https://github.com/laravel/framework
    Website: https://laravel.com/
  81. Baidu PaddlePaddle
    Baidu’s deep learning framework
    Repository: https://github.com/PaddlePaddle/Paddle
    Website: http://www.paddlepaddle.org/
  82. Gatsby
    A web application framework
    Repository: https://github.com/gatsbyjs/gatsby
    Website: https://www.gatsbyjs.org/
  83. Rust Crate Registry
    Rust’s community package registry
    Repository: https://github.com/rust-lang/crates.io-index
    Website: https://crates.io/
  84. Nintendo 3DS Custom Firmware
    A complete guide to 3DS custom firmware
    Repository: https://github.com/hacks-guide/Guide_3DS
    Website: https://3ds.hacks.guide/
  85. TiDB
    A NewSQL database
    Repository: https://github.com/pingcap/tidb
    Website: https://pingcap.com
  86. Angular CLI
    CLI tool for Angular, a Google web application framework
    Repository: https://github.com/angular/angular-cli
    Website: https://cli.angular.io/
  87. MAPS.ME
    Offline OpenStreetMap maps for iOS and Android
    Repository: https://github.com/mapsme/omim
    Website: https://maps.me/
  88. Eclipse Che
    A cloud IDE for Eclipse
    Repository: https://github.com/eclipse/che
    Website: http://www.eclipse.org/che/
  89. Brave Browser
    A browser with native BAT cryptocurrency
    Repository: https://github.com/brave/browser-laptop
    Website: https://www.brave.com/
  90. Patchwork
    A repository to learn Git
    Repository: https://github.com/jlord/patchwork
    Website: http://jlord.us/patchwork/
  91. Angular Material
    Component infrastructure and Material Design components for Angular, a Google web application framework
    Repository: https://github.com/angular/components
    Website: https://material.angular.io/
  92. Python
    Programming language
    Repository: https://github.com/python/cpython
    Website: https://www.python.org/
  93. Space Station 13
    A round-based roleplaying game
    Repository: https://github.com/vgstation-coders/vgstation13
    Website: http://ss13.moe/
  94. Cataclysm: Dark Days Ahead
    A turn-based survival game
    Repository: https://github.com/CleverRaven/Cataclysm-DDA
    Website: http://cataclysmdda.org/
  95. Material-UI
    React components that implement Google’s Material Design
    Repository: https://github.com/mui-org/material-ui
    Website: https://material-ui.com/
  96. Ionic
    A Progressive Web Apps development framework
    Repository: https://github.com/ionic-team/ionic
    Website: https://ionicframework.com/
  97. Oppia
    A tool for collaboratively building interactive lessons
    Repository: https://github.com/oppia/oppia
    Website: https://www.oppia.org
  98. Alluxio
    A virtual distributed storage system
    Repository: https://github.com/Alluxio/alluxio
    Website: https://www.alluxio.io/
  99. XX Net
    A Chinese web proxy and anti-censorship tool
    Repository: https://github.com/XX-net/XX-Net
    Website: None
  100. Microsoft .NET CLI
    A CLI tool for .NET
    Repository: https://github.com/dotnet/cli
    Website: https://docs.microsoft.com/en-us/dotnet/core/tools/

[1] The explanation of the calculation of the simplified version is at the U°OS Network GitHub repository.

Source : https://hackernoon.com/githubs-top-100-most-valuable-repositories-out-of-96-million-bb48caa9eb0b

 

Improving the Accuracy of Automatic Speech Recognition Models for Broadcast News – Appen

In their paper entitled English Broadcast News Speech Recognition by Humans and Machines, a team from IBM and Appen sets out to identify techniques that close the gap between automatic speech recognition (ASR) and human performance.

Where does the data come from?

IBM’s initial work in the voice recognition space was done as part of the U.S. government’s Defense Advanced Research Projects Agency (DARPA) Effective Affordable Reusable Speech-to-Text (EARS) program, which led to significant advances in speech recognition technology. The EARS program produced about 140 hours of supervised broadcast news (BN) training data and around 9,000 hours of very lightly supervised training data derived from the closed captions of television shows. By contrast, EARS produced around 2,000 hours of highly supervised, human-transcribed training data for conversational telephone speech (CTS).

Lost in translation?

Because so much training data is available for CTS, the team from IBM and Appen endeavored to apply similar speech recognition strategies to BN to see how well those techniques translate across applications. To understand the challenge the team faced, it’s important to call out some key differences between the two speech styles:

Broadcast news (BN)

  • Clear, well-produced audio quality
  • Wide variety of speakers with different speaking styles
  • Varied background noise conditions — think of reporters in the field
  • Wide variety of news topics

Conversational telephone speech (CTS)

  • Often poor audio quality with sound artifacts
  • Unscripted
  • Interspersed with moments where speech overlaps between participants
  • Interruptions, sentence restarts, and background confirmations between participants, i.e. “okay”, “oh”, “yes”

How the team adapted speech recognition models from CTS to BN

The team adapted the speech recognition systems that had been used so successfully in the EARS CTS research: multiple long short-term memory (LSTM) and ResNet acoustic models trained on a range of acoustic features, along with word and character LSTMs and convolutional WaveNet-style language models. This strategy had produced word error rates (WER) between 5.1% and 9.9% for CTS in a previous study, specifically the HUB5 2000 English Evaluation conducted by the Linguistic Data Consortium (LDC). The team tested a simplified version of this approach on the BN data set, which wasn’t human-annotated but rather created using closed captions.
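
To make this concrete, here is a minimal PyTorch sketch of the general shape of such an LSTM acoustic model: a stacked bidirectional LSTM mapping per-frame acoustic features to log-posteriors over context-dependent phone states. Every dimension and name below is an illustrative assumption; the actual models in the paper are larger and trained very differently.

```python
# Illustrative sketch only -- not the paper's actual model. Feature size,
# layer sizes, and the number of output states are invented assumptions.
import torch
import torch.nn as nn

class LSTMAcousticModel(nn.Module):
    def __init__(self, n_features=40, hidden=512, layers=4, n_states=9000):
        super().__init__()
        # Stacked bidirectional LSTM over the sequence of acoustic frames.
        self.lstm = nn.LSTM(n_features, hidden, num_layers=layers,
                            bidirectional=True, batch_first=True)
        # Linear projection to context-dependent phone-state posteriors.
        self.out = nn.Linear(2 * hidden, n_states)

    def forward(self, feats):            # feats: (batch, frames, n_features)
        h, _ = self.lstm(feats)
        return self.out(h).log_softmax(dim=-1)   # per-frame log-posteriors

model = LSTMAcousticModel()
frames = torch.randn(8, 300, 40)          # a batch of ~3-second utterances
log_post = model(frames)                  # shape: (8, 300, 9000)
```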

Instead of adding all the available training data, the team carefully selected a reliable subset, then trained LSTM and residual network-based acoustic models with a combination of n-gram and neural network language models on that subset. In addition to automatic speech recognition testing, the team benchmarked the automatic system against an Appen-produced high-quality human transcription. The primary language model training text for all these models consisted of a total of 350 million words from different publicly available sources suitable for broadcast news.

Getting down to business

In the first set of experiments, the team tested the LSTM and ResNet models separately, each in conjunction with the n-gram language model and a feed-forward neural network language model (FF-NNLM), and then combined the scores from the two acoustic models, comparing against the results obtained on the older CTS evaluation. Unlike in the original CTS testing, no significant reduction in WER was achieved after the scores from the LSTM and ResNet models were combined. The LSTM model with an n-gram LM performs quite well on its own, and its results improve further with the addition of the FF-NNLM.

For the second set of experiments, the team generated word lattices after decoding with the LSTM+ResNet+n-gram+FF-NNLM model, produced n-best lists from these lattices, and rescored them with the LSTM1-LM. LSTM2-LM was also used to rescore the word lattices independently. Significant WER gains were observed with the LSTM LMs, leading the researchers to hypothesize that secondary fine-tuning on BN-specific data is what allows LSTM2-LM to outperform LSTM1-LM.
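
The mechanics of n-best rescoring are easy to sketch. Below is a minimal, hedged Python illustration (not the paper’s actual pipeline): each hypothesis’s first-pass decoder score is interpolated with a language model score and a word insertion penalty, and the list is re-ranked. The `lm_logprob` scorer, the weights, and the toy unigram “LM” are all invented for illustration.

```python
import math

# Hedged sketch of n-best rescoring: combine each hypothesis's first-pass
# decoder score with an external LM score, then pick the best hypothesis.
def rescore_nbest(nbest, lm_logprob, lm_weight=0.5, wip=-0.5):
    """nbest: list of (words, first_pass_score) pairs."""
    rescored = []
    for words, first_pass in nbest:
        score = (first_pass
                 + lm_weight * lm_logprob(words)   # neural LM contribution
                 + wip * len(words))               # word insertion penalty
        rescored.append((score, words))
    return max(rescored)[1]                        # highest combined score

# Toy usage with a dummy unigram "LM" standing in for an LSTM LM:
freq = {"the": 0.1, "cat": 0.01, "sat": 0.01, "hat": 0.005}
dummy_lm = lambda ws: sum(math.log(freq.get(w, 1e-6)) for w in ws)
best = rescore_nbest([(["the", "cat", "sat"], -12.0),
                      (["the", "cat", "hat"], -11.5)], dummy_lm)
print(best)
```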

The results

The ASR results clearly improved on state-of-the-art performance, and significant progress has been made compared to systems developed over the last decade. Compared to the human performance results, the absolute ASR WER is about 3% worse. Although the machine and human error rates are comparable, the ASR system has much higher substitution and deletion error rates.
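
For reference, WER is the word-level edit distance between hypothesis and reference (substitutions, deletions, and insertions) divided by the number of reference words. A minimal implementation of the standard dynamic program (my own sketch, not the scoring tool used in the study):

```python
# Minimal WER via edit distance; a sketch, not the study's scoring pipeline.
def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                                  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                                  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # one deletion, ~0.167
```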

Looking at the different error types and rates, the research produced interesting takeaways:

  • There’s a significant overlap in the words that ASR and humans delete, substitute, and insert.
  • Humans seem to be careful about marking hesitations: %hesitation was the most frequently inserted symbol in these experiments. Hesitations seem to be important in conveying the meaning of sentences in human transcriptions. The ASR systems, however, focus on blind recognition and were not successful in conveying the same meaning.
  • Machines have trouble recognizing short function words: the, and, of, a, and that get deleted the most. Humans, on the other hand, seem to catch most of them. It seems likely that these words aren’t fully articulated, so the machine fails to recognize them, while humans are able to infer them naturally.

Conclusion

The experiments show that ASR techniques can be transferred across domains to provide highly accurate transcriptions. For both acoustic and language modeling, the LSTM- and ResNet-based models proved effective, and the human evaluation experiments kept the team honest. That said, while the methods keep improving, there is still a gap to close between human and machine performance, demonstrating a continued need for research on automatic transcription for broadcast news.

Source : https://appen.com/blog/improving-the-accuracy-of-automatic-speech-recognition-models-for-broadcast-news/

 

Which New Business Models Will Be Unleashed By Web 3.0? – Fabric

The forthcoming wave of Web 3.0 goes far beyond the initial use case of cryptocurrencies. Through the richness of interactions now possible and the global scope of counter-parties available, Web 3.0 will cryptographically connect data from individuals, corporations and machines, with efficient machine learning algorithms, leading to the rise of fundamentally new markets and associated business models.

The potential impact of Web 3.0 is undeniable, but the question remains: which business models will crack the code to provide lasting and sustainable value in today’s economy?

A history of Business Models across Web 1.0, Web 2.0 and Web 3.0

We will dive into the native business models that have been, and will be, enabled by Web 3.0, but first let’s briefly touch upon the quickly forgotten and often arduous journeys that led to the unexpected and unpredictable business models that emerged in Web 2.0.

To set the scene anecdotally for Web 2.0’s business model discovery process, let us not forget the journey that Google went through from their launch in 1998 to 2002 before going public in 2004:

  • In 1999, while enjoying good traffic, they were clearly struggling with their business model. Their lead investor Mike Moritz (Sequoia Capital) openly stated “we really couldn’t figure out the business model, there was a period where things were looking pretty bleak”.
  • In 2001, Google was making $85m in revenue while their rival Overture was making $288m in revenue, as CPM based online advertising was falling away post dot-com crash.
  • In 2002, adopting Overture’s ad model, Google went on to launch AdWords Select: its own pay-per-click, auction-based search-advertising product.
  • Two years later, in 2004, Google handled 84.7% of all internet searches and went public at a valuation of $23.2 billion, with annualised revenues of $2.7 billion.

After four years of struggle, a single small modification to its business model launched Google into orbit, on the way to becoming one of the world’s most valuable companies.

Looking back at the wave of Web 2.0 Business Models

Content

The earliest iterations of online content merely involved the digitisation of existing newspapers and phone books … and yet, we’ve now seen Roma (Alfonso Cuarón) receive 10 Academy Awards Nominations for a movie distributed via the subscription streaming giant Netflix.

Marketplaces

Amazon started as an online bookstore that nobody believed could become profitable … and yet, it is now the behemoth of marketplaces covering anything from gardening equipment to healthy food to cloud infrastructure.

Open Source Software

Open source software development started off with hobbyists and an idealist view that software should be a freely accessible common good … and yet, the entire internet runs on open source software today, creating $400B of economic value a year; GitHub was acquired by Microsoft for $7.5B, while Red Hat makes $3.4B in yearly revenues providing services for Linux.

SaaS

In the early days of Web 2.0, it might have been inconceivable that, rather than spending massively on proprietary infrastructure, one could deliver business software via a browser and remain economically viable … and yet, today the large majority of B2B businesses run on SaaS models.

Sharing Economy

It was hard to believe that anyone would be willing to climb into a stranger’s car or rent out their couch to travellers … and yet, Uber and AirBnB have become the world’s largest taxi operator and accommodation provider, respectively, without owning any cars or properties.

Advertising

While Google and Facebook might have gone into hyper-growth early on, they didn’t have a clear plan for revenue generation for the first half of their existence … and yet, the advertising model turned out to fit them almost too well: they now generate 58% of global digital advertising revenues ($111B in 2018), and advertising has become the dominant business model of Web 2.0.

Emerging Web 3.0 Business Models

Looking at Web 3.0 over the past 10 years, initial business models have tended not to be repeatable or scalable, or have simply tried to replicate Web 2.0 models. We are convinced that, while there is some scepticism about their viability, the continuous experimentation by some of the smartest builders will lead to incredibly valuable models being built over the coming years.

By exploring both the more established and the more experimental Web 3.0 business models, we aim to understand how some of them will accrue value over the coming years.

  • Issuing a native asset
  • Holding the native asset, building the network
  • Taxation on speculation (exchanges)
  • Payment tokens
  • Burn tokens
  • Work Tokens
  • Other models

Issuing a native asset:

Bitcoin came first. Proof of Work coupled with Nakamoto Consensus created the first Byzantine Fault Tolerant & fully open peer-to-peer network. Its intrinsic business model relies on its native asset: BTC — a provably scarce digital token paid out to miners as block rewards. Others, including Ethereum, Monero and ZCash, have followed down this path, issuing ETH, XMR and ZEC.

These native assets are necessary for the functioning of the network and derive their value from the security they provide: by offering a high enough incentive for honest miners to contribute hashing power, the cost for malicious actors to perform an attack grows alongside the price of the native asset; in turn, the added security drives further demand for the currency, further increasing its price and value. The value accrued in these native assets has been analysed & quantified at length.

Holding the native asset, building the network:

Some of the earliest companies that formed around crypto networks had a single mission: make their respective networks more successful & valuable. Their resultant business model can be condensed to “increase their native asset treasury; build the ecosystem”. Blockstream, acting as one of the largest maintainers of Bitcoin Core, relies on creating value from its balance sheet of BTC. Equally, ConsenSys has grown to a thousand employees building critical infrastructure for the Ethereum ecosystem, with the purpose of increasing the value of the ETH it holds.

While this perfectly aligns the companies with the networks, the model is hard to replicate beyond the first handful of companies: amassing a meaningful enough balance of native assets becomes impossible after a while … and the blood, toil, tears and sweat of launching & sustaining a company cannot be justified without a large enough stake for exponential returns. As an illustration, it wouldn’t be rational for any business other than a central bank — a US remittance provider, say — to base its business purely on holding large sums of USD while working on making the US economy more successful.

Taxing the Speculative Nature of these Native Assets:

The subsequent generation of business models focused on building the financial infrastructure for these native assets: exchanges, custodians & derivatives providers. They were all built with a simple business objective — providing services for users interested in speculating on these volatile assets. While the likes of Coinbase, Bitstamp & Bitmex have grown into billion-dollar companies, they do not have a fully monopolistic nature: they provide convenience & enhance the value of their underlying networks. The open & permissionless nature of the underlying networks makes it impossible for companies to lock in a monopolistic position by virtue of providing “exclusive access”, but their liquidity and brands provide defensible moats over time.

Payment Tokens:

With the rise of the token sale, a new wave of projects in the blockchain space based their business models on payment tokens within networks: often creating two-sided marketplaces and enforcing the use of a native token for any payments made. The assumption is that as the network’s economy grows, demand for the limited native payment token increases, leading to an increase in the token’s value. While the value accrual of such a token model is debated, the increased friction for the user is clear — what could have been paid in ETH or DAI now requires additional exchanges on both sides of a transaction. While this model was widely used during the 2017 token mania, its friction-inducing characteristics have rapidly removed it from the forefront of development over the past 9 months.

Burn Tokens:

Revenue-generating communities, companies, and projects with a token might not always be able to pass profits on to token holders in a direct manner. A model that garnered a lot of interest, as one of the characteristics of the Binance (BNB) and MakerDAO (MKR) tokens, is the idea of buybacks / token burns. As revenues flow into the project (from trading fees for Binance and from stability fees for MakerDAO), native tokens are bought back from the public market and burned, reducing the supply of tokens, which should lead to an increase in price. It’s worth exploring Arjun Balaji’s evaluation (The Block), in which he argues that the Binance token burning mechanism doesn’t actually result in the equivalent of an equity buyback: as there are no dividends paid out at all, the “earnings per token” remain at $0.
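
To see why burns are often compared to buybacks, here is a toy calculation under the strong simplifying assumption that market capitalization stays constant while supply shrinks. Every figure is invented for illustration, and Balaji’s critique above still applies: no cash actually flows to holders.

```python
# Toy buyback-and-burn arithmetic; all numbers are invented assumptions.
supply = 100_000_000                 # tokens outstanding
price = 2.00                         # USD per token
market_cap = supply * price

quarterly_revenue = 10_000_000       # USD of revenue routed to the burn
burned = quarterly_revenue / price   # tokens bought back and destroyed
new_supply = supply - burned

# If (and only if) market cap were unchanged, each remaining token
# would represent a larger slice of it:
implied_price = market_cap / new_supply
print(f"burned {burned:,.0f} tokens; implied price ${implied_price:.4f}")
# Note: with no dividends, "earnings per token" are still $0 --
# any price effect is indirect, which is exactly Balaji's point.
```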

Work Tokens:

One of the business models for crypto-networks that we are seeing ‘hold water’ is the work token: a model that focuses exclusively on the revenue-generating supply side of a network in order to reduce friction for users. Good examples include Augur’s REP and Keep Network’s KEEP tokens. A work token model operates similarly to classic taxi medallions: it requires service providers to stake / bond a certain amount of native tokens in exchange for the right to provide profitable work to the network. One of the most powerful aspects of the work token model is the ability to incentivise actors with both carrot (rewards for the work) & stick (stake that can be slashed). Beyond providing security to the network by incentivising service providers to execute honest work (as they have locked skin in the game denominated in the work token), such tokens can also be evaluated through the predictable future cash flows to the collective of service providers (we have previously explored the benefits and valuation methods for such tokens in this blog). In brief, such tokens should be valued based on the future expected cash flows attributable to all the service providers in the network, which can be modelled out based on assumptions about pricing and usage of the network.
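
As a hedged sketch of that valuation logic, the snippet below discounts an assumed stream of future network fees (the cash flows accruing to the collective of service providers) and divides by the token supply. Every input is an invented assumption, not a model of any real network.

```python
# Sketch of a work-token valuation via discounted cash flows to service
# providers. Growth, discount rate, horizon, and supply are assumptions.
def work_token_value(annual_fees, growth, discount, years, token_supply):
    npv = 0.0
    for t in range(1, years + 1):
        cash_flow = annual_fees * (1 + growth) ** t   # fees earned by stakers
        npv += cash_flow / (1 + discount) ** t        # discounted to today
    return npv / token_supply                          # value per staked token

# Example: $5M in current network fees, 30% annual growth, a 25% discount
# rate, a 10-year horizon, and 10M tokens required for staking.
print(work_token_value(5e6, 0.30, 0.25, 10, 10e6))
```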

A wide array of other models is being explored and worth touching upon:

  • Dual token model such as MKR/DAI & SPANK/BOOTY where one asset absorbs the volatile up- & down-side of usage and the other asset is kept stable for optimal transacting.
  • Governance tokens which provide the ability to influence parameters such as fees and development prioritisation and can be valued from the perspective of an insurance against a fork.
  • Tokenised securities as digital representations of existing assets (shares, commodities, invoices or real estate) which are valued based on the underlying asset with a potential premium for divisibility & borderless liquidity.
  • Transaction fees for features such as the models BloXroute & Aztec Protocol have been exploring with a treasury that takes a small transaction fee in exchange for its enhancements (e.g. scalability & privacy respectively).
  • Tech 4 Tokens as proposed by the Starkware team who wish to provide their technology as an investment in exchange for tokens — effectively building a treasury of all the projects they work with.
  • Providing UX/UI for protocols, such as Veil & Guesser are doing for Augur and Balance is doing for the MakerDAO ecosystem, relying on small fees or referrals & commissions.
  • Network-specific services, which currently include staking providers (e.g. Staked.us), CDP managers (e.g. topping off MakerDAO CDPs before they become undercollateralised), or marketplace management services such as OB1 on OpenBazaar, which can charge traditional fees (subscription or as a % of revenues).
  • Liquidity providers operating in applications that don’t have revenue generating business models. For example, Uniswap is an automated market maker, in which the only route to generating revenues is providing liquidity pairs.

With this wealth of new business models arising and being explored, it becomes clear that while there is still room for traditional venture capital, the role of the investor and of capital itself is evolving. The capital itself morphs into a native asset within the network, with a specific role to fulfil. From passive network participation, to bootstrapping networks after a financial investment (e.g. computational work or liquidity provision), to direct injections of subjective work into the networks (e.g. governance or CDP risk evaluation), investors will have to reposition themselves for this new organisational mode driven by trust-minimised decentralised networks.

When looking back, we realise Web 1.0 & Web 2.0 took exhaustive experimentation to find the appropriate business models, which have created the tech titans of today. We are not ignoring the fact that Web 3.0 will have to go on an equally arduous journey of iterations, but once we find adequate business models, they will be incredibly powerful: in trust minimised settings, both individuals and enterprises will be enabled to interact on a whole new scale without relying on rent-seeking intermediaries.

Today we see thousands of incredibly talented teams pushing forward implementations of some of these models or discovering completely new viable business models. As the models might not fit traditional frameworks, investors might have to adapt by taking on new roles and providing work as well as capital (a journey we have already started at Fabric Ventures), but as long as we can see predictable and rational value accrual, it makes sense to double down, as every day the execution risk is getting smaller and smaller.

Source : https://medium.com/fabric-ventures/which-new-business-models-will-be-unleashed-by-web-3-0-4e67c17dbd10

Why are Machine Learning Projects so Hard to Manage? – Lukas Biewald

I’ve watched lots of companies attempt to deploy machine learning — some succeed wildly and some fail spectacularly. One constant is that machine learning teams have a hard time setting goals and setting expectations. Why is this?

1. It’s really hard to tell in advance what’s hard and what’s easy.

Is it harder to beat Kasparov at chess or to pick up and physically move the chess pieces? Computers beat the world chess champion over twenty years ago, but reliably grasping and lifting objects is still an unsolved research problem. Humans are not good at evaluating what will be hard for AI and what will be easy. Even within a domain, performance can vary wildly. What’s good accuracy for predicting sentiment? On movie reviews, where there is a lot of text and writers tend to be fairly clear about what they think, 90–95% accuracy is expected these days. On Twitter, two humans might agree on the sentiment of a tweet only 80% of the time. It might be possible to get 95% accuracy on the sentiment of tweets about certain airlines simply by always predicting that the sentiment will be negative.
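
That airline example is just the majority-class baseline: when labels are heavily skewed, a constant prediction looks deceptively accurate. A quick check with invented labels:

```python
# Majority-class baseline on invented, skewed sentiment labels.
from collections import Counter

labels = ["neg"] * 95 + ["pos"] * 5                # 95% negative tweets
majority = Counter(labels).most_common(1)[0][0]    # the constant prediction
baseline_acc = sum(l == majority for l in labels) / len(labels)
print(majority, baseline_acc)                      # neg 0.95
```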

Metrics can also increase a lot in the early days of a project and then suddenly hit a wall. I once ran a Kaggle competition where thousands of people competed around the world to model my data. In the first week, the accuracy went from 35% to 65%, but then over the next several months it never got above 68%. 68% accuracy was clearly the limit on the data with the best, most up-to-date machine learning techniques. The people competing in the Kaggle competition worked incredibly hard to get that 68% accuracy, and I’m sure they felt it was a huge achievement. But for most use cases, 65% vs 68% is totally indistinguishable. If that had been an internal project, I would have definitely been disappointed by the outcome.

My friend Pete Skomoroch was recently telling me how frustrating it was to do engineering standups as a data scientist working on machine learning. Engineering projects generally move forward, but machine learning projects can completely stall. It’s possible, even common, for a week spent on modeling data to result in no improvement whatsoever.

2. Machine Learning is prone to fail in unexpected ways.

Machine learning generally works well as long as you have lots of training data *and* the data you’re running on in production looks a lot like your training data. Humans are so good at generalizing from training data that we have terrible intuitions about this. I built a little robot with a camera and a vision model trained on the millions of ImageNet images, which were taken off the web. I preprocessed the images from my robot’s camera to look like the images from the web, but the accuracy was much worse than I expected. Why? Images off the web tend to frame the object in question. My robot wouldn’t necessarily look right at an object the way a human photographer would. Humans would likely not even notice the difference, but modern deep learning networks suffered a lot. There are ways to deal with this phenomenon, but I only noticed it because the degradation in performance was so jarring that I spent a lot of time debugging it.

Much more pernicious are the subtle differences that lead to degraded performance that are hard to spot. Language models trained on the New York Times don’t generalize well to social media texts. We might expect that. But apparently, models trained on text from 2017 experience degraded performance on text written in 2018. Upstream distributions shift over time in lots of ways. Fraud models break down completely as adversaries adapt to what the model is doing.

3. Machine Learning requires lots and lots of relevant training data.

Everyone knows this and yet it’s such a huge barrier. Computer vision can do amazing things, provided you are able to collect and label a massive amount of training data. For some use cases, the data is a free byproduct of some business process. This is where machine learning tends to work really well. For many other use cases, training data is incredibly expensive and challenging to collect. A lot of medical use cases seem perfect for machine learning — crucial decisions with lots of weak signals and clear outcomes — but the data is locked up due to important privacy issues or not collected consistently in the first place.

Many companies don’t know where to start in investing in collecting training data. It’s a significant effort and it’s hard to predict a priori how well the model will work.

What are the best practices to deal with these issues?

1. Pay a lot of attention to your training data.
Look at the cases where the algorithm is misclassifying data that it was trained on. These are almost always mislabels or strange edge cases. Either way, you really want to know about them. Make everyone working on building models look at the training data and label some of it themselves. For many use cases, it’s very unlikely that a model will do better than the rate at which two independent humans agree.
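
Here is a hedged scikit-learn sketch of that triage, using a stand-in dataset rather than any particular production pipeline: fit a model, then surface the training examples it still gets wrong, which are the usual suspects for mislabels and strange edge cases.

```python
# Sketch: list training examples the model misclassifies even after fitting.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)                  # stand-in dataset
model = LogisticRegression(max_iter=2000).fit(X, y)

pred = model.predict(X)
suspects = [(i, y[i], pred[i]) for i in range(len(y)) if pred[i] != y[i]]
print(f"{len(suspects)} training examples misclassified; review these first")
for i, true, guess in suspects[:10]:                 # eyeball a handful
    print(f"index {i}: labeled {true}, predicted {guess}")
```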

2. Get something working end-to-end right away, then improve one thing at a time.
Start with the simplest thing that might work and get it deployed. You will learn a ton from doing this. Additional complexity at any stage in the process always improves models in research papers but it seldom improves models in the real world. Justify every additional piece of complexity.

Getting something into the hands of the end user helps you get an early read on how well the model is likely to work and it can bring up crucial issues like a disagreement between what the model is optimizing and what the end user wants. It also may make you reassess the kind of training data you are collecting. It’s much better to discover those issues quickly.

3. Look for graceful ways to handle the inevitable cases where the algorithm fails.
Nearly all machine learning models fail a fair amount of the time, and how this is handled is absolutely crucial. Models often have a reliable confidence score that you can use. With batch processes, you can build human-in-the-loop systems that send low-confidence predictions to an operator, making the system work reliably end to end while collecting high-quality training data. With other use cases, you might be able to present low-confidence predictions in a way that flags potential errors or makes them less annoying to the end user.
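
The routing pattern itself is simple to sketch. In this minimal, illustrative version the confidence threshold is a tunable assumption: confident predictions pass through automatically, and the rest are queued for a human operator whose corrections double as fresh training data.

```python
# Sketch of confidence-based human-in-the-loop routing; the threshold and
# the (item, label, confidence) format are illustrative assumptions.
def route(predictions, threshold=0.9):
    auto, review = [], []
    for item, label, confidence in predictions:
        if confidence >= threshold:
            auto.append((item, label))                 # trusted automatically
        else:
            review.append((item, label, confidence))   # sent to an operator
    return auto, review

batch = [("doc1", "invoice", 0.97), ("doc2", "receipt", 0.61),
         ("doc3", "invoice", 0.88)]
auto, review = route(batch)
print(f"{len(auto)} auto-accepted, {len(review)} queued for human review")
# Operator-corrected items feed back in as high-quality training data.
```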

What’s Next?

The original goal of machine learning was mostly around smart decision making, but more and more we are trying to put machine learning into products we use. As we start to rely more and more on machine learning algorithms, machine learning becomes an engineering discipline as much as a research topic. I’m incredibly excited about the opportunity to build completely new kinds of products but worried about the lack of tools and best practices. So much so that I started a company to help with this called Weights and Biases. If you’re interested in learning more, check out what we’re up to.

Source : https://medium.com/@l2k/why-are-machine-learning-projects-so-hard-to-manage-8e9b9cf49641

Open Source Software – Investable Business Model or Not? – Natallia Chykina

Open-source software (OSS) is a catalyst for growth and change in the IT industry, and one can’t overestimate its importance to the sector. Quoting Mike Olson, co-founder of Cloudera, “No dominant platform-level software infrastructure has emerged in the last ten years in closed-source, proprietary form.”

Apart from independent OSS projects, an increasing number of companies, including the blue chips, are opening their source code to the public. They start by distributing their internally developed products for free, giving rise to widespread frameworks and libraries that later become an industry standard (e.g., React, Flow, Angular, Kubernetes, TensorFlow, V8, to name a few).

Adding to this momentum, there has been a surge in venture capital dollars being invested into the sector in recent years. Several high profile funding rounds have been completed, with multimillion dollar valuations emerging (Chart 1).

But are these valuations justified? And more importantly, can the business perform, both growth-wise and profitability-wise, as venture capitalists expect? OSS companies typically monetize with a business model based around providing support and consulting services. How well does this model translate to the traditional VC growth model? Is the OSS space in a VC-driven bubble?

In this article, I assess the questions above, and find that the traditional monetization model for OSS companies based on providing support and consulting services doesn’t seem to lend itself well to the venture capital growth model, and that OSS companies likely need to switch their pricing and business models in order to justify their valuations.

OSS Monetization Models

By definition, open source software is free. This of course generates obvious advantages to consumers, and in fact, a 2008 study by The Standish Group estimates that “free open source software is [saving consumers] $60 billion [per year in IT costs].”

While providing free software is obviously good for consumers, it still costs money to develop. Very few companies are able to live on donations and sponsorships. And with fierce competition from proprietary software vendors, growing R&D costs, and ever-increasing marketing requirements, providing a “free” product necessitates a sustainable path to market success.

As a result of the above, a commonly seen structure related to OSS projects is the following: A “parent” commercial company that is the key contributor to the OSS project provides support to users, maintains the product, and defines the product strategy.

Latched on to this are the monetization strategies, the most common being the following:

  • Extra charge for enterprise services, support, and consulting. The classic model targeted at large enterprise clients with sophisticated needs. Examples: MySQL, Red Hat, Hortonworks, DataStax
  • Freemium (advanced features/products/add-ons). A custom licensed product on top of the OSS might generate a lavish revenue stream, but it requires a lot of R&D costs and time to build. Example: Cloudera, which provides the basic version for free and charges the customers for Cloudera Enterprise
  • SaaS/PaaS business model: The modern way to monetize the OSS products that assumes centrally hosting the software and shifting its maintenance costs to the provider. Examples: Elastic, GitHub, Databricks, SugarCRM

Historically, the vast majority of OSS projects have pursued the first monetization strategy (support and consulting), but at their core, all of these models allow a company to earn money on its “bread and butter” and feed the development team as needed.

Influx of VC Dollars

An interesting recent development has been the huge inflows of VC/PE money into the industry. Going back to 2004, only nine firms producing OSS had raised venture funding, but by 2015, that number had exploded to 110, raising over $7 billion from venture capital funds (Chart 2).

Underpinning this development is the large addressable market that OSS companies benefit from. Akin to other “platform” plays, OSS allows companies (in theory) to rapidly expand their customer base, with the idea that at some point in the future they can leverage this growth by beginning to tack on appropriate monetization models, translating their customer base into revenue and profits.

At the same time, we’re also seeing an increasing number of reports about potential IPOs in the sector. Several OSS commercial companies, some of them unicorns with $1B+ valuations, have been rumored to be mulling a public markets debut (MongoDB, Cloudera, MapR, Alfresco, Automattic, Canonical, etc.).

With this in mind, the obvious question is whether the OSS model works from a financial standpoint, particularly for VC and PE investors. After all, the venture funding model necessitates rapid growth in order to comply with their 7-10 year fund life cycle. And with a product that is at its core free, it remains to be seen whether OSS companies can pin down the correct monetization model to justify the number of dollars invested into the space.

Answering this question is hard, mainly because most of these companies are private and therefore do not disclose their financial performance. Usually, the only sources of information that can be relied upon are the estimates of industry experts and management interviews where unaudited key performance metrics are sometimes disclosed.

Nevertheless, in this article, I take a look at the evidence from the only two public OSS companies in the market, Red Hat and Hortonworks, and use their publicly available information to try and assess the more general question of whether the OSS model makes sense for VC investors.

Case Study 1: Red Hat

Red Hat is an example of a commercial company that pioneered the open source business model. Founded in 1993, it went public in 1999, right before the Dot Com Bubble, achieving the 8th-biggest first-day gain in share price in the history of Wall Street at that time.

At the time of their IPO, Red Hat was not a profitable company, but since then has managed to post solid financial results, as detailed in Table 1.

Instead of chasing multifold annual growth, Red Hat has followed the “boring” path of gradually building a sustainable business. Over the last ten years, the company increased its revenues tenfold from $200 million to $2 billion with no significant change in operating and net income margins. G&A and marketing expenses never exceeded 50% of revenue (Chart 3).

The above indicates therefore that OSS companies do have a chance to build sustainable and profitable business models. Red Hat’s approach of focusing primarily on offering support and consulting services has delivered gradual but steady growth, and the company is hardly facing any funding or solvency problems, posting decent profitability metrics when compared to peers.

However, what is clear from the Red Hat case study is that such a strategy can take time—many years, in fact. While this is a perfectly reasonable situation for most companies, the issue is that it doesn’t sit well with venture capital funds who, by the very nature of their business model, require far more rapid growth profiles.

More troubling for venture capital investors is that the OSS model may in and of itself not allow for the type of growth that such funds require. As MySQL founder Marten Mickos put it, MySQL’s goal was “to turn the $10 billion a year database business into a $1 billion one.”

In other words, the open source approach limits the market size from the get-go by making the company focus only on enterprise customers who are able to pay for support, and foregoing revenue from a long tail of SME and retail clients. That may help explain the company’s less than exciting stock price performance post-IPO (Chart 4).

If such a conclusion were true, this would spell trouble for those OSS companies that have raised significant amounts of VC dollars along with the funds that have invested in them.

Case Study 2: Hortonworks

To further assess our overarching question of OSS’s viability as a venture capital investment, I took a look at another public OSS company: Hortonworks.

The Hadoop vendors’ market is an interesting one because it is completely built around the “open core” idea (another comparable market being the NoSQL database space, with OSS players MongoDB, Datastax, and Couchbase).

All three of the largest Hadoop vendors—Cloudera, Hortonworks, and MapR—are based on essentially the same OSS stack (with some specific differences) but interestingly have different monetization models. In particular, Hortonworks—the only public company among them—is the only player that provides all of its software for free and charges only for support, consulting, and training services.

At first glance, Hortonworks’ post-IPO path appears to differ considerably from Red Hat’s in that it seems to be a story of rapid growth and success. The company was founded in 2011, tripled its revenue every year for three consecutive years, and went public in 2014.

Immediate reception in the public markets was strong, with the stock popping 65% in the first few days of trading. Nevertheless, the company’s story since IPO has turned decisively sour. In January 2016, the company was forced to access the public markets again for a secondary public offering, a move that prompted a 60% share price fall within a month (Chart 5).

Underpinning all this is the fact that, despite top-line growth, the company continues to incur substantial, and growing, operating losses. It is evident from the financial statements that its operating performance has worsened over time, mainly because operating expenses have grown faster than revenue, leading to increasing losses as a percent of revenue (Table 2).

In every period in question, Hortonworks spent more on sales and marketing than it earned in revenue. On top of that, the company incurred significant R&D and G&A expenses as well (Table 2).

On average, Hortonworks is burning around $100 million of cash per year (less than its operating loss, because of stock-based compensation expenses and changes in deferred revenue booked on the balance sheet). This burn is very significant compared to its $630 million market capitalization and the circa $350 million raised from investors so far. Of course, the company can still raise debt (which it did in November 2016, to the tune of a $30 million loan from SVB), but there is a natural limit to how often it can tap the debt markets.
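
The reconciliation in the paragraph above can be made concrete with a little arithmetic. Below is a minimal sketch in Python; the market capitalization, capital raised, and ~$100 million burn figure come from the article, while the operating loss, stock-based compensation, and deferred revenue inputs are hypothetical placeholders chosen only to illustrate how non-cash items shrink the gap between operating loss and cash burn.

```python
# Rough reconciliation of operating loss to cash burn, per the logic above.
# Only market_cap, total_raised, and the ~$100M burn are from the article;
# the other inputs are hypothetical illustrations.

operating_loss = 180e6            # hypothetical annual operating loss
stock_based_comp = 50e6           # hypothetical non-cash SBC add-back
deferred_revenue_increase = 30e6  # hypothetical cash collected ahead of revenue

cash_burn = operating_loss - stock_based_comp - deferred_revenue_increase
print(f"Approximate annual cash burn: ${cash_burn / 1e6:.0f}M")  # ~$100M

market_cap = 630e6    # quoted in the article
total_raised = 350e6  # quoted in the article
print(f"Burn as a share of market cap: {cash_burn / market_cap:.0%}")  # ~16%
print(f"Years of burn covered by capital raised: {total_raised / cash_burn:.1f}")  # ~3.5
```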

All of this might of course be justified if the marketing expense served an important purpose. One such purpose could be the company’s need to diversify its customer base. In fact, when Hortonworks first launched, it was heavily reliant on a few major clients (Yahoo and Microsoft, the latter accounting for 37% of revenues in 2013). This has now changed, and by 2016 the company reported 1,000 customers.

But again, even if this were the reason, one cannot ignore the costs required to achieve it. After all, marketing expenses increased eightfold between 2013 and 2015. And how valuable are the clients that Hortonworks has acquired? Unfortunately, the company reports little information on the makeup of its client base, so it’s hard to assess other important metrics such as client “stickiness”. But in a competitive OSS market where “rival developers could build the same tools—and make them free—essentially stripping the value from the proprietary software,” strong doubts loom.

With all this in mind, let’s return to our original question of whether the OSS model makes for good VC investments. While the Hortonworks growth story certainly seems to counter Red Hat’s, and therefore to sustain the idea that such investments can work from a VC standpoint, I remain skeptical. Hortonworks appears to be chasing market share at exorbitant and unsustainable cost. And while this conclusion is based on only two companies in the space, it is enough to raise serious doubts about the overall model’s fit for VC.

Why are VCs Investing in OSS Companies?

Given the above, it seems questionable that OSS companies make for good VC investments. So with this in mind, why do venture capital funds continue to invest in such companies?

Good Fit for a Strategic Acquisition

Apart from going public and growing organically, an OSS company may find a strategic buyer to provide a good exit opportunity for its early stage investors. And in fact, the sector has seen several high profile acquisitions over the years (Table 3).

What makes an OSS company a good target? In general, the underlying strategic rationale for an acquisition might be as follows:

  • Getting access to the client base. Sun is reported to have been motivated by this when it acquired MySQL: it wanted access to the SME market and the ability to cross-sell other products to smaller clients. Simply forking the product or developing a competing technology internally wouldn’t have delivered the customer base and would have forced Sun to incur additional customer acquisition costs.
  • Getting control over the product. The ability to influence further development of the product is a crucial factor for a strategic buyer, allowing it to build and expand its own product offering on top of the acquired products without worrying about sudden, substantial changes to them. Example: Red Hat acquiring Ansible, KVM, Gluster, Inktank (Ceph), and many more.
  • Entering adjacent markets. Acquiring open source companies in adjacent market segments, again, allows a company to expand its product offering, which makes vendor lock-in easier and scales the business further. Example: Citrix acquiring XenSource.
  • Acquiring the team. This is more relevant for smaller and younger projects than for larger, more well-established ones, but is worth mentioning.

What about the financial rationale? The standard transaction-multiples valuation approach breaks down completely when it comes to the OSS market. Multiples reach 20x and even 50x price/sales and are therefore largely irrelevant, leading to the obvious conclusion that such deals are strategically rather than financially motivated, and that the financial health of the target is more of a “nice to have.”

With this in mind, would a strategy of investing in OSS companies with the eventual aim of a strategic sale make sense? After all, there seems to be a decent track record to go on.

My assessment is that this strategy on its own is not enough. Pursuing such an approach from the start is risky—there are not enough exits in the history of OSS to justify the risks.

A Better Monetization Model: SaaS

While the promise of a lucrative strategic sale may be enough to motivate VC funds to put money to work in the space, as discussed above, it remains a risky path. As such, the rationale for such investments must rely on other factors as well. One such factor could be a return to basics: building profitable companies.

But as we have seen in the case studies above, this strategy doesn’t seem to be working out so well, certainly not within the timeframes required for VC investors. Nevertheless, it is important to point out that both Red Hat and Hortonworks primarily focus on monetizing through offering support and consulting services. As such, it would be wrong to dismiss OSS monetization prospects altogether. More likely, monetization models focused on support and consulting are inappropriate, but others may work better.

In fact, the SaaS business model might be the answer. As per Peter Levine’s analysis, “by packaging open source into a service, […] companies can monetize open source with a far more robust and flexible model, encouraging innovation and ongoing investment in software development.”

Why is SaaS a better model for OSS? There are several reasons for this, most of which are applicable not only to OSS SaaS, but to SaaS in general.

First, SaaS opens the market for the long tail of SME clients. Smaller companies usually don’t need enterprise support and on-premises installation, but may already have sophisticated needs from a technology standpoint. As a result, it’s easier for them to purchase a SaaS product and pay a relatively low price for using it.

Quoting MongoDB’s VP of Strategy, Kelly Stirman: “Where we have a suite of management technologies as a cloud service, that is geared for people that we are never going to talk to and it’s at a very attractive price point—$39 a server, a month. It allows us to go after this long tail of the market that isn’t Fortune 500 companies, necessarily.”

Second, SaaS scales well. By aggregating resources and centralizing customer requirements, SaaS creates economies of scale that let clients save money on infrastructure and operations while improving manageability.

This, therefore, makes it an attractive model for clients who, as a result, will be more willing to lock themselves into monthly payment plans in order to reap the benefits of the service.

Finally, SaaS businesses are more difficult to replicate. In the traditional OSS model, everyone has access to the source code, so the support and consulting business model offers the incumbent little protection from new market entrants.

In the SaaS OSS case, the investment required to build the infrastructure upon which clients rely is substantial. This raises barriers to entry and makes it more difficult for competitors who lack the same level of funding to replicate the offering.

Success Stories for OSS with SaaS

Importantly, OSS SaaS companies can be financially viable on their own. GitHub is a good example of this.

Founded in 2008, GitHub was able to bootstrap the business for four years without any external funding. The company has reportedly always been cash-flow positive (except for 2015) and generated estimated revenues of $100 million in 2016. In 2012, they accepted $100 million in funding from Andreessen Horowitz and later in 2015, $250 million from Sequoia with an implied $2 billion valuation.

Another well-known successful OSS company is Databricks, which provides commercial support for Apache Spark but—more importantly—allows its customers to run Spark in the cloud. The company has raised $100 million from Andreessen Horowitz, Data Collective, and NEA. Unfortunately, we don’t have much insight into their profitability, but they are reported to be performing strongly and already had more than 500 companies using the technology as of 2015.

Generally, many OSS companies are in one way or another gradually drifting towards the SaaS model or other types of cloud offerings. For instance, Red Hat is moving to PaaS over support and consulting, as evidenced by OpenShift and the acquisition of AnsibleWorks.

Different ways of mixing support and consulting with SaaS are common too. We unfortunately don’t have detailed statistics on Elastic’s on-premises vs. cloud product mix, but we can see from the presentations of its closest competitor, Splunk, that its SaaS offering is gaining scale: its share of revenue is expected to triple by 2020 (Chart 6).

Investable Business Model or Not?

To conclude, while recent years have seen an influx of venture capital dollars poured into OSS companies, there are strong doubts that such investments make sense if the monetization models being used remain focused on the traditional support and consulting model. Such a model can work (as seen in the Red Hat case study) but cannot scale at the pace required by VC investors.

Of course, VC funds may always hope for a lucrative strategic exit, and there have been several examples of such transactions. But relying on this alone is not enough. OSS companies need to innovate around monetization strategies in order to build profitable and fast-growing companies.

The most plausible answer to this conundrum may come from switching to SaaS as a business model. SaaS allows a company to tap into the long tail of SME clients and improve margins through better product offerings. Quoting Peter Levine again: “Cloud and SaaS adoption is accelerating at an order of magnitude faster than on-premise deployments, and open source has been the enabler of this transformation. Beyond SaaS, I would expect there to be future models for open source monetization, which is great for the industry.”

Whatever ends up happening, the sheer amount of venture investment into OSS companies means that smarter monetization strategies will be needed to keep the open source dream alive.

Source : https://www.toptal.com/finance/venture-capital-consultants/open-source-software-investable-business-model-or-not

Industrial tech may not be sexy, but VCs are loving it – John Tough

There are nearly 300 industrial-focused companies within the Fortune 1,000. The median revenue for these economy-anchoring firms is nearly $4.9 billion, and together they account for over $9 trillion in market capitalization.

Due to the boring nature of some of these industrial verticals and the complexity of the value chain, venture-related events tend to get lost within our traditional VC news channels. But entrepreneurs (and VCs willing to fund them) are waking up to the potential rewards of gaining access to these markets.

Just how active is the sector now?

That’s right: Last year, nearly $6 billion went into Series A, B, and C startups within the industrial, engineering & construction, power, energy, mining & materials, and mobility segments. Venture capital dollars deployed to these sectors are growing at a 30 percent annual rate, up from ~$750 million in 2010.

And while $6 billion invested is notable relative to previous benchmarks, this early-stage investment figure still equates to only ~0.2 percent of the sector’s revenue and ~1.2 percent of its profits.

The number of deals in the space shows a similarly strong growth trajectory, but some interesting trends are beginning to emerge: capital deployed to the industrial technology market is growing at a faster clip than the number of deals. These differing growth trajectories mean that the average deal size has grown by 45 percent in the last eight years, from $18 million to $26 million.
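
Since the piece leans on several growth figures, here is a minimal sanity-check of the arithmetic in Python, using only the numbers quoted above (~$750 million in 2010 growing to ~$6 billion in 2018, and average deal size moving from $18 million to $26 million):

```python
# Sanity-checking the sector growth figures quoted above.

dollars_2010, dollars_2018 = 750e6, 6e9
years = 2018 - 2010

# Compound annual growth rate of venture dollars deployed.
cagr = (dollars_2018 / dollars_2010) ** (1 / years) - 1
print(f"Implied CAGR of dollars deployed: {cagr:.1%}")  # ~29.7%, i.e. the ~30% cited

# Cumulative growth in average deal size over the same eight years.
deal_2010, deal_2018 = 18e6, 26e6
print(f"Average deal size growth: {deal_2018 / deal_2010 - 1:.0%}")  # ~44-45%, as cited
```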

Detail by stage of financing

Median Series A deal size in 2018 was $11 million, representing a modest 8 percent increase in size versus 2012/2013. But Series A deal volume is up nearly 10x since then!

Median Series B deal size in 2018 was $20 million, representing 83 percent growth over the past five years, while deal volume is up about 4x.

Median Series C deal size in 2018 was $33 million, representing an enormous 113 percent growth over the past five years. But Series C deal counts appear to have plateaued in the low 40s, so investors are becoming pickier in selecting the winners.

These graphs show that Series A deal sizes have stayed relatively consistent and that the overall 46 percent increase in sector deal size primarily originates from the Series B and Series C investment rounds. With bigger rounds, how are valuation levels adjusting?

Above: Growth in pre-money valuation is particularly acute in later-stage deals

The data shows that valuations have increased even faster than the round sizes themselves. This means management teams are not suffering incremental dilution by raising these larger rounds; the arithmetic behind the figures below is sketched in the short example after the list.

  • The average Series A round now buys about 24 percent of the company, slightly less than five years ago.
  • The average Series B round now buys about 22 percent, down from 26 percent five years ago.
  • The average Series C round now buys approximately 20 percent, down from 23 percent five years ago.
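
As a quick illustration of the dilution arithmetic referenced above: the stake a round buys equals the new money divided by the post-money valuation (pre-money plus new money). The sketch below uses the article’s median 2018 round sizes; the pre-money valuations are hypothetical, chosen only to show how rising valuations let larger rounds buy similar or smaller stakes.

```python
# Dilution arithmetic behind the bullets above. Round sizes are the medians
# quoted in the article; pre-money valuations are hypothetical illustrations.

def stake_sold(round_size: float, pre_money: float) -> float:
    """Fraction of the company sold in a priced round."""
    return round_size / (pre_money + round_size)

for label, round_size, pre_money in [
    ("Series A", 11e6, 35e6),   # -> ~24%
    ("Series B", 20e6, 71e6),   # -> ~22%
    ("Series C", 33e6, 132e6),  # -> ~20%
]:
    print(f"{label}: round buys {stake_sold(round_size, pre_money):.0%} of the company")
```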

Some conclusions

  • Dollars invested remain a small portion of industry revenue and profit, leaving room for further capital commitments.
  • There is a growing appreciation for the industrial sales cycle. Investor willingness to wait for reduced risk to deploy even more capital in the perceived winners appears to be driving this trend.
  • Entrepreneurs that can successfully de-risk their enterprise through revenue, partnerships, and industry hires will gain access to outsized capital pools. The winners in this market tend to compound, as later customers look to early adopters.
  • Uncertainty still remains about exit opportunities for technology companies that serve these industries. While there are a few headline-grabbing acquisitions (PlanGrid, Kurion, OSIsoft), we are not hearing about a sizable exit from this market on a weekly or monthly cadence. This means we won’t know for a few years about the returns impact of these rising valuations. Grab your hard hat!

Source : https://venturebeat.com/2019/01/22/industrial-tech-may-not-be-sexy-but-vcs-are-loving-it/

Money Out of Nowhere: How Internet Marketplaces Unlock Economic Wealth – Bill Gurley

In 1776, Adam Smith released his magnum opus, An Inquiry into the Nature and Causes of the Wealth of Nations, in which he outlined his fundamental economic theories. Front and center in the book — in fact in Book 1, Chapter 1 — is his realization of the productivity improvements made possible through the “Division of Labour”:

It is the great multiplication of the production of all the different arts, in consequence of the division of labour, which occasions, in a well-governed society, that universal opulence which extends itself to the lowest ranks of the people. Every workman has a great quantity of his own work to dispose of beyond what he himself has occasion for; and every other workman being exactly in the same situation, he is enabled to exchange a great quantity of his own goods for a great quantity, or, what comes to the same thing, for the price of a great quantity of theirs. He supplies them abundantly with what they have occasion for, and they accommodate him as amply with what he has occasion for, and a general plenty diffuses itself through all the different ranks of society.

Smith identified that when men and women specialize their skills, and importantly also “trade” with one another, the end result is a rise in productivity and standard of living for everyone. In 1817, David Ricardo published On the Principles of Political Economy and Taxation, in which he expanded upon Smith’s work in developing the theory of Comparative Advantage. What Ricardo proved mathematically is that if one country has merely a comparative advantage (not even an absolute one), it is still in everyone’s best interest to embrace specialization and free trade. In the end, everyone ends up in a better place.

There are two key requirements for these mechanisms to take force. First and foremost, you need free and open trade. It is quite bizarre to see modern-day politicians throw caution to the wind and ignore these fundamental tenets of economic science. Time and time again, the fact patterns show that when countries open borders and freely trade, the end result is increased economic prosperity. The second, and less discussed, requirement is for the two parties that should trade to be aware of one another’s goods or services. Unfortunately, information asymmetry and physical distance (with its resulting distribution costs) can both cut against the economic advantages that would otherwise arise for all.

Fortunately, the rise of the Internet, and specifically of Internet marketplace models, acts as an accelerant to the productivity benefits of the division of labour AND comparative advantage, by reducing information asymmetry and increasing the likelihood of a perfect match with regard to the exchange of goods or services. In his 2005 book, The World Is Flat, Thomas Friedman recognizes that the Internet has the ability to create a “level playing field” for all participants, one where geographic distances become less relevant. The core reason that Internet marketplaces are so powerful is that, by connecting economic traders who would otherwise not be connected, they unlock economic wealth that otherwise would not exist. In other words, they literally create “money out of nowhere.”

EXCHANGE OF GOODS MARKETPLACES

Any discussion of Internet marketplaces begins with the first quintessential marketplace, eBay(*). Pierre Omidyar founded AuctionWeb in September of 1995, and its rise to fame is legendary. What started as a web site to trade laser pointers and Beanie Babies (the Pez dispenser origin story is quite literally a legend) today enables transactions of approximately $100B per year. Over its twenty-plus-year lifetime, just over one trillion dollars in goods have traded hands across eBay’s servers. These transactions, and the profits realized by the sellers, were truly “unlocked” by eBay’s matching and auction services.

In 1999, Jack Ma created Alibaba, a China-based B2B marketplace connecting small and medium enterprises with potential export opportunities. Four years later, in May of 2003, they launched Taobao Marketplace, Alibaba’s answer to eBay. By aggressively launching a free-to-use service, Alibaba’s Taobao quickly became the leading person-to-person trading site in China. In 2018, Taobao GMV (Gross Merchandise Value) was a staggering RMB2,689 billion, which equates to $428 billion in US dollars.

There have been many other successful goods marketplaces that have launched post eBay & Taobao — all providing a similar service of matching those who own or produce goods with a distributed set of buyers who are particularly interested in what they have to offer. In many cases, a deeper focus on a particular category or vertical allows these marketplaces to distinguish themselves from broader marketplaces like eBay.

  • In 2000, Eric Baker and Jeff Fluhr founded StubHub, a secondary ticket exchange marketplace. The company was acquired by eBay in January 2007. In its most recent quarter, StubHub’s GMV reached $1.4B, and for the entire year 2018, StubHub had GMV of $4.8B.
  • Launched in 2005, Etsy is a leading marketplace for the exchange of vintage and handmade items. In its most recent quarter, the company processed the exchange of $923 million of sales, which equates to a $3.6B annual GMV.
  • Founded by Michael Bruno in Paris in 2001, 1stdibs(*) is the world’s largest online marketplace for luxury one-of-a-kind antiques, high-end modern furniture, vintage fashion, jewelry, and fine art. In November 2011, David Rosenblatt took over as CEO and has been scaling the company ever since. Over the past few years dealers, galleries, and makers have matched billions of dollars in merchandise to trade buyers and consumer buyers on the platform.
  • Poshmark was founded by Manish Chandra in 2011. The website, which is an exchange for new and used clothing, has been remarkably successful. Over 4 million sellers have earned over $1 billion transacting on the site.
  • Julie Wainwright founded The Real Real in 2011. The company is an online marketplace for authenticated luxury consignment. In 2017, the company reported sales of over $500 million.
  • In 2015, Eddy Lu and Daishin Sugano launched GOAT, a marketplace for the exchange of sneakers. Despite this narrow focus, the company has been remarkably successful. The estimated annual GMV of GOAT and its leading competitor Stock X is already over $1B per year (on a combined basis).

SHARING ECONOMY MARKETPLACES

With the launch of Airbnb in 2008 and Uber(*) in 2009, these two companies established a new category of marketplaces known as the “sharing economy.” Homes and automobiles are the two most expensive items that people own, and in many cases the ability to own the asset is made possible through debt — mortgages on houses and car loans or leases for automobiles. Despite this financial exposure, for many people these assets are materially underutilized. Many extra rooms and second homes are vacant most of the year, and the average car is used less than 5% of the time. Sharing economy marketplaces allow owners to “unlock” earning opportunities from these underutilized assets.

Airbnb was founded by Joe Gebbia and Brian Chesky in 2008. Today there are over 5 million Airbnb listings in 81,000 cities. Over two million people stay in an Airbnb each night. In November of this year, the company announced that it had achieved “substantially” more than $1B in revenue in the third quarter. Assuming a marketplace rake of something like 11%, this would imply gross room revenue of over $9B for the quarter — which would be $36B annualized. As the company is still growing, we can easily guess that in 2019-2020 time frame, Airbnb will be delivering around $50B per year to home-owners who were previously sitting on highly underutilized assets. This is a major “unlocking.”

When Garrett Camp and Travis Kalanick founded Uber in 2009, they hatched the industry now known as ride-sharing. Today over 3 million people around the world use their time and their underutilized automobiles to generate extra income. Without the proper technology to match people who wanted a ride with people who could provide that service, taxi and chauffeur companies were drastically underserving the potential market. As an example, we estimate that ride-sharing revenues in San Francisco are well north of 10X what taxis and black cars were providing prior to the launch of ride-sharing. These numbers will go even higher as people increasingly forgo the notion of car ownership altogether. We estimate that the global GMV for ride sharing was over $100B in 2018 (including Uber, Didi, Grab, Lyft, Yandex, etc) and still growing handsomely. Assuming a 20% rake, this equates to over $80B that went into the hands of ride-sharing drivers in a single year — and this is an industry that did not exist 10 years ago. The matching made possible with today’s GPS and Internet-enabled smart phones is a massive unlocking of wealth and value.
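
The Airbnb and ride-sharing estimates above both rest on the same marketplace “rake” arithmetic: the rake is the fraction of gross transaction value the marketplace keeps, so revenue divided by the rake implies total GMV, and GMV times one minus the rake is what flows to hosts or drivers. A minimal sketch, using the figures and assumed rakes from the two paragraphs above:

```python
# Marketplace rake arithmetic used in the Airbnb and ride-sharing estimates.

def implied_gmv(marketplace_revenue: float, rake: float) -> float:
    """Gross transaction value implied by the marketplace's own revenue."""
    return marketplace_revenue / rake

def supply_side_payout(gmv: float, rake: float) -> float:
    """Dollars flowing to hosts/drivers after the marketplace takes its cut."""
    return gmv * (1 - rake)

# Airbnb: >$1B quarterly revenue at an assumed ~11% rake.
airbnb_q_gmv = implied_gmv(1e9, 0.11)
print(f"Implied quarterly room revenue: ${airbnb_q_gmv / 1e9:.1f}B")  # ~$9.1B
print(f"Annualized: ${4 * airbnb_q_gmv / 1e9:.0f}B")                  # ~$36B

# Ride-sharing: ~$100B global GMV in 2018 at an assumed 20% rake.
print(f"Driver payout: ${supply_side_payout(100e9, 0.20) / 1e9:.0f}B")  # ~$80B
```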

While it is a lesser-known category, using your own backyard and home to host dog guests as an alternative to a kennel is a large and growing business. Once again, this is an asset against which the marginal cost to host a dog is near zero. By combining their time with this otherwise unused asset, dog sitters are able to offer a service that is quite compelling for consumers. Rover.com(*) in Seattle, which was founded by Greg Gottesman and Aaron Easterly in 2011, is the leading player in this market. (Benchmark is an investor in Rover through a merger with DogVacay in 2017.) You may be surprised to learn that this is already a massive industry. In less than a decade since the company started, Rover has already paid out over half a billion dollars to hosts that participate on the platform.

EXCHANGE OF LABOR MARKETPLACES

While not as well known as the goods exchanges or sharing economy marketplaces, there is a growing and exciting increase in the number of marketplaces that help match specifically skilled labor with key opportunities to monetize their skills. The most noteworthy of these is likely Upwork(*), a company that formed from the merger of Elance and Odesk. Upwork is a global freelancing platform where businesses and independent professionals can connect and collaborate remotely. Popular categories include web developers, mobile developers, designers, writers, and accountants. In the 12 months ended June 30, 2018, the Upwork platform enabled $1.56 billion of GSV (gross services revenue) across 2.0 million projects between approximately 375,000 freelancers and 475,000 clients in over 180 countries. These labor matches represent the exact “world is flat” reality outlined in Friedman’s book.

Other noteworthy and emerging labor marketplaces:

  • HackerOne(*) is the leading global marketplace that coordinates the world’s largest corporate “bug bounty” programs with a network of the world’s leading hackers. The company was founded in 2012 by Michiel Prins, Jobert Abma, Alex Rice, and Merijn Terheggen, and today serves the needs of over 1,000 corporate bug bounty programs. On top of that, the HackerOne network of over 300,000 hackers (adding 600 more each day) has resolved over 100K confirmed vulnerabilities, which resulted in over $46 million in awards to these individuals. There is an obvious network effect at work when you bring together the world’s leading programs and the world’s leading hackers on a single platform. The Fortune 500 is quickly learning that having a bug bounty program is an essential step in fighting cyber crime, and that HackerOne is the best place to host their program.
  • Wyzant is a leading Chicago-based marketplace that connects tutors with students around the country. The company was founded by Andrew Geant and Mike Weishuhn in 2005. The company has over 80,000 tutors on its platform and has paid out over $300 million to these professionals. The company started matching students with tutors for in-person sessions, but increasingly these are done “virtually” over the Internet.
  • Stitch Fix (*) is a leading provider of personalized clothing services that was founded by Katrina Lake in 2011. While the company is not primarily a marketplace, each order is hand-curated by a work-at-home “stylist” who works part-time on their own schedule from the comfort of their own home. Stitch Fix’s algorithms match the perfect stylist with each and every customer to help ensure the optimal outcome for each client. As of the end of 2018, Stitch Fix has paid out well over $100 million to their stylists.
  • Swing Education was founded in 2015 with the objective of creating a marketplace for substitute teachers. While it is still early in the company’s journey, they have already established themselves as the leader in the U.S. market. Swing is now at over 1,200 school partners and has filled over 115,000 teacher absence days. They have helped 2,000 substitute teachers get in the classroom in 2018, including 400 educators who earned permits, which Swing willingly financed. While it seems obvious in retrospect, having all substitutes on a single platform creates massive efficiency in a market where previously every single school had to keep their own list and make last minute calls when they had vacancies. And their subs just have to deal with one Swing setup process to get access to subbing opportunities at dozens of local schools and districts.
  • RigUp was founded by Xuan Yong and Mike Witte in Austin, Texas in March of 2014. RigUp is a leading labor marketplace focused on the oilfield services industry. “The company’s platform offers a large network of qualified, insured and compliant contractors and service providers across all upstream, midstream and downstream operations in every oil and gas basin, enabling companies to hire quickly, track contractor compliance, and minimize administrative work.” According to the company, GMV for 2017 was an impressive $150 million, followed by an astounding $600 million in 2018. Often, investors miss out on vertically focused companies like RigUp as they find themselves overly anxious about TAM (total available market). As you can see, that can be a big mistake.
  • VIPKid, which was founded in 2013 by Cindy Mi, is a truly amazing story. The idea is simple and simultaneously brilliant. VIPKid links students in China who want to learn English with native English speaking tutors in the United States and Canada. All sessions are done over the Internet, once again epitomizing Friedman’s very flat world. In November of 2018, the company reported having 60,000 teachers contracted to teach over 500,000 students. Many people believe the company is now well north of a US$1B run rate, which implies that around $1B will pass hands from Chinese parents to western teachers in 2019. That is quite a bit of supplemental income for U.S.-based teachers.

These vertical labor marketplaces are to LinkedIn what companies like Zillow, Expedia, and GrubHub are to Google search. Through a deeper understanding of a particular vertical, a much richer perspective on the quality and differentiation of the participants, and the enablement of transactions — you create an evolved service that has much more value to both sides of the transaction. And for those professionals participating in these markets, your reputation on the vertical service matters way more than your profile on LinkedIn.

NEW EMERGING MARKETPLACES

Having been a fortunate investor in many of the previously mentioned companies (*), Benchmark remains extremely excited about future marketplace opportunities that will unlock wealth on the Internet. Here are two examples of such companies that we have funded in the past few years.

The New York Times describes Hipcamp as “The Sharing Economy Visits the Backcountry.” Hipcamp(*) was founded in 2013 by Alyssa Ravasio as an engine to search across the dozens and dozens of State and National park websites for campsite availability. As Hipcamp gained traction with campers, landowners with land near many of the National and State parks started to reach out to Hipcamp asking if they could list their land on Hipcamp too. Hipcamp now offers access to more than 350k campsites across public and private land, and their most active private land hosts make over $100,000 per year hosting campers. This is a pretty amazing value proposition for both land owners and campers. If you are a rural landowner, here is a way to create “money out of nowhere” with very little capital expenditures. And if you are a camper, what could be better than to camp at a unique, bespoke campsite in your favorite location.

Instawork(*) is an on-demand staffing app for gig workers (professionals) and hospitality businesses (partners). These working professionals seek economic freedom and a better life, and Instawork gives them both — an opportunity to work as much as they like, but on their own terms with regard to when and where. On the business partner side, small business owners/managers/chefs do not have access to reliable sources to help them with talent sourcing and high turnover, and products like LinkedIn are more focused on white-collar workers. Instawork was cofounded by Sumir Meghani in San Francisco and was a member of the 2015 Y-Combinator class. 2018 was a break-out year for Instawork, with 10X revenue growth and 12X growth in Professionals on the platform. The average Instawork Professional is highly engaged on the platform, typically opening the Instawork app ten times a day. This results in 97% of gigs being matched in less than 24 hours — which is powerfully important to both sides of the network. Also noteworthy, Professionals on Instawork average 150% of minimum wage, significantly higher than on many other labor marketplaces. This higher income allows Instawork Professionals like Jose to begin to accomplish their dreams.

THE POWER OF THESE PLATFORMS

As you can see, these numerous marketplaces are a direct extension of the productivity enhancers first uncovered by Adam Smith and David Ricardo. Free trade, specialization, and comparative advantage are all enhanced when we can increase the matching of supply and demand of goods and services as well as eliminate inefficiency and waste caused by misinformation or distance. As a result, productivity naturally improves.

Specific benefits of global internet marketplaces:

    1. Increase wealth distribution (all examples)
    2. Unlock wasted potential of assets (Uber, AirBNB, Rover, and Hipcamp)
    3. Better match of specific workers with specific opportunities (Upwork, WyzAnt, RigUp, VIPKid, Instawork)
    4. Make specific assets reachable and findable (Ebay, Etsy, 1stDibs, Poshmark, GOAT)
    5. Allow for increased specialization (Etsy, Upwork, RigUp)
    6. Enhance supplemental labor opportunities (Uber, Stitch Fix, SwingEducation, Instawork, VIPKid), where the worker is in control of when and where they work
    7. Reduce forfeiture by enhancing utilization (mortgages, car loans, etc.) (Uber, AirBnb, Rover, Hipcamp)

Source : http://abovethecrowd.com/2019/02/27/money-out-of-nowhere-how-internet-marketplaces-unlock-economic-wealth/

Digital Transformation of Business and Society: Challenges and Opportunities by 2020 – Frank Diana

At a recent KPMG Robotic Innovations event, Futurist and friend Gerd Leonhard delivered a keynote titled “The Digital Transformation of Business and Society: Challenges and Opportunities by 2020”. I highly recommend viewing the Video of his presentation. As Gerd describes, he is a Futurist focused on foresight and observations — not predicting the future. We are at a point in history where every company needs a Gerd Leonhard. For many of the reasons presented in the video, future thinking is rapidly growing in importance. As Gerd so rightly points out, we are still vastly under-estimating the sheer velocity of change.

With regard to future thinking, Gerd used my future scenario slide to describe both the exponential and combinatorial nature of future scenarios — not only do we need to think exponentially, but we also need to think in a combinatorial manner. Gerd mentioned Tesla as a company that really knows how to do this.

Our Emerging Future

He then described our current pivot point of exponential change: a point in history where humanity will change more in the next twenty years than in the previous 300. With that as a backdrop, he encouraged the audience to look five years into the future and spend 3 to 5% of their time focused on foresight. He quoted Peter Drucker (“In times of change the greatest danger is to act with yesterday’s logic”) and stated that leaders must shift from a focus on what is, to a focus on what could be. Gerd added that “wait and see” means “wait and die” (love that by the way). He urged leaders to focus on 2020 and build a plan to participate in that future, emphasizing the question is no longer what-if, but what-when. We are entering an era where the impossible is doable, and the headline for that era is: exponential, convergent, combinatorial, and inter-dependent — words that should be a key part of the leadership lexicon going forward. Here are some snapshots from his presentation:

  • Because of exponential progression, it is difficult to imagine the world in 5 years, and although the industrial era was impactful, it will not compare to what lies ahead. The danger of vastly under-estimating the sheer velocity of change is real. For example, in just three months, the projection for the number of autonomous vehicles sold in 2035 went from 100 million to 1.5 billion
  • Six years ago Gerd advised a German auto company about the driverless car and the implications of a sharing economy — and they laughed. Think of what’s happened in just six years — can’t imagine anyone is laughing now. He witnessed something similar as a veteran of the music business where he tried to guide the industry through digital disruption; an industry that shifted from selling $20 CDs to making a fraction of a penny per play. Gerd’s experience in the music business is a lesson we should learn from: you can’t stop people who see value from extracting that value. Protectionist behavior did not work, as the industry lost 71% of their revenue in 12 years. Streaming music will be huge, but the winners are not traditional players. The winners are Spotify, Apple, Facebook, Google, etc. This scenario likely plays out across every industry, as new businesses are emerging, but traditional companies are not running them. Gerd stressed that we can’t let this happen across these other industries
  • Anything that can be automated will be automated: truck drivers and pilots go away, as robots don’t need unions. There is just too much to be gained not to automate. For example, 60% of the cost in the system could be eliminated by interconnecting logistics, possibly realized via a Logistics Internet as described by economist Jeremy Rifkin. But the drive towards automation will have unintended consequences and some science fiction scenarios could play out. Humanity and technology are indeed intertwining, but technology does not have ethics. A self-driving car would need ethics, as we make difficult decisions while driving all the time. How does a car decide to hit a frog versus swerving and hitting a mother and her child? Speaking of science fiction scenarios, Gerd predicts that when these things come together, humans and machines will have converged:
  • Gerd has been using the term “Hellven” to represent the two paths technology can take. Is it 90% heaven and 10% hell (unintended consequences), or can this equation flip? He asks the question: Where are we trying to go with this? He used the real example of drones used to benefit society (heaven), but people buying guns to shoot them down (hell). As we pursue exponential technologies, we must do it in a way that avoids negative consequences. Will we allow humanity to move down a path where by 2030 we will all be human-machine hybrids? Will hacking drive chaos, as hackers gain control of a vehicle? A recent recall of 1.4 million Jeeps underscores the possibility. A world of super intelligence requires super humanity — technology does not have ethics, but society depends on it. Is this the Ray Kurzweil vision we want?
  • Is society truly ready for human-machine hybrids, or even advancements like the driverless car that may be closer to realization? Gerd used a very effective Video to make the point
  • Followers of my Blog know I’m a big believer in the coming shift to value ecosystems. Gerd described this as a move away from Egosystems, where large companies are running large things, to interdependent Ecosystems. I’ve talked about the blurring of industry boundaries and the movement towards ecosystems. We may ultimately move away from the industry construct and end up with a handful of ecosystems like: mobility, shelter, resources, wellness, growth, money, maker, and comfort
  • Our kids will live to 90 or 100 as the default. We are gaining 8 hours of longevity per day — one third of a year per year. Genetic engineering is likely to eradicate disease, impacting longevity and global population. DNA editing is becoming a real possibility in the next 10 years, and at least 50 Silicon Valley companies are focused on ending aging and eliminating death. One such company is Human Longevity Inc., which was co-founded by Peter Diamandis of Singularity University. Gerd used a quote from Peter to help the audience understand the motivation: “Today there are six to seven trillion dollars a year spent on healthcare, half of which goes to people over the age of 65. In addition, people over the age of 65 hold something on the order of $60 trillion in wealth. And the question is what would people pay for an extra 10, 20, 30, 40 years of healthy life. It’s a huge opportunity”
  • Gerd described the growing need to focus on the right side of our brain. He believes that algorithms can only go so far. Our right brain characteristics cannot be replicated by an algorithm, making a human-algorithm combination — or humarithm as Gerd calls it — a better path. The right brain characteristics that grow in importance and drive future hiring profiles are:
  • Google is on the way to becoming the global operating system — an Artificial Intelligence enterprise. In the future, you won’t search, because as a digital assistant, Google will already know what you want. Gerd quotes Ray Kurzweil in saying that by 2027, the capacity of one computer will equal that of the human brain — at which point we shift from an artificial narrow intelligence, to an artificial general intelligence. In thinking about AI, Gerd flips the paradigm to IA or intelligent Assistant. For example, Schwab already has an intelligent portfolio. He indicated that every bank is investing in intelligent portfolios that deal with simple investments that robots can handle. This leads to a 50% replacement of financial advisors by robots and AI
  • This intelligent assistant race has just begun, as Siri, Google Now, Facebook MoneyPenny, and Amazon Echo vie for intelligent assistant positioning. Intelligent assistants could eliminate the need for actual assistants in five years, and creep into countless scenarios over time. Police departments are already capable of determining who is likely to commit a crime in the next month, and there are examples of police taking preventative measures. Augmentation adds another dimension, as an officer wearing glasses can identify you upon seeing you and have your records displayed in front of them. There are over 100 companies focused on augmentation, and a number of intelligent assistant examples surrounding IBM Watson; the most discussed being the effectiveness of doctor assistance. An intelligent assistant is the likely first role in the autonomous vehicle transition, as cars step in to provide a number of valuable services without completely taking over. There are countless Examples emerging
  • Gerd took two polls during his keynote. Here is the first: How do you feel about the rise of intelligent digital assistants? Answers 1 and 2 below received the lion’s share of the votes
  • Collectively, automation, robotics, intelligent assistants, and artificial intelligence will reframe business, commerce, culture, and society. This is perhaps the key take away from a discussion like this. We are at an inflection point where reframing begins to drive real structural change. How many leaders are ready for true structural change?
  • Gerd likes to refer to the 7-ations: Digitization, De-Materialization, Automation, Virtualization, Optimization, Augmentation, and Robotization. Consequences of the exponential and combinatorial growth of these seven include dependency, job displacement, and abundance. Whereas these seven are tools for dramatic cost reduction, they also lead to abundance. Examples are everywhere, from the 16 million songs available through Spotify, to the 3D printed cars that require only 50 parts. As supply exceeds demand in category after category, we reach abundance. As Gerd put it, in five years’ time, genome sequencing will be cheaper than flushing the toilet and abundant energy will be available by 2035 (2015 will be the first year that a major oil company will leave the oil business to enter the abundance of the renewable business). Other things to consider regarding abundance:
  • Efficiency and business improvement are a path, not a destination. Gerd estimates that total efficiency will be reached in 5 to 10 years, creating value through productivity gains along the way. However, once total efficiency is reached, value comes from purpose. Purpose-driven companies have an aspirational purpose that aims to transform the planet, referred to as a massive transformative purpose in a recent book on exponential organizations. When you consider the value that the millennial generation places on purpose, it is clear that successful organizations must excel at both technology and humanity. If we allow technology to trump humanity, business would have no purpose
  • In the first phase, the value lies in the automation itself (productivity, cost savings). In the second phase, the value lies in those things that cannot be automated. Anything that is human about your company cannot be automated: purpose, design, and brand become more powerful. Companies must invent new things that are only possible because of automation
  • Technological unemployment is real this time — and exponential. Gerd pointed to a recent study by The Economist describing how robotics and artificial intelligence will increasingly be used in place of humans to perform repetitive tasks. On the other side of the spectrum is a demand for better customer service and greater skills in innovation, driven by globalization and falling barriers to market entry. Therefore, creativity and social intelligence will become crucial differentiators for many businesses; jobs will increasingly demand skills in creative problem-solving and constructive interaction with others
  • Gerd described a basic income guarantee that may be necessary if some of these unemployment scenarios play out. Something like this is already on the ballot in Switzerland, and it is not the first time this has been talked about:
  • In the world of automation, experience becomes extremely valuable — and you cannot, nor should you attempt to, automate experiences. We clearly see an intense focus on customer experience, and we had a great discussion on the topic on an August 26th Game Changers broadcast. Innovation is critical to both the service economy and the experience economy. Gerd used a visual to describe the progression of economic value:
Source: B. Joseph Pine II and James Gilmore: The Experience Economy
  • Gerd used a second poll to sense how people would feel about humans becoming artificially intelligent. Here again, the audience leaned towards the first two possible answers:

Gerd then summarized the session as follows:

The future is exponential, combinatorial, and interdependent: the sooner we can adjust our thinking (lateral) the better we will be at designing our future.

My take: Gerd hits on a key point. Leaders must think differently. There is very little in a leader’s collective experience that can guide them through the type of change ahead — it requires us all to think differently

When looking at AI, consider trying IA first (intelligent assistance / augmentation).

My take: These considerations allow us to create the future in a way that avoids unintended consequences. Technology as a supplement, not a replacement

Efficiency and cost reduction based on automation, AI/IA and Robotization are good stories but not the final destination: we need to go beyond the 7-ations and inevitable abundance to create new value that cannot be easily automated.

My take: Future thinking is critical for us to be effective here. We have to have a sense as to where all of this is heading, if we are to effectively create new sources of value

We won’t just need better algorithms — we also need stronger humarithms, i.e., values, ethics, standards, principles, and social contracts.

My take: Gerd is an evangelist for creating our future in a way that avoids hellish outcomes — and kudos to him for being that voice

“The best way to predict the future is to create it” (Alan Kay).

My take: our context when we think about the future puts it years away, and that is just not the case anymore. What we think will take ten years is likely to happen in two. We can’t create the future if we don’t focus on it through an exponential lens

Source : https://medium.com/@frankdiana/digital-transformation-of-business-and-society-5d9286e39dbf
