Why are Machine Learning Projects so Hard to Manage? – Lukas Biewald

I’ve watched lots of companies attempt to deploy machine learning — some succeed wildly and some fail spectacularly. One constant is that machine learning teams have a hard time setting goals and setting expectations. Why is this?

1. It’s really hard to tell in advance what’s hard and what’s easy.

Is it harder to beat Kasparov at chess or to pick up and physically move the chess pieces? Computers beat the world chess champion over twenty years ago, but reliably grasping and lifting objects is still an unsolved research problem. Humans are not good at evaluating what will be hard for AI and what will be easy. Even within a domain, performance can vary wildly. What’s good accuracy for predicting sentiment? On movie reviews, there is a lot of text and writers tend to be fairly clear about what they think, so these days 90–95% accuracy is expected. On Twitter, two humans might agree on the sentiment of a tweet only 80% of the time. It might be possible to get 95% accuracy on the sentiment of tweets about certain airlines by just always predicting that the sentiment is negative.
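The airline example is worth making concrete: a "model" that ignores its input entirely can look impressive on a skewed label distribution. A minimal sketch (the label counts here are hypothetical, chosen to match the article's 95% figure):

```python
from collections import Counter

def majority_baseline_accuracy(labels):
    """Accuracy of a degenerate model that always predicts the most common label."""
    counts = Counter(labels)
    most_common_count = counts.most_common(1)[0][1]
    return most_common_count / len(labels)

# Hypothetical airline-tweet labels: overwhelmingly negative sentiment.
labels = ["neg"] * 95 + ["pos"] * 5
print(majority_baseline_accuracy(labels))  # 0.95
```

Always compare a model's headline accuracy against this baseline before celebrating.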

Metrics can also increase a lot in the early days of a project and then suddenly hit a wall. I once ran a Kaggle competition where thousands of people around the world competed to model my data. In the first week, the accuracy went from 35% to 65%, but then over the next several months it never got above 68%. 68% accuracy was clearly the limit of the data with the best, most up-to-date machine learning techniques. The people competing worked incredibly hard to get that 68% accuracy, and I’m sure it felt like a huge achievement. But for most use cases, 65% vs. 68% is totally indistinguishable. If that had been an internal project, I would have definitely been disappointed by the outcome.

My friend Pete Skomoroch was recently telling me how frustrating it was to do engineering standups as a data scientist working on machine learning. Engineering projects generally move forward, but machine learning projects can completely stall. It’s possible, even common, for a week spent on modeling data to result in no improvement whatsoever.

2. Machine Learning is prone to fail in unexpected ways.

Machine learning generally works well as long as you have lots of training data *and* the data you’re running on in production looks a lot like your training data. Humans are so good at generalizing from training data that we have terrible intuitions about this. I built a little robot with a camera and a vision model trained on the millions of ImageNet images, which were taken off the web. I preprocessed the images from my robot’s camera to look like the images from the web, but the accuracy was much worse than I expected. Why? Images off the web tend to frame the object in question. My robot wouldn’t necessarily look right at an object the way a human photographer would. Humans would likely not even notice the difference, but modern deep learning networks suffered a lot. There are ways to deal with this phenomenon, but I only noticed it because the degradation in performance was so jarring that I spent a lot of time debugging it.

Much more pernicious are the subtle differences that lead to degraded performance but are hard to spot. Language models trained on the New York Times don’t generalize well to social media text. We might expect that. But apparently, models trained on text from 2017 experience degraded performance on text written in 2018. Upstream distributions shift over time in lots of ways. Fraud models break down completely as adversaries adapt to what the model is doing.

3. Machine Learning requires lots and lots of relevant training data.

Everyone knows this and yet it’s such a huge barrier. Computer vision can do amazing things, provided you are able to collect and label a massive amount of training data. For some use cases, the data is a free byproduct of some business process. This is where machine learning tends to work really well. For many other use cases, training data is incredibly expensive and challenging to collect. A lot of medical use cases seem perfect for machine learning — crucial decisions with lots of weak signals and clear outcomes — but the data is locked up due to important privacy issues or not collected consistently in the first place.

Many companies don’t know where to start in investing in collecting training data. It’s a significant effort and it’s hard to predict a priori how well the model will work.

What are the best practices to deal with these issues?

1. Pay a lot of attention to your training data.
Look at the cases where the algorithm is misclassifying data that it was trained on. These are almost always mislabels or strange edge cases. Either way you really want to know about them. Make everyone working on building models look at the training data and label some of the training data themselves. For many use cases, it’s very unlikely that a model will do better than the rate at which two independent humans agree.
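The "two independent humans" ceiling is easy to measure before any modeling starts: have two annotators label the same sample and compute their raw agreement rate. A minimal sketch (the labels below are hypothetical, chosen to echo the ~80% Twitter agreement figure from earlier):

```python
def agreement_rate(labels_a, labels_b):
    """Fraction of items on which two independent annotators gave the same label."""
    assert len(labels_a) == len(labels_b), "annotators must label the same items"
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

# Hypothetical labels from two annotators on the same ten tweets.
ann1 = ["pos", "neg", "neg", "pos", "neg", "neg", "pos", "neg", "neg", "neg"]
ann2 = ["pos", "neg", "pos", "pos", "neg", "neg", "neg", "neg", "neg", "neg"]
print(agreement_rate(ann1, ann2))  # 0.8
```

If annotators agree only 80% of the time, treat ~80% model accuracy as near the practical ceiling, not as underperformance.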

2. Get something working end-to-end right away, then improve one thing at a time.
Start with the simplest thing that might work and get it deployed. You will learn a ton from doing this. Additional complexity at any stage in the process always improves models in research papers but it seldom improves models in the real world. Justify every additional piece of complexity.

Getting something into the hands of the end user helps you get an early read on how well the model is likely to work and it can bring up crucial issues like a disagreement between what the model is optimizing and what the end user wants. It also may make you reassess the kind of training data you are collecting. It’s much better to discover those issues quickly.

3. Look for graceful ways to handle the inevitable cases where the algorithm fails.
Nearly all machine learning models fail a fair amount of the time, and how this is handled is absolutely crucial. Models often have a reliable confidence score that you can use. With batch processes, you can build human-in-the-loop systems that send low-confidence predictions to an operator, making the system work reliably end to end and collecting high-quality training data. With other use cases, you might be able to present low-confidence predictions in a way that flags potential errors or makes them less annoying to the end user.
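The routing logic behind such a human-in-the-loop system can be very simple. A minimal sketch, assuming the model exposes a calibrated confidence score per prediction (item names and the 0.9 threshold are illustrative):

```python
def route_predictions(predictions, threshold=0.9):
    """Split (item, label, confidence) tuples into auto-accepted and human-review queues."""
    auto, review = [], []
    for item, label, confidence in predictions:
        if confidence >= threshold:
            auto.append((item, label))          # accept the model's answer
        else:
            review.append((item, label, confidence))  # send to a human operator
    return auto, review

preds = [("doc1", "spam", 0.98), ("doc2", "ham", 0.62), ("doc3", "spam", 0.91)]
auto, review = route_predictions(preds)
print(len(auto), len(review))  # 2 1
```

The items a human corrects double as fresh, high-value training data for the next model iteration.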

What’s Next?

The original goal of machine learning was mostly around smart decision making, but more and more we are trying to put machine learning into products we use. As we start to rely more and more on machine learning algorithms, machine learning becomes an engineering discipline as much as a research topic. I’m incredibly excited about the opportunity to build completely new kinds of products but worried about the lack of tools and best practices. So much so that I started a company to help with this called Weights and Biases. If you’re interested in learning more, check out what we’re up to.

Source : https://medium.com/@l2k/why-are-machine-learning-projects-so-hard-to-manage-8e9b9cf49641

Open Source Software – Investable Business Model or Not? – Natallia Chykina

Open-source software (OSS) is a catalyst for growth and change in the IT industry, and one can’t overestimate its importance to the sector. Quoting Mike Olson, co-founder of Cloudera, “No dominant platform-level software infrastructure has emerged in the last ten years in closed-source, proprietary form.”

Apart from independent OSS projects, an increasing number of companies, including the blue chips, are opening their source code to the public. They start by distributing their internally developed products for free, giving rise to widespread frameworks and libraries that later become an industry standard (e.g., React, Flow, Angular, Kubernetes, TensorFlow, V8, to name a few).

Adding to this momentum, there has been a surge in venture capital dollars being invested into the sector in recent years. Several high profile funding rounds have been completed, with multimillion dollar valuations emerging (Chart 1).

But are these valuations justified? And more importantly, can the business perform, both growth-wise and profitability-wise, as venture capitalists expect? OSS companies typically monetize with a business model based around providing support and consulting services. How well does this model translate to the traditional VC growth model? Is the OSS space in a VC-driven bubble?

In this article, I assess the questions above, and find that the traditional monetization model for OSS companies based on providing support and consulting services doesn’t seem to lend itself well to the venture capital growth model, and that OSS companies likely need to switch their pricing and business models in order to justify their valuations.

OSS Monetization Models

By definition, open source software is free. This of course generates obvious advantages to consumers, and in fact, a 2008 study by The Standish Group estimates that “free open source software is [saving consumers] $60 billion [per year in IT costs].”

While providing free software is obviously good for consumers, it still costs money to develop. Very few companies are able to live on donations and sponsorships. And with fierce competition from proprietary software vendors, growing R&D costs, and ever-increasing marketing requirements, providing a “free” product necessitates a sustainable path to market success.

As a result of the above, a commonly seen structure related to OSS projects is the following: A “parent” commercial company that is the key contributor to the OSS project provides support to users, maintains the product, and defines the product strategy.

Latched on to this are the monetization strategies, the most common being the following:

  • Extra charge for enterprise services, support, and consulting. The classic model targeted at large enterprise clients with sophisticated needs. Examples: MySQL, Red Hat, Hortonworks, DataStax
  • Freemium. (advanced features/products/add-ons) A custom licensed product on top of the OSS might generate a lavish revenue stream, but it requires a lot of R&D costs and time to build. Example: Cloudera, which provides the basic version for free and charges the customers for Cloudera Enterprise
  • SaaS/PaaS business model: The modern way to monetize the OSS products that assumes centrally hosting the software and shifting its maintenance costs to the provider. Examples: Elastic, GitHub, Databricks, SugarCRM

Historically, the vast majority of OSS projects have pursued the first monetization strategy (support and consulting), but at their core, all of these models allow a company to earn money on their “bread and butter” and feed the development team as needed.

Influx of VC Dollars

An interesting recent development has been the huge inflows of VC/PE money into the industry. Going back to 2004, only nine firms producing OSS had raised venture funding, but by 2015, that number had exploded to 110, raising over $7 billion from venture capital funds (Chart 2).

Underpinning this development is the large addressable market that OSS companies benefit from. Akin to other “platform” plays, OSS allows companies (in theory) to rapidly expand their customer base, with the idea that at some point in the future they can leverage this growth by beginning to tack-on appropriate monetization models in order to start translating their customer base into revenue, and profits.

At the same time, we’re also seeing an increasing number of reports about potential IPOs in the sector. Several OSS commercial companies, some of them unicorns with $1B+ valuations, have been rumored to be mulling a public markets debut (MongoDB, Cloudera, MapR, Alfresco, Automattic, Canonical, etc.).

With this in mind, the obvious question is whether the OSS model works from a financial standpoint, particularly for VC and PE investors. After all, the venture funding model necessitates rapid growth in order to comply with their 7-10 year fund life cycle. And with a product that is at its core free, it remains to be seen whether OSS companies can pin down the correct monetization model to justify the number of dollars invested into the space.

Answering this question is hard, mainly because most of these companies are private and therefore do not disclose their financial performance. Usually, the only sources of information that can be relied upon are the estimates of industry experts and management interviews where unaudited key performance metrics are sometimes disclosed.

Nevertheless, in this article, I take a look at the evidence from the only two public OSS companies in the market, Red Hat and Hortonworks, and use their publicly available information to try and assess the more general question of whether the OSS model makes sense for VC investors.

Case Study 1: Red Hat

Red Hat is an example of a commercial company that pioneered the open source business model. Founded in 1993 and going public in 1999 right before the Dot Com Bubble, they achieved the 8th biggest first-day gain in share price in the history of Wall Street at that time.

At the time of their IPO, Red Hat was not a profitable company, but since then has managed to post solid financial results, as detailed in Table 1.

Instead of chasing multifold annual growth, Red Hat has followed the “boring” path of gradually building a sustainable business. Over the last ten years, the company increased its revenues tenfold from $200 million to $2 billion with no significant change in operating and net income margins. G&A and marketing expenses never exceeded 50% of revenue (Chart 3).
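It's worth translating "tenfold in ten years" into an annual rate, since that is the number a VC would compare against. A quick back-of-the-envelope check using the article's revenue figures:

```python
# Red Hat revenue: ~$200M -> ~$2B over ten years (figures from the article).
growth_multiple = 2_000 / 200
years = 10

# Compound annual growth rate implied by that multiple.
cagr = growth_multiple ** (1 / years) - 1
print(round(cagr * 100, 1))  # ~25.9 (% per year)
```

A ~26% annual growth rate is a solid business, but well below the multifold yearly growth venture funds typically underwrite.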

The above indicates therefore that OSS companies do have a chance to build sustainable and profitable business models. Red Hat’s approach of focusing primarily on offering support and consulting services has delivered gradual but steady growth, and the company is hardly facing any funding or solvency problems, posting decent profitability metrics when compared to peers.

However, what is clear from the Red Hat case study is that such a strategy can take time—many years, in fact. While this is a perfectly reasonable situation for most companies, the issue is that it doesn’t sit well with venture capital funds who, by the very nature of their business model, require far more rapid growth profiles.

More troubling than that, for venture capital investors, is that the OSS model may in and of itself not allow for the type of growth that such funds require. As Mårten Mickos, MySQL’s longtime CEO, put it, MySQL’s goal was “to turn the $10 billion a year database business into a $1 billion one.”

In other words, the open source approach limits the market size from the get-go by making the company focus only on enterprise customers who are able to pay for support, and foregoing revenue from a long tail of SME and retail clients. That may help explain the company’s less than exciting stock price performance post-IPO (Chart 4).

If such a conclusion were true, this would spell trouble for those OSS companies that have raised significant amounts of VC dollars along with the funds that have invested in them.

Case Study 2: Hortonworks

To further assess our overarching question of OSS’s viability as a venture capital investment, I took a look at another public OSS company: Hortonworks.

The Hadoop vendors’ market is an interesting one because it is built entirely around the “open core” idea (a comparable market being the NoSQL database space, with MongoDB, DataStax, and Couchbase).

All three of the largest Hadoop vendors—Cloudera, Hortonworks, and MapR—are based on essentially the same OSS stack (with some specific differences) but interestingly have different monetization models. In particular, Hortonworks—the only public company among them—is the only player that provides all of its software for free and charges only for support, consulting, and training services.

At first glance, Hortonworks’ post-IPO path appears to differ considerably from Red Hat’s in that it seems to be a story of a rapid growth and success. The company was founded in 2011, tripled its revenue every year for three consecutive years, and went public in 2014.

Immediate reception in the public markets was strong, with the stock popping 65% in the first few days of trading. Nevertheless, the company’s story since IPO has turned decisively sour. In January 2016, the company was forced to access the public markets again for a secondary public offering, a move that prompted a 60% share price fall within a month (Chart 5).

Underpinning all this is the fact that, despite top-line growth, the company continues to incur substantial, and growing, operating losses. It’s evident from the financial statements that its operating performance has worsened over time, mainly because operating expenses are growing faster than revenue, leading to increasing losses as a percent of revenue (Table 2).

In every period in question, Hortonworks spent more on sales and marketing than it earned in revenue. Adding to that, the company incurred significant R&D and G&A expenses as well (Table 2).

On average, Hortonworks is burning around $100 million cash per year (less than its operating loss because of stock-based compensation expenses and changes in deferred revenue booked on the Balance Sheet). This amount is very significant when compared to its $630 million market capitalization and circa $350 million raised from investors so far. Of course, the company can still raise debt (which it did, in November 2016, to the tune of a $30 million loan from SVB), but there’s a natural limit to how often it can tap the debt markets.

All of this might of course be justified if the marketing expense served an important purpose. One such purpose could be the company’s need to diversify its customer base. In fact, when Hortonworks first launched, the company was heavily reliant on a few major clients (Yahoo and Microsoft, the latter accounting for 37% of revenues in 2013). This has now changed, and by 2016, the company reported 1000 customers.

But again, even if this were the reason, one cannot ignore the costs required to achieve it. After all, marketing expenses increased eightfold between 2013 and 2015. And how valuable are the clients that Hortonworks has acquired? Unfortunately, the company reports little information on the makeup of its client base, so it’s hard to assess other important metrics such as client “stickiness”. But in a competitive OSS market where “rival developers could build the same tools—and make them free—essentially stripping the value from the proprietary software,” strong doubts loom.

With all this in mind, returning to our original question of whether the OSS model makes for good VC investments, while the Hortonworks growth story certainly seems to counter Red Hat’s—and therefore sustain the idea that such investments can work from a VC standpoint—I remain skeptical. Hortonworks seems to be chasing market share at exorbitant and unsustainable costs. And while this conclusion is based on only two companies in the space, it is enough to raise serious doubts about the overall model’s fit for VC.

Why are VCs Investing in OSS Companies?

Given the above, it seems questionable that OSS companies make for good VC investments. So with this in mind, why do venture capital funds continue to invest in such companies?


Good Fit for a Strategic Acquisition

Apart from going public and growing organically, an OSS company may find a strategic buyer to provide a good exit opportunity for its early stage investors. And in fact, the sector has seen several high profile acquisitions over the years (Table 3).

What makes an OSS company a good target? In general, the underlying strategic rationale for an acquisition might be as follows:

  • Getting access to the client base. Sun is reported to have been motivated by this when it acquired MySQL. They wanted to access the SME market and cross-sell other products to smaller clients. Simply forking the product or developing a competing technology internally wouldn’t deliver the customer base and would have made Sun incur additional customer acquisition costs.
  • Getting control over the product. The ability to influence further development of the product is a crucial factor for a strategic buyer. This allows it to build and expand its own product offering based on the acquired products without worrying about sudden substantial changes in it. Example: Red Hat acquiring Ansible, KVM, Gluster, Inktank (Ceph), and many more
  • Entering adjacent markets. Acquiring open source companies in adjacent market segments, again, allows a company to expand the product offering, which makes vendor lock-in easier, and scales the business further. Example: Citrix acquiring XenSource
  • Acquiring the team. This is more relevant for smaller and younger projects than for larger, more well-established ones, but is worth mentioning.

What about the financial rationale? The standard transaction multiples valuation approach completely breaks apart when it comes to the OSS market. Multiples reach 20x and even 50x price/sales, and are therefore largely irrelevant, leading to the obvious conclusion that such deals are not financially but strategically motivated, and that the financial health of the target is more of a “nice to have.”

With this in mind, would a strategy of investing in OSS companies with the eventual aim of a strategic sale make sense? After all, there seems to be a decent track record to go off of.

My assessment is that this strategy on its own is not enough. Pursuing such an approach from the start is risky—there are not enough exits in the history of OSS to justify the risks.

A Better Monetization Model: SaaS

While the promise of a lucrative strategic sale may be enough to motivate VC funds to put money to work in the space, as discussed above, it remains a risky path. As such, it feels like the rationale for such investments must be reliant on other factors as well. One such factor could be returning to basics: building profitable companies.

But as we have seen in the case studies above, this strategy doesn’t seem to be working out so well, certainly not within the timeframes required for VC investors. Nevertheless, it is important to point out that both Red Hat and Hortonworks primarily focus on monetizing through offering support and consulting services. As such, it would be wrong to dismiss OSS monetization prospects altogether. More likely, monetization models focused on support and consulting are inappropriate, but others may work better.

In fact, the SaaS business model might be the answer. As per Peter Levine’s analysis, “by packaging open source into a service, […] companies can monetize open source with a far more robust and flexible model, encouraging innovation and ongoing investment in software development.”

Why is SaaS a better model for OSS? There are several reasons for this, most of which are applicable not only to OSS SaaS, but to SaaS in general.

First, SaaS opens the market for the long tail of SME clients. Smaller companies usually don’t need enterprise support and on-premises installation, but may already have sophisticated needs from a technology standpoint. As a result, it’s easier for them to purchase a SaaS product and pay a relatively low price for using it.

Citing MongoDB’s VP of Strategy, Kelly Stirman, “Where we have a suite of management technologies as a cloud service, that is geared for people that we are never going to talk to and it’s at a very attractive price point—$39 a server, a month. It allows us to go after this long tail of the market that isn’t Fortune 500 companies, necessarily.”

Second, SaaS scales well. It creates economies of scale for clients, allowing them to save money on infrastructure and operations through the aggregation of resources and the centralization of customer requirements, which improves manageability.

This, therefore, makes it an attractive model for clients who, as a result, will be more willing to lock themselves into monthly payment plans in order to reap the benefits of the service.

Finally, SaaS businesses are more difficult to replicate. In the traditional OSS model, everyone has access to the source code, so the support and consulting business model hardly has protection for the incumbent from new market entrants.

In the SaaS OSS case, the investment required for building the infrastructure upon which clients rely is fairly onerous. This, therefore, builds bigger barriers to entry, and makes it more difficult for competitors who lack the same amount of funding to replicate the offering.

Success Stories for OSS with SaaS

Importantly, OSS SaaS companies can be financially viable on their own. GitHub is a good example of this.

Founded in 2008, GitHub was able to bootstrap the business for four years without any external funding. The company has reportedly always been cash-flow positive (except for 2015) and generated estimated revenues of $100 million in 2016. In 2012, they accepted $100 million in funding from Andreessen Horowitz and later in 2015, $250 million from Sequoia with an implied $2 billion valuation.

Another well-known successful OSS company is Databricks, which provides commercial support for Apache Spark, but—more importantly—allows its customers to run Spark in the cloud. The company has raised $100 million from Andreessen Horowitz, Data Collective, and NEA. Unfortunately, we don’t have a lot of insight into their profitability, but they are reported to be performing strongly and already had more than 500 companies using the technology as of 2015.

Generally, many OSS companies are in one way or another gradually drifting towards the SaaS model or other types of cloud offerings. For instance, Red Hat is moving to PaaS over support and consulting, as evidenced by OpenShift and the acquisition of AnsibleWorks.

Different ways of mixing support and consulting with SaaS are common too. We, unfortunately, don’t have detailed statistics on Elastic’s on-premises vs. cloud product offerings, but we can see from the presentation of its closest competitor Splunk that their SaaS offering is gaining scale: Its share of revenue is expected to triple by 2020 (Chart 6).

Investable Business Model or Not?

To conclude, while recent years have seen an influx of venture capital dollars poured into OSS companies, there are strong doubts that such investments make sense if the monetization models being used remain focused on the traditional support and consulting model. Such a model can work (as seen in the Red Hat case study) but cannot scale at the pace required by VC investors.

Of course, VC funds may always hope for a lucrative strategic exit, and there have been several examples of such transactions. But relying on this alone is not enough. OSS companies need to innovate around monetization strategies in order to build profitable and fast-growing companies.

The most plausible answer to this conundrum may come from switching to SaaS as a business model. SaaS allows one to tap into a longer-tail of SME clients and improve margins through better product offerings. Quoting Peter Levine again, “Cloud and SaaS adoption is accelerating at an order of magnitude faster than on-premise deployments, and open source has been the enabler of this transformation. Beyond SaaS, I would expect there to be future models for open source monetization, which is great for the industry”.

Whatever ends up happening, the sheer amount of venture investment into OSS companies means that smarter monetization strategies will be needed to keep the open source dream alive.

Source : https://www.toptal.com/finance/venture-capital-consultants/open-source-software-investable-business-model-or-not

Industrial tech may not be sexy, but VCs are loving it – John Tough

There are nearly 300 industrial-focused companies within the Fortune 1,000. The median revenue for these economy-anchoring firms is nearly $4.9 billion, and together they represent over $9 trillion in market capitalization.

Due to the boring nature of some of these industrial verticals and the complexity of the value chain, venture-related events tend to get lost within our traditional VC news channels. But entrepreneurs (and VCs willing to fund them) are waking up to the potential rewards of gaining access to these markets.

Just how active is the sector now?

That’s right: Last year nearly $6 billion went into Series A, B & C startups within the industrial, engineering & construction, power, energy, mining & materials, and mobility segments. Venture capital investment in these sectors is growing at a 30 percent annual rate, up from ~$750 million in 2010.
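As a rough sanity check on that 30 percent figure, we can back out the compound annual growth rate implied by the article's endpoints (assuming "last year" means 2018, i.e., eight years after 2010):

```python
# Sector-wide early-stage VC investment, in $B (figures from the article).
start, end, years = 0.75, 6.0, 8  # ~$750M in 2010 -> ~$6B in 2018

# Implied compound annual growth rate.
cagr = (end / start) ** (1 / years) - 1
print(round(cagr * 100, 1))  # ~29.7 (% per year)
```

The endpoints are consistent with the stated ~30 percent annual growth.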

And while $6 billion invested is notable due to the previous benchmarks, this early stage investment figure still only equates to ~0.2 percent of the revenue for the sector and ~1.2 percent of industry profits.

The number of deals in the space shows a similarly strong growth trajectory. But there are some interesting trends beginning to emerge: The capital deployed to the industrial technology market is growing at a faster clip than the number of deals. These differing growth trajectories mean that the average deal size has grown by 45 percent in the last eight years, from $18 million to $26 million.

Detail by stage of financing

Median Series A deal size in 2018 was $11 million, representing a modest 8 percent increase in size versus 2012/2013. But Series A deal volume is up nearly 10x since then!

Median Series B deal size in 2018 was $20 million, an 83 percent growth over the past five years and deal volume is up about 4x.

Median Series C deal size in 2018 was $33 million, representing an enormous 113 percent growth over the past five years. But Series C deals have appeared to reach a plateau in the low 40s, so investors are becoming pickier in selecting the winners.

These graphs show that the Series A investors have stayed relatively consistent and that the overall 46 percent increase in sector deal size growth primarily originates from the Series B and Series C investment rounds. With bigger rounds, how are valuation levels adjusting?

Above: Growth in pre-money valuation particularly acute in later stage deals

The data shows that valuations have increased even faster than the round sizes have grown themselves. This means management teams are not feeling any incremental dilution by raising these larger rounds.

  • The average Series A round now buys about 24 percent, slightly less than five years ago
  • The average Series B round now buys about 22 percent of the company, down from 26 percent five years ago
  • The average Series C round now buys approximately 20 percent, down from 23 percent five years ago.
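The relationship between round size, ownership sold, and valuation behind these bullets is simple arithmetic: the stake a round buys is the round size divided by the post-money valuation. A sketch using the article's median 2018 Series C figures ($33M buying roughly 20 percent):

```python
def implied_valuations(round_size, ownership_sold):
    """Post-money = round / ownership sold; pre-money = post-money minus the round."""
    post_money = round_size / ownership_sold
    pre_money = post_money - round_size
    return pre_money, post_money

# Median 2018 Series C from the article: $33M for ~20% of the company.
pre, post = implied_valuations(33, 0.20)
print(round(pre, 1), round(post, 1))  # 132.0 165.0  ($M)
```

This is why valuations rising faster than round sizes translates directly into less dilution for management teams.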

Some conclusions

  • Dollars invested as a portion of industry revenue and profit allows for further capital commitments.
  • There is a growing appreciation for the industrial sales cycle. Investor willingness to wait for reduced risk to deploy even more capital in the perceived winners appears to be driving this trend.
  • Entrepreneurs that can successfully de-risk their enterprise through revenue, partnerships, and industry hires will gain access to outsized capital pools. The winners in this market tend to compound as later customers look to early adopters.
  • Uncertainty still remains about exit opportunities for technology companies that serve these industries. While there are a few headline-grabbing acquisitions (PlanGrid, Kurion, OSIsoft), we are not hearing about a sizable exit from this market on a weekly or monthly cadence. This means we won’t know for a few years about the returns impact of these rising valuations. Grab your hard hat!

Source : https://venturebeat.com/2019/01/22/industrial-tech-may-not-be-sexy-but-vcs-are-loving-it/

Money Out of Nowhere: How Internet Marketplaces Unlock Economic Wealth – Bill Gurley

In 1776, Adam Smith released his magnum opus, An Inquiry into the Nature and Causes of the Wealth of Nations, in which he outlined his fundamental economic theories. Front and center in the book — in fact, in Book 1, Chapter 1 — is his realization of the productivity improvements made possible through the “Division of Labour”:

It is the great multiplication of the production of all the different arts, in consequence of the division of labour, which occasions, in a well-governed society, that universal opulence which extends itself to the lowest ranks of the people. Every workman has a great quantity of his own work to dispose of beyond what he himself has occasion for; and every other workman being exactly in the same situation, he is enabled to exchange a great quantity of his own goods for a great quantity, or, what comes to the same thing, for the price of a great quantity of theirs. He supplies them abundantly with what they have occasion for, and they accommodate him as amply with what he has occasion for, and a general plenty diffuses itself through all the different ranks of society.

Smith identified that when men and women specialize their skills, and, just as importantly, trade with one another, the end result is a rise in productivity and standard of living for everyone. In 1817, David Ricardo published On the Principles of Political Economy and Taxation, where he expanded upon Smith’s work in developing the theory of Comparative Advantage. What Ricardo proved mathematically is that if one country has simply a comparative advantage (not even an absolute one), it is still in everyone’s best interest to embrace specialization and free trade. In the end, everyone ends up in a better place.

There are two key requirements for these mechanisms to take effect. First and foremost, you need free and open trade. It is quite bizarre to see modern-day politicians throw caution to the wind and ignore these fundamental tenets of economic science. Time and time again, the fact patterns show that when countries open borders and freely trade, the end result is increased economic prosperity. The second, and less discussed, requirement is for the two parties that should trade to be aware of one another’s goods or services. Unfortunately, either information asymmetry or physical distance and the resulting distribution costs can cut against the economic advantages that would otherwise arise for all.

Fortunately, the rise of the Internet, and specifically of Internet marketplace models, acts as an accelerant to the productivity benefits of the division of labour AND comparative advantage by reducing information asymmetry and increasing the likelihood of a perfect match with regard to the exchange of goods or services. In his 2005 book, The World Is Flat, Thomas Friedman recognizes that the Internet has the ability to create a “level playing field” for all participants, one where geographic distances become less relevant. The core reason that Internet marketplaces are so powerful is that, in connecting economic traders who would otherwise not be connected, they unlock economic wealth that otherwise would not exist. In other words, they literally create “money out of nowhere.”

EXCHANGE OF GOODS MARKETPLACES

Any discussion of Internet marketplaces begins with the first quintessential marketplace, eBay(*). Pierre Omidyar founded AuctionWeb in September of 1995, and its rise to fame is legendary. What started as a website to trade laser pointers and Beanie Babies (the Pez dispenser origin story is quite literally a legend) today enables transactions of approximately $100B per year. Over its twenty-plus-year lifetime, just over one trillion dollars in goods have traded hands across eBay’s servers. These transactions, and the profits realized by the sellers, were truly “unlocked” by eBay’s matching and auction services.

In 1999, Jack Ma created Alibaba, a China-based B2B marketplace connecting small and medium enterprises with potential export opportunities. Four years later, in May of 2003, Alibaba launched Taobao Marketplace, its answer to eBay. By aggressively launching a free-to-use service, Taobao quickly became the leading person-to-person trading site in China. In 2018, Taobao GMV (Gross Merchandise Value) was a staggering RMB 2,689 billion, which equates to $428 billion in US dollars.

There have been many other successful goods marketplaces launched after eBay and Taobao — all providing a similar service of matching those who own or produce goods with a distributed set of buyers particularly interested in what they have to offer. In many cases, a deeper focus on a particular category or vertical allows these marketplaces to distinguish themselves from broader marketplaces like eBay.

  • In 2000, Eric Baker and Jeff Fluhr founded StubHub, a secondary ticket exchange marketplace. The company was acquired by eBay in January 2007. In its most recent quarter, StubHub’s GMV reached $1.4B, and for the entire year 2018, StubHub had GMV of $4.8B.
  • Launched in 2005, Etsy is a leading marketplace for the exchange of vintage and handmade items. In its most recent quarter, the company processed $923 million of sales, which equates to a $3.6B annual GMV.
  • Founded by Michael Bruno in Paris in 2001, 1stdibs(*) is the world’s largest online marketplace for luxury one-of-a-kind antiques, high-end modern furniture, vintage fashion, jewelry, and fine art. In November 2011, David Rosenblatt took over as CEO and has been scaling the company ever since. Over the past few years dealers, galleries, and makers have matched billions of dollars in merchandise to trade buyers and consumer buyers on the platform.
  • Poshmark was founded by Manish Chandra in 2011. The website, which is an exchange for new and used clothing, has been remarkably successful. Over 4 million sellers have earned over $1 billion transacting on the site.
  • Julie Wainwright founded The Real Real in 2011. The company is an online marketplace for authenticated luxury consignment. In 2017, the company reported sales of over $500 million.
  • In 2015, Eddy Lu and Daishin Sugano launched GOAT, a marketplace for the exchange of sneakers. Despite this narrow focus, the company has been remarkably successful. The estimated annual GMV of GOAT and its leading competitor Stock X is already over $1B per year (on a combined basis).

SHARING ECONOMY MARKETPLACES

The launches of Airbnb in 2008 and Uber(*) in 2009 established a new category of marketplaces known as the “sharing economy.” Homes and automobiles are the two most expensive items that people own, and in many cases the ability to own the asset is made possible through debt — mortgages on houses and car loans or leases for automobiles. Despite this financial exposure, for many people these assets are materially underutilized. Many extra rooms and second homes are vacant most of the year, and the average car is used less than 5% of the time. Sharing economy marketplaces allow owners to “unlock” earning opportunities from these underutilized assets.

Airbnb was founded by Joe Gebbia and Brian Chesky in 2008. Today there are over 5 million Airbnb listings in 81,000 cities. Over two million people stay in an Airbnb each night. In November of 2018, the company announced that it had achieved “substantially” more than $1B in revenue in the third quarter. Assuming a marketplace rake of something like 11%, this would imply gross room revenue of over $9B for the quarter — which would be $36B annualized. As the company is still growing, we can easily guess that in the 2019-2020 time frame, Airbnb will be delivering around $50B per year to homeowners who were previously sitting on highly underutilized assets. This is a major “unlocking.”
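The back-of-the-envelope arithmetic above is easy to verify; a minimal sketch, noting that the ~11% rake is the article’s assumption rather than a disclosed figure:

```python
# Estimate gross room revenue from Airbnb's reported marketplace revenue.
# The 11% rake (take rate) is an assumption, not a disclosed number.
quarterly_revenue_b = 1.0   # reported: "substantially" more than $1B in Q3
assumed_rake = 0.11         # assumed marketplace take rate

# Gross bookings = marketplace revenue / take rate
gross_quarter_b = quarterly_revenue_b / assumed_rake   # ~$9.1B per quarter
gross_annual_b = gross_quarter_b * 4                   # ~$36B annualized

print(f"Quarterly gross room revenue: ${gross_quarter_b:.1f}B")
print(f"Annualized gross room revenue: ${gross_annual_b:.1f}B")
```

Because the reported revenue was “substantially more than” $1B, both figures are floors rather than point estimates.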

When Garrett Camp and Travis Kalanick founded Uber in 2009, they hatched the industry now known as ride-sharing. Today over 3 million people around the world use their time and their underutilized automobiles to generate extra income. Without the proper technology to match people who wanted a ride with people who could provide that service, taxi and chauffeur companies were drastically underserving the potential market. As an example, we estimate that ride-sharing revenues in San Francisco are well north of 10X what taxis and black cars were providing prior to the launch of ride-sharing. These numbers will go even higher as people increasingly forgo the notion of car ownership altogether. We estimate that the global GMV for ride-sharing was over $100B in 2018 (including Uber, Didi, Grab, Lyft, Yandex, etc.) and still growing handsomely. Assuming a 20% rake, this equates to over $80B that went into the hands of ride-sharing drivers in a single year — and this is an industry that did not exist 10 years ago. The matching made possible with today’s GPS and Internet-enabled smartphones is a massive unlocking of wealth and value.
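The same take-rate arithmetic runs in the other direction here: instead of dividing revenue by the rake to recover GMV, driver payouts are GMV times one minus the rake. A minimal sketch using the article’s estimated figures:

```python
# Estimate driver payouts from ride-sharing GMV.
# Both the $100B GMV and the 20% rake are the article's estimates.
gmv_b = 100.0        # estimated 2018 global ride-sharing GMV, in $B
assumed_rake = 0.20  # assumed platform take rate

# Driver payouts = GMV * (1 - rake)
driver_payout_b = gmv_b * (1 - assumed_rake)
print(f"Estimated driver payouts: ${driver_payout_b:.0f}B")
```

A higher or lower actual rake shifts the split between platforms and drivers, but the total value unlocked is the GMV itself.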

While it is a lesser known category, using your own backyard and home to host dog guests as an alternative to a kennel is a large and growing business. Once again, this is an asset against which the marginal cost to host a dog is near zero. By combining their time with this otherwise unused asset, dog sitters are able to offer a service that is quite compelling for consumers. Rover.com(*) in Seattle, which was founded by Greg Gottesman and Aaron Easterly in 2011, is the leading player in this market. (Benchmark is an investor in Rover through a merger with DogVacay in 2017). You may be surprised to learn that this is already a massive industry. In less than a decade since the company started, Rover has already paid out over half a billion dollars to hosts that participate on the platform.

EXCHANGE OF LABOR MARKETPLACES

While not as well known as the goods exchanges or sharing economy marketplaces, there is a growing and exciting number of marketplaces that help match specifically skilled labor with key opportunities to monetize those skills. The most noteworthy of these is likely Upwork(*), a company formed from the merger of Elance and oDesk. Upwork is a global freelancing platform where businesses and independent professionals can connect and collaborate remotely. Popular categories include web developers, mobile developers, designers, writers, and accountants. In the 12 months ended June 30, 2018, the Upwork platform enabled $1.56 billion of GSV (gross services volume) across 2.0 million projects between approximately 375,000 freelancers and 475,000 clients in over 180 countries. These labor matches represent the exact “world is flat” reality outlined in Friedman’s book.

Other noteworthy and emerging labor marketplaces:

  • HackerOne(*) is the leading global marketplace that coordinates the world’s largest corporate “bug bounty” programs with a network of the world’s leading hackers. The company was founded in 2012 by Michiel Prins, Jobert Abma, Alex Rice, and Merijn Terheggen, and today serves the needs of over 1,000 corporate bug bounty programs. On top of that, the HackerOne network of over 300,000 hackers (adding 600 more each day) has resolved over 100K confirmed vulnerabilities, which resulted in over $46 million in awards to these individuals. There is an obvious network effect at work when you bring together the world’s leading programs and the world’s leading hackers on a single platform. The Fortune 500 is quickly learning that having a bug bounty program is an essential step in fighting cyber crime, and that HackerOne is the best place to host their program.
  • Wyzant is a leading Chicago-based marketplace that connects tutors with students around the country. The company was founded by Andrew Geant and Mike Weishuhn in 2005. The company has over 80,000 tutors on its platform and has paid out over $300 million to these professionals. The company started matching students with tutors for in-person sessions, but increasingly these are done “virtually” over the Internet.
  • Stitch Fix (*) is a leading provider of personalized clothing services that was founded by Katrina Lake in 2011. While the company is not primarily a marketplace, each order is hand-curated by a work-at-home “stylist” who works part-time on their own schedule from the comfort of their own home. Stitch Fix’s algorithms match the perfect stylist with each and every customer to help ensure the optimal outcome for each client. As of the end of 2018, Stitch Fix has paid out well over $100 million to their stylists.
  • Swing Education was founded in 2015 with the objective of creating a marketplace for substitute teachers. While it is still early in the company’s journey, they have already established themselves as the leader in the U.S. market. Swing is now at over 1,200 school partners and has filled over 115,000 teacher absence days. They have helped 2,000 substitute teachers get in the classroom in 2018, including 400 educators who earned permits, which Swing willingly financed. While it seems obvious in retrospect, having all substitutes on a single platform creates massive efficiency in a market where previously every single school had to keep their own list and make last minute calls when they had vacancies. And their subs just have to deal with one Swing setup process to get access to subbing opportunities at dozens of local schools and districts.
  • RigUp was founded by Xuan Yong and Mike Witte in Austin, Texas in March of 2014. RigUp is a leading labor marketplace focused on the oilfield services industry. “The company’s platform offers a large network of qualified, insured and compliant contractors and service providers across all upstream, midstream and downstream operations in every oil and gas basin, enabling companies to hire quickly, track contractor compliance, and minimize administrative work.” According to the company, GMV for 2017 was an impressive $150 million, followed by an astounding $600 million in 2018. Often, investors miss out on vertically focused companies like RigUp as they find themselves overly anxious about TAM (total available market). As you can see, that can be a big mistake.
  • VIPKid, which was founded in 2013 by Cindy Mi, is a truly amazing story. The idea is simple and simultaneously brilliant. VIPKid links students in China who want to learn English with native English-speaking tutors in the United States and Canada. All sessions are done over the Internet, once again epitomizing Friedman’s very flat world. In November of 2018, the company reported having 60,000 teachers contracted to teach over 500,000 students. Many people believe the company is now well north of a US$1B run rate, which implies that around $1B will change hands from Chinese parents to western teachers in 2019. That is quite a bit of supplemental income for U.S.-based teachers.

These vertical labor marketplaces are to LinkedIn what companies like Zillow, Expedia, and GrubHub are to Google search. Through a deeper understanding of a particular vertical, a much richer perspective on the quality and differentiation of the participants, and the enablement of transactions — you create an evolved service that has much more value to both sides of the transaction. And for those professionals participating in these markets, your reputation on the vertical service matters way more than your profile on LinkedIn.

NEW EMERGING MARKETPLACES

Having been a fortunate investor in many of the previously mentioned companies (*), Benchmark remains extremely excited about future marketplace opportunities that will unlock wealth on the Internet. Here are two examples of such companies that we have funded in the past few years.

The New York Times describes Hipcamp as “The Sharing Economy Visits the Backcountry.” Hipcamp(*) was founded in 2013 by Alyssa Ravasio as an engine to search across the dozens and dozens of State and National park websites for campsite availability. As Hipcamp gained traction with campers, landowners with land near many of the National and State parks started to reach out to Hipcamp asking if they could list their land on Hipcamp too. Hipcamp now offers access to more than 350k campsites across public and private land, and its most active private land hosts make over $100,000 per year hosting campers. This is a pretty amazing value proposition for both landowners and campers. If you are a rural landowner, here is a way to create “money out of nowhere” with very little capital expenditure. And if you are a camper, what could be better than camping at a unique, bespoke campsite in your favorite location?

Instawork(*) is an on-demand staffing app for gig workers (professionals) and hospitality businesses (partners). These working professionals seek economic freedom and a better life, and Instawork gives them both — an opportunity to work as much as they like, but on their own terms with regard to when and where. On the business partner side, small business owners/managers/chefs do not have access to reliable sources to help them with talent sourcing and high turnover, and products like LinkedIn are more focused on white-collar workers. Instawork was cofounded by Sumir Meghani in San Francisco and was a member of the 2015 Y Combinator class. 2018 was a break-out year for Instawork, with 10X revenue growth and 12X growth in Professionals on the platform. The average Instawork Professional is highly engaged on the platform, and typically opens the Instawork app ten times a day. This results in 97% of gigs being matched in less than 24 hours — which is powerfully important to both sides of the network. Also noteworthy, the Professionals on Instawork average 150% of minimum wage, significantly higher than many other labor marketplaces. This higher income allows Instawork Professionals like Jose to begin to accomplish their dreams.

THE POWER OF THESE PLATFORMS

As you can see, these numerous marketplaces are a direct extension of the productivity enhancers first uncovered by Adam Smith and David Ricardo. Free trade, specialization, and comparative advantage are all enhanced when we can increase the matching of supply and demand of goods and services as well as eliminate inefficiency and waste caused by misinformation or distance. As a result, productivity naturally improves.

Specific benefits of global internet marketplaces:

    1. Increase wealth distribution (all examples)
    2. Unlock wasted potential of assets (Uber, Airbnb, Rover, and Hipcamp)
    3. Better match specific workers with specific opportunities (Upwork, Wyzant, RigUp, VIPKid, Instawork)
    4. Make specific assets reachable and findable (eBay, Etsy, 1stdibs, Poshmark, GOAT)
    5. Allow for increased specialization (Etsy, Upwork, RigUp)
    6. Enhance supplemental labor opportunities (Uber, Stitch Fix, Swing Education, Instawork, VIPKid), where the worker is in control of when and where they work
    7. Reduce forfeiture of debt-financed assets (mortgages, car loans, etc.) by enhancing utilization (Uber, Airbnb, Rover, Hipcamp)

Source : http://abovethecrowd.com/2019/02/27/money-out-of-nowhere-how-internet-marketplaces-unlock-economic-wealth/

Digital Transformation of Business and Society: Challenges and Opportunities by 2020 – Frank Diana

At a recent KPMG Robotic Innovations event, Futurist and friend Gerd Leonhard delivered a keynote titled “The Digital Transformation of Business and Society: Challenges and Opportunities by 2020”. I highly recommend viewing the Video of his presentation. As Gerd describes, he is a Futurist focused on foresight and observations — not predicting the future. We are at a point in history where every company needs a Gerd Leonhard. For many of the reasons presented in the video, future thinking is rapidly growing in importance. As Gerd so rightly points out, we are still vastly under-estimating the sheer velocity of change.

With regard to future thinking, Gerd used my future scenario slide to describe both the exponential and combinatorial nature of future scenarios — not only do we need to think exponentially, but we also need to think in a combinatorial manner. Gerd mentioned Tesla as a company that really knows how to do this.

Our Emerging Future

He then described our current pivot point of exponential change: a point in history where humanity will change more in the next twenty years than in the previous 300. With that as a backdrop, he encouraged the audience to look five years into the future and spend 3 to 5% of their time focused on foresight. He quoted Peter Drucker (“In times of change the greatest danger is to act with yesterday’s logic”) and stated that leaders must shift from a focus on what is to a focus on what could be. Gerd added that “wait and see” means “wait and die” (love that by the way). He urged leaders to focus on 2020 and build a plan to participate in that future, emphasizing that the question is no longer what-if, but what-when. We are entering an era where the impossible is doable, and the headline for that era is: exponential, convergent, combinatorial, and inter-dependent — words that should be a key part of the leadership lexicon going forward. Here are some snapshots from his presentation:

  • Because of exponential progression, it is difficult to imagine the world in 5 years, and although the industrial era was impactful, it will not compare to what lies ahead. The danger of vastly under-estimating the sheer velocity of change is real. For example, in just three months, the projection for the number of autonomous vehicles sold in 2035 went from 100 million to 1.5 billion
  • Six years ago Gerd advised a German auto company about the driverless car and the implications of a sharing economy — and they laughed. Think of what’s happened in just six years — can’t imagine anyone is laughing now. He witnessed something similar as a veteran of the music business where he tried to guide the industry through digital disruption; an industry that shifted from selling $20 CDs to making a fraction of a penny per play. Gerd’s experience in the music business is a lesson we should learn from: you can’t stop people who see value from extracting that value. Protectionist behavior did not work, as the industry lost 71% of their revenue in 12 years. Streaming music will be huge, but the winners are not traditional players. The winners are Spotify, Apple, Facebook, Google, etc. This scenario likely plays out across every industry, as new businesses are emerging, but traditional companies are not running them. Gerd stressed that we can’t let this happen across these other industries
  • Anything that can be automated will be automated: truck drivers and pilots go away, as robots don’t need unions. There is just too much to be gained not to automate. For example, 60% of the cost in the system could be eliminated by interconnecting logistics, possibly realized via a Logistics Internet as described by economist Jeremy Rifkin. But the drive towards automation will have unintended consequences, and some science fiction scenarios could play out. Humanity and technology are indeed intertwining, but technology does not have ethics. A self-driving car would need ethics, as we make difficult decisions while driving all the time. How does a car decide between hitting a frog and swerving into a mother and her child? Speaking of science fiction scenarios, Gerd predicts that when these things come together, humans and machines will have converged:
  • Gerd has been using the term “Hellven” to represent the two paths technology can take. Is it 90% heaven and 10% hell (unintended consequences), or can this equation flip? He asks the question: Where are we trying to go with this? He used the real example of drones used to benefit society (heaven), but people buying guns to shoot them down (hell). As we pursue exponential technologies, we must do it in a way that avoids negative consequences. Will we allow humanity to move down a path where, by 2030, we will all be human-machine hybrids? Will hacking drive chaos, as hackers gain control of a vehicle? A recent recall of 1.4 million Jeeps underscores the possibility. A world of super intelligence requires super humanity — technology does not have ethics, but society depends on it. Is Ray Kurzweil’s vision what we want?
  • Is society truly ready for human-machine hybrids, or even advancements like the driverless car that may be closer to realization? Gerd used a very effective Video to make the point
  • Followers of my Blog know I’m a big believer in the coming shift to value ecosystems. Gerd described this as a move away from Egosystems, where large companies are running large things, to interdependent Ecosystems. I’ve talked about the blurring of industry boundaries and the movement towards ecosystems. We may ultimately move away from the industry construct and end up with a handful of ecosystems like: mobility, shelter, resources, wellness, growth, money, maker, and comfort
  • Our kids will live to 90 or 100 as the default. We are gaining 8 hours of longevity per day — one third of a year per year. Genetic engineering is likely to eradicate disease, impacting longevity and global population. DNA editing is becoming a real possibility in the next 10 years, and at least 50 Silicon Valley companies are focused on ending aging and eliminating death. One such company is Human Longevity Inc., which was co-founded by Peter Diamandis of Singularity University. Gerd used a quote from Peter to help the audience understand the motivation: “Today there are six to seven trillion dollars a year spent on healthcare, half of which goes to people over the age of 65. In addition, people over the age of 65 hold something on the order of $60 trillion in wealth. And the question is what would people pay for an extra 10, 20, 30, 40 years of healthy life. It’s a huge opportunity”
  • Gerd described the growing need to focus on the right side of our brain. He believes that algorithms can only go so far. Our right brain characteristics cannot be replicated by an algorithm, making a human-algorithm combination — or humarithm as Gerd calls it — a better path. The right brain characteristics that grow in importance and drive future hiring profiles are:
  • Google is on the way to becoming the global operating system — an Artificial Intelligence enterprise. In the future, you won’t search, because as a digital assistant, Google will already know what you want. Gerd quotes Ray Kurzweil in saying that by 2027, the capacity of one computer will equal that of the human brain — at which point we shift from an artificial narrow intelligence to an artificial general intelligence. In thinking about AI, Gerd flips the paradigm to IA, or Intelligent Assistant. For example, Schwab already has an intelligent portfolio. He indicated that every bank is investing in intelligent portfolios that deal with simple investments that robots can handle. This leads to a 50% replacement of financial advisors by robots and AI
  • This intelligent assistant race has just begun, as Siri, Google Now, Facebook MoneyPenny, and Amazon Echo vie for intelligent assistant positioning. Intelligent assistants could eliminate the need for actual assistants in five years, and creep into countless scenarios over time. Police departments are already capable of determining who is likely to commit a crime in the next month, and there are examples of police taking preventative measures. Augmentation adds another dimension, as an officer wearing glasses can identify you upon seeing you and have your records displayed in front of them. There are over 100 companies focused on augmentation, and a number of intelligent assistant examples surrounding IBM Watson; the most discussed being the effectiveness of doctor assistance. An intelligent assistant is the likely first role in the autonomous vehicle transition, as cars step in to provide a number of valuable services without completely taking over. There are countless Examples emerging
  • Gerd took two polls during his keynote. Here is the first: how do you feel about the rise of intelligent digital assistants? Answers 1 and 2 below received the lion’s share of the votes
  • Collectively, automation, robotics, intelligent assistants, and artificial intelligence will reframe business, commerce, culture, and society. This is perhaps the key take away from a discussion like this. We are at an inflection point where reframing begins to drive real structural change. How many leaders are ready for true structural change?
  • Gerd likes to refer to the 7-ations: Digitization, De-Materialization, Automation, Virtualization, Optimization, Augmentation, and Robotization. Consequences of the exponential and combinatorial growth of these seven include dependency, job displacement, and abundance. Whereas these seven are tools for dramatic cost reduction, they also lead to abundance. Examples are everywhere, from the 16 million songs available through Spotify, to the 3D printed cars that require only 50 parts. As supply exceeds demand in category after category, we reach abundance. As Gerd put it, in five years’ time, genome sequencing will be cheaper than flushing the toilet and abundant energy will be available by 2035 (2015 will be the first year that a major oil company will leave the oil business to enter the abundance of the renewable business). Other things to consider regarding abundance:
  • Efficiency and business improvement are a path, not a destination. Gerd estimates that total efficiency will be reached in 5 to 10 years, creating value through productivity gains along the way. However, after total efficiency is met, value comes from purpose. Purpose-driven companies have an aspirational purpose that aims to transform the planet; this is referred to as a massive transformative purpose in a recent book on exponential organizations. When you consider the value that the millennial generation places on purpose, it is clear that successful organizations must excel at both technology and humanity. If we allow technology to trump humanity, business will have no purpose
  • In the first phase, the value lies in the automation itself (productivity, cost savings). In the second phase, the value lies in those things that cannot be automated. Anything that is human about your company cannot be automated: purpose, design, and brand become more powerful. Companies must invent new things that are only possible because of automation
  • Technological unemployment is real this time — and exponential. Gerd pointed to a recent study by The Economist that describes how robotics and artificial intelligence will increasingly be used in place of humans to perform repetitive tasks. On the other side of the spectrum is a demand for better customer service and greater skills in innovation, driven by globalization and falling barriers to market entry. Therefore, creativity and social intelligence will become crucial differentiators for many businesses; jobs will increasingly demand skills in creative problem-solving and constructive interaction with others
  • Gerd described a basic income guarantee that may be necessary if some of these unemployment scenarios play out. Something like this is already on the ballot in Switzerland, and it is not the first time this has been talked about:
  • In the world of automation, experience becomes extremely valuable — and you can’t, nor should you attempt to, automate experiences. We clearly see an intense focus on customer experience, and we had a great discussion on the topic on an August 26th Game Changers broadcast. Innovation is critical to both the service economy and experience economy. Gerd used a visual to describe the progression of economic value:
Source: B. Joseph Pine II and James Gilmore: The Experience Economy
  • Gerd used a second poll to sense how people would feel about humans becoming artificially intelligent. Here again, the audience leaned towards the first two possible answers.

Gerd then summarized the session as follows:

The future is exponential, combinatorial, and interdependent: the sooner we can adjust our thinking (laterally), the better we will be at designing our future.

My take: Gerd hits on a key point. Leaders must think differently. There is very little in a leader’s collective experience that can guide them through the type of change ahead — it requires us all to think differently.

When looking at AI, consider trying IA first (intelligent assistance / augmentation).

My take: These considerations allow us to create the future in a way that avoids unintended consequences. Technology as a supplement, not a replacement.

Efficiency and cost reduction based on automation, AI/IA, and robotization are good stories but not the final destination: we need to go beyond the 7-ations and inevitable abundance to create new value that cannot be easily automated.

My take: Future thinking is critical for us to be effective here. We have to have a sense as to where all of this is heading, if we are to effectively create new sources of value.

We won’t just need better algorithms — we will also need stronger humarithms, i.e. values, ethics, standards, principles, and social contracts.

My take: Gerd is an evangelist for creating our future in a way that avoids hellish outcomes — and kudos to him for being that voice.

“The best way to predict the future is to create it” (Alan Kay).

My take: Our context when we think about the future puts it years away, and that is just not the case anymore. What we think will take ten years is likely to happen in two. We can’t create the future if we don’t focus on it through an exponential lens.

Source: https://medium.com/@frankdiana/digital-transformation-of-business-and-society-5d9286e39dbf

How redesigning an enterprise product taught me to extend myself – Instacart

As designers, we want to work on problems that are intriguing and “game-changing”. All too often, we limit the “game-changing” category to a handful of consumer-facing mobile apps and social networks. The truth is: enterprise software gives designers a unique set of complex problems to solve. Enterprise platforms usually have a savvy set of users with very specific needs — needs that, when addressed, often affect a business’s bottom line.

One of my first projects as a product designer here at Instacart was to redesign elements of our inventory management tool for retailers (e.g. Kroger, Publix, Safeway, Costco, etc.). As I worked on the project more and more, I learned that enterprise tools are full of gnarly complexity and often present opportunities to practice deep thought. As Jonathan, one of our current enterprise platform designers, said —

The greater the complexity, the greater the opportunity to find elegance.

New login screen

As we scoped the project we found that the existing product wasn’t enabling retailers to manage their inventories as concisely and efficiently as they could. We found retailer users were relying on customer support to help carry out smaller tasks. Our goal with the redesign was to build and deliver a better experience that would enable retailers to manage their inventory more easily and grow their business with Instacart.

The first step in redesigning was to understand the flow of the current product. We mapped out the journey of a partner going through the tool and spoke with the PMs to figure out what we could incorporate into the roadmap.

Overview of the older version of the retailer tool

Once we had a good understanding of the lay of the land, engineering resources, and retailers’ needs, we got into the weeds. Here are a few improvements we made to the tool —

Aisle and department management for Retailers

We used the department tiles feature from our customer-facing product as the catalog’s landing page (1.0 above). With this, we worked to:

  • Refine our visual style
  • Present retailers with an actionable page from the get-go
  • Make it quick and easy to add, delete, and modify items
New Departments page for the Partner Tool. Responsive tiles allow partners to view and edit their Aisles and Departments quickly.

Establishing Overall Hierarchy

Older item search page
Beverages > Coffee returns a list of coffees from the retailer’s catalog

Our solution simplified a few things:

  • A search bar rests atop the product to help find and add items without having to be on this specific page. It pops up a modal that offers a search and add experience. This was visually prioritized since it’s the most common action taken by retailers
  • Decoupled search flow and “Add new product” flow to streamline the workflows
  • Pagination, which was originally on the top and bottom, is now pinned to the bottom of the page for easy navigation
  • We also rethought the information hierarchy on this page. In the example below, the retailer is in the “Beverages” aisle under the “Coffee” item category, which is on the top left. They are editing or adding the item “Eight O’Clock Coffee,” which is the page title. This title is bigger to anchor the user on the page and improve navigation throughout the platform
Focused view of top bar. The “New Product” button is disabled since this is a view to add products

Achieving Clarity

While it’s great that the older Item Details page was partitioned into sections, from an IA perspective, it offered challenges for two reasons:

  1. The category grouping didn’t make sense to retailers
  2. Retailers had to read the information vertically but digest it horizontally and vertically
Older version of Item Details page

To address this, we broke down the sections into what’s truly necessary. From there, we identified four main categories of information that the data fell under:

  1. Images — This is first to encourage retailers to add product photos
  2. Basic Info — Name, brand, size, and unit
  3. Item description — Below the item description field, we offered the description seen on the original package (where the data was available) to help guide them as they wrote
  4. Product attributes — help better categorize the product (e.g. Kosher)

Sources now pop up on the top right of the input fields so the editor knows who last made changes.


Takeaways

Seeking validation through numbers is always fantastic. We did a small beta launch of this product and saw an increase in weekly engagement and decrease in support requests.

I learned that designing enterprise products helps you extend yourself as a visual designer and deep product thinker. I approached this project as an opportunity to break down complex interactions and bring visual elegance to a product through thoughtful design. To this day, it remains one of my favorite projects at Instacart as it stretched my thinking and enhanced my visual design chops. Most importantly, it taught me to look at enterprise tools in a new light; now when I look at them, I am able to appreciate the complexity within.

Source: https://tech.instacart.com/how-redesigning-an-enterprise-product-taught-me-to-extend-myself-8f83d72ebcdf

6 Biases Holding You Back From Rational Thinking – Robert Greene

Emotions are continually affecting our thought processes and decisions, below the level of our awareness. And the most common emotion of them all is the desire for pleasure and the avoidance of pain. Our thoughts almost inevitably revolve around this desire; we simply recoil from entertaining ideas that are unpleasant or painful to us. We imagine we are looking for the truth, or being realistic, when in fact we are holding on to ideas that bring a release from tension and soothe our egos, make us feel superior. This pleasure principle in thinking is the source of all of our mental biases. If you believe that you are somehow immune to any of the following biases, it is simply an example of the pleasure principle in action. Instead, it is best to search and see how they continually operate inside of you, as well as learn how to identify such irrationality in others.

These biases, by distorting reality, lead to the mistakes and ineffective decisions that plague our lives. Being aware of them, we can begin to counterbalance their effects.

1) Confirmation Bias

I look at the evidence and arrive at my decisions through more or less rational processes.

To hold an idea and convince ourselves we arrived at it rationally, we go in search of evidence to support our view. What could be more objective or scientific? But because of the pleasure principle and its unconscious influence, we manage to find that evidence that confirms what we want to believe. This is known as confirmation bias.

We can see this at work in people’s plans, particularly those with high stakes. A plan is designed to lead to a positive, desired objective. If people considered the possible negative and positive consequences equally, they might find it hard to take any action. Inevitably they veer towards information that confirms the desired positive result, the rosy scenario, without realizing it. We also see this at work when people are supposedly asking for advice. This is the bane of most consultants. In the end, people want to hear their own ideas and preferences confirmed by an expert opinion. They will interpret what you say in light of what they want to hear; and if your advice runs counter to their desires, they will find some way to dismiss your opinion, your so-called expertise. The more powerful the person, the more they are subject to this form of the confirmation bias.

When investigating confirmation bias in the world take a look at theories that seem a little too good to be true. Statistics and studies are trotted out to prove them, which are not very difficult to find, once you are convinced of the rightness of your argument. On the Internet, it is easy to find studies that support both sides of an argument. In general, you should never accept the validity of people’s ideas because they have supplied “evidence.” Instead, examine the evidence yourself in the cold light of day, with as much skepticism as you can muster. Your first impulse should always be to find the evidence that disconfirms your most cherished beliefs and those of others. That is true science.

2) Conviction Bias

I believe in this idea so strongly. It must be true.

We hold on to an idea that is secretly pleasing to us, but deep inside we might have some doubts as to its truth, and so we go the extra mile to convince ourselves — to believe in it with great vehemence, and to loudly contradict anyone who challenges us. How can our idea not be true, we tell ourselves, if it brings out such energy in us to defend it? This bias is revealed even more clearly in our relationship to leaders — if they express an opinion with heated words and gestures, colorful metaphors and entertaining anecdotes, and a deep well of conviction, it must mean they have examined the idea carefully and therefore express it with such certainty. Those, on the other hand, who express nuances, whose tone is more hesitant, reveal weakness and self-doubt. They are probably lying, or so we think. This bias makes us easy prey for salesmen and demagogues who display conviction as a way to convince and deceive. They know that people are hungry for entertainment, so they cloak their half-truths with dramatic effects.

3) Appearance Bias

I understand the people I deal with; I see them just as they are.

We do not see people as they are, but as they appear to us. And these appearances are usually misleading. First, people have trained themselves in social situations to present the front that is appropriate and that will be judged positively. They seem to be in favor of the noblest causes, always presenting themselves as hardworking and conscientious. We take these masks for reality. Second, we are prone to fall for the halo effect — when we see certain negative or positive qualities in a person (social awkwardness, intelligence), other positive or negative qualities are implied that fit with this. People who are good looking generally seem more trustworthy, particularly politicians. If a person is successful, we imagine they are probably also ethical, conscientious, and deserving of their good fortune. This obscures the fact that many people who get ahead have done so through actions that are less than moral, which they cleverly disguise from view.

4) The Group Bias

My ideas are my own. I do not listen to the group. I am not a conformist.

We are social animals by nature. The feeling of isolation, of difference from the group, is depressing and terrifying. We experience tremendous relief to find others who think the same way as we do. In fact, we are motivated to take up ideas and opinions because they bring us this relief. We are unaware of this pull and so imagine we have come to certain ideas completely on our own. Look at people that support one party or the other, one ideology — a noticeable orthodoxy or correctness prevails, without anyone saying anything or applying overt pressure. If someone is on the right or the left, their opinions will almost always follow the same direction on dozens of issues, as if by magic, and yet few would ever admit this influence on their thought patterns.

5) The Blame Bias

I learn from my experience and mistakes.

Mistakes and failures elicit the need to explain. We want to learn the lesson and not repeat the experience. But in truth, we do not like to look too closely at what we did; our introspection is limited. Our natural response is to blame others, circumstances, or a momentary lapse of judgment. The reason for this bias is that it is often too painful to look at our mistakes. It calls into question our feelings of superiority. It pokes at our ego. We go through the motions, pretending to reflect on what we did. But with the passage of time, the pleasure principle rises and we forget what small part in the mistake we ascribed to ourselves. Desire and emotion will blind us yet again, and we will repeat exactly the same mistake and go through the same mild recriminating process, followed by forgetfulness, until we die. If people truly learned from their experience, we would find few mistakes in the world, and career paths that ascend ever upward.

6) Superiority Bias

I’m different. I’m more rational than others, more ethical as well.

Few would say this to people in conversation. It sounds arrogant. But in numerous opinion polls and studies, when asked to compare themselves to others, people generally express a variation of this. It’s the equivalent of an optical illusion — we cannot seem to see our faults and irrationalities, only those of others. So, for instance, we’ll easily believe that those in the other political party do not come to their opinions based on rational principles, but those on our side have done so. On the ethical front, few will ever admit that they have resorted to deception or manipulation in their work, or have been clever and strategic in their career advancement. Everything they’ve got, or so they think, comes from natural talent and hard work. But with other people, we are quick to ascribe to them all kinds of Machiavellian tactics. This allows us to justify whatever we do, no matter the results.

We feel a tremendous pull to imagine ourselves as rational, decent, and ethical. These are qualities highly promoted in the culture. To show signs otherwise is to risk great disapproval. If all of this were true — if people were rational and morally superior — the world would be suffused with goodness and peace. We know, however, the reality, and so some people, perhaps all of us, are merely deceiving ourselves. Rationality and ethical qualities must be achieved through awareness and effort. They do not come naturally. They come through a maturation process.

Source : https://medium.com/the-mission/6-biases-holding-you-back-from-rational-thinking-f2eddd35fd0f

Here Are the Top Five Questions CEOs Ask About AI – CIO

Recently in a risk management meeting, I watched a data scientist explain to a group of executives why convolutional neural networks were the algorithm of choice to help discover fraudulent transactions. The executives—all of whom agreed that the company needed to invest in artificial intelligence—seemed baffled by the need for so much detail. “How will we know if it’s working?” asked a senior director to the visible relief of his colleagues.

Although they believe in AI’s value, many executives are still wondering about its adoption. The following five questions are boardroom staples:

1. “What’s the reporting structure for an AI team?”

Organizational issues are never far from the minds of executives looking to accelerate efficiencies and drive growth. And, while this question isn’t new, the answer might be.

Captivated by the idea of data scientists analyzing potentially competitively-differentiating data, managers often advocate formalizing a data science team as a corporate service. Others assume that AI will fall within an existing analytics or data center-of-excellence (COE).

AI positioning depends on incumbent practices. A retailer’s customer service department designated a group of AI experts to develop “follow the sun” chatbots that would serve the retailer’s increasingly global customer base. Conversely, a regional bank considered AI more of an enterprise service, centralizing statisticians and machine learning developers into a separate team reporting to the CIO.

These decisions were vastly different, but they were both the right ones for their respective companies.

Considerations:

  • How unique (e.g., competitively differentiating) is the expected outcome? If the proposed AI effort is seen as strategic, it might be better to create a team of subject matter experts and developers with its own budget, headcount, and skills so as not to distract from or siphon resources from existing projects.
  • To what extent are internal skills available? If data scientists and AI developers are already clustered within a COE, it might be better to leave the team as-is, hiring additional experts as demand grows.
  • How important will it be to package and brand the results of an AI effort? If the AI outcome is a new product or service, it might be better to create a dedicated team that can deliver the product and assume maintenance and enhancement duties as it continues to innovate.

2. “Should we launch our AI effort using some sort of solution, or will coding from scratch distinguish our offering?”

When people hear the term AI they conjure thoughts of smart Menlo Park hipsters stationed at standing desks wearing ear buds in their pierced ears and writing custom code late into the night. Indeed, some version of this scenario is how AI has taken shape in many companies.

Executives tend to romanticize AI development as an intense, heads-down enterprise, forgetting that development planning, market research, data knowledge, and training should also be part of the mix. Coding from scratch might actually prolong AI delivery, especially with the emerging crop of developer toolkits (Amazon SageMaker and Google Cloud AI are two) that bundle open source routines, APIs, and notebooks into packaged frameworks.

These packages can accelerate productivity, carving weeks or even months off development schedules. Or they can complicate collaboration efforts.

Considerations:

  • Is time-to-delivery a success metric? In other words, is there lower tolerance for research or so-called “skunkworks” projects where timeframes and outcomes could be vague?
  • Is there a discrete budget for an AI project? This could make it easier to procure developer SDKs or other productivity tools.
  • How much research will developer toolboxes require? Depending on your company’s level of skill, in the time it takes to research, obtain approval for, procure, and learn an AI developer toolkit your team could have delivered important new functionality.

3. “Do we need a business case for AI?”

It’s all about perspective. AI might be positioned as edgy and disruptive with its own internal brand, signaling a fresh commitment to innovation. Or it could represent the evolution of analytics, the inevitable culmination of past efforts that laid the groundwork for AI.

I’ve noticed that AI projects are considered successful when they are deployed incrementally, when they further an agreed-upon goal, when they deliver something the competition hasn’t done yet, and when they support existing cultural norms.

Considerations:

  • Do other strategic projects require business cases? If they do, decide whether you want AI to be part of the standard cadre of successful strategic initiatives, or to stand on its own.
  • Are business cases generally required for capital expenditures? If so, would bucking the norm make you an innovative disruptor, or an obstinate rule-breaker?
  • How formal is the initiative approval process? The absence of a business case might signal a lack of rigor, jeopardizing funding.
  • What will be sacrificed if you don’t build a business case? Budget? Headcount? Visibility? Prestige?

4. “We’ve had an executive sponsor for nearly every high-profile project. What about AI?”

Incumbent norms once again matter here. But when it comes to AI the level of disruption is often directly proportional to the need for a sponsor.

A senior AI specialist at a health care network decided to take the time to discuss possible AI use cases (medication compliance, readmission reduction, and deep learning diagnostics) with executives “so that they’d know what they’d be in for.” More importantly she knew that the executives who expressed the most interest in the candidate AI undertakings would be the likeliest to promote her new project. “This is a company where you absolutely need someone powerful in your corner,” she explained.

Considerations:

  • Does the company’s funding model require an executive sponsor? Challenging that rule might cost you time, not to mention allies.
  • Have high-impact projects with no executive sponsor failed?  You might not want your AI project to be the first.
  • Is the proposed AI effort specific to a line of business? In this case enlisting an executive sponsor familiar with the business problem AI is slated to solve can be an effective insurance policy.

5. “What practical advice do you have for teams just getting started?”

If you’re new to AI you’ll need to be careful about departing from norms, since this might attract undue attention and distract from promising outcomes. Remember Peter Drucker’s quote about culture eating strategy for breakfast? Going rogue is risky.

On the other hand, positioning AI as disruptive and evolutionary can do wonders for both the external brand as well as internal employee morale, assuring constituents that the company is committed to innovation, and considers emerging tech to be strategic.

Either way, the most important success measures for AI are setting accurate expectations, sharing them often, and addressing questions and concerns without delay.

Considerations:

  • Distribute a high-level delivery schedule. An unbounded research project is not enough. Be sure you’re building something—AI experts agree that execution matters—and be clear about the delivery plan.
  • Help colleagues envision the benefits. Does AI promise first mover advantage? Significant cost reductions? Brand awareness?
  • Explain enough to color in the goal. Building a convolutional neural network to diagnose skin lesions via image scans is a world away from using unsupervised learning to discover unanticipated correlations between customer segments. As one of my clients says, “Don’t let the vague in.”
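To make that contrast concrete, here is a toy scikit-learn sketch (my own illustration; the dataset and models are stand-ins, not from either project the article mentions) of the difference between a supervised model, which comes with an obvious success metric, and unsupervised learning, which only surfaces groupings to interpret:

```python
# Toy contrast between the two kinds of AI goals named above (illustrative
# stand-ins only): a supervised classifier with a clear success metric
# versus unsupervised clustering with no "right answer" built in.
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# Supervised: "How will we know if it's working?" has a direct answer.
clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)

# Unsupervised: the output is discovered structure, not a scored prediction.
clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(digits.data)

print(f"supervised accuracy: {accuracy:.2f}")
print(f"groupings discovered: {len(set(clusters))}")
```

In the supervised case the goal statement can include a number; in the unsupervised case the goal has to be framed as exploration — which is exactly why executives need the goal “colored in.”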

These days AI has mojo. Companies are getting serious about it in a way they haven’t been before. And the more your executives understand about how it will be deployed—and why—the better the chances for delivering ongoing value.

Source: https://www.cio.com/article/3318639/artificial-intelligence/5-questions-ceos-are-asking-about-ai.html

Augmented reality, the state of the art in the industry – Miscible

Miscible.io attended the Augmented World Expo Europe in Munich in October 2018; here is my report.

What a great #AWE2018 show in Munich, with a strong focus on industrial usage — and, of course, the German automotive industry was well represented. There were some new, simple but efficient AR devices, and plenty of good use cases with confirmed ROI. This edition was PRAGMATIC.

Here are my six takeaways from this edition. Enjoy!

1 – The return on investment of AR solutions

The use of XR by automotive companies, big pharma, and teachers confirmed good ROI with some “ready to use” solutions, especially in the following domains:

2 – These are still the early days of AR, and improvements are expected in several areas

  • Hardware: field of view, contrast/brightness, 3D asset resolution
  • Some AR headsets are heavy to wear, which can affect operator comfort and safety
  • Accuracy of the overlay and recognition between the virtual and the real
  • Automating the process from authoring software to an end-user solution

3 – The challenge of authoring

To create specific and advanced AR apps, there are still challenges with content authoring and with integration into legacy systems to retrieve master data and 3D assets. Automated, integrated AR apps need some ingenious development.

An interesting use case from Boeing (using HoloLens to assist cable mounting) shows how they built an integrated and automated AR app. Their AR solution architecture has four blocks:

  • A web service to design the new AR app (UX and workflow)
  • A call to legacy systems to collect master data and 3D data/assets
  • Creation of an integrated data package (an asset bundle) for the AR app
  • Creation of the specific AR app (Vuforia/Unity), to be transferred to the standalone device, the HoloLens headset
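As a rough sketch of how those four blocks chain together — design, fetch, bundle, build — the following Python outline may help. All function names, part IDs, and data shapes are hypothetical; the article gives no implementation detail:

```python
# Hypothetical sketch of the four-block AR authoring pipeline described
# above. Every name and data shape here is invented for illustration.

def design_ar_app(workflow_steps):
    # Block 1: a web service where the UX and workflow are designed.
    return {"workflow": workflow_steps}

def fetch_legacy_data(part_ids):
    # Block 2: calls to legacy systems for master data and 3D assets.
    master_data = {pid: {"description": f"wiring part {pid}"} for pid in part_ids}
    assets_3d = {pid: f"{pid}.glb" for pid in part_ids}
    return master_data, assets_3d

def build_asset_bundle(design, master_data, assets_3d):
    # Block 3: package design, data, and assets into one bundle.
    return {"design": design, "data": master_data, "assets": assets_3d}

def build_ar_app(bundle):
    # Block 4: compile the bundle into the device-specific app
    # (Vuforia/Unity in Boeing's case) and target the headset.
    return {"target": "hololens", "bundle": bundle}

design = design_ar_app(["locate harness", "route cable", "verify"])
master, assets = fetch_legacy_data(["W-100", "W-101"])
app = build_ar_app(build_asset_bundle(design, master, assets))
print(app["target"])
```

The point of the architecture is that blocks 2 and 3 are automated, so a new AR app can be regenerated whenever the legacy master data changes.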

4 – The concept of the 3D asset as master data

The usage of AR and VR is becoming more important in many domains, from design to maintenance and sales (configurators, catalogs, etc.).

The consequence is that original CAD files are transformed and reused in different processes across the company. It becomes a challenge to bring high-polygon models from CAD applications into other 3D/VR/AR applications, which need lighter 3D assets as well as texture and rendering adjustments.

glTF can be a solution: glTF defines an extensible, common publishing format for 3D content tools and services that streamlines authoring workflows and enables interoperable use of content across the industry.
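Since glTF is plain JSON, the shape of an asset is easy to see. A minimal sketch (per the glTF 2.0 specification, the only required top-level property is `asset` with a `version`; the generator string is an optional field I added for illustration):

```python
import json

# Minimal valid glTF 2.0 document: the spec requires only the "asset"
# property with a "version". Real exports add "scenes", "nodes", "meshes",
# "buffers", "accessors", textures, and so on.
gltf = {"asset": {"version": "2.0", "generator": "hand-written example"}}

doc = json.dumps(gltf, indent=2)
print(doc)
```

Because the container is ordinary JSON (or the binary .glb packing of it), glTF assets can be generated, validated, and versioned by the same pipelines that manage other master data.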

The main challenge is to implement a good centralized and integrated 3D asset management strategy, treating 3D assets as being as important as your other key master data.

5 – Service companies and experts to design advanced AR/VR solutions, integrated into the enterprise information system

The design of advanced and integrated AR solutions for large companies requires new experts who combine knowledge of 3D apps with experience in system integration.

These projects need new types of information system architecture that take AR technologies into account.

PTC looks like a leader in providing efficient and scalable tools for large companies. PTC, owner of Vuforia, also excels with other 3D/PLM management solutions like Windchill, smoothly integrating 3D management into all the processes and IT of the enterprise.

Sopra Steria, the French IS integration company, is also taking on this role, bringing its system integration experience to the new AR/VR usages in industry.

If you don’t want to invest in this kind of complex project, new content authoring solutions exist for a first step in AR/VR or for quick wins on a low budget; they let you build your AR app with simple user interfaces and workflows: Skylight by Upskill and WorkLink by Scope AR.

6 – The need for an open AR Cloud

“A real-time 3D (or spatial) map of the world, the AR cloud, will be the single most important software infrastructure in computing. Far more valuable than Facebook’s social graph or Google’s PageRank index,” says Ori Inbar, co-founder and CEO of AugmentedReality.ORG. A promising prediction.

The AR cloud provides a persistent, multi-user, and cross-device AR landscape. It allows people to share experiences and collaborate. The best-known AR cloud experience so far is the famous Pokémon Go game.

So far, AR maps work using GPS, image recognition, or a local point cloud for a limited space such as a building. The dream is to copy the whole world as a point cloud, for a global AR cloud landscape: a real-time system that could also be used by robots, drones, and more.

The AWE exhibition presented some interesting AR cloud initiatives:

  • The Open AR Cloud Initiative was launched at the event and had its first working session.
  • Some good SDKs are now available for building your own local AR clouds: Wikitude and Immersal

Source: https://www.linkedin.com/pulse/augmented-reality-state-art-industry-fr%C3%A9d%C3%A9ric-niederberger/


Edge Computing Emerges as Megatrend in Automation – Design News

Edge computing technology is quickly becoming a megatrend in industrial control, offering a wide range of benefits for factory automation applications. While the major cloud suppliers are expanding, new communications hardware and software technologies are beginning to provide solutions that go beyond the previous offerings used in factory automation.

A future application possibility that illustrates both the general concept and potential impact of edge computing in automation and control is edge data being visualized on a tablet in a brownfield application. (Image source: B&R Industrial Automation)

“The most important benefit [compared to existing solutions] will be interoperability—from the device level to the cloud,” John Kowal, director of business development for B&R Industrial Automation, told Design News. “So it’s very important that communications be standards-based, as you see with OPC UA TSN. ‘Flavors’ of Ethernet including ‘flavors’ of TSN should not be considered as providing interoperable edge communications, although they will function perfectly well in a closed system. Interoperability is one of the primary differences between previous solutions. OPC UA TSN is critical to connecting the edge device to everything else.”

Emerging Technology Solutions

Kowal added that, in legacy installations, gateways will be necessary to translate data from proprietary systems—ideally using OPC UA over standard Ethernet to the cloud. An edge computing device can also provide this gateway translation capability. “One of the benefits of edge technology is its ability to perform analytics and optimization locally, and therefore achieve faster response for more dynamic applications, such as adjusting line speeds and product accumulation to balance the line. You do not expect this capability of a gateway,” Kowal added.
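To illustrate the kind of local loop Kowal describes, here is a purely hypothetical sketch (the constants, readings, and control rule are all invented) of an edge node adjusting line speed from buffer readings without a cloud round trip:

```python
# Hypothetical edge-control loop: adjust line speed from local buffer
# accumulation readings, without waiting on a cloud round trip.

TARGET_BUFFER = 50      # desired number of products accumulated downstream
GAIN = 0.5              # proportional adjustment factor (invented)

def adjust_line_speed(current_speed, buffer_level):
    # Simple proportional control: slow down when the downstream buffer
    # overfills, speed up when it runs low.
    error = TARGET_BUFFER - buffer_level
    return max(0.0, current_speed + GAIN * error / TARGET_BUFFER)

speed = 10.0
for buffer_level in [50, 80, 80, 20]:   # simulated local sensor readings
    speed = adjust_line_speed(speed, buffer_level)
print(round(speed, 2))
```

A real deployment would use the plant’s actual control logic; the point is only that the decision happens at the edge, on local data, within one control cycle.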

Sari Germanos of B&R added that these comments about edge computing can also be equally applied to the cloud. “With edge, you are using fog instead of cloud with a gateway. Edge controllers need things like redundancy and backup, while cloud services do that for you automatically,” Germanos said. He also noted that cloud computing generally makes data readily accessible from anywhere in the world, while the choice of serious cloud providers for industrial production applications is limited. Edge controllers are likely to have more local features and functions, though the responsibility for tasks like maintenance and backup falls on the user.

Factory Automation Applications

Kowal noted that you could say that any automation application would benefit from collecting and analyzing data at the edge. But the key is what kind of data, what aspects of operations, and what are the expectations of analytics that can deliver actionable productivity improvements? “If your goal is uptime, then you will want to collect data on machine health, such as bearing frequencies, temperatures, lubrication and coolant levels, increased friction on mechanical systems, gauging, and metrology,” he said.
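The bearing-frequency item in that list is a classic spectral task. As a self-contained NumPy sketch with synthetic data (not vendor code; the 120 Hz tone, sample rate, and noise level are invented for illustration):

```python
import numpy as np

# Synthetic vibration signal: a 120 Hz bearing tone buried in noise,
# sampled at 1 kHz for one second (all values invented for illustration).
fs = 1000                       # sampling rate in Hz
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 120 * t) + 0.5 * rng.standard_normal(fs)

# FFT-based peak pick: the bin with the most energy gives the dominant
# frequency, which can be tracked over time for wear trends.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(fs, d=1 / fs)
dominant = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(dominant)
```

Tracking how this dominant frequency and its harmonics drift over time is one common way machine-health analytics turn raw vibration into an uptime signal.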

Some of the same logic applies to product quality. Machine wear and tear leads to reduced yield, which can in turn be expressed in terms of OEE (overall equipment effectiveness). OEE data gathering may already be taking place, but without edge analytics it is typically not captured at short intervals or automatically communicated and analyzed.
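OEE itself is a standard, simple metric: the product of availability, performance, and quality, each expressed as a fraction. The shift figures below are illustrative only.

```python
def oee(availability, performance, quality):
    """Overall Equipment Effectiveness: the product of the three
    standard factors, each a fraction between 0 and 1."""
    return availability * performance * quality

# Illustrative shift: 90% uptime, 95% of ideal cycle time, 98% good parts
print(round(oee(0.90, 0.95, 0.98), 4))  # -> 0.8379
```

The point of edge capture is not the arithmetic but the sampling interval: computing this per minute rather than per shift turns a lagging report into a signal you can act on.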

Capturing Production Capacity as well as Machine and Materials Availability

Beyond the maintenance and production efficiency aspects, Kowal said that users should consider capturing production capacity, machine and raw material availability, and constraint and output data. These will be needed to schedule smaller batch sizes, tie more effectively into ordering and production scheduling systems, and ultimately improve delivery times to customers.
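The scheduling use of this capacity data can be illustrated with a toy feasibility check: given per-unit cycle time and a changeover penalty per batch, do the scheduled batches fit in the available machine hours? Smaller batches pay more changeovers, which is exactly the trade-off the captured data would inform. All figures and function names are hypothetical.

```python
# Toy capacity check for batch scheduling. Cycle time and changeover
# figures are illustrative assumptions, not real machine data.

def required_hours(batch_sizes, cycle_time_s, changeover_min):
    """Machine hours needed: run time for every unit plus one
    changeover per batch."""
    run_s = sum(batch_sizes) * cycle_time_s
    changeover_s = len(batch_sizes) * changeover_min * 60
    return (run_s + changeover_s) / 3600

def fits(batch_sizes, cycle_time_s, changeover_min, available_hours):
    """True if the schedule fits within the available machine hours."""
    return required_hours(batch_sizes, cycle_time_s, changeover_min) <= available_hours

# Two batches (100 and 200 units), 30 s/unit, 30 min changeover each
print(required_hours([100, 200], 30, 30))  # -> 3.5
```

With live availability and constraint data from the edge, `available_hours` stops being a planning guess and becomes a measured quantity.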

Edge control technology also offers benefits compared to IoT gateway products. Kowal said that he’s never been big on splitting hairs with technology definitions—at least not from the perspective of results. But fundamentally, brownfield operators tend to want gateways to translate between their installed base of equipment, which may not even be currently networked, and the cloud. Typically, these are boxes equipped with legacy communications interfaces that act as a gateway to get data from the control system without a controls retrofit, which can be costly, risky, and even ineffective.

“We have done some work in this space, though B&R’s primary market is in new equipment,” Kowal added. “In that case, you have many options for how to implement edge computing on a new machine or production line. You can use smart sensors and other devices connected directly to the cloud or to an edge controller. The edge controller or computing resource can take many form factors: a machine controller, an industrial PC that’s also used for other tasks like HMI or cell control, a small PLC used within the machine, or a standalone dedicated edge controller.”

Boosted Memory, Processing, and Connections

Germanos noted that industrial controllers were not designed to be edge controllers; they are typically designed to control one machine rather than a complete production line. Edge controllers have built-in redundancy to maintain production line operation.

“If I was designing a new machine, cell, line, or facility, I would set up the machine controllers as the edge controller/computers rather than add another piece of control hardware or gateway,” Germanos said. “Today, you can get machine controllers with plenty of memory, processing power, and network connections. I would not select a control platform unless it supports OPC UA, and I would strongly urge selecting a technology provider that supports the OPC UA TSN movement known as ‘The Shapers,’ so that as this new standard for Industrial Ethernet evolves, I would be free from the ‘flavors’ of Ethernet.”

His recommendation is to use a platform that runs a real-time operating system for the machinery on one core and, via a hypervisor, whatever other OS might be appropriate for additional applications that run on Windows or Linux.

Source : https://www.designnews.com/automation-motion-control/edge-computing-emerges-megatrend-automation/27888481159634

 
