News & Press

Follow our news and stay up to date on industry trends.

Why Blockchain Differs From Traditional Technology Life Cycles – Daniel Heyman

Why another bubble is likely and what the blockchain space should focus on now

In the aftermath of the 2001 internet bubble, Carlota Perez published her influential book Technological Revolutions and Financial Capital. This seminal work provides a framework for how new technologies create both opportunity and turmoil in society. I originally learned about Perez’s work through venture capitalist Fred Wilson, who credits it as a key intellectual underpinning of his investment theses.

In the wake of the 2018 ICO bubble and with the purported potential of blockchain, many people have drawn parallels to the 2001 bubble. I recently reread Perez’s work to think through whether it holds any lessons for the world of blockchain, and to understand the parallels and differences between then and now. As Mark Twain may or may not have said, “History doesn’t repeat itself, but it does rhyme.”

Framework Overview

In Technological Revolutions and Financial Capital, Carlota Perez analyzes five “surges of development” that have occurred over the last 250 years, each through the diffusion of a new technology and an associated way of doing business. These surges are still household names: the Industrial Revolution, the railway boom, the age of steel, the age of mass production and, of course, the information age. Each one created a burst of development, new ways of doing business, and a new class of successful entrepreneurs (from Carnegie to Ford to Jobs). Each one created an economic common sense and a set of business models that supported the new technology, which Perez calls a ‘techno-economic paradigm’. Each surge also displaced old industries, inflated bubbles that burst, and led to significant social turmoil.

Technology Life Cycles

Perez provides a framework for how new technologies first take hold in society and then transform it. She calls the initial phase of this phenomenon “installation.” During installation, technologies demonstrate new ways of doing business and achieving financial gains. This usually creates a frenzy of investment in the new technology, which drives both a bubble and intense experimentation with the technology. When the bubble bursts, the subsequent recession (or depression) is a turning point for implementing the social and regulatory changes needed to take advantage of the infrastructure created during the frenzy. If those changes are made, a “golden age” typically follows as the new technology is productively deployed. If not, a “gilded age” follows in which only the rich benefit. In either case, the technology eventually reaches maturity, and additional avenues for investment and returns dwindle. At this point, the opportunity emerges for a new technology to irrupt onto the scene.

Image from Technological Revolutions and Financial Capital

Inclusion-Exclusion

Within Perez’s framework, new techno-economic paradigms both encourage and discourage innovation through an inclusion-exclusion process. As a new techno-economic paradigm is deployed, it gives entrepreneurs opportunities to mobilize and new modes of business through which to create growth; at the same time, it excludes alternative technologies, because entrepreneurs and capital follow the newly proven path the paradigm provides. When an existing technology reaches maturity and investment opportunities diminish, capital and talent go in search of new technologies and techno-economic paradigms.

Technologies Combine

One new technology isn’t enough for a new techno-economic paradigm. The age of mass production was created by combining oil and the combustion engine. Railways required the steam engine. The information age required the microprocessor, the internet, and much more. Often, a technology will, as Perez says, “gestate” as a small improvement to the existing techno-economic paradigm until complementary technologies are created and the exclusion process of the old paradigm ends. Technologies can exist in this gestation period for quite some time until the technologies and opportunities align for the installation period to begin.

Frenzies and Bubbles

In many ways, the bubbles created by the frenzy in the installation phase make it possible for the new technology to succeed. The bubble creates a burst of (over-)investment in the infrastructure of the new technology (railways, canals, fiber optic cables, etc.). This infrastructure makes it possible for the technology to deploy successfully after the bubble bursts. The bubbles also encourage a spate of experimentation with new business models and new approaches to the technologies, enabling future entrepreneurs to follow proven paths and avoid common pitfalls. While the bubble creates a lot of financial losses and economic pain, it can be crucial to the adoption of new technologies.

Connecting the Dots

A quick look at Perez’s framework would lead one to assume that 2018 was the blockchain frenzy and bubble, and that we must therefore be entering blockchain’s “turning point.” This would be a mistake.

My analysis of Perez’s framework suggests that blockchain is actually still in the gestation period, in the early days of a technology life cycle before the installation period. 2018 was not a Perez-style frenzy and bubble because it did not include key outcomes that are necessary to reach a turning point: significant infrastructure improvements and replicable business models that can serve as a roadmap during the deployment period. The bubble came early because blockchain technology enabled liquidity earlier in its life cycle.

There are three main implications of remaining in the gestation period. First, another blockchain-based frenzy and bubble is likely to come before the technology matures. In fact, multiple bubbles may be ahead of us. Second, the best path to success is to work through, rather than against, the existing technology paradigm. Third, the ecosystem needs to heavily invest in infrastructure for a new blockchain-based paradigm to emerge.

The ICO Bubble Doesn’t Match Up

2018 did show many of the signs of a Perez-style ‘frenzy period’ entering into a turning point. The best way (and ultimately the worst way) to make money was speculation. ‘Fundamentals’ of projects rarely mattered in their valuations or growth. Wealth was celebrated and individual prophets gained recognition. Expectations went through the roof. Scams and fraud were prevalent. Retail investors piled in for fear of missing out. The frenzy had all the tell-tale signs of a classic bubble.

Although there are no “good bubbles,” bubbles can have good side effects. During Canal Mania and Railway Mania, canals and railways were built that had little hope of ever being profitable. Investors lost money, but after the bubble, these canals and railways were still there. This new infrastructure made future endeavors cheaper and easier. After the internet bubble burst in 2001, fiber optic cables were selling for pennies on the dollar. Investors did terribly, but the fiber optics infrastructure created value for consumers and made it possible for the next generation of companies to be built. This over-investment in infrastructure is often necessary for the successful deployment of new technologies.

The ICO bubble, however, did not have the good side effects of a Perez-style bubble. It didn’t produce nearly enough infrastructure to help the blockchain ecosystem move forward.

Compared to previous bubbles, the cryptosphere’s investment in infrastructure was minimal and likely to become obsolete very soon. The physical infrastructure — in mining operations, for example — is unlikely to be useful. Additional mining power on a blockchain has sharply decreasing marginal returns and different characteristics from traditional infrastructure. Unlike a city getting a new fiber optic cable or a new canal, new people do not gain access to blockchain because of additional miners. Additionally, proof-of-work mining is unlikely to be the path blockchain takes moving forward.

The non-physical infrastructure was also minimal. The tools best described as “core blockchain infrastructure” did not have easy access to the ICO market. Dev tools, wallets, software clients, user-friendly smart contract languages, and cloud services (to name a few) are the infrastructure that will drive blockchain technology toward maturity and full deployment. The cheap capital provided through ICOs primarily flowed to the application layer (even though the whole house has been built on an immature foundation). This created incentives for people to focus on what was easily fundable rather than what was most needed. These perverse incentives may have actually hurt the development of key infrastructure and splintered the ecosystem.

I don’t want to despair about the state of the ecosystem. Some good things came out of the ICO bubble. Talent has flooded the field. Startups have been experimenting with different use cases to see what sticks. New blockchains were launched incorporating a wide range of new technologies and approaches. New technologies have come to market. Many core infrastructure projects raised capital and made significant technical progress. Enterprises have created their blockchain strategies. Some very successful companies were born, which will continue to fund innovation in the space. The ecosystem as a whole continues to evolve at breakneck speed. As a whole, however, the bubble did not leave in its wake the infrastructure one would expect after a Perez-style bubble.

Liquidity Came Early

The 2018 ICO bubble happened early in blockchain technology’s life-cycle, during its gestation period, which is much earlier than Perez’s framework would predict. This is because the technology itself enabled liquidity earlier in the life-cycle. The financial assets became liquid before the underlying technology matured.

In the internet bubble, it took companies many years to go public, and as such there was some quality threshold and some reporting required. This process enabled the technology to iterate and improve before the liquidity arrived. Because blockchain enabled liquid tokens that were virtually free to issue, the rush was on to create valuable tokens rather than valuable companies or technologies. You could create a liquid asset without any work on the underlying technology. The financial layer jumped straight into a liquid state while the technology was left behind. The resulting tokens existed in very thin markets that were highly driven by momentum.

Because of this early liquidity, bubble dynamics were able to start early relative to the technology’s maturity. After all, this was not the first blockchain bubble (bitcoin already has a rich history of bubbles and crashes). The thin markets in which these assets traded likely accelerated the dynamics of the bubble.

What the Blockchain Space Needs to Focus on Now

In the fallout of a bubble, Perez outlines two necessary components to successfully deploy new and lasting technologies: proven, replicable business models and easy-to-use infrastructure. Blockchain hasn’t hit these targets yet, and so it’s a pretty obvious conclusion that blockchain is not yet at a “turning point.”

While protocol development is happening at a rapid clip, blockchain is not yet ready for mass deployment into a new techno-economic paradigm. We don’t have the proven, replicable business models that can expand from industry to industry. Exchanges and mining companies, the main success stories of blockchain, are not replicable business models and do not cross industries. We don’t yet have the infrastructure for mass adoption. Moreover, the use cases that are gaining traction mostly support the existing economic system. Komgo is using blockchain to improve an incredibly antiquated industry (trade finance), but it is still operating within the legacy economic paradigm.

Blockchain, therefore, is still in the “gestation period.” Before most technologies could enter the irruption phase and transform the economy, they were used to augment the existing economy. In blockchain, this looks like private and consortium chain solutions.

Some people in blockchain see this as a bad result. I see it as absolutely crucial. Without these experiments, blockchain risks fading out as a technological movement before it’s given the chance to mature and develop. In fact, one area where ConsenSys is not given the credit I believe it deserves is in bringing enterprises into the Ethereum blockchain space. This enterprise interest brings in more talent, lays the seeds for additional infrastructure, and adds credibility to the space. I am more excited by enterprise usage of blockchain today than by any other short-term development.

The Future of Blockchain Frenzy

This was not the first blockchain bubble. I don’t expect it to be the last (though hopefully some lessons will be learned from the last 12 months). Perez’s framework predicts that when the replicable business model is found in blockchain, another period of frenzied investment will occur, likely leading to a bubble. As Fred Wilson writes, “Carlota Perez [shows] ‘nothing important happens without crashes.’ ” Given the amount of capital available, I think this is a highly likely outcome. Given the massive potential of blockchain technology, the bubble is likely to involve more capital at risk than the 2018 one.

This next frenzy will have the same telltale signs of the previous one. Fundamentals will decrease in importance; retail investors will enter the market for fear of missing out; fraud will increase; and so on.

Lessons for Blockchain Businesses

Perez’s framework offers two direct strategic lessons for PegaSys and for any serious protocol development project in the blockchain space. First, we should continue to work with traditional enterprises. Working with enterprises will enable the technology to evolve and will power some experimentation of business models. This is a key component of the technology life-cycle and the best bet to help the ecosystem iterate.

Second, we must continue investing in infrastructure and diverse technologies for the ecosystem to succeed. This might sound obvious at first, but the point is that we will miss out on the new techno-economic paradigm if we only focus on the opportunities that are commercially viable today. Our efforts in Ethereum 1.x and 2.0 are directly born from our goal of helping the ecosystem mature and evolve. The work other groups in Ethereum and across blockchain are doing also drives towards this goal. We are deeply committed to the Ethereum roadmap and at the same time recognize the value that innovations outside Ethereum bring to the space. Ethereum’s roadmap has learned lessons from other blockchains, just as those chains have been inspired by Ethereum. This is how technologies evolve and improve.

Source : https://hackernoon.com/why-blockchain-differs-from-traditional-technology-life-cycles-95f0deabdf85

How The CIO Role Must Change Due To Digital Transformation – Peter Bendor-Samuel

Digital transformation is sweeping through businesses, giving rise to new business models, new and different constraints, and a need for more focused organizational attention and resources. It is also upending the C-suite, bringing in new corporate titles and functions such as Chief Security Officer, Chief Digital Officer and Chief Data Officer. These new roles seemingly pose an existential threat to existing roles – for example, the CIO.

As companies invent new business models through digital transformation and bring new organizations into being, they do more than cover new ground. They also carve new roles out of existing organizations (the CIO organization, for instance). Other digital threats potentially affect the CIO role:

  • Recognition that digital transformation now makes technology THE business, rather than technology supporting the business; therefore, IT and CIO roles are much more vital to growth in sales.
  • Competing through new digital models and digital platforms, focusing on redefining the customer experience and employee experience to create and deliver new value.

At Everest Group, we investigated the question, “Will the role of the CIO go away?” As a result of that investigation, we come back strongly with “no.” In fact, here’s what is happening to the role of the CIO: the CIO charter is changing and thus changing – but strengthening – the role.

Reasons For Changes In The CIO Charter

The focus of the CIO charter is increasingly changing – matching the new corporate charter for competitive repositioning. The prior focus was on the plumbing (infrastructure, ensuring applications are maintained and in compliance, etc.). Although those functions remain, the new charter focuses on building out and operating the new digital platforms and new digital operating models that are reshaping the competitive landscape.

The reason the CIO role is changing with the new corporate charter is that, in most organizations, the CIO is the only function that has these necessary capabilities for digital transformation:

  • Breadth of vision that sees the entire organization and all its workings
  • Depth of resources and ability to drive transformation projects and apply technology across silos, functions and divisions.

Digital transformation inevitably forces new operating models that have no respect for traditional functional organizations. Digital platforms and digital operating models collapse marketing and operations, for instance, spanning these functions and groups to achieve a superb end-to-end customer experience.

The new models force much tighter integration and often a realignment of organizations. The CIO organization has the breadth of vision and depth of resources to drive the transformation and support the new operating model that inevitably emerges from it.

How The CIO Role Must Change For The New Charter

Meeting the goals of the new charter will not happen without CIOs changing their organizations and, in many cases, changing personally. Seizing the opportunities in the new charter, as well as shaping it, requires substantial change in (a) modernizing IT, (b) the orientation and mind-set of the IT organization, and (c) the organizational structure.

To support digital transformation agendas, CIOs face a set of journeys in which they need to dramatically modernize their traditional functions. They first must think about their relationship with the business. Meeting the needs of the business in a more intimate, proactive and deeper way requires more investment and organizations with deeper industry domain knowledge and relationships. They need to move talent from remote centers back onshore, close to the business, so they can better understand its needs and act on them quickly.

Second, the IT operating model needs to change from its historical structures so that it can deliver a seamless operating environment. The waterfall structures that still permeate IT need to give way to a DevOps model with persistent teams that sit close to the business. IT also needs to accelerate the company’s journey to automation and cloud.

One thing companies quickly find about operating models is that they can’t get to a well-functioning DevOps team without migrating to a cloud-based infrastructure. And they can’t get to a cloud-based infrastructure without transforming their network and network operations model.

To meet the new charter, the CIO organization also needs to change in the following aspects:

  • Change its mind-set
  • Ensure deeper business knowledge
  • Increase agility and speed

The modernizations I mentioned above then call into question the historical organizational structure of IT with functions such as network, infrastructure, security, apps development, apps maintenance, etc. In the new digital charter, these functions inevitably start to collapse into pods or functions aligned by business services.

As I’ve described above, substantial technology and organizational change is required within the CIO’s organization to live up to the new mandate. I can’t overemphasize how substantial the change is, nor how pressing the need. In upcoming blog posts, I’ll further discuss the CIO’s role in reorienting the charter from plumbing to transformation and supporting the new digital operating models.

Source : https://www.forbes.com/sites/peterbendorsamuel/2019/01/30/how-the-cio-role-must-change-due-to-digital-transformation/#24f9952f68be

API Metrics and Status – A Regulatory Requirement or a Strategic Concern? – John Heaton-Armstrong

TL;DR – those discussing what should be appropriate regulatory benchmarks for API performance and availability under PSD2 are missing a strategic opportunity. Any bank that simply focusses on minimum, mandatory product will rule itself out of commercial agreements with those relying parties who have the wherewithal to consume commercial APIs at scale.

Introduction

As March approaches, those financial institutions in the UK and Ireland impacted by PSD2 are focussed on readiness for full implementation. The Open Banking Implementation Entity (OBIE) has been consulting on Operational Guidelines which give colour to the regulatory requirements found in the Directive and the Regulatory Technical Standards (RTS) which support it. The areas covered are not unique to the UK, and whilst they are part of an OBIE-specific attestation process, the guidelines could prove useful to any ASPSP impacted by PSD2.

Regulatory Requirements

The EBA, at guidelines 2.2-4, is clear on the obligations for ASPSPs. These are supplemented by the RTS: “[ASPSPs must] ensure that the dedicated interface offers at all times the same level of availability and performance, including support, as the interfaces made available to the payment service user for directly accessing its payment account online…” and “…define transparent key performance indicators and service level targets, at least as stringent as those set for the interface used by their payment service users both in terms of availability and of data provided in accordance with Article 36” (RTS Arts. 32(1) and (2)).

This places the market in a quandary – it is extremely difficult to compare, even at a theoretical level, the performance of two interfaces where one (PSU) is designed for human interaction and the other (API) for machine. Some suggested during the EBA’s consultation period that a more appropriate comparison might be between the APIs which support the PSU interface and those delivered in response to PSD2. Those in the game of reverse engineering confirm that there is broad comparability between the functions these support – unfortunately this proved too much technical detail for the EBA.

To fill the gap, OB surveyed developers, reviewed the existing APIs already delivered by financial institutions, and settled on an average of 99% availability (c.22hrs downtime per quarter) and a response time of 1,000 ms per 1MB of payload (this is a short summary and more detail can be read on the same). A quick review of the API Performance page OB publishes will show that, with average availability of 96.34% across the brands in November, and only Bank of Scotland, Lloyds and the HSBC brands achieving >99% availability, there is a long way to go before this target is met – made no easier by a significant amount of change to platforms as their functional scope expands over the next 6-8 months. This will also be in the face of increasing demand volumes, as those organisations which currently rely on screen scraping for access to data begin to transfer their integrations onto APIs. In short, ASPSPs are facing a perfect storm to achieve these goals.
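Those benchmarks are easy to sanity-check. Here is a minimal back-of-the-envelope sketch, assuming a 90-day quarter (real quarters run 90 to 92 days, so the figures shift slightly):

```python
# Downtime budget implied by an availability target, per quarter.
# Assumes a 90-day quarter; real quarters vary between 90 and 92 days.

def downtime_budget_hours(availability_pct: float, days: int = 90) -> float:
    """Hours of permitted downtime in the period at a given availability."""
    total_hours = days * 24
    return total_hours * (1 - availability_pct / 100)

print(downtime_budget_hours(99.0))   # ~21.6h -> the "c.22hrs per quarter" above
print(downtime_budget_hours(96.34))  # ~79.1h at November's observed average
```

The gap between roughly 22 and 79 hours of quarterly downtime is a measure of how far the market still has to travel.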

Knowledge and Reporting

At para 2.3.1 of their guidelines, the OBIE expands on the EBA’s reporting guidelines and provides a useful template for this purpose, but this introduces a conundrum. All of the data published to date has been the banks reporting on themselves – the technical solutions that generate this data sit inside their own domains. Quite apart from the obvious issue of self-reporting, there have already been clear instances where services weren’t functioning correctly and the bank in question simply didn’t know until a TPP told them. One of the larger banks in the UK recently misconfigured a load balancer such that 50% of the traffic it received was misdirected and got no response – without the bank’s knowledge. A clear case of downtime that almost certainly went unreported – if an API call goes unacknowledged in the woods, does anyone care?

Banks have a challenge in that risk and compliance departments typically baulk at any services they own being placed in the cloud, or indeed anywhere outside their physical infrastructure. Yet independent, externally hosted monitoring is exactly what is required for their support teams to have a true understanding of how their platforms are functioning, and to generate reliable data for their regulatory reporting requirements.
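To make that concrete, here is a minimal sketch of the kind of external probe this implies, measuring only what an outside observer can see (HTTP status and response time). The endpoint URL and consent token below are placeholders, not any real bank’s API:

```python
# Minimal external availability probe - a sketch, not a production monitor.
import time
import urllib.request

ENDPOINT = "https://api.example-bank.com/open-banking/v3.1/accounts"  # hypothetical
TOKEN = "donated-consent-token"  # placeholder for a real, consented access token

def probe(url: str, token: str, timeout: float = 10.0) -> dict:
    """One observation: did the API answer at all, and how fast?"""
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    start = time.monotonic()
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            up = 200 <= resp.status < 300
    except Exception:
        up = False  # timeouts and connection errors count as downtime
    return {"up": up, "elapsed_ms": round((time.monotonic() - start) * 1000, 1)}

print(probe(ENDPOINT, TOKEN))
```

Run on a schedule from outside the bank’s network, observations like these yield availability and latency figures that do not depend on self-reporting.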

[During week commencing 21st Jan, the Market Data Initiative will announce a free/open service to solve some of these issues. This platform monitors the performance and availability of API platforms using donated consents, with the aim of establishing a clear, independent view of how the market is performing, without prejudicial comment or reference to benchmarks. Watch this space for more on that.]

Regulatory or strategic concern?

For any TPP seeking investment, where their business model necessitates consuming open APIs at scale, one of the key questions they’re likely to face is how reliable these services are and what remedies are available in the event of non-performance. In the regulatory space, some of this information is available (see above), but it is hardly transparent or independently produced, and even with those caveats it does not currently make for happy reading. For remedy, TPPs are reliant on regulators and a quarterly reporting cycle for the discovery of issues. Even if the FCA decided to take action, the most significant step it could take would be to instruct an ASPSP to implement a fall-back interface; given that the ASPSP would have a period of weeks to build this, any relying party’s business would likely have suffered significant detriment before it could even start testing such a facility. The consequence of this framework is that, for the open APIs, performance, availability and the transparency of information will have to improve dramatically before any commercial services rely on them.

Source : https://www.linkedin.com/pulse/api-metrics-status-regulatory-requirement-strategic-john?trk=portfolio_article-card_title

7 Big Lessons We Learned on How to Sell a Patent – Sammy Abdullah

In 2017, we had a death in the portfolio. Once all the employees left, the only remaining assets were some patents, servers, domains, and a lot of code. Recently, we managed to sell the patents and code. Here is what we learned about how to sell a patent:

How to sell a patent in 7 steps

1. Set expectations when selling patents

The value of IP is a small fraction of what the company was once valued at; it’s maybe 1 to 5 cents on the dollar. Any acquirer of the IP is unlikely to do an all-cash deal, so don’t be surprised if the final consideration is a blend of cash, stock, royalty, earn out, or some other creative structure that reduces the acquirer’s upfront risk.

Selling a patent is going to take a year or more with legal taking 6 to 9 months alone (we recommend specialized counsel that has M&A experience and experience in bankruptcy/winding down entities).

It’s also going to take some cash along the way as you foot the bill for legal, preparing the code, and other unforeseen expenses that have to be paid well ahead of the close. With those expectations in mind, you need to seriously consider whether it is worth the work to sell the IP, what you will really recover, and what the probability of success really is.

2. Reach out to everyone

If you’ve decided it’s worth it to try and recover something for the IP, reach out to absolutely everyone you know. That includes old customers, prospects, former customers, anyone who has ever solicited you for acquisition, your cousin, your aunt, etc.

The point is: don’t eliminate anyone as a potential acquirer, because you don’t know what’s on someone’s product roadmap, and be shameless about reaching out to your entire network. The acquirer of the IP in our dead company was a prospect who never actually became a customer. We also had interest from very random firms that weren’t remotely adjacent to our space.

3. You need the CTO

In order to transfer code to an acquirer, you’re going to need the CTO or whoever built a majority of the code to assist. No acquirer is going to take the code as-is unless you want them to massively discount the price to hedge their risk.

They’re going to want it cleaned up and packaged specifically to their needs. In our case, it took a founding developer 3 months of hard work to get the code packaged just right for our acquirer, and of course, we paid him handsomely for successful delivery.

4. You need great counsel

The code was once part of a company, and that company has liabilities, creditors, equity owners, former employees, and various other obligations. All of those parties are probably pretty upset with you that things didn’t work out. Before you embark on a path to sell the IP, consult with an attorney that can tell you who has a right to any proceeds collected, what the waterfall of recipients looks like, who can potentially block a deal, who you need to get approval from, whether patents are in good standing, etc.

You’ll need to pay the attorney up front for his work and as you progress through the deal, so it takes money to make money from selling IP.

5. Utilize GitHub

Put the code on GitHub. Have potential acquirers sign a very tight and punitive NDA before allowing them to see the code. It may also be advisable to only give acquirers access to portions of the code. GitHub is the best $7 a month you’ll ever spend when it comes to selling IP.

6. Get all the assets

Make sure you have access to all the assets. This includes all code, training modules, patents, domains, actual servers and hardware, trademarks, logos, etc. An acquirer is going to want absolutely everything even if there are some things he can’t necessarily use.

7. Make sure the acquirer is fair

The acquirer has to be someone that is negotiating fairly and in good faith with you. We got very lucky that our acquirer had an upstanding and reputable CEO. If you don’t trust the acquirer or if they’re being shifty, move on. In our case, had the acquirer been a bad guy, there were many times when he could have screwed us such as changing the terms of the deal before the close, among other things.

Given the limited recourse you often have in situations like this, ‘bad boy’ acquirers do it all the time. We got lucky finding an acquirer who was honest, forthright and kept his word. You’ll need to do the same.

Takeaways on how to sell a patent

Selling patents is incredibly challenging. In our case the recovery was very small relative to capital invested, the process took nearly a year, and a lot of people were involved in making it happen. We also spent tens of thousands of dollars on legal fees, data scientist consulting, patent reinstatement and recovery, shipping of servers, etc.

A lot of that expenditure was done along the way so we had to put more money at risk for the possibility of maybe recovering cash in the sale of IP. Learning how to sell a patent wasn’t easy, but it got done. Hopefully, we never have to do it again and neither do you.

Source: https://about.crunchbase.com/blog/how-sell-patent/

Digital Transformation of Business and Society: Challenges and Opportunities by 2020 – Frank Diana

At a recent KPMG Robotic Innovations event, Futurist and friend Gerd Leonhard delivered a keynote titled “The Digital Transformation of Business and Society: Challenges and Opportunities by 2020”. I highly recommend viewing the Video of his presentation. As Gerd describes, he is a Futurist focused on foresight and observations — not predicting the future. We are at a point in history where every company needs a Gerd Leonhard. For many of the reasons presented in the video, future thinking is rapidly growing in importance. As Gerd so rightly points out, we are still vastly under-estimating the sheer velocity of change.

With regard to future thinking, Gerd used my future scenario slide to describe both the exponential and combinatorial nature of future scenarios — not only do we need to think exponentially, but we also need to think in a combinatorial manner. Gerd mentioned Tesla as a company that really knows how to do this.

Our Emerging Future

He then described our current pivot point of exponential change: a point in history where humanity will change more in the next twenty years than in the previous 300. With that as a backdrop, he encouraged the audience to look five years into the future and spend 3 to 5% of their time focused on foresight. He quoted Peter Drucker (“In times of change the greatest danger is to act with yesterday’s logic”) and stated that leaders must shift from a focus on what is, to a focus on what could be. Gerd added that “wait and see” means “wait and die” (love that by the way). He urged leaders to focus on 2020 and build a plan to participate in that future, emphasizing the question is no longer what-if, but what-when. We are entering an era where the impossible is doable, and the headline for that era is: exponential, convergent, combinatorial, and inter-dependent — words that should be a key part of the leadership lexicon going forward. Here are some snapshots from his presentation:

  • Because of exponential progression, it is difficult to imagine the world in 5 years, and although the industrial era was impactful, it will not compare to what lies ahead. The danger of vastly under-estimating the sheer velocity of change is real. For example, in just three months, the projection for the number of autonomous vehicles sold in 2035 went from 100 million to 1.5 billion
  • Six years ago Gerd advised a German auto company about the driverless car and the implications of a sharing economy — and they laughed. Think of what’s happened in just six years — can’t imagine anyone is laughing now. He witnessed something similar as a veteran of the music business where he tried to guide the industry through digital disruption; an industry that shifted from selling $20 CDs to making a fraction of a penny per play. Gerd’s experience in the music business is a lesson we should learn from: you can’t stop people who see value from extracting that value. Protectionist behavior did not work, as the industry lost 71% of their revenue in 12 years. Streaming music will be huge, but the winners are not traditional players. The winners are Spotify, Apple, Facebook, Google, etc. This scenario likely plays out across every industry, as new businesses are emerging, but traditional companies are not running them. Gerd stressed that we can’t let this happen across these other industries
  • Anything that can be automated will be automated: truck drivers and pilots go away, as robots don’t need unions. There is just too much to be gained not to automate. For example, 60% of the cost in the system could be eliminated by interconnecting logistics, possibly realized via a Logistics Internet as described by economist Jeremy Rifkin. But the drive towards automation will have unintended consequences and some science fiction scenarios could play out. Humanity and technology are indeed intertwining, but technology does not have ethics. A self-driving car would need ethics, as we make difficult decisions while driving all the time. How does a car decide to hit a frog versus swerving and hitting a mother and her child? Speaking of science fiction scenarios, Gerd predicts that when these things come together, humans and machines will have converged:
  • Gerd has been using the term “Hellven” to represent the two paths technology can take. Is it 90% heaven and 10% hell (unintended consequences), or can this equation flip? He asks the question: Where are we trying to go with this? He used the real example of drones used to benefit society (heaven), but people buying guns to shoot them down (hell). As we pursue exponential technologies, we must do so in a way that avoids negative consequences. Will we allow humanity to move down a path where, by 2030, we will all be human-machine hybrids? Will hacking drive chaos as hackers gain control of a vehicle? A recent recall of 1.4 million Jeeps underscores the possibility. A world of super intelligence requires super humanity — technology does not have ethics, but society depends on it. Is this Ray Kurzweil’s vision what we want?
  • Is society truly ready for human-machine hybrids, or even advancements like the driverless car that may be closer to realization? Gerd used a very effective Video to make the point
  • Followers of my Blog know I’m a big believer in the coming shift to value ecosystems. Gerd described this as a move away from Egosystems, where large companies are running large things, to interdependent Ecosystems. I’ve talked about the blurring of industry boundaries and the movement towards ecosystems. We may ultimately move away from the industry construct and end up with a handful of ecosystems like: mobility, shelter, resources, wellness, growth, money, maker, and comfort
  • Our kids will live to 90 or 100 as the default. We are gaining 8 hours of longevity per day — one third of a year per year. Genetic engineering is likely to eradicate disease, impacting longevity and global population. DNA editing is becoming a real possibility in the next 10 years, and at least 50 Silicon Valley companies are focused on ending aging and eliminating death. One such company is Human Longevity Inc., which was co-founded by Peter Diamandis of Singularity University. Gerd used a quote from Peter to help the audience understand the motivation: “Today there are six to seven trillion dollars a year spent on healthcare, half of which goes to people over the age of 65. In addition, people over the age of 65 hold something on the order of $60 trillion in wealth. And the question is what would people pay for an extra 10, 20, 30, 40 years of healthy life. It’s a huge opportunity”
  • Gerd described the growing need to focus on the right side of our brain. He believes that algorithms can only go so far. Our right brain characteristics cannot be replicated by an algorithm, making a human-algorithm combination — or humarithm as Gerd calls it — a better path. The right brain characteristics that grow in importance and drive future hiring profiles are:
  • Google is on the way to becoming the global operating system — an Artificial Intelligence enterprise. In the future, you won’t search, because as a digital assistant, Google will already know what you want. Gerd quotes Ray Kurzweil in saying that by 2027, the capacity of one computer will equal that of the human brain — at which point we shift from an artificial narrow intelligence, to an artificial general intelligence. In thinking about AI, Gerd flips the paradigm to IA or intelligent Assistant. For example, Schwab already has an intelligent portfolio. He indicated that every bank is investing in intelligent portfolios that deal with simple investments that robots can handle. This leads to a 50% replacement of financial advisors by robots and AI
  • This intelligent assistant race has just begun, as Siri, Google Now, Facebook MoneyPenny, and Amazon Echo vie for intelligent assistant positioning. Intelligent assistants could eliminate the need for actual assistants in five years, and creep into countless scenarios over time. Police departments are already capable of determining who is likely to commit a crime in the next month, and there are examples of police taking preventative measures. Augmentation adds another dimension, as an officer wearing glasses can identify you upon seeing you and have your records displayed in front of them. There are over 100 companies focused on augmentation, and a number of intelligent assistant examples surrounding IBM Watson; the most discussed being the effectiveness of doctor assistance. An intelligent assistant is the likely first role in the autonomous vehicle transition, as cars step in to provide a number of valuable services without completely taking over. There are countless Examples emerging
  • Gerd took two polls during his keynote. Here is the first: how do you feel about the rise of intelligent digital assistants? Answers 1 and 2 below received the lion’s share of the votes
  • Collectively, automation, robotics, intelligent assistants, and artificial intelligence will reframe business, commerce, culture, and society. This is perhaps the key take away from a discussion like this. We are at an inflection point where reframing begins to drive real structural change. How many leaders are ready for true structural change?
  • Gerd likes to refer to the 7-ations: Digitization, De-Materialization, Automation, Virtualization, Optimization, Augmentation, and Robotization. Consequences of the exponential and combinatorial growth of these seven include dependency, job displacement, and abundance. Whereas these seven are tools for dramatic cost reduction, they also lead to abundance. Examples are everywhere, from the 16 million songs available through Spotify, to the 3D printed cars that require only 50 parts. As supply exceeds demand in category after category, we reach abundance. As Gerd put it, in five years’ time, genome sequencing will be cheaper than flushing the toilet and abundant energy will be available by 2035 (2015 will be the first year that a major oil company will leave the oil business to enter the abundance of the renewable business). Other things to consider regarding abundance:
  • Efficiency and business improvement are a path, not a destination. Gerd estimates that total efficiency will be reached in 5 to 10 years, creating value through productivity gains along the way. However, after total efficiency is met, value comes from purpose. Purpose-driven companies have an aspirational purpose that aims to transform the planet; this is referred to as a massive transformative purpose in a recent book on exponential organizations. When you consider the value that the millennial generation places on purpose, it is clear that successful organizations must excel at both technology and humanity. If we allow technology to trump humanity, business would have no purpose
  • In the first phase, the value lies in the automation itself (productivity, cost savings). In the second phase, the value lies in those things that cannot be automated. Anything that is human about your company cannot be automated: purpose, design, and brand become more powerful. Companies must invent new things that are only possible because of automation
  • Technological unemployment is real this time — and exponential. Gerd pointed to a recent study by the Economist that describes how robotics and artificial intelligence will increasingly be used in place of humans to perform repetitive tasks. On the other side of the spectrum is a demand for better customer service and greater skills in innovation, driven by globalization and falling barriers to market entry. Therefore, creativity and social intelligence will become crucial differentiators for many businesses; jobs will increasingly demand skills in creative problem-solving and constructive interaction with others
  • Gerd described a basic income guarantee that may be necessary if some of these unemployment scenarios play out. Something like this is already on the ballot in Switzerland, and it is not the first time this has been talked about:
  • In the world of automation, experience becomes extremely valuable — and you can’t, nor should you attempt to, automate experiences. We clearly see an intense focus on customer experience, and we had a great discussion on the topic on an August 26th Game Changers broadcast. Innovation is critical to both the service economy and the experience economy. Gerd used a visual to describe the progression of economic value:
Source: B. Joseph Pine II and James Gilmore: The Experience Economy
  • Gerd used a second poll to sense how people would feel about humans becoming artificially intelligent. Here again, the audience leaned towards the first two possible answers:

Gerd then summarized the session as follows:

The future is exponential, combinatorial, and interdependent: the sooner we can adjust our thinking (lateral) the better we will be at designing our future.

My take: Gerd hits on a key point. Leaders must think differently. There is very little in a leader’s collective experience that can guide them through the type of change ahead — it requires us all to think differently

When looking at AI, consider trying IA first (intelligent assistance / augmentation).

My take: These considerations allow us to create the future in a way that avoids unintended consequences. Technology as a supplement, not a replacement

Efficiency and cost reduction based on automation, AI/IA and Robotization are good stories but not the final destination: we need to go beyond the 7-ations and inevitable abundance to create new value that cannot be easily automated.

My take: Future thinking is critical for us to be effective here. We have to have a sense as to where all of this is heading, if we are to effectively create new sources of value

We won’t just need better algorithms — we also need stronger humarithms, i.e. values, ethics, standards, principles and social contracts.

My take: Gerd is an evangelist for creating our future in a way that avoids hellish outcomes — and kudos to him for being that voice

“The best way to predict the future is to create it” (Alan Kay).

My take: our context when we think about the future puts it years away, and that is just not the case anymore. What we think will take ten years is likely to happen in two. We can’t create the future if we don’t focus on it through an exponential lens

Source : https://medium.com/@frankdiana/digital-transformation-of-business-and-society-5d9286e39dbf

Corporate venture building dilemma: investment vs. control – Carlos Borges

Having founded my startup a few years ago, I am familiar with why founders go through the pain & grit of building their own company. The statistics around startup survival rates show that the risk is high, but the potential reward, both financially & emotionally, is also significant.

In my case, risk was defined by the amount of money I invested in the venture plus the opportunity cost in case the startup went nowhere. The latter relates to the fact that I earned no salary at the beginning & that when I committed to that specific idea I was instantaneously saying “no” to many other opportunities and potential career advancements. The reward was two-fold too: the first part was the attractive financial outcome of a potential exit. The second was the freedom to chase opportunities as they appear, doing what I want and how I want it.

Once I raised capital from investors, I basically traded reward for reduced risk. I started paying myself a small salary and anticipated that more resources would increase the success likelihood of the startup.

This pattern of weighing risk against reward was crystal clear in my mind… until I joined the arena of corporate venture building. During one of my first projects, I was tasked with the creation of a startup for a blue-chip corporate client. I was immediately puzzled by the reasoning behind this endeavor.

Ultimately, corporate decisions are also guided by risk against reward: if corporates don’t take risks and innovate, they might be left behind and, in some cases, join the once-great-now-extinct corporate hall of shame. That’s why they invest in research and development, spend hard-earned cash on mergers and acquisitions and start innovation programs. But my interest was more at a micro level, namely: what reasoning does my corporate client follow to decide if and how to found a specific new venture?

Having thought about it a lot, I believe at micro level corporates weigh investment against control. Investment is the level of capital, manpower & political will provided by the corporate to propel the venture towards exit, break-even or strategic relevance. Control is the possibility to steer the venture towards the strategic goals the leadership team has in mind while defining the boundaries of what can & cannot be done.

In the startup case, the risk/reward is typically shared between the founders and external investors. In a corporate venture building case, the investment/control can be shared between the corporate, an empowered founder team and also external investors.

I am still in the middle of the corporate decision-making process but wanted to share with you the scenarios we are using to guide the discussions on how to structure the new venture. But before I do, I would like to mention that the consideration of investment vs. control takes place at three different stages of the venture’s existence:

• Incubation: develop & validate idea
• Acceleration: validate business model incl. product, operations & customer acquisition (find the winning formula)
• Growth: replicate the formula to grow exponentially

Based on that, three main scenarios are being considered to found the new venture.

Scenario 1: Control & Grow

  • Full investment & control during incubation & acceleration
  • Shared investment & control during the growth stage

By definition, the incubation and acceleration stages are less capital intensive, and they are when the key strategic decisions that shape the future business are made. In these stages, the corporate is interested in maintaining full control of the venture while absorbing the whole investment. Only when the venture enters the capital-intensive growth stage does it become necessary to “share the burden” with other institutional or strategic investors. This scenario is suitable for ventures of high strategic value, especially ones leveraging core assets and know-how of the corporate mothership.

Scenario 2: Spread the Bets

  • Lower investment & control during all stages

In this case, the corporate initiator empowers a founder team and joins the project almost like an external investor would do at Seed and Series A of a startup. They agree on a broad vision, provide the funding and retain a part of the shares with shareholder meetings in between to track progress. Beyond that, they let the founder team do their thing. External investors can join at any funding round to share the investment tickets. The corporate would have lower control and investment from the get-go and can increase their influence only when new funding rounds are required or via an acquisition offer. This scenario is suitable for ventures in which the corporate can function as the first client or use their network to manufacture, market or distribute the product or service.

Scenario 3: Build, operate & transfer

  • Lower investment & control during incubation & acceleration
  • Full investment & control during the growth stage

The venture is initially built by a founder team or external partners (often a consultancy). Only once they have successfully finalized the incubation and acceleration stages does the corporate have the right or obligation to absorb the business. Unlike in scenario 2, the corporate gains stronger control over the trajectory of the business during its initial stages by defining what a “transfer” event looks like. The investment necessary to put together a strong founder team is reduced by the reward of a pre-defined & short-term exit event. The initial investment can be further reduced by the participation of business angels, also motivated by a clear path to exit and access to a new source of deal flow. This scenario is suitable for ventures closely linked to the core business of the corporate and where speed & excellence of execution are key.

There is obviously no right or wrong. Each scenario can make sense according to the end goal of the corporate. Furthermore, there are surely new scenarios and variations of the above. What is important, in my opinion, is to openly discuss which road to take. If the client can’t discern the alternatives and consequences, you risk a “best of both worlds” mindset where expectations regarding investment & control don’t match. If that is the case, you will be in for a tough ride.

Source : https://medium.com/@cbgf/a-corporate-venture-building-dilemma-investment-vs-control-a703b9c19c94

Robots for Rent – Why RaaS Works – RIA

Renting robots as temp labor? Not a new idea. But it’s certainly one that is gaining followers.

Rising labor shortages, tightly contested global markets, and growing interest in automation are tightening the screws on traditional business models. A broader spectrum of users are seeking flexible automation solutions. More suppliers are adopting new-age rental or lease options to satisfy the demand. Some are mature companies answering the call, others are startups blazing a path for the rest of the industry. Robotics as a Service (RaaS) is an emerging trend whose time has come.

Steel Collar Associates may have been ahead of its time when RIA spoke with its owner in 2013 about his “Humanoids for Hire” – aka Yaskawa dual-arm robots for rent. Already several years into his venture at the time, Bill Higgins was having little success contracting out his robo-employees. Back then, industry was barely warming up to the idea of cage-free robots rubbing elbows with their human coworkers. Now every major robot manufacturer has a collaborative robot on its roster. And a slew of startups have joined the fray.

Just like human-robot collaboration is helping democratize robotics, RaaS will help bring robots to the masses. And cobots aren’t the only robots for rent.

Whether you have a short-term need, want to try before you buy, forgo a capital expenditure, or lower your cost of entry to robotic automation, RaaS is worth a closer look. It’s robots on demand, when and where you want them.

An out-of-the-box collaborative robot solution on wheels is easy to redeploy as production needs change. A rental option further enhances ROI. (Courtesy of READY Robotics)

Robots on Demand
Out-of-the-box solutions like those offered by READY Robotics, which are easy to use and easy to deploy, are making RaaS a reality. Your next, or perhaps first, robotic solution may be a Johnny-on-the-spot – on wheels.

“The TaskMate is a ready-to-use, on-demand robot worker that is specifically designed to come out of its shipping crate ready to be deployed to the production line,” says READY Robotics CEO Ben Gibbs, noting that manufacturers without the time to undertake custom robot integration are looking for an out-of-the box automation solution. Rental options make the foray easier.

“Time is their most precious resource. They want something like the TaskMate that is essentially ready to go out of the box,” says Gibbs. “They may have to do a little fixturing or put together a parts presentation hopper. Besides that, it’s something they can deploy pretty quickly. We’re driving towards providing a solution that’s as easy to use as your personal computer.”

The system consists of a collaborative robot arm mounted on a stand with casters, so you can wheel it into position anywhere on the production floor. The ease of portability makes it ideal for high-mix, low-volume production where it can be quickly relocated to different manufacturing cells. Nicknamed the “Swiss Army Knife” of robots, the TaskMate performs a variety of automation tasks from machine tending to pick-and-place applications, to parts inspection.

The TaskMate comes in two varieties, the 5-kg payload R5 and 10-kg payload R10 (pictured). Both systems use robot arms from collaborative robot maker Universal Robots. The UR arm is equipped with a force sensor and a universal interface called the TEACHMATE that allows different robot grippers to be hot-swapped onto the end of the arm. Supported end effector brands include SCHUNK, Robotiq and Piab.

Contributing to the system’s ease of use is READY’s proprietary operating system, the FORGE/OS software. A simple flowchart interface (pictured) controls the robot arm, end-of-arm tooling and other peripherals. No coding is required.
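No coding is required in FORGE/OS itself, but to make the flowchart idea concrete, here is a minimal Python sketch of how such a step-by-step task program might be represented internally. The step names, actions, and structure are illustrative assumptions, not READY’s actual format.

# Hypothetical illustration only: a flowchart-style robot task represented
# as an ordered list of steps, in the spirit of (not taken from) FORGE/OS.
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str      # label shown on the flowchart block
    action: str    # e.g., "move", "wait_for_input"
    params: dict = field(default_factory=dict)

# A simple machine-tending loop sketched as flowchart blocks:
machine_tending = [
    Step("Pick from hopper", "move", {"target": "hopper", "grip": True}),
    Step("Load CNC", "move", {"target": "chuck", "grip": False}),
    Step("Wait for cycle", "wait_for_input", {"signal": "cnc_done"}),
    Step("Unload CNC", "move", {"target": "chuck", "grip": True}),
    Step("Place on tray", "move", {"target": "out_tray", "grip": False}),
]

for step in machine_tending:
    print(f"{step.name}: {step.action} {step.params}")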

For those tasks requiring a higher payload, longer reach, or faster cycle time than is possible with the power-and-force-limiting cobot included with the TaskMate R5 and R10 systems, READY also offers its FORGE controller (formerly called the TaskMate Kit). Running the intuitive FORGE/OS software, the controller provides the same easy programming interface but is designed as a standalone system for ABB, FANUC, UR and Yaskawa robots.

“For example, if you plug the FORGE controller into a FANUC robot, you no longer have to program in Karel (the robot OEM’s proprietary programming language),” explains Gibbs. “On the teach pendant, you can use FORGE/OS to program the robot directly, so you have the same programming experience on the controller as you do on the TaskMate.

Intuitive software interface with a flowchart design and compatibility with multiple robot brands makes programming easier and faster. (Courtesy of READY Robotics)

“We started primarily with smaller six degree-of-freedom robot arms, like the FANUC LR Mate and GP7 from Yaskawa,” continues Gibbs. “We have started to integrate some of the larger robots as well, like the FANUC M-710iC/50. Ultimately, we’re driving toward a ubiquitous programming experience regardless of what robot arm or robot manufacturer you’re using.”

In the Cloud
A common element in the RaaS rental model is cloud robotics. READY offers customers the ability to remotely monitor the TaskMate or other robotic systems hooked up to the FORGE controller.

“We can set them up with alerts, so when the production cycle is completed or the robot enters an unexpected error state, they can receive an email notifying the floor manager or line operator to check the system,” says Gibbs.

You can also save and back up programs to the cloud, and deploy them from one robot to another. If an operator inadvertently loses a program, rather than rewriting it from scratch, you can simply pull the backup version down from the cloud onto the system and be up and running again in minutes.
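As a rough illustration of the kind of cloud-side alert rule Gibbs describes, the sketch below emails the floor manager when a robot reports an error or completes a cycle. The event fields, addresses, and SMTP host are placeholder assumptions, not READY’s actual service.

# Illustrative sketch of a cloud-side alert rule like the one described above.
# Event fields, addresses, and the SMTP host are placeholders, not READY's API.
import smtplib
from email.message import EmailMessage

ALERT_STATES = {"error", "estop", "cycle_complete"}

def send_email(to_addr: str, subject: str, body: str) -> None:
    msg = EmailMessage()
    msg["From"] = "alerts@example.com"
    msg["To"] = to_addr
    msg["Subject"] = subject
    msg.set_content(body)
    with smtplib.SMTP("smtp.example.com") as smtp:  # placeholder host
        smtp.send_message(msg)

def handle_robot_event(event: dict) -> None:
    """Forward noteworthy robot states to the floor manager."""
    if event.get("state") in ALERT_STATES:
        send_email(
            "floor-manager@example.com",
            f"Robot {event.get('robot_id')}: {event.get('state')}",
            f"Reported at {event.get('timestamp')}. Please check the system.",
        )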

The TaskMate systems and FORGE controller are available for both purchase and rental.

“We provide a menu to our customers of how they might want to consume our products and services,” says Gibbs. “That may be all the way from a traditional CapEx (capital expenditure) purchase if they want to buy one of our TaskMates upfront, to the other end of the spectrum where they can rent the system with no contract for however long or short of a duration they want.”

For an additional charge, READY can manage the entire asset for the customer.

“We set it up, we program it, and we remotely monitor it to make sure it’s maximizing its uptime. We can come in and tweak the program if it’s running into unexpected errors. All of the systems are equipped with cell modems, so they can update the software over the air. We handle all of the maintenance or it’s handled by our channel partners.”

No-Term Rental
Gibbs says flexibility is the biggest advantage of their rental option. READY offers a 3-month trial rental, but customers are not required to keep it for that full term.

“We have a no-term rental. That’s even more appealing because it can come entirely out of your OpEx (operating expenditure) budget. Instead of going through a lengthy CapEx approval process, we’ve had some customers just run their corporate credit card, because the rental is below their approval level for an OpEx purchase. They can easily set up the system and use it for a few months. That alone provides them with a much stronger justification for moving forward with CapEx if they want, or just continue to expand their rental.

“At the end of the first month, if they decide that it’s not working out, just like any incompetent worker, they can fire it and send it back.”

If the customer chooses to continue renting, Gibbs says it’s more cost-effective to sign a contract. This reduces the risk for everyone, so there’s usually a financial incentive.

“The primary way we differentiate ourselves is that we offer that no-term rental with a fixed monthly fee, which allows these factories to capture the traditional value of automation. We don’t have a meter running that says you ran it 22 hours this day, so you owe us for 22 hours of work. We encourage them to run it as long as they want. The expectation is the longer you run it, the cheaper it should be.”
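Gibbs’ point about a fixed fee versus a running meter is easy to make concrete with back-of-the-envelope arithmetic. The rates below are invented for illustration; only the fixed-versus-metered distinction comes from the article.

# Hypothetical rates; only the fixed-fee-versus-meter distinction is from the article.
FIXED_MONTHLY_FEE = 3_500   # $/month, flat, run it as much as you like
METERED_RATE = 15           # $/robot-hour under a pay-per-use model

for hours_per_day in (8, 16, 22):
    metered_monthly = METERED_RATE * hours_per_day * 30
    fixed_effective_hourly = FIXED_MONTHLY_FEE / (hours_per_day * 30)
    print(f"{hours_per_day} h/day: metered ${metered_monthly:,}/mo; "
          f"fixed fee works out to ${fixed_effective_hourly:.2f}/h")

The longer the robot runs, the cheaper each effective hour of the flat fee becomes, which is exactly the incentive Gibbs describes.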

Flexibility for High-Mix, Low-Volume
READY’s target customers range from small job shops to large multinationals and Fortune 500 companies.

“Attwood is a great example of the type of high-mix, low-volume production environment where the flexibility of the TaskMate really shines,” says Gibbs.

Attwood Marine in Lowell, Michigan, is one of the world’s largest producers of boat parts, accessories and supplies. If it’s on your boat, there’s a good chance this century-old company made it. They make thousands of different parts, but cater to a relatively small marine market. The challenges of high-mix, low-volume production in a highly competitive market had them looking for an automation solution.

The flexibility of the TaskMate to quickly deploy and redeploy depending on Attwood’s short- or long-term needs was a deciding factor. With only a couple hundred employees and no dedicated robotics programmer on staff, the customer appreciates the FORGE software’s ease of use. Plus the ability to rent the system plays to the seasonal nature of Attwood’s business and lowers the cost of their first foray into robotic automation.

Attwood has deployed the TaskMate R10 to a half-dozen cells on the production floor performing CNC machine tending, pick-and-place tasks like palletizing, loading/unloading conveyors and case packing, and even repetitive testing. You need to actuate a switch or pull a cord 250,000 times? That’s a job for flexible automation.

By deploying one robot system to multiple production cells, Attwood was able to spread their ROI across multiple product lines and realize up to a 30 percent reduction in overall manufacturing costs. Watch the TaskMate on the job at Attwood Marine.

Small to midsized businesses aren’t the only ones benefiting. Large multinationals like tools manufacturer Stanley Black & Decker use the TaskMate R10 for machine tending CNC lathes.

“Multinationals may have robot programmers on staff, but usually not enough of them,” says Gibbs. “Automation engineers are in high demand and very difficult to come by. Any technology that makes it faster and easier for people to set up robots is a tremendous value. Even with large multinationals, some like to be asset-light and do a rental, but everyone loves the ease of programming we offer through FORGE.”

Forged in the Lab
READY’s portable plug-and-play solution is a technology spinoff from Professor Greg Hager’s research in human-machine collaborative systems at Johns Hopkins University. Gibbs, an alumnus, was working in the university’s technology ventures office helping researchers like Prof. Hager develop commercialization strategies for their new technologies. Hager, along with Gibbs, and fellow alum CTO Kelleher Guerin cofounded the startup in October 2015. Another cofounder, Drew Greenblatt, President of Marlin Steel Wire Products (an SME in the Know), offered up his nearby Baltimore, Maryland-based custom metal forms factory as a prototype test site for the TaskMate. The system was officially launched in July 2017.

Prof. Hager is now an advisor to the company. Distinguished robotics researcher Henrik Christensen is Chairman of the Board of Advisors. In December 2017, the startup secured $15 million in Series A funding led by Drive Capital.

READY maintains an office in Baltimore, while its headquarters is in Columbus, Ohio. They are a FANUC Authorized System Integrator. Gibbs says they are in the process of building a channel partner network of integrators and distributors to support future growth.

Pay As You Go
Business models under the RaaS umbrella vary widely, and are evolving. Startups like Hirebotics and Kindred leverage cloud robotics more intensely to monitor robot uptime, collect data, and enhance performance using AI. They charge by the hour, or even by the second. You pay for only what you use. Each service model has its advantages.

Some RaaS advocates offer subscription-based models. Some took a page from the sharing economy. Think Airbnb, Lyft, TaskRabbit, Poshmark. Share an abode, a car or clothes. Skip the overhead, the infrastructure and the long-term commitment. Pay as you go for a robot on the run.

Mobile Robots for Hire
Autonomous mobile robots (AMRs) are no strangers to the RaaS model, either. RIA members Aethon and Savioke lease their mobile robots for various applications in healthcare, hospitality and manufacturing. Startup inVia Robotics offers a subscription-based RaaS solution for its warehouse “Picker” robots.

Autonomous mobile robot navigates production floors to transport pallets and heavy loads via the most efficient route, while safely maneuvering around people and other obstacles. (Courtesy of Mobile Industrial Robots A/S)

We first explored the emergence of AMRs in the Always-On Supply Chain. It’s startling how much the logistics robot market has changed in just a couple of years. Since then, prototypes and beta deployments have turned into full product lines with significant investor funding. Major users like DHL, Walmart and Kroger, not to mention early adopter Amazon, are doubling down on their mobile fleets.

After triple-digit revenue growth in Europe, Mobile Industrial Robots (MiR) was just breaking onto the North American scene two years ago. Now, as they celebrate comparable growth on this side of the pond, MiR prepares to launch a new lease program in January.

MiR is another prodigy of Denmark’s booming robotics cluster. They join Danish cousin Universal Robots on the list of Teradyne’s smart robotics acquisitions. Odense must have the Midas touch.

Go Big or Go Home
Responding to customer demands for larger payloads, MiR introduced its 500-kg mobile platform at Automatica in June. The MiR500 (pictured) comes with a pallet transport system that automatically lifts pallets off a rack and delivers them autonomously. Watch it in action on the production floor of this agricultural machine manufacturer.

“Everybody we deal with today is making a big push to eliminate forklift traffic from the inner aisleways of production lines,” says Ed Mullen, Vice President of Sales – Americas for MiR in Holbrook, New York. “That’s really driving the whole launch of the MiR500. We’ve gone through some epic growth here in my division.”

Mullen’s division is responsible for supporting MiR’s extensive distributor network in all markets between Canada and Brazil. Right now, the Americas account for about a third of the global business.

“We’re seeing applications in industrial automation, warehouses and distribution centers,” says Mullen. “Electronics, semiconductor and a lot of the tier automotive companies, like Faurecia, Visteon and Magna, have all invested in our platforms and are scaling the business. We see this being implemented across all industries, which is really adding to our excitement.”

Lease Options
Although Mullen says they’ve seen tremendous success with the current buy model, MiR is trying to make it even easier to work with this emerging technology. That drove them to the RaaS model.

“We think a leasing option will allow companies that are still trying to understand the use cases for the technology to get in quicker, and then slowly scale the business up as they learn how to apply it and what the sweet spots are for autonomous mobile robots. The lease option is intended to reduce the cost of entry. Today it’s mainly the bigger multinationals that are buying, but we believe by providing options for lower entry points, this will make the use cases in the small-to-midsized companies come to light.”

He says a third-party company will handle all the leases. MiR’s distributor network will engage with the third-party company to put together lease programs for customers.

MiR has also implemented a Preferred System Integrator (PSI) program to augment the existing network of distribution partners. Two and a half years ago, it was mainly large companies investing in these mobile platforms. They were purchasing in volumes of one to five robots. Today, they’re seeing investments of 20, 30, or even more than 50 robots.

“When you get into these bigger deployments, it’s more critical to have companies that are equipped to handle them. Our distribution partners are set up as a sales channel. Although most of them have integration capabilities, they don’t want to invest in deploying hundreds of robots at one time. They would rather hand that off to a company that’s able to properly support large-scale deployments.”

Over the last couple of years, MiR had been focused on bringing more efficiency to the manufacturing process, not necessarily on replacing existing AGVs and forklifts.

“For example, you have a guy that gets paid a healthy salary to sit in front of a machine tool and use his skills to do a certain task. That’s what makes the company money. But when he has to get up and carry a tray of parts to the next phase in the production cycle, that’s inefficient. That’s what we’ve been focusing on, at least with our MiR100 and MiR200 (pictured).”

 Autonomous mobile robot efficiently transports finished product to the inspection area, freeing up employees for more high-value tasks at this custom plastic injection molder. (Courtesy of Mobile Industrial Robots A/S)

One example is an Indiana-based company specializing in custom plastic injection molding and mold tooling. The mobile robot loops the shop floor, autonomously transporting finished product from the presses to quality inspection. This frees up personnel for more high-value tasks and eliminates material flow bottlenecks.

“With the new MiR500, we’re going after heavier loads and palletizer loads. That’s replacing standard AGVs and forklifts. We’re also starting to see big conveyor companies like Simplimatic Automation and FlexLink move to a more flexible type of platform with autonomous mobile robots.

“Parallel to the hardware is our software. A key part of our company is the way we develop the software, the way we allow people to interface with the product. We’re continuously making it more intuitive and easier to use.”

MiR offers two software packages, the operating system that comes with the robot and the fleet management software that manages two or more robots. The latter is not a requirement, but Mullen says most companies are investing in it to get additional functionality when interfacing with their enterprise system. The newest fleet system is moving to a cloud-based option.
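For a sense of what “interfacing with their enterprise system” can look like in practice, here is a minimal sketch that queues a transport mission through a fleet manager’s REST interface. The endpoint, authentication, and payload fields are hypothetical illustrations, not MiR’s documented API.

# Hypothetical REST call to a fleet manager; the endpoint and payload fields
# are illustrative assumptions, not MiR's documented interface.
import requests

FLEET_URL = "http://fleet.example.local/api/missions"  # placeholder
HEADERS = {"Authorization": "Bearer <token>"}          # placeholder auth

def queue_transport(pickup: str, dropoff: str) -> dict:
    """Ask the fleet manager to assign the best available robot."""
    payload = {"type": "transport", "from": pickup, "to": dropoff}
    resp = requests.post(FLEET_URL, json=payload, headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()  # e.g., which robot was assigned and its ETA

# Example: move finished parts from press 3 to quality inspection.
# queue_transport("press_3", "qa_inspection")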

Hardware and software updates are all handled through MiR’s distribution channel and Mullen doesn’t think any of that will change under the lease option.

“The support model will stay the same. Our distributors are all trained on hardware updates, preventative maintenance and troubleshooting. I firmly believe the major component to our success today is our distribution model.”

Mullen says he’s looking forward to new products coming out in 2019. MiR is also hiring. They expect to double their employee count in the Americas and globally.

High-Tech, Short-Term Need
Many of the companies we see adopting nontraditional models like RaaS are feisty startups. But industry stalwarts are coming on board, too.

On-demand material handling robots come in all sizes, payloads and reaches for rental by the week. (Courtesy of RobotWorx)

Established in 1992, RobotWorx is part of SCOTT Technology Ltd., a century-old New Zealand-based company specializing in automated production, robotics and process machinery. RobotWorx joined the SCOTT family of international companies in 2014 and recently completed a rigorous audit process to become an RIA Certified Robot Integrator.

RobotWorx buys, reconditions and sells used robots, along with maintaining an inventory of new robotic systems and offering full robot integration and training services. Rentals are nothing new to them; they were renting robots for years before it was a trend. But in response to a recent upswing in industry requests, RobotWorx made a major push on its rental program this past spring.

“We’ve done a lot with the TV and film industry,” says Tom Fischer, Operations Manager for RobotWorx in Marion, Ohio. “If you’ve seen the latest AT&T commercial, there are blue and orange robots in it. We rented those out for a week.”

Dubbed “Bruce” and “Linda” on strips of tape along their outstretched arms, these brightly colored robots have a starring role in this AT&T Business commercial promoting Edge-to-Edge Intelligence solutions. Fischer says companies in this industry usually select a particular size of robot, typically either a long-reach or large-payload material handling robot, like the Yaskawa Motoman long-reach robots in this AT&T commercial.

Ever wonder if the robots in commercials are just there for effect? It turns out, not always. Fischer says these are fully functioning robots. AT&T’s ad agency must have a robot wrangler off camera to keep Bruce and Linda in line. However, the other robots in the background are the result of TV magic.

“We basically just sent them the robots,” says Fischer. “They did what they wanted to do with them and then sent them back.”

For quick gigs like this commercial, or maybe a movie cameo or even a tradeshow display, rental robots make sense. But how do you know when it’s better to rent or buy?

“We’ll do a cost analysis with the customer,” says Fischer. “We have an ROI calculator on our website if they want to see what their capital investment would be over a long-term commitment. (Check out RIA’s Robot ROI Calculator). We also look at it from the standpoint that if they have a long-term contract with somebody, their return on investment is going to be a lot better with a purchase. If they think they’re only going to use the robot for six months, it doesn’t make sense for them to buy it.”
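Fischer’s rule of thumb lends itself to a simple breakeven check. The prices below are invented for illustration; substitute your own quotes.

# Invented prices for illustration; substitute your own quotes.
PURCHASE_PRICE = 60_000   # $ for the robot cell, bought outright
WEEKLY_RENT = 1_200       # $ per week to rent the same cell

breakeven_weeks = PURCHASE_PRICE / WEEKLY_RENT
print(f"Renting wins for jobs shorter than ~{breakeven_weeks:.0f} weeks")

# A six-month contract (26 weeks) costs 26 * $1,200 = $31,200 to rent,
# well under the $60,000 purchase price, so renting wins, as Fischer suggests.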

Rent-A-Cell
RobotWorx rents robots by the week, month or year. A week is the minimum, but there’s no long-term commitment required. A rental includes a robot, the robot controller, teach pendant and end-of-arm tooling (EOAT). Robot brands available include ABB, FANUC, KUKA, Universal Robots, and Yaskawa Motoman.

They also rent entire ready-to-ship robot cells for welding or material handling. The most popular systems are the RWZero (pictured) and RW950 cells.

Self-contained, ready-to-ship robotic welding cell accelerates uptime whether you buy or rent it. (Courtesy of RobotWorx)

“The RWZero cell is very basic,” says Fischer. “You have a widget and you need 5,000 of them. Rent this cell and you have a production line instantly.”

The RW950 is more portable. Fischer calls it a “pallet platform.” The robot, controller, operator station and workpiece positioner all share a common base, basically a large steel structure that can be moved around with a forklift whenever needed. See the RW950 Welding Workcell in action.

“We’ve done a lot of the small weld cells,” he says. “We always have a couple on hand so we can supply those on demand. We’ve done larger material handling cells, as well.

“We have a third-party company that does the financing if you need it. A lot of people just end up paying it upfront. If they were to purchase the robot after they’ve rented it, we apply that towards the purchase as well.”

Fischer says 20 percent of the rental price is credited to the purchase if a customer decides to keep the robot. All the robots and robotic cells are up to date on maintenance before they leave the RobotWorx floor and shouldn’t require any major maintenance for at least a year. He says most customers end up buying the robot if their rental period exceeds a year.
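That 20 percent credit shifts the rent-versus-buy math slightly. A quick worked example, using the same invented figures as above; only the 20 percent credit comes from Fischer.

# Same invented figures as above; only the 20% credit is from the article.
PURCHASE_PRICE = 60_000
WEEKLY_RENT = 1_200
WEEKS_RENTED = 20

rent_paid = WEEKLY_RENT * WEEKS_RENTED        # $24,000 paid in rent so far
credit = 0.20 * rent_paid                     # $4,800 credited toward purchase
net_purchase = PURCHASE_PRICE - credit        # $55,200 to keep the robot
print(f"After {WEEKS_RENTED} weeks: ${credit:,.0f} credit, "
      f"${net_purchase:,.0f} to buy outright")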

Time is not always the deciding factor under the RaaS model. As robotic systems become easier to deploy and redeploy, the idea of robots as a service will gain more permanence as a long-term solution. In the future, robotics in our workplaces and homes will be as ubiquitous as the Internet. In the meantime, we’ll keep our eyes on RaaS as it gets ready for primetime.

Source : https://www.robotics.org/content-detail.cfm/Industrial-Robotics-Industry-Insights/Robots-for-Rent-Why-RaaS-Works/content_id/7665

Ten Signs You’re Headed for Trouble in 2019 – ITL


Many of you have seen the Gartner Hype Cycle curve. When a hot technology appears, it gets hyped and hyped until one day enough people become impatient, and sentiment turns against the technology. It then heads into what Gartner calls the Trough of Disillusionment. Eventually, the technology finds its role – often a major one – in the market.

The idea has always struck me as rather obvious (I described the curve to reporter colleagues on the tech beat at the Wall Street Journal years before I ever saw the Gartner chart), but Gartner popularized the notion, which is why it’s known as the Gartner Hype Cycle rather than, say, the Carroll Hype Cycle. Gartner is to be commended, because technologies can be plotted on the curve, and, drawing on history, their futures can be predicted with some confidence.

On the Carroll…er, Gartner Hype Cycle, the idea of technology-driven innovation in insurance seems to be heading into the Trough of Disillusionment (great name) among incumbents. A Lemonade or Trov hasn’t taken over the world. Big Tech is coming to insurance but not really here yet for most insurers. Industry executives seem to have read everything they care to about AI, blockchain, etc., and are starting to describe plans for small-bore improvements rather than truly innovative ones. Not total disillusionment, but headed in that direction.

Which brings me to the warning signs for 2019.

The slide into the Trough of Disillusionment creates real opportunities, because prices of insurtechs will start to settle back toward reality. In any case, technologies keep maturing no matter how we feel about them, so the day of reckoning in the market creeps closer all the time. The slide toward disillusionment is the last opportunity for companies to position themselves before a host of technologies and startups shakes the insurance market.

If I’m right, 2019 may well be the last chance for insurance industry incumbents to start taking advantage of the opportunities presented by insurtech, or lose out to nimbler competitors. In that spirit, my colleagues and I at ITL pulled some thoughts together for incumbents on:

10 Signs You’re Headed for Trouble in 2019

  • You set up an innovation fund and think that means you’re innovative.
  • Your innovations focus on cutting expenses, to the exclusion of all else, and – worse – you reward executives based on those cuts.
  • You say your legacy IT systems are what is preventing you from innovating.
  • You say your defensive culture is preventing you from innovating.
  • You practice “innovation tourism,” going to Silicon Valley and assuming magic dust will rub off on you. (Related warning sign: You have a ping pong table and coffee bar and think they signify creativity.)
  • You have 6,000 ideas but can’t figure out how to turn one into a product.
  • You can’t name 20 insurtechs that operate in your strategic domain or adjacent ones.
  • You aren’t starting to move your operations into the cloud.
  • You don’t have significant diversity in your management team and board, in terms of gender, race, age and nationality.
  • You can’t quantify and measure how you’re doing on your innovation journey; you just hope you’re improving.

Bonus warning sign: You make television commercials criticizing innovative companies.

In “The Sun Also Rises,” a character is asked how he went bankrupt. “Two ways,” he says, “gradually, then suddenly.” We’re still in the “gradually” part of innovation driven by insurtech, but “suddenly” is coming. I suggest insurance industry incumbents view 2019 and warning signs like these as a last warning to get moving and avoid innovation bankruptcy.

Source : http://blog.insurancethoughtleadership.com/blog/ten-signs-youre-headed-for-trouble-in-2019

Data-driven transformation of the life sciences industry – RockHealth

Digital health innovation continues moving full-force in transforming the business of healthcare. For pharma and medtech companies in particular, this ongoing shift has pushed them to identify ways to create value for patients beyond the drugs themselves. From new partnerships between digital health and life science companies to revamped commercial models, collecting and extracting insights from data is at the core of these growth opportunities. But navigating the rapidly evolving terrain is no simple task.

To help these companies effectively incorporate and utilize digital health tools, Rock Health partner ZS Associates draws on over 30 years of industry expertise to guide them through the complex digital health landscape. We chatted with Principal Pete Masloski to discuss how he works with clients to help identify, develop, and commercialize digital health solutions within their core businesses—and where he sees patients benefiting the most as a result.

Note: This interview has been lightly edited for clarity.

Where does ZS see the promise of data- and analytics-enabled digital health tools leading to in the next five years, 10 years, and beyond?

Data and analytics will play a central role in the digital health industry’s growth over the next five to ten years. Startups are able to capture larger, novel sets of data in a way that large life science companies historically have not been able to. As a result, consumers will be better informed about their health choices; physicians will have more visibility into what treatment options work best for whom under what circumstances; health plans will have a better understanding of treatment choices; and pharmaceutical and medical device companies will be able to strategically determine which products and services to build.

We see personalized medicine, driven by genomics and targeted therapies, rapidly expanding over the next few years. Pharmaceutical discovery and development will also transition to become more digitally enabled. The ability to match patients with clinical trials and improve the patient experience will result in lower costs, faster completion, and more targeted therapies. The increase in real-world evidence will be used to demonstrate the efficacy of therapeutics and devices in different populations, which assures payers and providers that outcomes from studies can be replicated in the real world.

How is digital health helping life sciences companies innovate their commercial models? What is the role of data and analytics in these new models?

The pharmaceutical industry continues to face a number of challenges, including increasingly competitive markets, growing biosimilar competition, and overall scrutiny of pricing. We’ve seen excitement around solutions that integrate drugs with meaningful outcomes and solutions that address gaps in care delivery and promote medication adherence.

Solving these problems creates new business model opportunities for the industry through fresh revenue sources and ways of structuring agreements with customers. For example, risk-based contracts with health plans, employers, or integrated delivery networks (IDNs) become more feasible when you can demonstrate increased likelihood of better outcomes for more patients. We see this coming to fruition when pharma companies integrate comprehensive digital adherence solutions focused on patient behavior change around a specific drug, as in Healthprize’s partnership with Boehringer Ingelheim. In medtech, digital health tools can both differentiate core products and create new profitable software or services businesses. Integrating data collection technology and connectivity into devices and adding software-enabled services can support a move from traditional equipment sales to pay-per-use. This allows customers to access the new equipment technology without paying a large sum up front—and ensures manufacturers will have a more predictable ongoing source of revenue.

That said, data and analytics remain at the core of these new models. In some cases, such as remote monitoring, the data itself is the heart of the solution; in others, the data collected helps establish effectiveness and value as a baseline for measuring impact. Digital ambulatory blood pressure monitors capture an individual’s complete blood pressure profile throughout the day, which provides a previously unavailable and reliable “baseline.” Because in-office only readings may be skewed by “white coat hypertension,” or stress-induced blood pressure readings, having a more comprehensive look at this data can lead to deeper understandings of user behaviors or conditions. Continuous blood pressure readings can help with diagnoses of stress-related drivers of blood pressure spikes, for example. These insights become the catalyst for life science companies’ new product offerings and go-to-market strategies.
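As a toy illustration of the kind of analysis this enables, the sketch below compares a single in-office reading against an ambulatory baseline to flag a possible white-coat effect. The readings and the 20 mmHg threshold are assumptions for illustration, not clinical guidance.

# Toy example; the readings and the 20 mmHg flag threshold are assumptions.
from statistics import mean

ambulatory_systolic = [118, 122, 125, 119, 121, 124, 120, 117]  # wearable readings
office_systolic = 148                                           # in-office reading

baseline = mean(ambulatory_systolic)
excess = office_systolic - baseline
if excess > 20:
    print(f"Office reading {office_systolic} is {excess:.0f} mmHg above the "
          f"ambulatory baseline of {baseline:.0f}: possible white-coat effect")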

What are some examples of how data sets gathered from partnerships with digital health companies can be leveraged to uncover new value for patients and address their unmet needs?

As digital health companies achieve a certain degree of scale, their expansive data sets become more valuable because of the insights that can be harnessed to improve outcomes and business decisions. Companies like 23andMe, for example, have focused on leveraging their data for research into targeted therapies. Flatiron Health is another great example of a startup that created a foundational platform (EMR) whose clinical data from diverse sources (e.g., laboratories, research repositories, and payer networks) became so highly valued in cancer therapy development that Roche acquired it earlier this year for close to $2B.

It’s exciting to think about the wide array of digital health solutions and the actionable insight that can be gleaned from them. One reason partnerships are important for the industry is that few innovators who are collecting data have the capabilities and resources to fully capitalize on its use on their own. Pharma companies and startups must work together to achieve all of this at scale. Earlier this year, Fitbit announced a new partnership with Google to make the data collected from its devices available to doctors. Google’s API can directly link heart rate and fitness activity to the EMR, allowing doctors to easily review and analyze larger amounts of data. This increase in visibility provides physicians with more insight into how patients are doing between visits, and therefore can also help with decision pathways.

Another example announced earlier this year is a partnership between Evidation Health and Tidepool, who are conducting a new research study, called the T1D Sleep Pilot, to study real-world data from Type 1 diabetics. With Evidation’s data platform and Tidepool’s device-agnostic consumer software, the goal is to better understand the dynamics of sleep and diabetes by studying data from glucose monitors, insulin pumps, and sleep and activity trackers. The data collected from sleep and activity trackers in particular allows us to better understand possible correlations between specific chronic conditions, like diabetes, and the impact of sleep—which in the past has been challenging to monitor. These additional insights provide a more comprehensive understanding of a patient’s condition and can lead to changes in treatment decisions—and ultimately, better outcomes.

How do you assess the quality and reliability of the data generated by digital health companies? What standards are you measuring them against?

Data quality management (DQM) is the way in which leading companies evaluate the quality and reliability of data sources. ISO 9000’s definition of quality is “the degree to which a set of inherent characteristics fulfills requirements.” At ZS, we have a very robust DQM methodology, and our definition goes beyond the basics to include both the accuracy and the value of the data. Factors such as accuracy and absence of errors, and fulfilling specifications (business rules, designs, etc.), are foundational, but in our experience it’s most important to also include an assessment of value, completeness, and lack of bias because often these factors can lead to misleading or inaccurate insights from analysis of that data.

However, assessing the value of a new data source is not easy, and it presents an entirely different set of challenges. One of the most important is the actual interpretation of the data being collected. How do you know when someone is shaking their phone or Fitbit to inflate their step count, or how do you tell that the device has been taken off and simply isn’t tracking activity? How do you account for that and go beyond the data to understand what is really happening? As we get more experience with IoT devices and algorithms get smarter, we will get better at interpreting what these devices are collecting and be more forgiving of underlying data quality.
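Two of the interpretation problems Masloski raises can be sketched as simple plausibility heuristics; the thresholds below are invented for illustration, and real systems would use far more sophisticated models.

# Heuristic sketches; the thresholds are invented for illustration.
from typing import List, Optional

def looks_like_shaking(steps_per_minute: List[int], max_plausible: int = 200) -> bool:
    """Flag minute buckets whose step cadence exceeds a plausible human rate."""
    return any(s > max_plausible for s in steps_per_minute)

def looks_not_worn(heart_rate: List[Optional[int]], min_gap: int = 30) -> bool:
    """Flag long runs of missing heart-rate samples: the device may be off-wrist."""
    run = 0
    for hr in heart_rate:
        run = run + 1 if hr is None else 0
        if run >= min_gap:
            return True
    return False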

What are the ethical implications or issues (such as data ownership, privacy, and bias) you’ve encountered thus far, or anticipate encountering in the near future?

The ethical stewardship and protection of personal health data are just as essential for the long-term sustainability of the digital health industry as the data itself. The key question is, how can the industry realize the full value from this data without crossing the line? Protecting personal data in an increasingly digitized world—where we’ve largely become apathetic to the ubiquitous “terms and conditions” agreements—is a non-negotiable. How digital health and life science companies collect, manage, and protect users’ information will remain a big concern.

There are also ethical issues around what captured data is used for. Companies need to carefully establish how to leverage the data appropriately, without crossing the line. For example, using de-identified data for research aimed at improving products or services is aligned with creating a better experience for the patient, as opposed to leveraging the data for targeted marketing purposes.

Companies also face the issue of potential biases that may emerge when introducing AI and machine learning into decision-making processes around treatment or access to care. Statistical models are only as good as the data used to train them. Companies introducing these models need to test their datasets and AI model outputs to ensure that gaps are eliminated from training data, that the algorithms don’t learn to introduce bias, and that there is a process for evaluating bias as the models continue to learn and evolve.
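One basic form of the bias testing described here is checking that a model’s error rates are comparable across subgroups. A minimal sketch with made-up data and a made-up tolerance:

# Minimal subgroup-parity check; the records and 0.05 tolerance are made up.
from collections import defaultdict

def false_negative_rates(records):
    """records: (group, y_true, y_pred) tuples, with 1 = needs treatment."""
    fn = defaultdict(int)
    pos = defaultdict(int)
    for group, y_true, y_pred in records:
        if y_true == 1:
            pos[group] += 1
            if y_pred == 0:
                fn[group] += 1
    return {g: fn[g] / pos[g] for g in pos}

rates = false_negative_rates([
    ("A", 1, 1), ("A", 1, 0), ("A", 1, 1), ("A", 1, 1),
    ("B", 1, 0), ("B", 1, 0), ("B", 1, 1), ("B", 1, 1),
])
if max(rates.values()) - min(rates.values()) > 0.05:
    print(f"Potential bias: false-negative rates differ by group: {rates}")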

Source : https://rockhealth.com/the-data-driven-transformation-of-the-life-sciences-industry-a-qa-with-zs-associates-pete-masloski/

When, which … Design Thinking, Lean, Design Sprint, Agile? – Geert Claes

Confusion galore!

A lot of people are — understandably so — very confused when it comes to innovation methodologies, frameworks, and techniques. Questions like: “When should we use Design Thinking?”, “What is the purpose of a Design Sprint?”, “Is Lean Startup just for startups?”, “Where does Agile fit in?”, “What happens after the <some methodology> phase?” are all very common questions.

(How) does it all connect?

When browsing the Internet for answers, one notices quickly that others too are struggling to understand how it all works together.

Gartner (as well as numerous others) tried to visualise how methodologies like Design Thinking, Lean, Design Sprint and Agile flow nicely from one to the next. Most of these visualisations have a number of nicely coloured and connected circles, but for me they seem to miss the mark. The place where one methodology flows into the next is very debatable, because there are too many similar techniques and there is just too much overlap.

The innovation spectrum

It probably makes more sense to just look at Design Thinking, Lean, Design Sprint & Agile as a bunch of tools and techniques in one’s toolbox, rather than argue for one over the other, because they can all add value somewhere on the innovation spectrum.

Innovation initiatives can range from exploring an abstract problem space, to experimenting with a number of solutions, before continuously improving a very concrete solution in a specific market space.

Business model

An aspect that often seems to be omitted is the business model maturity axis. For established products as well as adjacent ones (think McKinsey’s Horizon 1 and 2), the business models are often very well understood. For startups and disruptive innovations within an established business, however, the business model will need to be validated through experiments.

Methodologies

Design Thinking

Design Thinking really shines when we need to better understand the problem space and identify the early adopters. There are various flavors of design thinking, but they all roughly follow the double-diamond flow. Simplistically, the first diamond starts by diverging and gathering lots of insights through talking to our target stakeholders, followed by converging through clustering these insights and identifying key pain-points, problems or jobs to be done. The second diamond starts with a diverging exercise to ideate a large number of potential solutions, before prototyping and testing the most promising ideas. Design Thinking is mainly focussed on qualitative rather than quantitative insights.

Lean Startup

The slight difference with Design Thinking is that the entrepreneur (or intrapreneur) often already has a good understanding of the problem space. Lean considers everything to be a hypothesis or assumption until validated … so even that good understanding of the problem space is just an assumption. Lean tends to start by specifying your assumptions on a customer-focussed (lean) canvas, and then prioritizing and validating the assumptions according to the highest risk for the entire product. The process to validate assumptions is creating an experiment (build), testing it (measure) and learning whether our assumption or hypothesis still stands (learn). Lean uses qualitative insights early on, but later forces you to define actionable quantitative data to measure how effectively the solution addresses the problem and whether the growth strategy is on track. The “Get out of the building” phrase is often associated with Lean Startup, but the same principle of reaching out to customers obviously also applies to Design Thinking (… and Design Sprint … and Agile).

Design Sprint

It appears that the Google Ventures-style Design Sprint method could have its roots in a technique described in the Lean UX book. The key strength of a Design Sprint is to share insights, ideate, prototype and test a concept all in a 5-day sprint. Given the short timeframe, Design Sprints only focus on part of the solution, but they are an excellent way to learn really quickly whether or not you are on the right track.

Agile

Just as Lean deals with the uncertainty of our problem, solution and market assumptions, agile development is a great way to cope with uncertainty in building the product itself. There is no need to specify every detail of a product up-front, because here too there are plenty of assumptions and uncertainty. Agile is a great way to build-measure-learn and validate assumptions whilst creating a Minimum Viable Product, in Lean Startup parlance. We should define and prioritize a backlog of value to be delivered, and work in short sprints, delivering and testing the value as part of each sprint.

Conclusion

Probably not really the answer you were looking for, but there is no clear rule on when to start where. There is also no obvious handover point, because there is just too much overlap; this significant overlap may well explain why some people claim methodology <x> is better than <y>.

Anyhow, most innovation methodologies can add great value, and it’s really up to the team to decide where to start and when to apply which methods and techniques. The common ground most can agree on is to avoid falling in love with your own solution, and to listen to qualitative as well as quantitative customer feedback.

Innovation Spectrum

Some great books: Creative Confidence, Lean Startup, Running Lean, Sprint, Dual Transformation, Lean UX, Lean Enterprise, Scaling Lean … and a nice video on Innovation@50x

Update: minor update in the innovation canvas, moving the top axis of problem-solution-market to the side

Source : https://medium.com/@geertwlclaes/when-which-design-thinking-lean-design-sprint-agile-a4614fa778b9
