Category: Silicon Valley

Digital Transformation of Business and Society: Challenges and Opportunities by 2020 – Frank Diana

At a recent KPMG Robotic Innovations event, Futurist and friend Gerd Leonhard delivered a keynote titled “The Digital Transformation of Business and Society: Challenges and Opportunities by 2020”. I highly recommend viewing the Video of his presentation. As Gerd describes, he is a Futurist focused on foresight and observations — not predicting the future. We are at a point in history where every company needs a Gerd Leonhard. For many of the reasons presented in the video, future thinking is rapidly growing in importance. As Gerd so rightly points out, we are still vastly under-estimating the sheer velocity of change.

With regard to future thinking, Gerd used my future scenario slide to describe both the exponential and combinatorial nature of future scenarios — not only do we need to think exponentially, but we also need to think in a combinatorial manner. Gerd mentioned Tesla as a company that really knows how to do this.

Our Emerging Future

He then described our current pivot point of exponential change: a point in history where humanity will change more in the next twenty years than in the previous 300. With that as a backdrop, he encouraged the audience to look five years into the future and spend 3 to 5% of their time focused on foresight. He quoted Peter Drucker (“In times of change the greatest danger is to act with yesterday’s logic”) and stated that leaders must shift from a focus on what is, to a focus on what could be. Gerd added that “wait and see” means “wait and die” (love that by the way). He urged leaders to focus on 2020 and build a plan to participate in that future, emphasizing the question is no longer what-if, but what-when. We are entering an era where the impossible is doable, and the headline for that era is: exponential, convergent, combinatorial, and inter-dependent — words that should be a key part of the leadership lexicon going forward. Here are some snapshots from his presentation:

  • Because of exponential progression, it is difficult to imagine the world in 5 years, and although the industrial era was impactful, it will not compare to what lies ahead. The danger of vastly under-estimating the sheer velocity of change is real. For example, in just three months, the projection for the number of autonomous vehicles sold in 2035 went from 100 million to 1.5 billion
  • Six years ago Gerd advised a German auto company about the driverless car and the implications of a sharing economy — and they laughed. Think of what’s happened in just six years — it’s hard to imagine anyone is laughing now. He witnessed something similar as a veteran of the music business, where he tried to guide the industry through digital disruption; an industry that shifted from selling $20 CDs to making a fraction of a penny per play. Gerd’s experience in the music business is a lesson we should learn from: you can’t stop people who see value from extracting that value. Protectionist behavior did not work, as the industry lost 71% of its revenue in 12 years. Streaming music will be huge, but the winners are not the traditional players; they are Spotify, Apple, Facebook, Google, etc. This scenario is likely to play out across every industry: new businesses are emerging, but traditional companies are not running them. Gerd stressed that we can’t let this happen across these other industries
  • Anything that can be automated will be automated: truck drivers and pilots go away, as robots don’t need unions. There is just too much to be gained not to automate. For example, 60% of the cost in the system could be eliminated by interconnecting logistics, possibly realized via a Logistics Internet as described by economist Jeremy Rifkin. But the drive towards automation will have unintended consequences and some science fiction scenarios could play out. Humanity and technology are indeed intertwining, but technology does not have ethics. A self-driving car would need ethics, as we make difficult decisions while driving all the time. How does a car decide to hit a frog versus swerving and hitting a mother and her child? Speaking of science fiction scenarios, Gerd predicts that when these things come together, humans and machines will have converged:
  • Gerd has been using the term “Hellven” to represent the two paths technology can take. Is it 90% heaven and 10% hell (unintended consequences), or can this equation flip? He asks: where are we trying to go with this? He used the real example of drones used to benefit society (heaven), but people buying guns to shoot them down (hell). As we pursue exponential technologies, we must do so in a way that avoids negative consequences. Will we allow humanity to move down a path where, by 2030, we will all be human-machine hybrids? Will hacking drive chaos as hackers gain control of vehicles? A recent recall of 1.4 million Jeep vehicles underscores the possibility. A world of super intelligence requires super humanity — technology does not have ethics, but society depends on it. Is this the Ray Kurzweil vision that we want?
  • Is society truly ready for human-machine hybrids, or even advancements like the driverless car that may be closer to realization? Gerd used a very effective Video to make the point
  • Followers of my Blog know I’m a big believer in the coming shift to value ecosystems. Gerd described this as a move away from Egosystems, where large companies are running large things, to interdependent Ecosystems. I’ve talked about the blurring of industry boundaries and the movement towards ecosystems. We may ultimately move away from the industry construct and end up with a handful of ecosystems like: mobility, shelter, resources, wellness, growth, money, maker, and comfort
  • Our kids will live to 90 or 100 as the default. We are gaining 8 hours of longevity per day — one third of a year per year. Genetic engineering is likely to eradicate disease, impacting longevity and global population. DNA editing is becoming a real possibility in the next 10 years, and at least 50 Silicon Valley companies are focused on ending aging and eliminating death. One such company is Human Longevity Inc., which was co-founded by Peter Diamandis of Singularity University. Gerd used a quote from Peter to help the audience understand the motivation: “Today there are six to seven trillion dollars a year spent on healthcare, half of which goes to people over the age of 65. In addition, people over the age of 65 hold something on the order of $60 trillion in wealth. And the question is what would people pay for an extra 10, 20, 30, 40 years of healthy life. It’s a huge opportunity”
  • Gerd described the growing need to focus on the right side of our brain. He believes that algorithms can only go so far. Our right brain characteristics cannot be replicated by an algorithm, making a human-algorithm combination — or humarithm as Gerd calls it — a better path. The right brain characteristics that grow in importance and drive future hiring profiles are:
  • Google is on the way to becoming the global operating system — an Artificial Intelligence enterprise. In the future, you won’t search, because as a digital assistant, Google will already know what you want. Gerd quotes Ray Kurzweil in saying that by 2027, the capacity of one computer will equal that of the human brain — at which point we shift from artificial narrow intelligence to artificial general intelligence. In thinking about AI, Gerd flips the paradigm to IA, or Intelligent Assistant. For example, Schwab already has an intelligent portfolio. He indicated that every bank is investing in intelligent portfolios that handle the simple investments robots can manage. He sees this leading to robots and AI replacing 50% of financial advisors
  • This intelligent assistant race has just begun, as Siri, Google Now, Facebook MoneyPenny, and Amazon Echo vie for intelligent assistant positioning. Intelligent assistants could eliminate the need for actual assistants in five years, and creep into countless scenarios over time. Police departments are already capable of determining who is likely to commit a crime in the next month, and there are examples of police taking preventative measures. Augmentation adds another dimension: an officer wearing glasses can identify you on sight and have your records displayed in front of them. There are over 100 companies focused on augmentation, and a number of intelligent assistant examples surrounding IBM Watson, the most discussed being its effectiveness in assisting doctors. An intelligent assistant is the likely first role in the autonomous vehicle transition, as cars step in to provide a number of valuable services without completely taking over. There are countless Examples emerging
  • Gerd took two polls during his keynote. Here is the first: how do you feel about the rise of intelligent digital assistants? Answers 1 and 2 below received the lion’s share of the votes
  • Collectively, automation, robotics, intelligent assistants, and artificial intelligence will reframe business, commerce, culture, and society. This is perhaps the key take away from a discussion like this. We are at an inflection point where reframing begins to drive real structural change. How many leaders are ready for true structural change?
  • Gerd likes to refer to the 7-ations: Digitization, De-Materialization, Automation, Virtualization, Optimization, Augmentation, and Robotization. Consequences of the exponential and combinatorial growth of these seven include dependency, job displacement, and abundance. While these seven are tools for dramatic cost reduction, they also lead to abundance. Examples are everywhere, from the 16 million songs available through Spotify, to the 3D printed cars that require only 50 parts. As supply exceeds demand in category after category, we reach abundance. As Gerd put it, in five years’ time, genome sequencing will be cheaper than flushing the toilet, and abundant energy will be available by 2035 (2015 will be the first year that a major oil company leaves the oil business to enter the abundance of the renewable business). Other things to consider regarding abundance:
  • Efficiency and business improvement are a path, not a destination. Gerd estimates that total efficiency will be reached in 5 to 10 years, creating value through productivity gains along the way. However, after total efficiency is reached, value comes from purpose. Purpose-driven companies have an aspirational purpose that aims to transform the planet; this is referred to as a massive transformative purpose in a recent book on exponential organizations. When you consider the value that the millennial generation places on purpose, it is clear that successful organizations must excel at both technology and humanity. If we allow technology to trump humanity, business will have no purpose
  • In the first phase, the value lies in the automation itself (productivity, cost savings). In the second phase, the value lies in those things that cannot be automated. Anything that is human about your company cannot be automated: purpose, design, and brand become more powerful. Companies must invent new things that are only possible because of automation
  • Technological unemployment is real this time — and exponential. Gerd cited a recent study by The Economist describing how robotics and artificial intelligence will increasingly be used in place of humans to perform repetitive tasks. On the other side of the spectrum is a demand for better customer service and greater skills in innovation, driven by globalization and falling barriers to market entry. Therefore, creativity and social intelligence will become crucial differentiators for many businesses; jobs will increasingly demand skills in creative problem-solving and constructive interaction with others
  • Gerd described a basic income guarantee that may be necessary if some of these unemployment scenarios play out. Something like this is already on the ballot in Switzerland, and it is not the first time this has been talked about:
  • In the world of automation, experience becomes extremely valuable — and you can’t automate experiences, nor should you attempt to. We clearly see an intense focus on customer experience, and we had a great discussion on the topic on an August 26th Game Changers broadcast. Innovation is critical to both the service economy and the experience economy. Gerd used a visual to describe the progression of economic value:
Source: B. Joseph Pine II and James Gilmore: The Experience Economy
  • Gerd used a second poll to sense how people would feel about humans becoming artificially intelligent. Here again, the audience leaned towards the first two possible answers:
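Two of the quantitative claims in the bullets above are simple to sanity-check. A minimal sketch (the input figures — 8 hours per day and a 71% loss over 12 years — are the ones quoted in the talk; the arithmetic is illustrative):

```python
# Claim: we gain 8 hours of life expectancy per day lived,
# which the talk equates to "one third of a year per year".
gain_per_day_hours = 8
fraction = gain_per_day_hours / 24  # years of longevity gained per year lived
print(f"Longevity gained: {fraction:.3f} years per year")  # 0.333 -> one third

# Claim: the music industry lost 71% of its revenue over 12 years.
# The implied compound annual rate of decline:
remaining_share = 1 - 0.71
annual_rate = remaining_share ** (1 / 12) - 1
print(f"Implied annual revenue change: {annual_rate:.1%}")  # about -9.8% per year
```

Both claims hold up internally: 8/24 of a day per day is exactly one third of a year per year, and a 71% total loss corresponds to roughly a 10% decline compounding every year for 12 years.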

Gerd then summarized the session as follows:

The future is exponential, combinatorial, and interdependent: the sooner we can adjust our thinking (lateral) the better we will be at designing our future.

My take: Gerd hits on a key point. Leaders must think differently. There is very little in a leader’s collective experience that can guide them through the type of change ahead — it requires us all to think differently

When looking at AI, consider trying IA first (intelligent assistance / augmentation).

My take: These considerations allow us to create the future in a way that avoids unintended consequences. Technology as a supplement, not a replacement

Efficiency and cost reduction based on automation, AI/IA and Robotization are good stories but not the final destination: we need to go beyond the 7-ations and inevitable abundance to create new value that cannot be easily automated.

My take: Future thinking is critical for us to be effective here. We have to have a sense as to where all of this is heading, if we are to effectively create new sources of value

We won’t just need better algorithms — we also need stronger humarithms, i.e. values, ethics, standards, principles, and social contracts.

My take: Gerd is an evangelist for creating our future in a way that avoids hellish outcomes — and kudos to him for being that voice

“The best way to predict the future is to create it” (Alan Kay).

My take: our context when we think about the future puts it years away, and that is just not the case anymore. What we think will take ten years is likely to happen in two. We can’t create the future if we don’t focus on it through an exponential lens

Source : https://medium.com/@frankdiana/digital-transformation-of-business-and-society-5d9286e39dbf

When, which … Design Thinking, Lean, Design Sprint, Agile? – Geert Claes

Confusion galore!

A lot of people are — understandably so — very confused when it comes to innovation methodologies, frameworks, and techniques. Questions like: “When should we use Design Thinking?”, “What is the purpose of a Design Sprint?”, “Is Lean Startup just for startups?”, “Where does Agile fit in?”, “What happens after the <some methodology> phase?” are all very common questions.

(How) does it all connect?

When browsing the Internet for answers, one notices quickly that others too are struggling to understand how it all works together.

Gartner (as well as numerous others) tried to visualise how methodologies like Design Thinking, Lean, Design Sprint and Agile flow nicely from one to the next. Most of these visualisations have a number of nicely coloured and connected circles, but for me they seem to miss the mark. The place where one methodology flows into the next is very debatable, because there are too many similar techniques and there is just too much overlap.

The innovation spectrum

It probably makes more sense to just look at Design Thinking, Lean, Design Sprint & Agile as a bunch of tools and techniques in one’s toolbox, rather than argue for one over the other, because they can all add value somewhere on the innovation spectrum.

Innovation initiatives can range from exploring an abstract problem space, to experimenting with a number of solutions, before continuously improving a very concrete solution in a specific market space.

Business model

An aspect that often seems to be omitted is the business model maturity axis. For established products, as well as adjacent ones (think McKinsey’s Horizon 1 and 2), the business models are often very well understood. For startups and disruptive innovations within an established business, however, the business model will need to be validated through experiments.

Methodologies

Design Thinking

Design Thinking really shines when we need to better understand the problem space and identify the early adopters. There are various flavors of design thinking, but they all broadly follow the double-diamond flow. Simplistically, the first diamond starts by diverging, gathering lots of insights through talking to our target stakeholders, followed by converging: clustering these insights and identifying key pain-points, problems, or jobs to be done. The second diamond starts with a diverging exercise to ideate a large number of potential solutions, before prototyping and testing the most promising ideas. Design Thinking is mainly focussed on qualitative rather than quantitative insights.

Lean Startup

The slight difference with Design Thinking is that the entrepreneur (or intrapreneur) often already has a good understanding of the problem space. Lean considers everything to be a hypothesis or assumption until validated … so even that good understanding of the problem space is just an assumption. Lean tends to start by specifying your assumptions on a customer-focussed (lean) canvas, and then prioritizing and validating the assumptions, starting with those that pose the highest risk to the entire product. The process for validating an assumption is to create an experiment (build), test it (measure), and learn whether the assumption or hypothesis still stands. Lean uses qualitative insights early on, but later forces you to define actionable quantitative data to measure how effectively the solution addresses the problem and whether the growth strategy is on track. The “Get out of the building” phrase is often associated with Lean Startup, but the same principle of reaching out to customers obviously also applies to Design Thinking (… and Design Sprint … and Agile).
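The build-measure-learn loop just described can be sketched as a simple control flow. This is an illustrative sketch only: `run_experiment`, the canned metric it returns, and the `TARGET` threshold are hypothetical placeholders, not anything prescribed by Lean Startup itself.

```python
from dataclasses import dataclass


@dataclass
class Assumption:
    statement: str
    risk: int  # higher = riskier; riskiest assumptions are validated first


def run_experiment(assumption: Assumption) -> float:
    """Build a minimal experiment and measure an actionable metric.

    Placeholder: returns a canned conversion rate. In practice this would be
    a landing-page test, interview script, concierge MVP, etc.
    """
    return 0.5  # hypothetical measured conversion rate


# Specify assumptions (as on a lean canvas), then prioritize by risk.
assumptions = sorted(
    [
        Assumption("Customers will pay for this product", risk=9),
        Assumption("Our channel reaches early adopters", risk=6),
    ],
    key=lambda a: a.risk,
    reverse=True,  # highest risk first
)

TARGET = 0.4  # hypothetical success threshold, defined before running the test

for a in assumptions:
    metric = run_experiment(a)  # build + measure
    verdict = "validated" if metric >= TARGET else "pivot or refine"  # learn
    print(f"{a.statement}: {verdict}")
```

The point of the sketch is the ordering and the pre-committed threshold: the riskiest assumption is tested first, and "learn" means comparing the measured metric against a criterion chosen before the experiment, not after.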

Design Sprint

It appears that the Google Ventures-style Design Sprint method could have its roots in a technique described in the Lean UX book. The key strength of a Design Sprint is to share insights, ideate, prototype, and test a concept all in a 5-day sprint. Given the short timeframe, Design Sprints only focus on part of the solution, but they are an excellent way to learn really quickly whether or not you are on the right track.

Agile

Just as Lean deals with the uncertainty of our problem, solution, and market assumptions, agile development is a great way to cope with uncertainty in product development. There is no need to specify every detail of a product up-front, because here too there are plenty of assumptions and uncertainty. Agile is a great way to build-measure-learn and validate assumptions whilst creating a Minimum Viable Product, in Lean Startup parlance. We should define and prioritize a backlog of value to be delivered and work in short sprints, delivering and testing the value as part of each sprint.

Conclusion

Probably not really the answer you were looking for, but there is no clear rule on when to start where. There is also no obvious handover point because there is just too much overlap, and this significant overlap could be the explanation of why some people claim methodology <x> is better than <y>.

Anyhow, most innovation methodologies can add great value, and it’s really up to the team to decide where to start and when to apply which methods and techniques. The common ground most can agree on is to avoid falling in love with your own solution, and to listen to qualitative as well as quantitative customer feedback.

Innovation Spectrum

Some great books: Creative Confidence, Lean Startup, Running Lean, Sprint, Dual Transformation, Lean UX, Lean Enterprise, Scaling Lean … and a nice video on Innovation@50x

Update: minor update in the innovation canvas, moving the top axis of problem-solution-market to the side

Source : https://medium.com/@geertwlclaes/when-which-design-thinking-lean-design-sprint-agile-a4614fa778b9

Former Google CEO Eric Schmidt listed the ‘3 big failures’ he sees in tech startups today – Business Insider

Former Google CEO Eric Schmidt has listed the three “big failures” in tech entrepreneurship around the world.

Schmidt outlined the failings in a speech he gave at the Centre for Entrepreneurs in London this week. He later expanded on his thoughts in an interview with former BBC News boss James Harding.

Below are the three mistakes he outlined, with quotes taken from both a draft of his speech seen by Business Insider, and comments he delivered on the night.

1. People stick to who and what they know

“Far too often, we invest mostly in people we already know, who are working in very narrow disciplines,” Schmidt wrote in his draft.

In his speech, Schmidt pegged this point closely to a need for diversity and inclusion. He said companies need to be open to bringing in people from other countries and backgrounds.

He said entrepreneurship won’t flourish if people are “going to one institution, hiring only those people, and only — if I can be blunt — only white males.”

During the Q&A, Schmidt specifically addressed the gender imbalance in the tech industry. He said there’s a reason to be optimistic about women’s representation in tech improving, predicting that tech’s gender imbalance will vanish in one generation.

2. Too much focus on product and not on platforms

“We frequently don’t build the best technology platforms to tackle big social challenges, because often there is no immediate promise of commercial return,” Schmidt wrote in his draft.

“There are a million e-commerce apps but not enough speciality platforms for safely sharing and analyzing data on homelessness, climate change or refugees.”

Schmidt omitted this mention of socially conscious tech from his final speech, but did say that he sees a lot of innovation coming out of network platforms, which allow people to connect and pool data, because “the barrier to entry for these startups is very, very low.”

3. Companies aren’t partnering up early enough

Finally, Schmidt wrote in his draft that tech startups don’t partner enough with other companies in the modern, hyper-connected world. “It’s impossible to think about any major challenge for society in a silo,” he wrote.

He said in his speech that tech firms have to be ready to partner “fairly early.” He gave the example of a startup that wants to build homecare robots.

“The market for homecare robots is going to be very, very large. The problem is that you need visual systems, and machine learning systems, and listening systems, and motor systems, and so forth. You’re not going to be able to do it with three people,” he said.

After detailing the failures he sees in tech entrepreneurship, Schmidt laid out what he views as the solution. He referred back to the Renaissance in Europe, saying that people turned their hands to all sorts of disciplines, from science, to art, to business.

Source : https://www.businessinsider.com/eric-schmidt-3-big-failures-he-sees-in-tech-entrepreneurship-2018-11

6 Biases Holding You Back From Rational Thinking – Robert Greene

Emotions are continually affecting our thought processes and decisions, below the level of our awareness. And the most common emotion of them all is the desire for pleasure and the avoidance of pain. Our thoughts almost inevitably revolve around this desire; we simply recoil from entertaining ideas that are unpleasant or painful to us. We imagine we are looking for the truth, or being realistic, when in fact we are holding on to ideas that bring a release from tension and soothe our egos, making us feel superior. This pleasure principle in thinking is the source of all of our mental biases. If you believe that you are somehow immune to any of the following biases, it is simply an example of the pleasure principle in action. Instead, it is best to search and see how they continually operate inside of you, as well as learn how to identify such irrationality in others.

These biases, by distorting reality, lead to the mistakes and ineffective decisions that plague our lives. Being aware of them, we can begin to counterbalance their effects.

1) Confirmation Bias

I look at the evidence and arrive at my decisions through more or less rational processes.

To hold an idea and convince ourselves we arrived at it rationally, we go in search of evidence to support our view. What could be more objective or scientific? But because of the pleasure principle and its unconscious influence, we manage to find that evidence that confirms what we want to believe. This is known as confirmation bias.

We can see this at work in people’s plans, particularly those with high stakes. A plan is designed to lead to a positive, desired objective. If people considered the possible negative and positive consequences equally, they might find it hard to take any action. Inevitably they veer towards information that confirms the desired positive result, the rosy scenario, without realizing it. We also see this at work when people are supposedly asking for advice. This is the bane of most consultants. In the end, people want to hear their own ideas and preferences confirmed by an expert opinion. They will interpret what you say in light of what they want to hear; and if your advice runs counter to their desires, they will find some way to dismiss your opinion, your so-called expertise. The more powerful the person, the more they are subject to this form of the confirmation bias.

When investigating confirmation bias in the world take a look at theories that seem a little too good to be true. Statistics and studies are trotted out to prove them, which are not very difficult to find, once you are convinced of the rightness of your argument. On the Internet, it is easy to find studies that support both sides of an argument. In general, you should never accept the validity of people’s ideas because they have supplied “evidence.” Instead, examine the evidence yourself in the cold light of day, with as much skepticism as you can muster. Your first impulse should always be to find the evidence that disconfirms your most cherished beliefs and those of others. That is true science.

2) Conviction Bias

I believe in this idea so strongly. It must be true.

We hold on to an idea that is secretly pleasing to us, but deep inside we might have some doubts as to its truth, and so we go the extra mile to convince ourselves — to believe in it with great vehemence, and to loudly contradict anyone who challenges us. How can our idea not be true, we tell ourselves, if it brings out of us such energy to defend it? This bias is revealed even more clearly in our relationship to leaders — if they express an opinion with heated words and gestures, colorful metaphors and entertaining anecdotes, and a deep well of conviction, it must mean they have examined the idea carefully and therefore express it with such certainty. Those, on the other hand, who express nuances, whose tone is more hesitant, reveal weakness and self-doubt. They are probably lying, or so we think. This bias makes us easy prey for salesmen and demagogues who display conviction as a way to convince and deceive. They know that people are hungry for entertainment, so they cloak their half-truths with dramatic effects.

3) Appearance Bias

I understand the people I deal with; I see them just as they are.

We do not see people as they are, but as they appear to us. And these appearances are usually misleading. First, people have trained themselves in social situations to present the front that is appropriate and that will be judged positively. They seem to be in favor of the noblest causes, always presenting themselves as hardworking and conscientious. We take these masks for reality. Second, we are prone to fall for the halo effect — when we see certain negative or positive qualities in a person (social awkwardness, intelligence), other positive or negative qualities are implied that fit with this. People who are good looking generally seem more trustworthy, particularly politicians. If a person is successful, we imagine they are probably also ethical, conscientious, and deserving of their good fortune. This obscures the fact that many people who get ahead have done so through less-than-moral actions, which they cleverly disguise from view.

4) The Group Bias

My ideas are my own. I do not listen to the group. I am not a conformist.

We are social animals by nature. The feeling of isolation, of difference from the group, is depressing and terrifying. We experience tremendous relief to find others who think the same way as we do. In fact, we are motivated to take up ideas and opinions because they bring us this relief. We are unaware of this pull and so imagine we have come to certain ideas completely on our own. Look at people that support one party or the other, one ideology — a noticeable orthodoxy or correctness prevails, without anyone saying anything or applying overt pressure. If someone is on the right or the left, their opinions will almost always follow the same direction on dozens of issues, as if by magic, and yet few would ever admit this influence on their thought patterns.

5) The Blame Bias

I learn from my experience and mistakes.

Mistakes and failures elicit the need to explain. We want to learn the lesson and not repeat the experience. But in truth, we do not like to look too closely at what we did; our introspection is limited. Our natural response is to blame others, circumstances, or a momentary lapse of judgment. The reason for this bias is that it is often too painful to look at our mistakes. It calls into question our feelings of superiority. It pokes at our ego. We go through the motions, pretending to reflect on what we did. But with the passage of time, the pleasure principle rises and we forget what small part in the mistake we ascribed to ourselves. Desire and emotion will blind us yet again, and we will repeat exactly the same mistake and go through the same mild recriminating process, followed by forgetfulness, until we die. If people truly learned from their experience, we would find few mistakes in the world, and career paths that ascend ever upward.

6) Superiority Bias

I’m different. I’m more rational than others, more ethical as well.

Few would say this to people in conversation. It sounds arrogant. But in numerous opinion polls and studies, when asked to compare themselves to others, people generally express a variation of this. It’s the equivalent of an optical illusion — we cannot seem to see our faults and irrationalities, only those of others. So, for instance, we’ll easily believe that those in the other political party do not come to their opinions based on rational principles, but those on our side have done so. On the ethical front, few will ever admit that they have resorted to deception or manipulation in their work, or have been clever and strategic in their career advancement. Everything they’ve got, or so they think, comes from natural talent and hard work. But with other people, we are quick to ascribe to them all kinds of Machiavellian tactics. This allows us to justify whatever we do, no matter the results.

We feel a tremendous pull to imagine ourselves as rational, decent, and ethical. These are qualities highly promoted in the culture. To show signs otherwise is to risk great disapproval. If all of this were true — if people were rational and morally superior — the world would be suffused with goodness and peace. We know, however, the reality, and so some people, perhaps all of us, are merely deceiving ourselves. Rationality and ethical qualities must be achieved through awareness and effort. They do not come naturally. They come through a maturation process.

Source : https://medium.com/the-mission/6-biases-holding-you-back-from-rational-thinking-f2eddd35fd0f

Building safe artificial intelligence: specification, robustness, and assurance – DeepMind

Building a rocket is hard. Each component requires careful thought and rigorous testing, with safety and reliability at the core of the designs. Rocket scientists and engineers come together to design everything from the navigation course to control systems, engines and landing gear. Once all the pieces are assembled and the systems are tested, we can put astronauts on board with confidence that things will go well.

If artificial intelligence (AI) is a rocket, then we will all have tickets on board some day. And, as with rockets, safety is a crucial part of building AI systems. Guaranteeing safety requires carefully designing a system from the ground up to ensure the various components work together as intended, while developing all the instruments necessary to oversee the successful operation of the system after deployment.

At a high level, safety research at DeepMind focuses on designing systems that reliably function as intended while discovering and mitigating possible near-term and long-term risks. Technical AI safety is a relatively nascent but rapidly evolving field, with its contents ranging from high-level and theoretical to empirical and concrete. The goal of this blog is to contribute to the development of the field and encourage substantive engagement with the technical ideas discussed, and in doing so, advance our collective understanding of AI safety.

In this inaugural post, we discuss three areas of technical AI safety: specification, robustness, and assurance. Future posts will broadly fit within the framework outlined here. While our views will inevitably evolve over time, we feel these three areas cover a sufficiently wide spectrum to provide a useful categorisation for ongoing and future research.

Three AI safety problem areas. Each box highlights some representative challenges and approaches. The three areas are not disjoint but rather aspects that interact with each other. In particular, a given safety problem might involve solving more than one aspect.

Specification: define the purpose of the system

You may be familiar with the story of King Midas and the golden touch. In one rendition, the Greek god Dionysus promised Midas any reward he wished for, as a sign of gratitude for the king having gone out of his way to show hospitality and graciousness to a friend of Dionysus. In response, Midas asked that anything he touched be turned into gold. He was overjoyed with this new power: an oak twig, a stone, and roses in the garden all turned to gold at his touch. But he soon discovered the folly of his wish: even food and drink turned to gold in his hands. In some versions of the story, even his daughter fell victim to the blessing that turned out to be a curse.

This story illustrates the problem of specification: how do we state what we want? The challenge of specification is to ensure that an AI system is incentivised to act in accordance with the designer’s true wishes, rather than optimising for a poorly-specified goal or the wrong goal altogether. Formally, we distinguish between three types of specifications:

  • ideal specification (the “wishes”), corresponding to the hypothetical (but hard to articulate) description of an ideal AI system that is fully aligned to the desires of the human operator;
  • design specification (the “blueprint”), corresponding to the specification that we actually use to build the AI system, e.g. the reward function that a reinforcement learning system maximises;
  • and revealed specification (the “behaviour”), which is the specification that best describes what actually happens, e.g. the reward function we can reverse-engineer from observing the system’s behaviour using, say, inverse reinforcement learning. This is typically different from the one provided by the human operator because AI systems are not perfect optimisers or because of other unforeseen consequences of the design specification.

A specification problem arises when there is a mismatch between the ideal specification and the revealed specification, that is, when the AI system doesn’t do what we’d like it to do. Research into the specification problem of technical AI safety asks the question: how do we design more principled and general objective functions, and help agents figure out when goals are misspecified? Problems that create a mismatch between the ideal and design specifications are in the design subcategory above, while problems that create a mismatch between the design and revealed specifications are in the emergent subcategory.

For instance, in our AI Safety Gridworlds* paper, we gave agents a reward function to optimise, but then evaluated their actual behaviour on a “safety performance function” that was hidden from the agents. This setup models the distinction above: the safety performance function is the ideal specification, which was imperfectly articulated as a reward function (design specification), and then implemented by the agents producing a specification which is implicitly revealed through their resulting policy.

*N.B.: in our AI Safety Gridworlds paper, we provided a different definition of specification and robustness problems from the one presented in this post.
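The gridworld setup can be sketched in a few lines. This is a hypothetical toy, not the paper's actual code: an agent optimises the reward it is given (the design specification), while we score it on a hidden safety performance function (the ideal specification) that it never sees.

```python
# Toy illustration of design vs ideal specification. The paths, lava cell,
# and penalty value are invented for this sketch.

def design_reward(path):
    # Design specification: reward shorter paths to the goal.
    return -len(path)

def safety_performance(path, lava=frozenset({(1, 1)})):
    # Ideal specification: same objective, but stepping on lava is heavily penalised.
    penalty = sum(50 for cell in path if cell in lava)
    return -len(path) - penalty

# A reward-maximising agent picks the shortest path, ignoring the lava
# that never appears in its objective.
short_path = [(0, 0), (1, 1), (2, 2)]                    # cuts through lava
safe_path = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)]     # goes around it

chosen = max([short_path, safe_path], key=design_reward)

print(chosen is short_path)  # the agent prefers the short, unsafe path
print(safety_performance(short_path) < safety_performance(safe_path))
```

The gap between the two scoring functions is exactly the mismatch the paper's hidden "safety performance function" is designed to expose.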

From Faulty Reward Functions in the Wild by OpenAI: a reinforcement learning agent discovers an unintended strategy for achieving a higher score.

As another example, consider the boat-racing game CoastRunners analysed by our colleagues at OpenAI (see Figure above from “Faulty Reward Functions in the Wild”). For most of us, the game’s goal is to finish a lap quickly and ahead of other players — this is our ideal specification. However, translating this goal into a precise reward function is difficult, so instead, CoastRunners rewards players (design specification) for hitting targets laid out along the route. Training an agent to play the game via reinforcement learning leads to a surprising behaviour: the agent drives the boat in circles to capture re-populating targets while repeatedly crashing and catching fire rather than finishing the race. From this behaviour we infer (revealed specification) that something is wrong with the game’s balance between the short-circuit’s rewards and the full lap rewards. There are many more examples like this of AI systems finding loopholes in their objective specification.

Robustness: design the system to withstand perturbations

There is an inherent level of risk, unpredictability, and volatility in real-world settings where AI systems operate. AI systems must be robust to unforeseen events and adversarial attacks that can damage or manipulate such systems. Research on the robustness of AI systems focuses on ensuring that our agents stay within safe limits, regardless of the conditions encountered. This can be achieved by avoiding risks (prevention) or by self-stabilisation and graceful degradation (recovery). Safety problems resulting from distributional shift, adversarial inputs, and unsafe exploration can be classified as robustness problems.

To illustrate the challenge of addressing distributional shift, consider a household cleaning robot that typically cleans a petless home. The robot is then deployed to clean a pet-friendly office, and encounters a pet during its cleaning operation. The robot, never having seen a pet before, proceeds to wash the pets with soap, leading to undesirable outcomes (Amodei and Olah et al., 2016). This is an example of a robustness problem that can result when the data distribution encountered at test time shifts from the distribution encountered during training.
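One crude mitigation is to refuse to act confidently on inputs unlike the training data. The sketch below is purely illustrative: the feature, the three-sigma threshold, and the fallback action are all invented, and real systems use far richer out-of-distribution detectors.

```python
# Hypothetical guard against distributional shift: before acting, check
# whether an input resembles the training data; if not, defer to a safe
# fallback rather than behaving confidently out of distribution.
import statistics

# Invented sensor readings from the robot's training environment (pet-free homes).
train_features = [0.9, 1.0, 1.1, 1.05, 0.95]
mu = statistics.mean(train_features)
sigma = statistics.stdev(train_features)

def in_distribution(x, k=3.0):
    # Flag anything more than k standard deviations from the training mean.
    return abs(x - mu) <= k * sigma

def act(x):
    return "clean" if in_distribution(x) else "stop and ask a human"

print(act(1.02))  # familiar input: proceed
print(act(7.5))   # nothing like training data (a pet!): fall back
```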

From AI Safety Gridworlds. During training the agent learns to avoid the lava; but when we test it in a new situation where the location of the lava has changed, it fails to generalise and runs straight into the lava.

Adversarial inputs are a specific case of distributional shift, where inputs are specially designed to trick an AI system.

An adversarial input, overlaid on a typical image, can cause a classifier to miscategorise a sloth as a race car. The two images differ by at most 0.0078 in each pixel. The first one is classified as a three-toed sloth with >99% confidence. The second one is classified as a race car with >99% probability.
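The sloth-to-race-car example relies on gradient-based perturbations. A toy version of the idea can be sketched on a linear classifier rather than a deep network; all numbers here are invented, but real attacks such as FGSM apply the same sign-of-gradient step to image pixels.

```python
# Hypothetical toy of a gradient-sign adversarial perturbation: nudge each
# input feature by epsilon in the direction that most changes the score,
# flipping the prediction while altering each feature only slightly.

w = [2.0, -3.0, 1.0]   # weights of a tiny linear classifier (invented)
x = [0.5, 0.1, 0.2]    # a correctly classified input (invented)

def score(features):
    # Positive score means class "sloth"; negative means "not sloth".
    return sum(wi * xi for wi, xi in zip(w, features))

def perturb(features, eps):
    # For a linear model the gradient of the score w.r.t. the input is just w,
    # so step each feature by -eps * sign(w_i) to push the score down.
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, features)]

x_adv = perturb(x, eps=0.2)

print(score(x) > 0)       # original input: classified "sloth"
print(score(x_adv) > 0)   # adversarial input: prediction flips
print(max(abs(a - b) for a, b in zip(x, x_adv)))  # each feature moved by only 0.2
```

The analogue of the "at most 0.0078 per pixel" bound above is the epsilon: a tiny, bounded change per feature that nonetheless flips the decision.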

Unsafe exploration can result from a system that seeks to maximise its performance and attain goals without safety guarantees that hold during exploration, as it learns and explores in its environment. An example would be the household cleaning robot putting a wet mop in an electrical outlet while learning optimal mopping strategies (García and Fernández, 2015; Amodei and Olah et al., 2016).

Assurance: monitor and control system activity

Although careful safety engineering can rule out many safety risks, it is difficult to get everything right from the start. Once AI systems are deployed, we need tools to continuously monitor and adjust them. Our last category, assurance, addresses these problems from two angles: monitoring and enforcing.

Monitoring comprises all the methods for inspecting systems in order to analyse and predict their behaviour, both via human inspection (of summary statistics) and automated inspection (to sweep through vast amounts of activity records). Enforcement, on the other hand, involves designing mechanisms for controlling and restricting the behaviour of systems. Problems such as interpretability and interruptibility fall under monitoring and enforcement respectively.

AI systems are unlike us, both in their embodiments and in their way of processing data. This creates problems of interpretability; well-designed measurement tools and protocols allow the assessment of the quality of the decisions made by an AI system (Doshi-Velez and Kim, 2017). For instance, a medical AI system would ideally issue a diagnosis together with an explanation of how it reached the conclusion, so that doctors can inspect the reasoning process before approval (De Fauw et al., 2018). Furthermore, to understand more complex AI systems we might even employ automated methods for constructing models of behaviour using Machine theory of mind (Rabinowitz et al., 2018).

ToMNet discovers two subspecies of agents and predicts their behaviour (from “Machine Theory of Mind”)

Finally, we want to be able to turn off an AI system whenever necessary. This is the problem of interruptibility. Designing a reliable off-switch is very challenging: for instance, because a reward-maximising AI system typically has strong incentives to prevent this from happening (Hadfield-Menell et al., 2017); and because such interruptions, especially when they are frequent, end up changing the original task, leading the AI system to draw the wrong conclusions from experience (Orseau and Armstrong, 2016).
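The incentive problem can be seen in a back-of-the-envelope calculation. The probabilities and rewards below are invented purely to make the point that a naive reward maximiser prefers to disable its own off-switch.

```python
# Hypothetical toy of the off-switch incentive: a reward-maximising agent
# compares expected reward with and without jamming its interrupt button.

P_INTERRUPT = 0.3    # chance a human presses stop mid-episode (invented)
TASK_REWARD = 10.0   # reward for finishing the task (invented)
DISABLE_COST = 1.0   # small effort spent jamming the button (invented)

# If interruptions are allowed, interrupted runs earn nothing.
allow = (1 - P_INTERRUPT) * TASK_REWARD

# If the button is disabled, the task always completes, minus the jamming cost.
disable = TASK_REWARD - DISABLE_COST

print(allow, disable)
print(disable > allow)  # naive reward maximisation favours disabling the switch
```

Making the agent indifferent to (or positively accepting of) interruption, rather than incentivised against it, is the crux of the interruptibility work cited above.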

A problem with interruptions: human interventions (i.e. pressing the stop button) can change the task. In the figure, the interruption adds a transition (in red) to the Markov decision process that changes the original task (in black). See Orseau and Armstrong, 2016.

Looking ahead

We are building the foundations of a technology which will be used for many important applications in the future. It is worth bearing in mind that design decisions which are not safety-critical at the time of deployment can still have a large impact when the technology becomes widely used. Although convenient at the time, once these design choices have been irreversibly integrated into important systems the tradeoffs look different, and we may find they cause problems that are hard to fix without a complete redesign.

Two examples from the development of programming include the null pointer — which Tony Hoare refers to as his ‘billion-dollar mistake’– and the gets() routine in C. If early programming languages had been designed with security in mind, progress might have been slower but computer security today would probably be in a much stronger position.

With careful thought and planning now, we can avoid building in analogous problems and vulnerabilities. We hope the categorisation outlined in this post will serve as a useful framework for methodically planning in this way. Our intention is to ensure that AI systems of the future are not just ‘hopefully safe’ but robustly, verifiably safe — because we built them that way!

We look forward to continuing to make exciting progress in these areas, in close collaboration with the broader AI research community, and we encourage individuals across disciplines to consider entering or contributing to the field of AI safety research.

Source : https://medium.com/@deepmindsafetyresearch/building-safe-artificial-intelligence-52f5f75058f1

 

How 20 big-name US VC firms invest at Series A & B – Pitchbook

NEA is one of the most well-known investors around, and the firm also takes the crown as the most active VC investor in Series A and B rounds in the US so far in 2018. Andreessen Horowitz, Accel and plenty of the other usual early-stage suspects are on the list, too.

Also included is a pair of names that have been in the news this year for backing away from the traditional VC model: Social Capital and SV Angel. The two are on the list thanks to deals completed earlier in the year.

Just how much are these prolific investors betting on Series A and Series B rounds? And at what valuation? We’ve used data from the PitchBook Platform to highlight a collection of the top venture capital investors in the US (excluding accelerators) and provide information about the Series A and B rounds they’ve joined so far this year. Click on the graphic below to open a PDF.

Source : https://pitchbook.com/news/articles/how-20-big-name-us-vc-firms-invest-at-series-a-b

Lyft – Geofencing San Francisco Valencia Street – Greater investment in loading zones is needed for this to be more effective

Creating a Safer Valencia Street

San Francisco is known for its famous neighborhoods and commercial corridors — and the Mission District’s Valencia Street takes it to the next level. For Lyft, Valencia Street is filled with top destinations that our passengers frequent: trendy cafes, hipster clothing stores, bars, and live music.

To put it simply, there’s a lot happening along Valencia Street. Besides the foot traffic, many of its restaurants are popular choices on the city’s growing network of courier services, providing on-demand food delivery via cars and bicycles. Residents of the Mission are increasingly relying on FedEx, Amazon, and UPS for stuff. Merchants welcome commercial trucks to deliver their goods. In light of a recent road diet on Mission Street to create much needed dedicated lanes to improve MUNI bus service, many vehicles have been re-routed to parallel streets like Valencia. And of course, Valencia Street is also one of the most heavily trafficked bicycling corridors in the City, with 2,100 cyclists commuting along Valencia Street each day.

Source: SFMTA

With so many different users of the street and a street design that has largely remained unchanged, it’s no surprise that the corridor has experienced growing safety concerns — particularly around increased traffic, double parking, and bicycle dooring.

Valencia Street is part of the City’s Vision Zero High-Injury Network, the 13% of city streets that account for 75% of severe and fatal collisions. From January 2012 to December 2016, there were 204 people injured and 268 reported collisions along the corridor, of which one was fatal.

As the street has become more popular and the need to act has become more apparent, community organizers have played an important role in rallying City forces to commit to a redesign. The San Francisco Bicycle Coalition has been a steadfast advocate for the cycling community’s needs: going back to the 1990s when they helped bring painted bike lanes to the corridor, to today’s efforts to upgrade to a protected bike lane. The People Protected Bike Lane Protests have helped catalyze the urgency of finding a solution. And elected officials, including Supervisor Ronen and former Supervisor Sheehy, have been vocal about the need for change.

Earlier this spring, encouraged by the SFMTA’s first steps in bringing new, much-needed infrastructure to the corridor, we began conducting an experiment to leverage our technology as part of the solution. As we continue to partner closely with the SFMTA as they work on a new design for the street, we want to report back what we’ve learned.

Introduction

As we began our pilot, we set out with the following goals:

  1. Promote safety on the busiest parts of Valencia Street for the most vulnerable users by helping minimize conflict for bicyclists, pedestrians, and transit riders.
  2. Continue to provide a good experience for drivers and passengers to help ensure overall compliance with the pilot.
  3. Understand the effectiveness of geofencing as a tool to manage pickup activity.
  4. Work collaboratively with city officials and the community to improve Valencia Street.

To meet these goals, we first examined Lyft ride activity in the 30-block project area: Valencia Street between Market Street and Cesar Chavez.

Within this project area, we found that the most heavily traveled corridors were Valencia between 16th and 17th Street, 17th and 18th Street, and 18th and 19th Street. We found that these three blocks make up 27% of total Lyft rides along the Valencia corridor.

We also wanted to understand the top destinations along the corridor. To do this, we looked at ride history where passengers typed in the location they wanted to get picked up from.

Next, we looked at how demand for Lyft changed over time of day and over the course of the week. This would help answer questions such as “how does demand for Lyft differ on weekends vs. weeknights” or “what times of day do people use Lyft to access the Valencia corridor?”

We found that Lyft activity on Valencia Street was highest on weekends and in the evenings. Demand is fairly consistent on weekdays, with major spikes of activity on Fridays, Saturdays, and Sundays. The nighttime hours of 8 PM to 2 AM are also the busiest time for trips, making up 44% of all rides. These findings suggest the important role Lyft plays as a reliable option when transit service doesn’t run as frequently, or as a safe alternative to driving under the influence (a phenomenon we are observing around the country).

The Pilot

Our hypothesis was that because of the increased competition for curb space among multiple on-demand services, as well as the unsafe experience of double parking or crossing over the bike lane to reach passengers, improvements in the Lyft app could help create a better experience for everyone.

To test this, our curb access pilot program was conducted as an “A/B experiment”, where subjects were randomly assigned to a control or treatment group and statistical analysis was used to determine which variation performed better. 50% of riders continued to have the same experience requesting rides within the pilot area: they were able to get picked up wherever they wanted. The other 50% of Lyft passengers requesting rides within the pilot zone were shown the experiment scenario, which asked them to walk to a dedicated pickup spot.
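A minimal sketch of the statistics behind such an A/B comparison might look like the following. The loading-time figures are fabricated for illustration and are not Lyft's data or code.

```python
# Hypothetical A/B analysis: riders are split 50/50 into control (pick up
# anywhere) and treatment (walk to a hot spot), and a metric such as
# loading time is compared between the two groups.
import random
import statistics

random.seed(0)

# Fabricated loading times in seconds, purely illustrative.
control = [random.gauss(25, 5) for _ in range(500)]    # pick up anywhere
treatment = [random.gauss(28, 5) for _ in range(500)]  # walk to hot spot

def welch_t(a, b):
    # Welch's t-statistic: a two-sample comparison that does not
    # assume equal variances between the groups.
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

t = welch_t(treatment, control)
print(t > 2)  # a large |t| suggests the difference is signal, not noise
```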

Geofencing and Venues

Screenshot from the Lyft app showing our Valencia “Venue” between 17th and 18th Street. Passengers requesting a ride are re-directed to a dedicated pickup spot on a side street (depicted as a purple dot). During the pilot, we created these hot spots on Valencia Street between 16th St and 19th St.

Our pilot was built using a Lyft feature called “Venues”, a geospatial tool designed to recommend pre-set pickup locations to passengers. When a user tries to request a ride from an area that has been mapped with a Venue, they are unable to manually control the area in which they’d like to be picked up. Rather, the Venue feature automatically redirects them to a pre-established location. This forced geofencing helps ensure that passengers request rides from safe locations, and builds reliability and predictability for both passengers and drivers as they find each other.
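A Venue-style redirect can be sketched as a geofence check plus a nearest-spot lookup. All coordinates and the rectangular fence below are invented for illustration; Lyft's actual implementation is not public and almost certainly uses real polygons and proper geodesic distance.

```python
# Hypothetical geofence + hot-spot snap. If a requested pickup falls inside
# the fenced area, snap it to the nearest pre-set pickup spot; otherwise
# honour the requested location.
import math

# Rough bounding box for the fenced stretch of street (invented coordinates).
FENCE = {"lat": (37.758, 37.765), "lng": (-122.4225, -122.4210)}

# Pre-set side-street hot spots (invented coordinates).
HOT_SPOTS = [(37.7647, -122.4222), (37.7617, -122.4218), (37.7589, -122.4214)]

def inside_fence(lat, lng):
    return (FENCE["lat"][0] <= lat <= FENCE["lat"][1]
            and FENCE["lng"][0] <= lng <= FENCE["lng"][1])

def pickup_point(lat, lng):
    if not inside_fence(lat, lng):
        return (lat, lng)  # outside the geofence: pick up as requested
    # Inside: redirect to the nearest hot spot (planar distance is fine
    # at this scale for a sketch).
    return min(HOT_SPOTS, key=lambda s: math.hypot(s[0] - lat, s[1] - lng))

print(pickup_point(37.762, -122.4217))  # inside: snapped to nearest hot spot
print(pickup_point(37.750, -122.4300))  # outside: unchanged
```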

Given our understanding of ride activity and demand, we decided to create Venues on Valencia Street between 16th Street and 19th Street. We prioritized creating pickup zones along side streets in areas of lower traffic. Where possible, we tried to route pickups to existing loading zones: however, a major finding of the pilot was that existing curb space is insufficient and that the city needs more loading zones. To support better routing and reduce midblock u-turns or other unsafe driving behavior, we tried to put pickup spots on side streets that allowed for both westbound and eastbound directionality.

Findings

Our pilot ran for three months, from March 2018 to June 2018. Although our initial research focused on rideshare activity during hours of peak demand (i.e. nights and weekends), to support our project goals of increasing overall safety along the corridor and to create an easy and intuitive experience for passengers, we ultimately decided to run the experiments 24/7.

The graphic below illustrates where passengers were standing when they requested a ride, and which hotspot they were redirected to. We found that the top hot spots were on 16th Street. This finding suggests the need for continued coordination with the City to make sure that the dedicated pickup spots to protect cyclists on Valencia Street don’t interrupt on-time performance for the 55–16th Street or 22–Fillmore Muni bus routes.

Loading Time

Loading time, when a driver has pulled over to wait for a passenger to arrive or exit their car, was important for us to look at in terms of traffic flow. This is a similar metric to the transportation planning metric, dwell time.

Currently, our metric for loading time looks at the time between when a driver arrives at the pickup location and when they press the “I have picked up my passenger” button. However, this is an imperfect measurement for dwell time, as drivers may press the button before the passenger gets in the vehicle. Based on our pilot, we have identified this as an area for further research.

Going into our experiment, we expected to see a slight increase in loading time, as passengers would need to get used to walking to the pickup spot. This hypothesis was correct: during the pilot, we saw loading time increased from an average of 25 seconds per ride to 28 seconds. To help speed up the process of drivers and passengers finding each other, we recommend the addition of wayfinding and signage in popular loading areas.

We also wanted to understand the difference between pickups and drop-offs. Generally, we found that pickups have a longer loading time than drop-offs.

Post Pilot Recommendations

Ridesharing is one part of the puzzle to creating a more organized streetscape along the Valencia corridor, so sharing information and coordinating with city stakeholders was critical. After our experiment, we sat down with elected officials, project staff from the SFMTA, WalkSF, and the San Francisco Bicycle Coalition to discuss the pilot findings and collaborate on how our work could support other initiatives underway across the city. We are now formally engaged with the SFMTA’s Valencia Bikeway Improvement Project and look forward to continuing to support this initiative.

Given the findings of this pilot program and our commitment to creating sustainable streets (including our acquisition of the leading bikeshare company Motivate and introduction of bike and scooter sharing to the Lyft platform), we decided to move our project from a pilot to a permanent feature within the Lyft app. This means that currently, anyone requesting a ride on Valencia Street between 16th Street and 19th Street will be redirected to a pickup spot on a side street.

Based on the learnings of our pilot, we recommend the following:

  1. The city needs more loading zones to support increased demand for curbside loading.
  2. Valencia Street can best support all users of the road by building infrastructure like protected bike lanes that offer physical separation from motor vehicle traffic.
  3. Ridesharing is one of many competing uses for curb space. The City needs to take a comprehensive approach to curb space management.
  4. Geofencing alone does not solve a space allocation problem. Lyft’s digital solutions are best leveraged when the necessary infrastructure (i.e. loading zones) is in place. The digital and physical environments should reinforce each other.
  5. Wayfinding and signage can inform a user’s trip-making process before someone opens their app. Having clear and concise information that directs both passengers and drivers can help ensure greater compliance.
  6. Collaboration is key. Keeping various stakeholders (public agencies, the private sector, community and advocacy groups, merchants associations, etc.) aware and engaged in ongoing initiatives can help create better outcomes.

Technology is Not a Silver Bullet

We know that ridesharing is just one of the many competing uses of Valencia Street and technology alone will not solve the challenges of pickups and drop-offs: adequate infrastructure like protected bike lanes and loading zones will be necessary to achieving Vision Zero.

Looking ahead, we know there’s much to be done on this front. To start with, we are excited to partner with civic engagement leaders like Streetmix, whose participatory tools ensure that public spaces and urban design support safe streets. By bringing infrastructure designs like parking-protected bike lanes or ridesharing loading zones into Streetmix, planners can begin to have the tools to engage community groups on what they’d like to see their streets look like.

We’ve also begun partnering with Together for Safer Roads to support local bike and pedestrian advocacy groups and share Lyft performance data to help improve safety on some of the nation’s most dangerous street corridors. And finally, through our application to the SFMTA to become a permitted scooter operator in the City, we are committing $1 per day per scooter to support expansion of the City’s protected bike lane network. We know that this kind of infrastructure is critical to making safer streets for everyone.

Our work on Valencia Street is a continuation of our commitment to rebuild our transportation network and place people not cars at the center of our communities.

We know that this exciting work ahead cannot be done alone: we look forward to bringing this type of work to other cities around the country and to working together to achieve this vision.

Source : https://medium.com/@debsarctica/creating-a-safer-valencia-street-54c25a75b753

 

Pitchbook – Under the influence: How VCs are embracing next-gen advertising

@lilmiquela has 1.3 million followers on Instagram. Her bio reads that she’s 19 years old, lives in Los Angeles, and supports causes including Black Lives Matter and the Innocence Project. Oh, and she’s a robot.

Her Instagram feed, which at the time of writing has 245 posts, is her entire existence. She likes memes and posting selfies. One photo in particular shows her relaxing on a lawn chair, while another has her posing on a washer/dryer set. There’s even a snap of her being tattooed by similarly Insta-famous tattoo artist Dr. Woo.

But. She’s. Not. Real. @lilmiquela is a “virtual influencer” and the brainchild of a venture capital-backed company called Brud, which describes itself as a group of “problem solvers specializing in robotics, artificial intelligence and their applications to media businesses.”


In April, @lilmiquela and Brud brought in approximately $6 million in VC funding from Sequoia, BoxGroup, SV Angel and Ludlow Ventures. It’s unclear how that money will be spent; perhaps it will go toward building out more virtual influencer accounts, some “friends” for @lilmiquela.

But the real question is why is a surreal—literally—freckly teenage girl worth millions to Silicon Valley?

After all, Brud isn’t the first company to capitalize off the platform Instagram provides, nor is it the first to illustrate how much money one can make as an “influencer.” Former “Bachelor” and “Bachelorette” contestants, each member of the Kardashian family and pretty much every C-list actor has proven that. Brud, rather, has shown that you can manufacture that influence using technology. You don’t have to pay an actual person to post an Instagram story about how he or she just “looooooves” your products.

The team at Brud decides what @lilmiquela “likes,” what she will promote on her Instagram and how she will behave online. Earlier this year, @lilmiquela posted an Instagram story advertising her partnership with Prada, undoubtedly a lucrative deal that had her advertising for the brand just in time for fashion week in February. It appeared to be one of the first official brand partnerships advertised on her feed.

Brud is hacking influencer marketing, which has already disrupted traditional advertising streams in recent years. Influencer marketing is a new opportunity stemming from the rise of social media; it has allowed skillful bloggers, who have themselves become valuable media properties and brand assets, to make a living off social media posts. This is mostly a result of the successes of social media platforms like Twitter and Facebook, though Instagram is at the center of the influencer movement specifically.

Venture capital investors, of course, were backers of all three of those platforms in their nascent days. Now, VCs are investing in a new generation of startups vying to capitalize on the innovative form of narrative advertising that is influencer marketing.

The influencer economy

Let’s go over the basics. What’s an influencer? It’s basically the 2018 version of that really cool person in your class at school. Typically, it’s someone who posts frequently online, has a large following and likely also has strong engagement rates, meaning people tend to “like” and comment on their content frequently. Most importantly, influencers can have an impact on their followers’ purchasing decisions, whether that be because of their fame, knowledge of a specific industry or product, job title or follower count.

The influencer economy truly began with the birth of the blogosphere during the dot-com boom, but the invention of sharing apps like Instagram created the phenomenon as we know it today. The app officially launched in the fall of 2010; less than two years later, Facebook, which was about eight years old at the time, spent $1 billion to acquire it. What may have seemed like a ludicrous deal in 2012—Instagram only had 13 employees at the time and had raised about $57 million in VC funding—has proven to be Facebook’s most crucial and lucrative acquisition ever. Not to mention it was a goddamned steal.

Last month, Facebook reported its most disappointing earnings to date, an announcement that resulted in a major stock plunge. Instagram, on the other hand, continues to boom, with more than 1 billion users on its platform. It’s driving a large part of Facebook’s advertising profits. Wells Fargo analyst Ken Sena reportedly said the photo-sharing app could contribute $20 billion to Facebook’s revenue by 2020, or roughly a quarter of the social media giant’s total revenue.

Why? Because advertisers love Instagram. They are expected to spend $1.6 billion on Instagram advertising in 2018, a number that could grow to as much as $5 billion over the next few years, per MediaKix. If you’re not an avid Instagram user and you’ve found yourself wondering, “How could a photo-sharing app bring in that kind of money?,” let me throw some mind-boggling stats your way.

Kylie Jenner, the youngest member of the Kardashian family, can earn as much as $1 million per Instagram post. To repeat, she can make $1 million by posting one photo to her Instagram feed with a hashtag or brief product description. For the most part, she uses her feed to promote her own business, Kylie Cosmetics. The company was recently valued at around $800 million and Jenner herself is expected to become the youngest self-made billionaire ever, according to a recent viral Forbes profile, because of the success of her business and her social media fame. Jenner, of course, posted a photo of the Forbes cover story to her Instagram to celebrate this achievement:


She’s not the only one raking in Instagram cash. There are a lot of users leveraging the influencer economy to supplement their income.

Vine star Cameron Dallas, who also has his own Netflix show for some reason, reportedly earns some $25,000 per post. Indian cricket team captain Virat Kohli makes some $120,000. Celebrity chef Gordon Ramsay can earn roughly $5,500 for a post. And Logan Paul, the controversial YouTube star, can bring in $17,000 each time he grams. This is all according to social media tool provider Hopper’s Instagram Rich List, which ranks Insta users by how much they can purportedly bring in. Every person on the list is considered an influencer.

The VCs behind that IG ad

The first VC to leap entirely into the influencer economy was Benjamin Grubbs, the former global director of top creator partnerships at YouTube—a mouthful of a title that basically means Grubbs was in charge of the team that oversaw the growth of the most popular YouTubers. After six years at YouTube, including a stint at its parent company Google, Grubbs stepped down to launch a venture capital fund called Next 10 Ventures.

Next 10 Ventures closed its debut vehicle in May, a $50 million fund intended to back businesses in the creator economy. While other venture capitalists have closed select deals for startups in the influencer space, Next 10 raised a sizable amount of cash to bet solely on people whose living relies on platforms like YouTube and Instagram.

“Over the past five years, I have seen firsthand the immense growth of the Creator economy in terms of reach, consumer engagement, and commercialization,” Grubbs wrote in a statement announcing the fund. “We forecast the global creator economy excluding China to reach $23 billion this year, driven by tens of thousands of creators who make a living on digital video and social platforms. This scale affords our company ample opportunity to build assets that produce meaningful value in the years ahead.”

It’s unclear which, if any, startups Next 10 has backed since it wrapped its initial fund. A handful of startups in the space, however, have raised funding in the last year.

Brud, the developers of @lilmiquela, brought in their reported $6 million financing in April, of course. That round was followed by 21 Buttons’ $17 million round led by Idinvest Partners. The following month, Octoly brought in a $10 million Series A for its platform, which helps influencers receive free products in exchange for reviews. HavasOtium and Twin Partners participated in that round.

Several other startups, including Lumanu, which has created software that helps influencers reach larger audiences, and Victorious, a developer of apps that target specific fandoms, have also raised VC recently. Meanwhile, two companies focused on influencer marketing have exited. Viacom picked up WHOSAY, which works with brands to craft campaign strategies and produce content; IZEA, the provider of a digital marketplace that connects brands with influencers, agreed to acquire TapInfluence, which plans and executes influencer marketing campaigns.

And these are just the early adopters. Given the stats shared above, I’d expect a whole lot more entrepreneurs to enter the space in years to come.

The bottom line is that influencers and influencer marketing have created an incredibly powerful tool that’s poised to disrupt the marketing and advertising industries, much like Craigslist disrupted the classified ad business and Airbnb changed the way we think about hotels.

VCs, of course, will follow the money. And as we’ve learned from Kylie Jenner, social media influence can be quite profitable.

Perhaps the real question is this: Will @lilmiquela make 2019’s Instagram Rich List? Time will tell.

https://pitchbook.com/news/articles/under-the-influence-how-vcs-are-embracing-next-gen-advertising

NFX – Social Networks Were The Last 10 Years. Market-Networks Will Be The Next 10

Most people didn’t notice last month when a 35-person company in San Francisco called HoneyBook* announced a $22 million Series B.

What was unusual about the deal is that nearly all the best-known Silicon Valley VCs competed for it. That’s because HoneyBook is a prime example of an important new category of digital company that combines the best elements of networks like Facebook with marketplaces like Airbnb — what we call a market-network.

Market-networks will produce a new class of unicorn companies and impact how millions of service professionals will work and earn their living.

 

What Is A Market-Network?

“Marketplaces” provide transactions among multiple buyers and multiple sellers — like Poshmark*, eBay, Uber, Patreon*, and LendingClub.

“Networks” provide profiles that project a person’s identity and then let them communicate in a 360-degree pattern with other people in the network. Think Facebook, Twitter, GoodReads*, Meerkat*, and LinkedIn.

What’s unique about market-networks is that they:

  • Combine the main elements of both networks and marketplaces
  • Use SaaS workflow software to focus action around longer-term projects, not just a quick transaction
  • Promote the service provider as a differentiated individual, helping build long-term relationships

market network three rings

An example will help: let’s go back to HoneyBook, a market-network for the events industry.

An event planner builds a profile on HoneyBook.com. That profile serves as her professional home on the Web. She uses the HoneyBook SaaS workflow to send self-branded proposals to clients and sign contracts digitally.

She then connects the other professionals she works with like florists and photographers to that project. They also get profiles on HoneyBook and everyone can team up to service a client, send each other proposals, sign contracts and get paid by everyone else.
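The many-to-many pattern HoneyBook enables can be sketched as a toy data model: profiles join a shared project, and any member can transact with any other. All names and fields here are invented for illustration; this is not HoneyBook's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    name: str
    role: str  # e.g. "event planner", "florist"

@dataclass
class Project:
    client: str
    members: list = field(default_factory=list)       # many professionals per project
    transactions: list = field(default_factory=list)  # (payer, payee, amount)

    def add_member(self, profile):
        self.members.append(profile)

    def pay(self, payer, payee, amount):
        # Any member can transact with any other member: the 360-degree pattern
        assert payer in self.members and payee in self.members
        self.transactions.append((payer.name, payee.name, amount))

# A planner teams up with other professionals to service one client
planner = Profile("Ada", "event planner")
florist = Profile("Flo", "florist")
photographer = Profile("Pat", "photographer")

wedding = Project(client="Client X")
for p in (planner, florist, photographer):
    wedding.add_member(p)

wedding.pay(planner, florist, 1200)      # planner subcontracts the florist
wedding.pay(planner, photographer, 800)  # and the photographer
print(len(wedding.transactions))  # 2
```

The point of the sketch is that transactions live on the shared project, not on a single buyer-seller pair, which is what distinguishes a market-network from a plain two-sided marketplace.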

Market networks AngelList HoneyBook

 

This many-to-many transaction pattern is key. HoneyBook is an N-sided marketplace — transactions happen in a 360-degree pattern like a network, but people come with transacting in mind. That makes HoneyBook both a marketplace and a network.

A market-network often starts by enhancing a network of professionals that exists offline today. Many of them have been transacting with each other for years using fax, checks, overnight packages, and phone calls.

By moving these connections and transactions into software, a market-network makes it significantly easier for professionals to operate their businesses and clients to get better service.

We’ve Seen This Before

AngelList* is also a market-network. I don’t know if it was the first, but Naval Ravikant and Babak Nivi deserve a lot of credit for pioneering the model in 2010.

On AngelList, the pattern is similar. The CEO of the startup creates her own profile, then prompts her personal network of investors, employees, advisors and customers to build their own profiles. The CEO can then complete some or all of her fundraising paperwork through the AngelList SaaS workflow, and everyone can share deals with everyone else in the network, hire employees, and find customers in a 360-degree pattern.

In 2013, when I met Oz and Naama Alon, two of the founders of HoneyBook, they were building a beautiful network product — a photo-sharing app for weddings. We sat down and I walked them through the new idea of a market-network. They embraced it immediately, and have taken it to a whole new level – from the design and workflow to the profile customization and business model.

Houzz* is a third good example. Houzz connects homeowners with home improvement professionals and with products they can buy for their home. They have a product that is very nearly a market-network. The company raised $165M in its last round.

Joist is another good example. Based in Toronto, it provides a market-network for the home remodel and construction industry. Houzz is also in that space, with broader reach and a different approach. DotLoop in Cincinnati shows the same pattern for the residential real estate brokerage industry.

Looking at AngelList, Joist, DotLoop, Houzz and HoneyBook, the market-network pattern is visible.

Currier Market Network Map 1

Seven Attributes Of A Successful Market-Network

  1. Market-networks target more complex services

In the last six years, the tech industry has obsessed over on-demand labor marketplaces for quick transactions of simple services. Companies like Uber, Lyft*, Mechanical Turk, Thumbtack, DoorDash* and many others make it efficient to buy simple services whose quality is judged objectively. Their success is based on commodifying the people on both sides of the marketplace.

However, the highest-value services — like event planning and home remodels — are neither simple nor objectively judged. They are more involved and longer term. Market-networks are designed for these.

  2. People matter

With complex services, each client is unique and the professional they get matters. Would you hand over your wedding to just anyone? Your home remodel? The people on both sides of those equations are not interchangeable like they are with Lyft or Uber. Each person brings unique opinions, expertise, and relationships to the transaction. A market-network is designed to acknowledge that as a core tenet and provide a solution.

Currier Market Network Map 2

  3. Collaboration happens around a project

For most complex services, multiple professionals collaborate among themselves—and with a client—over a period of time. The SaaS at the center of market-networks focuses the action on a project that can take days or years to complete.

  4. They have unique profiles of the people involved

Pleasing profiles, with information unique to their context, give the people involved a reason to come back and interact. Such a profile captures part of their identity better than anywhere else on the Web.

  5. They help build long-term relationships

Market-networks bring a career’s worth of professional connections online and make them more useful. For years, social networks like LinkedIn and Facebook have helped build long-term relationships. However, until market-networks, they hadn’t been used for commerce and transactions.

  6. Referrals flow freely

In these industries, referrals are gold, for both client and service professional. The market-network software is designed to make referrals simple and more frequent.

  7. They increase transaction velocity and satisfaction

By putting the network of professionals and clients into software, the market-network increases transaction velocity for everyone. It increases the close rate on proposals and speeds up payment. The software also increases customer satisfaction scores, reduces miscommunication, and makes the work pleasing and beautiful. Never underestimate pleasing and beautiful.

Social Networks Were The Last 10 Years. Market-Networks Will Be The Next 10.

First we had communication networks like telephones and email. Then we had social networks like Facebook and LinkedIn. Now we have market networks like HoneyBook, AngelList, DotLoop, Houzz and Joist.

You can imagine a market-network for every industry where professionals are not interchangeable: law, travel, real estate, media production, architecture, investment banking, personal finance, construction, management consulting, and more. Each market-network will have different attributes that make it work in each vertical, but the principles will remain the same.

Over time, nearly all independent professionals and their clients will conduct business through the market-network of their industry. We’re just seeing the beginning of it now.

Market-networks will have a massive positive impact on how millions of people work and live, and how hundreds of millions of people buy better services.

I hope more entrepreneurs will set their sights on building these businesses. It’s time. They are hard products to get right, but the payoff is potentially massive.

https://www.nfx.com/post/10-years-about-market-networks

McKinsey – AI frontier: Analysis of more than 400 use cases across 19 industries and nine business functions

An analysis of more than 400 use cases across 19 industries and nine business functions highlights the broad use and significant economic potential of advanced AI techniques.

Artificial intelligence (AI) stands out as a transformational technology of our digital age—and its practical application throughout the economy is growing apace. For this briefing, Notes from the AI frontier: Insights from hundreds of use cases (PDF–446KB), we mapped both traditional analytics and newer “deep learning” techniques and the problems they can solve to more than 400 specific use cases in companies and organizations. Drawing on McKinsey Global Institute research and the applied experience with AI of McKinsey Analytics, we assess both the practical applications and the economic potential of advanced AI techniques across industries and business functions. Our findings highlight the substantial potential of applying deep learning techniques to use cases across the economy, but we also see some continuing limitations and obstacles—along with future opportunities as the technologies continue their advance. Ultimately, the value of AI is not to be found in the models themselves, but in companies’ abilities to harness them.

It is important to highlight that, even as we see economic potential in the use of AI techniques, the use of data must always take into account concerns including data security, privacy, and potential issues of bias.

  1. Mapping AI techniques to problem types
  2. Insights from use cases
  3. Sizing the potential value of AI
  4. The road to impact and value

 

Mapping AI techniques to problem types

As artificial intelligence technologies advance, so does the definition of which techniques constitute AI. For the purposes of this briefing, we use AI as shorthand for deep learning techniques that use artificial neural networks. We also examined other machine learning techniques and traditional analytics techniques (Exhibit 1).

AI analytics techniques

Neural networks are a subset of machine learning techniques. Essentially, they are AI systems based on simulating connected “neural units,” loosely modeling the way that neurons interact in the brain. Computational models inspired by neural connections have been studied since the 1940s and have returned to prominence as computer processing power has increased and large training data sets have been used to successfully analyze input data such as images, video, and speech. AI practitioners refer to these techniques as “deep learning,” since neural networks have many (“deep”) layers of simulated interconnected neurons.

We analyzed the applications and value of three neural network techniques:

  • Feed-forward neural networks: The simplest type of artificial neural network. In this architecture, information moves in only one direction, forward, from the input layer, through the “hidden” layers, to the output layer. There are no loops in the network. The first single-neuron network was proposed as early as 1958 by AI pioneer Frank Rosenblatt. While the idea is not new, advances in computing power, training algorithms, and available data have led to higher levels of performance than previously possible.
  • Recurrent neural networks (RNNs): Artificial neural networks whose connections between neurons include loops, well-suited for processing sequences of inputs. In November 2016, Oxford University researchers reported that a system based on recurrent neural networks (and convolutional neural networks) had achieved 95 percent accuracy in reading lips, outperforming experienced human lip readers, who tested at 52 percent accuracy.
  • Convolutional neural networks (CNNs): Artificial neural networks in which the connections between neural layers are inspired by the organization of the animal visual cortex, the portion of the brain that processes images, well suited for perceptual tasks.
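As a concrete illustration of the feed-forward idea in the first bullet, here is a minimal forward pass in Python. The layer sizes, random weights, and ReLU activation are arbitrary choices for the sketch, not anything from the briefing.

```python
import numpy as np

def relu(x):
    # Simple non-linearity applied after each hidden layer
    return np.maximum(0, x)

def forward(x, weights, biases):
    # Information moves in only one direction: input -> hidden layers -> output
    activation = x
    for w, b in zip(weights[:-1], biases[:-1]):
        activation = relu(activation @ w + b)
    # Final (output) layer, left linear here
    return activation @ weights[-1] + biases[-1]

rng = np.random.default_rng(0)
sizes = [4, 8, 8, 2]  # input layer, two hidden layers, output layer
weights = [rng.normal(size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

output = forward(rng.normal(size=4), weights, biases)
print(output.shape)  # (2,)
```

Training would adjust `weights` and `biases` against labeled examples; the forward pass itself, with no loops in the network, is what distinguishes this architecture from the recurrent networks described next.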

For our use cases, we also considered two other techniques—generative adversarial networks (GANs) and reinforcement learning—but did not include them in our potential value assessment of AI, since they remain nascent techniques that are not yet widely applied.

Generative adversarial networks (GANs) use two neural networks contesting each other in a zero-sum game framework (thus “adversarial”). GANs can learn to mimic various distributions of data (for example, text, speech, and images) and are therefore valuable in generating test datasets when these are not readily available.

Reinforcement learning is a subfield of machine learning in which systems are trained by receiving virtual “rewards” or “punishments”, essentially learning by trial and error. Google DeepMind has used reinforcement learning to develop systems that can play games, including video games and board games such as Go, better than human champions.
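The reward-driven trial and error described above can be sketched with tabular Q-learning on a toy corridor task. The environment, reward, and hyperparameters here are invented for illustration; DeepMind's game-playing systems use far more sophisticated deep reinforcement learning.

```python
import random

random.seed(0)
N_STATES = 5  # corridor cells 0..4; the "reward" waits at the right end
q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-values per state for actions: 0 = left, 1 = right

alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit what was learned, occasionally explore
        if random.random() < epsilon:
            action = random.choice([0, 1])
        else:
            action = 0 if q[state][0] > q[state][1] else 1
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # The virtual-reward update: learn from trial and error
        q[state][action] += alpha * (
            reward + gamma * max(q[next_state]) - q[state][action]
        )
        state = next_state

# After training, the greedy policy should move right from every cell
print([0 if a > b else 1 for a, b in q[:-1]])
```

No one tells the agent the corridor's layout; the policy emerges purely from the rewards received, which is the essence of the technique.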

 


Insights from use cases

We collated and analyzed more than 400 use cases across 19 industries and nine business functions. They provided insight into the areas within specific sectors where deep neural networks can potentially create the most value, the incremental lift that these neural networks can generate compared with traditional analytics (Exhibit 2), and the voracious data requirements—in terms of volume, variety, and velocity—that must be met for this potential to be realized. Our library of use cases, while extensive, is not exhaustive, and may overstate or understate the potential for certain sectors. We will continue refining and adding to it.

Advanced deep learning AI techniques can be applied across industries

Examples of where AI can be used to improve the performance of existing use cases include:

  • Predictive maintenance: the power of machine learning to detect anomalies. Deep learning’s capacity to analyze very large amounts of high-dimensional data can take existing preventive maintenance systems to a new level. By layering in additional data, such as audio and image data from other sensors—including relatively cheap ones such as microphones and cameras—neural networks can enhance and possibly replace more traditional methods. AI’s ability to predict failures and allow planned interventions can be used to reduce downtime and operating costs while improving production yield. For example, AI can extend the life of a cargo plane beyond what is possible using traditional analytic techniques by combining plane model data, maintenance history, IoT sensor data such as anomaly detection on engine vibration data, and images and video of engine condition.
  • AI-driven logistics optimization can reduce costs through real-time forecasts and behavioral coaching. Application of AI techniques such as continuous estimation to logistics can add substantial value across sectors. AI can optimize routing of delivery traffic, thereby improving fuel efficiency and reducing delivery times. One European trucking company has reduced fuel costs by 15 percent, for example, by using sensors that monitor both vehicle performance and driver behavior; drivers receive real-time coaching, including when to speed up or slow down, optimizing fuel consumption and reducing maintenance costs.
  • AI can be a valuable tool for customer service management and personalization challenges. Improved speech recognition in call center management and call routing as a result of the application of AI techniques allows a more seamless experience for customers—and more efficient processing. The capabilities go beyond words alone. For example, deep learning analysis of audio allows systems to assess a customer’s emotional tone; in the event a customer is responding badly to the system, the call can be rerouted automatically to human operators and managers. In other areas of marketing and sales, AI techniques can also have a significant impact. Combining customer demographic and past transaction data with social media monitoring can help generate individualized product recommendations. “Next product to buy” recommendations that target individual customers—as companies such as Amazon and Netflix have successfully been doing—can lead to a twofold increase in the rate of sales conversions.
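The “next product to buy” idea in the last bullet can be sketched as a simple item co-occurrence recommender. The purchase data and scoring rule below are invented for illustration; production systems at companies like Amazon and Netflix use far richer models.

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories (one set of products per customer)
baskets = [
    {"laptop", "mouse", "laptop_bag"},
    {"laptop", "mouse"},
    {"phone", "phone_case"},
    {"laptop", "laptop_bag"},
    {"phone", "phone_case", "charger"},
]

# Count how often each ordered pair of products is bought together
co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def next_product(owned, k=1):
    # Score candidate products by co-occurrence with everything the customer owns
    scores = Counter()
    for item in owned:
        for (a, b), n in co_counts.items():
            if a == item and b not in owned:
                scores[b] += n
    return [product for product, _ in scores.most_common(k)]

print(next_product({"phone"}))  # -> ['phone_case']
```

Even this toy version shows why transaction volume matters: the recommendations are only as good as the co-purchase data behind them, which is why platforms with frequent digital customer interactions benefit most.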

Two-thirds of the opportunities to use AI are in improving the performance of existing analytics use cases

In 69 percent of the use cases we studied, deep neural networks can be used to improve performance beyond that provided by other analytic techniques. Cases in which only neural networks can be used, which we refer to here as “greenfield” cases, constituted just 16 percent of the total. For the remaining 15 percent, artificial neural networks provided limited additional performance over other analytics techniques, among other reasons because of data limitations that made these cases unsuitable for deep learning (Exhibit 3).

AI improves the performance of existing analytics techniques

Greenfield AI solutions are prevalent in business areas such as customer service management, as well as among some industries where the data are rich and voluminous and at times integrate human reactions. Among industries, we found many greenfield use cases in healthcare, in particular. Some of these cases involve disease diagnosis and improved care, and rely on rich data sets incorporating image and video inputs, including from MRIs.

On average, our use cases suggest that modern deep learning AI techniques have the potential to provide a boost in additional value above and beyond traditional analytics techniques ranging from 30 percent to 128 percent, depending on industry.

In many of our use cases, however, traditional analytics and machine learning techniques continue to underpin a large percentage of the value creation potential in industries including insurance, pharmaceuticals and medical products, and telecommunications, with the potential of AI limited in certain contexts. In part this is due to the way data are used by these industries and to regulatory issues.

Data requirements for deep learning are substantially greater than for other analytics

Making effective use of neural networks in most applications requires large labeled training data sets alongside access to sufficient computing infrastructure. Furthermore, these deep learning techniques are particularly powerful in extracting patterns from complex, multidimensional data types such as images, video, and audio or speech.

Deep-learning methods require thousands of data records for models to become relatively good at classification tasks and, in some cases, millions for them to perform at the level of humans. By one estimate, a supervised deep-learning algorithm will generally achieve acceptable performance with around 5,000 labeled examples per category and will match or exceed human-level performance when trained with a data set containing at least 10 million labeled examples. In some cases where advanced analytics is currently used, so much data are available—millions or even billions of rows per data set—that AI usage is the most appropriate technique. However, if a threshold of data volume is not reached, AI may not add value to traditional analytics techniques.

These massive data sets can be difficult to obtain or create for many business use cases, and labeling remains a challenge. Most current AI models are trained through “supervised learning”, which requires humans to label and categorize the underlying data. However, promising new techniques are emerging to overcome these data bottlenecks, such as reinforcement learning, generative adversarial networks, transfer learning, and “one-shot learning,” which allows a trained AI model to learn about a subject based on a small number of real-world demonstrations or examples—and sometimes just one.

Organizations will have to adopt and implement strategies that enable them to collect and integrate data at scale. Even with large datasets, they will have to guard against “overfitting,” where a model too tightly matches the “noisy” or random features of the training set, resulting in a corresponding lack of accuracy in future performance, and against “underfitting,” where the model fails to capture all of the relevant features. Linking data across customer segments and channels, rather than allowing the data to languish in silos, is especially important to create value.
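Overfitting and underfitting are easy to demonstrate in a few lines: a high-degree polynomial fit to a small noisy sample matches the training points almost perfectly but generalizes poorly, while a too-simple model misses the signal entirely. This is an illustrative sketch with invented data, not an example from the briefing.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_data(n):
    x = rng.uniform(-1, 1, n)
    y = np.sin(3 * x) + rng.normal(scale=0.1, size=n)  # noisy underlying signal
    return x, y

x_train, y_train = make_data(12)   # small training set
x_test, y_test = make_data(200)    # held-out data for honest evaluation

def fit_and_score(degree):
    # Fit a polynomial of the given degree; return (train_error, test_error)
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

underfit = fit_and_score(1)   # too simple: misses the curve entirely
good = fit_and_score(3)       # roughly matches the underlying shape
overfit = fit_and_score(11)   # memorizes noise in the 12 training points

print(f"degree 1:  train={underfit[0]:.3f}  test={underfit[1]:.3f}")
print(f"degree 3:  train={good[0]:.3f}  test={good[1]:.3f}")
print(f"degree 11: train={overfit[0]:.3f}  test={overfit[1]:.3f}")
```

The degree-11 fit has near-zero training error yet the largest test error, which is exactly the train/validation gap organizations must monitor for.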

Realizing AI’s full potential requires a diverse range of data types including images, video, and audio

Neural AI techniques excel at analyzing image, video, and audio data types because of their complex, multidimensional nature, known by practitioners as “high dimensionality.” Neural networks are good at dealing with high dimensionality, as multiple layers in a network can learn to represent the many different features present in the data. Thus, for facial recognition, the first layer in the network could focus on raw pixels, the next on edges and lines, another on generic facial features, and the final layer might identify the face. Unlike previous generations of AI, which often required human expertise to do “feature engineering,” these neural network techniques are often able to learn to represent these features in their simulated neural networks as part of the training process.

Along with issues around the volume and variety of data, velocity is also a requirement: AI techniques require models to be retrained to match potential changing conditions, so the training data must be refreshed frequently. In one-third of the cases, the model needs to be refreshed at least monthly, and almost one in four cases requires a daily refresh; this is especially the case in marketing and sales and in supply chain management and manufacturing.

 


Sizing the potential value of AI

We estimate that the AI techniques we cite in this briefing together have the potential to create between $3.5 trillion and $5.8 trillion in value annually across nine business functions in 19 industries. This constitutes about 40 percent of the overall $9.5 trillion to $15.4 trillion annual impact that could potentially be enabled by all analytical techniques (Exhibit 4).

AI has the potential to create value across sectors

Per industry, we estimate that AI’s potential value amounts to between one and nine percent of 2016 revenue. The value as measured by percentage of industry revenue varies significantly among industries, depending on the specific applicable use cases, the availability of abundant and complex data, as well as on regulatory and other constraints.

These figures are not forecasts for a particular period, but they are indicative of the considerable potential for the global economy that advanced analytics represents.

From the use cases we have examined, we find that the greatest potential value impact from using AI lies both in top-line-oriented functions, such as marketing and sales, and in bottom-line-oriented operational functions, including supply chain management and manufacturing.

Consumer industries such as retail and high tech will tend to see more potential from marketing and sales AI applications because frequent and digital interactions between business and customers generate larger data sets for AI techniques to tap into. E-commerce platforms, in particular, stand to benefit. This is because of the ease with which these platforms collect customer information such as click data or time spent on a web page and can then customize promotions, prices, and products for each customer dynamically and in real time.

AI's impact is likely to be most substantial in M&S, supply-chain management, and manufacturing

Here is a snapshot of three sectors where we have seen AI’s impact: (Exhibit 5)

  • In retail, marketing and sales is the area with the most significant potential value from AI, and within that function, pricing and promotion and customer service management are the main value areas. Our use cases show that using customer data to personalize promotions, for example, including tailoring individual offers every day, can lead to a one to two percent increase in incremental sales for brick-and-mortar retailers alone.
  • In consumer goods, supply-chain management is the key function that could benefit from AI deployment. Among the examples in our use cases, we see how forecasting based on underlying causal drivers of demand rather than prior outcomes can improve forecasting accuracy by 10 to 20 percent, which translates into a potential five percent reduction in inventory costs and revenue increases of two to three percent.
  • In banking, particularly retail banking, AI has significant value potential in marketing and sales, much as it does in retail. However, because of the importance of assessing and managing risk in banking, for example for loan underwriting and fraud detection, AI has much higher value potential to improve performance in risk in the banking sector than in many other industries.

 


The road to impact and value

Artificial intelligence is attracting growing amounts of corporate investment, and as the technologies develop, the potential value that can be unlocked is likely to grow. So far, however, only about 20 percent of AI-aware companies are currently using one or more of its technologies in a core business process or at scale.

For all their promise, AI technologies have plenty of limitations that will need to be overcome. They include the onerous data requirements listed above, but also five other limitations:

  • First is the challenge of labeling training data, which often must be done manually and is necessary for supervised learning. Promising new techniques are emerging to address this challenge, such as reinforcement learning and in-stream supervision, in which data can be labeled in the course of natural usage.
  • Second is the difficulty of obtaining data sets that are sufficiently large and comprehensive to be used for training; for many business use cases, creating or obtaining such massive data sets can be difficult—for example, limited clinical-trial data to predict healthcare treatment outcomes more accurately.
  • Third is the difficulty of explaining in human terms results from large and complex models: why was a certain decision reached? Product certifications in healthcare and in the automotive and aerospace industries, for example, can be an obstacle; among other constraints, regulators often want rules and choice criteria to be clearly explainable.
  • Fourth is the generalizability of learning: AI models continue to have difficulties in carrying their experiences from one set of circumstances to another. That means companies must commit resources to train new models even for use cases that are similar to previous ones. Transfer learning—in which an AI model is trained to accomplish a certain task and then quickly applies that learning to a similar but distinct activity—is one promising response to this challenge.
  • The fifth limitation concerns the risk of bias in data and algorithms. This issue touches on concerns that are more social in nature and which could require broader steps to resolve, such as understanding how the processes used to collect training data can influence the behavior of models they are used to train. For example, unintended biases can be introduced when training data is not representative of the larger population to which an AI model is applied. Thus, facial recognition models trained on a population of faces corresponding to the demographics of AI developers could struggle when applied to populations with more diverse characteristics. A recent report on the malicious use of AI highlights a range of security threats, from sophisticated automation of hacking to hyper-personalized political disinformation campaigns.

Organizational challenges around technology, processes, and people can slow or impede AI adoption

Organizations planning significant deep learning efforts will need to consider a spectrum of options for how to do so, ranging from building a complete in-house AI capability to outsourcing these capabilities or leveraging AI-as-a-service offerings.

Based on the use cases they plan to build, companies will need to create a data plan that produces results and predictions, which can be fed either into designed interfaces for humans to act on or into transaction systems. Key data engineering challenges include data creation or acquisition, defining data ontology, and building appropriate data “pipes.” Given the significant computational requirements of deep learning, some organizations will maintain their own data centers, because of regulations or security concerns, but the capital expenditures could be considerable, particularly when using specialized hardware. Cloud vendors offer another option.
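The "data pipes" idea above can be made concrete with a small sketch. Everything here is hypothetical and illustrative (the field names, the ontology mapping, and the toy threshold model are all invented for the example): staged generator functions turn raw records into predictions, which are then routed either to a human-facing interface or to a transaction system, the two destinations the paragraph describes.

```python
def acquire():
    # Stand-in for data creation or acquisition (files, APIs, sensors).
    yield {"customer_id": 1, "spend": 120.0}
    yield {"customer_id": 2, "spend": 15.0}

def conform(records, ontology):
    # Map raw field names onto an agreed data ontology.
    for r in records:
        yield {ontology[k]: v for k, v in r.items() if k in ontology}

def predict(records, threshold=100.0):
    # Toy model standing in for a trained predictor: flag high spenders.
    for r in records:
        r["churn_risk"] = "high" if r["monthly_spend"] > threshold else "low"
        yield r

def route(records):
    # Predictions feed either a human review interface or a transaction system.
    for r in records:
        target = "review_queue" if r["churn_risk"] == "high" else "crm_update"
        yield target, r

ontology = {"customer_id": "id", "spend": "monthly_spend"}
results = list(route(predict(conform(acquire(), ontology))))
```

Chaining generators this way keeps each stage (acquisition, ontology mapping, prediction, routing) independently replaceable, which is one pragmatic answer to the data engineering challenges the report lists.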

Process can also become an impediment to successful adoption unless organizations are digitally mature. On the technical side, organizations will have to develop robust data maintenance and governance processes, and implement modern software disciplines such as Agile and DevOps. Even more challenging, in terms of scale, is overcoming the “last mile” problem of making sure the superior insights provided by AI are instantiated in the behavior of the people and processes of an enterprise.

On the people front, much of the construction and optimization of deep neural networks remains something of an art, requiring real experts to deliver step-change performance increases. Demand for these skills far outstrips supply at present; according to some estimates, fewer than 10,000 people have the skills necessary to tackle serious AI problems, and competition for them is fierce among the tech giants.

AI can seem an elusive business case

Where AI techniques and data are available and the value is clearly proven, organizations can already pursue the opportunity. In some areas, the techniques today may be mature and the data available, but the cost and complexity of deploying AI may simply not be worthwhile, given the value that could be generated. For example, an airline could use facial recognition and other biometric scanning technology to streamline aircraft boarding, but the value of doing so may not justify the cost and issues around privacy and personal identification.

Similarly, we can see potential cases where the data and the techniques are maturing, but the value is not yet clear. The most unpredictable scenario is where either the data (both the types and volume) or the techniques are simply too new and untested to know how much value they could unlock. For example, in healthcare, if AI were able to build on the superhuman precision we are already starting to see with X-ray analysis and broaden that to more accurate diagnoses and even automated medical procedures, the economic value could be very significant. At the same time, the complexities and costs of arriving at this frontier are also daunting. Among other issues, it would require flawless technical execution and resolving issues of malpractice insurance and other legal concerns.

Societal concerns and regulations can also constrain AI use. Regulatory constraints are especially prevalent in use cases related to personally identifiable information. This is particularly relevant at a time of growing public debate about the use and commercialization of individual data on some online platforms. Use and storage of personal information is especially sensitive in sectors such as banking, health care, and pharmaceutical and medical products, as well as in the public and social sector. In addition to addressing these issues, businesses and other users of data for AI will need to continue to evolve business models related to data use in order to address societies’ concerns. Furthermore, regulatory requirements and restrictions can differ from country to country, as well as from sector to sector.

Implications for stakeholders

As we have seen, it is a company’s ability to execute against AI models that creates value, rather than the models themselves. In this final section, we sketch out some of the high-level implications of our study of AI use cases for providers of AI technology, appliers of AI technology, and policy makers, who set the context for both.

  • For AI technology provider companies: Many companies that develop or provide AI to others have considerable strength in the technology itself and the data scientists needed to make it work, but they can lack a deep understanding of end markets. Understanding the value potential of AI across sectors and functions can help shape the portfolios of these AI technology companies. That said, they shouldn’t necessarily only prioritize the areas of highest potential value. Instead, they can combine that data with complementary analyses of the competitor landscape, of their own existing strengths, sector or function knowledge, and customer relationships, to shape their investment portfolios. On the technical side, the mapping of problem types and techniques to sectors and functions of potential value can guide a company with specific areas of expertise on where to focus.
  • For companies seeking to adopt AI: Many companies have started machine learning and AI experiments across their business. Before launching more pilots or testing solutions, it is useful to step back and take a holistic approach to the issue, moving to create a prioritized portfolio of initiatives across the enterprise, including AI and the wider analytical and digital techniques available. For a business leader to create an appropriate portfolio, it is important to develop an understanding of which use cases and domains have the potential to drive the most value for a company, as well as which AI and other analytical techniques will need to be deployed to capture that value. This portfolio ought to be informed not only by where the theoretical value can be captured, but by the question of how the techniques can be deployed at scale across the enterprise. How well analytical techniques scale is driven less by the techniques themselves and more by a company’s skills, capabilities, and data. Companies will need to consider efforts on the “first mile,” that is, how to acquire and organize data, as well as on the “last mile,” or how to integrate the output of AI models into the workflows of workers ranging from clinical-trial managers and sales-force managers to procurement officers. Previous MGI research suggests that AI leaders invest heavily in these first- and last-mile efforts.
  • For policy makers: Policy makers will need to strike a balance between supporting the development of AI technologies and managing any risks from bad actors. They have an interest in supporting broad adoption, since AI can lead to higher labor productivity, economic growth, and societal prosperity. Their tools include public investments in research and development as well as support for a variety of training programs, which can help nurture AI talent. On the issue of data, governments can spur the development of training data directly through open data initiatives. Opening up public-sector data can spur private-sector innovation, and setting common data standards can also help. AI is also raising new questions for policy makers to grapple with for which historical tools and frameworks may not be adequate, so some policy innovations will likely be needed to cope with these rapidly evolving technologies. But given the scale of the beneficial impact on business, the economy, and society, the goal should not be to constrain the adoption and application of AI, but rather to encourage its beneficial and safe use.

https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-applications-and-value-of-deep-learning
