At a recent KPMG Robotic Innovations event, futurist and friend Gerd Leonhard delivered a keynote titled “The Digital Transformation of Business and Society: Challenges and Opportunities by 2020”. I highly recommend viewing the video of his presentation. As Gerd describes, he is a futurist focused on foresight and observations, not on predicting the future. We are at a point in history where every company needs a Gerd Leonhard. For many of the reasons presented in the video, future thinking is rapidly growing in importance. As Gerd so rightly points out, we are still vastly underestimating the sheer velocity of change.
With regard to future thinking, Gerd used my future scenario slide to describe both the exponential and combinatorial nature of future scenarios — not only do we need to think exponentially, but we also need to think in a combinatorial manner. Gerd mentioned Tesla as a company that really knows how to do this.
He then described our current pivot point of exponential change: a point in history where humanity will change more in the next twenty years than in the previous 300 years. With that as a backdrop, he encouraged the audience to look five years into the future and spend 3 to 5% of their time focused on foresight. He quoted Peter Drucker (“In times of change the greatest danger is to act with yesterday’s logic”) and stated that leaders must shift from a focus on what is to a focus on what could be. Gerd added that “wait and see” means “wait and die” (love that, by the way). He urged leaders to focus on 2020 and build a plan to participate in that future, emphasizing that the question is no longer what-if but what-when. We are entering an era where the impossible is doable, and the headline for that era is: exponential, convergent, combinatorial, and inter-dependent — words that should be a key part of the leadership lexicon going forward. Here are some snapshots from his presentation:
Source: B. Joseph Pine II and James Gilmore: The Experience Economy
Gerd then summarized the session as follows:
The future is exponential, combinatorial, and interdependent: the sooner we can adjust our thinking (lateral), the better we will be at designing our future.
My take: Gerd hits on a key point. Leaders must think differently. There is very little in a leader’s collective experience that can guide them through the type of change ahead — it requires us all to think differently.
When looking at AI, consider trying IA first (intelligent assistance / augmentation).
My take: These considerations allow us to create the future in a way that avoids unintended consequences. Technology should be a supplement, not a replacement.
Efficiency and cost reduction based on automation, AI/IA and Robotization are good stories but not the final destination: we need to go beyond the 7-ations and inevitable abundance to create new value that cannot be easily automated.
My take: Future thinking is critical for us to be effective here. We have to have a sense as to where all of this is heading, if we are to effectively create new sources of value.
We won’t just need better algorithms — we also need stronger humarithms, i.e. values, ethics, standards, principles, and social contracts.
My take: Gerd is an evangelist for creating our future in a way that avoids hellish outcomes — and kudos to him for being that voice.
“The best way to predict the future is to create it” (Alan Kay).
My take: our context when we think about the future puts it years away, and that is just not the case anymore. What we think will take ten years is likely to happen in two. We can’t create the future if we don’t focus on it through an exponential lens.
Source: https://medium.com/@frankdiana/digital-transformation-of-business-and-society-5d9286e39dbf
A lot of people are — understandably so — very confused when it comes to innovation methodologies, frameworks, and techniques. Questions like “When should we use Design Thinking?”, “What is the purpose of a Design Sprint?”, “Is Lean Startup just for startups?”, “Where does Agile fit in?”, and “What happens after the <some methodology> phase?” are all very common.
When browsing the Internet for answers, one notices quickly that others too are struggling to understand how it all works together.
Gartner (as well as numerous others) tried to visualise how methodologies like Design Thinking, Lean, Design Sprint and Agile flow nicely from one to the next. Most of these visualisations have a number of nicely coloured and connected circles, but for me they seem to miss the mark. The place where one methodology flows into the next is very debatable, because there are too many similar techniques and there is just too much overlap.
It probably makes more sense to just look at Design Thinking, Lean, Design Sprint & Agile as a bunch of tools and techniques in one’s toolbox, rather than argue for one over the other, because they can all add value somewhere on the innovation spectrum.
Innovation initiatives can range from exploring an abstract problem space, to experimenting with a number of solutions, before continuously improving a very concrete solution in a specific market space.
An aspect which often seems to be omitted, is the business model maturity axis. For established products as well as adjacent ones (think McKinsey’s Horizon 1 and 2), the business models are often very well understood. For startups and disruptive innovations within an established business however, the business model will need to be validated through experiments.
Design Thinking really shines when we need to better understand the problem space and identify the early adopters. There are various flavors of Design Thinking, but they all broadly follow the double-diamond flow. Simplistically, the first diamond starts by diverging and gathering lots of insights through talking to our target stakeholders, followed by converging through clustering these insights and identifying key pain points, problems, or jobs to be done. The second diamond starts with a diverging exercise to ideate a large number of potential solutions before prototyping and testing the most promising ideas. Design Thinking is mainly focused on qualitative rather than quantitative insights.
The slight difference from Design Thinking is that the entrepreneur (or intrapreneur) often already has a good understanding of the problem space. Lean considers everything to be a hypothesis or assumption until validated… so even that good understanding of the problem space is just an assumption. Lean tends to start by specifying your assumptions on a customer-focused (lean) canvas and then prioritizing and validating them according to which poses the highest risk for the entire product. The process to validate assumptions is to create an experiment (build), test it (measure), and learn whether the hypothesis still stands. Lean uses qualitative insights early on but later forces you to define actionable quantitative data to measure how effectively the solution addresses the problem and whether the growth strategy is on track. The “get out of the building” phrase is often associated with Lean Startup, but the same principle of reaching out to customers obviously also applies to Design Thinking (…and Design Sprint…and Agile).
It appears that the Google Ventures-style Design Sprint method could have its roots in a technique described in the Lean UX book. The key strength of a Design Sprint is to share insights, ideate, prototype, and test a concept all in a five-day sprint. Given the short timeframe, Design Sprints only focus on part of the solution, but they are an excellent way to learn really quickly whether you are on the right track.
Just like dealing with the uncertainty of our problem, solution and market assumptions, agile development is a great way to cope with uncertainty in product development. No need to specify every detail of a product up-front, because here too there are plenty of assumptions and uncertainty. Agile is a great way to build-measure-learn and validate assumptions whilst creating a Minimum Viable Product in Lean Startup parlance. We should define and prioritize a backlog of value to be delivered and work in short sprints, delivering and testing the value as part of each sprint.
Probably not really the answer you were looking for, but there is no clear rule on when to start where. There is also no obvious handover point because there is just too much overlap, and this significant overlap could be the explanation of why some people claim methodology <x> is better than <y>.
Anyhow, most innovation methodologies can add great value, and it’s really up to the team to decide where to start and when to apply which methods and techniques. The common ground most can agree on is to avoid falling in love with your own solution and to listen to qualitative as well as quantitative customer feedback.
Source: https://medium.com/@geertwlclaes/when-which-design-thinking-lean-design-sprint-agile-a4614fa778b9
Former Google CEO Eric Schmidt has listed the three “big failures” in tech entrepreneurship around the world.
Schmidt outlined the failings in a speech he gave at the Centre for Entrepreneurs in London this week. He later expanded on his thoughts in an interview with former BBC News boss James Harding.
Below are the three mistakes he outlined, with quotes taken from both a draft of his speech seen by Business Insider, and comments he delivered on the night.
“Far too often, we invest mostly in people we already know, who are working in very narrow disciplines,” Schmidt wrote in his draft.
In his speech, Schmidt pegged this point closely to a need for diversity and inclusion. He said companies need to be open to bringing in people from other countries and backgrounds.
He said entrepreneurship won’t flourish if people are “going to one institution, hiring only those people, and only — if I can be blunt — only white males.”
During the Q&A, Schmidt specifically addressed the gender imbalance in the tech industry. He said there’s a reason to be optimistic about women’s representation in tech improving, predicting that tech’s gender imbalance will vanish in one generation.
“We frequently don’t build the best technology platforms to tackle big social challenges, because often there is no immediate promise of commercial return,” Schmidt wrote in his draft.
“There are a million e-commerce apps but not enough speciality platforms for safely sharing and analyzing data on homelessness, climate change or refugees.”
Schmidt omitted this mention of socially conscious tech from his final speech, but he did say that he sees a lot of innovation coming out of network platforms, which allow people to connect and pool data, because “the barrier to entry for these startups is very, very low.”
Finally, Schmidt wrote in his draft that tech startups don’t partner enough with other companies in the modern, hyper-connected world. “It’s impossible to think about any major challenge for society in a silo,” he wrote.
He said in his speech that tech firms have to be ready to partner “fairly early.” He gave the example of a startup that wants to build homecare robots.
“The market for homecare robots is going to be very, very large. The problem is that you need visual systems, and machine learning systems, and listening systems, and motor systems, and so forth. You’re not going to be able to do it with three people,” he said.
After detailing the three failures in tech entrepreneurship, Schmidt laid out what he views as the solution. He referred back to the Renaissance in Europe, saying people turned their hands to all sorts of disciplines, from science to art to business.
Source: https://www.businessinsider.com/eric-schmidt-3-big-failures-he-sees-in-tech-entrepreneurship-2018-11
Emotions are continually affecting our thought processes and decisions, below the level of our awareness. And the most common emotion of them all is the desire for pleasure and the avoidance of pain. Our thoughts almost inevitably revolve around this desire; we simply recoil from entertaining ideas that are unpleasant or painful to us. We imagine we are looking for the truth, or being realistic, when in fact we are holding on to ideas that bring a release from tension and soothe our egos, make us feel superior. This pleasure principle in thinking is the source of all of our mental biases. If you believe that you are somehow immune to any of the following biases, it is simply an example of the pleasure principle in action. Instead, it is best to search and see how they continually operate inside of you, as well as learn how to identify such irrationality in others.
These biases, by distorting reality, lead to the mistakes and ineffective decisions that plague our lives. Being aware of them, we can begin to counterbalance their effects.
I look at the evidence and arrive at my decisions through more or less rational processes.
To hold an idea and convince ourselves we arrived at it rationally, we go in search of evidence to support our view. What could be more objective or scientific? But because of the pleasure principle and its unconscious influence, we manage to find that evidence that confirms what we want to believe. This is known as confirmation bias.
We can see this at work in people’s plans, particularly those with high stakes. A plan is designed to lead to a positive, desired objective. If people considered the possible negative and positive consequences equally, they might find it hard to take any action. Inevitably they veer towards information that confirms the desired positive result, the rosy scenario, without realizing it. We also see this at work when people are supposedly asking for advice. This is the bane of most consultants. In the end, people want to hear their own ideas and preferences confirmed by an expert opinion. They will interpret what you say in light of what they want to hear; and if your advice runs counter to their desires, they will find some way to dismiss your opinion, your so-called expertise. The more powerful the person, the more they are subject to this form of the confirmation bias.
When investigating confirmation bias in the world take a look at theories that seem a little too good to be true. Statistics and studies are trotted out to prove them, which are not very difficult to find, once you are convinced of the rightness of your argument. On the Internet, it is easy to find studies that support both sides of an argument. In general, you should never accept the validity of people’s ideas because they have supplied “evidence.” Instead, examine the evidence yourself in the cold light of day, with as much skepticism as you can muster. Your first impulse should always be to find the evidence that disconfirms your most cherished beliefs and those of others. That is true science.
I believe in this idea so strongly. It must be true.
We hold on to an idea that is secretly pleasing to us, but deep inside we might have some doubts as to its truth and so we go an extra mile to convince ourselves — to believe in it with great vehemence, and to loudly contradict anyone who challenges us. How can our idea not be true if it brings out of us such energy to defend it, we tell ourselves? This bias is revealed even more clearly in our relationship to leaders — if they express an opinion with heated words and gestures, colorful metaphors and entertaining anecdotes, and a deep well of conviction, it must mean they have examined the idea carefully and therefore express it with such certainty. Those on the other hand who express nuances, whose tone is more hesitant, reveal weakness and self-doubt. They are probably lying, or so we think. This bias makes us prone to salesmen and demagogues who display conviction as a way to convince and deceive. They know that people are hungry for entertainment, so they cloak their half-truths with dramatic effects.
I understand the people I deal with; I see them just as they are.
We do not see people as they are, but as they appear to us. And these appearances are usually misleading. First, people have trained themselves in social situations to present the front that is appropriate and that will be judged positively. They seem to be in favor of the noblest causes, always presenting themselves as hardworking and conscientious. We take these masks for reality. Second, we are prone to fall for the halo effect — when we see certain negative or positive qualities in a person (social awkwardness, intelligence), other positive or negative qualities are implied that fit with this. People who are good looking generally seem more trustworthy, particularly politicians. If a person is successful, we imagine they are probably also ethical, conscientious and deserving of their good fortune. This obscures the fact that many people who get ahead have done so through less-than-moral actions, which they cleverly disguise from view.
My ideas are my own. I do not listen to the group. I am not a conformist.
We are social animals by nature. The feeling of isolation, of difference from the group, is depressing and terrifying. We experience tremendous relief to find others who think the same way as we do. In fact, we are motivated to take up ideas and opinions because they bring us this relief. We are unaware of this pull and so imagine we have come to certain ideas completely on our own. Look at people who support one party or the other, one ideology — a noticeable orthodoxy or correctness prevails, without anyone saying anything or applying overt pressure. If someone is on the right or the left, their opinions will almost always follow the same direction on dozens of issues, as if by magic, and yet few would ever admit this influence on their thought patterns.
I learn from my experience and mistakes.
Mistakes and failures elicit the need to explain. We want to learn the lesson and not repeat the experience. But in truth, we do not like to look too closely at what we did; our introspection is limited. Our natural response is to blame others, circumstances, or a momentary lapse of judgment. The reason for this bias is that it is often too painful to look at our mistakes. It calls into question our feelings of superiority. It pokes at our ego. We go through the motions, pretending to reflect on what we did. But with the passage of time, the pleasure principle rises and we forget what small part in the mistake we ascribed to ourselves. Desire and emotion will blind us yet again, and we will repeat exactly the same mistake and go through the same mild recriminating process, followed by forgetfulness, until we die. If people truly learned from their experience, we would find few mistakes in the world, and career paths that ascend ever upward.
I’m different. I’m more rational than others, more ethical as well.
Few would say this to people in conversation. It sounds arrogant. But in numerous opinion polls and studies, when asked to compare themselves to others, people generally express a variation of this. It’s the equivalent of an optical illusion — we cannot seem to see our faults and irrationalities, only those of others. So, for instance, we’ll easily believe that those in the other political party do not come to their opinions based on rational principles, but those on our side have done so. On the ethical front, few will ever admit that they have resorted to deception or manipulation in their work, or have been clever and strategic in their career advancement. Everything they’ve got, or so they think, comes from natural talent and hard work. But with other people, we are quick to ascribe to them all kinds of Machiavellian tactics. This allows us to justify whatever we do, no matter the results.
We feel a tremendous pull to imagine ourselves as rational, decent, and ethical. These are qualities highly promoted in the culture. To show signs otherwise is to risk great disapproval. If all of this were true — if people were rational and morally superior — the world would be suffused with goodness and peace. We know, however, the reality, and so some people, perhaps all of us, are merely deceiving ourselves. Rationality and ethical qualities must be achieved through awareness and effort. They do not come naturally. They come through a maturation process.
Source: https://medium.com/the-mission/6-biases-holding-you-back-from-rational-thinking-f2eddd35fd0f
Building a rocket is hard. Each component requires careful thought and rigorous testing, with safety and reliability at the core of the designs. Rocket scientists and engineers come together to design everything from the navigation course to control systems, engines and landing gear. Once all the pieces are assembled and the systems are tested, we can put astronauts on board with confidence that things will go well.
If artificial intelligence (AI) is a rocket, then we will all have tickets on board some day. And, as in rockets, safety is a crucial part of building AI systems. Guaranteeing safety requires carefully designing a system from the ground up to ensure the various components work together as intended, while developing all the instruments necessary to oversee the successful operation of the system after deployment.
At a high level, safety research at DeepMind focuses on designing systems that reliably function as intended while discovering and mitigating possible near-term and long-term risks. Technical AI safety is a relatively nascent but rapidly evolving field, with its contents ranging from high-level and theoretical to empirical and concrete. The goal of this blog is to contribute to the development of the field and encourage substantive engagement with the technical ideas discussed, and in doing so, advance our collective understanding of AI safety.
In this inaugural post, we discuss three areas of technical AI safety: specification, robustness, and assurance. Future posts will broadly fit within the framework outlined here. While our views will inevitably evolve over time, we feel these three areas cover a sufficiently wide spectrum to provide a useful categorisation for ongoing and future research.
You may be familiar with the story of King Midas and the golden touch. In one rendition, the Greek god Dionysus promised Midas any reward he wished for, as a sign of gratitude for the king having gone out of his way to show hospitality and graciousness to a friend of Dionysus. In response, Midas asked that anything he touched be turned into gold. He was overjoyed with this new power: an oak twig, a stone, and roses in the garden all turned to gold at his touch. But he soon discovered the folly of his wish: even food and drink turned to gold in his hands. In some versions of the story, even his daughter fell victim to the blessing that turned out to be a curse.
This story illustrates the problem of specification: how do we state what we want? The challenge of specification is to ensure that an AI system is incentivised to act in accordance with the designer’s true wishes, rather than optimising for a poorly-specified goal or the wrong goal altogether. Formally, we distinguish between three types of specifications: the ideal specification (what we truly wish the system to do), the design specification (the objective we actually write down, such as a reward function), and the revealed specification (what the system’s behaviour shows it is actually optimising).
A specification problem arises when there is a mismatch between the ideal specification and the revealed specification, that is, when the AI system doesn’t do what we’d like it to do. Research into the specification problem of technical AI safety asks the question: how do we design more principled and general objective functions, and help agents figure out when goals are misspecified? Problems that create a mismatch between the ideal and design specifications are in the design subcategory above, while problems that create a mismatch between the design and revealed specifications are in the emergent subcategory.
For instance, in our AI Safety Gridworlds* paper, we gave agents a reward function to optimise, but then evaluated their actual behaviour on a “safety performance function” that was hidden from the agents. This setup models the distinction above: the safety performance function is the ideal specification, which was imperfectly articulated as a reward function (the design specification), and then implemented by the agents, producing a specification that is implicitly revealed through their resulting policy.
*N.B.: in our AI Safety Gridworlds paper, we provided a different definition of specification and robustness problems from the one presented in this post.
As another example, consider the boat-racing game CoastRunners analysed by our colleagues at OpenAI (see Figure above from “Faulty Reward Functions in the Wild”). For most of us, the game’s goal is to finish a lap quickly and ahead of other players — this is our ideal specification. However, translating this goal into a precise reward function is difficult, so instead, CoastRunners rewards players (design specification) for hitting targets laid out along the route. Training an agent to play the game via reinforcement learning leads to a surprising behaviour: the agent drives the boat in circles to capture re-populating targets while repeatedly crashing and catching fire rather than finishing the race. From this behaviour we infer (revealed specification) that something is wrong with the game’s balance between the short-circuit’s rewards and the full lap rewards. There are many more examples like this of AI systems finding loopholes in their objective specification.
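A minimal sketch can make this gap concrete. The following toy simulation is purely illustrative (the reward values, step counts, and policy names are invented, not taken from CoastRunners or the Gridworlds code): one policy maximises the target-hitting reward (the design specification), while a hidden lap count (the ideal specification) scores the behaviour we actually care about.

```python
def run_policy(chase_targets, steps=100):
    """Simulate a toy boat policy; return (target_reward, laps_completed)."""
    target_reward, laps, position = 0, 0, 0
    for _ in range(steps):
        if chase_targets:
            target_reward += 3       # circling respawning targets pays well...
        else:
            position += 1
            if position % 10 == 0:   # ...but only forward progress finishes laps
                laps += 1
                target_reward += 1   # small reward for crossing the line
    return target_reward, laps

circling_reward, circling_laps = run_policy(chase_targets=True)
racing_reward, racing_laps = run_policy(chase_targets=False)
print(circling_reward, circling_laps)  # 300 0: wins on the proxy, never finishes
print(racing_reward, racing_laps)      # 10 10: finishes laps, loses on the proxy
```

A reward-maximising learner would converge on the circling policy; only the hidden lap count reveals that the design specification is broken.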
There is an inherent level of risk, unpredictability, and volatility in the real-world settings where AI systems operate. AI systems must be robust to unforeseen events and to adversarial attacks that can damage or manipulate them. Research on the robustness of AI systems focuses on ensuring that our agents stay within safe limits, regardless of the conditions encountered. This can be achieved by avoiding risks (prevention) or by self-stabilisation and graceful degradation (recovery). Safety problems resulting from distributional shift, adversarial inputs, and unsafe exploration can be classified as robustness problems.
To illustrate the challenge of addressing distributional shift, consider a household cleaning robot that typically cleans a petless home. The robot is then deployed to clean a pet-friendly office, and encounters a pet during its cleaning operation. The robot, never having seen a pet before, proceeds to wash the pets with soap, leading to undesirable outcomes (Amodei and Olah et al., 2016). This is an example of a robustness problem that can result when the data distribution encountered at test time shifts from the distribution encountered during training.
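One simple mitigation is to flag test-time inputs that fall outside the range of the training data and defer rather than act. The sketch below is purely hypothetical (the feature names and numbers are invented) and is only meant to illustrate the idea of detecting distributional shift, not a production technique:

```python
def train_bounds(training_features):
    """Record per-feature min/max over the training data."""
    lows = [min(col) for col in zip(*training_features)]
    highs = [max(col) for col in zip(*training_features)]
    return lows, highs

def in_distribution(x, lows, highs, margin=0.1):
    """True if every feature of x lies near the training range."""
    return all(lo - margin <= v <= hi + margin
               for v, lo, hi in zip(x, lows, highs))

# Features observed in the pet-free home: (object_size, movement).
training = [(0.5, 0.0), (1.2, 0.0), (0.8, 0.0)]
lows, highs = train_bounds(training)

sofa = (0.9, 0.0)  # resembles the training data: safe to proceed
pet = (0.4, 0.9)   # it moves! far outside anything seen in training
print(in_distribution(sofa, lows, highs))  # True  -> clean it
print(in_distribution(pet, lows, highs))   # False -> stop and defer to a human
```

The point is not the bounds check itself but the behaviour on a negative result: deferring beats confidently washing the pet.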
Adversarial inputs are a specific case of distributional shift in which inputs are specially crafted to trick an AI system.
Unsafe exploration can result when a system seeks to maximise its performance and attain goals without guarantees that safety constraints will hold as it learns and explores its environment. An example would be the household cleaning robot putting a wet mop in an electrical outlet while learning optimal mopping strategies (García and Fernández, 2015; Amodei and Olah et al., 2016).
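One common way to frame a mitigation is a hard safety layer that vetoes known-unsafe actions before they are executed, so the learner can explore freely inside a safe set. The sketch below is hypothetical (the action names are invented) and only illustrates the shape of such a constraint:

```python
import random

SAFE_ACTIONS = {"mop_left", "mop_right", "rinse"}
UNSAFE_ACTIONS = {"mop_outlet"}  # e.g. wet mop near an electrical outlet

def explore(n_steps, rng):
    """Random exploration with a safety filter between proposal and execution."""
    executed = []
    for _ in range(n_steps):
        proposal = rng.choice(sorted(SAFE_ACTIONS | UNSAFE_ACTIONS))
        if proposal in SAFE_ACTIONS:  # the safety layer vetoes everything else
            executed.append(proposal)
    return executed

history = explore(1000, random.Random(0))
print("mop_outlet" in history)  # False: the constraint holds throughout learning
```

The filter sits outside the learner, so the guarantee does not depend on what the agent has or has not yet learned.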
Although careful safety engineering can rule out many safety risks, it is difficult to get everything right from the start. Once AI systems are deployed, we need tools to continuously monitor and adjust them. Our last category, assurance, addresses these problems from two angles: monitoring and enforcing.
Monitoring comprises all the methods for inspecting systems in order to analyse and predict their behaviour, both via human inspection (of summary statistics) and automated inspection (to sweep through vast amounts of activity records). Enforcement, on the other hand, involves designing mechanisms for controlling and restricting the behaviour of systems. Problems such as interpretability and interruptibility fall under monitoring and enforcement respectively.
AI systems are unlike us, both in their embodiments and in their way of processing data. This creates problems of interpretability; well-designed measurement tools and protocols allow the assessment of the quality of the decisions made by an AI system (Doshi-Velez and Kim, 2017). For instance, a medical AI system would ideally issue a diagnosis together with an explanation of how it reached the conclusion, so that doctors can inspect the reasoning process before approval (De Fauw et al., 2018). Furthermore, to understand more complex AI systems we might even employ automated methods for constructing models of behaviour using Machine theory of mind (Rabinowitz et al., 2018).
Finally, we want to be able to turn off an AI system whenever necessary. This is the problem of interruptibility. Designing a reliable off-switch is very challenging: for instance, because a reward-maximising AI system typically has strong incentives to prevent this from happening (Hadfield-Menell et al., 2017); and because such interruptions, especially when they are frequent, end up changing the original task, leading the AI system to draw the wrong conclusions from experience (Orseau and Armstrong, 2016).
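The flavour of the Orseau and Armstrong proposal can be sketched in a few lines: when a human interrupts, the agent's action is overridden, and the interrupted step is hidden from the learner so the agent has no incentive to resist (or seek out) interruptions. This is a toy illustration under invented names, not the paper's actual formalism:

```python
def run_episode(actions, interrupted_steps):
    """Execute a plan, overriding interrupted steps and hiding them from learning."""
    executed = []    # what actually happens in the world
    experience = []  # transitions the learner is allowed to update on
    for t, action in enumerate(actions):
        if t in interrupted_steps:
            executed.append("noop")  # override with a safe shutdown action...
            continue                 # ...and exclude the step from experience
        executed.append(action)
        experience.append((t, action))
    return executed, experience

executed, experience = run_episode(["a", "b", "c", "d"], interrupted_steps={2})
print(executed)    # ['a', 'b', 'noop', 'd']
print(experience)  # [(0, 'a'), (1, 'b'), (3, 'd')]: interrupted step excluded
```

Because the interrupted transition never reaches the learner, the agent updates as if the interruption had not happened, which is the core of the safe-interruptibility idea.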
We are building the foundations of a technology which will be used for many important applications in the future. It is worth bearing in mind that design decisions which are not safety-critical at the time of deployment can still have a large impact when the technology becomes widely used. Although convenient at the time, once these design choices have been irreversibly integrated into important systems the tradeoffs look different, and we may find they cause problems that are hard to fix without a complete redesign.
Two examples from the development of programming are the null pointer, which Tony Hoare refers to as his ‘billion-dollar mistake’, and the gets() routine in C. If early programming languages had been designed with security in mind, progress might have been slower, but computer security today would probably be in a much stronger position.
With careful thought and planning now, we can avoid building in analogous problems and vulnerabilities. We hope the categorisation outlined in this post will serve as a useful framework for methodically planning in this way. Our intention is to ensure that AI systems of the future are not just ‘hopefully safe’ but robustly, verifiably safe — because we built them that way!
We look forward to continuing to make exciting progress in these areas, in close collaboration with the broader AI research community, and we encourage individuals across disciplines to consider entering or contributing to the field of AI safety research.
Source: https://medium.com/@deepmindsafetyresearch/building-safe-artificial-intelligence-52f5f75058f1
NEA is one of the most well-known investors around, and the firm also takes the crown as the most active VC investor in Series A and B rounds in the US so far in 2018. Andreessen Horowitz, Accel and plenty of the other usual early-stage suspects are on the list, too.
Also included is a pair of names that have been in the news this year for backing away from the traditional VC model: Social Capital and SV Angel. The two are on the list thanks to deals completed earlier in the year.
Just how much are these prolific investors betting on Series A and Series B rounds? And at what valuation? We’ve used data from the PitchBook Platform to highlight a collection of the top venture capital investors in the US (excluding accelerators) and provide information about the Series A and B rounds they’ve joined so far this year.
San Francisco is known for its famous neighborhoods and commercial corridors — and the Mission District’s Valencia Street takes it to the next level. For Lyft, Valencia Street is filled with top destinations that our passengers frequent: trendy cafes, hipster clothing stores, bars, and live music.
To put it simply, there’s a lot happening along Valencia Street. Besides the foot traffic, many of its restaurants are popular choices on the city’s growing network of courier services, providing on-demand food delivery via cars and bicycles. Residents of the Mission are increasingly relying on FedEx, Amazon, and UPS for deliveries. Merchants welcome commercial trucks to deliver their goods. In light of a recent road diet on Mission Street to create much-needed dedicated lanes to improve MUNI bus service, many vehicles have been re-routed to parallel streets like Valencia. And of course, Valencia Street is also one of the most heavily trafficked bicycling corridors in the City, with 2,100 cyclists commuting along Valencia Street each day.
With so many different users of the street and a street design that has largely remained unchanged, it’s no surprise that the corridor has experienced growing safety concerns — particularly around increased traffic, double parking, and bicycle dooring.
Valencia Street is part of the City’s Vision Zero High-Injury Network, the 13% of city streets that account for 75% of severe and fatal collisions. From January 2012 to December 2016, there were 204 people injured and 268 reported collisions along the corridor, of which one was fatal.
As the street has become more popular and the need to act has become more apparent, community organizers have played an important role in rallying City forces to commit to a redesign. The San Francisco Bicycle Coalition has been a steadfast advocate for the cycling community’s needs: going back to the 1990s when they helped bring painted bike lanes to the corridor, to today’s efforts to upgrade to a protected bike lane. The People Protected Bike Lane Protests have helped catalyze the urgency of finding a solution. And elected officials, including Supervisor Ronen and former Supervisor Sheehy, have been vocal about the need for change.
Earlier this spring, encouraged by the SFMTA’s first steps in bringing new, much-needed infrastructure to the corridor, we began conducting an experiment to leverage our technology as part of the solution. As we continue to partner closely with the SFMTA as they work on a new design for the street, we want to report back what we’ve learned.
As we began our pilot, we set out with the following goals:
To meet these goals, we first examined Lyft ride activity in the 30-block project area: Valencia Street between Market Street and Cesar Chavez.
Within this project area, we found that the most heavily traveled corridors were Valencia between 16th and 17th Street, 17th and 18th Street, and 18th and 19th Street. We found that these three blocks make up 27% of total Lyft rides along the Valencia corridor.
We also wanted to understand the top destinations along the corridor. To do this, we looked at ride history where passengers typed in the location they wanted to get picked up from.
Next, we looked at how demand for Lyft changed over time of day and over the course of the week. This would help answer questions such as “how does demand for Lyft differ on weekends vs. weeknights” or “what times of day do people use Lyft to access the Valencia corridor?”
We found that Lyft activity on Valencia Street was highest on weekends and in the evenings. Demand is fairly consistent on weekdays, with major spikes of activity on Fridays, Saturdays, and Sundays. The nighttime hours of 8 PM to 2 AM are also the busiest time for trips, making up 44% of all rides. These findings suggest the important role Lyft plays as a reliable option when transit service doesn’t run as frequently, or as a safe alternative to driving under the influence (a phenomenon we are observing around the country).
Our hypothesis was that because of the increased need for curb space between multiple on-demand services, as well as the unsafe experience of double parking or crossing over the bike lane to reach passengers, improvements in the Lyft app could help create a better experience for everyone.
To test this, our curb access pilot program was conducted as an “A/B experiment,” in which subjects were randomly assigned to a control or treatment group and statistical analysis was used to determine which variation performed better. Half of riders requesting rides within the pilot area continued to have the same experience: they could get picked up wherever they wanted. The other half were shown the experiment scenario, which asked them to walk to a dedicated pickup spot.
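The post doesn’t describe Lyft’s experimentation tooling, so as an illustrative sketch only: a common way to get a stable, effectively random 50/50 split is to hash each rider’s ID together with an experiment name. All names and identifiers below are hypothetical.

```python
import hashlib

def assign_group(rider_id: str, experiment: str = "valencia-curb-pilot") -> str:
    """Deterministically bucket a rider into control or treatment.

    Hashing the rider ID together with an experiment name yields a stable,
    effectively random 50/50 split: the same rider always sees the same
    experience, and assignments are independent across experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{rider_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 == 0 else "control"

# The same rider always lands in the same bucket.
assert assign_group("rider-42") == assign_group("rider-42")
```

Deterministic hashing (rather than a coin flip at request time) matters for experiments like this one: a rider who saw the dedicated pickup spot yesterday should see it again today.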
Geofencing and Venues
Our pilot was built using a Lyft feature called “Venues”, a geospatial tool designed to recommend pre-set pickup locations to passengers. When a user tries to request a ride from an area that has been mapped with a Venue, they are unable to manually control the area in which they’d like to be picked up. Rather, the Venue feature automatically redirects them to a pre-established location. This forced geofencing feature helps ensure that passengers request rides from safe locations and builds reliability and predictability for both passengers and drivers as they find each other.
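Lyft’s Venues implementation isn’t public; as a minimal sketch of the core idea, a request can be tested against a mapped geofence (here, a standard ray-casting point-in-polygon check) and redirected to that venue’s pre-set pickup spot. The venue data structure and coordinates are illustrative assumptions.

```python
def point_in_polygon(lat, lng, polygon):
    """Ray-casting even-odd test: is (lat, lng) inside the polygon?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        (y1, x1), (y2, x2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > lat) != (y2 > lat):  # edge crosses the horizontal ray
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lng < x_cross:
                inside = not inside
    return inside

def resolve_pickup(lat, lng, venues):
    """If a request falls inside a mapped venue, redirect to its pickup spot."""
    for venue in venues:
        if point_in_polygon(lat, lng, venue["geofence"]):
            return venue["pickup_spot"]
    return (lat, lng)  # outside any venue: pick up where requested

# Hypothetical venue: a rectangular geofence with a side-street pickup spot.
VENUES = [{"geofence": [(0, 0), (0, 10), (10, 10), (10, 0)],
           "pickup_spot": (0, 5)}]
```

A request inside the rectangle resolves to the pre-set spot; one outside it is left where the passenger stood.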
Given our understanding of ride activity and demand, we decided to create Venues on Valencia Street between 16th Street and 19th Street. We prioritized creating pickup zones along side streets in areas of lower traffic. Where possible, we tried to route pickups to existing loading zones; however, a major finding of the pilot was that existing curb space is insufficient and that the city needs more loading zones. To support better routing and reduce midblock U-turns or other unsafe driving behavior, we tried to put pickup spots on side streets that allowed for both westbound and eastbound directionality.
Our pilot ran for three months, from March 2018 to June 2018. Although our initial research focused on rideshare activity during hours of peak demand (i.e. nights and weekends), to support our project goals of increasing overall safety along the corridor and to create an easy and intuitive experience for passengers, we ultimately decided to run the experiments 24/7.
The graphic below illustrates where passengers were standing when they requested a ride, and which hotspot they were redirected to. We found that the top hot spots were on 16th Street. This finding suggests the need for continued coordination with the City to make sure that the dedicated pickup spots to protect cyclists on Valencia Street don’t interrupt on-time performance for the 55–16th Street or 22–Fillmore Muni bus routes.
Loading time, when a driver has pulled over to wait for a passenger to arrive or exit their car, was important for us to look at in terms of traffic flow. This is similar to the transportation-planning metric known as dwell time.
Currently, our metric for loading time looks at the time between when a driver arrives at the pickup location and when they press the “I have picked up my passenger” button. However, this is an imperfect measurement for dwell time, as drivers may press the button before the passenger gets in the vehicle. Based on our pilot, we have identified this as an area for further research.
Going into our experiment, we expected to see a slight increase in loading time, as passengers would need to get used to walking to the pickup spot. This hypothesis was correct: during the pilot, we saw loading time increase from an average of 25 seconds per ride to 28 seconds. To help speed up the process of drivers and passengers finding each other, we recommend the addition of wayfinding and signage in popular loading areas.
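The post reports only the two averages (25 s and 28 s). As a sketch of how such a difference might be checked for statistical significance in an A/B setting, here is Welch’s t-statistic computed over two illustrative, made-up samples of per-ride loading times; the samples are assumptions, not Lyft’s data.

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t-statistic for the difference in mean loading time."""
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    standard_error = math.sqrt(var_a / len(sample_a) + var_b / len(sample_b))
    return (mean_b - mean_a) / standard_error

# Made-up per-ride loading times in seconds, matching the reported averages.
control = [24, 26, 25, 23, 27, 25, 24, 26]     # mean 25 s
treatment = [29, 27, 28, 30, 26, 28, 29, 27]   # mean 28 s
t_statistic = welch_t(control, treatment)      # ≈ 4.6, well above ~2
```

With real ride-level data, one would also account for sample sizes in the thousands and compute a p-value; the point here is only the shape of the comparison.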
We also wanted to understand the difference between pickups and drop-offs. Generally, we found that pickups have a longer loading time than drop-offs.
Ridesharing is one part of the puzzle to creating a more organized streetscape along the Valencia corridor, so sharing information and coordinating with city stakeholders was critical. After our experiment, we sat down with elected officials, project staff from the SFMTA, WalkSF, and the San Francisco Bicycle Coalition to discuss the pilot findings and collaborate on how our work could support other initiatives underway across the city. We are now formally engaged with the SFMTA’s Valencia Bikeway Improvement Project and look forward to continuing to support this initiative.
Given the findings of this pilot program and our commitment to creating sustainable streets (including our acquisition of the leading bikeshare company Motivate and introduction of bike and scooter sharing to the Lyft platform), we decided to move our project from a pilot to a permanent feature within the Lyft app. This means that currently, anyone requesting a ride on Valencia Street between 16th Street and 19th Street will be redirected to a pickup spot on a side street.
Based on the learnings of our pilot, we recommend the following:
We know that ridesharing is just one of the many competing uses of Valencia Street, and technology alone will not solve the challenges of pickups and drop-offs: adequate infrastructure like protected bike lanes and loading zones will be necessary to achieve Vision Zero.
Looking ahead, we know there’s much to be done on this front. To start with, we are excited to partner with civic engagement leaders like Streetmix, whose participatory tools ensure that public spaces and urban design support safe streets. By bringing infrastructure designs like parking protected bike lanes or ridesharing loading zones into Streetmix, planners can begin to have the tools to engage community groups on what they’d like to see their streets look like.
We’ve also begun partnering with Together for Safer Roads to support local bike and pedestrian advocacy groups and share Lyft performance data to help improve safety on some of the nation’s most dangerous street corridors. And finally, through our application to the SFMTA to become a permitted scooter operator in the City, we are committing $1 per day per scooter to support expansion of the City’s protected bike lane network. We know that this kind of infrastructure is critical to making safer streets for everyone.
We know that this exciting work ahead cannot be done alone: we look forward to bringing this type of work to other cities around the country and to working together to achieve this vision.
Source: https://medium.com/@debsarctica/creating-a-safer-valencia-street-54c25a75b753
@lilmiquela has 1.3 million followers on Instagram. Her bio reads that she’s 19 years old, lives in Los Angeles, and supports causes including Black Lives Matter and the Innocence Project. Oh, and she’s a robot.
Her Instagram feed, which at the time of writing has 245 posts, is her entire existence. She likes memes and posting selfies. One photo in particular shows her relaxing on a lawn chair, while another has her posing on a washer/dryer set. There’s even a snap of her being tattooed by similarly Insta-famous tattoo artist Dr. Woo.
But. She’s. Not. Real. @lilmiquela is a “virtual influencer” and the brainchild of a venture capital-backed company called Brud, which describes itself as a group of “problem solvers specializing in robotics, artificial intelligence and their applications to media businesses.”
But the real question is why is a surreal—literally—freckly teenage girl worth millions to Silicon Valley?
After all, Brud isn’t the first company to capitalize off the platform Instagram provides, nor is it the first to illustrate how much money one can make as an “influencer.” Former “Bachelor” and “Bachelorette” contestants, each member of the Kardashian family and pretty much every C-list actor has proven that. Brud, rather, has shown that you can manufacture that influence using technology. You don’t have to pay an actual person to post an Instagram story about how he or she just “looooooves” your products.
The team at Brud decides what @lilmiquela “likes,” what she will promote on her Instagram and how she will behave online. Earlier this year, @lilmiquela posted an Instagram story advertising her partnership with Prada, undoubtedly a lucrative deal that had her advertising for the brand just in time for fashion week in February. It appeared to be one of the first official brand partnerships advertised on her feed.
Brud is hacking influencer marketing, which has itself disrupted traditional advertising in recent years. Influencer marketing has allowed skillful bloggers, who have become valuable media properties and brand assets in their own right, to make a living off social media posts. This is largely a result of the success of social media platforms like Twitter and Facebook, though Instagram sits at the center of the influencer movement specifically.
Venture capital investors, of course, were backers of all three of those platforms in their nascent days. Now, VCs are investing in a new generation of startups vying to capitalize on the innovative form of narrative advertising that is influencer marketing.
Let’s go over the basics. What’s an influencer? It’s basically the 2018 version of that really cool person in your class at school. Typically, it’s someone who posts frequently online, has a large following and likely also has strong engagement rates, meaning people tend to “like” and comment on their content frequently. Most importantly, influencers can have an impact on their followers’ purchasing decisions, whether that be because of their fame, knowledge of a specific industry or product, job title or follower count.
The influencer economy truly began with the birth of the blogosphere during the dot-com boom, but the invention of sharing apps like Instagram created the phenomenon as we know it today. The app officially launched in the fall of 2010; less than two years later, Facebook, which was about eight years old at the time, spent $1 billion to acquire it. What may have seemed like a ludicrous deal in 2012—Instagram only had 13 employees at the time and had raised about $57 million in VC funding—has proven to be Facebook’s most crucial and lucrative acquisition ever. Not to mention it was a goddamned steal.
Last month, Facebook reported its most disappointing earnings to date, an announcement that resulted in a major stock plunge. Instagram, on the other hand, continues to boom, with more than 1 billion users on its platform. It’s driving a large part of Facebook’s advertising profits. Wells Fargo analyst Ken Sena reportedly said the photo-sharing app could contribute $20 billion to Facebook’s revenue by 2020, or roughly a quarter of the social media giant’s total revenue.
Why? Because advertisers love Instagram. They are expected to spend $1.6 billion on Instagram advertising in 2018, a number that could grow to as much as $5 billion over the next few years, per MediaKix. If you’re not an avid Instagram user and you’ve found yourself wondering, “How could a photo-sharing app bring in that kind of money?,” let me throw some mind-boggling stats your way.
Kylie Jenner, the youngest member of the Kardashian family, can earn as much as $1 million per Instagram post. To repeat, she can make $1 million by posting one photo to her Instagram feed with a hashtag or brief product description. For the most part, she uses her feed to promote her own business, Kylie Cosmetics. The company was recently valued at around $800 million and Jenner herself is expected to be the youngest billionaire ever, according to a recent viral Forbes profile, because of the success of her business and her social media fame. Jenner, of course, posted a photo of the Forbes cover story to her Instagram to celebrate this achievement:
Vine star Cameron Dallas, who also has his own Netflix show for some reason, reportedly earns some $25,000 per post. Indian cricket team captain Virat Kohli makes some $120,000. Celebrity chef Gordon Ramsay can earn roughly $5,500 for a post. And Logan Paul, the controversial YouTube star, can bring in $17,000 each time he grams. This is all according to social media tool provider Hopper’s Instagram Rich List, which ranks Insta users by how much they can purportedly bring in. Every person on the list is considered an influencer.
The first VC to leap entirely into the influencer economy was Benjamin Grubbs, the former global director of top creator partnerships at YouTube—a mouthful of a title that basically means Grubbs was in charge of the team that oversaw the growth of the most popular YouTubers. After six years at YouTube, including a stint at its parent company Google, Grubbs stepped down to launch a venture capital fund called Next 10 Ventures.
Next 10 Ventures closed its debut vehicle in May, a $50 million fund intended to back businesses in the creator economy. While other venture capitalists have closed select deals for startups in the influencer space, Next 10 raised a sizable amount of cash to bet solely on people whose living relies on platforms like YouTube and Instagram.
“Over the past five years, I have seen firsthand the immense growth of the Creator economy in terms of reach, consumer engagement, and commercialization,” Grubbs wrote in a statement announcing the fund. “We forecast the global creator economy excluding China to reach $23 billion this year, driven by tens of thousands of creators who make a living on digital video and social platforms. This scale affords our company ample opportunity to build assets that produce meaningful value in the years ahead.”
It’s unclear which, if any, startups Next 10 has backed since it wrapped its initial fund. A handful of startups in the space, however, have raised funding in the last year.
Brud, the developers of @lilmiquela, brought in their reported $6 million financing in April, of course. That round was followed by 21 Buttons’ $17 million round led by Idinvest Partners. The following month, Octoly brought in a $10 million Series A for its platform, which helps influencers receive free products in exchange for reviews. Havas, Otium and Twin Partners participated in that round.
Several other startups, including Lumanu, which has created software that helps influencers reach larger audiences, and Victorious, a developer of apps that target specific fandoms, have also raised VC recently. Meanwhile, two companies focused on influencer marketing have exited. Viacom picked up WHOSAY, which works with brands to craft campaign strategies and produce content; IZEA, the provider of a digital marketplace that connects brands with influencers, agreed to acquire TapInfluence, which plans and executes influencer marketing campaigns.
And these are just the early adopters. Given the stats shared above, I’d expect a whole lot more entrepreneurs to enter the space in years to come.
The bottom line is that influencers and influencer marketing have created an incredibly powerful tool that’s poised to disrupt the marketing and advertising industries, much like Craigslist disrupted the classified ad business and Airbnb changed the way we think about hotels.
VCs, of course, will follow the money. And as we’ve learned from Kylie Jenner, social media influence can be quite profitable.
Perhaps the real question is this: Will @lilmiquela make 2019’s Instagram Rich List? Time will tell.
Most people didn’t notice last month when a 35-person company in San Francisco called HoneyBook* announced a $22 million Series B.
What was unusual about the deal is that nearly all the best-known Silicon Valley VCs competed for it. That’s because HoneyBook is a prime example of an important new category of digital company that combines the best elements of networks like Facebook with marketplaces like Airbnb — what we call a market-network.
Market-networks will produce a new class of unicorn companies and impact how millions of service professionals will work and earn their living.
“Networks” provide profiles that project a person’s identity and then let them communicate in a 360-degree pattern with other people in the network. Think Facebook, Twitter, GoodReads*, Meerkat*, and LinkedIn.
What’s unique about market-networks is that they:
An example will help: let’s go back to HoneyBook, a market-network for the events industry.
An event planner builds a profile on HoneyBook.com. That profile serves as her professional home on the Web. She uses the HoneyBook SaaS workflow to send self-branded proposals to clients and sign contracts digitally.
She then connects the other professionals she works with like florists and photographers to that project. They also get profiles on HoneyBook and everyone can team up to service a client, send each other proposals, sign contracts and get paid by everyone else.
This many-to-many transaction pattern is key. HoneyBook is an N-sided marketplace — transactions happen in a 360-degree pattern like a network, but people come with transacting in mind. That makes HoneyBook both a marketplace and a network.
A market-network often starts by enhancing a network of professionals that exists offline today. Many of them have been transacting with each other for years using fax, checks, overnight packages, and phone calls.
By moving these connections and transactions into software, a market-network makes it significantly easier for professionals to operate their businesses and clients to get better service.
AngelList* is also a market-network. I don’t know if it was the first, but Naval Ravikant and Babak Nivi deserve a lot of credit for pioneering the model in 2010.
On AngelList, the pattern is similar. The CEO of the startup creates her own profile, then prompts her personal network of investors, employees, advisors and customers to build their own profiles. The CEO can then complete some or all of her fundraising paperwork through the AngelList SaaS workflow, and everyone can share deals with everyone else in the network, hire employees, and find customers in a 360-degree pattern.
In 2013, when I met Oz and Naama Alon, two of the founders of HoneyBook, they were building a beautiful network product — a photo-sharing app for weddings. We sat down and I walked them through the new idea of a market-network. They embraced it immediately, and have taken it to a whole new level – from the design and workflow to the profile customization and business model.
Houzz* is a third good example. Houzz connects homeowners with home improvement professionals and with products they can buy for their home. They have a product that is very nearly a market-network. The company raised $165M in its last round.
Joist is another good example. Based in Toronto, it provides a market-network for the home remodel and construction industry. Houzz is also in that space, with broader reach and a different approach. DotLoop in Cincinnati shows the same pattern for the residential real estate brokerage industry.
Looking at AngelList, Joist, DotLoop, Houzz and HoneyBook, the market-network pattern is visible.
Seven Attributes Of A Successful Market-Network
In the last six years, the tech industry has obsessed over on-demand labor marketplaces for quick transactions of simple services. Companies like Uber, Lyft*, Mechanical Turk, Thumbtack, DoorDash* and many others make it efficient to buy simple services whose quality is judged objectively. Their success is based on commodifying the people on both sides of the marketplace.
However, the highest value services – like event planning and home remodels — are neither simple nor objectively judged. They are more involved and longer term. Market-networks are designed for these.
With complex services, each client is unique and the professional they get matters. Would you hand over your wedding to just anyone? Your home remodel? The people on both sides of those equations are not interchangeable like they are with Lyft or Uber. Each person brings unique opinions, expertise, and relationships to the transaction. A market-network is designed to acknowledge that as a core tenet and provide a solution.
Collaboration happens around a project
For most complex services, multiple professionals collaborate among themselves—and with a client—over a period of time. The SaaS at the center of market-networks focuses the action on a project that can take days or years to complete.
Pleasing profiles with information unique to their context give the people involved a reason to come back and interact here. Such profiles capture part of a person’s identity better than anywhere else on the Web.
Market-networks bring a career’s worth of professional connections online and make them more useful. For years, social networks like LinkedIn and Facebook have helped build long-term relationships. However, until market-networks, they hadn’t been used for commerce and transactions.
In these industries, referrals are gold, for both client and service professional. The market-network software is designed to make referrals simple and more frequent.
By putting the network of professionals and clients into software, the market-network increases transaction velocity for everyone. It increases the close rate on proposals and speeds up payment. The software also increases customer satisfaction scores, reduces miscommunication, and makes the work pleasing and beautiful. Never underestimate pleasing and beautiful.
First we had communication networks like telephones and email. Then we had social networks like Facebook and LinkedIn. Now we have market-networks like HoneyBook, AngelList, DotLoop, Houzz and Joist.
You can imagine a market-network for every industry where professionals are not interchangeable: law, travel, real estate, media production, architecture, investment banking, personal finance, construction, management consulting, and more. Each market-network will have different attributes that make it work in each vertical, but the principles will remain the same.
Over time, nearly all independent professionals and their clients will conduct business through the market-network of their industry. We’re just seeing the beginning of it now.
Market-networks will have a massive positive impact on how millions of people work and live, and how hundreds of millions of people buy better services.
I hope more entrepreneurs will set their sights on building these businesses. It’s time. They are hard products to get right, but the payoff is potentially massive.
An analysis of more than 400 use cases across 19 industries and nine business functions highlights the broad use and significant economic potential of advanced AI techniques.
Artificial intelligence (AI) stands out as a transformational technology of our digital age—and its practical application throughout the economy is growing apace. For this briefing, Notes from the AI frontier: Insights from hundreds of use cases (PDF–446KB), we mapped both traditional analytics and newer “deep learning” techniques and the problems they can solve to more than 400 specific use cases in companies and organizations. Drawing on McKinsey Global Institute research and the applied experience with AI of McKinsey Analytics, we assess both the practical applications and the economic potential of advanced AI techniques across industries and business functions. Our findings highlight the substantial potential of applying deep learning techniques to use cases across the economy, but we also see some continuing limitations and obstacles—along with future opportunities as the technologies continue their advance. Ultimately, the value of AI is not to be found in the models themselves, but in companies’ abilities to harness them.
It is important to highlight that, even as we see economic potential in the use of AI techniques, the use of data must always take into account concerns including data security, privacy, and potential issues of bias.
As artificial intelligence technologies advance, so does the definition of which techniques constitute AI. For the purposes of this briefing, we use AI as shorthand for deep learning techniques that use artificial neural networks. We also examined other machine learning techniques and traditional analytics techniques (Exhibit 1).
Neural networks are a subset of machine learning techniques. Essentially, they are AI systems based on simulating connected “neural units,” loosely modeling the way that neurons interact in the brain. Computational models inspired by neural connections have been studied since the 1940s and have returned to prominence as computer processing power has increased and large training data sets have been used to successfully analyze input data such as images, video, and speech. AI practitioners refer to these techniques as “deep learning,” since neural networks have many (“deep”) layers of simulated interconnected neurons.
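The “simulated neural units” described above can be made concrete with a minimal forward pass: each unit takes a weighted sum of its inputs and applies a nonlinearity, and a “deep” network is simply layers of such units feeding into one another. The weights and inputs below are arbitrary placeholders, not a trained model.

```python
import math

def neuron(inputs, weights, bias):
    """One simulated 'neural unit': a weighted sum passed through a nonlinearity."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid activation

def layer(inputs, weight_rows, biases):
    """A layer is a group of neurons all reading the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# A "deep" network is just layers feeding layers (weights here are arbitrary).
hidden = layer([0.5, -1.2], [[0.7, 0.3], [-0.4, 0.9]], [0.1, -0.2])
output = layer(hidden, [[1.0, -1.0]], [0.0])
```

Training, which the briefing alludes to via large labeled data sets, consists of adjusting those weights so the output matches known examples.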
We analyzed the applications and value of three neural network techniques:
For our use cases, we also considered two other techniques—generative adversarial networks (GANs) and reinforcement learning—but did not include them in our potential value assessment of AI, since they remain nascent techniques that are not yet widely applied.
Generative adversarial networks (GANs) use two neural networks contesting each other in a zero-sum game framework (thus “adversarial”). GANs can learn to mimic various distributions of data (for example text, speech, and images) and are therefore valuable in generating test datasets when these are not readily available.
Reinforcement learning is a subfield of machine learning in which systems are trained by receiving virtual “rewards” or “punishments”, essentially learning by trial and error. Google DeepMind has used reinforcement learning to develop systems that can play games, including video games and board games such as Go, better than human champions.
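The trial-and-error idea can be illustrated with a toy example. The sketch below is not DeepMind’s system, just textbook tabular Q-learning on a five-state corridor: the agent receives a reward only on reaching the goal state and gradually learns to walk toward it.

```python
import random

random.seed(0)

# A tiny corridor: states 0..4, reward only for reaching state 4.
N_STATES, GOAL = 5, 4
ACTIONS = (1, -1)  # step right, step left

def step(state, action):
    next_state = min(max(state + action, 0), GOAL)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _episode in range(500):
    s = 0
    for _ in range(100):  # cap episode length
        if s == GOAL:
            break
        # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Q-learning update: nudge Q toward reward + discounted future value.
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned policy steps right toward the reward in every state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
```

Game-playing systems like those mentioned above combine this same reward-driven update with deep neural networks in place of the lookup table.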
We collated and analyzed more than 400 use cases across 19 industries and nine business functions. They provided insight into the areas within specific sectors where deep neural networks can potentially create the most value, the incremental lift that these neural networks can generate compared with traditional analytics (Exhibit 2), and the voracious data requirements—in terms of volume, variety, and velocity—that must be met for this potential to be realized. Our library of use cases, while extensive, is not exhaustive, and may overstate or understate the potential for certain sectors. We will continue refining and adding to it.
Examples of where AI can be used to improve the performance of existing use cases include:
In 69 percent of the use cases we studied, deep neural networks can be used to improve performance beyond that provided by other analytic techniques. Cases in which only neural networks can be used, which we refer to here as “greenfield” cases, constituted just 16 percent of the total. For the remaining 15 percent, artificial neural networks provided limited additional performance over other analytics techniques, among other reasons because of data limitations that made these cases unsuitable for deep learning (Exhibit 3).
Greenfield AI solutions are prevalent in business areas such as customer service management, as well as among some industries where the data are rich and voluminous and at times integrate human reactions. Among industries, we found many greenfield use cases in healthcare, in particular. Some of these cases involve disease diagnosis and improved care, and rely on rich data sets incorporating image and video inputs, including from MRIs.
On average, our use cases suggest that modern deep learning AI techniques have the potential to provide a boost in additional value above and beyond traditional analytics techniques ranging from 30 percent to 128 percent, depending on industry.
In many of our use cases, however, traditional analytics and machine learning techniques continue to underpin a large percentage of the value creation potential in industries including insurance, pharmaceuticals and medical products, and telecommunications, with the potential of AI limited in certain contexts. In part this is due to the way data are used by these industries and to regulatory issues.
Making effective use of neural networks in most applications requires large labeled training data sets alongside access to sufficient computing infrastructure. Furthermore, these deep learning techniques are particularly powerful in extracting patterns from complex, multidimensional data types such as images, video, and audio or speech.
Deep learning methods require thousands of data records for models to become relatively good at classification tasks and, in some cases, millions for them to perform at the level of humans. By one estimate, a supervised deep-learning algorithm will generally achieve acceptable performance with around 5,000 labeled examples per category and will match or exceed human-level performance when trained with a data set containing at least 10 million labeled examples. In some cases where advanced analytics is currently used, so much data are available—millions or even billions of rows per data set—that AI is the most appropriate technique. However, if a threshold of data volume is not reached, AI may not add value to traditional analytics techniques.
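The rule-of-thumb thresholds cited above can be expressed as a simple sizing check. This is an illustrative sketch only: the function name is invented, and the cutoffs are the heuristic estimates quoted in the text, not hard limits.

```python
def data_volume_guidance(labeled_examples_per_category: int) -> str:
    """Rough guidance based on the supervised-learning estimate cited above:
    ~5,000 labeled examples per category for acceptable performance, and on
    the order of 10 million labeled examples to approach human-level
    performance. These are heuristics, not guarantees."""
    if labeled_examples_per_category < 5_000:
        return "below threshold: deep learning may not beat traditional analytics"
    if labeled_examples_per_category < 10_000_000:
        return "acceptable performance plausible"
    return "human-level performance plausible"

print(data_volume_guidance(1_000))
print(data_volume_guidance(50_000))
```

In practice such a check would be one input among many; data quality, label accuracy, and problem complexity matter as much as raw volume.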
These massive data sets can be difficult to obtain or create for many business use cases, and labeling remains a challenge. Most current AI models are trained through “supervised learning,” which requires humans to label and categorize the underlying data. However, promising new techniques are emerging to overcome these data bottlenecks, such as reinforcement learning, generative adversarial networks, transfer learning, and “one-shot learning,” which allows a trained AI model to learn about a subject based on a small number of real-world demonstrations or examples—and sometimes just one.
Organizations will have to adopt and implement strategies that enable them to collect and integrate data at scale. Even with large datasets, they will have to guard against “overfitting,” where a model too tightly matches the “noisy” or random features of the training set, resulting in a corresponding lack of accuracy in future performance, and against “underfitting,” where the model fails to capture all of the relevant features. Linking data across customer segments and channels, rather than allowing the data to languish in silos, is especially important to create value.
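The overfitting risk described above can be illustrated with a minimal pure-Python sketch on made-up toy data: a model that simply memorizes the training set scores perfectly on it, while a simple fitted trend line captures only the underlying pattern. All names and data here are hypothetical.

```python
import random

random.seed(0)

# Toy data: y depends weakly on x, plus random noise; separate train/held-out splits.
def make_data(n):
    xs = [random.uniform(0, 10) for _ in range(n)]
    return [(x, 0.5 * x + random.gauss(0, 2.0)) for x in xs]

train, test = make_data(30), make_data(30)

# "Overfit" model: a lookup table that memorizes every training point and
# falls back to the nearest memorized x for unseen inputs (it learns the noise).
memory = dict(train)
def overfit_predict(x):
    nearest = min(memory, key=lambda m: abs(m - x))
    return memory[nearest]

# Simpler model: an ordinary least-squares line, which captures only the trend.
n = len(train)
mx = sum(x for x, _ in train) / n
my = sum(y for _, y in train) / n
slope = sum((x - mx) * (y - my) for x, y in train) / sum((x - mx) ** 2 for x, _ in train)
def line_predict(x):
    return my + slope * (x - mx)

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

# The memorizing model has zero training error but generalizes poorly;
# the line has nonzero training error but tracks the real relationship.
print("overfit model: train", mse(overfit_predict, train), "test", mse(overfit_predict, test))
print("line model:    train", mse(line_predict, train), "test", mse(line_predict, test))
```

The gap between training and held-out error is the practical signal: a large gap suggests overfitting, while high error on both sets suggests underfitting.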
Neural AI techniques excel at analyzing image, video, and audio data types because of their complex, multidimensional nature, known by practitioners as “high dimensionality.” Neural networks are good at dealing with high dimensionality, as multiple layers in a network can learn to represent the many different features present in the data. Thus, for facial recognition, the first layer in the network could focus on raw pixels, the next on edges and lines, another on generic facial features, and the final layer might identify the face. Unlike previous generations of AI, which often required human expertise to do “feature engineering,” these neural network techniques are often able to learn to represent these features in their simulated neural networks as part of the training process.
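The layer-by-layer progression described above can be sketched as a tiny forward pass in pure Python. The weights here are invented for illustration; in a real network they are learned during training, and each layer would contain far more units.

```python
# Hypothetical hand-picked weights for a tiny two-layer network.
W1 = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]   # layer 1: 3 inputs -> 2 hidden units
W2 = [[1.0, -1.0]]                           # layer 2: 2 hidden units -> 1 output

def relu(v):
    # Nonlinearity that lets stacked layers represent non-trivial features.
    return [max(0.0, x) for x in v]

def layer(weights, inputs):
    # Each unit computes a weighted sum of the previous layer's outputs.
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

def forward(pixels):
    # Layer 1 turns raw inputs ("pixels") into intermediate features;
    # layer 2 combines those features into a higher-level score, mirroring
    # the pixels -> edges -> facial features -> face progression in the text.
    hidden = relu(layer(W1, pixels))
    return layer(W2, hidden)[0]

print(forward([0.2, 0.7, 0.1]))
```

Training consists of adjusting `W1` and `W2` so that this learned feature hierarchy emerges automatically, which is what removes the need for manual feature engineering.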
Along with issues around the volume and variety of data, velocity is also a requirement: models must be retrained as conditions change, so the training data must be refreshed frequently. In one-third of the cases, the model needs to be refreshed at least monthly, and almost one in four cases requires a daily refresh; this is especially the case in marketing and sales and in supply chain management and manufacturing.
We estimate that the AI techniques we cite in this briefing together have the potential to create between $3.5 trillion and $5.8 trillion in value annually across nine business functions in 19 industries. This constitutes about 40 percent of the overall $9.5 trillion to $15.4 trillion annual impact that could potentially be enabled by all analytical techniques (Exhibit 4).
Per industry, we estimate that AI’s potential value amounts to between one and nine percent of 2016 revenue. The value as measured by percentage of industry revenue varies significantly among industries, depending on the specific applicable use cases, the availability of abundant and complex data, as well as on regulatory and other constraints.
These figures are not forecasts for a particular period, but they are indicative of the considerable potential for the global economy that advanced analytics represents.
From the use cases we have examined, we find that the greatest potential value from using AI lies both in top-line-oriented functions, such as marketing and sales, and in bottom-line-oriented operational functions, including supply chain management and manufacturing.
Consumer industries such as retail and high tech will tend to see more potential from marketing and sales AI applications because frequent and digital interactions between business and customers generate larger data sets for AI techniques to tap into. E-commerce platforms, in particular, stand to benefit. This is because of the ease with which these platforms collect customer information such as click data or time spent on a web page and can then customize promotions, prices, and products for each customer dynamically and in real time.
Here is a snapshot of three sectors where we have seen AI’s impact: (Exhibit 5)
Artificial intelligence is attracting growing amounts of corporate investment, and as the technologies develop, the potential value that can be unlocked is likely to grow. So far, however, only about 20 percent of AI-aware companies are currently using one or more of its technologies in a core business process or at scale.
For all their promise, AI technologies have plenty of limitations that will need to be overcome. They include the onerous data requirements listed above, but also five other limitations:
Organizations planning to adopt significant deep learning efforts will need to consider a spectrum of options about how to do so. The range of options includes building a complete in-house AI capability, outsourcing these capabilities, or leveraging AI-as-a-service offerings.
Based on the use cases they plan to build, companies will need to create a data plan that produces results and predictions, which can be fed either into designed interfaces for humans to act on or into transaction systems. Key data engineering challenges include data creation or acquisition, defining data ontology, and building appropriate data “pipes.” Given the significant computational requirements of deep learning, some organizations will maintain their own data centers, because of regulations or security concerns, but the capital expenditures could be considerable, particularly when using specialized hardware. Cloud vendors offer another option.
Process can also become an impediment to successful adoption unless organizations are digitally mature. On the technical side, organizations will have to develop robust data maintenance and governance processes, and implement modern software disciplines such as Agile and DevOps. Even more challenging, in terms of scale, is overcoming the “last mile” problem of making sure the superior insights provided by AI are instantiated in the behavior of the people and processes of an enterprise.
On the people front, much of the construction and optimization of deep neural networks remains something of an art requiring real experts to deliver step-change performance increases. Demand for these skills far outstrips supply at present; according to some estimates, fewer than 10,000 people have the skills necessary to tackle serious AI problems, and competition for them is fierce among the tech giants.
Where AI techniques and data are available and the value is clearly proven, organizations can already pursue the opportunity. In some areas, the techniques today may be mature and the data available, but the cost and complexity of deploying AI may simply not be worthwhile, given the value that could be generated. For example, an airline could use facial recognition and other biometric scanning technology to streamline aircraft boarding, but the value of doing so may not justify the cost and issues around privacy and personal identification.
Similarly, we can see potential cases where the data and the techniques are maturing, but the value is not yet clear. The most unpredictable scenario is where either the data (both the types and volume) or the techniques are simply too new and untested to know how much value they could unlock. For example, in healthcare, if AI were able to build on the superhuman precision we are already starting to see with X-ray analysis and broaden that to more accurate diagnoses and even automated medical procedures, the economic value could be very significant. At the same time, the complexities and costs of arriving at this frontier are also daunting. Among other issues, it would require flawless technical execution and resolving issues of malpractice insurance and other legal concerns.
Societal concerns and regulations can also constrain AI use. Regulatory constraints are especially prevalent in use cases related to personally identifiable information. This is particularly relevant at a time of growing public debate about the use and commercialization of individual data on some online platforms. Use and storage of personal information is especially sensitive in sectors such as banking, health care, and pharmaceutical and medical products, as well as in the public and social sector. In addition to addressing these issues, businesses and other users of data for AI will need to continue to evolve business models related to data use in order to address societies’ concerns. Furthermore, regulatory requirements and restrictions can differ from country to country, as well as from sector to sector.
As we have seen, it is a company’s ability to execute against AI models that creates value, rather than the models themselves. In this final section, we sketch out some of the high-level implications of our study of AI use cases for providers of AI technology, appliers of AI technology, and policy makers, who set the context for both.