At a recent KPMG Robotic Innovations event, futurist and friend Gerd Leonhard delivered a keynote titled “The Digital Transformation of Business and Society: Challenges and Opportunities by 2020”. I highly recommend viewing the video of his presentation. As Gerd describes it, he is a futurist focused on foresight and observations, not on predicting the future. We are at a point in history where every company needs a Gerd Leonhard. For many of the reasons presented in the video, future thinking is rapidly growing in importance. As Gerd so rightly points out, we are still vastly underestimating the sheer velocity of change.
With regard to future thinking, Gerd used my future scenario slide to describe both the exponential and combinatorial nature of future scenarios — not only do we need to think exponentially, but we also need to think in a combinatorial manner. Gerd mentioned Tesla as a company that really knows how to do this.
He then described our current pivot point of exponential change: a point in history where humanity will change more in the next 20 years than in the previous 300. With that as a backdrop, he encouraged the audience to look five years into the future and spend 3 to 5% of their time focused on foresight. He quoted Peter Drucker (“In times of change the greatest danger is to act with yesterday’s logic”) and stated that leaders must shift from a focus on what is, to a focus on what could be. Gerd added that “wait and see” means “wait and die” (love that by the way). He urged leaders to focus on 2020 and build a plan to participate in that future, emphasizing that the question is no longer what-if, but what-when. We are entering an era where the impossible is doable, and the headline for that era is: exponential, convergent, combinatorial, and inter-dependent — words that should be a key part of the leadership lexicon going forward. Here are some snapshots from his presentation:
Because of exponential progression, it is difficult to imagine the world in 5 years, and although the industrial era was impactful, it will not compare to what lies ahead. The danger of vastly underestimating the sheer velocity of change is real. For example, in just three months, the projection for the number of autonomous vehicles sold in 2035 went from 100 million to 1.5 billion.
Six years ago Gerd advised a German auto company about the driverless car and the implications of a sharing economy, and they laughed. Think of what’s happened in just six years; I can’t imagine anyone is laughing now. He witnessed something similar as a veteran of the music business, where he tried to guide the industry through digital disruption: an industry that shifted from selling $20 CDs to making a fraction of a penny per play. Gerd’s experience in the music business is a lesson we should learn from: you can’t stop people who see value from extracting that value. Protectionist behavior did not work, as the industry lost 71% of its revenue in 12 years. Streaming music will be huge, but the winners are not the traditional players; they are Spotify, Apple, Facebook, Google, etc. This scenario is likely to play out across every industry, as new businesses are emerging, but traditional companies are not running them. Gerd stressed that we can’t let this happen across these other industries.
Anything that can be automated will be automated: truck drivers and pilots go away, as robots don’t need unions. There is just too much to be gained not to automate. For example, 60% of the cost in the system could be eliminated by interconnecting logistics, possibly realized via a Logistics Internet as described by economist Jeremy Rifkin. But the drive towards automation will have unintended consequences, and some science fiction scenarios could play out. Humanity and technology are indeed intertwining, but technology does not have ethics. A self-driving car would need ethics, because we make difficult decisions while driving all the time. How does a car decide whether to hit a frog or to swerve and hit a mother and her child? Speaking of science fiction scenarios, Gerd predicts that when these things come together, humans and machines will have converged:
Gerd has been using the term “Hellven” to represent the two paths technology can take. Is it 90% heaven and 10% hell (unintended consequences), or can this equation flip? He asks the question: where are we trying to go with this? He used the real example of drones used to benefit society (heaven), but people buying guns to shoot them down (hell). As we pursue exponential technologies, we must do it in a way that avoids negative consequences. Will we allow humanity to move down a path where, by 2030, we will all be human-machine hybrids? Will hacking drive chaos as hackers gain control of vehicles? A recent recall of 1.4 million Jeeps underscores the possibility. A world of super intelligence requires super humanity: technology does not have ethics, but society depends on them. Is this Ray Kurzweil’s vision what we want?
Is society truly ready for human-machine hybrids, or even for advancements like the driverless car that may be closer to realization? Gerd used a very effective video to make the point.
Followers of my blog know I’m a big believer in the coming shift to value ecosystems. Gerd described this as a move away from Egosystems, where large companies run large things, to interdependent Ecosystems. I’ve talked about the blurring of industry boundaries and the movement towards ecosystems. We may ultimately move away from the industry construct and end up with a handful of ecosystems such as mobility, shelter, resources, wellness, growth, money, maker, and comfort.
Our kids will live to 90 or 100 as the default. We are gaining 8 hours of longevity per day — one third of a year per year. Genetic engineering is likely to eradicate disease, impacting longevity and global population. DNA editing is becoming a real possibility in the next 10 years, and at least 50 Silicon Valley companies are focused on ending aging and eliminating death. One such company is Human Longevity Inc., which was co-founded by Peter Diamandis of Singularity University. Gerd used a quote from Peter to help the audience understand the motivation: “Today there are six to seven trillion dollars a year spent on healthcare, half of which goes to people over the age of 65. In addition, people over the age of 65 hold something on the order of $60 trillion in wealth. And the question is what would people pay for an extra 10, 20, 30, 40 years of healthy life. It’s a huge opportunity”.
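As a back-of-the-envelope check on that rate (my own arithmetic sketch, not from the presentation): 8 hours of added longevity per day is one third of a day per day, which accumulates to roughly a third of a year over a full year.

```python
# Sanity-check the claim: 8 hours of longevity gained per day
# is equivalent to one third of a year gained per year.
HOURS_GAINED_PER_DAY = 8
HOURS_PER_DAY = 24
DAYS_PER_YEAR = 365

# Fraction of a day gained each day (8/24 = 1/3).
daily_fraction = HOURS_GAINED_PER_DAY / HOURS_PER_DAY

# Days of longevity gained over one calendar year (~121.7 days).
days_gained_per_year = daily_fraction * DAYS_PER_YEAR

# Expressed as a fraction of a year: one third, matching the claim.
years_gained_per_year = days_gained_per_year / DAYS_PER_YEAR
print(round(years_gained_per_year, 3))  # 0.333
```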
Gerd described the growing need to focus on the right side of our brain. He believes that algorithms can only go so far. Our right brain characteristics cannot be replicated by an algorithm, making a human-algorithm combination — or humarithm as Gerd calls it — a better path. The right brain characteristics that grow in importance and drive future hiring profiles are:
Google is on the way to becoming the global operating system: an artificial intelligence enterprise. In the future, you won’t search, because as a digital assistant, Google will already know what you want. Gerd quotes Ray Kurzweil in saying that by 2027 the capacity of one computer will equal that of the human brain, at which point we shift from artificial narrow intelligence to artificial general intelligence. In thinking about AI, Gerd flips the paradigm to IA, or Intelligent Assistant. For example, Schwab already has an intelligent portfolio. He indicated that every bank is investing in intelligent portfolios that deal with the simple investments robots can handle. This leads to 50% of financial advisors being replaced by robots and AI.
This intelligent assistant race has just begun, as Siri, Google Now, Facebook MoneyPenny, and Amazon Echo vie for intelligent assistant positioning. Intelligent assistants could eliminate the need for actual assistants within five years, and creep into countless scenarios over time. Police departments are already capable of determining who is likely to commit a crime in the next month, and there are examples of police taking preventative measures. Augmentation adds another dimension: an officer wearing glasses can identify you on sight and have your records displayed in front of them. There are over 100 companies focused on augmentation, and a number of intelligent assistant examples surrounding IBM Watson, the most discussed being its effectiveness in assisting doctors. An intelligent assistant is the likely first role in the autonomous vehicle transition, as cars step in to provide a number of valuable services without completely taking over. Countless examples are emerging.
Gerd took two polls during his keynote. Here is the first: how do you feel about the rise of intelligent digital assistants? Answers 1 and 2 below received the lion’s share of the votes.
Collectively, automation, robotics, intelligent assistants, and artificial intelligence will reframe business, commerce, culture, and society. This is perhaps the key takeaway from a discussion like this. We are at an inflection point where reframing begins to drive real structural change. How many leaders are ready for true structural change?
Gerd likes to refer to the 7-ations: Digitization, De-Materialization, Automation, Virtualization, Optimization, Augmentation, and Robotization. Consequences of the exponential and combinatorial growth of these seven include dependency, job displacement, and abundance. While these seven are tools for dramatic cost reduction, they also lead to abundance. Examples are everywhere, from the 16 million songs available through Spotify to 3D-printed cars that require only 50 parts. As supply exceeds demand in category after category, we reach abundance. As Gerd put it, in five years’ time genome sequencing will be cheaper than flushing the toilet, and abundant energy will be available by 2035 (2015 will be the first year that a major oil company leaves the oil business to enter the abundance of the renewables business). Other things to consider regarding abundance:
Efficiency and business improvement are a path, not a destination. Gerd estimates that total efficiency will be reached in 5 to 10 years, creating value through productivity gains along the way. However, after total efficiency is met, value comes from purpose. Purpose-driven companies have an aspirational purpose that aims to transform the planet, referred to as a massive transformative purpose in a recent book on exponential organizations. When you consider the value that the millennial generation places on purpose, it is clear that successful organizations must excel at both technology and humanity. If we allow technology to trump humanity, business will have no purpose.
In the first phase, the value lies in the automation itself (productivity, cost savings). In the second phase, the value lies in those things that cannot be automated. Anything that is human about your company cannot be automated: purpose, design, and brand become more powerful. Companies must invent new things that are only possible because of automation.
Technological unemployment is real this time, and it is exponential. Gerd cited a recent study by The Economist that describes how robotics and artificial intelligence will increasingly be used in place of humans to perform repetitive tasks. On the other side of the spectrum is a demand for better customer service and greater skills in innovation, driven by globalization and falling barriers to market entry. Therefore, creativity and social intelligence will become crucial differentiators for many businesses; jobs will increasingly demand skills in creative problem-solving and constructive interaction with others.
Gerd described a basic income guarantee that may be necessary if some of these unemployment scenarios play out. Something like this is already on the ballot in Switzerland, and it is not the first time this has been talked about:
In the world of automation, experience becomes extremely valuable, and you can’t, nor should you attempt to, automate experiences. We clearly see an intense focus on customer experience, and we had a great discussion on the topic on an August 26th Game Changers broadcast. Innovation is critical to both the service economy and the experience economy. Gerd used a visual to describe the progression of economic value:
Source: B. Joseph Pine II and James Gilmore: The Experience Economy
Gerd used a second poll to sense how people would feel about humans becoming artificially intelligent. Here again, the audience leaned towards the first two possible answers:
Gerd then summarized the session as follows:
The future is exponential, combinatorial, and interdependent: the sooner we adjust our thinking (laterally), the better we will be at designing our future.
My take: Gerd hits on a key point. Leaders must think differently. There is very little in a leader’s collective experience that can guide them through the type of change ahead; it requires us all to think differently.
When looking at AI, consider trying IA first (intelligent assistance / augmentation).
My take: These considerations allow us to create the future in a way that avoids unintended consequences. Technology should be a supplement, not a replacement.
Efficiency and cost reduction based on automation, AI/IA and Robotization are good stories but not the final destination: we need to go beyond the 7-ations and inevitable abundance to create new value that cannot be easily automated.
My take: Future thinking is critical for us to be effective here. We have to have a sense of where all of this is heading if we are to effectively create new sources of value.
We won’t just need better algorithms — we also need stronger humarithms i.e. values, ethics, standards, principles and social contracts.
My take: Gerd is an evangelist for creating our future in a way that avoids hellish outcomes, and kudos to him for being that voice.
“The best way to predict the future is to create it” (Alan Kay).
My take: our context when we think about the future puts it years away, and that is just not the case anymore. What we think will take ten years is likely to happen in two. We can’t create the future if we don’t focus on it through an exponential lens.
Having founded my own startup a few years ago, I am familiar with why founders go through the pain & grit of building their own company. The statistics around startup survival rates show that the risk is high, but the potential reward, both financially & emotionally, is also significant.
In my case, risk was defined by the amount of money I invested in the venture plus the opportunity cost in case the startup went nowhere. The latter relates to the fact that I earned no salary at the beginning & that when I committed to that specific idea I was instantaneously saying “no” to many other opportunities and potential career advancements. The reward was two-fold as well: the first part was the attractive financial outcome of a potential exit; the second was the freedom to chase opportunities as they appear, doing what I want and how I want it.
Once I raised capital from investors, I basically traded reward for reduced risk. I started paying myself a small salary and anticipated that more resources would increase the success likelihood of the startup.
This pattern of weighing risk against reward was crystal clear in my mind… until I joined the arena of corporate venture building. During one of my first projects, I was tasked with the creation of a startup for a blue-chip corporate client, and I was immediately puzzled by the reasoning behind this endeavor.
Ultimately, corporate decisions are also guided by weighing risk against reward: if corporates don’t take risks and innovate, they might be left behind and, in some cases, join the once-great-now-extinct corporate hall of shame. That’s why they invest in research and development, spend hard-earned cash on mergers and acquisitions, and start innovation programs. But my interest was more at the micro level: what reasoning does my corporate client follow to decide if and how to found a specific new venture?
Having thought about it a lot, I believe that at the micro level corporates weigh investment against control. Investment is the level of capital, manpower & political will provided by the corporate to propel the venture towards exit, break-even or strategic relevance. Control is the ability to steer the venture towards the strategic goals the leadership team has in mind while defining the boundaries of what can & cannot be done.
In the startup case, the risk/reward is typically shared between the founders and external investors. In a corporate venture building case, the investment/control can be shared between the corporate, an empowered founder team and also external investors.
I am still in the middle of the corporate decision-making process, but I wanted to share the scenarios we are using to guide the discussions on how to structure the new venture. Before I do, I would like to mention that the considerations of investment vs. control take place at three different stages of the venture’s existence:
• Incubation: develop & validate the idea
• Acceleration: validate the business model, incl. product, operations & customer acquisition (find the winning formula)
• Growth: replicate the formula to grow exponentially
Based on that, three main scenarios are being considered to found the new venture.
Scenario 1: Control & Grow
Full investment & control during incubation & acceleration
Shared investment & control during the growth stage
By definition, the incubation and acceleration stages are less capital-intensive and are the moment when key strategic decisions that shape the future business are made. In these stages, the corporate is interested in maintaining full control of the venture while absorbing the whole investment. Only when the venture enters the capital-intensive growth stage does it become necessary to “share the burden” with other institutional or strategic investors. This scenario is suitable for ventures of high strategic value, especially ones leveraging core assets and know-how of the corporate mothership.
Scenario 2: Spread the Bets
Lower investment & control during all stages
In this case, the corporate initiator empowers a founder team and joins the project much like an external investor would at the Seed and Series A stages of a startup. They agree on a broad vision, provide the funding and retain a portion of the shares, with shareholder meetings in between to track progress. Beyond that, they let the founder team do their thing. External investors can join at any funding round to share the investment tickets. The corporate has lower control and investment from the get-go and can increase its influence only when new funding rounds are required or via an acquisition offer. This scenario is suitable for ventures in which the corporate can function as the first client or use its network to manufacture, market or distribute the product or service.
Scenario 3: Build, operate & transfer
Lower investment & control during incubation & acceleration
Full investment & control during the growth stage
The venture is initially built by a founder team or external partners (often a consultancy). Only once they have successfully completed the incubation and acceleration stages does the corporate have the right, or the obligation, to absorb the business. Unlike scenario 2, the corporate gains stronger control over the trajectory of the business during its initial stages by defining what a “transfer” event looks like. The investment necessary to put together a strong founder team is reduced by the reward of a pre-defined & short-term exit event. The initial investment can be further reduced by the participation of business angels, also motivated by a clear path to exit and access to a new source of deal flow. This scenario is suitable for ventures closely linked to the core business of the corporate, where speed & excellence of execution are key.
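To make the three scenarios easier to compare side by side, they can be encoded as a small lookup table of investment/control levels per stage. This is a minimal sketch; the stage names follow the list above, but the "full"/"shared"/"low" level labels are my own shorthand, not a formal framework.

```python
# Hypothetical shorthand for the three venture-building scenarios.
# Levels: "full"   = corporate carries all investment & control,
#         "shared" = split with founders/external investors,
#         "low"    = corporate acts mostly like an external investor.
SCENARIOS = {
    "control_and_grow": {
        "incubation": "full", "acceleration": "full", "growth": "shared",
    },
    "spread_the_bets": {
        "incubation": "low", "acceleration": "low", "growth": "low",
    },
    "build_operate_transfer": {
        "incubation": "low", "acceleration": "low", "growth": "full",
    },
}

def profile(scenario: str) -> str:
    """Render one scenario's investment/control level per stage."""
    stages = SCENARIOS[scenario]
    return ", ".join(f"{stage}: {level}" for stage, level in stages.items())

print(profile("control_and_grow"))
# incubation: full, acceleration: full, growth: shared
```

Laying the scenarios out this way makes the key contrast visible at a glance: scenarios 1 and 3 are mirror images of each other around the growth stage, while scenario 2 keeps the corporate's footprint low throughout.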
There is obviously no right or wrong. Each scenario can make sense depending on the end goal of the corporate. Furthermore, there are surely new scenarios and variations of the above. What is important, in my opinion, is to openly discuss which road to take. If the client can’t discern the alternatives and their consequences, you risk a “best of both worlds” mindset where expectations regarding investment & control don’t match. If that is the case, you will be in for a tough ride.
Emotions are continually affecting our thought processes and decisions, below the level of our awareness. And the most common emotion of them all is the desire for pleasure and the avoidance of pain. Our thoughts almost inevitably revolve around this desire; we simply recoil from entertaining ideas that are unpleasant or painful to us. We imagine we are looking for the truth, or being realistic, when in fact we are holding on to ideas that bring a release from tension and soothe our egos, make us feel superior. This pleasure principle in thinking is the source of all of our mental biases. If you believe that you are somehow immune to any of the following biases, it is simply an example of the pleasure principle in action. Instead, it is best to search and see how they continually operate inside of you, as well as learn how to identify such irrationality in others.
These biases, by distorting reality, lead to the mistakes and ineffective decisions that plague our lives. Being aware of them, we can begin to counterbalance their effects.
1) Confirmation Bias
I look at the evidence and arrive at my decisions through more or less rational processes.
To hold an idea and convince ourselves we arrived at it rationally, we go in search of evidence to support our view. What could be more objective or scientific? But because of the pleasure principle and its unconscious influence, we manage to find that evidence that confirms what we want to believe. This is known as confirmation bias.
We can see this at work in people’s plans, particularly those with high stakes. A plan is designed to lead to a positive, desired objective. If people considered the possible negative and positive consequences equally, they might find it hard to take any action. Inevitably they veer towards information that confirms the desired positive result, the rosy scenario, without realizing it. We also see this at work when people are supposedly asking for advice. This is the bane of most consultants. In the end, people want to hear their own ideas and preferences confirmed by an expert opinion. They will interpret what you say in light of what they want to hear; and if your advice runs counter to their desires, they will find some way to dismiss your opinion, your so-called expertise. The more powerful the person, the more they are subject to this form of the confirmation bias.
When investigating confirmation bias in the world, take a look at theories that seem a little too good to be true. Statistics and studies are trotted out to prove them, and these are not very difficult to find once you are convinced of the rightness of your argument. On the Internet, it is easy to find studies that support both sides of an argument. In general, you should never accept the validity of people’s ideas because they have supplied “evidence.” Instead, examine the evidence yourself in the cold light of day, with as much skepticism as you can muster. Your first impulse should always be to find the evidence that disconfirms your most cherished beliefs and those of others. That is true science.
2) Conviction Bias
I believe in this idea so strongly. It must be true.
We hold on to an idea that is secretly pleasing to us, but deep inside we might have some doubts as to its truth, and so we go the extra mile to convince ourselves: to believe in it with great vehemence, and to loudly contradict anyone who challenges us. How can our idea not be true, we tell ourselves, if it brings out such energy in us to defend it? This bias is revealed even more clearly in our relationship to leaders: if they express an opinion with heated words and gestures, colorful metaphors and entertaining anecdotes, and a deep well of conviction, it must mean they have examined the idea carefully and therefore express it with such certainty. Those, on the other hand, who express nuances, whose tone is more hesitant, reveal weakness and self-doubt. They are probably lying, or so we think. This bias makes us easy prey for salesmen and demagogues who display conviction as a way to convince and deceive. They know that people are hungry for entertainment, so they cloak their half-truths with dramatic effects.
3) Appearance Bias
I understand the people I deal with; I see them just as they are.
We do not see people as they are, but as they appear to us. And these appearances are usually misleading. First, people have trained themselves in social situations to present the front that is appropriate and that will be judged positively. They seem to be in favor of the noblest causes, always presenting themselves as hardworking and conscientious. We take these masks for reality. Second, we are prone to fall for the halo effect: when we see certain negative or positive qualities in a person (social awkwardness, intelligence), we infer other positive or negative qualities that fit with them. People who are good-looking generally seem more trustworthy, particularly politicians. If a person is successful, we imagine they are probably also ethical, conscientious and deserving of their good fortune. This obscures the fact that many people who get ahead have done so through less-than-moral actions, which they cleverly disguise from view.
4) The Group Bias
My ideas are my own. I do not listen to the group. I am not a conformist.
We are social animals by nature. The feeling of isolation, of difference from the group, is depressing and terrifying. We experience tremendous relief to find others who think the same way as we do. In fact, we are motivated to take up ideas and opinions because they bring us this relief. We are unaware of this pull and so imagine we have come to certain ideas completely on our own. Look at people who support one party or the other, one ideology: a noticeable orthodoxy or correctness prevails, without anyone saying anything or applying overt pressure. If someone is on the right or the left, their opinions will almost always follow the same direction on dozens of issues, as if by magic, and yet few would ever admit this influence on their thought patterns.
5) The Blame Bias
I learn from my experience and mistakes.
Mistakes and failures elicit the need to explain. We want to learn the lesson and not repeat the experience. But in truth, we do not like to look too closely at what we did; our introspection is limited. Our natural response is to blame others, circumstances, or a momentary lapse of judgment. The reason for this bias is that it is often too painful to look at our mistakes. It calls into question our feelings of superiority. It pokes at our ego. We go through the motions, pretending to reflect on what we did. But with the passage of time, the pleasure principle rises and we forget what small part in the mistake we ascribed to ourselves. Desire and emotion will blind us yet again, and we will repeat exactly the same mistake and go through the same mild recriminating process, followed by forgetfulness, until we die. If people truly learned from their experience, we would find few mistakes in the world, and career paths that ascend ever upward.
6) Superiority Bias
I’m different. I’m more rational than others, more ethical as well.
Few would say this to people in conversation. It sounds arrogant. But in numerous opinion polls and studies, when asked to compare themselves to others, people generally express a variation of this. It’s the equivalent of an optical illusion — we cannot seem to see our faults and irrationalities, only those of others. So, for instance, we’ll easily believe that those in the other political party do not come to their opinions based on rational principles, but those on our side have done so. On the ethical front, few will ever admit that they have resorted to deception or manipulation in their work, or have been clever and strategic in their career advancement. Everything they’ve got, or so they think, comes from natural talent and hard work. But with other people, we are quick to ascribe to them all kinds of Machiavellian tactics. This allows us to justify whatever we do, no matter the results.
We feel a tremendous pull to imagine ourselves as rational, decent, and ethical. These are qualities highly promoted in the culture. To show signs otherwise is to risk great disapproval. If all of this were true — if people were rational and morally superior — the world would be suffused with goodness and peace. We know the reality, however, and so some of us, perhaps all of us, are merely deceiving ourselves. Rationality and ethical qualities must be achieved through awareness and effort. They do not come naturally. They come through a maturation process.
Why Olam is Deploying Tech First, Then Thinking About CVC
“We have realized that some companies have gone down the wrong path by adopting the approach of inventing the problem. They find a technology that’s exciting and try to force-fit that technology for a problem that they don’t have. This is why we want to be very deliberate about the problems first, and then come to technology.”
“I’ll give you an example of blockchain. There’s so much hype about blockchain around the world. And in our industry, there are a few companies that have done some pilots. But we have not gone down that route, because we have not seen a tangible, scalable use case that could give us significant benefits for adopting blockchain.”
If one company could benefit from the efficiencies new technology can bring, it’s Olam, with a complex supply chain that grows, sources, processes, manufactures, transports, trades and markets 47 different agrifood products across 70 countries. These include commodities like coffee, cotton, cocoa, and palm oil that are farmed by over 4 million farmers globally, most of whom are smallholders in developing countries.
But the third-largest agribusiness in the world has been noticeably absent from the agrifood corporate venture capital scene in recent years, instead opting mostly to build its own technology solutions in-house. (It did deploy Phytech’s “FitBit for crops” in Australia in 2016, one example of adopting outside technology.)
For traceability, and perhaps an alternative to blockchain-enabled technology, there’s Olam AtSource, with a digital dashboard that provides Olam customers with access to rich data, advanced foot-printing, and granular traceability. Olam hopes AtSource will help its customers “meet multiple social and environmental targets thereby increasing resilience in supply chains.”
Olam has also developed and deployed the Olam Farmer Information System (OFIS), a smallholder farm data collection platform providing smallholders with management tools and Olam customers with information about the provenance of products.
“OFIS solves the information issue by providing a revolutionary tech innovation for collecting and analyzing first mile data,” Brayn-Smith told AgFunderNews when OFIS launched in 2017. “We are able to register thousands of smallholders, GPS map their farms and local infrastructure, collect all types of farm gate level data such as the age of trees, and record every training intervention.”
This product is a clear example of a “transformational technology” that solves a problem for Olam and also gives the business efficiencies that could impact the bottom line, according to Sundararajan.
And Olam has built on top of OFIS to transact directly with cocoa farmers in Indonesia where Olam is publishing prices to around 30,000 farmers and buying cocoa directly from them.
“Before technology was available, it was almost impossible for any company to buy directly from the farmers, just because of the sheer volume and number of farmers. But, with technology, you have a far better reach, which will allow us to directly communicate with them,” Sundararajan tells AgFunderNews.
“Now the farmer can just accept a price and type in that he wants to supply it, and we arrange the complete logistics to pick up the cocoa from the farmer,” he says, adding that the company’s country heads in other parts of the world are keen to launch this service in their markets. The company is starting next in Peru, then Guatemala, Colombia, Cote d’Ivoire, Ghana, and Nigeria.
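The publish-a-price, farmer-accepts, logistics-follow workflow Sundararajan describes can be sketched in a few lines. This is a purely hypothetical illustration of the flow; the class and field names are ours, not Olam's actual system.

```python
from dataclasses import dataclass, field

@dataclass
class DirectBuyingDesk:
    """Hypothetical sketch of a publish-price / accept-offer flow."""
    published_price: float = 0.0          # price per kg, published to farmers
    orders: list = field(default_factory=list)

    def publish_price(self, price_per_kg: float):
        self.published_price = price_per_kg

    def accept_offer(self, farmer_id: str, quantity_kg: float) -> dict:
        # The farmer accepts the published price and types in a quantity;
        # the buyer then arranges pickup logistics.
        order = {
            "farmer": farmer_id,
            "qty_kg": quantity_kg,
            "price": self.published_price,
            "total": round(self.published_price * quantity_kg, 2),
            "pickup_scheduled": True,
        }
        self.orders.append(order)
        return order

desk = DirectBuyingDesk()
desk.publish_price(2.50)                        # broadcast today's price
order = desk.accept_offer("farmer-0042", 120)   # a farmer accepts for 120 kg
print(order["total"])  # 300.0
```

The point of the sketch is the inversion the quote describes: the company no longer negotiates farm by farm; it broadcasts a price once and handles acceptances at scale.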
Olam as Disruptor
While Olam deployed OFIS to solve for a problem, it also gives the company the opportunity to be disruptive in the markets it serves, according to Sundararajan.
As well as looking for transformational ways to solve specific problems, Olam also looks at “any ideas we have that will give Olam an opportunity to disrupt our own industry. So, we end up being a disrupter and not be at the risk of being disrupted by a new player,” he says.
“This fundamental shift in terms of Olam getting an opportunity to directly interact and transact with farmers is a starting point of disruption for us. This is a very complex point, which will bring into play several technologies for us to be able to successfully scale it.”
Going down this route, Sundararajan says Olam could end up providing farmers with new services and creating “separate streams of revenue that has nothing to do with what we were doing five or 10 years back.”
In this vein, Olam is working on deploying a technology to detect moisture — and therefore quality — in its commodities. The company is also looking at financial tools for its farmers.
“Looking at our business model, we believe that we have a few very good opportunities at the first mile of the supply chain and the last mile of the supply chain to change the way we compete,” says Sundararajan. “We believe that since we have control of the supply chain end-to-end, we can use technology to differentiate our service to customers in a way that our competitors will find difficult to replicate.”
Informal Startup Interactions
Olam does interact with startups on a selective basis, and Sundararajan’s participation in Rethink’s Singapore conference, as well as a hackathon it took part in with Fujitsu in Australia last year, are two examples. Sundararajan said he is considering an idea like The Unilever Foundry, but the company has yet to create a formal process or framework for these interactions. And the same goes for corporate venture capital.
“We believe that our digital journey has to mature much more, where we should demonstrate success within, by implementing the solutions that we’re developing, before even considering investing in venture capital. We believe that we have a very good strategy and a suite of products, stretching across from farm to the factories, to digitize our operations, whether it is a digital buying model, or whether it is spot factories in terms of predictive maintenance or increasing yield or it’s drone imagery from our own plantations, and productivity apps for employees.”
Building a rocket is hard. Each component requires careful thought and rigorous testing, with safety and reliability at the core of the designs. Rocket scientists and engineers come together to design everything from the navigation course to control systems, engines and landing gear. Once all the pieces are assembled and the systems are tested, we can put astronauts on board with confidence that things will go well.
If artificial intelligence (AI) is a rocket, then we will all have tickets on board some day. And, as in rockets, safety is a crucial part of building AI systems. Guaranteeing safety requires carefully designing a system from the ground up to ensure the various components work together as intended, while developing all the instruments necessary to oversee the successful operation of the system after deployment.
At a high level, safety research at DeepMind focuses on designing systems that reliably function as intended while discovering and mitigating possible near-term and long-term risks. Technical AI safety is a relatively nascent but rapidly evolving field, with its contents ranging from high-level and theoretical to empirical and concrete. The goal of this blog is to contribute to the development of the field and encourage substantive engagement with the technical ideas discussed, and in doing so, advance our collective understanding of AI safety.
In this inaugural post, we discuss three areas of technical AI safety: specification, robustness, and assurance. Future posts will broadly fit within the framework outlined here. While our views will inevitably evolve over time, we feel these three areas cover a sufficiently wide spectrum to provide a useful categorisation for ongoing and future research.
Specification: define the purpose of the system
You may be familiar with the story of King Midas and the golden touch. In one rendition, the Greek god Dionysus promised Midas any reward he wished for, as a sign of gratitude for the king having gone out of his way to show hospitality and graciousness to a friend of Dionysus. In response, Midas asked that anything he touched be turned into gold. He was overjoyed with this new power: an oak twig, a stone, and roses in the garden all turned to gold at his touch. But he soon discovered the folly of his wish: even food and drink turned to gold in his hands. In some versions of the story, even his daughter fell victim to the blessing that turned out to be a curse.
This story illustrates the problem of specification: how do we state what we want? The challenge of specification is to ensure that an AI system is incentivised to act in accordance with the designer’s true wishes, rather than optimising for a poorly-specified goal or the wrong goal altogether. Formally, we distinguish between three types of specifications:
ideal specification (the “wishes”), corresponding to the hypothetical (but hard to articulate) description of an ideal AI system that is fully aligned to the desires of the human operator;
design specification (the “blueprint”), corresponding to the specification that we actually use to build the AI system, e.g. the reward function that a reinforcement learning system maximises;
and revealed specification (the “behaviour”), which is the specification that best describes what actually happens, e.g. the reward function we can reverse-engineer from observing the system’s behaviour using, say, inverse reinforcement learning. This is typically different from the one provided by the human operator because AI systems are not perfect optimisers or because of other unforeseen consequences of the design specification.
A specification problem arises when there is a mismatch between the ideal specification and the revealed specification, that is, when the AI system doesn’t do what we’d like it to do. Research into the specification problem of technical AI safety asks the question: how do we design more principled and general objective functions, and help agents figure out when goals are misspecified? Problems that create a mismatch between the ideal and design specifications are in the design subcategory above, while problems that create a mismatch between the design and revealed specifications are in the emergent subcategory.
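The design/revealed mismatch can be made concrete with a toy example of our own (not from the DeepMind post). The ideal specification is "reach the goal cell quickly"; the design specification instead pays +1 per coin collected, and one coin respawns every step. A reward-maximising policy then loops on the coin instead of finishing:

```python
# Toy 1-D track: agent starts at cell 0, goal at cell 5.
# Ideal spec: reach the goal fast. Design spec (proxy reward): +1 per coin;
# the coin at cell 2 respawns every step.
def run(policy, steps=20):
    pos, reward = 0, 0
    for _ in range(steps):
        if pos == 5:            # reached the goal: episode over
            break
        move = policy(pos)      # policy returns -1, 0, or +1
        pos = max(0, min(5, pos + move))
        if pos == 2:            # respawning coin
            reward += 1
    return pos, reward

# Intended behaviour: head straight for the goal, collecting the coin once.
print(run(lambda pos: +1))                                        # (5, 1)

# Reward-maximising behaviour: park on the coin cell and never finish.
print(run(lambda pos: +1 if pos < 2 else (0 if pos == 2 else -1)))  # (2, 19)
```

The second policy earns far more of the designed reward while never satisfying the ideal specification, which is exactly the mismatch the revealed specification exposes.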
For instance, in our AI Safety Gridworlds* paper, we gave agents a reward function to optimise, but then evaluated their actual behaviour on a “safety performance function” that was hidden from the agents. This setup models the distinction above: the safety performance function is the ideal specification, which was imperfectly articulated as a reward function (design specification), and then implemented by the agents, producing a specification that is implicitly revealed through their resulting policy.
*N.B.: in our AI Safety Gridworlds paper, we provided a different definition of specification and robustness problems from the one presented in this post.
As another example, consider the boat-racing game CoastRunners analysed by our colleagues at OpenAI (see Figure above from “Faulty Reward Functions in the Wild”). For most of us, the game’s goal is to finish a lap quickly and ahead of other players — this is our ideal specification. However, translating this goal into a precise reward function is difficult, so instead, CoastRunners rewards players (design specification) for hitting targets laid out along the route. Training an agent to play the game via reinforcement learning leads to a surprising behaviour: the agent drives the boat in circles to capture re-populating targets while repeatedly crashing and catching fire rather than finishing the race. From this behaviour we infer (revealed specification) that something is wrong with the game’s balance between the short-circuit’s rewards and the full lap rewards. There are many more examples like this of AI systems finding loopholes in their objective specification.
Robustness: design the system to withstand perturbations
There is an inherent level of risk, unpredictability, and volatility in real-world settings where AI systems operate. AI systems must be robust to unforeseen events and adversarial attacks that can damage or manipulate such systems. Research on the robustness of AI systems focuses on ensuring that our agents stay within safe limits, regardless of the conditions encountered. This can be achieved by avoiding risks (prevention) or by self-stabilisation and graceful degradation (recovery). Safety problems resulting from distributional shift, adversarial inputs, and unsafe exploration can be classified as robustness problems.
To illustrate the challenge of addressing distributional shift, consider a household cleaning robot that typically cleans a petless home. The robot is then deployed to clean a pet-friendly office, and encounters a pet during its cleaning operation. The robot, never having seen a pet before, proceeds to wash the pets with soap, leading to undesirable outcomes (Amodei and Olah et al., 2016). This is an example of a robustness problem that can result when the data distribution encountered at test time shifts from the distribution encountered during training.
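One simple (and far from complete) guard against this failure mode is to monitor test-time inputs against training-time statistics and abstain, or defer to a human, when an input looks out of distribution. A minimal sketch using per-feature z-scores, offered only as an illustration of the idea:

```python
import math

def fit_stats(train_rows):
    """Record per-feature mean and standard deviation from training data."""
    n, dims = len(train_rows), len(train_rows[0])
    means = [sum(r[d] for r in train_rows) / n for d in range(dims)]
    stds = [math.sqrt(sum((r[d] - means[d]) ** 2 for r in train_rows) / n) or 1.0
            for d in range(dims)]
    return means, stds

def looks_in_distribution(x, means, stds, z_max=3.0):
    """Flag inputs where any feature lies more than z_max stds from the
    training mean -- a crude stand-in for 'have I seen something like this?'"""
    return all(abs((xi - m) / s) <= z_max for xi, m, s in zip(x, means, stds))

train = [[1.0, 2.0], [1.2, 1.8], [0.9, 2.1], [1.1, 2.0]]
means, stds = fit_stats(train)
print(looks_in_distribution([1.05, 1.95], means, stds))  # True: familiar input
print(looks_in_distribution([9.0, 2.0], means, stds))    # False: abstain/defer
```

In the cleaning-robot story, the analogous check would have the robot flag the pet as unlike anything in its training data and pause rather than apply soap.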
Adversarial inputs are a specific case of distributional shift in which inputs are specially crafted to trick an AI system.
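The canonical illustration of such crafting is the fast gradient sign method (Goodfellow et al.): nudge every input feature a small, bounded amount in the direction that most changes the model's output. A minimal sketch against a hand-set linear classifier (our toy example, not a method from this post):

```python
# FGSM-style perturbation against a linear scorer: score = w . x + b,
# classified positive if score > 0. The gradient of the score w.r.t. x is w,
# so shifting x by -eps * sign(w) pushes the score down most efficiently
# within an eps-sized box around the original input.
def score(w, b, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def adversarial(w, x, eps):
    sign = lambda v: 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w, b = [2.0, -1.0, 0.5], -0.5
x = [1.0, 0.5, 1.0]                 # score = 1.5, so classified positive
x_adv = adversarial(w, x, eps=0.6)  # small, bounded change to every feature
print(score(w, b, x) > 0, score(w, b, x_adv) > 0)  # True False
```

A bounded tweak to each feature flips the classification, which is exactly what makes adversarial inputs a distinct and harder case of distributional shift.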
Unsafe exploration can result when a system seeks to maximise its performance and attain goals without safety guarantees that hold while it learns and explores in its environment. An example would be the household cleaning robot putting a wet mop in an electrical outlet while learning optimal mopping strategies (García and Fernández, 2015; Amodei and Olah et al., 2016).
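One common mitigation, sketched here as an illustration rather than a complete solution, is to encode prior safety knowledge as an action mask so that neither exploration nor exploitation ever selects a known-unsafe action:

```python
import random

def safe_epsilon_greedy(q_values, is_safe, epsilon=0.1):
    """Pick an action, but never step outside the safe set.

    q_values: dict mapping action -> estimated value
    is_safe:  predicate encoding prior safety knowledge, e.g. "never mop
              near an electrical outlet" for the cleaning robot above.
    """
    safe_actions = [a for a in q_values if is_safe(a)]
    if not safe_actions:
        raise RuntimeError("no safe action available; defer to a human")
    if random.random() < epsilon:
        return random.choice(safe_actions)       # explore, but only safely
    return max(safe_actions, key=q_values.get)   # exploit the best safe action

q = {"mop_kitchen": 0.4, "mop_near_outlet": 0.9, "dock_and_charge": 0.2}
action = safe_epsilon_greedy(q, is_safe=lambda a: a != "mop_near_outlet")
print(action)  # never "mop_near_outlet", despite its high estimated value
```

The limitation is plain from the sketch: the guarantee is only as good as the hand-written `is_safe` predicate, which is why unsafe exploration remains a research problem rather than an engineering checkbox.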
Assurance: monitor and control system activity
Although careful safety engineering can rule out many safety risks, it is difficult to get everything right from the start. Once AI systems are deployed, we need tools to continuously monitor and adjust them. Our last category, assurance, addresses these problems from two angles: monitoring and enforcing.
Monitoring comprises all the methods for inspecting systems in order to analyse and predict their behaviour, both via human inspection (of summary statistics) and automated inspection (to sweep through vast amounts of activity records). Enforcement, on the other hand, involves designing mechanisms for controlling and restricting the behaviour of systems. Problems such as interpretability and interruptibility fall under monitoring and enforcement respectively.
AI systems are unlike us, both in their embodiments and in their way of processing data. This creates problems of interpretability; well-designed measurement tools and protocols allow the assessment of the quality of the decisions made by an AI system (Doshi-Velez and Kim, 2017). For instance, a medical AI system would ideally issue a diagnosis together with an explanation of how it reached the conclusion, so that doctors can inspect the reasoning process before approval (De Fauw et al., 2018). Furthermore, to understand more complex AI systems we might even employ automated methods for constructing models of behaviour using Machine theory of mind (Rabinowitz et al., 2018).
Finally, we want to be able to turn off an AI system whenever necessary. This is the problem of interruptibility. Designing a reliable off-switch is very challenging: for instance, because a reward-maximising AI system typically has strong incentives to prevent this from happening (Hadfield-Menell et al., 2017); and because such interruptions, especially when they are frequent, end up changing the original task, leading the AI system to draw the wrong conclusions from experience (Orseau and Armstrong, 2016).
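The mechanics of an off-switch can be sketched as a wrapper that lets an external interrupt override the policy with a safe fallback. To be clear, this sketch (ours, with hypothetical names) only enforces the interruption; the hard problem the citations above address is making the agent indifferent to being interrupted, which no wrapper solves by itself:

```python
class InterruptiblePolicy:
    """Sketch of an interruption wrapper: an external signal overrides the
    agent's chosen action with a safe fallback. This enforces interruptions
    but does not remove the agent's incentive to avoid them, nor correct the
    biased experience that frequent interruptions create."""

    def __init__(self, policy, safe_action="no_op"):
        self.policy = policy
        self.safe_action = safe_action
        self.interrupted = False

    def act(self, observation):
        if self.interrupted:
            return self.safe_action    # human override wins unconditionally
        return self.policy(observation)

agent = InterruptiblePolicy(policy=lambda obs: "move_forward")
print(agent.act("obs"))   # move_forward
agent.interrupted = True
print(agent.act("obs"))   # no_op
```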
We are building the foundations of a technology which will be used for many important applications in the future. It is worth bearing in mind that design decisions which are not safety-critical at the time of deployment can still have a large impact when the technology becomes widely used. Although convenient at the time, once these design choices have been irreversibly integrated into important systems the tradeoffs look different, and we may find they cause problems that are hard to fix without a complete redesign.
Two examples from the history of programming are the null pointer, which Tony Hoare refers to as his ‘billion-dollar mistake’, and the gets() routine in C. If early programming languages had been designed with security in mind, progress might have been slower, but computer security today would probably be in a much stronger position.
With careful thought and planning now, we can avoid building in analogous problems and vulnerabilities. We hope the categorisation outlined in this post will serve as a useful framework for methodically planning in this way. Our intention is to ensure that AI systems of the future are not just ‘hopefully safe’ but robustly, verifiably safe — because we built them that way!
We look forward to continuing to make exciting progress in these areas, in close collaboration with the broader AI research community, and we encourage individuals across disciplines to consider entering or contributing to the field of AI safety research.
NEA is one of the most well-known investors around, and the firm also takes the crown as the most active VC investor in Series A and B rounds in the US so far in 2018. Andreessen Horowitz, Accel and plenty of the other usual early-stage suspects are on the list, too.
Also included is a pair of names that have been in the news this year for backing away from the traditional VC model: Social Capital and SV Angel. The two are on the list thanks to deals completed earlier in the year.
Just how much are these prolific investors betting on Series A and Series B rounds? And at what valuation? We’ve used data from the PitchBook Platform to highlight a collection of the top venture capital investors in the US (excluding accelerators) and provide information about the Series A and B rounds they’ve joined so far this year. Click on the graphic below to open a PDF.
Many corporations are pinning their futures on their venture investment portfolios. If you can’t beat startups at the innovation game, go into business with them as financial partners.
Though many technology companies have robust venture investment initiatives—Alphabet’s venture funding universe and Intel Capital’s prolific approach to startup investment come to mind—other corporations are just now doubling down on venture investments.
And 2018 is on track to set a record for U.S. corporate involvement in venture deals. We come to this conclusion after analyzing corporate venture investment patterns of the top 100 publicly traded U.S.-based companies (as ranked by market capitalizations at time of writing). The chart below shows this investing activity, broken out by stage, for each year since 2007.
A few things stick out in this chart.
The number of rounds these big corporations invest in is on track to set a new record in 2018. Keep in mind that there’s a little over one full quarter left in the year. And although the holidays tend to bring a modest slowdown in venture activity over time, there’s probably sufficient momentum to break prior records.
The other thing to note is that our subset of corporate investors have, over time, made more investments in seed and early-stage companies. In 2018 to date, seed and early-stage rounds account for over sixty percent of corporate venture deal flow, which may creep up as more rounds get reported. (There’s a documented reporting lag in angel, seed, and Series A deals in particular.) This is in line with the past couple of years.
Finally, we can view this chart as a kind of microcosm for blue-chip corporate risk attitudes over the past decade. It’s possible to see the fear and uncertainty of the 2008 financial crisis causing a pullback in risk capital investment.
Even though the crisis started in 2008, the stock market didn’t bottom out until 2009. You can see that bottom reflected in the low point of corporate venture investment activity. The economic recovery that followed, bolstered by cheap interest rates, ultimately yielded the slightly bloated and strung-out market for both public and private investors that we’re in the thick of now.
Whereas most traditional venture firms are beholden to their limited partners, that investor base is often spread rather thinly between different pension funds, endowments, funds-of-funds, and high-net worth family offices. With rare exception, corporate venture firms have just one investor: the corporation itself.
More often than not, that results in corporate venture investments being directionally aligned with corporate strategy. But corporations also invest in startups for the same reason garden-variety venture capitalists and angels do: to own a piece of the future.
A Note On Data
Our goal here was to develop as full a picture as possible of a corporation’s investing activity, which isn’t as straightforward as it sounds.
We started with a somewhat constrained dataset: the top 100 U.S.-based publicly traded companies, ranked by market capitalization at time of writing. We then traversed through each corporation’s network of sub-organizations as represented in Crunchbase data. This allowed us to collect not just the direct investments made by a given corporation, but investments made by its in-house venture funds and other subsidiaries as well.
It’s a similar method to what we did when investigating Alphabet’s investing universe. Using Alphabet as an example, we were able to capture its direct investments, plus the investments associated with its sub-organizations, and their sub-organizations in turn. Except instead of doing that for just one company, we did it for a list of 100.
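The traversal described above amounts to walking each corporation's tree of sub-organizations and pooling every node's investments. A minimal sketch over a hypothetical parent/child mapping (the data layout is illustrative and not Crunchbase's actual schema or API):

```python
from collections import deque

# Hypothetical data: parent -> sub-organizations, org -> direct investments.
sub_orgs = {
    "Alphabet": ["GV", "CapitalG"],
    "GV": [],
    "CapitalG": [],
}
investments = {
    "Alphabet": ["DeepMind"],
    "GV": ["Uber", "Slack"],
    "CapitalG": ["Stripe"],
}

def all_investments(corp):
    """Breadth-first walk through sub-organizations (and their
    sub-organizations in turn), collecting each node's investments."""
    seen, queue, found = set(), deque([corp]), []
    while queue:
        org = queue.popleft()
        if org in seen:          # guard against cycles in messy org data
            continue
        seen.add(org)
        found.extend(investments.get(org, []))
        queue.extend(sub_orgs.get(org, []))
    return found

print(all_investments("Alphabet"))  # ['DeepMind', 'Uber', 'Slack', 'Stripe']
```

Running this for each of the 100 companies, instead of just one, is the whole of the method described above.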
This is by no means a perfect approach. It’s possible that corporations have venture arms listed in Crunchbase, but for one reason or another the venture arm isn’t listed as a sub-organization of its corporate parent. Additionally, since most of the corporations on this list have a global presence despite being based in the United States, it’s likely that some of them make investments in foreign markets that don’t get reported.
Outsourcing may be the only way to access cost-effective expertise in artificial intelligence, but beware of the possible pitfalls
Artificial intelligence (AI) has fuelled science fiction for decades. Yet now, with technology having caught up with and overtaken human imagination, its capabilities are becoming science fact and too powerful for business leaders to ignore.
AI, even in its relative infancy, is enabling C-suiters to redraw all aspects of their organisations. Those who embrace AI and related nascent digital technologies – automation, robotics, machine-learning, big data – are already gaining a significant advantage over laggards.
“Everything invented in the past 150 years will be reinvented using AI within the next 15 years,” predicts Randy Dean, San Francisco-based chief business officer at Launchpad.AI. “Every industry is going to be affected; almost every enterprise problem is ripe for an AI-derived solution or improvement. Early adopters will have an advantage.”
The potential corporate market for AI services, software and hardware is colossal. International Data Corporation research forecasts it will reach $57.6 billion in 2021, significantly more than the $12 billion spent last year. Further, PwC estimates AI could contribute up to $15.7 trillion to the global economy in 2030, a higher figure than the current output of China and India combined.
However, incorporating AI into an organisation can be extremely challenging and risky, especially for large enterprises lumbered with legacy systems. While cutting-edge AI applications are increasingly commonplace, with a mushrooming number of organisations, from tech giants to startups, developing working solutions, the investment of money and time involved can be overwhelming.
These are costs that have to be taken on, though, if an organisation, regardless of size, wants to thrive in the future.
Outsourcing AI is one option many are seriously considering. The benefits of using third-party service providers for AI solutions are manifold. They include greater access to global talent pools – a key point when you consider that leading AI data scientists, because of their scarcity, can command seven-figure salaries – and the ability to tap into specialist skillsets and experts who will be able to solve company-specific problems with greater efficiency, thus helping business leaders choose the appropriate technology for their organisation.
“Even the largest technology brands are outsourcing AI projects,” says Dr Paula Parpart, founder of Brainpool, a worldwide network of AI and machine-learning luminaries. “Demand for top AI experts and data scientists is far outstripping supply, which is why outsourcing is a compelling option. Some of the specialisms required are so niche, the talent so hard to find and contractual relationships so tight that it could take two years for an organisation to fill a role.”
Further, thanks to that level of expertise, tried-and-tested models are quicker to implement compared with crafting them internally. And if an outsourced AI solution does not quite work and needs to be switched, the cost and risk burden is markedly less than the in-house alternative. And, at this stage of AI’s maturity, experimentation is advised.
A recent Accenture report predicts that AI will add £654 billion to the UK economy by 2035. Vinod Patel, managing director of Accenture Operations, says: “A significant portion of that will be outsourced to third-party service providers. Increasingly, organisations are looking at external parties to drive innovation.”
Advising caution, Mr Dean says: “It’s important to note that AI is not magic and it is not always successful in finding improvements. But outsourcing provides ready access to the required talent today versus waiting to recruit and hire people, which will be very hard, time consuming and expensive.
“There are millions of AI opportunities across the enterprise, though there is very little off-the-shelf software. AI is a diverse field and often requires an ensemble of approaches to achieve success. It will take multiple years for organisations to begin to take full advantage of AI, but the sooner business leaders start understanding what AI can do for them, and experimenting with it, the more likely they are to come out on the other side, successful in the marketplace.”
Marco Rimini, chief development officer of Mindshare Worldwide, agrees that AI “if applied correctly, will empower an organisation to operate at levels previously out of reach of manual capability and ability, which in turn will lead to significant opportunities, irrespective of industry”.
He echoes Cathy O’Neil’s observation, in Weapons of Math Destruction, that poorly thought-through AI applications can be highly damaging; another reason to engage experienced third parties in the AI space. “If a business incorrectly applies AI, or ignores it, it will enforce negative change and that could be fatally damaging,” says Mr Rimini.
He warns that it is critical for business leaders who choose to outsource AI to guard their most important digital assets and data from third parties, and to deliberate over their business strategy, which could be altered unrecognisably by potent new technology.
Mr Rimini cautions: “Whatever size, you need to invest in-house to determine the role of AI, or risk outsourcing the core of your organisation and also becoming overdependent on the outsourcing company, which in a worst-case scenario could become a direct competitor.
“AI is not an additional service, or function in itself, but can be the heartbeat of a business. Ultimately, you shouldn’t outsource the core of your business.”
Insight: Pros and cons of outsourcing AI
The big positive in outsourcing is that it gives organisations a means of accessing top-level artificial intelligence (AI) experts who are normally extremely difficult to find and would be expensive to have on the payroll permanently. Their day rates might seem high, but they do not compare with the seven-figure salaries if they were full-time employees. Outsourcing opens up AI to benefit more organisations and democratises access to skills… Dr Peter Bebbington, chief technology officer at Brainpool
Planning AI projects without previous experience can result in mistakes and even lead to the entirely wrong approach being taken. External providers can draw on experience and knowledge to identify both the right approach and project for the business. Additionally, if a project isn’t delivering, the organisation can walk away from an external data company whenever they need to… Richard Potter, chief executive of Peak
Companies will naturally benefit from utilising external AI-powered solutions to streamline their business because these providers are tapping into a global network of dynamic, demand-driven data. To try and replicate this scale of data would be a cost-inefficiency, if even possible at all… Mark O’Shea, chief technology officer at Maistro
There will likely be a lack of domain expertise when outsourcing to third-party AI developers. This can mean education is required before these developers are able to provide industry-specific AI application. Organisations that choose to outsource also lose the ability to groom their own specialist teams… Nav Dhunay, co-founder and chief executive of Imaginea.AI
By outsourcing you relinquish a certain amount of output and control. And outsourcing AI puts pressure on an organisation’s “plumbing”, which is responsible for transporting the intelligence to the person or process that can actually use it to drive business transformation. It is imperative, therefore, to optimise communication to make sure you get what you want, rather than receive what the outsourcer wishes to give you… Will Edward, chief commercial officer at Autologyx
The biggest pitfall in AI, and therefore outsourcing AI capabilities, is the assumption that it will solve everything. Organisations need to apply a level of discovery to their current and, importantly, their future business to determine the applicability of AI. The prioritisation of AI will help business leaders determine where to start and the journey to increase AI, and in turn this will frame the strategy for outsourcing AI… Vinod Patel, managing director of Accenture Operations
Innovation in large enterprises once occurred over the course of decades, but today, that’s a luxury many enterprises no longer have. In 1965, the average company remained on the S&P 500 for 33 years. By 1990, that tenure had shrunk to 20 years, and by 2026 it’s expected to shrink to 14 years.
Rapid innovation is a prerequisite for survival.
Yet, many say enterprises don’t have what it takes. They take too long to adopt solutions and get bogged down by legacy systems. Their progress is incremental rather than disruptive.
But the biggest companies in the world aren’t sitting still. They can be catalysts for innovation and first adopters of new technology, if they understand how to create a framework for innovation within their company. At Sapphire, we collaborate regularly with corporate innovators that seek to navigate dynamic new ecosystems often populated by disruptive startups and emerging technologies.
Shakti Jauhar, head of Global HR Operations and Shared Service at PepsiCo, believes in the importance of constant innovation and created a program that helps his team evaluate and bring in new technology innovations from startups in the HR space. Called the 90/90, the program has seen early success, so I sat down with Shakti to learn more about the framework he uses to speed up startup collaboration, one that any enterprise can leverage to make fast-moving innovation part of its ethos.
Below is an excerpt from our conversation in which Shakti shares the initial steps a company should take to create a framework for working with startups.
Step 1: Create Alignment and Agree on Objectives
This first step may seem obvious, but is often overlooked. Misalignment can and will kill every attempt to innovate in the enterprise. Enterprises are complex machines that rely on many systems running in tandem. If the legal team, IT department and procurement each have conflicting priorities, it will be difficult to succeed. With the increasing trend of business driving tech adoption independent of IT, CXOs would also do well to align closely with CIOs and IT leadership on questions of specific innovation priorities, where to partner vs. build, speed of adoption, appetite for technological risk and so on.
At PepsiCo, an important alignment step is to identify needs or areas of opportunity and then present them to the startup and innovation communities for solutions. Problem statements have ranged from centers of excellence (CoEs) looking to implement a new program to efficiency plays. Every six to nine months, the team identifies a small group of startups and invites them to meet with stakeholders around the agreed-upon problem statements. This alignment is a key enabler of the eventual success of startups graduating through the program.
Achieving alignment will put in place a realistic understanding both of what is possible and how it will play out across an organization. Working out internal problems is the foundation of an internal framework for innovation and CXOs should do this well before they bring startups into the equation.
Step 2: Ready Internal Infrastructure and Platforms
Another critical step is reviewing the infrastructure that a company has in place and updating it if necessary. As a key first step, PepsiCo has re-architected its core HR system onto a single platform across 83 countries for ~260,000 employees. This, along with other technology deployments, enabled it to create the equivalent of a “plug and play” system, where new solutions could be adopted into the core platform.
Allowing some experimentation on this platform can also be an enabler of startup success. For example, some platform partners have adopted ideas from startups, launching an app store or making an environment available for startups to write their own APIs into the HR platform. Taking a platform-based approach has been a holy grail in the enterprise for some time, and for PepsiCo HR this infrastructure is a key ingredient to accelerate serving up innovation at scale for employees.
Step 3: Build a Blueprint
The next step is to create a blueprint which enables finding, incorporating and scaling new processes. This allows enterprises to lock in their ability to innovate for years to come and continually work with the best emerging startups in their field.
As part of the 90/90 program, participating startups commit 90 days to both deploy their solution within PepsiCo and demonstrate its ROI. This provides a clear framework for all parties to quickly evaluate success. For PepsiCo, that means evaluating solutions based on how they drive broader business goals and address the problem statements. For the startups, it means quickly assessing their readiness to scale to enterprise grade.
To assemble a system for scaling innovation by partnering with startups, enterprises should:
Use their connections with VC firms, founders and angel investors to scout partnership opportunities. For example, PepsiCo’s partnership with Sapphire Ventures has exposed the company to a wide range of startups and emerging technologies that fuel its innovation roadmap.
Specify a hard timeline for testing innovation and partnerships. This helps focus the system on accomplishing set goals. It also standardizes the process for bringing on new tech, making it repeatable.
Focus on finding fit. When dealing with a shorter timeframe, like the 90/90 framework, big investments are not necessary. The real ROI might come from finding something that continually pays for itself in a short time.
The goal of a blueprint for a framework like 90/90 is to keep things moving for the enterprise and to make partnerships easier by laying out a clear vision of how successful adoption of new technology will work.
Step 4: Lean All The Way In
Setting the wheels of innovation in motion is only half of the work in a program like this. The other half is building long-term relationships with the best new companies out there. The companies that find success in a startup-enterprise relationship are open, proactive and willing to make an investment beyond the short-term.
Enterprises also need to keep a close eye on the startups in their industry. But because so many startups fail, enterprises can be wary of spending too much time trying to dissect the space.
That’s a huge mistake. Yes, many startups don’t survive. But over time, startups will evolve the way organizations think about innovation and agility, and ultimately one of them will end up disrupting the business in an unprecedented way. Leaders need to pay close attention to their market to stay on that curve.
Set Up for Success
It’s up to large enterprises to carve out their own future. In today’s world, that means finding ways to innovate at high speeds. Although they certainly have more to coordinate than smaller companies, this doesn’t mean they’re doomed to lag behind.
Instead, savvy global enterprises like PepsiCo are putting themselves at the forefront of innovation in their industries. They’re building long-term partnerships within the startup and venture communities and creating a way for innovation to regularly cycle through their companies. They’re streamlining their internal processes to scale novel solutions, and in doing so, they’re securing the legacy of their companies for years to come.
Corporate venture capital can help agrifood tech startups scale up and expand into new markets, but it can also be difficult to work with, according to agrifood entrepreneurs attending the Seeds & Chips conference in Milan last week.
While corporate investment can bring industry knowledge, technical know-how and distribution channels, some entrepreneurs are wary of taking their money, especially if their objectives do not align.
If a corporate VC has been founded to defend the parent company’s market share or to scout for new acquisition opportunities, there might be a conflict of interest with the companies it is trying to invest in, some entrepreneurs argued.
“(Corporate VC) can probably bring a lot of value. They have a lot of expertise, money, etc. They basically have everything we need. But sometimes I feel like they want us to be their company,” Alvyn Severien, CEO of Algama, the microalgae-based food startup, told AgFunderNews on the sidelines of the event. “The more we discuss, they want some kind of exclusivity; they want to own us.”
Of course, there are corporate VCs who do not operate that way and aim to invest in startups the way a commercial venture capital fund would, added Severien. “I think this is a good attitude to have, and we are open to talking to these people.”
Another agrifood entrepreneur highlighted the different value propositions for a startup in commercial and corporate venture capital.
“Commercial VCs are very professional and understand how to create a solid business model, as well as how to guide a startup through a capital raising or an exit. However, their value beyond that can be limited. For example, they might say they have a network that can help set up a distribution infrastructure, but in reality, it will only help to a certain point. Corporate VCs, however, are not as professional and do not have the experience of deals and rounds because most of them have not been around for a long time. But on the flipside, they can bring a lot of added value to startups, for example by providing access to their distribution network or R&D capabilities.”
The different value propositions can sometimes be combined to accelerate the growth of startups. Dan Altschuler Malek, venture partner at New Crop Capital, mentioned during a panel discussion that he enjoyed co-investing with Tyson Ventures, the venture arm of Tyson Foods, in cultured meat startup Memphis Meats. Together they can help Memphis Meats on multiple dimensions, but he also highlighted how important it was that Tyson Ventures has a good understanding of where the meat industry is going and sees cultured meat as complementary to its current offering.
“In our case, we have a sustainability mission. Of course, we provide an opportunity; it is like any investment. But we are really focusing on the sustainability aspect. And most of our investors are really interested in that as well.”
Ultimately, it is all about sharing the same vision. Commercial VCs can more easily go along with the vision of their portfolio companies. A corporate’s primary interest is to ensure the growth and strong performance of its core business, which might not align with the mission of a startup.
Entrepreneurs should look at their mission and strategic needs to determine what kind of investor would suit them.