There was little doubt four years ago that Conagra Brands’ frozen portfolio was full of iconic items that had grown tired and, according to its then-new CEO Sean Connolly, were “trapped in time.”
While products such as Healthy Choice — with its heart-healthy message — and Banquet — popular for its $2 turkey and gravy and Salisbury steak entrees — were still generating revenue, the products looked much the same as they had decades before. The result: sales fell sharply as consumers turned to trendier flavors and better-for-you options.
Executives realized the decades-old process used to create and test products wasn’t translating into meaningful sales. Simply introducing new flavors or boosting advertising was no longer enough to entice consumers to buy. If Conagra maintained the status quo, the CPG giant risked exacerbating the slide and putting its portfolio of brands further behind the competition.
“We were doing all this work into what I would call validation insights, and things weren’t working,” Bob Nolan, senior vice president of demand sciences at Conagra, told Food Dive. “How could it not work if we asked consumers what they wanted, and we made it, and then it didn’t sell? … That’s when the journey started. Is there a different way to approach this?”
Nolan and other officials at Conagra eventually decided to abandon traditional product testing and market research in favor of buying huge quantities of behavioral data. Executives were convinced the data could do a better job of predicting eventual product success than consumers sitting in an artificial setting offering feedback.
Conagra now spends about $15 million less on testing products than it did three years ago, with much of the money now going toward buying data on food service, natural products, consumption at home, grocery retail and loyalty cards. When Nolan started working at Conagra in 2012, he estimated 90% of his budget at the company was spent on traditional validation research such as testing potential products, TV advertisements or marketing campaigns. Today, money spent on those methods has been cut to zero.
While most food and beverage companies have not overhauled their product testing as thoroughly as Conagra, CPG businesses throughout the industry are collectively making meaningful changes to their own processes.
With more data available now than ever before, companies can change their testing protocols to answer questions they previously lacked the budget or time to address. They’re also turning to technology such as video and smartphones to engage with consumers immediately, or to see firsthand how they respond to prototype products in real-life settings, like their own homes.
As food manufacturers scramble to remain competitive and meet shoppers’ insatiable demand for new tastes and experiences, changing how they test products can increase the likelihood that a product succeeds — enabling corporations to reap more revenue and avoid joining the tens of thousands of products that fail every year.
For Conagra, the new approach is already paying off. One success story came in the development of the company’s frozen Healthy Choice Korean-Inspired Beef Power Bowl. By combing through data collected from the natural food channel and specialty stores like Whole Foods and Sprouts Farmers Market, the CPG giant found people were eating more of their food in bowls — a contrast to its offerings in trays.
“How could it not work if we asked consumers what they wanted, and we made it, and then it didn’t sell? … That’s when the journey started. Is there a different way to approach this?”
Bob Nolan
Senior vice president of demand sciences, Conagra
At the same time, information gathered from restaurants showed Korean was the fastest-growing cuisine. The data also indicated the most popular flavors within that ethnic category. Nolan said that without the data, it would have been hard to instill confidence at Conagra that marketing a product like that would work, and executives would have been more likely to focus on flavors the company was already familiar with.
Since then, Conagra has rebranded Healthy Choice around cleaner-label foods with recognizable, modern ingredients that were incorporated into innovations such as the Power Bowl. The overhaul helped rejuvenate the 34-year-old brand, with sales jumping 20% during the last three years after declining about 10% during the prior decade, according to the company.
Conagra has experienced similar success by innovating its other frozen brands, including Banquet and Marie Callender’s. For a company where frozen sales total $5.1 billion annually, the segment is an important barometer of success.
For years, food companies would come up with product ideas using market research approaches that dated back to the 1950s. Executives would sit in a room and mull over ways to grow a brand. They would develop prototypes before testing and retesting a few of them to find the one that would have the best chance of resonating with consumers. Data used was largely cultivated through surveys or focus groups to support or debunk a company idea.
“It’s an old industry and innovation has been talked about before but it’s never been practiced, and I think now it’s starting to get very serious because CPG companies are under a lot of pressure to innovate and get to market faster,” Sean Bisceglia, CEO of Curion, told Food Dive. “I really fear the ones that aren’t embracing it and practicing it … may damage their brand and eventually damage their sales.”
Information on nearly every facet of a consumer’s shopping habits and preferences can be easily obtained. There is data showing how often people shop and where they go. Tens of millions of loyalty cards reveal which items were purchased at what store, and even the checkout lane the person was in. Data is available on a broader level showing how products are selling, but CPGs can drill down on an even more granular level to determine the growth rate of non-GMO or organic, or even how a specific ingredient like turmeric is performing.
Market research firms such as Nielsen and Mintel collect reams of valuable data, including when people eat, where and how they consume their food, how much time they spend eating it and even how it was prepared, such as by using a microwave, oven or blender.
To help customers who want fast results for a fraction of the cost, Bisceglia said Curion has created a platform in which a product can be tried out on a random population group — as opposed to an audience targeted for specific attributes, like stay-at-home moms in their 30s with two kids — with the data given to the client without the traditional in-depth analysis. It can cost a few thousand dollars, with results available in a few days, compared to a far more complicated and robust testing process that runs several months and can sometimes cost hundreds of thousands of dollars, he said.
Curion, which has tested an estimated 8,000 products on 700,000 people during the last decade, is creating a database that could allow companies to avoid testing altogether.
For example, a business creating a mango-flavored yogurt could initially use data collected by a market research firm or someone else showing how the variety performed nationwide or by region. Then, as product development is in full swing, the company could use Curion’s information to show how mango yogurt performed with certain ages, income levels and ethnicities, or even how certain formulations or strength of mango flavor are received by consumers.
“What’s actually going to be able to predict if someone is going to buy something, and are they going to buy it again and again and again? You have to get smart on what is the payoff at the end of all of the data. And just figure out what the key measures are that you need and stop collecting, if you can, all this other ancillary stuff.”
Owner, Lori Rothman Consulting
Lori Rothman, who runs her own consulting firm to advise companies on their product testing, worked much of the last 30 years at companies including Kraft and Kellogg to determine the most effective way to test a product and then design the corresponding trial. She used to have days or weeks to review data and consumer comments before plotting out the best way to move forward, she said.
In today’s marketplace, there is sometimes pressure to deliver within a day or even immediately. Some companies are even reacting in real time as information comes in — a precedent Rothman warned can be dangerous because of the growing amount of data available and the inherent complexity in understanding it.
“It’s continuing toward more data. It’s just going to get more and more and we just have to get better at knowing what to do with it, and how to use it, and what’s actually important. What’s actually going to be able to predict if someone is going to buy something, and are they going to buy it again and again and again?” Rothman said. “You have to get smart on what is the payoff at the end of all of the data. And just figure out what the key measures are that you need and stop collecting, if you can, all this other ancillary stuff.”
Ferrara Candy, the maker of SweeTarts, Nerds and Brach’s, considers more than 100 product ideas each year, the company estimates. An average of five typically make it to market.
To help whittle down the list, the candy company owned by Nutella-maker Ferrero conducts an array of tests with consumers, nearly all of them done without the customary focus group or in-person interview.
Daniel Hunt, director of insights and analytics for Ferrara, told Food Dive that rather than working with outside vendors to conduct research, as the company would have a decade ago, it now handles the majority of testing itself.
In the past, the company might have spent $20,000 to run a major test. It would have paid a market research firm to write an initial set of questions to ask consumers, then refine them, run the test and then analyze the information collected.
Today, Hunt said Ferrara’s own product development team, most of whom have a research background, does most of the work creating new surveys or modifying previously used ones — all for a fraction of the cost. And what might have taken a few months to carry out in the past can sometimes be completed in as little as a few weeks.
“Now when we launch a new product, it’s not much of a surprise what it does, and how it performs, and where it does well, and where it does poorly. I think a lot of that stuff you’ve researched to the point where you know it pretty well,” Hunt told Food Dive. “Understanding what is going to happen to a product is more important — and really understanding that early in the cycle, being able to identify what are the big potential items two years ahead of launching it, so you can put your focus really where it’s most important.”
Increasingly, technology is playing a bigger part in enabling companies such as Ferrara not only to do more of their own testing, but also giving them more options for how best to carry it out.
Data can be collected from message boards, chat rooms and online communities popular with millennials and Gen Zers. But technology does have its limits. Ferrara aims to keep the time commitment for its online surveys to fewer than seven minutes because Hunt said the quality of responses tends to diminish for longer ones, especially among people who do them on their smartphones.
Other research can be far more rigorous, depending on how the company plans to use the information.
“I don’t think that (testing is) going away or becoming less prevalent, but certainly the way that we’re testing things from a product standpoint is changing and evolving. If anything, we’re doing more testing and research than before, but maybe just in a slightly different way than we did in the past.”
Daniel Hunt
Director of insights and analytics, Ferrara
Last summer, Ferrara created an online community of 20 people to help it develop a chewy option for its SweeTarts brand. As part of a three-week program, participants submitted videos showing them opening boxes of candies with different sizes, shapes, flavors, tastes and textures sent to them by Ferrara. Some of the products were its own candies, while others came from competitors such as Mars Wrigley’s Skittles or Starburst. Ferrara wanted to watch each individual’s reaction as he or she tried the products.
Participants were asked what they liked or disliked, or where there were market opportunities for chewy candy, to help Ferrara better hone its product development. These consumers were also asked to design their own products.
Ferrara also had people either video record themselves shopping or write down their experience. This helped researchers get a feel for everything from when people make decisions that are impulsive or more thought out, to what would make a shopper decide not to purchase a product. As people provided feedback, Ferrara could immediately engage with them to expound on their responses.
“All of those things have really helped us get information that is more useful and helpful,” Hunt said. “I don’t think that (testing is) going away or becoming less prevalent, but certainly the way that we’re testing things from a product standpoint is changing and evolving. If anything, we’re doing more testing and research than before, but maybe just in a slightly different way than we did in the past.”
Getting people to change isn’t easy. To help execute its vision, Conagra spent four years overhauling the way it developed and tested products — a lengthy process in which one of the biggest challenges was convincing employees used to doing things a certain way for much of their careers to embrace a different way of thinking.
Conagra brought in data scientists and researchers to provide evidence to show how brands grow and what consumer behavior was connected to that increase. Nolan’s team had senior management participate in training courses “so people realize this isn’t just a fly-by-night” idea, but one based on science.
The CPG giant assembled a team of more than 50 individuals — many of whom had not worked with food before — to parse the complex data and find trends. This marked a dramatic new way of thinking, Nolan said.
While people with food and market research backgrounds would have been picked to fill these roles in the past, Conagra knew it would be hard to retrain them in the company’s new way of thinking. Instead, it turned to individuals who had experience in data technology, hospitality and food service, even if it took them time to get up to speed on Conagra-specific information, like the brands in its portfolio or how they were manufactured.
Conagra’s reach extended further outside its own doors, too. The company now occasionally works with professors at the University of Chicago, just 8 miles south of its headquarters, to help assess whether it is properly interpreting how people will behave.
“In the past, we were just like everybody else,” Nolan said. “There are just so many principles that we have thrown out that it is hard for people to adjust.”
Mars Wrigley has taken a different approach, maintaining the customary consumer testing while incorporating new tools, technology and ways of thinking that weren’t available or accepted even a few years ago.
“I have not seen a technology that replicates people actually trying the product and getting their honest reaction to it. At the end of the day, this is food.”
Lisa Saxon Reed
Director of global sensory, Mars Wrigley
Lisa Saxon Reed, director of global sensory at Mars Wrigley, told Food Dive the sweets maker was recently working to create packaging for its Extra mega-pack with 35 pieces of gum, improving upon a version developed for its Orbit brand years before. This time around, the company — which developed more than 30 prototypes — found customers wanted a recyclable plastic container they believed would keep the unchewed gum fresh.
Shoppers also wanted to feel and hear the packaging close securely, with an auditory “click.” Saxon Reed, who was not involved with the earlier form of the package, speculated it didn’t resonate with consumers because it was made of paperboard, calling into question its freshness and whether the package would survive as long as the gum did.
The new packaging, which hit shelves in 2016 after about a year of development, has been a success, becoming the top-selling gum product at Walmart within 12 months of its launch, according to Saxon Reed. The design was so successful that Mars Wrigley also incorporated it into a mega pack of its 5 gum brand.
“If we would not have made a range of packaging prototypes and had people use them in front of us, we would have absolutely missed the importance of these sensory cues and we would have potentially failed again in the marketplace,” Saxon Reed said. “If I would have done that online, I’m not sure how I would have heard those cues. … I don’t think those would have come up and we would have missed an opportunity to win.”
The new approach extends to the product itself, too. Saxon Reed said Mars Wrigley was looking to expand its Extra gum line into a cube shape in fall 2017. Early in the process, Mars Wrigley asked consumers to compile an online diary with words, pictures and collages showing how they defined refreshment. The company wanted to customize the new offering to U.S. consumers, and not just import the cube-shaped variety already in China.
After Mars Wrigley noticed people using the color blue or drawing waterfalls, showers or water to illustrate a feeling of refreshment, product developers went about incorporating those attributes into its new Extra Refreshers line through color, flavor or characteristics that feel cool or fresh to the mouth. They later tested the product on consumers who liked gum, including through the age-old testing process where people were given multiple samples to try and asked which they preferred.
Extra Refreshers hit shelves earlier this year and is “off to a strong start,” Saxon Reed said.
“I don’t see it as an ‘either-or’ when it comes to technology and product testing. I really see it as a ‘yes-and,’ ” she said. “How can technology really help us better understand the reactions that we are getting? But at this point, I have not seen a technology that replicates people actually trying the product and getting their honest reaction to it. At the end of the day, this is food.”
Regardless of the process large food and beverage companies use, how much money and time they spend testing their products, or how heavily involved consumers are, CPG companies and product testing firms agree that an item’s success is heavily defined by one thing that hasn’t changed and probably never will: taste.
“Everybody can sell something once in beautiful packaging with all the data, but if it tastes terrible it’s not going to sell again,” Bisceglia said.
It’s energy that has been around forever, used for years as a heating source across the world, particularly in areas with volcanic activity. Today, geothermal has surfaced as another renewable resource, with advancements in drilling technology bringing down costs and opening new areas to development.
Renewable energy continues to increase its share of the world’s power generation. Solar and wind power receive most of the headlines, but another option is increasingly being recognized as an important carbon-free resource.
Geothermal, accessing heat from the earth, is considered a sustainable and environmentally friendly source of renewable energy. In some parts of the world, the heat that can be used for geothermal is easily accessible, while in other areas, access is more challenging. Areas with volcanic activity, such as Hawaii—where the recently restarted Puna Geothermal Venture supplies about 30% of the electricity demand on the island of Hawaii—are well-suited to geothermal systems.
“What we need to do as a renewable energy industry is appreciate that we need all sources of renewable power to be successful and that intermittent sources of power need the baseload sources to get to a 100% renewable portfolio,” Will Pettitt, executive director of the Geothermal Resources Council (GRC), told POWER. “Geothermal therefore needs to be collaborating with the solar, wind, and biofuel industries to make this happen.”
1. The Nesjavellir Geothermal Power Station is located near the Hengill volcano in Iceland. The 120-MW plant contributes to the country’s 750 MW of installed geothermal generation capacity. Courtesy: Gretar Ívarsson
The U.S. Department of Energy (DOE) says the U.S. leads the world in geothermal generation capacity, with about 3.8 GW. Indonesia is next at about 2 GW, with the Philippines at about 1.9 GW. Turkey and New Zealand round out the top five, followed by Mexico, Italy, Iceland (Figure 1), Kenya, and Japan.
Cost savings from geothermal when compared to other technologies is part of its allure. The DOE is funding research into clean energy options, including up to $84 million in its 2019 budget to advance geothermal energy development.
2. This graphic produced by AltaRock Energy, a geothermal development and management company, shows the energy-per-well equivalent for shale gas, conventional geothermal, an enhanced geothermal system (EGS) well, and a “super hot” EGS well. Courtesy: AltaRock Energy / National Renewable Energy Laboratory
Introspective Systems, a Portland, Maine-based company that develops distributed grid management software, in February received a Small Business Innovation Research award from the DOE in support of the agency’s Enhanced Geothermal Systems’ (EGS) project. At EGS (Figure 2) sites, a fracture network is developed, and water is pumped into hot rock formations thousands of feet below the earth’s surface. The heated water is then recovered to drive conventional steam turbines. Introspective Systems is developing monitoring software that enables EGS systems to be cost-competitive.
Kay Aikin, Introspective Systems’ CEO, was among business leaders selected by the Clean Energy Business Network (CEBN)—a group of more than 3,000 business leaders from all 50 states working in the clean energy economy—to participate in meetings with members of Congress in March to discuss the need to protect and grow federal funding for the DOE and clean energy innovation overall.
Aikin told POWER that EGS technology is designed to overcome the problem of solids coming “out of the liquids and filling up all the pores,” or cracks in rock through which heated water could flow. The Introspective Systems’ software uses “algorithms to find the sites [suitable for a geothermal system]. We can track those cracks and pores, and that is what we are proposing to do.”
“In my view there are three technology pieces that need to come together for EGS to be successful,” said the GRC’s Pettitt. “Creating and maintaining the reservoir so as to ensure sufficient permeability without short-circuiting; bringing costs down on well drilling and construction; [and] high-temperature downhole equipment for zonal isolation and measurements. These technologies all have a lot of crossover opportunities to helping conventional geothermal be more efficient.”
Aikin noted a Massachusetts Institute of Technology report on geothermal [The Future of Geothermal Energy: Impact of Enhanced Geothermal Systems (EGS) on the United States in the 21st Century] “that was the basis for this funding from DOE,” she said. Aikin said current goals for geothermal would “offset about 6.1% of CO2 emissions, about a quarter of the Paris climate pledge. Because it’s base[load] power, it will offset coal and natural gas. We’re talking about roughly 1,500 new geothermal plants by 2050, and they can be sited almost anywhere.”
Kate Young, manager of the geothermal program at the National Renewable Energy Laboratory (NREL) in Golden, Colorado, talked to POWER about the biggest things that the industry is focusing on. “DOE has been working with the national labs the past several years to develop the GeoVision study, that is now in the final stages of approval,” she said.
The GeoVision study explores potential geothermal growth scenarios across multiple market sectors for 2020, 2030, and 2050, with NREL’s research focused on several supporting areas.
The study started with analyses spearheaded by several DOE labs in areas such as exploration; reservoir development and management; non-technical barriers; hybrid systems; and thermal applications (see sidebar). NREL then synthesized the analyses from the labs in market deployment models for the electricity and heating/cooling sectors.
Geothermal Is Big Business in Boise
The first U.S. geothermal district heating system began operating in 1892 in Boise, Idaho. The city still relies on geothermal, with the largest system of its kind in the U.S., and the sixth-largest worldwide, according to city officials. The current system, which began operating in 1983, heats 6 million square feet of real estate—about a third of the city’s downtown (Figure 3)—in the winter. The city last year got the go-ahead from the state Department of Water Resources to increase the amount of water it uses, and Public Works Director Steve Burgos told POWER the city wants to connect more downtown buildings to the system.
Burgos said it costs the city about $1,000 to pump the water out of the ground and into the system on a monthly basis, and about another $1,000 for the electricity used to inject the water back into the aquifer. Burgos said the water “comes out at 177 degrees,” and the city is able to re-use the water in lower-temperature (110 degrees) scenarios, such as at laundry facilities. The city’s annual revenue from the system is $650,000 to $750,000.
“We have approximately 95 buildings using the geothermal system,” said Burgos. “About 2% of the city’s energy use is supplied by geothermal. We’re very proud of it. It’s a source of civic pride. Most of the buildings that are hooked up use geothermal for heating. Some of the buildings use geothermal for snow melt. There’s no outward sign of the system, there’s no steam coming out of the ground.”
Colin Hickman, the city’s communication manager for public works, told POWER that Boise “has a downtown YMCA, that has a huge swimming pool, that is heated by geothermal.” He and Burgos both said the system is an integral part of the city’s development.
“We’re currently looking at a strategic master plan for the geothermal,” Burgos said. “We definitely want to expand the system. Going into suburban areas is challenging, so we’re focusing on the downtown core.” Burgos said the city about a decade ago put in an injection well to help stabilize the aquifer. Hickman noted the city last year received a 25% increase in its water rights.
Boise State University (BSU) has used the system since 2013 to heat several of its buildings, and the school’s curriculum includes the study of geothermal physics. The system at BSU was expanded about a year and a half ago—it’s currently used in 11 buildings—and another campus building currently under construction also will use geothermal.
Boise officials tout the city’s Central Addition project, part of its LIV District initiative (Lasting Environments, Innovative Enterprises and Vibrant Communities). Among the LIV District’s goals is to “integrate renewable and clean geothermal energy” as part of the area’s sustainable infrastructure.
“This is part of a broader energy program for the city,” Burgos said, “as the city is looking at a 100% renewable goal, which would call for an expansion of the geothermal energy program.” Burgos noted that Idaho Power, the state’s prominent utility, has a goal of 100% clean energy by 2045.
As Boise grows, Burgos and Hickman said the geothermal system will continue to play a prominent role.
“We actively go out and talk about it when we know a new business is coming in,” Burgos said. “And as building ownership starts to change hands, we want to have a relationship with those folks.”
Said Hickman: “It’s one of the things we like as a selling point” for the city.
Young told POWER: “The GeoVision study looked at different pathways to reduce the cost of geothermal and at ways we can expand access to geothermal resources so that it can be a 50-state technology, not limited to the West. When the study is released, it will be a helpful tool in showing the potential for geothermal in the U.S.”
Young said of the DOE: “Their next big initiative is to enable EGS, using the FORGE site,” referring to the Frontier Observatory for Research in Geothermal Energy, a location “where scientists and engineers will be able to develop, test, and accelerate breakthroughs in EGS technologies and techniques,” according to DOE. The agency last year said the University of Utah “will receive up to $140 million in continued funding over the next five years for cutting-edge geothermal research and development” at a site near Milford, Utah, which will serve as a field laboratory.
“The amount of R&D money that’s been invested in geothermal relative to other technologies has been small,” Young said, “and consequently, the R&D improvement has been proportionally less than other technologies. The potential, however, for geothermal technology and cost improvement is significant; investment in geothermal could bring down costs and help to make it a 50-state technology — which could have a positive impact on the U.S. energy industry.”
For those who question whether geothermal would work in some areas, Young counters: “The temperatures are lower in the Eastern U.S., but the reality is, there’s heat underground everywhere. The core of the earth is as hot as the surface of the sun, but a lot closer. DOE is working to be able to access that heat from anywhere – at low cost.”
Geothermal installations are often found at tectonic plate boundaries, or at places where the Earth’s crust is thin enough to let heat through. The Pacific Rim, known as the Ring of Fire for its many volcanoes, has several of these places, including in California, Oregon, and Alaska, as well as northern Nevada.
Geothermal’s potential has not gone unnoticed. Some of the world’s wealthiest people, including Microsoft founder Bill Gates, Amazon founder and CEO Jeff Bezos, and Alibaba co-founder Jack Ma, are backing Breakthrough Energy Ventures, a firm that invests in companies developing decarbonization technologies. Breakthrough recently invested $12.5 million in Baseload Capital, a geothermal project development company that provides funding for geothermal power plants using technology developed by Climeon, its Swedish parent company.
Climeon was founded in 2011; it formed Baseload Capital in 2018. The two focus on geothermal, shipping, and heavy industry, in the latter two sectors turning waste heat into electricity. Climeon’s geothermal modules are scalable, and available for both new and existing geothermal systems. Climeon in March said it had an order backlog of about $88 million for its modules.
“We believe that a baseload resource such as low-temperature geothermal heat power has the potential to transform the energy landscape. Baseload Capital, together with Climeon’s innovative technology, has the potential to deliver [greenhouse gas-free] electricity at large scale, economically and efficiently,” Carmichael Roberts of Breakthrough Energy Ventures said in a statement.
Climeon says its modules reduce the need for drilling new wells and enable the reuse of older wells, along with speeding the development time of projects. The company says the compact and modular design is scalable from 150-kW modules up to 50-MW systems. Climeon says it can be connected to any heat source, and has just three moving parts in each module: two pumps, and a turbine.
Figure 4. The Sonoma Plant operated by Calpine is one of more than 20 geothermal power plants sited at The Geysers, the world’s largest geothermal field, located in Northern California. Courtesy: Creative Commons / Stepheng3
Breakthrough Energy’s investment in Baseload Capital is its second into geothermal energy. Breakthrough last year backed Fervo Energy, a San Francisco, California-based company that says its technology can produce geothermal energy at a cost of 5¢/kWh to 7¢/kWh. Fervo CEO and co-founder Tim Latimer said the money from Breakthrough would be used for field testing of EGS installations. Fervo’s other co-founder, Jack Norbeck, was a reservoir engineer at The Geysers in California (Figure 4), the world’s largest geothermal field, located north of Santa Rosa and just south of the Mendocino National Forest.
Most of the nearly two dozen geothermal plants at The Geysers are owned and operated by Calpine, though not all are operating. The California Energy Commission says there are more than 40 operating geothermal plants in the state, with installed capacity of about 2,700 MW.
Geothermal “is something we have to do,” said Aikin of Introspective Systems. “We have to find new baseload power. Our distribution technology can get part of the way there, toward 80% renewables, but we need base power. [Geothermal] is a really good ‘all of the above’ direction to go in.”
Source: https://www.powermag.com/bringing-the-heat-geothermal-making-inroads-as-baseload-power/?printmode=1
Composites simulation tools aren’t just for mega corporations. Small and mid-sized companies can reap their benefits, too.
In 2015, Solvay Composite Materials began using simulation tools from MultiMechanics to simplify testing of materials used in high-performance applications. The global business unit of Solvay recognized the benefits of conducting computer-simulated tests to accurately predict the behavior of advanced materials, such as resistance to extreme temperatures and loads. Two years later, Solvay invested $1.9 million in MultiMechanics to expedite development of the Omaha, Neb.-based startup company’s material simulation software platform, which Solvay predicts could reduce the time and cost of developing new materials by 40 percent.
Commitment to – and investment in – composites simulation tools isn’t unusual for a large company like Solvay, which recorded net sales of €10.3 billion (approximately $11.6 billion) in 2018 and has 27,000 employees working at 125 sites throughout 62 countries. What may be more surprising is the impact composites simulation can have on small to mid-sized companies. “Simulation tools are for everyone,” asserts Flavio Souza, Ph.D., president and chief technology officer of MultiMechanics.
The team at Guerrilla Gravity would agree. The 7-year-old mountain bike manufacturer in Denver began using simulation software from Altair more than a year ago to develop a new frame technology made from thermoplastic resins and carbon fiber. “We were the first ones to figure out how to create a hollow structural unit with a complex geometry out of thermoplastic materials,” says Will Montague, president of Guerrilla Gravity.
That probably wouldn’t have been possible without composites simulation tools, says Ben Bosworth, director of composites engineering at Guerrilla Gravity. Using topology optimization, which essentially finds the ideal distribution of material based on goals and constraints, the company was able to maximize use of its materials and conduct testing with confidence that the new materials would pass on the first try. (They did.) Afterward, the company was able to design its product for a specific manufacturing process – automated fiber placement.
“There is a pretty high chance that if we didn’t utilize composites simulation software, we would have been far behind schedule on our initial target launch date,” says Bosworth. Guerrilla Gravity introduced its new frame, which can be used on all four of its full-suspension mountain bike models, on Jan. 31, 2019.
The Language of Innovation
There are dozens of simulation solutions, some geared specifically to the composites industry and others that are general finite element analysis (FEA) tools. But they all share the common end goal of helping companies bring pioneering products to market faster – whether those companies are Fortune 500 corporations or startup ventures.
“Composites simulation is going to be the language of innovation,” says R. Byron Pipes, executive director of the Composites Manufacturing & Simulation Center at Purdue University. “Without it, a company’s ability to innovate in the composites field is going to be quite restricted.”
Those innovations can be at the material level or within end-product applications. “If you really want to improve the micromechanics of your materials, you can use simulation to tweak the properties of the fibers, the resin, the combination of the two or even the coating of fibers,” says Souza. “For those who build parts, simulation can help you innovate in terms of the shape of the part and the manufacturing process.”
One of the biggest advantages that design simulation has over the traditional engineering approach is time, says Jeff Wollschlager, senior director of composites technology at Altair. He calls conventional engineering the “build and bust” method, where companies make samples, then break them to test their viability. It’s a safe method, producing solid – although often conservative – designs. “But the downside of traditional approaches is they take a lot more time and many more dollars,” says Wollschlager. “And everything in this world is about time and money.”
In addition, simulation tools allow companies to know more about the materials they use and the products they make, which in turn facilitates the manufacturing of more robust products. “You have to augment your understanding of your product with something else,” says Wollschlager. “And that something else is simulation.”
A Leap Forward in Manufacturability
Four years ago, Montague and Matt Giaraffa, co-founder and chief engineer of Guerrilla Gravity, opted to pursue carbon fiber materials to make their bike frames lighter and sturdier. “We wanted to fundamentally improve on what was out there in the market. That required rethinking and analyzing not only the material, but how the frames are made,” says Montague.
The company also was committed to manufacturing its products in the United States. “To produce the frames in-house, we had to make a big leap forward in manufacturability of the frames,” says Montague. “And thermoplastics allow for that.” Once Montague and Giaraffa selected the material, they had to figure out exactly how to make the frames. That’s when Bosworth – and composites simulation – entered the picture.
Bosworth has more than a decade of experience with simulation software, beginning as an undergraduate mechanical engineering student on his college’s Formula SAE® team, where he designed, built and tested a vehicle for competition. While creating the new frame for Guerrilla Gravity, he used Altair’s simulation tools extensively, beginning with early development to prove the material’s feasibility for the application.
“We had a lot of baseline data from our previous aluminum frames, so we had a really good idea about how strong the frames needed to be and what performance characteristics we wanted,” says Bosworth. “Once we introduced the thermoplastic carbon fiber, we were able to take advantage of the software and use it to its fullest potential.” He began with simple tensile test samples and matched those with physical tests. Next, he developed tube samples using the software and again matched those to physical tests.
“It wasn’t until I was much further down the rabbit hole that I actually started developing the frame model,” says Bosworth. Even then, he started small, first developing a computer model for the front triangle of the bike frame, then adding in the rear triangle. Afterward, he integrated the boundary conditions and the load cases and began doing the optimization.
“You need to start simple, get all the fundamentals down and make sure the models are working in the way you intend them to,” says Bosworth. “Then you can get more advanced and grow your understanding.” At the composite optimization stage, Bosworth was able to develop a high-performing laminate schedule for production and design for automated fiber placement.
Even with all his experience, developing the bike frame still presented challenges. “One of the issues with composites simulation is there are so many variables to getting an accurate result,” admits Bosworth. “I focused on not coming up with a 100 percent perfect answer, but using the software as a tool to get us as close as we could as fast as possible.”
He adds that composites simulation tools can steer you in the right direction, but without many months of simulation and physical testing, it’s still very difficult to get completely accurate results. “One of the biggest challenges is figuring out where your time is best spent and what level of simulation accuracy you want to achieve with the given time constraints,” says Bosworth.
Wading into the Simulation Waters
The sophistication and expense of composites simulation tools can be daunting, but Wollschlager encourages people not to be put off by the technology. “The tools are not prohibitive to small and medium-sized companies – at least not to the level people think they are,” he says.
Cost is often the elephant in the room, but Wollschlager says it’s misleading to think packages will cost a fortune. “A proper suite provides you simulation in all facets of composite life cycles – in the concept, design and manufacturing phases,” he says. “The cost of such a suite is approximately 20 to 25 percent of the yearly cost of an average employee. Looking at it in those terms, I just don’t see the barrier to entry for small to medium-sized businesses.”
As you wade into the waters of simulation, consider the following:
• Assess your goals before searching for a package. Depending on what you are trying to accomplish, you may need a comprehensive suite of design and analysis tools or only a module or two to get started. “If you want a simplified methodology because you don’t feel comfortable with a more advanced one, there are mainstream tools I would recommend,” says Souza. “But if you really want to innovate and be at the cutting-edge of your industry trying to understand how materials behave and reduce costs, then I would go with a more advanced package.” Decide upfront if you want tools to analyze materials, conduct preliminary designs, optimize the laminate schedule, predict the life of composite materials, simulate thermo-mechanical behaviors and so on.
• Find programs that fit your budget. Many companies offer programs for startups and small businesses that include discounts on simulation software and a limited number of hours of free consulting. Guerrilla Gravity purchased its simulation tools through Altair’s Startup Program, which is designed for privately-held businesses less than four years old with revenues under $10 million. The program made it fiscally feasible for the mountain bike manufacturer to create a high-performing solution, says Bosworth. “If we had not been given that opportunity, we probably would’ve gone with a much more rudimentary design – probably an isotropic, black aluminum material just to get us somewhere in the ballpark of what we were trying to do,” he says.
• Engage with vendors to expedite the learning curve. Don’t just buy simulation tools from suppliers. Most companies offer initial training, plus extra consultation and access to experts as needed. “We like to walk hand-in-hand with our customers,” says Souza. “For smaller companies that don’t have a lot of resources, we can work as a partnership. We help them create the models and teach them the technology behind the product.”
• Start small, and take it slow. “I see people go right to the final step, trying to make a really advanced model,” says Bosworth. “Then they get frustrated because nothing is working right and the joints aren’t articulating. They end up troubleshooting so many issues.” Instead, he recommends users start simple, as he did with the thermoplastic bike frame.
• Don’t expect to do it all with simulation. “We don’t advocate for 100 percent simulation. There is no such thing. We also don’t advocate for 100 percent experimentation, which is the traditional approach to design,” says Wollschlager. “The trick is that it’s somewhere in the middle, and we’re all struggling to find the perfect percentage. It’s problem-dependent.”
• Put the right people in place to use the tools. “Honestly, I don’t know much about FEA software,” admits Montague. “So it goes back to hiring smart people and letting them do their thing.” Bosworth was the “smart hire” for Guerrilla Gravity. And, as an experienced user, he agrees it takes some know-how to work with simulation tools. “I think it would be hard for someone who doesn’t have basic material knowledge and a fundamental understanding of stress and strain and boundary conditions to utilize the tools no matter how basic the FEA software is,” he says. For now, simulation is typically handled by engineers, though that may change.
Perhaps the largest barrier to implementation is ignorance – not of individuals, but industry-wide, says Pipes. “People don’t know what simulation can do for them – even many top level senior managers in aerospace,” he says. “They still think of simulation in terms of geometry and performance, not manufacturing. And manufacturing is where the big payoff is going to be because that’s where all the economics lie.”
Pipes wants to “stretch people into believing what you can and will be able to do with simulation.” As the technology advances, that includes more and more each day – not just for mega corporations, but for small and mid-sized companies, too.
“As the simulation industry gets democratized, prices are going to come down due to competition, while the amount you can do will go through the roof,” says Wollschlager. “It’s a great time to get involved in simulation.”
Source: http://compositesmanufacturingmagazine.com/2019/05/making-simulation-accessible-to-the-masses/
The forthcoming wave of Web 3.0 goes far beyond the initial use case of cryptocurrencies. Through the richness of interactions now possible and the global scope of counter-parties available, Web 3.0 will cryptographically connect data from individuals, corporations and machines, with efficient machine learning algorithms, leading to the rise of fundamentally new markets and associated business models.
The future impact of Web 3.0 is undeniable, but the question remains: which business models will crack the code to provide lasting and sustainable value in today’s economy?
We will dive into native business models that have been and will be enabled by Web 3.0, but first briefly touch upon the quickly forgotten, often arduous journeys that led to the unexpected & unpredictable business models that emerged in Web 2.0.
To set the scene anecdotally for Web 2.0’s business model discovery process, let us not forget the journey that Google went through from their launch in 1998 to 2002 before going public in 2004:
After struggling for four years, Google made a single small modification to its business model that launched it into orbit, on the way to becoming one of the world’s most valuable companies.
The earliest iterations of online content merely involved the digitisation of existing newspapers and phone books … and yet, we’ve now seen Roma (Alfonso Cuarón) receive 10 Academy Awards Nominations for a movie distributed via the subscription streaming giant Netflix.
Amazon started as an online bookstore that nobody believed could become profitable … and yet, it is now the behemoth of marketplaces covering anything from gardening equipment to healthy food to cloud infrastructure.
Open source software development started off with hobbyists and an idealist view that software should be a freely-accessible common good … and yet, the entire internet runs on open source software today, creating $400b of economic value a year; GitHub was acquired by Microsoft for $7.5b, and Red Hat makes $3.4b in yearly revenues providing services for Linux.
In the early days of Web 2.0, it might have been inconceivable that after massively spending on proprietary infrastructure one could deliver business software via a browser and become economically viable … and yet, today the large majority of B2B businesses run on SaaS models.
It was hard to believe that anyone would be willing to climb into a stranger’s car or rent out their couch to travellers … and yet, Uber and AirBnB have become the largest taxi operator and accommodation provider in the world, without owning any cars or properties.
While Google and Facebook might have gone into hyper-growth early on, they didn’t have a clear plan for revenue generation for the first half of their existence … and yet, the advertising model turned out to fit them almost too well, and they now generate 58% of global digital advertising revenues ($111B in 2018); advertising has become the dominant business model of Web 2.0.
Taking a look at Web 3.0 over the past 10 years, initial business models tend not to be repeatable or scalable, or simply try to replicate Web 2.0 models. We are convinced that while there is some scepticism about their viability, the continuous experimentation by some of the smartest builders will lead to incredibly valuable models being built over the coming years.
By exploring both the more established and the more experimental Web 3.0 business models, we aim to understand how some of them will accrue value over the coming years.
Bitcoin came first. Proof of Work coupled with Nakamoto Consensus created the first Byzantine Fault Tolerant & fully open peer-to-peer network. Its intrinsic business model relies on its native asset: BTC — a provably scarce digital token paid out to miners as block rewards. Others, including Ethereum, Monero and ZCash, have followed down this path, issuing ETH, XMR and ZEC.
These native assets are necessary for the functioning of the network and derive their value from the security they provide: by providing a high enough incentive for honest miners to provide hashing power, the cost for malicious actors to perform an attack grows alongside the price of the native asset, and in turn, the added security drives further demand for the currency, further increasing its price and value. The value accrued in these native assets has been analysed & quantified at length.
Some of the earliest companies that formed around crypto networks had a single mission: make their respective networks more successful & valuable. Their resultant business model can be condensed to “increase their native asset treasury; build the ecosystem”. Blockstream, acting as one of the largest maintainers of Bitcoin Core, relies on creating value from its balance sheet of BTC. Equally, ConsenSys has grown to a thousand employees building critical infrastructure for the Ethereum ecosystem, with the purpose of increasing the value of the ETH it holds.
While this perfectly aligns the companies with the networks, the model is hard to replicate beyond the first handful of companies: amassing a meaningful enough balance of native assets becomes impossible after a while … and the blood, toil, tears and sweat of launching & sustaining a company cannot be justified without a large enough stake for exponential returns. As an illustration, it wouldn’t be rational for any business other than a central bank — e.g. a US remittance provider — to base its business purely on holding large sums of USD while working on making the US economy more successful.
The subsequent generation of business models focused on building the financial infrastructure for these native assets: exchanges, custodians & derivatives providers. They were all built with a simple business objective — providing services for users interested in speculating on these volatile assets. While the likes of Coinbase, Bitstamp & Bitmex have grown into billion-dollar companies, they do not have a fully monopolistic nature: they provide convenience & enhance the value of their underlying networks. The open & permissionless nature of the underlying networks makes it impossible for companies to lock in a monopolistic position by virtue of providing “exclusive access”, but their liquidity and brands provide defensible moats over time.
With The Rise of the Token Sale, a new wave of projects in the blockchain space based their business models on payment tokens within networks: often creating two-sided marketplaces, and enforcing the use of a native token for any payments made. The assumption is that as the network’s economy grows, demand for the limited native payment token increases, which should drive up the token’s value. While the value accrual of such a token model is debated, the increased friction for the user is clear — what could have been paid in ETH or DAI now requires additional exchanges on both sides of a transaction. While this model was widely used during the 2017 token mania, its friction-inducing characteristics have rapidly removed it from the forefront of development over the past 9 months.
Revenue-generating communities, companies and projects with a token might not always be able to pass the profits on to the token holders in a direct manner. A model that garnered a lot of interest as one of the characteristics of the Binance (BNB) and MakerDAO (MKR) tokens was the idea of buybacks / token burns. As revenues flow into the project (from trading fees for Binance and from stability fees for MakerDAO), native tokens are bought back from the public market and burned, resulting in a decrease of the supply of tokens, which should lead to an increase in price. It’s worth exploring Arjun Balaji’s evaluation (The Block), in which he argues the Binance token burning mechanism doesn’t actually result in the equivalent of an equity buyback: as there are no dividends paid out at all, the “earning per token” remains at $0.
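The mechanics of the buyback / burn model described above can be sketched in a few lines. This is an illustrative toy model with hypothetical numbers (supply, price and revenue are made up), not Binance's or MakerDAO's actual parameters:

```python
# Toy sketch of a buyback-and-burn: protocol revenue is used to buy
# tokens on the open market and destroy them, shrinking supply.
# All figures below are hypothetical.

def burn_round(supply: float, revenue: float, price: float) -> float:
    """Return the new token supply after one buyback-and-burn round."""
    tokens_burned = revenue / price  # revenue spent at market price
    return supply - tokens_burned

supply = 200_000_000.0   # hypothetical circulating supply
price = 20.0             # hypothetical token price in USD
revenue = 50_000_000.0   # hypothetical quarterly fee revenue in USD

for _quarter in range(4):
    supply = burn_round(supply, revenue, price)

# Supply shrinks by revenue/price each round, but no cash ever reaches
# holders directly — which is Balaji's point: the dividend paid out
# per token remains $0 even as supply falls.
print(f"supply after a year of burns: {supply:,.0f}")  # → 190,000,000
```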
One of the business models for crypto-networks that we are seeing ‘hold water’ is the work token: a model that focuses exclusively on the revenue-generating supply side of a network in order to reduce friction for users. Some good examples include Augur’s REP and Keep Network’s KEEP tokens. A work token model operates similarly to classic taxi medallions, as it requires service providers to stake / bond a certain amount of native tokens in exchange for the right to provide profitable work to the network. One of the most powerful aspects of the work token model is the ability to incentivise actors with both carrot (rewards for the work) & stick (stake that can be slashed). Beyond providing security to the network by incentivising the service providers to execute honest work (as they have locked skin in the game denominated in the work token), they can also be evaluated by predictable future cash-flows to the collective of service providers (we have previously explored the benefits and valuation methods for such tokens in this blog). In brief, such tokens should be valued based on the future expected cash flows attributable to all the service providers in the network, which can be modelled out based on assumptions on pricing and usage of the network.
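The discounted cash-flow valuation described above can be sketched minimally. All inputs here (fee projections, discount rate, token supply) are hypothetical placeholders, not figures for any real network:

```python
# Hedged sketch of the work-token valuation approach: value the token
# network as the present value of future cash flows expected to accrue
# to all service providers. All numbers are hypothetical.

def work_token_network_value(cash_flows, discount_rate):
    """Discounted present value of projected yearly provider cash flows."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Hypothetical projection: yearly network fees paid to providers, years 1-5.
projected_fees = [1e6, 2e6, 4e6, 6e6, 8e6]
value = work_token_network_value(projected_fees, discount_rate=0.30)

# Dividing by token supply gives an implied per-token value under
# these assumptions.
token_supply = 10_000_000  # hypothetical
print(f"implied value per token: ${value / token_supply:.2f}")  # → $0.80
```

A high discount rate is used here to reflect the execution risk the article mentions; the same structure works with any assumptions on pricing and usage.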
A wide array of other models are being explored and worth touching upon:
With this wealth of new business models arising and being explored, it becomes clear that while there is still room for traditional venture capital, the role of the investor, and of capital itself, is evolving. Capital morphs into a native asset within the network, with a specific role to fulfil. From passive network participation, to bootstrapping networks after a financial investment (e.g. computational work or liquidity provision), to direct injections of subjective work into the networks (e.g. governance or CDP risk evaluation), investors will have to reposition themselves for this new organisational mode driven by trust-minimised decentralised networks.
When looking back, we realise Web 1.0 & Web 2.0 took exhaustive experimentation to find the appropriate business models, which have created the tech titans of today. We are not ignoring the fact that Web 3.0 will have to go on an equally arduous journey of iterations, but once we find adequate business models, they will be incredibly powerful: in trust minimised settings, both individuals and enterprises will be enabled to interact on a whole new scale without relying on rent-seeking intermediaries.
Today we see thousands of incredibly talented teams pushing forward implementations of some of these models or discovering completely new viable business models. As the models might not fit the traditional frameworks, investors might have to adapt by taking on new roles and providing work as well as capital (a journey we have already started at Fabric Ventures), but as long as we can see predictable and rational value accrual, it makes sense to double down, as every day the execution risk is getting smaller and smaller.
Source: https://medium.com/fabric-ventures/which-new-business-models-will-be-unleashed-by-web-3-0-4e67c17dbd10
I’ve watched lots of companies attempt to deploy machine learning — some succeed wildly and some fail spectacularly. One constant is that machine learning teams have a hard time setting goals and setting expectations. Why is this?
1. It’s really hard to tell in advance what’s hard and what’s easy.
Is it harder to beat Kasparov at chess or pick up and physically move the chess pieces? Computers beat the world champion chess player over twenty years ago, but reliably grasping and lifting objects is still an unsolved research problem. Humans are not good at evaluating what will be hard for AI and what will be easy. Even within a domain, performance can vary wildly. What’s good accuracy for predicting sentiment? On movie reviews, there is a lot of text, writers tend to be fairly clear about what they think, and these days 90–95% accuracy is expected. On Twitter, two humans might only agree on the sentiment of a tweet 80% of the time. It might be possible to get 95% accuracy on the sentiment of tweets about certain airlines by just always predicting that the sentiment is going to be negative.
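The airline-tweet observation above is worth making concrete: with imbalanced labels, a trivial majority-class baseline can look impressively "accurate" while learning nothing. The label mix below is made up for illustration:

```python
# Accuracy of always predicting the most common label — a baseline
# every sentiment model should be compared against. Labels are
# hypothetical: 95% of tweets about this airline are negative.

from collections import Counter

def majority_baseline_accuracy(labels):
    """Accuracy obtained by always predicting the most frequent label."""
    counts = Counter(labels)
    return counts.most_common(1)[0][1] / len(labels)

labels = ["neg"] * 95 + ["pos"] * 5
print(majority_baseline_accuracy(labels))  # → 0.95
```

A model reporting "95% accuracy" on such data has to beat this baseline before the number means anything.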
Metrics can also increase a lot in the early days of a project and then suddenly hit a wall. I once ran a Kaggle competition where thousands of people competed around the world to model my data. In the first week, the accuracy went from 35% to 65%, but then over the next several months it never got above 68%. 68% accuracy was clearly the limit on the data with the best, most up-to-date machine learning techniques. Those people competing in the Kaggle competition worked incredibly hard to get that 68% accuracy and I’m sure felt like it was a huge achievement. But for most use cases, 65% vs 68% is totally indistinguishable. If that had been an internal project, I would have definitely been disappointed by the outcome.
My friend Pete Skomoroch was recently telling me how frustrating it was to do engineering standups as a data scientist working on machine learning. Engineering projects generally move forward, but machine learning projects can completely stall. It’s possible, even common, for a week spent on modeling data to result in no improvement whatsoever.
2. Machine Learning is prone to fail in unexpected ways.
Machine learning generally works well as long as you have lots of training data *and* the data you’re running on in production looks a lot like your training data. Humans are so good at generalizing from training data that we have terrible intuitions about this. I built a little robot with a camera and a vision model trained on the millions of images of ImageNet, which were taken off the web. I preprocessed the images on my robot camera to look like the images from the web, but the accuracy was much worse than I expected. Why? Images off the web tend to frame the object in question. My robot wouldn’t necessarily look right at an object in the same way a human photographer would. Humans would likely not even notice the difference, but modern deep learning networks suffered a lot. There are ways to deal with this phenomenon, but I only noticed it because the degradation in performance was so jarring that I spent a lot of time debugging it.
Much more pernicious are the subtle differences that lead to degraded performance that are hard to spot. Language models trained on the New York Times don’t generalize well to social media texts. We might expect that. But apparently, models trained on text from 2017 experience degraded performance on text written in 2018. Upstream distributions shift over time in lots of ways. Fraud models break down completely as adversaries adapt to what the model is doing.
3. Machine Learning requires lots and lots of relevant training data.
Everyone knows this and yet it’s such a huge barrier. Computer vision can do amazing things, provided you are able to collect and label a massive amount of training data. For some use cases, the data is a free byproduct of some business process. This is where machine learning tends to work really well. For many other use cases, training data is incredibly expensive and challenging to collect. A lot of medical use cases seem perfect for machine learning — crucial decisions with lots of weak signals and clear outcomes — but the data is locked up due to important privacy issues or not collected consistently in the first place.
Many companies don’t know where to start in investing in collecting training data. It’s a significant effort and it’s hard to predict a priori how well the model will work.
1. Pay a lot of attention to your training data.
Look at the cases where the algorithm is misclassifying data that it was trained on. These are almost always mislabels or strange edge cases. Either way you really want to know about them. Make everyone working on building models look at the training data and label some of the training data themselves. For many use cases, it’s very unlikely that a model will do better than the rate at which two independent humans agree.
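The human-agreement ceiling mentioned above is easy to measure before modeling begins. A minimal sketch, using made-up annotator labels, of computing the raw agreement rate between two independent labelers:

```python
# If two independent humans only agree on a fraction of items, that
# agreement rate is a realistic upper bound on model accuracy for the
# task. The labels below are hypothetical.

def agreement_rate(labels_a, labels_b):
    """Fraction of items on which two annotators gave the same label."""
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

annotator_1 = ["pos", "neg", "neg", "pos", "neg"]
annotator_2 = ["pos", "neg", "pos", "pos", "neg"]
print(agreement_rate(annotator_1, annotator_2))  # → 0.8
```

Having everyone on the team label a sample this way both surfaces mislabels and sets honest expectations for the model.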
2. Get something working end-to-end right away, then improve one thing at a time.
Start with the simplest thing that might work and get it deployed. You will learn a ton from doing this. Additional complexity at any stage in the process always improves models in research papers but it seldom improves models in the real world. Justify every additional piece of complexity.
Getting something into the hands of the end user helps you get an early read on how well the model is likely to work and it can bring up crucial issues like a disagreement between what the model is optimizing and what the end user wants. It also may make you reassess the kind of training data you are collecting. It’s much better to discover those issues quickly.
3. Look for graceful ways to handle the inevitable cases where the algorithm fails.
Nearly all machine learning models fail a fair amount of the time, and how this is handled is absolutely crucial. Models often have a reliable confidence score that you can use. With batch processes, you can build human-in-the-loop systems that send low-confidence predictions to an operator to make the system work reliably end to end and collect high-quality training data. With other use cases, you might be able to present low-confidence predictions in a way that potential errors are flagged or are less annoying to the end user.
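The human-in-the-loop pattern described above amounts to a confidence-threshold router. A minimal sketch (the threshold and the batch of predictions are hypothetical):

```python
# Route low-confidence predictions to an operator queue instead of
# acting on them automatically. Threshold and data are made up.

CONFIDENCE_THRESHOLD = 0.85  # hypothetical, tuned per use case

def route(prediction: str, confidence: float) -> str:
    """Accept confident predictions; escalate the rest for human review."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return "auto"          # act on the model's answer directly
    return "human_review"      # queue for an operator; the human label
                               # can later be reused as training data

batch = [("approve", 0.97), ("deny", 0.60), ("approve", 0.88)]
decisions = [route(label, conf) for label, conf in batch]
print(decisions)  # → ['auto', 'human_review', 'auto']
```

The escalated items serve double duty: they keep the end-to-end system reliable and they generate exactly the high-quality labels the model most needs.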
The original goal of machine learning was mostly around smart decision making, but more and more we are trying to put machine learning into products we use. As we start to rely more and more on machine learning algorithms, machine learning becomes an engineering discipline as much as a research topic. I’m incredibly excited about the opportunity to build completely new kinds of products but worried about the lack of tools and best practices. So much so that I started a company to help with this called Weights and Biases. If you’re interested in learning more, check out what we’re up to.
Source : https://medium.com/@l2k/why-are-machine-learning-projects-so-hard-to-manage-8e9b9cf49641
There are nearly 300 industrial-focused companies within the Fortune 1,000. The median revenue for these economy-anchoring firms is nearly $4.9 billion, and together they account for over $9 trillion in market capitalization.
Due to the boring nature of some of these industrial verticals and the complexity of the value chain, venture-related events tend to get lost within our traditional VC news channels. But entrepreneurs (and VCs willing to fund them) are waking up to the potential rewards of gaining access to these markets.
Just how active is the sector now?
That’s right: Last year nearly $6 billion went into Series A, B & C startups within the industrial, engineering & construction, power, energy, mining & materials, and mobility segments. Venture capital dollars deployed to these sectors are growing at a 30 percent annual rate, up from ~$750 million in 2010.
And while $6 billion invested is notable due to the previous benchmarks, this early stage investment figure still only equates to ~0.2 percent of the revenue for the sector and ~1.2 percent of industry profits.
The number of deals in the space shows a similarly strong growth trajectory. But there are some interesting trends beginning to emerge: The capital deployed to the industrial technology market is growing at a faster clip than the number of deals. These differing growth trajectories mean that the average deal size has grown by 45 percent in the last eight years, from $18 million to $26 million.
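As a quick sanity check on the figures above (the eight-year window and the implied annual rate are derived here, not stated in the article):

```python
# Average deal size growth: $18M -> $26M over eight years.
start, end = 18, 26  # $ millions, from the article
growth = (end - start) / start
print(f"total growth: {growth:.0%}")       # ~44%; the article rounds to 45%
cagr = (end / start) ** (1 / 8) - 1
print(f"implied annual rate: {cagr:.1%}")  # roughly 4.7% per year
```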
Median Series A deal size in 2018 was $11 million, representing a modest 8 percent increase in size versus 2012/2013. But Series A deal volume is up nearly 10x since then!
Median Series B deal size in 2018 was $20 million, an 83 percent growth over the past five years and deal volume is up about 4x.
Median Series C deal size in 2018 was $33 million, representing an enormous 113 percent growth over the past five years. But Series C deal counts appear to have plateaued in the low 40s, so investors are becoming pickier in selecting the winners.
These graphs show that the Series A investors have stayed relatively consistent and that the overall 46 percent increase in sector deal size growth primarily originates from the Series B and Series C investment rounds. With bigger rounds, how are valuation levels adjusting?
The data shows that valuations have increased even faster than round sizes have grown. This means management teams are not feeling any incremental dilution from raising these larger rounds.
Source : https://venturebeat.com/2019/01/22/industrial-tech-may-not-be-sexy-but-vcs-are-loving-it/
Intelligent use of real-time data is critical to successful industrial digitalisation. However, ensuring that data flows effectively is just as critical to success. Todd Gurela explains the importance of getting your manufacturing network right.
Industrial digitalisation, including the Industrial Internet of Things (IIoT), offers great promise for manufacturers looking to optimise business operations.
By bringing together the machines, processes, people and data on your plant floor through a secure Ethernet network, IIoT makes it possible to design, develop, and fabricate products faster, safer, and with less waste.
For example, one automotive parts supplier eliminated network downtime, saving around £750,000 in the process simply by deploying a new wireless network across the factory floor.
The time it took for the company to completely recoup their investment in the project? Just nine months.
Without data – extracted from multiple sources and delivered to the right application, at the right time – little optimisation can happen.
And there is a multitude of meaningful data held in factory equipment. Consider how real-time access to condition, performance, and quality data – across every machine on the floor – would help you make better business and production decisions.
Imagine the following. A machine sensor detects that volume is low for a particular part on your assembly line. Data analysis determines, based on real-time production speed and previous output totals, that the part needs to be re-stocked in one hour.
With this information, your team can arrange for replacement parts to arrive before you run out, and avoid a production stoppage.
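The restock scenario above reduces to a simple calculation. The function name, the reorder buffer, and the sample numbers below are illustrative assumptions, not from the article:

```python
# Sketch of the restock calculation: given current stock, a safety
# buffer, and the consumption rate at current production speed,
# estimate how long before replacement parts are needed.
def hours_until_restock(current_stock, reorder_point, consumption_per_hour):
    """Hours until stock falls to the reorder point at the current rate."""
    if consumption_per_hour <= 0:
        return float("inf")  # line idle: no restock pressure
    return max(0.0, (current_stock - reorder_point) / consumption_per_hour)

# Sensor reports 260 parts left; we keep a buffer of 60; the line is
# consuming 200 parts/hour at current production speed.
print(hours_until_restock(260, 60, 200))  # -> 1.0, i.e. re-stock in one hour
```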
This scenario may be theoretical, but it illustrates a genuine truth. Manufacturers need reliable, scalable, secure factory networks so they can focus on their most important task: making whatever they make more efficiently, at higher quality levels, and at lower costs.
At the heart of this truth is the factory network. So, while the key to a successful Industry 4.0 project is data, the key to meaningful, accurate data is the network. And manufacturers need to plan carefully to ensure their network can deliver on their needs.
There are five characteristics manufacturers should look for in a factory network before selecting a vendor.
In no particular order, they are:
Interoperability – this ability allows for the ‘flattening’ of the industrial network to improve data sharing, and usually includes Ethernet as a standard.
Automation – for ‘plug and play’ network deployment to streamline processes and drive productivity.
Simplicity – the network infrastructure should be simple, as should the management.
Security – your network should be secure and provide visibility into and control of your data to reduce risk, protect intellectual property, and ensure production integrity.
Intelligence – you need a network that makes it possible to analyse data, and take action quickly, even at the network edge.
Manufacturers need solutions with these features to help aggregate, visualise, and analyse data from connected machines and equipment, and to assure the reliable, rapid, and secure delivery of data. Anything less will leave them wanting, and with subpar results.
Network interoperability allows manufacturers to seamlessly pull data from anywhere in their facility. An emerging standard in this area is Time Sensitive Networking (TSN).
Although not yet widely adopted, TSN provides a common communications pathway for your machines. With TSN, the future of industrial networks will be a single, open Ethernet network across the factory floor that enables manufacturers to access data with ease and efficiency.
Most important, TSN opens up critical control applications such as robot control, drive control, and vision systems to the Industrial Internet of Things (IIoT), making it possible for manufacturers to identify areas for optimisation and cost reduction.
Also, with the OPC-UA protocol now running over TSN, it becomes possible to have standard and secure communication from sensor to cloud. In fact, TSN fills an important gap in standard networking by protecting critical traffic.
How so? Automation and control applications require consistent delivery of data from sensors to controllers and actuators.
TSN ensures that critical traffic flows promptly, securing bandwidth and time in the network infrastructure for critical applications, while supporting all other forms of traffic.
And because TSN is delivered over standard Industrial Ethernet, control networks can take advantage of the security built into the technology.
TSN eliminates network silos that block reachability to critical plant areas, so that you can extract real-time data for analytics and business insights.
This is key to the future of factory networks, as TSN will drive the interoperability required for manufacturers to maximise the value from Industry 4.0 projects.
One leading manufacturer estimated that unscheduled downtime cost them more than £16,000/minute in lost profits and productivity. That’s almost £1m per hour if production stops. Could your organisation survive a stoppage like that?
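The downtime arithmetic above, made explicit (the per-minute figure is from the article; the rest is simple multiplication):

```python
# Unscheduled downtime cost, per the manufacturer's estimate.
cost_per_minute = 16_000  # GBP in lost profits and productivity
per_hour = cost_per_minute * 60
print(f"£{per_hour:,} per hour")  # £960,000 -- "almost £1m per hour"
```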
Network automation is critical for manufacturers with growing network demands. This includes adding new machines or integrating operational controls into existing infrastructure, as well as net-new deployments.
Network uptime becomes increasingly important as the network expands. Ask yourself whether your network and its supporting tools have the capability for ‘plug and play’ network deployments that greatly reduce downtime if – and when – failure occurs.
It’s essential that factories leverage networks that automate certain tasks – to automatically set correct switch settings, for example – to meet Industry 4.0 objectives. The task is too overwhelming otherwise.
Like automation, network simplicity is an essential component of the factory network. Choosing a single network infrastructure, capable of handling TSN, Ethernet IP, Profinet, and CCLink traffic can significantly simplify installation, reduce maintenance expense, and reduce downtime.
It also makes it possible to get all your machine controls, from any of the top worldwide automation vendors, to talk through the same network hardware.
Consider also that you want a network that can be managed by operations and IT professionals. Avoid solutions that are too IT-centric and look for user-friendly tools that operations can use to troubleshoot network issues quickly.
Tools that visualise the network topology for operations professionals can be especially useful in this regard.
For example, knowing which PLC (including firmware data) is connected to which port, and which I/O is connected to the same switch, can help speed commissioning and troubleshooting.
Last, validated network designs are essential to factory success. These designs help manufacturers quickly roll out new network deployments and maintain the performance of automation equipment. Make sure this is part of the service your network vendor can provide.
Cybersecurity is critically important on the factory floor. As manufacturing networks grow, so does the attack surface, or vectors, for malicious activity such as a ransomware attack.
According to the Cisco 2017 Midyear Cybersecurity Report, nearly 50% of manufacturers use six or more security vendors in their facilities. This mix and match of security products and vendors can be difficult to manage for even the most seasoned security expert.
No single product, technology or methodology can fully secure industrial operations. However, there are vendors that can provide comprehensive network security solutions in their plant network infrastructure that include simple protections for physical assets, such as blocking access to ports in unmanaged switches or using managed switches.
Protecting critical manufacturing assets requires a holistic defence-in-depth security approach that uses multiple layers of defence to address different types of threats. It also requires a network design that leverages industrial security best practices such as ‘Demilitarized Zones’ (DMZs) to provide pervasive security across the entire plant.
Consider for a moment how professional athletes react to their surroundings. They interpret what is happening in real-time, and make split-second decisions based on what is going on around them.
Part of what makes those decisions possible is how the players have been coached to react in certain situations. If players needed to ask their coach for advice before taking every shot, tackling the opposition, or sprinting for victory…well, the results wouldn’t be very good.
Just as a team’s performance improves when players can take in their surroundings and perform an appropriate action, the factory performs better when certain network data can be processed and actioned upon immediately – without needing to travel to the data centre first.
Processing data in this way is called ‘edge’, or ‘fog’, computing. It entails running applications right on your network hardware to make more intelligent, faster decisions.
Manufacturers need to access information quickly, filter it in real-time, then use that data to better understand processes and areas for improvement.
Processing data at the edge is key to unlocking networking intelligence, so it’s important to ask yourself whether your factory network can support edge applications before beginning a project. And if it can’t, it’s time to consider a new network.
A final note on network intelligence. Once you deploy edge applications, make sure you have the tools to manage and implement them with confidence, at scale. Managing massive amounts of data can quickly become a problem, so you’ll need systems that can extract, compute, and move data to the right places at the right time.
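The edge pattern described in this section can be sketched as follows. The thresholds, reading format, and sensor values are illustrative assumptions: the point is that raw samples are summarised on the network hardware, and only summaries and anomalies travel onward to the data centre.

```python
# Sketch of edge-style processing: aggregate raw sensor readings
# locally and forward only a summary plus anomalous samples, instead
# of shipping every raw reading to the data centre.
def process_at_edge(readings, alarm_threshold):
    summary = {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
    }
    anomalies = [r for r in readings if r > alarm_threshold]
    return summary, anomalies  # only this leaves the edge device

vibration = [0.12, 0.11, 0.13, 0.95, 0.12]  # e.g. per-second samples
summary, anomalies = process_at_edge(vibration, alarm_threshold=0.5)
print(summary["count"], "samples reduced to 1 summary +", len(anomalies), "anomaly")
```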
The opportunity for manufacturers who invest in Industry 4.0 solutions is massive (and it’s time that leaders from the top floor and shop floor realised it). But before any Industry 4.0 project can get off the ground, the right foundation needs to be in place.
The factory (or industrial) network is that foundation… and manufacturers owe it to themselves to select the best one available.
SAS International is a leading British manufacturer of quality metal ceilings and bespoke architectural metalwork. Installed in iconic, landmark buildings worldwide, SAS products lead through innovation, cutting-edge design and technical acoustic expertise.
Their success is built on continued investment in manufacturing and achieving value for clients through world-class engineered solutions.
In the UK, SAS operates factories in Bridgend, Birmingham and Maybole, with headquarters and warehouse facilities in Reading. The company has recently expanded its export markets and employs nearly 1,000 staff internationally.
However, the IT infrastructure was operating on ageing equipment with connectivity, visibility and security constraints.
The company’s IT team recently modernised its network, upgrading from commercial-grade wireless to a new network solution with a unified dashboard that allows them to remotely manage distributed sites.
They now have instant visibility and control over the network devices, as well as the mobile devices used by employees daily.
During the initial deployment, the IT team was able to identify cabling issues that previously they would not have been alerted to or been able to investigate.
With upcoming projects under way and continued work to optimise solutions such as cloud storage, the network is now robust and reliable enough to support future IT needs.
SAS is retrofitting numerous manufacturing machines with computers. This retrofit, partnered with the new network, allows remote communications between the machines and the designers without having to manually input data at the machines themselves.
The robust wireless infrastructure is changing the manual printing and checking of stock by enabling handheld scanners and creating a more efficient and cost-effective product flow.
Fault mitigation and anomaly detection have been huge benefits of the solution. For example, the IT team was able to quickly identify a bandwidth issue when a phenomenal amount of data was generated from an automated transfer to a shop machine.
They were able to spot the issue, identify the machine, and fix the problem. Before, they would merely have seen there was a network slowdown, but wouldn’t have been able to identify or resolve the problem.
The SAS team will continue to benefit from the included firmware updates and new feature releases that are integrated into the solution, providing them with a future-proof solution as they expand to global sites in the future.
Source : https://www.themanufacturer.com/articles/the-key-to-any-successful-industrial-digitalisation-project/
At a recent KPMG Robotic Innovations event, Futurist and friend Gerd Leonhard delivered a keynote titled “The Digital Transformation of Business and Society: Challenges and Opportunities by 2020”. I highly recommend viewing the Video of his presentation. As Gerd describes, he is a Futurist focused on foresight and observations — not predicting the future. We are at a point in history where every company needs a Gerd Leonhard. For many of the reasons presented in the video, future thinking is rapidly growing in importance. As Gerd so rightly points out, we are still vastly under-estimating the sheer velocity of change.
With regard to future thinking, Gerd used my future scenario slide to describe both the exponential and combinatorial nature of future scenarios — not only do we need to think exponentially, but we also need to think in a combinatorial manner. Gerd mentioned Tesla as a company that really knows how to do this.
He then described our current pivot point of exponential change: a point in history where humanity will change more in the next twenty years than in the previous 300. With that as a backdrop, he encouraged the audience to look five years into the future and spend 3 to 5% of their time focused on foresight. He quoted Peter Drucker (“In times of change the greatest danger is to act with yesterday’s logic”) and stated that leaders must shift from a focus on what is, to a focus on what could be. Gerd added that “wait and see” means “wait and die” (love that by the way). He urged leaders to focus on 2020 and build a plan to participate in that future, emphasizing the question is no longer what-if, but what-when. We are entering an era where the impossible is doable, and the headline for that era is: exponential, convergent, combinatorial, and inter-dependent — words that should be a key part of the leadership lexicon going forward. Here are some snapshots from his presentation:
Source: B. Joseph Pine II and James Gilmore: The Experience Economy
Gerd then summarized the session as follows:
The future is exponential, combinatorial, and interdependent: the sooner we can adjust our thinking (lateral) the better we will be at designing our future.
My take: Gerd hits on a key point. Leaders must think differently. There is very little in a leader’s collective experience that can guide them through the type of change ahead — it requires us all to think differently.
When looking at AI, consider trying IA first (intelligent assistance / augmentation).
My take: These considerations allow us to create the future in a way that avoids unintended consequences. Technology as a supplement, not a replacement
Efficiency and cost reduction based on automation, AI/IA and Robotization are good stories but not the final destination: we need to go beyond the 7-ations and inevitable abundance to create new value that cannot be easily automated.
My take: Future thinking is critical for us to be effective here. We have to have a sense as to where all of this is heading, if we are to effectively create new sources of value
We won’t just need better algorithms — we also need stronger humarithms i.e. values, ethics, standards, principles and social contracts.
My take: Gerd is an evangelist for creating our future in a way that avoids hellish outcomes — and kudos to him for being that voice
“The best way to predict the future is to create it” (Alan Kay).
My Take: our context when we think about the future puts it years away, and that is just not the case anymore. What we think will take ten years is likely to happen in two. We can’t create the future if we don’t focus on it through an exponential lens
Source : https://medium.com/@frankdiana/digital-transformation-of-business-and-society-5d9286e39dbf
Renting robots as temp labor? Not a new idea. But it’s certainly one that is gaining followers.
Rising labor shortages, tightly contested global markets, and growing interest in automation are tightening the screws on traditional business models. A broader spectrum of users are seeking flexible automation solutions. More suppliers are adopting new-age rental or lease options to satisfy the demand. Some are mature companies answering the call, others are startups blazing a path for the rest of the industry. Robotics as a Service (RaaS) is an emerging trend whose time has come.
Steel Collar Associates may have been ahead of its time when RIA spoke with its owner in 2013 about his “Humanoids for Hire” – aka Yaskawa dual-arm robots for rent. Already several years into his venture at the time, Bill Higgins was having little success contracting out his robo-employees. Back then, industry was barely warming up to the idea of cage-free robots rubbing elbows with their human coworkers. Now every major robot manufacturer has a collaborative robot on its roster. And a slew of startups have joined the fray.
Just like human-robot collaboration is helping democratize robotics, RaaS will help bring robots to the masses. And cobots aren’t the only robots for rent.
Whether you have a short-term need, want to try before you buy, forgo a capital expenditure, or lower your cost of entry to robotic automation, RaaS is worth a closer look. It’s robots on demand, when and where you want them.
Robots on Demand
Out-of-the-box solutions like those offered by READY Robotics, which are easy to use and easy to deploy, are making RaaS a reality. Your next, or perhaps first, robotic solution may be a Johnny-on-the-spot – on wheels.
“The TaskMate is a ready-to-use, on-demand robot worker that is specifically designed to come out of its shipping crate ready to be deployed to the production line,” says READY Robotics CEO Ben Gibbs, noting that manufacturers without the time to undertake custom robot integration are looking for an out-of-the box automation solution. Rental options make the foray easier.
“Time is their most precious resource. They want something like the TaskMate that is essentially ready to go out of the box,” says Gibbs. “They may have to do a little fixturing or put together a parts presentation hopper. Besides that, it’s something they can deploy pretty quickly. We’re driving towards providing a solution that’s as easy to use as your personal computer.”
The system consists of a collaborative robot arm mounted on a stand with casters, so you can wheel it into position anywhere on the production floor. The ease of portability makes it ideal for high-mix, low-volume production where it can be quickly relocated to different manufacturing cells. Nicknamed the “Swiss Army Knife” of robots, the TaskMate performs a variety of automation tasks from machine tending to pick-and-place applications, to parts inspection.
The TaskMate comes in two varieties, the 5-kg payload R5 and 10-kg payload R10 (pictured). Both systems use robot arms from collaborative robot maker Universal Robots. The UR arm is equipped with a force sensor and a universal interface called the TEACHMATE that allows different robot grippers to be hot-swapped onto the end of the arm. Supported end effector brands include SCHUNK, Robotiq and Piab.
Contributing to the system’s ease of use is READY’s proprietary operating system, the FORGE/OS software. A simple flowchart interface (pictured) controls the robot arm, end-of-arm tooling and other peripherals. No coding is required.
For those tasks requiring a higher payload, reach, or cycle time than is possible with the power-and-force limiting cobot included with the TaskMate R5 and R10 systems, READY also offers its FORGE controller (formerly called the TaskMate Kit). Running the intuitive FORGE/OS software, the controller provides the same easy programming interface but is designed as a standalone system for ABB, FANUC, UR and Yaskawa robots.
“For example, if you plug the FORGE controller into a FANUC robot, you no longer have to program in Karel (the robot OEM’s proprietary programming language),” explains Gibbs. “On the teach pendant, you can use FORGE/OS to program the robot directly, so you have the same programming experience on the controller as you do on the TaskMate.
“We started primarily with smaller six degree-of-freedom robot arms, like the FANUC LR Mate and GP7 from Yaskawa,” continues Gibbs. “We have started to integrate some of the larger robots as well, like the FANUC M-710iC/50. Ultimately, we’re driving toward a ubiquitous programming experience regardless of what robot arm or robot manufacturer you’re using.”
In the Cloud
A common element in the RaaS rental model is cloud robotics. READY offers customers the ability to remotely monitor the TaskMate or other robotic systems hooked up to the FORGE controller.
“We can set them up with alerts, so when the production cycle is completed or the robot enters an unexpected error state, they can receive an email notifying the floor manager or line operator to check the system,” says Gibbs.
You can also save and back up programs to the cloud, and deploy them from one robot to another. If an operator were to inadvertently lose a program, rather than rewrite it from scratch, you can just drop the backup version from the cloud onto the system and be up and running again in minutes.
The TaskMate systems and FORGE controller are available for both purchase and rental.
“We provide a menu to our customers of how they might want to consume our products and services,” says Gibbs. “That may be all the way from a traditional CapEx (capital expenditure) purchase if they want to buy one of our TaskMates upfront, to the other end of the spectrum where they can rent the system with no contract for however long or short of a duration they want.”
For an additional charge, READY can manage the entire asset for the customer.
“We set it up, we program it, and we remotely monitor it to make sure it’s maximizing its uptime. We can come in and tweak the program if it’s running into unexpected errors. All of the systems are equipped with cell modems, so they can update the software over the air. We handle all of the maintenance or it’s handled by our channel partners.”
Gibbs says flexibility is the biggest advantage to their rental option. READY offers a 3-month trial rental. But customers are not required to keep it for that full term.
“We have a no-term rental. That’s even more appealing because it can come entirely out of your OpEx (operating expenditure) budget. Instead of going through a lengthy CapEx approval process, we’ve had some customers just run their corporate credit card, because the rental is below their approval level for an OpEx purchase. They can easily set up the system and use it for a few months. That alone provides them with a much stronger justification for moving forward with CapEx if they want, or just continue to expand their rental.
“At the end of the first month, if they decide that it’s not working out, just like any incompetent worker, they can fire it and send it back.”
If the customer chooses to continue renting, Gibbs says it’s more cost-effective to sign a contract. This reduces the risk for everyone, so there’s usually a financial incentive.
“The primary way we differentiate ourselves is that we offer that no-term rental with a fixed monthly fee, which allows these factories to capture the traditional value of automation. We don’t have a meter running that says you ran it 22 hours this day, so you owe us for 22 hours of work. We encourage them to run it as long as they want. The expectation is the longer you run it, the cheaper it should be.”
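The trade-off Gibbs describes can be sketched numerically. All prices below are hypothetical; the article does not publish rates. The point is that under a fixed monthly fee the marginal hour is free, so heavy use drives the effective cost down, whereas metered pricing scales with hours run.

```python
# Sketch: fixed monthly fee vs. hourly-metered RaaS pricing.
# All rates are hypothetical illustrations.
def monthly_cost_fixed(fee):
    return fee  # run it as long as you want

def monthly_cost_metered(rate_per_hour, hours_per_day, days):
    return rate_per_hour * hours_per_day * days

fixed = monthly_cost_fixed(4_000)
metered = monthly_cost_metered(12, 22, 30)  # 22 h/day, as in the quote
print(f"fixed: {fixed}, metered: {metered}")
# At heavy utilisation the fixed fee is the cheaper model.
```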
Flexibility for High-Mix, Low-Volume
READY’s target customers range from small job shops to large multinationals and Fortune 500 companies.
“Attwood is a great example of the type of high-mix, low-volume production environment where the flexibility of the TaskMate really shines,” says Gibbs.
Attwood Marine in Lowell, Michigan, is one of the world’s largest producers of boat parts, accessories and supplies. If it’s on your boat, there’s a good chance this century-old company made it. They make thousands of different parts, but cater to a relatively small marine market. The challenges of high-mix, low-volume production in a highly competitive market had them looking for an automation solution.
The flexibility of the TaskMate to quickly deploy and redeploy depending on Attwood’s short- or long-term needs was a deciding factor. With only a couple hundred employees and no dedicated robotics programmer on staff, the customer appreciates the FORGE software’s ease of use. Plus the ability to rent the system plays to the seasonal nature of Attwood’s business and lowers the cost of their first foray into robotic automation.
Attwood has deployed the TaskMate R10 to a half-dozen cells on the production floor performing CNC machine tending, pick-and-place tasks like palletizing, loading/unloading conveyors and case packing, and even repetitive testing. You need to actuate a switch or pull a cord 250,000 times? That’s a job for flexible automation.
By deploying one robot system to multiple production cells, Attwood was able to spread their ROI across multiple product lines and realize up to a 30 percent reduction in overall manufacturing costs. Watch the TaskMate on the job at Attwood Marine.
Small to midsized businesses aren’t the only ones benefiting. Large multinationals like tools manufacturer Stanley Black & Decker use the TaskMate R10 for machine tending CNC lathes.
“Multinationals may have robot programmers on staff, but usually not enough of them,” says Gibbs. “Automation engineers are in high demand and very difficult to come by. Any technology that makes it faster and easier for people to set up robots is a tremendous value. Even with large multinationals, some like to be asset-light and do a rental, but everyone loves the ease of programming we offer through FORGE.”
Forged in the Lab
READY’s portable plug-and-play solution is a technology spinoff from Professor Greg Hager’s research in human-machine collaborative systems at Johns Hopkins University. Gibbs, an alumnus, was working in the university’s technology ventures office helping researchers like Prof. Hager develop commercialization strategies for their new technologies. Hager, along with Gibbs, and fellow alum CTO Kelleher Guerin cofounded the startup in October 2015. Another cofounder, Drew Greenblatt, President of Marlin Steel Wire Products (an SME in the Know), offered up his nearby Baltimore, Maryland-based custom metal forms factory as a prototype test site for the TaskMate. The system was officially launched in July 2017.
Prof. Hager is now an advisor to the company. Distinguished robotics researcher Henrik Christensen is Chairman of the Board of Advisors. In December 2017, the startup secured $15 million in Series A funding led by Drive Capital.
READY maintains an office in Baltimore, while its headquarters is in Columbus, Ohio. They are a FANUC Authorized System Integrator. Gibbs says they are in the process of building a channel partner network of integrators and distributors to support future growth.
Pay As You Go
Business models under the RaaS umbrella vary widely, and are evolving. Startups like Hirebotics and Kindred leverage cloud robotics more intensely to monitor robot uptime, collect data, and enhance performance using AI. They charge by the hour, or even by the second. You pay for only what you use. Each service model has its advantages.
Some RaaS advocates offer subscription-based models. Some took a page from the sharing economy. Think Airbnb, Lyft, TaskRabbit, Poshmark. Share an abode, a car or clothes. Skip the overhead, the infrastructure and the long-term commitment. Pay as you go for a robot on the run.
Mobile Robots for Hire
Autonomous mobile robots (AMRs) are no strangers to the RaaS model, either. RIA members Aethon and Savioke lease their mobile robots for various applications in healthcare, hospitality and manufacturing. Startup inVia Robotics offers a subscription-based RaaS solution for its warehouse “Picker” robots.
We first explored the emergence of AMRs in the Always-On Supply Chain. It’s startling how much the logistics robot market has changed in just a couple of years. Since then, prototypes and beta deployments have turned into full product lines with significant investor funding. Major users like DHL, Walmart and Kroger, not to mention early adopter Amazon, are doubling down on their mobile fleets.
After triple-digit revenue growth in Europe, Mobile Industrial Robots (MiR) was just breaking onto the North American scene two years ago. Now, as they celebrate comparable growth on this side of the pond, MiR prepares to launch a new lease program in January.
MiR is another prodigy of Denmark’s booming robotics cluster. They join Danish cousin Universal Robots on the list of Teradyne’s smart robotics acquisitions. Odense must have the Midas touch.
Go Big or Go Home
Responding to customer demands for larger payloads, MiR introduced its 500-kg mobile platform at Automatica in June. The MiR500 (pictured) comes with a pallet transport system that automatically lifts pallets off a rack and delivers them autonomously. Watch it in action on the production floor of this agricultural machine manufacturer.
“Everybody we deal with today is making a big push to eliminate forklift traffic from the inner aisleways of production lines,” says Ed Mullen, Vice President of Sales – Americas for MiR in Holbrook, New York. “That’s really driving the whole launch of the MiR500. We’ve gone through some epic growth here in my division.”
Mullen’s division is responsible for supporting MiR’s extensive distributor network in all markets between Canada and Brazil. Right now, the Americas account for about a third of the global business.
“We’re seeing applications in industrial automation, warehouses and distribution centers,” says Mullen. “Electronics, semiconductor and a lot of the tier automotive companies, like Faurecia, Visteon and Magna, have all invested in our platforms and are scaling the business. We see this being implemented across all industries, which is really adding to our excitement.”
Although Mullen says they’ve seen tremendous success with the current buy model, MiR is trying to make it even easier to work with this emerging technology. That drove them to the RaaS model.
“We think a leasing option will allow companies that are still trying to understand the use cases for the technology to get in quicker, and then slowly scale the business up as they learn how to apply it and what the sweet spots are for autonomous mobile robots. The lease option is intended to reduce the cost of entry. Today it’s mainly the bigger multinationals that are buying, but we believe by providing options for lower entry points, this will make the use cases in the small-to-midsized companies come to light.”
He says a third-party company will handle all the leases. MiR’s distributor network will engage with the third-party company to put together lease programs for customers.
MiR has also implemented a Preferred System Integrator (PSI) program to augment the existing network of distribution partners. Two and a half years ago, it was mainly large companies investing in these mobile platforms. They were purchasing in volumes of one to five robots. Today, they’re seeing investments of 20, 30, or even more than 50 robots.
“When you get into these bigger deployments, it’s more critical to have companies that are equipped to handle them. Our distribution partners are set up as a sales channel. Although most of them have integration capabilities, they don’t want to invest in deploying hundreds of robots at one time. They’d rather hand that off to a company that’s able to properly support large-scale deployments.”
Over the last couple of years, MiR had been focused on bringing more efficiency to the manufacturing process, not necessarily on replacing existing AGVs and forklifts.
“For example, you have a guy that gets paid a healthy salary to sit in front of a machine tool and use his skills to do a certain task. That’s what makes the company money. But when he has to get up and carry a tray of parts to the next phase in the production cycle, that’s inefficient. That’s what we’ve been focusing on, at least with our MiR100 and MiR200 (pictured).”
Mullen points to one such customer, an Indiana-based company specializing in custom plastic injection molding and mold tooling. Its mobile robot loops the shop floor, autonomously transporting finished product from the presses to quality inspection. This frees up personnel for higher-value tasks and eliminates material flow bottlenecks.
“With the new MiR500, we’re going after heavier loads and palletizer loads. That’s replacing standard AGVs and forklifts. We’re also starting to see big conveyor companies like Simplimatic Automation and FlexLink move to a more flexible type of platform with autonomous mobile robots.
“Parallel to the hardware is our software. A key part of our company is the way we develop the software, the way we allow people to interface with the product. We’re continuously making it more intuitive and easier to use.”
MiR offers two software packages, the operating system that comes with the robot and the fleet management software that manages two or more robots. The latter is not a requirement, but Mullen says most companies are investing in it to get additional functionality when interfacing with their enterprise system. The newest fleet system is moving to a cloud-based option.
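MiR’s fleet software is proprietary, but the kind of dispatching decision such software automates can be sketched simply. The snippet below is a hypothetical illustration, not MiR’s algorithm: robot names, positions, and the nearest-idle-robot rule are all assumptions.

```python
# Hypothetical sketch of one job fleet-management software performs:
# dispatching a mission to the nearest idle robot. Not MiR's actual logic;
# robot names, positions, and the distance rule are all assumptions.

def dispatch(robots, pickup):
    """Assign the mission to the idle robot closest (Manhattan distance)
    to the pickup point, mark it busy, and return its name."""
    idle = [r for r in robots if r["state"] == "idle"]
    if not idle:
        return None  # no robot free; a real fleet manager would queue the job
    best = min(idle, key=lambda r: abs(r["x"] - pickup[0]) + abs(r["y"] - pickup[1]))
    best["state"] = "busy"
    return best["name"]

fleet = [
    {"name": "MiR-1", "x": 0,  "y": 0, "state": "busy"},
    {"name": "MiR-2", "x": 5,  "y": 2, "state": "idle"},
    {"name": "MiR-3", "x": 20, "y": 9, "state": "idle"},
]

print(dispatch(fleet, pickup=(4, 4)))  # -> MiR-2 (closest idle robot)
print(dispatch(fleet, pickup=(4, 4)))  # -> MiR-3 (MiR-2 is now busy)
```

A production fleet manager layers much more on top: traffic control, charging, mission priorities, and the enterprise-system interfaces Mullen mentions.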
Hardware and software updates are all handled through MiR’s distribution channel and Mullen doesn’t think any of that will change under the lease option.
“The support model will stay the same. Our distributors are all trained on hardware updates, preventative maintenance and troubleshooting. I firmly believe the major component to our success today is our distribution model.”
Mullen says he’s looking forward to new products coming out in 2019. MiR is also hiring. They expect to double their employee count in the Americas and globally.
High-Tech, Short-Term Need
Many of the companies we’re seeing adopt nontraditional models like RaaS are feisty startups. But stalwarts are coming on board, too.
Established in 1992, RobotWorx is part of SCOTT Technology Ltd., a century-old New Zealand-based company specializing in automated production, robotics and process machinery. RobotWorx joined the SCOTT family of international companies in 2014 and recently completed a rigorous audit process to become an RIA Certified Robot Integrator.
RobotWorx buys, reconditions and sells used robots, along with maintaining an inventory of new robotic systems and offering full robot integration and training services. Rentals are nothing new to them. They’ve been renting robots for several years, since before it was a trend. But in response to the recent upswing in industry requests, RobotWorx rolled out a major push on their rental program this past spring.
“We’ve done a lot with the TV and film industry,” says Tom Fischer, Operations Manager for RobotWorx in Marion, Ohio. “If you’ve seen the latest AT&T commercial, there are blue and orange robots in it. We rented those out for a week.”
Dubbed “Bruce” and “Linda” on strips of tape along their outstretched arms, these brightly colored robots have a starring role in this AT&T Business commercial promoting Edge-to-Edge Intelligence℠ solutions. Fischer says companies in this industry usually select a particular size of robot, typically either a long-reach or large-payload material handling robot, like the Yaskawa Motoman long-reach robots in this AT&T commercial.
Ever wonder if the robots in commercials are just there for effect? It turns out, not always. Fischer says these are fully functioning robots. AT&T’s ad agency must have a robot wrangler off camera to keep Bruce and Linda in line. However, the other robots in the background are the result of TV magic.
“We basically just sent them the robots,” says Fischer. “They did what they wanted to do with them and then sent them back.”
For quick gigs like this commercial, or maybe a movie cameo or even a tradeshow display, rental robots make sense. But how do you know when it’s better to rent or buy?
“We’ll do a cost analysis with the customer,” says Fischer. “We have an ROI calculator on our website if they want to see what their long-term commitment capital investment would be. (Check out RIA’s Robot ROI Calculator). We also look at it from the standpoint that if they have a long-term contract with somebody, their return on investment is going to be a lot better with a purchase. If they think they’re only going to use the robot for six months, it doesn’t make sense for them to buy it.”
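Fischer’s rent-or-buy logic reduces to a break-even comparison between total rent over the planned usage period and the purchase price. The figures below are hypothetical placeholders, not RobotWorx pricing:

```python
# Hypothetical rent-vs-buy break-even check. These are illustrative
# numbers, not RobotWorx rates.

PURCHASE_PRICE = 60_000   # robot + controller + EOAT, bought outright
WEEKLY_RENT = 1_500       # rented by the week

def cheaper_option(weeks_needed):
    """Return 'rent' or 'buy' for a planned usage period."""
    rent_total = WEEKLY_RENT * weeks_needed
    return "rent" if rent_total < PURCHASE_PRICE else "buy"

# A six-month job (26 weeks) costs 39,000 in rent -- well under the
# purchase price, matching Fischer's point that short contracts favor renting.
print(cheaper_option(26))   # -> rent
# A two-year commitment (104 weeks) would cost 156,000 in rent.
print(cheaper_option(104))  # -> buy
```

The same comparison underlies a long-term-contract ROI calculation; with a multi-year contract in hand, the purchase pays for itself well before the contract ends.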
RobotWorx rents robots by the week, month or year. A week is the minimum, but there’s no long-term commitment required. A rental includes a robot, the robot controller, teach pendant and end-of-arm tooling (EOAT). Robot brands available include ABB, FANUC, KUKA, Universal Robots, and Yaskawa Motoman.
They also rent entire ready-to-ship robot cells for welding or material handling. The most popular systems are the RWZero (pictured) and RW950 cells.
“The RWZero cell is very basic,” says Fischer. “You have a widget and you need 5,000 of them. Rent this cell and you have a production line instantly.”
The RW950 is more portable. Fischer calls it a “pallet platform.” The robot, controller, operator station and workpiece positioner all share a common base, which is basically a large steel structure that can be moved around with a forklift whenever needed. See the RW950 Welding Workcell in action.
“We’ve done a lot of the small weld cells,” he says. “We always have a couple on hand so we can supply those on demand. We’ve done larger material handling cells, as well.
“We have a third-party company that does the financing if you need it. A lot of people just end up paying it upfront. If they were to purchase the robot after they’ve rented it, we apply that towards the purchase as well.”
Fischer says 20 percent of the rental price is credited to the purchase if a customer decides to keep the robot. All the robots and robotic cells are up to date on maintenance before they leave the RobotWorx floor and shouldn’t require any major maintenance for at least a year. He says most customers end up buying the robot if their rental period exceeds a year.
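The rent-to-own credit is straightforward arithmetic; the list price and rent totals here are hypothetical, not RobotWorx figures:

```python
# Hypothetical illustration of a 20% rent-to-own credit: a fifth of the
# rent already paid comes off the purchase price. Figures are made up.

def purchase_price_after_rental(list_price, rent_paid, credit_rate=0.20):
    """Remaining cost to buy the robot after crediting part of the rent."""
    return list_price - credit_rate * rent_paid

# After 12,000 in rent, 2,400 is credited against a 60,000 robot,
# leaving about 57,600 to pay.
print(purchase_price_after_rental(60_000, 12_000))
```

This also shows why rentals past a year tip toward purchase: by then the accumulated rent dwarfs the credit, so keeping the robot is the cheaper path.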
Time is not always the deciding factor under the RaaS model. As robotic systems become easier to deploy and redeploy, the idea of robots as a service will gain more permanence as a long-term solution. In the future, robotics in our workplaces and homes will be as ubiquitous as the Internet. In the meantime, we’ll keep our eyes on RaaS as it gets ready for primetime.
Source : https://www.robotics.org/content-detail.cfm/Industrial-Robotics-Industry-Insights/Robots-for-Rent-Why-RaaS-Works/content_id/7665
Recently in a risk management meeting, I watched a data scientist explain to a group of executives why convolutional neural networks were the algorithm of choice to help discover fraudulent transactions. The executives—all of whom agreed that the company needed to invest in artificial intelligence—seemed baffled by the need for so much detail. “How will we know if it’s working?” asked a senior director to the visible relief of his colleagues.
Although they believe in AI’s value, many executives are still wondering about its adoption. The following five questions are boardroom staples:
Organizational issues are never far from the minds of executives looking to accelerate efficiencies and drive growth. And while the question of where AI should sit in the organization isn’t new, the answer might be.
Captivated by the idea of data scientists analyzing potentially competitively-differentiating data, managers often advocate formalizing a data science team as a corporate service. Others assume that AI will fall within an existing analytics or data center-of-excellence (COE).
AI positioning depends on incumbent practices. A retailer’s customer service department designated a group of AI experts to develop “follow the sun chatbots” that would serve the retailer’s increasingly global customer base. Conversely, a regional bank considered AI more of an enterprise service, centralizing statisticians and machine learning developers into a separate team reporting to the CIO.
These decisions were vastly different, but they were both the right ones for their respective companies.
When people hear the term AI, they conjure images of smart Menlo Park hipsters stationed at standing desks, ear buds in their pierced ears, writing custom code late into the night. Indeed, some version of this scenario is how AI has taken shape in many companies.
Executives tend to romanticize AI development as an intense, heads-down enterprise, forgetting that development planning, market research, data knowledge, and training should also be part of the mix. Coding from scratch might actually prolong AI delivery, especially with the emerging crop of developer toolkits (Amazon Sagemaker and Google Cloud AI are two) that bundle open source routines, APIs, and notebooks into packaged frameworks.
These packages can accelerate productivity, carving weeks or even months off development schedules. Or they can complicate collaboration.
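To make the from-scratch point concrete, here is roughly what hand-rolling even the simplest model involves: a minimal logistic-regression trainer in plain Python on synthetic, fraud-flavored data. Everything here (the data, learning rate, and epoch count) is an illustrative assumption; a packaged toolkit collapses this, plus validation and deployment plumbing, into a few library calls.

```python
# "Coding from scratch": a bare-bones logistic-regression trainer.
# Packaged frameworks replace all of this with one or two calls.
import math
import random

def train(samples, labels, lr=0.1, epochs=200):
    """Per-sample gradient descent on logistic loss; returns (weights, bias)."""
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            err = p - y                      # gradient of logistic loss w.r.t. z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Synthetic "transactions": label 1 (fraud-like) when two risk features sum high.
random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if x0 + x1 > 1.0 else 0 for x0, x1 in X]

w, b = train(X, y)
accuracy = sum(predict(w, b, x) == yi for x, yi in zip(X, y)) / len(X)
```

Against that baseline, it’s easy to see how a bundled framework carves weeks off a schedule, and also how two teams standardizing on different toolkits can end up with workflows that don’t mesh.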
It’s all about perspective. AI might be positioned as edgy and disruptive with its own internal brand, signaling a fresh commitment to innovation. Or it could represent the evolution of analytics, the inevitable culmination of past efforts that laid the groundwork for AI.
I’ve noticed that AI projects are considered successful when they are deployed incrementally, when they further an agreed-upon goal, when they deliver something the competition hasn’t done yet, and when they support existing cultural norms.
Incumbent norms once again matter here. But when it comes to AI, the level of disruption is often directly proportional to the need for a sponsor.
A senior AI specialist at a health care network decided to take the time to discuss possible AI use cases (medication compliance, readmission reduction, and deep learning diagnostics) with executives “so that they’d know what they’d be in for.” More importantly she knew that the executives who expressed the most interest in the candidate AI undertakings would be the likeliest to promote her new project. “This is a company where you absolutely need someone powerful in your corner,” she explained.
If you’re new to AI you’ll need to be careful about departing from norms, since this might attract undue attention and distract from promising outcomes. Remember Peter Drucker’s quote about culture eating strategy for breakfast? Going rogue is risky.
On the other hand, positioning AI as disruptive and evolutionary can do wonders for both the external brand as well as internal employee morale, assuring constituents that the company is committed to innovation, and considers emerging tech to be strategic.
Either way, the most important success factors for AI are setting accurate expectations, sharing them often, and addressing questions and concerns without delay.
These days AI has mojo. Companies are getting serious about it in a way they haven’t been before. And the more your executives understand about how it will be deployed—and why—the better the chances for delivering ongoing value.
Source : https://www.cio.com/article/3318639/artificial-intelligence/5-questions-ceos-are-asking-about-ai.html