Category: Manufacturing

Geothermal Making Inroads as Baseload Power

It’s energy that has been around forever, used for years as a heating source across the world, particularly in areas with volcanic activity. Today, geothermal has surfaced as another renewable resource, with advancements in drilling technology bringing down costs and opening new areas to development.

Renewable energy continues to increase its share of the world’s power generation. Solar and wind power receive most of the headlines, but another option is increasingly being recognized as an important carbon-free resource.

Geothermal, accessing heat from the earth, is considered a sustainable and environmentally friendly source of renewable energy. In some parts of the world, the heat that can be used for geothermal is easily accessible, while in other areas, access is more challenging. Areas with volcanic activity, such as Hawaii—where the recently restarted Puna Geothermal Venture supplies about 30% of the electricity demand on the island of Hawaii—are well-suited to geothermal systems.

“What we need to do as a renewable energy industry is appreciate that we need all sources of renewable power to be successful and that intermittent sources of power need the baseload sources to get to a 100% renewable portfolio,” Will Pettitt, executive director of the Geothermal Resources Council (GRC), told POWER. “Geothermal therefore needs to be collaborating with the solar, wind, and biofuel industries to make this happen.”

1. The Nesjavellir Geothermal Power Station is located near the Hengill volcano in Iceland. The 120-MW plant contributes to the country’s 750 MW of installed geothermal generation capacity. Courtesy: Gretar Ívarsson

The U.S. Department of Energy (DOE) says the U.S. leads the world in geothermal generation capacity, with about 3.8 GW. Indonesia is next at about 2 GW, with the Philippines at about 1.9 GW. Turkey and New Zealand round out the top five, followed by Mexico, Italy, Iceland (Figure 1), Kenya, and Japan.

Research and Development

Potential cost savings compared with other technologies are part of geothermal’s allure. The DOE is funding research into clean energy options, including up to $84 million in its 2019 budget to advance geothermal energy development.

 

2. This graphic produced by AltaRock Energy, a geothermal development and management company, shows the energy-per-well equivalent for shale gas, conventional geothermal, an enhanced geothermal system (EGS) well, and a “super hot” EGS well. Courtesy: AltaRock Energy / National Renewable Energy Laboratory

Introspective Systems, a Portland, Maine-based company that develops distributed grid management software, in February received a Small Business Innovation Research award from the DOE in support of the agency’s Enhanced Geothermal Systems (EGS) project. At EGS (Figure 2) sites, a fracture network is developed, and water is pumped into hot rock formations thousands of feet below the earth’s surface. The heated water is then recovered to drive conventional steam turbines. Introspective Systems is developing monitoring software intended to make EGS cost-competitive.
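For a rough sense of scale, the electric output of an EGS loop can be sketched from the circulation rate of the water and the temperature drop across the surface plant. The flow rate, temperatures, and conversion efficiency below are illustrative assumptions, not figures from the DOE-funded project or Introspective Systems’ work.

```python
# Back-of-the-envelope EGS output estimate (all inputs are illustrative assumptions).
WATER_SPECIFIC_HEAT = 4.18        # kJ/(kg*K)
flow_rate_kg_per_s = 80.0         # assumed circulation rate for one well pair
production_temp_c = 200.0         # assumed temperature of the recovered water
reinjection_temp_c = 70.0         # assumed temperature after heat is extracted
conversion_efficiency = 0.12      # assumed thermal-to-electric conversion

thermal_power_kw = (flow_rate_kg_per_s * WATER_SPECIFIC_HEAT
                    * (production_temp_c - reinjection_temp_c))
electric_power_mw = thermal_power_kw * conversion_efficiency / 1000.0

print(f"Thermal power extracted: {thermal_power_kw / 1000:.1f} MWth")
print(f"Approximate electric output: {electric_power_mw:.1f} MWe")
```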

Kay Aikin, Introspective Systems’ CEO, was among business leaders selected by the Clean Energy Business Network (CEBN)—a group of more than 3,000 business leaders from all 50 states working in the clean energy economy—to participate in meetings with members of Congress in March to discuss the need to protect and grow federal funding for the DOE and clean energy innovation overall.

Aikin told POWER that EGS technology is designed to overcome the problem of solids coming “out of the liquids and filling up all the pores,” or cracks in rock through which heated water could flow. Introspective Systems’ software uses “algorithms to find the sites [suitable for a geothermal system]. We can track those cracks and pores, and that is what we are proposing to do.”

Looking for more insight into geothermal energy? Read our “Q&A with Geothermal Experts,” featuring Dr. Will Pettitt, executive director of the Davis, California-based Geothermal Resources Council, and Dr. Torsten Rosenboom, a partner in the Frankfurt, Germany office of global law firm Watson Farley & Williams LLP.

“In my view there are three technology pieces that need to come together for EGS to be successful,” said the GRC’s Pettitt. “Creating and maintaining the reservoir so as to ensure sufficient permeability without short-circuiting; bringing costs down on well drilling and construction; [and] high-temperature downhole equipment for zonal isolation and measurements. These technologies all have a lot of crossover opportunities to helping conventional geothermal be more efficient.”

Aikin noted a Massachusetts Institute of Technology report on geothermal [The Future of Geothermal Energy: Impact of Enhanced Geothermal Systems (EGS) on the United States in the 21st Century] “that was the basis for this funding from DOE,” she said. Aikin said current goals for geothermal would “offset about 6.1% of CO2 emissions, about a quarter of the Paris climate pledge. Because it’s base[load] power, it will offset coal and natural gas. We’re talking about roughly 1,500 new geothermal plants by 2050, and they can be sited almost anywhere.”

NREL Takes Prominent Role

Kate Young, manager of the geothermal program at the National Renewable Energy Laboratory (NREL) in Golden, Colorado, talked to POWER about the industry’s biggest areas of focus. “DOE has been working with the national labs the past several years to develop the GeoVision study, that is now in the final stages of approval,” she said.

The GeoVision study explores potential geothermal growth scenarios across multiple market sectors for 2020, 2030, and 2050. NREL’s research focuses on things such as:

    ■ Geothermal resource potential – hydrothermal, coproduction, and near-field and greenfield enhanced geothermal systems.
    ■ Techno-economic characteristics – the costs and technical issues of advanced technologies, their potential future impacts, and estimates of geothermal capacity.
    ■ Market penetration – modeling of dozens of scenarios, including multiple reference scenarios.
    ■ Non-technical barriers – factors that create delays, increase risk, or increase the cost of project development.

The study started with analyses spearheaded by several DOE labs in areas such as exploration; reservoir development and management; non-technical barriers; hybrid systems; and thermal applications (see sidebar). NREL then synthesized the analyses from the labs in market deployment models for the electricity and heating/cooling sectors.

Geothermal Is Big Business in Boise

The first U.S. geothermal district heating system began operating in 1892 in Boise, Idaho. The city still relies on geothermal, with the largest system of its kind in the U.S., and the sixth-largest worldwide, according to city officials. The current system, which began operating in 1983, heats 6 million square feet of real estate—about a third of the city’s downtown (Figure 3)—in the winter. The city last year got the go-ahead from the state Department of Water Resources to increase the amount of water it uses, and Public Works Director Steve Burgos told POWER the city wants to connect more downtown buildings to the system.

3. This plaque, designed by artist Ward Hooper, adorns buildings across downtown Boise, Idaho, denoting properties that use geothermal energy. Courtesy: City of Boise

Burgos said it costs the city about $1,000 a month to pump the water out of the ground and into the system, and roughly another $1,000 a month for the electricity used to inject the water back into the aquifer. Burgos said the water “comes out at 177 degrees,” and the city is able to reuse the water in lower-temperature (110-degree) applications, such as at laundry facilities. The city’s annual revenue from the system is $650,000 to $750,000.

“We have approximately 95 buildings using the geothermal system,” said Burgos. “About 2% of the city’s energy use is supplied by geothermal. We’re very proud of it. It’s a source of civic pride. Most of the buildings that are hooked up use geothermal for heating. Some of the buildings use geothermal for snow melt. There’s no outward sign of the system, there’s no steam coming out of the ground.”

Colin Hickman, the city’s communication manager for public works, told POWER that Boise “has a downtown YMCA, that has a huge swimming pool, that is heated by geothermal.” He and Burgos both said the system is an integral part of the city’s development.

“We’re currently looking at a strategic master plan for the geothermal,” Burgos said. “We definitely want to expand the system. Going into suburban areas is challenging, so we’re focusing on the downtown core.” Burgos said the city about a decade ago put in an injection well to help stabilize the aquifer. Hickman noted the city last year received a 25% increase in its water rights.

Boise State University (BSU) has used the system since 2013 to heat several of its buildings, and the school’s curriculum includes the study of geothermal physics. The system at BSU was expanded about a year and a half ago—it’s currently used in 11 buildings—and another campus building currently under construction also will use geothermal.

Boise officials tout the city’s Central Addition project, part of its LIV District initiative (Lasting Environments, Innovative Enterprises and Vibrant Communities). Among the LIV District’s goals is to “integrate renewable and clean geothermal energy” as part of the area’s sustainable infrastructure.

“This is part of a broader energy program for the city,” Burgos said, “as the city is looking at a 100% renewable goal, which would call for an expansion of the geothermal energy program.” Burgos noted that Idaho Power, the state’s prominent utility, has a goal of 100% clean energy by 2045.

As Boise grows, Burgos and Hickman said the geothermal system will continue to play a prominent role.

“We actively go out and talk about it when we know a new business is coming in,” Burgos said. “And as building ownership starts to change hands, we want to have a relationship with those folks.”

Said Hickman: “It’s one of the things we like as a selling point” for the city.

Young told POWER: “The GeoVision study looked at different pathways to reduce the cost of geothermal and at ways we can expand access to geothermal resources so that it can be a 50-state technology, not limited to the West. When the study is released, it will be a helpful tool in showing the potential for geothermal in the U.S.”

Young said of the DOE: “Their next big initiative is to enable EGS, using the FORGE site,” referring to the Frontier Observatory for Research in Geothermal Energy, a location “where scientists and engineers will be able to develop, test, and accelerate breakthroughs in EGS technologies and techniques,” according to DOE. The agency last year said the University of Utah “will receive up to $140 million in continued funding over the next five years for cutting-edge geothermal research and development” at a site near Milford, Utah, which will serve as a field laboratory.

“The amount of R&D money that’s been invested in geothermal relative to other technologies has been small,” Young said, “and consequently, the R&D improvement has been proportionally less than other technologies. The potential, however, for geothermal technology and cost improvement is significant; investment in geothermal could bring down costs and help to make it a 50-state technology – which could have a positive impact on the U.S. energy industry.”

For those who question whether geothermal would work in some areas, Young counters: “The temperatures are lower in the Eastern U.S., but the reality is, there’s heat underground everywhere. The core of the earth is as hot as the surface of the sun, but a lot closer. DOE is working to be able to access that heat from anywhere – at low cost.”

Investors Stepping Up

Geothermal installations are often found at tectonic plate boundaries, or at places where the Earth’s crust is thin enough to let heat through. The Pacific Rim, known as the Ring of Fire for its many volcanoes, has several of these places, including in California, Oregon, and Alaska, as well as northern Nevada.

Geothermal’s potential has not gone unnoticed. Some of the world’s wealthiest people, including Microsoft founder Bill Gates, Amazon founder and CEO Jeff Bezos, and Alibaba co-founder Jack Ma, are backing Breakthrough Energy Ventures, a firm that invests in companies developing decarbonization technologies. Breakthrough recently invested $12.5 million in Baseload Capital, a geothermal project development company that provides funding for geothermal power plants using technology developed by Climeon, its Swedish parent company.

Climeon was founded in 2011; it formed Baseload Capital in 2018. The two focus on geothermal, shipping, and heavy industry, in the latter two sectors turning waste heat into electricity. Climeon’s geothermal modules are scalable, and available for both new and existing geothermal systems. Climeon in March said it had an order backlog of about $88 million for its modules.

“We believe that a baseload resource such as low-temperature geothermal heat power has the potential to transform the energy landscape. Baseload Capital, together with Climeon’s innovative technology, has the potential to deliver [greenhouse gas-free] electricity at large scale, economically and efficiently,” Carmichael Roberts of Breakthrough Energy Ventures said in a statement.

Climeon says its modules reduce the need to drill new wells and enable the reuse of older wells, along with speeding project development. The company says the compact, modular design is scalable from 150-kW modules up to 50-MW systems, can be connected to any heat source, and has just three moving parts in each module: two pumps and a turbine.

4. The Sonoma Plant operated by Calpine is one of more than 20 geothermal power plants sited at The Geysers, the world’s largest geothermal field, located in Northern California.  Courtesy: Creative Commons / Stepheng3

Breakthrough Energy’s investment in Baseload Capital is its second into geothermal energy. Breakthrough last year backed Fervo Energy, a San Francisco, California-based company that says its technology can produce geothermal energy at a cost of 5¢/kWh to 7¢/kWh. Fervo CEO and co-founder Tim Latimer said the money from Breakthrough would be used for field testing of EGS installations. Fervo’s other co-founder, Jack Norbeck, was a reservoir engineer at The Geysers in California (Figure 4), the world’s largest geothermal field, located north of Santa Rosa and just south of the Mendocino National Forest.

Most of the nearly two dozen geothermal plants at The Geysers are owned and operated by Calpine, though not all are operating. The California Energy Commission says there are more than 40 operating geothermal plants in the state, with installed capacity of about 2,700 MW.

Geothermal “is something we have to do,” said Aikin of Introspective Systems. “We have to find new baseload power. Our distribution technology can get part of the way there, toward 80% renewables, but we need base power. [Geothermal] is a really good ‘all of the above’ direction to go in.”

Source : https://www.powermag.com/bringing-the-heat-geothermal-making-inroads-as-baseload-power/?printmode=1

 

Making Simulation Accessible to the Masses – American Composites Manufacturers Association

Composites simulation tools aren’t just for mega corporations. Small and mid-sized companies can reap their benefits, too.

In 2015, Solvay Composite Materials began using simulation tools from MultiMechanics to simplify testing of materials used in high-performance applications. The global business unit of Solvay recognized the benefits of conducting computer-simulated tests to accurately predict the behavior of advanced materials, such as resistance to extreme temperatures and loads. Two years later, Solvay invested $1.9 million in MultiMechanics to expedite development of the Omaha, Neb.-based startup company’s material simulation software platform, which Solvay predicts could reduce the time and cost of developing new materials by 40 percent.

Commitment to – and investment in – composites simulation tools isn’t unusual for a large company like Solvay, which recorded net sales of €10.3 billion (approximately $11.6 billion) in 2018 and has 27,000 employees working at 125 sites throughout 62 countries. What may be more surprising is the impact composites simulation can have on small to mid-sized companies. “Simulation tools are for everyone,” asserts Flavio Souza, Ph.D., president and chief technology officer of MultiMechanics.

The team at Guerrilla Gravity would agree. The 7-year-old mountain bike manufacturer in Denver began using simulation software from Altair more than a year ago to develop a new frame technology made from thermoplastic resins and carbon fiber. “We were the first ones to figure out how to create a hollow structural unit with a complex geometry out of thermoplastic materials,” says Will Montague, president of Guerrilla Gravity.

That probably wouldn’t have been possible without composites simulation tools, says Ben Bosworth, director of composites engineering at Guerrilla Gravity. Using topology optimization, which essentially finds the ideal distribution of material based on goals and constraints, the company was able to maximize use of its materials and conduct testing with confidence that the new materials would pass on the first try. (They did.) Afterward, the company was able to design its product for a specific manufacturing process – automated fiber placement.
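Production topology optimization runs on a full finite element mesh inside commercial packages, but the core idea it relies on, putting material where the internal loads are highest subject to a fixed material budget, can be shown with a toy sizing problem. The sketch below, using invented loads and dimensions, distributes cross-sectional area along a segmented member to minimize a simple strain-energy measure of compliance; it illustrates the principle only, not Altair’s software or Guerrilla Gravity’s frame model.

```python
# Toy "where should the material go?" sizing optimization (illustrative only).
import numpy as np
from scipy.optimize import minimize

E = 70e9                                      # assumed modulus, Pa
segment_length = 0.1                          # six segments of 0.1 m each (assumed)
forces = np.array([10e3, 8e3, 6e3, 4e3, 2e3, 1e3])   # assumed internal forces, N
total_area_budget = 6.0                       # total cross-section to distribute, cm^2

def compliance(areas_cm2):
    """Strain energy of the segmented member; lower means stiffer."""
    areas_m2 = areas_cm2 * 1e-4
    return np.sum(forces**2 * segment_length / (E * areas_m2))

constraints = [{"type": "eq",
                "fun": lambda a: np.sum(a) - total_area_budget}]
bounds = [(0.01, None)] * len(forces)         # keep every segment above a minimum area
x0 = np.full(len(forces), 1.0)                # start from a uniform distribution

result = minimize(compliance, x0, bounds=bounds, constraints=constraints)
print("Optimized areas (cm^2):", np.round(result.x, 2))
# The optimizer piles area onto the segments carrying the largest internal force,
# which is the same behaviour topology optimization exhibits on a full 2D/3D mesh:
# material goes where it contributes most to stiffness.
```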

“There is a pretty high chance that if we didn’t utilize composites simulation software, we would have been far behind schedule on our initial target launch date,” says Bosworth. Guerrilla Gravity introduced its new frame, which can be used on all four of its full-suspension mountain bike models, on Jan. 31, 2019.

The Language of Innovation
There are dozens of simulation solutions, some geared specifically to the composites industry and others general-purpose finite element analysis (FEA) tools. But they all share the common end goal of helping companies bring pioneering products to market faster – whether those companies are Fortune 500 corporations or startups.

“Composites simulation is going to be the language of innovation,” says R. Byron Pipes, executive director of the Composites Manufacturing & Simulation Center at Purdue University. “Without it, a company’s ability to innovate in the composites field is going to be quite restricted.”

Those innovations can be at the material level or within end-product applications. “If you really want to improve the micromechanics of your materials, you can use simulation to tweak the properties of the fibers, the resin, the combination of the two or even the coating of fibers,” says Souza. “For those who build parts, simulation can help you innovate in terms of the shape of the part and the manufacturing process.”

One of the biggest advantages that design simulation has over the traditional engineering approach is time, says Jeff Wollschlager, senior director of composites technology at Altair. He calls conventional engineering the “build and bust” method, where companies make samples, then break them to test their viability. It’s a safe method, producing solid – although often conservative – designs. “But the downside of traditional approaches is they take a lot more time and many more dollars,” says Wollschlager. “And everything in this world is about time and money.”

In addition, simulation tools allow companies to know more about the materials they use and the products they make, which in turn facilitates the manufacturing of more robust products. “You have to augment your understanding of your product with something else,” says Wollschlager. “And that something else is simulation.”

A Leap Forward in Manufacturability
Four years ago, Montague and Matt Giaraffa, co-founder and chief engineer of Guerrilla Gravity, opted to pursue carbon fiber materials to make their bike frames lighter and sturdier. “We wanted to fundamentally improve on what was out there in the market. That required rethinking and analyzing not only the material, but how the frames are made,” says Montague.

The company also was committed to manufacturing its products in the United States. “To produce the frames in-house, we had to make a big leap forward in manufacturability of the frames,” says Montague. “And thermoplastics allow for that.” Once Montague and Giaraffa selected the material, they had to figure out exactly how to make the frames. That’s when Bosworth – and composites simulation – entered the picture.

Bosworth has more than a decade of experience with simulation software, beginning as an undergraduate mechanical engineering student on his college’s Formula SAE® team, which designs, builds and tests a vehicle for competition. While creating the new frame for Guerrilla Gravity, he used Altair’s simulation tools extensively, beginning with early development to prove the material’s feasibility for the application.

“We had a lot of baseline data from our previous aluminum frames, so we had a really good idea about how strong the frames needed to be and what performance characteristics we wanted,” says Bosworth. “Once we introduced the thermoplastic carbon fiber, we were able to take advantage of the software and use it to its fullest potential.” He began with simple tensile test samples and matched those with physical tests. Next, he developed tube samples using the software and again matched those to physical tests.

“It wasn’t until I was much further down the rabbit hole that I actually started developing the frame model,” says Bosworth. Even then, he started small, first developing a computer model for the front triangle of the bike frame, then adding in the rear triangle. Afterward, he integrated the boundary conditions and the load cases and began doing the optimization.

“You need to start simple, get all the fundamentals down and make sure the models are working in the way you intend them to,” says Bosworth. “Then you can get more advanced and grow your understanding.” At the composite optimization stage, Bosworth was able to develop a high-performing laminate schedule for production and design for automated fiber placement.

Even with all his experience, developing the bike frame still presented challenges. “One of the issues with composites simulation is there are so many variables to getting an accurate result,” admits Bosworth. “I focused on not coming up with a 100 percent perfect answer, but using the software as a tool to get us as close as we could as fast as possible.”

He adds that composites simulation tools can steer you in the right direction, but without many months of simulation and physical testing, it’s still very difficult to get completely accurate results. “One of the biggest challenges is figuring out where your time is best spent and what level of simulation accuracy you want to achieve with the given time constraints,” says Bosworth.

Wading into the Simulation Waters
The sophistication and expense of composites simulation tools can be daunting, but Wollschlager encourages people not to be put off by the technology. “The tools are not prohibitive to small and medium-sized companies – at least not to the level people think they are,” he says.

Cost is often the elephant in the room, but Wollschlager says it’s misleading to think packages will cost a fortune. “A proper suite provides you simulation in all facets of composite life cycles – in the concept, design and manufacturing phases,” he says. “The cost of such a suite is approximately 20 to 25 percent of the yearly cost of an average employee. Looking at it in those terms, I just don’t see the barrier to entry for small to medium-sized businesses.”

As you wade into the waters of simulation, consider the following:

Assess your goals before searching for a package. Depending on what you are trying to accomplish, you may need a comprehensive suite of design and analysis tools or only a module or two to get started. “If you want a simplified methodology because you don’t feel comfortable with a more advanced one, there are mainstream tools I would recommend,” says Souza. “But if you really want to innovate and be at the cutting-edge of your industry trying to understand how materials behave and reduce costs, then I would go with a more advanced package.” Decide upfront if you want tools to analyze materials, conduct preliminary designs, optimize the laminate schedule, predict the life of composite materials, simulate thermo-mechanical behaviors and so on.

Find programs that fit your budget. Many companies offer programs for startups and small businesses that include discounts on simulation software and a limited number of hours of free consulting. Guerrilla Gravity purchased its simulation tools through Altair’s Startup Program, which is designed for privately-held businesses less than four years old with revenues under $10 million. The program made it fiscally feasible for the mountain bike manufacturer to create a high-performing solution, says Bosworth. “If we had not been given that opportunity, we probably would’ve gone with a much more rudimentary design – probably an isotropic, black aluminum material just to get us somewhere in the ballpark of what we were trying to do,” he says.

Engage with vendors to expedite the learning curve. Don’t just buy simulation tools from suppliers. Most companies offer initial training, plus extra consultation and access to experts as needed. “We like to walk hand-in-hand with our customers,” says Souza. “For smaller companies that don’t have a lot of resources, we can work as a partnership. We help them create the models and teach them the technology behind the product.”

Start small, and take it slow. “I see people go right to the final step, trying to make a really advanced model,” says Bosworth. “Then they get frustrated because nothing is working right and the joints aren’t articulating. They end up troubleshooting so many issues.” Instead, he recommends users start simple, as he did with the thermoplastic bike frame.

Don’t expect to do it all with simulation. “We don’t advocate for 100 percent simulation. There is no such thing. We also don’t advocate for 100 percent experimentation, which is the traditional approach to design,” says Wollschlager. “The trick is that it’s somewhere in the middle, and we’re all struggling to find the perfect percentage. It’s problem-dependent.”

Put the right people in place to use the tools. “Honestly, I don’t know much about FEA software,” admits Montague. “So it goes back to hiring smart people and letting them do their thing.” Bosworth was the “smart hire” for Guerrilla Gravity. And, as an experienced user, he agrees it takes some know-how to work with simulation tools. “I think it would be hard for someone who doesn’t have basic material knowledge and a fundamental understanding of stress and strain and boundary conditions to utilize the tools no matter how basic the FEA software is,” he says. For now, simulation is typically handled by engineers, though that may change.

Perhaps the largest barrier to implementation is ignorance – not of individuals, but industry-wide, says Pipes. “People don’t know what simulation can do for them – even many top level senior managers in aerospace,” he says. “They still think of simulation in terms of geometry and performance, not manufacturing. And manufacturing is where the big payoff is going to be because that’s where all the economics lie.”

Pipes wants to “stretch people into believing what you can and will be able to do with simulation.” As the technology advances, that includes more and more each day – not just for mega corporations, but for small and mid-sized companies, too.

“As the simulation industry gets democratized, prices are going to come down due to competition, while the amount you can do will go through the roof,” says Wollschlager. “It’s a great time to get involved in simulation.”

Source : http://compositesmanufacturingmagazine.com/2019/05/making-simulation-accessible-to-the-masses/

 

Which New Business Models Will Be Unleashed By Web 3.0? – Fabric

The forthcoming wave of Web 3.0 goes far beyond the initial use case of cryptocurrencies. Through the richness of interactions now possible and the global scope of counterparties available, Web 3.0 will cryptographically connect data from individuals, corporations and machines with efficient machine learning algorithms, leading to the rise of fundamentally new markets and associated business models.

The future impact of Web 3.0 is undeniable, but the question remains: which business models will crack the code to provide lasting and sustainable value in today’s economy?

A History of Business Models across Web 1.0, Web 2.0 and Web 3.0

We will dive into the native business models that have been, and will be, enabled by Web 3.0, but first let us briefly touch upon the quickly forgotten and often arduous journeys that led to the unexpected and unpredictable business models that succeeded in Web 2.0.

To set the scene anecdotally for Web 2.0’s business model discovery process, let us not forget the journey that Google went through from their launch in 1998 to 2002 before going public in 2004:

  • In 1999, while enjoying good traffic, Google was clearly struggling with its business model. Lead investor Mike Moritz (Sequoia Capital) openly stated: “we really couldn’t figure out the business model, there was a period where things were looking pretty bleak”.
  • In 2001, Google was making $85m in revenue while its rival Overture was making $288m, as CPM-based online advertising fell away after the dot-com crash.
  • In 2002, adopting Overture’s ad model, Google launched AdWords Select: its own pay-per-click, auction-based search-advertising product.
  • Two years later, in 2004, Google was handling 84.7% of all internet searches and went public at a valuation of $23.2 billion, with annualised revenues of $2.7 billion.

After four years of struggle, a single small modification to its business model launched Google into orbit and made it one of the world’s most valuable companies.

Looking back at the wave of Web 2.0 Business Models

Content

The earliest iterations of online content merely involved the digitisation of existing newspapers and phone books … and yet, we’ve now seen Roma (Alfonso Cuarón), a movie distributed via the subscription streaming giant Netflix, receive 10 Academy Award nominations.

Marketplaces

Amazon started as an online bookstore that nobody believed could become profitable … and yet, it is now the behemoth of marketplaces covering anything from gardening equipment to healthy food to cloud infrastructure.

Open Source Software

Open source software development started off with hobbyists and an idealist view that software should be a freely accessible common good … and yet, the entire internet runs on open source software today, creating $400b of economic value a year; GitHub was acquired by Microsoft for $7.5b, and Red Hat makes $3.4b in yearly revenues providing services for Linux.

SaaS

In the early days of Web 2.0, it might have been inconceivable that after massively spending on proprietary infrastructure one could deliver business software via a browser and become economically viable … and yet, today the large majority of B2B businesses run on SaaS models.

Sharing Economy

It was hard to believe that anyone would be willing to climb into a stranger’s car or rent out their couch to travellers … and yet, Uber and AirBnB have become the largest taxi operator and accommodation provider in the world, without owning any cars or properties.

Advertising

While Google and Facebook might have gone into hyper-growth early on, they didn’t have a clear plan for revenue generation for the first half of their existence … and yet, the advertising model turned out to fit them almost too well: they now generate 58% of global digital advertising revenues ($111B in 2018), and advertising has become the dominant business model of Web 2.0.

Emerging Web 3.0 Business Models

Looking at Web 3.0 over the past 10 years, we see that initial business models have tended not to be repeatable or scalable, or have simply tried to replicate Web 2.0 models. We are convinced that while there is some scepticism about their viability, the continuous experimentation by some of the smartest builders will lead to incredibly valuable models being built over the coming years.

By exploring both the more established and the more experimental Web 3.0 business models, we aim to understand how some of them will accrue value over the coming years.

  • Issuing a native asset
  • Holding the native asset, building the network
  • Taxation on speculation (exchanges)
  • Payment tokens
  • Burn tokens
  • Work Tokens
  • Other models

Issuing a native asset:

Bitcoin came first. Proof of Work coupled with Nakamoto Consensus created the first Byzantine Fault Tolerant & fully open peer to peer network. Its intrinsic business model relies on its native asset: BTC — a provable scarce digital token paid out to miners as block rewards. Others, including Ethereum, Monero and ZCash, have followed down this path, issuing ETH, XMR and ZEC.

These native assets are necessary for the functioning of the network and derive their value from the security they provide: a high enough block reward attracts honest miners to contribute hashing power, so the cost for malicious actors to perform an attack grows alongside the price of the native asset; in turn, the added security drives further demand for the currency, further increasing its price and value. The value accrued in these native assets has been analysed & quantified at length.

Holding the native asset, building the network:

Some of the earliest companies that formed around crypto networks had a single mission: make their respective networks more successful & valuable. Their resultant business model can be condensed to “increase their native asset treasury; build the ecosystem”. Blockstream, acting as one of the largest maintainers of Bitcoin Core, relies on creating value from its balance sheet of BTC. Equally, ConsenSys has grown to a thousand employees building critical infrastructure for the Ethereum ecosystem, with the purpose of increasing the value of the ETH it holds.

While this perfectly aligns the companies with the networks, the model is hard to replicate beyond the first handful of companies: amassing a meaningful enough balance of native assets becomes impossible after a while … and the blood, toil, tears and sweat of launching & sustaining a company cannot be justified without a large enough stake for exponential returns. As an illustration, it wouldn’t be rational for any business other than a central bank — a US remittance provider, for example — to base its business purely on holding large sums of USD while working on making the US economy more successful.

Taxing the Speculative Nature of these Native Assets:

The subsequent generation of business models focused on building the financial infrastructure for these native assets: exchanges, custodians & derivatives providers. They were all built with a simple business objective — providing services for users interested in speculating on these volatile assets. While the likes of Coinbase, Bitstamp & Bitmex have grown into billion-dollar companies, they do not have a fully monopolistic nature: they provide convenience & enhance the value of their underlying networks. The open & permissionless nature of the underlying networks makes it impossible for companies to lock in a monopolistic position by virtue of providing “exclusive access”, but their liquidity and brands provide defensible moats over time.

Payment Tokens:

With The Rise of the Token Sale, a new wave of projects in the blockchain space based their business models on payment tokens within networks: often creating two-sided marketplaces, and enforcing the use of a native token for any payments made. The assumption is that as the network’s economy grows, demand for the limited native payment token will increase, which should lead to an increase in the value of the token. While the value accrual of such a token model is debated, the increased friction for the user is clear — what could have been paid in ETH or DAI now requires additional exchanges on both sides of a transaction. While this model was widely used during the 2017 token mania, its friction-inducing characteristics have rapidly removed it from the forefront of development over the past 9 months.

Burn Tokens:

Revenue-generating communities, companies and projects with a token might not always be able to pass profits on to the token holders in a direct manner. A model that garnered a lot of interest as one of the characteristics of the Binance (BNB) and MakerDAO (MKR) tokens was the idea of buybacks / token burns. As revenues flow into the project (from trading fees for Binance and from stability fees for MakerDAO), native tokens are bought back from the public market and burned, resulting in a decrease in the supply of tokens, which should lead to an increase in price. It’s worth exploring Arjun Balaji’s evaluation (The Block), in which he argues the Binance token burning mechanism doesn’t actually result in the equivalent of an equity buyback: as there are no dividends paid out at all, the “earnings per token” remain at $0.
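To make those mechanics concrete, here is a toy simulation of a buy-and-burn schedule for a hypothetical token. The supply, market cap, and burn budget are invented numbers, and the flat-market-cap assumption is a deliberate simplification rather than a claim about how BNB or MKR actually trade.

```python
# Toy buy-and-burn simulation (hypothetical token, made-up numbers).
supply = 100_000_000               # circulating tokens
market_cap = 200_000_000           # assume the market cap stays flat, in USD
quarterly_burn_budget = 5_000_000  # USD of revenue committed to buybacks each quarter

for quarter in range(1, 5):
    price = market_cap / supply
    tokens_burned = quarterly_burn_budget / price
    supply -= tokens_burned
    print(f"Q{quarter}: price ${price:.3f}, burned {tokens_burned:,.0f}, "
          f"remaining supply {supply:,.0f}")

# Under the (strong) assumption of an unchanged market cap, the shrinking supply
# mechanically raises the per-token price; whether that holds in practice is
# exactly what Balaji's critique questions.
```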

Work Tokens:

One of the business models for crypto-networks that we are seeing ‘hold water’ is the work token: a model that focuses exclusively on the revenue-generating supply side of a network in order to reduce friction for users. Some good examples include Augur’s REP and Keep Network’s KEEP tokens. A work token model operates similarly to classic taxi medallions, as it requires service providers to stake / bond a certain amount of native tokens in exchange for the right to provide profitable work to the network. One of the most powerful aspects of the work token model is the ability to incentivise actors with both carrot (rewards for the work) & stick (stake that can be slashed). Beyond providing security to the network by incentivising the service providers to execute honest work (as they have locked skin in the game denominated in the work token), such tokens can also be evaluated by the predictable future cash flows to the collective of service providers (we have previously explored the benefits and valuation methods for such tokens in this blog). In brief, such tokens should be valued based on the future expected cash flows attributable to all the service providers in the network, which can be modelled out based on assumptions on pricing and usage of the network.
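To illustrate the modelling approach rather than any particular network, a bare-bones discounted cash flow over assumed usage might look like the sketch below. The fee level, growth rate, provider margin, discount rate, and token supply are all hypothetical inputs, not estimates for REP or KEEP.

```python
# Toy DCF for a work token network (all inputs are hypothetical assumptions).
DISCOUNT_RATE = 0.30          # high rate reflecting early-stage network risk
YEARS = 10

annual_fees = 2_000_000       # year-1 fees paid to service providers, USD
fee_growth = 0.40             # assumed annual growth in network usage and fees
provider_margin = 0.60        # share of fees that is profit to providers

present_value = 0.0
for year in range(1, YEARS + 1):
    cash_flow = annual_fees * ((1 + fee_growth) ** (year - 1)) * provider_margin
    present_value += cash_flow / ((1 + DISCOUNT_RATE) ** year)

token_supply = 10_000_000     # tokens that must be staked to perform the work
print(f"PV of provider cash flows: ${present_value:,.0f}")
print(f"Implied value per work token: ${present_value / token_supply:,.2f}")
```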

A wide array of other models are being explored and worth touching upon:

  • Dual token model such as MKR/DAI & SPANK/BOOTY where one asset absorbs the volatile up- & down-side of usage and the other asset is kept stable for optimal transacting.
  • Governance tokens which provide the ability to influence parameters such as fees and development prioritisation and can be valued from the perspective of an insurance against a fork.
  • Tokenised securities as digital representations of existing assets (shares, commodities, invoices or real estate) which are valued based on the underlying asset with a potential premium for divisibility & borderless liquidity.
  • Transaction fees, such as the models BloXroute & Aztec Protocol have been exploring, in which a treasury takes a small transaction fee in exchange for its enhancements (e.g. scalability & privacy respectively).
  • Tech 4 Tokens as proposed by the Starkware team who wish to provide their technology as an investment in exchange for tokens — effectively building a treasury of all the projects they work with.
  • Providing UX/UI for protocols, such as Veil & Guesser are doing for Augur and Balance is doing for the MakerDAO ecosystem, relying on small fees or referrals & commissions.
  • Network specific services which currently include staking providers (e.g. Staked.us), CDP managers (e.g. topping off MakerDAO CDPs before they become undercollateralised) or marketplace management services such as OB1 on OpenBazaar, which can charge traditional fees (subscription or as a % of revenues).
  • Liquidity providers operating in applications that don’t have revenue generating business models. For example, Uniswap is an automated market maker, in which the only route to generating revenues is providing liquidity pairs.

With this wealth of new business models arising and being explored, it becomes clear that while there is still room for traditional venture capital, the role of the investor and of capital itself is evolving. The capital itself morphs into a native asset within the network, with a specific role to fulfil. From passive network participation, to bootstrapping networks after a financial investment (e.g. computational work or liquidity provision), to direct injections of subjective work into the networks (e.g. governance or CDP risk evaluation), investors will have to reposition themselves for this new organisational mode driven by trust-minimised decentralised networks.

When looking back, we realise Web 1.0 & Web 2.0 took exhaustive experimentation to find the appropriate business models, which have created the tech titans of today. We are not ignoring the fact that Web 3.0 will have to go on an equally arduous journey of iterations, but once we find adequate business models, they will be incredibly powerful: in trust minimised settings, both individuals and enterprises will be enabled to interact on a whole new scale without relying on rent-seeking intermediaries.

Today we see thousands of incredibly talented teams pushing forward implementations of some of these models or discovering completely new viable business models. As the models might not fit the traditional frameworks, investors might have to adapt by taking on new roles and providing work as well as capital (a journey we have already started at Fabric Ventures), but as long as we can see predictable and rational value accrual, it makes sense to double down, as every day the execution risk is getting smaller and smaller.

Source : https://medium.com/fabric-ventures/which-new-business-models-will-be-unleashed-by-web-3-0-4e67c17dbd10

Why are Machine Learning Projects so Hard to Manage? – Lukas Biewald

I’ve watched lots of companies attempt to deploy machine learning — some succeed wildly and some fail spectacularly. One constant is that machine learning teams have a hard time setting goals and setting expectations. Why is this?

1. It’s really hard to tell in advance what’s hard and what’s easy.

Is it harder to beat Kasparov at chess or to pick up and physically move the chess pieces? Computers beat the world champion chess player over twenty years ago, but reliably grasping and lifting objects is still an unsolved research problem. Humans are not good at evaluating what will be hard for AI and what will be easy. Even within a domain, performance can vary wildly. What’s good accuracy for predicting sentiment? On movie reviews, there is a lot of text, writers tend to be fairly clear about what they think, and these days 90–95% accuracy is expected. On Twitter, two humans might only agree on the sentiment of a tweet 80% of the time. It might be possible to get 95% accuracy on the sentiment of tweets about certain airlines by just always predicting that the sentiment is going to be negative.

Metrics can also increase a lot in the early days of a project and then suddenly hit a wall. I once ran a Kaggle competition where thousands of people competed around the world to model my data. In the first week, the accuracy went from 35% to 65%, but over the next several months it never got above 68%. That 68% accuracy was clearly the limit of the data with the best, most up-to-date machine learning techniques. The people competing in the Kaggle competition worked incredibly hard to get that 68% accuracy, and I’m sure it felt like a huge achievement. But for most use cases, 65% vs. 68% is totally indistinguishable. If that had been an internal project, I would have definitely been disappointed by the outcome.

My friend Pete Skomoroch was recently telling me how frustrating it was to do engineering standups as a data scientist working on machine learning. Engineering projects generally move forward, but machine learning projects can completely stall. It’s possible, even common, for a week spent on modeling data to result in no improvement whatsoever.

2. Machine Learning is prone to fail in unexpected ways.

Machine learning generally works well as long as you have lots of training data *and* the data you’re running on in production looks a lot like your training data. Humans are so good at generalizing from training data that we have terrible intuitions about this. I built a little robot with a camera and a vision model trained on the millions of ImageNet images, which were taken off the web. I preprocessed the images from my robot’s camera to look like the images from the web, but the accuracy was much worse than I expected. Why? Images off the web tend to frame the object in question. My robot wouldn’t necessarily look right at an object in the same way a human photographer would. Humans would likely not even notice the difference, but modern deep learning networks suffered a lot. There are ways to deal with this phenomenon, but I only noticed it because the degradation in performance was so jarring that I spent a lot of time debugging it.

Much more pernicious are the subtle differences that lead to degraded performance that are hard to spot. Language models trained on the New York Times don’t generalize well to social media texts. We might expect that. But apparently, models trained on text from 2017 experience degraded performance on text written in 2018. Upstream distributions shift over time in lots of ways. Fraud models break down completely as adversaries adapt to what the model is doing.

3. Machine Learning requires lots and lots of relevant training data.

Everyone knows this and yet it’s such a huge barrier. Computer vision can do amazing things, provided you are able to collect and label a massive amount of training data. For some use cases, the data is a free byproduct of some business process. This is where machine learning tends to work really well. For many other use cases, training data is incredibly expensive and challenging to collect. A lot of medical use cases seem perfect for machine learning — crucial decisions with lots of weak signals and clear outcomes — but the data is locked up due to important privacy issues or not collected consistently in the first place.

Many companies don’t know where to start in investing in collecting training data. It’s a significant effort and it’s hard to predict a priori how well the model will work.

What are the best practices to deal with these issues?

1. Pay a lot of attention to your training data.
Look at the cases where the algorithm is misclassifying data that it was trained on. These are almost always mislabels or strange edge cases. Either way you really want to know about them. Make everyone working on building models look at the training data and label some of the training data themselves. For many use cases, it’s very unlikely that a model will do better than the rate at which two independent humans agree.
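As a minimal sketch of that review loop, the snippet below fits a simple text classifier with scikit-learn and then surfaces the training rows the fitted model still gets wrong so a human can inspect them for mislabels. The file path and column names are placeholders, and the model choice is incidental; the point is the inspection step.

```python
# Surface training examples the fitted model misclassifies (likely mislabels or edge cases).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("training_data.csv")          # placeholder path; columns "text" and "label"
vectorizer = TfidfVectorizer(min_df=2)
X = vectorizer.fit_transform(df["text"])
y = df["label"]

model = LogisticRegression(max_iter=1000).fit(X, y)
df["predicted"] = model.predict(X)

suspects = df[df["predicted"] != df["label"]]
print(f"{len(suspects)} of {len(df)} training rows are misclassified by the fitted model")

# Have the team read these rows: most turn out to be labeling errors or genuinely
# ambiguous cases worth a second independent label.
suspects.to_csv("training_rows_to_review.csv", index=False)
```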

2. Get something working end-to-end right away, then improve one thing at a time.
Start with the simplest thing that might work and get it deployed. You will learn a ton from doing this. Additional complexity at any stage in the process always improves models in research papers but it seldom improves models in the real world. Justify every additional piece of complexity.

Getting something into the hands of the end user helps you get an early read on how well the model is likely to work and it can bring up crucial issues like a disagreement between what the model is optimizing and what the end user wants. It also may make you reassess the kind of training data you are collecting. It’s much better to discover those issues quickly.

3. Look for graceful ways to handle the inevitable cases where the algorithm fails.
Nearly all machine learning models fail a fair amount of the time, and how this is handled is absolutely crucial. Models often have a reliable confidence score that you can use. With batch processes, you can build human-in-the-loop systems that send low-confidence predictions to an operator to make the system work reliably end to end and collect high-quality training data. With other use cases, you might be able to present low-confidence predictions in a way that flags potential errors or makes them less annoying to the end user.
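A minimal sketch of that routing logic is below, assuming a classifier that exposes predict_proba (as scikit-learn models do). The confidence threshold is a placeholder to be tuned on a validation set against your own accuracy and reviewer-workload targets.

```python
# Route low-confidence predictions to a human reviewer (a sketch, not a framework).
CONFIDENCE_THRESHOLD = 0.85   # placeholder; tune on a validation set

def predict_with_escalation(model, X_batch):
    """Auto-accept confident predictions; queue the rest for human review."""
    probabilities = model.predict_proba(X_batch)
    confidences = probabilities.max(axis=1)
    labels = model.classes_[probabilities.argmax(axis=1)]

    auto_accepted, needs_review = [], []
    for i, (label, confidence) in enumerate(zip(labels, confidences)):
        if confidence >= CONFIDENCE_THRESHOLD:
            auto_accepted.append((i, label, confidence))
        else:
            # The operator's decision doubles as a new high-quality training label.
            needs_review.append((i, label, confidence))
    return auto_accepted, needs_review
```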

What’s Next?

The original goal of machine learning was mostly around smart decision making, but more and more we are trying to put machine learning into products we use. As we start to rely more and more on machine learning algorithms, machine learning becomes an engineering discipline as much as a research topic. I’m incredibly excited about the opportunity to build completely new kinds of products but worried about the lack of tools and best practices. So much so that I started a company to help with this called Weights and Biases. If you’re interested in learning more, check out what we’re up to.

Source : https://medium.com/@l2k/why-are-machine-learning-projects-so-hard-to-manage-8e9b9cf49641

Industrial tech may not be sexy, but VCs are loving it – John Tough

There are nearly 300 industrial-focused companies within the Fortune 1,000. The median revenue for these economy-anchoring firms is nearly $4.9 billion, resulting in over $9 trillion in market capitalization.

Due to the boring nature of some of these industrial verticals and the complexity of the value chain, venture-related events tend to get lost within our traditional VC news channels. But entrepreneurs (and VCs willing to fund them) are waking up to the potential rewards of gaining access to these markets.

Just how active is the sector now?

That’s right: Last year nearly $6 billion went into Series A, B & C startups within the industrial, engineering & construction, power, energy, mining & materials, and mobility segments. Venture capital dollars deployed to these sectors are growing at a 30 percent annual rate, up from ~$750 million in 2010.

And while $6 billion invested is notable due to the previous benchmarks, this early stage investment figure still only equates to ~0.2 percent of the revenue for the sector and ~1.2 percent of industry profits.

The number of deals in the space shows a similarly strong growth trajectory. But some interesting trends are beginning to emerge: the capital deployed to the industrial technology market is growing at a faster clip than the number of deals. These differing growth trajectories mean that the average deal size has grown by 45 percent in the last eight years, from $18 million to $26 million.

Detail by stage of financing

Median Series A deal size in 2018 was $11 million, representing a modest 8 percent increase in size versus 2012/2013. But Series A deal volume is up nearly 10x since then!

Median Series B deal size in 2018 was $20 million, an 83 percent growth over the past five years and deal volume is up about 4x.

Median Series C deal size in 2018 was $33 million, representing an enormous 113 percent growth over the past five years. But Series C deals have appeared to reach a plateau in the low 40s, so investors are becoming pickier in selecting the winners.

These graphs show that the Series A investors have stayed relatively consistent and that the overall 46 percent increase in sector deal size growth primarily originates from the Series B and Series C investment rounds. With bigger rounds, how are valuation levels adjusting?

Above: Growth in pre-money valuation particularly acute in later stage deals

The data shows that valuations have increased even faster than the round sizes themselves. This means management teams are not feeling any incremental dilution from raising these larger rounds.

  • The average Series A round now buys about 24 percent, slightly less than five years ago.
  • The average Series B round now buys about 22 percent of the company, down from 26 percent five years ago.
  • The average Series C round now buys approximately 20 percent, down from 23 percent five years ago. (A quick check of the valuations these figures imply follows below.)
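Treating the quoted average ownership percentages as if they applied to the 2018 median round sizes above (an approximation, since medians and averages are being mixed), the implied valuations can be backed out with simple arithmetic: post-money is the round size divided by the ownership sold, and pre-money is post-money minus the round.

```python
# Implied valuations from round size and ownership sold (2018 figures quoted above).
rounds = {
    "Series A": (11_000_000, 0.24),
    "Series B": (20_000_000, 0.22),
    "Series C": (33_000_000, 0.20),
}

for stage, (round_size, ownership_sold) in rounds.items():
    post_money = round_size / ownership_sold
    pre_money = post_money - round_size
    print(f"{stage}: post-money ~${post_money / 1e6:.0f}M, pre-money ~${pre_money / 1e6:.0f}M")
```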

Some conclusions

  • Dollars invested remain a small portion of industry revenue and profit, which leaves room for further capital commitments.
  • There is a growing appreciation for the industrial sales cycle. Investors’ willingness to wait for reduced risk before deploying even more capital in the perceived winners appears to be driving this trend.
  • Entrepreneurs that can successfully de-risk their enterprise through revenue, partnerships, and industry hires will gain access to outsized capital pools. The winners in this market tend to compound, as later customers look to early adopters.
  • Uncertainty still remains about exit opportunities for technology companies that serve these industries. While there are a few headline-grabbing acquisitions (PlanGrid, Kurion, OSIsoft), we are not hearing about a sizable exit from this market on a weekly or monthly cadence. This means we won’t know for a few years about the returns impact of these rising valuations. Grab your hard hat!

Source : https://venturebeat.com/2019/01/22/industrial-tech-may-not-be-sexy-but-vcs-are-loving-it/

Key to any successful industrial digitalisation project – Manufacturer

Intelligent use of real-time data is critical to successful industrial digitalisation. However, ensuring that data flows effectively is just as critical to success. Todd Gurela explains the importance of getting your manufacturing network right.

Industrial digitalisation, including the Industrial Internet of Things (IIoT), offers great promise for manufacturers looking to optimise business operations.

By bringing together the machines, processes, people and data on your plant floor through a secure Ethernet network, IIoT makes it possible to design, develop, and fabricate products faster, safer, and with less waste.

For example, one automotive parts supplier eliminated network downtime, saving around £750,000 in the process simply by deploying a new wireless network across the factory floor.

The time it took for the company to completely recoup their investment in the project? Just nine months.

The key to any successful industrial digitalisation project is factory data

Without data – extracted from multiple sources and delivered to the right application, at the right time – little optimisation can happen.

And there is a multitude of meaningful data held in factory equipment. Consider how real-time access to condition, performance, and quality data – across every machine on the floor – would help you make better business and production decisions.

Imagine the following. A machine sensor detects that volume is low for a particular part on your assembly line. Data analysis determines, based on real-time production speed and previous output totals, that the part needs to be re-stocked in one hour.

With this information, your team can arrange for replacement parts to arrive before you run out, and avoid a production stoppage.
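The arithmetic behind such an alert is simple enough to sketch in a few lines. The part count, consumption rate, and replenishment lead time below are stand-in values, not figures from any real deployment.

```python
# Sketch of the re-stock alert described above (all numbers are stand-ins).
parts_remaining = 240             # reported by the bin/machine sensor
consumption_per_minute = 4.0      # derived from real-time production speed
replenishment_lead_minutes = 45   # time for stores to deliver the part to the line

minutes_to_stockout = parts_remaining / consumption_per_minute   # ~60 min, i.e. one hour
if minutes_to_stockout <= replenishment_lead_minutes:
    print("Order replacement parts NOW to avoid a production stoppage")
else:
    slack = minutes_to_stockout - replenishment_lead_minutes
    print(f"Stockout in ~{minutes_to_stockout:.0f} min; "
          f"trigger replenishment within the next {slack:.0f} min")
```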

This scenario may be theoretical, but it illustrates a genuine truth. Manufacturers need reliable, scalable, secure factory networks so they can focus on their most important task: making whatever they make more efficiently, at higher quality levels, and at lower costs.

At the heart of this truth is the factory network. So, while the key to a successful Industry 4.0 project is data, the key to meaningful, accurate data is the network. And manufacturers need to plan carefully to ensure their network can deliver on their needs.

Five key network characteristics

There are five characteristics manufacturers should look for in a factory network before selecting a vendor.

In no particular order, they are:

Interoperability – this ability allows for the ‘flattening’ of the industrial network to improve data sharing, and usually includes Ethernet as a standard.

Automation – for ‘plug and play’ network deployment to streamline processes and drive productivity.

Simplicity – the network infrastructure should be simple, as should the management.

Security – your network should be secure and provide visibility into and control of your data to reduce risk, protect intellectual property, and ensure production integrity.

Intelligence – you need a network that makes it possible to analyse data, and take action quickly, even at the network edge.

Manufacturers need solutions with these features to help aggregate, visualise, and analyse data from connected machines and equipment, and to assure the reliable, rapid, and secure delivery of data. Anything less will leave them wanting, and with subpar results.

These five characteristics are explained in more detail below, along with a real-world case study of a British manufacturer who recently modernised its network and is now expanding globally. 

1. Interoperability

Network interoperability allows manufacturers to seamlessly pull data from anywhere in their facility. An emerging standard in this area is Time Sensitive Networking (TSN).

Although not yet widely adopted, TSN provides a common communications pathway for your machines. With TSN, the future of industrial networks will be a single, open Ethernet network across the factory floor that enables manufacturers to access data with ease and efficiency.

Most important, TSN opens up critical control applications such as robot control, drive control, and vision systems to the Industrial Internet of Things (IIoT), making it possible for manufacturers to identify areas for optimisation and cost reduction.

With the OPC-UA protocol now running over TSN, it also becomes possible to have standard, secure communication from sensor to cloud. In fact, TSN fills an important gap in standard networking by protecting critical traffic.

How so? Automation and control applications require consistent delivery of data from sensors to controllers and actuators.

TSN ensures that critical traffic flows promptly, securing bandwidth and time in the network infrastructure for critical applications, while supporting all other forms of traffic.

And because TSN is delivered over standard Industrial Ethernet, control networks can take advantage of the security built into the technology.

TSN eliminates network silos that block reachability to critical plant areas, so that you can extract real-time data for analytics and business insights.

This is key to the future of factory networks, as TSN will drive the interoperability required for manufacturers to maximise the value from Industry 4.0 projects.

One leading manufacturer estimated that unscheduled downtime cost them more than £16,000/minute in lost profits and productivity. That’s almost £1m per hour if production stops. Could your organisation survive a stoppage like that?


2. Automation

Network automation is critical for manufacturers with growing network demands. This includes adding new machines or integrating operational controls into existing infrastructure, as well as net-new deployments.

Network uptime becomes increasingly important as the network expands. Ask yourself whether your network and its supporting tools have the capability for ‘plug and play’ network deployments that greatly reduce downtime if – and when – failure occurs.

It’s essential that factories leverage networks that automate certain tasks – to automatically set correct switch settings, for example – to meet Industry 4.0 objectives. The task is too overwhelming otherwise.


3. Simplicity

Like automation, network simplicity is an essential component of the factory network. Choosing a single network infrastructure capable of handling TSN, EtherNet/IP, Profinet, and CC-Link traffic can significantly simplify installation, reduce maintenance expense, and reduce downtime.

It also makes it possible to get all your machine controls, from any of the top worldwide automation vendors, to talk through the same network hardware.

Consider also that you want a network that can be managed by operations and IT professionals. Avoid solutions that are too IT-centric and look for user-friendly tools that operations can use to troubleshoot network issues quickly.

Tools that visualise the network topology for operations professionals can be especially useful in this regard.

For example, knowing which PLC (including firmware data) is connected to which port, and which I/O is connected to the same switch, can help speed commissioning and troubleshooting.

Last, validated network designs are essential to factory success. These designs help manufacturers quickly roll out new network deployments and maintain the performance of automation equipment. Make sure this is part of the service your network vendor can provide.


4. Security

Cybersecurity is critically important on the factory floor. As manufacturing networks grow, so does the attack surface, or vectors, for malicious activity such as a ransomware attack.

According to the Cisco 2017 Midyear Cybersecurity Report, nearly 50% of manufacturers use six or more security vendors in their facilities. This mix and match of security products and vendors can be difficult to manage for even the most seasoned security expert.

No single product, technology or methodology can fully secure industrial operations. However, there are vendors that can provide comprehensive network security solutions within the plant network infrastructure, including simple protections for physical assets such as blocking access to ports on unmanaged switches, or using managed switches.

Protecting critical manufacturing assets requires a holistic defence-in-depth security approach that uses multiple layers of defence to address different types of threats. It also requires a network design that leverages industrial security best practices such as ‘Demilitarized Zones’ (DMZs) to provide pervasive security across the entire plant.


5. Intelligence

Consider for a moment how professional athletes react to their surroundings. They interpret what is happening in real-time, and make split-second decisions based on what is going on around them.

Part of what makes those decisions possible is how the players have been coached to react in certain situations. If players needed to ask their coach for advice before taking every shot, tackling the opposition, or sprinting for victory…well, the results wouldn’t be very good.

Just as a team’s performance improves when players can take in their surroundings and perform an appropriate action, the factory performs better when certain network data can be processed and actioned upon immediately – without needing to travel to the data centre first.

Processing data in this way is called ‘edge’, or ‘fog’, computing. It entails running applications right on your network hardware to make more intelligent, faster decisions.

Manufacturers need to access information quickly, filter it in real-time, then use that data to better understand processes and areas for improvement.

Processing data at the edge is key to unlocking networking intelligence, so it’s important to ask yourself whether your factory network can support edge applications before beginning a project. And if it can’t, it’s time to consider a new network.

A final note on network intelligence. Once you deploy edge applications, make sure you have the tools to manage and implement them with confidence, at scale. Managing massive amounts of data can quickly become a problem, so you’ll need systems that can extract, compute, and move data to the right places at the right time.
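
As a rough sketch of what acting at the edge can look like in software, the snippet below filters a stream of machine readings locally and only forwards anomalies upstream, so the bulk of the data never has to travel to the data centre. The thresholds, field names, and forwarding step are illustrative assumptions, not part of any specific edge platform.

```python
# Minimal edge-filtering sketch (hypothetical thresholds and field names):
# process readings locally and forward only the anomalies to the plant systems.

from typing import Dict, Iterable, List

VIBRATION_LIMIT_MM_S = 7.0   # assumed alarm threshold for vibration velocity
TEMPERATURE_LIMIT_C = 85.0   # assumed alarm threshold for temperature

def filter_anomalies(readings: Iterable[Dict]) -> List[Dict]:
    """Keep only readings that breach a local limit; everything else stays at the edge."""
    return [
        r for r in readings
        if r.get("vibration_mm_s", 0.0) > VIBRATION_LIMIT_MM_S
        or r.get("temperature_c", 0.0) > TEMPERATURE_LIMIT_C
    ]

# Example: three readings arrive from one machine; only the overheating one is forwarded.
batch = [
    {"machine": "press-04", "vibration_mm_s": 2.1, "temperature_c": 61.0},
    {"machine": "press-04", "vibration_mm_s": 2.3, "temperature_c": 92.5},
    {"machine": "press-04", "vibration_mm_s": 1.9, "temperature_c": 60.4},
]
for alert in filter_anomalies(batch):
    print("Forwarding anomaly to plant dashboard:", alert)
```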

The opportunity for manufacturers who invest in Industry 4.0 solutions is massive (and it’s time that leaders from the top floor and shop floor realised it). But before any Industry 4.0 project can get off the ground, the right foundation needs to be in place.

The factory (or industrial) network is that foundation… and manufacturers owe it to themselves to select the best one available.

Case Study:

SAS International is a leading British manufacturer of quality metal ceilings and bespoke architectural metalwork. Installed in iconic, landmark buildings worldwide, SAS products lead through innovation, cutting-edge design and technical acoustic expertise.

Their success is built on continued investment in manufacturing and achieving value for clients through world-class engineered solutions.

In the UK, SAS operates factories in Bridgend, Birmingham and Maybole, with headquarters and warehouse facilities in Reading. The company has recently expanded its export markets and employs nearly 1,000 staff internationally.

However, the IT infrastructure was operating on ageing equipment with connectivity, visibility and security constraints.

The company’s IT team recently modernised its network, upgrading from commercial-grade wireless to a new network solution with a unified dashboard that allows them to remotely manage distributed sites.

They now have instant visibility and control over the network devices, as well as the mobile devices used by employees daily.

Results

During the initial deployment, the IT team was able to identify cabling issues that they previously would not have been alerted to or able to investigate.

With upcoming projects and continual work to optimise solutions such as cloud storage, the network is now robust and reliable enough to support future IT needs.

SAS is retrofitting numerous manufacturing machines with computers. This retrofit, combined with the new network, allows remote communication between the machines and the designers without having to manually input data at the machines themselves.

The robust wireless infrastructure is changing the manual printing and checking of stock by enabling handheld scanners and creating a more efficient and cost-effective product flow.

Fault mitigation and anomaly detection have been huge benefits of the solution. For example, the IT team was able to quickly identify a bandwidth issue when a phenomenal amount of data was generated from an automated transfer to a shop machine.

They were able to spot the issue, identify the machine, and fix the problem. Before, they would merely have seen there was a network slowdown, but wouldn’t have been able to identify or resolve the problem.

The SAS team will continue to benefit from the firmware updates and new feature releases that are integrated into the solution, providing a future-proof platform as the company expands to global sites.

Source : https://www.themanufacturer.com/articles/the-key-to-any-successful-industrial-digitalisation-project/

Digital Transformation of Business and Society: Challenges and Opportunities by 2020 – Frank Diana

At a recent KPMG Robotic Innovations event, Futurist and friend Gerd Leonhard delivered a keynote titled “The Digital Transformation of Business and Society: Challenges and Opportunities by 2020”. I highly recommend viewing the Video of his presentation. As Gerd describes, he is a Futurist focused on foresight and observations — not predicting the future. We are at a point in history where every company needs a Gerd Leonhard. For many of the reasons presented in the video, future thinking is rapidly growing in importance. As Gerd so rightly points out, we are still vastly under-estimating the sheer velocity of change.

With regard to future thinking, Gerd used my future scenario slide to describe both the exponential and combinatorial nature of future scenarios — not only do we need to think exponentially, but we also need to think in a combinatorial manner. Gerd mentioned Tesla as a company that really knows how to do this.

Our Emerging Future

He then described our current pivot point of exponential change: a point in history where humanity will change more in the next twenty years than in the previous 300. With that as a backdrop, he encouraged the audience to look five years into the future and spend 3 to 5% of their time focused on foresight. He quoted Peter Drucker (“In times of change the greatest danger is to act with yesterday’s logic”) and stated that leaders must shift from a focus on what is, to a focus on what could be. Gerd added that “wait and see” means “wait and die” (love that by the way). He urged leaders to focus on 2020 and build a plan to participate in that future, emphasizing the question is no longer what-if, but what-when. We are entering an era where the impossible is doable, and the headline for that era is: exponential, convergent, combinatorial, and inter-dependent — words that should be a key part of the leadership lexicon going forward. Here are some snapshots from his presentation:

  • Because of exponential progression, it is difficult to imagine the world in 5 years, and although the industrial era was impactful, it will not compare to what lies ahead. The danger of vastly under-estimating the sheer velocity of change is real. For example, in just three months, the projection for the number of autonomous vehicles sold in 2035 went from 100 million to 1.5 billion
  • Six years ago Gerd advised a German auto company about the driverless car and the implications of a sharing economy — and they laughed. Think of what’s happened in just six years — can’t imagine anyone is laughing now. He witnessed something similar as a veteran of the music business where he tried to guide the industry through digital disruption; an industry that shifted from selling $20 CDs to making a fraction of a penny per play. Gerd’s experience in the music business is a lesson we should learn from: you can’t stop people who see value from extracting that value. Protectionist behavior did not work, as the industry lost 71% of their revenue in 12 years. Streaming music will be huge, but the winners are not traditional players. The winners are Spotify, Apple, Facebook, Google, etc. This scenario likely plays out across every industry, as new businesses are emerging, but traditional companies are not running them. Gerd stressed that we can’t let this happen across these other industries
  • Anything that can be automated will be automated: truck drivers and pilots go away, as robots don’t need unions. There is just too much to be gained not to automate. For example, 60% of the cost in the system could be eliminated by interconnecting logistics, possibly realized via a Logistics Internet as described by economist Jeremy Rifkin. But the drive towards automation will have unintended consequences and some science fiction scenarios could play out. Humanity and technology are indeed intertwining, but technology does not have ethics. A self-driving car would need ethics, as we make difficult decisions while driving all the time. How does a car decide to hit a frog versus swerving and hitting a mother and her child? Speaking of science fiction scenarios, Gerd predicts that when these things come together, humans and machines will have converged:
  • Gerd has been using the term “Hellven” to represent the two paths technology can take. Is it 90% heaven and 10% hell (unintended consequences), or can this equation flip? He asks the question: Where are we trying to go with this? He used the real example of Drones used to benefit society (heaven), but people buying guns to shoot them down (hell). As we pursue exponential technologies, we must do it in a way that avoids negative consequences. Will we allow humanity to move down a path where by 2030, we will all be human-machine hybrids? Will hacking drive chaos, as hackers gain control of a vehicle? A recent Jeep recall of 1.4 million jeeps underscores the possibility. A world of super intelligence requires super humanity — technology does not have ethics, but society depends on it. Is this Ray Kurzweil vision what we want?
  • Is society truly ready for human-machine hybrids, or even advancements like the driverless car that may be closer to realization? Gerd used a very effective Video to make the point
  • Followers of my Blog know I’m a big believer in the coming shift to value ecosystems. Gerd described this as a move away from Egosystems, where large companies are running large things, to interdependent Ecosystems. I’ve talked about the blurring of industry boundaries and the movement towards ecosystems. We may ultimately move away from the industry construct and end up with a handful of ecosystems like: mobility, shelter, resources, wellness, growth, money, maker, and comfort
  • Our kids will live to 90 or 100 as the default. We are gaining 8 hours of longevity per day — one third of a year per year. Genetic engineering is likely to eradicate disease, impacting longevity and global population. DNA editing is becoming a real possibility in the next 10 years, and at least 50 Silicon Valley companies are focused on ending aging and eliminating death. One such company is Human Longevity Inc., which was co-founded by Peter Diamandis of Singularity University. Gerd used a quote from Peter to help the audience understand the motivation: “Today there are six to seven trillion dollars a year spent on healthcare, half of which goes to people over the age of 65. In addition, people over the age of 65 hold something on the order of $60 trillion in wealth. And the question is what would people pay for an extra 10, 20, 30, 40 years of healthy life. It’s a huge opportunity”
  • Gerd described the growing need to focus on the right side of our brain. He believes that algorithms can only go so far. Our right brain characteristics cannot be replicated by an algorithm, making a human-algorithm combination — or humarithm as Gerd calls it — a better path. The right brain characteristics that grow in importance and drive future hiring profiles are:
  • Google is on the way to becoming the global operating system — an Artificial Intelligence enterprise. In the future, you won’t search, because as a digital assistant, Google will already know what you want. Gerd quotes Ray Kurzweil in saying that by 2027, the capacity of one computer will equal that of the human brain — at which point we shift from an artificial narrow intelligence, to an artificial general intelligence. In thinking about AI, Gerd flips the paradigm to IA or intelligent Assistant. For example, Schwab already has an intelligent portfolio. He indicated that every bank is investing in intelligent portfolios that deal with simple investments that robots can handle. This leads to a 50% replacement of financial advisors by robots and AI
  • This intelligent assistant race has just begun, as Siri, Google Now, Facebook MoneyPenny, and Amazon Echo vie for intelligent assistant positioning. Intelligent assistants could eliminate the need for actual assistants in five years, and creep into countless scenarios over time. Police departments are already capable of determining who is likely to commit a crime in the next month, and there are examples of police taking preventative measures. Augmentation adds another dimension, as an officer wearing glasses can identify you upon seeing you and have your records displayed in front of them. There are over 100 companies focused on augmentation, and a number of intelligent assistant examples surrounding IBM Watson; the most discussed being the effectiveness of doctor assistance. An intelligent assistant is the likely first role in the autonomous vehicle transition, as cars step in to provide a number of valuable services without completely taking over. There are countless Examples emerging
  • Gerd took two polls during his keynote. Here is the first: how do you feel about the rise of intelligent digital assistants? Answers 1 and 2 below received the lion’s share of the votes
  • Collectively, automation, robotics, intelligent assistants, and artificial intelligence will reframe business, commerce, culture, and society. This is perhaps the key take away from a discussion like this. We are at an inflection point where reframing begins to drive real structural change. How many leaders are ready for true structural change?
  • Gerd likes to refer to the 7-ations: Digitization, De-Materialization, Automation, Virtualization, Optimization, Augmentation, and Robotization. Consequences of the exponential and combinatorial growth of these seven include dependency, job displacement, and abundance. Whereas these seven are tools for dramatic cost reduction, they also lead to abundance. Examples are everywhere, from the 16 million songs available through Spotify, to the 3D printed cars that require only 50 parts. As supply exceeds demand in category after category, we reach abundance. As Gerd put it, in five years’ time, genome sequencing will be cheaper than flushing the toilet and abundant energy will be available by 2035 (2015 will be the first year that a major oil company will leave the oil business to enter the abundance of the renewable business). Other things to consider regarding abundance:
  • Efficiency and business improvement are a path, not a destination. Gerd estimates that total efficiency will be reached in 5 to 10 years, creating value through productivity gains along the way. However, after total efficiency is met, value comes from purpose. Purpose-driven companies have an aspirational purpose that aims to transform the planet, referred to as a massive transformative purpose in a recent book on exponential organizations. When you consider the value that the millennial generation places on purpose, it is clear that successful organizations must excel at both technology and humanity. If we allow technology to trump humanity, business would have no purpose
  • In the first phase, the value lies in the automation itself (productivity, cost savings). In the second phase, the value lies in those things that cannot be automated. Anything that is human about your company cannot be automated: purpose, design, and brand become more powerful. Companies must invent new things that are only possible because of automation
  • Technological unemployment is real this time — and exponential. Gerd pointed to a recent study by the Economist that describes how robotics and artificial intelligence will increasingly be used in place of humans to perform repetitive tasks. On the other side of the spectrum is a demand for better customer service and greater skills in innovation driven by globalization and falling barriers to market entry. Therefore, creativity and social intelligence will become crucial differentiators for many businesses; jobs will increasingly demand skills in creative problem-solving and constructive interaction with others
  • Gerd described a basic income guarantee that may be necessary if some of these unemployment scenarios play out. Something like this is already on the ballot in Switzerland, and it is not the first time this has been talked about:
  • In the world of automation, experience becomes extremely valuable — and you can’t, nor should you attempt to, automate experiences. We clearly see an intense focus on customer experience, and we had a great discussion on the topic on an August 26th Game Changers broadcast. Innovation is critical to both the service economy and experience economy. Gerd used a visual to describe the progression of economic value:
Source: B. Joseph Pine II and James Gilmore: The Experience Economy
  • Gerd used a second poll to sense how people would feel about humans becoming artificially intelligent. Here again, the audience leaned towards the first two possible answers:

Gerd then summarized the session as follows:

The future is exponential, combinatorial, and interdependent: the sooner we can adjust our thinking (lateral) the better we will be at designing our future.

My take: Gerd hits on a key point. Leaders must think differently. There is very little in a leader’s collective experience that can guide them through the type of change ahead — it requires us all to think differently

When looking at AI, consider trying IA first (intelligent assistance / augmentation).

My take: These considerations allow us to create the future in a way that avoids unintended consequences. Technology as a supplement, not a replacement

Efficiency and cost reduction based on automation, AI/IA and Robotization are good stories but not the final destination: we need to go beyond the 7-ations and inevitable abundance to create new value that cannot be easily automated.

My take: Future thinking is critical for us to be effective here. We have to have a sense as to where all of this is heading, if we are to effectively create new sources of value

We won’t just need better algorithms — we also need stronger humarithms, i.e. values, ethics, standards, principles and social contracts.

My take: Gerd is an evangelist for creating our future in a way that avoids hellish outcomes — and kudos to him for being that voice

“The best way to predict the future is to create it” (Alan Kay).

My take: our context when we think about the future puts it years away, and that is just not the case anymore. What we think will take ten years is likely to happen in two. We can’t create the future if we don’t focus on it through an exponential lens

Source : https://medium.com/@frankdiana/digital-transformation-of-business-and-society-5d9286e39dbf

Robots for Rent – Why RaaS Works – RIA

Renting robots as temp labor? Not a new idea. But it’s certainly one that is gaining followers.

Rising labor shortages, tightly contested global markets, and growing interest in automation are tightening the screws on traditional business models. A broader spectrum of users are seeking flexible automation solutions. More suppliers are adopting new-age rental or lease options to satisfy the demand. Some are mature companies answering the call, others are startups blazing a path for the rest of the industry. Robotics as a Service (RaaS) is an emerging trend whose time has come.

Steel Collar Associates may have been ahead of its time when RIA spoke with its owner in 2013 about his “Humanoids for Hire” – aka Yaskawa dual-arm robots for rent. Already several years into his venture at the time, Bill Higgins was having little success contracting out his robo-employees. Back then, industry was barely warming up to the idea of cage-free robots rubbing elbows with their human coworkers. Now every major robot manufacturer has a collaborative robot on its roster. And a slew of startups have joined the fray.

Just like human-robot collaboration is helping democratize robotics, RaaS will help bring robots to the masses. And cobots aren’t the only robots for rent.

Whether you have a short-term need, want to try before you buy, forgo a capital expenditure, or lower your cost of entry to robotic automation, RaaS is worth a closer look. It’s robots on demand, when and where you want them.

An out-of-the-box collaborative robot solution on wheels is easy to redeploy as production needs change. A rental option further enhances ROI. (Courtesy of READY Robotics)

Robots on Demand
Out-of-the-box solutions like those offered by READY Robotics, which are easy to use and easy to deploy, are making RaaS a reality. Your next, or perhaps first, robotic solution may be a Johnny-on-the-spot – on wheels.

“The TaskMate is a ready-to-use, on-demand robot worker that is specifically designed to come out of its shipping crate ready to be deployed to the production line,” says READY Robotics CEO Ben Gibbs, noting that manufacturers without the time to undertake custom robot integration are looking for an out-of-the box automation solution. Rental options make the foray easier.

“Time is their most precious resource. They want something like the TaskMate that is essentially ready to go out of the box,” says Gibbs. “They may have to do a little fixturing or put together a parts presentation hopper. Besides that, it’s something they can deploy pretty quickly. We’re driving towards providing a solution that’s as easy to use as your personal computer.”

The system consists of a collaborative robot arm mounted on a stand with casters, so you can wheel it into position anywhere on the production floor. The ease of portability makes it ideal for high-mix, low-volume production where it can be quickly relocated to different manufacturing cells. Nicknamed the “Swiss Army Knife” of robots, the TaskMate performs a variety of automation tasks from machine tending to pick-and-place applications, to parts inspection.

The TaskMate comes in two varieties, the 5-kg payload R5 and 10-kg payload R10 (pictured). Both systems use robot arms from collaborative robot maker Universal Robots. The UR arm is equipped with a force sensor and a universal interface called the TEACHMATE that allows different robot grippers to be hot-swapped onto the end of the arm. Supported end effector brands include SCHUNK, Robotiq and Piab.

Contributing to the system’s ease of use is READY’s proprietary operating system, the FORGE/OS software. A simple flowchart interface (pictured) controls the robot arm, end-of-arm tooling and other peripherals. No coding is required.

For those tasks requiring a higher payload, longer reach, or faster cycle time than is possible with the power-and-force-limiting cobot included with the TaskMate R5 and R10 systems, READY also offers its FORGE controller (formerly called the TaskMate Kit). Running the intuitive FORGE/OS software, the controller provides the same easy programming interface but is designed as a standalone system for ABB, FANUC, UR and Yaskawa robots.

“For example, if you plug the FORGE controller into a FANUC robot, you no longer have to program in Karel (the robot OEM’s proprietary programming language),” explains Gibbs. “On the teach pendant, you can use FORGE/OS to program the robot directly, so you have the same programming experience on the controller as you do on the TaskMate.

Intuitive software interface with a flowchart design and compatibility with multiple robot brands makes programming easier and faster. (Courtesy of READY Robotics)

“We started primarily with smaller six degree-of-freedom robot arms, like the FANUC LR Mate and GP7 from Yaskawa,” continues Gibbs. “We have started to integrate some of the larger robots as well, like the FANUC M-710iC/50. Ultimately, we’re driving toward a ubiquitous programming experience regardless of what robot arm or robot manufacturer you’re using.”

In the Cloud
A common element in the RaaS rental model is cloud robotics. READY offers customers the ability to remotely monitor the TaskMate or other robotic systems hooked up to the FORGE controller.

“We can set them up with alerts, so when the production cycle is completed or the robot enters an unexpected error state, they can receive an email notifying the floor manager or line operator to check the system,” says Gibbs.

You can also save and back up programs to the cloud, and deploy them from one robot to another. If an operator inadvertently loses a program, rather than rewriting it from scratch, they can simply drop the backup version from the cloud onto the system and be up and running again in minutes.
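
The alerting pattern Gibbs describes can be approximated with very little code. The sketch below polls a robot status endpoint and emails the floor manager when a cycle completes or an unexpected error state appears; the URL, status fields, and mail addresses are hypothetical placeholders, not READY Robotics' actual API.

```python
# Minimal monitoring sketch (hypothetical endpoint and addresses): poll a robot's
# status and email the floor manager on completion or on an unexpected error state.

import json
import smtplib
import time
import urllib.request
from email.message import EmailMessage

STATUS_URL = "http://taskmate-01.local/api/status"   # placeholder, not a real API
ALERT_TO = "floor.manager@example.com"

def fetch_status() -> dict:
    """Read the latest status report from the robot cell."""
    with urllib.request.urlopen(STATUS_URL, timeout=5) as resp:
        return json.load(resp)

def send_alert(subject: str, body: str) -> None:
    """Send a plain-text email through an assumed local mail relay."""
    msg = EmailMessage()
    msg["Subject"], msg["From"], msg["To"] = subject, "alerts@example.com", ALERT_TO
    msg.set_content(body)
    with smtplib.SMTP("mail.example.com") as smtp:
        smtp.send_message(msg)

while True:
    status = fetch_status()
    if status.get("state") == "error":
        send_alert("Robot error", f"Robot reported: {status.get('message', 'unknown')}")
    elif status.get("cycle_complete"):
        send_alert("Production cycle complete", "Please check the cell and reload parts.")
    time.sleep(30)   # poll every 30 seconds
```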

The TaskMate systems and FORGE controller are available for both purchase and rental.

“We provide a menu to our customers of how they might want to consume our products and services,” says Gibbs. “That may be all the way from a traditional CapEx (capital expenditure) purchase if they want to buy one of our TaskMates upfront, to the other end of the spectrum where they can rent the system with no contract for however long or short of a duration they want.”

For an additional charge, READY can manage the entire asset for the customer.

“We set it up, we program it, and we remotely monitor it to make sure it’s maximizing its uptime. We can come in and tweak the program if it’s running into unexpected errors. All of the systems are equipped with cell modems, so they can update the software over the air. We handle all of the maintenance or it’s handled by our channel partners.”

No-Term Rental
Gibbs says flexibility is the biggest advantage to their rental option. READY offers a 3-month trial rental. But customers are not required to keep it for that full term.

“We have a no-term rental. That’s even more appealing because it can come entirely out of your OpEx (operating expenditure) budget. Instead of going through a lengthy CapEx approval process, we’ve had some customers just run their corporate credit card, because the rental is below their approval level for an OpEx purchase. They can easily set up the system and use it for a few months. That alone provides them with a much stronger justification for moving forward with CapEx if they want, or just continue to expand their rental.

“At the end of the first month, if they decide that it’s not working out, just like any incompetent worker, they can fire it and send it back.”

If the customer chooses to continue renting, Gibbs says it’s more cost-effective to sign a contract. This reduces the risk for everyone, so there’s usually a financial incentive.

“The primary way we differentiate ourselves is that we offer that no-term rental with a fixed monthly fee, which allows these factories to capture the traditional value of automation. We don’t have a meter running that says you ran it 22 hours this day, so you owe us for 22 hours of work. We encourage them to run it as long as they want. The expectation is the longer you run it, the cheaper it should be.”

Flexibility for High-Mix, Low-Volume
READY’s target customers range from small job shops to large multinationals and Fortune 500 companies.

“Attwood is a great example of the type of high-mix, low-volume production environment where the flexibility of the TaskMate really shines,” says Gibbs.

Attwood Marine in Lowell, Michigan, is one of the world’s largest producers of boat parts, accessories and supplies. If it’s on your boat, there’s a good chance this century-old company made it. They make thousands of different parts, but cater to a relatively small marine market. The challenges of high-mix, low-volume production in a highly competitive market had them looking for an automation solution.

The flexibility of the TaskMate to quickly deploy and redeploy depending on Attwood’s short- or long-term needs was a deciding factor. With only a couple hundred employees and no dedicated robotics programmer on staff, the customer appreciates the FORGE software’s ease of use. Plus the ability to rent the system plays to the seasonal nature of Attwood’s business and lowers the cost of their first foray into robotic automation.

Attwood has deployed the TaskMate R10 to a half-dozen cells on the production floor performing CNC machine tending, pick-and-place tasks like palletizing, loading/unloading conveyors and case packing, and even repetitive testing. You need to actuate a switch or pull a cord 250,000 times? That’s a job for flexible automation.

By deploying one robot system to multiple production cells, Attwood was able to spread their ROI across multiple product lines and realize up to a 30 percent reduction in overall manufacturing costs. Watch the TaskMate on the job at Attwood Marine.

Small to midsized businesses aren’t the only ones benefiting. Large multinationals like tools manufacturer Stanley Black & Decker use the TaskMate R10 for machine tending CNC lathes.

“Multinationals may have robot programmers on staff, but usually not enough of them,” says Gibbs. “Automation engineers are in high demand and very difficult to come by. Any technology that makes it faster and easier for people to set up robots is a tremendous value. Even with large multinationals, some like to be asset-light and do a rental, but everyone loves the ease of programming we offer through FORGE.”

Forged in the Lab
READY’s portable plug-and-play solution is a technology spinoff from Professor Greg Hager’s research in human-machine collaborative systems at Johns Hopkins University. Gibbs, an alumnus, was working in the university’s technology ventures office helping researchers like Prof. Hager develop commercialization strategies for their new technologies. Hager, along with Gibbs, and fellow alum CTO Kelleher Guerin cofounded the startup in October 2015. Another cofounder, Drew Greenblatt, President of Marlin Steel Wire Products (an SME in the Know), offered up his nearby Baltimore, Maryland-based custom metal forms factory as a prototype test site for the TaskMate. The system was officially launched in July 2017.

Prof. Hager is now an advisor to the company. Distinguished robotics researcher Henrik Christensen is Chairman of the Board of Advisors. In December 2017, the startup secured $15 million in Series A funding led by Drive Capital.

READY maintains an office in Baltimore, while its headquarters is in Columbus, Ohio. They are a FANUC Authorized System Integrator. Gibbs says they are in the process of building a channel partner network of integrators and distributors to support future growth.

Pay As You Go
Business models under the RaaS umbrella vary widely, and are evolving. Startups like Hirebotics and Kindred leverage cloud robotics more intensely to monitor robot uptime, collect data, and enhance performance using AI. They charge by the hour, or even by the second. You pay for only what you use. Each service model has its advantages.

Some RaaS advocates offer subscription-based models. Some took a page from the sharing economy. Think Airbnb, Lyft, TaskRabbit, Poshmark. Share an abode, a car or clothes. Skip the overhead, the infrastructure and the long-term commitment. Pay as you go for a robot on the run.

Mobile Robots for Hire
Autonomous mobile robots (AMRs) are no strangers to the RaaS model, either. RIA members Aethon and Savioke lease their mobile robots for various applications in healthcare, hospitality and manufacturing. Startup inVia Robotics offers a subscription-based RaaS solution for its warehouse “Picker” robots.

Autonomous mobile robot navigates production floors to transport pallets and heavy loads via the most efficient route, while safely maneuvering around people and other obstacles. (Courtesy of Mobile Industrial Robots A/S)

We first explored the emergence of AMRs in the Always-On Supply Chain. It’s startling how much the logistics robot market has changed in just a couple of years. Since then, prototypes and beta deployments have turned into full product lines with significant investor funding. Major users like DHL, Walmart and Kroger, not to mention early adopter Amazon, are doubling down on their mobile fleets.

After triple-digit revenue growth in Europe, Mobile Industrial Robots (MiR) was just breaking onto the North American scene two years ago. Now, as they celebrate comparable growth on this side of the pond, MiR prepares to launch a new lease program in January.

MiR is another prodigy of Denmark’s booming robotics cluster. They join Danish cousin Universal Robots on the list of Teradyne’s smart robotics acquisitions. Odense must have the Midas touch.

Go Big or Go Home
Responding to customer demands for larger payloads, MiR introduced its 500-kg mobile platform at Automatica in June. The MiR500 (pictured) comes with a pallet transport system that automatically lifts pallets off a rack and delivers them autonomously. Watch it in action on the production floor of this agricultural machine manufacturer.

“Everybody we deal with today is making a big push to eliminate forklift traffic from the inner aisleways of production lines,” says Ed Mullen, Vice President of Sales – Americas for MiR in Holbrook, New York. “That’s really driving the whole launch of the MiR500. We’ve gone through some epic growth here in my division.”

Mullen’s division is responsible for supporting MiR’s extensive distributor network in all markets between Canada and Brazil. Right now, the Americas account for about a third of the global business.

“We’re seeing applications in industrial automation, warehouses and distribution centers,” says Mullen. “Electronics, semiconductor and a lot of the tier automotive companies, like Faurecia, Visteon and Magna, have all invested in our platforms and are scaling the business. We see this being implemented across all industries, which is really adding to our excitement.”

Lease Options
Although Mullen says they’ve seen tremendous success with the current buy model, MiR is trying to make it even easier to work with this emerging technology. That drove them to the RaaS model.

“We think a leasing option will allow companies that are still trying to understand the use cases for the technology to get in quicker, and then slowly scale the business up as they learn how to apply it and what the sweet spots are for autonomous mobile robots. The lease option is intended to reduce the cost of entry. Today it’s mainly the bigger multinationals that are buying, but we believe by providing options for lower entry points, this will make the use cases in the small-to-midsized companies come to light.”

He says a third-party company will handle all the leases. MiR’s distributor network will engage with the third-party company to put together lease programs for customers.

MiR has also implemented a Preferred System Integrator (PSI) program to augment the existing network of distribution partners. Two and a half years ago, it was mainly large companies investing in these mobile platforms. They were purchasing in volumes of one to five robots. Today, they’re seeing investments of 20, 30, or even more than 50 robots.

“When you get into these bigger deployments, it’s more critical to have companies that are equipped to handle them. Our distribution partners are set up as a sales channel. Although most of them have integration capabilities, they don’t want to invest in deploying hundreds of robots at one time. They would rather hand that off to a company that’s able to properly support large-scale deployments.”

Over the last couple of years, MiR had been focused on bringing more efficiency to the manufacturing process, not necessarily on replacing existing AGVs and forklifts.

“For example, you have a guy that gets paid a healthy salary to sit in front of a machine tool and use his skills to do a certain task. That’s what makes the company money. But when he has to get up and carry a tray of parts to the next phase in the production cycle, that’s inefficient. That’s what we’ve been focusing on, at least with our MiR100 and MiR200 (pictured).”

 Autonomous mobile robot efficiently transports finished product to the inspection area, freeing up employees for more high-value tasks at this custom plastic injection molder. (Courtesy of Mobile Industrial Robots A/S)

Technologies, an Indiana-based company specializing in custom plastic injection molding and mold tooling. The mobile robot loops the shop floor, autonomously transporting finished product from the presses to quality inspection. This frees up personnel for more high-value tasks and eliminates material flow bottlenecks.

“With the new MiR500, we’re going after heavier loads and palletizer loads. That’s replacing standard AGVs and forklifts. We’re also starting to see big conveyor companies like Simplimatic Automation and FlexLink move to a more flexible type of platform with autonomous mobile robots.

“Parallel to the hardware is our software. A key part of our company is the way we develop the software, the way we allow people to interface with the product. We’re continuously making it more intuitive and easier to use.”

MiR offers two software packages, the operating system that comes with the robot and the fleet management software that manages two or more robots. The latter is not a requirement, but Mullen says most companies are investing in it to get additional functionality when interfacing with their enterprise system. The newest fleet system is moving to a cloud-based option.

Hardware and software updates are all handled through MiR’s distribution channel and Mullen doesn’t think any of that will change under the lease option.

“The support model will stay the same. Our distributors are all trained on hardware updates, preventative maintenance and troubleshooting. I firmly believe the major component to our success today is our distribution model.”

Mullen says he’s looking forward to new products coming out in 2019. MiR is also hiring. They expect to double their employee count in the Americas and globally.

High-Tech, Short-Term Need
It’s many of these feisty startups that we’re seeing adopt nontraditional models like RaaS. But stalwarts are coming on board, too.

On-demand material handling robots come in all sizes, payloads and reaches for rental by the week. (Courtesy of RobotWorx)

Established in 1992, RobotWorx is part of SCOTT Technology Ltd., a century-old New Zealand-based company specializing in automated production, robotics and process machinery. RobotWorx joined the SCOTT family of international companies in 2014 and recently completed a rigorous audit process to become an RIA Certified Robot Integrator.

RobotWorx buys, reconditions and sells used robots, along with maintaining an inventory of new robotic systems and offering full robot integration and training services. Rentals are nothing new to them. They’ve been renting robots for several years, since before it was a trend. But in response to the upswing in industry requests of late, RobotWorx rolled out a major push on its rental program this past spring.

“We’ve done a lot with the TV and film industry,” says Tom Fischer, Operations Manager for RobotWorx in Marion, Ohio. “If you’ve seen the latest AT&T commercial, there are blue and orange robots in it. We rented those out for a week.”

Dubbed “Bruce” and “Linda” on strips of tape along their outstretched arms, these brightly colored robots have a starring role in this AT&T Business commercial promoting Edge-to-Edge Intelligence solutions. Fischer says companies in this industry usually select a particular size of robot, typically either a long-reach or large-payload material handling robot, like the Yaskawa Motoman long-reach robots in this AT&T commercial.

Ever wonder if the robots in commercials are just there for effect? It turns out, not always. Fischer says these are fully functioning robots. AT&T’s ad agency must have a robot wrangler off camera to keep Bruce and Linda in line. However, the other robots in the background are the result of TV magic.

“We basically just sent them the robots,” says Fischer. “They did what they wanted to do with them and then sent them back.”

For quick gigs like this commercial, or maybe a movie cameo or even a tradeshow display, rental robots make sense. But how do you know when it’s better to rent or buy?

“We’ll do a cost analysis with the customer,” says Fischer. “We have an ROI calculator on our website if they want to see what their long-term capital commitment would be. (Check out RIA’s Robot ROI Calculator). We also look at it from the standpoint that if they have a long-term contract with somebody, their return on investment is going to be a lot better with a purchase. If they think they’re only going to use the robot for six months, it doesn’t make sense for them to buy it.”
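
As a back-of-the-envelope version of that rent-versus-buy analysis, the sketch below compares cumulative rental cost against an outright purchase and accounts for a rent-to-own credit. All of the figures, including the credit rate, are illustrative assumptions rather than RobotWorx pricing.

```python
# Back-of-the-envelope rent-vs-buy sketch (all figures are illustrative assumptions,
# not vendor pricing): find where renting alone overtakes the purchase price, and
# compare a rent-then-buy path against an upfront purchase.

PURCHASE_PRICE = 60_000.0      # assumed cost of the robot cell
MONTHLY_RENT = 3_500.0         # assumed fixed monthly rental fee
RENT_CREDIT_RATE = 0.20        # assumed share of rent credited if the renter later buys

def cumulative_rent(months: int) -> float:
    """Total rent paid after a given number of months."""
    return MONTHLY_RENT * months

def rent_then_buy_cost(months_rented: int) -> float:
    """Total spend if the customer rents first, then buys with a rental credit."""
    rent_paid = cumulative_rent(months_rented)
    return rent_paid + PURCHASE_PRICE - RENT_CREDIT_RATE * rent_paid

for month in range(1, 37):
    if cumulative_rent(month) >= PURCHASE_PRICE:
        print(f"Renting alone exceeds the purchase price after {month} months.")
        break

print(f"Rent 6 months then buy: {rent_then_buy_cost(6):,.0f} total "
      f"vs {PURCHASE_PRICE:,.0f} for an upfront purchase.")
```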

Rent-A-Cell
RobotWorx rents robots by the week, month or year. A week is the minimum, but there’s no long-term commitment required. A rental includes a robot, the robot controller, teach pendant and end-of-arm tooling (EOAT). Robot brands available include ABB, FANUC, KUKA, Universal Robots, and Yaskawa Motoman.

They also rent entire ready-to-ship robot cells for welding or material handling. The most popular systems are the RWZero (pictured) and RW950 cells.

Self-contained, ready-to-ship robotic welding cell accelerates uptime whether you buy or rent it. (Courtesy of RobotWorx)

“The RWZero cell is very basic,” says Fischer. “You have a widget and you need 5,000 of them. Rent this cell and you have a production line instantly.”

The RW950 is more portable. Fischer calls it a “pallet platform.” The robot, controller, operator station and workpiece positioner all share a common base, which is basically a large steel structure that can be moved around with a forklift whenever needed. See the RW950 Welding Workcell in action.

“We’ve done a lot of the small weld cells,” he says. “We always have a couple on hand so we can supply those on demand. We’ve done larger material handling cells, as well.

“We have a third-party company that does the financing if you need it. A lot of people just end up paying it upfront. If they were to purchase the robot after they’ve rented it, we apply that towards the purchase as well.”

Fischer says 20 percent of the rental price is credited to the purchase if a customer decides to keep the robot. All the robots and robotic cells are up to date on maintenance before they leave the RobotWorx floor and shouldn’t require any major maintenance for at least a year. He says most customers end up buying the robot if their rental period exceeds a year.

Time is not always the deciding factor under the RaaS model. As robotic systems become easier to deploy and redeploy, the idea of robots as a service will gain more permanence as a long-term solution. In the future, robotics in our workplaces and homes will be as ubiquitous as the Internet. In the meantime, we’ll keep our eyes on RaaS as it gets ready for primetime.

Source : https://www.robotics.org/content-detail.cfm/Industrial-Robotics-Industry-Insights/Robots-for-Rent-Why-RaaS-Works/content_id/7665

Here Are the Top Five Questions CEOs Ask About AI – CIO

Recently in a risk management meeting, I watched a data scientist explain to a group of executives why convolutional neural networks were the algorithm of choice to help discover fraudulent transactions. The executives—all of whom agreed that the company needed to invest in artificial intelligence—seemed baffled by the need for so much detail. “How will we know if it’s working?” asked a senior director to the visible relief of his colleagues.
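
One hedged way to answer that director’s question is to report precision and recall on held-out transactions rather than explain the network itself. The sketch below uses scikit-learn with made-up labels purely to illustrate the kind of evidence a board can act on.

```python
# Minimal evaluation sketch (made-up labels): measure a fraud model on held-out data.
# Precision = share of flagged transactions that were truly fraudulent;
# recall = share of actual fraud the model caught.

from sklearn.metrics import precision_score, recall_score

y_true = [0, 0, 1, 0, 1, 1, 0, 0, 1, 0]   # ground-truth labels (1 = fraud)
y_pred = [0, 0, 1, 0, 0, 1, 0, 1, 1, 0]   # model predictions on the same transactions

print(f"Precision: {precision_score(y_true, y_pred):.2f}")   # 0.75 on this toy data
print(f"Recall:    {recall_score(y_true, y_pred):.2f}")      # 0.75 on this toy data
```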

Although they believe in AI’s value, many executives are still wondering about its adoption. The following five questions are boardroom staples:

1. “What’s the reporting structure for an AI team?”

Organizational issues are never far from the minds of executives looking to accelerate efficiencies and drive growth. And, while this question isn’t new, the answer might be.

Captivated by the idea of data scientists analyzing potentially competitively-differentiating data, managers often advocate formalizing a data science team as a corporate service. Others assume that AI will fall within an existing analytics or data center-of-excellence (COE).

AI positioning depends on incumbent practices. A retailer’s customer service department designated a group of AI experts to develop “follow-the-sun” chatbots that would serve the retailer’s increasingly global customer base. Conversely, a regional bank considered AI more of an enterprise service, centralizing statisticians and machine learning developers into a separate team reporting to the CIO.

These decisions were vastly different, but they were both the right ones for their respective companies.

Considerations:

  • How unique (e.g., competitively differentiating) is the expected outcome? If the proposed AI effort is seen as strategic, it might be better to create a team of subject matter experts and developers with its own budget, headcount, and skills so as not to distract from or siphon resources from existing projects.
  • To what extent are internal skills available? If data scientists and AI developers are already clustered within a COE, it might be better to leave the team as-is, hiring additional experts as demand grows.
  • How important will it be to package and brand the results of an AI effort? If the AI outcome is a new product or service, it might be better to create a dedicated team that can deliver the product and assume maintenance and enhancement duties as it continues to innovate.

2. “Should we launch our AI effort using some sort of solution, or will coding from scratch distinguish our offering?”

When people hear the term AI they conjure thoughts of smart Menlo Park hipsters stationed at standing desks wearing ear buds in their pierced ears and writing custom code late into the night. Indeed, some version of this scenario is how AI has taken shape in many companies.

Executives tend to romanticize AI development as an intense, heads-down enterprise, forgetting that development planning, market research, data knowledge, and training should also be part of the mix. Coding from scratch might actually prolong AI delivery, especially with the emerging crop of developer toolkits (Amazon Sagemaker and Google Cloud AI are two) that bundle open source routines, APIs, and notebooks into packaged frameworks.

These packages can accelerate productivity, carving weeks or even months off development schedules. Or they can complicate collaboration efforts.

Considerations:

  • Is time-to-delivery a success metric? In other words, is there lower tolerance for research or so-called “skunkworks” projects where timeframes and outcomes could be vague?
  • Is there a discrete budget for an AI project? This could make it easier to procure developer SDKs or other productivity tools.
  • How much research will developer toolboxes require? Depending on your company’s level of skill, in the time it takes to research, obtain approval for, procure, and learn an AI developer toolkit your team could have delivered important new functionality.

3. “Do we need a business case for AI?”

It’s all about perspective. AI might be positioned as edgy and disruptive with its own internal brand, signaling a fresh commitment to innovation. Or it could represent the evolution of analytics, the inevitable culmination of past efforts that laid the groundwork for AI.

I’ve noticed that AI projects are considered successful when they are deployed incrementally, when they further an agreed-upon goal, when they deliver something the competition hasn’t done yet, and when they support existing cultural norms.

Considerations:

  • Do other strategic projects require business cases? If they do, decide whether you want AI to be part of the standard cadre of successful strategic initiatives, or to stand on its own.
  • Are business cases generally required for capital expenditures? If so, would bucking the norm make you an innovative disruptor, or an obstinate rule-breaker?
  • How formal is the initiative approval process? The absence of a business case might signal a lack of rigor, jeopardizing funding.
  • What will be sacrificed if you don’t build a business case? Budget? Headcount? Visibility? Prestige?

4. “We’ve had an executive sponsor for nearly every high-profile project. What about AI?”

Incumbent norms once again matter here. But when it comes to AI, the level of disruption is often directly proportional to the need for a sponsor.

A senior AI specialist at a health care network decided to take the time to discuss possible AI use cases (medication compliance, readmission reduction, and deep learning diagnostics) with executives “so that they’d know what they’d be in for.” More importantly she knew that the executives who expressed the most interest in the candidate AI undertakings would be the likeliest to promote her new project. “This is a company where you absolutely need someone powerful in your corner,” she explained.

Considerations:

  • Does the company’s funding model require an executive sponsor? Challenging that rule might cost you time, not to mention allies.
  • Have high-impact projects with no executive sponsor failed?  You might not want your AI project to be the first.
  • Is the proposed AI effort specific to a line of business? In this case enlisting an executive sponsor familiar with the business problem AI is slated to solve can be an effective insurance policy.

5. “What practical advice do you have for teams just getting started?”

If you’re new to AI you’ll need to be careful about departing from norms, since this might attract undue attention and distract from promising outcomes. Remember Peter Drucker’s quote about culture eating strategy for breakfast? Going rogue is risky.

On the other hand, positioning AI as disruptive and evolutionary can do wonders for both the external brand as well as internal employee morale, assuring constituents that the company is committed to innovation, and considers emerging tech to be strategic.

Either way, the most important success measures for AI are setting accurate expectations, sharing them often, and addressing questions and concerns without delay.

Considerations:

  • Distribute a high-level delivery schedule. An unbounded research project is not enough. Be sure you’re building something—AI experts agree that execution matters—and be clear about the delivery plan.
  • Help colleagues envision the benefits. Does AI promise first mover advantage? Significant cost reductions? Brand awareness?
  • Explain enough to color in the goal. Building a convolutional neural network to diagnose skin lesions via image scans is a world away from using unsupervised learning to discover unanticipated correlations between customer segments. As one of my clients says, “Don’t let the vague in.”

These days AI has mojo. Companies are getting serious about it in a way they haven’t been before. And the more your executives understand about how it will be deployed—and why—the better the chances for delivering ongoing value.

Source : https://www.cio.com/article/3318639/artificial-intelligence/5-questions-ceos-are-asking-about-ai.html

Augmented reality, the state of the art in the industry – Miscible

Miscible.io attended the Augmented World Expo Europe in Munich in October 2018; here is my report.

What a great #AWE2018 show in Munich, with a strong focus on industrial usage, and, of course, the German automotive industry was well represented. There were some new, simple but efficient AR devices, and plenty of good use cases with a confirmed ROI. This edition was PRAGMATIC.

Here are my six takeaways from this edition. Enjoy!

1 – The return on investment of AR solutions

The use of XR by automotive companies, big pharma, and teachers confirmed good ROI with some “ready to use” solutions in several domains.

2 – These are still the early days of AR, and improvements are expected to address some drawbacks

  • Hardware: field of view, contrast/brightness, 3D asset resolution
  • Some AR headsets are heavy to wear, which can affect operator comfort and safety.
  • Accuracy of the overlay and recognition between virtual content and reality
  • Automation of the process from the authoring software to the end-user solution

3 – The challenge of authoring

To create specific and advanced AR apps, there are still challenges with content authoring and with integration into legacy systems to retrieve master data and 3D assets. Automated and integrated AR apps require some ingenious development.

An interesting use case from Boeing (using HoloLens to assist with the mounting of cables) shows how they built an integrated and automated AR app. Their AR solution architecture consists of four blocks (a hedged sketch in Python follows the list):

  • A web service to design the new AR app (UX and workflow)
  • A call to legacy systems to collect master data and 3D data/assets
  • Creation of an integrated data package (an asset bundle) for the AR app
  • Creation of the specific AR app (Vuforia/Unity), to be transferred to the standalone device, the HoloLens headset
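
Below is a minimal Python sketch of how such a four-block pipeline could be orchestrated. The endpoints, helper functions, and work-order identifier are hypothetical illustrations only, not Boeing’s actual implementation.

    import json
    import urllib.request

    # Assumed, illustrative endpoints; not real services.
    LEGACY_API = "https://legacy.example.com/api"
    AUTHORING_API = "https://authoring.example.com"

    def fetch_json(url: str) -> dict:
        """GET a URL and parse the JSON body."""
        with urllib.request.urlopen(url) as resp:
            return json.loads(resp.read().decode("utf-8"))

    def build_ar_bundle(work_order: str) -> dict:
        # Block 1: pull the app definition (UX + workflow) from the authoring web service.
        app_design = fetch_json(f"{AUTHORING_API}/designs/{work_order}")

        # Block 2: collect master data and 3D assets from the legacy systems.
        master_data = fetch_json(f"{LEGACY_API}/master-data/{work_order}")
        assets = fetch_json(f"{LEGACY_API}/3d-assets/{work_order}")

        # Block 3: package everything into one integrated asset bundle for the AR app.
        return {"design": app_design, "master_data": master_data, "assets": assets}

    if __name__ == "__main__":
        # Block 4 would feed this bundle into the Vuforia/Unity build and deploy
        # the resulting app to the standalone HoloLens device.
        bundle = build_ar_bundle("cable-harness-042")  # hypothetical work-order id
        print(json.dumps(bundle, indent=2))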

4 – The concept of the 3D asset as master data

The usage of AR and VR is becoming more important in many domains, from design to maintenance and sales (configurators, catalogs, and more).

The consequence is that original CAD files are transformed and reused in different processes across the company. It becomes a challenge to bring high-polygon models from CAD applications into other 3D/VR/AR applications, which need lighter 3D assets as well as texture and rendering adjustments.

glTF can be a solution here: it defines an extensible, common publishing format for 3D content tools and services that streamlines authoring workflows and enables interoperable use of content across the industry.
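
As a minimal sketch, assuming the open-source Python library trimesh and placeholder file names, a mesh exported from a CAD tool could be converted into binary glTF (.glb) like this:

    import trimesh  # open-source mesh library (pip install trimesh)

    # Load a tessellated export from the CAD tool (native CAD formats usually
    # need to be tessellated to STL/OBJ first). The file name is a placeholder.
    mesh = trimesh.load("engine_bracket.stl")

    # Check the polygon budget before the asset goes into an AR/VR scene;
    # heavy meshes may still need decimation and texture adjustments.
    print(f"faces: {len(mesh.faces)}, vertices: {len(mesh.vertices)}")

    # Export as binary glTF (.glb), the interoperable format described above.
    mesh.export("engine_bracket.glb")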

The main challenge is to implement a good centralized and integrated 3D asset management strategy, treating 3D assets as being as important as your other key master data.

5 – Service companies and experts to design advanced AR/VR solutions integrated into the enterprise information system

Designing advanced and integrated AR solutions for large companies requires a new kind of expert, combining knowledge of 3D applications with experience in system integration.

These projects need new types of information system architecture that take AR technologies into account.

PTC looks like a leader in providing efficient and scalable tools for large companies. PTC, the owner of Vuforia, also excels with other 3D/PLM management solutions such as Windchill, smoothly integrating 3D management into the enterprise’s processes and IT.

Sopra Steria, the French IS integration company, is also taking on this role, bringing its system integration experience to new AR/VR uses in the industry.

If you don’t want to invest in this kind of complex project, for a first step into AR/VR or for some quick wins on a low budget, new content authoring solutions exist to build your AR app with simple user interfaces and workflows: Skylight by Upskill and WorkLink by Scope AR.

6 – The need for an open AR Cloud

“A real time 3D (or spatial) map of the world, the AR cloud, will be the single most important software infrastructure in computing. Far more valuable than Facebook’s social graph, or Google’s PageRank index,” says Ori Inbar, co-founder and CEO of Augmented Reality.ORG. A promising prediction.

The AR cloud provides a persistent, multi-user, and cross-device AR landscape. It allows people to share experiences and collaborate. The best-known AR-cloud-like experience so far is the famous Pokémon Go game.

So far, AR maps work using GPS, image recognition, or a local point cloud for a limited space such as a building. The dream is to copy the world as a point cloud, creating a global AR cloud landscape: a real-time system that could also be used by robots, drones, and more.
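
To make the idea concrete, here is a hypothetical Python sketch of the kind of record an AR cloud could persist so that several devices resolve the same virtual content at the same physical spot. It is illustrative only and not the API of any real AR cloud SDK.

    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    @dataclass
    class SpatialAnchor:
        """Illustrative record an AR cloud might store for one shared anchor."""
        anchor_id: str
        gps: Tuple[float, float]  # coarse localisation: latitude, longitude
        # Local point-cloud fingerprint for fine alignment inside a building.
        point_cloud: List[Tuple[float, float, float]] = field(default_factory=list)
        # Content attached to the anchor, keyed by asset id (e.g. a glTF file).
        attached_assets: Dict[str, str] = field(default_factory=dict)

    # One device publishes an anchor with an overlay attached...
    anchor = SpatialAnchor(anchor_id="pump-room-07", gps=(48.1351, 11.5820))
    anchor.attached_assets["maintenance-overlay"] = "pump_overlay.glb"

    # ...and another device, after relocalising against the shared map, would
    # fetch the same anchor and render the same overlay in the same place.
    print(anchor.anchor_id, list(anchor.attached_assets))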

The AWE exhibition presented some interesting AR cloud initiatives:

  • The Open AR Cloud Initiative was launched at the event and held its first working session.
  • Some good SDKs are now available for building your own local AR clouds: Wikitude and Immersal.

Source : https://www.linkedin.com/pulse/augmented-reality-state-art-industry-fr%C3%A9d%C3%A9ric-niederberger/

 
