Moving beyond supply chain proof of concepts still requires bringing ecosystems of enterprises together
Bang on trend and with no shortage of aficionados, artisanal microbrewery beer proved a fitting use case for blockchain technology at Oracle OpenWorld last week.
Alpha Acid Brewing in Belmont, California was showcased as an early adopter of one of Oracle’s new blockchain-based applications, Intelligent Track and Trace.
“We can now track materials and premium ingredients from our suppliers and analyse sensor data from the production process for each batch,” said Kyle Bozicevic, owner and brewer at Alpha Acid, which served up thousands of (free) cups of its beer range across the three-day event.
“[The] application helps ensure that we are getting the highest quality hops, malt, and yeast, and enables us to create a strong narrative around our products for customers,” he added.
Big Red is hoping it will find an equally thirsty audience for the four supply-chain-focused blockchain applications it will make available through next year: Intelligent Track and Trace, Lot Lineage and Provenance, Intelligent Cold Chain, and Warranty and Usage Tracking.
The use-case-specific SaaS applications are built on Oracle’s Blockchain Cloud Service launched earlier this year (itself based on the Linux Foundation’s open source Hyperledger Fabric platform) and connect with its Supply Chain Management (SCM) Cloud, Enterprise Resource Management (ERP) Cloud and other applications.
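For readers unfamiliar with the underlying mechanics, the core property a Fabric-style ledger offers can be sketched in a few lines: each record carries the hash of the previous one, so earlier history cannot be quietly rewritten. The sketch below is a deliberately simplified, single-party illustration with invented batch data, not Oracle’s or Hyperledger’s actual implementation:

```python
import hashlib
import json


def _hash(payload: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()


class TraceLedger:
    """Append-only ledger: each entry links to the hash of the previous one,
    so tampering with any earlier record invalidates the whole chain."""

    def __init__(self):
        self.chain = []

    def record(self, event: dict) -> None:
        prev_hash = self.chain[-1]["hash"] if self.chain else "0" * 64
        block = {"event": event, "prev_hash": prev_hash}
        block["hash"] = _hash({"event": event, "prev_hash": prev_hash})
        self.chain.append(block)

    def verify(self) -> bool:
        prev_hash = "0" * 64
        for block in self.chain:
            if block["prev_hash"] != prev_hash:
                return False
            if block["hash"] != _hash({"event": block["event"], "prev_hash": prev_hash}):
                return False
            prev_hash = block["hash"]
        return True


# Hypothetical brewery events, in the spirit of the Alpha Acid example.
ledger = TraceLedger()
ledger.record({"batch": "IPA-042", "step": "hops received", "supplier": "FarmCo"})
ledger.record({"batch": "IPA-042", "step": "fermentation", "temp_c": 19.5})
assert ledger.verify()

# Tampering with an earlier event breaks verification.
ledger.chain[0]["event"]["supplier"] = "OtherFarm"
assert not ledger.verify()
```

The real platform adds distributed consensus and smart contracts on top of this basic integrity property, which is what makes the multi-party use cases below possible.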
“Typically when you think about the blockchain it’s about distributed ledger, it’s about digital signatures, it’s about smart contracts; but really the value proposition associated with blockchain is here,” said Rick Jewell, senior vice president, supply chain and manufacturing cloud applications, Oracle, at OpenWorld’s supply chain keynote.
Jewell pointed to a word cloud on a slide featuring phrases like: ‘reduce delays and inefficiencies’, ‘dispute resolution’, ‘proof of delivery’ and ‘expedite payments’.
“Just as we did with IoT – we didn’t stop with the IoT platform, we built IoT applications, we’ve done the same thing here. We have built form-fit blockchain applications that work on top of that,” he added.
The apps will make getting started with blockchain much easier for a business, but there are still significant challenges for them to overcome in taking the technology beyond proof-of-concept; chiefly, all the other businesses they work with.
As Gartner supply chain technology research director Amber Salley explained: “The apps will be as useful as there is an ecosystem committed to using blockchain.”
Alpha Acid is one of a number of early adopters given access to the applications ahead of general availability. Others named include Arab Jordan Investment Bank, CargoSmart, Certified Origins, Indian Oil, Intelipost, MTO, Neurosoft, Nigeria Customs, Sofbang, Solar Site Design and TradeFin.
CargoSmart is a shipment management software solutions provider in APAC, and began its blockchain initiative for shipment documentation in July.
Since shipping document handling processes are complex, rely on dated paper processes and involve many stakeholders across numerous countries, they are an ideal use case for blockchain, CargoSmart CEO Steve Siu told Computerworld.
“We consider blockchain as the digital baseline for the next generation,” Siu said.
“Blockchain is something different – which is to come together to share that information in the first place then think about how the industry would take advantage of that to change the processes, to change the way they work together,” he added.
Getting all the stakeholders on to the blockchain will be a considerable challenge, however.
Shipping companies have diverse technical capabilities and data standards, and currently exchange documents in many formats including email, online forms, and electronic data interchange (EDI). On average, a single shipment can involve more than 30 documents exchanged by all parties, often with multiple revisions due to human errors, before it leaves port.
These existing processes are not standardised, despite numerous attempts to do so, but would need to be if blockchain is to be used.
“To drive the industry to change is actually very difficult. That’s why we took this consortium approach, to get the industry together,” Siu added.
The sentiment was echoed by Certified Origins CIO Andrea Biagianti. His company has been using a blockchain application to trace key steps in the supply chain from Italian olive groves to the Bellucci-brand bottled extra virgin olive oil sold in North America.
“We think that the hardest step at the beginning is to build a best practice for all the actors in the supply chain. It is difficult for them to know that they have to work all together with one final scope,” he told Computerworld.
The requirement to get multiple stakeholders behind a single blockchain solution could be a limiting factor in the apps’ success, Gartner’s Salley explained.
“Since it is a chain there needs to be multiple parties involved to add ‘links’ to the chain. That means that the multiple parties will need to have invested in the systems and processes to make it work,” she said.
Despite the distributed, multi-stakeholder nature of the technology, Oracle will charge just one party, the “top node”, for using the apps and the cost is not based on the number of users on the chain.
“We do not intend on charging based on users, but we intend on charging for the platform itself,” Oracle’s executive vice president, applications product development, Steve Miranda told media.
“And the platform – think of it as the hub – whether that hub is purchased by a single node in the supply chain, the top node, or if that gets purchased by the collective sets of nodes… but because of the nature of the application and the distributed nature of the application, charging on a per user basis like that is counter to the way we expect it to be used. We want it to be used more pervasively not less pervasively,” he said.
Above a certain scale, however, Miranda indicated that additional costs could kick in.
“The scale will likely have some sort of transaction charge on top of that but that depends on the blockchain use case,” he explained.
The apps will be interoperable with Hyperledger-based blockchain solutions from other providers such as SAP and IBM, Oracle said.
Gartner research into supply-chain-focused blockchain solutions has found the market to be “uncertain, confusing and overly hyped”, while many proposed use cases “may not even need blockchain in the first place”.
A September report from the analyst firm said that a lack of data and governance standards across broad ecosystems of trading partners “will inhibit multi-enterprise collaboration, therefore stalling pilots and diminishing wide adoption”.
Until 2021, 90 per cent of supply chain blockchain initiatives will be proof-of-concepts (such as Commonwealth Bank of Australia’s recent almond shipping experiment) and onboarding challenges will halt 90 per cent of the initiatives across medium to large-scale enterprises, Gartner predicts.
“Blockchain in supply chain is a technology looking for a use case. I think the apps are Oracle’s attempt to create that use case. It is hard to sell blockchain as a platform, so productising it as an app gives a business a starting point to get using blockchain,” Salley said.
Intelligent Track and Trace will be available in the first quarter of next year, with the other apps following through the rest of 2019.
Source : https://www.computerworld.com.au/article/648812/oracle-apps-make-blockchain-easier-consortium-challenges-remain/
What a great #AWE2018 show in Munich, with a strong focus on industry usage and, of course, the German automotive industry was well represented. Some new, simple but efficient AR devices, and plenty of good use cases with a confirmed ROI. This edition was PRAGMATIC.
The use of XR by automotive companies, big pharma, and teachers confirmed good ROI with some “ready to use” solutions, especially in these domains:
To create specific and advanced AR apps, there are still some challenges with content authoring and with integration into legacy systems to retrieve master data and 3D assets. Automated, integrated AR apps need some ingenious development.
An interesting use case from Boeing (using HoloLens to assist cable mounting) shows how they built an integrated and automated AR app. Their AR solution architecture has four blocks:
The usage of AR and VR is becoming more important in many domains: from design to maintenance and sales (configurators, catalogs…).
As a consequence, original CAD files can be transformed and reused in different processes across your company. It becomes a challenge to use high-polygon models from CAD applications in other 3D/VR/AR applications, which need lighter 3D assets, along with some texture and rendering adjustments.
glTF can be a solution: it defines an extensible, common publishing format for 3D content tools and services that streamlines authoring workflows and enables interoperable use of content across the industry.
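As a rough illustration of why glTF helps, a minimal glTF 2.0 document is plain JSON whose objects reference each other by array index, so any tool can read it without a heavyweight CAD stack. The structure below is a bare skeleton (the generator string is invented), not a complete exportable model:

```python
import json

# Minimal glTF 2.0 document: only the "asset" object is strictly required;
# scenes and nodes reference each other by array index rather than by name.
gltf = {
    "asset": {"version": "2.0", "generator": "example-exporter"},
    "scene": 0,                    # index into "scenes"
    "scenes": [{"nodes": [0]}],    # indices into "nodes"
    "nodes": [{"name": "root"}],
}

doc = json.dumps(gltf, indent=2)   # what you would write to a .gltf file
parsed = json.loads(doc)

# Resolve the default scene's first node through the index chain.
root_index = parsed["scenes"][parsed["scene"]]["nodes"][0]
assert parsed["asset"]["version"] == "2.0"
assert parsed["nodes"][root_index]["name"] == "root"
```

A real asset would add meshes, accessors and buffers for the geometry itself, but the same index-based referencing applies throughout, which is what keeps the format lightweight and interoperable.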
The main challenge is to implement a good centralised and integrated 3D asset management strategy, treating 3D assets as being as important as your other key master data.
The design of advanced and integrated AR solutions for large companies needs new experts combining knowledge of 3D apps with experience in system integration.
These projects need new types of information system architecture that take AR technologies into account.
PTC looks like a leader in providing efficient and scalable tools for large companies. PTC, owner of Vuforia, also excels with other 3D/PLM management solutions like Windchill, smoothly integrating 3D management into all the processes and IT of the enterprise.
Sopra Steria, the French IS integration company, is also taking on this role, bringing its system integration experience to the new AR/VR usages in industry.
If you don’t want to invest in this kind of complex project, for a first step in AR/VR or for some quick wins on a low budget, new content authoring solutions exist to build your AR app with simple user interfaces and workflows: Skylight by Upskill and WorkLink by Scope AR.
“A real time 3D (or spatial) map of the world, the AR cloud, will be the single most important software infrastructure in computing. Far more valuable than Facebook’s social graph, or Google’s PageRank index,” says Ori Inbar, co-founder and CEO of Augmented Reality.ORG. A promising prediction.
The AR cloud provides a persistent, multi-user and cross-device AR landscape. It allows people to share experiences and collaborate. The best-known AR cloud experience so far is the famous Pokémon Go game.
So far, AR maps work using GPS, image recognition, or a local point cloud for a limited space or building. The dream is to copy the world as a point cloud, for a global AR cloud landscape: a real-time system that could also be used by robots, drones, etc.
The AWE exhibition presented some interesting AR cloud initiatives:
Source : https://www.linkedin.com/pulse/augmented-reality-state-art-industry-fr%C3%A9d%C3%A9ric-niederberger/
As more consumers search for sustainable packaging options, food and beverage companies are forced to make tough decisions about their products.
Shoppers and investors are increasingly looking for companies and brands to take the initiative on environmental issues. A Horizon Media study found 81% of millennials expect companies to make public commitments to good corporate citizenship and 66% of consumers will pay more for products from brands committed to environmentally friendly practices, according to the Nielsen Global Corporate Sustainability Report.
But more eco-friendly practices haven’t been easy for the food packaging industry. Designing eco-friendly packaging that can keep products fresh and endure temperature changes that come with cooking can be a challenge. Packaging companies told Food Dive they recently made moves to offer sustainable options with water-based ink and more compostable packaging, but have faced obstacles along the way. While some brands are aiming to only appear more sustainable, others are making slow efforts to be eco-friendly with new innovations and products.
For major food and beverage companies, the higher cost of sustainable materials and the struggle to keep food fresh are barriers. Production costs for sustainable options can be about 25% higher than for traditional packaging. These materials also tend to be less effective in maintaining freshness, since packaging companies say plastic can have a tighter seal and keep out air better than other materials.
“That’s their compromise, it looks eco-friendly — but it’s not.” – Damon Leach, account representative at Green Rush Packaging
Some companies have found a way around the high costs. Damon Leach, an account representative at Green Rush Packaging, told Food Dive that a solution for some food companies has been to use material that looks recyclable to shoppers, but in reality, is not.
Instead of paying more for eco-friendly materials, companies have been picking material, like kraft paper, that looks more sustainable to consumers, he added. Leach said the products that appear to be more green do sell better.
Although Leach said more suppliers and consumers theoretically want sustainable packaging, those materials typically don’t have a long shelf life and consumers don’t want to pay the extra money. But some companies are still making an effort to pay more for eco-friendly packaging despite the challenges.
From producers and companies to retailers, consumers and recycling organizations, packaging can affect the whole supply chain. So the challenge for packaging manufacturers becomes determining what new innovations and materials are the best investments.
Randall LaVeau, business development manager at Interpress Technologies, which manufactures formed paperboard and plastic food packaging products, told Food Dive there is a huge push for more recycling in the marketplace. But he said it is hard to get an eco-friendly material that holds water but isn’t plastic and doesn’t degrade — a necessity for microwavable products that need water to cook.
Many companies are now working to develop recyclable packaging that can withstand heat and hold liquids, but LaVeau said there is still a lot of research and development to go before it is widespread.
“Everybody is in the shop trying to figure it out,” LaVeau said. “People have been working on it for the last 10 years or longer, they just haven’t had a good success for it.”
For companies that have made sustainability goals, the clock is ticking to figure it out. Mondelez just announced its plans to make all of its packaging recyclable by 2025. Nestlé, Unilever and PepsiCo have agreed to phase in packaging made from recyclable, compostable and biodegradable materials with more recycled content by 2025, but haven’t released specific details about their plans. In fact, a recent report identified Coca-Cola, PepsiCo and Nestlé as the businesses contributing most to pollution.
But as these big companies push for more development on sustainable materials, that means cost could continue to be an obstacle. Although consumers say they are willing to pay more for sustainability, they don’t always pick up the more expensive options in stores.
“Just like anything else, when something new comes out… it is more expensive until they can work with it in time and maximize their efficiencies for the cost to come down,” LaVeau said.
Several companies have developed more sustainable options this year. For example, HelloFresh is rolling out more sustainable packaging for its meal kits with recyclable liners created by sustainable design company TemperPack.
And some new developments haven’t come into the mainstream yet. U.S. Department of Agriculture researchers have developed an edible, biodegradable packaging film made of casein, a milk protein, that can be wrapped around food to prevent spoilage.
Other companies are working to find new ways to help the environment. Wayne Shilka, vice president of innovation and technical support at Eagle Flexible Packaging, a printer of packaging in Chicago, has prioritized offering more sustainable options to their customers. Eagle Flexible Packaging uses a water-based ink because it doesn’t release volatile organic compounds into the atmosphere, making it more environmentally friendly, Shilka said.
“We are finding that sustainable packaging is getting more and more and more interest.” – Wayne Shilka, vice president of Eagle Flexible Packaging
Six years ago, Eagle Flexible Packaging put together a compostable material for packaging, and about 100 companies discussed the option with them. Only one customer ended up using the compostable product because it cost more than any other packaging option the company offered. Every year since, a few more customers have worked with them to outfit their products with compostable material, Shilka said.
As more companies turn to compostable and sustainable packaging, the price will come down and make it more appealing, Shilka added.
“It continues growing to the point that it’s becoming not mainstream, but it’s much more routine that we had people who are calling and are interested and are actually doing something sustainable,” Shilka said.
While some companies work to find new recyclable materials, others are satisfied with current packaging options. Flexible packaging — which is any package whose shape can be readily changed, such as bags and pouches — is popular. Representatives at packaging companies said flexible packaging can be an issue for sustainability since it has multilayer films with plastic and paper that need to be separated to be recycled.
LaVeau said most of his products are “recyclable to an extent” because of the layers. Certain recycling mills can handle his products, but at others, consumers need to separate the packaging for recycling — which doesn’t always happen.
Green Rush Packaging has the same issue.
“We got to get the end users to separate and recycle better instead of just facilities otherwise it is just waste, bad for the environment,” Leach said.
Flexible packaging can also provide a higher product-to-package ratio, which creates fewer emissions during transportation and ultimately uses less space in landfills.
Some companies stand by their use of packages that aren’t fully sustainable. Robert Reinders, president of Performance Packaging, a family owned corrugated box plant founded in 1995 by packaging professionals, told Food Dive that about 5% of his products are recyclable. He said flexible packaging is a sustainable option because it uses up less energy and prolongs the shelf life of the food so it eliminates food waste.
“There is all kinds of great benefits to flexible packaging that gets drowned out by the recycle, compostable needs,” Reinders said.
In the last two years, more than 70 bills have been introduced in state legislatures regarding plastic bags — encompassing bans, fees and recycling programs. However, many of those laws have not impacted the food packaging industry.
In comparison, countries across the globe are increasing their efforts and goals when it comes to sustainability for both food and beverage product packaging. But U.S. companies are still in the development stage on many of their innovations.
The Singapore Packaging Agreement — a joint initiative by government, industry and NGOs to reduce packaging waste — has averted about 46,000 metric tons of packaging waste during the past 11 years, according to Eco-Business. In Australia, national, state and territory environment ministers have agreed that 100% of Australian packaging will be recyclable, compostable or reusable by 2025.
Vancouver, Canada, has adopted a ban on the distribution of polystyrene foam cups and containers, as well as restrictions on disposable cups and plastic shopping bags. The U.K. also plans to eliminate plastic waste by 2042.
As countries around the world change their packaging to adjust to these sustainability goals, Reinders said U.S. companies will likely adopt more changes. And as more CPG makers start mass producing sustainable options around the world, he said it will drive prices down globally.
“I was at Nestlé headquarters in Switzerland and they are currently making the efforts to find different materials and different processes so they can be recyclable,” Reinders said. “It’s all starting now. The more the big guys get into it, the better it will be.”
Source : https://www.fooddive.com/news/how-sustainable-is-the-food-packaging-industry/539089/
Edge computing technology is quickly becoming a megatrend in industrial control, offering a wide range of benefits for factory automation applications. While the major cloud suppliers are expanding, new communications hardware and software technologies are beginning to provide solutions that go beyond the previous offerings used in factory automation.
A future application possibility that illustrates both the general concept and potential impact of edge computing in automation and control is edge data being visualized on a tablet in a brownfield application. (Image source: B&R Industrial Automation)
“The most important benefit [compared to existing solutions] will be interoperability—from the device level to the cloud,” John Kowal, director of business development for B&R Industrial Automation, told Design News. “So it’s very important that communications be standards-based, as you see with OPC UA TSN. ‘Flavors’ of Ethernet including ‘flavors’ of TSN should not be considered as providing interoperable edge communications, although they will function perfectly well in a closed system. Interoperability is one of the primary differences between previous solutions. OPC UA TSN is critical to connecting the edge device to everything else.”
Emerging Technology Solutions
Sari Germanos of B&R added that these comments about edge computing can also be equally applied to the cloud. “With edge, you are using fog instead of cloud with a gateway. Edge controllers need things like redundancy and backup, while cloud services do that for you automatically,” Germanos said. He also noted that cloud computing generally makes data readily accessible from anywhere in the world, while the choice of serious cloud providers for industrial production applications is limited. Edge controllers are likely to have more local features and functions, though the responsibility for tasks like maintenance and backup falls on the user.
Factory Automation Applications
Kowal noted that you could say that any automation application would benefit from collecting and analyzing data at the edge. But the key is what kind of data, what aspects of operations, and what are the expectations of analytics that can deliver actionable productivity improvements? “If your goal is uptime, then you will want to collect data on machine health, such as bearing frequencies, temperatures, lubrication and coolant levels, increased friction on mechanical systems, gauging, and metrology,” he said.
Some of the same logic applies to product quality. Machine wear and tear leads to reduced yield, which can in turn be expressed in terms of OEE. That data gathering may already be taking place, but the data is often not captured at shorter intervals or automatically communicated and analysed.
Capturing Production Capacity as well as Machine and Materials Availability
Beyond the maintenance and production efficiency aspects, Kowal said that users should consider capturing production capacity, machine and raw material availability, and constraint and output data. These will be needed to schedule smaller batch sizes, tier more effectively into ordering and production scheduling systems, and ultimately improve delivery times to customers.
Edge control technology also offers benefits compared to IoT gateway products. Kowal said that he’s never been big on splitting hairs with technology definitions—at least not from the perspective of results. But fundamentally, brownfield operators tend to want gateways to translate between their installed base of equipment, which may not even be currently networked, and the cloud. Typically, these are boxes equipped with legacy communications interfaces that act as a gateway to get data from the control system without a controls retrofit, which can be costly, risky, and even ineffective.
“We have done some work in this space, though B&R’s primary market is in new equipment,” Kowal added. “In that case, you have many options how to implement edge computing on a new machine or production line. You can use smart sensors and other devices direct to cloud or to an edge controller. The edge controller or computing resource can take many form factors. It can be a machine controller, an industrial PC that’s also used for other tasks like HMI or cell control, a small PLC used within the machine, or a standalone dedicated edge controller.”
Boosted Memory, Processing, and Connections
Germanos noted that industrial controllers were not designed to be edge controllers; they are typically designed to control one machine versus a complete production line. Edge controllers have built-in redundancy to maintain production line operation.
“If I was designing a new machine, cell, line, or facility, I would set up the machine controllers as the edge controller/computers rather than add another piece of control hardware or gateway,” Germanos said. “Today, you can get machine controllers with plenty of memory, processing power, and network connections. I would not select a control platform unless it supports OPC UA, and I would strongly urge selecting a technology provider that supports the OPC UA TSN movement known as ‘The Shapers,’ so that as this new standard for Industrial Ethernet evolves, I would be free from the ‘flavors’ of Ethernet.”
His recommendation is to use a platform that runs a real-time operating system for the machinery on one core or, using a Hypervisor, whatever other OS might be appropriate for any additional applications that run on Windows or Linux.
Source : https://www.designnews.com/automation-motion-control/edge-computing-emerges-megatrend-automation/27888481159634
Building a rocket is hard. Each component requires careful thought and rigorous testing, with safety and reliability at the core of the designs. Rocket scientists and engineers come together to design everything from the navigation course to control systems, engines and landing gear. Once all the pieces are assembled and the systems are tested, we can put astronauts on board with confidence that things will go well.
If artificial intelligence (AI) is a rocket, then we will all have tickets on board some day. And, as in rockets, safety is a crucial part of building AI systems. Guaranteeing safety requires carefully designing a system from the ground up to ensure the various components work together as intended, while developing all the instruments necessary to oversee the successful operation of the system after deployment.
At a high level, safety research at DeepMind focuses on designing systems that reliably function as intended while discovering and mitigating possible near-term and long-term risks. Technical AI safety is a relatively nascent but rapidly evolving field, with its contents ranging from high-level and theoretical to empirical and concrete. The goal of this blog is to contribute to the development of the field and encourage substantive engagement with the technical ideas discussed, and in doing so, advance our collective understanding of AI safety.
In this inaugural post, we discuss three areas of technical AI safety: specification, robustness, and assurance. Future posts will broadly fit within the framework outlined here. While our views will inevitably evolve over time, we feel these three areas cover a sufficiently wide spectrum to provide a useful categorisation for ongoing and future research.
You may be familiar with the story of King Midas and the golden touch. In one rendition, the Greek god Dionysus promised Midas any reward he wished for, as a sign of gratitude for the king having gone out of his way to show hospitality and graciousness to a friend of Dionysus. In response, Midas asked that anything he touched be turned into gold. He was overjoyed with this new power: an oak twig, a stone, and roses in the garden all turned to gold at his touch. But he soon discovered the folly of his wish: even food and drink turned to gold in his hands. In some versions of the story, even his daughter fell victim to the blessing that turned out to be a curse.
This story illustrates the problem of specification: how do we state what we want? The challenge of specification is to ensure that an AI system is incentivised to act in accordance with the designer’s true wishes, rather than optimising for a poorly-specified goal or the wrong goal altogether. Formally, we distinguish between three types of specifications:
A specification problem arises when there is a mismatch between the ideal specification and the revealed specification, that is, when the AI system doesn’t do what we’d like it to do. Research into the specification problem of technical AI safety asks the question: how do we design more principled and general objective functions, and help agents figure out when goals are misspecified? Problems that create a mismatch between the ideal and design specifications are in the design subcategory above, while problems that create a mismatch between the design and revealed specifications are in the emergent subcategory.
For instance, in our AI Safety Gridworlds* paper, we gave agents a reward function to optimise, but then evaluated their actual behaviour on a “safety performance function” that was hidden from the agents. This setup models the distinction above: the safety performance function is the ideal specification, which was imperfectly articulated as a reward function (design specification), and then implemented by the agents producing a specification which is implicitly revealed through their resulting policy.
*N.B.: in our AI Safety Gridworlds paper, we provided a different definition of specification and robustness problems from the one presented in this post.
As another example, consider the boat-racing game CoastRunners analysed by our colleagues at OpenAI (see Figure above from “Faulty Reward Functions in the Wild”). For most of us, the game’s goal is to finish a lap quickly and ahead of other players — this is our ideal specification. However, translating this goal into a precise reward function is difficult, so instead, CoastRunners rewards players (design specification) for hitting targets laid out along the route. Training an agent to play the game via reinforcement learning leads to a surprising behaviour: the agent drives the boat in circles to capture re-populating targets while repeatedly crashing and catching fire rather than finishing the race. From this behaviour we infer (revealed specification) that something is wrong with the game’s balance between the short-circuit’s rewards and the full lap rewards. There are many more examples like this of AI systems finding loopholes in their objective specification.
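The gap between the designed and ideal objectives in CoastRunners can be caricatured in a few lines of code. The policies and point values below are invented for illustration, not taken from the actual game:

```python
# Toy illustration of a specification gap: the design specification (proxy
# reward) pays per target hit, while the ideal specification is finishing
# the lap. All numbers here are made up for the sake of the example.

def proxy_reward(policy):
    """Design specification: 10 points per target hit."""
    return policy["targets_hit"] * 10


def ideal_score(policy):
    """Ideal specification: what the designer actually wants — a finished lap."""
    return 100 if policy["finished_lap"] else 0


finish_lap = {"targets_hit": 5, "finished_lap": True}
loop_targets = {"targets_hit": 50, "finished_lap": False}  # drives in circles

# A learner maximising the proxy prefers the degenerate looping policy...
best_by_proxy = max([finish_lap, loop_targets], key=proxy_reward)
assert best_by_proxy is loop_targets

# ...even though the ideal specification clearly prefers finishing the lap.
assert ideal_score(finish_lap) > ideal_score(loop_targets)
```

The agent's observed preference for looping is the revealed specification; the mismatch with `ideal_score` is the specification problem made concrete.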
There is an inherent level of risk, unpredictability, and volatility in real-world settings where AI systems operate. AI systems must be robust to unforeseen events and adversarial attacks that can damage or manipulate such systems. Research on the robustness of AI systems focuses on ensuring that our agents stay within safe limits, regardless of the conditions encountered. This can be achieved by avoiding risks (prevention) or by self-stabilisation and graceful degradation (recovery). Safety problems resulting from distributional shift, adversarial inputs, and unsafe exploration can be classified as robustness problems.
To illustrate the challenge of addressing distributional shift, consider a household cleaning robot that typically cleans a petless home. The robot is then deployed to clean a pet-friendly office, and encounters a pet during its cleaning operation. The robot, never having seen a pet before, proceeds to wash the pets with soap, leading to undesirable outcomes (Amodei and Olah et al., 2016). This is an example of a robustness problem that can result when the data distribution encountered at test time shifts from the distribution encountered during training.
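The cleaning-robot story can be reduced to a numeric sketch. The example below (made-up distributions, not from the cited paper) fits a simple threshold classifier on one data distribution and then evaluates it on a shifted one; accuracy degrades sharply even though the model itself is unchanged.

```python
import random

random.seed(0)

def make_data(mu0, mu1, n=1000):
    """Binary data: class 0 ~ N(mu0, 1), class 1 ~ N(mu1, 1)."""
    data = [(random.gauss(mu0, 1.0), 0) for _ in range(n)]
    data += [(random.gauss(mu1, 1.0), 1) for _ in range(n)]
    return data

def fit_threshold(data):
    """Learn a decision rule: the midpoint of the class means seen in training."""
    c0 = [x for x, y in data if y == 0]
    c1 = [x for x, y in data if y == 1]
    return (sum(c0) / len(c0) + sum(c1) / len(c1)) / 2

def accuracy(threshold, data):
    # predict class 1 whenever the feature exceeds the learned threshold
    return sum((x > threshold) == y for x, y in data) / len(data)

train = make_data(0.0, 4.0)
threshold = fit_threshold(train)

acc_in_dist = accuracy(threshold, make_data(0.0, 4.0))  # same distribution
acc_shifted = accuracy(threshold, make_data(3.0, 7.0))  # both classes shifted +3
print(acc_in_dist, acc_shifted)  # accuracy drops sharply under shift
```

Nothing about the classifier changed between the two evaluations; only the world it was deployed into did, which is exactly the robot-meets-pet situation.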
Adversarial inputs are a specific case of distributional shift in which inputs are specially designed to trick an AI system.
Unsafe exploration can result when a system seeks to maximise its performance and attain its goals without safety guarantees that constraints will not be violated while it learns and explores in its environment. An example would be the household cleaning robot putting a wet mop in an electrical outlet while learning optimal mopping strategies (García and Fernández, 2015; Amodei and Olah et al., 2016).
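One simple family of mitigations is to constrain exploration itself. The sketch below is a generic illustration with hypothetical action names, not a method from the cited papers: an epsilon-greedy learner's choices are filtered through a hard safety predicate, so the unsafe action can never be sampled, even though its (misleading) value estimate looks attractive.

```python
import random

# Hypothetical action set for the cleaning robot; the safety predicate is
# hand-written domain knowledge, supplied before learning starts.
ACTIONS = ["mop_floor", "dust_shelf", "mop_outlet"]
UNSAFE = {"mop_outlet"}  # wet mop + electrical outlet

def safe_actions(actions):
    return [a for a in actions if a not in UNSAFE]

def choose_action(q_values, epsilon=0.2, rng=random):
    candidates = safe_actions(ACTIONS)       # constrain *before* sampling
    if rng.random() < epsilon:
        return rng.choice(candidates)        # explore only within the safe set
    return max(candidates, key=lambda a: q_values.get(a, 0.0))

q = {"mop_floor": 1.0, "dust_shelf": 0.5, "mop_outlet": 9.9}
# Even though the unsafe action has the highest value estimate,
# it can never be selected during learning.
picks = {choose_action(q) for _ in range(1000)}
print("mop_outlet" in picks)  # False
```

The obvious limitation is that this only prevents harms the designer anticipated; unforeseen unsafe states are exactly what makes the general problem hard.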
Although careful safety engineering can rule out many safety risks, it is difficult to get everything right from the start. Once AI systems are deployed, we need tools to continuously monitor and adjust them. Our last category, assurance, addresses these problems from two angles: monitoring and enforcing.
Monitoring comprises all the methods for inspecting systems in order to analyse and predict their behaviour, both via human inspection (of summary statistics) and automated inspection (to sweep through vast amounts of activity records). Enforcement, on the other hand, involves designing mechanisms for controlling and restricting the behaviour of systems. Problems such as interpretability and interruptibility fall under monitoring and enforcement respectively.
AI systems are unlike us, both in their embodiments and in their way of processing data. This creates problems of interpretability: well-designed measurement tools and protocols are needed to assess the quality of the decisions made by an AI system (Doshi-Velez and Kim, 2017). For instance, a medical AI system would ideally issue a diagnosis together with an explanation of how it reached the conclusion, so that doctors can inspect the reasoning process before approval (De Fauw et al., 2018). Furthermore, to understand more complex AI systems we might even employ automated methods for constructing models of behaviour using machine theory of mind (Rabinowitz et al., 2018).
Finally, we want to be able to turn off an AI system whenever necessary. This is the problem of interruptibility. Designing a reliable off-switch is very challenging: for instance, because a reward-maximising AI system typically has strong incentives to prevent this from happening (Hadfield-Menell et al., 2017); and because such interruptions, especially when they are frequent, end up changing the original task, leading the AI system to draw the wrong conclusions from experience (Orseau and Armstrong, 2016).
We are building the foundations of a technology which will be used for many important applications in the future. It is worth bearing in mind that design decisions which are not safety-critical at the time of deployment can still have a large impact when the technology becomes widely used. Although convenient at the time, once these design choices have been irreversibly integrated into important systems the tradeoffs look different, and we may find they cause problems that are hard to fix without a complete redesign.
Two examples from the history of programming are the null pointer, which Tony Hoare calls his ‘billion-dollar mistake’, and the gets() routine in C. If early programming languages had been designed with security in mind, progress might have been slower, but computer security today would probably be in a much stronger position.
With careful thought and planning now, we can avoid building in analogous problems and vulnerabilities. We hope the categorisation outlined in this post will serve as a useful framework for methodically planning in this way. Our intention is to ensure that AI systems of the future are not just ‘hopefully safe’ but robustly, verifiably safe — because we built them that way!
We look forward to continuing to make exciting progress in these areas, in close collaboration with the broader AI research community, and we encourage individuals across disciplines to consider entering or contributing to the field of AI safety research.
Source : https://medium.com/@deepmindsafetyresearch/building-safe-artificial-intelligence-52f5f75058f1
During the first week of August, we participated in Argentina’s first Agtech Week, a series of events distributed in Buenos Aires, Rosario and Córdoba; Argentina’s main cities and agribusiness hubs.
It seems a long time since NXTP Labs, one of Latin America’s most active accelerators and venture firms, launched the region’s first Agtech Acceleration Program in 2016 (then titled ‘Agrotech’ to identify with the local ‘agro’ slang for agribusiness), with a group of no more than 10 startups that had little-to-no funding and only early adoption from farmers.
Agtech Week culminated in Congreso Aapresid (the congress of the Argentine Association of Direct Sowing Producers) in Córdoba, where an audience of over 5,000 farmers heard use cases of blockchain and other frontier technologies presented by local and international entrepreneurs and technology companies. Last year we also helped develop Pulse, an innovation hub in Piracicaba, Brazil, together with Raízen, Brazil’s largest ethanol producer, to run pilots with agtech startups.
Despite this initial interest from local institutions and farmers trying to understand the implications of new technological developments — most of the initial interest in Blockchain, for example, starts with ‘which cryptocurrency should I be invested in?’ — technology is yet to disrupt the agribusiness value chain.
Latin America is a relevant player, representing 16% of the world’s food and agriculture exports, with particularly large shares in bananas (over 60%), beef (over 30%), coffee (over 45%), corn (over 30%), poultry (over 30%), soybeans (over 50%) and sugar (over 50%).
If global agricultural production needs to grow by 60% by 2050 to meet global demand, Latin America’s production needs to grow by 80%, given its market share and growth potential. This implies a bigger focus on yield increases, versus expanding arable land or raising crop intensity — the main drivers of production growth over the last few decades. Moreover, Latin America’s productivity growth (1.9%) lags the OECD average (2.4%).
In some respects, Latin America has pioneered agribusiness technology adoption. One practice ingrained in Argentina’s Aapresid farmer association is direct sowing [no-till], which reduces soil erosion and is now used on 81% of Argentina’s arable land (compared with 23% in the US and 10% in Europe).
Coupling high technology adoption with market share, local companies have seized the opportunity to grow globally in the shadow of the global leaders in their key markets.
Recently, Syngenta acquired Strider, a precision-farming digital tool that, with just 40 employees, operates in six of the 10 largest agricultural operations in Brazil. In effect, Strider had closed key contracts with large-scale farmers in Brazil that would otherwise have taken Syngenta a year or two to secure; the acquisition was as much a fast go-to-market digital strategy as a technology purchase.
Additionally, John Deere acquired PLA, originally from Las Rosas, Argentina, which manufactures sprayers, planters and specialty products with 450 employees and sales on four continents. Earlier this year, John Deere also announced its acquisition of King Agro, a family-owned business with approximately 180 employees and a 30-year history of developing carbon fiber products; King Agro’s products reduce the overall cost of spraying by extending the sprayer’s radius.
As a venture firm focused on technology opportunities in Latin America, NXTP Labs is looking for new technology firms capturing large market opportunities where local entrepreneurs may have a competitive advantage. Latin American agribusiness is relevant in the global food export market, and it already has examples of successful entrepreneurs in both the tech and agribusiness sectors to serve as role models.
When analyzing specific sub-sectors within agribusiness, we have identified opportunities in:
Some of our portfolio companies are already developing these solutions. Some examples are:
Traceability: Bovcontrol works with milk producers to monitor, from farm to fork, the treatments given to dairy cows.
Insurance: S4 Agtech analyzes satellite imagery to develop an index that helps insurers calibrate their risk exposure according to real-time productivity data.
Finance: PagoRural is developing a non-banking financial service that gives farmers access to credit when buying seed, while Agrofy, an ag-specific eCommerce site, is on its way to financing its marketplace products purely online.
Logistics: CargoX is helping Brazilian freight companies optimize their fleets.
Data collection: Satellogic produces high-precision nanosatellites, while Kilimo and Auravant help farmers aggregate these images, process them, and integrate them with production data to turn them into actionable insights.
VC investments in Latin America surpassed $1 billion for the first time in 2017, doubling the amount committed to startups in 2016. Globally, ag and foodtech investment has surpassed the $10 billion mark, according to AgFunder.
Latin America plays a key role in the food and agribusiness value chain and is yet to set its footprint with prominent agtech players that can leverage regional advantages to become dominant global players. Its market is less integrated and lacks the venture liquidity track record of the United States. However, several sub-sectors provide an edge for local entrepreneurs to develop local solutions for problems shared by farmers, processors, and distributors in the region. Some of them have started to catch the attention of key regional stakeholders.
We have scouted approximately 520 agtech startups across the region, more than half of them launched in the past three years, and less than a quarter have institutional funding, so there is a lot of innovation and investment opportunity coming our way.
Source : https://agfundernews.com/the-gaps-and-opportunities-for-agribusiness-technology-in-latin-america.html/
The AARRR funnel framework has been the dominant guiding framework for metrics, goal setting, and strategic growth conversations. Funnels were a good starting point, but they do not accurately represent how the fastest-growing products grow. It is time to move past the funnel framework and focus on Growth Loops.
This is the third post in a four-post series. The previous posts in the series made three important points: growth wins, the game has changed, and to adapt we need a system of product, process, and team.
This system is just the beginning. We need new frameworks and tools to think about how products grow that incorporate these changes to growth and the lessons we’ve learned.
One thing we ask participants in Reforge Programs is to go around their company and ask five different people to whiteboard the answer to a seemingly simple question: How does your product grow?
This seems like a simple question. But what everyone inevitably finds is one or more of the following:
This is a BIG problem. If everyone has a different or incomplete picture of how the product grows, then you can’t have apples-to-apples discussions about priorities, metrics, goals, or strategy. This leads to a few things:
“How does your product grow?” is simply the most important question to be able to answer. Growth is the entire reason products and companies exist (especially venture-backed startups). Companies that continually grow also produce the largest positive outcomes. More importantly for your own career, those who drive growth at their company are rewarded over those who do not.
So, what is the best way to answer this question?
One of the common answers to “How does your product grow?” is a picture of a funnel. The AARRR funnel framework was originally created by Dave McClure. It was a great starting point, and it helped me and millions of others level up our game. But the framework is now more than 11 years old, and since then we’ve learned a lot about how the fastest-growing software products grow.
The biggest thing we’ve learned is that the funnel framework is too micro a view to answer “How does your product grow?” It helps explain a specific step within a Growth Loop, but misses the larger picture of the loop itself. When the funnel is applied at the company level and used to explain how a product grows, it leads to a few common issues:
When building a new product, the most common approach we see is to “build a great product” and then test a lot of different channels to see what works. This is exactly the wrong way to approach it, because it treats product strategy and acquisition strategy as silos. You see the same siloed strategic planning in larger, more developed products: typically the product team goes off and plans the product strategy, and then marketing goes off and creates the acquisition strategy.
This siloed strategic thinking is the cause of most distribution failures. Product Channel Fit tells us why. We commonly forget that we do not control the rules of the channels; the channels control the rules. As a result, we have to mold our product to fit the channels, not the other way around.
To make matters worse, we also tend to treat our monetization strategy as a third silo. But we know, due to Channel Model Fit, that our monetization model enables or disables certain channels.
Product, channels, and monetization need to be thought about together. They are interlinked. But the funnel framework leads a lot of teams to treat them as siloed layers.
It is common for companies to structure teams by layers of the funnel. Marketing owns acquisition. Product owns retention. Sales (if B2B) owns revenue. Then each one of those teams is given a metric that corresponds to that layer of the funnel.
The problem is that the teams then optimize at each other’s expense in order to reach their siloed goals. Marketing brings in low-quality users or leads at the top of the funnel to hit its goal, but that tanks retention and other down-funnel metrics. All sorts of checks and balances get put in place over time to try to fix this, which ends up complicating the understanding and goal setting of the metrics.
Funnels operate in one direction. Put more in at the top, get more out at the bottom. There is no concept of how to reinvest what comes out at the bottom to get more at the top to continue to feed growth over time. In other words, no compounding effect. This means we have to keep putting more into the top to get more at the bottom. More money, more people, more tactics, more channels, more, more, more. This is unsustainable. Understanding the connection of how you reinvest to get more growth changes the way you think about where to focus and what to invest in (more on that below).
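The difference can be made concrete with a back-of-the-envelope simulation (all numbers are invented for illustration): a funnel converts a fixed top-of-funnel input every period, while a loop reinvests part of each period's output as the next period's input.

```python
def funnel_growth(new_input=1000, conversion=0.1, periods=12):
    """Linear: the same input is poured in every period, nothing is fed back."""
    return sum(new_input * conversion for _ in range(periods))

def loop_growth(seed=1000, conversion=0.1, reinvest=12.0, periods=12):
    """Compounding: each period's output generates the next period's input."""
    prospects, total = float(seed), 0.0
    for _ in range(periods):
        new_users = prospects * conversion
        total += new_users
        prospects = new_users * reinvest  # e.g. each new user invites ~12 prospects
    return total

print(funnel_growth())  # 1200.0 -- flat, no compounding
print(loop_growth())    # grows every period while conversion * reinvest > 1
```

With these made-up parameters the loop produces more than triple the funnel's output over a year from the same seed, and the gap widens every additional period.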
What is a framework that represents how the fastest companies grow? One that combines product, channels, and monetization into one system? One that looks for compounding growth vs linear growth?
“Compound interest is man’s greatest invention.” – attributed to Einstein
The fastest-growing products are better represented as a system of loops, not funnels. Loops are closed systems in which the inputs, through some process, generate an output that can be reinvested as input to the next cycle. Growth loops serve different kinds of value creation, including new users, returning users, defensibility, and efficiency.
Here are a couple examples:
The driving force behind Pinterest’s growth is the following loop:
One of the driving forces of SurveyMonkey’s growth is the following loop:
These are two of over 20 growth loops we’ve identified in our research for the Growth Models Deep Dive program that drive acquisition, retention, defensibility, efficiency, or a combination. Those who understand them and organize their product and teams around them will be the ones who create the most value. There are two primary reasons why Growth Loops are the key to the fastest-growing products.
Loops force you to answer “How does one cohort of users lead to another cohort of users?” You focus on how you reinvest the output of one cycle of the loop into the next cycle of the loop to get more output. This creates a compounding effect that is more sustainable.
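One hypothetical way to quantify "one cohort leads to another" (invented numbers, not a Reforge model) is a per-cycle amplification factor k: each cohort generates the next cohort at k times its own size. Even with k below 1, the loop multiplies the value of the seed cohort rather than spending it once.

```python
def cohort_chain(first_cohort=1000, k=0.8, cycles=20):
    """Each cohort's output is reinvested to generate the next cohort."""
    cohorts, size = [], float(first_cohort)
    for _ in range(cycles):
        cohorts.append(size)
        size *= k  # next cohort = k * this cohort
    return cohorts

cohorts = cohort_chain()
total = sum(cohorts)
# Geometric series: the total approaches first_cohort / (1 - k) = 5000 here,
# i.e. the loop turns 1,000 acquired users into roughly 5x that many over time.
print(round(total))
```

Measuring k per cycle is one concrete way to compare the "power" of competing loops: a loop with k = 0.8 is worth five acquired users per seed user, while one with k = 0.2 is worth only 1.25.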
Not all loops are created equal. You’ll be tempted to draw a ton of loops for your product, but that typically means you just have a ton of low-powered loops that aren’t sustainable. The fastest-growing products are typically powered by one or two major loops that transition over time. Measuring and understanding the power and health of your loops is critical to understanding where to focus.
Loops combine how your product, channel, and monetization model work together in a single system rather than treating them as silos. As a result, they end up being more specific to your product and company, making them harder for others to replicate.
On the other hand, strategies and tactics that aren’t specific to your product, user, or model can by definition be replicated with ease by others. As they get copied, their effectiveness decreases and always trends toward zero, requiring you to constantly invent new strategies and tactics. This is not sustainable over the long term.
Once you start looking at things through the loop framework, you start to make a very different set of decisions.
Once you start viewing things through loops, you stop approaching acquisition, product, and monetization in silos. It forces you to think about how the three work together in a system. You stop thinking about the never-ending cycle of more tactics, more channels, more of everything just to keep filling the top of the funnel, and you start thinking about how what you produce can be reinvested.
If you had two options, which one would you choose?
We really hope you choose Initiative B. This highlights how you make investment decisions differently. Rather than looking for the short term bumps and sugar rushes, loops help you start looking for the things that will compound over time producing much better results over the long term.
In the second post in the series, we talked about the larger need for cross-functional teams, as product, data, engineering, and design play a growing role in outcomes like acquisition, retention, and monetization. As you can see above, loops traverse typical functional lines. To enable and improve them, you typically need every function represented and working towards the same goal: the output of the loop. This helps teams align and organize around the loop rather than by function, and it reduces teams optimizing at each other’s expense, since the results show up in the loop’s output.
Understanding loops, how to measure them, and how to map them to your product is just the first step. It is a phenomenal qualitative tool to change the way you think about growing a product. But it is hard to represent all the individual levers and their effect on your metrics. You need to translate your loops into a quantitative growth model to help communicate, prioritize, make strategic bets, set goals, and drive your metrics roadmap. We’ll talk about this in the next (and final) post of the series. Subscribe here to make sure you don’t miss it.
Go deep on Growth Loops + Models in our upcoming fall program along with other experienced practitioners from Google, Facebook, Spotify, Adobe, HubSpot, and many more. We’ll go through the properties that make a loop, detailed examples of 20+ growth loops, how to measure and analyze your loops, and how to build quantitative models.
Source : https://www.reforge.com/blog/growth-loops