News & Press

Follow our trends news and stay up to date.

Ecology – Futurecasting ecological research: the rise of technoecology


Increasingly complex research questions and global challenges (e.g., climate change and biodiversity loss) are driving rapid development, refinement, and use of technology in ecology. This trend is spawning a distinct sub‐discipline, here termed “technoecology.” We highlight recent ground‐breaking and transformative technological advances for studying species and environments: bio‐batteries, low‐power and long‐range telemetry, the Internet of things, swarm theory, 3D printing, mapping molecular movement, and low‐power computers. These technologies have the potential to revolutionize ecology by providing “next‐generation” ecological data, particularly when integrated with each other, and in doing so could be applied to address a diverse range of requirements (e.g., pest and wildlife management, informing environmental policy and decision making). Critical to technoecology’s rate of advancement and uptake by ecologists and environmental managers will be fostering increased interdisciplinary collaboration. Ideally, such partnerships will span the conception, implementation, and enhancement phases of ideas, bridging the university, public, and private sectors.


Ecosystems are complex and dynamic, and the relationships among their many components are often difficult to measure (Bolliger et al. 2005, Ascough et al. 2008). Ecologists often rely on technology to quantify ecological phenomena (Keller et al. 2008). Technological advancements have often been the catalyst for enhanced understanding of ecosystem function and dynamics (Fig. 1, Table 1), which in turn aids environmental management. For example, the inception of VHF telemetry to track animals in the 1960s allowed ecologists to remotely monitor the physiology, movement, resource selection, and demographics of wild animals for the first time (Tester et al. 1964). However, advancements in GPS and satellite communications technology have largely supplanted VHF tracking. Unlike VHF, GPS can log locations at high recording frequency, with greater accuracy and precision and less researcher interference with the animals, leading to an enhanced, more detailed understanding of species’ habitat use and interactions (Rodgers et al. 1996). This has assisted species management by not only highlighting important areas to protect (Pendoley et al. 2014), but also identifying key resources such as individual plants rather than general areas of vegetation.

Illustrative timeline of new technologies in ecology and environmental science (see Table 1 for technology descriptions).
Table 1. Timeline of new technologies in ecology and environmental science, to accompany information in Fig. 1
Sonar: First used to locate and record schools of fish
Automated sensors: Sensors used specifically to measure and log environmental variables
Camera traps: First implemented to record wildlife presence and behavior
Sidescan sonar: Used to efficiently create an image of large areas of the sea floor
Mainframe computers: Computers able to undertake statistical analysis of large ecological datasets
VHF tracking: Radio tracking, allowing ecologists to remotely monitor wild animals
Landsat imagery: The first space‐based, land‐remote sensing data
Sanger sequencing: The first method to sequence DNA, based on the selective incorporation of chain‐terminating dideoxynucleotides by DNA polymerase during in vitro DNA replication
LiDAR: Remote sensors that measure distance by illuminating a target with a laser and analyzing the reflected light
Multispectral Landsat: Satellite imagery with different wavelength bands along the spectrum, allowing measurements through water and vegetation
Thermal bio‐loggers: Surgically implanted devices that measure animal body temperature
GPS tracking: Satellite tracking of wildlife with higher recording frequency, greater accuracy and precision, and less researcher interference than VHF
Thematic Landsat: A whisk‐broom scanner operating across seven wavelengths, able to measure global warming and climate change
Infrared camera traps: Able to sense animal movement in the dark and take images without a visible flash
Multibeam sonar: Transmits broad, fan‐shaped acoustic pulses to establish a full water‐column profile
Video traps: Video instead of still imagery, able to determine animal behavior as well as identity
Accelerometers: Measure animal movement (acceleration) irrespective of satellite reception (geographic position)
3D LiDAR: Accurate measurement of 3D ecosystem structure
Autonomous vehicles: Unmanned sensor platforms that collect ecological data automatically and remotely, including in terrain that is difficult and/or dangerous for humans to access
3D tracking: Inertial measurement unit (IMU) devices used in conjunction with GPS data to create real‐time animal movement tracks
ICARUS: The International Cooperation for Animal Research Using Space (ICARUS) Initiative, which aims to observe global migratory movements of small animals through a satellite system
Next‐gen sequencing: Millions of DNA fragments from a single sample sequenced in unison
Long‐range, low‐power telemetry: Low‐voltage, low‐amperage transfer of data over several kilometers
Internet of things: A network of devices that communicate with one another, transferring information and processing data
Low‐power computers: Small computers able to connect an array of sensors and, in some cases, run algorithms and statistical analyses
Swarm theory: The autonomous but coordinated use of multiple unmanned sensor platforms to complete ecological surveys or tasks without human intervention
3D printing: Construction of custom equipment and of animal analogues for behavioral studies
Mapping molecular movement: Cameras that can display images at a sub‐cellular level without the need for electron microscopes
Biotic gaming: Human players control a paramecium as in a video game, which could aid understanding of microorganism behavior
Bio‐batteries: Electro‐biochemical devices that run on compounds such as starch, allowing sensors and devices to be powered for extended periods in remote locations where more traditional energy sources such as solar power may be unreliable (e.g., rainforests)
Kinetic batteries: Batteries charged via movement, able to power microcomputers

Ecological advances to date have been driven primarily by technology that enhances data capture. Expanding technologies have focused on the collection of information at high spatial and temporal resolution. For example, small, unmanned aircraft can currently map landscapes with sub‐centimeter resolution (Anderson and Gaston 2013), while temperature, humidity, and light sensors can be densely deployed (hundreds per hectare) to record micro‐climatic variations (Keller et al. 2008). Such advances in data acquisition technologies have delivered knowledge of the natural environment unthinkable just a decade ago. But what does the future hold?

Here, we argue that ecology could be on the precipice of a revolution in data acquisition. It will unfold through three concepts: supersize (the expansion of current practice), step‐change (the ability to use technology to address questions we previously could not), and radical change (exploring questions we could not previously imagine). Technologies, both current and emerging, have the capacity to spawn this “next‐generation” ecological data that, if harnessed effectively, will transform our understanding of the ecological world (Snaddon et al. 2013). What we term “technoecology” is the hardware side of “big data” (Howe et al. 2008), focused on the employment of cutting‐edge physical technology to acquire new volumes and forms of ecological data. Such data can help address complex and pressing global issues of ecological and conservation concern (Pimm et al. 2015). However, the pace of this revolution will be determined in part by how quickly ecologists embrace these technologies. The purpose of this article is to bring to ecologists’ attention some examples of current, emerging, and conceptual technologies that will be at the forefront of this revolution, in order to hasten the uptake of these recent developments in technoecology.

Technoecology’s Application and Potential

Bio‐loggers: recording the movement of animals

Bio‐logging technology is not new to ecology, incorporating sensors such as heart rate loggers, as well as VHF and GPS technology. Rather, bio‐logging is being supersized, expanding current practices with new technology. Accelerometers are being used to record fine‐scale animal movement in real time, something previously possible only via direct observation (Shamoun‐Baranes et al. 2012). Using accelerometry, we can calculate an animal’s rate of energy expenditure (Wilson et al. 2006), allowing ecologists to attribute a “cost” to different activities and relate it to environmental variation.

Bio‐loggers are also causing a step‐change in the questions we can explore in animal movement. Real‐time three‐dimensional animal movement tracks can now be recreated from data collected by inertial measurement units, which incorporate accelerometers, gyroscopes, magnetometers, and barometers. This technology has been used to examine the movements of cryptic animals such as birds (Aldoumani et al. 2016) and whales (Lopez et al. 2016), to determine both how they move and how they respond to external stimuli. Incorporating GPS technology would allow animal movement to be placed spatially within 3D‐rendered environments and would allow examination of how individuals respond to each other, creating a radical change in the discipline of animal movement. Over the last 50 yr, we have gone from simply locating animals to reconstructing behavioral states and estimating energy expenditure using these technological advancements.

Bio‐batteries: plugging‐in to trees to run field equipment

Bio‐batteries are new‐generation fuel cells that will supersize both the volume and the scale of data that can be collected. Bio‐batteries convert chemical energy into electricity using low‐cost biocatalyst enzymes. Also known as enzymatic fuel cells, these electro‐biochemical devices can run on compounds such as starch in plants, the most widely used energy‐storage compound in nature (Zhu et al. 2014). While still in early development, bio‐batteries have huge potential for research. Enzymatic fuel cells containing a 15% (wt/v) maltodextrin solution have an energy‐storage density of 596 Ah/kg, one order of magnitude higher than that of lithium‐ion batteries. Imagine future ecologists “plugging in” to trees, receiving a continuous electricity supply to run long‐term sampling and monitoring equipment such as temperature probes and humidity sensors. Further, the capabilities of bio‐batteries combined with low‐power radio communication devices (see Next‐generation Ecology) could revolutionize field‐based data acquisition.
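To give a sense of scale, a back‐of‐envelope calculation shows what the reported 596 Ah/kg energy‐storage density (Zhu et al. 2014) could mean for a field sensor. The fuel mass and sensor current draw below are illustrative assumptions, not measured figures.

```python
# Rough runtime estimate for a bio-battery-powered field sensor.
# The 596 Ah/kg figure is from Zhu et al. (2014); the fuel mass and
# current draw are illustrative assumptions only.

ENERGY_DENSITY_AH_PER_KG = 596  # enzymatic fuel cell, 15% (wt/v) maltodextrin

def runtime_days(fuel_mass_kg: float, mean_current_a: float) -> float:
    """Days a sensor drawing mean_current_a could run on fuel_mass_kg of fuel."""
    capacity_ah = ENERGY_DENSITY_AH_PER_KG * fuel_mass_kg
    return capacity_ah / mean_current_a / 24

# e.g., 100 g of maltodextrin powering a 5 mA environmental logger:
print(round(runtime_days(0.1, 0.005)))  # ~497 days
```

Even with these rough numbers, a modest fuel mass supports more than a year of continuous logging, which is what makes the "plugging in to trees" scenario plausible.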

Bio‐batteries could greatly aid current technoecological projects such as large‐scale environmental monitoring. For example, Cama et al. (2013) are undertaking permanent monitoring of the Napo River in the Amazon using data transfer over the Wi‐Fi network already in place. The Wi‐Fi towers are powered via solar panels, but beneath the dense rainforest canopy there is not enough light to run electronics on solar power. If sensor arrays within the rainforest could be powered continuously via the trees, the project could run without avoiding regions that lack sunlight or sending staff to regularly replace batteries.

Low‐power, long‐range telemetry: transmitting data from the field to the laboratory

Ecological data collection often occurs in locations difficult or hazardous to traverse, meaning that practical methods of data retrieval often influence sensor placement and limit the data collected. But what if the data could be sent from remote sensors back to a central location for easy collection? Ecological projects such as monitoring the Amazon environment already do so using Wi‐Fi towers (Cama et al. 2013), but Wi‐Fi transmission range is limited (approximately 30 m). This range can be extended with larger antennas and increased transmission power, but at the cost of much greater electricity consumption. Other technologies can transmit data via satellite (Lidgard et al. 2014) or the cell phone network (Sundell et al. 2006), but are limited to locations with coverage or are prohibitively expensive. Low‐power networks offer great promise for data transfer over large distances (kilometers), including the increasingly popular LoRa system (Talla et al. 2017). Long‐range telemetry is already used commercially for reading water meters: water usage data are sent to hubs hourly, and a single battery can last over a decade (e.g., Taggle Systems). Integrating such technology into ecological research would allow sensor deployment in remote areas where other communication methods are infeasible, for example, dense forests, high mountain ranges, swamps, and deep canyons. Such devices could also transmit information to a base station, resulting in faster data collection and more convenient data retrieval.
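A decade of battery life from hourly transmissions is plausible because a duty‐cycled radio spends almost all of its time asleep. The sketch below illustrates the arithmetic; all current and capacity figures are assumptions for a LoRa‐class device, not manufacturer specifications.

```python
# Why duty cycling makes decade-long battery life possible: the radio
# transmits for a second or two each hour and sleeps otherwise, so the
# mean current draw is tiny. All figures here are assumptions.

BATTERY_MAH = 3000           # e.g., one AA lithium primary cell (assumption)
SLEEP_MA = 0.005             # deep-sleep current, 5 uA (assumption)
TX_MA = 40                   # transmit current (assumption)
TX_SECONDS_PER_HOUR = 1.5    # one short uplink per hour (assumption)

def battery_life_years() -> float:
    """Estimated years of operation from the mean current draw."""
    tx_fraction = TX_SECONDS_PER_HOUR / 3600
    mean_ma = TX_MA * tx_fraction + SLEEP_MA * (1 - tx_fraction)
    hours = BATTERY_MAH / mean_ma
    return hours / (24 * 365)

print(round(battery_life_years(), 1))  # roughly 16 yr under these assumptions
```

The dominant term is the sleep current, which is why hardware designed around aggressive sleep modes, rather than a bigger battery, is what enables multi‐year deployments.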

The Internet of things: creating “smart” environments

It is now possible to wirelessly connect devices to one another so they can share information automatically. This is known as the Internet of things (IoT), in which a variety of “things” or objects interact and co‐operate with their neighbors (Gershenfeld et al. 2004). Each device remains capable of acting independently, but can communicate with others to gain additional information. Expanding on low‐power, long‐range telemetry, IoT could be used to set up peer‐to‐peer networking that transfers data from one device to the next until it reaches a location with Internet access or cell coverage, where more traditional means of transmission are possible. An example of such peer‐to‐peer transfer in ecology is ZebraNet: a system of GPS devices attached to animals (zebras) that exchange each individual’s GPS data when in close proximity (Juang et al. 2002). Using this design, retrieving a device attached to one animal also provides the data from all other animals.

The applications of IoT go beyond the simple transfer of data. IoT technology effectively creates “smart environments,” in which hundreds of networked devices, such as temperature sensors, wildlife camera traps, and acoustic monitors, are connected wirelessly and transmit data to central nodes. Using bio‐batteries, such devices could run “indefinitely” (not literally, as components will eventually fail due to wear and tear in field conditions, which can be severe in some environments, e.g., very high/low temperatures, humidity, and/or salinity). From there, fully automated digital asset management systems can query and analyze the data. Automated processes become increasingly pertinent as long‐term, continuously recording sensor networks proliferate (e.g., the National Ecological Observatory Network [NEON]). NEON comprises multiple sensors measuring environmental parameters such as CO2 and ozone concentrations and soil moisture, all recording continuously and remotely at high temporal resolution, creating ever‐expanding environmental datasets (Keller et al. 2008). Making best use of such data requires analysis at high temporal resolution, which is not feasible for researchers to do manually but is possible with machine learning algorithms and other advanced statistical approaches.

Swarm theory for faster and safer data acquisition, and dynamic ecological survey

Swarm theory is a prime example of the complementary nature of technology and ecology. In essence, swarm theory refers to individuals self‐organizing to work collectively to accomplish goals. Swarm theory relates to both natural and artificial life, and mathematicians have studied the organization of ant colonies (Dorigo et al. 1999) and the flocking behavior of birds and insects (Li et al. 2013) in an attempt to understand this phenomenon. Swarm theory is already being applied with unmanned autonomous vehicles for first response to disasters, investigating potentially dangerous situations, search and rescue, and military purposes. Exciting applications of swarm theory include faster data acquisition and communication over large geographic scales and dynamic ecological survey.

Swarm theory is directly applicable to the collection of remotely sensed data by multiple unmanned vehicles, whether aerial, water surface, or underwater. Unmanned aerial vehicles (UAVs) are already being used for landscape mapping and wildlife identification (Anderson and Gaston 2013, Humle et al. 2014, Lucieer et al. 2014), and the data collected can be processed into high‐resolution (<10 cm) models to characterize the variability in terrain and vegetation density (Friedman et al. 2013, Lucieer et al. 2014). So far, however, such vehicles have been used individually. By employing swarm theory, data collection could be completed faster by several vehicles working simultaneously and collaboratively. Moreover, if vehicles could communicate with each other, data transfer would also improve. Given the comparatively low costs of unmanned versus manned vehicles, such implementation would dramatically increase the efficiency of data collection while also eliminating safety issues. This efficiency could, in turn, allow more repeated and systematic surveys, improving the statistical power and inference of time‐series analyses.

Even more exciting than swarms simply advancing our data acquisition capabilities is the prospect of deploying them as more active tools for quantifying biotic interactions. The ability of a swarm to locate and then track individuals of different species in real time could revolutionize our understanding of key ecological phenomena such as dispersal, animal migration, competition, and predation. Swarms could initially sweep large areas; as individual drones detect the species or individuals of interest, they could inform other drones, refining search areas based on this geographic information and detecting and tracking the behavior of additional animals in real time. An increased capacity to detect and measure species interactions, and to assess marine and terrestrial landscape change, would enhance our understanding of fundamental ecological and geological processes, ultimately helping to further ecological theory and improve biodiversity conservation (Williams et al. 2012).
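The sweep‐then‐converge behavior described above can be sketched very simply: when one drone broadcasts a detection, the others are retasked to waypoints ringed around it. This is a toy illustration only; real swarm controllers handle collision avoidance, communication loss, and continuous replanning.

```python
# Toy sketch of swarm search refinement: after one drone detects a target,
# idle drones are assigned waypoints on a ring around the detection point.
# Purely illustrative; not a real swarm control algorithm.

import math

def refine_search_area(detection: tuple, drones: list, radius: float) -> dict:
    """Assign each drone a waypoint evenly spaced around the detection point."""
    x0, y0 = detection
    waypoints = {}
    for i, drone_id in enumerate(drones):
        angle = 2 * math.pi * i / len(drones)
        waypoints[drone_id] = (x0 + radius * math.cos(angle),
                               y0 + radius * math.sin(angle))
    return waypoints

# One drone spots an animal at (40.0, 12.0); three others converge in a ring:
wps = refine_search_area((40.0, 12.0), ["d1", "d2", "d3"], radius=50.0)
print(sorted(wps))  # ['d1', 'd2', 'd3']
```

The key design point is that the detection, not a pre‐planned grid, drives where effort goes next, which is what distinguishes dynamic survey from conventional transect flying.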

This technology will however require careful consideration of the societal and legislative context, as is the case for UAVs (see Allan et al. 2015).

3D printing for unique and precise equipment

While 3D printing has existed since the 1980s, its use in ecology has primarily been as a teaching aid. For example, journals such as PeerJ offer the ability to download blueprints of 3D images. However, 3D printing has many more applications. These include (1) building specialized equipment cheaply and relatively easily by using the design tools included with many 3D printers or by scanning and modifying products that already exist (Rangel et al. 2013); (2) building small organic molecules, mimicking the production of molecules in nature (Li et al. 2015, Service 2015); and (3) printing realistic, high‐definition, full‐color designs in a number of different materials. Using such models, ecologists can print specialized platforms for sensor equipment (e.g., GPS collars) that fit animals better. The use of 3D printing could go a step further, however, and create true‐color, structurally complex analogues of vegetation or other animals for behavioral studies. For example, Dyer et al. (2006) explored whether bee attraction was based on color alone or was also associated with flower temperature. Flowers of intricate and exact shape and color, with heating elements embedded, could be printed more easily and realistically than they could be built by hand.

Mapping molecular movement for non‐destructive analysis of nature

New developments in optical resolution and image processing have led to cameras that can display images at a sub‐cellular level without the need for electron microscopes. Originally developed to scan silicon wafers for defects, this technology is now being used to examine molecular transport and exchange between muscle, cartilage, and bone in living tissue. The development also highlights what can be achieved by cross‐disciplinary and institutional collaboration, in this case among the optical and industrial measurement manufacturer Zeiss, Google, the Cleveland Clinic, and Brown University, Stanford University, and the University of New South Wales. Together, they have also created a “zoom‐able” model that can go from the centimeter level down to nanometer‐sized molecules, creating terabytes of data.

This technology’s ecological and environmental applications are substantial, paramount of which is the non‐destructive nature of the analysis, allowing time‐series analyses of molecular transfer. For instance, Clemens et al. (2002) examined the hyper‐accumulation of toxic metals by specific plant species. Understanding how some plants can absorb toxic metals holds promise for soil decontamination, but as stated by Clemens et al. (2002), “molecularly, the factors governing differential metal accumulation and storage are unknown.” The ability not only to observe the molecular transport of heavy metals in plant tissue, but also to change the observational scale, will greatly advance our knowledge of nutrient uptake and storage in plants.

Low‐power computers for automated data analysis

Low‐power microcomputers and microcontrollers exist in products such as the Raspberry Pi, Arduino, and BeagleBoard. In ecology, low‐power computers have been used to build custom equipment such as underwater stereo‐camera traps, automated weather stations, and GPS tracking collars (Williams et al. 2014, Greenville and Emery 2016). Notably, following a surge in hobbyists embracing the adaptability of low‐cost, low‐power, high‐performance microcontrollers, large companies such as Intel have joined the marketplace with microcontrollers like the Edison. The Edison is low‐power but has a dual‐core CPU, Wi‐Fi, Bluetooth, data storage, an inbuilt real‐time clock, and the ability to connect a plethora of sensors, from GPS receivers to infrared cameras (Intel 2014). Cell phones and wearable devices already integrate this technology. As an example, the Samsung Galaxy S8 cellular phone contains an eight‐core processor with 4 GB of RAM, cameras, GPS, accelerometers, a heart rate monitor, and fingerprint, proximity, and pressure sensors. Using microcontrollers such as these, it is possible to run high‐level algorithms and statistical analyses, such as image recognition and machine learning, on the device itself. Not all microcontrollers can run such complex data processes, and other options (e.g., microprocessors) will be required instead; this situation is likely to improve, however, with further development of the technology.

The ability to process data onboard has huge potential for technology’s ecological applications, such as remote camera traps and acoustic sensors. By running pattern recognition algorithms on the equipment itself, species identification from either images or calls could be achieved both automatically and immediately. This information could be processed, records tabulated, and a decision taken whether to retain, delete, or flag the recorded data for later manual observation, or even transmit the data back to the laboratory. This removes the need to store huge volumes of raw photographs or audio files; only tabulated summary results need be kept. The equipment could be programmed to keep photographs and recordings only of species of interest (e.g., rare or invasive species, or species that cannot be identified with high certainty) while deleting those that are not, and/or to save any data with recognition confidence below a designated threshold for manual inspection. In terms of direct application to conservation, this technology could allow intelligent poison bait stations to be built. Poison baiting is widely used to control pest species (Buckmaster et al. 2014), but the consumption of baits by non‐target species can have unintended consequences ranging from incapacitation to death, limiting the efficacy of the control program (Doherty and Ritchie 2017). Using real‐time image recognition software built into custom‐designed bait dispensers, we could program poison bait release only when pest animals are present (e.g., grooming traps), reducing harm to non‐target species.
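The onboard triage described above reduces to a small decision rule: flag low‐confidence identifications for manual review, keep raw recordings of species of interest, and otherwise store only a tabulated count. The species names and threshold below are illustrative assumptions.

```python
# Sketch of onboard camera-trap triage: keep, tally, or flag each record
# after species recognition. Species list and threshold are illustrative.

SPECIES_OF_INTEREST = {"night parrot", "red fox"}  # e.g., rare or invasive
CONFIDENCE_THRESHOLD = 0.90

def triage(species: str, confidence: float) -> str:
    """Decide what to do with one image after onboard recognition."""
    if confidence < CONFIDENCE_THRESHOLD:
        return "flag"   # save for manual inspection
    if species in SPECIES_OF_INTEREST:
        return "keep"   # retain the raw image
    return "tally"      # count the record, delete the image

print(triage("red fox", 0.97))                 # keep
print(triage("eastern grey kangaroo", 0.95))   # tally
print(triage("unknown", 0.42))                 # flag
```

The same three‐way rule could gate a bait dispenser: release only on a high‐confidence "keep" for a pest species, and never on a "flag," so uncertainty defaults to protecting non‐target animals.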

Technological Developments Flowing into Ecology

The technological developments from outside ecology that flow into the discipline offer great potential for theoretical advances and environmental applications. Two examples include personal satellites and neural interface research.

Personal satellites are an upcoming technology in the world of ecology. Like UAVs before them, miniature satellites promise transformative data gathering and transmission opportunities. The CubeSat project, created by California Polytechnic State University, San Luis Obispo, and Stanford University’s Space Systems Development Lab in 1999, focuses on affordable access to space. These satellites are designed to achieve low Earth orbit (LEO), approximately 125 to 500 km above the Earth. Measuring only 10 cm per side, CubeSats can house sensors and communications arrays that enable operators to study the Earth from space, as well as the space around the Earth. Open‐source development kits are already available. However, NASA estimates it currently costs approximately US$10,000 to launch ~0.5 kg of payload into LEO (NASA 2017), meaning the approach is still cost prohibitive, and the capabilities of such satellites remain limited. Given the rapid expansion of commercial space missions and the pace of evolving technology, however, private satellites to examine ecosystem function and dynamics may not be too far over the horizon.

Neural interface research aims to create a link between the nervous system and the outside world by stimulating or recording from neural tissue (Hatsopoulos and Donoghue 2009). Currently, this technology is focused on biomedical science, recording neural signals to decipher movement intentions with the aim of assisting paralyzed people. Recent experiments have surgically implanted a thumbtack‐sized array of electrodes able to record the electrical activity of neurons in the brain. Using wireless technology, scientists were able to link epidural electrical stimulation with leg motor cortex activity in real time to alleviate gait deficits after a spinal cord injury in rhesus monkeys (Macaca mulatta; Capogrosso et al. 2016). Restoration of volitional movement may at first appear of limited relevance to ecology, but the recording and analysis of neural activity is not. To restore volitional movement, mathematical algorithms are used to interpret neural coding and brain behavior to determine the intent to move. This technology may make it possible in the future to record and understand how animals make decisions based on neural activity, as affected by their surrounding environment. Such information could greatly advance the field of movement ecology and related theory (e.g., species niches, dispersal, meta‐populations, trophic interactions) and aid improved conservation planning for species (e.g., reserve design) based on how they perceive their environment and make decisions.

Next‐generation Ecology

The technologies listed above clearly provide exciting opportunities in data capture for ecologists. However, transformation of data acquisition in ecology will be most hastened by their use in combination, through the integration of multiple emerging technologies into next‐generation ecological monitoring (Marvin et al. 2016). For instance, imagine research stations fitted with remote cameras and acoustic recorders equipped with low‐power computers for image and call recognition and powered by trees via bio‐batteries. These devices could use low‐power, long‐range telemetry both to communicate with each other in a network, potentially tracking animal movement from one location to the next, and to transmit data to a central location. Swarms of UAVs working together (swarm theory) could then be deployed to both map the landscape and collect the data from the central location wirelessly without landing. The UAVs could then land in a location with Wi‐Fi and send all the data via the Internet into cloud‐based storage, accessible from any Internet‐equipped computer in the world (Fig. 2, Table 2). While a system with this much integration might still be theoretical, it is not outside the possibilities of the next 5–10 yrs.

Visualization of a future “smart” research environment, integrating multiple ecological technologies. The red lines indicate data transfer via the Internet of things (IoT), in which multiple technologies communicate with one another. The gray lines indicate more traditional data transfer. Broken lines indicate data transferred over long distances. Once initiated, this environment would require minimal researcher input. (See Table 2 for descriptions of numbered technologies.)
Table 2. Description of elements of a future “smart” research environment, as illustrated in Fig. 2
1. Bio‐batteries: In locations where solar power is not an option (closed canopies), data‐recording technology such as camera traps and acoustic sensors could run on bio‐batteries, eliminating the need for conventional batteries
2. The Internet of things (IoT): Autonomous unmanned vehicles could use IoT to wirelessly communicate with and collect data from recording technologies (camera traps) located in dangerous or difficult‐to‐access locations
3. Swarm theory: Autonomous vehicles such as unmanned aerial vehicles could self‐organize to efficiently collect and transfer data
4. Long‐range, low‐power telemetry: Technologies “talking” to each other, transferring information over several kilometers
5. Solar power: Environmental sensors, such as weather stations, could be solar‐powered and transfer data to autonomous vehicles for easy retrieval
6. Low‐power computer: A field server designed to wirelessly collect and analyze data from all the technology in the environment
7. Data transfer via satellite: Potential to autonomously transfer data from central hubs in the environment back to researchers, without the need to visit the research sites
8. Bioinformatics: With vast quantities of high‐resolution spatial and temporal data collected via permanent, perpetually recording technologies, methods to manage and analyze those data will become much more pertinent

Bioinformatics will play a large role in the use of the “next‐generation” ecological data that technoecology produces. Datasets will be very large and complex, meaning that manual processing and traditional computing hardware and statistical approaches will be insufficient. For example, the data captured in a 1‐km2 UAV survey for high‐resolution image mosaics and 3D reconstruction run to tens of gigabytes, so landscape‐scale datasets can reach terabytes. Such datasets are known as “big data” (Howe et al. 2008), and bioinformatics will be required to develop methods for sorting, analyzing, categorizing, and storing these data, combining the fields of ecology, computer science, statistics, mathematics, and engineering.
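The jump from "tens of gigabytes per survey" to "terabytes at landscape scale" is simple arithmetic worth making explicit. The 40 GB/km2 figure below is an assumption chosen within the "tens of gigabytes" range stated above.

```python
# Rough arithmetic behind the landscape-scale data volume claim.
# The per-survey figure is an assumption within the range stated in the text.

GB_PER_KM2 = 40  # assumed yield of one 1-km2 UAV survey (mosaics + 3D)

def survey_size_tb(area_km2: float, repeats: int = 1) -> float:
    """Total data volume in terabytes for a repeated survey campaign."""
    return GB_PER_KM2 * area_km2 * repeats / 1000

# A 100-km2 landscape surveyed quarterly for a year:
print(survey_size_tb(100, repeats=4))  # 16.0 TB
```

Repeat surveys multiply the volume linearly, which is why time‐series campaigns, not single flights, are what push ecological datasets firmly into big‐data territory.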

Multi‐disciplinary collaboration will also play a major role in developing future technologies in ecology (Joppa 2015). Ecological applications of cutting‐edge technology most often develop through multi‐disciplinary collaboration between scientists from different fields or between the public and private sectors. For instance, the Princeton ZebraNet project is a collaboration between engineers and biologists (Juang et al. 2002), while the development of the molecular microscope involved the private‐sector companies Zeiss and Google. Industries may already have the technology and knowledge to answer certain ecological questions, but might be unaware of such applications. Ecologists should also look to collaborate on convergent design; much of what we do as ecologists and environmental scientists has applications in agriculture, search and rescue, health, or sport science, and vice versa, so opportunities exist to share and reduce research and development costs. Finally, ecologists should be given opportunities for technology‐based training and placement programs early in their careers as a way to raise awareness of what could be done.

In the coming decades, a technology‐based revolution in ecology, akin to what has already occurred in genetics (Elmer‐DeWitt and Bjerklie 1994), seems likely. The pace of this revolution will be dictated, in part, by the speed at which ecologists embrace and integrate new technologies as they arise. It is worth remembering, “We still do not know one thousandth of one percent of what nature has revealed to us”—Albert Einstein.


We would like to thank the corresponding editor for his excellent suggestions for improving our manuscript and the anonymous reviewer who suggested the addition of the supersize, step‐change, and radical change conceptual framework.

MedCity – As digital health companies proliferate, it’s getting tougher to spot the strongest businesses

The digital health rocket seems to have gotten supercharged lately, at least when it comes to fundraising.  Depending on who you ask, either $1.62 billion (Rock Health’s count) or $2.5 billion (Mercom) or $2.8 billion (Startup Health’s count) was plowed into digital health companies in just the first three months of 2018.  By any measure Q1 2018 was the most significant quarter yet for digital health funding.  This headline has been everywhere.  Digital health:  to infinity and beyond!  But what is the significance of this? Should investors and customers of these companies be excited or worried?  It’s a little hard to tell.

But if you dig a little deeper, there are some interesting things to notice.

First of all, the definition of “digital health” is getting murkier and murkier.  Some sweep in things that others might consider life sciences or genomics. Others include things that may generally be considered health services, in that they are more people than technology. Rock Health excludes companies that are primarily health services, such as One Medical, or primarily insurance companies, such as Oscar, including only “health companies that build and sell technologies—sometimes paired with a service, but only when the technology is, in and of itself, the service.”  In contrast, Startup Health and Mercom Capital clearly have more expansive views, though I couldn’t find precise definitions.  My solution is this: stop using the term “digital health”.  Frankly, it’s all healthcare, and if I were in charge of the world I would use the following four categories and ditch the new school monikers: 1) drugs/therapeutics; 2) diagnostics, whether in vivo, in vitro, digital, or otherwise; 3) medical devices with and without sensors; and 4) everything else.  But I’m not in charge of the world and it isn’t looking likely anytime soon, so the number and nomenclature games continue.  My kingdom for a common ontology!

Another thing to notice: some deals are way more equal than others, to misquote a book almost everyone was forced to read in junior high.  Megadeals have come to digital health (whatever that is), defined as companies getting more than $100 million dropped on them in a single deal.  For instance, according to Mercom Capital, just five deals together accounted for approximately $936 million, or more than a third of the entire quarter’s funding (assuming you’re using the Mercom numbers).  If you use the Rock Health numbers, which include only three of the megadeals, we are talking $550 million for the best in class (bank account wise, anyway).  Among the various megadeal high fliers are Heartflow ($240 million raised), Helix ($200 million raised), SomaLogic ($200 million raised), PointClickCare ($186 million raised), and Collective Health ($110 million raised); three others raised $100 million each.

It used to be conventional wisdom that the reason healthcare IT deals were appealing, at least compared to medical devices and biotech, is because they needed far less capital to get to the promised land.  Well, property values in the digital health promised land have risen faster than those in downtown San Francisco, so conventional wisdom be damned.  These technology-focused enterprises are giving biotech deals a run for their money, literally.

But here’s what it really means: if you take out the top 10 deals in Rock Health’s count, which take up 55 percent of the first quarter’s capital, the remainder are averaging about $10.8 million in incoming capital per deal.  If you use the Mercom numbers, the average non-megadeal got $8.6 million. This is a far more “normalish” amount of capital for any type of Series A or Series B venture deal, so somewhere in the universe, there is the capacity to reason.

Another note on this topic: the gap between the haves and the have-nots is widening dramatically.  Mercom reports that the total number of deals for Q1 2018 was 187, which is the lowest total number of digital health deals in five quarters.  Rock Health claims there were 77 deals in the quarter; Startup Health, always the most enthusiastic, claims there were 191 deals in the digital health category.  I don’t know who has the “right” definition of digital health; what I do know is that either way, this is a lot of companies.
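
For what it’s worth, those per-deal averages follow directly from the totals quoted earlier in this piece (a quick back-of-envelope check, using only the figures already cited):

```python
# Non-megadeal averages implied by the Q1 2018 figures quoted above.
rock_total, rock_deals = 1.62e9, 77
rock_top10_share = 0.55                  # top 10 deals took 55% of the capital
rock_avg = rock_total * (1 - rock_top10_share) / (rock_deals - 10)

mercom_total, mercom_deals = 2.5e9, 187
mega_capital, mega_deals = 936e6, 5      # the five Mercom megadeals
mercom_avg = (mercom_total - mega_capital) / (mercom_deals - mega_deals)

print(f"Rock Health: ${rock_avg / 1e6:.1f}M per non-megadeal")  # ~$10.9M
print(f"Mercom: ${mercom_avg / 1e6:.1f}M per non-megadeal")     # ~$8.6M
```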

I think that the phenomenon of companies proliferating like bunnies on Valentine’s Day has another implication: too many damn companies.  Perhaps it’s only me, but it’s getting harder and harder to tell the difference between the myriad of entrants in a variety of categories.  Medication adherence deals seem to be proliferating faster than I can log them.   Companies claiming to improve consumer engagement, whatever that is, are outnumbering articles about AI, and that’s saying something.  Companies claiming to use AI to make everything better, whether it’s care delivery or drug development or French Fries are so numerous that it’s making me artificially stupider.  I think that this excess of entrepreneurship is actually bad for everyone in that it makes it much harder for any investor to pick the winner(s) and makes it nearly impossible for customers to figure out the best dance partner.  It’s a lot easier for customers to simply say no than to take a chance and pick the wrong flavor of the month.  It’s just become too darn easy to start a company.

And with respect to well-constructed clinical studies to demonstrate efficacy, nothing could be more important for companies trying to stand out from the crowd.  We keep seeing articles like this one, which talk about how digital health products often fail to deliver on the promise of better, faster, cheaper, or any part thereof. And there’s this one by a disappointed Dr. Eric Topol, a physician who has committed a significant amount of his professional life to the pursuit of high-quality digital health initiatives – a true believer, as it were, but one who has seen his share of disappointment when it comes to the claims of digital health products. I’m definitely of the belief that there are some seriously meaningful products out there that make a difference.  But there is so much chaff around the wheat that it’s hard to find the good stuff.

Digital health has become the world’s biggest Oreo with the world’s thinnest cream center.  But well-constructed, two-arm studies can make one Oreo stand out in a crowd of would-be Newman-Os. One way that investors and buyers are distinguishing the good from the not-so-much is by looking for those who have made the effort to get an FDA approval and who have made an investment in serious clinical trials to prove value.  Mercom Capital reports that there were over 100 FDA approvals of digital health products in 2017.  Considering that there were at least 345 digital health deals in 2017 (taking the low-end Rock Health numbers) and that only a fraction raised money in that year, it is interesting to think that a minority of companies are bothering to take the FDA route.

Now, this is a SWAG at best, but it feels about right to me.  I often hear digital health entrepreneurs talking about the lengths they are going to in order to avoid needing FDA approval and I almost always respond by saying that I disagree with the approach.  Yes, there are clearly companies that don’t ever need the FDA’s imprimatur (e.g., administrative products with no clinical component), but if you have a clinically-oriented product and hope to claim that it makes a difference, the FDA could be your best friend.  Having an FDA approval certainly conveys a sense of value and legitimacy to many in the buyer and investor community.

It will be interesting to see if the gravity-defying digital health activity will continue ad infinitum or whether the sector will come into contact with the third of Newton’s Laws. Investors are, by definition, in it for the money.  If you can’t exit you shouldn’t enter.  In 2017 there were exactly zero digital health IPOs.  This year there has, so far, been one IPO: Chinese fitness tracker and smartwatch maker, Huami, which raised $110 million and is now listed on the New York Stock Exchange, per Mercom Capital.  In 2017 there were about 119 exits via merger or acquisition, which was down from the prior year.  This year has started off with a faster M&A run rate (about 37 companies acquired in Q1 2018), but what we don’t know is whether the majority of these company exits will look more like Flatiron (woo hoo!) or Practice Fusion (yikes!).  Caveat Emptor: Buyer beware is all I have to say about that.

McKinsey – AI frontier: Analysis of more than 400 use cases across 19 industries and nine business functions

An analysis of more than 400 use cases across 19 industries and nine business functions highlights the broad use and significant economic potential of advanced AI techniques.

Artificial intelligence (AI) stands out as a transformational technology of our digital age—and its practical application throughout the economy is growing apace. For this briefing, Notes from the AI frontier: Insights from hundreds of use cases (PDF–446KB), we mapped both traditional analytics and newer “deep learning” techniques and the problems they can solve to more than 400 specific use cases in companies and organizations. Drawing on McKinsey Global Institute research and the applied experience with AI of McKinsey Analytics, we assess both the practical applications and the economic potential of advanced AI techniques across industries and business functions. Our findings highlight the substantial potential of applying deep learning techniques to use cases across the economy, but we also see some continuing limitations and obstacles—along with future opportunities as the technologies continue their advance. Ultimately, the value of AI is not to be found in the models themselves, but in companies’ abilities to harness them.

It is important to highlight that, even as we see economic potential in the use of AI techniques, the use of data must always take into account concerns including data security, privacy, and potential issues of bias.

  1. Mapping AI techniques to problem types
  2. Insights from use cases
  3. Sizing the potential value of AI
  4. The road to impact and value


Section 1

Mapping AI techniques to problem types

As artificial intelligence technologies advance, so does the definition of which techniques constitute AI. For the purposes of this briefing, we use AI as shorthand for deep learning techniques that use artificial neural networks. We also examined other machine learning techniques and traditional analytics techniques (Exhibit 1).

AI analytics techniques

Neural networks are a subset of machine learning techniques. Essentially, they are AI systems based on simulating connected “neural units,” loosely modeling the way that neurons interact in the brain. Computational models inspired by neural connections have been studied since the 1940s and have returned to prominence as computer processing power has increased and large training data sets have been used to successfully analyze input data such as images, video, and speech. AI practitioners refer to these techniques as “deep learning,” since neural networks have many (“deep”) layers of simulated interconnected neurons.

We analyzed the applications and value of three neural network techniques:

  • Feedforward neural networks: The simplest type of artificial neural network. In this architecture, information moves in only one direction, forward, from the input layer, through the “hidden” layers, to the output layer. There are no loops in the network. The first single-neuron network was proposed as early as 1958 by AI pioneer Frank Rosenblatt. While the idea is not new, advances in computing power, training algorithms, and available data have led to higher levels of performance than previously possible.
  • Recurrent neural networks (RNNs): Artificial neural networks whose connections between neurons include loops, well-suited for processing sequences of inputs. In November 2016, Oxford University researchers reported that a system based on recurrent neural networks (and convolutional neural networks) had achieved 95 percent accuracy in reading lips, outperforming experienced human lip readers, who tested at 52 percent accuracy.
  • Convolutional neural networks (CNNs): Artificial neural networks in which the connections between neural layers are inspired by the organization of the animal visual cortex, the portion of the brain that processes images, well suited for perceptual tasks.
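
The one-way flow that defines a feedforward network can be sketched in a few lines (a toy, untrained network using numpy only; the layer sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """A common nonlinearity applied at each hidden unit."""
    return np.maximum(0, x)

# One hidden layer: input (4 features) -> hidden (8 units) -> output (1 unit).
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(x):
    """Information moves strictly forward; there are no loops."""
    h = relu(x @ W1 + b1)       # input layer -> hidden layer
    return h @ W2 + b2          # hidden layer -> output layer

x = rng.normal(size=(3, 4))     # a batch of 3 examples
print(forward(x).shape)         # (3, 1)
```

Training would adjust `W1`, `b1`, `W2`, `b2` by backpropagation; the sketch only illustrates the architecture.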

For our use cases, we also considered two other techniques—generative adversarial networks (GANs) and reinforcement learning—but did not include them in our potential value assessment of AI, since they remain nascent techniques that are not yet widely applied.

Generative adversarial networks (GANs) use two neural networks pitted against each other in a zero-sum game framework (thus “adversarial”). GANs can learn to mimic various distributions of data (for example, text, speech, and images) and are therefore valuable in generating test datasets when these are not readily available.

Reinforcement learning is a subfield of machine learning in which systems are trained by receiving virtual “rewards” or “punishments”, essentially learning by trial and error. Google DeepMind has used reinforcement learning to develop systems that can play games, including video games and board games such as Go, better than human champions.
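
The reward-driven, trial-and-error loop can be illustrated with tabular Q-learning on a toy corridor task (a minimal sketch of the general idea, not the deep reinforcement learning DeepMind uses):

```python
import numpy as np

# Toy corridor: states 0..4; reaching state 4 yields reward 1, all else 0.
# Actions: 0 = left, 1 = right. The agent learns purely by trial and error.
n_states, n_actions, goal = 5, 2, 4
Q = np.zeros((n_states, n_actions))      # estimated value of each (state, action)
alpha, gamma, eps = 0.5, 0.9, 0.2        # learning rate, discount, exploration
rng = np.random.default_rng(0)

for _ in range(500):                     # episodes
    s = 0
    while s != goal:
        # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s2 = max(0, s - 1) if a == 0 else min(goal, s + 1)
        r = 1.0 if s2 == goal else 0.0   # the "reward" signal
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

# After training, the greedy policy moves right toward the goal from every state.
print([int(Q[s].argmax()) for s in range(goal)])  # [1, 1, 1, 1]
```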


Section 2

Insights from use cases

We collated and analyzed more than 400 use cases across 19 industries and nine business functions. They provided insight into the areas within specific sectors where deep neural networks can potentially create the most value, the incremental lift that these neural networks can generate compared with traditional analytics (Exhibit 2), and the voracious data requirements—in terms of volume, variety, and velocity—that must be met for this potential to be realized. Our library of use cases, while extensive, is not exhaustive, and may overstate or understate the potential for certain sectors. We will continue refining and adding to it.

Advanced deep learning AI techniques can be applied across industries

Examples of where AI can be used to improve the performance of existing use cases include:

  • Predictive maintenance: the power of machine learning to detect anomalies. Deep learning’s capacity to analyze very large amounts of high-dimensional data can take existing preventive maintenance systems to a new level. By layering in additional data, such as audio and image data, from other sensors—including relatively cheap ones such as microphones and cameras—neural networks can enhance and possibly replace more traditional methods. AI’s ability to predict failures and allow planned interventions can be used to reduce downtime and operating costs while improving production yield. For example, AI can extend the life of a cargo plane beyond what is possible using traditional analytic techniques by combining plane model data, maintenance history, IoT sensor data such as anomaly detection on engine vibration data, and images and video of engine condition.
  • AI-driven logistics optimization can reduce costs through real-time forecasts and behavioral coaching. Application of AI techniques such as continuous estimation to logistics can add substantial value across sectors. AI can optimize routing of delivery traffic, thereby improving fuel efficiency and reducing delivery times. One European trucking company has reduced fuel costs by 15 percent, for example, by using sensors that monitor both vehicle performance and driver behavior; drivers receive real-time coaching, including when to speed up or slow down, optimizing fuel consumption and reducing maintenance costs.
  • AI can be a valuable tool for customer service management and personalization challenges. Improved speech recognition in call center management and call routing as a result of the application of AI techniques allow a more seamless experience for customers—and more efficient processing. The capabilities go beyond words alone. For example, deep learning analysis of audio allows systems to assess a customer’s emotional tone; in the event a customer is responding badly to the system, the call can be rerouted automatically to human operators and managers. In other areas of marketing and sales, AI techniques can also have a significant impact. Combining customer demographic and past transaction data with social media monitoring can help generate individualized product recommendations. “Next product to buy” recommendations that target individual customers—as companies such as Amazon and Netflix have successfully been doing—can lead to a twofold increase in the rate of sales conversions.
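
As a concrete, deliberately simplified stand-in for the predictive maintenance idea above, the sketch below flags anomalies in a synthetic vibration stream using a rolling z-score (in practice a neural network would replace this simple statistic; all readings here are simulated):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic engine-vibration readings: steady noise with a fault injected at t=80.
readings = rng.normal(loc=1.0, scale=0.1, size=100)
readings[80:] += 1.5                       # simulated bearing fault

def flag_anomalies(x, window=30, z_thresh=4.0):
    """Flag readings far from the mean of the preceding window (z-score test)."""
    flags = []
    for t in range(window, len(x)):
        baseline = x[t - window:t]
        z = (x[t] - baseline.mean()) / baseline.std()
        if abs(z) > z_thresh:
            flags.append(t)
    return flags

alarms = flag_anomalies(readings)
print("first alarm at t =", alarms[0])     # fires once the fault appears
```

A planned intervention could then be scheduled at the first alarm instead of waiting for the component to fail.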

Two-thirds of the opportunities to use AI are in improving the performance of existing analytics use cases

In 69 percent of the use cases we studied, deep neural networks can be used to improve performance beyond that provided by other analytic techniques. Cases in which only neural networks can be used, which we refer to here as “greenfield” cases, constituted just 16 percent of the total. For the remaining 15 percent, artificial neural networks provided limited additional performance over other analytics techniques, among other reasons because of data limitations that made these cases unsuitable for deep learning (Exhibit 3).

AI improves the performance of existing analytics techniques

Greenfield AI solutions are prevalent in business areas such as customer service management, as well as among some industries where the data are rich and voluminous and at times integrate human reactions. Among industries, we found many greenfield use cases in healthcare, in particular. Some of these cases involve disease diagnosis and improved care, and rely on rich data sets incorporating image and video inputs, including from MRIs.

On average, our use cases suggest that modern deep learning AI techniques have the potential to provide a boost in additional value above and beyond traditional analytics techniques ranging from 30 percent to 128 percent, depending on industry.

In many of our use cases, however, traditional analytics and machine learning techniques continue to underpin a large percentage of the value creation potential in industries including insurance, pharmaceuticals and medical products, and telecommunications, with the potential of AI limited in certain contexts. In part this is due to the way data are used by these industries and to regulatory issues.

Data requirements for deep learning are substantially greater than for other analytics

Making effective use of neural networks in most applications requires large labeled training data sets alongside access to sufficient computing infrastructure. Furthermore, these deep learning techniques are particularly powerful in extracting patterns from complex, multidimensional data types such as images, video, and audio or speech.

Deep-learning methods require thousands of data records for models to become relatively good at classification tasks and, in some cases, millions for them to perform at the level of humans. By one estimate, a supervised deep-learning algorithm will generally achieve acceptable performance with around 5,000 labeled examples per category and will match or exceed human-level performance when trained with a data set containing at least 10 million labeled examples. In some cases where advanced analytics is currently used, so much data are available—millions or even billions of rows per data set—that AI usage is the most appropriate technique. However, if a threshold of data volume is not reached, AI may not add value to traditional analytics techniques.

These massive data sets can be difficult to obtain or create for many business use cases, and labeling remains a challenge. Most current AI models are trained through “supervised learning”, which requires humans to label and categorize the underlying data. However, promising new techniques are emerging to overcome these data bottlenecks, such as reinforcement learning, generative adversarial networks, transfer learning, and “one-shot learning,” which allows a trained AI model to learn about a subject based on a small number of real-world demonstrations or examples—and sometimes just one.

Organizations will have to adopt and implement strategies that enable them to collect and integrate data at scale. Even with large datasets, they will have to guard against “overfitting,” where a model too tightly matches the “noisy” or random features of the training set, resulting in a corresponding lack of accuracy in future performance, and against “underfitting,” where the model fails to capture all of the relevant features. Linking data across customer segments and channels, rather than allowing the data to languish in silos, is especially important to create value.
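
The overfitting and underfitting failure modes described above can be demonstrated by fitting polynomials of different complexity to noisy data and comparing training error with error on held-out data (numpy only; the data-generating process is an assumption chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# True signal is quadratic; we observe it with noise.
def make_data(n):
    x = rng.uniform(-1, 1, n)
    y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(scale=0.1, size=n)
    return x, y

x_train, y_train = make_data(40)
x_val, y_val = make_data(40)          # held-out data for validation

def mse(degree):
    """Fit a polynomial on the training set; report train and validation MSE."""
    coeffs = np.polyfit(x_train, y_train, degree)
    err = lambda x, y: np.mean((np.polyval(coeffs, x) - y) ** 2)
    return err(x_train, y_train), err(x_val, y_val)

for degree in (1, 2, 15):
    train_err, val_err = mse(degree)
    print(f"degree {degree:2d}: train {train_err:.4f}  validation {val_err:.4f}")
# Degree 1 underfits (both errors stay high); degree 15 drives training error
# toward zero while validation error stops improving, the signature of overfitting.
```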

Realizing AI’s full potential requires a diverse range of data types including images, video, and audio

Neural AI techniques excel at analyzing image, video, and audio data types because of their complex, multidimensional nature, known by practitioners as “high dimensionality.” Neural networks are good at dealing with high dimensionality, as multiple layers in a network can learn to represent the many different features present in the data. Thus, for facial recognition, the first layer in the network could focus on raw pixels, the next on edges and lines, another on generic facial features, and the final layer might identify the face. Unlike previous generations of AI, which often required human expertise to do “feature engineering,” these neural network techniques are often able to learn to represent these features in their simulated neural networks as part of the training process.

Along with issues around the volume and variety of data, velocity is also a requirement: AI techniques require models to be retrained to match potential changing conditions, so the training data must be refreshed frequently. In one-third of the cases, the model needs to be refreshed at least monthly, and almost one in four cases requires a daily refresh; this is especially the case in marketing and sales and in supply chain management and manufacturing.


Section 3

Sizing the potential value of AI

We estimate that the AI techniques we cite in this briefing together have the potential to create between $3.5 trillion and $5.8 trillion in value annually across nine business functions in 19 industries. This constitutes about 40 percent of the overall $9.5 trillion to $15.4 trillion annual impact that could potentially be enabled by all analytical techniques (Exhibit 4).
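
The “about 40 percent” share follows from the two ranges just given:

```python
# AI value as a share of the total potential from all analytics techniques.
ai_low, ai_high = 3.5, 5.8        # $ trillion per year, AI techniques
all_low, all_high = 9.5, 15.4     # $ trillion per year, all analytics
print(f"{ai_low / all_low:.0%} to {ai_high / all_high:.0%}")  # 37% to 38%
```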

AI has the potential to create value across sectors

Per industry, we estimate that AI’s potential value amounts to between one and nine percent of 2016 revenue. The value as measured by percentage of industry revenue varies significantly among industries, depending on the specific applicable use cases, the availability of abundant and complex data, as well as on regulatory and other constraints.

These figures are not forecasts for a particular period, but they are indicative of the considerable potential for the global economy that advanced analytics represents.

From the use cases we have examined, we find that the greatest potential value impact from using AI lies both in top-line-oriented functions, such as marketing and sales, and in bottom-line-oriented operational functions, including supply chain management and manufacturing.

Consumer industries such as retail and high tech will tend to see more potential from marketing and sales AI applications because frequent and digital interactions between business and customers generate larger data sets for AI techniques to tap into. E-commerce platforms, in particular, stand to benefit. This is because of the ease with which these platforms collect customer information such as click data or time spent on a web page and can then customize promotions, prices, and products for each customer dynamically and in real time.

AI's impact is likely to be most substantial in marketing and sales, supply-chain management, and manufacturing

Here is a snapshot of three sectors where we have seen AI’s impact: (Exhibit 5)

  • In retail, marketing and sales is the area with the most significant potential value from AI, and within that function, pricing and promotion and customer service management are the main value areas. Our use cases show that using customer data to personalize promotions, for example, including tailoring individual offers every day, can lead to a one to two percent increase in incremental sales for brick-and-mortar retailers alone.
  • In consumer goods, supply-chain management is the key function that could benefit from AI deployment. Among the examples in our use cases, we see how forecasting based on underlying causal drivers of demand rather than prior outcomes can improve forecasting accuracy by 10 to 20 percent, which translates into a potential five percent reduction in inventory costs and revenue increases of two to three percent.
  • In banking, particularly retail banking, AI has significant value potential in marketing and sales, much as it does in retail. However, because of the importance of assessing and managing risk in banking, for example for loan underwriting and fraud detection, AI has much higher value potential to improve performance in risk in the banking sector than in many other industries.


Section 4

The road to impact and value

Artificial intelligence is attracting growing amounts of corporate investment, and as the technologies develop, the potential value that can be unlocked is likely to grow. So far, however, only about 20 percent of AI-aware companies are currently using one or more of its technologies in a core business process or at scale.

For all their promise, AI technologies have plenty of limitations that will need to be overcome. They include the onerous data requirements listed above, but also five other limitations:

  • First is the challenge of labeling training data, which often must be done manually and is necessary for supervised learning. Promising new techniques are emerging to address this challenge, such as reinforcement learning and in-stream supervision, in which data can be labeled in the course of natural usage.
  • Second is the difficulty of obtaining data sets that are sufficiently large and comprehensive to be used for training; for many business use cases, creating or obtaining such massive data sets can be difficult—for example, limited clinical-trial data to predict healthcare treatment outcomes more accurately.
  • Third is the difficulty of explaining in human terms results from large and complex models: why was a certain decision reached? Product certifications in healthcare and in the automotive and aerospace industries, for example, can be an obstacle; among other constraints, regulators often want rules and choice criteria to be clearly explainable.
  • Fourth is the generalizability of learning: AI models continue to have difficulties in carrying their experiences from one set of circumstances to another. That means companies must commit resources to train new models even for use cases that are similar to previous ones. Transfer learning—in which an AI model is trained to accomplish a certain task and then quickly applies that learning to a similar but distinct activity—is one promising response to this challenge.
  • The fifth limitation concerns the risk of bias in data and algorithms. This issue touches on concerns that are more social in nature and which could require broader steps to resolve, such as understanding how the processes used to collect training data can influence the behavior of models they are used to train. For example, unintended biases can be introduced when training data is not representative of the larger population to which an AI model is applied. Thus, facial recognition models trained on a population of faces corresponding to the demographics of AI developers could struggle when applied to populations with more diverse characteristics. A recent report on the malicious use of AI highlights a range of security threats, from sophisticated automation of hacking to hyper-personalized political disinformation campaigns.

Organizational challenges around technology, processes, and people can slow or impede AI adoption

Organizations planning to adopt significant deep learning efforts will need to consider a spectrum of options about how to do so. The range of options includes building a complete in-house AI capability, outsourcing these capabilities, or leveraging AI-as-a-service offerings.

Based on the use cases they plan to build, companies will need to create a data plan that produces results and predictions, which can be fed either into designed interfaces for humans to act on or into transaction systems. Key data engineering challenges include data creation or acquisition, defining data ontology, and building appropriate data “pipes.” Given the significant computational requirements of deep learning, some organizations will maintain their own data centers, because of regulations or security concerns, but the capital expenditures could be considerable, particularly when using specialized hardware. Cloud vendors offer another option.

Process can also become an impediment to successful adoption unless organizations are digitally mature. On the technical side, organizations will have to develop robust data maintenance and governance processes, and implement modern software disciplines such as Agile and DevOps. Even more challenging, in terms of scale, is overcoming the “last mile” problem of making sure the superior insights provided by AI are instantiated in the behavior of the people and processes of an enterprise.

On the people front, much of the construction and optimization of deep neural networks remains something of an art requiring real experts to deliver step-change performance increases. Demand for these skills far outstrips supply at present; according to some estimates, fewer than 10,000 people have the skills necessary to tackle serious AI problems, and competition for them is fierce among the tech giants.

AI can seem an elusive business case

Where AI techniques and data are available and the value is clearly proven, organizations can already pursue the opportunity. In some areas, the techniques today may be mature and the data available, but the cost and complexity of deploying AI may simply not be worthwhile, given the value that could be generated. For example, an airline could use facial recognition and other biometric scanning technology to streamline aircraft boarding, but the value of doing so may not justify the cost and issues around privacy and personal identification.

Similarly, we can see potential cases where the data and the techniques are maturing, but the value is not yet clear. The most unpredictable scenario is where either the data (both the types and volume) or the techniques are simply too new and untested to know how much value they could unlock. For example, in healthcare, if AI were able to build on the superhuman precision we are already starting to see with X-ray analysis and broaden that to more accurate diagnoses and even automated medical procedures, the economic value could be very significant. At the same time, the complexities and costs of arriving at this frontier are also daunting. Among other issues, it would require flawless technical execution and resolving issues of malpractice insurance and other legal concerns.

Societal concerns and regulations can also constrain AI use. Regulatory constraints are especially prevalent in use cases related to personally identifiable information. This is particularly relevant at a time of growing public debate about the use and commercialization of individual data on some online platforms. Use and storage of personal information is especially sensitive in sectors such as banking, health care, and pharmaceutical and medical products, as well as in the public and social sector. In addition to addressing these issues, businesses and other users of data for AI will need to continue to evolve business models related to data use in order to address societies’ concerns. Furthermore, regulatory requirements and restrictions can differ from country to country, as well as from sector to sector.

Implications for stakeholders

As we have seen, it is a company’s ability to execute against AI models that creates value, rather than the models themselves. In this final section, we sketch out some of the high-level implications of our study of AI use cases for providers of AI technology, appliers of AI technology, and policy makers, who set the context for both.

  • For AI technology provider companies: Many companies that develop or provide AI to others have considerable strength in the technology itself and the data scientists needed to make it work, but they can lack a deep understanding of end markets. Understanding the value potential of AI across sectors and functions can help shape the portfolios of these AI technology companies. That said, they shouldn’t necessarily only prioritize the areas of highest potential value. Instead, they can combine that data with complementary analyses of the competitor landscape, of their own existing strengths, sector or function knowledge, and customer relationships, to shape their investment portfolios. On the technical side, the mapping of problem types and techniques to sectors and functions of potential value can guide a company with specific areas of expertise on where to focus.
  • Many companies seeking to adopt AI in their operations have started machine learning and AI experiments across their business. Before launching more pilots or testing solutions, it is useful to step back and take a holistic approach to the issue, moving to create a prioritized portfolio of initiatives across the enterprise, including AI and the wider analytic and digital techniques available. For a business leader to create an appropriate portfolio, it is important to develop an understanding of which use cases and domains have the potential to drive the most value for a company, as well as which AI and other analytical techniques will need to be deployed to capture that value. This portfolio ought to be informed not only by where the theoretical value can be captured, but by the question of how the techniques can be deployed at scale across the enterprise. How well analytical techniques scale is driven less by the techniques themselves and more by a company’s skills, capabilities, and data. Companies will need to consider efforts on the “first mile,” that is, how to acquire and organize data, as well as on the “last mile,” or how to integrate the output of AI models into workflows ranging from clinical trial managers and sales force managers to procurement officers. Previous MGI research suggests that AI leaders invest heavily in these first- and last-mile efforts.
  • Policy makers will need to strike a balance between supporting the development of AI technologies and managing any risks from bad actors. They have an interest in supporting broad adoption, since AI can lead to higher labor productivity, economic growth, and societal prosperity. Their tools include public investments in research and development as well as support for a variety of training programs, which can help nurture AI talent. On the issue of data, governments can spur the development of training data directly through open data initiatives. Opening up public-sector data can spur private-sector innovation. Setting common data standards can also help. AI is also raising new questions for policy makers to grapple with for which historical tools and frameworks may not be adequate. Therefore, some policy innovations will likely be needed to cope with these rapidly evolving technologies. But given the scale of the beneficial impact on business, the economy, and society, the goal should not be to constrain the adoption and application of AI, but rather to encourage its beneficial and safe use.

Glossy : Blockchain, Internet of Things and AI: What the newest luxury startup accelerators are investing in

Blockchain, artificial intelligence, and sustainability and raw materials are the top new technology priorities at a batch of recently formed fashion accelerators.

LVMH announced earlier this month that it will be launching an annual startup accelerator at Station F, a business incubator in Paris, and will choose 50 startups to join two six-month programs. On Friday, Farfetch announced the launch of Dream Assembly, its first in-house accelerator in partnership with Burberry and 500 Startups, a venture capital firm. It’s currently taking applications from companies and will choose up to 10 to participate in a 12-week program. Kering, a founding partner of the accelerator Plug and Play, started its second 12-week season with 15 new startups in March.

Helping to foster new industry startups is a way for retailers and conglomerates to lend their support and cash to companies they think will prove lucrative and beneficial down the line, as well as demonstrate an interest in the broad umbrella of innovation. Retail and accelerator partnerships have popped up throughout the industry: Target’s Techstars Retail accelerator will start its third program this summer; the R/GA Accelerator has partnered with brands and retailers like Nike and Westfield; beauty retailers like Sephora and L’Oréal have launched in-house accelerators to harbor relevant technology companies within their own walls. Early successes within the world of luxury accelerators include Orchard Mile, incubated by an early version of LVMH’s accelerator, and The Modist, which has received funding from Farfetch.

“It’s very important for new companies to continue to form luxury in all shapes and sizes,” said Morty Singer, CEO of business development firm Marvin Traub Associates. “LVMH’s service explores the technology and innovation out there, where they may not be ready to invest in-house, and you can really see where the industry is headed by looking at the companies that they’re supporting.”

In luxury, startups working on supply-chain logistics technology, fabric sourcing and sustainable materials, personalization and customization through artificial intelligence, e-commerce solutions and mobile capabilities are all present across the accelerators, but blockchain and the Internet of Things make up one of the newest trends to emerge among startup initiatives. Blockchain is one of the seven “main ideas” at LVMH’s new accelerator, titled La Maison des Startups; a blockchain-specific accelerator launched in February; and last year, Kering’s sustainability-centric accelerator included blockchain startups like A Transparent Company, as the blockchain can be used to verify where materials are sourced from and where items are produced. It’s also playing a role in anti-counterfeiting efforts in online luxury.

“This isn’t something that a luxury fashion house is born knowing how to do. Brands would have to build out new teams to deliver new products and new content for connected consumers, and be continually investing in new content and experiences,” said Lauren Nutt Bello, the managing partner at digital agency Ready Set Rocket. “Already, there’s an overloading amount of data that brands don’t know how to make sense of, and some luxury companies are brands that don’t even have full e-commerce stores. They need outside help.”

The goal is to scout technology startups that could be absorbed at the brand level. While other fashion accelerators like the Fashion Tech Accelerator and XRC Labs offer potential retail partnerships as a perk of the program, retail-founded accelerators build brand partnerships in as a guarantee. At LVMH’s La Maison des Startups, the companies invited to participate will be working on projects with individual fashion houses owned by the conglomerate to incorporate new technologies and practices at the brand level. Burberry is the first brand partner of Farfetch’s Dream Assembly, and new brands can get involved down the line. Kering brands all have the option of participating in the Plug and Play accelerator, and the company’s sustainability initiative is an open-source resource for all of its owned brands. None of the accelerators promise investments to the startups, and terms for potential investments aren’t disclosed.

But building up a startup in such close proximity to brands — in some cases, legacy brands — can invite competing agendas. Incubators like XRC Labs offer a more mixed variety of insight from experts, mentors and judges from across the industry, while in-house retail accelerators can have a one-track mind focused on their own goals and reduce the potential for investment to a limited group. Partnerships with unbiased players in the space, like Station F and 500 Startups, can help. The big trade-off, of course, is access.

“When I started my company, I needed exposure to boutiques and to brands; I needed consumers I could learn from; I needed technology support and mentoring,” said Jose Neves, the founder and CEO of Farfetch.

Teads – Real-life AWS infrastructure cost optimization strategy

The cloud computing opportunity and its traps

One of the advantages of cloud computing is its ability to fit the infrastructure to your needs: you only pay for what you really use. That is how most hyper-growth startups have managed their incredible ascents.

Most companies migrating to the cloud embrace the “lift & shift” strategy, replicating what was once on premises.

You most likely won’t save a penny with this first step.

The main reasons are:

  • Your applications do not support elasticity yet,
  • Your applications rely on complex backends that you need to migrate along with them (RabbitMQ, Cassandra, Galera clusters, etc.),
  • Your code relies on being executed in a known network environment and most likely uses NFS as a distributed storage mechanism.

Once in the cloud, you need to “cloudify” your infrastructure.

Then, and only then, will you have access to virtually infinite computing power and storage.

Watch out, this apparent freedom can lead to very serious drifts: over-provisioning, under-optimizing your code, or even forgetting to “turn off the lights” by letting that small PoC run longer than necessary on that very nice r3.8xlarge instance.

Essentially, you have just replaced your need for capacity planning with a need for cost monitoring and optimization.

The dark side of cloud computing

At Teads we were “born in the cloud” and we are very happy about it.

One of our biggest pains today with our cloud providers is the complexity of their pricing.

It is designed to look very simple at first glance (usually based on simple metrics like $/GB/month or $/hour or, more recently, $/second), but as you expand and go into a multi-region infrastructure mixing lots of products, you will have a hard time tracking the ever-growing cost of your cloud infrastructure.

For example, the cost of putting a file on S3 and serving it from there includes four different lines of billing:

  • Actual storage cost (80% of your bill)
  • Cost of the HTTP PUT request (2% of your bill)
  • Cost of the many HTTP GET requests (3% of your bill)
  • Cost of the data transfer (15% of your bill)
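As a back-of-the-envelope check, those four billing lines can be reproduced from a handful of unit prices. The figures below are assumptions for illustration only (prices vary by region and over time); check the current AWS price list before relying on them.

```python
# Hypothetical unit prices, for illustration only -- check the current
# AWS price list for your region before relying on these numbers.
PRICES = {
    "storage_gb_month": 0.023,   # $/GB-month, S3 Standard (assumed)
    "put_per_1k": 0.005,         # $/1,000 PUT requests (assumed)
    "get_per_1k": 0.0004,        # $/1,000 GET requests (assumed)
    "transfer_out_gb": 0.09,     # $/GB served to the Internet (assumed)
}

def s3_monthly_cost(storage_gb, put_requests, get_requests, transfer_gb):
    """Break a monthly S3 bill into the four lines described above."""
    lines = {
        "storage": storage_gb * PRICES["storage_gb_month"],
        "put": put_requests / 1000 * PRICES["put_per_1k"],
        "get": get_requests / 1000 * PRICES["get_per_1k"],
        "transfer": transfer_gb * PRICES["transfer_out_gb"],
    }
    total = sum(lines.values())
    # share of each line in the total bill, in percent
    shares = {k: round(100 * v / total, 1) for k, v in lines.items()}
    return lines, total, shares
```

With 1 TB stored, a million GETs and 50 GB served, storage indeed dominates the bill, in line with the proportions above.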

Our take on Cost Optimization

  • Focus on structural costs – Never block a short-term cost increase that would speed up the business or enable a technical migration.
  • Everyone is responsible – Provide tooling to each team to make them autonomous on their cost optimization.

The limit of cost optimization, for us, is when it drives more complexity in the code and less agility in the future, for a limited ROI.
This way of thinking also helps us tackle cost optimization in our day-to-day developments.

Overall we can extend this famous quote from Kent Beck:

“Make it work, make it right, make it fast” … and then cost efficient.

Billing Hygiene

It is of the utmost importance to keep strict billing hygiene and know your daily spend.

In some cases, it will help you identify suspicious uptrends, like a service stuck in a loop writing a huge volume of logs to S3, or a developer who left their test infrastructure up and running over a weekend.

You need to arm yourself with detailed monitoring of your costs and spend time looking at it every day.

You have several options to do so, starting with AWS’s own tools:

  • Billing Dashboard, giving a high-level view of your main costs (Amazon S3, Amazon EC2, etc.) and a forecast that is rarely accurate, at least for us. Overall, it’s not detailed enough to be of use for serious monitoring.
  • Detailed Billing Report, this feature has to be enabled in your account preferences. It sends you a daily gzipped .csv file containing one line per billable item since the beginning of the month (e.g., instance A sent X Mb of data on the Internet).
    The detailed billing is an interesting source of data once you have added custom tags to your services so that you can group your costs by feature / application / part of your infrastructure.
    Be aware that this file is accurate only after a delay of approximately two days, as it takes time for AWS to compute it.
  • Trusted Advisor, available at the business and enterprise support level, also includes a cost section with interesting optimization insights.
Trusted Advisor cost section – Courtesy of AWS
  • Cost Explorer, an interesting tool since its update in August 2017. It can be used to quickly identify trends but it is still limited as you cannot build complete dashboards with it. It is mainly a reporting tool.
Example of a Cost Explorer report — AWS documentation
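To give a flavor of what working with the Detailed Billing Report looks like, here is a minimal sketch that groups costs by a custom cost-allocation tag. The sample rows and the `user:team` tag are assumptions for illustration; real billing files contain many more columns.

```python
import csv
import io
from collections import defaultdict

# A tiny, made-up extract in the spirit of AWS's detailed billing CSV.
# "user:team" stands for a custom cost allocation tag we assume was
# enabled on the account.
SAMPLE = """ProductName,user:team,UnBlendedCost
Amazon Elastic Compute Cloud,ad-server,120.42
Amazon Simple Storage Service,ad-server,14.10
Amazon Elastic Compute Cloud,analytics,300.00
"""

def cost_by_tag(report, tag="user:team"):
    """Sum unblended costs per value of a custom tag."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(report)):
        totals[row[tag]] += float(row["UnBlendedCost"])
    return dict(totals)
```

Once costs are grouped this way, it becomes easy to chart spend per feature or per part of the infrastructure.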

Then you have several other external options to monitor the costs of your infrastructure:

  • SaaS products like Cloudyn / Cloudhealth. These solutions are really well made and will tell you how to optimize your infrastructure. Their pricing model is based on a percentage of your annual AWS bill, not on the savings the tools help you make, which was a showstopper for us.
  • The open source project Ice, initially developed by Netflix for their own use. Recently, the leadership of this project was transferred to the French startup Teevity, which also offers a SaaS version for a fixed fee. This could be a great option as it also handles GCP and Azure.

Building our own monitoring solution

At Teads we decided to go DIY using the detailed billings files.

We built a small Lambda function that ingests the detailed billing file into Redshift every day. This tool helps us slice and dice our data along numerous dimensions to dive deeper into our costs. We also use it to spot suspicious usage uptrends, down to the service level.
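The uptrend detection we run on top of that data can be reduced to something like this; the threshold and data shapes are illustrative, not our production code.

```python
def flag_uptrends(daily_costs, threshold=0.5):
    """Flag services whose latest daily cost jumped by more than
    `threshold` (50% by default) over the previous day -- the kind
    of signal that catches a service stuck in a loop writing logs.

    `daily_costs` maps a service name to its list of daily costs,
    oldest first.
    """
    alerts = []
    for service, costs in daily_costs.items():
        if len(costs) >= 2 and costs[-2] > 0:
            growth = (costs[-1] - costs[-2]) / costs[-2]
            if growth > threshold:
                alerts.append((service, round(growth, 2)))
    return alerts
```

In practice the input would come from the Redshift table fed by the Lambda function, not from an in-memory dict.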

Example of our daily dashboard; each color corresponds to a service we tagged.
Zoomed in on a specific service, we can quickly figure out what is expensive.

On top of that, we still use a spreadsheet to integrate the reservation upfronts in order to get a complete overview and the full daily costs.

Now that we have the data, how to optimize?

Here are the 5 pillars of our cost optimization strategy.

1 – Reserved Instances (RIs)

First things first, you need to reserve your instances. Technically speaking, RIs will only make sure that you have access to the reserved resources.

At Teads, our reservation strategy is based on twice-yearly reservation batches, and we are also evaluating higher frequencies (3 to 4 batches per year).

The right frequency should be determined by the best compromise between flexibility (handling growth, having leaner financial streams) and the ability to manage the reservations efficiently.
In the end, managing reservations is a time-consuming task.

Reservation is mostly a financial tool: you commit to paying for resources for 1 or 3 years and get a discount on the on-demand price:

  • You have two types of reservations, standard or convertible. Convertible lets you change the instance family but comes with a smaller discount than standard (on average, 75% for standard vs. 54% for convertible). Convertibles are the best option to leverage future instance families in the long run.
  • Reservations come with three different payment options: Full Upfront, Partial Upfront, and No Upfront. With partial and no upfront, you pay the remaining balance monthly over the term. We prefer partial upfront since the discount rate is really close to the full upfront one (e.g., 56% vs. 55% for a 3-year convertible term with partial upfront).
  • Don’t forget that you can reserve a lot of things and not only Amazon EC2 instances: Amazon RDS, Amazon Elasticache, Amazon Redshift, Amazon DynamoDB, etc.
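The financial mechanics of a partial-upfront reservation can be sketched with a rough cost model. This is not AWS's actual pricing formula: the discount and the upfront share are parameters you would take from the EC2 reservation pricing pages for your instance family.

```python
def reservation_cost(on_demand_hourly, discount, term_years=3,
                     upfront_share=0.5):
    """Rough cost model for a partial-upfront reservation.

    `discount` is the overall rebate vs on-demand (e.g. 0.55 for the
    3-year convertible partial-upfront figure quoted above); the 50%
    upfront share is an assumption for illustration.
    Returns (upfront payment, monthly payment, total over the term).
    """
    hours = term_years * 365 * 24
    on_demand_total = on_demand_hourly * hours
    reserved_total = on_demand_total * (1 - discount)
    upfront = reserved_total * upfront_share
    monthly = (reserved_total - upfront) / (term_years * 12)
    return round(upfront, 2), round(monthly, 2), round(reserved_total, 2)
```

Running it for a $1.00/h instance at a 55% discount makes the trade-off concrete: a large upfront payment in exchange for a much lower total over the term.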

2 – Optimize Amazon S3

The second source of optimization is the object management on S3. Storage is cheap and infinite, but that is not a valid reason to keep all your data there forever. Many companies do not clean up their data on S3, even though several trivial mechanisms are available:

The Object Lifecycle option lets you set simple rules for objects in a bucket:

  • Infrequent Access Storage (IAS): for application logs, set the object storage class to Infrequent Access after a few days.
    IAS cuts the storage cost in half but comes with a higher cost for requests.
    The main drawback of IAS is its 128 KB minimum billable object size, so if you store a lot of smaller objects it will end up more expensive than standard storage.
  • Glacier: Amazon Glacier is a very long-term archiving service, also called cold storage.
    Here is a nice article from Cloudability if you want to dig deeper into optimizing storage costs and compare the different options.

Also, don’t forget to set up a delete policy when you think you won’t need those files anymore.
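Put together, the rules above translate into a single lifecycle configuration. The sketch below is an assumption-laden example (bucket name, prefix and day counts are placeholders to tune to your own retention needs), not a recommended policy.

```python
# Sketch of an S3 lifecycle policy matching the rules above: move
# application logs to Infrequent Access after 30 days, to Glacier
# after 90, and delete them after a year. The prefix and day counts
# are assumptions -- adjust them to your own retention needs.
LIFECYCLE = {
    "Rules": [
        {
            "ID": "expire-app-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }
    ]
}

# Applying it would look something like this (requires boto3 and AWS
# credentials, so it is left commented out here):
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-log-bucket", LifecycleConfiguration=LIFECYCLE)
```

One rule per object prefix is usually enough; the expiration clause doubles as the delete policy mentioned above.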

Finally, enabling a VPC Endpoint for your Amazon S3 buckets will suppress the data transfer costs between Amazon S3 and your instances.

3 – Leverage the Spot market

Spot instances enable you to use AWS’s spare computing power at a heavily discounted price, which can be very interesting depending on your workloads.

Spot instances are bought through a kind of auction: if your bid is above the spot market rate, you get the instance and pay only the market price. However, these instances can be reclaimed if the market price exceeds your bid.

At Teads, we usually bid the on-demand price to be sure we get the instance. We only pay the “market” rate, which gives us a rebate of up to 90%.

It is worth noting that:

  • You get a 2-minute termination notice before your spot instance is reclaimed, but you need to watch for it.
  • Spot instances are easy to use for non-critical batch workloads and interesting for data processing; they are a very good match for Amazon Elastic MapReduce.
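The arithmetic behind that rebate is simple enough to sketch; the prices below are placeholders, not actual market rates.

```python
def spot_savings(on_demand_hourly, spot_hourly, hours):
    """How much a workload saves on the spot market, assuming the bid
    (we bid the on-demand price) was never outbid during the run.

    Returns (dollars saved, rebate as a percentage of on-demand).
    """
    saved = (on_demand_hourly - spot_hourly) * hours
    rebate = 1 - spot_hourly / on_demand_hourly
    return round(saved, 2), round(rebate * 100, 1)
```

For a 100-hour batch job on an instance priced at $1.00/h on demand but trading at $0.10/h on the spot market, the savings are substantial.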

4 – Data transfer

Back in the physical world, you were used to paying for the network link between your data center and the Internet.

Whatever data you sent through that link was free of charge.

In the cloud, data transfer can grow to become really expensive.

You are charged for data transfer from your services to the Internet but also in-between AWS Availability Zones.

This can quickly become an issue when using distributed systems like Kafka and Cassandra that need to be deployed across different zones to be highly available and that constantly exchange data over the network.

Some advice:

  • If you have instances communicating with each other, you should try to locate them in the same AZ
  • Use managed services like Amazon DynamoDB or Amazon RDS, as their inter-AZ replication cost is built into their pricing
  • If you serve more than a few hundred terabytes per month, you should talk to your account manager
  • Use Amazon CloudFront (AWS’s CDN) as much as you can when serving static files. The data transfer out rates are cheaper from CloudFront and free between CloudFront and EC2 or S3.

5 – Unused infrastructure

With a growing infrastructure, you can rapidly forget to turn off unused and idle things:

  • Detached Elastic IPs (EIPs): they are free when attached to an EC2 instance, but you pay for them when they are not.
  • Block store (EBS) volumes are preserved when you stop your EC2 instances. As you will rarely re-attach a root EBS volume, you can delete them. Snapshots also tend to pile up over time; you should look into those too.
  • A Load Balancer (ELB) with no traffic is easy to detect and obviously useless. Still, it will cost you ~$20/month.
  • Instances with no network activity over the last week; in a cloud context, keeping them running rarely makes sense.

Trusted Advisor can help you in detecting these unnecessary expenses.
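The first item is also easy to detect programmatically: in EC2's DescribeAddresses output, an attached EIP carries an `InstanceId` field. A minimal filter, shown here on hand-written sample data rather than a live boto3 call:

```python
def unattached_eips(addresses):
    """Given entries shaped like EC2 DescribeAddresses results,
    return the Elastic IPs not attached to any instance -- the
    ones you are paying for."""
    return [a["PublicIp"] for a in addresses if "InstanceId" not in a]

# Hand-written sample in the shape of a DescribeAddresses response;
# in practice you would feed in boto3's describe_addresses() output.
SAMPLE_ADDRESSES = [
    {"PublicIp": "52.0.0.1", "InstanceId": "i-0abc123"},
    {"PublicIp": "52.0.0.2"},  # detached: billable
]
```

Running such a check daily alongside the billing dashboard catches these small leaks before they accumulate.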

Key takeaways

Shopify – 67 Key Performance Indicators (KPIs) for Ecommerce

Key performance indicators (KPIs) are like milestones on the road to online retail success. Monitoring them will help ecommerce entrepreneurs identify progress toward sales, marketing, and customer service goals.

KPIs should be chosen and monitored depending on your unique business goals. Certain KPIs support some goals while they’re irrelevant for others. With the idea that KPIs should differ based on the goal being measured, it’s possible to consider a set of common performance indicators for ecommerce.


Here is the definition of key performance indicators, types of key performance indicators, and 67 examples of ecommerce key performance indicators.

What is a performance indicator?

A performance indicator is a quantifiable measurement or data point used to gauge performance relative to some goal. As an example, some online retailers may have a goal to increase site traffic 50% in the next year.

Relative to this goal, a performance indicator might be the number of unique visitors the site receives daily or which traffic sources send visitors (paid advertising, search engine optimization, brand or display advertising, a YouTube video, etc.).

What is a key performance indicator?

For most goals there could be many performance indicators, often too many, so people typically narrow them down to two or three impactful data points known as key performance indicators. KPIs are those measurements that most accurately and succinctly show whether or not a business is progressing toward its goal.

Why are key performance indicators important?

KPIs are important just like strategy and goal setting are important. Without KPIs, it’s difficult to gauge progress over time. You’d be making decisions based on gut instinct, personal preference or belief, or other unfounded hypotheses. KPIs tell you more information about your business and your customers, so you can make informed and strategic decisions.

But KPIs aren’t important on their own. The real value lies in the actionable insights you take away from analyzing the data. You’ll be able to more accurately devise strategies to drive more online sales, as well as understand where there may be problems in your business.

Plus, the data related to KPIs can be distributed to the larger team. This can be used to educate your employees and come together for critical problem-solving.

What is the difference between a SLA and a KPI?

SLA stands for service level agreement, while a KPI is a key performance indicator. A service level agreement in ecommerce establishes the scope for the working relationship between an online retailer and a vendor. For example, you might have a SLA with your manufacturer or digital marketing agency. A KPI, as we know, is a metric or data point related to some business operation. These are often quantifiable, but KPIs may also be qualitative.

Types of key performance indicators

There are many types of key performance indicators. They may be qualitative, quantitative, predictive of the future, or revealing of the past. KPIs also touch on various business operations. When it comes to ecommerce, KPIs generally fall into one of the following five categories:

  1. Sales
  2. Marketing
  3. Customer service
  4. Manufacturing
  5. Project management

67 key performance indicator examples for ecommerce

Note: The performance indicators listed below are in no way an exhaustive list. There are an almost infinite number of KPIs to consider for your ecommerce business.

What are key performance indicators for sales?

Sales key performance indicators are measures that tell you how your business is doing in terms of conversions and revenue. You can look at sales KPIs related to a specific channel, time period, team, employee, etc. to inform business decisions.

Examples of key performance indicators for sales include:

  • Sales: Ecommerce retailers can monitor total sales by the hour, day, week, month, quarter, or year.
  • Average order size: Sometimes called average market basket, the average order size tells you how much a customer typically spends on a single order.
  • Gross profit: Calculate this KPI by subtracting the total cost of goods sold from total sales.
  • Average margin: Average margin, or average profit margin, is a percentage that represents your profit margin over a period of time.
  • Number of transactions: This is the total number of transactions. Use this KPI in conjunction with average order size or total number of site visitors for deeper insights.
  • Conversion rate: The conversion rate, also a percentage, is the rate at which users on your ecommerce site are converting (or buying). This is calculated by dividing the total number of conversions by the total number of visitors (to a site, page, category, or selection of pages).
  • Shopping cart abandonment rate: The shopping cart abandonment rate tells you how many users are adding products to their shopping cart but not checking out. The lower this number, the better. If your cart abandonment rate is high, there may be too much friction in the checkout process.
  • New customer orders vs. returning customer orders: This metric shows a comparison between new and repeat customers. Many business owners focus only on customer acquisition, but customer retention can also drive loyalty, word of mouth marketing, and higher order values.
  • Cost of goods sold (COGS): COGS tells you how much you’re spending to sell a product. This includes manufacturing, employee wages, and overhead costs.
  • Total available market relative to a retailer’s share of market: Tracking this KPI will tell you how much your business is growing compared to others within your industry.
  • Product affinity: This KPI tells you which products are purchased together. This can and should inform cross-promotion strategies.
  • Product relationship: This is which products are viewed consecutively. Again, use this KPI to formulate effective cross-selling tactics.
  • Inventory levels: This KPI could tell you how much stock is on hand, how long product is sitting, how quickly product is selling, etc.
  • Competitive pricing: It’s important to gauge your success and growth against yourself and against your competitors. Monitor your competitors’ pricing strategies and compare them to your own.
  • Customer lifetime value (CLV): The CLV tells you how much a customer is worth to your business over the course of their relationship with your brand. You want to increase this number over time through strengthening relationships and focusing on customer loyalty.
  • Revenue per visitor (RPV): RPV gives you an average of how much a person spends during a single visit to your site. If this KPI is low, you can view website analytics to see how you can drive more online sales.
  • Churn rate: For an online retailer, the churn rate tells you how quickly customers are leaving your brand or canceling/failing to renew a subscription with your brand.
  • Customer acquisition cost (CAC): CAC tells you how much your company spends on acquiring a new customer. This is measured by looking at your marketing spend and how it breaks down per individual customer.
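Several of the sales KPIs above reduce to one-line formulas. A minimal sketch of the most common ones (function names are ours, chosen for illustration):

```python
def conversion_rate(conversions, visitors):
    """Conversions divided by visitors, as a percentage."""
    return round(100 * conversions / visitors, 2)

def average_order_size(total_sales, num_orders):
    """How much a customer typically spends on a single order."""
    return round(total_sales / num_orders, 2)

def cart_abandonment_rate(carts_created, orders_completed):
    """Share of carts that never made it through checkout, in percent."""
    return round(100 * (1 - orders_completed / carts_created), 2)

def customer_acquisition_cost(marketing_spend, new_customers):
    """Marketing spend divided across the customers it brought in."""
    return round(marketing_spend / new_customers, 2)
```

For example, 30 orders from 1,000 visitors gives a 3% conversion rate, and $10,000 of marketing spend that wins 250 customers gives a $40 CAC.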

What are key performance indicators for marketing?

Key performance indicators for marketing tell you how well you’re doing in relation to your marketing and advertising goals. These also impact your sales KPIs. Marketers use KPIs to understand which products are selling, who’s buying them, how they’re buying them, and why they’re buying them. This can help you market more strategically in the future and inform product development.

Examples of key performance indicators for marketing include:

  • Site traffic: Site traffic refers to the total number of visits to your ecommerce site. More site traffic means more users are hitting your store.
  • New visitors vs. returning visitors: New site visitors are first-time visitors to your site. Returning visitors, on the other hand, have been to your site before. While looking at this metric alone won’t reveal much, it can help ecommerce retailers gauge success of digital marketing campaigns. If you’re running a retargeted ad, for example, returning visitors should be higher.
  • Time on site: This KPI tells you how much time visitors are spending on your website. Generally, more time spent means they’ve had deeper engagements with your brand. Usually, you’ll want to see more time spent on blog content and landing pages and less time spent through the checkout process.
  • Bounce rate: The bounce rate tells you how many users exit your site after viewing only one page. If this number is high, you’ll want to investigate why visitors are leaving your site instead of exploring.
  • Pageviews per visit: Pageviews per visit refers to the average number of pages a user will view on your site during each visit. Again, more pages usually means more engagement. However, if it’s taking users too many clicks to find the products they’re looking for, you want to revisit your site design.
  • Average session duration: The average amount of time a person spends on your site during a single visit is called the average session duration.
  • Traffic source: The traffic source KPI tells you where visitors are coming from or how they found your site. This will provide information about which channels are driving the most traffic, such as: organic search, paid ads, or social media.
  • Mobile site traffic: Monitor the total number of users who use mobile devices to access your store and make sure your site is optimized for mobile.
  • Day part monitoring: Looking at when site visitors come can tell you which are peak traffic times.
  • Newsletter subscribers: The number of newsletter subscribers refers to how many users have opted into your email marketing list. If you have more subscribers, you can reach more consumers. However, you’ll also want to look at related data, such as the demographics of your newsletter subscribers, to make sure you’re reaching your target audience.
  • Texting subscribers: Newer to digital marketing than email, SMS-based marketing lets ecommerce brands reach consumers by text. Texting subscribers refers to the number of customers on your text message contact list. To get started with your own text-based marketing, look into SMS apps for Shopify.
  • Subscriber growth rate: This tells you how quickly your subscriber list is growing. Pairing this KPI with the total number of subscribers will give you good insight into this channel.
  • Email open rate: This KPI tells you the percentage of subscribers that open your email. If you have a low email open rate, you could test new subject lines, or try cleaning your list for inactive or irrelevant subscribers.
  • Email click-through rate (CTR): While the open rate tells you the percentage of subscribers who open the email, the click-through rate tells you the percentage of those who actually clicked on a link after opening. This is arguably more important than the open rate because without clicks, you won’t drive any traffic to your site.
  • Unsubscribes: You can look at both the total number and the rate of unsubscriptions for your email list.
  • Chat sessions initiated: If you have live chat functionality on your ecommerce store, the number of chat sessions initiated tells you how many users engaged with the tool to speak to a virtual aide.
  • Social followers and fans: Whether you’re on Facebook, Instagram, Twitter, Pinterest, or Snapchat (or a combination of a few), the number of followers or fans you have is a useful KPI to gauge customer loyalty and brand awareness. Many of those social media networks also have tools that ecommerce businesses can use to learn more about their social followers.
  • Social media engagement: Social media engagement tells you how actively your followers and fans are interacting with your brand on social media.
  • Clicks: The total number of clicks a link gets. You could measure this KPI almost anywhere: on your website, social media, email, display ads, PPC, etc.
  • Average CTR: The average click-through rate tells you the percentage of users on a page (or asset) who click on a link.
  • Average position: The average position KPI tells you about your site’s search engine optimization (SEO) and paid search performance. This demonstrates where you are on search engine results pages. Most online retailers have the goal of being number one for their targeted keywords.
  • Pay-per-click (PPC) traffic volume: If you’re running PPC campaigns, this tells you how much traffic you’re successfully driving to your site.
  • Blog traffic: You can find this KPI by simply creating a filtered view in your analytics tool. It’s also helpful to compare blog traffic to overall site traffic.
  • Number and quality of product reviews: Product reviews are great for a number of reasons: They provide social proof, they can help with SEO, and they give you valuable feedback for your business. The quantity and content of product reviews are important KPIs to track for your ecommerce business.
  • Banner or display advertising CTRs: The CTRs for your banner and display ads will tell you the percentage of viewers who have clicked on the ad. This KPI will give you insight into your copy, imagery, and offer performance.
  • Affiliate performance rates: If you engage in affiliate marketing, this KPI will help you understand which channels are most successful.
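Several of the email KPIs above reduce to simple ratios. Here is a minimal sketch in Python; the function names and sample figures are illustrative, not taken from any particular analytics tool:

```python
def email_open_rate(opens, delivered):
    # Percentage of delivered emails that were opened.
    return 100 * opens / delivered

def email_ctr(clicks, delivered):
    # Percentage of delivered emails whose links were clicked.
    return 100 * clicks / delivered

def subscriber_growth_rate(new_subs, unsubs, list_size_at_start):
    # Net list growth over a period, as a percentage of the starting list size.
    return 100 * (new_subs - unsubs) / list_size_at_start

print(email_open_rate(420, 2000))             # 21.0
print(email_ctr(60, 2000))                    # 3.0
print(subscriber_growth_rate(150, 30, 4000))  # 3.0
```

Note that CTR here is computed against delivered emails; some tools compute it against opens instead, so check which denominator your platform reports before comparing numbers.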

What are key performance indicators for customer service?

Customer service KPIs tell you how effective your customer service is and whether you’re meeting expectations. You might be wondering what the KPIs should be for your call center, your email support team, your social media support team, and so on. Measuring and tracking these KPIs will help you ensure you’re providing a positive customer experience.

Key performance indicators for customer service include:

  • Customer satisfaction (CSAT) score: The CSAT KPI is typically measured by customer responses to a very common survey question: “How satisfied were you with your experience?” This is usually answered with a numbered scale.
  • Net promoter score (NPS): Your NPS KPI provides insight into your customer relationships and loyalty by telling you how likely customers are to recommend your brand to someone in their network.
  • Hit rate: Calculate your hit rate by taking the total number of sales of a single product and dividing it by the number of customers who have contacted your customer service team about said product.
  • Customer service email count: This is the number of emails your customer support team receives.
  • Customer service phone call count: Rather than email, this is how frequently your customer support team is reached via phone.
  • Customer service chat count: If you have live chat on your ecommerce site, you may have a customer service chat count.
  • First response time: First response time is the average amount of time it takes a customer to receive the first response to their query. Aim low!
  • Average resolution time: This is the amount of time it takes for a customer support issue to be resolved, starting from the point at which the customer reached out about the problem.
  • Active issues: The total number of active issues tells you how many queries are currently in progress.
  • Backlogs: A backlog forms when unresolved issues pile up in your system faster than your team can clear them. This can be caused by a number of factors.
  • Concern classification: Beyond the total number of customer support interactions, look at quantitative data around trends to see if you can be proactive and reduce customer support queries. You’ll classify the customer concerns which will help identify trends and your progress in solving issues.
  • Service escalation rate: The service escalation rate KPI tells you how many times a customer has asked a customer service representative to redirect them to a supervisor or other senior employee. You want to keep this number low.
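The hit rate and CSAT score described above are simple arithmetic. A hedged sketch, with made-up names and figures:

```python
def hit_rate(units_sold, support_contacts):
    # Total sales of a single product divided by the number of customers
    # who contacted customer service about that product.
    return units_sold / support_contacts

def csat_score(survey_responses):
    # Average of responses to "How satisfied were you with your
    # experience?" on a numbered scale (assumed here to be 1-5).
    return sum(survey_responses) / len(survey_responses)

print(hit_rate(500, 40))            # 12.5
print(csat_score([5, 4, 4, 5, 3]))  # 4.2
```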

What are key performance indicators for manufacturing?

Key performance indicators for manufacturing are, predictably, related to your supply chain and production processes. These may tell you where efficiencies and inefficiencies are, as well as help you understand productivity and expenses.

Key performance indicators for manufacturing in ecommerce include:

  • Cycle time: The cycle time manufacturing KPI tells you how long it takes to manufacture a single product from start to finish. Monitoring this KPI will give you insight into production efficiency.
  • Overall equipment effectiveness (OEE): The OEE KPI provides ecommerce businesses with insight into how well manufacturing equipment is performing.
  • Overall labor effectiveness (OLE): Just as you’ll want insight into your equipment, the OLE KPI will tell you how productive the staff operating the machines are.
  • Yield: Yield is a straightforward manufacturing KPI. It is the number of products you have manufactured. Consider analyzing the yield variance KPI in manufacturing, too, as that will tell you how much you deviate from your average.
  • First time yield (FTY) and first time through (FTT): FTY, also referred to as first pass yield, is a quality-based KPI. It tells you how wasteful your production processes are. To calculate FTY, divide the number of successfully manufactured units by the total number of units that started the process. FTT is closely related: it measures the share of units that make it through the entire process the first time, without rework or scrap.
  • Number of non-compliance events or incidents: In manufacturing, there are several sets of regulations, licenses, and policies businesses must comply with. These are typically related to safety, working conditions, and quality. You’ll want to reduce this number to ensure you’re operating within the mandated guidelines.
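The FTY calculation above can be sketched as follows. The multi-stage rollup (often called rolled-throughput yield) is a common extension added here for illustration, not something this guide specifies:

```python
def first_time_yield(good_units, units_started):
    # Units that finish the process successfully on the first pass,
    # divided by the units that started it.
    return good_units / units_started

def rolled_throughput_yield(stage_ftys):
    # Product of each sequential stage's first-time yield: the chance
    # a unit makes it through every stage without rework or scrap.
    result = 1.0
    for fty in stage_ftys:
        result *= fty
    return result

print(first_time_yield(950, 1000))                  # 0.95
print(rolled_throughput_yield([0.95, 0.98, 0.90]))  # ~0.838
```

The rollup illustrates why per-stage yields matter: three stages that each look healthy on their own can still lose roughly one unit in six across the full line.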

What are key performance indicators for project management?

Key performance indicators for project management give you insight into how well your teams are performing and completing specific tasks. Each project or initiative within your ecommerce business has different goals, and must be managed with different processes and workflows. Project management KPIs tell you how well each team is working to achieve their respective goals and how well their processes are working to help them achieve those goals.

Key performance indicators for project management include:

  • Hours worked: The total hours worked tells you how much time a team put into a project. Project managers should also assess the variance in estimated vs. actual hours worked to better predict and resource future projects.
  • Budget: The budget indicates how much money you have allocated for the specific project. Project managers and ecommerce business owners will want to make sure that the budget is realistic; if you’re repeatedly over budget, some adjustments to your project planning need to be made.
  • Return on investment (ROI): The ROI KPI for project management tells you how much your efforts earned your business. The higher this number, the better. The ROI accounts for all of your expenses and earnings related to a project.
  • Cost variance: Just as it’s helpful to compare real vs. predicted timing and hours, you should examine the total cost against the predicted cost. This will help you understand where you need to reel it in and where you may want to invest more.
  • Cost performance index (CPI): The CPI for project management, like ROI, tells you how much your resource investment is worth. The CPI is calculated by dividing the earned value by the actual costs. If you come in under one, there’s room for improvement.
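The cost variance and CPI calculations above can be sketched as follows. The sign convention for cost variance follows this guide's predicted-vs-actual comparison, and the figures are illustrative:

```python
def cost_variance(predicted_cost, actual_cost):
    # Positive when the project came in under its predicted cost.
    return predicted_cost - actual_cost

def cost_performance_index(earned_value, actual_cost):
    # CPI above 1 means the work delivered is worth more than it cost;
    # below 1, there's room for improvement.
    return earned_value / actual_cost

print(cost_variance(120_000, 135_000))           # -15000
print(cost_performance_index(120_000, 135_000))  # ~0.89
```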

How do I create a KPI?

Selecting your KPIs begins with clearly stating your goals and understanding which areas of business impact those goals. Of course, KPIs for ecommerce can and should differ for each of your goals, whether they’re related to boosting sales, streamlining marketing, or improving customer service.

Key performance indicator templates

Here are a few key performance indicator templates, with examples of goals and the associated KPIs.

GOAL 1: Boost sales 10% in the next quarter.

KPI examples:

  • Daily sales.
  • Conversion rate.
  • Site traffic.

GOAL 2: Increase conversion rate 2% in the next year.

KPI examples:

  • Conversion rate.
  • Shopping cart abandonment rate.
  • Competitive pricing.

GOAL 3: Grow site traffic 20% in the next year.

KPI examples:

  • Site traffic.
  • Traffic sources.
  • Promotional click-through rates.
  • Social shares.
  • Bounce rates.

GOAL 4: Reduce customer service calls by half in the next 6 months.

KPI examples:

  • Service call classification.
  • Pages visited immediately before call.

There are many performance indicators, and the value of each indicator is tied directly to the goal being measured. Monitoring which page someone visited before initiating a customer service call makes sense as a KPI for GOAL 4, since it could help identify areas of confusion that, when corrected, would reduce customer service calls. But that same performance indicator would be useless for GOAL 3.

Once you have set goals and selected KPIs, monitoring those indicators should become an everyday exercise. Most importantly: Performance should inform business decisions and you should use KPIs to drive actions.

Gary Silberg – Are CFOs ready for a changing business model?

Imagine 25,000 automobile parts sourced in wildly different locations throughout the world, magically converging upon an assembly plant to create a vehicle. Consider the complexity for the CFO — the billions of dollars spent for design, for building plants, and for marketing and advertising.

Consider the attention necessary to address taxation and regulation and the complex economic rationale derived from vehicle line profitability, return on invested capital, cash flow and optimal capital structure.

But if anything, the life of the CFO is about to become even more complex.

For the carmaker and supplier, the financial implications largely end when the vehicle leaves the factory. The car then becomes mostly someone else’s business, except for auto financing and parts supply. The sale is made to the dealer and the revenue recognized.

However, a vast range of new technologies and technological abilities — graphic processing unit chips, LIDAR, mapping software, deep learning and artificial intelligence — are transforming consumer behavior and revolutionizing the way we lead our lives, including how we use our automobiles.

New realities

This doesn’t mean the traditional carmaking business is going away anytime soon, but car sales will decline as mobility services reduce the need to own automobiles.

Thus, car companies must change to accommodate a world where revenue comes more from providing services. That is a dilemma for the CFO, a dilemma of business models: the need to serve multiple innovation paces at once.

CFOs must maintain the traditional pace of the business that reflects the sale of cars — consumer interaction once every three to five years — but must also accommodate new business realities for the faster-paced transactions necessary for emerging, service-oriented markets — indeed, consumer interactions as often as many times per day.

Great potential

Those emerging markets have great potential: mobility services, power provision, fuel services, data aggregation and insurance, for example. Mobility services alone could become a trillion-dollar market, changing the auto industry.

But adding service businesses requires far-reaching strategic decisions affecting complex revenue models, balance sheets, capital structure, taxation and governance.

This sea change in the industry requires new key performance indicators for the CFO to set a new drumbeat for measuring growth, profitability and sustainability. Some metrics that will be important: revenue passenger miles, recurring revenue and number of active customers.

New indicators for profitability include passenger revenue per available seat mile, revenue per available seat mile, cost per available seat mile, customer acquisition costs and recurring margins.

Entering the mobility service or any service business market is a profound change.

For the office of the CFO, it will mean a radical difference in how it operates. Both the metrics for strategic drivers and key finance considerations require rethinking, restaffing and reinvesting in an infrastructure to accommodate an entirely different kind of business model.

While it can seem overwhelming, it’s important for CFOs to be thoughtful and lead by looking outside the industry to find innovative solutions and business models to meet these challenges.

The question for CFOs is not whether the business model will change, but whether they are ready for the drastic changes coming to the automotive industry.

Caitlin Stanway – Innovation Is Too Important To Be Left Siloed In The “Innovation” Department

Linda has spent 17 years with W. L. Gore & Associates serving multiple roles. In the Medical Products Division, Linda led new product development teams from ideation to commercial launch; drove technical resourcing for manufacturing engineering; and owns two patents. Linda dedicated the past two years to the creation and launch of the Gore Innovation Center in Silicon Valley, where she took it from its original concept to the facility’s execution, completion, and launch. Now, she’s working to jointly develop products, technologies, and business models where Gore materials can uniquely add value.

How do you encourage a culture of innovation?

Gore’s unique culture encourages Associates to pursue questions, ideas and innovations as part of their daily commitments. Associates are encouraged to use their “dabble time” to explore areas that they believe may be of value to Gore, even if they are outside their current division. Gore’s history of innovation has resulted in important problem solving and business creation as a result of genuinely curious Associates who came up with an idea and had the passion to pursue it.

For example, the Elixir Guitar Strings were born out of the W. L. Gore concept of “dabble time.” During a testing session of slender cables for Disney puppets, a group of Gore engineers noticed the cables were too difficult to control and the prototype failed. Instead of giving up on the project, Gore encouraged them to use some “dabble time” to think of alternative solutions. The group decided they needed a smoother, lower friction cable and realized they could use guitar strings as a substitute for the prototype. Once the strings became integrated into that process, the engineers realized they could create a stronger and longer lasting guitar string by combining the existing strings with Gore polymers.

While this is one proof point for fostering an innovative mindset, we also believe in the power of creating an internal accelerator: a small team that is available to pursue ideas that germinate from employees. Most employee-generated ideas go nowhere because there are insufficient resources with the right perspective. Taking a good idea to a minimum viable product, and then building it into a successful business, requires a broad set of business-building skills and an adequate resource pool.

Do you have any tips on how companies can have a more innovative mindset?

Innovation is too important to be left siloed in the “innovation” department. Innovation can come from anywhere inside or outside the company. For any company, it is key to be open to the ideas from any source and then take the time to flesh out and prioritize the ideas. Once priorities have been established, the company needs to devote the time and resources necessary to make the ideas successful, then announce the success across the company. A more innovative mindset across the entire company can build from one employee-generated success. Employees are highly motivated by such a success, which improves morale and promotes a supportive culture of innovation across the company.

How do you encourage a culture of innovation in a small company versus a large company?

In all companies, regardless of size, employees need to understand their contribution to company innovation goals. So, leadership is the key. Leaders must communicate their expectations of employees and put infrastructure in place to enable employees to pursue their ideas and curiosities. Companies can give employees a percentage of their time to pursue ideas or hold employee idea contests. Companies see the most success when they have taken the time to educate employees on design thinking, lean innovation, business model innovation, open innovation and more. Leadership, setting expectations and providing infrastructure that supports those expectations, works the same across both large and small companies. Gore started as two people in a garage 60 years ago. It is now a $3 billion company with over 9,500 Associates. Gore grew up with a culture of innovation and took the necessary steps to ensure that as the company grew, it stayed true to the roots of its culture, made changes when necessary, and allowed Associates the freedom to innovate. Leadership was and continues to be critical to this journey.

How important is workplace diversity to innovation?

Innovation comes not just from breadth of experience and deep technical knowledge, but through the involvement of diverse teams. Pioneering ideas result when all those involved — everyone from engineers to customers — tap into their individual talents and experiences. Gore is a stronger enterprise because we foster an environment that is inclusive of all, regardless of race, sex, gender identity, sexual orientation or other personal identifiers.

Are companies having to innovate faster than they have had to before? If so, what tools are helping them do this?

Yes. Over time, the focus of innovation has shifted from purely internal efforts to bringing in external ideas and working with external partners. This shifts the pace of innovation, as startups are on a faster timeline than corporations. To ensure we are working with the best startups and getting them what they need, we need to move faster. One way to facilitate faster innovation is to ‘deconstruct’ corporate practices in legal, procurement, supply chain, and other functions that aren’t built for speed but for reducing risk in core businesses. The innovation team embraces risk when it explores new opportunities, and speed is critical to these explorations. Internal processes that worked in the past sometimes only hinder innovation today. Applying a ‘deconstruction’ mindset puts the innovation leader in the position to rewrite policies that accelerate innovation, just as a startup CEO writes policies that support the startup’s mission.

How do you prepare for disruption?

We work closely with startups, universities, and customers to understand emerging technologies and business models. Taking a cross-sector approach allows us to capitalize on best practices from a variety of fields. Disruption at its best capitalizes on the agile nature of start-ups, the expertise and infrastructure of established corporations, and the exploratory mindset of academic institutions, all while focusing on the problem that needs to be solved. Innovation for innovation’s sake means nothing unless it truly makes a difference – addressing a challenge, improving a life, increasing efficiency etc. Preparing for disruption means factoring in all these inputs to improve the status quo.

How is the nature of innovation and organizations’ approaches to it set to evolve over the next five years?

One challenge facing companies is finding the right balance across all three business creation phases, Ideation-Incubation-Scaleup. Many companies invest in one of these areas while under-investing in the others, resulting in a large number of projects failing to move the needle for the company. In the next few years, companies will gain more insights from data about where their innovation programs fail to support promising projects, and companies will fix the gaps by balancing investments across all three business creation phases.

In addition, new tools like machine learning and artificial intelligence will continue to shape the way we develop businesses and processes. The potential for increasing efficiency on many fronts and across industries is huge. Organizations that build business models around these disruptive tools will realize success in a way that inflexible institutions are unable to.

Techcrunch – This tiny agtech company thinks it has figured out something its better-capitalized rivals haven’t

In November, we told you about Farmers Business Network, a social network for farmers that invites them to share their data, pool their know-how and bargain more effectively for better pricing from manufacturing companies. At the time, FBN, as it’s known, had just closed on $110 million in new funding in a round that brought its funding to roughly $200 million altogether.

That kind of financial backing might dissuade newcomers to the space, but a months-old startup called AgVend has just raised $1.75 million in seed funding on the premise that, well, FBN is doing it wrong. Specifically, AgVend’s pitch is that manufacturers aren’t so crazy about FBN getting between their offerings and their end users — in large part because FBN is able to secure group discounts on those users’ behalf.

AgVend is instead planning to work directly with manufacturers and retailers, selling their goods through its own site as well as helping them develop their own web shops. The idea is to “protect their channel pricing power,” explains CEO Alexander Reichert, who previously spent more than four years with Euclid Analytics, a company that helps brands monitor and understand their foot traffic. In this framing, AgVend is the retailers’ white knight, saving them from being disrupted out of business. “Why cut them out of the equation?” Reichert asks.

Whether farmers will go along is the question. Those who’ve joined FBN can ostensibly save money on seeds, fertilizers, pesticides and more by being invited to comparison shop through FBN’s own online store. It’s not the easiest sell, though. FBN charges farmers $600 per year to access its platform, which is presumably a hurdle for some.

AgVend meanwhile is embracing good-old-fashioned opacity. While it invites farmers to search for products at its own site based on the farmers’ needs and location, it’s only after someone has purchased something that the retailer who sold the items is revealed. The reason: retailers don’t necessarily want to put all of their pricing online and be bound to those numbers, explains Reichert.

Naturally, AgVend insists that it’s not just better for retailers and the manufacturers standing behind them. For one thing, says Reichert, AgVend’s farming customers are sometimes offered rebates. Customers are also better informed about the products they’re buying because the information is coming from the retailers and not a third party, he says. “When a third party like FBN comes in and tries going around the retailers, the manufacturers can’t guarantee that FBN is giving the right guidance about their products.”

In the end, its customers will decide. But the market looks big enough to support a number of players if they figure out how to play it. According to USDA data released last year, U.S. farms spent an estimated $346.9 billion on production expenses in 2016.

That’s a lot of feed and fertilizer. It’s no wonder that founders, and the VCs who are writing them checks, see fertile ground. This particular deal was led by 8VC and included the participation of Precursor Ventures, Green Bay Ventures, FJ Labs and House Fund, among others.

Chris McCann – Blockchains don’t scale (yet), aren’t really decentralized, distribute wealth poorly, lack killer apps, and run on a controlled Internet

Naval Ravikant recently shared this thought:

“The dirty secrets of blockchains: they don’t scale (yet), aren’t really decentralized, distribute wealth poorly, lack killer apps, and run on a controlled Internet.”

In this post, I want to dive into his fourth observation that blockchains “lack killer apps” and understand just how far away we are to real applications (not tokens, not store of value, etc.) being built on top of blockchains.

Thanks to Dappradar, I was able to analyze the top decentralized applications (DApps) built on top of Ethereum, the largest decentralized application platform. My research is focused on live public DApps which are deployed and usable today. This does not include any future or potential applications not yet deployed.

If you look at a broad overview of the 312 DApps created, the main categories are:

I. Decentralized Exchanges
II. Games (largely collectible-type games, excluding casino/games of chance)
III. Casino Applications
IV. Other (we’ll revisit this category later)

On closer examination, it becomes clear only a few individual DApps make up the majority of transactions within their respective category:

Diving into the “Other” category, the largest individual DApps in this category are primarily pyramid schemes: PoWH 3D, PoWM, PoWL, LockedIn, etc. (*Please exercise caution: all of these projects are actual pyramid schemes.)

These top DApps are all still very small relative to traditional consumer web and mobile applications.

*Even “Peak DApp” isn’t that large: by our rough estimates, CryptoKitties only had ~14,000 unique users and 130,000 transactions daily.

Compared to:

*As another comparison point, even the top 50 apps in the Google Play Store alone get on average 25,000+ downloads per day. (This is just downloads, not even counting “transactions”).

Further trends emerge on closer inspection of the transactions of DApps tracked here:

  • More than half of all DApps have zero transactions in the last week.
  • Of the DApps with any usage, the majority of usage is skewed to a small few (see graph).
  • Only 25% of DApps have more than 100 transactions in a week.
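The distribution stats above can be reproduced from a list of per-DApp weekly transaction counts. A rough sketch with made-up numbers (the actual data came from Dappradar):

```python
# Hypothetical weekly transaction counts, one entry per DApp.
weekly_txs = [0, 0, 0, 12, 0, 450, 3, 0, 130000, 85, 0, 7]

zero_share = sum(1 for t in weekly_txs if t == 0) / len(weekly_txs)
over_100_share = sum(1 for t in weekly_txs if t > 100) / len(weekly_txs)

print(f"{zero_share:.0%} of DApps had zero transactions last week")  # 50%
print(f"{over_100_share:.0%} had more than 100 transactions")        # 17%
```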


Where we are and what it means for protocols and the ecosystem:

After looking through the data, my personal takeaways are:

  1. We are orders of magnitude away from consumer adoption of DApps. No killer app (outside of tokens and trading) has been created yet. Even seemingly “large” DApps (e.g., IDEX, CryptoKitties) have low usage overall.
  2. All of the top DApps are still very much about speculation of value. Decentralized exchanges, casino games, pyramid schemes, and even the current collectible games (I would argue) are all around speculation.
  3. What applications (aside from value transfer and speculation) really take advantage of the true unique properties of a blockchain (censorship resistance, immutability of data, etc) and unlock real adoption?
  4. For new protocol developers, instead of trying to convince existing DApp developers to build on your new platform — think hard about what DApps actually make sense on your protocol and how to help them have a chance at real adoption.
  5. We as an ecosystem need to build better tools and infrastructure for more widespread adoption of DApps. Metamask is an awesome tool, but it is still a difficult onboarding step for most normal users. Toshi, Status, and Cipher are all steps in the right direction, and I’m really looking forward to the creation of other tools to simplify the user onboarding experience and improve general UI/UX for normal users.

What kind of DApps do you think we as a community should be building? Would love to hear your takeaways and thoughts about the state of DApps, feel free to comment below or tweet @mccannatron.

Also, if there are any DApps or UI/UX tools I should be paying attention to, let me know — I would love to check them out.
