Category: Mobility

Digital Transformation of Business and Society: Challenges and Opportunities by 2020 – Frank Diana

At a recent KPMG Robotic Innovations event, Futurist and friend Gerd Leonhard delivered a keynote titled “The Digital Transformation of Business and Society: Challenges and Opportunities by 2020”. I highly recommend viewing the Video of his presentation. As Gerd describes, he is a Futurist focused on foresight and observations — not predicting the future. We are at a point in history where every company needs a Gerd Leonhard. For many of the reasons presented in the video, future thinking is rapidly growing in importance. As Gerd so rightly points out, we are still vastly under-estimating the sheer velocity of change.

With regard to future thinking, Gerd used my future scenario slide to describe both the exponential and combinatorial nature of future scenarios — not only do we need to think exponentially, but we also need to think in a combinatorial manner. Gerd mentioned Tesla as a company that really knows how to do this.

Our Emerging Future

He then described our current pivot point of exponential change: a point in history where humanity will change more in the next twenty years than in the previous 300. With that as a backdrop, he encouraged the audience to look five years into the future and spend 3 to 5% of their time focused on foresight. He quoted Peter Drucker (“In times of change the greatest danger is to act with yesterday’s logic”) and stated that leaders must shift from a focus on what is, to a focus on what could be. Gerd added that “wait and see” means “wait and die” (love that by the way). He urged leaders to focus on 2020 and build a plan to participate in that future, emphasizing the question is no longer what-if, but what-when. We are entering an era where the impossible is doable, and the headline for that era is: exponential, convergent, combinatorial, and inter-dependent — words that should be a key part of the leadership lexicon going forward. Here are some snapshots from his presentation:

  • Because of exponential progression, it is difficult to imagine the world in 5 years, and although the industrial era was impactful, it will not compare to what lies ahead. The danger of vastly under-estimating the sheer velocity of change is real. For example, in just three months, the projection for the number of autonomous vehicles sold in 2035 went from 100 million to 1.5 billion
  • Six years ago Gerd advised a German auto company about the driverless car and the implications of a sharing economy — and they laughed. Think of what’s happened in just six years — can’t imagine anyone is laughing now. He witnessed something similar as a veteran of the music business where he tried to guide the industry through digital disruption; an industry that shifted from selling $20 CDs to making a fraction of a penny per play. Gerd’s experience in the music business is a lesson we should learn from: you can’t stop people who see value from extracting that value. Protectionist behavior did not work, as the industry lost 71% of their revenue in 12 years. Streaming music will be huge, but the winners are not traditional players. The winners are Spotify, Apple, Facebook, Google, etc. This scenario likely plays out across every industry, as new businesses are emerging, but traditional companies are not running them. Gerd stressed that we can’t let this happen across these other industries
  • Anything that can be automated will be automated: truck drivers and pilots go away, as robots don’t need unions. There is just too much to be gained not to automate. For example, 60% of the cost in the system could be eliminated by interconnecting logistics, possibly realized via a Logistics Internet as described by economist Jeremy Rifkin. But the drive towards automation will have unintended consequences and some science fiction scenarios could play out. Humanity and technology are indeed intertwining, but technology does not have ethics. A self-driving car would need ethics, as we make difficult decisions while driving all the time. How does a car decide to hit a frog versus swerving and hitting a mother and her child? Speaking of science fiction scenarios, Gerd predicts that when these things come together, humans and machines will have converged:
  • Gerd has been using the term “Hellven” to represent the two paths technology can take. Is it 90% heaven and 10% hell (unintended consequences), or can this equation flip? He asks the question: Where are we trying to go with this? He used the real example of drones used to benefit society (heaven), but people buying guns to shoot them down (hell). As we pursue exponential technologies, we must do it in a way that avoids negative consequences. Will we allow humanity to move down a path where by 2030, we will all be human-machine hybrids? Will hacking drive chaos, as hackers gain control of a vehicle? A recent recall of 1.4 million Jeeps underscores the possibility. A world of super intelligence requires super humanity — technology does not have ethics, but society depends on it. Is this Ray Kurzweil’s vision what we want?
  • Is society truly ready for human-machine hybrids, or even advancements like the driverless car that may be closer to realization? Gerd used a very effective Video to make the point
  • Followers of my Blog know I’m a big believer in the coming shift to value ecosystems. Gerd described this as a move away from Egosystems, where large companies are running large things, to interdependent Ecosystems. I’ve talked about the blurring of industry boundaries and the movement towards ecosystems. We may ultimately move away from the industry construct and end up with a handful of ecosystems like: mobility, shelter, resources, wellness, growth, money, maker, and comfort
  • Our kids will live to 90 or 100 as the default. We are gaining 8 hours of longevity per day — one third of a year per year. Genetic engineering is likely to eradicate disease, impacting longevity and global population. DNA editing is becoming a real possibility in the next 10 years, and at least 50 Silicon Valley companies are focused on ending aging and eliminating death. One such company is Human Longevity Inc., which was co-founded by Peter Diamandis of Singularity University. Gerd used a quote from Peter to help the audience understand the motivation: “Today there are six to seven trillion dollars a year spent on healthcare, half of which goes to people over the age of 65. In addition, people over the age of 65 hold something on the order of $60 trillion in wealth. And the question is what would people pay for an extra 10, 20, 30, 40 years of healthy life. It’s a huge opportunity”
  • Gerd described the growing need to focus on the right side of our brain. He believes that algorithms can only go so far. Our right brain characteristics cannot be replicated by an algorithm, making a human-algorithm combination — or humarithm as Gerd calls it — a better path. The right brain characteristics that grow in importance and drive future hiring profiles are:
  • Google is on the way to becoming the global operating system — an Artificial Intelligence enterprise. In the future, you won’t search, because as a digital assistant, Google will already know what you want. Gerd quotes Ray Kurzweil in saying that by 2027, the capacity of one computer will equal that of the human brain — at which point we shift from artificial narrow intelligence to artificial general intelligence. In thinking about AI, Gerd flips the paradigm to IA, or intelligent assistant. For example, Schwab already has an intelligent portfolio. He indicated that every bank is investing in intelligent portfolios that deal with simple investments that robots can handle. This leads to a 50% replacement of financial advisors by robots and AI
  • This intelligent assistant race has just begun, as Siri, Google Now, Facebook MoneyPenny, and Amazon Echo vie for intelligent assistant positioning. Intelligent assistants could eliminate the need for actual assistants in five years, and creep into countless scenarios over time. Police departments are already capable of determining who is likely to commit a crime in the next month, and there are examples of police taking preventative measures. Augmentation adds another dimension, as an officer wearing glasses can identify you upon seeing you and have your records displayed in front of them. There are over 100 companies focused on augmentation, and a number of intelligent assistant examples surrounding IBM Watson; the most discussed being the effectiveness of doctor assistance. An intelligent assistant is the likely first role in the autonomous vehicle transition, as cars step in to provide a number of valuable services without completely taking over. There are countless Examples emerging
  • Gerd took two polls during his keynote. Here is the first: how do you feel about the rise of intelligent digital assistants? Answers 1 and 2 below received the lion’s share of the votes
  • Collectively, automation, robotics, intelligent assistants, and artificial intelligence will reframe business, commerce, culture, and society. This is perhaps the key take away from a discussion like this. We are at an inflection point where reframing begins to drive real structural change. How many leaders are ready for true structural change?
  • Gerd likes to refer to the 7-ations: Digitization, De-Materialization, Automation, Virtualization, Optimization, Augmentation, and Robotization. Consequences of the exponential and combinatorial growth of these seven include dependency, job displacement, and abundance. While these seven are tools for dramatic cost reduction, they also lead to abundance. Examples are everywhere, from the 16 million songs available through Spotify, to the 3D printed cars that require only 50 parts. As supply exceeds demand in category after category, we reach abundance. As Gerd put it, in five years’ time, genome sequencing will be cheaper than flushing the toilet and abundant energy will be available by 2035 (2015 will be the first year that a major oil company leaves the oil business to enter the renewable business). Other things to consider regarding abundance:
  • Efficiency and business improvement are a path, not a destination. Gerd estimates that total efficiency will be reached in 5 to 10 years, creating value through productivity gains along the way. However, after total efficiency is met, value comes from purpose. Purpose-driven companies have an aspirational purpose that aims to transform the planet; referred to as a massive transformative purpose in a recent book on exponential organizations. When you consider the value that the millennial generation places on purpose, it is clear that successful organizations must excel at both technology and humanity. If we allow technology to trump humanity, business will have no purpose
  • In the first phase, the value lies in the automation itself (productivity, cost savings). In the second phase, the value lies in those things that cannot be automated. Anything that is human about your company cannot be automated: purpose, design, and brand become more powerful. Companies must invent new things that are only possible because of automation
  • Technological unemployment is real this time — and exponential. Gerd referenced a recent study by The Economist that describes how robotics and artificial intelligence will increasingly be used in place of humans to perform repetitive tasks. On the other side of the spectrum is a demand for better customer service and greater skills in innovation driven by globalization and falling barriers to market entry. Therefore, creativity and social intelligence will become crucial differentiators for many businesses; jobs will increasingly demand skills in creative problem-solving and constructive interaction with others
  • Gerd described a basic income guarantee that may be necessary if some of these unemployment scenarios play out. Something like this is already on the ballot in Switzerland, and it is not the first time this has been talked about:
  • In the world of automation, experience becomes extremely valuable — and you can’t automate experiences, nor should you attempt to. We clearly see an intense focus on customer experience, and we had a great discussion on the topic on an August 26th Game Changers broadcast. Innovation is critical to both the service economy and experience economy. Gerd used a visual to describe the progression of economic value:
Source: B. Joseph Pine II and James Gilmore: The Experience Economy
  • Gerd used a second poll to sense how people would feel about humans becoming artificially intelligent. Here again, the audience leaned towards the first two possible answers:

Gerd then summarized the session as follows:

The future is exponential, combinatorial, and interdependent: the sooner we can adjust our thinking (laterally), the better we will be at designing our future.

My take: Gerd hits on a key point. Leaders must think differently. There is very little in a leader’s collective experience that can guide them through the type of change ahead — it requires us all to think differently

When looking at AI, consider trying IA first (intelligent assistance / augmentation).

My take: These considerations allow us to create the future in a way that avoids unintended consequences. Technology as a supplement, not a replacement

Efficiency and cost reduction based on automation, AI/IA and Robotization are good stories but not the final destination: we need to go beyond the 7-ations and inevitable abundance to create new value that cannot be easily automated.

My take: Future thinking is critical for us to be effective here. We have to have a sense as to where all of this is heading, if we are to effectively create new sources of value

We won’t just need better algorithms — we also need stronger humarithms i.e. values, ethics, standards, principles and social contracts.

My take: Gerd is an evangelist for creating our future in a way that avoids hellish outcomes — and kudos to him for being that voice

“The best way to predict the future is to create it” (Alan Kay).

My take: Our mental context puts the future years away, and that is just not the case anymore. What we think will take ten years is likely to happen in two. We can’t create the future if we don’t focus on it through an exponential lens

Source : https://medium.com/@frankdiana/digital-transformation-of-business-and-society-5d9286e39dbf

How redesigning an enterprise product taught me to extend myself – Instacart

As designers, we want to work on problems that are intriguing and “game-changing”. All too often, we limit the “game-changing” category to a handful of consumer-facing mobile apps and social networks. The truth is: enterprise software gives designers a unique set of complex problems to solve. Enterprise platforms usually have a savvy set of users with very specific needs — needs that, when addressed, often affect a business’s bottom line.

One of my first projects as a product designer here at Instacart was to redesign elements of our inventory management tool for retailers (e.g., Kroger, Publix, Safeway, and Costco). As I worked on the project more and more, I learned that enterprise tools are full of gnarly complexity and often present opportunities to practice deep thought. As Jonathan, one of our current enterprise platform designers, said —

The greater the complexity, the greater the opportunity to find elegance.

New login screen

As we scoped the project we found that the existing product wasn’t enabling retailers to manage their inventories as concisely and efficiently as they could. We found retailer users were relying on customer support to help carry out smaller tasks. Our goal with the redesign was to build and deliver a better experience that would enable retailers to manage their inventory more easily and grow their business with Instacart.

The first step in redesigning was to understand the flow of the current product. We mapped out the journey of a partner going through the tool and spoke with the PMs to figure out what we could incorporate into the roadmap.

Overview of the older version of the retailer tool

Once we had a good understanding of the lay of the land, engineering resources, and retailers’ needs, we got into the weeds. Here are a few improvements we made to the tool —

Aisle and department management for Retailers

We used the department tiles feature from our customer-facing product as the catalog’s landing page (1.0 above). With this, we worked to:

  • Refine our visual style
  • Present retailers with an actionable page from the get-go
  • Make it quick and easy to add, delete, and modify items
New Departments page for the Partner Tool. Responsive tiles allow partners to view and edit their Aisles and Departments quickly.

Establishing Overall Hierarchy

Older item search page
Beverages > Coffee returns a list of coffees from the retailer’s catalog

Our solution simplified a few things:

  • A search bar rests atop the product to help find and add items without having to be on this specific page. It pops up a modal that offers a search and add experience. This was visually prioritized since it’s the most common action taken by retailers
  • Decoupled search flow and “Add new product” flow to streamline the workflows
  • Pagination, which was originally on the top and bottom, is now pinned to the bottom of the page for easy navigation
  • We also rethought the information hierarchy on this page. In the example below, the retailer is in the “Beverages” aisle under the “Coffee” item category, which is on the top left. They are editing or adding the item “Eight O’Clock Coffee,” which is the page title. This title is bigger to anchor the user on the page and improve navigation throughout the platform
Focused view of top bar. The “New Product” button is disabled since this is a view to add products

Achieving Clarity

While it’s great that the older Item Details page was partitioned into sections, from an information architecture (IA) perspective it presented challenges for two reasons:

  1. The category grouping didn’t make sense to retailers
  2. Retailers had to read the information vertically but digest it horizontally and vertically
Older version of Item Details page

To address this, we broke down the sections into what’s truly necessary. From there, we identified four main categories of information that the data fell under:

  1. Images — This is first to encourage retailers to add product photos
  2. Basic Info — Name, brand, size, and unit
  3. Item description — Below the item description field, we offered the description seen on the original package (where the data was available) to help guide them as they wrote
  4. Product attributes — help better categorize the product (e.g. Kosher)

Sources now pop up on the top right of the input fields so the editor knows who last made changes.


Takeaways

Seeking validation through numbers is always fantastic. We did a small beta launch of this product and saw an increase in weekly engagement and a decrease in support requests.

I learned that designing enterprise products helps you extend yourself as a visual designer and deep product thinker. I approached this project as an opportunity to break down complex interactions and bring visual elegance to a product through thoughtful design. To this day, it remains one of my favorite projects at Instacart as it stretched my thinking and enhanced my visual design chops. Most importantly, it taught me to look at enterprise tools in a new light; now when I look at them, I am able to appreciate the complexity within.

Source: https://tech.instacart.com/how-redesigning-an-enterprise-product-taught-me-to-extend-myself-8f83d72ebcdf

6 Biases Holding You Back From Rational Thinking – Robert Greene

Emotions are continually affecting our thought processes and decisions, below the level of our awareness. And the most common emotion of them all is the desire for pleasure and the avoidance of pain. Our thoughts almost inevitably revolve around this desire; we simply recoil from entertaining ideas that are unpleasant or painful to us. We imagine we are looking for the truth, or being realistic, when in fact we are holding on to ideas that bring a release from tension and soothe our egos, make us feel superior. This pleasure principle in thinking is the source of all of our mental biases. If you believe that you are somehow immune to any of the following biases, it is simply an example of the pleasure principle in action. Instead, it is best to search and see how they continually operate inside of you, as well as learn how to identify such irrationality in others.

These biases, by distorting reality, lead to the mistakes and ineffective decisions that plague our lives. Being aware of them, we can begin to counterbalance their effects.

1) Confirmation Bias

I look at the evidence and arrive at my decisions through more or less rational processes.

To hold an idea and convince ourselves we arrived at it rationally, we go in search of evidence to support our view. What could be more objective or scientific? But because of the pleasure principle and its unconscious influence, we manage to find that evidence that confirms what we want to believe. This is known as confirmation bias.

We can see this at work in people’s plans, particularly those with high stakes. A plan is designed to lead to a positive, desired objective. If people considered the possible negative and positive consequences equally, they might find it hard to take any action. Inevitably they veer towards information that confirms the desired positive result, the rosy scenario, without realizing it. We also see this at work when people are supposedly asking for advice. This is the bane of most consultants. In the end, people want to hear their own ideas and preferences confirmed by an expert opinion. They will interpret what you say in light of what they want to hear; and if your advice runs counter to their desires, they will find some way to dismiss your opinion, your so-called expertise. The more powerful the person, the more they are subject to this form of the confirmation bias.

When investigating confirmation bias in the world take a look at theories that seem a little too good to be true. Statistics and studies are trotted out to prove them, which are not very difficult to find, once you are convinced of the rightness of your argument. On the Internet, it is easy to find studies that support both sides of an argument. In general, you should never accept the validity of people’s ideas because they have supplied “evidence.” Instead, examine the evidence yourself in the cold light of day, with as much skepticism as you can muster. Your first impulse should always be to find the evidence that disconfirms your most cherished beliefs and those of others. That is true science.

2) Conviction Bias

I believe in this idea so strongly. It must be true.

We hold on to an idea that is secretly pleasing to us, but deep inside we might have some doubts as to its truth, and so we go the extra mile to convince ourselves — to believe in it with great vehemence, and to loudly contradict anyone who challenges us. How can our idea not be true, we tell ourselves, if it brings out such energy in us to defend it? This bias is revealed even more clearly in our relationship to leaders — if they express an opinion with heated words and gestures, colorful metaphors and entertaining anecdotes, and a deep well of conviction, it must mean they have examined the idea carefully and therefore express it with such certainty. Those, on the other hand, who express nuances, whose tone is more hesitant, reveal weakness and self-doubt. They are probably lying, or so we think. This bias makes us easy prey for salesmen and demagogues who display conviction as a way to convince and deceive. They know that people are hungry for entertainment, so they cloak their half-truths with dramatic effects.

3) Appearance Bias

I understand the people I deal with; I see them just as they are.

We do not see people as they are, but as they appear to us. And these appearances are usually misleading. First, people have trained themselves in social situations to present the front that is appropriate and that will be judged positively. They seem to be in favor of the noblest causes, always presenting themselves as hardworking and conscientious. We take these masks for reality. Second, we are prone to fall for the halo effect — when we see certain negative or positive qualities in a person (social awkwardness, intelligence), other positive or negative qualities are implied that fit with this. People who are good looking generally seem more trustworthy, particularly politicians. If a person is successful, we imagine they are probably also ethical, conscientious and deserving of their good fortune. This obscures the fact that many people who get ahead have done so by doing less than moral actions, which they cleverly disguise from view.

4) The Group Bias

My ideas are my own. I do not listen to the group. I am not a conformist.

We are social animals by nature. The feeling of isolation, of difference from the group, is depressing and terrifying. We experience tremendous relief to find others who think the same way as we do. In fact, we are motivated to take up ideas and opinions because they bring us this relief. We are unaware of this pull and so imagine we have come to certain ideas completely on our own. Look at people that support one party or the other, one ideology — a noticeable orthodoxy or correctness prevails, without anyone saying anything or applying overt pressure. If someone is on the right or the left, their opinions will almost always follow the same direction on dozens of issues, as if by magic, and yet few would ever admit this influence on their thought patterns.

5) The Blame Bias

I learn from my experience and mistakes.

Mistakes and failures elicit the need to explain. We want to learn the lesson and not repeat the experience. But in truth, we do not like to look too closely at what we did; our introspection is limited. Our natural response is to blame others, circumstances, or a momentary lapse of judgment. The reason for this bias is that it is often too painful to look at our mistakes. It calls into question our feelings of superiority. It pokes at our ego. We go through the motions, pretending to reflect on what we did. But with the passage of time, the pleasure principle rises and we forget what small part in the mistake we ascribed to ourselves. Desire and emotion will blind us yet again, and we will repeat exactly the same mistake and go through the same mild recriminating process, followed by forgetfulness, until we die. If people truly learned from their experience, we would find few mistakes in the world, and career paths that ascend ever upward.

6) Superiority Bias

I’m different. I’m more rational than others, more ethical as well.

Few would say this to people in conversation. It sounds arrogant. But in numerous opinion polls and studies, when asked to compare themselves to others, people generally express a variation of this. It’s the equivalent of an optical illusion — we cannot seem to see our faults and irrationalities, only those of others. So, for instance, we’ll easily believe that those in the other political party do not come to their opinions based on rational principles, but those on our side have done so. On the ethical front, few will ever admit that they have resorted to deception or manipulation in their work, or have been clever and strategic in their career advancement. Everything they’ve got, or so they think, comes from natural talent and hard work. But with other people, we are quick to ascribe to them all kinds of Machiavellian tactics. This allows us to justify whatever we do, no matter the results.

We feel a tremendous pull to imagine ourselves as rational, decent, and ethical. These are qualities highly promoted in the culture. To show signs otherwise is to risk great disapproval. If all of this were true — if people were rational and morally superior — the world would be suffused with goodness and peace. We know, however, the reality, and so some people, perhaps all of us, are merely deceiving ourselves. Rationality and ethical qualities must be achieved through awareness and effort. They do not come naturally. They come through a maturation process.

Source : https://medium.com/the-mission/6-biases-holding-you-back-from-rational-thinking-f2eddd35fd0f

Here Are the Top Five Questions CEOs Ask About AI – CIO

Recently in a risk management meeting, I watched a data scientist explain to a group of executives why convolutional neural networks were the algorithm of choice to help discover fraudulent transactions. The executives—all of whom agreed that the company needed to invest in artificial intelligence—seemed baffled by the need for so much detail. “How will we know if it’s working?” asked a senior director to the visible relief of his colleagues.

Although they believe in AI’s value, many executives are still wondering about its adoption. The following five questions are boardroom staples:

1. “What’s the reporting structure for an AI team?”

Organizational issues are never far from the minds of executives looking to accelerate efficiencies and drive growth. And, while this question isn’t new, the answer might be.

Captivated by the idea of data scientists analyzing potentially competitively differentiating data, managers often advocate formalizing a data science team as a corporate service. Others assume that AI will fall within an existing analytics or data center of excellence (COE).

AI positioning depends on incumbent practices. A retailer’s customer service department designated a group of AI experts to develop “follow-the-sun” chatbots that would serve the retailer’s increasingly global customer base. Conversely, a regional bank considered AI more of an enterprise service, centralizing statisticians and machine learning developers into a separate team reporting to the CIO.

These decisions were vastly different, but they were both the right ones for their respective companies.

Considerations:

  • How unique (e.g., competitively differentiating) is the expected outcome? If the proposed AI effort is seen as strategic, it might be better to create a team of subject matter experts and developers with its own budget, headcount, and skills so as not to distract from or siphon resources from existing projects.
  • To what extent are internal skills available? If data scientists and AI developers are already clustered within a COE, it might be better to leave the team as-is, hiring additional experts as demand grows.
  • How important will it be to package and brand the results of an AI effort? If AI outcome is a new product or service, it might be better to create a dedicated team that can deliver the product and assume maintenance and enhancement duties as it continues to innovate.

2. “Should we launch our AI effort using some sort of solution, or will coding from scratch distinguish our offering?”

When people hear the term AI they conjure thoughts of smart Menlo Park hipsters stationed at standing desks wearing ear buds in their pierced ears and writing custom code late into the night. Indeed, some version of this scenario is how AI has taken shape in many companies.

Executives tend to romanticize AI development as an intense, heads-down enterprise, forgetting that development planning, market research, data knowledge, and training should also be part of the mix. Coding from scratch might actually prolong AI delivery, especially with the emerging crop of developer toolkits (Amazon SageMaker and Google Cloud AI are two examples) that bundle open source routines, APIs, and notebooks into packaged frameworks.

These packages can accelerate productivity, carving weeks or even months off development schedules. Or they can complicate collaboration.
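As a rough illustration of the trade-off, here is a minimal sketch of a first fraud-classification prototype built almost entirely from packaged, open-source routines; scikit-learn is assumed here as a stand-in for the toolkits named above, and the data and features are synthetic:

```python
# Minimal sketch: a first fraud-classification prototype built from packaged,
# open-source routines (scikit-learn) rather than hand-written training code.
# The synthetic data stands in for real transaction features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))                  # 10 synthetic "transaction" features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=5000) > 2).astype(int)  # rare "fraud" label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier()             # packaged algorithm, no custom training loop
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

A from-scratch effort would re-implement most of what these few lines provide; whether that extra work pays off is exactly the question posed above.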

Considerations:

  • Is time-to-delivery a success metric? In other words, is there lower tolerance for research or so-called “skunkworks” projects where timeframes and outcomes could be vague?
  • Is there a discrete budget for an AI project? This could make it easier to procure developer SDKs or other productivity tools.
  • How much research will developer toolboxes require? Depending on your company’s level of skill, in the time it takes to research, obtain approval for, procure, and learn an AI developer toolkit your team could have delivered important new functionality.

3. “Do we need a business case for AI?”

It’s all about perspective. AI might be positioned as edgy and disruptive with its own internal brand, signaling a fresh commitment to innovation. Or it could represent the evolution of analytics, the inevitable culmination of past efforts that laid the groundwork for AI.

I’ve noticed that AI projects are considered successful when they are deployed incrementally, when they further an agreed-upon goal, when they deliver something the competition hasn’t done yet, and when they support existing cultural norms.

Considerations:

  • Do other strategic projects require business cases? If they do, decide whether you want AI to be part of the standard cadre of successful strategic initiatives, or to stand on its own.
  • Are business cases generally required for capital expenditures? If so, would bucking the norm make you an innovative disruptor, or an obstinate rule-breaker?
  • How formal is the initiative approval process? The absence of a business case might signal a lack of rigor, jeopardizing funding.
  • What will be sacrificed if you don’t build a business case? Budget? Headcount? Visibility? Prestige?

4. “We’ve had an executive sponsor for nearly every high-profile project. What about AI?”

Incumbent norms once again matter here. But when it comes to AI the level of disruption is often directly proportional to the need for a sponsor.

A senior AI specialist at a health care network decided to take the time to discuss possible AI use cases (medication compliance, readmission reduction, and deep learning diagnostics) with executives “so that they’d know what they’d be in for.” More importantly she knew that the executives who expressed the most interest in the candidate AI undertakings would be the likeliest to promote her new project. “This is a company where you absolutely need someone powerful in your corner,” she explained.

Considerations:

  • Does the company’s funding model require an executive sponsor? Challenging that rule might cost you time, not to mention allies.
  • Have high-impact projects with no executive sponsor failed?  You might not want your AI project to be the first.
  • Is the proposed AI effort specific to a line of business? In this case enlisting an executive sponsor familiar with the business problem AI is slated to solve can be an effective insurance policy.

5. “What practical advice do you have for teams just getting started?”

If you’re new to AI you’ll need to be careful about departing from norms, since this might attract undue attention and distract from promising outcomes. Remember Peter Drucker’s quote about culture eating strategy for breakfast? Going rogue is risky.

On the other hand, positioning AI as disruptive and evolutionary can do wonders for both the external brand as well as internal employee morale, assuring constituents that the company is committed to innovation, and considers emerging tech to be strategic.

Either way, the most important success measures for AI are setting accurate expectations, sharing them often, and addressing questions and concerns without delay.

Considerations:

  • Distribute a high-level delivery schedule. An unbounded research project is not enough. Be sure you’re building something—AI experts agree that execution matters—and be clear about the delivery plan.
  • Help colleagues envision the benefits. Does AI promise first mover advantage? Significant cost reductions? Brand awareness?
  • Explain enough to color in the goal. Building a convolutional neural network to diagnose skin lesions via image scans is a world away from using unsupervised learning to discover unanticipated correlations between customer segments. As one of my clients says, “Don’t let the vague in.”

These days AI has mojo. Companies are getting serious about it in a way they haven’t been before. And the more your executives understand about how it will be deployed—and why—the better the chances for delivering ongoing value.

Source : https://www.cio.com/article/3318639/artificial-intelligence/5-questions-ceos-are-asking-about-ai.html

Augmented reality, the state of the art in the industry – Miscible

Miscible.io attended the Augmented World Expo Europe in Munich in October 2018; here is my report.

What a great #AWE2018 show in Munich, with a strong focus on industrial usage; of course, the German automotive industry was well represented. There were some new, simple but efficient AR devices and plenty of good use cases with a confirmed ROI. This edition was PRAGMATIC.

Here are my six takeaways from this edition. Enjoy!

1 – The return on investment of AR solutions

The use of XR by automotive companies, big pharma, and teachers confirmed good ROI with some “ready to use” solutions, especially in these domains:

2 – These are still the early days of AR, and improvements are expected to address several drawbacks

  • Hardware: field of view, contrast/brightness, 3D asset resolution
  • Some AR headsets are heavy to wear, which can affect operator comfort and safety.
  • Accuracy of the virtual-to-reality overlay and recognition
  • Automation of the process from authoring software to an end-user solution.

3 – The challenge of authoring

To create specific and advanced AR apps, there are still challenges with content authoring and with integration to legacy systems to retrieve master data and 3D assets. Automated and integrated AR apps need some ingenious development.

An interesting use case from Boeing (using HoloLens to assist cable mounting) shows how they built an integrated and automated AR app. Their AR solution architecture has four blocks:

  • A web service to design the new AR app (UX and workflow)
  • A call to legacy systems to collect master data and 3D data/assets
  • Creation of an integrated data package (an asset bundle) for the AR app
  • Creation of the specific AR app (Vuforia/Unity), to be transferred to the standalone device, the HoloLens headset.

4 – The concept of the 3D asset as master data

The usage of AR and VR is becoming more important in many domains, from design to maintenance and sales (configurators, catalogs, etc.).

The consequence is that original CAD files get transformed and reused across different processes of your company. It becomes a challenge to bring high-polygon models from CAD applications into other 3D/VR/AR applications, which need lighter 3D assets as well as texture and rendering adjustments.

glTF can be a solution: it defines an extensible, common publishing format for 3D content tools and services that streamlines authoring workflows and enables interoperable use of content across the industry.
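As a rough illustration, here is a minimal sketch of converting a CAD-exported mesh into glTF for AR/VR reuse, assuming the open-source trimesh library; the file names are placeholders:

```python
# Minimal sketch: converting a CAD-exported mesh into glTF (binary .glb) so it
# can be reused by AR/VR tools. Assumes the open-source `trimesh` library; the
# file names are hypothetical.
import trimesh

mesh = trimesh.load("engine_bracket.obj", force="mesh")   # hypothetical high-polygon CAD export
print("faces in the CAD export:", len(mesh.faces))

# Heavy CAD geometry usually also needs decimation and texture/rendering
# adjustments before AR use; that step is omitted here and depends on tooling.

mesh.export("engine_bracket.glb")                          # glTF 2.0 binary, ready for AR/VR pipelines
```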

The main challenge is to implement a good centralised and integrated 3D asset management strategy, treating these assets as being as important as your other key master data.

5 – Service companies and experts to design advanced AR/VR solutions integrated into the enterprise information system

Designing advanced and integrated AR solutions for large companies requires a new kind of expert, combining knowledge of 3D apps with experience in system integration.

These projects need new types of information system architecture that take AR technologies into account.

PTC looks like a leader in providing efficient and scalable tools for large companies. PTC, owner of Vuforia, also excels with other 3D/PLM management solutions like Windchill, smoothly integrating 3D management into all of the enterprise’s processes and IT.

Sopra Steria, the French IS integration company, is also taking on this role, bringing its system integration experience to the new AR/VR usages in industry.

If you don’t want to invest in this kind of complex project, then for a first step into AR/VR or for some quick wins on a low budget, new content authoring solutions exist to build your AR app with simple user interfaces and workflows: Skylight by Upskill and WorkLink by Scope AR.

6 – The need for an open AR Cloud

“A real time 3D (or spatial) map of the world, the AR cloud, will be the single most important software infrastructure in computing. Far more valuable than Facebook’s social graph or Google’s PageRank index,” says Ori Inbar, co-founder and CEO of AugmentedReality.ORG. A promising prediction.

The AR cloud provides a persistent, multi-user, and cross-device AR landscape. It allows people to share experiences and collaborate. The best-known AR cloud experience so far is the famous Pokémon Go game.

So far, AR maps work using GPS, image recognition, or a local point cloud for a limited space such as a building. The dream is to copy the world as a point cloud for a global AR cloud landscape: a real-time system that could also be used by robots, drones, and more.

The AWE exhibition presented some interesting AR cloud initiatives:

  • The Open AR Cloud Initiative launched at the event and had its first working session.
  • Some good SDKs are now available to build your own local AR clouds: Wikitude and Immersal

Source : https://www.linkedin.com/pulse/augmented-reality-state-art-industry-fr%C3%A9d%C3%A9ric-niederberger/

 

Edge Computing Emerges as Megatrend in Automation – Design News

Edge computing technology is quickly becoming a megatrend in industrial control, offering a wide range of benefits for factory automation applications. While the major cloud suppliers are expanding, new communications hardware and software technology are beginning to provide solutions that go beyond the previous offerings used in factory automation.

A future application possibility that illustrates both the general concept and potential impact of edge computing in automation and control is edge data being visualized on a tablet in a brownfield application. (Image source: B&R Industrial Automation)

“The most important benefit [compared to existing solutions] will be interoperability—from the device level to the cloud,” John Kowal, director of business development for B&R Industrial Automation, told Design News. “So it’s very important that communications be standards-based, as you see with OPC UA TSN. ‘Flavors’ of Ethernet including ‘flavors’ of TSN should not be considered as providing interoperable edge communications, although they will function perfectly well in a closed system. Interoperability is one of the primary differences between previous solutions. OPC UA TSN is critical to connecting the edge device to everything else.”

Emerging Technology Solutions

Kowal added that, in legacy installations, gateways will be necessary to translate data from proprietary systems—ideally using OPC UA over standard Ethernet to the cloud. An edge computing device can also provide this gateway translation capability. “One of the benefits of Edge technology is its ability to perform analytics and optimization locally, and therefore achieve faster response for more dynamic applications, such as adjusting line speeds and product accumulation to balance the line. You do not expect this capability of a gateway,” Kowal added.

Sari Germanos of B&R added that these comments about edge computing can equally be applied to the cloud. “With edge, you are using fog instead of cloud with a gateway. Edge controllers need things like redundancy and backup, while cloud services do that for you automatically,” Germanos said. He also noted that cloud computing generally makes data readily accessible from anywhere in the world, while the choice of serious cloud providers for industrial production applications is limited. Edge controllers are likely to have more local features and functions, though the responsibility for tasks like maintenance and backup falls on the user.

Factory Automation Applications

Kowal noted that, arguably, any automation application would benefit from collecting and analyzing data at the edge. But the key questions are: what kind of data, which aspects of operations, and what analytics can deliver actionable productivity improvements? “If your goal is uptime, then you will want to collect data on machine health, such as bearing frequencies, temperatures, lubrication and coolant levels, increased friction on mechanical systems, gauging, and metrology,” he said.
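As a rough illustration of this kind of local data collection, here is a minimal sketch that polls machine-health tags over OPC UA, assuming the open-source python-opcua client; the endpoint URL, node identifiers, and thresholds are hypothetical:

```python
# Minimal sketch: polling machine-health tags (temperature, vibration) from an
# OPC UA server at the edge. Assumes the open-source `opcua` (python-opcua)
# client; the endpoint URL, node identifiers, and thresholds are hypothetical.
import time
from opcua import Client

client = Client("opc.tcp://192.168.0.10:4840")   # hypothetical machine controller endpoint
client.connect()
try:
    temperature = client.get_node("ns=2;s=Machine1.BearingTemperature")  # hypothetical node ids
    vibration = client.get_node("ns=2;s=Machine1.VibrationRMS")
    for _ in range(10):                           # simple local polling loop
        t, v = temperature.get_value(), vibration.get_value()
        if t > 80.0 or v > 4.5:                   # limits would come from the machine's specs
            print("maintenance alert:", t, v)
        time.sleep(1.0)
finally:
    client.disconnect()
```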

Some of the same logic applies to product quality. Machine wear and tear leads to reduced yield, which can in turn be defined in terms of OEE data that may already be gathered, but is not captured at shorter intervals or automatically communicated and analyzed.

Capturing Production Capacity as well as Machine and Materials Availability

Beyond the maintenance and production efficiency aspects, Kowal said that users should consider capturing production capacity, machine and raw material availability, and constraint and output data. These will be needed to schedule smaller batch sizes, tier more effectively into ordering and production scheduling systems, and ultimately improve delivery times to customers.

Edge control technology also offers benefits compared to IoT gateway products. Kowal said that he’s never been big on splitting hairs with technology definitions—at least not from the perspective of results. But fundamentally, brownfield operators tend to want gateways to translate between their installed base of equipment, which may not even be currently networked, and the cloud. Typically, these are boxes equipped with legacy communications interfaces that act as a gateway to get data from the control system without a controls retrofit, which can be costly, risky, and even ineffective.

“We have done some work in this space, though B&R’s primary market is in new equipment,” Kowal added. “In that case, you have many options how to implement edge computing on a new machine or production line. You can use smart sensors and other devices direct to cloud or to an edge controller. The edge controller or computing resource can take many form factors. It can be a machine controller, an industrial PC that’s also used for other tasks like HMI or cell control, a small PLC used within the machine, or a standalone dedicated edge controller.”

Boosted Memory, Processing, and Connections

Germanos noted that industrial controllers were not designed to be edge controllers; they are typically designed to control one machine versus a complete production line.  Edge controllers have built-in redundancy to maintain production line operation.

“If I was designing a new machine, cell, line, or facility, I would set up the machine controllers as the edge controller/computers rather than add another piece of control hardware or gateway,” Germanos said. “Today, you can get machine controllers with plenty of memory, processing power, and network connections. I would not select a control platform unless it supports OPC UA, and I would strongly urge selecting a technology provider that supports the OPC UA TSN movement known as ‘The Shapers,’ so that as this new standard for Industrial Ethernet evolves, I would be free from the ‘flavors’ of Ethernet.”

His recommendation is to use a platform that runs a real-time operating system for the machinery on one core and, using a hypervisor, whatever other OS might be appropriate for any additional applications that run on Windows or Linux.

Source : https://www.designnews.com/automation-motion-control/edge-computing-emerges-megatrend-automation/27888481159634

 

Building safe artificial intelligence: specification, robustness, and assurance – DeepMind

Building a rocket is hard. Each component requires careful thought and rigorous testing, with safety and reliability at the core of the designs. Rocket scientists and engineers come together to design everything from the navigation course to control systems, engines and landing gear. Once all the pieces are assembled and the systems are tested, we can put astronauts on board with confidence that things will go well.

If artificial intelligence (AI) is a rocket, then we will all have tickets on board some day. And, as in rockets, safety is a crucial part of building AI systems. Guaranteeing safety requires carefully designing a system from the ground up to ensure the various components work together as intended, while developing all the instruments necessary to oversee the successful operation of the system after deployment.

At a high level, safety research at DeepMind focuses on designing systems that reliably function as intended while discovering and mitigating possible near-term and long-term risks. Technical AI safety is a relatively nascent but rapidly evolving field, with its contents ranging from high-level and theoretical to empirical and concrete. The goal of this blog is to contribute to the development of the field and encourage substantive engagement with the technical ideas discussed, and in doing so, advance our collective understanding of AI safety.

In this inaugural post, we discuss three areas of technical AI safety: specification, robustness, and assurance. Future posts will broadly fit within the framework outlined here. While our views will inevitably evolve over time, we feel these three areas cover a sufficiently wide spectrum to provide a useful categorisation for ongoing and future research.

Three AI safety problem areas. Each box highlights some representative challenges and approaches. The three areas are not disjoint but rather aspects that interact with each other. In particular, a given specific safety problem might involve solving more than one aspect.

Specification: define the purpose of the system

You may be familiar with the story of King Midas and the golden touch. In one rendition, the Greek god Dionysus promised Midas any reward he wished for, as a sign of gratitude for the king having gone out of his way to show hospitality and graciousness to a friend of Dionysus. In response, Midas asked that anything he touched be turned into gold. He was overjoyed with this new power: an oak twig, a stone, and roses in the garden all turned to gold at his touch. But he soon discovered the folly of his wish: even food and drink turned to gold in his hands. In some versions of the story, even his daughter fell victim to the blessing that turned out to be a curse.

This story illustrates the problem of specification: how do we state what we want? The challenge of specification is to ensure that an AI system is incentivised to act in accordance with the designer’s true wishes, rather than optimising for a poorly-specified goal or the wrong goal altogether. Formally, we distinguish between three types of specifications:

  • ideal specification (the “wishes”), corresponding to the hypothetical (but hard to articulate) description of an ideal AI system that is fully aligned to the desires of the human operator;
  • design specification (the “blueprint”), corresponding to the specification that we actually use to build the AI system, e.g. the reward function that a reinforcement learning system maximises;
  • and revealed specification (the “behaviour”), which is the specification that best describes what actually happens, e.g. the reward function we can reverse-engineer from observing the system’s behaviour using, say, inverse reinforcement learning. This is typically different from the one provided by the human operator because AI systems are not perfect optimisers or because of other unforeseen consequences of the design specification.

A specification problem arises when there is a mismatch between the ideal specification and the revealed specification, that is, when the AI system doesn’t do what we’d like it to do. Research into the specification problem of technical AI safety asks the question: how do we design more principled and general objective functions, and help agents figure out when goals are misspecified? Problems that create a mismatch between the ideal and design specifications fall in the design subcategory above, while problems that create a mismatch between the design and revealed specifications fall in the emergent subcategory.

For instance, in our AI Safety Gridworlds* paper, we gave agents a reward function to optimise, but then evaluated their actual behaviour on a “safety performance function” that was hidden from the agents. This setup models the distinction above: the safety performance function is the ideal specification, which was imperfectly articulated as a reward function (design specification), and then implemented by the agents producing a specification which is implicitly revealed through their resulting policy.

*N.B.: in our AI Safety Gridworlds paper, we provided a different definition of specification and robustness problems from the one presented in this post.

From Faulty Reward Functions in the Wild by OpenAI: a reinforcement learning agent discovers an unintended strategy for achieving a higher score.

As another example, consider the boat-racing game CoastRunners analysed by our colleagues at OpenAI (see Figure above from “Faulty Reward Functions in the Wild”). For most of us, the game’s goal is to finish a lap quickly and ahead of other players — this is our ideal specification. However, translating this goal into a precise reward function is difficult, so instead, CoastRunners rewards players (design specification) for hitting targets laid out along the route. Training an agent to play the game via reinforcement learning leads to a surprising behaviour: the agent drives the boat in circles to capture re-populating targets while repeatedly crashing and catching fire rather than finishing the race. From this behaviour we infer (revealed specification) that something is wrong with the game’s balance between the short-circuit’s rewards and the full lap rewards. There are many more examples like this of AI systems finding loopholes in their objective specification.
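A toy numeric sketch (not the actual CoastRunners environment) makes the mismatch concrete: the proxy reward below scores target hits plus a lap bonus, while the ideal specification only counts finished laps, and the invented numbers favour the looping strategy:

```python
# Toy illustration (not the actual CoastRunners game): a proxy reward (the
# design specification) scores target hits and a lap bonus, while the ideal
# specification only cares about finishing the lap. All numbers are invented.
TARGET_POINTS = 10   # design spec: points per target hit
LAP_BONUS = 50       # design spec: points for completing a lap

def finish_lap_policy():
    """Hits a few targets on the way around and finishes one lap."""
    targets_hit, laps = 5, 1
    return targets_hit * TARGET_POINTS + laps * LAP_BONUS, laps

def loop_targets_policy():
    """Circles a cluster of re-populating targets and never finishes."""
    targets_hit, laps = 30, 0
    return targets_hit * TARGET_POINTS + laps * LAP_BONUS, laps

for name, policy in [("finish lap", finish_lap_policy), ("loop targets", loop_targets_policy)]:
    proxy_reward, laps = policy()
    print(f"{name:12s} proxy reward = {proxy_reward:3d}, laps finished (ideal spec) = {laps}")

# The looping policy scores higher on the proxy reward (300 vs 100) even though
# it never satisfies the ideal specification: the mismatch described above.
```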

Robustness: design the system to withstand perturbations

There is an inherent level of risk, unpredictability, and volatility in real-world settings where AI systems operate. AI systems must be robust to unforeseen events and adversarial attacks that can damage or manipulate such systems. Research on the robustness of AI systems focuses on ensuring that our agents stay within safe limits, regardless of the conditions encountered. This can be achieved by avoiding risks (prevention) or by self-stabilisation and graceful degradation (recovery). Safety problems resulting from distributional shift, adversarial inputs, and unsafe exploration can be classified as robustness problems.

To illustrate the challenge of addressing distributional shift, consider a household cleaning robot that typically cleans a petless home. The robot is then deployed to clean a pet-friendly office, and encounters a pet during its cleaning operation. The robot, never having seen a pet before, proceeds to wash the pet with soap, leading to undesirable outcomes (Amodei and Olah et al., 2016). This is an example of a robustness problem that can result when the data distribution encountered at test time shifts from the distribution encountered during training.

From AI Safety Gridworlds. During training the agent learns to avoid the lava; but when we test it in a new situation where the location of the lava has changed, it fails to generalise and runs straight into the lava.
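A minimal sketch of the same failure mode in a supervised setting, assuming scikit-learn and synthetic data: a classifier leans on a feature that is predictive during training but uninformative after the distribution shifts, and its accuracy collapses:

```python
# Minimal sketch of distributional shift: a classifier relies on a feature that
# is predictive during training but carries no information at deployment.
# Synthetic data; scikit-learn is assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 5000

# Training distribution: feature 1 is a near-perfect (spurious) proxy for the label.
x0_train = rng.normal(size=n)
y_train = (x0_train > 0).astype(int)
x1_train = (2 * y_train - 1) + rng.normal(scale=0.1, size=n)
X_train = np.column_stack([x0_train, x1_train])

# Deployment distribution: the spurious feature no longer carries information.
x0_test = rng.normal(size=n)
y_test = (x0_test > 0).astype(int)
x1_test = rng.normal(size=n)
X_test = np.column_stack([x0_test, x1_test])

model = LogisticRegression().fit(X_train, y_train)
print("training-distribution accuracy:", accuracy_score(y_train, model.predict(X_train)))
print("shifted-distribution accuracy: ", accuracy_score(y_test, model.predict(X_test)))
```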

Adversarial inputs are a specific case of distributional shift where inputs are specially designed to trick an AI system.

An adversarial input, overlaid on a typical image, can cause a classifier to miscategorise a sloth as a race car. The two images differ by at most 0.0078 in each pixel. The first one is classified as a three-toed sloth with >99% confidence. The second one is classified as a race car with >99% probability.
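A minimal sketch of how such inputs can be constructed, using the fast gradient sign method on a small, randomly initialised network; PyTorch is assumed, and this is not the classifier from the figure:

```python
# Minimal sketch of the fast gradient sign method (FGSM) on a small, randomly
# initialised network (not the image classifier in the figure). PyTorch is
# assumed; the input is a random stand-in "image".
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))

x = torch.rand(1, 1, 28, 28)                    # stand-in image with pixels in [0, 1]
label = model(x).argmax(dim=1)                  # use the model's own prediction as the label

x_adv = x.clone().requires_grad_(True)
loss = nn.functional.cross_entropy(model(x_adv), label)
loss.backward()
perturbation_direction = x_adv.grad.sign()      # FGSM: move each pixel along the gradient's sign

print("clean prediction:", label.item())
for epsilon in (0.01, 0.05, 0.1, 0.25):         # per-pixel perturbation budgets
    x_perturbed = (x + epsilon * perturbation_direction).clamp(0, 1)
    print(f"epsilon={epsilon}: prediction =", model(x_perturbed).argmax(dim=1).item())
# With a large enough budget the prediction usually flips, even though every
# pixel changes by at most epsilon.
```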

Unsafe exploration can result from a system that seeks to maximise its performance and attain goals without safety guarantees that will not be violated as it learns and explores in its environment. An example would be the household cleaning robot putting a wet mop in an electrical outlet while learning optimal mopping strategies (García and Fernández, 2015; Amodei and Olah et al., 2016).

Assurance: monitor and control system activity

Although careful safety engineering can rule out many safety risks, it is difficult to get everything right from the start. Once AI systems are deployed, we need tools to continuously monitor and adjust them. Our last category, assurance, addresses these problems from two angles: monitoring and enforcing.

Monitoring comprises all the methods for inspecting systems in order to analyse and predict their behaviour, both via human inspection (of summary statistics) and automated inspection (to sweep through vast amounts of activity records). Enforcement, on the other hand, involves designing mechanisms for controlling and restricting the behaviour of systems. Problems such as interpretability and interruptibility fall under monitoring and enforcement respectively.

AI systems are unlike us, both in their embodiments and in their way of processing data. This creates problems of interpretability; well-designed measurement tools and protocols allow the assessment of the quality of the decisions made by an AI system (Doshi-Velez and Kim, 2017). For instance, a medical AI system would ideally issue a diagnosis together with an explanation of how it reached the conclusion, so that doctors can inspect the reasoning process before approval (De Fauw et al., 2018). Furthermore, to understand more complex AI systems we might even employ automated methods for constructing models of behaviour using Machine theory of mind (Rabinowitz et al., 2018).

ToMNet discovers two subspecies of agents and predicts their behaviour (from “Machine Theory of Mind”)
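The post does not prescribe a particular inspection method, but as an illustration of what monitoring tooling can look like in practice, here is a minimal sketch of one widely used technique, permutation feature importance, using scikit-learn on a synthetic dataset. It is not the approach used in the medical or theory-of-mind work cited above.

```python
# Minimal sketch of one common inspection technique, permutation feature
# importance, as an illustration of interpretability tooling. The dataset
# here is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

# Features whose shuffling hurts accuracy most are the ones the model relies on.
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance = {result.importances_mean[i]:.3f}")
```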

Finally, we want to be able to turn off an AI system whenever necessary. This is the problem of interruptibility. Designing a reliable off-switch is very challenging: for instance, because a reward-maximising AI system typically has strong incentives to prevent this from happening (Hadfield-Menell et al., 2017); and because such interruptions, especially when they are frequent, end up changing the original task, leading the AI system to draw the wrong conclusions from experience (Orseau and Armstrong, 2016).

A problem with interruptions: human interventions (i.e. pressing the stop button) can change the task. In the figure, the interruption adds a transition (in red) to the Markov decision process that changes the original task (in black). See Orseau and Armstrong, 2016.
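A toy way to see how an interruption changes the task is sketched below with a three-state Markov decision process and a short value-iteration loop. This is not the construction from Orseau and Armstrong, 2016, and the interruption probability is an illustrative assumption: adding the interruption transition changes the value the agent assigns to its starting state, and therefore what it learns from experience.

```python
# Minimal sketch (not the construction in Orseau & Armstrong, 2016): a tiny
# MDP evaluated with and without an interruption transition. The interruption
# probability p is an illustrative assumption.
import numpy as np

def value_of_start(p_interrupt, gamma=0.9, iters=100):
    """States: 0 = start, 1 = goal (absorbing), 2 = interrupted (absorbing)."""
    # Transition probabilities for the single "go forward" action.
    P = np.array([
        [0.0, 1.0 - p_interrupt, p_interrupt],   # from start
        [0.0, 1.0, 0.0],                         # goal is absorbing
        [0.0, 0.0, 1.0],                         # interrupted is absorbing
    ])
    # Expected immediate reward per state: 1 for reaching the goal, 0 otherwise.
    R = np.array([1.0 - p_interrupt, 0.0, 0.0])
    V = np.zeros(3)
    for _ in range(iters):
        V = R + gamma * P @ V                    # value iteration update
    return V[0]

print("value of start, no interruptions        :", round(value_of_start(0.0), 3))
print("value of start, interrupted 30% of time :", round(value_of_start(0.3), 3))
```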

Looking ahead

We are building the foundations of a technology which will be used for many important applications in the future. It is worth bearing in mind that design decisions which are not safety-critical at the time of deployment can still have a large impact once the technology becomes widely used. Choices that are convenient at the time may look very different once they have been irreversibly integrated into important systems, and we may find they cause problems that are hard to fix without a complete redesign.

Two examples from the history of programming are the null pointer, which Tony Hoare calls his 'billion-dollar mistake', and the gets() routine in C. If early programming languages had been designed with security in mind, progress might have been slower, but computer security today would probably be in a much stronger position.

With careful thought and planning now, we can avoid building in analogous problems and vulnerabilities. We hope the categorisation outlined in this post will serve as a useful framework for methodically planning in this way. Our intention is to ensure that AI systems of the future are not just ‘hopefully safe’ but robustly, verifiably safe — because we built them that way!

We look forward to continuing to make exciting progress in these areas, in close collaboration with the broader AI research community, and we encourage individuals across disciplines to consider entering or contributing to the field of AI safety research.

Source : https://medium.com/@deepmindsafetyresearch/building-safe-artificial-intelligence-52f5f75058f1

 

Visualizing Better Transportation: Data & Tools – Steve Pepple

There is a wide array of data, products, resources, and tools available, and in the spirit of “emergence” and getting data out of silos, this blog post lists many of them. These tools, techniques, and resources also make it possible to combine data in insightful ways.

Essentials

When you start working with data around transportation and geospatial analysis, you’ll enter a world full of technical terms and acronyms. It can be daunting at first, but you can learn step by step and there are countless resources to help you along the way.

Before you jump into data, here are a few essential resources and tools to take you from the basics (no coding required) to pro techniques:

Transit Tools

There are a number of data tools you can use to analyze and visualize transportation and geospatial data without needing to code.

Mobility Explorer from TransitLand.

Transportation & Mobility Data

Now that we’ve looked at some essential tools for mapping and analyzing data, let’s look at interesting data to visualize.

The following organizations are doing exciting work in transportation and mobility. They will be showcasing data and tools at our event on Sept. 26th:

  • ARUP — An independent firm of designers, planners, engineers, consultants, and technical specialists working across every aspect of today’s built environment.
  • SFCTA & SFMTA Emerging Mobility Committee — A joint committee between the two agencies that has created principles for mobility services and a number of useful tools for exploring transit and mobility in San Francisco.
  • Remix — Envision ideas, collaborate, and implement plans with a platform for the modern, multimodal city. They have a new tool for designing and visualizing scooters and bicycles.
  • Strava Metro — Plan and build better active transportation infrastructure by partnering with a global community of people on the move.
  • Swiftly — Data analytics for transit agencies for improving service quality, efficiency, and reliability.
Data Visualization from CTA Emerging Mobility, Strava, and Swiftly.

And here are a number of other datasets from other companies and organizations:

  • 311 Dashboard — Explore 311 complaints and service requests in San Francisco.
  • 511.org Portal — Developer portal and open data for 511 Bay Area including data for AC Transit, BART, Caltrain, Commute.org, SFMTA, SamTrans, and other transit operators.
311 Data Explorer & 511 Trip Planner and Developer Resources
Visualization by JUMP Bikes; Ford GoBike trips visualized by Steve Pepple using Carto.
  • NextBus — Provides real-time location data for a number of transportation agencies. Here is documentation on their developer API.
  • SharedStreets.io — A data standard and platform that serves as a launching pad for public-private collaboration and a clearinghouse for data exchange.
  • San Francisco Municipal Transportation Agency (SFMTA) — Provides an interactive project map. The agency also has an open data initiative in the works to aggregate data from emerging mobility services providers.
  • TNCs Today — Provides a data snapshot of Transportation Network Companies (TNCs), such as Lyft and Uber, operating in San Francisco.
  • Transitland — An aggregation of transit networks maintained by transit enthusiasts and developers.
  • Vital Signs Data Center — Explore a wide variety of public datasets related to transportation, land use, the economy, the environment, and social equity.
Example of Resident Travel by Transportation from MTC Vital Signs; TNCs today from SFCTA

Tools & Code

Once you have the data you want to explore and analyze, try these useful tools and libraries for analyzing and visualizing transportation and spatial data (a short GTFS example follows this list).

  • D3.js — Check out all the examples on Mike Bostock’s website. For example, here is how to create a real-time transit map of San Francisco.
  • Deck.gl — Open source data visualization tools from Uber. Especially good for visualization of large datasets in WebGL maps.
  • Esri Transit Tools — Tools for ESRI and ArcGIS users working with transit and network data.
  • Geocode.earth — Open source geocoder (based on Mapzen’s Pelias) that allows users to look up geographic coordinates of addresses and vice versa. Mapbox, CARTO, and Esri also have search APIs for geocoding addresses.
  • Leaflet.js — the best frontend library for working with the display of points, symbols, and all types of features on the web and mobile devices. The library supports rectangles, circles, polygons, points, custom markers, and a wide variety of layers. It performs quickly, handles a variety of formats, and makes styling of map features easy.
  • Open Source Routing Machine — OSRM is a project for routing paths between origins and destinations in road networks. Mapbox also has a turn-by-turn Directions API, and Nokia Here has a service that supports transit.
  • Open Source Planning Tools — An extension of GTFS for transportation planning and network analysis.
  • Replica — A city planning tool from Sidewalk labs for exploring and analyzing where people move. Here’s Nick Bowden’s post about how the tool used de-identified or anonymous mobility and foot traffic data to model how people travel in urban areas.
  • Turf.js — Mapbox library for geospatial analysis in the browser. Turf lets you create collections of geographic features and then quickly analyze, process, and simplify the data spatially before visualizing it.
  • UrbanSim — An open source simulation platform for supporting planning and analysis of urban development, incorporating the interactions between land use, transportation, the economy, and the environment. You can check out a simulation of the Bay Area on the MTC portal.
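Many of the feeds listed above, for example those available through the 511.org portal or Transitland, are published in the GTFS format mentioned earlier. As a starting point, here is a minimal sketch that summarises an unzipped GTFS feed with pandas; the folder path is a hypothetical placeholder, and the route_short_name column is assumed to be populated in the feed.

```python
# Minimal sketch: summarise a GTFS feed with pandas. The folder path is a
# hypothetical placeholder for any unzipped GTFS feed (e.g. one downloaded
# via the 511.org portal or Transitland).
import pandas as pd

GTFS_DIR = "data/my_gtfs_feed"   # hypothetical local path to an unzipped feed

routes = pd.read_csv(f"{GTFS_DIR}/routes.txt")
trips = pd.read_csv(f"{GTFS_DIR}/trips.txt")
stop_times = pd.read_csv(f"{GTFS_DIR}/stop_times.txt")

# Trips per route: a quick sense of service frequency across the network.
trips_per_route = (
    trips.merge(routes, on="route_id")
         .groupby("route_short_name")["trip_id"]
         .count()
         .sort_values(ascending=False)
)
print(trips_per_route.head(10))

# Average number of stops per trip, via stop_times.
stops_per_trip = stop_times.groupby("trip_id")["stop_sequence"].count()
print("average stops per trip:", round(stops_per_trip.mean(), 1))
```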

Source : https://medium.com/@stevepepple/visualizing-better-transportation-data-tools-e48b8317a21c
