Introducing Foresight AI, a global-scale data platform for mobile robots

A natural question is: how exactly does your data help mobile robots? Before answering that, let us review a typical robotic system, shown in the figure below. A robot has to gather data from multiple sensors (“sensor fusion”), understand what it senses (“perception”), know where it is (“localization”), decide on its next step (“decision making”), and execute that step (“control”). The geometric and semantic layers of our HD map empower the perception, localization, and decision making modules, which is well understood in the industry. The dynamic scenarios, however, are the hidden gems that can substantially improve a robot’s decision making capability.

Figure 4. How can our dynamic HD map empower the mobile robots?
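As a rough illustration of this module chain, the sketch below strings the stages together in a single cycle. The class and method names are hypothetical placeholders chosen purely to show the data flow; a real robot stack would run these modules concurrently and at different rates.

```python
from dataclasses import dataclass

@dataclass
class WorldModel:
    """Fused, interpreted view of the robot's surroundings (hypothetical structure)."""
    obstacles: list   # tracked objects from perception
    pose: tuple       # (x, y, heading) from localization

def run_cycle(sensors, perception, localization, planner, controller):
    """One pass through the classic sense -> perceive -> localize -> decide -> act loop."""
    raw = sensors.read_all()               # sensor fusion: merge camera, lidar, radar, IMU
    obstacles = perception.detect(raw)     # perception: what is around the robot
    pose = localization.estimate(raw)      # localization: where the robot is (HD map helps here)
    world = WorldModel(obstacles=obstacles, pose=pose)
    trajectory = planner.decide(world)     # decision making: predict, plan, choose a motion
    controller.execute(trajectory)         # control: turn the plan into actuator commands
```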

“Decision making” is arguably the most crucial and challenging part of a mobile robot system. We use it as a general term covering several related techniques: motion and intent prediction, behavioral decision making, and motion planning. The robot needs to predict how the objects around it will move, search for feasible paths, and commit to a motion plan, all in a fast-changing world. Optimal decision making is hard, in fact NP-hard in computer science terms, and can only be approached with approximate solutions.

The better the decision making capability, the more autonomous a robot can be. After all, even humans find decision making difficult.
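One common family of approximations is to sample a finite set of candidate trajectories and score each one against the predicted motion of nearby agents. The sketch below shows that idea in miniature; the cost terms and point formats are illustrative assumptions, not our production algorithm.

```python
import math

def score(candidate, predictions, goal):
    """Lower is better: reward progress toward the goal, penalize proximity to predicted agents.

    candidate: list of (x, y) waypoints; predictions: list of agent paths in the same format.
    """
    cost = math.dist(candidate[-1], goal)            # progress term
    for agent_path in predictions:
        for p, q in zip(candidate, agent_path):
            cost += 5.0 / (math.dist(p, q) + 0.1)    # soft collision penalty (assumed weights)
    return cost

def choose_trajectory(candidates, predictions, goal):
    """Approximate decision making: pick the best of a sampled candidate set."""
    return min(candidates, key=lambda c: score(c, predictions, goal))
```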

Take self-driving cars as an example. A car predicts the motion and intent of other vehicles, bicycles, and pedestrians; decides on its motion path for the coming 15 seconds (e.g., before an unprotected left turn) or even up to 2 minutes (e.g., before the next highway exit); and constantly updates that decision every 100 milliseconds. Decision making may have an even smaller time budget when edge cases occur, such as when a car suddenly stops, a jaywalker steps off the curb, or a mattress lies in the middle of the road. It may take years or even decades to improve the decision making capability and bring current autonomous vehicles to the human level.
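The replanning cadence can be sketched as a fixed-rate loop that tightens its budget when an edge case is detected. Everything below is an assumption for illustration: the 20 ms emergency budget, the `detect_edge_case` hook, and the anytime-planner interface are hypothetical.

```python
import time

REPLAN_PERIOD_S = 0.10       # nominal 100 ms decision cycle
EMERGENCY_PERIOD_S = 0.02    # tighter budget for edge cases (assumed value)

def planning_loop(world, planner, controller, detect_edge_case):
    """Continuously re-decide on a steady cadence, tightening the budget for emergencies."""
    while True:
        start = time.monotonic()
        budget = EMERGENCY_PERIOD_S if detect_edge_case(world) else REPLAN_PERIOD_S
        plan = planner.decide(world, time_budget=budget)  # anytime planner returns its best plan so far
        controller.execute(plan)
        # sleep out the remainder of the cycle so decisions arrive at a steady rate
        time.sleep(max(0.0, budget - (time.monotonic() - start)))
```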

Our dynamic scenarios are generated specifically to address this problem. We can dramatically accelerate the development process by feeding realistic, dynamic, 3D data into the decision making module. The real-world motion trajectories, enhanced with perturbation and augmentation, can train and validate the decision making algorithms. One can run through thousands or even millions of such scenarios per night in a simulator and help the decision making algorithm evolve quickly. Figure 5 illustrates the training process using our real-world dynamic data: a self-driving car (green box) attempts an unprotected right turn with other vehicles and humans nearby (yellow boxes) in the open-source Gazebo simulator.

Figure 5. Examples of applying real-world dynamic scenarios to train driving decision algorithms.
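To give a sense of how one logged scenario can be turned into many simulator runs, here is a minimal sketch of trajectory perturbation. The (t, x, y) sample format, noise magnitudes, and function names are assumptions for illustration, not our actual augmentation pipeline.

```python
import random

def perturb_trajectory(trajectory, pos_sigma=0.3, speed_scale=0.1):
    """Create a variant of a recorded agent trajectory by jittering positions and timing.

    trajectory: list of (t, x, y) samples from a real-world log (format assumed).
    """
    scale = 1.0 + random.uniform(-speed_scale, speed_scale)   # agent slightly faster or slower
    return [(t * scale,
             x + random.gauss(0.0, pos_sigma),
             y + random.gauss(0.0, pos_sigma)) for t, x, y in trajectory]

def augment_scenario(scenario, n_variants=100):
    """Turn one logged scenario (agent name -> trajectory) into many variants for overnight runs."""
    return [{agent: perturb_trajectory(traj) for agent, traj in scenario.items()}
            for _ in range(n_variants)]
```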

One of our key projects is to build the world’s largest dynamic scenario corpus, so that every self-driving car can run through our extensive scenarios and pass all the real-world tests (e.g., double merges, unprotected left turns, jaywalking pedestrians) before it can safely operate on the road. One such corpus will need to be built for each country, or even each metropolitan area, to reflect differences in driving behavior. We will offer this corpus as a subscription service to customers in the appropriate regions.
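In practice, such a corpus would be consumed as a region-specific regression suite. The sketch below shows one way a customer might drive that check; the record fields, the `simulate` stand-in, and its result attributes are all hypothetical.

```python
def regression_pass(corpus, region, planner, simulate):
    """Run a planner through every scenario tagged for a region; return the failures.

    `corpus` is assumed to be an iterable of scenario records with "region", "tags", and "id"
    fields; `simulate` is a stand-in for a simulator run (e.g., a Gazebo replay) that reports
    collisions and timeouts.
    """
    failures = []
    for scenario in corpus:
        if scenario["region"] != region:
            continue
        result = simulate(planner, scenario)
        if result.collision or result.timed_out:
            failures.append((scenario["id"], scenario["tags"]))
    return failures
```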
