Software helps self-driving cars learn faster - by Cade Metz

Before the car can drive without a human, a human must first get behind the wheel. As a driver at Aurora Innovation accelerates, stops and turns on local streets, sensors on the car record what he sees and track how he responds. Then a team of engineers builds software that can learn how to behave from that data.
The software is installed in the car so that it can drive on its own. In the end, the car mimics choices made by the human driver.
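Conceptually this is imitation learning: a model is trained to reproduce the actions the human driver took in the situations the sensors recorded. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch, assuming the logged data has already been reduced to fixed-size feature vectors paired with the driver's steering and throttle commands; it is not Aurora's actual system, and all names are invented.

```python
import torch
import torch.nn as nn

# Hypothetical policy network: maps a vector of sensor features to the
# steering and throttle commands the human driver issued at that moment.
class DrivingPolicy(nn.Module):
    def __init__(self, n_features=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 2),  # outputs: [steering, throttle]
        )

    def forward(self, sensors):
        return self.net(sensors)

def train(policy, loader, epochs=10):
    """Fit the policy to logged (sensor_features, human_action) pairs."""
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for sensors, human_action in loader:
            opt.zero_grad()
            loss = loss_fn(policy(sensors), human_action)  # mimic the human
            loss.backward()
            opt.step()
```

In a real system the inputs would be raw camera, lidar and radar data and the architecture far more elaborate, but the training principle is the same: minimize the difference between the model's output and the human's recorded action.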
This is how things work at Aurora, a start-up founded by three veterans of autonomous vehicle research, including Chris Urmson, who previously led the self-driving car project at Google.
The company’s methods are part of a change sweeping across the world of self-driving cars: a kind of machine learning technology that promises a chance for little companies like Aurora to compete with the giants of both the tech and automotive industries. With it, researchers can build and improve autonomous vehicles far more rapidly, one of the reasons Aurora believes it can close the gap separating it from companies that have been working on self-driving technology for years.
On Thursday, the year-old start-up said that it had agreed to supply self-driving technology to the Volkswagen Group and Hyundai, two of the world’s largest car companies. Johann Jungwirth, the chief digital officer at the Volkswagen Group, which owns Audi, Porsche and six other major automotive brands including the flagship VW, said the company has been working with Aurora for several months, with an eye toward developing autonomous cars and driverless taxi services.
In 2010, when Mr. Urmson and his colleagues at Google began the autonomous-vehicle movement, writing the computer code to guide their vehicles was a painstaking, line-by-line effort. But in recent years, a type of computer algorithm called a deep neural network has come in from the edges of academia to reinvent the way many technologies are built, including autonomous vehicles.
These algorithms can learn tasks on their own by analyzing vast amounts of data. “It used to be that a real smart Ph.D. sat in a cube for six months, and they would hand-code a detector” that spotted objects on the road, Mr. Urmson said during a recent interview at Aurora’s offices. “Now, you gather the right kind of data and feed it to an algorithm, and a day later, you have something that works as well as that six months of work from the Ph.D.”
The Google self-driving car project first used the technique to detect pedestrians. Since then, it has applied the same method to many other parts of the car, including systems that predict what will happen on the road and plan a route forward. Now, the industry as a whole is moving in the same direction.
But this shift raises questions. It is still unclear how regulators and lawyers, not to mention the general public, will view these methods. Because neural networks learn from such large amounts of data, relying on hours or even days of calculations, they operate in ways that their human designers cannot necessarily anticipate or understand. There is no means of determining exactly why a machine reaches a particular decision.
“This is a big transition,” said Noah Goodall, who explores regulatory and legal issues surrounding autonomous cars at the Virginia Transportation Research Council, an arm of the state’s Department of Transportation. “If you start using neural networks to control how a car moves and then it crashes, how do you explain why it crashed and why it won’t happen again?”
The seeds for this work were planted in 2012. Working with two other researchers at the University of Toronto, the graduate student Alex Krizhevsky built a neural network that could recognize photos of everyday objects like flowers, dogs and cars. By analyzing thousands of flower photos, it could learn to recognize a flower in a matter of days. And it performed better than any system coded by hand.
Soon, Mr. Krizhevsky and his colleagues moved to Google. Over the next few years Google and its internet rivals broke new ground in artificial intelligence, using these concepts to identify objects in photos, recognize commands spoken into smartphones, translate between languages and respond to internet search queries.
Over the holiday break at the end of 2013, another Google researcher, Anelia Angelova, asked for Mr. Krizhevsky’s help on the Google car project. Neither of them officially worked on the project; they were part of a separate A.I. lab called Google Brain. But they saw an opportunity.
Rather than trying to define for a computer what a pedestrian looked like, they created an algorithm that could allow a computer to learn what a pedestrian looked like. By analyzing thousands of street photos, their system could begin to identify the visual patterns that define a pedestrian, like the curve of a head or the bend of a leg. The method was so effective that Google began applying the technique to other parts of the project, including prediction and planning.
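As a rough illustration of what “learning what a pedestrian looks like” means in practice, the sketch below defines a small convolutional network that classifies image patches as pedestrian or not after being shown many labeled examples. The architecture and sizes are invented for this example and are not the system Google built.

```python
import torch
import torch.nn as nn

# Sketch of a convolutional classifier that learns "pedestrian vs. not"
# from labeled street-image patches instead of hand-written rules.
class PedestrianClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 2),  # logits: [not pedestrian, pedestrian]
        )

    def forward(self, x):  # x: a batch of 3x64x64 image patches
        return self.head(self.features(x))
```

The convolutional layers are what pick up visual regularities such as the curve of a head or the bend of a leg; nobody writes those patterns down by hand.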
“It was a big turning point,” said Dmitri Dolgov, who was part of Google’s original self-driving car team and is now chief technology officer at Waymo, the new company that oversees the project. He said that 2013 “was pretty magical.”
Mr. Urmson described this shift in much the same way. He believes the continued progress of these and other machine learning methods will be essential to building cars that can match and even exceed the behavior of human drivers.
Mirroring the work at Waymo, Aurora is building algorithms that can recognize objects on the road and anticipate and react to what other vehicles and pedestrians will do next. As Mr. Urmson explained, the software can learn what happens when a driver turns the vehicle in a particular direction at a particular speed on a particular type of road.
Learning from human drivers in this way is an evolution of an old idea. In the early 1990s, researchers at Carnegie Mellon University in Pittsburgh built a car that learned relatively simple behavior. Last year, a team of researchers at Nvidia, the computer chip maker, published a paper showing how modern hardware can extend the idea to more complex behavior.
But many researchers question whether carmakers can completely understand why neural networks make particular decisions and rule out unexpected behavior.
“For cars or flying aircraft, there is a lot of concern over neural networks doing crazy things,” said Mykel Kochenderfer, a robotics professor who oversees the Intelligent Systems Laboratory at Stanford University.
Some researchers have shown that neural networks trained to identify objects can be fooled into seeing things that aren’t there, though many, including Mr. Kochenderfer, are working to develop ways of identifying and preventing unexpected behavior.
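One widely studied way such networks can be fooled is the “fast gradient sign” trick, in which a tiny, deliberately chosen perturbation flips a classifier’s prediction even though the image looks unchanged to a person. A minimal sketch of the idea, not drawn from the article:

```python
import torch
import torch.nn.functional as F

def adversarial_example(model, image, true_label, epsilon=0.01):
    """Fast-gradient-sign attack: nudge every pixel slightly in the
    direction that most increases the classifier's loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    return (image + epsilon * image.grad.sign()).detach()
```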
Like Waymo, Toyota and others, Aurora says that its approach is more controlled than it might seem. The company layers cars with backup systems, so that if one system fails, another can offer a safety net. And rather than driving the car using a single neural network that learns all behavior from one vast pool of data, designers break the task into smaller pieces. One system detects traffic lights. Another predicts what will happen next on the road in a particular kind of situation. A third chooses a response. And so on. The company can train and test and retrain each piece.
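A toy illustration of that decomposition might look like the sketch below, in which each stage is a separate component that can be trained and tested on its own, with a conservative fallback acting as a safety net. The structure and names are hypothetical, not a description of Aurora’s software.

```python
class DrivingPipeline:
    """Hypothetical decomposition: each stage is a separate, independently
    testable component, with a conservative fallback as a safety net."""

    def __init__(self, light_detector, predictor, planner, fallback_planner):
        self.light_detector = light_detector      # e.g. a traffic-light network
        self.predictor = predictor                # forecasts other road users
        self.planner = planner                    # chooses the car's response
        self.fallback_planner = fallback_planner  # simple, well-understood backup

    def step(self, sensor_frame):
        lights = self.light_detector(sensor_frame)
        forecast = self.predictor(sensor_frame, lights)
        try:
            return self.planner(sensor_frame, lights, forecast)
        except Exception:
            # If the learned planner fails, fall back to the backup controller.
            return self.fallback_planner(sensor_frame)
```

Keeping the pieces separate is what makes Mr. Bagnell’s answer below possible: each one can be tested, and retrained, on its own.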
“How do you get confidence that something works?” asked Drew Bagnell, a machine learning specialist who helped found Aurora after leaving the self-driving car program at Uber. “You test it.”
Mr. Goodall, the Virginia Department of Transportation researcher, said car designers must reassure both regulators and the public that these methods are reliable.
“The onus is on them,” he said.