Automated Cars: Redux
Last year I linked to an article that detailed a proposed law in Nevada allowing artificially intelligent cars on the road and offered some thoughts about what a future with A.I. cars might look like.
Recently, A.I. cars have been in the news again. Thomas Fray, via the World Future Society, argues that we'll see A.I. cars within the next 10 years and offers some thoughts on what the industry will look like. Initially, as Fray says, cars will require humans to monitor their operation 'just in case.' This is true today, although SingularityU recently hosted a fascinating talk with Sebastian Thrun, who is working on Google's automated car. Thrun claimed that every time a human took control of the car, it was merely as a precaution, and that the only accidents after thousands of miles of driving occurred when a human was at the wheel. If that's right, the A.I. may be further along than Fray thinks, although for liability and other legal reasons he's probably still right that humans will have to monitor the cars until they become commonplace enough that the legal system adjusts. Fray also points out (correctly, I think) that cars will have to begin communicating with each other to really optimize the automation. Once the roads are mapped and the cars are more intelligent, the riding experience ought to be seamless. However, it's also possible that the roads themselves could be upgraded with sensors every x feet, such that a simple A.I. in the car could still drive accurately.
BMW has recently released a 5-series with a whole host of A.I.-related driving packages and has had success testing it on the Autobahn. See the (really cool) video below:
Once A.I. for cars gets better, it seems logical that it will be integrated with other car technology, like these folding cars developed by M.I.T. Once A.I. technology is perfected (or close to it), there seems to be little reason to keep many of the components currently required in cars, like steering wheels and gas pedals. Electric cars (or other methods of delivering fuel in a smaller form than the current gas tank) will enable automakers to shrink cars further as they remove the inefficient combustion engine in favor of these alternate methods of propulsion. One lingering question about the M.I.T. car remains, however: how do I bring a load of groceries home?
The New York Times, however, recently posted an article about (shock!) skeptical lawyers. We legal types are trained to look for liability and potential problems everywhere, so it's unsurprising that the overall attitude of the conference was skeptical. Interestingly, though, much of the skepticism was reserved for how the cars would deal with irrational humans, rather than for whether they could perform the tasks they're supposed to. How would an A.I. car handle humans who blow red lights, for instance, or roll through a stop sign? Would it react appropriately to a police car trying to pull it over, or to a school bus on the other side of the road with its flashers deployed?
Finally, there is good reason to think that this A.I. revolution is not limited to cars, not least because we saw significant progress in military drones well before the current hype about cars. Rumor has it that much of the flight time for commercial aircraft has been automated for years. Now it seems the military is at it again with a pilotless plane that looks much less like a drone and much more like some sweet sci-fi fantasy come to life. Although many people are concerned about autonomous weapons in the military, as Peter Singer notes in the article, it's happening already and will certainly continue into the future. To me, this seems like a non-issue. As a nation, we already mourn the loss of human soldiers and endeavor to keep them safe. What better way to do that than sending in autonomous robots to eliminate the enemy? Of course, once A.I. gets good enough, the problem may come full circle. Are different ethical issues present when we send sentient robots in to do our killing (where they have a good chance of being destroyed themselves) than when we send other humans in to do it? Is the loss of a sentient robot any less tragic than the loss of a fellow human? Will enhanced humans have more sympathy for sentient robots than fully biological humans will? I don't have the answers to these questions, but I look forward to the discussion.