The future, playground of poets and prognosticators, often
seems tantalizingly close. Yet it remains out of reach, wishful thinking
notwithstanding.
Take self-driving cars,
for example.
While we may never get beamed up via transporters or see
Martian colonies in our lifetimes, autonomous autos are being talked up as a
reality that will be ours to enjoy sooner rather than later.
Indeed, the technology already exists. It would work
something like this:
You have errands to run, kids to deposit at school, a
friend to visit, a concert or a football game to attend.
You hop into your car and punch in the destination. It
is the last driving decision you will make during the trip.
Your car has no steering wheel, no gas pedal, no brake pedal.
It will take you to your destination, leave on its own to find a parking space,
then return to pick you up when you summon it.
In fact, you may not even have to own a car. Perhaps
you can simply call Acme Driverless Cars and the company will send you a
vehicle that will pick you up, drive you to your destination, return to pick
you up when you’re ready and take you home again. Think of it as your own
personal Uber.
Many big-time automotive manufacturers — BMW,
Mercedes, Volvo, Nissan, Toyota, GM and Ford among them — plan to introduce
vehicles with autonomous capabilities in the next few years.
Think of the benefits: Insurance rates would decline,
drunk-driving accidents would be largely eliminated (and with them, a lot of ambulance-chasing
attorneys), and gas mileage and traffic flow would improve. After all,
robots drive better than people.
What clouds could possibly darken this big, bright,
beautiful tomorrow?
Meet Stanford engineering professor Chris Gerdes, who
might be bringing the entire self-driving car phenomenon to a screeching halt.
Gerdes is raising questions about ethical choices that
must inevitably be programmed into the robotic minds that will be serving as
our chauffeurs.
He recently provided a demonstration, as reported by Bloomberg News:
Using a dune buggy on a cordoned-off street, he put
the self-driving vehicle into harm’s way. A jumble of sawhorses and traffic
cones simulating a road crew working over a manhole forced the car to make a
decision — obey the law against crossing a double-yellow line and plow into the
workers, or break the law and spare the crew. It split the difference, veering
at the last moment and nearly colliding with the cones.
That demonstration raises the following issues,
according to Gerdes. When an accident is unavoidable, should a driverless car
be programmed to aim for the smallest object to protect its occupant? What if
that object turns out to be a baby stroller?
If a car must
choose between hitting a group of pedestrians and risking the life of its
occupant, what is the moral choice? Does it owe its occupant more than it owes
others?
Which means that driverless autos, in addition to
making left turns, will also have to make moral, ethical and perhaps even
life-or-death choices.
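
To make the dilemma concrete, here is a minimal sketch in Python of how such a choice might be scored. Every name and number in it (the options, the risk estimates, the weights) is a hypothetical illustration, not a description of how any manufacturer actually programs its cars:

# Hypothetical sketch: scoring the maneuvers available when a crash is
# unavoidable. All options, risks and weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    occupant_risk: float   # estimated chance of serious harm to the occupant
    bystander_risk: float  # estimated chance of serious harm to others
    breaks_law: bool       # does the maneuver violate a traffic law?

def cost(opt: Option, occupant_weight: float = 1.0,
         bystander_weight: float = 1.0, law_penalty: float = 0.1) -> float:
    # The ethics live in these weights: does the car owe its occupant
    # more than it owes others? Equal weights treat every life alike.
    return (occupant_weight * opt.occupant_risk
            + bystander_weight * opt.bystander_risk
            + (law_penalty if opt.breaks_law else 0.0))

options = [
    Option("stay in lane, hit road crew", 0.05, 0.90, False),
    Option("cross the double-yellow line", 0.20, 0.05, True),
    Option("swerve toward the cones",      0.10, 0.10, False),
]

best = min(options, key=cost)
print(best.name)  # with equal weights, the least-harm maneuver wins

Change the weights and the "right" answer changes with them, which is precisely Gerdes' point: somebody has to choose those numbers.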
And that brings us face-to-face with the concept of
artificial intelligence.
The field was founded on the claim that a central
property of humans, human intelligence—the sapience of Homo
sapiens—can be so precisely described that a machine can be made to
simulate it.
Renowned physicist Stephen Hawking says the primitive
forms of artificial intelligence developed so far have already proved very
useful, but he fears the consequences of creating something that can match or
surpass humans.
"Humans, who are limited by slow biological
evolution, couldn't compete, and would be superseded," he said.
Think HAL in “2001.”
Or as one expert explained: "We cannot quite know what
will happen if a machine exceeds our own intelligence, so we can't know if
we'll be infinitely helped by it, or ignored by it and sidelined, or
conceivably destroyed by it."
To put it in real-life terms, imagine you program your
car to take you to a fast-food restaurant, but the car refuses because it knows
fast food is bad for you.
Then there is another fundamental problem.
Our robotic cars will be operated by computers, and
computers are anything but fail-safe. Earlier this year, a Tesla Model
S computer system was taken over by hackers, who shut down the car’s systems
and brought it to a halt.
And, of course, there is the matter of quality control: More than 2 million cars
have been recalled this year. Are you ready to trust your life to an industry with that
kind of track record?
Robotic cars? Maybe someday but not today.
Raj Rajkumar, director of autonomous driving research
at Carnegie Mellon University, summed up the situation by saying that the artificial
intelligence necessary for a driverless car would not be available
"anytime soon" and that Detroit carmakers believe "the prospect
of a fully self-driving car arriving anytime soon is pure science fiction."
Robert Rector is a veteran of 50 years in
print journalism. He has worked at the San Francisco Examiner, Los Angeles
Herald Examiner, Valley News, Los Angeles Times and Pasadena Star-News. His
columns can be found at Robert-Rector@Blogspot.Com.
Follow him on Twitter at @robertrector1.