http://www.theguardian.com/technology/2015/dec/23/the-problem-with-self-driving-cars-who-controls-the-code
"A car is a high-speed, heavy object with the power to kill its users
and the people around it. A compromise in the software that allowed an
attacker to take over the brakes, accelerator and steering (such as last
summer’s exploit against Chrysler’s Jeeps,
which triggered a 1.4m vehicle recall) is a nightmare scenario. The
only thing worse would be such an exploit against a car designed to have
no user-override – designed, in fact, to treat any attempt from the
vehicle’s user to redirect its programming as a selfish attempt to avoid
the Trolley Problem’s cold equations.
Whatever problems we will have with self-driving cars, they will be
worsened by designing them to treat their passengers as adversaries.
That has profound implications beyond the hypothetical silliness of
the Trolley Problem. The world of networked equipment is already
governed by a patchwork of 'lawful interception' rules requiring them to
have some sort of back door to allow the police to monitor them. These
have been the source of grave problems in computer security: the
2011 attack by the Chinese government on the Gmail accounts of
suspected dissident activists was executed by exploiting lawful
interception; so was the NSA’s wiretapping of the Greek government
during the 2004 Olympic bidding process."
I drive a lot for work, and I've thought of scenarios where the poor computer program would be oblivious to the moral choices in front of it. It would only know things like "avoid hitting something" and "if in doubt, slow down." It would have no way of knowing the human cost of picking one of those options over another. So who gets to write that code? And who gets to decide?
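To make that concrete, here's a rough sketch of the kind of rule-based logic I'm imagining. It's purely hypothetical, not anyone's actual driving code: the names, the braking figure, and the tiny action vocabulary are all my own stand-ins. The point is what's missing from it.

```python
# Hypothetical sketch of a naive obstacle-avoidance policy.
# This does not reflect any real autonomous-driving stack; it only
# illustrates rules like "avoid hitting something" and "if in doubt, slow down".

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Obstacle:
    distance_m: float        # how far ahead the obstacle is
    lateral_offset_m: float  # negative = left of the car, positive = right
    confidence: float        # sensor confidence, 0.0 to 1.0


def choose_action(obstacles: List[Obstacle], speed_mps: float) -> str:
    """Pick an action from a tiny fixed vocabulary.

    Note what is absent: nothing here can say *what* the obstacle is,
    or weigh one collision against another. The moral dimension simply
    has no representation in the data the program sees.
    """
    nearest: Optional[Obstacle] = min(
        obstacles, key=lambda o: o.distance_m, default=None
    )

    if nearest is None:
        return "maintain_speed"

    # "If in doubt, slow down": low sensor confidence means brake gently.
    if nearest.confidence < 0.5:
        return "slow_down"

    # "Avoid hitting something": brake hard if stopping distance is tight,
    # otherwise steer toward whichever side has more clearance.
    stopping_distance_m = speed_mps ** 2 / (2 * 7.0)  # assumes ~7 m/s^2 braking
    if nearest.distance_m <= stopping_distance_m:
        return "brake_hard"
    return "steer_left" if nearest.lateral_offset_m > 0 else "steer_right"


if __name__ == "__main__":
    scene = [Obstacle(distance_m=18.0, lateral_offset_m=0.4, confidence=0.9)]
    print(choose_action(scene, speed_mps=20.0))  # prints "brake_hard"
```

Everything morally relevant lives outside that function's inputs, which is exactly why it matters who writes the code, and who is allowed to override it.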