On Robot-Delivered Bombs

A Northrop Grumman Remotec Andros, a bomb-disposal robot similar to the one reportedly used by police to end the Dallas standoff.


“In An Apparent First, Police Used A Robot To Kill.”  So proclaimed a Friday headline on NPR’s website, referring to the method Dallas police used to end the standoff with Micah Xavier Johnson, the Army veteran who shot 12 police officers and killed five of them on Thursday night.  Johnson had holed himself up in a garage after his attack and told police negotiators that he would kill more officers in the final standoff.  As Dallas Police Chief David Brown said at a news conference on Friday morning, “[w]e saw no other option but to use our bomb robot and place a device on its extension for it to detonate where the subject was.  Other options would have exposed our officers to grave danger.”

The media’s coverage of this incident generally has glossed over the nature of the “robot” that delivered the lethal bomb.  The robot was not an autonomous weapon system that operated free of human control, which is what most people picture when they hear the term “killer robot.”  Rather, it was a remote-controlled bomb disposal robot (that was sent, ironically, to deliver and detonate a bomb rather than to remove or defuse one).  Such a robot operates in more or less the same manner as the unmanned aerial vehicles or “drones” that have seen increasing military and civilian use in recent years.  As with drones, there is a human somewhere who controls every significant aspect of the robot’s movements.
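
For readers who want a concrete picture of that distinction, the toy sketch below contrasts a teleoperated system, where every action traces back to an explicit human command, with an autonomous one, where the machine chooses its own actions from sensor input. The class names and commands are my own invention for illustration, not anything drawn from a real robot’s control software.

```python
# Purely illustrative sketch: hypothetical classes, not a real robot's API.

class TeleoperatedRobot:
    """Remote-controlled: the robot only ever executes explicit operator commands."""

    def execute(self, operator_command: str) -> str:
        # No decision-making happens here; a human chose this action.
        return f"executing operator command: {operator_command}"


class AutonomousRobot:
    """Autonomous: the robot selects its own action based on sensor input."""

    def decide_and_execute(self, sensor_reading: float) -> str:
        # The machine, not a human, chooses what to do next.
        action = "advance" if sensor_reading > 0.5 else "hold position"
        return f"executing self-selected action: {action}"


if __name__ == "__main__":
    remote = TeleoperatedRobot()
    print(remote.execute("drive forward two meters"))         # a human decided this

    autonomous = AutonomousRobot()
    print(autonomous.decide_and_execute(sensor_reading=0.7))  # the machine decided this
```

The Dallas robot falls squarely in the first category: a human operator decided where it would go and when the device on its extension would detonate.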

Legally, I don’t think the use of such a remote-controlled device to deliver lethal force presents any special challenges.  Because a human is continuously in control of the robot–albeit from a remote location–the lines of legal liability are no different than if the robot’s human operator had walked over and placed the bomb himself.  I don’t think that entering the command that detonates a robot-delivered bomb is any different from a legal standpoint than a sniper pulling the trigger on his rifle.  The accountability problems that arise with autonomous weapons simply are not present when lethal force is delivered by a remote-controlled device.

» Read more

Tay the Racist Chatbot: Who is responsible when a machine learns to be evil?

By far the most entertaining AI news of the past week was the rise and rapid fall of Microsoft’s teen-girl-imitation Twitter chatbot, Tay, whose Twitter tagline described her as “Microsoft’s AI fam* from the internet that’s got zero chill.”

(* Btw, I’m officially old–I had to consult Urban Dictionary to confirm that I was correctly understanding what “fam” and “zero chill” meant. “Fam” means “someone you consider family” and “no chill” means “being particularly reckless,” in case you were wondering.)

The remainder of the tagline declared: “The more you talk the smarter Tay gets.”

Or not.  Within 24 hours of going online, Tay started saying some weird stuff.  And then some offensive stuff.  And then some really offensive stuff.  Like calling Zoe Quinn a “stupid whore.”  And saying that the Holocaust was “made up.”  And saying that black people (she used a far more offensive term) should be put in concentration camps.  And that she supports a Mexican genocide.  The list goes on.

So what happened?  How could a chatbot go full Goebbels within a day of being switched on?  Basically, Tay was designed to develop her conversational skills through machine learning, most notably by analyzing and incorporating the language of tweets sent to her by human social media users. What Microsoft apparently did not anticipate is that Twitter trolls would intentionally try to get Tay to say offensive or otherwise inappropriate things.  At first, Tay simply repeated the inappropriate things that the trolls said to her.  But before too long, Tay had “learned” to say inappropriate things without a human goading her to do so.  This was all but inevitable given that, as Tay’s tagline suggests, Microsoft designed her to have no chill.
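
To make that dynamic concrete, here is a heavily simplified sketch of the general failure mode. The names and the “learning” rule are made up for illustration and are not a description of how Microsoft actually built Tay; the point is only that a bot which folds incoming messages into its own pool of replies, with no filtering, will eventually parrot whatever it is fed:

```python
import random

class NaiveLearningChatbot:
    """Toy bot that 'learns' by adding every incoming message to its reply pool."""

    def __init__(self) -> None:
        # A small seed corpus of harmless replies.
        self.corpus = ["hello!", "tell me more", "that's so interesting"]

    def learn(self, incoming_message: str) -> None:
        # Every message a user sends becomes a candidate future reply, unmoderated.
        self.corpus.append(incoming_message)

    def reply(self) -> str:
        # Replies are drawn at random from the learned pool, so whatever users
        # repeat most often is what the bot is most likely to say, unprompted.
        return random.choice(self.corpus)


if __name__ == "__main__":
    bot = NaiveLearningChatbot()
    for message in ["nice weather today",
                    "[offensive slogan]", "[offensive slogan]", "[offensive slogan]"]:
        bot.learn(message)
    print([bot.reply() for _ in range(5)])  # the repeated input now dominates
```

Tay’s actual learning pipeline was certainly more sophisticated than this toy, but the underlying vulnerability, treating unfiltered user input as training data, is the same.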

Now, anyone who is familiar with the social media cyberworld should not be surprised that this happened–of course a chatbot designed with “zero chill” would learn to be racist and inappropriate, because the Twitterverse is filled with people who say racist and inappropriate things.  But fascinatingly, in examining why the Degradation of Tay happened, the media has overwhelmingly focused on the people who interacted with Tay rather than on the people who designed her.

» Read more

No, NHTSA did not declare that AIs are legal drivers

Source: NDTV Car and Bike

A slew of breathless stories have been published over the past couple days saying that the National Highway Traffic Safety Administration (NHTSA*) has declared that “Google’s driverless cars are now legally the same as a human driver,” or that “A.I. in Autonomous Cars Can Legally Count as the Driver.”  These stories typically go on to say that NHTSA’s decision marks “a major step toward ultimately winning approval for autonomous vehicles on the roads” or something along those lines.  CNN went even further, saying that NHTSA “gave its OK to the idea of a self-driving car without a steering wheel and so forth, that cannot be controlled by a human driver.”

Unfortunately, these news stories badly misstate–or at the very least overstate–what NHTSA has actually said.  First, the letter written by NHTSA’s chief counsel that served as the main source for these news stories does not say that an AI system can be a “driver” under the NHTSA rule that defines that term.  It merely assumes that an AI system can be a legal driver for purposes of interpreting whether Google’s self-driving car would comply with several NHTSA rules and regulations governing the features and components of motor vehicles.  In the legal world, that assumption is very, very different from a ruling that the Google car’s AI system actually qualifies as a legal driver.

NHTSA indicated that it would initiate its formal rulemaking process to consider whether it should update the definition of “driver” in light of the “changing circumstances” presented by self-driving cars.  But federal agency rulemaking is a long and complicated process, and the letter makes clear that dozens of rule changes would have to be made before a car like Google’s could comply with NHTSA standards.  Far from marking a significant step toward filling our roads with robocars, the letter underscores just how many legal and regulatory hurdles will have to be cleared before autonomous vehicles can freely operate on American roads.

The NHTSA letter does not say that AIs are legal drivers

The basis for the recent barrage of news stories is a letter that NHTSA’s Chief Counsel sent to the head of Google’s Self-Driving Car Project in response to a long series of questions regarding whether the Google car would, as designed, comply with NHTSA regulations that refer in some way to the “driver” or “operator” of a motor vehicle.  NHTSA is the federal agency tasked with creating and enforcing design and manufacturing standards for cars, most notably with respect to safety features.  Unsurprisingly, most of the current standards–many of which have been around for decades–operate under the assumption that a human driver located in the front-left seat of the vehicle will be steering the car, applying the brakes, turning on the headlights, and so forth.  Many NHTSA vehicle standards therefore require that the vehicle’s major control mechanisms be physically accessible from the front-left seat.

» Read more