Will technology send us stumbling into negligence?

Two stories that broke this week illustrate the hazards that can come from our ever-increasing reliance on technology.  The first story is about an experiment conducted at Georgia Tech where a majority of students disregarded their common sense and followed the path indicated by a robot wearing a sign that read “EMERGENCY GUIDE ROBOT”:

A university student is holed up in a small office with a robot, completing an academic survey. Suddenly, an alarm rings and smoke fills the hall outside the door. The student is forced to make a quick choice: escape via the clearly marked exit that they entered through, or head in the direction the robot is pointing, along an unknown path and through an obscure door.

The vast majority of students–26 out of the 30 included in the experiment–went where the robot was pointing.  As it turned out, there was no exit in that direction.  The remaining four students either stayed in the room or were unable to complete the experiment.  No student, it seems, simply went out the way they came in.

Many of the students attributed their decision to disregard the correct exit to the “Emergency Guide Robot” sign, which suggested that the robot was specifically designed to tell them where to go in emergency situations.  According to the Georgia Tech researchers, these results suggest that people will “automatically trust” a robot that “is designed to do a particular task.”  The lead researcher analogized this trust “to the way in which drivers sometimes follow the odd routes mapped by their GPS devices,” saying that “[a]s long as a robot can communicate its intentions in some way, people will probably trust it in most situations.”

As if on cue, this happened the very same day that the study was released:

A gun battle broke out in a Palestinian neighborhood late Monday after Israeli forces tried to rescue two soldiers who had mistakenly entered the area because of an error on a satellite navigation app. . . .

According to initial Israeli reports, the two soldiers said they had been using Waze, a highly touted, Israeli-invented navigation app bought more than two years ago by Google.

As a result of following Waze’s directions, the soldiers entered a Palestinian refugee camp, and fighting broke out, leaving “at least one Palestinian dead and 10 injured, one seriously. At least 10 Israeli soldiers also were wounded during the hour-long operation.”

I would imagine that the Israeli soldiers’ commanding officer was none too happy, especially since an Israeli military spokesman said that “soldiers are under standing orders not to use GPS services in areas they are not familiar with.”  That being said, if the Georgia Tech study is any indication (and I suspect it is), a great many people would have used Waze, HERE, or another GPS service in the same situation, especially if they were not familiar with the area in question.  Even paper maps, the alternative suggested by the military spokesman, might have seemed less reliable than an app that is, after all, specifically designed to get people where they need to go right now, and that receives regular updates from the App Store or Google Play.  Civilians, certainly, would likely be inclined to disregard a paper map that conflicts with GPS instructions in the same way the Georgia Tech students disregarded the clearly marked exit in favor of the path suggested by the “Emergency Guide Robot.”

These two news stories bring a number of legal issues to mind.  First, under the usual legal rules, can we say that the students and/or soldiers acted negligently; that is, did they fail to exercise the level of caution and care that a reasonably prudent person in their situation would have exercised?  Our gut instinct might be to say that they did act negligently by failing to maintain situational awareness using their own senses, and, at least in the students’ case, by ignoring their common sense in favor of trusting an unfamiliar machine.  Certainly, judges and juries tend to lean heavily on their own intuition and can be unkind to those who ignore common sense.  But if most people would have done the same thing, can we really say that a person acts unreasonably if he ignores his common sense and instead chooses to rely on technology that is supposed to be tailor-made for the task at hand?

And if the Israeli soldiers were not negligent in relying on Waze, where do we draw the line between acceptable reliance on a technology and misplaced reliance on it?  Certainly, it would be negligent (if not reckless) for a driver to follow Waze’s directions along what the app marked as a “road” but what the driver could plainly see was an open-air market.  But how will we figure out where the negligence line should be drawn when people rely on automated systems?

These sorts of studies and anecdotes also raise interesting questions about how we should regulate and manage the emerging technology of self-driving cars.  A human in the Google Car does not merely have the option of turning control over to the AI system; the car is actually designed so that the human cannot take control himself or herself.  A self-driving car has no choice but to rely on GPS or similar technology for large-scale geographic guidance.  What sorts of safeguards will manufacturers be required to put in place to ensure that self-driving cars don’t drive their human occupants off a cliff, into a wall, or into other dangerous situations?  (Of course, given recent events, it seems unlikely that the Israeli military will let uniformed soldiers operate self-driving cars anywhere near a Palestinian population center, which, considering Israel’s size, probably means a significant chunk of the country.)

Of course, from a “big picture” perspective, we might decide that the occasional error, and perhaps even the occasional tragedy, is an acceptable price to pay for the overall increases in safety and reliability that these technological advances will bring.  Self-driving cars do not get drunk or tired, and real-life “emergency guide robots” presumably will be designed to be somewhat less useless than the robot in the Georgia Tech experiment.  But if judges and juries choose to fault people for ignoring their common sense, we might find that technology occasionally sends us stumbling into negligence as we transition to a society that relies more and more on autonomous machines.
