Too smart for our own good?

Source: Dilbert by Scott Adams, 1992-02-11


Two stories this past week caught my eye.  The first is Nvidia’s revelation of the new, AI-focused Tesla P100 computer chip.  Introduced at April’s annual GPU Technology Conference, the P100 is the largest computer chip in history in terms of the number of transistors, “the product of around $2.5 billion worth of research and development at the hands of thousands of computer engineers.”  Nvidia CEO Jen-Hsun Huang said that the chip was designed and dedicated “to accelerating AI; dedicated to accelerating deep learning.”  But the revolutionary potential of the P100 depends on AI engineers coming up with new algorithms that can leverage the full range of the chip’s capabilities.  Absent such advances, Huang says, the P100 would end up being the “world’s most expensive brick.”

The development of the P100 demonstrates, in case we needed a reminder, the immense technical advances that have been made in computing power in recent years and highlights the possibilities those developments raise for AI systems that can be designed to perform (and even learn to perform) an ever-increasing variety of human tasks.  But an essay by Adam Elkus that appeared this week in Slate questions whether we have the ability–or for that matter, will ever have the ability–to program an AI system with human values.

I’ll open with a necessary criticism: much of Elkus’s essay seems like an extended effort to annoy Stuart Russell.  (The most amusing moment in the essay is when Elkus suggests that Russell, who literally wrote the book on AI, needs to bone up on his AI history.) Elkus devotes much of his virtual ink to cobbling together out-of-context snippets from a year-old interview that Russell gave to Quanta Magazine and using those snippets to form strawman arguments that Elkus then attributes to Russell.  But despite the strawmen and snide comments, Elkus makes some good points on the vexing issue of how to program ethics and morality into AI systems.

» Read more

Selective Revelation: Should we let robojudges issue surveillance and search warrants?

Credit: SimplySteno Court Reporting Blog



AI systems have an increasing ability to perform legal tasks that used to be within the exclusive province of lawyers.  Anecdotally, it seems that both lawyers and the general public are getting more and more comfortable with the idea that legal grunt work–the drafting of contracts, the review of voluminous documents, etc.–can be performed by computers with varying levels of (human) lawyer oversight.  But the idea of a machine acting as a judge is another matter entirely; people don’t seem keen on the idea of assigning to machines the task of making subjective legal decisions on matters such as liability, guilt, and punishment.

Consequently, I was intrigued when Thomas Dietterich pointed me to the work of computer scientist Dr. Latanya Sweeney on “selective revelation.”  Sweeney, who serves as Professor of Government and Technology in residence at Harvard, came up with selective revelation as a method of what she terms “privacy-preserving surveillance,” i.e., balancing privacy protection with the need for surveillance entities to collect and share electronic data that might reveal potential security threats or criminal activity.
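To make the idea concrete, here is a deliberately toy sketch of the selective-revelation principle.  This is my own illustration, not Sweeney’s actual model: the tier thresholds, justification scores, and field names are all invented.  The only point it captures is that the amount of identifying detail released scales with the strength of the documented justification.

```python
# Toy illustration of "selective revelation": the stronger the
# documented justification, the more identifying detail is released.
# All thresholds and field names below are invented for illustration.

RECORD = {
    "zip3": "021",            # coarse location (first 3 ZIP digits)
    "age_range": "30-39",
    "full_address": "12 Example St, Cambridge, MA",
    "name": "Jane Doe",
}

# (minimum justification score, fields revealed at that tier)
REVELATION_TIERS = [
    (0.0, ["zip3"]),                               # anonymized aggregate
    (0.5, ["zip3", "age_range"]),                  # de-identified detail
    (0.8, ["zip3", "age_range", "full_address"]),  # warrant-like showing
    (0.95, ["zip3", "age_range", "full_address", "name"]),
]

def reveal(record, justification_score):
    """Return only the fields that the justification score entitles
    the requesting entity to see."""
    allowed = []
    for threshold, fields in REVELATION_TIERS:
        if justification_score >= threshold:
            allowed = fields  # highest tier cleared wins
    return {k: record[k] for k in allowed}

print(reveal(RECORD, 0.3))   # coarse, anonymized data only
print(reveal(RECORD, 0.9))   # address revealed, identity still withheld
```

In a real system the “justification score” is exactly the subjective judgment a human judge supplies today, which is why the robojudge question is interesting at all.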

She proposes, in essence, creating a computer model that would mimic, albeit in a nuanced fashion, the balancing test that human judges undertake when determining whether to authorize a wiretap or issue a search warrant:

» Read more

Destroying Hezbollah’s missile cache: A proportionality case study and its implications for autonomous weapons



Source: Reuters, via the Haaretz opinion piece “Should Israel Consider Using Devastating Weapons Against Hezbollah Missiles?”

The concept of proportionality is central to the Law of Armed Conflict (LOAC), which governs the circumstances under which lethal military attacks can be launched under international law.  Proportionality in this context means that the harm done to civilians and civilian property in a given attack must not be excessive in light of the military advantage expected to be gained from the attack.  Conceptually, proportionality is supposed to evoke something resembling the scales of justice; if the “weight” of the civilian harm exceeds the “weight” of the military advantage, then the attack must not be launched.  But, of course, proportionality determinations are highly subjective.  The value of civilian property might be easy enough to determine, but there is no easy or obvious way to quantify the “value” of human lives, or of objects and buildings of religious or historical (as opposed to economic) significance.  Similarly, “military advantage” is not something that can easily be quantified, and there certainly is no accepted method of “comparing” expected military advantage to the value of civilian lives.
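To see why this matters for autonomous weapons, consider what a naive machine formalization of the balancing test would have to look like.  The sketch below is entirely hypothetical: every numeric weight is invented, and the paragraph’s point is precisely that there is no defensible way to choose such numbers.  The code illustrates the problem rather than solving it.

```python
# A naive, hypothetical formalization of the LOAC proportionality test.
# Every constant below is arbitrary; the legal point is that no accepted
# method exists for choosing these numbers at all.

def civilian_harm_score(casualties, property_damage_usd,
                        cultural_sites_destroyed):
    # How many dollars is a life "worth"?  A historic building?
    # Any constants here are pure fiction.
    VALUE_PER_LIFE = 7_000_000            # arbitrary
    VALUE_PER_CULTURAL_SITE = 10_000_000  # arbitrary
    return (casualties * VALUE_PER_LIFE
            + property_damage_usd
            + cultural_sites_destroyed * VALUE_PER_CULTURAL_SITE)

def military_advantage_score(targets_destroyed, threat_reduction):
    # "Military advantage" has no natural unit either; converting it
    # to dollars is equally fictional.
    VALUE_PER_TARGET = 5_000_000          # arbitrary
    return targets_destroyed * VALUE_PER_TARGET * (1 + threat_reduction)

def is_proportionate(harm, advantage):
    # The legal test: harm must not be "excessive" relative to
    # advantage -- but "excessive" is itself left undefined.
    return harm <= advantage

harm = civilian_harm_score(casualties=3, property_damage_usd=2_000_000,
                           cultural_sites_destroyed=0)
adv = military_advantage_score(targets_destroyed=4, threat_reduction=0.5)
print(is_proportionate(harm, adv))
```

An autonomous weapon making strike decisions would need some machine-readable version of this comparison; the subjectivity buried in each constant is the heart of the legal objection.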

Consider this opinion piece by Amitai Etzioni.  One of the greatest threats to Israel’s security comes from Hezbollah, a Lebanese Shi’a political party and paramilitary force that has carried out numerous terrorist attacks against Israel.  Hezbollah has a cache of 100,000 missiles and rockets, many, if not most, of which it would no doubt launch into Israel if hostilities between Israel and Hezbollah were to rekindle.  But since most of the missiles are located in private civilian homes, Etzioni asks: “If Hezbollah starts raining them down on Israel, how can these missiles be eliminated without causing massive civilian casualties?”

» Read more

Analysis of the USDOT’s Regulatory Review for Self-Driving Cars (Part 2): Automated vehicle concepts



As discussed in the first part of this analysis, the USDOT Volpe Center’s review of federal regulations (i.e., the Federal Motor Vehicle Safety Standards, or FMVSS) for autonomous vehicles had two components: a “Driver Reference Scan,” which combed through the FMVSS to identify all references to human drivers; and an “Automated Vehicle Concepts Scan,” which examined which of the FMVSS would present regulatory obstacles for the manufacturers of autonomous vehicles.  To perform this scan, the authors of the Volpe Center report identified thirteen separate types of “automated vehicle concepts” or designs, “ranging from near-term automated technologies (e.g., traffic jam assist) to fully automated vehicles that lack any mechanism for human operation.”

Here are those automated vehicle concepts as defined and described in the Volpe report:

» Read more