Too smart for our own good?
Source: Dilbert by Scott Adams, February 11, 1992
Two stories this past week caught my eye. The first is Nvidia’s unveiling of its new, AI-focused Tesla P100 computer chip. Introduced at April’s annual GPU Technology Conference, the P100 is the largest computer chip in history in terms of transistor count, “the product of around $2.5 billion worth of research and development at the hands of thousands of computer engineers.” Nvidia CEO Jen-Hsun Huang said that the chip was designed and dedicated “to accelerating AI; dedicated to accelerating deep learning.” But the revolutionary potential of the P100 depends on AI engineers coming up with new algorithms that can leverage the full range of the chip’s capabilities. Absent such advances, Huang says, the P100 would end up being the “world’s most expensive brick.”
The development of the P100 demonstrates, in case we needed a reminder, the immense technical advances made in computing power in recent years, and it highlights the possibilities those advances raise for AI systems that can be designed to perform (and even learn to perform) an ever-increasing variety of human tasks. But an essay by Adam Elkus that appeared this week in Slate questions whether we have the ability, or for that matter will ever have the ability, to program an AI system with human values.
I’ll open with a necessary criticism: much of Elkus’s essay reads like an extended effort to annoy Stuart Russell. (The most amusing moment in the essay comes when Elkus suggests that Russell, who literally wrote the book on AI, needs to bone up on his AI history.) Elkus devotes much of his virtual ink to cobbling together out-of-context snippets from a year-old interview that Russell gave to Quanta Magazine and using those snippets to construct strawman arguments that he then attributes to Russell. But despite the strawmen and snide comments, Elkus makes some good points on the vexing question of how to program ethics and morality into AI systems.