Too smart for our own good?

Source: Dilbert comic strip by Scott Adams, February 11, 1992


Two stories this past week caught my eye.  The first is Nvidia’s unveiling of the new, AI-focused Tesla P100 computer chip.  Introduced at April’s annual GPU Technology Conference, the P100 is the largest computer chip in history in terms of the number of transistors, “the product of around $2.5 billion worth of research and development at the hands of thousands of computer engineers.”  Nvidia CEO Jen-Hsun Huang said that the chip was designed and dedicated “to accelerating AI; dedicated to accelerating deep learning.”  But the revolutionary potential of the P100 depends on AI engineers coming up with new algorithms that can leverage the full range of the chip’s capabilities.  Absent such advances, Huang said, the P100 would end up being the “world’s most expensive brick.”

The development of the P100 demonstrates, in case we needed a reminder, the immense technical advances that have been made in computing power in recent years and highlights the possibilities those developments raise for AI systems that can be designed to perform (and even learn to perform) an ever-increasing variety of human tasks.  But an essay by Adam Elkus that appeared this week in Slate questions whether we have the ability–or, for that matter, will ever have the ability–to program an AI system with human values.

I’ll open with a necessary criticism: much of Elkus’s essay seems like an extended effort to annoy Stuart Russell.  (The most amusing moment in the essay is when Elkus suggests that Russell, who literally wrote the book on AI, needs to bone up on his AI history.)  Elkus devotes much of his virtual ink to cobbling together out-of-context snippets from a year-old interview that Russell gave to Quanta Magazine and using those snippets to form strawman arguments that he then attributes to Russell.  But despite the strawmen and snide comments, Elkus makes some good points on the vexing issue of how to program ethics and morality into AI systems.

Elkus argues that there are few human values that are truly universal, which means that encoding values into an AI system prompts the “question of whose values ought to determine the values of the machine.”  He criticizes the view, which he attributes to Russell, that programmers can “sidestep these social questions” by coming up with algorithms that instruct machines to learn human values by observing human behavior.  Elkus rhetorically asks whether such programming means that “a machine could learn about American race relations by watching the canonical pro-Ku Klux Klan and pro-Confederacy film The Birth of a Nation?”
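The worry is easy to make concrete.  Here is a sketch of my own (not anything Russell has actually proposed): imagine the crudest possible version of “learning values by watching behavior,” in which a machine simply counts which choices the humans it observes tend to make.  The option names and numbers below are invented purely for illustration.

```python
# Toy sketch (an invented illustration, not Russell's method): "learning human
# values by observing behavior" in its crudest form, i.e., counting which
# option the observed humans choose most often.
from collections import Counter

def infer_values(observed_choices):
    """Estimate how much each option is 'valued' from its observed frequency."""
    counts = Counter(observed_choices)
    total = sum(counts.values())
    return {option: n / total for option, n in counts.items()}

# The inferred "values" depend entirely on what the machine is shown.
everyday_footage = ["cooperate"] * 80 + ["exclude"] * 20
biased_footage   = ["cooperate"] * 10 + ["exclude"] * 90

print(infer_values(everyday_footage))  # {'cooperate': 0.8, 'exclude': 0.2}
print(infer_values(biased_footage))    # {'cooperate': 0.1, 'exclude': 0.9}
```

Real value-learning proposals are far more sophisticated than frequency counting, but the dependence is the same: the machine’s picture of human values is only as representative as the behavior someone chose to show it.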

Indeed, Elkus points out that concepts that many AI experts take for granted contain implicit ethical choices that many people would dispute.  Elkus notes that “[w]hen [Russell] talks about ‘tradeoffs’ and ‘value functions,’ he assumes that a machine ought to be an artificial utilitarian.”
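To see that point concretely, consider another toy sketch of my own (not drawn from Russell or Elkus): a machine choosing between two actions that affect three people.  Summing individual utilities is the utilitarian rule; judging an action by how it treats the worst-off person is closer to a Rawlsian one.  The numbers are invented, and nothing in the math tells you which aggregation rule is the “right” one.

```python
# Toy illustration (invented numbers): the aggregation rule baked into a
# "value function" is itself an ethical choice.
actions = {
    "action_A": [9, 9, -5],  # great for two people, bad for the third
    "action_B": [2, 2, 2],   # a modest benefit for everyone
}

def utilitarian(utilities):
    """Aggregate utility: 'the greatest good for the greatest number'."""
    return sum(utilities)

def rawlsian(utilities):
    """Maximin: judge an action by its effect on the worst-off person."""
    return min(utilities)

for rule in (utilitarian, rawlsian):
    best = max(actions, key=lambda a: rule(actions[a]))
    print(f"{rule.__name__} picks {best}")

# utilitarian picks action_A  (a total of 13 beats 6)
# rawlsian picks action_B     (the worst-off person gets 2 rather than -5)
```

Choosing between those two functions is not an engineering decision; it is the ethical question itself, and it is precisely the choice Elkus says is being made implicitly.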

Of course, not all people agree that utilitarianism–focusing on “the greatest good for the greatest number” by maximizing aggregate utility–is a proper organizing ethical principle.  Firm individualists such as Ayn Rand find the collectivist tinge of utilitarianism repugnant, while religious figures such as Pope John Paul II have criticized utilitarianism for ignoring the will of God.  Harry Truman’s comments on the power of the atomic bomb, the last technological development that led to widespread concerns about existential risk, reveal how prominent religious concerns are even in industrialized societies.  In a post-Nagasaki statement, Truman did not express hope that nuclear technology would be used for the benefit of humanity; instead, he prayed that God would “guide us to use it in His ways and for His purposes.”

AI engineers might scoff at the notion that such factors should be taken into consideration when figuring out how to encode ethics into AI systems, but billions of people would likely disagree.  Indeed, the entire “rationality”-based view of intelligence that pervades the current academic AI literature would likely be questioned by people whose worldviews give primacy to religious or individualistic considerations.

Unfortunately, those people are largely absent from the conferences and symposia where AI safety concerns are aired.  Within the diverse world of people concerned with AI safety, many–including and perhaps especially Stuart Russell–have expressed dismay that the AI safety ‘world’ is split into various groups that don’t seem to listen to each other very well.  The academic AI people have their conferences, the tech industry people interested in AI have other conferences, the AI and law/ethics/society people have their conferences, and the twain (thrain?) rarely meet.  But Elkus suggests an even deeper problem–even those three groups are largely composed of, to paraphrase Elkus, “Western, well-off, white male cisgender scientists” and professionals.

As a result, even when all three groups come together in one place (which is not often enough), they hardly form a representative cross-section of human values and concerns.  Elkus questions whether such a comparatively privileged group should have “the right to determine how the machine encodes and develops human values, and whether or not everyone ought to have a say in determining the way that AI systems” make ethical decisions.


To end on a more positive note, however, my impression is that Russell and Elkus probably do not disagree on the problems of AI safety as much as Elkus thinks they do–a fact that Elkus himself would have discovered if he had bothered to review some of Russell’s other speeches and writings before writing his essay.  Russell has often made the point in his books and speeches that human programmers face significant hurdles in getting AI systems to both (a) understand human values and (b) “care” about human values.  The fact that Russell spends more time focusing on the latter does not mean he does not recognize the former.  Instead, Russell and Elkus share the fundamental concern of most people who have expressed concerns about AI safety: that we will make AI systems that have immense power but that lack the sense of ethics and morality necessary to know how to use it properly.  In the future, I hope that Elkus will find more constructive and thoughtful ways to address those shared concerns.

