Questions from a young reader

Credit: Tom Toles, The Buffalo News, 1997


Last week I got an email from Will, an 8th Grader from Big D (little A, double L, A, S).  He is in a class where the students get to choose a topic to write about, and he chose AI because he had “always wondered about what makes a machine better than humans in an area.”

Will emailed me wanting to know if I could answer some questions he had about AI and its impact on our society.  I happily agreed, and he responded by sending five excellent questions.  After getting approval from Will and his teacher (thanks, Ms. Peterson!), I am posting Will’s questions and my responses below.  (I also sent Will an email with much shorter responses so that he wouldn’t fall asleep halfway through my answers).

Here they are:

 

What are your thoughts on the rapidly increasing investment in AI of huge companies such as Google and Microsoft?

This is one of the hottest topics in the world of AI policy right now.  In some ways, the investment in AI by these companies is a good thing.  There are so many things we could do with better AI systems, from having more accurate weather forecasts to reducing traffic on highways to helping doctors come up with better diagnoses when someone is sick.  Those things would bring great benefits to lots of people, and they could happen much more quickly if big companies focus their time and money on improving AI.

On the other hand, there are always dangers when big companies get too much power.  The usual way that we deal with those dangers has been through government action.  But modern AI technologies are very complicated—so complicated that sometimes even the people who design them may not totally understand why they do what they do!  It is hard to come up with good rules for things that no one completely understands.

The dangers are especially troubling now that we live in the age of “Big Data,” where the biggest tech companies are collecting huge amounts of information on people.  These data* could give these companies more power over ordinary people than any other group of companies in history.  The Economist actually had a great cover story this past week that discusses the problems that this could create.  Here is a link to it if you are interested in learning more about this.

(* Fun fact—data is actually a plural word, not singular!)

 

What is the most fundamental design element in AI? Is there a way to describe AI in your own words?

As strange as it may sound, describing what AI is may be the single most controversial issue in all of AI.  The problem is that “intelligence,” which is the fundamental thing that people working on AI are trying to build into machines, is a complicated concept.  Pretty much the only thing everyone agrees on when it comes to intelligence is that human brains have it.  That may seem like a good start, but it does not really provide us with a good way to come up with a single definition of “intelligence” for at least a couple reasons.

First, no one can agree on which of the many things that the human brain can do are necessary for “intelligence.”  Is being able to play chess a sign of intelligence?  If so, then computers have had intelligence since at least the 1960s.  Is being able to win a TV quiz show like “Jeopardy!” enough to show that something is intelligent?  If so, then IBM built an intelligent machine back in 2011, when its Watson computer beat two human champions.  On the other hand, do we only want to say that something is intelligent if it is capable of doing everything that human brains can do?  If so, then we certainly have not built an intelligent machine yet.

That leads to the second problem—there is still so much that we don’t understand about how the human brain works.  And since the human brain is considered the gold standard for “intelligence,” that makes it difficult to say what makes “intelligence” possible.

That being said, I think everyone has a good general idea of what “intelligence” means, even if it is tough to put into words.  In everyday conversation, it means something like “being able to do things well that require using your brain.”  Using that general idea, I’m sure you’ve noticed that different people have different levels of intelligence when it comes to different things.  Some people are good at math, but bad at art.  Others are good at art, but bad at math.  Others are bad at both math and art, but are really good at making other people laugh.  I think each of those things requires “intelligence” in some sense.  But since nobody (or at least nobody I’ve met) can do everything well, I don’t think a simple and easy definition of “intelligence” is possible.  That makes it difficult to describe what AI is.

Here is how I personally describe AI: I will say that a machine is intelligent—and thus is an AI system—if it can do things that, if performed by a human, we would say require intelligence.  If you have an iPhone, then you know that Siri can answer a wide variety of questions that you can ask it.  To me, that requires intelligence.  And so I will say that Siri is an AI system.

 

Do you believe in the near future we may have commercially used intelligent robots?

I noticed that in this question, you used the phrase “intelligent robots,” while your other questions used the term “AI.”  I point this out because “AI” and “robots” refer to somewhat different things.  My answer to your last question talked about what AI is (and why I would say that Siri—a product that is already in commercial use—is AI in my book).  “Robot” usually refers to a machine that can move around and do things in the world automatically.  A robot does not necessarily have “intelligence”—in fact, it may only be able to perform tasks that a human being specifically tells it to perform.

That being said, there is an ever-increasing trend toward building robots that are programmed with state-of-the-art AI technology.  Some of these are already on the market or very close to it, such as “driverless cars” (technically, these are usually referred to as “autonomous vehicles”).  I believe we are not too far off—as in within the next 10 or 15 years—from having robots capable of performing many tasks, from cleaning our streets to playing ping pong, that would have been the stuff of science fiction up until a few years ago.

 

Do you believe in the future that there is a risk of AI being uncontrollable and dangerous to the public? If so, how would we be able to keep AI under our control?

This is another difficult and controversial question.  As for the “uncontrollable” part, some of the smartest people in the AI world say that there is a real possibility that humans could build a “superintelligent” machine, and that a superintelligent machine is something that humans would not be able to control.  On the other hand, there are some other very smart people who say that it is fundamentally not possible for the sorts of AI systems we are building now to be even as intelligent and versatile as a chimpanzee, much less “superintelligent” and uncontrollable.

I honestly do not know which group is right.  But I certainly do believe that AI systems could pose a danger to the public, even if they do not become uncontrollable.  For a variety of reasons, if an AI system malfunctions and someone gets hurt, it might be hard to pinpoint what went wrong and take steps to prevent the same malfunction from happening again.  And I do not think our legal system is ready to handle the new challenges that would be presented when AI systems cause harm.

 

How deeply does AI affect how we live today in our daily lives?

We are already living in a world where AI affects most people’s daily lives, at least in the United States.  The maps on people’s cell phones use AI to decide which route will get them somewhere fastest, and can even change the route if traffic patterns suddenly change.  Netflix and Amazon use AI to identify movies you might like and products you might want to buy.  People can use AI systems to come up with a plan to save money for retirement.  And so on.  In other words, AI already affects people’s daily lives deeply and in many ways—and I expect that trend to continue in the years to come.
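If you are curious what “deciding which route is fastest” looks like under the hood, here is a tiny, made-up sketch (not how Google Maps or any real app actually works) of a classic shortest-path search. Each road segment gets an estimated travel time, and when traffic changes those estimates, the program simply searches again and may pick a different route. The road names and travel times below are invented for illustration.

```python
import heapq

def fastest_route(travel_times, start, goal):
    """Shortest-path (Dijkstra-style) search where edge weights are travel times in minutes."""
    # Each entry on the frontier is (total_minutes_so_far, current_intersection, path_so_far).
    frontier = [(0, start, [start])]
    visited = set()
    while frontier:
        minutes, here, path = heapq.heappop(frontier)
        if here == goal:
            return minutes, path
        if here in visited:
            continue
        visited.add(here)
        for neighbor, cost in travel_times.get(here, {}).items():
            if neighbor not in visited:
                heapq.heappush(frontier, (minutes + cost, neighbor, path + [neighbor]))
    return None  # no route exists

# A tiny made-up road network: estimated minutes between intersections.
roads = {
    "Home":    {"Main St": 5, "Highway": 2},
    "Main St": {"School": 6},
    "Highway": {"School": 7},
}

print(fastest_route(roads, "Home", "School"))  # normal traffic: (9, ['Home', 'Highway', 'School'])

roads["Highway"]["School"] = 20                # a crash slows the highway down
print(fastest_route(roads, "Home", "School"))  # reroute: (11, ['Home', 'Main St', 'School'])
```

Real mapping services layer much more on top of this (live traffic prediction, for example), but the basic idea of searching for the route with the lowest total travel time is the same.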
