Quo vadis, AI?


For years, both the media and the business world have been captivated by the seemingly breathtaking pace of progress in artificial intelligence.  It’s been 21 years since Deep Blue beat Kasparov, and more than 7 years since Watson mopped the floor with Ken Jennings and Brad Rutter on Jeopardy.  The string of impressive advances has only seemed to accelerate since then, from the increasing availability of autonomous features in vehicles to rapid improvements in computer translation and promised breakthroughs in medicine and law.  The notion that AI is going to revolutionize every aspect of our lives has taken on the characteristics of gospel in business and tech journals.

But another trend has been slowly building in the background–namely, instances where AI has failed (sometimes quite spectacularly) to live up to its billing.  In 2016, some companies were predicting that fully autonomous cars would be available within 4 years.  Today, I get the sense that if you set an over/under of 4 years on when fully autonomous vehicles will be on the road, most watchers of the industry would take the “over” in a heartbeat.  That is partly due to regulatory hurdles, no doubt, but a substantial part of it is also that the technology just isn’t “there” yet, particularly given the need to integrate AVs into a transportation system dominated by unpredictable human drivers.  The early returns on the widely touted promise of an AI-powered revolution in cancer treatment are no better.

These are not the first examples of a technology failing to live up to its hype, of course.  AI itself has gone through several hype cycles, with “AI winters” bludgeoning the industry and all but ending funding for AI research in both the mid-1970s and the late 1980s.  In each instance, the winter was preceded by a period of overheated investment in AI and overheated predictions about the arrival of human-level intelligence.

The last five years have seen similarly massive investment in AI by both industry and government, along with headlines suggesting (sometimes not very subtly) that human-level AI may be within our reach, if not our grasp.

Certainly, the concrete accomplishments in AI in recent years have far exceeded those made in the periods preceding previous AI winters.  But, as Kai-Fu Lee pointed out in an interview earlier this month, nearly all of the seeming “breakthroughs” in AI over the past five years are really just different applications of one specific breakthrough–namely, deep learning:

“So why you might ask, why do we see all these headlines about AI doing cancer diagnosis, beating [humans at] Go, beating [humans at] chess, and doing all kinds of amazing things?” he said, speaking at an Oct. 9 event for Quartz and Retro Report’s “What Happens Next” project. “The reason is these are mere applications that were run on top of the one breakthrough.”

If no other “big breakthroughs” arrive and instances of AI capabilities falling short of their billing start to pile up, or if people repeatedly overestimate AI’s capabilities in dangerous ways, another AI winter is not out of the realm of possibility.  Commentators have started to take that possibility more seriously in recent months.

That being said, the wide array of successful applications of deep learning illustrates just how broad-based that “one breakthrough” has been, and how many more mini-breakthroughs it is likely to spur.  Lee obviously recognizes this; after all, he just published a best-selling book arguing that the economic and societal change driven by AI will be even more dramatic and rapid than most people realize.

So where is AI going?  Are we headed into another AI winter?  Or are we standing on the precipice of a technological revolution on a scale not seen since at least the Industrial Revolution?

Paradoxically, the answer might well be “both.”  I had a conversation about autonomous vehicles with Tracy Pearl at a conference a couple of years ago, and she said something that really stuck with me.  She said that there are a lot of technologies that can solve the problem they were designed to solve under 98% of the circumstances they will face, but where figuring out how to cover that last 2% proves extremely challenging, or perhaps even impossible without further technological advances.  That’s a problem because we often don’t know what we don’t know about a technology’s capabilities and the circumstances under which it will not function as expected or desired.  Those blind spots make it difficult to recognize in advance which problems are 2% intractable, much less which sets of circumstances fall within that 2%.

Right now, it seems like virtually every job is at risk of being either disrupted or obsoleted (I checked, that’s a word) by automation.  But undoubtedly, some jobs that seem safe today will fall victim to automation, while other jobs that seem particularly vulnerable to automation will remain largely unchanged.  Health care workers, for example, are one group whose numbers are predicted to grow substantially in the coming decades.  But if AI-powered personal care robots and wearable devices increase individuals’ ability to care for themselves, automation may end up reducing demand for nurses and caregivers.  Conversely, there may be inherently unpredictable elements of highway driving where human judgment will prove significantly better than modern AI.  If there are enough such elements, long-haul truckers–often cited as one of the jobs most vulnerable to being automated to extinction–may keep on trucking.

As this mixed reality starts to set in, two things will likely happen:

  1. Investment in AI by private investors and governments will slow, and more than a few companies will go the way of Rethink Robotics, a darling of the robotics industry that abruptly shut its doors last month after sales fell well short of projections.
  2. Other applications of AI–and insights from algorithmic analysis of data–will continue to transform our world.

The catch is that it’s very difficult to predict which companies and spheres of AI development will fall into Category 1 and which into Category 2. Some areas of the AI world will experience winter while the summer keeps getting hotter for others.

What lessons does this hold for AI policy? Well, perhaps most governments have inadvertently gotten it right by mostly taking a “sit-around-and-do-nothing” approach to AI. (I’d call it a “wait-and-see” approach, but that implies that it was the result of an informed decision.) Sweeping measures like extending legal personhood to AI systems or introducing broad government regulation of the AI industry (apologies for the multiple link-plugs) seem less necessary and less desirable than they did three years ago.

Some regulation undoubtedly will be necessary, most notably in the sphere of autonomous vehicles. And we definitely should be doing more to shore up worker training and retraining programs. The uncertainty regarding which jobs will be displaced and which will not makes it all the more important to have technical and vocational education and training programs in place that can make the labor market more adaptable.

Dress in layers. That way, we will be ready regardless of whether summer or winter comes.

One comment

  • Daniel Schiff

    Thank you for this sober and indeed paradoxical analysis. One consideration is that this wave seems different in scale of investment, interest, and public deployment. If that accompanies new advances in basic AI research, then we might be able to keep plugging along rather than resting on our laurels with deep learning alone. That is, perhaps the scale of investment is enough to avoid another AI winter…

    Best,
    Daniel
