Part 2: AI Hope, Hype, and Disappointment

The history of AI has been driven as much by human psychology as by technology. Everyone understands its power and desirability, and everyone has an innate sense that they will recognize it, as always, by its results when it arrives.

In the 66 years since the Dartmouth conference, researchers have tackled one “feature of intelligence” or another and have indeed “solved kinds of problems once reserved for humans,” but they have yet to produce anything that our psychology recognizes as intelligence.

The first major wave of AI was based on the premise that knowledge could be “represented” as a set of rules that computers could process with logic. If you could add enough rules, you could eventually produce commonsense knowledge of the world and general intelligence. In its day, it generated great excitement and funding. But its focus was on a process for producing knowledge (logic), not on knowledge itself. The assumption that knowledge consists merely of a set of assertions that can be represented in symbols was flawed. The approach did not scale; commonsense knowledge was never achieved.
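
To make the premise concrete, here is a minimal sketch, purely illustrative and not drawn from any historical system, of “knowledge” as if-then rules over symbols applied by simple forward chaining; the rules and facts are invented for the example:

```python
# A minimal, invented sketch of the symbolic premise: knowledge as if-then
# rules over symbols, applied by forward chaining until nothing new follows.

RULES = [
    ({"is_penguin"}, "is_bird"),   # if X is a penguin, then X is a bird
    ({"is_bird"}, "can_fly"),      # if X is a bird, then X can fly
]

def infer(facts):
    """Repeatedly apply the rules until no new assertions can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer({"is_penguin"}))
# {'is_penguin', 'is_bird', 'can_fly'} -- already wrong about penguins.
```

Even this toy example hints at the scaling problem: every exception (penguins are birds that cannot fly) demands yet another hand-written rule, and the exceptions never end.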

The current great hope, the second wave, is Connectionist AI (or Data Science), particularly Machine Learning (ML). Never have the excitement, expectation, and funding been greater. But all is not well. Never have the hype, confusion, and even genuine dishonesty surrounding AI been higher.

A recent issue (October 2021) of IEEE Spectrum is devoted to a special report entitled, “Why is AI so Dumb?” The title itself is an oxymoron, since intelligence and dumb are opposites, but the meaning is clear enough.

The lead article in the IEEE report is, “The Turbulent Past and Uncertain Future of AI.” It gives an excellent summary of the recurring boom-and-bust cycle of AI optimism and pessimism that has existed since the 1950s.

The optimism phase begins with a new conjecture about what intelligence might be. Based on that conjecture, some aspirational technology is developed that produces interesting results and maybe solves some previously intractable problems. Next comes a round of publicity and hype that takes on a life of its own, and euphoria soon prevails. People begin to believe real AI is just around the corner, and the money pours in. But no matter how impressive the demonstrations and no matter how many interesting and unique applications are produced, real AI stubbornly refuses to appear. The bubble bursts, pessimism reigns, and an “AI Winter” ensues.

At this point, Connectionist AI is already well into the cycle. In fact, Machine Learning for the past several years has become synonymous with AI and the money is pouring in. The second quarter of 2021 saw record investments for “AI” startups, more than $20 billion[i]. Clearly, we are in the euphoria phase. But still, we see nothing like real AI.

There is no shame in trying to solve a hard problem and failing. Many times, though, when you miss the mark you are aiming for, you still create something of value. For example, today’s Machine Learning and deep learning techniques are producing many useful, even amazing, applications where statistical analysis of large datasets is appropriate. ML is very good at solving problems that can be solved with statistics and works well when you have large databases of statistically uniform elements like particles or pixels.
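
As a hedged illustration of that point (the data and labels below are synthetic, invented purely for the example), a standard statistical classifier readily separates two classes of “pixel” vectors that differ only in their average brightness:

```python
# A minimal sketch of ML as statistical pattern-finding over many uniform
# elements (here, fake 8x8 "images" drawn from two overlapping distributions).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 1,000 synthetic images: class 1 is, on average, slightly brighter than class 0.
X0 = rng.normal(loc=0.4, scale=0.2, size=(500, 64))
X1 = rng.normal(loc=0.6, scale=0.2, size=(500, 64))
X = np.vstack([X0, X1])
y = np.array([0] * 500 + [1] * 500)

model = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", model.score(X, y))
# The model reliably finds the statistical brightness pattern; it "knows"
# nothing about what an image is or what it depicts.
```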

But despite extravagant funding, efforts to use ML to make machines use language like humans yield only a transitory illusion of understanding without genuine comprehension of a single word. True thinking machines have not appeared.

It is not a crime to assuage, with a little loose language, the disappointment of some very intelligent people whose aspirations have failed. But, of course, we know it is really about money and prestige.

It sounds far more glamorous to be an “AI Researcher” than an expert in advanced statistical methods. It wraps one in the same mantle of hope and aspiration as the ancient wizards who sought the Philosopher’s Stone. 

But anyone can be an AI researcher. All you have to do is have a desire to solve the problem and the gumption to try things.

Now we call anything an AI researcher creates “AI.” Meanwhile, what we actually have is called “Narrow Artificial Intelligence,” while the AI that was hoped for in the first place is termed “Artificial General Intelligence” (AGI). But this just adds to the confusion, since much of what is now called Narrow AI doesn’t even fall within the original scoping conjecture of the field as being about “simulating some feature of intelligence.”[ii]

“Glendower: I can call spirits from the vasty deep.
Hotspur: Why, so can I, or so can any man; but will they come when you do call for them?”

Hype vs. Honest Opinion

When Sundar Pichai said a revolution in AI was more profound than fire and electricity, was that hype? Not by itself. We all understand that the world-changing effects of real AI can hardly be overstated. But when he went on to say that AI is in its early stages, he implied that the revolution has already begun and that connectionism, where AI is today, is a point on the path to real AI. That is what the symbolic camp thought, and where are they now?

When a person of Pichai’s stature states, whether as honest opinion or merely sincere hope, that the AI revolution has already begun, he wraps himself and his company in the magical wizardly mantle. Others, especially the media, take it as authoritative and add more fuel to the cycle of hype. Despite protestations from many experts that connectionism will not achieve real AI for decades, at best, ordinary people are coming to believe that it is just around the corner or already here.[iii]

To say there is dishonesty in AI is not to say that all AI researchers are dishonest, or even that they exaggerate where narrow AI is today compared to the real thing. Yann LeCun, one of the key pioneers of Machine Learning, has stated that artificial neural networks and machine learning techniques are sufficient to reach the real thing[iv]. Another honest opinion?

Not everyone in the industry is so honest. At the other extreme we see people put chatbots in what appear to be mechanized mannequins and talk about them as if they were artificial humans that have comprehension and even their own needs, wants, and desires. P. T. Barnum would have appreciated their showmanship. No wonder so many people believe that real AI is already here. Read more about The New Illusionists here.

In between honest aspiration and cynical opportunism, we have an entire industry that is cashing in on the euphoria while it lasts.

According to research from the venture capital firm MMC Ventures, over two-fifths (about 1,000) of Europe’s 2,380 AI startups do not use artificial intelligence in their products and offerings.

There is no reason to assume that number doesn’t apply across the entire industry. Another article from the IEEE report, “A Human in the Loop: AI’s dirty little secret”, asserts:

“Just about every successful deployment of AI has one of two expedients: It has a person somewhere in the loop, or the cost of failure, should the system blunder, is very low.”

We are told this is AI but “pay no attention to the man behind the curtain.”

Although the Machine Learning investment feeding frenzy is still going strong, there are reasons to believe it is nearing its end. One is that the huge increases in productivity all these new AI companies are supposed to produce haven’t materialized, and there is no evidence they will.

From an article by Kevin Drum entitled, “Stop Calling Everything AI”:

“Not only has labor productivity declined a bit over the past 60+ years, but it’s cratered over the past ten years, precisely the time when AI has supposedly started making huge inroads.”

Supposed AI companies are being started, lavishly funded, then bought and sold at huge valuations. But they might as well be growing tulips for all the productivity gains they are producing. The current “irrational exuberance” about making money on AI is now as pervasive in the industry as the dishonesty.  Perhaps the two go together. It takes no great prognostication to predict that the current Machine Learning investment bubble will soon burst like the dotcom one.

Another AI Winter?

When it does, will we have another AI Winter? History does repeat itself, but never in quite the same way. What is different this time is that the artificial neural network approach really is good at solving a whole class of problems.

Solutions like finding the signature of a planet in digital data downlinked from an orbiting telescope, or an incipient tumor in a radiology image, among many others, are groundbreaking.

Wherever finding a statistically significant pattern in a large set of data pays off, Machine Learning will continue to advance and continue to attract funding. Let us hope that the investment will be more sober and deliberate, and that folks will stop calling it AI.

It is so unfortunate that machine learning was ever called AI in the first place. That simple fact, probably more than any other single factor, is what has led to the rush to use Machine Learning applications to automate decisions that humans traditionally make. Why not? If it is truly AI, so what if there are a few problems up front? They can be fixed as it becomes more intelligent down the road.

There are more than a few problems. In fact, this area is a rather ugly business. These decision-making algorithms have come under fire for making egregious errors and reflecting the inevitable bias from datasets curated from what ordinary humans say and do, hence the practice of putting humans in the loop. 

Worse still, it is now generally recognized that Machine Learning algorithms designed to increase the time people spend online to increase ad-clicks are wreaking serious damage on our social fabric. Device addiction and depression, rising suicide rates in teens, and a rising tide of partisanship, radicalism, and fake news are cited as reasons the industry needs government regulation. How did we talk ourselves into putting so many critical aspects of our lives and society in the hands of mindless algorithms? Because, though that is what they are, we don’t call them that; we call them AI.

Unfortunately, as they peak, these waves of hype and enthusiasm can obscure and impede the recognition of promising new approaches. During the high point of the first wave, Symbolic AI, artificial neural networks (ANNs) were not taken seriously. Today, Connectionist AI, based on ANNs, is having its day, and its proponents are enjoying their ascendancy over the symbolists who once condescended to them[v].

When the current investment bubble bursts, in the inevitable hangover from these irrational exuberances, will clarity and honesty return? When the community can stand up and admit that Machine Learning is not synonymous with AI, is in fact not intelligent at all and may never be, it may find that the hype and turbulence of the second wave have obscured something new and amazing.

A third wave of AI, contrary to the long-standing notion that AI is about emulating the intellectual processes by which humans create knowledge, is already here. By focusing on the nature and structure of knowledge itself, New Sapience is making rapid progress toward the real goal: machines that can acquire and apply knowledge to change the world as we envision, but do it better than we can.

[i] CB Insights Research

[ii] I recently read an article that claimed that a linear regression in an Excel spreadsheet was Machine Learning and, by extension, Artificial Intelligence. What next? When I was a child, we had mechanical adding machines that could certainly do arithmetic better than I could. Should we consider that to be AI too? No, this is more AI hype.

[iii] Recently, I was told by a reasonably technical person that we are now entering the age of the Jetsons, with robots and everything, he said, all but the flying cars. (Ironically, flying cars are already in the air, while there is no hard evidence that we will ever see an artificial neural network application truly able to understand a simple sentence.)

[iv] LeCun, as we discussed in our previous installment of this series, is very clear that ML today has no knowledge and no models of the world, but he believes that ML, specifically self-supervised machine learning, will someday get there: thenextweb.com

[v] “Hinton and many others have tried hard to banish symbols altogether. The deep learning hope—seemingly grounded not so much in science, but in a sort of historical grudge—is that intelligent behavior will emerge purely from the confluence of massive data and deep learning.”  Gary Marcus, Deep Learning is Hitting a Wall, Nautilus, March 10, 2022
