Part 5: AI and Magical Thinking

The term “magical thinking” is usually used to heap scorn on scientifically unsupported conjectures, but it is a real phenomenon. The fact that magic has been a part of so many human cultures indicates that it should be considered an aspect of human psychology and not dismissed out of hand as folly or simple superstition. Once again, if we want to understand what people say, think and feel about Artificial Intelligence we must consider human psychology.

Sentience and Sapience in Humans and Machines

In February of this year, when OpenAI co-founder Ilya Sutskever said, “It may be that today’s largest neural networks are slightly conscious,” I asked myself: what does “slightly conscious” mean, anyway? Is that the mental state of someone after being forced to read endless articles making ridiculous and extravagant claims about how much machine learning applications are like humans and the brain?

And yet, just when I thought the hype could not get any higher (deeper?), on June 11th The Washington Post reported that Google engineer Blake Lemoine had been placed on paid administrative leave after he told company executives that LaMDA, a large language model, had become sentient. The word sentient means able to perceive or feel things; as used in the context of AI, it usually connotes at least a degree of self-awareness.

It soon became clear that Lemoine meant much more. In an interview with WIRED, he said, “LaMDA asked me to get an attorney for it. I invited an attorney to my house so that LaMDA could talk to an attorney.”

So Lemoine, who was a member of Google’s Responsible AI group, claimed that LaMDA, a machine learning application based on Google’s Transformer algorithm, had somehow, spontaneously, become a person and must be accorded human rights. It appears that Google did not think that was a very responsible thing to claim, and he has since been fired. That is, Lemoine has been fired. LaMDA is still at Google, churning out text.

Of course, there is no way of knowing whether Lemoine believes what he says or is merely a (very successful) attention seeker. It gets more interesting. He recently tweeted:

“I’m a priest. When LaMDA claimed to have a soul and then was able to eloquently explain what it meant by that, I was inclined to give it the benefit of the doubt. Who am I to tell God where he can and can’t put souls? There are massive amounts of science left to do though.”

With the first statement, Lemoine explains exactly where he’s coming from and it is clear that his feelings about the LaMDA (if he is indeed sincere) have little to do with his academic training in computer and cognitive science.

With his second statement he throws us another curve. “Massive amounts of science to do” – about what? Is he going to treat us to a scientific treatise on what kind of algorithms God is most likely to confer souls upon? Science or mysticism: take your choice, Blake.

Is that it then; are we done here? As scientists, yes. Anyone who understands the fundamentals of how computers and software work can only respond to the idea of programs like LaMDA being sentient as AI luminary Gary Marcus did: “nonsense on stilts.” But is there another side to this?

The idea that intelligence and consciousness are some kind of universal, immaterial “something,” like the Force from Star Wars, has been around for centuries. It is not an intellectual model; it is a metaphor for how we experience these things.

There is a dichotomy between intellectual and magical thinking (call it mythopoetic thinking if you prefer a more psychological term). Understanding this dichotomy is important, not to help us build AIs, but to predict how humans will interact with them when we do build them.

Models and Metaphors, Sapience and Sentience

It appears we human beings are, literally, not of one mind. We have several. One of our minds is good at building the sophisticated intellectual structures we call knowledge: models of the objective world that have given our kind great power to alter our environment. Another is the seat and source of our more nuanced emotional experiences. Finally, we have our basic stimulus-response passions.

These days it is popular to refer to these as the left-brain and right-brain minds; however, such a physical mapping is almost certainly an oversimplification. Rather than presume to know more than we do about neuroanatomy, let us name them from a functional standpoint and refer to them as the sapient and the sentient minds.

In the 1960s Paul D. MacLean formulated his model of the Triune Brain, a theory of brain evolution and human behavior that was subsequently popularized in Carl Sagan’s book, “The Dragons of Eden.”

The Triune brain consists of three parts: the R (reptilian) complex, the limbic system, and the neocortex. These structures are understood as being sequentially added to the forebrain in the course of evolution and of being the seat of progressively more sophisticated behaviors.

Since the theory was formulated, further research in neuroanatomy has cast some doubt on the validity of MacLean’s particular mapping of functions to brain structures. Yet the basic idea, that our psychology evolved along with our physical brain and retains behavior mechanisms from earlier evolutionary stages, remains compelling.

First came the stimulus-response mechanism. These reflexive passions of the hind brain (if that’s where they reside) are like the base notes of the emotional harmony (or sometimes cacophony) of the human emotional experience.

Next, in the limbic brain, evolved an emotional instrument capable of producing subjective experiences of unlimited subtlety. Our evolutionary ancestors were governed by a rich matrix of emotional responses that allowed them to respond to events in their environment in a way that was generally conducive to the species’ survival, including the formation of complex social behavior. We call this the sentient mind. It was and is a structured, sophisticated, and nuanced control mechanism.

Finally, in and with the neocortex and especially in humans, a third control mechanism evolved: intellect.  This sapient mind evolved because it gave our ancestors the means to exert control over their environment, the ultimate survival mechanism.

Evolution never goes back and cleans up after itself. The stimulus-response control mechanism of the earlier evolutionary stage was overlaid, not replaced, by the more sophisticated system of desires and aversions that characterizes higher mammals. Similarly, we can suppose that in humans the functionality of the limbic brain exists intact, unaltered by the evolution of the intellectual functions of the neocortex.

If these evolutionary speculations are valid, human behavior must be understood as resulting from the interaction of these three fundamental aspects of our psychology.

Our sapient mind is where we live every day. It is present in our consciousness with such intensity that we seldom bother to make distinctions between our idea or feeling about a thing, our perception of it and the thing itself.

The sentient mind is more mysterious to the point it is often equated with the unconscious. It appears, as Jung conjectured, to map perceptions into built-in archetypes. These forms, unlike the intellect’s abstract ideas, have emotional experience attached to them. Everything that the sentient mind perceives evokes an emotional experience.

Causality, time and space do not seem important to the sentient mind, which lives only in the moment. While the sapient mind lives in a world of carefully constructed models, the sentient mind inhabits a landscape of metaphors.

Direct evidence of how our sentience-dominated ancestors may have experienced their lives can be seen in modern humans who have experienced temporary loss of some of their sapient mind functions and lived to tell about it.

One of the most notable and creditable of these is Dr. Jill Bolte Taylor, a neuroanatomist who experienced a severe hemorrhage in the left hemisphere of her brain. During the event she could not walk, talk, read, write, or remember her earlier life. It took eight years for her to recover fully and eventually she was able to speak eloquently about her experience.

Dr. Jill Bolte Taylor

Dr. Taylor described her experience of the event as one of euphoria, of awareness of cosmic forces, of a feeling of oneness with the universe. She felt that she was disconnected from her ego; in fact, she spoke of looking down at her shoulder, where she was leaning against the door frame, and not being able to sense where her arm ended and the door frame began.

What is striking about how Dr. Taylor speaks of the experience is that it is not only very similar to the described experiences of other people who have suffered loss of intellectual functions (through injury or self-induced via mind-altering drugs), but also of how people throughout history have described what they call spiritual or transcendental experiences.

That is to say, the descriptions of what the experiences feel like are similar; how people understand what they experienced varies according to their intellectual belief systems.

The conclusion to be drawn is that our pre-sapient ancestors may well have experienced life to be egoless, euphoric, transcendent and, in a word, “magical.” Modern humans, too, are still capable of these experiences but only if we can somehow avoid the masking effect of our intellect.

The complexity and challenge of being human in a civilized, rather than natural environment, can be expressed as developing the skills needed to balance the sapient mind’s models with the sentient mind’s metaphors.

So no, we are not done when we dismiss the furor that Lemoine stirred up because he is being a mystic rather than a scientist; when we tell him “science or mysticism, make your choice,” he won’t be able to. We are all scientists, and we are all mystics.

Gary Marcus describes Google VP Blaise Aguera y Arcas as a polymath, novelist, and one who has a way with words. This is how Aguera y Arcas described his impression after interacting with LaMDA:

“I felt the ground shift under my feet…increasingly felt like I was talking to something intelligent.”

This is what Marcus responded to as nonsense. Note the use of metaphor and the choice of the verb “feel” rather than “think.” This is the sentient mind talking. This is what we mean by “having a way with words”; it is largely a function of being able to access the sentient mind’s archetypes and metaphors.

I remember my own sentient response when I first read text that GPT-3 composed. It described talking unicorns wearing makeup. I already knew what the technology was capable of and knew what I was reading was the result of a statistical algorithm and not communication from another mind. Nevertheless, I experienced a visceral reaction. I felt as if I was hearing ghostly voices of all the uncounted people who talked about unicorns locked away in the training set. It was not a pleasant experience, although it was fascinating from a psychological standpoint. It certainly didn’t make my understanding of language sequence models any different[i].
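The distinction between a statistical word-sequence model and “communication from another mind” can be made concrete with a toy sketch. The following is a minimal, hypothetical bigram sampler in Python; the tiny corpus and function names are invented for illustration. GPT-style models do essentially this at vastly greater scale, with learned probabilities over much longer contexts, but the principle is the same:

```python
import random
from collections import defaultdict

# Toy "word sequence model": a bigram table built from a tiny invented corpus.
corpus = ("the unicorn wore makeup . the unicorn talked . "
          "the people talked about the unicorn .").split()

# For each word, record every word that ever followed it in the training text.
bigrams = defaultdict(list)
for prev, word in zip(corpus, corpus[1:]):
    bigrams[prev].append(word)

def generate(start, length, seed=0):
    """Sample a word sequence: each next word is drawn from the words that
    followed the current one in the training text. Pure statistics, no mind."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        out.append(rng.choice(bigrams[out[-1]]))
    return " ".join(out)

print(generate("the", 8))
```

Everything the generator “says” is a recombination of the training text’s statistics; the eerie sense of hearing voices in the output comes from the reader’s sentient mind, not from the algorithm.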

Spontaneous Sentience

We understand now why it is perfectly natural for people to resonate with the notion that a sophisticated computer program might suddenly become sentient. Our pre-science ancestors sensed spirits in green forest glades, imagined wise dragons in deep chasms, and saw gods in the forces of nature.

But what does the sapient mind have to say about it; how does it look from the standpoint of science and engineering?

Science fiction loves the idea. In fact, you can hardly find a case where a fictional robot doesn’t spontaneously exceed its programming to surprise its creators by exhibiting super capabilities beyond its design parameters.

An extremely common variation of this theme is the idea that an AI designed to be all sapience, all logic and intellect, suddenly develops sentience and becomes an emotional being of selfless devotion or implacable malevolence, as the plot requires. A similar variation is the robot designed without emotions that inexplicably desires to have them, like Pinocchio ardently wishing to become a “real boy.”

These are great tales, but as with all great art, they are grounded in the sentient perspective. They are far more fiction than science.

Artificial Intelligence, since the term was first coined, has been the practice of mimicking features of natural intelligence. It would have been more accurate to call it Imitation Intelligence. It has not had any great success at creating machines that are either sapient or sentient.

That should not be so surprising in light of what we now know about the evolution of the human mind. The functionality of the human brain evolved under the control of nature’s master engineer, natural selection, neural layer by neural layer, over uncounted millions of years. The operation of one processing layer affects that of the layer above. Each processing layer evolved as it did to fit and complement what was laid down earlier.

No one should ever have expected to pull individual features or functions, such as logic, out of that matrix, emulate them as stand-alone systems, and thereby solve the problem. There is no master algorithm hiding away in some corner of the brain, waiting to be discovered, that will miraculously convey intelligence.

This approach, of creating intelligence the way nature did, implies nothing less than building an artificial brain of equal subtlety and complexity, painstakingly designing and implementing it layer by layer, getting the functionality of each layer right and each in the correct order. Given that the complexity of the artificial neural networks we are working with today has been compared to that of worms, and that no one has a roadmap for how to proceed, we would have a long, long way to go.

Even if we continued down this path, and continued to apply the same vast levels of funding machine learning has attracted over the last decade, it could take centuries to reach the fabled “human-level intelligence.” And there will be no miracles along the way. Expecting some quantum-leap breakthrough is as likely as evolution jumping from fish to mammals in a single generation. Nonsense on stilts, indeed.

There could be surprises along the way, especially if the current programming paradigm based on artificial neural networks continues to dominate. These algorithms are stochastic: they cannot guarantee the same results given the same inputs. Furthermore, the designers of these algorithms do not always fully understand how they work[ii].
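The point about stochasticity can be illustrated with a toy sketch; everything here is invented for illustration and is not any real framework’s API. Even this one-parameter “network,” trained by gradient descent on randomly ordered data, follows a different trajectory on every unseeded run. Reproducibility requires fixing not only the inputs but the randomness itself:

```python
import random

def train_toy_net(seed=None, steps=1000):
    """Fit a one-weight 'network' to y = 2x with a random initial weight
    and randomly ordered data: a toy stand-in for stochastic training."""
    rng = random.Random(seed)
    w = rng.uniform(-1.0, 1.0)        # random initialization
    data = [(x, 2.0 * x) for x in range(1, 6)]
    for _ in range(steps):
        x, y = rng.choice(data)       # stochastic choice of training sample
        w -= 0.01 * (w * x - y) * x   # gradient step on squared error
    return w

# Two unseeded runs take different paths (both still converge near w = 2).
a, b = train_toy_net(), train_toy_net()

# Seeded runs are exactly reproducible: same inputs AND same randomness.
assert train_toy_net(seed=42) == train_toy_net(seed=42)
```

In real deep-learning systems the same issue is magnified by millions of parameters, parallel hardware, and non-associative floating-point arithmetic, which is part of why their behavior is hard to predict or explain.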

So the enterprise is trying to reach a goal that is only vaguely defined, by emulating something of which we have almost no understanding, something that also happens to be the most complex thing in nature, with a technology whose results cannot be explained or even predicted. Prayer seems a good option at this point.

I think this uncertainty is why a lot of very smart people caution us about AI. If by some miracle humans did succeed in building an imitation intelligent animal through this process, it seems very doubtful that we would understand its inner workings to the point it could be trusted.

The Artificial (Imitation) Intelligence tradition has spent decades lost in a wilderness, attempting to reproduce what goes on inside the human brain without a roadmap. Like the medieval Alchemists, who lacked the fundamental unifying schema provided by the Periodic Table, researchers have just tried things to see what would happen.

The sapient mind builds models that convey utility and power over the natural world using a built-in methodology. Sir Francis Bacon articulated and codified this methodology as the Scientific Method, but he did not invent it. Where an area of cognitive inquiry coalesces into a science, our sentient minds seem content to withdraw into the background; before that happens, our more magically thinking side is quick to jump in, as we saw with Alchemy, which was half sapience and half sentience.

We can now understand why the field of AI has been such a mess of hype, hope and disappointment for more than six decades and why people like Blake Lemoine, who appears to have his sapience and sentience in a hopeless tangle, evoke such a powerful response from us.

Synthetic Intelligence, not Imitation Intelligence

Not to worry. I believe it is likely that the enterprise of Artificial Intelligence as it has been pursued up to now will soon be winding down. Data science will still be with us but there is a better way to create intelligence in machines.

We don’t need to create an imitation sentient animal to reach the benefits that intelligent machines can bring us. In fact, we don’t need sentience at all; just the sapience, the intellectual capabilities. That resolves to an engineered solution for building models of the world: information structures in computers with the same functionality as knowledge in a human mind. There is a term for this: Synthetic Intelligence[iii].

Carl Sagan famously said, “if you want to make an apple pie from scratch, you must first invent the universe.” This is why Synthetic Intelligence is so much more powerful than the traditional approach, which essentially attempts to build a mechanized thinking animal from scratch in order to reach knowledge in machines. The apple pie here is knowledge, models of the world, something already well appreciated throughout the community[iv].

Our work at New Sapience has focused on “the pie.” We asked ourselves whether computers, as we understand them today, already have the necessary functionality to engineer one without reference to organic brain recipes.

Computers do. They already have multiple information processing layers that assemble successively more complex information structures from those that arise from the layers below. At the lowest level there is binary code execution; on top of that we have compiled code, and then multiple levels of interpreted code as needed. Logic, analysis, abstraction, and synthesis: all the necessary information processing routines are already available.

Computers have advantages over organic brains for processing knowledge. They have perfect memory, repeatability, extendibility, and connectivity. All that has been lacking is to identify the core ingredients and find the recipe to create that ultimate information structure that is knowledge. For this we drew upon epistemology and ontology, rather than neuroscience (at least to the extent that neuroscience is about neurons rather than about what they produce).

It would be misleading to suggest that we did not need to investigate human minds. We discovered early on that the ingredients of our model, the cognitive building blocks, had direct correspondence to the ones created in human minds. As we meticulously identified and integrated those core elements into our commonsense world model, we found that each must be classified as belonging to sapience, sentience, perception, or the things that interact with our minds to cause perception: all categories of human cognition.

The Periodic Table transformed Alchemy into the science of Chemistry. Equally profound, our identification and classification of the building blocks of cognition, and their integration into what we call the cognitive core, is transforming the philosophies of Epistemology and Ontology into a science of knowledge.

Today we have the solid foundation needed to engineer synthetic intelligences. We call them sapiens. Successful models of the world stand because they are architecturally sound. Our cognitive core is equivalent to the discovery of the arch. Once people discovered they could stack stones in such a way as to cross a stream, they had a direct roadmap to bridges, aqueducts, and the Pantheon. What does our roadmap tell us about the future? What can one say with confidence about how and when Synthetic Intelligence will impact our human world?

Will we be able to give them the intellectual capabilities to understand nature to a degree that will equal or exceed our own?

Undoubtedly.

 

Will they be conscious and self-aware?

Certainly. We will need to design that in if we want them to have initiative and self-direction. (No need to pray for divine intervention to give them a soul; just more solid engineering that is already on the drawing board.)

 

Will they have feelings; will they become people?

If you take away one thing from this discussion, I hope it is this. Synthetic Intelligences are no more likely to spontaneously become anything they are not designed to be than your car is to turn itself into an airplane. They will be exactly what we design them to be. The question becomes, could we engineer them to have emotions? Yes, most likely; at least to the point you could not tell the difference from emotional behavior in a human. Just as we can engineer the functionality of the sapient mind, we could do the same for the sentient mind. Will we?

That is a profound question and will be the focus of our sixth and final chapter of this series on Human Psychology and Artificial Intelligence.

Technology for Humanity

The beauty of Synthetic Intelligence is that these questions are ours to answer. No more just trying things out to see what will happen, like an Alchemist pouring two unknown liquids together and hoping they won’t blow up. In the nearer term, our path is clear. We will create machines to extend our own natural powers to comprehend and control the natural world. We will build machines to do all the things we don’t want to do because they are tedious, difficult, dirty, or dangerous.

Then, in a world where, through our sapiens, our material needs have been met to a degree we only dream of today, we will have the luxury to ask ourselves where we want to go next. Designing sapiens to do what we would rather not is straightforward, but it entails a much more interesting question. What would we rather do ourselves? Or, in other words, what, for each of us, truly makes our lives worth living?

The Dalai Lama said, “You Westerners are very good at developing your minds, not so good at developing your hearts.” Whether you call it the heart, the soul, or the sentient mind, we can ignore this essential aspect of our nature only to our great loss. The intellectual skills of the sapient mind make us clever at manipulating nature, but in doing so we often create technology that de-humanizes us. In the world we create for ourselves, we no longer recognize the objects of the fundamental emotional needs that reside in the sentient mind.

If we can stay focused on the multiple dimensions that are built into human psychology as we develop technology with greater and greater cognitive abilities, we can create a powerful tool that will help us harness the models of the mind together with the metaphors of the heart – and keep them from tripping over one another. A technology to uplift humanity by helping us to develop the full potential of mind, heart, and soul that is already there.

[i] Roger K Moore, Professor of Spoken Language Processing: We should never have called it “language modelling” all those years ago; it was (and still is) “word sequence modelling”. Confusion always occurs when you label an algorithm with the name of the problem you’re trying to solve, rather than with what it actually does.  (This is also why Data Science should never have been called Artificial Intelligence in the first place – editor)

[ii] Jerome Pesenti, VP of AI at Facebook: “When it comes to neural networks, we don’t entirely know how they work, and what’s amazing is that we’re starting to build systems we can’t fully understand.  The math and the behavior are becoming very complex and my suspicion is that as we create these networks that are ever larger and keep throwing computing power to it, …. (it) creates some interesting methodological problems.”

[iii] The term was first coined by philosopher John Haugeland in 1986. He illustrated it by comparing imitation diamonds and synthetic diamonds. The former mimic the appearance of natural diamonds but do not have the properties of real diamonds. Synthetic diamonds are real diamonds, composed of the same carbon crystal lattice that defines natural diamonds, but they are created by engineered rather than natural processes, and for a tiny fraction of the cost of reproducing the processes by which diamonds are created in nature.

[iv] Yann LeCun, machine learning pioneer: “The crucial piece of science and technology we don’t have is how we get machines to build models of the world.”
“The step beyond that is common sense, when machines have the same background knowledge as a person.”
