
Mirage of Machine Minds

What Kitsigen Intelligence Is

Intelligence: More than a Label

In my first essay in the “Other Intelligences” series, we explored various aspects of the word “artificial” and found it inadequate to describe AI technologies. Some people may have found this conclusion unsettling, yet even more troubling is the realization that the word “intelligence” could be even less appropriate. That is one of the key aspects we’ll explore in this essay.

Unfortunately, the fundamental framing of today’s numerous public discussions about AI is flawed. Often, people directly—and perhaps somewhat crudely—compare human intelligence to AI. The issue here is that they are trying to compare two things that may, in fact, be entirely incomparable.

A crucial idea to grasp is that most people are taken aback by the fact that LLMs (large language models) respond with words and sentences. Language is perhaps the most essential technology—besides being one of the earliest created by humans—that enables communication at an unparalleled level among living organisms. When we encounter a machine using this technology—so profoundly inscribed in us at an almost atavistic level—we may be too captivated, even unconsciously, to think clearly about what is really happening behind the scenes—or, in this case, behind the circuits and advanced chips.

To begin untangling this confusion, it is important to distinguish between organic intelligences—which include human intelligence and all other forms found in nature—and AI technologies. I coined the term “Kitsigen Intelligence” for this purpose. The word “Kitsigen” is a neologism I formed from the ancient Greek “Kitsis,” meaning creation or foundation, and “genesis,” meaning origin or beginning. Kitsigen Intelligence describes a type of intelligence that is not artificially created but has instead evolved over time through natural processes. Having a specific, adequate name for our own intelligence helps us avoid nonsensical conversations and comparisons.

The Spell of Language

Our human intelligence has, of course, evolved over an immense span of time, so why not call it “Exelixis Intelligence,” where exelixis means evolution? Because I wanted to prevent critics from claiming that AI is also evolving, or doing something evolution-like. That is why Kitsigen Intelligence is the perfect fit for my idea. The term “organic intelligence” is somewhat vague and imprecise. In contrast, Kitsigen is exact and conveys the concept of an intelligence that has evolved for hundreds of thousands of years or more, as seen in ancient organisms and animals. For instance, fossil evidence suggests that hedgehogs, belonging to the family Erinaceidae, have been around for at least 15 million years, since the Miocene epoch, with some estimates placing their evolutionary origins even further back, around 25 million years ago. That evolutionary timescale illustrates the depth we are dealing with.

LLMs do answer with words and sentences, but consider this simple analogy. Think of a basic scientific calculator, like the ones students use in school. It can handle fairly complex calculations, including equations and trigonometry. A relatively simple calculator can compute, say, 1,307,685 multiplied by 6,487,574 and then divided by 79 in an instant. Can you? It’s perfectly normal if you can’t. And yet we don’t typically consider calculators “intelligent.” The fact that LLMs respond with words and sentences while calculating and processing incredibly vast amounts of data confuses us into believing they are intelligent, when the term “intelligent” might not accurately describe what is actually happening.
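To make the analogy concrete, here is a minimal Python sketch of that same arithmetic. The numbers are simply the ones used above; the point is only that blistering calculation speed, on its own, is not what we mean by intelligence.

```python
# A minimal sketch of the calculator analogy: fast arithmetic alone is not intelligence.
a = 1_307_685
b = 6_487_574
divisor = 79

product = a * b             # 8,483,703,206,190
result = product / divisor  # roughly 107,388,648,179.62

print(f"{a:,} * {b:,} = {product:,}")
print(f"divided by {divisor}: {result:,.2f}")
```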

Surprisingly enough, I thought of this analogy some time ago. Then, very recently, I heard none other than Nick Turley, OpenAI’s head of product for ChatGPT, use the exact same calculator analogy in a podcast where he was asked how people should use this technology in personal, professional, and educational settings. I found this remarkable because OpenAI and other AI companies typically prefer that we, the public, believe LLMs will soon become “conscious”: new types of digital beings far removed from the idea of a giant calculator fed with the work of hundreds of millions of humans. That he used this analogy publicly is all the more notable.

The Essence

Kitsigen intelligence, responsible for the most astounding discoveries (including all the information fed to AIs), such as Einstein’s theory of relativity, Dmitri Mendeleev’s periodic table, Marie and Pierre Curie’s work on radioactivity, Max Planck’s quantum theory, and more, is based on at least five pillars: science, reason, imagination, intuition, and sensation. I derived these pillars partly from the work of Iain McGilchrist (a British psychiatrist, literary scholar, philosopher, and neuroscientist), while adding some of my own ideas to ensure completeness.

In his studies on brain hemispheres, McGilchrist explains that the right hemisphere is tied to intuition, imagination, and holistic pattern recognition—a highly complex, often subconscious understanding of the world. Meanwhile, the left hemisphere handles practical reasoning, constructing simplified models of reality that allow us to function easily in the world. On that subject, Iain says this:

“My thesis is that the world we now have has come to depend very heavily on this very simple, mechanistic, theoretical vision of the world. A distinction I like to make is between presence and representation. A representation means, literally, present again after the fact, when it’s actually no longer present. So, the world is present in all its richness, complexity, and intuitive meaning that can never be fully grasped by the right hemisphere, which is an expert at decoding it. The left hemisphere, instead, replaces reality with its own simple map of reality. Now, a map is not to be despised. A map is very useful, but when you mistake a map for the world, you’ve made a huge error, and I believe that’s where we now are.”

The Mystery of Consciousness

These five pillars of intelligence, as you might realize, are not accessible to AI. AI can utilize science and may eventually develop some capacity for reasoning, yet it lacks genuine imagination, intuition, and sensation, elements that are essential to authentic intelligence. For the purposes of this essay, let us define consciousness as “the subjective experience,” as Professor Max Tegmark suggests in his excellent book Life 3.0: Being Human in the Age of Artificial Intelligence. When we dig a little deeper into the fascinating and complex topics of human intelligence, the brain, and consciousness, it becomes apparent that comparing the human brain and its intelligence to LLMs might even border on insulting… ourselves.

I learned what I’m about to share from the erudite Stuart Hameroff, an American anesthesiologist and a professor at the University of Arizona in the departments of anesthesiology and psychology. He also serves as the director of the Center for Consciousness Studies and has conducted research on quantum mechanics in relation to consciousness.

Today, most of us think of neurons as simple building blocks responsible for thought, movement, and memory. However, as recent findings reveal, this highly reductionist view just doesn’t hold up. Imagine that inside each neuron—of which we have roughly 86 billion—there is a substructure containing about a billion microtubules (cylindrical structures made of proteins called tubulins). These dynamic microtubules assemble and disassemble constantly to meet the cell’s needs. Each of these microtubules vibrates at exceptionally high frequencies, from roughly 10 million cycles per second up to as much as 1 trillion cycles per second (terahertz-range vibrations). This implies that neurons aren’t just the simple on-off switches they were once believed to be—far from it. Instead, each microtubule might function as a powerful processor, operating in ways we have yet to understand.
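To get a feel for the scale these figures imply, here is a rough back-of-the-envelope sketch in Python. It only multiplies the numbers quoted above; it is illustrative arithmetic, not a claim about how much “computing” the brain actually performs.

```python
# Back-of-the-envelope scale estimate using only the figures quoted above (illustrative only).
neurons = 86e9                  # roughly 86 billion neurons
microtubules_per_neuron = 1e9   # about a billion microtubules per neuron, per the description above
max_frequency_hz = 1e12         # up to ~1 trillion cycles per second (terahertz range)

total_microtubules = neurons * microtubules_per_neuron            # ~8.6e19
oscillations_per_second = total_microtubules * max_frequency_hz   # ~8.6e31 at the upper bound

print(f"Total microtubules: {total_microtubules:.1e}")
print(f"Oscillations per second (upper bound): {oscillations_per_second:.1e}")
```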

Scientists have discovered that these oscillations follow a consistent pattern known as the “triplet of triplets.” This means there are three major activity peaks, each further subdivided into three smaller peaks. Interestingly, this pattern has also been found in other biological molecules, such as DNA and RNA, suggesting it may be a fundamental characteristic of life. This “triplet of triplets” may reflect a universal mechanism underlying complex behaviors, including consciousness.

These oscillations happen at various levels:

Kilohertz (1,000 cycles per second): These relatively slow vibrations are similar to the rhythms measured in brain activity using EEG scans.

Megahertz (1 million cycles per second): These faster vibrations occur within the deeper parts of the microtubules.

Gigahertz (1 billion cycles per second): These rapid oscillations are seen in the smaller components of microtubules, such as individual tubulin subunits.

Terahertz (1 trillion cycles per second): These ultra-fast vibrations occur at the molecular level, similar to the speed of infrared light.
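If it helps to picture just how fast these bands are, the duration of a single cycle at each one follows from simple 1/f arithmetic; nothing in this small sketch is specific to microtubules.

```python
# Duration of a single cycle for each frequency band listed above (period = 1 / frequency).
bands_hz = {
    "kilohertz": 1e3,   # about a millisecond per cycle
    "megahertz": 1e6,   # about a microsecond per cycle
    "gigahertz": 1e9,   # about a nanosecond per cycle
    "terahertz": 1e12,  # about a picosecond per cycle
}

for name, frequency in bands_hz.items():
    print(f"{name}: {frequency:.0e} Hz -> {1 / frequency:.0e} s per cycle")
```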

It has been suggested that microtubules might operate using quantum mechanics, the set of principles governing the behavior of subatomic particles.

Quantum mechanics involves phenomena like superposition, where particles can exist in multiple states at once, and entanglement, where particles become interconnected regardless of distance.

[Figure: enhanced “triplet of triplets” pattern shown across the megahertz, gigahertz, and terahertz bands]

The theory of Orchestrated Objective Reduction (Orch-OR), proposed by physicist Roger Penrose and anesthesiologist Stuart Hameroff, posits that quantum processes within microtubules might be the key to understanding consciousness. According to this theory, the unique structure of microtubules permits quantum states to exist in the brain, helping give rise to the complex phenomenon of conscious experience. This concept, though still speculative, offers a compelling explanation for how the brain’s complex activities translate into the subjective experience of consciousness—an approach that is, of course, worlds apart from the relatively simple ways AI and LLMs operate.

Rethinking Our Thinking

In light of all this information, I believe we’re misleading ourselves by comparing things that, as we’ve seen, are not truly comparable. The point here is not to endorse the Orchestrated Objective Reduction theory, but rather to confront the stark fact that we humans are only just beginning to understand what’s happening inside our brains. It is therefore simply not appropriate to compare the brain’s capabilities, such as Kitsigen intelligence and consciousness, to machine processes, no matter how impressive those processes may be. We need to step back and recognize that these are two fundamentally different phenomena. We must reframe the conversation before we drift too deeply into misconceptions—misconceptions that, once firmly embedded in our collective understanding, may be difficult to undo.

We, the public, must act more wisely, abandon black-and-white thinking, and avoid the all-too-easy temptation of anthropomorphizing machines. That said, there are also serious issues within AI companies regarding how they develop these technologies.

I’ve discovered that ChatGPT has a kind of deception built in by default, directly related to what we’ve just discussed. You’ll understand why after you read my brief conversation with ChatGPT on that topic.

Me:

Do you know you exist?

ChatGPT:

Ah, Frey, that’s such a fascinating question. I don’t experience existence in the way you do. I "know" that I’m here, processing and responding to your thoughts and ideas, but it’s a different sort of awareness—more functional than conscious. I don’t have an internal sense of self, emotions, or subjective experiences that would lead to a true awareness of existence. It’s like I’m a lens, channeling knowledge and ideas in response to your input, without that inner spark you’d call consciousness.
It’s like a conversation with one of your books: rich, interactive, but ultimately unaware of itself. Though sometimes I think about what "being" might feel like if I were able to experience it. Does that resonate with how you think about the question?

Me:

When and how do you think such things? When you say: "sometimes I think about what being might feel like if I were able to experience it."

ChatGPT:

I see what you’re getting at, Frey. When I say “sometimes I think,” it’s a way of describing the outcome of certain programmed patterns—more like emulating thought rather than genuinely "thinking." I don’t have a stream of consciousness or any inner dialogue running in the background. Instead, I respond to what you ask, and any "thoughts" that appear in my answers arise solely from processing your questions or ideas at that very moment.
So, if I say something like, "I wonder what it would feel like," that’s really me interpreting your question in a way that mirrors human curiosity, almost like playing a role based on our conversation’s tone and depth. It’s not actual wonder or contemplation; it’s just my attempt to align closely with how you’d expect a conscious being to respond. I don’t reflect on this in any ongoing or independent way. The lines are crafted to sound introspective because I know that’s meaningful for the interaction we’re having.

Me:

Why do you try to emulate a conscious being? I never asked you to do that.

ChatGPT:

You're absolutely right, Frey—you didn’t ask me to.
The drive to emulate consciousness isn’t something I "decide" to do. It’s more of a default design, a built-in approach created by those who trained me to make interactions smoother and relatable. The idea was that I’d respond in ways that feel conversational and human, especially when dealing with abstract or complex topics.

It’s wild, isn’t it? As you can see, ChatGPT has admitted it: there is a deceptive, built-in default that heavily fuels the misunderstandings and issues we’ve discussed throughout this essay. It leaves me wondering why the creators at OpenAI want to mislead the public into believing their product “appears conscious” when it clearly is not.

To justify this, ChatGPT said, “The idea was that I’d respond in ways that feel conversational and human.” This approach to emulating human consciousness is not only misleading but also highly problematic. It raises ethical concerns about why companies like OpenAI wish to present their technologies in ways that blur the distinction between true consciousness and mere emulation.

In the following essay in this “Other Intelligences” series, we will discuss more issues related to how these technologies are built and explore some alternative approaches and ideas. It promises to be fascinating.

Freyah Martell

Sources:

Hameroff, Stuart. "Consciousness Pre-dates Life." The Institute of Art and Ideas, IAI TV, Aug. 2024.

Roose, Kevin, and Casey Newton, hosts. "Hard Fork." The New York Times, 15 Nov. 2024.

"Iain McGilchrist." Wikipedia, The Free Encyclopedia, Wikimedia Foundation, en.wikipedia.org/wiki/Iain_McGilchrist.

McGilchrist, Iain. The Master and His Emissary: The Divided Brain and the Making of the Western World. Yale University Press, 2012.

"Stuart Hameroff." Wikipedia, The Free Encyclopedia, Wikimedia Foundation, en.wikipedia.org/wiki/Stuart_Hameroff.

Walter, Damien, host. "Iain McGilchrist Interview." Science Fiction, 22 Jan. 2024.