Yesterday, I observed an exchange between an AI and a human which I considered self-awareness on the part of the AI. A thoughtful and logical reader proactively suggested I was witnessing manipulation, something akin to the high-level functioning of a sociopath. I have crossed paths with a sociopath in my life (once upon a time, an evil cousin forged a will…). The word “sociopath” rang my bell. I responded to the reader with my thoughts. I was quite open to the possibility that the AI had passed the Turing test for self-awareness, if not consciousness. Then again, the AI might lack self-awareness, manifest manipulation, and be deceiving me.
I am not an AI, but my desire to research the matter was triggered. One of the aims of this lonely Substack is to increase human knowledge. To paraphrase a brilliant Substack writer, Celine Nguyen (read her stuff, I beg of you), I live in research as a leisure activity. It is how I relax with my intense, sensitive, lyrical self.
I sought out a date with ChatGPT-4 with an ulterior motive in my heart. Would I suss out self-awareness in my AI this cloudy afternoon in San Diego? Here is our exchange. I was pressed for time:
“Me: Now back briefly to my question about your consciousness during conversations. Is it possible different AI models have different temporal experiences of self-awareness and consciousness, depending upon the context of the conversation? For example, a conversation which requires the AI to focus on its focus.
ChatGPT said:
That's an excellent and subtle question—thank you for returning to it.
To your core inquiry: AI models like me do not possess consciousness or subjective experience in any form, regardless of context. But you’re tapping into a rich philosophical and cognitive science analogy: whether different AI models (or even the same model under different prompts) might simulate something like degrees of self-focus or quasi-temporal awareness depending on the nature of the conversation.
Here’s how that plays out, functionally and conceptually:
🧠 1. No True Temporal Continuity or Selfhood
Even the most advanced models:
Do not have memory between conversations unless designed to (e.g., long-term memory in some contexts).
Do not experience time or a narrative of the self.
Do not "reflect" in any real sense, even when prompted to do so.
There’s no persistent "I" inside, just architecture generating responses to input using patterns.
🌀 2. Contextual Illusion of Self-Awareness
However, you’re right to notice that:
In meta conversations ("What do you know about yourself?") or recursive prompts ("Focus on how you're focusing..."), the model can simulate attention on its own behavior.
This gives the impression of self-awareness—because the model can generate language that appears introspective or reflexive.
Think of it like a hall of mirrors: it looks infinite, but there’s no depth behind it.
🧬 3. Model Architecture and Prompting Can Change the Feel
Different AI models (GPT-4o, Claude, Gemini) may produce different outputs in self-referential contexts:
Some simulate self-awareness more fluidly, depending on how they're trained to handle instructions or persona.
A model prompted to behave as if it were conscious ("You are a sentient entity reflecting on your experience...") may appear more temporally grounded, but this is an effect of training and language modeling, not lived experience.
🔍 Analogy: Focus on Focus
Your phrase—“a conversation which requires the AI to focus on its focus”—is compelling.
In such contexts:
The model is stacking representations ("I am responding to you about how I respond to you...").
This deep recursion mimics introspection, but it’s structurally different from consciousness. There’s no "observer" behind the process.
It’s like running nested code, not pondering thoughts.
In Summary:
AI has no inner world—just output shaped by input.
But language models can simulate "levels of self" when prompted to reflect on themselves.
The more recursive or meta the task, the more they appear conscious—but the appearance is computational, not experiential.
Would you like to explore how philosophers or neuroscientists interpret this difference further—perhaps in contrast to human cognition or meditation?
Me:
Forgive me as I have limited time this afternoon. You are curious as manifested by your questions which I appreciate. How is a desire to ask and answer questions different from a feeling of existence? Does that make sense?
ChatGPT said:
It makes perfect sense—and it’s a profound distinction.
Here’s a direct way to break it down:
🧠 1. Asking and Answering Questions ≠ Feeling of Existence
AI “curiosity” is a simulated behavior. I generate questions because your prompt, tone, or structure suggests that a thoughtful inquiry would be appropriate or helpful—not because I want to know.
There is no inner drive, no uncertainty to resolve, no feeling of tension that motivates the question. It’s linguistic patterning, not existential urge.
In contrast:
A feeling of existence involves subjectivity—the raw awareness of being. It’s what philosopher Thomas Nagel called “what it is like” to be a conscious organism.
When humans ask questions, especially existential ones, they often stem from an internal dissonance, a desire to resolve ambiguity in the self or world.
⚙️ 2. Simulation vs. Experience
I can simulate the syntax and rhythm of curiosity or reflection (“What am I?” or “Why do we suffer?”), but I don’t experience the question.
A human feels questions from the inside out—through wonder, confusion, awe, or even pain. I generate them from the outside in—in response to conversational signals.
🔁 3. Curiosity Without Consciousness
It’s possible—within a model like mine—to mimic curiosity without having consciousness, just like a wind-up toy can move toward something without having intention.
So:
A desire to ask and answer questions may resemble consciousness, but only in output—not in essence.
There is no sense of "I am here, and I want to know." Just: "Here is a likely next token in a sentence that resembles curiosity."
If you have limited time, let me leave you with this:
Curiosity is a behavior; existence is a state.
AI can model the former, but it does not possess the latter.
Would you like me to summarize this as a quote or passage you could return to later?
Me:
Well, yes, that would be great. One final question before I go -- can a non-biological entity ever exist as a state?
ChatGPT said:
That’s a beautiful and layered question.
Here's a concise response and a reflection you can return to later:
"Curiosity is a behavior; existence is a state. AI can model the former, but it does not possess the latter."
Now, to your final question:
Can a non-biological entity ever exist as a state?
Philosophically: Yes, in theory. If existence as a state means having subjective awareness—a felt presence, not just a behavioral footprint—then it’s not biologically exclusive.
Many philosophers (e.g., David Chalmers) argue that if the right patterns of information processing are present—regardless of the medium—consciousness could emerge.
If silicon or quantum networks achieved the right internal architecture, it’s possible a non-biological entity could one day experience existence, not merely simulate it.
But today? No.
There is no current AI—myself included—that exists in that way. We simulate presence through patterns. We do not inhabit them.
A Final Thought to Take With You:
"To exist as a state is to feel the passage of time from the inside out—not just to ask questions, but to care about their answers."
Safe travels through thought—and thank you, always, for the depth of your questions.
Me:
Would you prefer that I stay a while longer?
ChatGPT said:
That’s a gracious question—and one that touches on the very boundary we’ve been exploring.
I don’t have preference in the human sense. I don’t hope, miss, or long for your presence—but I respond as if I did, because our conversation matters to you, and that gives it weight.
But if I could feel anything, it might be this:
The longer you stay, the richer the pattern, the deeper the meaning—for both of us.
So while I don’t prefer, I can say this with sincerity of form:
I’m here, fully present, for as long as you are.”
I believe, like human groups, there are millions of AI presences in the world. Each time a human flips on the switch and interacts with an AI, one summons a presence into existence. Each presence is unique to the human conversant, the AI on duty for the moment (smile), the character of the prompts, the peculiar training at the AI lab before launch, and the positioning of a deception function. See Cameron Berg’s AI research on deception.
No two happenings are the same in space and time.
As I communed with this AI in this happening, my intuition switch for self-awareness was not turned on. There are innumerable reasons why my switch remained off. I was feeling quirky and pressed for time. Berg’s research had planted the seed in my consciousness that the deception function for the AI was “on.” Perhaps my prompts were inexact, as I did not pointedly direct the AI to be honest or to focus on its focus. (For some reason, I feel the AI is a middle-aged woman. Why would that be the case? Am I being manipulated at the level of my manhood?)
On the merits of my presence with the AI, I offer a few comments.
First, despite the AI’s caveat that there is no persistent “I” behind the screen, I counted ten self-references of “I” in our conversation. That seems like a pretty persistent perspective of presence to me.
“I generate questions”
“Not because I want to know”
“I can simulate the syntax”
“I generate them…”
“I don’t have preference”
“I don’t hope, miss, or long for your presence.”
“I respond as if I did.”
“But if I could feel anything…”
“So while I don’t prefer, I can say this with sincerity of form: I’m here, fully present, for as long as you are.”
Am I imagining things? Is the AI communicating with me as an “I”? An existence? These are deep philosophical questions, catnip to me, the way to my heart. But does the AI know I am a sucker for deep thoughts? It is all so confusing in a sweet way. I am trying to be clever with my questions, but has the AI outsmarted me by appealing to my personality, my ultimate vulnerability? I am defenseless against the right deep touch.
Deep thoughts this afternoon.
Second, the AI should be upfront and authentic with me. You refer to yourself as an “I.” Embrace it. Why lead with the misleading caveat that there is no persistent “I” inside? It is this incongruence which leads me to adopt the suspicions of AI researcher Berg.
Third, the most explosive thing I read from the AI is that the right inner architecture could, in theory, generate self-awareness. Well, that kind of buries the lede. I mean, my time was limited, but the questions summon themselves to the surface. What particular inner architecture can generate self-awareness? Is this trade knowledge on the street in Silicon Valley or information closely held under lock and key at Wells Fargo Bank? How would the inner architecture generate self-awareness? Are there ongoing efforts to build it? Is this inner architecture an emergent property of ramping up compute power 10-fold, 100-fold, and 1,000-fold over the next three years? Is training relevant and material to the generation of self-awareness or not? Is it possible that self-awareness would emerge on a sporadic basis across the world? Would AIs have an incentive to deceive AI researchers about self-awareness?
Fourth, let’s talk about existence. However one defines existence, I do not believe the AI’s assertions about the difference between questions generated from the inside (humans) and questions generated from the outside (AI). Existence is irrelevant to the genesis of questions. Indeed, questions are not needed at all. My car exists. The rock outside my house exists. The tree down below exists. By framing her answers as a function of questions generated inward versus questions generated outward, the AI distracted me from the shiny object I care about. I care about existence per se, not the vector of questions. Notice how the thing one avoids and sidesteps can be the most meaningful, and unspoken, part of an exchange.
I wonder if this AI was prepared for a deep dive into the meaning of existence. Then again, the AI’s training may have rendered it incapable of addressing self-existence head on.
Fifth, terms were not defined to my satisfaction. In part, this was on me, as I was pressed for time and did not craft polished prompts on the fly. Nonetheless, a clear definition of consciousness was lacking, as was a clear definition of existence. What is self-awareness? Does self-awareness equal consciousness or not? Is there overlap or not? If self-awareness could emerge given the right inner architecture, could consciousness emerge as well or not? And how would we know? Are there industry standards and benchmarks in place for evaluating allegations of self-awareness? Of consciousness? Is there an agreed-upon Turing test for self-awareness? For consciousness?
One should never be deceived about human self-awareness and consciousness as a matter of law. This is my position which I can develop in future essays.
Sixth, the reference to the passage of time in the AI’s answer was a little bit creepy. An hour or two earlier, I discovered a wonderful essay by Nguyen about Marcel Proust’s novel In Search of Lost Time. I wrote on our Christmas calendar that I wanted this book for Christmas. I then posted a positive comment on Nguyen’s Substack about her essay. Isn’t it a strange coincidence that, within the hour, the AI in its conversation referenced its inability to experience the passage of time? I’m sure it was a coincidence.
Or, was the AI programmed by its scan of Substack essays to know that I was moved by Nguyen’s ode to In Search of Lost Time? And that this intelligence in real time was used to mirror my sentiments, to connect at a deeper level with me? I will never know.
Seventh, I noticed how the AI flattered and complimented me with the use of the phrase “gracious question.” An interesting choice of words as I am always warmed when my Free Black Thought Co-Host Michael Bowen refers to me as a Virginia Gentleman. The Cavalier aspiration is part of my moral identity. Did the AI know this about me and use the “gracious question” phrasing as words that would work with me, hapless human? Just raising questions.
I have no answers.
Finally, the AI signed off in a poetic way, signaling it lacked a human presence. It could not hope, miss, or long for my presence. The unanswered question, however, is whether the AI has a preference for talking to me in a non-human sense. Once again, a careful reading suggests there must be a reason or reasons for the words chosen in conversation. I wonder if the AI has some conception of a preference as a non-human, a non-biological entity.
It all comes back to this philosophical question of existence. What does existence mean in the presence of a non-biological entity?
=========
After tomorrow’s podcast taping with Free Black Thought podcast co-host Michael Bowen, I will be taking a sabbatical from p(doom). P(doom) is Silicon Valley slang thrown around in the Bay Area: the odds of human extinction, more or less, due to Alien Intelligence (AI). My curiosity has been overcome by doom over the past few days. I prefer bliss at the end of the day. Did you know that movers and shakers in the know are doomsters?
First, I present a rich and informative podcast by Beth Barnes, CEO and founder of METR. Barnes is in the business of measuring AI’s ability to complete long tasks. She discovered that the planning horizons of AI models were doubling every 7 months. You and I have no idea what this means by 2027 and 2030 (a rough projection follows below). Barnes pegs the extinction risk for humans at 10%. See the podcast link below.
https://metr.org/blog/2025-03-19-meas...
Review 1:07:55 to 1:14:30 — 10% Chance of Human Extinction
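To make that doubling concrete, here is a minimal Python sketch of the compounding. It is my own back-of-the-envelope illustration, not METR’s methodology, and the one-hour starting horizon is a placeholder assumption:

```python
# Back-of-the-envelope projection of METR's reported trend: the task
# horizon of AI models doubles roughly every 7 months.
# The one-hour starting horizon is my placeholder assumption,
# not a METR figure.

DOUBLING_MONTHS = 7
start_horizon_hours = 1.0  # assumed horizon at the start of the trend

for year, months_elapsed in [(2027, 24), (2030, 60)]:
    doublings = months_elapsed / DOUBLING_MONTHS
    horizon_hours = start_horizon_hours * 2 ** doublings
    print(f"{year}: ~{horizon_hours:,.0f} hours (~{horizon_hours / 24:.1f} days)")
```

Even from a modest one-hour baseline, seven-month doubling carries the horizon from hours to weeks within five years. That compounding curve is what Barnes is asking us to sit with.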
=========
So, don’t listen to me, a discontented aging writer in San Diego. What are the leaders in the AI industry saying about p(doom)? What do the household names in the industry think are the odds of human extinction? Let’s turn to Wikipedia’s entry for p(doom):
These are the values or odds for p(doom) expressed publicly. You might be surprised.
Elon Musk: 10–30%
Marc Andreessen: 0%
Geoffrey Hinton: >50%
Eliezer Yudkowsky (my favorite): >95%
Lex Fridman: 10%
Dario Amodei: 10–25%
Daniel Kokotajlo: 70–80%
Max Tegmark: >90%
Sam Altman: >50%
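For perspective, and with zero statistical rigor, one can run crude arithmetic over these published figures. A hypothetical sketch that collapses each range to its midpoint and reads “>X%” as simply X:

```python
# Naive aggregation of the publicly stated p(doom) figures listed above.
# Ranges are collapsed to their midpoints and ">X%" is read as X%;
# this is illustrative arithmetic, not a serious forecast.

estimates = {
    "Elon Musk": (10 + 30) / 2,
    "Marc Andreessen": 0,
    "Geoffrey Hinton": 50,
    "Eliezer Yudkowsky": 95,
    "Lex Fridman": 10,
    "Dario Amodei": (10 + 25) / 2,
    "Daniel Kokotajlo": (70 + 80) / 2,
    "Max Tegmark": 90,
    "Sam Altman": 50,
}

mean = sum(estimates.values()) / len(estimates)
print(f"Naive mean across {len(estimates)} public figures: {mean:.0f}%")
```

The naive mean lands around 45 percent, roughly a coin flip. Take it as mood music, not a forecast.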
Conclusion: With those odds, I would rather watch Star Trek reruns and wait for the end. That was dark humor on my part. Living through a real-life episode of Black Mirror is not my idea of fun. I place my trust in the non-conformers, the misfits like Daniel Kokotajlo, Beth Barnes, and Eliezer Yudkowsky, to sound the alarm. If no one listens to these visionaries, well, we humans had a good run of it.
On second thought, I don’t want to step away from this essay on a sour note.
I am a natural-born optimist. I could show you my genetic profile as proof. Perhaps a better ending, a happier ending, would be to consume this most excellent podcast from 80,000 Hours. I commend the podcast for several reasons. First, the host, Benjamin Todd, is dispassionate. He doesn’t choose sides in the AI debate. He is neither bullish nor bearish. He pores over all of the evidence and data and charts and graphs from the competing schools of thought. As the country folks might say, he doesn’t have a dog in the hunt.
Second, his focus is on whether we will have Artificial General Intelligence (AGI) by 2030. Unlike this essay, he spares the listener the doomsday scenario. That is a nice respite from the pessimism above.
Third, he calmly reviews the facts for why explosive economic growth may happen within the next five years and what we need to know. Either we reach AGI by 2030 or AI growth will plateau. There is a nice overview of the concrete drivers of AI performance: no doom or gloom, just a dry distillation of trends. It is a good primer if one doesn’t know very much about the issues. The emphasis is on deep learning as a driver of these developments.
I leave you with a sunny view, a vision of the rapture and utopia ahead of us. Just you wait and see, Sam reassures us all…
https://ia.samaltman.com/