“For a conscious creature, there is something it is like to be that creature,”
— Thomas Nagel, What Is It Like to Be a Bat? (1974)
One of my students once asked me: “Will AI ever be conscious?”
It’s a profound question at the intersection of philosophy, neuroscience, and technology. In this episode of Creative Intelligence, I speak with Anil Seth, professor of cognitive and computational neuroscience at the University of Sussex and author of Being You, about the nature of consciousness, whether it’s something machines could ever truly have, and what it would mean if machines ever crossed the line into sentience.
There’s No Checklist for Consciousness — And That’s a Problem
I asked Anil for a checklist to determine whether an entity (say, an AI system) is conscious. Anil was very clear: even today, we don’t have a scientifically agreed-upon test to determine whether something (someone?) is conscious. This creates major ethical and philosophical uncertainty as AI systems become more human-like in their behavior.
In fact, not only do we lack a checklist, we also don’t know how or why physical processes in the brain give rise to subjective experience. Philosopher David Chalmers has called this challenge the “hard problem of consciousness” (in contrast to “easy” problems such as explaining behavior, memory, etc.).
“It would be lovely to have a checklist... but we don’t. That’s one of the most critical challenges in consciousness research.”
🧬 The Brain Is Not Just a “Meat-Based Computer”
One view I have heard before is that consciousness is a type of information-processing architecture involving self-modeling, narrative generation, predictive modeling, and so on. Joscha Bach, among others, has advocated this view that “we are all software.” I have always found it hard to accept that consciousness can emerge from information processing alone. One striking moment in our conversation was when Anil challenged the viewpoint that consciousness is just advanced information processing: just because a computer can simulate the brain really well doesn’t mean it can instantiate consciousness. We use computers to simulate all kinds of things without expecting them to give rise to the phenomena they simulate. The problem is that when it comes to AI, we sometimes forget that it’s a simulation.
“You know, a simulation of a weather system doesn’t get wet. A simulation of a black hole doesn’t destroy the universe. For brains, because we think they are not only usefully simulated by computers, but actually are computational at their most important level, we slip into this temptation and mistake the map for the territory.”
This analogy cuts deep. Even the most powerful AI systems today simulate intelligence, understanding, even empathy. But that doesn’t mean they feel anything. Anil isn’t asking us to believe consciousness is magic, either: he suggests it might depend on biological processes such as metabolism, which would mean that simulating a brain isn’t the same as being one.
🪞 Illusions of Consciousness Are Dangerous
Even if AI systems aren’t conscious, they often look like they are. Anil warns that this illusion could be one of the more psychologically and socially disruptive aspects of AI.
“It’s dangerous to build systems that give the illusion of being conscious — it can be a pretty dangerous illusion.”
When we’re emotionally vulnerable, we’re more likely to attribute feelings and intent to machines. That creates a real risk of misplaced trust, ethical confusion, and societal distraction, especially as these tools become more personalized and persuasive.
My Reflection
One of my main realizations from this episode is that, despite millennia of interest in the subject, we are still in the early innings of understanding consciousness itself, let alone building it (which is a source of much relief to all of us).
As AI systems grow more powerful, we’ll be tempted to ask, “Is this system conscious?” For now, I feel it may be the wrong question (for the sake of completeness: many smart people believe otherwise, and it’s possible I am wrong and they are right). Perhaps a better question, from my perspective, is: Does it give the illusion of consciousness? And if so, what should we do about that?
We humans are wired to anthropomorphize — to project intention, emotion, and sentience onto things that behave like us. That tendency is now being engineered into the software we use. Many readers have shared recent news stories about people who treat ChatGPT as a confidante, a friend, or even a romantic partner. There are also darker accounts: troubling relationships with chatbots and, in some heartbreaking cases, suicides following interactions with a next-token-predicting large language model. That is exactly what can go wrong with an illusion of consciousness. We should be aware of it and, in my opinion, be wary of it.
Timestamps
0:00 – Introduction
2:02 – What Is Consciousness?
5:00 – Is Reality a Construction of the Mind?
7:16 – Can AI Be Conscious? A Checklist for Consciousness
12:07 – The Hard Problem of Consciousness
17:16 – Are We All “Meat-Based Computers”?
25:27 – Building vs. Emerging Consciousness: Biology, Brains, and AI
35:56 – Identifying Consciousness: Beyond Intelligence and Behavior
40:35 – Anil Seth’s Research & Ethical Reflections
45:53 – Closing Remarks
This will be the penultimate episode of the season. As I gear up for the start of my teaching semester, I hope to bring you one final episode of Season 1.
If you have thoughts, reactions, or disagreements, I’d love to hear them. You can reply to this post here or on LinkedIn or Twitter.