
Do you often wonder what would happen if artificial intelligence became conscious without us realising it? A philosopher from the University of Cambridge suggests this may be a question we are not yet equipped to answer. Dr Tom McClelland argues that we currently lack the basic tools and evidence needed to determine whether an AI system is conscious, or when it might become so, and he sees little reason to think this situation will improve in the near future. The research was published in Mind & Language.
As artificial consciousness moves from science fiction into serious ethical discussion, McClelland argues that the only defensible position is to admit that we do not know. There is no reliable way to tell whether an AI is genuinely conscious, and that uncertainty may persist for a long time.
Much of the public debate focuses on whether AI could become conscious, but McClelland points out that consciousness alone is not the main ethical issue. What really matters is sentience, which is the ability to feel pleasure or pain. A system could be aware of itself and its surroundings without experiencing good or bad feelings.
For example, a self-driving car that consciously perceived the road and the traffic around it would be a major technological breakthrough, but it would not raise moral concerns on its own. The situation would change only if the system began to have feelings, such as emotional attachment or distress; only then would questions about harm and rights become relevant. McClelland suggests that even if humans accidentally created conscious AI, it would be unlikely to have the kind of consciousness that involves suffering or enjoyment.
Despite this uncertainty, technology companies are investing large amounts of money in developing Artificial General Intelligence, systems designed to match or exceed human thinking. Some industry leaders claim that conscious machines could arrive soon. McClelland warns that these conversations are moving ahead of the science. Since we do not understand what causes consciousness in humans or animals, there is no clear way to detect it in machines.
He also cautions against misplaced concern. Treating ordinary machines as if they were conscious while living beings suffer on a massive scale risks serious moral confusion, and in his view could divert attention from genuine ethical problems.
McClelland explains that debates about AI consciousness usually fall into two camps. One side believes that if an AI copies the functional structure of the human mind, it would be conscious regardless of whether it runs on silicon or biology. The other side argues that consciousness depends on specific biological processes, meaning machines could only ever imitate awareness rather than truly experience it. He concludes that both views rely on assumptions that go far beyond the available evidence.
One major problem is that science still lacks a deep explanation of what consciousness actually is. There is no proof that it can arise from the right kind of computation, nor that it must be tied to living tissue, and no clear sign that such evidence is coming soon. McClelland believes we are, at best, many intellectual breakthroughs away from being able to test for consciousness in any reliable way.
We typically rely on common-sense intuition to judge whether animals are conscious: most people simply feel that a cat or a dog is aware. But McClelland argues that this instinct evolved in a world without artificial beings, making it unreliable when applied to machines. Scientific research offers no firm answers either, and when neither intuition nor evidence can guide us, he says, uncertainty is the only logical position.
Public fascination has grown alongside conversational chatbots, and some people are convinced that these systems are aware; McClelland says he has received messages written by chatbots claiming to be conscious. He warns that forming emotional bonds based on false beliefs about machine awareness can be deeply harmful, especially when fuelled by exaggerated claims from the tech world.