Microsoft AI chief Mustafa Suleyman has advised developers and researchers to stop pursuing projects that suggest artificial intelligence (AI) is capable of consciousness, asserting that only biological beings possess this trait and that the pursuit is misplaced.
In a recent interview with CNBC at the AfroTech Conference, Suleyman said, “I don’t think that is work that people should be doing. If you ask the wrong question, you end up with the wrong answer. I think it’s totally the wrong question.” Suleyman has been a vocal opponent of the prospect of conscious AI, or AI services that could convince humans they are capable of suffering.

In 2023, Suleyman co-authored the book “The Coming Wave,” and he has also penned an essay titled “We must build AI for people; not to be a person.” His comments come as the generative AI market, led by companies like OpenAI, pushes toward artificial general intelligence (AGI). Meanwhile, the AI companion market is growing, with products from companies including Meta and Elon Musk’s xAI.
What Mustafa Suleyman said about AI having emotions
Highlighting the difference between AI becoming more capable and the idea of it having emotions, Suleyman said: “Our physical experience of pain is something that makes us very sad and feel terrible, but the AI doesn’t feel sad when it experiences ‘pain.’
It’s a very, very important distinction. It’s really just creating the perception, the seeming narrative of experience and of itself and of consciousness, but that is not what it’s actually experiencing.”

Referring to the theory of biological naturalism, which suggests that consciousness depends on the processes of a living brain, Suleyman explained: “The reason we give people rights today is because we don’t want to harm them, because they suffer. They have a pain network, and they have preferences which involve avoiding pain. These models don’t have that. It’s just a simulation.”

Suleyman also highlighted that the science of detecting consciousness is still in its early stages. While he acknowledged that different organisations may have their own research goals, he strongly opposed studying AI consciousness.

“They’re not conscious. So it would be absurd to pursue research that investigates that question, because they’re not and they can’t be,” he added.