Microsoft AI chief says only biological beings can be conscious

Mustafa Suleyman, CEO of Microsoft AI, speaks during an event commemorating the company’s 50th anniversary at Microsoft headquarters in Redmond, Washington, April 4, 2025.

David Ryder | Bloomberg | Getty Images

Microsoft AI chief Mustafa Suleyman says only biological beings are capable of consciousness and that developers and researchers should stop pursuing projects that suggest otherwise.

“I don’t think it’s a job that people should be doing,” Suleyman told CNBC in an interview this week at the AfroTech conference in Houston, where he was among the keynote speakers. “If you ask the wrong question, you get the wrong answer. I think it’s totally the wrong question.”

Suleyman, Microsoft’s most senior executive working on artificial intelligence, has been one of the leading voices in the emerging field speaking out against the prospect of seemingly conscious AI, or AI services that can convince humans they are capable of suffering.

In 2023, he co-authored the book “The Coming Wave,” which explores the risks of AI and other emerging technologies. And in August, Suleyman published an essay titled “We must build AI for people; not to be a person.”

This is a controversial topic, as the market for AI companions is growing rapidly, with products from companies such as Meta and Elon Musk’s xAI. And it is a complicated question because the generative AI market, led by Sam Altman and OpenAI, is racing toward artificial general intelligence (AGI), or AI capable of performing intellectual tasks on par with those of humans.

Altman told CNBC’s “Squawk Box” in August that AGI is “not a very useful term” and that what’s really happening is that the models are advancing quickly and we’ll rely on them “for more and more things.”

For Suleyman, it is particularly important to draw a clear line between AI becoming smarter and more capable, and the notion that AI can feel human emotions.

“Our physical experience of pain is something that makes us very sad and makes us feel terrible, but AI doesn’t feel sad when it experiences ‘pain,’” Suleyman said. “That’s a very, very important distinction. It’s only creating the perception, the apparent narrative of experience and of itself and of consciousness, but that is not what it’s actually experiencing. Technically, we know that, because we can see what the model is doing.”

Suleyman’s position is in line with biological naturalism, a theory proposed by philosopher John Searle which holds that consciousness depends on the processes of a living brain.

“The reason we give people rights today is because we don’t want to hurt them, because they are in pain. They have a pain network and they have preferences that involve avoiding pain,” Suleyman said. “These models don’t have that. It’s just a simulation.”

Suleyman and others have said that the science of consciousness detection is still in its infancy. He did not go so far as to say that others should be prevented from researching the topic, acknowledging that “different organizations have different missions.”

But Suleyman stressed how opposed he was to the idea.

“They are not conscious,” he said. “So it would be absurd to pursue research into that question, because they aren’t conscious and they can’t be.”

“Places we won’t go”

Suleyman is on a speaking tour, in part to educate the public about the risks of pursuing AI consciousness.

Ahead of the AfroTech conference, he spoke last week at the Paley International Council summit in Silicon Valley. There, Suleyman said Microsoft would not build chatbots for erotica, a stance that puts it at odds with others in the tech sector. Altman said in October that ChatGPT would allow adult users to have erotic conversations, while xAI offers a risqué animated companion.

“Basically, you can buy these services from other companies, so we make decisions about where we won’t go,” Suleyman reiterated at AfroTech.

Suleyman joined Microsoft in 2024 after the company paid his startup, Inflection AI, $650 million in a licensing and hiring deal. He previously co-founded DeepMind, which he and his co-founders sold to Google for a reported $400 million more than a decade ago.

During his Q&A session at AfroTech, Suleyman said he decided to join Microsoft last year in part because of the company’s history, stability and broad technological reach. It was also sued by CEO Satya Nadella.

“The other thing to say is that Microsoft needed to be self-sufficient in AI,” he said on stage. “Satya, our CEO, embarked on this mission about 18 months ago, to ensure that internally we had the ability to train our own models end-to-end with all of our own data, pre-training, post-training, reasoning, deploying into products. And that was part of building my team.”

Since 2019, Microsoft has been a major investor in and cloud partner of OpenAI, and the two companies have used their respective strengths to build major AI businesses. But the relationship has shown signs of strain lately, with OpenAI partnering with Microsoft rivals such as Google and Oracle, and Microsoft focusing more on its own AI services.

Suleyman’s concerns about AI consciousness are gaining resonance. In October, California Governor Gavin Newsom signed SB 243, which requires companion chatbots to disclose that they are AI and to remind minors every three hours to “take a break.”

Last week, Microsoft announced new features for its Copilot AI service, including an AI companion called Mico and the ability to interact with Copilot in group chats with others. Suleyman said Microsoft is creating services that are aware that they are AI.

“We are simply creating AIs that always work in service of humans,” he said.

There’s plenty of room for personality, he added.

“The knowledge is there and the models are very, very responsive,” Suleyman said. “It’s up to everyone to try to sculpt AI personalities with values that they want to see, that they want to use, and that they want to interact with.”

Suleyman pointed to a feature Microsoft launched last week called Real Talk, which is a conversational style of Copilot designed to challenge users’ viewpoints instead of being sycophantic.

Suleyman described Real Talk as cheeky and said it recently roasted him, calling him “the ultimate bundle of contradictions” for warning about the dangers of AI in his book while accelerating its development at Microsoft.

“It was just a magical use case because, in a way, I felt sort of seen by it,” Suleyman said, noting that AI itself is full of contradictions.

“It’s both disappointing in some ways and, at the same time, it’s totally magical,” he said. “And if that doesn’t scare you, you don’t really understand it. You should be afraid of it. Fear is healthy. Skepticism is necessary. We don’t need unbridled accelerationism.”

WATCH: Microsoft is making money from AI, says Jefferies’ Brent Thill

