Microsoft, no longer dependent on OpenAI, is now competing for 'superintelligence,' with AI head Mustafa Suleyman aiming to guarantee it benefits humankind.

By Sharon Goldman, AI Reporter

Sharon Goldman is an AI reporter at Coins2Day and co-authors Eye on AI, Coins2Day’s flagship AI newsletter. She has written about digital and enterprise tech for over a decade.

By Jeremy Kahn, Editor, AI

Jeremy Kahn is the AI editor at Coins2Day, spearheading the publication's coverage of artificial intelligence. He also co-authors Eye on AI, Coins2Day’s flagship AI newsletter.

Microsoft AI CEO Mustafa Suleyman
Stephen Brashear—Getty Images

When Mustafa Suleyman arrived at Microsoft in March 2024 to lead the company's new consumer AI division, which includes products such as Copilot, there were clear limits on what he could do.

TL;DR

  • Microsoft AI CEO Mustafa Suleyman announced the MAI Superintelligence Team to pursue "humanist superintelligence."
  • This new team aims to develop advanced AI capabilities that benefit humanity, differentiating from competitors.
  • Microsoft is now pursuing its own superintelligence development while maintaining its OpenAI partnership.
  • The initiative prioritizes human well-being and responsible innovation, with Karén Simonyan as chief scientist.

Under Microsoft's landmark deal with OpenAI, the company was barred from pursuing its own AGI development. The agreement also capped the size of the models Microsoft could build, preventing the company from training systems beyond a specific compute threshold. (That threshold was defined in FLOPs, the total number of floating-point operations performed while training a model, which serves as a rough proxy for the computational resources invested in it.)

“For a company of our scale, that’s a big limitation,” Suleyman told Coins2Day.

That’s all changing now: Suleyman announced the formation of the new MAI Superintelligence Team on Thursday. Led by Suleyman and part of the broader Microsoft AI business, the team will work toward “humanist superintelligence (HSI),” which Suleyman defined in a blog post as “incredibly advanced AI capabilities that always work for, in service of, people and humanity more generally.”

Microsoft is just the latest company to rebrand its advanced AI efforts as a drive toward “superintelligence,” the idea of artificial intelligence systems that would potentially be wiser than all of humanity combined. But for now, it’s better marketing than science: no such systems exist, and scientists debate whether superintelligence is even achievable with current AI methods.

Despite this, companies have not shied away from declaring superintelligence their goal and forming teams with “superintelligence” in the name. Notably, Meta renamed its AI efforts Meta Superintelligence Labs in June 2025. OpenAI CEO Sam Altman has said his company has already figured out how to build artificial general intelligence, or AGI, the idea of an AI system with human-level proficiency across most cognitive tasks. Although OpenAI has yet to release a model that achieves that initial goal, it has begun looking beyond AGI toward superintelligence.

Meanwhile, Ilya Sutskever, OpenAI’s former chief scientist, cofounded an AI startup called Safe Superintelligence that is also dedicated to creating this hypothetical superpowerful AI and making sure it remains controllable. He had previously led a similar effort within OpenAI. AI company Anthropic also has a team dedicated to researching how to control a hypothetical future superintelligence.

Microsoft's positioning of its new superintelligence initiative as “humanist superintelligence” is a calculated move to differentiate itself from the more tech-focused objectives of competitors such as OpenAI and Meta. “We reject narratives about a race to AGI, and instead see it as part of a wider and deeply human endeavor to improve our lives and future prospects,” Suleyman stated in the blog entry. “We also reject binaries of boom and doom; we’re in this for the long haul to deliver tangible, specific, safe benefits for billions of people. We feel a deep responsibility to get this right.”

Microsoft AI has spent the past year building an AI “self-sufficiency effort,” Suleyman told Coins2Day, while also extending its OpenAI partnership through 2030, preserving early access to OpenAI’s top models and intellectual property.

Now, he explained, “we have a best-of-both environment, where we’re free to pursue our own superintelligence and also work closely with them.”

This newfound independence has required substantial spending on AI chips to train the team's models, though Suleyman declined to disclose the size of the team's GPU fleet. Above all, he said, the effort centers on “making sure we have a culture in the team that is focused on developing the absolute frontier [of AI research].” He acknowledged the goal will take years to achieve but said it is a “key priority” for Microsoft.

Karén Simonyan will serve as chief scientist of the new superintelligence team. Simonyan joined Microsoft in the same March 2024 deal that brought Suleyman and several other prominent researchers from Inflection, the AI startup Suleyman cofounded, to the company. The team also includes numerous researchers Microsoft has hired away from Google DeepMind, Meta, OpenAI, and Anthropic.

Suleyman maintained that the new superintelligence initiative's focus on human well-being doesn't preclude rapid innovation, though he acknowledged that building a “humanist” superintelligence means thinking carefully about capabilities that are “not ready for primetime.”

Asked how his views square with those of AI leaders in the Trump administration, such as AI and crypto “czar” David Sacks, who advocate unrestricted AI development and reduced oversight, Suleyman said Sacks is right in several respects.

“David’s totally right, we should accelerate. It’s critical for America; it’s critical for the West in general,” he said. However, he added, AI developers can push the envelope while also understanding potential risks like misinformation, social manipulation, and autonomous systems that act outside of human intent.

“We should be going as fast as possible within the constraints of making sure it doesn’t harm us,” he said.