Qualcomm announces AI chips to compete with AMD and Nvidia

Qualcomm announced on Monday that it would launch new artificial intelligence accelerator chips, marking new competition for Nvidia, which has dominated the AI semiconductor market so far.
The AI chips mark a shift from Qualcomm, which has until now focused on semiconductors for wireless connectivity and mobile devices, not massive data centers.
Qualcomm said the AI200, which goes on sale in 2026, and the AI250, planned for 2027, can be integrated into a system that fills an entire liquid-cooled server rack.
Qualcomm thereby matches Nvidia and AMD, which offer their graphics processing units, or GPUs, in full rack systems that allow up to 72 chips to act as a single computer. AI labs need this computing power to run the most advanced models.
Qualcomm’s data center chips are based on the AI components of Qualcomm’s smartphone chips, called Hexagon Neural Processing Units, or NPUs.
“We wanted to prove ourselves in other areas first, and once we built our strength there, it was pretty easy for us to move up a notch at the data center level,” said Durga Malladi, Qualcomm’s general manager of data centers and edge, on a call with reporters last week.
Qualcomm’s entry into the data center world marks new competition in the fastest-growing technology market: equipment for new AI-driven server farms.
Nearly $6.7 trillion in capital spending will be devoted to data centers by 2030, with the majority going toward systems based on AI chips, according to an estimate from McKinsey.
The sector has been dominated by Nvidia, whose GPUs make up more than 90% of the market so far and whose sales have driven the company to a market capitalization of more than $4.5 trillion. Nvidia’s chips were used to train OpenAI’s GPTs, the large language models used in ChatGPT.
But companies like OpenAI are looking for alternatives, and earlier this month the startup announced plans to buy chips from second-largest GPU maker AMD and potentially take a stake in the company. Other companies, such as Google, Amazon and Microsoft, are also developing their own AI accelerators for their cloud services.
Qualcomm said its chips focus on inference, or running AI models, rather than training, which is how labs such as OpenAI create new AI capabilities by processing terabytes of data.
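For readers unfamiliar with the distinction, the sketch below (plain Python with a made-up one-parameter model, not Qualcomm's software stack) contrasts a training step, which iteratively updates model weights from data, with inference, which only runs the finished model forward:

```python
# Illustrative only: a one-parameter linear model y = w * x.
# Training adjusts w from data; inference just evaluates the model.

def train_step(w, x, target, lr=0.1):
    """One gradient-descent step on squared error: this is 'training'."""
    pred = w * x
    grad = 2 * (pred - target) * x   # d/dw of (w*x - target)^2
    return w - lr * grad             # updated weight

def infer(w, x):
    """Forward pass only, no weight updates: this is 'inference'."""
    return w * x

w = 0.0
for _ in range(50):                  # training loop: expensive, iterative
    w = train_step(w, x=2.0, target=6.0)

print(infer(w, 2.0))                 # inference: a single cheap forward pass (~6.0)
```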
The chipmaker said its rack-scale systems will ultimately cost less to operate for customers such as cloud service providers, and that a single rack consumes 160 kilowatts, comparable to the power draw of some Nvidia GPU racks.
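To put 160 kilowatts in context, a back-of-the-envelope calculation converts that draw into annual energy use and electricity cost; the $0.08/kWh rate below is an assumed figure for illustration, not from Qualcomm:

```python
# Back-of-the-envelope: what a 160 kW rack draws over a year.
# The $0.08/kWh industrial electricity rate is an assumption.
rack_kw = 160
hours_per_year = 24 * 365
kwh_per_year = rack_kw * hours_per_year   # 1,401,600 kWh
cost = kwh_per_year * 0.08                # ≈ $112,000/year in electricity
print(f"{kwh_per_year:,} kWh/year, ~${cost:,.0f}/year at $0.08/kWh")
```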
Malladi said Qualcomm will also sell its AI chips and other parts separately, especially to customers such as hyperscalers that prefer to design their own racks. He said other AI chip companies, such as Nvidia or AMD, could even become customers for some of Qualcomm's data center parts, such as its central processing unit, or CPU.
“What we’ve tried to do is make sure our customers are able to take everything or say, ‘I’ll mix and match,'” Malladi said.
The company declined to comment on chip, board or rack pricing, or on how many NPUs can be installed in a single rack. In May, Qualcomm announced a partnership with Saudi company Humain to supply data centers in the region with AI inference chips; Humain will be Qualcomm's customer, committing to deploy systems that can use as much as 200 megawatts of power.
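Dividing the deal's power envelope by the per-rack figure Qualcomm quoted gives a rough ceiling on the deployment's size. This is an estimate from the article's own numbers, not a disclosed rack count:

```python
# Rough scale of the Humain commitment, using figures from this article.
deal_mw = 200                        # committed power envelope, in megawatts
rack_kw = 160                        # Qualcomm's quoted per-rack draw
max_racks = deal_mw * 1000 / rack_kw
print(f"~{max_racks:,.0f} racks at full utilization")   # ~1,250 racks
```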
Qualcomm said its AI chips have advantages over other accelerators in power consumption, cost of ownership and a new approach to memory management. It said its AI cards support 768 GB of memory, more than comparable offerings from Nvidia and AMD.
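For a rough sense of what 768 GB per card allows (my own arithmetic using standard precision sizes, not a Qualcomm specification): a model's weight footprint is approximately its parameter count times the bytes stored per parameter, so:

```python
# Rough model-capacity math: weights ≈ parameters × bytes per parameter.
# Real deployments also need memory for activations and KV caches,
# so these are upper bounds, not a Qualcomm specification.
card_gb = 768
for bytes_per_param, label in [(2, "FP16/BF16"), (1, "INT8/FP8")]:
    max_params_b = card_gb / bytes_per_param   # billions of parameters
    print(f"{label}: up to ~{max_params_b:.0f}B parameters in {card_gb} GB")
```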
Qualcomm's design for an AI server, the AI200. (Image: Qualcomm)