Microsoft Unveils Maia 200 AI Chip, Outperforming Amazon and Google in Cloud Performance
Microsoft has introduced the Maia 200 AI chip, built on TSMC's 3nm process and claimed to deliver 30% better performance per dollar than the existing hardware in its data centers, with reported leads over Amazon's Trainium 3 in FP4 performance and Google's TPU v7 in FP8 performance. With over 140 billion transistors and 216 GB of memory, the chip is tailored for large-scale AI workloads and will serve OpenAI's GPT-5.2 model and Microsoft 365 Copilot. Currently deployed in Iowa, the Maia 200 is part of Microsoft's strategy to strengthen its in-house hardware in the competitive AI sector.

Microsoft has launched the Maia 200 AI chip, built on TSMC's 3nm process, claiming it delivers 30% better performance per dollar than the existing hardware in its data centers. The chip, featuring over 140 billion transistors and 216 GB of high-speed memory, reportedly offers three times the FP4 performance of Amazon's Trainium 3 and surpasses Google's TPU v7 in FP8 performance, though independent verification of these figures is pending.
The Maia 200 is designed for large-scale AI workloads and will host OpenAI's GPT-5.2 model and Microsoft 365 Copilot. It is currently deployed in Microsoft's Iowa data center, with Arizona as the next planned location.
Developers can access a preview of the Maia SDK. This launch is part of Microsoft's broader strategy to control its hardware ecosystem in the competitive AI landscape.