Microsoft has begun deploying its second-generation AI processor, the Maia 200, in select data centers, part of a broader push by major cloud providers to lessen their dependence on Nvidia hardware and gain control over the economics of AI infrastructure.
The shift to custom silicon fundamentally reshapes cloud computing strategy as AI workloads expand. By developing proprietary chips, Microsoft could offer price advantages for AI inference, which represents a growing cost for its customers.
For now, Microsoft uses the Maia 200 only for internal workloads and has not given a timeline for external customer availability.