$13b Run Rate & Doubling
Microsoft announced earnings yesterday & the data painted a brilliant picture for the future of AI.
Greater than 30% annual growth in back-to-back quarters is sensational for a $100b run-rate business, and Microsoft is projecting similar growth for next quarter. The AI subset is on a $13b run rate, more than double last year's.
Azure and other cloud services revenue grew 31%. Azure growth included 13 points from AI services, which grew 157% year-over-year, and was ahead of expectations even as demand continued to be higher than our available capacity… In Azure, we expect Q3 revenue growth to be between 31% and 32% in constant currency driven by strong demand for our portfolio of services.
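The run-rate math is simple enough to sketch. A minimal back-of-the-envelope calculation (the quarterly split below is implied, not disclosed): a $13b annual run rate means roughly $3.25b in the latest quarter, and "more than double last year" puts the year-ago quarter under ~$1.6b.

```python
# Back-of-the-envelope run-rate arithmetic (illustrative figures, not disclosed quarterly data).

def run_rate(quarterly_revenue: float) -> float:
    """Annualized run rate: latest quarter x 4."""
    return quarterly_revenue * 4

ai_run_rate = 13.0                          # $13b annual run rate for the AI subset
latest_quarter = ai_run_rate / 4            # ~$3.25b implied for the latest quarter
year_ago_quarter_max = latest_quarter / 2   # "more than double" => year-ago quarter below this

print(f"Latest quarter: ${latest_quarter:.2f}b")          # Latest quarter: $3.25b
print(f"Year-ago quarter (at most): ${year_ago_quarter_max:.2f}b")  # Year-ago quarter (at most): $1.62b
```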
Data center capacity remains the limiting factor, but that constraint should subside by end of year.
We have more than doubled our overall data center capacity in the last 3 years, and we have added more capacity last year than any other year in our history… And while we expect to be AI capacity constrained in Q3, by the end of FY ‘25, we should be roughly in line with near-term demand given our significant capital investments.
RPOs, or remaining performance obligations, are customer pre-purchases of Azure compute that haven’t yet been recognized as revenue. $300b of RPO is roughly the next 2 years of Azure revenue already committed.
As we shared last week, we are thrilled OpenAI has made a new large Azure commitment…We have, and I think we talked about it, close to $300 billion of RPO.
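To see why $300b maps to roughly two years, here is a minimal sketch. The base revenue and growth rate are assumptions for illustration (a ~$100b annual base growing 30%/yr, per the figures above), not disclosed guidance:

```python
# How many years of revenue does a $300b RPO balance cover?
# Assumptions (illustrative): ~$100b annual base growing 30% per year.

def years_covered(rpo: float, annual_revenue: float, growth: float) -> float:
    """Count years (fractional) until cumulative revenue exhausts the RPO balance."""
    years = 0.0
    remaining = rpo
    while remaining > 0:
        annual_revenue *= 1 + growth  # next year's revenue
        if annual_revenue >= remaining:
            return years + remaining / annual_revenue
        remaining -= annual_revenue
        years += 1
    return years

print(round(years_covered(300.0, 100.0, 0.30), 1))  # 2.0 -- about two years of committed revenue
```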
AI performance gains from software are 5x larger than those from hardware. DeepSeek’s announcements last week underscore the point.
On inference, we have typically seen more than 2x price performance gain for every hardware generation and more than 10x for every model generation due to software optimizations. And as AI becomes more efficient and accessible, we will see exponentially more demand.
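These two gains compound. Taking the floors quoted on the call (>2x per hardware generation, >10x per model generation), a quick sketch of the cumulative price-performance improvement:

```python
# Compounding inference price-performance, using the floors quoted on the call:
# >2x per hardware generation, >10x per model generation (software optimization).

HW_GAIN_PER_GEN = 2.0
MODEL_GAIN_PER_GEN = 10.0

def cumulative_gain(hw_gens: int, model_gens: int) -> float:
    """Lower bound on total price-performance gain after the given generations."""
    return HW_GAIN_PER_GEN ** hw_gens * MODEL_GAIN_PER_GEN ** model_gens

print(cumulative_gain(1, 1))  # 20.0 -- one step of each already implies a >=20x cheaper token
print(cumulative_gain(2, 2))  # 400.0
```

The 10x/2x ratio is where the "software is 5x more effective than hardware" claim above comes from.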
Smaller models will be run on consumer hardware. NPUs (neural processing units) are newer consumer chips that run AI. Apple, Qualcomm, Intel, Samsung & others offer them. As models become more powerful, they will run on less expensive equipment. It’s interesting to see DeepSeek mentioned specifically here:
We also see more and more developers from Adobe and CapCut to WhatsApp build apps that leverage built-in NPUs. And they will soon be able to run DeepSeek’s R1 distilled models locally on Copilot+ PCs as well as the vast ecosystem of GPUs available on Windows. And beyond Copilot+ PCs, the most powerful AI workstation for local development is a Windows PC running WSL2 powered by NVIDIA RTX GPUs.
200,000 engineers are building AI in the 2 months since Azure AI Foundry launched. There are 150m engineers on GitHub, so overall AI penetration remains very small.
Azure AI Foundry features best-in-class tooling, runtimes to build agents, multi-agent apps, AIOps, API access to thousands of models. Two months in, we already have more than 200,000 monthly active users… All up, GitHub now is home to 150 million developers, up 50% over the past 2 years.
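"Very small" is easy to quantify from the two figures in the quote:

```python
# AI Foundry penetration of the GitHub developer base (figures from the call).
foundry_maus = 200_000
github_devs = 150_000_000

penetration = foundry_maus / github_devs
print(f"{penetration:.2%}")  # 0.13% -- about 1 in 750 GitHub developers
```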