Microsoft’s latest quarterly results read like a story of two engines: booming AI demand on one side, and a capital-intensive race to keep up on the other. The software giant beat Wall Street on revenue and profit — revenue was $81.27 billion and adjusted EPS came in at $5.16 — yet investors punished the stock after-hours as the company signaled that the cost of building AI capacity is eating into margins.

Growth, but not at any price

Azure and other cloud services still grew fast — about 39% year over year — and overall cloud revenue cracked the $50 billion mark for the first time, roughly $51.5 billion. Productivity businesses, which include Microsoft 365, continued to expand; Microsoft disclosed it now has roughly 15 million paid seats for the Microsoft 365 Copilot add-on, against more than 450 million paid commercial Microsoft 365 seats overall. Those are the kinds of numbers investors like to see.

But growth moderated slightly: Azure's growth ticked down from about 40% in the prior quarter to 39%, and the company's gross margin tightened to just over 68%, its narrowest level in three years. On the guidance front, Microsoft implied an operating margin of around 45.1% for the coming quarter, a touch below Street expectations, and said operating expenses will include continued investment in AI computing capacity and talent.

The data-center sprint

The headline that rattled markets was capital spending: Microsoft recorded $37.5 billion in capital expenditures and finance leases for the quarter, a 66% jump year over year. The company said it added nearly one gigawatt of total capacity in the quarter alone. That’s not just racks and power — it’s custom chips, networking, leases with third-party providers and the electricity to run generative-AI workloads around the clock.

Microsoft has been expanding both its own footprint and third-party arrangements: it leases capacity from firms such as CoreWeave and others, and it’s built bespoke infrastructure for partners like OpenAI. All this furthers Microsoft’s strategic bet that controlling scale and latency for AI workloads will pay off, but it’s expensive and it weighs on near-term margins.

If you want a sense of how the industry is thinking about capacity and geography, look at the broader experiments with where datacenters can sit: even ambitious ideas like putting compute in orbit have entered the conversation in recent months through other industry projects. That urgency helps explain why companies are so focused on securing capacity now rather than waiting.

Money on the books — and concentration risk

Microsoft’s commercial remaining performance obligation (RPO), a measure of contracted-but-unrecognized revenue, jumped to $625 billion. But nearly half of that, about 45% (roughly $280 billion), is tied to OpenAI commitments. That concentration has analysts asking whether OpenAI can deliver the cash flow to match those commitments, and what it would mean for Microsoft if the mix or cadence were to change.

On the earnings call, CFO Amy Hood defended the RPO mix as “larger than most peers” and argued the rest of the backlog is diversified and growing. Still, investors tend to dislike large, single-customer exposures folded into long-term revenue metrics.

The market reaction and the product picture

Shares fell: reports showed a drop of roughly 7% in extended trading, though some outlets noted a smaller 4–5% decline. Why the selloff despite a beat? Two main reasons: the scale of AI-related spending and slightly lighter-than-expected margin guidance. Put simply, Wall Street cheered the demand story but grumbled about the price tag.

The company did report bright spots beyond raw cloud numbers. Productivity and business processes revenue topped estimates, and Microsoft continues to position Copilot add-ons and GitHub Copilot as ways to grow monetization per customer. Meanwhile, More Personal Computing — the Windows/Surface/Xbox segment — was weaker, with gaming revenue declining and an unspecified impairment noted for the division.

Why this matters beyond Microsoft

Microsoft’s choices highlight a tension that will shape the next phase of the AI arms race. Companies can either try to be thin-margin aggregators of models and applications, or they can build expensive, proprietary infrastructure to deliver differentiated performance and capture a greater slice of value. Microsoft clearly chose the latter: it’s building scale and tying partners into its cloud and compute ecosystem. That strategy powers products like Microsoft’s own in-house models (the company has been developing multimodal AI work, such as its recent text-to-image efforts) and strengthens its position as a provider of choice for large AI customers.

For readers tracking the industry, it’s useful to remember that other cloud and AI players are experimenting with unconventional approaches to capacity and location, from new terrestrial regions to more exotic concepts, all of which feed back into how quickly supply can meet the voracious demand for generative-AI compute. If you’re following Microsoft’s model strategy, watch how its in-house model work, such as the recently announced MAI-Image-1, progresses in tandem with its infrastructure build-out; Google’s Project Suncatcher represents another side of that conversation about where and how to place compute.

Microsoft’s quarter shows that the industry’s next battleground won’t be just algorithms or apps: it will be kilowatts, leases and the ability to deliver AI at scale without letting unit economics slip away. For now, customers want more AI, faster; investors want to know how long Microsoft will keep spending at this pace and when those investments turn into steadier, fatter profits. The company’s answers will shape not only its stock price but the wider cloud market for years to come.
