Elon Musk just moved the conversation about where the internet’s heavy lifting happens from warehouse parks and desert plains into low Earth orbit.
This month SpaceX filed with the FCC for what the company describes as a network of “orbital data centres” — a proposal that, on paper, could one day put vast amounts of AI compute above our heads. The filing hints at as many as a million satellites dedicated to compute, not just connectivity. Days later, Musk folded his AI startup xAI into SpaceX, tying rockets and models together in a single corporate story.
Why the idea sounds so tempting
Musk’s pitch is simple and alluring: sunlight in space is stronger and more continuous than on the ground, so solar arrays can deliver more energy per square meter. Less dependence on terrestrial grids and on the water-hungry cooling systems that large data centers use today could, in theory, ease the environmental and community fights that have sprung up around ground facilities.
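A rough back-of-envelope makes the energy claim concrete. The figures below are illustrative assumptions, not numbers from the filing: the standard solar constant of about 1,361 W/m² above the atmosphere, a dawn-dusk orbit that never enters Earth’s shadow, and roughly 2,200 kWh/m² per year of insolation at a good terrestrial site.

```python
# Back-of-envelope: annual solar energy per square meter, orbit vs. ground.
# All figures are illustrative assumptions: ~1361 W/m^2 solar constant, a
# dawn-dusk orbit with no eclipses, and ~2200 kWh/m^2/yr at a good ground
# site once night, weather, and the atmosphere are accounted for.

SOLAR_CONSTANT_W_M2 = 1361   # irradiance above the atmosphere
HOURS_PER_YEAR = 8766        # 365.25 days

orbit_kwh = SOLAR_CONSTANT_W_M2 * HOURS_PER_YEAR / 1000  # continuous sun
ground_kwh = 2200                                        # assumed good site

print(f"Orbit (always sunlit): {orbit_kwh:,.0f} kWh/m^2/yr")
print(f"Good ground site:      {ground_kwh:,.0f} kWh/m^2/yr")
print(f"Advantage: ~{orbit_kwh / ground_kwh:.0f}x per square meter")
```

That fivefold-or-so gap, before any panel efficiency or pointing losses, is the whole pitch in one number.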
On a recent podcast Musk said that in 30–36 months space will be “the most economically compelling place to put AI.” It’s a headline-grabbing timeline. It also explains why a rocket company might want to own an AI company.
The practical hurdles are real
Talk to engineers and the sales pitch starts to fray. The physics of space give you free sunlight—and also relentless radiation, a vacuum that makes heat rejection tricky, and an environment where physical repairs are orders of magnitude more expensive.
Cooling: In vacuum there is no air to carry heat away by convection. Everything must be radiated to deep space, which means big radiators kept oriented away from the Sun. That’s doable, but it’s heavy and geometry-dependent; radiator area scales linearly with waste heat. Voyager Technologies’ CEO, Dylan Taylor, told CNBC that cooling remains a major unresolved challenge and that two years would be “aggressive” to solve it.
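The underlying physics is the Stefan-Boltzmann law: a radiator sheds heat in proportion to its area and the fourth power of its temperature, Q = εσAT⁴. A minimal sizing sketch, with the 1 MW load, 300 K radiator temperature, and emissivity all assumed for illustration (it also ignores sunlight absorbed by the radiator and Earth’s infrared glow):

```python
# Sketch: radiator area needed to reject waste heat in vacuum, using the
# Stefan-Boltzmann law Q = emissivity * sigma * A * T^4 per radiating side.
# Inputs are illustrative assumptions; environmental heat loads (absorbed
# sunlight, Earth infrared) are ignored.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(waste_heat_w: float,
                     temp_k: float = 300.0,    # assumed radiator temperature
                     emissivity: float = 0.9,  # typical coating
                     sides: int = 2) -> float:
    """Area needed to radiate waste_heat_w to deep space."""
    flux_w_m2 = emissivity * SIGMA * temp_k**4  # W shed per m^2, per side
    return waste_heat_w / (flux_w_m2 * sides)

# A 1 MW compute node (roughly a thousand high-end accelerators) needs
# about 1,200 m^2 of two-sided radiator; the area grows linearly from there.
print(f"{radiator_area_m2(1e6):,.0f} m^2 for 1 MW")
```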
Radiation and reliability: Outside Earth’s protective atmosphere and magnetic field, electronics suffer bit flips and cumulative damage from ionizing radiation. Error-correcting codes and radiation-hardened hardware help, but they add cost and lower performance per watt compared with commercial GPUs on Earth. Software can mitigate transient errors, but permanent hardware degradation is harder to handle when maintenance missions cost millions.
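One classic software mitigation for those transient errors is triple modular redundancy: run the same computation three times and take a majority vote, so a single upset in one copy is masked. A toy sketch of the voting logic follows; real flight systems pair it with hardware ECC, and it roughly triples the compute bill, which is part of why performance per watt suffers.

```python
# Toy sketch of triple modular redundancy (TMR): execute a computation
# three times and majority-vote the results, masking a single transient
# fault. Illustration only; flight software layers this on hardware ECC.

from collections import Counter
from typing import Callable, TypeVar

T = TypeVar("T")

def tmr(compute: Callable[[], T]) -> T:
    """Run `compute` three times and return the majority result."""
    results = [compute() for _ in range(3)]
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        # All three copies disagree: uncorrectable; recompute or fail over.
        raise RuntimeError("no majority vote")
    return value

# Usage: wrap any deterministic computation with a hashable result.
checksum = tmr(lambda: sum(range(1_000_000)) % 2**32)
```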
Servicing and upgrades: Data centers on Earth routinely swap failing drives and upgrade racks every few years. In orbit, sending human crews for repairs is absurdly expensive; relying on robotic servicing or designing fully redundant systems bloats mass and complexity. As one engineer put it, things break constantly in data centers. That’s part of the operating model on the ground. In orbit, that becomes a design constraint.
Debris and congestion: Launching tens or hundreds of thousands of additional objects into low Earth orbit raises collision risk. Astronomers have already complained about the visual and radio interference from Starlink satellites; more hardware increases the stakes for orbital traffic management and long-term sustainability.
The economics puzzle
Musk’s argument centers on energy economics: more sunlight per panel means lower operating costs for power. But power is only one input. Launch costs, radiator mass, radiation-hardening, redundancy, communications latency and bandwidth, and the cost of returning or replacing hardware all factor into the bottom line. Analysts at Deutsche Bank and others estimate parity with terrestrial centers is likely further out — toward the 2030s in optimistic scenarios, not 30 months.
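To see why, amortize the launch cost of a solar array over the energy it delivers in orbit. Every input in the sketch below is an illustrative assumption (launch price per kilogram, array specific power, operating life), not a figure from the filing or from the analysts:

```python
# Crude parity arithmetic: the cost of orbital solar power counting only
# the launch of the array itself (no bus, radiators, compute, or spares).
# All inputs are illustrative assumptions.

HOURS_PER_YEAR = 8766

def orbital_usd_per_kwh(launch_usd_per_kg: float,
                        array_w_per_kg: float = 100.0,  # assumed specific power
                        lifetime_years: float = 5.0) -> float:
    usd_per_watt = launch_usd_per_kg / array_w_per_kg
    kwh_per_watt = HOURS_PER_YEAR * lifetime_years / 1000  # continuous sun
    return usd_per_watt / kwh_per_watt

# $/kg scenarios: roughly today's prices, an optimistic case, and an
# aspirational fully-reusable future.
for price in (1500, 500, 100):
    print(f"${price:>5}/kg launch -> ~${orbital_usd_per_kwh(price):.2f}/kWh")

# Terrestrial industrial power runs on the order of $0.05-0.10/kWh, so on
# these assumptions launch prices must fall by an order of magnitude before
# the energy argument wins, and that is before counting everything else.
```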
Google has been quietly studying this problem too (see its Project Suncatcher for experiments and modeling), and several startups have already sent test servers to space. Investors and industry players are sniffing around different trade-offs: smaller, specialized orbital nodes for inference; larger platforms for model training; or hybrid architectures that split workloads between Earth and orbit.
If you care about the human side of the equation, moving compute aloft could relieve local strain on power grids and reduce the need for huge ground installations that some communities oppose. But those benefits must be balanced against limits on launch cadence, the cost of building and replacing orbital hardware, and geopolitical and regulatory questions about where and how compute is licensed and controlled.
Who’s in the race — and what they’re actually building
SpaceX’s FCC filing grabbed headlines because of its scale. But the idea isn’t uniquely Musk’s. Google has public plans to test the concept; smaller companies and consortiums (including Voyager’s Starlab ambitions) are exploring compute and laser-communication links between space nodes and the ground. Starcloud and others have launched experimental satellites with servers aboard.
Some of the ecosystem is already useful even if full-scale orbital data centers remain distant. Satellite networks like Starlink, for example, are being used for emergency connectivity and even new services on the ground: T‑Mobile recently leveraged satellite links to enable texting 911 via Starlink in some cases, showing that space-based infrastructure can supplement terrestrial systems, and hinting at how connectivity and compute might couple in practice, well before any large compute grid exists.
Timeline and politics
Musk’s timelines are famously optimistic. Regulators have options and levers: the FCC accepted the filing for public comment, and the chair shared the application publicly on X. That visibility matters — so do federal space and defense priorities, funding flows, and export-control regimes. If a company is seen as strategically important, approvals and public investment can accelerate projects. Conversely, congressional scrutiny, international coordination and astronomers’ complaints could introduce friction.
What this could mean for AI and the planet
If orbital compute becomes cost-competitive, the architecture of AI could shift. Training clusters might stage heavy jobs in space while latency-sensitive inference stays closer to users. That hybrid model would upend assumptions about data locality, cooling, and energy sourcing.
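What that hybrid split might look like as a placement policy, sketched at whiteboard level. The round-trip latency, energy prices, and two-tier layout are all assumptions for illustration; the point is that placement becomes a function of latency tolerance and energy price rather than geography alone.

```python
# Whiteboard sketch of a hybrid ground/orbit placement policy: jobs that
# tolerate high latency chase the cheapest energy; interactive jobs stay
# near users. All thresholds and prices are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Job:
    name: str
    max_latency_ms: float  # how long callers will wait for a response
    energy_kwh: float      # total energy the job will consume

ORBIT_RTT_MS = 50      # assumed round trip through an orbital relay
GROUND_USD_KWH = 0.08  # assumed terrestrial energy price
ORBIT_USD_KWH = 0.03   # hypothetical future orbital energy price

def place(job: Job) -> str:
    if job.max_latency_ms < ORBIT_RTT_MS:
        return "ground"  # latency-sensitive inference stays near users
    # Latency-tolerant work goes wherever the energy bill is lower.
    return "orbit" if ORBIT_USD_KWH < GROUND_USD_KWH else "ground"

for job in (Job("chat inference", 20, 0.01),
            Job("model training run", 1e9, 500_000)):
    print(f"{job.name}: {place(job)}")
```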
But we aren’t there yet. The technical obstacles—cooling in vacuum, radiation resilience, replaceability, and debris management—are engineering problems, not philosophy, and they carry real price tags. Whether the long-term arithmetic favors orbit depends on falling launch costs, breakthroughs in lightweight radiators and durable hardware, and how much Earth’s own energy and water economics change in the next decade.
One thing is certain: the idea has moved from sci‑fi musing to regulatory paperwork to corporate strategy in weeks. That shift will force engineers, regulators, economists and communities to wrestle with trade-offs that until now were mostly hypothetical. Some of the biggest gains may be incremental: better satellite lasers, improved on-orbit servicing, smarter workload scheduling between ground and space. Those advances will determine whether Musk’s dizzying timeline is bravado or prophecy.
If nothing else, the conversation is changing the map of where compute — and the politics, economics and environmental impacts that come with it — gets built.
For more on what major companies are testing as they weigh moving data centers into orbit, see Google’s Project Suncatcher.