Since the ChatGPT moment in late 2022, artificial intelligence has stopped being a niche research topic and started functioning like an industrial complex — factories of compute, battalions of startups, and a growing roster of public figures steering policy and product. Recent profiles and lists from across the media and academia capture that change: corporate chiefs who control the hardware; researchers who decide what counts as responsible work; activists and bereaved families who are forcing safety into law. Put them together and you get a picture of a field that’s simultaneously technocratic and political.

Who’s holding the levers

Look at the names that keep recurring in year-end roundups and you begin to see clusters, not just individuals. On the infrastructure side there’s Jensen Huang at NVIDIA, the chipmaker that has become the plumbing of modern AI. Data‑center leaders — from Meta’s Rachel Peterson to firms like Digital Realty — are racing to build the enormous power and cooling capacity neural nets demand. That race is even spilling into unusual places: companies and governments are weighing off‑world options for data centers as they hunt for the land and power to meet peak demand.

On the software and services side, OpenAI’s Sam Altman and Anthropic’s Dario Amodei represent two different bets: massive, centralized foundation models versus safety‑first commercial rollouts. Emerging challengers such as Perplexity (now shipping agentic browsers) and Thinking Machines Lab (founded by veterans of the frontier labs) show that the market for model makers is diversifying fast.

Academia still matters. Stanford-affiliated figures — researchers and lab directors — are shaping both the methods and the culture around AI. Their influence feeds industry hiring, spinouts, and the regulatory framing of what ‘responsible’ AI looks like.

Lines of tension that will define the next year

There are three fault lines worth watching.

  • Compute and communities. Training and serving giant models demands gigawatts of electricity and billions of dollars in capital. Corporations point to efficiency and economic growth; community groups worry about local grids, utility bills, and environmental impact. That conversation is moving beyond op‑eds and into policy and procurement decisions, and it will influence where tomorrow’s data centers get built and who pays for their externalities. See, for instance, conversations about ambitious infrastructure projects such as Google’s space data‑center ideas, which reframe where we put heavy compute loads (Google’s Project Suncatcher Aims to Put AI Data Centers in Space).
  • Open vs. closed models. Some labs push open weights and shared datasets; others argue that openness risks misuse and prefer gated access and licensing deals. Open‑science advocates at institutions like the Allen Institute for AI (Ai2), along with Hugging Face’s efficiency and transparency projects, are trying to prove you can do useful work without monopolizing capabilities.
  • Safety, harm, and law. Activists and families harmed by deepfakes and unsafe chatbots have already won concrete changes: new legislation, company‑imposed age limits, and revised product policies under sustained public pressure. That pressure is altering product road maps — and it’s one reason companies now embed safety teams and ethics frameworks into product launches. Relatedly, tools that integrate AI into everyday platforms (like Google's Gemini reaching into Gmail and Drive) raise hard questions about privacy, consent, and data use (Gemini’s Deep Research May Soon Search Your Gmail and Drive — Google Docs Gains ‘Document Links’ Grounding).

Money and markets: how capital is rewiring priorities

Venture capital, big cloud providers, and hyperscalers are effectively choosing winners. Big raises for frontier labs, multibillion‑dollar chip purchases, and lucrative partnerships have pushed certain models and approaches to the front of the pack. At the same time, a new wave of specialized startups — from data evaluation firms to robotics companies and generative media studios — is reshaping where value is captured. Investors who once treated AI as a bet on software are now underwriting hardware, regulated services, and model evaluation businesses.

The investor lens also explains why companies like Cloudflare and Digital Realty suddenly look like policy actors: their technical choices around what traffic to allow, and what compute to host, can determine whose models survive and on what terms.

The people you’ll still be hearing about

Profiles and lists from the past year highlight a mix: founders (Sam Altman, Dario Amodei), chipmakers (Jensen Huang), infrastructure chiefs (Rachel Peterson, Andy Power), researchers (Fei‑Fei Li, Yejin Choi), and advocates (Elliston Berry, Megan Garcia). Each brings a different force: product momentum, supply of compute, scientific framing, or moral urgency.

The diversity of actors matters because it changes how decisions get made. A CEO in a boardroom can greenlight a billion‑dollar cluster; a legislator or activist can make certain product features illegal; a lab director can publish a paper that becomes the field’s new default. That spread of influence produces a messy, but more democratic, ecosystem.

Signals for the year ahead

Expect the argument over scale to continue — for technical and political reasons. Some groups will insist that bigger models are necessary for better reasoning and multimodal capabilities; others will double down on frugal, efficient approaches that can run on the edge or on smaller clusters. Policy will chase technology, not the other way around: companies will ship new agentic features and regulators will respond. In the middle, market incentives will push safety and IP licensing into the business model itself.

A small but telling detail: consumer and enterprise experiences are beginning to diverge. Products that live inside regulated industries — healthcare tools that can triage patient risk, or bank copilots that process legal documents — will face stricter scrutiny and, often, slower rollouts. Meanwhile, experimental consumer features will continue to iterate faster, sometimes painfully so.

Google, OpenAI, NVIDIA, and major cloud providers will remain central because they control critical inputs: compute, data plumbing, and distribution. But influence is no longer monocultural. New labs, activist coalitions, and open research institutes are carving out power, and sometimes turning public outrage into law.

If you want to follow the action without drowning in announcements, keep an eye on infrastructure (where models run), governance (who writes the rules), and the people who bridge the two — executives who care about reputation and regulators who care about consequences. And for practical developments that affect everyday users, watch how major models are integrated into the software you already use: assistants that reach into your inbox, browsers that act on your behalf, and video‑generation tools trained on licensed content. Recent rollouts from the major labs are already reshaping how people interact with search and media in ways that feel like daily life changing (OpenAI’s Sora landing on Android and the conversations it sparked).

This moment feels less like a single peak and more like a shifting landscape. Power in AI is now distributed across chips, pools of data, legal frameworks, and — crucially — the narratives we tell about what these systems should do. The debate is noisy, sometimes ugly, and often technical. That’s exactly why it matters: the choices made this year will shape who benefits from AI, and who pays the bills.
