Ask two people about AI and jobs and you’ll get two very different headlines.

On one side is Geoffrey Hinton — the so‑called “godfather of AI” — who told a public forum with Senator Bernie Sanders that mass unemployment from automation is “very likely.” Hinton argued that tech companies are effectively banking on systems that can do the work of humans cheaper and faster, a dynamic that could hollow out entire swathes of employment from call centers and administrative roles to some white‑collar jobs. He warned of second‑order risks as well: AI‑generated deepfakes undermining trust, and even the chilling possibility that automation could make war politically easier for powerful nations.

On the other side sits a different signal: real‑time hiring data from platforms such as LinkedIn. Sue Duke, LinkedIn’s managing director for EMEA, told a Fortune audience she’s not seeing an AI‑triggered hiring collapse. Instead, organizations that adopt AI are advertising for more people — especially those who can sell, build, integrate and oversee these new systems. “They’re going out and looking for more business development people, more technologically savvy people, and more salespeople,” she said, noting that adaptability and uniquely human skills like communication and team building remain in demand.

Two competing futures

These positions aren’t exactly contradictory; they’re snapshots taken at different distances. LinkedIn’s data describes the immediate labor market: recruiters filling roles to deploy AI, manage change, and capture new business. Hinton’s view is about structural transformation over a longer horizon — the slow erosion of demand for human labor as AI capabilities expand.

Both perspectives come with evidence. Analysts and some policymakers point to studies suggesting that tens of millions of jobs, perhaps as many as the roughly 100 million over a decade cited by Sanders, could be susceptible to automation. The tech industry has also seen high‑profile layoffs and a reallocation of capital toward AI projects, which critics say shows that firms are prioritizing automation over headcount.

At the same time, firms integrating AI often create roles that didn’t exist a few years ago: prompt engineers, AI ethicists, data platform managers, and customer‑facing roles that package AI into revenue streams. LinkedIn’s call for adaptability and AI literacy reflects that churn: you might lose a narrow task, but gain demand for broader problem‑solving and oversight.

Beyond job counts: politics, trust and safety

Hinton’s warnings go beyond unemployment. He urged governments to impose safety testing, transparency, and provenance systems to stop deepfakes and other misuse — arguing that detection alone won’t keep up with generative capabilities. He also flagged geopolitical risks: cheaper, remote warfare and unequal impacts where automation benefits rich nations while harming poorer ones.

Those concerns line up with policy debates brewing in legislatures and industry forums. If AI reshuffles income and political power, solutions will likely need to mix labor market interventions (upskilling, portable benefits), social policy (wage support, new safety nets), and stricter guardrails around critical uses of AI.

What companies and workers can do now

For workers: upskill in areas where demand is growing — AI literacy, tooling, cross‑disciplinary problem solving — and double down on human strengths like negotiation, empathy and complex judgment that current systems struggle to replicate. Employers hiring for AI initiatives still prize those skills, according to LinkedIn’s observations.

For companies: invest deliberately in human‑in‑the‑loop design, transparency and workforce transition programs. Some of the immediate shifts, such as agentic features in consumer products and productivity tools, show how quickly AI is embedding itself into workflows. Google's experiments with agentic booking and assistants, for example, hint at how automated agents will take on tasks once handled by people; recent product moves, from Google's AI Mode adding agentic booking to Gemini's deep‑research features reaching everyday apps such as Gmail and Drive, show platforms already heading toward that model.

The policy gap

Hinton and others stress that markets alone won't manage the transition. Proposals range from stricter safety testing and provenance for media to broader economic policies that could include guaranteed income experiments or targeted retraining funds. The debate over whether AI has reached, or will soon reach, human‑level intelligence also shapes the sense of urgency; experts remain split on timing and scale, and that split matters for how quickly policy must move, as explored in recent analysis of AI's tipping point (/news/ai-experts-debate-human-level-intelligence).

There’s no single narrative that wins here. In the short term, demand for people who can make AI work — and sell it — is real. Over the longer haul, the pace and direction of AI development, corporate strategies, and government action will determine whether we get broad prosperity, uneven disruption, or something worse.

What to watch in practice: who pays for the transition. If the investment in automation is recouped primarily through labor‑replacing products rather than shared through taxes, wages or public programs, the social cost will be higher. If companies, workers and governments share responsibility for retraining and safety, we stand a better chance of steering toward broadly shared gains.

No single quote or dataset settles this. But the tension is clear: the labor market is reshaping now, even as the full arc of change remains fogged in. That means choices — by CEOs, regulators and voters — will shape whether the next decade delivers opportunity or hardship at scale.
