“Right now, around 80–90% of games are made by AI.” It’s the kind of line that stops you mid-scroll — and it came from Akihiro Hino, CEO of Level-5, a studio behind Professor Layton and Inazuma Eleven. Whether you read that as ominous or optimistic depends on who you ask, but it crystallizes a debate that has been raging through studios, investor decks and message boards in 2025.
AI isn’t a single thing in games. It’s a toolkit being pressed into three broad roles: idea and asset generation (concepts, art, placeholder VO), engineering assistance (code scaffolding, bug triage) and systems-level automation (QA, pipeline orchestration). That breadth helps explain why a studio CEO can tell a parliamentary committee one thing while players spot low-quality AI images in blockbuster releases.
Where studios actually use AI
Level-5’s public demonstrations are refreshingly granular: Stable Diffusion for illustration and 3D-concept references, ChatGPT-style LLMs to draft quest outlines and character motivations, VOICEVOX for placeholder dialogue and GitHub Copilot-like tools for code snippets. Those demos are echoed by hands-on accounts from producers and PMs who have tested AI in real AAA pipelines. One project manager’s experiment, described in a case study, had an AI produce a complete art roadmap, Jira templates and production estimates in hours; a comparable human-led effort had previously taken a month of staff time.
Those time savings, when they materialize, are attractive. They translate into lower R&D costs, faster prototyping and, crucially, the ability to promise investors that you’re “AI-enabled.” It’s also why big publishers keep announcing AI initiatives and why Microsoft and others are building purpose-built models for creative work, such as Microsoft’s in-house image model MAI-Image-1.
The messy middle: quality, control and public reaction
Generative systems can be brilliant at ideation and maddeningly bad at finish. Games that used AI for concept art or NPC dialogue sometimes shipped with stilted lines or odd visuals, examples that fed consumer backlash when fans found gen-AI assets in major releases. Developers have scrambled to clarify whether AI “empowers” creators or quietly replaces them. Some studios remove discovered AI assets; others leave them in and defend their editorial choices.
Players are not indifferent. Indie teams that prize handmade aesthetics and craft often flaunt anti-AI badges. For many indie devs, the grind of design and the creative problem-solving are core to the work; automating that away feels like erasing what made game-making meaningful to them. There’s also a simmering ethics debate: generative models are trained on huge corpora of existing art and code, frequently with murky licences and little compensation to original creators.
Automation moves up the value chain
Beyond art and dialogue, AI is marching into test labs and production tooling. Square Enix’s public roadmaps, for instance, forecast automated QA taking on an ever-larger slice of testing work. That shift is practical: exhaustive regression testing across dozens of platform and hardware configurations is expensive and time-consuming; AI-driven QA tools can flag regressions, triage logs and automatically generate reproduction steps.
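To make the triage idea concrete, here is a toy sketch (not any studio’s actual tooling, and far simpler than a trained model): before an AI or a human looks at crash logs, a common pre-step is to normalize away addresses and line numbers so that similar crashes cluster under one signature.

```python
import re
from collections import Counter

def normalize_signature(log_line: str) -> str:
    """Collapse hex addresses and numbers so near-identical crashes
    group under a single signature (a common pre-step to AI triage)."""
    sig = re.sub(r"0x[0-9a-fA-F]+", "<addr>", log_line)  # hex addresses
    sig = re.sub(r"\d+", "<n>", sig)                     # line numbers, ids
    return sig.strip()

def triage(log_lines):
    """Return crash signatures ranked by frequency, most common first."""
    return Counter(normalize_signature(l) for l in log_lines).most_common()

# Hypothetical log lines for illustration only.
logs = [
    "Segfault at 0x7ffee3 in Renderer.cpp:142",
    "Segfault at 0x7aab01 in Renderer.cpp:142",
    "NullRef in QuestSystem.cs:77",
]
top = triage(logs)  # both segfaults collapse into one signature
```

Real QA automation layers far more on top (repro-step generation, screenshot diffing), but the value proposition is the same: machines compress thousands of raw events into a short ranked list a human can act on.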
But automation has limits. The worst-case scenarios you see on forums — NPCs behaving strangely or image assets that look “off” — often come from edge cases where training data and game context collide. Human designers and leads still provide the aesthetic judgement and narrative nuance AI lacks. As Hino put it, developers increasingly need an “aesthetic sense” to guide and refine what models spit out.
Economics, optics and the investor angle
There’s a commercial dimension that shouldn’t be ignored: signaling. Announcing AI use can be as much about courting capital as about improving production. With investors pouring billions into AI platforms, studios tout partnerships with AI vendors to look future-ready. That dynamic helps explain why executives insist AI is ubiquitous even as mid-tier and indie teams push back.
The other economic pressure is labor and calendar. A PM’s account of compressing a month-long R&D task into a few hours with an LLM-driven workflow isn’t just bragging; it’s a potential cost saver for studios that run thin on headcount and time. But as with all tools, the productivity gains are uneven and require new skills: prompt engineering, model evaluation, and a governance layer to catch hallucinations and copyright issues.
What this means for developers and players
For developers: your toolkit is changing. Roles will shift toward curating, validating and integrating AI output. Producers and leads who can formalize what constitutes acceptable AI output — and bake that into pipelines — will be in demand.
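“Formalizing acceptable AI output” can start very small. As a hypothetical sketch (the checks and thresholds here are invented for illustration, not a studio standard), a pipeline gate for generated dialogue might look like:

```python
def validate_ai_dialogue(line: str,
                         banned_phrases=("as an AI",),
                         max_len=200) -> list:
    """Hypothetical pipeline gate: return the reasons a generated
    dialogue line fails the acceptance bar (empty list = pass)."""
    problems = []
    if not line.strip():
        problems.append("empty output")
    if len(line) > max_len:
        problems.append("too long for dialogue UI")
    for phrase in banned_phrases:
        if phrase.lower() in line.lower():
            problems.append(f"contains banned phrase: {phrase!r}")
    return problems

# Lines that pass go to a human reviewer; lines that fail go back
# to regeneration with the failure reasons attached.
```

The point is less the specific checks than the shape: acceptance criteria written down as code, run automatically, with a human review step behind them.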
For players: the presence of AI in credits will become a question to ask, and not always a deal-breaker. Some games will benefit from procedurally rich worlds and emergent NPCs; others will suffer if shortcuts degrade polish.
If you want to see a live example of how gen-AI cropped up in mainstream releases, look at multiplayer shooters and narrative hits that used AI for dialogue snippets or image assets this year — a handful provoked debate and even affected review scores. One such title that blended ambitious tech with mixed results is Arc Raiders.
This is not a binary choice between humans and machines. The more useful frame is hybrid: machines accelerate iteration and reduce grunt work; humans keep the vision, the taste and the final polish. Studios that treat AI as an assistant — and invest in standards, tooling and ethics — are likelier to benefit.
If you’re tracking where the jobs and skills will land, watch how QA and tooling evolve. Some companies are betting big: plans to automate large portions of QA by 2027 show that the industry is already imagining a workflow where humans handle nuance and machines handle scale. See more on those automation plans in Square Enix’s roadmap discussion about automating QA.
A final practical note: if you’re a developer or studio lead looking to prototype with AI, remember the basics — version control for prompts and model outputs, audit trails for datasets, and a review step that treats AI output like any outsourced asset. And if you’re buying hardware to test builds and performance, remember consoles and test kits still matter — some teams still keep a PlayStation 5 Pro on hand for compatibility checks (PlayStation 5 Pro).
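One minimal way to put those basics into practice, sketched here under assumed names (nothing below is a prescribed tool or workflow), is to key every generation by a content hash of its prompt and parameters, so any shipped asset can be traced back to exactly what produced it:

```python
import hashlib
import json
from datetime import datetime, timezone

def record_generation(prompt: str, model: str,
                      params: dict, output: str) -> dict:
    """Build an audit record for one AI generation. The id is a hash
    of prompt + model + params, so identical inputs always map to the
    same id and outputs are traceable to their exact prompt."""
    payload = json.dumps(
        {"prompt": prompt, "model": model, "params": params},
        sort_keys=True,
    )
    return {
        "id": hashlib.sha256(payload.encode()).hexdigest()[:12],
        "prompt": prompt,
        "model": model,
        "params": params,
        "output": output,
        "reviewed": False,  # review gate: treat like any outsourced asset
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical usage: commit records like this alongside the asset.
rec = record_generation("quest outline for a mining town",
                        "example-llm", {"temperature": 0.7},
                        "Draft outline...")
```

Records like these live naturally in version control next to the assets themselves, and the `reviewed` flag gives the human sign-off step somewhere concrete to land.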
The line Hino drew — that most games now use AI in some fashion — is provocative but useful. It forces a question studios have mostly been dodging: not whether AI will touch games, but how we redesign the craft and the contracts around it. The answer will depend as much on governance and culture as on model accuracy.
No neat ending here. This is an unfolding rewrite of production, and every studio will take a slightly different pen to the page.