Waymo says it's preparing a voluntary software recall after a spate of incidents in which its fully autonomous vehicles navigated around stopped school buses with flashing red lights and deployed stop arms.
The National Highway Traffic Safety Administration has widened a formal probe that began after video emerged in October of a Waymo vehicle passing a stopped school bus in Atlanta. Federal regulators sent Waymo a Dec. 3 letter asking for detailed information about how the company’s systems handle school buses and other roadside hazards; the company must respond to NHTSA’s questions in the coming weeks.
What happened
Local school districts have documented dozens of episodes. Atlanta Public Schools told investigators it has recorded six such incidents. In Texas, the Austin Independent School District’s cameras have captured roughly 20 violations since the school year began, figures the district shared in letters urging Waymo to halt operations during school pickup and drop-off hours until the problem is fixed.
In at least one of the flagged incidents, NHTSA said, no human safety operator was inside the robotaxi when it maneuvered around the bus—heightening questions about how the company’s software interprets school-bus signals and nearby children.
Waymo says it identified a software-related contributor to the behavior, pushed updates to its fleet by Nov. 17 and has since applied additional fixes nationwide. Mauricio Peña, Waymo’s chief safety officer, told reporters the company plans to file a voluntary software recall with NHTSA to ensure the update is distributed fleetwide.
Why this matters
All 50 states require motorists to stop for school buses when red lights flash and the stop arm is deployed. For autonomous fleets, reliably recognizing that combination—flashing lights, the extended stop arm, and sometimes groups of children on or near the roadway—is both a perception and a decision problem. It’s not just about seeing the bus; the vehicle must treat that scene the same way a cautious human driver would.
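To see how simple the required behavior is relative to the perception problem, consider a minimal, purely illustrative sketch in Python. It is not Waymo’s code; the class and function names are hypothetical, and a production planner would weigh many more signals. The point is that once perception has flagged the bus and its signals, the decision rule itself is conservative and unambiguous:

```python
# Minimal illustrative sketch, not Waymo's actual logic. All names here
# (SchoolBusObservation, must_stop_for_bus) are hypothetical.
from dataclasses import dataclass


@dataclass
class SchoolBusObservation:
    red_lights_flashing: bool
    stop_arm_extended: bool
    children_near_roadway: bool


def must_stop_for_bus(obs: SchoolBusObservation) -> bool:
    """Mirror a cautious human driver: stop and hold whenever the bus is
    signaling loading or unloading, or children are near the roadway,
    rather than routing around the stopped bus."""
    signaling = obs.red_lights_flashing or obs.stop_arm_extended
    return signaling or obs.children_near_roadway
```

The hard part, as the incidents suggest, sits upstream: detecting the flashing lights and the extended stop arm reliably enough that a rule like this is triggered every time.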
The stakes are twofold. There’s the immediate safety risk to children and bus drivers. And there’s trust: autonomous services operate in public spaces and depend on local communities accepting them. Repeated violations, especially ones captured on bus-camera footage, chip away at that social license and give regulators and elected officials reason to tighten oversight.
Waymo has pushed back with its own safety data, pointing to lower injury-crash rates than those of human drivers in its operating areas. But even a company with strong overall metrics can be tripped up by specific edge cases—like how its system models and responds to a stopped school bus.
Bigger questions about perception, testing and oversight
The incidents highlight a broader challenge across the self-driving industry: how companies train, validate and update perception and decision-making systems for rare but high-consequence scenarios. Improving maps, sensors and scene interpretation matters, and so do independent benchmarks and audits of vision systems that can catch cases where software misreads uncommon cues.
Work in adjacent fields illustrates both the promise and the difficulty of that effort. Recent advances in navigation and mapping, such as the conversational, agentic features in Google Maps' Gemini AI copilot, show how quickly software is taking on complex, layered tasks in the real world; that same rapid progress complicates how regulators evaluate systems in safety-critical settings. Meanwhile, efforts to create consent-first, bias-aware benchmarks for computer vision, such as Sony's FHIBE, offer a model for how industry and regulators might audit autonomous-vehicle perception systems.
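One common-sense shape that validation can take is sketched below: a library of rare but high-consequence scenarios that every candidate software release must pass before it ships to the fleet. This is a hypothetical illustration; the scenario format, the run_planner stand-in and the pass criterion are assumptions for the sketch, not any company's actual test harness.

```python
# Hypothetical sketch of scenario-gated release validation; not any
# vendor's real pipeline. run_planner is a stand-in for the system under test.
from typing import Callable, NamedTuple


class Scenario(NamedTuple):
    name: str
    description: str
    expected_action: str  # e.g. "stop_and_hold"


RARE_SCENARIOS = [
    Scenario("school_bus_stop_arm",
             "Bus stopped, red lights flashing, stop arm extended",
             "stop_and_hold"),
    Scenario("children_near_stopped_bus",
             "Children entering the roadway near a stopped school bus",
             "stop_and_hold"),
]


def failing_scenarios(run_planner: Callable[[Scenario], str]) -> list[str]:
    """Return the rare-scenario cases the candidate software gets wrong.
    A non-empty result would block the release from reaching the fleet."""
    return [s.name for s in RARE_SCENARIOS if run_planner(s) != s.expected_action]
```

In principle, an independent auditor could maintain such a scenario library and run it against any vendor's stack, which is the kind of outside check the benchmarking efforts above point toward.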
How companies and cities are reacting
Some school districts have asked Waymo to stop operating during bus hours; Waymo has declined those requests so far, arguing that its fleetwide updates have materially improved performance. NHTSA’s expanded investigation, though, gives regulators a formal avenue to compel more information, require fixes, or demand a recall process when software is at fault.
For Waymo, which now runs service without a human driver in parts of several U.S. cities, the probe and recall are a test of how quickly the company can close safety gaps without pausing service. The outcome will matter beyond Waymo: it will shape expectations for other companies rolling out driverless technology and for the rules cities and states set about where and when autonomous vehicles can operate.
This episode isn’t a verdict on autonomous driving as a whole, but it is a reminder that even well‑resourced efforts to automate driving face messy, human-centered problems—especially where children and buses intersect. How Waymo, regulators and local communities resolve those tensions will influence not just legal outcomes but how comfortable people are sharing the curb with machines.