Every sensor on a Waymo robotaxi sees the world in layers. LiDAR maps it in three dimensions, radar punches through rain and fog, and cameras read it in color and contrast, building a composite picture of the road that no human retina could match at the same fidelity. So when a Waymo encountered a flooded section of a 40 mph road in San Antonio on April 20, the car absolutely saw the water. It slowed down for it. Then it drove in anyway, floated off the road surface, and came to rest in Salado Creek. The voluntary recall Waymo filed with NHTSA on April 30, covering 3,791 vehicles, was triggered not by a sensor that missed a hazard, but by a software stack that saw the hazard clearly and still chose the wrong response.
You might be sitting in one of those 3,791 recalled vehicles right now, somewhere in Phoenix, Los Angeles, Austin, or Atlanta, and Waymo has confirmed the permanent software fix is still in development. Tesla’s Cybercab, entering production at Giga Texas, runs a supervised robotaxi service in Austin, Dallas, and Houston on a pure-vision architecture with no LiDAR whatsoever. Uber’s platform in Dallas is dispatching Avride-operated Hyundai Ioniq 5s that NHTSA is already investigating over a string of crashes tied to lane changes and failures to stop for traffic ahead. Amazon’s Zoox uses cameras, LiDAR, radar, and long-wave infrared on every vehicle, the most sensor-redundant consumer-facing stack in the industry, and is still in limited city testing. Each platform has a different answer to what a self-driving car should do when it encounters something it cannot traverse, and after the San Antonio creek, all of those answers deserve a much closer look.
The NHTSA recall notice characterizes the flaw precisely: the software “may allow the vehicle to slow and then drive into standing water on higher speed roadways.” That is a classification error buried in the decision stack, not a sensor failure, and the distinction matters more than the recall number suggests. Waymo’s 5th-gen Jaguar I-Pace and 6th-gen Zeekr RT both carry LiDAR, radar, and cameras in overlapping fields of view, and the San Antonio car correctly classified the flooded road as a hazard worth responding to. The decision architecture, however, had no hard-stop condition for water on a 40 mph road, only a caution flag that reduced speed and left proceeding as an available output. A separate Waymo had already been stranded near McCullough Avenue in San Antonio roughly two weeks before the April 20 incident, confirming this was a repeatable failure mode across a fleet that was still carrying passengers in nine other cities.
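To make the distinction concrete, here is a minimal, entirely hypothetical sketch of the design gap the recall notice describes. None of this is Waymo’s actual code; the names, the 35 mph threshold, and the policy structure are illustrative assumptions. The point is the shape of the bug: a planner that maps standing water to a caution flag still leaves “keep driving” as a legal output, while a patched policy makes water on a higher-speed road a hard-stop condition.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Response(Enum):
    PROCEED = auto()
    SLOW = auto()        # caution flag: reduce speed, keep driving
    HARD_STOP = auto()   # refuse to enter; stop or reroute

@dataclass
class Hazard:
    kind: str            # e.g. "standing_water", "debris"
    road_speed_mph: int  # posted speed of the roadway

def respond_pre_recall(h: Hazard) -> Response:
    # The flawed mapping: standing water is only a caution flag,
    # so the planner slows down and then proceeds anyway.
    if h.kind == "standing_water":
        return Response.SLOW
    return Response.PROCEED

def respond_patched(h: Hazard) -> Response:
    # Patched behavior (hypothetical): standing water on a
    # higher-speed road is a hard-stop condition, not a caution.
    if h.kind == "standing_water" and h.road_speed_mph >= 35:
        return Response.HARD_STOP
    if h.kind == "standing_water":
        return Response.SLOW
    return Response.PROCEED

flood = Hazard("standing_water", road_speed_mph=40)
print(respond_pre_recall(flood).name)  # SLOW — the car drives in anyway
print(respond_patched(flood).name)     # HARD_STOP
```

Both functions see the same hazard with the same confidence; only the mapping from classification to action differs, which is why better sensors alone would not have kept the car out of Salado Creek.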
Tesla’s Cybercab carries no LiDAR, putting its supervised fleet in Austin, Dallas, and Houston in a fundamentally different position from Waymo’s overlapping sensor stack when floodwater appears. The platform relies on eight cameras and 4D millimeter-wave radar, meaning no independent depth-sensing channel exists to assess water severity when camera visibility degrades in heavy rain. A real-world FSD 14.3.1 test in April 2026 ended in manual takeover when the front bumper camera was submerged, a precise illustration of where the vision-only approach runs out of information. Avride, dispatching Hyundai Ioniq 5s through Uber’s Dallas app since December, is under concurrent NHTSA investigation for 16 crashes involving lane changes and failures to stop for road hazards, all 16 occurring with a trained safety monitor seated in the vehicle. Amazon’s Zoox sits at the opposite end of the sensor redundancy spectrum, combining cameras, LiDAR, radar, and long-wave infrared in a 360-degree array with a human TeleGuidance fallback for scenarios the stack cannot resolve, though its commercial footprint remains a fraction of Waymo’s.
The Waymo recall, the Avride probe, and a dashcam video of a Waymo rolling through a red light on Irving Boulevard in Dallas all surfaced in the same seven-day window, collectively mapping the same design gap across three platforms: a perception-to-action pipeline that detects a hazard but generates the wrong response to it. Waymo’s OTA patch is deploying now, but the permanent fix remains in development, meaning every current ride runs on interim constraints rather than a finished solution. The San Antonio incident involved an empty car, and that is the only reason this story ends with a recovery operation rather than a casualty report. Each platform carrying passengers today is still writing its edge-case rulebook, publishing each new chapter only after something breaks on a live road. The most practical safety question a passenger can ask in 2026, I’d argue, is which system they are riding in, what its sensor stack can assess in a sudden storm, and whether its flood-detection logic has been patched from an interim fix into an actual solution.
The post Waymo’s Self-Driving Car Saw the Flood and Drove In Anyway. Here’s The Problem Plaguing Every Robotaxi. first appeared on Yanko Design.