Are self-driving cars truly here, or are we being sold a dream? The truth might shock you: Even the most advanced autonomous vehicles still rely on human "babysitters" to avoid potentially catastrophic errors. This isn't just a minor inconvenience; it's a fundamental flaw that could be holding back the entire industry.
At The Autonomous 2025 conference in Vienna, Mary "Missy" Cummings, a former U.S. Navy fighter pilot and current autonomy expert at George Mason University, cut straight to the chase. When asked whether truly self-driving cars have arrived, she gave a resounding "No." Despite the hype and the headlines, truly driverless operation remains a distant goal. Cummings, along with TTTech Auto CEO Stefan Poledna, emphasized that humans are still needed to intervene when AI confronts situations beyond its capabilities. Think of them as safety nets, ready to catch the autonomous car before it falls.
Self-driving cars are, in a limited sense, a reality. Robotrucks are hauling cargo, robotaxis are transporting passengers, and roboshuttles are navigating designated routes, often operating around the clock without onboard safety drivers. Level 2+ systems are becoming increasingly common, and the first Level 3 autonomous systems are hitting the market, albeit at a slower pace than initially predicted. IDTechEx data shows that robotaxi deployments in the U.S. have accelerated in 2025. Waymo, for instance, has expanded its services to cities like Phoenix, San Francisco, Los Angeles, Atlanta, and Austin. Tesla has even launched a limited driverless service in Austin, and companies like Zoox and May Mobility are beginning commercial operations.
But here's where it gets controversial... Cummings argues that a critical technical hurdle prevents widespread adoption: hallucinations. She defines these as "statistical inferencing errors," essentially meaning that self-driving cars can perceive things that aren't there or misinterpret their surroundings.
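To make the idea concrete, here's a toy sketch in Python, with made-up numbers and labels, not any vendor's actual pipeline. It shows why a simple confidence threshold can't screen these errors out: a hallucinating network doesn't just guess wrong, it can guess wrong with high confidence.

```python
import numpy as np

def softmax(logits):
    """Turn raw network outputs into a probability distribution."""
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Hypothetical logits from a perception network for one camera frame.
# Glare, shadows, or unusual textures can inflate the "obstacle" logit
# even when the road is clear: Cummings's "statistical inferencing error."
labels = ["clear_road", "obstacle", "debris"]
logits = np.array([2.1, 6.3, 0.4])

probs = softmax(logits)
prediction = labels[int(np.argmax(probs))]
confidence = float(probs.max())

# The planner only ever sees a confident label, never the ground truth,
# so a hallucinated obstacle sails straight past the confidence check.
BRAKE_THRESHOLD = 0.9
if prediction == "obstacle" and confidence > BRAKE_THRESHOLD:
    print(f"EMERGENCY BRAKE: obstacle detected at {confidence:.0%} confidence")
```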
Consider the 2022 Tesla incident in the San Francisco Bay Bridge tunnel. The vehicle, apparently perceiving something that wasn't actually there, slammed on the brakes, decelerating from 65 mph to zero and causing an eight-car pileup. Investigations into phantom braking incidents have revealed similar patterns across autonomous vehicles of every type, from cars to shuttles to trucks. According to Cummings, these are all manifestations of computer vision hallucinations, and, crucially, "We do not know how to solve this problem...and because it's unsolved, we continue to see these accidents emerge."
And this is the part most people miss... The implications extend beyond minor fender-benders.
Consider the 2023 Cruise incident in California that led to the company temporarily suspending operations. A pedestrian, first struck by a human-driven car, was thrown into the path of a Cruise self-driving vehicle. The autonomous car executed a textbook emergency stop, but it failed to recognize that the pedestrian had fallen underneath it, and it dragged her as it attempted to pull over. As Cummings explained, "The car did not know that there was a human involved anymore." The system's inability to process the complete context of the situation had devastating consequences.
Cummings pulls no punches when discussing the fundamental limitations of current AI systems. "AI doesn’t think. It doesn’t imagine, it doesn’t know." She also strongly advises against using generative AI to create safety cases for self-driving systems. Why? Because neural networks, by their very nature, are designed to identify the most frequent and likely occurrences. They are simply not equipped to handle those rare, unexpected edge cases that are crucial for safety. This is why human oversight remains essential.
Cummings reserves her harshest criticism for vision-only autonomous systems, stating unequivocally that "Vision-only self-driving cars are never, ever, ever going to happen." Her reasoning is rooted in basic robotics principles: No robot can reliably navigate the world using only one type of sensor. Redundancy is key.
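Here's what that redundancy principle looks like in its simplest form: a hypothetical sketch where no single sensor can trigger an action on its own. Real fusion stacks are probabilistic and track objects over time, but the quorum idea is the same.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str          # "camera", "radar", or "lidar"
    sees_obstacle: bool

def confirmed_obstacle(detections, quorum=2):
    """Treat an obstacle as real only if at least `quorum` independent
    sensor modalities agree, so one hallucinating sensor can't act alone."""
    agreeing = {d.sensor for d in detections if d.sees_obstacle}
    return len(agreeing) >= quorum

# The camera hallucinates an obstacle; radar and lidar see a clear road.
frame = [
    Detection("camera", sees_obstacle=True),
    Detection("radar", sees_obstacle=False),
    Detection("lidar", sees_obstacle=False),
]
print(confirmed_obstacle(frame))  # False -> no phantom braking
```

A vision-only system has no second modality to vote with, which is exactly the gap Cummings is pointing at.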
Another area of concern is highway driving. No self-driving car company has yet demonstrated consistently safe operation at highway speeds. While demonstrations may look promising, the underlying problem of AI hallucination persists, making sustained safe operation an elusive goal.
Cummings also points out a crucial reality that is often downplayed: No vehicle currently operates with complete autonomy. "Every self-driving car company needs human babysitters," she asserts. "There is no such thing as a self-driving car, because they all need some level of human input."
The Automated Vehicle Safety Consortium (AVSC), a program of SAE Industry Technologies Consortia, distinguishes between two types of remote support:
- Remote assistance: Providing information or advice to a driverless vehicle when the autonomous system encounters a situation it can't handle, helping it continue its trip.
- Remote driving: A remote driver taking over control of the vehicle, performing tasks like braking, steering, and acceleration in real-time.
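In code terms, the difference is about where driving authority lives. Here's a hypothetical sketch; the message and vehicle interfaces are invented for illustration, not drawn from any AVSC specification:

```python
from dataclasses import dataclass

@dataclass
class RemoteAssistance:
    """Advisory only: the onboard system keeps driving authority."""
    suggestion: str               # e.g. "treat blocked lane as closed"

@dataclass
class RemoteDriving:
    """Direct control: a remote human commands the actuators."""
    steering_angle_deg: float
    throttle: float               # 0.0 to 1.0
    brake: float                  # 0.0 to 1.0

def handle(message, vehicle):
    """Dispatch a remote-support message; `vehicle` is a stand-in object."""
    if isinstance(message, RemoteAssistance):
        vehicle.planner.add_hint(message.suggestion)    # the car still decides
    elif isinstance(message, RemoteDriving):
        vehicle.actuators.apply(message.steering_angle_deg,
                                message.throttle,
                                message.brake)          # the human decides
```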
Cummings believes that remote assistance is preferable to remote driving, especially at higher speeds (above 30 km/h). However, even with remote assistance, time delays can lead to accidents.
Offshoring remote operations to countries like the Philippines, as Waymo has done, introduces further risks. Latency and signal delays can compromise the operator's judgment, even when they're not directly controlling the vehicle. According to Cummings, a delayed remote-assist signal contributed to a Waymo vehicle being broadsided at an intersection in California.
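The physics of that delay is easy to estimate. Here's a back-of-the-envelope sketch, using an illustrative half-second round trip rather than any measured figure:

```python
def blind_distance_m(speed_kmh: float, delay_s: float) -> float:
    """Distance the vehicle covers while the operator's view and
    response are stale by `delay_s` seconds."""
    return speed_kmh / 3.6 * delay_s

for speed_kmh in (30, 50, 100):
    print(f"{speed_kmh} km/h with a 0.5 s round trip -> "
          f"{blind_distance_m(speed_kmh, 0.5):.1f} m covered blind")
# 30 km/h -> ~4.2 m, 50 km/h -> ~6.9 m, 100 km/h -> ~13.9 m
```

At highway speeds, half a second of delay means the car covers more than a dozen meters before a remote operator's input can even arrive, which is one reason Cummings draws her line at roughly 30 km/h.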
The reality is that fully self-driving cars are still more of an aspiration than a practical reality. As Cummings puts it, the current market players are, "At best…human-babysat self-driving car companies."
TTTech Auto's Poledna agrees that some form of human involvement is currently unavoidable. In driverless deployments, he says, "you need to have the remote babysitter to help out of some critical situations." The ultimate goal is to reduce the frequency of these interventions over time, gradually decreasing the reliance on human oversight.
But the challenge isn't just about remote babysitters. Even when humans are physically present in the vehicle, supervising autonomous systems can be problematic. As Poledna explains, "Humans are really, really bad at supervising something where you have just very irregular interactions. The better the system is, the more inattentive and relaxed you become. You mentally switch off."
Level 3 and Level 4 systems can mitigate this issue by stopping or pulling over when uncertain and asking the occupant for assistance. In this scenario, the human becomes the on-board babysitter, a role Poledna considers acceptable.
When asked if he is more optimistic than Cummings, Poledna clarifies, "I’m not in a different camp than her." He acknowledges that today's AI systems are based on statistical models and that true artificial general intelligence is still far in the future. "There will always be special situations where you need a human to come in...There’s not some world understanding or reasoning behind it that we humans have. It’s just a huge approximation of the learned data."
Cummings invokes the Swiss cheese model of accident causation, where safety is built upon multiple layers of defense. If a hazard manages to slip through all the layers, an accident occurs. In the context of AI, inadequate design, insufficient testing, and poor maintenance can align to produce catastrophic outcomes. "The loss is still a crash; people die or almost die," she warns.
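The model is worth making concrete with toy numbers (purely illustrative, not real failure rates): each layer independently catches most hazards, so an accident requires a hole in every single layer, and weakening even two layers multiplies the risk dramatically.

```python
# Toy Swiss cheese model: each number is the probability that a
# hazard slips through that layer (illustrative values only).
layers = {
    "design review":   0.01,
    "simulation test": 0.05,
    "road test":       0.10,
    "runtime monitor": 0.02,
}

p_accident = 1.0
for p_slip in layers.values():
    p_accident *= p_slip
print(f"P(hazard slips every layer) = {p_accident:.0e}")  # 1e-06

# Cut corners on two layers (0.05 -> 0.5, 0.10 -> 0.5) and the
# holes line up 50x more often: 0.01 * 0.5 * 0.5 * 0.02 = 5e-05.
```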
Drawing from her experience as a lead expert in a recent U.S. court case where Tesla was ordered to pay $243 million in damages for a fatal crash linked to flawed AI, Cummings highlights the industry's tendency to cut corners. Companies often prioritize speed to market over thorough testing and ongoing retraining of neural networks, a decision that can have devastating consequences.
Cummings emphasizes that safety-critical autonomy demands redundancy, requiring "second opinion" systems that are actually implemented, not just claimed.
Poledna echoes this view, arguing that current software and chip technologies cannot deliver a single monolithic component with the same level of safety as a human. To achieve human-level safety, redundancy is essential. If one function fails, redundancy and partitioning mechanisms ensure that the rest of the system continues to operate safely.
Modern chips can implement strong partitioning properties, allowing individual functions to be tested and operated in isolation. This means that if one part of the system fails, it won't bring down the entire vehicle. System-level redundancy across sensors, compute, communication, and actuation is also crucial. Poledna counters arguments that "humans drive with their eyes" by pointing out that "AI is not a human brain, and cameras are not human eyes." Understanding these fundamental differences is essential for building safe and scalable autonomous systems.
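A classic embodiment of this principle is triple modular redundancy with a 2-out-of-3 voter. The sketch below is the generic textbook pattern, not TTTech's actual architecture: three partitioned channels compute the same command, a disagreeing (possibly hallucinating) channel gets outvoted, and with no majority the system degrades to a safe state instead of guessing.

```python
from collections import Counter

def vote_2oo3(channel_outputs):
    """2-out-of-3 voter: accept a command only when at least two
    independently partitioned channels agree; otherwise degrade
    to a safe state (e.g. a controlled stop)."""
    command, count = Counter(channel_outputs).most_common(1)[0]
    return command if count >= 2 else "SAFE_STOP"

# One channel fails or hallucinates; the other two outvote it.
print(vote_2oo3(["keep_lane", "keep_lane", "brake_hard"]))  # keep_lane

# No majority: stop safely rather than guess.
print(vote_2oo3(["keep_lane", "brake_hard", "swerve"]))     # SAFE_STOP
```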
While Cummings acknowledges her cautious perspective, she remains confident that there is a place for self-driving vehicles, particularly in applications like robotic shuttles. She urges greater industry collaboration, emphasizing that the competitive spirit in America often hinders progress. Data sharing, especially regarding AI hallucination, is key to bringing safer autonomous systems to market.
So, where do you stand? Are you comfortable with the idea of "human babysitters" in self-driving cars, or do you think this reliance highlights a fundamental flaw in the technology? Is the industry moving too fast, prioritizing profits over safety? And what level of redundancy do you think is necessary to ensure truly safe autonomous operation? Share your thoughts in the comments below – let's have a conversation about the future of self-driving technology!