Emergency Vehicle Lights Can Screw Up a Car’s Automated Driving System

Newly published research finds that the flashing lights on police cruisers and ambulances can cause “digital epileptic seizures” in image-based automated driving systems, potentially risking wrecks.

A new paper by Elad Feldman, Jacob Shams, Satoru Koda, Yisroel Mirsky, Assaf Shabtai, Yuval Elovici, and Ben Nassi of Ben-Gurion University of the Negev and Fujitsu Limited raises questions about systems like Tesla’s Autopilot. Photograph: Matt Gush/Getty

Carmakers say their increasingly sophisticated automated driving systems make driving safer and less stressful by leaving some of the hard work of knowing when a crash is about to happen—and avoiding it—to the machines. But new research suggests some of these systems might do just the opposite at the worst possible moment.

A new paper from researchers at Ben-Gurion University of the Negev and the Japanese technology firm Fujitsu Limited demonstrates that when some camera-based automated driving systems are exposed to the flashing lights of emergency vehicles, they can no longer confidently identify objects on the road. The researchers call the phenomenon a “digital epileptic seizure”—epilepticar for short—where the systems, trained by artificial intelligence to distinguish between images of different road objects, fluctuate in effectiveness in time with the emergency lights’ flashes. The effect is especially apparent in darkness, the researchers say.

Emergency lights, in other words, could make automated driving systems less sure that the car-shaped thing in front of them is actually a car. The researchers write that the flaw “poses a significant risk” because it could potentially cause vehicles with automated driving systems enabled to “crash near emergency vehicles” and “be exploited by adversaries to cause such accidents.”

While the findings are alarming, this new research comes with several caveats. For one thing, the researchers were unable to test their theories on any specific driving systems, such as Tesla’s famous Autopilot. Instead, they ran their tests using five off-the-shelf automated driving systems embedded in dashcams purchased on Amazon. (These products are marketed as including some collision detection features, but for this research, they functioned as cameras.) They then ran the images captured by those systems through four open-source object detectors, which are trained on images to distinguish between different objects. The researchers aren’t sure whether any automakers use the object detectors tested in their paper. It could be that most systems are already hardened against flashing-light vulnerabilities.
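
The shape of that experiment is simple to sketch. The snippet below is a rough illustration, not the researchers’ actual setup: it assumes a COCO-pretrained Faster R-CNN from torchvision standing in for one of the open-source detectors, and a hypothetical clip called night_drive_with_flashers.mp4 standing in for the dashcam footage. It logs the detector’s strongest “car” confidence for every frame; a score that rises and falls in step with the flash period is the fluctuation the paper describes.

```python
# Rough illustration only: run dashcam frames through an off-the-shelf object
# detector and log per-frame "car" confidence. The model and the video file are
# stand-ins, not the detectors or footage used in the paper.
import torch
import torchvision
from torchvision.io import read_video
from torchvision.transforms.functional import convert_image_dtype

COCO_CAR_LABEL = 3  # "car" in the COCO label map used by torchvision detectors

weights = torchvision.models.detection.FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=weights).eval()

# Hypothetical clip of a night drive behind a vehicle with emergency flashers.
frames, _, _ = read_video("night_drive_with_flashers.mp4", output_format="TCHW")

confidences = []
with torch.no_grad():
    for frame in frames:
        image = convert_image_dtype(frame, torch.float)  # uint8 -> float in [0, 1]
        detections = model([image])[0]
        car_scores = detections["scores"][detections["labels"] == COCO_CAR_LABEL]
        confidences.append(float(car_scores.max()) if len(car_scores) else 0.0)

# A car that stays in view should yield a roughly steady score; a score that
# oscillates in time with the flash period is the "seizure" pattern described above.
print(confidences)
```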

The research was inspired by reports that Teslas using the electric carmaker’s advanced driver assistance feature, Autopilot, collided with some 16 stationary emergency vehicles between 2018 and 2021, says Ben Nassi, a cybersecurity and machine learning researcher at Ben-Gurion University who worked on the paper. “It was pretty clear to us from the beginning that the crashes might be related to the lighting of the emergency flashers,” says Nassi. “Ambulances and police cars and fire trucks are different shapes and sizes, so it’s not the type of vehicle that causes this behavior.”

A three-year investigation by the US National Highway Traffic Safety Administration into the Tesla-emergency vehicle collisions eventually led to a sweeping recall of Tesla Autopilot software, which is designed to perform some driving tasks—like steering, accelerating, braking, and changing lanes on certain kinds of roads—without a driver’s help. The agency concluded that the system inadequately ensured drivers paid attention and were in control of their vehicles while the system was engaged. (Other automakers’ advanced driving assistance packages, including General Motors’ Super Cruise and Ford’s BlueCruise, also perform some driving tasks but mandate that drivers pay attention behind the wheel. Unlike Autopilot, these systems work only in areas that have been mapped.)

In a written statement sent in response to WIRED’s questions, Lucia Sanchez, a spokesperson for NHTSA, acknowledged that emergency flashing lights may play a role. “We are aware of some advanced driver assistance systems that have not responded appropriately when emergency flashing lights were present in the scene of the driving path under certain circumstances,” Sanchez wrote.

Tesla, which disbanded its public relations team in 2021, did not respond to WIRED’s request for comment. The camera systems the researchers used in their tests were manufactured by HP, Pelsee, Azdome, Imagebon, and Rexing; none of those companies responded to WIRED’s requests for comment.

Although the NHTSA acknowledges issues in “some advanced driver assistance systems,” the researchers are clear: They’re not sure what this observed emergency light effect has to do with Tesla’s Autopilot troubles. “I do not claim that I know why Teslas crash into emergency vehicles,” says Nassi. “I do not know even if this is still a vulnerability.”

The researchers’ experiments were also concerned solely with image-based object detection. Many automakers use other sensors, including radar and lidar, to help detect obstacles in the road. A smaller crop of tech developers—Tesla among them—argue that image-based systems augmented with sophisticated artificial intelligence training can enable not only driver assistance systems, but also completely autonomous vehicles. Last month, Tesla CEO Elon Musk said the automaker’s vision-based system would enable self-driving cars next year.

Indeed, how a system might react to flashing lights depends on how individual automakers design their automated driving systems. Some may choose to “tune” their technology to react to things it’s not entirely certain are actually obstacles. In the extreme, that choice could lead to “false positives,” where a car might brake hard, for example, in response to a toddler-shaped cardboard box. Others may tune their tech to react only when it’s very confident that what it’s seeing is an obstacle. At the other extreme, that choice could lead to the car failing to brake before a collision because it doesn’t recognize the object in front of it as another vehicle at all.
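
To make that trade concrete, here is a purely illustrative sketch; the threshold, labels, and scores are invented, not values from any automaker’s software. A permissive threshold catches the real car whose confidence has been degraded by the flashing lights but also brakes for clutter, while a strict one ignores the clutter and misses the car as well.

```python
# Invented numbers, for illustration only: one confidence threshold trades
# phantom braking against missed obstacles.
def should_brake(detections, threshold):
    """Brake if any detection of a relevant class clears the confidence threshold."""
    relevant = {"car", "truck", "pedestrian"}
    return any(d["label"] in relevant and d["score"] >= threshold for d in detections)

frame = [
    {"label": "car", "score": 0.42},         # real car ahead, confidence degraded by emergency flashers
    {"label": "pedestrian", "score": 0.35},  # actually a toddler-shaped cardboard box, misread by the detector
]

print(should_brake(frame, threshold=0.30))  # True: reacts to the real car, but the box alone would also trigger braking
print(should_brake(frame, threshold=0.70))  # False: ignores the box, but misses the degraded real car too
```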

The BGU and Fujitsu researchers did come up with a software fix to the emergency flasher issue. Called “Caracetamol”—a portmanteau of “car” and the painkiller “Paracetamol”—it’s designed to avoid the “seizure” problem by being specifically trained to identify vehicles with emergency flashing lights. The researchers say it improves object detectors’ accuracy.
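
Described at that high level, the fix resembles ordinary fine-tuning: teach the detector to treat a vehicle with active emergency flashers as a class it has explicitly learned, rather than as a lighting condition that degrades it. The sketch below is a hypothetical illustration of that idea; the class list, dataset, and hyperparameters are invented, and it is not the researchers’ implementation.

```python
# Hypothetical sketch: fine-tune a stock detector with an explicit
# "vehicle with active emergency flashers" class. Not the researchers'
# Caracetamol code; labels, data, and settings are invented for illustration.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

CLASSES = ["__background__", "car", "emergency_vehicle_flashing"]  # hypothetical label set

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=torchvision.models.detection.FasterRCNN_ResNet50_FPN_Weights.DEFAULT
)
# Swap the classification head so the detector predicts the new label set.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=len(CLASSES))

# Placeholder for a DataLoader of labeled night scenes with emergency flashers.
flashing_light_dataloader = []

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()
for images, targets in flashing_light_dataloader:
    loss_dict = model(images, targets)  # torchvision detection models return a loss dict in train mode
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```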

Earlence Fernandes, an assistant professor of computer science and engineering at University of California, San Diego, who was not involved in the research, said it appeared “sound.” “Just like a human can get temporarily blinded by emergency flashers, a camera operating inside an advanced driver assistance system can get blinded temporarily,” he says.

For researcher Bryan Reimer, who studies vehicle automation and safety at the MIT AgeLab, the paper points to larger questions about the limitations of AI-based driving systems. Automakers need “repeatable, robust validation” to uncover blind spots like susceptibility to emergency lights, he says. He worries some automakers are “moving technology faster than they can test it.”
