Investigators from the National Highway Traffic Safety Administration and the National Transportation Safety Board are descending on Tempe, Ariz., to investigate the death of a woman struck late Sunday night by an Uber self-driving car. Given the NTSB's well-deserved reputation for thoroughness, and given the volume of data that should be available from the witness, the physical evidence and the car's data storage devices, there is little reason to doubt that the NTSB will be able to identify any and all causes of the collision and present a comprehensive report to the public, perhaps within a year.
While police, commentators and the press chase issues of fault and liability, safety advocates and the self-driving community will focus squarely on how this collision could have occurred when Uber's suite of sensors, hardware and software exists specifically to prevent collisions like this one.
TechCrunch's Devin Coldewey published an informative piece on Monday describing Uber's self-driving hardware and its role in detecting pedestrians and other vulnerable road users. The car's redundant, parallel suite of sensors includes radar, lidar and a camera array, all focused on sensing and seeing people and objects in the roadway, day or night.
So how could they all have failed to detect Elaine Herzberg as she walked at least 40 feet across an open street? By all accounts she was not cloaked in a high-tech, radar-evading, light-deflecting material designed for stealth; she was an ordinary person in ordinary clothing, doing what ordinary people do: crossing a street. She was also pushing a bicycle laden with bags, which should have presented a larger profile, and materials (namely, metal) that should have made her even easier for the car to detect.
Unfortunately, there are numerous ways autonomous vehicle hardware and software can fail, alone or in combination. It is important to note that there is no reason to believe that any of the following occurred here; these are hypothetical discussion points, offered for illustration only.
First, sensors may not sense. Ms. Herzberg was walking at night, and less light makes detection more difficult for the human eye and for camera sensors. The absence of light should be irrelevant to lidar and radar performance, but each sensor can separately be obstructed, dirty, soap-scummed or fly-specked.
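One common guard against this failure mode is continuous sensor health monitoring. The sketch below is purely illustrative: the `SensorReading` structure, the staleness and return-count thresholds, and the `sensor_is_healthy` check are all invented for this example, not drawn from Uber's or any other developer's actual software.

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorReading:
    """One frame from a single sensor, with a timestamp and a crude density score."""
    timestamp: float    # seconds, from time.time()
    return_count: int   # e.g., lidar points or radar returns in this frame

MAX_STALENESS_S = 0.2   # hypothetical: treat frames older than 200 ms as stale
MIN_RETURNS = 1000      # hypothetical: fewer returns suggests occlusion or grime

def sensor_is_healthy(reading: SensorReading, now: Optional[float] = None) -> bool:
    """Flag a sensor as degraded when its data is stale or suspiciously sparse.

    A dirty or obstructed lidar or radar often still reports *something*,
    just far fewer returns than normal, so a simple floor can catch it.
    """
    now = time.time() if now is None else now
    fresh = (now - reading.timestamp) <= MAX_STALENESS_S
    dense = reading.return_count >= MIN_RETURNS
    return fresh and dense

# A sparse frame (say, from a fly-specked lidar dome) fails the check:
print(sensor_is_healthy(SensorReading(timestamp=time.time(), return_count=120)))  # False
```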
Video footage reviewed by police will only be as good as the camera's sensor, aperture, light-gathering ability and the resolving power of its lens. In other words, the footage they have seen may not correspond to what a human eye could see that evening, or would see under similar lighting conditions.
Investigators will undoubtedly attempt to verify that all sensors were online, clean, calibrated and operational. The NTSB may ultimately recommend that more sensors, or different kinds of sensors such as thermal or infrared cameras, be added to fill any detection gaps in similar operating domains.
Second, sensors can often see an obstruction, but machine learning algorithms can classify it as a shadow, a stationary sign, or something else 'white-listed' by programmers as an object to ignore. Recall that the investigation into Joshua Brown's death in May 2016, while his Tesla was operating on Autopilot, focused on whether the highway-crossing semi-tractor trailer was 'white-listed' and misclassified as an overpass to be ignored. If an object is sensed but misidentified, programming can select the wrong course of action, including no action at all.
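To see how sensed-but-dismissed plays out in code, consider this minimal sketch. The `IGNORABLE` white list and the `plan_response` mapping are hypothetical, invented here to illustrate the logic, not a reconstruction of any real perception stack:

```python
# Hypothetical classifier output and "white list" -- invented for illustration,
# not drawn from Uber's or Tesla's actual software.
IGNORABLE = {"shadow", "overpass", "stationary_sign"}

def plan_response(label: str, confidence: float) -> str:
    """Map one detection to one action. If a real obstacle is misclassified
    into a white-listed class, logically 'correct' code still does nothing."""
    if label in IGNORABLE:
        return "no_action"            # sensed, but dismissed by classification
    if confidence >= 0.8:
        return "brake"
    return "slow_and_reassess"

# A pedestrian misread as a shadow is sensed yet never acted on:
print(plan_response("shadow", confidence=0.95))      # -> no_action
print(plan_response("pedestrian", confidence=0.95))  # -> brake
```

Note that the failure here is not in the action logic at all; it happens one step upstream, at classification, which is why a perfectly functioning planner can still do nothing.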
Third, a car's hardware could sense a figure, but due to conflicting inputs from different sensors, prioritization of one sensor over another, or insufficient training data, the artificial intelligence 'brain' might find no definitive classification match and simply not know how to respond. Given a limited library of objects and experiences, and a short list of actions to take in response, the software might select the playbook for proceeding on course.
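A toy version of that scenario, assuming an invented fixed sensor-priority fusion rule and a default action of proceeding; nothing here reflects how any real fusion stack resolves conflicts:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    source: str        # "lidar", "radar", "camera"
    label: str         # best-guess class, or "unknown"
    confidence: float  # 0.0 - 1.0

# Hypothetical fusion rule: trust the highest-priority sensor that reports.
SENSOR_PRIORITY = ["lidar", "radar", "camera"]   # invented ordering

def fuse(detections: list[Detection]) -> Detection:
    """Pick one detection by fixed sensor priority; conflicting inputs are
    not reconciled, just overridden, so real information can be discarded."""
    for source in SENSOR_PRIORITY:
        for d in detections:
            if d.source == source:
                return d
    return Detection("none", "unknown", 0.0)

def decide(fused: Detection) -> str:
    """With no confident match in the object library, the default 'playbook'
    here is simply to proceed on course."""
    if fused.label != "unknown" and fused.confidence >= 0.7:
        return "brake"
    return "proceed"   # the dangerous default described above

# Radar sees metal (the bicycle), but the prioritized lidar track is ambiguous:
frames = [Detection("lidar", "unknown", 0.3), Detection("radar", "vehicle", 0.6)]
print(decide(fuse(frames)))   # -> proceed
```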
Fourth, an autonomous car may not stop for a pedestrian in the road because of the Trolley Problem. Using the present collision for illustrative purposes only: the car might have seen Ms. Herzberg perfectly well at the last second, and even understood that she was a human being and a potential collision danger, yet followed programming designed to take the least dangerous course of action for its passenger(s) under the circumstances. It could also have prioritized staying its course because it held the legal right of way, treating evasive action as the greater liability.
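In planning terms, this kind of trade-off can be expressed as a cost function over candidate maneuvers. The weights and risk numbers below are entirely made up for illustration; no developer's actual values are known or implied:

```python
# Hypothetical cost weights; nothing here reflects any real developer's code.
PASSENGER_RISK_WEIGHT = 1.0
BYSTANDER_RISK_WEIGHT = 1.0   # lowering this below 1.0 encodes the bias at issue

def maneuver_cost(passenger_risk: float, bystander_risk: float) -> float:
    """Score a candidate maneuver; the planner picks the lowest-cost one."""
    return (PASSENGER_RISK_WEIGHT * passenger_risk +
            BYSTANDER_RISK_WEIGHT * bystander_risk)

# Candidate maneuvers with invented risk estimates (0 = safe, 1 = severe):
options = {
    "stay_course": maneuver_cost(passenger_risk=0.1, bystander_risk=0.9),
    "hard_swerve": maneuver_cost(passenger_risk=0.6, bystander_risk=0.2),
    "full_brake":  maneuver_cost(passenger_risk=0.3, bystander_risk=0.3),
}
print(min(options, key=options.get))  # -> full_brake with equal weights
```

The point of the sketch is how quietly the ethics hide in the weights: drop the bystander weight to 0.2 and "stay_course" becomes the cheapest option, with no other line of code changing.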
The possibility of an autonomous vehicle developer doing this is almost too sinister to imagine, yet there are presently no laws that would require automated vehicles to take evasive action that might increase the risk of harm to passengers or bystanders. A fatality caused by the Trolley Problem would undoubtedly invite a strict pre-certification scheme and scrutiny of developers going forward.
Fifth and finally, an automated vehicle is an electronic device like any other; it can experience the host of electronic problems other devices experience, including signal interference, power loss, loose wires, signal latency, cyber-hacking, packet conflicts, boot loops, pet hair in the fan vent, or any other failure of non-hardened, non-automotive-grade computer components, any of which can foreseeably lead to a fatality.
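Safety-critical systems typically defend against this class of failure with watchdog timers that detect when a component goes silent. Here is a minimal sketch; the 500 ms timeout and the `Watchdog` class are assumptions for illustration, not a description of any vehicle's actual supervision scheme:

```python
import time

HEARTBEAT_TIMEOUT_S = 0.5   # hypothetical: declare a fault after 500 ms of silence

class Watchdog:
    """Minimal watchdog: a component 'pets' it on every cycle; if the pets
    stop (power loss, boot loop, hung process), a supervisor can trigger a
    safe fallback such as a controlled stop."""
    def __init__(self) -> None:
        self.last_pet = time.monotonic()

    def pet(self) -> None:
        self.last_pet = time.monotonic()

    def expired(self) -> bool:
        return (time.monotonic() - self.last_pet) > HEARTBEAT_TIMEOUT_S

wd = Watchdog()
time.sleep(0.6)               # simulate a stalled perception process
if wd.expired():
    print("fault: perception heartbeat lost -> engage safe stop")
```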
As the police, press and commentators debate unrelated issues of liability and fault, automation safety advocates can trust that the NTSB will do all that is required to determine what truly occurred in the car, and what needs to be done to prevent it from happening again.