Should We Hit the Brakes on Autonomous Vehicles?


Over the last several years, self-driving car companies have made significant progress with their road-ready vehicles. Cruise, a General Motors subsidiary, has been operating self-driving vehicles in Phoenix, San Francisco, and Austin, which together logged one million driverless miles earlier this year. Waymo, owned by Google’s parent company Alphabet Inc., has been operating near Phoenix since 2017. Waymo rides originally required a safety driver to be present, but last month the company began operating driverless taxis in San Francisco, with Los Angeles planned as the next location. Another up-and-coming company, Zoox, has recently received driverless testing authorization from the California and Nevada Departments of Motor Vehicles. Zoox has set itself apart from its competitors by developing a completely driverless vehicle: Zoox vehicles have no driver’s seat and no manual controls. The driverless testing permit Zoox obtained from the California DMV marks the first time a fully autonomous vehicle of this kind has operated with passengers.

Though great strides have been made recently, driverless vehicles are not incident-free. In 2018, an Uber self-driving vehicle struck and killed a 49-year-old pedestrian in Tempe, Arizona. The vehicle was operating in autonomous mode, though a safety driver was in the driver’s seat. Uber faced no charges, but the safety driver, who was watching a video at the time of the incident, was charged with negligent homicide. In May of this year, a Waymo vehicle operating in autonomous mode in San Francisco struck a small dog that ran into the street; the dog did not survive.

Cruise vehicles have also been making headlines recently. On August 10, in the North Beach area of San Francisco, as many as ten Cruise vehicles “stopped dead and blocked traffic.” Typically, when faced with an issue they cannot resolve on their own, Cruise cars rely on network connections to communicate with headquarters: operators either remotely tell the vehicle what to do or dispatch a local driver to physically move the vehicle out of trouble. That night, however, high network traffic from a nearby festival reduced connectivity in the area, so the vehicles could not reach HQ and remained stalled, many in the middle of the street, blocking traffic. It was later revealed that a pedestrian had intentionally interfered with the vehicles. Cruise’s delayed ability to resolve the issue, however, is a reminder of the possibly infinite edge cases for which self-driving car companies must prepare. Five days after the North Beach incident, another San Francisco Cruise vehicle drove into, and got stuck in, wet concrete in a construction zone. Two days after that, and shortly after the California Public Utilities Commission voted to allow Cruise to expand its services, a driverless Cruise vehicle collided with a fire truck in an intersection. The company has since been instructed to halve the number of vehicles it operates in San Francisco.

Unsurprisingly, not everyone is a fan of self-driving cars. At the top of the list are those living and working in the areas where the vehicles operate. Bay Area first responders have opposed the expansion of Cruise and Waymo services, compiling a list of 55 incidents in just the last six months in which self-driving cars interfered with rescue operations, including “running through yellow emergency tape, blocking firehouse driveways and refusing to move for first responders.” Many residents also object to their city being used as a testing ground for self-driving car companies. Safe Street Rebel, an anonymous activist group, has been taking to the streets to interfere with self-driving vehicles. Their protests typically involve “coning”: rendering a driverless vehicle inoperable by placing a traffic cone on its hood. The group hopes to draw attention to its broader fight against cars and to promote public transportation.

Despite a growing list of incidents, many people suggest, likely accurately, that self-driving vehicles are safer than vehicles operated by humans. Incidents that occurred while test drivers were present highlight that both technology and humans are prone to error. But should human error and technological error be equally expected? Humans have inherent limitations; reaction time and critical thinking may fall short when a hazard appears in the blink of an eye. Technology carries no such built-in limits. In fact, one of technology’s few innate attributes is that it is exactly as capable as we design it to be. So, rather than prioritizing the expansion of driverless services to new cities, now is the time to expand the capabilities of the current technology.