
Earlier this spring, 49-year-old Elaine Herzberg was walking her bike across the street in Tempe, Ariz., when she was hit and killed by a car traveling at over 40 miles an hour.

There was something unusual about this tragedy: The car that hit Herzberg was driving on its own. It was an autonomous car being tested by Uber.

It’s not the only recent car crash connected to autonomous vehicles (AVs). In May, a Tesla in “Autopilot” mode briefly accelerated before hitting the back of a fire truck, injuring two people.

The accidents reignited debates that have long been simmering around the ethics of self-driving cars. Is this technology really safer than human drivers? How do we keep people safe while the technology is being developed and tested? In the event of a crash, who is responsible: the developers who create faulty software, the human in the driver’s seat who fails to recognize a system failure, or one of the hundreds of other hands that touched the technology along the way?

The need for driving innovation is clear: Motor vehicle deaths topped 40,000 in 2017, according to the National Safety Council. A recent study by the RAND Corporation estimates that putting AVs on the road once the technology is just 10 percent better than human drivers could save thousands of lives. Industry leaders continue to push ahead with the development of AVs: Over $80 billion has been invested in AV technology so far, the Brookings Institution estimates. Top automotive, rideshare and technology companies, including Uber, Lyft, Tesla and GM, have self-driving car projects in the works. GM plans to release a vehicle that does not need a human driver--and won’t even have pedals or a steering wheel--by 2019.

But as the above crashes indicate, there are questions to be answered before the potential of this technology is fully realized.

Ethics in the programming process

Accidents involving self-driving cars are usually due to sensor error or software error, explains Srikanth Saripalli, associate professor in mechanical engineering at Texas A&M University, in The Conversation. The first issue is a technical one: Light Detection and Ranging (LIDAR) sensors won’t detect obstacles in fog, cameras need the right light, and radars aren’t always accurate. Sensor technology continues to develop, but significant work is still needed before self-driving cars can operate safely in icy, snowy and other adverse conditions. Inaccurate sensor data can cause errors in the system that likely wouldn’t trip up a human driver. In the case of Uber’s accident, the software classified Herzberg (who was walking her bike) first as an unknown object, then as a vehicle and finally as a bicycle, “with varying expectations of future travel path,” according to a National Transportation Safety Board (NTSB) preliminary report on the incident. The confusion caused a deadly delay--it was only 1.3 seconds before impact that the software determined that emergency braking was needed.
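To see why that kind of classification flicker matters, consider a toy model. The names, thresholds and logic below are purely illustrative assumptions, not Uber’s actual software: the point is that if each re-classification resets the system’s guess about where an object is headed, the decision to brake can keep getting postponed until very little stopping distance remains.

```python
# Illustrative sketch only: a toy model of how unstable object classification
# can delay a braking decision. All names, thresholds and behaviors here are
# hypothetical and are not taken from Uber's system or the NTSB report.

BRAKE_DISTANCE_M = 30.0  # hypothetical distance at which braking should begin


def predicted_path(classification):
    """Each re-classification implies a different guess about future movement."""
    return {
        "unknown": "stationary",
        "vehicle": "following its lane",
        "bicycle": "crossing our lane",
    }.get(classification, "stationary")


def should_brake(classification, distance_m):
    # In this toy model, only a path that crosses the car's lane triggers
    # braking, so every flip back to "unknown" or "vehicle" postpones it.
    return predicted_path(classification) == "crossing our lane" and distance_m < BRAKE_DISTANCE_M


# The classification flickers as the object gets closer; braking is only
# requested on the last reading, when little stopping distance remains.
for classification, distance in [("unknown", 80), ("vehicle", 55), ("unknown", 40), ("bicycle", 25)]:
    action = "brake!" if should_brake(classification, distance) else "no action"
    print(f"{distance} m, classified as {classification}: {action}")
```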

Self-driving cars are programmed to be rule-followers, Saripalli explains, but the realities of the road are usually a bit blurrier. In a 2017 accident in Tempe, Ariz., for example, a human-driven car attempted to turn left through three lanes of traffic and collided with a self-driving Uber. While there isn’t anything inherently unsafe about proceeding through a green light, a human driver might have anticipated left-turning vehicles and slowed down before the intersection, Saripalli points out. “Before autonomous vehicles can really hit the road, they need to be programmed with instructions about how to behave when other vehicles do something out of the ordinary,” he writes.
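What such an instruction could look like is sketched below in a deliberately simplified form. The function, inputs and thresholds are hypothetical, but they capture the defensive behavior Saripalli describes: shedding speed near an intersection, even on a green light, when oncoming traffic might turn across the car’s path.

```python
# Illustrative sketch of the kind of "defensive" rule Saripalli describes:
# slow down near an intersection, even on a green light, if oncoming traffic
# could turn across the car's path. Function names and thresholds are hypothetical.

def target_speed(speed_limit_kmh, approaching_intersection, light_is_green,
                 oncoming_cars_waiting_to_turn):
    """Return a cautious target speed instead of simply 'the limit on green'."""
    if not approaching_intersection:
        return speed_limit_kmh
    if not light_is_green:
        return 0.0  # prepare to stop
    if oncoming_cars_waiting_to_turn:
        # A strict rule-follower would keep full speed through the green light;
        # a defensive driver sheds speed in case someone turns left across it.
        return speed_limit_kmh * 0.6
    return speed_limit_kmh


print(target_speed(65, True, True, True))  # 39.0 -- slows despite the green light
```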

However, in both the Uber accident that killed Herzberg and the Tesla collision mentioned above, the person behind the wheel wasn’t watching the road until it was too late to react. Even though both companies require drivers to keep their hands on the wheel and eyes on the road in case of a system error, the crashes are a reminder that humans are prone to mistakes, accidents and distractions--even when testing self-driving cars. Can we trust humans to be reliable backup drivers when something goes wrong?

Further, can we trust that companies will be thoughtful--and ethical--about the expectations they set for backup drivers in the race to log test miles? Backup drivers who worked for Uber told CityLab that they worked eight- to ten-hour shifts with a 30-minute lunch and were often pressured to forgo breaks. Staying alert and focused for that long is already challenging, and the false security of self-driving technology makes it tempting to take a quick mental break while on the road. “Uber is essentially asking this operator to do what a robot would do. A robot can run loops and not get fatigued. But humans don’t do that,” an operator told CityLab.

The limits of the trolley scenario

Despite the questions these accidents raise about the development process, the ethics conversation has so far largely focused on the moment of impact. Consider the “trolley problem,” a hypothetical ethical brain teaser frequently brought up in the debate over self-driving cars. If an AV is faced with an inevitable fatal crash, whose life should it save? Should it prioritize the pedestrian? The passenger? Saving the most lives possible? The young over the elderly?

Ethical questions abound in every engineering and design decision, engineering researchers Tobias Holstein, Gordana Dodig-Crnkovic and Patrizio Pelliccione argue in their recent paper, Ethical and Social Aspects of Self-Driving Cars--from software security (can the car be hacked?) to privacy (what happens to the data collected by the car’s sensors?) to quality assurance (how often does a car like this need maintenance checks?). Furthermore, the researchers note that some ethical considerations are directly at odds with private industry’s financial incentives: Should a manufacturer be allowed to sell cheaper cars outfitted with cheaper sensors? Could a customer pay more for a feature that lets them influence the vehicle’s decision-making in fatal situations? How transparent should the technology be, and how will that be balanced against intellectual property that is vital to a competitive advantage?

The future impact of this technology hinges on these complex and bureaucratic “mundane ethics,” points out Johannes Himmelreich, an interdisciplinary ethics fellow at Stanford University, in The Conversation. Big moral quandaries, he writes, don’t just arise five seconds before the point of impact. Programmers could choose to optimize acceleration and braking to reduce emissions or improve traffic flow. But even these decisions pose big questions for the future of society: Will we prioritize safety or mobility? Efficiency or environmental concerns?

Ethics and responsibility

Lawmakers have already begun making these decisions. State governments and municipalities have scrambled to play host to the first self-driving car tests, in hopes of attracting lucrative tech companies, jobs and an innovation-friendly reputation. Arizona governor Doug Ducey has been one of the most vocal proponents, welcoming Uber when the company was kicked out of San Francisco for testing without a permit.

Currently, a patchwork of state laws and executive orders regulates self-driving cars. The varying rules make testing, and an eventual widespread rollout, more complicated, and self-driving cars will likely need their own set of safety regulations in any case. Outside the U.S., there has been more concrete discussion. Last summer, Germany adopted the world’s first ethical guidelines for driverless cars. The rules state that human lives must take priority over damage to property and that, in an unavoidable accident involving people, decisions cannot be based on “age, gender, physical or mental constitution,” among other stipulations.

There has also been discussion of whether consumers should have the ultimate say over AV ethics. Last fall, researchers at the European University Institute proposed what they call an “ethical knob,” with which the consumer would set the software’s ethical decision-making to altruistic (preference for third parties), impartial (equal importance to all parties) or egoistic (preference for the vehicle’s passengers) in the case of an unavoidable accident. While this approach still poses problems (a road on which every vehicle prioritizes the safety of its own passengers could create more risk), it does reflect public opinion. In a series of surveys, researchers found that people believe in utilitarian ethics when it comes to self-driving cars--AVs should minimize casualties in an unavoidable accident--but wouldn’t be keen on riding in a car that might value the lives of multiple others over their own.
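As a rough illustration of how such a setting might enter the software, the sketch below encodes the three knob positions as weights on passenger versus third-party harm. The weighting scheme, numbers and function names are hypothetical assumptions; the researchers describe a concept, not an implementation.

```python
# A sketch of how the proposed "ethical knob" could be represented in software.
# The three settings come from the European University Institute proposal; the
# weighting scheme and all names here are hypothetical illustrations.
from enum import Enum


class KnobSetting(Enum):
    ALTRUISTIC = "altruistic"  # preference for third parties
    IMPARTIAL = "impartial"    # equal importance to all parties
    EGOISTIC = "egoistic"      # preference for the vehicle's passengers


def weighted_harm(setting, passengers_at_risk, third_parties_at_risk):
    """Score one unavoidable-crash option; lower scores are preferred."""
    weights = {
        KnobSetting.ALTRUISTIC: (0.3, 1.0),  # (passenger weight, third-party weight)
        KnobSetting.IMPARTIAL: (1.0, 1.0),
        KnobSetting.EGOISTIC: (1.0, 0.3),
    }
    passenger_w, third_party_w = weights[setting]
    return passenger_w * passengers_at_risk + third_party_w * third_parties_at_risk


# Choose between two bad options: swerve (risking the one passenger) or stay
# on course (risking two pedestrians). The knob setting flips the decision.
options = {"swerve": (1, 0), "stay on course": (0, 2)}
for setting in KnobSetting:
    choice = min(options, key=lambda o: weighted_harm(setting, *options[o]))
    print(f"{setting.value}: {choice}")
```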

This dilemma sums up the ethical challenges ahead as self-driving technology is tested, developed and increasingly driven alongside us on the roads. The public wants safety for as many people as possible, but not at the cost of their own safety or that of their loved ones. If people are to put their lives in the hands of sensors and software, thoughtful ethical decisions will need to be made to ensure that a death like Herzberg’s isn’t an inevitable part of the journey to safer roads.

Karis Hustad

Karis Hustad is a Denmark-based freelance journalist covering technology, business, gender, politics and Northern Europe. She previously reported for The Christian Science Monitor and Chicago Inno. Follow her on Twitter @karishustad and see more of her work at karishustad.com.
