
Why Accidents Involving Self-Driving Cars Are So Complex

There’s no doubt self-driving cars are the future, but we still have a few issues to work out when it comes to accidents involving these vehicles. One of the biggest unanswered questions is: Who is at fault when an accident with a self-driving car occurs?

There’s no straightforward answer yet because self-driving vehicles are still so new. That’s not very reassuring if you’ve been in an accident with a self-driving car — and you weren’t the one operating it. Before more of these vehicles start showing up on the road, it’s important to know why assigning liability is so complicated.

Self-driving car manufacturers mislabel features

There aren’t many standards when it comes to self-driving cars, although many states are working to get laws on the books to change that. One area that lacks regulation and could lead to confusion in the event of an accident is the terminology used for automated driving systems.

Manufacturers have been known to misname these features in ways that mislead drivers. For example, Tesla and some other companies refer to their systems as “autopilot” when the vehicles still require a substantial degree of human operation. A German court even ruled that Tesla’s use of “Autopilot” for its driver-assist feature is misleading.

Currently, no fully autonomous vehicles qualify as Level 5 on the autonomous driving scale. Cars that can go on true “autopilot” and operate without any human involvement whatsoever are not on the market yet. Labeling features as autopilot when they’re not leads to confusion and, ultimately, accidents.

A false sense of security

Confusing drivers with misleading driver-assist feature names can lead not only to accidents but also to liability issues. If an accident occurs, is Tesla at fault for the misleading name, or is the driver at fault for not fully understanding the automated driving feature?

The German court that ruled against Tesla’s use of “autopilot” placed the blame with the company. Several accidents and incidents involving improper use of Tesla’s Autopilot feature have already occurred.

Incidents like these show how easily drivers can be lulled into a false sense of security behind the wheel of a self-driving car. The difficulty, however, is determining whether it’s the manufacturer’s responsibility to educate drivers on the system or the driver’s responsibility to understand the features before operating the vehicle.

Automated driving systems are not foolproof

Along the same lines as the false-sense-of-security issue is the problem that automated driving systems are not foolproof. Errors still occur with these systems, due primarily to drivers misunderstanding what the systems can and cannot do. One study of drivers in a Tesla Model S found that drivers tended to spend longer periods with their eyes off the road when partial automation was engaged.

Most people assume that self-driving cars should be error-free and “do the work for you.” In reality, these automated features can be quite complex, and there is no driver training in place to help new drivers understand them. Operating a vehicle with driver-assist features isn’t like turning on a new smartphone for the first time; it’s not as intuitive. The danger of causing an accident through misuse makes self-driving cars even riskier.

There’s no clear way to obtain consent

While it’s clear there is a communication gap between how driver-assist features should be used and how drivers actually use them, what’s less clear is how to close that gap. This makes it difficult to determine liability in an accident.

One suggestion for resolving the liability issue is to use end user license agreements like those common in many tech services. The agreement would require the driver’s signature as proof they read and understood the information provided about the automated features. This practice would shift liability to drivers.

The problem with end user license agreements is that almost nobody reads them before signing. Their implementation in self-driving cars would likely not solve the problem of drivers misunderstanding how to properly use automated features.

Most self-driving cars still require significant human interaction

The very nature of today’s self-driving cars makes liability murky. Almost all commercially available self-driving cars still rely on a combination of human operation and automation, which makes liability more difficult to determine. If the human driver weren’t interacting with the vehicle at all and it operated on a true “autopilot,” the case for placing liability on the manufacturer would be stronger. It could still be many years before we have fully self-driving cars, though. In the meantime, we have to figure out how to determine responsibility when an accident with a self-driving car occurs.

That’s why self-driving car accidents are so complex. You can also learn why to hire a personal injury lawyer in Indianapolis.

If you’ve been in a car accident, contact a lawyer in Indianapolis and we will help!

