Guest Op-Ed: Do We Need to Upgrade from “Informed Consent” to “Informed Risk” with Autonomous Vehicles?
Recent events demonstrate the fascinating and eye-opening time we are living in, particularly when it comes to the continued integration of technology into our lives: the testimony of Facebook’s CEO on Capitol Hill about how consumer-provided data is used for profit, the significant disruption to City of Atlanta services from a ransomware attack with economic impacts currently estimated at nearly $3 million, and the first known fatality involving a self-driving vehicle.
New innovations are quickly making their way into every aspect of our lives (and commutes) – from personal digital assistants like “Alexa” and “Siri,” to apps that bring services literally to our fingertips, to the ongoing deployment of self-driving vehicles. With the benefits that come from new technologies rolled out seemingly daily – most tangibly, convenience – also come risks that are often little understood or, more accurately, ignored.
The danger is that consumers are not clearly consenting to conditions they have read and understood – a potential impediment to the continued adoption of new and larger innovations like self-driving vehicles, which are anticipated to collect large amounts of personal data from a “driver/operator/rider” (we are still working on those definitions). As the ongoing discussion around Facebook’s use of data for profit shows (disturbingly, to the surprise of many members of Congress), failing to clearly disclose risks can lead to the boycotting of an innovation and a sentiment of mistrust toward technology companies.
For autonomous vehicles, the issues of transparency and informed consent are playing out through the recent fatality involving an Uber autonomous vehicle and at least two deaths to date in cars with Tesla “Autopilot” engaged, not to mention the most recent self-driving vehicle accident involving Waymo. These incidents call into question companies’ obligation to accurately disclose to consumers the true capabilities of a vehicle touted as “autonomous”: Does the vehicle require a person to be ready to take back control if the autonomous system disengages or fails, or can a person enjoy a fully automated experience, with the vehicle truly able to monitor and carry out its complete operation? This distinction will also be an important part of determining future liability for accidents involving autonomous vehicles.
If mistrust grows around companies’ use of data, or if recourse for an injury or death from riding in a privately operated self-driving fleet vehicle is found to be “unknowingly” limited, we may experience slowed adoption of new innovations – or, worse yet, technologies that offer potential societal benefits, such as enhanced mobility for underserved communities, may never come to fruition.
Gregory Rodriguez is of counsel with Best Best & Krieger LLP. Based in the firm’s Washington, D.C. office, he provides strategic information, policy insight and legal assistance to help communities plan for and incorporate emerging transportation technologies like automated vehicles. He can be followed on Twitter @smartertranspo.
The views expressed above are those of the author and do not necessarily reflect the views of the Eno Center for Transportation.