Automated Vehicles: Artificial Intelligence or Stupidity?
Whether set on far-flung planets or here on our humble Earth, Science Fiction (Sci-Fi) has produced many popular stories that shine a light not just on who we currently are, but on who we could be. Granted, they are mostly cautionary lessons about what not to do, given that they typically show how technology can decimate society. Peel back the layers of those stories, however, and there is something insightful there: these dystopian plots spark ideas about how things can be improved…if managed appropriately.
In a move that feels as though it has been taken directly from a Sci-Fi script, the government announced a plan in August this year for the commercialisation of self-driving vehicles by 2025. It should be noted that driverless vehicles are already with us, albeit in a limited, non-commercial capacity. To achieve commercialisation by 2025, self-driving cars will have to demonstrate that they function as expected, without undue risk.
Given that most road incidents arise from human error, full automation of the driving process eradicates a significant source of risk. Human error does not account for all road incident risk, though, meaning that incidents can still occur with driverless cars. When they do, blame is more ambiguous than in the typical driving context, because the algorithms that determine how the car operates are set and managed remotely. A question of direct blame becomes one of indirect blame, which is more difficult to untangle. However, dash-cams are known to help reduce the costs associated with legal cases, and dash-cam footage would likewise help to clarify blame in this instance.
To address the legal ramifications of driverless cars for road safety, it is likely that such cars will be mostly self-driving, with the capacity for humans to take control when required. This system would be an easy way to establish blame at the point of a road incident, but it raises issues around how automation affects general driving performance when drivers manually take back control.
Another issue arises if drivers are distracted immediately before they take manual control of the vehicle. The ideal control transition window is estimated to be 40 seconds, so attention would have to be shifted effectively and efficiently from the distraction to the driving task within that time. Research finds that this does not occur effectively or efficiently. Considering this level of uncertainty, it is no wonder that only 10% of all drivers would consider buying a driverless car in the future; even among younger drivers (under-34s), who are typically more inclined to embrace technology, only 14% would consider buying one.
A sensible idea would be to reduce the capacity for driverless cars to cause harm. Counterintuitively, research suggests the opposite should occur. By deliberately giving the cars the capacity to cause harm, other road users will behave toward them as they would toward human-driven cars, and human behaviour can be more accurately mapped as a result. In contrast, if driverless cars lack the capacity to cause harm, behaviour becomes more difficult to predict accurately, which causes issues when programming the algorithms that dictate how driverless cars operate. Such a measure is paradoxically implied to be better for road safety, but it is legally problematic because it willingly encourages the capacity for harm.
Beyond impacting road safety, driverless cars are proposed as a measure that would provide many benefits. These include:
- Delivering economic growth and new skilled jobs across the country
- Increasing efficiencies in the freight and logistics sector
- Improving public access to transport
- Reducing road congestion and improving public transport services
- Decarbonising transport
They are thus promising socially (connecting the country better), economically (stimulating labour markets) and environmentally (reducing the carbon footprint of transport), but careful consideration will have to be given to how the scheme is implemented. Given the financial investment and the blossoming stream of research related to this sector, there seems to be strong potential for this innovation to succeed. To do so, however, driverless vehicles will have to demonstrate that they are safer than cars with drivers, foster trust in the public, and be supported by clear legislation regarding blame in the (hopefully) rare instances when road incidents occur. There is a lot of work to do, but with appropriate management we can ensure that this Sci-Fi measure doesn't fall into the stereotypical dystopian trope. We can show that we have learned from the lessons handed to us in Sci-Fi books and films to make society better. In time, we could make roads completely harmless through intelligent automated vehicle design.
Odachowska, E., Ucińska, M., Kruszewski, M., & Gąsiorek, K. (2021). Psychological factors of the transfer of control in an automated vehicle. Open Engineering, 11(1), 419-424.
Fleet Industry. (2022, August 26). Safety concerns over driverless and self-driving cars. Retrieved from https://www.fleetnews.co.uk/news/fleet-industry-news/2022/08/26/safety-concerns-over-driverless-and-self-driving-cars on 13/09/2022.
Merat, N., Lee, Y. M., Markkula, G., Uttley, J., Camara, F., Fox, C., ... & Schieben, A. (2018, July). How do we study pedestrian interaction with automated vehicles? Preliminary findings from the European interACT project. In Automated Vehicles Symposium (pp. 21-33). Springer, Cham.
Government. (2022, August). Connected & Automated Mobility 2025. Retrieved from https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1099173/cam-2025-realising-benefits-self-driving-vehicles.pdf on 13/09/2022.