13 Nov 2018

Sidestepping Causation – the Automated and Electric Vehicles Act 2018

The new Automated and Electric Vehicles Act 2018 received Royal Assent on 19 July 2018. This article reviews the consequences, if any, of the legislative measure for users of Connected and Automated Vehicles, or “CAVs”.


Driverless vehicles, as they are more commonly known, present unique issues in terms of legal causation, particularly where they rely on autonomous systems that deploy machine learning technology, such as neural networks.

Instead of depending upon prescriptive, explicitly programmed instructions, these systems reach self-determined conclusions on the basis of the data (or data sets) presented to them. In simple and somewhat anthropomorphic terms, they “learn”, and it is these learned behaviours which enable them to operate in the real world with autonomy. Unfortunately, the technology is in essence a “black box”: whilst we can design and build these systems, we do not yet know how or why, in any particular instance, they reach the decisions that they do.
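The distinction can be made concrete with a deliberately simple sketch in Python – a toy far removed from any real driving system, with invented training data, threshold and learning rate. The first function’s behaviour can be read directly from its source; the second’s resides in weights derived from data, which is the root of the “black box” problem:

    import math

    def brake_rule(distance_m: float) -> bool:
        # Prescriptive rule: the behaviour is explicit in the source code.
        return distance_m < 10.0

    # Learned rule: a toy single-neuron classifier trained by gradient
    # descent on labelled examples of (distance, should_brake).
    examples = [(2.0, 1), (5.0, 1), (9.0, 1), (12.0, 0), (20.0, 0), (35.0, 0)]
    w, b = 0.0, 0.0
    for _ in range(5000):
        for distance, should_brake in examples:
            prediction = 1 / (1 + math.exp(-(w * distance + b)))
            error = prediction - should_brake
            w -= 0.01 * error * distance
            b -= 0.01 * error

    def brake_learned(distance_m: float) -> bool:
        # The decision boundary lives in the learned values of w and b;
        # nothing in the source says "10 metres", and inspecting the weights
        # does not reveal *why* a given decision was reached in a given case.
        return 1 / (1 + math.exp(-(w * distance_m + b))) > 0.5

    print(brake_rule(8.0), brake_learned(8.0))  # both should brake

Real neural networks contain millions of such weights rather than two, which is why their decisions resist the kind of line-by-line inspection that conventional code permits.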

The decisions or outputs of such systems also tend to fall on a probability curve: in most cases the expected output (or decision) will be produced, but in some cases a machine learning system will produce outputs at the edge of this curve – a so-called “edge case”. Edge cases are not defects or faults; they are merely the less common consequence of using these systems. Both the “edge cases” and the systems themselves create real problems for current legal theories of causation: firstly, because tort, contract and product liability all depend to a greater or lesser degree on fault or defects in order to found liability; and secondly, because the systems’ “black box” nature and lack of direct correlation to human action make it very difficult to determine, on an evidential basis, the degree to which human actors were in fact responsible for the system’s actions.
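As a crude numerical illustration of the point (the rate below is an assumed figure, not drawn from any real system), one can model a system whose outputs are overwhelmingly “expected” but which occasionally, and by design rather than by fault, produces a tail response:

    import random

    random.seed(42)
    EDGE_CASE_RATE = 0.001  # assumed: one output in a thousand sits in the tail

    def system_output() -> str:
        # Most outputs sit in the body of the probability curve; a small
        # fraction fall at its edge.  Neither kind is a "fault" -- both are
        # ordinary consequences of a probabilistic system.
        return "edge case" if random.random() < EDGE_CASE_RATE else "expected"

    trials = 100_000
    edge_cases = sum(system_output() == "edge case" for _ in range(trials))
    print(f"{edge_cases} edge cases in {trials:,} outputs")  # roughly 100 expected

On these assumed figures, a fleet producing 100,000 decisions would generate around 100 edge-case outputs with nothing “wrong” with the system at all – which is precisely what makes fault-based analysis so awkward.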

So far as CAVs are concerned, what we can see is that a CAV is in fact an assemblage of many and varied integrated systems, produced by multiple manufacturers. For a driverless car to work effectively, it needs sensors, such as radar and lidar, to detect and navigate road obstructions. It must have a computer to direct its actions, and that computer needs a logic framework within which to operate – internally, by use of its own operating software, and externally, by reference to map data. All of these systems need to work together effectively, and this is before one considers all the usual mechanical components of a standard car, which must also be present and functioning.
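By way of illustration only – the subsystem names and suppliers below are entirely invented – the assemblage might be inventoried like this, with each entry representing a distinct party standing behind the one vehicle:

    # Hypothetical bill of systems for a CAV; every name here is invented
    # purely to illustrate how many parties sit behind a single vehicle.
    cav_systems = {
        "radar sensor": "Supplier A",
        "lidar sensor": "Supplier B",
        "driving computer": "Supplier C",
        "operating software": "Supplier D",
        "map data service": "Supplier E",
        "mechanical components": "Supplier F",
    }

    # A defect anywhere in this assemblage is a potential liability target.
    for subsystem, supplier in cav_systems.items():
        print(f"{subsystem}: {supplier}")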

Setting aside the organisational complexity of the AI system itself, which we have already discussed, the sheer mechanical complexity of such machines alone gives rise to a potential plethora of liability targets, ranging from the vehicle manufacturer itself all the way down to the designer of an individual component, depending upon where the actual defect, fault or breach occurs.

Any party hoping to recover its losses through litigation, should one of these vehicles fail or end up in an accident, clearly faces a huge challenge. So what does the new Automated and Electric Vehicles Act (the “AEVA”) set out to achieve? In typical fashion, the UK government has chosen to address the issue of driverless cars from the perspective of the gaps in current insurance coverage caused by fully autonomous driving.

Section 2 of the AEVA provides that, “where…an accident is caused by an automated vehicle when driving itself…the vehicle is insured at the time of the accident, and…an insured person or any other person suffers damage as a result of the accident, the insurer is liable for that damage.”

In essence, the principle enshrined in the Act is that if you are hit by an insured party’s vehicle that is driving itself at the time, the insurer “at fault” pays out. If you have comprehensive cover, then you will also be insured for your own injuries. If the vehicle at fault is not covered by insurance, then the Motor Insurers’ Bureau will pay out in the usual way and seek to recover its losses from the owner of the uninsured vehicle.

This is an essentially pragmatic response that will probably work in an environment where driverless and human-piloted cars share the roads – it also avoids systemic change to the insurance industry. It does, however, completely sidestep the causation problems described above. Crucially, the measure relies very heavily on the ability of insurers to subrogate and therefore bring claims of their own against other third parties, including manufacturers. This will of course be hugely problematic for insurers if, as we have already argued, the relevant fault or defect cannot easily be traced.

As we have seen, as intelligent machines and AI systems “learn” for themselves, their behaviours become less and less directly attributable to human programming. These machines are not acting on a prescriptive instruction set, but on a system of rules that may not have anticipated the precise circumstances under which the machine should act. To take our earlier example of the autonomous vehicle: what if our car has been programmed both to preserve the safety of its occupants and to avoid pedestrians at all costs, and is placed in an unavoidable situation where it must decide whether to avoid a pedestrian crossing into its path (and thereby run into a brick wall, injuring or even killing its occupants) or to run over the pedestrian (and thereby save its occupants)? Can any outcome of that decision be said to be a failure, fault or defect – even if people are injured or possibly killed as a result?
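To see why neither outcome maps neatly onto a “defect”, consider a deliberately simplified decision sketch (the function, its parameters and the priority rule are hypothetical). Both available actions breach one of the system’s stated objectives, so whichever is chosen, the system has behaved exactly as designed:

    def choose_action(pedestrian_in_path: bool, swerve_hits_wall: bool) -> str:
        # Hypothetical objectives: protect the occupants AND avoid pedestrians.
        if not pedestrian_in_path:
            return "continue"   # both objectives satisfied
        if not swerve_hits_wall:
            return "swerve"     # pedestrian avoided at no cost to the occupants
        # Unavoidable dilemma: every available action breaches one objective.
        # Whichever rule is applied here, the output is by design, not a defect.
        return "swerve"         # e.g. a rule that prioritises the pedestrian

    print(choose_action(pedestrian_in_path=True, swerve_hits_wall=True))

The point of the sketch is that the final branch is a forced trade-off: the system returns exactly the answer its rules dictate, yet someone is harmed either way, leaving nothing for a fault-based analysis to fasten onto.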

As we have seen, tracing “fault” is likely to prove very difficult for insurers, as the AEVA regime simply preserves the status quo ante. There are, however, some established doctrines in tort which may prove useful – the doctrine of res ipsa loquitur (“the thing speaks for itself”), for example.

Res ipsa loquitur is useful in dealing with cases involving multiple successive failures which cannot in themselves be readily explained. A classic example of its application arose in the US litigation against Toyota Motor Corporation, where it was found that, for no apparent reason, many of Toyota’s high-end Lexus models simply accelerated – despite the intervention of their drivers. Despite much investigation, the cause of these failures could not be pinpointed. Toyota took the step of settling 400 pending cases against it after a jury applied the doctrine of res ipsa loquitur and awarded the plaintiffs in that case $3m in damages.

In the UK, the principles are enshrined in the leading case of Scott and Bennett v Chemical Construction (GB) Ltd [1971] 3 All ER 822. This provides a three-step test for the principle to apply: (i) the event or accident would not ordinarily have occurred in the absence of negligence; (ii) the thing causing the damage must be under the control of the defendant; and (iii) there is no evidence as to the cause of the event or accident. Clearly, such a test will need to be considered in the context of artificially intelligent machines, and the extent to which such a device can be considered to be “under the control” of its user, when it is making decisions on its own behalf, will be debatable to a greater or lesser degree. What this doctrine does not provide a solution to, of course, is the inexplicable isolated incident.

Clearly, whilst the AEVA is a pragmatic response to an immediate problem caused by the use of such vehicles, it does little more than push causation issues directly into the laps of insurers, who will still face considerable difficulties in pursuing claims against other third parties, including the manufacturers of such vehicles.

Aside from the AEVA itself, the Law Commission of England and Wales, in a joint project with the Scottish Law Commission, has – at the time of writing – been tasked by the Centre for Connected and Autonomous Vehicles (part of the Department for Transport and the Department for Business, Energy and Industrial Strategy) to undertake a wide-ranging review of the laws governing autonomous vehicles, in recognition of the inadequacies in the current law.

The review will address concerns relating to safety assurance mechanisms (such as type approvals and MoTs), accountability for crime and accidents, and how to decide whether such vehicles are safe for road use. Data protection and privacy issues, theft and cyber security, and land use policy are expressly stated to be outside the review’s scope, as are drones and vehicles designed solely for use on pavements rather than roads. The review will run for a three-year period commencing in March 2018, and it is expected that a scoping paper for consultations will be published before the end of 2018.

