01 Nov 2019

It’s the robot’s fault! AI and legal liability in aerospace

Lucy England and Simon Phippard from law firm Bird & Bird examine how the rise of artificial intelligence (AI) may affect the legal liability and risk in the aerospace sector.

Recent tragic events such as the Lion Air and Ethiopian Airlines accidents will inevitably focus attention on the use and regulation of technology and automated systems in commercial airliners. Even if the technology is not at fault, events such as these are likely to make the industry scrutinise the current technological revolution more closely than ever.

Hailed as one of the most disruptive technologies, artificial intelligence (AI) brings a number of commercial and technical advantages to the aerospace sector: it offers manufacturers and operators the prospect of reducing operating costs, improving efficiency and, of course, harnessing predictive maintenance. It also offers commercial airlines more opportunities to improve the customer experience, thereby increasing customer loyalty.

But what are the legal implications of this technological transformation, and how does the increased use of automation – in particular AI – change the aviation industry’s well-trodden liability regime? This article looks at some of the legal challenges this increase in automation is bringing to the aerospace sector and offers some legal food for thought.

A well-rehearsed liability environment

The air transport world already has a well-established liability regime in the event of an accident. For international commercial air transport, the operator is presumed liable – in some instances strictly liable – for compensatory damages in the event of an accident, and for practical purposes there are no financial limits on that liability.

For all intents and purposes then, commercial air transport works on the basis of operators paying full compensatory damages if passengers are hurt or killed. The same regime usually applies to domestic transport, even if the international convention regime does not apply. Commercial air transport operators insure their liabilities − and indeed are usually obliged to do so − and, in the event of an accident, insurers seek to settle claims as quickly as possible.

In most cases, aircraft owners or operators are also strictly liable for damage occasioned on the ground, subject to financial limits. Although there has not been universal ratification of the Rome Convention, many countries have adopted a strict liability regime and, in most instances, the operator, if not the owner, accepts that they will carry the cost of any damage caused on the surface in an accident, and insures its liability accordingly.

Unmanned automation: do the same liabilities apply?

It is wrong to think that the terms autonomous and unmanned are interchangeable, or that an unmanned system is necessarily an AI system. Automation is, of course, a large part of unmanned operations, and an AI solution may form part of the platform or the supporting operations. Until now, the assumption has been that unmanned systems would not carry people on board, so considerations of surface damage, rather than passenger liability, have been the major issue. In this regard, the position for unmanned operations is no different from that for manned operations, but the increasing pace of interest in urban air mobility applications shows that it is now time to consider the passenger situation further.

What is not addressed specifically in the current aviation liability regime, whether for manned or unmanned flights, is the air-to-air risk. No strict liability regime governs that situation and, happily, instances have been rare. But what if an unmanned aircraft causes a mid-air collision? The risk of this for unmanned operations is perceived to be higher, perhaps because of the potential number of unmanned aircraft and, often, their pilots’ lack of experience. There are some suggestions that unmanned aircraft should face a presumption of strict liability for damage to other aircraft but is that a logical position to take if the operations in question are lawfully authorised? In any event, the operator will be liable to passengers on board or for damage occasioned on the surface.

Evolution, not revolution?

So what does AI change? AI has been heralded as revolutionary but it is not in fact a new technology. It was mooted in the 1940s, but it is only more recently, with the ability to capture and store more data, that its use has become widespread. AI needs data, and a lot of it. Many businesses in the sector have been making use of AI-based capability on an incremental basis, and the debate about levels of automation, in particular on the flight deck, has been running for many years.

Why we need “explainability”

AI applications and their algorithms tend to operate as black boxes – closed systems that give limited insight into how they reach their outcomes – and this can pose problems for key (human) decision makers in a business, many of whom are unaware of how AI systems make their decisions. Data is fed in and the AI solution is trained on it, but the output – the decision on what the next move should be – is made by the software.
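To make the point concrete, here is a minimal, hypothetical sketch – not drawn from this article – of how a trained model produces a decision whose internal rationale is effectively opaque. The data, model choice and maintenance scenario are illustrative assumptions only:

```python
# A minimal sketch (illustrative only) of the "black box" problem: the
# model produces decisions, but inspecting it yields little insight into
# why any individual decision was made.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Hypothetical sensor readings labelled "schedule maintenance" or "defer".
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

# The model yields a decision for a new reading...
decision = model.predict(X[:1])

# ...but its internals (hundreds of trees, thousands of split thresholds)
# do not explain that decision in human terms. Post-hoc tools such as
# permutation importance give only aggregate, approximate insight.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(decision, result.importances_mean)
```

Even with such post-hoc tools, experts may reasonably disagree over why a particular output was produced – which is precisely the evidential difficulty described below.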

For applications which promise commercial benefits, such as predicting spare parts requirements or scheduling maintenance, disputes may arise as to whether the supplier’s product or service is at fault. The supplier may in fact not know why the decision to act (or not act) was made by the black box. The less well it is understood exactly how the AI solution functions, the greater the scope for disagreement among the experts determining what was at “fault”.

Of greater concern to manufacturers, operators – and passengers – is more a moral dilemma than a legal one. To borrow an example often cited in the context of driverless cars: how does a wholly automated aircraft decide where to carry out a forced landing if the choice is between a school playing field and the gardens of a retirement home? In considering this, people forget that a human could be faced with the same decision points; so why does the fact that a machine has made the ultimate decision in this abnormal situation matter?

Despite the “explainability” problem and the ethical issues that this technology raises, the introduction of AI technologies into the supply chain, and into aircraft themselves, does not alter the basis on which the victims of an accident would be compensated. In very few instances does a victim have to demonstrate “fault” on the part of a supplier or operator and this is true regardless of what has made the decision.

So what difference does AI make? The answer lies in two factors. First, how will regulators go about approving systems that depend on AI or machine learning? Second, what are the implications within the supply chain?

Regulatory challenges: safety remains key

A change in the way equipment is designed or functions presents challenges for the regulators. Civil aviation authorities will need to be satisfied that a system reliant on increased automation or AI achieves the appropriate level of safety. Historically, certification processes have relied on a highly analytical approach to all elements of the system, how it functions under all operating conditions and the consequences of any given failure. This approach is then backed up by an analysis of performance in service and a detailed examination of instances in which equipment does not perform as expected.

It remains to be seen how regulators may adapt their approach to these new technologies, but if the “logic” by which the AI solution functions is unclear, the regulator may be reluctant to permit its use in safety critical applications. Will this limit the development of very highly automated systems, including those that make decisions on their own? For safety critical ones, quite possibly: manufacturers will need to convince regulators that these systems are at least as safe as, if not safer than, a human carrying out the same role.

What does this mean for manufacturer liability?

The principles governing claims by anyone who suffers injury or loss from a defective product are also well established. Since the mid-1980s, EU product liability law has made a “producer” strictly liable for injury suffered as a result of placing a defective product on the market, and a product is defective if it does not provide the safety which the public is entitled to expect. That product might be a food processor or a commercial airliner; if it is not sufficiently safe while in use, and injury or damage to goods results, the producer is liable. The producer may be the manufacturer, someone who has branded the product as its own, or an importer into the EU.

Similar principles apply in the US and in many other countries. It is important to recognise that − again − product liability arises regardless of fault. A manufacturer may exercise great care in designing or selecting equipment, but if the equipment fails in service in a way that is dangerous, there may be a product liability exposure. The risk is usually channelled through the “highest” player in the supply chain. So, if a control module employing AI technology malfunctions such that the entire module is not sufficiently safe, both the AI system supplier and the integrator – say an aircraft manufacturer – would face a product liability exposure.

In this situation nothing precludes an operator or manufacturer that is facing a product liability claim from seeking recourse against another supplier who is at fault or in breach of contractual obligations. New technologies do not change contractual relationships in that way and those avenues of recourse remain open. The difficulty, however, may be in determining that the AI system provided was at “fault”. An unexpected result – even when an accident is fatal – may not be the wrong result; much will depend on what data the AI solution was trained with and the parameters within which it has been trained. This is where these new technologies will start testing legal boundaries.

Commercial risk management

What, then, should businesses do? As with any increased automation, AI and machine learning technology brings both advantages and disadvantages and may or may not be welcomed by the more conservative parts of the aerospace sector. It will, however, find more applications, even if its application in safety critical roles is adopted more slowly. In any event, businesses need to consider the implications carefully when employing it.

That will require a proper risk assessment. To what extent do AI functions need to be fail-safe, and what is the default position if they are not functioning correctly? The risk involved is not only about safety and operational standards but also the commercial implications. If a business makes commitments on equipment reliability or spare parts availability in reliance on AI solutions, it needs confidence in its own applications. If the AI software has been supplied by a third party, it must be satisfied not only with the supplier’s integrity, creditworthiness and support capability, as usual, but also that it understands what data has been used to train the AI solution, how thoroughly it has been tested and what the parameters of its expected outcomes could be; one simple form of such testing is sketched below. Businesses will also need to ensure that contractual commitments and limitations are accurately defined and appropriate to the equipment or service in question.
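As a purely illustrative sketch – the model, threshold and data here are hypothetical assumptions, not anything described in this article – a buyer might run an acceptance test of a supplied AI component against a contractually agreed performance floor on its own hold-out data:

```python
# A hypothetical acceptance test a business might run before relying on a
# third-party AI solution. The model, data and threshold are stand-ins.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-ins for the supplier's trained model and the buyer's own data.
X, y = make_classification(n_samples=2000, n_features=10, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

supplier_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The contractually agreed performance floor (illustrative figure).
AGREED_ACCURACY = 0.80

# Evaluate the supplied model on data the supplier never saw.
accuracy = accuracy_score(y_test, supplier_model.predict(X_test))
assert accuracy >= AGREED_ACCURACY, (
    f"Supplied model accuracy {accuracy:.2%} falls below the agreed "
    f"{AGREED_ACCURACY:.0%} threshold"
)
```

Writing the agreed threshold and test procedure into the contract itself gives both sides an objective reference point if the solution later underperforms.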

The outlook

Aviation lawyers are used to the paths that allocate risk and liability in the event of accidents and incidents. While much will stay the same − and we do not believe an entirely new regulatory regime is needed, as some commentators suggest − automation, and in particular AI solutions, will continue to challenge not just regulators but those looking to use AI-based systems to gain significant competitive advantage.

Historically, aviation has relied heavily on the training and skills of flight crew as the last line of defence when other systems fail: crew who are able to make judgments in the face of competing priorities and when procedures no longer offer an appropriate solution. There is no single view as to how the legal challenges will be resolved, but replicating the human capabilities on which our industry is based will continue to be an interesting challenge to observe.
