28 Sep 2020

Protecting ethical AI technology

Dr Rachel Free of CMS considers the ethical issues of AI and how the law can protect ethical AI technology


Research and development in AI is increasingly focused on how to create ethical AI technology. Machine learning raises difficult ethical challenges for a variety of technological reasons. Researchers and engineers are working on ways to overcome these hurdles, and significant investments of time and money are being made. Yet these types of technology are difficult to protect using intellectual property, for a number of reasons. This article sets out some of the issues and discusses the options available to AI stakeholders for protecting ethical AI technology.

Examples of ethical issues in AI technology

Generating a rationale for an AI decision

When a trained neural network computes a prediction, it is difficult for scientists to give a principled explanation of why that particular prediction was computed rather than another. Such an explanation is desirable for ethical reasons: suppose the neural network is predicting whether a customer will default on a loan, or whether a job candidate is suitable for a vacancy at an employer’s enterprise. These predictions have the potential to cause harm if they are inaccurate, so it is important to be able to explain how they were made. If the neural network decides not to grant a loan, the customer will need to know why, and what factors would need to change in order to obtain the loan. Work is being done to develop technology that enables neural network predictions to be explained in a human-understandable way.
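By way of illustration, the sketch below shows one simple family of explanation techniques: perturbing each input feature towards a neutral baseline and measuring how far the prediction moves. The model, feature names, weights and applicant values are all invented for illustration; production systems use far richer models and attribution methods.

```python
# Minimal sketch of perturbation-based explanation for a loan decision.
# The model, feature names, weights and applicant data are all invented.
import numpy as np

FEATURES = ["income", "debt_ratio", "years_employed", "missed_payments"]
WEIGHTS = np.array([0.8, -1.5, 0.4, -2.0])   # hypothetical trained weights
BIAS = -0.2

def predict(x):
    """Probability that the applicant repays the loan (logistic model)."""
    return 1.0 / (1.0 + np.exp(-(WEIGHTS @ x + BIAS)))

def explain(x, baseline):
    """Score each feature by how much the prediction moves when that
    feature is replaced with a neutral baseline value (occlusion)."""
    p = predict(x)
    contributions = {}
    for i, name in enumerate(FEATURES):
        perturbed = x.copy()
        perturbed[i] = baseline[i]
        contributions[name] = p - predict(perturbed)
    return p, contributions

applicant = np.array([0.3, 0.9, 0.2, 1.0])   # normalised, invented values
baseline = np.zeros(4)                        # an "average applicant" baseline
prob, contribs = explain(applicant, baseline)
print(f"repayment probability: {prob:.2f}")
for name, c in sorted(contribs.items(), key=lambda kv: kv[1]):
    print(f"  {name}: {c:+.2f}")   # negative = pushed towards refusal
```

A negative contribution flags a feature that pushed the decision towards refusal, which is exactly the kind of information a refused customer needs.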

Implementing the right to be forgotten

Another example is the problem of how to efficiently remove data about a particular person from a machine-learning system or knowledge base that has been created using data about that person and a huge number of others. This problem is also referred to as “how to enable the right to be forgotten”. Where data about a person has become subsumed into a complex internal representation, such as the weights of a deep neural network, it is extremely difficult to remove without completely retraining the network; the same is true of knowledge bases. Ways of tracking which data has influenced which parts of the model or knowledge base, and of removing the effects of particular data, need to be invented. This would avoid the high cost of completely retraining the neural network or reconstructing the knowledge base. These problems are far more than mere administration: they cannot be solved manually, and no straightforward solution is currently known.
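One research direction, sketched loosely below, is to partition the training data into shards and train a sub-model per shard, so that forgetting one person only requires retraining the shard that held their data. The “model” here is a trivial average, purely to make the mechanics visible; the shard layout and record format are invented.

```python
# Minimal sketch of shard-based "unlearning": train one sub-model per
# data shard, so removing a person's data means retraining only the
# affected shard rather than the whole ensemble. The "model" is a
# trivial running average, purely for illustration.
from statistics import mean

def train(shard):
    """Stand-in for real training: one 'model' per shard."""
    return mean(x for _, x in shard) if shard else 0.0

# records are (person_id, value) pairs, partitioned into shards
shards = [
    [("alice", 1.0), ("bob", 2.0)],
    [("carol", 3.0), ("dave", 4.0)],
]
models = [train(s) for s in shards]

def forget(person_id):
    """Remove a person's data and retrain only the affected shards."""
    for i, shard in enumerate(shards):
        if any(pid == person_id for pid, _ in shard):
            shards[i] = [(pid, x) for pid, x in shard if pid != person_id]
            models[i] = train(shards[i])   # cheap, local retraining

def predict():
    return mean(models)   # ensemble the per-shard models

print(predict())   # before removal
forget("bob")
print(predict())   # after removal: bob's influence is provably gone
```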

Determining accountability where an autonomous agent is involved

Determining accountability, for example when an autonomous vehicle is involved in a collision or an event resulting in the death of a human or other harm, is a very real obstacle to securing acceptance of autonomous decision-making systems. The problem of determining which entity is accountable is known to be extremely difficult to solve. Indeed, a recent report from the European Commission proposed that, because of this difficulty, a sensible and pragmatic way forward is to make the autonomous AI agent itself the accountable entity. Research is being done into tamper-proof ways of recording the state of the autonomous vehicle, and into ways of triggering a recording at appropriate moments, so that after a harmful event the recorded state can be used as evidence. Recording the state of an autonomous agent in tamper-proof ways will become even harder in future, because there will be the possibility of the AI agent being deceptive. Humans will need to invent ways of recording state that are guaranteed to represent ground truth.
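A minimal sketch of one well-known building block, a hash-chained log, is given below: each record commits cryptographically to its predecessor, so that altering any past entry is detectable on verification. Real deployments would add digital signatures, secure hardware and trusted timestamping; the state fields shown are invented.

```python
# Minimal sketch of a tamper-evident "black box" log for an autonomous
# agent, using a hash chain: each record commits to the previous one,
# so any after-the-fact alteration breaks verification.
import hashlib, json, time

def _digest(prev_hash, record):
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()

class StateLog:
    def __init__(self):
        self.entries = []          # list of (record, hash) pairs
        self._last = "genesis"

    def append(self, state):
        record = {"t": time.time(), "state": state}
        self._last = _digest(self._last, record)
        self.entries.append((record, self._last))

    def verify(self):
        prev = "genesis"
        for record, h in self.entries:
            if _digest(prev, record) != h:
                return False
            prev = h
        return True

log = StateLog()
log.append({"speed": 12.4, "brake": False})
log.append({"speed": 0.0, "brake": True})
print(log.verify())                         # True
log.entries[0][0]["state"]["brake"] = True  # tamper with history
print(log.verify())                         # False: tampering detected
```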

Driving “acceptable” behaviour

A further example is how to train a machine-learning system to perform a particular task in a manner that is acceptable to humans so that, for example, it is not biased against particular sections of society. A machine-learning system trained to recognise faces might inadvertently be biased against people from a particular ethnic group, depending on the training data used. Technical solutions are being developed that involve adapting the training objective used when training a neural network, to ensure that the neural network is not biased when it computes predictions for people of different demographic groups.
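The sketch below gives a loose illustration of the idea, on an invented dataset with invented group labels: a logistic model is trained with an extra penalty on the gap between the mean scores given to two demographic groups, a demographic-parity style term added to the ordinary training objective.

```python
# Minimal sketch of adapting a training objective for fairness: logistic
# regression trained with a penalty on the squared gap between mean
# predicted scores for two groups. Data, group labels and the penalty
# weight are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(float)
group = rng.random(200) > 0.5              # hypothetical group membership
LAM = 2.0                                   # fairness penalty weight

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

w = np.zeros(3)
for _ in range(500):
    p = sigmoid(X @ w)
    grad_ce = X.T @ (p - y) / len(y)        # cross-entropy gradient
    # gradient of (mean score group A - mean score group B)^2
    gap = p[group].mean() - p[~group].mean()
    dp = p * (1 - p)
    dgap = (X[group] * dp[group, None]).mean(0) \
         - (X[~group] * dp[~group, None]).mean(0)
    w -= 0.5 * (grad_ce + LAM * 2 * gap * dgap)

p = sigmoid(X @ w)
print(f"score gap between groups: {abs(p[group].mean() - p[~group].mean()):.3f}")
```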

Encoding ethical aims into algorithms

A computer algorithm is a set of instructions for achieving a task. The instructions have to be detailed because the computer does not have any common sense and is only able to implement the instructions “to the letter”. To illustrate this point, think of a recipe for baking a cake. One of the instructions in the recipe might be to add milk. If a computer were given the instruction “add milk” it would execute that instruction exactly, perhaps by placing a carton of milk into the cake mixture. Because of the detail needed to instruct the computer, it is difficult to break down high-level ethical objectives into instructions for a computer to execute. Thus it is difficult to encode ethical values into a computer algorithm. Suppose a computer program is instructed to play a game such as hangman and to win as quickly as possible. It may decide to fill the memory of its computer opponent until the opponent crashes. Being instructed to win as quickly as possible is not enough to enable ethical play.
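The point can be made concrete with a toy objective, sketched below with entirely invented actions and scores: an agent told only to minimise time-to-win will select the exploit, and only an explicitly encoded constraint changes its choice.

```python
# Toy illustration of literal objective-following: an agent asked only
# to minimise time-to-win happily picks an exploit, because the
# objective says nothing about fair play. The scenario is invented.
actions = {
    "play_skilfully": {"time_to_win": 120, "fair": True},
    "crash_opponent": {"time_to_win": 3,   "fair": False},
}

def objective(outcome):
    return outcome["time_to_win"]          # "win as quickly as possible"

print(min(actions, key=lambda a: objective(actions[a])))
# -> crash_opponent: the unethical exploit wins under this objective

def ethical_objective(outcome):
    # encoding the ethical constraint explicitly changes the choice
    return outcome["time_to_win"] + (0 if outcome["fair"] else 10**6)

print(min(actions, key=lambda a: ethical_objective(actions[a])))
# -> play_skilfully
```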

How can we protect ethical AI technology?

Copyright

Ethical AI technology is invariably complex software, and so copyright can be used to protect its source code. Copyright is cost-effective, since no registration process is needed and the right arises automatically. However, copyright is often difficult to enforce: evidence of copying of the work is needed and is often hard to obtain. If a competitor independently creates an AI solution that achieves the same function in a different way, copyright is no bar to that competitor.

Trade secrets

Trade secrets are useful for protecting material that is confidential and is expected to remain so. We might think that AI algorithms are a good example of material that will remain confidential, because AI algorithms often have a “black box” nature. But, increasingly, details about AI algorithms are being revealed in order to provide transparency and facilitate trust, and regulatory bodies sometimes require disclosure of the AI algorithm. In addition, there are ways to reverse engineer AI algorithms that are provided as cloud services with an application programming interface available to customers.
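A toy sketch of why API access leaks information is given below, with the “cloud service” reduced to a hidden linear model invented for the purpose: by querying it on chosen inputs and fitting a surrogate, an outsider can recover a close copy of its behaviour without ever seeing the code.

```python
# Toy illustration of model extraction via an API: query a black-box
# scoring function on sampled inputs and fit a local surrogate that
# mimics it. The hidden model and all data are invented.
import numpy as np

rng = np.random.default_rng(1)
SECRET_W = np.array([1.5, -2.0, 0.7])      # the provider's hidden model

def cloud_api(x):
    """Stand-in for a remote scoring endpoint."""
    return float(SECRET_W @ x)

queries = rng.normal(size=(100, 3))        # attacker-chosen inputs
answers = np.array([cloud_api(x) for x in queries])

# Fit a surrogate by least squares; for a linear model this recovers
# the hidden weights almost exactly.
surrogate_w, *_ = np.linalg.lstsq(queries, answers, rcond=None)
print(SECRET_W, surrogate_w.round(3))
```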

Brand protection

Using trade mark registrations to protect the brand of an AI product or service is increasingly important. Customers need to trust AI products and services: end users find it difficult to understand these technologies and have to rely on the reputation of manufacturers and service providers in order to decide what to buy. By registering trade marks, an AI brand holder is able to protect that reputation and prevent competitors from exploiting AI goods and services under the same brand.

Patents

Patents are a powerful monopoly right protecting the way an AI algorithm works. However, to obtain a patent in Europe for AI technology, generally speaking there needs to be a technical purpose.

Does technology that answers “how to address the risks of increasingly able AI” have a “technical purpose”?

In order to assess whether a purpose is technical or not, the EPO looks to case law. However, there is no existing case law regarding AI ethics as it is such a new field.

Another way to assess whether a purpose is technical is to consider whether the field of study is a technical field. For example, an engineering purpose would be considered technical because engineering is a field of technology. In the case of AI ethics, ethics is a branch of philosophy, and philosophy is not a science or technology because it is not empirical. Ethical values are held by human societies and vary according to the particular society involved. There is therefore an argument that dealing with the risks of increasingly able AI by giving AI ethical values is a social problem that does not lie in a technical field. I disagree with this line of argument, since scientists and engineers will need to devise engineering solutions, whether in software or hardware, in order to give AI ethical values and ensure that it upholds those values. Deciding which ethical values to give AI is a separate problem.

With regard to ways to make AI computation interpretable by humans, there are arguments that this is a technical purpose as it gives information to humans about the internal states of the computer.

With regard to ways to remove data from already trained AI systems without having to completely retrain them, there are arguments that this is a technical purpose because it is not merely administrative. Getting the solution wrong would lead to a non-working result or worse, to an incorrectly operating AI that may cause harm as a result. The same applies for ways to make AI decision making systems unbiased. These problems are part of a broader task of controlling an AI system, which is a technical problem of control and is not an administrative problem of removing data.

In summary, the options for protecting ethical AI technology are uncertain. Copyright is available but is limited in scope. Trade secrets are often not a viable option for this type of technology. Brand protection is crucial. Patent protection is powerful if it can be obtained, though there is currently a lack of case law in the field.

Dr Rachel Free is a partner and patent attorney at CMS

 

