Adversarial machine learning
Researchers have discovered a method of “vaccinating” artificial intelligence (AI) systems against the latest wave of attacks. Hackers have been tricking AI systems using what are termed “adversarial attacks”, in which a carefully crafted perturbation (the adversary) is added to data, such as an extra layer of noise on an image.
This noise can trick the AI’s algorithms into misclassifying the image, which can in turn let malicious inputs slip past. Dr Richard Nock of Australian research agency CSIRO says, “Adversarial attacks have proven capable of tricking a machine learning model into incorrectly labelling a traffic stop sign as a speed sign, which could have disastrous effects in the real world.”
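To make the idea concrete, here is a minimal sketch of one well-known attack of this kind, the fast gradient sign method (FGSM), applied to a toy linear classifier. The weights, the ten-pixel “image” and the step size are all illustrative assumptions, not anything from the CSIRO work.

```python
import numpy as np

# Toy stand-ins: a 2-class linear classifier and one clean input.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 10))           # score weights for 2 classes, 10-pixel "image"
x = rng.normal(size=10)                # a clean input

def predict(weights, image):
    return int(np.argmax(weights @ image))

clean_label = predict(W, x)
other = 1 - clean_label

# Gradient of the (other - predicted) score margin with respect to x.
grad = W[other] - W[clean_label]

# Smallest signed step guaranteed to flip this linear model's decision.
margin = (W[clean_label] - W[other]) @ x
eps = (margin + 1e-6) / np.abs(grad).sum()

x_adv = x + eps * np.sign(grad)        # noise of amplitude eps per pixel
adv_label = predict(W, x_adv)          # disagrees with clean_label
```

Each pixel moves by at most `eps`, so to a human the perturbed input looks like the original with faint noise, yet the model’s label changes — the same effect as the stop-sign example above, scaled down to a linear model.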
In an effort to counter this new form of cyber threat, the researchers at CSIRO have begun to develop an AI vaccine. Nock says, “We implement a weak version of an adversary, such as small modifications or distortion to a collection of images, to create a more ‘difficult’ training data set. When the algorithm is trained on data exposed to a small dose of distortion, the resulting model is more robust and immune to adversarial attacks.” In a research paper accepted at the 2019 International Conference on Machine Learning (ICML), the researchers also demonstrate that the “vaccination” techniques are built from the worst possible adversarial examples, and can therefore withstand very strong attacks.
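The training recipe Nock describes can be sketched as adversarial training: before every update, each example is distorted by a weak, worst-case perturbation, so the model learns on a more “difficult” data set. The data, model (logistic regression) and hyperparameters below are illustrative assumptions, not details from the ICML paper.

```python
import numpy as np

# Toy 2-class problem: points scattered around two opposite centres.
rng = np.random.default_rng(1)
n, d = 200, 20
centre = np.ones(d)
X = np.concatenate([rng.normal(+centre, 1.0, size=(n, d)),
                    rng.normal(-centre, 1.0, size=(n, d))])
y = np.concatenate([np.ones(n), -np.ones(n)])   # labels in {+1, -1}

def train(X, y, adv_eps=0.0, steps=200, lr=0.1):
    """Logistic-regression training; when adv_eps > 0, inputs are shifted
    by a signed worst-case step against the current weights -- the 'small
    dose of distortion' baked into the training set."""
    w = np.zeros(d)
    for _ in range(steps):
        Xt = X if adv_eps == 0 else X - adv_eps * (y[:, None] * np.sign(w))
        margins = y * (Xt @ w)
        grad = -(y[:, None] * Xt * (1.0 / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
        w -= lr * grad
    return w

def accuracy_under_attack(w, X, y, eps):
    # The attacker shifts every input eps against its label along sign(w).
    X_adv = X - eps * (y[:, None] * np.sign(w))
    return float(np.mean(np.sign(X_adv @ w) == y))

w_plain = train(X, y)                   # trained on clean data only
w_vaccinated = train(X, y, adv_eps=0.5) # trained on distorted data
```

Evaluating `accuracy_under_attack` at `eps=0.0` measures clean accuracy, while larger `eps` measures robustness; the “vaccinated” model is trained to keep its margin even after the worst-case shift.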
Adrian Turner, CEO of CSIRO's Data61, said this research is a significant contribution to the growing field of adversarial machine learning. “Artificial intelligence and machine learning can help solve some of the world’s greatest social, economic and environmental challenges, but that can't happen without focused research into these technologies.
“The new techniques against adversarial attacks developed at Data61 will spark a new line of machine learning research and ensure the positive use of transformative AI technologies,” Mr Turner said.