May 19, 2024
Cyberattack

New Study Reveals Various Types of Cyberattacks Targeting AI Systems

A new report published by computer scientists from the National Institute of Standards and Technology (NIST) and their collaborators sheds light on the vulnerabilities of artificial intelligence (AI) and machine learning (ML) systems. Titled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” the publication aims to help AI developers and users understand the types of attacks they may encounter and to outline strategies for mitigating them.

The report is part of NIST’s broader effort to support the development of trustworthy AI and aligns with NIST’s AI Risk Management Framework. The collaboration among government, academia, and industry brings together diverse perspectives on the challenges posed by adversaries seeking to manipulate AI systems.

According to Apostol Vassilev, a computer scientist at NIST and one of the authors of the publication, the report offers an overview of attack techniques and methodologies applicable to various types of AI systems, along with the mitigation strategies reported in the literature. However, Vassilev cautions that current defenses lack robust assurances that they fully mitigate these risks, and he calls on the community to devise stronger solutions.

Given the widespread use of AI systems in society, these vulnerabilities can have far-reaching consequences. From autonomous vehicles and medical diagnostics to customer-facing chatbots, AI is integrated into numerous domains. The data used to train these systems, however, can be corrupted, and adversaries can exploit such weaknesses to confuse or manipulate AI systems into malfunctioning. For example, a chatbot can be induced to respond with abusive or racist language when prompted with carefully crafted inputs.

The report identifies four major types of attacks: evasion attacks, which alter inputs after a system is deployed; poisoning attacks, which corrupt the training data; privacy attacks, which extract sensitive information about a model or the data it was trained on; and abuse attacks, which plant incorrect information in a legitimate source that an AI system later ingests. Each attack is categorized by the attacker’s goals, capabilities, and knowledge. The authors delve into the subcategories of these attacks and describe approaches for mitigating them, while acknowledging that the defenses developed so far have limitations and highlighting the need for further advances.
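
To make the evasion category concrete, here is a minimal sketch of the classic fast-gradient-sign method applied to a hand-rolled logistic-regression classifier. The toy dataset, the model, and the perturbation budget are illustrative assumptions for this demo, not details drawn from the NIST report.

```python
# Evasion-attack sketch: fast gradient sign method (FGSM) against a
# logistic-regression classifier. Pure NumPy; dataset, model, and the
# epsilon budget are assumptions chosen to make the flip visible.
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: Gaussian blobs centered at (-1,-1) and (+1,+1).
X = np.vstack([rng.normal(-1.0, 0.5, size=(100, 2)),
               rng.normal(+1.0, 0.5, size=(100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit logistic regression with plain gradient descent on the log-loss.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# Evasion step: for the log-loss, dL/dx = (p - y) * w, so the attack
# nudges the input against the weight vector: x_adv = x + eps*sign(dL/dx).
x = np.array([1.0, 1.0])        # a representative class-1 input
grad_x = (sigmoid(x @ w + b) - 1.0) * w
eps = 1.5                       # perturbation budget (assumed; large enough to flip)
x_adv = x + eps * np.sign(grad_x)

print("clean score      :", sigmoid(x @ w + b))      # near 1 -> classified as 1
print("adversarial score:", sigmoid(x_adv @ w + b))  # below 0.5 -> classified as 0
```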

Alina Oprea, a professor at Northeastern University and co-author of the report, highlights how easily these attacks can be mounted. Many require minimal knowledge of the AI system and can be carried out by manipulating only a small percentage of the training data, underscoring how vulnerable AI systems can be to adversarial action.
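
A minimal way to see this point is to poison a toy training set directly. The sketch below, using scikit-learn, flips the labels of a small fraction of the training examples and measures the resulting drop in test accuracy; the dataset, model, and poison rates are assumptions for illustration, and random label flipping is only a blunt stand-in for the targeted poisoning strategies the report surveys.

```python
# Poisoning-attack sketch: flip the labels of a fraction of the training
# set and watch test accuracy degrade. All parameters are demo assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def accuracy_with_poisoning(fraction, rng):
    """Train after flipping the labels of `fraction` of the training set."""
    y_poisoned = y_tr.copy()
    n_flip = int(fraction * len(y_tr))
    idx = rng.choice(len(y_tr), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]          # flip 0 <-> 1
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)                 # evaluate on clean labels

rng = np.random.default_rng(0)
for fraction in (0.0, 0.03, 0.10, 0.30):
    print(f"poisoned fraction {fraction:4.0%} -> test accuracy "
          f"{accuracy_with_poisoning(fraction, rng):.3f}")
```

A targeted attacker who chooses which examples to corrupt, rather than flipping labels at random, can typically do far more damage at the same budget, which is what makes the small-percentage threat notable.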

While AI and ML technologies have made significant progress, they are not immune to attacks that can cause catastrophic failures. The report emphasizes the difficulty of securing AI algorithms, acknowledging that current solutions do not fully resolve the theoretical challenges involved. It warns against claims of foolproof defenses and stresses the importance of awareness among developers and organizations deploying AI technology.

In conclusion, the report provides valuable insights into the types of cyberattacks that can manipulate AI systems. The collaboration among government, academia, and industry in publishing this report demonstrates the collective effort needed to address the vulnerabilities of AI technology. As the use of AI continues to grow, the development of stronger defenses is essential to ensure its trustworthiness and reliability in various domains.
