July 27, 2024
AI Networks Found to be Highly Vulnerable to Targeted Attacks

Artificial intelligence (AI) tools have become increasingly prominent in sectors ranging from autonomous vehicles to medical imaging. However, a recent study has found that AI networks are more susceptible to targeted attacks than previously believed. Specifically, the study focused on adversarial attacks, in which an attacker manipulates the data fed into an AI system in order to deceive it. For instance, by strategically placing a sticker on a stop sign, an attacker can render the sign effectively invisible to an AI system, potentially leading to accidents. Similarly, an attacker can alter X-ray images to mislead an AI system into producing an inaccurate diagnosis.

While AI systems can generally identify stop signs despite such alterations, they contain vulnerabilities that attackers can exploit to make them interpret data however the attacker wants. For example, an AI system trained to identify stop signs can be made to classify a stop sign as a mailbox, a speed limit sign, or even a green light by using slightly different stickers or exploiting other weaknesses. These findings matter because an AI system that cannot withstand such attacks cannot be safely deployed, particularly in applications that affect human safety.
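To make the idea of a targeted attack concrete, the sketch below shows a minimal targeted projected gradient descent (PGD) attack in PyTorch, which nudges an image toward an attacker-chosen label while keeping the change small. This is a generic illustration of targeted attacks, not the method used in the study; the model, target class, and step sizes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def targeted_pgd(model, image, target_class, eps=8/255, alpha=2/255, steps=20):
    """Minimal targeted PGD sketch: perturb `image` (shape [1, 3, H, W],
    pixel values in [0, 1]) so that `model` predicts `target_class`.
    Hyperparameters here are illustrative, not taken from the study."""
    target = torch.tensor([target_class], device=image.device)
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()               # step toward the target class
            x_adv = image + (x_adv - image).clamp(-eps, eps)  # keep the perturbation small
            x_adv = x_adv.clamp(0, 1)                         # stay a valid image
    return x_adv.detach()
```

The perturbation budget (eps) is what keeps the altered input looking nearly identical to the original while still flipping the model's prediction.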

To assess how vulnerable deep neural networks are to adversarial attacks, the researchers developed a piece of software called QuadAttacK, which can be used to probe any deep neural network for these vulnerabilities. QuadAttacK works by feeding clean data to a trained AI system and observing how the system makes decisions about that data. From this, it learns how the system interprets inputs and identifies how the data can be manipulated to fool it, allowing the attacker to make the AI system perceive the manipulated data in whatever way they choose.
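The "ordered top-K" attacks in the paper's title go further than flipping a single label: they force the network's K most confident predictions to be an attacker-chosen list of classes in an attacker-chosen order. As a rough illustration only (the paper solves this with quadratic programming, and its released code is the reference), the hinge-margin surrogate loss below penalizes a network whose logits do not rank the targeted classes in the desired order.

```python
import torch

def ordered_topk_loss(logits, target_order, margin=0.1):
    """Surrogate loss encouraging the classes in `target_order`
    (e.g. [mailbox, speed_limit, green_light]) to occupy the top-K
    logits in exactly that order. Illustrative assumption only;
    this is not the paper's quadratic-programming formulation."""
    z = logits.squeeze(0)  # shape [num_classes]
    loss = z.new_zeros(())
    # Each targeted class should beat the next one by at least `margin`.
    for i in range(len(target_order) - 1):
        loss = loss + torch.relu(margin - (z[target_order[i]] - z[target_order[i + 1]]))
    # The last targeted class should beat every non-targeted class.
    mask = torch.ones_like(z, dtype=torch.bool)
    mask[torch.tensor(target_order)] = False
    loss = loss + torch.relu(margin - (z[target_order[-1]] - z[mask].max()))
    return loss
```

Minimizing a loss like this with respect to the input, under a small perturbation budget, yields an attack in which the network confidently reports the attacker's full ranked list of wrong answers.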

Proof-of-concept testing was conducted using QuadAttacK on four widely used deep neural networks: ResNet-50, DenseNet-121, ViT-B, and DEiT-S. The researchers were surprised to find that all four networks were highly vulnerable to adversarial attacks, and that the attacks could be fine-tuned to make the networks interpret data exactly as the attacker intended.
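For readers who want to run a similar test with their own attack code, all four architectures are available as pretrained ImageNet checkpoints; the sketch below loads them via the timm library. The specific timm model names are best-guess matches, not necessarily the exact weights used in the study.

```python
import timm

# Pretrained ImageNet checkpoints approximating the four networks tested.
# These timm identifiers are assumed matches, not the study's exact weights.
models = {
    "ResNet-50":    timm.create_model("resnet50", pretrained=True),
    "DenseNet-121": timm.create_model("densenet121", pretrained=True),
    "ViT-B":        timm.create_model("vit_base_patch16_224", pretrained=True),
    "DEiT-S":       timm.create_model("deit_small_patch16_224", pretrained=True),
}
for name, model in models.items():
    model.eval()  # attacks are evaluated against frozen, trained networks
```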

Given these findings, the researchers have made QuadAttacK publicly available so that the research community can use it to test neural networks for these vulnerabilities. The next step is to develop strategies for minimizing them, although specific solutions are still being investigated.

The study, titled “QuadAttacK: A Quadratic Programming Approach to Learning Ordered Top-K Adversarial Attacks,” was presented on December 16, 2023, at the Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS 2023) in New Orleans, Louisiana. Thomas Paniagua, a Ph.D. student at North Carolina State University, is the paper’s first author; Ryan Grainger, also a Ph.D. student at NC State, is a co-author.
