YU News

Researchers Immunize 3D AI Systems Against Hackers with Digital Vaccines

Yucheng Xie, an assistant professor in the Katz School's Department of Graduate Computer Science and Engineering, is co-author of a paper that was accepted to the prestigious IEEE 34th International Conference on Computer Communications and Networks.

By Dave DeFusco

Artificial intelligence systems are becoming smarter and more visual, capable of understanding the world in three dimensions. These 3D "point clouds," which map thousands of points in space to form digital models of cars, people or buildings, are the foundation for technologies like self-driving vehicles, architectural design and augmented reality.
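For readers who want to see the data format concretely, here is a minimal Python sketch (not code from the paper) of how a point cloud is typically represented: an N x 3 array of x, y, z coordinates, one row per point. The sphere sampled here is just a stand-in for a scanned object.

```python
import numpy as np

# Illustrative only: a point cloud is an N x 3 array of (x, y, z) coordinates,
# one row per point in space. 1,024 points on a unit sphere stand in for a scan.
rng = np.random.default_rng(0)
directions = rng.normal(size=(1024, 3))
point_cloud = directions / np.linalg.norm(directions, axis=1, keepdims=True)

print(point_cloud.shape)  # (1024, 3)
```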

As these systems grow more powerful, they also become more vulnerable. Hackers have discovered ways to secretly manipulate AI models during training, essentially poisoning them, so they behave normally most of the time but fail in very specific situations. These are called backdoor attacks, and they're among the stealthiest threats in modern AI.

A team of computer scientists, including Dr. Yucheng Xie, an assistant professor in the Katz School's Department of Graduate Computer Science and Engineering, developed a new way to fight back. Their paper, "Vigilante Defender: A Vaccination-based Defense Against Backdoor Attacks on 3D Point Clouds Using Particle Swarm Optimization," has been accepted to the prestigious IEEE 34th International Conference on Computer Communications and Networks (ICCCN).

Their method, inspired by biological vaccination, gives ordinary contributors to a shared AI model the ability to protect it from hidden attacks without needing access to the model's internal code. In a typical AI system, models learn from massive amounts of training data: images, sound or, in this case, 3D point clouds. The more data, the better the learning, but in distributed or collaborative learning systems, where data comes from many outside sources, that openness creates a weak spot.

"If even one contributor uploads poisoned data, the entire model can be compromised," said Professor Xie. "The model behaves perfectly in most situations, but when it encounters the attacker's secret trigger (a certain shape, color or pattern), it misclassifies the object. It's like a sleeper agent inside your AI."

These triggers can be tiny and nearly impossible to detect. For example, a hacker might add a few subtle points to a 3D scan of a stop sign, tricking an autonomous car into reading it as a speed limit sign instead.
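To make the attack concrete, here is a hypothetical Python sketch of how a poisoned training sample could be built. The trigger geometry, class labels and function names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def poison_sample(points, target_label, trigger_center=(0.9, 0.9, 0.9),
                  n_trigger_points=16, rng=np.random.default_rng(1)):
    """Hypothetical backdoor poisoning: append a tiny cluster of points near
    trigger_center and relabel the sample as the attacker's target class."""
    cluster = np.asarray(trigger_center) + 0.02 * rng.normal(size=(n_trigger_points, 3))
    poisoned_points = np.concatenate([points, cluster], axis=0)
    return poisoned_points, target_label  # the label is flipped on purpose

# Example: a scanned "stop sign" (class 0) relabeled as "speed limit" (class 7).
clean_scan = np.random.default_rng(2).normal(size=(1024, 3))
poisoned_scan, poisoned_label = poison_sample(clean_scan, target_label=7)
print(poisoned_scan.shape, poisoned_label)  # (1040, 3) 7
```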

"Because the model still performs well on regular data, the trainer has no reason to suspect something's wrong," said Professor Xie. "That's what makes backdoor attacks so dangerous. They hide in plain sight."

Most existing defenses rely on centralized systems (the server side) to detect and remove malicious data, but Dr. Xie's team took a different approach. They empowered the clients, or individual contributors, to protect themselves. Their strategy, called vigilante vaccination, allows a well-intentioned contributor to inject harmless vaccine triggers into their own training data. These benign triggers teach the AI to ignore the patterns that an attacker might use later.
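A rough sketch of the vaccination idea, assuming the defender has already chosen a candidate trigger pattern to test (how that pattern is found is exactly the search problem described next): the benign trigger is appended to the client's own samples, but the correct label is kept, so the model learns that the pattern carries no meaning.

```python
import numpy as np

def vaccinate_sample(points, true_label, candidate_trigger):
    """Sketch of client-side 'vaccination' (illustrative, not the paper's code):
    append a benign candidate trigger pattern to the client's own point cloud
    while keeping the CORRECT label, so training teaches the model that this
    pattern should not change the predicted class."""
    vaccinated_points = np.concatenate([points, candidate_trigger], axis=0)
    return vaccinated_points, true_label  # unlike poisoning, the label stays honest
```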

The trick is figuring out what kind of trigger to use, especially when the defender has no idea what the attacker's trigger looks like.

To solve this, the researchers turned to Particle Swarm Optimization (PSO), a technique inspired by the way birds or fish move together in flocks or schools. In PSO, a swarm of digital "particles" explores a large search space by trying different trigger configurations to find the ones that are most likely to reveal vulnerabilities in the model.

"Think of it as a team of scouts looking for weak spots," said Professor Xie. "Each scout tests a possible pattern and, over time, the swarm converges on the best solution. Once we identify a likely trigger, we retrain the model using the correct labels. That teaches it not to associate the trigger with a wrong outcome."
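For the technically inclined, the scout analogy maps onto a standard particle swarm optimization loop. The sketch below is a generic, self-contained Python version of that kind of search, not the team's implementation; the fitness function, swarm size and bounds are assumptions chosen for illustration, with a toy scoring rule standing in for the black-box model queries a real defender would make.

```python
import numpy as np

def pso_search(fitness, dim, n_particles=20, n_iters=50,
               w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0)):
    """Generic PSO sketch: each particle is a candidate trigger configuration,
    and fitness() scores how strongly that candidate perturbs the model."""
    rng = np.random.default_rng(3)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n_particles, dim))   # candidate triggers
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmax(pbest_val)]

    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Each "scout" is pulled toward its own best find and the swarm's best.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([fitness(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)]
    return gbest

# Toy stand-in for the black-box model: candidates closer to a hidden pattern
# score higher, so the swarm should drift toward it.
hidden_pattern = np.array([0.9, 0.9, 0.9])
best_trigger = pso_search(lambda p: -np.linalg.norm(p - hidden_pattern), dim=3)
print(np.round(best_trigger, 2))
```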

The team tested their approach using several standard 3D datasets, including ModelNet40 and ShapeNetPart, as well as three popular AI architectures: PointNet, PointNet++ and DGCNN. They also ran their vaccine against three state-of-the-art backdoor attacks known in the research world as PointBA, PCBA and EfficientBA. Across all tests, their vaccination method cut attack success rates dramatically, down to as low as 5.9 percent, while keeping the model's accuracy intact. In other words, the models kept doing their job correctly even after being "immunized."
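For context on that 5.9 percent figure: attack success rate is conventionally the fraction of trigger-bearing inputs that the model classifies as the attacker's target class, measured alongside accuracy on clean data. The small helper below is an assumption about how such a metric is computed, not the paper's evaluation code.

```python
import numpy as np

def attack_success_rate(model_predict, triggered_inputs, target_label):
    """Fraction of trigger-bearing inputs classified as the attacker's target
    class. A strong defense drives this number down while accuracy on clean,
    trigger-free data stays essentially unchanged."""
    preds = np.array([model_predict(x) for x in triggered_inputs])
    return float(np.mean(preds == target_label))
```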

What makes this breakthrough even more important is that it works in black-box situations, where contributors don't have access to the model's inner workings, just its inputs and outputs.

"In many real-world systems, users can only interact with the model through a limited interface," said Professor Xie. "Our approach works even in those conditions. You don't need to know how the model's built to help defend it."

The researchers call their approach client-side defense, meaning that protection starts at the user level rather than the server. It's a shift in philosophy that could make distributed AI systems much safer and more democratic.

"This is about empowerment," said Professor Xie. "Instead of waiting for a central authority to catch an attack, we allow individuals to act as vigilante defenders. Every participant can take steps to strengthen the collective model."

From autonomous vehicles to medical imaging, 3D point cloud models are shaping the future of technology. As AI becomes more deeply embedded in daily life, however, the cost of compromised systems grows exponentially.

"The integrity of AI isn't just a technical issue, it's a public trust issue," said Professor Xie. "People need to know that the systems making critical decisions are secure and reliable."
