
How Researchers Discovered DoS Attack on Machine Learning

by CISOCONNECT Bureau

Machine learning systems can be forced to slow down and fail as a result of a new adversarial attack strategy. Read on to know more…

Machine learning systems can be forced to slow down and suffer critical failures by a new adversarial attack technique. The attack, dubbed DeepSloth, was discovered by a team of researchers from the University of Maryland and Alexandru Ioan Cuza University who were studying the robustness of multi-exit architectures against adversarial attacks.

DeepSloth neutralises optimization strategies that speed up deep neural network inference. The slowdown attack was presented at the International Conference on Learning Representations (ICLR), and it targets the efficiency of multi-exit neural networks.

Multi-Exit Architectures
Deep neural networks, a popular class of machine learning algorithms, can require gigabytes of memory and powerful processors, which puts them out of reach of IoT devices, smartphones, and other hardware with limited resources.

Many of these devices must instead send their data to a cloud server capable of running deep learning models. To address this, scientists have devised a number of strategies for optimising neural networks for compact devices. One such optimization, the multi-exit architecture, lets a neural network stop computing as soon as an intermediate layer is sufficiently confident in its prediction, instead of running every layer for every input.
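To make the idea concrete, here is a minimal sketch of confidence-based early exiting in PyTorch. The MultiExitNet name, the layer sizes, and the 0.9 confidence threshold are illustrative assumptions, not details taken from the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiExitNet(nn.Module):
    """Toy network with a classifier ("exit") after each block."""

    def __init__(self, num_classes: int = 10, threshold: float = 0.9):
        super().__init__()
        self.threshold = threshold  # confidence needed to stop early
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Linear(32, 64), nn.ReLU()),
            nn.Sequential(nn.Linear(64, 64), nn.ReLU()),
            nn.Sequential(nn.Linear(64, 64), nn.ReLU()),
        ])
        self.exits = nn.ModuleList(
            [nn.Linear(64, num_classes) for _ in self.blocks]
        )

    def forward(self, x):
        """Return (logits, exit_index), stopping at the first confident exit."""
        logits = None
        for i, (block, exit_head) in enumerate(zip(self.blocks, self.exits)):
            x = block(x)
            logits = exit_head(x)
            confidence = F.softmax(logits, dim=-1).max().item()
            if confidence >= self.threshold:
                return logits, i  # early exit: the remaining blocks never run
        return logits, len(self.exits) - 1  # no exit was confident enough

net = MultiExitNet()
out, exit_idx = net(torch.randn(1, 32))
print("exited at layer", exit_idx)
```

The attacker's goal, described next, is to craft inputs for which the confidence test in that loop never passes.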

Dumitras and colleagues devised an adversarial slowdown attack aimed at reducing the efficiency of multi-exit neural networks. DeepSloth modifies the input data in a subtle way so that no early exit reaches the confidence needed to stop, forcing the network to run its full computation for every input. This has the potential to wipe out the benefits of multi-exit architectures.
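The paper's exact loss function differs, but the flavour of the attack can be sketched as a projected-gradient loop that maximises the entropy (i.e., minimises the confidence) of every exit, so that the early-exit test above never fires. The all_exit_logits helper, the epsilon bound, and the step sizes are assumptions for illustration; the sketch builds on the MultiExitNet toy model above:

```python
import torch
import torch.nn.functional as F

def all_exit_logits(net, x):
    """Run every block with no early stopping; collect each exit's logits."""
    logits = []
    for block, exit_head in zip(net.blocks, net.exits):
        x = block(x)
        logits.append(exit_head(x))
    return logits

def deepsloth_perturb(net, x, epsilon=0.03, alpha=0.005, steps=40):
    """Search for a small perturbation that keeps every exit unconfident."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = 0.0
        for exit_logits in all_exit_logits(net, x + delta):
            log_p = F.log_softmax(exit_logits, dim=-1)
            loss = loss - (log_p.exp() * log_p).sum()  # entropy of this exit
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # raise entropy at every exit
            delta.clamp_(-epsilon, epsilon)     # keep the perturbation subtle
            delta.grad.zero_()
    return (x + delta).detach()

# On a trained model, the perturbed input tends to fall through to the last exit.
x_adv = deepsloth_perturb(net, torch.randn(1, 32))
print(net(x_adv)[1])
```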

These architectures can cut a DNN model’s energy needs during inference roughly in half. The researchers demonstrated that a perturbed input can bypass every early exit, wiping out all of those savings.

DeepSloth Testing
The researchers put DeepSloth to the test on a variety of multi-exit architectures. With complete knowledge of the targeted architecture, an attacker can lower early-exit efficacy by 90% to 100%. Even without exact information about the target model, DeepSloth can still reduce efficacy by 5% to 45%. In effect, this is a Denial-of-Service (DoS) attack on neural networks.
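As a rough illustration of what these efficacy figures measure, one can compare the fraction of inputs that stop at any exit before the last one, with and without the perturbation. This reuses the toy MultiExitNet and deepsloth_perturb sketches above on synthetic data, so the printed numbers only demonstrate the mechanics, not the paper's results:

```python
import torch

def early_exit_rate(net, inputs):
    """Fraction of inputs that stop before the final exit."""
    early = sum(
        1 for x in inputs if net(x.unsqueeze(0))[1] < len(net.exits) - 1
    )
    return early / len(inputs)

clean = torch.randn(100, 32)
adv = torch.cat([deepsloth_perturb(net, x.unsqueeze(0)) for x in clean])
print("clean early-exit rate:   ", early_exit_rate(net, clean))
print("attacked early-exit rate:", early_exit_rate(net, adv))
```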

When multi-exit models are served directly from a server, targeted DeepSloth attacks can tie up the server’s resources and prevent it from operating at full capacity, according to the researchers. And where a multi-exit network is partitioned between the cloud and an edge device, the attack can force the edge device to send all of its data to the server.

As a result, the edge device may miss critical deadlines. In a health monitoring program that uses AI to swiftly identify accidents and call for help when necessary, such a delay could have catastrophic outcomes.

Concluding Words
According to the researchers, this may be the first attack to target multi-exit neural networks in this manner. Notably, adversarial training, a common method for protecting machine learning models against adversarial examples, is ineffective against it. And although the technique is not immediately dangerous, more damaging slowdown attacks could emerge in the future.

“Adversarial training, a standard countermeasure for adversarial perturbations, is not effective against DeepSloth. Our analysis suggests that slowdown attacks are a realistic, yet under-appreciated, threat against adaptive models,” the team of researchers said.
