Future neural networks may be able to hide malware from antivirus software, The Next Web reports.
This property of neural networks was discovered by researchers from the University of California, San Diego and the University of Illinois. The experts suggested that trained neural networks can conceal malicious programs within their parameters in a way that is invisible to antivirus software. To test the theory, the scientists built EvilModel, a proof-of-concept "evil" neural network whose purpose is to covertly infect machines.
The authors of the study taught the model to embed malware in the neural network's parameters so that it remains invisible to malware scanners. To avoid arousing suspicion, EvilModel must still perform its main task as well as a "clean" model would, while covertly carrying the attackers' payload.
The scientists explained that most models store their parameters as 32-bit numbers. According to the experiment, an attacker can store up to three bytes of malware in each parameter without significantly affecting its value. To infect a neural network, the hacker splits the malware into three-byte chunks and embeds them into the model's parameters.
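The idea of stashing three bytes inside a 32-bit parameter can be illustrated with a small sketch. The paper's exact encoding is not reproduced here; this is a minimal illustrative assumption in which the three low-order bytes of a little-endian float32 are overwritten with payload bytes, while the high-order byte (the sign and most of the exponent) is preserved so the value's sign and rough magnitude survive. The function names are hypothetical.

```python
import struct

def embed_bytes(param: float, payload: bytes) -> float:
    """Hide up to three payload bytes in the low-order bytes of a float32.

    Illustrative sketch: the high-order byte (sign + most exponent bits)
    is kept, so the parameter's sign and approximate magnitude survive.
    """
    assert len(payload) <= 3
    packed = bytearray(struct.pack("<f", param))   # 4 bytes, little-endian
    packed[:3] = payload.ljust(3, b"\x00")         # overwrite the 3 low-order bytes
    return struct.unpack("<f", bytes(packed))[0]

def extract_bytes(param: float) -> bytes:
    """Recover the three embedded bytes from an infected parameter."""
    return struct.pack("<f", param)[:3]

# Example: hide three bytes in a weight of 0.5.
infected = embed_bytes(0.5, b"abc")
recovered = extract_bytes(infected)
```

The perturbation is not always tiny in relative terms, but neural networks are typically tolerant of small weight changes, which is why, per the article, the model's accuracy can be preserved.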
As a result, the authors of the experiment managed to deliver malware to the target computer without triggering its security systems. With the help of batch normalization and retraining, they increased the embedded payload to 36.9 megabytes while keeping the model's accuracy above 90 percent.
Earlier, Oxford University experts organized a discussion between humans and artificial intelligence. During the dialogue, the Megatron-Turing NLG language model warned humanity about the dangers of artificial intelligence.