A trio of researchers has found that it is possible to hide malware inside of AI neural networks, Tech Xplore reports. Zhi Wang, Chaoge Liu, and Xiang Cui have posted a paper on the arXiv preprint server describing their experiments with embedding code into neural network models.

As computer technology grows ever more complex, so do attempts by criminals to break into machines running new technology for their own purposes, such as destroying data or encrypting it and demanding payment for its return. In this study, the team demonstrated a new way to infect computer systems running certain kinds of artificial intelligence applications. AI systems do their work by processing data in ways loosely modeled on the human brain, and such networks, the trio found, are vulnerable to infiltration by foreign code.

The researchers found that standard antivirus software failed to detect the embedded malware, and that the AI system's performance was nearly unchanged after infection, so a covert infection could go undetected. They note that simply embedding malware in the neural network causes no harm by itself: whoever slipped the code into the system would still have to find a way to extract and execute it. They also note that now that it is known hackers can inject code into AI neural networks, antivirus software can be updated to look for it.
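To make the idea concrete, the sketch below shows, in broad strokes, why a payload hidden inside model weights is hard to notice: overwriting the least-significant byte of each 32-bit floating-point parameter stores arbitrary bytes while barely perturbing the values the network actually uses. This is an illustrative approximation of the general technique, not the paper's exact procedure; the function names and the benign demo payload are hypothetical, and it assumes a little-endian host.

```python
# Illustrative sketch only: hide arbitrary bytes in the least-significant byte
# of float32 "weights" and recover them later. A benign string stands in for
# any payload; this is not the paper's exact embedding scheme.
import numpy as np

def embed_bytes(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Store one payload byte in the low-order byte of each float32 weight."""
    flat = weights.astype(np.float32).ravel().copy()
    if len(payload) > flat.size:
        raise ValueError("payload too large for this weight tensor")
    raw = flat.view(np.uint8).reshape(-1, 4)                 # 4 bytes per float32
    raw[: len(payload), 0] = np.frombuffer(payload, dtype=np.uint8)  # overwrite LSB
    return flat.reshape(weights.shape)

def extract_bytes(weights: np.ndarray, length: int) -> bytes:
    """Recover the hidden bytes from the modified weights."""
    raw = weights.astype(np.float32).ravel().view(np.uint8).reshape(-1, 4)
    return raw[:length, 0].tobytes()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((256, 256)).astype(np.float32)
    secret = b"not malware, just a demo payload"
    w_mod = embed_bytes(w, secret)
    # The per-weight change is tiny, which is why model accuracy is largely
    # preserved and why the altered file does not stand out to a scanner.
    print("max abs change:", np.max(np.abs(w_mod - w)))
    print("recovered:", extract_bytes(w_mod, len(secret)))
```

Because the payload sits inside what looks like ordinary numeric weight data, a signature-based scanner has nothing obvious to match against, which is consistent with the researchers' observation that antivirus tools did not flag the infected model.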
https://techxplore.com/news/2021-07-malware-ai-neural-networks.html