Monday, May 20, 2024

Researchers find new way to infect AI networks

Researchers at Cornell University have found that it is possible to hide malware inside AI neural networks, Tech Xplore reports. Zhi Wang, Chaoge Liu, and Xiang Cui describe their experiments with injecting code into neural networks in a paper posted on the arXiv preprint server.

As computer technology grows ever more complex, so do attempts by criminals to break into machines running it for their own purposes, such as destroying data or encrypting it and demanding payment for its return. In this study, the team found a new way to infect computer systems running artificial-intelligence applications. AI systems do their work by processing data in ways similar to the human brain, but such networks, the trio found, are vulnerable to infiltration by foreign code.

The researchers found that standard antivirus software failed to detect the embedded malware, and that the AI system's performance was almost unchanged after infection. The infection could therefore go unnoticed if executed covertly. The researchers note that simply adding malware to a neural network would not by itself cause harm – whoever slipped the code into the system would still have to find a way to execute it. They also note that, now that it is known hackers can inject code into neural networks, antivirus software can be updated to look for it.
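To illustrate the general idea, the following is a minimal sketch of how bytes can be hidden in neural-network weights without noticeably changing them. This is a hypothetical illustration written for this article, not the method from the paper: it overwrites the least-significant mantissa byte of each 32-bit float weight with one payload byte, which perturbs each weight only slightly.

```python
import struct

def embed_bytes(weights, payload):
    """Hide payload bytes in the least-significant byte of each
    float32 weight (illustrative only, not the paper's technique)."""
    stego = []
    for w, b in zip(weights, payload):
        packed = bytearray(struct.pack("<f", w))  # little-endian float32
        packed[0] = b  # overwrite the low-order mantissa byte
        stego.append(struct.unpack("<f", bytes(packed))[0])
    return stego + weights[len(payload):]  # untouched weights pass through

def extract_bytes(weights, n):
    """Recover the first n hidden bytes from the doctored weights."""
    return bytes(struct.pack("<f", w)[0] for w in weights[:n])

weights = [0.1234, -0.5678, 0.9012, 0.3456]
payload = b"hi"
stego = embed_bytes(weights, payload)
assert extract_bytes(stego, len(payload)) == payload
```

Because only the lowest-order mantissa bits change, each doctored weight differs from the original by a tiny amount, which is why a model can keep near-identical accuracy while carrying a hidden payload, and why signature-based antivirus scans of the model file find nothing recognizable.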

BIG Media
Our focus is on facts, accurate data, and logical interpretation. Our only agenda is the truth.

BIG Wrap

Death toll rises to six in New Caledonia riots

(Al Jazeera Media Network) Another person has been killed in France’s Pacific Islands territory of New Caledonia as security personnel try to restore order, taking...

‘Hell on Earth’ as violence escalates in Sudan’s el-Fasher

(Al Jazeera Media Network) The United Nations human rights chief has expressed horror over the escalating violence in Sudan’s North Darfur region as one...