Random noise improves performance of AI systems
Noise, i.e. random fluctuations superimposed on a signal, generally makes information harder to interpret. Anyone who has tried to follow the news on a radio station with a fuzzy signal knows this from experience. Neural networks, however, actually seem to benefit from a small amount of noise being mixed into the data they process.
FAU researchers demonstrate that this makes neural networks more flexible and more accurate.
This is the conclusion of a recent study conducted by physicists, AI researchers and neuroscientists at Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) and Uniklinikum Erlangen. The random fluctuations prevent the AI algorithms from becoming prematurely fixated on a wrong solution. Noise is probably also essential for the brain itself to work correctly. The results have now been published in the journal “Frontiers in Complex Systems”.
In our brains, many billions of nerve cells are linked via trillions of contact points that they use to exchange information. Learning strengthens some of these contacts and eliminates others. Over time, this gives rise to complex networks that can interpret information such as language or images very accurately. If we see a piece of furniture, we generally recognize straightaway whether it is a chair or a bed.
Algorithms used in artificial intelligence are powerful partly because they mimic the way our brains work. However, the models on which they are based are often only loosely inspired by the brain. Artificial neural networks, for instance, generally consist of several layers connected in sequence, without any feedback between them. Each layer modifies the incoming information and then passes it on to the next. “The brain works differently. There, information also continually flows back to previous layers,” explains Dr. Patrick Krauss. “This feedback allows the networks to take into account not only the current input but also, to a certain extent, the information they received beforehand. And that makes them particularly powerful for certain tasks involving temporal relationships.”
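The architectural difference Krauss describes can be sketched in a few lines of Python. The sketch below is purely illustrative (toy dimensions, random weights), not code from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Feedforward: layers connected in sequence, no feedback.
W1 = rng.standard_normal((16, 8))
W2 = rng.standard_normal((4, 16))

def feedforward(x):
    return np.tanh(W2 @ np.tanh(W1 @ x))

# Recurrent: the hidden state h is fed back at every step, so earlier
# inputs influence how later inputs in the sequence are processed.
Wx = rng.standard_normal((16, 8))
Wh = rng.standard_normal((16, 16))

def recurrent(sequence):
    h = np.zeros(16)
    for x in sequence:
        h = np.tanh(Wx @ x + Wh @ h)  # feedback through h
    return h
```

The feedback loop is what gives the recurrent network its short-term memory of the inputs it has already seen.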
Prior information leads to better results
Krauss is a physicist, neuroscientist and AI researcher at FAU and head of the Cognitive Computational Neuroscience Group at the Chair of Computer Science 5 (Pattern Recognition). “In specialist jargon, we refer to this property as recurrence,” he explains. “It can also be recreated in computers. Our research focused on recurrent neural networks like these.” The beneficial effect of noise on such networks can be illustrated with a highly simplified example: when you enter a postal address in an online shop, the website often suggests the matching town or city based on the letters typed so far. In other words, it uses prior information to make an accurate prediction. The problem is that a single typo (e.g. N-u-r-m instead of N-u-r-e) can stop the system from suggesting the correct name (Nuremberg). It gets stuck in a dead end from which the correct solution is no longer reachable.
“In this case, it helps to add noise, in other words random signals, to the information – in our example, the letters that have already been typed,” emphasizes Dr. Achim Schilling, who works in the same group. He conducted the study together with Krauss and Dr. Claus Metzner and has long focused on both the brain and AI. “The noise ensures that the neural network can escape the dead end it has maneuvered itself into.” This lowers the probability of the system becoming prematurely fixated on a false solution. Instead, it can keep possible continuations open for several candidates without committing to any one of them.
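In code, this kind of noise injection simply means superimposing small random fluctuations on the input before each processing step. A minimal sketch, assuming a one-hot letter encoding and an illustrative noise amplitude (neither taken from the study):

```python
import numpy as np

rng = np.random.default_rng(1)

def add_noise(x, sigma=0.1):
    """Superimpose Gaussian fluctuations of amplitude sigma on an input."""
    return x + sigma * rng.standard_normal(x.shape)

# e.g. a one-hot encoding of the (mistyped) letter "m"
x = np.zeros(26)
x[ord("m") - ord("a")] = 1.0
x_noisy = add_noise(x)  # the network now sees a slightly blurred letter
```

Blurring the evidence in this way keeps alternative continuations alive instead of letting a single typo lock the prediction in.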
Optimal amount of noise improves robustness and efficiency
In their work, the researchers were able to show that noise improves the performance of a number of different recurrent neural networks. They also demonstrated the effect for Boltzmann machines and Hopfield networks, for whose development the cognitive scientist Geoffrey Hinton and the physicist John Hopfield won the 2024 Nobel Prize in Physics. However, noise only helps if it is added in small doses and does not drown out the information entirely. “There is an optimal amount of noise for every network,” explains Krauss. “Adding it makes the procedures not only considerably more robust but also more efficient.” These findings could lead to the development of more powerful AI systems.
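How such an optimum can arise is easy to demonstrate with a toy Hopfield network: store a few patterns with a Hebbian rule, recall from a corrupted cue, and sweep the amplitude of Gaussian noise added to each unit’s input. The sizes, noise levels and annealing schedule below are assumptions for illustration, not the setup used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 10                         # units and stored patterns (toy sizes)
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N        # Hebbian weight matrix
np.fill_diagonal(W, 0.0)

def recall(cue, sigma, noisy_steps=30, clean_steps=5):
    """Asynchronous updates with Gaussian noise, then a noise-free finish."""
    s = cue.copy()
    for t in range(noisy_steps + clean_steps):
        amp = sigma if t < noisy_steps else 0.0
        for i in rng.permutation(N):
            field = W[i] @ s + amp * rng.standard_normal()
            s[i] = 1 if field >= 0 else -1
    return s

target = patterns[0]
cue = target.copy()
flipped = rng.choice(N, size=N // 4, replace=False)
cue[flipped] *= -1                     # corrupt 25% of the cue

for sigma in [0.0, 0.05, 0.1, 0.2, 0.5, 1.0]:
    overlap = np.mean([abs(recall(cue, sigma) @ target) / N
                       for _ in range(10)])
    print(f"sigma={sigma:.2f}  overlap with stored pattern: {overlap:.3f}")
```

Where exactly the optimum lies depends on the pattern load and the corruption level, but the qualitative trade-off is the one the researchers describe: a little noise lets updates escape shallow, spurious minima, while too much drowns out the stored information.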
The study also shows how much AI research benefits from an interdisciplinary approach. “The basic idea of the Cognitive Computational Neuroscience Group is to bring together scientists from physics, brain research and AI development,” Krauss explains. “By applying methods and concepts from physics to the neurosciences, we gain a better understanding of how the brain works on the one hand, and on the other we learn how to translate these findings into better computer algorithms.”
Original publication: https://doi.org/10.3389/fcpxs.2024.1479417
Further information:
Dr. Patrick Krauss
Chair of Computer Science 5 (Pattern Recognition)
Phone +49 9131 85 27775
patrick.krauss@fau.de
Dr. Achim Schilling
Chair of Computer Science 5 (Pattern Recognition)
Phone +49 9131 85 27775
achim.schilling@fau.de
Dr. Claus Metzner
Chair of Computer Science 5 (Pattern Recognition)
Phone +49 9131 85 27775
claus.metzner@fau.de