
Learning in your sleep: Why an AI should switch off sometimes


There are various theories about how people store and retrieve memories. One of them is Complementary Learning Systems (CLS) theory, which holds that the interaction between the hippocampus and the neocortex, a fast-learning and a slow-learning brain region respectively, plays a central role in converting new experiences into memories, a process that takes place primarily during sleep.


Developers of neural networks have long drawn on such theories from brain research. In 2021, a team from Singapore introduced DualNet, an AI model that imitates human learning by combining a slow and a fast learning process. In a recent study that has not yet been reviewed by independent experts, researchers at the University of Catania in Italy go a step further: their algorithm alternates between sleep and wake phases based on CLS theory.

The team wanted to find out whether AI models become more reliable if they are not constantly bombarded with new information but instead get the chance to let it sink in every now and then. Machine learning does in fact suffer from a phenomenon called "catastrophic forgetting," in which a model abruptly loses what it previously learned. One possible explanation is that during sequential learning, new representations overwrite the old ones and push them out of memory.
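To make catastrophic forgetting concrete, here is a minimal, purely illustrative PyTorch sketch (not from the study): a small network is trained on a synthetic task A, then on a conflicting task B without any replay, and its accuracy on task A is measured before and after.

```python
# Minimal sketch of catastrophic forgetting on two synthetic tasks.
# Tasks, network, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(feature):
    # Label depends on the sign of one input feature.
    x = torch.randn(512, 20)
    y = (x[:, feature] > 0).long()
    return x, y

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def train(x, y, steps=200):
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def accuracy(x, y):
    with torch.no_grad():
        return (model(x).argmax(1) == y).float().mean().item()

xa, ya = make_task(0)   # task A: label depends on feature 0
xb, yb = make_task(1)   # task B: label depends on feature 1

train(xa, ya)
print("Task A after learning A:", accuracy(xa, ya))  # high
train(xb, yb)                                        # sequential learning, no replay
print("Task A after learning B:", accuracy(xa, ya))  # typically drops sharply
```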

To test whether dividing training into sleep and wake phases makes algorithms more robust in practice, the researchers from Catania developed a training method called "Wake-Sleep Consolidated Learning" (WSCL) and applied it to an image recognition model. "We introduce a sleep phase that mimics the human brain states in which synaptic connection, memory consolidation, and dreaming occur," they write.


During the wake phase, the model is fed training data as usual, in this case new images of animals; these fresh experiences are, so to speak, stored in short-term memory. The wake phase is followed by a sleep phase that, analogous to human sleep, is divided into two alternating stages. In non-REM sleep, the neural network replays the memories collected during the wake phase and revisits past experiences, so that older training data is consolidated in long-term memory. In REM sleep, dreaming simulates new experiences and prepares the network for future events.
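This description maps naturally onto a training loop that alternates the three phases. The following is a hedged sketch, assuming a short-term buffer for the current wake phase, a long-term buffer for consolidated data, and a pluggable "dream" generator; the actual WSCL losses and buffer management in the paper are almost certainly more elaborate.

```python
# Hedged sketch of a wake/sleep training loop in the spirit of WSCL.
# Buffer handling, replay size, and the dream generator are assumptions.
import random

def wake_sleep_train(model, task_loaders, opt, loss_fn, dream_fn=None):
    short_term, long_term = [], []   # recent vs. consolidated experiences

    for loader in task_loaders:                      # one "day" per task
        # --- wake phase: learn new data, keep it in short-term memory ---
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
            short_term.append((x, y))

        # --- non-REM sleep: replay recent and older experiences together ---
        memory = long_term + short_term
        for x, y in random.sample(memory, k=min(32, len(memory))):
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

        # --- REM sleep: train on "dreamed" inputs that mix known concepts ---
        if dream_fn is not None:
            for x, y in dream_fn(memory):
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()

        # consolidate: move the day's experiences into long-term memory
        long_term.extend(short_term)
        short_term = []
    return model
```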

This dream phase, in which the AI processes abstract images combining different animals, is important, researcher Concetto Spampinato, who was involved in the study, told New Scientist. It helps "to bring together previous paths of digital neurons and thus create space for other concepts in the future." This makes it easier for the model to learn new concepts, a kind of brain training for the AI.
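One plausible way to generate such "dreams" is to blend pairs of stored images, in the spirit of mixup-style augmentation. Whether the Catania group does exactly this is not clear from the article, so the generator below is purely illustrative; it plugs into the dream_fn slot of the sketch above.

```python
# Illustrative dream generator: blend pairs of remembered batches.
# This is an assumption, not the paper's actual dreaming mechanism.
import random
import torch

def dream_fn(memory, n_dreams=16, alpha=0.5):
    dreams = []
    for _ in range(n_dreams):
        (x1, y1), (x2, y2) = random.sample(memory, 2)  # assumes equal batch shapes
        lam = torch.distributions.Beta(alpha, alpha).sample()
        x = lam * x1 + (1 - lam) * x2        # abstract blend of two "animals"
        dreams.append((x, y1 if lam > 0.5 else y2))
    return dreams
```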

But does dividing training into wake, sleep, and dream phases actually pay off? Yes, say the researchers. They compared their WSCL-trained algorithm with three common image recognition models: its recognition rate was between two and twelve percent higher. In addition, the so-called "forward transfer" was higher, meaning the model applied more of its old knowledge when learning new tasks. This indicates that the plasticity of neural networks can be improved through targeted sleep and wake phases.

(jl)
