Summary: Today’s artificial intelligence can read, speak and analyze data, but it still has critical limitations. NeuroAI researchers have designed a new AI model inspired by the efficiency of the human brain.
This model allows AI neurons to receive feedback and adapt in real time, improving learning and memory processes. The innovation could lead to a new generation of more efficient and accessible AI, bringing AI and neuroscience closer together.
Key facts:
- Inspired by the brain: The new AI model is based on how the human brain efficiently processes and organizes data.
- Real-time adjustment: AI neurons can receive feedback and adapt on the fly, improving efficiency.
- Potential impact: This discovery could pioneer a new generation of AI that learns like humans, advancing both AI and neuroscience.
Source: CSHL
It reads. It speaks. It collects mountains of data and recommends business decisions. Today’s artificial intelligence may seem more human than ever. However, AI still has some critical shortcomings.
“As impressive as ChatGPT and all these current AI technologies are, in terms of interacting with the physical world, they are still very limited. Even in the things they do, like solving math problems and writing essays, they take billions and billions of training examples before they get good at them,” explains Kyle Daruwalla, a NeuroAI researcher at Cold Spring Harbor Laboratory (CSHL).
Daruwalla has been looking for new, unconventional ways to design AI that can overcome such computational hurdles. And he may have just found one.
The key was data movement. Most of the energy consumed by modern computing comes from bouncing data around. In artificial neural networks, which are made up of billions of connections, that data can have a very long way to go.
So to find a solution, Daruwalla looked for inspiration in one of the most powerful and energy-efficient computing machines in existence – the human brain.
Daruwalla created a new way for AI algorithms to move and process data much more efficiently, based on how our brains take in new information. The design allows individual AI “neurons” to receive feedback and adapt on the fly, rather than waiting for an entire circuit to update simultaneously. This way, the data doesn’t have to travel as far and is processed in real time.
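To make that design principle concrete, here is a minimal sketch of layer-local, on-the-fly updates; the layer sizes, learning rate, and simple Hebbian-style rule are illustrative assumptions, not the actual algorithm from the study.

```python
# Minimal sketch (not the study's actual algorithm): layers adapt locally,
# on the fly, as each sample flows forward, instead of waiting for an
# error signal from a full end-to-end backward pass.
import numpy as np

rng = np.random.default_rng(0)
sizes = [16, 32, 8]                       # hypothetical layer widths
W = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

def local_update_step(x, lr=1e-3):
    """Each layer adjusts immediately from its own pre/post activity,
    so no signal has to travel back through the whole circuit."""
    for i, w in enumerate(W):
        pre = x
        post = np.tanh(pre @ w)           # forward through this layer only
        W[i] += lr * np.outer(pre, post)  # local, Hebbian-style adjustment
        x = post                          # pass activity forward in real time
    return x

out = local_update_step(rng.normal(size=16))
print(out.shape)                          # (8,)
```

Because each layer learns from activity it already has on hand, the data does not need to make a round trip across the whole network before any adaptation can happen.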
“In our brains, our connections are changing and adapting all the time,” says Daruwalla. “It’s not like you stop everything, adapt and then start being you again.”
The new machine learning model provides evidence for an as-yet-unproven theory linking working memory to learning and academic performance. Working memory is the cognitive system that enables us to stay on task by recalling stored knowledge and experiences.
“There have been theories in neuroscience about how working memory circuits can help facilitate learning. But there is nothing as concrete as our rule that actually ties these two together.
“And that was one of the nice things we stumbled upon here. The theory led to a rule where tuning each synapse individually required this working memory to sit next to it,” says Daruwalla.
Daruwalla’s design could help create a new generation of AI that learns like us. Not only would this make AI more efficient and accessible, but it would also be a full-circle moment for NeuroAI. Neuroscience has been feeding AI valuable data since long before ChatGPT uttered its first digital syllable. Soon, it looks like AI may return the favor.
About this artificial intelligence research news
Author: Sara Giarnieri
Source: CSHL
Contact: Sara Giarnieri – CSHL
Image: Credited to Neuroscience News
Original research: Open access.
“Information bottleneck-based Hebbian learning rule naturally ties working memory and synaptic updates” by Kyle Daruwalla et al. Frontiers in Computational Neuroscience
ABSTRACT
Information bottleneck-based Hebbian learning rule naturally ties working memory and synaptic updates
Deep feedforward neural networks are effective models for a wide range of problems, but training and deploying such networks incurs a significant energy cost. Spiking neural networks (SNNs), which are modeled after biologically realistic neurons, offer a potential solution when properly deployed on neuromorphic computing hardware.
However, many applications train SNNs offline, and running network training directly on neuromorphic hardware is an ongoing research problem. The primary hurdle is that backpropagation, which makes it possible to train such deep networks, is biologically implausible.
Neuroscientists are unsure how the brain could propagate a precise error signal backward through a network of neurons. Recent progress addresses part of this issue, e.g. the weight transport problem, but a complete solution remains elusive.
In contrast, new learning rules based on the information bottleneck (IB) train each layer of a network independently, circumventing the need to propagate errors across layers. Instead, propagation is implicit through the feedforward connectivity of the layers.
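For orientation, the standard information bottleneck objective (the general formulation, not necessarily the paper's exact per-layer objective) trades compression of the input against preservation of task-relevant information:

```latex
% Standard information bottleneck objective (general form; notation not taken from the paper).
% Z is a layer's representation of the input X, Y is the target,
% I(.;.) denotes mutual information, and beta sets the compression/performance trade-off.
\min_{p(z \mid x)} \; I(X; Z) - \beta \, I(Z; Y)
```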
These rules take the form of a three-factor Hebbian update: a global error signal modulates local synaptic updates within each layer. Unfortunately, the global signal for a given layer requires processing multiple samples simultaneously, whereas the brain only sees a single sample at a time.
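Written generically (the symbols here are illustrative, not the paper's notation), a three-factor update scales the usual Hebbian pre/post product by a shared modulatory signal:

```latex
% Generic three-factor Hebbian update (illustrative notation):
% x_i = presynaptic activity, y_j = postsynaptic activity,
% M = global modulatory (error) signal shared within the layer, \eta = learning rate.
\Delta w_{ij} = \eta \, M \, x_i \, y_j
```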
We propose a new three-factor update rule in which the global signal captures information across samples via an auxiliary memory network. The auxiliary network can be trained a priori, independently of the dataset used with the primary network.
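A rough sketch of that idea follows; the layer sizes, nonlinearity, and the random stand-in for the pre-trained auxiliary memory network are assumptions for illustration, not the published rule.

```python
# Illustrative sketch only (not the published rule): a per-sample global factor,
# produced by an auxiliary "memory" network, modulates a local Hebbian update
# in one layer of the primary network. All names and shapes are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
W_primary = rng.normal(0, 0.1, (16, 8))   # one layer of the primary network
w_memory = rng.normal(0, 0.1, 24)         # stands in for an auxiliary memory net
                                          # trained a priori (random here)

def global_signal(pre, post):
    """Map a single sample's pre/post activity to a scalar modulatory signal,
    so no batch of samples is needed to compute the third factor."""
    z = np.concatenate([pre, post])       # 16 + 8 = 24 features
    return np.tanh(z @ w_memory)

def three_factor_update(x, lr=1e-3):
    pre = x
    post = np.tanh(pre @ W_primary)
    m = global_signal(pre, post)          # factor 3: global modulation
    return lr * m * np.outer(pre, post)   # factors 1 and 2: pre/post activity

dW = three_factor_update(rng.normal(size=16))
print(dW.shape)                           # (16, 8)
```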
We demonstrate performance comparable to baselines on image classification tasks. Interestingly, unlike schemes such as backpropagation, where there is no link between learning and memory, our rule presents a direct connection between working memory and synaptic updates. To our knowledge, this is the first rule to make this link explicit.
We explore the implications of this connection in initial experiments examining the effect of memory capacity on learning performance. Moving forward, this work suggests an alternate view of learning, where each layer balances memory-informed compression against task performance.
This view naturally includes several key aspects of neural computation, including memory, efficiency, and locality.