By Ashwini Sakharkar 22 Jun, 2024


The ability to read, converse, process vast amounts of data, and provide business recommendations makes modern artificial intelligence seem more human-like than ever. However, AI still has several significant limitations.

Kyle Daruwalla, a NeuroAI Scholar at Cold Spring Harbor Laboratory (CSHL), points out that despite the impressive capabilities of current AI technologies like ChatGPT, they remain limited in how they can interact with the physical world. Even for tasks such as solving math problems and writing essays, these systems need billions of training examples before they perform well.

Daruwalla is exploring unconventional methods to develop AI that can overcome these computational challenges, and he may have just identified a promising approach.

The key was moving data. Today, much of the energy consumed by modern computing goes into shuttling data around. Artificial neural networks consist of billions of connections, and data often has to travel long distances between them. So Daruwalla looked for inspiration in the human brain, which is both computationally powerful and remarkably energy-efficient.

Daruwalla devised a new method for AI algorithms to efficiently move and process data, inspired by how our brains absorb new information. This new design enables individual AI “neurons” to adapt and receive feedback in real time, rather than having to wait for an entire circuit to update simultaneously. As a result, data doesn’t need to travel as far and can be processed instantly.
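To make the contrast concrete, here is a minimal sketch (an illustration only, not the paper's actual information-bottleneck rule): in the conventional scheme, the error signal is computed only after the input has passed through every layer and must travel back through the whole network, whereas in a layer-local scheme each layer adapts from feedback it generates and consumes itself. The network sizes, the simple Hebbian-style rule, and the function names below are assumptions made for illustration.

```python
# Minimal NumPy sketch (illustrative assumption, not the published algorithm)
# contrasting end-to-end feedback with layer-local updates.
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: x -> h -> y_hat
W1 = rng.normal(scale=0.1, size=(16, 8))   # input (16) -> hidden (8)
W2 = rng.normal(scale=0.1, size=(8, 4))    # hidden (8) -> output (4)
lr = 0.01

def forward(x):
    h = np.tanh(x @ W1)          # hidden activity
    y_hat = h @ W2               # output
    return h, y_hat

# (A) Conventional scheme: feedback exists only after the input has passed
# through every layer, then the error propagates back to update all weights.
def global_update(x, y):
    global W1, W2
    h, y_hat = forward(x)
    err = y_hat - y                          # feedback available only at the end
    dW2 = h.T @ err
    dW1 = x.T @ ((err @ W2.T) * (1 - h**2))  # error must travel back to layer 1
    W2 -= lr * dW2
    W1 -= lr * dW1

# (B) Layer-local scheme in the spirit of the new design: each layer adapts
# from feedback generated at that layer, so no signal has to traverse the
# whole network before learning can happen.
def local_update(x, y):
    global W1, W2
    h = np.tanh(x @ W1)
    # Layer 1: simple Hebbian-style update from its own input and output.
    W1 += lr * (x.T @ h - W1 * (h**2).sum())
    y_hat = h @ W2
    # Layer 2 uses only locally available quantities (its input h and the target).
    W2 -= lr * (h.T @ (y_hat - y))

x = rng.normal(size=(1, 16))
y = rng.normal(size=(1, 4))
global_update(x, y)
local_update(x, y)
```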

A schematic comparing typical machine-learning models (A) with Daruwalla’s new design (B). Row A shows input or data having to travel all the way through every layer of the neural network before the AI model receives feedback, which takes more time and energy. In contrast, row B shows the new design, which allows feedback to be generated and incorporated at each network layer. Credit: CSHL

“In our brains, our connections are changing and adjusting all the time,” Daruwalla says. “It’s not like you pause everything, adjust, and then resume being you.”

The new machine-learning model offers support for a previously unverified hypothesis linking working memory to academic performance and learning. Working memory is the mental system that allows us to focus on tasks while retrieving stored information and past experiences.

“There have been theories in neuroscience of how working memory circuits could help facilitate learning. But there isn’t something as concrete as our rule that actually ties these two together. And so that was one of the nice things we stumbled into here. The theory led to a rule where adjusting each synapse individually necessitated this working memory sitting alongside it,” Daruwalla says.
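As a loose illustration of that idea, the following toy sketch (again an assumption made for illustration, not the published learning rule) keeps a small, decaying “working memory” trace alongside each synapse and lets each weight learn from its own locally stored trace rather than from a global error signal.

```python
# Toy illustration (not the paper's rule): each synapse keeps a short-lived
# memory of recent pre/post activity and updates itself from that trace.
import numpy as np

rng = np.random.default_rng(1)

n_in, n_out = 8, 4
W = rng.normal(scale=0.1, size=(n_in, n_out))   # synaptic weights
memory = np.zeros_like(W)                       # working-memory trace per synapse
decay, lr = 0.9, 0.01

def step(x):
    """One time step: compute activity, refresh each synapse's memory trace,
    and update the weight from its own locally stored trace."""
    global W, memory
    y = np.tanh(x @ W)                          # postsynaptic activity
    # Each synapse remembers a decaying correlation of pre- and postsynaptic activity.
    memory = decay * memory + np.outer(x, y)
    # The update uses only the synapse's own memory, keeping learning local.
    W += lr * (memory - W * (y**2).sum())
    return y

for _ in range(5):
    step(rng.normal(size=n_in))
```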

Daruwalla’s design may pave the way for a new era of AI that learns more like humans do. Such an advance would not only make AI more efficient and accessible, it would also mark a significant moment for neuroAI. Neuroscience was feeding valuable insights into AI long before ChatGPT spoke its first digital word. It appears AI may soon return the favor.

Journal reference:

  1. Kyle Daruwalla, Mikko Lipasti. Information bottleneck-based Hebbian learning rule naturally ties working memory and synaptic updates. Frontiers in Computational Neuroscience, 2024. DOI: 10.3389/fncom.2024.1240348
