By Cold Spring Harbor Laboratory, November 28, 2024
Cold Spring Harbor Laboratory scientists developed an AI algorithm inspired by the genome’s efficiency, achieving remarkable data compression and task performance.
In a sense, each of us begins life ready for action. Many animals perform amazing feats soon after they’re born. Spiders spin webs. Whales swim. But where do these innate abilities come from? Obviously, the brain plays a key role as it contains the trillions of neural connections needed to control complex behaviors.
However, the genome has space for only a small fraction of that information. This paradox has stumped scientists for decades. Now, Cold Spring Harbor Laboratory (CSHL) Professors Anthony Zador and Alexei Koulakov have devised a potential solution using artificial intelligence.
When Zador first encounters this problem, he puts a new spin on it. “What if the genome’s limited capacity is the very thing that makes us so smart?” he wonders. “What if it’s a feature, not a bug?” In other words, maybe we can act intelligently and learn quickly because the genome’s limits force us to adapt. This is a big, bold idea—tough to demonstrate. After all, we can’t stretch lab experiments across billions of years of evolution. That’s where the idea of the genomic bottleneck algorithm emerges.
Using AI to Mimic Evolutionary Efficiency
In AI, generations don’t span decades. New models are born with the push of a button. Zador, Koulakov, and CSHL postdocs Divyansha Lachi and Sergey Shuvaev set out to develop a computer algorithm that folds heaps of data into a neat package—much like our genome might compress the information needed to form functional brain circuits. They then test this algorithm against AI networks that undergo multiple training rounds. Amazingly, they find the new, untrained algorithm performs tasks like image recognition almost as effectively as state-of-the-art AI. Their algorithm even holds its own in video games like Space Invaders. It’s as if it innately understands how to play.
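To make the analogy concrete, here is a minimal sketch of what such compression could look like: a small "genomic" network generates the weights of a much larger "phenotype" network from compact neuron identities, so only the small network and the identities need to be stored. The class names, sizes, and architecture below are illustrative assumptions written in PyTorch-style Python, not the authors' published code.

```python
# Illustrative sketch (not the authors' code) of a "genomic bottleneck":
# instead of storing every connection weight directly, a tiny "genomic"
# network produces each weight from the identities of the two neurons it
# connects. All names and sizes here are hypothetical.
import torch
import torch.nn as nn

class GenomicLinear(nn.Module):
    """A linear layer whose full weight matrix is generated by a small g-network."""

    def __init__(self, in_features, out_features, id_dim=16, hidden=32):
        super().__init__()
        # Learnable low-dimensional "identities" for input and output neurons.
        self.in_ids = nn.Parameter(torch.randn(in_features, id_dim) * 0.1)
        self.out_ids = nn.Parameter(torch.randn(out_features, id_dim) * 0.1)
        # The small "genomic" network: maps a (pre, post) identity pair to one weight.
        self.g_net = nn.Sequential(
            nn.Linear(2 * id_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        # Pair every input identity with every output identity, then ask the
        # g-network for the weight of each connection.
        pre = self.in_ids.unsqueeze(0).expand(self.out_ids.size(0), -1, -1)
        post = self.out_ids.unsqueeze(1).expand(-1, self.in_ids.size(0), -1)
        pairs = torch.cat([pre, post], dim=-1)      # (out, in, 2 * id_dim)
        weight = self.g_net(pairs).squeeze(-1)      # (out, in), generated on the fly
        return x @ weight.t()

# Usage: a "phenotype" classifier whose weights all come from small g-networks.
model = nn.Sequential(GenomicLinear(784, 256), nn.ReLU(), GenomicLinear(256, 10))
logits = model(torch.randn(8, 784))  # behaves like an ordinary network at inference
```

Because the full weight matrix is generated rather than stored, the number of parameters that must be kept is set by the tiny g-network and the identity vectors, not by the layer's size; in this toy configuration that is roughly an order of magnitude fewer parameters than a standard 784-by-256 layer.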
Does this mean AI will soon replicate our natural abilities? “We haven’t reached that level,” says Koulakov. “The brain’s cortical architecture can fit about 280 terabytes of information—32 years of high-definition video. Our genomes accommodate about one hour. This implies a 400,000-fold compression that technology cannot yet match.”
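As a rough back-of-envelope check of those figures (using an assumed genome size of about 3 billion base pairs at 2 bits each, not numbers taken from the paper):

```python
# Back-of-envelope check of the quoted figures; the genome size is an
# assumption for illustration, not a value from the study.
cortex_bytes = 280e12          # ~280 terabytes of cortical "storage"
genome_bytes = 3e9 * 2 / 8     # ~3 billion base pairs at 2 bits per base (~0.75 GB)
print(f"required compression: ~{cortex_bytes / genome_bytes:,.0f}x")
# -> roughly 370,000x, the same order as the 400,000 factor Koulakov cites
```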
Nevertheless, the algorithm allows for compression levels thus far unseen in AI. That feature could have impressive uses in tech. Shuvaev, the study’s lead author, explains: “For example, if you wanted to run a large language model on a cell phone, one way [the algorithm] could be used is to unfold your model layer by layer on the hardware.”
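One hedged reading of that idea, sketched below with hypothetical helper functions: keep each layer in its compact form and rebuild (“unfold”) its full weights only at the moment the layer is needed, so the device never holds the whole expanded model in memory at once.

```python
# Illustrative sketch of "unfolding" a compressed model layer by layer.
# The generator callables and layer sizes are hypothetical stand-ins (in
# practice each could be a small g-network like the sketch above); this is
# not the authors' implementation.
import torch

def run_unfolded(x, layer_generators):
    """Apply a stack of layers whose full weights are rebuilt only when needed."""
    for i, generate_weights in enumerate(layer_generators):
        weight = generate_weights()            # expand this layer's compact form
        x = x @ weight.t()                     # use the full weights once
        if i < len(layer_generators) - 1:
            x = torch.relu(x)                  # nonlinearity between hidden layers
        del weight                             # free the expanded weights before moving on
    return x

# Placeholder "generators" standing in for compact, stored representations.
layers = [lambda: torch.randn(256, 784), lambda: torch.randn(10, 256)]
out = run_unfolded(torch.randn(1, 784), layers)  # peak memory ~ one expanded layer
```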
Such applications could mean more evolved AI with faster runtimes. And to think, it only took 3.5 billion years of evolution to get here.
Reference: “Encoding innate ability through a genomic bottleneck” by Sergey Shuvaev, Divyansha Lachi, Alexei Koulakov and Anthony Zador, 12 September 2024, Proceedings of the National Academy of Sciences.
DOI: 10.1073/pnas.2409160121
The study was funded by Deep Valley Labs, the G. Harold and Leila Y. Mathers Foundation, and Schmidt Futures.