By Taylor & Francis Group, September 29, 2024

Collected at: https://scitechdaily.com/cracking-the-neural-code-how-ai-will-surpass-human-intelligence/

In “Towards Human-Level Artificial Intelligence,” Eitan Michael Azoff proposes that cracking the neural code and emulating visual thinking are key to surpassing human intelligence with AI.

  • Understanding how ‘visual thinking’ works is key to building human-level AI, says AI expert
  • It may be possible for computers to emulate a type of consciousness, he suggests
  • Expert also warns that society must control AI technology and have ‘sole control of the off switch’

Unlocking the Potential of AI Through Neuroscience

Humans will build Artificial Intelligence (AI) that surpasses our own capabilities once we crack the ‘neural code’, says an AI technology analyst.

Eitan Michael Azoff, a specialist in AI analysis, argues that humans are set to engineer superior intelligence with greater capacity and speed than our own brains.

What will unlock this leap in capability is understanding the ‘neural code’, he explains: the way the human brain encodes sensory information and moves it around the brain to perform cognitive tasks such as thinking, learning, problem-solving, internal visualization, and internal dialogue.

Emulating Consciousness in AI

In his new book, Towards Human-Level Artificial Intelligence: How Neuroscience can Inform the Pursuit of Artificial General Intelligence, Azoff says that one of the critical steps towards building ‘human-level AI’ is emulating consciousness in computers.

There are multiple types of consciousness, and scientists acknowledge that even simpler animals such as bees possess a degree of consciousness. This is mostly consciousness without self-awareness; the nearest we humans come to experiencing it is when we are totally focused on a task, being “in the flow.”

Azoff believes that computer simulation can create a virtual brain which, as a first step, could emulate consciousness without self-awareness.

Consciousness without self-awareness helps animals plan actions, predict possible events and recall relevant incidents from the past, and it could do the same for AI.

The Role of Visual Thinking in AI Development

Visual thinking could also be the key to unlocking the mystery of what consciousness is. Current AI does not ‘think’ visually; it uses ‘large language models’ (LLMs). Because visual thinking predated language in humans, Azoff suggests that understanding visual thinking, and then modeling visual processing, will be a crucial building block for human-level AI.

Azoff says: “Once we crack the neural code we will engineer faster and superior brains with greater capacity, speed, and supporting technology that will surpass the human brain.

“We will do that first by modeling visual processing, which will enable us to emulate visual thinking. I speculate that in-the-flow consciousness will emerge from that. I do not believe that a system needs to be alive to have consciousness.”

Ethical Considerations and Control Measures

But Azoff issues a warning too, saying that society must act to control this technology and prevent its misuse: “Until we have more confidence in the machines we build we should ensure the following two points are always followed.

“First, we must make sure humans have sole control of the off switch. Second, we must build AI systems with behavior safety rules implanted.”

Reference: “Towards Human-Level Artificial Intelligence: How Neuroscience Can Inform the Pursuit of Artificial General Intelligence or General AI” by Eitan Michael Azoff, 17 September 2024, CRC Press.
DOI: 10.1201/9781003507864
