
Contributor: The human brain does not learn, think or recall like an AI. Embrace the difference

Recently, Nvidia founder Jensen Huang, whose company builds the chips powering today’s most advanced artificial intelligence systems, said: “What is really incredible is the way you program an AI is like the way you program a person.” Ilya Sutskever, co-founder of OpenAI and one of the leading figures of the AI revolution, also declared that it is only a matter of time before AI can do everything humans can do, because “the brain is a biological computer.”

I am a researcher in cognitive neuroscience, and I think these claims are dangerously wrong.

The greatest threat is not that these metaphors confuse us about how AI works, but that they mislead us about our own brains. During past technological revolutions, scientists, as well as popular culture, tended to explore the idea that the human brain could be understood as analogous to one new machine after another: a clock, a switchboard, a computer. The latest erroneous metaphor is that our brains are like AI systems.

I have seen this shift over the past two years in conferences, courses and conversations in the field of neuroscience and beyond. Words like “training,” “fine-tuning” and “optimization” are frequently used to describe human behavior. But we do not train, fine-tune or optimize the way AI does. And such inaccurate metaphors can cause real harm.

The 17th-century idea of the mind as a “blank slate” imagined children as empty surfaces shaped entirely by outside influences. This led to rigid education systems that tried to eliminate differences in neurodivergent children, such as those with autism, ADHD or dyslexia, rather than offering personalized support. Similarly, the early-20th-century “black box” model of behavioral psychology claimed that only visible behavior mattered. As a result, mental health care often focused on managing symptoms rather than understanding their emotional or biological causes.

And now new erroneous approaches are emerging as we begin to see ourselves in the image of AI. Digital educational tools developed in recent years, for example, adjust lessons and questions based on a child’s answers, theoretically keeping the student at an optimal learning level. This approach is heavily inspired by how AI models are trained.

This adaptive approach can produce impressive results, but it overlooks less measurable factors such as motivation or passion. Imagine two children learning the piano with a smart app that adapts to their changing skill level. One quickly learns to play perfectly but hates every practice session. The other makes constant mistakes but enjoys every minute. Judging only by the terms we apply to AI models, we would say that the child who plays perfectly has outperformed the other student.

But educating children is different from training an AI algorithm. That simplistic evaluation would not account for the first student’s misery or the second child’s joy. Those factors matter; there is a good chance the child who is having fun will be the one still playing a decade from now, and he may even end up a better and more original musician because he enjoys the activity, mistakes and all. I do think AI in learning is both inevitable and potentially transformative for the better, but if we assess children only in terms of what can be “trained” and “fine-tuned,” we will repeat the old mistake of emphasizing output over experience.

I see this playing out with undergraduate students who, for the first time, believe they can achieve the best measured outcomes by fully outsourcing the learning process. Many have used AI tools over the past two years (some courses allow it, some do not) and now rely on them to maximize efficiency, often at the expense of reflection and genuine understanding. They use AI as a tool that helps them produce good essays, yet in many cases the process no longer has much connection to original thought or to discovering what sparks their curiosity.

If we continue to think within this brain-as-AI framework, we also risk losing the vital thought processes that have led to major breakthroughs in science and art. These achievements did not come from identifying familiar patterns, but from breaking them through messiness and unexpected errors. Alexander Fleming discovered penicillin by noticing that mold growing in a petri dish he had accidentally left out was killing the surrounding bacteria: a lucky mistake by a messy researcher that went on to save the lives of hundreds of millions of people.

This messiness is not only important for eccentric scientists; it matters for every human brain. One of the most interesting discoveries in neuroscience over the past two decades is the “default mode network,” a group of brain regions that becomes active when we daydream and are not focused on a specific task. This network has also been shown to play a role in reflecting on the past, imagining scenarios and thinking about ourselves and others. Dismissing this mind-wandering behavior as a glitch rather than embracing it as a core human feature will inevitably lead us to build flawed systems in education, mental health and law.

Unfortunately, it is particularly easy to confuse AI with human thinking. Microsoft describes generative AI models like ChatGPT on its official website as tools that “reflect human expression, redefining our relationship with technology.” And OpenAI CEO Sam Altman recently highlighted his favorite new ChatGPT feature, called “memory,” which allows the system to store and recall personal details across conversations. For example, if you ask ChatGPT where to eat, it might remind you of a Thai restaurant you mentioned wanting to try months earlier. “It’s not that you plug your brain in one day,” Altman explained, “but … it will get to know you, and it will become this extension of yourself.”

The suggestion that AI “memory” will be an extension of our own is yet another flawed metaphor, one that leads us to misunderstand both the new technology and our own minds. Unlike human memory, which evolved to forget, update and reshape memories based on myriad factors, AI memory can be designed to store information with far less distortion or forgetting. A life in which people outsource memory to a system that remembers almost everything is not an extension of the self; it is a departure from the very mechanisms that make us human. It would mark a shift in how we behave, understand the world and make decisions. It might start with small things, such as choosing a restaurant, but it could quickly extend to much bigger decisions, such as taking a different career path or choosing a different partner than we otherwise would have, because AI models can surface connections and context that our brains may have filtered out for one reason or another.

This outsourcing may be tempting because the technology seems human to us, but AI learns, understands and sees the world in fundamentally different ways, and it does not truly feel pain, love or curiosity the way we do. The consequences of this ongoing confusion could be disastrous, not because AI is inherently harmful, but because instead of shaping it into a tool that complements our human minds, we will allow it to reshape us in its own image.

Iddo Gefen is a PhD candidate in cognitive neuroscience at Columbia University and author of the novel “Mrs. Lilienblum’s Cloud Factory.” His Substack newsletter, Neuron Stories, connects neuroscience insights to human behavior.
