
Decoding consciousness: artificial intelligence has learned to recognize human thoughts

The Vice-President of the Russian Academy of Sciences commented on the rapid progress that Western scientists claim to have made in decoding consciousness

Artificial intelligence may have "read" human thoughts for the first time. Trained on MRI scans to correlate the activity of particular brain areas with words, it conveyed the meaning of what it "saw" in the form of text. The accuracy was not one hundred percent, but it was very encouraging: a person would think, for example, "I don't have a license," and ChatGPT would translate it as "I can't drive a car." Researchers from the University of Texas at Austin recently announced this leap forward in neurophysiology.

The new artificial intelligence (AI) system, which works in conjunction with a functional MRI scanner, is called a semantic decoder. In other words, it is a converter of human brain activity (recorded while a person listens to a story or imagines one) into a continuous stream of text.

The system requires no surgical intervention; in other words, it is completely non-invasive.

Admittedly, the first stage required lengthy training: volunteers listened to podcasts (audio stories) for a total of 16 hours while lying in a functional MRI scanner. The scientists then coupled brain-image analysis software with artificial intelligence (ChatGPT), which, as we know, can learn quickly from large volumes of text. This ability was used to teach it which words corresponded to consecutive fMRI scans. In effect, the artificial "intelligence" was asked to learn another language: the language of the human brain.

Having trained the AI in this way to translate brain images into coherent speech, the researchers moved to the second stage: they played new stories to the volunteers, and the machine began to compose the corresponding text solely on the basis of their brain activity.
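To make the two-stage pipeline above a little more concrete, here is a minimal, purely illustrative sketch of one way such a semantic decoder could be organised: a linear "encoding model" predicts an fMRI pattern from word features, and a beam search keeps the candidate word sequences whose predicted pattern most closely matches the measured scan. The vocabulary, embeddings, dimensions and the plain linear map are all invented for the example; they are not the authors' actual model, in which a large language model proposes the word continuations.

```python
# Minimal sketch of a semantic decoder of the kind described above.
# Hypothetical and simplified: a linear "encoding model" predicts fMRI
# activity from text features, and a beam search keeps the candidate
# word sequences whose predicted activity best matches the measured scan.
# In the real system a language model proposes continuations; here a toy
# vocabulary stands in for it.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["i", "don't", "have", "a", "license", "can't", "drive", "car"]
N_VOXELS = 50          # number of recorded brain "voxels" (toy value)
EMB_DIM = 16           # dimensionality of the word-feature vectors

# Toy word embeddings and a fixed linear map from features to voxels.
# In practice the map would be fit by regression on the ~16 hours of
# story-listening fMRI data described in the article.
word_vecs = {w: rng.normal(size=EMB_DIM) for w in VOCAB}
encoding_W = rng.normal(size=(N_VOXELS, EMB_DIM))

def predict_response(words):
    """Predicted fMRI pattern for a candidate word sequence."""
    feats = np.mean([word_vecs[w] for w in words], axis=0)
    return encoding_W @ feats

def decode(measured_scan, length=5, beam_width=3):
    """Beam search: keep the sequences whose predicted scan is closest
    to the measured one (smallest squared error)."""
    beams = [([], 0.0)]
    for _ in range(length):
        candidates = []
        for words, _ in beams:
            for w in VOCAB:                      # an LM would propose these
                seq = words + [w]
                err = np.sum((predict_response(seq) - measured_scan) ** 2)
                candidates.append((seq, err))
        beams = sorted(candidates, key=lambda c: c[1])[:beam_width]
    return beams[0][0]

# Simulate a scan evoked by a "thought" and decode text from it.
true_thought = ["i", "can't", "drive", "a", "car"]
scan = predict_response(true_thought) + rng.normal(scale=0.05, size=N_VOXELS)
print("decoded:", " ".join(decode(scan)))
```

As in the study itself, a decoder built this way would tend to recover the gist of a sentence rather than its exact wording, because many word sequences predict very similar brain responses.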

As we have already noted, the result was rather fuzzy: ChatGPT captured not the specific words but the general gist of what a volunteer was thinking. For example, it rendered the phrase "I don't have a driver's license yet" as "She hasn't even begun to learn to drive," and the words "I didn't know whether to scream, cry or run away. Instead, I said, 'Leave me alone!'" were deciphered by the machine as "I started screaming and crying, and then she just said, 'I told you to leave me alone.'"

According to the developers, the semantic decoder should eventually help people who are conscious but unable to speak to express their thoughts fully again.

"For a non-invasive method, this is a real leap forward compared to what has been done before, which was typically single words or short sentences," said Alex Huth, an associate professor of neuroscience and computer science and one of the authors of the work. "We are getting the model to decode continuous language over extended periods of time, with complicated ideas."

In an article published later on the university's website, the developers of the new "translator" note that for now the system can be used only in the laboratory, but in the future it could be transferred to more portable brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS). This method records the activity of the cerebral cortex using LEDs that shine near-infrared light of several wavelengths toward the brain.
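For readers curious how such a measurement turns into a signal about brain activity, here is a small, purely illustrative sketch: light at two near-infrared wavelengths is attenuated differently by oxygenated and deoxygenated haemoglobin, so two optical-density readings can be converted into haemoglobin changes via the modified Beer-Lambert law. All the numbers below are invented placeholders, not calibrated constants from the study.

```python
# Illustrative only: converting fNIRS-style readings at two near-infrared
# wavelengths into changes in oxygenated (HbO) and deoxygenated (HbR)
# haemoglobin using the modified Beer-Lambert law. The extinction
# coefficients, path length and optical-density values are made-up
# numbers chosen just to show the arithmetic.
import numpy as np

# Rows: wavelength 1 and 2; columns: extinction of HbO and HbR
# (illustrative values, not calibrated constants).
E = np.array([[1.0, 3.0],    # wavelength 1 absorbs HbR more strongly
              [2.5, 0.8]])   # wavelength 2 absorbs HbO more strongly
PATH = 3.0 * 6.0             # source-detector distance (cm) x path-length factor

def hb_changes(delta_od):
    """Solve delta_OD = PATH * E @ [dHbO, dHbR] for the haemoglobin changes."""
    return np.linalg.solve(E * PATH, np.asarray(delta_od))

# Measured change in optical density at the two wavelengths (toy numbers).
d_hbo, d_hbr = hb_changes([0.020, 0.035])
print(f"dHbO = {d_hbo:.4f}, dHbR = {d_hbr:.4f}  (arbitrary units)")
```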

– We really are approaching, step by step, such things as deciphering human mental activity. For example, there is already fully working technology for recording the brain's bioelectric signals and translating that information into specific actions. Suppose a person wearing an electroencephalographic cap looks intently at a lamp and turns it on "by the power of thought." The biocurrents of their brain at that moment are recorded by a computer, which isolates the "command" to switch on the lamp, and the lamp turns on (with the help, of course, of a small transmitter that relays the signal from the computer to the lighting device). In this way trained volunteers (as a rule, people who have lost the ability to move) already switch on kettles, open and close doors, and so on. This is the well-known brain-computer interface technology.
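As a purely illustrative aside, a simple version of such a trigger can be sketched in a few lines: the program watches the power of an EEG signal in a chosen frequency band and issues the "switch on the lamp" command when that power crosses a threshold. The sampling rate, frequency band, threshold and the switch_lamp stub below are assumptions made for the example, not details of any real device.

```python
# Toy sketch of a threshold-based brain-computer-interface trigger.
# A real system would read from EEG hardware; here a synthetic signal
# stands in, and switch_lamp() is a stub for whatever actually sends
# the command to the lighting device.
import numpy as np

FS = 250                 # sampling rate, Hz (typical for consumer EEG)
BAND = (8.0, 13.0)       # alpha band, often used for simple on/off control
THRESHOLD = 5.0          # band-power level that counts as a "command"

def band_power(signal, fs, band):
    """Average spectral power of `signal` inside the given frequency band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].mean()

def switch_lamp(on):
    print("lamp ->", "ON" if on else "off")

def process_window(eeg_window):
    """Issue the lamp command if band power exceeds the threshold."""
    power = band_power(eeg_window, FS, BAND)
    switch_lamp(power > THRESHOLD)

# Simulated one-second EEG windows: noise alone, then noise plus a strong
# 10 Hz rhythm (as if the user is concentrating on the lamp).
t = np.arange(FS) / FS
rng = np.random.default_rng(1)
quiet = rng.normal(scale=1.0, size=FS)
focus = quiet + 4.0 * np.sin(2 * np.pi * 10 * t)
process_window(quiet)   # expected: lamp -> off
process_window(focus)   # expected: lamp -> ON
```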

– In this work, the American researchers set out to decipher not single words but thoughts. It is interesting work… But so far it has been carried out on a very limited circle of volunteers who were deliberately trained to "communicate" with the artificial intelligence. In addition, it is possible only inside the laboratory: a person lies in the field of a magnetic resonance scanner, and from the blood flow in particular parts of the brain the computer deciphers the "meaning" itself. To achieve such "understanding," both the computer and the person have to be trained for a very long time; the person is required to work very precisely and to concentrate inwardly on what they are thinking about. The probability that the computer "translates" their thoughts correctly depends on that work.

— I think it is still too early to talk about that. The underlying methodology has not yet been fully worked out, and it will surely require a great deal of further research and many more experiments.
