Civilization

Brain Decoder

Isn’t this the most horrifying thread in Stanisław Lem’s “Solaris”: the cytoplasmic ocean can read the minds and even the subconscious of Kelvin and the other scientists as if from an open book, while the reverse is impossible? And what if we create such an ocean ourselves?

Let us imagine that humanity – in the person of cognitive scientists and computer specialists from the University of Texas at Austin – has managed to devise a decoder “reconstructing continuous language from cortical semantic representations recorded using functional magnetic resonance imaging (fMRI)”. In non-scientific terms: this decoder reads thoughts in real time, using non-invasive brain scans. It is still learning, but it is getting better all the time.

Restoring the ability to speak

The case became famous at the beginning of May this year when, in the pages of “Nature Neuroscience”, the aforementioned scientists reported on the development of an AI capable of “reading thoughts”. The news quickly appeared in the world’s leading media: “NYT”, “National Public Radio”, “Euronews”, “NBC”, “The Guardian”, etc. As the authors of this work, which is likely to go down in the annals of scientific history, assure us at the outset: “A brain-computer interface that decodes continuous language from non-invasive recordings would have many scientific and practical applications. Currently, however, non-invasive language decoders can only identify stimuli from among a small set of words or phrases”.

Isn’t this what the brilliant physicist and cosmologist Stephen Hawking used for half of his increasingly immobile life? When he wanted to speak, he selected letters and words on a synthesiser screen controlled by movements of his cheek muscle. Had that muscle stopped working, the scientist would have fallen irrevocably silent.

Already in 2019, i.e. four years ago (which the Cambridge genius did not live to see), a new way of extracting a person’s speech directly from their brain was developed: the intention to speak certain words can be picked up from brain signals and converted into text quickly enough to keep up with natural conversation. We owe this discovery to a group led by Edward Chang at the University of California, San Francisco. Their device, which can decode a small set of listed words, relies on dense electrode recordings taken from the surface of the brain. It was meant to replace the kind of gadgets Hawking used. However, the pandemic-era breakthrough in the field of AI is currently putting those innovations out to pasture.

Today, Jerry Tang, lead author of the “Nature Neuroscience” publication and a PhD student at the University of Texas at Austin, claims the following in the pages of the medical news outlet “STAT”: “Brain decoders are being developed to help restore communication to people who have lost the ability to speak or write. Currently, most brain decoders use recordings from implanted electrodes, and are primarily intended for people with motor system disorders. We hope that eventually our brain decoder can provide a non-invasive option for people with a wide range of communication disorders”.

Indeed, the decoder developed in Texas, according to its scientific description: “generates intelligible word sequences that recover the meaning of perceived speech, imagined speech and even silent videos, demonstrating that a single decoder can be applied to a range of tasks”.

Stratigraphic maps

It is important to know that when we are about to say something, different but quite specific areas of the brain are activated, depending on what we are constructing the utterance about, with what emotions, and in relation to which memories. Words, and even whole phrases, therefore have their place in the brain: when we listen to them, when we think about them and also when we say them. In general, we think first and then speak the thought, although more than one person could often be accused of saying something thoughtlessly. That, however, is a simplification and a colloquialism that neuroscience rejects. It is just that not everything we think is particularly clever, and hence some statements appear not to have been preceded by any reflection.

To simplify the new AI-based non-invasive decoder as much as possible: functional magnetic resonance imaging (fMRI) can map with precision which areas of the brain are highly active at a given moment and display them as coloured dots – like stratigraphic maps: blue where activity is low, red where it is extremely high, and the rest following the gradient in between: green, yellow, orange… The AI, in turn, is a programme that analyses this type of image very quickly, armed with databases full of data from experiments in which a person undergoing an fMRI scan thought in concrete terms (or at least claimed to) about what they wanted to say, listened to an utterance, and so on. Provided the participant is open to having their thoughts decoded – listening to a story that is new to them, or imagining the story being told – the machine generates an appropriate text from brain activity alone. In compiling meaningful and grammatically coherent utterances, it relies in part on a transformer model similar to those that power OpenAI’s ChatGPT and Google’s Bard.
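For readers who like to see the moving parts, here is a minimal sketch in Python of the decoding idea described above – a toy illustration built from made-up components, not the Texas team’s actual code. It supposes a hypothetical language model that proposes candidate word sequences and a hypothetical model that predicts the brain activity each candidate should evoke; a beam search then keeps whichever candidates best match what the scanner actually recorded. Every function, vocabulary entry and number below is a stand-in.

```python
import numpy as np

# Tiny stand-in vocabulary; a real system would rely on a full language model.
VOCAB = ["i", "told", "you", "to", "leave", "me", "alone", "scream", "cry", "run"]

def language_model_propose(prefix, k=3):
    """Hypothetical language model: return k plausible next words."""
    rng = np.random.default_rng(abs(hash(tuple(prefix))) % (2**32))
    return list(rng.choice(VOCAB, size=k, replace=False))

def encoding_model_predict(words, n_voxels=200):
    """Hypothetical model: predict the activity (one value per 'voxel')
    that hearing or thinking this word sequence should evoke."""
    rng = np.random.default_rng(abs(hash(tuple(words))) % (2**32))
    return rng.standard_normal(n_voxels)

def similarity(predicted, recorded):
    """Cosine similarity between predicted and recorded activity patterns."""
    return float(predicted @ recorded /
                 (np.linalg.norm(predicted) * np.linalg.norm(recorded) + 1e-9))

def decode(recorded_activity, n_words=6, beam_width=4):
    """Beam search: keep the word sequences whose predicted brain activity
    best matches the activity the scanner actually recorded."""
    beams = [([], 0.0)]
    for _ in range(n_words):
        candidates = []
        for words, _ in beams:
            for nxt in language_model_propose(words):
                seq = words + [nxt]
                score = similarity(encoding_model_predict(seq), recorded_activity)
                candidates.append((seq, score))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return " ".join(beams[0][0])

# A random vector standing in for one participant's fMRI recording.
fake_scan = np.random.default_rng(0).standard_normal(200)
print(decode(fake_scan))
```

In the actual decoder the random stand-ins give way to a transformer language model of the ChatGPT kind and to a model fitted to one particular participant’s scans – which is why, as described further below, it only works for the person it was trained on.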
The voice synthesiser used by Stephen Hawking was a “toy” compared to what scientists can build today. Photo: RAMON DE LA ROCHA / EPA / PAP
Training the decoder consists in the person lying in the fMRI scanner and listening to specific podcasts for a total of 16 hours – spread over several months of sessions of bearable length, because one has to stay completely still – while the scanner records and the model analyses the activity. During the experiment itself, participants were not only asked to listen to or think about a given story while in the scanner; they were also asked to watch four short silent films. The semantic decoder was expected to cope with each of these tasks, which differ both in the brain work involved and in how the participant might later verbalise the experience.
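Purely as a hedged illustration of what those 16 hours amount to – and not the authors’ pipeline – here is a short Python sketch of the kind of model-fitting just described: a regression (ridge regression in this toy version, an assumption made for compactness rather than a detail from the paper) that maps semantic features of the words heard to the recorded activity of each voxel. All shapes and numbers below are invented stand-ins for real recordings.

```python
import numpy as np

rng = np.random.default_rng(42)

n_timepoints = 1000   # fMRI volumes recorded while the participant listens
n_features = 64       # dimensionality of the semantic features of the words heard
n_voxels = 200        # brain locations measured by the scanner

# Random stand-ins for (a) semantic features of the story at each timepoint
# and (b) the voxel activity the scanner recorded at the same moments.
story_features = rng.standard_normal((n_timepoints, n_features))
true_weights = rng.standard_normal((n_features, n_voxels))
voxel_activity = story_features @ true_weights \
                 + 0.5 * rng.standard_normal((n_timepoints, n_voxels))

def fit_ridge(X, Y, alpha=10.0):
    """Closed-form ridge regression mapping features -> per-voxel activity."""
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)

weights = fit_ridge(story_features, voxel_activity)

# Once fitted, the model can predict the activity a new sentence should evoke,
# which is what the decoder later compares against a real scan.
new_sentence_features = rng.standard_normal((1, n_features))
predicted_activity = new_sentence_features @ weights
print(predicted_activity.shape)   # (1, 200)
```

The point of the sketch is only that the long listening sessions supply the paired text-and-scan data from which such a mapping can be learned for a given person.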

Not words but essence

This decoder has pored over thousands, even tens of thousands, of these activity-coloured images of our brains, and it turns out that today it can read minds. However… Only (or as many as) half of the time does the machine, once trained on the brain activity of a particular participant, generate text that closely (and sometimes exactly) matches the intended meaning of the original utterance.

A positive result is not a word-for-word transcription. Instead, the researchers designed the decoder to capture the essence of what is being said or thought. For example, “I didn’t know whether to scream, cry or run away” was translated as “Leave me alone! I told you to leave me alone”.

The results for people on whom the decoder had not been trained were incomprehensible and useless – so our brain-speech mappings are quite individual. The machine’s efforts were similarly hopeless when participants on whom the decoder had been trained later thought, for example, about something other than what they had been assigned.

Of course, what is really shocking here – or may yet become so – is not a question of the decoder’s effectiveness: not the fact that, say, a stroke victim with no ability to speak will sometimes fail to communicate despite this automatic helper, that “their listeners” will lose the thread and misunderstandings will occur. Being “lost in translation” is not a tragic or traumatising experience. We will give it another try, and maybe next time things will go better.
What really makes many people’s hair stand on end in this matter is the possibility of reading a mind – and thereby controlling it – without its owner knowing, the scenario so brilliantly captured by Stanisław Lem in “Solaris”. The very prospect of engineering social subordination with tools far more modern than the mass media had given many people sleepless nights even before the latest thought decoder from Texas appeared. Today it could end up as in the unforgettable joke about Wiesław Gomułka’s speeches: “We stood over the precipice, but we took a big step forward”.

The authors of the publication in “Nature Neuroscience” themselves assure that “as brain-computer interfaces should respect mental privacy, we tested whether successful decoding requires subject cooperation and found that subject cooperation is required both to train and to apply the decoder”. This means that their machine cannot be used, for example, to monitor “thoughtcrime”.

Similarly, in the abovementioned opinion, Jerry Tang expresses his understanding of these fears and asserts: “While our brains give rise to our mental processes, we have a limited understanding of how most mental processes are actually encoded in brain activity. As a result, brain decoders cannot simply read out the contents of a person’s mind. Instead, they learn to make predictions about mental content. A brain decoder is like a dictionary between patterns of brain activity and descriptions of mental content. The dictionary is built by measuring how a person’s brain responds to stimuli like words or images”.

If fMRI imaging – which depends on a large, expensive machine that is medically indispensable and therefore hard to get access to – could be exchanged for something more practical and handy, such as functional near-infrared spectroscopy (fNIRS), which measures essentially the same thing as fMRI, i.e. the activity of specific areas of the brain, only at lower resolution… It may not be available to millions, but it would be available to thousands! Why not?

“We want to make sure people only use these types of technologies when they want to and that it helps them”, Jerry Tang assures via the press service of his university, which is now sending his words out to the world. A world in which, on the one hand, there are millions of people whom strokes and other disorders have deprived of the ability to speak and communicate, and in which, on the other, state and corporate surveillance has reached a hitherto unknown extent.

Jerry Tang and his supervisor Alexander Huth have filed a patent application for their invention, so they themselves judge that the technology has a future. As is often the case with scientific breakthroughs, the point is that the psychologists, lawyers and politicians responsible for legislation should not wake up only once the milk has been spilt. The question of what these devices may be used for, and how people’s privacy is to be protected, must be regulated immediately.

– Magdalena Kawalec-Segond
– Dominik Szczęsny-Kostanecki

TVP WEEKLY. Editorial team and journalists

Sources:

https://www.nature.com/articles/s41593-023-01304-9
https://www.statnews.com/2023/06/08/brain-decoders-mind-reading-research-ethics-privacy/
https://medicalxpress.com/news/2023-05-brain-decoder-reveal-stories-people.html
Main photo: An exhibition on the human brain was held in Barcelona last year. Photo: Enric Fontcuberta / EPA / PAP