Researchers at the University of Texas at Austin have developed a “semantic decoder” that uses artificial intelligence to convert scans of the human brain’s speech-related activity into paraphrased text. While the output is still inaccurate compared to the source text, its creators have already cautioned that the development is a big step forward for AI’s role in assistive technology, and that it could be exploited if not properly regulated.
First published on Monday in Nature Neuroscience, the team’s findings detail a new system that integrates generative programs similar to OpenAI’s GPT-4 and Google Bard with existing techniques for interpreting functional magnetic resonance imaging (fMRI) scans of the brain. While previous brain-computer interfaces (BCIs) have shown promise of achieving similar translation capabilities, UT Austin’s system is reportedly the first non-invasive version, requiring no physical implants or wiring.
In the study, the researchers asked three subjects to each listen to audio podcasts inside an fMRI machine for a total of 16 hours. Meanwhile, the team trained its AI model to create and parse semantic features by analyzing Reddit comments and autobiographical texts. By meshing the two datasets, the AI learned to match words and phrases with the subjects’ brain scans, creating semantic connections.
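The article does not include the team’s code, but the kind of pairing it describes can be illustrated. Below is a minimal, hypothetical Python sketch of an “encoding model” in this general spirit: a regularized linear map from language-model embeddings of the words a subject heard to the fMRI responses recorded at the same time. The shapes, the random stand-in data, and the use of scikit-learn’s Ridge are all illustrative assumptions, not the study’s actual pipeline.

```python
# A minimal sketch (not the team's actual code): fit a regularized linear
# "encoding model" that predicts fMRI activity from language-model
# embeddings of the words a subject heard while being scanned.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_timepoints = 1000   # fMRI volumes recorded while listening (illustrative)
n_voxels = 500        # brain measurements per volume (illustrative)
embed_dim = 768       # language-model embedding size (illustrative)

# Stand-ins for real data: embeddings of the heard words, aligned in time
# with the brain responses they accompanied.
word_embeddings = rng.standard_normal((n_timepoints, embed_dim))
brain_responses = rng.standard_normal((n_timepoints, n_voxels))

# Learn a mapping from semantic features to voxel activity.
encoding_model = Ridge(alpha=1.0)
encoding_model.fit(word_embeddings, brain_responses)

# Given new embeddings, the model predicts the brain activity they should
# evoke -- the building block a decoder can score candidate text against.
predicted = encoding_model.predict(word_embeddings[:10])
print(predicted.shape)  # (10, 500)
```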
After this step, participants were again asked to lie down in the fMRI scanner and listen to new audio that was not part of the original dataset. The semantic decoder then converted the speech into text via scans of their brain activity, and produced similar results when a subject watched a silent video clip or simply imagined a story in their head. Although the AI transcripts frequently contained misplaced or incorrectly worded passages, the overall output still paraphrased a subject’s inner monologue well, and sometimes even accurately reflected exact word choices. As The New York Times reports, this suggests the UT Austin team’s decoder captures not just word order, but actual implicit meaning.
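As another hedged illustration rather than the team’s published method: one way a decoder of this kind can turn brain activity into text is to propose candidate sentences with a generative model, predict the activity each candidate should evoke, and keep the one that best matches the observed scan. The embed() helper, the stand-in model weights, and the toy candidates below are all hypothetical.

```python
# A hedged sketch of the decoding step: score candidate sentences by how
# well their predicted brain responses match an observed scan, and keep
# the best-matching candidate. Everything here is a toy stand-in.
import numpy as np

rng = np.random.default_rng(1)
embed_dim, n_voxels = 768, 500

def embed(text: str) -> np.ndarray:
    # Placeholder for a real language-model embedding of the candidate text.
    seed = abs(hash(text)) % (2**32)
    return np.random.default_rng(seed).standard_normal(embed_dim)

# Stand-in for a trained encoding model's weights.
weights = rng.standard_normal((embed_dim, n_voxels))

def predict_response(text: str) -> np.ndarray:
    # Predict the brain activity a candidate sentence should evoke.
    return embed(text) @ weights

# Pretend recording: the scan evoked by the sentence the subject heard.
observed_scan = predict_response("i got up off the ground")

candidates = [
    "i got up off the ground",
    "she told me to leave",
    "the dog ran across the road",
]

def score(candidate: str) -> float:
    # Correlation between predicted and observed brain activity.
    return float(np.corrcoef(predict_response(candidate), observed_scan)[0, 1])

print(max(candidates, key=score))  # picks the best-matching candidate
```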
[Related: Brain interfaces aren’t nearly as easy as Elon Musk makes them seem.]
The technology is still in its very early stages, but the researchers believe an improved version could one day provide a powerful new communication tool for individuals who have lost the ability to speak, such as stroke victims and people coping with ALS. As it stands, fMRI scanners are large, stationary machines confined to medical facilities, so the team is also considering how similar systems could work using more portable functional near-infrared spectroscopy (fNIRS).
However, the new semantic decoder comes with an important caveat: subjects must make a concerted, conscious effort to cooperate with the AI program by staying focused on the audio. Simply put, a distracted brain means more garbled transcripts. Likewise, the decoder can only be trained on one person at a time.
Despite these current limitations, the research team already anticipates the potential for rapid progress, along with misuse. “[F]uture developments might enable decoders to bypass these [privacy] requirements,” the team wrote in its study. “Moreover, even if decoder predictions are inaccurate without subject cooperation, they could be intentionally misinterpreted for malicious purposes … For these and other unforeseen reasons, it is critical to raise awareness of the risks of brain decoding technology and enact policies that protect each person’s mental privacy.”