
If all goes as hoped, brain implants could eventually restore communication to people paralyzed by injury or disease. However, that future is still a long way off; for now, implants are limited to testing in clinical trials.

In one of those clinical trials, based at the University of California, San Francisco, a participant happened to be bilingual in English and Spanish, giving the researchers an unplanned look at how the brain handles language. By tracking activity in the brain area where the intention to speak is translated into control of the vocal tract, they found that both languages produce consistent signals there. As a result, training the system to recognize English phrases also improved its recognition of Spanish.

Making noise

Understanding bilingualism also helps us understand how the brain processes language in general. But the new paper describing the effort notes that restoring communication should mean restoring it in every language a person speaks. Bilingual people may switch languages depending on the social situation, or mid-sentence to express a thought more clearly, and they often describe bilingualism as an important part of their identity.

So if we really want to restore people’s communication, providing access to all the languages ​​they speak should be part of that.

The new work is a step toward making that possible. It is part of a clinical trial called BRAVO (Brain-Computer Interface for Arm and Speech Restoration), which uses a relatively simple implant (128 electrodes) placed in the brain's motor areas, the regions that convert the intention to perform a movement into the signals that drive the muscles executing it.

For speech, this means the implant sits over the neurons that translate the desire to say a word into the muscle activity needed to control the mouth and tongue, exhale, and tense the vocal cords. This is downstream of the brain regions where words are selected, where English and Spanish probably differ, and downstream of the regions where meaning is handled, where the two languages may overlap.

Hypothetically, if parts of words in the two languages are pronounced similarly enough, the muscle control required to produce those sounds should also be similar. Tracking neural activity here should therefore make it possible to process both languages, and also to detect overlap between them.

Using language

Because neural activity appears as a noisy series of "spikes," or bursts of voltage changes, detecting these signals is fairly complex. Translating them into specific meanings is typically handled by AI systems trained to associate particular activity patterns with particular information, whether that information is "I want to say cat" or "I saw a cat."

So much of the work here involved training the software side of the system to recognize when the participant with the implant wanted to say certain words. The participant would imagine saying a word, and the software would try to identify which word he was attempting. The researchers used this approach to train the system on 50 English words, 50 Spanish words, and a few words that are identical in both languages. Accounting for verb tenses and the like, this came to 178 distinct words.
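At its core, this kind of decoding is a supervised classification problem: repeated trials of an attempted word yield feature vectors of electrode activity, and a model learns to map new vectors back to words. The sketch below is purely illustrative, not the trial's actual decoder: it uses synthetic "neural" features, a toy five-word bilingual vocabulary, and a simple nearest-centroid classifier.

```python
import random

random.seed(0)

# Hypothetical setup: each attempted word yields one feature vector of
# activity values from 128 electrodes. Real decoders are far more
# sophisticated; this is a minimal nearest-centroid sketch.
N_ELECTRODES = 128
VOCAB = ["cat", "gato", "hello", "hola", "si"]  # toy bilingual vocabulary

def synth_trial(word):
    """Generate a synthetic feature vector: each word gets its own
    characteristic mean activity per electrode, plus trial noise."""
    rng = random.Random(hash(word) % (2 ** 32))
    means = [rng.uniform(0.0, 1.0) for _ in range(N_ELECTRODES)]
    return [m + random.gauss(0.0, 0.1) for m in means]

def train(trials):
    """Average the feature vectors for each word into a centroid."""
    return {
        word: [sum(v[i] for v in vectors) / len(vectors)
               for i in range(N_ELECTRODES)]
        for word, vectors in trials.items()
    }

def decode(centroids, features):
    """Classify a new trial as the word with the nearest centroid."""
    def sq_dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(centroids, key=lambda w: sq_dist(centroids[w]))

# Train on 20 synthetic trials per word, then decode a fresh trial.
training = {w: [synth_trial(w) for _ in range(20)] for w in VOCAB}
model = train(training)
print(decode(model, synth_trial("gato")))  # prints "gato"
```

Because the classifier only sees vocal-tract-level activity patterns, nothing in this framing distinguishes an English word from a Spanish one, which is why a single model can serve both languages.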




TOPPIKR is a global news website that covers everything from current events, politics, entertainment, culture, tech, science, and healthcare.
