When Marisha Speights first began working as a speech-language pathologist at a kindergarten serving wealthy families in Nashville, Tennessee, she used the standard screenings and rating scales she had been taught to trust.
But when she moved to a kindergarten serving poor families in Jackson, Mississippi, she realized the tests weren’t working.
“It was, ‘I don’t think this child has a speech or language issue, but the test says they’re at risk.’ Or the other way around: I wouldn’t identify a child I thought was at risk,” says Speights. “It raised questions for me about when to use these measures with groups that have different characteristics.”
She eventually took that question to Northwestern University, where she is now building an AI system that could address the issue.
Her pediatric voice technology and acoustics research lab, known as PedzStar, is building a toolbox of acoustic biomarkers to track children’s speech patterns, using samples from children both with and without speech impairments. Once researchers can examine the differences between the two groups, the team hopes to build applications using artificial intelligence and machine learning, ultimately predicting language disorders.
Speights has compiled samples from 400 children so far, broadening the dataset’s scope across geographic locations, cultural backgrounds and socioeconomic statuses. She hopes to eventually collect audio samples from more than 2,000 children.
“We have a lot of kids that are not represented in our current dataset. That was one of our big goals: to represent more of the different kinds of child speakers,” she says.
PedzStar is part of a growing number of attempts to use AI in the world of speech pathology.
Jordan Green, a professor of communication sciences and disorders at Massachusetts General Hospital’s institute for health professionals, wrote in a recent research paper that the excitement around AI in health care is “obvious.” Uses in the speech world range from virtual therapists and interactive games to chatbot conversation partners and AI-driven diagnostics.
Nina Benway, a postdoctoral researcher at the University of Maryland, College Park, attributes the increased use of AI to more data for training AI systems, more accessible computing power, and mainstream large language models such as ChatGPT.
“It’s most used by clinicians in many fields to help them plan lessons, generate materials and things like that, but the idea of using AI to help with treatment is relatively new,” Benway says.
Improve student outcomes
When it comes to speech-language pathology, Speights says, children at the pre-K level are largely overlooked compared to older children and adults.
“It’s difficult to collect speech data with kids. You can’t give them something to read,” she says. “We need to control the environment and create engaging activities to get quality recordings, and we need people skilled at working with young children.”
In her work, Speights presents toy farm animals to children, because many animal words use sounds that develop early, such as the “kuh” sound in “cow.” She and her team capture the sounds the kids make during playtime, then move on to structured tasks, such as looking at photos and explaining what they see, and to formal assessments.
Speights hopes the lab’s work will ultimately produce software that helps diagnose language disorders in children.
The University at Buffalo is similarly looking to AI to help with audio diagnostics. In the fall of 2022, the university, part of the State University of New York (SUNY) system, received a five-year, $20 million grant from the National Science Foundation to examine how technology can aid the diagnosis and treatment of speech and language problems in children.
“Everyone knows someone who has a child who is struggling or has struggled with some part of their speech,” says Venu Govindaraju, director of the NSF-funded National AI Institute for Exceptional Education. “Because of the possibilities of AI, people say, ‘If AI can do this, it can do that [support speech development] as well.’”
The project is currently collecting and validating data, with the ultimate goal of creating a universal screening tool for teachers to use in schools. Researchers also want to support intervention by focusing on personalized attention for each student.
“This struck a chord with a lot of people. They can see AI and its possibilities in other fields, so they are open to the possibilities here, and I think there are two [tools],” says Govindaraju. “One makes detection quick and easy, and the other is what comes after, the [treatment].”
Mitigate heavy workloads
Both Govindaraju and Speights are quick to say that AI is not going to replace speech-language pathologists, and that the technology will not make diagnoses on its own. It will be supervised by a licensed care provider who makes the final call.
However, some parts of the country lack speech experts, and the field needs solutions to fill the gap.
Lauren Arner, associate director of school services in speech-language pathology at the American Speech-Language-Hearing Association (ASHA), says the organization believes that, with the right guardrails in place, AI can help ease the growing workloads many speech-language pathologists face.
“So much of the workload is completing evaluations and related documentation, so any technology that can mitigate some of that workload [helps],” she said.
According to ASHA’s 2024 annual schools survey, the number of children diagnosed with speech impairments is rising faster than the number of speech pathologists. Approximately 27 percent of pathologists said they are considering leaving the profession due to burnout, as many teachers have. As in teaching, some experts attribute the widening gap to low wages and a lack of organizational funding, and some worry the gap may never close.
“There will always be more children than speech-language pathologists,” says Speights. She believes automation can reduce workloads in several areas, adding that care providers could then “focus more on precision care, such as making sure children who really need support get personalized care.”
Speights adds that the tools could help pathologists follow the progression of a child’s language over time, and Arner says they could be particularly useful for rural families, who see speech pathologists less frequently than families with access to urban supports.
Using AI does raise key safety considerations: keeping children’s identifiable information out of AI systems and ensuring that the data collected is properly protected. With PedzStar, Speights ensures that no personal information is captured during audio sample collection, and what is collected is housed on internal servers rather than in the more accessible wider cloud.
“Because of the vulnerability of the pediatric population, we want to make sure children are protected,” she says.
According to Arner, ASHA plans to release AI guidance this summer, and it encourages speech pathologists to review their school’s or organization’s policies on AI before using the tools. Benway, at the University of Maryland, recently released an article outlining considerations for implementing AI in the field of speech pathology, which boil down to three things: effectiveness, reliability and representation.
“When clinicians do assessments, AI may help gather those measures, but clinicians make the treatment plans, the diagnoses and so on,” says Benway. “AI can be most useful in the short term when it automates what clinicians already do, rather than trying to be the clinician.”