Years after the release of ChatGPT, which raised ethical concerns in education, schools are still grappling with how to adopt artificial intelligence.
Last week’s batch of executive orders from the Trump administration included one promoting “AI leadership.”

The White House order highlights a desire to use AI to boost learning across the country, opening discretionary federal grants to educator training and signaling the federal government’s interest in teaching the technology in K-12 schools.
But even with a new executive order, those interested in incorporating AI into schools will look to states, rather than the federal government, for leadership on how to achieve this.

So are states stepping up for schools? According to some, what’s excluded from their AI policy guidance speaks volumes about their priorities.
Back to the states
Despite President Trump’s emphasis on “leadership” in the executive orders, the federal government is really keeping states in the driver’s seat.
After taking office, the Trump administration rescinded a Biden-era executive order on artificial intelligence that had highlighted the technology’s potential harms, including discrimination, disinformation and threats to national security. The administration also eliminated the Office of Educational Technology, the main source of federal guidance for schools. And it gutted the Office for Civil Rights, another core agency that helps schools navigate AI use.
Even under the Biden administration’s plan, states would have had to lead schools’ attempts to teach and use AI, says Reg Leichty, founder and partner of Foresight Law + Policy Advisors. Now, with the new federal direction, that’s even more true.
Many states are already stepping into that role.
In March, Nevada released its own state guidance counseling schools on how to responsibly incorporate AI. It joined the more than half of states (28, including the territory of Puerto Rico) that have released such documents.
The guidance is voluntary, but it offers schools important direction on how to navigate the thorny pitfalls AI raises and how to ensure the technology is used effectively, experts say.
The guidance also signals to schools that AI is important, says Pat Yongpradit, who heads TeachAI, an advisory coalition of state and global government agencies. Yongpradit’s organization created a toolkit that he says was used in at least 20 states when developing school guidelines.

(One member of TeachAI’s executive committee is ISTE. EdSurge is an independent newsroom that shares a parent organization with ISTE. Learn more about EdSurge’s ethics and policies and supporters here.)
So, what’s in the guidance?
A recent review by the Center for Democracy & Technology found that these state guidance documents broadly agree on the benefits of AI for education. In particular, they tend to emphasize AI’s usefulness for boosting personalized learning and for easing educators’ burdensome administrative tasks.

The documents also broadly agree on the technology’s dangers, particularly that it threatens privacy, weakens students’ critical thinking skills and perpetuates bias. They also emphasize the need for human oversight of these emerging technologies and note that AI-detection software is unreliable.
At least 11 of the documents also address AI’s promise to make education more accessible for students with disabilities and English learners, the nonprofit found.
The biggest takeaway is that both red and blue states have issued these guidance documents, says Maddie Dwyer, a policy analyst at the Center for Democracy & Technology.

It’s a rare flash of bipartisan agreement.
“I think this is very important, because it’s not just a few states that are doing this work,” Dwyer says, adding that it suggests widespread recognition across states of AI’s bias, privacy, harm and reliability issues. That’s “encouraging,” she says.
However, even with the high level of agreement among state guidance documents, the CDT argued that states missed key topics, particularly how to help schools navigate deepfakes and how to guide their communities through conversations about the technology.

TeachAI’s Yongpradit disagrees that these topics have been overlooked.
It’s risky to try to be exhaustive, because new AI issues are always popping up, he says. Nevertheless, some state documents show robust community involvement, and at least one addresses deepfakes, he says.
However, some experts perceive a bigger problem.
Does silence speak volumes?
Relying on states to create their own rules for this emerging technology increases the likelihood of differing rules across states, even if the documents appear broadly aligned.

Some companies would prefer to be regulated by a uniform set of rules rather than deal with a patchwork of laws across states, according to Foresight Law + Policy’s Leichty. But absent firm federal rules, he says, it’s still valuable to have these documents.
But for some observers, the most troubling aspect of the state guidelines is what’s not in them.
Clarence Okoh, senior attorney at the Center on Privacy & Technology at Georgetown University Law Center, acknowledges that these state documents agree on some of the fundamental issues with AI.
But when you drill down into the details, he adds, no state addresses police surveillance in its AI guidance for schools.
Across the country, police use technologies such as facial recognition tools in schools to track and discipline students. Surveillance is widespread. For example, a Democratic senators’ investigation into student monitoring services turned up documents from one such company, GoGuardian, claiming that approximately 7,000 schools across the country used its product as of 2021, and that was just one company. These practices exacerbate the school-to-prison pipeline and deepen inequality by exposing students and families to greater contact with police and immigration authorities, Okoh believes.
States have introduced laws to curb AI surveillance. But in Okoh’s eyes, these laws do little to prevent abuses, and in many cases they even exempt police from their restrictions. Indeed, he points to just one bill in this legislative session, in New York, that would ban biometric surveillance technologies in schools.
Perhaps the state AI guidance that comes closest to raising the issue is Alabama’s, which notes the risks presented by facial recognition technology in schools, but it does not discuss policing, said Dwyer of the Center for Democracy & Technology.
Why aren’t states highlighting this in their guidance? State lawmakers are likely focused solely on generative AI when thinking about the technology, and are not weighing concerns about surveillance technology, speculates Okoh of the Center on Privacy & Technology.

Given the shifting federal context, that makes sense.
According to Okoh, during the last administration there were some attempts to rein in this trend of policing students. For example, the Department of Justice reached a settlement with the Pasco County school district in Florida over claims that the district’s predictive policing program, which had access to student records, discriminated against students with disabilities.

But now, civil rights agencies aren’t positioned to continue that work.
Last week, the White House also issued an executive order, “Reinstating Common Sense School Discipline Policies,” targeting what Trump labels “racially preferential policies,” which observers like Okoh understood as efforts to combat the disproportionate punishment of Black and Hispanic students.
Combined with the Office for Civil Rights’ new focus on investigating these issues, the discipline executive order makes it difficult to challenge states’ use of AI technologies for discipline, a climate Okoh describes as “hostile” to civil rights.
“The rise of AI surveillance in public education is one of the most urgent civil and human rights challenges confronting public schools today,” Okoh told EdSurge. “Unfortunately, state AI guidance largely ignores this crisis because [states] have become [too] distracted by the glossy appeal of AI chatbots to notice the rise of mass surveillance and digital authoritarianism in their schools.”