When artificial intelligence tools like ChatGPT were first released in 2022, Gillian Hayes, vice provost for academic personnel at the University of California, Irvine, remembers people setting rules around AI without a good understanding of what it really was or how it would be used.
The moment felt similar to the industrial and agricultural revolutions, Hayes said.
“People were trying to make decisions with what they could get.”
Seeing the need for clearer data, Hayes and her colleague Candice L. Odgers, professor of psychological science and informatics at UC Irvine, launched a national survey to investigate AI use among teens, parents, and educators. Their goal was to collect a wide range of data that could be used to track how AI use and attitudes change over time.
The researchers partnered with Foundry10, an educational research institute, to survey 2,826 parents, adolescents between the ages of 9 and 17, and U.S. K-12 educators. They then ran a series of focus groups with parents, students, and educators to better understand what participants knew about AI, how they were using it, and how it affected their daily lives. The researchers ended data collection in the fall of 2024 and released some of their findings earlier this year.
The results surprised Hayes and her team. They found that many of the teenagers in the study were aware of the concerns and dangers surrounding AI but had no guidelines for using it properly. Without that guidance, AI can feel confusing and complex, the researchers say, preventing both adolescents and adults from using the technology ethically and productively.
A moral compass
Hayes was particularly surprised by how rarely the young people in the survey used AI, and by how they used it. Only about 7% used AI every day, with the majority using it through search engines rather than chatbots.
Many teenagers in the study also had “strong moral compasses,” Hayes said, and grappled with the ethical dilemmas that come with using AI, especially in the classroom.
Hayes recalls one teenage participant who self-published a book with AI-generated images on the cover. The book also contained some AI-generated content, but it was primarily original. The participant’s mother, who had helped publish the book, then discussed the use of AI with her child: it was okay to use AI in this scenario, the mother said, but not to write school assignments.
Young people often aren’t trying to cheat; they don’t necessarily know what cheating looks like with AI, Hayes says. For example, some wondered why they were allowed to have classmates review their papers but weren’t allowed to use Grammarly, an AI tool that checks essays for grammar errors.
“For the majority of [adolescents], they know that cheating is bad,” Hayes says. “I don’t think many teachers and parents know.”
The teenagers in the study were also concerned about how using AI would affect their ability to develop critical thinking skills, says Jennifer Rubin, a senior researcher at Foundry10 who led the study. They recognized that AI is a technology they will likely need throughout their lifetimes, but worried that leaning on it could hinder their education and careers, she says.
“There is a great concern that generative AI will affect skill development at a time that is truly developmentally important for young people,” Rubin adds. “And they know this, too.”
A pleasant surprise on equity
The findings did not show an equity gap among AI users, which was another surprise for Hayes and her team.
Experts often hope that new technologies will improve equity and expand access for students in rural communities, from low-income families, and from other marginalized groups, Hayes says. Usually, however, they do the opposite.
In this study, however, there appeared to be little disparity along those lines. While it is difficult to determine whether this is unique to the participants who completed the survey, Hayes suspects it may be related to AI’s novelty.
Usually, college-educated or wealthy parents teach their children about new technologies and how to use them, Hayes says. With AI, however, parents can’t pass on that knowledge, because no one fully understands how it works.
“In the gen-AI world, no one has walked that path yet, so I don’t think there’s any reason to believe that your average high-income or highly educated parent has the skills to really coach their child in this space,” says Hayes. “Everyone is working at a reduced capacity.”
Throughout the research, some parents seemed unsure of AI’s capabilities, Rubin adds. Some believed it was just a search engine, while others had no idea it could produce false output.
There were also differences of opinion on how to discuss AI with children. Some parents wanted to fully embrace the technology, while others were cautiously supportive. Some thought young people should avoid AI entirely.
“Parents aren’t [all] coming at this with a similar mindset,” Rubin says, “or a similar understanding [of the technology].”
Establishing rules
Most of the parents in the study agreed that school districts should have clear policies on when and how AI can be used, Rubin said. Crafting such policies can be challenging, but it is one of the best ways to help students understand how to use the technology safely, she says.
Rubin pointed to districts that have begun implementing color-coded systems for AI use. Green use might mean working with AI to brainstorm or develop essay ideas. Yellow use might fall into more of a gray area, such as asking for a step-by-step guide to solving a math problem. Red use is inappropriate or unethical, such as asking ChatGPT to write an essay based on an assigned prompt.
Many districts also host listening sessions with parents and families to help them discuss AI with their children.
“It’s a fairly new technology, and for families who don’t use these tools much, there’s a lot of mystery and a lot of questions around them,” Rubin says. “They want to be able to follow some guidance provided by their educators.”
Karl Rectanus, chair of the EDSAFE AI Industry Council, which promotes the safe use of AI, encourages using the SAFE framework when approaching questions about AI. The framework asks whether a given use is safe, accountable, fair, and effective, he says, and can be adopted by large organizations as well as by individual classrooms and teachers.
Teachers already carry a lot of responsibility, so asking them to be technical experts in a technology that even its developers don’t fully understand isn’t realistic, says Rectanus. Giving them simple questions to consider can “help people move forward when they don’t know what to do.”
Rather than banning AI, educators need to find ways to teach students to use it safely and effectively, Hayes says. Otherwise, students will be unprepared when they eventually enter the workforce.
For example, at UC Irvine, one faculty member assigns oral exams to computer science students, requiring them to submit the code they wrote and then explain how it works. Students can still use AI to help write code, as professional software developers often do, but they need to understand what the technology has written and how it works, says Hayes.
“I want us all to be adaptable and really think, ‘What are the learning outcomes here, and how can I teach and assess them in a world where generative AI exists?’” Hayes says. “I don’t think it’s going anywhere.”