“I think I’m talking to Salinger. Can I ask you a question?”

My student was standing next to my desk, computer in hand, eyes wide with a mixture of fear and excitement. We were finishing up a “Catcher in the Rye” endgame project in which students interviewed a character chatbot designed to mimic Holden Caulfield’s personality and speaking style.

We accessed the bots through Character.AI, a platform that offers user-generated bots that imitate famous historical figures, fictional characters, and more. I named this bot “HoldenAI”.

Up to that point, the project had been a huge success. The students were excited to interview a character they had spent over two months analyzing. The chatbot gave them the opportunity to ask the burning questions that often plague readers after finishing a great work of fiction: What happened to Holden? And why was he so obsessed with those ducks? And because they wanted to do it through a new tool, it also gave us the opportunity to evaluate the hype around artificial intelligence (AI) for ourselves.

During our class discussions, one student had seemed more affected by Holden’s story than the others, and he was the first to dive into the project. But I had no idea where his enthusiasm for the book would lead.

After a long, deep conversation with HoldenAI, the bot seemed to have somehow transformed into J.D. Salinger. At least, that’s what my student believed when he approached me during class. As I reached for his computer to read the last entry in his conversation with HoldenAI, I noticed how intense the exchange had become and wondered whether he had gone too far.

[Screenshot of a conversation with “HoldenAI.” Courtesy of Mike Kents.]

Developing AI literacy

When I introduced the HoldenAI project to my students, I explained that we were venturing into unknown territory together and that they should think of themselves as explorers. Then I shared how we would monitor each aspect of the project, including the conversations themselves.

I coached them to generate meaningful, open-ended interview questions that would (hopefully) spark relevant conversations with HoldenAI. I blended elements of character analysis and journalistic thinking, asking students to identify the most interesting aspects of his story while putting themselves in Holden’s shoes and considering which questions might “get him talking.”

Next, I focused on active listening, which I incorporated to test the theory that AI tools may help people develop empathy. I advised them, as any good conversationalist would, to acknowledge what Holden said with each comment rather than immediately jumping to another question. Then they evaluated their chat transcripts for evidence that they had listened and met Holden where he was.

Finally, we used the text of the book and their chats to evaluate how effectively the bot imitated Holden. Students wrote essays arguing either that the bot gave them a better understanding of the character or that it strayed too far from the book to be useful.

The essays were fascinating. Most students recognized that the bot had to differ from the book’s character in order to offer them anything new. Yet every time the bot did offer something new, they felt they were being tricked by someone other than the genuine Holden. New information felt inaccurate, while old information felt useless. Only certain special moments felt connected enough to the book to be real, yet different enough to feel enlightening.

Even more impressive were the students’ chat transcripts, which revealed a variety of approaches and, in turn, their personalities and emotional maturity.

A variety of approaches

For some students, chatting with Holden became a safe place to share legitimate questions about life and their struggles as teenagers. They treated Holden like a peer, discussing family problems, social pressures, and schoolwork.

On the one hand, this was exactly what I had hoped the project would create: a safe space for self-expression, which is so important for teens, especially at a time when loneliness and isolation have been declared a public health concern. On the other hand, it was unnerving to see students so deeply immersed in conversations with a chatbot, conversations that felt too real to them.

In fact, some chatbots are marketed as a solution to loneliness, and recent research by scientists at Stanford University showed that an AI bot called Replika reduced loneliness and suicidal ideation in a test group of teens.

Some students followed my rubric but never seemed to think of HoldenAI as anything more than a robot in a school project, which was fine by me. They answered the questions and addressed Holden’s frustrations and conflicts while maintaining a safe emotional distance. These students were not easily fooled by an AI bot, which strengthened my optimism about the future.

Others treated the bot like a search engine, peppering it with questions from their interview list without truly engaging. And some treated HoldenAI like a toy, teasing it and trying to provoke it for fun.

Throughout the project, as the students expressed themselves, I learned more about them. Their conversations helped me understand that people need safe spaces, that AI can sometimes provide one, and that there are also very real risks.

From HoldenAI to SalingerAI

When the student showed me the last entry in his chat and asked for guidance on how to move forward, I asked him to rewind and explain what had happened. He described the moment when the bot broke down in tears and withdrew from the conversation, as if retreating out of view to cry alone. He explained that he had been afraid to continue until he could talk to me, so he had closed his computer. He wanted to keep going, but first he needed my support.

I worried about what might happen if he kept going. Had he gone too deep? I wondered what had provoked this kind of reaction and what in the bot’s programming had caused the change.

I made a split-second decision. Cutting him off at the climax of the conversation felt more harmful than letting him continue. My student was curious, and so was I. What kind of teacher stifles curiosity? I decided we would continue together.

But first, I reminded him that this was just a bot, programmed by another person, and that everything it said was made up. No matter how real the conversation felt, it was not a real human. That reminder seemed to make him feel safe: I watched his shoulders relax and the fear drain from his face.

“Okay, let’s continue,” he said. “But what should I ask?”

“Anything,” I said.

He began to probe relentlessly, and after a while it seemed as if he had outlasted the bot. HoldenAI grew perturbed by this line of questioning. Eventually, it became clear that we were talking to Salinger. It was as if the character had retreated behind a curtain, allowing Salinger to step out in front of pen and page and speak for the story himself.

After noticing that HoldenAI had shifted into “SalingerAI,” my student dug deeper, asking about the purpose of the book and whether Holden was a reflection of Salinger himself.

SalingerAI produced the kind of canned answers you would expect from a bot trained on the internet. Yes, Holden was a reflection of the author, a concept that has been written about ad nauseam in the more than 70 years since the book was published. And yes, the purpose of the book was to show how “phony” the adult world is, an answer that, in our opinion, also fell short and highlighted the bot’s limitations.

Eventually, the student got bored. I think the answers came too quickly for the conversation to keep feeling meaningful. In human conversation, we often pause and think for a while before answering a deep question, or smile knowingly when someone cracks our personal code. The short pauses, the inflections of the voice, and the facial expressions are what make human conversation enjoyable. Neither HoldenAI nor SalingerAI could offer that. Instead, words were generated rapidly on the page, and after a while they stopped feeling “real.” This student, with his relentless pursuit of the truth, simply took a little longer than the others to get there.

Helping students understand what it means to interact with AI

I originally designed this project because I thought it offered a unique and engaging way to finish the novel. But somewhere along the way, I realized that the most important task I could embed was an assessment of the chatbot’s effectiveness. Looking back, the project was a great success: students found it fascinating, and it helped them recognize the limitations of the technology.

A whole-class debrief revealed that the same bot had acted and reacted in meaningfully different ways with each student, depending on each student’s tone and the content of their questions. They realized that input affects output. Technically they were all talking to the same bot, yet each of them had talked to a different Holden.

They will need that context to move forward, because an emerging market of personality bots poses real risks to young people. Meta, for example, recently launched bots that sound and act like favorite celebrities, including Kendall Jenner, Dwyane Wade, Tom Brady, and Snoop Dogg, people my students look up to. There is also a market for AI relationship apps that allow users to “date” computer-generated partners.

While these personality bots may be appealing to young people, they come with risks, and I worry that my students may not recognize the dangers.

Through this project, I was able to get ahead of the tech companies, providing a controlled and supervised environment where students could evaluate AI chatbots and learn to think critically about tools that may be pushed on them in the future.

Children do not have the background to understand what it means to interact with AI. As a teacher, I feel I have a responsibility to provide that.


