When ChatGPT came out a year and a half ago, many professors immediately worried that students would use it to do their assignments for them, clicking buttons on a chatbot instead of doing the thinking required to answer essay prompts on their own.
But two English professors at Carnegie Mellon University initially reacted differently: They saw the new technology as a way to show their students how to improve their writing.
To be clear, these professors, Suguru Ishizaki and David Kaufer, were also concerned that generative AI tools could easily be misused by students. And that’s still a concern.
But they had ideas about how to set their own guardrails to create a new kind of teaching tool that would help students bring more of their ideas to the table and spend less time figuring out how to format their writing.
“When everyone was worried that AI was going to take over student writing, we said, ‘If we can rein the AI in, it can take over the remedial work that gets in the way of students’ writing and spare them a lot of labor, so they can focus on what’s actually going on in their writing,’” Kaufer recalls.
The professors call this approach “constrained generative AI,” and they have already built a prototype software tool, called myScribe, to try it out in the classroom. It’s being piloted in 10 college courses this semester.
Kaufer and Ishizaki were in a unique position: Together, they’ve been building tools to help teach writing for decades. Their previous system, DocuScope, uses an algorithm to find patterns in students’ writing and then presents those patterns back to students visually.
A key feature of the new tool, called “Notes to Prose,” interfaces with ChatGPT to turn the bullet points and notes a student types into draft sentences and paragraphs.
“The bottleneck in writing is sentence generation, or turning ideas into sentences,” Ishizaki says. “This is a big task. In terms of cognitive load, this part is really costly.”
In other words, it can be difficult, especially for novice writers, to come up with new ideas and remember all the rules for constructing sentences at the same time, much as it is difficult for new drivers to keep track of both the road around them and the mechanics of driving.
“We asked ourselves, ‘Can generative AI really alleviate that burden?'” he says.
Kaufer says that novice writers tend to work fragmented ideas into carefully crafted sentences early in the writing process, only to delete that text later because the ideas don’t fit the final argument or essay.
“They start getting serious about polishing too early,” Kaufer says, “so what we’re trying to do is use AI as a tool to rapidly prototype language as we prototype the quality of thought.”
He says the concept is based on writing research from the 1980s that showed experienced writers spend about 80 percent of their early writing time thinking about the plan and structure of the text as a whole, rather than the sentence itself.
Tame the chatbot
According to the professors, building the “Notes to Prose” feature took a lot of effort.
In early experiments with ChatGPT, Ishizaki says, when they asked it to turn a few typed snippets of notes into sentences, it “started adding a lot of new ideas to the text.” In other words, the tool tended to go further toward completing the essay, adding other information drawn from its vast store of training data.
“So we prepared a very long series of prompts to make sure there were no new ideas or new concepts,” Ishizaki added.
This approach differs from other efforts to use AI in education in that the only source the myScribe bot draws on is the student’s own notes, rather than a broader dataset.
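The professors haven’t published their prompts, but the general idea of constraining a chatbot to a student’s notes can be illustrated with a short sketch. The example below is not myScribe’s implementation; it is a minimal illustration, assuming the OpenAI Python SDK, a hypothetical model choice, and a system prompt that forbids the model from introducing material that isn’t in the notes.

```python
# A minimal sketch of "constrained" notes-to-prose generation.
# This is not myScribe's implementation, just an illustration of the idea of
# restricting a chatbot to the ideas already present in a student's notes.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You turn a student's bullet-point notes into draft prose for ONE paragraph. "
    "Use only the ideas, facts, and claims that appear in the notes. "
    "Do not add new ideas, examples, evidence, or conclusions. "
    "If the notes are unclear, keep the wording tentative rather than inventing details."
)

def notes_to_prose(notes: list[str]) -> str:
    """Draft a single paragraph from the student's notes, without adding new content."""
    response = client.chat.completions.create(
        model="gpt-4o",   # hypothetical model choice for this sketch
        temperature=0.2,  # keep the output close to the notes
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Notes:\n- " + "\n- ".join(notes)},
        ],
    )
    return response.choices[0].message.content

# Example: the draft should restate these points, not extend them.
print(notes_to_prose([
    "novice writers polish sentences too early",
    "turning ideas into sentences carries a heavy cognitive load",
    "experienced writers spend early effort on plan and structure",
]))
```

Because the constraint in a sketch like this lives entirely in the prompt, it can reduce, but not guarantee the absence of, the added ideas and “hallucinations” discussed below.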
Stacie Rohrbach, an associate professor and director of graduate studies at Carnegie Mellon University’s School of Design, sees potential in tools like the one her colleagues created.
“We’ve long encouraged students to build strong outlines and ask themselves, ‘What am I trying to say in each sentence?’” she says, and she hopes the “constrained AI” approach can help in that effort.
And she says she’s already seen student writers abusing ChatGPT, so she thinks some restraint is needed.
“This is the first year I’ve seen a lot of AI-generated text,” she says. “And the ideas get lost. The sentences are well constructed, but they add up to gibberish.”
John Warner, an author and education consultant who is writing a book on AI and writing, said he doubts the myScribe tool will be able to completely prevent “hallucinations” caused by AI chatbots — instances in which the tool inserts false information.
“The people I talk to think it’s probably not possible,” he says. “Hallucination is a feature of how large language models work. Large language models have no judgment. They may not be able to avoid making something up, because they don’t know what they know.”
Kaufer says testing so far has gone well. In a follow-up email interview, he wrote: “It is important to note that ‘Notes to Prose’ works within the confines of a single paragraph. That means that when it strays beyond the boundaries of the notes (or, as you put it, ‘hallucinates’), it is immediately obvious and easy to identify. Concerns about AI hallucination would be far greater if we were talking about larger units of discourse.”
However, Ishizaki acknowledges that their tool may not be able to completely eliminate AI hallucinations: “But we hope to be able to restrain or guide the AI to minimize ‘hallucinations’ and inaccurate or unintended information, and then allow writers to correct them in the review and revision process.”
He described the tool as more than a one-off system: a “vision” for how writing technology should develop. “We’re setting a goal for where writing technology should evolve,” he said. “In other words, the notion of notes-to-prose is integral to our vision for the future of writing.”
But even as a vision, Warner says, he has other dreams for the future of writing.
A technology writer recently pointed out that ChatGPT is like having 1,000 interns, he says.
“On the one hand, that’s great,” Warner says. “On the other hand, 1,000 interns are going to make a lot of mistakes. Early on, interns cost you more time than they save, but the goal is that over time they learn and need less supervision.” But with AI, Warner says, “supervision doesn’t necessarily improve the underlying product.”
In this way, he argues, AI chatbots will ultimately become “very powerful tools that require a great deal of human oversight.”
And he argues that turning notes into prose is a crucial part of the actual process of human writing, one that should be preserved.
“A lot of these tools are trying to make processes efficient that don’t need to be efficient,” he says. “Big things happen when you go from notes to draft. It isn’t just a translation. It’s not simply that these are my ideas and I want to put them on the page; it’s that in putting them on the page, my ideas take shape.”
Kaufer is sympathetic to that argument. “The bottom line is that AI is here to stay and it’s not going away,” he says. “There will be fights over how it’s used, and we’re fighting for responsible use.”