In February, an absurd AI-generated image of a rat penis somehow snuck into a since-retracted Frontiers in Cell and Developmental Biology article. That strange farce now looks like a particularly disturbing example of a more persistent problem in the scientific literature. Journals are at a crossroads over how to respond to researchers who use popular but factually questionable generative AI tools to write manuscripts and create images. Detecting evidence of AI use isn't always easy, but a report from 404 Media this week found what appear to be dozens of published articles partially generated by AI hiding in plain sight. The dead giveaway? Commonly computer-generated phrases.
404 Media searched Google Scholar's public database for the AI-generated phrase "latest knowledge update" and found 115 different papers that appeared to rely on copy-pasted AI model outputs. That string of words is one of many phrases frequently produced by large language models like OpenAI's ChatGPT; the "knowledge update" in this case refers to the cutoff date of the data the model was trained on. Other common generated AI phrases include "As an AI language model" and "Regenerate response," the label on a button in ChatGPT's interface. Outside of academic literature, these AI artifacts are scattered across Amazon product reviews and social media platforms.
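To make the method concrete, here is a minimal sketch of the kind of telltale-phrase search 404 Media ran through Google Scholar, applied instead to a local folder of plain-text papers. The folder name, file layout, and exact phrase list are illustrative assumptions, not details of 404 Media's actual process.

# detect_llm_phrases.py -- an illustrative sketch, not 404 Media's tooling.
# Scans a folder of plain-text papers for boilerplate strings that
# large language models commonly emit.
import pathlib

# Telltale phrases mentioned in the article (lowercased for matching).
TELLTALE_PHRASES = [
    "as an ai language model",
    "latest knowledge update",
    "no access to real-time data",
    "regenerate response",
]

def flag_suspect_papers(directory: str) -> dict[str, list[str]]:
    """Map each file name to the telltale phrases found inside it."""
    hits = {}
    for path in pathlib.Path(directory).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        found = [phrase for phrase in TELLTALE_PHRASES if phrase in text]
        if found:
            hits[path.name] = found
    return hits

if __name__ == "__main__":
    # "papers/" is an assumed local folder of extracted article text.
    for name, phrases in sorted(flag_suspect_papers("papers/").items()):
        print(f"{name}: {', '.join(phrases)}")

Note that naive substring matching like this will also flag papers that legitimately quote such phrases, a caveat that matters for the examples discussed next.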
Some of the papers cited by 404 Media featured the AI text in articles on subjects like quantum entanglement and lithium metal battery performance. Other examples of journal articles that appear to include the common generated AI phrase "no access to real-time data" were shared on X (formerly Twitter) over the weekend. At least some of the examples reviewed by PopSci appeared to be related to research on AI models themselves; in other words, the AI's utterances were part of the subject matter in those cases.
It gets worse. Apparently, if you search for "latest knowledge update" or "no access to real-time data" on Google Scholar, a bunch of AI-generated papers pop up. This is a really bad timeline. pic.twitter.com/YXZziarUSm
— Life After My PhD (@LifeAfterMyPhD) March 18, 2024
While some of these phrases appeared in reliable, well-known journals, 404 Media reports that most of the examples it discovered came from small, so-called "paper mills" that specialize in publishing papers quickly, often for a fee and without scientific scrutiny or thorough peer review. Researchers claim these rapidly proliferating paper mills have contributed to an increase in false or plagiarized academic research in recent years.
Unreliable AI-generated claims could lead to more retractions
The recent examples of apparently AI-generated text appearing in published journal articles come amid a rise in retractions generally. A recent Nature analysis found more than 10,000 retractions among research papers published last year, more than in any previous year measured. Although the majority of those cases did not involve AI-generated content, concerned researchers have long worried that the growing use of these tools will push more false or misleading content through the peer review process. In the embarrassing rat penis incident, bizarre images and nonsensical AI-generated labels like "dissiliced" and "testtomcels" somehow slipped past multiple reviewers without being noticed or reported.
There is good reason to believe that articles published with AI-generated text could become more common. In 2014, the IEEE and Springer removed more than 120 articles from their journals after discovering that they contained meaningless computer-generated language. The prevalence of AI-generated text in journals has almost certainly increased in the decade since, as more sophisticated and easier-to-use tools like OpenAI's ChatGPT have become widely adopted.
A 2023 Nature survey of scientists found that 1,600 respondents, roughly 30% of those surveyed, admitted to using AI tools to help write their manuscripts. And while phrases like "As an AI language model" are dead giveaways of a sentence's large language model (LLM) origins, many subtler uses of the technology are harder to root out. Detection models built to identify AI-generated text have proved frustratingly inadequate.
Supporters of allowing AI-generated text in some cases say it can help non-native speakers express their ideas more clearly and lower language barriers. Others argue that, used responsibly, the tools could shorten publishing timelines and improve overall efficiency. But publishing inaccurate data or fabricated findings generated by these models risks damaging a journal's reputation in the long run. A recent paper published in Current Osteoporosis Reports comparing human-written review articles to ones generated by ChatGPT found that the AI-generated examples were often easier to read. At the same time, the AI-generated reports were filled with inaccurate references.
"Let's be honest, ChatGPT was pretty convincing, even with some of its false statements," study author Melissa Kacena, a professor at the Indiana University School of Medicine, said in a recent interview with Time. "Sometimes it didn't ring any alarm bells, because it used the right syntax and integrated the false statements alongside correct ones in the paragraph."
Journals need to agree on common standards for generative AI
Major publishers still disagree on whether to allow AI-generated text at all. Since 2022, the journal Science has strictly prohibited any use of AI-generated text or images that is not first approved by an editor. Nature, on the other hand, issued a statement last year saying it would not allow AI-generated images or videos in its journals, but would permit AI-generated text in certain scenarios. JAMA currently allows AI-generated text, but requires researchers to disclose where it appears and which specific models were used.
These diverging policies risk creating unnecessary confusion, both for researchers submitting work and for the reviewers tasked with vetting it. Researchers already have an incentive to use whatever tools they can to publish papers quickly and pad their overall publication counts. Agreed-upon standards for AI-generated content across leading journals would set clear boundaries for researchers to follow. By drawing firm lines around certain uses of the technology, or banning outright any attempt to pass off AI-generated factual claims, large, established journals could also further distance themselves from less scrupulous paper mills.