A group of 20 major technology companies on Friday announced a joint effort to combat AI misinformation in this year’s elections.
The industry is specifically targeting deepfakes, which use deceptive audio, video, and images to imitate key figures in democratic elections or to spread false voting information.
Microsoft, Meta, Google, Amazon, IBM, Adobe, and chip designer Arm all signed the agreement. Artificial intelligence startups OpenAI, Anthropic, and Stability AI also joined the group, as did social media companies such as Snap, TikTok, and X.
Technology platforms are gearing up around the world for major elections that will affect more than 4 billion people in more than 40 countries. The rise of AI-generated content has raised serious concerns about election-related misinformation, with the number of deepfakes created increasing 900% year-over-year, according to data from machine learning company Clarity.
Election misinformation has been a major issue since the 2016 presidential campaign, when Russian actors found cheap and easy ways to spread inaccurate content across social platforms. Today, lawmakers are even more concerned about the rapid rise of AI.
“There’s reason to have serious concerns about how AI could be used to mislead voters on the campaign trail,” Josh Becker, a Democratic state senator in California, said in an interview. “It’s encouraging to see some companies coming to the table, but we don’t see enough specifics at this point, so we will probably need legislation that sets clear standards.”
Meanwhile, the detection and watermarking technologies used to identify deepfakes aren’t advancing fast enough to keep up. For now, the companies are agreeing on what amounts to a set of technical standards and detection mechanisms.
There are many layers to this problem, and there is a long way to go to address it effectively. Services that claim to identify AI-generated text, such as essays, for example, have been shown to exhibit bias against people whose native language is not English. And detection is no easier for images and videos.
Even if the platforms behind AI-generated images and videos agree to embed safeguards like invisible watermarks and certain types of metadata, there are ways to circumvent them. In some cases, even a screenshot can fool detectors.
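Metadata-based provenance in particular is fragile. As a minimal sketch (assuming the Pillow imaging library is installed, and using a hypothetical file named ai_generated.png that carries provenance metadata), simply re-encoding an image, which is effectively what taking a screenshot does, discards the embedded fields a detector would look for:

```python
# Minimal sketch: re-encoding strips metadata-based provenance.
# Assumes Pillow is installed; "ai_generated.png" is a hypothetical
# file carrying provenance fields in its PNG text chunks.
from PIL import Image

original = Image.open("ai_generated.png")
print(original.info)  # provenance fields, if present, show up here

# Re-save the pixels, as a screenshot or re-upload effectively does.
original.convert("RGB").save("reencoded.jpg")

reencoded = Image.open("reencoded.jpg")
print(reencoded.info)  # basic JPEG fields only; provenance is gone
```

Watermarks embedded in the pixels themselves survive this step better than metadata, which is one reason companies pursue both approaches, though even pixel-level marks can be degraded by cropping, rescaling, or heavy compression.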
Additionally, the invisible signals that some companies include in AI-generated images have yet to reach many audio and video generators.
News of the agreement comes a day after ChatGPT creator OpenAI announced Sora, its new model for AI-generated video. Sora works similarly to DALL-E, OpenAI’s image-generation tool: the user describes a desired scene, and Sora returns a high-resolution video clip. Sora can also create video clips inspired by still images, extend existing videos, and fill in missing frames.
Companies participating in the agreement signed on to eight high-level commitments, including assessing model risks, “detecting and addressing” the distribution of such content on their platforms, and providing the public with transparency into those processes. As with most voluntary initiatives in the technology industry and elsewhere, the release specifies that the commitments apply only “in connection with the services each company provides.”
“Democracy is built on safe and secure elections,” Kent Walker, Google’s president of global affairs, said in a release. He said the agreement reflects the industry’s efforts to address “AI-generated election misinformation that undermines trust.”
Christina Montgomery, IBM’s chief privacy and trust officer, said in the release that in this critical election year, “concrete and cooperative measures are needed to protect people and society from the amplified risks of deceptive AI-generated content.”