Sundar Pichai, Alphabet CEO.
Alphabet Chief Executive Sundar Pichai on Wednesday committed to an “AI pact” at a meeting with European Union officials to discuss disinformation around elections and Russia’s war in Ukraine.
In a meeting with European Commission internal market chief Thierry Breton, Pichai said Alphabet’s Google will work with others on self-regulation to ensure that AI products and services are developed responsibly.
In a tweet on Wednesday, Breton said that Google and other major #AI developers, both European and non-European, had agreed to work together to voluntarily develop an “AI pact” ahead of the legal deadline for AI regulation.
Breton added that he expects technology in Europe to respect all EU rules on data protection, online safety and artificial intelligence, stressing that compliance in Europe is not a matter of choice, and said Pichai recognizes this and has committed to complying with all EU regulations.
The development hints at how tech giants are trying to placate politicians and stay ahead of looming regulation. Earlier this month, the European Parliament gave the go-ahead to a landmark package of rules on AI, including provisions to ensure that training data for tools like ChatGPT does not violate copyright law.
The rules take a risk-based approach to regulating AI, banning applications deemed “high risk,” such as facial recognition, and imposing strict transparency requirements on applications with limited risk.
As regulators grow more concerned about some of the risks surrounding AI, tech industry leaders, politicians and academics are sounding alarm bells about how sophisticated new forms of AI, such as so-called generative AI and the large language models that power them, have become, and about how they will be used.
These tools let users create new content, such as essays or poems in the style of William Wordsworth, simply by typing a prompt.
They have raised concerns in particular over their potential to disrupt labor markets and generate disinformation.
ChatGPT is the most popular generative AI tool, attracting more than 100 million users since its launch in November. Google released its own rival, Bard, in March, and announced an advanced new language model known as PaLM 2 earlier this month.
In a separate meeting with European Commission Vice-President Vera Jourova, Pichai promised to ensure that Google’s AI products are developed with safety in mind.
According to a readout of the meeting shared with CNBC, Pichai and Jourova agreed that AI could affect disinformation tools and that everyone should prepare for a new wave of AI-generated threats.
Some of that effort could be directed toward labeling AI-generated content and ensuring transparency. Pichai said Google’s AI models already have safeguards built in, and stressed that the company continues to invest in the area to ensure the safe deployment of new products.
Tackling Russian Propaganda
The meeting between Pichai and Jourova focused on Russia’s war in Ukraine and disinformation surrounding elections, according to the statement.
According to the readout of the meeting, Jourova “shared concerns about the proliferation of pro-Kremlin war propaganda and disinformation,” including on Google’s products and services. The officials also discussed access to information in Russia.
Jourova called on Pichai to take “immediate action,” as Russian independent media outlets are facing problems monetizing their content on YouTube. Pichai agreed to follow up on the matter, according to the statement.
Jourova also “highlighted the risk of disinformation in the electoral process of the EU and its member states.” The next European Parliament elections are due in 2024, and local and national elections will also be held across the EU this year and next.
However, Jourova praised Google’s “commitment” to the Code of Practice on Disinformation, a self-regulatory framework first published in 2018 and revised since. The code is intended to encourage online platforms to tackle misinformation, but Jourova said that under the framework, “further efforts are needed to improve reporting.”
Signatories to the code are required to report on how they have implemented measures to counter disinformation.