The ChatGPT and OpenAI emblem and website.
The U.K. government on Wednesday published recommendations for the artificial intelligence industry, outlining an all-encompassing approach for regulating the technology at a time when it has reached frenzied levels of hype.
In a white paper to be put forward to Parliament, the Department for Science, Innovation and Technology (DSIT) will outline five principles it wants companies to follow. They are: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.
Rather than establishing new regulations, the government is calling on regulators to apply existing regulations and inform companies about their obligations under the white paper.
It has tasked the Health and Safety Executive, the Equality and Human Rights Commission, and the Competition and Markets Authority with coming up with “tailored, context-specific approaches that suit the way AI is actually being used in their sectors.”
“Over the next twelve months, regulators will issue practical guidance to organisations, as well as other tools and resources like risk assessment templates, to set out how to implement these principles in their sectors,” the government said.
“When parliamentary time allows, legislation could be introduced to ensure regulators consider the principles consistently.”
Maya Pindeus, CEO and co-founder of AI startup Humanising Autonomy, said the government’s move marked a “first step” toward regulating AI.
“There does need to be a bit of a stronger narrative,” she said. “I hope to see that. This is kind of planting the seeds for this.”
However, she added, “Regulating technology as technology is incredibly difficult. You want it to advance; you don’t want to hinder any advancements when it impacts us in certain ways.”
The arrival of the recommendations is timely. ChatGPT, the popular AI chatbot developed by the Microsoft-backed company OpenAI, has driven a wave of demand for the technology, and people are using the tool for everything from penning school essays to drafting legal opinions.
ChatGPT has already become one of the fastest-growing consumer applications of all time, attracting 100 million monthly active users as of February. But experts have raised concerns about the negative implications of the technology, including the potential for plagiarism and discrimination against women and ethnic minorities.
AI ethicists are worried about biases in the data used to train AI models. Algorithms have been shown to be skewed in favor of men — especially white men — putting women and minorities at a disadvantage.
Fears have also been raised about the possibility of jobs being lost to automation. On Tuesday, Goldman Sachs warned that as many as 300 million jobs could be at risk of being wiped out by generative AI products.
The government wants companies that incorporate AI into their businesses to ensure they provide an ample level of transparency about how their algorithms are developed and used. Organizations “should be able to communicate when and how it is used and explain a system’s decision-making process in an appropriate level of detail that matches the risks posed by the use of AI,” the DSIT said.
Companies should also offer users a way to contest decisions made by AI-based tools, the DSIT said. Platforms with user-generated content, such as Facebook, TikTok and YouTube, often use automated systems to remove content flagged as violating their guidelines.
AI, which is believed to contribute £3.7 billion ($4.6 billion) to the U.K. economy each year, should also “be used in a way which complies with the UK’s existing laws, for example the Equality Act 2010 or UK GDPR, and must not discriminate against individuals or create unfair commercial outcomes,” the DSIT added.
On Monday, Secretary of State Michelle Donelan visited the offices of AI startup DeepMind in London, a government spokesperson said.
“Artificial intelligence is no longer the stuff of science fiction, and the pace of AI development is staggering, so we need to have rules to make sure it is developed safely,” Donelan said in a statement Wednesday.
“Our new approach is based on strong principles so that people can trust businesses to unleash this technology of tomorrow.”
Lila Ibrahim, chief operating officer of DeepMind and a member of the U.K.’s AI Council, said AI is a “transformational technology,” but that it “can only reach its full potential if it is trusted, which requires public and private partnership in the spirit of pioneering responsibly.”
“The UK’s proposed context-driven approach will help regulation keep pace with the development of AI, support innovation and mitigate future risks,” Ibrahim said.
The move follows efforts by other jurisdictions to establish their own regimes for regulating AI. In China, the government has required tech companies to hand over details on their prized recommendation algorithms, while the European Union has proposed regulations of its own for the industry.
Not everyone is convinced by the U.K. government’s approach to regulating AI. John Buyers, head of AI at the law firm Osborne Clarke, said the move to delegate responsibility for supervising the technology among regulators risks creating a “complicated regulatory patchwork full of holes.”
“The risk with the current approach is that a problematic AI system will need to present itself in the right format to trigger a regulator’s jurisdiction, and moreover the regulator in question will need to have the right enforcement powers in place to take decisive and effective action to remedy the harm caused and generate a sufficient deterrent effect to incentivise compliance in the industry,” Buyers told CNBC via email.
By contrast, the EU has proposed a “top down regulatory framework” when it comes to AI, he added.