Zahra Bahrololoumi, CEO of Salesforce UK and Ireland, speaks at the company’s annual Dreamforce conference on September 17, 2024 in San Francisco, California.
David Paul Morris | Bloomberg | Getty Images
LONDON — The CEO of Salesforce’s UK business wants the Labour government to regulate artificial intelligence, but says it is important that policymakers don’t treat all technology companies developing AI systems the same way.
Salesforce UK and Ireland CEO Zahra Bahrololoumi told CNBC in London that the US enterprise software giant takes all legislation “seriously.” However, she added that any UK proposals aimed at regulating AI must be “proportionate and co-ordinated.”
Bahrololoumi pointed to a difference between companies like OpenAI that develop consumer-facing AI tools and companies like Salesforce that build enterprise AI systems. Consumer-grade AI systems such as ChatGPT face fewer restrictions than enterprise-grade products, which must meet higher privacy standards and comply with corporate guidelines, she said.
“What we want is a bill that is targeted, proportionate and fit for purpose,” Bahrololoumi told CNBC on Wednesday.
“There is clearly a difference between organizations that operate consumer technology and those that operate enterprise technology, and we each play different roles within the ecosystem, [but] we are a B2B organization,” she said.
A spokesperson for the UK’s Department for Science, Innovation and Technology (DSIT) said the planned AI rules would not impose “overarching rules on the use of AI,” but would instead be aimed at the small group of companies developing the most powerful AI models.
That suggests the rules may not apply to companies like Salesforce, which does not build its own foundational models in the way OpenAI does.
A DSIT spokesperson added: “We recognize the power of AI to accelerate growth and improve productivity, and we are fully committed to supporting the development of our AI sector as we speed up the adoption of the technology across the economy.”
Data security
Salesforce touts the ethics and safety considerations built into its Agentforce AI technology platform, which lets enterprise organizations launch their own AI “agents”: essentially autonomous digital workers that carry out tasks across functions such as sales, service and marketing.
For example, a feature called “zero retention” means that no customer data can be stored outside of Salesforce. As a result, generative AI prompts and outputs are not stored in Salesforce’s large language models, the kind of programs that underpin today’s genAI chatbots such as ChatGPT.
With consumer AI chatbots like ChatGPT, Anthropic’s Claude and Meta’s AI assistant, it is unclear what data is used for training and where that data is stored, Bahrololoumi said.
“To train these models, you need an awful lot of data,” she told CNBC. “So with things like ChatGPT and these consumer models, you don’t know what you’re using.”
Even Microsoft’s Copilot, which is sold to enterprise customers, comes with heightened risks, Bahrololoumi said, pointing to a Gartner report that criticized the tech giant’s AI personal assistant over the security risks it poses to organizations.
OpenAI and Microsoft did not immediately respond to CNBC’s requests for comment.
AI concerns ‘apply to all levels’
Bola Rotibi, head of enterprise research at analyst firm CCS Insight, told CNBC that while enterprise-focused AI suppliers are “better aware of enterprise-level requirements” around security and data privacy, it would be a mistake to assume that regulatory scrutiny will fall only on consumer firms and spare business-focused ones.
“All the concerns around consent, privacy, transparency, data sovereignty, etc. apply at all levels, whether you’re a consumer or a business,” Rotibi told CNBC in an email, pointing to the GDPR (General Data Protection Regulation), the data privacy rules that became law in the UK in 2018.
But Rotibi said regulators may feel “more confident” about the AI compliance measures employed by enterprise application providers like Salesforce, because they understand what delivering enterprise-grade software entails.
“There will likely be a more nuanced vetting process for AI services from widely deployed enterprise solution providers like Salesforce,” she added.
Bahrololoumi spoke to CNBC at Salesforce’s Agentforce World Tour in London, an event designed to promote the use of the company’s new agentic AI technology by partners and customers.
Her comments came after British Prime Minister Keir Starmer’s Labour government held off on introducing an AI bill in the King’s Speech, which outlines the government’s priorities for the coming months. At the time, the government said it planned to establish “appropriate legislation” on AI, but gave no further details.