Google CEO Sundar Pichai speaks with Emily Chan at the APEC CEO Summit held at Moscone West in San Francisco, California on November 16, 2023. The APEC Summit is being held in San Francisco until November 17th.
Justin Sullivan | Getty Images News | Getty Images
MUNICH — Rapid developments in artificial intelligence could help strengthen defenses against security threats in cyberspace, Google CEO Sundar Pichai said.
Amid growing concerns about the potentially nefarious uses of AI, Pichai said the technology could help governments and businesses speed up the detection of, and response to, threats from hostile actors.
“It’s natural to be concerned about AI’s impact on cybersecurity. But counterintuitively, I think AI will actually strengthen our cybersecurity defenses,” Pichai told attendees at the Munich Security Conference over the weekend.
Cyberattacks are growing in volume and sophistication as malicious actors increasingly use them to exert power and extort money.
Cyberattacks are estimated to have cost the global economy $8 trillion, a figure expected to reach $10.5 trillion by 2025, according to cyber research firm Cybersecurity Ventures.
A January report from the UK’s National Cyber Security Centre, part of the intelligence agency GCHQ, warned that AI will only increase these threats, lowering the barrier to entry for cyber hackers and enabling more malicious cyber activity, including ransomware attacks.
“AI disproportionately helps the people defending, because you’re getting a tool which can have an impact at scale.”
Sundar Pichai
Google CEO
But Pichai said AI is also shrinking the time defenders need to detect and respond to attacks. That would ease the so-called defender’s dilemma, whereby hackers need to succeed only once to compromise a system, while defenders must succeed every time to protect it.
“AI disproportionately helps the people defending, because you’re getting a tool which can have an impact at scale versus the people who are trying to attack,” he said.
“So in a sense we’re ahead of the competition,” he added.
Last week, Google announced a new initiative offering AI tools and infrastructure investments designed to strengthen online security. It includes Magika, a free, open-source tool that aims to help users detect malware, or malicious software, as well as a white paper proposing measures and research to create guardrails around AI.
Pichai said these tools are already used in the company’s products and internal systems, such as Google Chrome and Gmail.
“AI stands at a critical crossroads, with an opportunity for policymakers, security experts, and civil society to finally tip the balance of cybersecurity away from attackers and toward cyber defenders,” he said.
The announcement coincided with the signing of an agreement at the MSC by major companies to take “reasonable precautions” to prevent AI tools from being used to disrupt democratic votes in 2024’s bumper election year and beyond.
Signatories to the new agreement include Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok, and it sets out a framework for how companies must respond to AI-generated content designed to deceive voters.
It comes as the internet becomes an increasingly important sphere of influence for both individual and state-sponsored malicious actors.
Former US Secretary of State Hillary Clinton on Saturday described cyberspace as “the new battlefield.”
“The technological arms race has been raised to the next level by generative AI,” she said in Munich.
“If you can run a little bit faster than your opponent, you’ll do better. That’s what AI gives us defensively.”
Mark Hughes
President of Security, DXC
In a report last week, Microsoft said that state-sponsored hackers from Russia, China, and Iran have been using large language models (LLMs) from its partner OpenAI to hone their efforts to deceive their targets.
Russia’s military intelligence, Iran’s Revolutionary Guards, and the Chinese and North Korean governments are all said to have relied on the tools.
Mark Hughes, president of security at IT services and consulting firm DXC Technology, told CNBC that malicious actors are increasingly relying on a ChatGPT-inspired hacking tool called WormGPT to perform tasks such as reverse-engineering code.
But he said he is also seeing “significant benefits” from similar tools that allow engineers to detect and stop attacks at speed.
“This allows us to speed things up,” Hughes said last week. “Most of the time in cyber, the advantage the attacker has over you is time, which is common in any conflict situation.
“If you can run a little bit faster than your opponent, you’re going to do better. That’s what the AI is really giving us defensively right now,” he added.