AI can ‘disproportionately’ help defend against cybersecurity threats, Google CEO Sundar Pichai says

Google CEO Sundar Pichai speaks in conversation with Emily Chang during the APEC CEO Summit at Moscone West on November 16, 2023 in San Francisco, California. The APEC summit is being held in San Francisco and runs through November 17.

MUNICH — Rapid developments in artificial intelligence could help strengthen defenses against security threats in cyberspace, according to Google CEO Sundar Pichai.

Amid growing concerns about the potentially nefarious uses of AI, Pichai said AI tools could help governments and companies speed up the detection of — and response to — threats from hostile actors.

“We are right to be worried about the impact on cybersecurity. But AI, I think actually, counterintuitively, strengthens our defense on cybersecurity,” Pichai told delegates at the Munich Security Conference late last week.

Cybersecurity attacks have been growing in volume and sophistication as malicious actors increasingly use them as a way to exert power and extort money.

Cyberattacks cost the global economy an estimated $8 trillion in 2023 — a sum that is set to rise to $10.5 trillion by 2025, according to cyber research firm Cybersecurity Ventures.

A January report from Britain’s National Cyber Security Centre — part of GCHQ, the country’s intelligence agency — said that AI would only increase those threats, lowering the barriers to entry for cyber hackers and enabling more malicious cyber activity, including ransomware attacks.

However, Pichai said AI was also shortening the time defenders need to detect attacks and react to them. He said this would ease what’s known as the defender’s dilemma: cyber hackers have to succeed only once to breach a system, whereas a defender has to succeed every time in order to protect it.

“AI disproportionately helps the people defending because you’re getting a tool which can impact it at scale versus the people who are trying to exploit,” he said.

“So, in some ways, we are winning the race,” he added.

Google last week announced a new initiative offering AI tools and infrastructure investments designed to boost online security. A free, open-source tool dubbed Magika aims to help users detect malware — malicious software — the company said in a statement, while an accompanying white paper proposes measures and research to create guardrails around AI.

Pichai said the tools were already being put to use in the company’s products, such as Google Chrome and Gmail, as well as its internal systems.

“AI is at a definitive crossroads — one where policymakers, security professionals and civil society have the chance to finally tilt the cybersecurity balance from attackers to cyber defenders,” the company said in the statement.

The release coincided with the signing of a pact by major companies at the Munich Security Conference to take “reasonable precautions” to prevent AI tools from being used to disrupt democratic votes in 2024’s bumper election year and beyond.

Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, TikTok and X were among the signatories of the new agreement, which includes a framework for how companies must respond to AI-generated “deepfakes” designed to deceive voters.

It comes as the internet becomes an increasingly important sphere of influence for both individuals and state-backed malicious actors.

Former U.S. Secretary of State Hillary Clinton on Saturday described cyberspace as “a new battlefield.”

“The technology arms race has just gone up another notch with generative AI,” she said in Munich.

A report published last week by Microsoft found that state-backed hackers from Russia, China and Iran have been using large language models (LLMs) from its partner OpenAI to enhance their efforts to trick targets.

Russian military intelligence, Iran’s Revolutionary Guard, and the Chinese and North Korean governments were all said to have relied on the tools.

Mark Hughes, president of security at IT services and consulting firm DXC Technology, told CNBC that bad actors were increasingly relying on a ChatGPT-inspired hacking tool called WormGPT to conduct tasks like reverse engineering code.

However, he said that he was also seeing “significant gains” from similar tools that help engineers detect and reverse engineer attacks at speed.

“It gives us the ability to speed up,” Hughes said last week. “Most of the time in cyber, what you have is the time that the attackers have in advantage against you. That’s often the case in any conflict situation.

“If you can run a little bit faster than your adversary, you’re going to do better. That’s what AI is really giving us defensively at the moment,” he added.

This article was originally published on CNBC