Artificial intelligence will be able to beat humans at cyber offence by the end of the decade, predicted the keynote speaker at a series of lectures hosted by computer science luminary Geoffrey Hinton this week.
Jacob Steinhardt, an assistant professor of electrical engineering and computer sciences and statistics at UC Berkeley in California, made that projection Tuesday, saying it was based on his belief that AI systems will eventually become "superhuman" when tasked with coding and finding exploits.
Exploits are weak points in software and hardware that people can abuse. Cyber criminals often covet these exploits because they can be used to gain unauthorized access to systems.
Once a criminal has access through an exploit, they can run a ransomware attack where they encrypt sensitive data or block administrators from getting into software, in hopes of extracting cash from victims.
To find exploits, Steinhardt said, humans would have to read all the code underpinning a system before they could carry out an attack.
"This is really boring," Steinhardt said. "Most people just don't have the patience to do it, but AI systems don't get bored."
Not only will AI undertake the drudgery associated with finding an exploit, but it will also be meticulous with the task, Steinhardt said.
Steinhardt's remarks come as cybercrime has been increasing.
A 2023 study from EY Canada of 60 Canadian organizations found that four out of five had seen at least 25 cybersecurity incidents in the past year, and experts say some companies face thousands of attempts every day.
Many have hailed AI as a potential solution because it can be used to quickly identify attackers and gather information on them, but Steinhardt said it is just as likely to be used by people with nefarious intentions.
Already, he said, the world has seen instances where bad actors have harnessed the technology to create deep fakes: digitally manipulated images, videos or audio clips depicting people saying or doing things they have not said or done.
In some instances, bad actors have used deep fakes to call people while posing as a loved one who urgently needs money.
Businesses have been victims, too.
Earlier this year, media reported that a worker at Arup, the British engineering company behind prominent buildings including the Sydney Opera House, had been duped into handing over US$25 million to fraudsters who used deep fake technology to pose as the company's chief financial officer.
"I've been trained to watch out for scams and phishing emails and I think I would have confirmed before sending $25 million over but I'm not sure," Steinhardt said, explaining how new this phenomenon is and how realistic the fakes seem.
"This is not something that we're used to and this isn't the only use of digital impersonation to create problems."
Steinhardt鈥檚 talk concluded the Hinton Lectures, a two-evening series of talks put on by the Global Risk Institute at the John W. H. Bassett Theatre in Toronto.
The event鈥檚 namesake, Geoffrey Hinton, who is widely known as the godfather of AI, introduced Steinhardt earlier in the evening, describing the professor as the most popular choice to debut the lecture series.
The evening before, Steinhardt had told the audience he sees himself as a "worried optimist" who believes there's a 10 per cent chance AI will lead to human extinction and a 50 per cent chance it will create immense economic value and "radical prosperity."