AI Predicted to Surpass Humans in Cyber Offense by 2030
Tara Deschamps, The Canadian Press
TORONTO — Artificial intelligence is expected to surpass humans in cyber offense capabilities by the end of this decade, according to a leading expert speaking at a series of lectures hosted by esteemed computer scientist Geoffrey Hinton.
During the event, Jacob Steinhardt, an assistant professor of electrical engineering and computer sciences as well as statistics at UC Berkeley, shared this prediction. He believes AI systems will eventually become "superhuman" at tasks such as writing code and detecting vulnerabilities.
Vulnerabilities are weaknesses in software and hardware that cybercriminals can exploit to gain unauthorized access to systems. Once attackers find a way in, they may launch a ransomware attack, encrypting critical data or locking users out of their systems in order to extort money from them.
Steinhardt pointed out that finding these vulnerabilities typically requires humans to pore over large amounts of code, a tedious and time-consuming task. "This is really boring," he stated. "Most people just don’t have the patience to do it, but AI systems don’t get bored."
That tirelessness gives AI a key advantage in the repetitive, detail-oriented work of hunting for exploits. Steinhardt emphasized that AI would not only automate the process but carry it out with a high degree of precision.
His comments come at a time when cybercrime is on the rise. A study conducted by EY Canada found that four out of five organizations in the nation had experienced at least 25 cybersecurity incidents over the past year, with some companies facing thousands of attempts on a daily basis.
While many see AI as a tool to combat cyber threats by rapidly identifying attackers and collecting intelligence, Steinhardt warned that it could equally empower those with malicious objectives. He highlighted recent instances where bad actors have used AI technology to create deep fakes—digitally altered images, videos, or audio that misrepresent people.
Criminals have used deep fakes to impersonate loved ones in urgent financial trouble, and businesses have been targeted as well. For example, a worker at Arup, a British engineering firm known for projects such as the Sydney Opera House, was reportedly tricked into transferring $25 million to fraudsters posing as the company's chief financial officer.
Steinhardt voiced concern about the sophistication of these scams. "I’ve been trained to watch out for scams and phishing emails, and I think I would have confirmed before sending $25 million over, but I’m not sure," he said, underscoring how realistic these digital impersonations have become.
Steinhardt’s presentation concluded the Hinton Lectures, a two-evening event organized by the Global Risk Institute at the John W. H. Bassett Theatre in Toronto. Geoffrey Hinton, recognized as the godfather of AI, introduced Steinhardt during the session and called him the ideal speaker to launch the series.
The evening prior, Steinhardt described himself as a "worried optimist," estimating a 10 percent chance that AI drives humanity to extinction, but a 50 percent chance that it delivers significant economic value and prosperity.
This report by The Canadian Press was first published on October 29, 2024.