Top cybersecurity officials are urging technology firms to bake safeguards into the futuristic artificial intelligence systems they’re cooking up, to prevent them from being sabotaged or misused for malicious purposes.
Without the right guardrails, it will be easier for rogue nations, terrorists and others to exploit rapidly emerging AI systems to commit cyberattacks and even develop biological or chemical weapons, said Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency, known as CISA.
Companies that design and develop AI software must strive to dramatically reduce the number of flaws people can exploit, Easterly said in an interview.
“These capabilities are incredibly powerful and can be weaponized if they are not created securely.”
The Canadian Centre for Cyber Security recently joined CISA and Britain’s National Cyber Security Centre, as well as 20 international partner organizations, in announcing guidelines for secure AI system development.
AI innovations have the potential to bring many benefits to society, the guideline document says. “However, for the opportunities of AI to be fully realized, it must be developed, deployed and operated in a secure and responsible way.”
When it debuted late last year, OpenAI’s ChatGPT fascinated users with its ability to answer queries in detailed, if sometimes inaccurate, fashion. But it also sparked alarm about possible abuse of the nascent technology.
Security for AI has special dimensions because the systems allow computers to recognize and bring context to patterns in data without rules explicitly programmed by a human, the guidelines note.
AI systems are therefore vulnerable to the phenomenon of adversarial machine learning, which can allow attackers to prompt unauthorized actions or extract sensitive information.
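The kind of adversarial attack the guidelines describe can be illustrated with a toy sketch, assuming a simple linear classifier whose weights are known to the attacker (a hypothetical model, not any system named in the guidelines). A small, targeted nudge to the input, too small to look suspicious, flips the model's decision:

```python
# Toy illustration of an adversarial (evasion) attack on a linear
# classifier: score = w . x, and the label is "benign" when the
# score is non-negative. An attacker who knows the weights shifts
# each feature slightly against the sign of its weight (a
# sign-gradient step, as in the fast gradient sign method).

w = [0.6, -0.4, 0.9]          # model weights (assumed known to attacker)
x = [1.0, 0.5, 0.2]           # legitimate input, classified "benign"

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def classify(w, x):
    return "benign" if score(w, x) >= 0 else "malicious"

eps = 0.5                      # perturbation budget per feature
# Push each feature in the direction that lowers the score.
x_adv = [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

print(classify(w, x))          # -> benign
print(classify(w, x_adv))      # -> malicious
```

Real attacks target far larger models, but the principle is the same: the decision boundary learned from data can be probed and crossed without triggering any rule a human explicitly wrote.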
“There is agreement across the board, among governments and industry, that we need to come together to ensure that these capabilities are developed with safety and security in mind,” Easterly said.
“Even as we look to innovate, we need to do it responsibly.”
Many things can go wrong if security is not taken into account during design, development or deployment of an AI system, said Sami Khoury, head of Canada’s Cyber Centre.
In the same interview, Khoury called the initial international commitment to the new guidelines “extremely positive.”
“I think we need to lead by example, and maybe others will follow later on.”
In July, Canada’s Cyber Centre published advice that flagged AI system vulnerabilities. For instance, someone with ill intent could inject destructive code into the dataset used to train an AI system, skewing the accuracy and quality of the results.
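One simple form of the poisoning the Cyber Centre describes, label flipping rather than code injection, can be sketched with a hypothetical nearest-centroid spam filter (an illustrative toy, not the centre's example). A few mislabelled training points dragged into the dataset shift a class centroid enough to misclassify a clean message:

```python
# Toy illustration of training-data poisoning: a 1-D nearest-centroid
# classifier trained on labelled points. Injecting a few mislabelled
# points (the "poison") shifts the "ham" centroid so that a clean
# test value is misclassified as "spam".

def centroid(points):
    return sum(points) / len(points)

def train(data):
    # data: list of (value, label) pairs, labels "spam" or "ham"
    spam = [v for v, lab in data if lab == "spam"]
    ham = [v for v, lab in data if lab == "ham"]
    return centroid(spam), centroid(ham)

def predict(model, v):
    c_spam, c_ham = model
    return "spam" if abs(v - c_spam) <= abs(v - c_ham) else "ham"

clean = [(0.0, "ham"), (1.0, "ham"), (8.0, "spam"), (9.0, "spam")]
model = train(clean)
print(predict(model, 2.0))     # -> ham

# Attacker injects mislabelled outliers into the training set.
poisoned = clean + [(20.0, "ham"), (22.0, "ham")]
model_p = train(poisoned)
print(predict(model_p, 2.0))   # -> spam
```

The skewed results persist silently: the poisoned model still trains and runs normally, which is why the centre flags tampered training data as hard to detect after deployment.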
The “worst-case scenario” would be a malicious actor poisoning a crucial AI system “on which we’ve come to rely,” causing it to malfunction, Khoury said.
The centre also cautioned that cybercriminals could use the systems to craft so-called spear-phishing attacks more frequently, automatically and with a higher level of sophistication. “Highly realistic phishing emails or scam messages could lead to identity theft, financial fraud, or other forms of cybercrime.”
Skilled perpetrators could also overcome restrictions within AI tools to create malware for use in a targeted cyberattack, the centre warned. Even individuals with “little or no coding experience can use generative AI to easily write functional malware that could cause a nuisance to a business or organization.”
Early this year, as ChatGPT was making headlines, a Canadian Security Intelligence Service briefing note warned of similar dangers. It said the tool could be used “to generate malicious code, which could be injected into websites and used to steal information or spread malware.”
The Feb. 15 CSIS note, recently released through the Access to Information Act, also said ChatGPT could help generate “fake news and reviews, to manipulate public opinion and create misinformation.”
OpenAI says it does not allow its tools to be used for illegal activity, disinformation, generation of hateful or violent content, creation of malware, or attempts to generate code designed to disrupt, damage, or gain unauthorized access to a computer system.
The company also forbids use of the tools for activity with a high risk of physical harm, such as weapons development, military operations, or management of critical infrastructure for energy, transportation or water.