By Richard Menear, CEO, Burning Tree
Artificial intelligence (AI) has already found its way into many mainstream business applications — redefining the modern workplace and unleashing the power of data and automation across a broad range of functions.
Companies across all industries are now adopting AI to remain agile and competitive in areas such as customer service, training, human resources and security. At times, companies may even have implemented AI without knowing it, as software vendors embed AI capabilities within their products.
However, although AI can play a crucial role in businesses’ cyber security defences, it also has a dark side. As a result, organisations must understand both the opportunities and the risks presented by these technologies as they look to embrace them as an essential part of everyday business.
At its core, AI is about machines being able to understand and execute tasks that humans would otherwise do. These systems independently learn and replicate human behaviour — meaning that, just like humans, they have their flaws.
Over the past couple of years, artificial intelligence has increasingly affected and amplified a wide range of risk types, including model, compliance, operational and reputational risks.
In the news, there is no shortage of headlines revealing the unintended consequences of AI. Recently, reports of AI models gone awry during the COVID-19 pandemic have served as a stark reminder that artificial intelligence can create significant risks. Many models rely on historical data, but the pandemic drove widespread changes in human behaviour, rendering that data nearly useless in some cases. As a result, many AI systems have been left degraded and vulnerable to attack.
But although organisations’ AI systems present a tempting target for hackers, cyber criminals are also hijacking AI for their own purposes. Reports of cyber criminals using AI-generated voice cloning in social engineering attacks first surfaced in 2019. Since then, we have seen countless attempts to exploit this deepfake technology in a bid to deceive victims. These sophisticated AI-enabled attacks are compromising information and impacting businesses at a scale never seen before.
There is a real sense of irony that a technology that can be so instrumental in building up cyber security defences can also tear them down with the very same algorithms and machine learning.
Although artificial intelligence is a powerful tool with significant benefits — most notably its ability to automate so many business functions — it is vital to remember that AI can also be used to enhance attack techniques or even create entirely new ones.
Cyber adversaries are not standing still. So, as AI-enabled threats become more sophisticated, organisations must be ready to adapt their defences and balance the desire to automate with appropriate risk management. Achieving this balance involves taking steps to secure both internal AI systems and defend against external AI-enabled threats.
To start with, businesses will need to look at how artificial intelligence systems fit into their organisational security governance structures, identifying AI and robotic process automation (RPA) accounts within the company and protecting them as privileged accounts.
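As a minimal sketch of that first step, the snippet below flags automation accounts in a hypothetical account inventory so they can be enrolled in privileged-access controls. The inventory format, the "rpa"/"ai" tags and the account names are all assumptions for illustration, not a real system.

```python
# Hypothetical account inventory; in practice this would come from a
# directory service or identity governance tool.
ACCOUNTS = [
    {"name": "svc-rpa-invoices", "type": "service", "tags": ["rpa"]},
    {"name": "svc-ml-scoring", "type": "service", "tags": ["ai"]},
    {"name": "j.smith", "type": "user", "tags": []},
]

# Assumed naming convention: automation accounts carry one of these tags.
AUTOMATION_TAGS = {"rpa", "ai", "bot"}

def needs_privileged_controls(account: dict) -> bool:
    """Treat any AI/RPA service account as a privileged account."""
    return account["type"] == "service" and bool(AUTOMATION_TAGS & set(account["tags"]))

# Accounts to bring under privileged-access management (e.g. vaulting,
# credential rotation, session monitoring).
flagged = [a["name"] for a in ACCOUNTS if needs_privileged_controls(a)]
print(flagged)
```

The point of the sketch is simply that automation identities should be discoverable and classified like any other privileged identity, rather than left as ordinary service accounts.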
Employing fine-grained access measures for business process control functions such as policy enforcement, segregation of duties, mandates and authorisation rules will also be crucial, especially where decisions carry material risk.
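Those controls can be sketched as a simple authorisation check: a decision is refused if the requester and approver are the same identity (segregation of duties), and decisions above a materiality threshold require an elevated mandate. The threshold value, role names and field names here are assumptions for illustration only.

```python
MATERIALITY_THRESHOLD = 10_000          # assumed policy limit for "material" decisions
ELEVATED_ROLES = {"finance_director"}   # assumed roles holding an elevated mandate

def authorise(requester: str, approver: str, approver_role: str, amount: float) -> bool:
    """Apply segregation of duties plus a materiality-based mandate check."""
    if requester == approver:
        # Segregation of duties: no one (human or bot) approves their own request.
        return False
    if amount > MATERIALITY_THRESHOLD:
        # Material decisions need an approver with an elevated mandate.
        return approver_role in ELEVATED_ROLES
    return True

print(authorise("rpa-bot-1", "rpa-bot-1", "clerk", 500))              # self-approval blocked
print(authorise("rpa-bot-1", "a.jones", "clerk", 500))                # routine decision allowed
print(authorise("rpa-bot-1", "a.jones", "finance_director", 50_000))  # material decision escalated
```

Even a check this simple illustrates why such rules matter for AI and RPA accounts: an automated process should never be able to both initiate and approve a decision that carries material risk.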
Through awareness programmes, organisations must also educate users on the risks of AI and robotics, particularly their use in social engineering attacks. AI systems will make mistakes, but so will human employees. It is therefore essential that businesses devote time, training and support to helping staff learn to work alongside these intelligent systems.