Artificial intelligence is a technology that brings opportunities and risks in equal measure to the field of cybersecurity. An evaluation of its benefits and limits is the best way to fight effectively against cybercrime.
The rapid evolution of artificial intelligence (AI) is impacting cybersecurity in ways that we must understand as a matter of urgency. With the introduction of LLMs (Large Language Models) such as ChatGPT, and the proliferation of open-source models, new opportunities are becoming available to the IT specialist – and to the cybercriminal.
One advantage of LLMs is the efficiency gain from using AI to analyse massive data sets to detect potential threats. This task would traditionally take hours, if not days, but can now be completed in minutes or seconds.
The same technology is also being used to create increasingly sophisticated techniques for analysing threat models, identifying unusual behaviour patterns at system access portals, improving breach and attack simulations, etc.
Beyond simple vulnerability detection, GenAI (Generative AI) models can also be trained to recommend corrections to insecure code, generate training materials for security teams, and identify measures to reduce the impact of threats.
Risks not to be ignored
However, any disruptive technology also has its drawbacks. The standard arsenal for combating and preventing cyberattacks is no longer sufficient. Hackers now have access to generative video and voice tools to help them craft increasingly sophisticated social engineering attacks.
Human beings and AI must work together in the fight against cybercrime.
Another major concern is the ability of amateur cybercriminals to exploit vulnerabilities in AI systems to manipulate their behaviour. If they are not properly secured, AI models can be tricked or manipulated into performing undesirable actions. For example, this type of malicious engineering can allow individuals to acquire sensitive data previously shared with LLMs in requests from other people.
In addition, the development of generative AI chatbots such as WormGPT, FraudGPT and Darkbert may help some of their users create their own cyberattacks without the need for detailed computing knowledge. Cybersecurity Ventures predicts that cybercrime will cost 10.5 trillion dollars a year in damages by 2025 (compared with 3 trillion in 2015). This is roughly equivalent to one-third of all euros currently in circulation.
Ultimately, by creating over-reliance on AI, these new tools can lead security professionals to drop their guard. Just as the calculator has replaced paper and pencil in maths classes, there is a risk that organisations will replace human judgement with AI systems.
But human beings bring a degree of consciousness, contextual understanding and intuition that machines so far lack. And unlike the calculator, these AI systems may provide incorrect information.
Robust counter-measures and a proactive approach
It is therefore necessary to meet these threats with robust counter-measures. This means examining, revising and securing current and future models, as well as the data used to train them; investing in education and training for cybersecurity professionals on the limits of AI, to find the right balance between human expertise and AI-based automation; and ensuring constant monitoring for deviant behaviour in AI models.
Cybersecurity experts need to collaborate more closely with AI developers to address the related security issues. Further research is also required into the reliable and secure rollout of AI technologies in various fields. A proactive and well-informed approach will be the best weapon against AI-based cybercrime.
05/21/2024