The rising tide of AI cyber risk

By Stephen Faulkner, Solutions Director, Orange Cyberdefense UK.

2023 will undoubtedly be remembered as the year in which artificial intelligence (AI) shot up the C-suite agenda, when business leaders finally had to sit up and take notice of a technology set to transform how organisations across multiple sectors operate over the next decade.

AI has been heralded as having the potential to transform business processes and drive productivity, but there have also been warnings about the negative impact of its growth. According to research by the World Economic Forum (WEF), generative AI – including platforms such as ChatGPT – is expected to be adopted by nearly 75% of surveyed companies, and ranks second only to humanoid and industrial robots in the job losses employers expect it to cause.

Concerns have also been raised about how AI will empower cybercriminals – adversaries who exploit the increased connectivity of our world by hacking their way into organisations’ networks.

The UK government has warned that, by as soon as 2025, AI could increase the risk of cyber-attacks and erode trust in online content. A report examining the impact of generative AI claimed that the tech could be “used to assemble knowledge on physical attacks by non-state violent actors, including for chemical, biological and radiological weapons.” The threat is being taken seriously across the globe. Earlier this year, the US National Institute of Standards and Technology (NIST) announced that it is developing guidance for businesses detailing safe, responsible measures for building, deploying and testing generative AI tools.

Generative AI – the risks

Most readers will no doubt have tested the capabilities of generative AI via platforms such as ChatGPT. Such solutions have great potential to drive enterprise productivity and free us all to spend more time on tasks that require a greater level of human intellect and intuition. So how exactly could these platforms be used by threat actors in the coming years? And what other security risks does the development of generative AI present to organisations? Here are just a few:

· Detecting and analysing vulnerabilities: In one case, an ethical hacker used ChatGPT to analyse snippets of PHP code (a language often used for web development) and discovered a way to access usernames through a database. While the technology was used for a legitimate purpose in this instance, cybercriminals may already be using it to discover new vulnerabilities (a minimal illustration of this kind of flaw appears after this list).

· Releasing sensitive data: Anyone who shares information with an external service provider must assume that the provider can store and distribute that information as part of its architecture. Research by Netskope – a partner of Orange Cyberdefense – found that large enterprises are posting increasing quantities of sensitive data to ChatGPT. Earlier this year, the UK National Cyber Security Centre (NCSC) published an advisory warning that the companies behind AI-powered chatbots can read the queries typed into them. The same applies to information shared via SaaS platforms such as Slack and Microsoft Teams, which do have clear data and processing boundaries, though these can blur when the platforms are augmented with third-party add-ons (a simple redaction sketch appears after this list).

· Social engineering: One of the most common forms of social engineering is phishing, including the use of scam emails, and according to the Orange Cyberdefense Security Navigator 2023 it remains a favoured tactic. The growing use of generative AI platforms is likely to increase both the volume and the believability of scam content. All cybercriminals need to do is ask the platform to write an email or text message urging the recipient to click a link (inserted by the ‘threat actor’) relating to, say, a missed delivery or a payment reminder. They only have to fool 1% of thousands of recipients and their job is done.

· Democratising code generation: Ransomware-as-a-service (RaaS) platforms and phishing kits have been used by cybercriminals for many years, and the growth of generative AI will bring the ability to generate malicious code into the mainstream. There is nothing to stop a curious teenager from asking ChatGPT to “write me ransomware in Python.” Within a matter of minutes, they could be using that code for extortion.

· Cross-border extortion: Large language models (LLMs) such as ChatGPT, together with machine translation, are driving an increased ‘internationalisation’ of cyber extortion as a crime. They enable actors from different language groups to target and extort victims in English-speaking countries, and allow victims in countries that don’t use ‘common’ languages to be extorted more readily – think of markets such as China and Japan, where language has historically presented a barrier to criminals.
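To make the vulnerability-detection risk concrete, below is a minimal, hypothetical sketch of the kind of flaw an AI-assisted code review might flag. It is written in Python rather than the PHP of the anecdote above, and every name in it (the get_user functions, the users table) is illustrative rather than taken from that incident: a query built by string concatenation lets an attacker list every username, while a parameterised query does not.

```python
import sqlite3

def get_user_unsafe(conn: sqlite3.Connection, name: str) -> list:
    # VULNERABLE: user input is concatenated straight into the SQL string.
    # An input such as "' OR '1'='1" turns the WHERE clause into a tautology
    # and returns every username in the table.
    query = "SELECT username FROM users WHERE username = '" + name + "'"
    return conn.execute(query).fetchall()

def get_user_safe(conn: sqlite3.Connection, name: str) -> list:
    # FIX: a parameterised query keeps the input as data, never as SQL.
    return conn.execute(
        "SELECT username FROM users WHERE username = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

print(get_user_unsafe(conn, "' OR '1'='1"))  # leaks: [('alice',), ('bob',)]
print(get_user_safe(conn, "' OR '1'='1"))    # leaks nothing: []
```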
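On the data-leakage risk, one basic control is to redact obviously sensitive fields before any text leaves the organisation for an external generative-AI service. The sketch below is a deliberately simple, hypothetical example using regular expressions; the patterns and placeholder labels are assumptions for illustration, and a real deployment would rely on a dedicated data loss prevention (DLP) control rather than a hand-rolled filter.

```python
import re

# Illustrative patterns only; real DLP policies are far more thorough.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    # Replace anything matching a sensitive pattern with a placeholder
    # before the prompt is sent to a third-party API.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarise this ticket from jane.doe@example.com, card 4111 1111 1111 1111."
print(redact(prompt))
# Summarise this ticket from [EMAIL REDACTED], card [CARD REDACTED].
```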

Raising risk awareness to fight the threats of AI

The security risks facing all organisations as a result of the development of AI capabilities are vast and will continue to grow. Companies operating in the channel need to stay abreast of developments in this area, ensuring they are educating both their customers and employees about how to reduce the risk of falling victim to some of the dangers outlined in this article.

Technology and education must be used in combination to counter the increased power that AI hands to those intent on creating cyber chaos. That means ensuring all our stakeholders can recognise phishing attempts and understand the risks of disclosing sensitive information. The threat from AI may sound like a futuristic nightmare, but it is with us now, and the channel must address it as a priority.
