AI is Here: How Should CISOs Respond?

By Gail Coury, SVP and CISO, F5.

With artificial intelligence (AI) use growing, Chief Information Security Officers (CISOs) play a critical role in its implementation and adoption. They need to prepare for the risks associated with AI content creation as well as AI-assisted security threats from attackers. By following some key best practices, we’ll be better prepared to safely welcome our new robot overlords into the enterprise!

AI is growing fast!

The popularity of ChatGPT has sparked massive interest in the potential of generative AI and many businesses are deploying it across the enterprise. AI technology is now in the wild—and it’s moving faster than any other technology I’ve seen.

There are several compelling use cases for generative AI in the enterprise:

· Content Creation: Tools such as ChatGPT can assist content creators in generating ideas, outlines, and drafts—potentially saving individuals and teams significant time and effort.

· Learning and Education: Properly trained AI tools can be used to quickly understand new and complex subjects by summarizing large amounts of information, answering questions, and explaining complicated concepts in simple language.

· Coding Support: Tools like GitHub Copilot and OpenAI’s API service can help developers write code more efficiently and identify errors earlier in the process.

· Product and Operations Support: Tools can be used to more efficiently prepare common reports and notices, such as bug resolutions.

Issues and challenges

However, there are challenges to overcome, such as whether using AI at all will run afoul of laws and regulations in international markets.

Earlier this year OpenAI temporarily blocked the use of ChatGPT in Italy after the Italian Data Protection Authority accused it of unlawfully collecting user data. Meanwhile, German regulators are looking at whether ChatGPT adheres to the European General Data Protection Regulation (GDPR). In May, the European Parliament took a step closer to issuing the first rules on use of Artificial Intelligence.

Another challenge is the set of issues around data collection and the accidental disclosure of personal or proprietary information. Companies need to secure their confidential information against exposure and ensure they aren’t plagiarizing from other companies and individuals who are using the same tools. We’ve already seen reports of intellectual property being entered into public generative AI systems, which could impact a company’s ability to defend its patents. One AI-powered transcription and note-taking service makes copies of any materials presented in Zoom calls that it monitors.

A third major challenge is that AI-powered cyberattack software could try many possible approaches, learn from how we respond to each, and quickly adjust its tactics to devise an optimal strategy, all at a speed far beyond any human attacker. We have already seen sophisticated new phishing attacks that use AI, including impersonating individuals both in writing and in speech. Meanwhile, an AI tool called PassGAN (short for Password Generative Adversarial Network) has been found to crack passwords faster and more efficiently than traditional methods.

CISOs and AI

As CISOs, we help leaders create an organizational strategy that provides guidelines for use and takes into account legal, ethical, and operational considerations.

When used responsibly and with proper governance frameworks in place, generative AI can provide businesses with advantages ranging from automated processes to optimization solutions.

Creating a comprehensive AI strategy

With new technologies such as generative AI come opportunities, but also risks. A comprehensive AI strategy ensures privacy, security, and compliance, and needs to consider:

· The use cases where AI can provide the most benefit.

· The necessary resources to implement AI successfully.

· A governance framework to manage the safety of customer data and ensure compliance with regulations and copyright laws in every country where you do business.

· Evaluating the impact of AI implementation on employees and customers.

Once your organization has assessed and prioritized use cases for generative AI, a governance framework needs to be established for AI services such as ChatGPT. Components of this framework include rules for data collection and retention, along with policies that mitigate the risk of bias, anticipate ways the systems can be abused, and limit the harm they can do if used improperly.
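To make the data-retention component of such a framework concrete, here is a minimal sketch of retention rules expressed as a checkable policy. The data classes and retention windows are purely illustrative assumptions, not recommendations:

```python
from datetime import date

# Hypothetical retention windows per data class, in days (illustrative values).
# A window of 0 means records of that class must never be retained.
RETENTION_DAYS = {"prompt_logs": 30, "customer_pii": 0, "model_outputs": 90}

def is_expired(data_class: str, collected_on: date, today: date) -> bool:
    """Return True once a record exceeds its class's retention window."""
    return (today - collected_on).days >= RETENTION_DAYS[data_class]

# A prompt log collected two months ago has outlived its 30-day window.
print(is_expired("prompt_logs", date(2024, 4, 1), date(2024, 6, 1)))  # True
```

Encoding the rules as data rather than scattered conditionals makes them easy to audit and to vary by jurisdiction, which matters when regulations differ in every country where you do business.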

A company’s AI strategy should also cover how changes brought about by AI automation will affect employees and customers. Employee training initiatives can help ensure that everyone understands how these new technologies are changing day-to-day processes and how threat actors may already be using them to increase the efficacy of their social engineering attacks. Customer experience teams should assess how changes resulting from AI implementation might affect customer service delivery so that they can adjust accordingly.

AI and security

A process for establishing and maintaining strong AI security standards is vital. What you need are guardrails specific to how AI functions: for example, which AI service it pulls content from and what it does with whatever information you feed into it.
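As one illustration of a guardrail on the "what you feed into it" side, a pre-submission filter can redact sensitive values before a prompt ever leaves the enterprise boundary. This is a toy sketch using a few hypothetical patterns; it is not tied to any particular AI service, and a production filter would need far broader coverage:

```python
import re

# Hypothetical patterns for sensitive values (illustrative, not exhaustive).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace matched sensitive values with typed placeholders."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact_prompt("Contact jane.doe@example.com, key sk-abcdef1234567890XY"))
```

The same choke point is a natural place to log what was sent and to which service, giving security teams visibility into exactly what data is flowing to external AI tools.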

AI tools need to be designed with adversarial robustness in mind. We currently see this happening in the lab to improve training, but doing this in the ‘real’ world, against an unknown enemy, must be top-of-mind—especially in military and critical infrastructure scenarios.

With attackers looking closely at AI, your organization needs to plan and prepare its defenses right now. Here are a few practices to consider:

1. Ensure you analyze your software code for bugs, malware, and behavioral anomalies. Signature ‘scans’ only look for what is known, and these new attacks will leverage unknown techniques and tools.

2. When monitoring your logs, use AI to fight AI. Machine learning security log analysis is a great way to search for patterns and anomalies. It can incorporate endless variables to search for and produce predictive intelligence, which in turn provides predictive actions.

3. Update your cybersecurity training to reflect new threats such as AI-powered phishing, and your cybersecurity policies to counter the new AI password cracking tools.

4. Continue to monitor new uses of AI, including generative AI, to stay ahead of emerging risks.
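As a toy illustration of the log-analysis idea in point 2, a simple statistical stand-in flags time windows whose event counts deviate sharply from the baseline. Real ML-based systems would use far richer features and models; the z-score threshold and the sample data here are assumptions for demonstration only:

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=3.0):
    """Flag indices whose count deviates more than `threshold` standard
    deviations from the mean of the series."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # a perfectly flat series has no outliers
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]

# Hourly failed-login counts; hour 5 is a simulated brute-force spike.
failed_logins = [12, 9, 11, 10, 13, 250, 12, 10, 11, 9, 12, 10]
print(flag_anomalies(failed_logins))  # [5]
```

The value of the ML approaches the text describes is that they generalize this idea across many signals at once, surfacing patterns no single-variable rule would catch.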

These steps are critical to building trust with your employees, partners, and customers about whether you’re properly safeguarding their data.

Preparing for the Future

To stay competitive, it’s essential for organizations to adopt AI technology while safeguarding against potential risks. By taking these steps now, companies can ensure they’re able to reap the full benefits of AI while minimizing exposure.
