CISOs confident about data privacy and security risks of generative AI

Over half of CISOs believe generative AI is a force for good and a security enabler, whereas only 25% think it presents a risk to their organisational security.


New data from the latest members’ survey of the ClubCISO community, in collaboration with Telstra Purple, highlights CISOs’ confidence in generative AI in their organisations. Around half of those surveyed (51%), the largest contingent, believe these tools are a force for good and act as security enablers. By comparison, only 25% saw generative AI tools as a risk to their organisational security.

The study's findings underscore CISOs' proactive stance in understanding the risks associated with generative AI tools and their active support for implementing these tools across their organisations.

45% of respondents said they now allow generative AI tools for specific applications, with the CISO office making the final decision on their use. Nearly a quarter (23%) also have region-specific or function-specific rules to govern generative AI use. The findings represent a marked change from when generative AI applications first arrived following the launch of ChatGPT, when data privacy and security concerns were top-of-mind risks for organisations.

Despite ongoing concerns around the data privacy of specific applications, 54% of CISOs are confident they know how AI tools will use or share the data fed to them, and 41% have a policy covering AI and its usage. Only 9% of CISOs say they do not have a policy governing the use of AI tools and have not set out a direction either way.

Inspiring further confidence, 57% of CISOs also believe that their staff are aware and mindful of the data protection and intellectual property implications of using AI tools.

Commenting on the findings, Rob Robinson, Head of Telstra Purple EMEA, sponsor of the ClubCISO community, said, “While we do still hear examples of proprietary data being fed to AI tools and then that same data being resurfaced outside of an organisation’s boundaries, what our members are telling us is that this is a known risk, not just in their teams, but across the employee population too.”

He continued, “Generative AI is rightly being seen for the opportunity it will unlock for organisations. Its disruptive force is being unleashed across sectors and functions, and rather than slowing the pace of adoption, our survey highlights that CISOs have taken the time to understand and educate their organisations about the risks associated with using such tools. It marks a break away from the traditional views of security acting as a blocker for innovation.”
