Balancing compliance and resilience in the age of AI

By Asha Palmer, SVP of Compliance Solutions at Skillsoft.


As Artificial Intelligence (AI) continues to evolve, so does the global effort to regulate its use. Governments, industry leaders and international institutions are all working to find a sustainable balance between fostering innovation and managing emerging risks. In the UK, the Government’s recent plans for a Cyber Security and Resilience Bill reflect a broader trend of rising regulatory expectations, part of a global wave of compliance reforms taking shape across jurisdictions.

Europe has been a leader in establishing landmark regulations, including the Digital Operational Resilience Act (DORA) and Germany’s Supply Chain Act, which have reshaped organisational responsibilities around risk, transparency and governance. While AI remains central to regulatory discourse, the scope of regulatory scrutiny is expanding to include data protection, anti-bribery and corruption measures, and ethical supply chain practices.

Amid growing compliance demands, organisations face the dual challenge of maintaining regulatory alignment while pushing forward with digital transformation and strengthening their overall digital resilience. To succeed, leaders must take a forward-thinking, integrated approach to AI adoption, embedding compliance into the foundation of innovation strategies rather than treating it as an afterthought. 

Empowering teams with strong leadership and a clear ethical framework will be crucial to navigating this shifting environment. So, how can organisations respond to these changes with agility, resilience and strategic thinking? 

The right questions for responsible innovation  

Effectively navigating the increasing complexity of both local and global regulation starts with posing the right strategic questions. These questions help organisations shape flexible, future-ready policies that not only manage risk but also uphold ethical standards and strengthen resilience amid ongoing regulatory shifts.

For organisations aiming to lead in responsible AI deployment, this means looking at core aspects of their operations and asking:


Are our AI systems designed to protect personal data and uphold user privacy?  

What processes are in place to identify and prevent biased algorithms?  

Can we clearly explain to regulators, customers and other stakeholders how AI-driven decisions are made? 

How are we managing conflicting or evolving regulations across geographies? 

Does our compliance framework address AI-related risks across the entire supply chain, including third-party vendors? 

Are employees equipped with the knowledge and training to uphold emerging standards in AI ethics and governance? 

Incorporating these questions into governance frameworks not only helps ensure ongoing compliance but also supports ethical, innovative AI use. By doing this, organisations can build trust and strengthen their reputation, while remaining competitive in an increasingly regulated landscape.            

The power of collaboration 

Creating a compliance programme that remains both robust and adaptable in the face of evolving AI challenges requires more than policy alone; it demands a culture of collaboration and transparency. By fostering open dialogue across departments, including legal, compliance, IT, marketing and product development, organisations can build a comprehensive understanding of the risks, benefits and implications of AI.

Establishing dedicated ethics boards or AI governance task forces can help ensure continuous and informed oversight. Regular policy reviews and dynamic feedback loops allow governance frameworks to evolve alongside regulatory and technological developments, while internal forums help teams contextualise how global regulations apply to their specific areas.

But collaboration shouldn’t stop at the organisational level. Engaging with external stakeholders such as AI vendors, academic institutions, policy experts and consultants can bring valuable perspectives that enhance governance strategies and uncover blind spots. Through a collective effort, organisations can develop AI strategies that are innovative and compliant, supporting ethical integration and building trust among stakeholders. This joint accountability ensures businesses remain resilient and well-positioned to adapt to technological advancements.  

Ethics and transparency as business priorities 

To preserve trust with external stakeholders, organisations must prioritise transparency. Employees at every level should be encouraged to openly share how AI models are developed, tested and deployed. Achieving this requires more than occasional updates; it demands consistent reporting, audits and clear communication of risks and mitigation plans throughout the AI lifecycle.

It’s not just about how often you communicate, but the depth and relevance of what is communicated. Transparency should go beyond surface-level updates to offer meaningful insight into the protections in place and the organisation’s ethical approach to innovation. This level of transparency is welcomed by regulators and builds confidence among external stakeholders, who can be assured that the business is innovating responsibly. It demonstrates an awareness of potential risks and shows that measures are in place to address them.

Evolving through continuous learning 

As AI becomes increasingly integrated into business operations, training must evolve to keep pace with the new challenges and responsibilities it introduces. Organisations should begin by evaluating their workforce’s current AI proficiency through targeted skills assessments to identify learning gaps, and then tailor training to specific roles and risk profiles. Implementing role-based, activity-based and risk-based learning ensures employees receive AI knowledge that is both relevant and effective for their work.

This must be supported by a culture of continuous learning that encourages both employees and leadership to stay informed about, and responsive to, evolving technologies, regulatory changes and best practices. Embedding this mindset across the organisation strengthens compliance, enhances risk management, and positions teams to drive innovation with confidence. By investing in workshops, training programmes and updated industry resources, organisations can reinforce a continuous learning mindset throughout their workforce, keeping them competitive, agile and prepared.

Future-proofing operations  

As the UK and EU continue to introduce new and evolving AI regulations, the bar for compliance is rising, making innovation, regulatory alignment and operational resilience critical for long-term success. Organisations that will lead in this new era are those that implement agile governance, foster cross-functional collaboration, promote transparency and invest in training and continuous learning.  

It’s time to evaluate how well your organisation is positioned to meet these growing demands. By aligning compliance initiatives with innovation strategies and resilience planning, organisations can future-proof their operations and strengthen trust among employees and stakeholders.  
