Optimising AI Operating Models amid regulatory changes

By Mahesh Desai, Head of EMEA Public Cloud, Rackspace Technology.

With the EU AI Act – the world's first comprehensive AI law – coming into force in August this year, businesses' use of generative AI is under more scrutiny than ever before. With AI spending set to double in 2024 and the proportion of businesses reporting at least modest AI-driven gains expected to reach 87%, up from 74% in 2023, now is the time for businesses to truly evaluate their use of AI.

The EU AI Act is a promising first step towards responsible AI. However, with its code of practice under review until next year, and the likes of Apple and Meta declining to sign the EU's voluntary AI Pact, the picture is far from settled – meaning regulatory implementation remains a hot topic that will carry into 2025.

Businesses will want to avoid being caught out by new AI legislation, both international and domestic. AI Operating Models – sets of guidelines for responsible and ethical AI adoption – will allow them to stay ahead of the curve.

These guardrails and controls ensure safe and responsible AI use, so organisations can make the most of the technology and integrate it seamlessly into their human teams – avoiding misuse, protecting privacy and enabling informed decisions.

The race to keep up with regulation

The first significant collaborative effort towards AI regulation between governments came in November 2023 at the AI Safety Summit in the UK. But a year later, ambiguity remains over how governments will tackle the way AI affects businesses. This uncertainty may lead some business leaders to approach AI with caution, or avoid it altogether, yet few organisations can afford to fall behind those that do implement it. While we wait for standardised regulations, boards must take the reins – and steer their companies towards safe and secure AI ideation, incubation and industrialisation. Doing so will allow them to remain competitive in the era of AI innovation.

Businesses also need to stay informed about the threats AI poses to their staff and customers, particularly through third-party and proprietary AI tools. The concern is that third-party platforms often sit within the public domain, so any employee who inputs customer data risks making that confidential data public. A lack of defined rules and controls over how employees and clients use AI tools carries both safety and financial implications, further reinforcing the need for organisations to implement an AI Operating Model.
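As an illustration of the kind of control an AI Operating Model might mandate, the minimal Python sketch below screens outbound prompts for obvious customer identifiers before they reach a third-party tool. The patterns and the submit_prompt helper are assumptions for demonstration, not a production data-loss-prevention system:

```python
import re

# Illustrative patterns only; a real deployment would rely on a proper
# data-loss-prevention service rather than a handful of regexes.
CONFIDENTIAL_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "uk_phone": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any confidential-data patterns detected."""
    return [name for name, pattern in CONFIDENTIAL_PATTERNS.items()
            if pattern.search(prompt)]

def submit_prompt(prompt: str) -> str:
    """Gate a prompt before it leaves the organisation's boundary."""
    findings = screen_prompt(prompt)
    if findings:
        # Block the request and surface it to the governance team
        # instead of sending it to the public platform.
        return f"Blocked: prompt appears to contain {', '.join(findings)}."
    return "Forwarded to external AI tool."  # placeholder for the real call

if __name__ == "__main__":
    print(submit_prompt("Summarise the complaint from jane.doe@example.com"))
```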

Building the ideal AI companion

Beyond its headline business benefits, AI will also empower human workers – acting as the perfect co-worker and making everyday tasks more streamlined and productive. From a workforce management perspective, business leaders should treat it as they would a human employee, meaning the appropriate policies, guardrails, training and governance need to be in place. This helps allay fears of AI displacing human roles, and instead lets it supplement human intelligence to enhance job performance.

But to ensure a seamless AI integration, businesses must have a clear view of their corporate goals – and of the skills and capabilities AI will introduce to help achieve them. Once these targets are established, the organisation must evaluate the AI's effectiveness against them and keep refining it until it performs at an optimal level.
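To make that evaluation concrete, a simple scorecard comparing observed performance against agreed targets is often enough to start with. The sketch below uses entirely hypothetical metric names and numbers:

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str        # what is being measured
    target: float    # threshold agreed from the corporate goals
    observed: float  # value measured over the evaluation period

# Hypothetical metrics and figures, purely for illustration.
quarterly_review = [
    Metric("task_completion_rate", target=0.90, observed=0.86),
    Metric("factual_accuracy", target=0.95, observed=0.97),
    Metric("minutes_saved_per_task", target=5.0, observed=7.2),
]

for m in quarterly_review:
    status = "on target" if m.observed >= m.target else "needs improvement"
    print(f"{m.name}: {m.observed} vs target {m.target} -> {status}")
```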

The need for human supervision

AI Operating Models hold the potential to transform the way organisations tackle all types of AI concerns – whether related to ethics, transparency, regulation or compliance. They establish the foundations for safe and secure AI adoption, spreading its benefits throughout an organisation while supporting human co-workers in using it to enhance their decision-making. At the same time, humans are needed to oversee the AI and check that it produces accurate and compliant results. Until fully autonomous AI systems are more commonly available, human intervention will be essential, particularly in sectors where handling sensitive data is the norm, such as healthcare and financial services.
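One common way to operationalise that oversight is a human-in-the-loop gate, where AI output is only released automatically once it clears agreed checks. The sketch below is a simplified assumption – the sensitive-term list and confidence threshold are invented placeholders that a governance team would define for itself:

```python
from dataclasses import dataclass, field

# Both values below are assumptions set by a governance policy.
SENSITIVE_TERMS = {"diagnosis", "account number", "credit score"}
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class AIResult:
    text: str
    confidence: float

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def escalate(self, result: AIResult) -> str:
        self.pending.append(result)
        return "Held for human review."

def release(result: AIResult, queue: ReviewQueue) -> str:
    """Release output automatically only when it clears both checks."""
    touches_sensitive = any(t in result.text.lower() for t in SENSITIVE_TERMS)
    if touches_sensitive or result.confidence < CONFIDENCE_THRESHOLD:
        return queue.escalate(result)
    return result.text

if __name__ == "__main__":
    queue = ReviewQueue()
    print(release(AIResult("Your credit score request is ready.", 0.95), queue))
    print(release(AIResult("Routine summary of today's meeting.", 0.92), queue))
```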

Businesses must also build trust in their AI systems through a thorough validation process that sets rules for AI accountability and for the integrity of its data. These guardrails validate data sources, provide transparency over each dataset's origin and tags, and govern all the data that goes into training and inference.
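As a sketch of what such a validation step might look like – with an invented approved-source registry and tag set standing in for a real data governance catalogue – a provenance check can refuse untraceable records before they enter training or inference:

```python
from dataclasses import dataclass, field

# Invented registry; in practice this lives in a data governance catalogue.
APPROVED_SOURCES = {"crm_export", "support_tickets", "public_docs"}
REQUIRED_TAGS = {"owner", "collected_on", "consent_basis"}

@dataclass
class Record:
    source: str
    tags: dict = field(default_factory=dict)
    payload: str = ""

def validate_record(record: Record) -> list:
    """Return a list of provenance problems; an empty list means it passes."""
    problems = []
    if record.source not in APPROVED_SOURCES:
        problems.append(f"unapproved source '{record.source}'")
    missing = REQUIRED_TAGS - record.tags.keys()
    if missing:
        problems.append(f"missing tags: {sorted(missing)}")
    return problems

def admit_for_training(records):
    """Keep only records whose origin and tagging can be accounted for."""
    admitted = []
    for record in records:
        issues = validate_record(record)
        if issues:
            print("Rejected:", "; ".join(issues))  # would be logged centrally
        else:
            admitted.append(record)
    return admitted

if __name__ == "__main__":
    batch = [
        Record("crm_export", {"owner": "sales", "collected_on": "2024-05-01",
                              "consent_basis": "contract"}, "..."),
        Record("scraped_forum", {}, "..."),
    ]
    print(f"{len(admit_for_training(batch))} of {len(batch)} records admitted")
```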

With AI becoming an increasingly important component of business in almost every industry, business leaders should invest time in understanding the new regulations and stay on top of the latest developments, so that their guidelines limit risk while keeping the business competitive. This requires organisations to establish much-needed frameworks – across all departments – for how they oversee AI and ensure this evolving technology remains transparent and accountable. The result will be an ideal collaboration between AI and humans, one that optimises business performance and reduces the risks unregulated AI poses to employees and customers.
