It’s no surprise that the rapid rise in the profile of AI and automation technology has resulted in a sharper focus on the regulatory side of the industry. But with the UK moving more slowly than some of its international counterparts, there is an increasing risk that we fall behind the curve.
It’s not a straightforward issue though. Rigid guidelines, developed hastily, will be rendered null and void due to the sheer pace at which the technology is developing. It is important that any UK regulation related to AI has a level of flex and acknowledges that the technology will continue to evolve.
For example, we must learn from the Online Safety Bill, which has been slow in its development. All the while, social media has continued to grow and evolve at pace – with some platforms changing their route to market and operational practices completely.
Ordinary technological change is quick, but the advancement of AI models appears to be even quicker. Meanwhile, organisations across the world want to better understand how AI can help their operations, while simultaneously crying out for the guardrails of regulation and frameworks.
It’s something of a Catch-22. Without any regulation, organisations that act impulsively could see significant negative outcomes from their deployments. On the other hand, rigid guidelines cannot adapt and evolve effectively as the technology develops.
Right now, much of the fear attached to AI appears to stem from the fear of getting it wrong. We need some level of regulation to fuel further adoption and responsible innovation of AI – and, along with it, to develop the knowledge of those building AI systems and improve the understanding of its users.
Professor Shannon Vallor is Co-Director of the Bridging Responsible AI Divides (BRAID) programme at the University of Edinburgh. During a talk at Turing Fest, she outlined two choices for organisations: they could either move fast and break things, with potentially catastrophic effects, or innovate boldly but responsibly.
I attended London Tech Week earlier this year, where I heard Rishi Sunak speak about the opportunity AI could bring to the UK. He spoke not only about the technology improving the economic standing of the country’s SMEs, but also about helping improve public services and supporting the advancement of life sciences.
Achieving this balance is a real challenge and one that our lawmakers must get to grips with quickly. Other global actors, including the EU, are already making significant steps to develop workable guidelines, and Westminster needs to keep up.
Getting this right quickly could pay dividends, as we know that AI can help many sectors. Manufacturing, renewables and health (including drug discovery), for example, could benefit hugely from the broader use of AI.
With a general election on the horizon, there is an even greater need for regulation to move swiftly, or it risks getting caught up in the noise of another electoral battle.
The development of such regulation requires expert guidance. Having specialists from diverse backgrounds, sectors and industries is vital if any regulation is to have the scope required to guide and benefit organisations and individuals for years to come.
We are in a prime position to support this, and it was encouraging to have so many insightful conversations when I was in London. I spoke with people who were genuinely intrigued by The Data Lab’s approach of bringing academia, industry and the public sector together and who could see it working internationally.
At the core, we must break free from the cycle of stagnant discussions about AI. We have the perfect opportunity to maximise the future use of the technology to benefit humanity. Let’s take the opportunity and ensure that AI’s potential is fully realised.