Why AI cloud computing beats on-premise infrastructure

By Daniel Beers, Senior Vice President, Global Data Center Operations of Ardent Data Centers, a product of Northern Data Group.


The age-old argument of buying vs. leasing has plagued organizations for centuries. From the decision to rent an office rather than purchasing the building, to hiring seasonal workers instead of permanent staff, even to signing up for a monthly rather than annual Adobe Photoshop subscription, everyday business is flush with dilemmas regarding the permanence of places, products and services.  

Often, there’s no clear-cut answer: leaders must consider factors like payback time, storage space, control over the asset and more. But in the age of AI, during which technology is progressing at a never-before-seen rate, buying AI tools and infrastructure outright is often an unwise investment. After all, they may become obsolete before the purchase ever pays for itself. Instead, many businesses are choosing to access compute power externally via the cloud.

Let’s explore why cloud computing is booming – and how more businesses can harness off-site GenAI capabilities that are as powerful and accessible as on-site infrastructure.


Simple, affordable scalability

Modern AI applications require significant computational resources, but installing the infrastructure to run them can prove time-consuming and expensive, often putting it out of reach for SMEs. According to IBM, the physical size of an average data center ranges from 20,000 to 100,000 square feet; for comparison, a full-size football pitch covers around 64,000 sq. ft. Meanwhile, a standard GenAI data center’s energy requirements range from 300 to 500 megawatts, enough to power as many as 500,000 homes.

Simply put, operating a data center is a serious undertaking, requiring huge amounts of expensive space and resources, particularly amid today’s high energy prices. Cloud compute providers offer instant access to powerful hardware that can easily be scaled up or down based on demand, and organizations pay only for the resources they use, rather than shouldering the 24/7 running and ownership costs of retained infrastructure.
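The pay-for-what-you-use argument above can be made concrete with a back-of-envelope calculation. All figures below are illustrative assumptions, not actual vendor pricing: the point is that a fixed-cost cluster accrues running costs around the clock, while a leased one charges only for busy hours.

```python
# Back-of-envelope comparison of on-premise vs. cloud GPU costs.
# Every figure here is a hypothetical assumption for illustration only.

ON_PREM_CAPEX = 250_000        # assumed purchase price of a small GPU cluster
ON_PREM_OPEX_PER_HOUR = 12.0   # assumed 24/7 power, cooling, space and staff
CLOUD_RATE_PER_GPU_HOUR = 4.0  # assumed hourly lease rate per GPU
GPUS = 8

def on_prem_cost(wall_clock_hours: float) -> float:
    """Capital outlay plus running costs that accrue whether or not the cluster is busy."""
    return ON_PREM_CAPEX + ON_PREM_OPEX_PER_HOUR * wall_clock_hours

def cloud_cost(busy_hours: float) -> float:
    """Pay only for the hours the leased GPUs are actually in use."""
    return CLOUD_RATE_PER_GPU_HOUR * GPUS * busy_hours

# One year of wall-clock time, with the hardware busy only 20% of it --
# a common pattern for bursty AI training and inference workloads.
year_hours = 24 * 365
busy_hours = 0.2 * year_hours

print(f"on-premise: ${on_prem_cost(year_hours):,.0f}")
print(f"cloud:      ${cloud_cost(busy_hours):,.0f}")
```

Under these assumed rates and at 20% utilization, the cloud option costs a fraction of the on-premise one; the comparison flips only as utilization approaches continuous, which is exactly the trade-off the buy-vs-lease decision turns on.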

Advanced performance accessibility

The recent semiconductor crisis, in which car production was slashed and PS5s became akin to gold dust, offers a reminder of how supply issues can disrupt progress. Now, rising demand for GPUs driven by the widespread adoption of AI threatens to cause supply chain challenges once again.

According to Nasdaq, Nvidia, the leader in GenAI chipmaking with an estimated market share of 95%, saw huge demand for its H100 GPU. In fact, on its recent earnings conference call, the company said demand for its upcoming flagship H200 and Blackwell GPUs will extend well into 2025. 

In some ways, this demand bodes well for Nvidia, for the companies that already own its chips, and for organizations looking for flexible access to compute power. Many cloud providers have already installed thousands of advanced GPUs from top manufacturers like Nvidia, which customers can lease and use instantly. Some providers even enjoy early purchase rights to manufacturers’ next-generation models thanks to longstanding partnerships. Customers that work with these organizations can therefore harness advanced compute power long before their competitors, helping to establish them as market leaders in an increasingly AI-first world.

Prioritization of next-gen technology

However, this AI world is also a murky, unfamiliar one. The industry has seen so much hype and so many headlines that it can be tough for everyday business owners to decipher what’s important, what deserves their attention and what should be ignored or avoided. While the technology ostensibly seems to have taken over the world, “if you compare a mature market to a mature tree, we’re just at the trunk,” Ali Golshan, founder of an AI start-up, told The Washington Post. “We’re at the genesis stage of AI.”  

For organizations looking to capitalize on AI, it can therefore be incredibly useful to partner with a specialist provider that has inside knowledge of the industry and technology. Cloud providers regularly invest in the latest technologies first. Their experts can identify the best-in-class hardware needed for customers now and into the future, and purpose-build corresponding data center environments with proprietary performance-optimizing solutions. Similarly, cloud providers invest heavily in the latest security measures to protect data and infrastructure, while handling important maintenance tasks such as software updates to enable customers to freely focus on innovation. 

Bringing the best ideas to life

AI is the future of business, but that future remains unpredictable. The technology could progress faster or slower than foreseen, and its impact could be felt to varying degrees. Meanwhile, new laws that aim to put safety guardrails around AI are set to alter its development course. The European Union’s AI Act, more comprehensive than the US’s light-touch compliance approach, is expected to come into force in the summer of 2024. And, according to Golshan, one of his clients’ biggest concerns is that strict new AI laws will render their past investments a waste.

This unpredictability underlines the benefits of AI cloud computing. By partnering with a specialist external provider, businesses can access highly coveted GPUs whenever and however often they like. That way, they can enjoy advanced technology support and realize previously unachievable innovation goals – all without breaking the bank. You can too. So, why not explore cloud computing today?
