Not long ago, AI was something most people barely thought about. Now it’s everywhere: powering cars, assisting in surgeries, and optimizing supply chains. While the public is just starting to grasp its impact, those of us in the industry have been working on the technology behind AI for years. Long before the world was paying attention, engineers were laying the groundwork, solving one complex computing challenge after another. Now, after all that work, we face a challenge of a different order: AI is expanding faster than anyone predicted, and the infrastructure that supports it needs to catch up.
Recent analyses indicate that global energy demand from data centers could increase by 165% by the end of the decade; put another way, more than two and a half times today’s level. The computing power required to train and run AI models is skyrocketing, and energy demands are growing right alongside it. I’ve seen this pattern before: when technology accelerates at this pace, the underlying hardware has to evolve just as quickly. How we design, build, and power AI needs a fundamental shift, starting at the semiconductor level.
AI’s Growing Compute Bottleneck
Let’s start with where AI workloads actually run today. Most of the heavy lifting, especially training, happens in massive data centers. These facilities are packed with high-performance chips, moving and processing enormous amounts of data 24/7. And as models grow more advanced, their compute requirements climb with them: the next generation of large language models will require many times the computational resources of today’s largest systems. The scale is remarkable, and so are the energy costs that come with it.
At the same time, AI workloads are shifting. More and more, inference, the process of running a trained model on new data, is moving out of the cloud and closer to the devices that need it. Wearables, industrial automation, autonomous vehicles, and smart home systems all rely on AI to make split-second decisions, and they can’t afford the delays or costs of constantly shuttling data to and from the cloud. Processing AI at the edge solves that, but it raises a challenge of its own: how do you pack high-performance AI into power-constrained, battery-operated devices?
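To put rough numbers on that tradeoff, here is a minimal back-of-envelope sketch in Python. Every latency figure in it is an illustrative assumption rather than a measurement, but it shows why a round trip to the cloud is a non-starter for split-second decisions:

```python
# Back-of-envelope latency comparison: cloud inference vs. edge inference.
# Every figure below is an illustrative assumption, not a measurement.

NETWORK_RTT_MS = 60.0    # assumed network round trip to a cloud region
CLOUD_INFER_MS = 5.0     # assumed model latency on a data-center GPU
QUEUE_JITTER_MS = 20.0   # assumed queuing/serialization overhead
EDGE_INFER_MS = 25.0     # assumed latency of a compact model on-device

cloud_ms = NETWORK_RTT_MS + CLOUD_INFER_MS + QUEUE_JITTER_MS
edge_ms = EDGE_INFER_MS
print(f"cloud path: ~{cloud_ms:.0f} ms per decision")
print(f"edge path:  ~{edge_ms:.0f} ms per decision")

# At highway speed (30 m/s, ~108 km/h), latency converts directly into
# distance traveled before the system can react.
SPEED_M_PER_S = 30.0
print(f"distance covered waiting on the cloud: {SPEED_M_PER_S * cloud_ms / 1000:.2f} m")
print(f"distance covered on the edge path:     {SPEED_M_PER_S * edge_ms / 1000:.2f} m")
```

Even with generous network assumptions, the cloud path spends most of its budget in transit rather than in the model itself, which is exactly why latency-critical inference is migrating to the device.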
The Role of Semiconductor Innovation
Addressing AI’s compute and energy demands requires a new generation of semiconductor technologies. Traditional computing architectures are already stretched thin, and simply throwing more power-hungry processors at the problem won’t work long-term. The semiconductor industry has to take the lead, developing materials and architectures that deliver real efficiency gains without sacrificing performance. The biggest opportunities lie in two areas: bringing AI processing to the edge and improving how data moves at high speed.
FD-SOI: Enabling AI at the Edge
Advancements in semiconductor technology are making AI more efficient where it matters most. Unlike traditional bulk CMOS and FinFET technologies, fully depleted silicon-on-insulator (FD-SOI) delivers a crucial mix of high performance and ultra-low power consumption. That’s exactly what’s needed for AI at the edge, where energy efficiency is just as important as speed.
Edge AI devices have to process data on the spot while drawing as little power as possible. FD-SOI lets processors run at supply voltages as low as 0.4V, which cuts energy use dramatically because switching power scales with the square of the voltage. For battery-powered AI devices, that means running longer between charges. It also minimizes power wasted in standby mode, which is critical for the always-on AI we rely on every day, from smart assistants to wearables to industrial monitoring systems.
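A first-order sketch makes the voltage point concrete. The classic CMOS dynamic power model is P = αCV²f, so dropping from a notional 0.8V supply to 0.4V cuts switching power roughly 4x at the same clock (in practice frequency and leakage shift too; this isolates the voltage term). The baseline voltage, capacitance, and activity values below are illustrative assumptions, not FD-SOI datasheet figures:

```python
# First-order CMOS dynamic power model: P = alpha * C * V^2 * f.
# The 0.8 V baseline and the activity/capacitance/frequency values are
# assumptions chosen only to show how quadratic voltage scaling plays out.

def dynamic_power(alpha: float, cap_farads: float, volts: float, freq_hz: float) -> float:
    """Switching power under the classic alpha * C * V^2 * f model."""
    return alpha * cap_farads * volts**2 * freq_hz

ALPHA = 0.1    # assumed average switching activity
CAP = 1e-9     # assumed total switched capacitance (1 nF)
FREQ = 500e6   # assumed clock frequency (500 MHz)

p_nominal = dynamic_power(ALPHA, CAP, 0.8, FREQ)  # assumed nominal supply
p_low_v = dynamic_power(ALPHA, CAP, 0.4, FREQ)    # near-threshold supply

print(f"at 0.8 V: {p_nominal * 1e3:.0f} mW")
print(f"at 0.4 V: {p_low_v * 1e3:.0f} mW")
print(f"dynamic power ratio: {p_nominal / p_low_v:.1f}x")  # (0.8/0.4)^2 = 4x
```

That quadratic relationship is why shaving a few tenths of a volt off the supply is worth far more than any linear tweak, and why a technology that stays stable at 0.4V is so valuable at the edge.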
Then there’s heat. AI workloads generate a lot of it, and small edge devices don’t have the luxury of large cooling systems. Because its transistors leak less and switch at lower voltages, FD-SOI naturally reduces heat buildup, keeping performance steady without extra bulk. That matters for AI applications that need to work reliably in real-world conditions, whether inside an autonomous vehicle or out in the field on a remote IoT sensor.
Silicon Photonics: Overcoming AI’s Data Bottleneck
While FD-SOI is driving efficiency at the edge, AI’s other major challenge lies in moving data at the speed required for modern workloads. Large AI models depend on rapid, high-bandwidth communication between processors, and today’s electronic interconnects are becoming a bottleneck. Traditional copper-based connections are not scaling efficiently with AI’s growing data demands, leading to excessive energy consumption and heat generation in data centers.
Silicon photonics is making AI data movement faster and more efficient. Instead of relying on traditional electrical interconnects, it uses optical connections to transmit data at high speeds while using far less power. That’s critical as AI workloads grow. Right now, interconnect speeds sit around 400–800 Gbps, but AI infrastructure will soon need to handle 1.6 Tbps and beyond. Silicon photonics is built to support that kind of scale without the massive energy costs of traditional solutions.
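The stakes of that bandwidth jump are easy to quantify: link power is roughly energy per bit times bit rate, so at 1.6 Tbps every picojoule per bit costs 1.6 W per link. The per-bit energy figures below are illustrative assumptions for comparing an electrical link with an optical one, not vendor specifications:

```python
# Link power = energy per bit * bit rate. At 1.6 Tbps, each 1 pJ/bit of
# interconnect energy costs 1.6 W per link, so per-bit efficiency compounds
# quickly across a cluster. The pJ/bit values are assumptions, not specs.

BIT_RATE_BPS = 1.6e12  # 1.6 Tbps target line rate

links = {
    "assumed electrical SerDes": 10e-12,  # ~10 pJ/bit (assumption)
    "assumed silicon photonics": 3e-12,   # ~3 pJ/bit (assumption)
}

for name, joules_per_bit in links.items():
    watts = joules_per_bit * BIT_RATE_BPS
    print(f"{name}: {watts:.1f} W per 1.6 Tbps link")

N_LINKS = 10_000  # assumed link count for a large training cluster
delta_w = (links["assumed electrical SerDes"]
           - links["assumed silicon photonics"]) * BIT_RATE_BPS * N_LINKS
print(f"cluster-level saving at {N_LINKS:,} links: {delta_w / 1000:.0f} kW")
```

Multiplied across the thousands of links in a large training cluster, per-bit efficiency stops being a detail and becomes the power budget.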
It’s also changing how AI hardware is designed. Take co-packaged optics (CPO), for example. By integrating optical components directly into the processor package, CPO shortens the electrical path a signal has to travel before it reaches the fiber, cutting energy loss and making AI systems more compact and power-efficient. These kinds of innovations aren’t just keeping up with AI’s demands; they’re making sure the infrastructure behind it is ready for what comes next.
The Future of AI Depends on Smarter Semiconductor Design
Every conversation I have with customers, engineers, and industry leaders comes back to the same thing: AI’s biggest challenge isn’t capability—it’s power. They need hardware that can keep up without driving energy costs through the roof. They want efficiency without compromise.
FD-SOI is making AI at the edge possible by delivering real performance at ultra-low power. Silicon photonics is solving the bottlenecks in data centers, making sure AI can scale without hitting an energy wall. These aren’t theoretical solutions—they’re the technologies that will define AI’s next phase.
The demand for AI isn’t slowing down, and the infrastructure behind it needs to move just as fast.