Artificial intelligence (AI) is becoming part of everyday life for many of us around the world. On an individual level, people are increasingly turning to AI models with their search queries. Google still dominates search, but ChatGPT poses the most significant threat to that dominance.
At a business level, no sector is left untouched—from farming to healthcare and finance to entertainment, organizations around the world are embedding AI within regular operations.
AI demand and usage are expected to rise exponentially during the next few years, so tech companies are responding by building swathes of enormous data centers. But this growth comes at a cost: Energy, economics, and environmental impact. Conventional computing simply can’t keep pace with increasing computational and energy demands. To sustain the AI revolution, we must rethink the physics of modern computing.
The energy problem
Even if we take AI out of the equation, electronic computing is at a crunch point. Moore’s Law is dying, Dennard scaling has broken down, and the result is a plague of dark silicon (the portion of on-chip transistors that must remain unpowered or idle to avoid overheating).
Training a large AI model is no small feat. Large language models (LLMs) are trained on vast amounts of data and can contain trillions of parameters. They predict, measure, adjust, and repeat the process billions of times. The computing power required to train AI models is estimated to double every six months.
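As a rough back-of-the-envelope sketch, that doubling compounds very quickly. The six-month doubling period is the estimate cited above; everything else below is simple arithmetic:

```python
# Illustrative only: growth in training compute if demand doubles every six months.
def compute_growth(years: float, doubling_period_years: float = 0.5) -> float:
    """Multiplicative growth factor after a given number of years."""
    return 2 ** (years / doubling_period_years)

for years in (1, 2, 5):
    print(f"After {years} year(s): ~{compute_growth(years):,.0f}x today's compute")
# After 1 year: ~4x; after 2 years: ~16x; after 5 years: ~1,024x
```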
Processing and moving this much data requires massive amounts of parallelism and power. In conventional computing, more power means higher-density systems, higher density means more resistance, and more resistance means more heat. This forces data centers to divert huge amounts of energy away from computing toward cooling, with as much as 40% of a data center’s total energy consumption spent on preventing servers from melting.
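To put that figure in context, here is a minimal sketch of the energy split for a hypothetical facility. Only the 40% cooling share comes from the text above; the facility’s total consumption is an assumed number chosen for illustration:

```python
# Hypothetical facility: how much energy is left for useful computing once
# cooling takes its share. Only the 40% cooling fraction comes from the text.
total_energy_mwh = 100_000            # assumed annual energy consumption
cooling_fraction = 0.40               # upper-end share spent on cooling

cooling_mwh = total_energy_mwh * cooling_fraction
compute_mwh = total_energy_mwh - cooling_mwh
print(f"Cooling: {cooling_mwh:,.0f} MWh | available for computing: {compute_mwh:,.0f} MWh")
# Roughly: for every three units of energy that do useful work on the servers,
# two more are spent keeping those servers cool enough to run.
```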
The infrastructure underpinning AI is already struggling, and it’s clear conventional computing can’t sustain us moving forward.
The economic problem
Data center operators are facing a financial conundrum: Either limit computing density to what their current cooling facilities can handle—hindering their business capabilities—or push thermal limits and cause hardware and components to degrade at an accelerated rate and increase operational expenses and waste.
Then there’s the cost of building new data centers—McKinsey predicts $5.2T of investment will be required by 2030. If data centers continue to rely on conventional computing, it’s a significant financial risk to take on inefficient infrastructure. The everyday consumer is also impacted by these flawed economics; as AI places unprecedented strain on electricity grids and data centers’ power demands rise, the cost of electricity is driven up. These costs are passed on to households near these facilities in the form of rapidly rising electricity bills.
The environmental problem
Most significantly, AI’s growing electricity demands, high water consumption (for cooling), and electronic hardware waste have an enormous impact on our planet. Training and running large-scale models requires vast amounts of energy, much of which is still generated from fossil fuels, contributing directly to rising carbon emissions.
Research indicates that global AI systems account for approximately 2.5 to 3.7% of worldwide greenhouse gas emissions, which exceeds those of the aviation industry (~2%), and emissions are expected to rise as AI adoption accelerates.
Beyond energy use, the rapid expansion of data centers also consumes land, water, and local resources at an unprecedented scale. The world cannot sustain AI’s growth if we continue to rely on existing infrastructure.
If we get the physics right, AI can actually help us tackle the climate crisis. AI is discovering new materials for batteries and optimizing transportation management to reduce carbon emissions. AI is being used to plan new energy-efficient cities, and even predict deforestation rates for proactive conservation efforts.
Photonics is a foundation for intelligence
Fortunately, there’s a silver bullet: Photonic (or optical) computing. Photonic computing exploits the massless, chargeless nature of photons to overcome key limitations of electron-based systems. Photons propagate through waveguides with very little energy loss, generating far less heat than electrons moving through a circuit and reducing the thermal dissipation and cooling requirements of demanding workloads.
Light allows multiple signals to pass through the same physical space without interference. Using wavelength-division multiplexing (WDM), photons can transmit many separate signals through a single pathway with minimal crosstalk. This enables a level of spatial and spectral parallel processing that electronic systems can’t achieve (due to electromagnetic interference and charge screening), and it delivers orders-of-magnitude performance and efficiency gains.
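A toy numerical sketch makes the principle concrete. The carrier frequencies and data values below are arbitrary assumptions; the point is that independent signals superimposed on one shared path can each be recovered without interfering with the others:

```python
import numpy as np

# Toy illustration of wavelength-division multiplexing: several independent data
# values ride on distinct carrier frequencies through ONE shared path, and each
# is recovered by correlating against its own carrier. Frequencies and values
# are arbitrary assumptions chosen only to show the principle.
t = np.linspace(0, 1, 10_000, endpoint=False)    # normalized time axis
carrier_freqs = [10, 20, 30, 40]                 # distinct "wavelength" channels
data = [0.7, -0.3, 1.2, 0.5]                     # one value per channel

# Multiplex: all channels superimposed on a single shared signal
waveguide = sum(a * np.cos(2 * np.pi * f * t) for a, f in zip(data, carrier_freqs))

# Demultiplex: project the shared signal back onto each carrier
recovered = [2 * np.mean(waveguide * np.cos(2 * np.pi * f * t)) for f in carrier_freqs]
print(np.round(recovered, 3))   # ~[0.7, -0.3, 1.2, 0.5] -- the channels do not interfere
```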
These are fundamental advantages rooted in physics—not incremental engineering gains. These physics-driven advantages manifest in better energy efficiency and reduced latency, especially for linear algebra operations central to AI. Silicon photonic integration with CMOS processes allows us to scale performance and deploy systems without facing the power-density and thermal walls that limit traditional electronic systems.
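To make the link to AI workloads explicit, here is a minimal model of the operation an optical mesh is typically built around. The matrix sizes and values are arbitrary assumptions, and the code only reproduces the math an idealized photonic core would perform as light passes through it:

```python
import numpy as np

# Toy model of the core AI operation photonics accelerates: the matrix-vector
# product y = W @ x. Sizes and values here are arbitrary assumptions.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))   # layer weights, encoded in the optical elements
x = rng.normal(size=8)        # input activations, encoded in light amplitudes

# Electronically, each output element is a chain of multiply-accumulate steps
# (4 x 8 = 32 of them here); optically, the same weighted sums form passively,
# in parallel, as the light propagates through the mesh in a single pass.
y = W @ x
print(y)
```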
Photonics is the only alternative to electronic infrastructure, which is increasingly inefficient and impractical for the scaling demands of AI. The collapse of Moore’s Law and the end of Dennard scaling don’t signal an end to progress, but the start of a new path where light, not electrons, carries the world’s compute.
About the Author
Robert Todd
Robert Todd is chief technology officer at Optalysys (Leeds, U.K.).

