Optical computing to cut AI energy demand?
An optical computer prototype developed by Xingjie Ni, an associate professor of electrical engineering in the Penn State School of Electrical Engineering and Computer Science, and his team opens a more realistic path to reducing the energy demands of artificial intelligence (AI) computation.
Their work was inspired by the growing mismatch between the computational demands of modern AI and the energy cost of running it.
“Today’s most powerful AI systems require enormous amounts of computation, which translates directly into rising electricity use, heat generation, and cooling demands,” says Ni. “In many cases, the challenge is no longer only how to make AI models more accurate but how to make them physically sustainable to run.”
Optical computing is intriguing within this context because light processes information extremely quickly, in parallel, and often with much lower energy consumption than electronics for certain operations. But there’s a catch: While optics is naturally excellent at linear computation, AI also critically depends on nonlinearity.
Ni and his team set out to address one of the bottlenecks of optical AI hardware: “How can we achieve the nonlinear behavior needed for intelligent computation without paying the usual price in high optical power, complex materials, or bulky hardware?”
Transforming info
Optical computing uses light to transform information as it passes through optical components. “Many of these transformations are linear—and the output changes proportionally with the input,” says Ni. “Light passing through a dimmer filter, for example, can be described as the input optical power multiplied by the filter’s transmittance. And a lens can perform a Fourier transform on an optical field. Linear operations are extremely important because they handle much of the heavy numerical workload in AI, such as large matrix operations. Optics performs these operations especially well, with high speed and massive parallelism.”
Unfortunately, linearity alone isn’t enough for general computation because many important computational functions are inherently nonlinear and can’t be fully reproduced by stacking only linear operations. “More importantly, nonlinearity is what allows AI systems to move beyond simple signal transformation and represent complex patterns, make decisions, and learn meaningful relationships in data,” Ni says. “Without nonlinearity, even a very deep or large network can mathematically reduce to a single linear transformation, which severely limits what it can do. This is why nonlinearity is indispensable for neural networks and machine learning systems.”
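The collapse Ni describes is easy to verify numerically. The sketch below (a generic illustration in Python/NumPy, not the team’s code) stacks three “layers” that are each just a matrix multiplication and shows the stack is indistinguishable from a single pre-multiplied matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three "layers" of a purely linear network: each is just a matrix.
W1, W2, W3 = (rng.standard_normal((4, 4)) for _ in range(3))

x = rng.standard_normal(4)

# Applying the layers one after another...
deep_output = W3 @ (W2 @ (W1 @ x))

# ...is identical to applying one pre-multiplied matrix.
W_single = W3 @ W2 @ W1
single_output = W_single @ x

print(np.allclose(deep_output, single_output))  # True
```

However many linear layers are stacked, the network can never represent more than one matrix, which is why some nonlinear step between layers is indispensable.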
Strong nonlinearity in optics is difficult to obtain efficiently: it typically requires high light intensity or specialized materials, which can increase both energy consumption and system complexity. “Our approach addresses this challenge by generating what we call ‘data nonlinearity.’ The nonlinear input-output behavior emerges from repeated optical reverberation within a multipass cavity, rather than from intrinsically nonlinear optical materials,” says Ni. “It makes the concept much more attractive for low-power AI hardware.”
How does an optical computer work?
It starts with uniform light. “Input data are encoded by a patterned element inside a compact optical cavity—a liquid crystal display (LCD) screen, in our case,” explains Ni. “When light first passes through the LCD, it’s modulated by the displayed pattern and begins to carry the input information. The cavity then sends the light back through the same pattern multiple times.”
Each pass can be thought of as another multiplication between the pattern already carried by the light and the pattern displayed on the LCD. As this process repeats, the overall input-output response develops into a nonlinear mapping that allows the system to perform AI-relevant computation—rather than merely act as a fast linear processor.
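A toy model makes the idea concrete. Assume (as a simplification of the team’s cavity, not their actual model) that the LCD transmittance `t` encodes one input value and that every round trip multiplies the light by that same pattern again:

```python
# Toy model of "data nonlinearity" from a multipass cavity:
# the LCD transmittance t encodes the input, and every round
# trip multiplies the light by that same pattern once more.

def cavity_response(t: float, n_passes: int) -> float:
    """Intensity after n_passes through a pixel of transmittance t."""
    intensity = 1.0            # uniform input light
    for _ in range(n_passes):
        intensity *= t         # one more pass through the pattern
    return intensity           # equals t ** n_passes

# A single pass is linear in t; multiple passes are not.
inputs = [0.2, 0.4, 0.8]
one_pass = [cavity_response(t, 1) for t in inputs]    # proportional to t
three_pass = [cavity_response(t, 3) for t in inputs]  # goes as t**3

print(one_pass)    # [0.2, 0.4, 0.8]   -- doubling t doubles the output
print(three_pass)  # ~[0.008, 0.064, 0.512] -- doubling t gives 8x
```

After one pass, doubling the input doubles the output; after three passes, doubling the input multiplies the output by eight. The input-output map has become nonlinear even though nothing in the loop is anything other than repeated linear transmission.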
This is a big deal since nonlinearity is often the most difficult and energy-expensive part to implement for optical computing. “Our approach avoids this conventional tradeoff. Instead of relying on strong material nonlinearities, high-power lasers, or complicated optical architectures, we obtain useful nonlinear computational behavior through repeated low-power optical reverberation,” says Ni.
The benefits are both computational and practical. Computationally, the system can support AI-relevant tasks because “it provides the nonlinear behavior that machine learning fundamentally requires, while still retaining the key advantages of optical computing: High speed and massive parallelism,” Ni says. “Practically, it remains compact, low power, and compatible with inexpensive incoherent light sources like LEDs. This makes it a more realistic path toward energy-efficient optical AI accelerators.”
One of the coolest aspects of this work “was realizing we didn’t necessarily need strong material nonlinearity to obtain the nonlinear behavior required by AI,” says Ni. “This is the conceptual shift at the heart of our work. Instead of treating nonlinearity purely as a material property, we showed it can emerge from optical dynamics and repeated interactions inside a multipass system.”
It also works with incoherent light. “Much of optical computing has been associated with coherent lasers and more delicate optical interference-based setups,” Ni says. “Seeing that a simpler, lower-cost platform could still achieve useful nonlinear learning behavior made the result feel both scientifically interesting and practically meaningful. It suggests a more realistic path toward scalable optical AI hardware.”
A path to useful nonlinear behavior
Main challenges to overcome? Figuring out how to obtain useful nonlinear behavior without falling back on the usual tradeoff of high optical power, exotic materials, or complex hardware. “We needed to design a system in which repeated optical passes would generate a strong enough nonlinear mapping to support learning, while still keeping the setup compact, stable, and energy-efficient,” says Ni. “Another challenge was demonstrating that the generated nonlinearity wasn’t just a theoretical curiosity, but could actually solve meaningful benchmark tasks and compete with standard digital models.”
Results clearly show this form of data nonlinearity allows the device to go beyond the performance limit of a purely linear system. “It’s an important result because it confirms nonlinearity is not only scientifically interesting, but also genuinely useful for optical computation,” Ni says. “Looking forward, if the goal is practical optical AI acceleration, then robustness, programmability, and eventual system integration will be equally important challenges to address.”
Validating nonlinear behavior
Simulations played a key role in demonstrating that the team’s device could perform meaningful learning tasks and in validating how reverberating optical transformations generate the desired nonlinear behavior. “We developed our own code to model this process and numerically showed the output exhibits a nonlinear mapping of the input,” says Ni.
Their work also “includes benchmark evaluations such as image classification and XOR tasks, in which the optical learner was compared with both linear and fully nonlinear digital models,” Ni says. “These comparisons showed the system wasn’t merely producing an unusual optical effect, but was actually delivering task-level performance comparable to nonlinear digital networks. More broadly, our simulation and modeling effort helped connect the underlying optical cavity physics to machine learning behavior.”
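XOR is the classic test of exactly this distinction. The sketch below (a generic least-squares illustration, not the team’s benchmark code) shows that a purely linear model cannot fit the XOR truth table, while adding a single nonlinear feature, the product of the two inputs, solves it exactly:

```python
import numpy as np

# XOR truth table: inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def fit_and_predict(features: np.ndarray) -> np.ndarray:
    """Least-squares fit with a bias term, evaluated on the same points."""
    A = np.hstack([features, np.ones((len(features), 1))])  # bias column
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ w

# Purely linear features cannot separate XOR: the best fit is 0.5 everywhere.
linear_pred = fit_and_predict(X)

# One nonlinear feature (the product x1*x2) makes XOR exactly solvable.
nonlinear_features = np.hstack([X, (X[:, 0] * X[:, 1])[:, None]])
nonlinear_pred = fit_and_predict(nonlinear_features)

print(np.round(linear_pred, 3))     # [0.5 0.5 0.5 0.5]
print(np.round(nonlinear_pred, 3))  # [0. 1. 1. 0.]
```

Any system that beats the 0.5-everywhere prediction on XOR is demonstrably doing nonlinear computation, which is why the task makes a clean benchmark for hardware like the team’s cavity.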
AI accelerators
Most immediate application? “An AI accelerator for math-heavy workloads, where speed, energy efficiency, and low latency matter,” says Ni. “Potential use cases include pattern recognition, machine vision, and other inference tasks that today consume significant power in digital hardware. If matured further, this kind of optical module could help reduce the energy and cooling burden for data centers.”
At this stage, the team’s device is a proof of concept and isn’t ready to be deployed as a commercial product. “Our immediate next steps are to make the system more programmable, robust, compact, and scalable,” Ni says. “This includes adding more tunable degrees of freedom, reducing electronic overhead, testing on larger and more realistic workloads, and moving toward designs that can be integrated and manufactured.”
FURTHER READING
B. Liu et al., Sci. Adv., 12, eaeb4237 (2026); https://doi.org/10.1126/sciadv.aeb4237.
About the Author
Sally Cole Johnson
Editor in Chief
Sally Cole Johnson, Laser Focus World’s editor in chief, is a science and technology journalist who specializes in physics and semiconductors.


