A quiet revolution is unfolding within the world of optics. For centuries, tools for manipulating light, such as lenses, mirrors, and prisms, followed a familiar formula: Static, bulky, and designed for a single purpose. But artificial intelligence (AI), robotics, cloud computing, and space technologies demand faster, smaller, and more adaptable systems, so optics must evolve.
Active metasurfaces are nanostructured optical chips that shape and steer light on command, and they’ve begun ushering in an era of programmable optics. Imagine a single optical chip that can switch from a wide-angle depth sensor to a narrow-beam projector, or from a light detection and ranging (LiDAR) scanner to an optical switch, all via software.
The problem with today’s optics
Traditional optics are primarily single-function tools, which are inflexible when the task changes. Large glass lenses that focus light in a fixed way, or mechanical gimbals used to steer laser beams, tend to be slow, fragile, and expensive. This rigidity limits progress and forces engineers to design around the optics instead of letting optics adapt to the system.
Static optics can’t keep up with applications such as autonomous navigation, where robots and vehicles need dynamic vision; routing light across thousands of ports in real time for optical circuit switching within data centers; or holographic light fields for immersive augmented reality (AR) next-gen displays.
These applications require a smarter, more responsive optical layer, one not etched in glass but written in code.
Active metasurfaces
Metasurfaces are thin, flat optical chips patterned with structures smaller than the wavelength of light. Unlike lenses, which bend light through curvature and bulk, metasurfaces control light through these engineered subwavelength nanostructures.
Most metasurfaces today are passive: etched once and fixed in function. Even so, they are already showing up in smartphone cameras and wearable displays, replacing multi-lens stacks with a single flat optic.
But the real game-changer is the active metasurface, a chip where every pixel can be electronically tuned to control the phase or direction of light in real time.
Light control metasurfaces (LCMs) can dynamically steer beams, split beams, act like a lens or an adaptive optical element, or project almost any optical function, all without a single moving part. Imagine the flexibility of a projector or a phased-array radar antenna packed into a tiny chip; that is what programmable optics enables.
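At its core, beam steering on such a chip reduces to writing a phase gradient across its pixels. The sketch below is a simplified illustration, not Lumotive's actual API (the function name and parameter values are assumptions); it computes the textbook blazed-grating phase profile phi(x) = (2*pi/lambda) * x * sin(theta) that steers a normally incident beam to angle theta:

```python
import numpy as np

def steering_phase(pixel_pitch_um, n_pixels, wavelength_um, angle_deg):
    """Per-pixel phase profile (radians, wrapped to [0, 2*pi)) that
    steers a normally incident beam to angle_deg, via the blazed-grating
    relation phi(x) = (2*pi / wavelength) * x * sin(theta)."""
    x = np.arange(n_pixels) * pixel_pitch_um           # pixel positions
    phi = 2 * np.pi * x * np.sin(np.radians(angle_deg)) / wavelength_um
    return np.mod(phi, 2 * np.pi)                      # wrap to one cycle

# Steer a 905 nm beam (a common LiDAR wavelength) to 10 degrees;
# the pixel pitch here is an illustrative subwavelength value.
profile = steering_phase(pixel_pitch_um=0.4, n_pixels=256,
                         wavelength_um=0.905, angle_deg=10.0)
```

Changing the steering angle means recomputing and rewriting this array, a purely electronic update, which is why no moving parts are needed.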
Turning a concept into real products
Programmable optics is already making its way into real-world systems.
At Lumotive, we’re deploying active metasurfaces in three-dimensional (3D) sensing systems for robotics, smart infrastructure, and industrial automation. Our solid-state beam steering replaces bulky spinning LiDARs and unreliable mechanical mirrors.
Metalenz and others are shipping passive metasurfaces in consumer electronics, creating ultrathin cameras and sensors for space-constrained devices.
We’re seeing the beginning of a shift from static optics to software-defined light control, which opens the door to more agile, multifunctional, and intelligent photonic systems.
Smarter machines need smarter sensing
One of the most immediate use cases for programmable optics is 3D sensing and LiDAR.
Today’s depth sensors are often hard-coded with a fixed field of view (FOV), fixed resolution, and single scan mode. But environments change constantly, and a robot navigating a warehouse doesn’t need the same scan pattern as a drone flying outdoors or a car backing into a parking space.
With programmable optics, different sensors aren't required for every use case. One chip can adapt its behavior on the fly: widening its FOV when entering a room, zooming in on a moving object, splitting beams to track multiple regions at once, or shifting scanning angles to reduce glare or multipath errors.
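In software, that adaptability looks like swapping configuration presets at runtime rather than swapping hardware. The sketch below is a hypothetical illustration (the `ScanMode` fields, preset values, and context labels are all mine, not a real sensor API):

```python
from dataclasses import dataclass

@dataclass
class ScanMode:
    """One software-defined scan configuration (all fields illustrative)."""
    fov_deg: float        # horizontal field of view
    resolution: int       # points per scan line
    frame_rate_hz: float  # scans per second

# Hypothetical presets a single programmable sensor could switch between
MODES = {
    "wide": ScanMode(fov_deg=120.0, resolution=256, frame_rate_hz=10.0),
    "zoom": ScanMode(fov_deg=15.0,  resolution=256, frame_rate_hz=30.0),
}

def select_mode(context: str) -> ScanMode:
    """Pick a scan mode from a (deliberately simplified) scene label."""
    return MODES["zoom"] if context == "tracking" else MODES["wide"]
```

The point is architectural: the scan pattern becomes data that the host application chooses, not a property baked into the optics.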
This approach mirrors what's already happened with software-defined radios and digital imaging. Replacing the mechanical scanning elements of traditional LiDAR with a solid-state scanner also makes the sensors more rugged, less expensive, and smaller.
Switching: The future of networking
Programmable optics is poised to transform data center infrastructure. As AI models grow larger and demand more bandwidth, traditional electronic switches struggle to keep pace. Optical switching—sending light directly from point A to point B—offers a leap in speed and energy efficiency.
Legacy optical switches rely on microelectromechanical systems (MEMS) mirrors or other mechanical elements, which limit scalability and switching speed. Active metasurfaces, by contrast, can dynamically redirect light beams across thousands of ports in microseconds, enabling ultrafast, ultradense optical fabrics for the next generation of cloud and AI infrastructure.
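Logically, such a switch is a reconfigurable connection map from input ports to output ports. The toy model below is my own sketch, not a description of any shipping product; it captures the one invariant any circuit switch must enforce, that no two inputs land on the same output:

```python
class OpticalCrossbar:
    """Toy model of a metasurface-based optical circuit switch: a
    connection map routes each input port to one output port, and the
    entire map can be swapped in a single (microsecond-scale) update."""

    def __init__(self, n_ports: int):
        self.n_ports = n_ports
        self.routes: dict[int, int] = {}

    def reconfigure(self, mapping: dict[int, int]) -> None:
        # Each output may serve at most one input (no beam contention)
        if len(set(mapping.values())) != len(mapping):
            raise ValueError("output port contention")
        self.routes = dict(mapping)

    def route(self, port_in: int) -> int:
        """Output port currently connected to the given input port."""
        return self.routes[port_in]

switch = OpticalCrossbar(n_ports=1024)
switch.reconfigure({0: 512, 1: 7})
```

Because the whole map is rewritten at once, reconfiguration cost is set by the metasurface's update time, not by how far a mirror must physically move.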
CMOS matters
To truly scale, programmable optics must follow the path of silicon: Miniaturization, integration, and mass production. That makes complementary metal-oxide-semiconductor (CMOS) compatibility crucial. When optical chips can be fabricated alongside standard semiconductor electronics, they can be integrated directly into sensing modules, networking gear, or consumer devices.
Because these chips have no moving parts, they're inherently more robust, compact, and scalable than traditional optics. Think tiny LiDARs or compact free-space optical communication terminals without gimbals; such chips are already being built today.
General-purpose optical platforms
Our long-term vision is bold and surprisingly within reach. Just as central processing units (CPUs) replaced dozens of specialized chips, and graphics processing units (GPUs) became flexible engines for AI, a single programmable optical chip could replace dozens of static components like lenses and prisms, mechanical scanners, optical switches, beamsplitters, and structured light projectors.
This chip won’t just steer or shape light—it will become a general-purpose platform for light control that can be rewritten with software. These chips will enable sensors that evolve with their environment, networks that rewire themselves, displays that blend reality and holograms, and optical computing platforms that reduce power and latency by orders of magnitude.
The shift from fixed optics to programmable light control is as fundamental as the transition from analog to digital computing. It isn't just a better way to build sensors and switches; it puts optics on par with digital logic and software as a programmable layer of the system.
As we move into a future defined by autonomy, intelligence, and immersion, programmable optics will be one of the invisible engines that make it all possible.
About the Author
Gleb Akselrod
Gleb Akselrod is the founder and CTO of Lumotive (Seattle, WA).

