In mid-2025, Amazon crossed a milestone that would have seemed like science fiction a decade ago: one million autonomous robots deployed across its logistics operations. It’s a staggering figure, one that illustrates how far automation has come and how quickly it’s spreading.
Meanwhile, lawmakers are considering the first federal frameworks for robotics and autonomous systems, with 2026 poised as a potential inflection year—not for speculative hype, but for mass adoption.
One critical layer is still holding that adoption back: perception, the ability of machines to sense and understand their physical environment.
Across factories, farms, cities, and vehicles, the ability of machines to accurately sense and understand their surroundings shapes not only performance but public trust. It’s the difference between a robot that performs predefined tasks and one that adapts to complex environments. Increasingly, strategic investors are recognizing that perception—modular, programmable, and scalable—is the infrastructure layer that will separate those who deploy autonomy at scale from those who stall out during the pilot phase.
This is why a growing wave of strategic and financial investors is backing companies like Lumotive that are focused on the foundational role programmable optics will play in the future of intelligent systems. We’re entering an era where perception isn't just a hardware problem; it’s a system challenge. Our platform gives developers the tools to adapt faster, build smarter, and scale wherever the market takes them.
Flexibility and modularity
Lumotive’s beam-steering chips and LiDAR reference designs are architected for flexibility: the ability to plug in different lasers, sensors, and processors depending on the application. That kind of modularity turns perception into a platform rather than a one-off product, and it enables industrial robots to move from rigid, preconfigured environments into dynamic, unpredictable ones.
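To make the platform idea concrete, here is a minimal, hypothetical sketch, not Lumotive's actual API, of how a modular perception module could let an integrator swap emitters, detectors, and compute without redesigning the rest of the stack. All names and figures are illustrative assumptions.

```python
from dataclasses import dataclass

# Hypothetical illustration only: these classes are not Lumotive's API.
# The point is that emitter, detector, and compute choices plug into one
# perception module behind a common, swappable configuration.

@dataclass
class Emitter:
    name: str
    wavelength_nm: int

@dataclass
class Detector:
    name: str
    kind: str  # e.g., "SPAD array"

@dataclass
class Processor:
    name: str
    tops: float  # rough compute budget

@dataclass
class PerceptionModule:
    emitter: Emitter
    detector: Detector
    processor: Processor

    def describe(self) -> str:
        return (f"{self.emitter.name} ({self.emitter.wavelength_nm} nm) + "
                f"{self.detector.name} [{self.detector.kind}] + "
                f"{self.processor.name} ({self.processor.tops} TOPS)")

# Same beam-steering front end, two different builds for two markets.
warehouse_amr = PerceptionModule(
    Emitter("905 nm laser", 905),
    Detector("SPAD imager", "SPAD array"),
    Processor("embedded SoC", 8.0),
)
outdoor_robot = PerceptionModule(
    Emitter("940 nm laser", 940),
    Detector("SPAD imager", "SPAD array"),
    Processor("edge GPU", 40.0),
)

if __name__ == "__main__":
    print(warehouse_amr.describe())
    print(outdoor_robot.describe())
```

The design choice the sketch illustrates is simply that application-specific decisions live in configuration rather than in a custom sensor design, which is what shortens integration cycles.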
In industrial automation, labor shortages are widespread, timelines are tightening, and the old approach of custom sensors built over 18-month cycles isn’t viable anymore. Our programmable architecture helps reduce those cycles to as little as three to six months and accelerates everything from warehouse automation to outdoor robotics for construction and agriculture.
For example, Hokuyo’s YLM-10LX sensor was the first commercial solid-state LiDAR based on Lumotive’s light control metasurface (LCM) semiconductors. It offers precision and programmability without mechanical fragility for automated guided vehicle (AGV) or autonomous mobile robot (AMR) applications. NAMUGA's Stella-2 followed quickly, integrating an LCM and a single-photon avalanche diode (SPAD)-based image sensor into a lightweight module for smart machines like autonomous mowers and delivery bots.
Within the robotics sector, we’re seeing a subtle shift from mobility to intelligence, most visibly in the autonomous lawn care robot market. Once a novelty, these machines have now entered the mainstream. But the gap between good and great machines lies in how they perceive: How do they distinguish pets from sprinklers, or new grass seed from fallen leaves? This category is maturing fast, but true mass adoption hinges on high-fidelity, solid-state perception systems that can handle unpredictable outdoor environments.
Cities are evolving, too, becoming not just “smart” but responsive. The next generation of urban infrastructure depends on real-time sensing that adapts, learns, and acts. In Las Vegas, Nevada, and Singapore, LiDAR networks are already optimizing traffic in real time. But the next wave of smart cities will require a more distributed, programmable perception layer. Our platform allows for scalable deployment of sensing systems that can adapt in the field, adjusting resolution, field of view, and power usage based on the task at hand. This kind of contextual intelligence is what lets cities respond instead of merely collecting data.
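As a thought experiment, the sketch below shows what task-driven reconfiguration could look like in software. It is a hypothetical example under assumed parameter names and values, not a real deployment API: resolution, field of view, and power become profiles selected at runtime rather than properties fixed at manufacture.

```python
from dataclasses import dataclass

# Hypothetical sketch of field-reconfigurable sensing: the sensor's
# resolution, field of view, and power budget are software settings
# chosen per task instead of fixed hardware characteristics.

@dataclass
class SensingProfile:
    name: str
    horizontal_fov_deg: float
    points_per_frame: int
    power_budget_w: float

PROFILES = {
    # Wide, low-power scan while an intersection is quiet.
    "idle_monitoring": SensingProfile("idle_monitoring", 120.0, 20_000, 2.0),
    # Narrower, denser scan when pedestrians or cyclists are present.
    "tracking": SensingProfile("tracking", 40.0, 80_000, 6.0),
}

def select_profile(pedestrians_detected: int) -> SensingProfile:
    """Pick a sensing profile based on the current scene, trading
    resolution and power against the task at hand."""
    return PROFILES["tracking"] if pedestrians_detected > 0 else PROFILES["idle_monitoring"]

if __name__ == "__main__":
    for count in (0, 3):
        p = select_profile(count)
        print(f"{count} pedestrians -> {p.name}: "
              f"{p.horizontal_fov_deg} deg FOV, "
              f"{p.points_per_frame} pts/frame, {p.power_budget_w} W")
```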
Autonomous systems for mobility are no longer in the pilot stage; they’re in the field now. Waymo delivers more than 250,000 paid rides per week across multiple U.S. cities, with plans to hit 3,500 autonomous vehicles by 2026. But even Alphabet’s flagship fleet still contends with challenges around edge cases, public trust, and operational complexity. Scalable perception, customizable in terms of form factor, power constraints, and use case, will enable the next wave, from sidewalk bots to autonomous delivery fleets to AI-enhanced electric vertical takeoff and landing (eVTOL) aircraft.
Beyond sensing, programmable optics is transforming how information moves in the AI era. Photonics-enabled beam steering can unlock dynamic optical switching for faster, lower-latency interconnects that support massive throughput at reduced energy cost. As AI workloads multiply, bottlenecks shift from graphics processing units (GPUs) to the optical backplane, and that is where solid-state, software-defined photonics can make the difference.
The technology stack that enables this future isn’t “one size fits all,” which makes the shift to programmable optics so critical. It gives developers—not just at global tech giants but across the innovation ecosystem—the ability to build sensing systems that evolve with their markets.
Programmable optics is about more than autonomy; it’s about perception that adapts, scales, and endures. From robots navigating chaotic environments to data centers routing at the speed of light, it’s the foundation for intelligent infrastructure at global scale. For the companies and cities shaping the next decade, the path forward is clear: See better, go further.
About the Author

Sam Heidari
Sam Heidari is CEO of Lumotive (Redmond, WA) and leverages 30 years of semiconductor leadership, including as former CEO of Quantenna, where he led its IPO and $1B+ acquisition. He holds a Ph.D. from USC, serves on multiple boards, and has driven Lumotive's path to mass production and global growth since 2021.