It’s no secret silicon photonics faces wide-ranging challenges, including hybrid integration, finding new materials with added functionality to meet demands for high-speed modulation, and working with silicon CMOS foundries. And everyone you ask about these topics is likely to have a different opinion.
Intel’s vision: What’s driving silicon photonics?
Intel started developing silicon photonics back in 2000, “so we’re in our 23rd year of developing them,” says Robert Blum, head of Silicon Photonics Strategy, Intel Foundry Services, Intel Corp.
The telecom industry is running into bandwidth issues at the edge, but it took quite a while for this problem to materialize. “When we look at compute workloads, artificial intelligence (AI)/machine learning workloads are really the prime use case where we’ll see optics deployed to truly scale,” says Blum. “When every CPU and GPU has an optical tile and massive amounts of data going in and out of the chip—and also going significant distances because these highly meshed architectures have multiple GPUs or CPUs clustered together—we simply won’t be able to do it electrically anymore. This will be a turning point for the industry, and it’s just a few years away.”
Last year, Intel decided to open up a second photonics fab, which is a 300 mm fab on a 32 nm CMOS node, to start a foundry offering. “I’m in the process of putting it together and hopefully eventually we’ll get to a good PDK and design methodology,” says Blum.
Silicon photonics ‘is still witchcraft’
One of the panelists, Peter Winzer, founder and CEO of startup Nubis Communications, introduced his company, which just emerged from stealth mode. “We’re building what we believe is the highest density optical transceiver on the planet—at very low power,” he says. “It’ll be the optical interface for AI and machine learning predominantly, and for mobile fronthaul as well. AI and machine learning are where the real need is and it dwarfs everything in Ethernet.”
The success of silicon photonics is based on the premise that it’s a semiconductor technology and can be manufactured in volume by semiconductor fabs.
“Silicon photonics is still witchcraft,” says Winzer, pointing out how primitive it is compared with analog BiCMOS and digital CMOS in terms of designing, modeling, making, testing, and producing a chip.
To turn silicon photonics into a true CMOS play, “we need to improve PDK libraries, which are incomplete and not optimized,” Winzer explains. “Foundries still expect users to characterize the building blocks, which is ridiculous. Maybe one foundry offers resistors and another foundry only capacitors and, if you want a resistor and capacitor and other technology, good luck developing your own. This is where we are in silicon photonics.”
Modeling software is likewise “at best fair in silicon photonics,” Winzer adds. “Tape-out volumes are low in silicon photonics, and it can take nine to 12 months to get chips back. MPW turnarounds are even longer, and it’s frustrating because it gets into a cycle of iteration, which is needed because there is no ‘first time is right’ design in silicon photonics.”
This cycle of iteration kills progress rather than enables it, Winzer points out. “Obviously, the process isn’t as repeatable and the yield isn’t there. We need to run these iterations to get to a product,” he says. “And the IP vendor ecosystem that exists in digital CMOS and to some extent in analog CMOS doesn’t really exist in silicon photonics—you can’t go to an IP vendor and say I want this block and this block, put it together, and it works.”
And a packaging ecosystem doesn’t really exist in silicon photonics. “There are efforts to get it going, but it isn’t nearly as advanced as in CMOS,” says Winzer. “The process volume rate is just not there. Some of the foundries you tape out with in silicon photonics, if you ask them: ‘Can I scale this up to several thousand wafers per year?’ They say: ‘not with me,’ and you’re stuck because you can’t transfer what you just developed with them to another foundry. You start from the very beginning, running test chips, running your test structures, and the whole development process starts from scratch. If you make the wrong choice of foundry as a startup, you’re screwed. You have one chance, one shot, that’s it. This is the difference between silicon photonics and CMOS.”
Hybrid integration with electro-optic polymers
Michael Lebby, CEO of Lightwave Logic, which makes electro-optic polymers, delved into the state of hybrid photonic integrated circuits (PICs).
“With indium phosphide (InP), you can do pretty much everything except ASICs or large-scale transistor-based PICs,” Lebby says. “You can do small-scale but not large-scale PICs. On the silicon side, you can do pretty much everything, except lasers. Intel has merged InP and silicon and sort of solved that problem. So pure-play materials aren’t really getting us there. During the last decade, people started searching for different materials to improve performance in various ways.”
Lebby describes a hybrid PIC as a combination of InP and silicon, which can include polymers or dielectrics, or other materials like thin-film lithium niobate. “A bunch of different materials are being added to either InP or silicon to improve the performance of the PIC,” he says.
This is where electro-optic polymers come in—they enable really high-speed modulators. “And because it’s a liquid, you can spin it on and drop it onto a silicon photonics platform,” Lebby says.
One challenge Lebby sees, agreeing with Winzer, is that foundries are “pretty fixed in their PDKs and recipes because they run in CMOS. When you’re a photonics company and have photonics dimensions, components, and designs, it doesn’t automatically fit. Can the photonics industry change their recipes/designs to fit the foundry PDKs? Sometimes yes, sometimes no. It’s actually difficult.”
A big concern is whether silicon CMOS foundries are flexible enough for novel modulator/PIC platforms. “It’s an interesting question,” says Lebby. “We’ve seen foundries doing some good work. But is this the start, or something we’re going to engineer through, or are we going to have problems?”
How do silicon foundries take all of these new materials—like thin-film lithium niobate, barium titanate, plasmonics, indium phosphide, and others—and integrate them into CMOS platforms? “It’s not easy,” Lebby says.
And another point is that as an industry, silicon photonics still hasn’t clearly defined “hybrid.” For Lebby, it means it isn’t a pure play—it involves a different material. “It can be either frontend or backend,” he explains. “And the reason I say backend is if you look at the electronics industry, we’ve really gone to chip-scale packaging and the photonics industry is definitely heading in that direction. Chip-scale PICs, getting rid of the gold box packaging and the traditional package, so chip-on-board and these types of directions are becoming increasingly important.”
What will it take to make silicon photonics a success story?
If we want silicon photonics to be successful, “maybe we should look closer at the electrical integrated circuit,” says David Piehler, an engineer for Dell Technologies. “Nobody would claim that one of the millions of transistors in my iPhone is a great transistor or the highest-end transistor, but they would agree that I have a lot of them.”
As we move from telecom/datacenter to AI/ML clusters and high-performance computing (HPC), “there’s a movement within that field I’m not sure everyone is aware of to go from narrow fast lanes—having one very fast lane—to having many slow lanes,” says Piehler. “People who are starting to see the rubber hit the road in the AI/ML/HPC world are all telling me basically the same thing, which is good for silicon photonics because no longer do we need to worry about being at the highest end and having the very best, most competitive, fastest modules. It’s: ‘we have a good-enough modulator, but we can make a lot of them in a very small space.’”
On this topic, Winzer pointed out: “Why did we always go to faster rates? The reason is cost per modulator—if you look at the move from 10G to 40G around 2000, a single modulator supporting 40G was cheaper than four modulators supporting 10G. That’s the reason we went from 10G to 40G, and it continued and continued.”
But once you’re talking dense integration, “the paradigm breaks down,” says Winzer. “If I build a highly integrated chip, the cost of building one modulator or 10 is the same. So this paradigm breaks down, and from this perspective I don’t have to go that fast anymore. This is one side of the coin.”
The other side is to view it from a systems perspective. “Who needs to use these lanes? It’s the Ethernet switch chips or the AI accelerators, and a 50T Tomahawk switch chip at 100G/lane has 500 I/O ports,” points out Winzer. “It’s 1000 differential lanes—and 2000 bumps for the signal, not even counting ground. So you have 5000 or so bumps on the underside of that chip that supply the I/O, and you haven’t even put power into it yet. You’d need thousands and thousands of these bumps to run it at 50 or 10G. You’d need 100,000 bumps, which just isn’t feasible. And this is why the industry is going to 100G and 200G with the next-gen chip, a 100T switch chip. Now you have a choice: do you build it with 1000 ports of 100G, which then equals 10,000 or more bumps, or do you stay with 500 ports at 200G? The fact that you can’t build that many bumps is why you need the high speeds. These are two contradicting things going on in the industry right now.”
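Winzer’s bump arithmetic can be reproduced with a back-of-the-envelope calculation. The sketch below is illustrative only, not Broadcom’s actual package design: it assumes duplex ports (a TX and an RX lane each), differential signaling (two bumps per lane), and a hypothetical ground-bump overhead factor chosen to roughly match his “5000 or so bumps” figure.

```python
def io_bump_estimate(switch_tbps: int, lane_gbps: int, ground_overhead: float = 1.5):
    """Rough I/O bump count for a switch ASIC.

    Illustrative assumptions (not vendor specs):
    - each duplex port carries one TX and one RX lane at lane_gbps
    - differential signaling: 2 bumps per lane
    - ground_overhead: extra ground/reference bumps per signal bump
    """
    ports = (switch_tbps * 1000) // lane_gbps   # duplex I/O ports
    lanes = 2 * ports                           # TX + RX lanes
    signal_bumps = 2 * lanes                    # differential pairs
    total_bumps = int(signal_bumps * (1 + ground_overhead))
    return ports, lanes, signal_bumps, total_bumps

# 50T chip at 100G/lane: 500 ports, 1000 lanes, 2000 signal bumps, ~5000 total
print(io_bump_estimate(50, 100))
# 100T chip: 100G/lane doubles the bump count; 200G/lane keeps it flat
print(io_bump_estimate(100, 100))
print(io_bump_estimate(100, 200))
```

Under these assumptions, a 100T chip at 100G/lane needs roughly twice the bumps of the 50T chip, while moving to 200G/lane holds the bump count constant, which is the trade-off Winzer describes.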