# Photonics & Interconnect
## One-line thesis
As AI infrastructure shifts from training-dominated compute bottlenecks toward inference-heavy cluster scaling, photonics and interconnect become a standalone investable stack because the binding constraint moves to data movement, optical bandwidth, and bandwidth-per-watt.
## Summary
Photonics is no longer just a supporting detail inside the broader AI infrastructure story. It is becoming its own bottleneck stack with distinct layers, timing, and beneficiaries. The market has already priced in some obvious optical winners, but the deeper edge comes from mapping where the data-movement constraint tightens next: merchant lasers, silicon photonics foundries, optical test and burn-in, co-packaged optics, and the shift from electrical to optical interconnect as cluster density rises.
## Worldview fit
- Reinforces that the next AI bottlenecks sit beyond raw compute.
- Updates the view that inference and distributed cluster scaling are fundamentally networking and data-movement problems.
- Supports bandwidth-per-watt as a primary screening criterion.
## Core thesis points
- Inference shifts the bottleneck from accelerators alone to networking and data movement.
- Optical interconnect becomes more important as electrical interconnect limits tighten.
- Capital and margin do not accrue equally across the stack; upstream and enabling layers may still be under-discovered.
- Co-packaged optics (CPO) is a clue that the system is moving closer to a new architecture, but timing matters.
## Beneficiary layers
### Core / nearer-term layers
- Networking / optical systems: AVGO, MRVL, LITE, COHR, AAOI, CIEN
- Silicon photonics foundry / manufacturing: TSEM
- Test / burn-in: AEHR
### Emerging / optionality layers
- CPO integration / process platform: ALMU
- Deeper upstream merchant-laser and epitaxy layers may be real value-capture points, but public-market access to them is less clean in this workspace
## What matters most
- bandwidth-per-watt
- qualification and production ramps
- merchant-laser adoption for higher-speed optical systems
- silicon photonics capacity and foundry leverage
- whether CPO timing pulls forward or remains later-cycle
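The first criterion above, bandwidth-per-watt, is simple arithmetic and worth making explicit as a screen. A minimal sketch follows; the module names and power figures are illustrative assumptions for the calculation, not vendor specifications.

```python
# Hedged sketch: bandwidth-per-watt screening arithmetic.
# Module specs below are illustrative placeholders, not vendor data.

def gbps_per_watt(gbps: float, watts: float) -> float:
    """Delivered bandwidth per watt of module power (higher is better)."""
    return gbps / watts

def pj_per_bit(gbps: float, watts: float) -> float:
    """Energy cost of moving one bit, in picojoules (lower is better)."""
    return watts / (gbps * 1e9) * 1e12

# Hypothetical successive generations of a pluggable optical module.
modules = {
    "800G pluggable (assumed 14 W)": (800, 14),
    "1.6T pluggable (assumed 20 W)": (1600, 20),
}

for name, (gbps, watts) in modules.items():
    print(f"{name}: {gbps_per_watt(gbps, watts):.1f} Gbps/W, "
          f"{pj_per_bit(gbps, watts):.1f} pJ/bit")
```

Under these assumed figures, the 1.6T generation moves each bit for 12.5 pJ versus 17.5 pJ at 800G, which is the direction the screen rewards: each interconnect generation should lower the energy per bit, not just raise headline bandwidth.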
## Hype / discard
### Keep
- inference = networking/data-movement problem
- optical interconnect as a structural bottleneck
- upstream enabling layers as possible under-discovered value capture points
### Discount
- over-precise timing claims on CPO without confirmed production evidence
- promotional framings of every optical name as a chokepoint
- treating all transceiver/module names as equal-alpha expressions
## Link to AI factory theme
Photonics & Interconnect is a child stack of AI Factory Architecture. The parent theme describes the broad system shift; this theme isolates one of the most important bottlenecks inside that system.
## Status
Promoted from reference support into a standalone investable theme.