AVGO research
Position in the Physical Layer of AI thesis
AVGO is the dual-layer anchor: Compute IP plus Networking. That is intentional. The Physical Layer thesis says any single-layer bet is fragile to a layer-substitution surprise; AVGO sits across two layers, so a substitution in one is buffered by the other. Custom AI silicon (Google TPU, Meta MTIA, OpenAI rumored, ByteDance rumored) covers the Compute side. ~90 percent share of the cloud Ethernet switch market covers the Networking side. Both lines are built through AVGO's foundry partners; AVGO owns the designs, not the fabs.
Quinn's book opened AVGO on 2026-03-06. The 2026-05-11 canonical snapshot marks AVGO at +24.27%. It is the largest position in the book at 10.6 percent of book weight.
Recent catalysts (last 30 to 60 days)
- $73B custom chip backlog (disclosed April). Three named hyperscaler customers across Compute and Networking. OpenAI was floated on the Q4 2025 call as a custom-silicon co-design customer (rumored, not yet 10-Q-confirmed).
- 8-K TPU v6 deal (April). Contract value was not disclosed, but the timing aligns with Google's earnings call language about "doubling TPU capacity in 2026." The math triangulates.
- 800G mix expansion. Analyst day deck showed 800G port mix exiting Q2 at 38 percent. That is a tier-1 hyperscaler refresh in motion, and AVGO is the default winner of every cloud Ethernet refresh.
- VMware integration plays through. The acquired software stack is now bundled with the hardware sale to enterprises; recurring software revenue is masking some of the cyclicality on the silicon side.
The thesis (what has to be true)
1. Custom AI silicon stays the preferred path for the hyperscaler that wants to NOT pay Nvidia margins. Google TPU is the proof, Meta MTIA is the second proof, OpenAI is the third (if it lands).
2. Ethernet stays the dominant AI fabric vs InfiniBand. Nvidia is pushing NVLink + Spectrum-X, but the install base and operational familiarity favor Ethernet at every hyperscaler except Nvidia's own DGX deployments.
3. Hyperscaler capex stays inflated through 2027. If capex breaks, AVGO compresses with the rest of the basket; the dual-layer structure dampens but does not eliminate cyclicality.
4. VMware doesn't lose the enterprise contract base on price.
Kill vectors (what would break the thesis)
- Google TPU contract loss. This is the load-bearing custom-silicon relationship. If Google moves TPU v6 design to a competitor (unlikely, but not zero), repricing the custom-silicon backlog likely takes 15-20 percent off the share price.
- Nvidia networking break-out. If Spectrum-X wins 20+ percent of new AI fabric deployments by Q4 2026, AVGO's network share starts compressing and the dual-layer hedge weakens.
- Foundry price shock. AVGO doesn't own the fab. TSMC pricing on 3nm + 2nm flows directly into AVGO's gross margin. A TSMC capacity reallocation (Apple, AMD, Nvidia all bid) can squeeze it.
- Mechanical exit at break. Discipline rule fires on a close below $370 for two consecutive sessions.
Layer context
In the 8-layer Physical Layer of AI map, AVGO is the only dual-anchor. It is excluded from the 30 percent sector concentration cap calculation because counting it in both Compute and Networking layers would double-count the position size. The layer placement on /map shows it bridged across the two.
Sister names: NOK (telco-focused networking), ANET (datacenter-focused networking, smaller). Neither has the custom-silicon dimension.
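The dual-anchor exclusion above can be sketched as code. This is a minimal illustration, assuming a simple ticker-to-(layer, weight) mapping; the function name and the non-AVGO weights are hypothetical, not book data.

```python
# Sketch: 30 percent per-layer concentration cap, with dual-anchor
# names excluded so a position bridging two layers is not
# double-counted toward either layer's total.
from collections import defaultdict

SECTOR_CAP = 0.30  # max share of book weight per layer


def layer_concentration(positions, dual_anchors):
    """Sum weights per layer, skipping dual-anchor tickers."""
    totals = defaultdict(float)
    for ticker, (layer, weight) in positions.items():
        if ticker in dual_anchors:
            continue  # counted in neither layer's cap
        totals[layer] += weight
    return dict(totals)


# Hypothetical book slice: AVGO weight is from the note, the rest illustrative.
positions = {
    "AVGO": ("Compute+Networking", 0.106),  # dual anchor, excluded
    "ANET": ("Networking", 0.07),
    "NOK":  ("Networking", 0.04),
}
caps = layer_concentration(positions, dual_anchors={"AVGO"})
assert all(w <= SECTOR_CAP for w in caps.values())
```

The point of the exclusion is visible here: fold AVGO's 10.6 percent into both layers and the Networking bucket alone would be inflated by a position that is really one bet.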
Position discipline (the rules at entry)
- Trim trigger: +200 percent from cost. Locked pre-entry.
- Kill vector exit: break of $370 for 2 consecutive days. Full exit, no negotiation.
- 4-quarter thesis-fail rule applies.
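The two mechanical rules above reduce to a few lines of code. A minimal sketch, assuming daily closes ordered oldest to newest; the thresholds come from this note, the function names are illustrative, and no real trading system is implied.

```python
# Sketch of the entry-locked discipline rules:
#   trim at +200 percent from cost (price at 3x cost basis),
#   exit on two consecutive closes below $370.
KILL_LEVEL = 370.0   # break level from the kill-vector rule
TRIM_MULTIPLE = 3.0  # +200 percent from cost => 3x cost basis


def should_trim(cost_basis, last_close):
    """Trim trigger: position is up 200 percent or more from cost."""
    return last_close >= cost_basis * TRIM_MULTIPLE


def should_exit(closes):
    """Kill-vector exit: two consecutive closes below the break level."""
    return any(a < KILL_LEVEL and b < KILL_LEVEL
               for a, b in zip(closes, closes[1:]))


# One close below $370 does not fire; two in a row does.
assert not should_exit([380.0, 365.0, 372.0])
assert should_exit([380.0, 365.0, 362.0])
```

Mechanical means mechanical: both checks are pure functions of price, with no discretionary input, which is the whole point of locking them pre-entry.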
Moat 9. Asym 8. The moat score is the highest in the book because the two layers (custom-silicon design wins plus Ethernet share) compound: lose one, the other holds; lose both, the thesis is dead.
Real money. Real position. Real receipts.