YOUNG BULL
Ticker Vault · MU · Micron Technology

Synthesized 2026-05-14 via hand-curated-poc

Position in the Physical Layer of AI thesis

MU is the memory anchor. Of the eight Physical Layer bottlenecks, three are downstream of high-bandwidth memory: training throughput, inference latency, and total cost per token. Every hyperscaler buying GPUs is also buying HBM stacks, and there are only three companies that make the stuff (Micron, SK Hynix, Samsung). Capex to expand HBM3e fabs is locked through late 2027 because the lithography and packaging steps share equipment with logic nodes. That is the cleanest "compute bottleneck" you can buy in equity form.

Quinn's book opened MU on 2026-03-27. As of the 2026-05-11 canonical snapshot, the position sits at +124.69%. That is the largest single-dollar gain in the book and the largest lifetime percentage gain on a current holding.

Recent catalysts (last 30 to 60 days)

  • Q3 FY26 print (early May). HBM revenue grew triple digits year over year. Management guided next quarter's HBM mix toward 25 percent of total DRAM revenue, vs ~16 percent in the print. The supply-guide language tightened: the prior call used "constrained," this call used "sold out through calendar 2026."
  • Microsoft + Meta backlog. Both customers confirmed multi-year HBM3e allocations in their April earnings calls. Neither named Micron specifically, but Micron's 8-K from the same week disclosed two customer concentrations above 10 percent of revenue. The math triangulates.
  • Industry capex flat through 2027. SK Hynix re-guided down on additional HBM capacity adds. Samsung still 6 months behind on HBM3e qualification at the lead customer (per Nikkei, late April). That leaves Micron as the only non-leader with a free-and-clear ramp.
  • Manufacturing partnership signals. ASML reported Q1 EUV bookings concentrated in memory, not logic, for the first time since 2024. That points to the HBM3e wafer step being the gating constraint.

The thesis (what has to be true)

1. HBM stays the dominant memory format for AI training and inference accelerators through at least 2028. Alternatives (HMC, GDDR7, custom in-package memory) need 3+ years to validate and qualify at scale.
2. Three-supplier oligopoly holds. Samsung does not catch up on HBM3e qualification before 2027. SK Hynix stays capex-constrained.
3. HBM4 transition (2027 to 2028) does not commoditize the segment. Micron's process roadmap and Idaho fab build keep them on the leading edge.
4. The Microsoft and Meta hyperscaler backlogs do not get cancelled or repriced lower. Their 2026 to 2028 GPU plans assume MU at current allocations.

Kill vectors (what would break the thesis)

  • HBM oversupply. If Samsung qualifies HBM3e at the lead customer before mid-2026 and SK Hynix accelerates capex, the three-supplier discipline breaks. Pricing then follows the standard DRAM cycle (down).
  • Hyperscaler concentration shift. If Microsoft or Meta substantially redirects 2027 GPU orders to non-Nvidia silicon that uses a different memory format (Broadcom TPU with custom memory, etc.), MU's allocation share contracts.
  • Process node stumble. Idaho fab delay, 1-gamma node yield problems, or a packaging defect that affects the HBM stack would break the ramp.
  • Mechanical exit at break. Quinn's discipline rule fires on a close below $400 for two consecutive sessions. That is the written exit price, locked at entry. Not a stop-loss, a rule.

Layer context

In the 8-layer Physical Layer of AI map, MU is the Compute and Memory anchor. The layer is currently at the 30 percent book cap (Quinn's sector concentration rule). Sister names in the layer: AVGO (custom AI silicon), ARM (compute IP). The layer's defining trait is that no AI model trains or serves without it, and the suppliers cannot be replaced inside a product cycle.

Micron is the cleanest pure-play memory exposure. The other two suppliers are Korean (SK Hynix) and conglomerated (Samsung Electronics across all of memory + logic + display + mobile). Neither offers the same single-thesis exposure to HBM.

Position discipline (the rules at entry)

  • Trim trigger: +200 percent from cost. Locked pre-entry. Predator agent fires the trim automatically.
  • Kill vector exit: break of $400 for 2 consecutive days. Full exit, no negotiation.
  • 4-quarter thesis-fail rule applies (Discipline-v2 Rule 2).
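The two mechanical rules above can be expressed as a simple check over daily closes. This is a minimal sketch, not Quinn's actual Predator agent: the function name, cost-basis input, and close-price list are assumptions for illustration.

```python
# Minimal sketch of the two mechanical position rules (assumed interface,
# not the actual Predator agent): trim at +200 percent from cost, full exit
# on a close below $400 for two consecutive sessions.

TRIM_GAIN = 2.00        # trim trigger: +200 percent gain from cost
EXIT_LEVEL = 400.00     # kill-vector exit: close below $400
EXIT_SESSIONS = 2       # ...for this many consecutive sessions

def check_position(cost_basis: float, closes: list[float]) -> str:
    """Return which rule fires for a series of daily closes, oldest first."""
    last = closes[-1]
    # Trim rule: lifetime gain from cost reaches +200 percent.
    if (last - cost_basis) / cost_basis >= TRIM_GAIN:
        return "trim"
    # Exit rule: the last EXIT_SESSIONS closes are all below the exit level.
    if len(closes) >= EXIT_SESSIONS and all(
        c < EXIT_LEVEL for c in closes[-EXIT_SESSIONS:]
    ):
        return "exit"
    return "hold"
```

For example, two consecutive closes at 398.50 and 396.00 would return "exit", while a single dip below $400 followed by a recovery would not. The check is deliberately dumb: locked thresholds, no discretion, which is the point of a pre-entry rule.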

Real money. Real position. Real receipts.

Young Bull covers the Physical Layer of AI thesis. The book on /book shows what is actually held. Mentions of other tickers are research, watch list, or thesis context, not positions.