Memory is the bottleneck. HBM demand outruns supply through 2027.
The picks and shovels of the AI buildout.
One investor. One book. Every position written down. The infrastructure beneath the model headlines, named and held with conviction.
Join the waitlist. Get my morning brief. When Pro launches, get the bot watching your book.
The models get the headlines. They do not train on press releases. They train on power, on memory, on fiber, on silicon. Scaling laws are a procurement problem before they are a science problem, and the procurement is happening right now.
The bet is the physical layer. Memory that meets HBM demand without slipping a node. Custom silicon priced like a service. Neoclouds taking workloads the hyperscalers cannot land fast enough. Power utilities sitting next to gigawatts of new load. Photonics for the next interconnect ceiling. The thesis is to own the shovels until either the buildout slows or someone shows me a model that runs without them.
Full thesis lives on Substack. The working notes live in the research vault (publishing soon). The agent traces live on /command.
Where conviction sits.
Click any card for cost basis, last flag, and the writeup.
Neoclouds are the new hyperscalers. $50B backlog, sovereign anchors.
Custom silicon for hyperscalers. The ASIC pivot is real.
Power is the ceiling on AI. Nuclear-adjacent generation, repriced.
Photonics is how compute scales past the copper wall.
An autonomous research engine reads filings, transcripts, and price tape against this book on a loop. Every flag is logged, every disagreement is named, and the trace is open. The site is a window onto that loop.
Agent runs · lifetime 82,851