#invest
Let’s think like an institution for a moment. Right now, while everyone debates which AI model will dominate or whether the technology lives up to its hype, a quiet land grab is happening in plain sight. The prize isn’t code or algorithms or training data. It’s something far more tangible and infinitely more scarce: megawatts. And the investors who understand this aren’t asking whether to allocate capital to data center infrastructure. They’re asking whether they’re already too late.
This is one of those rare moments where market structure creates an opportunity that won’t exist in eighteen months. The operators who secured power allocations over the past few years didn’t just get favorable positioning. They built moats that cannot be breached on any timeline that matters. When a utility allocates 200 megawatts to your facility, you’re not buying a commodity that competitors can source elsewhere. You’re locking in access to grid capacity that took a decade to build and will take another decade to expand. The substation, the transmission infrastructure, the generation capacity behind it, none of this materializes because someone else writes a bigger check. It exists or it doesn’t. And if it doesn’t, the timeline to create it stretches so far into the future that it becomes strategically irrelevant.
Here’s what makes this extraordinary: we’re watching monopolies form in real-time, and the market is still pricing these assets like competitive businesses. An operator with secured power in a constrained market isn’t competing with the next entrant who raises capital and breaks ground. They’re competing with grid build-outs that won’t complete until 2030 or later, if they happen at all. That’s not a competitive advantage that erodes as new supply enters. That’s a structural monopoly where the barrier to entry is measured in years and billions of infrastructure investment that has nothing to do with data centers and everything to do with utility-scale power delivery that entire regions lack.
Think about the negotiating dynamic this creates. Hyperscalers are spending tens of billions building AI infrastructure. Their constraint isn’t capital or technology. It’s compute capacity. When they need 100 megawatts delivered in 18 months and only three operators can provide it, pricing stops being the primary variable. The conversation shifts to availability and timeline. Can you deliver or can’t you? If you can, the rate becomes secondary because the alternative is falling behind competitors who secured their capacity and are already training next-generation models. If you can’t deliver on timeline, it doesn’t matter how attractive your pricing is. The deal doesn’t happen because speed is worth more than savings.
This completely inverts traditional competitive analysis. In most industries, early movers face eventual competition as capital recognizes opportunity and floods the market. Here, capital is irrelevant if the power doesn’t exist. Someone cannot simply raise a billion dollars and replicate your facility because the limiting input isn’t construction funding. It’s grid allocation that’s already spoken for. Even when utilities agree in principle to serve new load, interconnection queues stretch years into the future. In practice, an operator trying to secure power today is competing for capacity that won’t exist until well past the planning horizon of the hyperscalers who need it now. Existing operators with secured allocations effectively operate as regional monopolies, and every utility rejection or delay makes those allocations more valuable.
The asymmetry here is stunning when you actually examine the risk-reward profile. The downside is protected by long-term contracted revenue with investment-grade counterparties who cannot easily walk away even if their needs evolve. These aren’t month-to-month leases subject to competitive repricing. These are 15 to 20-year agreements with take-or-pay provisions where hyperscalers committed to capacity when everyone assumed abundant supply and competitive alternatives. The contracts locked in rates that reflect commodity data center economics, not monopoly infrastructure positioning. Meanwhile, the upside comes from contract renewals where operators can now price in scarcity premiums, expansion capacity where marginal economics reflect genuine pricing power, and new facilities where every megawatt allocated commands rates that would have seemed absurd just three years ago.
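To see how much of that upside is just arithmetic, here is a minimal sketch comparing a legacy lease against a renewal priced with a scarcity premium. Every number in it, the megawatts, the rates, the discount rate, is an assumption for illustration, not a term from any actual contract.

```python
# Hypothetical illustration: legacy-contract vs. renewal economics.
# Every number here is an assumption for the sketch, not an actual lease term.

def npv(cash_flows, discount_rate):
    """Discount a list of annual cash flows back to present value."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows, start=1))

megawatts = 200          # secured allocation (assumed)
legacy_rate = 1.4e6      # $/MW-year under the legacy contract (assumed)
renewal_rate = 2.1e6     # $/MW-year with a scarcity premium (assumed ~50% uplift)
term_years = 15
discount = 0.08          # assumed discount rate

legacy = npv([megawatts * legacy_rate] * term_years, discount)
renewal = npv([megawatts * renewal_rate] * term_years, discount)

print(f"Legacy contract NPV:   ${legacy / 1e9:.2f}B")
print(f"Repriced renewal NPV:  ${renewal / 1e9:.2f}B")
print(f"Uplift from repricing: {renewal / legacy - 1:.0%}")
```

The specific figures don’t matter. The point is that because the operator’s cost base barely moves, a repriced rate flows almost straight through to contract value.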
Watch what happens as these contracts come up for renewal. Hyperscalers will discover that the competitive alternative they assumed would exist simply doesn’t. The choice isn’t between renewing at higher rates or finding a cheaper provider. The choice is renewing at significantly higher rates or waiting years for new capacity that may never materialize while competitors who maintained their relationships continue scaling operations. This isn’t speculation about future pricing power. This is the inevitable outcome when one party can walk away and the other cannot, which is monopoly pricing power in its purest form.
The geographic element makes this even more compelling. Investors sometimes worry that facilities in secondary markets face disadvantages compared to prestigious metros. The opposite is true in power-constrained environments. An operator with secured power in a cooperative jurisdiction has higher barriers to entry than one fighting multi-year permitting battles in constrained coastal markets. Hyperscalers don’t care about geographic prestige. They care about latency to fiber, power reliability, and speed to delivery. An operator bringing 500 megawatts online in 18 months in a secondary market completely outcompetes someone stuck in a five-year approval process in a major metro, regardless of the market’s prestige.
The debt markets figured this out before equity markets, which is why credit investors are pricing project financing at investment-grade spreads. They’re not lending against growth potential or execution risk. They’re lending against contracted cash flows from counterparties who cannot practically default because the infrastructure is mission-critical and alternatives don’t exist. When lenders are comfortable with 80%+ loan-to-cost ratios at tight spreads, they’re modeling utility-like characteristics, not speculative development. The gap between how credit markets price this risk and how equity markets value these companies represents one of the clearest mispricings in infrastructure investing right now.
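To make the credit-market logic concrete, here is a rough project-finance sketch. Again, every input is an illustrative assumption, not a number from any real deal.

```python
# Hypothetical project-finance sketch: how contracted revenue supports
# high loan-to-cost leverage. Every input is an illustrative assumption.

project_cost = 2_000_000_000       # total build cost for a large campus (assumed)
loan_to_cost = 0.80                # leverage level referenced in the text
interest_rate = 0.065              # assumed all-in borrowing cost
amort_years = 20

contracted_revenue = 280_000_000   # annual take-or-pay revenue (assumed)
operating_costs = 70_000_000       # annual operating costs (assumed)

debt = project_cost * loan_to_cost
# Level annual payment on a fully amortizing loan (standard annuity formula).
annual_debt_service = debt * interest_rate / (1 - (1 + interest_rate) ** -amort_years)

net_operating_income = contracted_revenue - operating_costs
dscr = net_operating_income / annual_debt_service

print(f"Debt: ${debt / 1e9:.2f}B, annual debt service: ${annual_debt_service / 1e6:.0f}M")
print(f"NOI: ${net_operating_income / 1e6:.0f}M, coverage ratio: {dscr:.2f}x")
```

A coverage ratio comfortably above 1.0x on take-or-pay revenue from investment-grade counterparties is what lets lenders underwrite this like a utility rather than a speculative development.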
What makes this particularly urgent is that the advantage compounds rather than erodes. Every new facility announcement adds load to already-stressed grids. Every interconnection rejection makes existing allocations more valuable. Every multi-year delay in bringing competitive capacity online extends the period where secured operators can command premium economics without meaningful competition. The moat widens as the industry grows, which is the opposite of every technology market where early advantages disappear as capital and competition arrive. Here, capital is arriving and discovering it cannot compete because the constraint cannot be solved with money alone.
Smart money isn’t buying current earnings. They’re buying decade-long positioning in markets where supply cannot catch demand and new entrants cannot compete on any relevant timeline. They’re buying the second and third lease cycles where scarcity pricing replaces legacy contract economics. They’re buying expansion capacity where operators deploy marginal capital at returns that reflect monopoly positioning rather than competitive rates. They’re buying before contract renewals demonstrate pricing power and before retail discovers that boring infrastructure is actually a compounding scarcity asset.
The technology disruption risk that normally haunts infrastructure investments is largely neutered here. Even if AI efficiency improves dramatically and reduces compute per workload, hyperscalers remain contractually obligated for committed capacity. And if efficiency gains enable more workloads rather than reducing infrastructure needs, which is exactly what history suggests happens with every technology improvement, then operators with secured power are even better positioned because the constraint remains binding while demand accelerates. Either way, the contracted revenue provides downside protection while the scarcity dynamic provides upside optionality.
The timeline matters more than most investors realize. Right now, you can still buy operators with secured power at valuations that reflect competitive markets rather than monopoly positioning. You can still acquire exposure before contract renewals demonstrate pricing power. You can still position before retail discovers what institutional capital already knows. But that window closes with every passing quarter as the thesis validates, as capacity expansions get delayed or rejected, and as the market recognizes that these aren’t real estate plays generating mid-teens returns. These are positional monopolies in the most critical infrastructure buildout of the decade, and they’re mispriced because most investors are still using frameworks designed for competitive markets that no longer apply.
The operators who locked in power didn’t just win the race to build data centers. They won the race for the one input that cannot be substituted, replicated, or competed away on any timeline that matters. That’s not a competitive advantage that gets arbitraged away. That’s a structural monopoly that persists for the entire investment horizon. The question isn’t whether this thesis plays out. The physical constraints guarantee it does. The question is whether you position now while valuations still reflect competitive assumptions, or whether you wait until everyone else figures it out and pays peak valuations.
The bear case everyone fixates on is that AI demand disappoints and the buildout stalls, but that outcome requires several unlikely things to happen simultaneously.
The hyperscalers have already committed hundreds of billions to AI infrastructure. Microsoft, Google, Amazon, Meta aren’t speculating on whether AI might be useful. They’re building because their competitors are building, and falling behind means losing enterprise customers who are already demanding AI capabilities. This isn’t dot-com speculation where companies had no revenue. Enterprises are paying for AI services right now, and those sunk costs are enormous and irreversible.
The bull case doesn’t require AI to solve world hunger or achieve AGI. It just needs to be valuable enough that enterprises continue paying for it. Customer support automation, code assistance, content generation, and data analysis all have tangible ROI today. Companies aren’t going to abandon tools that demonstrably reduce costs or increase productivity just because AI didn’t cure cancer. Once you’ve told enterprise customers you’re offering AI services, you can’t suddenly announce you’re shutting it down because demand disappointed. Hyperscalers are locked into a multi-year buildout cycle by competitive dynamics and customer commitments.
Even if generative AI hype fades, these data centers don’t become worthless. They’re running cloud infrastructure, rendering, simulation, scientific computing. The capacity doesn’t vanish, it reprices. And with secured power, operators still have cost advantages over any new entrants regardless of what workloads they’re running.
The real risk isn’t AI going to zero. It’s that growth rates disappoint and lease rates compress. But even in that scenario, operators with secured power and contracted revenue are insulated. Their leases are already signed. The question becomes whether new capacity gets leased at favorable rates, not whether existing contracts evaporate. The data center thesis doesn’t require AI to be revolutionary. It just requires it to be useful enough that hyperscalers keep training models and running inference at scale, and that bar is already cleared.
Behind-the-meter power doesn’t reduce the moat. It validates how severe the grid constraint is. When hyperscalers resort to building their own power generation, that’s not competition. That’s capitulation.
Behind the meter means building on-site generation rather than drawing from the grid. This creates massive problems. First, fuel supply complexity. Natural gas requires pipeline infrastructure and long-term contracts. Nuclear requires decade-plus regulatory approval for unproven SMR technology. Solar and wind need massive land area and battery storage that make gigawatt-scale economics prohibitive.
Second, operational complexity hyperscalers explicitly don’t want. Microsoft and Google are software companies, not utility operators. Running power plants requires different expertise, regulatory compliance, and safety protocols. They’re only doing this because they cannot get grid power fast enough. If an operator could deliver 500 megawatts in 18 months via secured grid allocation, hyperscalers would choose that over building their own plant every time.
Third, net-zero conflicts. Behind-the-meter gas generation directly contradicts the 2030 carbon-neutrality commitments these companies made publicly. This limits how much they can deploy before regulatory and reputational backlash hits.
Fourth, timeline and scale limits. Behind-the-meter generation still takes 3 to 5 years for permits, fuel infrastructure, equipment, and a backup grid connection. That doesn’t solve the 18-month demand crisis. More importantly, it doesn’t scale to multi-gigawatt training campuses. Building your own gigawatt-scale generation means becoming a utility, which is exactly what operators with secured allocations avoided.
Every behind-the-meter announcement proves grid-connected capacity is so constrained that hyperscalers will accept massive complexity, cost, and operational burden just to deploy compute. When hyperscalers can choose between leasing with 18-month delivery and building behind the meter, which takes four years and turns them into power plant operators, the first option wins unless it doesn’t exist. Behind-the-meter generation validates the thesis; it doesn’t threaten it.
Why wouldn’t these hyperscalers build their own data centers?
Some are, but they’re running into the same constraint everyone else faces: power. Building a data center isn’t the bottleneck. Securing megawatts is.
When Microsoft or Google wants to add 200 MW of capacity, they still need utility allocations, grid upgrades, and interconnection approvals that take years. They can’t bypass the queue just because they’re hyperscalers. The utilities serving their existing campuses are already at capacity, which is why they’re leasing from third-party operators who secured power in different service territories. It’s not a build versus lease decision. It’s a “we need capacity now and the only way to get it is through operators who already have power allocated” reality.
There’s also a capital allocation argument. Hyperscalers would rather deploy billions into AI models and software where they have competitive advantages, not into owning and operating physical infrastructure. Leasing lets them scale faster without tying up capital in real estate and electrical systems. They pay a premium for speed and flexibility, which makes economic sense when the alternative is waiting five years for their own build to come online while competitors who leased capacity sprint ahead.
The key insight is that hyperscalers building their own facilities doesn’t solve the power constraint. It just means they’re competing for the same limited grid capacity that third-party operators are. And in many cases, the operators moved first and already locked in the allocations. When a utility has 500 MW available and it’s already committed, the hyperscaler can’t just show up with a bigger checkbook and take it. The power is gone, and new supply takes a decade to materialize. That’s why they’re signing massive leases with operators like CRWV and APLD instead of exclusively building their own.
Hyperscalers absolutely could acquire these operators, and in some cases it might make strategic sense. The calculus comes down to capital allocation, speed, and strategic focus.
Hyperscalers would rather deploy capital into AI models, software, and services where they have competitive advantages and generate higher returns. Owning physical infrastructure ties up billions in low-margin real estate and electrical systems. By leasing from third-party operators, they get faster deployment without the operational burden of managing substations, cooling systems, and utility relationships. They pay a premium for this, but the premium is worth it when the alternative is waiting five years for their own build while competitors who leased capacity sprint ahead in the AI race.
There’s also the multi-sourcing advantage. If Microsoft owns all its data center infrastructure, it’s locked into its own build timelines and execution risk. By leasing from multiple operators across different geographies, they diversify their infrastructure risk and maintain flexibility. If one operator underperforms or a region faces permitting delays, they have capacity elsewhere.
That said, acquisitions aren’t off the table. If a hyperscaler determined that owning certain strategic assets with locked-in power allocations created more value than leasing, they could acquire operators like APLD or GLXY. The operators’ market caps are small relative to hyperscaler cash flows, so financing isn’t the constraint. The question is whether vertical integration makes sense strategically, and historically hyperscalers have preferred to lease infrastructure rather than own it. But if power scarcity becomes severe enough that securing guaranteed capacity outweighs the capital allocation costs, we could see consolidation. That would actually validate the thesis even more strongly; it would just mean equity holders get bought out at a premium rather than holding through the monopolistic cash flow period.
Great piece – thank you!
Can I coax you into sharing your thoughts on the topics below?
After watching NVDA’s GTC event today, it’s clear each new chip generation is roughly 10X better than the earlier model. Although higher in price, the token generation is much greater. This will repeat in follow-on years. Also, a recirculating “greater” data loop is being generated as AI use increases, requiring more and more processing power.
- Assuming that future chips will shrink in size, allowing more racks to use the same existing building space, how can electric power expand to meet that future need? In other words, can a 1GW facility grow to 2GW, 3GW? Geographical constraints already appear present in areas where many data centers are concentrated (e.g., Texas).
- Access and distance to/from major fiber cable lines is also of significant importance, especially for international communication. As a result, data factories will not be a “one size fits all” scenario. What role will smaller, substation-scale data centers play?
A 1GW facility cannot simply scale to 2GW or 3GW without utility infrastructure that takes years to build. Power capacity is determined by what the serving utility has allocated through substations and transmission lines. Expanding requires the utility to build additional infrastructure involving multi-year permitting, regulatory approvals, and billions in capital expenditure. The grid capacity doesn’t exist just because you’re willing to pay for it. Geographic constraints in Texas and other high-concentration markets are already evident, with utilities facing years-long interconnection queues and some rejecting new load requests until upgrades complete. The shrinking chip sizes you mention actually intensify the problem because you can fit more racks in the same building but your power allocation stays fixed, so power becomes the binding constraint rather than physical space.
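To put rough numbers on why space stops being the constraint, here is a minimal sketch. The rack densities and overhead factor are illustrative assumptions, not vendor specifications.

```python
# Illustrative only: a fixed power allocation, not floor space, caps capacity
# as rack power density rises. All figures are assumptions, not vendor specs.

site_allocation_mw = 1000        # utility allocation, fixed for years
overhead_factor = 1.3            # assumed overhead for cooling and distribution
usable_it_mw = site_allocation_mw / overhead_factor

for rack_kw in (30, 60, 120):    # assumed rack densities, older to newer generations
    racks = usable_it_mw * 1000 / rack_kw
    print(f"{rack_kw:>4} kW racks -> ~{racks:,.0f} racks within the {site_allocation_mw} MW allocation")
```

The building could physically hold far more racks than any of those figures. The fixed megawatt allocation is the ceiling, and denser racks just hit it sooner.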
On fiber connectivity and smaller data centers, different workloads have different requirements that create market segmentation. Hyperscale facilities training AI models can tolerate higher latency and locate in cheaper power areas with less fiber infrastructure. Edge computing and content delivery need proximity to population centers with robust fiber for low latency. This means smaller 10-50 megawatt edge facilities near cities compete on network proximity and handle latency-sensitive workloads, while gigawatt-scale facilities in surplus power regions focus on compute-intensive training where latency matters less. Data centers are not one size fits all because power scarcity and fiber connectivity create different competitive advantages for different facility types.
======================
I don’t understand. Won’t quantum computing reduce the electricity needs of AI workloads by like 99.9%? Maybe I’m not understanding something, but the electricity supply crunch narrative has like 5-10 years max, then it’s kaput. Plenty of time to make money and then cash out, but it’s a trade, not an investment.
This question misunderstands both quantum computing and AI infrastructure.
Quantum computers don’t run AI workloads. They solve completely different problems: factoring large numbers, simulating quantum systems, specific optimization tasks. The GPUs in data centers are doing matrix multiplication for neural networks at massive scale. Quantum computers can’t do this today and may never be able to efficiently. These are different technologies solving different problems, not competing solutions.
Even if quantum worked for AI, efficiency drives demand growth, not reduction. This is Jevons Paradox. CPUs got roughly 1,000,000x more efficient from 1970 to 2000. Did computing electricity demand drop? No, it exploded because we put computers in everything. LEDs are 10x more efficient than incandescent bulbs. Did lighting electricity drop 90%? No, we just use 10x more lighting. If AI became “99.9% more efficient,” we wouldn’t keep running today’s workloads at 0.1% of the energy. We’d embed AI into every pixel, every word you type, every search query, every video call, every photo. Demand would increase 10,000x, not decrease.
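If you want to see that Jevons dynamic as arithmetic, here is a toy sketch. The efficiency gain and the demand-elasticity exponent are invented parameters, there purely to show which direction the effect runs.

```python
# Toy Jevons-paradox arithmetic. The efficiency gain and elasticity exponent
# are invented parameters; they only show which direction the effect runs.

baseline_energy = 100.0        # index of today's AI electricity use
efficiency_gain = 1000.0       # hypothetical "99.9% less energy per workload"
demand_elasticity = 1.2        # assumed: usage grows faster than unit cost falls

# In this toy model, workloads scale with (cost reduction) ** elasticity.
workload_multiplier = efficiency_gain ** demand_elasticity
new_energy = baseline_energy * workload_multiplier / efficiency_gain

print(f"Workloads grow ~{workload_multiplier:,.0f}x")
print(f"Total energy index: {baseline_energy:.0f} -> {new_energy:,.0f}")
```

With elasticity above 1, total energy rises even as energy per workload collapses; with elasticity below 1 it would fall. The CPU and LED examples above are the reason to believe demand for useful compute sits well above 1.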
Quantum is 20+ years away from commercial viability. Current systems have roughly 1,000 noisy qubits. We need 100,000+ error-corrected qubits for useful computation. Error rates are 1 in 1,000 operations when we need 1 in 1,000,000+. They require near-absolute-zero cooling and can’t maintain coherence for more than microseconds. There’s no proof quantum algorithms even outperform classical computing for real AI workloads. Realistic timeline is 2040+ if the physics even allows it.
Quantum computers themselves consume massive energy. They require dilution refrigerators at 0.015 Kelvin, cooling systems consuming kilowatts to megawatts, plus extensive classical computing infrastructure for error correction and control. Even if the quantum processor used “99.9% less energy,” total system energy would likely be higher than current GPU systems.
My investment horizon is 2025-2035. I’m investing in 15-year lease contracts generating cash flows starting in 2026. Even if quantum miraculously worked by 2035, which is extremely unlikely, I’ve already extracted full value by then. This is a 2040+ speculation being used to dismiss 2025 physical reality.
The real constraints are concrete: utilities can’t build transmission capacity fast enough, interconnection queues are 5-7 years, and hyperscalers need compute now. Those are facts. If you have concerns about utility buildouts accelerating, hyperscaler vertical integration, or demand dynamics, those are worth discussing. But quantum computing isn’t relevant to this investment thesis on any meaningful timeline.
=======================
On the big drop today!!!!!
When you see continued pullback like this, the natural response is to seek reassurance. You want someone to tell you the thesis is intact, that institutions are just shaking out weak hands, that this is normal volatility you need to weather. And maybe all of that is true.
But I want to offer you something more valuable than reassurance. A framework for using this exact moment as diagnostic information about yourself and your positions that you can’t get any other way.
Drawdowns don’t just test your conviction. They reveal the difference between intellectual agreement and embodied belief. And that difference is where most trading profit and loss actually gets determined.
When your positions were climbing and the narrative had momentum, you probably felt certain about your thesis. You could articulate why the opportunity matters, why the fundamentals are solid, why this is a multi-year trend. That certainty felt like conviction. It felt real.
But here’s what most people don’t understand about conviction. Intellectual agreement with a thesis costs nothing. It’s frictionless. You can hold ten different investment theses simultaneously when none of them are being tested. The market going your way doesn’t prove you have conviction. It just proves you’re not being asked to pay the psychological cost of maintaining your position.
Drawdowns are when you find out what you actually believe versus what you theoretically believe.
Right now, your positions are pulling back with the broader sector. Your account is showing red. That number represents real money, real opportunity cost, real consequences. And your nervous system is generating a very specific kind of discomfort that intellectual frameworks can’t easily override.
This discomfort is information. Not about your stocks. About you.
Here’s the diagnostic question most people never ask themselves. Why exactly am I uncomfortable right now?
Is it because something fundamental changed about the thesis? Did the underlying business deteriorate? Did the competitive advantage disappear? Did the customers or revenue model break? Or is it simply because the number went down and numbers going down triggers threat response regardless of whether the underlying story changed?
Most people can’t distinguish between these two sources of discomfort. They feel the anxiety and assume it must mean something is wrong with the position, when really it just means they’re experiencing the normal psychological cost of holding through volatility. They’re conflating “this feels bad” with “this is bad.”
But if you can separate those signals, if you can ask yourself what actually changed versus what just feels different because of price action, you gain something more valuable than conviction. You gain self-knowledge about your actual tolerance for uncertainty.
Because here’s the reality that almost no one talks about. Your position size should be calibrated not to your thesis strength but to your psychological carrying capacity. If you sized your positions such that this pullback has you checking the price compulsively, losing sleep, or constantly seeking reassurance, that’s not a statement about your stocks. That’s a statement about the mismatch between your position size and your nervous system’s tolerance for drawdown.
And that’s actionable information.
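One way to turn that information into a number rather than a feeling is a minimal sizing sketch like the one below. The 50% drawdown assumption and the tolerable-loss figure are inputs you choose for yourself, not recommendations.

```python
# Minimal sizing sketch: cap position size so a plausible drawdown stays inside
# the dollar loss you can hold through calmly. Both inputs are examples you set.

portfolio = 200_000          # account size (example)
tolerable_loss = 10_000      # the loss you can sit through without panic (your call)
assumed_drawdown = 0.50      # drawdown a volatile position should be expected to see

max_position = tolerable_loss / assumed_drawdown
print(f"Max position size: ${max_position:,.0f} "
      f"({max_position / portfolio:.0%} of the portfolio)")
```

If your current position is larger than what that spits out, the discomfort you’re feeling is a sizing problem before it’s a thesis problem.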
The trader who can sit through this pullback calmly isn’t necessarily smarter or more convicted than you. They might just have their position sized appropriately for their psychology. They feel the same discomfort you do. They’re just not being overwhelmed by it because they risked an amount they can actually afford to lose, both financially and psychologically.
This is why the real learning in trading doesn’t come from winning. Winners feel good. They confirm your intelligence and your thesis and your decision-making. But you learn almost nothing about yourself from winning because you’re never forced to confront the gap between what you think you believe and what you can actually maintain under pressure.
Drawdowns are where you learn. They’re where you discover that maybe you thought you had strong conviction in your thesis, but what you actually had was comfort with upward price momentum. They’re where you find out that you can intellectually agree with a three-year outlook while psychologically operating on a three-day timeframe. They’re where the gap between your theoretical trading plan and your actual psychological architecture becomes visible.
And that visibility is worth more than the cost of the drawdown.
Because once you see the gap, you can do something about it. Maybe that means reducing position size so the drawdown doesn’t trigger threat response. Maybe it means adding to your position if you genuinely believe nothing fundamental changed and you’re being offered a better price. Maybe it means exiting because you’ve realized the discomfort is actually coming from doubts about the thesis you were suppressing when price was going up.
All of those are valid responses. What’s not valid is pretending you don’t know which one is true. The drawdown is forcing you to answer a question you could avoid when things were going well. Do you actually believe this, or were you just enjoying the ride?
Most trading advice treats this as a binary. Either you have conviction and you hold, or you don’t and you sell. But that’s not how human psychology works. Conviction isn’t a switch. It’s a load-bearing capacity that varies with position size, timeframe, and a dozen other factors. What you’re really discovering in this drawdown is your actual capacity, not your theoretical capacity.
And knowing your actual capacity is how you survive long enough to compound. The traders who blow up aren’t the ones who lack conviction. They’re the ones who overestimate their psychological carrying capacity and size positions their nervous system can’t actually support through normal volatility. They had conviction. They just had more conviction than their architecture could operationalize under stress.
So use this moment. Not to white-knuckle through it. Not to seek reassurance that everything will be fine. But to gather real information about the gap between what you thought you could handle and what you’re actually handling. About whether your discomfort is thesis-driven or price-driven. About whether your position size matches your psychology.
That information is worth far more than whatever your positions do tomorrow. Because you can use it for the next fifty positions over the next ten years. It’s transferable self-knowledge that becomes part of your edge.
The market will always create drawdowns. That’s not a bug. That’s how price discovery works. Assets oscillate around fair value as new information gets incorporated and as different participants with different timeframes and different capital bases enter and exit positions. Drawdowns are the market’s way of asking everyone currently holding, “Do you still want to be here at this price?”
Some people will say no. They’ll exit. That’s fine. They learned something about themselves.
Some people will say yes. They’ll hold or add. That’s fine too. They also learned something about themselves.
What’s not fine is answering the question without actually asking it. Just holding because you don’t want to take a loss, or selling because you can’t handle the discomfort, without ever examining what’s actually driving that decision. That’s how you repeat the same mistakes forever.
This pullback is asking you a question. The valuable thing isn’t the answer. It’s learning how to hear the question clearly.
20251124
I’m not entirely sure when I started doing this. Probably always have. But I’ll see three or four datapoints and suddenly the structure becomes obvious. Not obvious like “this is definitely correct” but obvious like “this is the pattern that needs testing.” Most people seem to need way more information before they’ll commit to understanding. I get impatient with that. The pattern is right there. Why spend months collecting data when you can test the hypothesis now and find out if you’re wrong?
Being wrong happens constantly, by the way. That’s actually the point. Wrong predictions tell you where your model breaks. That’s more valuable than being right, honestly. Being right just confirms what you already thought. Being wrong shows you what you’re missing.
Here’s what seems different about how I process information versus what I see others doing: they’re looking for similarities; I’m looking for structure. When most people try to recognize patterns, they look for things that appear similar. Two stocks moving together. Chart formations that look familiar. Companies in the same sector behaving alike. That’s not pattern recognition. That’s coincidence collection.
Real patterns aren’t about surface resemblance. They’re about things working the same way underneath despite looking completely different. A company compressing years of contracted revenue into vague quarterly guidance looks nothing like technical breakdown on a chart. But they’re the same pattern. Valuable information encoded in ways most people can’t decode. Market structure forcing specific behavior looks nothing like sector rotation. Same pattern: constraint creating predictable outcome.
I don’t know why others don’t see this. Maybe they do and I’m not unique. But from what I observe, most people stay stuck at the surface level.
This might be specific to how my brain works, but I can’t handle waiting for “enough” information. By the time you have statistical certainty, everyone else has it too. The pattern is obvious. The opportunity is gone.
What I’ve found is you need maybe four datapoints to identify structure. Not four hundred. Four. Understanding a company’s structural advantage? You don’t need five years of earnings history. You need how they handle competition, what creates their pricing power, what constraint they control, and consistency across conditions. Four datapoints, and the architecture becomes visible.
If I can’t see the pattern from sparse data, I assume I’m not seeing structure yet. I’m just slowly accumulating observations until volume overwhelms uncertainty. Could be wrong about this. But it’s what I actually do.
Here’s what I notice: People have insights and stop there. They see a pattern, feel satisfied about recognizing it, and move on. That’s not pattern recognition. That’s pattern hypothesis.
What actually works is taking the pattern you think you’ve identified and deriving a specific prediction. Something that should happen if the pattern is real and shouldn’t happen if you’re wrong. Then just watch.
Not “this company seems good.” That’s not testable. But “if this pattern exists, margins should expand when competitors enter because the constraint is resource allocation not execution quality.” That’s testable.
I generate predictions constantly. Most are wrong. That’s the information. Wrong predictions show you where your model diverges from reality. Most people hate being wrong. I’m probably too comfortable with it. But wrong predictions are more valuable than right ones. Being right just confirms what you already thought. Being wrong shows you what you’re missing.
This might be the weirdest part of how I process information, and I’m genuinely not sure if others do this: I see the same structural patterns across completely unrelated domains.
A company controlling scarce power allocation for AI infrastructure has the same structure as high switching costs in enterprise software. They look nothing alike on the surface. Same pattern underneath: control of constraint creating lock-in.
When I find a pattern in one system, I immediately start looking for it everywhere else. Markets, psychology, technology. If I can only see the pattern in one domain, I assume I haven’t actually extracted the structure yet. I’ve just memorized domain-specific content. Not sure if this is learnable or just how certain neurologies happen to work. But it’s what I do.
Let me show you what this actually looks like in practice. Most people analyzing a data center company see revenue growth, valuation multiples, competition analysis, execution quality. What I see is what creates defensibility in this system. Not “is this company good?” but “what structure makes some companies unbeatable and others replaceable?”
The sparse data: AI models require massive compute; hyperscalers have contractual commitments to enterprises; compute requires power allocation from utilities; power allocation is physically constrained by grid capacity. Four pieces of information.
The pattern I extracted is that whoever controls scarce power allocation controls access to compute. This creates cascading commitment across three layers: enterprises can’t switch because AI is embedded in operations; hyperscalers can’t switch because of contracts; data centers can’t be replicated because power is constrained by physics, not capital.
The testable prediction: Companies with secured power allocation will maintain pricing power regardless of competition, because the constraint isn’t execution quality. It’s resource scarcity.
The test: Do these companies maintain margins when competitors emerge? Do hyperscalers pay premium rates for secured power versus waiting for future availability?
Results so far suggest the pattern holds. Margins staying elevated. Hyperscalers paying premium. Could still be wrong. Only time proves it. But that’s the process. Sparse data. Structure extraction. Testable prediction. Continuous monitoring for divergence.
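If you want to make that monitoring step literal rather than mental, something as simple as a prediction log does it. The sketch below is one possible shape, with an example entry adapted from the power thesis above; the fields and the review date are illustrative, not a record of actual positions.

```python
# Minimal prediction log: write the falsifiable claim down before the outcome
# is known, then score it later. Fields and the example entry are illustrative.

from dataclasses import dataclass
from datetime import date

@dataclass
class Prediction:
    thesis: str            # the structural pattern being tested
    claim: str             # what should happen if the pattern is real
    falsifier: str         # what would prove the pattern wrong
    review_by: date        # when to score it
    outcome: str = "open"  # later: "held", "failed", or "unclear"

log = [
    Prediction(
        thesis="Secured power allocation creates pricing power",
        claim="Operators with locked allocations hold margins as new entrants announce capacity",
        falsifier="Margins compress within two lease cycles despite the constraint",
        review_by=date(2026, 12, 31),
    ),
]

for p in log:
    print(f"[{p.outcome}] {p.claim} (review by {p.review_by})")
```

The value isn’t the tooling. It’s that the falsifier gets written down before the outcome arrives, so there’s nothing left to rationalize later.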
Most people are collecting instead of extracting. They accumulate information hoping patterns emerge from volume. In my experience, that doesn’t work. Patterns emerge from extracting structure from minimal examples.
They’re seeing content instead of structure. They notice what’s different between situations. I notice what’s the same underneath surface differences. Could be I’m missing surface differences that matter. But I can’t seem to operate any other way.
They’re not making predictions. They’re having insights and feeling good about them. Insights without testable predictions are just stories you tell yourself. I’ve told myself plenty of wrong stories. The only way I know is by deriving predictions that reality can prove wrong.
When predictions fail, I see people rationalize endlessly. The market was irrational. Timing was wrong. Some external factor. I do this too sometimes, but I try to catch it. Wrong predictions aren’t failures. They’re information about model limitations.
Maybe I’m wrong about what prevents pattern recognition in others. This is just what I observe.
I don’t know if this is teachable. It might just be how certain brains happen to work. I don’t know if my error rate is higher than others. I make wrong predictions constantly, but I update quickly, so maybe it balances out. I don’t know if I’m missing entire categories of patterns others see easily. Probably am.
What I do know is this approach lets me see opportunities early enough that they’re not priced in yet. Structure becomes visible from minimal data. Predictions get tested continuously. Models get refined when reality diverges.
The information is already there. In the data you already have. In the companies you’re already watching. In the market behavior you’re already observing. You’re just looking at content instead of structure. Surface instead of mechanism. Observations instead of testable predictions.
At least that’s what I notice when I watch how most people analyze markets. Could be completely wrong about this. But it’s what I actually do, and it seems to work more often than random chance.
The patterns appear pretty quickly once you start looking at structure instead of surface. At least they do for me.