When You Don't Know the Answer
You spend months on architecture diagrams. Database schemas. API specs. Every variable name mapped out. The plan feels complete.
You start building. Within 48 hours, the plan dissolves.
The dependency you thought was stable has breaking changes. The performance assumption was wrong. The user behavior you predicted doesn't exist. This isn't a planning failure — it's the normal gap between theory and contact with a real system.
The uncomfortable truth is that you're smartest about a problem after you've built something, broken it, and rebuilt it. Not at the beginning, when everything is theoretical. Which means the goal of early work isn't to verify that you're right. It's to find out where you're wrong as quickly as possible.
That reframe changes what you do first. When facing genuine ambiguity, the move isn't to plan more carefully — it's to identify the riskiest assumption and pressure-test it. Build the smallest thing that could break it. A rough proof of concept. A quick spike. An ugly script that hits the real API. Not to ship — to learn. What you discover in that first attempt will reshape the next one, and the one after that. That loop surfaces problems faster than trying to predict every edge case upfront.
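A spike like this can be a few throwaway lines. Suppose the risky assumption is a performance one — say, that a linear scan over your records will be fast enough at the scale you expect. The sketch below is a hypothetical illustration (the record count, shapes, and function names are all made up), not a real system:

```python
import time

# Throwaway spike: pressure-test the assumption that a linear scan
# over our records is "fast enough" at the scale we expect.
# N and the record shape are invented for illustration.
N = 200_000
records = [{"id": i, "name": f"user{i}"} for i in range(N)]

def find_by_scan(records, target_id):
    # The approach the plan assumed would be fine.
    for r in records:
        if r["id"] == target_id:
            return r
    return None

def find_by_index(index, target_id):
    # The alternative worth comparing: a prebuilt dict index.
    return index.get(target_id)

index = {r["id"]: r for r in records}

# Time 100 worst-case lookups (target is the last record) each way.
start = time.perf_counter()
for _ in range(100):
    find_by_scan(records, N - 1)
scan_time = time.perf_counter() - start

start = time.perf_counter()
for _ in range(100):
    find_by_index(index, N - 1)
index_time = time.perf_counter() - start

print(f"scan:  {scan_time:.4f}s")
print(f"index: {index_time:.4f}s")
```

Ten minutes of this tells you more than a week of estimating. Either the scan is fine and you move on, or the assumption just broke cheaply, before it was load-bearing.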
Ambiguity isn't a sign that you planned poorly. It's the actual condition of software work. The answer rarely exists before you start. It emerges from contact with the problem.