Think about your past projects. Did they finish on time and on budget? Did they end up getting delivered without cutting corners? Did they get disrupted along the way with a changed scope, conflicted interests, unexpected delays, and surprising blockers?
Chances are high that your recent project was over schedule and over budget — just like a vast majority of other complex UX projects. Especially if it entailed at least some sort of complexity, be it a large group of stakeholders, a specialized domain, internal software, or expert users. It might have been delayed, moved, canceled, “refined,” or postponed. As it turns out, in many teams, shipping on time is an exception rather than the rule.
In fact, things almost never go according to plan — and on complex projects, they don’t even come close. So, how can we prevent it from happening? Well, let’s find out.
99.5% Of Big Projects Overrun Budgets And Schedules
As people, we are inherently over-optimistic and over-confident. It’s hard to study and process everything that can go wrong, so we tend to focus on the bright side. However, unchecked optimism leads to unrealistic forecasts, poorly defined goals, ignored alternatives, unspotted problems, and no contingencies to counteract the inevitable surprises.

Hofstadter’s Law states that a project always takes longer than you expect, even when you take Hofstadter’s Law into account. Put differently, it will overrun your estimate however cautious you might be.
As a result, only 0.5% of big projects (think big relaunches, legacy re-dos, large initiatives) stay within budget and schedule. We might try to mitigate risk by adding a 15–20% buffer, but it rarely helps: many of these projects don’t follow a “normal” (Bell curve) distribution but are “fat-tailed” instead.
And in those fat tails, overruns of 60–500% are typical, turning big projects into big disasters.
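To see why a flat buffer fails under fat tails, here’s a minimal Python sketch comparing how often a 15% contingency gets blown under a thin-tailed versus a fat-tailed model of overruns. The distributions and parameters are purely illustrative assumptions, not the book’s dataset:

```python
import random

random.seed(7)  # reproducible illustration

def overrun_shares(n=100_000, buffer=0.15):
    """Return the share of simulated projects that blow past a fixed
    buffer under a thin-tailed vs. a fat-tailed overrun model."""
    # Thin-tailed: overruns cluster around 0% with modest spread.
    thin = [random.gauss(0.0, 0.10) for _ in range(n)]
    # Fat-tailed: a Pareto tail makes 100%+ overruns unremarkable.
    fat = [random.paretovariate(1.5) - 1.0 for _ in range(n)]

    def blown(xs):
        return sum(x > buffer for x in xs) / len(xs)

    return blown(thin), blown(fat)

normal_share, fat_share = overrun_shares()
print(f"Blew the 15% buffer, thin-tailed model: {normal_share:.0%}")
print(f"Blew the 15% buffer, fat-tailed model:  {fat_share:.0%}")
```

Under the thin-tailed model, only a few percent of simulated projects exceed the buffer; under the fat-tailed model, the vast majority do, some by multiples of the original budget.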
Reference-Class Forecasting (RCF)
We often assume that if we just thoroughly collect all the costs and estimate complexity or effort, we should get a decent estimate of where we will eventually land. Nothing could be further from the truth.
Complex projects have plenty of unknown unknowns. No matter how many risks, dependencies, and upstream challenges we identify, there are many more we can’t even imagine. The best way to be more accurate is to define a realistic anchor — for time, costs, and benefits — from similar projects done in the past.

Reference-class forecasting follows a very simple process:
- Find the reference projects that are most similar to your project.
- If the distribution follows the Bell curve, use the mean value + a 10–15% contingency.
- If the distribution is fat-tailed, invest in profound risk management to prevent big challenges down the line.
- Tweak the mean value only if you have very good reasons to do so.
- Set up a database to track past projects in your company (cost, time, benefits).
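The steps above can be sketched in a few lines of Python. The reference numbers and the fat-tail heuristic (mean far above the median) are hypothetical illustrations, not a prescribed formula:

```python
from statistics import mean, median

# Hypothetical reference class: actual cost overruns (as a fraction of
# the original budget) from similar past projects in your company.
reference_overruns = [0.10, 0.25, 0.05, 0.40, 0.15, 0.30, 0.20, 1.80]

def rcf_estimate(base_budget, overruns, contingency=0.125):
    """Anchor a new estimate in the outcomes of the reference class.

    base_budget: the inside-view estimate for the new project.
    overruns:    observed overrun fractions from similar past projects.
    contingency: extra 10-15% buffer for roughly Bell-curve classes.
    """
    anchor = base_budget * (1 + mean(overruns))
    # Heuristic: a mean far above the median hints at a fat tail,
    # where a flat buffer won't help and deeper risk work is needed.
    fat_tailed = mean(overruns) > 1.5 * median(overruns)
    return anchor * (1 + contingency), fat_tailed

estimate, fat_tailed = rcf_estimate(100_000, reference_overruns)
print(f"Anchored estimate: {estimate:,.0f}")
print(f"Fat tail suspected: {fat_tailed}")
```

Note how the single 180% overrun drags the mean well above the median: exactly the signal that a contingency alone won’t save you and serious risk management is due.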
Mapping Out Users’ Success Moments
Over the last few years, I’ve been using the technique called “Event Storming,” suggested by Matteo Cavucci many years back. The idea is to capture users’ experience moments through the lens of business needs. With it, we focus on the desired business outcome and then use research insights to project events that users will be going through to achieve that outcome.

The image above shows the process in action — with different lanes representing different points of interest, and prioritized user events themed into groups, along with risks, bottlenecks, stakeholders, and users to be involved — as well as UX metrics. From there, we can identify common themes that emerge and create a shared understanding of risks, constraints, and people to be involved.
Throughout that journey, we identify key milestones and break users’ events into two main buckets:
- User’s success moments (which we want to dial up ↑);
- User’s pain points or frustrations (which we want to dial down ↓).
We then break out into groups of 3–4 people to separately prioritize these events and estimate their impact and effort on Effort vs. Value curves by John Cutler.
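As a rough stand-in for that workshop exercise, a simple value-per-effort ranking captures the spirit of plotting events on Effort vs. Value curves. The events and scores below are hypothetical:

```python
# Hypothetical user events with 1-5 workshop scores for value and effort.
events = [
    {"event": "One-click reorder",     "value": 5, "effort": 2},
    {"event": "Password reset flow",   "value": 4, "effort": 1},
    {"event": "Legacy export rewrite", "value": 3, "effort": 5},
    {"event": "Onboarding checklist",  "value": 4, "effort": 3},
]

# Quick wins first: rank by value delivered per unit of effort.
ranked = sorted(events, key=lambda e: e["value"] / e["effort"], reverse=True)
for e in ranked:
    ratio = e["value"] / e["effort"]
    print(f'{e["event"]:24} value/effort = {ratio:.2f}')
```

A ratio is only a starting point for the group discussion, of course; the conversation about why an event scores high or low is where the shared understanding emerges.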

The next step is identifying key stakeholders to engage with, risks to consider (e.g., legacy systems, 3rd-party dependency, etc.), resources, and tooling. We reserve special time to identify key blockers and constraints that endanger a successful outcome or slow us down. If possible, we also set up UX metrics to track how successful we actually are in improving the current state of UX.
It might seem like a bit too much planning for just a UX project, but it has helped significantly to reduce failures and delays, and to maximize business impact.
When talking to businesses, I usually present better discovery and scoping as the best way to mitigate risk. We can, of course, throw ideas into the market and run endless experiments. But not for critical projects that get a lot of visibility, e.g., replacing legacy systems or launching a new product. They require thorough planning to prevent big disasters, urgent rollbacks, and… black swans.
Black Swan Management
Every other project encounters what’s called a Black Swan: a low-probability, high-consequence event that is more likely to occur when projects stretch over longer periods of time. It could be anything from restructured teams to a change of priorities, which then leads to cancellations and rescheduling.
Little problems have an incredible capacity to compound into large, disastrous problems, ruining big projects and sinking big ambitions at a phenomenal scale. The more little problems we can design around early, the better our chances of getting the project out the door successfully.

So we make projects smaller and shorter. We mitigate risks by involving stakeholders early. We provide less surface for Black Swans to emerge. One good way to get there is to always start every project with a simple question: “Why are we actually doing this project?” The answers often reveal not just motivations and ambitions, but also the challenges and dependencies hidden between the lines of the brief.
And as we plan, we can follow “right-to-left” thinking: we don’t start with where we are, but with where we want to be. As we plan and design, we move from the future state toward the current state, studying what’s missing or what’s blocking us from getting there. The trick: we always keep the end goal in mind, and our decisions and milestones are always shaped by that goal.
Manage Deficit Of Experience
Complex projects start with a deep deficit of experience. To increase the chances of success, we need to minimize the chance of mistakes happening in the first place. That means making the process as repetitive as possible, with smaller “work modules” repeated by teams over and over again.

🚫 Beware of unchecked optimism → unrealistic forecasts.
🚫 Beware of “cutting-edge” → untested technology multiplies risk.
🚫 Beware of “unique” → high chance of exploding costs.
🚫 Beware of “brand new” → rely on the tested and reliable instead.
🚫 Beware of “the biggest” → build small things, then compose.
It also means relying on the reliable: from well-tested tools to stable teams that have worked well together in the past. Complex projects aren’t a good place to innovate processes, mix and match teams, or try out more affordable vendors.
Typically, these choices are extreme costs in disguise, bringing skyrocketing delivery delays and unexpected expenses.
Think Slow, Act Fast
Under the pressure of looming deadlines, many projects rush into delivery mode before the scope of the project is well-defined. That might work for fast experiments and minor changes, but it’s a red flag for larger projects. The best strategy is to spend more time planning before designing a single pixel on the screen.
But planning isn’t an exercise in abstract imaginative work. Good planning should include experiments, tests, simulations, and refinements. It must spell out how we reduce risks up front, and how we mitigate them when something unexpected (but frequent in other similar projects) happens.

Good Design Is Good Risk Management
When speaking about design and research to senior management, position them as a powerful risk management tool. Good design that involves concept testing, experimentation, user feedback, iterations, and refinement of the plan is cheap and safe.
It might end up needing more time than expected, but it’s much, MUCH cheaper than delivery. Delivery is extremely cost-intensive, and if it relies on wrong assumptions and poor planning, that’s when the project becomes vulnerable and difficult to move or re-route.
Wrapping Up
The insights above come from a wonderful book, How Big Things Get Done by Prof. Bent Flyvbjerg and Dan Gardner. It goes into all the fine details of how big projects fail and when they succeed. It’s not a book about design, but it’s a fantastic book for designers who want to plan and estimate better.
Not every team will work on a large, complex project, but sometimes these projects become inevitable: when dealing with legacy systems, high-visibility projects, layers of politics, or an entirely new domain the company is moving into.
Projects that succeed have one thing in common: they dedicate the majority of their time to planning and to managing risks and unknown unknowns. They avoid big-bang reveals and instead test continuously and repeatedly. That’s your best chance to succeed: work around these unknowns, as you won’t be able to prevent them from emerging entirely anyway.