The Market Punishes Guessing

Why most “UX issues” are actually unvalidated product bets—and how to stop building the wrong thing beautifully

My friend Shane Close said something on LinkedIn that I keep coming back to: "Product design without user input is just a guess. And the market punishes guesses."

He's right. And here's the part most teams don't want to hear: if you're not validating real intent, you're not "moving fast." You're moving blind. In 2026, that's not a scrappy startup risk anymore. It's a compounding tax on your roadmap—wasted build, churn you can't explain, and a backlog of "UX fixes" that are actually strategy failures in disguise.

Most failure is "no market need" in disguise

CB Insights looked at startup failure post-mortems and found that the #1 reason companies fail is "no market need" (42%). You can debate whether that data skews toward startups (it does), but the signal is still clear: teams build things people don't actually want, and they do it often enough that it's the leading cause of failure.

And here's the thing—"no market need" rarely means the idea was dumb. More often it means the timing was wrong, the target user was misread, the workflow context was misunderstood, the product shipped solving the wrong job, or the value was placed in the wrong location in the experience. All of that is intent.

"But we talked to users" isn't the same as validating intent

A lot of teams "do research" in ways that feel responsible but still miss the mark. They conduct a handful of interviews shaped by stakeholder assumptions rather than observed behavior. They run a survey that measures opinions, not tradeoffs. They test late, when it's politically expensive to change direction. Or they confuse usability ("can they use it?") with value ("will they use it?").

NN/g has a reminder that sounds harsh but saves teams: don't overweight what users say; pay attention to what they do. Intent isn't preference. It's revealed behavior under constraints.

Why this keeps happening (even in "good" companies)

Because guessing is rewarded culturally. Guessing looks like conviction, speed, decisiveness, "strong product instincts," and shipping something shiny. Research looks like waiting, friction, uncertainty, exposing risk, and confronting the possibility you're wrong. So unless leaders explicitly value truth over momentum, organizations drift toward confident guessing.

Jared Spool said it best: the absence of research has a technical name—guessing. NN/g is even more direct: "UX without user research is not UX." Here's the controversial premise I'm planting a flag on: most "UX issues" are not UX issues. They are unvalidated product bets that got shipped.

The business cost isn't abstract. It's measurable.

When you skip validation, you don't eliminate cost—you delay it. And delayed cost arrives as rework (engineering + design + QA), support burden, reputation damage, slowed roadmap, churn and lower retention, and the worst one: loss of trust internally ("design didn't work") and externally ("this product doesn't get me").

There's a popular "fixing issues later costs 10x/100x more" claim floating around. Some sources attribute versions of that idea to IBM's Systems Sciences Institute, and it's widely repeated. But journalists have also pointed out that the exact "100x" origin is messy and may not be as cleanly documented as people claim. So here's the honest takeaway: even if you throw out the meme math, the direction is still true. The later you learn, the more expensive it becomes to change.

NN/g makes the ROI argument from another angle: investing in usability and UX work can dramatically improve outcomes. Jakob Nielsen's long-standing rule of thumb is that spending roughly 10% of a project's budget on usability can double a product's desired quality metrics. But I'd argue this is bigger than usability. It's about where the value lives in the user's flow.

Case Study: The Acrobat ↔ Express decision that looked obvious—until we tested intent

When I started exploring deeper integration between Adobe Express and Acrobat, one direction seemed straightforward: send Acrobat users to Express. Express is where the creative tools are. Acrobat is a productivity tool. Easy.

But I kept thinking about what actually happens when you ask someone to leave a tool they trust. What if users don't want to leave Acrobat at all? What if the real intent is: "Let me do creative edits right here, in the tool I already know"?

Acrobat has a massive installed base and deeply familiar workflows. Asking people to context-switch to a different product—even a sister product—introduces friction, uncertainty, and what I think of as a trust tax. People start wondering: Will this break my doc? Am I going to lose my place? Is this going to turn into a file management mess?

So I partnered with a researcher and tested both directions: redirect to Express, or embed Express modules directly inside Acrobat. The result was clear: users showed much stronger intent to stay inside Acrobat. They were roughly 5× more likely to use Express modules when those modules were embedded in Acrobat versus being sent out to Express as a separate experience.

That insight changed the product strategy. Instead of "Acrobat → Express," it became "Express capabilities, wherever work is already happening." That's the difference between designing a funnel (what the org wants) and designing for intent (what the user will actually do).

What that validation unlocked: platform scale, not a one-off integration

Once you prove intent, you can justify investment. The Express extensibility strategy evolved into an enterprise-ready platform: embedded editors, SDK integrations, and a governed marketplace.

The outcomes tied to that platform approach included 114% MAU growth driven by embedding Express across Adobe and third-party products, 3M new users through Express–Acrobat workflows, and 8M new users through broader integrations.

Those numbers matter—but the real story is what created them. We stopped treating "where users should go" as a strategy and started treating "where users intend to stay" as the strategy. That's how you get from a single integration decision to a scalable platform that can extend into new surfaces and ecosystems.

A practical framework: two questions, two different tests

Shane's post nailed the split: WILL they use it? (user intent / value validation) and CAN they use it? (usability / comprehension). Most teams over-invest in "CAN" after they've already made the bet. The smart move is to validate "WILL" earlier, cheaply, and repeatedly.

If you want a simple operating rule: don't let your team ship anything that hasn't answered "WILL" with evidence. Not "we think." Not "stakeholders agree." Not "it worked at my last company." Evidence.

The real controversy: research isn't slow. Politics are slow.

Here's the line I'll stand behind: research doesn't slow teams down. Teams slow themselves down by avoiding the truth until it's expensive. If your org says "we don't have time," what they often mean is "we don't have permission," or "we don't want to discover risk," or "we've already emotionally committed." That's exactly why Shane's post hit so hard: it's calling out a cultural habit, not a process gap.

Summing it up

Product design without user input is guessing. But the deeper point is: product strategy without validated intent is theater. It looks decisive. It feels fast. It photographs well in a roadmap deck. And then the market does what it always does: it punishes you for building the wrong thing beautifully.

Shane Close, thanks for the spark! I'm stealing that line (with credit) 😀.

And if you're reading this as a product leader, the next time you feel pressure to "just ship," ask one question: What would have to be true for users to actually choose this—and what's the fastest way to find out if that's true?


Read the full Adobe Express Enterprise Platform case study: https://www.lanceshields.design/work/adobe-express-enterprise-platform
