Designing for Dragons: The New Rules of AI UX

The old UX maxim was "Don't make me think." The new AI UX maxim is "Please, for the love of all that is holy, think a little bit before trusting what this very convincing machine just told you." We're workshopping a shorter version.

People often ask me, "How do you design AI UX?" It's a question that reveals how profoundly our design challenges have shifted. Every week I see another batch of products launching with "AI-powered" features that fundamentally misunderstand what makes human-AI interaction work. It's like watching people install automatic doors that occasionally decide to be walls: technically impressive, but practically questionable.

As someone who straddles both worlds (designing interfaces for these systems while also studying their limitations), I find myself repeating three key principles that run counter to conventional wisdom.

Forget Traditional Software Design (It's a New Ballgame)

The first thing to understand is that current AI systems, particularly large language models and generative AI, don't function like traditional software. They're not deterministic; they're stochastic (meaning they involve randomness and probability). They don't follow flowcharts; they navigate probability distributions.

Traditional software is like a light switch: flip it, and barring some catastrophic electrical failure, the light comes on. Every time. AI is more like a cat; sometimes it comes running, sometimes it stares at you judgmentally, and sometimes it brings you a half-dead statistical approximation of what you asked for.
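To make the light-switch-versus-cat contrast concrete, here's a toy sketch. Everything in it is invented for illustration (the prompt, the probability table, the word choices); the point is only that generation samples from a probability distribution, so identical inputs can legitimately yield different outputs, while the deterministic function at the end never varies.

```typescript
// Toy illustration: stochastic generation vs. deterministic software.
// The "model" below is just a hand-written probability table, not a real LLM.

type Candidate = { text: string; probability: number };

// Invented next-word distribution for the prompt "The meeting is at..."
const nextWord: Candidate[] = [
  { text: "3pm", probability: 0.5 },
  { text: "noon", probability: 0.3 },
  { text: "the usual place", probability: 0.2 },
];

// Sample one continuation according to its probability (how LLMs generate text).
function sample(candidates: Candidate[]): string {
  let r = Math.random();
  for (const c of candidates) {
    if (r < c.probability) return c.text;
    r -= c.probability;
  }
  return candidates[candidates.length - 1].text;
}

// The light switch: same input, same output, every single time.
const lightSwitch = (on: boolean): string => (on ? "light on" : "light off");

// Run the "model" three times with an identical prompt; expect varied answers.
for (let i = 0; i < 3; i++) {
  console.log(`The meeting is at ${sample(nextWord)}`);
}
console.log(lightSwitch(true)); // always "light on"
```

Three runs, one prompt, potentially three different answers, and all of them are "correct" as far as the model is concerned. That is the design problem in miniature.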

GitHub Copilot illustrates this perfectly. Its designers couldn't rely on traditional UX patterns, because the system doesn't just respond to explicit commands; it makes educated guesses about what a developer might need next. The genius of Copilot's design is how it presents generated code as inline suggestions the developer can accept or ignore: natural enough to stay out of the way, while still leaving room for those moments when the AI proposes something brilliant that was never explicitly requested.

This is where I find myself at odds with Jared Spool's principle of "self-evident design." It fails spectacularly with generative AI, where the most dangerous interfaces are precisely those that appear deceptively simple while hiding complex stochastic behaviors. Sometimes we need interfaces that deliberately create friction to prevent harmful overreliance.

Designers as Emergent Behavior Leaders:

  • Champion new paradigms: Educate stakeholders that AI isn't just another feature but a fundamentally different type of collaboration

  • Create emergence workshops: Help executives experience firsthand how identical inputs can produce different but valid outputs

  • Design appropriate feedback mechanisms: Build governance frameworks that balance exploration with consistency

  • Develop new skills: Systems thinking and behavioral economics are now essential parts of the AI UX toolkit

Design for Imperfection (Because Perfection Isn't Coming)

Here's an uncomfortable truth: despite impressive capabilities, these systems remain fundamentally limited. They lack consistent factuality, struggle with causal reasoning, and have no intrinsic understanding of physical reality. Instead of hiding these limitations, successful AI interfaces acknowledge and accommodate them.

The most troubling limitation might be what's colloquially called "hallucination," the tendency of large language models to present falsehoods with the same confidence as facts. As Ethan Mollick puts it, "These systems don't 'know' anything in the human sense; they predict what text should come next based on statistical patterns. Their confidence has nothing to do with factual accuracy."

This creates a profound UX challenge: how do we design interfaces that don't trick users into misplaced trust? The stakes are highest for less tech-savvy users, who often assume AI systems work like traditional software: reliable, consistent, and factual.

This is where I find myself challenging Julie Zhuo's approach of rapid iteration through user feedback. When users can't reliably distinguish between AI errors and correct responses, their feedback becomes fundamentally unreliable. We need slower, more deliberate approaches to AI UX that prioritize safety and appropriate trust calibration.

Designers as Truth-Tellers:

  • Advocate for honest interfaces: Push back against marketing's desires to present AI as infallible

  • Create uncertainty indicators: Design signals that help users understand when AI is on solid ground versus speculating (see the sketch after this list)

  • Build guardrails: Implement confirmation flows, reversibility mechanisms, and monitoring systems

  • Lead cross-functional limitation management: Establish processes to track known weaknesses and ensure interfaces evolve to address them
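To make the uncertainty-indicator and guardrail bullets less abstract, here is a minimal sketch. The confidence score, thresholds, and labels are all hypothetical; a real product would derive its signal from whatever it actually has (retrieval coverage, model log-probabilities, human review) and tune the thresholds with far more care than this.

```typescript
// Minimal sketch: map hypothetical backend signals to an uncertainty label the
// UI can show, and require confirmation for risky or poorly grounded actions.

type UncertaintyLabel = "grounded" | "speculative" | "unverified";

interface AiAnswer {
  text: string;
  confidence: number;    // hypothetical 0..1 score supplied by the backend
  citesSources: boolean; // whether the answer is backed by retrieved documents
}

// Uncertainty indicator: translate raw signals into something a user can read.
function uncertaintyLabel(answer: AiAnswer): UncertaintyLabel {
  if (answer.citesSources && answer.confidence >= 0.8) return "grounded";
  if (answer.confidence >= 0.5) return "speculative";
  return "unverified";
}

// Guardrail: anything irreversible, or anything not well grounded, gets a
// confirmation step instead of silent execution.
function requiresConfirmation(answer: AiAnswer, actionIsReversible: boolean): boolean {
  return !actionIsReversible || uncertaintyLabel(answer) !== "grounded";
}

const answer: AiAnswer = {
  text: "Delete 42 duplicate contacts?",
  confidence: 0.62,
  citesSources: false,
};
console.log(uncertaintyLabel(answer));            // "speculative"
console.log(requiresConfirmation(answer, false)); // true: show a confirm step
```

The numbers matter less than the pattern: the interface admits what it doesn't know, and the riskier the action, the more deliberately the user has to say yes.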

Design Relationships, Not Products (Trust is Earned, Not Given)

The most common mistake I see in AI product design is treating these systems as fixed utilities rather than evolving partners. The best conversational AI interfaces establish appropriate trust levels that evolve over time.

We're not installing appliances anymore; we're introducing entities that learn and change. It's less like buying a blender and more like adopting a particularly algorithmic puppy.

Here I must challenge John Maeda's popular advocacy for "seamless" AI experiences. While Maeda champions invisible AI that anticipates user needs without friction, this approach dangerously obscures AI limitations. We don't need more invisible AI; we need more honest AI that makes its capabilities and limitations transparent.

Spotify's recommendation system exemplifies this relationship-centered approach. Their design team recognized that music recommendation isn't just about algorithms; it's about building understanding over time through multiple feedback mechanisms beyond simple thumbs up/down.

Designers as Trust Architects:

  • Focus on relationship metrics: Advocate for measuring trust calibration and relationship health over immediate engagement

  • Create trust journey maps: Identify critical moments where user-AI trust is built or broken

  • Champion transparency in adaptation: Design interfaces that clearly communicate how and why the AI system is changing based on behavior (a rough sketch follows this list)

  • Establish ethical personalization guidelines: Prevent manipulative patterns while enabling meaningful adaptation
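As a rough sketch of what "transparency in adaptation" might look like in practice (the field names and the Spotify-flavored example are purely illustrative): whenever the system changes its behavior based on what it has learned, it records a plain-language explanation the interface can surface, along with a way to undo it.

```typescript
// Rough sketch: record a human-readable notice whenever the AI adapts, so the
// interface can show what changed, why, and how to undo it. Illustrative only.

interface AdaptationNotice {
  what: string;     // what changed in the system's behavior
  why: string;      // which user signals drove the change
  undoHint: string; // how the user can reverse or adjust it
  timestamp: Date;
}

function describeAdaptation(signal: string, change: string, undoHint: string): AdaptationNotice {
  return { what: change, why: `Based on ${signal}`, undoHint, timestamp: new Date() };
}

// Hypothetical example in the spirit of the Spotify case above.
const notice = describeAdaptation(
  "you skipping acoustic tracks three mornings in a row",
  "Morning mixes now lean toward upbeat playlists",
  "You can reset this under personalization settings",
);

console.log(`${notice.what}. ${notice.why}. ${notice.undoHint}.`);
```

It's a small thing, but it's the difference between a system that quietly reshapes itself and a partner that tells you what it's learning about you.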

The Hard Truth About AI UX Design

These principles might sound straightforward, but implementing them requires fighting against deeply ingrained patterns in software design. We've spent decades optimizing for predictability, hiding system limitations, and measuring immediate engagement. Generative AI systems demand the opposite.

It's like we've spent years perfecting horse-drawn carriages only to find ourselves suddenly designing for dragons. Yes, both can transport you from A to B, but the similarities end rather abruptly after that. Dragons have opinions. Dragons occasionally set things on fire. Dragons remember that one time you yanked too hard on the reins.

As these systems continue to evolve, these principles become more important, not less. The foundational challenge remains the same: creating interfaces that support productive human-AI collaboration while acknowledging the fundamental differences between human and machine intelligence.

When people ask me how to design AI UX, I tell them to start by recognizing that we're designing for new forms of collaboration, not just new features. Designers aren't just interface creators anymore; they're the organizational leaders who must champion emergence, honestly address imperfection, and architect appropriate trust relationships. Those who embrace these expanded responsibilities will create the interfaces that define the next era of computing. The rest will be left wondering why their users keep having weird arguments with their smart toasters.
