Humans are complex; it's your job to translate.

May 11th, 2026
Bellevue, WA

Across many workplaces, we're starting to see AI added to enhance our design workflows. That got me thinking about what it means to use AI as a UX designer.

To find out, I talked with a few colleagues about how they were using AI and what they were trying to solve for:

  • Produce output faster
  • Create a live code version of the design
  • Create design output with less effort
  • Achieve greater design accuracy
  • Explore a wider range of design alternatives
  • Ultimately arrive at the best design given the trade-offs

You might start to see a common pattern here. Most of what's listed is essentially an output-artifact accelerator.

This implies that in our current state of using AI (at least in my experience), we're optimizing for design artifact output. Whether we're producing Figma screens or front-end code, we just want to create more, faster.


A thought experiment

Take a moment to consider this thought experiment... The team has done it! After many months of late nights, we've created the ultimate AI design agent. Feed it the context and requirements, and it works autonomously, producing either completed Figma screens in a user flow or front-end code ready to hand off to the front-end team. This AI agent can figure out all the gaps, standardizations, and design system choices, and even fill in missing context using heuristics and a research agent that feeds it the latest data about our users. Well, I guess that's it then!

Or is it?

So what happened when the team started to use the "ultimate AI design agent"? Well, we now have to figure out how to tell it what to do. And that's where we started to argue about where to start and what good design means. It's not that there's no definition of the desired results; it's that it's hard to agree on which definition or path the AI agent should use. For the AI agent to work, it needs clear input. And getting to that clarity requires the team to actually agree on what they're solving, which turns out to be the hard part.


Here's my personal take on this:

I think that, given the way we're building with AI, many teams are working toward automation and targeting our biggest time expenditures. In other words, nearly all the problems we're solving today are output problems. And I think output, or "artifact production," is going to be a fairly fast solve for AI.

The real value of the designer was never the artifact. It's that messy human problems are hard to communicate, that we don't always know what we actually want, and that working together is complicated.

What designers should focus on once "artifact production" has been solved is:

  • Translating and aligning human needs into solvable problems
  • Guiding alignment between humans across roles and teams who think they agree but actually don't
  • Asking the questions that weren't obvious to ask, and getting really good at making people think harder about the ones they did ask

My take today, as a designer trying to understand where things are going with AI, is that as our roles become compressed and our responsibilities blur, the value we'll bring is being an aligning force that can reason with both logic and human communication. I'd spend more time on research, synthesis, facilitation, and systems thinking. Make human complexity more legible, and then we can hand the output tasks to the AI agents.


Next time, maybe I'll try to answer the question: "Other than output, what else should designers use AI for?"
We'll get into AI-assisted alignment.