back and forth, forever
the Claude-to-Figma-to-Claude loop
the short version: The Figma MCP isn’t interesting as a code-to-canvas pipeline. It’s interesting because it closes the loop. When you hand-edit a generated Figma file and bring it back into the conversation, your edits become implicit instruction to the model. That’s not prompting — that’s a crit. This is what GPS not chauffeur looks like as an actual workflow, and imo it reframes the “will AI replace designers” discourse entirely.
I am genuinely fascinated by the range of perspectives people are bringing to the Figma MCP/Claude integration. “Why would I ever want to push code into the canvas?” or “Real designers would never generate design files from a CLI.” The convos I’m following on design LinkedIn are mostly treating this like a one-directional pipeline: code in, Figma out. And then correctly concluding that it doesn’t make a ton of sense, unless for some reason your design org requires Figma.
True enough! But for whatever reason, people seem to be overlooking the loop.
what happens when you bring the file back
I got Figma and Claude talking to each other the other night. And because I am someone who does a lot of thinking in conversation with LLMs (for my job and just because I am truly that kind of nerd), what I’m most excited about is the round-trip.
You start a convo with Claude about something you want to build. It creates it in code, then generates a Figma file through the MCP. You open that file and you edit it with your own hands and your eyes and your sense of what feels right. You move stuff around. You mess with scale. You tweak spacing that looked okay in the browser but looks weird now. You make all the tiny decisions that constitute design “taste.” (I also have thoughts about the ubiquitous taste convo but I will save those for another post.)
But then, you bring the edited file back into the same conversation. This is the part people seem to be missing or ignoring.
So here’s what just happened (sketched in code after this list):
The model proposed an artifact
You adjusted it with your hands, your eyes, your taste — embodied design decisions that the model literally cannot make
You brought the edited artifact back into the conversation
The model absorbed your changes as the new ground truth and kept building from them
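If it helps to see the workflow as code, here’s a toy sketch of the loop in TypeScript. To be clear about what’s invented: every name in it (generate, pushToCanvas, humanIsDone, pullFromCanvas, the NodeTree shape) is a hypothetical stand-in, not the real Figma or MCP API. It’s the shape of the protocol, nothing more.

```typescript
// A toy sketch of the loop, not a real API. Every name below is a
// hypothetical stand-in for whatever your actual MCP tooling exposes.

type NodeTree = {
  id: string;
  props: Record<string, number | string>;
  children: NodeTree[];
};

interface LoopTools {
  // The model proposes an artifact, optionally building on ground truth.
  generate(intent: string, groundTruth?: NodeTree): Promise<NodeTree>;
  // Code -> canvas: push the proposal into a Figma file, get a file key back.
  pushToCanvas(tree: NodeTree): Promise<string>;
  // You edit with hands, eyes, taste. This just waits for you to finish.
  humanIsDone(fileKey: string): Promise<void>;
  // Canvas -> code: pull the hand-edited file back into the conversation.
  pullFromCanvas(fileKey: string): Promise<NodeTree>;
}

async function roundTrip(tools: LoopTools, intent: string, rounds = 3) {
  let groundTruth: NodeTree | undefined;
  for (let i = 0; i < rounds; i++) {
    const proposal = await tools.generate(intent, groundTruth);
    const fileKey = await tools.pushToCanvas(proposal);
    await tools.humanIsDone(fileKey);
    // The edited artifact, not your explanation of it, is the new ground truth.
    groundTruth = await tools.pullFromCanvas(fileKey);
  }
  return groundTruth;
}
```

The line that matters is the last one inside the loop: the hand-edited file comes back in as data, and the next generate call builds from it.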
Your hand edits become implicit instruction. You don’t have to spell out what you changed or why you did it. The delta between what the model generated and what you actually wanted IS the instruction to the model. The model reads the artifact, not your explanation of it.
That’s so different from the normal workflow of trying to put spatial decisions into words. “The header needs more breathing room, but not too much, and the weight feels off. Maybe tighten up the letter spacing -2%?” That’s a lossy translation. When the model can look at what you actually did instead of what you’re trying to say, you skip the translation entirely.
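To make “the delta is the instruction” concrete, here’s a toy diff. The flat nodeId-to-properties map is a deliberate simplification (a real Figma document is a deep node tree), but the shape of the idea holds: compare what the model generated with what you hand-edited, and the changes read out as instructions nobody had to write.

```typescript
// Toy illustration of "the delta is the instruction": diff the model's
// generated node properties against your hand-edited ones.

type Props = Record<string, number | string>;
type NodeProps = Record<string, Props>; // nodeId -> properties

function delta(generated: NodeProps, edited: NodeProps): string[] {
  const instructions: string[] = [];
  for (const [nodeId, after] of Object.entries(edited)) {
    const before = generated[nodeId] ?? {};
    for (const [prop, value] of Object.entries(after)) {
      if (before[prop] !== value) {
        instructions.push(`${nodeId}.${prop}: ${before[prop]} -> ${value}`);
      }
    }
  }
  return instructions;
}

// The spacing tweak you'd struggle to phrase reads out directly:
const generated = { header: { paddingTop: 16, letterSpacing: 0 } };
const edited = { header: { paddingTop: 24, letterSpacing: -0.2 } };
console.log(delta(generated, edited));
// -> ["header.paddingTop: 16 -> 24", "header.letterSpacing: 0 -> -0.2"]
```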
the crit wall
Here’s how I think about it: like I’m tacking the model’s output up on the crit wall. Stepping back and looking at it the way I’d look at work as a creative director, not as a collaborator in the moment but as an evaluator with taste and discernment. What would I say? What would I change? What isn’t earning its place?
Then I take those decisions into Figma, make my edits, and bring the file back into the conversation. So it’s not prompting, it’s a crit. The round-trip is a design critique that happens to include a machine.
Someone in my LinkedIn comments raised a good challenge to this: if the model accumulates enough of those deltas — enough fragments of your taste, your heuristics, your weird obsessive predilections — does it eventually learn to build it “right” the first time? And if so, does that amplify you or slowly commoditize the judgment that used to live in your head?
I genuinely do not know. Maybe both?? But I think the answer lives in whether you choose to stay in crit mode or start rubber-stamping. GPS changes how people navigate: after depending on it long enough, some people stop forming mental maps altogether. The same thing could very well happen here if you let it. The tool reshapes the user. The question is whether you stay awake to that tendency.
GPS not chauffeur, working live
I keep talking about this “GPS not chauffeur” idea as a design philosophy because I see it playing out a dozen times a day: give people navigation tools so they can verify and decide for themselves, don’t drive for them. That’s where trust starts. But the failure mode — the “unhinged cabbie,” god forbid — is when the AI takes you where it wants to go and talks over you and you’re buckled in the backseat at the driver’s mercy.
The round-trip with Figma and Claude is what GPS not chauffeur looks like as an actual workflow. The AI doesn’t replace your design judgment. Your design judgment is what’s training the conversation in real time. Your hand edits teach the model what you actually want — within the conversation context, through the medium itself, without the lossy translation of putting spatial and aesthetic decisions into words. You get to stay in the driver’s seat. The AI updates the map based on where you’re actually going, just like your route updates on your car’s GPS as you drive.
why this reframes the “will AI replace designers” thing
The AI design discourse is stuck in a binary: AI replaces designers, or it doesn’t. But the round-trip reveals a third thing that’s more interesting than either: real co-creation where neither side could produce the result alone, and where the human’s embodied expertise isn’t bypassed but amplified. This is some cyborg shit!
The conversation accumulates the decisions of both parties as you go. That’s not replacement or even “assistance,” really. It’s distributed cognition — thinking that happens across the system of human + model + artifact, not inside any single one of them. You still have to understand the principles of good design to make this work. That knowledge and judgment are yours.
This only works if the tooling supports bidirectional flow. One-directional pipelines don’t get you here. You need the loop. The MCP enables the loop.
back and forth, forever
The Figma MCP is not interesting to me because it can generate design files. It’s interesting because it closes the loop. And once the loop is closed, something genuinely novel happens. Your embodied design sense doesn’t get flattened into a prompt. It stays embodied. You edit with your hands and the model reads your hands.
I have no idea yet what this looks like at scale, or how it might change when the models get better at spatial reasoning, or what happens when a whole team is round-tripping in the same conversation. But for me, the loop is the thing. Everything else is just rote implementation.