Lately, whenever I open social platforms, I always find posts claiming that a new tool or release launched by [insert AI name here] has killed designers, developers, or both.
For the past few months, I've been working alongside dev teams that ship 10x faster thanks to AI, and I kept running into two problems:
1. Speed
Designers have become the bottleneck. Dev teams powered by AI can go from idea to deployed product in hours. Traditional design workflows (design every screen, build a system, write specs, hand off) can't keep up. The designer either slows the team down or gets routed around.
2. Fidelity
What gets designed in Figma doesn't match what ships in code. AI generates visually plausible interfaces, but spacing drifts, tokens don't align, and components diverge from the design system. Without a quality connection between Figma and the codebase, every screen is a manual translation exercise that always drifts.
I landed on two workflows, each solving one of these problems:
Solves: Speed
Dev teams today can ship 10x faster with AI, going from idea to deployed prototype in hours. But they can't wait for Figma files. Designers have become the bottleneck in digital product development.
Traditional design workflows assume a process where the designer creates every screen, builds the system, writes specs, and hands everything off to devs. This made sense when development took months. When development takes hours, this sequence turns the designer into the slowest link in the chain.
The uncomfortable truth: if a designer's only contribution is creating pixel-perfect Figma files, AI-powered teams will route around them.
The designer's value hasn't changed: understanding what is needed (user journeys, screens, information architecture, user experience, aesthetics, etc.).
What changed is HOW that value is delivered.
This workflow restructures the designer's role around five stages:
| Stage | Who leads | What happens |
|---|---|---|
| 1. Define | Designer | Define (not design) everything needed by devs and AI |
| 2. Feed the AI | Designer + Dev | Turn Stage 1 into AI context (CLAUDE.md, skill file, Figma MCP) |
| 3. Build | Dev team | Generate a working product with AI |
| 4. Quality Assurance | Designer | UX review + fix cycle |
| 5. Validation | Designer | Real users test the product |
Before a single line of code is written, the designer owns the strategy. Research, information architecture, user journeys, and a design rules document that AI will follow during the build. The goal is to give the dev team everything they need to build with AI without having to guess.
| Deliverable | Tool | Contents | Feeds into |
|---|---|---|---|
| Project brief | Notion | Problem definition, competitive analysis, target user, core metric | CLAUDE.md |
| Information architecture | Notion | Site map, content hierarchy, screen inventory, navigation structure | CLAUDE.md |
| User journeys | FigJam | Core flows, decision points, happy paths, edge cases | Figma MCP |
| Design rules | Notion | Product UX rules, visual quality standards, accessibility requirements | Skill file |
| Lo-fi Wireframes (optional) | Figma | Structure and hierarchy for complex or ambiguous flows only | Figma MCP |
The "Feeds into" column is what makes this workflow different from a traditional one. Every deliverable has a direct path into the AI's context. Stage 2 explains how.
Most teams skip this. That's a mistake.
The design rules document combines three types of rules into one authoritative source.
The designer reads the Anthropic and Vercel skill files as reference material and picks what's relevant. One document, one priority order, no ambiguity.
The difference between a mediocre AI build and a good one is the context the AI has access to. This stage takes the Stage 1 deliverables and turns them into three things the AI reads before and during every prompt.
What it is: A markdown file in the project root that Claude Code reads before every prompt. It's the AI's persistent memory of the project.
What goes in it:
How to create it: Create a file called CLAUDE.md in the root of the GitHub repo. Paste the content from Notion. Plain text, no special tooling.
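An illustrative skeleton of the file — the headings and details below are hypothetical examples, not the author's; the real content is pasted from the Stage 1 Notion deliverables:

```markdown
# CLAUDE.md

## Product
One-paragraph problem definition and target user (from the project brief).

## Core metric
e.g., % of new users completing onboarding.

## Information architecture
- / (landing)
- /onboarding (3-step signup flow)
- /dashboard (main app view)

## Conventions
- Stack: Next.js + Tailwind CSS + shadcn/ui
- Design rules live in .claude/skills/design-rules.md — always follow them
```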
What it is: A markdown file at .claude/skills/design-rules.md that Claude Code references during every build. It contains the design rules from Stage 1.
How to create it: Create the .claude/skills/ folder in the repo and add a file called design-rules.md. Copy the design rules from the Notion document. Plain text, no tooling.
Why it's a separate file: CLAUDE.md is the project context (what to build). The skill file is the design constraints (how it should look and behave). Keeping them separate means the skill file can be reused across projects with minor edits.
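An illustrative excerpt of what the skill file can contain — these example rules are assumptions for the sketch, not the source's; the real ones come from the Stage 1 design rules document:

```markdown
# Design rules

## Product UX rules
- Every async action shows a loading state and disables its trigger.
- Forms: max 4 fields per step, inline validation on blur.

## Visual quality standards
- Spacing only from the 4px scale; never arbitrary pixel values.
- Use semantic tokens (bg-primary), never raw palette classes (bg-blue-500).

## Accessibility requirements
- All interactive elements are keyboard-reachable with visible focus states.
- Icon-only buttons get ARIA labels; placeholder text is never used as a label.
```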
What it is: A connection between Claude Code and Figma that lets the AI read designs and FigJam boards directly.
What it does: When the dev pastes a Figma or FigJam link into Claude Code, the AI reads the layout, hierarchy, flows, and structure. Wireframes and user journeys become direct input, not reference documents sitting in a browser tab.
How to set it up: One command: `claude mcp add --transport http figma https://mcp.figma.com/mcp`, then authenticate via `/mcp` in Claude Code. One-time setup, about 2 minutes.
The dev team uses AI tools to generate a working product, with the Stage 2 context already in place. The designer is available for async questions but does not block the build. This stage can take hours or days, depending on complexity.
Claude Code is the recommended primary tool because the codebase stays in the team's GitHub repo, it scales directly to the Pixel-Perfect at Scale pipeline without rewriting, and it reads the CLAUDE.md, skill files, and Figma MCP context set up in Stage 2.
| Tool | Best for | Limitation |
|---|---|---|
| v0 (by Vercel) | Individual UI components, frontend only | No backend, no full-stack |
| Lovable | Non-technical founders validating solo | Code quality not production-grade |
| Bolt | Rapid full-stack prototyping | Similar to Lovable |
| Cursor / Windsurf | Devs who want AI inside their code editor | Requires dev skills |
Regardless of the tool, the output is: a deployed, functional product on Vercel that users can interact with.
Even with design rules baked into the build, AI-generated interfaces still need a trained eye. The output will be significantly better than a raw prompt, but experienced designers spot flow-level problems that no AI catches today: confusing navigation sequences, unclear information hierarchy, missing edge cases, and interactions that technically work but feel wrong.
The designer walks every screen and flow on desktop + mobile.
Because the design rules were already applied during the build, most mechanical issues (missing ARIA labels, placeholder-as-label, basic spacing) should already be handled. The designer focuses on what AI consistently misses.
This is NOT pixel-perfect checking. This IS ensuring the UX works for real users.
How every issue is logged:
Every issue needs two things: the screen where it occurs, and a ready-to-paste AI prompt that fixes it.
The prompt itself already describes what's wrong and what the fix should be. No need for separate "what's wrong" / "expected behavior" / "severity" fields. The designer orders the list by priority, and the dev executes from top to bottom.
Screen: `/onboarding/step-2`
Prompt: "Split the form into 2 steps with max 4 fields each. Add a step progress indicator showing 'Step X of Y' above the form."

Screen: `/dashboard`
Prompt: "The primary CTA is below the fold on mobile. Move it above the activity feed so it's visible without scrolling."

Screen: `/settings/profile`
Prompt: "Add a loading state to the Save button. Show a spinner and disable the button while the request is processing."
How designer feedback becomes code changes in minutes, not days.
The designer delivers the ordered list of issues with prompts. The dev opens Claude Code and executes them from top to bottom. Dev commits and pushes once. Vercel auto-deploys. Designer verifies on staging.
Why this works: The bottleneck isn't dev execution time. It's the dev misinterpreting the designer's intent. When every issue comes with a precise AI prompt, the dev's job is: open terminal, paste, commit, push. Zero interpretation needed. For prompts that involve logic, state, or API changes, the dev uses judgment. The designer verifies after.
For designers who want to eliminate dev dependency for simple UI/UX changes:
The designer makes simple changes directly on a `design-fixes` branch, and the dev reviews the `design-fixes` branch before it merges.

When this makes sense: The designer is comfortable with basic terminal commands, and zero wait time is a priority. Best for personal projects or teams with clear boundaries on what "simple" means.
Key rule: The designer only touches UI/UX surface changes. Logic, state, APIs, and structural changes always go through the dev.
Quality assurance catches UX problems. Validation answers a bigger question: Does anyone actually want this?
The product is now deployed, the major UX issues are fixed, and it's time to put it in front of real users. This stage determines whether the idea is worth building properly (graduating to the Scale Pipeline with a full design system and token governance) or should be iterated on, pivoted, or killed.
Option A: Unmoderated (fastest, async)
Option B: Moderated (deeper insights)
| Metric | How to measure |
|---|---|
| Task completion | Did they finish the core task? (yes/no) |
| Time to complete | How long did it take? |
| Error points | Where did they make mistakes or go back? |
| Confusion points | Where did they pause or ask questions? |
| Satisfaction | Simple 1-5 rating after the task |
| Signal | Action |
|---|---|
| Users complete the core task and express interest | Graduate to the Scale Pipeline (pixel-perfect design-to-code) |
| Users struggle but the concept resonates | Restart cycle (fix and retest) |
| Users don't understand or don't care | Kill or pivot the idea |
Each cycle takes days, not weeks:
Build (with AI context) → Designer review → Fix → Deploy
↓
User test (when ready)
↓
Synthesize → Fix → Deploy
↓
Ship / Iterate / Kill / Graduate to Scale Pipeline

When the idea validates and it's time to build it properly (pixel-perfect design system, token governance, automated visual QA), nothing gets thrown away. The codebase upgrades directly into the full design-to-code pipeline.
| What carries over | What gets added |
|---|---|
| Codebase (already Next.js + shadcn/ui) | shadcndesign Figma kit (full design system) |
| CLAUDE.md + design rules skill file | Tokens Studio + Style Dictionary (automated token pipeline) |
| User journey learnings | Code Connect (component mapping) |
| Deployed product on Vercel | Storybook + Chromatic (automated visual QA) |
Solves: Fidelity
Once we've validated our solution idea with a quick, good-enough product, it's time to build it properly. But "properly" doesn't mean going back to the old workflow. It means adding the layers that turn a validated prototype into a scalable, maintainable product, without losing the speed that got the team here.
AI-generated code works. But two things break down at scale.
1. Design fidelity: The Figma design says one thing, the code renders something else. A button that's 8px radius in Figma ships as 6px in code. A heading that uses the brand font in Figma renders in a fallback font in production. Spacing that's consistent in the design drifts in implementation. Without a quality connection between Figma tokens and code variables, every screen is a manual translation exercise, and manual translation always drifts.
2. Component consistency: A developer prompts Claude Code to build a card component and gets bg-blue-500 p-4 rounded-lg. Another developer prompts the same tool a week later and gets bg-primary p-6 rounded-xl. Same product, inconsistent output. Without shared tokens and component mapping, AI generates visually plausible code that doesn't match the design system.
This pipeline solves both problems by making Figma the single source of truth and automating every step from design token to deployed code. Design tokens flow directly from Figma Variables to CSS variables, so there's no manual translation to drift. Code Connect maps Figma components to their exact code counterparts, so AI generates the real component API instead of generic CSS. And the designer approves every visual change before it ships, catching anything that slipped through.
| Step | Tool | Owner | Purpose |
|---|---|---|---|
| 0 | GitHub | Dev Team | Central hub connecting everything |
| 1 | shadcndesign Figma Kit | Designer | Design system with Variables, components, light/dark modes |
| 2 | Tokens Studio | Designer | Sync Figma Variables to GitHub as structured JSON |
| 3 | Style Dictionary | Dev Team | Convert token JSON into CSS variables + Tailwind config |
| 4 | Next.js + shadcn/ui | Dev Team | Code project consuming the design tokens |
| 5 | Claude Code + Figma MCP | Dev Team | AI reads Figma, generates production code |
| 6 | Code Connect | Together | Links Figma components to code components |
| 7 | Storybook | Dev Team | Component catalog for testing and visual verification |
| 8 | Vercel | Dev Team | Auto-deploys from GitHub on every push |
| 9 | QA | Designer | Designer reviews new screens + Chromatic catches regressions |
Everything connects through GitHub. Tokens Studio pushes design tokens here. Style Dictionary reads them. Claude Code commits generated code. Vercel auto-deploys. Code Connect reads component files.
Action: Create a GitHub repository for the project. All team members need access.
There are multiple shadcn/ui Figma kits. After evaluating several options, shadcndesign.com is the best fit for this pipeline: every Figma component maps exactly to a shadcn/ui code component, it uses Figma Variables (not legacy color styles) for all tokens, light/dark mode is built in, and Tailwind CSS naming aligns out of the box.
The kit includes two tools:
Figma Kit: The component library with Variables, auto-layout, and variant properties that match code prop names (variant, size, state, disabled, etc.).
Figma Plugin: A code generation tool. Select a frame or component in Figma, and it outputs clean shadcn/ui React code. Useful for quick one-off exports. Note: it does not manage or sync tokens to a repo (that's Tokens Studio's job).
Actions:
Without this, devs manually copy hex values from Figma. With Tokens Studio, every design decision flows automatically to code. Change a color in Figma, push a PR, and the entire codebase updates.
Tokens Studio reads all Figma Variables (colors, spacing, radius, typography), converts them to W3C Design Token format (structured JSON), pushes them to the GitHub repo as a Pull Request, and supports bi-directional sync.
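For illustration, here's roughly what a Tokens Studio export looks like in W3C Design Token format — the token names and values below are hypothetical:

```json
{
  "color": {
    "primary": { "$type": "color", "$value": "#2563eb" }
  },
  "radius": {
    "md": { "$type": "dimension", "$value": "8px" }
  },
  "font": {
    "heading": { "$type": "fontFamily", "$value": "Inter" }
  }
}
```

This JSON lands in the repo as a pull request; the next step converts it into CSS variables the codebase can consume.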
Actions:
Tokens Studio pushes JSON to GitHub. But the codebase needs CSS variables and a Tailwind config, not raw JSON. Style Dictionary is the build tool that makes that conversion, handling unit transforms (% line-height to unitless, font weight names to numbers, etc.) automatically.
Why sd-transforms is required: Tokens from Tokens Studio have a specific format. The sd-transforms package handles the conversion so Style Dictionary can process them correctly.
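To make the conversion concrete, here's a toy version of what this build step produces. This is a sketch only — not the real Style Dictionary implementation — and the token names are hypothetical:

```typescript
// Toy sketch of the JSON-to-CSS conversion this step automates.
// NOT the real Style Dictionary implementation.
type TokenNode = { $value?: string; $type?: string } & { [key: string]: unknown };

function tokensToCssVars(tokens: Record<string, TokenNode>, prefix = ""): string[] {
  const lines: string[] = [];
  for (const [name, node] of Object.entries(tokens)) {
    const path = prefix ? `${prefix}-${name}` : name;
    if (node && typeof node === "object" && "$value" in node) {
      // Leaf token: emit a CSS custom property
      lines.push(`--${path}: ${node.$value};`);
    } else if (node && typeof node === "object") {
      // Token group: recurse, building up the variable name
      lines.push(...tokensToCssVars(node as Record<string, TokenNode>, path));
    }
  }
  return lines;
}

// Mirrors the W3C-format JSON that Tokens Studio pushes to the repo
const tokens = {
  color: { primary: { $type: "color", $value: "#2563eb" } },
  radius: { md: { $type: "dimension", $value: "8px" } },
};

console.log(`:root {\n  ${tokensToCssVars(tokens).join("\n  ")}\n}`);
// Prints:
// :root {
//   --color-primary: #2563eb;
//   --radius-md: 8px;
// }
```

The real pipeline additionally handles unit transforms and Tailwind config generation, which is exactly what sd-transforms and Style Dictionary provide.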
Actions:
- `npm install style-dictionary @tokens-studio/sd-transforms --save-dev`

The actual codebase. Next.js (React framework, industry standard) + Tailwind CSS (utility-first styling) + shadcn/ui (the component library where the team owns the code).
Why shadcn/ui: most popular React UI library (104K+ GitHub stars), components live as editable source files in the project (not in node_modules), best AI tooling support of any component library, 0KB runtime overhead, and accessible by default via Radix UI primitives.
Actions:
- `npx create-next-app@latest --typescript --tailwind --eslint --app`
- `npx shadcn@latest init`
- `npx shadcn@latest add button input badge card avatar`

This is where AI generates production code. The dev pastes a Figma frame link into Claude Code. Claude reads the design via MCP (including Variables, components, and Code Connect metadata), generates React code using the real shadcn/ui imports, CSS variables, and project structure, and writes it directly into the project files.
What makes this different from other code generation: full codebase context (Claude knows the existing components, utilities, and patterns), a CLAUDE.md file that describes team conventions, skill files from shadcndesign for accurate generation, and iterative refinement ("make this responsive", "add dark mode", "add loading state").
Actions:
- Install Claude Code: `npm install -g @anthropic-ai/claude-code`
- Add the Figma MCP server: `claude mcp add --transport http figma https://mcp.figma.com/mcp`
- Authenticate via the `/mcp` command in Claude Code.
- Add the shadcndesign skill files to `.claude/skills/` (these complement the custom design-rules.md from the Rapid Validation workflow. The design rules file contains product-specific UX and visual rules, while the shadcndesign skill files teach Claude how to use shadcn/ui components and tokens correctly. Both live in the same `.claude/skills/` folder.)

Syncing with GitHub: Claude Code can run git commands directly ("Commit these changes and push to GitHub"). Teams can also use GitHub Desktop if preferred.
This is the highest-leverage step in the entire pipeline. Without Code Connect, Claude Code sees "a blue button with 16px padding" and generates generic HTML/CSS. With Code Connect, Claude Code sees <Button variant="primary" size="md"> and uses the actual component API.
Every step before this exists to make Code Connect work at its best.
Two setup methods: Code Connect UI (recommended, visual interface in Figma, designer-friendly) and Code Connect CLI (config files in the codebase, more flexible, better for advanced cases).
Important note on timing: Code Connect requires that coded components already exist in the repo. Step 4 must be completed first. The recommended sequence is: build base components in code (Step 4), set up Code Connect, then use Claude Code for all subsequent designs with enhanced output.
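A sketch of what a CLI-method Code Connect mapping can look like — the Figma URL, node ID, variant names, and prop values below are placeholders, not real values from the source:

```tsx
// button.figma.tsx — lives next to the component in the repo
import figma from "@figma/code-connect";
import { Button } from "@/components/ui/button";

figma.connect(
  Button,
  "https://www.figma.com/design/<file-key>?node-id=<node-id>", // placeholder URL
  {
    props: {
      // Map Figma variant properties to the component's real props
      variant: figma.enum("Variant", {
        Primary: "default",
        Secondary: "secondary",
        Destructive: "destructive",
      }),
      size: figma.enum("Size", { Small: "sm", Medium: "default", Large: "lg" }),
      label: figma.string("Label"),
    },
    example: ({ variant, size, label }) => (
      <Button variant={variant} size={size}>{label}</Button>
    ),
  }
);
```

With a mapping like this published, the Figma MCP response for a selected Button exposes the real component API instead of raw styles.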
Actions:
A development environment where coded components are rendered in isolation with all their variants and states. Designers use it to visually verify that implementations match the Figma design. It serves as the living documentation of what's been built vs. what's designed.
This is not optional. Code Connect is more useful when Storybook stories are available, allowing designers to browse them without accessing the codebase.
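A minimal story sketch in Storybook's CSF 3 format — the component path and variant values are assumed from the shadcn/ui setup earlier, not taken from the source:

```tsx
// components/ui/button.stories.tsx
import type { Meta, StoryObj } from "@storybook/react";
import { Button } from "@/components/ui/button";

const meta: Meta<typeof Button> = {
  title: "UI/Button",
  component: Button,
};
export default meta;

type Story = StoryObj<typeof Button>;

// One story per variant/state the designer needs to verify against Figma
export const Primary: Story = { args: { children: "Save" } };
export const Secondary: Story = { args: { variant: "secondary", children: "Cancel" } };
export const Disabled: Story = { args: { disabled: true, children: "Save" } };
```

Each story later becomes a Chromatic snapshot, so the more variants and states are covered, the more regressions get caught automatically.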
Actions:
- `npx storybook@latest init`
- Install `@storybook/addon-themes` for light/dark mode support in stories.

Connect the GitHub repo, and Vercel auto-deploys the site every time code is pushed. Free tier, custom domain support, global CDN, free HTTPS. Recommended by shadcn/ui for Next.js projects. Works for landing pages and complex SaaS alike.
If the team already has a domain registered elsewhere (e.g., GoDaddy), keep the domain registration as is and point the DNS records to Vercel. Takes 5 minutes.
Actions:
Every new screen and every significant feature is reviewed by the designer before it ships. Every issue is a Screen + Prompt that the dev can execute directly in Claude Code:
Screen: `/settings/profile`
Prompt: "The Save button has no loading state. Add a spinner and disable the button while the request is processing."
The dev executes the prompts, pushes, Vercel auto-deploys, and the designer verifies again.
For designers who want to eliminate dev dependency for simple UI/UX changes:
The designer makes simple changes directly on a `design-fixes` branch, and the dev reviews the `design-fixes` branch before it merges.

When this makes sense: The designer is comfortable with basic terminal commands, and zero wait time is a priority. Best for personal projects or teams with clear boundaries on what "simple" means.
Key rule: The designer only touches UI/UX surface changes. Logic, state, APIs, and structural changes always go through the dev.
On top of the designer's manual review, Chromatic (by the Storybook team) provides an automated layer that catches visual regressions on existing components. Every time code is pushed, it screenshots every Storybook story and diffs them against the previous approved baseline. If anything changed visually, even by 1 pixel, the designer gets notified.
This is particularly valuable for catching unintended side effects: a developer changes a token or refactors a component, accidentally breaking something elsewhere. Chromatic catches what no one was looking at.
How it works:
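One common wiring is Chromatic's GitHub Action. A sketch, assuming Storybook is already set up and the project token is stored as a repo secret (the secret name is an assumption):

```yaml
# .github/workflows/chromatic.yml
name: Chromatic
on: push
jobs:
  chromatic:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0 # Chromatic needs full git history to track baselines
      - run: npm ci
      - uses: chromaui/action@latest
        with:
          projectToken: ${{ secrets.CHROMATIC_PROJECT_TOKEN }}
```

On every push, the action builds Storybook, snapshots every story, diffs against the approved baseline, and surfaces visual changes for designer approval in the Chromatic UI.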
Example of a denied change:
Screen: Button component (all variants)
Prompt: "The border-radius changed from 8px to 6px across all button variants. Revert to radius-md (8px) to match the Figma source."
Important: Chromatic only covers components with Storybook stories and compares against a previous baseline. New screens have no baseline, which is why the designer's manual review is the primary QA layer.
No visual change ships to production without explicit designer approval.
Free tier: 5,000 snapshots/month (enough for a small team).
Chromatic setup (Dev Team):
- `npm install --save-dev chromatic`

Once setup is complete, this is the cycle for every feature.
| Step | Who | What happens |
|---|---|---|
| 1 | Designer | Designs screens in Figma using the shadcndesign library |
| 2 | Designer | If tokens changed: pushes a PR via Tokens Studio |
| 3 | Dev | Reviews and merges the token PR. Style Dictionary rebuilds CSS variables. |
| 4 | Dev | Opens Claude Code, pastes a Figma frame link, prompts "Build this design" |
| 5 | Dev | Claude reads Figma via MCP, generates code using real components + tokens |
| 6 | Dev | Reviews, adjusts, commits, and pushes to GitHub |
| 7 | Designer | Reviews new screens (manual walkthrough + Screen/Prompt feedback). Chromatic auto-flags regressions on existing components. |
| 8 | Vercel | Auto-deploys once Chromatic approval + code review pass |
Every tool in both workflows, linked.
Hey!
I'm Diego Perez, a UX/UI product designer in sunny Valencia (Spain). I've spent the last 17 years turning ideas into funded, acquisition-ready products.
7 years ago I founded the boutique agency Unlocal, where I direct a small team that only works on two projects at a time. We've helped founders raise $60M and drive $44M in exits, designed the internal SaaS of a $25B Fortune 500 company, and worked on products with 12M+ users.
Off the clock, you'll catch me perfecting an aquascape, hunting rare Belgian beers at local markets, dreaming about off-grid, self-sufficient living, playing absurdly complex video games, or losing at board games to my five-year-old daughter.