Playbook for
AI-Powered Teams

Lately, whenever I open a social platform, I find posts claiming that the latest tool or release from [insert AI name here] has killed designers, developers, or both.

What AI has killed is not our profession, but our workflow.

For the past few months, I've been working alongside dev teams that ship 10x faster thanks to AI, and I kept running into two problems:

1. Speed

Designers have become the bottleneck. Dev teams powered by AI can go from idea to deployed product in hours. Traditional design workflows (design every screen, build a system, write specs, hand off) can't keep up. The designer either slows the team down or gets routed around.

2. Fidelity

What gets designed in Figma doesn't match what ships in code. AI generates visually plausible interfaces, but spacing drifts, tokens don't align, and components diverge from the design system. Without a quality connection between Figma and the codebase, every screen is a manual translation exercise that always drifts.

I landed on two workflows, each solving one of these problems:

Solves: Speed

⚡ Rapid Validation

Dev teams today can ship 10x faster with AI, going from idea to deployed prototype in hours. But they can't wait for Figma files. Designers have become the bottleneck in digital product development.

This workflow is defined for Figma and Claude Code, my favorite stack, but the same structure and logic apply to other tools.

The Problem

Traditional design workflows assume a process where the designer creates every screen, builds the system, writes specs, and hands everything off to devs. This made sense when development took months. When development takes hours, this sequence turns the designer into the slowest link in the chain.

The uncomfortable truth: if a designer's only contribution is creating pixel-perfect Figma files, AI-powered teams will route around them.

The Shift

The designer's value hasn't changed: understanding what's needed (user journeys, screens, information architecture, user experience, aesthetics, etc.).

What changed is HOW that value is delivered.

This workflow restructures the designer's role around five stages:

| Stage | Who leads | What happens |
|---|---|---|
| 1. Define | Designer | Define (not design) everything needed by devs and AI |
| 2. Feed the AI | Designer + Dev | Turn Stage 1 into AI context (CLAUDE.md, skill file, Figma MCP) |
| 3. Build | Dev team | Generate a working product with AI |
| 4. Quality Assurance | Designer | UX review + fix cycle |
| 5. Validation | Designer | Real users test the product |

Stage 1: Define (Designer)

Before a single line of code is written, the designer owns the strategy. Research, information architecture, user journeys, and a design rules document that AI will follow during the build. The goal is to give the dev team everything they need to build with AI without having to guess.

The deliverables

| Deliverable | Tool | Contents | Feeds into |
|---|---|---|---|
| Project brief | Notion | Problem definition, competitive analysis, target user, core metric | CLAUDE.md |
| Information architecture | Notion | Site map, content hierarchy, screen inventory, navigation structure | CLAUDE.md |
| User journeys | FigJam | Core flows, decision points, happy paths, edge cases | Figma MCP |
| Design rules | Notion | Product UX rules, visual quality standards, accessibility requirements | Skill file |
| Lo-fi wireframes (optional) | Figma | Structure and hierarchy for complex or ambiguous flows only | Figma MCP |

The "Feeds into" column is what makes this workflow different from a traditional one. Every deliverable has a direct path into the AI's context. Stage 2 explains how.

About design rules

Most teams skip this. That's a mistake.

The design rules document is a mix of three types of rules combined into one authoritative source:

  • Product-specific UX rules written by the designer (e.g., "forms max 5 fields per step", "CTA above the fold", "every action has visible feedback"). These take top priority.
  • Visual quality rules cherry-picked from Anthropic's open-source frontend-design skill: hierarchy, spacing, typography, and avoiding generic AI aesthetics.
  • Accessibility rules cherry-picked from Vercel's web-design-guidelines: ARIA attributes, focus states, keyboard nav, touch targets, and semantic HTML.

The designer reads the Anthropic and Vercel skill files as reference material and picks what's relevant. One document, one priority order, no ambiguity.

Stage 2: Feed the AI (Designer + Dev)

The difference between a mediocre AI build and a good one is the context the AI has access to. This stage takes the Stage 1 deliverables and turns them into three things the AI reads before and during every prompt.

CLAUDE.md

What it is: A markdown file in the project root that Claude Code reads before every prompt. It's the AI's persistent memory of the project.

What goes in it:

  • The project brief from Stage 1: what the product is, who it's for, and the core metric being validated
  • The information architecture from Stage 1: the full list of routes/screens, how they connect, navigation structure, and content hierarchy
  • Technical conventions: framework, component locations, naming patterns, routing structure

How to create it: Create a file called CLAUDE.md in the root of the GitHub repo. Paste the content from Notion. Plain text, no special tooling.
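As an illustration, a minimal CLAUDE.md might look like the sketch below. The product, routes, and conventions shown are placeholders, not a prescribed template:

```markdown
# Project: Habit Tracker (validation MVP)

## What this is
A mobile-first web app for tracking daily habits.
Core metric: % of users who log a habit 3 days in a row.

## Who it's for
Busy professionals who abandoned other habit apps.

## Routes
- /            Landing + sign-up CTA
- /onboarding  2-step setup (max 4 fields per step)
- /dashboard   Today's habits, primary CTA above the fold
- /settings    Profile + notification preferences

## Conventions
- Next.js App Router, components in src/components
- shadcn/ui components only; no ad-hoc CSS
- Design rules live in .claude/skills/design-rules.md
```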

Design rules skill

What it is: A markdown file at .claude/skills/design-rules.md that Claude Code references during every build. It contains the design rules from Stage 1.

How to create it: Create the .claude/skills/ folder in the repo and add a file called design-rules.md. Copy the design rules from the Notion document. Plain text, no tooling.

Why it's a separate file: CLAUDE.md is the project context (what to build). The skill file is the design constraints (how it should look and behave). Keeping them separate means the skill file can be reused across projects with minor edits.
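For reference, a trimmed design-rules.md might look like this. The specific rules are illustrative examples in the spirit of the three sources described in Stage 1, not a complete or authoritative set:

```markdown
# Design Rules (priority order)

## 1. Product UX rules (top priority)
- Forms: max 5 fields per step
- Primary CTA above the fold on every screen
- Every action gives visible feedback (loading, success, error)

## 2. Visual quality (cherry-picked from Anthropic frontend-design)
- One clear focal point per screen; spacing on a consistent scale
- Avoid generic AI aesthetics: no default gradients or stock glassmorphism

## 3. Accessibility (cherry-picked from Vercel web-design-guidelines)
- All interactive elements keyboard-reachable with visible focus states
- Adequate touch targets; semantic HTML over div soup; ARIA where needed
```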

Figma MCP

What it is: A connection between Claude Code and Figma that lets the AI read designs and FigJam boards directly.

What it does: When the dev pastes a Figma or FigJam link into Claude Code, the AI reads the layout, hierarchy, flows, and structure. Wireframes and user journeys become direct input, not reference documents sitting in a browser tab.

How to set it up: One command: claude mcp add --transport http figma https://mcp.figma.com/mcp, then authenticate via /mcp in Claude Code. One-time setup, 2 minutes.

Stage 3: Build (Dev Team)

The dev team uses AI tools to generate a working product, with the Stage 2 context already in place. The designer is available for async questions but does not block the build. This stage can take hours or days, depending on complexity.

Claude Code is the recommended primary tool because the codebase stays in the team's GitHub repo, it scales directly to the Pixel-Perfect at Scale pipeline without rewriting, and it reads the CLAUDE.md, skill files, and Figma MCP context set up in Stage 2.

| Tool | Best for | Limitation |
|---|---|---|
| v0 (by Vercel) | Individual UI components, frontend only | No backend, no full-stack |
| Lovable | Non-technical founders validating solo | Code quality not production-grade |
| Bolt | Rapid full-stack prototyping | Similar to Lovable |
| Cursor / Windsurf | Devs who want AI inside their code editor | Requires dev skills |

Regardless of the tool, the output is the same: a deployed, functional product on Vercel that users can interact with.

Stage 4: Quality Assurance (Designer)

Even with design rules baked into the build, AI-generated interfaces still need a trained eye. The output will be significantly better than a raw prompt, but experienced designers spot flow-level problems that no AI catches today: confusing navigation sequences, unclear information hierarchy, missing edge cases, and interactions that technically work but feel wrong.

1. Designer Review

The designer walks every screen and flow on desktop + mobile.

Because the design rules were already applied during the build, most mechanical issues (missing ARIA labels, placeholder-as-label, basic spacing) should already be handled. The designer focuses on what AI consistently misses:

  • Does the journey make sense end-to-end?
  • Is the information hierarchy clear on each screen?
  • Would a user know what to do next at every step?
  • Are there dead ends or confusing transitions?
  • Are edge cases handled (empty states, errors, loading)?

This is NOT pixel-perfect checking. This IS ensuring the UX works for real users.

How every issue is logged:

Every issue needs two things:

  • Screen/URL: Where the issue is
  • AI prompt: A ready-to-paste prompt the dev can execute in Claude Code

The prompt itself already describes what's wrong and what the fix should be. No need for separate "what's wrong" / "expected behavior" / "severity" fields. The designer orders the list by priority, and the dev executes from top to bottom.

Screen: /onboarding/step-2
Prompt: "Split the form into 2 steps with max 4 fields each.
Add a step progress indicator showing 'Step X of Y' above the form."

Screen: /dashboard
Prompt: "The primary CTA is below the fold on mobile.
Move it above the activity feed so it's visible without scrolling."

Screen: /settings/profile
Prompt: "Add a loading state to the Save button. Show a spinner 
and disable the button while the request is processing."

2. Fix Cycle

How designer feedback becomes code changes in minutes, not days.

The designer delivers the ordered list of issues with prompts. The dev opens Claude Code and executes them from top to bottom. Dev commits and pushes once. Vercel auto-deploys. Designer verifies on staging.

Why this works: The bottleneck isn't dev execution time; it's the dev misinterpreting the designer's intent. When every issue comes with a precise AI prompt, the dev's job for surface changes is simply: open terminal, paste, commit, push. For prompts that involve logic, state, or API changes, the dev applies judgment, and the designer verifies the result afterward.

Optional: Designer pushes directly

For designers who want to eliminate dev dependency for simple UI/UX changes:

  1. The designer sets up Claude Code on their machine with the project repo
  2. Creates a design-fixes branch
  3. For simple changes (reorder elements, adjust hierarchy, add loading states, change copy), prompts Claude Code directly
  4. Pushes to the design-fixes branch
  5. Dev reviews and merges (quick check, not full implementation)

When this makes sense: The designer is comfortable with basic terminal commands, and zero wait time is a priority. Best for personal projects or teams with clear boundaries on what "simple" means.

Key rule: The designer only touches UI/UX surface changes. Logic, state, APIs, and structural changes always go through the dev.

Stage 5: Validation (Designer)

Quality assurance catches UX problems. Validation answers a bigger question: Does anyone actually want this?

The product is now deployed, the major UX issues are fixed, and it's time to put it in front of real users. This stage determines whether the idea is worth building properly (graduating to the Scale Pipeline with a full design system and token governance) or should be iterated on, pivoted, or killed.

How testing is conducted

Option A: Unmoderated (fastest, async)

  • Tool: Maze, Useberry, or Hotjar
  • Create a test with 3-5 tasks matching the core user journey
  • Share a link with 5-10 users
  • Users complete tasks on their own time
  • The tool records clicks, paths, completion rates, and drop-off points
  • The result is a dashboard with heatmaps and success rates
  • No scheduling needed

Option B: Moderated (deeper insights)

  • 5 users, 20-30 min each via Zoom with screen sharing
  • Give the core task without explaining how
  • Observe where they hesitate, click wrong, or get confused
  • Record with permission

| Metric | How to measure |
|---|---|
| Task completion | Did they finish the core task? (yes/no) |
| Time to complete | How long did it take? |
| Error points | Where did they make mistakes or go back? |
| Confusion points | Where did they pause or ask questions? |
| Satisfaction | Simple 1-5 rating after the task |

How findings become changes

  1. The designer synthesizes findings in Notion: patterns ("4/5 users couldn't find the CTA"), each with what happened, why it's a problem, and a suggested fix as an AI prompt
  2. The team reviews findings (30 min async or sync)
  3. Agree on what to fix: Critical + Major only (MVP, not perfection)
  4. Defer Minor issues to the Scale Pipeline
  5. Dev implements fixes via Claude Code
  6. Deploy
  7. Revalidate with users

The decision

| Signal | Action |
|---|---|
| Users complete the core task and express interest | Graduate to the Scale Pipeline (pixel-perfect design-to-code) |
| Users struggle but the concept resonates | Restart the cycle (fix and retest) |
| Users don't understand or don't care | Kill or pivot the idea |

The Iteration Cycle

Each cycle takes days, not weeks:

Build (with AI context) → Designer review → Fix → Deploy
                                                    ↓
                                          User test (when ready)
                                                    ↓
                                          Synthesize → Fix → Deploy
                                                    ↓
                      Ship / Iterate / Kill / Graduate to Scale Pipeline

Graduating to the Scale Pipeline

When the idea validates and it's time to build it properly (pixel-perfect design system, token governance, automated visual QA), nothing gets thrown away. The codebase upgrades directly into the full design-to-code pipeline.

| What carries over | What gets added |
|---|---|
| Codebase (already Next.js + shadcn/ui) | shadcndesign Figma kit (full design system) |
| CLAUDE.md + design rules skill file | Tokens Studio + Style Dictionary (automated token pipeline) |
| User journey learnings | Code Connect (component mapping) |
| Deployed product on Vercel | Storybook + Chromatic (automated visual QA) |

Solves: Fidelity

🚀 Pixel-Perfect at Scale

Once we've validated our solution idea with a quick, good-enough product, it's time to build it properly. But "properly" doesn't mean going back to the old workflow. It means adding the layers that turn a validated prototype into a scalable, maintainable product, without losing the speed that got the team here.

The Problem

AI-generated code works. But two things break down at scale.

1. Design fidelity: The Figma design says one thing, the code renders something else. A button that's 8px radius in Figma ships as 6px in code. A heading that uses the brand font in Figma renders in a fallback font in production. Spacing that's consistent in the design drifts in implementation. Without a quality connection between Figma tokens and code variables, every screen is a manual translation exercise, and manual translation always drifts.

2. Component consistency: A developer prompts Claude Code to build a card component and gets bg-blue-500 p-4 rounded-lg. Another developer prompts the same tool a week later and gets bg-primary p-6 rounded-xl. Same product, inconsistent output. Without shared tokens and component mapping, AI generates visually plausible code that doesn't match the design system.

This pipeline solves both problems by making Figma the single source of truth and automating every step from design token to deployed code. Design tokens flow directly from Figma Variables to CSS variables, so there's no manual translation to drift. Code Connect maps Figma components to their exact code counterparts, so AI generates the real component API instead of generic CSS. And the designer approves every visual change before it ships, catching anything that slipped through.

The Pipeline at a Glance

| Step | Tool | Owner | Purpose |
|---|---|---|---|
| 0 | GitHub | Dev Team | Central hub connecting everything |
| 1 | shadcndesign Figma Kit | Designer | Design system with Variables, components, light/dark modes |
| 2 | Tokens Studio | Designer | Sync Figma Variables to GitHub as structured JSON |
| 3 | Style Dictionary | Dev Team | Convert token JSON into CSS variables + Tailwind config |
| 4 | Next.js + shadcn/ui | Dev Team | Code project consuming the design tokens |
| 5 | Claude Code + Figma MCP | Dev Team | AI reads Figma, generates production code |
| 6 | Code Connect | Together | Links Figma components to code components |
| 7 | Storybook | Dev Team | Component catalog for testing and visual verification |
| 8 | Vercel | Dev Team | Auto-deploys from GitHub on every push |
| 9 | QA | Designer | Designer reviews new screens + Chromatic catches regressions |

Step 0: GitHub (Dev Team)

Everything connects through GitHub. Tokens Studio pushes design tokens here. Style Dictionary reads them. Claude Code commits generated code. Vercel auto-deploys. Code Connect reads component files.

Action: Create a GitHub repository for the project. All team members need access.

Step 1: shadcndesign Figma Kit (Designer)

There are multiple shadcn/ui Figma kits. After evaluating several options, shadcndesign.com is the best fit for this pipeline: every Figma component maps exactly to a shadcn/ui code component, it uses Figma Variables (not legacy color styles) for all tokens, light/dark mode is built in, and Tailwind CSS naming aligns out of the box.

The kit includes two tools:

Figma Kit: The component library with Variables, auto-layout, and variant properties that match code prop names (variant, size, state, disabled, etc.).

Figma Plugin: A code-generation tool. Select a frame or component in Figma, and it outputs clean shadcn/ui React code. Useful for quick one-off exports. Note: it does not manage or sync tokens to a repo (that's Tokens Studio's job).

Actions:

  • Purchase the Pro or Team tier
  • Duplicate the file into the Figma workspace
  • Customize primitive tokens (brand colors, typography, spacing) while keeping naming conventions intact
  • Publish as a Figma Library so all designers can use it across projects

Step 2: Tokens Studio (Designer)

Without this, devs manually copy hex values from Figma. With Tokens Studio, every design decision flows automatically to code. Change a color in Figma, push a PR, and the entire codebase updates.

Figma Plugin ↗ tokens.studio ↗

Tokens Studio reads all Figma Variables (colors, spacing, radius, typography), converts them to W3C Design Token format (structured JSON), pushes them to the GitHub repo as a Pull Request, and supports bi-directional sync.
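For example, a color variable and a radius variable exported by Tokens Studio land in the repo as structured JSON along these lines (the token names and values here are illustrative, not from a real file):

```json
{
  "color": {
    "brand": {
      "primary": { "$type": "color", "$value": "#2563eb" }
    }
  },
  "radius": {
    "md": { "$type": "dimension", "$value": "8px" }
  }
}
```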

Actions:

  • Install the Tokens Studio plugin in Figma
  • Connect to the GitHub repo (requires a Personal Access Token with repo scope)
  • Push initial tokens
  • Configure PR-based push workflow so changes go through code review

Step 3: Style Dictionary (Dev Team)

Tokens Studio pushes JSON to GitHub. But the codebase needs CSS variables and a Tailwind config, not raw JSON. Style Dictionary is the build tool that makes that conversion, handling unit transforms (% line-height to unitless, font weight names to numbers, etc.) automatically.
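To make the conversion concrete, here is a toy sketch of the core idea: walk the nested token JSON and emit one CSS custom property per token. This is illustrative only; the real Style Dictionary adds transforms, theming, and Tailwind output on top. The sample token names and values are hypothetical:

```javascript
// Toy sketch of the token -> CSS conversion (NOT the real Style Dictionary).
// Walks nested W3C-format token JSON and emits one CSS custom property per token.
function tokensToCss(tokens, prefix = "") {
  const lines = [];
  for (const [key, node] of Object.entries(tokens)) {
    const name = prefix ? `${prefix}-${key}` : key;
    if (node && typeof node === "object" && "$value" in node) {
      lines.push(`--${name}: ${node.$value};`); // leaf token
    } else if (node && typeof node === "object") {
      lines.push(...tokensToCss(node, name)); // nested group
    }
  }
  return lines;
}

// Hypothetical sample tokens, shaped like Tokens Studio output
const sample = {
  color: { brand: { primary: { $type: "color", $value: "#2563eb" } } },
  radius: { md: { $type: "dimension", $value: "8px" } },
};

console.log(tokensToCss(sample).join("\n"));
// --color-brand-primary: #2563eb;
// --radius-md: 8px;
```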

Style Dictionary ↗ sd-transforms ↗

Why sd-transforms is required: Tokens from Tokens Studio have a specific format. The sd-transforms package handles the conversion so Style Dictionary can process them correctly.

Actions:

  • Dev team installs both packages: npm install style-dictionary @tokens-studio/sd-transforms --save-dev
  • Configure Style Dictionary to read from the tokens directory and output CSS + Tailwind config
  • Replace default shadcn CSS variables in globals.css with generated output

Step 4: Next.js + Tailwind + shadcn/ui (Dev Team)

The actual codebase. Next.js (React framework, industry standard) + Tailwind CSS (utility-first styling) + shadcn/ui (the component library where the team owns the code).

ui.shadcn.com ↗

Why shadcn/ui: most popular React UI library (104K+ GitHub stars), components live as editable source files in the project (not in node_modules), best AI tooling support of any component library, 0KB runtime overhead, and accessible by default via Radix UI primitives.

Actions:

  • Initialize: npx create-next-app@latest --typescript --tailwind --eslint --app
  • Add shadcn: npx shadcn@latest init
  • Add base components: npx shadcn@latest add button input badge card avatar
  • Wire up the Style Dictionary output to consume design tokens

Step 5: Claude Code + Figma MCP (Dev Team)

This is where AI generates production code. The dev pastes a Figma frame link into Claude Code. Claude reads the design via MCP (including Variables, components, and Code Connect metadata), generates React code using the real shadcn/ui imports, CSS variables, and project structure, and writes it directly into the project files.

What makes this different from other code generation: full codebase context (Claude knows the existing components, utilities, and patterns), a CLAUDE.md file that describes team conventions, skill files from shadcndesign for accurate generation, and iterative refinement ("make this responsive", "add dark mode", "add loading state").

Actions:

  • Install: npm install -g @anthropic-ai/claude-code
  • Add MCP: claude mcp add --transport http figma https://mcp.figma.com/mcp
  • Authenticate via /mcp command in Claude Code
  • Add shadcndesign skill files to .claude/skills/ (these complement the custom design-rules.md from the Rapid Validation workflow. The design rules file contains product-specific UX and visual rules, while the shadcndesign skill files teach Claude how to use shadcn/ui components and tokens correctly. Both live in the same .claude/skills/ folder.)
  • Create a CLAUDE.md describing the project structure

Syncing with GitHub: Claude Code can run git commands directly ("Commit these changes and push to GitHub"). Teams can also use GitHub Desktop if preferred.

Step 6: Code Connect (Together)

This is the highest-leverage step in the entire pipeline. Without Code Connect, Claude Code sees "a blue button with 16px padding" and generates generic HTML/CSS. With Code Connect, Claude Code sees <Button variant="primary" size="md"> and uses the actual component API.

Every step before this exists to make Code Connect work at its best.

Code Connect Docs ↗

Two setup methods: Code Connect UI (recommended, visual interface in Figma, designer-friendly) and Code Connect CLI (config files in the codebase, more flexible, better for advanced cases).

Important note on timing: Code Connect requires that coded components already exist in the repo. Step 4 must be completed first. The recommended sequence is: build base components in code (Step 4), set up Code Connect, then use Claude Code for all subsequent designs with enhanced output.

Actions:

  • Open the published shadcndesign library in Figma
  • In the right panel, click 'Code Connect' > 'Set up Code Connect'
  • Authorize with the GitHub repo
  • Map each component to its code file (e.g., Figma Button > src/components/ui/button.tsx)
  • Map variant properties: variant, size, disabled, etc.
  • Verify in Dev Mode: select a component instance and confirm it shows the actual code
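When using the CLI method instead of the UI, the mapping lives in a TypeScript "figma" file per component. A hedged sketch, where the Figma URL placeholder and the variant names are stand-ins for your own library:

```typescript
// button.figma.tsx -- sketch of a Code Connect mapping (CLI method)
import React from "react";
import figma from "@figma/code-connect";
import { Button } from "@/components/ui/button";

figma.connect(
  Button,
  "https://www.figma.com/design/<file-key>/?node-id=<button-node-id>", // placeholder
  {
    props: {
      // Figma variant property -> code prop value
      variant: figma.enum("Variant", {
        Primary: "default",
        Destructive: "destructive",
        Outline: "outline",
      }),
      size: figma.enum("Size", { Small: "sm", Medium: "default", Large: "lg" }),
      label: figma.string("Label"),
    },
    // What Claude Code (and Dev Mode) will see instead of generic CSS
    example: ({ variant, size, label }) => (
      <Button variant={variant} size={size}>{label}</Button>
    ),
  }
);
```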

Step 7: Storybook (Dev Team)

A development environment where coded components are rendered in isolation with all their variants and states. Designers use it to visually verify that implementations match the Figma design. It serves as the living documentation of what's been built vs. what's designed.

storybook.js.org ↗

This is not optional. Storybook lets designers browse every component, variant, and state without touching the codebase, and Code Connect is far more useful when stories exist for each mapped component.

Actions:

  • Install: npx storybook@latest init
  • Configure dark mode toggle via @storybook/addon-themes
  • Write stories for all base components, covering every variant and state
  • Deploy to a shared URL (Chromatic, Vercel, or GitHub Pages)
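As an example, a story file for the Button component might look like this (CSF 3 syntax; the variant names assume the stock shadcn/ui Button API):

```typescript
// src/components/ui/button.stories.tsx -- sketch
import type { Meta, StoryObj } from "@storybook/react";
import { Button } from "./button";

const meta: Meta<typeof Button> = {
  title: "UI/Button",
  component: Button,
  args: { children: "Save changes" },
};
export default meta;

type Story = StoryObj<typeof Button>;

// One story per variant/state so each gets its own visual baseline
export const Default: Story = {};
export const Destructive: Story = { args: { variant: "destructive" } };
export const Outline: Story = { args: { variant: "outline" } };
export const Disabled: Story = { args: { disabled: true } };
```

Writing one story per variant is what later gives Chromatic a separate snapshot baseline for every state.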

Step 8: Vercel (Dev Team)

Connect the GitHub repo, and Vercel auto-deploys the site every time code is pushed. Free tier, custom domain support, global CDN, free HTTPS. Recommended by shadcn/ui for Next.js projects. Works for landing pages and complex SaaS alike.

vercel.com ↗

If the team already has a domain registered elsewhere (e.g., GoDaddy), keep the domain registration as is and point the DNS records to Vercel. Takes 5 minutes.

Actions:

  • Sign up at vercel.com (using a GitHub account)
  • Connect the GitHub repo
  • Add the custom domain
  • Every push to GitHub now auto-deploys

Step 9: Quality Assurance (Designer)

Every new screen and every significant feature is reviewed by the designer before it ships. Every issue is a Screen + Prompt that the dev can execute directly in Claude Code:

Screen: /settings/profile
Prompt: "The Save button has no loading state. Add a spinner
and disable the button while the request is processing."

The dev executes the prompts, pushes, Vercel auto-deploys, and the designer verifies again.

Optional: Designer pushes directly

The same optional setup described in the Rapid Validation workflow applies here: the designer runs Claude Code locally with the project repo, pushes simple UI/UX surface changes (reorder elements, adjust hierarchy, add loading states, change copy) to a design-fixes branch, and the dev reviews and merges with a quick check. Logic, state, APIs, and structural changes always go through the dev.

Chromatic: the automated safety net

On top of the designer's manual review, Chromatic (by the Storybook team) provides an automated layer that catches visual regressions on existing components. Every time code is pushed, it screenshots every Storybook story and diffs them against the previous approved baseline. If anything changed visually, even by 1 pixel, the designer gets notified.

chromatic.com ↗

This is particularly valuable for catching unintended side effects: a developer changes a token or refactors a component, accidentally breaking something elsewhere. Chromatic catches what no one was looking at.

How it works:

  1. Dev pushes code to GitHub
  2. Chromatic runs automatically and screenshots every Storybook story
  3. If visual changes are detected, the designer receives a notification
  4. The designer opens the Chromatic web UI and sees a side-by-side diff
  5. For each change: accept or deny with a fix prompt

Example denial:

Screen: Button component (all variants)
Prompt: "The border-radius changed from 8px to 6px across
all button variants. Revert to radius-md (8px) to match
the Figma source."

Important: Chromatic only covers components with Storybook stories and compares against a previous baseline. New screens have no baseline, which is why the designer's manual review is the primary QA layer.

No visual change ships to production without explicit designer approval.

Free tier: 5,000 snapshots/month (enough for a small team).

Chromatic setup (Dev Team):

  • Install: npm install --save-dev chromatic
  • Connect to GitHub repo and Storybook
  • Add Chromatic to CI/CD pipeline (runs on every PR)
  • Configure the designer as a reviewer in the dashboard
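In a GitHub Actions setup, the CI step can be sketched like this. It assumes the official chromaui/action and a CHROMATIC_PROJECT_TOKEN repository secret; adapt the workflow to your CI:

```yaml
# .github/workflows/chromatic.yml -- sketch
name: Chromatic
on: push
jobs:
  chromatic:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0 # Chromatic needs full git history to find baselines
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - uses: chromaui/action@latest
        with:
          projectToken: ${{ secrets.CHROMATIC_PROJECT_TOKEN }}
```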

The Daily Workflow

Once setup is complete, this is the cycle for every feature.

| Step | Who | What happens |
|---|---|---|
| 1 | Designer | Designs screens in Figma using the shadcndesign library |
| 2 | Designer | If tokens changed: pushes a PR via Tokens Studio |
| 3 | Dev | Reviews and merges the token PR; Style Dictionary rebuilds CSS variables |
| 4 | Dev | Opens Claude Code, pastes a Figma frame link, prompts "Build this design" |
| 5 | Dev | Claude reads Figma via MCP, generates code using real components + tokens |
| 6 | Dev | Reviews, adjusts, commits, and pushes to GitHub |
| 7 | Designer | Reviews new screens (manual walkthrough + Screen/Prompt feedback); Chromatic auto-flags regressions on existing components |
| 8 | Vercel | Auto-deploys once Chromatic approval + code review pass |

When adding a new component

  1. The designer creates the component in Figma using library variables
  2. Dev adds the shadcn/ui component (or creates a custom one)
  3. Dev writes a Storybook story covering all variants
  4. Together: add Code Connect mapping
  5. Test: verify Claude Code generates correct output for the new component

When changing tokens (rebrand, theme update)

  1. Designer updates primitive values in Figma Variables
  2. Designer pushes a PR via Tokens Studio
  3. Dev merges, Style Dictionary rebuilds
  4. All components automatically reflect the new values (no code changes needed)

Key Resources

Every tool in both workflows, linked.

  • shadcndesign Kit: shadcndesign.com
  • shadcn/ui Docs: ui.shadcn.com
  • Tokens Studio: tokens.studio
  • Style Dictionary: github.com
  • sd-transforms: github.com
  • Code Connect: developers.figma.com
  • Figma MCP Server: mcp.figma.com
  • Claude Code: claude.ai/code
  • Storybook: storybook.js.org
  • Vercel: vercel.com
  • Chromatic: chromatic.com
  • Anthropic Skills (ref): github.com
  • Vercel Skills (ref): github.com
  • Figma SDS (ref): github.com

About Me

Hey!

I'm Diego Perez, a UX/UI product designer in sunny Valencia (Spain), and I've spent the last 17 years turning ideas into funded, acquisition-ready products.

7 years ago I founded the boutique agency Unlocal, where I direct a small team that only works on two projects at a time. We've helped founders raise $60M and drive $44M in exits, designed the internal SaaS of a $25B Fortune 500 company, and worked on products with 12M+ users.

Off the clock, you'll catch me perfecting an aquascape, hunting rare Belgian beers at local markets, dreaming about off-grid, self-sufficient living, playing absurdly complex video games, or losing board games to my five-year-old daughter.