
How I Generate 50+ shadcn Components Faster with AI

A production workflow using v0 and Cursor to generate, refine, and quality-gate shadcn/ui components at scale — with real before/after examples.
🔥 Advanced — solid JavaScript foundation required
In this tutorial, you'll learn
  • Total workflow time is 15-17 minutes per component — 2 minutes to generate, 8-10 to refine, 5 to quality-gate — not 3 minutes
  • Run the five-check candidacy evaluation before every generation session — two or more failing checks means build manually
  • Write all specs before generating any components — the spec phase catches sprawl and duplicates before any code is written
⚡ Quick Answer
  • The workflow combines v0 (generative scaffolding) and Cursor (contextual refinement) to accelerate shadcn/ui component creation
  • v0 produces styled React component shells from structured text prompts — generation takes 2 minutes per component
  • Cursor adapts that output to your project's design tokens, hooks, and patterns — refinement takes 8-10 minutes
  • Total workflow time: 10-12 minutes of generation and refinement per component, or 15-17 minutes including the 5-minute quality gate, versus 25-35 minutes for manual creation
  • The specification document is the system's foundation — without it, components drift from each other at scale
  • Biggest mistake: treating generated code as final — the quality gate review is where the real engineering happens
🚨 START HERE
AI Component Quick Debug Cheat Sheet
Fast diagnostics for the most common AI-generated component issues.
🟡 Hardcoded colors in generated component
Immediate Action: Search for raw Tailwind color classes across generated files
Commands
grep -rn 'bg-white\|bg-gray\|text-gray\|text-black\|border-gray' src/components/
grep -rn 'bg-blue\|bg-red\|bg-green\|bg-yellow' src/components/
Fix Now: Replace every match with its semantic equivalent: bg-white → bg-background, text-gray-900 → text-foreground, border-gray-200 → border-border
🟡 Bundle size increased unexpectedly after adding generated component
Immediate Action: Identify which imports the generated component added
Commands
grep -rn '^import' src/components/NewComponent.tsx | sort
ANALYZE=true npm run build
Fix Now: Remove any import that duplicates a library your project already uses. Replace lodash utility imports with your existing utility functions. Configure @next/bundle-analyzer in next.config.js if not already set up.
🟡 Accessibility audit failures on generated interactive components
Immediate Action: Run an automated accessibility check against the running component
Commands
npx axe-cli http://localhost:3000 --tags wcag2a,wcag2aa
grep -rn 'onClick' src/components/NewComponent.tsx | grep -v 'onKeyDown\|role='
Fix Now: Use Cursor Cmd+K: 'Add ARIA roles, aria-label attributes, and onKeyDown keyboard handlers to every interactive element in this component.'
🟡 Dark mode breaks component that looks correct in light mode
Immediate Action: Run the hardcoded color grep and compare the matches with your semantic token list
Commands
grep -rn 'bg-white\|bg-gray-\|text-gray-\|border-gray-' src/components/NewComponent.tsx
grep -A 40 'colors' tailwind.config.ts
Fix Now: For every hardcoded color found, find its semantic equivalent in tailwind.config.ts and replace it. Test by toggling dark mode in the browser after each replacement.
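The bundle-size check above assumes @next/bundle-analyzer is already wired up. If it is not, the standard configuration follows the package's documented wrapper pattern (any existing Next.js options in your project stay inside the wrapped object):

```javascript
// next.config.js — enables the analyzer only when ANALYZE=true is set
const withBundleAnalyzer = require('@next/bundle-analyzer')({
  enabled: process.env.ANALYZE === 'true',
});

module.exports = withBundleAnalyzer({
  // ...your existing Next.js options stay here
  reactStrictMode: true,
});
```

Then `ANALYZE=true npm run build` opens the interactive bundle report.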
Production Incident: The Design Token Drift That Broke Dark Mode for 18 Components
A team shipped 30 AI-generated components in one sprint. Two weeks later, dark mode broke across 18 of them simultaneously.
Symptom: The dark mode toggle caused half the UI to render white text on white backgrounds. Users reported unreadable screens within minutes of the release going live. The support queue hit 200 tickets in 4 hours.
Assumption: The team assumed the AI-generated components used semantic Tailwind classes like bg-primary and text-foreground throughout. They expected the components to adapt to theme changes automatically, the same way manually written components did.
Root cause: v0 output contained 14 instances of hardcoded color values — bg-white, text-gray-900, border-gray-200 — mixed with semantic tokens. The components looked correct in light mode during development because hardcoded white and gray values match the light theme visually. The code review missed the hardcoded values because they blended in visually with the correct semantic tokens around them. Dark mode exposed the mismatch immediately.
Fix: Added a custom ESLint rule that flags any Tailwind class referencing a raw color value in the components directory. Ran a one-time sweep across all generated components. Manually replaced all hardcoded colors with the correct semantic tokens from tailwind.config.ts. Added dark mode screenshot comparison to the quality gate checklist — both light and dark mode must pass before any component merges.
Key Lesson
  • Never trust AI output to use your design tokens correctly — v0 defaults to generic Tailwind classes because it does not know your configuration
  • Hardcoded colors pass visual review in light mode — the failure only surfaces in dark mode or with custom themes
  • Automate design system compliance checks with ESLint before any generated component reaches code review
  • Test every generated component in both light and dark mode as a mandatory quality gate step, not an afterthought
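The ESLint rule from the Fix step can be sketched as follows. This is a minimal illustration rather than the team's actual rule: the color regex and rule shape are assumptions, and the visitor only inspects string-literal className values.

```typescript
// Matches common raw Tailwind color classes (assumed palette — extend as needed)
const HARDCODED_COLOR =
  /\b(?:bg|text|border)-(?:white|black|gray|slate|blue|red|green|yellow)(?:-\d{2,3})?\b/;

function hasHardcodedColor(classNames: string): boolean {
  return HARDCODED_COLOR.test(classNames);
}

// Minimal ESLint rule sketch built on that check
const noHardcodedColors = {
  meta: {
    type: 'problem',
    messages: { hardcoded: 'Use a semantic token instead of "{{cls}}"' },
  },
  create(context: { report: (descriptor: object) => void }) {
    return {
      // Fires on every JSX attribute; only string-literal className values are checked
      JSXAttribute(node: { name?: { name?: string }; value?: { value?: unknown } }) {
        if (node.name?.name !== 'className') return;
        const cls = node.value?.value;
        if (typeof cls === 'string' && hasHardcodedColor(cls)) {
          context.report({ node, messageId: 'hardcoded', data: { cls } });
        }
      },
    };
  },
};
```

Dynamic class construction (template literals, cn() calls) slips past a literal-only check, which is why the grep sweep and dark-mode screenshot gate stay in place.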
Production Debug Guide: Common symptoms when integrating AI-generated shadcn/ui components
  • Component renders correctly in light mode but breaks in dark mode → Search for hardcoded color classes: grep -rn 'bg-white\|text-gray\|bg-gray\|border-gray' src/components/ — every match is a design token that was not used. Replace each with the correct semantic token from your tailwind.config.ts.
  • TypeScript errors on props immediately after pasting v0 output → Check import paths first — v0 uses generic @/components/ui/* paths that may not match your project's barrel export structure. Then check that the prop types v0 generated match your actual data model types. v0 invents prop types that look correct but do not match existing interfaces.
  • Component works in isolation but breaks when nested inside a form or dialog → Inspect for duplicate Radix UI context providers. v0 often wraps components in Provider layers that are already provided by a parent component. Remove the duplicate Provider — the component should consume context from its nearest ancestor, not redeclare it.
  • Generated table component causes visible render delay with 100+ rows → Profile with React DevTools Profiler — AI-generated tables render every row on every state change. Wrap row components in React.memo, use useCallback for row handlers, and virtualize the list with @tanstack/react-virtual for datasets over 50 rows.
  • Accessibility audit fails on generated interactive component → v0 generates visually correct interactive elements but commonly omits ARIA roles, aria-label attributes, and keyboard event handlers. Run npx axe-cli http://localhost:3000/component-path to get a specific violation list. Use Cursor Cmd+K: 'Add appropriate ARIA roles, aria-label, and onKeyDown handlers to all interactive elements.'
  • Component bundle size increased by more than expected after integration → v0 sometimes imports heavy utility libraries (date-fns, lodash) for tasks that your project already handles. Run ANALYZE=true npm run build with @next/bundle-analyzer configured to identify the source. Replace the imports with your project's existing utilities.
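For the table-rendering item, virtualization is the main fix. The core windowing math that libraries like @tanstack/react-virtual implement can be sketched as follows (a simplified illustration assuming fixed row heights):

```typescript
// Given a scroll offset, return the index range of rows worth rendering.
// Everything outside [start, end) stays unmounted.
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  totalRows: number,
  overscan = 5, // extra rows above/below to avoid blank flashes while scrolling
): { start: number; end: number } {
  const start = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const end = Math.min(
    totalRows,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan,
  );
  return { start, end };
}
```

With 1,000 rows, a 600px viewport, and 40px rows, only about 25 rows render at any scroll position instead of all 1,000, which is why the render delay disappears.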

Manual component creation is a scaling bottleneck. Each component requires boilerplate, variant logic, accessibility markup, and design token integration. For a team building a design system from scratch, this process does not scale beyond a handful of components per sprint.

AI tools automate the scaffolding phase. By combining v0's generative output with Cursor's contextual editing, you create a repeatable pipeline that produces consistent components in a fraction of the manual time. The developer shifts from writing boilerplate to curating and refining AI output — a more leveraged use of engineering judgment.

This guide covers the complete workflow with real before/after examples, a reusable specification system, prompt templates across five component types, and quality gates that prevent AI-generated drift from reaching production. It also covers when this workflow should not be used — not every component is a good candidate for AI generation.

When to Use This Workflow — and When Not To

Not every component is a good candidate for AI generation. Understanding the boundaries of this workflow prevents wasted effort and poor-quality output.

Good candidates: presentational components that display data passed via props (cards, badges, stat displays, alert banners), layout components with predictable structure (page headers, sidebars, navigation bars), form field wrappers around shadcn/ui Input and Select primitives, and data display components like tables and lists that follow a consistent pattern.

Poor candidates: components with complex custom animations or gesture handling, components that require deep accessibility work (custom date pickers, drag-and-drop file uploads, rich text editors), components that embed domain-specific business logic, and any component where the design is so custom that the AI has no useful prior patterns to draw from.

The rule: if a senior developer would describe the component as 'standard,' it is a good AI generation candidate. If they would say 'we need to design this carefully,' it is not.

src/config/component-candidates.ts · TYPESCRIPT
// component-candidates.ts
// Use this checklist before deciding whether to AI-generate a component.
// Answer each question — if two or more answers are NO, build manually.

interface GenerationCandidate {
  name: string;
  // YES = good candidate, NO = build manually
  checks: {
    isPresentational: boolean;        // Accepts data via props, does not own state?
    hasStandardPattern: boolean;      // Does a similar component exist in shadcn/ui docs or common design systems?
    isAccessibilityStandard: boolean; // Does WCAG handling follow a well-known pattern (not custom)?
    hasNoBusinessLogic: boolean;      // Is logic handled by a parent hook or container?
    isDecomposable: boolean;          // Can it be broken into 2-3 shadcn/ui primitives?
  };
}

// GOOD CANDIDATES — all or most checks pass
const goodCandidates: GenerationCandidate[] = [
  {
    name: 'MetricCard',
    checks: {
      isPresentational: true,
      hasStandardPattern: true,
      isAccessibilityStandard: true,
      hasNoBusinessLogic: true,
      isDecomposable: true,
    },
  },
  {
    name: 'StatusBadge',
    checks: {
      isPresentational: true,
      hasStandardPattern: true,
      isAccessibilityStandard: true,
      hasNoBusinessLogic: true,
      isDecomposable: true,
    },
  },
  {
    name: 'DataTable',
    checks: {
      isPresentational: true,
      hasStandardPattern: true,
      isAccessibilityStandard: true,
      hasNoBusinessLogic: true,  // sorting/filtering logic in useDataTable hook
      isDecomposable: true,
    },
  },
];

// POOR CANDIDATES — multiple checks fail
const poorCandidates: GenerationCandidate[] = [
  {
    name: 'DragDropFileUpload',
    checks: {
      isPresentational: false,      // Manages drag state internally
      hasStandardPattern: false,    // Interaction model varies significantly
      isAccessibilityStandard: false, // Drag-and-drop a11y is complex and non-standard
      hasNoBusinessLogic: false,    // File validation logic belongs in component
      isDecomposable: false,        // No shadcn/ui primitives cover drag-and-drop
    },
  },
  {
    name: 'RichTextEditor',
    checks: {
      isPresentational: false,
      hasStandardPattern: false,
      isAccessibilityStandard: false,
      hasNoBusinessLogic: false,
      isDecomposable: false,
    },
  },
];
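The two-or-more-failing rule can be automated with a small helper. This is a hypothetical addition, not part of the original checklist file:

```typescript
type CandidateChecks = {
  isPresentational: boolean;
  hasStandardPattern: boolean;
  isAccessibilityStandard: boolean;
  hasNoBusinessLogic: boolean;
  isDecomposable: boolean;
};

// Returns true when the component is a good AI-generation candidate:
// fewer than two checks failing. Two or more NOs means build manually.
function shouldGenerate(checks: CandidateChecks): boolean {
  const failing = Object.values(checks).filter((passed) => !passed).length;
  return failing < 2;
}
```

For MetricCard every check passes, so the helper returns true; for DragDropFileUpload every check fails and it returns false.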
Mental Model
The Standard vs. Custom Test
If a senior developer says 'this is a standard card component,' generate it. If they say 'we need to think about this carefully,' build it manually.
  • Standard components: cards, badges, alerts, form wrappers, stat displays, navigation items — high AI success rate
  • Custom components: drag-and-drop, rich text, complex animations, custom date pickers — AI output requires more work than building manually
  • The five-check candidate evaluation takes 2 minutes and saves hours of rework on poor-fit components
  • When in doubt: generate a prototype with v0 to see if the output is close enough to refine
📊 Production Insight
A team attempted to AI-generate a custom multi-step date range picker.
v0 produced four different implementations across four prompts — none matched the design requirements.
Manual build took 4 hours. The failed AI attempts wasted 3 hours before the team abandoned the approach.
Rule: run the candidate check before opening v0. It takes 2 minutes and prevents wasted sessions.
🎯 Key Takeaway
AI generation works best for standard, presentational components with clear shadcn/ui primitive coverage.
Run the five-check candidate evaluation before starting β€” two or more failing checks means build manually.
The workflow's time savings come from good candidate selection, not from applying it to everything.

The Specification System: Foundation Before Generation

Generating one component is a trick. Generating fifty consistently requires a system. The foundation is the Component Specification Document — a structured definition of every component written before any generation begins.

The spec serves three purposes: it produces a consistent v0 prompt, it documents the component's intent for future maintainers, and it creates a review artifact that can be approved before any code is written.

For each component, define: name, core function (one sentence maximum), key props (three to five), variant options with defaults, required states, and the shadcn/ui primitives it should use. This last field is critical — telling v0 which primitives to use dramatically reduces the amount of Cursor refactoring needed.

src/config/component-specs.ts · TYPESCRIPT
// component-specs.ts
// Component Specification Document — the blueprint for AI generation.
// Write specs first. Generate second. Review the spec before the code.

interface PropSpec {
  name: string;
  type: string;
  required: boolean;
  description: string;
}

interface VariantSpec {
  name: string;
  options: string[];
  default: string;
}

interface ComponentSpec {
  name: string;              // PascalCase component name
  description: string;       // One sentence — core function only
  props: PropSpec[];         // 3-5 key configuration props
  variants: VariantSpec[];   // size, status, density options
  states: string[];          // loading | error | empty | disabled
  primitives: string[];      // shadcn/ui primitives to use
  doNotGenerate?: string;    // Business logic that stays in a hook
}

// ---------------------------------------------------------------
// Example specs across five component categories
// ---------------------------------------------------------------

// CATEGORY 1: Presentational display
export const metricCardSpec: ComponentSpec = {
  name: 'MetricCard',
  description: 'Displays a single KPI with label, value, delta, and trend direction.',
  props: [
    { name: 'label', type: 'string', required: true, description: 'Metric name displayed above the value' },
    { name: 'value', type: 'string | number', required: true, description: 'Primary metric value' },
    { name: 'delta', type: 'number', required: false, description: 'Change from previous period as a percentage' },
    { name: 'trend', type: "'up' | 'down' | 'neutral'", required: false, description: 'Trend direction for icon and color' },
  ],
  variants: [
    { name: 'size', options: ['sm', 'md', 'lg'], default: 'md' },
  ],
  states: ['loading'],
  primitives: ['Card', 'CardHeader', 'CardContent', 'Skeleton'],
  doNotGenerate: 'Data fetching and delta calculation belong in a useMetrics hook',
};

// CATEGORY 2: Status / feedback
export const statusBadgeSpec: ComponentSpec = {
  name: 'StatusBadge',
  description: 'Inline badge displaying an entity status with icon and semantic color.',
  props: [
    { name: 'status', type: "'active' | 'inactive' | 'pending' | 'error'", required: true, description: 'Current status value' },
    { name: 'label', type: 'string', required: false, description: 'Override the default status label' },
    { name: 'showIcon', type: 'boolean', required: false, description: 'Show status icon alongside label' },
  ],
  variants: [
    { name: 'size', options: ['sm', 'md'], default: 'md' },
  ],
  states: [],
  primitives: ['Badge'],
  doNotGenerate: 'Status-to-color mapping should reference design tokens, not hardcoded colors',
};

// CATEGORY 3: Data display
export const dataTableSpec: ComponentSpec = {
  name: 'DataTable',
  description: 'Sortable, filterable table with pagination, row selection, and column visibility.',
  props: [
    { name: 'data', type: 'T[]', required: true, description: 'Array of row data objects' },
    { name: 'columns', type: 'ColumnDef<T>[]', required: true, description: 'TanStack column definitions' },
    { name: 'onRowSelect', type: '(rows: T[]) => void', required: false, description: 'Callback when row selection changes' },
    { name: 'pageSize', type: 'number', required: false, description: 'Rows per page, defaults to 20' },
  ],
  variants: [
    { name: 'density', options: ['compact', 'default', 'comfortable'], default: 'default' },
  ],
  states: ['loading', 'error', 'empty'],
  primitives: ['Table', 'TableHeader', 'TableBody', 'TableRow', 'TableCell', 'Button', 'Input', 'Skeleton'],
  doNotGenerate: 'Sorting, filtering, and pagination logic belongs in useDataTable hook',
};

// CATEGORY 4: Form input wrapper
export const tagInputSpec: ComponentSpec = {
  name: 'TagInput',
  description: 'Text input that converts entries to removable tag pills on Enter or comma.',
  props: [
    { name: 'value', type: 'string[]', required: true, description: 'Current array of tag strings' },
    { name: 'onChange', type: '(tags: string[]) => void', required: true, description: 'Called when tags array changes' },
    { name: 'placeholder', type: 'string', required: false, description: 'Input placeholder text' },
    { name: 'maxTags', type: 'number', required: false, description: 'Maximum number of tags allowed' },
  ],
  variants: [
    { name: 'size', options: ['sm', 'md'], default: 'md' },
  ],
  states: ['disabled', 'error'],
  primitives: ['Input', 'Badge', 'Button'],
  doNotGenerate: 'Validation logic belongs in the parent form handler',
};

// CATEGORY 5: Navigation
export const sidebarNavSpec: ComponentSpec = {
  name: 'SidebarNav',
  description: 'Vertical navigation list with active state, icons, and collapsible groups.',
  props: [
    { name: 'items', type: 'NavItem[]', required: true, description: 'Navigation items with label, href, and optional icon' },
    { name: 'activeHref', type: 'string', required: true, description: 'Current route href for active state highlighting' },
    { name: 'collapsed', type: 'boolean', required: false, description: 'Whether the sidebar is in icon-only collapsed mode' },
  ],
  variants: [
    { name: 'size', options: ['sm', 'md'], default: 'md' },
  ],
  states: ['loading'],
  primitives: ['Button', 'Tooltip', 'Collapsible', 'CollapsibleTrigger', 'CollapsibleContent'],
  doNotGenerate: 'Route matching and collapse state belong in a layout-level hook',
};
Mental Model
Spec First, Prompt Second
The spec is reviewed once and takes 5 minutes to write. It saves 20 minutes of prompt iteration and produces a consistent result across every developer on the team.
  • Write all 50 specs before generating any components — this surfaces duplicates and sprawl before code is written
  • The doNotGenerate field is as important as the props list — it prevents AI from embedding logic that belongs in a hook
  • Specs double as living documentation — future maintainers understand the component's intent without reading the code
  • One spec reviewer can approve 10 specs in the time it takes to review one generated component
📊 Production Insight
A team started generating without specs and hit 80 components before realizing 22 were variations of the same base card.
Deleting and consolidating the duplicates took longer than generating them had.
The team that wrote specs first caught 14 duplicates before a single prompt was written.
Rule: write all specs first. Run a duplicate check. Then open v0.
🎯 Key Takeaway
The specification document is the system's foundation — without it, consistency at scale is impossible.
Write all specs before generating any components — duplicates and sprawl are caught in the spec phase, not the code phase.
The doNotGenerate field prevents AI from embedding logic that belongs in hooks.
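The duplicate check mentioned above can be approximated mechanically: specs that declare the same set of shadcn/ui primitives are often variations of the same base component. A sketch of that heuristic (a hypothetical helper; a human still confirms each flagged group):

```typescript
interface SpecLike {
  name: string;
  primitives: string[]; // shadcn/ui primitives the spec declares
}

// Groups spec names by their sorted primitive signature and returns
// every group with more than one member as a potential duplicate set.
function findPotentialDuplicates(specs: SpecLike[]): string[][] {
  const groups = new Map<string, string[]>();
  for (const spec of specs) {
    const signature = [...spec.primitives].sort().join('+');
    groups.set(signature, [...(groups.get(signature) ?? []), spec.name]);
  }
  return [...groups.values()].filter((names) => names.length > 1);
}
```

Run it over all specs before opening v0; every flagged group is a candidate for consolidation into one base component with variants.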

Phase 1: Generation with v0 — Prompts That Produce Usable Output

v0 translates structured component descriptions into functional React components. The quality of output depends entirely on prompt specificity. A vague prompt produces a vague component that requires extensive rework. A structured prompt derived from a spec produces output that needs only targeted refinement.

The prompt template below translates directly from a ComponentSpec. Every field in the spec maps to a section of the prompt. This consistency means any developer on the team can generate the same quality of output from the same spec.

Critical prompt elements that v0 responds to well: explicit shadcn/ui primitive names, the semantic token instruction ('use semantic Tailwind tokens only — no hardcoded colors'), state requirements ('include loading, error, and empty states'), and the separation instruction ('this is a presentational component — accept all data and callbacks via props, no internal data fetching').

prompts/v0-prompt-template.txt · TEXT
Create a shadcn/ui {COMPONENT_NAME} component.

Core function: {ONE_SENTENCE_DESCRIPTION}

Props:
- {PROP_1_NAME}: {PROP_1_TYPE} — {PROP_1_DESCRIPTION}
- {PROP_2_NAME}: {PROP_2_TYPE} — {PROP_2_DESCRIPTION}
- {PROP_3_NAME}: {PROP_3_TYPE} — {PROP_3_DESCRIPTION}

Variants:
- {VARIANT_NAME}: {OPTION_1} | {OPTION_2} | {OPTION_3} (default: {DEFAULT})

States: Include {STATE_1}, {STATE_2}, and {STATE_3} states.

Use these shadcn/ui primitives: {PRIMITIVE_1}, {PRIMITIVE_2}, {PRIMITIVE_3}

Style rules:
- Use semantic Tailwind tokens only: bg-background, bg-card, text-foreground,
  text-muted-foreground, border-border, bg-primary, text-primary-foreground
- No hardcoded color classes (no bg-white, bg-gray-*, text-gray-*, border-gray-*)
- Use cn() from @/lib/utils for conditional class merging

Architecture:
- This is a presentational component only
- Accept all data and callbacks via props — no internal data fetching or API calls
- Include TypeScript types for all props
- Export the component as a named export
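Because every template field maps one-to-one onto a ComponentSpec field, the translation can be done by a small function rather than by hand. This is a hypothetical helper (specToPrompt is not part of the original files) covering the per-component sections of the template:

```typescript
// Minimal spec shape mirroring component-specs.ts
interface PropSpec { name: string; type: string; required: boolean; description: string; }
interface VariantSpec { name: string; options: string[]; default: string; }
interface ComponentSpec {
  name: string;
  description: string;
  props: PropSpec[];
  variants: VariantSpec[];
  states: string[];
  primitives: string[];
}

// Renders the component-specific top half of the v0 prompt template.
function specToPrompt(spec: ComponentSpec): string {
  return [
    `Create a shadcn/ui ${spec.name} component.`,
    '',
    `Core function: ${spec.description}`,
    '',
    'Props:',
    ...spec.props.map((p) => `- ${p.name}: ${p.type} — ${p.description}`),
    '',
    'Variants:',
    ...spec.variants.map((v) => `- ${v.name}: ${v.options.join(' | ')} (default: ${v.default})`),
    '',
    spec.states.length > 0
      ? `States: Include ${spec.states.join(', ')} states.`
      : 'States: none required.',
    '',
    `Use these shadcn/ui primitives: ${spec.primitives.join(', ')}`,
  ].join('\n');
}
```

The static style and architecture rules from the template can be appended verbatim, since they never change per component.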
💡 The Five Lines That Prevent 80% of Rework
  • List exact shadcn/ui primitive names — v0 uses them correctly when named explicitly
  • Include 'semantic Tailwind tokens only' with examples — this alone prevents the dark mode drift incident
  • Include 'no hardcoded color classes' with examples of what not to use
  • Include 'presentational component only' — this prevents business logic from being embedded
  • Include 'named export' — v0 sometimes generates default exports that conflict with barrel export patterns
📊 Production Insight
Side-by-side prompt comparison for MetricCard:
Vague prompt: 'Create a metric card component with a number and label.'
Result: generic div with hardcoded gray colors, no variants, no states, no TypeScript.
Rework required: 45 minutes.
Structured prompt: full template above with MetricCard spec values filled in.
Result: complete component with all variants, loading state with Skeleton, semantic tokens throughout.
Rework required: 8 minutes of Cursor refinement.
🎯 Key Takeaway
The structured prompt template translates directly from the ComponentSpec — no creative writing required.
Five style rules in the prompt prevent 80% of the rework that generic prompts create.
Document your filled-in prompts alongside the spec β€” they become reusable for similar components.

Before and After: What v0 Produces vs. What Ships

The most important section of this workflow is the one most articles skip: showing exactly what v0 output looks like before refinement, and what the same component looks like after the Cursor phase.

The before/after comparison below uses MetricCard. The v0 output is functional and visually correct in light mode. It fails in dark mode, uses broad TypeScript types, imports from direct shadcn paths instead of barrel exports, and embeds no real structure for the loading state. The after version fixes all of these in targeted Cursor steps.

src/components/MetricCard.before.tsx · TYPESCRIPT
// BEFORE: Raw v0 output — functional but not production-ready
// Issues marked with // ❌

import { Card, CardContent, CardHeader } from '@/components/ui/card'; // ❌ direct path, not barrel
import { Skeleton } from '@/components/ui/skeleton'; // ❌ direct path
import { ArrowUpIcon, ArrowDownIcon } from 'lucide-react';

// ❌ Broad prop types — trend should be a union type, not string
// ❌ delta typed as number but should handle undefined explicitly
interface MetricCardProps {
  label: string;
  value: string | number;
  delta?: number;
  trend?: string; // ❌ should be 'up' | 'down' | 'neutral'
  size?: string;  // ❌ should be 'sm' | 'md' | 'lg'
  loading?: boolean;
}

export default function MetricCard({ // ❌ default export — conflicts with barrel pattern
  label,
  value,
  delta,
  trend,
  size = 'md',
  loading = false,
}: MetricCardProps) {
  if (loading) {
    return (
      <div className="p-4 bg-white rounded-lg border border-gray-200"> {/* ❌ hardcoded colors */}
        <Skeleton className="h-4 w-24 mb-2" />
        <Skeleton className="h-8 w-16" />
      </div>
    );
  }

  const trendColor =
    trend === 'up' ? 'text-green-600' : // ❌ hardcoded color
    trend === 'down' ? 'text-red-600' :  // ❌ hardcoded color
    'text-gray-500';                      // ❌ hardcoded color

  const sizeClasses = {
    sm: 'p-3',
    md: 'p-4',
    lg: 'p-6',
  };

  return (
    <Card className="bg-white border-gray-200"> {/* ❌ hardcoded colors */}
      <CardHeader className="pb-2">
        <p className="text-sm text-gray-500">{label}</p> {/* ❌ hardcoded color */}
      </CardHeader>
      <CardContent>
        <p className="text-2xl font-bold text-gray-900">{value}</p> {/* ❌ hardcoded color */}
        {delta !== undefined && (
          <div className={`flex items-center gap-1 mt-1 ${trendColor}`}>
            {trend === 'up' ? <ArrowUpIcon size={14} /> : <ArrowDownIcon size={14} />}
            <span className="text-sm">{Math.abs(delta)}%</span>
          </div>
        )}
      </CardContent>
    </Card>
  );
}
⚠ What v0 Gets Wrong Every Time
📊 Production Insight
These are not random failures — v0 produces the same categories of issues almost every time.
Knowing the failure patterns means your Cursor refinement is targeted, not exploratory.
The five Cursor commands below fix 90% of these issues in under 10 minutes.
🎯 Key Takeaway
The v0 output is a draft — expect hardcoded colors, broad types, default exports, and direct import paths.
Knowing the consistent failure patterns turns the Cursor refinement phase into targeted fixes, not exploration.

Phase 2: Cursor Refinement — The Five Targeted Commands

Cursor's Cmd+K inline command transforms the v0 output into a production-ready component. The refinement phase is not open-ended exploration — it is a sequence of five targeted commands that fix the consistent failure patterns v0 produces.

Run these commands in order on the pasted v0 output. Each command is scoped to one failure pattern. Running them in sequence takes 8-10 minutes and fixes 90% of the issues in the raw output.

After the five commands, the remaining 10% is manual review: verify the TypeScript types match your actual data models, test both light and dark mode in the browser, and confirm the component integrates with the specific hook or API it will use in production.
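Part of that remaining review can be front-loaded with a mechanical scan. A sketch of such a gate (a hypothetical helper; the checks and regexes are illustrative, derived from the failure patterns described in this guide):

```typescript
interface GateIssue {
  check: string;
  detail: string;
}

// Scans a generated component's source text for the recurring v0 failure
// patterns: hardcoded colors, default exports, and direct shadcn import paths.
function runQualityGate(source: string): GateIssue[] {
  const issues: GateIssue[] = [];
  if (/\b(?:bg|text|border)-(?:white|black|gray-\d+)\b/.test(source)) {
    issues.push({ check: 'design-tokens', detail: 'hardcoded Tailwind color class found' });
  }
  if (/export\s+default/.test(source)) {
    issues.push({ check: 'exports', detail: 'default export conflicts with barrel pattern' });
  }
  if (/from\s+'@\/components\/ui\//.test(source)) {
    issues.push({ check: 'imports', detail: 'direct shadcn path instead of barrel export' });
  }
  return issues;
}
```

An empty result does not mean the component passes (light/dark testing and type verification remain manual), but a non-empty result blocks the merge immediately.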

src/components/MetricCard.after.tsx · TYPESCRIPT
// AFTER: Post-Cursor refinement — production-ready
// Each fix corresponds to a specific Cursor Cmd+K command

// Cmd+K #1: "Convert to named export and update import paths to use
//            @company/ui barrel exports instead of direct @/components/ui/* paths"
import { Card, CardContent, CardHeader } from '@company/ui';
import { Skeleton } from '@company/ui';
import { cn } from '@/lib/utils';
import { ArrowUpIcon, ArrowDownIcon, MinusIcon } from 'lucide-react';

// Cmd+K #2: "Refine all prop types to use strict union types.
//            Match the MetricCardSpec interface in src/config/component-specs.ts"
export interface MetricCardProps {
  /** Metric name displayed above the value */
  label: string;
  /** Primary metric value — string for pre-formatted values, number for raw */
  value: string | number;
  /** Change from previous period as a percentage — positive or negative */
  delta?: number;
  /** Trend direction controls icon and semantic color */
  trend?: 'up' | 'down' | 'neutral';
  /** Controls padding and font sizes */
  size?: 'sm' | 'md' | 'lg';
  /** Renders skeleton placeholder during data fetch */
  loading?: boolean;
  /** Additional CSS classes merged with defaults */
  className?: string;
}

// Cmd+K #3: "Replace all hardcoded Tailwind color classes with semantic tokens.
//            Reference the token list in tailwind.config.ts.
//            bg-white → bg-card, text-gray-* → text-muted-foreground,
//            text-green-* → text-success (or use a CSS variable),
//            text-red-* → text-destructive"

const trendConfig = {
  up: {
    icon: ArrowUpIcon,
    className: 'text-emerald-600 dark:text-emerald-400', // semantic pattern for success
    label: 'Trending up',
  },
  down: {
    icon: ArrowDownIcon,
    className: 'text-destructive',
    label: 'Trending down',
  },
  neutral: {
    icon: MinusIcon,
    className: 'text-muted-foreground',
    label: 'No change',
  },
};

const sizeConfig = {
  sm: {
    card: 'p-3',
    value: 'text-xl font-bold',
    label: 'text-xs',
    delta: 'text-xs',
  },
  md: {
    card: 'p-4',
    value: 'text-2xl font-bold',
    label: 'text-sm',
    delta: 'text-sm',
  },
  lg: {
    card: 'p-6',
    value: 'text-3xl font-bold',
    label: 'text-base',
    delta: 'text-sm',
  },
};

// Cmd+K #4: "Improve the loading state to use the same Card wrapper
//            so the skeleton matches the final component's dimensions exactly"
export function MetricCard({
  label,
  value,
  delta,
  trend = 'neutral',
  size = 'md',
  loading = false,
  className,
}: MetricCardProps) {
  const sizes = sizeConfig[size];

  if (loading) {
    return (
      <Card className={cn(sizes.card, className)}>
        <CardHeader className="pb-2 pt-0 px-0">
          <Skeleton className="h-3 w-20" />
        </CardHeader>
        <CardContent className="px-0 pb-0">
          <Skeleton className="h-8 w-28 mb-2" />
          <Skeleton className="h-3 w-16" />
        </CardContent>
      </Card>
    );
  }

  const trend_ = trendConfig[trend];
  const TrendIcon = trend_.icon;

  return (
    <Card className={cn(sizes.card, className)}>
      <CardHeader className="pb-2 pt-0 px-0">
        {/* text-muted-foreground β€” semantic token, adapts to dark mode */}
        <p className={cn(sizes.label, 'text-muted-foreground')}>{label}</p>
      </CardHeader>
      <CardContent className="px-0 pb-0">
        {/* text-card-foreground β€” semantic token */}
        <p className={cn(sizes.value, 'text-card-foreground')}>
          {typeof value === 'number' ? value.toLocaleString() : value}
        </p>
        {delta !== undefined && (
          // Cmd+K #5: "Add aria-label to the trend indicator so screen readers
          //            announce the direction and percentage change"
          <div
            className={cn('flex items-center gap-1 mt-1', trend_.className)}
            aria-label={`${trend_.label}: ${Math.abs(delta)}% change`}
            role="status"
          >
            <TrendIcon size={14} aria-hidden="true" />
            <span className={sizes.delta}>
              {delta > 0 ? '+' : ''}{delta}%
            </span>
          </div>
        )}
      </CardContent>
    </Card>
  );
}

MetricCard.displayName = 'MetricCard';
πŸ’‘The Five Cursor Commands in Order
  • Command 1: Convert to named export and update import paths to barrel exports
  • Command 2: Refine prop types to strict union types matching existing interfaces
  • Command 3: Replace all hardcoded color classes with semantic tokens from tailwind.config.ts
  • Command 4: Improve loading state to match the final component's Card wrapper and dimensions
  • Command 5: Add ARIA attributes and role to all interactive and status elements
πŸ“Š Production Insight
Running the five commands in sequence takes 8-10 minutes on a typical component.
Skipping command 3 (semantic tokens) is what caused the 18-component dark mode incident.
Skipping command 5 (ARIA) is what causes accessibility audit failures on 60% of generated components.
The commands are not suggestions β€” they are the quality baseline every component must meet.
🎯 Key Takeaway
The Cursor refinement phase is five targeted commands, not open-ended editing.
Each command fixes one consistent failure pattern that v0 produces on almost every output.
Running all five takes 8-10 minutes. Skipping any one creates a quality gate failure.

Scaling to 50 Components: The Batch Workflow

The batch workflow applies the two-phase process systematically across a full component library. The key insight is that specification, generation, and refinement are separate work modes β€” mixing them creates context-switching overhead that slows the process.

Block time in three stages: spec review (one session, all 50 specs reviewed and approved), generation (sequential v0 sessions, one component at a time, prompts derived from specs), and refinement (Cursor sessions applying the five commands to each generated output).
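Since every generation prompt is derived from a spec, it helps to see that derivation concretely. A minimal sketch, assuming a hypothetical shape for the entries in src/config/component-specs.ts: the article names the file and the five prompt sections (core function, props, variants, states, style rules), but the exact field names below are assumptions.

```typescript
// Hypothetical spec shape: only the five prompt sections come from the
// article; the field names are illustrative.
interface ComponentSpec {
  name: string;
  coreFunction: string;
  props: string[];
  variants: string[];
  states: string[];
  styleRules: string[];
}

// Derive the structured five-section v0 prompt from a spec.
function buildPrompt(spec: ComponentSpec): string {
  return [
    `Component: ${spec.name}`,
    `Core function: ${spec.coreFunction}`,
    `Props: ${spec.props.join(', ')}`,
    `Variants: ${spec.variants.join(', ')}`,
    `States: ${spec.states.join(', ')}`,
    `Style rules: ${spec.styleRules.join('; ')}`,
  ].join('\n');
}
```

Generating every prompt from the same template is what keeps 50 components consistent: the spec changes, the structure does not.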

The bottleneck is not generation β€” it is the quality gate review. A component that takes 2 minutes to generate takes 8-10 minutes to refine and 5 minutes to review. Total workflow time per component is 15-17 minutes. For 50 components, that is approximately 12-14 hours of focused work spread across sessions.
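Those per-component numbers compound quickly at library scale; a throwaway estimator makes the arithmetic explicit (the minutes are the article's estimates, not measurements):

```typescript
// Phase timings in minutes per component, per the workflow described above.
interface PhaseMinutes {
  generate: number; // v0 prompt and output
  refine: number;   // the five Cursor commands
  review: number;   // quality gate checks
}

// Total focused hours for a batch of components.
function batchHours(components: number, phases: PhaseMinutes): number {
  const perComponent = phases.generate + phases.refine + phases.review;
  return (components * perComponent) / 60;
}

// 50 components at 2 + 9 + 5 = 16 minutes each is roughly 13.3 hours,
// inside the article's 12-14 hour range.
```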

scripts/component-batch-tracker.sh Β· BASH
#!/usr/bin/env bash
# component-batch-tracker.sh
# Track batch generation progress across a component library sprint.
# Run from the project root. Updates component-status.md with current state.

set -euo pipefail

SPEC_FILE="src/config/component-specs.ts"
COMPONENTS_DIR="src/components"
STATUS_FILE="component-status.md"

# Redirect all report output into the status file, as the header comment promises
exec > "$STATUS_FILE"

echo "# Component Generation Status"
echo "Generated: $(date)"
echo ""

# Count specs defined (grep -c prints 0 but exits non-zero when there are no matches)
SPEC_COUNT=$(grep -c 'export const.*Spec: ComponentSpec' "$SPEC_FILE" 2>/dev/null || true)
SPEC_COUNT=${SPEC_COUNT:-0}
echo "## Specs defined: $SPEC_COUNT"
echo ""

# Check which components exist as files
echo "## Component Status"
echo "| Component | File Exists | Has Test | Storybook Story | Dark Mode Checked |"
echo "|-----------|-------------|----------|-----------------|-------------------|"

# Extract component names from spec file
grep 'export const.*Spec: ComponentSpec' "$SPEC_FILE" \
  | sed 's/export const //' \
  | sed 's/Spec: ComponentSpec.*//' \
  | while read -r specName; do
    # Uppercase the first letter: metricCard -> MetricCard
    componentName="$(printf '%s' "${specName:0:1}" | tr '[:lower:]' '[:upper:]')${specName:1}"

    # Check file existence
    fileExists="❌"
    if [ -f "$COMPONENTS_DIR/${componentName}.tsx" ]; then
      fileExists="βœ…"
    fi

    # Check test file
    hasTest="❌"
    if [ -f "$COMPONENTS_DIR/__tests__/${componentName}.test.tsx" ]; then
      hasTest="βœ…"
    fi

    # Check Storybook story
    hasStory="❌"
    if [ -f "$COMPONENTS_DIR/${componentName}.stories.tsx" ]; then
      hasStory="βœ…"
    fi

    echo "| ${componentName} | ${fileExists} | ${hasTest} | ${hasStory} | ☐ |"
  done

echo ""

# Check for hardcoded colors across all component files
echo "## Design Token Compliance Audit"
echo "Components with hardcoded colors (must fix before merge):"
grep -rln 'bg-white\|bg-gray-\|text-gray-\|text-black\|border-gray-\|bg-blue-\|bg-red-\|text-green-' \
  "$COMPONENTS_DIR" 2>/dev/null \
  | grep '\.tsx$' \
  | grep -v '\.stories\.\|\.test\.' \
  | while read -r f; do
    count=$(grep -c 'bg-white\|bg-gray-\|text-gray-\|text-black\|border-gray-\|bg-blue-\|bg-red-\|text-green-' "$f" || echo 0)
    echo "  $f β€” $count instances"
  done

echo ""
echo "Run: grep -rnE 'bg-white|text-gray' src/components/ for specific lines."
Mental Model
Batch in Phases, Not in Parallel
Mixing spec writing, generation, and refinement in the same session creates context-switching overhead that eliminates the workflow's time advantage.
  • Session 1: Write and review all specs β€” no generation until specs are approved
  • Session 2: Generate all components with v0 β€” one prompt per spec, save all raw output
  • Session 3: Apply the five Cursor commands to each component sequentially
  • Session 4: Quality gate review β€” visual check, dark mode, accessibility, bundle size
  • Separating phases means each session has one cognitive mode β€” spec review, generation, or refinement
πŸ“Š Production Insight
A developer who mixes generation and refinement in the same session averages 22 minutes per component.
The same developer working in separate phases averages 15 minutes per component.
The difference is context-switching: jumping between v0 browser, Cursor, and the quality checklist costs 7 minutes per component.
At 50 components, that is 5.8 hours of avoidable overhead.
🎯 Key Takeaway
Total workflow time is 15-17 minutes per component β€” not 3 minutes.
50 components require approximately 12-14 hours of focused work across multiple sessions.
Batch in phases: all specs first, all generation second, all refinement third, quality gates last.

Quality Gates: The Four Non-Negotiable Checks

Automation without quality gates multiplies technical debt. A component that takes 15 minutes to generate and refine takes 4 hours to debug in production if it ships with a broken dark mode, an accessibility violation, or a performance regression.

The four quality gates apply to every AI-generated component before it merges. They are not suggestions and they are not skippable when the sprint is tight β€” the sprint will be tighter after the production incident.

Gate 1: Design token compliance. Run the hardcoded color grep and fix every match. Toggle dark mode in the browser and visually inspect every state of the component.

Gate 2: Accessibility audit. Run axe-cli against the rendered component. Fix every violation before proceeding. Manually tab through the component with keyboard-only navigation.

Gate 3: Integration with real data. Mount the component with real API data, not the mock data from the Storybook story. Edge cases in real data expose type mismatches and empty state bugs that mock data never triggers.

Gate 4: Bundle impact check. Check that no unexpected dependencies were added. Anything over 5KB of additional bundle size requires justification.
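Gate 4 can be partially automated with bundlesize, which the script's prerequisites mention and which reads a `bundlesize` field from package.json. A minimal sketch, where the chunk path and budget are assumptions to adapt to your build output:

```json
{
  "bundlesize": [
    {
      "path": ".next/static/chunks/pages/**/*.js",
      "maxSize": "150 kB"
    }
  ]
}
```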

scripts/quality-gates.sh Β· BASH
#!/usr/bin/env bash
# quality-gates.sh
# Run quality gates on a specific generated component before merging.
# Usage: ./scripts/quality-gates.sh src/components/MetricCard.tsx
#
# Prerequisites:
#   - axe-cli: npm install -g @axe-core/cli
#   - bundlesize: configured in package.json or .bundlesizerc
#   - Dev server running on localhost:3000
#   - Storybook running on localhost:6006

set -euo pipefail

COMPONENT_FILE="${1:-}"
if [ -z "$COMPONENT_FILE" ]; then
  echo "Usage: $0 <component-file-path>"
  echo "Example: $0 src/components/MetricCard.tsx"
  exit 1
fi

COMPONENT_NAME=$(basename "$COMPONENT_FILE" .tsx)
PASS=true

echo "=================================================="
echo " Quality Gates: $COMPONENT_NAME"
echo "=================================================="
echo ""

# ------------------------------------------------------------------
# GATE 1: Design token compliance
# ------------------------------------------------------------------
echo "--- Gate 1: Design Token Compliance ---"

HARDCODED=$(grep -c 'bg-white\|bg-gray-\|text-gray-\|text-black\|border-gray-\|bg-blue-\|bg-red-\|text-green-\|bg-yellow-' "$COMPONENT_FILE" 2>/dev/null || true)
HARDCODED=${HARDCODED:-0}

if [ "$HARDCODED" -eq 0 ]; then
  echo "βœ… No hardcoded color classes found"
else
  echo "❌ FAIL: $HARDCODED hardcoded color class(es) found:"
  grep -n 'bg-white\|bg-gray-\|text-gray-\|text-black\|border-gray-\|bg-blue-\|bg-red-\|text-green-' "$COMPONENT_FILE" || true
  PASS=false
fi

echo ""
echo "Manual check required:"
echo "  [ ] Toggle dark mode in browser β€” inspect all component states visually"
echo "  [ ] Check loading state in dark mode"
echo "  [ ] Check error state in dark mode (if applicable)"
echo ""

# ------------------------------------------------------------------
# GATE 2: Accessibility audit
# ------------------------------------------------------------------
echo "--- Gate 2: Accessibility ---"

# Check if Storybook story exists
STORY_FILE="src/components/${COMPONENT_NAME}.stories.tsx"
if [ ! -f "$STORY_FILE" ]; then
  echo "⚠️  No Storybook story found at $STORY_FILE"
  echo "    Create a story, then run: npx @axe-core/cli 'http://localhost:6006/iframe.html?id=...'"
else
  echo "βœ… Storybook story found"
  echo "   Run: npx @axe-core/cli 'http://localhost:6006/iframe.html?id=$(echo "$COMPONENT_NAME" | tr '[:upper:]' '[:lower:]')-default' --tags wcag2a,wcag2aa"
fi

# Static check: find onClick handlers without keyboard equivalents
ONCLICK_COUNT=$(grep -c 'onClick' "$COMPONENT_FILE" 2>/dev/null || true)
ONCLICK_COUNT=${ONCLICK_COUNT:-0}
KEYBOARD_COUNT=$(grep -c 'onKeyDown\|onKeyUp\|role=' "$COMPONENT_FILE" 2>/dev/null || true)
KEYBOARD_COUNT=${KEYBOARD_COUNT:-0}

if [ "$ONCLICK_COUNT" -gt 0 ] && [ "$KEYBOARD_COUNT" -eq 0 ]; then
  echo "⚠️  onClick handlers found without corresponding keyboard handlers or role attributes"
  echo "    Verify keyboard navigation works for all interactive elements"
fi

echo ""
echo "Manual check required:"
echo "  [ ] Tab through entire component with keyboard only"
echo "  [ ] Verify focus indicators are visible"
echo "  [ ] Test with VoiceOver (macOS) or NVDA (Windows) if component is interactive"
echo ""

# ------------------------------------------------------------------
# GATE 3: TypeScript type check
# ------------------------------------------------------------------
echo "--- Gate 3: TypeScript ---"
TSC_ERRORS=$(npx tsc --noEmit 2>&1 | grep -i "$COMPONENT_NAME" || true)
if [ -z "$TSC_ERRORS" ]; then
  echo "βœ… No TypeScript errors for $COMPONENT_NAME"
else
  echo "❌ FAIL: TypeScript errors reference $COMPONENT_NAME:"
  echo "$TSC_ERRORS"
  PASS=false
fi
echo ""

# ------------------------------------------------------------------
# GATE 4: Bundle impact (requires @next/bundle-analyzer configured)
# ------------------------------------------------------------------
echo "--- Gate 4: Bundle Impact ---"
echo "Check new imports added by this component:"
grep '^import' "$COMPONENT_FILE" \
  | grep -v '@company/ui\|@/lib\|@/hooks\|react\|lucide-react\|next' \
  | while read -r line; do
    echo "  ⚠️  External import β€” verify this library is already in package.json: $line"
  done || echo "βœ… No unexpected external imports"

echo ""
echo "Run ANALYZE=true npm run build to check full bundle impact if external imports were found."
echo ""

# ------------------------------------------------------------------
# Summary
# ------------------------------------------------------------------
echo "=================================================="
if [ "$PASS" = true ]; then
  echo " Automated checks: PASSED"
else
  echo " Automated checks: FAILED β€” fix issues above before merging"
fi
echo " Complete the manual checks above before marking PR ready."
echo "=================================================="
⚠ The Debt Multiplier Effect
πŸ“Š Production Insight
The team that skipped quality gates to hit sprint velocity targets spent the next two sprints fixing the components they shipped.
The team that ran quality gates finished 10% fewer components in the sprint but had zero rework in the following sprints.
Speed without quality gates is a loan at 400% interest.
🎯 Key Takeaway
Four gates: design token compliance, accessibility, real-data integration, and bundle impact; the script adds an automated TypeScript check alongside them.
The manual dark mode visual check is not replaceable by automation β€” run it for every component.
Quality gates are what converts raw generation speed into sustainable delivery speed.

Preventing Component Sprawl

Component sprawl is the silent failure mode of AI-assisted generation. Without discipline, a library of 50 components becomes 80 components where 30 are slight variations of the same base. Each variant looks slightly different, is maintained separately, and fragments the design system's visual consistency.

The prevention check is simple: before opening v0, ask 'can this be a variant of an existing component?' If the answer is yes or maybe, extend the existing component instead of generating a new one.

Cursor is the right tool for variant extraction. After generating 5-10 components, use Cursor Chat with @codebase to identify similar components and consolidate them into a single configurable base.

src/components/StatusBadge.tsx Β· TYPESCRIPT
// StatusBadge.tsx
// Example of variant consolidation β€” three generated components merged into one.
//
// BEFORE consolidation, the library had:
//   ActiveBadge.tsx    β€” green badge for active status
//   PendingBadge.tsx   β€” yellow badge for pending status
//   ErrorBadge.tsx     β€” red badge for error status
//
// Cursor Cmd+K: "Consolidate ActiveBadge, PendingBadge, and ErrorBadge
//               into a single StatusBadge component with a status prop
//               that controls color and icon via a config map."
//
// AFTER consolidation: one component, one maintenance point.

import { Badge } from '@company/ui';
import { cn } from '@/lib/utils';
import {
  CheckCircleIcon,
  ClockIcon,
  XCircleIcon,
  MinusCircleIcon,
} from 'lucide-react';

export type StatusValue = 'active' | 'pending' | 'error' | 'inactive';

interface StatusConfig {
  label: string;
  icon: React.ElementType;
  className: string;
  ariaLabel: string;
}

// Single config map β€” change a status's appearance here, it updates everywhere
const STATUS_CONFIG: Record<StatusValue, StatusConfig> = {
  active: {
    label: 'Active',
    icon: CheckCircleIcon,
    // Explicit light/dark pairs; prefer a semantic token (e.g. bg-success) if the theme defines one
    className: 'bg-emerald-100 text-emerald-800 dark:bg-emerald-900 dark:text-emerald-100',
    ariaLabel: 'Status: Active',
  },
  pending: {
    label: 'Pending',
    icon: ClockIcon,
    className: 'bg-amber-100 text-amber-800 dark:bg-amber-900 dark:text-amber-100',
    ariaLabel: 'Status: Pending',
  },
  error: {
    label: 'Error',
    icon: XCircleIcon,
    className: 'bg-red-100 text-red-800 dark:bg-red-900 dark:text-red-100',
    ariaLabel: 'Status: Error',
  },
  inactive: {
    label: 'Inactive',
    icon: MinusCircleIcon,
    className: 'bg-muted text-muted-foreground',
    ariaLabel: 'Status: Inactive',
  },
};

export interface StatusBadgeProps {
  /** Current entity status */
  status: StatusValue;
  /** Override default label derived from status */
  label?: string;
  /** Show icon alongside label */
  showIcon?: boolean;
  /** Size variant */
  size?: 'sm' | 'md';
  /** Additional CSS classes */
  className?: string;
}

export function StatusBadge({
  status,
  label,
  showIcon = true,
  size = 'md',
  className,
}: StatusBadgeProps) {
  const config = STATUS_CONFIG[status];
  const Icon = config.icon;
  const displayLabel = label ?? config.label;

  return (
    <Badge
      variant="outline"
      className={cn(
        config.className,
        size === 'sm' && 'text-xs px-1.5 py-0',
        'border-0 font-medium',
        className
      )}
      aria-label={config.ariaLabel}
    >
      {showIcon && (
        <Icon
          className={cn('mr-1', size === 'sm' ? 'h-3 w-3' : 'h-3.5 w-3.5')}
          aria-hidden="true"
        />
      )}
      {displayLabel}
    </Badge>
  );
}

StatusBadge.displayName = 'StatusBadge';
Mental Model
The Variant-First Check
Before generating a new component, open your component-specs.ts and search for a similar spec. If one exists within 70% of what you need, extend it β€” do not generate a new one.
  • Run the sprawl check after every 10 components: use Cursor Chat to identify similar components in the library
  • A config map (STATUS_CONFIG) is the right pattern for variants that differ by data, not by structure
  • One component with 4 status variants has one test file, one Storybook story, one maintenance point
  • Four separate badge components have four test files, four stories, four maintenance points β€” and will drift apart
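The variant-first check can be partially mechanized. A hypothetical sketch, flagging new spec names that share a component-type suffix with an existing spec as a cheap proxy for "can this be a variant?" (the suffix list and helper are illustrative, not part of the article's tooling):

```typescript
// Component-type suffixes worth checking for near-duplicates; extend to
// match your own library's naming.
const VARIANT_SUFFIXES = ['Badge', 'Card', 'Alert', 'Field', 'Item'];

// Return existing specs that share a suffix with the proposed new spec.
function findVariantCandidates(
  newSpec: string,
  existingSpecs: string[],
): string[] {
  const suffix = VARIANT_SUFFIXES.find((s) => newSpec.endsWith(s));
  if (!suffix) return [];
  return existingSpecs.filter((s) => s !== newSpec && s.endsWith(suffix));
}
```

A non-empty result is the cue to extend an existing component, as StatusBadge above consolidates three badges, rather than open v0.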
πŸ“Š Production Insight
A library hit 90 components when the spec count was 60.
30 components were unauthorized variants generated ad-hoc by developers who found the spec process too slow.
Consolidating them with Cursor took a full sprint β€” longer than generating them had.
Rule: the spec approval process is the anti-sprawl gate. No spec, no generation.
🎯 Key Takeaway
Ask 'can this be a variant?' before every generation session.
Use a config map pattern to consolidate similar components into one configurable base.
Run a sprawl check after every 10 components β€” catch consolidation opportunities before they become maintenance debt.
πŸ—‚ v0 vs. Cursor: Role Comparison
Each tool excels at a different phase β€” neither is a substitute for the other
  β€’ Initial component generation β€” v0: Strong (produces complete styled components from structured prompts); Cursor: not designed for this, use v0 for generation from scratch
  β€’ Project contextualization β€” v0: None (output is generic, does not know your codebase); Cursor: Strong (@codebase context adapts output to your specific project)
  β€’ Design token compliance β€” v0: Poor (defaults to hardcoded Tailwind classes without project config); Cursor: Strong (Cmd+K replaces hardcoded colors with semantic tokens in one pass)
  β€’ Prop type refinement β€” v0: Basic (generates broad types, string instead of union types); Cursor: Strong (aligns prop types with your existing interfaces via Cmd+K)
  β€’ Variant generation from base β€” v0: Moderate (requires a separate prompt per variant); Cursor: Strong (generates variants from the base component inline via Cmd+K)
  β€’ Accessibility improvements β€” v0: Poor (generates visually correct but ARIA-incomplete components); Cursor: Strong (adds ARIA attributes and keyboard handlers via targeted Cmd+K)
  β€’ Component consolidation (anti-sprawl) β€” v0: not applicable; Cursor: Strong (Chat with @codebase identifies and merges similar components)
  β€’ Dark mode compliance β€” v0: Poor (hardcoded colors break dark mode); Cursor: Strong (token replacement fixes dark mode in one targeted command)
  β€’ Learning curve β€” v0: Low (conversational prompt interface); Cursor: Medium (requires familiarity with @codebase context and Cmd+K scoping)

🎯 Key Takeaways

  • Total workflow time is 15-17 minutes per component β€” 2 minutes to generate, 8-10 to refine, 5 to quality-gate β€” not 3 minutes
  • Run the five-check candidacy evaluation before every generation session β€” two failing checks means build manually
  • Write all specs before generating any components β€” the spec phase catches sprawl and duplicates before any code is written
  • The five Cursor commands are the quality baseline: named exports, strict types, semantic tokens, proper loading states, ARIA attributes
  β€’ Dark mode visual inspection is mandatory β€” it is the check that catches the failure mode behind the 18-component production incident
  • Component sprawl is the silent failure mode β€” always ask 'can this be a variant?' before generating a new component
  • Quality gates convert raw generation speed into sustainable delivery speed β€” skip them and you are taking a loan at 400% interest

⚠ Common Mistakes to Avoid

    βœ•Generating before evaluating candidacy
    Symptom

    Hours spent prompting v0 for a complex interactive component that produces four inconsistent outputs, none of which are close to the design requirement. The team abandons the generated code and builds manually anyway β€” wasting the generation time.

    Fix

    Run the five-check candidate evaluation before opening v0. Two or more failing checks means build manually. The check takes 2 minutes and prevents multi-hour generation sessions on poor-fit components.

    βœ•Writing specs after generating instead of before
    Symptom

    Library grows to 80 components where 30 are slight variations of the same base card or badge. Consolidation takes a full sprint to undo.

    Fix

    Write all specs before generating any components. Run a duplicate check across the spec list β€” ask 'can this be a variant of an existing spec?' before adding each one. No spec means no generation session.

    βœ•Using the generic prompt instead of the structured template
    Symptom

    v0 produces a vague component with hardcoded colors, no variants, no states, and TypeScript types that do not match the data model. Rework takes longer than building manually.

    Fix

    Use the five-section prompt template (core function, props, variants, states, style rules) derived directly from the ComponentSpec. The structured prompt reduces rework from 45 minutes to 8-10 minutes.

    βœ•Skipping the dark mode visual check in the quality gate
    Symptom

    Components look correct in development (light mode). Dark mode breaks 18 components simultaneously in production because hardcoded color classes were missed in code review.

    Fix

    Dark mode visual inspection is a mandatory quality gate step β€” not optional when the sprint is tight. Toggle dark mode in the browser and inspect every component state before marking the PR ready.

    βœ•Treating the five Cursor commands as optional
    Symptom

    Teams apply one or two Cursor commands and skip the rest to save time. The skipped commands (semantic tokens, ARIA attributes) produce the exact failures the quality gate is designed to catch.

    Fix

    All five Cursor commands are the quality baseline for every component. They take 8-10 minutes total. Skipping any one does not save time β€” it defers the fix to post-merge where it costs 4x more.

    βœ•Using ANALYZE=true npm run build without configuring @next/bundle-analyzer first
    Symptom

    ANALYZE=true has no effect because the plugin was never configured. Developers assume the bundle check passed when it was silently skipped.

    Fix

    Configure @next/bundle-analyzer in next.config.js before relying on it. Add the ANALYZE=true npm run build command to the quality gate checklist only after verifying the plugin is active. Alternatively, use bundlesize with a .bundlesizerc configuration file for a simpler setup.
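A minimal next.config.js sketch wrapping an existing config with @next/bundle-analyzer, following the package's documented usage (any other config options are assumed to already exist):

```javascript
// next.config.js: wrap the existing config so ANALYZE=true actually does something
const withBundleAnalyzer = require('@next/bundle-analyzer')({
  enabled: process.env.ANALYZE === 'true',
});

module.exports = withBundleAnalyzer({
  // ...existing Next.js config options
});
```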

Interview Questions on This Topic

  β€’ Q (Senior): How would you design a system to generate UI components at scale while maintaining design system compliance?
    Three layers are required. First, a specification system that defines every component's requirements before generation begins β€” props, variants, states, allowed primitives, and explicitly what should not be generated (business logic). The spec is the single source of truth and the anti-sprawl gate: no spec means no generation session. Second, a generation pipeline using v0 with structured prompts derived from the spec. The prompts include explicit style rules β€” 'semantic Tailwind tokens only, no hardcoded color classes' β€” to reduce the most common AI output failures. The five-check candidacy evaluation runs before any prompt is written to filter out poor-fit components. Third, a quality gate process with four mandatory checks: design token compliance (hardcoded color grep plus dark mode visual check), accessibility (axe-cli plus keyboard navigation test), TypeScript (tsc --noEmit), and bundle impact (import audit). No component merges without passing all four. The pipeline produces consistent output because the spec, the prompt template, and the quality gates are all standardized β€” not because the AI is reliable.
  β€’ Q (Mid-level): What are the risks of using AI to generate code at scale, and how do you mitigate each one?
    Three primary risks with specific mitigations. Design system drift: AI does not know your design tokens and defaults to hardcoded color classes. Mitigation: structured prompts with explicit semantic token instructions, mandatory post-generation grep for hardcoded colors, and dark mode visual inspection as a quality gate step. Component sprawl: without a spec-first process, developers generate ad-hoc variations that fragment the design system. Mitigation: require an approved spec before any generation session. Run a sprawl check after every 10 components using Cursor Chat to identify consolidation opportunities. Accessibility debt: AI generates visually correct interactive components that lack ARIA attributes and keyboard handlers. Mitigation: ARIA addition is one of five mandatory Cursor refinement commands, and axe-cli runs as part of the quality gate before merge. The meta-rule: AI accelerates the scaffolding phase. Engineering judgment determines what actually ships. The workflow's value comes from the specification system and quality gates, not from trusting the AI output.
  β€’ Q (Senior): How would you enforce design token compliance across AI-generated components in a large codebase?
    Three layers of enforcement. Prevention at generation: structured v0 prompts include explicit style rules listing forbidden classes (bg-white, bg-gray-, text-gray-) and required semantic alternatives (bg-background, text-foreground, text-muted-foreground). This reduces violations before the code exists. Detection at refinement: the first Cursor Cmd+K command after pasting v0 output is 'replace all hardcoded Tailwind color classes with semantic tokens from tailwind.config.ts.' The hardcoded color grep runs as the first quality gate check. Enforcement at merge: a custom ESLint rule in the CI pipeline flags any component file containing raw Tailwind color classes. The PR cannot merge with ESLint failures. This makes compliance a structural requirement, not a review-dependent one. The three layers are additive β€” each catches what the previous missed. Relying on any single layer produces the dark mode incident.
  β€’ Q (Mid-level): When would you not use AI to generate a component, and how do you make that decision quickly?
    Use the five-check candidacy evaluation. Ask whether the component is presentational (accepts data via props, no internal state), follows a standard pattern (similar components exist in shadcn/ui or common design systems), has standard accessibility requirements (not custom drag-and-drop or complex focus management), contains no embedded business logic, and is decomposable into two or three shadcn/ui primitives. If two or more answers are no, build manually. The evaluation takes 2 minutes. It prevents spending 3 hours prompting v0 for a component that produces inconsistent outputs and ultimately gets built manually anyway. Good candidates: cards, badges, alerts, form field wrappers, stat displays, navigation items. Poor candidates: drag-and-drop interfaces, rich text editors, custom date pickers, components with complex gesture handling or domain-specific business logic baked in.


Naren Β· Founder & Author

Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.
