How I Generate 50+ shadcn Components Automatically with AI (2026 Workflow)
- AI is a scaffolding tool, not a shipping tool: it accelerates the mechanical work; engineering judgment determines what reaches production.
- Start with a structured specification document, not a creative prompt: consistency across 50+ components comes from the spec, not from careful individual prompting.
- v0.dev generates and Design Mode polishes; Cursor contextualizes with Agent mode. Each tool has a distinct, non-overlapping role.
- Combine v0.dev (generation + Design Mode) with Cursor (Agent mode + .cursorrules) to scaffold shadcn/ui components in minutes instead of hours.
- Biggest risk: AI defaults to hardcoded Tailwind classes that break dark mode and custom themes. Enforce semantic tokens at every step.
- Treat every AI output as a first draft, not a finished component.
Production Debug Guide: common symptoms when integrating AI-generated shadcn/ui components (Tailwind v4 and React 19 aware).

Symptom: hardcoded colors in a generated component

```shell
grep -rn -e 'bg-white' -e 'bg-black' -e 'text-gray-[0-9]' -e 'bg-gray-[0-9]' src/components/ --include='*.tsx'
grep -rn -e 'bg-blue-[0-9]' -e 'bg-red-[0-9]' -e 'bg-green-[0-9]' -e 'text-black' src/components/ --include='*.tsx'
```

Symptom: component bundle size increased unexpectedly after integration

```shell
npx @next/bundle-analyzer
grep -rn "from 'lodash'" src/components/ --include='*.tsx'
```

Symptom: accessibility audit failures on generated interactive components

```shell
npx @axe-core/cli http://localhost:3000/component-preview --tags wcag2a,wcag2aa
grep -rn 'onClick' src/components/ --include='*.tsx' | grep -v 'onKeyDown'
```

Symptom: TypeScript strict mode errors after pasting a generated component

```shell
npx tsc --noEmit --strict src/components/YourComponent.tsx
grep -rn ': any' src/components/YourComponent.tsx
```
Manual component creation is a scaling bottleneck. Each component requires boilerplate, variant logic, accessibility markup, and design token integration. At five components this is manageable. At fifty it is unsustainable.
AI tools automate the scaffolding phase. By combining v0.dev's generative output (now with Design Mode for visual polishing before export) with Cursor's contextual editing (now with full Agent mode and .cursorrules for project-wide rule enforcement), you create a pipeline that produces dozens of consistent components per session. The developer shifts from writing boilerplate to curating and refining AI output.
This article documents a workflow our team used to generate 52 production components for a B2B SaaS dashboard in approximately six hours of active work across two engineers. The component library covered data display, form inputs, navigation, feedback states, and layout primitives. Without this pipeline, the same output would have taken three to four days.
The risk is real: shipping AI output that drifts from your design system's tokens, breaks accessibility standards, or imports unnecessary dependencies. This article covers the workflow, the failure modes we hit, and the quality gates that prevent them from reaching production.
A note on tooling versions: this workflow reflects the state of these tools in early 2026. Tailwind CSS v4 introduced a CSS-first configuration model; the @theme directive replaces the tailwind.config.ts file for token definitions. React 19 introduced the use() hook and first-class server component support, which affects how you structure components that fetch data. Both are addressed where relevant.
The Two-Tool Workflow: v0.dev and Cursor
This workflow uses each tool for the phase where it excels. Trying to do everything in one tool produces worse results and slower output.
v0.dev handles initial generation. It translates structured text prompts into functional React components using shadcn/ui primitives and Tailwind CSS. In 2026 it also offers Design Mode, a visual editor that lets you tweak layout, spacing, and color directly in the interface before exporting code. This removes a category of small fixes that previously required a Cursor round-trip.
Cursor handles contextualization. Its AI features (Chat with @codebase context, Agent and Composer mode for multi-file autonomous edits, inline Cmd+K transformations, and .cursorrules for project-wide rule enforcement) adapt generic v0.dev output to your project's design tokens, existing hooks, type definitions, and coding conventions.
The developer's role is quality control. You write the spec, review the generated scaffold, direct the refactoring, and sign off before merge. The AI handles the mechanical labor; you handle the judgment calls.
```typescript
// Conceptual pipeline - illustrates the workflow stages.
// v0.dev and Cursor do not expose programmatic APIs; this is a process diagram in code form.
interface ComponentSpec {
  name: string;               // PascalCase: UserAvatar, MetricCard, DataTable
  description: string;        // One sentence: what it does, not how it looks
  props: PropSpec[];
  variants: VariantSpec[];
  states: string[];           // Always include: loading, error, empty
  isServerComponent: boolean; // React 19: explicit decision required
}

interface PipelineStage {
  tool: 'v0.dev' | 'cursor' | 'developer';
  input: string;
  output: string;
  qualityCheck: string;
}

const pipeline: PipelineStage[] = [
  {
    tool: 'developer',
    input: 'Product requirement or design file',
    output: 'ComponentSpec - structured definition of props, variants, states',
    qualityCheck: 'Does the spec describe one cohesive component, or should it be split?',
  },
  {
    tool: 'v0.dev',
    input: 'ComponentSpec converted to a structured prompt',
    output: 'React component scaffold with Tailwind classes and TypeScript types',
    qualityCheck: 'Does it render? Are hardcoded colors present? Check Design Mode for layout issues.',
  },
  {
    tool: 'cursor',
    input: 'v0.dev scaffold pasted into project',
    output: 'Project-native component using design tokens, existing hooks, and proper types',
    qualityCheck: 'Does tsc --noEmit pass? Does it render correctly in light and dark mode?',
  },
  {
    tool: 'developer',
    input: 'Cursor-adapted component',
    output: 'Reviewed, tested, and merged component with Storybook story',
    qualityCheck: 'All four quality gates passed. PR approved.',
  },
];
```
- v0.dev output is a first draft; assume 30 to 50 percent of it needs modification even after Design Mode adjustments.
- Cursor is the structural adaptation tool; it aligns generic output to project-specific context via Agent mode and .cursorrules.
- The developer is the quality gate; no AI output ships without human review of every line.
- Speed comes from repeating the loop efficiently, not from skipping review steps.
Phase 1: Generation with v0.dev
v0.dev translates structured UI descriptions into functional React components. Prompt quality directly determines output quality: a vague prompt produces a vague component that requires extensive rework.
A strong v0.dev prompt includes: the component name, one sentence describing its core function, the key props it accepts, the variants it supports, the states it must handle, the specific shadcn/ui primitives to use, and explicit token requirements.
After initial generation, use Design Mode to fix obvious visual issues (padding, spacing, color, layout) before exporting. This takes two to three minutes and removes a round of Cursor work.
v0.dev output is complete enough to run but not complete enough to ship. It will have hardcoded colors, generic types, and no connection to your project's hooks or utilities. That is expected. That is what Phase 2 addresses.
```
Create a shadcn/ui {COMPONENT_NAME} component with the following requirements:

Core function: {ONE_SENTENCE - what it does, not how it looks}

Props:
- {PROP_NAME}: {TYPE} - {ONE_LINE_DESCRIPTION}
- {PROP_NAME}: {TYPE} - {ONE_LINE_DESCRIPTION}
- {PROP_NAME}: {TYPE} (optional) - {ONE_LINE_DESCRIPTION}

Variants:
- {VARIANT_NAME}: {OPTION_1} | {OPTION_2} | {OPTION_3}
- size: sm | md | lg

States to handle:
- loading: show a Skeleton placeholder
- error: show an inline error message with retry option
- empty: show an empty state with a descriptive message

Technical requirements:
- Use these shadcn/ui primitives: {LIST - e.g., Card, Button, Badge, Skeleton}
- Style exclusively with semantic color tokens: bg-primary, text-muted-foreground,
  border-border, text-foreground, bg-muted (Tailwind v4 @theme tokens - no hardcoded scales)
- All props must have explicit TypeScript types - no any
- Export the component as a named export
- Include a Props interface above the component definition
- {IF CLIENT COMPONENT}: Add 'use client' directive at top
- {IF SERVER COMPONENT}: No useState or useEffect - accept data as props
```
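A concrete instance helps when writing your first few prompts. Below, the template is filled in from the MetricCard spec that appears later in this article (the retry affordance is omitted from the error state because MetricCard is a server component):

```
Create a shadcn/ui MetricCard component with the following requirements:

Core function: Displays a single KPI metric with label, value, trend indicator, and comparison period.

Props:
- label: string - Metric name
- value: string | number - Current metric value
- trend: 'up' | 'down' | 'neutral' (optional) - Trend direction vs comparison period
- trendValue: string (optional) - e.g., "+12.4%", shown next to the trend indicator

Variants:
- size: sm | md | lg

States to handle:
- loading: show a Skeleton placeholder
- error: show an inline error message
- empty: show an empty state with a descriptive message

Technical requirements:
- Use these shadcn/ui primitives: Card, CardHeader, CardContent, Skeleton
- Style exclusively with semantic color tokens: bg-primary, text-muted-foreground,
  border-border, text-foreground, bg-muted
- All props must have explicit TypeScript types - no any
- Export the component as a named export
- Include a Props interface above the component definition
- This is a server component - no useState or useEffect
```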
Phase 2: Customization with Cursor
Cursor transforms the v0.dev scaffold into a project-native component through three steps: contextualize, refactor, and validate.
Step 1: Contextualize. Paste the v0.dev output into your project at the correct file path. Open Cursor Chat and provide context using @codebase, or explicitly reference key files: @src/styles/globals.css (for Tailwind v4 @theme tokens), @src/types/user.ts, @src/hooks/useDataTable.ts. The more precise the context, the better the adaptation.
Step 2: Refactor. Use Cmd+K for inline targeted changes or Agent mode for multi-step transformations. Common refactoring commands are shown in the code block below. If you have a .cursorrules file, it enforces project conventions automatically (semantic token usage, import patterns, naming conventions), reducing the number of manual corrections needed.
Step 3: Validate. Run tsc --noEmit. Render the component in both light and dark mode. Check the output of the hardcoded color audit script. Do not proceed to the quality gates until these three checks pass.
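The color audit in Step 3 is worth wrapping in a small function so it can run the same way locally and in CI. A minimal sketch; the function name and the grep patterns follow this article's conventions, and you should extend the patterns to match your own token rules:

```shell
# audit_colors: fail if a component file still contains hardcoded
# Tailwind color classes instead of semantic tokens.
audit_colors() {
  file="$1"
  if grep -qn -e 'bg-white' -e 'bg-black' -e 'text-gray-[0-9]' -e 'bg-gray-[0-9]' "$file"; then
    echo "FAIL: hardcoded colors in $file"
    return 1
  fi
  echo "PASS: $file uses semantic tokens only"
  return 0
}
```

Usage: `audit_colors src/components/MetricCard.tsx` exits non-zero on a violation, so it composes directly with `&&` chains and CI steps.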
```typescript
// Cursor refactoring sequence - use these as Cmd+K prompts or Agent mode instructions.
// Run them in order for consistent results.

// Step 1: Token compliance
// "Replace all hardcoded Tailwind color classes with semantic tokens.
//  Reference the @theme block in src/styles/globals.css for available token names.
//  Do not use bg-white, text-gray-*, bg-gray-*, or any raw color scale."

// Step 2: Hook integration
// "Replace the local useState and useEffect data fetching logic with our
//  custom useDataTable hook from @/hooks/useDataTable.
//  The hook accepts: { data, pageSize, sortable, filterable }.
//  It returns: { rows, pagination, sort, filter, isLoading, error }."

// Step 3: Type alignment
// "Replace the generic Row type with the TableRow interface from @/types/table.ts.
//  Replace any with the specific types from that file.
//  Run tsc --noEmit after changes to confirm no type errors."

// Step 4: State handling
// "Add loading state using our Skeleton component from @/components/ui/skeleton.
//  Add error state using our Alert component with a retry button.
//  Add empty state with an EmptyState component showing the emptyMessage prop."

// Step 5: Server/client boundary (React 19)
// "Evaluate whether this component requires 'use client'.
//  If it only receives data via props and has no browser-only APIs,
//  remove 'use client' and convert to a server component."

// Step 6: Variant extraction
// "Using the base component above as the default variant,
//  create a compact variant that reduces row padding to py-1
//  and hides the checkbox selection column.
//  Export it as DataTableCompact from the same file."

// After all steps, the component should:
// - Import from project barrel exports, not direct component paths
// - Use semantic color tokens exclusively
// - Delegate state management to useDataTable
// - Handle loading, error, and empty states
// - Pass tsc --noEmit with zero errors
// - Render correctly in light and dark mode
```
Scaling to 50+ Components: The Specification System
Generating one component is a technique. Generating fifty consistently is a system. The difference is the Component Specification Document.
Before generating any component, define every component in a structured spec. For each component: name, one-sentence description, key props (three to five), variant options, required states, and whether it is a server or client component. This document becomes your prompt source and your living documentation.
The batch process is sequential and repeatable: spec → prompt → v0.dev generation → Design Mode review → Cursor refactor → quality gate → Storybook story → merge. Each component follows the same pipeline. Variation in output quality comes from variation in spec quality, not from the tools.
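Because every prompt follows the same pattern, the spec-to-prompt step can itself be mechanized. A sketch under assumptions: `MinimalSpec` and `promptFromSpec` are illustrative names invented here, not part of v0.dev or any tool's API:

```typescript
// Turn a spec entry into a v0.dev prompt string, so every prompt
// in the batch has identical structure.
interface MinimalSpec {
  name: string;
  description: string;
  props: { name: string; type: string; required: boolean; description: string }[];
  states: string[];
  shadcnPrimitives: string[];
  isServerComponent: boolean;
}

function promptFromSpec(spec: MinimalSpec): string {
  const props = spec.props
    .map(p => `- ${p.name}: ${p.type}${p.required ? '' : ' (optional)'} - ${p.description}`)
    .join('\n');
  return [
    `Create a shadcn/ui ${spec.name} component with the following requirements:`,
    `Core function: ${spec.description}`,
    `Props:\n${props}`,
    `States to handle: ${spec.states.join(', ')}`,
    `Use these shadcn/ui primitives: ${spec.shadcnPrimitives.join(', ')}`,
    'Style exclusively with semantic color tokens (bg-primary, text-muted-foreground, border-border).',
    spec.isServerComponent
      ? 'This is a server component - no useState or useEffect.'
      : "Add 'use client' directive at top.",
  ].join('\n\n');
}
```

Pasting generated prompts instead of hand-writing them removes one more source of drift between components.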
In our six-hour session generating 52 components, two engineers worked in parallel on separate component groups. One handled data display components (tables, charts, stat cards); the other handled form inputs and navigation. Parallel execution is possible because each component is self-contained and the pipeline is the same for both.
```typescript
// Component Specification Schema
// Every component is defined here before generation begins.
// This file is reviewed once; the generated component is reviewed once.
// Both together take less time than manual component creation.

interface PropSpec {
  name: string;
  type: string;
  required: boolean;
  description: string;
  defaultValue?: string;
}

interface VariantSpec {
  name: string;      // e.g., "size", "status", "density"
  options: string[]; // e.g., ["sm", "md", "lg"]
  default: string;
}

interface ComponentSpec {
  name: string;                // PascalCase
  description: string;         // One sentence - what it does
  props: PropSpec[];
  variants: VariantSpec[];
  states: string[];            // loading, error, empty - always all three
  shadcnPrimitives: string[];  // Exact primitives to reference in the prompt
  isServerComponent: boolean;  // React 19: explicit decision before generation
  storybook: boolean;          // Always true - every component gets a story
}

// Example specs
const componentSpecs: ComponentSpec[] = [
  {
    name: 'DataTable',
    description: 'Sortable, filterable table with pagination and row selection',
    props: [
      { name: 'data', type: 'T[]', required: true, description: 'Array of row data objects' },
      { name: 'columns', type: 'ColumnDef<T>[]', required: true, description: 'Column configuration array' },
      { name: 'onRowSelect', type: '(rows: T[]) => void', required: false, description: 'Callback fired when row selection changes' },
      { name: 'emptyMessage', type: 'string', required: false, defaultValue: 'No results found', description: 'Message shown in empty state' },
    ],
    variants: [
      { name: 'density', options: ['compact', 'default', 'comfortable'], default: 'default' },
    ],
    states: ['loading', 'error', 'empty'],
    shadcnPrimitives: ['Table', 'Checkbox', 'Button', 'Skeleton', 'Alert'],
    isServerComponent: false, // Requires interactivity for sorting and selection
    storybook: true,
  },
  {
    name: 'MetricCard',
    description: 'Displays a single KPI metric with label, value, trend indicator, and comparison period',
    props: [
      { name: 'label', type: 'string', required: true, description: 'Metric name' },
      { name: 'value', type: 'string | number', required: true, description: 'Current metric value' },
      { name: 'trend', type: "'up' | 'down' | 'neutral'", required: false, description: 'Trend direction vs comparison period' },
      { name: 'trendValue', type: 'string', required: false, description: 'e.g., "+12.4%" - shown next to trend indicator' },
    ],
    variants: [
      { name: 'size', options: ['sm', 'md', 'lg'], default: 'md' },
    ],
    states: ['loading', 'error', 'empty'],
    shadcnPrimitives: ['Card', 'CardHeader', 'CardContent', 'Skeleton'],
    isServerComponent: true, // Display only - no interactivity required
    storybook: true,
  },
];
```
- A structured spec produces consistent components across the entire library because each prompt follows the same pattern.
- Without specs, each generated component drifts toward a different pattern depending on how the prompt was written.
- Specs serve as living documentation; they answer 'why does this component have these props?' without reading the implementation.
- The spec review is the cheapest review in the pipeline. Catch structural problems here, not after generation.
Quality Gates: The Non-Negotiable Checkpoint
Automation without quality gates multiplies technical debt at the same rate it accelerates production. Each of the four gates targets a distinct failure mode that AI generation introduces.
Gate 1: Visual regression. Render the component in Storybook across all variants and all states (loading, error, empty, populated). Check both light and dark mode. Screenshot comparison catches layout breaks that look fine in isolation but break in composition.
Gate 2: Accessibility audit. Run axe-core against the component in the browser or Storybook. AI-generated components miss ARIA labels, keyboard navigation, and focus management at a high rate. This gate is not optional; it is a legal requirement in many jurisdictions.
Gate 3: Integration test with real data. Mock data hides edge cases that production data exposes: long strings, null values, empty arrays, deeply nested objects. Connect the component to your actual API or a fixture that mirrors production data shape.
Gate 4: Bundle size check. AI sometimes suggests heavy dependencies for problems that have lightweight solutions. A generated table component should not pull in a full charting library. Measure the bundle impact of each component before merge.
For simple presentational components (cards, badges, alerts), all four gates take eight to ten minutes. For complex interactive components (data tables, multi-step forms), they take twenty to thirty minutes. That time is not optional; it is the price of sustainable speed.
```typescript
// Quality gate runner - run before any generated component is merged.
// Requires: Storybook running, dev server running, tsc available.

interface QualityReport {
  componentName: string;
  passed: boolean;
  failures: string[];
  warnings: string[];
}

async function runQualityGates(componentPath: string, componentName: string): Promise<QualityReport> {
  const report: QualityReport = {
    componentName,
    passed: true,
    failures: [],
    warnings: [],
  };

  console.log(`\nRunning quality gates for ${componentName}...`);

  // Gate 1: TypeScript check
  // Run before visual checks - type errors indicate structural problems
  const typeCheckPassed = await runCommand(`npx tsc --noEmit --strict ${componentPath}`);
  if (!typeCheckPassed) {
    report.failures.push('Gate 1 failed: TypeScript type errors detected. Run tsc --noEmit to see full output.');
  }

  // Gate 2: Visual regression - requires Storybook
  // Checks both light and dark mode renders for all variants
  const visualPassed = await runVisualRegression(componentName);
  if (!visualPassed) {
    report.failures.push('Gate 2 failed: Visual regression detected. Check Storybook screenshots for diff.');
  }

  // Gate 3: Accessibility audit - requires dev server
  // Target the Storybook story URL for the component
  const storybookUrl = `http://localhost:6006/iframe.html?id=${componentName.toLowerCase()}--default`;
  const a11yViolations = await runCommand(
    `npx @axe-core/cli "${storybookUrl}" --tags wcag2a,wcag2aa --exit`
  );
  if (!a11yViolations) {
    report.failures.push('Gate 3 failed: Accessibility violations found. Run axe-core manually to see full report.');
  }

  // Gate 4: Hardcoded color audit
  const colorAuditPassed = await runCommand(
    `! grep -rn -e 'bg-white' -e 'bg-black' -e 'text-gray-[0-9]' -e 'bg-gray-[0-9]' ${componentPath}`
  );
  if (!colorAuditPassed) {
    report.failures.push('Gate 4 failed: Hardcoded color classes detected. Replace with semantic tokens.');
  }

  // Gate 5: Bundle size impact
  // Warning if over 5KB, failure if over 20KB for a single component
  const bundleImpactKB = await measureBundleImpact(componentPath);
  if (bundleImpactKB > 20) {
    report.failures.push(`Gate 5 failed: Component adds ${bundleImpactKB}KB to bundle. Investigate imports.`);
  } else if (bundleImpactKB > 5) {
    report.warnings.push(`Gate 5 warning: Component adds ${bundleImpactKB}KB. Review imports for tree-shaking opportunities.`);
  }

  report.passed = report.failures.length === 0;

  if (report.passed) {
    console.log(`✓ ${componentName} passed all quality gates`);
    if (report.warnings.length > 0) {
      console.log(`  Warnings: ${report.warnings.join(', ')}`);
    }
  } else {
    console.log(`✗ ${componentName} failed ${report.failures.length} gate(s):\n  ${report.failures.join('\n  ')}`);
  }

  return report;
}
```
Version Control and Team Workflow at Scale
Generating 50+ components creates a version control and review workflow problem. Without a clear branching and commit strategy, the PR queue becomes unmanageable and review quality drops.
We used a component-group branching strategy: one feature branch per logical group of components (data-display, form-inputs, navigation, feedback). Each branch contained six to ten related components. This kept PR diffs reviewable and allowed parallel work without merge conflicts.
Commit strategy within each branch: one commit per component, with a consistent message format. This makes bisecting straightforward if a component introduces a regression.
Review strategy: the author runs all quality gates locally before opening the PR. The reviewer checks only that the gates passed (via CI output) and does a spot-check on one component's light and dark mode rendering. With quality gates in CI, the reviewer is not re-checking mechanical compliance β they are checking judgment calls.
```shell
#!/bin/bash
# Component generation workflow - run these commands in sequence.
# Assumes: main branch is clean, Storybook is configured.

COMPONENT_NAME=$1
GROUP=$2  # e.g., data-display, form-inputs, navigation

if [ -z "$COMPONENT_NAME" ] || [ -z "$GROUP" ]; then
  echo "Usage: ./component-workflow.sh ComponentName group-name"
  exit 1
fi

# Step 1: Ensure you are on the correct feature branch
BRANCH="feat/components-${GROUP}"
git checkout "$BRANCH" 2>/dev/null || git checkout -b "$BRANCH"

echo "Branch: $BRANCH"
echo "Ready to generate: $COMPONENT_NAME"
echo ""
echo "Workflow:"
echo "1. Open v0.dev → paste spec prompt → review in Design Mode → copy output"
echo "2. Create file: src/components/${COMPONENT_NAME}.tsx"
echo "3. Paste v0.dev output"
echo "4. Open Cursor → run refactoring sequence (see Phase 2 prompts)"
echo "5. Run type check:"
echo "   npx tsc --noEmit"
echo "6. Run color audit:"
echo "   grep -rn -e 'bg-white' -e 'text-gray-[0-9]' src/components/${COMPONENT_NAME}.tsx"
echo "7. Generate Storybook story with Cursor Agent mode"
echo "8. Verify in Storybook: light mode, dark mode, all variants, all states"
echo "9. Commit:"
echo "   git add src/components/${COMPONENT_NAME}.tsx src/components/${COMPONENT_NAME}.stories.tsx"
echo "   git commit -m 'feat(components): add ${COMPONENT_NAME} - ${GROUP} group'"
echo "10. Next component → repeat from step 1"
```
Common Failure Modes at Scale
After generating components in volume, specific failure patterns become predictable. The most persistent of the issues we hit, and have seen other teams hit, is component sprawl: generating a new component when a variant of an existing one would suffice.
- A single Button with five variants is easier to maintain than five separate button components.
- A single Card with size and density variants covers most display use cases without proliferation.
- Track your component count against your spec count. If components grow faster than specs, you have sprawl.
- Use Cursor Agent mode to refactor sprawl: 'Merge ButtonSmall, ButtonLarge, and ButtonIcon into a single Button component with size and icon variant props.'
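The variant-first approach the bullets describe can be sketched without framework code: one class map per variant axis, one function composing them. In a real shadcn/ui project this is usually expressed with class-variance-authority's `cva`; the hand-rolled version below is a dependency-free illustration with invented class values:

```typescript
// One Button, several variants - instead of ButtonSmall, ButtonLarge, ButtonIcon.
type Size = 'sm' | 'md' | 'lg';
type Variant = 'default' | 'outline' | 'ghost';

const base = 'inline-flex items-center justify-center rounded-md font-medium';

// Each axis is a closed map, so adding an option is one line, not one component.
const sizeClasses: Record<Size, string> = {
  sm: 'h-8 px-3 text-sm',
  md: 'h-10 px-4 text-sm',
  lg: 'h-12 px-6 text-base',
};

const variantClasses: Record<Variant, string> = {
  default: 'bg-primary text-primary-foreground',
  outline: 'border border-border bg-transparent text-foreground',
  ghost: 'bg-transparent text-foreground',
};

function buttonClasses(size: Size = 'md', variant: Variant = 'default'): string {
  return [base, sizeClasses[size], variantClasses[variant]].join(' ');
}
```

Note the maps use only semantic tokens, so the variant system inherits dark-mode correctness from the design system rather than re-solving it per button.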
| Capability | v0.dev | Cursor |
|---|---|---|
| Initial generation from prompt | Strong: produces styled, functional React components from structured text prompts | Not designed for this; use v0.dev for generation from scratch |
| Visual polishing before code export | Strong: Design Mode provides a visual editor for layout, spacing, and color adjustments | Not applicable; Cursor works on code, not visual previews |
| Project contextualization | Limited: output is generic; does not know your hooks, types, or token definitions | Strong: @codebase context, explicit file references, Agent mode, and .cursorrules adapt output to project conventions |
| Design token compliance | Poor: defaults to hardcoded Tailwind color scales regardless of prompt instructions | Strong: can audit and replace hardcoded colors via Cmd+K or Agent mode with explicit token references |
| Variant generation | Moderate: requires separate prompts; Design Mode helps with visual variants | Strong: Agent mode generates variants from the base component in a single instruction |
| TypeScript type refinement | Basic: generates plausible types that may not match your data models | Strong: aligns generated types with existing project interfaces when given explicit file references |
| Multi-file batch refactoring | Not supported: one component output at a time | Strong: Agent mode handles changes across multiple files in a single session |
| Storybook story generation | Not supported | Strong: Agent mode generates complete Storybook v8 story files from the component and fixture data |
| Accessibility remediation | Not supported: no a11y audit or fix capability | Strong: Agent mode adds ARIA attributes and keyboard handlers when given explicit WCAG instructions |
| Learning curve | Low: prompt-based interface with visual Design Mode fallback | Medium: requires understanding of @codebase context, Agent mode workflow, and .cursorrules configuration |
Key Takeaways
- AI is a scaffolding tool, not a shipping tool: it accelerates the mechanical work; engineering judgment determines what reaches production.
- Start with a structured specification document, not a creative prompt: consistency across 50+ components comes from the spec, not from careful individual prompting.
- v0.dev generates and Design Mode polishes; Cursor contextualizes with Agent mode. Each tool has a distinct, non-overlapping role.
- The .cursorrules file is the single highest-leverage configuration in this workflow; it enforces your design system automatically across every Cursor session.
- Quality gates convert raw speed into sustainable speed: TypeScript, visual, accessibility, and token compliance checks are not optional steps.
- Component sprawl is the silent killer of design system consistency; always check whether a variant suffices before generating a new component.
- Tailwind v4 moves token definitions to @theme in your CSS file; update your prompts, audit scripts, and .cursorrules to reference the correct location.
- React 19 makes the server/client component decision explicit; make it in the spec before you generate, not after you refactor.
Common Mistakes to Avoid
Interview Questions on This Topic
- How would you design a system to automatically generate UI components that adhere to a company's design system? (Senior)
- What are the risks of using AI to generate code at scale, and how do you mitigate them? (Mid-level)
- A team generated 30 UI components with AI and shipped them. Two weeks later, dark mode is broken on 18 of them. Walk through how you would diagnose and fix this. (Mid-level)
- How do you handle the version control and review workflow when generating 50+ components in a short time period? (Senior)
- When would you not use AI generation for a UI component? (Mid-level)
Frequently Asked Questions
Can I use this workflow with component libraries other than shadcn/ui?
Yes. The pipeline structure (spec, generate, contextualize, validate) applies to any component library. For Radix UI, Headless UI, Mantine, or custom component systems, adjust your v0.dev prompts to reference the correct primitives and your .cursorrules to enforce the correct import patterns. The quality gates are library-agnostic. The token compliance gate depends on your design system's implementation; update the grep patterns to match your token naming convention.
How do you handle components that require complex state management?
Generate the UI shell only: tell v0.dev explicitly to produce a presentational component that accepts data and callbacks via props. Then use Cursor Agent mode to extract any remaining state logic into a dedicated custom hook: useDataTable, useFormValidation, useModalState. The component renders; the hook manages state and side effects. This separation makes the component easier to test (mock the hook), reuse across different contexts, and replace without breaking the UI.
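One payoff of that separation: the logic a hook like useDataTable wraps can be kept as pure functions and unit-tested without rendering anything. A dependency-free sketch with illustrative names:

```typescript
// Pure table-state logic a useDataTable hook would delegate to.
interface SortState {
  column: string;
  direction: 'asc' | 'desc';
}

// Returns a sorted copy; never mutates the input rows.
function sortRows<T extends Record<string, unknown>>(rows: T[], sort: SortState): T[] {
  const factor = sort.direction === 'asc' ? 1 : -1;
  return [...rows].sort((a, b) =>
    String(a[sort.column]).localeCompare(String(b[sort.column])) * factor
  );
}

// Returns the slice of rows visible on a zero-indexed page.
function paginate<T>(rows: T[], page: number, pageSize: number): T[] {
  return rows.slice(page * pageSize, (page + 1) * pageSize);
}
```

The hook's job shrinks to holding state and calling these functions; the component's job shrinks to rendering their output.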
What about performance? Do AI-generated components have performance issues?
They can, and in predictable ways. AI-generated list and table components rarely include memoization. AI-generated form components often recreate handlers on every render. Common fixes: wrap row and cell components in React.memo, use useCallback for all event handlers passed as props, and virtualize any list exceeding 100 items with @tanstack/virtual. Profile with React DevTools Profiler before and after; look for components that render more than twice on a single state change. Performance is part of the integration test quality gate, not an afterthought.
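The reason recreated handlers defeat React.memo is referential equality: memo skips a re-render only when every prop is `===` its previous value. A dependency-free sketch of the difference (illustrative, not React code):

```typescript
// Two ways to hand out an event handler: a fresh closure per call
// versus a cached one - only the cached form survives an === check.
function makeHandlerFactory() {
  let cached: (() => void) | null = null;
  return {
    fresh: () => () => {},               // new function object every call
    stable: () => (cached ??= () => {}), // same function object every call
  };
}
```

In React terms, `fresh` is an inline arrow prop that forces a memoized child to re-render every time; `stable` is what useCallback gives you for a fixed dependency array.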
How many components per hour can you realistically generate with this workflow?
For simple presentational components β cards, badges, stat displays, alert banners β eight to twelve per hour across the full pipeline including quality gates. For complex interactive components β data tables, multi-step forms, comboboxes, calendar inputs β three to five per hour. The bottleneck is the quality gate review, not the generation. A simple component takes two minutes to generate and eight to ten minutes to review, test, and integrate. A complex component takes three to five minutes to generate and twenty to thirty minutes to review properly. Do not measure progress by generation speed β measure it by components that have passed all quality gates.
How does Tailwind v4's CSS-first configuration change this workflow?
Two practical changes. First, your design tokens are now defined in your CSS file under @theme, not in tailwind.config.ts. Update your v0.dev prompts, Cursor .cursorrules, and audit scripts to reference globals.css instead of tailwind.config.ts. Second, token names in @theme use CSS custom property syntax internally (--color-primary) but are referenced in Tailwind classes the same way (bg-primary, text-primary). Your semantic token class names do not change β only where they are defined. If you are migrating from Tailwind v3, the main work is moving token definitions from tailwind.config.ts to the @theme block and updating any direct references to the config file in your tooling.
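For orientation, here is what such a Tailwind v4 @theme block looks like in globals.css. The token names follow common shadcn/ui conventions and the color values are placeholders, not a recommended palette:

```css
/* globals.css - Tailwind v4 CSS-first token definitions (illustrative values) */
@import "tailwindcss";

@theme {
  --color-primary: oklch(0.55 0.2 260);
  --color-primary-foreground: oklch(0.98 0 0);
  --color-muted: oklch(0.96 0.01 260);
  --color-muted-foreground: oklch(0.55 0.02 260);
}

/* Utilities like bg-primary and text-muted-foreground are generated
   from these variables - no tailwind.config.ts entry required. */
```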
What is the right way to handle the React 19 server and client component decision for generated components?
Make the decision in the spec, before generation. For every component in your spec, add a boolean: isServerComponent. The rule is: if the component requires useState, useEffect, event handlers, or browser-only APIs, it is a client component and needs 'use client'. If it only receives data via props and renders it, it should be a server component. Make this explicit in your v0.dev prompt: either include 'add use client directive' or 'this is a server component, no useState or useEffect.' v0.dev defaults to client component patterns, so you must be explicit. After generation, Cursor Agent mode can convert a client component to a server component if the assessment changes during refactoring.
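The rule above can also be checked mechanically after generation. A hedged heuristic sketch (not a real lint rule, and regexes will miss aliased imports): flag generated source that must remain a client component:

```typescript
// Signals that force a component to stay on the client under the rule above.
const clientOnlyPatterns: RegExp[] = [
  /\buseState\s*\(/,
  /\buseEffect\s*\(/,
  /\bon[A-Z]\w*\s*=/,         // JSX event-handler props like onClick={...}
  /\bwindow\.|\bdocument\./,  // browser-only APIs
];

function needsUseClient(source: string): boolean {
  return clientOnlyPatterns.some(pattern => pattern.test(source));
}
```

Run it over a generated file before accepting Cursor's server-component conversion; a false negative here surfaces later as a runtime error in the server bundle, so treat the check as advisory.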
Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.