This is where the investment in token architecture pays its largest dividend. When you ask an AI agent to generate a new component, the agent needs to know what visual vocabulary is available. Without tokens, you provide a vague instruction like "match our brand style" and hope the agent guesses correctly. With tokens, you provide a machine-readable specification: here are the colors, the spacing increments, the type sizes, and the motion durations. Use only these values.
The token file becomes part of the agent's context. You include it in the system prompt or inject it as reference material. The agent reads the available tokens and constrains its output to only use declared values. The result is a generated component that is visually consistent with every other component in the system, because it draws from the same vocabulary.
import { readFileSync } from "node:fs";

// Inline the token file into the prompt. The path is illustrative;
// point it at wherever your token declarations actually live.
const tokenFileContents = readFileSync("./tokens.css", "utf8");

const systemPrompt = `You are a UI component generator.
You MUST use only the design tokens defined below.
Never use raw color values, pixel sizes, or duration values.
Reference tokens using var(--token-name) syntax.
Available tokens:
${tokenFileContents}
Rules:
- Backgrounds: use --color-background, --color-surface, or --color-surface-raised
- Text: use --color-text-primary, --color-text-secondary, or --color-text-muted
- Spacing: use --spacing-section, --spacing-element, --spacing-inline, or --spacing-compact
- Transitions: use --transition-interaction for hover/focus, --transition-layout for size changes
- Never invent new tokens. If a token does not exist for what you need, flag it.`;
// The agent now generates:
// background: var(--color-surface);
// padding: var(--spacing-element);
// color: var(--color-text-primary);
// transition: background var(--transition-interaction);
// Instead of:
// background: #ffffff;
// padding: 24px;
// color: #18181b;
// transition: background 100ms ease;
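The last rule, flagging missing tokens, can also be enforced after generation rather than trusted to the model. A minimal audit sketch (the function name, regexes, and token set here are illustrative, not from any particular library) that rejects raw values and undeclared token references in generated CSS:

```javascript
// Raw values the prompt forbids: hex colors and px/rem/ms/s literals.
const RAW_VALUE = /#[0-9a-fA-F]{3,8}\b|\b\d+(?:\.\d+)?(?:px|rem|ms|s)\b/;
// Token references in var(--name) syntax.
const TOKEN_REF = /var\(--[a-z0-9-]+\)/g;

function auditGeneratedCss(css, declaredTokens) {
  const problems = [];
  for (const line of css.split("\n")) {
    if (RAW_VALUE.test(line)) {
      problems.push(`raw value: ${line.trim()}`);
    }
    for (const ref of line.match(TOKEN_REF) ?? []) {
      const name = ref.slice(4, -1); // strip "var(" and ")"
      if (!declaredTokens.has(name)) {
        problems.push(`undeclared token: ${name}`);
      }
    }
  }
  return problems;
}

const declared = new Set(["--color-surface", "--spacing-element"]);
auditGeneratedCss("background: var(--color-surface);", declared); // passes: []
auditGeneratedCss("padding: 24px;", declared); // flags "raw value: padding: 24px;"
```

A check like this closes the loop: the system prompt constrains what the agent should emit, and the audit verifies what it actually emitted before the component lands in the codebase.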
A token file in the system prompt is worth a thousand words of style guidance. Machines do not interpret aesthetic direction. They follow specifications.