
Component Design for Conversational Interfaces

Cards, bubbles, tool results, streaming indicators, and error states.

The Prompt Engineering Project · February 22, 2025 · 7 min read

Quick Answer

Chat UI components are specialized interface elements designed for conversational AI products. Essential components include message bubbles with role differentiation, streaming text indicators, typing animations, tool-use and citation displays, feedback buttons, and multi-turn thread management. Effective chat UIs prioritize readability, progressive disclosure, and clear attribution of AI-generated content.

Chat-based AI interfaces are not chat applications. They look similar -- messages in a scrollable column -- but the components they require are fundamentally different from anything in a standard UI library. A traditional message bubble handles text. A conversational AI interface handles streaming tokens, tool invocation results, code output, error states that have no equivalent in human-to-human messaging, and citation metadata that links generated content back to its sources. Building these interfaces with generic chat components produces an experience that feels incomplete at best and broken at worst.

Here are six component patterns that every conversational AI interface needs, along with the design decisions and TypeScript interfaces that define them.

Message Bubbles

The message bubble is the atomic unit of a conversational interface. It must immediately communicate three things: who sent this message, when it was sent, and whether the sender was a human or an AI assistant. The visual distinction between user and assistant messages is the single most important design decision in the entire interface. If a user cannot instantly tell which messages are theirs and which are the system's, the conversation becomes unreadable.

The standard approach is alignment and color. User messages align right with a colored background. Assistant messages align left with a neutral background. Avatars reinforce the distinction but should not be the only differentiator -- they are too small to scan at speed. Timestamps should be available but not prominent. A relative format like "2 min ago" is more scannable than an absolute timestamp in most contexts.

message-bubble.types.ts
interface MessageBubbleProps {
  role: 'user' | 'assistant' | 'system'
  content: string
  timestamp: Date
  avatar?: {
    src: string
    alt: string
  }
  status?: 'sending' | 'sent' | 'error'
  isStreaming?: boolean
  onRetry?: () => void
}

// Accessibility: role="listitem" within a role="list" container.
// Assistant messages need aria-label="Assistant message"
// to distinguish from user messages for screen readers.
// Timestamp uses <time> element with datetime attribute.

Responsive behavior matters more here than in most components. On narrow screens, full-width bubbles with role-based background colors work better than alignment-based differentiation, because indented bubbles waste horizontal space that the content needs. The color distinction carries the semantic load when alignment cannot.
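The role-to-layout decision above can be sketched as a pure function. This is a minimal illustration, not a prescribed implementation: the CSS custom property names and the 480px breakpoint are assumptions standing in for whatever the design system defines.

```typescript
// Sketch: derive bubble layout from message role and viewport width.
// Token names like 'bubble-user-bg' and the 480px breakpoint are
// hypothetical placeholders, not part of any specific design system.

type Role = 'user' | 'assistant' | 'system'

interface BubbleLayout {
  align: 'flex-start' | 'flex-end'
  background: string
  fullWidthOnNarrow: boolean
}

function bubbleLayout(role: Role, viewportWidth: number): BubbleLayout {
  // On narrow screens, full-width bubbles drop alignment and let the
  // role-based background color carry the semantic distinction.
  const narrow = viewportWidth < 480
  return {
    align: role === 'user' ? 'flex-end' : 'flex-start',
    background:
      role === 'user'
        ? 'var(--bubble-user-bg)'
        : 'var(--bubble-assistant-bg)',
    fullWidthOnNarrow: narrow,
  }
}
```

Keeping this logic in one function means the desktop and mobile renderings cannot drift apart: both consume the same layout object.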

Streaming Indicators

AI responses do not arrive all at once. They stream token by token, and the interface must render them incrementally without jarring layout shifts, flickering, or false signals of completion. The streaming indicator is the component that communicates: the system is actively generating a response. It is not done yet. Wait.

The traditional approach -- a spinner or a typing indicator with bouncing dots -- fails for AI interfaces because it provides no information about progress. A spinner says "something is happening." A streaming text display says "here is what the system is thinking, and more is coming." The second is dramatically more useful because it lets the user evaluate the response as it forms and interrupt if the system is heading in the wrong direction.

streaming-indicator.types.ts
interface StreamingDisplayProps {
  tokens: string[]
  isComplete: boolean
  cursor?: {
    visible: boolean
    blinkRate: number  // ms, typically 500-800
    character: string  // usually '|' or a block cursor
  }
  onTokensRendered?: (count: number) => void
}

// The cursor element animates opacity between 1 and 0.
// When isComplete transitions to true, the cursor fades out
// over 200ms rather than disappearing abruptly.
// Content container uses min-height to prevent collapse
// during the brief pause between token batches.

A spinner says something is happening. Streaming text says what is happening. The difference is the difference between waiting and reading.

The key accessibility concern is that streaming content must not trigger screen reader announcements on every token. Use aria-live="polite" on the container and debounce updates so that assistive technology announces meaningful chunks rather than individual words. When streaming completes, announce the full message once.

Tool Result Cards

When an AI agent calls a tool -- a database query, a web search, a calculation, an API request -- the result needs a structured display that is visually distinct from regular text. Tool results are data, not prose. They should look like data: contained in a card with a clear label identifying which tool produced them, a structured layout appropriate to the data type, and a visual treatment that sets them apart from the conversational flow.

tool-result-card.types.ts
interface ToolResultCardProps {
  toolName: string
  toolIcon?: React.ReactNode
  status: 'running' | 'success' | 'error'
  duration?: number  // ms
  result: ToolResultContent
  isCollapsible?: boolean
  defaultCollapsed?: boolean
}

type ToolResultContent =
  | { type: 'table'; headers: string[]; rows: string[][] }
  | { type: 'key-value'; entries: { key: string; value: string }[] }
  | { type: 'text'; content: string }
  | { type: 'json'; data: unknown }
  | { type: 'image'; src: string; alt: string }

// The card shows a header bar with the tool name and status.
// While running: pulsing border animation, "Running..." label.
// On success: static border, duration badge, expandable content.
// On error: red border, error message, optional retry action.

Collapsibility is essential for tool results. A database query that returns fifty rows should not dominate the conversation. Show a summary -- the tool name, row count, and first few rows -- and let the user expand to see the full result. Default to collapsed for large results and expanded for small ones. The threshold depends on the data type, but three to five rows or ten lines of text is a reasonable default for the expanded view.

Give tool result cards a distinct visual treatment -- a different background color, a left border accent, or a subtle inset shadow. The user must be able to distinguish tool outputs from the assistant's own text at a glance.
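The collapse-by-default decision can live in one helper keyed on the `ToolResultContent` union defined above. The thresholds here follow the defaults suggested in the text (three to five rows, ten lines) and are assumptions to tune per product, not fixed rules.

```typescript
// Sketch: decide whether a tool result card should start collapsed.
// Reuses the ToolResultContent union from the interface above; the
// row/line thresholds are illustrative defaults.

type ToolResultContent =
  | { type: 'table'; headers: string[]; rows: string[][] }
  | { type: 'key-value'; entries: { key: string; value: string }[] }
  | { type: 'text'; content: string }
  | { type: 'json'; data: unknown }
  | { type: 'image'; src: string; alt: string }

function shouldCollapse(result: ToolResultContent): boolean {
  switch (result.type) {
    case 'table':
      return result.rows.length > 5
    case 'key-value':
      return result.entries.length > 5
    case 'text':
      return result.content.split('\n').length > 10
    case 'json':
      // Pretty-printed line count approximates rendered height.
      return JSON.stringify(result.data, null, 2).split('\n').length > 10
    case 'image':
      return false // an image is already its own summary
  }
}
```

Because the switch is exhaustive over the union, adding a new result type forces a conscious decision about its collapse behavior at compile time.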

Error States

AI interfaces produce error types that traditional UIs never encounter. Model failures, rate limit exhaustion, context window overflow, content policy violations, and tool execution errors each require different messaging and different recovery actions. A generic "Something went wrong" message is worse than useless -- it tells the user nothing about whether they should retry, rephrase, or wait.

error-state.types.ts
interface ConversationalErrorProps {
  type:
    | 'model_error'      // Model returned an error or empty response
    | 'rate_limit'       // Too many requests in the time window
    | 'context_overflow' // Conversation exceeded the context window
    | 'content_policy'   // Response filtered by safety systems
    | 'tool_error'       // A tool invocation failed
    | 'network_error'    // Connection lost during streaming
  message: string
  retryable: boolean
  retryAction?: () => void
  alternativeActions?: {
    label: string
    action: () => void
  }[]
}

// Each error type has a specific recovery suggestion:
// model_error:      "Try again" button
// rate_limit:       "Wait N seconds" countdown + auto-retry
// context_overflow: "Start a new conversation" or "Summarize and continue"
// content_policy:   "Rephrase your request" with guidance
// tool_error:       "Retry tool" or "Skip and continue"
// network_error:    "Reconnect" with automatic retry on connection restore

Context overflow deserves special attention because it is invisible until it happens. The user has been having a productive conversation, and suddenly the system cannot process their message because the conversation history exceeds the model's context window. The error state should explain what happened in plain language and offer concrete next steps: start a new conversation, or summarize the current conversation to free up context space. Never expose token counts or model limitations in raw technical terms.
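The per-type recovery table in the comments above can be encoded directly, so the error component never has to guess which affordance to render. The labels and the `retryAfterMs` field are illustrative assumptions, not a fixed API.

```typescript
// Sketch: map each conversational error type to its recovery affordance.
// Labels and the retryAfterMs field are illustrative, not prescriptive.

type ErrorType =
  | 'model_error'
  | 'rate_limit'
  | 'context_overflow'
  | 'content_policy'
  | 'tool_error'
  | 'network_error'

interface Recovery {
  label: string
  autoRetry: boolean // retry without user action when possible
  retryAfterMs?: number // countdown for rate-limit errors
}

function recoveryFor(type: ErrorType, retryAfterMs = 0): Recovery {
  switch (type) {
    case 'model_error':
      return { label: 'Try again', autoRetry: false }
    case 'rate_limit':
      return { label: 'Retrying automatically', autoRetry: true, retryAfterMs }
    case 'context_overflow':
      return { label: 'Start a new conversation', autoRetry: false }
    case 'content_policy':
      return { label: 'Rephrase your request', autoRetry: false }
    case 'tool_error':
      return { label: 'Retry tool', autoRetry: false }
    case 'network_error':
      return { label: 'Reconnect', autoRetry: true }
  }
}
```

Centralizing the mapping keeps recovery copy consistent everywhere an error can surface, which matters once tool errors and streaming errors render in different parts of the UI.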

Code Output

AI assistants generate code constantly, and code output in a conversational interface has requirements that a standard code block does not satisfy. It must be syntax-highlighted for readability, copyable with a single click, identifiable by language, and visually distinct from surrounding prose. It must also handle the streaming case: code that appears token by token must remain syntax-highlighted as it streams, which means the highlighter must be tolerant of incomplete syntax.

code-output.types.ts
interface CodeOutputProps {
  code: string
  language: string
  filename?: string
  isStreaming?: boolean
  showLineNumbers?: boolean
  highlightLines?: number[]
  maxHeight?: number  // px, scrollable overflow
  onCopy?: () => void
  actions?: {
    label: string
    icon: React.ReactNode
    action: () => void
  }[]
}

// The copy button appears on hover in the top-right corner.
// A "Copied" confirmation replaces the button text for 2 seconds.
// Language badge appears in the top-left corner.
// When streaming, syntax highlighting re-runs on each token batch.
// Use a debounced highlighter to avoid re-parsing on every token.

Beyond copy, consider additional actions relevant to the context. A "Run" button for executable code snippets. An "Apply" button that inserts the code into an editor. A "Diff" view when the code modifies an existing file. These actions transform the code block from a passive display into an interactive tool that reduces the friction between seeing code and using it.
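The "debounced highlighter" note above needs a policy for when to re-parse. One simple, testable approach, sketched here under assumed defaults, is to re-highlight on newlines (when a complete line is likely available) or after every N tokens, whichever comes first; N=8 is an illustrative value.

```typescript
// Sketch: gate syntax-highlighter re-runs during streaming. Re-parsing
// on every token is wasteful; re-highlight on newlines or every N
// tokens instead. N=8 is an assumed default, not a recommendation.

function makeHighlightGate(everyN = 8): (token: string) => boolean {
  let sinceLast = 0
  return (token: string): boolean => {
    sinceLast++
    if (token.includes('\n') || sinceLast >= everyN) {
      sinceLast = 0
      return true // caller should re-run the highlighter now
    }
    return false
  }
}
```

The caller keeps appending tokens to the raw buffer either way; the gate only controls how often the (comparatively expensive) highlight pass runs over it.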

Citation Components

When an AI response draws on specific sources -- retrieved documents, web pages, database records -- those sources must be visible and verifiable. Citation components serve two purposes: they build trust by showing where information came from, and they provide navigation by letting users access the original source.

citation.types.ts
interface CitationProps {
  index: number           // [1], [2], etc. inline reference
  source: {
    title: string
    url?: string
    domain?: string
    snippet: string       // relevant excerpt from the source
    type: 'document' | 'web' | 'database' | 'api'
    confidence?: number   // 0-1, how relevant this source is
    accessedAt: Date
  }
  display: 'inline' | 'footnote' | 'sidebar'
}

// Inline citations render as superscript numbers [1] that
// expand to a tooltip on hover showing title and snippet.
// Footnote citations collect at the bottom of the message.
// Sidebar citations render in a parallel column on wide screens.
// Each citation is a link to the original source when a URL exists.

The confidence indicator is optional but valuable. When the system can express how relevant a source is to the generated content, the user can calibrate their trust accordingly. A high-confidence citation means the answer is well-supported. A low-confidence citation means the system found something related but is less certain about its relevance. Display this as a subtle visual treatment -- a filled versus outlined icon, or a color gradient -- not as a raw number that means nothing to most users.
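Bucketing the score keeps the raw number out of the UI. This sketch maps the 0-1 `confidence` field to three treatments; the cut points (0.75 and 0.4) and the treatment names are assumptions to calibrate against real retrieval scores.

```typescript
// Sketch: bucket a 0-1 citation confidence score into a visual
// treatment instead of showing the raw number. The 0.75/0.4 cut
// points and treatment names are illustrative assumptions.

type CitationTreatment = 'filled' | 'outlined' | 'muted'

function confidenceTreatment(confidence?: number): CitationTreatment {
  if (confidence === undefined) return 'outlined' // unknown: neutral default
  if (confidence >= 0.75) return 'filled' // well-supported
  if (confidence >= 0.4) return 'outlined' // related, less certain
  return 'muted' // weakly relevant
}
```

Treating "no score" the same as mid-confidence avoids penalizing sources just because the retrieval layer did not report a score.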


Key Takeaways

1. Message bubbles must instantly communicate role (user vs. assistant) through alignment, color, and optional avatars. The visual distinction is the most important design decision in the interface.

2. Streaming indicators should show the actual content forming, not a generic spinner. Use a blinking cursor and debounced aria-live updates for accessibility.

3. Tool result cards need structured layouts, collapsibility for large results, and distinct visual treatment that separates data from prose in the conversation flow.

4. Error states must be specific to AI failure modes -- rate limits, context overflow, content policy -- with concrete recovery actions, not generic error messages.

5. Code output requires syntax highlighting that works during streaming, one-click copy, and contextual actions like Run or Apply that reduce the gap between reading and using.

6. Citation components build trust by linking generated content to verifiable sources. Use inline references with expandable details, not raw URLs.

