
How 23 Column Prompts Become One Synchronized Knowledge Base

One questionnaire. Twenty-three prompts. A complete company identity in under four minutes.

The Prompt Engineering Project March 15, 2026 9 min read

Most teams using language models operate in a fundamentally serial mode. They write a prompt, run it, read the output, write another prompt that builds on the first, run that, and repeat. Each step depends on the last. Each step adds latency. And each step accumulates context that the model must carry forward, increasing the probability of drift, hallucination, and contradiction. By the time the fifth prompt executes, the model is working with a context window stuffed with its own prior outputs, and the quality curve is pointing down.

The Prompt Library System works differently. It takes a single input -- one questionnaire -- and dispatches it simultaneously to nine specialized prompt libraries. Each library contains column prompts: structured, single-purpose prompts that each produce exactly one piece of a larger knowledge base. Twenty-three column prompts execute in parallel, reading from the same source, writing to isolated outputs, and completing in under four minutes. The result is not a conversation transcript. It is a synchronized knowledge base.

This article explains what column prompts are, how twenty-three of them execute from one input, and why this architecture produces better results than any sequential prompting approach.

The Problem: Single-Prompt Tools Fail at Scale

The single-prompt paradigm has a scaling ceiling, and most teams hit it before they realize it exists. A single prompt that generates a company mission statement works fine. A single prompt that generates a mission statement, vision, values, positioning, competitive advantages, and bold claims does not. Not because the model cannot produce all of those things, but because asking for all of them in one pass forces the model to manage too many competing objectives simultaneously.

The failure mode is subtle. The model does not refuse. It does not throw an error. It produces output that looks plausible but is internally incoherent. The mission statement and the positioning drift apart. The competitive advantages do not map to the values. The bold claims contradict the differentiators. Each individual section reads well in isolation, but the whole does not hold together as a system.

Sequential chaining -- running prompts one after another and feeding each output into the next -- solves the coherence problem but introduces a new one: context accumulation. By prompt seven, the model is carrying forward the full text of six prior outputs plus the original input. The context window fills. The model starts summarizing rather than preserving detail. Quality degrades in ways that are difficult to detect and impossible to predict.

23 Column Prompts · 9 Prompt Libraries · <4 Minutes to Complete · 1 Questionnaire Input

What Column Prompts Are

A column prompt is a structured prompt that produces exactly one column of output in a knowledge base. It has a single responsibility: read the questionnaire input, apply its specialized logic, and write one defined output. A column prompt that generates a mission statement does nothing else. It does not also generate values. It does not reference the output of the positioning prompt. It reads from the questionnaire and writes its single column.

The term "column" is literal. In the Notion database that stores the knowledge base, each column prompt maps to one database column. The mission statement prompt fills the Mission Statement column. The target audience prompt fills the Target Audience column. The competitive advantages prompt fills the Competitive Advantages column. Twenty-three prompts, twenty-three columns, one row per company.
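Because the column-to-database mapping is literal, the write step can be as simple as assembling one properties payload per record. A minimal sketch, assuming the columns are rich-text properties updated through Notion's public API; the helper name and sample values are illustrative, not the system's actual code:

```typescript
// Each column prompt's output becomes one property in a Notion page-update
// payload. Property names match the database columns one-to-one.
type ColumnOutput = { column: string; text: string };

// Build the `properties` object the Notion API expects for rich_text columns.
function toNotionProperties(outputs: ColumnOutput[]): Record<string, unknown> {
  const properties: Record<string, unknown> = {};
  for (const { column, text } of outputs) {
    properties[column] = {
      rich_text: [{ type: 'text', text: { content: text } }],
    };
  }
  return properties;
}

const payload = toNotionProperties([
  { column: 'Mission Statement', text: 'We build the tools...' },
  { column: 'Target Audience', text: 'Technical founders...' },
]);
// `payload` would be passed as `properties` to a page-update request.
```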

column-prompt-structure.ts
interface ColumnPrompt {
  id: string;                    // Unique identifier
  column: string;                // Target database column
  library: string;               // Parent prompt library
  input: 'questionnaire';       // Always reads from questionnaire
  outputFormat: 'text' | 'json' | 'markdown';
  dependencies: string[];        // Other columns this reads (usually empty)
  maxTokens: number;             // Output budget
  temperature: number;           // Creativity dial

  // The prompt itself
  systemPrompt: string;          // Role and constraints
  userPrompt: string;            // Questionnaire data + instruction
}

// Example: Mission Statement column prompt
const missionStatement: ColumnPrompt = {
  id: 'company-identity-mission',
  column: 'Mission Statement',
  library: 'company-identity',
  input: 'questionnaire',
  outputFormat: 'text',
  dependencies: [],              // No cross-column dependencies
  maxTokens: 500,
  temperature: 0.7,
  systemPrompt: 'You are a brand strategist...',
  userPrompt: '{questionnaire_data}\n\nGenerate a mission statement...',
};

The critical design constraint is isolation. Column prompts do not read each other's outputs during execution. They all read from the same source -- the questionnaire -- and they all write to isolated targets. This means they can execute in any order, including simultaneously, without race conditions or dependency conflicts.

A column prompt has one job: read the questionnaire, produce one output. No cross-dependencies, no accumulated context, no drift.
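The isolation constraint makes parallel execution trivial to express. A hedged sketch of running one library's prompts concurrently, with a stand-in for the model call (the function names are hypothetical, not the system's real API):

```typescript
// Every prompt reads the same questionnaire and writes its own column, so
// Promise.all can run them together with no ordering concerns.
type Questionnaire = Record<string, string>;
type ColumnResult = { column: string; output: string };

// Stand-in for one model call; a real implementation would send the
// prompt's systemPrompt/userPrompt to the model.
async function runColumnPrompt(
  column: string,
  q: Questionnaire,
): Promise<ColumnResult> {
  return { column, output: `[${column}] derived from ${q.brandName}` };
}

async function runLibrary(
  columns: string[],
  q: Questionnaire,
): Promise<ColumnResult[]> {
  // No prompt sees another's output, so all of them start at once.
  return Promise.all(columns.map((c) => runColumnPrompt(c, q)));
}
```

`Promise.all` also preserves input order, so results map back to columns without bookkeeping.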

How 23 Prompts Execute

Execution follows a fan-out pattern. The questionnaire is submitted once. The system reads it, validates it, and then dispatches it to all nine prompt libraries simultaneously. Each library receives the full questionnaire, extracts the fields relevant to its domain, and runs its column prompts. The Company Identity library runs its prompts. The Content Strategy library runs its prompts. The Target Audience library runs its prompts. All at the same time.

Within each library, column prompts execute in parallel where possible and in sequence where dependencies exist. Most column prompts are fully independent -- the mission statement prompt and the values prompt can run simultaneously because neither needs the other's output. A few column prompts have soft dependencies -- the bold claims prompt benefits from seeing the positioning output -- but these are handled through a second pass, not through serial chaining.

1. Questionnaire validation -- The system checks that all required fields are present and well-formed. Missing fields trigger a specific error rather than allowing prompts to operate on incomplete data.

2. Library dispatch (fan-out) -- The validated questionnaire is sent to all nine prompt libraries simultaneously. Each library receives the same input. No library waits for another.

3. Column prompt execution -- Within each library, independent column prompts run in parallel. Each prompt reads from the questionnaire, not from other prompts. Output is written to the target column.

4. Dependent prompt execution -- A small number of column prompts with cross-column dependencies run in a second pass, after their dependencies have completed. This is typically fewer than five prompts out of twenty-three.

5. Knowledge base assembly (fan-in) -- All column outputs are assembled into a single knowledge base record. The system validates completeness -- every column must be populated -- before marking the record as complete.
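The five steps can be sketched as a small orchestrator. This is an illustration under stated assumptions -- in-memory outputs, stand-in model calls, hypothetical names -- not the system's actual implementation:

```typescript
// Validate, fan out the independent prompts, run dependent prompts in a
// second pass, then fan in with a completeness check.
type Prompt = { column: string; dependencies: string[] };
type Outputs = Record<string, string>;

// Step 1: reject incomplete questionnaires with a specific error.
function validate(q: Record<string, string>, required: string[]): void {
  const missing = required.filter((f) => !q[f]);
  if (missing.length) throw new Error(`Missing fields: ${missing.join(', ')}`);
}

async function execute(
  prompts: Prompt[],
  q: Record<string, string>,
): Promise<Outputs> {
  const outputs: Outputs = {};
  // Stand-in for a model call; dependent prompts may read completed columns.
  const run = async (p: Prompt) => {
    const parts = [q.brandName, ...p.dependencies.map((d) => outputs[d])];
    outputs[p.column] = `${p.column} <- ${parts.join(' + ')}`;
  };
  // Pass 1 (steps 2-3): all independent prompts in parallel.
  await Promise.all(prompts.filter((p) => p.dependencies.length === 0).map(run));
  // Pass 2 (step 4): the few prompts that read other columns.
  await Promise.all(prompts.filter((p) => p.dependencies.length > 0).map(run));
  // Fan-in (step 5): every column must be populated.
  const empty = prompts.filter((p) => !outputs[p.column]);
  if (empty.length) throw new Error('Incomplete knowledge base');
  return outputs;
}
```

Note that the second pass is flat, not a chain: dependent prompts read finished columns, never each other, so context never accumulates beyond one hop.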

The entire process completes in under four minutes. This is not because each prompt is fast -- individual prompts take ten to forty-five seconds depending on complexity -- but because parallelism eliminates the serial bottleneck. Twenty-three prompts running in parallel complete faster than seven prompts running in sequence.

The four-minute completion time assumes a standard questionnaire with all fields populated. Incomplete questionnaires still execute but may produce lower-quality outputs for columns that depend on the missing fields.

The Questionnaire as Single Input

The questionnaire is the architectural linchpin. It is not a form in the casual sense -- it is a structured data artifact that contains everything the system needs to produce a complete knowledge base. Brand name, business type, target audience description, key messages, competitive landscape, visual style preferences, SEO focus areas, and tone parameters. Every field exists because at least one column prompt reads it.
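One possible shape for such a questionnaire, sketched from the fields listed above. The field names and types are assumptions for illustration, not the system's actual schema:

```typescript
// A structured data artifact, not a free-form document: every field exists
// because at least one column prompt reads it.
interface Questionnaire {
  brandName: string;
  businessType: string;
  targetAudience: string; // free-text audience description
  keyMessages: string[];
  competitiveLandscape: string;
  visualStylePreferences: string;
  seoFocusAreas: string[];
  toneParameters: {
    formality: 'casual' | 'neutral' | 'formal';
    voice: string;
  };
}

// Example value (hypothetical company).
const example: Questionnaire = {
  brandName: 'Acme Robotics',
  businessType: 'B2B hardware',
  targetAudience: 'Operations leads at mid-size manufacturers',
  keyMessages: ['Reliability first', 'Setup in a day'],
  competitiveLandscape: 'Legacy integrators and in-house tooling',
  visualStylePreferences: 'Clean, industrial, high-contrast',
  seoFocusAreas: ['warehouse automation', 'robotic picking'],
  toneParameters: { formality: 'neutral', voice: 'confident, plainspoken' },
};
```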

The power of the single-input model is reproducibility. The same questionnaire, run against the same prompt libraries, produces the same knowledge base, up to the model's bounded sampling variance. Change one field -- say, the target audience description -- and only the column prompts that read that field produce different outputs. The system's structure is deterministic with respect to its input, which makes it testable, debuggable, and auditable.

This is fundamentally different from the conversational model, where the same "input" (a chat history) is never exactly the same twice because it depends on the model's prior responses, which are non-deterministic. The questionnaire removes the model from the input pipeline entirely. The human fills out the questionnaire. The system dispatches it. The model only appears at the column prompt level, where its non-determinism is bounded and isolated.

The questionnaire removes the model from the input pipeline. Humans define the input. The model only appears where its non-determinism is bounded and isolated.

The Knowledge Base Output

The output is not a document. It is a structured database record with twenty-three populated columns. Each column contains the output of one column prompt, formatted according to its specification. Some columns contain short text -- a one-sentence mission statement. Others contain structured data -- a JSON array of competitive advantages. Others contain long-form content -- a two-paragraph brand narrative.

The knowledge base record is the unit of value. It is not an intermediate artifact that needs further processing. It is the deliverable. A company's identity, strategy, audience profiles, content pillars, and competitive positioning -- all in one record, all derived from one input, all produced in under four minutes.

Knowledge Base Record (Partial)
Column                    | Source Library       | Sample Output
──────────────────────────┼──────────────────────┼──────────────────────
Mission Statement         | Company Identity     | "We build the tools..."
Core Values               | Company Identity     | ["Innovation", "Trans..."]
Positioning Statement     | Company Identity     | "For [audience] who..."
Competitive Advantages    | Company Identity     | ["First-mover in...", ...]
Primary Persona           | Target Audience      | { name: "Technical...", ... }
Content Pillars           | Content Strategy     | ["AI Infrastructure", ...]
Editorial Calendar        | Content Strategy     | { Q2: [...], Q3: [...] }
SEO Keyword Clusters      | SEO Library          | { primary: [...], ... }
Social Media Strategy     | Social Media         | { platforms: [...], ... }
Brand Voice Guidelines    | Brand Identity       | { tone: "...", ... }

Because each column is independently produced, each column can be independently updated. If the company changes its positioning, only the positioning-related column prompts need to re-execute. The mission statement, values, and audience profiles remain unchanged. This granular updateability is impossible with a monolithic prompt that produces everything at once -- changing one section requires regenerating all sections.
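Granular updates of this kind can be driven by a field-to-column map: record which questionnaire fields each column prompt reads, then re-run only the affected columns when a field changes. A hypothetical sketch -- the mapping entries and function name are illustrative:

```typescript
// Which questionnaire fields each column prompt reads (illustrative subset).
const readsFrom: Record<string, string[]> = {
  'Mission Statement': ['brandName', 'keyMessages'],
  'Positioning Statement': ['targetAudience', 'competitiveLandscape'],
  'Primary Persona': ['targetAudience'],
};

// Given the fields that changed, return the columns that must re-execute.
function columnsToRerun(changedFields: string[]): string[] {
  const changed = new Set(changedFields);
  return Object.entries(readsFrom)
    .filter(([, fields]) => fields.some((f) => changed.has(f)))
    .map(([column]) => column);
}
```

Editing the target audience description, for example, selects only the positioning and persona columns; the mission statement is untouched.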

The knowledge base also serves as the input for downstream systems. The social media library reads from the knowledge base to generate platform-specific content. The sales enablement library reads from it to generate outreach sequences. The knowledge base is not the end of the pipeline -- it is the foundation that every subsequent operation builds on.

The column prompt model inverts the assumptions that most teams bring to language model work. Instead of asking one prompt to do many things, it asks many prompts to each do one thing. Instead of chaining outputs through a growing context window, it fans out from a single, clean input. Instead of producing a document, it produces a database record. Instead of taking twenty minutes of iterative conversation, it takes four minutes of parallel execution.

The result is not just faster. It is structurally better. Each column is isolated, testable, and independently updatable. The knowledge base is reproducible from its input. The system scales by adding columns, not by making prompts longer. And the quality of each column is independent of the number of columns -- the twenty-third prompt executes with the same clean context as the first.


Key Takeaways

1. Column prompts are single-purpose prompts that each produce one column of a knowledge base. They read from the questionnaire input and write to an isolated output -- no cross-dependencies during execution.

2. The fan-out execution model dispatches one questionnaire to nine prompt libraries simultaneously. Twenty-three column prompts run in parallel, completing a full knowledge base in under four minutes.

3. Single-prompt and sequential-chaining approaches fail at scale because of competing objectives and context accumulation. Column prompts eliminate both problems through isolation and parallelism.

4. The knowledge base is a structured database record, not a document. Each column is independently produced, independently testable, and independently updatable without regenerating the entire record.

5. The questionnaire as single input makes the system reproducible, testable, and auditable. The same input always produces the same knowledge base, and changing one field only affects the columns that read it.

