Prompt Studio lets you manage prompts on the Infinium platform and fetch them at runtime. This keeps prompt content out of your codebase and lets you version, test, and update prompts without redeploying.

Fetching a Prompt

import { InfiniumClient } from 'infinium-o2';

const client = new InfiniumClient({ agentId: '...', agentSecret: '...' });

const prompt = await client.getPrompt(
  'your-prompt-id',
  'your-prompt-key',
  'latest',
);

console.log(prompt.name);     // Prompt name from the platform
console.log(prompt.version);  // Version number
console.log(prompt.content);  // The prompt text

Parameters

Parameter    Type                     Default     Description
promptId     string                   required    UUID of the prompt
promptKey    string                   required    Secret key for prompt authentication
version      string | number          'latest'    Version number or 'latest'
variables    Record<string, string>   undefined   Variables to substitute into the template

Authentication

Prompt Studio uses its own auth scheme separate from agent credentials:

  • promptId — identifies which prompt to fetch
  • promptKey — authenticates access (sent as x-prompt-id and x-prompt-key headers)
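The SDK sends these two values as HTTP headers on every prompt fetch. A minimal sketch of the header construction, assuming only the header names documented above (the endpoint URL in the comment is illustrative, not the SDK's real one):

```typescript
// Sketch of the auth headers the SDK attaches to a prompt fetch.
function promptAuthHeaders(promptId: string, promptKey: string): Record<string, string> {
  return {
    'x-prompt-id': promptId,   // identifies which prompt to fetch
    'x-prompt-key': promptKey, // authenticates access to it
  };
}

// Roughly equivalent to what the SDK does internally (URL is hypothetical):
// await fetch('https://api.example.com/prompts', { headers: promptAuthHeaders(id, key) });
```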

Variable Substitution

If your prompt template contains {{variableName}} placeholders, pass a variables object to substitute them:

const prompt = await client.getPrompt(
  'your-prompt-id',
  'your-prompt-key',
  'latest',
  {
    customerName: 'Acme Corp',
    tone: 'professional',
    language: 'English',
  },
);

// Raw template (with placeholders)
console.log(prompt.content);
// "Write a {{tone}} email to {{customerName}} in {{language}}."

// Rendered (with variables substituted)
console.log(prompt.renderedContent);
// "Write a professional email to Acme Corp in English."

Version Pinning

Pin to a specific version for stability, or use 'latest' for the most recent:

// Always use version 3 (stable, tested)
const prompt = await client.getPrompt('...', '...', 3);

// Always use the latest version (may change)
const prompt = await client.getPrompt('...', '...', 'latest');
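One pattern that combines both modes is to prefer a pinned version but fall back to 'latest' if the pinned fetch fails. This is a sketch under the assumption that a failed fetch rejects with an error; the SDK's actual error shape is not documented here:

```typescript
// Sketch: try a pinned version first, fall back to 'latest' on failure.
// Assumes getPrompt rejects when the pinned version cannot be fetched.
async function getPromptWithFallback(
  client: { getPrompt: (id: string, key: string, v: string | number) => Promise<unknown> },
  id: string,
  key: string,
  pinned: number,
): Promise<unknown> {
  try {
    return await client.getPrompt(id, key, pinned);
  } catch {
    return await client.getPrompt(id, key, 'latest');
  }
}
```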

Using with LLM Calls

A common pattern is to fetch a prompt and use it as the system message:

import OpenAI from 'openai';

const openai = new OpenAI();

const prompt = await client.getPrompt(
  '...', '...',
  'latest',
  { tone: 'friendly', maxWords: '200' },
);

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: prompt.renderedContent! },
    { role: 'user', content: userMessage },
  ],
});
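The non-null assertion on renderedContent is safe only when variables were passed; renderedContent is undefined otherwise (see the return type table below). A small guard that falls back to the raw template avoids the assertion:

```typescript
// renderedContent is undefined when no variables were passed;
// fall back to the raw template instead of asserting non-null.
function systemContent(prompt: { content: string; renderedContent?: string }): string {
  return prompt.renderedContent ?? prompt.content;
}
```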

Auto-Capture in Traces

When getPrompt() is called inside a function wrapped with client.trace(), the prompt fetch is automatically recorded in the trace as a CapturedPromptFetch:

const replyAgent = client.trace('Customer Reply Agent')(
  async (message: string) => {
    // This prompt fetch is auto-captured in the trace
    const prompt = await client.getPrompt(
      '...', '...',
      'latest',
      { tone: 'empathetic' },
    );

    const resp = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [
        { role: 'system', content: prompt.renderedContent! },
        { role: 'user', content: message },
      ],
    });
    return resp.choices[0].message.content;
  }
);

The trace will include:

  • promptId — which prompt was fetched
  • promptName — the prompt’s name
  • version — which version was used
  • variablesUsed — which variable keys were substituted
  • fetchedAt — ISO timestamp
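Transcribing those fields into a TypeScript shape, the captured record looks roughly like this. This is an illustrative interface inferred from the bullet list above; the SDK's exported type may differ in name or detail:

```typescript
// Illustrative shape of the auto-captured record, inferred from the
// documented fields. Not necessarily the SDK's exact exported type.
interface CapturedPromptFetch {
  promptId: string;        // which prompt was fetched
  promptName: string;      // the prompt's name
  version: number;         // which version was used
  variablesUsed: string[]; // variable keys (not values) that were substituted
  fetchedAt: string;       // ISO timestamp
}
```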

PromptContent Return Type

Field             Type                 Description
promptId          string               The prompt’s UUID
name              string               Display name
version           number               Version number
content           string               Raw template content
createdAt         string               ISO timestamp of version creation
renderedContent   string | undefined   Content with variables substituted
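The same table expressed as a TypeScript interface, transcribed field-for-field from above. The name PromptContent follows the heading; the SDK's actual exported type may differ in detail:

```typescript
// Illustrative transcription of the return-type table.
interface PromptContent {
  promptId: string;         // the prompt's UUID
  name: string;             // display name
  version: number;          // version number
  content: string;          // raw template content
  createdAt: string;        // ISO timestamp of version creation
  renderedContent?: string; // only present when variables were substituted
}
```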