Prompt Studio lets you manage prompts on the Infinium platform and fetch them at runtime. This keeps prompt content out of your codebase and lets you version, test, and update prompts without redeploying.
## Fetching a Prompt
```typescript
import { InfiniumClient } from 'infinium-o2';

const client = new InfiniumClient({ agentId: '...', agentSecret: '...' });

const prompt = await client.getPrompt(
  'your-prompt-id',
  'your-prompt-key',
  'latest',
);

console.log(prompt.name);    // Prompt name from the platform
console.log(prompt.version); // Version number
console.log(prompt.content); // The prompt text
```
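If you fetch the same prompt on every request, a small in-memory cache avoids repeated round-trips. Here is a minimal sketch; `fetchPrompt` stands in for `client.getPrompt`, and `PromptLike` is a stand-in for the SDK's return type, not something the SDK exports:

```typescript
// Minimal stand-in for the SDK's return type (illustration only).
type PromptLike = { content: string; version: number };

// Wrap any prompt-fetching function with an in-memory cache keyed by
// prompt id + version.
function cachedPromptFetcher(
  fetchPrompt: (id: string, version: number | 'latest') => Promise<PromptLike>,
) {
  const cache = new Map<string, PromptLike>();
  return async (id: string, version: number | 'latest'): Promise<PromptLike> => {
    const key = `${id}@${version}`;
    const hit = cache.get(key);
    if (hit) return hit; // serve from cache, no network call
    const fresh = await fetchPrompt(id, version);
    cache.set(key, fresh);
    return fresh;
  };
}
```

Note that caching `'latest'` freezes whatever version was current at the first fetch for the life of the process; if that matters, cache only pinned versions or add a TTL.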
## Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `promptId` | `string` | required | UUID of the prompt |
| `promptKey` | `string` | required | Secret key for prompt authentication |
| `version` | `string \| number` | `'latest'` | Version number or `'latest'` |
| `variables` | `Record<string, string>` | `undefined` | Variables to substitute into the template |
## Authentication
Prompt Studio uses its own auth scheme, separate from agent credentials:

- `promptId` — identifies which prompt to fetch (sent as the `x-prompt-id` header)
- `promptKey` — authenticates access (sent as the `x-prompt-key` header)
## Variable Substitution
If your prompt template contains `{{variableName}}` placeholders, pass a `variables` object to substitute them:
```typescript
const prompt = await client.getPrompt(
  'your-prompt-id',
  'your-prompt-key',
  'latest',
  {
    customerName: 'Acme Corp',
    tone: 'professional',
    language: 'English',
  },
);

// Raw template (with placeholders)
console.log(prompt.content);
// "Write a {{tone}} email to {{customerName}} in {{language}}."

// Rendered (with variables substituted)
console.log(prompt.renderedContent);
// "Write a professional email to Acme Corp in English."
```
## Version Pinning
Pin to a specific version for stability, or use `'latest'` for the most recent:
```typescript
// Always use version 3 (stable, tested)
const stablePrompt = await client.getPrompt('...', '...', 3);

// Always use the latest version (may change without redeploying)
const latestPrompt = await client.getPrompt('...', '...', 'latest');
```
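A common pattern is to pin in production while tracking `'latest'` in development. A sketch, assuming a hypothetical `PROMPT_VERSION` environment variable (not something the SDK reads itself):

```typescript
// Resolve the version argument from the environment: a pinned number if
// PROMPT_VERSION is set, otherwise 'latest'.
function resolveVersion(env: Record<string, string | undefined>): number | 'latest' {
  const pinned = env.PROMPT_VERSION;
  return pinned !== undefined ? Number(pinned) : 'latest';
}

// const prompt = await client.getPrompt('...', '...', resolveVersion(process.env));
```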
## Using with LLM Calls
A common pattern is to fetch a prompt and use it as the system message:
```typescript
import OpenAI from 'openai';

const openai = new OpenAI();

const prompt = await client.getPrompt(
  '...', '...',
  'latest',
  { tone: 'friendly', maxWords: '200' },
);

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: prompt.renderedContent! },
    { role: 'user', content: userMessage },
  ],
});
```
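Since the prompt now arrives over the network, consider a baked-in fallback so a fetch failure doesn't take the agent down. A sketch, where the `fetchPrompt` thunk stands in for the `client.getPrompt` call:

```typescript
// Return the rendered prompt if the fetch succeeds, else a local fallback.
async function promptWithFallback(
  fetchPrompt: () => Promise<{ content: string; renderedContent?: string }>,
  fallback: string,
): Promise<string> {
  try {
    const prompt = await fetchPrompt();
    // Prefer rendered content; fall back to the raw template if no
    // variables were substituted.
    return prompt.renderedContent ?? prompt.content;
  } catch {
    return fallback; // network error, bad credentials, etc.
  }
}
```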
## Auto-Capture in Traces
When `getPrompt()` is called inside a `client.trace()`-wrapped function, the prompt fetch is automatically recorded in the trace as a `CapturedPromptFetch`:
```typescript
const reply = client.trace('Customer Reply Agent')(
  async (message: string) => {
    // This prompt fetch is auto-captured in the trace
    const prompt = await client.getPrompt(
      '...', '...',
      'latest',
      { tone: 'empathetic' },
    );
    const resp = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [
        { role: 'system', content: prompt.renderedContent! },
        { role: 'user', content: message },
      ],
    });
    return resp.choices[0].message.content;
  }
);
```
The trace will include:
- `promptId` — which prompt was fetched
- `promptName` — the prompt's name
- `version` — which version was used
- `variablesUsed` — which variable keys were substituted
- `fetchedAt` — ISO timestamp of the fetch
## `PromptContent` Return Type
| Field | Type | Description |
|---|---|---|
| `promptId` | `string` | The prompt's UUID |
| `name` | `string` | Display name |
| `version` | `number` | Version number |
| `content` | `string` | Raw template content |
| `createdAt` | `string` | ISO timestamp of version creation |
| `renderedContent` | `string \| undefined` | Content with variables substituted |
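For readers who want a concrete type, the table maps onto a shape like the following. The SDK presumably exports its own `PromptContent` type; this sketch only mirrors the documented fields:

```typescript
interface PromptContent {
  promptId: string;          // the prompt's UUID
  name: string;              // display name
  version: number;           // version number
  content: string;           // raw template content
  createdAt: string;         // ISO timestamp of version creation
  renderedContent?: string;  // only set when variables were substituted
}

// Example value matching the shape above (illustrative data only):
const example: PromptContent = {
  promptId: '123e4567-e89b-12d3-a456-426614174000',
  name: 'Welcome Email',
  version: 3,
  content: 'Write a {{tone}} email.',
  createdAt: '2024-01-01T00:00:00Z',
};
```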