Prompt Studio lets you manage prompts on the Infinium platform and fetch them at runtime. This keeps prompt content out of your codebase and lets you version, test, and update prompts without redeploying.

Fetching a Prompt

from infinium import InfiniumClient

client = InfiniumClient(agent_id="...", agent_secret="...")

prompt = client.get_prompt(
    prompt_id="your-prompt-id",
    prompt_key="your-prompt-key",
    version="latest",
)

print(prompt.name)     # Prompt name from the platform
print(prompt.version)  # Version number
print(prompt.content)  # The prompt text

Parameters

Parameter    Type            Default    Description
prompt_id    str             required   UUID of the prompt
prompt_key   str             required   Secret key for prompt authentication
version      str or int      "latest"   Version number or "latest"
variables    dict[str, str]  None       Variables to substitute into the template

Authentication

Prompt Studio uses its own auth scheme separate from agent credentials:

  • prompt_id — identifies which prompt to fetch
  • prompt_key — authenticates access (sent as x-prompt-id and x-prompt-key headers)
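To illustrate how the two values travel, here is a sketch of the headers the client attaches to each fetch. The helper name is ours, not part of the SDK; the client builds these headers for you.

```python
def prompt_auth_headers(prompt_id: str, prompt_key: str) -> dict[str, str]:
    # Illustrative only: the SDK attaches both values as request
    # headers on every get_prompt() call.
    return {
        "x-prompt-id": prompt_id,
        "x-prompt-key": prompt_key,
    }
```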

Variable Substitution

If your prompt template contains placeholders, pass a variables dict to substitute them:

prompt = client.get_prompt(
    prompt_id="your-prompt-id",
    prompt_key="your-prompt-key",
    variables={
        "customer_name": "Acme Corp",
        "tone": "professional",
        "language": "English",
    },
)

# Raw template (with placeholders)
print(prompt.content)
# "Write a {{tone}} email to {{customer_name}} in {{language}}."

# Rendered (with variables substituted)
print(prompt.rendered_content)
# "Write a professional email to Acme Corp in English."
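Placeholders use double-brace {{name}} syntax. A minimal sketch of the substitution step (illustrative only, not the SDK's actual implementation; placeholders with no matching variable are left untouched here):

```python
import re

def render(template: str, variables: dict[str, str]) -> str:
    # Replace each {{name}} placeholder with its value; unknown
    # placeholders pass through unchanged.
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: variables.get(m.group(1), m.group(0)),
        template,
    )

render(
    "Write a {{tone}} email to {{customer_name}} in {{language}}.",
    {"tone": "professional", "customer_name": "Acme Corp", "language": "English"},
)
# "Write a professional email to Acme Corp in English."
```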

Version Pinning

Pin to a specific version for stability, or use "latest" for the most recent:

# Always use version 3 (stable, tested)
prompt = client.get_prompt(prompt_id="...", prompt_key="...", version=3)

# Always use the latest version (may change)
prompt = client.get_prompt(prompt_id="...", prompt_key="...", version="latest")

Using with LLM Calls

A common pattern is to fetch a prompt and use it as the system message:

from openai import OpenAI

openai = OpenAI()
user_message = "Can you summarize my open tickets?"  # example end-user input

prompt = client.get_prompt(
    prompt_id="...",
    prompt_key="...",
    variables={"tone": "friendly", "max_words": "200"},
)

response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": prompt.rendered_content},
        {"role": "user", "content": user_message},
    ],
)

Auto-Capture in Traces

When get_prompt() is called inside a @trace_agent, @async_trace_agent, or @client.trace() decorated function, the prompt fetch is automatically recorded in the trace as a CapturedPromptFetch:

@client.trace("Customer Reply Agent")
def reply(message: str) -> str:
    # This prompt fetch is auto-captured in the trace
    prompt = client.get_prompt(
        prompt_id="...", prompt_key="...",
        variables={"tone": "empathetic"},
    )

    resp = openai.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": prompt.rendered_content},
            {"role": "user", "content": message},
        ],
    )
    return resp.choices[0].message.content

The trace will include:

  • prompt_id — which prompt was fetched
  • prompt_name — the prompt’s name
  • version — which version was used
  • variables_used — which variable keys were substituted
  • fetched_at — ISO timestamp of when it was fetched
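Put together, the captured record has roughly this shape. This is an illustrative dataclass, not the SDK's actual CapturedPromptFetch definition:

```python
from dataclasses import dataclass, field

@dataclass
class CapturedPromptFetchExample:
    # Illustrative mirror of the fields recorded in the trace.
    prompt_id: str                 # which prompt was fetched
    prompt_name: str               # the prompt's name
    version: int                   # which version was used
    variables_used: list[str] = field(default_factory=list)  # substituted keys
    fetched_at: str = ""           # ISO 8601 timestamp of the fetch
```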

PromptContent Return Type

Field             Type         Description
prompt_id         str          The prompt's UUID
name              str          Display name
version           int          Version number
content           str          Raw template content
created_at        str          ISO timestamp of version creation
rendered_content  str or None  Content with variables substituted (only if variables were provided)

Async Usage

from infinium import AsyncInfiniumClient

async with AsyncInfiniumClient(agent_id="...", agent_secret="...") as client:
    prompt = await client.get_prompt(
        prompt_id="...",
        prompt_key="...",
        version="latest",
        variables={"tone": "casual"},
    )