Prompt Studio lets you manage prompts on the Infinium platform and fetch them at runtime. This keeps prompt content out of your codebase and lets you version, test, and update prompts without redeploying.
## Fetching a Prompt

```python
from infinium import InfiniumClient

client = InfiniumClient(agent_id="...", agent_secret="...")

prompt = client.get_prompt(
    prompt_id="your-prompt-id",
    prompt_key="your-prompt-key",
    version="latest",
)

print(prompt.name)     # Prompt name from the platform
print(prompt.version)  # Version number
print(prompt.content)  # The prompt text
```
## Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| `prompt_id` | `str` | required | UUID of the prompt |
| `prompt_key` | `str` | required | Secret key for prompt authentication |
| `version` | `str` or `int` | `"latest"` | Version number or `"latest"` |
| `variables` | `dict[str, str]` | `None` | Variables to substitute into the template |
## Authentication

Prompt Studio uses its own auth scheme separate from agent credentials:

- `prompt_id` — identifies which prompt to fetch
- `prompt_key` — authenticates access (sent as `x-prompt-id` and `x-prompt-key` headers)
## Variable Substitution

If your prompt template contains placeholders, pass a `variables` dict to substitute them:
```python
prompt = client.get_prompt(
    prompt_id="your-prompt-id",
    prompt_key="your-prompt-key",
    variables={
        "customer_name": "Acme Corp",
        "tone": "professional",
        "language": "English",
    },
)

# Raw template (with placeholders)
print(prompt.content)
# "Write a {{tone}} email to {{customer_name}} in {{language}}."

# Rendered (with variables substituted)
print(prompt.rendered_content)
# "Write a professional email to Acme Corp in English."
```
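The `{{placeholder}}` rendering above can be reproduced locally with plain string templating, which is handy for previewing templates in tests. A minimal sketch — the `render` helper is hypothetical, not part of the SDK:

```python
import re

def render(template: str, variables: dict[str, str]) -> str:
    # Replace each {{name}} placeholder with its value; leave unknown
    # placeholders untouched so missing variables stay visible.
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: variables.get(m.group(1), m.group(0)),
        template,
    )

template = "Write a {{tone}} email to {{customer_name}} in {{language}}."
rendered = render(template, {
    "tone": "professional",
    "customer_name": "Acme Corp",
    "language": "English",
})
print(rendered)  # → "Write a professional email to Acme Corp in English."
```

Leaving unmatched placeholders intact (rather than raising or substituting an empty string) makes it obvious in the output when a variable was forgotten.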
## Version Pinning

Pin to a specific version for stability, or use `"latest"` for the most recent:
```python
# Always use version 3 (stable, tested)
prompt = client.get_prompt(prompt_id="...", prompt_key="...", version=3)

# Always use the latest version (may change)
prompt = client.get_prompt(prompt_id="...", prompt_key="...", version="latest")
```
## Using with LLM Calls

A common pattern is to fetch a prompt and use it as the system message:
```python
from openai import OpenAI

openai = OpenAI()

prompt = client.get_prompt(
    prompt_id="...",
    prompt_key="...",
    variables={"tone": "friendly", "max_words": "200"},
)

response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": prompt.rendered_content},
        {"role": "user", "content": user_message},
    ],
)
```
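Fetching the prompt on every LLM call adds a network round trip, so a small TTL cache in front of the fetch is a common optimization. A sketch with an injected fetch function — the caching layer is illustrative, not part of the SDK:

```python
import time

class PromptCache:
    def __init__(self, fetch, ttl_seconds: float = 60.0):
        self._fetch = fetch        # e.g. a wrapper around client.get_prompt
        self._ttl = ttl_seconds
        self._entries = {}         # (prompt_id, version) -> (expires_at, value)

    def get(self, prompt_id: str, version="latest"):
        key = (prompt_id, version)
        entry = self._entries.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]        # still fresh: serve from cache
        value = self._fetch(prompt_id, version)
        self._entries[key] = (time.monotonic() + self._ttl, value)
        return value

calls = []
cache = PromptCache(lambda pid, v: calls.append(pid) or f"prompt:{pid}")
cache.get("p1")
cache.get("p1")        # second call is served from cache
print(len(calls))      # → 1
```

Note that caching a `"latest"` lookup means updates on the platform only take effect after the TTL expires; pinned versions are immutable, so they can be cached much longer.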
## Auto-Capture in Traces

When `get_prompt()` is called inside a function decorated with `@trace_agent`, `@async_trace_agent`, or `@client.trace()`, the prompt fetch is automatically recorded in the trace as a `CapturedPromptFetch`:
```python
@client.trace("Customer Reply Agent")
def reply(message: str) -> str:
    # This prompt fetch is auto-captured in the trace
    prompt = client.get_prompt(
        prompt_id="...", prompt_key="...",
        variables={"tone": "empathetic"},
    )
    resp = openai.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": prompt.rendered_content},
            {"role": "user", "content": message},
        ],
    )
    return resp.choices[0].message.content
```
The trace will include:

- `prompt_id` — which prompt was fetched
- `prompt_name` — the prompt's name
- `version` — which version was used
- `variables_used` — which variable keys were substituted
- `fetched_at` — ISO timestamp of when it was fetched
## PromptContent Return Type

| Field | Type | Description |
|---|---|---|
| `prompt_id` | `str` | The prompt's UUID |
| `name` | `str` | Display name |
| `version` | `int` | Version number |
| `content` | `str` | Raw template content |
| `created_at` | `str` | ISO timestamp of version creation |
| `rendered_content` | `str` or `None` | Content with variables substituted (only if variables were provided) |
## Async Usage

`async with` must run inside a coroutine, so wrap the client in an async function:

```python
import asyncio

from infinium import AsyncInfiniumClient

async def main():
    async with AsyncInfiniumClient(agent_id="...", agent_secret="...") as client:
        prompt = await client.get_prompt(
            prompt_id="...",
            prompt_key="...",
            version="latest",
            variables={"tone": "casual"},
        )

asyncio.run(main())
```
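When a request needs several independent prompts, they can be fetched concurrently with `asyncio.gather`. A sketch using a stub coroutine in place of `client.get_prompt` so it runs standalone:

```python
import asyncio

async def fetch_prompt(prompt_id: str) -> dict:
    # Stand-in for `await client.get_prompt(...)` — returns a dummy record.
    await asyncio.sleep(0)
    return {"prompt_id": prompt_id}

async def fetch_all(prompt_ids: list[str]) -> list[dict]:
    # gather preserves input order regardless of completion order.
    return await asyncio.gather(*(fetch_prompt(p) for p in prompt_ids))

results = asyncio.run(fetch_all(["greeting", "escalation"]))
print([r["prompt_id"] for r in results])  # → ['greeting', 'escalation']
```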