## Installation

```shell
pip install infinium-o2
```

Requirements: Python 3.9 or higher.

The SDK depends on httpx for HTTP communication. Provider SDKs (OpenAI, Anthropic, Google) are not required — install only the ones your agent uses.
## Credentials

You need an agent ID and agent secret from the Infinium platform. These authenticate your agent when sending traces.
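Avoid hard-coding secrets in source; a common pattern is to read them from environment variables. A minimal sketch, where the variable names `INFINIUM_AGENT_ID` and `INFINIUM_AGENT_SECRET` are assumptions of this example, not names defined by the SDK:

```python
import os

def load_credentials() -> tuple[str, str]:
    """Read agent credentials from the environment.

    INFINIUM_AGENT_ID / INFINIUM_AGENT_SECRET are hypothetical names;
    use whatever naming convention your deployment follows.
    """
    try:
        return os.environ["INFINIUM_AGENT_ID"], os.environ["INFINIUM_AGENT_SECRET"]
    except KeyError as missing:
        raise RuntimeError(f"Missing environment variable: {missing.args[0]}") from missing
```

The returned pair can then be passed to the client constructor as `agent_id` and `agent_secret`.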
## Initialize the Client

### Sync

```python
from infinium import InfiniumClient

client = InfiniumClient(
    agent_id="your-agent-id",
    agent_secret="your-agent-secret",
)
```

### Async

```python
from infinium import AsyncInfiniumClient

client = AsyncInfiniumClient(
    agent_id="your-agent-id",
    agent_secret="your-agent-secret",
)
```
Both clients support context managers for automatic cleanup:

```python
# Sync
with InfiniumClient(agent_id="...", agent_secret="...") as client:
    client.send_task(...)
```

```python
# Async
async with AsyncInfiniumClient(agent_id="...", agent_secret="...") as client:
    await client.send_task(...)
```
## Send Your First Trace

The simplest way to send a trace is `send_task()`:

```python
response = client.send_task(
    name="Classify support ticket",
    description="Categorized an inbound support ticket by type and urgency.",
    duration=2.4,
    input_summary="Customer ticket about billing issue",
    output_summary="Category: billing, Urgency: high",
    llm_usage={
        "model": "gpt-4o",
        "provider": "openai",
        "prompt_tokens": 320,
        "completion_tokens": 45,
    },
)

if response.success:
    trace_id = response.data.get("traceId")
    print(f"Trace sent: {trace_id}")
```
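The `duration` field is in seconds. One way to measure it is to time the task with `time.perf_counter()`; a sketch, where `do_work` is a hypothetical stand-in for your agent's actual logic:

```python
import time

def do_work() -> str:
    # Hypothetical stand-in for the agent's real task.
    return "Category: billing, Urgency: high"

start = time.perf_counter()
output = do_work()
duration = time.perf_counter() - start  # seconds; pass as duration= to send_task()
```

`perf_counter()` is monotonic, so the measurement is not affected by system clock adjustments.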
## Verify on the Platform

After sending a trace, log in to the Infinium platform to see:

- Your trace data in the agent dashboard
- Maestro’s interpretation and scoring (available after a few seconds)

You can also poll for Maestro’s result programmatically:

```python
interpretation = client.wait_for_interpretation(trace_id, timeout=120)
print(interpretation.data["interpretedTraceResult"])
```
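`wait_for_interpretation()` handles the waiting for you. If you want finer control (custom intervals, logging between attempts), the underlying idea is a generic poll loop with a deadline; a sketch, where `fetch` stands in for whatever call returns the interpretation once ready, or `None` before then:

```python
import time

def poll_until(fetch, timeout: float = 120.0, interval: float = 2.0):
    """Call `fetch` until it returns a non-None value or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = fetch()
        if result is not None:
            return result
        time.sleep(interval)
    raise TimeoutError(f"No result within {timeout} seconds")
```

`time.monotonic()` is used for the deadline so the loop is immune to wall-clock changes.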
## Next Steps

- Auto-Instrumentation — Automatically capture LLM calls with `watch()`
- Sending Traces — Learn all three methods for sending traces
- Trace Decorators — Zero-boilerplate tracing with `@trace_agent`