Decorators are the easiest way to trace a function. They automatically handle timing, input/output capture, error recording, and sending.
## @client.trace() — Recommended

The `trace()` method on `InfiniumClient` and `AsyncInfiniumClient` auto-detects whether the decorated function is sync or async:
```python
import json

from infinium import InfiniumClient

client = InfiniumClient(agent_id="...", agent_secret="...")

@client.trace("Email Classifier")
def classify_email(email_body: str) -> dict:
    resp = openai.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Classify this email. Return JSON."},
            {"role": "user", "content": email_body},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)

# Works on async functions too
@client.trace("Async Classifier")
async def classify_async(email_body: str) -> dict:
    resp = await async_openai.chat.completions.create(...)
    return json.loads(resp.choices[0].message.content)
```
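Under the hood, this kind of sync/async auto-detection is typically done with `inspect.iscoroutinefunction`. The sketch below is illustrative only, not the SDK's actual implementation; a timing print stands in for the real trace logic:

```python
import asyncio
import functools
import inspect
import time

def trace(name):
    # Illustrative only: dispatch on whether the wrapped function is a coroutine.
    def decorator(fn):
        if inspect.iscoroutinefunction(fn):
            @functools.wraps(fn)
            async def async_wrapper(*args, **kwargs):
                start = time.perf_counter()
                try:
                    return await fn(*args, **kwargs)
                finally:
                    print(f"{name}: {time.perf_counter() - start:.4f}s")
            return async_wrapper

        @functools.wraps(fn)
        def sync_wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                print(f"{name}: {time.perf_counter() - start:.4f}s")
        return sync_wrapper
    return decorator

@trace("sync example")
def add_one(x: int) -> int:
    return x + 1

@trace("async example")
async def double(x: int) -> int:
    return x * 2

add_one(1)              # traced synchronously
asyncio.run(double(3))  # traced asynchronously
```

Dispatching at decoration time (rather than at call time) means each function pays no per-call overhead for the detection.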
### Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| `name` | `str` | required | Name for the trace |
| `auto_send` | `bool` | `True` | Automatically send the trace when the function returns |
| `description` | `str` | `None` | Optional description for the trace |
### What It Captures

- Duration — wall-clock time from function entry to exit
- Input — string representation of function arguments (as `input_summary`)
- Output — string representation of the return value (as `output_summary`)
- Errors — if the function raises, the exception is captured as an `ErrorDetail`, the trace is still sent, and the exception is re-raised
- LLM calls — if the function calls a `watch()`-patched LLM client, those calls are automatically incorporated into the trace
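A minimal sketch of what such a decorator records (illustrative only; the field names mirror `input_summary`/`output_summary` above, and `sink` is a hypothetical stand-in for sending the trace):

```python
import functools
import time

def capture_trace(name, sink):
    # Illustrative only: record the same fields the real decorator captures.
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            trace = {"name": name, "input_summary": repr((args, kwargs))}
            start = time.perf_counter()
            try:
                result = fn(*args, **kwargs)
                trace["output_summary"] = repr(result)
                return result
            finally:
                trace["duration_s"] = time.perf_counter() - start
                sink.append(trace)  # stands in for sending the trace
        return wrapper
    return decorator

traces = []

@capture_trace("demo", traces)
def add(a: int, b: int) -> int:
    return a + b

add(2, 3)
print(traces[0]["input_summary"])   # ((2, 3), {})
print(traces[0]["output_summary"])  # 5
```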
### Disabling Auto-Send

Set `auto_send=False` to build the trace without sending it:
```python
@client.trace("Draft Classifier", auto_send=False)
def classify(text: str) -> dict:
    ...

result = classify("some text")
# Trace is built but not sent -- useful for testing or custom post-processing
```
## @trace_agent — Standalone Sync Decorator

If you prefer to pass the client explicitly rather than using `client.trace()`:
```python
from infinium import trace_agent, InfiniumClient

client = InfiniumClient(agent_id="...", agent_secret="...")

@trace_agent("Email Classifier", client)
def classify_email(email_body: str) -> dict:
    resp = openai.chat.completions.create(...)
    return json.loads(resp.choices[0].message.content)
```
### Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| `name` | `str` | required | Name for the trace |
| `client` | `InfiniumClient` | `None` | Client to send the trace through. If `None`, the trace is built but not sent |
| `auto_send` | `bool` | `True` | Automatically send on completion |
| `description` | `str` | `None` | Optional description |
## @async_trace_agent — Standalone Async Decorator

The async version for coroutine functions:
```python
from infinium import async_trace_agent, AsyncInfiniumClient

client = AsyncInfiniumClient(agent_id="...", agent_secret="...")

@async_trace_agent("Content Moderator", client)
async def moderate(content: str) -> str:
    resp = await async_anthropic.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=256,
        messages=[{"role": "user", "content": f"Moderate this: {content}"}],
    )
    return resp.content[0].text
```
## Error Handling

When a decorated function raises an exception:

- The exception is captured as an `ErrorDetail` in the trace (type, message, stack trace)
- The trace is sent (if `auto_send=True` and a client is provided)
- The exception is re-raised — the decorator never swallows errors
```python
@client.trace("Risky Operation")
def do_something():
    raise ValueError("something went wrong")

try:
    do_something()
except ValueError:
    # The trace was already sent with the error recorded
    pass
```
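The capture-and-re-raise behavior can be sketched with a plain decorator (illustrative only; `sink` is a hypothetical stand-in for the SDK's send step):

```python
import functools
import traceback

def trace_errors(name, sink):
    # Illustrative only: capture the error, still "send" the trace, then re-raise.
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            trace = {"name": name, "error": None}
            try:
                return fn(*args, **kwargs)
            except Exception as exc:
                trace["error"] = {
                    "type": type(exc).__name__,
                    "message": str(exc),
                    "stack": traceback.format_exc(),
                }
                raise  # never swallow the error
            finally:
                sink.append(trace)  # the trace goes out whether or not fn raised
        return wrapper
    return decorator

sent = []

@trace_errors("risky", sent)
def boom():
    raise ValueError("something went wrong")

try:
    boom()
except ValueError:
    pass

print(sent[0]["error"]["type"])  # ValueError
```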
## Combining with watch()

The most powerful pattern combines `watch()` with decorators: LLM calls are captured automatically, with no manual recording:
```python
from openai import OpenAI

from infinium import InfiniumClient
from infinium.integrations import watch

client = InfiniumClient(agent_id="...", agent_secret="...")
openai = watch(OpenAI())

@client.trace("Research Agent")
def research(query: str) -> str:
    # Step 1: Search (not an LLM call, not captured)
    results = search_database(query)

    # Step 2: Analyze with LLM (auto-captured by watch())
    resp = openai.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Analyze these results."},
            {"role": "user", "content": str(results)},
        ],
    )
    analysis = resp.choices[0].message.content

    # Step 3: Summarize with LLM (also auto-captured)
    resp = openai.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Summarize this analysis."},
            {"role": "user", "content": analysis},
        ],
    )
    return resp.choices[0].message.content

# The trace includes both LLM calls with tokens, latency, and model info
result = research("What are the latest trends in AI?")
```
## Nested Traces

Trace decorators use `contextvars.ContextVar` for context management, which supports nesting:
```python
@client.trace("Outer Agent")
def outer():
    # This creates one trace
    result = inner("sub-task data")
    return result

@client.trace("Inner Agent")
def inner(data: str):
    # This creates a separate trace
    resp = openai.chat.completions.create(...)
    return resp.choices[0].message.content
```
Each decorated function produces its own independent trace. Context is properly restored after the inner function returns, using `ContextVar.reset(token)`.
## Comparison

| Feature | `@client.trace()` | `@trace_agent` | `@async_trace_agent` |
|---|---|---|---|
| Auto-detects sync/async | Yes | No (sync only) | No (async only) |
| Client passed via | Method on client | Argument | Argument |
| Auto-send | Yes | Yes | Yes |
| Works with `watch()` | Yes | Yes | Yes |