The SDK includes an optional OpenTelemetry exporter that emits OTel spans for traces captured by Infinium. This lets you bridge Infinium traces into your existing OTel pipeline.
## Requirements

Install the OpenTelemetry dependencies (not included with the SDK):

```bash
pip install opentelemetry-api opentelemetry-sdk
```
## Setup

```python
from infinium.integrations.otel import InfiniumOTelExporter

exporter = InfiniumOTelExporter(
    service_name="my-agent",  # Service identifier for spans
    tracer_name="infinium",   # Tracer name (default: "infinium")
)
```
### Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| `service_name` | `str` | `"infinium-agent"` | Service name for the OTel resource |
| `tracer_name` | `str` | `"infinium"` | Tracer name |
## Exporting a Trace

Call `export_trace()` to create OTel spans from trace data:

```python
from infinium.integrations._context import TraceContext

exporter.export_trace(
    trace_name="Email Classifier",
    duration_s=2.4,
    ctx=trace_context,  # Optional TraceContext with captured calls
)
```
### Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| `trace_name` | `str` | required | Name for the root span |
| `duration_s` | `float` | required | Trace duration in seconds |
| `ctx` | `TraceContext` | `None` | Context with captured LLM calls and prompt fetches |
## Span Attributes

### Root Span

| Attribute | Description |
|---|---|
| `infinium.service` | Service name |
| `infinium.duration_s` | Total duration in seconds |
| `infinium.llm_call_count` | Number of LLM calls in the trace |
| `infinium.prompt_fetch_count` | Number of prompt fetches in the trace |
### LLM Call Child Spans

Each captured LLM call is emitted as a child span:

| Attribute | Description |
|---|---|
| `llm.provider` | Provider name (`openai`, `anthropic`, `google`, `xai`) |
| `llm.model` | Model identifier |
| `llm.prompt_tokens` | Input token count |
| `llm.completion_tokens` | Output token count |
| `llm.temperature` | Temperature parameter |
| `llm.latency_ms` | Call latency in milliseconds |
| `error.type` | Error type (if the call failed) |
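To make the mapping concrete, here is a sketch of turning a captured call record into this attribute set. The plain-dict record shape and the `llm_call_attributes` helper are illustrative assumptions, not the SDK's actual internals:

```python
def llm_call_attributes(call: dict) -> dict:
    """Map a captured LLM call record onto the exporter's span attributes."""
    attrs = {
        "llm.provider": call["provider"],
        "llm.model": call["model"],
        "llm.prompt_tokens": call["prompt_tokens"],
        "llm.completion_tokens": call["completion_tokens"],
        "llm.temperature": call["temperature"],
        "llm.latency_ms": call["latency_ms"],
    }
    if call.get("error_type") is not None:
        attrs["error.type"] = call["error_type"]  # only set on failed calls
    return attrs


attrs = llm_call_attributes({
    "provider": "openai",
    "model": "gpt-4o",
    "prompt_tokens": 12,
    "completion_tokens": 48,
    "temperature": 0.7,
    "latency_ms": 930,
    "error_type": None,
})
# "error.type" is absent here because the call succeeded
```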
### Prompt Fetch Events

Prompt fetches are recorded as span events (not child spans):

| Attribute | Description |
|---|---|
| `prompt.id` | Prompt UUID |
| `prompt.name` | Prompt display name |
| `prompt.version` | Version number |
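Since fetches are events rather than spans, they would attach to the parent span via OTel's `Span.add_event(name, attributes)`. A sketch of building that payload, where the event name `"prompt.fetch"`, the record keys, and the helper are illustrative assumptions:

```python
def prompt_fetch_event(fetch: dict) -> tuple[str, dict]:
    """Build the (name, attributes) pair passed to Span.add_event()."""
    return "prompt.fetch", {
        "prompt.id": fetch["id"],
        "prompt.name": fetch["name"],
        "prompt.version": fetch["version"],
    }


name, event_attrs = prompt_fetch_event(
    {"id": "prompt-uuid", "name": "classifier-system", "version": 4}
)
```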
## Example: Full Pipeline

```python
from openai import OpenAI

from infinium import InfiniumClient
from infinium.integrations import watch
from infinium.integrations.otel import InfiniumOTelExporter
from infinium.integrations._context import get_current_trace_context

# Set up the OTel exporter
exporter = InfiniumOTelExporter(service_name="research-agent")

# Set up the Infinium client with auto-instrumentation
client = InfiniumClient(agent_id="...", agent_secret="...")
openai = watch(OpenAI())


@client.trace("Research Task")
def research(query: str) -> str:
    resp = openai.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": query}],
    )
    return resp.choices[0].message.content


result = research("Latest AI trends")

# Export the same trace data to OTel
# (in practice, you'd integrate this into your trace pipeline)
```
## Notes

- The OTel exporter is optional — it does not affect normal SDK operation.
- If `opentelemetry-api` or `opentelemetry-sdk` is not installed, importing the exporter raises an `ImportError`.
- The exporter creates spans using the standard OTel API, so they integrate with any configured OTel pipeline (Jaeger, Zipkin, OTLP, etc.).