The SDK includes an optional OpenTelemetry exporter that emits OTel spans for traces captured by Infinium. This lets you bridge Infinium traces into your existing OTel pipeline.

Requirements

Install the OpenTelemetry dependencies (not included with the SDK):

pip install opentelemetry-api opentelemetry-sdk

Setup

from infinium.integrations.otel import InfiniumOTelExporter

exporter = InfiniumOTelExporter(
    service_name="my-agent",    # Service identifier for spans
    tracer_name="infinium",     # Tracer name (default: "infinium")
)

Parameters

Parameter      Type   Default             Description
service_name   str    "infinium-agent"    Service name for the OTel resource
tracer_name    str    "infinium"          Tracer name

Exporting a Trace

Call export_trace() to create OTel spans from trace data:

from infinium.integrations._context import TraceContext

exporter.export_trace(
    trace_name="Email Classifier",
    duration_s=2.4,
    ctx=trace_context,  # Optional TraceContext with captured calls
)

Parameters

Parameter    Type           Default    Description
trace_name   str            required   Name for the root span
duration_s   float          required   Trace duration in seconds
ctx          TraceContext   None       Context with captured LLM calls and prompt fetches
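Note that duration_s is supplied by the caller rather than measured by the exporter. A minimal stdlib sketch of one way to obtain it (run_and_measure is a hypothetical helper, not part of the SDK):

```python
import time

def run_and_measure(workload):
    """Run a callable and return (result, elapsed seconds)."""
    start = time.monotonic()
    result = workload()
    return result, time.monotonic() - start

# The elapsed time can then be passed as duration_s to export_trace().
result, duration_s = run_and_measure(lambda: "classified")
```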

Span Attributes

Root Span

Attribute                     Description
infinium.service              Service name
infinium.duration_s           Total duration in seconds
infinium.llm_call_count       Number of LLM calls in the trace
infinium.prompt_fetch_count   Number of prompt fetches in the trace

LLM Call Child Spans

Each captured LLM call is emitted as a child span:

Attribute               Description
llm.provider            Provider name (openai, anthropic, google, xai)
llm.model               Model identifier
llm.prompt_tokens       Input token count
llm.completion_tokens   Output token count
llm.temperature         Temperature parameter
llm.latency_ms          Call latency in milliseconds
error.type              Error type (if the call failed)

Prompt Fetch Events

Prompt fetches are recorded as span events (not child spans):

Attribute        Description
prompt.id        Prompt UUID
prompt.name      Prompt display name
prompt.version   Version number
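The tables above describe one span tree per exported trace. A stdlib-only sketch of that shape, with plain dataclasses standing in for real OTel spans (span names "llm.call" and "prompt.fetch", and all attribute values, are illustrative assumptions; only the attribute keys come from the tables):

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """Stand-in for an OTel span event."""
    name: str
    attributes: dict

@dataclass
class Span:
    """Stand-in for an OTel span."""
    name: str
    attributes: dict
    children: list = field(default_factory=list)
    events: list = field(default_factory=list)

# Root span carries the trace-level infinium.* attributes.
root = Span("Email Classifier", {
    "infinium.service": "my-agent",
    "infinium.duration_s": 2.4,
    "infinium.llm_call_count": 1,
    "infinium.prompt_fetch_count": 1,
})

# Each captured LLM call becomes a child span with llm.* attributes.
root.children.append(Span("llm.call", {
    "llm.provider": "openai",
    "llm.model": "gpt-4o",
    "llm.prompt_tokens": 42,
    "llm.completion_tokens": 128,
    "llm.latency_ms": 830.0,
}))

# Prompt fetches become events on the root span, not child spans.
root.events.append(Event("prompt.fetch", {
    "prompt.id": "00000000-0000-0000-0000-000000000001",  # illustrative UUID
    "prompt.name": "classifier-system",
    "prompt.version": 3,
}))
```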

Example: Full Pipeline

from openai import OpenAI
from infinium import InfiniumClient
from infinium.integrations import watch
from infinium.integrations.otel import InfiniumOTelExporter
from infinium.integrations._context import get_current_trace_context

# Set up OTel exporter
exporter = InfiniumOTelExporter(service_name="research-agent")

# Set up Infinium client with auto-instrumentation
client = InfiniumClient(agent_id="...", agent_secret="...")
openai = watch(OpenAI())

@client.trace("Research Task")
def research(query: str) -> str:
    resp = openai.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": query}],
    )
    return resp.choices[0].message.content

result = research("Latest AI trends")

# Export the same trace data to OTel
# (In practice, you'd integrate this into your trace pipeline)
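The closing step might look like the following. This is hypothetical wiring: whether get_current_trace_context() still holds the captured calls at this point, and how you obtain the duration, are assumptions about the SDK, not documented behavior.

```python
# Hypothetical: re-export the calls captured during research() as OTel spans.
ctx = get_current_trace_context()
exporter.export_trace(
    trace_name="Research Task",
    duration_s=2.1,  # illustrative value; measure your own elapsed time
    ctx=ctx,
)
```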

Notes

  • The OTel exporter is optional — it does not affect normal SDK operation
  • If opentelemetry-api or opentelemetry-sdk is not installed, importing the exporter raises an ImportError
  • The exporter creates spans using the standard OTel API, so they integrate with any configured OTel pipeline (Jaeger, Zipkin, OTLP, etc.)