Complete reference for all public classes, methods, and functions in the Infinium Python SDK.
Clients
InfiniumClient
Synchronous client for the Infinium API.
from infinium import InfiniumClient
Constructor
InfiniumClient(
agent_id: str,
agent_secret: str,
base_url: str = "https://platform.i42m.ai/api/v1",
timeout: float = 30.0,
max_retries: int = 3,
enable_rate_limiting: bool = True,
requests_per_second: float = 10.0,
user_agent: str = "infinium-python/1.0.0",
enable_logging: bool = False,
log_level: str = "INFO",
verify_ssl: bool = True,
)
Methods
close() -> None
Close the HTTP client and release resources.
send_task(name, description, duration, current_datetime=None, **kwargs) -> ApiResponse
Send a trace with simple parameters.
| Parameter | Type | Required | Description |
|---|---|---|---|
| name | str | Yes | Task name (max 500 chars) |
| description | str | Yes | Task description (max 10,000 chars) |
| duration | float | Yes | Duration in seconds (0-86,400) |
| current_datetime | str | No | ISO 8601 timestamp (auto-generated if omitted) |
| **kwargs | Any | No | Any field from TaskData (e.g., llm_usage, steps, customer) |
Returns: ApiResponse
send_task_data(task_data: TaskData) -> ApiResponse
Send a complete TaskData object.
| Parameter | Type | Required | Description |
|---|---|---|---|
| task_data | TaskData | Yes | The trace data to send |
Returns: ApiResponse
send_tasks_batch(tasks, max_concurrent=5) -> BatchResult
Send multiple tasks in chunks.
| Parameter | Type | Default | Description |
|---|---|---|---|
| tasks | list[TaskData] | required | Tasks to send |
| max_concurrent | int | 5 | Chunk size |
Returns: BatchResult
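The chunking behavior implied by max_concurrent can be sketched in plain Python (the `chunk` helper name is ours, not the SDK's):

```python
# Split a task list into chunks of at most `size` items, mirroring how
# send_tasks_batch groups tasks before sending (illustrative sketch only).
def chunk(items, size):
    return [items[i:i + size] for i in range(0, len(items), size)]

tasks = list(range(12))    # stand-ins for TaskData objects
batches = chunk(tasks, 5)  # chunk lengths: 5, 5, 2
```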
get_interpreted_task_result(task_id: str) -> ApiResponse
Get Maestro’s interpretation for a trace.
| Parameter | Type | Description |
|---|---|---|
| task_id | str | Trace ID (UUID) |
Returns: ApiResponse
wait_for_interpretation(trace_id, timeout=120.0, poll_interval=3.0) -> ApiResponse
Poll until Maestro finishes interpreting a trace.
| Parameter | Type | Default | Description |
|---|---|---|---|
| trace_id | str | required | Trace ID (UUID) |
| timeout | float | 120.0 | Max seconds to wait |
| poll_interval | float | 3.0 | Seconds between polls |
Returns: ApiResponse
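The timeout / poll_interval semantics follow the standard poll-until-ready loop, sketched here with a stand-in `check_fn` in place of the interpretation-status call:

```python
import time

# Generic poll loop mirroring wait_for_interpretation's semantics:
# call check_fn every poll_interval seconds until it returns a result
# or the overall timeout elapses (sketch; not the SDK implementation).
def poll_until(check_fn, timeout=120.0, poll_interval=3.0):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = check_fn()
        if result is not None:
            return result
        time.sleep(poll_interval)
    raise TimeoutError("interpretation not ready within timeout")

attempts = iter([None, None, "done"])  # simulate two pending polls
result = poll_until(lambda: next(attempts), timeout=5.0, poll_interval=0.01)
```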
get_prompt(prompt_id, prompt_key, version="latest", variables=None) -> PromptContent
Fetch a prompt from Prompt Studio.
| Parameter | Type | Default | Description |
|---|---|---|---|
| prompt_id | str | required | Prompt UUID |
| prompt_key | str | required | Prompt secret key |
| version | str or int | "latest" | Version number or "latest" |
| variables | dict[str, str] | None | Template variables |
Returns: PromptContent
trace(name, *, auto_send=True, description=None)
Decorator that auto-detects sync/async and traces the function.
| Parameter | Type | Default | Description |
|---|---|---|---|
| name | str | required | Trace name |
| auto_send | bool | True | Send trace on completion |
| description | str | None | Optional description |
Returns: Decorator
get_current_iso_datetime() -> str (static method)
Returns current UTC datetime as ISO 8601 string.
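A rough stdlib equivalent of this helper, for readers who want the same shape of timestamp outside the SDK (the SDK's exact formatting may differ):

```python
from datetime import datetime, timezone

# Current UTC time as an ISO 8601 string with a trailing "Z"
# (sketch of get_current_iso_datetime's documented behavior).
def current_iso_datetime():
    return datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

timestamp = current_iso_datetime()
```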
create_task_data(name, description, duration, current_datetime=None, **sections) -> TaskData (static method)
Create a validated TaskData object. Raw dicts are auto-coerced to dataclasses.
| Parameter | Type | Required | Description |
|---|---|---|---|
| name | str | Yes | Task name |
| description | str | Yes | Task description |
| duration | float | Yes | Duration in seconds |
| current_datetime | str | No | ISO timestamp |
| **sections | Any | No | Any TaskData field |
Returns: TaskData
AsyncInfiniumClient
Asynchronous client for the Infinium API. Has the same constructor and methods as InfiniumClient, but all methods are async.
from infinium import AsyncInfiniumClient
All methods have the same signatures as InfiniumClient, but are coroutines:
response = await client.send_task(...)
response = await client.send_task_data(...)
result = await client.send_tasks_batch(tasks, max_concurrent=5, fail_fast=False)
response = await client.get_interpreted_task_result(...)
response = await client.wait_for_interpretation(...)
prompt = await client.get_prompt(...)
await client.close()
Additional parameter on send_tasks_batch():
| Parameter | Type | Default | Description |
|---|---|---|---|
| fail_fast | bool | False | Stop on first error |
Context manager: Use async with for automatic cleanup.
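The cleanup contract behind `async with` can be shown with a minimal local stand-in (this is not the real AsyncInfiniumClient, just the `__aenter__`/`__aexit__` pattern it follows):

```python
import asyncio

# Minimal async context manager: __aexit__ closes the client even if the
# body raises, which is what "automatic cleanup" means here (sketch).
class SketchAsyncClient:
    def __init__(self):
        self.closed = False

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc, tb):
        await self.close()

    async def close(self):
        self.closed = True

async def main():
    async with SketchAsyncClient() as client:
        pass  # do work with the client
    return client.closed

closed = asyncio.run(main())
```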
Tracing
TraceBuilder
Incrementally construct a TaskData during agent execution.
from infinium import TraceBuilder
Constructor
TraceBuilder(name: str, description: str)
Methods
step(action: str, description: str) -> StepContext
Create an auto-timed, auto-numbered step. Use as a context manager.
set_input_summary(summary: str) -> TraceBuilder
Set the input summary.
set_output_summary(summary: str) -> TraceBuilder
Set the output summary.
set_expected_outcome(task_objective, *, required_deliverables=None, constraints=None, acceptance_criteria=None) -> TraceBuilder
Set the expected outcome for Maestro evaluation.
| Parameter | Type | Default | Description |
|---|---|---|---|
| task_objective | str | required | The goal |
| required_deliverables | list[str] | None | What to produce |
| constraints | list[str] | None | Rules to follow |
| acceptance_criteria | list[str] | None | Success criteria |
set_environment(*, framework=None, framework_version=None, python_version=None, runtime=None, region=None, custom_tags=None) -> TraceBuilder
Set runtime environment metadata.
set_llm_usage(llm_usage: LlmUsage) -> TraceBuilder
Set aggregate LLM usage statistics.
set_section(name: str, value: Any) -> TraceBuilder
Set a domain section by name (e.g., "customer", "research").
add_error(error: ErrorDetail) -> TraceBuilder
Record an error.
build(_trace_ctx=None) -> TaskData
Build the final TaskData. Auto-computes duration from wall clock. If _trace_ctx is provided, incorporates auto-captured LLM calls and prompt fetches.
All setter methods return self for fluent chaining:
trace.set_input_summary("...").set_expected_outcome(...).set_environment(...)
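The return-self pattern that makes this chaining work, shown with a local stand-in class (sketch; not the SDK's TraceBuilder):

```python
# Each setter mutates internal state and returns self, so calls can be
# chained left to right (illustrative sketch of the fluent pattern).
class SketchBuilder:
    def __init__(self):
        self.fields = {}

    def set_input_summary(self, summary):
        self.fields["input_summary"] = summary
        return self  # returning self is what enables chaining

    def set_output_summary(self, summary):
        self.fields["output_summary"] = summary
        return self

b = SketchBuilder().set_input_summary("in").set_output_summary("out")
```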
StepContext
Context manager for a single execution step. Created by TraceBuilder.step().
from infinium import StepContext
Constructor
StepContext(step_number: int, action: str, description: str)
Methods
set_input(preview: str) -> StepContext
Set input preview (truncated to 500 chars).
set_output(preview: str) -> StepContext
Set output preview (truncated to 500 chars).
record_tool_call(tool_name, *, duration_ms=None, input_summary=None, output_summary=None, http_status=None, error=None) -> StepContext
Record a tool invocation.
| Parameter | Type | Default | Description |
|---|---|---|---|
| tool_name | str | required | Name of the tool |
| duration_ms | int | None | Call duration in ms |
| input_summary | str | None | Input summary |
| output_summary | str | None | Output summary |
| http_status | int | None | HTTP status code |
| error | ErrorDetail | None | Error details |
record_llm_call(model, *, provider=None, prompt_tokens=None, completion_tokens=None, latency_ms=None, temperature=None, purpose=None) -> StepContext
Record an LLM call.
| Parameter | Type | Default | Description |
|---|---|---|---|
| model | str | required | Model identifier |
| provider | str | None | Provider name |
| prompt_tokens | int | None | Input tokens |
| completion_tokens | int | None | Output tokens |
| latency_ms | int | None | Call latency in ms |
| temperature | float | None | Temperature |
| purpose | str | None | Purpose of this call |
set_metadata(metadata: dict) -> StepContext
Attach arbitrary metadata to the step.
All methods return self for chaining. When used as a context manager, timing is automatic and unhandled exceptions are captured as ErrorDetail.
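What the context-manager behavior amounts to can be sketched locally: time the block and record an unhandled exception rather than losing it (sketch only; the real StepContext also numbers steps, records calls, and stores errors as ErrorDetail):

```python
import time

# Minimal timed step: __enter__ starts the clock, __exit__ computes the
# duration and notes any exception before letting it propagate (sketch).
class SketchStep:
    def __enter__(self):
        self.start = time.monotonic()
        self.error = None
        return self

    def __exit__(self, exc_type, exc, tb):
        self.duration_s = time.monotonic() - self.start
        if exc is not None:
            self.error = repr(exc)
        return False  # do not swallow the exception

with SketchStep() as step:
    time.sleep(0.01)  # stand-in for real work
```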
Decorators
trace_agent
Sync decorator for auto-instrumentation.
from infinium import trace_agent
@trace_agent(name: str, client: InfiniumClient = None, *, auto_send: bool = True, description: str = None)
def my_function(...):
...
async_trace_agent
Async decorator for auto-instrumentation.
from infinium import async_trace_agent
@async_trace_agent(name: str, client: AsyncInfiniumClient = None, *, auto_send: bool = True, description: str = None)
async def my_function(...):
...
Integrations
watch()
Auto-detect and monkey-patch a single LLM client instance.
from infinium.integrations import watch
patched_client = watch(client: T, capture_content: bool = False) -> T
| Parameter | Type | Default | Description |
|---|---|---|---|
| client | LLM client | required | OpenAI, Anthropic, Gemini, or xAI client |
| capture_content | bool | False | Capture input/output previews |
Returns: The same client object (patched in place).
Supported clients: openai.OpenAI, openai.AsyncOpenAI, anthropic.Anthropic, anthropic.AsyncAnthropic, google.generativeai.GenerativeModel
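The general shape of this monkey-patching can be sketched generically: wrap a method so each call is recorded, then delegate to the original (sketch; the real integration patches provider-specific methods and feeds the active trace context):

```python
# FakeLLMClient and watch_sketch are ours, used only to illustrate the
# record-then-delegate wrapper that watch() installs on a client instance.
class FakeLLMClient:
    def complete(self, prompt):
        return f"echo: {prompt}"

def watch_sketch(client, calls):
    original = client.complete

    def wrapper(prompt):
        calls.append(prompt)     # record the call
        return original(prompt)  # delegate to the unpatched method

    client.complete = wrapper    # patch in place; same object is returned
    return client

calls = []
client = watch_sketch(FakeLLMClient(), calls)
reply = client.complete("hi")
```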
OpenAIInstrumentor
Class-level instrumentation for OpenAI clients.
from infinium.integrations import OpenAIInstrumentor
instrumentor = OpenAIInstrumentor(capture_content: bool = False)
instrumentor.instrument() # Patch at class level
instrumentor.uninstrument() # Restore originals
AnthropicInstrumentor
Class-level instrumentation for Anthropic clients.
from infinium.integrations import AnthropicInstrumentor
instrumentor = AnthropicInstrumentor(capture_content: bool = False)
instrumentor.instrument()
instrumentor.uninstrument()
GoogleInstrumentor
Class-level instrumentation for Google Gemini models.
from infinium.integrations import GoogleInstrumentor
instrumentor = GoogleInstrumentor(capture_content: bool = False)
instrumentor.instrument()
instrumentor.uninstrument()
InfiniumOTelExporter
Optional OpenTelemetry span exporter.
from infinium.integrations.otel import InfiniumOTelExporter
exporter = InfiniumOTelExporter(
service_name: str = "infinium-agent",
tracer_name: str = "infinium",
)
exporter.export_trace(
trace_name: str,
duration_s: float,
ctx: TraceContext = None,
)
Context Management
TraceContext
Mutable context stored in a ContextVar for the current execution context.
from infinium.integrations._context import TraceContext
| Field | Type | Description |
|---|---|---|
| calls | list[CapturedLlmCall] | Auto-captured LLM calls |
| prompts | list[CapturedPromptFetch] | Auto-captured prompt fetches |
get_current_trace_context() -> Optional[TraceContext]
Return the active trace context, or None if outside a trace.
set_trace_context(ctx) -> contextvars.Token
Set the active trace context. Returns a reset token for restoring the previous context.
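The set/reset token contract is the standard contextvars pattern, sketched here with a plain ContextVar (the SDK's TraceContext additionally carries the captured calls and prompt fetches):

```python
import contextvars

# A ContextVar holds the active context; set() returns a Token that
# reset() uses to restore whatever was active before (sketch).
_ctx = contextvars.ContextVar("trace_ctx", default=None)

def set_ctx(value):
    return _ctx.set(value)  # Token for restoring the previous context

token = set_ctx({"calls": []})
active = _ctx.get()   # the context we just installed
_ctx.reset(token)     # restore the previous context
restored = _ctx.get()
```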
Utility Functions
Validation
from infinium import (
validate_path_id,
validate_string_length,
validate_payload_size,
validate_iso_datetime,
validate_duration,
)
validate_path_id(value: str, field_name: str) -> str
Validates UUID format for URL path segments. Raises ValidationError.
validate_string_length(value: str, field_name: str, max_length: int) -> str
Validates string doesn’t exceed max length. Raises ValidationError.
validate_payload_size(payload: dict) -> None
Validates serialized payload doesn’t exceed 1 MB. Raises ValidationError.
validate_iso_datetime(datetime_str: str) -> bool
Returns True if the string is valid ISO 8601 format.
validate_duration(duration: Union[int, float]) -> float
Validates duration is between 0 and 86,400 seconds. Raises ValidationError.
DateTime
from infinium import get_current_iso_datetime
timestamp = get_current_iso_datetime() # "2024-01-15T10:30:00Z"
Data Conversion
from infinium import coerce_section
# Convert a dict to a dataclass
customer = coerce_section({"customer_name": "Acme"}, Customer)
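A minimal version of this dict-to-dataclass coercion, to show the shape of what happens (local sketch; the SDK version also handles nested sections and validation, and `CustomerSketch` is a stand-in, not the SDK's Customer):

```python
from dataclasses import dataclass, fields

@dataclass
class CustomerSketch:
    customer_name: str = ""

# Pass instances through unchanged; build the dataclass from matching
# dict keys, dropping unknown ones (illustrative sketch of coerce_section).
def coerce_sketch(data, cls):
    if isinstance(data, cls):
        return data
    names = {f.name for f in fields(cls)}
    return cls(**{k: v for k, v in data.items() if k in names})

customer = coerce_sketch({"customer_name": "Acme", "extra": 1}, CustomerSketch)
```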
Logging
from infinium import setup_logging
setup_logging(level="DEBUG")
Constants
from infinium import (
MAX_NAME_LENGTH, # 500
MAX_DESCRIPTION_LENGTH, # 10,000
MAX_STRING_LENGTH, # 2,000
MAX_PAYLOAD_SIZE, # 1,048,576 (1 MB)
)
Exceptions
All exceptions inherit from InfiniumError. See Error Handling for full details.
from infinium.exceptions import (
InfiniumError,
AuthenticationError,
ValidationError,
RateLimitError,
NetworkError,
InfiniumTimeoutError,
TimeoutError, # Alias for InfiniumTimeoutError
ServerError,
NotFoundError,
BatchError,
ConfigurationError,
)
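The practical consequence of the shared base class is that one except clause covers every SDK error; the alias is just a second name bound to the same class. A local sketch of that hierarchy (these Sketch classes are ours, mirroring the documented structure):

```python
# Every error derives from the base, so catching it covers all of them.
class InfiniumErrorSketch(Exception): pass
class RateLimitErrorSketch(InfiniumErrorSketch): pass
class InfiniumTimeoutErrorSketch(InfiniumErrorSketch): pass

# Alias pattern: TimeoutError in the SDK is the same class object.
TimeoutErrorSketch = InfiniumTimeoutErrorSketch

try:
    raise RateLimitErrorSketch("429")
except InfiniumErrorSketch as exc:
    caught = type(exc).__name__
```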