All types are Python dataclasses importable from `infinium`.
## Core Types

### TaskData

The central data structure sent to the Infinium API. Represents a complete trace of an agent's work.

```python
from infinium import TaskData
```
| Field | Type | Default | Description |
|---|---|---|---|
| name | str | required | Short task name (max 500 chars) |
| description | str | required | What the agent did (max 10,000 chars) |
| current_datetime | str | required | ISO 8601 timestamp |
| duration | float | required | Wall-clock time in seconds |
| time_tracking | TimeTracking | None | Start/end timestamps |
| customer | Customer | None | Customer context |
| support | Support | None | Support ticket context |
| sales | Sales | None | Sales pipeline context |
| marketing | Marketing | None | Marketing campaign context |
| content | Content | None | Content creation context |
| research | Research | None | Research task context |
| project | Project | None | Project management context |
| development | Development | None | Software development context |
| executive | Executive | None | Executive/meeting context |
| general | General | None | General-purpose context |
| llm_usage | LlmUsage | None | Aggregate token/cost stats |
| input_summary | str | None | Summary of agent input |
| output_summary | str | None | Summary of agent output |
| steps | list[ExecutionStep] | None | Ordered execution steps |
| expected_outcome | ExpectedOutcome | None | Maestro's grading rubric |
| environment | EnvironmentContext | None | Runtime metadata |
| errors | list[ErrorDetail] | None | Errors encountered |
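Only the first four fields are required. A minimal sketch of capturing them at runtime, using a local stand-in dataclass (field names are taken from the table above; in real code you would import `TaskData` from `infinium` instead):

```python
import time
from dataclasses import dataclass
from datetime import datetime, timezone

# Local stand-in mirroring TaskData's four required fields.
@dataclass
class TaskDataStub:
    name: str
    description: str
    current_datetime: str  # ISO 8601
    duration: float        # wall-clock seconds

start = time.monotonic()
# ... agent does its work here ...
td = TaskDataStub(
    name="Summarize ticket",
    description="Drafted a reply to a support ticket",
    current_datetime=datetime.now(timezone.utc).isoformat(),
    duration=time.monotonic() - start,
)
```

Using `time.monotonic()` for `duration` avoids skew from system clock adjustments mid-task.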
### ApiResponse

Returned by `send_task()`, `send_task_data()`, and `get_interpreted_task_result()`.

```python
from infinium import ApiResponse
```
| Field | Type | Default | Description |
|---|---|---|---|
| success | bool | required | Whether the request succeeded |
| status_code | int | None | HTTP status code |
| message | str | "" | Response message |
| data | dict | None | Response payload (includes `traceId` on success) |
### BatchResult

Returned by `send_tasks_batch()`.

```python
from infinium import BatchResult
```
| Field | Type | Default | Description |
|---|---|---|---|
| successful | int | required | Number of traces sent successfully |
| failed | int | required | Number of traces that failed |
| results | list[ApiResponse] | required | Individual response per task |
| errors | list[str] | required | Error messages for failures |
### PromptContent

Returned by `get_prompt()`.

```python
from infinium import PromptContent
```
| Field | Type | Default | Description |
|---|---|---|---|
| prompt_id | str | required | Prompt UUID |
| name | str | required | Display name |
| version | int | required | Version number |
| content | str | required | Raw template content |
| created_at | str | required | ISO timestamp of version creation |
| rendered_content | str | None | Content with variables substituted |
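The table above doesn't specify the SDK's placeholder syntax, so here is a sketch of the raw-vs-rendered distinction using Python's `string.Template` (`$variable` placeholders) purely for illustration:

```python
from string import Template

# Hypothetical raw template content, as it might appear in `content`.
content = "Hello $customer_name, your ticket $ticket_id is resolved."

# After substitution, the result corresponds to `rendered_content`.
rendered = Template(content).substitute(
    customer_name="Ada", ticket_id="T-42"
)
```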
## Trace Enrichment Types
### ExecutionStep

Represents a single discrete action the agent took.

```python
from infinium import ExecutionStep
```
| Field | Type | Default | Description |
|---|---|---|---|
| step_number | int | required | Sequential step number |
| action | str | required | Action type (e.g., "llm_inference", "tool_use", "decision") |
| description | str | required | What this step did |
| duration_ms | int | None | Step duration in milliseconds |
| input_preview | str | None | Preview of step input (max 500 chars) |
| output_preview | str | None | Preview of step output (max 500 chars) |
| error | ErrorDetail | None | Error that occurred during this step |
| tool_call | ToolCall | None | Tool/API invocation details |
| llm_call | LlmCall | None | LLM invocation details |
| metadata | dict | None | Arbitrary metadata |
### ErrorDetail

A factual record of an error the agent encountered.

```python
from infinium import ErrorDetail
```
| Field | Type | Default | Description |
|---|---|---|---|
| error_type | str | required | Exception class name |
| message | str | required | Error message |
| recoverable | bool | False | Whether the agent recovered |
| retry_count | int | 0 | Number of retries attempted |
| stack_trace | str | None | Full stack trace |
| error_code | str | None | Application error code |
### ToolCall

A tool or API invocation.

```python
from infinium import ToolCall
```
| Field | Type | Default | Description |
|---|---|---|---|
| tool_name | str | required | Name of the tool |
| duration_ms | int | None | Call duration in milliseconds |
| input_summary | str | None | Summary of input |
| output_summary | str | None | Summary of output |
| error | ErrorDetail | None | Error details if the call failed |
| http_status | int | None | HTTP status code |
### LlmCall

A single LLM invocation.

```python
from infinium import LlmCall
```
| Field | Type | Default | Description |
|---|---|---|---|
| model | str | required | Model identifier |
| provider | str | None | Provider name (openai, anthropic, google, xai) |
| prompt_tokens | int | None | Input token count |
| completion_tokens | int | None | Output token count |
| latency_ms | int | None | Call latency in milliseconds |
| temperature | float | None | Temperature parameter |
| purpose | str | None | What this LLM call was for |
### LlmUsage

Aggregate token and cost statistics across the entire trace.

```python
from infinium import LlmUsage
```
| Field | Type | Default | Description |
|---|---|---|---|
| model | str | None | Primary model used |
| provider | str | None | Primary provider |
| prompt_tokens | int | None | Total input tokens |
| completion_tokens | int | None | Total output tokens |
| total_tokens | int | None | Total tokens (prompt + completion) |
| estimated_cost_usd | float | None | Estimated cost in USD |
| api_calls_count | int | None | Number of API calls made |
| total_latency_ms | int | None | Sum of all call latencies |
| calls | list[LlmCall] | None | Individual call details |
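The aggregate fields are simple reductions over the individual calls: token counts and latencies sum, and `total_tokens = prompt_tokens + completion_tokens`. A sketch of that aggregation with a local stand-in for `LlmCall`:

```python
from dataclasses import dataclass

# Stand-in with a subset of LlmCall's fields.
@dataclass
class LlmCallStub:
    model: str
    prompt_tokens: int
    completion_tokens: int
    latency_ms: int

calls = [
    LlmCallStub("gpt-4o", 120, 40, 800),
    LlmCallStub("gpt-4o", 300, 90, 1500),
]

# Aggregate into LlmUsage-shaped totals.
usage = {
    "model": calls[0].model,
    "prompt_tokens": sum(c.prompt_tokens for c in calls),
    "completion_tokens": sum(c.completion_tokens for c in calls),
    "api_calls_count": len(calls),
    "total_latency_ms": sum(c.latency_ms for c in calls),
}
usage["total_tokens"] = usage["prompt_tokens"] + usage["completion_tokens"]
```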
### ExpectedOutcome

Defines what Maestro should evaluate the trace against.

```python
from infinium import ExpectedOutcome
```
| Field | Type | Default | Description |
|---|---|---|---|
| task_objective | str | required | The goal of the task |
| required_deliverables | list[str] | None | What the agent should produce |
| constraints | list[str] | None | Rules the agent should follow |
| acceptance_criteria | list[str] | None | How to judge success |
### EnvironmentContext

Runtime metadata about the execution environment.

```python
from infinium import EnvironmentContext
```
| Field | Type | Default | Description |
|---|---|---|---|
| framework | str | None | Framework name (e.g., "langchain", "crewai") |
| framework_version | str | None | Framework version |
| python_version | str | None | Python version |
| sdk_version | str | None | Infinium SDK version (auto-populated) |
| runtime | str | None | Runtime environment |
| region | str | None | Deployment region |
| custom_tags | dict[str, str] | None | Custom key-value tags |
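Most of these values can be collected from the standard library at startup. A sketch of values an integration might gather (the `custom_tags` content here is purely illustrative):

```python
import platform
import sys

# EnvironmentContext-shaped values collected from the runtime.
env = {
    "python_version": platform.python_version(),   # e.g. "3.12.1"
    "runtime": sys.implementation.name,            # e.g. "cpython"
    "custom_tags": {"deployment": "staging"},      # illustrative tag
}
```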
## Dict Coercion

Anywhere the SDK expects a dataclass, you can pass a plain dict instead. The SDK auto-coerces it:
```python
from infinium import InfiniumClient, LlmUsage

# These are equivalent:
td = InfiniumClient.create_task_data(
    name="Task", description="...", duration=1.0,
    llm_usage=LlmUsage(model="gpt-4o", prompt_tokens=100),
)
td = InfiniumClient.create_task_data(
    name="Task", description="...", duration=1.0,
    llm_usage={"model": "gpt-4o", "prompt_tokens": 100},
)
```
Extra keys in the dict that don't match dataclass fields are collected into a `metadata` field (if the dataclass has one).