All types are Python dataclasses importable from `infinium`.

## Core Types

### TaskData

The central data structure sent to the Infinium API. Represents a complete trace of an agent’s work.

```python
from infinium import TaskData
```

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `name` | `str` | required | Short task name (max 500 chars) |
| `description` | `str` | required | What the agent did (max 10,000 chars) |
| `current_datetime` | `str` | required | ISO 8601 timestamp |
| `duration` | `float` | required | Wall-clock time in seconds |
| `time_tracking` | `TimeTracking` | `None` | Start/end timestamps |
| `customer` | `Customer` | `None` | Customer context |
| `support` | `Support` | `None` | Support ticket context |
| `sales` | `Sales` | `None` | Sales pipeline context |
| `marketing` | `Marketing` | `None` | Marketing campaign context |
| `content` | `Content` | `None` | Content creation context |
| `research` | `Research` | `None` | Research task context |
| `project` | `Project` | `None` | Project management context |
| `development` | `Development` | `None` | Software development context |
| `executive` | `Executive` | `None` | Executive/meeting context |
| `general` | `General` | `None` | General-purpose context |
| `llm_usage` | `LlmUsage` | `None` | Aggregate token/cost stats |
| `input_summary` | `str` | `None` | Summary of agent input |
| `output_summary` | `str` | `None` | Summary of agent output |
| `steps` | `list[ExecutionStep]` | `None` | Ordered execution steps |
| `expected_outcome` | `ExpectedOutcome` | `None` | Maestro’s grading rubric |
| `environment` | `EnvironmentContext` | `None` | Runtime metadata |
| `errors` | `list[ErrorDetail]` | `None` | Errors encountered |

### ApiResponse

Returned by `send_task()`, `send_task_data()`, and `get_interpreted_task_result()`.

```python
from infinium import ApiResponse
```

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `success` | `bool` | required | Whether the request succeeded |
| `status_code` | `int` | `None` | HTTP status code |
| `message` | `str` | `""` | Response message |
| `data` | `dict` | `None` | Response payload (includes `traceId` on success) |

### BatchResult

Returned by `send_tasks_batch()`.

```python
from infinium import BatchResult
```

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `successful` | `int` | required | Number of traces sent successfully |
| `failed` | `int` | required | Number of traces that failed |
| `results` | `list[ApiResponse]` | required | Individual response per task |
| `errors` | `list[str]` | required | Error messages for failures |

### PromptContent

Returned by `get_prompt()`.

```python
from infinium import PromptContent
```

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `prompt_id` | `str` | required | Prompt UUID |
| `name` | `str` | required | Display name |
| `version` | `int` | required | Version number |
| `content` | `str` | required | Raw template content |
| `created_at` | `str` | required | ISO timestamp of version creation |
| `rendered_content` | `str` | `None` | Content with variables substituted |

## Trace Enrichment Types

### ExecutionStep

Represents one thing the agent did.

```python
from infinium import ExecutionStep
```

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `step_number` | `int` | required | Sequential step number |
| `action` | `str` | required | Action type (e.g., `"llm_inference"`, `"tool_use"`, `"decision"`) |
| `description` | `str` | required | What this step did |
| `duration_ms` | `int` | `None` | Step duration in milliseconds |
| `input_preview` | `str` | `None` | Preview of step input (max 500 chars) |
| `output_preview` | `str` | `None` | Preview of step output (max 500 chars) |
| `error` | `ErrorDetail` | `None` | Error that occurred during this step |
| `tool_call` | `ToolCall` | `None` | Tool/API invocation details |
| `llm_call` | `LlmCall` | `None` | LLM invocation details |
| `metadata` | `dict` | `None` | Arbitrary metadata |
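As a sketch with illustrative values only: because plain dicts are accepted anywhere a dataclass is expected (see "Dict Coercion"), a step with a nested LLM call can be built without importing `ExecutionStep` at all:

```python
# Illustrative values. Plain dicts are coerced into dataclasses
# wherever the SDK expects one (see "Dict Coercion").
step = {
    "step_number": 1,
    "action": "llm_inference",
    "description": "Drafted a reply to the customer's question",
    "duration_ms": 1400,
    "output_preview": "Hi, thanks for reaching out...",
    "llm_call": {
        "model": "gpt-4o",
        "provider": "openai",
        "prompt_tokens": 512,
        "completion_tokens": 180,
    },
}
```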

### ErrorDetail

A factual error record.

```python
from infinium import ErrorDetail
```

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `error_type` | `str` | required | Exception class name |
| `message` | `str` | required | Error message |
| `recoverable` | `bool` | `False` | Whether the agent recovered |
| `retry_count` | `int` | `0` | Number of retries attempted |
| `stack_trace` | `str` | `None` | Full stack trace |
| `error_code` | `str` | `None` | Application error code |
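A common pattern is to populate the record from a caught exception. The helper name below is hypothetical; it builds the dict form, which the SDK coerces to `ErrorDetail`:

```python
import traceback

# Hypothetical helper: build an ErrorDetail-shaped dict from a caught
# exception. (Dicts are coerced to dataclasses; see "Dict Coercion".)
def error_detail_from_exception(exc, recoverable=False, retry_count=0):
    return {
        "error_type": type(exc).__name__,    # exception class name
        "message": str(exc),
        "recoverable": recoverable,
        "retry_count": retry_count,
        "stack_trace": traceback.format_exc(),
    }

try:
    int("not a number")
except ValueError as exc:
    err = error_detail_from_exception(exc, recoverable=True, retry_count=1)
```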

### ToolCall

A tool or API invocation.

```python
from infinium import ToolCall
```

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `tool_name` | `str` | required | Name of the tool |
| `duration_ms` | `int` | `None` | Call duration in milliseconds |
| `input_summary` | `str` | `None` | Summary of input |
| `output_summary` | `str` | `None` | Summary of output |
| `error` | `ErrorDetail` | `None` | Error details if the call failed |
| `http_status` | `int` | `None` | HTTP status code |
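One way to capture `duration_ms` is to wrap the tool invocation. The wrapper below is a hypothetical sketch, not an SDK API; it returns the dict form, which the SDK coerces to `ToolCall`:

```python
import time

# Hypothetical wrapper: time a tool invocation and record a
# ToolCall-shaped dict (dicts are coerced; see "Dict Coercion").
def record_tool_call(tool_name, fn, *args, **kwargs):
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed_ms = int((time.perf_counter() - start) * 1000)
    call = {
        "tool_name": tool_name,
        "duration_ms": elapsed_ms,
        "input_summary": f"{len(args)} positional args",
        "output_summary": str(result),
    }
    return result, call

result, call = record_tool_call("calculator", lambda a, b: a + b, 2, 3)
```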

### LlmCall

A single LLM invocation.

```python
from infinium import LlmCall
```

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `model` | `str` | required | Model identifier |
| `provider` | `str` | `None` | Provider name (`openai`, `anthropic`, `google`, `xai`) |
| `prompt_tokens` | `int` | `None` | Input token count |
| `completion_tokens` | `int` | `None` | Output token count |
| `latency_ms` | `int` | `None` | Call latency in milliseconds |
| `temperature` | `float` | `None` | Temperature parameter |
| `purpose` | `str` | `None` | What this LLM call was for |
### LlmUsage

Aggregate token and cost statistics across the entire trace.

```python
from infinium import LlmUsage
```

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `model` | `str` | `None` | Primary model used |
| `provider` | `str` | `None` | Primary provider |
| `prompt_tokens` | `int` | `None` | Total input tokens |
| `completion_tokens` | `int` | `None` | Total output tokens |
| `total_tokens` | `int` | `None` | Total tokens (prompt + completion) |
| `estimated_cost_usd` | `float` | `None` | Estimated cost in USD |
| `api_calls_count` | `int` | `None` | Number of API calls made |
| `total_latency_ms` | `int` | `None` | Sum of all call latencies |
| `calls` | `list[LlmCall]` | `None` | Individual call details |
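The totals are plain aggregates over the individual calls. A sketch with illustrative numbers, using the dict form (coerced to `LlmUsage`/`LlmCall`):

```python
# Illustrative: aggregate per-call stats into an LlmUsage-shaped dict.
# (Plain dicts are coerced to dataclasses; see "Dict Coercion".)
calls = [
    {"model": "gpt-4o", "prompt_tokens": 900, "completion_tokens": 300, "latency_ms": 2100},
    {"model": "gpt-4o", "prompt_tokens": 350, "completion_tokens": 120, "latency_ms": 800},
]

prompt = sum(c["prompt_tokens"] for c in calls)
completion = sum(c["completion_tokens"] for c in calls)
llm_usage = {
    "model": "gpt-4o",
    "prompt_tokens": prompt,
    "completion_tokens": completion,
    "total_tokens": prompt + completion,  # prompt + completion
    "api_calls_count": len(calls),
    "total_latency_ms": sum(c["latency_ms"] for c in calls),
    "calls": calls,
}
```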

### ExpectedOutcome

Defines what Maestro should evaluate the trace against.

```python
from infinium import ExpectedOutcome
```

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `task_objective` | `str` | required | The goal of the task |
| `required_deliverables` | `list[str]` | `None` | What the agent should produce |
| `constraints` | `list[str]` | `None` | Rules the agent should follow |
| `acceptance_criteria` | `list[str]` | `None` | How to judge success |
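As a sketch with made-up values, the dict form of a grading rubric (coerced to `ExpectedOutcome`) might look like:

```python
# Illustrative values: an ExpectedOutcome-shaped dict telling Maestro
# how to grade the trace. (Dicts are coerced; see "Dict Coercion".)
expected_outcome = {
    "task_objective": "Resolve the customer's billing question",
    "required_deliverables": ["A reply email", "An updated ticket status"],
    "constraints": ["Do not promise refunds over $100"],
    "acceptance_criteria": ["Reply addresses the specific invoice raised"],
}
```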

### EnvironmentContext

Runtime metadata about the execution environment.

```python
from infinium import EnvironmentContext
```

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `framework` | `str` | `None` | Framework name (e.g., `"langchain"`, `"crewai"`) |
| `framework_version` | `str` | `None` | Framework version |
| `python_version` | `str` | `None` | Python version |
| `sdk_version` | `str` | `None` | Infinium SDK version (auto-populated) |
| `runtime` | `str` | `None` | Runtime environment |
| `region` | `str` | `None` | Deployment region |
| `custom_tags` | `dict[str, str]` | `None` | Custom key-value tags |
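The runtime fields can be filled from the standard library. A sketch in dict form (coerced to `EnvironmentContext`); `sdk_version` is omitted because the SDK populates it automatically:

```python
import platform
import sys

# Illustrative: fill runtime fields from the standard library.
# (Dicts are coerced to dataclasses; see "Dict Coercion".)
environment = {
    "framework": "langchain",                     # example value
    "python_version": platform.python_version(),  # e.g. "3.12.1"
    "runtime": sys.implementation.name,           # e.g. "cpython"
    "custom_tags": {"deployment": "staging"},
}
```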

## Dict Coercion

Anywhere the SDK expects a dataclass, you can pass a plain dict instead. The SDK auto-coerces it:

```python
from infinium import InfiniumClient, LlmUsage

# These are equivalent:
td = InfiniumClient.create_task_data(
    name="Task", description="...", duration=1.0,
    llm_usage=LlmUsage(model="gpt-4o", prompt_tokens=100),
)

td = InfiniumClient.create_task_data(
    name="Task", description="...", duration=1.0,
    llm_usage={"model": "gpt-4o", "prompt_tokens": 100},
)
```

Extra keys in the dict that don’t match dataclass fields are collected into a `metadata` field (if the dataclass has one).