After sending a trace to Infinium, Maestro (the behavioral intelligence engine) analyzes it and produces an interpretation. You can retrieve this result programmatically.
## Single Poll
Use `get_interpreted_task_result()` to check once whether Maestro has finished:
```python
response = client.get_interpreted_task_result(task_id=trace_id)

if response.success:
    print(response.data["interpretedTraceResult"])
else:
    print("Not ready yet or not found")
```
### Parameters
| Parameter | Type | Description |
|---|---|---|
| `task_id` | `str` | The trace ID returned from `send_task_data()` |
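If a single check comes back not-ready and you only want a few bounded retries rather than the full polling loop described below, a small wrapper is enough. This is an illustrative sketch, not part of the SDK; `fetch` stands in for a zero-argument callable that returns a response with a boolean `success` attribute:

```python
import time

def poll_with_retries(fetch, attempts=3, delay=2.0):
    """Call fetch() up to `attempts` times, pausing `delay` seconds
    between calls. Returns the first successful response, or the
    last response if none succeeded."""
    response = fetch()
    for _ in range(attempts - 1):
        if response.success:
            break
        time.sleep(delay)
        response = fetch()
    return response
```

For example: `poll_with_retries(lambda: client.get_interpreted_task_result(task_id=trace_id))`.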
## Polling Loop
Use `wait_for_interpretation()` to poll until Maestro completes or a timeout is reached:
```python
# Send a trace
response = client.send_task_data(task_data)
trace_id = response.data.get("traceId")

# Wait for Maestro (blocks until ready or timeout)
interpretation = client.wait_for_interpretation(
    trace_id,
    timeout=120.0,      # Max seconds to wait
    poll_interval=3.0,  # Seconds between polls
)

if interpretation.success:
    result = interpretation.data["interpretedTraceResult"]
    print(result)
else:
    print(f"Timed out or failed: {interpretation.message}")
```
### Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `trace_id` | `str` | required | The trace ID from the `send_task_data()` response |
| `timeout` | `float` | `120.0` | Maximum seconds to wait |
| `poll_interval` | `float` | `3.0` | Seconds between each poll |
### Behavior
- Calls `get_interpreted_task_result()` every `poll_interval` seconds
- Returns immediately when a successful result is received
- If `timeout` is reached without a result, returns the last `ApiResponse` (with `success=False`)
- Raises `InfiniumTimeoutError` only on network-level timeouts, not on Maestro processing timeouts
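The polling behavior above can be sketched as a plain function. This is an illustrative re-implementation, not the SDK's actual source; `fetch_result` stands in for a zero-argument call to `get_interpreted_task_result()` and must return an object with a boolean `success` attribute:

```python
import time

def wait_for_interpretation_sketch(fetch_result, timeout=120.0, poll_interval=3.0):
    """Poll fetch_result() until it reports success or `timeout` elapses.

    Mirrors the documented behavior: returns immediately on success,
    otherwise returns the last (failed) response once the deadline passes.
    """
    deadline = time.monotonic() + timeout
    response = fetch_result()  # first poll happens right away
    while not response.success and time.monotonic() < deadline:
        time.sleep(poll_interval)
        response = fetch_result()
    return response  # on timeout, the last response, with success=False
```

Network-level errors would surface from `fetch_result` itself, which matches the distinction drawn above between `InfiniumTimeoutError` and a Maestro processing timeout.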
## Async Usage

With the async client, the same flow applies; simply `await` each call:
```python
response = await client.send_task_data(task_data)
trace_id = response.data.get("traceId")

interpretation = await client.wait_for_interpretation(
    trace_id,
    timeout=120.0,
    poll_interval=3.0,
)
```
## Best Practices
- **Typical processing time**: Maestro usually completes within 5-30 seconds, depending on trace complexity
- **Timeout tuning**: 120 seconds is generous for most traces; reduce it for latency-sensitive applications
- **Poll interval**: 3 seconds balances responsiveness with API courtesy. Don't go below 1 second
- **Fire and forget**: If you don't need the interpretation immediately, skip polling entirely; the result is always available on the platform dashboard
- **Background polling**: In production, consider polling in a background task rather than blocking your main flow
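As a sketch of the background-polling suggestion, the blocking wait can be pushed onto a worker thread and the result handed to a callback. `poll_in_background` is a hypothetical helper, not part of the SDK:

```python
import threading

def poll_in_background(client, trace_id, on_ready, timeout=120.0, poll_interval=3.0):
    """Run the blocking wait_for_interpretation() off the main thread.

    on_ready receives whatever wait_for_interpretation() returns
    (success or timeout); the caller's flow is never blocked.
    """
    def _worker():
        on_ready(
            client.wait_for_interpretation(
                trace_id, timeout=timeout, poll_interval=poll_interval
            )
        )

    thread = threading.Thread(target=_worker, daemon=True)
    thread.start()
    return thread  # join() it if you need to wait at shutdown
```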