After sending a trace to Infinium, Maestro (the behavioral intelligence engine) analyzes it and produces an interpretation. You can retrieve this result programmatically.

Single Poll

Use get_interpreted_task_result() to check once whether Maestro has finished:

response = client.get_interpreted_task_result(task_id=trace_id)

if response.success:
    print(response.data["interpretedTraceResult"])
else:
    print("Not ready yet or not found")

Parameters

| Parameter | Type | Description                                 |
| --------- | ---- | ------------------------------------------- |
| task_id   | str  | The trace ID returned from send_task_data() |

Polling Loop

Use wait_for_interpretation() to poll until Maestro completes or a timeout is reached:

# Send a trace
response = client.send_task_data(task_data)
trace_id = response.data.get("traceId")

# Wait for Maestro (blocks until ready or timeout)
interpretation = client.wait_for_interpretation(
    trace_id,
    timeout=120.0,     # Max seconds to wait
    poll_interval=3.0, # Seconds between polls
)

if interpretation.success:
    result = interpretation.data["interpretedTraceResult"]
    print(result)
else:
    print(f"Timed out or failed: {interpretation.message}")

Parameters

| Parameter     | Type  | Default  | Description                                 |
| ------------- | ----- | -------- | ------------------------------------------- |
| trace_id      | str   | required | The trace ID from send_task_data() response |
| timeout       | float | 120.0    | Maximum seconds to wait                     |
| poll_interval | float | 3.0      | Seconds between each poll                   |

Behavior

  1. Calls get_interpreted_task_result() every poll_interval seconds
  2. Returns immediately when a successful result is received
  3. If timeout is reached without a result, returns the last ApiResponse (with success=False)
  4. Raises InfiniumTimeoutError only on network-level timeouts, not on Maestro processing timeouts
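The steps above amount to a simple poll-with-deadline loop. A minimal sketch of the same logic, with a generic fetch callable standing in for get_interpreted_task_result() (poll_until is illustrative, not part of the SDK):

```python
import time

def poll_until(fetch, timeout=120.0, poll_interval=3.0):
    """Call fetch() until it reports success or the deadline passes.

    Mirrors the documented behavior: the first successful response is
    returned immediately; if the timeout elapses, the last response
    seen is returned (its success flag will be False).
    """
    deadline = time.monotonic() + timeout
    last = fetch()  # check immediately before sleeping at all
    while not last.success and time.monotonic() < deadline:
        time.sleep(poll_interval)
        last = fetch()
    return last
```

Note that a processing timeout here is an ordinary return value, not an exception, which matches how wait_for_interpretation() distinguishes Maestro delays from network failures.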

Async Usage

response = await client.send_task_data(task_data)
trace_id = response.data.get("traceId")

interpretation = await client.wait_for_interpretation(
    trace_id,
    timeout=120.0,
    poll_interval=3.0,
)
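Because the async client returns awaitables, several traces can be polled concurrently, so the total wall-clock time is bounded by the slowest trace rather than the sum. A sketch under that assumption (gather_interpretations is a hypothetical helper, not part of the SDK):

```python
import asyncio

async def gather_interpretations(client, trace_ids, timeout=120.0):
    """Wait on multiple traces at once.

    Each wait_for_interpretation() call sleeps cooperatively between
    polls, so asyncio interleaves them on one event loop. Results come
    back in the same order as trace_ids.
    """
    tasks = [
        client.wait_for_interpretation(tid, timeout=timeout)
        for tid in trace_ids
    ]
    return await asyncio.gather(*tasks)
```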

Best Practices

  • Typical processing time — Maestro usually completes within 5-30 seconds depending on trace complexity
  • Timeout tuning — 120 seconds is generous for most traces. Reduce for latency-sensitive applications
  • Poll interval — 3 seconds balances responsiveness with API courtesy. Don’t go below 1 second
  • Fire and forget — If you don’t need the interpretation immediately, skip polling entirely. The result is always available on the platform dashboard
  • Background polling — In production, consider polling in a background task rather than blocking your main flow
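One way to keep the main flow unblocked is to run the wait in a daemon thread and hand the result to a callback. A sketch assuming a synchronous client instance (poll_in_background and on_result are illustrative names, not SDK APIs):

```python
import threading

def poll_in_background(client, trace_id, on_result,
                       timeout=120.0, poll_interval=3.0):
    """Run wait_for_interpretation() off the main thread.

    on_result is invoked from the worker thread with the final
    ApiResponse, whether it succeeded or timed out.
    """
    def _worker():
        result = client.wait_for_interpretation(
            trace_id, timeout=timeout, poll_interval=poll_interval
        )
        on_result(result)

    thread = threading.Thread(target=_worker, daemon=True)
    thread.start()
    return thread  # join() it later if you need the result before exit
```

In a long-running service, a task queue or the async client is usually a better fit than raw threads, but this keeps a simple script responsive while Maestro works.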