There are three ways to send traces to Infinium, from simplest to most detailed.
## Comparison

| Method | Best For | Auto Timing | Step-Level Detail | Auto-Send |
|---|---|---|---|---|
| `sendTask()` | Simple, single-step agents | No | No | Yes |
| `sendTaskData()` | Full control, structured data | No | Yes | Yes |
| `TraceBuilder` | Real agents with multiple steps | Yes | Yes | No |
## 1. `sendTask()` — Simple, Inline

The quickest way: call your LLM, then report what happened.
```typescript
import OpenAI from 'openai';
import { InfiniumClient } from 'infinium-o2';

const client = new InfiniumClient({ agentId: '...', agentSecret: '...' });
const openai = new OpenAI();

const start = Date.now();
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'Classify this support ticket. Return JSON.' },
    { role: 'user', content: ticketText },
  ],
});
const duration = (Date.now() - start) / 1000;

const result = response.choices[0].message.content!;
const usage = response.usage!;

await client.sendTask({
  name: 'Classify support ticket',
  description: 'Categorized an inbound support ticket by type and urgency.',
  duration,
  inputSummary: `Customer ticket: ${ticketText.slice(0, 200)}`,
  outputSummary: `Classification: ${result.slice(0, 300)}`,
  llmUsage: {
    model: 'gpt-4o',
    provider: 'openai',
    promptTokens: usage.prompt_tokens,
    completionTokens: usage.completion_tokens,
    totalTokens: usage.total_tokens,
    apiCallsCount: 1,
  },
  customer: { customerName, customerEmail },
});
```
### Fields

| Field | Type | Required | Description |
|---|---|---|---|
| `name` | string | Yes | Task name (max 500 chars) |
| `description` | string | Yes | What the agent did (max 10,000 chars) |
| `duration` | number | Yes | Duration in seconds (0–86,400) |
| `currentDatetime` | string | No | ISO 8601 timestamp (auto-generated if omitted) |
| `inputSummary` | string | No | Summary of input |
| `outputSummary` | string | No | Summary of output |
| `llmUsage` | `LlmUsage` | No | Token counts, model, cost |
| `steps` | `ExecutionStep[]` | No | Execution steps |
| `expectedOutcome` | `ExpectedOutcome` | No | Maestro's grading rubric |
| `environment` | `EnvironmentContext` | No | Runtime metadata |
| `errors` | `ErrorDetail[]` | No | Errors encountered |
`sendTask()` also accepts any domain section (e.g., `customer`, `sales`, `research`).
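For instance, a sales agent might attach a `sales` section alongside the standard fields. A minimal sketch — the field names inside the section are illustrative, since domain sections carry whatever structured context fits your agent's domain:

```typescript
// Sketch of a sendTask() payload with a domain section attached.
// The keys inside `sales` are illustrative, not a fixed schema.
const payload = {
  name: 'Qualify inbound lead',
  description: 'Scored and routed an inbound sales lead.',
  duration: 2.4,
  sales: {
    leadName: 'Acme Corp',
    leadScore: 87,
    nextAction: 'Schedule demo call',
  },
};

// Passed to the client exactly like the example above:
// await client.sendTask(payload);
```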
## 2. `sendTaskData()` — Structured

When you need full control, build a complete `TaskData` object:
```typescript
import { InfiniumClient, ExecutionStep, LlmUsage, ExpectedOutcome } from 'infinium-o2';

const steps: ExecutionStep[] = [];

// Step 1: Search
steps.push({
  stepNumber: 1,
  action: 'tool_use',
  description: 'Search knowledge base for relevant documents',
  durationMs: 340,
  outputPreview: `Found ${results.length} results`,
});

// Step 2: LLM analysis
steps.push({
  stepNumber: 2,
  action: 'llm_inference',
  description: 'Analyze search results with GPT-4o',
  durationMs: 1200,
  llmCall: {
    model: 'gpt-4o',
    provider: 'openai',
    promptTokens: usage.prompt_tokens,
    completionTokens: usage.completion_tokens,
    latencyMs: 1200,
  },
  outputPreview: analysis.slice(0, 500),
});

await client.sendTaskData({
  name: 'Research: quarterly earnings impact',
  description: 'Multi-step research with source gathering and LLM analysis',
  currentDatetime: client.getCurrentIsoDatetime(),
  duration: totalDuration,
  steps,
  expectedOutcome: {
    taskObjective: 'Produce sourced analysis of quarterly earnings impact',
    requiredDeliverables: ['Source list', 'Analysis', 'Executive summary'],
    acceptanceCriteria: ['At least 3 sources', 'Specific data points cited'],
  },
  llmUsage: {
    model: 'gpt-4o',
    provider: 'openai',
    promptTokens: totalPrompt,
    completionTokens: totalCompletion,
    totalTokens: totalPrompt + totalCompletion,
    apiCallsCount: 1,
  },
  environment: { framework: 'custom', nodeVersion: process.version },
});
```
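When a task makes several LLM calls, the top-level `llmUsage` totals can be derived from the per-step `llmCall` entries rather than tracked by hand. A minimal sketch — `sumLlmUsage` is a hypothetical helper, not part of `infinium-o2`:

```typescript
// Sums per-step llmCall token counts into an aggregate suitable for
// the llmUsage block. `sumLlmUsage` is a hypothetical helper.
interface StepLike {
  llmCall?: { promptTokens: number; completionTokens: number };
}

function sumLlmUsage(steps: StepLike[]) {
  let promptTokens = 0;
  let completionTokens = 0;
  let apiCallsCount = 0;
  for (const step of steps) {
    if (!step.llmCall) continue; // tool-only steps carry no llmCall
    promptTokens += step.llmCall.promptTokens;
    completionTokens += step.llmCall.completionTokens;
    apiCallsCount += 1;
  }
  return {
    promptTokens,
    completionTokens,
    totalTokens: promptTokens + completionTokens,
    apiCallsCount,
  };
}

const totals = sumLlmUsage([
  {}, // tool_use step, no llmCall
  { llmCall: { promptTokens: 900, completionTokens: 300 } },
]);
// totals: { promptTokens: 900, completionTokens: 300, totalTokens: 1200, apiCallsCount: 1 }
```

The result can then be spread into the `llmUsage` block, with `model` and `provider` filled in from your own calls.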
## 3. `TraceBuilder` — Recommended for Real Agents

`TraceBuilder` provides a fluent API. Steps auto-capture timing via the `run()` method:
```typescript
import { TraceBuilder } from 'infinium-o2';

const trace = new TraceBuilder(
  'Blog Post Generator',
  'Generate a publish-ready blog post on a given topic.',
);

trace.setExpectedOutcome({
  taskObjective: 'Produce a 600-word blog post with SEO metadata',
  requiredDeliverables: ['Blog post', 'SEO title', 'Meta description'],
});
trace.setInputSummary(`Topic: ${topic}. Audience: ${audience}.`);

// Step 1 -- auto-timed via run()
const outline = await trace.step('llm_inference', 'Create outline').run(async (step) => {
  const resp = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: topic }],
  });
  step.setOutput(resp.choices[0].message.content!.slice(0, 500));
  step.recordLlmCall({
    model: 'gpt-4o', provider: 'openai',
    promptTokens: resp.usage!.prompt_tokens,
    completionTokens: resp.usage!.completion_tokens,
  });
  return resp.choices[0].message.content!;
});

// Step 2
const blogPost = await trace.step('llm_inference', 'Write full post').run(async (step) => {
  const resp = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: outline }],
  });
  step.setOutput(resp.choices[0].message.content!.slice(0, 500));
  step.recordLlmCall({
    model: 'gpt-4o', provider: 'openai',
    promptTokens: resp.usage!.prompt_tokens,
    completionTokens: resp.usage!.completion_tokens,
  });
  return resp.choices[0].message.content!;
});

trace.setOutputSummary(`Generated ${blogPost.split(' ').length}-word blog post.`);

const taskData = trace.build(); // Auto-computes total duration
await client.sendTaskData(taskData);
```
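If a step throws, it is usually worth catching, recording the failure, and still sending the trace so the run shows up with its error rather than disappearing. A sketch of collecting an error record — the field names on the error object are assumptions, since the exact `ErrorDetail` shape isn't spelled out above:

```typescript
// Collecting an error before sending. The field names below are
// assumptions; check the ErrorDetail type in infinium-o2 for the real shape.
const errors: Array<{ stepNumber?: number; type: string; message: string }> = [];

try {
  throw new Error('OpenAI rate limit exceeded'); // stand-in for a failing step
} catch (err) {
  errors.push({
    stepNumber: 2,
    type: 'api_error',
    message: err instanceof Error ? err.message : String(err),
  });
}

// With TraceBuilder: trace.addError(errors[0]); then build() and send as usual.
```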
### TraceBuilder Methods

| Method | Returns | Description |
|---|---|---|
| `step(action, description)` | `StepContext` | Create an auto-numbered step |
| `setInputSummary(summary)` | `TraceBuilder` | What the agent received |
| `setOutputSummary(summary)` | `TraceBuilder` | What the agent produced |
| `setExpectedOutcome(outcome)` | `TraceBuilder` | Maestro's grading rubric |
| `setEnvironment(env)` | `TraceBuilder` | Runtime metadata |
| `setLlmUsage(usage)` | `TraceBuilder` | Aggregate token stats |
| `setSection(name, data)` | `TraceBuilder` | Set a domain section |
| `addError(error)` | `TraceBuilder` | Record an error |
| `build(traceCtx?)` | `TaskData` | Build the final `TaskData` |
All setters return `this` for fluent chaining:

```typescript
trace.setInputSummary('...').setExpectedOutcome({ ... }).setEnvironment({ ... });
```
### StepContext Methods

| Method | Returns | Description |
|---|---|---|
| `run(fn)` | `Promise<T>` | Execute with auto-timing |
| `setInput(preview)` | `StepContext` | Input preview (truncated to 500 chars) |
| `setOutput(preview)` | `StepContext` | Output preview (truncated to 500 chars) |
| `recordToolCall(call)` | `StepContext` | Record a tool invocation |
| `recordLlmCall(call)` | `StepContext` | Record an LLM call |
| `setMetadata(meta)` | `StepContext` | Attach arbitrary metadata |
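Inside `run()`, `setMetadata()` covers anything the other setters don't. Since the metadata is arbitrary by design, the keys below are only examples of the kind of debugging context worth attaching:

```typescript
// Example metadata object for StepContext.setMetadata(). The keys are
// arbitrary by design -- choose whatever helps when reviewing the trace later.
const metadata = {
  retryCount: 1,
  cacheHit: false,
  sourceUrl: 'https://example.com/kb/article-42',
};

// Inside a step: step.setMetadata(metadata);
```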