There are three ways to send traces to Infinium, from simplest to most detailed.

Comparison

| Method | Best For | Auto Timing | Step-Level Detail | Auto-Send |
|---|---|---|---|---|
| sendTask() | Simple, single-step agents | No | No | Yes |
| sendTaskData() | Full control, structured data | No | Yes | Yes |
| TraceBuilder | Real agents with multiple steps | Yes | Yes | No |

1. sendTask() — Simple, Inline

The quickest way. Call your LLM, then report what happened:

import OpenAI from 'openai';
import { InfiniumClient } from 'infinium-o2';

const client = new InfiniumClient({ agentId: '...', agentSecret: '...' });
const openai = new OpenAI();
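// ticketText, customerName, and customerEmail are assumed to come from your application context.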

const start = Date.now();
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'Classify this support ticket. Return JSON.' },
    { role: 'user', content: ticketText },
  ],
});
const duration = (Date.now() - start) / 1000;
const result = response.choices[0].message.content!;
const usage = response.usage!;

await client.sendTask({
  name: 'Classify support ticket',
  description: 'Categorized an inbound support ticket by type and urgency.',
  duration,
  inputSummary: `Customer ticket: ${ticketText.slice(0, 200)}`,
  outputSummary: `Classification: ${result.slice(0, 300)}`,
  llmUsage: {
    model: 'gpt-4o',
    provider: 'openai',
    promptTokens: usage.prompt_tokens,
    completionTokens: usage.completion_tokens,
    totalTokens: usage.total_tokens,
    apiCallsCount: 1,
  },
  customer: { customerName, customerEmail },
});

Fields

| Field | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | Task name (max 500 chars) |
| description | string | Yes | What the agent did (max 10,000 chars) |
| duration | number | Yes | Duration in seconds (0-86,400) |
| currentDatetime | string | No | ISO 8601 timestamp (auto-generated if omitted) |
| inputSummary | string | No | Summary of input |
| outputSummary | string | No | Summary of output |
| llmUsage | LlmUsage | No | Token counts, model, cost |
| steps | ExecutionStep[] | No | Execution steps |
| expectedOutcome | ExpectedOutcome | No | Maestro’s grading rubric |
| environment | EnvironmentContext | No | Runtime metadata |
| errors | ErrorDetail[] | No | Errors encountered |

sendTask() also accepts any domain section (e.g., customer, sales, research).
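For example, a research agent could attach a research section in place of customer. This is only a sketch: the fields inside the section are illustrative placeholders, not a schema defined on this page.

await client.sendTask({
  name: 'Summarize earnings call',
  description: 'Summarized the Q3 earnings call transcript with key figures.',
  duration,
  research: {
    // Illustrative domain fields -- replace with whatever your domain tracks
    topic: 'Q3 earnings',
    sourcesReviewed: 4,
  },
});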


2. sendTaskData() — Structured

When you need full control, build a complete TaskData object:

import { InfiniumClient, ExecutionStep, LlmUsage, ExpectedOutcome } from 'infinium-o2';
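// results, usage, analysis, totalDuration, totalPrompt, and totalCompletion are assumed
// to be produced by your own pipeline (not shown here).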

const steps: ExecutionStep[] = [];

// Step 1: Search
steps.push({
  stepNumber: 1,
  action: 'tool_use',
  description: 'Search knowledge base for relevant documents',
  durationMs: 340,
  outputPreview: `Found ${results.length} results`,
});

// Step 2: LLM analysis
steps.push({
  stepNumber: 2,
  action: 'llm_inference',
  description: 'Analyze search results with GPT-4o',
  durationMs: 1200,
  llmCall: {
    model: 'gpt-4o',
    provider: 'openai',
    promptTokens: usage.prompt_tokens,
    completionTokens: usage.completion_tokens,
    latencyMs: 1200,
  },
  outputPreview: analysis.slice(0, 500),
});

await client.sendTaskData({
  name: 'Research: quarterly earnings impact',
  description: 'Multi-step research with source gathering and LLM analysis',
  currentDatetime: client.getCurrentIsoDatetime(),
  duration: totalDuration,
  steps,
  expectedOutcome: {
    taskObjective: 'Produce sourced analysis of quarterly earnings impact',
    requiredDeliverables: ['Source list', 'Analysis', 'Executive summary'],
    acceptanceCriteria: ['At least 3 sources', 'Specific data points cited'],
  },
  llmUsage: {
    model: 'gpt-4o',
    provider: 'openai',
    promptTokens: totalPrompt,
    completionTokens: totalCompletion,
    totalTokens: totalPrompt + totalCompletion,
    apiCallsCount: 1,
  },
  environment: { framework: 'custom', nodeVersion: process.version },
});

3. TraceBuilder

TraceBuilder provides a fluent API for multi-step agents. Each step captures its own timing automatically via the run() method:

import { TraceBuilder } from 'infinium-o2';
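// topic and audience are assumed inputs from your application.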

const trace = new TraceBuilder(
  'Blog Post Generator',
  'Generate a publish-ready blog post on a given topic.',
);

trace.setExpectedOutcome({
  taskObjective: 'Produce a 600-word blog post with SEO metadata',
  requiredDeliverables: ['Blog post', 'SEO title', 'Meta description'],
});
trace.setInputSummary(`Topic: ${topic}. Audience: ${audience}.`);

// Step 1 -- auto-timed via run()
const outline = await trace.step('llm_inference', 'Create outline').run(async (step) => {
  const resp = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: topic }],
  });
  step.setOutput(resp.choices[0].message.content!.slice(0, 500));
  step.recordLlmCall({
    model: 'gpt-4o', provider: 'openai',
    promptTokens: resp.usage!.prompt_tokens,
    completionTokens: resp.usage!.completion_tokens,
  });
  return resp.choices[0].message.content!;
});

// Step 2
const blogPost = await trace.step('llm_inference', 'Write full post').run(async (step) => {
  const resp = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: outline }],
  });
  step.setOutput(resp.choices[0].message.content!.slice(0, 500));
  step.recordLlmCall({
    model: 'gpt-4o', provider: 'openai',
    promptTokens: resp.usage!.prompt_tokens,
    completionTokens: resp.usage!.completion_tokens,
  });
  return resp.choices[0].message.content!;
});

trace.setOutputSummary(`Generated ${blogPost.split(' ').length}-word blog post.`);
const taskData = trace.build(); // Auto-computes total duration
await client.sendTaskData(taskData);

TraceBuilder Methods

| Method | Returns | Description |
|---|---|---|
| step(action, description) | StepContext | Create an auto-numbered step |
| setInputSummary(summary) | TraceBuilder | What the agent received |
| setOutputSummary(summary) | TraceBuilder | What the agent produced |
| setExpectedOutcome(outcome) | TraceBuilder | Maestro’s grading rubric |
| setEnvironment(env) | TraceBuilder | Runtime metadata |
| setLlmUsage(usage) | TraceBuilder | Aggregate token stats |
| setSection(name, data) | TraceBuilder | Set a domain section |
| addError(error) | TraceBuilder | Record an error |
| build(traceCtx?) | TaskData | Build the final TaskData |

All setters return this for fluent chaining:

trace.setInputSummary('...').setExpectedOutcome({...}).setEnvironment({...});
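setSection(), setLlmUsage(), and addError() chain the same way. A sketch for the blog post trace above: the section and usage shapes mirror the earlier examples, while the error fields here are assumptions rather than a documented ErrorDetail shape.

trace
  .setSection('customer', { customerName, customerEmail })
  .setLlmUsage({
    model: 'gpt-4o',
    provider: 'openai',
    promptTokens: totalPrompt,        // assumed aggregates across both calls
    completionTokens: totalCompletion,
    totalTokens: totalPrompt + totalCompletion,
    apiCallsCount: 2,
  })
  .addError({
    // Assumed field names -- check the ErrorDetail type in infinium-o2 for the exact shape
    message: 'Rate limited on first attempt; retried after backoff',
  });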

StepContext Methods

| Method | Returns | Description |
|---|---|---|
| run(fn) | Promise<T> | Execute with auto-timing |
| setInput(preview) | StepContext | Input preview (truncated to 500 chars) |
| setOutput(preview) | StepContext | Output preview (truncated to 500 chars) |
| recordToolCall(call) | StepContext | Record a tool invocation |
| recordLlmCall(call) | StepContext | Record an LLM call |
| setMetadata(meta) | StepContext | Attach arbitrary metadata |
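recordToolCall(), setInput(), and setMetadata() follow the same pattern as the LLM helpers above. A sketch of a tool step, assuming a hypothetical searchKb() helper and illustrative tool-call fields (check the ToolCall type in infinium-o2 for the exact shape):

const hits = await trace.step('tool_use', 'Search knowledge base').run(async (step) => {
  step.setInput(query.slice(0, 500)); // query is assumed application input
  const results = await searchKb(query); // searchKb is a hypothetical helper
  step.recordToolCall({
    // Assumed field names, for illustration only
    toolName: 'kb_search',
  });
  step.setOutput(`Found ${results.length} results`);
  step.setMetadata({ index: 'support-docs' }); // arbitrary metadata
  return results;
});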