# client.trace()

`client.trace()` wraps a function with automatic tracing. It handles timing, input/output capture, error recording, and sending.
```typescript
import OpenAI from 'openai';
import { InfiniumClient } from 'infinium-o2';

const openai = new OpenAI();
const client = new InfiniumClient({ agentId: '...', agentSecret: '...' });

const classify = client.trace('Email Classifier')(
  async (emailBody: string) => {
    const resp = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [
        { role: 'system', content: 'Classify this email. Return JSON.' },
        { role: 'user', content: emailBody },
      ],
    });
    return JSON.parse(resp.choices[0].message.content!);
  }
);

// Every call auto-sends a trace
const result = await classify('Hi, I was charged twice for my subscription...');
```
## Signature

```typescript
client.trace(name: string, options?: {
  autoSend?: boolean;     // Default: true
  description?: string;   // Optional trace description
}): <T extends (...args: any[]) => any>(fn: T) => (...args: Parameters<T>) => Promise<ReturnType<T>>
```
## Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| `name` | `string` | required | Name for the trace |
| `options.autoSend` | `boolean` | `true` | Automatically send the trace when the function returns |
| `options.description` | `string` | `undefined` | Optional description |
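The curried shape of the signature can be illustrated with a minimal stand-in. This is a sketch of the wrapper pattern, not the library's actual implementation; only the `name`, `autoSend`, and `description` parameters come from the table above, and the timing/logging body is assumed:

```typescript
// Minimal stand-in for the curried trace(name, options?) signature.
// Illustrative only: the real client also captures I/O and sends traces.
type TraceOptions = { autoSend?: boolean; description?: string };

function trace(name: string, options: TraceOptions = {}) {
  const { autoSend = true } = options;
  return <T extends (...args: any[]) => any>(fn: T) =>
    async (...args: Parameters<T>): Promise<Awaited<ReturnType<T>>> => {
      const start = process.hrtime.bigint();
      try {
        return await fn(...args);
      } finally {
        const ms = Number(process.hrtime.bigint() - start) / 1e6;
        if (autoSend) console.log(`[${name}] trace sent after ${ms.toFixed(1)} ms`);
      }
    };
}

// Same call shape as client.trace('...')(fn)
const double = trace('Double', { description: 'doubles a number' })(
  async (n: number) => n * 2
);
```

Note how `trace(name, options?)` returns a decorator that accepts the function, which is why the examples in this page use the double-call form `client.trace('...')(fn)`.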
## What It Captures

- **Duration** — wall-clock time from function entry to exit (via `process.hrtime.bigint()`)
- **Input** — string representation of function arguments (as `inputSummary`, truncated to 500 chars)
- **Output** — string representation of the return value (as `outputSummary`, truncated to 500 chars)
- **Errors** — if the function throws, the error is captured as an `ErrorDetail`, the trace is still sent, and the error is re-thrown
- **LLM calls** — if the function calls a `watch()`-patched LLM client, those calls are automatically incorporated
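The 500-character summaries can be approximated like this. This is a sketch of the truncation behavior described above; the exact serialization format the SDK uses is an assumption:

```typescript
// Sketch of inputSummary/outputSummary truncation as described above.
// The JSON.stringify fallback for non-string values is assumed.
function summarize(value: unknown, limit = 500): string {
  const s = typeof value === 'string' ? value : JSON.stringify(value);
  return s.length > limit ? s.slice(0, limit) : s;
}
```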
## Disabling Auto-Send

```typescript
const classify = client.trace('Draft Classifier', { autoSend: false })(
  async (text: string) => {
    // ...
  }
);

const result = await classify('some text');
// Trace is built but not sent -- useful for testing
```
## Error Handling

When a traced function throws:

- The error is captured as an `ErrorDetail` in the trace (type, message, stack trace)
- The trace is sent (if `autoSend` is `true`)
- The error is re-thrown — the wrapper never swallows errors
```typescript
const riskyOp = client.trace('Risky Operation')(
  async () => {
    throw new Error('something went wrong');
  }
);

try {
  await riskyOp();
} catch (error) {
  // The trace was already sent with the error recorded
}
```
## Combining with watch()
The most powerful pattern combines `watch()` with `client.trace()`. LLM calls are captured automatically:

```typescript
import OpenAI from 'openai';
import { InfiniumClient, watch } from 'infinium-o2';

const client = new InfiniumClient({ agentId: '...', agentSecret: '...' });
const openai = watch(new OpenAI());

const research = client.trace('Research Agent')(
  async (query: string) => {
    // Step 1: Search (not an LLM call, not captured)
    const results = await searchDatabase(query);

    // Step 2: Analyze with LLM (auto-captured by watch())
    const resp1 = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [
        { role: 'system', content: 'Analyze these results.' },
        { role: 'user', content: JSON.stringify(results) },
      ],
    });

    // Step 3: Summarize with LLM (also auto-captured)
    const resp2 = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [
        { role: 'system', content: 'Summarize this analysis.' },
        { role: 'user', content: resp1.choices[0].message.content! },
      ],
    });

    return resp2.choices[0].message.content;
  }
);

// The trace includes both LLM calls with tokens, latency, and model info
const result = await research('What are the latest trends in AI?');
```
## Context Management

`client.trace()` uses Node.js `AsyncLocalStorage` for context management:

- Each wrapped function execution gets its own `TraceContext`
- `watch()`-patched LLM calls automatically record into the active context
- Context is async-safe — concurrent executions don't interfere
```typescript
// These run concurrently with separate trace contexts
await Promise.all([
  classify('email 1'), // Separate trace
  classify('email 2'), // Separate trace
  classify('email 3'), // Separate trace
]);
```
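The isolation guarantee rests on Node's built-in `AsyncLocalStorage`. Here is a standalone sketch of that mechanism (illustrating the platform primitive, not the SDK's internal code):

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

// Each run() call installs its own store for the duration of the async
// callback, so interleaved executions read back their own context.
const als = new AsyncLocalStorage<{ traceName: string }>();

function traced(name: string): Promise<string> {
  return als.run({ traceName: name }, async () => {
    // Yield to the event loop so the executions actually interleave
    await new Promise((r) => setTimeout(r, Math.random() * 10));
    return als.getStore()!.traceName; // still this execution's own name
  });
}
```

Because the store is scoped per `run()` invocation rather than held in a module-level variable, concurrent `Promise.all` executions like the one above cannot see or overwrite each other's context.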
## Usage on Both Clients
`trace()` works identically on both `InfiniumClient` and `AsyncInfiniumClient`:

```typescript
import { InfiniumClient, AsyncInfiniumClient } from 'infinium-o2';

const sync = new InfiniumClient({ agentId: '...', agentSecret: '...' });
const async_ = new AsyncInfiniumClient({ agentId: '...', agentSecret: '...' });

const fn1 = sync.trace('Task A')(async () => { /* ... */ });
const fn2 = async_.trace('Task B')(async () => { /* ... */ });
```