The SDK provides two clients. `InfiniumClient` is the standard choice; all of its methods already return Promises. `AsyncInfiniumClient` adds explicit lifecycle management via `close()`, plus additional batch methods.
## When to Use Which
| Feature | InfiniumClient | AsyncInfiniumClient |
|---|---|---|
| `sendTask()` | `Promise<ApiResponse>` | `Promise<ApiResponse>` |
| `sendBatch()` | Sequential | Concurrent |
| `sendBatchSequential()` | N/A | Sequential |
| `close()` | No-op | Cleans up resources |
| `isClosed()` | N/A | Checks closed state |
Use AsyncInfiniumClient when you need concurrent batch processing, explicit resource cleanup, or graceful shutdown handling.
## AsyncInfiniumClient
```typescript
import { AsyncInfiniumClient } from 'infinium-o2';

const client = new AsyncInfiniumClient({
  agentId: 'your-agent-id',
  agentSecret: 'your-agent-secret',
});

try {
  await client.sendTask({
    name: 'Moderate content',
    description: 'Check user content for policy violations.',
    duration: 1.8,
  });
} finally {
  await client.close();
}
```
## Lifecycle Methods
- `close(): Promise<void>` — Cleans up resources. After calling it, further requests will fail.
- `isClosed(): boolean` — Returns whether the client has been closed.
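The close-then-fail contract can be sketched with a minimal stand-in class (illustrative only, not the SDK's implementation):

```typescript
// Minimal illustration of the documented lifecycle contract:
// requests succeed until close(), then reject; isClosed() reports state.
class ClosableClient {
  private closed = false;

  async sendTask(task: { name: string }): Promise<string> {
    if (this.closed) {
      throw new Error('Client is closed');
    }
    return `sent:${task.name}`;
  }

  async close(): Promise<void> {
    this.closed = true;
  }

  isClosed(): boolean {
    return this.closed;
  }
}

async function demo(): Promise<[string, boolean, string]> {
  const client = new ClosableClient();
  const ok = await client.sendTask({ name: 'a' });
  await client.close();
  let failure = '';
  try {
    await client.sendTask({ name: 'b' });
  } catch (e) {
    failure = (e as Error).message;
  }
  return [ok, client.isClosed(), failure];
}
```

The `try`/`finally` pattern in the quickstart above guarantees `close()` runs even when `sendTask()` throws.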
## Concurrent Batch
`AsyncInfiniumClient.sendBatch()` processes tasks concurrently:

```typescript
const result = await client.sendBatch(tasks);
console.log(`Sent ${result.successfulTasks}/${result.totalTasks} traces`);
```
For strict rate limiting, use sequential processing:

```typescript
const result = await client.sendBatchSequential(tasks);
```
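The difference between the two batch modes comes down to Promise scheduling. A minimal sketch with a stand-in send function (not the SDK's internals):

```typescript
type Task = { name: string };

// Stand-in for a single network send; resolves after a short delay.
const send = async (t: Task): Promise<string> => {
  await new Promise((r) => setTimeout(r, 10));
  return t.name;
};

// Concurrent: all sends start at once, like sendBatch().
async function concurrent(tasks: Task[]): Promise<string[]> {
  return Promise.all(tasks.map(send));
}

// Sequential: each send waits for the previous one, like sendBatchSequential().
async function sequential(tasks: Task[]): Promise<string[]> {
  const out: string[] = [];
  for (const t of tasks) {
    out.push(await send(t));
  }
  return out;
}
```

Sequential processing trades throughput for predictable request pacing, which is why it suits strict rate limits.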
## Trace Wrapper
Works identically to `InfiniumClient`:

```typescript
const classify = client.trace('Classifier')(
  async (text: string) => {
    const resp = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: text }],
    });
    return JSON.parse(resp.choices[0].message.content!);
  }
);
```
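The `trace(name)(fn)` shape is a curried higher-order function. A minimal stand-in showing the call pattern (illustrative only; it assumes the wrapper forwards arguments and the return value unchanged while recording timing):

```typescript
// Illustrative curried wrapper: trace(name) returns a decorator that
// wraps an async function, logging its duration without changing its result.
function trace(name: string) {
  return function <A extends unknown[], R>(fn: (...args: A) => Promise<R>) {
    return async (...args: A): Promise<R> => {
      const start = Date.now();
      try {
        return await fn(...args);
      } finally {
        console.log(`${name} took ${Date.now() - start}ms`);
      }
    };
  };
}

const classify = trace('Classifier')(async (text: string) => text.length);
```

Because the wrapper preserves the function's signature, call sites need no changes after wrapping.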
## With watch()
```typescript
import OpenAI from 'openai';
import Anthropic from '@anthropic-ai/sdk';
import { AsyncInfiniumClient, watch } from 'infinium-o2';

const client = new AsyncInfiniumClient({ agentId: '...', agentSecret: '...' });
const openai = watch(new OpenAI());
const anthropic = watch(new Anthropic());

const process_ = client.trace('Multi-Provider Agent')(
  async (query: string) => {
    const resp1 = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: query }],
    });
    const resp2 = await anthropic.messages.create({
      model: 'claude-3-5-sonnet-20241022',
      max_tokens: 1024,
      messages: [{ role: 'user', content: query }],
    });
    return {
      openai: resp1.choices[0].message.content,
      anthropic: resp2.content[0].text,
    };
  }
);
```
## Graceful Shutdown
Handle process signals for clean shutdown:
```typescript
const client = new AsyncInfiniumClient({ agentId: '...', agentSecret: '...' });

async function shutdown() {
  console.log('Shutting down...');
  await client.close();
  process.exit(0);
}

process.on('SIGTERM', shutdown);
process.on('SIGINT', shutdown);
```
## Maestro Polling
```typescript
const response = await client.sendTaskData(taskData);
const traceId = response.data?.traceId;

// traceId may be undefined if the send failed; guard before polling.
if (traceId) {
  const interpretation = await client.getInterpretedTaskResult(traceId);
  console.log(interpretation.data);
}
```
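If the interpretation is produced asynchronously, a single call may return before the result exists. A generic retry loop covers that case (the helper below is hypothetical, not part of the SDK):

```typescript
// Hypothetical poll helper: retries an async getter until it yields a
// non-null value or the attempt budget runs out.
async function pollUntil<T>(
  getter: () => Promise<T | null>,
  { attempts = 10, intervalMs = 1000 } = {},
): Promise<T> {
  for (let i = 0; i < attempts; i++) {
    const value = await getter();
    if (value !== null) return value;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error('Polling timed out');
}
```

With a helper like this, the getter would wrap `getInterpretedTaskResult(traceId)` and map a not-yet-ready response to `null`.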
## Prompt Studio
```typescript
const prompt = await client.getPrompt(
  '...', '...',
  'latest',
  { tone: 'professional' },
);
```
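Assuming the final argument fills named placeholders in the stored prompt template (the `{{key}}` syntax below is an illustrative assumption, not the SDK's documented format), the substitution step can be sketched as:

```typescript
// Illustrative placeholder substitution: replaces {{key}} tokens with
// values from the variables map, leaving unknown tokens untouched.
function renderPrompt(template: string, variables: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key: string) => variables[key] ?? match);
}
```

Leaving unknown tokens in place makes missing variables visible in the rendered output instead of silently dropping them.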