AI Agent v3 Launch
We're excited to announce the launch of AI Agent v3, a complete redesign of our AI architecture built on the Vercel AI SDK. This release brings a modular architecture, improved streaming, and real-time progress tracking.
What's New in v3?
AI SDK Integration
The v3 agent is built on top of the Vercel AI SDK, providing:
- Unified API for multiple LLM providers
- Built-in streaming support
- Type-safe interfaces
- First-class tool support
Modular Architecture
The agent is now split into focused modules:
agent/v3/
├── core/
│   ├── molos-agent.ts       # Main agent orchestration
│   ├── config.ts            # Agent configuration
│   └── state.ts             # State management
├── multi-agent/
│   ├── module-registry.ts   # Module registration
│   ├── delegation-tools.ts  # Module delegation
│   └── module-spec.ts       # Module specification
├── providers/
│   ├── factory.ts           # Provider factory
│   └── index.ts             # Provider implementations
├── tools/
│   ├── schema-converter.ts  # Schema conversion
│   ├── tool-wrapper.ts      # Tool execution
│   └── index.ts             # Tool registry
└── types/
    └── index.ts             # TypeScript definitions
Key Features
Real-Time Progress Streaming
Watch your agent execute tasks in real time with detailed progress updates:
interface ProgressEvent {
  type: 'step_start' | 'step_complete' | 'step_error' | 'planning' | 'complete';
  stepNumber: number;
  message: string;
  details?: Record<string, unknown>;
  timestamp: Date;
}
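As a rough illustration, a UI can fold these events into log lines. The `formatProgress` helper below is hypothetical, not part of the SDK; only the `ProgressEvent` shape comes from v3:

```typescript
interface ProgressEvent {
  type: 'step_start' | 'step_complete' | 'step_error' | 'planning' | 'complete';
  stepNumber: number;
  message: string;
  details?: Record<string, unknown>;
  timestamp: Date;
}

// Hypothetical helper: turn a progress event into a one-line log entry.
function formatProgress(event: ProgressEvent): string {
  const prefix = event.type === 'step_error' ? '!' : '·';
  return `${prefix} [${event.type}] step ${event.stepNumber}: ${event.message}`;
}

const event: ProgressEvent = {
  type: 'step_start',
  stepNumber: 1,
  message: 'Loading sales data',
  timestamp: new Date()
};

console.log(formatProgress(event)); // "· [step_start] step 1: Loading sales data"
```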
Modular Tool System
The agent discovers and uses tools from registered modules automatically:
interface ModuleTool {
  name: string;
  description: string;
  parameters: Record<string, ToolParameter>;
  handler: (params: unknown) => Promise<ToolResult>;
}

interface ToolParameter {
  type: 'string' | 'number' | 'boolean' | 'array' | 'object';
  description: string;
  required: boolean;
  default?: unknown;
}
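To make the interfaces concrete, here is a minimal sketch of how a registry might validate required parameters before invoking a tool's handler. The `ToolResult` shape, the registry itself, and the slightly narrowed handler signature are illustrative assumptions, not the actual v3 implementation:

```typescript
interface ToolResult { ok: boolean; output?: unknown; error?: string; }

interface ToolParameter {
  type: 'string' | 'number' | 'boolean' | 'array' | 'object';
  description: string;
  required: boolean;
  default?: unknown;
}

interface ModuleTool {
  name: string;
  description: string;
  parameters: Record<string, ToolParameter>;
  handler: (params: Record<string, unknown>) => Promise<ToolResult>;
}

// Illustrative registry: stores tools by name and checks required parameters
// before delegating to the handler.
const registry = new Map<string, ModuleTool>();

function registerTool(tool: ModuleTool): void {
  registry.set(tool.name, tool);
}

async function invoke(name: string, params: Record<string, unknown>): Promise<ToolResult> {
  const tool = registry.get(name);
  if (!tool) return { ok: false, error: `Unknown tool: ${name}` };
  for (const [key, spec] of Object.entries(tool.parameters)) {
    if (spec.required && params[key] === undefined) {
      return { ok: false, error: `Missing required parameter: ${key}` };
    }
  }
  return tool.handler(params);
}

registerTool({
  name: 'check_inventory',
  description: 'Look up stock for a SKU',
  parameters: {
    sku: { type: 'string', description: 'Product SKU', required: true }
  },
  handler: async (params) => ({ ok: true, output: `Stock for ${params.sku}: 42` })
});
```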
Multi-Provider Support
Support for multiple LLM providers with automatic fallback:
const supportedProviders = [
  'openai',
  'anthropic',
  'google',
  'azure',
  'openai-compatible', // For Z.AI and others
  'custom'
];
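The fallback behavior can be sketched roughly as follows; the `Provider` function type and the loop are assumptions for illustration, not the factory's real interface:

```typescript
type Provider = (prompt: string) => Promise<string>;

// Illustrative fallback loop: try each provider in order, return the first
// successful response, and throw only when every provider has failed.
async function executeWithFallback(
  providers: Provider[],
  prompt: string
): Promise<string> {
  let lastError: unknown;
  for (const provider of providers) {
    try {
      return await provider(prompt);
    } catch (error) {
      lastError = error; // remember the failure and move on to the next provider
    }
  }
  throw new Error(`All providers failed: ${String(lastError)}`);
}

// Fake providers for demonstration: the first always fails.
const flaky: Provider = async () => { throw new Error('rate limited'); };
const stable: Provider = async (p) => `echo: ${p}`;

const answer = await executeWithFallback([flaky, stable], 'hello');
console.log(answer); // "echo: hello"
```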
Enhanced Error Handling
Intelligent error recovery and user-friendly messages:
try {
  const result = await agent.execute(prompt);
} catch (error) {
  if (error instanceof ModuleExecutionError) {
    // Provide guidance for module errors
    showError(`Module ${error.moduleName} failed: ${error.message}`);
  } else if (error instanceof ProviderError) {
    // Handle provider errors
    showError(`Provider error: ${error.message}`);
  }
}
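The handler above relies on typed error classes. A sketch of how they might be defined (the class names follow the example, but the fields and the `showError` hook are assumptions):

```typescript
// Illustrative error hierarchy matching the handler above.
class ModuleExecutionError extends Error {
  constructor(public readonly moduleName: string, message: string) {
    super(message);
    this.name = 'ModuleExecutionError';
  }
}

class ProviderError extends Error {
  constructor(message: string) {
    super(message);
    this.name = 'ProviderError';
  }
}

// Hypothetical UI hook used by the error handler.
function showError(message: string): void {
  console.error(message);
}

try {
  throw new ModuleExecutionError('inventory', 'timeout after 30s');
} catch (error) {
  if (error instanceof ModuleExecutionError) {
    showError(`Module ${error.moduleName} failed: ${error.message}`);
  }
}
```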
Migration from v2
The v2 adapter provides backward compatibility:
// src/lib/server/ai/agent-v2-adapter.ts
export async function executeV2Adapter(
  prompt: string,
  options: V2Options
): Promise<V2Response> {
  const v3Agent = new MolosAgent(options);
  const stream = await v3Agent.stream(prompt, {
    onProgress: (event) => {
      // Convert v3 events to v2 format
      sendV2Progress(convertToV2Progress(event));
    }
  });
  return convertToV2Response(await stream.result);
}
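The adapter's event conversion might look like this. The v2 shape shown here is an assumption for illustration; consult the actual adapter for the real mapping:

```typescript
interface ProgressEvent {
  type: 'step_start' | 'step_complete' | 'step_error' | 'planning' | 'complete';
  stepNumber: number;
  message: string;
  timestamp: Date;
}

// Hypothetical v2 progress shape: a flat status string plus display text.
interface V2Progress {
  status: 'running' | 'error' | 'done';
  text: string;
}

// Illustrative conversion: collapse v3's five event types into v2's three statuses.
function convertToV2Progress(event: ProgressEvent): V2Progress {
  const status =
    event.type === 'step_error' ? 'error' :
    event.type === 'complete' ? 'done' :
    'running';
  return { status, text: `Step ${event.stepNumber}: ${event.message}` };
}
```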
Getting Started
Basic Usage
import { MolosAgent } from '$lib/server/ai/agent/v3/core/molos-agent';

const agent = new MolosAgent({
  provider: 'openai',
  model: 'gpt-4-turbo',
  apiKey: process.env.OPENAI_API_KEY
});

const result = await agent.execute('Analyze the latest sales data');
console.log(result.content);
Streaming Responses
const agent = new MolosAgent(options);

const stream = await agent.stream('Generate a report', {
  onProgress: (event) => {
    console.log(`Step ${event.stepNumber}: ${event.message}`);
  }
});

for await (const chunk of stream) {
  process.stdout.write(chunk);
}
Using Modules
const agent = new MolosAgent({
  provider: 'anthropic',
  model: 'claude-3-opus',
  modules: ['sales', 'inventory', 'analytics']
});

// Agent automatically discovers tools from modules
const result = await agent.execute('Check inventory status');
Provider Configuration
OpenAI
const agent = new MolosAgent({
  provider: 'openai',
  model: 'gpt-4-turbo-preview',
  apiKey: process.env.OPENAI_API_KEY,
  baseUrl: 'https://api.openai.com/v1'
});
Anthropic
const agent = new MolosAgent({
  provider: 'anthropic',
  model: 'claude-3-opus-20240229',
  apiKey: process.env.ANTHROPIC_API_KEY
});
Z.AI (OpenAI-compatible)
const agent = new MolosAgent({
  provider: 'openai-compatible',
  model: 'zai-v3',
  apiKey: process.env.ZAI_API_KEY,
  baseUrl: 'https://api.z.ai/v1',
  useCompatibilityMode: true
});
Performance Improvements
- Faster streaming: Optimized message chunking reduces latency
- Parallel execution: Multiple tools can execute concurrently
- Smart caching: Repeated tool calls are cached automatically
- Efficient memory: Improved memory management for long conversations
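To illustrate the caching idea, repeated tool calls can be memoized on a key derived from the tool name and its serialized arguments. This is a sketch of the technique, not the actual v3 cache:

```typescript
type ToolHandler = (params: Record<string, unknown>) => Promise<unknown>;

// Illustrative memoizer: cache results keyed by tool name + JSON-serialized params.
// Caching the promise (not the value) also deduplicates concurrent in-flight calls.
function withCache(name: string, handler: ToolHandler): ToolHandler {
  const cache = new Map<string, Promise<unknown>>();
  return (params) => {
    const key = `${name}:${JSON.stringify(params)}`;
    const hit = cache.get(key);
    if (hit) return hit; // repeated call: reuse the cached promise
    const result = handler(params);
    cache.set(key, result);
    return result;
  };
}

// Demonstration: count how often the underlying handler actually runs.
let calls = 0;
const lookup = withCache('lookup', async (params) => {
  calls += 1;
  return `result for ${params.id}`;
});
```

Note that `JSON.stringify` is order-sensitive, so a production cache would need a canonical serialization of the parameters.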
What's Next
Future enhancements include:
- Multi-agent collaboration capabilities
- Improved tool discovery and suggestion
- Custom tool registration API
- Agent persistence and conversation history
Learn about the technical implementation in Devlog Feb 8: AI Agent Improvements.
