MoLOS-LLM-Council Module
A multi-LLM consultation system that brings multiple AI perspectives together via a structured 3-stage deliberation process, enabling better decisions through diverse expert viewpoints.
Overview
Location: /ui/MoLOS-LLM-Council
The LLM Council module implements a "council of experts" approach to AI consultation. Instead of relying on a single LLM response, you can query multiple AI models simultaneously, have them critique each other's responses, and synthesize a final, well-considered answer.
Placeholder: Screenshot showing the main council interface with the consultation input and persona selection
The Council Concept
Think of it as convening a panel of experts:
- Multiple Perspectives: Different LLMs bring different strengths and viewpoints
- Structured Deliberation: A 3-stage process ensures thorough analysis
- Persona-Based Expertise: Assign specific roles and expertise to AI participants
- Synthesized Outcomes: The President persona combines insights into a final recommendation
Navigation
The module provides four main sections:
| Section | Path | Description |
|---|---|---|
| Council | /ui/MoLOS-LLM-Council | Start and conduct consultations |
| Personas | /ui/MoLOS-LLM-Council/personas | Create and manage AI personas |
| History | /ui/MoLOS-LLM-Council/history | Browse and search past consultations |
| Settings | /ui/MoLOS-LLM-Council/settings | Configure providers and module options |
The 3-Stage Council Process
The LLM Council uses a structured deliberation process inspired by real-world expert panels:
Stage 1: Initial Responses
Placeholder: Screenshot showing multiple persona responses appearing simultaneously in Stage 1
All selected personas receive your prompt and provide their initial responses independently. This ensures diverse, unbiased perspectives before any cross-influence occurs.
What happens:
- Each persona generates an independent response
- Responses are displayed side-by-side for comparison
- You can review each perspective before proceeding
Use diverse personas in Stage 1 to maximize perspective variety. A "Devil's Advocate" persona alongside an "Optimist" persona yields richer discussions.
Stage 2: Discussion and Critique
Placeholder: Screenshot showing personas responding to and critiquing each other's Stage 1 responses
Personas review each other's Stage 1 responses and engage in discussion. They can agree, disagree, add nuance, or challenge points made by other council members.
What happens:
- Each persona sees all Stage 1 responses
- Personas provide critiques and additional insights
- Areas of agreement and disagreement become clear
- Weak arguments are challenged and strengthened
Stage 2 helps identify blind spots and strengthens weak arguments through peer review among the AI personas.
Stage 3: Final Synthesis
Placeholder: Screenshot showing the President persona's synthesized final response incorporating all perspectives
The President persona (or designated synthesizer) reviews all discussion and produces a unified, synthesized response that incorporates the best insights from the entire council.
What happens:
- President persona reviews Stage 1 and Stage 2 outputs
- Key insights are extracted and combined
- Conflicts are resolved with reasoned explanations
- Final recommendation is presented with supporting rationale
The President persona can be customized with specific synthesis instructions. For technical decisions, configure it to prioritize accuracy. For creative tasks, configure it to preserve diverse viewpoints.
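The three stages above boil down to what each participant is shown at each step. As a rough sketch (all names here, such as `Response` and `build_stage2_prompt`, are illustrative and not the module's actual API):

```python
from dataclasses import dataclass

@dataclass
class Response:
    persona: str
    text: str

def build_stage2_prompt(question: str, own_name: str, stage1: list[Response]) -> str:
    """In Stage 2, each persona sees the question plus every Stage 1 answer."""
    answers = "\n\n".join(f"### {r.persona}\n{r.text}" for r in stage1)
    return (
        f"Question: {question}\n\n"
        f"Stage 1 responses from the council:\n\n{answers}\n\n"
        f"As {own_name}, critique these responses: note agreements, "
        f"disagreements, and points the council has missed."
    )

def build_synthesis_prompt(question: str, stage1: list[Response],
                           stage2: list[Response]) -> str:
    """In Stage 3, the President sees both stages and produces the final answer."""
    s1 = "\n\n".join(f"### {r.persona} (initial)\n{r.text}" for r in stage1)
    s2 = "\n\n".join(f"### {r.persona} (critique)\n{r.text}" for r in stage2)
    return (
        f"Question: {question}\n\n{s1}\n\n{s2}\n\n"
        "Synthesize a single recommendation that resolves conflicts "
        "and cites the strongest supporting arguments."
    )
```

The key property to preserve in any implementation is Stage 1 independence: persona responses are generated before anyone sees anyone else's output.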
Core Features
Multi-LLM Consultation
Query multiple LLMs simultaneously and compare their responses in real-time.
Capabilities:
- Simultaneous Queries: Send identical prompts to all selected personas at once
- Parallel Processing: Responses arrive as they're generated
- Side-by-Side Comparison: View all responses in a unified interface
- Token Tracking: Monitor usage and costs per consultation
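The simultaneous-query behaviour above can be sketched with `asyncio`: fan the prompt out to every selected persona, collect answers as they complete, and drop any persona that exceeds the response timeout. `ask_persona` is a stand-in for a real provider call, not the module's API:

```python
import asyncio

async def ask_persona(name: str, prompt: str) -> tuple[str, str]:
    # Placeholder for a real LLM provider request.
    await asyncio.sleep(0.01)
    return name, f"[{name}] response to: {prompt}"

async def consult(personas: list[str], prompt: str,
                  timeout: float = 30.0) -> dict[str, str]:
    """Query all personas concurrently; personas that time out are skipped."""
    tasks = [asyncio.create_task(ask_persona(p, prompt)) for p in personas]
    done, pending = await asyncio.wait(tasks, timeout=timeout)
    for t in pending:
        t.cancel()  # timed-out personas are cancelled, not awaited further
    return dict(t.result() for t in done)

results = asyncio.run(consult(["Analyst", "Critic"], "Should we refactor?"))
```

Running all queries concurrently means total latency is bounded by the slowest persona rather than the sum of all of them.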
Persona Management
Placeholder: Screenshot showing the personas management page with a list of created personas
Create AI personas with specific expertise, personalities, and viewpoints.
Placeholder: Screenshot showing the persona creation/edit form with system prompt, provider, and model selection
Persona Configuration:
- Name: Descriptive identifier (e.g., "Senior Developer", "UX Researcher")
- System Prompt: Custom instructions that define expertise and behavior
- Provider Assignment: Link persona to specific LLM provider
- Model Selection: Choose specific model (GPT-4, Claude 3, etc.)
- President Role: Mark persona as the synthesizer for Stage 3
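The configuration fields above map naturally onto a small record type. A minimal sketch, assuming field names inferred from the list (not the module's actual schema):

```python
from dataclasses import dataclass

@dataclass
class Persona:
    name: str             # descriptive identifier, e.g. "Senior Developer"
    system_prompt: str    # instructions defining expertise and behavior
    provider: str         # e.g. "openai", "anthropic"
    model: str            # e.g. "gpt-4", "claude-3-opus"
    is_president: bool = False  # marks the Stage 3 synthesizer

critic = Persona(
    name="Critic",
    system_prompt="You are a skeptical reviewer. Identify flaws and risks.",
    provider="openai",
    model="gpt-4",
)
```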
Built-in Persona Types:
| Type | Best For | Typical Traits |
|---|---|---|
| Analyst | Data-driven decisions | Logical, methodical, evidence-focused |
| Critic | Risk assessment | Skeptical, thorough, identifies flaws |
| Creative | Brainstorming | Imaginative, unconventional, exploratory |
| Pragmatist | Implementation | Practical, resource-aware, actionable |
| President | Synthesis | Balanced, comprehensive, decisive |
Provider Support
The module supports multiple LLM providers:
| Provider | Models | Best For |
|---|---|---|
| OpenAI | GPT-4, GPT-4 Turbo, GPT-3.5 | General purpose, code, analysis |
| Anthropic | Claude 3 Opus, Sonnet, Haiku | Nuanced reasoning, safety-critical |
| OpenRouter | Multiple via single API | Access to many models |
| Custom | Any OpenAI-compatible API | Self-hosted, specialized models |
Response Ranking
Rate and rank LLM responses to track which personas perform best for different query types.
- Star Ratings: Rate response quality (1-5 stars)
- Comparative Ranking: Rank responses against each other
- Notes: Add context for why a response was preferred
- Analytics: View aggregate performance over time
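The aggregate analytics described above amount to averaging star ratings per persona. A sketch with illustrative sample data:

```python
from collections import defaultdict

# (persona, stars) pairs; sample data for illustration only.
ratings = [
    ("Analyst", 5), ("Analyst", 4),
    ("Critic", 3), ("Critic", 5),
    ("Creative", 2),
]

def average_ratings(rows):
    """Return mean star rating (1-5) per persona."""
    totals = defaultdict(lambda: [0, 0])  # persona -> [sum, count]
    for persona, stars in rows:
        totals[persona][0] += stars
        totals[persona][1] += 1
    return {p: s / c for p, (s, c) in totals.items()}

avg = average_ratings(ratings)
```

Tracking these averages per query type (code review, strategy, creative) reveals which personas, and which underlying models, to favor for each kind of consultation.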
Conversation History
Placeholder: Screenshot showing the consultation history page with search and filter options
Track and review all past consultations with full search capabilities.
Features:
- Full Text Search: Search across all consultation content
- Date Filtering: Find consultations by date range
- Persona Filtering: Filter by participating personas
- Stage Filtering: View consultations at specific stages
- Replay: Review complete 3-stage deliberation
- Export: Export consultations for external documentation
Settings and Configuration
Placeholder: Screenshot showing the settings page with provider configuration and module options
Provider Configuration
Configure each LLM provider with:
Provider Settings:
- API Endpoint: Provider's API URL
- API Key: Authentication credentials
- Default Model: Primary model for this provider
- Max Tokens: Response length limit
- Temperature: Creativity/randomness (0.0-2.0)
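A hypothetical provider configuration matching the settings above, with a basic validation pass; keys and bounds are illustrative, not the module's persisted format:

```python
provider_config = {
    "name": "anthropic",
    "api_endpoint": "https://api.anthropic.com/v1",
    "api_key": "sk-...",          # load from secure storage, never hard-code
    "default_model": "claude-3-opus",
    "max_tokens": 4096,
    "temperature": 0.7,           # 0.0 = deterministic, 2.0 = maximally random
}

def validate_provider(cfg: dict) -> list[str]:
    """Return a list of validation errors; empty means the config is usable."""
    errors = []
    if not cfg.get("api_key"):
        errors.append("api_key is required")
    if not 0.0 <= cfg.get("temperature", 1.0) <= 2.0:
        errors.append("temperature must be in [0.0, 2.0]")
    if cfg.get("max_tokens", 0) <= 0:
        errors.append("max_tokens must be positive")
    return errors
```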
Module Settings
- Default Personas: Personas included in new councils by default
- Auto-Advance Stages: Automatically progress through stages
- Synthesis Model: Model used for final synthesis
- Response Timeout: Maximum wait time for responses
- Cost Alerts: Notification thresholds for spending
AI Tools (MCP)
The module exposes the following AI tools for the Architect Agent:
Council Management
| Tool | Description |
|---|---|
| create_council | Start a new council consultation with prompt and selected personas |
| get_council | Retrieve a specific council by ID |
| list_councils | List councils with filtering options |
| advance_stage | Move council to next deliberation stage |
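An agent driving these tools would issue calls shaped roughly like the payloads below. The argument names are assumptions inferred from the tool descriptions, not the module's actual JSON schema:

```python
# Hypothetical MCP tool-call payloads for the council-management tools.
create_call = {
    "tool": "create_council",
    "arguments": {
        "prompt": "Should we build or buy the reporting feature?",
        "persona_ids": ["analyst", "critic", "president"],
    },
}

# advance_stage would then move the returned council from Stage 1 to Stage 2.
advance_call = {
    "tool": "advance_stage",
    "arguments": {"council_id": "<id returned by create_council>"},
}
```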
Conversation & Messages
| Tool | Description |
|---|---|
| get_conversation | Retrieve full conversation with all stages |
| get_messages | Get messages for a specific stage |
| add_message | Add user message to conversation |
Persona Management
| Tool | Description |
|---|---|
| get_personas | Retrieve all defined personas |
| get_persona | Get specific persona by ID |
| create_persona | Create new persona with configuration |
| update_persona | Update existing persona |
| delete_persona | Remove a persona |
Provider & Settings
| Tool | Description |
|---|---|
| get_providers | List configured LLM providers |
| get_settings | Retrieve module settings |
| update_settings | Update module configuration |
Database Schema
The module uses these repositories:
| Repository | Table | Purpose |
|---|---|---|
| ConversationRepository | conversations | Council consultations |
| MessageRepository | messages | Individual messages within stages |
| PersonaRepository | personas | AI persona definitions |
| ProviderRepository | providers | LLM provider configurations |
| SettingsRepository | settings | Module-level settings |
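The table relationships can be sketched as SQLite DDL. Column names here are assumptions inferred from the feature descriptions above, not the module's real schema:

```python
import sqlite3

ddl = """
CREATE TABLE personas (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    system_prompt TEXT,
    provider TEXT,
    model TEXT,
    is_president INTEGER DEFAULT 0
);
CREATE TABLE conversations (
    id INTEGER PRIMARY KEY,
    prompt TEXT NOT NULL,
    stage INTEGER DEFAULT 1,          -- current deliberation stage: 1, 2, or 3
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE messages (
    id INTEGER PRIMARY KEY,
    conversation_id INTEGER REFERENCES conversations(id),
    persona_id INTEGER REFERENCES personas(id),
    stage INTEGER NOT NULL,           -- stage this message belongs to
    content TEXT NOT NULL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(ddl)
```

Keeping the stage on each message is what lets the history view replay a consultation stage by stage.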
Use Cases
Code Review Council
Scenario: You need a thorough code review before merging a critical feature.
Council Setup:
- Security Expert (Claude 3 Opus): Focus on vulnerabilities
- Performance Analyst (GPT-4): Focus on efficiency
- Maintainability Reviewer (GPT-4 Turbo): Focus on code quality
Prompt:
Review this pull request for:
1. Security vulnerabilities
2. Performance implications
3. Code maintainability
4. Best practices adherence
[code or PR link]
Outcome:
- Stage 1: Each expert provides independent review
- Stage 2: Experts discuss disagreements and edge cases
- Stage 3: President synthesizes actionable review summary
Strategic Decision Making
Scenario: Evaluating whether to build a feature in-house or use a third-party solution.
Council Setup:
- Build Advocate (GPT-4): Arguments for in-house development
- Buy Advocate (Claude 3 Sonnet): Arguments for third-party
- Risk Analyst (GPT-4): Risk assessment for both options
Prompt:
We need [feature capability]. Should we:
A) Build in-house using [tech stack]
B) Integrate with [third-party solution]
Budget: [amount]
Timeline: [deadline]
Team size: [number] engineers
Provide recommendation with rationale.
Outcome:
- Stage 1: Each advocate presents their case
- Stage 2: Cross-examination and challenge
- Stage 3: Balanced recommendation with decision criteria
Content Generation
Scenario: Creating marketing copy that needs to balance multiple brand voices.
Council Setup:
- Brand Voice A (Claude 3): Professional, authoritative tone
- Brand Voice B (GPT-4): Friendly, approachable tone
- Editor (GPT-4 Turbo): Ensures consistency and clarity
Prompt:
Write product description for [product].
Key points to include:
- [point 1]
- [point 2]
- [point 3]
Target audience: [description]
Brand guidelines: [guidelines]
Outcome:
- Stage 1: Multiple draft versions
- Stage 2: Editor critiques and suggests improvements
- Stage 3: Polished final version incorporating best elements
Integration with MoLOS
Cross-Module Integration
- MoLOS-AI-Knowledge: Use saved prompts as council inputs
- MoLOS-Tasks: Create task-linked consultations for project decisions
- MoLOS-Goals: Council input for goal planning and review
- Architect Agent: Direct access to all council features via MCP tools
Architect Agent Access
The Architect Agent can leverage the LLM Council for:
- Multi-perspective analysis of complex problems
- Consensus building on recommendations
- Diverse brainstorming sessions
- Quality assurance through multiple reviewers
Getting Started
Quick Start
- Navigate to Council: Go to /ui/MoLOS-LLM-Council
- Configure Providers: Add API keys in Settings
- Create Personas: Build 2-3 personas with different perspectives
- Start a Council: Enter your prompt and select personas
- Progress Through Stages: Review responses at each stage
- Get Synthesis: Review the President's final synthesis
Best Practices
Create personas with genuinely different perspectives. Two "helpful assistant" personas add less value than pairing an "optimist" with a "skeptic."
Don't rush through stages. Stage 2 discussions often reveal insights that weren't apparent in Stage 1.
Customize your President persona for your use case. For technical decisions, emphasize accuracy. For creative work, emphasize preserving unique insights.
Multi-stage consultations with multiple personas increase token usage. Monitor costs in Settings and start with fewer personas for routine queries.
Troubleshooting
Provider Not Responding
Symptoms: Timeout errors, missing responses
Solutions:
- Verify API key is valid and has credits
- Check provider status page for outages
- Confirm API endpoint is correct
- Test provider independently first
Stage Not Advancing
Symptoms: Council stuck at current stage
Solutions:
- Check if all expected responses have arrived
- Verify auto-advance is enabled in Settings
- Manually advance using the "Next Stage" button
- Check browser console for errors
High Token Costs
Symptoms: Unexpected API charges
Solutions:
- Review cost tracking in Settings
- Reduce number of personas per council
- Use smaller models for routine queries
- Enable cost alerts for budget management
Synthesis Quality Issues
Symptoms: Stage 3 output is generic or misses key points
Solutions:
- Refine President persona's system prompt
- Add explicit synthesis instructions to President
- Ensure Stage 2 discussion is substantive
- Try different models for President role
Related Documentation
- MoLOS Modules Overview - All available modules
- Module Development Guide - Creating custom modules
- AI Integration - Building MCP tools
- v1.1.0 Release Notes - Full release details
Module Version: 1.1.0
Last Updated: March 21, 2026