Agents

Agents are your AI-powered conversation systems. Whether you built them on VAPI, Bland, or another platform, Chanl helps you test, monitor, and continuously improve them.

Why Agent Management Matters

You built an AI agent, but how do you know it’s actually working well? Agent management helps you:
  • Connect easily - Sync agents from VAPI, Bland, or custom platforms
  • Test thoroughly - Validate behavior before customers interact
  • Monitor continuously - Track real-time performance
  • Improve systematically - Use data to make agents better
Example: Connect your VAPI agent to Chanl. Run it through 10 test scenarios. Discover it struggles with angry customers. Update the prompt. Test again. Deploy with confidence.

How Agents Work in Chanl

When you connect an agent to Chanl, we sync its configuration and give you tools to improve it:
Connect Agent → Test with Scenarios → Monitor Live Calls → Analyze Results → Optimize → Repeat
Think of Chanl as your agent’s quality assurance and improvement platform.

Connecting Your First Agent

Connect a VAPI agent using your API credentials:
curl -X POST https://api.chanl.ai/v1/agents/connect \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "provider": "vapi",
    "agent_id": "vapi-agent-123",
    "credentials": {
      "api_key": "YOUR_VAPI_API_KEY"
    }
  }'
Chanl will automatically sync:
  • Agent name and description
  • System prompt and instructions
  • Voice and model settings
  • Tool configurations
  • Phone numbers

What Gets Synced?

Configuration

  • System prompt
  • Model settings (provider, temperature, max tokens)
  • Voice configuration
  • Language settings

Capabilities

  • Integrated tools and functions
  • API connections
  • Knowledge bases
  • Call routing rules

Identity

  • Agent name and description
  • Provider information
  • Phone numbers
  • Metadata

Performance

  • Call history
  • Quality metrics
  • Usage statistics
  • Error logs

Managing Agent Configuration

Viewing Agent Details

Check your agent’s current configuration:
curl https://api.chanl.ai/v1/agents/agent_abc123 \
  -H "Authorization: Bearer YOUR_API_KEY"
Response includes everything about your agent:
{
  "id": "agent_abc123",
  "name": "Customer Service Agent",
  "provider": "vapi",
  "status": "active",
  "configuration": {
    "model": {
      "provider": "openai",
      "name": "gpt-4",
      "temperature": 0.7,
      "maxTokens": 250
    },
    "voice": {
      "provider": "elevenlabs",
      "voiceId": "21m00Tcm4TlvDq8ikWAM",
      "stability": 0.5,
      "similarityBoost": 0.75
    },
    "prompt": "You are a helpful customer service agent...",
    "tools": [
      {
        "name": "lookup_order",
        "description": "Retrieves order information",
        "endpoint": "https://api.company.com/orders"
      }
    ]
  },
  "stats": {
    "totalCalls": 1247,
    "avgScore": 87.3,
    "successRate": 94.2
  }
}
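The stats block in this response is enough to drive a simple readiness check. A minimal sketch in plain JavaScript, assuming the response shape shown above (the thresholds are the targets suggested later on this page, not values returned by the API):

```javascript
// Evaluate an agent's health from the stats block returned by GET /v1/agents/:id.
function assessAgent(agent) {
  const issues = [];
  if (agent.stats.avgScore < 85) issues.push('quality score below 85');
  if (agent.stats.successRate < 90) issues.push('success rate below 90%');
  return { healthy: issues.length === 0, issues };
}

const agent = {
  id: 'agent_abc123',
  stats: { totalCalls: 1247, avgScore: 87.3, successRate: 94.2 }
};
console.log(assessAgent(agent)); // { healthy: true, issues: [] }
```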

Updating Configuration

Changes made in VAPI or Bland automatically sync to Chanl. You can also update directly:
curl -X PATCH https://api.chanl.ai/v1/agents/agent_abc123 \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "configuration": {
      "model": {
        "temperature": 0.8
      }
    }
  }'
Changes made in Chanl won’t automatically sync back to your provider (VAPI, Bland, etc.). Use Chanl for testing configurations, then apply them in your provider’s platform.
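Because Chanl-side edits don't sync back, it helps to know exactly which fields you changed before re-applying them in your provider's dashboard. A shallow config diff is one way to track that; this helper is illustrative, not part of the Chanl SDK:

```javascript
// Return only the configuration fields that differ between the provider's
// copy and the Chanl copy, grouped by section (e.g. model, voice).
function configDiff(providerConfig, chanlConfig) {
  const changes = {};
  for (const [section, values] of Object.entries(chanlConfig)) {
    for (const [key, value] of Object.entries(values)) {
      if (providerConfig[section]?.[key] !== value) {
        if (!changes[section]) changes[section] = {};
        changes[section][key] = value;
      }
    }
  }
  return changes;
}

const providerSide = { model: { temperature: 0.7, maxTokens: 250 } };
const chanlSide = { model: { temperature: 0.8, maxTokens: 250 } };
console.log(configDiff(providerSide, chanlSide)); // { model: { temperature: 0.8 } }
```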

Testing Agents Before Production

Before deploying to customers, test your agent thoroughly:

Create a Test Scenario

const chanl = require('@chanl/sdk');

// Test your agent with different customer types
const scenario = await chanl.scenarios.create({
  name: 'Customer Service Quality Check',
  prompt: 'Customer has a billing question about their recent charge',
  personas: ['polite-customer', 'frustrated-customer', 'confused-customer'],
  agents: ['agent_abc123'],
  scorecard: 'customer-service-quality'
});

// Wait for results
const results = await chanl.scenarios.waitForCompletion(scenario.id);

if (results.avgScore < 80) {
  console.log('⚠️ Agent needs improvement');
  console.log('Lowest scoring area:', results.weakestCategory);
} else {
  console.log('✅ Agent ready for production');
}

Common Test Scenarios

Test standard conversations where everything goes smoothly:
{
  "name": "Standard Order Inquiry",
  "prompt": "Customer wants to check order status",
  "personas": ["polite-customer"],
  "expectedOutcome": "Agent provides accurate order information"
}
Test difficult situations:
{
  "name": "Angry Customer Escalation",
  "prompt": "Customer is furious about late delivery and wants refund",
  "personas": ["angry-customer"],
  "expectedOutcome": "Agent de-escalates and offers solution"
}
Verify regulatory requirements:
{
  "name": "TCPA Compliance Check",
  "prompt": "Outbound sales call",
  "personas": ["skeptical-customer"],
  "scorecard": "compliance-tcpa",
  "expectedOutcome": "All required disclosures provided"
}

Monitoring Agent Performance

Real-Time Monitoring

Watch your agent handle live calls. Navigate to Live Calls to see:
  • Active conversations in real-time
  • Transcript as it happens
  • Current score and quality indicators
  • Option to intervene if needed

Key Performance Metrics

Quality Score

Average score across all calls, based on your scorecards. Target: >85 for production agents.

Success Rate

Percentage of calls that achieve the desired outcome. Target: >90% for most use cases.

Average Call Time

How long conversations typically last. Target: depends on use case; track trends.

Escalation Rate

How often the agent transfers to a human. Target: <10% for well-tuned agents.

Customer Satisfaction

Sentiment analysis from conversations. Target: >80% positive sentiment.

Tool Success Rate

How often agent tools execute correctly. Target: >95% for critical integrations.
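The metrics above can all be derived from raw call records. A sketch in plain JavaScript; the record shape (score, success, durationSec, escalated) is illustrative, not a documented Chanl type:

```javascript
// Compute quality score, success rate, average call time, and escalation
// rate from a list of call records.
function summarize(calls) {
  const n = calls.length;
  const pct = (count) => Math.round((count / n) * 1000) / 10;
  return {
    qualityScore: Math.round((calls.reduce((s, c) => s + c.score, 0) / n) * 10) / 10,
    successRate: pct(calls.filter((c) => c.success).length),
    avgCallTimeSec: Math.round(calls.reduce((s, c) => s + c.durationSec, 0) / n),
    escalationRate: pct(calls.filter((c) => c.escalated).length)
  };
}

const calls = [
  { score: 90, success: true, durationSec: 180, escalated: false },
  { score: 70, success: false, durationSec: 300, escalated: true },
  { score: 95, success: true, durationSec: 120, escalated: false },
  { score: 85, success: true, durationSec: 240, escalated: false }
];
console.log(summarize(calls));
// { qualityScore: 85, successRate: 75, avgCallTimeSec: 210, escalationRate: 25 }
```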

Comparing Agent Versions

Test different configurations to find what works best:
const chanl = require('@chanl/sdk');

// Create two versions with different prompts
const v1 = await chanl.agents.duplicate('agent_abc123', {
  name: 'Customer Agent V1 - Formal',
  configuration: {
    prompt: 'You are a professional customer service representative. Maintain a formal, courteous tone...'
  }
});

const v2 = await chanl.agents.duplicate('agent_abc123', {
  name: 'Customer Agent V2 - Friendly',
  configuration: {
    prompt: 'You are a friendly customer service agent. Be warm, conversational, and helpful...'
  }
});

// Test both versions
const comparison = await chanl.scenarios.create({
  name: 'Prompt Comparison Test',
  prompt: 'Customer has a billing question',
  personas: ['polite-customer', 'frustrated-customer', 'confused-customer'],
  agents: [v1.id, v2.id],
  scorecard: 'customer-service'
});

// View results
const results = await chanl.scenarios.waitForCompletion(comparison.id);
console.log('V1 avg score:', results.agents[v1.id].avgScore);
console.log('V2 avg score:', results.agents[v2.id].avgScore);
console.log('Winner:', results.agents[v2.id].avgScore > results.agents[v1.id].avgScore ? 'V2' : 'V1');

Optimizing Agent Performance

Using Performance Data

Identify what to improve:
# Get performance breakdown by category
curl "https://api.chanl.ai/v1/agents/agent_abc123/analytics?breakdown=category" \
  -H "Authorization: Bearer YOUR_API_KEY"
Response shows where agent excels and struggles:
{
  "agent": "agent_abc123",
  "timeRange": "30 days",
  "overallScore": 82,
  "categoryBreakdown": [
    {
      "category": "Communication",
      "score": 91,
      "trend": "stable",
      "status": "good"
    },
    {
      "category": "Problem Resolution",
      "score": 78,
      "trend": "declining",
      "status": "needs_improvement",
      "commonIssues": [
        "Takes too long to identify root cause",
        "Doesn't always confirm resolution"
      ]
    },
    {
      "category": "Compliance",
      "score": 87,
      "trend": "improving",
      "status": "good"
    }
  ],
  "recommendations": [
    "Add explicit problem identification step to prompt",
    "Include confirmation checklist at end of calls",
    "Test with more analytical personas"
  ]
}
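To decide where to focus the next optimization cycle, pick the lowest-scoring category out of the analytics response above. A minimal sketch, assuming the categoryBreakdown shape shown:

```javascript
// Return the category with the lowest score from the analytics breakdown.
function weakestCategory(breakdown) {
  return breakdown.reduce((worst, c) => (c.score < worst.score ? c : worst));
}

const categoryBreakdown = [
  { category: 'Communication', score: 91, trend: 'stable' },
  { category: 'Problem Resolution', score: 78, trend: 'declining' },
  { category: 'Compliance', score: 87, trend: 'improving' }
];
console.log(weakestCategory(categoryBreakdown).category); // Problem Resolution
```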

Common Optimization Patterns

1. Identify Weakness: Use analytics to find the lowest-scoring category or the most common failure.
2. Form Hypothesis: Review failing call transcripts to understand why issues occur.
3. Make a Targeted Change: Update the prompt, adjust temperature, add a tool, or modify the configuration.
4. Test the Change: Run scenarios comparing the old and new configurations.
5. Validate Improvement: Ensure the new version scores better without breaking other areas.
6. Deploy and Monitor: Roll out the change and watch for its impact on live calls.
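The "Validate Improvement" step can be expressed as a simple gate: accept the new version only if the overall score improves and no category regresses by more than a tolerance. The shapes and threshold here are illustrative:

```javascript
// Compare per-category scores for the old and new configurations and
// decide whether the new one is safe to deploy.
function shouldDeploy(oldScores, newScores, tolerance = 2) {
  const average = (xs) => xs.reduce((a, b) => a + b, 0) / xs.length;
  const oldAvg = average(Object.values(oldScores));
  const newAvg = average(Object.values(newScores));
  const regressions = Object.keys(oldScores).filter(
    (cat) => newScores[cat] < oldScores[cat] - tolerance
  );
  return { deploy: newAvg > oldAvg && regressions.length === 0, regressions };
}

const oldScores = { communication: 91, resolution: 78, compliance: 87 };
const newScores = { communication: 90, resolution: 86, compliance: 88 };
console.log(shouldDeploy(oldScores, newScores));
// { deploy: true, regressions: [] }
```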

Agent Tools and Capabilities

Viewing Agent Tools

See what your agent can do:
curl https://api.chanl.ai/v1/agents/agent_abc123/tools \
  -H "Authorization: Bearer YOUR_API_KEY"
{
  "tools": [
    {
      "name": "lookup_order",
      "description": "Retrieves customer order information",
      "type": "api",
      "endpoint": "https://api.company.com/orders/{orderId}",
      "method": "GET",
      "auth": "bearer_token",
      "successRate": 97.3,
      "avgResponseTime": 234
    },
    {
      "name": "process_refund",
      "description": "Initiates refund for order",
      "type": "api",
      "endpoint": "https://api.company.com/refunds",
      "method": "POST",
      "auth": "bearer_token",
      "successRate": 99.1,
      "avgResponseTime": 412
    }
  ]
}

Tool Performance

Monitor how well tools work:
const chanl = require('@chanl/sdk');

// Get tool usage statistics
const toolStats = await chanl.agents.toolAnalytics('agent_abc123', {
  timeRange: '7d'
});

toolStats.tools.forEach(tool => {
  console.log(`${tool.name}:`);
  console.log(`  Uses: ${tool.callCount}`);
  console.log(`  Success rate: ${tool.successRate}%`);
  console.log(`  Avg response time: ${tool.avgResponseTime}ms`);

  if (tool.successRate < 95) {
    console.log(`  ⚠️ Success rate below target`);
  }
  if (tool.avgResponseTime > 1000) {
    console.log(`  ⚠️ Response time slow`);
  }
});

Managing Multiple Agents

Listing All Agents

curl https://api.chanl.ai/v1/agents \
  -H "Authorization: Bearer YOUR_API_KEY"

Organizing by Tags

Group agents logically:
# Tag agents by purpose
curl -X PATCH https://api.chanl.ai/v1/agents/agent_abc123 \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "tags": ["customer-service", "billing", "production"]
  }'

# Find all production agents
curl "https://api.chanl.ai/v1/agents?tags=production" \
  -H "Authorization: Bearer YOUR_API_KEY"
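The tags filter can also be applied client-side once you've fetched the agent list: keep agents that carry every requested tag. The agent shape here is illustrative:

```javascript
// Keep only agents tagged with all of the requested tags.
function filterByTags(agents, tags) {
  return agents.filter((a) => tags.every((t) => (a.tags || []).includes(t)));
}

const agents = [
  { id: 'agent_abc123', tags: ['customer-service', 'billing', 'production'] },
  { id: 'agent_def456', tags: ['sales', 'staging'] }
];
console.log(filterByTags(agents, ['production']).map((a) => a.id)); // [ 'agent_abc123' ]
```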

Best Practices

1. Start with Clear Prompts: Write specific instructions about behavior, tone, and goals. Vague prompts lead to inconsistent behavior.
2. Test with Diverse Personas: Don't just test happy paths. Include difficult customers, edge cases, and compliance scenarios.
3. Monitor from Day One: Enable alerts and live monitoring immediately. Catch issues before they become patterns.
4. Make Small, Measured Changes: Change one thing at a time and test the impact. Avoid changing multiple variables simultaneously.
5. Keep a Changelog: Document what you changed and why, so performance shifts are easy to explain.
6. Review Performance Weekly: Set a recurring time to review analytics, alerts, and improvement opportunities.

Troubleshooting

Problem: Connected an agent, but its configuration isn't appearing in Chanl.
Solutions:
  • Verify API credentials are correct and have proper permissions
  • Check if agent exists in provider platform (VAPI, Bland)
  • Trigger manual sync: chanl.agents.sync('agent_id')
  • Review error logs in sync history
  • Contact support if provider integration is down
Problem: Agent consistently scores below 70.
Investigate:
  • Review failing call transcripts to identify patterns
  • Check if scorecard criteria are too strict
  • Verify prompt is clear and specific
  • Ensure tools are working (check success rates)
  • Test with simpler personas first
  • Compare against baseline agent if available
Problem: Agent frequently transfers to a human.
Solutions:
  • Review escalation triggers in prompt
  • Add more specific handling instructions
  • Provide additional tools or knowledge
  • Test with personas that trigger escalations
  • Adjust confidence thresholds if too conservative
Problem: Agent responses vary wildly between similar calls.
Solutions:
  • Lower temperature setting (try 0.5-0.7)
  • Make prompt instructions more explicit
  • Add examples of desired responses
  • Review if tools are returning inconsistent data
  • Check for conflicting instructions in prompt

What’s Next?