Testing Servers and Tools

Learn how to test your MCP servers and tools before deploying them to production.

Testing Methods

Reeva provides two ways to test your servers and tools:

Method     | Best For                                                               | Access
Playground | Direct tool testing, debugging parameters, validating configuration   | Server Details page
Chat       | End-to-end testing, natural language validation, multi-step workflows | Chat page

When to Use Each

Use Playground when you need to:

  • Test a specific tool in isolation
  • Debug parameter issues
  • Validate credential linking
  • Verify server connectivity
  • See the exact request/response payloads

Use Chat when you need to:

  • Test how AI selects and uses your tools
  • Validate natural language prompts
  • Test multi-step workflows
  • Observe end-to-end behavior

Server Playground

The Playground lets you execute individual tools directly with full control over parameters.

Accessing the Playground

  1. Navigate to Servers in the sidebar
  2. Click on a server name to open its details
  3. Scroll down to the Playground section

Available Actions

The Playground supports three types of operations:

List Tools (tools/list)

  • Returns all tools available on the server
  • Useful for verifying which tools are configured
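
For orientation, tools/list is a plain JSON-RPC request, and the response describes each tool along with its input schema. The request below is standard; the tool in the response is an illustrative sketch, not necessarily one of yours:

Request:
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list"
}

Response (abridged):
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "google_search",
        "description": "Search the web with Google",
        "inputSchema": {
          "type": "object",
          "properties": {
            "query": { "type": "string" },
            "num_results": { "type": "integer" }
          },
          "required": ["query"]
        }
      }
    ]
  }
}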

List Prompts (prompts/list)

  • Returns configured prompts
  • Shows prompt templates available to AI
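
A prompts/list response follows the same JSON-RPC shape, listing each prompt's name and expected arguments. The prompt below is hypothetical:

{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "prompts": [
      {
        "name": "summarize_results",
        "description": "Summarize search results for the user",
        "arguments": [
          { "name": "topic", "required": true }
        ]
      }
    ]
  }
}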

Tool Call

  • Execute any tool with custom arguments
  • See the full request and response

Authorization

By default, the Playground uses your current session for authentication. You can also test with an API key:

  1. Find the Authorization field
  2. Enter your API key (e.g., rk_abc123...)
  3. Leave blank to use session authentication

This is useful for verifying that API keys work correctly before sharing them.

Executing Tool Calls

  1. Select a Tool

    • Click a tool name in the left panel
    • The Playground switches to Tool Call mode automatically
  2. Fill in Arguments

    • Required fields are marked with a red asterisk (*)
    • The form adapts to each parameter type:
      • String: Text input
      • Number/Integer: Numeric input
      • Boolean: Checkbox
      • Array: Comma-separated values
      • Object: JSON text area
      • Enum: Dropdown selection
  3. Review the Request

    • The Request Payload section shows the exact JSON-RPC request
    • Verify parameters look correct before sending
  4. Execute

    • Click Test Call
    • Wait for the response
  5. Review the Response

    • Success responses show the tool's output
    • Errors display the error message and details

Example: Testing a Search Tool

Tool: google_search
Arguments:
query: "MCP protocol specification" (required)
num_results: 5

Request Payload:
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "google_search",
    "arguments": {
      "query": "MCP protocol specification",
      "num_results": 5
    }
  }
}
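
Response Payload:

What comes back depends on the tool, but a successful tools/call response typically wraps the output in a content array, while a tool-level failure sets isError to true. Both payloads below are illustrative sketches:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Top 5 results for \"MCP protocol specification\": ..."
      }
    ],
    "isError": false
  }
}

An example failure (here, a hypothetical credential problem):

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      { "type": "text", "text": "Search API returned 403: invalid credentials" }
    ],
    "isError": true
  }
}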

Playground Use Cases

  • Validating Tool Configuration — Confirm parameter overrides are applied and default values work as expected
  • Debugging Parameter Issues — Test edge cases (empty strings, large numbers) and identify required vs optional fields
  • Testing Credentials — Verify linked accounts authenticate correctly before deploying to production
  • Verifying Server Connectivity — Confirm the server responds and tools are properly registered

Chat Testing

Chat provides a conversational interface to test how AI uses your tools in realistic scenarios.

Accessing Chat

  1. Navigate to Chat in the sidebar
  2. Select a server from the dropdown at the top
  3. Start typing your message

Sending Messages

Zero State

  • When starting fresh, you'll see a centered input with "What would you like to do?"
  • Type your request and press Enter

Conversation State

  • After sending a message, the input moves to the bottom
  • Previous messages are displayed above

Keyboard Shortcuts

  • Enter — Send message
  • Shift + Enter — New line (for multi-line messages)
  • Escape — Stop streaming response

Understanding Responses

Each AI response can include several sections:

Tool Activity (collapsible)

  • Shows which tools the AI called
  • Displays the arguments passed to each tool
  • Shows tool results with success or failure indicators
  • Click the copy icon to copy JSON for debugging
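
The exact shape of the copied JSON depends on the interface, but it generally pairs the call with its result, along these lines (field names here are illustrative assumptions):

{
  "tool": "google_search",
  "arguments": {
    "query": "MCP protocol specification",
    "num_results": 5
  },
  "result": {
    "content": [
      { "type": "text", "text": "..." }
    ],
    "isError": false
  }
}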

Thinking (collapsible)

  • Shows the AI's reasoning process
  • Helpful for understanding why certain tools were selected
  • Open by default; click to collapse

Response Content

  • The AI's final text response
  • Hover to reveal the copy button (copies the response as markdown)

Streaming Indicators

  • "Generating response..." appears while waiting for text
  • "Typing..." shows during active streaming

Session Statistics

At the bottom of the chat, you'll see usage metrics:

  • Tokens: Input and output token counts
  • Credits: Credits consumed this session
  • Remaining: Your remaining credit balance

This helps you understand the cost of your testing.

Controls

Stop Button

  • Appears during streaming
  • Immediately halts the response
  • Useful for long or incorrect responses

New Chat Button

  • Resets the conversation
  • Clears message history
  • Resets token/credit counters for the session

Chat Use Cases

  • End-to-End Integration Testing — Verify the complete flow from prompt to tool execution to response
  • Natural Language Validation — Test that your tool descriptions help the AI select the right tools
  • Multi-Step Workflow Testing — Observe how AI chains multiple tool calls together
  • Tool Selection Behavior — See which tools AI chooses for ambiguous requests

Recommended Testing Flow

For best results, follow this testing flow:

Step 1: Test Tools Individually (Playground)

  1. Open the server's Playground
  2. Test each tool with known-good inputs
  3. Verify responses match expectations
  4. Test edge cases and error conditions

Step 2: Test Conversational Flows (Chat)

  1. Open Chat and select your server
  2. Ask questions that should trigger your tools
  3. Expand Tool Activity to verify correct tool selection
  4. Check that arguments passed match your intent

Step 3: Validate Tool Descriptions

If the AI selects the wrong tool:

  1. Review the Thinking section to understand AI reasoning
  2. Update your tool's name or description to be more specific
  3. Re-test in Chat

Debugging Tips

Tool not being called?

  • Check that the tool is added to the server (Playground > List Tools)
  • Verify the tool description clearly explains when to use it
  • Try a more explicit prompt

Wrong arguments passed?

  • Expand Tool Activity and copy the JSON
  • Compare with expected values
  • Check if parameter descriptions are clear

Unexpected errors?

  • Test the tool directly in Playground first
  • Verify credentials are linked correctly
  • Check if required parameters are missing

AI selecting wrong tool?

  • Review the Thinking section
  • Make tool names and descriptions more distinct
  • Avoid overlapping functionality between tools

Best Practices

Before Adding Tools to Servers

  • Test each tool individually in Playground
  • Verify credentials work correctly
  • Confirm parameter overrides are applied

When Testing in Chat

  • Start with simple, unambiguous prompts
  • Gradually increase complexity
  • Monitor credit usage during testing

For Production Readiness

  • Test with both session auth and API keys
  • Verify error handling works correctly
  • Document expected behavior for edge cases

Troubleshooting

"Tool not found" Error

Problem: The Playground returns a "Tool not found" error when you execute a tool call.

Solution:

  1. Click "List Tools" to refresh the tool list
  2. Verify the tool is added to this server
  3. Check that the tool hasn't been deleted
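
On the wire, this failure typically surfaces as a JSON-RPC error object. The exact code and message depend on the server, so treat this sketch as illustrative (note the misspelled tool name, a common cause):

{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32602,
    "message": "Unknown tool: google_serch"
  }
}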

No Tools Appearing in Playground

Problem: The tools list is empty.

Solution:

  1. Ensure you've added tools to this server
  2. Navigate to the server settings and add tools
  3. Refresh the page

Chat Not Using My Tools

Problem: AI responds without calling any tools.

Solution:

  1. Check that tools are added to the selected server
  2. Make your prompt more specific about what you need
  3. Review tool descriptions for clarity

Credit Errors

Problem: "Credit Error" message appears.

Solution:

  1. Check your remaining credit balance
  2. Add more credits to your account
  3. Reduce the complexity of your requests
