# Testing Servers and Tools
Learn how to test your MCP servers and tools before deploying them to production.
## Testing Methods
Reeva provides two ways to test your servers and tools:
| Method | Best For | Access |
|---|---|---|
| Playground | Direct tool testing, debugging parameters, validating configuration | Server Details page |
| Chat | End-to-end testing, natural language validation, multi-step workflows | Chat page |
### When to Use Each
Use Playground when you need to:
- Test a specific tool in isolation
- Debug parameter issues
- Validate credential linking
- Verify server connectivity
- See the exact request/response payloads
Use Chat when you need to:
- Test how AI selects and uses your tools
- Validate natural language prompts
- Test multi-step workflows
- Observe end-to-end behavior
## Server Playground
The Playground lets you execute individual tools directly with full control over parameters.
### Accessing the Playground

1. Navigate to Servers in the sidebar
2. Click on a server name to open its details
3. Scroll down to the Playground section
### Available Actions

The Playground supports three types of operations:

**List Tools** (`tools/list`)

- Returns all tools available on the server
- Useful for verifying which tools are configured (a sample round trip is sketched below)

**List Prompts** (`prompts/list`)

- Returns configured prompts
- Shows prompt templates available to AI

**Tool Call**

- Execute any tool with custom arguments
- See the full request and response
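For reference, a List Tools round trip looks roughly like the sketch below. The request and response shapes follow the MCP `tools/list` format; the response is trimmed to a single tool, and the description text is illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list"
}
```

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "google_search",
        "description": "Search the web and return the top organic results.",
        "inputSchema": {
          "type": "object",
          "properties": {
            "query": { "type": "string" },
            "num_results": { "type": "integer" }
          },
          "required": ["query"]
        }
      }
    ]
  }
}
```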
### Authorization

By default, the Playground uses your current session for authentication. You can also test with an API key:

- Find the Authorization field
- Enter your API key (e.g., `rk_abc123...`)
- Leave blank to use session authentication
This is useful for verifying that API keys work correctly before sharing them.
### Executing Tool Calls

1. **Select a Tool**
   - Click a tool name in the left panel
   - The Playground switches to "Tool Call" mode automatically
2. **Fill in Arguments**
   - Required fields are marked with a red asterisk (*)
   - The form adapts to each parameter type (see the schema sketch after these steps):
     - String: Text input
     - Number/Integer: Numeric input
     - Boolean: Checkbox
     - Array: Comma-separated values
     - Object: JSON text area
     - Enum: Dropdown selection
3. **Review the Request**
   - The Request Payload section shows the exact JSON-RPC request
   - Verify the parameters look correct before sending
4. **Execute**
   - Click Test Call
   - Wait for the response
5. **Review the Response**
   - Success responses show the tool's output
   - Errors display the error message and details
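The form controls above map directly onto the tool's JSON Schema (its `inputSchema`). A minimal sketch of a schema that would produce each control type; apart from `query` and `num_results` (taken from the example below), the property names here are hypothetical:

```json
{
  "type": "object",
  "properties": {
    "query": { "type": "string" },
    "num_results": { "type": "integer" },
    "safe_search": { "type": "boolean" },
    "sites": { "type": "array", "items": { "type": "string" } },
    "filters": { "type": "object" },
    "sort_order": { "type": "string", "enum": ["relevance", "date"] }
  },
  "required": ["query"]
}
```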
### Example: Testing a Search Tool

Tool: `google_search`

Arguments:

- `query`: "MCP protocol specification" (required)
- `num_results`: 5

Request Payload:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "google_search",
    "arguments": {
      "query": "MCP protocol specification",
      "num_results": 5
    }
  }
}
```
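If the call succeeds, the Response section shows a JSON-RPC result. The sketch below follows the MCP `tools/call` response shape; the result text is illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Top 5 results for \"MCP protocol specification\": ..."
      }
    ],
    "isError": false
  }
}
```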
### Playground Use Cases
- Validating Tool Configuration — Confirm parameter overrides are applied and default values work as expected
- Debugging Parameter Issues — Test edge cases (empty strings, large numbers) and identify required vs optional fields
- Testing Credentials — Verify linked accounts authenticate correctly before deploying to production
- Verifying Server Connectivity — Confirm the server responds and tools are properly registered
## Chat Testing
Chat provides a conversational interface to test how AI uses your tools in realistic scenarios.
### Accessing Chat

1. Navigate to Chat in the sidebar
2. Select a server from the dropdown at the top
3. Start typing your message
### Sending Messages

**Zero State**

- When starting fresh, you'll see a centered input with "What would you like to do?"
- Type your request and press Enter

**Conversation State**

- After sending a message, the input moves to the bottom
- Previous messages are displayed above
### Keyboard Shortcuts

- `Enter` — Send message
- `Shift + Enter` — New line (for multi-line messages)
- `Escape` — Stop streaming response
### Understanding Responses

Each AI response can include several sections:

**Tool Activity** (collapsible)
- Shows which tools the AI called
- Displays the arguments passed to each tool
- Shows tool results with success or failure indicators
- Click the copy icon to copy JSON for debugging
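As a hypothetical example of the kind of JSON you might copy from Tool Activity (the exact field names may differ in your version), one entry pairs the call with its result:

```json
{
  "tool": "google_search",
  "arguments": {
    "query": "MCP protocol specification",
    "num_results": 5
  },
  "result": {
    "isError": false,
    "content": [{ "type": "text", "text": "..." }]
  }
}
```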
**Thinking** (collapsible)
- Shows the AI's reasoning process
- Helpful for understanding why certain tools were selected
- Open by default, click to collapse
**Response Content**
- The AI's final text response
- Hover to reveal copy button (copies as markdown)
**Streaming Indicators**
- "Generating response..." appears while waiting for text
- "Typing..." shows during active streaming
### Session Statistics
At the bottom of the chat, you'll see usage metrics:
- Tokens: Input and output token counts
- Credits: Credits consumed this session
- Remaining: Your remaining credit balance
This helps you understand the cost of your testing.
### Controls

**Stop Button**
- Appears during streaming
- Immediately halts the response
- Useful for long or incorrect responses
**New Chat Button**
- Resets the conversation
- Clears message history
- Resets token/credit counters for the session
### Chat Use Cases
- End-to-End Integration Testing — Verify the complete flow from prompt to tool execution to response
- Natural Language Validation — Test that your tool descriptions help the AI select the right tools
- Multi-Step Workflow Testing — Observe how AI chains multiple tool calls together
- Tool Selection Behavior — See which tools AI chooses for ambiguous requests
## Recommended Testing Workflow
For the best results, follow this testing flow:
### Step 1: Test Tools Individually (Playground)
- Open the server's Playground
- Test each tool with known-good inputs
- Verify responses match expectations
- Test edge cases and error conditions
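For the edge-case pass, it helps to send deliberately awkward payloads and confirm the tool fails gracefully. For example, an empty required string and an out-of-range count (values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "google_search",
    "arguments": {
      "query": "",
      "num_results": 9999
    }
  }
}
```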
### Step 2: Test Conversational Flows (Chat)
- Open Chat and select your server
- Ask questions that should trigger your tools
- Expand Tool Activity to verify correct tool selection
- Check that arguments passed match your intent
### Step 3: Validate Tool Descriptions
If the AI selects the wrong tool:
- Review the Thinking section to understand AI reasoning
- Update your tool's name or description to be more specific
- Re-test in Chat
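A useful rule of thumb: a description should say when to use the tool, not just what it does. A hypothetical before/after for a tool definition:

```json
{ "name": "search", "description": "Searches." }
```

```json
{
  "name": "google_search",
  "description": "Search the public web. Use for current events or facts likely to have changed recently; returns the top organic results."
}
```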
## Debugging Tips

**Tool not being called?**
- Check that the tool is added to the server (Playground > List Tools)
- Verify the tool description clearly explains when to use it
- Try a more explicit prompt
**Wrong arguments passed?**
- Expand Tool Activity and copy the JSON
- Compare with expected values
- Check if parameter descriptions are clear
**Unexpected errors?**
- Test the tool directly in Playground first
- Verify credentials are linked correctly
- Check if required parameters are missing
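When reading error output, it helps to distinguish a protocol-level JSON-RPC error (the request itself was rejected, e.g., invalid arguments) from a tool that ran but reported failure (`isError: true` in the result). Sketches of both, following the MCP error conventions, with illustrative messages:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32602,
    "message": "Invalid params: missing required argument \"query\""
  }
}
```

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      { "type": "text", "text": "Search failed: upstream API returned 401 Unauthorized" }
    ],
    "isError": true
  }
}
```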
**AI selecting wrong tool?**
- Review the Thinking section
- Make tool names and descriptions more distinct
- Avoid overlapping functionality between tools
## Best Practices

### Before Adding Tools to Servers
- Test each tool individually in Playground
- Verify credentials work correctly
- Confirm parameter overrides are applied
### When Testing in Chat
- Start with simple, unambiguous prompts
- Gradually increase complexity
- Monitor credit usage during testing
### For Production Readiness
- Test with both session auth and API keys
- Verify error handling works correctly
- Document expected behavior for edge cases
## Troubleshooting

### "Tool not found" Error

**Problem:** The Playground shows "Tool not found" when executing a tool call.

**Solution:**
- Click "List Tools" to refresh the tool list
- Verify the tool is added to this server
- Check that the tool hasn't been deleted
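If the error surfaces at the protocol level, the raw response is typically a JSON-RPC error along these lines (the tool name and message text are illustrative and will vary by server):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32602,
    "message": "Unknown tool: web_search"
  }
}
```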
### No Tools Appearing in Playground

**Problem:** The tools list is empty.

**Solution:**
- Ensure you've added tools to this server
- Navigate to the server settings and add tools
- Refresh the page
### Chat Not Using My Tools

**Problem:** AI responds without calling any tools.

**Solution:**
- Check that tools are added to the selected server
- Make your prompt more specific about what you need
- Review tool descriptions for clarity
### Credit Errors

**Problem:** A "Credit Error" message appears.

**Solution:**
- Check your remaining credit balance
- Add more credits to your account
- Reduce the complexity of your requests
## See Also
- Creating Custom Tools — Configure tools before testing
- API Keys Guide — Manage authentication for testing
- Creating Servers — Set up servers with your tools
- Credentials Guide — Link service accounts for tool authentication