Web Search
Search the web using Google- and DuckDuckGo-powered search tools.
Provider: Built-in (SerpAPI)
Authentication: None required (uses platform API key)
Category: Search & Research
Credit Cost: 1 credit per search
Overview
Web Search tools provide access to Google and DuckDuckGo search results without requiring your own API keys. Perfect for research, finding URLs to scrape, news monitoring, and academic research workflows.
Available Tools
Google Search
Full-featured Google search with customizable results, geolocation, and language settings.
Tool ID: web_search_Google_Search
Credit Cost: 1 credit
Parameters:
- query (string, required): Search query string
- num_results (integer, optional): Number of results to return
  - Default: 10
  - Range: 1-100
- gl (string, optional): Geolocation country code
  - Default: "us"
  - Examples: "uk", "de", "fr", "jp"
- hl (string, optional): Interface language code
  - Default: "en"
  - Examples: "es", "fr", "de", "ja"
- safe (string, optional): Safe search filter
  - Default: "off"
  - Options: "off", "active"
Response:
{
  "results": [
    {
      "title": "Example Domain",
      "link": "https://example.com",
      "snippet": "Example Domain. This domain is for use in illustrative examples..."
    },
    {
      "title": "Another Result",
      "link": "https://another-example.com",
      "snippet": "Description of the page content..."
    }
  ],
  "total_results": 1234567890,
  "search_time": 0.45
}
Example Usage:
# Python - Basic search
response = client.call_tool(
    name="web_search_Google_Search",
    arguments={
        "query": "MCP protocol AI agents",
        "num_results": 10
    }
)

for result in response["results"]:
    print(f"{result['title']}: {result['link']}")
// TypeScript - Search with localization
const response = await client.callTool({
  name: "web_search_Google_Search",
  arguments: {
    query: "machine learning tutorials",
    num_results: 20,
    gl: "uk",
    hl: "en"
  }
});

response.results.forEach(result => {
  console.log(`${result.title}: ${result.link}`);
});
# cURL
curl -X POST https://api.joinreeva.com/mcp/server_YOUR_ID \
  -H "Authorization: Bearer mcpk_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "tools/call",
    "params": {
      "name": "web_search_Google_Search",
      "arguments": {
        "query": "AI agent frameworks",
        "num_results": 10
      }
    },
    "id": 1
  }'
Use Cases:
- Research and information gathering
- Find URLs for web scraping
- Market research and competitor analysis
- Content discovery
Google Search Lite
A lightweight, faster Google search that returns location-based results.
Tool ID: web_search_Google_Search_Lite
Credit Cost: 1 credit
Parameters:
- query (string, required): Search query string
- location (string, optional): Location for localized results
  - Example: "Austin, Texas, United States"
- hl (string, optional): Interface language code
  - Default: "en"
- gl (string, optional): Country code for geolocation
  - Default: "us"
- device (string, optional): Device type for results
  - Default: "desktop"
  - Options: "desktop", "mobile", "tablet"
Response:
{
  "results": [
    {
      "title": "Local Business Name",
      "link": "https://localbusiness.com",
      "snippet": "Local business description..."
    }
  ]
}
Example Usage:
# Python - Location-based search
response = client.call_tool(
    name="web_search_Google_Search_Lite",
    arguments={
        "query": "coffee shops",
        "location": "San Francisco, California, United States",
        "device": "mobile"
    }
)
// TypeScript - Quick search
const response = await client.callTool({
  name: "web_search_Google_Search_Lite",
  arguments: {
    query: "best restaurants",
    location: "New York, NY, United States"
  }
});
Use Cases:
- Quick searches when speed matters
- Location-based queries
- Mobile-optimized results
- Local business discovery
When to use Lite vs Full:
- Use Lite for faster results with fewer options
- Use Full for more control over result count and safe search
Google Scholar Search
Search academic papers, research publications, and scholarly articles.
Tool ID: web_search_Google_Scholar_Search
Credit Cost: 1 credit
Parameters:
- query (string, required): Research query or topic to search for
- device (string, optional): Device type
  - Default: "desktop"
  - Options: "desktop", "mobile", "tablet"
- hl (string, optional): Interface language code
  - Default: "en"
- gl (string, optional): Country code for geolocation
  - Default: "us"
Response:
{
  "results": [
    {
      "title": "Attention Is All You Need",
      "link": "https://arxiv.org/abs/1706.03762",
      "snippet": "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks...",
      "authors": "A Vaswani, N Shazeer, N Parmar...",
      "cited_by": 85000,
      "year": 2017
    }
  ]
}
Example Usage:
# Python - Academic research
response = client.call_tool(
    name="web_search_Google_Scholar_Search",
    arguments={
        "query": "transformer architecture neural networks"
    }
)

for paper in response["results"]:
    print(f"{paper['title']} ({paper.get('year', 'N/A')})")
    print(f"  Cited by: {paper.get('cited_by', 'N/A')}")
    print(f"  {paper['link']}")
// TypeScript - Find research papers
const response = await client.callTool({
  name: "web_search_Google_Scholar_Search",
  arguments: {
    query: "large language models reasoning",
    hl: "en"
  }
});
Use Cases:
- Academic research and literature review
- Finding citations and references
- Discovering recent publications
- Research paper discovery
- Building bibliographies
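For the bibliography use case, the fields in the response above are enough to render plain-text citation lines. A minimal sketch (the `format_citation` helper is illustrative, not part of the API; `authors` and `year` may be absent on some results, so they are read defensively):

```python
def format_citation(paper):
    """Render one Scholar result as a plain-text citation line,
    using the title, authors, year, and link fields shown above."""
    authors = paper.get("authors", "Unknown authors")
    year = paper.get("year", "n.d.")
    return f"{authors} ({year}). {paper['title']}. {paper['link']}"
```

Mapping this over `response["results"]` and joining with newlines yields a quick working bibliography.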
Google News Search
Search for news articles with time-based filtering.
Tool ID: web_search_Google_News_Search
Credit Cost: 1 credit
Parameters:
- query (string, required): News topic or keywords to search for
- location (string, optional): Location for localized news
  - Example: "Austin, TX, Texas, United States"
- hl (string, optional): Interface language code
  - Default: "en"
- gl (string, optional): Country code for geolocation
  - Default: "us"
- tbs (string, optional): Time-based search filter
  - "qdr:h" - Past hour
  - "qdr:d" - Past day (24 hours)
  - "qdr:w" - Past week
  - "qdr:m" - Past month
  - "qdr:y" - Past year
Response:
{
  "results": [
    {
      "title": "Breaking: Tech Company Announces New AI Product",
      "link": "https://news.example.com/article",
      "snippet": "A major tech company today announced...",
      "source": "Tech News Daily",
      "date": "2 hours ago"
    }
  ]
}
Example Usage:
# Python - Recent news on a topic
response = client.call_tool(
    name="web_search_Google_News_Search",
    arguments={
        "query": "artificial intelligence",
        "tbs": "qdr:d"  # Past 24 hours
    }
)

for article in response["results"]:
    print(f"[{article.get('source', 'Unknown')}] {article['title']}")
    print(f"  {article.get('date', '')}")
// TypeScript - Local news
const response = await client.callTool({
  name: "web_search_Google_News_Search",
  arguments: {
    query: "tech startups",
    location: "San Francisco, CA, United States",
    tbs: "qdr:w"  // Past week
  }
});
# cURL - Breaking news
curl -X POST https://api.joinreeva.com/mcp/server_YOUR_ID \
  -H "Authorization: Bearer mcpk_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "tools/call",
    "params": {
      "name": "web_search_Google_News_Search",
      "arguments": {
        "query": "climate change",
        "tbs": "qdr:h"
      }
    },
    "id": 1
  }'
Use Cases:
- News monitoring and alerts
- Trend analysis
- Competitor news tracking
- Industry updates
- Event coverage
Time Filter Reference:
| Filter | Description |
|---|---|
| qdr:h | Past hour |
| qdr:d | Past 24 hours |
| qdr:w | Past week |
| qdr:m | Past month |
| qdr:y | Past year |
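The table maps directly to a small lookup that keeps the raw codes out of call sites. A minimal sketch (the `TBS_FILTERS` dict and `tbs_for` helper are illustrative, not part of the API):

```python
# Human-readable names for the tbs filter codes in the table above.
TBS_FILTERS = {
    "hour": "qdr:h",
    "day": "qdr:d",
    "week": "qdr:w",
    "month": "qdr:m",
    "year": "qdr:y",
}

def tbs_for(window: str) -> str:
    """Return the tbs value for a named time window, e.g. 'week' -> 'qdr:w'."""
    try:
        return TBS_FILTERS[window]
    except KeyError:
        raise ValueError(
            f"unknown time window {window!r}; expected one of {sorted(TBS_FILTERS)}"
        ) from None
```

Call sites then read as `arguments={"query": topic, "tbs": tbs_for("week")}`, and a typo fails loudly instead of silently returning unfiltered results.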
DuckDuckGo Search
Privacy-focused search engine with no tracking.
Tool ID: web_search_DuckDuckGo_Search
Credit Cost: 1 credit
Parameters:
- query (string, required): Search query string
- num_results (integer, optional): Number of results to return
  - Default: 10
  - Range: 1-100
- kl (string, optional): Region and language code
  - Default: "us-en"
  - Examples: "uk-en", "de-de", "fr-fr", "jp-jp"
- safe_search (string, optional): Safe search level
  - Default: "moderate"
  - Options: "strict", "moderate", "off"
Response:
{
  "results": [
    {
      "title": "DuckDuckGo — Privacy, simplified.",
      "link": "https://duckduckgo.com",
      "snippet": "The search engine that doesn't track you..."
    }
  ]
}
Example Usage:
# Python - Privacy-focused search
response = client.call_tool(
    name="web_search_DuckDuckGo_Search",
    arguments={
        "query": "privacy tools for developers",
        "num_results": 15,
        "safe_search": "moderate"
    }
)

for result in response["results"]:
    print(f"{result['title']}: {result['link']}")
// TypeScript - Regional search
const response = await client.callTool({
  name: "web_search_DuckDuckGo_Search",
  arguments: {
    query: "open source software",
    num_results: 20,
    kl: "uk-en"
  }
});
Use Cases:
- Privacy-conscious searches
- Alternative to Google results
- Non-personalized results (no filter bubble)
- International searches with regional settings
Region Codes:
| Code | Region |
|---|---|
| us-en | United States (English) |
| uk-en | United Kingdom (English) |
| de-de | Germany (German) |
| fr-fr | France (French) |
| es-es | Spain (Spanish) |
| jp-jp | Japan (Japanese) |
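As the table shows, a kl code is a lowercase country code and language code joined by a hyphen. A small sketch that builds one from its parts (the `make_kl` helper is illustrative; `DOCUMENTED_KL_CODES` mirrors the table above and is not an exhaustive list of codes DuckDuckGo accepts):

```python
# Region codes documented in the table above.
DOCUMENTED_KL_CODES = {"us-en", "uk-en", "de-de", "fr-fr", "es-es", "jp-jp"}

def make_kl(country: str, language: str) -> str:
    """Build a kl region code, e.g. make_kl('UK', 'EN') -> 'uk-en'.
    Normalizes to lowercase, which the kl parameter expects."""
    code = f"{country.strip().lower()}-{language.strip().lower()}"
    if code not in DOCUMENTED_KL_CODES:
        # Other codes may still work; this just flags ones outside the table.
        print(f"note: {code!r} is not in the documented region list")
    return code
```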
Common Patterns
Research Workflow
# Comprehensive research combining search types
def research_topic(topic):
    # General web search
    web_results = client.call_tool(
        name="web_search_Google_Search",
        arguments={"query": topic, "num_results": 10}
    )
    # Academic papers
    academic_results = client.call_tool(
        name="web_search_Google_Scholar_Search",
        arguments={"query": topic}
    )
    # Recent news
    news_results = client.call_tool(
        name="web_search_Google_News_Search",
        arguments={"query": topic, "tbs": "qdr:w"}
    )
    return {
        "web": web_results["results"],
        "academic": academic_results["results"],
        "news": news_results["results"]
    }
News Monitoring
# Daily news digest for multiple topics
def create_news_digest(topics):
    digest = {}
    for topic in topics:
        results = client.call_tool(
            name="web_search_Google_News_Search",
            arguments={
                "query": topic,
                "tbs": "qdr:d"  # Last 24 hours
            }
        )
        digest[topic] = results["results"][:5]  # Top 5 per topic
    return digest

# Usage
topics = ["AI", "climate", "technology"]
daily_digest = create_news_digest(topics)
Search and Scrape Pipeline
# Find URLs then scrape content
def search_and_scrape(query):
    # Search for relevant pages
    search_results = client.call_tool(
        name="web_search_Google_Search",
        arguments={"query": query, "num_results": 5}
    )
    # Scrape each result
    scraped_content = []
    for result in search_results["results"]:
        content = client.call_tool(
            name="web_scraper_Get_Website_Markdown",
            arguments={"url": result["link"]}
        )
        scraped_content.append({
            "title": result["title"],
            "url": result["link"],
            "content": content["markdown"]
        })
    return scraped_content
Multi-Engine Search
# Compare results from multiple search engines
def multi_engine_search(query):
    google_results = client.call_tool(
        name="web_search_Google_Search",
        arguments={"query": query, "num_results": 10}
    )
    ddg_results = client.call_tool(
        name="web_search_DuckDuckGo_Search",
        arguments={"query": query, "num_results": 10}
    )
    # Combine and deduplicate by URL
    all_urls = set()
    combined = []
    for result in google_results["results"] + ddg_results["results"]:
        if result["link"] not in all_urls:
            all_urls.add(result["link"])
            combined.append(result)
    return combined
Best Practices
Query Optimization
- Use specific, targeted queries for better results
- Include relevant keywords and phrases
- Use quotes for exact phrase matching: "exact phrase"
- Exclude terms with a minus sign: query -exclude
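These operators compose mechanically, so they are easy to wrap in a small query builder. A sketch (the `build_query` helper is illustrative, not part of the API):

```python
def build_query(terms, exact_phrases=(), exclude=()):
    """Compose a query string from plain terms, quoted exact phrases,
    and minus-prefixed exclusions."""
    parts = list(terms)
    parts += [f'"{phrase}"' for phrase in exact_phrases]
    parts += [f"-{term}" for term in exclude]
    return " ".join(parts)

query = build_query(["python"], exact_phrases=["web scraping"], exclude=["jobs"])
# query == 'python "web scraping" -jobs'
```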
Performance
- Cache results to avoid redundant searches
- Use Google_Search_Lite for speed when location matters
- Limit num_results to what you actually need
- Batch related searches efficiently
Cost Management
- Each search costs 1 credit regardless of result count
- Combine multiple queries strategically
- Cache and reuse results when appropriate
- Use time filters to get more relevant results
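Since every call costs a credit, even a simple in-memory cache with a time-to-live pays for itself on repeated queries. A sketch (the `SearchCache` class is illustrative, not part of the API):

```python
import time

class SearchCache:
    """Cache search results keyed by (tool, query) with a time-to-live."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self._store = {}

    def get(self, tool, query):
        """Return cached results, or None if absent or expired."""
        entry = self._store.get((tool, query))
        if entry is None:
            return None
        stored_at, results = entry
        if time.time() - stored_at > self.ttl:
            del self._store[(tool, query)]  # expired
            return None
        return results

    def put(self, tool, query, results):
        self._store[(tool, query)] = (time.time(), results)
```

Check the cache before calling `client.call_tool` and `put` the results afterward; every hit saves one credit.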
Reliability
- Handle empty results gracefully
- Implement retry logic for transient failures
- Validate URLs before scraping
- Consider using multiple search engines for important queries
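The first two points combine naturally in one wrapper: treat a missing `results` key as an empty list, and back off exponentially on transient errors. A sketch (assuming failures surface as exceptions from the client call; `search_with_retry` is illustrative, not part of the API):

```python
import time

def search_with_retry(call_search, arguments, attempts=3, base_delay=1.0):
    """Run a search callable, retrying transient failures with
    exponential backoff. `call_search` stands in for client.call_tool
    bound to a tool name."""
    for attempt in range(attempts):
        try:
            response = call_search(arguments)
            # Handle empty or missing results gracefully.
            return response.get("results", [])
        except Exception:
            if attempt == attempts - 1:
                raise  # give up after the final attempt
            time.sleep(base_delay * (2 ** attempt))
```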
Troubleshooting
No Results Returned
Cause: Query too specific or no matching content
Solutions:
- Broaden your search query
- Remove restrictive filters
- Try alternative phrasing
- Try a different search engine
Unexpected Results
Cause: Query interpretation differs from intent
Solutions:
- Use quotes for exact phrases
- Add context keywords
- Use specific language/region codes
- Try more specific queries
Rate Limiting
Cause: Too many requests in short time
Solutions:
- Add delays between searches
- Cache results to reduce requests
- Batch related queries
- Use appropriate request intervals
Regional Results Not Working
Cause: Incorrect country/language codes
Solutions:
- Verify the country code format (e.g., "us", "uk")
- Check the language code format (e.g., "en", "de")
- For DuckDuckGo, use the combined format ("us-en")
- Ensure codes are lowercase
Integration Examples
Example 1: Research Assistant
# Complete research assistant
def research_assistant(topic):
    # Step 1: Web search
    web = client.call_tool(
        name="web_search_Google_Search",
        arguments={"query": topic, "num_results": 10}
    )
    # Step 2: Academic sources
    academic = client.call_tool(
        name="web_search_Google_Scholar_Search",
        arguments={"query": topic}
    )
    # Step 3: Summarize findings
    summary_prompt = f"Summarize research on: {topic}\n\nWeb sources:\n"
    for r in web["results"][:5]:
        summary_prompt += f"- {r['title']}: {r['snippet']}\n"
    summary_prompt += "\nAcademic sources:\n"
    for r in academic["results"][:5]:
        summary_prompt += f"- {r['title']}: {r['snippet']}\n"
    # Use Perplexity to synthesize
    summary = client.call_tool(
        name="Perplexity_Ask",
        arguments={"question": summary_prompt}
    )
    return {
        "web_sources": web["results"],
        "academic_sources": academic["results"],
        "summary": summary["answer"]
    }
Example 2: News Alert System
# Monitor news for keywords over the past 24 hours
def news_alert(keywords):
    alerts = []
    for keyword in keywords:
        results = client.call_tool(
            name="web_search_Google_News_Search",
            arguments={
                "query": keyword,
                "tbs": "qdr:d"  # Last 24 hours
            }
        )
        if results["results"]:
            alerts.append({
                "keyword": keyword,
                "count": len(results["results"]),
                "top_stories": results["results"][:3]
            })
    return alerts

# Usage
keywords = ["your company", "competitor name", "industry trend"]
daily_alerts = news_alert(keywords)
Example 3: Content Aggregator
# Aggregate content from search results
from datetime import datetime

def aggregate_content(topics):
    aggregated = []
    for topic in topics:
        # Search for content
        results = client.call_tool(
            name="web_search_Google_Search",
            arguments={"query": topic, "num_results": 5}
        )
        for result in results["results"]:
            # Get full content
            content = client.call_tool(
                name="web_scraper_Get_Website_Markdown",
                arguments={"url": result["link"]}
            )
            # Store in database
            client.call_tool(
                name="supabase_create_records",
                arguments={
                    "table": "content",
                    "records": [{
                        "topic": topic,
                        "title": result["title"],
                        "url": result["link"],
                        "content": content["markdown"],
                        "scraped_at": datetime.now().isoformat()
                    }]
                }
            )
            aggregated.append(result)
    return aggregated
Choosing the Right Search Tool
| Use Case | Recommended Tool |
|---|---|
| General research | Google Search |
| Quick local search | Google Search Lite |
| Academic papers | Google Scholar Search |
| News monitoring | Google News Search |
| Privacy-focused | DuckDuckGo Search |
| Multiple regions | DuckDuckGo Search |
Related Tools
- Firecrawl - Scrape search result URLs
- Web Scraper - Extract content from URLs
- Perplexity - AI-powered research
- Notion - Store research findings
See Also
- Creating Custom Tools - Pre-configure search settings
- Tool Testing - Test searches in playground
- All Tools - Complete tool catalog