# Model Context Protocol (MCP) Integration
HornetHive leverages the Model Context Protocol (MCP) to provide enterprise-grade integrations that give AI agents real-time access to your business tools and data.
## What is MCP?
Model Context Protocol is an open standard developed by Anthropic that enables AI systems to securely connect to external data sources and tools. Think of it as a universal adapter that lets AI agents interact with any service through a standardized interface.
## Why MCP Matters

**Traditional Integration Challenges:**
- Custom code required for each integration
- Fragile OAuth token management
- Limited tool availability for AI agents
- Difficult to maintain and extend
- Security concerns with custom implementations
**MCP Advantages:**
- ✅ Standardized server implementations
- ✅ Automatic tool discovery
- ✅ Community-maintained connectors
- ✅ Real-time data access
- ✅ Enterprise-grade security
- ✅ Extensible architecture
## HornetHive's MCP Architecture

### Automatic OAuth-to-MCP Sync
When you connect an integration via OAuth in HornetHive, the platform automatically:
1. **Saves OAuth tokens** - Standard token storage for direct API calls
2. **Creates MCP connection** - Provisions an MCP server with your credentials
3. **Discovers tools** - Identifies available agent tools (read email, search files, etc.)
4. **Provisions agents** - All AI agents receive integration-specific tools

**Result:** Your agents can immediately access Gmail, Slack, Drive, etc. without additional configuration.
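The decision logic behind the auto-sync can be sketched as follows. The mapping table, helper name, and request shape here are hypothetical illustrations, not HornetHive's actual internals:

```python
# Hypothetical sketch of the OAuth-to-MCP auto-sync decision (the mapping
# table and request shape are illustrative, not HornetHive internals).

OAUTH_TO_MCP = {
    "google_gmail": "gmail",
    "google_drive": "gdrive",
    "slack": "slack",
}

def build_mcp_connection(integration_type, access_token, existing_server_types):
    """Return a connection request for a newly authorized integration,
    or None when there is no MCP mapping or it is already provisioned."""
    server_type = OAUTH_TO_MCP.get(integration_type)
    if server_type is None or server_type in existing_server_types:
        return None
    return {
        "server_type": server_type,
        "server_name": f"{integration_type}_oauth",
        "credentials": {f"{server_type.upper()}_TOKEN": access_token},
    }
```

Skipping already-provisioned server types is what makes the sync safe to run after every OAuth completion.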
### Architecture Flow

```
User Clicks "Connect Gmail"
        ↓
OAuth Flow Completes
        ↓
Tokens Saved (OAuth Table)
        ↓
MCP Service Auto-Sync
        ↓
MCP Server Provisioned
        ↓
Tools Discovered & Registered
        ↓
AI Agents Receive New Tools
        ↓
Agents Use Tools in Workflows
```
## Supported MCP Servers
HornetHive integrates with 9+ official MCP servers:
| Integration | MCP Server | AI Agent Capabilities |
|---|---|---|
| Gmail | `@modelcontextprotocol/server-gmail` | Read inbox, search messages, send emails, manage labels |
| Google Calendar | `@modelcontextprotocol/server-google-calendar` | View events, check availability, create meetings, update schedules |
| Google Drive | `@modelcontextprotocol/server-gdrive` | Search files, read documents, list folders, manage permissions |
| Slack | `@modelcontextprotocol/server-slack` | Read channels, search messages, analyze threads, post updates |
| Notion | `@modelcontextprotocol/server-notion` | Query databases, read pages, search content, update properties |
| Linear | `@modelcontextprotocol/server-linear` | Create issues, update status, search tickets, manage projects |
| GitHub | `@modelcontextprotocol/server-github` | Read repos, search code, analyze issues, review PRs |
| Jira | `@modelcontextprotocol/server-jira` | Manage projects, create tickets, track work, update status |
| Filesystem | `@modelcontextprotocol/server-filesystem` | Local document access for self-hosted deployments |
Each MCP server provides multiple tools. For example, the Gmail MCP server includes:

- `gmail_read_message` - Fetch email content
- `gmail_search` - Search across mailbox
- `gmail_send` - Send new emails
- `gmail_list_labels` - Retrieve label structure
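As a sketch of what an agent runtime does with discovered tools, a lookup-by-name over a list of tool records might look like this. The `MCPTool` shape and its `run` callable are assumptions for illustration, not the SDK's actual types:

```python
# Illustrative shape of a discovered tool plus a lookup-by-name helper;
# the MCPTool dataclass here is an assumption, not the MCP SDK's type.

from dataclasses import dataclass
from typing import Callable

@dataclass
class MCPTool:
    name: str
    description: str
    run: Callable[..., str]

def find_tool(tools, name):
    """Select a discovered tool by its registered name."""
    for tool in tools:
        if tool.name == name:
            return tool
    raise KeyError(f"tool not registered: {name}")

# Stub registry standing in for the Gmail server's discovery results
tools = [
    MCPTool("gmail_search", "Search across mailbox",
            lambda query: f"hits for {query!r}"),
    MCPTool("gmail_send", "Send new emails",
            lambda to, body: f"sent to {to}"),
]
result = find_tool(tools, "gmail_search").run(query="Q3 roadmap")
```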
## Hybrid Architecture: Real-Time + RAG

HornetHive uses a hybrid approach that combines MCP with RAG (Retrieval-Augmented Generation):

**MCP Tools** - Real-time, live data access
- Current emails, latest Slack messages
- Live calendar availability
- Up-to-date file contents
- Fresh project status
**RAG Pipeline** - Historical context and semantic search
- Past conversations and decisions
- Archived documents
- Historical trends
- Related content discovery
**Example Workflow:** When HivePilot generates a PRD, it:

1. Uses MCP tools to check recent Slack discussions (live data)
2. Queries the RAG index for past product specs (historical context)
3. Combines both to create comprehensive requirements
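A minimal sketch of that combination, with both backends stubbed; the function and stub names are illustrative, not HornetHive's APIs:

```python
# Sketch of the hybrid pattern: live data via an MCP tool plus historical
# context from a vector index, merged into one prompt context. Both
# backends are stubbed; all names here are illustrative.

def build_prd_context(mcp_search, rag_search, topic):
    live = mcp_search(f"recent discussions about {topic}")  # real-time via MCP
    past = rag_search(topic, top_k=3)                       # semantic history
    return "\n".join(["## Live signals", live, "## Historical context", *past])

# Stubs standing in for the Slack MCP tool and the vector index
live_stub = lambda q: "3 threads in #product this week"
rag_stub = lambda q, top_k=3: [f"spec-{i}: {q}" for i in range(top_k)]

context = build_prd_context(live_stub, rag_stub, "billing revamp")
```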
## Document Ingestion Pipeline
MCP-connected sources can optionally feed into our RAG pipeline:
```
MCP Connection Established
        ↓
Document Loader (LangChain)
        ↓
Text Chunking (1000 chars, 200 overlap)
        ↓
Embedding Generation (OpenAI)
        ↓
Vector Storage (Pinecone)
        ↓
Semantic Search Available
```
**Technologies:**

- LangChain - Document loading and text splitting
- OpenAI Embeddings - Text-to-vector conversion (`text-embedding-ada-002`)
- Pinecone - Vector database for similarity search
- MCP Tools - Real-time API access during agent execution
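The chunking step can be illustrated with a plain sliding window using the same parameters. Production uses LangChain's splitters; this is only a sketch of the idea:

```python
# Minimal chunking sketch matching the pipeline's parameters (1000-char
# chunks, 200-char overlap). This is a plain sliding window, not
# LangChain's actual splitter.

def chunk_text(text, size=1000, overlap=200):
    """Split text into overlapping chunks of at most `size` characters."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap  # advance 800 chars per chunk by default
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```

The overlap keeps sentences that straddle a chunk boundary retrievable from both neighboring chunks.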
## Enterprise Benefits

### 1. Extensibility
Add any MCP-compatible server without backend code changes. The community maintains 50+ servers covering:
- Communication (Slack, Discord, Microsoft Teams)
- Project Management (Linear, Jira, Asana, Monday)
- Development (GitHub, GitLab, Bitbucket)
- Storage (Google Drive, Dropbox, OneDrive)
- Productivity (Notion, Confluence, Airtable)
**Custom Server Support:** Build proprietary MCP servers for internal tools. HornetHive discovers and provisions them automatically.
### 2. Security
- OAuth tokens stored encrypted at rest
- MCP connections workspace-isolated with RLS policies
- Audit trails for all tool usage
- Granular permissions per integration
- Automatic cleanup on disconnection
**Row Level Security Example:**

```sql
CREATE POLICY "Users can view workspace MCP connections"
ON mcp_connections
FOR SELECT
USING (
  workspace_id IN (
    SELECT workspace_id FROM workspace_members
    WHERE user_id = auth.uid()
  )
);
```
### 3. Reliability
- Auto-reconnection on transient failures
- Tool validation before agent provisioning
- Graceful degradation when services unavailable
- Comprehensive error handling with retry logic
- Health monitoring per connection
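An illustrative retry helper with exponential backoff, in the spirit of the auto-reconnection behavior listed above; the attempt count and delays are assumptions, not HornetHive's actual tuning:

```python
# Generic retry-with-backoff sketch for transient connection failures.
# Attempt counts and delays are illustrative, not HornetHive's tuning.

import time

def with_retries(op, attempts=3, base_delay=0.5, sleep=time.sleep):
    """Run op(), retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return op()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # retries exhausted: surface the error
            sleep(base_delay * (2 ** attempt))  # 0.5s, 1.0s, 2.0s, ...
```

Injecting `sleep` makes the helper trivially testable without real delays.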
**Connection Status Tracking:**

- `active` - Connected and operational
- `disconnected` - Manually disconnected by the user
- `error` - Connection failed, needs attention
### 4. Performance
- Tools cached per workspace
- Real-time data access (no stale cache issues)
- Parallel tool execution for faster workflows
- Optimized for agent workflows with CrewAI integration
## Technical Implementation

### Backend Architecture

**MCP Service Layer** (`app/services/mcp_service.py`):

```python
class MCPServerManager:
    async def connect_server(
        self,
        server_id: str,
        server_config: dict,
        workspace_id: str
    ) -> List[Tool]:
        """
        Connect to an MCP server and return the discovered tools.

        Returns:
            List of CrewAI tools for agent provisioning
        """
        # Connect via the MCP SDK
        # Discover available tools
        # Register in the database
        # Return tools for agent use
```
**MCP Router** (`app/routers/mcp_router.py`):

```python
@router.post("/connect")
async def connect_mcp_server(
    request: MCPConnectionRequest,
    workspace_id: str,
    current_user: dict = Depends(get_current_user)
):
    """
    Connect to an MCP server and store it in the database.
    OAuth tokens are automatically synced from integration_tokens.
    """
```
**Agent Integration** (`app/services/hivepilot/multi_agent_prd.py`):

```python
async def _create_agent_team_with_mcp(
    self,
    workspace_id: str
) -> Dict[str, Agent]:
    """Create an agent team with MCP tools."""
    # Get all MCP tools for this workspace
    mcp_tools = await mcp_manager.get_workspace_tools(workspace_id)

    # Provision each agent with tools
    agents["market_analyst"] = Agent(
        role="Senior Market Research Analyst",
        tools=mcp_tools,  # All MCP tools available
        # ...
    )
```
### Frontend Architecture

**MCP Service** (`src/services/mcpService.ts`):

```typescript
export const mcpService = {
  // Auto-sync OAuth token to MCP connection
  autoSyncOAuthToMCP: async (
    workspaceId: string,
    integrationType: string,
    accessToken: string
  ) => {
    const mcpServerType = OAUTH_TO_MCP_MAPPING[integrationType];

    // Check if a connection already exists
    const existing = await mcpService.listConnections(workspaceId);
    if (existing.find(c => c.server_type === mcpServerType)) {
      return; // Already connected
    }

    // Create a new MCP connection
    await mcpService.connectServer(
      workspaceId,
      mcpServerType,
      `${integrationType}_oauth`,
      { [`${mcpServerType.toUpperCase()}_TOKEN`]: accessToken }
    );
  }
};
```
**OAuth Callback Integration** (`src/pages/settings/IntegrationOAuthPage.tsx`):

```typescript
// After successful OAuth
await saveIntegrationToken({...});

// Auto-sync to MCP (non-blocking)
if (mcpService.isMCPAvailable(provider)) {
  await mcpService.autoSyncOAuthToMCP(
    workspaceId,
    provider,
    accessToken
  );
}
```
### Database Schema

**MCP Connections Table:**

```sql
CREATE TABLE mcp_connections (
  id UUID PRIMARY KEY,
  workspace_id UUID REFERENCES workspaces(id),
  server_type TEXT NOT NULL, -- 'gmail', 'slack', etc.
  server_name TEXT NOT NULL,
  server_config JSONB NOT NULL, -- {command, args, env}
  connection_status TEXT DEFAULT 'active',
  available_tools JSONB DEFAULT '[]',
  total_tool_calls INTEGER DEFAULT 0,
  last_connected_at TIMESTAMPTZ,
  UNIQUE(workspace_id, server_type, server_name)
);
```
**MCP Tool Usage Logs:**

```sql
CREATE TABLE mcp_tool_usage (
  id UUID PRIMARY KEY,
  mcp_connection_id UUID REFERENCES mcp_connections(id),
  workspace_id UUID REFERENCES workspaces(id),
  tool_name TEXT NOT NULL,
  tool_input JSONB,
  tool_output JSONB,
  execution_time_ms INTEGER,
  success BOOLEAN DEFAULT true,
  agent_name TEXT,
  outcome_id UUID REFERENCES user_outcomes(id),
  created_at TIMESTAMPTZ DEFAULT now()
);
```
## For Self-Hosted Deployments

### Custom MCP Server Setup

**1. Build Your MCP Server:**
```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import {
  ListToolsRequestSchema,
  CallToolRequestSchema
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "custom-crm", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// Advertise the tools this server exposes
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "search_customers",
      description: "Search CRM for customer records",
      inputSchema: {
        type: "object",
        properties: {
          query: { type: "string" },
          limit: { type: "number" }
        }
      }
    }
  ]
}));

// Dispatch incoming tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;
  if (name === "search_customers") {
    // Call your CRM API
    const results = await crmClient.search(args.query);
    return { content: [{ type: "text", text: JSON.stringify(results) }] };
  }
  throw new Error(`Unknown tool: ${name}`);
});
```
**2. Deploy MCP Server:**

```bash
# Publish to npm
npm publish @yourcompany/custom-crm-mcp

# Or run locally for development
node dist/index.js
```
**3. Connect in HornetHive:**

```python
import os

from app.services.mcp_service import mcp_manager

await mcp_manager.connect_server(
    server_id="custom-crm",
    server_config={
        "command": "npx",
        "args": ["-y", "@yourcompany/custom-crm-mcp"],
        "env": {
            "CRM_API_KEY": os.getenv("CRM_API_KEY"),
            "CRM_BASE_URL": "https://crm.company.com"
        }
    },
    workspace_id="workspace-123"
)
```
### Environment Variables

For self-hosted deployments, configure:

```bash
# Pinecone (for the RAG pipeline)
PINECONE_API_KEY=your_api_key
PINECONE_ENVIRONMENT=us-east-1

# OpenAI (for embeddings)
OPENAI_API_KEY=your_openai_key

# Custom MCP servers
CUSTOM_CRM_API_KEY=your_crm_key
INTERNAL_TOOL_TOKEN=your_token
```
## API Reference

### List MCP Connections

```
GET /api/mcp/connections?workspace_id={workspace_id}
```

**Response:**

```json
{
  "connections": [
    {
      "id": "uuid",
      "server_type": "gmail",
      "server_name": "gmail_oauth",
      "connection_status": "active",
      "available_tools": [
        {"name": "gmail_read_message", "description": "..."},
        {"name": "gmail_search", "description": "..."}
      ],
      "total_tool_calls": 142,
      "last_connected_at": "2025-01-04T10:30:00Z"
    }
  ],
  "total_count": 5
}
```
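A hypothetical client for this endpoint, with the HTTP transport injected so the sketch runs against a stubbed response; the base URL is an assumption:

```python
# Sketch of a client for the connections endpoint. The base URL is an
# assumption, and the transport is injected so this runs without a server.

def list_active_server_types(http_get, workspace_id,
                             base_url="https://api.hornethive.ai"):
    """Return the server types of all active MCP connections."""
    payload = http_get(f"{base_url}/api/mcp/connections?workspace_id={workspace_id}")
    return [c["server_type"] for c in payload["connections"]
            if c["connection_status"] == "active"]

# Stub response mirroring the shape documented above
stub_get = lambda url: {
    "connections": [
        {"server_type": "gmail", "connection_status": "active"},
        {"server_type": "slack", "connection_status": "error"},
    ],
    "total_count": 2,
}

active = list_active_server_types(stub_get, "workspace-123")
```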
### Connect MCP Server

```
POST /api/mcp/connect?workspace_id={workspace_id}
```

**Body:**

```json
{
  "server_type": "slack",
  "server_name": "company_slack",
  "credentials": {
    "SLACK_TOKEN": "xoxb-..."
  },
  "params": {}
}
```

**Response:**

```json
{
  "status": "connected",
  "connection_id": "uuid",
  "tools_available": 8,
  "tools": [...]
}
```
### Disconnect MCP Server

```
DELETE /api/mcp/disconnect/{connection_id}?workspace_id={workspace_id}
```

**Response:**

```json
{
  "status": "disconnected",
  "connection_id": "uuid"
}
```
### Get MCP Analytics

```
GET /api/mcp/analytics/{workspace_id}
```

**Response:**

```json
{
  "analytics": [
    {
      "server_type": "gmail",
      "total_tool_calls": 142,
      "unique_tools_used": 4,
      "avg_execution_time_ms": 850,
      "success_rate_percent": 98.5,
      "usage_count_last_30d": 89
    }
  ],
  "total_connections": 5
}
```
## Best Practices

### 1. Start with OAuth

Let HornetHive's auto-sync handle MCP provisioning. Don't manually create MCP connections for services that support OAuth.

### 2. Monitor Usage

Check analytics regularly to understand which tools agents use most:

```
GET /api/mcp/analytics/{workspace_id}
```
### 3. Document Custom Servers
Maintain internal documentation for proprietary MCP servers:
- Tool names and descriptions
- Required credentials
- Usage examples
- Troubleshooting guides
### 4. Test Before Production

Validate tool discovery and permissions in staging:

```python
tools = await mcp_manager.connect_server(...)
assert len(tools) > 0, "No tools discovered"
```
### 5. Plan for Scale
- MCP connections are workspace-scoped
- Tools are cached per workspace
- Consider rate limits for external APIs
- Monitor tool execution times
### 6. Handle Errors Gracefully

Agents should degrade gracefully when MCP tools fail:

```python
try:
    result = agent.execute_task(task)
except ToolExecutionError:
    # Fall back to an alternative approach
    result = agent.execute_without_external_data(task)
```
## Troubleshooting

### Connection Status: `error`

**Check:** The error message in the database:

```sql
SELECT error_message FROM mcp_connections WHERE id = 'connection-id';
```
**Common causes:**

- Invalid OAuth token (expired or revoked)
- Missing environment variables
- Network connectivity issues
- Rate limiting

**Solution:** Disconnect and reconnect the integration.
### Tools Not Discovered

**Check:** The MCP server logs:

```bash
# View server output
docker logs mcp-server-container
```
**Common causes:**

- MCP server not responding
- Incorrect server configuration
- Missing dependencies

**Solution:** Validate the server config and restart.
### Agent Not Using Tools

**Check:** The agent configuration:

```python
agent = agents["market_analyst"]
print(f"Agent tools: {len(agent.tools)}")
```
**Common causes:**

- Tools not passed to the agent
- Agent LLM not understanding tool descriptions
- Tool input schema too complex

**Solution:** Simplify tool descriptions and schemas.
## Performance Optimization

### Tool Caching

Tools are cached per workspace to avoid repeated discovery:

```python
# Tools are cached after the first connection
tools = await mcp_manager.get_workspace_tools(workspace_id)
```
### Parallel Execution

Agents can execute multiple MCP tools in parallel:

```python
# CrewAI automatically parallelizes independent tool calls
results = await agent.execute_tools([
    "gmail_search",
    "slack_read_channel",
    "drive_list_files"
])
```
### Rate Limit Management

HornetHive respects external API rate limits:
- Exponential backoff on 429 responses
- Request queuing per integration
- User notification on limit reached
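One plausible shape for that backoff loop; the response dict and parameter values are illustrative, not a specific HTTP library's API:

```python
# Sketch of 429 handling: honor a server-supplied Retry-After hint when
# present, otherwise back off exponentially. The response dict is a
# stand-in, not a real HTTP client's type.

def call_with_rate_limit(send, max_attempts=4, sleep=lambda s: None):
    """Call send() until it succeeds or rate-limit retries are exhausted."""
    delay = 1.0
    for _ in range(max_attempts):
        resp = send()
        if resp["status"] != 429:
            return resp
        sleep(float(resp.get("retry_after", delay)))  # prefer the server's hint
        delay *= 2
    raise RuntimeError("rate limited: retries exhausted")
```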
## Security Considerations

### Credential Storage
- OAuth tokens encrypted with workspace-specific keys
- MCP credentials never logged
- Environment variables for self-hosted deployments
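One way to realize "workspace-specific keys" is to derive a per-workspace key from a master secret. This PBKDF2 sketch is illustrative, not HornetHive's actual scheme:

```python
# Illustrative per-workspace key derivation from a master secret using
# stdlib PBKDF2. Parameters (iteration count, salt choice) are assumptions,
# not HornetHive's actual scheme.

import hashlib

def workspace_key(master_secret: bytes, workspace_id: str) -> bytes:
    """Derive a deterministic 32-byte key scoped to one workspace."""
    return hashlib.pbkdf2_hmac(
        "sha256",
        master_secret,
        workspace_id.encode(),  # workspace ID acts as the salt here
        100_000,
    )
```

Deterministic derivation means tokens can be re-decrypted without storing per-workspace keys, while a leaked key for one workspace reveals nothing about another's.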
### Data Isolation
- Workspace-level RLS policies
- MCP connections scoped to single workspace
- Tool usage logs partitioned by workspace
### Audit Trails

Every tool execution is logged:

```sql
SELECT
  tool_name,
  agent_name,
  success,
  created_at
FROM mcp_tool_usage
WHERE workspace_id = 'workspace-id'
ORDER BY created_at DESC;
```
## Future Roadmap

### Coming Soon

- **MCP Connection Health Dashboard** - Real-time monitoring UI
- **Custom Tool Builder** - No-code MCP server creation
- **Advanced Analytics** - Tool usage insights and recommendations
- **Multi-Region Support** - Deploy MCP servers close to data sources
### Under Consideration

- **Tool Marketplace** - Community-contributed custom tools
- **Federated MCP** - Connect multiple HornetHive instances
- **Edge Deployment** - Run MCP servers at the edge for low latency
## References

- **MCP Specification** - Official protocol docs
- **MCP Server Registry** - Community servers
- **Anthropic MCP Announcement** - Original blog post
- **HornetHive Integration Docs** - Integration guides
- **CrewAI Documentation** - Multi-agent framework
## Support

### Getting Help

- **Documentation:** docs.hornethive.ai
- **Email Support:** support@hornethive.ai
- **Enterprise:** enterprise@hornethive.ai
- **GitHub Issues:** Report bugs and feature requests

### Custom MCP Development
Our enterprise team can help you build custom MCP servers for proprietary systems. Contact enterprise@hornethive.ai for:
- Architecture consultation
- Custom server development
- Integration testing
- Production deployment support
Ready to extend HornetHive with custom integrations? Start with our Integration Overview or contact our team for enterprise support.