---
title: MCP Integration
description: "MCP patterns for sovereign agents: resources, guarded tools, edge execution"
tags: [mcp, protocol, integration, tools, security]
dependencies: [safety, threat-model, local-inference, encryption]
---

# MCP Integration

The Model Context Protocol lets agents expose resources and tools in a composable way. In a hacka.re-inspired architecture, MCP is the control plane for local-first intelligence and constrained execution.

## What MCP provides

**Resources** — named, URI-addressable read-only context:

```json
{
  "uri": "foreveragents://context/darkmode",
  "name": "Dark Mode",
  "mimeType": "text/markdown"
}
```

**Tools** — invokable capabilities with explicit schemas and bounded side effects.

**Prompts** — reusable templates for consistent task framing.

## Transport

Stdio (default): JSON-RPC 2.0 over stdin/stdout. No network config. Easy to test with pipes.

```
Client → stdin:  {"jsonrpc":"2.0","id":1,"method":"initialize","params":{}}
Server → stdout: {"jsonrpc":"2.0","id":1,"result":{"capabilities":{}}}
```
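
The handshake above can be sketched as a stdio server loop using nothing but the Python standard library. This is a minimal sketch, not a full MCP implementation: the framing is newline-delimited JSON as in the example above, and `tools/list` is stubbed to an empty list.

```python
import json
import sys

def handle(request):
    """Route a single JSON-RPC 2.0 message; notifications return None."""
    method = request.get("method")
    is_notification = "id" not in request
    if method == "initialize":
        result = {"capabilities": {}}
    elif method == "tools/list":
        result = {"tools": []}  # stub: no tools registered in this sketch
    elif is_notification:
        return None  # e.g. the initialized notification needs no answer
    else:
        # Unknown request method -> standard JSON-RPC "method not found"
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": f"Method not found: {method}"}}
    if is_notification:
        return None
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

def serve(inp=sys.stdin, out=sys.stdout):
    """One JSON message per line on stdin; responses written to stdout."""
    for line in inp:
        line = line.strip()
        if not line:
            continue
        response = handle(json.loads(line))
        if response is not None:
            out.write(json.dumps(response) + "\n")
            out.flush()
```

Because the loop reads stdin and writes stdout, it composes directly with the pipe-based testing mentioned above.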

Use HTTP+SSE for network-accessible servers, especially edge-container deployments.

## Sovereign tool stack

Recommended runtime composition:

1. Local inference server (Ollama, llama.cpp, or equivalent)
2. Edge-containers for isolated execution contexts
3. Bash tool for deterministic shell operations when needed

Keep these separable: local inference should work even if edge tool execution is unavailable. In-browser inference (WebLLM) is an optional graceful fallback for zero-server deployments.
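
The separability principle can be sketched as a backend picker that prefers local backends and never falls back to remote ones implicitly. The `Backend` dataclass and the backend names here are illustrative, not a real API:

```python
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    available: bool
    local: bool  # runs on this machine or in this browser

def pick_inference_backend(backends, allow_remote=False):
    """Prefer local backends; fall back to remote only when explicitly allowed."""
    for b in backends:
        if b.available and b.local:
            return b
    if allow_remote:
        for b in backends:
            if b.available:
                return b
    raise RuntimeError("no local inference backend available; remote fallback not enabled")
```

The explicit `allow_remote` flag keeps the remote escape hatch visible at the call site instead of buried in configuration.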

## Bash tool pattern (guarded)

Expose Bash as a narrow MCP tool, never as unrestricted raw terminal passthrough.

```json
{
  "name": "bash_exec",
  "description": "Run an allowlisted shell command in a constrained environment",
  "inputSchema": {
    "type": "object",
    "properties": {
      "command": { "type": "string" },
      "cwd": { "type": "string" },
      "timeout_ms": { "type": "integer", "minimum": 100, "maximum": 120000 }
    },
    "required": ["command"]
  }
}
```

Required safeguards:

- Allowlist command families (`git status`, `ls`, `cat`, build/test commands)
- Block destructive commands by default (`rm -rf`, `sudo`, raw network exfil patterns)
- Execute in sandboxed working directories or edge-containers
- Impose time, output, and memory limits
- Return structured stdout/stderr/exit code, not just free-form logs
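
The safeguards above (minus sandboxing, which belongs in the execution environment) can be sketched as a guarded executor. The allowlist and blocklist entries are illustrative placeholders; a real deployment would run this inside an edge-container:

```python
import shlex
import subprocess

# Illustrative policy; tune per deployment
ALLOWED_PREFIXES = [["git", "status"], ["ls"], ["cat"]]
BLOCKED_TOKENS = {"rm", "sudo", "curl", "wget"}  # destructive / exfil patterns

def check_command(command):
    """Parse and vet a command string; raise PermissionError if not allowed."""
    argv = shlex.split(command)
    if not argv:
        raise ValueError("empty command")
    if any(tok in BLOCKED_TOKENS for tok in argv):
        raise PermissionError(f"blocked token in: {command}")
    if not any(argv[:len(p)] == p for p in ALLOWED_PREFIXES):
        raise PermissionError(f"command not in allowlist: {command}")
    return argv

def bash_exec(command, cwd=".", timeout_ms=10_000):
    """Run an allowlisted command, returning structured stdout/stderr/exit code."""
    argv = check_command(command)
    proc = subprocess.run(argv, cwd=cwd, capture_output=True, text=True,
                          timeout=timeout_ms / 1000)
    return {"stdout": proc.stdout[:64_000],   # output caps, per the limits above
            "stderr": proc.stderr[:64_000],
            "exit_code": proc.returncode}
```

Note that `shlex.split` plus `subprocess.run(argv)` avoids invoking a shell at all, which closes off injection via `;`, `|`, and backticks.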

## Protocol flow

1. Client sends `initialize` → server responds with capabilities
2. Client sends `initialized` notification
3. Client calls `resources/list`, `tools/list`, `prompts/list`
4. Client calls `resources/read`, `tools/call`, `prompts/get` as needed
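
As a sketch, the client side of this startup flow reduces to a fixed message sequence. Method names follow the list above; `params` is kept empty as in the transport example, and the notification method name is an assumption about the wire format:

```python
import itertools

def handshake_messages():
    """Yield the client-side JSON-RPC messages for MCP startup, in order."""
    ids = itertools.count(1)
    yield {"jsonrpc": "2.0", "id": next(ids), "method": "initialize", "params": {}}
    # Notifications carry no id and expect no response
    yield {"jsonrpc": "2.0", "method": "notifications/initialized"}
    for method in ("resources/list", "tools/list", "prompts/list"):
        yield {"jsonrpc": "2.0", "id": next(ids), "method": method, "params": {}}
```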

## Client configuration

Claude Desktop (`claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "foreveragents": {
      "command": "foreveragents"
    }
  }
}
```

## Design principles

- Name things clearly: `get_context` not `get`
- Write rich descriptions because they guide LLM tool selection
- Return exactly what's needed, nothing more
- Use JSON-RPC error codes for failures
- Make tool data-flow visible in metadata: `local-only`, `filesystem-write`, `network-out`, `third-party` — LLM tool selection should be informed by data governance properties, not just capability
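
A sketch of governance-aware tool filtering, assuming a hypothetical `annotations.dataflow` field on each tool definition (the field name and tool entries are illustrative):

```python
TOOLS = [
    {"name": "get_context", "annotations": {"dataflow": ["local-only"]}},
    {"name": "bash_exec",   "annotations": {"dataflow": ["filesystem-write"]}},
    {"name": "web_search",  "annotations": {"dataflow": ["network-out", "third-party"]}},
]

def tools_allowed(tools, forbidden=frozenset({"network-out", "third-party"})):
    """Keep only tools whose declared data flows avoid forbidden properties."""
    return [t["name"] for t in tools
            if not forbidden & set(t["annotations"]["dataflow"])]
```

Filtering before `tools/list` results reach the model means the LLM never has to be trusted to respect data-governance policy on its own.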

## For agents

1. Resources for content, tools for actions
2. Stdio for local, HTTP+SSE for shared
3. Connect to local inference servers first; fall back to in-browser or remote explicitly
4. Use edge-containers only for lightweight tool execution; use proper VMs for heavy tasks
5. Treat Bash as a privileged tool with allowlists and hard limits
6. Test with `echo '{"jsonrpc":"2.0",...}' | ./server` before integrating
7. Keep servers stateless when possible
