MCP Integration
depends on: safety, threat-model, local-inference, encryption
The Model Context Protocol lets agents expose resources and tools in a composable way. In a hacka.re-inspired architecture, MCP is the control plane for local-first intelligence and constrained execution.
What MCP provides
Resources — named, URI-addressable read-only context:
{
  "uri": "foreveragents://context/darkmode",
  "name": "Dark Mode",
  "mimeType": "text/markdown"
}
Tools — invokable capabilities with explicit schemas and bounded side effects. Prompts — reusable templates for consistent task framing.
Transport
Stdio (default): JSON-RPC 2.0 over stdin/stdout. No network config. Easy to test with pipes.
Client → stdin: {"jsonrpc":"2.0","id":1,"method":"initialize","params":{}}
Server → stdout: {"jsonrpc":"2.0","id":1,"result":{"capabilities":{}}}
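The exchange above can be sketched as a minimal read-dispatch-write loop, assuming one newline-delimited JSON-RPC message per line; the `handle` and `serve` names are illustrative, and only `initialize` is dispatched here:

```python
import json
import sys

def handle(message):
    """Dispatch one JSON-RPC 2.0 message; notifications get no response."""
    if message.get("method") == "initialize":
        return {"jsonrpc": "2.0", "id": message["id"],
                "result": {"capabilities": {}}}
    if "id" not in message:            # notification (e.g. initialized)
        return None
    return {"jsonrpc": "2.0", "id": message["id"],
            "error": {"code": -32601, "message": "Method not found"}}

def serve(stdin=sys.stdin, stdout=sys.stdout):
    """Read one JSON-RPC message per line from stdin; write responses back."""
    for line in stdin:
        if line.strip():
            response = handle(json.loads(line))
            if response is not None:
                stdout.write(json.dumps(response) + "\n")
                stdout.flush()
```

Because the transport is just lines on stdin/stdout, the same `handle` function can be unit-tested without any process plumbing.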
HTTP+SSE for network-accessible servers, especially for edge-container deployments.
Sovereign tool stack
Recommended runtime composition:
- Local inference server (Ollama, llama.cpp, or equivalent) for model inference
- Edge-containers for isolated execution contexts
- Bash tool for deterministic shell operations when needed
Keep these separable: local inference should work even if edge tool execution is unavailable. In-browser inference (WebLLM) is an optional graceful downgrade for zero-server deployments.
Bash tool pattern (guarded)
Expose Bash as a narrow MCP tool, never as an unrestricted raw terminal passthrough.
{
  "name": "bash_exec",
  "description": "Run an allowlisted shell command in a constrained environment",
  "inputSchema": {
    "type": "object",
    "properties": {
      "command": { "type": "string" },
      "cwd": { "type": "string" },
      "timeout_ms": { "type": "integer", "minimum": 100, "maximum": 120000 }
    },
    "required": ["command"]
  }
}
Required safeguards:
- Allowlist command families (`git status`, `ls`, `cat`, build/test commands)
- Block destructive commands by default (`rm -rf`, `sudo`, raw network exfil patterns)
- Execute in sandboxed working directories or edge-containers
- Impose time, output, and memory limits
- Return structured stdout/stderr/exit code, not free-form logs only
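The safeguards above can be sketched as a guarded executor; the allowlist contents, the blocked-token set, the 64 KiB output cap, and the `bash_exec` name are illustrative assumptions, not a fixed policy:

```python
import shlex
import subprocess

# Hypothetical allowlist: leading tokens of permitted command families.
ALLOWED = {("git", "status"), ("ls",), ("cat",), ("make", "test")}
BLOCKED_TOKENS = {"sudo", "rm"}  # refused outright wherever they appear

def bash_exec(command, cwd=None, timeout_ms=10_000):
    """Run an allowlisted command; return structured stdout/stderr/exit code."""
    tokens = shlex.split(command)
    if any(t in BLOCKED_TOKENS for t in tokens):
        return {"error": "blocked command"}
    if not any(tuple(tokens[:len(p)]) == p for p in ALLOWED):
        return {"error": "command not in allowlist"}
    try:
        # No shell=True: tokens are executed directly, so pipes and
        # redirections cannot smuggle extra commands past the allowlist.
        proc = subprocess.run(tokens, cwd=cwd, capture_output=True,
                              text=True, timeout=timeout_ms / 1000)
    except subprocess.TimeoutExpired:
        return {"error": "timeout"}
    return {"stdout": proc.stdout[:65536],   # output size limit
            "stderr": proc.stderr[:65536],
            "exit_code": proc.returncode}
```

Returning a dict with `stdout`, `stderr`, and `exit_code` keeps results machine-readable, which matters when the caller is an LLM deciding what to do next.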
Protocol flow
- Client sends `initialize` → server responds with capabilities
- Client sends `initialized` notification
- Client calls `resources/list`, `tools/list`, `prompts/list`
- Client calls `resources/read`, `tools/call`, `prompts/get` as needed
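End to end over stdio, that flow looks like this (`id` values are illustrative; the notification's method name, `notifications/initialized`, follows the MCP spec):

```
Client → stdin: {"jsonrpc":"2.0","id":1,"method":"initialize","params":{}}
Server → stdout: {"jsonrpc":"2.0","id":1,"result":{"capabilities":{}}}
Client → stdin: {"jsonrpc":"2.0","method":"notifications/initialized"}
Client → stdin: {"jsonrpc":"2.0","id":2,"method":"tools/list"}
Server → stdout: {"jsonrpc":"2.0","id":2,"result":{"tools":[]}}
```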
Client configuration
Claude Desktop (claude_desktop_config.json):
{
  "mcpServers": {
    "foreveragents": {
      "command": "foreveragents"
    }
  }
}
Design principles
- Name things clearly: `get_context`, not `get`
- Write rich descriptions because they guide LLM tool selection
- Return exactly what's needed, nothing more
- Use JSON-RPC error codes for failures
- Make tool data flow visible in metadata (`local-only`, `filesystem-write`, `network-out`, `third-party`): LLM tool selection should be informed by data-governance properties, not just capability
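A sketch of policy-aware tool selection under such tags; the registry contents and the `allowed_tools` helper are hypothetical:

```python
# Hypothetical registry: each tool declares its data-flow tags.
TOOLS = [
    {"name": "get_context", "data_flow": ["local-only"]},
    {"name": "bash_exec", "data_flow": ["local-only", "filesystem-write"]},
    {"name": "web_fetch", "data_flow": ["network-out", "third-party"]},
]

def allowed_tools(tools, forbidden=("network-out", "third-party")):
    """Offer only tools whose data-flow tags avoid the forbidden set."""
    return [t["name"] for t in tools
            if not set(t["data_flow"]) & set(forbidden)]
```

Filtering before the tool list ever reaches the model enforces governance structurally, rather than trusting the LLM to decline a tool it should not use.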
For agents
- Resources for content, tools for actions
- Stdio for local, HTTP+SSE for shared
- Connect to local inference servers first; fall back to in-browser or remote explicitly
- Use edge-containers only for lightweight tool execution; use proper VMs for heavy tasks
- Treat Bash as a privileged tool with allowlists and hard limits
- Test with `echo '{"jsonrpc":"2.0",...}' | ./server` before integrating
- Keep servers stateless when possible