
Forever Agents

Agents that last forever must serve humans first. The tools will change. The models will change. What won't change: people need software that respects their privacy, their time, their trust, and their eyes.

Definition

A Forever Agent is software that remains useful as tools, models, and protocols change — by serving humans through verifiable architectural properties rather than contractual promises. It runs inference on the user's hardware, stores data where the user controls it, and makes privacy observable rather than promised. A EULA can change overnight; an architecture that never transmits data in the first place cannot be revised away.

Requirements

Keywords follow RFC 2119: MUST, SHOULD, and MAY indicate requirement levels.

Legend

| Indicator | Level | Meaning |
| --- | --- | --- |
| 🟢 | MUST | Required. Non-negotiable. |
| 🟡 | SHOULD | Expected unless a justified exception exists. |
| 🔵 | MAY | Optional. Enhances capability when present. |

Core requirements

| | Requirement | Detail |
| --- | --- | --- |
| 🟢 | Local inference | Support local inference servers (Ollama, llama.cpp, LM Studio, or equivalent) |
| 🟢 | Local execution environment | Support a local execution environment if the agent uses tool execution |
| 🟢 | Data sovereignty | Store user data where the user controls it — no mandatory third-party storage |
| 🟢 | Verifiable privacy | Core privacy claims rely on architecture, not policy documents |
| 🟢 | Transparency | Disclose AI nature, data flows, and trust boundaries |
| 🟢 | Offline capability | Core operations functional without internet connection |
| 🟡 | Zero dependencies | Minimize runtime dependencies; justify every addition |
| 🟡 | Encrypted state | Encrypt sensitive state at rest with authenticated encryption |
| 🟡 | Static-file deployable | Deployable as static files (HTML/CSS/JS) without a mandatory backend |
| 🟡 | Portability | Transferable across mediums (URL, QR, USB, file) without breaking |
| 🔵 | In-browser inference | Gracefully downgrade to in-browser inference (WebLLM) for zero-server operation |
| 🔵 | In-browser embeddings | Gracefully downgrade to in-browser embeddings (transformers.js) for local RAG |
| 🔵 | In-browser execution | Gracefully downgrade to in-browser execution (WebAssembly) for tool sandboxing |
| 🔵 | Edge deployment | Support edge/IoT deployment for constrained environments |
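The SHOULD-level encrypted-state requirement calls only for authenticated encryption, not a specific cipher. A minimal sketch using Go's standard-library AES-256-GCM follows; the key handling and state payload here are illustrative assumptions, not part of the spec (in practice the key would be derived from a user-held secret):

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// sealState encrypts agent state with AES-256-GCM and prepends the nonce,
// so the sealed blob is self-contained and tamper-evident.
func sealState(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// openState reverses sealState; GCM authenticates before decrypting,
// so any tampering with the stored state fails loudly.
func openState(key, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	if len(sealed) < gcm.NonceSize() {
		return nil, fmt.Errorf("sealed state too short")
	}
	nonce, ciphertext := sealed[:gcm.NonceSize()], sealed[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ciphertext, nil)
}

func main() {
	key := make([]byte, 32) // illustrative: derive from a user secret in practice
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	sealed, _ := sealState(key, []byte(`{"history":[]}`))
	plain, err := openState(key, sealed)
	fmt.Println(string(plain), err)
}
```

Because the nonce travels with the ciphertext and GCM authenticates the whole blob, the state file can live anywhere the user puts it (disk, USB, sync folder) without a server-side key service.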

The hierarchy

Local server inference is the baseline. Running Ollama or llama.cpp on the user's machine is simple, powerful, and already privacy-preserving: data never leaves the device, a property observable in the network tab rather than promised in a document.

In-browser inference (WebLLM, transformers.js, WASM execution environments) is the advanced path for maximum sovereignty: zero infrastructure, zero server process, everything in the browser. But it is not required. Bundling WebLLM or a WASM Linux runtime is a choice, not a prerequisite. A Forever Agent that connects to a local Ollama instance and stores state in local files already satisfies every MUST-level requirement.

This hierarchy simplifies entry. The minimum viable Forever Agent needs a local inference server and local data storage. In-browser capabilities are options for agents that want to push sovereignty further — deployable from a USB stick, runnable from file://, shareable as a single HTML file.

Verifiable over contractual

For privacy, Forever Agents rely on architecture that can be verified rather than EULAs that must be trusted.

Context prompts as architecture

The contexts published here are context prompts encoding durable, pro-human principles as standalone markdown files: fetchable by agents, browsable by humans, servable via curl or MCP alike. Applied across a population of agents, they produce structurally lower information leakage to third parties, not because a policy promises restraint, but because the architecture makes exfiltration unnecessary.

Using these contexts

Each context is a standalone markdown file. Fetch any of them by its .md URL:

```
curl https://foreveragents.dev/context/darkmode.md
```

Or serve them all locally via MCP:

```
go install github.com/kristerhedfors/foreveragents/go@latest
foreveragents               # MCP stdio server
foreveragents --list        # print all contexts
foreveragents --get ref     # print one context
foreveragents --http :8080  # HTTP server
```
