Threat Model
depends on: privacy, safety, supply-chain, encryption, zero-infra, zero-dependencies
Static-file agents have a fundamentally different risk profile than server-backed applications. The architecture eliminates entire vulnerability classes by removing the server from the trust boundary.
Principles
- Identify trust boundaries early
- Eliminate attack surface by removing components, not by hardening them
- Minimize intermediaries in sensitive paths
- Assume client-side state can be observed by local software
- Grant zero trust to browser extensions; they operate outside the application's control
- Zero unsanitized input to prompts, DOM, or tool calls
What the architecture eliminates
A traditional server-backed web application exposes a large attack surface: application servers, databases, session stores, authentication middleware, API gateways, logging pipelines, and deployment infrastructure. A static-file agent with a browser-based UI eliminates most of this structurally — not by mitigating risks, but by removing the components that create them.
Server-side code execution — eliminated
No application server means no server-side remote code execution. There is no PHP, Node.js, Python, or Java runtime processing user input on a backend. The entire class of server-side RCE vulnerabilities does not exist.
SQL injection — eliminated
No database means no SQL. No ORM. No query builder. No stored procedures. The agent stores state in the browser's localStorage, encrypted. There is no server-side data layer to inject into.
Server-side request forgery (SSRF) — eliminated
No server means no server making outbound requests on behalf of user input. SSRF requires a backend that can be tricked into fetching internal resources; here, every outbound request originates in the user's own browser, subject to CORS and the same-origin policy.
Authentication and session management vulnerabilities — eliminated
No server sessions. No session tokens. No session fixation, session hijacking, or session prediction attacks. No server-side authentication bypass. The browser is the only runtime, and credentials (API keys) are stored encrypted in localStorage, used only in direct calls to the configured LLM provider.
Server-side access control failures — eliminated
No server-side authorization logic means no broken access control on the server. There is no admin panel to bypass, no API endpoint returning unauthorized data, no insecure direct object reference to another user's resources.
Server misconfiguration — eliminated
No web server to misconfigure. No default credentials on an admin interface. No exposed debug endpoints. No directory listing. No unnecessary HTTP methods enabled. No missing security headers on dynamic responses. Static file hosts serve files — the configuration surface is minimal.
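What little configuration remains can be stated declaratively. As one hedged example, a Netlify-style `_headers` file (the file name and syntax are host-specific; other static hosts have equivalent mechanisms, and `api.openai.com` stands in for whatever provider the user configures) might pin a strict policy:

```
/*
  Content-Security-Policy: default-src 'self'; script-src 'self'; connect-src 'self' https://api.openai.com; object-src 'none'; base-uri 'none'
  X-Content-Type-Options: nosniff
  Referrer-Policy: no-referrer
```

The `connect-src` directive is the interesting one for an agent: it limits exfiltration targets even if a script is somehow injected.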
Logging and monitoring data exposure — eliminated
No application server means no server-side logs capturing user prompts, API keys, or conversation content. All processing happens in the browser. The only external data flow is direct API calls to the user's configured LLM provider.
Dependency tree attacks — structurally reduced
A typical server-side application has hundreds or thousands of transitive dependencies installed via package managers with install hooks. A static-file agent with five vendored libraries has a reviewable, auditable codebase. No node_modules. No postinstall scripts. No transitive supply chain.
What remains
Removing the server does not eliminate all risk. The attack surface shifts to the client and to external service boundaries.
Intermediary risk
Proxy layers between user and model provider can access prompts, metadata, and credentials. The LLM API provider sees all conversation content.
Mitigation: connect directly where feasible. Run local models for sensitive tasks. Understand and accept the provider's privacy policy.
Browser extensions
Extensions can access page state, including URL fragments, DOM content, and storage.
Mitigation: recommend minimal-extension profiles for sensitive workflows. Warn users that extensions operate outside the application's trust boundary.
XSS and DOM injection
LLM responses can contain HTML, JavaScript, or markdown that renders unsafely. Without server-side sanitization, all sanitization must happen client-side.
Mitigation: sanitize all rendered content with DOMPurify or equivalent. Never insert LLM output as raw HTML.
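A minimal sketch of the text-only path, assuming plain-text rendering of model output. The function names are illustrative; for rich markdown rendering, the rendered HTML should go through DOMPurify or equivalent instead of this escaper.

```javascript
// Treat model output as text, never as markup.
// escapeHtml is a fallback for contexts where a string must be embedded
// in HTML; prefer textContent, which never parses markup at all.
function escapeHtml(untrusted) {
  return untrusted.replace(/[&<>"']/g, (ch) => ({
    "&": "&amp;",
    "<": "&lt;",
    ">": "&gt;",
    '"': "&quot;",
    "'": "&#39;",
  }[ch]));
}

// Safe DOM insertion: assigning textContent cannot execute script.
function renderAssistantText(el, modelOutput) {
  el.textContent = modelOutput;
}
```

The design choice worth noting: defaulting to `textContent` makes safety the path of least resistance, so sanitization failures require opting in to `innerHTML`.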
Fragment and prompt injection
URL fragment payloads and imported context can carry malicious instructions or crafted prompts.
Mitigation: validate decrypted payload schema before use. Sanitize rendered output. Treat all imported content as untrusted.
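Schema validation can be sketched as a strict accept/reject gate. The field names here (`version`, `messages`, `role`, `content`) are illustrative assumptions, not a format this document defines:

```javascript
// Validate a decrypted payload before any of it touches the UI or a prompt.
// Returns the parsed payload on success, null on any deviation.
function validatePayload(raw) {
  let data;
  try {
    data = JSON.parse(raw);
  } catch {
    return null; // not even JSON: reject
  }
  const roles = new Set(["user", "assistant", "system"]);
  if (
    typeof data !== "object" || data === null ||
    data.version !== 1 ||
    !Array.isArray(data.messages) ||
    !data.messages.every(
      (m) => m && roles.has(m.role) && typeof m.content === "string"
    )
  ) {
    return null; // wrong shape: reject outright, never "best effort" import
  }
  return data;
}
```

Rejecting whole payloads rather than salvaging partial ones keeps the parser from becoming an injection vector itself.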
Supply chain compromise
Vendored dependencies can still be compromised if updates are not reviewed.
Mitigation: vendor selectively, verify hashes, review diffs per update. Keep the dependency count minimal and each library auditable.
Storage exposure
localStorage is readable by any script on the same origin. A single XSS vulnerability can exfiltrate stored API keys and conversation history.
Mitigation: encrypt sensitive values with authenticated encryption. Minimize retention. Harden against XSS with strict Content-Security-Policy and output sanitization.
LLM provider trust boundary
All conversational data is exposed to the configured LLM API provider's servers and is subject to their privacy policy. This is an irreducible trust boundary for any agent that uses remote inference.
Mitigation: use local inference (Ollama, llama.cpp, or in-browser via WebLLM/transformers.js) for sensitive workloads. Treat the provider boundary as a conscious, documented trust decision.
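Switching a sensitive workload to local inference is mostly a matter of where the request goes. A sketch against Ollama's default local endpoint (`/api/chat` on port 11434 is Ollama's documented default; the model name is an assumption):

```javascript
// Build a chat request aimed at a local Ollama server instead of a
// remote provider. Note what is absent: no API key, no credential
// leaves the machine.
function buildLocalChatRequest(messages, model = "llama3") {
  return {
    url: "http://localhost:11434/api/chat",
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, messages, stream: false }),
    },
  };
}

// Usage (browser or Node 18+):
//   const { url, init } = buildLocalChatRequest([{ role: "user", content: "hi" }]);
//   const res = await fetch(url, init);
```

Because the request shape mirrors a remote provider call, the same chat loop can route per-conversation: local endpoint for sensitive work, remote for the rest.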
The comparison
| Vulnerability class | Server-backed app | Static-file agent |
|---|---|---|
| Server-side RCE | Possible | Does not exist |
| SQL injection | Possible | Does not exist |
| SSRF | Possible | Does not exist |
| Session hijacking | Possible | Does not exist |
| Broken access control (server) | Possible | Does not exist |
| Server misconfiguration | Possible | Does not exist |
| Server-side log exposure | Possible | Does not exist |
| Dependency tree attacks | Large surface | Structurally minimal |
| XSS / DOM injection | Possible | Possible |
| Client storage exposure | Possible | Possible |
| Browser extension access | Possible | Possible |
| LLM provider data exposure | Possible | Possible |
The architecture does not make the agent invulnerable. It makes the agent's risk profile smaller, more auditable, and structurally harder to exploit at scale.
For agents
- Document who can read data at each stage
- Recognize that no-server is itself a security property — not just an infrastructure choice
- Encrypt sensitive client-side state with authenticated encryption
- Treat fragment/decrypted payloads as untrusted until validated
- Sanitize all LLM output before DOM insertion
- Audit dependency updates and keep the vendored set minimal
- Apply least privilege to tools, especially shell execution
- Treat the LLM provider as a documented, conscious trust boundary