TECHNICAL WHITE PAPER
Privacy by Architecture
How PromptCloak achieves zero-knowledge AI chat through cryptographic identity, on-device intelligence stripping, transport-layer hardening, and a stateless backend with no database.
ABSTRACT
PromptCloak is a privacy-first AI chat application for iOS and iPadOS (with a macOS version in development) that provides access to large language models without requiring user accounts, collecting personal information, or persisting conversation data on any server. This paper describes the five-layer privacy architecture that makes this possible: (1) cryptographic device-bound identity, (2) on-device personally identifiable information scrubbing, (3) transport-layer security with certificate pinning, (4) a zero-knowledge stateless backend, and (5) optional direct-to-provider communication that bypasses all intermediaries. We examine the threat model each layer addresses, the implementation details, and the tradeoffs inherent in privacy-first design.
Introduction
Every major AI chat platform — ChatGPT, Gemini, Copilot — requires account creation. An email address. A phone number. OAuth tokens tied to a real identity. Conversations are stored on company servers, indexed, and in many cases used for model training. The user's relationship with AI is permanently bound to their identity.
PromptCloak takes a fundamentally different approach. The question is not “how do we protect user data?” but “how do we never have it in the first place?”
This is the distinction between privacy by policy (promises about how data is handled) and privacy by architecture (engineering systems where data exposure is structurally impossible). PromptCloak is built on the latter principle.
Layer 1: Cryptographic Identity
On first launch, PromptCloak generates an Ed25519 keypair using Apple's CryptoKit framework. Ed25519 is the same elliptic-curve signature algorithm used in SSH, the Signal Protocol, and Tor hidden services. The keypair is generated entirely on-device.
Key Storage
The private key is stored in the iOS Keychain with the access control flag kSecAttrAccessibleWhenUnlockedThisDeviceOnly. This means:
- The key is not included in device backups
- The key is not synced to iCloud Keychain
- The key is only accessible when the device is unlocked
- The key is tied to the specific device — it cannot be extracted
Session Identity
The user's visible identifier is a session ID derived from the public key (e.g., AZ-4E7C2D). This is the only identifier that appears anywhere in the app. There is no concept of a username, display name, avatar, or profile. The backend sees the public key but cannot determine who the user is — there is no registration database to look up.
Request Signing
Every API request is signed with the device's private key. The backend verifies the Ed25519 signature to ensure request integrity and prevent tampering. The auth headers sent with each request are: X-Public-Key, X-Timestamp, X-Signature, and X-Entitlement.
Layer 2: On-Device PII Scrubbing
PromptCloak's Strict Privacy Mode uses Apple's Natural Language framework (NLTagger) to perform real-time entity recognition on all outgoing text. The system identifies three entity types:
- Personal names → replaced with {{NAME_1}}, {{NAME_2}}, etc.
- Organizations → replaced with {{ORG_1}}
- Place names → replaced with {{PLACE_1}}
A bidirectional map maintains the association between placeholders and original values. As the AI response streams back, placeholders are restored to original text in real time. The user always sees real names — the AI model never does.
Mappings are session-scoped and reset when switching conversations. NLTagger runs entirely on-device using Apple NLP models — no network calls are made for entity recognition.
Layer 3: Transport Security
PromptCloak enforces a minimum of TLS 1.3 for all network connections. Beyond standard TLS, the app implements SPKI (Subject Public Key Info) hash pinning against Let's Encrypt intermediate certificates and the ISRG Root X1 certificate.
Certificate pinning prevents man-in-the-middle attacks by ensuring the app only trusts specific certificate authorities. Even if a device's certificate store is compromised (e.g., via a corporate proxy or malicious CA installation), PromptCloak will refuse to establish a connection.
Combined with Ed25519 request signing, the transport layer provides both confidentiality (TLS encryption) and integrity (signature verification). Because the X-Timestamp header is covered by the signature, stale requests can also be rejected, limiting the window for replay attacks.
Layer 4: Zero-Knowledge Backend
The PromptCloak backend is written in Go and deployed on Render. Its architecture is deliberately minimal:
- No database. The backend has no persistent storage for user data. Messages pass through for AI inference and are immediately discarded.
- Stateless request proxy. The backend's only role is to route requests to AI providers (Groq, xAI, Perplexity), verify Ed25519 signatures, and enforce rate limiting via Redis.
- Redis for rate limiting only. The Redis instance stores rate-limit counters and subscription cache — never message content. The eviction policy is allkeys-lru.
The security implication is significant: there is nothing to hack, nothing to subpoena, and nothing to leak. A complete breach of the PromptCloak backend would yield rate-limit counters and public keys — never conversation content, user identities, or chat history.
Layer 5: Direct-to-Provider (BYOK)
The Bring Your Own Key feature eliminates even the proxy layer. When a user provides their own OpenAI or Anthropic API key, requests travel directly from the device to the AI provider — the PromptCloak backend is completely bypassed.
API keys are stored in the device Keychain with the same protection level as the Ed25519 private key. A lightweight validation call confirms the key works before it's saved. The data path for BYOK requests is simply device → TLS → AI provider.
No intermediary touches the data. The user is the sole party between their device and the AI model.
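A BYOK request is just an ordinary HTTPS call built on-device. The sketch below uses OpenAI's public endpoint and Bearer scheme for concreteness; Anthropic's API uses a different header (x-api-key), so a real client would branch per provider, and this is illustrative rather than the app's exact networking code:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// byokRequest builds a request that goes straight from the device to the
// provider. No PromptCloak host appears anywhere in the URL or headers.
func byokRequest(apiKey string, payload []byte) (*http.Request, error) {
	req, err := http.NewRequest("POST",
		"https://api.openai.com/v1/chat/completions",
		bytes.NewReader(payload))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+apiKey)
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := byokRequest("sk-...", []byte(`{"model":"gpt-4o","messages":[]}`))
	if err != nil {
		panic(err)
	}
	// The provider's API host is the only host this request ever touches.
	fmt.Println(req.URL.Host)
}
```

Because the user's own key authenticates the call, there is no role left for a proxy to play, which is exactly why BYOK also resolves the end-to-end encryption limitation discussed later.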
Threat Model
| Threat | Mitigation |
|---|---|
| Backend breach | No database exists. No message content is stored. |
| Legal subpoena | Backend has no data to produce. Public keys cannot be linked to identities. |
| Man-in-the-middle attack | TLS 1.3 + SPKI certificate pinning prevents interception. |
| Request tampering | Ed25519 signature verification on every request. |
| PII exposure to AI providers | Strict Mode scrubs names, orgs, and places before transit. |
| EXIF metadata in images | All metadata stripped by re-rendering through graphics context. |
| Identity linkage | No accounts, no email, no phone. Ed25519 keypair only. |
| Device compromise | Private key protected by Keychain with hardware-backed security. |
Honest Limitations
Privacy-first design involves tradeoffs. We believe transparency about limitations is more trustworthy than overclaiming:
- Not end-to-end encrypted. TLS protects data in transit, but the backend decrypts requests to forward them to AI providers. BYOK mode eliminates this concern entirely.
- Not local AI. All models run on remote servers (Groq, xAI). The app does not run inference locally.
- Not fully anonymous from Apple. App Store purchases are tied to Apple ID. The app itself collects no identity, but Apple knows you downloaded it.
- RevenueCat analytics. Anonymized subscription analytics are collected through RevenueCat using the cryptographic public key as the anonymous user ID.
Conclusion
PromptCloak demonstrates that useful AI chat does not require identity surrender. By layering cryptographic identity, on-device NLP, transport hardening, a stateless backend, and direct-to-provider communication, we achieve a system where privacy is a structural property — not a policy that can be changed with a terms update.
We can't leak your data. We never had it.