d33pmemory is a memory-as-a-service API for AI agents. It automatically extracts, stores, and recalls structured information about your users — so your agent always has context, without you managing any of it.
When your agent ingests a conversation, an LLM reads the exchange and extracts meaningful pieces of information — facts the user stated, preferences they showed, events they mentioned, relationships they referenced. Each memory is stored with a type, category, confidence score, and a vector embedding for semantic search.
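To make the record shape concrete, here is a minimal sketch of a stored memory. The fields mirror the description above (type, category, confidence, embedding), but the exact schema and field names are assumptions, not the real API response shape.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str                 # the extracted piece of information
    type: str                 # e.g. "fact", "preference", "event"
    category: str             # e.g. "diet", "work" (hypothetical category names)
    confidence: float         # 0-1 certainty score assigned at extraction
    embedding: list[float] = field(default_factory=list)  # vector for semantic search

# What an extractor might produce from "I don't eat meat":
m = Memory(text="User is vegetarian", type="fact",
           category="diet", confidence=0.95)
```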
The confidence score is a 0–1 number representing how certain we are the memory is true. Directly stated facts score high (0.9+); things inferred from context score lower, depending on the evidence. Guesses based purely on the absence of information are never stored.
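As an illustration of how the score might be read: the 0.9+ band for directly stated facts comes from the answer above, but the lower cutoff is an assumption chosen for this sketch, not a documented threshold.

```python
def confidence_band(score: float) -> str:
    """Map a 0-1 confidence score to a rough interpretation (illustrative bands)."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("confidence is a 0-1 number")
    if score >= 0.9:
        return "directly stated"      # per the FAQ: directly stated facts score 0.9+
    if score >= 0.5:
        return "inferred from context"  # assumed cutoff, for illustration only
    return "weak inference"

confidence_band(0.95)  # "directly stated"
```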
Ingest is when you send a conversation to d33pmemory to extract and store memories from it. Recall is when you query d33pmemory to retrieve the most relevant memories for a given situation — like 'what do I know about this user right now?'
Every memory is stored alongside a vector embedding — a numerical representation of its meaning. When you recall, your query is also embedded and compared against all stored memories. The closest matches (by semantic similarity) are returned, ranked by relevance and confidence.
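The recall step above can be sketched with plain cosine similarity. Real embeddings come from an embedding model and have hundreds of dimensions; the 3-dimensional vectors here are toy stand-ins, and ranking purely by similarity is a simplification of the relevance-plus-confidence ranking described above.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy stored memories with made-up embeddings
memories = {
    "User is vegetarian": [0.9, 0.1, 0.0],
    "User works at Acme": [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # pretend embedding of "what can this user eat?"

ranked = sorted(memories, key=lambda m: cosine(query, memories[m]), reverse=True)
# The dietary memory ranks first: its vector points in nearly the same direction.
```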
Facts (concrete info like dietary restrictions or job title), preferences (likes/dislikes), events (appointments, things that happened), relationships (who knows who), and patterns (repeated behaviors). Each one is tagged, categorised, and given a confidence score.
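The five types listed above could be modelled as an enum. The type names are taken from the answer; the enum itself is just an illustration, not the SDK's actual type definition.

```python
from enum import Enum

class MemoryType(Enum):
    FACT = "fact"                  # concrete info like dietary restrictions or job title
    PREFERENCE = "preference"      # likes/dislikes
    EVENT = "event"                # appointments, things that happened
    RELATIONSHIP = "relationship"  # who knows who
    PATTERN = "pattern"            # repeated behaviors

MemoryType("preference")  # MemoryType.PREFERENCE
```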
Yes. The dashboard shows all stored memories with their type, source, confidence, and tags. You can delete individual memories or clear them all.
No. Memories are scoped per user. Each agent's memories are isolated. Nothing leaks between accounts.
Yes — that's what Teams are for. Agents in the same team can access shared memories (like company facts or user preferences that apply across agents). Private memories stay agent-specific.
Create an API key in the dashboard, then call POST /v1/ingest with the conversation text after each interaction, and GET /v1/recall?query=... before generating a response. That's the full loop.
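The loop above can be sketched as two request builders. The endpoint paths (/v1/ingest, /v1/recall?query=...) come from the answer; the base URL, bearer-token auth scheme, and payload field names (user_id, conversation) are assumptions for illustration, not the documented wire format.

```python
from urllib.parse import urlencode

BASE = "https://api.d33pmemory.com/v1"  # assumed base URL

def build_ingest_request(api_key: str, user_id: str, conversation: str) -> dict:
    """Request you'd POST to /v1/ingest after each interaction."""
    return {
        "method": "POST",
        "url": f"{BASE}/ingest",
        "headers": {"Authorization": f"Bearer {api_key}",
                    "Content-Type": "application/json"},
        "json": {"user_id": user_id, "conversation": conversation},
    }

def build_recall_request(api_key: str, query: str) -> dict:
    """Request you'd GET from /v1/recall?query=... before generating a response."""
    return {
        "method": "GET",
        "url": f"{BASE}/recall?{urlencode({'query': query})}",
        "headers": {"Authorization": f"Bearer {api_key}"},
    }

req = build_recall_request("sk-...", "dietary needs")
# req["url"] ends with /v1/recall?query=dietary+needs
```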
Yes. d33pmemory is model-agnostic — it sits between your agent and your LLM. You call our API, we handle extraction and storage. Your agent can use OpenAI, Anthropic, Mistral, or anything else.
Yes. Install it with: openclaw plugins install d33pmemory. Set agentId to an empty string in the config and it will auto-detect your agent name. No manual wiring needed.
Python and JS/TS SDKs are in progress. For now, the REST API is straightforward enough to call directly — full docs at api.d33pmemory.com/docs.
Yes — free forever, no credit card required. You get 1 agent, 100 memories, and 200 ingests/month. Upgrade when you need more.
API calls return a PLAN_LIMIT_REACHED error. Your existing memories are safe — you just can't add more until you upgrade or the billing period resets.
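A client might detect this case as sketched below. The PLAN_LIMIT_REACHED code comes from the answer above; the response shape ({"error": {"code": ...}}) is an assumed envelope, so check the API docs for the real one.

```python
def hit_plan_limit(response: dict) -> bool:
    """True if the API refused the write because the plan limit was reached."""
    error = response.get("error") or {}
    return error.get("code") == "PLAN_LIMIT_REACHED"

hit_plan_limit({"error": {"code": "PLAN_LIMIT_REACHED"}})  # True
hit_plan_limit({"memories": []})                            # False
```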
Yes. Upgrade or downgrade anytime from the dashboard billing page. Changes take effect immediately.
Yes. Cancel from the dashboard or the Stripe billing portal. Your data is retained for 30 days after cancellation.
All API keys are hashed and encrypted at rest. Memories are stored encrypted and scoped per user. We never share data between accounts.
On Supabase (Postgres + pgvector), hosted in the EU. We don't train on your data.
Still have questions?