A local memory layer that captures context once and recalls it inside every AI tool you use.
Yesterday's chat. Last week's decision. Last quarter's architecture call. The next AI session knows nothing about any of it.
ChatGPT remembers inside ChatGPT. Cursor remembers inside Cursor. The walls are deliberate, and they cost you the moment you switch.
"As we discussed" comes back without a citation. You're trusting that the model isn't inventing the memory you're about to act on.
Whatever the platform decides about retention, training, or pricing, your accumulated context is held hostage. Switch vendors, climb the hill again.
A complete memory layer that runs on your machine. No vendor memory. No prompt-stuffing. No re-explaining.
New chat. New tool. New laptop. Your context is already there, with the source attached, before the model speaks.
Save mid-stream from any AI tool, any meeting, any note. memwork keeps the original text and the source — you never have to copy-paste between tabs again.
Memories live in a SQLite file you own. No cloud sync. No vendor memory store. The only network call is the embedding API, and you choose the provider.
Recall happens in-process, in milliseconds. Every result returns with source name, date, and ID — so the next AI session starts with what you already know.
memwork is a closed loop — capture, index, recall — engineered together for one outcome: the next AI session already knows what you decided.
CAPTURE THROUGHPUT
BENCH · MW-001
Snippets ingested per second · M-series MacBook · memwork vs. manual paste
Local indexing pipeline runs without leaving your process.
[ 01 ]
Understands your sources. Builds a connected graph of where each memory came from, how recent it is, and which others it relates to, so recall surfaces the cluster, not just the single hit.
[ 02 ]
Source name, date, and stable memory ID surface on every result. Verify before you trust. Click through to the original any time.
[ 03 ]
Hybrid scoring: vector similarity, lexical match, recency decay, source weight. Tunable per workflow without rebuilding the index. A scoring sketch follows this list.
[ 04 ]
MCP for editors, CLI for scripts, manual paste for the moments in between. The schema enforces a sourceId on every write; there is no path that forgets. A schema sketch follows this list.
[ 05 ]
Single-user, single-device by design in v1. No cross-user query path. Sharing is a separate product with a separate trust model — and we'll ship one good thing first.
[ 06 ]
SQLite + sqlite-vec, in-process. Vector search and lexical search compose in the same query plan. Cold start is one file open. Backups are cp.
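To make the in-process claim concrete, here is a rough sketch of a hybrid recall query, not memwork's actual schema: a vec0 KNN lookup and an FTS5 lexical match composed in one SQLite statement. It assumes sqlite-vec's vec0 MATCH syntax with better-sqlite3; the table, column, and parameter names are made up for illustration.

```ts
import Database from "better-sqlite3";
import * as sqliteVec from "sqlite-vec";

// Hypothetical layout: a plain `memories` table plus a vec0 index (`memory_vectors`)
// and an FTS5 index (`memory_fts`) that share its rowids.
const db = new Database("memwork.db");
sqliteVec.load(db);

const recall = db.prepare(`
  WITH vec AS (
    SELECT rowid, distance
    FROM memory_vectors
    WHERE embedding MATCH :queryVec AND k = :k
    ORDER BY distance
  ),
  lex AS (
    SELECT rowid, bm25(memory_fts) AS lexScore
    FROM memory_fts
    WHERE memory_fts MATCH :queryText
    LIMIT :k
  )
  SELECT m.id, m.source_name, m.created_at, m.text,
         vec.distance AS vecDistance, lex.lexScore
  FROM memories m
  LEFT JOIN vec ON vec.rowid = m.rowid
  LEFT JOIN lex ON lex.rowid = m.rowid
  WHERE vec.rowid IS NOT NULL OR lex.rowid IS NOT NULL
`);

// queryVec: a Float32Array embedding passed as a BLOB; queryText: an FTS5 query string.
export function recallCandidates(queryVec: Float32Array, queryText: string, k = 10) {
  return recall.all({
    queryVec: Buffer.from(queryVec.buffer, queryVec.byteOffset, queryVec.byteLength),
    queryText,
    k,
  });
}
```

The point of the sketch is the shape: both lookups resolve inside the same file and the same process, so there is no second service to stand up or keep warm.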
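The candidates that come back can then be blended the way the hybrid scoring item describes. The weights, field names, and decay half-life below are illustrative assumptions, not memwork's shipped defaults.

```ts
// Illustrative hybrid scorer: names and weights are assumptions, not memwork's defaults.
interface Candidate {
  vectorSimilarity: number; // 0..1, from the embedding index
  lexicalScore: number;     // 0..1, normalized keyword match
  ageDays: number;          // days since the memory was captured
  sourceWeight: number;     // e.g. 1.0 for a design doc, 0.6 for casual chat
}

interface Weights {
  vector: number;
  lexical: number;
  recency: number;
  source: number;
  halfLifeDays: number;     // recency decay half-life
}

// Weights are plain data, so retuning them never touches the index.
const defaultWeights: Weights = {
  vector: 0.55, lexical: 0.25, recency: 0.15, source: 0.05, halfLifeDays: 30,
};

export function hybridScore(c: Candidate, w: Weights = defaultWeights): number {
  const recency = Math.pow(0.5, c.ageDays / w.halfLifeDays); // exponential decay
  return (
    w.vector * c.vectorSimilarity +
    w.lexical * c.lexicalScore +
    w.recency * recency +
    w.source * c.sourceWeight
  );
}
```

Because the weights are plain data, a coding workflow can lean harder on recency while a research workflow leans on source weight, without reindexing anything.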
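On the write side, sourced-by-default is the kind of rule a Zod schema can enforce at the boundary. The field names here are assumptions, not memwork's published schema; what matters is that sourceId has no default and no optional marker, so an unsourced write fails validation instead of being stored.

```ts
import { z } from "zod";

// Illustrative write schema; field names are assumptions, not memwork's published schema.
const MemoryWrite = z.object({
  text: z.string().min(1),
  sourceId: z.string().min(1),   // required: no code path writes without a source
  sourceName: z.string().min(1),
  createdAt: z.coerce.date().default(() => new Date()),
  tags: z.array(z.string()).default([]),
});

export type MemoryWrite = z.infer<typeof MemoryWrite>;

// The same schema can guard the MCP tool, the CLI, and manual paste, so every
// entry point validates writes identically before anything touches the database.
export function validateWrite(input: unknown): MemoryWrite {
  return MemoryWrite.parse(input);
}
```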
LATENCY · <60ms · p50 hybrid recall
PRIVACY · Local · SQLite on your disk
COST / RECALL · $0 · your machine, your tokens
VENDOR LOCK · 0 · MIT, MCP, swappable
LOCAL STORAGE · SQLITE + SQLITE-VEC · ONE FILE
INFERENCE · OPENAI EMBEDDINGS · SWAPPABLE
CONTEXT INDEX · VECTOR + LEXICAL HYBRID
MCP SERVER · STDIO TRANSPORT · ONE BINARY
VALIDATED · ZOD AT EVERY BOUNDARY
OPINIONATED · SOURCED BY DEFAULT
We're inviting engineers to run it on real workflows and tell us what's broken before v1 ships.
one email when v1 ships · no drip · no scarcity timer
Your AI is about to remember.