arche is persistent memory for
every ai you use
store what ai learns about you—preferences, context, history—and deliver it to any model, any tool, any agent. one identity across every conversation.
works with any mcp-compatible client


your memory, visualized
every fact, preference, and piece of context about you—stored as embeddings, queried by meaning.
prefers dark mode in all apps
works at a series b fintech startup
uses vim keybindings everywhere
vegan—no meat, dairy, or eggs
how it works
memory that moves with you
ingest
import from chatgpt exports, linkedin profiles, and resumes, or add memories manually. arche extracts and classifies the signal.
embed
each memory is vectorized and stored. episodic, semantic, and procedural knowledge—all searchable by meaning.
inject
via mcp, your ai queries your memory before responding. relevant context is injected automatically. you stay in control.
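the three steps above can be sketched end to end. a minimal illustration, not arche's implementation: the `embed()` below is a toy bag-of-words stand-in for a real embedding model, and `inject()` plays the role of the mcp-side retrieval that hands relevant memories to the model:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # toy stand-in for a real embedding model: a sparse bag of lowercase
    # words. arche stores dense vector embeddings; this just makes
    # "query by meaning-ish overlap" runnable in a few lines.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # cosine similarity between two sparse bag-of-words vectors
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# ingest + embed: store each memory alongside its vector
memories = [
    "prefers dark mode in all apps",
    "works at a series b fintech startup",
    "uses vim keybindings everywhere",
    "vegan - no meat, dairy, or eggs",
]
store = [(m, embed(m)) for m in memories]

def inject(query: str, k: int = 2, threshold: float = 0.1) -> list[str]:
    # retrieve the k most similar memories above a similarity threshold,
    # ready to be injected into the model's context before it responds
    q = embed(query)
    ranked = sorted(store, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [m for m, vec in ranked[:k] if cosine(q, vec) > threshold]

print(inject("dark mode in apps"))  # → ['prefers dark mode in all apps']
```

the threshold is what keeps irrelevant memories out of the context window: anything below it is dropped even if it ranks in the top k.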
built for power users
designed for developers, researchers, and anyone who uses multiple ai tools daily.
semantic search
query by meaning, not keywords. pgvector-powered similarity search with configurable thresholds.
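a thresholded pgvector similarity query looks roughly like this. `<=>` is pgvector's cosine-distance operator; the table and column names (`memories`, `embedding`, `content`) are illustrative, not arche's actual schema:

```sql
-- $1 is the query embedding; 1 - cosine distance = similarity.
select content,
       1 - (embedding <=> $1) as similarity
from memories
where 1 - (embedding <=> $1) > 0.75   -- configurable threshold
order by embedding <=> $1
limit 5;
```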
mcp native
built on the model context protocol. plug into cursor, claude desktop, or any mcp-compatible agent.
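for clients that read a json mcp config (claude desktop, cursor), registration typically looks like the snippet below. the server name and launch command are hypothetical placeholders—check arche's docs for the real values:

```json
{
  "mcpServers": {
    "arche": {
      "command": "npx",
      "args": ["-y", "arche-mcp"]
    }
  }
}
```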
graph visualization
explore your memory space visually. similar memories cluster together. search highlights relationships.
import anything
chatgpt exports, linkedin profiles, resumes, twitter archives. extract years of context in minutes.
you own your data
your memories live in your supabase instance. export anytime. no vendor lock-in. full portability.
api & integrations
rest api for programmatic access. gmail sync for automatic context updates. more integrations coming.
stop re-explaining yourself
every ai conversation starts from zero. you repeat your constraints, preferences, and context hundreds of times a year. arche remembers so your ai doesn't forget.
