Memory System
Constella ingests content from your integrations, extracts the useful parts from files and conversations, and builds a unified memory layer that surfaces the right context on demand.
01
Pulls in data from connected integrations and synced sources continuously.
02
Converts messy inputs into clean, usable text. Extracts and chunks documents.
03
Assembles the most relevant context from multiple layers into one answer.
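The three steps above can be sketched end to end. This is an illustrative toy, not Constella's actual API: every function name, the fixed-size chunking, and the word-overlap scoring are assumptions standing in for the real pipeline.

```python
# Toy sketch of the three-step flow: ingest -> understand -> recall.
# All names and logic are illustrative assumptions, not Constella's API.

def ingest(sources):
    """Step 01: pull raw items from connected sources."""
    return [item for source in sources for item in source]

def understand(raw_items):
    """Step 02: convert messy inputs into clean, chunked passages."""
    passages = []
    for doc_id, text in enumerate(raw_items):
        clean = " ".join(text.split())          # normalize whitespace
        for i in range(0, len(clean), 40):      # naive fixed-size chunking
            passages.append({"doc": doc_id, "text": clean[i:i + 40]})
    return passages

def recall(passages, query):
    """Step 03: assemble the most relevant passages for a query."""
    terms = set(query.lower().split())
    scored = sorted(
        passages,
        key=lambda p: len(terms & set(p["text"].lower().split())),
        reverse=True,
    )
    return scored[:3]

notes = ingest([
    ["Met   with the design team about onboarding."],
    ["Q3 roadmap: ship the onboarding flow."],
])
top = recall(understand(notes), "onboarding roadmap")
```

Each step's real counterpart is richer (continuous sync, parent-preserving chunking, hybrid scoring), but the shape of the flow is the same.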
Ingestion
Slack, Notion, Google Drive, Obsidian, Gmail, and more — your connected integrations continuously sync into one memory layer. No manual uploads. No copy-paste. The system pulls in what matters from the tools you already use.
Understanding
Documents aren't just stored — their usable text is extracted and made recallable. For long documents, the system preserves the full source while breaking content into smaller searchable passages. Raw becomes retrievable.
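The parent-preserving split described above can be pictured as follows. This is a minimal sketch under assumed parameters (word-based windows with overlap); Constella's real chunker is not shown here, and every name is hypothetical.

```python
# Sketch of parent-preserving chunking: long documents are split into
# searchable passages, each keeping a pointer back to the full source.
# Parameters and names are illustrative assumptions.

def chunk_document(doc_id, text, max_words=50, overlap=10):
    words = text.split()
    passages = []
    step = max_words - overlap                   # windows overlap slightly
    for start in range(0, len(words), step):
        passages.append({
            "parent": doc_id,                    # link back to the full source
            "offset": start,                     # position within the parent
            "text": " ".join(words[start:start + max_words]),
        })
        if start + max_words >= len(words):      # last window reached the end
            break
    return passages

passages = chunk_document("notes/q3.md", "word " * 120)
```

Because every passage carries its parent and offset, recall can return a small passage while still pointing back at the original document.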
Structure
Raw inputs become organized passages, metadata, categories, and connections. The system builds multiple views of the same information — source content, semantic recall, and relationship context — so retrieval is coherent, not fragmented.
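One way to picture "multiple views of the same information" is a single record coordinated across a source store, a semantic index, and relationship edges. The schema below is an assumption for illustration only, not Constella's data model; the bag-of-words set merely stands in for a real embedding.

```python
# Sketch of one memory item held in three coordinated views (names and
# shapes are assumed for illustration): source content, a semantic index
# entry, and relationship edges.

from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    item_id: str
    source_text: str                                  # view 1: original content
    categories: list = field(default_factory=list)    # metadata
    edges: list = field(default_factory=list)         # view 3: relationships

# View 2: item_id -> bag-of-words stand-in for an embedding.
semantic_index = {}

def store(item: MemoryItem):
    semantic_index[item.item_id] = set(item.source_text.lower().split())

item = MemoryItem(
    item_id="slack-123",
    source_text="Kickoff call with Dana about the onboarding project",
    categories=["meeting"],
    edges=[("person", "Dana"), ("project", "onboarding")],
)
store(item)
```

Keeping the views keyed to the same `item_id` is what lets retrieval stay coherent: a semantic hit, a category filter, and a relationship hop all resolve to the same source record.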
Recall
Intent-aware query analysis determines what entities, categories, and search paths matter — before anything is retrieved. Responses stay tied to original records instead of free-floating summaries. Source-grounded recall, not hallucination.
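Analyzing the query before retrieving could look like the sketch below: decide which entities, categories, and search paths matter, then hand the plan to retrieval. The matching rules here are toy assumptions, not Constella's actual logic.

```python
# Hypothetical sketch of intent analysis before retrieval. The rule set
# (simple word matching) is an assumption standing in for a real analyzer.

def analyze_query(query, known_entities, known_categories):
    words = {w.strip("?.,!").lower() for w in query.split()}
    entities = [e for e in known_entities if e.lower() in words]
    categories = [c for c in known_categories if c.lower() in words]
    paths = ["semantic"]                       # always do semantic recall
    if entities:
        paths.append("graph")                  # relationship traversal
    if categories:
        paths.append("structured")             # category-filtered lookup
    return {"entities": entities, "categories": categories, "paths": paths}

plan = analyze_query(
    "What did Dana say in our last meeting?",
    known_entities=["Dana", "Lee"],
    known_categories=["meeting", "email"],
)
```

The point of planning first is that the expensive retrieval steps only run on the paths the query actually needs.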
Under the hood
Retrieves both relevant content and the relationships around it — people, projects, topics, events.
Combines semantic search, keyword recall, and structured memory in a single query.
Analyzes intent to determine entities, categories, and search paths before retrieving.
Embeds content through a dedicated GPU-backed pipeline tuned for recall quality.
Large files become smaller recallable sections while preserving the parent document.
Tracks what context was retrieved and used. Source-grounded, not free-floating.
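The bullets above describe fusing semantic, keyword, and structured signals in one ranked query. A toy version of that fusion is sketched below; the weights, the Jaccard stand-in for embedding similarity, and the item shape are all assumptions, not Constella's internals.

```python
# Toy score fusion across three signals: semantic similarity, keyword
# recall, and structured (entity) memory. Weights and scoring are
# illustrative assumptions only.

def hybrid_rank(query, items, weights=(0.5, 0.3, 0.2)):
    q_terms = set(query.lower().split())
    w_sem, w_kw, w_struct = weights
    ranked = []
    for item in items:
        text_terms = set(item["text"].lower().split())
        # Stand-in for embedding similarity: Jaccard overlap of word sets.
        sem = len(q_terms & text_terms) / len(q_terms | text_terms)
        # Keyword recall: fraction of query terms matched exactly.
        kw = len(q_terms & text_terms) / len(q_terms)
        # Structured memory: does a tagged entity appear in the query?
        struct = 1.0 if any(e.lower() in q_terms for e in item["entities"]) else 0.0
        ranked.append((w_sem * sem + w_kw * kw + w_struct * struct, item["text"]))
    return [text for score, text in sorted(ranked, reverse=True)]

results = hybrid_rank(
    "dana onboarding timeline",
    [
        {"text": "onboarding timeline draft", "entities": []},
        {"text": "lunch schedule", "entities": []},
        {"text": "notes from dana", "entities": ["Dana"]},
    ],
)
```

Running the three signals in one scoring pass, rather than three separate queries, is what keeps the final ranking coherent instead of fragmented.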
Relationship-aware, source-grounded recall built from integrations, files, and connected memory.