Core concepts
Essarion is a single platform behind three product surfaces, and every surface reuses the same small vocabulary. This page is that vocabulary. Read it once and the rest of the docs will read like a single document, because the same project, the same run, and the same source mean the same thing whether you're in the research UI, the agent workspace, or hitting the API.
§ 01 Containers and execution
Most of Essarion's nouns describe one of two things: a place where work lives, or a thing that runs. The four below are the load-bearing terms for both.
Project
A project is a tenanted container for related work. It is the unit of grouping that the platform organizes everything else around: runs, files, sources, documents, and history all attach to a project, and a project belongs to exactly one user (or, in shared workflows, to that user's workspace).
Think of a project the way you'd think of a research dossier or a folder on a shared drive: the brief, the references gathered along the way, the drafts produced, the chat history with the agent that built them, the audit trail of every action — all in one bounded place. When you start a new piece of work, you start a new project. When you come back tomorrow, the project is exactly where you left it.
Run
A run is a single execution of an agent workflow. Every research query is a run. Every agent task is a run. A run has a status (pending, in-progress, completed, failed), an ordered set of phases, a persistent timeline of everything that happened during it, and a stable identifier — the request_id — that you can use to retrieve it later.
Runs are first-class. Once started, they persist whether or not you stay connected. You can subscribe to a run while it streams, fetch its full timeline after the fact, share a link to it, or replay it in the UI. The same run object backs the live view in the research app and the JSON returned from GET /api/v1/runs/{request_id}.
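Because a run persists and keeps a stable request_id, retrieving it after the fact is just a poll against GET /api/v1/runs/{request_id} until the status turns terminal. A minimal sketch, assuming the response is JSON with a status field matching the statuses above, and that you supply your own HTTP fetch function:

```python
import time

# "completed" and "failed" are the terminal statuses listed above.
TERMINAL_STATUSES = {"completed", "failed"}

def wait_for_run(fetch, request_id, poll_seconds=2.0, timeout=300.0):
    """Poll a run until it reaches a terminal status.

    `fetch(request_id)` is any callable that GETs
    /api/v1/runs/{request_id} and returns the run as a dict,
    e.g. a thin wrapper around requests or urllib.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        run = fetch(request_id)
        if run.get("status") in TERMINAL_STATUSES:
            return run
        time.sleep(poll_seconds)
    raise TimeoutError(f"run {request_id} not terminal after {timeout}s")
```

Injecting `fetch` keeps the polling logic independent of whichever HTTP client you use; the exact response shape is an assumption, not the documented schema.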
Phase & step
A run is composed of phases — the discrete stages it moves through on its way from question to answer. In the research engine, a deep query passes through up to twelve of them: analyze, plan, search, scrape, screen, analyze, cite, synthesize, and a few finishing passes around them. Each phase is a step on the timeline; each step has its own status, inputs, outputs, and timing.
Phases are not implementation details. They're how the platform makes the work legible. When you watch a run unfold, you're watching its phases tick through. When you debug a slow run, you're looking at which phase took the time. When you cite Essarion's reasoning to a stakeholder, you're pointing at a step.
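One practical use of per-step timing is finding where a slow run spent its time. A sketch assuming each step in the timeline carries `phase` and `duration_ms` fields (the field names are illustrative assumptions, not the documented schema):

```python
def phase_durations(steps):
    """Total time per phase across a run's timeline."""
    totals = {}
    for step in steps:
        phase = step["phase"]
        totals[phase] = totals.get(phase, 0) + step.get("duration_ms", 0)
    return totals

def slowest_phase(steps):
    """Name of the phase that consumed the most time."""
    totals = phase_durations(steps)
    return max(totals, key=totals.get)
```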
Reasoning chunk
A reasoning chunk is a streamed fragment of the model's reasoning, attached to a step. As an agent thinks through a phase, its working notes are surfaced as chunks — short blocks of text that arrive in order over the stream, then are persisted against the step they belong to.
Chunks are how the platform shows its work in real time without waiting for a phase to finish. They're also how the timeline stays interesting after the fact: a completed run isn't just an answer, it's an annotated trace of how the answer was reached.
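Since chunks arrive in order and are persisted against the step they belong to, reconstructing a readable per-step transcript is a simple grouping pass. A sketch assuming each chunk is a dict with `step_id` and `text` fields (assumed names):

```python
from collections import defaultdict

def transcript_by_step(chunks):
    """Join ordered reasoning chunks into one text block per step."""
    grouped = defaultdict(list)
    for chunk in chunks:
        grouped[chunk["step_id"]].append(chunk["text"])
    return {step_id: "".join(parts) for step_id, parts in grouped.items()}
```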
§ 02 Research objects
The next four terms describe what comes out of a research run — the discovered evidence, the formatted bibliography, and the long-form artifact you actually keep.
Source
A source is a discovered URL with metadata. The research engine builds one for every page it reaches: title, domain, snippet, the full scraped text it extracted, and a set of scores. The scores capture credibility (is the publisher trustworthy), relevance (does it actually answer the question), recency (how fresh is it), and an overall composite the screener uses to decide whether the source survives into the final analysis.
Sources are searchable, reusable, and — if you keep them — yours. A source discovered in one run can be referenced in another, pulled into a project's library, or exported as a citation.
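The screener's decision can be mirrored client-side when you want to re-filter a run's sources yourself. A sketch assuming each source exposes its composite under a `scores.composite` key and using an illustrative 0.6 cutoff (both are assumptions):

```python
def surviving_sources(sources, threshold=0.6):
    """Keep sources whose composite score clears the bar, best first."""
    kept = [s for s in sources if s["scores"]["composite"] >= threshold]
    return sorted(kept, key=lambda s: s["scores"]["composite"], reverse=True)
```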
Citation
A citation is a formal bibliographic record built from a source. Where a source is the raw evidence, a citation is the publication-ready reference. The platform produces them in seven formats out of the box: MLA, APA, Chicago Notes, Chicago Author-Date, IEEE, Harvard, and BibTeX.
Citations are emitted alongside synthesized answers, available individually through the API, and exportable in bulk for projects that need a real bibliography at the end. They are the difference between a chatbot's gestured-at "according to some sources" and an answer you can hand to a reviewer.
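BibTeX is one of the seven formats, and its shape is easy to show in miniature. A hypothetical sketch that renders a source as a minimal `@misc` entry; the mapping from the platform's source metadata is an assumption, not the platform's own formatter:

```python
def to_bibtex(key, source):
    """Render a source dict as a minimal BibTeX @misc entry.

    `source` is assumed to carry title, url, and domain fields.
    """
    return (
        f"@misc{{{key},\n"
        f"  title = {{{source['title']}}},\n"
        f"  howpublished = {{\\url{{{source['url']}}}}},\n"
        f"  note = {{Accessed via {source['domain']}}}\n"
        f"}}"
    )
```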
Document
A document is a markdown artifact owned by a project. Notes, drafts, briefs, memos, summaries — anything the user or an agent writes down inside a project lives as a document. Documents are versioned, searchable within their project, and can be exported.
Documents are how a project accumulates value over time. A run produces an answer; a document is where that answer is shaped, edited, combined with other answers, and eventually shipped.
§ 03 Agent runtime
Inside the agent workspace, four more terms describe the runtime: where the agent acts, what capabilities it has, what rules constrain it, and which packaged workflow it's running.
Surface
A surface is an interactive panel an agent acts through. Today the platform exposes four: a real terminal (a PTY the agent can run shell commands in), a real browser (a cloud Chromium the agent drives like a human would), an IDE (a code-aware editor the agent reads and writes through), and a preview pane (a live view of the running app or document).
Surfaces are how an agent moves from "answering" to "doing." A research run never needs more than synthesis; an agent task often needs to read a file, run a script, click a button, and re-render a preview before it's done.
Skill
A skill is a domain-specialized capability or guideline an agent attaches at runtime. Skills bundle the prompts, examples, tools, and conventions for a particular kind of work — accountancy, contract review, executive briefing, code review — so that the underlying model behaves like a specialist without being retrained.
Skills are composable. An agent can pick up several skills for a single task and drop them when the task is done.
Policy
A policy is a guardrail — a do/don't rule governing a class of agent behavior. Policies sit above skills: a skill makes the agent good at a thing, a policy decides whether the agent is allowed to do that thing at all, and under what conditions.
Policies are how platform teams keep the workspace governable as the surface area grows.
Job
A job is a pre-built agent template for a recurring business workflow. Where a skill is a capability the agent picks up, a job is the whole package — a brief, a sequence of steps, a set of skills, a set of policies, and an expected output — wrapped into something a non-technical user can launch with a click.
Jobs are how the platform turns "an AI workspace" into "a workflow we already run on Tuesdays."
§ 04 Identity and sessions
The last group of terms is about who is acting and how the platform knows.
API key
An API key is a programmatic credential tied to a user. Every key is prefixed esk_, generated server-side, and shown in plaintext exactly once at creation; the platform stores only a SHA-256 hash. Keys can be revoked or marked inactive, and every use updates a last_used_at timestamp so dormant keys are easy to spot.
API keys are the canonical way to authenticate against the platform from your own code.
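The store-only-a-hash design is easy to sketch. Assuming nothing beyond what's stated above (the esk_ prefix and a stored SHA-256 digest), a server-side check looks roughly like:

```python
import hashlib
import hmac

def hash_key(plaintext: str) -> str:
    """SHA-256 digest of a key; this is all the server ever stores."""
    return hashlib.sha256(plaintext.encode()).hexdigest()

def verify_key(presented: str, stored_hash: str) -> bool:
    """Check a presented key against the stored digest.

    hmac.compare_digest keeps the comparison constant-time.
    """
    if not presented.startswith("esk_"):
        return False
    return hmac.compare_digest(hash_key(presented), stored_hash)
```

Because only the digest is stored, a leaked database does not reveal usable keys, which is why the plaintext is shown exactly once at creation.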
Session
A session is a browser session, represented by a JWT cookie set after sign-in. It carries the user identity for the research UI and the agent workspace. Sessions are HTTP-only and expire after a fixed window; signing out rotates the cookie immediately.
Sessions and API keys are two doors into the same identity — one for humans, one for code.
Workspace session
A workspace session is an ephemeral binding of a user to a project, used by Agents. When you open a project in the workspace, the platform creates a workspace session that scopes the agent's reads and writes to that project's drive, that project's runs, and that user's permissions. When you leave, the session ends.
Workspace sessions are how the agent runtime keeps tenants isolated even when many projects share the same underlying machinery.
Subagent
A subagent is a child agent spawned by a parent for parallel sub-work. When a task naturally fans out — read these ten files, summarize each — the parent agent can spawn a subagent per item, let them run concurrently, and collect their outputs. Subagents inherit the parent's project and policies, run on the same runtime, and surface their own steps on the parent's timeline.
Subagents are how the workspace stays fast on tasks that would otherwise be serial.
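The fan-out pattern maps naturally onto a worker pool. A sketch in which `spawn` stands in for whatever call actually creates a subagent and waits for its output (hypothetical; the real runtime API may differ):

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out(parent_task, items, spawn):
    """Run one subagent per item concurrently; collect outputs
    in input order. `spawn(parent_task, item)` is a stand-in for
    the real call that creates a subagent and returns its result.
    """
    if not items:
        return []
    with ThreadPoolExecutor(max_workers=min(8, len(items))) as pool:
        return list(pool.map(lambda item: spawn(parent_task, item), items))
```

`pool.map` preserves input order, which matches the parent collecting each subagent's output against the item that produced it.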
§ 05 Where this vocabulary appears
Every page that follows assumes the terms above. A few starting points if you want to see them in context:
- Runs and phases — see Research workflow and Run timelines.
- Sources and citations — see Sources & library and Citations.
- Surfaces, skills, jobs, subagents — see Surfaces, Skills & jobs, and Runtime & subagents.
- API keys and sessions — see Authentication and API keys.
- Architecture and isolation — see Architecture and Security.