
ResearchAnything.ai.

A citation-grade research agent that turns a question into a defensible report. Plan, search, scrape, screen, score, analyze, synthesize — every step persisted, every claim traceable to a source, every source carrying a formal citation.

§ 01 What it is

ResearchAnything.ai is the research engine inside Essarion. You give it a question; it gives you a report you can hand to an editor, a partner, or a regulator. Not a chat transcript. Not a summary of the first three search results. A structured deliverable: an executive summary, a full report with cited claims, a ranked list of vetted sources, and citations rendered in every major academic format.

Underneath, it runs a twelve-phase pipeline that mirrors how a human analyst would actually work — read the question, plan sub-queries, run searches, dedupe, scrape, screen, score, pick the survivors, analyze them, build citations, synthesize, persist. The pipeline is deterministic in shape and recoverable across failures.
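The twelve phases listed above can be sketched as a simple enumeration. This is an illustrative Python sketch, not the product's actual code: the phase names are inferred from the description in this section, and the real pipeline's identifiers may differ.

```python
from enum import Enum

class Phase(Enum):
    """Hypothetical names for the twelve pipeline phases,
    following the order described in the text."""
    READ_QUESTION = 1   # read the question
    PLAN = 2            # plan sub-queries
    SEARCH = 3          # run searches
    DEDUPE = 4          # drop duplicate hits
    SCRAPE = 5          # fetch page content
    SCREEN = 6          # filter out low-quality sources
    SCORE = 7           # credibility / relevance / recency scoring
    SELECT = 8          # pick the survivors
    ANALYZE = 9         # analyze the surviving sources
    CITE = 10           # build formal citations
    SYNTHESIZE = 11     # write the report
    PERSIST = 12        # persist the run

# Each phase runs in this fixed order; the "deterministic in shape"
# property means the sequence never changes between runs.
PIPELINE = list(Phase)
```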

Tip: If you want to skip straight to running one, start at Quickstart. If you want to understand what happens under the hood, read Research workflow.

§ 02 Why it's different

Most general-purpose chat assistants treat research as an aside — a few web lookups stuffed into the same context window as the answer. That works for casual questions; it does not work when the answer has to be defensible.

Purpose-built, not bolted on

ResearchAnything is a dedicated pipeline, not a tool call. Search, scrape, screen, and score are all first-class phases with their own outputs, their own quality gates, and their own persistence. The model is not asked to invent a method on the fly — it follows a method designed for the task.

Every claim is cited

Citations are built, not generated. Each surviving source is parsed for metadata — title, authors, publisher, year, access date, URL — and then formatted by a deterministic citation builder into MLA, APA, Chicago Notes, Chicago Author-Date, IEEE, Harvard, and BibTeX. There is no guesswork in what a citation should look like.
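To make "built, not generated" concrete, here is a minimal sketch of a deterministic citation builder: parsed metadata goes in, a formatted string comes out, with no model in the loop. The field names, function names, and the exact punctuation rules are assumptions for illustration; the product's real builder covers all seven formats with far more edge cases.

```python
from dataclasses import dataclass

@dataclass
class SourceMeta:
    """Metadata parsed from a surviving source (hypothetical schema)."""
    title: str
    authors: list      # e.g. ["Doe, Jane"]
    publisher: str
    year: int
    url: str
    accessed: str      # e.g. "1 Jan 2025"

def cite_apa(m: SourceMeta) -> str:
    # Simplified APA-style web citation: Author (Year). Title. Publisher. URL
    authors = "; ".join(m.authors)
    return f"{m.publisher}. {m.url}".join([f"{authors} ({m.year}). {m.title}. ", ""])

def cite_mla(m: SourceMeta) -> str:
    # Simplified MLA-style: Author. "Title." Publisher, Year, URL. Accessed date.
    authors = "; ".join(m.authors)
    return (f'{authors} "{m.title}." {m.publisher}, {m.year}, '
            f'{m.url}. Accessed {m.accessed}.')
```

Because both functions are pure string formatting over parsed fields, the same source always yields the same citation, which is the property the paragraph above is claiming.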

Quality over volume

A general chat tool is rewarded for sounding plausible. ResearchAnything is rewarded for filtering. Sources go through dedup, low-quality screening, and multi-factor scoring (credibility, relevance, recency) before any of them touch the synthesis step. The final report is built from five to twelve survivors, not fifty noisy hits.
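The filtering step above can be sketched as a weighted composite plus a threshold. The weights, threshold, and selection logic here are illustrative assumptions; only the three factor names and the five-to-twelve survivor range come from this document.

```python
def composite_score(credibility: float, relevance: float, recency: float,
                    weights=(0.4, 0.4, 0.2)) -> float:
    """Weighted composite of the per-source scores, each in [0, 1].
    The weights are hypothetical, not the product's actual values."""
    wc, wv, wr = weights
    return wc * credibility + wv * relevance + wr * recency

def select_survivors(scored, min_n=5, max_n=12, threshold=0.6):
    """Keep sources above the threshold, ranked by composite score,
    backfilling to min_n and capping at max_n (illustrative logic)."""
    ranked = sorted(scored, key=lambda s: s["composite"], reverse=True)
    keep = [s for s in ranked if s["composite"] >= threshold]
    if len(keep) < min_n:
        keep = ranked[:min_n]
    return keep[:max_n]
```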

Full traceability

Every phase emits a structured event and a reasoning record. You can open a completed run and see exactly which sub-queries the planner produced, which URLs the search returned, which were dropped and why, and which made it into the final scoring stack.
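A structured event like the ones described above might look like the following. The schema (field names, `run_id`, the JSON encoding) is a hypothetical sketch of what a per-phase event record could contain, based only on what this paragraph says is traceable.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class PhaseEvent:
    """One persisted event from one pipeline phase (assumed schema)."""
    run_id: str    # which research run this belongs to
    phase: str     # e.g. "screen"
    decision: str  # e.g. "dropped" or "kept"
    subject: str   # the URL or sub-query the decision applies to
    reason: str    # the human-readable rationale

    def to_json(self) -> str:
        return json.dumps(asdict(self))
```

A completed run is then just the ordered list of these records, which is what lets you reconstruct exactly which URLs were dropped and why.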

§ 03 The output

A finished run produces four things, all persisted and all retrievable.

Executive summary

A concise, single-page synthesis that answers the original question directly. Designed to be the first and sometimes only thing a reader needs.

Full report

A long-form, structured response with sectioned headings, evidence-backed claims, inline references, and a closing notes block. This is the document you would attach to a memo or a brief.

Sources

Every source that survived screening, with its credibility score, relevance score, recency score, overall composite, and a short selection rationale explaining why it ended up in the final stack.

Citations

Each source carries a built citation in every supported format. Export them per-source, per-format, or as a bulk archive. Drop them into a manuscript, a slide deck, or a bibliography.

§ 04 Use cases

ResearchAnything is built for work where the answer has to hold up.

Academic research

Literature scans, background reading, thesis groundwork. Output flows directly into a writing project with formatted citations in your preferred style — MLA, APA, Chicago, IEEE, Harvard, or BibTeX.

Journalism & investigations

Source discovery for a beat, fact-checking against primary sources, tracing the provenance of a claim. The selection rationale field tells you why each source was admitted, which is the same question an editor will ask.

Due diligence

Counterparty research, regulatory background, market and competitive scans. The persisted timeline is auditable; the citation export is defensible.

Market research

Sizing, segmentation, vendor surveys, competitive landscape. The recency score keeps the freshness bias visible; the credibility score keeps the noise out.

§ 05 Where to go next

Pick the path that matches what you're doing.