
DeepScientist

GitHub | 中文文档 | English Docs | Paper | Website

License: Apache-2.0 · Python 3.11+

ICLR 2026 Top 10 Badge

15-minute local setup · One repo per quest · Visible research progress · Human takeover anytime

Quick Start · Launch Your First Project · Product Tour · Model Setup


Unlike one-shot AI Scientist or autoresearch-style systems, DeepScientist is a local-first autonomous research studio that keeps the full loop moving on your machine, from baselines and experiment rounds to paper-ready outputs, with a 15-minute setup. Powered by Findings Memory, Bayesian optimization, and the Research Map, it turns each new result into the next starting point, exploring broadly and, when needed, validating ideas across thousands of experiments.

If you want the technical deep dive behind DeepScientist, watch the Video.

Video: deepscientist.mp4

Still Spending Your Time On Research Grunt Work?

What drains researchers is often not the lack of ideas. It is the endless cycle of low-leverage work:

  • new papers keep coming, but only a small fraction turns into an actionable next-step research plan
  • baseline repos fail on environment, dependency, data, and script issues before real work even starts
  • experiment results get scattered across terminals, scripts, notes, and chats, making later review painful
  • writing, figures, and analysis live in separate tools, so turning them into a coherent paper takes far too long

This is the problem DeepScientist is built to solve:

turn fragmented, repetitive, easy-to-lose research work into a local AI workspace that can keep moving, keep accumulating, and keep getting stronger over time

DeepScientist Is Not Just Another "Research Chatbot"

It is not a tool that summarizes papers, throws you a few ideas, and leaves the dirty work to you.

It is much closer to a real long-running AI research partner:

| What common AI tools often look like | What DeepScientist does instead |
| --- | --- |
| Great at chatting, but context disappears quickly | Turns tasks, files, branches, artifacts, and memory into durable state |
| Good at suggesting ideas, but weak at sustained execution | Pushes papers, baselines, experiments, and writing forward inside one workspace |
| Strong automation, but feels like a black box | Lets you inspect the process through the web workspace, Canvas, files, and terminal |
| Hard to take over once it goes off track | Lets you pause, take over, edit plans, change code, and continue at any time |
| Each run ends when the run ends | Preserves failed paths, winning paths, and reproduction lessons for the next round |


DeepScientist is not a one-shot agent demo. It is a system built for long-horizon research work.

What Can It Actually Help You Get Done?

1. Start a real project from a paper or a research question

  • feed it a core paper, a GitHub repository, or a natural-language research objective
  • it turns those inputs into an executable quest instead of a chat that loses state after a few turns

2. Reproduce baselines and keep the reproduction reusable

  • restore repositories, prepare environments, handle dependencies, and track the critical failures
  • preserve what broke, what got fixed, and which steps are trustworthy for future rounds

3. Run experiments continuously instead of stopping after one pass

  • propose the next hypothesis from existing results
  • branch, ablate, compare, and record conclusions
  • keep failed routes as assets instead of deleting them

4. Turn results into materials you can actually ship

  • organize findings, conclusions, and analysis
  • produce figures, reports, and paper drafts
  • support local PDF and LaTeX compilation workflows

5. Follow the same research effort from multiple surfaces

  • the web workspace in your browser
  • the TUI workflow on a remote server
  • external connector surfaces for collaboration and progress updates

The docs cover each of these collaboration surfaces in detail.

Why Is It Easier To Keep Using?

What retains users is not a flashy demo. It is a system that becomes more useful the longer you work with it.

DeepScientist tends to stick for four reasons:

Local-first by default

  • code, experiments, drafts, and project state stay on your own machine or server by default
  • this is especially valuable for unpublished ideas, sensitive experiment history, and longer-running research loops

One repo per quest

  • every quest is a real Git repository
  • branches, worktrees, files, and artifacts naturally express research structure

The process is not a black box

  • it does not only give you an output
  • you can inspect what it read, what it changed, what it kept, and what it plans to do next

Human collaboration is built in

  • DeepScientist can move autonomously
  • you can also step in, edit, redirect, and hand control back whenever you want

Why Try It Now?

Because this is not just a concept. It is a real system with public docs, a public paper, and a public install path.

  • 2026/03/24: DeepScientist officially released v1.5
  • 2026/02/01: the paper went live on OpenReview for ICLR 2026
  • npm install path is already available: @researai/deepscientist
  • both Chinese and English docs are available, along with Web, TUI, and connector entry points

Product Preview

Architecture Overview

DeepScientist architecture overview

Example Outputs

Example paper output 1: paper-facing deliverables can be preserved directly inside the quest instead of being split across external tools.
Example paper output 2: DeepScientist can carry work through writing, review, figure polish, and export workflows.

Workspace Preview

Start Research: kick off a quest from a paper, repository, or natural-language goal.
Canvas: inspect branches, baselines, and accumulated research structure as a visible map.
Studio + Details: review metrics, traces, and project state without leaving the same workspace.

Progress Reporting

DeepScientist progress reporting example

Projects surface after long-running work

DeepScientist projects surface

Who Will Love DeepScientist Most?

  • graduate students and engineers who want to reproduce papers and push beyond existing baselines
  • labs or research teams running long experiment loops, ablations, and structured result analysis
  • people who want code, experiments, notes, and writing to live in one workspace
  • users who do not want to hand unpublished ideas and intermediate results directly to a pure cloud workflow
  • people who want to run work on servers while following progress from web, TUI, or messaging surfaces

The Core Philosophy Behind DeepScientist

We believe a system that is actually suitable for research should at least satisfy these principles:

  • one quest, one repository, instead of letting everything dissolve after a short conversation
  • branches and worktrees should express research routes naturally instead of being forced into chat history
  • failed paths should be preserved, summarized, and reused instead of overwritten
  • human researchers should always retain takeover power instead of being locked outside the loop
  • the research process should be reviewable, inspectable, and auditable instead of relying on "the model says it did it"

If that sounds like the way you want to work, DeepScientist is worth trying now.

Get Started In 30 Seconds

If you want to try it right now, the shortest path is:

Platform note: DeepScientist fully supports Linux and macOS. Native Windows support is currently experimental (WSL2 is strongly recommended).

npm install -g @researai/deepscientist
codex login
ds --here

To stop the managed local daemon and all currently running agents:

ds --stop

If you prefer the interactive first-run flow, run this once first:

codex

If codex still appears to be missing after installing DeepScientist, take the explicit repair path instead of assuming the bundled dependency was linked correctly:

npm install -g @openai/codex
which codex
codex login

If which codex still prints nothing after that, fix the npm global bin path first, then retry codex login and ds doctor.
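A minimal sketch of that PATH repair, assuming a bash-like shell on Linux or macOS; the fallback directory `~/.npm-global/bin` is only a common custom-prefix convention, not something DeepScientist configures:

```shell
# Locate npm's global bin directory (where `npm install -g` places executables)
if command -v npm >/dev/null 2>&1; then
  NPM_BIN="$(npm prefix -g)/bin"
else
  # Hypothetical fallback for custom prefixes; adjust to your own setup
  NPM_BIN="$HOME/.npm-global/bin"
fi

# Prepend it to PATH for this session; add this line to ~/.bashrc to persist
export PATH="$NPM_BIN:$PATH"

# codex should now resolve if the global install succeeded
command -v codex >/dev/null 2>&1 || echo "codex still not on PATH ($NPM_BIN)"
```

Once `which codex` prints a path, retry `codex login` and `ds doctor` as described above.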

After startup, the default local address is:

http://127.0.0.1:20999
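To confirm the daemon is actually serving that address, a plain HTTP probe is enough; the port below is simply the default quoted above, so adjust it if you changed it:

```shell
# Default local address of the DeepScientist web workspace
URL="http://127.0.0.1:20999"

# -sS: quiet but show errors; -f: fail on HTTP error codes; short timeout for a local check
if curl -sSf --max-time 2 "$URL" >/dev/null 2>&1; then
  echo "workspace reachable at $URL"
else
  echo "workspace not reachable; start it with: ds --here"
fi
```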

Local browser auth is now optional and disabled by default. If you want a per-launch local access password, start with:

ds --auth true

Then you only need to do three things:

  1. click Start Research
  2. fill in the research goal, baseline links, paper links, or local paths
  3. let DeepScientist start a real research project that can keep evolving locally

If this is your first run, prefer an isolated environment, a non-root user, and a local machine; see the documentation for the full details.

Choose Your Starting Path

I just want to get it running first

I want to launch a real project today

I mainly work on servers and terminals

I want to connect my own models or external collaboration channels

I want to understand the system design first

Autonomous Research Systems

End-to-End Autonomous Research Systems

| System | System Type |
| --- | --- |
| autoresearch | Open-source |
| RD-Agent | Open-source |
| Agent Laboratory | Open-source |
| AI-Scientist | Open-source |
| AI-Scientist-v2 | Open-source |
| AutoResearchClaw | Open-source |
| ClawPhD | Open-source |
| Dr. Claw | Open-source |
| FARS | Closed-source |
| EvoScientist | Open-source |
| ScienceClaw | Open-source |
| claude-scholar | Open-source |
| Research-Claw | Open-source |
| DeepScientist | Open-source |

Documentation

NLPCC 2026 AISB Challenge

If you want to benchmark or extend AI scientist systems in the wild, the NLPCC 2026 AISB shared task is a natural next stop:

NLPCC 2026 AISB shared task poster

For Developers And Maintainers

If you are developing or maintaining DeepScientist, continue with the developer documentation.

Citation

If DeepScientist helps your research or engineering work, please cite the paper below. DeepScientist is jointly developed by Yixuan Weng, Weixu Zhao, Shichen Li, Zhen Lin, and Minjun Zhu.

@inproceedings{weng2026deepscientist,
  title={DeepScientist: Advancing Frontier-Pushing Scientific Findings Progressively},
  author={Yixuan Weng and Minjun Zhu and Qiujie Xie and QiYao Sun and Zhen Lin and Sifan Liu and Yue Zhang},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
  url={https://openreview.net/forum?id=cZFgsLq8Gs}
}

If this feels like the research workflow you have been waiting for, give the project a star. Every star makes it easier for more researchers who actually need it to find it.

Community

Welcome to join the WeChat group for discussion.

DeepScientist WeChat group

More From ResearAI

If you like DeepScientist, you may also want to explore the rest of the ResearAI ecosystem:

| Project | What it does |
| --- | --- |
| MeOS | Fork yourself as a Skill, so agents understand you better |
| AutoFigure | Generate publication-ready figures |
| AutoFigure-Edit | Generate editable vector paper figures |
| DeepReviewer-v2 | Review papers and suggest revisions |
| Awesome-AI-Scientist | Curated AI scientist landscape |

Roadmap

We are building DeepScientist as a long-term local-first research operating system.

The next major upgrades focus on four directions:

1. Deeper Research Loops

  • AI Scientist Benchmark support for more realistic evaluation and comparison
  • smoother automatic baseline upload, download, and reuse
  • stronger experiment replay, comparison, and paper-facing outputs

2. Stronger Long-Horizon Memory

  • stronger Memory and Findings Memory mechanisms
  • better cross-run and cross-quest reuse
  • less repeated failure and less rediscovery cost over long projects

3. Richer Multimodal And Collaborative Workflows

  • VideoAnything-style multimodal research capabilities
  • better local-model, connector, and copilot/autonomous collaboration flows
  • a more efficient and more reliable DeepScientist system across local, collaborative, and long-horizon research settings

4. Stronger Security And Safer Deployment

  • safer local-first and server-side deployment defaults
  • stronger auth, permission, and connector-surface protection
  • less fabrication, lower hallucination, and more verification-grounded outputs
  • better auditability for long-running autonomous research workflows

If this direction is interesting to you, please give the project a Watch and a Star:



This project is maintained by WestlakeNLP. If you run into problems, please ask on DeepWiki first; if it still cannot be resolved, open an issue.

WestlakeNLP is led by ACL Fellow Professor Yue Zhang. If you are interested in a long-term internship, PhD position, or research assistant opportunity, contact Professor Yue Zhang at zhangyue@westlake.edu.cn.

About

Now, Stronger AI Pushes Frontiers, Stronger Our Shared Future.
