diff --git a/README.md b/README.md
index 5c177728a..365d201b9 100644
--- a/README.md
+++ b/README.md
@@ -16,12 +16,6 @@
-
-
-
-
-
-
 [📚Documentation](https://deeplabcut.github.io/DeepLabCut/README.html) |
 [🛠️ Installation](https://deeplabcut.github.io/DeepLabCut/docs/installation.html) |
 [🌎 Home Page](https://www.deeplabcut.org) |
@@ -57,7 +51,7 @@
 **DeepLabCut™️** is a toolbox for state-of-the-art markerless pose estimation of animals performing various behaviors. As long as you can see (label) what you want to track, you can use this toolbox, as it is animal and object agnostic. [Read a short development and application summary below](https://github.com/DeepLabCut/DeepLabCut#why-use-deeplabcut).
-# [Installation: how to install DeepLabCut](https://deeplabcut.github.io/DeepLabCut/docs/installation.html)
+# [Installation](https://deeplabcut.github.io/DeepLabCut/docs/installation.html)
 Please click the link above for all the information you need to get started! Please note that currently we support only Python 3.10+ (see conda files for guidance).
@@ -80,39 +74,47 @@
 pip install --pre "deeplabcut[gui]"
 or `pip install --pre "deeplabcut"` (headless version with PyTorch)!
-To use the TensorFlow (TF) engine (requires Python 3.10; TF up to v2.10 supported on Windows,
-up to v2.12 on other platforms): you'll need to run `pip install "deeplabcut[gui,tf]"`
-(which includes all functions plus GUIs) or `pip install "deeplabcut[tf]"` (headless
-version with PyTorch and TensorFlow). We aim to depreciate the TF part in 2027.
+To use the TensorFlow (TF) engine: you'll need to run `pip install "deeplabcut[gui,tf]"` or `pip install "deeplabcut[tf]"` (headless version with TF).
+We aim to deprecate the TensorFlow backend in version 3.2 (release date TBD).
 We recommend using our conda file, see [here](https://github.com/DeepLabCut/DeepLabCut/blob/main/conda-environments/README.md) or the [`deeplabcut-docker` package](https://github.com/DeepLabCut/DeepLabCut/tree/main/docker).
-# [Documentation: The DeepLabCut Process](https://deeplabcut.github.io/DeepLabCut/README.html)
+
+# Documentation: The DeepLabCut Process
 Our docs walk you through using DeepLabCut, and key API points. For an overview of the toolbox and workflow for project management, see our step-by-step at [Nature Protocols paper](https://doi.org/10.1038/s41596-019-0176-0).
-For a deeper understanding and more resources for you to get started with Python and DeepLabCut, please check out our free online course! https://deeplabcut.github.io/DeepLabCut/docs/course.html
+
+
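+Once installed, a quick way to confirm the package is importable (a minimal check from any Python session):
+
+```python
+import deeplabcut
+
+print(deeplabcut.__version__)  # prints the installed version string
+```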

-# [DEMO the code](examples/README.md)
+# [Code demo](examples/README.md)
-🐭 pose tracking of single animals demo [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DeepLabCut/DeepLabCut/blob/master/examples/COLAB/COLAB_DEMO_mouse_openfield.ipynb)
+🐭 Pose tracking of single animals demo [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DeepLabCut/DeepLabCut/blob/master/examples/COLAB/COLAB_DEMO_mouse_openfield.ipynb)
-See [more demos here](https://github.com/DeepLabCut/DeepLabCut/blob/main/examples/README.md). We provide data and several Jupyter Notebooks: one that walks you through a demo dataset to test your installation, and another Notebook to run DeepLabCut from the beginning on your own data. We also show you how to use the code in Docker, and on Google Colab.
+See [more demos here](https://github.com/DeepLabCut/DeepLabCut/blob/main/examples/README.md).
+We provide data and several Jupyter Notebooks: one walks you through a demo dataset to test your installation, and another runs DeepLabCut from the start on your own data.
+We also show how to use the code in Docker, and on Google Colab.
 # Why use DeepLabCut?
-DeepLabCut continues to be actively maintained and we strive to provide a user-friendly `GUI` and `API` for computer vision researchers and life scientists alike. This means we integrate state-of-the-art models and frameworks, while providing our "best-guess" defaults for life scientists. We highly encourage you to read our papers to get a better understanding of what to use and how to modify the models for your setting.
+DeepLabCut continues to be actively maintained and we strive to provide a user-friendly `GUI` and `API` for computer vision researchers and life scientists alike. This means we integrate state-of-the-art models and frameworks, while providing our "best-guess" defaults for life scientists.
+We highly encourage you to read our papers to get a better understanding of what to use and how to modify the models for your setting.
 ## Performance 🔥
 In general, we provide all the tooling for you to train and use custom models with various high-performance backbones.
-We also provide two foundation pretrained animal models: `SuperAnimal-Quadruped`, `SuperAnimal-TopViewMouse`. To gauge their *out-of-distribution* performance, we provide the following tables.
-These models are trained on the [SuperAnimal-Quadruped with AP-10K held out for out-of-domain testing]([https://cocodataset.org/](https://www.nature.com/articles/s41467-024-48792-2)) and the [SuperAnimal-TopViewMouse with DLC-openfield held out for out-of-distribution testing](https://www.nature.com/articles/s41467-024-48792-2). We provide models that include AP-10K in the API (and GUI).
+## Pretrained Models
+
+We also provide two foundation pretrained animal models: `SuperAnimal-Quadruped` & `SuperAnimal-TopViewMouse`.
+To gauge their *out-of-distribution* performance, we provide the following tables.
+
+These models are trained on the [SuperAnimal-Quadruped dataset](https://doi.org/10.5281/zenodo.10619172) with *AP-10K* held out for out-of-domain testing and the [SuperAnimal-TopViewMouse dataset](https://doi.org/10.5281/zenodo.13757509) with *DLC-openfield* held out for out-of-distribution testing (see [Ye et al. 2024](https://www.nature.com/articles/s41467-024-48792-2)).
+We provide models that include AP-10K in the API (and GUI).
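+Running a SuperAnimal model on a video is a single call. A minimal sketch, where the video path is hypothetical and further keyword arguments (model/detector choice, video adaptation, etc.) vary by version, so check the Model Zoo docs:
+
+```python
+import deeplabcut
+
+# Run a pretrained SuperAnimal model on an unlabeled video (no training needed)
+deeplabcut.video_inference_superanimal(
+    ["/data/videos/mouse_openfield.mp4"],  # hypothetical path
+    "superanimal_topviewmouse",
+)
+```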
Note, there are many different models to select from in DeepLabCut 3.0. We strongly recommend you check [this Guide](https://deeplabcut.github.io/DeepLabCut/docs/pytorch/architectures.html) for more details. This table, and those below, give you a sense of performance in real-world complex in-the-wild and lab mouse data, respectively. This [link provides the model weights](https://huggingface.co/mwmathis/DeepLabCutModelZoo-SuperAnimal-Quadruped) to reproduce the numbers; but please note, our `full` models are in our DLClibrary and released in the API. @@ -132,8 +134,13 @@ This [link provides the model weights](https://huggingface.co/mwmathis/DeepLabCu ## The History -In 2018, we demonstrated the capabilities for [trail tracking](https://vnmurthylab.org/), [reaching in mice](http://www.mousemotorlab.org/) and various Drosophila behaviors during egg-laying (see [Mathis et al.](https://www.nature.com/articles/s41593-018-0209-y) for details). There is, however, nothing specific that makes the toolbox only applicable to these tasks and/or species. The toolbox has already been successfully applied (by us and others) to [rats](http://www.mousemotorlab.org/deeplabcut), humans, various fish species, bacteria, leeches, various robots, cheetahs, [mouse whiskers](http://www.mousemotorlab.org/deeplabcut) and [race horses](http://www.mousemotorlab.org/deeplabcut). DeepLabCut utilized the feature detectors (ResNets + readout layers) of one of the state-of-the-art algorithms for human pose estimation by Insafutdinov et al., called DeeperCut, which inspired the name for our toolbox (see references below). Since this time, the package has changed substantially. The code has been re-tooled and re-factored since 2.1+: We have added faster and higher performance variants with MobileNetV2s, EfficientNets, and our own DLCRNet backbones (see [Pretraining boosts out-of-domain robustness for pose estimation](https://arxiv.org/abs/1909.11229) and [Lauer et al 2022](https://www.nature.com/articles/s41592-022-01443-0)). Additionally, we have improved the inference speed and provided both additional and novel augmentation methods, added real-time, and multi-animal support. -In v3.0+ we have changed the backend to support PyTorch. This brings not only an easier installation process for users, but performance gains, developer flexibility, and a lot of new tools! Importantly, the high-level API stays the same, so it will be a seamless transition for users 💜! +### Development and Applications + +In 2018, we demonstrated the capabilities for [trail tracking](https://vnmurthylab.org/), [reaching in mice](http://www.mousemotorlab.org/) and various Drosophila behaviors during egg-laying (see [Mathis et al.](https://www.nature.com/articles/s41593-018-0209-y) for details). There is, however, nothing specific that makes the toolbox only applicable to these tasks and/or species. +The toolbox has already been successfully applied (by us and others) to [rats](http://www.mousemotorlab.org/deeplabcut), humans, various fish species, bacteria, leeches, various robots, cheetahs, [mouse whiskers](http://www.mousemotorlab.org/deeplabcut) and [race horses](http://www.mousemotorlab.org/deeplabcut). +DeepLabCut utilized the feature detectors (ResNets + readout layers) of one of the state-of-the-art algorithms for human pose estimation by Insafutdinov et al., called DeeperCut, which inspired the name for our toolbox (see references below). Since this time, the package has changed substantially. 
+The code has been re-tooled and re-factored since 2.1+: We have added faster and higher-performance variants with MobileNetV2s, EfficientNets, and our own DLCRNet backbones (see [Pretraining boosts out-of-domain robustness for pose estimation](https://arxiv.org/abs/1909.11229) and [Lauer et al 2022](https://www.nature.com/articles/s41592-022-01443-0)). Additionally, we have improved the inference speed, provided both additional and novel augmentation methods, and added real-time and multi-animal support.
+In v3.0+ we have updated the backend to support PyTorch. This brings not only an easier installation process for users, but performance gains, developer flexibility, and a lot of new tools! Importantly, the high-level API stays the same, so it will be a seamless transition for users 💜!
 We currently provide state-of-the-art performance for animal pose estimation and the labs (M. Mathis Lab and A. Mathis Group) have both top journal and computer vision conference papers.
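+Because the high-level API is unchanged, existing scripts carry over to the PyTorch engine. A minimal sketch (paths are hypothetical):
+
+```python
+import deeplabcut
+
+config_path = "/home/user/MyProject-Me-2026-01-01/config.yaml"  # hypothetical project
+deeplabcut.train_network(config_path)  # same high-level call as in 2.x
+deeplabcut.analyze_videos(config_path, ["/data/videos/session1.mp4"])
+```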

@@ -145,49 +152,51 @@ We currently provide state-of-the-art performance for animal pose estimation and
 **Left:** Due to transfer learning it requires **little training data** for multiple, challenging behaviors (see [Mathis et al. 2018](https://www.nature.com/articles/s41593-018-0209-y) for details). **Mid Left:** The feature detectors are robust to video compression (see [Mathis/Warren](https://www.biorxiv.org/content/early/2018/10/30/457242) for details). **Mid Right:** It allows 3D pose estimation with a single network and camera (see [Mathis/Warren](https://www.biorxiv.org/content/early/2018/10/30/457242)). **Right:** It allows 3D pose estimation with a single network trained on data from multiple cameras together with standard triangulation methods (see [Nath* and Mathis* et al. 2019](https://doi.org/10.1038/s41596-019-0176-0)).
-**DeepLabCut** is embedding in a larger open-source eco-system, providing behavioral tracking for neuroscience, ecology, medical, and technical applications. Moreover, many new tools are being actively developed. See [DLC-Utils](https://github.com/DeepLabCut/DLCutils) for some helper code.
+### Ecosystem
+
+**DeepLabCut** is part of a larger open-source ecosystem, providing behavioral tracking for neuroscience, ecology, medical, and technical applications.
+Moreover, many new tools are being actively developed. See [DLC-Utils](https://github.com/DeepLabCut/DLCutils) for some helper code.

-## Code contributors: +### Code contributors -DLC code was originally developed by [Alexander Mathis](https://github.com/AlexEMG) & [Mackenzie Mathis](https://github.com/MMathisLab), and was extended in 2.0 with the core dev team consisting of [Tanmay Nath](https://github.com/meet10may) (2.0-2.1), [Jessy Lauer](https://github.com/jeylau) (2.1-2.4), and [Niels Poulsen](https://github.com/n-poulsen) (2.3-3.0). -DeepLabCut is an open-source tool and has benefited from suggestions and edits by many individuals including early contributors: Mert Yuksekgonul, Tom Biasi, Richard Warren, Ronny Eichler, Hao Wu, Federico Claudi, Gary Kane and Jonny Saunders as well as the [100+ contributors](https://github.com/DeepLabCut/DeepLabCut/graphs/contributors). Please see [AUTHORS](https://github.com/DeepLabCut/DeepLabCut/blob/master/AUTHORS) for more details! +DeepLabCut was originally developed by [Alexander Mathis](https://github.com/AlexEMG) & [Mackenzie Mathis](https://github.com/MMathisLab), and was extended in 2.0 with the core dev team consisting of [Tanmay Nath](https://github.com/meet10may) (2.0-2.1), [Jessy Lauer](https://github.com/jeylau) (2.1-2.4), and [Niels Poulsen](https://github.com/n-poulsen) (2.3-3.0). +DeepLabCut is an open-source tool and has benefited from suggestions and edits by many individuals including early contributors: Mert Yuksekgonul, Tom Biasi, Richard Warren, Ronny Eichler, Hao Wu, Federico Claudi, Gary Kane and Jonny Saunders as well as the [100+ contributors](https://github.com/DeepLabCut/DeepLabCut/graphs/contributors). +Please see [AUTHORS](https://github.com/DeepLabCut/DeepLabCut/blob/master/AUTHORS) for more details! 🤩 This is an actively developed package and we welcome community development and involvement: [![Contributors](https://contrib.rocks/image?repo=DeepLabCut/DeepLabCut)](https://github.com/DeepLabCut/DeepLabCut/graphs/contributors) - - -# Get Assistance & be part of the DLC Community✨: +# Get Assistance & be part of the DLC Community✨ | 🚉 Platform | 🎯 Goal | ⏱️ Estimated Response Time | 📢 Support Squad | |------------------------------------------------------------|-----------------------------------------------------------------------------|---------------------------|----------------------------------------| -| GitHub DeepLabCut/[Issues](https://github.com/DeepLabCut/DeepLabCut/issues) | To report bugs and code issues🐛 (we encourage you to search issues first) | 2-5 days | DLC Core Dev Team | -| GitHub DeepLabCut/[Contributing](https://github.com/DeepLabCut/DeepLabCut/blob/master/CONTRIBUTING.md) | To contribute your expertise and experience🙏💯 | 2-5 days | DLC Core Dev Team | -| 🚧 GitHub DeepLabCut/[Roadmap](https://github.com/DeepLabCut/DeepLabCut/blob/master/docs/roadmap.md) | To learn more about our journey✈️ | N/A | N/A +| GitHub - [Issues](https://github.com/DeepLabCut/DeepLabCut/issues) | To report bugs and code issues🐛 (we encourage you to search issues first) | 2-5 days | DLC Core Dev Team | +| GitHub - [Contributing](https://github.com/DeepLabCut/DeepLabCut/blob/master/CONTRIBUTING.md) | To contribute your expertise and experience🙏💯 | 2-5 days | DLC Core Dev Team | | [![Image.sc 
forum](https://img.shields.io/badge/dynamic/json.svg?label=forum&url=https%3A%2F%2Fforum.image.sc%2Ftag%2Fdeeplabcut.json&query=%24.topic_list.tags.0.topic_count&colorB=brightgreen&&suffix=%20topics&logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAYAAAAfSC3RAAABPklEQVR42m3SyyqFURTA8Y2BER0TDyExZ+aSPIKUlPIITFzKeQWXwhBlQrmFgUzMMFLKZeguBu5y+//17dP3nc5vuPdee6299gohUYYaDGOyyACq4JmQVoFujOMR77hNfOAGM+hBOQqB9TjHD36xhAa04RCuuXeKOvwHVWIKL9jCK2bRiV284QgL8MwEjAneeo9VNOEaBhzALGtoRy02cIcWhE34jj5YxgW+E5Z4iTPkMYpPLCNY3hdOYEfNbKYdmNngZ1jyEzw7h7AIb3fRTQ95OAZ6yQpGYHMMtOTgouktYwxuXsHgWLLl+4x++Kx1FJrjLTagA77bTPvYgw1rRqY56e+w7GNYsqX6JfPwi7aR+Y5SA+BXtKIRfkfJAYgj14tpOF6+I46c4/cAM3UhM3JxyKsxiOIhH0IO6SH/A1Kb1WBeUjbkAAAAAElFTkSuQmCC)](https://forum.image.sc/tag/deeplabcut)
🐭Tag: DeepLabCut | To ask help and support questions 👋 | Promptly🔥 | The DLC Community | |[![Gitter](https://badges.gitter.im/DeepLabCut/community.svg)](https://gitter.im/DeepLabCut/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge) | To discuss with other users, share ideas and collaborate💡 | 2-5 days | The DLC Community | -| [BluSky🦋](https://bsky.app/profile/deeplabcut.bsky.social) | To keep up with our latest news and updates 📢 | 2-5 days | DLC Team | +| [![BlueSky](https://img.shields.io/badge/BlueSky-%40deeplabcut-blue?logo=bluesky)](https://bsky.app/profile/deeplabcut.bsky.social) | To keep up with our latest news and updates 📢 | 2-5 days | DLC Team | | [![Twitter Follow](https://img.shields.io/twitter/follow/DeepLabCut.svg?label=DeepLabCut&style=social)](https://x.com/DeepLabCut) | To keep up with our latest news and updates 📢 | 2-5 days | DLC Team | -| The DeepLabCut [AI Residency Program](https://www.deeplabcutairesidency.org/) | To come and work with us next summer👏 | Annually | DLC Team | + + -## References \& Citations: +## References \& Citations Please see our [dedicated page](https://deeplabcut.github.io/DeepLabCut/docs/citation.html) on how to **cite DeepLabCut** 🙏 and our suggestions for your Methods section! -## License: +## License This project is primarily licensed under the GNU Lesser General Public License v3.0. Note that the software is provided "as is", without warranty of any kind, express or implied. If you use the code or data, please cite us! Note, artwork (DeepLabCut logo) and images are copyrighted; please do not take or use these images without written permission. SuperAnimal models are provided for research use only (non-commercial use). -## Major Versions: +## Major Versions **For all versions, please see [here](https://github.com/DeepLabCut/DeepLabCut/releases).** @@ -202,18 +211,21 @@ This package includes graphical user interfaces to label your data, and take you VERSION 1.0: The initial, Nature Neuroscience version of [DeepLabCut](https://www.nature.com/articles/s41593-018-0209-y) can be found in the history of git, or here: https://github.com/DeepLabCut/DeepLabCut/releases/tag/1.11 -# News (and in the news): +# News + +## Major releases -:purple_heart: We released a major update, moving from 2.x --> 3.x with the backend change to PyTorch +💜 We released a major update, moving from 2.x --> 3.x with the backend change to PyTorch -:purple_heart: The DeepLabCut Model Zoo launches SuperAnimals, see more [here](https://deeplabcut.github.io/DeepLabCut/docs/ModelZoo.html). +💜 The DeepLabCut Model Zoo launches SuperAnimals, see more [here](https://deeplabcut.github.io/DeepLabCut/docs/ModelZoo.html). -:purple_heart: **DeepLabCut supports multi-animal pose estimation!** maDLC is out of beta/rc mode and beta is deprecated, thanks to the testers out there for feedback! Your labeled data will be backwards compatible, but not all other steps. Please see the [new `2.2+` releases](https://github.com/DeepLabCut/DeepLabCut/releases) for what's new & how to install it, please see our new [paper, Lauer et al 2022](https://www.nature.com/articles/s41592-022-01443-0), and the [new docs]( https://deeplabcut.github.io/DeepLabCut) on how to use it! +💜 **DeepLabCut supports multi-animal pose estimation!** maDLC is out of beta/rc mode and beta is deprecated, thanks to the testers out there for feedback! Your labeled data will be backwards compatible, but not all other steps. 
Please see the [new `2.2+` releases](https://github.com/DeepLabCut/DeepLabCut/releases) for what's new & how to install it; see our new [paper, Lauer et al 2022](https://www.nature.com/articles/s41592-022-01443-0) and the [new docs](https://deeplabcut.github.io/DeepLabCut) on how to use it!
-:purple_heart: We support multi-animal re-identification, see [Lauer et al 2022](https://www.nature.com/articles/s41592-022-01443-0).
+💜 We support multi-animal re-identification, see [Lauer et al 2022](https://www.nature.com/articles/s41592-022-01443-0).
-:purple_heart: We have a **real-time** package available! http://DLClive.deeplabcut.org
+💜 We have a **real-time** package available! [DLC-live on GitHub](https://github.com/DeepLabCut/DeepLabCut-live) and [DLC-live-GUI](https://github.com/DeepLabCut/DeepLabCut-live-GUI)
+## In the news
 - June 2024: Our second DLC paper ['Using DeepLabCut for 3D markerless pose estimation across species and behaviors'](https://www.nature.com/articles/s41596-019-0176-0) in Nature Protocols has surpassed 1,000 Google Scholar citations!
 - May 2024: DeepLabCut was featured in Nature: ['DeepLabCut: the motion-tracking tool that went viral'](https://www.nature.com/articles/d41586-024-01474-x)
@@ -251,6 +263,8 @@
 importing a project into the new data format for DLC 2.0
 - July 2018: Ed Yong covered DeepLabCut and interviewed several users for the [Atlantic](https://www.theatlantic.com/science/archive/2018/07/deeplabcut-tracking-animal-movements/564338).
 - April 2018: first DeepLabCut preprint on [arXiv.org](https://arxiv.org/abs/1804.03142)
- ## Funding
+ # Funding
- We are grateful for the follow support over the years! This software project was supported in part by the Essential Open Source Software for Science (EOSS) program at Chan Zuckerberg Initiative (cycles 1, 3, 3-DEI, 4), and jointly with the Kavli Foundation for EOSS Cycle 6! We also thank the Rowland Institute at Harvard for funding from 2017-2020, and EPFL from 2020-present.
+ We are grateful for the following support and funding over the years!
+ This software project was supported in part by the **Essential Open Source Software for Science (EOSS)** program at the **Chan Zuckerberg Initiative** (cycles 1, 3, 3-DEI, 4), and jointly with the **Kavli Foundation** for **EOSS Cycle 6**!
+ We also thank the **Rowland Institute** at **Harvard** for funding from 2017-2020, and **EPFL** from 2020-present.
diff --git a/_toc.yml b/_toc.yml
index 6d70d0299..6bb51801f 100644
--- a/_toc.yml
+++ b/_toc.yml
@@ -24,6 +24,9 @@ parts:
 chapters:
 - file: docs/gui/PROJECT_GUI
 - file: docs/gui/napari_GUI
+  sections:
+  - file: docs/gui/napari/basic_usage
+  - file: docs/gui/napari/advanced_usage
 - caption: DLC3 PyTorch Specific Docs
 chapters:
diff --git a/docs/README.md b/docs/README.md
index 2812d8f00..59de51578 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -4,6 +4,8 @@ deeplabcut:
 last_metadata_updated: '2026-03-06'
 ignore: false
---
+
+
 Please see https://deeplabcut.github.io/DeepLabCut for documentation on how to use this software.
 This directory contains the source code for the docs.
diff --git a/docs/UseOverviewGuide.md b/docs/UseOverviewGuide.md
index 77cf3eda5..ead8471a9 100644
--- a/docs/UseOverviewGuide.md
+++ b/docs/UseOverviewGuide.md
@@ -4,79 +4,92 @@ deeplabcut:
 last_metadata_updated: '2026-03-06'
 ignore: false
---
+
(overview)=
+
# 🥳 Get started with DeepLabCut: our key recommendations
 Below we will first outline what you need to get started, the different ways you can use DeepLabCut, and then the full workflow. Note, we highly recommend you also read and follow our [Nature Protocols paper](https://www.nature.com/articles/s41596-019-0176-0), which is (still) fully relevant to standard DeepLabCut.
-```{Hint}
-💡📚 If you are new to Python and DeepLabCut, you might consider checking our [beginner guide](https://deeplabcut.github.io/DeepLabCut/docs/beginner-guides/beginners-guide.html) once you are ready to jump into using the DeepLabCut App!
+```{hint}
+💡📚 If you are new to Python and DeepLabCut, you might consider checking our {ref}`beginner guide ` once you are ready to jump into using the DeepLabCut App!
 ```
+## Introduction
-## [How to install DeepLabCut](how-to-install)
+**DeepLabCut** is a software package for markerless pose estimation of animals performing various tasks. The software can manage multiple projects. Each project is identified by the name of the project (e.g. TheBehavior), the name of the experimenter (e.g. YourName), as well as the date of creation. This project folder holds a `config.yaml` (a text document) file containing various (project) parameters as well as links to the data of the project.
+
+

+ +

+ +

+ +

+
+## {ref}`Installing DeepLabCut`
 We don't cover installation in depth on this page, so click on the link above if that is what you are looking for. See below for details on getting started with DeepLabCut!
-## What we support:
+## What we support
 We are primarily a package that enables deep learning-based pose estimation. We have a lot of models and options, but don't get overwhelmed -- the developer team has tried our best to "set the best defaults we possibly can"!
-- Decide on your needs: there are **two main modes, standard DeepLabCut or multi-animal DeepLabCut**. We highly recommend carefully considering which one is best for your needs. For example, a white mouse + black mouse would call for standard, while two black mice would use multi-animal. **[Important Information on how to use DLC in different scenarios (single vs multi animal)](important-info-regd-usage)** Then pick a user guide:
+### Main modes of DeepLabCut
+
+- Decide on your needs: there are **two main modes, standard DeepLabCut or multi-animal DeepLabCut**.
+  - We highly recommend carefully considering which one is best for your needs.
+  - For example, a white mouse + black mouse would call for standard, while two black mice would use multi-animal. **[Important Information on how to use DLC in different scenarios (single vs multi animal)](important-info-regd-usage)** Then pick a user guide:
 - (1) [How to use standard DeepLabCut](single-animal-userguide)
 - (2) [How to use multi-animal DeepLabCut](multi-animal-userguide)
-- To note, as of DLC3+ the single and multi-animal code bases are more integrated and we support **top-down**, **bottom-up**, and a new "hybrid" approach that is state-of-the-art, called **BUCTD** (bottom-up conditional top down), models.
+- To note, as of DLC3+ the single and multi-animal code bases are more integrated and we support **top-down**, **bottom-up**, and a new "hybrid" approach that is state-of-the-art, called **BUCTD** (bottom-up conditional top down)
+  - If these terms are new to you, check out our [Primer on Motion Capture with Deep Learning!](https://www.sciencedirect.com/science/article/pii/S0896627320307170). In brief, all of these work for single or multiple animals, and each method can be better or worse on your data.

- - Here is more information on BUCTD: +- Here is more information on BUCTD: +

- **Additional Learning Resources:** - - - [TUTORIALS:](https://www.youtube.com/channel/UC2HEbWpC_1v6i9RnDMy-dfA?view_as=subscriber) video tutorials that demonstrate various aspects of using the code base. - - [HOW-TO-GUIDES:](overview) step-by-step user guidelines for using DeepLabCut on your own datasets (see below) - - [EXPLANATIONS:](https://github.com/DeepLabCut/DeepLabCut-Workshop-Materials) resources on understanding how DeepLabCut works - - [REFERENCES:](https://github.com/DeepLabCut/DeepLabCut#references) read the science behind DeepLabCut - - [BEGINNER GUIDE TO THE GUI](https://deeplabcut.github.io/DeepLabCut/docs/beginner-guides/beginners-guide.html) +### Additional learning resources -Getting Started: [a video tutorial on navigating the documentation!](https://www.youtube.com/watch?v=A9qZidI7tL8) +- [Video tutorials:](https://www.youtube.com/channel/UC2HEbWpC_1v6i9RnDMy-dfA?view_as=subscriber) video tutorials that demonstrate various aspects of using the code base. + -### What you need to get started: + - - **a set of videos that span the types of behaviors you want to track.** Having 10 videos that include different backgrounds, different individuals, and different postures is MUCH better than 1 or 2 videos of 1 or 2 different individuals (i.e. 10-20 frames from each of 10 videos is **much better** than 50-100 frames from 2 videos). +- [Explanations:](https://github.com/DeepLabCut/DeepLabCut-Workshop-Materials) resources on understanding how DeepLabCut works +- [References:](https://github.com/DeepLabCut/DeepLabCut#references) read the science behind DeepLabCut +- \[Beginner Guide to the GUI\](file:beginners-guide) - - **minimally, a computer w/a CPU.** If you want to use DeepLabCut on your own computer for many experiments, then you should get an NVIDIA GPU. See technical specs [here](https://github.com/DeepLabCut/DeepLabCut/wiki/FAQ). You can also use cloud computing resources, including COLAB ([see how](https://github.com/DeepLabCut/DeepLabCut/blob/master/examples/README.md)). + + -### What you DON'T need to get started: +### What you need to get started - - no specific cameras/videos are required; color, monochrome, etc., is all fine. If you can see what you want to measure, then this will work for you (given enough labeled data). +- **A set of videos that span the types of behaviors you want to track.** Having 10 videos that include different backgrounds, different individuals, and different postures is MUCH better than 1 or 2 videos of 1 or 2 different individuals (i.e. 10-20 frames from each of 10 videos is **much better** than 50-100 frames from 2 videos). - - no specific computer is required (but see recommendations above), our software works on Linux, Windows, and MacOS. +- **Ideally, a computer with a GPU.** If you want to use DeepLabCut on your own computer for training and/or for many experiments, then you should get an NVIDIA GPU. +- You can also use cloud computing resources, including COLAB ([see how](https://github.com/DeepLabCut/DeepLabCut/blob/master/examples/README.md)). -### Overview: -**DeepLabCut** is a software package for markerless pose estimation of animals performing various tasks. The software can manage multiple projects for various tasks. Each project is identified by the name of the project (e.g. TheBehavior), name of the experimenter (e.g. YourName), as well as the date at creation. This project folder holds a ``config.yaml`` (a text document) file containing various (project) parameters as well as links the data of the project. 
+### What you DON'T need to get started +- No specific cameras/videos are required; color, monochrome, etc., is all fine. If you can see what you want to measure, then this will work for you (given enough labeled data). -

- -

+- No specific computer is required (but see recommendations above); our software works on Linux, Windows, and macOS.

- -

+## Workflow overview -### Overview of the workflow: This page contains a list of the essential functions of DeepLabCut as well as demos. There are many optional parameters with each described function. For detailed function documentation, please refer to the main user guides or API documentation. For additional assistance, you can use the [help](UseOverviewGuide.md#help) function to better understand what each function does.
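+For example, from an interactive Python session:
+
+```python
+import deeplabcut
+
+# Print the docstring of any DeepLabCut function, including its optional parameters
+help(deeplabcut.analyze_videos)
+```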

@@ -87,85 +100,134 @@ This page contains a list of the essential functions of DeepLabCut as well as de

-You can have as many projects on your computer as you wish. You can have DeepLabCut installed in an [environment](../conda-environments/README.md) and always exit and return to this environment to run the code. You just need to point to the correct ``config.yaml`` file to [jump back in](/docs/UseOverviewGuide.md#tips-for-daily-use)! The documentation below will take you through the individual steps. +You can have as many projects on your computer as you wish. +You can have DeepLabCut installed in a {ref}`conda environment`; once you are finished, exit your terminal, and later re-activate your environment. + +When working on a given project, you just need to point to the correct `config.yaml` file to [jump back in](/docs/UseOverviewGuide.md#tips-for-daily-use)! The documentation below will take you through the individual steps.
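+In practice, "jumping back in" means passing that path to whichever step you want to run next. A minimal sketch (the project path is hypothetical):
+
+```python
+import deeplabcut
+
+config_path = "/home/user/TheBehavior-YourName-2026-03-06/config.yaml"  # hypothetical
+# Every step takes the config path as its first argument, e.g.:
+deeplabcut.check_labels(config_path)
+```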

- (important-info-regd-usage)=
-# Specific Advice for Using DeepLabCut:
+## Usage advice & project types
-## Important information on using DeepLabCut:
-
-We recommend first using **DeepLabCut for a single animal scenario** to understand the workflow - even if it's just our demo data. Multi-animal tracking is more complex - i.e. it has several decisions the user needs to make. Then, when you are ready you can jump into multi-animals...
-
-### Additional information for getting started with maDeepLabCut:
+```{tip}
+We recommend first using **DeepLabCut for a single animal scenario** to understand the workflow - even if it's just our demo data. Multi-animal tracking is more complex, as it involves several decisions the user needs to make. Then, when you are ready, you can jump into multi-animal mode.
+```
-We highly recommend using it first in the Project Manager GUI ([Option 3](docs/functionDetails.md#deeplabcut-project-manager-gui)). This will allow you to get used to the additional steps by being walked through the process. Then, you can always use all the functions in your favorite IDE, notebooks, etc.
+### First project: single or multi-animal?
-### *What scenario do you have?*
+*Which scenario do you have?*
- **I have single animal videos:**
-  - quick start: when you `create_new_project` (and leave the default flag to False in `multianimal=False`). This is the typical work path for many of you.
+
+  - Quick start: when you `create_new_project`, leave the default flag `multianimal=False`. This is the typical work path for a single-animal project.
- **I have single animal videos, but I want to use the updated network capabilities introduced for multi-animal projects:**
-  - quick start: when you `create_new_project` just set the flag `multianimal=True`. This enables you to use maDLC features even though you have only one animal. To note, this is rarely required for single animal projects, and not the recommended path. Some tips for when you might want to use this: this is good for say, a hand or a mouse if you feel the "skeleton" during training would increase performance. DON'T do this for things that could be identified an individual objects. i.e., don't do whisker 1, whisker 2, whisker 3 as 3 individuals. Each whisker always has a specific spatial location, and by calling them individuals you will do WORSE than in single animal mode.
-[VIDEO TUTORIAL AVAILABLE!](https://youtu.be/JDsa8R5J0nQ)
+  - Quick start: when you `create_new_project` just set the flag `multianimal=True`.
+
+    - This enables you to use maDLC features even though you have only one animal. To note, this is rarely required for single animal projects, and not the recommended path.
+    - Some tips for when you might want to use this:
+      - This is good for e.g. a hand or a mouse if you feel the "skeleton" during training would increase performance.
+      - Do not do this for things that could be identified as individual objects, i.e., don't label whisker 1, whisker 2, whisker 3 as 3 individuals.
+        Each whisker always has a specific spatial location, and by calling them individuals the network will perform worse than in single animal mode.
+
+  - [VIDEO TUTORIAL AVAILABLE!](https://youtu.be/JDsa8R5J0nQ)
- **I have multiple *identical-looking animals* in my videos:**
-  - quick start: when you `create_new_project` set the flag `multianimal=True`. If you can't tell them apart, you can assign the "individual" ID to any animal in each frame.
See this [labeling w/2.2 demo video](https://www.youtube.com/watch?v=_qbEqNKApsI)
-[VIDEO TUTORIAL AVAILABLE!](https://youtu.be/Kp-stcTm77g)
+  - Quick start: when you `create_new_project` set the flag `multianimal=True`.
+  - If you can't tell them apart, you can assign the "individual" ID to any animal in each frame. See this [labeling w/2.2 demo video](https://www.youtube.com/watch?v=_qbEqNKApsI)
+  - [VIDEO TUTORIAL AVAILABLE!](https://youtu.be/Kp-stcTm77g)
- **I have multiple animals, *but I can tell them apart,* in my videos and want to use DLC2.2:**
-  - quick start: when you `create_new_project` set the flag `multianimal=True`. And always label the "individual" ID name the same; i.e. if you have mouse1 and mouse2 but mouse2 always has a miniscope, in every frame label mouse2 consistently. See this [labeling w/2.2 demo video](https://www.youtube.com/watch?v=_qbEqNKApsI). Then, you MUST put the following in the config.yaml file: `identity: true`
-[VIDEO TUTORIAL AVAILABLE!](https://youtu.be/Kp-stcTm77g)
- ALSO, if you can tell them apart, label animals them consistently!
+  - Quick start: when you `create_new_project` set the flag `multianimal=True`.
+  - Always use the same "individual" ID name for the same animal; i.e. if you have mouse1 and mouse2 but mouse2 always has a miniscope, in every frame label mouse2 consistently. See this [labeling w/2.2 demo video](https://www.youtube.com/watch?v=_qbEqNKApsI).
+  - Then, you MUST put the following in the config.yaml file: `identity: true`
+  - [VIDEO TUTORIAL AVAILABLE!](https://youtu.be/Kp-stcTm77g)
+
+```{important}
+If you can tell them apart, label your animals consistently!
+```
- **I have a pre-2.2 single animal project, but I want to use 2.2:**
+  - Please read [the conversion to maDLC guide](convert-maDLC)
+
+### Getting started with multi-animal (ma) DeepLabCut
-Please read [this convert 2 maDLC guide](convert-maDLC)
+We highly recommend first using maDLC in the Project Manager GUI ([Option 3](docs/functionDetails.md#deeplabcut-project-manager-gui)).
+This will allow you to get used to the additional steps by being walked through the process. Then, you can always use all the functions in your favorite IDE, notebooks, etc.
-# The options for using DeepLabCut:
+## How to run DeepLabCut
-Great - now that you get the overall workflow let's jump in! Here, you have several options.
+There are several options to use DeepLabCut, and we recommend you pick the one that best suits your needs and experience level. You can always switch between them, so don't worry about picking the "wrong" one.
[**Option 1**](using-demo-notebooks) DEMOs: for a quick introduction to DLC on our data.
-[**Option 2**](using-project-manager-gui) Standalone GUI: is the perfect place for
-beginners who want to start using DeepLabCut on your own data.
-[**Option 3**](using-the-terminal) In the terminal: is best for more advanced users, as
-with the terminal interface you get the most versatility and options.
+- **Option 1**: [Demo notebooks](using-demo-notebooks): for a quick introduction to DLC on our data.
+- **Option 2**: [Standalone GUI](using-project-manager-gui): the perfect place for
+  beginners who want to start using DeepLabCut on their own data.
+- **Option 3**: [In the terminal](using-the-terminal): best for more advanced users, as
+  the terminal interface gives you the most versatility and options.
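+Whichever scenario above fits, project creation boils down to a single call. A minimal sketch (project name, experimenter, and video paths are hypothetical):
+
+```python
+import deeplabcut
+
+config_path = deeplabcut.create_new_project(
+    "TheBehavior",                   # project name
+    "YourName",                      # experimenter
+    ["/data/videos/session1.mp4"],   # list of videos to start with
+    multianimal=True,                # False (the default) for standard projects
+)
+```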
(using-demo-notebooks)=
-## Option 1: Demo Notebooks:
+
+### Option 1: Demo Jupyter notebooks
+
[VIDEO TUTORIAL AVAILABLE!](https://www.youtube.com/watch?v=DRT-Cq2vdWs)
 We provide Jupyter and COLAB notebooks for using DeepLabCut on both a pre-labeled dataset, and on the end user's
-own dataset. See all the demo's [here!](../examples/README.md) Please note that GUIs are not easily supported in Jupyter in MacOS, as you need a framework build of python. While it's possible to launch them with a few tweaks, we recommend using the Project Manager GUI or terminal, so please follow the instructions below.
+own dataset. See all the demos [here!](../examples/README.md)
+Please note that GUIs are not easily supported in Jupyter in MacOS, as you need a framework build of python. While it's possible to launch them with a few tweaks, we recommend using the Project Manager GUI or terminal, so please follow the instructions below.
(using-project-manager-gui)=
-## Option 2: using the Project Manager GUI:
+
+### Option 2: Using the Project Manager GUI
+
[VIDEO TUTORIAL!](https://www.youtube.com/watch?v=KcXogR-p5Ak)
[VIDEO TUTORIAL#2!](https://youtu.be/Kp-stcTm77g)
-Start Python by typing ``ipython`` or ``python`` in the terminal (note: using pythonw for Mac users was depreciated in 2022).
-If you are using DeepLabCut on the cloud, you cannot use the GUIs. If you use Windows, please always open the terminal with administrator privileges. Please read more in our Nature Protocols paper [here](https://www.nature.com/articles/s41596-019-0176-0). And, see our [troubleshooting wiki](https://github.com/DeepLabCut/DeepLabCut/wiki/Troubleshooting-Tips).
+
+
+
+
+If you are using DeepLabCut on the cloud, you cannot use the GUIs.
+
+```{warning}
+On **Windows**: Open the terminal/cmd/anaconda prompt as **Administrator** (right click and select "Run as administrator") to avoid permission issues when downloading models, and for symlink support when videos are not copied into the project folder.
+Admin mode is not required for installation.
+```
 Simply open the terminal and type:
+
 ```python
 python -m deeplabcut
 ```
+
 That's it! Follow the GUI for details
(using-the-terminal)=
-## Option 3: using the program terminal, Start iPython*:
+
+### Option 3: Using the terminal
+
+1. Start iPython:
+
+   ```bash
+   ipython
+   ```
+
+1. Import DeepLabCut:
+
+   ```python
+   import deeplabcut
+   ```
+
+1. Follow the instructions in the user guides for either standard or multi-animal DeepLabCut (see below).
[VIDEO TUTORIAL AVAILABLE!](https://www.youtube.com/watch?v=7xwOhUcIGio)
@@ -173,3 +235,7 @@ Please decide with mode you want to use DeepLabCut, and follow one of the follow
 - (1) [How to use standard DeepLabCut](single-animal-userguide)
 - (2) [How to use multi-animal DeepLabCut](multi-animal-userguide)
+
+## Useful links
+
+Please read more in our Nature Protocols paper [here](https://www.nature.com/articles/s41596-019-0176-0).
diff --git a/docs/beginner-guides/Training-Evaluation.md b/docs/beginner-guides/Training-Evaluation.md
index ae22ab403..09e1c7f2b 100644
--- a/docs/beginner-guides/Training-Evaluation.md
+++ b/docs/beginner-guides/Training-Evaluation.md
@@ -6,19 +6,32 @@ deeplabcut:
 visibility: online
 status: viable
 recommendation: move
- notes: "As mentioned on oher beginner-guides/ docs, this should be part of the GUI section."
+ notes: As mentioned on other beginner-guides/ docs, this should be part of the GUI section.
---
-# Neural Network training and evaluation in the GUI
+
+(file:training-evaluation-gui)=
+
+# Neural network training and evaluation in the GUI + DLC LIVE!
+## Network training
+
+### Creating a training dataset
 Before training your model, the first step is to assemble your training dataset.
+This involves:
-**Create Training Dataset:** Move to the corresponding tab and click **`Create Training Dataset`**. For starters, the default settings will do just fine. While there are more powerful models and data augmentations you might want to consider, you can trust that for most projects the defaults are an ideal place to start.
+- Splitting labeled data into training and evaluation subsets
+- Creating a folder for each shuffle, containing the model configuration ready for training.
-> 💡 **Note:** This guide assumes you have a GPU on your local machine. If you're CPU-bound and finding training challenging, consider using Google Colab. Our [Colab Guide](https://colab.research.google.com/github/DeepLabCut/DeepLabCut/blob/master/examples/COLAB/COLAB_YOURDATA_TrainNetwork_VideoAnalysis.ipynb) can help you get started!
+**Create Training Dataset:** Move to the corresponding tab and click **`Create Training Dataset`**. For starters, the default settings will do just fine. While there are more powerful models and data augmentations you might want to consider, you can trust that for most projects the defaults are a good place to start.
-## Kickstarting the Training Process
+```{note}
+This guide assumes you have a (CUDA-enabled) GPU on your local machine. If you're CPU-bound and training is not feasible, consider using Google Colab. Our [Colab Guide](https://colab.research.google.com/github/DeepLabCut/DeepLabCut/blob/master/examples/COLAB/COLAB_YOURDATA_TrainNetwork_VideoAnalysis.ipynb) can help you get started!
+```
+
+### Starting the training process
 With your training dataset ready, it's time to train your model.
@@ -33,31 +46,34 @@
 You can keep an eye on the training progress via your terminal window. This will
 ![DeepLabCut Training in Terminal with TF](https://images.squarespace-cdn.com/content/v1/57f6d51c9f74566f55ecf271/1717779598041-DC8UJA2NXJXG65ZWJH1O/training-terminal.png?format=500w)
-## Evaluate the Network
+## Network evaluation
 After training, it's time to see how well your model performs.
-### Steps to Evaluate the Network
+### Step-by-step
 1. Find and click on the **`Evaluate Network`** tab.
-2. **Choose Evaluation Options:**
+1. **Choose Evaluation Options:**
    - **Plot Predictions:** Select this to visualize the model's predictions, similar to standard DeepLabCut (DLC) evaluations.
    - **Compare Bodyparts:** Opt to compare all the bodyparts for a comprehensive evaluation.
-3. Click the **`Evaluate Network`** button, located on the right side of the main window.
-
->💡 Tip: If you wish to evaluate all saved snapshots, go to the configuration file and change the `snapshotindex` parameter to `all`.
+1. Click the **`Evaluate Network`** button, located on the right side of the main window.
+```{tip}
+If you wish to evaluate all saved snapshots, go to the configuration file and change the `snapshotindex` parameter to `all`.
+```
-### Understanding the Evaluation Results
+### Interpreting the results
- **Performance Metrics:** DLC will assess the latest snapshot of your model, generating a `.CSV` file with performance
-metrics.
This file is stored in the **`evaluation-results`** (for TensorFlow models) or the
-**`evaluation-results-pytorch`** (for PyTorch models) folder within your project.
+  metrics.
 This file is stored in the **`evaluation-results`** (for TensorFlow models) or the
+  **`evaluation-results-pytorch`** (for PyTorch models) folder within your project.
+![Combined Evaluation Results in DeepLabCut](https://images.squarespace-cdn.com/content/v1/57f6d51c9f74566f55ecf271/1717779617667-0RLTM9DVRALN9YIKSHJZ/combined-evaluation-results.png?format=750w)
-![Combined Evaluation Results in DeepLabCut](https://images.squarespace-cdn.com/content/v1/57f6d51c9f74566f55ecf271/1717779617667-0RLTM9DVRALN9YIKSHJZ/combined-evaluation-results.png?format=750w))
- **Visual Feedback:** Additionally, DLC creates subfolders containing your frames overlaid with both the labeled bodyparts and the model's predictions, allowing you to visually gauge the network's performance.
-![Evaluation Example in DeepLabCut](https://images.squarespace-cdn.com/content/v1/57f6d51c9f74566f55ecf271/1717779623162-BFDAW37B9TO94EGME2O5/check-labels.png?format=500w))
+![Evaluation Example in DeepLabCut](https://images.squarespace-cdn.com/content/v1/57f6d51c9f74566f55ecf271/1717779623162-BFDAW37B9TO94EGME2O5/check-labels.png?format=500w)
+
+## Next steps
-## Next, head over the beginner guide for [using your new neural network for video analysis](video-analysis)
+Head over to the {ref}`file:video-analysis-gui` section to learn about applying your trained model to videos, and creating labeled videos with the results of your analysis!
diff --git a/docs/beginner-guides/beginners-guide.md b/docs/beginner-guides/beginners-guide.md
index a4df28db5..43bce7f1c 100644
--- a/docs/beginner-guides/beginners-guide.md
+++ b/docs/beginner-guides/beginners-guide.md
@@ -5,24 +5,33 @@ deeplabcut:
 ignore: false
 visibility: online
 status: outdated
- recommendation: update
- notes: "While it could seem like a useful page for beginners, duplicating installation instructions is not ideal for maintenance. This is also mixing installation/setup with a GUI guide, which should be in its own section/page. This puts into question the reason of existence of this page, as it would end up being two links to different sections. I would rather have well-made, accurate installation and GUI guides, and if there are beginner-relevant information that really cannot fit into those, then we can have a 'beginner's guide' that links to those and has the extra info. I would suggest reviewing whether this style of docs should remain at all, but if we want to keep them revising the approach may be needed."
+ recommendation: move
+ notes: Move to GUI section.
---
-(beginners-guide)=
-# Using DeepLabCut
+
+(file:beginners-guide)=
+
+# Using the DeepLabCut GUI + DLC LIVE!
 This guide, and related pages, are meant as a very-new-to-python beginner guide to DeepLabCut. After you are comfortable with this material we recommend then jumping into the more detailed User Guides!
-- **ProTip:** For even more 'in-depth' understanding, head over to check out the [DeepLabCut Course](https://deeplabcut.github.io/DeepLabCut/docs/course.html), which provides a deeper dive into the science behind DeepLabCut.
+
+
+
 ## Installation
 Before you begin, make sure that DeepLabCut is installed on your system.
-- **ProTip:** For detailed installation instructions, geared towards a bit more advanced users, refer to the [Full Installation Guide](https://deeplabcut.github.io/DeepLabCut/docs/installation.html).
+Please see the {ref}`installation page` for detailed instructions on how to install DeepLabCut on your computer.
+ + + + +## Starting the DeepLabCut GUI -## Starting DeepLabCut +In the terminal, type: -In the terminal, enter: ```bash python -m deeplabcut ``` -This will open the DeepLabCut App (note, the default is dark mode, but you can click "appearance" to change: + +This will open DeepLabCut. + + ![DeepLabCut GUI Screenshot](https://images.squarespace-cdn.com/content/v1/57f6d51c9f74566f55ecf271/1717779625875-5UHPC367I293CBSP8CT6/GUI-screenshot.png?format=500w) -> 💡 **Note:** For a visual guide on navigating through the DeepLabCut GUI, check out our [YouTube tutorial](https://www.youtube.com/watch?v=tr3npnXWoD4). +```{note} For a visual guide on navigating through the DeepLabCut GUI, check out our [YouTube tutorial](https://www.youtube.com/watch?v=tr3npnXWoD4). +``` -## Starting a New Project +## Starting a new project -### Navigating the GUI on Initial Launch +### Navigating the GUI on initial Launch When you first launch the GUI, you'll find three primary main options: 1. **Create New Project:** Geared towards new initiatives. A good choice if you're here to start something new. -2. **Load Project:** Use this to resume your on-hold or past work. -3. **Model Zoo:** Best suited for those who want to explore Model Zoo. +1. **Load Project:** Use this to resume your on-hold or past work. +1. **Model Zoo:** Best suited for those who want to explore Model Zoo. -### Commencing Your Work: + -- For a first-time or new user, please click on **`Start New Project`**. + -## 🐾 Steps to Start a New Project +### 🐾 New project step-by-step 1. **Launch New Project:** + - When you start a new project, you'll be presented with an empty project window. In DLC3+ you will see a new option "Engine". - - We recommend using the PyTorch Engine: - ![DeepLabCut Engine](https://images.squarespace-cdn.com/content/v1/57f6d51c9f74566f55ecf271/1717780414978-17LOVBUJ8JR102QVSFDY/Screen+Shot+2024-06-07+at+7.13.14+PM.png?format=1500w)) + ![DeepLabCut Engine](https://images.squarespace-cdn.com/content/v1/57f6d51c9f74566f55ecf271/1717780414978-17LOVBUJ8JR102QVSFDY/Screen+Shot+2024-06-07+at+7.13.14+PM.png?format=1500w) + + ```{note} + For most users, the engine will be PyTorch. See {ref}`sec:deeplabcut-with-tf-install` for TensorFlow support. + ``` + +1. **Filling in Project Details:** -2. **Filling in Project Details:** - **Naming Your Project:** - - Give a specific, well-defined name to your project. - > **💡 Tip:** Avoid empty spaces in your project name. + - Give a specific, easy-to-track name to your project. + + ```{tip} + Avoid empty spaces in your project name. + ``` - - **Naming the Experimenter:** - - Fill in the name of the experimenter. This part of the data remains immutable. + - Fill in the name of the experimenter. This name is used in data headers and directory names and it remains permanently associated with the project. + +1. **Determine Project Location:** -3. **Determine Project Location:** - By default, your project will be located on the **Desktop**. - - To pick a different home, modify the path as needed. + - To pick a different location, browse as needed. + +1. **Multi-Animal or Single-Animal Project:** -4. **Multi-Animal or Single-Animal Project:** - - Tick the 'Multi-Animal' option in the menu, but only if that's the mode of the project. + - Tick the 'Multi-Animal' option in the menu if relevant to your experiment. - Choose the 'Number of Cameras' as per your experiment. -5. **Adding Videos:** +1. 
**Adding Videos:**
+
   - First, click on the **`Browse Videos`** button on the right side of the window to browse for your videos.
   - Once the media selection tool opens, navigate and select the folder with your videos.
-
-     > **💡 Tip:** DeepLabCut supports **`.mp4`**, **`.avi`**, **`.mkv`** and **`.mov`** files.
+     ```{tip}
+     DeepLabCut supports **`.mp4`**, **`.avi`**, **`.mkv`** and **`.mov`** files.
+     ```
   - A list will be created with all the videos inside this folder.
   - Unselect the videos you wish to remove from the project.
+  - Videos outside the project directory can be automatically copied into the project folder by selecting the "Copy videos to project folder" option. This is the recommended strategy for data management. External videos that are not copied are instead referenced via symbolic links. While using symbolic links avoids duplicating files and reduces storage usage, it is also more prone to issues, for example if the original files are moved or deleted.
+  - ```{tip}
+    By default, the GUI will look for a **directory** containing videos. Use the "Select individual files"
+    checkbox if you want to select individual videos instead of a whole folder.
+    ```
-6. **Create your project:**
-   - Click on **`Create`** button on the bottom, right side of the main window.
-   - A new folder named after your project's name will be created in the location you chose above.
+1. **Define bodyparts and individuals:**
+   - Enter the names, numbers or IDs of all the bodyparts you wish to track.
+     - **Example:** "head", "tail", "left paw", "right paw", etc.
+     - Less recommended: "L1", "L2", "L3", etc.
+   - **If you have multiple animals**:
+     - Enter the names, numbers or IDs of the individuals in your experiment.
+     - **Example:** "mouse1", "mouse2", "mouse3", etc.
+   - **Unique bodyparts**: If you wish to track "landmark" locations, such as the edges of a maze, or a specific object, you can add these as "unique bodyparts". These are not considered part of an individual, but are still tracked as part of the project.
+     - **Example**: "maze_left_edge", "maze_right_edge", "reward_port", etc.
+   - **Identity labeling**: if and only if you can tell individuals apart by their appearance (not their location), set this to Yes and consistently label your individuals in the same way across videos. This will allow DeepLabCut to learn to tell them apart, and assign consistent identities across frames and videos.
+
+1. **Create your project:**
+
+   - Click on the **`Create`** button on the bottom, right side of the main window.
+   - A new folder will be created in the location you chose above.
+
+## Video tutorial
 ### 📽 Video Tutorial: Setting Up Your Project in DeepLabCut
 ![DeepLabCut Create Project GIF](https://images.squarespace-cdn.com/content/v1/57f6d51c9f74566f55ecf271/1717779616437-30U5RFYV0OY6ACGDG7F4/create-project.gif?format=500w)
-## Next, head over to the beginner guide for [Setting up what keypoints to track](https://deeplabcut.github.io/DeepLabCut/docs/beginner-guides/manage-project.html)
+## Next steps
+
+Next, head over to the beginner guide for {ref}`editing the configuration and managing the project `, which will show you how to edit the configuration file to adjust your bodyparts and skeleton structure.
diff --git a/docs/beginner-guides/labeling.md b/docs/beginner-guides/labeling.md index 592845edc..8692c0256 100644 --- a/docs/beginner-guides/labeling.md +++ b/docs/beginner-guides/labeling.md @@ -5,75 +5,78 @@ deeplabcut: ignore: false visibility: online status: viable - recommendation: update - notes: "Useful content, a note is that this should be better integrated with the napari plugin docs, making the workflow transition from DLC GUI -> napari viewer -> back to DLC GUI more seamless so as to confuse users less. This will need a bit of restructuring, as napari-DLC docs are also standalone from the main GUI. Finding a good linking strategy would help. Perhaps breaking napari-DLC docs into install/setup, basic usage, *labeling workflow* (new) and advanced usage would allow to do this cleanly, as it would separate the standalone plugin operation from the DLC-GUI integrated workflow, yet retaining a single source for napari-DLC labeling workflow." + recommendation: move + notes: Move to GUI section. Updated to link directly to the napari plugin docs. Making the link specific to the workflow section of the napari docs could help. --- -(labeling)= -# Labeling GUI - -## Selecting Frames to Label - -In DeepLabCut, choosing the right frames for labeling is a key step. The trick is always to select the MOST DIVERSE data you can that your model will see. That means good lighting, bad lighting, anything you want to throw at it. So, first, pick a range of diverse videos! Then, we will help you pick frames. You've got two easy ways to do this: -1. **Let DeepLabCut Choose:** DeepLabCut can extract frames automatically for you. It's got two neat ways to do that: - - **Uniform:** This is like taking a snapshot at regular time intervals. - - **K-means clustering:** This one applies k-means and picks images from different clusters. This is typically better, as it gives you a variety of actions and poses. Note, as it is a clustering tool, it will miss rare events, so ideally run this step, then perhaps consider running the manual GUI to get some rare frames! You can do both within DLC. +(file:labeling-gui)= -2. **Pick Frames Yourself:** Just like flipping through a photo album, you can go through your video and pick the frames that catch your eye - this is great for finding rare frames. Choose the **`manual`** extraction method. +# Labeling GUI -### Here's how to get started: +## Selecting frames to label -- **Step 1:** Click on **`automatic`** in the frame selection area. -- **Step 2:** Choose **`k-means`** for some variety. -- **Step 3:** Hit the **`Extract Frames`** button, usually found at the bottom right corner. +In DeepLabCut, choosing the right frames for labeling is a key step. -By default, DeepLabCut will grab 20 frames from each of your videos and put them into sub-folders, per video, under **labeled-data** in your project. Now, you're all set to start labeling! +```{important} +Always aim to select the **most diverse data** you can for your model to be trained on. This implies picking a variety of good lighting, bad lighting, partial occlusions, and different poses. +If relevant, label data across several experimental sessions, animals, and conditions. +**Labeling 10 frames from several different videos is typically more effective than labeling 100 frames from a single video.** +``` -## Labeling Your First Set of Frames in DeepLabCut +To help you select "different" frames, DeepLabCut provides two main options: -Alright, you've got your extracted frames ready. Now comes the labeling! +1. 
**Automated frame extraction:** DeepLabCut can extract frames automatically for you.
-### Entering the Label Frames Area
+ - **Uniform:** Samples at regular time intervals. Does not guarantee diversity, but is simple and fast.
+ - **K-means clustering:** This one runs a k-means algorithm and picks images from different clusters. This is typically more robust in extracting a variety of actions and poses. Note, as it is a clustering tool, it will miss rare events, so after running this step, consider using the manual GUI to get some rare frames! You can do both within DLC.
-- **Click on `Label Frames`:** This takes you straight to where your frames are, sorted in the **labeled-data** folder, each video in its own sub-folder.
-- **Open a Folder:** Click on the first one to start, and then click **`open`**.
+1. **Manual frame extraction:** Pick frames yourself using the GUI. This is the most time-consuming option, but it gives you full control over the frames you pick and can capture rare events that automated tools might miss. You can also use this after running automated frame extraction to get some of those "rare" frames.
-### The napari DeepLabCut Labeler
### Example workflow
-- **Plugin Window Opens:** As soon as you click **`open`**, the napari DeepLabCut plugin window appears, your main stage for labeling.
-- **Tutorial Popup:** A quick tutorial window shows up. It's a brief guide, so give it a look to understand the basics.
+1. Select **`automatic`** in the frame selection area.
+1. Choose **`k-means`** as a good default option for frame extraction, and set the number of frames you want to extract.
+1. Hit the **`Extract Frames`** button.
![Labeling Frames in DeepLabCut using Napari Interface](https://images.squarespace-cdn.com/content/v1/57f6d51c9f74566f55ecf271/1717779602092-LVR2TI6OADSHEYRCGS6F/labeling-napari.png?format=500w)
+By default, DeepLabCut will grab 20 frames from each of your videos and put them into sub-folders, per video, under **labeled-data** in your project.
+With this, you are all set to start labeling! A scripted equivalent of this extraction step is sketched below.
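If you prefer scripting, the same extraction can be run from the Python API; a minimal sketch, assuming a hypothetical project path:

```python
import deeplabcut

config_path = "/home/user/reaching-task-alex-2026-01-01/config.yaml"  # hypothetical path

# Mirrors the GUI choices above: automatic extraction with k-means.
# userfeedback=False skips the per-video confirmation prompts.
deeplabcut.extract_frames(
    config_path,
    mode="automatic",
    algo="kmeans",
    userfeedback=False,
)
```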
-### Labeling Setup
## Frame labeling workflow
-- **Frames on Display:** Your frames are lined up in the middle, with a slider below to shuffle through them.
-- **Tools and Keypoints:** To the bottom right, you find a list of bodyparts from your configuration. On the top left, all your labeling tools are ready.
Alright, you've got your extracted frames ready. Now comes the labeling!
-### The Labeling Process
### Launching the labeling GUI
-- **Start with `Add points`:** Click this to begin placing keypoints on your first frame. If you can't see a bodypart, just move to the next one.
-- **Navigate Through Frames:** Use the slider to go from one frame to the next after you're done labeling.
-- **Save Progress:** Remember to save your work as you go with **`Command and S`** (or **`Ctrl and S`** on Windows).
+- **Click on `Label Frames`:** This takes you straight to where your frames are, sorted in the **labeled-data** folder, each video in its own sub-folder.
+- **Open a Folder:** Click on the first unlabeled folder to start, and then click **`Open`**.
-> 💡 **Note:** For a detailed walkthrough on using the Napari labeling GUI, have a look at the
-[DeepLabCut Napari Guide](napari-gui). Additionally, you can watch our instructional
-[YouTube video](https://www.youtube.com/watch?v=hsA9IB5r73E) for more insights and tips.
+### napari-deeplabcut
+Please refer to the {ref}`file:napari-dlc-basic-usage` section for a detailed walkthrough of how to use the napari-DLC plugin for labeling your frames.
-### Completing the Set
+### Completing the labeling
-Work through all the frames in the first folder. Then, proceed to the next, continuing this way until each folder in your **labeled-data** directory is done.
+Work through all the frames in the first folder and save them.
-## Checking Your Labels
+```{tip}
+After saving, you can close napari and click **`Label Frames`** again to open the next folder
+**OR**
+Remove all layers in napari and drag-and-drop the next folder in the same napari session to keep going without needing to close and reopen napari.
+```
-After you've labeled all your frames, it's important to ensure they're accurate.
## Checking labels
-### How to Check Your Labels
+After you've labeled all your frames, you may want to review their accuracy before moving on to training your model. This is a crucial step, as the quality of your labels will directly impact the performance of your model.
-- **Return to the Main Window:** Once you're done with labeling, head back to DeepLabCut's main window, and click on **`Check Labels`**.
+- **Return to the DeepLabCut GUI:** Once you're done with labeling, head back to DeepLabCut's main window, and click on **`Check Labels`**.
- **Review the Labeled Folders:** The system will have created new folders for each labeled set inside your labeled-data folder. These folders contain your original frames overlaid with the keypoints you've added.
![Checking Labels in DeepLabCut](https://images.squarespace-cdn.com/content/v1/57f6d51c9f74566f55ecf271/1717779615252-6BNW661XB2ULH85RTAD3/evaluation-example.png?format=500w)
-Take the time to go through each folder. Accurate labels are key. If there are mistakes, the model might learn incorrectly and mislabel your videos later on. It's all about setting the right foundation for accurate analysis.
+Take the time to go through each folder. Accurate labels are key.
+If there are mistakes, the model might learn incorrectly and mislabel your videos later on.
+A clean foundation is essential for accurate analysis.
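The same check can also be run from a script via the Python API; a minimal sketch, with a hypothetical config path:

```python
import deeplabcut

config_path = "/home/user/reaching-task-alex-2026-01-01/config.yaml"  # hypothetical path

# Writes new folders into labeled-data with your frames overlaid by the
# labels you placed, so you can visually verify annotation quality.
deeplabcut.check_labels(config_path)
```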
+## Next steps
+
+Head on to {ref}`file:training-evaluation-gui` to learn about training and evaluating your neural network with the labeled data you created!
diff --git a/docs/beginner-guides/manage-project.md b/docs/beginner-guides/manage-project.md index 4159ba99b..0faf7f3c5 100644 --- a/docs/beginner-guides/manage-project.md +++ b/docs/beginner-guides/manage-project.md @@ -6,49 +6,62 @@ deeplabcut: visibility: online status: viable recommendation: move
- notes: "It seems the beginner guide section is more of a GUI step-by-step. As such, it should be moved to the GUI section, and merged/integrated with the contents there. The content is useful, but making it clear that this is for the GUI would reduce the confusion of a beginner guide being in fact rather central GUI use instructions."
+ notes: Move to a dedicated GUI section. Making the config edit tool slightly easier to work with and updating the docs below to include additional fields would be helpful.
---
-# Setting up what keypoints to track
+
+(file:manage-project-gui)=
+
+# Editing and working with the configuration file
+
 DLC LIVE!
-**Edit the Configuration File**
+The configuration file (`config.yaml`) is the central record of files in your project, as well as the settings for your models.
+As a YAML file, it can be edited manually, but the GUI provides an easy way to edit it without needing to know the YAML format. In this guide, we will show you how to edit the configuration file using the GUI.
-After creating your DeepLabCut project, you'll go to the main GUI window, where you'll start managing your project from the Project Management Tab.
+## Editing the configuration
-**Accessing the Configuration File**
+After creating your DeepLabCut project, you'll be shown the main GUI window, where you can manage your project from the Project Management Tab.
- **Locate the Configuration File:** At the top of the main window, you'll find the file path to the configuration file.
-- **Edit the File:** Click on **`Edit config.yaml`**. This action allows you to:
- - Define the bodyparts you wish to track.
- - Outline the skeleton structure (optional!).
+- **Edit the File:** Click on **`Edit config.yaml`**.
+ - A **`Configuration Editor`** window will open, displaying all the configuration details.
+ - You will need to modify some of these settings to align with your experiment.
+ - For example:
+ - Update or define the bodyparts you wish to track.
+ - *Optional:* Outline the skeleton structure.
-A **`Configuration Editor`** window will open, displaying all the configuration details. You'll need to modify some of these settings to align with your research requirements.
+## Step-by-step configuration walkthrough
-## Steps to Edit the Configuration
-
-### 1. Defining Bodyparts
+### Defining & updating bodyparts
- **Locate the Bodyparts Section:** In the Configuration Editor, find the **`bodyparts`** category.
- **Modify the List:** Click on the arrow next to **`bodyparts`** to expand the list. Here, you can:
 - Update the list with the names of the bodyparts relevant to your study.
 - Add more entries by right-clicking on a row number and selecting **`Insert`**.
- ![Editing Bodyparts in DeepLabCut's Config File](https://images.squarespace-cdn.com/content/v1/57f6d51c9f74566f55ecf271/1717779624617-CIVZCM23U69NYK9BO3GY/bodyparts.png?format=500w)
+
-### 2. Defining the Skeleton
+### Defining the skeleton
- **Navigate to the Skeleton Section:** Scroll down to the **`skeleton`** category.
-- **Adjust the Skeleton List:** Click on the arrow to expand this section. You can then:
- - Update the pairs of bodyparts to define the skeleton structure of your model.
+- **Adjust the Skeleton List:** Click on the arrow to expand this section.
+ - You can then update the list of bodypart pairs: i.e. the connections that define the skeleton structure of your model.
+ - In the list of bodypart pairs, each pair has an index (starting at 0).
+ - Each item of the pair (also indexed: 0 or 1) holds the name of a bodypart.
+ - Each pair of bodyparts represents a connection; all connections together make up the skeleton.
![Defining the Skeleton Structure in Config File](https://images.squarespace-cdn.com/content/v1/57f6d51c9f74566f55ecf271/1717779598505-HQNECHIKSQ6XL033JX8M/skeleton.png?format=500w)
-> 💡 **Tip:** If you're new to DeepLabCut, spend some time visualizing how the chosen bodyparts can be connected effectively to form a coherent skeleton.
+```{tip}
+Spend some time visualizing how the chosen bodyparts can be connected effectively to form a coherent, visually helpful skeleton.
+```
-### Saving Your Changes
+### Saving changes
- **Save the Configuration:** Once you're satisfied with the modifications, click **`Save`**. This will store your changes and return you to the main GUI window. A scripted alternative to the Configuration Editor is sketched below.
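If you'd rather script these edits than use the Configuration Editor, DeepLabCut ships a small helper for merging changes into `config.yaml`. A minimal sketch; the import path of `edit_config` is an assumption to verify against your installed version, and the project path and bodypart names are hypothetical:

```python
from deeplabcut.utils.auxiliaryfunctions import edit_config  # assumed location

config_path = "/home/user/reaching-task-alex-2026-01-01/config.yaml"  # hypothetical

# edit_config merges a dict of edits into config.yaml; unlike a plain
# YAML round-trip, it preserves the comments DeepLabCut writes there.
edit_config(
    config_path,
    {
        "bodyparts": ["head", "left_paw", "right_paw", "tail_base"],
        "skeleton": [
            ["head", "left_paw"],
            ["head", "right_paw"],
            ["head", "tail_base"],
        ],
    },
)
```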
-## Next, head over the beginner guide for [Labeling your data](labeling)
+## Next steps
+
+Head over to the guide for the {ref}`file:labeling-gui`, which will show you how to label your data using the napari-based labeling GUI.
diff --git a/docs/beginner-guides/video-analysis.md b/docs/beginner-guides/video-analysis.md index 5ed589253..ac667881b 100644 --- a/docs/beginner-guides/video-analysis.md +++ b/docs/beginner-guides/video-analysis.md @@ -6,23 +6,30 @@ deeplabcut: visibility: online status: viable recommendation: move
- notes: "As mentioned on oher beginner-guides/ docs, this should be part of the GUI section."
+ notes: As mentioned on other beginner-guides/ docs, this should be part of the GUI section.
---
-# Video Analysis with DeepLabCut
- DLC LIVE!
+(file:video-analysis-gui)=
+
+# Video analysis in the GUI
+
+ DLC LIVE!
After training and evaluating your model, the next step is to apply it to your videos.
-**How to Analyze Videos**
+## Analyzing videos with your trained model
+
+### Step-by-step
1. **Navigate to the 'Analyze Videos' Tab:** Begin applying your trained model to video data here.
-2. **Select Your Video Format and Files:**
- - **Choose Video Format:** Pick the format of your video (`.mp4`, `.avi`, `.mkv`, or `.mov`).
- - **Select Videos:** Click **`Select Videos`** to find and open your video file.
+1. **Select Your Video Format and Files:**
+
+- **Choose Video Format:** Pick the format of your video (`.mp4`, `.avi`, `.mkv`, or `.mov`).
+- **Select Videos:** Click **`Select Videos`** to find and open your video file.
+
1. **Start Analysis:** Click **`Analyze`**. The analysis time depends on video length and resolution. Track progress in the terminal or Anaconda prompt.
-## Reviewing Analysis Results
+### Reviewing analysis results
- **Find Results in Your Project Folder:** After analysis, go to your project's video folder.
- **Analysis Files:** Look also for a `.metapickle`, an `.h5`, and possibly a `.csv` file for detailed analysis data.
@@ -30,16 +37,21 @@ After training and evaluating your model, the next step is to apply it to your v
![Plot poses](https://images.squarespace-cdn.com/content/v1/57f6d51c9f74566f55ecf271/1717779600836-YOWM5T2MBY0JN1LB537B/plot-poses.png?format=500w)
-## Creating a Labeled Video
+## Generating labeled videos
+
+### Create a labeled video
1. **Go to 'Create Labeled Video' Tab:** The previously analyzed video should be selected.
-2. If not already selected, choose your video.
-3. Click **`Create Videos`**.
+1. If not already selected, choose your video.
+1. Click **`Create Videos`**.
-## Viewing the Labeled Video
+### View the labeled video
- Your labeled video will be in your video folder, named after the original video plus model details and 'labeled'.
-- Watch the video to assess the model's labeling accuracy.
+- Use it in your results, or perform downstream analyses with it!
+
+## Next steps
+
+
-## Happy DeepLabCutting!
-- Check out the more advanced user guides for even more options!
+Check out our more advanced guides, and consider reading more about models, augmentations and other parameters to further optimize your model and analysis! A scripted version of this analysis workflow is sketched below.
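For reference, the GUI steps above map onto two documented API calls; a minimal sketch with hypothetical paths:

```python
import deeplabcut

config_path = "/home/user/reaching-task-alex-2026-01-01/config.yaml"  # hypothetical
videos = ["/data/videos/test-session.mp4"]                            # hypothetical

# Writes the pose estimates (.h5, plus .csv here) next to each video,
# together with pickled metadata about the analysis run.
deeplabcut.analyze_videos(config_path, videos, videotype=".mp4", save_as_csv=True)

# Renders the "...labeled" video for visual inspection of tracking quality.
deeplabcut.create_labeled_video(config_path, videos)
```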
diff --git a/docs/course.md b/docs/course.md index a47610c78..94d4c24d0 100644 --- a/docs/course.md +++ b/docs/course.md @@ -7,16 +7,17 @@ deeplabcut: status: outdated recommendation: archive ---
+
# DeepLabCut Self-paced Course
::::{warning}
This course was designed for DLC 2. An updated version for DLC 3 is in the works.
::::
Do you have video of animal behaviors? Step 1: Get Poses ...
- DLC LIVE!
+DLC LIVE!
This document is an outline of resources for a course for those wanting to learn to use `Python` and `DeepLabCut`.
We expect it to take *roughly* 1-2 weeks to get through if you do it rigorously. To get the basics, it should take 1-2 days.

- ## Installation: You need Python and DeepLabCut installed!
-- [See these "beginner docs" for help!](beginners-guide)
-- **WATCH:** overview of conda: [Python Tutorial: Anaconda - Installation and Using Conda](https://www.youtube.com/watch?v=YJC6ldI3hWk)
+- {ref}`See these "beginner docs" for help! <file:beginners-guide>`
+- **WATCH:** overview of conda: [Python Tutorial: Anaconda - Installation and Using Conda](https://www.youtube.com/watch?v=YJC6ldI3hWk)
## Outline:
@@ -47,93 +47,91 @@
You need Python and DeepLabCut installed!
- **Learning:** learning and teaching signal processing, and overview from Prof. Demba Ba [talk at JupyterCon](https://www.youtube.com/watch?v=ywz-LLYwkQQ)
- **DEMO:** Can I DEMO DEEPLABCUT (DLC) quickly?
- - Yes: [you can click through this DEMO notebook](https://github.com/DeepLabCut/DeepLabCut/blob/main/examples/COLAB/COLAB_DEMO_mouse_openfield.ipynb)
- - AND follow along with me: [Video Tutorial!](https://www.youtube.com/watch?v=DRT-Cq2vdWs)
+ - Yes: [you can click through this DEMO notebook](https://github.com/DeepLabCut/DeepLabCut/blob/main/examples/COLAB/COLAB_DEMO_mouse_openfield.ipynb)
+ - AND follow along with me: [Video Tutorial!](https://www.youtube.com/watch?v=DRT-Cq2vdWs)
- **WATCH:** How do you know DLC is installed properly? (i.e. how to use our test script!) [Video Tutorial!](https://youtu.be/IOWtKn3l33s)
- review!
- **REVIEW PAPER:** The state of animal pose estimation w/ deep learning i.e. "Deep learning tools for the measurement of animal behavior in neuroscience" [arXiv](https://arxiv.org/abs/1909.13868) & [published version](https://www.sciencedirect.com/science/article/pii/S0959438819301151)
- **REVIEW PAPER:** [A Primer on Motion Capture with Deep Learning: Principles, Pitfalls and Perspectives](https://www.sciencedirect.com/science/article/pii/S0896627320307170)
- - **WATCH:** There are a lot of docs... where to begin: [Video Tutorial!](https://www.youtube.com/watch?v=A9qZidI7tL8)
### **Module 1: getting started on data**
**What you need:** any videos where you can see the animals/objects, etc. You can use our demo videos, grab some from the internet, or use whatever older data you have. Any camera, color/monochrome, etc will work. Find diverse videos, and label what you want to track well :)
-- IF YOU ARE PART OF THE COURSE: you will be contributing to the DLC Model Zoo 😊
- - **Slides:** [Overview of starting new projects](https://github.com/DeepLabCut/DeepLabCut-Workshop-Materials/blob/main/part1-labeling.pdf)
- - **READ ME PLEASE:** [DeepLabCut, the science](https://rdcu.be/4Rep)
- - **READ ME PLEASE:** [DeepLabCut, the user guide](https://rdcu.be/bHpHN)
- - **WATCH:** Video tutorial 1: [using the Project Manager GUI](https://www.youtube.com/watch?v=KcXogR-p5Ak)
- - Please go from project creation (use >1 video!) to labeling your data, and then check the labels!
- - **WATCH:** Video tutorial 2: [using the Project Manager GUI for multi-animal pose estimation](https://www.youtube.com/watch?v=Kp-stcTm77g)
- - Please go from project creation (use >1 video!) to labeling your data, and then check the labels!
- - **WATCH:** Video tutorial 3: [using ipython/pythonw (more functions!)](https://www.youtube.com/watch?v=7xwOhUcIGio)
- - multi-animal DLC: [labeling](https://www.youtube.com/watch?v=Kp-stcTm77g)
- - Please go from project creation (use >1 video!) to labeling your data, and then check the labels!
+- IF YOU ARE PART OF THE COURSE: you will be contributing to the DLC Model Zoo 😊 + - **Slides:** [Overview of starting new projects](https://github.com/DeepLabCut/DeepLabCut-Workshop-Materials/blob/main/part1-labeling.pdf) + - **READ ME PLEASE:** [DeepLabCut, the science](https://rdcu.be/4Rep) + - **READ ME PLEASE:** [DeepLabCut, the user guide](https://rdcu.be/bHpHN) + - **WATCH:** Video tutorial 1: [using the Project Manager GUI](https://www.youtube.com/watch?v=KcXogR-p5Ak) + - Please go from project creation (use >1 video!) to labeling your data, and then check the labels! + - **WATCH:** Video tutorial 2: [using the Project Manager GUI for multi-animal pose estimation](https://www.youtube.com/watch?v=Kp-stcTm77g) + - Please go from project creation (use >1 video!) to labeling your data, and then check the labels! + - **WATCH:** Video tutorial 3: [using ipython/pythonw (more functions!)](https://www.youtube.com/watch?v=7xwOhUcIGio) + - multi-animal DLC: [labeling](https://www.youtube.com/watch?v=Kp-stcTm77g) + - Please go from project creation (use >1 video!) to labeling your data, and then check the labels! ### **Module 2: Neural Networks** - - **Slides:** [Overview of creating training and test data, and training networks](https://github.com/DeepLabCut/DeepLabCut-Workshop-Materials/blob/main/part2-network.pdf) - - **READ ME PLEASE:** [What are convolutional neural networks?](https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53) +- **Slides:** [Overview of creating training and test data, and training networks](https://github.com/DeepLabCut/DeepLabCut-Workshop-Materials/blob/main/part2-network.pdf) + +- **READ ME PLEASE:** [What are convolutional neural networks?](https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53) - - **READ ME PLEASE:** Here is a new paper from us describing challenges in robust pose estimation, why PRE-TRAINING really matters - which was our major scientific contribution to low-data input pose-estimation - and it describes new networks that are available to you. [Pretraining boosts out-of-domain robustness for pose estimation](https://paperswithcode.com/paper/pretraining-boosts-out-of-domain-robustness) +- **READ ME PLEASE:** Here is a new paper from us describing challenges in robust pose estimation, why PRE-TRAINING really matters - which was our major scientific contribution to low-data input pose-estimation - and it describes new networks that are available to you. [Pretraining boosts out-of-domain robustness for pose estimation](https://paperswithcode.com/paper/pretraining-boosts-out-of-domain-robustness) - - **MORE DETAILS:** ImageNet: check out the original paper and dataset: http://www.image-net.org/ + - **MORE DETAILS:** ImageNet: check out the original paper and dataset: http://www.image-net.org/ - - **REVIEW PAPER:** [A Primer on Motion Capture with Deep Learning: Principles, Pitfalls and Perspectives](https://www.sciencedirect.com/science/article/pii/S0896627320307170) +- **REVIEW PAPER:** [A Primer on Motion Capture with Deep Learning: Principles, Pitfalls and Perspectives](https://www.sciencedirect.com/science/article/pii/S0896627320307170) +review! - review! 
+Before you create a training/test set, please read/watch:
- Before you create a training/test set, please read/watch:
- - **More information:** [Which types neural networks are available, and what should I use?](https://github.com/DeepLabCut/DeepLabCut/wiki/What-neural-network-should-I-use%3F-(Trade-offs,-speed-performance,-and-considerations))
- - **WATCH:** Video tutorial 1: [How to test different networks in a controlled way](https://www.youtube.com/watch?v=WXCVr6xAcCA)
- - Now, decide what model(s) you want to test.
- - IF you want to train on your CPU, then run the step `create_training_dataset`, in the GUI etc. on your own computer.
- - IF you want to use GPUs on google colab, [**(1)** watch this FIRST/follow along here!](https://www.youtube.com/watch?v=qJGs8nxx80A) **(2)** move your whole project folder to Google Drive, and then [**use this notebook**](https://github.com/DeepLabCut/DeepLabCut/blob/main/examples/COLAB/COLAB_YOURDATA_TrainNetwork_VideoAnalysis.ipynb)
+- **More information:** [Which types of neural networks are available, and what should I use?](https://github.com/DeepLabCut/DeepLabCut/wiki/What-neural-network-should-I-use%3F-(Trade-offs,-speed-performance,-and-considerations))
+- **WATCH:** Video tutorial 1: [How to test different networks in a controlled way](https://www.youtube.com/watch?v=WXCVr6xAcCA)
+ - Now, decide what model(s) you want to test.
- **MODULE 2 webinar**: https://youtu.be/ILsuC4icBU0
+ - IF you want to train on your CPU, then run the step `create_training_dataset`, in the GUI etc. on your own computer.
+ - IF you want to use GPUs on google colab, [**(1)** watch this FIRST/follow along here!](https://www.youtube.com/watch?v=qJGs8nxx80A) **(2)** move your whole project folder to Google Drive, and then [**use this notebook**](https://github.com/DeepLabCut/DeepLabCut/blob/main/examples/COLAB/COLAB_YOURDATA_TrainNetwork_VideoAnalysis.ipynb)
+ **MODULE 2 webinar**: https://youtu.be/ILsuC4icBU0
### **Module 3: Evaluation of network performance**
- - **Slides** [Evaluate your network](https://github.com/DeepLabCut/DeepLabCut-Workshop-Materials/blob/master/part3-analysis.pdf)
- - **WATCH:** [Evaluate the network in ipython](https://www.youtube.com/watch?v=bgfnz1wtlpo)
- - why evaluation matters; how to benchmark; analyzing a video and using scoremaps, conf. readouts, etc.
+- **Slides** [Evaluate your network](https://github.com/DeepLabCut/DeepLabCut-Workshop-Materials/blob/master/part3-analysis.pdf)
+- **WATCH:** [Evaluate the network in ipython](https://www.youtube.com/watch?v=bgfnz1wtlpo)
+ - why evaluation matters; how to benchmark; analyzing a video and using scoremaps, conf. readouts, etc.
### **Module 4: Scaling your analysis to many new videos**
Once you have good networks, you can deploy them. You can create "cron jobs" to run a timed analysis script, for example. We run this daily on new videos collected in the lab. Check out a simple script to get started, and read more below:
- - [Analyzing videos in batches, over many folders, setting up automated data processing](https://github.com/DeepLabCut/DLCutils/tree/master/SCALE_YOUR_ANALYSIS)
+- [Analyzing videos in batches, over many folders, setting up automated data processing](https://github.com/DeepLabCut/DLCutils/tree/master/SCALE_YOUR_ANALYSIS)
- - How to automate your analysis in the lab: [datajoint.io](https://datajoint.io), Cron Jobs: [schedule your code runs](https://www.ostechnix.com/a-beginners-guide-to-cron-jobs/)
+- How to automate your analysis in the lab: [datajoint.io](https://datajoint.io), Cron Jobs: [schedule your code runs](https://www.ostechnix.com/a-beginners-guide-to-cron-jobs/)
### **Module 5: Got Poses?
Now what ...** Pose estimation took away the painful part of digitizing your data, but now what? There is a rich set of tools out there to help you create your own custom analysis, or use others (and edit them to your needs). Check out more below: - - [Helper code and packages for use on DLC outputs](https://github.com/DeepLabCut/DLCutils) +- [Helper code and packages for use on DLC outputs](https://github.com/DeepLabCut/DLCutils) - - Create your own machine learning classifiers: https://scikit-learn.org/stable/ +- Create your own machine learning classifiers: https://scikit-learn.org/stable/ - - **REVIEW PAPER:** [Toward a Science of Computational Ethology](https://www.sciencedirect.com/science/article/pii/S0896627314007934) +- **REVIEW PAPER:** [Toward a Science of Computational Ethology](https://www.sciencedirect.com/science/article/pii/S0896627314007934) - - **REVIEW PAPER:** The state of animal pose estimation w/ deep learning i.e. "Deep learning tools for the measurement of animal behavior in neuroscience" [arXiv](https://arxiv.org/abs/1909.13868) & [published version](https://www.sciencedirect.com/science/article/pii/S0959438819301151) - - - **REVIEW PAPER:** [Big behavior: challenges and opportunities in a new era of deep behavior profiling](https://www.nature.com/articles/s41386-020-0751-7) +- **REVIEW PAPER:** The state of animal pose estimation w/ deep learning i.e. "Deep learning tools for the measurement of animal behavior in neuroscience" [arXiv](https://arxiv.org/abs/1909.13868) & [published version](https://www.sciencedirect.com/science/article/pii/S0959438819301151) - - **READ**: [Automated measurement of mouse social behaviors using depth sensing, video tracking, and machine learning](https://www.pnas.org/content/112/38/E5351) +- **REVIEW PAPER:** [Big behavior: challenges and opportunities in a new era of deep behavior profiling](https://www.nature.com/articles/s41386-020-0751-7) +- **READ**: [Automated measurement of mouse social behaviors using depth sensing, video tracking, and machine learning](https://www.pnas.org/content/112/38/E5351) *compiled and edited by Mackenzie Mathis* diff --git a/docs/dlc-live/dlc-live-gui/user_guide/misc/misc_landing.md b/docs/dlc-live/dlc-live-gui/user_guide/misc/misc_landing.md index 4bfab1712..40f8fa7d0 100644 --- a/docs/dlc-live/dlc-live-gui/user_guide/misc/misc_landing.md +++ b/docs/dlc-live/dlc-live-gui/user_guide/misc/misc_landing.md @@ -8,4 +8,4 @@ deeplabcut: In this section, you can find additional resources related to the GUI and DLC-live, including: - {ref}`file:dlclivegui-pretrained-models` : How to download and export pre-trained models from the DeepLabCut Model Zoo for use in the GUI -- {ref}`file:dlclivegui-tinmestamp-format` : Information on timestamp formats used in the GUI to help with synchronization +- {ref}`file:dlclivegui-timestamp-format` : Information on timestamp formats used in the GUI to help with synchronization diff --git a/docs/dlc-live/dlc-live-gui/user_guide/misc/timestamp_format.md b/docs/dlc-live/dlc-live-gui/user_guide/misc/timestamp_format.md index b0b6958e1..e7aed2612 100644 --- a/docs/dlc-live/dlc-live-gui/user_guide/misc/timestamp_format.md +++ b/docs/dlc-live/dlc-live-gui/user_guide/misc/timestamp_format.md @@ -3,7 +3,7 @@ deeplabcut: last_metadata_updated: '2026-03-17' ignore: false --- -(file:dlclivegui-tinmestamp-format)= +(file:dlclivegui-timestamp-format)= # Video timestamp format When recording videos, the application automatically saves frame timestamps to a JSON file alongside the video 
file.
diff --git a/docs/dlc-live/dlc-live-gui/user_guide/overview.md b/docs/dlc-live/dlc-live-gui/user_guide/overview.md index fb975b5e2..521b872dd 100644 --- a/docs/dlc-live/dlc-live-gui/user_guide/overview.md +++ b/docs/dlc-live/dlc-live-gui/user_guide/overview.md @@ -146,7 +146,7 @@ Find more information here if needed: {ref}`deeplabcut-live`.
```{note}
Timestamps are additionally saved in a JSON file alongside the video, providing precise timing information for when each frame was processed.
-See {ref}`file:dlclivegui-tinmestamp-format` for details.
+See {ref}`file:dlclivegui-timestamp-format` for details.
```
(sec:dlclivegui-recording-paths-info)=
diff --git a/docs/docker.md b/docs/docker.md index 7e4e44093..dcfc23bf9 100644 --- a/docs/docker.md +++ b/docs/docker.md @@ -7,66 +7,63 @@ deeplabcut: status: review_needed recommendation: verify ---
+
(docker-containers)=
-# DeepLabCut Docker containers
-
-For DeepLabCut 2.2.0.2 and onwards, we provide container containers on [DockerHub]( https://hub.docker.com/r/deeplabcut/deeplabcut). Using Docker is an alternative approach -to using DeepLabCut, which only requires the user to install [Docker]( https://www.docker.com/) on your machine, vs. following the step-by-step installation -guide for a Anaconda setup. All dependencies needed to run DeepLabCut in the terminal or -running Jupyter notebooks with DeepLabCut pre-installed are shipped with the provided -Docker images.
-
-The [`napari-deeplabcut` labelling GUI]( https://deeplabcut.github.io/DeepLabCut/docs/gui/napari_GUI.html) can be used to label -your data, but it cannot be run in a Docker container: it should be installed as -documented in the link above: `pip install napari-deeplabcut` (checkout the [workflow]( https://deeplabcut.github.io/DeepLabCut/docs/gui/napari_GUI.html#workflow) as well!).
+
+# DeepLabCut in Docker
+
+From DeepLabCut 2.2.0.2 onward, we provide container images on [DockerHub](https://hub.docker.com/r/deeplabcut/deeplabcut).
+Using Docker is an alternative approach to installing DeepLabCut in a local conda or pip environment: the images bundle all dependencies needed to run DeepLabCut in a reproducible, self-contained environment.
+In a Docker container, DeepLabCut can be used from the terminal or with Jupyter notebooks; the DeepLabCut GUI is not supported.
+The approach requires a local installation of [Docker / Docker Desktop](https://www.docker.com/), and is meant for users who need strict reproducibility, an isolated environment, or server-based automation.
+
+```{important}
+The napari-deeplabcut plugin **cannot be run in a Docker container**. To label
+your data, please {ref}`install napari-deeplabcut ` in a local, non-dockerized environment, e.g. using pip: `pip install napari-deeplabcut`.
+```
Advanced users can directly head to [DockerHub](https://hub.docker.com/r/deeplabcut/deeplabcut) and use the provided images there. To get started with the images, however, we also provide a helper tool, `deeplabcut-docker`, which makes the transition to docker images particularly convenient; to install the tool, run
-``` bash
+```bash
$ pip install deeplabcut-docker
```
-on your machine (potentially in a virtual environment, or an existing Anaconda environment).
-Note that this will *not* disprupt or install Tensorflow, or any other DeepLabCut dependencies on your computer---the Docker containers are completely isolated from your existing software installation!
+on your machine (in any environment).
`deeplabcut-docker` is just a lightweight package for setting up the Docker environment, and it will *not* disrupt your installation of TensorFlow, PyTorch, or any other dependencies. The Docker container itself is completely isolated from your existing software installation!
## Usage modes
With `deeplabcut-docker`, you can use the images in two modes.
-- *Note 1: When running any of the following commands first, it can take some time to complete (a few minutes, depending on your internet connection), since it downloads the Docker image in the background. If you do not see any errors in your terminal, assume that everything is working fine! Subsequent runs of the command will be faster.*
-- *Note 2: The labelling GUI cannot be used through the Docker images. However, you can install [`napari-deeplabcut`](https://github.com/DeepLabCut/napari-deeplabcut/tree/main?tab=readme-ov-file#napari-deeplabcut-keypoint-annotation-for-pose-estimation) in a conda environment to do the labelling!*
-- *Note 3: For any mode below, you might want to set which directory is the base, namely, so you can have read/write (or read-only access). Here is how to do so:
-If you want to mount the whole directory could e.g., pass*
-
-`deeplabcut-docker bash -v /home/mackenzie/DEEPLABCUT:/home/mackenzie/DEEPLABCUT`
-
-(which will mount the full directory into the container in read/write mode)
-
-If read-only access is enough, `deeplabcut-docker bash -v /home/mackenzie/DEEPLABCUT:/home/mackenzie/DEEPLABCUT:ro`
-
+```{note}
+1. When running any of the following commands for the first time, it can take a while to complete (a few minutes, depending on your internet connection), since the Docker image is downloaded in the background. If you do not see any errors in your terminal, assume that everything is working fine! Subsequent runs of the command will be faster.
+
+1. For any mode below, you might want to choose which directory is mounted as the base, so that you have read/write (or read-only) access to it. To mount a whole directory in read/write mode, pass e.g.
+ `deeplabcut-docker bash -v /home/mackenzie/DEEPLABCUT:/home/mackenzie/DEEPLABCUT`
+ If read-only access is enough, use `deeplabcut-docker bash -v /home/mackenzie/DEEPLABCUT:/home/mackenzie/DEEPLABCUT:ro`
+```
### Terminal mode
You can run the light version of DeepLabCut and open a terminal by running
-``` bash
+```bash
$ deeplabcut-docker bash
```
-**Important:** if have GPUs on your machine and want to use them to train models, you
+````{important}
+If you have GPUs on your machine and want to use them to train models, you
need to pass the `--gpus all` argument to `deeplabcut-docker`:
``` bash
$ deeplabcut-docker bash --gpus all
```
+````
Inside the terminal, you can confirm that DeepLabCut is correctly installed by running the following and noting which version is reported.
-``` bash
+```bash
$ ipython
>>> import deeplabcut
```
@@ -75,7 +72,7 @@ $ ipython
You can run DeepLabCut by starting a Jupyter notebook server. The corresponding image can be pulled and started by running
-``` bash
+```bash
$ deeplabcut-docker notebook
```
@@ -92,28 +89,35 @@ Advanced users and developers can visit the [`/docker` subdirectory](https://git
**(1)** Install Docker. See https://docs.docker.com/install/ & for Ubuntu: https://docs.docker.com/install/linux/docker-ce/ubuntu/
Test docker:
- $ sudo docker run hello-world
-
- The output should be: ``Hello from Docker!
This message shows that your installation appears to be working correctly.``
+```
+$ sudo docker run hello-world
+```
+The output should be: `Hello from Docker! This message shows that your installation appears to be working correctly.`
-*if you get the error ``docker: Error response from daemon: Unknown runtime specified nvidia.`` just simply restart docker:
- $ sudo systemctl daemon-reload
- $ sudo systemctl restart docker
+If you get the error `docker: Error response from daemon: Unknown runtime specified nvidia.`, simply restart Docker:
+```
+ $ sudo systemctl daemon-reload
+ $ sudo systemctl restart docker
+```
**(2)** Add your user to the docker group (https://docs.docker.com/install/linux/linux-postinstall/#manage-docker-as-a-non-root-user)
-Quick guide to create the docker group and add your user:
+Quick guide to create the docker group and add your user:
Create the docker group.
- $ sudo groupadd docker
+```
+$ sudo groupadd docker
+```
+
Add your user to the docker group.
- $ sudo usermod -aG docker $USER
+```
+$ sudo usermod -aG docker $USER
+```
(Restart your computer (best), or at minimum open a new terminal, so the new group membership takes effect.)
-
## Notes and troubleshooting
We dropped GUI support in 2.3.5+ due to numerous issues supporting it. Please also note that these images are tested on Unix systems.
diff --git a/docs/gui/PROJECT_GUI.md b/docs/gui/PROJECT_GUI.md index 139d200e0..fc450bd64 100644 --- a/docs/gui/PROJECT_GUI.md +++ b/docs/gui/PROJECT_GUI.md @@ -6,25 +6,22 @@ deeplabcut: visibility: online status: review_needed recommendation: update
- notes: "While the content is generally accurate, repeating installation instructions is not ideal. I would suggest linking to the installation guide instead of re-suggesting commmands but then still saying to read the install page... Also, the GUI is likely used by the majority of users, so I would even consider making this a full section in the TOC, and maybe even having one file per GUI tab, which would make tracking code/docs sync easier. Addendum: it seems the beginner guide section is more of a GUI step-by-step, as mentioned earlier in this comment. I would suggest merging/moving and adding links in the present doc, which would make it less of a video list and more of a proper GUI guide."
+ notes: 'While the content is generally accurate, repeating installation instructions is not ideal. I would suggest linking to the installation guide instead of re-suggesting commands but then still saying to read the install page... Also, the GUI is likely used by the majority of users, so I would even consider making this a full section in the TOC, and maybe even having one file per GUI tab, which would make tracking code/docs sync easier. Addendum: it seems the beginner guide section is more of a GUI step-by-step, as mentioned earlier in this comment. I would suggest merging/moving and adding links in the present doc, which would make it less of a video list and more of a proper GUI guide.'
---
-(project-manager-gui)=
-# Interactive Project Manager GUI
-
-As some users may be more comfortable working with an interactive interface, we wanted to provide an easy-entry point to the software. All the main functionality is available in an easy-to-deploy GUI interface. Thus, while the many advanced features are not fully available in this Project GUI, we hope this gets more users up-and-running quickly.
-**Release notes:** As of DeepLabCut 2.1+ now provide a full front-end user experience for DeepLabCut, and as of 2.3+ we changed the GUI from wxPython to PySide6 with napari support.
-
-## Get Started:
(project-manager-gui)=
-(1) Install DeepLabCut using the simple-install with Anaconda found [here!](how-to-install)*.
-Now you have DeepLabCut installed, but if you want to update it, either follow the prompt in the GUI which will ask you to upgrade when a new version is available, or just go into your env (activate DEEPLABCUT) then run:
+# Project Manager GUI
-` pip install 'deeplabcut[gui,modelzoo]'` *but please see [full install guide](how-to-install)!
+As some users may be more comfortable working with an interactive interface, we wanted to provide an easy entry point to the software. All the main functionality is available in an easy-to-use GUI.
+While several advanced features are not fully available in this Project GUI, we hope this gets more users up-and-running quickly.
+**Release notes:** As of DeepLabCut 2.1+, we provide a full front-end user experience for DeepLabCut, and as of 2.3+, the GUI moved from wxPython to PySide6 with napari support.
-(2) Open the terminal and run: `python -m deeplabcut`
+## Getting started
+1. Install DeepLabCut following the instructions in the {ref}`installation page <how-to-install>`.
+1. Open the terminal and run: `python -m deeplabcut`
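If you work from an interactive Python session instead, the GUI can also be started from the API. A minimal sketch; in recent versions the entry point is `launch_dlc`, but treat the exact name as an assumption to verify against your installed release:

```python
import deeplabcut

# Opens the same Project Manager GUI as running `python -m deeplabcut`
# from the terminal (assumed entry point; check your DeepLabCut version).
deeplabcut.launch_dlc()
```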

@@ -33,33 +30,44 @@ Now you have DeepLabCut installed, but if you want to update it, either follow t
Start at the Project Management Tab and work your way through the tabs to build your customized model and deploy it on new data. We recommend keeping the terminal visible (as well as the GUI) so you can see the ongoing processes as you step through your project, or any errors that might arise.
-- For specific napari-based labeling features, see the ["napari gui" docs](napari-gui-usage).
+- For specific napari-based labeling features, see the {ref}`napari GUI <file:napari-gui-landing>` section.
- To change from dark to light mode, set appearance at the top:
+

-## Video Demos: How to launch and run the Project Manager GUI:
+## User guide
+
+```{important}
+See the dedicated {ref}`file:beginners-guide` section for a step-by-step walkthrough of the GUI.
+```
+
+## Video demos
+
+### How to launch and run the Project Manager GUI
+```{tip}
**Click on the images!**
+```
Note that currently the video demo is the wxPython version, but the logic is the same!
[![Watch the video](https://images.squarespace-cdn.com/content/v1/57f6d51c9f74566f55ecf271/1572824438905-QY9XQKZ8LAJZG6BLPWOQ/ke17ZwdGBToddI8pDm48kIIa76w436aRzIF_cdFnEbEUqsxRUqqbr1mOJYKfIPR7LoDQ9mXPOjoJoqy81S2I8N_N4V1vUb5AoIIIbLZhVYxCRW4BPu10St3TBAUQYVKcLthF_aOEGVRewCT7qiippiAuU5PSJ9SSYal26FEts0MmqyMIhpMOn8vJAUvOV4MI/guilaunch.jpg?format=1000w)](https://youtu.be/KcXogR-p5Ak)
-### Using the Project Manager GUI with the latest DLC code (single animals, plus objects): ⬇️
+### Using the Project Manager GUI with the latest DLC code (single animals, plus objects)
[![Watch the video](https://images.squarespace-cdn.com/content/v1/57f6d51c9f74566f55ecf271/1589046800303-OV1CCNZINWDMF1PZWCWE/ke17ZwdGBToddI8pDm48kB4PVlRPKDmSlQNbUD3wvXgUqsxRUqqbr1mOJYKfIPR7LoDQ9mXPOjoJoqy81S2I8N_N4V1vUb5AoIIIbLZhVYxCRW4BPu10St3TBAUQYVKcaja1QZ1SznGf7WzFOi-J6zLusnaF2VdeZcKivwxvFiDfGDqVYuwbAlftad9hfoui/dlc_gui_22.png?format=1000w)](https://www.youtube.com/watch?v=JDsa8R5J0nQ)
[Read more here](important-info-regd-usage)
-### Using the Project Manager GUI with the latest DLC code (multiple identical-looking animals, plus objects):
+### Using the Project Manager GUI with the latest DLC code (multiple identical-looking animals, plus objects)
[![Watch the video](https://images.squarespace-cdn.com/content/v1/57f6d51c9f74566f55ecf271/1589047147498-G1KTFA5BXR4PVHOOR7OG/ke17ZwdGBToddI8pDm48kJDij24pM2COisBTLIGjR1pZw-zPPgdn4jUwVcJE1ZvWQUxwkmyExglNqGp0IvTJZamWLI2zvYWH8K3-s_4yszcp2ryTI0HqTOaaUohrI8PIel60EThn7SDFlTiSprUhmjQQHn9bhdY9dnQSKs8bCCo/Untitled.png?format=1000w)](https://www.youtube.com/watch?v=Kp-stcTm77g)
[Read more here](important-info-regd-usage)
-## VIDEO DEMO: How to benchmark your data with the new networks and data augmentation pipelines:
+### How to benchmark your data with the new networks and data augmentation pipelines
[Watch the video](https://youtu.be/WXCVr6xAcCA)
diff --git a/docs/gui/napari/advanced_usage.md b/docs/gui/napari/advanced_usage.md new file mode 100644 index 000000000..3ce04c4d5 --- /dev/null +++ b/docs/gui/napari/advanced_usage.md @@ -0,0 +1,79 @@
+---
+deeplabcut:
+ last_content_updated: '2026-04-09'
+ last_metadata_updated: '2026-04-09'
+ ignore: false
+ last_verified: '2026-04-09'
+ verified_for: 3.0.0rc14
+---
+
+(file:napari-dlc-advanced-features)=
+
+# napari-DLC - Advanced features
+
+napari-DLC provides several additional features to enhance the annotation experience.
+
+This section covers some of these features in more detail.
+For more basic features and workflows, see the {ref}`basic usage section <file:napari-dlc-basic-usage>`.
+
+## Layer status panel
+
+### Current folder
+
+The current folder associated with the active Points layer is displayed at the top of the dock widget.
+This is the folder where annotations will be saved when using **File -> Save Selected Layer(s)** (or `Ctrl+S`).
+
+### Labeling progress
+
+When a labeled data folder is loaded, the widget shows a percentage of labeled frames, based on the theoretical maximum number of keypoints (i.e. number of bodyparts x number of individuals x number of frames) that could be labeled.
+
+```{note}
+This can be a useful reference to track labeling progress.
+Since visibility cannot be accounted for, treat it as a rough estimate of relative labeling progress rather than an absolute measure of completeness: hidden or occluded keypoints are never labeled, so projects with occlusions will not have every bodypart on every individual in every frame.
+```
+
+### Point size slider
+
+The dock widget includes a slider to adjust the size of all keypoints in the viewer; the selected dot size will be saved in `config.yaml` for convenience, meaning DLC will reuse it for future sessions.
+
+## Copy-paste annotations
+
+To copy-paste keypoints from one frame to another:
+
+- Select the keypoints you want to copy using the selection tool (shortcut `3`)
+- Press `Ctrl+C` to copy the selected keypoints
+- Navigate to the target frame and press `Ctrl+V` to paste the keypoints
+
+## Color scheme display features
+
+The plugin shows a list of bodyparts and their corresponding colors in the dock widget. You can toggle the visibility of this color scheme using the **Show color scheme** button.
+
+```{tip}
+The display only shows keypoints that are currently visible in the viewer.
+To show all bodyparts in the color scheme from the config, use the checkbox at the top of the color scheme list. +``` + +### Quick body part/individual selection + +Clicking on a body part in the color scheme will select all keypoints of that body part in the viewer (including across individuals if applicable). + +This can be useful for quickly selecting and editing all keypoints of a specific body part. + +In individual coloring mode, the color scheme also shows the individuals list, and clicking on an individual will select all keypoints belonging to that individual. + +### Jump to body part in viewer + +To locate a bodypart label that is currently not visible in the viewer, enable "Show all bodyparts" in the color scheme list. +Then, click on a bodypart entry in the color scheme list. +The viewer will jump to the first instance of that body part and select it (when it exists). +If the bodypart is already visible in the viewer, clicking on it in the color scheme will simply select all keypoints of that bodypart, as described above. + +This helps quickly find a specific body part in the viewer. + +## Trajectory plot + +The **Show trajectories** button opens a trajectory plot in a separate dock widget. This plot shows the trajectories of all **selected keypoints** over time, and will color-code them according to the active color scheme (bodyparts or individuals). + +To show the trajectory of a specific keypoint, simply select that keypoint in the viewer (using the selection tool or by clicking on the corresponding body part in the color scheme). + +Additional controls in the trajectory plot dock widget allow you to zoom and pan the plot, as well as adjust the time window shown. diff --git a/docs/gui/napari/basic_usage.md b/docs/gui/napari/basic_usage.md new file mode 100644 index 000000000..5e126b072 --- /dev/null +++ b/docs/gui/napari/basic_usage.md @@ -0,0 +1,285 @@ +--- +deeplabcut: + last_content_updated: '2026-04-09' + last_metadata_updated: '2026-04-09' + ignore: false + last_verified: '2026-04-09' + verified_for: 3.0.0rc14 +--- + +(file:napari-dlc-basic-usage)= + +# napari-DLC - Basic usage + +`napari-deeplabcut` is a napari plugin for keypoint annotation and label refinement. It can be used either as part of the DeepLabCut GUI or as a standalone annotation tool. + +## Before you start + +If you installed `DeepLabCut[gui]`, `napari-deeplabcut` is already included. + +### In the DeepLabCut GUI + +When labeling frames, checking labels, or manually extracting frames from videos, the napari plugin will open automatically. + +### As a standalone plugin + +You can also install it as a standalone plugin: + +```bash +pip install napari-deeplabcut +``` + +Start napari from a terminal: + +```bash +napari +``` + +Then open the plugin from: + +**Plugins -> napari-deeplabcut: Keypoint controls** + +## Supported inputs + +The plugin reader can open the following inputs: + +- DeepLabCut `config.yaml` +- Image folders (supports `.png`, `.jpg`, extracted frames from DLC, as well as folders of mixed formats) +- Videos (`.mp4`, `.avi`, `.mov`) +- `.h5` annotation files + +You can load files either by: + +- dragging and dropping them onto the napari viewer, or +- using the **File** menu + +```{tip} +If you drag and drop a compatible labeled-data folder, the widget opens automatically. +``` + +## Using napari + +```{important} +To familiarize yourself with napari, we recommend checking out the [official napari documentation and tutorials](https://napari.org/stable/usage.html). 
+``` + +## Recommended basic labeling workflow + +The simplest way to **start labeling** is: + +1. Open an image-only folder +1. Open the corresponding `config.yaml` from your DeepLabCut project + +**OR** + +1. Open a folder inside a DeepLabCut project's `labeled-data` directory with a `CollectedData_.h5` file already present + +```{note} +In this case, you do not have to load in the `config.yaml` as the plugin will automatically read the project config from the expected location relative to the `CollectedData...` file. +``` + +This creates: + +- an **Image** layer containing the images (or video frames) +- a **Points** layer initialized with the keypoints defined in the project config + - The `CollectedData_.h5` contains your ground truth annotations + - Any `machinelabels-iter<...>.h5` files contain machine predictions that can be refined and saved into `CollectedData...` + +```{tip} +When machine labels are present, you will see keypoints from ALL current layers. +Before editing, make sure to hide other layers to avoid confusion, and select the correct layer to edit (e.g. the `machinelabels...` layer if you want to refine machine predictions). +``` + +You can then start annotating directly in the **Points** layer. +To do so, make sure the correct **Points** layer is selected in the layer list (left panel of the viewer). Click on the **+** icon to start adding keypoints; the selection tool to edit existing keypoints; and the pan/zoom tool to navigate the viewer. + +## Labeling + +Once the **Points** layer is active, you can place and edit keypoints in the viewer. + +### Widget options + +- **Keypoint selection**: The dropdown shows which bodypart will be added when placing a new keypoint in the Points layer. It can be changed manually, and will be updated according to the active labeling mode (see below). +- **View shortcuts**: opens a reference of napari-deeplabcut shortcuts and their context (i.e. when they are active). +- **Show tutorial**: opens the napari-DLC tutorial panels. + +#### Labeling mode + +- **Sequential**: when a keypoint is placed, the next keypoint in the config list is automatically selected. This is useful for labeling frames in order. Adding an already present keypoint in the frame does nothing. +- **Quick**: As sequential, but adding an already present keypoint in the frame will move it to the new location. +- **Loop**: The currently selected bodypart is retained and the viewer advances to the next frame. This is useful for labeling a specific body part across many frames in a row. If the end of the video is reached, the viewer will loop back to the beginning. + +The dock widget also provides additional controls, including: + +- **Warn on overwrite**: enable or disable overwrite confirmation +- **Show trails**: display keypoint trails over time +- **Show trajectories**: open a trajectory plot in a separate dock widget +- **Show color scheme**: display the active color mapping +- **Video tools**: extract frames and store crop coordinates when a video is loaded + +## Saving annotations + +To save annotations, select the **Points** layer you want to save and use: + +**File -> Save Selected Layer(s)...** + +or press: + +```text +Ctrl+S +``` + +```{note} +If you open a folder that is outside a DeepLabCut project and then save a Points layer, you will be prompted to provide the corresponding `config.yaml`. After saving, you can move the labeled-data folder into your project for downstream DeepLabCut workflows. 
+``` + +Annotations are saved into the dataset folder as: + +```text +CollectedData_.h5 +``` + +These are the ground truth annotations that DeepLabCut will use for training and evaluation. +A companion CSV file is also written: + +```text +CollectedData_.csv +``` + +```{important} +DeepLabCut uses the `.h5` file as the authoritative annotation file. CSVs and machine labels will not be taken into account for training. +``` + +### Save behavior and notes + +- Make sure the correct **Points** layer is selected before saving. +- If several Points layers are selected at the same time, the plugin will not save them in order to avoid ambiguity. +- If saving would overwrite existing annotations, the plugin will ask for confirmation. + - This confirmation can be disabled by unchecking **Warn on overwrite** in the dock widget. + +```{note} +Several plugin functions expect `config.yaml` to be located two folders above the saved `CollectedData...` file, matching the standard DeepLabCut project structure.
+Keeping data inside the project directory is recommended for best compatibility. Fallbacks asking for the config file location are provided when this structure is not respected, but some features may be disabled or limited in that case. +``` + +### Useful shortcuts + +- napari native: + - `2` / `3`: switch between labeling and selection mode + - `4`: pan and zoom mode + - `Ctrl+R`: reset the viewer to the default zoom and position +- napari-deeplabcut specific: + - `M`: cycle through annotation modes + - `E`: toggle edge coloring + - `F`: toggle between individual and bodypart coloring modes + - `V`: toggle visibility of the selected layer + - `Backspace`: delete selected point(s) + - `Ctrl+C` / `Ctrl+V`: copy and paste selected points + +```{tip} +Use the **View shortcuts** button in the dock widget for a quick reference of napari-deeplabcut shortcuts and their context (i.e. when they are active). +``` + +### More quality-of-life features + +See the {ref}`Advanced features ` for useful features such as copy-pasting annotations, quick bodypart selection, and more. + +## Labeling workflows + +### Labeling from scratch + +Use this when the image folder does **not** yet contain a `CollectedData_.h5` file. + +1. Open a folder of extracted images +1. Open the corresponding DeepLabCut `config.yaml` +1. Select the created **Points** layer +1. Label keypoints +1. Save with `Ctrl+S` + +After saving, the folder will contain: + +```text +CollectedData_.h5 +CollectedData_.csv +``` + +### Resuming labeling + +Use this when the folder already contains a `CollectedData_.h5` file. + +- Open (or drag and drop) the folder in napari. + +Existing annotations and keypoint metadata will be loaded automatically from the H5 file. +In this case, loading `config.yaml` is usually **not needed** unless: + +- The project's bodyparts have changed or +- You want to refresh the configured color scheme + +### Refining machine labels + +Use this when the folder contains a machine predictions file such as: + +```text +machinelabels-iter<...>.h5 +``` + +Open the folder in napari. + +If both a `CollectedData...` file and a `machinelabels...` file are present: + +1. Edit the `machinelabels` layer +1. Optionally press `E` to show edge coloring (red edges indicate confidence below the threshold defined in `config.yaml`) +1. Hide other layers to avoid confusion while editing +1. Edit keypoints in the `machinelabels` layer to refine machine predictions +1. Save the selected `machinelabels` layer + +The refined annotations will be merged into `CollectedData...`. + +If only `machinelabels...` is present, saving refinements will still create a new `CollectedData...` target. + +```{important} +Saving a `machinelabels...` layer does **not** overwrite the machine labels file itself. +Refinements are written into the appropriate `CollectedData...` file.
+Make sure overwrite confirmation is enabled if you want to avoid accidentally overwriting existing `CollectedData...` annotations. +``` + +## Video workflow (crop and frame extraction) + +Videos can also be opened directly in napari. + +```{tip} +This works best by using the main DLC GUI and following steps there for manual frame extraction, which will automatically open the video in napari. +The workflow is otherwise the same when opening a video directly in napari. +``` + +When a video is loaded, the plugin provides a small video action panel that can be used to: + +- Extract the current frame into the dataset +- Optionally export existing machine labels for that frame (load the corresponding h5 file first) +- Define and save crop coordinates to the DeepLabCut `config.yaml` + +Keypoints from video-based workflows can be edited and saved in the same way as image-folder workflows. + +## Working with multiple folders + +We do not currently support working on **more than one dataset folder at a time**. +If a new folder is opened while another one is already open, the plugin will prevent new frames from being loaded, attempt to load annotations using the current folder context, and show a warning. + +After finishing one folder, simply: + +1. Save the relevant **Points** layer +1. Remove the current layers from the viewer using the layer list (left panel) +1. Open the next folder (e.g. by dragging and dropping it onto the viewer) + +This helps keep saving behavior unambiguous. + +## Demo + +A short demo video is available here: + +[Link to video](https://youtu.be/hsA9IB5r73E) + +```{warning} +This demo may be outdated, but the general annotation workflow remains the same. If you would like an updated video tutorial, please open a feature request issue on GitHub, and we will update it. + +``` diff --git a/docs/gui/napari_GUI.md b/docs/gui/napari_GUI.md index 3c97d1f41..79f1ee7e9 100644 --- a/docs/gui/napari_GUI.md +++ b/docs/gui/napari_GUI.md @@ -1,228 +1,22 @@ --- deeplabcut: last_content_updated: '2026-02-10' - last_metadata_updated: '2026-03-06' + last_metadata_updated: '2026-04-09' ignore: false visibility: online status: outdated recommendation: archive notes: Being updated in a separate PR (#3280) + last_verified: '2026-04-09' + verified_for: 3.0.0rc14 --- -(napari-gui)= -# napari labeling GUI +(file:napari-gui-landing)= +# napari GUI -We replaced wxPython with PySide6 + as of version 2.3. Here is how to use the napari-aspects of the new GUI. It is available in napari-hub as a stand alone GUI as well as integrated into our main GUI, [please see docs here](https://deeplabcut.github.io/DeepLabCut/docs/gui/PROJECT_GUI.html). +Welcome to the documentation for napari-DLC, the napari plugin for keypoint annotation and label refinement. This plugin can be used either as part of the DeepLabCut GUI or as a standalone annotation tool. 
-[![PyPI](https://img.shields.io/pypi/v/napari-deeplabcut.svg?color=green)](https://pypi.org/project/napari-deeplabcut) -[![Python Version](https://img.shields.io/pypi/pyversions/napari-deeplabcut.svg?color=green)](https://python.org) -[![tests](https://github.com/DeepLabCut/napari-deeplabcut/workflows/tests/badge.svg)](https://github.com/DeepLabCut/napari-deeplabcut/actions) -[![codecov](https://codecov.io/gh/DeepLabCut/napari-deeplabcut/branch/main/graph/badge.svg)](https://codecov.io/gh/DeepLabCut/napari-deeplabcut) -[![napari hub](https://img.shields.io/endpoint?url=https://api.napari-hub.org/shields/napari-deeplabcut)](https://napari-hub.org/plugins/napari-deeplabcut) +## Table of contents -A napari plugin for keypoint annotation with DeepLabCut. - - -## Installation - -You can install the full DeepLabCut napari-based GUI via [pip] by running this in your conda env: - -`pip install 'deeplabcut[tf,gui]'` or mac M1/M2 chip users: `pip install 'deeplabcut[apple_mchips,gui]'` - -*please note this is available since v2.3 - -This is not needed if you ran the above installation, but you can install the stand-alone `napari-deeplabcut` via [pip]: - -` pip install napari-deeplabcut ` - - -To install latest development version: - - ` pip install git+https://github.com/DeepLabCut/napari-deeplabcut.git ` - - -(napari-gui-usage)= -## Usage - -To use the full GUI, please run: - -`python -m deeplabcut` - -To use the stand-alone napari plugin, please launch napari: - -`napari ` - -Then, activate the plugin in Plugins > napari-deeplabcut: Keypoint controls. - -All accepted files (`config.yaml`, images, `.h5` data files) can be loaded either by dropping them directly onto the canvas or via the File menu. - -The easiest way to get started is to drop a folder (typically a folder from within a DeepLabCut's `labeled-data` directory), and, if labeling from scratch, drop the corresponding `config.yaml` to automatically add a `Points layer` and populate the dropdown menus. - -[🎥 DEMO](https://youtu.be/hsA9IB5r73E) - -**Tools & shortcuts are:** - -- `2` and `3`, to easily switch between labeling and selection mode -- `4`, to enable pan & zoom (which is achieved using the mouse wheel or finger scrolling on the Trackpad) -- `M`, to cycle through regular (sequential), quick, and cycle annotation mode (see the description [here](https://github.com/DeepLabCut/DeepLabCut-label/blob/ee71b0e15018228c98db3b88769e8a8f4e2c0454/dlclabel/layers.py#L9-L19)) -- `E`, to enable edge coloring (by default, if using this in refinement GUI mode, points with a confidence lower than 0.6 are marked -in red) -- `F`, to toggle between animal and body part color scheme. -- `V`, to toggle visibility of the selected layer. -- `backspace` to delete a point. -- Check the box "display text" to show the label names on the canvas. -- To move to another folder, be sure to save (Ctrl+S), then delete the layers, and re-drag/drop the next folder. - -![napari_shortcuts](https://images.squarespace-cdn.com/content/v1/57f6d51c9f74566f55ecf271/192345a5-e411-4d56-b718-ef52f91e195e/Qwerty.png?format=1500w) - - - -### Save Layers - -Annotations and segmentations are saved with `File > Save Selected Layer(s)...` (or its shortcut `Ctrl+S`). -Only when saving segmentation masks does a save file dialog pop up to name the destination folder; -keypoint annotations are otherwise automatically saved in the corresponding folder as `CollectedData_.h5`. 
-- As a reminder, DLC will only use the H5 file; so be sure if you open already labeled images you save/overwrite the H5. -- Note, before saving a layer, make sure the points layer is selected. If the user clicked on the image(s) layer first, does `Save As`, then closes the window, any labeling work during that session will be lost! -- Modifying and then saving points in a `machinelabels...` layer will add to or overwrite the existing `CollectedData` layer and will **not** save to the `machinelabels` file. - -### Video frame extraction and prediction refinement - -Since v0.0.4, videos can be viewed in the GUI. - -Since v0.0.5, trailing points can be visualized; e.g., helping in the identification -of swaps or outlier, jittery predictions. - -Loading a video (and its corresponding output h5 file) will enable the video actions -at the top of the dock widget: they offer the option to manually extract video -frames from the GUI, or to define cropping coordinates. -Note that keypoints can be displaced and saved, as when annotating individual frames. - - -## Workflow - -Suggested workflows, depending on the image folder contents: - -1. **Labeling from scratch** – the image folder does not contain `CollectedData_.h5` file. - - Open *napari* as described in [Usage](#usage) and open an image folder together with the DeepLabCut project's `config.yaml`. - The image folder creates an *image layer* with the images to label. - Supported image formats are: `jpg`, `jpeg`, `png`. - The `config.yaml` file creates a *Points layer*, which holds metadata (such as keypoints read from the config file) necessary for labeling. - Select the *Points layer* in the layer list (lower left pane on the GUI) and click on the *+*-symbol in the layer controls menu (upper left pane) to start labeling. - The current keypoint can be viewed/selected in the keypoints dropdown menu (right pane). - The slider below the displayed image (or the left/right arrow keys) allows selecting the image to label. - - To save the labeling progress refer to [Save Layers](#save-layers). - `Data successfully saved` should be shown in the status bar, and the image folder should now contain a `CollectedData_.h5` file. - (Note: For convenience, a CSV file with the same name is also saved.) - -2. **Resuming labeling** – the image folder contains a `CollectedData_.h5` file. - - Open *napari* and open an image folder (which needs to contain a `CollectedData_.h5` file). - In this case, it is not necessary to open the DLC project's `config.yaml` file, as all necessary metadata is read from the `h5` data file. - - Saving works as described in *1*. - - ***Note that if a new body part has been added to the `config.yaml` file after having started to label, loading the config in the GUI is necessary to update the dropdown menus and other metadata.*** - - ***As `viridis` is `napari-deeplabcut` default colormap, selecting the colormap in the GUI or loading the config in the GUI can be used to update the color scheme.*** - -4. **Refining labels** – the image folder contains a `machinelabels-iter<#>.h5` file. - - The process is analog to *2*. - Open *napari* and open an image folder. - If the video was originally labeled, *and* had outliers extracted it will contain a `CollectedData_.h5` file and a `machinelabels-iter<#>.h5` file. In this case, select the `machinelabels` layer in the GUI, and type `e` to show edges. Red indicates likelihood < 0.6. As you navigate through frames, images with labels with edges will need to be refined (moved, deleted, etc). 
Images with labels without edges will be on the `CollectedData` (previous manual annotations) layer and shouldn't need refining. However, you can switch to that layer and fix errors. You can also right-click on the `CollectedData` layer and select `toggle visibility` to hide that layer. Select the `machinelabels` layer before saving which will append your refined annotations to `CollectedData`. - - If the folder only had outliers extracted and wasn't originally labeled, it will not have a `CollectedData` layer. Work with the `machinelabels` layer selected to refine annotation positions, then save. - - In this case, it is not necessary to open the DLC project's `config.yaml` file, as all necessary metadata is read from the `h5` data file. - - Saving works as described in *1*. - -6. **Drawing segmentation masks** - - Drop an image folder as in *1*, manually add a *shapes layer*. Then select the *rectangle* in the layer controls (top left pane), - and start drawing rectangles over the images. Masks and rectangle vertices are saved as described in [Save Layers](#save-layers). - Note that masks can be reloaded and edited at a later stage by dropping the `vertices.csv` file onto the canvas. - -### Workflow flowchart - -```{mermaid} -graph TD - id1[What stage of labeling?] - id2[deeplabcut.label_frames] - id3[deeplabcut.refine_labels] - id4[Add labels to, or modify in, \n `CollectedData...` layer and save that layer] - id5[Modify labels in `machinelabels` layer and save \n which will create a `CollectedData...` file] - id6[Have you refined some labels from the most recent iteration and saved already?] - id7["All extracted frames are already saved in `CollectedData...`. -1. Hide or trash all `machinelabels` layers. -2. Then modify in and save `CollectedData`"] - id8[" -1. hide or trash all `machinelabels` layers except for the most recent. -2. Select most recent `machinelabels` and hit `e` to show edges. -3. Modify only in `machinelabels` and skip frames with labels without edges shown. -4. Save `machinelabels` layer, which will add data to `CollectedData`. - - If you need to revisit this video later, ignore `machinelabels` and work only in `CollectedData`"] - - id1 -->|I need to manually label new frames \n or fix my labels|id2 - id1 ---->|I need to refine outlier frames \nfrom analyzed videos|id3 - id2 -->id4 - id3 -->|I only have a `machinelabels...` file|id5 - id3 ---->|I have both `machinelabels` and `CollectedData` files|id6 - id6 -->|yes|id7 - id6 ---->|no, I just extracted outliers|id8 -``` - -### Labeling multiple image folders - -Labeling multiple image folders has to be done in sequence; i.e., only one image folder can be opened at a time. -After labeling the images of a particular folder is done and the associated *Points layer* has been saved, *all* layers should be removed from the layers list (lower left pane on the GUI) by selecting them and clicking on the trashcan icon. -Now, another image folder can be labeled, following the process described in *1*, *2*, or *3*, depending on the particular image folder. - - -### Defining cropping coordinates - -Prior to defining cropping coordinates, two elements should be loaded in the GUI: -a video and the DLC project's `config.yaml` file (into which the crop dimensions will be stored). -Then it suffices to add a `Shapes layer`, draw a `rectangle` in it with the desired area, -and hit the button `Store crop coordinates`; coordinates are automatically written to the configuration file. - - -## Contributing - -Contributions are very welcome. 
Tests can be run with [tox], please ensure
-the coverage at least stays the same before you submit a pull request.
-
-To locally install the code, please git clone the repo and then run `pip install -e .`
-
-
-## Issues
-
-If you encounter any problems, please [file an issue] along with a detailed description.
-
-[file an issue]: https://github.com/DeepLabCut/napari-deeplabcut/issues
-
-
-## Acknowledgements
-
-
-This [napari] plugin was generated with [Cookiecutter] using [@napari]'s [cookiecutter-napari-plugin] template. We thank the Chan Zuckerberg Initiative (CZI) for funding this work!
-
-
-
-
-[napari]: https://github.com/napari/napari
-[Cookiecutter]: https://github.com/audreyr/cookiecutter
-[@napari]: https://github.com/napari
-[cookiecutter-napari-plugin]: https://github.com/napari/cookiecutter-napari-plugin
-[BSD-3]: http://opensource.org/licenses/BSD-3-Clause
-[tox]: https://tox.readthedocs.io/en/latest/
-[pip]: https://pypi.org/project/pip/
-[PyPI]: https://pypi.org/
+- [Installation (on GitHub)](https://github.com/DeepLabCut/napari-deeplabcut?tab=readme-ov-file#installation)
+- {ref}`Basic usage `
+- {ref}`Advanced features `
diff --git a/docs/installation.md b/docs/installation.md
index dd9d0e743..d0403bd36 100644
--- a/docs/installation.md
+++ b/docs/installation.md
@@ -10,23 +10,31 @@ deeplabcut:
  last_verified: '2026-04-21'
  verified_for: 3.0.0rc14
---
+
(file:how-to-install)=
-# How To Install DeepLabCut
-- **DeepLabCut can be run on Windows, Linux, or MacOS as long as you have Python 3.10 installed**
-  - (see also [technical considerations](tech-considerations-during-install) and if you run into issues also check out the [Installation Tips](https://deeplabcut.github.io/DeepLabCut/docs/recipes/installTips.html) page).
-- 🚧 Please note, there are several modes of installation:
-  - please decide to either use a [**conda environment**](https://deeplabcut.github.io/DeepLabCut/docs/installation.html#conda-the-installation-process-is-as-easy-as-this-figure) based installation (**recommended**),
-  - or the supplied [**Docker container**](docker-containers) (recommended for Ubuntu advanced users).
-- 🚀 Please note, you will get the best performance with using a **GPU**!
-  - Please see the section on [GPU support](https://deeplabcut.github.io/DeepLabCut/docs/installation.html#gpu-support) to install your GPU driver and CUDA.
+# Installing DeepLabCut
+
+- **DeepLabCut can be run on Windows, Linux, or MacOS as long as you have Python 3.10-3.12 installed**
+  - See also {ref}`technical considerations <sec:hardware-considerations-during-install>`.
+
+- 🚧 Please note, there are several possibilities for installation:
+  - **Recommended for most users**: Install in a [**conda environment**](https://deeplabcut.github.io/DeepLabCut/docs/installation.html#conda-the-installation-process-is-as-easy-as-this-figure)
+  - Install with **{ref}`uv <sec:uv-install>`** (recommended for developers)
+  - In the supplied **{ref}`Docker container <docker-containers>`** (recommended for Ubuntu advanced users and reproducibility).
+- 🚀 You will get the best performance when using a **GPU**!
+  - Please see the section on {ref}`GPU support <sec:install-gpu-support>` to install your GPU driver and CUDA.

-```{Hint} Familiar with python packages and conda? Quick Install Guide:
+````{hint}
+Familiar with python packages and conda?

This assumes you have `conda`/`mamba` installed and this will install DeepLabCut in a fresh
-environment. 
If you have an NVIDIA GPU, install PyTorch according to [their instructions
-](https://pytorch.org/get-started/locally/) (with your desired CUDA version) - you just
-need your GPU drivers installed.
+environment.
+If you have an NVIDIA GPU, install PyTorch according to [their instructions](https://pytorch.org/get-started/locally/) (with your desired CUDA version) - you just need your GPU drivers installed.

```bash
conda create -n DEEPLABCUT python=3.12
@@ -37,7 +45,6 @@ conda activate DEEPLABCUT

# GPU version of pytorch for CUDA 11.3
conda install pytorch cudatoolkit=11.3 -c pytorch
-
# install the latest version of DeepLabCut
pip install --pre deeplabcut
# or if you want to use the GUI
@@ -47,97 +54,120 @@ pip install --pre deeplabcut[gui]

# should print `True`
python -c "import torch; print(torch.cuda.is_available())"
```
+````

-- If you're familiar with the command line and want TensorFlow support, look [below](
-deeplabcut-with-tf-install) for a fresh installation that has worked for us (on Linux)
-and makes it possible to use the GPU with both PyTorch and TensorFlow.
+- If you're familiar with the command line and want TensorFlow support, look {ref}`below <sec:deeplabcut-with-tf-install>` for a fresh installation on Linux that makes it possible to use the GPU with both PyTorch and TensorFlow.

+## Using Conda

-## CONDA: The installation process is as easy as this figure! -->
+DLC
-DLC

+**The installation process is as easy as the figure on the right!↘️**

-### 🚨 Before you start with our conda file, do you have a GPU?
-````{admonition} 🚨 Click here for more information!
-:class: dropdown
-- We recommend having a GPU if possible!
-- You **need to decide if you want to use a CPU or GPU for your models**: (Note, you can also use the CPU-only for project management and labeling the data! Then, for example, use Google Colaboratory GPUs for free (read more [here](https://github.com/DeepLabCut/DeepLabCut/tree/master/examples#demo-4-deeplabcut-training-and-analysis-on-google-colaboratory-with-googles-gpus) and there are a lot of helper videos on [our YouTube channel!](https://www.youtube.com/playlist?list=PLjpMSEOb9vRFwwgIkLLN1NmJxFprkO_zi)).
+### 🚨 Before you start...

- - **CPU?** Great, jump to the next section below!
+Do you have a GPU? If yes, see the {ref}`GPU support section <sec:install-gpu-support>` below for installation instructions.

- - **NVIDIA GPU?** If you want to use your own GPU (i.e., a GPU is in your workstation), then you need to be sure you have a CUDA compatible GPU, CUDA, and cuDNN installed. Please note, which CUDA you install depends on what version of PyTorch you want to use. So, please check "GPU Support" below carefully. **Note, DeepLabCut is up to date with the latest CUDA and PyTorch!**
+If not, you can still install DeepLabCut and use it on your CPU, but it will be much slower for training and evaluation (but not for labeling or project management).

- - **Apple M-chip GPU?** Be sure to install miniconda3, and your GPU will be used by default.
-````
+`````{admonition} 🚨 Hardware information!
+---
+class: dropdown
+---
+- We recommend having a GPU if possible!
+- You **need to decide if you want to use a CPU or GPU for your models**
+
+  ````{tab-set}
+  ```{tab-item} CPU
+  Great, jump to the next section below!
+  ```
+  ```{tab-item} NVIDIA GPU
+  If you want to use your own GPU (i.e., a GPU is in your workstation), then you need to be sure you have a CUDA compatible GPU, CUDA, and cuDNN installed.
+  Please note that the CUDA version you install depends on what version of PyTorch you want to use.
So, please check {ref}`sec:install-gpu-support` below carefully. **Note, DeepLabCut is up to date with the latest CUDA and PyTorch!** + ``` + ```{tab-item} Apple M-chip GPU + Be sure to install miniconda, and your GPU will be used by default. + ``` + ```` + +- Note, you can also use the CPU-only install for project management and labeling the data! + Then, for example, use Google Colaboratory GPUs for free (read more [here](https://github.com/DeepLabCut/DeepLabCut/tree/master/examples#demo-4-deeplabcut-training-and-analysis-on-google-colaboratory-with-googles-gpus) and there are a lot of helper videos on [our YouTube channel!](https://www.youtube.com/playlist?list=PLjpMSEOb9vRFwwgIkLLN1NmJxFprkO_zi)). +````` + +### Step 1: Install miniconda + +```{important} +Download [miniconda](https://www.anaconda.com/docs/getting-started/miniconda/main) for your operating system +``` -### Step 1: Install Python via Anaconda +- miniconda is an easy way to install Python and additional packages across various operating systems +- With miniconda, you can install all the dependencies in an [environment](https://conda.io/docs/user-guide/tasks/manage-environments.html) on your machine +- Miniconda is a lightweight version of Anaconda that includes only conda and its dependencies. -### Install [anaconda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html#), or use miniconda3 for MacOS users (see below) +```{admonition} Wait, why are we mixing Anaconda, miniconda and conda? +--- +class: dropdown tip +--- +`conda` is the terminal-based environment management system that is included in both Anaconda and Miniconda. This is the actual workhorse that allows you to create and manage environments, and install packages. -- Anaconda is an easy way to install Python and additional packages across various operating systems. With Anaconda you create all the dependencies in an [environment](https://conda.io/docs/user-guide/tasks/manage-environments.html) on your machine. +**Anaconda** is a full-featured distribution that includes conda, Python, and a large number of scientific packages and their dependencies, plus some graphical user interfaces (GUIs) for managing environments and packages. It is a larger download and takes up more disk space. -```{Hint} -Download anaconda for your operating system: [anaconda.com/download/ -](https://www.anaconda.com/download/) +**Miniconda** is a minimal distribution that includes only conda and its dependencies, along with Python. It does not include any additional packages or GUIs. We recommend it as most GUIs and base packages provided by the full Anaconda distribution are not necessary for DeepLabCut. ``` -- IF you use a M1 or M2 chip in your MacBook with v12.5+ (typically 2020 or newer machines), we recommend **miniconda3,** which operates with the same principles as anaconda. This is straight forward and explained in detail here: https://docs.conda.io/projects/conda/en/latest/user-guide/install/macos.html. But in short, open the program "terminal" and copy/paste and run the code that is supplied below. +(sec:conda-build-env)= -### 💡 miniconda for Mac -````{admonition} Click the button to see code for miniconda for Mac -:class: dropdown -wget https://repo.anaconda.com/miniconda/Miniconda3-py310_4.12.0-MacOSX-arm64.sh -O ~/miniconda.sh -bash ~/miniconda.sh -b -p $HOME/miniconda -source ~/miniconda/bin/activate -conda init zsh -```` +### Step 2: Build a conda environment -### Step 2: Build an Env using our Conda file! 
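+In short, the whole step condenses to the sketch below (this assumes `DEEPLABCUT.yaml` was downloaded to your `Downloads` folder; adjust the path to wherever you saved it):
+
+```bash
+cd ~/Downloads                        # folder containing DEEPLABCUT.yaml
+conda env create -f DEEPLABCUT.yaml   # build the environment
+conda activate DEEPLABCUT             # enter it
+```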
+Use the `DEEPLABCUT.yaml` file to build a conda environment with all the dependencies for DeepLabCut. -You simply need to have this `.yaml` file anywhere locally on your computer. So, let's download it! +You simply need to have this `.yaml` file locally on your computer. -```{Hint} -Windows users: Be sure you have `git` installed along with anaconda: https://gitforwindows.org/ +```{warning} +On **Windows**, make sure you have `git` installed: [Git for Windows](https://gitforwindows.org/) ``` -- TO DIRECTLY DOWNLOAD THE CONDA FILE conda: +- Follow the link ➡️ for the [conda file](https://github.com/DeepLabCut/DeepLabCut/blob/main/conda-environments/DEEPLABCUT.yaml#:~:text=Raw%20file%20content-,Download,-%E2%8C%98) and then click "..." and select Download - - click ➡️ for [CONDA FILE](https://github.com/DeepLabCut/DeepLabCut/blob/main/conda-environments/DEEPLABCUT.yaml#:~:text=Raw%20file%20content-,Download,-%E2%8C%98) and then click the "..." and select Download - Screen Shot 2023-09-13 at 10 33 32 PM + Screen Shot 2023-09-13 at 10 33 32 PM -- **Now, in Terminal (or Anaconda Command Prompt for Windows users)**, if you clicked to download, go to your downloads folder. +- **Now, in Terminal (or Anaconda Command Prompt for Windows users)**: -```{Hint} -Windows users: Be sure to open the program terminal/cmd/anaconda prompt with a RIGHT-click, "open as admin" -``` + - If you clicked to download, go to your downloads folder. -```{Hint} -:class: dropdown -If you cloned the repo onto your Desktop, the command may look like: -``cd C:\Users\YourUserName\Desktop\DeepLabCut\conda-environments`` -You can (on Windows) hold SHIFT and right-click > Copy as path, or (on Mac) right-click and while in the menu press the OPTION key to reveal Copy as Pathname. -``` -Be sure you are in the folder that has the `.yaml` file, then run: + - Be sure you are in the folder that has the `.yaml` file, then run: + + `conda env create -f DEEPLABCUT.yaml` + +- You can now use this environment from anywhere on your computer. + Just activate your environment by running: `conda activate DEEPLABCUT` -``conda env create -f DEEPLABCUT.yaml`` +Now you should see (`DEEPLABCUT`) on the left of your terminal screen: +``` +(DEEPLABCUT) YourName-MacBook... +``` -- You can now use this environment from anywhere on your computer (i.e., no need to go back into the conda- folder). Just enter your environment by running: - - Ubuntu/MacOS: ``source/conda activate nameoftheenv`` (i.e. on your Mac: ``conda activate DEEPLABCUT``) - - Windows: ``activate nameoftheenv`` (i.e. ``activate DEEPLABCUT``) +```{note} +No need to run `pip install deeplabcut`, it's already in the conda file! +``` -Now you should see (`nameofenv`) on the left of your terminal screen, i.e. ``(DEEPLABCUT) YourName-MacBook...`` -NOTE: no need to run pip install deeplabcut, as it is already installed!!! :) +(sec:deeplabcut-with-tf-install)= -(deeplabcut-with-tf-install)= -### 💡 Notice: PyTorch and TensorFlow Support within DeepLabCut +#### TensorFlow support ````{admonition} DeepLabCut TensorFlow Support -:class: dropdown -As of June 2024 we have a PyTorch Engine backend and we will be depreciating the -TensorFlow backend by the end of 2024. Currently, if you want to use TensorFlow, you +--- +class: dropdown +--- +💡 **PyTorch and TensorFlow Support within DeepLabCut** + +As of June 2024 we have a PyTorch Engine backend and we will be deprecating the +TensorFlow backend by 2027. 
+Currently, if you want to use TensorFlow, you
need to run `pip install deeplabcut[tf]` in order to install the correct version of
-TensorFlow in your conda env. Please note, we will be providing bug fixes, but we will
+TensorFlow in your conda env.
+Please note, we will be providing bug fixes, but we will
not be supporting new TensorFlow versions beyond 2.10 (Windows), and 2.12 for other OS.

Installing TensorFlow and getting it to have access to the GPU can be a bit tricky.
@@ -170,66 +200,78 @@ pip install --pre deeplabcut
```
````

-**Great, that's it! DeepLabCut is installed!** 🎉💜
+### Step 3: Let's run DeepLabCut!

+**DeepLabCut is installed!** 🎉💜

-### Step 3: Really, that's it! Let's run DeepLabCut
+Launch the DeepLabCut GUI in your new conda env by running `python -m deeplabcut`.

Head over to the [User Guide Overview](https://deeplabcut.github.io/DeepLabCut/docs/UseOverviewGuide.html) for information.

-🎉 Launch DeepLabCut in your new env by running `python -m deeplabcut`
+```{warning}
+On **Windows**: Open the terminal/cmd/anaconda prompt as **Administrator** (right click and select "Run as administrator") to avoid permission issues when downloading models, and for symlink support when videos are not copied into the project folder.
+```
+
+### Conda environment management tips
+
+For conda environment management tips, see the [kapeli.com: Conda Cheat Sheet](https://kapeli.com/cheat_sheets/Conda.docset/Contents/Resources/Documents/index).
+
+<!--
-## Other ways to install DeepLabCut and additional tips
+-->

-### Alternatively, you can git clone this repo and install from source!
-i.e., if the download did not work or you just want to have the source code handy!
+Please see how to test your installation by following [this video](https://www.youtube.com/watch?v=IOWtKn3l33s).
+
+<!--
+-->
+
+## Other ways to install DeepLabCut
+
+### git clone
+
+Recommended for users who want to modify the code, or want to be up-to-date with the latest code on GitHub.

-- **Windows/Linux/MacBooks:** git clone this repo (in the terminal/cmd program, while **in a folder** you wish to place DeepLabCut
-To git clone type: ``git clone https://github.com/DeepLabCut/DeepLabCut.git``). Note, this can be anywhere, even downloads is fine.)
+- **Windows/Linux/MacBooks:** git clone this repo in the terminal/cmd program, while **in a folder** where you wish to place DeepLabCut.
+- To git clone, run: `git clone https://github.com/DeepLabCut/DeepLabCut.git` (this can be anywhere; even Downloads is fine).
+- Then follow the same steps as in Step 2 above, adjusting for the `DEEPLABCUT.yaml` env file now being in the folder where you git cloned the repo.
+- Or use pip/uv to install from the cloned repo (see below).

-### PIP:
+(sec:uv-install)=

-- Everything you need to build custom models within DeepLabCut (i.e., use our source code and our dependencies) can be installed with `pip install 'deeplabcut[gui]'` (for GUI support w/PyTorch) or without the gui: `pip install 'deeplabcut'`.
+### `uv` (recommended for developers)

-- If you want to use the SuperAnimal models, then please use `pip install 'deeplabcut[gui,modelzoo]'`.

-## DOCKER:
+- Install `uv` following [instructions here](https://docs.astral.sh/uv/getting-started/installation/)
+- Run in the cloned repo:

-- We also have docker containers. Docker is the most reproducible way to use and deploy code. Please see our dedicated docker package and page [here](https://deeplabcut.github.io/DeepLabCut/docs/docker.html).
+```bash
+uv venv -p 3.12
+uv pip install -e .[gui,modelzoo,tf]  # Change optional install as needed
+source .venv/bin/activate  # or & .venv\Scripts\activate.ps1 on Windows
+```

-## Pro Tips:
+### `pip`

-More [installation ProTips](installation-tips) are also available.
+If you already have a local environment, everything you need to use the project manager GUI, train, and/or build custom models within DeepLabCut (i.e., use our source code and our dependencies) can be installed with `pip install 'deeplabcut[gui]'` (for GUI support w/PyTorch) or, without the GUI: `pip install 'deeplabcut'`.

-If you ever want to update your DLC, just run `pip install --upgrade deeplabcut` once
-you are inside your env. If you want to use a specific release, then you need to specify
-the version you want, such as `pip install deeplabcut==3.0`. Once installed, you can
-check the version by running `import deeplabcut` `deeplabcut.__version__`. Don't be
-afraid to update, DLC is backwards compatible with your 2.0+ projects and performance
-continues to get better and new features are added nearly monthly.
+- If you **cloned the repo** and want to make edits to the code locally, navigate to the cloned repo folder and run `pip install -e .[gui,modelzoo,tf]` to install the package in "editable" mode, which allows you to make changes to the code and have those changes reflected when you import the package.

-**All of the data you labelled in version 2.X is also compatible with version 3+ and the
-PyTorch engine**! There is no change in the workflow or the way labels are handled: the
-big changes happen under-the-hood! If you've been working with DeepLabCut 2.X and want
-to learn more about moving to the PyTorch engine, checkout our docs on [moving from
-TensorFlow to PyTorch](dlc3-user-guide)
+- If you want to use the SuperAnimal models, then please use `pip install 'deeplabcut[gui,modelzoo]'`.

-Here are some conda environment management tips: [kapeli.com: Conda Cheat Sheet](
-https://kapeli.com/cheat_sheets/Conda.docset/Contents/Resources/Documents/index)
+### Docker

-**Pro Tip:** If you want to modify code and then test it, you can use our provided
-testscripts. This would mean you need to be up-to-date with the latest GitHub-based code
-though! Please see [here](installation-tips) on how to get the latest GitHub code, and
-how to test your installation by following this video:
-https://www.youtube.com/watch?v=IOWtKn3l33s.
+- We also have docker containers. Docker is the most reproducible way to use and deploy code. Please see our dedicated docker package and page [here](https://deeplabcut.github.io/DeepLabCut/docs/docker.html).

-## Creating your own customized conda env (recommended route for Linux: Ubuntu, CentOS, Mint, etc.)
+### Creating your own conda environment

-*Note in a fresh ubuntu install, you will often have to run: ``sudo apt-get install gcc python3-dev`` to install the GNU Compiler Collection and the python developing environment.
+<!--

-Some users might want to create their own customize env. - Here is an example.
+-->

-In the terminal type:
+```{tip}
+In a fresh ubuntu install, you will often have to run: `sudo apt-get install gcc python3-dev` to install the GNU Compiler Collection and the python developing environment.
+```
+
+Create a new conda environment with Python 3.10 (or 3.11, 3.12) by running:

`conda create -n DLC python=3.10`

@@ -237,68 +279,87 @@ In the terminal type:
`pip install deeplabcut`) or `pip install 'deeplabcut[gui]'` which has a napari
based GUI.
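+As a worked example, the whole custom-environment route above might look like this (a sketch; swap in the Python version and optional extras you need):
+
+```bash
+conda create -n DLC python=3.10   # or 3.11 / 3.12
+conda activate DLC
+pip install "deeplabcut[gui]"     # drop [gui] for the headless version
+python -c "import deeplabcut; print(deeplabcut.__version__)"  # sanity check
+```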
+## Updating your installation
+
+If you ever want to update your DLC, just run `pip install --upgrade deeplabcut` inside your env.
+If you want to use a specific release, then specify the version you want, such as `pip install deeplabcut==3.0`.
+Once installed, you can check the installed version in Python by running `import deeplabcut; print(deeplabcut.__version__)`.
+
+Don't be afraid to update: DLC is backwards compatible with your 2.0+ projects, and performance continues to get better and new features are added often.
+
+### Data compatibility
+
+**All of the data you labelled in version 2.X is also compatible with version 3+ and the
+PyTorch engine**!
+There is no change in the workflow or the way labels are handled: the
+big changes happen under-the-hood! If you've been working with DeepLabCut 2.X and want
+to learn more about moving to the PyTorch engine, check out our docs on [moving from
+TensorFlow to PyTorch](dlc3-user-guide).
+
+(sec:install-gpu-support)=

-## **GPU Support:**
+## GPU Support

-The ONLY thing you need to do **first** if you have an NVIDIA GPU and the matching NVIDIA CUDA+driver installed.
-- CUDA: https://developer.nvidia.com/cuda-downloads (just follow the prompts here!)
-- DRIVERS: https://www.nvidia.com/Download/index.aspx
+### General GPU support

-### The most common "new user" hurdle is installing and using your GPU, so don't get discouraged!
+Please ensure you have an NVIDIA GPU and the matching NVIDIA driver installed.

-**CRITICAL:** If you have a GPU, you should FIRST **install an appropriate driver for
-your specific GPU**, then you can use the supplied conda file. You'll need an NVIDIA GPU
-which is compatible with CUDA. To see a list of CUDA-enabled NVIDIA GPUs, please [see
-their website](https://developer.nvidia.com/cuda-gpus).
+```{warning}
+If you have a GPU, you should first **install an appropriate driver for
+your specific GPU**, then you can use the supplied conda file.
+```

-- Here we provide notes on how to install and check your GPU use with TensorFlow (which
-is used by DeepLabCut and already installed with the Anaconda files above). Thus, you do
-not need to independently install tensorflow.
+- Drivers: see [NVIDIA Drivers](https://www.nvidia.com/Download/index.aspx)
+- CUDA: download [here](https://developer.nvidia.com/cuda-downloads) if needed. Installing the drivers usually lets you skip installing CUDA yourself; it is instead obtained via the PyTorch installation process.

-**FIRST**, install a driver for your GPU. Find DRIVER HERE:
-https://www.nvidia.com/download/index.aspx
+### Installing CUDA and cuDNN for TensorFlow GPU support

-- Check which driver is installed by typing this into the terminal: ``nvidia-smi``.
+You will need an NVIDIA GPU that is compatible with CUDA.

-**SECOND**, install CUDA: https://developer.nvidia.com/ (Note that cuDNN, https://developer.nvidia.com/cudnn, is supplied inside the anaconda environment files, so you don't need to install it again).
+To see a list of CUDA-enabled NVIDIA GPUs, please [see their website](https://developer.nvidia.com/cuda-gpus).

-**THIRD:** Follow the steps above to get the `DEEPLABCUT` conda file and install it!
+Here we provide notes on how to install and check your GPU use with TensorFlow, which is used by DeepLabCut and will be installed with the Anaconda files above.
+Thus, you do not need to independently install TensorFlow.

-### Notes:
+1. Install a driver for your GPU, using the NVIDIA Drivers link above.
+   - Check which driver is installed by typing this into the terminal: `nvidia-smi`.
+1. 
Install [CUDA](https://developer.nvidia.com/). Note that [cuDNN](https://developer.nvidia.com/cudnn) is supplied inside the anaconda environment files, so you don't need to install it again.
+1. Follow the steps above to get the `DEEPLABCUT` conda file and install it!
+
+### Notes

-- **As of version 3.0+ we moved to PyTorch. The Last supported version of TensorFlow is
-2.10 (window users) and 2.12 for others (we have not tested beyond this).**
+- **As of version 3.0+ we moved to PyTorch. The last supported version of TensorFlow is
+  2.10 (Windows users) and 2.12 for others** (we will not be testing beyond this).
+
+  - Please be mindful that different versions of TensorFlow require different CUDA versions.
+
- As the combination of TensorFlow and CUDA matters, we strongly encourage you to
-**check your driver/cuDNN/CUDA/TensorFlow versions** [on this StackOverflow post](
-https://stackoverflow.com/questions/30820513/what-is-version-of-cuda-for-nvidia-304-125/30820690#30820690
-).
+  **check your driver/cuDNN/CUDA/TensorFlow versions** [on this StackOverflow post](https://stackoverflow.com/questions/30820513/what-is-version-of-cuda-for-nvidia-304-125/30820690#30820690).
+
- To check your GPU is working, in the terminal, run:
-`nvcc -V` to check your installed version(s).
+  `nvcc -V` to see your installed CUDA version(s).
- The best practice is to then run the supplied `testscript_pytorch_single_animal.py`
-(or `testscript_tensorflow_single_animal.py` for the TensorFlow engine); this is inside the examples folder you
-acquired when you git cloned the repo. Here is more information/a short
-[video on running the testscript](https://www.youtube.com/watch?v=IOWtKn3l33s).
-- Additionally, if you want to use the bleeding edge, with your git clone you also get
-the latest code. While inside the main DeepLabCut folder, you can run `./reinstall.sh`
-to be sure it's installed (more [here](installation-tips))
-- You can test that your GPU is being properly engaged with these additional [tips](
-https://www.tensorflow.org/programmers_guide/using_gpu).
-- Ubuntu users might find this [installation guide](
-https://deeplabcut.github.io/DeepLabCut/docs/recipes/installTips.html#installation-on-ubuntu-20-04-lts
-) for a fresh ubuntu install useful as well.
-
-## Troubleshooting:
-
-TensorFlow:
+  (or `testscript_tensorflow_single_animal.py` for the TensorFlow engine); this is inside the examples folder you
+  acquired when you git cloned the repo. Here is a short
+  [video on running the test scripts](https://www.youtube.com/watch?v=IOWtKn3l33s).
+
+- You can test that your GPU is being properly used with these additional [tips](https://www.tensorflow.org/programmers_guide/using_gpu).
+
+- Ubuntu users might find this [installation guide](https://deeplabcut.github.io/DeepLabCut/docs/recipes/installTips.html#installation-on-ubuntu-20-04-lts) for a fresh DLC install on Ubuntu useful as well.
+
+## Troubleshooting
+
+### TensorFlow
+
Here are some additional resources users have found helpful (posted without endorsement):

- https://stackoverflow.com/questions/30820513/what-is-the-correct-version-of-cuda-for-my-nvidia-driver/30820690

- +

- https://www.tensorflow.org/install/source#gpu

@@ -307,38 +368,66 @@ Here are some additional resources users have found helpful (posted without endo

- https://developer.nvidia.com/cuda-toolkit-archive

-
-FFMPEG:
+### FFMPEG

-- A few Windows users report needing to install re-install ffmpeg (after windows updates) as described here: https://video.stackexchange.com/questions/20495/how-do-i-set-up-and-use-ffmpeg-in-windows (A potential error could occur when making new videos). On Ubuntu, the command is: `sudo apt install ffmpeg`
+- A few Windows users report needing to re-install ffmpeg (after Windows updates) as described here: https://video.stackexchange.com/questions/20495/how-do-i-set-up-and-use-ffmpeg-in-windows (a potential error could occur when making new videos). On Ubuntu, the command is: `sudo apt install ffmpeg`

-DEEPLABCUT:
+### DeepLabCut
+
+- If you git clone or download this folder, and are inside of it, then `import deeplabcut` will import the package from the local folder rather than from the latest on PyPi!
+
+(sec:system-wide-considerations-during-install)=
+
+## System-wide installation considerations
+
+```{note}
+**What is a system-wide installation?**
+
+A system-wide installation, or a base environment installation, is when you install using the default Python environment/interpreter on your computer, instead of a compartmentalized, separate environment (e.g., a conda environment).
+
+This is often a source of conflicts between packages, user confusion, and progressive "dependency hell" (where you have to keep installing and uninstalling packages to get the right versions for different applications).
+
+To avoid this, we recommend using a virtual environment (e.g., conda or uv managed environments) to keep your DeepLabCut installation separate from other Python packages and applications on your system.
+```
+
+If you perform a system-wide/base environment installation and the computer has other, conflicting Python packages or TensorFlow versions installed, installing DeepLabCut will overwrite them.
+
+If you have a dedicated machine for DeepLabCut, this may be *temporarily* fine, but it will degrade over time as you try to install or update other packages.
+
+Indeed, if there are other applications that require different versions of libraries, then installing/updating anything would potentially break those applications.
+
+One way to manage virtual environments is to use conda environments (for which you need Anaconda/miniconda installed).
+An environment is a self-contained directory that contains a Python installation for a particular version of Python, plus additional packages, without any cross-talk with other environments (NVIDIA drivers being a notable exception, as they are system-wide by nature).
+
+(sec:hardware-considerations-during-install)=
+
+## Hardware considerations
+
+- **Computer**:
+
+  - For reference, we use e.g. Dell workstations (79xx series) with **Ubuntu 16.04 LTS, 18.04 LTS, 20.04 LTS, 22.04 LTS** and for versions prior to 2.2, we run a Docker container that has TensorFlow, etc. installed (https://github.com/DeepLabCut/Docker4DeepLabCut2.0). Now we use the new Docker containers supplied on this repo (linux support only), also available through [DockerHub](https://hub.docker.com/r/deeplabcut/deeplabcut) or the [`deeplabcut-docker`](https://pypi.org/project/deeplabcut-docker/) helper script.
+
+- **Computing Hardware**:

-- if you git clone or download this folder, and are inside of it then ``import deeplabcut`` will import the package from there rather than from the latest on PyPi!
+  - An NVIDIA GPU with *at least* 8GB VRAM (memory) is ideal.
+  - A GPU is not strictly necessary, but on a CPU the (training and evaluation) code is considerably slower (~10x) for ResNets; MobileNets are faster.
You might also consider using cloud computing services like [Google cloud/amazon web services](https://github.com/DeepLabCut/DeepLabCut/issues/47) or Google Colaboratory.

-(system-wide-considerations-during-install)=
-## System-wide considerations:
+- **Camera Hardware**:

-If you perform the system-wide installation, and the computer has other Python packages or TensorFlow versions installed that conflict, this will overwrite them. If you have a dedicated machine for DeepLabCut, this is fine. If there are other applications that require different versions of libraries, then one would potentially break those applications. The solution to this problem is to create a virtual environment, a self-contained directory that contains a Python installation for a particular version of Python, plus additional packages. One way to manage virtual environments is to use conda environments (for which you need Anaconda installed).
+  - The software is very robust to variations stemming from various cameras (cell phone cameras, grayscale, color; captured under infrared light, different manufacturers, etc.). See demos on our [website](https://www.mousemotorlab.org/deeplabcut/).
+  - Note, however, that a model trained on data from one camera may not generalize well to data from a different camera, so we recommend using the same camera for training and inference.

-(tech-considerations-during-install)=
-## Technical Considerations:
+- **Software**:

-- Computer:
+  - Operating System: Linux (Ubuntu), MacOS[^1] (Mojave), or Windows 10. However, we, the authors, strongly recommend Ubuntu!
+  - DeepLabCut is written in Python 3 (https://www.python.org/) and is not compatible with Python 2.

-  - For reference, we use e.g. Dell workstations (79xx series) with **Ubuntu 16.04 LTS, 18.04 LTS, 20.04 LTS, 22.04 LTS** and for versions prior to 2.2, we run a Docker container that has TensorFlow, etc. installed (https://github.com/DeepLabCut/Docker4DeepLabCut2.0). Now we use the new Docker containers supplied on this repo (linux support only), also available through [DockerHub](https://hub.docker.com/r/deeplabcut/deeplabcut) or the [`deeplabcut-docker`](https://pypi.org/project/deeplabcut-docker/) helper script.
+<!--

-- Computer Hardware:
-  - Ideally, you will use a strong NVIDIA GPU with *at least* 8GB memory. A GPU is not necessary, but on a CPU the (training and evaluation) code is considerably slower (10x) for ResNets, but MobileNets are faster (see WIKI). You might also consider using cloud computing services like [Google cloud/amazon web services](https://github.com/DeepLabCut/DeepLabCut/issues/47) or Google Colaboratory.
+-->

-- Software:
-  - Operating System: Linux (Ubuntu), MacOS* (Mojave), or Windows 10. However, the authors strongly recommend Ubuntu! *MacOS does not support NVIDIA GPUs (easily), so we only suggest this option for CPU use or a case where the user wants to label data, refine data, etc and then push the project to a cloud resource for GPU computing steps, or use MobileNets.
-  - Anaconda/Python3: Anaconda: a free and open source distribution of the Python programming language (download from https://www.anaconda.com/). DeepLabCut is written in Python 3 (https://www.python.org/) and not compatible with Python 2.
- `pip install deeplabcut`
-  - TensorFlow
-    - If you want to use a pre3.0 version, you will need [TensorFlow](https://www.tensorflow.org/) (we used version 1.0 in the Nature Neuroscience paper, later versions also work with the provided code (we tested **TensorFlow versions 1.0 to 1.15, and 2.0 to 2.10**; we recommend TF2.10 now) for Python 3.8, 3.9, 3.10 with GPU support.
-    - To note, is it possible to run DeepLabCut on your CPU, but it will be VERY slow (see: [Mathis & Warren](https://www.biorxiv.org/content/early/2018/10/30/457242)). However, this is the preferred path if you want to test DeepLabCut on your own computer/data before purchasing a GPU, with the added benefit of a straightforward installation! Otherwise, use our COLAB notebooks for GPU access for testing.
-  - Docker: We highly recommend advanced users use the supplied [Docker container](docker-containers)
+[^1]: MacOS does not support NVIDIA GPUs (easily), so we only suggest this option for CPU use or a case where the user wants to label data, refine data, etc., and then push the project to a cloud resource for GPU computing steps, or use MobileNets.
diff --git a/docs/maDLC_UserGuide.md b/docs/maDLC_UserGuide.md
index a5e873cdf..4e79c4135 100644
--- a/docs/maDLC_UserGuide.md
+++ b/docs/maDLC_UserGuide.md
@@ -304,7 +304,7 @@ which then also uses temporal information to link across the video frames.
-Note, we also highly recommend that you use more bodyparts that you might otherwise
-have (see the example below).
+Note, we also highly recommend that you use more bodyparts than you might otherwise
+have (see the example below).

-For more information, checkout the [napari-deeplabcut docs](napari-gui) for
-more information about the labelling workflow.
+Check out the [napari-deeplabcut docs](file:napari-gui-landing) for
+more information about the labelling workflow.

### (E) Check Annotated Frames
diff --git a/docs/standardDeepLabCut_UserGuide.md b/docs/standardDeepLabCut_UserGuide.md
index a29130de3..9847b41dc 100644
--- a/docs/standardDeepLabCut_UserGuide.md
+++ b/docs/standardDeepLabCut_UserGuide.md
@@ -243,7 +243,7 @@ The toolbox provides a function **label_frames** which helps the user to easily
all the extracted frames using an interactive graphical user interface (GUI). The user
should have already named the bodyparts to label (points of interest) in the project’s
configuration file by providing a list. The following command invokes the
-napari-deeplabcut labelling GUI. Checkout the [napari-deeplabcut docs](napari-gui) for
+napari-deeplabcut labelling GUI. Check out the [napari-deeplabcut docs](file:napari-gui-landing) for
more information about the labelling workflow.

```python
@@ -271,7 +271,7 @@ labels to the bodyparts in the config.yaml file. Thereafter, the user can call t
2.0.5+: then a box will pop up and ask the user if they wish to display all parts, or
only add in the new labels. Saving the labels after all the images are labelled will
append the new labels to the existing labeled dataset.
-For more information, checkout the [napari-deeplabcut docs](napari-gui) for
-more information about the labelling workflow.
+Check out the [napari-deeplabcut docs](file:napari-gui-landing) for
+more information about the labelling workflow.

### (E) Check Annotated Frames
@@ -956,7 +956,7 @@ deeplabcut.refine_labels(config_path)
```

This will launch a GUI where the user can refine the labels.
-Please refer to the [napari-deeplabcut docs](napari-gui) for more information about the labelling workflow.
+Please refer to the [napari-deeplabcut docs](file:napari-gui-landing) for more information about the labelling workflow.
After correcting the labels for all the frames in each of the subdirectories, the users should merge the data set to create a new dataset. In this step the iteration parameter in the config.yaml file is automatically updated.
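+If you prefer to run the merge step from the shell rather than an interactive Python session, this one-liner is a minimal sketch (the config path is a placeholder for your project's `config.yaml`):
+
+```bash
+python -c "import deeplabcut; deeplabcut.merge_datasets('/path/to/project/config.yaml')"
+```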