Documentation: Update formatting of single animal guide #3315
Conversation
Improve user documentation for the GUI and CLI flows: add an important note to always run the terminal as administrator on Windows, rename and rephrase GUI/CLI headings for clarity, and break out stepwise startup instructions. Restructure the create_new_project section with a concise list of required and optional arguments, add a note explaining symbolic links and why Windows requires admin privileges, include a code example and tip block, and fix Windows path formatting. Minor wording and formatting tweaks throughout to improve readability.
Convert the single-animal user guide to MyST-friendly markdown and improve layout and clarity. Changes include: add a table-of-contents directive; replace raw HTML <img>/<p> blocks with MyST image directives and metadata; introduce admonitions (important/note/caution/hint) for critical points; restructure the project directory section into bulleted lists and an ASCII tree; clarify Windows path guidance and config.yaml parameter notes; normalize API Docs headings and other formatting fixes. These are documentation/formatting updates to improve rendering and readability—no functional code changes.
Update documentation for the single-animal user guide: convert local link refs to file:single-animal-userguide, fix napari link formatting, and reorganize/clarify many sections in standardDeepLabCut_UserGuide.md. Added consistent subsection headings (Overview, Code example, Output, etc.), separators, code block language annotations, a schematic of the training dataset layout, and improved wording/indentation for lists and examples. Also updated UseOverviewGuide.md to point to the revised single-animal guide. These changes improve readability, provide clearer examples, and make the workflow steps and outputs easier to follow.
Minor documentation cleanup in docs/standardDeepLabCut_UserGuide.md: split the conda activation into its own numbered step, simplify wording and line breaks for clarity, and remove bold all-caps headings (OVERVIEW and MODEL COMPARISON) in favor of normal sentences. Purely editorial changes; no functional code changes.
Pull request overview
Updates the single-animal documentation to use more consistent MyST/Sphinx-friendly formatting (admonitions, images/figures, headings) and improves navigability via an in-page contents block and reorganized sections.
Changes:
- Refactors `standardDeepLabCut_UserGuide.md` into a more structured MyST format (contents directive, admonitions, images-as-directives, reorganized headings/lists).
- Updates `UseOverviewGuide.md` to point readers to the single-animal guide using the new target label.
- Adds a Windows “run as administrator” admonition to the Project Manager GUI doc.
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| docs/UseOverviewGuide.md | Updates links pointing to the single-animal guide. |
| docs/standardDeepLabCut_UserGuide.md | Major MyST formatting refresh: contents block, directive-based images, admonitions, and section restructuring. |
| docs/gui/PROJECT_GUI.md | Adds an {important} note about Windows admin terminal usage. |
```diff
 - We highly recommend carefully considering which one is best for your needs.
 - For example, a white mouse + black mouse would call for standard, while two black mice would use multi-animal. **[Important Information on how to use DLC in different scenarios (single vs multi animal)](important-info-regd-usage)** Then pick a user guide:
-- (1) [How to use standard DeepLabCut](single-animal-userguide)
+- (1) \[How to use standard DeepLabCut\](file:single-animal-userguide)
```
```diff
 Please decide with mode you want to use DeepLabCut, and follow one of the following:

-- (1) [How to use standard DeepLabCut](single-animal-userguide)
+- (1) \[How to use standard DeepLabCut\](file:single-animal-userguide)
```
> all the extracted frames using an interactive graphical user interface (GUI). The user
> should have already named the bodyparts to label (points of interest) in the
> project’s configuration file by providing a list. The following command invokes the
> napari-deeplabcut labelling GUI. Checkout the \[napari-deeplabcut docs\](file:napari-gui-landing) for
Seems like a valid concern? This occurs multiple times, maybe worth addressing if that is possible.
```diff
@@ -285,22 +415,30 @@ Saving the labels after all the images are labelled will append the new labels t
 For more information, checkout the \[napari-deeplabcut docs\](file:napari-gui-landing) for
@@ -1033,6 +1281,8 @@ This will launch a GUI where the user can refine the labels.
 Please refer to the \[napari-deeplabcut docs\](file:napari-gui-landing) for more information about the labelling workflow.
```
```diff
 All the outputs generated during the course of a project will be stored in one of these subdirectories, thus allowing each project to be
 curated in separation from other projects.

-parameters in the config.yaml file. Also, the user can change the number of frames to extract from each video using
-the numframes2extract in the config.yaml file.
+parameters in the config.yaml file.
+Also, the user can change the number of frames to extract from each video using the numframes2extract in the config.yaml file.
```
```diff
-Also, the user can change the number of frames to extract from each video using the numframes2extract in the config.yaml file.
+Also, the user can change the number of frames to extract from each video using the numframes2pick in the config.yaml file.
```
> #### Overview
>
> You can also filter the predictions with a median filter (default) or with a [SARIMAX model](https://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.sarimax.SARIMAX.html), if you wish. This creates a new .h5 file with the ending *\_filtered* that you can use in create_labeled_data and/or plot trajectories.
```diff
-You can also filter the predictions with a median filter (default) or with a [SARIMAX model](https://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.sarimax.SARIMAX.html), if you wish. This creates a new .h5 file with the ending *\_filtered* that you can use in create_labeled_data and/or plot trajectories.
+You can also filter the predictions with a median filter (default) or with a [SARIMAX model](https://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.sarimax.SARIMAX.html), if you wish. This creates a new .h5 file with the ending *\_filtered* that you can use in create_labeled_video and/or plot trajectories.
```
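As context for the reviewed passage, the effect of the default median filter can be illustrated with a small runnable sketch. This is plain NumPy/SciPy, not DeepLabCut's own `filterpredictions` implementation; the trace values and window length are made up for the example:

```python
import numpy as np
from scipy.signal import medfilt

# Synthetic x-coordinate trace for one bodypart: smooth motion plus a
# single-frame "glitch" of the kind a median filter removes well.
x = np.linspace(0.0, 10.0, 101)   # ground-truth position per frame
trace = x.copy()
trace[50] = 100.0                 # one-frame tracking jump

# 5-frame window, chosen for illustration only; DeepLabCut's filtering
# exposes a comparable window-length setting.
filtered = medfilt(trace, kernel_size=5)

print(abs(filtered[50] - x[50]) < 0.5)  # → True: the glitch is suppressed
```

The median is robust to isolated outliers, which is why it is a sensible default for jittery pose traces, whereas a SARIMAX model is better suited to smoothing sustained noise.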
> To draw a skeleton, you need to first define the pairs of connected nodes (in the `config.yaml` file) and set the
> skeleton color (in the `config.yaml` file). There is also a GUI to help you do this, use by calling
> `deeplabcut.SkeletonBuilder(configpath)`!
```diff
-`deeplabcut.SkeletonBuilder(configpath)`!
+`deeplabcut.SkeletonBuilder(config_path)`!
```
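Since the skeleton in `config.yaml` is just a list of bodypart pairs, a quick sanity check that every edge references a defined bodypart can be sketched as follows (the `validate_skeleton` helper and the bodypart names are hypothetical, not part of the DeepLabCut API):

```python
# Example values mirroring the config.yaml structure: a bodyparts list
# and a skeleton given as pairs of connected nodes.
bodyparts = ["snout", "leftear", "rightear", "tailbase"]
skeleton = [["snout", "leftear"], ["snout", "rightear"], ["snout", "tailbase"]]

def validate_skeleton(skeleton, bodyparts):
    """Return skeleton edges that reference unknown bodyparts (simple sketch)."""
    known = set(bodyparts)
    return [pair for pair in skeleton if not set(pair) <= known]

print(validate_skeleton(skeleton, bodyparts))  # → [] (all edges are valid)
```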
```diff
@@ -713,10 +902,16 @@ dynamic: triple containing (state, detectiontreshold, margin)
 given the movement of the animal).
```
> fraction, you can crop around your animal/object to make processing speeds faster. For example, if you have a large open
> field experiment but only track the mouse, this will speed up your analysis (also helpful for real-time applications).
> To use this simply add `dynamic=(True,.5,10)` when you call `analyze_videos`.
> ```python
> dynamic: triple containing (state, detectionthreshold, margin)
> If the state is true, then dynamic cropping will be performed.
> That means that if an object is detected (i.e., any body part > detectionthreshold),
> then object boundaries are computed according to the smallest/largest x position and
> smallest/largest y position of all body parts. This window is expanded by the margin
> and from then on only the posture within this crop is analyzed (until the object is lost;
> i.e., < detectionthreshold). The current position is utilized for updating the crop window
> for the next frame (this is why the margin is important and should be set large enough
> given the movement of the animal).
> ```
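The crop-window logic described in that docstring can be sketched in a few lines of NumPy. This is a simplified illustration of the behaviour of `dynamic=(True, .5, 10)`, not DeepLabCut's actual implementation; the coordinates and the `crop_window` helper are invented for the example:

```python
import numpy as np

def crop_window(xs, ys, confidences, threshold=0.5, margin=10):
    """Compute a dynamic crop window from per-bodypart predictions.

    If any body part exceeds the detection threshold, return the bounding
    box of all body parts expanded by the margin; otherwise the object is
    considered lost and None is returned (analyze the full frame again).
    """
    if not np.any(confidences > threshold):
        return None
    x0, x1 = xs.min() - margin, xs.max() + margin
    y0, y1 = ys.min() - margin, ys.max() + margin
    return float(x0), float(y0), float(x1), float(y1)

# Three bodyparts, one of which is below threshold but still inside the box.
xs = np.array([100.0, 120.0, 140.0])
ys = np.array([80.0, 90.0, 110.0])
conf = np.array([0.9, 0.4, 0.8])
print(crop_window(xs, ys, conf))  # → (90.0, 70.0, 150.0, 120.0)
```

The margin matters because the window computed from one frame is reused for the next: it must be large enough that the animal cannot move outside the crop between frames.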
> #### Optional addition of more labels
>
> OPTIONAL: In the event of adding more labels to the existing labeled dataset, the user need to append the new
```diff
-OPTIONAL: In the event of adding more labels to the existing labeled dataset, the user need to append the new
+OPTIONAL: In the event of adding more labels to the existing labeled dataset, the user needs to append the new
```
> while the second contains files for the PyTorch engine. At the top level in these directories, there are directories
> referring to different iterations of label refinement (see below): **iteration-0**, **iteration-1**, etc.
> The iteration directories store shuffle directories, where each shuffle directory stores model data related to a
> particular experiment: trained and tested on a particular training and testing sets, and with a particular model
```diff
-particular experiment: trained and tested on a particular training and testing sets, and with a particular model
+particular experiment: trained and tested on a particular training and testing set, and with a particular model
```
```diff
@@ -357,7 +518,7 @@ and `augmenter_type` when you call the function.
 suggest seeing our [dedicated documentation on models](dlc3-architectures) for more information (
```
> and `augmenter_type` when you call the function.
> - Networks: ImageNet pre-trained networks OR SuperAnimal pre-trained network weights will be downloaded, as you
> select. You can decide to do transfer-learning (recommended) or "fine-tune" both the backbone and the decoder head. We
> suggest seeing our [dedicated documentation on models](dlc3-architectures) for more information (
> #### Overview
>
> The plotting components of this toolbox utilizes matplotlib. Therefore, these plots can easily be customized by
> the end user. We also provide a function to plot the trajectory of the extracted poses across the analyzed video, which
> can be called by typing:
>
> ```
> deeplabcut.plot_trajectories(config_path, [‘fullpath/analysis/project/videos/reachingvideo1.avi’])
> ```
>
> #### Code example
````diff
 #### Overview

-The plotting components of this toolbox utilizes matplotlib. Therefore, these plots can easily be customized by
+The plotting components of this toolbox utilize matplotlib. Therefore, these plots can easily be customized by
 the end user. We also provide a function to plot the trajectory of the extracted poses across the analyzed video, which
 can be called by typing:

-```
-deeplabcut.plot_trajectories(config_path, [‘fullpath/analysis/project/videos/reachingvideo1.avi’])
 #### Code example
````
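Because the toolbox plots with matplotlib, the kind of figure `plot_trajectories` produces can be approximated with a self-contained sketch. The synthetic trace, axis labels, and output filename below are invented for illustration; this is not the DeepLabCut function itself:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe for scripts
import matplotlib.pyplot as plt
import numpy as np

# Synthetic pose trace standing in for one tracked bodypart.
t = np.arange(200)
x = 100 + 50 * np.sin(t / 20)
y = 100 + 50 * np.cos(t / 20)

fig, (ax_xy, ax_t) = plt.subplots(1, 2, figsize=(8, 3))
ax_xy.plot(x, y)             # spatial trajectory panel
ax_xy.set_xlabel("x (px)")
ax_xy.set_ylabel("y (px)")
ax_t.plot(t, x, label="x")   # coordinates across frames panel
ax_t.plot(t, y, label="y")
ax_t.set_xlabel("frame")
ax_t.legend()
fig.savefig("trajectory.png")
```

Since the plots are ordinary matplotlib figures, end users can restyle axes, colors, and layout with the standard matplotlib API before saving.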
Motivation
The single animal guide has some older formatting present that could be improved in order to help navigation in the document, and better focus the attention of users on important concepts.
Scope
Refreshes the guide's format to be more MyST-like, more consistent, and provide better navigation with a contents table and working links, as well as more bullet lists and consistent section headers throughout.
TODO
- Convert remaining `[]()` links to proper `{ref}` directives.