Is there an existing issue for this?
Operating System
Mac OS Sequoia 15.6.1
DeepLabCut version
3.0.0rc10
What engine are you using?
pytorch
DeepLabCut mode
single animal
Device type
Builtin GPU (metal)
Bug description 🐛
Confidence values are not constrained between 0 and 1. For example, on one model the confidence values are mostly between 2 and 4, and on another they mostly stay below 1 but spike to over 12. My expectation is that these values represent a range from 0 (not at all confident) to 1 (100% confident), as is the case for the standard resnet model.
Here are some likelihood plots showing the issue:
Steps To Reproduce
- Create a new environment with `deeplabcut==3.0.0rc10`
- Create a new project and label data
- Create the training dataset with an rtmpose model, e.g. `deeplabcut.create_training_dataset(config_path, net_type="rtmpose_x")`
- Train the model and analyze the data
Relevant log output
Anything else?
The mmpose implementation has a similar issue (with one potential solution in the comments): open-mmlab/mmpose#2755
This article explains where the confidence scores are derived from in the rtmpose architecture: https://deepwiki.com/Tau-J/rtmlib/4.2.1-rtmpose#simcc-postprocessing
The softmax solution in the linked issue seems correct, but I haven't dug into the guts of the DeepLabCut implementation to see where to apply it.
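For illustration, the SimCC post-processing described in the rtmlib write-up can be sketched in numpy. This is a minimal sketch, not the actual DeepLabCut code; the function name, shapes, and the choice of `minimum` to combine the per-axis scores are my assumptions. The point it demonstrates: the max of raw SimCC logits is unbounded (hence scores of 2-12), while applying a softmax first guarantees each score lies in (0, 1).

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def simcc_confidence(simcc_x, simcc_y, apply_softmax=True):
    """Derive per-keypoint confidences from SimCC 1-D logits.

    simcc_x: (num_keypoints, W * split_ratio) raw x-axis classification logits
    simcc_y: (num_keypoints, H * split_ratio) raw y-axis classification logits

    With apply_softmax=False the max logit is unbounded; with it,
    each axis score is a probability in (0, 1).
    """
    if apply_softmax:
        simcc_x, simcc_y = softmax(simcc_x), softmax(simcc_y)
    # Combining the two axis scores with a minimum is one common choice;
    # the exact reduction used by DeepLabCut/rtmlib may differ.
    return np.minimum(simcc_x.max(axis=-1), simcc_y.max(axis=-1))
```

Applying this at decode time would bound the likelihood column in the analysis output regardless of how the head was trained.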
EDIT:
It appears there is a bool to apply the softmax function in the `SimCCPredictor`, and an option to normalize. Looking at my config, it's possible the `normalize: false` line is the issue here:
```yaml
heads:
  bodypart:
    type: RTMCCHead
    weight_init: RTMPose
    target_generator:
      type: SimCCGenerator
      input_size:
      - 384
      - 384
      smoothing_type: gaussian
      sigma:
      - 6.93
      - 6.93
      simcc_split_ratio: 2.0
      label_smooth_weight: 0.0
      normalize: false
    criterion:
      x:
        type: KLDiscreteLoss
        use_target_weight: true
        beta: 10.0
        label_softmax: true
      y:
        type: KLDiscreteLoss
        use_target_weight: true
        beta: 10.0
        label_softmax: true
    predictor:
      type: SimCCPredictor
      simcc_split_ratio: 2.0
```
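If the unnormalized targets are indeed the culprit, the minimal experiment would be flipping that flag in the model config; untested, and it assumes `SimCCGenerator` honors `normalize` the way mmpose's SimCC codec does:

```yaml
target_generator:
  type: SimCCGenerator
  # ... other keys unchanged ...
  normalize: true  # candidate fix: normalize the SimCC target distributions
```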
Code of Conduct