🧪 test(coverage): achieve 100% test coverage #1018
Conversation
FWIW, I'd be fine with not checking tests/ for coverage at all. I don't think coverage was ever intended for test code, but only code being run by other code. :)
The maintainer of coveragepy seems to disagree with you on this: https://nedbatchelder.com/blog/202008/you_should_include_your_tests_in_coverage (and so do I :D).
Introduce `covdefaults` to handle standard exclusion patterns and enforce `fail_under=100` with branch coverage. Add targeted tests for every previously uncovered code path across CLI, env backends, and builder. Fix the `test_external_uv_detection` assertion. Refactor mock distributions in `test_projectbuilder` to use class-level `_metadata` attributes.
There are only two arguments "for" there:
Linters check this already. Pytest also has a mode to run tests with the same name. I don't think this is really that important if you have flake8/ruff. And if you missed something with a test that's not running, coverage will already show that the thing you tried to cover isn't covered. :)
Coverage for test helpers is fine, though less important. It's the test files themselves that IMO get very little if any benefit from being covered. There are quite a few reasons not to run parts of the test suite: you might have a property-based suite (like the one we just added to packaging) that isn't run during normal testing (and we want 100% coverage without that extra suite), similarly for benchmarking code, and there are OS-specific tests that don't run everywhere, etc. As a user, I want to know the code I'm using is covered; I don't care whether the code that covers the code I'm using is covered. :) I have the opposite issue with test coverage from the "common complaints" there: it takes something with 100% coverage and lowers it because there are bits of the test suite that don't count toward coverage.
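For cases like these, coverage.py can be told to skip specific files rather than all of tests/, via `omit` patterns. A sketch of a `pyproject.toml` fragment (the file names are hypothetical examples of the suites described above):

```toml
[tool.coverage.run]
branch = true
omit = [
    # Property-based suite only runs in a separate, optional job.
    "tests/test_properties.py",
    # Benchmarking code is not part of the coverage gate.
    "tests/benchmark_*.py",
]
```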
Oh, I'm not saying at all that you have to run full coverage all the time, but I do expect the CI to validate that all the code in there is run, and we don't have that for code in our tests.
Closes #174
The project had ~94% line coverage with gaps across platform-specific code paths (Windows colorama init, symlink detection), version-conditional compat branches, and several untested CLI/env backend paths. Branch coverage was not measured at all, hiding additional blind spots.
🔧 This PR introduces `covdefaults` as a coverage plugin to standardize exclusion patterns and enforce `fail_under=100` with branch coverage enabled. Rather than sprinkling `# pragma: no cover` liberally, each uncovered path got a targeted test where possible. Pragmas are limited to genuinely unreachable code: version-specific compat branches (Python < 3.10.2, < 3.11, < 3.14), a Windows-only symlink probe that depends on `os.O_TEMPORARY`, and a single always-True version gate on 3.14+. The mock distribution hierarchy in `test_projectbuilder` is refactored to use class-level `_metadata` attributes with a centralized `read_text` and registry-based `from_name`, eliminating duplicated method overrides that created untestable branches. `tests/test_integration.py` is now omitted from coverage measurement since those tests are always skipped without `--run-integration`. A pre-existing bug in `test_external_uv_detection_success` (asserting against `shutil.which("uv", path=...)` instead of `shutil.which("uv")`) is fixed as well.