
🧪 test(coverage): achieve 100% test coverage #1018

Merged
gaborbernat merged 1 commit into pypa:main from gaborbernat:coverage
Apr 10, 2026

Conversation

@gaborbernat (Collaborator) commented Apr 9, 2026

Closes #174

The project had ~94% line coverage with gaps across platform-specific code paths (Windows colorama init, symlink detection), version-conditional compat branches, and several untested CLI/env backend paths. Branch coverage was not measured at all, hiding additional blind spots.

🔧 This PR introduces covdefaults as a coverage plugin to standardize exclusion patterns and enforce fail_under=100 with branch coverage enabled. Rather than sprinkling # pragma: no cover liberally, each uncovered path got a targeted test where possible. Pragmas are limited to genuinely unreachable code: version-specific compat branches (Python < 3.10.2, < 3.11, < 3.14), a Windows-only symlink probe that depends on os.O_TEMPORARY, and a single always-True version gate on 3.14+. The mock distribution hierarchy in test_projectbuilder is refactored to use class-level _metadata attributes with a centralized read_text and registry-based from_name, eliminating duplicated method overrides that created untestable branches.
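As a rough sketch (not the exact configuration from this PR), wiring covdefaults into a `pyproject.toml`-based coverage setup looks roughly like this; the plugin itself supplies branch measurement, the standard exclusion patterns, and `fail_under = 100` as defaults:

```toml
# Sketch only: covdefaults, when loaded as a coverage plugin, enables
# branch coverage, a standard set of exclude_lines patterns (e.g. the
# version-gate idioms mentioned above), and fail_under = 100.
[tool.coverage.run]
plugins = ["covdefaults"]
```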

⚠️ tests/test_integration.py is now omitted from coverage measurement since those tests are always skipped without --run-integration. A pre-existing bug in test_external_uv_detection_success (asserting against shutil.which("uv", path=...) instead of shutil.which("uv")) is fixed as well.
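For context on the fixed assertion: `shutil.which` with an explicit `path=` argument searches only the given path string, while the default call consults the `PATH` environment variable, so the two calls can disagree. A minimal illustration (the temporary directory and fake binary are made up for demonstration):

```python
import os
import shutil
import stat
import tempfile

# Create a throwaway directory containing a fake executable named "uv".
tmp = tempfile.mkdtemp()
fake = os.path.join(tmp, "uv")
with open(fake, "w") as f:
    f.write("#!/bin/sh\n")
os.chmod(fake, os.stat(fake).st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

# With path=..., the search is confined to that directory.
print(shutil.which("uv", path=tmp))
# Without it, the search walks the PATH environment variable, which may
# or may not contain a real uv installation, hence the two results differ.
print(shutil.which("uv"))
```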

@gaborbernat added the enhancement (New feature or request) label Apr 9, 2026
@henryiii (Contributor)
FWIW, I'd be fine with not checking tests/ for coverage at all. I don't think coverage was ever intended for test code, but only code being run by other code. :)

@gaborbernat force-pushed the coverage branch 5 times, most recently from ba46c9f to b158fae on April 10, 2026 15:13
@gaborbernat (Collaborator, Author)

> FWIW, I'd be fine with not checking tests/ for coverage at all. I don't think coverage was ever intended for test code, but only code being run by other code. :)

The maintainer of coverage.py seems to disagree with you on this: https://nedbatchelder.com/blog/202008/you_should_include_your_tests_in_coverage (and so do I :D).

Introduce covdefaults to handle standard exclusion patterns and enforce
fail_under=100 with branch coverage. Add targeted tests for every
previously uncovered code path across CLI, env backends, and builder.
Fix test_external_uv_detection assertion. Refactor mock distributions
in test_projectbuilder to use class-level _metadata attributes.
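The mock-distribution refactor mentioned above can be sketched as follows. This is a hypothetical reconstruction, not the PR's actual code: each fake distribution carries its metadata as a class-level `_metadata` attribute, `read_text` is implemented once on the base class, and `from_name` resolves subclasses through a registry instead of per-class method overrides (the `FlitDistribution` name and metadata values are invented for illustration):

```python
import importlib.metadata

class MockDistribution(importlib.metadata.Distribution):
    _metadata = ""  # overridden as a plain class attribute by subclasses
    _registry: dict[str, type["MockDistribution"]] = {}

    def __init_subclass__(cls, name: str = "", **kwargs):
        # Registry-based lookup: subclasses register under a package name.
        super().__init_subclass__(**kwargs)
        if name:
            cls._registry[name] = cls

    def read_text(self, filename):
        # Single centralized implementation: only METADATA is served.
        return self._metadata if filename == "METADATA" else None

    def locate_file(self, path):  # abstract in Distribution, so required
        raise FileNotFoundError(path)

    @classmethod
    def from_name(cls, name):
        try:
            return cls._registry[name]()
        except KeyError:
            raise importlib.metadata.PackageNotFoundError(name) from None

class FlitDistribution(MockDistribution, name="flit_core"):
    _metadata = "Metadata-Version: 2.1\nName: flit_core\nVersion: 3.9.0\n"

dist = MockDistribution.from_name("flit_core")
print(dist.metadata["Version"])  # parsed from the class-level _metadata
```

Because every subclass only sets data, there are no duplicated method bodies left to create untestable branches.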
@henryiii (Contributor)

There are only two arguments "for" there:

> It’s easy to copy and paste a test to create a new test, but forget to change the name

Linters check this already. Pytest also has a mode to run tests with the same name. I don't think this is really that important if you have flake8/ruff.

And if you missed something with a test that's not running, coverage will show the thing you tried to cover isn't covered already. :)

> the tests directory has code that is not a test itself, but is a helper for the tests

Coverage for test helpers is fine, though less important. It's the test files themselves that IMO get very little if any benefit from being covered.

There are quite a few reasons not to run parts of the test suite; for example, you might have a property-based test suite (like the one we just added to packaging) that isn't run during normal testing (and we want 100% coverage without this extra suite). Similar for benchmarking code. And there are OS-specific tests that don't run, etc.

As a user, I want to know the code I'm using is covered. I don't care if the code that covers the code I'm using is covered. :)

I have the opposite issue with test coverage from the "common complaints" there; it takes something with 100% coverage and lowers it because there are bits of the test suite that don't count toward coverage.

@gaborbernat (Collaborator, Author)

Oh, I'm not saying at all that you have to run full coverage all the time, but I do expect the CI to validate that all the code in there is run, and we don't have that for the code in our tests.

@gaborbernat gaborbernat merged commit 4348292 into pypa:main Apr 10, 2026
65 checks passed

Successfully merging this pull request may close these issues: Missing coverage in tests