Developed by Yuxing Tang (yuxing.tang@nih.gov), Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, National Institutes of Health Clinical Center
This software provides a trained model to separate frontal-view chest X-rays (CXRs) into two categories: normal upright and counterclockwise 90-degree rotated.
For example, a large number of CXRs in the PLCO dataset (https://biometry.nci.nih.gov/cdas/plco/) are rotated 90 degrees counterclockwise, yet no metadata accompanying the images describes this. Here I provide a trained CNN model (ResNet18) that automatically separates normal-view CXRs from rotated ones.
- Linux or OSX
- NVIDIA GPU
- Python 2.7
- PyTorch v0.3 or later
- Numpy
(Image Source: NIH ChestXray14 https://nihcc.app.box.com/v/ChestXray-NIHCC)
- Download the trained model from our lab Box Drive here (85 MB).
- Put the trained model into ./trained-models/
- Run python run_test_samples.py
- The images will be separated into two folders: images-0 with normal CXRs, and images-90 with 90-degree rotations.
- Download the trained model from our lab Box Drive here (85 MB).
- Put the trained model into ./trained-models/
- Create an ./images/ folder and put your own images into this folder.
- Generate a .txt file listing the image file names from the shell command line: ls ./images/ > test_list.txt
- Run: python run_test_own.py to test.
- The images will be separated into two folders: images-0-own with normal CXRs, and images-90-own with 90-degree rotations.
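The sorting step above can be sketched in a few lines. `predict` stands in for the trained model's per-image inference call (hypothetical here); the output folder names follow the README:

```python
# Sketch: copy each image into images-0-own or images-90-own according to
# the predicted label (0 = normal, 1 = rotated 90 degrees).
import os
import shutil

def sort_images(image_dir, predict,
                out_normal="images-0-own", out_rotated="images-90-own"):
    os.makedirs(out_normal, exist_ok=True)
    os.makedirs(out_rotated, exist_ok=True)
    for name in sorted(os.listdir(image_dir)):
        src = os.path.join(image_dir, name)
        dst_dir = out_normal if predict(src) == 0 else out_rotated
        shutil.copy(src, os.path.join(dst_dir, name))
```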
We provide the list of rotated CXRs in the PLCO dataset in PLCO-rotation-90.txt.
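The list can be consumed programmatically, e.g. to flag rotated PLCO images before training. A minimal sketch, assuming the file holds one image filename per line:

```python
# Sketch: read PLCO-rotation-90.txt into a set of rotated-image filenames,
# skipping blank lines. One filename per line is assumed.
def load_rotated_list(path="PLCO-rotation-90.txt"):
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}
```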
