To prepare the dataset in HDF5 format, run the following command:
```
python prepare_dataset.py
```

You can modify the dataset name in the configuration.txt file.
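A minimal sketch of the HDF5 packaging step, assuming prepare_dataset.py stores image arrays under a single key with h5py. The key name, shapes, and file path below are illustrative assumptions, not the script's actual layout:

```python
# Hypothetical sketch of HDF5 round-tripping; key name and shapes are assumptions.
import os
import tempfile

import h5py
import numpy as np

def write_hdf5(arr, outfile):
    # Store the array under a single "image" key.
    with h5py.File(outfile, "w") as f:
        f.create_dataset("image", data=arr, dtype=arr.dtype)

def load_hdf5(infile):
    # Read the full array back from the "image" key.
    with h5py.File(infile, "r") as f:
        return f["image"][()]

# Round-trip a fake batch of 3-channel fundus images.
imgs = np.random.rand(2, 3, 64, 64).astype(np.float32)
path = os.path.join(tempfile.mkdtemp(), "train_imgs.hdf5")
write_hdf5(imgs, path)
restored = load_hdf5(path)
```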
The workflow consists of two main steps:
- Training an FCN (Fully Convolutional Network) model
- Testing the trained FCN model
To start training, run the following command:
```
python pytorch_train.py
```

- Model architecture and training settings are configured in configuration.txt.
- Number of sub-images (`N_subimgs`) for different datasets:
  - DRIVE & STARE: 20,000
  - CHASE_DB1: 21,000
  - HRF: 30,000
  - Private dataset: 90,000
- Training parameters:
  - Epochs (`N-epochs`): 100
  - Batch size (`batch_size`): 35
  - Learning rate (`lr`): 3e-3
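As an illustration only, these hyperparameters could be read from an INI-style configuration.txt with Python's configparser; the section name and exact key spellings below are assumptions, and the repository's actual file may differ:

```python
# Hypothetical config layout; section and key names are assumptions.
import configparser

config = configparser.ConfigParser()
config.read_string("""
[training settings]
N_subimgs = 20000
N-epochs = 100
batch_size = 35
lr = 3e-3
""")

section = config["training settings"]
n_epochs = section.getint("N-epochs")
batch_size = section.getint("batch_size")
lr = section.getfloat("lr")
```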
To test the trained model, run the following command:
```
python pytorch_predict_fcn.py
```

- Stride settings for testing:
  - DRIVE, STARE, CHASE_DB1: `stride_height = 5`, `stride_width = 5`
  - HRF, Private dataset: `stride_height = 10`, `stride_width = 10`
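A rough sketch of how the stride controls test-time tiling, assuming the predictor slides a fixed-size patch window over each image; the patch size and image size here are illustrative assumptions, and only the stride values come from the settings above:

```python
# Illustrative only: patch size and image size are assumptions.
def count_patches(img_h, img_w, patch_h, patch_w, stride_h, stride_w):
    """Number of window positions when sliding a patch with the given strides."""
    n_h = (img_h - patch_h) // stride_h + 1
    n_w = (img_w - patch_w) // stride_w + 1
    return n_h * n_w

# Smaller strides produce more overlapping patches (smoother averaged
# predictions) at the cost of more forward passes:
dense = count_patches(565, 565, 48, 48, 5, 5)     # stride 5 tiling
coarse = count_patches(565, 565, 48, 48, 10, 10)  # stride 10 tiling
```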
Our pretrained models used in the paper are available on Google Drive.
After completing all processes, run the following command to obtain evaluation results:
```
python evalution.py
```

I'm very grateful to my co-first author, Chee Hong Lee (cheehong200292@gmail.com), for his diligent efforts and contributions. Many thanks for the code of these baseline backbone networks: DUNet, DSCNet, AttUNet, UKAN, RollingUNet, MambaUNet, CTFNet, IterNet, BCDUNet, and UNet++. The transforms refer to torchbiomed.