stevezs315/AttUKAN

1. Preparing datasets

To prepare the dataset in HDF5 format, run the following command:

python prepare_dataset.py

You can modify the dataset name in the configuration.txt file.
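As an illustration, the dataset name and the parameters referenced below would sit in configuration.txt roughly like this. This is a hypothetical sketch assuming an INI-style file; the actual section and key names in this repository may differ.

```ini
; Hypothetical configuration.txt layout (section/key names are assumptions)
[data attributes]
dataset = DRIVE

[training settings]
N_subimgs = 20000
N_epochs = 100
batch_size = 35
lr = 3e-3

[testing settings]
stride_height = 5
stride_width = 5
```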

Available Datasets

DRIVE, STARE, CHASE_DB1, HRF

2. Run the Full Workflow

The workflow consists of two main steps:

  1. Training an FCN (Fully Convolutional Network) model
  2. Testing the trained FCN model

Train an FCN Model

To start training, run the following command:

python pytorch_train.py
  • Model architecture and training settings are configured in configuration.txt.
  • Number of sub-images (N_subimgs) for different datasets:
    • DRIVE & STARE: 20,000
    • CHASE_DB1: 21,000
    • HRF: 30,000
    • Private dataset: 90,000
  • Training parameters:
    • Epochs (N_epochs): 100
    • Batch size (batch_size): 35
    • Learning rate (lr): 3e-3
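Since the settings above live in configuration.txt, a training script can pull them in with the standard-library configparser. This is a minimal sketch assuming an INI-style layout; the section and key names here are assumptions and may not match the repository exactly.

```python
# Sketch: reading training settings from an INI-style configuration.txt.
# Section/key names are illustrative assumptions, not the repo's exact layout.
import configparser

config = configparser.ConfigParser()
# In the real script this would be: config.read("configuration.txt")
config.read_string("""
[training settings]
N_subimgs = 20000
N_epochs = 100
batch_size = 35
lr = 3e-3
""")

train = config["training settings"]
n_epochs = train.getint("N_epochs")      # 100
batch_size = train.getint("batch_size")  # 35
lr = train.getfloat("lr")                # 0.003
```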

Test the FCN Model

To test the trained model, run the following command:

python pytorch_predict_fcn.py
  • Stride settings for testing:
    • DRIVE, STARE, CHASE_DB1: stride_height = 5, stride_width = 5
    • HRF, Private dataset: stride_height = 10, stride_width = 10
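To see why the stride matters, the snippet below estimates how many overlapping patches a sliding window generates per image, assuming the common overlap-tile scheme where the image is padded so the strides divide evenly. The patch size (48) and the DRIVE-like image dimensions (584x565) are illustrative assumptions, not values taken from this repository.

```python
# Sketch: patch count per test image under a sliding-window scheme.
# Smaller strides -> denser overlap -> smoother averaged predictions,
# but more patches to run through the network.
import math

def n_patches(img_h, img_w, patch_h, patch_w, stride_h, stride_w):
    """Patches per image, assuming padding so the last window fits."""
    n_h = math.ceil((img_h - patch_h) / stride_h) + 1
    n_w = math.ceil((img_w - patch_w) / stride_w) + 1
    return n_h * n_w

dense = n_patches(584, 565, 48, 48, 5, 5)     # stride 5 (DRIVE/STARE/CHASE_DB1)
coarse = n_patches(584, 565, 48, 48, 10, 10)  # stride 10 (HRF/private)
```

With these illustrative numbers, the stride-5 setting produces roughly four times as many patches as stride 10, which is why the larger HRF images use the coarser stride.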

Pretrained Model

The pretrained models used in our paper are available on Google Drive.

Quantitative Evaluation

After completing all of the above steps, run the following command to obtain the evaluation results:

python evalution.py
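For intuition, vessel-segmentation evaluations typically report pixel-wise metrics such as accuracy, sensitivity, and specificity computed against the ground-truth mask. The helper below is an illustrative sketch on toy binary masks; the function name and the exact metric set are assumptions, not the contents of evalution.py.

```python
# Sketch: typical pixel-wise metrics for binary vessel masks
# (1 = vessel, 0 = background). Toy inputs, illustrative only.
def binary_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,  # vessel recall
        "specificity": tn / (tn + fp) if tn + fp else 0.0,  # background recall
    }

m = binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
```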

Acknowledgement

I'm very grateful to my co-first author, Chee Hong Lee (cheehong200292@gmail.com), for his diligent efforts and contributions. Many thanks to the authors of the baseline backbone networks, including DUNet, DSCNet, AttUNet, UKAN, RollingUNet, MambaUNet, CTFNet, IterNet, BCDUNet, and UNet++, for their code. The transforms refer to torchbiomed.
