Retinal Vessel Segmentation Based on the Anam-Net Model
DOI:
https://doi.org/10.5755/j02.eie.30594

Keywords:
Anam-Net, Deep learning, Data augmentation, Retinal vessel segmentation, Semantic segmentation

Abstract
Accurate segmentation of the retinal blood vessels can help ophthalmologists diagnose eye diseases associated with conditions such as diabetes and hypertension. Vessel segmentation poses a number of challenges: some stem from haemorrhages and microaneurysms in fundus images, while others arise from the central vessel reflex and low contrast. Encoder-decoder networks have recently achieved excellent performance in retinal vessel segmentation, but at the cost of increased computational complexity. In this work, we use the Anam-Net model to segment retinal vessels accurately at a low computational cost. Anam-Net is a lightweight convolutional neural network (CNN) with bottleneck layers in both the encoder and decoder stages; it has 6.9 times fewer parameters than the standard U-Net model and 10.9 times fewer than the R2U-Net model. We evaluated the model on three open-access datasets: DRIVE, STARE, and CHASE_DB. The results show that Anam-Net achieves better segmentation accuracy than several state-of-the-art methods, reaching a sensitivity and accuracy of 0.8601 and 0.9660 on DRIVE, 0.8697 and 0.9728 on STARE, and 0.8553 and 0.9746 on CHASE_DB. We also conducted cross-training experiments across the DRIVE, STARE, and CHASE_DB datasets, and the outcomes demonstrate the generalizability and robustness of the Anam-Net model.
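To make the role of the bottleneck layers concrete, the following minimal PyTorch sketch shows one way a squeeze-and-expand bottleneck block of this kind could be written. The block name (AnamBottleneck), the squeeze ratio, and the layer widths are illustrative assumptions, not the exact design from the paper.

# Illustrative sketch of a lightweight bottleneck block of the kind used in
# Anam-Net-style encoder-decoder CNNs. The name AnamBottleneck and the
# squeeze ratio of 4 are assumptions for illustration, not the authors' design.
import torch
import torch.nn as nn

class AnamBottleneck(nn.Module):
    def __init__(self, channels: int, squeeze: int = 4):
        super().__init__()
        mid = channels // squeeze  # squeeze the channel count to cut parameters
        self.block = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1, bias=False),       # 1x1 squeeze
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, kernel_size=3, padding=1, bias=False), # 3x3 spatial conv
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1, bias=False),       # 1x1 expand
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection: output keeps the input's shape and channel count
        return self.act(x + self.block(x))

# Example: a 64-channel feature map from a fundus image patch
x = torch.randn(1, 64, 48, 48)
print(AnamBottleneck(64)(x).shape)  # torch.Size([1, 64, 48, 48])

The 1x1 convolutions shrink and then restore the channel count, so most of the parameter budget is avoided while the 3x3 convolution still captures spatial context. The sensitivity and accuracy figures reported above are the standard pixel-wise definitions, TP / (TP + FN) and (TP + TN) / (TP + TN + FP + FN).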
License
Copyright for papers published in this journal is retained by the author(s), with first publication rights granted to the journal. The authors agree to the Creative Commons Attribution 4.0 (CC BY 4.0) licence, under which papers in the journal are published.
By virtue of their appearance in this open access journal, papers may be freely used in educational and other non-commercial settings, provided the initial publication in the journal is acknowledged.
Funding data
Ministry of Education – Kingdom of Saudi Arabia (grant number DRI-KSU-415)