Volume 21, Issue 1
  • ISSN: 1573-4056
  • E-ISSN: 1875-6603

Abstract

Introduction

Fundus fluorescein angiography captures detailed images of the fundus vasculature, enabling precise disease assessment. Translating fundus images into fundus fluorescein angiography images can assist patients who cannot receive contrast agents because of physical contraindications, facilitating disease analysis. Previous studies on this translation task were limited by the use of only 17 image pairs for training, potentially restricting model performance.

Methods

Image pairs were collected from patients through a collaborating hospital to create a larger dataset. A fundus image to fundus fluorescein angiography translation model was developed using structure self-supervised representation cycle learning. This model focuses on vascular structures for self-supervised learning, incorporates an auxiliary branch, and utilizes cycle learning to enhance the main training pipeline.
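The abstract does not give implementation details, so the training objective described above can only be illustrated schematically. The sketch below shows the two ingredients named in the Methods: a cycle-consistency loss (translate fundus to angiography and back, then compare with the input) and a self-supervised structure term that compares structure maps rather than raw intensities. The generators `G` and `F`, the `edge_map` extractor, and all shapes are hypothetical toy stand-ins, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two generators: G maps "fundus" features to "FFA"
# features and F maps back. The paper's generators are deep networks;
# linear maps are used here only so the cycle is exact and checkable.
G = np.eye(8) + 0.1 * rng.standard_normal((8, 8))   # fundus -> FFA
F = np.linalg.inv(G)                                # FFA -> fundus

def cycle_loss(x, G, F):
    """L1 cycle-consistency: ||F(G(x)) - x||_1, averaged per element."""
    return np.abs(x @ G @ F - x).mean()

def structure_loss(x, y, extract):
    """Self-supervised structure term: compare structure maps of the
    input and its translation rather than raw intensities."""
    return np.abs(extract(x) - extract(y)).mean()

def edge_map(img):
    # Hypothetical structure extractor: first-difference magnitude along
    # the last axis, a crude proxy for an edge/vessel map.
    return np.abs(np.diff(img, axis=-1))

x = rng.standard_normal((4, 8))   # a batch of "fundus" feature rows
y = x @ G                         # translated "FFA" features
total = cycle_loss(x, G, F) + structure_loss(x, y, edge_map)
```

In the real model the structure extractor would target vascular structures specifically, and the two losses would be weighted and combined with adversarial terms; the point here is only the shape of the cycle plus auxiliary-structure objective.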

Results

Comparative evaluations on the test set demonstrate superior performance of the proposed model, with significantly improved Fréchet inception distance (FID) and kernel inception distance (KID) scores. Generalization experiments on public datasets further confirm the model's advantages across multiple evaluation metrics.
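For readers unfamiliar with the metrics above, FID fits a Gaussian to deep features of real and generated images and measures the Fréchet distance between the two: FID = ||μ_r − μ_f||² + Tr(Σ_r + Σ_f − 2(Σ_r Σ_f)^{1/2}). The paper does not describe its implementation; this is a generic numpy sketch operating on precomputed feature vectors (in practice these come from an Inception network).

```python
import numpy as np

def frechet_distance(feats_a, feats_b):
    """FID between Gaussians fit to two feature sets of shape (N, D).

    FID = ||mu_a - mu_b||^2 + Tr(S_a + S_b - 2 (S_a S_b)^{1/2})
    The trace of the matrix square root is computed as the sum of the
    square roots of the eigenvalues of S_a @ S_b.
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    diff = mu_a - mu_b
    eigvals = np.linalg.eigvals(cov_a @ cov_b)
    sqrt_trace = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    return diff @ diff + np.trace(cov_a) + np.trace(cov_b) - 2.0 * sqrt_trace

rng = np.random.default_rng(0)
real = rng.standard_normal((500, 4))   # stand-in for Inception features
fake = real + 1.0                      # every feature dimension shifted by 1
fid_same = frechet_distance(real, real)    # ~0: identical distributions
fid_shift = frechet_distance(real, fake)   # ~4: mean shift of 1 in 4 dims
```

KID replaces the Gaussian assumption with a polynomial-kernel maximum mean discrepancy and is unbiased for small sample sizes, which is why the two are often reported together.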

Discussion

The enhanced performance of the proposed model can be attributed to the larger dataset and the novel structure self-supervised cycle learning approach, which effectively captures vascular details critical for accurate translation. The model's robust generalization across public datasets suggests its potential applicability in diverse clinical settings. However, challenges such as computational complexity and the need for further validation in real-world scenarios warrant additional investigation to ensure scalability and clinical reliability.

Conclusion

The proposed model effectively translates fundus images to fundus fluorescein angiography images, overcoming limitations of small datasets in previous studies. This approach demonstrates strong generalization capabilities, highlighting its potential to aid in large-scale disease analysis and patient care.

This is an open access article published under CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/legalcode).
