Volume 21, Issue 1
  • ISSN: 1573-4056
  • E-ISSN: 1875-6603

Abstract

Introduction

Magnetic Resonance Imaging (MRI) is a crucial method for clinical diagnosis. Different abdominal MRI sequences provide tissue and structural information from various perspectives, offering reliable evidence for doctors to make accurate diagnoses. In recent years, with the rapid development of intelligent medical imaging, some studies have begun exploring deep learning methods for MRI sequence recognition. However, because MRI sequences exhibit significant intra-class variations and subtle inter-class differences, traditional deep learning algorithms still struggle to handle such complexly distributed data effectively. Moreover, the key features for identifying MRI sequence categories often lie in subtle details, while sequences from individual samples can differ markedly from one another. Current deep learning-based MRI sequence classification methods tend to overlook these fine-grained differences across diverse samples.

Methods

To overcome these challenges, this paper proposes a fine-grained prototype network, SequencesNet, for MRI sequence classification. A network combining convolutional neural networks (CNNs) with an improved vision transformer is constructed for feature extraction, capturing both local and global information. Specifically, a Feature Selection Module (FSM) is added to the vision transformer, which selects fine-grained features for sequence discrimination based on attention weights fused across multiple layers. A Prototype Classification Module (PCM) is then proposed to classify MRI sequences based on these fine-grained MRI representations.
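For concreteness, the following is a minimal sketch of the feature-selection idea described above: per-layer attention maps are fused (here by a simplified attention rollout, in the spirit of ref. 33) and the patch tokens the class token attends to most are kept. All function names, shapes, and the value of k are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a Feature Selection Module (FSM): fuse attention weights
# from multiple transformer layers, then keep the most-attended patch tokens.
import torch

def fuse_attention(attn_per_layer):
    """Fuse per-layer attention maps by matrix product (simplified rollout).

    attn_per_layer: list of tensors, each (batch, heads, tokens, tokens).
    Returns a (batch, tokens, tokens) fused attention map.
    """
    fused = None
    for attn in attn_per_layer:
        a = attn.mean(dim=1)                       # average over heads
        fused = a if fused is None else torch.bmm(a, fused)
    return fused

def select_tokens(tokens, attn_per_layer, k=8):
    """Keep the k patch tokens the [CLS] token attends to most.

    tokens: (batch, 1 + num_patches, dim), with the [CLS] token first.
    """
    fused = fuse_attention(attn_per_layer)         # (batch, T, T)
    cls_to_patches = fused[:, 0, 1:]               # CLS attention to patches
    topk = cls_to_patches.topk(k, dim=-1).indices  # (batch, k)
    idx = topk.unsqueeze(-1).expand(-1, -1, tokens.size(-1))
    return tokens[:, 1:, :].gather(1, idx)         # (batch, k, dim)

# Toy usage: 12 layers, 4 heads, 1 + 196 patch tokens of width 768.
B, T, D = 2, 197, 768
attns = [torch.softmax(torch.randn(B, 4, T, T), dim=-1) for _ in range(12)]
tokens = torch.randn(B, T, D)
print(select_tokens(tokens, attns, k=8).shape)     # torch.Size([2, 8, 768])
```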

Results

Comprehensive experiments are conducted on a public abdominal MRI sequence classification dataset and a private dataset. The proposed SequencesNet achieves the highest accuracy on the two sequence classification datasets, 96.73% and 95.98%, respectively, outperforming the comparative prototype-based and fine-grained models. The visualization results show that SequencesNet better captures fine-grained information.

Discussion

The proposed SequencesNet shows promising performance in MRI sequence classification, excelling at distinguishing subtle inter-class differences and handling large intra-class variability. Specifically, the FSM enhances clinical interpretability by focusing on fine-grained features, and the PCM improves clustering by optimizing prototype-sample distances. Compared with baselines such as 3DResNet18 and TransFG, SequencesNet achieves higher recall and precision, particularly for similar sequences such as DCE-LAP and DCE-PVP.
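As a hedged illustration of how a prototype classification module can optimize prototype-sample distances, the sketch below scores a pooled fine-grained feature against learnable class prototypes and uses negative distances as logits, in the spirit of prototypical networks (ref. 10). The mean pooling, Euclidean distance, and class/dimension sizes are assumptions, not the paper's exact PCM.

```python
# Hedged sketch of a Prototype Classification Module (PCM): classify by
# distance to one learnable prototype per MRI sequence class.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeClassifier(nn.Module):
    def __init__(self, num_classes: int, dim: int):
        super().__init__()
        # One learnable prototype vector per sequence class.
        self.prototypes = nn.Parameter(torch.randn(num_classes, dim))

    def forward(self, selected_tokens: torch.Tensor) -> torch.Tensor:
        """selected_tokens: (batch, k, dim) fine-grained features from the FSM."""
        feat = selected_tokens.mean(dim=1)          # (batch, dim) pooled feature
        dists = torch.cdist(feat, self.prototypes)  # (batch, num_classes)
        return -dists                               # logits: nearer prototype = larger

# Toy usage: 6 sequence classes, 768-d features. Cross-entropy on these
# logits pulls each sample toward its class prototype and away from others.
clf = PrototypeClassifier(num_classes=6, dim=768)
logits = clf(torch.randn(2, 8, 768))
loss = F.cross_entropy(logits, torch.tensor([0, 3]))
loss.backward()
print(logits.shape, float(loss))
```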

Conclusion

The proposed MRI sequence classification model, SequencesNet, addresses the subtle inter-class differences and significant intra-class variations present in medical images. Its modular design can be extended to other medical imaging tasks, including but not limited to multimodal image fusion, lesion detection, and disease staging. Future work will aim to reduce the model's computational complexity and improve its generalization.

This is an open access article published under CC BY 4.0 https://creativecommons.org/licenses/by/4.0/legalcode

References

  1. Zhang X., Zhou S., Li B., Wang Y., Lu K., Liu W., Wang Z. Automatic segmentation of pericardial adipose tissue from cardiac MR images via semi-supervised method with difference-guided consistency. Med. Phys., 2025, 52(3), 1679-1692. doi:10.1002/mp.17558
  2. Wang L., Sun Y., Seidlitz J., Bethlehem R.A.I., Alexander-Bloch A., Dorfschmidt L., Li G., Elison J.T., Lin W., Wang L. A lifespan-generalizable skull-stripping model for magnetic resonance images that leverages prior knowledge from brain atlases. Nat. Biomed. Eng., 2025, 9(5), 700-715. doi:10.1038/s41551-024-01337-w PMID: 39779813
  3. Macdonald J.A., Zhu Z., Konkel B., Mazurowski M.A., Wiggins W.F., Bashir M.R. Duke liver dataset: A publicly available liver MRI dataset with liver segmentation masks and series labels. Radiol. Artif. Intell., 2023, 5(5), e220275. doi:10.1148/ryai.220275 PMID: 37795141
  4. Wang S.H., Du J., Xu H., Yang D., Ye Y., Chen Y., Zhu Y., Ba T., Yuan C., Yang Z.H. Automatic discrimination of different sequences and phases of liver MRI using a dense feature fusion neural network: A preliminary study. Abdom. Radiol., 2021, 46(10), 4576-4587. doi:10.1007/s00261-021-03142-4 PMID: 34057565
  5. Zhu Z., Mittendorf A., Shropshire E., Allen B., Miller C., Mustafa R. 3D pyramid pooling network for abdominal MRI series classification. IEEE Trans. Pattern Anal. Mach. Intell., 2022, 44(4), 1688-1698. doi:10.1109/TPAMI.2020.3033990 PMID: 33112740
  6. Mahmutoglu M.A., Preetha C.J., Meredig H., Tonn J.C., Weller M., Wick W., Bendszus M., Brugnara G., Vollmuth P. Deep learning-based identification of brain MRI sequences using a model trained on large multicentric study cohorts. Radiol. Artif. Intell., 2024, 6(1), e230095. doi:10.1148/ryai.230095 PMID: 38166331
  7. Zhou M., Wu X., Wei X., Xiang T., Fang B., Kwong S. Low-light enhancement method based on a Retinex model for structure preservation. IEEE Trans. Multimed., 2024, 26, 650-662. doi:10.1109/TMM.2023.3268867
  8. Zhou M., Lan X., Wei X., Liao X., Mao Q., Li Y., Wu C., Xiang T., Fang B. An end-to-end blind image quality assessment method using a recurrent network and self-attention. IEEE Trans. Broadcast., 2023, 69(2), 369-377. doi:10.1109/TBC.2022.3215249
  9. Zhou M., Zhao X., Luo F. Robust RGB-T tracking via adaptive modality weight correlation filters and cross-modality learning. ACM Trans. Multimed. Comput. Commun. Appl., 2023, 20(4), 1-20.
  10. Snell J., Swersky K., Zemel R. Prototypical networks for few-shot learning. Adv. Neural Inf. Process. Syst., 2017, 30.
  11. Xi B., Li J., Li Y., Song R., Xiao Y., Du Q., Chanussot J. Semisupervised cross-scale graph prototypical network for hyperspectral image classification. IEEE Trans. Neural Netw. Learn. Syst., 2023, 34(11), 9337-9351. doi:10.1109/TNNLS.2022.3158280 PMID: 35320108
  12. Liu X., Zhou F., Liu J., Jiang L. Meta-learning based prototype-relation network for few-shot classification. Neurocomputing, 2020, 383, 224-234. doi:10.1016/j.neucom.2019.12.034
  13. Chao X., Zhang L. Few-shot imbalanced classification based on data augmentation. Multimedia Syst., 2023, 29(5), 2843-2851. doi:10.1007/s00530-021-00827-0
  14. Cheng G., Cai L., Lang C., Yao X., Chen J., Guo L., Han J. SPNet: Siamese-prototype network for few-shot remote sensing image scene classification. IEEE Trans. Geosci. Remote Sens., 2022, 60, 1-11.
  15. Wu F., Jeremy S. Attentive prototype few-shot learning with capsule network-based embedding. Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, pp. 237-253. doi:10.1007/978-3-030-58604-1_15
  16. Deuschel J., Firmbach D. Multi-prototype few-shot learning in histopathology. 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Montreal, BC, Canada, 11-17 October 2021, pp. 620-628. doi:10.1109/ICCVW54120.2021.00075
  17. Zhang C., Yue J., Qin Q. Global prototypical network for few-shot hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 2020, 13, 4748-4759. doi:10.1109/JSTARS.2020.3017544
  18. Bhunia A.K., Yang Y. Sketch less for more: On-the-fly fine-grained sketch-based image retrieval. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 2020, pp. 9776-9785. doi:10.1109/CVPR42600.2020.00980
  19. Peng Y., He X., Zhao J. Object-part attention model for fine-grained image classification. IEEE Trans. Image Process., 2018, 27(3), 1487-1500. doi:10.1109/TIP.2017.2774041 PMID: 29990123
  20. Zhang H., Xu T., Elhoseiny M., Huang X., Zhang S., Elgammal A., Metaxas D. SPDA-CNN: Unifying semantic part detection and abstraction for fine-grained recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27-30 June 2016, pp. 1143-1152. doi:10.1109/CVPR.2016.129
  21. Ren S., He K., Girshick R., Sun J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell., 2017, 39(6), 1137-1149. doi:10.1109/TPAMI.2016.2577031 PMID: 27295650
  22. Long J., Shelhamer E., Darrell T. Fully convolutional networks for semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 2015, pp. 3431-3440. doi:10.1109/CVPR.2015.7298965
  23. Girshick R., Donahue J., Darrell T., Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23-28 June 2014, pp. 580-587. doi:10.1109/CVPR.2014.81
  24. Zhang N., Donahue J., Girshick R., Darrell T. Part-based R-CNNs for fine-grained category detection. Computer Vision – ECCV 2014, 13th European Conference, Zurich, Switzerland, September 6-12, 2014, pp. 834-849.
  25. Kong S., Fowlkes C. Low-rank bilinear pooling for fine-grained classification. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21-26 July 2017, pp. 7025-7034.
  26. Sun M., Yuan Y., Zhou F., Ding E. Multi-attention multi-class constraint for fine-grained image recognition. Proceedings of the European Conference on Computer Vision (ECCV), Springer, Cham, 06 October 2018, pp. 834-850. doi:10.1007/978-3-030-01270-0_49
  27. Sun G., Cholakkal H., Khan S., Khan F., Shao L. Fine-grained recognition: Accounting for subtle differences between similar classes. Proc. Conf. AAAI Artif. Intell., 2020, 34(7), 12047-12054. doi:10.1609/aaai.v34i07.6882
  28. Zhuang P., Wang Y., Qiao Y. Learning attentive pairwise interaction for fine-grained classification. Proc. Conf. AAAI Artif. Intell., 2020, 34(7), 13130-13137. doi:10.1609/aaai.v34i07.7016
  29. Gao Y., Han X., Wang X., Huang W., Scott M. Channel interaction networks for fine-grained image categorization. Proc. Conf. AAAI Artif. Intell., 2020, 34(7), 10818-10825. doi:10.1609/aaai.v34i07.6712
  30. He K., Zhang X., Ren S., Sun J. Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27-30 June 2016, pp. 770-778. doi:10.1109/CVPR.2016.90
  31. Dosovitskiy A., Beyer L., Kolesnikov A., Weissenborn D., Zhai X., Unterthiner T., Dehghani M., Minderer M., Heigold G., Gelly S. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint, 2020, arXiv:2010.11929.
  32. He J., Chen J.N., Liu S., Kortylewski A., Yang C., Bai Y., Wang C. TransFG: A transformer architecture for fine-grained recognition. Proc. Conf. AAAI Artif. Intell., 2022, 36(1), 852-860. doi:10.1609/aaai.v36i1.19967
  33. Abnar S., Zuidema W. Quantifying attention flow in transformers. arXiv preprint, 2020, arXiv:2005.00928. doi:10.18653/v1/2020.acl-main.385
  34. Wang J., Liu H., Wang X., Jing L. Interpretable image recognition by constructing transparent embedding space. Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10-17 October 2021, pp. 875-884. doi:10.1109/ICCV48922.2021.00093
  35. Nauta M., Schlötterer J., Van Keulen M. PIP-Net: Patch-based intuitive prototypes for interpretable image classification. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17-24 June 2023, pp. 2744-2753. doi:10.1109/CVPR52729.2023.00269
  36. Wang C., Liu Y., Chen Y. Learning support and trivial prototypes for interpretable image classification. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 01-06 October 2023, pp. 2062-2072. doi:10.1109/ICCV51070.2023.00197