Multimodal medical image fusion is a core tool for enhancing the clinical utility of medical images by integrating complementary information from multiple images. However, existing deep learning-based fusion methods struggle to extract key target features effectively, and their results are prone to blurring.
The main objective of this paper is to propose a medical image fusion method that effectively extracts features from the source images and preserves them in the fused result.
The proposed method employs prior knowledge and a dual-branch U-shaped structure to extract both local and global features from images of different modalities. A novel Transformer module is designed to capture global correlations at the super-pixel level. Each feature extraction module uses Haar wavelet downsampling to reduce the spatial resolution of the feature maps while preserving as much information as possible, effectively reducing information uncertainty.
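To illustrate the downsampling step, the following is a minimal sketch of a Haar-wavelet downsampling block, assuming the common formulation in which a single-level 2D Haar transform halves the spatial resolution and the four sub-bands (LL, LH, HL, HH) are stacked along the channel axis before a 1x1 convolution; the class name, channel widths, and projection layer are illustrative assumptions, not the authors' exact module.

```python
import torch
import torch.nn as nn


class HaarWaveletDownsample(nn.Module):
    """Sketch of Haar-wavelet downsampling (assumed form): halves the
    spatial resolution without discarding information, because all four
    Haar sub-bands are kept and concatenated along channels."""

    def __init__(self, in_channels, out_channels):
        super().__init__()
        # 1x1 conv mixes the 4*C sub-band channels down to the desired width.
        self.proj = nn.Conv2d(4 * in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        # Split the feature map into its even/odd rows and columns.
        a = x[:, :, 0::2, 0::2]
        b = x[:, :, 0::2, 1::2]
        c = x[:, :, 1::2, 0::2]
        d = x[:, :, 1::2, 1::2]
        # Orthonormal Haar combinations: one approximation and three detail bands.
        ll = (a + b + c + d) / 2
        lh = (a + b - c - d) / 2
        hl = (a - b + c - d) / 2
        hh = (a - b - c + d) / 2
        return self.proj(torch.cat([ll, lh, hl, hh], dim=1))


if __name__ == "__main__":
    feat = torch.randn(1, 32, 64, 64)      # hypothetical feature map
    down = HaarWaveletDownsample(32, 64)
    print(down(feat).shape)                # torch.Size([1, 64, 32, 32])
```

Because all four sub-bands are retained, the transform is invertible, which is what makes wavelet downsampling attractive compared with strided convolution or pooling when information preservation matters.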
Extensive experiments on public medical image datasets and a biological image dataset demonstrated that the proposed method achieves superior performance in both qualitative and quantitative evaluations.
This paper applies prior knowledge to medical image fusion and proposes a novel dual-branch U-shaped medical image fusion network. Compared with nine state-of-the-art fusion methods, the proposed method produces fused results with richer texture details and better visual quality.