Current Medical Imaging - Current Issue
Volume 21, Issue 1, 2025
Smartphone-based Anemia Screening via Conjunctival Imaging with 3D-Printed Spacer: A Cost-effective Geospatial Health Solution
Authors: A.M. Arunnagiri, M. Sasikala, N. Ramadass and G. Ramya
Introduction: Anemia is a common blood disorder caused by a low red blood cell count, which reduces blood hemoglobin. It affects children, adolescents, and adults of all genders. Anemia diagnosis typically involves invasive procedures such as peripheral blood smears and complete blood count (CBC) analysis. This study aims to develop a cost-effective, non-invasive tool for anemia detection using eye conjunctiva images.
Method: Eye conjunctiva images were captured from 54 subjects using three imaging modalities: a DSLR camera, a smartphone camera, and a smartphone camera fitted with a 3D-printed spacer macro lens. Image processing techniques, including You Only Look Once (YOLOv8), the Segment Anything Model (SAM), and K-means clustering, were used to analyze the images. Using an MLP classifier, the images were classified as anemic, moderately anemic, or normal. The trained model was embedded into an Android application with geotagging capabilities to map the prevalence of anemia across different regions.
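A minimal sketch of the K-means color-clustering and MLP-classification stages described above, assuming pallor-related color features extracted from an already segmented conjunctival region; the feature construction and synthetic data here are illustrative, not the authors' implementation:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def conjunctiva_color_features(pixels_rgb, n_clusters=3):
    """Cluster conjunctival pixels by color and return cluster centers
    plus cluster proportions as a simple pallor-related feature vector."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels_rgb)
    proportions = np.bincount(km.labels_, minlength=n_clusters) / len(km.labels_)
    return np.concatenate([km.cluster_centers_.ravel(), proportions])

# X: one feature vector per subject, y: labels {0: normal, 1: moderately anemic, 2: anemic}
# (synthetic pixel data stands in for real segmented conjunctiva images)
rng = np.random.default_rng(0)
X = np.vstack([conjunctiva_color_features(rng.uniform(0, 255, (500, 3))) for _ in range(54)])
y = rng.integers(0, 3, size=54)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```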
Results: Features extracted using SAM segmentation showed higher statistical significance (p < 0.05) than those extracted using K-means. Comparing the high-resolution DSLR modality with the proposed 3D-printed spacer macro lens also showed statistically significant differences (p < 0.05). The classification accuracy was 98.3% for images from the 3D spacer-equipped smartphone camera, on par with the 98.8% accuracy obtained from DSLR camera images.
Conclusion: The mobile application, developed using images captured with the 3D spacer-equipped modality, provides portable, cost-effective, and user-friendly non-invasive anemia screening. By identifying anemic clusters, it assists healthcare workers in targeted interventions and supports global health initiatives such as Sustainable Development Goal (SDG) 3.
Diffusion Model-based Medical Image Generation as a Potential Data Augmentation Strategy for AI Applications
Authors: Zijian Cao, Jueye Zhang, Chen Lin, Tian Li, Hao Wu and Yibao Zhang
Introduction: This study explored a generative image synthesis method based on diffusion models, potentially providing a low-cost and high-efficiency training data augmentation strategy for medical artificial intelligence (AI) applications.
Methods: The MedMNIST v2 dataset was utilized as a small-volume training dataset under low-performance computing conditions. Based on the characteristics of existing samples, new medical images were synthesized using the proposed annotated diffusion model. In addition to observational assessment, quantitative evaluation was performed based on the gradient descent of the loss function during the generation process and the Fréchet Inception Distance (FID), using various loss functions and feature vector dimensions.
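For reference, FID compares the Gaussian statistics of real and generated feature vectors. A minimal sketch of the standard FID formula, using random arrays as stand-ins for Inception-style embeddings; this is the generic metric, not the authors' exact evaluation pipeline:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_inception_distance(feats_real, feats_gen):
    """FID = ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2*(C_r C_g)^(1/2))."""
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):          # numerical artifacts may leave tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu_r - mu_g) ** 2) + np.trace(cov_r + cov_g - 2.0 * covmean))

rng = np.random.default_rng(0)
feats_real = rng.normal(size=(256, 64))   # e.g., 64-dimensional feature vectors
feats_gen = rng.normal(loc=0.1, size=(256, 64))
print(frechet_inception_distance(feats_real, feats_gen))
```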
Results: Compared to the original data, the proposed diffusion model successfully generated medical images of similar styles but with dramatically varied anatomic details. The model trained with the Huber loss function achieved a higher FID of 15.2 at a feature vector dimension of 2048, compared with the model trained with the L2 loss function, which achieved the best FID of 0.85 at a feature vector dimension of 64.
Discussion: The use of the Huber loss enhanced model robustness, while the FID values indicated acceptable similarity between generated and real images. Future work should explore the application of these models to more complex datasets and clinical scenarios.
Conclusion: This study demonstrated that diffusion model-based medical image synthesis is potentially applicable as an augmentation strategy for AI, particularly where access to real clinical data is limited. Optimal training parameters were also proposed by evaluating the dimensionality of feature vectors in FID calculations and the complexity of loss functions.
Liver Functions in Patients with Chronic Liver Disease and Liver Cirrhosis: Correlation of FLIS and LKER with PALBI Grade and APRI
Authors: Ahmet Cem Demirşah and Elif Gündoğdu
Introduction: In chronic liver disease (CLD) and liver cirrhosis (LC), assessing hepatic function and disease severity is crucial for patient management. This study aimed to evaluate the relationship of the platelet-albumin-bilirubin (PALBI) grade and the aspartate aminotransferase-to-platelet ratio index (APRI) with the functional liver imaging score (FLIS) and the liver-to-kidney enhancement ratio (LKER) using gadolinium ethoxybenzyl diethylenetriamine pentaacetic acid (Gd-EOB-DTPA)-enhanced hepatobiliary phase (HBP) magnetic resonance imaging (MRI).
Methods: After applying exclusion criteria, 86 patients with CLD or LC who underwent Gd-EOB-DTPA-enhanced MRI between January 2018 and October 2023 were included. APRI and PALBI grades were calculated from laboratory data. FLIS was determined as the sum of three HBP imaging features (liver parenchymal enhancement, biliary excretion, and portal vein sign), each scored 0–2. LKER was calculated by dividing liver signal intensity by kidney signal intensity using region of interest (ROI) measurements. Spearman’s correlation was used to assess relationships between the variables.
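A minimal sketch of the ratio and correlation computations described above; the ROI values and variable names are illustrative, not study data:

```python
import numpy as np
from scipy.stats import spearmanr

def lker(liver_roi_si, kidney_roi_si):
    """Liver-to-kidney enhancement ratio: mean liver HBP signal / mean kidney signal."""
    return np.mean(liver_roi_si) / np.mean(kidney_roi_si)

def flis(parenchymal_enhancement, biliary_excretion, portal_vein_sign):
    """Functional liver imaging score: sum of three HBP features, each scored 0-2."""
    assert all(0 <= s <= 2 for s in (parenchymal_enhancement, biliary_excretion, portal_vein_sign))
    return parenchymal_enhancement + biliary_excretion + portal_vein_sign

# Illustrative per-patient values
rng = np.random.default_rng(1)
lker_values = rng.uniform(1.0, 2.5, size=86)
palbi_grades = rng.integers(1, 4, size=86)
rho, p = spearmanr(lker_values, palbi_grades)
print(f"Spearman rho = {rho:.3f}, p = {p:.3f}")
```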
Results: APRI showed a weak negative correlation with both FLIS (r = –0.327, p = 0.02) and LKER (r = –0.308, p = 0.004). PALBI showed a moderate negative correlation with both FLIS (r = –0.495, p = 0.001) and LKER (r = –0.554, p = 0.0001).
Discussion: FLIS and LKER correlated moderately with PALBI and weakly with APRI. LKER may be a more practical tool due to its quantitative nature. Despite limitations, combining imaging and lab-based scores could enhance liver function assessment.
Conclusion: FLIS and LKER can validate, rather than predict or exclude, liver dysfunction in CLD and LC.
Non-infectious Hepatic Cystic Lesions: A Narrative Review
Authors: Adem Ceri, Andreas Busse-Coté, Delphine Weil, Eric Delabrousse, Vincent Di Martino and Paul Calame
Hepatic cysts are commonly encountered in clinical practice, presenting a wide spectrum of lesions that vary in terms of pathogenesis, clinical presentation, imaging characteristics, and potential severity. While benign hepatic cysts are the most prevalent, other cystic lesions, which can sometimes mimic simple cysts, may be malignant and pose significant clinical challenges. Simple biliary cysts, the most common type, are typically diagnosed using ultrasound. However, for complex lesions, advanced imaging modalities such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) are crucial. In ambiguous cases, additional diagnostic tools such as contrast-enhanced ultrasound (CEUS), Positron Emission Tomography (PET), cyst fluid aspiration, or biopsy may be necessary. Understanding the nuances of these cystic lesions is crucial for accurate diagnosis and management, as it distinguishes between benign and potentially life-threatening conditions and informs the decision on appropriate treatment strategies. Non-parasitic cysts encompass a range of conditions, including simple biliary cysts, hamartomas, Caroli disease, polycystic liver disease, mucinous cystic neoplasms, intraductal papillary mucinous neoplasms, ciliated hepatic foregut cysts, and peribiliary cysts. Each type has specific clinical and imaging features that guide non-invasive diagnosis. Treatment approaches vary, with conservative management for asymptomatic lesions and more invasive techniques, such as surgery or percutaneous interventions, reserved for symptomatic cases or those with complications. This review focuses on non-parasitic cystic lesions, exploring their pathophysiology, epidemiology, risk of malignant transformation, treatment options, and key findings from imaging diagnosis.
SqueezeViX-Net with SOAE: A Prevailing Deep Learning Framework for Accurate Pneumonia Classification using X-Ray and CT Imaging Modalities
Authors: N. Kavitha and B. Anand
Introduction: Pneumonia is a dangerous respiratory illness that leads to severe health problems and increased mortality when it is not properly diagnosed, particularly among at-risk populations. Appropriate treatment requires correct identification of the pneumonia type together with a swift and accurate diagnosis.
Materials and Methods: This paper presents SqueezeViX-Net, a deep learning framework specifically designed for pneumonia classification. The model benefits from a Self-Optimized Adaptive Enhancement (SOAE) method, which programmatically adjusts the dropout rate during training. This adaptive dropout adjustment mechanism improves model suitability and stability. SqueezeViX-Net is evaluated on extensive X-ray and CT image collections derived from publicly accessible Kaggle repositories.
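A minimal PyTorch sketch of the general idea of adjusting dropout during training; the decay-on-plateau rule below is an illustrative assumption, not the SOAE method itself:

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, p_drop=0.5):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4))
        self.dropout = nn.Dropout(p_drop)
        self.fc = nn.Linear(16 * 4 * 4, 3)   # e.g., three pneumonia classes

    def forward(self, x):
        return self.fc(self.dropout(self.features(x).flatten(1)))

def update_dropout(model, new_p):
    """Programmatically change the dropout probability between epochs."""
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.p = float(new_p)

model = SmallCNN()
prev_val_loss = float("inf")
for epoch in range(5):
    # ... training and validation loops would go here ...
    val_loss = 1.0 / (epoch + 1)                  # placeholder validation loss
    if val_loss >= prev_val_loss:                 # plateau -> relax regularization (illustrative rule)
        update_dropout(model, max(0.1, model.dropout.p - 0.05))
    prev_val_loss = val_loss
print("final dropout p:", model.dropout.p)
```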
Results: SqueezeViX-Net outperformed several established deep learning architectures, including DenseNet-121, ResNet-152V2, and EfficientNet-B7, achieving higher accuracy, precision, recall, and F1-score.
Discussion: The model was validated on a range of pneumonia datasets comprising both CT and X-ray images, demonstrating its ability to handle modality variations.
Conclusion: SqueezeViX-Net integrates SOAE to provide an advanced framework for the specific identification of pneumonia in clinical use. Through its dynamic learning capabilities and high precision, the model shows excellent diagnostic potential for medical staff, contributing to improved patient treatment outcomes.
MBLEformer: Multi-Scale Bidirectional Lesion Enhancement Transformer for Cervical Cancer Image Segmentation
Background: Accurate segmentation of lesion areas from Lugol's iodine staining images is crucial for screening pre-cancerous cervical lesions. However, in underdeveloped regions lacking skilled clinicians, this method may lead to misdiagnosis and missed diagnoses. In recent years, deep learning methods have been widely applied to assist in medical image segmentation.
Objective: This study aims to improve the accuracy of cervical cancer lesion segmentation by addressing the limitations of Convolutional Neural Networks (CNNs) and attention mechanisms in capturing global features and refining upsampling details.
Methods: This paper presents a Multi-Scale Bidirectional Lesion Enhancement Network, named MBLEformer, which employs a Swin Transformer encoder to extract image features at multiple stages and uses a multi-scale attention mechanism to capture semantic features from different perspectives. Additionally, a bidirectional lesion enhancement upsampling strategy is introduced to refine the edge details of lesion areas.
Results: Experimental results demonstrate that the proposed model exhibits superior segmentation performance on a proprietary cervical cancer colposcopic dataset, outperforming other medical image segmentation methods with a mean Intersection over Union (mIoU) of 82.5% and accuracy and specificity of 94.9% and 83.6%, respectively.
Conclusion: MBLEformer significantly improves the accuracy of lesion segmentation in iodine-stained cervical cancer images, with the potential to enhance the efficiency and accuracy of pre-cancerous lesion diagnosis and help address the issue of imbalanced medical resources.
Multi-scale based Network and Adaptive EfficientnetB7 with ASPP: Analysis of Novel Brain Tumor Segmentation and Classification
Authors: Sheetal Vijay Kulkarni and S. Poornapushpakala
Introduction: Medical imaging has undergone significant advancements with the integration of deep learning techniques, leading to enhanced accuracy in image analysis. These methods autonomously extract relevant features from medical images, thereby improving the detection and classification of various diseases. Among imaging modalities, Magnetic Resonance Imaging (MRI) is particularly valuable due to its high contrast resolution, which enables the differentiation of soft tissues and makes it indispensable in the diagnosis of brain disorders. Accurate classification of brain tumors is crucial for diagnosing many neurological conditions. However, conventional classification techniques are often limited by high computational complexity and suboptimal accuracy. Motivated by these issues, an innovative model is proposed in this work for segmenting and classifying brain tumors. The research aims to develop a robust and efficient deep learning framework that can assist clinicians in making precise and early diagnoses, ultimately leading to more effective treatment planning. The proposed methodology begins with the acquisition of MRI images from standardized medical imaging databases.
Methods: The abnormal regions of the images are then segmented using the Multiscale Bilateral Awareness Network (MBANet), which incorporates multi-scale operations to enhance feature representation and image quality. The segmented images are subsequently processed by a novel classification architecture, termed Region Vision Transformer-based Adaptive EfficientNetB7 with Atrous Spatial Pyramid Pooling (RVAEB7-ASPP). To optimize the performance of the classification model, hyperparameters are fine-tuned using the Modified Random Parameter-based Hippopotamus Optimization Algorithm (MRP-HOA).
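Atrous Spatial Pyramid Pooling is a standard building block; a minimal PyTorch sketch of a generic ASPP module of the kind the classification head builds on, where the dilation rates, channel sizes, and feature-map shape are illustrative assumptions rather than the paper's configuration:

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Parallel atrous (dilated) convolutions at several rates, concatenated and fused."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3 if r > 1 else 1,
                          padding=r if r > 1 else 0, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
            for r in rates
        ])
        self.project = nn.Sequential(
            nn.Conv2d(out_ch * len(rates), out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 2560, 16, 16)      # e.g., an EfficientNet-B7 top feature map (illustrative shape)
print(ASPP(2560, 256)(x).shape)       # -> torch.Size([1, 256, 16, 16])
```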
Results: The model's effectiveness is verified through a comprehensive experimental evaluation using various performance metrics and comparisons with current state-of-the-art methods. The proposed MRP-HOA-RVAEB7-ASPP model achieves a classification accuracy of 98.2%, significantly outperforming conventional approaches in brain tumor classification tasks.
Discussion: MBANet effectively performs brain tumor segmentation, while the RVAEB7-ASPP model provides reliable classification. The integration within the MRP-HOA-RVAEB7-ASPP model optimizes feature extraction and parameter tuning, leading to improved accuracy and robustness.
Conclusion: The integration of advanced segmentation, adaptive feature extraction, and optimal parameter tuning enhances the reliability and accuracy of the model. This framework provides a more effective and trustworthy solution for the early detection and clinical assessment of brain tumors, leading to improved patient outcomes through timely intervention.
Mapping the Evolution of Thyroid Ultrasound Research: A 30-year Bibliometric Analysis
Authors: Ting Jiang, Chuansheng Yang, Lv Wu, Xiaofen Li and Jun Zhang
Introduction: Thyroid ultrasound has emerged as a critical diagnostic modality, attracting substantial research attention. This bibliometric analysis systematically maps the 30-year evolution of thyroid ultrasound research to identify developmental trends, research hotspots, and emerging frontiers.
Methods: English-language articles and reviews (1994-2023) were extracted from the Web of Science Core Collection. Bibliometric analysis was performed using VOSviewer and CiteSpace to examine collaborative networks among countries, institutions, and authors, reference timeline visualization, and keyword burst detection.
Results: A total of 8,489 documents were included for analysis, showing an overall upward trend in research publications. China, the United States, and Italy were the most productive countries, while the United States, Italy, and South Korea had the greatest influence. The journal Thyroid had the highest impact factor. The keywords with the greatest burst strength were “disorders”, “thyroid volume”, and “association guidelines”. The reference timeline view showed that deep learning, ultrasound-based risk stratification systems, and radiofrequency ablation were the latest reference clusters.
Discussion: Three dominant themes emerged: the ultrasound characteristics of thyroid disorders, the application of new techniques, and the assessment of the risk of malignancy of thyroid nodules. Applications of deep learning and the development and refinement of related guidelines such as TI-RADS are the present focus of research.
Conclusion: The specific application efficacy and improvement of TI-RADS, together with the optimization of deep learning algorithms and their clinical applicability, will be the focus of subsequent research.
Multimodal Imaging and Clinical Implications of Collagenous Fibroma in the Juxtaforaminal Premaxillary Fat Pad Mimicking Locoregional Tumor Recurrence: A Case Report and Literature Review
Authors: Jeong Pyo Lee, Hye Jin Baek, Ki-Jong Park, Jin Pyeong Kim, Hyo Jung An and Eun Cho
Background: Collagenous fibroma (CF), or desmoplastic fibroblastoma, is a rare benign tumor with few reported cases involving the facial region. Its presence in uncommon sites can pose diagnostic challenges due to overlapping clinical and radiologic features with malignant neoplasms.
Case Presentation: Herein, we report a case of a 48-year-old female with CF in the juxtaforaminal premaxillary fat pad, presenting with neuralgic pain extending to the ipsilateral upper gingiva. The patient had a history of adenoid cystic carcinoma (AdCC) of the right nasolabial fold, which was treated surgically four years prior. A multimodal radiologic evaluation using ultrasonography, CT, and MRI revealed a soft tissue lesion in the premaxillary region, raising suspicion of recurrent AdCC. However, histopathologic examination of the surgical excision confirmed the diagnosis of CF.
Conclusion: This case highlights the importance of integrating clinical history, imaging findings, and pathological analysis for accurate diagnosis and appropriate management.
Preliminary Study on the Evaluation Value of Extracellular Volume Fraction in the Pathological Grading of Lung Invasive Adenocarcinoma
Authors: Bin Nan, Yukun Pan, Yinghui Ge, Minghua Sun, Jin Cai and Xiaojing Kan
Introduction: This study aims to evaluate the diagnostic value of the extracellular volume fraction (ECV) and spectral CT parameters in assessing the pathological grading of lung invasive adenocarcinoma (IAC) presenting as solid or subsolid nodules.
Methods: Patients pathologically confirmed as having IAC with solid or subsolid pulmonary nodules at our hospital from March 2023 to November 2024 were retrospectively collected. Relevant data were recorded, and the patients were divided into two groups: intermediate/high differentiation and low differentiation. The following parameters were compared between the two groups: arterial phase iodine concentration (ICA), arterial phase normalized iodine concentration (NICA), arterial phase normalized effective atomic number (nZeffA), arterial phase extracellular volume fraction (ECVA), venous phase iodine concentration (ICV), venous phase normalized iodine concentration (NICV), venous phase normalized effective atomic number (nZeffV), and venous phase extracellular volume fraction (ECVV). Parameters with statistical significance were evaluated for diagnostic performance using Receiver Operating Characteristic (ROC) curves.
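For context, the spectral-CT extracellular volume fraction is conventionally derived from iodine concentrations and hematocrit. A minimal sketch assuming the standard formula ECV = (1 − Hct) × (iodine_lesion / iodine_blood-pool), which may differ in detail from the authors' implementation, followed by an ROC evaluation with scikit-learn on synthetic values:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def ecv_fraction(iodine_lesion, iodine_blood_pool, hematocrit):
    """Spectral-CT ECV: (1 - Hct) * (lesion iodine concentration / blood-pool iodine concentration)."""
    return (1.0 - hematocrit) * (iodine_lesion / iodine_blood_pool)

# Illustrative per-patient values (not study data)
rng = np.random.default_rng(2)
ecv_v = ecv_fraction(rng.uniform(1.0, 3.0, 61), rng.uniform(4.0, 6.0, 61), rng.uniform(0.35, 0.45, 61))
grade = rng.integers(0, 2, 61)           # 0 = low, 1 = intermediate/high differentiation

auc = roc_auc_score(grade, ecv_v)
fpr, tpr, thresholds = roc_curve(grade, ecv_v)
youden_idx = np.argmax(tpr - fpr)        # optimal cutoff by the Youden index
print(f"AUC = {auc:.3f}, cutoff = {thresholds[youden_idx]:.3f}")
```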
Results: A total of 61 patients were included, comprising 40 in the intermediate/high differentiation group and 21 in the low differentiation group. The intermediate/high differentiation group had higher values of ECVA, NICA, ECVV, ICV, NICV, and nZeffV than the low differentiation group (P < 0.05). The AUC values for these parameters were 0.679, 0.620, 0.757, 0.688, 0.724, and 0.693, respectively. Among these, ECVV had the largest AUC, with a sensitivity and specificity of 72.5% and 71.4%, respectively. Binary logistic regression analysis identified five imaging features: the maximum diameter of the lesion, bronchus encapsulated air sign, lobulation sign, spiculation sign, and pleural traction sign. Integrating these imaging features with ECVV yielded a model with enhanced diagnostic performance, characterized by an AUC of 0.886, a sensitivity of 85.7%, and a specificity of 80.0%.
Discussion: ECVV outperforms other spectral parameters in differentiating IAC grades, reflecting changes in the tumor microenvironment. Combining ECVV with imaging features enhances diagnostic accuracy, though the study's single-center design and small sample size limit generalizability.
Conclusion: The extracellular volume fraction can provide additional information for the pathological grading of invasive adenocarcinoma of the lung. Compared with other spectral parameters, ECVV exhibits the highest diagnostic performance, and its combination with conventional imaging features can further enhance diagnostic accuracy.
Effective Feature Extraction for Knee Osteoarthritis Detection on X-ray Images using Convolutional Neural Networks
Authors: Lei Yu, Shuai Zhang, Xueting Zhang, Heng Wang, Mengnan You and Yimin Jiang
Background: Knee osteoarthritis (KOA) is a degenerative joint disease commonly assessed on X-ray images using the Kellgren-Lawrence (KL) criteria. Although the KL standard exists, its ambiguity often causes patients to misunderstand their condition, leading to overtreatment or delayed treatment and challenges in guiding precise surgical decisions. Moreover, data-driven approaches have been impeded by the low resolution and inconsistent feature distribution of knee X-ray images, and imbalances between positive and negative samples further degrade detection accuracy.
Objective: The objective of this study was to develop a deep learning-based model, Task-aligned Path Aggregation Feature Fusion for Knee Osteoarthritis Detection (TPAFFKnee), to improve KOA detection accuracy by addressing the limitations of traditional methods. More accurate detection could support appropriate treatment for patients and more precise surgical decisions by physicians.
Methods: We propose the TPAFFKnee model based on the EfficientNetB4 network, which introduces a path aggregation network for better feature extraction and replaces the Fully Convolutional Network (FCN) head with a task-aligned detection head. Additionally, the original loss function is replaced with the Efficient Intersection over Union (EIoU) loss to address the imbalance between positive and negative samples.
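A minimal PyTorch sketch of the generic EIoU loss for boxes in (x1, y1, x2, y2) format; this is the standard formulation, not the exact implementation used in TPAFFKnee:

```python
import torch

def eiou_loss(pred, target, eps=1e-7):
    """EIoU: 1 - IoU + center-distance term + width and height difference terms,
    each normalized by the smallest enclosing box."""
    # Intersection and union
    x1 = torch.max(pred[:, 0], target[:, 0]); y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2]); y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Smallest enclosing box
    ex1 = torch.min(pred[:, 0], target[:, 0]); ey1 = torch.min(pred[:, 1], target[:, 1])
    ex2 = torch.max(pred[:, 2], target[:, 2]); ey2 = torch.max(pred[:, 3], target[:, 3])
    cw, ch = ex2 - ex1, ey2 - ey1

    # Center distance, width and height differences
    pcx, pcy = (pred[:, 0] + pred[:, 2]) / 2, (pred[:, 1] + pred[:, 3]) / 2
    tcx, tcy = (target[:, 0] + target[:, 2]) / 2, (target[:, 1] + target[:, 3]) / 2
    rho2 = (pcx - tcx) ** 2 + (pcy - tcy) ** 2
    dw2 = ((pred[:, 2] - pred[:, 0]) - (target[:, 2] - target[:, 0])) ** 2
    dh2 = ((pred[:, 3] - pred[:, 1]) - (target[:, 3] - target[:, 1])) ** 2

    return (1 - iou + rho2 / (cw ** 2 + ch ** 2 + eps)
            + dw2 / (cw ** 2 + eps) + dh2 / (ch ** 2 + eps)).mean()

pred = torch.tensor([[10., 10., 60., 60.]])
target = torch.tensor([[12., 15., 62., 58.]])
print(eiou_loss(pred, target))
```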
Results: The results show that the model can accurately detect KOA categories and lesion locations according to the KL classification criteria, with a mean average precision (mAP) of 93% on the Mendeley KOA dataset of 1650 knee osteoarthritis X-ray images from several hospitals. The mAP values for the K2, K3, and K4 categories were 98.6%, 98.5%, and 99.6%, respectively. Compared with Faster R-CNN, SSD, RetinaNet, EfficientNetB4, and YOLOX, the proposed algorithm improved detection mAP by 14.3%, 12.4%, 15.3%, 22.7%, and 4.3%, respectively.
Conclusion: This study emphasizes the value of the EfficientNetB4 network in KOA detection. The TPAFFKnee model provides an effective solution for improving the accuracy of KOA detection and offers a promising approach for standardized KL classification in medical applications. Future research can integrate more clinical data while improving the overall landscape of healthcare delivery through data-driven automation solutions.
DWI-based Biologically Interpretable Radiomic Nomogram for Predicting 1-year Biochemical Recurrence after Radical Prostatectomy: A Deep Learning, Multicenter Study
Authors: Xiangke Niu, Yongjie Li, Lei Wang and Guohui Xu
Introduction: Biochemical recurrence (BCR) following radical prostatectomy (RP) for prostate cancer (PCa) is not rare, and early detection and management of BCR after surgery has been reported to improve survival in PCa. This study aimed to develop a nomogram integrating deep learning-based radiomic features and clinical parameters to predict 1-year BCR after RP and to examine the associations between radiomic scores and the tumor microenvironment (TME).
Methods: In this retrospective multicenter study, two independent cohorts of patients (n = 349) who underwent RP after multiparametric magnetic resonance imaging (mpMRI) between January 2015 and January 2022 were included in the analysis. Single-cell RNA sequencing data from four prospectively enrolled participants were used to investigate the radiomic score-related TME. A 3D U-Net was trained and optimized for prostate cancer segmentation on diffusion-weighted imaging, and radiomic features of the target lesion were extracted. Predictive nomograms were developed via multivariate Cox proportional hazards regression analysis and assessed for discrimination, calibration, and clinical usefulness.
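A minimal sketch of the multivariate Cox modelling step using the lifelines package; the covariate names and synthetic cohort below are placeholders, not the study's actual predictors or data:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Illustrative cohort: time to biochemical recurrence (months), event indicator,
# a radiomic score, and two clinical covariates (all synthetic placeholders).
rng = np.random.default_rng(3)
n = 349
df = pd.DataFrame({
    "time_months": rng.exponential(24, n).clip(1, 60),
    "bcr_event": rng.integers(0, 2, n),
    "rad_score": rng.normal(0, 1, n),
    "psa": rng.uniform(4, 40, n),
    "gleason_grade_group": rng.integers(1, 6, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="bcr_event")
cph.print_summary()
print("Concordance index:", cph.concordance_index_)
```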
Results: In the development cohort, the clinical-radiomic nomogram had an AUC of 0.892 (95% confidence interval: 0.783-0.939), considerably greater than those of the radiomic signature and the clinical model. The Hosmer–Lemeshow test demonstrated that the clinical-radiomic model performed well in both the development (P = 0.461) and validation (P = 0.722) cohorts.
Discussion: Decision curve analysis revealed that the clinical-radiomic nomogram displayed better clinical predictive usefulness than the clinical or radiomic signature alone in both cohorts. Radiomic scores were associated with a significant difference in TME patterns.
Conclusion: Our study demonstrated the feasibility of a DWI-based clinical-radiomic nomogram combined with deep learning for the prediction of 1-year BCR. The findings revealed that the radiomic score was associated with a distinctive tumor microenvironment.
The Long-term Volumetric and Radiological Changes of COVID-19 on Lung Anatomy: A Quantitative Assessment
Authors: A. Savranlar, M. Öztürk, H. Sipahioğlu, Y. Savranlar and M. Tahta Şahingöz
Objective: This study aimed to assess the long-term volumetric and radiological effects of COVID-19 on lung anatomy. Disease severity was evaluated using radiological scoring, and lung volume measurements were performed with 3D Slicer software.
Methods: A retrospective analysis was conducted on 127 patients diagnosed with COVID-19 between April 2020 and December 2023. Initial and follow-up chest CT scans were reviewed to analyze lung volumes and radiological findings. Lung lobes were segmented using 3D Slicer software to measure volumes. Severity scores were assigned based on the Chung system, and statistical methods, including logistic regression and Wilcoxon signed-rank tests, were used to analyze outcomes.
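A minimal sketch of the volumetric and statistical steps: computing a volume from a binary segmentation mask and voxel spacing, then comparing paired initial and follow-up volumes with a Wilcoxon signed-rank test; the mask, spacing, and cohort values are illustrative, not study data:

```python
import numpy as np
from scipy.stats import wilcoxon

def lung_volume_ml(mask, spacing_mm):
    """Volume of a binary segmentation mask given voxel spacing (mm) -> millilitres."""
    voxel_volume_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_volume_mm3 / 1000.0

# Illustrative: a toy mask and paired volumes for 20 patients
mask = np.zeros((50, 512, 512), dtype=bool)
mask[10:40, 100:300, 100:300] = True
print(f"Toy lobe volume: {lung_volume_ml(mask, (5.0, 0.7, 0.7)):.0f} mL")

rng = np.random.default_rng(4)
initial = rng.normal(1800, 300, 20)            # baseline left-lung volumes (mL)
followup = initial + rng.normal(120, 80, 20)   # follow-up volumes (mL)
stat, p = wilcoxon(initial, followup)
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.3f}")
```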
Results: Follow-up CT scans showed significant improvements in lung volumes and severity scores. The left lung total volume increased significantly (p = 0.038), while the right lung total volume and COVID-19-affected lung volumes demonstrated non-significant improvements. Severity scores and the number of affected lobes decreased significantly (p < 0.05). Correlation analyses revealed that age negatively influenced lung volume recovery (r = -0.177, p = 0.047). Persistent pathological findings, such as interstitial thickening and fibrotic bands, were observed.
Conclusion: COVID-19 induces lasting changes in lung structure, particularly in elderly and severely affected patients. Long-term follow-up and the consideration of antifibrotic therapies are essential to manage post-COVID-19 complications effectively. A multidisciplinary approach is recommended to support patient recovery and minimize healthcare burdens.
CT-based Radiomics of Intratumoral and Peritumoral Regions to Predict the Recurrence Risk in Patients with Non-muscle-invasive Bladder Cancer within Two Years after TURBT
Authors: Ting Cao, Na Li, Chuanchao Guo, Hepeng Zhang, Lihua Chen, Ke Wu, Lisha Liang, Ximing Wang and Wen Shen
Background: Predicting the recurrence risk of NMIBC after TURBT is crucial for individualized clinical treatment.
Objective: The objective of this study is to evaluate the ability of radiomic feature analysis of intratumoral and peritumoral regions on computed tomography (CT) imaging to predict recurrence in non-muscle-invasive bladder cancer (NMIBC) patients who underwent transurethral resection of bladder tumor (TURBT).
Methods: A total of 233 patients with NMIBC who underwent TURBT were retrospectively analyzed. From the intratumoral and peritumoral regions of the venous phase images, 1316 radiomics features were extracted. Feature selection was used to identify the top recurrence-associated features within the training cohort. Three Random Forest (RF) models were constructed to predict recurrence for a given patient: Model 1 was based on radiomics features from the intratumoral region, Model 2 combined intratumoral and peritumoral regions, and Model 3 combined the radiomics features from Model 2 with clinical factors. The three models were then independently tested on internal and external cohorts, and their performance was evaluated. We also employed the bootstrap method on the internal cohort to further validate model performance.
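A minimal sketch of the model-building and bootstrap-validation pattern described above, using scikit-learn; the feature matrix and labels are synthetic placeholders standing in for the selected radiomic and clinical features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.normal(size=(233, 50))      # selected radiomic (+ clinical) features, illustrative
y = rng.integers(0, 2, size=233)    # recurrence within two years (0/1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
model = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("Test AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

# Bootstrap the test-set AUC to gauge stability
aucs = []
idx = np.arange(len(y_te))
for _ in range(200):
    b = rng.choice(idx, size=len(idx), replace=True)
    if len(np.unique(y_te[b])) < 2:      # AUC needs both classes present
        continue
    aucs.append(roc_auc_score(y_te[b], model.predict_proba(X_te[b])[:, 1]))
print("Bootstrap AUC 95% CI:", np.percentile(aucs, [2.5, 97.5]))
```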
Results: By combining intratumoral and peritumoral regions, Model 2 yielded a higher area under the receiver operating characteristic curve (AUC) than Model 1, with an AUC of 0.826 in the training cohort. After adding clinical factors, the predictive performance of Model 3 for postoperative recurrence of NMIBC improved further; the AUCs of the training, internal, and external validation cohorts of Model 3 were 0.860 (95% CI: 0.829-0.954), 0.829 (0.812-0.863), and 0.805 (0.652-0.840), respectively (all p > 0.05). The bootstrap value of Model 3 on the internal cohort was 0.852. Model 3 stratified patients into high- and low-risk groups with significantly different recurrence-free survival (RFS) (p < 0.001).
Conclusion: Radiomic features derived from intratumoral regions can predict the 2-year recurrence risk following TURBT in patients with NMIBC. The predictive performance is further enhanced when combined with radiomic features from peritumoral regions and clinical risk factors.
RNN-AHF Framework: Enhancing Multi-focal Nature of Hypoxic Ischemic Encephalopathy Lesion Region in MRI Image Using Optimized Rough Neural Network Weight and Anti-Homomorphic Filter
Authors: M. Thangeswari, R. Muthucumaraswamy, K. Anitha and N.R. Shanker
Introduction: Image enhancement of the Hypoxic-Ischemic Encephalopathy (HIE) lesion region in neonatal brain MR images is a challenging task due to the diffuse (i.e., multi-focal) nature, small size, and low contrast of the lesions. Classifying the stages of HIE is also difficult because of the unclear boundaries and edges of the lesions, which are dispersed throughout the brain. These unclear boundaries and edges arise from chemical shifts, partial volume artifacts, and motion artifacts; in addition, voxels may reflect signals from adjacent tissues. Existing algorithms perform poorly in HIE lesion enhancement because of these artifacts, voxel signal contamination, and the diffuse nature of the lesions.
Methods: In this paper, we propose a Rough Neural Network and Anti-Homomorphic Filter (RNN-AHF) framework for enhancement of the HIE lesion region.
Results: The RNN-AHF framework reduces the pixel dimensionality of the feature space, eliminates unnecessary pixels, and preserves the pixels essential for lesion enhancement.
Discussion: The RNN efficiently learns and identifies pixel patterns and facilitates adaptive enhancement based on different weights in the neural network. The proposed RNN-AHF framework operates with optimized neural weights and an optimized training function. The hybridization of optimized weights and the training function enhances the lesion region with high contrast while preserving boundaries and edges.
Conclusion: The proposed RNN-AHF framework achieves lesion image enhancement and classification accuracy of approximately 93.5%, outperforming traditional algorithms.
Initial Recurrence Risk Stratification of Papillary Thyroid Cancer based on Intratumoral and Peritumoral Dual Energy CT Radiomics
Authors: Yan Zhou, Yongkang Xu, Yan Si, Feiyun Wu and Xiaoquan Xu
Introduction: This study aims to evaluate the potential of Dual-Energy Computed Tomography (DECT)-based radiomics for preoperative risk stratification and the prediction of initial recurrence in Papillary Thyroid Carcinoma (PTC).
Methods: The retrospective analysis included 236 PTC cases (165 in the training cohort, 71 in the validation cohort) collected between July 2020 and June 2021. Tumor segmentation was carried out for both intratumoral and peritumoral areas (1 mm inner and outer to the tumor boundary). Three region-specific rad-scores were developed: rad-score (VOIwhole), rad-score (VOIouter layer), and rad-score (VOIinner layer). Three radiomics models incorporating these rad-scores and additional risk factors were compared with a clinical model alone. The optimal radiomics model was presented as a nomogram.
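A minimal sketch of deriving 1-mm inner and outer peritumoral shells from a tumor mask by morphological erosion and dilation with scipy; the voxel spacing and iteration counts are illustrative and assume roughly isotropic 1-mm voxels, which may differ from the study's segmentation workflow:

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def peritumoral_shells(tumor_mask, voxel_size_mm=1.0, margin_mm=1.0):
    """Return (inner_shell, outer_shell): voxels within `margin_mm` inside and
    outside the tumor boundary, using isotropic dilation/erosion."""
    n_iter = max(1, int(round(margin_mm / voxel_size_mm)))
    dilated = binary_dilation(tumor_mask, iterations=n_iter)
    eroded = binary_erosion(tumor_mask, iterations=n_iter)
    outer_shell = dilated & ~tumor_mask     # ~1 mm outside the boundary
    inner_shell = tumor_mask & ~eroded      # ~1 mm inside the boundary
    return inner_shell, outer_shell

mask = np.zeros((40, 40, 40), dtype=bool)
mask[15:25, 15:25, 15:25] = True
inner, outer = peritumoral_shells(mask)
print(inner.sum(), outer.sum())
```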
Results: Rad-scores from the peritumoral regions (VOIouter layer and VOIinner layer) outperformed the intratumoral rad-score (VOIwhole). All radiomics models surpassed the clinical model, with the peritumoral-based models (radiomics models 2 and 3) outperforming the intratumoral-based model (radiomics model 1). The top-performing nomogram, which included tumor size, tumor site, and rad-score (VOIinner layer), achieved an Area Under the Curve (AUC) of 0.877 in the training cohort and 0.876 in the validation cohort. The nomogram demonstrated good calibration, clinical utility, and stability.
Discussion: DECT-based intratumoral and peritumoral radiomics advance the prediction of initial recurrence risk in PTC, providing clinical radiology with precise predictive tools. Further work is needed to refine the model and enhance its clinical application.
Conclusion: Radiomics analysis of DECT, particularly of peritumoral regions, offers valuable predictive information for assessing the risk of initial recurrence in PTC.
Automated Brain Tumor Segmentation using Hybrid YOLO and SAM
Authors: Paul Jeyaraj M and Senthil Kumar M
Introduction: Early-stage brain tumor detection is critical for timely diagnosis and effective treatment. We propose a hybrid deep learning method in which a Convolutional Neural Network (CNN) is integrated with YOLO (You Only Look Once) and the Segment Anything Model (SAM) for diagnosing tumors.
Methods: We develop a hybrid deep learning framework combining a CNN with YOLOv11 for real-time object detection and SAM for precise segmentation. The CNN backbone is enhanced with deeper convolutional layers to enable robust feature extraction; YOLOv11 localizes tumor regions, and SAM refines the tumor boundaries through detailed mask generation.
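A minimal sketch of the detect-then-segment pattern using the ultralytics YOLO API and the segment-anything predictor; the weight filenames (a tumor-tuned YOLO checkpoint and a SAM ViT-B checkpoint) and the input image path are assumptions, and this is not the authors' exact pipeline:

```python
import cv2
import numpy as np
from ultralytics import YOLO
from segment_anything import sam_model_registry, SamPredictor

# Assumed checkpoints: a YOLO model fine-tuned for tumor detection and a SAM ViT-B checkpoint.
detector = YOLO("yolo11n_tumor.pt")                   # hypothetical fine-tuned weights
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image_bgr = cv2.imread("brain_mri_slice.png")         # hypothetical input slice
image_rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)

# Stage 1: YOLO localizes candidate tumor regions as bounding boxes.
detections = detector(image_bgr)[0]
predictor.set_image(image_rgb)

# Stage 2: each box prompts SAM, which returns a refined tumor mask.
for box in detections.boxes.xyxy.cpu().numpy():
    masks, scores, _ = predictor.predict(box=box.astype(np.int64), multimask_output=False)
    tumor_mask = masks[0]                             # boolean mask, same HxW as the image
    print("tumor area (px):", int(tumor_mask.sum()), "score:", float(scores[0]))
```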
Results: A dataset of 896 MRI brain images, including images of both tumors and healthy brains, was used for training, testing, and validating the model. The CNN-based YOLO+SAM method successfully segmented and diagnosed brain tumors.
Discussion: The proposed model achieves a precision of 94.2%, a recall of 95.6%, and an mAP50(B) score of 96.5%, demonstrating the effectiveness of the proposed approach for early-stage brain tumor diagnosis.
Conclusion: The validation is supported by a comprehensive ablation study. The robustness of the system makes it suitable for clinical deployment.
GRMA-Net: A novel two-stage 3D Semi-supervised Pneumonia Segmentation based on Dual Multiscale Uncertainty Estimation with Graph Reasoning in Chest CTs
Authors: Jianning Zang, Yu Gu, Lidong Yang, Baohua Zhang, Jing Wang, Xiaoqi Lu, Jianjun Li, Xin Liu, Ying Zhao, Dahua Yu, Siyuan Tang and Qun He
Introduction: This study aims to propose and evaluate a two-stage semi-supervised segmentation framework with dual multiscale uncertainty estimation and graph reasoning, addressing the challenges of obtaining high-precision pixel-level labels and effectively utilizing unlabeled data for accurate pneumonia lesion segmentation.
Methods: First, we design a guided supervised training strategy for modeling aleatoric uncertainty (AU) at dual scales, reducing the impact on segmentation performance of aleatoric uncertainty introduced by blurred lesions and their boundaries in the image. Second, we design a training strategy for multi-scale noisy pseudo-label correction to reduce the cognitive bias caused by unreliable predictions in the model. Finally, we design a new combination of fused feature interaction graph reasoning (FIGR) and attention modules, which enables the network to better capture image features in small infected regions.
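A minimal PyTorch sketch of one generic ingredient behind such strategies: masking out high-uncertainty voxels (by prediction entropy) when training on pseudo-labels from unlabeled scans. The entropy threshold and toy shapes are illustrative assumptions; this is not the paper's exact correction scheme:

```python
import torch
import torch.nn.functional as F

def confident_pseudo_labels(logits, entropy_threshold=0.5):
    """From teacher logits on unlabeled data, return pseudo-labels and a boolean mask
    keeping only voxels whose predictive entropy is below the threshold."""
    probs = torch.softmax(logits, dim=1)                       # (N, C, D, H, W)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)    # per-voxel entropy
    pseudo = probs.argmax(dim=1)
    keep = entropy < entropy_threshold
    return pseudo, keep

def masked_ce_loss(student_logits, pseudo, keep):
    """Cross-entropy on pseudo-labels, ignoring uncertain voxels."""
    loss = F.cross_entropy(student_logits, pseudo, reduction="none")
    return (loss * keep.float()).sum() / keep.float().sum().clamp(min=1.0)

teacher_logits = torch.randn(2, 2, 16, 64, 64)   # binary pneumonia segmentation, toy shapes
student_logits = torch.randn(2, 2, 16, 64, 64)
pseudo, keep = confident_pseudo_labels(teacher_logits)
print(masked_ce_loss(student_logits, pseudo, keep))
```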
Results: Our study was validated on the public MosMedData dataset. The proposed algorithm improves performance by 1.25%, 1.03%, 2.98%, and 0.59% on Dice, Jaccard, normalized surface dice (NSD), and average distance of boundaries (ADB), respectively, compared to the baseline model.
Discussion: Through two-stage multi-scale uncertainty estimation and modeling, our semi-supervised pneumonia segmentation framework significantly improves segmentation performance by leveraging unlabeled data and addressing uncertainties, offering clinical benefits in pneumonia diagnosis. Challenges remain in generalization and computational efficiency, which future work will target with GAN-based data synthesis and architecture optimization.
Conclusion: The proposed algorithm shows considerable value for clinical pneumonia segmentation practice.
Clinical and Imaging Data-based Machine Learning for Early Diagnosis of Bronchopulmonary Dysplasia: A Meta-analysis
Authors: Yilin Chen, Huixu Ma and Xi Liu
Introduction: This meta-analysis aimed to evaluate the diagnostic performance of Machine Learning (ML) models for early prediction of bronchopulmonary dysplasia (BPD) in preterm infants, addressing the need for timely risk stratification.
Methods: Systematic searches of PubMed, Embase, and other databases identified 9 eligible studies (12,755 infants). Data were extracted and pooled using bivariate generalized linear mixed models. Study quality was assessed via QUADAS-2.
Results: ML models demonstrated high accuracy (pooled sensitivity: 0.81, specificity: 0.85, AUC: 0.90). Multimodal models and ensemble algorithms (e.g., Random Forest) outperformed single-modality approaches. Models using data from the first 7 postnatal days achieved superior performance compared to those using data from day 28.
Discussion: ML enables ultra-early BPD prediction, preceding conventional diagnosis by weeks. Heterogeneity in data modalities and validation strategies highlights the need for standardized reporting.
Conclusion: ML-based BPD prediction shows promise for clinical translation but requires prospective validation and cost-effectiveness analysis.
2-D Stationary Wavelet Transform and 2-D Dual-Tree DWT for MRI Denoising
Authors: Mourad Talbi, Brahim Nasraoui and Arij Alfaidi
Introduction: Noise can be introduced into a digital image during acquisition, transmission, and processing. Consequently, the noise must be removed from the image before further processing. This study aims to denoise noisy images, including Magnetic Resonance Images (MRIs), using the proposed image denoising approach.
Methods: The proposed approach is based on the 2-D Stationary Wavelet Transform (SWT 2-D) and the 2-D Dual-Tree Discrete Wavelet Transform (DWT). In the first step, the 2-D Dual-Tree DWT is applied to the noisy image to obtain noisy wavelet coefficients. In the second step, each of these coefficients is denoised with an SWT 2-D-based denoising technique. The denoised image is finally obtained by applying the inverse 2-D Dual-Tree DWT to the denoised coefficients from the second step. The proposed approach is evaluated against four denoising techniques from the literature: thresholding in the SWT 2-D domain, a deep neural network-based technique, soft thresholding in the 2-D Dual-Tree DWT domain, and the Non-Local Means filter.
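A minimal sketch of the SWT-domain soft-thresholding building block using PyWavelets; the wavelet, decomposition level, and universal-threshold rule are illustrative choices rather than the authors' exact parameters, and the dual-tree stage is omitted:

```python
import numpy as np
import pywt

def swt2_soft_denoise(noisy, wavelet="db4", level=2):
    """Soft-threshold the detail subbands of a 2-D stationary wavelet transform.
    Image sides must be divisible by 2**level."""
    coeffs = pywt.swt2(noisy, wavelet, level=level)
    # Universal threshold with noise estimated from the finest diagonal subband
    sigma = np.median(np.abs(coeffs[-1][1][2])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(noisy.size))
    denoised_coeffs = [
        (cA, tuple(pywt.threshold(d, thr, mode="soft") for d in details))
        for cA, details in coeffs
    ]
    return pywt.iswt2(denoised_coeffs, wavelet)

rng = np.random.default_rng(6)
clean = np.zeros((256, 256)); clean[64:192, 64:192] = 1.0
noisy = clean + rng.normal(0, 0.1, clean.shape)
denoised = swt2_soft_denoise(noisy)
print("noisy MSE:", np.mean((noisy - clean) ** 2), "denoised MSE:", np.mean((denoised - clean) ** 2))
```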
Results: The proposed denoising approach and the four techniques mentioned above were applied to a number of noisy grayscale images and noisy MRIs, and the results were evaluated in terms of Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), Normalized Mean Square Error (NMSE), and Feature Similarity (FSIM). These results show that the proposed approach outperforms the other denoising techniques in our evaluation.
Discussion: Compared with the four reference techniques, the proposed approach yields the highest PSNR, SSIM, and FSIM values and the lowest NMSE values. Moreover, at noise levels σ = 10 and σ = 20, it removes the noise from the noisy images while introducing only slight distortions to the details of the original images; at σ = 30 and σ = 40, it removes most of the noise but introduces some distortions to the original images.
Conclusion: The performance of the proposed approach is demonstrated by comparing it with the four image denoising techniques listed above on noisy grayscale images and noisy MRIs, where it achieves the best PSNR, SSIM, NMSE, and FSIM results among the methods evaluated.