Current Medical Imaging - Current Issue
Volume 21, Issue 1, 2025
Application Value of Intelligent Quick Magnetic Resonance for Accelerating Brain MR Scanning and Improving Image Quality in Acute Ischemic Stroke
Authors: Bo Xue, Dengjie Duan, Junbang Feng, Zhenjun Zhao, Jinkun Tan, Jinrui Zhang, Chao Peng, Chang Li and Chuanming Li
Introduction: This study aimed to evaluate the effectiveness of intelligent quick magnetic resonance (IQMR) for accelerating brain MRI scanning and improving image quality in patients with acute ischemic stroke.
Methods: In this prospective study, 58 patients with acute ischemic stroke underwent head MRI examinations between July 2023 and January 2024, including diffusion-weighted imaging and both conventional and accelerated T1-weighted, T2-weighted, and T2 fluid-attenuated inversion recovery fat-saturated (T2-FLAIR) sequences. Accelerated sequences were processed using IQMR, producing IQMR-T1WI, IQMR-T2WI, and IQMR-T2-FLAIR images. Image quality was assessed qualitatively by two readers using a five-point Likert scale (1 = non-diagnostic to 5 = excellent). Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of lesions and surrounding tissues were quantitatively measured. The Alberta Stroke Program Early CT Score (ASPECTS) was used to evaluate ischemia severity.
Results: Total scan time was reduced from 5 minutes 9 seconds to 2 minutes 40 seconds, a reduction of 48.22%. IQMR significantly improved the SNR and CNR of the accelerated sequences (P < 0.05), reaching parity with the routine sequences (P > 0.05). Qualitative scores for lesion conspicuity and internal lesion display improved after IQMR processing (P < 0.05). ASPECTS showed no significant difference between IQMR and routine images (P = 0.79; ICC = 0.91–0.93).
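For orientation, a minimal sketch of the scan-time arithmetic reported above, together with the ROI-based SNR and CNR definitions commonly used in MRI quality studies; the exact ROI placement used by the authors is not specified in the abstract.

```python
# Minimal sketch: verify the reported scan-time reduction and show conventional
# ROI-based SNR/CNR definitions (illustrative, not the authors' exact protocol).
import numpy as np

routine_s = 5 * 60 + 9        # 5 min 9 s
accelerated_s = 2 * 60 + 40   # 2 min 40 s
reduction = (routine_s - accelerated_s) / routine_s
print(f"Scan-time reduction: {reduction:.2%}")  # ~48.22%

def snr(signal_roi, noise_roi):
    """SNR = mean signal intensity / standard deviation of background noise."""
    return np.mean(signal_roi) / np.std(noise_roi)

def cnr(lesion_roi, tissue_roi, noise_roi):
    """CNR = |mean lesion - mean surrounding tissue| / noise standard deviation."""
    return abs(np.mean(lesion_roi) - np.mean(tissue_roi)) / np.std(noise_roi)
```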
Discussion: IQMR addressed the slow scanning limitation of MRI without hardware modifications, enhancing diagnostic efficiency. The results align with recent advances in deep learning. Limitations included the small sample size and the exclusion of functional sequences.
Conclusion: IQMR could significantly reduce brain MRI scanning time and enhance image quality in patients with acute ischemic stroke.
Imaging of Carotid Blowout Syndrome in a Patient with Nasopharyngeal Carcinoma after Radiation Therapy
Authors: Yuanling Yang, Xinting Peng, Weiyi Liu, Lixuan Huang and Zisan Zeng
Introduction: This case highlights the rare but life-threatening complication of carotid blowout syndrome (CBS) after radiotherapy for nasopharyngeal carcinoma (NPC). CBS is characterized by rupture of the carotid artery, often occurring months or years after treatment. Early diagnosis and timely intervention are essential to improve clinical outcomes.
Case Presentation: A 45-year-old woman with NPC developed recurrent epistaxis 31 months after chemoradiotherapy. MRI and MRA ruled out tumor recurrence. High-resolution vessel wall imaging (VWI) revealed eccentric thickening, irregular enhancement, and a pseudoaneurysm in the lacerum segment of the left internal carotid artery (ICA), which was confirmed by CTA and DSA. The patient underwent embolization and remained stable at 1-year follow-up.
Conclusion: This case underscores the value of VWI in detecting CBS-related vascular changes. Imaging is crucial for early diagnosis and timely intervention in high-risk patients with NPC who have undergone radiotherapy.
Prevalence and Determinants of the Pool Sign in Lung Cancer Patients with Brain Metastasis
Authors: Ying Long, Zhao-ping Chen, Lin-hui Wang, Xue-qing Liao, Ming Guo and Zhong-qing Huang
Purpose: The pool sign, an emerging MRI biomarker for differentiating brain metastases (BM) from primary neoplasms, is primarily documented in case reports. Systematic data on its prevalence and determinants in BM among patients with lung cancer are lacking. This study aims to evaluate the occurrence of the pool sign and identify factors associated with its presence.
Materials and Methods: Between January 2017 and August 2024, data from 6,004 lung cancer patients were retrospectively extracted from the electronic health record system. The clinical and demographic characteristics, along with BM MRI features, were compared between the pool sign and non-pool sign groups using univariate and multivariate analyses.
Results: A total of 427 patients (81 women; mean age, 62.17 years) were enrolled. The pool sign was observed in 29 patients (6.8%). Inter-reader reliability for the pool sign was moderate to substantial (κ = 0.61–0.80), and intra-reader reliability was moderate (κ = 0.6). In univariate analysis, metastasis volume differed significantly between the pool sign and non-pool sign groups (median, 4.8 vs. 0.5; P < 0.0001), suggesting that the pool sign is more likely to occur in BMs with relatively larger tumor volumes. The prevalence of solid-cystic masses was also significantly higher in the pool sign group than in the non-pool sign group (79.3% vs. 44.5%; P = 0.0014). No statistically significant differences were found for the other examined variables. In multivariate analysis, increasing tumor volume (OR = 1.050, 95% CI 1.025–1.076, P < 0.001) and the presence of a solid-cystic mass (OR = 3.666, 95% CI 1.159–11.595, P = 0.027) were independently associated with a higher probability of the pool sign.
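As a quick illustration of how the reported odds ratios compound under the usual multiplicative logistic-regression model (the volume unit is not stated in the abstract, so the per-unit figure is taken at face value):

```python
# Illustrative only: the abstract reports OR = 1.050 per unit increase in tumor
# volume and OR = 3.666 for solid-cystic morphology; the volume unit itself is
# not specified, so this simply shows how per-unit odds ratios compound.
volume_or_per_unit = 1.050
solid_cystic_or = 3.666

# Odds multiplier for a lesion 10 volume units larger:
print(round(volume_or_per_unit ** 10, 2))   # ~1.63

# Combined multiplier for a lesion 10 units larger AND solid-cystic:
print(round(volume_or_per_unit ** 10 * solid_cystic_or, 2))  # ~5.97
```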
Conclusion: The pool sign occurs in 6.8% of BM in patients with lung cancer and is independently associated with larger lesion volume and solid-cystic morphology. Its diagnostic utility warrants further validation.
Identification of PD-L1 Expression in Resectable NSCLC using Interpretable Machine Learning Model Based on Spectral CT
Authors: Henan Lou, Shiyu Cui, Yinying Dong, Shunli Liu, Shaoke Li, Hongzheng Song and Xiaodan Zhao
Introduction: This study aimed to explore the value of a machine learning model based on spectral computed tomography (CT) for predicting programmed death ligand-1 (PD-L1) expression in resectable non-small cell lung cancer (NSCLC).
Methods: In this retrospective study, 131 patients with NSCLC who underwent preoperative spectral CT were enrolled and divided into a training cohort (n = 92) and a test cohort (n = 39). Clinical-imaging features and quantitative spectral CT parameters were analyzed. Variable selection was performed using univariate and multivariate logistic regression as well as LASSO regression. Eight machine learning algorithms were used to construct PD-L1 expression prediction models, which were evaluated using sensitivity, specificity, accuracy, F1 score, the area under the curve (AUC), calibration curves, and decision curve analysis (DCA).
Results: After variable selection, cavitation, ground-glass opacity, and the venous-phase CT40keV and CT70keV values were selected to develop the eight machine learning models. In the test cohort, the extreme gradient boosting (XGBoost) model achieved the best diagnostic performance (AUC = 0.887, sensitivity = 0.696, specificity = 0.937, accuracy = 0.795, F1 score = 0.800). DCA indicated favorable clinical utility, and the calibration curve demonstrated a high level of prediction accuracy.
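A minimal sketch of this kind of pipeline, assuming the four selected predictors are available as tabular columns; the column names, hyperparameters, and data below are illustrative placeholders, not the authors' setup or data.

```python
# Hedged sketch: XGBoost on the four selected predictors with a held-out test
# cohort. Feature values are synthetic placeholders, not study data.
import numpy as np
import pandas as pd
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 131
X = pd.DataFrame({
    "cavitation": rng.integers(0, 2, n),            # binary imaging feature
    "ground_glass_opacity": rng.integers(0, 2, n),  # binary imaging feature
    "ct40kev_venous": rng.normal(60, 15, n),        # CT value at 40 keV, venous phase
    "ct70kev_venous": rng.normal(45, 10, n),        # CT value at 70 keV, venous phase
})
y = rng.integers(0, 2, n)                           # PD-L1 positive vs. negative

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X_train, y_train)
prob = model.predict_proba(X_test)[:, 1]
print("Test AUC:", round(roc_auc_score(y_test, prob), 3))
```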
Discussion: Our study indicated that the machine learning model based on spectral CT could effectively evaluate PD-L1 expression in resectable NSCLC.
Conclusion: The XGBoost model, integrating spectral CT quantitative parameters and imaging features, demonstrated considerable potential in predicting PD-L1 expression.
Classifiers Combined with DenseNet Models for Lung Cancer Computed Tomography Image Classification: A Comparative Analysis
Authors: Menna Allah Mahmoud, Sijun Wu, Ruihua Su, Yanhua Wen, Shuya Liu and Yubao Guan
Introduction: Lung cancer remains a leading cause of cancer-related mortality worldwide. While deep learning approaches show promise in medical imaging, comprehensive comparisons of classifier combinations with DenseNet architectures for lung cancer classification are limited.
This study investigates the performance of different classifiers, Support Vector Machine (SVM), Artificial Neural Network (ANN), and Multi-Layer Perceptron (MLP), combined with DenseNet architectures for lung cancer classification using chest CT images.
Methods: A comparative analysis was conducted on 1,000 chest CT images comprising adenocarcinoma, large cell carcinoma, squamous cell carcinoma, and normal tissue samples. Three DenseNet variants (DenseNet-121, DenseNet-169, DenseNet-201) were combined with three classifiers: SVM, ANN, and MLP. Performance was evaluated using accuracy, area under the curve (AUC), precision, recall, specificity, and F1-score with an 80-20 train-test split.
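As a rough illustration of one such pairing, the sketch below uses a pretrained DenseNet-169 as a frozen feature extractor feeding an MLP classifier; the authors' exact training setup, input size, and hyperparameters are not given in the abstract, and the data here are synthetic placeholders.

```python
# Hedged sketch: DenseNet-169 (ImageNet weights, global average pooling) as a
# frozen feature extractor feeding an MLP classifier. Placeholder data only;
# the real study used 1,000 labeled chest CT images.
import numpy as np
import tensorflow as tf
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

extractor = tf.keras.applications.DenseNet169(
    include_top=False, weights="imagenet", pooling="avg")

rng = np.random.default_rng(0)
images = rng.uniform(0, 255, size=(40, 224, 224, 3)).astype("float32")  # placeholders
labels = np.repeat(np.arange(4), 10)  # 0=normal, 1=adeno, 2=large cell, 3=squamous

features = extractor.predict(
    tf.keras.applications.densenet.preprocess_input(images), verbose=0)

X_tr, X_te, y_tr, y_te = train_test_split(
    features, labels, test_size=0.2, stratify=labels, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("Test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```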
Results: The optimal model achieved 92% training accuracy and 83% test accuracy. Performance across models ranged from 81% to 92% for training accuracy and 73% to 83% for test accuracy. The most balanced combination demonstrated robust results (training: 85% accuracy, 0.99 AUC; test: 79% accuracy, 0.95 AUC) with minimal overfitting.
Discussion: Deep learning approaches effectively categorize chest CT scans for lung cancer detection. The 83% test accuracy of the MLP-DenseNet-169 combination represents a promising benchmark. Limitations include the retrospective design and a limited sample size from a single source.
Conclusion: This evaluation demonstrates the effectiveness of combining DenseNet architectures with different classifiers for lung cancer CT classification. The MLP-DenseNet-169 combination achieved optimal performance, while SVM-DenseNet-169 showed superior stability, providing valuable benchmarks for automated lung cancer detection systems.
PneumoNet: Deep Neural Network for Advanced Pneumonia Detection
Background: Advances in computational methods have substantially improved medical diagnosis, with machine learning models such as convolutional neural networks leading the way. This work introduces PneumoNet, a novel deep learning model designed for accurate pneumonia detection from chest X-ray images, a task that remains challenging in diagnostic practice and medical imaging and that requires reliable discrimination between normal and pneumonia-specific radiographic appearances. Existing approaches, from classical machine learning to early deep learning methods, report good performance but are often limited by accuracy, generalizability, and preprocessing issues, as well as by clinical constraints such as high false-positive rates and poor performance across diverse datasets.
Materials and Methods: PneumoNet is a convolutional neural network (CNN) architecture designed to improve the accuracy and precision of image classification. The model stacks several convolutional and pooling layers followed by fully connected dense layers to extract intricate features from X-ray images; its layer structure and training procedure are optimized to enhance feature extraction and classification performance. PneumoNet was trained and cross-validated on a curated dataset with a balanced representation of normal and pneumonia cases.
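A minimal sketch of a CNN of this general shape, stacked convolution and pooling blocks followed by dense layers for binary normal-vs-pneumonia classification; the actual PneumoNet layer counts, filter sizes, and training settings are not disclosed in the abstract, so everything below is illustrative.

```python
# Hedged sketch: a small conv/pool/dense CNN for binary chest X-ray classification.
# Layer counts and hyperparameters are illustrative, not the published PneumoNet.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_shape=(224, 224, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # pneumonia probability
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.Precision(),
                           tf.keras.metrics.Recall()])
    return model

model = build_cnn()
model.summary()
```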
Results: Quantitative results demonstrate the model's performance, with an overall accuracy of 98% and precision values of 96% for normal and 98% for pneumonia cases. The recall values for normal and pneumonia cases are 96% and 98%, respectively, highlighting the consistency of the model.
Conclusion: These performance measures collectively indicate the promise of the proposed model to improve the diagnostic process, representing a substantial advance over current methods and paving the way for its application in clinical practice.
Exploring the Predictive Value of Grading in Regions Beyond Peritumoral Edema in Gliomas based on Radiomics
Authors: Jie Pan, Jun Lu, Shaohua Peng and Minhai Wang
Introduction: Accurate preoperative grading of adult-type diffuse gliomas is crucial for personalized treatment. Emerging evidence suggests tumor cell infiltration extends beyond peritumoral edema, but the predictive value of radiomics features in these regions remains underexplored.
Methods: A retrospective analysis was conducted on 180 patients from the UCSF-PDGM dataset, split into training (70%) and validation (30%) cohorts. Intratumoral volumes (VOI_I, comprising tumor body and edema) and peritumoral volumes (VOI_P) at seven expansion distances (1-5, 10, and 15 mm) were analyzed. Feature selection involved Levene's test, the t-test, mRMR, and LASSO regression. Radiomics models (VOI_I, VOI_P, and combined intratumoral-peritumoral models) were evaluated using AUC, accuracy, sensitivity, specificity, and F1 score, with DeLong tests for comparisons.
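A rough sketch of the statistical-filter-plus-LASSO step under simple assumptions (radiomics features already extracted into a matrix; the mRMR stage is omitted here for brevity, and thresholds are illustrative):

```python
# Hedged sketch: filter radiomics features with Levene's test and a t-test, then
# shrink with an L1-penalized (LASSO) logistic model for low- vs. high-grade labels.
import numpy as np
from scipy.stats import levene, ttest_ind
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

def select_features(X, y, alpha=0.05, C=0.1):
    """X: (n_patients, n_features) radiomics matrix; y: 0/1 grade labels."""
    keep = []
    for j in range(X.shape[1]):
        a, b = X[y == 0, j], X[y == 1, j]
        equal_var = levene(a, b).pvalue > alpha       # variance homogeneity check
        if ttest_ind(a, b, equal_var=equal_var).pvalue < alpha:
            keep.append(j)
    Xk = StandardScaler().fit_transform(X[:, keep])
    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(Xk, y)
    # Return indices of features with non-zero LASSO coefficients.
    return [keep[i] for i in np.flatnonzero(lasso.coef_[0])]
```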
Results: The combined radiomics models built for the intratumoral region plus the peritumoral 1-5 mm ranges (VOI_1-5mm) showed better predictive performance than the VOI_I model (AUC = 0.815/0.672), with the VOI_1 model performing best: in the training cohort, the AUC was 0.903 (accuracy = 0.880, sensitivity = 0.905, specificity = 0.855, F1 = 0.884); in the validation cohort, the AUC was 0.904 (accuracy = 0.852, sensitivity = 0.778, specificity = 0.926, F1 = 0.840). This model significantly outperformed the VOI_I model (p < 0.05) and the combined 10/15 mm models (p < 0.05).
Discussion: The peritumoral regions within 5 mm beyond the edematous area contain critical grading information, likely reflecting subtle tumor infiltration. Model performance declined at larger peritumoral distances, possibly because of dilution by normal tissue.
Conclusion: The radiomics features of the intratumoral region and the peritumoral region within 5 mm can optimize the preoperative grading of gliomas, providing support for surgical planning and prognostic evaluation.
Smartphone-based Anemia Screening via Conjunctival Imaging with 3D-Printed Spacer: A Cost-effective Geospatial Health Solution
Authors: A.M. Arunnagiri, M. Sasikala, N. Ramadass and G. Ramya
Introduction: Anemia is a common blood disorder caused by a low red blood cell count and reduced blood hemoglobin. It affects children, adolescents, and adults of all genders. Anemia diagnosis typically involves invasive procedures such as peripheral blood smears and complete blood count (CBC) analysis. This study aims to develop a cost-effective, non-invasive tool for anemia detection using eye conjunctiva images.
Method: Eye conjunctiva images were captured from 54 subjects using three imaging modalities: a DSLR camera, a smartphone camera, and a smartphone camera fitted with a 3D-printed spacer macro lens. Image processing techniques, including You Only Look Once (YOLOv8), the Segment Anything Model (SAM), and K-means clustering, were used to analyze the images. An MLP classifier then categorized the images as anemic, moderately anemic, or normal. The trained model was embedded into an Android application with geotagging capabilities to map the prevalence of anemia across regions.
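A simplified sketch of the color-clustering and classification stage only; detection and segmentation with YOLOv8 and SAM are assumed to have already produced a conjunctiva region, and the feature choices and placeholder data below are illustrative rather than the authors' pipeline.

```python
# Hedged sketch: K-means color clustering of a segmented conjunctiva region to
# derive pallor-related color features, then a 3-class MLP classifier.
# Synthetic placeholder data stand in for the real segmented images.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def color_features(conjunctiva_rgb, k=3):
    """conjunctiva_rgb: (H, W, 3) pixels of the segmented conjunctiva region."""
    pixels = conjunctiva_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    # Use the k dominant cluster colors, sorted by their red channel, as features.
    centers = km.cluster_centers_[np.argsort(km.cluster_centers_[:, 0])]
    return centers.flatten()

rng = np.random.default_rng(0)
images = [rng.integers(0, 256, (64, 64, 3)) for _ in range(54)]  # placeholder crops
labels = rng.integers(0, 3, 54)  # 0 = normal, 1 = moderately anemic, 2 = anemic

X = np.vstack([color_features(img) for img in images])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X, labels)
```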
Results: Features extracted using SAM segmentation showed higher statistical significance (p < 0.05) than those from K-means clustering. Comparison of the high-resolution DSLR modality with the proposed 3D-printed spacer macro lens also showed statistically significant differences (p < 0.05). Classification accuracy was 98.3% for images from the 3D spacer-equipped smartphone camera, on par with the 98.8% accuracy obtained with DSLR images.
Conclusion: The mobile application, developed using images captured with the 3D spacer-equipped modality, provides portable, cost-effective, and user-friendly non-invasive anemia screening. By identifying anemic clusters, it assists healthcare workers in targeted interventions and supports global health initiatives such as Sustainable Development Goal (SDG) 3.
Diffusion Model-based Medical Image Generation as a Potential Data Augmentation Strategy for AI Applications
Authors: Zijian Cao, Jueye Zhang, Chen Lin, Tian Li, Hao Wu and Yibao Zhang
Introduction: This study explored a generative image synthesis method based on diffusion models, potentially providing a low-cost, high-efficiency training data augmentation strategy for medical artificial intelligence (AI) applications.
Methods: The MedMNIST v2 dataset was used as a small training dataset under low-performance computing conditions. Based on the characteristics of the existing samples, new medical images were synthesized with the proposed annotated diffusion model. In addition to visual assessment, quantitative evaluation was performed based on the descent of the training loss during generation and on the Fréchet Inception Distance (FID), using various loss functions and feature vector dimensions.
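A minimal sketch of the noise-prediction objective that diffusion models of this kind typically optimize, showing where the choice between Huber and L2 loss enters; the authors' network architecture, noise schedule, and annotation scheme are not specified in the abstract.

```python
# Hedged sketch: one DDPM-style training step. The model predicts the noise added
# to an image at a random timestep; the loss compares predicted and true noise and
# can be either L2 (MSELoss) or Huber (HuberLoss), as compared in this study.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)             # simple linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def training_step(model, x0, loss_fn):
    """x0: clean images (B, C, H, W); model(x_t, t) predicts the injected noise."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise   # forward diffusion
    pred = model(x_t, t)
    return loss_fn(pred, noise)

huber = nn.HuberLoss()   # more robust to outlier pixel errors
l2 = nn.MSELoss()        # standard DDPM objective
```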
Results: Compared with the original data, the proposed diffusion model generated medical images of similar style but with markedly varied anatomic details. The model trained with the Huber loss achieved a higher FID of 15.2 at a feature vector dimension of 2048, whereas the model trained with the L2 loss achieved the best FID of 0.85 at a feature vector dimension of 64.
Discussion: The use of the Huber loss enhanced model robustness, while the FID values indicated acceptable similarity between generated and real images. Future work should explore the application of these models to more complex datasets and clinical scenarios.
Conclusion: This study demonstrated that diffusion model-based medical image synthesis is potentially applicable as an augmentation strategy for AI, particularly where access to real clinical data is limited. Optimal training parameters were also proposed by evaluating the dimensionality of feature vectors in FID calculations and the complexity of loss functions.
Liver Functions in Patients with Chronic Liver Disease and Liver Cirrhosis: Correlation of FLIS and LKER with PALBI Grade and APRI
Authors: Ahmet Cem Demirşah and Elif Gündoğdu
Introduction: In chronic liver disease (CLD) and liver cirrhosis (LC), assessing hepatic function and disease severity is crucial for patient management. This study aimed to evaluate the relationship of the platelet-albumin-bilirubin (PALBI) grade and the aspartate aminotransferase-to-platelet ratio index (APRI) with the functional liver imaging score (FLIS) and the liver-to-kidney enhancement ratio (LKER) on gadolinium ethoxybenzyl diethylenetriamine pentaacetic acid (Gd-EOB-DTPA)-enhanced hepatobiliary phase (HBP) magnetic resonance imaging (MRI).
Methods: After applying the exclusion criteria, 86 patients with CLD or LC who underwent Gd-EOB-DTPA-enhanced MRI between January 2018 and October 2023 were included. APRI and PALBI grades were calculated from laboratory data. FLIS was determined as the sum of three HBP imaging features (liver parenchymal enhancement, biliary excretion, and the portal vein sign), each scored 0–2. LKER was calculated by dividing liver signal intensity by kidney signal intensity using region-of-interest (ROI) measurements. Spearman's correlation was used to assess relationships between the variables.
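For orientation, a small sketch of two of these scores as conventionally computed: the APRI formula below is the standard published definition rather than one quoted from this abstract, LKER follows the ratio described above, and the PALBI grade is omitted because its coefficients are not given here. The example numbers are illustrative only.

```python
# Hedged sketch: conventional APRI formula and the LKER ratio described above.
def apri(ast_u_per_l, ast_upper_limit_normal, platelets_10e9_per_l):
    """APRI = (AST / upper limit of normal) x 100 / platelet count (10^9/L)."""
    return (ast_u_per_l / ast_upper_limit_normal) * 100 / platelets_10e9_per_l

def lker(liver_roi_signal, kidney_roi_signal):
    """LKER = mean liver HBP signal intensity / mean kidney signal intensity."""
    return liver_roi_signal / kidney_roi_signal

# Worked toy example (illustrative numbers only):
print(round(apri(ast_u_per_l=80, ast_upper_limit_normal=40,
                 platelets_10e9_per_l=100), 2))                  # 2.0
print(round(lker(liver_roi_signal=450.0, kidney_roi_signal=300.0), 2))  # 1.5
```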
Results: APRI showed a weak negative correlation with both FLIS (r = –0.327, p = 0.02) and LKER (r = –0.308, p = 0.004). PALBI showed a moderate negative correlation with both FLIS (r = –0.495, p = 0.001) and LKER (r = –0.554, p = 0.0001).
Discussion: FLIS and LKER correlated moderately with PALBI and weakly with APRI. LKER may be the more practical tool because of its quantitative nature. Despite limitations, combining imaging-based and laboratory-based scores could enhance liver function assessment.
Conclusion: FLIS and LKER can validate, rather than predict or exclude, liver dysfunction in CLD and LC.
Non-infectious Hepatic Cystic Lesions: A Narrative Review
Authors: Adem Ceri, Andreas Busse-Coté, Delphine Weil, Eric Delabrousse, Vincent Di Martino and Paul Calame
Hepatic cysts are commonly encountered in clinical practice, presenting a wide spectrum of lesions that vary in terms of pathogenesis, clinical presentation, imaging characteristics, and potential severity. While benign hepatic cysts are the most prevalent, other cystic lesions, which can sometimes mimic simple cysts, may be malignant and pose significant clinical challenges. Simple biliary cysts, the most common type, are typically diagnosed using ultrasound. However, for complex lesions, advanced imaging modalities such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) are crucial. In ambiguous cases, additional diagnostic tools such as contrast-enhanced ultrasound (CEUS), Positron Emission Tomography (PET), cyst fluid aspiration, or biopsy may be necessary. Understanding the nuances of these cystic lesions is crucial for accurate diagnosis and management, as it distinguishes between benign and potentially life-threatening conditions and informs the decision on appropriate treatment strategies. Non-parasitic cysts encompass a range of conditions, including simple biliary cysts, hamartomas, Caroli disease, polycystic liver disease, mucinous cystic neoplasms, intraductal papillary mucinous neoplasms, ciliated hepatic foregut cysts, and peribiliary cysts. Each type has specific clinical and imaging features that guide non-invasive diagnosis. Treatment approaches vary, with conservative management for asymptomatic lesions and more invasive techniques, such as surgery or percutaneous interventions, reserved for symptomatic cases or those with complications. This review focuses on non-parasitic cystic lesions, exploring their pathophysiology, epidemiology, risk of malignant transformation, treatment options, and key findings from imaging diagnosis.
SqueezeViX-Net with SOAE: A Prevailing Deep Learning Framework for Accurate Pneumonia Classification using X-Ray and CT Imaging Modalities
Authors: N. Kavitha and B. Anand
Introduction: Pneumonia is a serious respiratory illness that causes severe health problems, and deaths, when it is not properly diagnosed, particularly among at-risk populations. Appropriate treatment requires correct identification of the pneumonia type together with a swift and accurate diagnosis.
Materials and Methods: This paper presents SqueezeViX-Net, a deep learning framework designed specifically for pneumonia classification. The model incorporates a Self-Optimized Adaptive Enhancement (SOAE) method that programmatically adjusts the dropout rate during training; this adaptive dropout mechanism improves model fit and stability. SqueezeViX-Net was evaluated on extensive X-ray and CT image collections derived from publicly accessible Kaggle repositories.
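The abstract does not disclose SOAE's actual update rule, so the following is only a generic sketch of the underlying idea: adjusting a dropout probability between epochs based on the gap between training and validation loss. The thresholds and step size are illustrative.

```python
# Hedged sketch of adaptive dropout scheduling (not the published SOAE rule):
# raise dropout when the model overfits (validation loss >> training loss),
# lower it when it underfits. PyTorch nn.Dropout reads .p at forward time,
# so changing it between epochs takes effect immediately.
import torch.nn as nn

def adjust_dropout(model, train_loss, val_loss, step=0.05, p_min=0.1, p_max=0.7):
    gap = val_loss - train_loss
    delta = step if gap > 0.05 else (-step if gap < 0.0 else 0.0)
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.p = float(min(p_max, max(p_min, module.p + delta)))

# Usage inside a training loop (after each epoch):
#   adjust_dropout(model, epoch_train_loss, epoch_val_loss)
```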
Results: SqueezeViX-Net outperformed several established deep learning architectures, including DenseNet-121, ResNet-152V2, and EfficientNet-B7, achieving higher accuracy, precision, recall, and F1-score.
Discussion: The model was validated on a range of pneumonia datasets comprising both CT and X-ray images, demonstrating its ability to handle modality variation.
Conclusion: SqueezeViX-Net integrates SOAE into an advanced framework for the identification of pneumonia in clinical use. Its dynamic learning capability and high precision give it strong diagnostic potential and may contribute to improved patient treatment outcomes.
MBLEformer: Multi-Scale Bidirectional Lesion Enhancement Transformer for Cervical Cancer Image Segmentation
Background: Accurate segmentation of lesion areas in Lugol's iodine staining images is crucial for screening pre-cancerous cervical lesions. However, in underdeveloped regions lacking skilled clinicians, this method may lead to misdiagnosis and missed diagnoses. In recent years, deep learning methods have been widely applied to assist medical image segmentation.
Objective: This study aims to improve the accuracy of cervical cancer lesion segmentation by addressing the limitations of Convolutional Neural Networks (CNNs) and attention mechanisms in capturing global features and refining upsampling details.
Methods: This paper presents a Multi-Scale Bidirectional Lesion Enhancement network, MBLEformer, which employs a Swin Transformer encoder to extract image features at multiple stages and a multi-scale attention mechanism to capture semantic features from different perspectives. Additionally, a bidirectional lesion enhancement upsampling strategy is introduced to refine the edge details of lesion areas.
Results: Experimental results demonstrate that the proposed model achieves superior segmentation performance on a proprietary cervical cancer colposcopy dataset, outperforming other medical image segmentation methods with a mean Intersection over Union (mIoU) of 82.5%, an accuracy of 94.9%, and a specificity of 83.6%.
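For reference, a small sketch of how the reported mIoU metric is computed from predicted and ground-truth label maps; this is the standard definition with a toy binary example, not code from the paper.

```python
# Hedged sketch: mean Intersection over Union for a segmentation task.
import numpy as np

def mean_iou(pred, target, num_classes=2):
    """pred, target: integer label maps of identical shape."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:          # class absent from both masks: skip it
            continue
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, target))  # (1/2 background IoU + 2/3 lesion IoU) / 2 ≈ 0.583
```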
Conclusion: MBLEformer significantly improves the accuracy of lesion segmentation in iodine-stained cervical cancer images, with the potential to enhance the efficiency and accuracy of pre-cancerous lesion diagnosis and help address the issue of imbalanced medical resources.
Multi-scale based Network and Adaptive EfficientnetB7 with ASPP: Analysis of Novel Brain Tumor Segmentation and Classification
Authors: Sheetal Vijay Kulkarni and S. Poornapushpakala
Introduction: Medical imaging has undergone significant advancements with the integration of deep learning techniques, leading to enhanced accuracy in image analysis. These methods autonomously extract relevant features from medical images, thereby improving the detection and classification of various diseases. Among imaging modalities, Magnetic Resonance Imaging (MRI) is particularly valuable due to its high contrast resolution, which enables the differentiation of soft tissues, making it indispensable in the diagnosis of brain disorders. The accurate classification of brain tumors is crucial for diagnosing many neurological conditions. However, conventional classification techniques are often limited by high computational complexity and suboptimal accuracy. Motivated by these issues, an innovative model is proposed in this work for segmenting and classifying brain tumors. The research aims to develop a robust and efficient deep learning framework that can assist clinicians in making precise and early diagnoses, ultimately leading to more effective treatment planning. The proposed methodology begins with the acquisition of MRI images from standardized medical imaging databases.
Methods: The abnormal regions of the images are first segmented using the Multiscale Bilateral Awareness Network (MBANet), which incorporates multi-scale operations to enhance feature representation and image quality. The segmented images are then processed by a novel classification architecture, the Region Vision Transformer-based Adaptive EfficientNetB7 with Atrous Spatial Pyramid Pooling (RVAEB7-ASPP). To optimize classification performance, hyperparameters are fine-tuned using the Modified Random Parameter-based Hippopotamus Optimization Algorithm (MRP-HOA).
Results: The model's effectiveness is verified through a comprehensive experimental evaluation using various performance metrics and comparison with current state-of-the-art methods. The proposed MRP-HOA-RVAEB7-ASPP model achieves a classification accuracy of 98.2%, significantly outperforming conventional approaches in brain tumor classification.
Discussion: MBANet effectively performs brain tumor segmentation, while the RVAEB7-ASPP model provides reliable classification. The integrated MRP-HOA-RVAEB7-ASPP model optimizes feature extraction and parameter tuning, leading to improved accuracy and robustness.
Conclusion: The integration of advanced segmentation, adaptive feature extraction, and optimal parameter tuning enhances the reliability and accuracy of the model. This framework provides a more effective and trustworthy solution for the early detection and clinical assessment of brain tumors, leading to improved patient outcomes through timely intervention.
Mapping the Evolution of Thyroid Ultrasound Research: A 30-year Bibliometric Analysis
Authors: Ting Jiang, Chuansheng Yang, Lv Wu, Xiaofen Li and Jun Zhang
Introduction: Thyroid ultrasound has emerged as a critical diagnostic modality, attracting substantial research attention. This bibliometric analysis systematically maps the 30-year evolution of thyroid ultrasound research to identify developmental trends, research hotspots, and emerging frontiers.
Methods: English-language articles and reviews (1994-2023) were extracted from the Web of Science Core Collection. Bibliometric analysis was performed with VOSviewer and CiteSpace to examine collaborative networks among countries, institutions, and authors, reference timeline visualization, and keyword burst detection.
Results: A total of 8,489 documents were included in the analysis, showing an overall upward trend in publications. China, the United States, and Italy were the most productive countries, while the United States, Italy, and South Korea had the greatest influence. The journal Thyroid had the highest impact factor (IF). The keywords with the greatest burst strength were “disorders”, “thyroid volume”, and “association guidelines”. The reference timeline view showed that deep learning, ultrasound-based risk stratification systems, and radiofrequency ablation were the most recent reference clusters.
Discussion: Three dominant themes emerged: the ultrasound characteristics of thyroid disorders, the application of new techniques, and the assessment of malignancy risk in thyroid nodules. The application of deep learning and the development and refinement of related guidelines such as TI-RADS are the current focus of research.
Conclusion: The application efficacy and refinement of TI-RADS, along with the optimization of deep learning algorithms and their clinical applicability, will be the focus of subsequent research.
Multimodal Imaging and Clinical Implications of Collagenous Fibroma in the Juxtaforaminal Premaxillary Fat Pad Mimicking Locoregional Tumor Recurrence: A Case Report and Literature Review
Authors: Jeong Pyo Lee, Hye Jin Baek, Ki-Jong Park, Jin Pyeong Kim, Hyo Jung An and Eun Cho
Background: Collagenous fibroma (CF), or desmoplastic fibroblastoma, is a rare benign tumor with few reported cases involving the facial region. Its presence at uncommon sites can pose diagnostic challenges because its clinical and radiologic features overlap with those of malignant neoplasms.
Case Presentation: We report a 48-year-old woman with CF in the juxtaforaminal premaxillary fat pad presenting with neuralgic pain extending to the ipsilateral upper gingiva. The patient had a history of adenoid cystic carcinoma (AdCC) of the right nasolabial fold treated surgically four years earlier. Multimodal radiologic evaluation with ultrasonography, CT, and MRI revealed a soft tissue lesion in the premaxillary region, raising suspicion of recurrent AdCC. However, histopathologic examination of the surgical excision specimen confirmed the diagnosis of CF.
Conclusion: This case highlights the importance of integrating clinical history, imaging findings, and pathological analysis for accurate diagnosis and appropriate management.
Preliminary Study on the Evaluation Value of Extracellular Volume Fraction in the Pathological Grading of Lung Invasive Adenocarcinoma
Authors: Bin Nan, Yukun Pan, Yinghui Ge, Minghua Sun, Jin Cai and Xiaojing Kan
Introduction: This study aims to evaluate the diagnostic value of the extracellular volume fraction (ECV) and spectral CT parameters in assessing the pathological grading of lung invasive adenocarcinoma (IAC) presenting as solid or subsolid nodules.
Methods: Patients with pathologically confirmed IAC presenting as solid or subsolid pulmonary nodules at our hospital between March 2023 and November 2024 were retrospectively collected. Relevant data were recorded, and the patients were divided into two groups: intermediate/high differentiation and low differentiation. The following parameters were compared between the two groups: arterial phase iodine concentration (ICA), arterial phase normalized iodine concentration (NICA), arterial phase normalized effective atomic number (nZeffA), arterial phase extracellular volume fraction (ECVA), venous phase iodine concentration (ICV), venous phase normalized iodine concentration (NICV), venous phase normalized effective atomic number (nZeffV), and venous phase extracellular volume fraction (ECVV). Parameters showing statistical significance were evaluated for diagnostic performance using receiver operating characteristic (ROC) curves.
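For context, the iodine-based ECV used in spectral CT studies is conventionally derived from the lesion and blood-pool iodine concentrations and the hematocrit; the formula below is the commonly published definition, not one quoted from this abstract, and the numbers are a worked toy example.

```python
# Hedged sketch: the conventional spectral-CT extracellular volume fraction,
# ECV = (1 - hematocrit) x (iodine concentration in lesion / iodine in aorta).
def ecv_fraction(ic_lesion, ic_aorta, hematocrit):
    """Returns ECV as a percentage; iodine concentrations in mg/mL."""
    return (1.0 - hematocrit) * (ic_lesion / ic_aorta) * 100.0

# Worked toy example (illustrative numbers only):
print(round(ecv_fraction(ic_lesion=1.2, ic_aorta=5.0, hematocrit=0.42), 1))  # 13.9
```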
Results: A total of 61 patients were included: 40 in the intermediate/high differentiation group and 21 in the low differentiation group. The intermediate/high differentiation group had higher ECVA, NICA, ECVV, ICV, NICV, and nZeffV values than the low differentiation group (P < 0.05), with AUC values of 0.679, 0.620, 0.757, 0.688, 0.724, and 0.693, respectively. Among these, ECVV had the largest AUC, with a sensitivity of 72.5% and a specificity of 71.4%. Binary logistic regression identified five imaging features: maximum lesion diameter, bronchus-encapsulated air sign, lobulation sign, spiculation sign, and pleural traction sign. Integrating these imaging features with ECVV yielded a model with enhanced diagnostic performance (AUC of 0.886, sensitivity of 85.7%, and specificity of 80.0%).
Discussion: ECVV outperforms other spectral parameters in differentiating IAC grades, reflecting changes in the tumor microenvironment. Combining ECVV with imaging features enhances diagnostic accuracy, though the study's single-center design and small sample size limit generalizability.
Conclusion: The extracellular volume fraction provides additional information for the pathological grading of invasive adenocarcinoma of the lung. Compared with other spectral parameters, ECVV exhibits the highest diagnostic performance, and its combination with conventional imaging features can further enhance diagnostic accuracy.
Effective Feature Extraction for Knee Osteoarthritis Detection on X-ray Images using Convolutional Neural Networks
Authors: Lei Yu, Shuai Zhang, Xueting Zhang, Heng Wang, Mengnan You and Yimin Jiang
Background: Knee osteoarthritis (KOA) is a degenerative joint disease commonly assessed on X-ray images using the Kellgren-Lawrence (KL) criteria. The ambiguity of the KL standard often causes patients to misunderstand their condition, leading to overtreatment or delayed treatment and making precise surgical decisions difficult to guide. Data-driven approaches have also been hampered by the low resolution and inconsistent feature distribution of knee X-ray images, and imbalance between positive and negative samples further degrades detection accuracy.
Objective: The objective of this study was to develop a deep learning-based model, Task-aligned Path Aggregation Feature Fusion for Knee Osteoarthritis Detection (TPAFFKnee), to improve KOA detection accuracy by addressing the limitations of traditional methods. More accurate detection could support appropriate treatment for patients and more precise surgical planning by physicians.
Methods: We proposed the TPAFFKnee model based on the EfficientNetB4 backbone, introducing a path aggregation network for better feature extraction and replacing the fully convolutional network (FCN) head with a task-aligned detection head. In addition, the original loss function was replaced with the Efficient Intersection over Union loss (EIoU loss) to address the imbalance between positive and negative samples.
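For reference, a small sketch of the EIoU loss as commonly defined in the detection literature: the plain IoU term plus center-distance, width, and height penalties normalized by the enclosing box. The exact variant and any focal weighting used by the authors are not detailed in the abstract.

```python
# Hedged sketch: Efficient IoU (EIoU) loss for axis-aligned boxes given as
# (cx, cy, w, h). Adds center-distance, width, and height penalties to 1 - IoU.
import torch

def eiou_loss(pred, target, eps=1e-7):
    px, py, pw, ph = pred.unbind(-1)
    tx, ty, tw, th = target.unbind(-1)

    # Corners and plain IoU
    p_x1, p_y1, p_x2, p_y2 = px - pw / 2, py - ph / 2, px + pw / 2, py + ph / 2
    t_x1, t_y1, t_x2, t_y2 = tx - tw / 2, ty - th / 2, tx + tw / 2, ty + th / 2
    inter_w = (torch.min(p_x2, t_x2) - torch.max(p_x1, t_x1)).clamp(min=0)
    inter_h = (torch.min(p_y2, t_y2) - torch.max(p_y1, t_y1)).clamp(min=0)
    inter = inter_w * inter_h
    union = pw * ph + tw * th - inter + eps
    iou = inter / union

    # Smallest enclosing box dimensions
    cw = torch.max(p_x2, t_x2) - torch.min(p_x1, t_x1) + eps
    ch = torch.max(p_y2, t_y2) - torch.min(p_y1, t_y1) + eps

    center_term = ((px - tx) ** 2 + (py - ty) ** 2) / (cw ** 2 + ch ** 2)
    width_term = (pw - tw) ** 2 / cw ** 2
    height_term = (ph - th) ** 2 / ch ** 2
    return 1 - iou + center_term + width_term + height_term

# Toy usage (illustrative boxes only):
print(eiou_loss(torch.tensor([[0.5, 0.5, 1.0, 1.0]]),
                torch.tensor([[0.6, 0.5, 1.0, 1.2]])))
```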
Results: The model accurately detected KOA categories and lesion locations based on the KL classification criteria, achieving a mean average precision (mAP) of 93% on the Mendeley KOA dataset of 1,650 knee osteoarthritis X-ray images from several hospitals. The mAP values for the K2, K3, and K4 categories were 98.6%, 98.5%, and 99.6%, respectively. Compared with Faster R-CNN, SSD, RetinaNet, EfficientNetB4, and YOLOX, the proposed algorithm improved detection mAP by 14.3%, 12.4%, 15.3%, 22.7%, and 4.3%, respectively.
Conclusion: This study underscores the value of the EfficientNetB4 network for KOA detection. The TPAFFKnee model provides an effective solution for improving KOA detection accuracy and offers a promising approach to standardized KL classification in medical applications. Future research can integrate more clinical data and extend data-driven automation in healthcare delivery.
DWI-based Biologically Interpretable Radiomic Nomogram for Predicting 1-year Biochemical Recurrence after Radical Prostatectomy: A Deep Learning, Multicenter Study
Authors: Xiangke Niu, Yongjie Li, Lei Wang and Guohui Xu
Introduction: Biochemical recurrence (BCR) following radical prostatectomy (RP) for prostate cancer (PCa) is not uncommon, and early detection and management of BCR after surgery have been reported to improve survival.
This study aimed to develop a nomogram integrating deep learning-based radiomic features and clinical parameters to predict 1-year BCR after RP and to examine the associations between radiomic scores and the tumor microenvironment (TME).
Methods: In this retrospective multicenter study, two independent cohorts of patients (n = 349) who underwent RP after multiparametric magnetic resonance imaging (mpMRI) between January 2015 and January 2022 were included. Single-cell RNA sequencing data from four prospectively enrolled participants were used to investigate the radiomic score-related TME. A 3D U-Net was trained and optimized for prostate cancer segmentation on diffusion-weighted imaging, and radiomic features of the target lesion were extracted. Predictive nomograms were developed via multivariate Cox proportional hazards regression and assessed for discrimination, calibration, and clinical usefulness.
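A bare-bones sketch of the nomogram-building step under simple assumptions: a radiomic score and a few clinical covariates already assembled in a table, fitted with the lifelines implementation of Cox proportional hazards regression. The column names and data below are synthetic placeholders, not the study's variables.

```python
# Hedged sketch: fit a Cox proportional hazards model on a radiomic score plus
# clinical covariates; the fitted coefficients are what a nomogram visualizes.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "time_to_bcr_months": rng.exponential(24, n).round(1),  # follow-up time
    "bcr_event": rng.integers(0, 2, n),                     # 1 = biochemical recurrence
    "radiomic_score": rng.normal(0, 1, n),                  # hypothetical DL radiomic score
    "psa": rng.normal(10, 4, n),
    "gleason_grade_group": rng.integers(1, 6, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_bcr_months", event_col="bcr_event")
cph.print_summary()              # hazard ratios and confidence intervals
print(cph.concordance_index_)    # discrimination (C-index)
```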
Results: In the development cohort, the clinical-radiomic nomogram had an AUC of 0.892 (95% confidence interval: 0.783–0.939), considerably higher than those of the radiomic signature and the clinical model. The Hosmer–Lemeshow test indicated good calibration of the clinical-radiomic model in both the development (P = 0.461) and validation (P = 0.722) cohorts.
Discussion: Decision curve analysis revealed that the clinical-radiomic nomogram offered greater clinical usefulness than the clinical model or radiomic signature alone in both cohorts. Radiomic scores were associated with significant differences in TME patterns.
Conclusion: Our study demonstrated the feasibility of a DWI-based clinical-radiomic nomogram combined with deep learning for the prediction of 1-year BCR. The findings revealed that the radiomic score was associated with a distinctive tumor microenvironment.
The Long-term Volumetric and Radiological Changes of COVID-19 on Lung Anatomy: A Quantitative Assessment
Authors: A. Savranlar, M. Öztürk, H. Sipahioğlu, Y. Savranlar and M. Tahta Şahingöz
Objective: This study aimed to assess the long-term volumetric and radiological effects of COVID-19 on lung anatomy. Disease severity was evaluated using radiological scoring, and lung volume measurements were performed with 3D Slicer software.
Methods: A retrospective analysis was conducted on 127 patients diagnosed with COVID-19 between April 2020 and December 2023. Initial and follow-up chest CT scans were reviewed to analyze lung volumes and radiological findings. Lung lobes were segmented with 3D Slicer software to measure volumes. Severity scores were assigned according to the Chung system, and statistical methods, including logistic regression and Wilcoxon signed-rank tests, were used to analyze outcomes.
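A short sketch of the paired comparison used here, a Wilcoxon signed-rank test on initial versus follow-up lobe volumes; the variable names and example values are illustrative, not study data.

```python
# Hedged sketch: paired Wilcoxon signed-rank test comparing initial and
# follow-up lung volumes (mL) for the same patients. Values are made up.
from scipy.stats import wilcoxon

initial_volume_ml = [2100, 1850, 2400, 1990, 2250, 2050, 1900, 2300]
followup_volume_ml = [2350, 1900, 2550, 2100, 2400, 2200, 2050, 2450]

stat, p_value = wilcoxon(initial_volume_ml, followup_volume_ml)
print(f"Wilcoxon statistic = {stat}, p = {p_value:.3f}")
```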
Results: Follow-up CT scans showed significant improvements in lung volumes and severity scores. Left lung total volume increased significantly (p = 0.038), while right lung total volume and COVID-19-affected lung volumes showed non-significant improvements. Severity scores and the number of affected lobes decreased significantly (p < 0.05). Correlation analyses revealed that age negatively influenced lung volume recovery (r = -0.177, p = 0.047). Persistent pathological findings, such as interstitial thickening and fibrotic bands, were observed.
Conclusion: COVID-19 induces lasting changes in lung structure, particularly in elderly and severely affected patients. Long-term follow-up and the consideration of antifibrotic therapies are essential to manage post-COVID-19 complications effectively. A multidisciplinary approach is recommended to support patient recovery and minimize healthcare burdens.