Recent Advances in Electrical & Electronic Engineering - Volume 18, Issue 10, 2025
Computational Analysis of Diabetes Kidney Diseases Using Machine Learning
Authors: Ganesh Chandra, Namita Tiwari, Urmila Mahor, Parashu Ram Pal, Vikash Yadav and Deepak Kumar Mishra
The increasing complexity of healthcare, coupled with an ageing population, poses significant challenges for decision-making in healthcare delivery. Implementing smart decision support systems can alleviate some of these challenges by providing clinicians with timely and personalized insights. These systems can leverage vast amounts of patient data, advanced analytics, and predictive modeling to offer clinicians a comprehensive view of individual patient needs and potential outcomes.
Researchers and clinicians currently need faster solutions for diagnosing various diseases, and they have therefore turned to Machine Learning (ML) algorithms. ML is a subfield of Artificial Intelligence (AI) that provides useful tools for data analysis, process automation, and other tasks in healthcare, and its use in healthcare systems continues to grow because of its learning capability.
In this paper, the following algorithms are used for the diagnosis of diabetes and kidney disease: Gradient Boosting Classifier (GBC), Random Forest Classifier (RFC), Extra Trees Classifier (ETC), Support Vector Classifier (SVC), Multilayer Perceptron (MLP) neural network, and Decision Tree Classifier (DTC). In our model, the Gradient Boosting Classifier is combined with repeated cross-validation to obtain better results. The experimental analysis was performed on both unbalanced and balanced datasets. The accuracies achieved on the unbalanced and balanced datasets are 75.7 & 92.2 for GBC, 75.7 & 90.1 for ETC, 74.4 & 80.0 for RFC, 62.5 & 66.4 for SVC, 58.3 & 63.0 for MLP, and 59.4 & 74.5 for DTC, respectively. Comparing these results, we found that GBC outperforms the other algorithms.
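As a rough illustration of the evaluation protocol described above (a Gradient Boosting Classifier scored with repeated cross-validation), the following scikit-learn sketch shows the general pattern; the dataset file, target column, and fold settings are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch: Gradient Boosting Classifier with repeated stratified
# cross-validation. File name, target column and CV settings are assumed.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

df = pd.read_csv("diabetes_kidney.csv")             # hypothetical combined dataset
X, y = df.drop(columns=["outcome"]), df["outcome"]  # hypothetical target column

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=42)
model = GradientBoostingClassifier(random_state=42)

scores = cross_val_score(model, X, y, scoring="accuracy", cv=cv)
print(f"mean accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```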
Recent Patents on Pipeline Detection and Cleaning Electronic Devices
Authors: Zhuo Cheng, Lie Li, Youtao Xia, Jiangnan Liu and Daolong Yang
This paper provides an overview of the most recent advancements in pipeline inspection and cleaning technology over the past five years, drawing insights from patents and scholarly papers. It primarily focuses on three types of devices: long-distance pipeline inspection robots, small-diameter underground pipeline cleaning robots, and magnetic suction wall-climbing robots.
A pipeline robot is a robot specifically designed to inspect, maintain and repair pipelines. With the continuous development and expansion of pipeline systems, today's pipelines have become an integral part of urban infrastructure. However, because pipelines are often located underground or in other hard-to-reach places, traditional inspection and maintenance methods become very difficult and expensive. Therefore, it is important to research and develop pipeline robots that can improve the safety, reliability and efficiency of pipeline systems by inspecting, cleaning, overhauling and even repairing pipelines.
Devices need to be developed to clean the inner wall of pipelines from scale buildup and detect pipeline damage. This would improve conveyance efficiency and prolong the life of the pipeline.
The pipeline cleaning robot removes scale and deposits from the pipe wall, effectively addressing low conveying efficiency and clogging; the pipeline inspection robot then inspects the cleaned pipeline for damage, reducing the probability of breakage during conveyance.
Pipeline robots can be reconfigured according to pipeline parameters to adapt to different types of pipelines and effectively resolve low conveying efficiency or clogging, while pipeline inspection robots can detect damage in a timely manner and improve conveying reliability.
Together, these technologies extend the service life of pipes, improve transportation efficiency, solve the clogging problem, and guarantee the reliability of pipeline transportation.
Federated Learning: An Approach for Managing Data Privacy and Security in Collaborative Learning
Authors: Reeti Jaswal, Surya Narayan Panda and Vikas Khullar
In the field of machine learning, federated learning (FL) has emerged as a breakthrough paradigm, providing a decentralized method of training models while addressing issues of data security, privacy, and scalability. This study offers a thorough analysis of FL, including an examination of its underlying theories, its varieties, and a comparison with more conventional machine learning techniques. We explore the drawbacks of conventional machine learning techniques, especially when sensitive and distributed data are involved, and explain how FL addresses these drawbacks by leveraging collaborative learning across decentralized devices or servers. We also highlight the various fields in which FL finds application, including healthcare, industry, IoT, mobile devices, and education, demonstrating its potential to deliver tailored services and predictive analytics while maintaining data privacy. Furthermore, we address the main obstacles to FL adoption, such as costly communication, heterogeneous systems, statistical heterogeneity, and privacy concerns, and suggest possible directions for future research to effectively overcome these obstacles. This review aims to shed light on the advances and challenges of FL in order to facilitate future study and growth in this quickly developing discipline.
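Although the review itself is conceptual, the collaborative training it describes typically reduces to a server averaging client updates. The sketch below shows a FedAvg-style weighted aggregation step in plain NumPy; it is a generic illustration, not code from any system discussed in the article.

```python
# Minimal sketch of FedAvg-style aggregation: the server averages client model
# parameters weighted by each client's local data size.
import numpy as np

def fedavg(client_weights, client_sizes):
    """client_weights: list of per-client parameter lists (numpy arrays)."""
    total = sum(client_sizes)
    agg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            agg[i] += (n / total) * w
    return agg

# Example: three clients, each holding one tiny two-parameter "model".
clients = [[np.array([1.0, 2.0])], [np.array([3.0, 4.0])], [np.array([5.0, 6.0])]]
sizes = [100, 50, 50]
print(fedavg(clients, sizes))      # -> [array([2.5, 3.5])]
```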
Research Progress on Damage and Protection of Key Friction Pairs in Axial Piston Pump
Authors: Jianying Li, Yongyuan Zha, Hailong Yang and Haoyang Gao
Axial piston pumps are among the core and most technically challenging components in hydraulic systems. They are widely used in practical engineering fields such as hydraulic transmission and control. This article first outlines the development history of piston pumps and then provides a comprehensive analysis of the research progress on the friction characteristics and surface modification technologies of key friction pairs in axial piston pumps.
This article aims to summarize the friction wear and oil film characteristics of key friction pairs in axial piston pumps, analyze and elaborate the research progress in improving the tribological performance of friction pair surfaces through methods such as surface texturing and surface coating, and provide theoretical support for the damage and protection of axial piston pumps.
The present study was conducted by organizing and analyzing research literature from both domestic and international scholars. The objective was to explore the lubrication and friction wear properties of key friction pairs, as well as the application of surface texturing, surface coating, and other technologies in improving the tribological performance of friction pair surfaces.
A review of pertinent literature has revealed that surface texturing and surface coating can effectively reduce the friction coefficient of friction pairs, enhance lubrication performance, and thereby extend the service life of axial piston pumps. Moreover, these methods facilitate the reduction of wear on the friction pairs and enhance their overall performance.
This article presents a summary of the research progress on friction, wear, and oil film characteristics of key friction pairs in axial piston pumps. It also analyzes the role of technologies such as surface texturing and surface coating in improving the tribological properties of the friction pair surfaces. Furthermore, it provides a prospective outlook on future developments.
Recent Developments of Solar Charge Controllers Technologies: A Bibliometric Study
Authors: Azhar A.D., Arsad A.Z., Yew W.H., Ghazali A., Chau C.F. and Zuhdi A.W.M.
A solar photovoltaic system is a renewable energy technology whose output depends on solar irradiance and ambient temperature. A solar charge controller is required to ensure that the energy received by the photovoltaic cell or module is maximized at the output. This study presents the first bibliometric analysis based on a thorough evaluation of the most frequently cited articles on solar charge controllers, aimed at forecasting future trends and applications. The analysis draws on the Scopus database, from which the 100 most cited papers were extracted. The literature on solar charge controllers expanded swiftly from 2012 to 2023, with 2019 producing the most publications and papers published in 2018 receiving more citations than works published in other years. In recent years, battery storage, electric vehicles, energy storage, controllers, photovoltaic systems, and power management systems have garnered great interest. Functional evaluation of the solar charge controller and determination of the optimal renewable energy system could further boost its potential. Our analysis of highly cited articles on solar charge controllers highlights a selection of features, including control methods and systems, issues and challenges that establish current constraints, research gaps, and the need for solutions to outstanding problems. Each of these aspects is anticipated to contribute to the development of advanced solar charge controllers and control methods for future photovoltaic systems.
Development Status and Future Prospects of Strapdown Inertial Navigation Technology
Authors: Hongbo Liu, Juncheng Wu, Yuze Lin and Xiaodong Yang
Background: Inertial navigation is a comprehensive technology involving precision machinery, computer technology, microelectronics, optics, automatic control, materials, and other disciplines. Strapdown inertial navigation systems have gradually become the mainstream direction of inertial navigation because of their small size, low cost, simple structure, and high reliability. The angular rate and acceleration of the carrier relative to inertial space are measured by the inertial measurement unit (IMU), and the instantaneous velocity and position of the carrier are computed automatically from Newton's laws of motion. The technology does not rely on external information, radiates no energy to the outside world, is difficult to interfere with, and offers good concealment. It is therefore widely used in aerospace, aviation, and marine navigation, especially in the military field.
Objective: This paper describes the development history of the strapdown inertial navigation system, summarizes its research status, and looks forward to its future development directions.
Methods: The key technologies of strapdown inertial navigation systems are surveyed, and related research on strapdown inertial navigation technology is reviewed in detail. The hardware composition and associated algorithms of the strapdown inertial navigation system are analyzed.
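To make the mechanization described in the Background concrete, the sketch below shows one simplified strapdown update step (attitude from gyro increments, then velocity and position from the rotated specific force). Earth rate, transport rate, and coning/sculling corrections are deliberately omitted, so this is an illustrative toy, not navigation-grade code.

```python
# Minimal sketch of one strapdown mechanization step from IMU samples.
import numpy as np

def quat_mul(q, r):
    w1, x1, y1, z1 = q; w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_to_dcm(q):
    w, x, y, z = q
    return np.array([[1-2*(y*y+z*z), 2*(x*y-w*z),   2*(x*z+w*y)],
                     [2*(x*y+w*z),   1-2*(x*x+z*z), 2*(y*z-w*x)],
                     [2*(x*z-w*y),   2*(y*z+w*x),   1-2*(x*x+y*y)]])

def strapdown_step(q, v, p, gyro, accel, dt, g=np.array([0.0, 0.0, -9.81])):
    dq = np.concatenate(([1.0], 0.5 * gyro * dt))   # small-angle attitude increment
    q = quat_mul(q, dq); q /= np.linalg.norm(q)
    a_nav = quat_to_dcm(q) @ accel + g              # specific force in nav frame
    v_new = v + a_nav * dt                          # Newton's second law, integrated
    p_new = p + v * dt + 0.5 * a_nav * dt**2
    return q, v_new, p_new
```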
Results: Strapdown inertial navigation is now the mainstream of inertial navigation technology. Compared with gimbaled inertial navigation systems, strapdown systems have higher precision, smaller volume, and lower cost. The initial alignment, error compensation, navigation algorithm, and inertial device technologies of strapdown inertial navigation have all developed considerably.
Conclusion: With the development of initial alignment, error compensation, navigation algorithm, and inertial device technologies, the accuracy, response speed, and anti-interference ability of strapdown inertial navigation have improved significantly. The development of strapdown inertial navigation is closely tied to gyroscope technology, which belongs to inertial device technology. Strapdown systems based on optical gyroscopes, such as laser gyroscopes and fibre optic gyroscopes, are mature, while MEMS strapdown inertial navigation systems have broad application prospects compared with optical-gyroscope-based systems.
A Comprehensive Assessment of Various Converter Topologies for Electric Vehicle and Grid Integration
The emergence of Vehicle-to-Grid technology presents a promising frontier, enabling electric vehicles (EVs) to act as energy resources for residential appliances and thereby play crucial roles in peak load management, power stability, and load balancing. Vehicle-to-Grid operation hinges on bidirectional power flow between the EV battery and the AC grid, facilitated by bidirectional power electronic converters. This study undertakes a comprehensive investigation of various power factor correction (PFC) devices and DC-DC converter topologies tailored for Vehicle-to-Grid and Grid-to-Vehicle operation. A detailed comparison of the THD, efficiency, and power level of the various topologies is presented, together with observations based on this comparative analysis. Additionally, by considering factors such as efficiency and THD, this study aims to guide the selection of suitable bidirectional chargers, thus serving as a valuable resource for decision-makers in the electric vehicle domain.
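Since THD is a headline metric in the comparison above, the short sketch below shows how it is typically computed from a sampled AC-side current via the FFT; the synthetic waveform and its harmonic content are illustrative assumptions, not measurements from any of the reviewed topologies.

```python
# Minimal sketch: FFT-based THD of a sampled current waveform.
import numpy as np

fs, f1, cycles = 10_000, 50, 10                    # sample rate, fundamental, window
t = np.arange(0, cycles / f1, 1 / fs)
i_ac = (np.sin(2*np.pi*f1*t)
        + 0.05*np.sin(2*np.pi*3*f1*t)              # assumed 3rd harmonic
        + 0.03*np.sin(2*np.pi*5*f1*t))             # assumed 5th harmonic

spectrum = np.abs(np.fft.rfft(i_ac)) / len(t) * 2  # single-sided amplitudes
k1 = int(round(f1 * len(t) / fs))                  # FFT bin of the fundamental
harmonics = spectrum[2 * k1 :: k1][:48]            # 2nd..49th harmonic magnitudes
thd = np.sqrt(np.sum(harmonics**2)) / spectrum[k1]
print(f"THD = {100 * thd:.2f} %")                  # ~5.8 % for this waveform
```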
Transformational Trends Shaping the Future of the Telecom Industry
Authors: P. Ashok and Giri Hallur
The fast adoption of cutting-edge technology is transforming the telecom business. This article examines the telecom industry's future, emphasizing the importance of new technology. 5G connectivity offers unparalleled speeds and low latency, supporting innovative applications and services across industries. AI and ML are improving network performance, consumer experiences, and operational automation. The Internet of Things (IoT) is connecting more devices, making cities smarter and factories more efficient. Edge computing reduces latency and improves real-time data processing, which is crucial for autonomous cars and AR. Blockchain technology improves the security and transparency of telecom transactions, reducing fraud and data leaks. Quantum computing may deliver extraordinary computational power and secure communication channels. Cloud computing is vital for managing data traffic and developing novel services as the industry prepares for 6G technology, which promises faster speeds and more dependable connections. This article examines these technical changes and their effects on the telecom sector in various countries. It paints a picture of a dynamic, fast-changing industry in which customer requirements grow day by day and companies aim to retain their customers by providing high-quality services. The analysis also emphasizes how government policies and investments have initiated developments in the telecom industry in each region.
Some Studies on Two-degree-of-freedom Inversion-based Adaptive Control for Time-delay Systems
Authors: Santanu Mallick and Ujjwal Mondal
Background: Adaptive control provides a generic approach for modeling and controlling various complicated systems. In a non-minimum phase (NMP) system, inversion-based feedforward tracking is not achievable because the inverse transfer model is unstable. To overcome the set-point tracking problem of the non-minimum phase system, model reference adaptive control is used as inversion-based feedforward control and state feedback is used as feedback control in a two-degree-of-freedom (TDOF) structure. Here, the model reference adaptive control (MRAC) scheme is used to obtain the desired response of the system with a small steady-state error, whereas the state feedback technique stabilizes the system. The presence of a delay in the transfer function turns the system into a non-minimum phase system, and time-delay systems are widely used as benchmarks for designing different control methodologies.
Aims: In this study, a two-degree-of-freedom inversion-based model reference adaptive control for a time-delay system was implemented.
Objective: A two-degree-of-freedom inversion-based model reference adaptive control for a class of time-delay systems was implemented so that the system could track the reference input. It also provided stability to the given system.
Methods: Different techniques, such as two-degree-of-freedom inversion-based model reference adaptive control, Proportional Integral Derivative (PID) control, and Fractional Order PID (FOPID) control, were discussed for the stabilization of a class of time-delay systems. An inversion-based model reference adaptive control structure using the Massachusetts Institute of Technology (MIT) rule, along with a state feedback control technique, was applied to the delay system so that it could track the reference input.
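For readers unfamiliar with the MIT rule mentioned above, the toy simulation below adapts a single feedforward gain for a first-order plant so that it tracks a reference model; the plant, model, and adaptation gain are illustrative assumptions, and the sketch ignores the time delay and state feedback used in the paper.

```python
# Minimal sketch of MRAC gain adaptation with the MIT rule (first-order case).
import numpy as np

a, b = 1.0, 2.0            # assumed plant:            dy/dt  = -a*y   + b*u
am, bm = 1.0, 1.0          # assumed reference model:  dym/dt = -am*ym + bm*r
gamma, dt, T = 1.0, 0.001, 60.0

y = ym = theta = 0.0
for k in range(int(T / dt)):
    r = 1.0 if (k * dt) % 20 < 10 else -1.0   # square-wave reference
    u = theta * r                             # adjustable feedforward gain
    y  += dt * (-a * y + b * u)
    ym += dt * (-am * ym + bm * r)
    e = y - ym
    theta += dt * (-gamma * e * ym)           # MIT rule: dtheta/dt = -gamma*e*ym

print(f"adapted gain {theta:.3f}, ideal value bm/b = {bm / b:.3f}")
```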
Results: Analysis of the different techniques showed that the combined MRAC and state feedback control scheme provided a suitable tracking response. The unit step signal was used as the reference input. In a time-delay system, an initial undershoot was observed; however, using the two-degree-of-freedom inversion-based MRAC technique, the initial undershoot was almost nullified. The two-degree-of-freedom technique was also found to provide a better tracking response than the PID and FOPID control techniques, as elaborated in this study.
Conclusion: The results of the simulation using MATLAB indicated that the proposed two-degree-of-freedom inversion-based adaptive control technique provides a better tracking response compared to PID control and fractional order PID control. However, for the stabilization of time-delay systems, other methodologies may be implemented.
HA-SCINet based Carbon Emission Prediction of the Typical Park
Authors: Wang Zeli, Liu Zeyu, Song Xupeng, Wei Guangtao, Li Mingchun, Huang Hongjun, Shao Weiqi and Li Yuancheng
Background: With rapid economic growth and accelerating industrialization, the production activities of enterprises within industrial parks have increased significantly, leading to a continuous rise in carbon emissions. Under the “dual carbon” goals, predicting the carbon emissions of typical parks holds significant practical importance: it is not only a key measure for addressing climate change but also an important pathway toward sustainable development.
Objective: In order to predict the carbon emissions of the typical park more accurately, we propose a carbon emission prediction model, HA-SCINet.
Methods: The model uses a recursive downsampling-convolution-interaction architecture. In each layer, long-term dependencies in the time series data are extracted by HyperAttention. Then, through L layers of SCI-Blocks arranged in a binary-tree structure, downsampling interactive learning extracts both short-term and long-term dependencies. The extracted features are merged, reorganized, and added to the original time series to generate a new sequence with enhanced predictability. Finally, a fully connected network is employed to forecast the enhanced sequence. The carbon emission data of the typical park serve as input, and higher-accuracy predictions are obtained through the stacked K-layer HA-SCINet.
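The downsample-interaction step described above can be sketched compactly: the sequence is split into even and odd sub-sequences, each branch is filtered, and the two branches scale and correct each other. The PyTorch toy below illustrates that idea only; the HyperAttention module, layer widths, and tree stacking of the actual HA-SCINet are not reproduced.

```python
# Minimal sketch of an SCI-Block-style even/odd split with cross-branch interaction.
import torch
import torch.nn as nn

class SCIBlockSketch(nn.Module):
    def __init__(self, channels, hidden=32, kernel=3):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv1d(channels, hidden, kernel, padding=kernel // 2),
                nn.LeakyReLU(),
                nn.Conv1d(hidden, channels, kernel, padding=kernel // 2),
                nn.Tanh())
        self.phi, self.psi, self.rho, self.eta = branch(), branch(), branch(), branch()

    def forward(self, x):                        # x: (batch, channels, length)
        even, odd = x[..., ::2], x[..., 1::2]    # downsample into two halves
        odd_s = odd * torch.exp(self.phi(even))  # odd branch scaled by even branch
        even_s = even * torch.exp(self.psi(odd)) # even branch scaled by odd branch
        return even_s + self.eta(odd_s), odd_s - self.rho(even_s)

x = torch.randn(8, 1, 96)                        # e.g. 96 carbon-emission readings
even_out, odd_out = SCIBlockSketch(channels=1)(x)
print(even_out.shape, odd_out.shape)             # two (8, 1, 48) sub-sequences
```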
Results: The mean square error (MSE) and mean absolute error (MAE) of the HA-SCINet prediction model are 0.0819 and 0.204, respectively, outperforming the mainstream DLinear and NLinear models.
Conclusion: The experimental results show that the devised model outperforms the baselines in predicting carbon emissions and is better suited to forecasting carbon emissions in the context of the typical park.
Study of Cross-layer Design of the Shortest Path Routing and the SDMA in Wide-area Directed MANET
Background: To satisfy battlefield needs, the information transmission delay of Mobile Ad Hoc Networks (MANETs), which serve as the neural network and information transmission channel of the modern battlefield, must be kept low. The design of low-delay routing algorithms is therefore of great significance for improving the operational capability of wide-area directional MANETs. Traditional routing algorithm design is mostly based on the OSI/RM model rather than on global optimization, so the resulting algorithms lack adaptability and their performance is far from optimal.
Objective: An algorithm named MNCR (MAC and Network Cross-layer Routing), based on cross-layer optimization, is proposed to complete end-to-end data transmission with lower delay for particular applications such as a wide-area directed mobile ad hoc network (MANET) for cooperative strikes.
Methods: The proposed MNCR algorithm selects the shortest routing path based on the SDMA (Space Division Multiple Access) timetable applied and reported by the MAC layer. When a packet arrives, the node computes all possible paths and their corresponding delays according to the SDMA timetable and then selects the path with the lowest delay, which ensures that packets are transmitted to the destination.
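The delay computation above can be pictured as a Dijkstra-style search in which each hop must wait for its link's next SDMA slot. The sketch below is an illustrative toy with an assumed slot length, frame length, and four-node topology, not the MNCR implementation.

```python
# Minimal sketch: lowest-delay path when each link only transmits in its SDMA slots.
import heapq

SLOT, FRAME = 1.0, 8                          # assumed slot duration and frame length
timetable = {                                 # link -> slot indices it may use
    ("A", "B"): [0, 4], ("B", "D"): [1, 5],
    ("A", "C"): [2, 6], ("C", "D"): [3, 7],
}

def next_departure(link, t):
    """Earliest slot start >= t in which this link is allowed to transmit."""
    frame_start = (t // (FRAME * SLOT)) * FRAME * SLOT
    candidates = [frame_start + k * FRAME * SLOT + s * SLOT
                  for s in timetable[link] for k in (0, 1)]
    return min(c for c in candidates if c >= t)

def lowest_delay_path(src, dst, t0=0.0):
    best, queue = {src: t0}, [(t0, src, [src])]
    while queue:
        t, node, path = heapq.heappop(queue)
        if node == dst:
            return t - t0, path
        for (u, v) in timetable:
            if u == node:
                arrival = next_departure((u, v), t) + SLOT   # wait for slot, then send
                if arrival < best.get(v, float("inf")):
                    best[v] = arrival
                    heapq.heappush(queue, (arrival, v, path + [v]))

print(lowest_delay_path("A", "D"))            # -> (2.0, ['A', 'B', 'D'])
```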
Results: Theoretical analysis shows that the MNCR algorithm is less expensive than the classical shortest path algorithm. Simulation results show that the MNCR routing algorithm reduces end-to-end delay and ensures real-time transmission of information. At the same time, the algorithm effectively balances the load of network nodes and overcomes the “bottleneck effect” to which the shortest path algorithm is prone, effectively improving the reliability of information transmission.
Conclusion: The proposed MNCR algorithm can complete end-to-end data transmission with lower delay, so it suits particular applications such as wide-area directed mobile ad hoc networks (MANETs) for cooperative strikes.
Multi-view 3D Reconstruction based on Context Information Fusion and Full Scale Connection
Authors: Yunyan Wang, Yuhao Luo and Chao Xiong
Background: Multi-view stereo matching reconstructs a three-dimensional point cloud model from multiple views. Although learning-based methods achieve excellent results compared with traditional methods, existing multi-view stereo matching methods lose underlying details during feature extraction as the number of convolutional layers deepens, which degrades the quality of subsequent reconstruction.
Objective: The objective of this approach is to improve the completeness and accuracy of 3D reconstruction and to obtain a 3D point cloud model with richer texture and a more complete structure.
Methods: First, a context-semantic information fusion module is constructed in the feature extraction network (FPN), and feature maps containing rich context information are obtained through multi-scale dense connections. Subsequently, a full-scale skip connection is introduced in the regularization process to capture shallow detail information and deep semantic information at full scale and to capture the texture features of the scene more accurately, enabling reliable depth estimation.
Results: The experimental results on the DTU dataset show that the proposed CU-MVSNet reduces the completeness error by 3.58%, the accuracy error by 3.7%, and the overall error by 3.51% compared with the benchmark network. It also shows good generalization on the TnT dataset.
Conclusion: The CU-MVSNet method proposed in this paper can improve the completeness and accuracy of 3D reconstruction and obtain a 3D point cloud model with more detailed texture and a more complete structure.
Stability Analysis of Direct-drive Wind Turbines Connected To Weak Grid Using Linear Immunity Impedance Modeling
Authors: Can Ding, Houjun Yang, Yudong Shi and Ying Bo Ji
Background: When direct-drive wind turbines are connected to a weak grid, subsynchronous oscillation (SSO) problems occur frequently. Traditional control strategies suppress these oscillations poorly, seriously threatening the stability of system operation.
Methods: This paper first analyzes the factors that give rise to subsynchronous oscillation based on the established sequence impedance model of the grid-connected direct-drive wind power system. Impedance sensitivity analysis of the control parameters shows that the proportional gains of the current loop and of the phase-locked loop are the dominant factors creating the risk of SSO. Second, considering that Linear Active Disturbance Rejection Control (LADRC) offers faster response and stronger disturbance rejection than traditional Proportional-Integral (PI) control, this paper replaces the phase-locked-loop and current-loop PI controllers with LADRC controllers. A second-order extended state observer is employed to estimate and compensate system disturbances in real time.
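To show concretely what the second-order extended state observer does, the toy loop below estimates a slowly varying "total disturbance" on a first-order plant using the standard bandwidth-parameterised gains; the plant, gains, and disturbance are illustrative assumptions and not the paper's converter or PLL model.

```python
# Minimal sketch of a second-order linear extended state observer (LESO).
import numpy as np

b0, w0, dt = 1.0, 20.0, 0.001     # control gain, observer bandwidth, step size
beta1, beta2 = 2 * w0, w0**2      # bandwidth-parameterised observer gains

y, z1, z2 = 0.0, 0.0, 0.0         # plant output, estimate of y, estimate of disturbance
for k in range(2000):
    u = 1.0                                      # constant test input
    f = 0.5 * np.sin(2 * np.pi * k * dt)         # unknown "total disturbance"
    y += dt * (f + b0 * u)                       # toy first-order plant
    e = y - z1
    z1 += dt * (z2 + b0 * u + beta1 * e)         # state 1 tracks the output y
    z2 += dt * beta2 * e                         # state 2 tracks the disturbance f
print(f"disturbance {f:.3f}, estimate {z2:.3f}") # estimate follows f with small lag
```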
Results: Simulation and frequency-domain analysis on the MATLAB/Simulink platform show that, compared with traditional PI control and voltage feedforward control strategies, the direct-drive wind power grid-connected system using LADRC control can effectively suppress oscillations under different weak-grid conditions and possesses better robustness.
Conclusion: The direct-drive wind power grid-connected system equipped with LADRC control can effectively suppress oscillations under different weak grid conditions and shows good robustness.
A Novel Fault Location Method Based on ICEEMDAN-NTEO and Ghost-Asf-YOLOv8
Authors: Can Ding, Changhua Jiang, Fei Wang and Pengcheng Ma
Background: The rapid growth of distribution grids and the increase in load demand have made distribution grids play a crucial role in urban development. However, distribution networks are prone to failures caused by multiple events. These faults not only incur high maintenance costs but also result in reduced productivity and huge economic losses. Therefore, accurate and fast fault location methods are very important for the safe and stable operation of distribution systems.
Methods: First, the Ghost-Asf-YOLOv8 network is employed to assess the three-phase fault voltage travelling waveforms at both ends of the line, determine the time range of the fault occurrence, and separate the line-mode components. Subsequently, the ICEEMDAN algorithm is employed to decompose the line-mode components, yielding the IMF1 components, whose key feature information is then enhanced through the NTEO. Finally, the Ghost-Asf-YOLOv8 network is employed again to further narrow the time range of the initial travelling-wave head, enabling determination of the travelling-wave arrival times and calculation of the fault location.
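For orientation, the classical Teager energy operator that the NTEO refines is a three-sample formula that spikes where both the amplitude and the frequency of the signal rise, which is why it sharpens the travelling-wave head. The sketch below applies the plain TEO to a synthetic arrival; the NTEO modifications and the real simulation data are not reproduced.

```python
# Minimal sketch of the discrete Teager energy operator (TEO) for wave-head sharpening.
import numpy as np

def teager_energy(x):
    """psi[x](n) = x(n)^2 - x(n-1)*x(n+1), padded back to the input length."""
    psi = x[1:-1]**2 - x[:-2] * x[2:]
    return np.concatenate(([psi[0]], psi, [psi[-1]]))

t = np.arange(0, 1e-3, 1e-6)                      # 1 ms at 1 MHz sampling (assumed)
onset = 4e-4                                      # synthetic wave-head arrival time
wave = np.where(t > onset, np.exp(-(t - onset) / 5e-5), 0.0) * np.sin(2e5 * np.pi * t)
energy = teager_energy(wave + 0.01 * np.random.randn(t.size))
print("energy peak near sample", int(np.argmax(energy)), "(true onset at sample 400)")
```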
Results: Experiments are conducted on simulation data from the constructed hybrid line model. Comparison experiments between the TEO and NTEO algorithms show that the NTEO has good noise immunity when applied to fault location. In addition, the proposed ICEEMDAN-NTEO method is compared with fault location methods based on DWT and HHT, and the results show that it achieves higher accuracy. Finally, the lightweight YOLOv8 model captures the travelling-wave arrival time quickly and accurately, compensating for the shortcomings of the visualization data.
Conclusion: This work presents a novel fault localization method that integrates traditional and artificial intelligence techniques, offering rapid detection and minimal localization error.
Predictive Analysis of the Mental Health Risks of Music by EEG and Magnetic Resonance Imaging
Authors: Jiexin Chen, Yibing Cao and Lei Xu
Introduction: Current research on the mental health risks of music has shortcomings in data collection, individual differences, and evaluation criteria. For this reason, this article uses EEG and magnetic resonance imaging techniques grounded in neuroplasticity to provide a basis for early identification and prevention of music-related mental health risks.
Methods: First, EEG was used to perform neuroimaging tests on participants; it was observed that music stimulation causes specific changes in brain electrical activity, and the EEG features were preprocessed. Magnetic resonance imaging was then used to further reveal the structural and functional changes of the brain under music stimulation, revealing potential regulatory effects of music on mental health risks.
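One concrete preprocessing step implied above is turning a raw EEG channel into band powers (for example, the beta band reported in the Results). The sketch below estimates relative beta power with Welch's method on a synthetic channel; the sampling rate and signal are illustrative assumptions, not the study's recordings.

```python
# Minimal sketch: relative beta-band (13-30 Hz) power via Welch's PSD estimate.
import numpy as np
from scipy.signal import welch
from scipy.integrate import simpson

fs = 256                                          # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 20 * t) + 0.5 * np.random.randn(t.size)  # synthetic channel

freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)
beta = (freqs >= 13) & (freqs <= 30)
rel_beta = simpson(psd[beta], x=freqs[beta]) / simpson(psd, x=freqs)
print(f"relative beta power: {rel_beta:.2f}")
```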
Results: The average valence score of participants after listening to positive music increased from 3.5 to 7.15 points, a statistically significant increase in pleasure of 3.65 points (p<0.05); the influence of brainwave music on beta waves was also significant (p<0.01).
Discussion: The results of this study show that music has a significant impact on mental health. Positive music can significantly improve participants' pleasure, while negative music may lead to a decrease in pleasure.
Conclusion: This study demonstrates that music significantly influences brain activity and emotional states, as evidenced by EEG and MRI data. Positive music enhances pleasure and modulates beta waves, suggesting a protective effect on mental health, while negative music may pose emotional risks. These neurobiological markers offer objective tools for early prediction and personalized intervention in music-related mental health issues. Despite limitations in sample size and short-term observation, our findings advance the use of neuroimaging in identifying at-risk individuals and support the development of music-based preventive strategies.
Optimal Scheduling Method of Medium and Low Voltage AC-DC Hybrid Distribution Network Based on Edge Cloud Cooperation
Authors: Yongxiang Cai, Hongyan He, Yueqi Wen, Yuanlong Gao, Qiming Zhang, Song Zhang and Guohao Lin
Introduction: The rapid advancement of power electronics technology has endowed VSC-interconnected AC/DC hybrid distribution networks with superior operational advantages, including enhanced power supply capacity and improved compatibility with renewable energy integration. Furthermore, the progressive development of cloud computing and edge computing architectures has significantly advanced cloud-edge collaborative control technologies. To synergistically leverage these dual technological advancements, this paper proposes a coordinated cloud-edge control methodology for medium/low-voltage VSC-based AC/DC hybrid distribution systems.
Methods: This paper proposes a cloud-edge coordinated control methodology for VSC-interconnected medium/low-voltage AC/DC hybrid distribution networks. The methodological framework comprises three principal phases. First, a comprehensive analysis is conducted of the interaction mechanisms and regulatory capabilities between cloud servers and edge computing nodes. Subsequently, a hierarchical control strategy is developed through cloud-edge coordination, in which the cloud layer minimizes network losses while the edge layers minimize the weighted sum of power losses and three-phase imbalance. Finally, the multi-objective optimization model is systematically transformed into a second-order cone programming (SOCP) formulation, establishing an efficient convex optimization framework.
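The SOCP step can be illustrated on a single branch: in the branch-flow (DistFlow) relaxation, the nonconvex relation P^2 + Q^2 = l v is relaxed to a second-order cone constraint. The cvxpy toy below shows that constraint together with a loss-versus-PV-accommodation objective; the impedances, limits, and objective weights are illustrative assumptions, not the paper's full MV/LV model.

```python
# Minimal sketch: SOC (branch-flow) relaxation for one MV branch with a PV injection.
import cvxpy as cp

r, x_ = 0.02, 0.04             # branch resistance / reactance (p.u., assumed)
p_load, q_load = 0.8, 0.3      # downstream load (p.u., assumed)
v0 = 1.0                       # squared slack-bus voltage

P, Q = cp.Variable(), cp.Variable()        # sending-end branch power flows
l = cp.Variable(nonneg=True)               # squared branch current
v1 = cp.Variable(nonneg=True)              # squared receiving-end voltage
p_pv = cp.Variable(nonneg=True)            # PV injection to be accommodated

constraints = [
    P - r * l == p_load - p_pv,            # active power balance at the load bus
    Q - x_ * l == q_load,                  # reactive power balance
    v1 == v0 - 2 * (r * P + x_ * Q) + (r**2 + x_**2) * l,   # voltage drop equation
    cp.SOC(l + v0, cp.hstack([2 * P, 2 * Q, l - v0])),      # P^2 + Q^2 <= l * v0
    0.9**2 <= v1, v1 <= 1.1**2,            # voltage limits
    p_pv <= 0.5,                           # assumed installed PV capacity
]
prob = cp.Problem(cp.Minimize(r * l - 0.1 * p_pv), constraints)  # losses vs. PV use
prob.solve()
print(f"losses {r * l.value:.4f} p.u., PV accommodated {p_pv.value:.3f} p.u.")
```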
Results: A comprehensive case study was conducted on a representative AC/DC hybrid medium/low-voltage distribution network topology to validate the proposed methodology. The numerical results demonstrate that the medium-voltage (MV) side dispatch strategy achieves 52% network loss reduction compared to pre-dispatch conditions through active-reactive power coupling-enhanced photovoltaic accommodation. Furthermore, the cloud-edge coordinated framework enables deep exploitation of operational potential in low-voltage (LV) AC/DC interconnected feeder sections, effectively mitigating voltage violations while maintaining three-phase equilibrium constraints. Particularly, the synergistic optimization mechanism reduces power losses to 48% of baseline values through coordinated control of converter stations and intelligent edge devices.
Discussion: The results show that the proposed method can promote photovoltaic accommodation in medium- and low-voltage AC/DC hybrid distribution networks with a high proportion of renewable generation, make full use of the advantages of DC lines in new-energy hosting capacity and of flexible equipment in regulation capability, and ensure that the distribution network voltage and the three-phase imbalance do not exceed their limits during periods of high photovoltaic output. However, the proposed method has certain limitations: it places high requirements on global communication, so for low-voltage station areas with incomplete communication and measurement, distributed control and similar methods are more suitable.
Conclusion: This study addresses the challenge of insufficient photovoltaic (PV) hosting capacity caused by large-scale distributed PV integration in medium/low-voltage distribution networks. Physically, we develop a hybrid AC/DC distribution network topology leveraging the flexible power dispatch capabilities of voltage source converters (VSCs), thereby overcoming the conventional radial topology constraints. Computationally, a cloud-edge coordinated control architecture driven by distributed computing paradigms is proposed, which synergistically exploits the regulation potential of low-voltage (LV) feeder sections through two coordinated mechanisms: 1) A hierarchical optimization framework that decouples system-level objectives (cloud layer) and local constraints (edge layer), significantly enhancing computational efficiency; 2) Dynamic resource allocation that fully utilizes edge computing nodes for real-time adjustment while maintaining global optimality through cloud-based coordination.
Pulmonary Nodule Detection Using RBB-Based Optimised Yolov8x-C2f Network
Authors: Swati Chauhan, Nidhi Malik and Rekha Vig
Introduction: Malignant lung nodules are a major cause of lung cancer-related mortality and rank among the most prevalent cancers worldwide. Due to a lack of contrast, small nodules blend with their surroundings and other structures, making them difficult to detect efficiently during the diagnostic phase and challenging for the radiologist to classify as malignant or benign. This study evaluates the model's performance in rapidly and accurately detecting nodules from lung CT scans.
Methods: Nodule detection is conventionally accomplished using diagnostic procedures such as radiographic imaging and computed tomography (CT). However, these methods are not always effective in spotting tiny nodules and can expose patients to radiation. Consequently, there has been extensive research on deep learning for processing images and identifying nodules in the lungs. To improve the model's accuracy, our method employs a residual bounding-box-based optimized “You Only Look Once version 8x-Coordinates-To-Features” (YOLOv8x-C2f) model in conjunction with a handful of preprocessing steps.
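As a rough stand-in for the training loop behind the optimised model, the sketch below uses the stock Ultralytics YOLOv8x detector; the residual-bounding-box and C2f modifications described in the paper are not reproduced, and the dataset YAML and image paths are hypothetical.

```python
# Minimal sketch: training and running a stock YOLOv8x detector on CT-slice images.
from ultralytics import YOLO

model = YOLO("yolov8x.pt")                       # pretrained detection weights
model.train(data="luna16_nodules.yaml",          # hypothetical LUNA16 slice dataset config
            epochs=100, imgsz=640, batch=16)
metrics = model.val()                            # reports precision, recall, mAP50
results = model.predict("ct_slice_0420.png",     # hypothetical CT slice
                        conf=0.25)               # confidence threshold for detections
```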
Results: The model is evaluated on the Lung Image Database Consortium and Image Database Resource Initiative (LIDC/IDRI) dataset, acquired through the lung nodule analysis (LUNA16) grand challenge. With a mean average precision (mAP50) of 0.70 and a precision of 89%, the proposed model achieves an accuracy of 95.2% in nodule recognition with a confidence factor.
Conclusion: The study demonstrates that the model's superior architecture and features can accurately identify and localize nodules, enhancing overall performance relative to state-of-the-art approaches.
Short-term Load Forecasting Based on K-medoids Clustering and XGB-Tide Model
Authors: Bohao Sun, Yuting Pei, Bo Yan, Zesen Wang and Liying Zhang
Introduction: Electric power load is significantly influenced by weather conditions, making accurate load prediction under varying weather scenarios essential for effective planning and stable operation of the power system. This paper introduces a short-term load forecasting method that combines k-Medoids clustering and XGB-TiDE. Initially, k-Medoids clusters the original load data into categories such as sunny, high-temperature, and rain/snow days. Subsequently, XGBoost identifies critical features within these subsequences. The combined forecast model, XGB-TiDE, is then tailored for each subsequence; the TiDE model's predictions are refined point-by-point using the XGBoost results to derive the final short-term load forecasts. An empirical analysis using real power load data from a specific region demonstrates that the proposed model achieves superior accuracy, especially under extreme weather conditions such as high temperatures and precipitation.
Objective: Addressing the limitations of short-term load forecasting under extreme weather conditions, this paper introduces a novel approach that leverages k-Medoids clustering and the XGB-TiDE model to enhance forecasting accuracy. This method strategically segments power load data into meaningful clusters before applying the robust predictive capabilities of XGB-TiDE, aiming for a significant improvement in forecast precision.
Methods: Initially, k-Medoids clusters the original load data into categories such as sunny, high-temperature, and rain/snow days. Subsequently, XGBoost identifies critical features within these subsequences. The combined forecast model, XGB-TiDE, is then tailored for each subsequence; the TiDE model's predictions are refined point-by-point using the XGBoost results to derive the final short-term load forecasts.
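The first two stages above (regime clustering, then per-regime feature screening) can be sketched as follows; the TiDE forecaster and the point-by-point correction are omitted, and the CSV file, feature columns (assumed numeric), and cluster count are illustrative assumptions.

```python
# Minimal sketch: k-Medoids weather-regime clustering + XGBoost feature ranking.
import pandas as pd
import xgboost as xgb
from sklearn_extra.cluster import KMedoids       # pip install scikit-learn-extra

df = pd.read_csv("regional_load.csv")            # hypothetical numeric load + weather data
weather = df[["temperature", "humidity", "precipitation"]]

# 1) k-Medoids splits days into weather regimes (e.g. sunny / hot / rain-snow).
df["regime"] = KMedoids(n_clusters=3, random_state=0).fit_predict(weather)

# 2) Within each regime, XGBoost ranks features for the downstream forecaster.
for regime, sub in df.groupby("regime"):
    X = sub.drop(columns=["load", "regime"])
    booster = xgb.XGBRegressor(n_estimators=200, max_depth=4).fit(X, sub["load"])
    ranking = pd.Series(booster.feature_importances_, index=X.columns)
    print(f"regime {regime}: top features\n{ranking.nlargest(3)}")
```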
Results: The model presented in this study demonstrates outstanding performance across all weather conditions, consistently achieving lower mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE) compared to other models. For instance, on a sunny day, this model records an MAE of 8.49, RMSE of 10.29, and MAPE of 2.55%, markedly surpassing the Autoformer, which shows an MAE of 18.29, RMSE of 22.33, and MAPE of 5.50%. These results underscore the superior accuracy of our proposed forecasting approach.
Conclusion: An empirical analysis using real power load data from a specific region demonstrates that our proposed model achieves superior accuracy, especially under extreme weather conditions such as high temperatures and precipitation.
EMIF: Equivariant Multimodal Medical Image Fusion Network Via Super Token and Haar Wavelet Downsampling
Authors: Yukun Zhang, Lei Wang, Zizhen Huang, Yaolong Han, Shanliang Yang and Bin Li
Background: Multimodal medical image fusion is a core tool for enhancing the clinical utility of medical images by integrating complementary information from multiple images. However, existing deep-learning-based fusion methods are not effective at extracting key target features and tend to produce blurry results.
Objective: The main objective of the paper is to propose a medical image fusion method that effectively extracts features from source images and preserves them in the fused results.
Methods: The proposed method employs prior knowledge and a dual-branch U-shaped structure to extract both local and global features from images of different modalities. A novel Transformer module is designed to capture global correlations at the super-pixel level. Each feature extraction module uses Haar wavelet downsampling to reduce the spatial resolution of the feature maps while preserving as much information as possible, effectively reducing information uncertainty.
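Haar wavelet downsampling itself is simple to picture: each 2x2 block is turned into one approximation coefficient and three detail coefficients, so the resolution halves while the information moves into extra channels. The PyTorch sketch below shows this generic operation, not the paper's exact layer.

```python
# Minimal sketch: Haar wavelet downsampling of a feature map (B, C, H, W) -> (B, 4C, H/2, W/2).
import torch

def haar_downsample(x):
    a = x[..., 0::2, 0::2]      # top-left pixel of each 2x2 block
    b = x[..., 0::2, 1::2]      # top-right
    c = x[..., 1::2, 0::2]      # bottom-left
    d = x[..., 1::2, 1::2]      # bottom-right
    ll = (a + b + c + d) / 2    # approximation band
    lh = (a - b + c - d) / 2    # horizontal detail
    hl = (a + b - c - d) / 2    # vertical detail
    hh = (a - b - c + d) / 2    # diagonal detail
    return torch.cat([ll, lh, hl, hh], dim=1)

x = torch.randn(1, 64, 128, 128)        # e.g. a CT/MRI feature map
print(haar_downsample(x).shape)         # torch.Size([1, 256, 64, 64])
```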
Results: Extensive experiments on public medical image datasets and a biological image dataset demonstrated that the proposed method achieves superior performance in both qualitative and quantitative evaluations.
Conclusion: This paper applies prior knowledge to medical image fusion and proposes a novel dual-branch U-shaped medical image fusion network. Compared with nine state-of-the-art fusion methods, the proposed method produces better-fused results with richer texture details and better visual quality.
Comparative Analysis of CNN Performances Using CIFAR-100 and MNIST Databases: GPU vs. CPU Efficiency
Authors: Rania Boukhenoun, Hakim Doghmane, Kamel Messaoudi and El-Bay Bourennane
Introduction: Convolutional Neural Networks (CNNs) and deep learning algorithms have significantly advanced image processing and classification. This study compares three CNN architectures (VGG-16, ResNet-50, and ResNet-18) and evaluates their performance on the CIFAR-100 and MNIST datasets. Training time is prioritized as a critical metric, along with test accuracy and training loss, under varying hardware configurations (NVIDIA GPU and Intel CPU).
Methods: Experiments were conducted on an Ubuntu 22.04 system using the PyTorch framework. The hardware configurations included an NVIDIA GeForce GTX 1660 Super GPU and an Intel Core i5-10400 CPU. The CIFAR-100 dataset, containing 60,000 color images across 100 classes, and the MNIST dataset, comprising 70,000 grayscale images, were used for benchmarking.
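The timing comparison reduces to running the same training loop on each available device and measuring wall-clock time. The sketch below does this for one epoch of ResNet-18 on CIFAR-100 under PyTorch; batch size, transforms, and the single-epoch budget are simplifications relative to the full benchmark.

```python
# Minimal sketch: timing one ResNet-18 training epoch on CIFAR-100, GPU vs. CPU.
import time
import torch
import torch.nn as nn
import torchvision
from torchvision import transforms
from torchvision.models import resnet18

tf = transforms.ToTensor()
train = torchvision.datasets.CIFAR100(root="./data", train=True, download=True, transform=tf)
loader = torch.utils.data.DataLoader(train, batch_size=128, shuffle=True, num_workers=2)

devices = (["cuda"] if torch.cuda.is_available() else []) + ["cpu"]
for device in devices:
    model = resnet18(num_classes=100).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    start = time.time()
    for images, labels in loader:               # one epoch is enough for a comparison
        images, labels = images.to(device), labels.to(device)
        opt.zero_grad()
        loss_fn(model(images), labels).backward()
        opt.step()
    print(f"{device}: {time.time() - start:.1f} s per epoch")
```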
Results: The results highlight the superior efficiency of GPUs, with training times reduced by up to 10x compared to CPUs. For CIFAR-100, VGG-16 required 13,000 seconds on the GPU versus 130,000 seconds on the CPU, while ResNet-18, the most time-efficient model, completed training in 150 seconds on the GPU and 1,740 seconds on the CPU. ResNet-50 achieved the highest test accuracy (~80%) on CIFAR-100. On MNIST, ResNet-18 was the most efficient, with training times of 185 seconds on the GPU and 22,000 seconds on the CPU.
Discussion: This study highlights the clear advantage of GPUs over CPUs in reducing training times, particularly for complex models such as VGG-16 and ResNet-50. ResNet-50 achieved the highest accuracy, while ResNet-18 was the most time-efficient. However, the use of simpler datasets (MNIST and CIFAR-100) may not fully capture real-world complexity.
Conclusion: This study emphasizes the importance of hardware selection for deep learning workflows. Using the CIFAR-100 and MNIST datasets, we demonstrated that GPUs significantly outperform CPUs, achieving up to a 10x reduction in training time while maintaining competitive accuracy. Among the architectures tested, ResNet-50 delivered the highest test accuracy (~80%) on CIFAR-100, demonstrating superior feature extraction capabilities compared to VGG-16 and ResNet-18. Meanwhile, ResNet-18 proved to be the most time-efficient architecture, completing training in 150 seconds on the GPU, a significant improvement over VGG-16's 13,000 seconds. These results highlight the advantage of residual connections in reducing training complexity and achieving higher performance. The results underscore the critical role of both architecture selection and hardware optimization in advancing deep learning workflows.