Recent Advances in Computer Science and Communications - Online First
Machine Learning Based Cancer Detection and Classification: A Critical Review of Approaches and Performance
Authors: Pragya Singh and Sanjeev Kumar
Available online: 17 March 2025
Background: Cancer is a deadly disease that occurs in many forms and cannot be cured without proper treatment, which makes early detection crucial. The objective of this study is to examine, assess, classify, and explore recent advances in the detection of different cancer types in the human body, such as breast, brain, lung, liver, and skin cancer.
Method: This study explores machine learning tools and methods, both supervised and unsupervised, as well as deep learning techniques involved in treatment procedures, and highlights current issues and directions for future research. The review covers advanced machine learning, deep learning, and artificial intelligence algorithms used for the detection and classification of different cancer types, including breast, skin, and lung cancer and brain tumors.
Results: This paper reviews advanced techniques and compares standard datasets for the identification of skin, breast, and lung cancer and brain tumors, and evaluates these techniques in terms of F-measure, sensitivity, specificity, accuracy, and precision.
Conclusion: This article reviews successive improvements in machine-learning-based cancer detection over the past decades, covering various cancer types such as breast, brain, lung, liver, and skin cancer, and focuses on the use of machine learning in cancer diagnosis and treatment.
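The evaluation metrics named in the Results can all be derived from a binary confusion matrix; a minimal sketch (the counts below are illustrative, not drawn from any reviewed study):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard detection metrics from binary confusion-matrix counts."""
    precision   = tp / (tp + fp)
    sensitivity = tp / (tp + fn)            # recall / true-positive rate
    specificity = tn / (tn + fp)            # true-negative rate
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    f_measure   = 2 * precision * sensitivity / (precision + sensitivity)
    return {"precision": precision, "sensitivity": sensitivity,
            "specificity": specificity, "accuracy": accuracy,
            "f_measure": f_measure}

# e.g. 80 true positives, 10 false positives, 90 true negatives, 20 false negatives
metrics = classification_metrics(tp=80, fp=10, tn=90, fn=20)
```

Note that accuracy alone can be misleading on imbalanced medical datasets, which is why the reviewed studies report sensitivity and specificity alongside it.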
A Study on Privacy Preserving Techniques in Fog Computing: Issues, Challenges, and Solutions
Authors: Sabitha Banu A, Isbudeen Noor Mohamed, Shabbir Ahmed and Mehdi Gheiseri
Available online: 19 February 2025
Privacy plays a substantial role in both public and private databases; the healthcare industry and government sectors, in particular, require highly confidential data transmission. These data often contain personal information that must be concealed throughout processing and transmission between terminal devices and cloud data centers, such as usernames, IDs, account information, and other sensitive details. Recently, fog computing has been widely adopted for such data transmission, storage, and network interconnection because of its low latency, mobility, reduced computational cost, position awareness, data localization, and geographical distribution; it complements cloud computing and supports the widespread deployment of IoT applications. Since fog servers and nodes provide services to massive numbers of end users, privacy is a foremost concern for fog computing, and delivering protected data transfer poses several challenges, making the development of privacy-preservation strategies particularly desirable. This paper presents a systematic literature review (SLR) of privacy-preservation methods developed for fog computing in terms of issues, challenges, and solutions. Its main objective is to categorize the privacy-related research methods and solutions published between 2012 and 2022 using analytical and statistical methods, and then to present specific practical issues in this area. For each issue, the merits and drawbacks of the suggested fog security methods are explored, and suggestions are made for tackling the privacy concerns of fog computing. Finally, several motivational directions and open concerns for building, deploying, and maintaining fog systems are presented.
A Reinforcement Learning Inspired Approach for Efficient Cognitive Radio Network Routing
Authors: Parul Tomar, Ranjita Joon, Gyanendra Kumar and P Karthik
Available online: 28 January 2025
Introduction: One fundamental characteristic of Cognitive Radio Networks (CRNs) is their dynamic operating environment, where network conditions, such as the activities of Primary Users (PUs), change continuously over time. While Secondary Users (SUs) are engaged in communication, if a PU reappears on an SU's channel, the SU is required to vacate the channel and switch to another available channel. Thus, finding a stable route that minimizes frequent channel switches is a challenging task in CRNs.
Method: Existing solutions to reduce PU interference often overlook the energy consumption of nodes when forming clusters, focusing solely on the minimum number of common channels in a cluster. Consequently, these schemes suffer from frequent channel switches due to PU appearances. The proposed Cognitive Radio Network Routing (CRNR) approach aims to minimize frequent channel switches by employing a Reinforcement Learning (RL) technique called Q-Learning to select stable routes with channels exhibiting higher OFF-state probabilities.
Result: This strategy ensures that selected routes avoid rerouting by prioritizing channels with higher OFF-state probabilities. Experimental studies demonstrate that the CRNR approach enhances network throughput and reduces interference compared with existing techniques. CRNR also represents a novel application of AI in wireless networks through its use of Q-Learning, a reinforcement learning technique.
Conclusion: This work bridges the gap between machine learning and network design, showing how intelligent algorithms can optimize communication decisions in real time, and it may inspire further exploration of AI-driven techniques in network management and beyond.
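The abstract does not give the authors' exact CRNR formulation, but the core idea of Q-Learning over channel OFF-state probabilities can be sketched as a single-state learner whose reward favors channels free of primary users (the ±1 reward, learning rate, and channel probabilities below are assumptions for illustration):

```python
import random

def q_learning_channel(off_prob, episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Learn channel preferences from per-channel PU OFF-state probabilities.

    Reward is +1 when the chosen channel is OFF (no primary user active),
    -1 otherwise, so Q-values converge toward channels with high OFF
    probability, i.e. the stable channels CRNR seeks."""
    rng = random.Random(seed)
    q = [0.0] * len(off_prob)
    for _ in range(episodes):
        # epsilon-greedy channel selection
        if rng.random() < eps:
            a = rng.randrange(len(q))
        else:
            a = max(range(len(q)), key=q.__getitem__)
        reward = 1.0 if rng.random() < off_prob[a] else -1.0
        # single-state Q-learning update
        q[a] += alpha * (reward + gamma * max(q) - q[a])
    return q

# channel 1 has the highest OFF-state probability, so it should be preferred
q = q_learning_channel([0.2, 0.9, 0.5])
```

In a full routing scheme, the state would additionally encode the current node and route, but the update rule is the same.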
Real-Time Analysis of Sensitive Data Security in Manuscript Transition
Authors: Farhat Firoz, Jyoti Srivastava, Fahad A. Al-Abbasi and Firoz Anwar
Available online: 23 January 2025
Background: Ensuring data security during research manuscript transit on a journal website requires continuous improvement of cybersecurity measures and adherence to best practices. Research data loss can have significant negative consequences across multiple dimensions, including time and financial loss. The present research investigates security vulnerabilities during the real-time transit of manuscripts on a journal website.
Material and Methods: Website access: The journal website was accessed, and manuscript components (main manuscript, figures, tables, graphical abstract, funding sources, suggested reviewers, and cover letter) were uploaded.
Operating system: Kali Linux, designed for penetration testing and security auditing, was used.
Tools and software: Nmap (7.95-2) for network discovery and security auditing, Nikto (2.5.0) for web-server vulnerability scanning, Tor (13.0.13) to anonymize web activity, Firefox (127.0.2) as the web browser, and VMware Workstation running Kali Rolling (2023.2) in a virtual environment.
Testing phase: Initial upload of the manuscript and supplementary materials. Upload of figures, tables, and graphical abstract. Inclusion of funding sources, suggested reviewers, and cover letter.
Data Collection and Analysis: Network traffic and potential vulnerabilities were monitored with Nmap, Nikto, and Tor.
Activities were conducted in a VMware Workstation virtual environment to ensure a controlled and replicable setup.
Output measures: Potential security gaps or vulnerabilities that could lead to data theft during manuscript transit were identified and documented.
Results: An Nmap scan of XXXXXXXX.com (IP: yyyyyyyyyyy) revealed six open ports, including 80 (HTTP, Apache), 443 (SSL/SMTP, Exim), 587 (SMTP, Exim), 993 (IMAPS), and 995 (POP3S). Each service showed potential vulnerabilities. The scan took 86.15 seconds.
Conclusion: The open ports, the presence of potentially outdated services, and the possibility of incomplete detection due to filtered ports pose a high risk of exposing sensitive data during manuscript transit on the journal's website.
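The study's port findings can be reproduced in spirit with a basic TCP connect check; this is a simplified stand-in for Nmap, and the host, timeout, and service labels below are illustrative assumptions, not the study's configuration:

```python
import socket

# Ports highlighted in the study's scan results (service names are common defaults)
PORTS = {80: "http", 443: "ssl", 587: "smtp-submission", 993: "imaps", 995: "pop3s"}

def check_port(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(host):
    """Map each port of interest to (service_name, is_open)."""
    return {port: (name, check_port(host, port)) for port, name in PORTS.items()}
```

Only scan hosts you are authorized to test; a connect check like this confirms reachability but, unlike Nmap's version detection, says nothing about whether the listening service is outdated.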
Image Encryption for Indoor Space Layout Planning
Authors: Ping Ye and Jihoon Kweon
Available online: 30 December 2024
Background: Indoor space layout planning and design involves sensitive and confidential information. To enhance the security and confidentiality of such data, the study introduces an advanced image encryption algorithm based on simultaneous chaotic systems and bit-plane permutation diffusion, aiming to provide a more secure and reliable approach to indoor space layout design.
Methods: The study proposes an image encryption algorithm that incorporates simultaneous chaotic systems and bit-plane permutation diffusion. This algorithm is then applied to the process of indoor space layout planning and design. Comparative analysis is conducted to evaluate the performance of the proposed algorithm against other existing methods, and comparative testing of indoor space layout planning and design methods is carried out to assess the overall effectiveness of the research method.
Results: In the algorithm comparison test, information entropy, adjacent-pixel distribution, and response time were selected as evaluation indices. The results demonstrated that the improved image encryption algorithm exhibited superior performance in terms of information entropy (average information entropy of 7.9990), anti-noise-attack capability (PSNR of 37.58 dB), and anti-differential-attack capability (NPCR of 99.6% and UACI of 33.5%) compared to the benchmark algorithm. In the practical application test, the study used space utilization, functionality, security, ease of use, confidentiality, flexibility, and other evaluation indicators. A comparative analysis of various interior design projects revealed that the proposed indoor space layout planning and design method was notably superior to the comparison method across all indicators, showing particular advantages in space utilization (92.5% in a modern apartment design), functionality score (9.5 in a future living experience museum design), and safety assessment.
Conclusion: These key results demonstrate that the improved image encryption algorithm and the designed indoor space layout planning method have substantial practical applications and are expected to enhance security and confidentiality in the field of indoor space layout planning, thereby providing users with a better experience.
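The three security metrics reported above have standard definitions that are easy to state in code; a minimal sketch over random stand-in cipher images (a well-encrypted 8-bit image should score close to the ideal entropy of 8, NPCR near 99.6%, and UACI near 33.5%):

```python
import numpy as np

def entropy(img):
    """Shannon entropy (bits/pixel) of an 8-bit image; 8.0 is the ideal for a cipher image."""
    hist = np.bincount(img.ravel(), minlength=256) / img.size
    nz = hist[hist > 0]
    return float(-(nz * np.log2(nz)).sum())

def npcr_uaci(c1, c2):
    """NPCR: percentage of differing pixels between two cipher images.
    UACI: mean absolute intensity change, normalised by 255, as a percentage."""
    npcr = (c1 != c2).mean() * 100
    uaci = (np.abs(c1.astype(int) - c2.astype(int)) / 255).mean() * 100
    return npcr, uaci

# Random stand-ins for two cipher images of the same plaintext with 1 pixel changed
rng = np.random.default_rng(1)
c1 = rng.integers(0, 256, (64, 64), dtype=np.uint8)
c2 = rng.integers(0, 256, (64, 64), dtype=np.uint8)
```

In a real differential-attack test, c1 and c2 would be produced by encrypting two plaintexts differing in a single pixel; uniform random arrays merely show the ideal-case values.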
Deep Neural Network Framework for Predicting Cardiovascular Diseases from ECG Signals
Authors: Tanishq Soni, Deepali Gupta, Mudita Uppal, Sapna Juneja, Yonis Gulzar and Kayhan Zrar Ghafoor
Available online: 30 December 2024
Introduction: Cardiovascular disease (CVD), a primary cause of death worldwide, includes a variety of heart-related disorders such as heart failure, arrhythmias, and coronary artery disease (CAD), in which plaque buildup narrows the blood vessels supplying the heart muscle and causes angina or heart attacks. Genetics, congenital anomalies, poor diet, lack of exercise, smoking, and chronic conditions such as hypertension and diabetes can cause cardiac disease.
Method: Symptoms can range from chest pain and shortness of breath to exhaustion and palpitations, and diagnosis usually involves a medical history, physical examination, electrocardiograms (ECGs), and stress testing. Lifestyle adjustments, medicines, angioplasty, and bypass grafts or heart transplants are possible treatments, and preventive measures include healthy living, risk-factor management, and frequent checkups. Beyond these measures, advanced algorithms can analyze massive volumes of ECG and MRI data to find patterns and anomalies that humans may overlook.
Results: Deep learning models increase the accuracy and speed of diagnosing arrhythmia, coronary artery disease, and heart failure, and they enable predictive analytics, early intervention, and personalized treatment programs, improving cardiac care outcomes. The proposed DNN model has a three-layer architecture with input, hidden, and output layers; within the hidden stage, two layers, namely a dense layer and a batch normalization layer, are added to enhance accuracy.
Conclusion: Three optimizers, namely Adam, AdaGrad, and AdaDelta, were tested over 50 epochs with a batch size of 32 for predicting cardiovascular disease. The Adam optimizer achieved the highest accuracy of 85% with the proposed deep neural network.
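The abstract does not publish the exact layer sizes, so the sketch below shows only the forward pass of a comparable dense-plus-batch-normalization stack in plain NumPy; the input width, hidden width, and five output classes are hypothetical stand-ins, not the authors' configuration:

```python
import numpy as np

def dense(x, w, b):
    """Fully connected layer."""
    return x @ w + b

def batch_norm(x, eps=1e-5):
    """Normalise each feature over the batch (inference-style, no learned scale/shift)."""
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 12))                 # batch of 32 ECG feature vectors (12 features, assumed)
w1, b1 = rng.normal(size=(12, 64)), np.zeros(64)
w2, b2 = rng.normal(size=(64, 5)), np.zeros(5)  # 5 hypothetical CVD classes

h = relu(batch_norm(dense(X, w1, b1)))        # hidden stage: dense + batch norm
probs = softmax(dense(h, w2, b2))             # output stage: class probabilities
```

Training with Adam, AdaGrad, or AdaDelta would be handled by a framework such as Keras or PyTorch; this sketch only illustrates the layer ordering the abstract describes.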
Artificial Intelligence for Cardiovascular Diseases
Authors: Mohd Qasid Lari, Deepak Kumar, Ajay Kumar, Yogesh Murti, Prashant Kumar Yadav and Dileep Kumar
Available online: 30 December 2024
Globally, cardiovascular disease [CVD] continues to be a major cause of death. Recent advancements in Artificial Intelligence [AI] present revolutionary opportunities for the diagnosis, treatment, and prevention of this condition. In this paper, we mainly review the applications of AI in CVDs along with its limitations and challenges. AI algorithms can quickly and precisely analyze medical images, such as CT scans, X-rays, and ECGs, helping with early and more accurate identification of a variety of CVDs. AI models can also analyze patient data to identify those at high risk of developing CVD, allowing for early intervention and preventive measures. AI systems are likewise capable of analyzing complicated medical data to provide individualized therapy recommendations based on the requirements and traits of each patient, and during patient meetings, AI-powered solutions can assist healthcare practitioners by offering real-time insights and recommendations, which may improve treatment outcomes. Machine learning [ML], a branch of AI and computer science, has also been employed to uncover complex interactions among clinical variables, leading to more accurate predictive models for major adverse cardiovascular events [MACE]; for example, combining clinical data with stress test results has improved the detection of myocardial ischemia, enhancing the ability to predict future cardiovascular outcomes. This paper focuses on current AI applications in different CVDs and also discusses precision medicine and targeted therapy for these cardiovascular problems.
A Study on Learning Resources Recommendation Based on Multi-Domain Fusion Network
Authors: ShuQin Zhang, HaoRan Wang and XinYu Su
Available online: 30 December 2024
Background: Collaborative filtering algorithms recommend learning resources in a one-sided way, and existing knowledge graph convolutional networks cannot deeply mine the neighbourhood information of learning resource nodes in application scenarios with little neighbourhood information. To address these problems, a multi-domain fusion convolutional network learning resource recommendation model based on knowledge graphs is proposed here.
Objective: This study aimed to improve the accuracy and personalization of learning resource recommendations.
Methods: First, the model mapped learner nodes, learning resources, and their neighbour nodes into low-dimensional dense vectors. Second, the multi-domain fusion layer and the multi-domain aggregator were used to obtain the fused multi-domain learning resource vector. Finally, the learner vector and the multi-domain learning resource vector were fed into the prediction layer to calculate the interaction probability.
Results: To verify the effectiveness of the algorithm, we conducted comparative experiments on the publicly available MOOPer and MOOCCubeX datasets. The experimental results showed that the proposed model outperformed baseline models, such as CKE, MKR, KGCN, DEKGCN, and KGIN, on evaluation metrics such as AUC, ACC, and F1. Moreover, when neighborhood information was limited, the AUC, ACC, and F1 values of the proposed model remained the best.
Conclusion: The proposed model outperforms the best baseline model, demonstrating its effectiveness.
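The final prediction step in the Methods, feeding a learner vector and a fused resource vector into a prediction layer, is commonly realized as a sigmoid over an inner product; a minimal NumPy sketch with a simple mean aggregator (the aggregator choice, embedding dimension, and neighbour count are assumptions, not the authors' exact multi-domain design):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def aggregate(item_vec, neighbor_vecs):
    """Fuse neighbour embeddings into the item embedding (sum-style aggregator)."""
    return item_vec + neighbor_vecs.mean(axis=0)

def interaction_prob(learner_vec, fused_item_vec):
    """Predicted probability that the learner interacts with the resource."""
    return float(sigmoid(learner_vec @ fused_item_vec))

rng = np.random.default_rng(2)
d = 16                                   # embedding dimension (illustrative)
learner = rng.normal(size=d)             # learner node embedding
item = rng.normal(size=d)                # learning resource embedding
neighbors = rng.normal(size=(5, d))      # embeddings of 5 knowledge-graph neighbours

p = interaction_prob(learner, aggregate(item, neighbors))
```

The paper's multi-domain fusion layer replaces this single mean with several domain-specific aggregations; the scoring step, however, still reduces to a probability in (0, 1) of learner-resource interaction.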
Cable Fault Detection Based on Improved Deep Convolutional Neural Network
Authors: Xin Chen, Hongxiang Xue, Xing Yang and Qi’an Ding
Available online: 30 December 2024
Background: The high-voltage cable is a critical component of power transmission systems, making regular inspections essential for detecting potential hazards in time, scheduling maintenance, and avoiding safety accidents.
Objective: This paper aims to use deep learning algorithms to improve the precision and timeliness of cable fault detection, thereby ensuring safe and secure power system operation.
Methods: Automatic cable fault detection based on YOLOv8s was conducted in this study to assist the power sector in automatically detecting cable faults.
Results: PConv and BiFPN networks were added to the backbone network to improve the feature fusion performance of the model, and the WIoU loss function was modified to enhance the model's identification capabilities.
Conclusion: The proposed method allows for the rapid detection of cable faults by analyzing three common fault types: “thunderbolt,” “wear,” and “break.” By deploying this approach on edge computing devices mounted on UAVs, automatic inspection of power faults can be effectively achieved.
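The WIoU modification itself is not specified in the abstract, but every IoU-family detection loss builds on the plain intersection-over-union of a predicted and a ground-truth box, which can be sketched as:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # overlap rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

WIoU then reweights the resulting 1 − IoU loss per box (using a distance-based focusing term) so that medium-quality anchors dominate training; the reweighting details are part of the WIoU paper, not this abstract.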
Lightweight Research on Fatigue Driving Face Detection Based on YOLOv8
Authors: Yin Lifeng and Ding Ziyuan
Available online: 23 December 2024
Introduction: With the rapid development of society, motor vehicles have become one of the main means of transportation. However, as the number of motor vehicles continues to increase, traffic accidents also continue to occur, posing serious threats to people's lives and property. Fatigue driving is one of the leading causes of traffic accidents.
Method: To address this problem, a target detection algorithm called VA-YOLO is designed to improve the speed and accuracy of facial recognition for fatigue detection. The algorithm employs a lightweight backbone network, VanillaNet, instead of the traditional backbone network, which reduces the model's computational cost and parameter count. The SE attention mechanism is also introduced to strengthen the model's attention to target features, further improving detection accuracy. Finally, the SIoU loss function is used for bounding box regression to reduce the error.
Result: The experimental results show that, compared to YOLOv8n, the VA-YOLO algorithm improves accuracy by 1.3% while reducing the number of parameters by 30%.
Conclusion: This shows that the VA-YOLO algorithm strikes a favorable balance between parameter count and accuracy, which is important for improving the speed and accuracy of fatigue driving detection.
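The SE (squeeze-and-excitation) attention mentioned in the Method is a channel-reweighting step: globally average-pool each channel, pass the result through a two-layer bottleneck, and gate the feature map with a sigmoid. A NumPy sketch of the forward pass (channel count, reduction ratio, and weights are arbitrary stand-ins, not VA-YOLO's values):

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-Excitation over a feature map x of shape (C, H, W).

    Squeeze: global average pool per channel -> (C,)
    Excite:  FC (C -> C/r) + ReLU, then FC (C/r -> C) + sigmoid gate
    Scale:   multiply each channel of x by its gate value in (0, 1)."""
    z = x.mean(axis=(1, 2))                        # squeeze
    s = np.maximum(0.0, w1 @ z)                    # bottleneck FC + ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))         # restore FC + sigmoid
    return x * gate[:, None, None]                 # channel-wise rescale

rng = np.random.default_rng(3)
C, r = 8, 2                                        # channels and reduction ratio (illustrative)
x = rng.normal(size=(C, 6, 6))
w1 = rng.normal(size=(C // r, C)) * 0.1
w2 = rng.normal(size=(C, C // r)) * 0.1
y = se_block(x, w1, w2)
```

Because each gate lies strictly between 0 and 1, the block can only attenuate channels, which is how it steers the network's attention toward informative features at negligible parameter cost.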
Advanced Digital Technologies for Promoting Indian Culture and Tourism through Cinema
Available online: 23 December 2024
Culture and tourism are two closely interrelated elements that contribute substantially to sustainable development in any developing country, especially India, which has an extremely rich historical and cultural background. The tourism industry is among the fastest-growing sectors of a local economy, creating job opportunities that raise people's standard of living, increase the consumption of goods and services, and ultimately raise a country's Gross Domestic Product (GDP). Various studies have pointed out major promotional strategies concerning tourism and culture, but an amalgamated promotional approach for both was still missing. With this motivation, the current study aims to provide such an amalgamated promotional approach for the Indian tourism industry, assimilating the latest Industry 4.0 technologies, such as Artificial Intelligence (AI), Machine Learning (ML), Big Data, Blockchain, Virtual Reality (VR), Digital Twin, and the Metaverse, by reviewing prior research studies. The findings of the current study propose establishing an online future travel demand forecasting system, an online personalized tourist destination recommendation system, an online tourist review analysis recommendation system, and an online destination image recommendation system, and provide a practical design for them through the 1+5 Architectural Views Model and several ML algorithms, such as CNN, BPNN, SVM, Collaborative Filtering, K-means Clustering, API Emotion, and Naïve Bayes. Finally, this study discusses challenges and suggests vital recommendations for future work with the assimilation of Industry 4.0 technologies.
A PSO-Optimized Neural Network and ABC Feature Selection Approach with eXplainable Artificial Intelligence (XAI) for Natural Disaster Prediction
Authors: Mounira Sassi and Hanen Idoudi
Available online: 23 December 2024
Introduction: “Artificial Intelligence will revolutionize our lives” is a phrase frequently echoed. The influence of Artificial Intelligence (AI) and Machine Learning (ML) extends across various aspects of our daily lives, encompassing health, education, economics, the environment, and more.
Method: A particularly formidable challenge lies in decision support, especially in critical scenarios such as natural disaster management, where artificial intelligence can significantly help in making optimal decisions. In disaster management, the primary focus often centers on preventing or mitigating the impact of disasters, so it becomes imperative to anticipate their occurrence in both time and location, enabling the effective implementation of necessary strategies and measures. In our research, we propose a disaster forecasting framework based on a Multi-Layer Perceptron (MLP) empowered by the Particle Swarm Optimization (PSO) algorithm. The PSO-MLP is further fortified by the Artificial Bee Colony (ABC) algorithm for feature selection, pinpointing the most critical elements. Subsequently, we employ the LIME (Local Interpretable Model-agnostic Explanations) model, a component of eXplainable Artificial Intelligence (XAI). This comprehensive approach aims to help managers and decision-makers understand the factors that determine the occurrence of such disasters and increases the performance of the PSO-MLP model. The approach, applied specifically to predicting snow avalanches, has yielded impressive results.
Result: The obtained accuracy of 0.92 and an AUC of 0.94 demonstrate the effectiveness of the proposed framework. In comparison, the prediction precision achieved through an SVM is 0.75, while the RF classifier yields 0.86 and XGBoost reaches 0.77. Notably, the precision is further enhanced to 0.81 when XGBoost is optimized by grid search.
Conclusion: These results highlight the superior performance of the proposed methodology, showcasing its potential for accurate and reliable snow avalanche predictions compared to other established models.
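The PSO component used to tune the MLP can be illustrated in isolation; the sketch below minimizes a toy sphere function instead of a network's validation error, and the swarm size, inertia, and acceleration constants are standard textbook values, not the authors' settings:

```python
import random

def pso(f, dim=2, n_particles=20, iters=200, seed=0):
    """Minimal particle swarm optimisation: each particle's velocity is pulled
    toward its own best position and the swarm's global best."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia and acceleration constants
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:                   # update personal best
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:                  # update global best
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

# minimise the sphere function; in PSO-MLP, f would instead evaluate MLP weights
best, best_f = pso(lambda p: sum(x * x for x in p))
```

In PSO-MLP, the position vector encodes the MLP's weights (or hyperparameters) and f is the training or validation loss, so the same loop searches weight space instead of this toy 2-D space.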
Field Pest Detection via Pyramid Vision Transformer and Prime Sample Attention
Available online: 10 December 2024
Background: Pest detection plays a crucial role in smart agriculture; it is one of the primary factors that significantly impact crop yield and quality.
Objective: In actual field environments, pests often appear as dense, small objects, which poses a great challenge to field pest detection. Therefore, this paper addresses the problem of dense small pest detection.
Methods: We combine a pyramid vision transformer and prime sample attention (named PVT-PSA) to design an effective pest detection model. First, a pyramid vision transformer is adopted to extract pest feature information; it fuses multi-scale pest features through a pyramid structure and can capture the context of small pests, which benefits their feature expression. Then, we design prime sample attention to guide the selection of pest samples during model training, alleviating the occlusion effect between dense pests and enhancing overall pest detection accuracy.
Results: The effectiveness of each module is verified by ablation experiments. In the comparison experiments, the detection and inference performance of PVT-PSA is better than that of the other eleven detectors in field pest detection. Finally, we deploy the PVT-PSA model on a terrestrial robot based on the Jetson TX2 motherboard for field pest detection.
Conclusion: The pyramid vision transformer is utilized to extract relevant features of pests. Additionally, prime sample attention is employed to identify key samples that aid in effectively training the pest detection models. The model deployment further demonstrates the practicality and effectiveness of our proposed approach in smart agriculture applications.
Comprehensive Analysis of Oversampling Techniques for Addressing Class Imbalance Employing Machine Learning Models
Authors: Shivani Rana, Rakesh Kanji and Shruti Jain
Available online: 10 December 2024
Background: Unbalanced datasets present a significant challenge in machine learning, often leading to biased models that favor the majority class. Recent oversampling techniques like SMOTE, Borderline SMOTE, and ADASYN attempt to mitigate these issues. This study investigates these techniques in conjunction with machine learning models like SVM, Decision Tree, and Logistic Regression. The results reveal critical challenges such as noise amplification and overfitting, which we address by refining the oversampling approaches to improve model performance and generalization.
Aim: To address the challenge of unbalanced datasets, the minority class is oversampled to match the majority class. Oversampling techniques such as SMOTE (Synthetic Minority Oversampling Technique), Borderline SMOTE, and ADASYN (Adaptive Synthetic Sampling) are used in this work.
Objective: To perform a comprehensive analysis of various oversampling methods for addressing the class imbalance issue using ML methods.
Method: The proposed methodology uses the BERT technique, which removes the need for a separate pre-processing step. Various oversampling techniques from the literature are used for balancing the data, followed by feature extraction and text classification using ML algorithms. Experiments are performed using ML classification algorithms such as Decision Tree (DT), Logistic Regression (LR), Support Vector Machine (SVM), and Random Forest (RF) for categorizing the data.
Result: The results show the greatest improvement for SVM with Borderline SMOTE, which achieves an accuracy of 71.9% and an MCC of 0.53.
Conclusion: The suggested method assists in the development of fairer and more effective ML models by addressing the basic issue of class imbalance.
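SMOTE's core interpolation step is easy to state: each synthetic sample lies on the segment between a minority point and one of its k nearest minority neighbours. A dependency-free sketch (the toy points and k are illustrative; production implementations such as imbalanced-learn add class-aware refinements like the Borderline and ADASYN variants named above):

```python
import random

def smote(minority, n_new, k=5, seed=0):
    """Generate n_new synthetic minority samples by interpolating between each
    chosen minority point and one of its k nearest minority neighbours."""
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority neighbours of x (excluding x itself)
        neighbors = sorted((m for m in minority if m is not x),
                           key=lambda m: dist2(m, x))[:k]
        nb = rng.choice(neighbors)
        lam = rng.random()                    # random interpolation factor in [0, 1)
        synthetic.append(tuple(xi + lam * (ni - xi) for xi, ni in zip(x, nb)))
    return synthetic

# toy minority class: four corners of the unit square
minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
new_pts = smote(minority, n_new=6, k=2)
```

Because every synthetic point is a convex combination of two minority samples, the new points stay inside the minority region; this is also the source of the noise-amplification risk the study discusses when minority outliers are present.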
A Survey on the Communication of UAVs with Charging and Control Stations
Available online: 09 December 2024
Unmanned Aerial Vehicles (UAVs) have a history of over a century of deployment, but in recent decades they have progressed at a staggering rate. Nowadays, UAVs are used in a large number of civil and military applications. The communication functionality of a UAV with external systems for control and charging is strongly connected with evolving technologies and services, which leads to an increased number of alternatives when designing UAV communications. This review presents the information needed to choose an efficient communication system between UAVs and two important elements, the Ground Control Station (GCS) and the Charging Station (CS): the GCS is responsible for monitoring and controlling the UAV's units, while the CS is used for charging the UAV. This study aims to collect, classify, and evaluate all of the necessary information in order to reach a final decision about the kind of communication that is most efficient for a target UAV application. The features of open-source telemetry protocols for UAV-GCS communication are presented and evaluated according to the needs of the most significant application domains. Communication between UAVs and CSs is classified depending on the existence of an intermediate server and analyzed with respect to telemetry protocols and application domains. Communication algorithms are evaluated in terms of time and energy efficiency. Lastly, the most suitable algorithms are matched to the most significant application domains.
The Inverse-Consistent Deformable MRI Registration Method Based on the Improved UNet Model and Similarity Attention
Authors: Tianqi Cheng, Lei Wang, Yaolong Han, Shilong Liu, Chunyu Yan, Yanqing Sun, Shanliang Yang and Bin Li
Available online: 09 December 2024
Introduction: Deformable image registration is an essential task in medical image analysis. The UNet model, and models with a U-shaped structure in general, have been widely adopted in deep learning-based registration methods. However, they easily lose important similarity information in the up-sampling stage, and these methods usually ignore the inherent inverse consistency of the transformation between a pair of images. Furthermore, the traditional smoothing constraints used in existing methods can only partially prevent folding of the deformation field.
Method: An inverse-consistent deformable medical image registration network (ICSANet) based on the inverse consistency constraint and similarity-based local attention is developed. A new UNet network is constructed by introducing similarity-based local attention to focus on the spatial correspondence in the high-similarity space. A novel inverse consistency constraint is proposed, and an objective function of the new form is presented in combination with the traditional constraint conditions.
Experiment: The performance of the proposed method is compared with typical registration models, such as the VoxelMorph, PVT, nnFormer, and TransMorph-diff models, on the brain IXI and OASIS datasets.
Result: Experimental results on the brain MRI datasets show that the images can be deformed symmetrically until the two distorted images are well matched. The quantitative comparison and visual analysis indicate that the proposed method performs better, and the Dice index can be improved by at least 12% with only 10% of the parameters.
Conclusion: This paper presents a new medical image registration network, ICSANet. By introducing a similarity attention gate, it accurately captures high-similarity spatial correspondences between source and target images, resulting in better registration performance.
A Deep Learning Framework with Learning without Forgetting for Intelligent Surveillance in IoT-enabled Home Environments in Smart Cities
Authors: Surjeet Dalal, Neeraj Dahiya, Amit Verma, Neetu Faujdar, Sarita Rathee, Vivek Jaglan, Uma Rani and Dac-Nhuong Le
Available online: 04 November 2024
Background: Internet of Things (IoT) technology in smart urban homes has revolutionised sophisticated monitoring. This progress uses interconnected devices and systems to improve security, resource management, and resident safety. Smart cities use technology to improve efficiency, sustainability, and quality of life, and IoT-enabled intelligent monitoring technologies are key to this goal.
Objectives: Intelligent monitoring in IoT-enabled homes in smart cities improves security, convenience, and quality of life through advanced technologies. Live monitoring and risk-identification tools quickly discover and resolve security breaches and suspicious activity to protect citizens. Intelligent devices allow homeowners to remotely control lighting, security locks, and surveillance cameras, while advanced technologies regulate heating, cooling, and lighting based on occupancy and usage.
MethodThis study introduces a deep learning architecture that uses Learning without Forgetting (LwF) to retain previously learned patterns while absorbing new data. The authors use IoT devices to collect and analyse data in real time for monitoring and surveillance, and apply sophisticated data pre-processing to handle the massive volume of data these devices generate. The deep learning model is trained with historical and real-time data, with cross-validation to ensure resilience.
ResultThe proposed model has been validated on two different Roboflow datasets totalling 7382 images, where it achieves an accuracy of 98.27%. The proposed YOLO-LwF model outperforms both the original YOLO and LwF models in terms of detection speed and adaptive learning.
ConclusionBy raising the bar for intelligent monitoring solutions in smart cities, the proposed system is well suited for real-time, adaptive surveillance in IoT-enabled households. By embracing adaptability and knowledge retention, the authors envision heightened security and safety levels in urban settings.
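Learning without Forgetting, as used in the abstract above, is typically implemented by distilling the old model's temperature-softened predictions while training on new data. The following minimal numpy sketch of the distillation term is illustrative only (function names and the temperature value are assumptions, not the paper's implementation):

```python
import numpy as np

def softmax_t(logits, T=2.0):
    """Temperature-softened softmax used by LwF-style distillation."""
    z = logits / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def lwf_distillation_loss(old_logits, new_logits, T=2.0):
    """Cross-entropy between the frozen old model's softened predictions
    and the new model's softened predictions on the same inputs; minimizing
    this keeps the new model's behaviour close to the old one."""
    p_old = softmax_t(old_logits, T)
    p_new = softmax_t(new_logits, T)
    return float(-np.mean(np.sum(p_old * np.log(p_new + 1e-12), axis=-1)))

old = np.array([[2.0, 0.5, -1.0]])
same = lwf_distillation_loss(old, old)                       # minimum cost
diff = lwf_distillation_loss(old, np.array([[-1.0, 0.5, 2.0]]))
assert diff > same  # diverging from the old model is penalized
```

During training, this term is added to the ordinary classification loss on the new data, which is how old patterns are retained while new ones are absorbed.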
-
-
-
Real-Time Object Detection Algorithm in Foggy Weather Based on the WViT-YOLO Model
Authors: Huiying Zhang, Qinghua Zhang, Yifei Gong, Feifan Yao and Pan XiaoAvailable online: 04 November 2024More LessIntroductionTo address the challenges of low visibility, object recognition difficulty, and low detection accuracy in foggy weather, this paper introduces WViT-YOLO, a real-time detection model for foggy conditions built on the YOLOv5 framework. The NVIT-Net backbone network, incorporating NTB and NCB modules, enhances the model's ability to extract both global and local features from images.
MethodAn efficient convolutional C3_DSConv module is designed and integrated with channel attention mechanisms and ShuffleAttention at each upsampling stage, improving the model's computational speed and its ability to detect small and blurry objects. The Wise-IoU loss function is used during the prediction stage to improve the model's convergence efficiency.
ResultExperimental results on the publicly available RTTS dataset for vehicle detection in foggy conditions demonstrate that the WViT-YOLO model achieves a 3.2% increase in precision, a 9.5% rise in recall, and an 8.6% improvement in mAP50 compared to the baseline model. Furthermore, WViT-YOLO shows a 9.5% and 8.6% mAP50 improvement over YOLOv7 and YOLOv8, respectively. For detecting small and blurry objects in foggy conditions, the model demonstrates approximately a 5% improvement over the benchmark, significantly enhancing the detection network's generalization ability under foggy conditions.
ConclusionThis advancement is crucial for improving vehicle safety in such weather. The code is available at https://github.com/QinghuaZhang1/mode.
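The Wise-IoU loss mentioned in the method builds on the standard box IoU used throughout detection metrics such as mAP50. The following short Python sketch shows only the plain IoU between axis-aligned boxes (the dynamic focusing weights that distinguish Wise-IoU are omitted, and the function name is illustrative):

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(box_iou((0, 0, 2, 2), (1, 0, 3, 2)))  # two half-overlapping boxes
```

In mAP50, a prediction counts as correct when its IoU with a ground-truth box is at least 0.5; IoU-based losses like Wise-IoU optimize this same overlap directly during training.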
-
-
-
Smart Health Monitoring Approach to Diagnose Attention-Deficit Hyperactivity Disorder Based on Real-Time Activity and Heart Rate Variability Using Boosting Models
Authors: Amandeep Kaur, Kuldeep Singh, Prabhpreet Kaur, Bhanu Priya, Gajendra Kumar and Abhishek SharmaAvailable online: 04 November 2024More LessIntroductionAttention-Deficit Hyperactivity Disorder (ADHD) is a prevalent chronic mental health condition that significantly impacts the psychological and physical well-being of millions of adolescents. Early detection and accurate diagnosis are crucial for effective treatment and mitigating the disorder's adverse effects. Despite extensive research efforts, current methods often fall short in simultaneously accounting for daily motor activity and heart rate variability in ADHD detection.
MethodAddressing these gaps, this paper introduces a histogram-based gradient-boosting classifier for analyzing real-time activity and heart-rate variability data to automate ADHD diagnosis. By extracting twelve key features from the data and selecting the most significant ones with the extra tree model, we evaluate these features using various classifiers, including histogram-based gradient boosting, light gradient boosting machine, extreme gradient boosting, gradient boosting, and adaptive boosting.
ResultsThe histogram-based gradient-boosting model, validated through ten-fold cross-validation, outperforms other classifiers with an accuracy of 99.12%, an F1 measure of 99.12%, and a sensitivity of 99.13%. Additionally, it achieves a specificity of 99.1%, an AUC of 0.9995, and a minimal FDR of 0.88%. These results demonstrate that the proposed approach offers a highly effective and precise solution for automated ADHD diagnosis.
ConclusionThe implications of these findings suggest that integrating real-time activity and heart-rate variability data into diagnostic processes can significantly enhance the accuracy and efficiency of ADHD assessment, potentially leading to earlier and more reliable diagnoses, improved patient outcomes, and more tailored treatment strategies.
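The metrics reported in the results above (accuracy, F1, sensitivity, specificity, FDR) all derive from the binary confusion matrix. A short illustrative Python sketch of how they are computed from TP/TN/FP/FN counts (the function name and the example counts are assumptions, not the paper's data):

```python
def diagnosis_metrics(tp, tn, fp, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)      # recall on the positive (ADHD) class
    specificity = tn / (tn + fp)      # recall on the negative class
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    fdr = fp / (tp + fp)              # false discovery rate = 1 - precision
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "f1": f1, "fdr": fdr}

# Hypothetical balanced example: 99 of each class correct, 1 of each wrong
m = diagnosis_metrics(tp=99, tn=99, fp=1, fn=1)
print(m["accuracy"])  # 0.99
```

Reporting sensitivity and specificity alongside accuracy, as the abstract does, guards against a classifier that scores well only because one class dominates the data.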
-
-
-
Biofuels Policy as the Indian Strategy to Achieve the 2030 Sustainable Development Goal 7: Targets, Progress, and Barriers
Authors: Michel Mutabaruka, Manmeet Kaur, Sanjay Singla, Purushottam Sharma, Gurpreet Kaur and Gagandeep SinghAvailable online: 16 October 2024More LessIntroductionBlending 20% biofuels into fossil fuels is one of the key targets of the Government of India to address the impacts of climate change, energy-related environmental pollution, and illnesses caused by air pollution.
MethodThe National Policy on Biofuels 2018 (NPB 2018) is in place to boost the emerging production of biofuels and thereby respond to international agreements, including the Sustainable Development Goals (SDGs) and the Paris Agreement. Hence, this article examined the production of biofuels in India in line with Agenda 2030 to project the share of the country's energy needs that biofuels can contribute.
ResultThe results were mixed: the data from 2000 to 2017 did not support realizing the targets for biofuel production and consumption in India, whereas the data from 2018 onward show hope of achieving the 2030 goals of E20 petrol in 2025-26 and 5% blended diesel in 2030. It was clear that bioethanol production was growing faster than that of biodiesel, and that renewable energy will continue to struggle to take a good share of the total annual energy used in India.
ConclusionIt is recommended that data be shared among the different stakeholders to promote more research, as the low performance in achieving the targets was due to poor communication and missing technology rather than a lack of feedstock or unavailability of production facilities.
-