International Journal of Sensors Wireless Communications and Control - Volume 15, Issue 1, 2025
-
Private Blockchain-based Efficient Secure Cloud Data Storage Using Federated Learning Framework
Background: Safeguarding data and privacy is among the new challenges that consumers face. Data retrieved from the cloud's external sources, and the computations that follow, are not always accurate. A basic concern when establishing large private networks is the risk of adversaries using various exploitation techniques to compromise the privacy of transactional data. Criminal investigation is also quickly embracing new technologies, blockchain among them. In recent years, every sector, from banking and supply-chain management to smart apps and the Internet of Things (IoT), has become increasingly vulnerable to security threats.
Methods: Federated learning (FL), an effective solution to the “data island” problem, has recently attracted broad attention. However, as FL finds more practical uses, training management becomes more complicated and the multi-task trade-off more difficult because of the growing number of FL tasks. To address this shortcoming, this study proposes a privacy-preserving, multi-task FL framework based on a partitioned blockchain. The framework can execute several FL tasks from separate requesters. First, an FL task force is established to help with the visualization, organization, and administration of secure aggregation.
Results: To safeguard users' privacy and guarantee the accuracy of the global model, the proposed framework incorporates both Paillier homomorphic encryption (PHE) and the Pearson correlation coefficient (PCC). Finally, a novel blockchain-based incentive mechanism is introduced to encourage participants to contribute their valuable data.
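The abstract does not publish the aggregation protocol itself; as a minimal sketch of how Paillier's additive homomorphism can secure the aggregation step, the Python example below uses the open-source python-paillier (phe) package. The two-client setup and variable names are illustrative assumptions, not the authors' implementation.

```python
# pip install phe  -- python-paillier, an open-source Paillier implementation
import numpy as np
from phe import paillier

# The aggregator generates a keypair; clients encrypt with the public key.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Hypothetical local model updates from two clients (e.g., gradient vectors).
update_a = np.array([0.12, -0.05, 0.33])
update_b = np.array([0.08, 0.02, -0.11])

# Each client encrypts its update element-wise before upload.
enc_a = [public_key.encrypt(float(x)) for x in update_a]
enc_b = [public_key.encrypt(float(x)) for x in update_b]

# The aggregator sums ciphertexts without ever seeing plaintext updates:
# Enc(x) + Enc(y) = Enc(x + y) under Paillier's additive homomorphism.
enc_sum = [ca + cb for ca, cb in zip(enc_a, enc_b)]

# Only the key holder can decrypt the aggregate (here, the averaged update).
avg_update = np.array([private_key.decrypt(c) for c in enc_sum]) / 2
print(avg_update)  # ~ [0.10, -0.015, 0.11]
```

Because only ciphertexts are summed, the aggregator learns the combined update but never an individual client's contribution.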
Conclusion: According to the experimental data, the proposed framework achieves a global model accuracy of 99.2%. In industrial applications especially, the framework is clearly better suited to real-world settings.
-
5G RedCap Enhancement Towards Improved Cellular LPWAN/5G-IoT for Smart Cities and Industrial IoT Using Genetic Algorithm-Based Neural Network
Authors: Emmanuel U. Ogbodo, Adnan M. Abu-Mahfouz and Anish A. Kurien
Background: Low-power wide-area network (LPWAN) technologies significantly impact numerous IoT deployment use cases, especially in smart-city scenarios. LPWAN is used to support low-data-rate use cases; unfortunately, medium-data-rate IoT applications (up to 50 Mbps and more) cannot be served by LPWAN. Hence, the 5G reduced-capability (RedCap) new radio (NR) device was introduced to address this limitation. However, 5G RedCap suffers a coverage loss due to its reduced physical-layer complexity compared to legacy 5G user equipment (UE). Therefore, 5G RedCap enhancements require coverage-loss compensation.
Objective: This paper aims to improve the performance of 5G RedCap in terms of coverage, energy efficiency, and throughput for smart cities and the Industrial IoT (IIoT) using a genetic algorithm-based neural network (GA-NN) model.
Methods: The method uses a GA-NN model for a two-fold enhancement of 5G RedCap: a specialized-enhancement RedCap (se-RedCap) for low data rates and an enhanced RedCap (eRedCap) supporting high data rates (up to 300 Mbps). The GA-NN model has been implemented and assessed with the MATLAB Global Optimization and 5G Toolboxes. Furthermore, a newly introduced, modified parametric rectified linear unit (ePReLU) activation function, fA, evaluates the final summed data parameters, trained against a specific threshold for the best performance.
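The abstract does not detail the GA-NN internals. As a hedged illustration of the general technique, the sketch below evolves the weights of a tiny neural network with a genetic algorithm in Python (NumPy only); the fitness function, population settings, and the PReLU-style activation are illustrative assumptions, not the authors' ePReLU or their MATLAB implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def prelu(x, alpha=0.1):
    # PReLU-style activation; the paper's ePReLU adds a trained threshold
    # whose details are not public, so a plain PReLU stands in here.
    return np.where(x > 0, x, alpha * x)

def forward(theta, X, n_hidden=8):
    # Unpack a flat chromosome into a 1-input, 1-output, 1-hidden-layer net.
    W1 = theta[:n_hidden].reshape(1, n_hidden)
    b1 = theta[n_hidden:2 * n_hidden]
    w2 = theta[2 * n_hidden:3 * n_hidden]
    b2 = theta[-1]
    return prelu(X @ W1 + b1) @ w2 + b2

def fitness(theta, X, y):
    return -np.mean((forward(theta, X) - y) ** 2)  # GA maximizes fitness

# Toy target: a nonlinear link-quality curve (illustrative only).
X = np.linspace(-2, 2, 200).reshape(-1, 1)
y = np.sin(2 * X).ravel()

dim, pop_size, n_gen = 3 * 8 + 1, 60, 300
pop = rng.normal(0, 1, (pop_size, dim))

for _ in range(n_gen):
    scores = np.array([fitness(ind, X, y) for ind in pop])
    elite = pop[np.argsort(scores)[-pop_size // 2:]]          # selection
    parents = elite[rng.integers(0, len(elite), (pop_size, 2))]
    mask = rng.random((pop_size, dim)) < 0.5                  # uniform crossover
    pop = np.where(mask, parents[:, 0], parents[:, 1])
    pop += rng.normal(0, 0.05, pop.shape)                     # mutation
    pop[0] = elite[-1]                                        # elitism
    
best = max(pop, key=lambda ind: fitness(ind, X, y))
print("best MSE:", -fitness(best, X, y))
```

Evolving the weights avoids gradient computation entirely, which is one reason GA-NN hybrids are attractive when the objective is non-differentiable or threshold-based.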
Results: The numerical results confirm that the specialized-enhancement RedCap (se-RedCap) and enhanced RedCap (eRedCap) outperform legacy cellular LPWANs and conventional RedCap in terms of coverage, energy efficiency, and throughput.
Conclusion: This paper successfully covers two usage scenarios: the very low data rates typical of LPWAN and the high data rates of up to 300 Mbps, which the existing RedCap system does not yet support. The GA-NN model accordingly yields se-RedCap and eRedCap to support these two scenarios, respectively.
-
Non-orthogonal Multiple Access (NOMA) Channel Estimation for Mobile & PLC-VLC Based Broadband Communication System
Authors: Manidipa Sarkar, Ankit Nayak, Sarita Nanda and Suprava Patnaik
Background: The paper focuses on enhancing the performance of 5G wireless mobile communication systems. Furthermore, it addresses the increasing demand for high data rates, improved channel capacity, and spectrum efficiency outlined by the 3rd Generation Partnership Project (3GPP) protocol.
Objectives: To develop an innovative Non-orthogonal Multiple Access (NOMA)-based channel estimation (CE) model aimed at improving the performance of 5G wireless mobile communication systems.
Methods: A proportionate recursive least squares (PRLS) algorithm is utilized to estimate the characteristics of practical Rayleigh fading channels. The applicability of the PRLS algorithm is also investigated in Lambertian channels for indoor broadband communication systems such as power line communication (PLC) and visible light communication (VLC) systems.
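The abstract does not reproduce the PRLS update equations. As a sketch of the proportionate-adaptation principle behind them, the Python example below implements the classic proportionate NLMS (PNLMS) update, a simpler relative of PRLS that likewise assigns each tap a gain proportional to its magnitude so that dominant taps converge faster; the simulated sparse channel and step sizes are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def pnlms_estimate(x, d, n_taps, mu=0.5, rho=0.01, delta_p=0.01, eps=1e-6):
    """Proportionate NLMS: per-tap gains proportional to tap magnitude."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]     # regressor, most recent sample first
        # Gain assignment (Duttweiler-style): large taps adapt faster.
        gamma = np.maximum(rho * max(delta_p, np.abs(w).max()), np.abs(w))
        g = gamma / gamma.mean()
        e = d[n] - u @ w                      # a-priori estimation error
        w = w + mu * e * (g * u) / (u @ (g * u) + eps)
    return w

# Illustrative sparse multipath channel and noisy observations.
h_true = np.zeros(16)
h_true[[2, 7]] = [0.9, -0.5]
x = rng.normal(0, 1, 4000)                    # training/pilot sequence
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.normal(0, 1, len(x))

h_hat = pnlms_estimate(x, d, n_taps=16)
print("estimation MSE:", np.mean((h_hat - h_true) ** 2))
```

The proportionate gain matrix is what distinguishes this family from plain RLS/NLMS: taps already known to be large receive larger step sizes, which is what yields the faster convergence reported below.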
Results: Evaluation metrics, including mean square error (MSE), bit error rate (BER), spectral efficiency (SE), energy efficiency (EE), capacity, and data rate, have been analysed. Faster convergence and higher accuracy compared to existing state-of-the-art approaches have been demonstrated.
Conclusion: The NOMA-based channel estimation model shows significant promise for enhancing the performance of 5G wireless communication systems, addressing the demands for higher data rates and improved spectral efficiency as per 3GPP standards.
-
Unveiling Data Fairness Functional Requirements in Big Data Analytics Through Data Mapping and Classification Analysis
Authors: Palanimanickam Hemalatha and Jayaraman Lavanya
Aims: In the realm of Big Data Analytics, ensuring the fairness of data-driven decision-making processes is imperative. This abstract introduces the Learning Embedded Fairness Interpretation (LEFI) Model, a novel approach designed to uncover and address data-fairness functional requirements with an accuracy of 97%. The model harnesses advanced data mapping and classification analysis techniques, employing explainable AI (xAI) to provide transparent insights into fairness within large datasets.
Methods: The LEFI Model navigates diverse datasets by mapping data elements to discern patterns that contribute to bias. Through systematic classification analysis, LEFI identifies potential sources of unfairness with 97% accuracy, giving data analysts and stakeholders confidence in the model's assessments and facilitating informed, reliable decision-making. The model is implemented in Python, integrating the data mapping, classification analysis, and xAI components into a robust and efficient solution for achieving data fairness in Big Data Analytics.
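The abstract does not specify LEFI's fairness criteria; as a generic illustration of one common data-fairness check that such a Python pipeline might run, the sketch below computes per-group selection rates and the disparate-impact ratio. The column names and the 0.8 threshold (the common "four-fifths rule") are illustrative assumptions, not LEFI's actual requirements.

```python
import pandas as pd

# Hypothetical decision log: one row per applicant.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   0,   1,   1,   0,   0,   0],
})

# Selection rate per protected group.
rates = df.groupby("group")["selected"].mean()
print(rates)            # A: 0.75, B: 0.25

# Disparate-impact ratio: worst-off group vs. best-off group.
di_ratio = rates.min() / rates.max()
print(f"disparate impact = {di_ratio:.2f}")

# Four-fifths rule: flag a potential fairness violation below 0.8.
if di_ratio < 0.8:
    print("Potential unfairness flagged for review")
```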
Results: The implementation is accessible and easy to adopt for organizations aiming to embed fairness into their data-driven processes. With its 97% accuracy, the LEFI Model exemplifies a comprehensive solution for data fairness in Big Data Analytics and stands as a reliable framework for organizations committed to ethical data usage.
Conclusion: The model not only contributes to the ongoing dialogue on fairness but also sets a new standard for accuracy and transparency in the analytics pipeline, advocating for a more equitable future in the realm of Big Data Analytics.
-
Outage Analysis of IRS-NOMA System over η - μ Fading Channel
Authors: K. Srinivasarao, V. Venkata Rao and Priyank Sharma
Background: Intelligent reflecting surfaces (IRSs) have emerged as a key technology, enabling reconfigurable, intelligent, and low-power solutions for sixth-generation (6G) wireless communication.
Objectives: The objective of this paper is to improve outage performance by deploying the IRS module.
Methods: In this research, an IRS-assisted NOMA network is explored over an η-μ fading channel, with the IRS placed on top of the base station (BS). The IRS meticulously fine-tunes the phases of the signals arriving from the BS, which improves system performance. Statistical channel modelling of the downlink IRS-NOMA system is proposed and validated with Monte Carlo (MC) simulation. Analytical expressions of the outage probability (OP) are also derived for the k-th user of the IRS-NOMA system over the η-μ fading channel.
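As a hedged sketch of the kind of Monte Carlo validation described above, the Python example below estimates the far (weak) NOMA user's outage probability behind an M-element IRS, generating η-μ envelopes with the physical Format-1 construction (which requires integer μ). The power coefficients, SINR threshold, and the ideal-phase-alignment assumption at the IRS are illustrative choices, not the paper's exact system model.

```python
import numpy as np

rng = np.random.default_rng(2)

def eta_mu_envelope(eta, mu, size):
    """Format-1 eta-mu fading envelope, unit mean power (integer mu only)."""
    sx2 = eta / (mu * (1.0 + eta))   # in-phase Gaussian variance
    sy2 = 1.0 / (mu * (1.0 + eta))   # quadrature Gaussian variance
    X = rng.normal(0.0, np.sqrt(sx2), (size, mu))
    Y = rng.normal(0.0, np.sqrt(sy2), (size, mu))
    return np.sqrt(np.sum(X**2 + Y**2, axis=1))

# Illustrative system parameters.
M, eta, mu = 4, 0.5, 2        # IRS elements; eta-mu fading parameters
rho = 1.0                     # transmit SNR (linear, 0 dB)
a_far, a_near = 0.8, 0.2      # NOMA power allocation (a_far > a_near)
gamma_th = 2.0**1 - 1         # SINR threshold for a 1 bit/s/Hz target rate
N = 200_000                   # Monte Carlo trials

# With ideal IRS phase alignment, the M cascaded BS->IRS->user paths
# add coherently, so the effective amplitude is a sum of envelope products.
h = eta_mu_envelope(eta, mu, N * M).reshape(N, M)   # BS -> IRS links
g = eta_mu_envelope(eta, mu, N * M).reshape(N, M)   # IRS -> user links
A = np.sum(h * g, axis=1)

# The far (weak) user decodes its own signal, treating the near user's
# superimposed signal as interference.
sinr_far = a_far * rho * A**2 / (a_near * rho * A**2 + 1.0)
op = np.mean(sinr_far < gamma_th)
print(f"far-user outage probability (M={M}): {op:.4f}")
```

Increasing M sharpens the effective channel gain, so the simulated OP drops rapidly with more reflecting elements, which is the trend the Results below examine.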
Results: The influence of performance factors, such as the number of reflecting elements (M), on the OP is examined.
Conclusion: Simulation results reveal that the IRS-NOMA system experiences a lower outage probability than IRS-OMA and conventional relaying techniques.
-
Light-Weighted Dynamic Encryption Decryption Algorithm (DEnDecA) for Internet of Things
Authors: Jasvir Singh Kalsi, Jagpal Singh Ubhi and Kota Solomon Raju
Introduction: A recent boom in the development of IoT-enabled products has accelerated data transmission from end clients to cloud services and vice versa. Being resource-constrained, IoT devices have limited computational support, especially at IoT end nodes; hence, the probability of a data breach has also increased considerably.
Methods: A lightweight security algorithm for the Internet of Things (IoT) is a key concern for data security and integrity. IoT nodes transmit data in small chunks and are vulnerable to attacks such as probing attacks. In this study, a new approach of algorithm hopping, based on dynamic switching of the encryption algorithm, is proposed.
Results: The Dynamic Encryption Decryption Algorithm (DEnDecA) proves to be a lightweight encryption choice, providing high-security shielding over less secure algorithms through their dynamic selection without any human interaction or interface. The hopping has been implemented in MATLAB along with AES-32, AES-64, and AES-128, and is sketched below.
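DEnDecA's selection logic is not published in the abstract; the Python sketch below illustrates the general algorithm-hopping pattern: a randomly chosen cipher is applied per message and identified by a one-byte header, matching the 8-bit overhead reported in the Conclusion. The cipher registry uses stand-in stream ciphers built from SHA-256 keystreams so the example is self-contained; they are placeholders for the AES-32/64/128 variants, not DEnDecA itself.

```python
import hashlib
import os
import secrets

def _keystream_xor(data: bytes, key: bytes, label: bytes) -> bytes:
    # Stand-in stream cipher: SHA-256 in counter mode (placeholder for AES variants).
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(label + key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# Registry of interchangeable ciphers, indexed by a 1-byte algorithm ID.
CIPHERS = {
    0x01: lambda d, k: _keystream_xor(d, k, b"alg-32"),    # placeholder for AES-32
    0x02: lambda d, k: _keystream_xor(d, k, b"alg-64"),    # placeholder for AES-64
    0x03: lambda d, k: _keystream_xor(d, k, b"alg-128"),   # placeholder for AES-128
}

def encrypt_hopping(plaintext: bytes, key: bytes) -> bytes:
    # Hop: pick an algorithm at random, with no human interaction required.
    alg_id = secrets.choice(list(CIPHERS))
    # Prepend the 1-byte ID -- the only per-message overhead (8 bits).
    return bytes([alg_id]) + CIPHERS[alg_id](plaintext, key)

def decrypt_hopping(ciphertext: bytes, key: bytes) -> bytes:
    alg_id, body = ciphertext[0], ciphertext[1:]
    return CIPHERS[alg_id](body, key)   # XOR stream ciphers are self-inverse

key = os.urandom(16)
msg = b"sensor reading: 23.4 C"
ct = encrypt_hopping(msg, key)
assert decrypt_hopping(ct, key) == msg
print(f"algorithm ID {ct[0]:#04x}, overhead 8 bits")
```

An attacker probing traffic cannot assume a fixed cipher per node, which is the shielding effect the authors attribute to dynamic selection.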
Conclusion: The results show only 8 bits of data overhead and 2 ms to 8 ms of additional encryption/decryption time for data ranging from 1 KB to 1 MB with the AES-128, AES-64, and AES-32 algorithms.
-
A Comparative Study of Artificial Neural Network Training Algorithms to Predict Photovoltaic Module Output Power
Introduction: Solar energy is a crucial component of renewable resources and contributes a large portion of them, and demand for it has recently increased. In previous years, it was difficult to predict the amount of energy obtained from photovoltaic (PV) systems. The inclination angle of the solar panel, the amount of radiation, wind speed, humidity, and ambient temperature are major factors that strongly affect electrical energy production. Scientists have used many strategies to accurately predict the power generated by PV modules, but each method has different pros and cons.
Methods: This study tested three training algorithms for artificial neural networks (ANNs): scaled conjugate gradient (SCG), Levenberg-Marquardt (LM), and Bayesian regularization (BR), to determine which performed best in terms of prediction speed and accuracy. A total of 28,296 samples of experimental data on the primary influencing environmental factors were fed into the ANN, which consists of 15 hidden layers. Before training the network, the data were preprocessed to remove factors with only a secondary effect.
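For readers unfamiliar with the candidate optimizers, the Python sketch below trains a tiny one-hidden-layer network with Levenberg-Marquardt via SciPy's MINPACK wrapper; MATLAB's trainlm operates on the same principle. The toy data and network size are illustrative assumptions, not the study's 28,296-sample dataset or its architecture.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)

N_IN, N_HID = 2, 8   # e.g., irradiance and temperature -> PV output power

def unpack(theta):
    i = 0
    W1 = theta[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = theta[i:i + N_HID]; i += N_HID
    w2 = theta[i:i + N_HID]; i += N_HID
    b2 = theta[i]
    return W1, b1, w2, b2

def residuals(theta, X, y):
    # LM minimizes the sum of squared residuals, so we return them raw.
    W1, b1, w2, b2 = unpack(theta)
    pred = np.tanh(X @ W1 + b1) @ w2 + b2
    return pred - y

# Toy surrogate for PV output: power rises with irradiance, drops with heat.
X = rng.uniform(0, 1, (500, N_IN))
y = X[:, 0] * (1.0 - 0.3 * X[:, 1]) + 0.01 * rng.normal(size=500)

theta0 = rng.normal(0, 0.5, N_IN * N_HID + 2 * N_HID + 1)
fit = least_squares(residuals, theta0, args=(X, y), method="lm")
print("training MSE:", np.mean(fit.fun ** 2))
```

LM's blend of Gauss-Newton and gradient-descent steps is what gives it the fast convergence on medium-sized least-squares problems that the Results below report.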
Results: The analytical results showed that the ANN trained with the LM algorithm is the best in terms of accuracy and speed in predicting the resulting photovoltaic energy.
Conclusion: The results showed that although the regression (R) and MSE values for the LM, SCG, and BR algorithms are close (98.129% with MSE 0.0622, 97.849% with 0.0587, and 98.151% with 0.0585, respectively), the LM training algorithm is the best in terms of calculation speed and display of results.