Recent Advances in Computer Science and Communications - Volume 14, Issue 8, 2021
Binary Grasshopper Optimization Based Feature Selection for Intrusion Detection System Using Feed Forward Neural Network Classifier
Authors: M. Jeyakarthic and A. Thirumalairaj
Background: The occurrence of intrusions and attacks has increased tremendously in recent years, owing to the ever-growing technological advancements in the internet and networking domains. Intrusion Detection Systems (IDS) are now employed to prevent such attacks. Several machine learning approaches have been presented for IDS classification. However, IDS data suffers from dimensionality issues that increase complexity and lead to inefficient resource utilization. Consequently, it becomes necessary to identify the significant features of the data used by an IDS in order to reduce its dimensionality. Aim: In this article, a new Feature Selection (FS)-based classification system is presented which performs both FS and classification. Methods: A binary variant of the Grasshopper Optimization Algorithm, called BGOA, is applied as the FS model. It retains the significant, useful features and discards the useless ones. The chosen features are fed to a Feed-Forward Neural Network (FFNN) model for training and testing on the KDD99 dataset. Results: The presented model was validated using the benchmark KDD Cup 1999 dataset. With the inclusion of the FS process, the classifier's results improved, attaining an FPR of 0.43, an FNR of 0.45, a sensitivity of 99.55, a specificity of 99.57, an accuracy of 99.56, an F-score of 99.59 and a kappa value of 99.11. Conclusion: The experimental outcome confirmed the superior performance of the presented model compared to diverse models from several aspects, and it was found to be an appropriate tool for detecting intrusions.
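A minimal sketch of how a binary GOA variant is commonly wired to a feature-selection fitness, assuming a sigmoid transfer function for binarization and scikit-learn's MLPClassifier standing in for the FFNN; the paper's actual update equations and parameter settings are not reproduced here.

```python
# Sketch only: sigmoid binarization of a grasshopper position plus an
# MLP-based fitness; helper names and settings are illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def binarize(position):
    """Map a continuous grasshopper position to a 0/1 feature mask."""
    prob = 1.0 / (1.0 + np.exp(-position))          # sigmoid transfer function
    return (rng.random(position.shape) < prob).astype(int)

def fitness(mask, X, y):
    """Cross-validated accuracy of a small feed-forward net on selected features."""
    if mask.sum() == 0:                              # keep at least one feature
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300, random_state=0)
    return cross_val_score(clf, X[:, mask == 1], y, cv=3).mean()
```

The optimizer would iterate grasshopper positions, binarize each, and keep the mask with the best fitness.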
Viability Prediction of Smart Meter Installation to Prevent Non-Technical Losses Using Naïve Bayes Classifier
Authors: Hadiza Umar, Rajesh Prasad and Mathias Fonkam
Background: Energy regulators across the world have resolved to curtail the liability of Non-Technical Losses (NTLs) in power by deploying smart meters to measure consumed power. However, power regulators in developing countries are confronted with a huge metering gap in an era of unprecedented energy theft. This has resulted in revenue deficits, an increase in debts and, subsequently, power cuts. Objective: The objective of this research is to predict whether unmetered customers are eligible to be metered, by identifying worthy and unworthy customers for metering given their bill payment history. Methods: The approach analyses the classification accuracy of several machine learning algorithms on small datasets, exploring Deep Learning, Naïve Bayes, Support Vector Machine and Extreme Learning Machine classifiers using data obtained from an electricity distribution company in Nigeria. Results: The performance analysis shows that the Naïve Bayes classifier outperformed the Deep Learning, Support Vector Machine and Extreme Learning Machine algorithms. The deep learning experiments also showed that altering the batch size has a significant effect on the outputs. Conclusion: This paper presents a data-driven methodology for predicting consumers' eligibility to be metered. The research has analysed the performance of Deep Learning, Naïve Bayes, SVM and ELM on a small dataset. It is anticipated that the research will help utility companies in developing countries with large populations and huge metering gaps to prioritise the installation of smart meters based on consumers' payment history.
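As an illustration of the winning classifier, a minimal Gaussian Naive Bayes sketch on toy payment-history features follows; the columns and labels are hypothetical, not the Nigerian utility data used in the paper.

```python
# Toy example: Gaussian Naive Bayes predicting metering eligibility from
# hypothetical bill-payment features [months_billed, on_time_ratio, arrears].
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X = np.array([[24, 0.9, 10.0], [36, 0.8, 30.0], [30, 0.95, 5.0],
              [12, 0.3, 250.0], [18, 0.2, 400.0], [10, 0.4, 320.0]])
y = np.array([1, 1, 1, 0, 0, 0])   # 1 = worthy of a smart meter, 0 = not yet

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                          random_state=0, stratify=y)
model = GaussianNB().fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```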
Design of PSK-Based Trusted DTLS for Smart Sensor Nodes
Authors: Anil Yadav, Sujata Pandey, Rajat Singh and Nitin Rakesh
Background: RSA-based key exchange is a heavy and time-consuming process, as it involves numerous message exchanges between a client and the server. The pre-shared key (PSK) based handshake attempts to reduce the number of messages exchanged during key establishment between a client and the server. Method: This paper extends the TEE-enabled DTLS handshake design based on RSA to a TEE-enabled pre-shared key based handshake. A DTLS client and server install the pre-shared key in advance so that the message exchanges during session key generation can be reduced. Result: The authors have significantly reduced the handshake-time penalty by fine-tuning the tdtls algorithm for the PSK-based handshake. On average, this gain is over 2 ms (50% - from 3.5 ms to 1.5 ms) across various cipher-suites. Conclusion: The tdtls approach increases the security of the session key and its intermediate keying materials, which is a huge gain compared to the minor increase in handshake time. The algorithm ensures end-to-end security for the PSK-based session key, as well as its keying materials, between a DTLS client and a server.
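The efficiency argument can be seen in miniature below: with a key installed in advance, both endpoints derive the session key locally from the PSK and exchanged nonces, with no RSA operations or certificate round-trips. HKDF from the cryptography package is used here as an illustrative KDF; it is not the tdtls key schedule.

```python
# Sketch: both sides compute the same session key from the pre-shared key
# and the handshake nonces. HKDF is illustrative, not the DTLS key schedule.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

psk = os.urandom(32)                  # installed on client and server in advance
client_nonce, server_nonce = os.urandom(16), os.urandom(16)

def derive_session_key(psk, c_nonce, s_nonce):
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=c_nonce + s_nonce, info=b"psk-demo").derive(psk)

client_key = derive_session_key(psk, client_nonce, server_nonce)
server_key = derive_session_key(psk, client_nonce, server_nonce)
assert client_key == server_key       # no key-transport messages were needed
```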
IoT with Cloud-Based End to End Secured Disease Diagnosis Model Using Light Weight Cryptography and Gradient Boosting Tree
Author: K. Shankar
Background: With the evolution of the Internet of Things (IoT) and its associated devices employed in the medical domain, the different characteristics of online healthcare applications have become advantageous for human wellbeing. At present, various e-healthcare applications offer online services in diverse dimensions using IoT. Aim: The objective of this paper is to present an IoT and cloud-based secure disease diagnosis model. Method: In this paper, an efficient IoT and cloud-based secure classification model is proposed for disease diagnosis. Through this model, people can avail themselves of efficient and secure services globally over online healthcare applications. The presented model combines effective Gradient Boosting Tree (GBT)-based data classification with a lightweight cryptographic technique named Rectangle. The presented GBT-R model offers better diagnosis in a secure way. Results: The proposed model was validated using the Pima Indians diabetes data, and extensive simulation was conducted to prove the consistency of the employed GBT-R model. Conclusion: The experimental outcome strongly suggested that the presented model shows maximum performance, with an accuracy of 94.92.
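A sketch of the classification stage alone, using scikit-learn's GradientBoostingClassifier on Pima-style data; the file name and column layout are assumptions, and the Rectangle cipher stage is omitted.

```python
# Sketch: GBT classification on Pima-style diabetes data. The CSV path and
# column order (features first, outcome label last) are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("pima_indians_diabetes.csv")    # hypothetical local copy
X, y = df.iloc[:, :-1], df.iloc[:, -1]

gbt = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)
print("CV accuracy:", cross_val_score(gbt, X, y, cv=5).mean())
```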
Empirical Evaluation of NoSQL and Relational Database Systems
Authors: Shivangi Kanchan, Parmeet Kaur and Pranjal Apoorva
Aim: To evaluate the performance of relational and NoSQL databases in terms of execution time and memory consumption during operations involving structured data. Objective: To outline the criteria that decision makers should consider while choosing the database most suited to an application. Methods: Extensive experiments were performed on MySQL, MongoDB, Cassandra and Redis, using data for an IMDB movies schema divided into 4 datasets of 1000, 10000, 25000 and 50000 records. The experiments involved the typical database operations of insertion, deletion, update and read of records, with and without indexing, as well as aggregation operations. The databases' performance was evaluated by measuring the time taken for operations and computing memory usage. Results: Redis provides the best performance for write, update and delete operations in terms of elapsed time and memory usage, whereas MongoDB gives the worst performance when the size of data increases, due to its locking mechanism. For read operations, Redis provides better latency than Cassandra and MongoDB, while MySQL shows the worst performance due to its relational architecture. On the other hand, MongoDB shows the best performance among all the databases in terms of efficient memory usage. Indexing improves the performance of any database only for covered queries. Redis and MongoDB give good performance for range-based queries and for fetching complete data in terms of elapsed time, whereas MySQL gives the worst performance. MySQL provides better performance for aggregate functions; NoSQL is not suitable for complex queries and aggregate functions. Conclusion: The extensive empirical analysis found that NoSQL outperforms SQL-based systems in terms of basic read and write operations. However, SQL-based systems are better if queries on the dataset mainly involve aggregation operations.
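The measurement pattern behind such comparisons is straightforward to reproduce; the sketch below times a batch of writes against Redis and reports elapsed time and server memory. Connection details are assumptions, and the other stores would get the same loop with their own client APIs.

```python
# Sketch: time 10k inserts into Redis and report elapsed time and memory.
import time
import redis                                   # pip install redis

r = redis.Redis(host="localhost", port=6379)   # assumed local server
records = [(f"movie:{i}", f"title-{i}") for i in range(10_000)]

start = time.perf_counter()
for key, title in records:
    r.set(key, title)
elapsed = time.perf_counter() - start

print(f"10k inserts in {elapsed:.3f}s, "
      f"server memory: {r.info('memory')['used_memory_human']}")
```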
Identification of Coronary Artery Disease using Artificial Neural Network and Case-Based Reasoning
Authors: Varun Sapra, M.L. Saini and Luxmi Verma
Background: Cardiovascular diseases are increasing at an alarming rate, with a very high rate of mortality. Coronary artery disease is one type of cardiovascular disease that is not easily diagnosed in its early stage. Prevention of coronary artery disease is possible only if it is diagnosed early and proper medication is given. Objective: An effective diagnosis model is important not only for early diagnosis but also for checking the severity of the disease. Method: In this paper, a hybrid approach is followed, integrating deep learning (a multilayer perceptron) with case-based reasoning to design an analytical framework. The study has two phases: in the first, the patient is diagnosed for coronary artery disease; in the second, if the patient is found to be suffering from the disease, case-based reasoning is employed to determine its severity. In the first phase, a multilayer perceptron is implemented on a reduced dataset with time-based learning for stochastic gradient descent. Results: The classification accuracy increased by 4.18% on the reduced dataset using a deep neural network with time-based learning. In the second phase, when a patient was diagnosed positive for coronary artery disease, the case-based reasoning system retrieved the most similar case from the case base to predict the severity of the disease for that patient. The CBR model achieved 97.3% accuracy. Conclusion: The model can be very useful for medical practitioners, supporting the decision-making process; it can thus save patients from unnecessary expenses on costly tests and can improve the quality and effectiveness of medical treatment.
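The retrieval step of the second phase reduces to a nearest-neighbour lookup over the case base; the feature vectors, similarity metric and severity labels below are illustrative.

```python
# Sketch: retrieve the most similar stored case and reuse its severity label.
import numpy as np
from sklearn.neighbors import NearestNeighbors

case_base = np.array([[63, 145, 233],          # hypothetical [age, BP, chol]
                      [41, 130, 204],
                      [56, 120, 236]])
severity = ["severe", "mild", "moderate"]      # label of each stored case

retriever = NearestNeighbors(n_neighbors=1).fit(case_base)
new_patient = np.array([[58, 140, 230]])
_, idx = retriever.kneighbors(new_patient)
print("predicted severity:", severity[idx[0][0]])
```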
Time Series Features Extraction and Forecast from Multi-feature Stocks with Hybrid Deep Neural Networks
In this paper, we use LSTM and LSTM-CNN models to predict the rise and fall of stock data, and show that LSTM-based models are powerful tools for time series stock data forecasting. Background: Forecasting of time series stock data is important in financial work. Stock data usually have multiple features, such as opening price, closing price and so on. Traditional forecast methods, however, are mainly applied to one feature (the closing price) or a few, such as four or five features. The massive information hidden in multi-feature data is not thoroughly discovered and used. Objective: The study aimed to find a method that makes use of all the information in the multi-feature data and produces a forecast model. Method: LSTM-based models are introduced in this paper. Three models are compared: a single LSTM model, a hybrid LSTM-CNN model, and a traditional ARIMA model. Results: Experiments with the different models were performed on stock data with 50 and 230 features, respectively. Results showed that the MSE of the single LSTM model was 2.4% lower than that of the ARIMA model, and the MSE of the LSTM-CNN model was 12.57% lower than that of the single LSTM model on the 50-feature data. On the 230-feature data, the LSTM-CNN model improved forecast accuracy by 23.41%. Conclusion: In this paper, we used three different models (ARIMA, single LSTM and the LSTM-CNN hybrid) to forecast the rise and fall of multi-feature stock data. The single LSTM model was found to be better than the traditional ARIMA model on average, and the LSTM-CNN hybrid model better than the single LSTM model on 50-feature stock data. Moreover, experiments with the LSTM-CNN model on stock data with 50 and 230 features showed that results on the 230-feature data were better than those on the 50-feature data. Our work shows that the LSTM-CNN hybrid model is better than the other models and that experiments on stock data with more features can yield better outcomes. We will carry out more work on hybrid models in the future.
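One plausible shape for such a hybrid, sketched in Keras, is a 1D convolution over the multi-feature window feeding an LSTM and a sigmoid rise/fall output; the layer sizes and window length are assumptions, not the authors' architecture.

```python
# Sketch: Conv1D extracts local patterns across the 50-feature window,
# the LSTM models the temporal dependence, and the sigmoid outputs P(rise).
from tensorflow import keras
from tensorflow.keras import layers

window, n_features = 30, 50                    # assumed 30-step window
model = keras.Sequential([
    layers.Input(shape=(window, n_features)),
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(32),
    layers.Dense(1, activation="sigmoid"),     # probability of a rise
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```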
Some Methods for Constructing Infinite Families of Quasi-Strongly Regular Graphs
Authors: Gholam H. Shirdel and Adel Asgari
Objective: In this article, we examine some methods of constructing infinite families of quasi-strongly regular graphs. We obtain a necessary condition for the composition of several graphs to be a quasi-strongly regular graph and use it to construct some infinite families of quasi-strongly regular graphs; using the Cartesian product of two graphs, we construct further infinite families. Introduction: A regular graph is called strongly regular if the number of common neighbours of any two adjacent vertices is a non-negative integer λ and the number of common neighbours of any two non-adjacent vertices is a non-negative integer μ. Strongly regular graphs were introduced in 1963. Subsequently, the study of these graphs and of methods for constructing them became an important part of graph theory; there are two important branches in the study of strongly regular graphs. Methods: A pairwise balanced incomplete block design (PBIBD) is a collection β of subsets of a v-set X, called blocks, such that every pair of elements of X appears in exactly λ blocks. If each block has k elements, this design is called a 2-(v,k,λ) design, or simply a 2-design or a block design. We denote the number of blocks in β by b, and it is easy to see that for each element x of X the number of blocks containing x is a constant (denoted by r). Results: We use a method of constructing new graphs from old ones, introduced here and named the composition of graphs. A block design is usually displayed as an array in which each column represents a block. Discussion: Interesting graphs with certain properties closely related to strongly regular graphs and quasi-strongly regular graphs are introduced. Conclusion: Strongly regular graphs are an important and interesting family of graphs that have been generalized in a variety of ways; for example, strongly regular digraphs, (λ, μ)-graphs and quasi-strongly regular graphs are some generalizations. In the present article, in addition to reviewing several methods of constructing strongly regular graphs, we construct infinite families of quasi-strongly regular graphs.
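The prose definition above compresses into the usual parameter notation; the LaTeX below restates it, together with the standard counting identity linking the parameters (the srg(n, k, λ, μ) notation is conventional, not taken from the abstract).

```latex
% A k-regular graph G on n vertices is strongly regular with parameters
% (n, k, \lambda, \mu) when common-neighbour counts depend only on adjacency:
\[
  |N(u) \cap N(v)| =
  \begin{cases}
    \lambda, & \text{if } uv \in E(G), \\
    \mu,     & \text{if } uv \notin E(G),\ u \neq v.
  \end{cases}
\]
% The parameters satisfy the standard counting identity
\[
  k\,(k - \lambda - 1) = (n - k - 1)\,\mu .
\]
```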
Adaptive Privacy Preservation Approach for Big Data Publishing in Cloud using k-anonymization
Authors: Suman Madan and Puneet Goswami
Background: Big data is an emerging technology with numerous applications in fields like hospitals, government records, social sites, and so on. As cloud computing can transfer large amounts of data through its servers, it has become important for big data. It is therefore important in cloud computing to protect data so that third parties cannot access users' information. Methods: This paper develops an anonymization model and an adaptive Dragon Particle Swarm Optimization (adaptive Dragon-PSO) algorithm for privacy preservation in the cloud environment. The proposed adaptive Dragon-PSO integrates an adaptive strategy into the Dragon-PSO algorithm, which itself combines the Dragonfly Algorithm (DA) and Particle Swarm Optimization (PSO). The proposed method derives a fitness function for the adaptive Dragon-PSO algorithm that seeks high values of both privacy and utility. The performance of the proposed method was evaluated using metrics such as information loss and classification accuracy for different anonymization constant values. Conclusion: The proposed method provided minimal information loss and maximal classification accuracy of 0.0110 and 0.7415, respectively, when compared with the existing methods.
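The property that the anonymization model must maintain can be checked directly; the sketch below tests k-anonymity over hypothetical quasi-identifier columns using pandas.

```python
# Sketch: a dataset is k-anonymous when every quasi-identifier combination
# occurs at least k times. Column names and records are hypothetical.
import pandas as pd

def is_k_anonymous(df, quasi_identifiers, k):
    """True if each quasi-identifier group contains at least k records."""
    return df.groupby(quasi_identifiers).size().min() >= k

records = pd.DataFrame({
    "age_band":  ["30-40", "30-40", "40-50", "40-50"],
    "zip3":      ["110", "110", "201", "201"],
    "diagnosis": ["flu", "cold", "flu", "asthma"],
})
print(is_k_anonymous(records, ["age_band", "zip3"], k=2))   # True
```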
Unsymmetric Image Encryption Using Lower-Upper Decomposition and Structured Phase Mask in the Fractional Fourier Domain
Authors: Shivani Yadav and Hukum Singh
Background: An asymmetric cryptosystem using a Structured Phase Mask (SPM) and a Random Phase Mask (RPM) in the fractional Fourier transform (FrFT) domain, with Lower-Upper Decomposition with Partial pivoting (LUDP), is proposed in order to enhance the security of an existing system. The use of a structured phase mask offers additional parameters in encryption. In the encoding process, the phase-truncation (PT) part is replaced by the lower-upper decomposition part. Objective: The asymmetric cryptosystem uses LUDP to prevent quick identification of the encrypted image in the FrFT domain. Method: Initially, the input image is convolved with the SPM, transformed by the FrFT and processed by LUDP. The obtained result is then multiplied by the RPM, inverse-transformed by the FrFT and processed by LUDP again. Results: The strength and legitimacy of the proposed scheme were verified by numerical analysis in MATLAB R2018a (9.4.0.813654). To check the viability of the proposed scheme, mathematical simulations were carried out, which in turn determine the performance and the quality of the recovered image. These simulations cover key sensitivity, occlusion attacks, noise attacks and histograms. Conclusion: A novel asymmetric cryptosystem is proposed using two phase masks, an SPM and an RPM. With LUDP, the encoding procedure differs from the decoding procedure. Security is enhanced by increasing the number of keys, and the scheme is also robust against attacks. Statistical simulations were carried out to inspect the strength and viability of the algorithm.
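The role of the LUDP step can be sketched numerically: factor the transformed image matrix as A = PLU and transmit one factor as ciphertext while retaining the rest as keys. An ordinary FFT stands in for the FrFT below, and the mask and sizes are illustrative.

```python
# Sketch: replace phase truncation with an LU factorization of the spectrum.
import numpy as np
from scipy.linalg import lu

img = np.random.rand(8, 8)                        # stand-in input image
rpm = np.exp(2j * np.pi * np.random.rand(8, 8))   # random phase mask
spectrum = np.fft.fft2(img * rpm)                 # FrFT in the actual scheme

P, L, U = lu(spectrum)        # lower-upper decomposition with partial pivoting
cipher = U                    # transmitted part
keys = (P, L, rpm)            # retained private keys

recovered = np.fft.ifft2(P @ L @ cipher) / rpm    # invert the sketch pipeline
assert np.allclose(recovered.real, img)
```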
hGWO-SA: A Novel Hybrid Grey Wolf Optimizer-Simulated Annealing Algorithm for Engineering and Power System Optimization Problems
Background: The improved variants of the Grey Wolf Optimizer have good exploration capability for the global optimum solution. However, the exploitation competence of the existing variants is poor: researchers are continuously trying to improve the exploitation phase, but the improved variants of the Grey Wolf Optimizer still lack local search capability. In the proposed research, the exploitation phase of the existing Grey Wolf Optimizer is further improved using a simulated annealing algorithm, and the proposed hybrid optimizer is named the hGWO-SA algorithm. Methods: The effectiveness of the proposed hybrid variant was tested on various benchmark problems, including multi-disciplinary optimization and design engineering problems and unit commitment problems of the electric power system, and it was experimentally found that the proposed optimizer performs much better than existing variants of the Grey Wolf Optimizer. The feasibility of the hGWO-SA algorithm was tested on small- and medium-scale power system unit commitment problems, with results evaluated for systems of 4, 5, 6, 7, 10, 19, 20, 40 and 60 units. The 10-unit system was evaluated with 5% and 10% spinning reserve. Results and Conclusion: The results clearly show that the suggested method gives superior solutions compared to other algorithms.
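The hybridization idea, refining the incumbent best wolf with a simulated annealing pass that occasionally accepts worse moves, can be sketched as follows; the objective, perturbation scale and cooling schedule are illustrative.

```python
# Sketch: SA local refinement of the best grey-wolf solution (minimization).
import math
import random

def sa_refine(best, objective, temp=1.0, cooling=0.95, steps=50):
    current, f_current = list(best), objective(best)
    for _ in range(steps):
        candidate = [x + random.gauss(0, 0.1) for x in current]
        f_cand = objective(candidate)
        # accept improvements always, worse moves with Boltzmann probability
        if f_cand < f_current or random.random() < math.exp((f_current - f_cand) / temp):
            current, f_current = candidate, f_cand
        temp *= cooling                       # geometric cooling schedule
    return current, f_current

# example: refine a point on the sphere function
print(sa_refine([0.5, -0.3], lambda v: sum(x * x for x in v)))
```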