Recent Advances in Computer Science and Communications - Volume 14, Issue 9, 2021
An Optimized Software Defect Prediction Model Based on PSO-ANFIS
Authors: Misha Kakkar, Sarika Jain, Abhay Bansal and P.S. Grover
Introduction: The software defect prediction (SDP) model plays a very important role in today's software industry. SDP models can provide either only a list of defect-prone classes as output or the number of defects present in each class. This output can then be used by quality assurance teams to allocate limited resources effectively for validating software products by putting more effort into the defect-prone classes. The study proposes an OANFIS-SDP model that gives the number of defects as output to software development teams. Development teams can then use this data for better allocation of their scarce resources, such as time and manpower. Methods: OANFIS is a novel approach based on the Adaptive Neuro-Fuzzy Inference System (ANFIS), which is optimized using Particle Swarm Optimization (PSO). The OANFIS model combines the flexibility of the ANFIS model with the optimization capabilities of PSO for better performance. Results: The proposed model is tested using datasets from open-source Java projects of varied sizes (from 176 to 745 classes). Conclusion: The study proposes an OANFIS-based SDP model that gives the number of defects as output to software development teams, who can then use this data for better allocation of their scarce resources, such as time and manpower. OANFIS is a novel approach that uses the flexibility provided by the ANFIS model and optimizes it using PSO. The results given by OANFIS are very good, and it can also be concluded that the performance of the OANFIS-based SDP model might be influenced by the size of the projects. Discussion: The performance of the OANFIS-based SDP model is better than that of the ANFIS model. It can also be concluded that the performance of the SDP model might be influenced by the size of the projects.
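As a rough illustration of the PSO step that OANFIS relies on, the sketch below moves a particle swarm toward membership-function parameters that minimise defect-count prediction error. The toy predictor, data and PSO constants are illustrative assumptions, not the authors' implementation.

```python
# Minimal PSO sketch: particles encode premise (membership-function) parameters
# of a tiny fuzzy predictor and are moved toward positions minimising RMSE.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((60, 2))                      # toy class metrics
y = 3 * X[:, 0] + 2 * X[:, 1] ** 2           # toy defect counts

def predict(params, X):
    """Stand-in for an ANFIS forward pass with Gaussian memberships."""
    c, s = params[:2], np.abs(params[2:4]) + 1e-3
    w = params[4:]
    mu = np.exp(-((X - c) ** 2) / (2 * s ** 2))   # membership degrees
    firing = mu.prod(axis=1)                      # rule firing strength
    return firing * (X @ w[:2] + w[2]) + X @ w[3:5]

def fitness(p):
    return np.sqrt(np.mean((predict(p, X) - y) ** 2))

n_particles, dim = 20, 9
pos = rng.normal(size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best RMSE:", pbest_f.min())
```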
The Impact of Varying Sinks on Load Distribution in IoT Routing Under Static and Mobile Scenarios
Background: The majority of routing algorithms proposed for IoT can be categorized as modifications of routing protocols designated for IoT, such as enhancements of RPL objective functions, while others are enhancements of routing algorithms derived from those designed for ad hoc networks. We have thoroughly investigated the literature on these modifications and enhancements. However, there is a lack of an in-depth study examining the impact of a varying number of sinks and node mobility on routing algorithms based on MRHOF and OF0. Method: We examine the ability of MRHOF and OF0 to distribute the load under a varying number of sink nodes in static and mobile scenarios with the aid of the COOJA simulator. This has been conducted using various metrics, including regular metrics such as throughput and power consumption, and newly derived metrics, packets load deviation and power deviation, which are derived for the purpose of measuring load distribution. Results: Increasing the number of sinks has shown that OF0 outperforms MRHOF in terms of power consumption and power deviation under both static and mobile scenarios. On the other hand, in mobile scenarios, MRHOF outperforms OF0 in terms of packets load deviation for a high number of sinks. Conclusion: Both techniques demonstrated performance merits when challenged with a varying number of sinks. Therefore, understanding their behaviour would play a key role in deriving an efficient load distribution technique, which is likely to be achieved by considering network conditions.
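A hedged reading of the two derived metrics named above: the sketch treats "packets load deviation" and "power deviation" as the standard deviation of per-sink packet counts and per-node power draw; the exact definitions in the paper may differ, and the numbers are hypothetical COOJA-style outputs.

```python
# Illustrative load-distribution metrics over hypothetical simulation output.
import statistics

packets_per_sink = {"sink1": 820, "sink2": 790, "sink3": 615}   # packets received per sink
power_per_node_mw = [1.42, 1.55, 1.38, 1.61, 1.47]              # average power per node (mW)

packets_load_deviation = statistics.pstdev(packets_per_sink.values())
power_deviation = statistics.pstdev(power_per_node_mw)
print(f"packets load deviation: {packets_load_deviation:.2f}")
print(f"power deviation: {power_deviation:.3f} mW")
```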
Hybrid Model using Firefly and BBO for Feature Selection in Software Product Line
Authors: Hitesh Yadav, Rita Chhikara and Charan Kumari
Background: A Software Product Line is a group of multiple software systems that share a similar set of features with multiple variants. A feature model is used to capture and organize the features used across multiple organizations. Objective: The objective of this research article is to obtain an optimized subset of features capable of providing high performance. Methods: In order to achieve the desired objective, two methods have been proposed: a) an improved objective function, which is used to compute the contribution of each feature with a weight-based methodology; and b) a hybrid model that is employed to optimize the Software Product Line problem. Results: Feature sets varying in size from 100 to 1000 have been used to compute the performance of the Software Product Line. Conclusion: The results show that the proposed hybrid model outperforms state-of-the-art metaheuristic algorithms.
Comparative Analysis of Keyframe Extraction Techniques for Video Summarization
Authors: Vishal Parikh, Jay Mehta, Saumyaa Shah and Priyanka Sharma
Background: With technological advancement, the quality of life of people has improved. At the same time, large amounts of data are produced by people in the form of text, images and videos. Hence, significant effort is needed to devise methodologies for analyzing and summarizing these data to cope with space constraints. Video summaries can be generated either by keyframes or by skims/shots. Keyframe extraction is done based on deep learning-based object detection techniques. Various object detection algorithms have been reviewed for generating and selecting the best possible frames as keyframes. A set of frames is extracted out of the original video sequence and, based on the technique used, one or more frames of the set are selected as keyframes, which then become part of the summarized video. The following paper discusses the selection of various keyframe extraction techniques in detail. Methods: The research paper is focused on summary generation for office surveillance videos. The major focus of the summary generation is on various keyframe extraction techniques. For this, training models like MobileNet, SSD, and YOLO are used. A comparative analysis of their efficiency showed that YOLO gives better performance than the other models. Keyframe selection techniques like sufficient content change, maximum frame coverage, minimum correlation, curve simplification, and clustering based on human presence in the frame have been implemented. Results: Variable and fixed-length video summaries were generated and analyzed for each keyframe selection technique for office surveillance videos. The analysis shows that the output video obtained after using the clustering and curve simplification approaches is compressed to half the size of the actual video and requires considerably less storage space. The technique depending on the change of frame content between consecutive frames for keyframe selection produces the best output for office surveillance videos. Conclusion: In this paper, we discussed the process of generating a synopsis of a video to highlight the important portions and discard the trivial and redundant parts. Firstly, we described various object detection algorithms like YOLO and SSD, used in conjunction with neural networks like MobileNet, to obtain the probabilistic score of an object present in the video. These algorithms generate the probability of a person being part of the image for every frame in the input video. The results of object detection are passed to keyframe extraction algorithms to obtain the summarized video. Our comparative analysis of keyframe selection techniques for office videos will help in determining which keyframe selection technique is preferable.
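A minimal sketch of the "sufficient content change" keyframe rule mentioned above: a frame becomes a keyframe when its mean absolute difference from the previous keyframe exceeds a threshold. Frames are plain NumPy arrays and the threshold is an assumed value, not the paper's setting.

```python
# Keyframe selection by sufficient content change on a toy "video".
import numpy as np

def select_keyframes(frames, threshold=12.0):
    keyframes, last = [], None
    for idx, frame in enumerate(frames):
        gray = frame.mean(axis=2) if frame.ndim == 3 else frame
        if last is None or np.abs(gray - last).mean() > threshold:
            keyframes.append(idx)     # enough content change -> new keyframe
            last = gray
    return keyframes

video = ([np.full((48, 64), 10.0)] * 10 +     # static scene
         [np.full((48, 64), 90.0)] * 10 +     # scene change
         [np.full((48, 64), 10.0)] * 10)      # back to original scene
print(select_keyframes(video))                # -> [0, 10, 20]
```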
Sequential Model for Digital Image Contrast Enhancement
Authors: Monika Agarwal, Geeta Rani, Shilpy Agarwal and Vijaypal S. Dhaka
Aims: The manuscript aims at designing and developing a model for optimum contrast enhancement of an input image. The output image of the model ensures minimum noise, maximum brightness and maximum entropy preservation. Objectives:
* To determine an optimal value of threshold by using the concept of entropy maximization for segmentation of all types of low-contrast images.
* To minimize the problem of over-enhancement by using a combination of a weighted distribution and a weighted constrained model before applying the histogram equalization process.
* To provide optimum contrast enhancement with minimum noise and undesirable visual artefacts.
* To preserve the maximum entropy during the contrast enhancement process and provide the detailed information recorded in an image.
* To provide the maximum mean brightness preservation with better PSNR and contrast.
* To effectively retain the natural appearance of images.
* To avoid the unnatural changes that occur in the Cumulative Density Function.
* To minimize problems such as noise, blurring and intensity saturation artefacts.
Methods: 1. Histogram Building. 2. Segmentation using Shannon's Entropy Maximization. 3. Weighted Normalized Constrained Model. 4. Histogram Equalization. 5. Adaptive Gamma Correction Process. 6. Homomorphic Filtering. Results: Experimental results obtained by applying the proposed technique MEWCHE-AGC on a dataset of low-contrast images prove that MEWCHE-AGC preserves the maximum brightness and yields the maximum entropy, a high value of PSNR and high contrast. The technique is also effective in retaining the natural appearance of images. The comparative analysis of MEWCHE-AGC with existing techniques of contrast enhancement is evidence of its better performance in both qualitative and quantitative aspects. Conclusion: The technique MEWCHE-AGC is suitable for enhancement of digital images with varying contrasts. It is thus useful for extracting detailed and precise information from an input image and for identification of desired regions in an image.
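The sketch below illustrates step 2 of the listed method, segmentation by Shannon entropy maximization: it scans grey-level thresholds and keeps the one that maximises the summed entropy of the two histogram segments. It is a Kapur-style stand-in, not the MEWCHE-AGC code.

```python
# Entropy-maximising threshold selection on a synthetic grey-level image.
import numpy as np

def entropy_threshold(image, levels=256):
    p = np.bincount(image.ravel(), minlength=levels).astype(float)
    p /= p.sum()
    best_t, best_h = 1, -np.inf
    for t in range(1, levels):
        h = 0.0
        for seg in (p[:t], p[t:]):        # entropy of each histogram segment
            w = seg.sum()
            if w > 0:
                q = seg[seg > 0] / w
                h -= (q * np.log2(q)).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t

img = np.clip(np.random.default_rng(2).normal(100, 30, (64, 64)), 0, 255).astype(np.uint8)
print("optimal threshold:", entropy_threshold(img))
```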
Comparative Analysis of a Deep Learning Approach with Various Classification Techniques for Credit Score Computation
Authors: Arvind Pandey, Shipra Shukla and Krishna K. Mohbey
Background: Large financial companies are perpetually creating and updating customer scoring techniques. From a risk management view, the predictive accuracy of the probability of default is of greater importance than the traditional binary classification result, i.e., non-credible and credible customers. Customers' default payments in Taiwan are explored for the case study. Objective: The aim is to audit the comparison between the predictive accuracy of the probability of default obtained with various statistical and machine learning techniques. Method: In this paper, nine predictive models are compared, of which the results of only six models are taken into consideration. A comparative analysis is performed for deep learning-based H2O, XGBoost, logistic regression, gradient boosting, naïve Bayes, the logit model, and probit regression. Software tools such as R and SAS (University Edition) are employed for machine learning and statistical model evaluation. Results: Through the experimental study, we demonstrate that XGBoost performs better than the other AI and ML algorithms. Conclusion: A machine learning approach such as XGBoost is effectively used for credit scoring, among other data mining and statistical approaches.
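A hedged sketch of the comparison protocol: two of the listed model families (logistic regression and gradient boosting, the latter standing in for XGBoost) are fitted on a synthetic imbalanced default dataset and compared by the AUC of the predicted default probability. The real study uses the Taiwan credit data and R/SAS tooling.

```python
# Compare predicted default probabilities of two classifiers on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=23, weights=[0.78], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("logit", LogisticRegression(max_iter=1000)),
                    ("gradient boosting", GradientBoostingClassifier(random_state=0))]:
    prob = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]   # probability of default
    print(f"{name}: AUC = {roc_auc_score(y_te, prob):.3f}")
```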
Performance Analysis of Various Front-End and Back-End Amalgamations for Noise-Robust DNN-Based ASR
Authors: Mohit Dua, Pawandeep S. Sethi, Vinam Agrawal and Raghav Chawla
Introduction: An Automatic Speech Recognition (ASR) system makes it possible to recognize speech utterances and can thus be used to convert speech into text for various purposes. These systems are deployed in different environments, clean or noisy, and are used by people of all ages and types, which presents some of the major difficulties faced in the development of an ASR system. Thus, an ASR system needs to be efficient, while also being accurate and robust. Our main goal is to minimize the error rate during the training as well as the testing phases while implementing an ASR system. The performance of ASR depends upon different combinations of feature extraction techniques and back-end techniques. In this paper, using a continuous speech recognition system, a performance comparison of different combinations of feature extraction techniques and various types of back-end techniques is presented. Methods: Hidden Markov Models (HMMs), Subspace Gaussian Mixture Models (SGMMs) and Deep Neural Networks (DNNs) with DNN-HMM architecture, namely Karel's and Dan's implementations, and a hybrid DNN-SGMM architecture are used at the back end of the implemented system. Mel Frequency Cepstral Coefficients (MFCC), Perceptual Linear Prediction (PLP), and Gammatone Frequency Cepstral Coefficients (GFCC) are used as feature extraction techniques at the front end of the proposed system. The Kaldi toolkit has been used for the implementation of the proposed work. The system is trained on the Texas Instruments-Massachusetts Institute of Technology (TIMIT) speech corpus for the English language. Results: The experimental results show that MFCC outperforms GFCC and PLP in noiseless conditions, while PLP tends to outperform MFCC and GFCC in noisy conditions. Furthermore, the hybrid of Dan's DNN implementation along with SGMM performs best for back-end acoustic modeling. The proposed architecture with the PLP feature extraction technique at the front end and the hybrid of Dan's DNN implementation along with SGMM at the back end outperforms the other combinations in a noisy environment. Conclusion: Automatic speech recognition has numerous applications in our lives, like home automation, personal assistants, robotics, etc. It is highly desirable to build an ASR system with good performance. The performance of automatic speech recognition is affected by various factors, which include vocabulary size, whether the system is speaker-dependent or independent, whether speech is isolated, discontinuous or continuous, and adverse conditions like noise. The paper presented an ensemble architecture that uses PLP for feature extraction at the front end and a hybrid of SGMM + Dan's DNN at the back end to build a noise-robust ASR system. Discussion: The work presented in this paper discusses the performance comparison of continuous ASR systems developed using different combinations of front-end feature extraction (MFCC, PLP, and GFCC) and back-end acoustic modeling (mono-phone, tri-phone, SGMM, DNN and hybrid DNN-SGMM) techniques. Each type of front-end technique is tested in combination with each type of back-end technique. Finally, the results of the combinations thus formed are compared to find the best performing combination in noisy and clean conditions.
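Front-end illustration only: the snippet computes 13-dimensional MFCCs plus deltas for a placeholder one-second utterance, with librosa standing in for Kaldi's feature pipeline (the paper uses Kaldi on TIMIT); a PLP or GFCC front end would replace this step.

```python
# MFCC front-end sketch on a synthetic 16 kHz signal (no audio file needed).
import numpy as np
import librosa

sr = 16000
t = np.arange(sr) / sr
y = (0.1 * np.sin(2 * np.pi * 220 * t)).astype(np.float32)   # placeholder "utterance"

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, n_fft=400, hop_length=160)  # 25 ms / 10 ms frames
feats = np.vstack([mfcc, librosa.feature.delta(mfcc)])        # append delta features
print(feats.shape)   # (26, n_frames) -> input to the HMM/SGMM/DNN back end
```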
FSM based Intrusion Detection of Packet Dropping Attack using Trustworthy Watchdog Nodes
Authors: Radha R. Chandan and P.K. Mishra
Introduction: The proposed TWIST model aims to achieve a secure MANET by detecting and mitigating packet dropping attacks using a finite state machine (FSM) based IDS model. Its goals are:
• To determine the trust values of the nodes using context-aware trust calculation.
• To select the trustworthy nodes as watchdog nodes for performing intrusion detection on the network.
• To detect and isolate packet dropping attackers from routing activities; the scheme uses an FSM-based IDS to distinguish packet dropping attackers from genuine nodes in the MANET.
Methods: In this methodology, instead of launching an intrusion detection system (IDS) in all nodes, an FSM-based IDS is placed in the trustworthy watchdog nodes for detecting packet dropping attacker nodes in the network. The proposed FSM-based intrusion detection scheme has three main steps: context-aware trust calculation, watchdog node selection, and FSM-based intrusion detection. In the first step, the trust calculation for each node is based on specific parameters that differ between malicious nodes and normal nodes. The second step is watchdog node selection based on the context-aware trust value calculation, ensuring that trustworthy network monitors are used for detecting attacker nodes in the network. The final step is FSM-based intrusion detection, where the nodes acquire each state based on their behavior during data routing. Based on node behavior, state transitions occur, and nodes that drop data packets exceeding the defined threshold are moved to the malicious state and restricted from further routing and services in the network. Results: The performance of the proposed (TWIST) mechanism is assessed using Network Simulator 2 (NS2). The proposed TWIST model is implemented by modifying the Ad-Hoc On-Demand Distance Vector (AODV) protocol files in NS2. Moreover, the proposed scheme is compared with the Detection and Defense against Packet Drop attack in MANET (DDPD) scheme. A performance analysis is done for the proposed TWIST model using metrics such as detection accuracy, false-positive rate, and overhead, and the results are compared with those of the DDPD scheme. After comparing the results, we found that the proposed TWIST model exhibits better performance in terms of detection accuracy, false-positive rate, energy consumption, and overhead compared to the existing DDPD scheme. Discussion and Conclusion: In the TWIST model, an efficient packet dropping detection scheme based on the FSM model is proposed that efficiently detects packet dropping attackers in the MANET. The trust is evaluated for each node in the network, and the nodes with the highest trust values are selected as watchdog nodes. The trust calculation, based on parameters such as residual energy, the interaction between nodes and the neighbor count, determines watchdog node selection. Thus, malicious nodes that drop data packets during data forwarding cannot be selected as watchdog nodes. FSM-based intrusion detection is applied in the watchdog nodes to detect attackers accurately by monitoring the neighbor nodes for malicious behavior. A performance analysis is conducted between the proposed TWIST mechanism and the existing DDPD scheme. The proposed TWIST model exhibits better performance in terms of detection accuracy, false-positive rate, energy consumption, and overhead compared to the existing DDPD scheme. This work may extend the conventional trust measurement of MANET routing, which adopts only routing behavior observation to cope with malicious activity. In addition, the performance evaluation of the proposed work under packet dropping attack has not been performed for varying node mobility in terms of speed. Furthermore, performance metrics such as route discovery latency and malicious discovery ratio can be added to evaluate the performance of the protocol in the presence of malicious nodes. This may be considered in future work as an extension of the protocol for better and more efficient results. Furthermore, in the future, the scheme will focus on providing proactive detection of packet dropping attacker nodes in MANET using a suitable and efficient statistical method.
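A minimal sketch of the FSM idea used by the watchdog nodes: each neighbour's forwarding behaviour drives transitions NORMAL -> SUSPECT -> MALICIOUS once observed drops cross thresholds. The states, thresholds and field names are illustrative, not the TWIST specification.

```python
# Toy per-neighbour state machine driven by observed packet forwarding.
from dataclasses import dataclass

SUSPECT_DROPS, MALICIOUS_DROPS = 3, 6      # assumed thresholds

@dataclass
class NeighbourState:
    state: str = "NORMAL"
    drops: int = 0

    def observe(self, forwarded: bool) -> str:
        if not forwarded:
            self.drops += 1
        if self.drops >= MALICIOUS_DROPS:
            self.state = "MALICIOUS"       # isolated from further routing
        elif self.drops >= SUSPECT_DROPS:
            self.state = "SUSPECT"
        return self.state

node = NeighbourState()
for fwd in [True, False, True, False, False, False, False, False]:
    print(node.observe(fwd))
```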
Implementing the Kalman Filter Algorithm in Parallel Form: Denoising Sound Wave as a Case Study
Authors: Hazem H. Osman, Ismail A. Ismail, Ehab Morsy and Hamid M. Hawidi
Background: The Kalman filter and its variants have achieved great success in many technological applications. However, the Kalman filter carries a heavy computational burden; with big data, it becomes quite slow. On the other hand, the computer industry has now entered the multicore era, with hardware computational capacity increased by adding more processors (cores) on one chip. Since purely sequential processors will not be available in the near future, we have to move to parallel computation. Objective: This paper focuses on how to make the Kalman filter faster on multicore machines and on implementing the parallel form of the Kalman filter equations to denoise a sound wave as a case study. Method: All signal points are split into large segments of data, and the equations are applied to each segment simultaneously. After that, the filtered points are merged again into one large signal. Results: Our parallel form of the Kalman filter can achieve nearly linear speed-up. Conclusion: By implementing the parallel form of the Kalman filter equations on a noisy sound wave as a case study and using various numbers of cores, it is found that the Kalman filter algorithm can be efficiently implemented in parallel by splitting all signal points into large segments of data and applying the equations to each segment simultaneously.
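A sketch of the split-and-merge parallelisation described above: the noisy signal is cut into large chunks, a 1-D Kalman filter runs on each chunk in a process pool, and the filtered chunks are concatenated. Chunk-boundary state hand-off is ignored, following the abstract's "split and merge" description; q and r are assumed noise parameters.

```python
# Segment-parallel 1-D Kalman denoising of a synthetic noisy sine wave.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def kalman_1d(z, q=1e-4, r=0.05):
    x, p, out = z[0], 1.0, np.empty_like(z)
    for i, meas in enumerate(z):
        p += q                      # predict
        k = p / (p + r)             # Kalman gain
        x += k * (meas - x)         # update
        p *= (1 - k)
        out[i] = x
    return out

def denoise_parallel(signal, n_chunks=4):
    chunks = np.array_split(signal, n_chunks)
    with ProcessPoolExecutor(max_workers=n_chunks) as pool:
        return np.concatenate(list(pool.map(kalman_1d, chunks)))

if __name__ == "__main__":
    t = np.linspace(0, 1, 16000)
    noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
    print(denoise_parallel(noisy)[:5])
```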
A New Approach for Simplification of Logical Propositions with Two Propositional Variables Using Truth Tables
Authors: Maher Nabulsi, Nesreen Hamad and Sokyna Alqatawneh
Background: Proposition simplification is a classic topic in discrete mathematics that is applied in different areas of science such as program development and digital circuit design. Investigating alternative methods would assist in presenting different approaches that can be used to obtain better results. This paper proposes a new method to simplify any logical proposition with two propositional variables without using logical equivalences. Methods: This method is based on constructing a truth table for the given proposition and applying one of the following two concepts: the sum of minterms or the product of maxterms, which has not been used previously in discrete mathematics, along with five new rules that are introduced for the first time in this work. Results: The proposed approach was applied to some examples, where its correctness was verified by applying the logical equivalences method. Applying the two methods showed that the logical equivalences method cannot give the simplest form easily, especially if the proposition cannot be simplified, and it cannot assist in determining whether the obtained solution represents the simplest form of the proposition or not. Conclusion: In comparison with the logical equivalences method, the results of all the tested propositions show that our method outperforms the currently used method, as it provides the simplest form of logical propositions in fewer steps and overcomes the limitations of the logical equivalences method. Originality/Value: This paper fulfills an identified need to provide a new method to simplify any logical proposition with two propositional variables.
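The sketch below shows the truth-table route the paper takes for a two-variable proposition: collect the rows where it evaluates to true and print the sum-of-minterms form. The five new simplification rules of the paper are not reproduced.

```python
# Sum-of-minterms form of a two-variable proposition from its truth table.
from itertools import product

def sum_of_minterms(prop):
    terms = []
    for p, q in product((True, False), repeat=2):   # all truth-table rows
        if prop(p, q):
            terms.append(f"({'p' if p else '~p'} & {'q' if q else '~q'})")
    return " | ".join(terms) if terms else "False"

# Example: p -> q, i.e. (not p) or q
print(sum_of_minterms(lambda p, q: (not p) or q))
```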
Securing Energy Routing Protocol Against Black Hole Attacks in Mobile Ad-Hoc Network
Authors: Rajendra P. P. and Shiva Shankar
Introduction: The aim of the secure energy routing protocol is to provide countermeasures against attacks, particularly the black hole attack, in a mobile ad-hoc network, to enhance the network throughput, and to reduce the end-to-end delay between the nodes in the network. The protocol enhances network performance by modifying the existing DSR protocol and introducing a new route discovery mechanism. Methods: The proposed protocol implementation has two phases, a route request/reply phase and a route confirm phase. During route discovery from the source to the destination, the source hub sends the RREQ packet, as shown in Fig. 1(a), when it does not have an accessible route and desires a route to a destination. The source node transmits the RREQ to its associate nodes, and the destination node replies with an RREP. When the source receives a reply message, it responds along the reverse path with a route confirm (RCON) message, providing security to the nodes in the network. Results: The performance of the proposed protocol and the existing DSR protocol are compared with respect to various network metrics like end-to-end delay and packet delivery ratio, and the results are validated by comparing both routing algorithms using Network Simulator 2. Conclusion: The results show that the proposed SERP strongly safeguards against attacks in the network; the packet delivery ratio is increased compared with DSR, and the end-to-end delay is reduced in the proposed protocol. Discussion: Mobile ad-hoc networks are dynamic in nature; they are associated with issues related to secure routing and energy, and are generally vulnerable to several types of attacks. DSR is one of the widely used reactive protocols available for mobile ad-hoc networks, and the proposed work enhances the security of the network over the existing protocol.
Improved Background Subtraction Technique for Detecting Moving Objects
Introduction: Moving object detection from videos is among the most difficult tasks in different areas of computer vision applications. Among the traditional object detection methods, researchers conclude that the background subtraction method performs better in terms of execution time and output quality. Method: The visual background extractor (ViBe) is a renowned background subtraction algorithm for detecting a moving object in various applications. In recent years, a lot of research has been carried out to improve the existing visual background extractor algorithm. Results: After investigating many state-of-the-art techniques and finding the research gaps, this paper presents an improved background subtraction technique based on a morphological operation and a 2D median filter for detecting moving objects, which reduces the noise in the output video and also enhances its accuracy at a very limited additional cost. Experimental results on several benchmark datasets confirmed the superiority of the proposed method over state-of-the-art object detection methods. Conclusion: In this article, a method has been proposed for the detection of a moving object where the quality of the output object is enhanced and good accuracy is achieved. This method provides accurate experimental results, which help in efficient object detection. The proposed technique combines the visual background extractor algorithm with image enhancement procedures, namely morphological and 2-D filtering, at a limited additional cost. Discussion: This article worked on certain specific aspects, namely noise reduction and image enhancement of the output images of the existing ViBe algorithm. The technique proposed in this article will be beneficial for various computer vision applications like video surveillance, road condition monitoring, airport safety, human activity analysis, monitoring the marine border for security purposes, etc.
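A sketch of the post-processing described above applied to a simple foreground mask (plain frame differencing stands in for ViBe): a 2-D median filter removes speckle noise and a morphological opening cleans the mask. The threshold and structuring-element size are assumptions.

```python
# Frame differencing followed by median filtering and morphological opening.
import numpy as np
from scipy import ndimage

def foreground_mask(frame, background, thresh=25):
    diff = np.abs(frame.astype(float) - background.astype(float))
    mask = (diff > thresh).astype(np.uint8)
    mask = ndimage.median_filter(mask, size=3)                       # 2-D median filter
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))   # morphological cleanup
    return mask

bg = np.zeros((60, 80), dtype=np.uint8)
frame = bg.copy()
frame[20:40, 30:50] = 200          # a "moving object"
frame[5, 5] = 255                  # isolated salt noise, removed by the filters
print(foreground_mask(frame, bg).sum(), "foreground pixels")
```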
Face Detection in Single and Multiple Images Using Different Skin Color Models
Authors: Manpreet Kaur, Jasdev Bhatti, Mohit K. Kakkar and Arun Upmanyu
Introduction: Face detection is used in many different streams like video conferencing, human-computer interfaces, face recognition, and image database management. Therefore, the aim of our paper is to apply the Red Green Blue (RGB) and Hue Saturation Value (HSV) color models in detecting faces in single as well as multiple images. Each color model, HSV, YCbCr and TSL, is individually combined with the RGB color model to detect the face region in single and multiple images. Methods: Morphological operations are performed on the face region, and the number of pixels is used as the proposed parameter to check whether an input image contains a face region or not. Canny edge detection is also used to show the boundaries of a candidate face region; in the end, the detected face is shown by using a bounding box around the face. Results: A reliability model has also been proposed for detecting faces in single and multiple images. The results of the experiments reflect that the proposed algorithm performs very well in each model for detecting faces in single and multiple images, and the reliability model provides the best fit by analyzing the precision and accuracy. Moreover, the YCbCr and TSL color models are observed to outperform the existing HSV color model in the multiple-image environment. Discussion: The calculated results show that the HSV model works best for single-faced images, whereas the YCbCr and TSL models work best for multiple-faced images. In addition, the results evaluated in this paper provide better testing strategies that help to develop new techniques, which lead to an increase in research effectiveness. Conclusion: The calculated values of all parameters help prove that the proposed algorithm performs very well in each model for detecting the face by using a bounding box around the face in single as well as multiple images. The precision and accuracy of all three models are analyzed through the reliability model. The comparison presented in this paper reflects that the HSV model works best for single-faced images, whereas the YCbCr and TSL models work best for multiple-faced images.
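A minimal sketch of the HSV skin-colour rule used alongside RGB: RGB pixels are converted to HSV and kept when hue, saturation and value fall inside a commonly quoted skin range. The range limits are illustrative, not the paper's exact thresholds.

```python
# Per-pixel HSV skin mask on a tiny RGB image.
import colorsys
import numpy as np

def skin_mask_hsv(rgb_image):
    mask = np.zeros(rgb_image.shape[:2], dtype=bool)
    for i in range(rgb_image.shape[0]):
        for j in range(rgb_image.shape[1]):
            r, g, b = rgb_image[i, j] / 255.0
            h, s, v = colorsys.rgb_to_hsv(r, g, b)
            # assumed skin range: hue below ~50 deg, moderate saturation, bright enough
            mask[i, j] = (h <= 50 / 360.0) and (0.23 <= s <= 0.68) and (v >= 0.35)
    return mask

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (205, 133, 99)     # skin-like pixel
print(skin_mask_hsv(img))
```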
SVM Kernel and Genetic Feature Selection Based Automated Diagnosis of Breast Cancer
Authors: Indu Singh, Shashank Garg, Shivam Arora, Nikhil Arora and Kripali Agrawal
Background: Breast cancer is the development of a malignant tumor in the breast of human beings (especially females). If not detected at the initial stages, it can substantially lead to an inoperable construct. It is a reason for the majority of cancer-related deaths throughout the world. Objectives: The main aim of this study is to diagnose breast cancer at an early stage so that the required treatment can be provided for survival. The tumor is classified as malignant or benign accurately at an early stage using a novel approach that includes an ensemble of a Genetic Algorithm for feature selection and kernel selection for the SVM classifier. Methods: The proposed GA-SVM (Genetic Algorithm - Support Vector Machine) algorithm optimally selects the most appropriate features for training the SVM classifier. Genetic programming is used to select the features and the kernel for the SVM classifier. The Genetic Algorithm operates by exploring the optimal layout of features for breast cancer, thus overcoming the problems posed by an exponentially large feature space. Results: The proposed approach achieves a mean accuracy of 98.82% on the Wisconsin Diagnostic Breast Cancer (WDBC) dataset available on UCI, with a training and testing ratio of 50:50. Conclusion: The results prove that the proposed model outperforms previously designed models for breast cancer diagnosis. The outcome assures that the GA-SVM model may be used as an effective tool in assisting doctors in treating patients. Alternatively, it may be utilized as an alternative opinion in their eventual diagnosis.
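A hedged sketch of the GA-SVM idea: a tiny genetic algorithm evolves binary feature masks and scores each mask by cross-validated SVM accuracy on the WDBC data shipped with scikit-learn. The population size, rates and fixed RBF kernel are assumptions; the paper also evolves the kernel choice itself.

```python
# Toy GA over feature masks scored by cross-validated SVM accuracy.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()

pop = rng.random((12, X.shape[1])) < 0.5          # random initial feature masks
for _ in range(10):                               # a few generations
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-6:]]        # keep the best half
    children = []
    for _ in range(6):
        a, b = parents[rng.integers(6, size=2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])    # one-point crossover
        flip = rng.random(X.shape[1]) < 0.02          # mutation
        children.append(np.where(flip, ~child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", int(best.sum()), "accuracy:", round(fitness(best), 4))
```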
Optimal Feature Selection Methods for Chronic Kidney Disease Classification using Intelligent Optimization Algorithms
Authors: Jerlin R. Lambert and Eswaran Perumal
Aim: In the classification of medical data, great importance is given to identifying the existence of the disease. Background: Numerous classification algorithms for chronic kidney disease (CKD) have been developed that produce better classification results. However, the inclusion of different factors in the identification of CKD reduces the effectiveness of the employed classification algorithm. Objective: To overcome this issue, feature selection (FS) approaches are proposed to minimize the computational complexity and also to improve the classification performance in the identification of CKD. Since numerous bio-inspired FS methodologies have been developed, a need arises to examine the performance of the feature selection approaches of different algorithms on the identification of CKD. Method: This paper proposes a new framework for the classification and prediction of CKD. Three feature selection approaches are used, namely the Ant Colony Optimization (ACO) algorithm, the Genetic Algorithm (GA), and Particle Swarm Optimization (PSO), in the classification process of CKD. Finally, a logistic regression (LR) classifier is employed for effective classification. Results: The effectiveness of ACO-FS, GA-FS, and PSO-FS is validated by testing them against a benchmark CKD dataset. Conclusion: The empirical results state that the ACO-FS algorithm performs well, and the results report that the classification performance is improved by the inclusion of feature selection methodologies in CKD classification.
A DCT Fractional Bit Replacement Based Dual Watermarking Algorithm for Image Authentication
Authors: Rahul Dixit, Amita Nandal, Arvind Dhaka, Yohan V. Kuriakose and Vardan Agarwal
Abstract: Watermarking is a process of embedding a message inside a digital signal like an image, video, or text. It is used for several key reasons, such as authenticity verification, ownership recognition and hidden communication. In this paper, we discuss image watermarking, where secret messages are stored in images. Introduction: We propose a dual watermarking approach, which is based on the Discrete Cosine Transform, Discrete Wavelet Transform and Singular Value Decomposition methods. This paper considers one watermark as robust and another watermark as fragile. Methods: The robust watermark is embedded in the Discrete Wavelet Transform - Singular Value Decomposition domain and is used to transmit hidden messages. The fragile watermark is embedded in the Discrete Cosine Transform domain and is used for verification of the secret message of the robust watermark. The proposed algorithm is tested in the experimental results section and shows promising results against denoising, rotation, translation and cropping attacks. Results: The results show that the performance of the proposed algorithm in terms of mean squared error, structural similarity and peak signal to noise ratio is considerable as compared with the existing methods. Discussion: We present the comparison of results with the study by Himanshu et al. in Table 10, from which we can see that our method performs better with Gaussian noise and rotational attacks, only lacking with salt and pepper noise. Fig. 7 and Fig. 8, focusing on the resulting PSNR, show the variation with noise variance and degree of rotation. From the graphs, it is evident that our method performs better against Gaussian and rotational attacks. Conclusion: In this paper, a dual watermarking method is proposed in which one watermark is fragile, called the authentication watermark, whereas the other watermark is robust and is called the information watermark. The authentication watermark is embedded in the fractional part of the DCT domain of the cover image, and the information watermark is embedded in the diagonal vector of the LL sub-band.
Communication Cost Aware Resource Efficient Load Balancing (CARELB) Framework for Cloud Datacenter
Authors: Deepika Saxena and Ashutosh K. Singh
Background: Load balancing of communication-intensive applications, allowing efficient resource utilization and minimization of power consumption, is a challenging multi-objective virtual machine (VM) placement problem. The communication among inter-dependent VMs raises network traffic, hampers the cloud client's experience and degrades overall performance by saturating the network. Introduction: Cloud computing has become an indispensable part of Information Technology (IT), which supports digitization throughout the world. It provides a shared pool of IT resources, which are always active, accessible from anywhere at any time, and delivered on demand as a service. The scalability and pay-per-use benefits of cloud computing have driven the entire world towards on-demand IT services that facilitate increased usage of virtualized resources. The rapid growth in the demand for cloud resources has amplified the network traffic in and out of the datacenter. The Cisco Global Cloud Index predicts that by the year 2021, the network traffic among the devices within the data center will grow at a Compound Annual Growth Rate (CAGR) of 23.4%. Methods: To address these issues, a Communication cost Aware and Resource Efficient Load Balancing (CARE-LB) framework is presented that minimizes the communication cost and power consumption and maximizes resource utilization. To reduce the communication cost, VMs with high affinity and inter-dependency are intentionally placed closer to each other. The VM placement is carried out by applying the proposed integration of Particle Swarm Optimization and a non-dominated sorting based Genetic Algorithm, i.e., the PSOGA algorithm, encoding VM allocations as particles as well as chromosomes. Results: The performance of the proposed framework is evaluated by executing numerous experiments in a simulated data center environment, and it is compared with state-of-the-art methods like the Genetic Algorithm, First-Fit, Random-Fit and Best-Fit heuristic algorithms. The experimental outcome reveals that the CARE-LB framework improves resource utilization by 11%, and minimizes power consumption by 4.4% and communication cost by 20.3%, with a reduction of execution time of up to 49.7% over the Genetic Algorithm based Load Balancing framework. Conclusion: The proposed CARE-LB framework provides a promising solution for faster execution of data-intensive applications with improved resource utilization and reduced power consumption. Discussion: In the reported simulation, we analyzed all three objectives after the execution of the proposed multi-objective VM allocation, and the results are shown in Table 4. To choose the number of users for the analysis of communication cost, the experiments were conducted with different numbers of users. For instance, for 100 VMs, we chose 10, 20, ..., 80 users, and their requests for VMs (number of VMs and type of VMs) were generated randomly, such that the total number of requested VMs did not exceed the number of available VMs.
Energy Adaptive and Max-Min based BFS Model for Route Optimization in Sensor Network
Authors: Kapil Juneja
Background: Restricted energy and network lifetime are critical issues in real-time sensor networks. The occurrence of low-energy and faulty intermediate nodes can increase communication failures. The number of intermediate nodes affects the number of re-transmissions and communication failures, and increases the energy consumption on the routing path. Existing protocols take a greedy decision on all possible intermediate nodes collectively by considering one or more parameters. Objective: This work divides the distance between the source and destination into coverage-specific zones to restrict the hop count. Each zone is then processed individually and collectively to generate an energy-effective and failure-preventive route. Methods: In this paper, an energy and coverage weighted BFS (Best First Model) algorithm is presented for route optimization in the sensor network. Max-min BFS is applied to the sensor nodes of each zone to identify the most reliable and effective intermediate node. Individual and composite weighted rules are applied to the energy and distance parameters. This new routing protocol discovers an energy-adaptive route. Results: The proposed model is simulated on a randomly distributed network, and the analysis is done in terms of network lifetime, energy consumption, hop count, and the number of route switches. A comparative analysis is done against the MCP, MT-MR, Greedy, and other state-of-the-art routing protocols. Conclusion: The comparative results validate the significance of the proposed routing protocol in terms of energy effectiveness, less route switching, and improved network lifetime.
RCSA Protocol with Rayleigh Channel Fading in Massive-MIMO System
Authors: Umesha G. Basavaiah and Mysore N. S. Swamy
Introduction: Massive MIMO is a technology in which cellular BSs comprise a large number of antennas. In this study, the focus is on developing an end-to-end massive MIMO system under the Rayleigh channel fading effect. The study also includes both inter-channel interference and intra-channel interference in an m-MIMO network system. Methodology: The main aim of this research is to increase the throughput and network capacity and to reduce channel collisions between the associated pilots. Here, we propose an RCSA protocol with the Rayleigh channel fading effect in the m-MIMO network to create a network resembling a real-time scenario. We have focused on the deployment of urban scenarios with small timing variations and provided our novel RAP for the UEs, through which the UEs can access the network. Furthermore, to validate the performance of the proposed scheme, the proposed model is compared with the state-of-the-art model. Results: We provide the analysis based on two considered scenarios: scenario A, where intra-channel interference is taken into account, and scenario B, where both intra-cell and inter-cell channel interference are considered. Our RCSA approach is proposed with uncorrelated Rayleigh fading channels (URFC), which are used to increase the capacity of the network and decrease the collision probability. Conclusion: We have proposed the RCSA approach, which comprises four major steps: system initialization and querying, response queuing, resource contention and channel state analysis, and resource allocation. The system performs in the TDD mode of operation, and the time-frequency resources are divided into coherent blocks of channel uses. This research focuses on RAB, where inactive UEs are admitted to the PDB; it also proposes the RCSA approach for RAP that provides protection from strong inter-cell interference in m-MIMO systems. Discussion: In order to compare our RCSA-URFC approach, we have considered a state-of-the-art technique, vertex graph-coloring-based pilot assignment (VGCPA), under URFC. In addition, we have considered the bias term randomly in order to make a decision regarding a particular UE. Moreover, it is very difficult to identify the strong probability of a UE; therefore, as per the information obtained by the UEs, the bias term can be selected for the UE in order to moderate the decision rule.
Parameter-Tuned Deep Learning Model for Credit Risk Assessment and Scoring Applications
Authors: Varadharajan Veeramanikandan and Mohan Jeyakarthic
Background: At present, financial Credit Scoring (CS) is considered one of the hottest research topics in the finance domain, as it assists in determining the credit value of individual persons as well as organizations. Data mining approaches are found to be useful in the banking sector, assisting banks in designing and developing proper products or services for the customer with minimal risk. Credit risks are linked to losses and loan defaults, which are the main sources of risk in the banking sector. Aim: The current research article aims at presenting an effective credit score prediction model for the banking sector, which can assist banks in foreseeing the credible customers who have applied for a loan. Methods: An optimal Deep Neural Network (DNN)-based framework is employed for credit score data classification using Stacked Autoencoders (SA). Here, SA is applied to extract the features from the dataset. These features are then classified using a softmax layer. Besides, the network is also tuned with the Truncated Backpropagation Through Time (TBPTT) model in a supervised way using the training dataset. Results: The proposed model was tested using a benchmark German credit dataset, which includes the necessary variables to determine the credit score of a loan applicant. The presented SADNN model achieved the maximum classification performance, attaining a high accuracy rate of 96.10%, an F-score of 97.25% and a kappa value of 90.52%. Conclusion: The experimental results pointed out that maximum classification performance was attained by the proposed model on all different aspects. The proposed method helps in determining the capability of a borrower to repay the loan and in computing the credit risks properly.
Keyphrase Extraction by Improving TextRank with an Integration of Word Embedding and Syntactic Information
Authors: Sheng Zhang, Qi Luo, Yukun Feng, Ke Ding, Daniela Gifu, Silan Zhang, Xiaohang Ma and Jingbo Xia
Background: As a well-known keyphrase extraction algorithm, TextRank is an analog of the PageRank algorithm, which relies heavily on term frequency statistics in the manner of co-occurrence analysis. Objective: The frequency-based characteristic made it a bottleneck for performance enhancement, and various improved TextRank algorithms have been proposed in recent years. Most of the improvements incorporated semantic information into the keyphrase extraction algorithm and achieved improvement. Method: In this research, taking both syntactic and semantic information into consideration, we integrated a syntactic tree algorithm and word embedding and put forward the Word Embedding and Syntactic Information Algorithm (WESIA), which improves the accuracy of the TextRank algorithm. Results: By applying our method to a self-made test set and a public test set, the results imply that the proposed unsupervised keyphrase extraction algorithm outperforms the other algorithms to some extent.
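For contrast with WESIA, the baseline TextRank sketch below builds a co-occurrence graph over a sliding window and ranks words with PageRank; the syntactic-tree and word-embedding extensions of the paper are not shown, and the window size and sample text are illustrative.

```python
# Baseline TextRank: co-occurrence graph + PageRank over candidate words.
import networkx as nx

def textrank_keywords(tokens, window=3, top_k=5):
    graph = nx.Graph()
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + window, len(tokens))):
            graph.add_edge(tokens[i], tokens[j])    # co-occurrence edge
    scores = nx.pagerank(graph)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

tokens = ("keyphrase extraction relies on graph ranking of candidate "
          "keyphrase words in the document graph").split()
print(textrank_keywords(tokens))
```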
LWT-DCT based Image Watermarking Scheme using Normalized SVD
Authors: Rahul Dixit, Amita Nandal, Arvind Dhaka, Vardan Agarwal and Yohan V. Kuriakose
Background: Nowadays, information security is one of the most significant issues of social networks. Multimedia data can be tampered with, and the attackers can then claim its ownership. Image watermarking is a technique that is used for copyright protection and authentication of multimedia. Objective: We aim to create a new and more robust image watermarking technique to prevent illegal copying, editing and distribution of media. Method: The watermarking technique proposed in this paper is non-blind and employs the Lifting Wavelet Transform on the cover image to decompose the image into four coefficient matrices. Then the Discrete Cosine Transform is applied, which separates a selected coefficient matrix into different frequencies, and later Singular Value Decomposition is applied. Singular Value Decomposition is also applied to the watermark image, and it is added to the singular matrix of the cover image, which is then normalized, followed by the inverse Singular Value Decomposition, inverse Discrete Cosine Transform and inverse Lifting Wavelet Transform, respectively, to obtain the embedded image. Normalization is proposed as an alternative to the traditional scaling factor. Results: Our technique is tested against attacks like rotation, resizing, cropping, noise addition and filtering. The performance comparison is evaluated based on the Peak Signal to Noise Ratio, Structural Similarity Index Measure, and Normalized Cross-Correlation. Conclusion: The experimental results prove that the proposed method performs better than other state-of-the-art techniques and can be used to protect multimedia ownership.
Secure Electronic Voting System based on Mobile-app and Blockchain
Authors: Surbhi Dewan, Latika Singh and Neha Gupta
Introduction: The notion of electronic voting has evolved over a period of time, replacing the traditional system, which was based on paper ballots. Several types of electronic voting systems exist, yet their implementation is partial, and there is scope for improvement in making them more secure and user-friendly. Method: In this paper, a proof-of-concept is presented which aims to address the issues and challenges in the electoral system by using the concepts of the Ethereum blockchain and smart contracts. Result: Existing electronic electoral processes rely on a centralized solution that can be easily tampered with, thus increasing the problem of distrust among citizens. To overcome this, blockchain technology can be used for implementing a mobile-based electronic voting system. Blockchain technology is aiding the development of novel digital services that are more secure and reliable. Discussion: The main objective of this paper is to depict how a feasible, secure, and reliable mobile voting system can be built by implementing the concepts of blockchain and smart contracts. Conclusion: The issues of security and transparency in the voting system can be addressed using blockchain technology. The present study aims to fulfill these gaps partially by providing a use-case for the voting process, which is based on mobile and blockchain technology.
Quantization Algorithms in Healthcare Information Telematics Using Wireless Interactive Services of Digital Video Broadcasting
Authors: Konstantinos Kardaras, George I. Lambrou and Dimitrios Koutsouris
Background: In the new era of wireless communications, new challenges emerge, including the provision of various services over the digital television network. In particular, such services become more important when referring to tele-medical applications through terrestrial Digital Video Broadcasting (DVB). Objective: One of the most significant aspects of video broadcasting is the quality and information content of data. Towards that end, several algorithms have been proposed for image processing in order to achieve the most convenient data compression. Methods: Given that medical video and data are highly demanding in terms of resources, it is imperative to find methods and algorithms that facilitate medical data transmission over ordinary infrastructure such as DVB. Results: In the present work, we have utilized a quantization algorithm for data compression and have attempted to transform the video signal in such a way that it transmits information and data with minimum loss in quality and achieves near-maximum end-user approval. Conclusions: Such approaches are proven to be of great significance in emergency handling situations, which also include health care and emergency care applications.
MAGIC-I as an Assistance for the Visually Impaired People
Authors: Kavita Pandey, Vatsalya Yadav, Dhiraj Pandey and Shriya Vikhram
Background: According to the WHO report, around 4.07% of the world's population is visually impaired. About 90% of visually impaired users live in the lower economic strata. In this fast technological era, most inventions miss the needs of these people. Technologies are mainly designed for mainstream people, and visually impaired people are often unable to access them. This inability arises primarily for reasons such as cost; for example, the Perkins Brailler costs 80-248 dollars for the simple purpose of Braille input. Another major reason is the hassle of carrying big equipment. Objective: Keeping all this in mind, and to bring technology to their doors, MAGIC-I has been designed. The goal is to provide a solution in terms of an application that helps visually impaired people in their daily life activities. Method: The proposed solution assists visually impaired users through smartphone technology. If visually impaired users ever wished to have a touch guide in a smartphone, MAGIC-I has the solution that consolidates all the important features related to their daily activities. Results: The performance of the proposed technology as a whole and of its individual features, in terms of usability, utility and other metrics, has been tested on a sample of visually impaired users. Moreover, performance in terms of Errors per Word and Words per Minute has been observed. Conclusion: MAGIC-I, the proposed solution, works as an assistant for visually impaired users to overcome their daily struggles and lets them stay more connected to the world. A visually impaired user can communicate via their mobile device with features like eyes-free texting using braille, voice calling, etc. They can easily get help in an emergency situation with the options of SOS emergency calling and video assistance.
Lifetime Maximization of Heterogeneous WSN Using Fuzzy-based Clustering
Authors: Ritu Saini, Kumkum Dubey, Prince Rajpoot, Sushma Gautam and Ritika Yaduvanshi
Background: A Wireless Sensor Network (WSN) is an emerging field for research and development. It has various applications ranging from environmental monitoring to battlefield surveillance and more. A WSN is a collection of multiple sensor nodes used for sensing the environment. These sensing nodes are often deployed in areas that are not easy to reach, so replacing the batteries used in these nodes becomes nearly impossible; hence there is a need to utilize this energy to get the maximum sensing for a long time. Objective: To use a fuzzy approach in the clustering algorithm. Clustering is a key approach to prolonging the network lifetime with minimum energy utilization. In this paper, the focus is on Cluster Head (CH) selection. We propose a clustering algorithm based on several attributes, including the average residual energy of CHs, the average distance from nodes to CHs, the standard deviation of member nodes, and the average distance from CHs to the Base Station (BS). Methods: Initially, the nodes found to have greater residual energy than the average network energy are identified, and fifteen populations are formed, each having an optimum number of CHs. The final and best CH set is chosen by determining the maximum fitness value using a fuzzy approach. Result: The results positively support energy-efficient utilization with lifetime maximization, which is compared with the base algorithm [1] and the LEACH [2] protocol based on residual energy and the number of nodes that die after performing some rounds. Conclusion: The proposed algorithm determines a fuzzy-based fitness value, provides load-balancing among all the networking nodes, and performs selection of the best Cluster Heads, resulting in prolonged network lifetime and maximized efficiency.
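A hedged sketch of the fitness computation behind CH selection: it combines average residual CH energy, average node-to-CH distance, the spread of cluster sizes and average CH-to-BS distance into one score. The weights, sign conventions and coordinates are illustrative, not the paper's values.

```python
# Toy fitness score for a candidate set of cluster heads in a 100 m x 100 m field.
import numpy as np

def ch_fitness(nodes, ch_idx, bs=(50.0, 175.0), w=(0.4, 0.2, 0.2, 0.2)):
    pos, energy = nodes[:, :2], nodes[:, 2]
    chs = pos[ch_idx]
    d_to_ch = np.linalg.norm(pos[:, None, :] - chs[None, :, :], axis=2)
    member_of = d_to_ch.argmin(axis=1)                        # nearest CH per node
    avg_energy = energy[ch_idx].mean()                        # average residual CH energy
    avg_dist = d_to_ch.min(axis=1).mean()                     # average node-to-CH distance
    size_std = np.bincount(member_of, minlength=len(ch_idx)).std()  # cluster-size spread
    avg_bs = np.linalg.norm(chs - np.asarray(bs), axis=1).mean()    # average CH-to-BS distance
    return w[0] * avg_energy - w[1] * avg_dist - w[2] * size_std - w[3] * avg_bs

rng = np.random.default_rng(3)
nodes = np.hstack([rng.random((100, 2)) * 100, rng.random((100, 1)) * 0.5])  # x, y, energy
print(ch_fitness(nodes, ch_idx=[4, 17, 42, 63, 88]))
```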
A Path Planning Method for Mobile Robots Based on Fuzzy Firefly Algorithms
Authors: Hui Fu and Xiaoyong Liu
Introduction: A mobile robot is a kind of robot system consisting of sensors, remote control operators and automatically controlled mobile carriers. It is a product of the integrated application of multiple disciplines developed in recent years. In research on mobile robot-related technology, navigation technology is the core, and path planning is an important link and subject of navigation research. Objective: An improved firefly algorithm is proposed for path planning of mobile robots in this paper. Methods: In this paper, an improved firefly algorithm is proposed. Compared with the traditional firefly algorithm, this algorithm has three main improvements: (1) using a Sobol sequence to initialize the population; (2) adding a dynamic disturbance coefficient to enhance the global search ability of the algorithm; and (3) considering the uncertainty of the search and the strength of attraction between individuals, fuzzy control is carried out by setting the membership function. Results: The new algorithm takes advantage of the uniformity of Sobol sequence sampling and starts to optimize over a wider range, which makes the initial path of the algorithm longer, but because the new algorithm introduces the dynamic disturbance coefficient and the fuzzy control strategy, the average running time is shorter. Conclusion: In the simulation experiment on the mobile robot path planning problem, the improved firefly algorithm proposed in this paper escapes local optima more easily than the traditional firefly algorithm and has a more robust search ability. Discussion: It is obvious from the graph that in 100 iterations, the FaFA algorithm takes advantage of the uniformity of Sobol sequence sampling and starts to optimize over a wider range, which makes the initial path of the algorithm longer, but because the FaFA algorithm introduces the dynamic disturbance coefficient and the fuzzy control strategy, the average running time is shorter.
Three-Dimension Measurement of Mechanical Parts Based on Structure from Motion (SfM) Algorithm
Authors: LI Hang, Tong Xi, Jiang Wei and XU Hongmei
Background: As an important branch of computer vision, visual measurement is a fast-developing cutting-edge technology which has been widely used in the manufacturing field. In recent years, the visual measurement of the feature size of small IC probes has aroused wide concern. Objective: This study takes small shaft parts as the research object in order to provide a full set of novel and reliable technical means for the three-dimensional measurement of mechanical parts. Methods: Firstly, a trinocular vision measurement system based on a curved cantilever mechanism was designed and constructed. Secondly, the measurement system was used to collect part images from different angles, and images derived from four categories of segmentation algorithms, such as threshold-based and region-based segmentation algorithms, were compared and analyzed. The Lazy Snapping image segmentation algorithm was used to extract the foreground parts of each image. After comparing and analyzing the SfM-based algorithm and the Visual Hull-based algorithm, the SfM-based algorithm was adopted to reconstruct the 3D morphology of the parts. The measurement of the relevant dimensions was then performed. Results: The results show that Lazy Snapping's human-computer interaction brush function improves the accuracy and stability of image segmentation compared with different algorithms, such as the threshold method, regional method, GrabCut, and DenseCut. The SfM-based 3D reconstruction algorithm is highly robust and fast. Conclusion: This study provides an effective method for measuring small mechanical parts, which will shorten the measurement cycle, improve the measurement speed, and reduce the measurement cost.
A Case Study of Network Mobility (NEMO-BSP) Integration with LEO Constellation System
Authors: M. A. Sheikh and Neeta Singh
Introduction: In this paper, we address the issue of network connectivity inside an airplane. At several thousand feet above the earth's surface, where ground-based infrastructure fails to provide an internet connection, we propose a solution that integrates the Network Mobility Basic Support (NEMO-BSP) protocol with Low Earth Orbit (LEO) constellation systems. Discussion: Right now, flight passengers are forced to interact with the flight's pre-recorded entertainment system. Most airlines do not allow passengers to use the internet or any other type of medium for interaction with the ground during the flight. They even force passengers to switch their mobiles to flight mode so that there is no active communication between a passenger and the rest of the world. Method: This paper investigates four different possible scenarios of relative movement of flight and satellite constellations. All four possible scenarios were simulated in Network Simulator 2 (NS-2). Result: The results are discussed from a network performance perspective. Parameters like end-to-end delay, jitter, throughput, and other such parameters were obtained and discussed in detail. Conclusion: This paper proposes a method to deal with the internet connection issue on flights. The proposed system of LEO satellites and the NEMO-BSP protocol works efficiently in the simulation study.