Recent Advances in Computer Science and Communications - Volume 14, Issue 9, 2021
An Optimized Software Defect Prediction Model Based on PSO - ANFIS
Authors: Misha Kakkar, Sarika Jain, Abhay Bansal and P.S. Grover
Introduction: The software defect prediction (SDP) model plays a very important role in today's software industry. SDP models can provide either only a list of defect-prone classes as output or the number of defects present in each class. This output can then be used by quality assurance teams to effectively allocate limited resources for validating software products by putting more effort into the defect-prone classes. The study proposes an OANFIS-SDP model that gives the number of defects as output to software development teams. Development teams can then use this data for better allocation of their scarce resources, such as time and manpower. Methods: OANFIS is a novel approach based on the Adaptive Neuro-Fuzzy Inference System (ANFIS), which is optimized using Particle Swarm Optimization (PSO). The OANFIS model combines the flexibility of the ANFIS model with the optimization capabilities of PSO for better performance. Results: The proposed model is tested using datasets from open-source Java projects of varied sizes (from 176 to 745 classes). Conclusion: The study proposes an OANFIS-based SDP model that gives the number of defects as output to software development teams. Development teams can then use this data for better allocation of their scarce resources, such as time and manpower. OANFIS is a novel approach that uses the flexibility provided by the ANFIS model and optimizes it using PSO. OANFIS yields strong results, and the performance of the OANFIS-based SDP model might be influenced by the size of the projects. Discussion: The performance of the SDP model based on OANFIS is better than that of the ANFIS model. It can also be concluded that the performance of the SDP model might be influenced by the size of the projects.
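As a rough illustration of the OANFIS idea (not the authors' implementation), the sketch below uses plain particle swarm optimization to tune the premise and consequent parameters of a tiny Sugeno-style fuzzy model against synthetic defect-count data; the model size, PSO settings and data are all assumptions.

```python
import numpy as np

# Minimal sketch: PSO tuning a small first-order Sugeno-style fuzzy model,
# in the spirit of ANFIS premise/consequent tuning (illustrative only).
rng = np.random.default_rng(0)

# Synthetic data: two software metrics -> defect count (not a real project)
X = rng.uniform(0, 1, (200, 2))
y = 3 * X[:, 0] + 2 * X[:, 1] ** 2 + rng.normal(0, 0.1, 200)

def fuzzy_predict(p, X):
    c = p[:4].reshape(2, 2)                       # Gaussian MF centres (input, mf)
    s = np.abs(p[4:8]).reshape(2, 2) + 0.1        # Gaussian MF widths
    cons = p[8:].reshape(4, 3)                    # rule consequents [a, b, bias]
    mu = np.exp(-((X[:, :, None] - c) ** 2) / (2 * s ** 2))   # memberships (n, 2, 2)
    # rule firing strengths: product over inputs for each MF combination
    w = np.stack([mu[:, 0, i] * mu[:, 1, j] for i in range(2) for j in range(2)], axis=1)
    w = w / (w.sum(axis=1, keepdims=True) + 1e-12)
    rule_out = X @ cons[:, :2].T + cons[:, 2]     # linear consequent per rule
    return (w * rule_out).sum(axis=1)

def mse(p):
    return np.mean((fuzzy_predict(p, X) - y) ** 2)

# Plain global-best PSO over the 20 model parameters
n_particles, dim, iters = 30, 20, 200
pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best training MSE:", pbest_val.min())
```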
The Impact of Varying Sinks on Load Distribution in IoT Routing Under Static and Mobile Scenarios
Background: The majority of routing algorithms proposed for IoT can be categorized as modifications to routing protocols designated for IoT, such as enhancements to RPL objective functions, while others are enhancements to routing algorithms derived from those designed for ad hoc networks. We have thoroughly investigated the literature on these modifications and enhancements. However, there is a lack of an in-depth study examining the impact of a varying number of sinks and of node mobility on routing algorithms based on MRHOF and OF0. Method: We examine the ability of MRHOF and OF0 to distribute the load under a varying number of sink nodes in static and mobile scenarios with the aid of the COOJA simulator. This has been conducted using various metrics, including regular metrics such as throughput and power consumption, and newly derived metrics, namely packets load deviation and power deviation, which are derived for the purpose of measuring load distribution. Results: Increasing the number of sinks has shown that OF0 outperforms MRHOF in terms of power consumption and power deviation under both static and mobile scenarios. On the other hand, in mobile scenarios, MRHOF outperforms OF0 in terms of packets load deviation for a high number of sinks. Conclusion: Both techniques demonstrated performance merits when challenged with a varying number of sinks. Therefore, understanding their behaviour would play a key role in deriving an efficient load distribution technique, which is likely to be achieved by considering network conditions.
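The derived load-distribution metrics are described only informally in the abstract; the sketch below assumes they are per-node standard deviations of forwarded packets and consumed power, which is one plausible reading rather than the paper's exact definition.

```python
import statistics

# Hedged sketch: "packets load deviation" / "power deviation" assumed to be
# the standard deviation of a per-node counter; a smaller value means the
# load (packets or power) is spread more evenly across the nodes.
def load_deviation(per_node_values):
    return statistics.pstdev(per_node_values)

# Example: packets forwarded per node in two runs with different sink counts
packets_1_sink = [120, 340, 80, 400, 60]
packets_4_sinks = [180, 210, 190, 230, 190]
print(load_deviation(packets_1_sink), load_deviation(packets_4_sinks))
```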
Hybrid Model using Firefly and BBO for Feature Selection in Software Product Line
Authors: Hitesh Yadav, Rita Chhikara and Charan Kumari
Background: A Software Product Line is a group of multiple software systems that share a similar set of features across multiple variants. A feature model is used to capture and organize the features used across different organizations. Objective: The objective of this research article is to obtain an optimized subset of features capable of providing high performance. Methods: In order to achieve the desired objective, two methods have been proposed: a) an improved objective function, which is used to compute the contribution of each feature with a weight-based methodology; and b) a hybrid model combining the Firefly and BBO algorithms, which is employed to optimize the Software Product Line problem. Results: Feature sets varying in size from 100 to 1000 have been used to compute the performance of the Software Product Line. Conclusion: The results show that the proposed hybrid model outperforms state-of-the-art metaheuristic algorithms.
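Below is a hedged sketch of the kind of weight-based objective such a Firefly/BBO hybrid could optimize over binary feature-selection vectors; the weights, costs and trade-off factor are illustrative assumptions, not the paper's formula.

```python
import numpy as np

# Hedged sketch: weight-based objective for a binary feature-selection vector
# in a software product line. Weights, costs and the penalty are assumptions.
rng = np.random.default_rng(1)
n_features = 100
weights = rng.uniform(0, 1, n_features)        # assumed per-feature contribution
cost = rng.uniform(0, 1, n_features)           # assumed per-feature cost

def fitness(mask, alpha=0.7):
    """Trade off total weighted contribution against total cost."""
    mask = np.asarray(mask, dtype=bool)
    if not mask.any():
        return -np.inf                          # empty configurations are invalid
    return alpha * weights[mask].sum() - (1 - alpha) * cost[mask].sum()

candidate = rng.integers(0, 2, n_features)      # e.g. one firefly / habitat
print("fitness of a random configuration:", round(fitness(candidate), 3))
```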
Comparative Analysis of Keyframe Extraction Techniques for Video Summarization
Authors: Vishal Parikh, Jay Mehta, Saumyaa Shah and Priyanka Sharma
Background: With technological advancement, the quality of life of people has improved. At the same time, people produce large amounts of data in the form of text, images and videos. Hence, significant effort is needed to devise methodologies for analyzing and summarizing this data in order to cope with space constraints. Video summaries can be generated either from keyframes or from skims/shots. Keyframe extraction is done based on deep learning-based object detection techniques. Various object detection algorithms have been reviewed for generating and selecting the best possible frames as keyframes. A set of frames is extracted from the original video sequence and, based on the technique used, one or more frames of the set are chosen as keyframes, which then become part of the summarized video. This paper discusses the selection of various keyframe extraction techniques in detail. Methods: The paper focuses on summary generation for office surveillance videos, with the major focus on various keyframe extraction techniques. For this purpose, detection models such as MobileNet, SSD, and YOLO are used. A comparative analysis of their efficiency showed that YOLO gives better performance than the other models. Keyframe selection techniques such as sufficient content change, maximum frame coverage, minimum correlation, curve simplification, and clustering based on human presence in the frame have been implemented. Results: Variable-length and fixed-length video summaries were generated and analyzed for each keyframe selection technique on office surveillance videos. The analysis shows that the output video obtained using the clustering and curve simplification approaches is compressed to half the size of the actual video and requires considerably less storage space. The technique that relies on the change of frame content between consecutive frames for keyframe selection produces the best output for office surveillance videos. Conclusion: In this paper, we discussed the process of generating a synopsis of a video to highlight the important portions and discard the trivial and redundant parts. First, we described various object detection algorithms, such as YOLO and SSD, used in conjunction with neural networks such as MobileNet, to obtain the probabilistic score of an object present in the video. These algorithms generate, for every frame of the input video, the probability of a person being part of the image. The results of object detection are passed to keyframe extraction algorithms to obtain the summarized video. Our comparative analysis of keyframe selection techniques for office videos will help in determining which keyframe selection technique is preferable.
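As a minimal sketch of the "sufficient content change" rule (one of the keyframe selection techniques listed above), the snippet below keeps a frame whenever its mean absolute difference from the last keyframe exceeds a threshold; the frames and threshold are synthetic stand-ins for real surveillance footage.

```python
import numpy as np

# Hedged sketch of content-change keyframe selection. In practice the frames
# would come from a video reader and a person-detection score could gate them.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, (120, 160), dtype=np.uint8) for _ in range(50)]

def select_keyframes(frames, threshold=20.0):
    keyframes = [0]                              # always keep the first frame
    last = frames[0].astype(np.float32)
    for i, frame in enumerate(frames[1:], start=1):
        cur = frame.astype(np.float32)
        if np.mean(np.abs(cur - last)) > threshold:
            keyframes.append(i)
            last = cur                           # compare later frames to this keyframe
    return keyframes

print("selected keyframe indices:", select_keyframes(frames))
```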
Sequential Model for Digital Image Contrast Enhancement
Authors: Monika Agarwal, Geeta Rani, Shilpy Agarwal and Vijaypal S. Dhaka
Aims: The manuscript aims at designing and developing a model for optimum contrast enhancement of an input image. The output image of the model ensures minimum noise, maximum brightness and maximum entropy preservation. Objectives: * To determine an optimal threshold value by using the concept of entropy maximization for segmentation of all types of low-contrast images. * To minimize the problem of over-enhancement by using a combination of a weighted distribution and a weighted constrained model before applying the histogram equalization process. * To provide optimum contrast enhancement with minimum noise and undesirable visual artefacts. * To preserve the maximum entropy during the contrast enhancement process and provide the detailed information recorded in an image. * To provide maximum mean brightness preservation with better PSNR and contrast. * To effectively retain the natural appearance of images. * To avoid the unnatural changes that occur in the Cumulative Density Function. * To minimize problems such as noise, blurring and intensity saturation artefacts. Methods: 1. Histogram building. 2. Segmentation using Shannon's entropy maximization. 3. Weighted normalized constrained model. 4. Histogram equalization. 5. Adaptive gamma correction. 6. Homomorphic filtering. Results: Experimental results obtained by applying the proposed technique, MEWCHE-AGC, on a dataset of low-contrast images prove that MEWCHE-AGC preserves the maximum brightness and yields the maximum entropy, a high PSNR and high contrast. The technique is also effective in retaining the natural appearance of images. The comparative analysis of MEWCHE-AGC with existing contrast enhancement techniques is evidence of its better performance in both qualitative and quantitative aspects. Conclusion: The MEWCHE-AGC technique is suitable for the enhancement of digital images with varying contrasts. It is thus useful for extracting detailed and precise information from an input image and for identifying desired regions in an image.
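A compressed sketch of three of the listed stages, Shannon-entropy-based threshold selection, per-segment histogram equalization and gamma correction, is shown below; the weighted constrained model and homomorphic filtering of MEWCHE-AGC are not reproduced, and the gamma value and test image are assumptions.

```python
import numpy as np

# Hedged sketch: Kapur-style entropy-maximizing threshold, then histogram
# equalization within each segment, then a fixed gamma correction.
rng = np.random.default_rng(0)
img = rng.integers(0, 180, (64, 64)).astype(np.uint8)     # synthetic low-contrast image

hist = np.bincount(img.ravel(), minlength=256).astype(float)
p = hist / hist.sum()

def entropy(prob):
    prob = prob[prob > 0]
    return -(prob * np.log(prob)).sum()

# Threshold that maximizes the sum of the entropies of the two segments
scores = []
for t in range(1, 255):
    lo, hi = p[:t], p[t:]
    if lo.sum() == 0 or hi.sum() == 0:
        scores.append(-np.inf)
        continue
    scores.append(entropy(lo / lo.sum()) + entropy(hi / hi.sum()))
t_opt = int(np.argmax(scores)) + 1

def equalize(vals, lo, hi):
    """Histogram-equalize the values lying in [lo, hi] so they span that range."""
    vals = vals.astype(int) - lo
    h = np.bincount(vals, minlength=hi - lo + 1).astype(float)
    cdf = np.cumsum(h) / h.sum()
    return (lo + cdf[vals] * (hi - lo)).astype(np.uint8)

out = img.copy()
low_mask, high_mask = img < t_opt, img >= t_opt
out[low_mask] = equalize(img[low_mask], 0, t_opt - 1)
out[high_mask] = equalize(img[high_mask], t_opt, 255)

gamma = 0.8                                               # illustrative fixed gamma
out = (255 * (out / 255.0) ** gamma).astype(np.uint8)
print("entropy-optimal threshold:", t_opt)
```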
Comparative Analysis of a Deep Learning Approach with Various Classification Techniques for Credit Score Computation
Authors: Arvind Pandey, Shipra Shukla and Krishna K. Mohbey
Background: Large financial companies are perpetually creating and updating customer scoring techniques. From a risk management view, the predictive accuracy of the estimated probability of default is of greater importance than the traditional binary classification result, i.e., non-credible and credible customers. Customers' default payments in Taiwan are explored as the case study. Objective: The aim is to compare the predictive accuracy of the probability of default obtained with various statistical and machine learning techniques. Method: In this paper, nine predictive models are compared, of which the results of only six models are taken into consideration. A comparative analysis is performed for deep learning-based H2O, XGBoost, logistic regression, gradient boosting, naïve Bayes, the logit model, and probit regression. Software tools such as R and SAS (University Edition) are employed for the machine learning and statistical model evaluation. Results: Through the experimental study, we demonstrate that XGBoost performs better than the other AI and ML algorithms. Conclusion: A machine learning approach such as XGBoost can be effectively used for credit scoring, among other data mining and statistical approaches.
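A hedged sketch of the comparison protocol on synthetic data (not the Taiwan default-payment dataset): scikit-learn's gradient boosting stands in for XGBoost to avoid an extra dependency, and the models are ranked by the AUC of their predicted default probabilities, matching the paper's probability-centric view.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic, imbalanced "default payment" data with 23 features (illustrative)
X, y = make_classification(n_samples=2000, n_features=23, weights=[0.78],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "gradient boosting (XGBoost stand-in)": GradientBoostingClassifier(),
    "naive Bayes": GaussianNB(),
}
for name, model in models.items():
    proba = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]   # probability of default
    print(f"{name}: AUC = {roc_auc_score(y_te, proba):.3f}")
```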
Performance Analysis of various Front-end and Back End Amalgamations for Noise-robust DNN-based ASR
Authors: Mohit Dua, Pawandeep S. Sethi, Vinam Agrawal and Raghav Chawla
Introduction: An Automatic Speech Recognition (ASR) system recognizes speech utterances and can thus be used to convert speech into text for various purposes. These systems are deployed in different environments, clean or noisy, and are used by people of all ages and types, which presents some of the major difficulties faced in the development of an ASR system. Thus, an ASR system needs to be efficient, while also being accurate and robust. Our main goal is to minimize the error rate during the training as well as the testing phase while implementing an ASR system. The performance of ASR depends upon different combinations of feature extraction techniques and back-end techniques. In this paper, using a continuous speech recognition system, a performance comparison of different combinations of feature extraction techniques and various back-end techniques is presented. Methods: Hidden Markov Models (HMMs), Subspace Gaussian Mixture Models (SGMMs) and Deep Neural Networks (DNNs) with a DNN-HMM architecture, namely Karel's and Dan's implementations, and a hybrid DNN-SGMM architecture are used at the back end of the implemented system. Mel Frequency Cepstral Coefficients (MFCC), Perceptual Linear Prediction (PLP), and Gammatone Frequency Cepstral Coefficients (GFCC) are used as feature extraction techniques at the front end of the proposed system. The Kaldi toolkit has been used for the implementation of the proposed work. The system is trained on the Texas Instruments-Massachusetts Institute of Technology (TIMIT) speech corpus for the English language. Results: The experimental results show that MFCC outperforms GFCC and PLP in noiseless conditions, while PLP tends to outperform MFCC and GFCC in noisy conditions. Furthermore, the hybrid of Dan's DNN implementation along with SGMM performs best for the back-end acoustic modeling. The proposed architecture with the PLP feature extraction technique at the front end and the hybrid of Dan's DNN implementation along with SGMM at the back end outperforms the other combinations in a noisy environment. Conclusion: Automatic speech recognition has numerous applications in our lives, such as home automation, personal assistants, and robotics. It is highly desirable to build an ASR system with good performance. The performance of automatic speech recognition is affected by various factors, including vocabulary size, whether the system is speaker-dependent or independent, whether speech is isolated, discontinuous or continuous, and adverse conditions such as noise. The paper presented an ensemble architecture that uses PLP for feature extraction at the front end and a hybrid of SGMM + Dan's DNN at the back end to build a noise-robust ASR system. Discussion: The work presented in this paper discusses the performance comparison of continuous ASR systems developed using different combinations of front-end feature extraction (MFCC, PLP, and GFCC) and back-end acoustic modeling (mono-phone, tri-phone, SGMM, DNN and hybrid DNN-SGMM) techniques. Each type of front-end technique is tested in combination with each type of back-end technique. Finally, the results of the combinations thus formed are compared to find the best performing combination in noisy and clean conditions.
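As a small illustration of the evaluation side rather than the paper's Kaldi recipes, the snippet below computes word error rate, the usual metric for comparing such front-end/back-end combinations, via edit distance between a reference and a hypothesis transcript.

```python
# Hedged illustration (not from the paper's Kaldi setup): word error rate as
# edit distance between reference and hypothesis word sequences.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[-1][-1] / max(len(ref), 1)

print(word_error_rate("she had your dark suit", "she had a dark suit"))  # 0.2
```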
FSM based Intrusion Detection of Packet Dropping Attack using Trustworthy Watchdog Nodes
Authors: Radha R. Chandan and P.K. Mishra
Introduction: The proposed TWIST model aims to achieve a secure MANET by detecting and mitigating packet dropping attacks using a finite state machine (FSM) based IDS model. Its goals are: • to determine the trust values of the nodes using context-aware trust calculation; • to select trustworthy nodes as watchdog nodes for performing intrusion detection in the network; and • to detect and isolate packet dropping attackers from routing activities. The scheme uses an FSM-based IDS to differentiate packet dropping attackers from genuine nodes in the MANET. Methods: In this methodology, instead of launching an intrusion detection system (IDS) on all nodes, an FSM-based IDS is placed on the trustworthy watchdog nodes for detecting packet dropping attacker nodes in the network. The proposed FSM-based intrusion detection scheme has three main steps: context-aware trust calculation, watchdog node selection, and FSM-based intrusion detection. In the first step, the trust calculation for each node is based on specific parameters that differ between malicious nodes and normal nodes. The second step is watchdog node selection based on the context-aware trust values, ensuring that trustworthy network monitors are used for detecting attacker nodes in the network. The final step is FSM-based intrusion detection, where each node acquires a state based on its behavior during data routing. Based on node behavior, state transitions occur, and nodes that drop data packets beyond the defined threshold are moved to the malicious state and restricted from further routing and services in the network. Results: The performance of the proposed TWIST mechanism is assessed using Network Simulator 2 (NS2). The proposed TWIST model is implemented by modifying the Ad-Hoc On-Demand Distance Vector (AODV) protocol files in NS2. Moreover, the proposed scheme is compared with the Detection and Defense against Packet Drop attack in MANET (DDPD) scheme. A performance analysis is done for the proposed TWIST model using metrics such as detection accuracy, false-positive rate, and overhead, and the results are compared with those of the DDPD scheme. After comparing the results, we find that the proposed TWIST model exhibits better performance in terms of detection accuracy, false-positive rate, energy consumption, and overhead compared to the existing DDPD scheme. Discussion and Conclusion: In the TWIST model, an efficient packet dropping detection scheme based on the FSM model is proposed that efficiently detects packet dropping attackers in the MANET. Trust is evaluated for each node in the network, and the nodes with the highest trust values are selected as watchdog nodes. The trust calculation considers parameters such as residual energy, the interaction between nodes, and the neighbor count for watchdog node selection. Thus, malicious nodes that drop data packets during data forwarding cannot be selected as watchdog nodes. FSM-based intrusion detection is applied in the watchdog nodes to detect attackers accurately by monitoring neighbor nodes for malicious behavior. A performance analysis is conducted between the proposed TWIST mechanism and the existing DDPD scheme.
The proposed TWIST model exhibits better performance in terms of detection accuracy, false-positive rate, energy consumption, and overhead compared to the existing DDPD scheme. This work may extend the conventional trust measurement of MANET routing, which relies only on observation of routing behavior to cope with malicious activity. In addition, the performance of the proposed work under packet dropping attacks has not been evaluated for varying node mobility in terms of speed. Furthermore, performance metrics such as route discovery latency and malicious discovery ratio can be added to evaluate the performance of the protocol in the presence of malicious nodes. These may be considered in future work as extensions of the protocol for better and more efficient results. In the future, the scheme will also focus on providing proactive detection of packet dropping attacker nodes in MANETs using a suitable and efficient statistical method.
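A minimal sketch of the FSM idea described above, with assumed thresholds: a watchdog counts packets a neighbour received but did not forward and moves it through normal, suspected and malicious states, after which it would be excluded from routing.

```python
from enum import Enum

# Hedged sketch of the watchdog state machine; thresholds are illustrative.
class State(Enum):
    NORMAL = 1
    SUSPECTED = 2
    MALICIOUS = 3

class WatchdogFSM:
    def __init__(self, suspect_threshold=5, malicious_threshold=10):
        self.state = State.NORMAL
        self.dropped = 0
        self.suspect_threshold = suspect_threshold
        self.malicious_threshold = malicious_threshold

    def observe(self, forwarded: bool):
        if self.state is State.MALICIOUS:
            return self.state                 # already isolated from routing
        if not forwarded:
            self.dropped += 1
        if self.dropped >= self.malicious_threshold:
            self.state = State.MALICIOUS
        elif self.dropped >= self.suspect_threshold:
            self.state = State.SUSPECTED
        return self.state

fsm = WatchdogFSM()
for ok in [True, False, False, True] + [False] * 10:
    state = fsm.observe(ok)
print("final state:", state)                  # MALICIOUS -> excluded from routing
```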
Implementing the Kalman Filter Algorithm in Parallel Form: Denoising Sound Wave as a Case Study
Authors: Hazem H. Osman, Ismail A. Ismail, Ehab Morsy and Hamid M. Hawidi
Background: The Kalman filter and its variants have achieved great success in many technological applications. However, the Kalman filter carries a heavy computational burden and becomes slow on big data. On the other hand, the computer industry has entered the multicore era, with hardware computational capacity increased by adding more processors (cores) on one chip; purely sequential processors will not be available in the near future, so we have to move to parallel computation. Objective: This paper focuses on how to make the Kalman filter faster on multicore machines, implementing a parallel form of the Kalman filter equations to denoise a sound wave as a case study. Method: Split all signal points into large segments of data and apply the equations to each segment simultaneously. After that, the filtered points are merged again into one large signal. Results: Our parallel form of the Kalman filter can achieve nearly linear speed-up. Conclusion: By implementing the parallel form of the Kalman filter equations on a noisy sound wave as a case study and using various numbers of cores, it is found that the Kalman filter algorithm can be efficiently implemented in parallel by splitting all signal points into large segments of data and applying the equations to each segment simultaneously.
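A minimal sketch of the segment-parallel scheme, assuming a simple scalar Kalman filter and a naive restart at each segment boundary (which the paper's scheme may handle more carefully):

```python
import numpy as np
from multiprocessing import Pool

# Hedged sketch: split the noisy signal into large segments, filter each
# segment in a worker process, then concatenate the filtered segments.
def kalman_1d(segment, q=1e-4, r=0.05):
    x, p = segment[0], 1.0                    # initial state and covariance
    out = np.empty_like(segment)
    for i, z in enumerate(segment):
        p = p + q                             # predict
        k = p / (p + r)                       # Kalman gain
        x = x + k * (z - x)                   # update with measurement z
        p = (1 - k) * p
        out[i] = x
    return out

if __name__ == "__main__":
    t = np.linspace(0, 1, 80_000)
    noisy = np.sin(2 * np.pi * 5 * t) + np.random.normal(0, 0.3, t.size)
    segments = np.array_split(noisy, 8)       # one chunk per worker task
    with Pool(processes=4) as pool:
        filtered = np.concatenate(pool.map(kalman_1d, segments))
    print("filtered samples:", filtered.shape)
```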
A New Approach for Simplification of Logical Propositions with Two Propositional Variables Using Truth Tables
Authors: Maher Nabulsi, Nesreen Hamad and Sokyna Alqatawneh
Background: Proposition simplification is a classic topic in discrete mathematics that is applied in different areas of science, such as program development and digital circuit design. Investigating alternative methods would assist in presenting different approaches that can be used to obtain better results. This paper proposes a new method to simplify any logical proposition with two propositional variables without using logical equivalences. Methods: This method is based on constructing a truth table for the given proposition and applying one of two concepts: the sum of minterms or the product of maxterms, which have not been used previously in discrete mathematics, along with five new rules that are introduced for the first time in this work. Results: The proposed approach was applied to several examples, where its correctness was verified by applying the logical equivalences method. Applying the two methods showed that the logical equivalences method cannot give the simplest form easily, especially if the proposition cannot be simplified, and it cannot assist in determining whether the obtained solution represents the simplest form of the proposition or not. Conclusion: In comparison with the logical equivalences method, the results for all the tested propositions show that our method outperforms the currently used method, as it provides the simplest form of logical propositions in fewer steps and overcomes the limitations of the logical equivalences method. Originality/Value: This paper fulfills an identified need to provide a new method to simplify any logical proposition with two propositional variables.
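A small sketch of the truth-table route for two propositional variables: enumerate the four rows, collect the rows where the proposition is true, and print the sum-of-minterms form; the paper's five additional simplification rules are not reproduced here.

```python
from itertools import product

# Hedged sketch: sum-of-minterms form read off a two-variable truth table.
def proposition(p, q):
    return (p and not q) or (p and q)          # example: should simplify to p

minterms = []
for p, q in product([False, True], repeat=2):
    if proposition(p, q):
        minterms.append(f"({'p' if p else '¬p'} ∧ {'q' if q else '¬q'})")

print("sum of minterms:", " ∨ ".join(minterms) if minterms else "contradiction")
```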
Securing Energy Routing Protocol Against Black Hole Attacks in Mobile Ad-Hoc Network
Authors: Rajendra P. P. and Shiva Shankar
Introduction: The aim of the securing energy routing protocol (SERP) is to provide countermeasures to attacks, particularly the black hole attack, in a mobile ad-hoc network, enhancing the network throughput and reducing the end-to-end delay between nodes in the network. The protocol enhances the performance of the network by modifying the existing DSR protocol and introducing a new route discovery mechanism. Methods: The proposed protocol implementation has two phases: a route request/reply phase and a route confirm phase. During route discovery, the source node sends an RREQ packet, as shown in Fig. 1(a), when it does not have an accessible route and requires one to a destination. The source node transmits the RREQ to its associated nodes, and the destination node replies with an RREP. When the source receives the reply message, it responds along the reverse path with a route confirm (RCON) message, providing security to the nodes in the network. Results: To verify the performance of the proposed protocol, it is compared against the existing DSR protocol with respect to network metrics such as end-to-end delay and packet delivery ratio, and the results are validated by comparing both routing algorithms using Network Simulator 2. Conclusion: The proposed SERP strongly safeguards against attacks in the network; the packet delivery ratio is increased compared with DSR, and the end-to-end delay is reduced in the proposed protocol. Discussion: Mobile ad-hoc networks are dynamic in nature, face issues related to secure routing and energy, and are generally vulnerable to several types of attacks. DSR is one of the widely used reactive protocols available for mobile ad-hoc networks, and the proposed work enhances the security of the network over the existing protocol.
Improved Background Subtraction Technique for Detecting Moving Objects
Introduction: Moving object detection from videos is among the most difficult tasks in different areas of computer vision applications. Among the traditional object detection methods, researchers conclude that the background subtraction method performs better in terms of execution time and output quality. Method: The visual background extractor (ViBe) is a renowned background subtraction algorithm for detecting moving objects in various applications. In recent years, a lot of research has been carried out to improve the existing visual background extractor algorithm. Results: After investigating many state-of-the-art techniques and identifying the research gaps, this paper presents an improved background subtraction technique based on morphological operations and a 2D median filter for detecting moving objects, which reduces the noise in the output video and also enhances its accuracy at a very limited additional cost. Experimental results on several benchmark datasets confirmed the superiority of the proposed method over state-of-the-art object detection methods. Conclusion: In this article, a method has been proposed for the detection of moving objects in which the quality of the output is enhanced and good accuracy is achieved. The method provides accurate experimental results, which help in efficient object detection. The proposed technique combines the visual background extractor algorithm with image enhancement procedures, namely morphological and 2-D filtering, at a limited additional cost. Discussion: This article worked on specific aspects, namely noise reduction and image enhancement of the output images of the existing ViBe algorithm. The technique proposed in this article will be beneficial for various computer vision applications such as video surveillance, road condition monitoring, airport safety, human activity analysis, and monitoring of marine borders for security purposes.
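A hedged sketch of the post-processing idea: a crude frame-versus-background difference stands in for the ViBe sample-based model, and the foreground mask is then cleaned with a morphological opening and a 2-D median filter (via SciPy), as the proposed improvement describes. The frames here are synthetic.

```python
import numpy as np
from scipy import ndimage

# Hedged sketch: simple background difference + morphological opening +
# 2-D median filtering of the foreground mask.
rng = np.random.default_rng(0)
background = rng.integers(0, 50, (120, 160)).astype(np.float32)
frame = background + rng.normal(0, 5, background.shape)
frame[40:80, 60:100] += 120                    # a synthetic moving object

raw_mask = np.abs(frame - background) > 40     # crude foreground detection
opened = ndimage.binary_opening(raw_mask, structure=np.ones((3, 3)))
cleaned = ndimage.median_filter(opened.astype(np.uint8), size=3)

print("foreground pixels before/after cleaning:",
      int(raw_mask.sum()), int(cleaned.sum()))
```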
Face Detection in Single and Multiple Images Using Different Skin Color Models
Authors: Manpreet Kaur, Jasdev Bhatti, Mohit K. Kakkar and Arun Upmanyu
Introduction: Face detection is used in many different streams, such as video conferencing, human-computer interfaces, face recognition, and image database management. Therefore, the aim of our paper is to apply the Red Green Blue (RGB) and Hue Saturation Value (HSV) color models in detecting single and multiple faces in images. Each of the HSV, YCbCr and TSL color models is combined individually with the RGB color model to detect the face region in single and multiple images. Methods: Morphological operations are performed on the face region, and the number of pixels is used as the proposed parameter to check whether an input image contains a face region or not. Canny edge detection is also used to show the boundaries of a candidate face region; in the end, the detected face is shown using a bounding box around the face. Results: A reliability model has also been proposed for detecting faces in single and multiple images. The experimental results reflect that the proposed algorithm performs very well in each model for detecting faces in single and multiple images, and the reliability model provides the best fit when analyzing precision and accuracy. Moreover, the YCbCr and TSL color models are observed to outperform the existing HSV color model in the multiple-image environment. Discussion: The calculated results show that the HSV model works best for single-face images, whereas the YCbCr and TSL models work best for multiple-face images. In addition, the results evaluated in this paper provide better testing strategies that help to develop new techniques, leading to an increase in research effectiveness. Conclusion: The calculated values of all parameters help prove that the proposed algorithm performs very well in each model for detecting faces using a bounding box in single as well as multiple images. The precision and accuracy of all three models are analyzed through the reliability model. The comparison in this paper reflects that the HSV model works best for single-face images, whereas the YCbCr and TSL models work best for multiple-face images.
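For illustration, a rule-based RGB skin test of the kind combined with the HSV/YCbCr/TSL models is sketched below; the thresholds are the widely used Peer et al. style RGB rules, not necessarily the ones used in the paper.

```python
import numpy as np

# Hedged sketch: classic RGB skin-color rules producing a boolean skin mask.
def rgb_skin_mask(img):
    """img: H x W x 3 uint8 array in RGB order -> boolean skin mask."""
    r, g, b = (img[..., i].astype(int) for i in range(3))
    return ((r > 95) & (g > 40) & (b > 20)
            & (img.max(axis=-1).astype(int) - img.min(axis=-1).astype(int) > 15)
            & (np.abs(r - g) > 15) & (r > g) & (r > b))

demo = np.zeros((2, 2, 3), dtype=np.uint8)
demo[0, 0] = (220, 170, 140)                   # skin-like pixel
demo[1, 1] = (30, 80, 200)                     # non-skin pixel
print(rgb_skin_mask(demo))
```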
SVM Kernel and Genetic Feature Selection Based Automated Diagnosis of Breast Cancer
Authors: Indu Singh, Shashank Garg, Shivam Arora, Nikhil Arora and Kripali Agrawal
Background: Breast cancer is the development of a malignant tumor in the breast of human beings (especially females). If not detected at the initial stages, it can progress to an inoperable condition. It is one of the leading causes of cancer-related deaths throughout the world. Objectives: The main aim of this study is to diagnose breast cancer at an early stage so that the required treatment can be provided for survival. The tumor is classified as malignant or benign accurately at an early stage using a novel approach that combines a Genetic Algorithm for feature selection with kernel selection for the SVM classifier. Methods: The proposed GA-SVM (Genetic Algorithm - Support Vector Machine) algorithm optimally selects the most appropriate features for training the SVM classifier. Genetic programming is used to select the features and the kernel for the SVM classifier. The Genetic Algorithm operates by exploring the optimal layout of features for breast cancer, thus overcoming the problems faced in an exponentially large feature space. Results: The proposed approach achieves a mean accuracy of 98.82% on the Wisconsin Diagnostic Breast Cancer (WDBC) dataset available on UCI, with a training-to-testing ratio of 50:50. Conclusion: The results show that the proposed model outperforms previously designed models for breast cancer diagnosis. The outcome suggests that the GA-SVM model may be used as an effective tool to assist doctors in treating patients, or alternatively as a second opinion in their eventual diagnosis.
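A hedged sketch of the GA-SVM idea (not the authors' algorithm): a tiny genetic search over binary feature masks scored by cross-validated SVM accuracy on the WDBC data bundled with scikit-learn; the population size, generations and fixed RBF kernel are illustrative choices rather than the paper's settings.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)          # WDBC dataset
rng = np.random.default_rng(0)
n_features = X.shape[1]

def score(mask):
    """Fitness of a binary feature mask: 3-fold CV accuracy of an RBF SVM."""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(kernel="rbf", gamma="scale"), X[:, mask], y, cv=3).mean()

pop = rng.integers(0, 2, (12, n_features)).astype(bool)
for _ in range(5):                                   # a few generations
    fitness = np.array([score(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[::-1][:6]]     # truncation selection
    children = []
    for _ in range(6):
        a, b = parents[rng.integers(0, 6, 2)]
        cut = rng.integers(1, n_features)
        child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
        flip = rng.random(n_features) < 0.05         # mutation
        children.append(np.where(flip, ~child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([score(ind) for ind in pop])]
print("selected features:", int(best.sum()), "CV accuracy:", round(score(best), 4))
```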
Optimal Feature Selection Methods for Chronic Kidney Disease Classification using Intelligent Optimization Algorithms
Authors: Jerlin R. Lambert and Eswaran Perumal
Aim: The classification of medical data places great importance on identifying the existence of disease. Background: Numerous classification algorithms for chronic kidney disease (CKD) have been developed that produce good classification results. However, the inclusion of many different factors in the identification of CKD reduces the effectiveness of the employed classification algorithm. Objective: To overcome this issue, feature selection (FS) approaches are proposed to minimize the computational complexity and also to improve the classification performance in the identification of CKD. Since numerous bio-inspired FS methodologies have been developed, a need arises to examine the performance of the feature selection approaches of different algorithms on the identification of CKD. Method: This paper proposes a new framework for the classification and prediction of CKD. Three feature selection approaches are used, namely the Ant Colony Optimization (ACO) algorithm, the Genetic Algorithm (GA), and Particle Swarm Optimization (PSO), in the classification process of CKD. Finally, a logistic regression (LR) classifier is employed for effective classification. Results: The effectiveness of ACO-FS, GA-FS, and PSO-FS is validated by testing them against a benchmark CKD dataset. Conclusion: The empirical results state that the ACO-FS algorithm performs well, and the reported results show that the classification performance is improved by the inclusion of feature selection methodologies in CKD classification.
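A brief sketch of the wrapper evaluation that ACO-FS, GA-FS and PSO-FS would share: a candidate feature subset is scored by the cross-validated accuracy of a logistic regression classifier. Synthetic data stands in for the benchmark CKD dataset here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hedged sketch: subset fitness used by a bio-inspired feature selector.
X, y = make_classification(n_samples=400, n_features=24, n_informative=8,
                           random_state=0)            # stand-in for CKD data

def subset_fitness(mask):
    mask = np.asarray(mask, dtype=bool)
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

rng = np.random.default_rng(0)
candidate = rng.random(24) < 0.5                      # e.g. one ant / particle / chromosome
print("subset size:", int(candidate.sum()),
      "fitness:", round(subset_fitness(candidate), 3))
```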
A DCT Fractional Bit Replacement Based Dual Watermarking Algorithm for Image Authentication
Authors: Rahul Dixit, Amita Nandal, Arvind Dhaka, Yohan V. Kuriakose and Vardan Agarwal
Abstract: Watermarking is a process of embedding a message inside a digital signal such as an image, video, or text. It is used for several key purposes such as authenticity verification, ownership recognition and hidden communication. In this paper, we discuss image watermarking, where secret messages are stored in images. Introduction: We propose a dual watermarking approach based on the Discrete Cosine Transform, the Discrete Wavelet Transform and Singular Value Decomposition. This paper considers one watermark as robust and the other watermark as fragile. Methods: The robust watermark is embedded in the Discrete Wavelet Transform-Singular Value Decomposition domain and is used to transmit hidden messages. The fragile watermark is embedded in the Discrete Cosine Transform domain and is used for verification of the secret message of the robust watermark. The proposed algorithm is tested in the experimental results section and shows promising results against denoising, rotation, translation and cropping attacks. Results: The results show that the performance of the proposed algorithm in terms of mean squared error, structural similarity and peak signal-to-noise ratio is considerable compared with existing methods. Discussion: We present the comparison of results with the study by Himanshu et al. in Table 10, from which we can see that our method performs better with Gaussian noise and rotational attacks, lacking only with salt-and-pepper noise. Fig. 7 and Fig. 8, focusing on the resulting PSNR, show the variation with noise variance and degree of rotation. From the graphs, it is evident that our method performs better against Gaussian and rotational attacks. Conclusion: In this paper, a dual watermarking method is proposed in which one watermark is fragile, called the authentication watermark, whereas the other watermark is robust, called the information watermark. The authentication watermark is embedded in the fractional part of the DCT domain of the cover image, and the information watermark is embedded in the diagonal vector of the LL sub-band.
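A hedged sketch of the robust branch only, assuming the PyWavelets package: the watermark perturbs the singular values of the LL sub-band of a one-level Haar DWT. The fragile DCT fractional-bit branch, the exact embedding rule and the strength factor used in the paper are not reproduced.

```python
import numpy as np
import pywt

# Hedged sketch: DWT-SVD robust watermark embedding (simplified).
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (128, 128)).astype(np.float64)   # stand-in cover image
watermark = rng.integers(0, 2, 64).astype(np.float64)         # 64-bit payload
alpha = 0.05                                                   # assumed embedding strength

cA, (cH, cV, cD) = pywt.dwt2(cover, "haar")                    # LL sub-band is 64 x 64
U, S, Vt = np.linalg.svd(cA, full_matrices=False)
S_marked = S * (1 + alpha * watermark)                         # perturb singular values
cA_marked = U @ np.diag(S_marked) @ Vt
watermarked = pywt.idwt2((cA_marked, (cH, cV, cD)), "haar")

print("max pixel change:", float(np.abs(watermarked - cover).max()))
```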
Communication Cost Aware Resource Efficient Load Balancing (CARELB) Framework for Cloud Datacenter
Authors: Deepika Saxena and Ashutosh K. Singh
Background: Load balancing of communication-intensive applications, allowing efficient resource utilization and minimization of power consumption, is a challenging multi-objective virtual machine (VM) placement problem. Communication among inter-dependent VMs raises network traffic, hampers the cloud client's experience and degrades overall performance by saturating the network. Introduction: Cloud computing has become an indispensable part of Information Technology (IT), which supports digitization throughout the world. It provides a shared pool of IT resources that are always active, accessible from anywhere at any time, and delivered on demand as a service. The scalability and pay-per-use benefits of cloud computing have driven the entire world towards on-demand IT services that facilitate increased usage of virtualized resources. The rapid growth in the demand for cloud resources has amplified the network traffic in and out of the datacenter. The Cisco Global Cloud Index predicts that by the year 2021, the network traffic among devices within the data center will grow at a Compound Annual Growth Rate (CAGR) of 23.4%. Methods: To address these issues, a Communication cost Aware and Resource Efficient Load Balancing (CARE-LB) framework is presented that minimizes the communication cost and power consumption and maximizes resource utilization. To reduce the communication cost, VMs with high affinity and inter-dependency are intentionally placed closer to each other. The VM placement is carried out by applying the proposed integration of Particle Swarm Optimization and a non-dominated-sorting-based Genetic Algorithm, i.e., the PSOGA algorithm, encoding VM allocations as particles as well as chromosomes. Results: The performance of the proposed framework is evaluated by executing numerous experiments in a simulated data center environment, and it is compared with state-of-the-art methods such as the Genetic Algorithm and the First-Fit, Random-Fit and Best-Fit heuristic algorithms. The experimental outcome reveals that the CARE-LB framework improves resource utilization by 11%, minimizes power consumption by 4.4% and communication cost by 20.3%, with a reduction in execution time of up to 49.7% over the Genetic Algorithm based load balancing framework. Conclusion: The proposed CARE-LB framework provides a promising solution for faster execution of data-intensive applications with improved resource utilization and reduced power consumption. Discussion: In the observed simulation, we analyzed all three objectives after execution of the proposed multi-objective VM allocation, and the results are shown in Table 4. To choose the number of users for the analysis of communication cost, experiments were conducted with different numbers of users. For instance, for 100 VMs, we chose 10, 20, ..., 80 users, and their requests for VMs (number and type of VMs) were generated randomly, such that the total number of requested VMs did not exceed the number of available VMs.
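A hedged sketch of the kind of fitness a PSOGA particle or chromosome might be scored with: a placement vector maps each VM to a server, and the cost mixes communication cost between inter-dependent VMs on different servers, a power proxy (number of active servers) and wasted capacity; the weights, capacities and traffic matrix are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: multi-objective cost of one candidate VM placement.
rng = np.random.default_rng(0)
n_vms, n_servers = 12, 5
traffic = rng.integers(0, 10, (n_vms, n_vms))          # directed inter-VM traffic demand
vm_cpu = rng.uniform(1, 4, n_vms)                       # per-VM CPU demand
server_cap = np.full(n_servers, 12.0)                   # per-server CPU capacity

def cost(placement, w=(0.5, 0.3, 0.2)):
    placement = np.asarray(placement)
    # communication cost: traffic between VM pairs placed on different servers
    different = placement[:, None] != placement[None, :]
    comm = (traffic * different).sum()
    load = np.array([vm_cpu[placement == s].sum() for s in range(n_servers)])
    if (load > server_cap).any():
        return np.inf                                   # infeasible placement
    active = load > 0
    power = active.sum()                                # proxy: number of active servers
    waste = (server_cap[active] - load[active]).sum()   # unused capacity on active servers
    return w[0] * comm + w[1] * power + w[2] * waste

placement = rng.integers(0, n_servers, n_vms)           # one candidate solution
print("placement cost:", cost(placement))
```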
Energy Adaptive and Max-Min based BFS Model for Route Optimization in Sensor Network
Authors: Kapil Juneja
Background: Restricted energy and network lifetime are critical issues in real-time sensor networks. The occurrence of low-energy and faulty intermediate nodes can increase communication failures. The number of intermediate nodes affects the number of re-transmissions and communication failures, and increases the energy consumption on the routing path. Existing protocols take a greedy decision over all possible intermediate nodes collectively by considering one or more parameters. Objective: This work divides the distance between the source and the destination into coverage-specific zones to restrict the hop count. Each zone is then processed individually and collectively to generate an energy-effective and failure-preventive route. Methods: In this paper, an energy and coverage weighted BFS (Best First Search) algorithm is presented for route optimization in the sensor network. Max-min BFS is applied to the sensor nodes of each zone to identify the most reliable and effective intermediate node. Individual and composite weighted rules are applied to the energy and distance parameters. This new routing protocol discovers an energy-adaptive route. Results: The proposed model is simulated on a randomly distributed network, and the analysis is done in terms of network lifetime, energy consumption, hop count, and the number of route switches. A comparative analysis is done against the MCP, MT-MR, Greedy, and other state-of-the-art routing protocols. Conclusion: The comparative results validate the significance of the proposed routing protocol in terms of energy effectiveness, less route switching, and improved network lifetime.
RCSA Protocol with Rayleigh Channel Fading in Massive-MIMO System
Authors: Umesha G. Basavaiah and Mysore N. S. Swamy
Introduction: Massive MIMO is an approach in which cellular BSs comprise a large number of antennas. In this study, the focus is on developing an end-to-end massive MIMO (m-MIMO) system under the Rayleigh channel fading effect. It also includes both inter-channel interference and intra-channel interference in an m-MIMO network system. Methodology: The main aim of this research is to increase the throughput and network capacity and to reduce channel collisions between the associated pilots. Here, we propose an RCSA protocol with the Rayleigh channel fading effect in the m-MIMO network to create a network resembling a real-time scenario. We have focused on the deployment of urban scenarios with small timing variation and provided our novel RAP for the UEs, through which the UEs can access the network. Furthermore, to validate the performance of the proposed scheme, the proposed model is compared with a state-of-the-art model. Results: We provide the analysis based on two considered scenarios: in scenario A, intra-channel interference is taken into account, whereas in scenario B, both intra-cell channel interference and inter-cell channel interference are considered. Our RCSA approach is proposed with uncorrelated Rayleigh fading channels (URFC), which are used to increase the capacity of the network and decrease the collision probability. Conclusion: We have proposed the RCSA approach, which comprises four major steps: system initialization and querying, response queuing, resource contention and channel state analysis, and resource allocation. The system operates in TDD mode, and the time-frequency resources are divided into coherence blocks of channel uses. This research focuses on the RAB, where inactive UEs are admitted to the PDB; it also proposes the RCSA approach for the RAP, which provides protection from strong inter-cell interference in m-MIMO systems. Discussion: To compare our RCSA-URFC approach, we have considered a state-of-the-art technique, namely vertex graph-coloring-based pilot assignment (VGCPA) under URFC. In addition, we have considered the bias term randomly to make the decision regarding a particular UE. Moreover, it is very difficult to identify the strong probability of a UE; therefore, as per the information obtained via b, the bias term can be selected for the UE in order to moderate the decision rule.