Recent Advances in Computer Science and Communications - Volume 13, Issue 6, 2020
Vedic Arithmetic Based High Speed & Less Area MAC Unit for Computing Devices
Background: Rapid improvements in technology enable the design of high-speed devices and the development of modified computational elements for FPGA implementation. With complexity increasing day by day, there is demand for improved VLSI computational elements. Over the past decade, basic VLSI operators such as adders and multipliers have improved significantly, and the basic multiplication operator has been substantially refined for FPGA implementation. Materials and Methods: This paper presents the design of a 32-bit high-speed MAC unit based on Vedic computation. Among the many sutras of Vedic mathematics, the Urdhva Tiryagbhyam sutra is used to generate the partial products in parallel. The proposed technique reduces the number of multiplication steps. Results: The results show that the proposed MAC unit reduces the number of steps required for multiplication and addition, which leads to a smaller area. Compared with the existing method, the proposed MAC reduces LUTs by 50 percent. Conclusion: This paper comprehensively describes the basic multiplication operation using the Urdhva Tiryagbhyam sutra for a parallel multiplication process. The performance of a 32-bit MAC unit based on the Vedic sutras was analyzed on a Spartan-3E Xilinx FPGA device. The implementation results show a reduction in critical delay and area compared with a conventional Booth-multiplier-based MAC design. Hence, this work concludes that the proposed Vedic multiplier is suitable for constructing high-speed MAC units.
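The Urdhva Tiryagbhyam ("vertically and crosswise") sutra forms each result column from the sum of cross-products of the operand digits, so all columns can be computed in parallel before a single carry-propagation pass. A minimal software sketch of the digit-level scheme (the paper's hardware design is not reproduced here):

```python
def urdhva_multiply(x, y):
    """Multiply two non-negative integers column-by-column, Urdhva
    Tiryagbhyam style: cross-products first, carries afterwards."""
    a = [int(d) for d in str(x)[::-1]]       # least-significant digit first
    b = [int(d) for d in str(y)[::-1]]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))

    # Each column k collects every cross-product a[i]*b[j] with i+j == k.
    cols = [0] * (2 * n - 1)
    for i in range(n):
        for j in range(n):
            cols[i + j] += a[i] * b[j]

    # Single carry-propagation pass over the column sums.
    digits, carry = [], 0
    for c in cols:
        s = c + carry
        digits.append(s % 10)
        carry = s // 10
    while carry:
        digits.append(carry % 10)
        carry //= 10
    return int("".join(map(str, digits[::-1])))
```

In hardware the column sums are produced by independent adder trees, which is what shortens the critical path relative to a sequential Booth multiplier.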
An Accomplished Energy-Aware Approach for Server Load Balancing in Cloud Computing
Authors: Alekhya Orugonda and V. Kiran Kumar
Background: Minimizing bandwidth improves battery life, system reliability and other environmental concerns as well as energy optimization, so providers do everything within their power to reduce the amount of data that flows through their pipes. To increase resource utilization, task consolidation is an effective technique, greatly enabled by virtualization technologies, which facilitate the concurrent execution of several tasks and, in turn, reduce energy consumption. Two representative heuristics are MaxUtil, which aims to maximize resource utilization, and Energy Conscious Task Consolidation, which explicitly takes both active and idle energy consumption into account. Materials and Methods: In this paper, an Energy Aware Cloud Load Balancing Technique (EACLBT) is proposed to improve performance in terms of energy and run time. It predicts the load of a host after VM allocation; if the prediction indicates the host would become overloaded, the VM is created on a different host instead. This minimizes the number of migrations caused by host overloading and reduces both bandwidth and energy utilization. Result: The results show that the proposed energy-efficient method monitors energy consumption and supports static and dynamic system-level optimization. The EACLBT reduces the number of powered-on physical machines and the average power consumption compared with other power-saving deployment algorithms. Besides minimizing bandwidth and energy use, a reduction in the number of executed instructions is also achieved. Conclusion: This paper comprehensively describes the EACLBT, which deploys virtual machines for power-saving purposes. Average power consumption is used as the performance metric, and the result of PALB is used as the baseline. It is shown that, on average, an idle server consumes approximately 70% of the power consumed by a server running at full CPU speed. Hence, the proposed Energy Aware Cloud Load Balancing Technique (EACLBT) is effective in minimizing bandwidth and reducing energy consumption.
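The core placement idea, predicting a host's load after the candidate VM is added and skipping hosts that would overload, can be sketched as follows. The additive prediction and the 0.8 threshold are illustrative assumptions, not the paper's actual model:

```python
def place_vm(host_loads, vm_load, threshold=0.8):
    """Place a VM on the first host whose *predicted* post-allocation
    load stays under the overload threshold; return its index.
    host_loads: current load of each host as a fraction of capacity.
    Returns None when no host can accept the VM without overloading."""
    for i, load in enumerate(host_loads):
        predicted = load + vm_load          # simple additive prediction
        if predicted <= threshold:
            host_loads[i] = predicted       # commit the allocation
            return i
    return None
```

Predicting before allocating is what avoids the overload-then-migrate cycle: an overloaded host is never created in the first place, so no migration (and its bandwidth cost) is needed later.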
Optimal Adaptive Data Dissemination Protocol for VANET Road Safety Using Optimal Congestion Control Algorithm
Authors: Bhavani S. Raj and Srimathi Chandrasekaran
Introduction: A VANET is a mobile ad-hoc network (MANET) that concerns vehicle-to-vehicle and vehicle-to-infrastructure communication. Unique characteristics such as higher node speeds and restricted road topology distinguish VANETs from other MANETs. Messages between vehicles and roadside infrastructure in a VANET travel over short- to medium-range wireless technologies. Dedicated Short Range Communications (DSRC) is an RF-technology-based standard designed exclusively for automotive communications. As in most MANETs, data are broadcast in VANETs through the exchange of messages between nodes, but unlike in other MANETs, the limited road topology imposes a directional flow on the messages. Because of the higher node speeds and unstable connectivity among nodes, it is important that data be transmitted effectively and with low delay. Hence, propagating data to the intended node or region of interest is a unique problem in VANETs and requires effective dissemination techniques. Data broadcast from a vehicle are received by all nodes within the broadcast range; the difficulty of data dissemination is therefore propagating data beyond, not within, the transmission range of the sender. An intuitively simple way to disseminate data beyond the transmission range is flooding, in which each node that receives the message simply rebroadcasts it without regard to its current position or any other factor. Data are thus propagated beyond the transmission range when a node at the border of the broadcast range rebroadcasts the message. For effective broadcasting, each vehicle, upon receiving a message, decides whether to rebroadcast it depending on whether or not it is the farthest node in the transmission range.
Thus, the decision-making ability of each vehicle on participating in message propagation depends on its awareness of the vehicles around it and determines the overall effectiveness of the dissemination technique. Objectives: To identify an optimal cluster head based on effective parameters so as to reduce control overhead (OH) messages in a collision-prone traffic network. Methods: The proposed system consists of two processes, namely candidate selection and control OH message reduction. Candidate selection is carried out by the Chaotic Fish Swarm Optimization (CFSO) algorithm, which covers cluster formation and Cluster Head (CH) selection. Control OH messages are reduced by a Predictor based Decision Making (PDM) algorithm. Results: The proposed system is evaluated on performance metrics such as success rate, redundancy rate, collision rate, number of control OH messages, data propagation distance, data propagation time and data dissemination efficiency. The results show that the proposed system performs better than the existing system. Conclusion: In this paper, we have proposed an Optimal Adaptive Data Dissemination Protocol (OAddP) for VANET road safety. OAddP uses the Chaotic Fish Swarm Optimization (CFSO) algorithm to perform clustering and a Predictor based Decision Making (PDM) algorithm to reduce control overhead messages.
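The farthest-node rebroadcast rule described above can be sketched on a one-dimensional road abstraction (positions along the road); the helper names and the 1-D simplification are assumptions for illustration, not the protocol's actual implementation:

```python
def should_rebroadcast(my_pos, sender_pos, neighbor_positions, radio_range):
    """Rebroadcast only if this node is the farthest node from the
    sender among everyone inside the sender's transmission range.
    Positions are 1-D coordinates along the road."""
    my_dist = abs(my_pos - sender_pos)
    if my_dist > radio_range:
        return False  # we never received the message
    in_range = [p for p in neighbor_positions
                if abs(p - sender_pos) <= radio_range]
    return all(my_dist >= abs(p - sender_pos) for p in in_range)
```

Only the border node forwards the message, so each hop extends coverage by nearly a full radio range while every other receiver stays silent, which is the redundancy reduction that plain flooding lacks.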
A Novel Encryption of Text Messages Using Two Fold Approach
Authors: T. Sivakumar, S. Veeramani, M. Pandi and G. Gopal
Background: The amount of digital data created and shared via the internet increases every day. The number of security threats has also increased due to vulnerabilities in network hardware and software. Cryptography is the practice and study of techniques to secure communication in the presence of third parties. Though several cryptosystems exist to secure information, new methods are needed to protect information from attackers. Objective: To propose a new encryption method using Binary Tree Traversal (BTT) and the XOR operation to protect text messages. Methods: The proposed method uses both transposition and substitution techniques to convert plaintext into ciphertext. Binary tree traversal serves as the transposition step, and a bitwise XOR operation is used for substitution. Results: Repeated letters in the plaintext are replaced with different cipher letters placed at different locations in the ciphertext; hence, it is infeasible to recover the plaintext message easily. The encryption time of the proposed method is very low. Conclusion: A simple encryption method using binary tree traversals and the XOR operation is developed. Encrypting data using binary tree traversals differs from traditional encryption methods. The proposed method is fast, secure and can be used to encrypt short messages in real-time applications.
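One way the two stages could combine: store the plaintext in a complete binary tree in level order, read it out with an in-order traversal (transposition), then XOR with a repeating key (substitution). The abstract does not specify the traversal or the key schedule, so both are assumptions here:

```python
def _inorder_permutation(n):
    """In-order visit order of a complete binary tree of n nodes
    stored level-order (children of i at 2i+1 and 2i+2)."""
    order = []
    def visit(i):
        if i < n:
            visit(2 * i + 1)
            order.append(i)
            visit(2 * i + 2)
    visit(0)
    return order

def encrypt(plaintext, key):
    order = _inorder_permutation(len(plaintext))
    transposed = "".join(plaintext[i] for i in order)
    return bytes(ord(ch) ^ key[k % len(key)]
                 for k, ch in enumerate(transposed))

def decrypt(cipher, key):
    order = _inorder_permutation(len(cipher))
    transposed = "".join(chr(b ^ key[k % len(key)])
                         for k, b in enumerate(cipher))
    out = [""] * len(cipher)
    for pos, tree_index in enumerate(order):
        out[tree_index] = transposed[pos]
    return "".join(out)
```

Because the keystream position differs for each character, repeated plaintext letters map to different cipher bytes, matching the behaviour the abstract describes.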
State-of-the-Art: A Systematic Literature Review of Image Segmentation in Latent Fingerprint Forensics
Authors: Megha Chhabra, Manoj K. Shukla and Kiran Kumar Ravulakollu
Latent fingerprints are unintentional finger-skin impressions left as invisible ridge patterns at crime scenes or on objects. A major challenge in latent fingerprint forensics is the poor quality of the image lifted from the crime scene by investigators. Forensic investigators are in permanent search of novel breakthroughs in technologies to capture and process such low-quality images. Recognition accuracy often depends upon 1) the quality of the image captured at the beginning, 2) the metrics used to assess that quality and 3) the level of enhancement required. Low-performance scanners, unstructured background noise, poor ridge quality and overlapping structured noise are common causes of poor image quality. Insufficient image quality results in the detection of false minutiae and hence reduces the recognition rate. Traditionally, image segmentation and enhancement are carried out manually by highly skilled experts. An automated system is challenging to build and can only be effective if it saves a significant amount of time. This survey presents a comparative study of the segmentation techniques available for latent fingerprint forensics.
Analysis and Synthesis of A Human Prakriti Identification System Based on Soft Computing Techniques
Authors: Vishu Madaan and Anjali Goyal
Background: Research on the side effects of modern medicines motivates us to bring Ayurveda back into our modern lifestyle. Allopathic medicines are artificially created, and their chemicals are designed in such a way that they only cure the problem on the surface. This paper discusses how we can retain our health for a longer time. Objective: To build a trained, intelligent decision-making system that can categorize any healthy or unhealthy human being into a suitable category of human prakriti dosha. Methods: The proposed adaptive neuro-fuzzy inference system is trained using a hybrid learning technique, with grid partitioning used for the membership functions. The 28 parameters that identify human prakriti are reduced to 7 effective components to maximize the accuracy of the results. The system is trained with data from 346 healthy individuals to avoid bias in the result. Results: The resulting system can tell any individual his or her prakriti dosha; based on its output, one can make lifestyle changes to avoid the effects of diseases in the future. The system identifies prakriti dosha with 94.23% accuracy. Conclusion: The ANFIS system trained on 346 individuals shows improved performance. Consideration of 28 input parameters enhances the robustness of the system aimed at identifying human prakriti dosha.
Anuvaadika: Implementation of Sanskrit to Hindi Translation Tool Using Rule-Based Approach
Authors: Prateek Agrawal and Leena Jain
Background: Sanskrit is claimed to be the second oldest language in the world, and in ancient days it was the mother tongue across a large part of India. Now, however, it is struggling for acceptance among common people. Objective: The objective of this work is to develop algorithms for stemming Sanskrit words and to implement semantic analysis, discourse integration and pragmatic analysis. A further objective is to implement a translation tool that can translate Sanskrit text into Hindi. Methods: A rule-based method is used to prepare the corpora and implement the proposed work. Results: An interface is made available through which the step-by-step translation can be seen and understood. Conclusion: This tool will be helpful to those who are familiar with Hindi but unable to learn Sanskrit due to the scarcity of language experts. More than 60 million people from India and abroad who are active Hindi users and work on computers can connect themselves with Sanskrit through it and learn the fundamentals of Sanskrit on their own.
Design of Dynamic Morphological Analyser for Hindi Nouns Using Rule Based Approach
Authors: Ishan Kumar, Renu Dhir, Gurpreet S. Lehal and Sanjeev Kumar Sharma
Background: A grammar checker can be used as a proofreading tool, and it depends upon correct definitions of words. If words are not defined correctly or are not tagged with the correct grammatical meaning, the results will not be accurate. Objective: To attain this accuracy, a morphological analyser plays a crucial role. In Hindi, the whole structure and meaning of a sentence depends upon the noun, so it is mandatory to tag noun words properly; however, tagging a noun with its correct grammatical meaning is a challenging task. Methods: To tag a word, the word is input to the tool and first searched in the dictionary. If the word is not found in the dictionary, grammar rules are applied to analyse it. Since nouns include proper names, the rules sometimes cannot be applied; in that case, words are tagged manually and then added to the dictionary for further use. A grammar tag set of 650 tags is used to generate more accurate results, and all words are stored in a database. Performance is measured using precision and recall. Furthermore, this technique can be extended to other grammatical categories such as verbs, adjectives and adverbs. Results: This paper presents a rule-based morphological analyser for Hindi nouns only. It utilizes a dictionary together with a rule-based approach for defining words with their grammatical meanings. The designed analyser stores all words in a database. As it uses a set of more than 650 grammatical tags (for a complete Hindi morphological analyser), the user always gets more accurate results. The authors have preferred both time and accuracy over memory space, which is not a significant constraint these days; therefore, this approach can be used for both types of morphological analysis.
Conclusion: Furthermore, this method can be extended to other categories of Hindi grammar such as adverbs, adjectives and verbs. The results are very promising and are expected to advance the existing strategies and methodologies.
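The dictionary-first, rules-second, manual-fallback pipeline described in the Methods can be sketched as below. The sample entries and the single suffix rule are hypothetical illustrations, not the paper's 650-tag set:

```python
def analyse(word, dictionary, suffix_rules):
    """Tag a word: exact dictionary lookup first, then suffix rules;
    None means the word must be tagged manually and added to the
    dictionary for future lookups."""
    if word in dictionary:
        return dictionary[word]
    for suffix, tag in suffix_rules:
        if word.endswith(suffix):
            return tag
    return None

# Hypothetical sample data for illustration only.
dictionary = {"लड़का": "N.masc.sg"}
suffix_rules = [("ों", "N.pl.oblique")]
```

A manually tagged word is simply inserted into `dictionary`, so the analyser gets faster and more complete with use.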
SDRFHBLoc: A Secure Framework for Localization in Wireless Sensor Networks
Authors: Deepak Prashar, Kiran Jyoti and Dilip Kumar
Background: The deployment of nodes and their security are always the main points of concern in Wireless Sensor Networks (WSNs). For position-centric applications, location estimation is the major requirement, which has led to the emergence of various localization techniques. Localization methods fall broadly into two types: range-based and range-free. Along with position estimation, securing the position-computation process is a prime concern, as wrong position estimates caused by adversaries in the environment may compromise the whole process. Some localization systems estimate the position coordinates of nodes, but none of them addresses how to overcome the effect of adversaries on the localization process. Methods: Here, we develop and analyze a new framework, SDRFHBLoc (Secure Distributed Range Free Hop Based Localization), that supports node deployment under different settings such as the number of nodes, the number of malicious nodes, the radio range and the deployment area. Security features are integrated into the system through authentication of the nodes participating in the localization process, using signature generation and verification. Results: The proposed security framework removes faulty nodes from the network before they participate in the localization process, using an improved DV-Hop approach implemented in the system. In addition, a Particle Swarm Optimization (PSO) module optimizes the entire process to provide the best and most precise position estimates. Conclusion: The proposed framework efficiently mitigates the risk of attacks on the localization process and can be customized to the requirements of the algorithms.
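DV-Hop, the range-free scheme the framework builds on, estimates distances without any ranging hardware: anchors compute an average hop size from their known mutual positions and hop counts, and an unknown node multiplies that hop size by its hop count to each anchor. A minimal sketch of that distance-estimation step (multilateration and the PSO refinement are omitted):

```python
import math

def avg_hop_size(anchor_pos, inter_anchor_hops):
    """anchor_pos: {id: (x, y)}; inter_anchor_hops: {(i, j): hop count}.
    Average hop size = total inter-anchor distance / total hops."""
    total_dist = total_hops = 0.0
    for (i, j), hops in inter_anchor_hops.items():
        (xi, yi), (xj, yj) = anchor_pos[i], anchor_pos[j]
        total_dist += math.hypot(xi - xj, yi - yj)
        total_hops += hops
    return total_dist / total_hops

def dv_hop_distances(anchor_pos, inter_anchor_hops, node_hops):
    """Estimated distance from the unknown node to each anchor."""
    hop_size = avg_hop_size(anchor_pos, inter_anchor_hops)
    return {a: hop_size * h for a, h in node_hops.items()}

# Toy 3-anchor network (positions and hop counts assumed).
anchors = {"A": (0, 0), "B": (0, 30), "C": (40, 0)}
inter_hops = {("A", "B"): 3, ("A", "C"): 4, ("B", "C"): 5}
dists = dv_hop_distances(anchors, inter_hops, {"A": 2, "B": 1, "C": 3})
```

Because hop counts are reported by intermediate nodes, a malicious node can inflate them and skew every estimate, which is exactly why the framework authenticates participants before this computation runs.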
Experimental and Comparison Based Study on Diabetes Prediction Using Artificial Neural Network
Authors: Nitesh Pradhan, Vijaypal S. Dhaka and Satish C. KulhariBackground: Diabetes is spreading in the entire world. In a survey, it is observed that every generation from child to old age people are suffering from diabetes. If diabetes is not identified in time, it may lead to deadliest disease. Prediction of diabetes is of the utmost challenging task by machines. In the human body, diabetes is one of the perilous maladies that creates depended disease such as kidney disease, heart attack, blindness etc. Thus it is very important to diagnose diabetes in time. Objective: Our target is to develop a system using Artificial Neural Network (ANN), with the ability to predict whether a patient suffers from diabetes or not. Methods: This paper illustrates various machine learning techniques in form of literature review; such as Support Vector Machine, Naïve Bayes, K Nearest Neighbor, Decision Tree, Random Forest, etc. We applied ANN to predict diabetes. In this paper, the architecture of ANN consists of four hidden layers each of six neurons and one output layer with one neuron. Optimizer used for the architecture is ‘Adam’. Results: We have Pima Indian diabetes dataset of sufficient number of patients with nine different symptoms with respect to the patients and nine different features in connection with the mathematical computation/prediction. Hence we bifurcate the dataset into training and testing set in majority and minority ratio of 80:20 respectively. It facilitates us the majority patient’s data to be used as training set and minority data to be used as testing set. We train our network for multiple epoch with different activation function. We used four hidden layers with six neurons in each hidden layer and one output layer. On the hidden layer, we used multiple activation functions such as sigmoid, ReLU etc. and obtained beat accuracy (88.71%) in 600 epochs with ReLU activation function. 
On the output layer, we used only sigmoid activation function because we have only two classes in our dataset. Conclusion: Diabetes prediction by machine is a challenging task. So many machine learning algorithms exist to predict the diabetes such as Naïve Bayes, decision tree, K nearest neighbor, support vector machine etc. This paper presents a novel approach to predict whether a patient has diabetes or not based on Pima Indian diabetes dataset. In this paper, we used artificial neural network to train out network and it is observed that artificial neural network approach performs better than all other classifiers.
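The described topology (four ReLU hidden layers of six neurons, one sigmoid output) can be shown as a plain forward pass. The weights below are random and untrained, and the 8-feature input follows the standard Pima dataset layout; this sketches the architecture only, not the trained model or its accuracy:

```python
import math
import random

def relu(v):
    return [max(0.0, x) for x in v]

def sigmoid(x):
    # numerically safe logistic function
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

def dense(x, W, b):
    return [sum(w * xi for w, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

rng = random.Random(0)
sizes = [8, 6, 6, 6, 6, 1]   # 8 inputs, four hidden layers of 6, 1 output
weights = [[[rng.uniform(-0.5, 0.5) for _ in range(sizes[k])]
            for _ in range(sizes[k + 1])] for k in range(5)]
biases = [[0.0] * sizes[k + 1] for k in range(5)]

def predict(features):
    """Forward pass: ReLU hidden layers, sigmoid output probability."""
    x = features
    for k in range(4):
        x = relu(dense(x, weights[k], biases[k]))
    return sigmoid(dense(x, weights[4], biases[4])[0])

# One toy patient record, features pre-scaled to roughly [0, 1].
p = predict([0.35, 0.74, 0.59, 0.35, 0.0, 0.50, 0.23, 0.48])
```

Training would adjust `weights` and `biases` with Adam against the binary cross-entropy loss; the sigmoid output is read as the probability of the "diabetic" class.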
Moving Object Detection and Recognition Using Optical Flow and Eigen Face Using Low Resolution Video
Authors: Prateek Agrawal, Ranjit Kaur, Vishu Madaan, M. S. Babu and Dimple Sethi
Background: As crime increases day by day, various applications have been proposed to protect public places. Monitoring and tracking via a video surveillance system is a difficult task, and it is well known that human operators are neither reliable nor effective at this job. Objective: The prime objective of this research is to develop an automatic monitoring and inspection system competent enough to detect and track moving objects in real time using a low-resolution video surveillance camera. Methods: First, the video acquired from a low-resolution surveillance camera is split into RGB frames, which are converted to grayscale. Optical flow and eigenface algorithms are then applied to extract the moving object in the video sequence and match it against the images stored in the database. Results: The proposed system is compared with existing systems and is observed to give more accurate results. It meets real-time tracking requirements even when the target image resolution is smaller than 160x120. Conclusion: This method uses optical flow and the eigenface algorithm to detect and track moving objects. The system performs well and can be used for real-time object tracking; the same experiment can be applied to human faces as well.
An Optimal Framework for Spatial Query Optimization Using Hadoop in Big Data Analytics
Authors: Pankaj Dadheech, Dinesh Goyal, Sumit Srivastava and Ankit Kumar
Background and Objective: Spatial queries are frequently used in Hadoop for large-scale data processing. However, the vast size of spatial information makes it difficult to process spatial queries proficiently, so the Hadoop system is used to process the Big Data. Boolean queries and geometry-based Boolean spatial data are used for query optimization in the Hadoop system. This paper discusses a lightweight and adaptable spatial index for big data, suitable for processing in Hadoop frameworks; the results demonstrate the proficiency and adequacy of the spatial indexing system for various spatial queries. Methods: Several approaches are combined to build an efficient system: an efficient and scalable method for processing top-k spatial Boolean queries, efficient query processing in geographic web search engines (which combines text and spatial data processing techniques), and top-k spatial preference queries. All of these methods are implemented for comparative analysis. Results and Discussion: The execution results show how performance differs across data types. Three graphs are presented based on the different data inputs, indexings and data types. The results show that as the number of rows to be executed increases, the performance of geohash decreases, while the crucial point of change in execution performance is not visible due to a sudden spike in the number of rows returned. Conclusion: Query processing in geographic web search engines is discussed. This work presents a general framework for ranking search results based on a combination of textual and spatial criteria, and proposes several algorithms for efficiently executing ranked queries on very large collections.
The proposed algorithms are integrated into an existing high-performance search engine query processor and evaluated on a large dataset with realistic geographic queries. The results show that in many cases geographic query processing can be performed at about the same level of efficiency as text-only queries.
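The geohash indexing whose performance is discussed above encodes a latitude/longitude pair into a short base-32 string by interleaving longitude and latitude bisection bits, so nearby points usually share a common prefix and can be range-scanned in a key-value store. A minimal encoder following the standard scheme:

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # standard geohash alphabet

def geohash(lat, lon, precision=11):
    """Encode a point as a geohash of `precision` characters by
    alternately bisecting the longitude and latitude intervals."""
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    bits, even = [], True          # even bit positions encode longitude
    while len(bits) < precision * 5:
        if even:
            mid = (lon_lo + lon_hi) / 2
            bits.append(1 if lon >= mid else 0)
            if lon >= mid:
                lon_lo = mid
            else:
                lon_hi = mid
        else:
            mid = (lat_lo + lat_hi) / 2
            bits.append(1 if lat >= mid else 0)
            if lat >= mid:
                lat_lo = mid
            else:
                lat_hi = mid
        even = not even
    # pack each group of 5 bits into one base-32 character
    return "".join(BASE32[int("".join(map(str, bits[k:k + 5])), 2)]
                   for k in range(0, len(bits), 5))
```

Prefix sharing is what makes geohash attractive as a Hadoop row key, and it also explains the degradation the results mention: queries that straddle a cell boundary must scan several prefixes, so returned-row counts can jump suddenly.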
Performance Analysis of Kalman Filter in Computed Tomography Thorax for Image Denoising
Authors: Manoj Gupta, J. Lechner and Basant Agarwal
Medical image processing is a very important field of study due to its large number of applications in human life. For the diagnosis of any disease, several methods of medical image acquisition are possible, such as Ultrasound (US), Magnetic Resonance Imaging (MRI) or Computed Tomography (CT). Depending upon the type of image acquisition, different types of noise can occur. Background: The most common types of noise in medical images are Gaussian noise, speckle noise, Poisson noise, Rician noise and salt & pepper noise. The related noise models and distributions are described in this paper, and several filtering methods for denoising these types of noise are compared. Objective: The main purpose of this paper is to compare well-known filtering methods such as the arithmetic mean, median and enhanced Lee filters with the rarely used Kalman filter, as well as with relatively new methods such as the Non-Local Means (NLM) filter. Methods: To compare these filtering methods, we use metrics such as Root Mean Square Error (RMSE), Peak Signal to Noise Ratio (PSNR), Mean Structural Similarity (MSSIM), Edge Preservation Index (EPI) and the Universal Image Quality Index (UIQI). Results: The processed images are shown for a specific noise density and noise variance. We show that the Kalman filter performs better than the mean, median and enhanced Lee filters for removing Gaussian, speckle, Poisson and Rician noise. Conclusion: Experimental results show that the Kalman filter provides better results than the other methods. It could also be a good alternative to the NLM filter due to almost equal results and lower computation time.
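Two of the comparison metrics named above, RMSE and PSNR, are straightforward to compute; a small sketch on flattened pixel lists (8-bit images, so peak = 255):

```python
import math

def rmse(a, b):
    """Root mean square error between two equal-length pixel lists."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB: 20*log10(peak / RMSE)."""
    err = rmse(a, b)
    return float("inf") if err == 0 else 20.0 * math.log10(peak / err)

clean = [100] * 64          # 8x8 flat patch
noisy = [110] * 64          # constant offset of 10 grey levels
```

Lower RMSE and higher PSNR both mean the filtered image is closer to the reference, which is how the paper ranks the Kalman filter against the mean, median, enhanced Lee and NLM filters.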
Burrows Wheeler Transform and Wavelet Tree Based Retrieval of Genome Sequence in an Indexed Genome Database
Authors: Sanjeev Kumar, Suneeta Agarwal and Ranvijay
Background: New-generation sequencing machines such as Illumina and Solexa can generate millions of reads from a given genome sequence in a single run. A suitable data structure, efficient with respect to both memory and time, is needed to align these enormous numbers of reads to a reference genome. Existing indexing and read-alignment techniques include MAQ, Bowtie, BWA, BWBBLE and Kart; memory-efficient versions of these techniques are 10-20% slower than their normal versions. Objective: A new approach for efficient indexing and retrieval of large genomic data. Methods: In this paper, we propose an efficient method based on the Burrows-Wheeler Transform and a Wavelet Tree (BWIT) for genome-sequence indexing and read alignment. Both exact and approximate alignments are possible with the proposed approach. Results: BWIT is experimentally found to perform better than existing approaches with respect to both memory and speed, and performs best for protein-sequence indexing. All existing read-alignment approaches depend upon the index size used; in general, the time required increases as the index size is reduced. Experiments were performed with Bowtie, BWA and Kart using index sizes of 1.25N, 1.05N and 0.98N, where N is the size of the text (reference genome). Our BWIT index size is 0.6N, which is smaller than the index sizes used by all other approaches, yet even with this smallest index size, the alignment time of our approach is the lowest. Conclusion: An innovative indexing technique is presented to address the problem of storage, transmission and retrieval of large DNA/protein sequence data.
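The Burrows-Wheeler Transform at the heart of BWIT takes the last column of the sorted rotations of the text; alignment indexes then answer rank queries over that column, which is the operation a wavelet tree accelerates. A minimal sketch (sorted rotations for the transform, a naive linear-scan `rank` standing in for the wavelet tree):

```python
def bwt(text):
    """Burrows-Wheeler Transform via sorted rotations; '$' is the
    end-of-text sentinel, lexicographically smallest."""
    s = text + "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def rank(bw, ch, i):
    """Occurrences of ch in bw[:i] -- the query a wavelet tree answers
    in O(log sigma); shown here as a naive O(i) scan."""
    return bw[:i].count(ch)
```

Backward search for a read pattern is a sequence of such rank calls, so replacing the naive scan with a wavelet tree is what lets the full index fit in 0.6N while staying fast.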
Chaos-Based Controlled System Using Discrete Map
Authors: Anup K. Das and Mrinal Kanti Mandal
Background: Designing an efficient and fast controller for controlling a process parameter is always a challenge for the control-system designer. The main objective of this article is to design a secure chaos-based controller by synchronizing two chaotic systems, whose initial values are taken as the set value and the initial process value of the physical parameter to be controlled. Methods: The controller is designed by synchronizing two two-dimensional chaotic Henon maps through a nonlinear control method. One map acts as the driver system, and its initial value is the set value of a specific process of a given system; the other, identical map acts as the driven system, and its initial value is the initial process value of the process-control system. The two chaotic maps become synchronized via a nonlinear control law. The error accumulated until synchronization is achieved is converted into a suitable signal to operate the final control element, which raises or lowers the initial process value towards the set value. This self-repetitive process achieves control of the process parameter. Results: In experiments, we observed that the error signal becomes zero after a small time interval (in simulation it takes only a few iterations), and the accumulated error remains fixed at a steady value. This error is responsible for driving the process value to the set value. The entire process has been implemented in hardware using an ATmega16 microcontroller and also in the Proteus simulation software. Conclusion: The controller is very fast because the nonlinear control law for synchronization converges very quickly, and since the controller is designed in the chaotic regime, it is secure.
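The abstract does not state the exact control law, so the sketch below uses one common choice: a nonlinear input that cancels the difference of the Henon nonlinearities and adds a contracting term, giving error dynamics e' = -c·e. The gain c and the initial values are assumptions for illustration:

```python
a, b = 1.4, 0.3   # classic Henon parameters
c = 0.5           # error-contraction gain (assumed, |c| < 1)

def henon(x, y):
    return 1.0 - a * x * x + y, b * x

x1, y1 = 0.1, 0.1    # driver map: the set value
x2, y2 = 0.5, -0.2   # driven map: initial process value

for _ in range(80):
    nx1, ny1 = henon(x1, y1)          # driver evolves freely
    fx2, fy2 = henon(x2, y2)
    # nonlinear control: cancel the dynamics mismatch, contract the error
    u = (nx1 - fx2) - c * (x2 - x1)
    x2, y2 = fx2 + u, fy2             # controlled driven map
    x1, y1 = nx1, ny1

err = abs(x2 - x1) + abs(y2 - y1)
```

With this law the x-error shrinks by the factor c at every iteration and the y-error follows (it equals b times the previous x-error), so the maps synchronize within a few steps, matching the fast convergence the abstract reports.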
Numerical Studies of Blood Flow in Left Coronary Model
Authors: Rupali Pandey, Manoj Kumar and Vivek K. Srivastav
Introduction: Artery blockage is the most prevalent cause of Coronary Artery Disease (CAD). A blockage inside the artery interrupts the blood supply to the rest of the body and can therefore cause a heart attack. Objectives: Two different three-dimensional models, namely a normal artery and one with 50% plaque, are used for the numerical studies. Five inlet velocities (0.10, 0.20, 0.50, 0.70 and 0.80 m/s), corresponding to different blood-flow conditions, are considered to study the effect of velocity on the human heart. Methods: A Finite Volume Method (FVM) based Computational Fluid Dynamics (CFD) technique is used for the numerical simulation of blood flow. Hemodynamic factors are computed and compared for the two geometrical models (normal vs. blockage). Results: The Area Average Wall Shear Stress (AAWSS), a key hemodynamic factor, ranges from 4.1-33.6 Pa at the face of the Left Anterior Descending (LAD) branch of the Left Coronary Artery (LCA) for the constricted artery. Conclusion: The predominantly low WSS index, relative to the normal artery, affirms the existence of plaque. From the medical point of view, this can serve as an excellent factor for early diagnosis of CAD and thus help curb the increasing frequency of Myocardial Infarction (MI). In future research we will adopt unsteady flow with both rigid and elastic arterial walls.
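For orientation on the magnitudes involved, wall shear stress in an idealized straight artery with fully developed Poiseuille flow is tau_w = 4*mu*V/R. This back-of-the-envelope estimate is not the paper's 3-D FVM simulation; the viscosity and radius values below are typical assumed figures:

```python
MU = 3.5e-3   # blood dynamic viscosity, Pa.s (assumed typical value)
R = 1.5e-3    # coronary artery radius, m (assumed typical value)

def wall_shear_stress(v_mean, radius=R, viscosity=MU):
    """Poiseuille-flow wall shear stress: tau_w = 4 * mu * V / R,
    where V is the mean (inlet) velocity."""
    return 4.0 * viscosity * v_mean / radius

# WSS estimates for the paper's five inlet velocities
inlet_velocities = (0.10, 0.20, 0.50, 0.70, 0.80)   # m/s
wss = [wall_shear_stress(v) for v in inlet_velocities]
```

The idealized values land in the low single-digit Pa range; stenosed geometries concentrate shear near the throat (hence values up to tens of Pa) while creating low-WSS recirculation zones downstream, which is the diagnostic signal the study exploits.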
Optimized Overcurrent Relay Coordination in a Microgrid System
Authors: Odiyur V.G. Swathika and Udayanga Hemapala
Background: Microgrids are a conglomeration of loads and distributed generators in a distribution-level network. Since such a network is no longer fed from a single source, typical protection strategies may not be deployable. Reconfiguration, a topology-changing feature of microgrids, is another factor to be considered when protecting the microgrid setup. Objective: To develop an optimized overcurrent relay coordination scheme for microgrid networks. Methods: In order to devise a suitable overcurrent protection scheme for microgrids, the normal and fault currents are first captured for all topologies of the microgrid. For each topology, the optimized time multiplier settings of the overcurrent relays are computed using the Dual Simplex Algorithm, which helps clear faults from the network as fast as possible. Results: A 21-bus microgrid system is considered, and the optimized overcurrent relay coordination scheme is realized for it. Conclusion: The proposed optimized overcurrent relay coordination was tested successfully on the 21-bus microgrid system. The proposed protection scheme identifies the optimized Time Multiplier Setting values of the overcurrent relays in the fault-clearance path. It is evident that the proposed scheme can be conveniently extended to larger networks.
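The quantity being optimized, relay operating time as a function of the Time Multiplier Setting (TMS), commonly follows the IEC standard-inverse curve t = TMS * 0.14 / (M^0.02 - 1), where M is the fault current over the pickup current. A small sketch of that characteristic (the LP formulation solved by the Dual Simplex Algorithm is not reproduced here):

```python
def operating_time(tms, fault_current, pickup_current):
    """IEC standard-inverse overcurrent relay characteristic:
    t = TMS * 0.14 / (M**0.02 - 1), M = I_fault / I_pickup (M > 1)."""
    m = fault_current / pickup_current
    return tms * 0.14 / (m ** 0.02 - 1.0)

# Primary relay trips fast; the backup relay, given a larger TMS by the
# optimizer, waits out the coordination margin before acting.
t_primary = operating_time(0.10, 2000.0, 200.0)
t_backup = operating_time(0.30, 2000.0, 200.0)
```

The coordination problem is linear in the TMS values (each operating time is TMS times a constant for a given fault), which is why a Dual Simplex LP can minimize the total clearing time subject to the primary/backup margin constraints.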
-
-
-
Energy Saving Using Green Computing Approach for Internet of Things (IoT) Based Tiny Level Computational Devices
Background: Minimizing energy consumption is important for battery life, system reliability and other environmental concerns, and energy optimization is becoming very important for tiny devices in the Internet of Things (IoT) research area with the increasing demand for battery-operated devices. IoT needs battery-life improvement in tiny devices, so power optimization is significant. Methods: In this paper an Experimental Design (ED) is proposed for performance improvement in terms of energy and run time. Using a green computing approach, source-level optimizations are applied so that code runs efficiently on battery-limited devices. The proposed technique reduces power consumption. Results: The results show that the proposed energy saving on IoT-based tiny devices, measured as the energy consumed and the run time of the code after applying the optimization techniques, is the minimum among all four techniques considered. Besides the reduction in energy and runtime, a reduction in the number of executed instructions is also achieved. Conclusion: This paper comprehensively describes the proposed Experimental Design. Its average performance percentage reached 91.1% for energy; besides the reduction in energy and runtime, a reduction in the number of executed instructions is also achieved. The performance is best for common subexpression elimination. The proposed Experimental Design (ED) is therefore effective in reducing the energy consumption and runtime of a program.
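The optimization this abstract singles out, common subexpression elimination, can be illustrated in a few lines; a hypothetical example, not taken from the paper's benchmark set:

```python
def poly_naive(x):
    # The subexpression x * x is recomputed three times.
    return x * x * x + 2 * x * x + x * x

def poly_cse(x):
    # Common subexpression elimination: compute x * x once, reuse it.
    x2 = x * x
    return x2 * x + 2 * x2 + x2

# Same result with fewer executed multiplications, which is exactly the
# effect (energy and runtime) measured on tiny IoT devices.
```

On a microcontroller without an optimizing compiler pass, such source-level rewrites translate directly into fewer executed instructions and hence less energy drawn from the battery.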
-
-
-
To Improve the Web Personalization Using the Boosted Random Forest for Web Information Extraction
Authors: Pappu S. Rao and Vasumathi Devara. Background: Web personalization is a method of adapting a web site to the exact needs of its users, exploiting the data gathered about users' navigational behaviour to include more relevant material in the web framework. Methods: In this paper, finding large itemsets produces all combinations of items whose support exceeds a user-defined minimum support. The support of an itemset is the number of transactions that contain it. For ranking purposes, the computed values together with the user-ranked list are supplied to the fuzzy-bat algorithm. Results: The results show that the proposed methodology addresses the challenge of mining association rules over a set of transactions, formulated as the problem of generating all association rules whose support is greater than a user-defined minimum support and whose confidence is greater than a user-defined minimum confidence. Conclusion: This paper describes the proposed approach, whose precision is compared extensively with the existing method over the varied queries provided. The computed values together with the user-ranked list are supplied to the fuzzy-bat to rank the list. The proposed technique is compared with the prevailing fuzzy technique with regard to response time as well as precision.
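The large-itemset step can be sketched as brute-force support counting; minimum support here is expressed as a fraction of transactions, and the data is made up for illustration (the paper's mining and fuzzy-bat ranking are not reproduced):

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Return every itemset whose support (fraction of transactions
    containing it) meets the user-defined minimum support."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    frequent = {}
    for k in range(1, len(items) + 1):
        found_any = False
        for cand in combinations(items, k):
            support = sum(1 for t in transactions if set(cand) <= t) / n
            if support >= min_support:
                frequent[cand] = support
                found_any = True
        if not found_any:   # no frequent k-itemset => no (k+1)-itemset either
            break
    return frequent

tx = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
fi = frequent_itemsets(tx, min_support=0.5)
```

A real miner such as Apriori prunes candidates instead of enumerating all combinations; a confidence threshold then filters the association rules derived from these frequent itemsets.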
-
-
-
Collaborative Packet Dropping Intrusion Detection in MANETs
Authors: Gopichand Ginnela and Ramaiah K. Saravanaguru. Background: A Mobile Ad hoc Network (MANET) is a wireless network that requires no pre-existing infrastructure, is self-organized dynamically, and is set up on a temporary basis. Before packets are transmitted from the source to the destination node, a route from the source to the destination is discovered. In the absence of dedicated routers, the nodes themselves act as routers and cooperate in performing the routing mechanism. During packet delivery from source to destination, a critical attack may cause packets to be dropped; packet dropping is among the most common threats in mobile ad hoc networks. Objective: To find a detection mechanism for collaborative packet dropping in Mobile Ad hoc Networks. Methods: The proposed work examines the diverse properties of collaborative packet dropping intrusions and scrutinizes the classes of proposed protocols with specific features deployed in wireless ad hoc networks. Results: The End-to-End delay results show that a Mobile Ad hoc Network under a cooperative black hole intrusion exhibited only a minor decrease and remains efficient in terms of performance. Conclusion: In this paper, we mainly focused on the routing protocols existing in MANETs, such as AODV, DSR and DSDV, for detecting the packet dropping attack in a MANET. We examine the diverse properties of collaborative packet dropping attacks and scrutinize classifications of proposed protocols with certain configurations deployed in mobile ad hoc networks.
-
-
-
A Novel Permutation Based Encryption Using Tree Traversal Approach
Background: In the 21st century, one of the emerging issues is securing information stored and communicated in digital form. There is no assurance that transmitted data will not be intercepted by a hacker, or that it will reach the receiver correctly. Thus, confidentiality, integrity and authentication services play a major role in Internet communication. Encryption is the process of encoding messages in such a way that only authorized parties can read and understand them after successful decryption. Several data security techniques have emerged in recent years, but there is still a need to develop new and different techniques to protect digital information from attackers. This paper provides a new idea for data encryption and decryption using the notion of binary tree traversal to secure digital data. Objective: To develop a new data encryption and decryption method using the notion of binary tree traversal to secure data. Method: The proposed method uses both transposition and substitution techniques for converting plaintext into ciphertext. Binary tree in-order traversal is adopted for transposition, and the Caesar cipher technique for substitution. Results: From the results, it is observed that repeating letters in the plaintext are replaced with different cipher letters. Hence, it is infeasible to predict the plaintext message easily using letter frequency analysis. The experimental results show that the proposed method produces a different ciphertext for the same plaintext message when the number of rounds varies, and that the time taken for encryption is very low. Conclusion: A simple encryption method using binary tree in-order traversal and the Caesar cipher is developed. Encrypting data using binary tree traversals differs from other traditional encryption methods. The proposed method is fast, secure and can be used to encrypt short messages in real-time applications.
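The two stages named in the method, in-order-traversal transposition followed by a Caesar substitution, can be sketched as follows. The tree is stored in heap (level-order) array layout; the paper's exact tree construction and round structure may differ:

```python
def inorder_indices(n, i=0):
    """In-order traversal of a complete binary tree of n nodes stored
    as an array in heap layout (children of i at 2i+1 and 2i+2)."""
    if i >= n:
        return []
    return inorder_indices(n, 2 * i + 1) + [i] + inorder_indices(n, 2 * i + 2)

def encrypt(plaintext, shift):
    # Transposition: fill the tree level by level, read it back in-order.
    order = inorder_indices(len(plaintext))
    transposed = "".join(plaintext[i] for i in order)
    # Substitution: Caesar shift on the letters.
    return "".join(
        chr((ord(c) - 65 + shift) % 26 + 65) if c.isalpha() else c
        for c in transposed.upper()
    )

def decrypt(ciphertext, shift):
    order = inorder_indices(len(ciphertext))
    shifted = "".join(
        chr((ord(c) - 65 - shift) % 26 + 65) if c.isalpha() else c
        for c in ciphertext
    )
    # Invert the in-order permutation.
    out = [""] * len(shifted)
    for pos, i in enumerate(order):
        out[i] = shifted[pos]
    return "".join(out)
```

Because the transposition scatters repeated plaintext letters to different positions before the shift, identical plaintext letters need not map to one fixed ciphertext position, which is the property the abstract credits with resisting simple frequency analysis.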
-
-
-
Sentiment Polarity Classification Using Conjure of Genetic Algorithm and Differential Evolution Methods for Optimized Feature Selection
Authors: Jeevanandam Jotheeswaran and S. Koteeswaran. Objectives: Sentiment Analysis (SA) plays a big role in Big Data applications such as consumer attitude detection, brand/product positioning, customer relationship management and market research. SA is a natural language processing method to track the public mood on a specific product. SA builds a system to collect and examine opinions on a product in comments, blog posts, reviews or tweets. Machine learning applicable to Sentiment Analysis generally belongs to supervised classification. Methods: Two sets of documents, a training set and a test set, are required in machine-learning-based classification. The training set is used by classifiers to learn the differentiating characteristics of documents; it is thus called supervised learning. Results: Test sets validate the classifier's performance. The semantic orientation approach to SA is unsupervised learning because it requires no prior training for mining data; it measures how far a word is either positive or negative. This paper uses a hybrid GA-DE optimization technique for sentiment classification to classify features from movie reviews and medical data. Conclusion: Our research tunes the learning rate and momentum values, which are optimized by the genetic approach and in turn improve the accuracy of the classification procedure.
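The tuning step in the conclusion can be mimicked with SciPy's differential evolution on a toy surrogate of validation error; `val_error` is a stand-in for actual classifier training (not the paper's model), and the GA half of the hybrid is omitted:

```python
import numpy as np
from scipy.optimize import differential_evolution

def val_error(params):
    """Hypothetical surrogate for classifier validation error as a
    function of (learning_rate, momentum); smooth, with a known optimum
    at lr = 1e-2, momentum = 0.9 so the search is easy to check."""
    lr, momentum = params
    return (np.log10(lr) + 2.0) ** 2 + (momentum - 0.9) ** 2

result = differential_evolution(
    val_error,
    bounds=[(1e-4, 1e-1),   # learning rate
            (0.0, 0.99)],   # momentum
    seed=0,
)
best_lr, best_momentum = result.x
```

In the hybrid scheme, a GA population would supply or refine DE's candidate vectors; here DE alone suffices to show how the (learning rate, momentum) pair is searched within bounds.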
-
-
-
Quantum-Inspired Ant-Based Energy Balanced Routing in Wireless Sensor Networks
Authors: Manisha Rathee, Sushil Kumar and Kumar Dilip. Background: The limited energy capacity of battery-operated Wireless Sensor Networks (WSNs) is the prime impediment to the ubiquity of WSNs, as the network lifetime depends on the energy available at the nodes. Prolonging the network lifetime is the principal issue in WSNs, and the challenge lies in devising a strategy for judicious use of the available energy resources. Routing has been one of the most commonly used strategies for minimizing and balancing the energy consumption of nodes in a WSN. Methods: Routing in large networks has been proved to be NP-hard, and therefore metaheuristic techniques have been applied to this problem. Quantum-inspired algorithms are relatively new metaheuristic techniques which have been shown to perform better than their traditional counterparts. Therefore, a Quantum-inspired ant Based Energy balanced Routing (QBER) algorithm is proposed in this paper for addressing the problem of energy-balanced routing in WSNs. Results: Simulation results confirm that the proposed QBER algorithm performs comparatively better than other quantum-inspired routing algorithms for WSNs. Conclusion: In this paper, a Quantum-inspired Ant-based routing (QBER) algorithm has been proposed for solving the problem of energy-balanced routing in wireless sensor networks.
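The ant-colony side of such a scheme boils down to a probabilistic next-hop rule whose desirability mixes pheromone with residual energy; a sketch with invented weights, leaving out QBER's quantum-inspired encoding entirely:

```python
import random

def choose_next_hop(neighbors, pheromone, energy, alpha=1.0, beta=2.0,
                    rng=random):
    """Roulette-wheel choice: nodes with more pheromone AND more residual
    energy are proportionally more likely to be picked, which steers
    traffic away from depleted nodes (the energy-balancing idea)."""
    weights = [(pheromone[n] ** alpha) * (energy[n] ** beta) for n in neighbors]
    pick = rng.uniform(0, sum(weights))
    acc = 0.0
    for node, w in zip(neighbors, weights):
        acc += w
        if pick <= acc:
            return node
    return neighbors[-1]

rng = random.Random(42)
pheromone = {"u": 1.0, "v": 1.0}
energy = {"u": 0.9, "v": 0.1}   # node u has far more residual energy
picks = [choose_next_hop(["u", "v"], pheromone, energy, rng=rng)
         for _ in range(1000)]
```

With equal pheromone, the energy exponent beta dominates, so the well-charged node absorbs most of the traffic until its energy (and hence its weight) drops, balancing consumption over time.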
-
-
-
Cost-effective Heuristic Workflow Scheduling Algorithm in Cloud Under Deadline Constraint
Authors: Jasraj Meena and Manu Vardhan. Background: Cloud computing is used to deliver IT resources over the Internet. Owing to the popularity of cloud computing, most scientific workflows are nowadays shifted to this environment. Many algorithms have been proposed in the literature to schedule scientific workflows in the cloud, but their execution cost is very high as they do not meet the user-defined deadline constraint. Aims: This paper focuses on satisfying the user-defined deadline of a scientific workflow while minimizing the total execution cost. Methods: To achieve this, we propose a Cost-Effective under Deadline (CEuD) constraint workflow scheduling algorithm. Results: The proposed CEuD algorithm considers all the essential features of the cloud and resolves major issues such as performance variation and acquisition delay. Conclusion: We compared the proposed CEuD algorithm with existing algorithms from the literature on scientific workflows (i.e., Montage, Epigenomics, and CyberShake) and obtained better results, minimizing the overall execution cost of the workflow while satisfying the user-defined deadline.
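The core cost-vs-deadline trade-off can be sketched as choosing, per task, the cheapest VM type that still fits within the task's deadline slack; the VM catalogue and units are hypothetical, and none of CEuD's handling of performance variation or acquisition delay is modeled:

```python
def pick_vm(task_length, slack, vm_types):
    """Return the name of the cheapest VM type whose runtime
    (task_length / speed) fits within the remaining deadline slack;
    fall back to the fastest type if none fits.
    vm_types: list of (name, speed, cost_per_hour) tuples."""
    feasible = [(cost, name) for name, speed, cost in vm_types
                if task_length / speed <= slack]
    if feasible:
        return min(feasible)[1]
    return max(vm_types, key=lambda v: v[1])[0]   # fastest, best effort

catalogue = [("small", 1.0, 1.0), ("medium", 2.0, 2.5), ("large", 4.0, 6.0)]
```

Distributing the workflow deadline into per-task slacks and then picking the cheapest feasible resource for each task is the general shape of deadline-constrained cost minimization; CEuD's contribution lies in how those slacks and cloud realities are handled.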
-
-
-
IoT Secured Disjunctive XOR Two-Factor Mutual Authentication for Users
Authors: Meenu Talwar and Balamurugan Balusamy. Aims: The paper introduces an algorithmic modification of M.L. Das's previous work on IoT mutual authentication. Background: IoT has shown that if a thing exists on earth, it is bound to be connected to the Internet to report its state on its own. IoT plays a remarkable role in all aspects of our daily lives; it covers entertainment, sports, healthcare, education, security, automobiles, industrial as well as home appliances, and many more real-time applications. By easing our everyday activities, it is reshaping the way people interact with their surroundings. This holistic view brings some major concerns in terms of security and privacy. Objective: The objective of the work is to increase security and protect the algorithm from various attackers so that it can be used in real-time applications. Method: The scheme uses the XOR (Exclusive OR) operation, which protects the algorithm from DoS attacks, bypass attacks and intruder attacks, and allows the user to change the password. Conclusion: The proposed work secures all the connected IoT devices to work independently, because every device has to be verified at each step in the IoT system before initiating its operation.
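The role of XOR in protocols of this family is to mask fresh nonces with a pre-shared secret so that nothing secret crosses the channel in the clear; a minimal sketch (names, hash choice and message layout are assumptions, not Das's or this paper's exact scheme):

```python
import hashlib
import os

def h(*parts: bytes) -> bytes:
    """Hash helper used to derive secrets and proofs."""
    return hashlib.sha256(b"|".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

secret = h(b"user-password", b"device-salt")   # assumed pre-shared secret

# Device -> server: a fresh nonce, XOR-masked with the secret.
nonce = os.urandom(32)
masked = xor(nonce, secret)

# Server: recover the nonce with the same secret and return a proof of
# knowledge; the device checks it, completing the mutual-authentication
# round trip.
recovered = xor(masked, secret)
proof = h(recovered, secret)
```

Because XOR is its own inverse, each side can unmask what the other sends only if it holds the secret; replaying a stale `masked` value fails once the nonce changes, which is the kind of property the DoS/bypass/intruder resistance claims rest on.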
-
-
-
A Recommendation Approach Using Forwarding Graph to Analyze Mapping Algorithms for Virtual Network Functions
Authors: Lyes Bouali, Selma Khebbache, Samia Bouzefrane and Mehammed Daoui. Background: Network Functions Virtualization (NFV) is a paradigm shift in the way network operators deploy and manage their services. The basic idea behind this technology is the separation of network functions from traditional dedicated hardware by implementing them as software able to run on top of general-purpose hardware; the resulting pieces of software are called Virtual Network Functions (VNFs). NFV is expected, on the one hand, to increase the deployment flexibility and agility of network services and, on the other hand, to reduce operating and capital expenditures. One of the major challenges in NFV adoption is the NFV Infrastructure's Resource Allocation (NFVI-RA) for the requested VNF-Forwarding Graph (VNF-FG). This problem, named the VNF-forwarding-graph mapping problem, is known to be NP-hard. Objective: To address the VNF-FG mapping problem, the objective is to design a solution that uses a meta-heuristic method to minimize the mapping cost. Methods: To cope with this NP-hard problem, this paper proposes an algorithm based on the Greedy Randomized Adaptive Search Procedure (GRASP), a cost-efficient meta-heuristic whose main objective is to minimize the mapping cost. Another method, named MARA (Most Available Resource Algorithm), was devised with the objective of reducing the Substrate Network's resource use at bottleneck clusters. Results: The performance evaluation is conducted on real and random network topologies to confront the proposed version of GRASP with another heuristic from the literature based on the Viterbi algorithm. The evaluations reveal the efficiency of the proposed GRASP version in reducing the mapping cost; it performs consistently well across all evaluations and metrics.
Conclusion: The problem of VNF-FG mapping is formalized, and a solution based on the GRASP meta-heuristic is proposed. A performance analysis based on simulations is given to show the behaviour and efficiency of this solution.
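GRASP's two phases, greedy-randomized construction from a restricted candidate list (RCL) followed by local search, can be sketched for a toy VNF-to-node mapping; the cost table is invented, and capacity and link constraints are omitted, so this shows only the algorithmic shape, not the paper's solution:

```python
import random

def grasp_map(vnfs, nodes, cost, rcl_size=2, iters=50, seed=1):
    """GRASP skeleton for VNF -> substrate-node mapping.
    cost[v][n] = cost of placing VNF v on node n (invented numbers)."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        # Greedy-randomized construction: pick each placement at random
        # from the rcl_size cheapest nodes (the restricted candidate list).
        sol = {v: rng.choice(sorted(nodes, key=lambda n: cost[v][n])[:rcl_size])
               for v in vnfs}
        # Local search: reassign any VNF to a strictly cheaper node.
        improved = True
        while improved:
            improved = False
            for v in vnfs:
                cheaper = min(nodes, key=lambda n: cost[v][n])
                if cost[v][cheaper] < cost[v][sol[v]]:
                    sol[v], improved = cheaper, True
        total = sum(cost[v][sol[v]] for v in vnfs)
        if total < best_cost:
            best, best_cost = dict(sol), total
    return best, best_cost

cost = {"fw": {"n1": 3.0, "n2": 1.0}, "nat": {"n1": 2.0, "n2": 5.0}}
mapping, total = grasp_map(["fw", "nat"], ["n1", "n2"], cost)
```

With capacity constraints added, per-VNF minima interact, and it is the randomized restart plus local search that lets GRASP escape the poor greedy placements a single deterministic pass would commit to.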
-
-
-
Performance Optimization of IoT Networks Within the Gateway Layer
Authors: J.K.R. Sastry, G. S. Ramya, V.M. Niharika and K.V. Sowmya. Background: IoT networks are frequently used to meet the requirements of applications related to automobiles, aerospace, etc. Performance is always an issue, as many intricacies are built into deployed IoT networks: handling heterogeneity, communication path failures, lack of bandwidth, non-availability of alternate communication paths, and more. An IoT network has many layers, each built with a specific technology and facing its own performance bottlenecks. The performance of the entire network is affected when any layer has performance issues, so the performance of an IoT network must be analyzed considering all the layers and the issues related to them. Objective: The main objective of this paper is to present how the performance of an IoT network is improved by using a specific networking topology at the gateway level. Methods: Data is received over multiple low-speed channels using different communication systems, stacked, and the results are transmitted through splitters over dual high-speed channels, reducing the transmission time and the latency at the gateway level. Results: The splitter method introduced at the gateway level improved the transmission time of the IoT network from 1519 microseconds to 1029 microseconds for 100 data packets either way. Throughput improved correspondingly, from 0.19 packets/microsecond to 0.31 packets/microsecond. Conclusion: The performance of IoT networks suffers for various reasons; it is improved at the gateway by using splitters that merge and bifurcate the communication traffic.
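The intuition behind the splitter gain can be captured by a toy parallel-channel model: fanning n packets over k equal channels cuts serial transmission time to ceil(n/k) packet slots. The numbers below are illustrative, not the paper's measurements:

```python
import math

def transmission_time_us(n_packets, per_packet_us, n_channels):
    """Toy model: packets distributed round-robin over parallel channels
    complete in ceil(n_packets / n_channels) serial packet slots."""
    return math.ceil(n_packets / n_channels) * per_packet_us

single = transmission_time_us(100, 15, 1)   # one gateway channel
dual = transmission_time_us(100, 15, 2)     # splitter over dual channels
```

The measured gain in the paper (1519 to 1029 microseconds) is smaller than this ideal halving, because the stacking and splitting work at the gateway is not free.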
-