Recent Advances in Computer Science and Communications - Volume 13, Issue 6, 2020
Vedic Arithmetic Based High Speed & Less Area MAC Unit for Computing Devices
Background: Rapid improvements in technology enable the design of high-speed devices and the development of modified computational elements for FPGA implementation. With complexity increasing day by day, there is demand for modified VLSI computational elements. Over the past decade, the improvement in basic VLSI operators such as adders and multipliers has been significant, and the basic multiplication operator has been extensively refined for FPGA implementation. Materials and Methods: This paper presents the design of a 32-bit high-speed MAC unit based on Vedic computations. Among the many sutras of Vedic mathematics, the Urdhva Tiryagbhyam sutra is used to generate the partial products in parallel. The proposed technique reduces the number of multiplication steps. Results: The results show that in the proposed MAC unit the number of steps required for multiplication and addition is reduced, which leads to a decrease in area. Compared with the existing method, the proposed MAC reduces the LUTs by 50 percent. Conclusion: This paper comprehensively describes the basic multiplication operation using the Urdhva Tiryagbhyam sutra for a parallel multiplication process. Based on the Vedic sutras, the performance of a 32-bit MAC unit was analyzed on a Spartan-3E Xilinx FPGA device. The implementation results show a reduction in critical delay and area compared to a conventional Booth-multiplier-based MAC design. Hence, this work concludes that the proposed Vedic multiplier is suitable for constructing high-speed MAC units.
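The parallel partial-product generation enabled by the Urdhva Tiryagbhyam ("vertically and crosswise") sutra can be illustrated in software. Below is a minimal Python sketch assuming a little-endian digit representation; the paper's actual design is a 32-bit hardware datapath, so this only shows why every cross product of a column can be formed concurrently:

```python
# Illustrative software sketch of the Urdhva Tiryagbhyam ("vertically and
# crosswise") pattern. The paper targets a 32-bit FPGA datapath in HDL; this
# Python version only demonstrates how all cross products of a column can be
# formed independently, which is what enables parallel partial products.

def urdhva_multiply(a_digits, b_digits, base=2):
    """Multiply two little-endian digit lists using cross products per column."""
    n = len(a_digits)
    assert len(b_digits) == n
    columns = [0] * (2 * n - 1)
    # Column k sums every cross product a_i * b_j with i + j = k.
    # In hardware these products are generated concurrently.
    for i in range(n):
        for j in range(n):
            columns[i + j] += a_digits[i] * b_digits[j]
    # Resolve carries column by column.
    result, carry = [], 0
    for c in columns:
        total = c + carry
        result.append(total % base)
        carry = total // base
    while carry:
        result.append(carry % base)
        carry //= base
    return result  # little-endian digits of the product

# 13 * 11 = 143 in binary: 1101 * 1011
digits = urdhva_multiply([1, 0, 1, 1], [1, 1, 0, 1])  # little-endian
print(int("".join(map(str, reversed(digits))), 2))    # -> 143
```

In hardware, each column's cross products map to independent AND gates feeding an adder tree, which is what shortens the multiplication steps relative to a sequential multiplier.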
An Accomplished Energy-Aware Approach for Server Load Balancing in Cloud Computing
Authors: Alekhya Orugonda and V. Kiran Kumar
Background: Minimizing bandwidth is important because it improves battery life and system reliability, and addresses other environmental and energy-optimization concerns; providers therefore do everything within their power to reduce the amount of data that flows through their pipes. To increase resource utilization, task consolidation is an effective technique, greatly enabled by virtualization technologies, which facilitate the concurrent execution of several tasks and, in turn, reduce energy consumption. Two representative heuristics are MaxUtil, which aims to maximize resource utilization, and Energy Conscious Task Consolidation, which explicitly takes into account both active and idle energy consumption. Materials and Methods: In this paper, an Energy Aware Cloud Load Balancing Technique (EACLBT) is proposed to improve performance in terms of energy and run time. It predicts the load of a host after VM allocation, and if the prediction indicates that the host would become overloaded, the VM is created on a different host instead. This minimizes the number of migrations caused by host overloading. The proposed technique results in minimized bandwidth and energy utilization. Results: The results show that the proposed energy-efficient method can monitor energy consumption and support static and dynamic system-level optimization. The EACLBT reduces the number of powered-on physical machines and the average power consumption compared to other power-saving deployment algorithms. Besides minimizing bandwidth and energy utilization, a reduction in the number of executed instructions is also achieved. Conclusion: This paper comprehensively describes the EACLBT (Energy Aware Cloud Load Balancing Technique), which deploys virtual machines for power-saving purposes. The average power consumption is used as the performance metric, and the result of PALB is used as the baseline. The EACLBT reduces the number of powered-on physical machines and the average power consumption compared to other power-saving deployment algorithms. It is shown that, on average, an idle server consumes approximately 70% of the power consumed by a server running at full CPU speed. The performance also holds for common sub-utterance elimination. We can therefore say that the proposed EACLBT is effective in minimizing bandwidth and reducing energy consumption.
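The placement decision the abstract describes can be sketched compactly: predict a host's load after allocating a VM, and only place the VM where the prediction stays below an overload threshold. The host and VM dictionaries and the 0.8 threshold below are illustrative assumptions, not values from the paper:

```python
# Minimal sketch of predict-before-place: a VM goes only to a host predicted
# not to overload, avoiding later migrations. Names and the 0.8 threshold are
# hypothetical.

OVERLOAD_THRESHOLD = 0.8  # assumed utilization ceiling

def predicted_load(host, vm):
    """Predicted CPU utilization of `host` if `vm` were allocated to it."""
    return (host["used_cpu"] + vm["cpu"]) / host["total_cpu"]

def place_vm(hosts, vm):
    """Place vm on the least-loaded host predicted not to overload."""
    for host in sorted(hosts, key=lambda h: h["used_cpu"] / h["total_cpu"]):
        if predicted_load(host, vm) <= OVERLOAD_THRESHOLD:
            host["used_cpu"] += vm["cpu"]
            return host["name"]
    return None  # no suitable host; a new machine may need powering on

hosts = [{"name": "h1", "total_cpu": 16, "used_cpu": 13},
         {"name": "h2", "total_cpu": 16, "used_cpu": 4}]
print(place_vm(hosts, {"cpu": 4}))  # -> h2 (h1 would be predicted overloaded)
```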
Optimal Adaptive Data Dissemination Protocol for VANET Road Safety Using Optimal Congestion Control Algorithm
Authors: Bhavani S. Raj and Srimathi Chandrasekaran
Introduction: A VANET is a mobile ad-hoc network (MANET) that concerns vehicle-to-vehicle and vehicle-to-infrastructure communication. Unique characteristics such as higher node speeds and restricted road topology distinguish VANETs from other MANETs. Messages between vehicles and roadside infrastructure in a VANET are exchanged over short- to medium-range wireless technologies. Dedicated Short Range Communications (DSRC) is an RF-technology-based standard designed exclusively for automotive communications. As in most MANETs, data are broadcast in VANETs through the exchange of messages between nodes. Unlike other MANETs, the limited road topology imposes a directional flow on messages. Owing to higher node speeds and unstable connectivity among nodes, it becomes important that data be transmitted in the most effective way with minimal delay. Hence, propagating data to the intended node or region of interest is a unique problem in VANETs and requires effective techniques to disseminate data. Data broadcast from a vehicle is received by all nodes within the broadcast range. The difficulty of data dissemination is hence related to propagating data beyond, not within, the transmission range of the sender. An intuitively simple way to disseminate data beyond the transmission range is flooding, in which each node that receives the message simply rebroadcasts it without regard to its current position or any other factor. Data are thus propagated beyond the transmission range when a node at the border of the broadcast range rebroadcasts the message. For effective broadcasting, each vehicle, upon receiving a message, decides whether to rebroadcast it depending on whether or not it is the farthest node in the transmission range. The decision-making ability of each vehicle regarding participation in message propagation therefore depends on its awareness of the vehicles around it and determines the overall effectiveness of the dissemination technique. Objectives: To identify an optimal cluster head based on effective parameters, so as to reduce control overhead (OH) messages in a collision-prone traffic network. Methods: The proposed system consists of two processes, namely candidate selection and control OH message reduction. Candidate selection is carried out by the Chaotic Fish Swarm Optimization (CFSO) algorithm, which comprises cluster formation and Cluster Head (CH) selection. Control OH messages are reduced by a Predictor based Decision Making (PDM) algorithm. Results: The proposed system is evaluated on performance metrics such as success rate, redundancy rate, collision rate, number of control OH messages, data propagation distance, data propagation time, and data dissemination efficiency. The results show that the proposed system performs better than the existing system. Conclusion: In this paper, we have proposed an Optimal Adaptive Data Dissemination Protocol (OAddP) for VANET road safety. The proposed OAddP mechanism uses the CFSO algorithm to perform clustering and a PDM algorithm to reduce control overhead messages.
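The farthest-node rebroadcast decision outlined in the introduction can be expressed in a few lines. The positions and range below are illustrative; the paper's CFSO clustering and PDM prediction operate on top of this basic dissemination step:

```python
# Sketch of the distance-based rebroadcast decision: a receiving vehicle
# forwards a message only if it is the farthest known neighbour within the
# sender's transmission range. Illustrative only; not the CFSO/PDM algorithms.

import math

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def should_rebroadcast(my_pos, sender_pos, neighbour_positions, tx_range):
    """Return True if this vehicle is the farthest in-range node from the sender."""
    my_dist = distance(my_pos, sender_pos)
    if my_dist > tx_range:
        return False  # message not actually receivable
    in_range = [distance(p, sender_pos) for p in neighbour_positions
                if distance(p, sender_pos) <= tx_range]
    return all(my_dist >= d for d in in_range)

# Vehicle at (240, 0) hears a sender at (0, 0) with 300 m range and knows of
# neighbours at (100, 0) and (180, 50): it is farthest, so it rebroadcasts.
print(should_rebroadcast((240, 0), (0, 0), [(100, 0), (180, 50)], 300))  # True
```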
A Novel Encryption of Text Messages Using Two Fold Approach
Authors: T. Sivakumar, S. Veeramani, M. Pandi and G. Gopal
Background: The amount of digital data created and shared via the internet increases every day. The number of security threats has also increased due to vulnerabilities in network hardware and software. Cryptography is the practice and study of techniques to secure communication in the presence of third parties. Though several cryptosystems exist to secure information, new methods are needed to protect information from attackers. Objective: To propose a new encryption method using Binary Tree Traversal (BTT) and the XOR operation to protect text messages. Methods: The proposed method uses both transposition and substitution techniques to convert plaintext into ciphertext. Binary tree traversal is adopted for transposition, and the bitwise XOR operation is used for substitution. Results: Repeated letters in the plaintext are replaced with different cipher letters and placed at different locations in the ciphertext. Hence, it is infeasible to identify the plaintext message easily. The encryption time of the proposed method is very low. Conclusion: A simple encryption method using binary tree traversals and the XOR operation is developed. Encrypting data using binary tree traversals differs from traditional encryption methods. The proposed method is fast and secure, and can be used to encrypt short messages in real-time applications.
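The two-fold idea can be sketched under one plausible instantiation: a level-order fill with in-order read-out for the tree-traversal transposition, and a repeating single-byte XOR key for the substitution. The abstract does not fix these choices, so both are assumptions:

```python
# Minimal sketch of the two-fold scheme: binary-tree-traversal transposition
# followed by XOR substitution. The traversal pair and the key schedule below
# are illustrative assumptions, not necessarily the paper's exact design.

def tree_transpose(text):
    """Fill a complete binary tree in level order, read it back in order."""
    n = len(text)
    out = []
    def inorder(i):
        if i >= n:
            return
        inorder(2 * i + 1)       # left child
        out.append(text[i])      # visit node
        inorder(2 * i + 2)       # right child
    inorder(0)
    return "".join(out)

def xor_bytes(text, key):
    """Repeating-key XOR substitution over the transposed text."""
    return bytes(ord(c) ^ key[i % len(key)] for i, c in enumerate(text))

def encrypt(plaintext, key=b"\x5a"):
    return xor_bytes(tree_transpose(plaintext), key)

ct = encrypt("ATTACKATDAWN")
print(ct.hex())  # repeated plaintext letters land at scattered positions
```

Decryption applies the inverse steps: XOR with the same key, then the inverse of the traversal permutation.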
State-of-the-Art: A Systematic Literature Review of Image Segmentation in Latent Fingerprint Forensics
Authors: Megha Chhabra, Manoj K. Shukla and Kiran Kumar Ravulakollu
Latent fingerprints are unintentional finger-skin impressions left as invisible ridge patterns at crime scenes or on objects. A major challenge in latent fingerprint forensics is the poor quality of the image lifted from the crime scene by investigators. Forensic investigators are in permanent search of novel breakthroughs in effective technologies to capture and process such low-quality images. The accuracy of recognition often depends upon 1) the quality of the image captured at the outset, 2) the metrics used to assess that quality, and 3) the level of enhancement required. Low-performance scanners, unstructured background noise, poor ridge quality, overlapping structured noise, and similar factors are common reasons for poor image quality. Insufficient image quality results in the detection of false minutiae and hence reduces the recognition rate. Traditionally, image segmentation and enhancement are carried out manually by highly skilled experts. The use of an automated system is challenging and can only be effective if a significant amount of time is saved. This survey provides a comparative study of the various segmentation techniques available for latent fingerprint forensics.
Analysis and Synthesis of A Human Prakriti Identification System Based on Soft Computing Techniques
Authors: Vishu Madaan and Anjali Goyal
Background: Research on the side effects of modern medicines motivates us to bring Ayurveda back into our modern lifestyle. Allopathic medicines are artificially created, and their chemicals are designed in such a way that they only cure the problem on the surface. This paper discusses how we can retain our health for a longer time. Objective: To build a trained, intelligent decision-making system that can categorize any healthy or unhealthy human being into a suitable category of human prakriti dosha. Methods: The proposed adaptive neuro-fuzzy inference system (ANFIS) is trained using a hybrid learning technique. The grid partitioning method is used for the membership functions. The 28 parameters that identify human prakriti are reduced to 7 effective components to maximize the accuracy of the results. The system is trained with data from 346 healthy individuals to avoid bias in the results. Results: The resulting system can tell any individual his or her prakriti dosha; based on its output, one can make lifestyle changes to avoid the effects of future diseases. The system achieves 94.23% accuracy in identifying prakriti dosha. Conclusion: The ANFIS system trained with 346 individuals has shown improved performance. The consideration of 28 input parameters has enhanced the robustness of the system aimed at identifying human prakriti dosha.
Anuvaadika: Implementation of Sanskrit to Hindi Translation Tool Using Rule-Based Approach
Authors: Prateek Agrawal and Leena Jain
Background: Sanskrit is claimed to be the second oldest language in the world, and in ancient times it was the mother tongue across a large part of India. Now, however, it is struggling for acceptance among common people. Objective: The objective of this work is to develop algorithms for stemming Sanskrit words and to implement semantic analysis, discourse integration, and pragmatic analysis. Another objective is to implement a translation tool able to translate Sanskrit text into Hindi. Methods: A rule-based method is used to prepare the corpora and implement the proposed work. Results: An interface is made available through which the step-by-step translation can be seen and understood. Conclusion: This tool will be helpful to those who are familiar with Hindi but unable to learn Sanskrit due to the scarcity of language experts. More than 60 million people from India and abroad who are active Hindi users and work on computers can connect with Sanskrit and learn its fundamentals on their own.
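The stemming step can be illustrated as longest-suffix stripping against a rule table. The romanized suffixes below are hypothetical placeholders; the paper's actual Sanskrit rule base is far richer:

```python
# Shape of a rule-based stemmer of the kind the abstract describes: strip the
# longest matching inflectional suffix from a rule table. The table is a
# hypothetical, heavily simplified stand-in for the paper's rules.

# hypothetical (romanized) suffix -> grammatical note
SUFFIX_RULES = {
    "asya": "genitive singular",
    "ena":  "instrumental singular",
    "aya":  "dative singular",
    "am":   "accusative singular",
}

def stem(word, rules=SUFFIX_RULES):
    """Strip the longest matching suffix; return (stem, grammatical note)."""
    for suffix in sorted(rules, key=len, reverse=True):
        if word.endswith(suffix) and len(word) > len(suffix):
            return word[: -len(suffix)], rules[suffix]
    return word, "no rule matched"

print(stem("ramasya"))  # -> ('ram', 'genitive singular')
```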
Design of Dynamic Morphological Analyser for Hindi Nouns Using Rule Based Approach
Authors: Ishan Kumar, Renu Dhir, Gurpreet S. Lehal and Sanjeev Kumar Sharma
Background: A grammar checker can be used as a proofreading tool, and it depends upon the basic definitions of words. If words are not defined correctly or are not tagged with the correct grammatical meaning, the results will not be accurate. Objective: To attain this accuracy, a morphological analyser plays a crucial role. In Hindi, the whole structure and meaning of a sentence depend upon the noun, so it is mandatory to tag noun words properly. Tagging a noun with its correct grammatical meaning, however, is a challenging task. Methods: To tag a word, the word is input to the tool and first searched in the dictionary. If the word is not found in the dictionary, grammar rules are applied to analyse it. Since nouns also include proper names, the rules sometimes cannot be applied; in that scenario, words are manually tagged and then added to the dictionary for further use. A grammar tag set of 650 tags is used to generate more accurate results. All the words are stored in a database. The performance is measured using precision and recall. Results: This paper presents a rule-based morphological analyser for Hindi nouns. It utilizes a dictionary and a rule-based approach for defining words with their grammatical meanings. The designed morphological analyser discussed in this work stores all the words in a database. As it uses a set of more than 650 grammatical tags (for a complete Hindi morphological analyser), the user will always get more accurate results. The authors have preferred time and accuracy over memory space, which is not a big issue these days; therefore, this approach can be used for both types of morphological analysis. Conclusion: This method can be extended to the other categories of Hindi grammar, such as adverbs, adjectives, and verbs. The results are very promising and are expected to bring further advancement to existing strategies and methodologies.
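The dictionary-then-rules-then-manual pipeline from the Methods section can be sketched directly. The sample entries, suffix rules, and tag strings below are illustrative stand-ins for the paper's 650-tag set:

```python
# Minimal sketch of the lookup pipeline the abstract outlines: dictionary
# first, suffix rules second, manual-tagging queue last (after which the word
# joins the dictionary). Entries, rules, and tags are hypothetical.

DICTIONARY = {"ladka": "NN.masc.sg"}                   # hypothetical stored entries
RULES = [("iyan", "NN.fem.pl"), ("on", "NN.obl.pl")]   # hypothetical suffix rules
MANUAL_QUEUE = []

def tag_noun(word):
    if word in DICTIONARY:                    # step 1: dictionary lookup
        return DICTIONARY[word]
    for suffix, tag in RULES:                 # step 2: grammar rules
        if word.endswith(suffix):
            DICTIONARY[word] = tag            # cache for future lookups
            return tag
    MANUAL_QUEUE.append(word)                 # step 3: manual tagging needed
    return None

print(tag_noun("ladka"))     # from dictionary
print(tag_noun("ladkiyan"))  # via suffix rule, then cached
```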
SDRFHBLoc- A Secure Framework for Localization in Wireless Sensor Networks
Authors: Deepak Prashar, Kiran Jyoti and Dilip Kumar
Background: The deployment of nodes and their security remain the main concerns in Wireless Sensor Networks (WSNs). For position-centric applications, location estimation is the major requirement, and this has led to the emergence of various localization techniques. There are broadly two types of localization methods: range-based and range-free. Along with position estimation, the security of the position computation process is a prime concern, as wrong position estimates caused by adversaries in the environment may compromise the whole process. Some localization systems estimate the position coordinates of the nodes, but none of them addresses the effect of adversaries on the localization process. Methods: We develop and analyze a new framework, SDRFHBLoc (Secure Distributed Range Free Hop Based Localization), that supports the deployment of nodes based on different aspects such as the number of nodes, the number of malicious nodes, the range, and the deployment area. Security features are integrated into the system by authenticating the nodes participating in the localization process using signature generation and verification. Results: The proposed security framework removes faulty nodes from the network before they participate in the localization process, through the improved DV-Hop approach implemented in the system. In addition, the optimization of the entire process provides the best and most precise position estimation using the Particle Swarm Optimization (PSO) module. Conclusion: The proposed framework is efficient in mitigating the risk of any kind of attack on the localization process and can also be customized as per the requirements of the algorithms.
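The DV-Hop distance estimate underlying the improved localization can be stated in a few lines: each anchor converts hop counts into metres using its average hop size, and an unknown node multiplies that by its own hop count. The coordinates and hop counts below are illustrative, and the signature-based authentication and PSO refinement are omitted:

```python
# Sketch of the standard DV-Hop distance estimate that the improved
# DV-Hop / PSO pipeline builds on.

import math

def avg_hop_size(anchor, other_anchors, hops_to):
    """Average one-hop distance seen by `anchor` (DV-Hop correction factor)."""
    total_dist = sum(math.dist(anchor, a) for a in other_anchors)
    total_hops = sum(hops_to[a] for a in other_anchors)
    return total_dist / total_hops

anchor = (0.0, 0.0)
others = [(90.0, 0.0), (0.0, 120.0)]
hops_to = {(90.0, 0.0): 3, (0.0, 120.0): 4}   # min hop counts between anchors

hop_size = avg_hop_size(anchor, others, hops_to)   # (90 + 120) / (3 + 4) = 30 m
node_hops = 2                                      # unknown node is 2 hops away
print(hop_size * node_hops)                        # estimated distance: 60 m
```

With such distance estimates to three or more anchors, the node's position is then refined, in the paper's framework, by the PSO module.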
Experimental and Comparison Based Study on Diabetes Prediction Using Artificial Neural Network
Authors: Nitesh Pradhan, Vijaypal S. Dhaka and Satish C. Kulhari
Background: Diabetes is spreading across the entire world. Surveys observe that people of every generation, from children to the elderly, suffer from diabetes. If diabetes is not identified in time, it may lead to deadly complications. Prediction of diabetes is a highly challenging task for machines. In the human body, diabetes is one of the perilous maladies, creating dependent diseases such as kidney disease, heart attack, and blindness; it is therefore very important to diagnose diabetes in time. Objective: Our target is to develop a system using an Artificial Neural Network (ANN) with the ability to predict whether a patient suffers from diabetes or not. Methods: This paper reviews various machine learning techniques, such as Support Vector Machine, Naïve Bayes, K Nearest Neighbor, Decision Tree, and Random Forest, and applies an ANN to predict diabetes. The ANN architecture consists of four hidden layers of six neurons each and one output layer with one neuron; the optimizer used for the architecture is Adam. Results: We use the Pima Indian diabetes dataset, which contains a sufficient number of patients, with nine attributes per patient covering the symptoms and the features used in the mathematical computation/prediction. We split the dataset into training and testing sets in an 80:20 ratio, so the majority of the patients' data is used for training and the minority for testing. We trained our network for multiple epochs with different activation functions, using four hidden layers with six neurons each and one output layer. On the hidden layers, we used multiple activation functions such as sigmoid and ReLU, and obtained the best accuracy (88.71%) in 600 epochs with the ReLU activation function. On the output layer, we used only the sigmoid activation function because our dataset has only two classes. Conclusion: Diabetes prediction by machine is a challenging task, and many machine learning algorithms exist for it, such as Naïve Bayes, decision tree, K nearest neighbor, and support vector machine. This paper presents a novel approach to predicting whether a patient has diabetes based on the Pima Indian diabetes dataset. We used an artificial neural network to train our model and observed that the artificial neural network approach performs better than all the other classifiers.
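The abstract specifies the network fully enough to reconstruct it. A sketch using tf.keras follows; the dataset file name and batch size are assumptions:

```python
# Reconstruction of the network described in the abstract: four hidden layers
# of six neurons (ReLU), one sigmoid output neuron, the Adam optimizer, and an
# 80:20 train/test split of the Pima Indians dataset (8 input features plus
# the outcome column). File name and batch size are assumed.

import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow import keras

data = np.loadtxt("pima-indians-diabetes.csv", delimiter=",")  # assumed local file
X, y = data[:, :8], data[:, 8]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(6, activation="relu"),
    keras.layers.Dense(6, activation="relu"),
    keras.layers.Dense(6, activation="relu"),
    keras.layers.Dense(6, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # two-class output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=600, batch_size=32, verbose=0)
print(model.evaluate(X_test, y_test, verbose=0))  # [loss, accuracy]
```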
Moving Object Detection and Recognition Using Optical Flow and Eigen Face Using Low Resolution Video
Authors: Prateek Agrawal, Ranjit Kaur, Vishu Madaan, M. S. Babu and Dimple Sethi
Background: As crime increases day by day, various applications have been proposed to protect public places. Monitoring and tracking with a video surveillance system is a difficult task, and it is evident that human beings are neither reliable nor efficient at this job. Objective: The prime objective of this research is to develop an automatic monitoring and inspection system competent enough to detect and track moving objects in real time using a low-resolution video surveillance camera. Methods: First, the video data acquired from a low-resolution surveillance camera is used to generate RGB video frames, which are converted into grayscale. Optical flow and eigenface algorithms are then applied to extract the moving object in the video sequence and match it against the images stored in the database. Results: The proposed system is compared with existing systems, and it is observed that this approach gives more accurate results. The system meets the requirements of real-time tracking even when the target image resolution is smaller than 160x120. Conclusion: This method uses optical flow and the eigenface algorithm to track and detect moving objects. The system gives high performance and can be used for real-time object tracking. The same experiment can also be applied to human faces.
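The detection stage can be sketched with OpenCV's dense (Farneback) optical flow; the camera index and motion threshold below are assumptions, and the cropped moving region would then be projected onto the stored eigenfaces (PCA) for matching:

```python
# Sketch of the detection stage: dense optical flow between consecutive
# grayscale frames flags moving pixels, whose bounding box would feed the
# eigenface (PCA) matching step. Thresholds and camera index are illustrative.

import cv2
import numpy as np

cap = cv2.VideoCapture(0)                      # low-resolution camera assumed
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    moving = magnitude > 2.0                   # assumed motion threshold
    if moving.any():
        ys, xs = np.nonzero(moving)
        # bounding box of the moving object; the crop feeds the eigenface step
        print("motion at", xs.min(), ys.min(), xs.max(), ys.max())
    prev_gray = gray
```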
An Optimal Framework for Spatial Query Optimization Using Hadoop in Big Data Analytics
Authors: Pankaj Dadheech, Dinesh Goyal, Sumit Srivastava and Ankit Kumar
Background and Objective: Spatial queries are frequently used in Hadoop for large-scale data processing. However, the vast size of spatial information makes it difficult to process spatial queries proficiently, so the Hadoop system is utilized to process the Big Data. Boolean queries and geometry Boolean spatial data are used for query optimization in the Hadoop system. This paper discusses a lightweight and adaptable spatial data index for big data that can be processed in Hadoop frameworks. The results demonstrate the efficiency and adequacy of the spatial indexing system for various spatial queries. Methods: Different approaches are combined to develop an efficient system: an efficient and scalable method for processing top-k spatial Boolean queries, efficient query processing in geographic web search engines (which combines text and spatial data processing techniques), and top-k spatial preference queries. In this work, all these methods are implemented for comparative analysis. Results and Discussion: Execution of the algorithms yields results that show the difference in performance over different data types. Three graphs are presented, based on different data inputs, indexing schemes, and data types. The results show that as the number of rows to be executed increases, the performance of geohash decreases, while the crucial point of change in execution performance is not visible due to a sudden hike in the number of rows returned. Conclusion: Query processing in geographic web search engines is discussed. This work presents a general framework for ranking search results based on a combination of textual and spatial criteria, and proposes several algorithms for efficiently executing ranked queries on very large collections. The proposed algorithms were integrated into an existing high-performance search engine query processor and evaluated on a large data set with realistic geographic queries. The results show that, in many cases, geographic query processing can be performed at about the same level of efficiency as text-only queries.
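The combined textual-spatial ranking described in the conclusion can be sketched as a weighted score with top-k selection. The weight alpha, the score functions, and the data are illustrative; the paper's geohash indexing accelerates the spatial side at scale:

```python
# Sketch of combined text + spatial ranking: each document gets a weighted
# sum of a textual relevance score and a spatial proximity score, and the
# top-k results are returned. Weights and scores are illustrative.

import heapq
import math

def spatial_score(doc_pos, query_pos, max_dist=1000.0):
    d = math.dist(doc_pos, query_pos)
    return max(0.0, 1.0 - d / max_dist)       # 1 at the query point, 0 far away

def top_k(docs, query_pos, k=3, alpha=0.6):
    """Rank by alpha * text score + (1 - alpha) * spatial score."""
    scored = [(alpha * d["text_score"] +
               (1 - alpha) * spatial_score(d["pos"], query_pos), d["id"])
              for d in docs]
    return heapq.nlargest(k, scored)

docs = [{"id": "a", "text_score": 0.9, "pos": (900.0, 0.0)},
        {"id": "b", "text_score": 0.6, "pos": (50.0, 40.0)},
        {"id": "c", "text_score": 0.2, "pos": (10.0, 10.0)}]
print(top_k(docs, query_pos=(0.0, 0.0), k=2))  # nearby-but-relevant doc wins
```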
Performance Analysis of Kalman Filter in Computed Tomography Thorax for Image Denoising
Authors: Manoj Gupta, J. Lechner and Basant Agarwal
Medical image processing is a very important field of study due to its large number of applications in human life. For the diagnosis of a disease, several methods of medical image acquisition are possible, such as Ultrasound (US), Magnetic Resonance Imaging (MRI), or Computed Tomography (CT). Depending upon the type of image acquisition, different types of noise can occur. Background: The most common types of noise in medical images are Gaussian noise, speckle noise, Poisson noise, Rician noise, and salt & pepper noise. The related noise models and distributions are described in this paper. We compare several filtering methods for denoising these types of noise. Objective: The main purpose of this paper is to compare well-known filtering methods, such as the arithmetic mean, median, and enhanced Lee filters, with rarely used methods like the Kalman filter, as well as with relatively new methods like the Non-Local Means (NLM) filter. Methods: To compare these filtering methods, we use parameters such as Root Mean Square Error (RMSE), Peak Signal to Noise Ratio (PSNR), Mean Structural Similarity (MSSIM), Edge Preservation Index (EPI), and the Universal Image Quality Index (UIQI). Results: The processed images are shown for a specific noise density and noise variance. We show that the Kalman filter performs better than the mean, median, and enhanced Lee filters for removing Gaussian, speckle, Poisson, and Rician noise. Conclusion: Experimental results show that the Kalman filter provides better results than the other methods. It could also be a good alternative to the NLM filter due to almost equal results at lower computation time.
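Two of the comparison metrics, RMSE and PSNR, are standard and can be stated precisely. A sketch for 8-bit images follows; lower RMSE and higher PSNR mean the filtered image is closer to the noise-free reference:

```python
# Standard RMSE and PSNR metrics for 8-bit images, as used to compare the
# filters in the paper. The synthetic image and noise level are illustrative.

import numpy as np

def rmse(reference, filtered):
    return np.sqrt(np.mean((reference.astype(float) - filtered.astype(float)) ** 2))

def psnr(reference, filtered, peak=255.0):
    e = rmse(reference, filtered)
    return float("inf") if e == 0 else 20 * np.log10(peak / e)

ref = np.full((64, 64), 128, dtype=np.uint8)
noisy = ref + np.random.normal(0, 10, ref.shape).astype(np.int16)
noisy = np.clip(noisy, 0, 255).astype(np.uint8)
print(rmse(ref, noisy), psnr(ref, noisy))  # roughly 10 and about 28 dB for sigma = 10
```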
Burrows Wheeler Transform and Wavelet Tree Based Retrieval of Genome Sequence in an Indexed Genome Database
Authors: Sanjeev Kumar, Suneeta Agarwal and Ranvijay
Background: New-generation sequencing machines such as Illumina and Solexa can generate millions of reads from a given genome sequence in a single run. There is a need for a suitable data structure, efficient with respect to both memory and time, to align these enormous numbers of reads to a reference genome. A number of indexing and read-alignment techniques exist, such as MAQ, Bowtie, BWA, BWBBLE, and Kart; the memory-efficient versions of these techniques are 10-20% slower than their respective normal versions. Objective: A new approach for efficient indexing and retrieval of large genomic data. Methods: In this paper, we propose an efficient method based on the Burrows Wheeler Transform and Wavelet Tree (BWIT) for genome sequence indexing and read alignment. Both exact and approximate alignments are possible with the proposed approach. Results: The performance of BWIT is experimentally found to be better than existing approaches with respect to both memory and speed, and the proposed approach performs best in the case of protein sequence indexing. All existing read-alignment approaches depend upon the size of the index used; in general, the time required increases as the index size is reduced. Experiments have been performed with Bowtie, BWA, and Kart using index sizes of 1.25N, 1.05N, and 0.98N, where N is the size of the text (reference genome). Our BWIT index size is 0.6N, smaller than the index sizes used in all the other approaches, and it is observed that even with this smallest index size, the alignment time of our approach is the lowest. Conclusion: An innovative indexing technique is presented to address the problem of storage, transmission, and retrieval of large DNA/protein sequence data.
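The first building block, the Burrows-Wheeler Transform itself, can be shown with the textbook rotation-sort construction; a practical index (including, presumably, the paper's) would build it via a suffix array and answer rank queries over it with a wavelet tree:

```python
# Textbook construction of the Burrows-Wheeler Transform, the basis of the
# proposed BWIT index. This naive O(n^2 log n) rotation sort is only for
# illustrating the transform; real indexes use suffix arrays plus a wavelet
# tree over the BWT for fast rank queries during alignment.

def bwt(text, sentinel="$"):
    """Return the BWT string: last column of the sorted rotation matrix."""
    s = text + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(row[-1] for row in rotations)

print(bwt("ACGTACGT"))  # -> TT$AACCGG: equal characters cluster together
```

The clustering of equal characters is what makes the transform compressible and, via rank queries, searchable in backward order.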
Chaos-Based Controlled System Using Discrete Map
Authors: Anup K. Das and Mrinal Kanti Mandal
Background: The design of an efficient and fast controller for controlling process parameters is always a challenging task for the control-system designer. The main objective of this article is to design a secure chaos-based controller by synchronizing two chaotic systems. The initial values of the chaotic systems are taken as the set value and the initial process value of the physical parameter to be controlled. Methods: The proposed controller is designed by synchronizing two-dimensional chaotic Henon maps through a nonlinear control method. One map is taken as the driver system, and its initial value is the set value of a specific process of a given system. The other, identical map is taken as the driven system, and its initial value is the initial process value of the given process-control system. Both chaotic maps become synchronized via a nonlinear control law. The error is accumulated until synchronization is achieved and is then converted into a suitable signal to operate the final control element, which raises or lowers the initial process value towards the set value. This self-repetitive process achieves control of the process parameter. Results: In experiments, we observed that the error signal becomes zero after a small time interval (in simulation it takes only a few iterations) and that the accumulated error settles at a steady value. This error is responsible for maintaining the process value at the set value. The entire process has been implemented in hardware using the ATmega16 microcontroller and also in the Proteus simulation software. Conclusion: The controller is very fast because the nonlinear control law for synchronization converges very quickly. Since the controller is designed in the chaotic regime, it is also secure.
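The synchronization idea can be sketched with two Henon maps and a simple nonlinear (dead-beat) control law that cancels the error dynamics. This particular law is an illustrative choice, not necessarily the paper's; the accumulated error before convergence is what drives the final control element:

```python
# Two Henon maps (x' = 1 - a*x^2 + y, y' = b*x) synchronized by a nonlinear
# control law. The driver runs freely; the driven map receives inputs u1, u2
# chosen so the error dynamics vanish, making the error zero within one step.

A, B = 1.4, 0.3                      # classic chaotic Henon parameters

def henon(x, y, u1=0.0, u2=0.0):
    return 1 - A * x * x + y + u1, B * x + u2

x1, y1 = 0.1, 0.0                    # driver: set value of the process
x2, y2 = 0.5, 0.2                    # driven: initial process value
accumulated_error = 0.0

for step in range(10):
    ex, ey = x2 - x1, y2 - y1
    accumulated_error += abs(ex) + abs(ey)
    u1 = A * (x2 * x2 - x1 * x1) - ey   # cancels the nonlinear error term
    u2 = -B * ex                        # cancels the linear error term
    x1, y1 = henon(x1, y1)
    x2, y2 = henon(x2, y2, u1, u2)
    print(step, abs(x2 - x1), abs(y2 - y1))  # errors drop to 0 after one step
```

Substituting the control inputs into the error recursion gives e_x' = 0 and e_y' = 0, which is why the accumulated error settles at a fixed value so quickly.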
Numerical Studies of Blood Flow in Left Coronary Model
Authors: Rupali Pandey, Manoj Kumar and Vivek K. Srivastav
Introduction: Artery blockage is the most prevalent cause of Coronary Artery Disease (CAD). The presence of a blockage inside the artery interrupts the blood supply to the rest of the body and can therefore cause a heart attack. Objectives: Two different three-dimensional models, namely normal and 50% plaque, are used for the numerical studies. Five inlet velocities (0.10, 0.20, 0.50, 0.70, and 0.80 m/s) are considered, corresponding to different blood flow conditions, to study the effect of velocity on the human heart. Methods: A Finite Volume Method (FVM) based Computational Fluid Dynamics (CFD) technique is employed for the numerical simulation of blood flow. Hemodynamic factors are computed and compared for the two geometrical models (normal vs. blockage). Results: The Area Average Wall Shear Stress (AAWSS) ranges from 4.1-33.6 Pa at the face of the Left Anterior Descending (LAD) part of the Left Coronary Artery (LCA) for the constricted artery. Conclusion: The predominantly low WSS, relative to the normal artery, affirms the existence of plaque. From the medical point of view, this can serve as an excellent factor for the early diagnosis of CAD, and thus help curb the increasing incidence of Myocardial Infarction (MI). In future research, we will adopt unsteady flow with both rigid and elastic arterial walls.
Optimized Overcurrent Relay Coordination in a Microgrid System
Authors: Odiyur V.G. Swathika and Udayanga Hemapala
Background: Microgrids are a conglomeration of loads and distributed generators in a distribution-level network. Since this network is no longer fed from a single source, typical protection strategies cannot be deployed directly. Reconfiguration, a topology-changing feature of microgrids, is another factor to be considered while protecting the microgrid setup. Objective: To develop an optimized overcurrent relay coordination scheme for microgrid networks. Methods: In order to devise a suitable overcurrent protection scheme for microgrids, the normal and fault currents are first captured for all topologies of the microgrid. For each topology, the optimized time multiplier settings of the overcurrent relays are computed using the Dual Simplex Algorithm. This aids in clearing faults from the network as fast as possible. Results: A 21-bus microgrid system is considered, and the optimized overcurrent relay coordination scheme is realized for it. Conclusion: The proposed optimized overcurrent relay coordination was tested successfully on the 21-bus microgrid system. The proposed protection scheme was capable of identifying the optimized Time Multiplier Setting values of the overcurrent relays in the path of fault clearance. It is evident that the proposed scheme can be conveniently extended to larger networks.
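Because the fault currents are fixed for a given topology, each relay's IEC standard-inverse operating time t = TMS * 0.14 / (PSM^0.02 - 1) is linear in its Time Multiplier Setting, so the coordination problem becomes a linear program. A toy two-relay sketch using SciPy's dual simplex solver follows; the plug-setting multipliers, TMS bounds, and 0.3 s coordination interval are illustrative, not values from the 21-bus study:

```python
# Toy version of the optimization step: minimize total tripping time over the
# relays' Time Multiplier Settings, subject to the backup relay operating at
# least a coordination time interval (CTI) after the primary relay.

from scipy.optimize import linprog

def time_coeff(psm):
    """Seconds of operating time per unit TMS (IEC standard inverse curve)."""
    return 0.14 / (psm ** 0.02 - 1)

k_primary = time_coeff(psm=10.0)   # relay nearest the fault
k_backup = time_coeff(psm=5.0)     # upstream backup relay
CTI = 0.3                          # coordination time interval, seconds

# Variables: [TMS_primary, TMS_backup]; minimize total operating time.
c = [k_primary, k_backup]
# Coordination: k_backup*TMS_b - k_primary*TMS_p >= CTI, written in <= form.
A_ub = [[k_primary, -k_backup]]
b_ub = [-CTI]
bounds = [(0.025, 1.2)] * 2        # typical TMS range

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs-ds")
print(res.x)  # optimized TMS values clearing the fault as fast as allowed
```

The solver drives the primary relay's TMS to its lower bound and gives the backup the smallest TMS that still respects the CTI, which is exactly the fast-clearing behaviour the scheme targets.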