Recent Advances in Computer Science and Communications - Volume 14, Issue 2, 2021
Multi-Level Image Segmentation of Color Images Using Opposition Based Improved Firefly Algorithm
Authors: Abhay Sharma, Rekha Chaturvedi, Umesh Dwivedi and Sandeep Kumar
Background: Image segmentation is a fundamental step in image processing. Numerous segmentation algorithms have already been proposed for grey-scale images, but they are complex and time-consuming, and most suffer from over- and under-segmentation. Multi-level image thresholding is an effective remedy for this problem, and nature-inspired meta-heuristics such as the firefly algorithm are fast and can enhance its performance.
Objective: This paper presents a modified firefly algorithm and applies it to multilevel thresholding of color images. Opposition-based learning is incorporated into the firefly algorithm to improve its convergence rate and robustness, and the between-class variance method of thresholding is used to formulate the objective function.
Methods: Numerous benchmark images were tested to evaluate the performance of the proposed method.
Results: The experimental results validate the performance of the Opposition Based Improved Firefly Algorithm (OBIFA) for multi-level image segmentation, measured by Peak Signal to Noise Ratio (PSNR) and Structural Similarity Index (SSIM).
Conclusion: The OBIFA algorithm is well suited to multilevel image thresholding. It outperforms Darwinian Particle Swarm Optimization (DPSO) and Electromagnetism Optimization (EMO) in convergence speed, PSNR, and SSIM.
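The two ingredients named above are standard and can be sketched compactly. The following is a minimal illustration (not the authors' code; parameter names and bounds are illustrative) of the between-class variance objective and of opposition-based initialization, where the opposite of a candidate x in [lo, hi] is lo + hi - x:

```python
import numpy as np

def between_class_variance(hist, thresholds):
    """Otsu-style objective: sum over classes of w_k * (mu_k - mu_total)^2."""
    p = hist / hist.sum()                      # normalized grey-level histogram
    levels = np.arange(len(p))
    bounds = [0] + sorted(int(t) for t in thresholds) + [len(p)]
    mu_total = (p * levels).sum()
    sigma_b = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()                     # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            sigma_b += w * (mu - mu_total) ** 2
    return sigma_b                             # maximized over threshold vectors

def opposition_init(n_fireflies, n_thresholds, lo=1, hi=254, rng=None):
    """Opposition-based learning: pair each random firefly with its opposite."""
    rng = rng or np.random.default_rng()
    pop = rng.uniform(lo, hi, size=(n_fireflies, n_thresholds))
    return np.vstack([pop, lo + hi - pop])     # caller keeps the fitter half
```

A full OBIFA loop would score both halves of the population with the objective (per channel for a color image), keep the best half, and then apply the usual firefly attraction step.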
Analysis of High-Efficiency Transformerless Inverter for a Grid-Tied PV Power System with Reactive Power Control
Authors: Selvamathi Ramachandran and Indragandhi Vairavasundaram
Background: Grid-tied PV power systems now play a vital role in the overall energy system. In a grid-tied PV system, Transformerless Inverter (TI) topologies are preferred for their reduced price, improved efficiency, and light weight, and many transformerless topologies have been proposed and verified, but with real power injection only. Recently, almost every international standard has required that a grid-tied PV inverter handle a specified amount of reactive power; according to the standard VDE-AR-N4105, a grid-tied PV inverter rated below 3.68 kVA should attain a Power Factor (PF) from 0.95 leading to 0.95 lagging.
Objective: To address reactive power control in a grid-tied PV system, this paper proposes a Fuzzy Gain Scheduling (FGS) controller as the power controller for a High Efficiency Transformerless (HETL) inverter. The performance of the proposed scheme is analyzed and validated by comparison with conventional PI-controller-based active and reactive power controllers.
Methods: To improve system performance, the FGS controller is applied to active and reactive power control. The gains of a conventional PI controller are constant for any value of the error, which introduces error and delay in reaching the optimum voltage values (Vα and Vβ). In this analysis, the FGS controller therefore tunes the PI controller gains according to the change in active and reactive power error.
Results: The comparative performance of the PI- and FGS-based HETL inverters is presented. Whether Pref is constant or variable, the FGS controller produces less ripple than the PI-based HETL inverter system; compared with the constant-reference case, the variable-reference case produces more ripple in both systems.
Conclusion: The analysis shows that the FGS-based HETL inverter in a grid-tied PV power system produces the best performance in all respects: voltage, active power, and reactive power.
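As a rough illustration of the gain-scheduling idea (the rule base, gain ranges, and normalization below are assumptions, not the paper's design), fuzzy memberships of the power error can blend between low and high PI gains:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fgs_pi_gains(power_error, kp_range=(0.5, 2.0), ki_range=(5.0, 50.0)):
    """Weighted-average blend of 'small'/'large' error memberships into PI
    gains; power_error is assumed normalized to [0, 1]."""
    e = min(max(power_error, 0.0), 1.0)
    mu_small = tri(e, -1.0, 0.0, 1.0)   # degree to which the error is small
    mu_large = tri(e, 0.0, 1.0, 2.0)    # degree to which the error is large
    w = mu_small + mu_large
    # Assumed rules: small error -> low gains (less ripple near the setpoint);
    # large error -> high gains (faster correction).
    kp = (mu_small * kp_range[0] + mu_large * kp_range[1]) / w
    ki = (mu_small * ki_range[0] + mu_large * ki_range[1]) / w
    return kp, ki
```

In a real controller this would run each control cycle for both the active and the reactive power loops, replacing the fixed gains of the conventional PI controller.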
An Empirical Comparison of t-GSC and ACO_TCSP Applied to Time Bound Test Selection
Authors: Nishtha Jatana and Bharti Suri
Background: Test case selection is a heavily researched and inevitable software testing activity, and various optimization approaches are used to solve the test selection and prioritization problem; greedy and search-based techniques have both been applied to test case selection. A greedy approach to the set cover problem is proposed to find a minimal test suite that detects the maximum number of faults, while the search-based approximation technique Ant Colony Optimization (ACO) reduces and prioritizes the test suite to form an optimized one. Time-bound test case prioritization is an NP-complete problem. Mutation testing is used to seed known faults into the programs under test so that effective test cases can be generated.
Objective: To empirically evaluate the performance of a greedy approach and a search-based approach for test case selection.
Methods: This paper compares a search-based approximation approach against a time-sensitive greedy approach for test suite optimization using mutation testing. The proposed greedy approach for yielding an optimized test suite within a time-bound environment is implemented in Python. The two techniques were also empirically compared on 24 programs in different languages (C, C#, Java, and Python).
Results: The greedy approach has better complexity than the ACO approach, as validated experimentally using the actual run times of the algorithms. The time-bound greedy approach yielded the best result for 16 of the 24 programs. Running ACO 10 times on each program, the ACO approach found the best result in 100% of runs for 11 programs and in 30-95% of runs for the remaining 13 programs. Nevertheless, the percentage reductions achieved in the size and execution time of the resultant test suite were almost the same for both techniques.
Conclusion: The results encourage the further use of both techniques in software testing.
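The greedy step described above can be sketched as a budgeted set-cover heuristic (the data shapes and the cost-ratio rule are assumptions for illustration, not the authors' implementation):

```python
def greedy_select(kill_map, cost, budget):
    """kill_map: test id -> set of mutants it kills; cost: test id -> run time.
    Repeatedly pick the test killing the most uncovered mutants per unit cost,
    stopping at the time budget."""
    uncovered = set().union(*kill_map.values())
    suite, spent = [], 0.0
    while uncovered:
        best = max(kill_map, key=lambda t: len(kill_map[t] & uncovered) / cost[t])
        gain = kill_map[best] & uncovered
        if not gain or spent + cost[best] > budget:
            break                          # no progress possible or budget hit
        suite.append(best)
        spent += cost[best]
        uncovered -= gain
    return suite

# Three tests, four seeded mutants, a 5-second budget:
kills = {"t1": {1, 2}, "t2": {2, 3, 4}, "t3": {4}}
print(greedy_select(kills, {"t1": 1.0, "t2": 2.0, "t3": 1.0}, budget=5.0))
# ['t1', 't2'] -- covers all four mutants in 3.0 s
```

A stricter variant would skip an over-budget test and try cheaper ones instead of stopping outright.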
Data Control in Public Cloud Computing: Issues and Challenges
Authors: Ashok Sharma, Pranay Jha and Sartaj Singh
Background: Advances in hardware and progress in IoT-based devices have led to a significant transformation in the digitalization and globalization of business models in the IT world. Cloud computing has attracted many companies to expand their business, providing IT infrastructure on a very small budget in a pay-per-use model. The expansion and migration of companies to cloud facilities has brought both benefits and drawbacks, and it has opened a new area of research. Managing IT infrastructure according to business requirements is a great challenge for infrastructure managers, because complex business models must be kept in step with market trends, which requires a large, up-to-date infrastructure. There are undoubtedly many benefits to moving to the cloud, but several vulnerabilities and potential security threats are a major concern for any business-sensitive data, and these security challenges restrict the movement of on-premises workloads to the cloud. This paper discusses the key differences between cloud models, various existing cloud security architectures, and the challenges in cloud computing related to data security at rest and in transit. It also explains the data controlling mechanisms the IT industry needs to adopt, along with an end-to-end security mechanism.
Objective: The main objective of this paper is to discuss the prevailing data security issues that discourage industry and organizations from moving their data into the public cloud, and to discuss how to enhance cloud security mechanisms during data migration and in multi-tenant environments.
Methods: Various reports and analyses indicate that data breaches and data security are the most challenging and concerning factors for any customer contemplating migrating workloads from an on-premises datacenter to the cloud, and they need attention in every consideration. All criteria and considerations for securing and protecting customers' information and data are classified and discussed. Data-at-rest and data-in-transit are the states in which data is stored and moved from source to destination, and different encryption methods for protecting and securing data in each state have been identified. However, there are still gaps to fill in cloud data control and security, which remain a serious concern and a daily target for attackers.
Results & Conclusion: Since cyber-attacks occur very frequently and re-establishing a compromised environment requires substantial investment, more control and effective use of technology are needed. The security concerns raised here are reasonable and need to be addressed.
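For the data-at-rest side, one concrete protection step (purely illustrative; the paper names no specific library) is to encrypt records before they leave the on-premises system, for example with the `cryptography` package's Fernet recipe, which combines AES-128-CBC with an HMAC integrity check:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # store in a KMS/HSM, never alongside the data
f = Fernet(key)
token = f.encrypt(b"customer record destined for cloud storage")
assert f.decrypt(token) == b"customer record destined for cloud storage"
```

Data-in-transit, by contrast, is normally protected at the channel level (TLS) rather than in application code, which is why the two states call for different mechanisms.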
Ensemble Visual Content Based Search and Retrieval for Natural Scene Images
Authors: Pushpendra Singh, P.N. Hrisheekesha and Vinai K. Singh
Background: Content Based Image Retrieval (CBIR) is a field of information retrieval in which images similar to a query are retrieved from a database based on various image descriptor parameters. Machine-learning-based systems use the image descriptor vector for storage, learning, and template matching. These feature descriptor vectors represent the visual content of an image locally or globally using texture, colour, shape, and other information.
Objective: The main aim of this paper is to categorize and evaluate the algorithms proposed over the last 10 years. In addition, an experiment is performed using a hybrid content descriptor methodology that achieves significant results compared with state-of-the-art algorithms.
Methods: In the past, several algorithms were proposed to extract the various kinds of image content on which retrieval from a database is based, but the precision and recall obtained with a single content descriptor are not significant. The proposed system architecture uses a hybrid ensemble feature set, combining globally and locally defined feature extraction methodologies, for image matching.
Results: The hybrid methodology decreases the error rate and improves precision and recall on a large natural scene image dataset with more than 20 classes. The overall combination provides almost 97% accuracy, which is better than results in the existing literature.
Conclusion: The experimental results suggest that the local feature extraction mechanism performs better than the global feature extraction methodology.
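A minimal sketch of such an ensemble descriptor (the specific choices of colour histogram and ORB below are assumptions for illustration, not necessarily the paper's descriptors) concatenates one global and one pooled local feature into a single vector:

```python
import cv2
import numpy as np

def hybrid_descriptor(bgr_image):
    # Global: 8x8x8 colour histogram over all three channels, L1-normalized.
    hist = cv2.calcHist([bgr_image], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256]).flatten()
    hist /= hist.sum() + 1e-9
    # Local: ORB keypoint descriptors, mean-pooled into one 32-dim vector.
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    _, des = cv2.ORB_create(nfeatures=500).detectAndCompute(gray, None)
    local = des.mean(axis=0) / 255.0 if des is not None else np.zeros(32)
    return np.concatenate([hist, local])       # 512 + 32 = 544 dimensions
```

Retrieval then ranks database images by the distance between their stored vectors and the query's, so colour evidence and local texture evidence both contribute to every match.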
Test Case Prioritization Using Bat Algorithm
Authors: Anu Bajaj and Om P. Sangwan
Background: Regression testing is a very important stage of the software maintenance and evolution phase. Software is updated continually, and to preserve its quality it must be retested after every update. With limited resources, complete testing becomes a tedious task; the probable solution is to execute the more important test cases before the less important ones.
Objective: Optimization methods are needed for efficient test case prioritization in minimum time while maintaining software quality. Various nature-inspired algorithms, such as the genetic algorithm, particle swarm optimization, and ant colony optimization, have been applied to prioritizing test cases. In this paper, we apply a relatively new nature-inspired optimization method, the Bat algorithm, which utilizes the echolocation, loudness, and pulse emission rate of bats, to prioritize test cases.
Methods: The proposed algorithm is evaluated on a sample case study of a timetable management system using the popular evaluation metric Average Percentage of Fault Detection (APFD).
Results: The results were compared with the baseline approaches (untreated, random, and reverse prioritization) and with a well-established optimization method, the genetic algorithm, and show a considerable increase in the evaluation metric.
Conclusion: This preliminary study shows that the Bat algorithm has great potential for solving test case prioritization problems.
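The evaluation metric has a standard closed form, APFD = 1 - (TF_1 + ... + TF_m)/(n m) + 1/(2n), where n is the number of tests, m the number of faults, and TF_i the 1-based position of the first test revealing fault i. A small self-contained implementation (the data shapes are assumed for illustration):

```python
def apfd(ordering, detects):
    """ordering: prioritized test ids; detects: test id -> set of fault ids."""
    faults = set().union(*detects.values())
    n, m = len(ordering), len(faults)
    first = {}                                  # fault -> earliest position
    for pos, test in enumerate(ordering, start=1):
        for fault in detects.get(test, ()):
            first.setdefault(fault, pos)
    return 1 - sum(first[f] for f in faults) / (n * m) + 1 / (2 * n)

detects = {"t1": {1, 2}, "t2": {3}, "t3": set()}
print(apfd(["t1", "t2", "t3"], detects))   # 0.7222... (faults found early)
print(apfd(["t3", "t2", "t1"], detects))   # 0.2777... (faults found late)
```

Any prioritization method, bat-based or otherwise, is scored by how far it pushes this value toward 1.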
Design of Relation Extraction Framework to Develop Knowledge Base
Authors: Poonam Jatwani, Pradeep Tomar and Vandana Dhingra
Background: Web documents present information as natural language text, which machines cannot understand. Searching for specific information in the sea of web documents has become very challenging, as searches return many irrelevant documents alongside relevant ones. To retrieve relevant information, semantic knowledge can be stored in a domain-specific ontology, which helps in understanding the user's need.
Methods: In this paper, a framework for extracting and visualising semantic knowledge is designed. The proposed approach is based on the assumption that the semantics of text can be extracted by building its syntactic structure, for which the Stanford parser is used. The corpus text is parsed to obtain morphological structures in a more machine-readable format, providing a better basis for manually constructing syntactic-semantic rules. The tagged form of each sentence is taken, and a set of rules based on dependency relationships is built manually. Sentence-level analysis is performed for concept generation and for extracting properties and hierarchical relations, using the dependency parse tree as the means of relation extraction.
Results: The extracted concepts and the relations among the various entities constitute a knowledge base in the form of an ontology.
Conclusion: The proposed information extraction model successfully filters the desired information from the vast ocean of the internet and creates a semantic structure that represents data in a standard, machine-understandable format, describing entities along with their properties and relationships.
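The rule idea can be sketched as reading subject and object dependents off each verb in the parse. The paper uses the Stanford parser; spaCy is substituted below only to keep the example self-contained (it assumes the `en_core_web_sm` model is installed), and the dependency labels chosen are illustrative:

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_triples(text):
    """Emit (subject, relation, object) triples from dependency parses."""
    triples = []
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ == "VERB":
                subj = [w for w in token.lefts if w.dep_ in ("nsubj", "nsubjpass")]
                obj = [w for w in token.rights if w.dep_ in ("dobj", "attr")]
                if subj and obj:
                    triples.append((subj[0].text, token.lemma_, obj[0].text))
    return triples

print(extract_triples("An ontology stores semantic knowledge."))
# [('ontology', 'store', 'knowledge')]
```

Each triple then becomes a concept pair and a property in the ontology under construction.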
Software Reliability Prediction of Open Source Software Using Soft Computing Technique
Authors: Saini G.L., Deepak Panwar and Vijander Singh
Background: Reliability plays a significant role in software development; it is a non-functional requirement of the software. Before open source software (OSS) is used in development, its quality must be checked, and it is challenging to identify which OSS is suitable. Conventional software reliability prediction models suit Commercial Off The Shelf (COTS) software, but they are not sufficient for predicting the reliability of open source software, which has extra characteristics: its source code is freely available and modifiable.
Methods: Most researchers have proposed mathematical models based on crisp set theory to estimate software reliability. The proposed methodology does not rely on a mathematical model; instead, a fuzzy logic based soft computing approach is used to analyze the reliability of OSS. The goal of this paper is to propose a fuzzy logic soft computing model that uses three reliability metrics to estimate the reliability of open source software.
Results: The software reliability model is tested on a few software applications, and the outcomes affirm the effectiveness of the model.
Conclusion: A fuzzy logic based soft computing technique has been proposed to assess open source software reliability.
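The flavour of such a model can be conveyed with a small Mamdani-style sketch (the metrics, memberships, and two-rule base below are invented for illustration; the paper's actual metrics and rules differ):

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_reliability(defect_density, change_rate, test_coverage):
    """Inputs assumed normalized to [0, 1]; higher coverage is better."""
    # Rule 1: low defects AND low churn AND high coverage -> HIGH reliability.
    r_high = min(tri(defect_density, -1, 0, 1),
                 tri(change_rate, -1, 0, 1),
                 tri(test_coverage, 0, 1, 2))
    # Rule 2: high defects OR high churn OR low coverage -> LOW reliability.
    r_low = max(tri(defect_density, 0, 1, 2),
                tri(change_rate, 0, 1, 2),
                tri(test_coverage, -1, 0, 1))
    # Weighted-average defuzzification, singleton outputs HIGH=0.9, LOW=0.2.
    total = r_high + r_low
    return (0.9 * r_high + 0.2 * r_low) / total if total else 0.5

print(fuzzy_reliability(0.2, 0.3, 0.8))   # ~0.69: leaning reliable
```

The appeal over a crisp mathematical model is that vague inputs ("churn is fairly low") degrade the estimate gracefully instead of breaking an equation's assumptions.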
An Optimal Feature Selection Method for Automatic Face Retrieval Using Enhanced Grasshopper Optimization Algorithm
Authors: Arun K. Shukla and Suvendu Kanungo
Background: Retrieval of facial images based on their content is a major area of research. However, images yield high-dimensional feature vectors, and selecting the relevant features is a challenging task because of the variation among images of similar objects. Selecting relevant features is therefore an important step in making a facial retrieval system computationally efficient and more accurate.
Objective: The main aim of this paper is to design and develop an efficient feature selection method that obtains relevant, non-redundant features from face images, so that the accuracy and computational cost of a face retrieval system can be improved.
Methods: The proposed feature selection method uses a new enhanced grasshopper optimization algorithm to obtain the significant features from the high-dimensional feature vectors of face images. The proposed algorithm modifies the target vector by considering more than one best solution, which maintains elitism and keeps the search from settling in a local optimum. It is then used to select the prominent features from the high-dimensional facial feature vector.
Results: The performance of the proposed feature selection method was tested on the Oracle Research Laboratory (ORL) face database. The proposed method eliminates 89% of the features, more than any of the compared methods, and increases the accuracy of the face retrieval system to 96.5%.
Conclusion: The enhanced grasshopper optimization algorithm-based feature selection method for face retrieval outperforms existing methods in both accuracy and computational cost.
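The key modification reads naturally in code: the grasshopper target becomes the mean of several elites instead of the single best. The sketch below is a simplified, per-dimension version with assumed parameters (fitness is minimized, positions scaled to [0, 1]), not the paper's exact update:

```python
import numpy as np

def s(r, f=0.5, l=1.5):
    """Standard grasshopper social-interaction function."""
    return f * np.exp(-r / l) - np.exp(-r)

def goa_step(pop, fitness, c, k=3):
    """One position update; the target is the mean of the top-k elites."""
    elite = pop[np.argsort(fitness)[:k]].mean(axis=0)   # multi-best target
    new = np.empty_like(pop)
    for i, xi in enumerate(pop):
        social = sum(c * 0.5 * s(np.abs(xj - xi)) * np.sign(xj - xi)
                     for j, xj in enumerate(pop) if j != i)
        new[i] = c * social + elite
    return new

def binarize(pos, rng):
    """Sigmoid transfer: continuous positions -> 0/1 feature masks."""
    return (1.0 / (1.0 + np.exp(-pos)) > rng.random(pos.shape)).astype(int)
```

Here `c` shrinks over iterations as in standard GOA, and the fitness would combine the retrieval error of the selected subset with a penalty on subset size.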
Feature Selection Method Based on Grey Wolf Optimization and Simulated Annealing
Authors: Avinash C. Pandey and Dharmveer S. Rajpoot
Background: Feature selection, also known as attribute subset selection, is a process in which an optimal subset of features is selected with respect to the target data by reducing dimensionality and removing irrelevant features. A dataset with n features has 2^n possible solutions, which is difficult to handle with conventional attribute selection methods; in such cases, metaheuristic-based methods generally outperform conventional ones.
Objective: The main aim of this paper is to enhance classification accuracy while minimizing the number of selected features and the error rate.
Methods: To achieve this objective, a binary metaheuristic feature selection method, bGWOSA, based on grey wolf optimization and simulated annealing is introduced. The proposed method uses simulated annealing to balance the trade-off between exploration and exploitation. Its performance was examined on ten feature selection benchmark datasets from the UCI repository and compared with binary cuckoo search, binary particle swarm optimization, binary grey wolf optimization, the binary bat algorithm, and a binary hybrid whale optimization method.
Results: The proposed feature selection method achieves the highest accuracy on most of the datasets compared with the state-of-the-art methods. The experimental and statistical results further validate its efficacy.
Conclusion: Classification accuracy can be enhanced by employing feature selection methods, and performance can be further improved by tuning the control parameters of metaheuristic methods.
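A compact sketch of the hybrid step (the bit-update rule and cooling schedule are assumptions for illustration): a binary grey-wolf move toward the three leaders, followed by a simulated-annealing test that occasionally accepts a worse subset so exploration survives into late iterations.

```python
import math
import random

def bgwo_move(leaders, dim, a):
    """Each bit follows the majority vote of alpha/beta/delta but flips with
    probability a/2; a decays from 2 to 0 (exploration -> exploitation)."""
    majority = (round(sum(l[d] for l in leaders) / 3) for d in range(dim))
    return [1 - m if random.random() < a / 2 else m for m in majority]

def sa_accept(curr_err, cand_err, temp):
    """Keep improvements always, worse subsets with Boltzmann probability."""
    return cand_err < curr_err or random.random() < math.exp((curr_err - cand_err) / temp)

# One tempered iteration (error() = wrapper classifier's error on a bit-mask):
#   cand = bgwo_move([alpha, beta, delta], dim=len(alpha), a=a)
#   if sa_accept(error(current), error(cand), temp):
#       current = cand
#   temp *= 0.95   # geometric cooling
```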
EBPA: A Novel Framework for the Analysis of Process Performance on the Basis of Real-Time NASA Application
Authors: Shashank Sharma and Sumit Srivastava
Background: Workflow extraction is the connecting link between process modelling and data mining. The primary objective of workflow mining is to extract information from event logs and derive insight from it; the knowledge gained from these logs builds an understanding of procedure workflows and the associations between processes, which can help in upgrading them where necessary.
Objective: The aim of this paper is to present a process-performance-based framework in which a reference model is compared with a model extracted from a large information system on the basis of key performance indices.
Methods: The proposed approach extracts the workflow model using workflow mining, which is effective and efficient compared with building a workflow model from scratch. It shows how event log data gathered from different sensors (the Internet of Events) can be processed and investigated to manage and advance the product workflow.
Results: The proposed approach provides a process-based framework for the legacy system that ensures effective and efficient operation, so that the extracted model behaves like the reference model; the results are validated using Key Performance Indices (KPIs) for evaluating process performance.
Conclusion: In this experimental, data-centric approach, our ongoing work is to investigate a metric that quantifies the quality of the reference and extracted models; on the basis of the metric values, decisions are taken on legacy information system process management.
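The KPI side of the comparison is easy to make concrete. A minimal sketch (the log schema is an assumption; real logs would come from the NASA application's information system) computes per-case cycle time, one of the indices on which extracted and reference models can be compared:

```python
from collections import defaultdict
from datetime import datetime

def cycle_times(events):
    """events: iterable of (case_id, activity, iso_timestamp) rows."""
    stamps = defaultdict(list)
    for case, _activity, ts in events:
        stamps[case].append(datetime.fromisoformat(ts))
    return {case: (max(t) - min(t)).total_seconds() for case, t in stamps.items()}

log = [("c1", "start",  "2021-01-01T09:00:00"),
       ("c1", "finish", "2021-01-01T09:45:00"),
       ("c2", "start",  "2021-01-01T10:00:00"),
       ("c2", "finish", "2021-01-01T10:20:00")]
kpi = cycle_times(log)
print(sum(kpi.values()) / len(kpi))   # mean cycle time: 1950.0 seconds
```

The same figure computed from the reference model's simulated log and from the extracted model's replayed log gives one of the KPI pairs the framework compares.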
The Development of a Modified Ear Recognition System for Personnel Identification
Authors: Haitham S. Hasan and Mais A. Al-Sharqi
Background: This study proposes a Match Region Localization (MRL) Ear Recognition System (ERS). Captured ear images are pre-processed through cropping and enhancement, segmented using the proposed MRL segmentation algorithm, and divided into 160 sub-images. The principal features of the segmented ear images are extracted and used to generate templates, and k-nearest-neighbor classifiers with the Euclidean distance metric are applied for classification.
Objective: The proposed ERS exhibits a recognition accuracy of 97.7%. Other publicly available ear datasets can be tested with the proposed system for cross-database comparison, and their error rates can be reduced.
Methods: The research follows four major stages: development of a PCA-based ear recognition algorithm, implementation of the developed algorithm, determination of the optimum ear segmentation method, and evaluation of the technique's performance.
Results: The False Acceptance Rate (FAR) of the developed ERS is 0.06, implying that six out of every 100 intruders will be falsely accepted.
Conclusion: The developed ERS outperforms existing ERSs by approximately 24.61% in recognition accuracy; it can be tested on other publicly available ear databases to check its performance on larger platforms.
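The recognition pipeline maps directly onto standard components; the sketch below substitutes scikit-learn for the paper's own implementation (the image size and sample counts are placeholders, not the paper's data):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((40, 64 * 64))      # 40 flattened, pre-processed ear images
y = np.repeat(np.arange(10), 4)    # 10 subjects, 4 templates each

model = make_pipeline(PCA(n_components=20),
                      KNeighborsClassifier(n_neighbors=1, metric="euclidean"))
model.fit(X, y)
print(model.predict(X[:1]))        # claimed identity for the first template
```

Thresholding the nearest-neighbor distance is what turns this classifier into a verifier with a tunable False Acceptance Rate.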