Recent Advances in Computer Science and Communications - Volume 14, Issue 2, 2021
ECG Analysis: A Brief Review
Authors: Saumendra K. Mohapatra and Mihir N. Mohanty
In recent years, cardiac problems have been found to grow in proportion to technological development. As the cardiac signal (electrocardiogram) reflects the electrical activity of the heart of a living being, technology develops day by day to support physicians and diagnosis, and it has many applications beyond the medical domain. Accurate analysis of the electrocardiogram (ECG) signal can provide information for detection, classification, and diagnosis. This paper offers a short review of earlier techniques and aims to point researchers in new directions. The review covers preprocessing, feature extraction, classification, and the different measures used to establish accuracy, and it summarizes the related work reported during the last decade.
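Since the review is organized around preprocessing, feature extraction and classification, a minimal Python sketch of the first two stages is given below. It is a generic illustration, not a method from any reviewed paper: the filter band, sampling rate and synthetic waveform are assumptions chosen only for demonstration.

```python
# Minimal sketch (not a reviewed method): generic ECG preprocessing and
# R-peak detection of the kind surveyed in the review, using SciPy.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def preprocess_ecg(signal, fs=360.0, low=0.5, high=40.0):
    """Band-pass filter the raw ECG to remove baseline wander and noise."""
    b, a = butter(N=3, Wn=[low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

def detect_r_peaks(filtered, fs=360.0):
    """Locate R peaks; peak spacing gives the RR intervals used as features."""
    peaks, _ = find_peaks(filtered,
                          height=np.percentile(filtered, 90),
                          distance=int(0.25 * fs))  # refractory period ~250 ms
    rr_intervals = np.diff(peaks) / fs
    return peaks, rr_intervals

# Example with a synthetic spiky signal (placeholder for a real recording):
fs = 360.0
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 63 + 0.05 * np.random.randn(t.size)
peaks, rr = detect_r_peaks(preprocess_ecg(ecg, fs), fs)
```

The RR intervals returned here are the kind of morphological feature that the classifiers in the surveyed work typically consume.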
Low Cost and Centimeter-Level Global Positioning System Accuracy Using Real-Time Kinematic Library and Real-Time Kinematic GPS
Authors: Hemant K. Gianey, Mumtaz Ali, Varadarajan Vijayakumar, Ashutosh Sharma and Rajiv Kumar
Introduction: The accuracy and the total design and implementation cost of a GPS framework determine the viability of GPS-based projects. As most advanced frameworks, including telemetry, IoT, cloud, and AUTOSAR systems, use GPS to obtain precise outcomes, software-controlled error correction becomes important. Using an open-source library such as RTKLIB helps in controlling and correcting GPS errors.
Methods: The project utilizes RTKLIB along with two stations for better accuracy. The RTK-GPS framework runs under a Linux environment embedded in a BeagleBoard. Communication within the GPS system is set up using both a serial communication protocol and the TCP/IP suite.
Results: To obtain high precision within the network, two GPS modules are utilized: one is mounted on the rover and the other serves as the base station of the setup. Both GPS modules use a dual-antenna setup to increase the reception level, reduce noise and obtain centimeter-level precision. For long-range communication, the rover uses Wi-Fi with the TCP/IP protocol stack. The setup described in this paper is intended to accomplish centimeter-level precision through libraries in a Linux environment.
Conclusion: The design will be set up and tested on a college campus under various conditions and with different error parameters to achieve low-cost, centimeter-level GPS accuracy.
Pose and Illumination Invariant Hybrid Feature Extraction for Newborn
Authors: Rishav Singh, Ritika Singh, Aakriti Acharya, Shrikant Tiwari and Hari Om
Background: Abduction, swapping and mix-ups are unfortunate events that can happen to newborns within hospital premises, and medical personnel find it difficult to curb such incidents. Accurate patient identification (ID) is essential for patient safety, especially for the smallest and most vulnerable paediatric patients. The level of security is a crucial issue in the maternity ward, and the problem of missing and swapped newborns is of prime concern to the persons involved and affected. There is a common perception in society that nothing can be done to prevent such tragedies. In comparison to developed nations, developing countries face greater challenges because of overcrowding and the scarcity of medical facilities in hospitals. The face of a newborn baby changes appearance under different lighting conditions in spite of various contrast enhancement techniques. Images of the same subject (intra-class) captured under different illumination conditions may differ more than images of different subjects (inter-class) captured under similar conditions, which gives rise to misclassification.
Objective: The main objective of this paper is to perform newborn face recognition with a hybrid approach combining Speeded Up Robust Features (SURF) and Local Binary Pattern (LBP).
Methods: The scientific contributions of the proposed work are: face recognition of newborns in a semi-controlled environment; overcoming the pose and illumination challenge; use of a single gallery image; and a hybrid approach to improve the results.
Results: The average Rank-1 accuracy of the proposed method is 93.65%, whereas that of the existing algorithms is considerably lower. At Rank 1, the proposed technique is 12.8% more accurate than LBP, 10.5% more accurate than SURF, 12.8% more accurate than LDA and 18.1% more accurate than PCA.
Conclusion: In this paper, a new semi-supervised technique is used to demonstrate improved performance of the newborn face recognition system under different illumination and pose conditions.
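As a hedged illustration of the texture half of the hybrid descriptor, the sketch below extracts a uniform LBP histogram with scikit-image and compares templates by chi-square distance; the parameters and the matching rule are assumptions, and the SURF component (available in OpenCV's contrib build) is omitted.

```python
# Illustrative sketch only (not the authors' exact pipeline): extracting an
# LBP histogram descriptor, one of the two feature types the paper combines.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_image, points=8, radius=1):
    """Uniform LBP codes pooled into a normalized histogram descriptor."""
    codes = local_binary_pattern(gray_image, P=points, R=radius, method="uniform")
    n_bins = points + 2                      # uniform patterns + non-uniform bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def chi_square(h1, h2, eps=1e-10):
    """Match a probe against gallery templates by histogram distance."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```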
Intuitionistic Fuzzy Shapley-TOPSIS Method for Multi-Criteria Decision Making Problems Based on Information Measures
Authors: Reetu Kumari, Arunodaya R. Mishra and Deelip K. Sharma
Aims & Background: Cloud Computing (CC) offers uniquely scalable and always-available services to users. As service provision is an important part of the cloud computing concept, the proper choice of the desired service according to the user's needs is highly relevant. Organizations such as Microsoft and Facebook have made significant investments in CC and currently offer services with high levels of reliability. A well-organized and precise evaluation of a cloud-based communication network is an essential step in assuring both business continuity and continuously available services. However, with the vast diversity of cloud services, selecting a suitable cloud service is a very challenging task for a user in an unpredictable environment. Due to the multidimensional criteria of Quality of Service (QoS), cloud service selection is treated as a Multiple Criteria Decision-Making (MCDM) problem.
Objectives & Methodology: In the present paper, an MCDM method named the Shapley-TOPSIS method, an extension of the classical TOPSIS method, is developed for cloud service selection. Thereafter, new divergence measures for intuitionistic fuzzy sets (IFSs) with multiple parameters are studied, and interesting properties relating the developed divergence and entropy measures are derived. The criterion weights are then ascertained by the Shapley function via an entropy and fuzzy measure approach. Next, Shapley-divergence measures are applied to calculate the closeness coefficient of each alternative. Finally, a cloud service selection problem is presented to show the applicability of the new method, which is also compared with some existing methods.
Results & Conclusion: A decision-making problem of choosing a cloud computing service provider has been considered to illustrate the developed intuitionistic fuzzy Shapley-TOPSIS method; the outcomes coincide with those of already developed methods, which confirms the soundness of the proposed method. In the future, we plan to apply the proposed approach to different multi-criteria decision-making problems and to extend it to other uncertain environments using other decision-making methods.
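For readers unfamiliar with the closeness coefficient the method extends, the following sketch implements classical crisp TOPSIS in NumPy; the weights and the example QoS matrix are invented for illustration and do not reflect the paper's intuitionistic fuzzy or Shapley-weighted formulation.

```python
# Sketch of the classical (crisp) TOPSIS closeness coefficient that the paper
# extends with intuitionistic fuzzy sets and Shapley-based criterion weights.
import numpy as np

def topsis_closeness(decision_matrix, weights, benefit_mask):
    """decision_matrix: alternatives x criteria; benefit_mask[j] is True for
    benefit criteria (larger is better), False for cost criteria."""
    norm = decision_matrix / np.linalg.norm(decision_matrix, axis=0)
    weighted = norm * weights
    ideal = np.where(benefit_mask, weighted.max(axis=0), weighted.min(axis=0))
    anti_ideal = np.where(benefit_mask, weighted.min(axis=0), weighted.max(axis=0))
    d_pos = np.linalg.norm(weighted - ideal, axis=1)
    d_neg = np.linalg.norm(weighted - anti_ideal, axis=1)
    return d_neg / (d_pos + d_neg)   # rank alternatives by descending closeness

# Example: 3 cloud services rated on 4 QoS criteria (illustrative numbers).
scores = np.array([[0.8, 120, 0.99, 3.0],
                   [0.6,  90, 0.97, 2.0],
                   [0.9, 150, 0.98, 4.0]])
cc = topsis_closeness(scores, weights=np.array([0.3, 0.2, 0.3, 0.2]),
                      benefit_mask=np.array([True, False, True, False]))
```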
A Novel and Secure Hybrid iWD-MASK Algorithm for Enhanced Image Security
Authors: Righa Tandon and P.K. Gupta
Background: Most existing work uses DES, AES and MAES for key generation in image encryption. These algorithms provide better results for text-based encryption than for digital image encryption and are less secure and efficient for images. A new hybrid, chaos-based approach has therefore been introduced for digital image encryption; this approach is designed for image data, and the obtained results outperform previous encryption results.
Objective: To recover the original image after decryption at the receiver's end.
Methods: The Matrix Array Symmetric Key (MASK) algorithm is used for key generation, in which the same key is used for the encryption and decryption processes. The total number of iterations must be fixed in advance, with three choices of 10, 12 and 14 iterations for key lengths of 128, 192 and 256 bits, respectively. Once the key is generated, the iWD algorithm is applied for encryption at the sender's end, and the same key has to be used for decryption at the receiver's end; otherwise, the system stops functioning and reports a system error.
Results & Conclusion: The proposed hybrid concept of MASK with the iWD algorithm is implemented on different sets of images. The obtained results are satisfactory when the proposed approach is compared with other existing approaches on parameters such as precision, recall and F-measure. In future work, the proposed algorithm can be applied to other types of cryptography as well, such as text and video sharing.
A Fully Convolutional Neural Network for Recognition of Diabetic Retinopathy in Fundus Images
Authors: Manaswini Jena, Smita P. Mishra and Debahuti Mishra
Background: Diabetic retinopathy is one of the complications of diabetes and a major cause of vision loss worldwide, arising from prolonged diabetes. Several technical approaches have been proposed for the automatic detection of diabetic retinopathy from fundus images. The way convolutional neural networks process visual information makes them particularly suitable for this task because of the spatial arrangement of their units. Convolutional neural networks are at their peak of development, and the best results can be gained by proper use of the technique; their local connectivity, parameter sharing and pooling of hidden units are advantageous for various prediction tasks.
Objective: The objective of this paper is to design a model for the classification of diabetic retinopathy.
Methods: A fully convolutional neural network model is developed to classify diseased and healthy fundus images. The proposed network consists of six convolutional layers with rectified linear unit activations and max-pooling layers. The absence of a fully connected layer reduces the computational complexity of the model, which therefore trains faster than traditional convolutional neural network models.
Results and Conclusion: The proposed model is validated by training it on a publicly available high-resolution fundus image database. The model is also compared with various existing state-of-the-art methods and shows competitive results. A behavioural study of different parameters of the network model is presented. The intelligence of the model lies in its ability to re-tune its weights to overcome outliers encountered in the future. The proposed model works well, with satisfactory performance.
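A minimal sketch of a fully convolutional classifier of the kind described, written with the Keras API, is shown below; the filter counts, input size and training settings are assumptions, not the authors' exact architecture.

```python
# Illustrative sketch (filter counts and input size are assumptions, not the
# paper's architecture): a fully convolutional binary classifier with six
# convolutional layers, ReLU activations, max pooling and no dense layer.
from tensorflow.keras import layers, models

def build_fcn(input_shape=(256, 256, 3)):
    inputs = layers.Input(shape=input_shape)
    x = inputs
    for filters in (16, 32, 64, 64, 128, 128):      # six conv blocks
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(2)(x)
    # 1x1 convolution + global pooling replace the fully connected layer.
    x = layers.Conv2D(1, 1, activation="sigmoid")(x)
    outputs = layers.GlobalAveragePooling2D()(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_fcn()
model.summary()
```

Replacing the dense head with a 1x1 convolution and global average pooling is what keeps the parameter count, and hence training time, low.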
Threshold Detection Scheme Based on Parametric Distribution Fitting for Optical Fiber Channels
Authors: Mohammed Usman, Mohd Wajid, Muhammad Z. Shamim, Mohd D. Ansari and Vinit K. Gunjan
Background: When data is transmitted by encoding it in the amplitude of the transmitted signal, as in Amplitude Shift Keying, the receiver decides on the transmitted data by observing the amplitude of the received signal.
Objective: Depending on the level of the received signal, the receiver has to decode the received data with minimum probability of error. For binary transmission, this is achieved by choosing a decision threshold.
Methods: A threshold detection mechanism that fits a parametric probability distribution to the received signal is evaluated in this paper.
Results: Building on our earlier work on visible light communication, the idea is extended to a fiber-optic channel, where the threshold value obtained by fitting a Rayleigh distribution to the received data results in an error probability approaching zero.
Conclusion: The proposed threshold estimation method is found to work well for both equal and unequal a priori probabilities of the binary symbols, and it adapts when the a priori probabilities of 1s and 0s change.
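A simplified sketch of the idea, under assumed signal levels, is given below: a Rayleigh distribution is fitted to the noise-dominated portion of the received amplitudes and a high quantile of the fit is used as the decision threshold. The quantile and the synthetic data are illustrative choices, not the paper's exact procedure.

```python
# Sketch under simplifying assumptions (not the paper's exact procedure):
# fit a Rayleigh distribution to received amplitude samples and place the
# binary decision threshold at a quantile of the fitted distribution.
import numpy as np
from scipy import stats

# Synthetic received amplitudes: noise-only "0" symbols and signal "1" symbols.
zeros = stats.rayleigh.rvs(scale=0.5, size=5000, random_state=1)
ones = 2.0 + stats.rayleigh.rvs(scale=0.5, size=5000, random_state=2)
received = np.concatenate([zeros, ones])

# Fit a Rayleigh distribution to the noise-dominated lower portion and use a
# high quantile of the fit as the threshold separating "0" from "1".
loc, scale = stats.rayleigh.fit(received[received < np.median(received)])
threshold = stats.rayleigh.ppf(0.999, loc=loc, scale=scale)

decisions = (received > threshold).astype(int)
truth = np.concatenate([np.zeros(5000, int), np.ones(5000, int)])
print("bit error rate:", np.mean(decisions != truth))
```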
A Fast Method for Defogging of Outdoor Visual Images
Background: Capturing images in severe atmospheric conditions, especially fog, critically degrades image quality and reduces visibility, which in turn affects several computer vision applications such as visual surveillance, intelligent vehicles and remote sensing. Acquiring a clear view is therefore a prime requirement for any image. In the last few years, many approaches have been directed towards solving this problem.
Methods: In this article, a comparative analysis of different existing image defogging algorithms is made, and a technique for image defogging based on the dark channel prior strategy is proposed.
Results: Experimental results show that the proposed method performs efficiently, significantly improving the visual quality of images captured in foggy weather. The proposed method also reduces the high computational time of existing techniques.
Discussion: A qualitative assessment was performed on both benchmark and real-time datasets to determine the efficacy of the technique. Finally, the work concludes with the relative advantages and shortcomings of the proposed technique.
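The proposed technique builds on the dark channel prior; the sketch below shows the standard prior computation and the resulting transmission-based restoration, with common default parameters rather than the paper's tuned values.

```python
# Minimal sketch of the dark channel prior defogging steps (patch size and
# weights are common defaults, not the paper's tuned values).
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch=15):
    """Per-pixel minimum over the RGB channels and a local patch."""
    min_channel = image.min(axis=2)
    return minimum_filter(min_channel, size=patch)

def estimate_atmospheric_light(image, dark, top_fraction=0.001):
    """Average the brightest dark-channel pixels to estimate the airlight."""
    n = max(1, int(dark.size * top_fraction))
    idx = np.argsort(dark.ravel())[-n:]
    return image.reshape(-1, 3)[idx].mean(axis=0)

def transmission(image, airlight, omega=0.95, patch=15):
    """Estimated transmission map t(x) = 1 - omega * dark(I / A)."""
    return 1.0 - omega * dark_channel(image / airlight, patch)

def defog(image, t_min=0.1):
    """image: float RGB array in [0, 1]; returns the recovered scene radiance."""
    dark = dark_channel(image)
    a = estimate_atmospheric_light(image, dark)
    t = np.clip(transmission(image, a), t_min, 1.0)[..., None]
    return np.clip((image - a) / t + a, 0.0, 1.0)
```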
Improving Scalability, Sparsity and Cold Start User Issues in Collaborative Tagging with Incremental Clustering and Trust
Authors: Latha Banda and Karan Singh
Background: Due to the huge amount of data on web sites, recommending every product to every user is impossible. To address this problem, Recommender Systems (RS) were introduced. RS are categorized into Content-Based (CB), Collaborative Filtering (CF) and hybrid systems, and recommendations to users are made on the basis of these techniques. CF is the most recent technique used in RS, in which a tagging feature is also provided.
Objective: Three main issues occur in RS: the scalability problem, which arises when there is a huge amount of data; the sparsity problem, which arises when rating data is missing; and the cold-start user or item problem, which arises when a new user or item enters the system. To avoid these issues, we propose Incremental Clustering and Trust in Collaborative Tagging.
Methods: We propose a Collaborative Tagging (CT) method with Incremental Clustering and Trust that enhances recommendation quality by removing the scalability issue with the help of incremental clustering, while the sparsity and cold-start user or item problems are resolved with the help of trust.
Results: We compare the results of Collaborative Tagging with Incremental Clustering and Trust (CFT-EDIC-TS) against the baseline approaches of CT with cosine similarity (CFT-CS), CT with Euclidean distance and incremental clustering (CFT-EDIC) and CT with trust (CFT-TS).
Conclusion: The proposed approach is compared with the baseline approaches using the metrics MAE, prediction percentage, precision and recall. On these metrics, for every split, CFT-EDIC-TS shows the best results compared to the other baseline approaches.
A Method for Webpage Classification Based on URL Using Clustering
Authors: Sunita, Gurvinder Singh and Vijay Rana
Background: Pattern mining is the mechanism of extracting useful information from a large dataset. The sub-field of web mining considered here is the extraction of noisy data from user queries together with redundancy handling; this redundancy-handling mechanism is known in the existing literature as ambiguity handling. The clustering mechanisms employed in existing systems include k-means and semantic search, set against the incremental growth of the internet.
Aims: The proposed work comprises an analysis of techniques used to extract useful URLs in place of noisy data.
Methods: We consider the extraction of noisy data from user queries together with redundancy (ambiguity) handling. The clustering mechanisms used in existing systems, k-means and semantic search, are static, causing performance degradation in terms of execution time; this paper therefore suggests a performance-improvement mechanism.
Results: The MPV (Most-Probable-Values) clustering and N-gram techniques considered for improvement in the existing literature can be further improved using the research methodology specified in this paper.
Conclusion: In the proposed system, results are based on MPV clustering with N-gram techniques, where the N-gram analysis examines the instances of a word or phrase across all query data. The results are reported in terms of execution time and the number of URLs retrieved for web page classification.
Performance Analysis of TCP Variants Using AODV and DSDV Routing Protocols in MANETs
Authors: Rajnesh Singh, Neeta Singh and Aarti G. Dinker
Background: TCP is the most reliable transport layer protocol, providing reliable data delivery from the source to the destination node. TCP is known to work well in wired networks, but it is assumed to be less suitable for ad-hoc networks. However, for application in ad-hoc networks, TCP can be modified to improve its performance. Various researchers have proposed improved variants of TCP evaluated with only one or two measures.
Objective: One or two measures are not sufficient for a proper analysis of improved versions of TCP. Therefore, our objective in this paper is to evaluate the performance of different TCP variants and to investigate their results with the DSDV and AODV routing protocols.
Methods: Our methodology includes the analysis of various performance measures such as throughput, delay, packet drop, packet delivery ratio and number of acknowledgements. The simulations are carried out by varying the number of nodes in the network simulator NS2.
Results: It is observed that TCP Newreno achieves higher throughput and packet delivery ratio with both the AODV and DSDV routing protocols, whereas TCP Vegas achieves minimum delay and packet loss with both protocols. TCP SACK achieves the minimum number of acknowledgements with both AODV and DSDV.
Conclusion: The comparison of these TCP variants leads to the conclusion that TCP Newreno provides better performance with both the AODV and DSDV protocols.
Logistic Step Size Based Artificial Bee Colony Algorithm for Optimum Induction Motor Speed Control
Authors: Fani B. Sharma and Shashi R. Kapoor
Background: Disruption in the parameters of an Induction Motor (IM) is quite frequent, and in the case of substantial parameter disruption the IM becomes unreliable.
Methods: To deal with this complication, Proportional-Integral (PI) controllers are utilized. The selection of PI controller parameters is process dependent, and an inappropriate choice of controller settings may lead to instability. Nowadays, the tuning of PI controller parameters for an IM is performed using a prominent swarm intelligence algorithm, the Artificial Bee Colony (ABC) algorithm. In this work, the ABC algorithm is further amended, drawing inspiration from the mathematical logistic formula.
Results and Conclusion: The proposed algorithm, termed the logistic ABC algorithm, is applied to PI controller tuning of the IM. The outcomes reveal that the proposed algorithm performs better than other state-of-the-art strategies available in the literature.
A Hybrid Protocol Using Fuzzy Logic and Rough Set Theory for Target Coverage
Authors: Pooja Chaturvedi and Ajai K. Daniel
Background: Target coverage is a significant problem in the area of wireless sensor networks. It aims at monitoring a given set of targets with the required confidence level so that the network lifetime is enhanced while respecting resource constraints.
Objective: To maximize the lifetime of the sensor network and minimize the overhead involved in the scheduling approach, such that the pre-specified set of targets is monitored for a longer duration with the required confidence level.
Methods: The paper uses a Mamdani-inference fuzzy logic system in which a node's status to remain active is determined on the basis of coverage probability, trust values and node contribution. The rule set for determining the set cover is optimized using rough set theory, which determines node validity for the trust calculation.
Results: The results show that the proposed approach improves network performance in terms of processing time, throughput and energy conservation by factors of 50%, 74% and 74%, respectively, compared to existing approaches.
Conclusion: The paper proposes a scheduling strategy of the nodes for target coverage as an enhancement to the Energy Efficient Coverage Protocol (EECP) on the basis of rough set theory. The rule set for determining the set cover is optimized using rough set theory so that network performance is improved in terms of processing time, throughput and energy consumption.
A Hybrid Technique for Selection and Minimization of Test Cases in Regression Testing
Authors: Leena Singh, Shailendra N. Singh and Sudhir Dawra
Background: In today's era, modifications to software are a common customer requirement. When changes are made to existing software, all test cases must be re-run to ensure that the newly introduced changes have no unwanted effect on the behavior of the software. However, retesting all test cases is both time consuming and expensive, so a technique that reduces the number of tests to be performed is needed. Regression testing is one way to reduce the number of test cases, and selection is one such method, seeking to identify the test cases that are relevant to some set of recent changes.
Objective: Most studies have used different selection techniques and have focused on only one parameter for reducing test suite size without compromising the performance of regression testing. However, to the best of our knowledge, no study has combined two or more parameters, such as coverage and/or execution time, in a single testing process. This paper presents a hybrid technique that combines regression test selection using a slicing technique with minimization of test cases using a modified firefly algorithm, using the parameters of coverage and execution time together in a single testing process.
Methods: A hybrid technique is described that combines selection and minimization. Selection of test cases is based on a slicing technique, while minimization is done using the firefly algorithm. The hybrid technique selects and minimizes the test suite using information on statement coverage and execution time.
Results: The proposed technique gives a 43.33% better result than the other hybrid approach in terms of a significantly reduced number of test cases, and the resulting test cases were effective enough to cover 100% of the statements for all programs. The proposed technique was also tested on four different programs, namely Quadratic, Triangle, Next Day and Commission, for test suite selection and minimization, and gave comparatively superior results in terms of the percentage reduction in the number of test cases required for testing.
Conclusion: The combination of parameters used in the slicing-based approach reduces the number of test cases, making software testing economical, feasible and time saving without introducing any fault into the source code. The proposed technique can be used by software practitioners and experts to reduce the time, effort and resources needed for the selection and minimization of test cases.
A Novel and Secure Image Encryption Scheme Based on Chaotic Logistic Map and Random Scrambling Using a Special Parameter of Gray Code Transformation
Authors: Sudeept S. Yadav and Yashpal Singh
Background: Image security is a major concern in digital communication media, and many symmetric and asymmetric algorithms are available for transmitting images securely. Among symmetric-key algorithms, many schemes are based on bit-plane decomposition and random scrambling using chaotic maps.
Objective: A secure image encryption algorithm is proposed based on the combination of the (n, k, p) Gray code and the chaotic logistic map to improve the performance of decomposition-based encryption methods.
Methods: We suggest a new algorithm that improves the security of image encryption based on the (n, k, p) Gray code and the bit-plane decomposition method. This encryption technique uses the bit-plane decomposition of the (n, k, p) Gray code, then random scrambling of each bit plane, and finally the chaotic logistic map applied through a pixel substitution method.
Results: The performance and security of the proposed algorithm have been analyzed with different evaluation metrics such as MSE, PSNR, UACI, NPCR and the correlation coefficient. The experimental results show that the proposed algorithm is secure and provides better results than the XOR-based substitution method.
Conclusion: This new method can be utilized in imaging systems such as medical tomography, tele-medicine, digital video and video conferencing in communication systems. Further, we will study and enhance the performance of the (n, k, p) Gray code for data hiding and image de-noising.
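The final substitution stage relies on the chaotic logistic map; the sketch below shows how the map can be iterated into a keystream and applied to pixels. It uses a plain XOR substitution as a baseline illustration, not the paper's (n, k, p) Gray-code scheme, and the key values are examples only.

```python
# Illustrative sketch (not the full (n, k, p) Gray-code scheme): the chaotic
# logistic map used as a keystream generator for a simple pixel substitution.
import numpy as np

def logistic_keystream(length, x0=0.3456, r=3.99, burn_in=1000):
    """Iterate x_{n+1} = r * x_n * (1 - x_n) and quantize to bytes."""
    x = x0
    for _ in range(burn_in):                 # discard transient iterations
        x = r * x * (1.0 - x)
    stream = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1.0 - x)
        stream[i] = int(x * 256) % 256
    return stream

def substitute_pixels(image, x0=0.3456, r=3.99):
    """Baseline XOR substitution of pixel values with the chaotic keystream."""
    flat = image.astype(np.uint8).ravel()
    key = logistic_keystream(flat.size, x0, r)
    return np.bitwise_xor(flat, key).reshape(image.shape)

# Decryption applies the same substitution with the same secret key (x0, r).
```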
An Un-Supervised Approach for Backorder Prediction Using Deep Autoencoder
Authors: Gunjan Saraogi, Deepa Gupta, Lavanya Sharma and Ajay Rana
Background: Backorders are a common problem affecting supply chains and logistics, sales, customer service and manufacturing, and they generally lead to low sales and low customer satisfaction. A predictive model can identify which products are most likely to experience backorders, giving the organization information and time to adjust, and thereby to take action to maximize its profit.
Objective: To address the issue of predicting backorders, this paper proposes an un-supervised approach to backorder prediction using a deep autoencoder.
Methods: In this paper, artificial intelligence paradigms are researched in order to introduce a predictive model for the present imbalanced-data problem, in which the number of products going on backorder is rare.
Results: Un-supervised anomaly detection using deep autoencoders shows better areas under the Receiver Operating Characteristic and precision-recall curves than supervised classification techniques combined with resampling techniques for imbalanced data problems.
Conclusion: We demonstrate that un-supervised anomaly detection methods, specifically deep autoencoders, can be used to learn a good representation of the data. The method can be used as a predictive model for inventory management and can help to reduce the bullwhip effect, raise customer satisfaction and improve operational management in the organization. This technology is expected to create the sentient supply chain of the future, able to feel, perceive and react to situations at an extraordinarily granular level.
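A minimal sketch of autoencoder-based anomaly detection is given below, under assumed layer sizes and a placeholder dataset: train on non-backorder records only and flag rows whose reconstruction error exceeds a chosen percentile. It shows the general technique, not the authors' exact architecture or threshold rule.

```python
# Minimal sketch, under assumed layer sizes, of autoencoder anomaly detection
# for imbalanced backorder data (placeholder features, not the real dataset).
import numpy as np
from tensorflow.keras import layers, models

def build_autoencoder(n_features, code_dim=8):
    inputs = layers.Input(shape=(n_features,))
    x = layers.Dense(32, activation="relu")(inputs)
    code = layers.Dense(code_dim, activation="relu")(x)
    x = layers.Dense(32, activation="relu")(code)
    outputs = layers.Dense(n_features, activation="linear")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

# X_normal: scaled feature rows for products that did NOT go on backorder.
X_normal = np.random.rand(1000, 20).astype("float32")     # placeholder data
ae = build_autoencoder(n_features=20)
ae.fit(X_normal, X_normal, epochs=10, batch_size=64, verbose=0)

X_all = np.random.rand(200, 20).astype("float32")          # placeholder data
errors = np.mean((ae.predict(X_all) - X_all) ** 2, axis=1)
flags = errors > np.percentile(errors, 95)   # likely-backorder anomalies
```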
A Hybrid Combination of LS-SVM and KPCA with Bat Algorithm for Intrusion Detection
Authors: Thiruppathy K. V. and Loheswaran K.
Background: An Intrusion Detection System is software designed to secure a network or an information system by automatically alerting administrators when someone is trying to compromise the system through malicious activities. An intrusion is a process that causes a system to enter an insecure state, and an intruder is a person or attacker who attempts to violate security measures by compromising the integrity and confidentiality of a system.
Objective: Security threats have become a big issue with high-speed network connections, and one solution is an intrusion detection system that promises to provide network security. This paper proposes a Least Squares Support Vector Machine (LS-SVM) based on the Bat Algorithm (BA) for intrusion detection.
Methods: The proposed technique is divided into two phases. In the first phase, Kernel Principal Component Analysis (KPCA) is utilized as a preprocessing step for the LS-SVM to reduce the dimension of the feature vectors and shorten training time, with the specific goal of decreasing the noise caused by feature differences and enhancing the performance of the LS-SVM. In the second phase, the least squares support vector machine combined with the BA is applied for classification. The BA uses automatic zooming to balance exploration and exploitation during the search procedure. Finally, based on the optimal feature subset, the feature weights and the parameters of the LS-SVM are optimized simultaneously.
Results: The proposed algorithm is named kernel principal component analysis based least squares support vector machine with bat algorithm (KPCA-BA-LS-SVM). To show the adequacy of the proposed method, experiments are conducted on the KDD 99 dataset, which is regarded as an accepted benchmark for evaluating the performance of intrusion detection. The proposed hybridization achieves a reasonable performance in terms of precision and efficiency.
Conclusion: A BA model with a novel hybrid KPCA-LS-SVM was proposed in this paper for an intrusion detection system. The parameters of the LS-SVM classifier are chosen using the BA, while the essential features of the intrusion detection data are extracted using KPCA and the multi-layer SVM classifier, which checks whether any activity constitutes an attack. The N-RBF supports the Gaussian kernel by reducing the training time and improving the performance of the LS-SVM classifier. As future work, additional algorithms could be developed by combining other available classification methods with kernel systems, resulting in efficient planning of examination and online intrusion detection.
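The preprocessing/classification split can be illustrated with scikit-learn as below; a standard SVC stands in for the LS-SVM, and fixed hyperparameters stand in for the values the bat algorithm would tune, so the sketch shows the structure rather than the proposed algorithm itself.

```python
# Sketch of the KPCA-then-SVM structure the abstract describes. The SVC and
# the fixed hyperparameters are stand-ins (assumptions), not KPCA-BA-LS-SVM.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Placeholder data standing in for preprocessed KDD 99 feature vectors/labels.
X = np.random.rand(500, 41)
y = np.random.randint(0, 2, size=500)

model = make_pipeline(
    StandardScaler(),
    KernelPCA(n_components=10, kernel="rbf", gamma=0.1),  # feature reduction
    SVC(kernel="rbf", C=10.0, gamma=0.05),                # classifier stage
)
print("cv accuracy:", cross_val_score(model, X, y, cv=5).mean())
```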
Multi-Level Image Segmentation of Color Images Using Opposition Based Improved Firefly Algorithm
Authors: Abhay Sharma, Rekha Chaturvedi, Umesh Dwivedi and Sandeep Kumar
Background: Image segmentation is a fundamental step in image processing. Numerous image segmentation algorithms have already been proposed for grey-scale images, but they are complex and time-consuming, and most of them suffer from over- and under-segmentation problems. Multi-level image thresholding is a very effective method against this problem, and nature-inspired meta-heuristic algorithms such as the firefly algorithm are fast and can enhance the performance of the method.
Objective: This paper provides a modified firefly algorithm and applies it to multilevel thresholding of color images. Opposition-based learning is incorporated in the firefly algorithm to improve convergence rate and robustness, and the between-class variance method of thresholding is used to formulate the objective function.
Methods: Numerous benchmark images were tested to evaluate the performance of the proposed method.
Results: The experimental results validate the performance of the Opposition Based Improved Firefly Algorithm (OBIFA) for multi-level image segmentation using the Peak Signal to Noise Ratio (PSNR) and Structural Similarity Index Metric (SSIM) parameters.
Conclusion: The OBIFA algorithm is well suited for multilevel image thresholding. It provides the best results compared to Darwinian Particle Swarm Optimization (DPSO) and Electromagnetism Optimization (EMO) in terms of convergence speed and PSNR and SSIM values.
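The fitness function being maximized is the between-class (Otsu) variance; a sketch of that objective for a candidate threshold set is given below, with the optimizer itself omitted.

```python
# Sketch of the between-class variance objective (the Otsu criterion used as
# the fitness function) evaluated for one candidate set of thresholds on one
# image channel; the metaheuristic that maximizes it is omitted.
import numpy as np

def between_class_variance(channel, thresholds):
    """channel: uint8 array; thresholds: threshold values strictly in (0, 255)."""
    hist, _ = np.histogram(channel, bins=256, range=(0, 256))
    prob = hist / hist.sum()
    levels = np.arange(256)
    mu_total = np.sum(levels * prob)
    edges = [0, *sorted(thresholds), 256]
    variance = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = prob[lo:hi].sum()                              # class probability
        if w > 0:
            mu = np.sum(levels[lo:hi] * prob[lo:hi]) / w   # class mean
            variance += w * (mu - mu_total) ** 2
    return variance   # a metaheuristic such as OBIFA maximizes this value

# Example: score a candidate 3-threshold split of a synthetic channel.
channel = np.random.randint(0, 256, size=(128, 128), dtype=np.uint8)
print(between_class_variance(channel, thresholds=[64, 128, 192]))
```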
Analysis of High-Efficiency Transformerless Inverter for a Grid-Tied PV Power System with Reactive Power Control
Authors: Selvamathi Ramachandran and Indragandhi Vairavasundaram
Background: In recent years grid-tied PV power systems have played a vital role in the overall energy system. In a grid-tied PV system, Transformerless Inverter (TI) topologies are preferred for their reduced price, improved efficiency and light weight, and many transformerless topologies have been proposed and verified with real power injection only. Recently, almost every international standard has required that a specified amount of reactive power be handled by the grid-tied PV inverter. According to the standard VDE-AR-N 4105, a grid-tied PV inverter with a power rating below 3.68 kVA should attain a Power Factor (PF) from 0.95 leading to 0.95 lagging.
Objective: To address the issue of reactive power control in a grid-tied PV system, a Fuzzy Gain Scheduling (FGS) controller is proposed in this paper as the power controller for a High-Efficiency Transformerless (HETL) inverter. The performance of the proposed scheme is analyzed and validated against conventional PI-controller-based active and reactive power controllers.
Methods: To improve the performance of the system, the FGS controller is employed for the active and reactive power controllers. In a conventional PI controller, the gains are constant for any value of the error, which introduces error and delay in reaching the optimum voltage values (Vα and Vβ). Hence, in this analysis, the FGS controller tunes the PI controller gains with respect to changes in the active and reactive power errors.
Results: A comparative performance of the PI-based and FGS-based HETL inverters is presented. It is noted that, whether Pref is constant or variable, the FGS controller reduces ripple compared to the PI-based HETL inverter system; the variable-reference case produces more ripple than the constant-reference case in both systems.
Conclusion: The analysis shows that the FGS-based HETL inverter in a grid-tied PV power system produces the best performance in all aspects, namely voltage, active power and reactive power.
An Empirical Comparison of t-GSC and ACO_TCSP Applied to Time Bound Test Selection
Authors: Nishtha Jatana and Bharti Suri
Background: Test case selection is a highly researched and inevitable software testing activity, and various kinds of optimization approaches are used to solve the test selection and prioritization problem. Greedy approaches and search-based techniques have already been applied to test case selection. Here, a greedy approach to the set cover problem is proposed to find a minimal test suite that is able to detect the maximum number of faults, while the search-based approximation technique, Ant Colony Optimization (ACO), reduces and prioritizes the test suite to form an optimized test suite. Time-bound test case prioritization is an NP-complete problem. Mutation testing is used to seed known faults into the programs under test in order to generate effective test cases.
Objective: To empirically evaluate the performance of a greedy approach and a search-based approach for test case selection.
Methods: This paper compares a search-based approximation approach against a time-sensitive greedy approach for test suite optimization using mutation testing. The proposed greedy approach for producing an optimized test suite within a time-bound environment is implemented in Python. The two techniques have also been empirically compared on 24 programs in different languages (C, C#, Java and Python).
Results: The greedy approach shows better complexity than the ACO approach, which is validated experimentally using the actual run times of the algorithms. The time-bound greedy approach yielded the best result for 16 of the 24 programs. The ACO approach found the best result 100% of the time for 11 programs and 30-95% of the time for the remaining 13 programs, over 10 runs of ACO on each program. Nevertheless, the percentage reductions achieved in the size and execution time of the resulting test suite were almost identical for the two techniques.
Conclusion: The results encourage the further use of both techniques in software testing.
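The greedy side of the comparison is essentially greedy set cover over a fault (mutant) matrix with a time budget; a small sketch under an invented fault matrix is shown below.

```python
# Sketch of the greedy set-cover selection idea described in the abstract:
# repeatedly pick the test that kills the most not-yet-killed mutants, with an
# optional time budget. The fault matrix and costs below are illustrative.
def greedy_test_selection(kill_map, exec_time, time_budget=None):
    """kill_map: {test_name: set of mutant/fault ids it detects};
    exec_time: {test_name: execution cost}; returns the selected test list."""
    uncovered = set().union(*kill_map.values())
    selected, spent = [], 0.0
    while uncovered:
        # Pick the test covering the most remaining faults (ties -> cheaper).
        best = max(kill_map, key=lambda t: (len(kill_map[t] & uncovered),
                                            -exec_time[t]))
        gain = kill_map[best] & uncovered
        if not gain:
            break                                   # nothing left to gain
        if time_budget is not None and spent + exec_time[best] > time_budget:
            break                                   # respect the time bound
        selected.append(best)
        spent += exec_time[best]
        uncovered -= gain
    return selected

kill_map = {"t1": {1, 2, 3}, "t2": {3, 4}, "t3": {4, 5, 6}, "t4": {1, 6}}
exec_time = {"t1": 2.0, "t2": 1.0, "t3": 2.5, "t4": 1.5}
print(greedy_test_selection(kill_map, exec_time, time_budget=5.0))
```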
Data Control in Public Cloud Computing: Issues and Challenges
Authors: Ashok Sharma, Pranay Jha and Sartaj Singh
Background: Advancements in hardware and progress in IoT-based devices have led to a significant transformation in the digitalization and globalization of business models in the IT world. Cloud computing has attracted many companies to expand their business by providing IT infrastructure on a very small budget in a pay-per-use model. The expansion and migration of companies to cloud computing facilities has brought many pros and cons and opened new areas of research. Managing IT infrastructure according to business requirements is a great challenge for IT infrastructure managers, because complex business models must be kept up to date with market trends, which requires large, up-to-date infrastructure to accelerate business requirements. There are undoubtedly many benefits to moving to the cloud, but several vulnerabilities and potential security threats are a major concern for any business-sensitive data, and these security challenges place restrictions on moving on-premises workloads to the cloud. This paper discusses the key differences between cloud models, existing cloud security architectures, and the challenges in cloud computing related to data security at rest and in transit. The data-controlling mechanisms that the IT industry needs to adopt, along with end-to-end security mechanisms, are also explained.
Objective: The main objective of this paper is to discuss the prevailing data security issues in the cloud that discourage industry and organizations from moving their data into the public cloud, and to discuss how to enhance security mechanisms in the cloud during data migration and in multi-tenant environments.
Methods: Based on various reports and analyses, data breaches and data security are the most challenging concerns for any customer considering migrating workloads from an on-premises datacenter to cloud computing, and they need attention in every consideration. All criteria and considerations for securing and protecting customer information and data are classified and discussed. Data-at-rest and data-in-transit refer to how data is stored and how it is moved from a source to a destination, and different encryption methods for protecting data-at-rest and data-in-transit are identified. However, gaps remain to be filled in cloud data control and security, which is still a serious concern and a constant target for attackers.
Results & Conclusion: Since cyber-attacks occur very frequently and cause huge re-establishment costs, more control with effective use of technology is needed. The concerns related to security are reasonable concerns that need to be addressed.
Ensemble Visual Content Based Search and Retrieval for Natural Scene Images
Authors: Pushpendra Singh, P.N. Hrisheekesha and Vinai K. Singh
Background: Content Based Image Retrieval (CBIR) is a field of information retrieval in which images similar to a query are retrieved from a database on the basis of various image descriptors. The image descriptor vectors are used by machine learning based systems for storage, learning and template matching, and they describe the visual content of an image locally or globally using texture, colour, shape and other information.
Objective: The main aim of this paper is to categorize and evaluate the algorithms proposed over the last 10 years. In addition, an experiment is performed using a hybrid content-descriptor methodology that achieves significant results compared with state-of-the-art algorithms.
Methods: In the past, several algorithms were proposed to extract a variety of content from an image, on the basis of which the image is retrieved from the database, but the precision and recall obtained using a single content descriptor are not significant. The proposed system architecture uses a hybrid ensemble feature set for image matching, combining globally and locally defined feature extraction methodologies.
Results: The hybrid methodology decreases the error rate and improves precision and recall on a large natural scene image dataset with more than 20 classes. The overall combination provides almost 97% accuracy, which is better than results in the existing literature.
Conclusion: The experimentation suggests that the local feature extraction mechanism performs better than the global feature extraction methodology.
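As a hedged illustration of a hybrid global-plus-local descriptor, the sketch below concatenates a colour histogram with an LBP texture histogram and retrieves by Euclidean distance; the specific descriptors and bin counts are assumptions, not the paper's ensemble.

```python
# Illustrative sketch (descriptor choices are assumptions, not the paper's
# exact ensemble): a hybrid feature vector built from a global colour
# histogram plus a local LBP texture histogram, used for nearest retrieval.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import local_binary_pattern

def hybrid_descriptor(image):
    """image: float RGB array in [0, 1]; returns one concatenated vector."""
    # Global descriptor: joint colour histogram over the three channels.
    color_hist, _ = np.histogramdd(image.reshape(-1, 3),
                                   bins=(8, 8, 8), range=((0, 1),) * 3)
    color_hist = color_hist.ravel() / color_hist.sum()
    # Local descriptor: uniform LBP texture histogram.
    codes = local_binary_pattern(rgb2gray(image), P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(codes, bins=10, range=(0, 10), density=True)
    return np.concatenate([color_hist, lbp_hist])

def retrieve(query, database_descriptors, k=5):
    """Return indices of the k most similar database images (L2 distance)."""
    q = hybrid_descriptor(query)
    dists = np.linalg.norm(database_descriptors - q, axis=1)
    return np.argsort(dists)[:k]
```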
Test Case Prioritization Using Bat Algorithm
Authors: Anu Bajaj and Om P. Sangwan
Background: Regression testing is a very important stage of the software maintenance and evolution phase. Software is continuously updated, and to preserve software quality it needs to be retested every time it is updated. With limited resources, complete testing of the software becomes a tedious task. A probable solution to this problem is to execute the more important test cases before the less important ones.
Objective: Optimization methods are needed for efficient test case prioritization in minimum time while maintaining the quality of the software. Various nature-inspired algorithms, such as the genetic algorithm, particle swarm optimization and ant colony optimization, have been applied to prioritizing test cases. In this paper, we apply a relatively new nature-inspired optimization method, the bat algorithm, which utilizes the echolocation, loudness and pulse emission rate of bats, to prioritize test cases.
Methods: The proposed algorithm is evaluated on a sample case study of a timetable management system using the popular evaluation metric Average Percentage of Fault Detection (APFD).
Results: The results are compared with the baseline approaches, i.e. untreated, random and reverse prioritization, and with a well-established optimization method, the genetic algorithm, and we find a considerable increase in the value of the evaluation metric.
Conclusion: This preliminary study shows that the bat algorithm has great potential to solve test case prioritization problems.
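The evaluation metric is standard; the sketch below computes APFD for one candidate ordering over an invented fault matrix.

```python
# Sketch of the Average Percentage of Fault Detection (APFD) metric used to
# evaluate prioritized orderings; the fault matrix below is illustrative.
def apfd(ordering, faults_detected_by):
    """ordering: test names in prioritized order;
    faults_detected_by: {test_name: set of fault ids it detects}."""
    all_faults = set().union(*faults_detected_by.values())
    n, m = len(ordering), len(all_faults)
    # TF_i: 1-based position of the first test that reveals fault i.
    first_positions = []
    for fault in all_faults:
        for pos, test in enumerate(ordering, start=1):
            if fault in faults_detected_by[test]:
                first_positions.append(pos)
                break
    return 1.0 - sum(first_positions) / (n * m) + 1.0 / (2 * n)

faults = {"t1": {1}, "t2": {2, 3}, "t3": {4}, "t4": set()}
print(apfd(["t2", "t1", "t3", "t4"], faults))   # score of one candidate order
```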
Design of Relation Extraction Framework to Develop Knowledge Base
Authors: Poonam Jatwani, Pradeep Tomar and Vandana Dhingra
Background: Web documents present information as natural language text that is not understandable by machines. Searching for specific information in the sea of web documents has become very challenging, as searches return many irrelevant documents along with the relevant ones. To retrieve relevant information, semantic knowledge can be stored in a domain-specific ontology, which helps in understanding the user's needs.
Methods: In this paper, a framework for extracting and visualising semantic knowledge is designed. The proposed approach is based on the assumption that the semantics of text can be extracted by creating the syntactic structure of the text, for which the Stanford parser is used. Parsing of the corpus text produces morphological structures in a more machine-readable format and thus provides a better basis for constructing syntactic-semantic rules manually. The tagged form of each sentence is taken, and a set of rules based on dependency relationships is built manually. Sentence-level analysis is performed for concept generation, for properties and for hierarchical relation extraction, using the dependency parse tree as the means of relation extraction.
Results: The extracted concepts and the relations among the various entities constitute a knowledge base in the form of an ontology.
Conclusion: The proposed information extraction model successfully filters the desired information from the large ocean of the internet and creates a semantic structure to represent data in a standard machine-understandable format, describing entities along with their properties and relationships.
Software Reliability Prediction of Open Source Software Using Soft Computing Technique
Authors: Saini G.L., Deepak Panwar and Vijander Singh
Background: In software development, reliability plays a significant role; it is a non-functional requirement of the software. Before using Open Source Software (OSS) for software development, it is essential to check the quality of the open source software, and it is challenging to identify which OSS is suitable for the development of a software product. Conventional software reliability prediction models are suitable for Commercial Off-The-Shelf (COTS) software, but they are not sufficient for predicting the reliability of open source software, which has some extra characteristics: its source code is freely available and is also modifiable.
Methods: Most researchers have proposed mathematical models based on crisp set theory to estimate software reliability. The proposed methodology does not rely on such a mathematical model; instead, a fuzzy logic based soft computing approach is used to analyze the reliability of OSS. The goal of this paper is to propose a fuzzy logic soft computing model that uses three reliability metrics for estimating the reliability of open source software.
Results: The software reliability model is tested on a few software applications, and the outcomes confirm the effectiveness of the model.
Conclusion: In order to assess open source software reliability, a fuzzy logic based soft computing technique has been proposed.
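As a rough illustration of the kind of inference such a model performs, the sketch below hand-rolls a Mamdani-style estimate from three normalized metrics; the metric names, membership functions, rules and output levels are invented for the example and are not the paper's.

```python
# Rough illustration only: a hand-rolled Mamdani-style fuzzy estimate mapping
# three normalized reliability metrics to a crisp reliability score. The
# metrics, membership functions and rules are assumptions for this sketch.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def low(x):  return tri(x, -0.001, 0.0, 0.5)
def med(x):  return tri(x, 0.0, 0.5, 1.0)
def high(x): return tri(x, 0.5, 1.0, 1.001)

def fuzzy_reliability(defect_density, change_rate, test_coverage):
    """Inputs normalized to [0, 1]; returns a crisp reliability score."""
    # Rules (min as AND): e.g. low defects AND high coverage -> high reliability.
    r_high = min(low(defect_density), high(test_coverage))
    r_med = min(med(defect_density), med(test_coverage))
    r_low = max(high(defect_density), high(change_rate))
    # Weighted-average defuzzification over representative output levels.
    strengths = np.array([r_low, r_med, r_high])
    levels = np.array([0.2, 0.5, 0.9])
    return float(np.dot(strengths, levels) / (strengths.sum() + 1e-9))

print(fuzzy_reliability(defect_density=0.2, change_rate=0.3, test_coverage=0.8))
```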
An Optimal Feature Selection Method for Automatic Face Retrieval Using Enhanced Grasshopper Optimization Algorithm
Authors: Arun K. Shukla and Suvendu Kanungo
Background: Retrieval of facial images based on their content is a major area of research. However, images yield high-dimensional feature vectors, and selecting the relevant features is a challenging task due to the variations present in images of similar objects. The selection of relevant features is therefore an important step in making a facial retrieval system computationally efficient and more accurate.
Objective: The main aim of this paper is to design and develop an efficient feature selection method for obtaining relevant and non-redundant features from face images, so that the accuracy and computational cost of a face retrieval system can be improved.
Methods: The proposed feature selection method uses a new enhanced grasshopper optimization algorithm to obtain the significant features from the high-dimensional feature vectors of face images. The proposed algorithm modifies the target vector by considering more than one best solution, which maintains the elitism property and keeps the search from being trapped in a local optimum. It is then utilized to select the prominent features from the high-dimensional facial feature vector.
Results: The performance of the proposed feature selection method has been tested on the Oracle Research Laboratory face database. The proposed method eliminates 89% of the features, retaining the fewest features among the compared methods, and increases the accuracy of the face retrieval system to 96.5%.
Conclusion: The enhanced grasshopper optimization algorithm based feature selection method for the face retrieval system outperforms existing methods in terms of accuracy and computational cost.
Feature Selection Method Based on Grey Wolf Optimization and Simulated Annealing
Authors: Avinash C. Pandey and Dharmveer S. Rajpoot
Background: Feature selection, sometimes also known as attribute subset selection, is a process in which an optimal subset of features is selected with respect to the target data by reducing dimensionality and removing irrelevant features. There are 2^n possible solutions for a dataset having n features, which is difficult to solve by conventional attribute selection methods; in such cases metaheuristic-based methods generally outperform conventional methods.
Objective: The main aim of this paper is to enhance classification accuracy while minimizing the number of selected features and the error rate.
Methods: To achieve this objective, a binary metaheuristic feature selection method, bGWOSA, based on grey wolf optimization and simulated annealing, is introduced. The proposed method uses simulated annealing to balance the trade-off between exploration and exploitation. Its performance has been examined on ten feature selection benchmark datasets taken from the UCI repository and compared with binary cuckoo search, binary particle swarm optimization, binary grey wolf optimization, the binary bat algorithm and a binary hybrid whale optimization method.
Results: The proposed feature selection method achieves the highest accuracy on most of the datasets compared to the state-of-the-art, and its efficacy is validated by the experimental and statistical results.
Conclusion: Classification accuracy can be enhanced by employing feature selection methods. Moreover, performance can also be enhanced by tuning the control parameters of metaheuristic methods.
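Binary feature-selection metaheuristics of this kind typically optimize a wrapper fitness that trades classification error against subset size; the sketch below shows such a fitness with an assumed kNN classifier and weighting, scored on random masks in place of the evolving wolf positions.

```python
# Sketch of the wrapper-style fitness that binary feature selection
# metaheuristics such as bGWOSA typically optimize: classification error plus
# a penalty on subset size. Classifier, weights and masks are assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)   # a UCI-style benchmark dataset

def fitness(mask, alpha=0.99):
    """mask: boolean vector over features; lower fitness is better."""
    if not mask.any():
        return 1.0                                    # empty subsets are invalid
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5),
                          X[:, mask], y, cv=5).mean()
    ratio = mask.sum() / mask.size
    return alpha * (1.0 - acc) + (1.0 - alpha) * ratio

# A metaheuristic would evolve these masks; here we just score random ones.
rng = np.random.default_rng(42)
masks = rng.random((5, X.shape[1])) < 0.5
print([round(fitness(m), 4) for m in masks])
```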
EBPA: A Novel Framework for the Analysis of Process Performance on the Basis of Real-Time NASA Application
Authors: Shashank Sharma and Sumit Srivastava
Background: Workflow extraction is the connecting link between process modelling and data mining. The primary objective of workflow mining is to extract information from event logs and gain insight from it. The knowledge obtained from these logs can build understanding of the workflow of procedures and the association between different processes, which can help in upgrading them if necessary.
Objective: The aim of this paper is to present a process-performance based framework in which a reference model is compared with a model extracted from a large information system on the basis of key performance indices.
Methods: The proposed approach extracts the workflow model using workflow mining, which is more effective and efficient than building a workflow model from scratch. It shows how to process event log data gathered from different sensors (the Internet of Events) and how to analyze it in order to handle and improve the workflow using the resulting model.
Results: The proposed approach provides a process-based framework for the legacy system that ensures effective and efficient operation, so that the extracted model behaves like the reference model; the results are validated with Key Performance Indices (KPIs) for evaluating process performance.
Conclusion: In this experimental, data-centric approach, our ongoing work is to investigate a metric that quantifies the quality of the reference model and the extracted model. On the basis of the metric values, decisions are taken on legacy information system process management.
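The core workflow-mining step can be illustrated by deriving a directly-follows graph from an event log, as sketched below on a made-up log (not the NASA application data).

```python
# Illustrative sketch of the basic workflow-mining step the abstract builds on:
# deriving a directly-follows graph (activity -> next activity counts) from an
# event log. The log below is a made-up example.
from collections import Counter, defaultdict

event_log = [                       # one list of activities per case/trace
    ["receive", "review", "approve", "archive"],
    ["receive", "review", "reject"],
    ["receive", "review", "approve", "archive"],
]

def directly_follows(log):
    """Count how often activity a is immediately followed by activity b."""
    dfg = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg

def successors(dfg):
    """Adjacency view of the discovered workflow graph."""
    adj = defaultdict(dict)
    for (a, b), count in dfg.items():
        adj[a][b] = count
    return dict(adj)

print(successors(directly_follows(event_log)))
```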
The Development of a Modified Ear Recognition System for Personnel Identification
Authors: Haitham S. Hasan and Mais A. Al-Sharqi
Background: This study proposes a Match Region Localization (MRL) Ear Recognition System (ERS). Captured ear images are pre-processed through cropping and enhancement, segmented using the proposed MRL segmentation algorithm and divided into 160 sub-images. The principal features of the segmented ear images are extracted and used in template generation, and k-nearest-neighbor classifiers with the Euclidean distance metric are applied for classification.
Objective: The proposed ERS exhibits a recognition accuracy of 97.7%. Other publicly available ear datasets can be tested using the proposed system for cross-database comparison, and the system can be improved by reducing its errors.
Methods: This research follows four major stages, namely the development of a PCA-based ear recognition algorithm, implementation of the developed algorithm, determination of the optimum ear segmentation method, and evaluation of the performance of the technique.
Results: The False Acceptance Rate (FAR) of the developed Ear Recognition System (ERS) is 0.06, which implies that six out of every 100 intruders will be falsely accepted.
Conclusion: The developed ERS outperforms the existing ERS by approximately 24.61% in terms of recognition accuracy; the developed ERS can be tested on other publicly available ear databases to check its performance on larger platforms.
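A compact sketch of the PCA-based recognition stage is given below, with a generic eigen-feature projection and a Euclidean 1-NN matcher standing in for the paper's MRL segmentation and template pipeline; the gallery data is a placeholder.

```python
# Sketch of the PCA + nearest-neighbour recognition stage the abstract
# describes; the MRL segmentation is not reproduced and the gallery below is
# random placeholder data standing in for pre-processed ear images.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Placeholder gallery: flattened, pre-processed ear images and subject labels.
rng = np.random.default_rng(0)
gallery = rng.random((50, 64 * 48))          # 50 images of size 64 x 48
labels = np.repeat(np.arange(10), 5)         # 10 subjects, 5 images each

recognizer = make_pipeline(
    PCA(n_components=20),                                  # eigen-ear features
    KNeighborsClassifier(n_neighbors=1, metric="euclidean"),
)
recognizer.fit(gallery, labels)

probe = rng.random((1, 64 * 48))             # a new ear image to identify
print("predicted subject:", recognizer.predict(probe)[0])
```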