Recent Advances in Computer Science and Communications - Volume 14, Issue 6, 2021
A Comprehensive Review of Load Balancing Techniques in Cloud Computing and Their Simulation with CloudSim Plus
Authors: Sudha Narang, Puneet Goswami and Anurag Jain
Background: The field of cloud computing has been evolving for over a decade now. Load balancing is an important component of cloud computing. Load balancing implies the scheduling of cloudlets (tasks) on virtual machines. Since this is an NP-hard problem, various heuristics for load balancing have been proposed in the research literature. The heuristics have been categorized, simulated and benchmarked in various ways; however, the information is scattered across many review articles. Objective: This review aims to bring a broad range of load balancing heuristics found in the research literature under one umbrella. It includes a comprehensive list of heuristics, a holistic set of criteria for their classification, and the key performance metrics and simulation tools used for their benchmarking. An illustration of a fair and comprehensive comparison of heuristics is provided using CloudSim Plus, a recent and advanced simulation tool. Methods: The simulations performed with CloudSim Plus employ a generic model of task and machine heterogeneity, with Poisson arrival of cloudlets and an exponential distribution of cloudlet lengths, to emulate actual cloud-computing scenarios. The simulation results in terms of key performance metrics are used to compare four centralized load balancing heuristics: Join Shortest Queue (JSQ), Join Idle Queue (JIQ), Round Robin and Minimum Completion Time (MCT).
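To make the compared policies concrete, here is a minimal Python sketch of two of them, outside any CloudSim Plus API; the arrival rate, VM speeds and mean cloudlet length are illustrative assumptions, not values from the paper:

```python
import random

random.seed(42)
NUM_VMS = 4
vm_mips = [500, 1000, 1500, 2000]            # heterogeneous VM speeds (assumed)
queues = [[] for _ in range(NUM_VMS)]        # pending cloudlet lengths per VM

def jsq(_length):
    # Join Shortest Queue: pick the VM with the fewest queued cloudlets
    return min(range(NUM_VMS), key=lambda i: len(queues[i]))

def mct(length):
    # Minimum Completion Time: pick the VM finishing this cloudlet earliest
    return min(range(NUM_VMS),
               key=lambda i: (sum(queues[i]) + length) / vm_mips[i])

t = 0.0
for _ in range(1000):
    t += random.expovariate(5.0)             # Poisson arrivals (rate assumed)
    length = random.expovariate(1 / 2000.0)  # exponential cloudlet length (MI)
    queues[mct(length)].append(length)       # swap in jsq to compare policies

print([round(sum(q) / m, 1) for q, m in zip(queues, vm_mips)])  # load per VM (s)
```

Departures are not simulated here, so the sketch only contrasts the assignment rules, not steady-state queueing behaviour.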
A Comparative Review for Question Answering Frameworks on the Linked Data
Authors: Ceren O. Tasar, Murat Komesli and Murat O. Unalir
Background: One of the state-of-the-art techniques for question answering frameworks is the use of linked data, converting the user input into SPARQL, the query language for linked data. Objective: The main target is to emphasize the most fundamental issues in developing a question answering framework that accepts input in natural language and converts it into SPARQL. Methods: The trend of applying linked data as a data source is gaining popularity among researchers. In this study, question answering frameworks that combine both natural language processing techniques and linked data technologies are examined. The common principles of the examined frameworks are recognizing user intention, enriching the natural language input, and converting it into a SPARQL query. Results: Nine studies are selected for further examination and compared using the selection criteria defined in the research methodology. Conclusion: The resulting outcomes are presented and compared in detail. In addition to the comparative review of systems, a general architecture of question answering frameworks on linked data is drawn as an outcome of this study, to provide a guideline for researchers studying related research fields.
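As a toy illustration of the shared NL-to-SPARQL principle, a template-based sketch; the trivial keyword matching and the DBpedia URIs are illustrative assumptions, not any reviewed system:

```python
# Toy NL-to-SPARQL conversion: recognize intent, map entities/properties to
# URIs, then fill a query template. Real frameworks use NLP pipelines and
# linked-data vocabularies instead of these hard-coded maps.
PROPERTY_MAP = {"capital": "http://dbpedia.org/ontology/capital"}
ENTITY_MAP = {"France": "http://dbpedia.org/resource/France"}

def question_to_sparql(question: str) -> str:
    prop = next(p for p in PROPERTY_MAP if p in question)   # crude intent
    ent = next(e for e in ENTITY_MAP if e in question)      # crude entity linking
    return (f"SELECT ?answer WHERE {{ "
            f"<{ENTITY_MAP[ent]}> <{PROPERTY_MAP[prop]}> ?answer . }}")

print(question_to_sparql("What is the capital of France?"))
```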
Inspirations from Nature for Meta-Heuristic Algorithms: A Survey
Authors: Rohit K. Sachan and Dharmender S. Kushwaha
Background: Nature-Inspired Algorithms (NIAs) are among the most efficient ways to solve advanced engineering and real-world optimization problems. Over the last few decades, various researchers have proposed an immense number of NIAs, which draw their inspiration from natural phenomena. A young researcher attempting to solve a problem using NIAs is bogged down by the plethora of proposals that exist today. Not every algorithm is suited to every kind of problem; some score over others. Objective: This paper presents a comprehensive study of seven NIAs that have new and unique inspirations. This study should help any new entrant easily understand the fundamentals of NIAs. Conclusion: Here, we classify the NIAs as natural evolution based, swarm intelligence based, biological based, science based and others. In this survey, well-established and relatively new NIAs, namely the Shuffled Frog Leaping Algorithm (SFLA), Firefly Algorithm (FA), Gravitational Search Algorithm (GSA), Flower Pollination Algorithm (FPA), Water Cycle Algorithm (WCA), Jaya Algorithm and Anti-Predatory NIA (APNIA), have been studied. The study presents a theoretical perspective of NIAs in a simplified form, based on their sources of inspiration, mathematical formulations, control parameters, features, variants and the areas of application where these algorithms have been successfully applied.
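Of the surveyed algorithms, Jaya makes the most compact illustration because it has no algorithm-specific control parameters; a minimal sketch on the sphere function (population size, iteration count and bounds are assumptions):

```python
import random

def jaya(f, dim=5, pop=20, iters=200, lo=-10.0, hi=10.0):
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(iters):
        best = min(X, key=f)
        worst = max(X, key=f)
        for i, x in enumerate(X):
            # Jaya update: move toward the best and away from the worst solution
            cand = [min(max(xj + random.random() * (bj - abs(xj))
                            - random.random() * (wj - abs(xj)), lo), hi)
                    for xj, bj, wj in zip(x, best, worst)]
            if f(cand) < f(x):           # greedy acceptance
                X[i] = cand
    return min(X, key=f)

sphere = lambda x: sum(v * v for v in x)
print(sphere(jaya(sphere)))              # should be close to 0
```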
Non-Cooperative Iris Segmentation: A Two-Stage Approach
Authors: M. R. Kumar and K. Arthi
Aim: The accuracy of non-cooperative iris recognition is highly dependent on proper segmentation of the iris region from the input eye image. The accuracy of traditional non-cooperative iris segmentation algorithms decreases significantly because of several noise factors such as specular reflections, occlusions, eyelashes, and eyelids. Although several techniques have been developed to overcome these drawbacks in the iris segmentation process, it is still a challenging task to localize the iris texture regions. Background: Segmentation is the most important process in a robust iris recognition system, because images captured in non-cooperative environments introduce occlusions, blur, specular reflections, and off-axis gaze. In this research, an effective two-stage iris segmentation technique is proposed for non-cooperative environments. Objective: To propose an effective two-stage iris segmentation technique for a non-cooperative environment. Methods: Modified Geodesic Active Contour-based level set segmentation with Particle Swarm Optimization (PSO) is employed for iris segmentation. Here, the PSO algorithm is used to minimize the energy of the gradient descent equation in a region-based level set segmentation algorithm. Global threshold-based segmentation (an enhanced Otsu's method) is employed for pupil region segmentation. Results: The experiments considered two well-known databases, UBIRIS.V1 and UBIRIS.V2. The simulation outcomes demonstrate that the proposed approach attains more accurate and robust iris segmentation under non-cooperative conditions. The modified Geodesic Active Contour-based level set segmentation with the PSO algorithm also attains better results than conventional segmentation techniques. Conclusion: An effective two-stage iris segmentation technique for non-cooperative environments is proposed, using Geodesic Active Contour-based level set segmentation with Particle Swarm Optimization, with an enhanced Otsu method employed for pupil segmentation. The proposed technique segments the iris in its genuine shape, which will support a more robust and precise iris recognition system.
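A bare-bones PSO loop of the kind used above to minimize the segmentation energy; a simple quadratic stands in for the region-based level-set energy, and the inertia and acceleration coefficients are the usual textbook defaults, not the paper's settings:

```python
import random

def pso(energy, dim=2, n=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=energy)              # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if energy(pos[i]) < energy(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=energy)
    return gbest

# stand-in energy; the paper minimizes a region-based level-set energy instead
print(pso(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2))
```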
Optimal Stochastic Gradient Descent with Multilayer Perceptron Based Student's Academic Performance Prediction Model
Authors: S. Ranjeeth, T.P. Latchoumi and P. V. Paul
Introduction: Educational data mining and machine learning models have gained increasing interest among researchers and academicians in recent years. They are used to extract meaningful information from educational databases, which can be applied to predict student academic performance. Objective: The main objective of this research is to predict students' performance, which is highly beneficial for taking remedial actions in the present educational system. It also offers suitable student assistance, allowing an educational institution or teacher to help students gain extra marks. Methods: The presented model operates in two stages, namely classification and outlier detection. Initially, a multilayer perceptron is employed for data classification. Later, a stochastic gradient descent classifier and the multilayer perceptron are integrated to classify the data effectively. To further improve the classification of student academic performance, an outlier detection method, namely a radial basis function network, is applied to remove misclassified instances. Results: The proposed model achieves superior classification performance, with a maximum precision of 79.30, recall of 79.30, accuracy of 79.30, F-score of 79.30 and kappa value of 52.40. The simulation outcome exhibited that the multilayer perceptron and stochastic gradient descent model offers better results than the other classifiers, and outlier detection using the radial basis function model takes the classifier results to the next level. Conclusion: Using classification techniques, a new student academic performance model is proposed that addresses some issues in the existing models and can predict student academic performance and assist in proper time.
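A sketch of the two classifiers being combined, using scikit-learn stand-ins; the synthetic data and the simple probability averaging are assumptions, since the paper's exact integration scheme is not given here:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))             # stand-in student features
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # stand-in pass/fail label
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(Xtr, ytr)
# loss="log_loss" ("log" before scikit-learn 1.1) makes predict_proba available
sgd = SGDClassifier(loss="log_loss", max_iter=1000).fit(Xtr, ytr)

# naive integration: average the two models' class probabilities
proba = (mlp.predict_proba(Xte) + sgd.predict_proba(Xte)) / 2
print("accuracy:", (proba.argmax(axis=1) == yte).mean())
```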
Research on Interval-Valued Intuitionistic Fuzzy Multi-Attribute Decision Making Based on Projection Model
Authors: Sha Fu, Xilong Qu, Yezhi Xiao, Hangjun Zhou and Yun Zhou
Background: This work addresses multi-attribute decision making where the decision information consists of interval-valued intuitionistic fuzzy numbers and the attribute weight information is not completely determined. Methods: Intuitionistic fuzzy set theory introduces a non-membership function; as an extension of fuzzy set theory, it has certain advantages in solving complex decision-making problems. A projection model based interval-valued intuitionistic fuzzy multi-attribute decision-making scheme was proposed in this study. The objective weight of each attribute was obtained using an improved interval-valued intuitionistic fuzzy entropy, and the comprehensive weight of the attribute was then obtained according to the preference information. Results: In processing the decision-making matrix, the concept of the interval-valued intuitionistic fuzzy ideal point and its related concepts were defined, the score vector of each scheme was calculated, a projection model was constructed to measure the similarity between each scheme and the interval-valued intuitionistic fuzzy ideal point, and the schemes were ranked according to the projection value. Conclusion: The efficiency and usability of the proposed approach are demonstrated in a case study.
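The ranking step rests on ordinary vector projection; a numeric sketch with made-up score vectors (the paper's score vectors come from the fuzzy decision matrix instead):

```python
import numpy as np

def projection(a, b):
    """Projection of scheme vector a onto ideal-point vector b: (a.b)/||b||."""
    return float(np.dot(a, b) / np.linalg.norm(b))

ideal = np.array([0.9, 0.8, 0.95])               # made-up ideal-point scores
schemes = {"A1": np.array([0.7, 0.6, 0.9]),
           "A2": np.array([0.8, 0.7, 0.85])}
ranked = sorted(schemes, key=lambda k: projection(schemes[k], ideal),
                reverse=True)
print(ranked)  # a larger projection onto the ideal point ranks higher
```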
Centre-of-Mass Based Gait Recognition for Person Identification
By Rajib Ghosh
Background: Gait recognition focuses on the identification of persons from their walking activity. This type of system plays an important role in visual surveillance applications. The walking pattern of every person is unique and difficult for others to replicate. Objective: The present article focuses on developing a person identification system based on gait recognition. Methods: In this article, a novel gait recognition approach is proposed to show how human-body centre-of-mass-based walking characteristics can be used to recognize unauthorized and suspicious persons when they enter a surveillance area. The walking pattern varies from person to person mainly due to differences in footsteps and body movement. Initially, the background is modelled from the input video, captured through static cameras deployed for security purposes. The foreground moving object in individual frames is then segmented using a background subtraction algorithm. Centre-of-mass based discriminative features of the various walking patterns are then learned using a Support Vector Machine (SVM) classifier to identify each unique walking pattern. Results: The proposed system has been evaluated using a self-generated dataset containing side views of various walking video clips. The experimental results demonstrate that the proposed system achieves an encouraging person identification rate. Conclusion: This work can be further extended to provide a general approach for developing an automatic person identification system in an unconstrained environment.
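A compressed OpenCV sketch of the pipeline's first stages, background subtraction followed by a centre-of-mass feature per frame; the video path is a placeholder and the downstream SVM is only indicated:

```python
import cv2

cap = cv2.VideoCapture("walk_side_view.mp4")       # placeholder video path
subtractor = cv2.createBackgroundSubtractorMOG2()  # background modelling
centres = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                 # foreground segmentation
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] > 0:                               # centre of mass of silhouette
        centres.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
cap.release()

# centre-of-mass trajectories would then feed an SVM classifier, e.g.:
# from sklearn.svm import SVC; SVC().fit(features, person_ids)
print(len(centres), "centre-of-mass points extracted")
```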
Quantitative Analysis and Evaluation of Tractor Gear Shifting Comfort with Entropy-Based Matter-Element Model
Authors: Wei Jiang, Hongmei Xu, Wenjie Zhong, Jiajun Dong and Yujun Shang
Background: The gear shifting comfort of tractors is affected by various factors, which makes it extremely difficult to describe handling comfort accurately and quantitatively through objective physical quantities. The calculation and evaluation of manipulation-comfort indices mostly involve matrix operations, and numerical operation is one of the powerful core functions of MATLAB. Objective: This study takes the gear shifting process of tractors as the research object and uses the MATLAB/GUI toolbox to develop a shifting comfort evaluation system, thereby helping to improve gear shifting comfort. Methods: The procedures of holistic assessment, element subdivision, and detailed tracking were adopted as the analysis method. The operation process was divided into four action units, including pedal free travel, separation travel, gear change, and joint travel, according to the concept of "differentiation" and manipulation behavior. Based on the matter-element model, the process of clutch operation was described by the coupling of action elements. The closeness between the actual model and the target model was used to measure the degree of handling comfort. The results were then compared with those of a multi-index comprehensive evaluation. Results: It was shown that separation travel has the greatest influence on handling comfort. Taking vehicle 1 as an example, the matter-element analysis model gave an overall comfort value of 0.8359, which is close to the value of the multi-index comprehensive evaluation (0.8156). Moreover, the evaluation results give the score of each action unit, which directly reveals the action units with the lowest level of comfort. The evaluation results of the matter-element model can not only reflect the handling comfort of different components, but also locate the negative aspects of the whole operation. Conclusion: This method is effective in overcoming the ambiguity of the concept of comfort and the generality of overall evaluation, as well as avoiding the uncertainty of subjective evaluation.
The Prototype for Drone Self-Navigation Utilizing in Underground Mine
Authors: Azadeh Nazemi, Fatemeh Moghaddam, Niloofar Tavakolian and Shabnam Z. Afshar
Aim: This research aims to find a low-cost, safe and feasible approach for localisation in underground tunnels. Objective: The objective of this study is to design a system for drone self-navigation in underground, GPS-denied areas. Methods: The self-navigation system proposed by this research utilises triangle similarity for depth measurement and Quick SIFT key points for marker detection, whereas distance measurement relies on IMU data integration and marker global coordinate values. Results: In order to implement the designed self-navigation system, a prototype was made. This prototype equips the drone with capturing devices such as a night vision camera, measurement devices such as an IMU, and a processing unit such as a Raspberry Pi for real-time processing, to collect data from the IMU and camera. The processing unit is responsible for sending commands to the motor drivers, avoiding obstacles, and heading to the final point. Conclusion: Experimental results obtained under laboratory conditions indicated that the average navigation-system update time was about 0.7 s.
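The triangle-similarity depth measurement mentioned in Methods is a one-line relation; a small sketch with assumed calibration values (the real marker size and focal length would come from the prototype's camera calibration):

```python
def depth_from_marker(marker_width_px: float,
                      real_width_m: float = 0.20,       # known marker size (assumed)
                      focal_px: float = 640.0) -> float:  # calibrated focal length
    """Triangle similarity: depth = (real width x focal length) / pixel width."""
    return real_width_m * focal_px / marker_width_px

print(depth_from_marker(80.0))  # a marker 80 px wide appears 1.6 m away
```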
Single Channel EEG Signal for Automatic Detection of Absence Seizure Using Convolutional Neural Network
Authors: Niha K. Basha and Aisha B. Wahab
Background: In this paper, a Convolutional Neural Network that extracts seizure features and classifies them into the normal or absence seizure class is proposed, to empower monitoring systems with automatic detection of absence seizures. The training data is collected from normal and absence seizure subjects in the form of electroencephalography. Objective: To perform automatic detection of absence seizures using a single-channel electroencephalography signal as input. Methods: This data is used to train the proposed Convolutional Neural Network to extract and classify absence seizures. The Convolutional Neural Network consists of three layers: 1) a convolutional layer, which extracts the features in the form of a vector; 2) a pooling layer, which reduces the dimensionality of the convolutional layer's output; and 3) a fully connected layer, where the activation function called soft-max is used to find the probability distribution over the output classes. Results: The paper goes through the automatic detection of absence seizures in detail and provides a comparative analysis of classification between a Support Vector Machine and the Convolutional Neural Network. Conclusion: The proposed approach outperforms the Support Vector Machine in the automatic detection of absence seizures.
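A minimal Keras rendering of the three-layer design described above; the window length, filter count and training data are assumptions, since the paper's exact hyperparameters are not given here:

```python
import numpy as np
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(512, 1)),             # single-channel EEG window (assumed)
    layers.Conv1D(16, 5, activation="relu"),  # 1) convolution extracts features
    layers.MaxPooling1D(4),                   # 2) pooling reduces dimensionality
    layers.Flatten(),
    layers.Dense(2, activation="softmax"),    # 3) soft-max over {normal, absence}
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

X = np.random.randn(64, 512, 1)               # stand-in EEG segments
y = np.random.randint(0, 2, 64)               # stand-in labels
model.fit(X, y, epochs=1, verbose=0)
print(model.predict(X[:1], verbose=0))        # class probability distribution
```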
An Anatomy of a Hybrid Color Descriptor with a Neural Network Model to Enhance the Retrieval Accuracy of an Image Retrieval System
Authors: Shikha Bhardwaj, Gitanjali Pandove and Pawan K. Dahiya
Background: In order to retrieve a particular image from a vast repository of images, an efficient system is required, and such a system is well known by the name Content-Based Image Retrieval (CBIR). Color is an important attribute of an image, and the proposed system consists of a hybrid color descriptor which is used for color feature extraction. Deep learning has gained prominence in the current era, so the performance of this fusion-based color descriptor is also analyzed in the presence of deep learning classifiers. Methods: This paper describes a comparative experimental analysis of various color descriptors; the best two are chosen to form an efficient color-based hybrid system denoted Combined Color Moment-Color Autocorrelogram (Co-CMCAC). Then, to increase the retrieval accuracy of the hybrid system, a Cascade Forward Back Propagation Neural Network (CFBPNN) is used. The classification accuracy obtained using CFBPNN is also compared to the Patternnet neural network. Results: The results for the hybrid color descriptor show that the proposed system attains superior results of 95.4%, 88.2%, 84.4% and 96.05% on the Corel-1K, Corel-5K, Corel-10K and Oxford flower benchmark datasets respectively, compared to many state-of-the-art techniques. Conclusion: This paper presents an experimental and analytical study of different color feature descriptors, namely Color Moment (CM), Color Auto-Correlogram (CAC), Color Histogram (CH), Color Coherence Vector (CCV) and Dominant Color Descriptor (DCD). The proposed hybrid color descriptor (Co-CMCAC) is utilized for the extraction of color features, with a Cascade Forward Back Propagation Neural Network (CFBPNN) used as a classifier, on four benchmark datasets, namely Corel-1K, Corel-5K, Corel-10K and Oxford flower.
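The colour-moment half of the Co-CMCAC descriptor is straightforward to sketch: per-channel mean, standard deviation and skewness of an RGB image (the autocorrelogram half and the CFBPNN classifier are omitted here):

```python
import numpy as np

def color_moments(img: np.ndarray) -> np.ndarray:
    """First three colour moments (mean, std, skewness) per RGB channel."""
    feats = []
    for c in range(3):
        ch = img[:, :, c].astype(float).ravel()
        mean, std = ch.mean(), ch.std()
        skew = np.cbrt(((ch - mean) ** 3).mean())  # cube root of 3rd moment
        feats += [mean, std, skew]
    return np.array(feats)                         # 9-dimensional descriptor

print(color_moments(np.random.randint(0, 256, (64, 64, 3))).shape)
```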
Energy-Efficient and Degree-Distance Clustering Based Hierarchical Routing Protocol for WSNs
Authors: Abdelkrim Hadjadj, Bilal Aribi and Mourad Amad
Background: Wireless Sensor Networks (WSNs) are of crucial importance in today's applications and are very useful in creating an intelligent environment. The characteristics of WSNs, such as the increasingly smaller size of the sensors, their lower cost, and wireless communication support, allow sensor networks to quickly invade several areas of application and have made them a very active field of research. Such networks need an efficient technique that guarantees optimization and a better distribution of the energy resource. The sensors collaborate to carry out specific tasks according to the objectives of the application. Objective: This paper uses degree-distance clustering to optimize the routing process in wireless sensor networks. Methods: Our protocol aims to exploit the energy of the nodes selected as cluster heads more equitably, and to save the energy dissipated while routing the captured data to the base station. Results: The simulation results show that our proposed protocol reduces energy dissipation and prolongs the lifetime of a large network. Conclusion: Hierarchical routing in WSNs can be optimized when the clustering technique is efficiently established.
Exhaust Emission Characteristics of a Three-Wheeler Auto Diesel Engine Fueled with Pongamia, Mahua and Jatropha Biodiesels
Authors: Bobbili Prasadarao, Aditya Kolakoti and Pudi Sekhar
Background: In India, three-wheeler auto diesel engines, also known as autorickshaws, play a vital role in day-to-day transportation. On the other hand, they pump a huge amount of harmful exhaust emissions into the atmosphere. As per a European Union study, 1% of India's over two billion tonnes of annual vehicular CO2 emissions come from autorickshaws. Objective: To address the issue of high exhaust emissions from diesel engines, this paper proposes Pongamia (PME), Mahua (MME) and Jatropha (JME) biodiesels as alternative fuels. Methods: The biodiesels are produced by the transesterification process, and exhaust emission analysis is carried out on a single-cylinder, four-stroke, three-wheeler auto diesel engine at a constant speed of 1500 rpm, with diesel as the reference fuel and neat (100%) PME, MME and JME as alternative fuels. Results: The exhaust emission analysis reveals a maximum reduction in Unburnt Hydrocarbons (UHC), Carbon Monoxide (CO), NOx, Carbon Dioxide (CO2) and smoke compared to diesel fuel. At maximum load, the NOx emission is reduced by 18.41% for JME, 17.46% for MME and 7.61% for PME. Low levels of CO emissions are recorded for JME (66%), followed by MME (33%) and PME (22%). UHC is reduced by 85.75% for JME and MME, and a 14.28% reduction is observed for PME. Smoke emissions are also reduced, by 18.84% for PME and MME and by 14.49% for JME. Conclusion: All the methyl esters exhibit significant reductions in harmful exhaust emissions compared to diesel fuel, and jatropha biodiesel is noted as the better choice.
YOLOv3-Tesseract Model for Improved Intelligent form Recognition
Authors: Zhang Yun-An, Pan Ziheng, Dui Hongyan and Bai Guanghan
Background: YOLOv3-Tesseract is widely used for intelligent form recognition because it exhibits several attractive properties, and it is important to improve the accuracy and efficiency of optical character recognition. Methods: YOLOv3 exhibits classification advantages for object detection, while Tesseract can effectively recognize regular characters in the field of optical character recognition. In this study, a YOLOv3- and Tesseract-based model for improved intelligent form recognition is proposed. Results: First, YOLOv3 is trained to detect the position of text in a table and subsequently segment text blocks. Second, Tesseract is used to recognize the text blocks individually, combining YOLOv3 and Tesseract to achieve table character recognition. Conclusion: The proposed method is demonstrated by experimental simulation on the Tianchi big data set. The YOLOv3-Tesseract model is trained and tested and effectively accomplishes the recognition task.
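The two-stage pipeline reduces to "detect boxes, then OCR each crop"; a sketch where the trained YOLOv3 detector is left as a hypothetical helper (loading real weights is out of scope) and Tesseract is driven through pytesseract:

```python
import numpy as np
import cv2
import pytesseract

def detect_text_boxes(image):
    """Hypothetical stand-in for the trained YOLOv3 detector: returns
    (x, y, w, h) boxes around text blocks in the form image."""
    return [(10, 10, 280, 40)]                 # placeholder box

image = np.full((100, 300, 3), 255, np.uint8)  # blank stand-in form scan
cv2.putText(image, "TOTAL: 42", (15, 40), cv2.FONT_HERSHEY_SIMPLEX, 1, 0, 2)

for (x, y, w, h) in detect_text_boxes(image):
    crop = image[y:y + h, x:x + w]             # stage 1: YOLOv3 localisation
    text = pytesseract.image_to_string(crop)   # stage 2: Tesseract OCR
    print((x, y, w, h), text.strip())
```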
A Cluster Analysis Method of Software Development Activities Based on Event Log
Authors: MingJing Tang, Tong Li, Rui Zhu and ZiFei Ma
Background: Event log data generated in the software development process contains historical information and future trends of software development activities. Mining and analyzing event log data helps identify and discover software development activities and provides effective support for software development process mining and modeling. Methods: Firstly, a deep learning model (Word2vec) was used for feature extraction and vectorization of software development process event logs. Then, the K-means clustering algorithm was used to cluster the vectorized logs, with the silhouette coefficient and intra-cluster SSE used to evaluate the clustering effect. Results: This paper obtains the mapping relationship between software development activities and events, realizing the identification and discovery of software development activities. Conclusion: Two practical software development projects (jEdit and ArgoUML) are used to prove the feasibility, rationality and effectiveness of the proposed method. This work provides effective support for software development process mining and software development behavior guidance.
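A condensed sketch of that pipeline with gensim and scikit-learn; the toy event log is invented, whereas the paper's logs come from the jEdit and ArgoUML histories:

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

logs = [["open", "edit", "save"], ["open", "edit", "commit"],
        ["build", "test", "commit"], ["build", "test", "deploy"]]  # toy event log

w2v = Word2Vec(logs, vector_size=16, min_count=1, seed=1)  # feature extraction
vecs = np.array([np.mean([w2v.wv[e] for e in trace], axis=0) for trace in logs])

km = KMeans(n_clusters=2, n_init=10, random_state=1).fit(vecs)
print("labels:", km.labels_)
print("silhouette:", silhouette_score(vecs, km.labels_))   # clustering quality
print("intra-cluster SSE:", km.inertia_)
```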
Prediction Strategies of Stock Market Data Using Deep Learning Algorithm
Authors: John Ayeelyan, Praveen Dominic, Malayalathan Adimoolam and Mohanan N. Balamurugan
Background: Predictive analytics draws on a multiplicity of statistical schemes from predictive modelling, data mining and machine learning. It scrutinizes present and historical data to make predictions about future or otherwise unknown events. Most predictive models are used in business analytics to overcome losses and gain profit, exploiting the patterns in historical data. Objective: Investors follow strategies for predicting stock values so as to invest in the more profitable stocks, and such strategies for searching stock market prices can be incorporated into intelligent methods and tools. Such strategies increase investors' profits and also minimize their risks, so prediction plays a vital role in stock market gains and is a very intricate and challenging process. Methods: The proposed optimized strategy is a Deep Neural Network with stochastic gradient descent for stock prediction. The neural network is trained using the back-propagation algorithm, with stochastic gradient descent as the optimization strategy. Results: The experiment on stock market price prediction is conducted in Python with a visualization package. The RELIANCE.NS, TATAMOTORS.NS and TATAGLOBAL.NS datasets, downloaded from the National Stock Exchange site, are taken as input, and more than 100,000 data points are used to train the deep learning model. The proposed model is developed on daily stock market prices to show how to build a model with better performance than the existing national exchange method.
Performance Estimation of a Wireless Sensor Network with a Mobile Sink Moving in Different Trajectories at Different Velocities
Authors: Vikas Raina, Ranjana Thalore and Jeetu Sharma
Background: Over the past few years, numerous Medium Access Control (MAC) protocols, routing protocols, node deployment mechanisms and duty cycle variation schemes have been designed to achieve high throughput, low delay and jitter, and long network lifetime in Wireless Sensor Networks (WSNs). In a WSN with a static sink, numerous sensors transmit their sensed data to the sink node. The coordinators present in the range of both the sensors and the sink have to forward a large number of packets, which causes rapid depletion of their batteries. These coordinators die too early, resulting in broken communication channels and the formation of energy holes. To save energy with a static sink, the duty cycle should be short. A mobile sink is a better option than a static sink if the duty cycle is long, as it balances energy consumption among the sensors. It is well observed that a mobile sink can achieve homogeneous energy depletion, leading to a stretched lifetime and enhanced network performance. Methods: The vital contribution of this paper is a simulation-based analysis of network performance with a mobile sink traversing different trajectories at different velocities. The intent is to find the most appropriate and efficient trajectory and velocity for a specific WSN with 100 nodes. The terrain area of the network is 210×210 m2 with a communication range of 20 m. The routing, network and MAC protocols implemented are Ad hoc On-Demand Distance Vector (AODV), Internet Protocol version 4 (IPv4) and Institute of Electrical and Electronics Engineers (IEEE) 802.15.4 respectively. The paper evaluates and analyzes the influence of lawn mower, elliptical and circular trajectories of a mobile sink moving at velocities of 0.5, 1 and 2 m/s. Performance varies significantly with the variation of trajectories and velocities, and precise utilization of sink mobility improves performance over a static sink. It is equally important to determine the most effective mechanism for implementing mobile sinks and to find the most appropriate scheme among them. Results: The performance parameters, such as total messages received, average end-to-end delay (seconds), jitter (seconds), throughput (bits per second), number of packets dropped, number of packets dropped due to channel access failure, residual battery (mAh) and network lifetime (hours), are evaluated and compared for the lawn mower, elliptical and circular trajectories at sink speeds of 0.5, 1 and 2 m/s. The simulation results show that the circular trajectory at a velocity of 2 m/s provides the optimum performance. Conclusion: The objective was to precisely analyze and evaluate the influence of different trajectories of a mobile sink moving at different velocities in a WSN of 100 nodes, to determine the most effective trajectory and velocity and to make power exhaustion uniform among the sensors. The purpose was to draw the attention of researchers in this field toward significant contributions to novel research.
Evolutionary Intelligent Data Warehousing Approach to Knowledge Discovery Systems: Dynamic Cubing
Authors: Harkiran Kaur, Kawaljeet Singh and Tejinder Kaur
Background: Numerous E-Migrants databases assist migrants in locating their peers in various countries, hence contributing largely to the communication of migrants staying overseas. Presently, these traditional E-Migrants databases face the issues of non-scalability, difficult search mechanisms and burdensome information update routines. Furthermore, analysis of migrants' profiles in these databases has remained unhandled to date, and hence they do not generate any knowledge. Objective: To design and develop an efficient and multidimensional knowledge discovery framework for E-Migrants databases. Methods: In the proposed technique, the results of complex calculations related to the On-Line Analytical Processing operations most probably required by end users are stored in the form of decision trees at the pre-processing stage of data analysis. While browsing the cube, these pre-computed results are called, offering a Dynamic Cubing feature to end users at runtime. This data-tuning step reduces query processing time and increases the efficiency of the required data warehouse operations. Results: Experiments conducted with a data warehouse of around 1000 migrants' profiles confirm the knowledge discovery power of this proposal. Using the proposed methodology, the authors have designed a framework efficient enough to incorporate the amendments made in the E-Migrants Data Warehouse systems at regular intervals, which was totally missing in the traditional E-Migrants databases. Conclusion: The proposed methodology facilitates migrants in generating dynamic knowledge and visualizing it in the form of dynamic cubes. Applying business intelligence mechanisms and blending them with tuned OLAP operations, the authors have managed to transform traditional datasets into an intelligent migrants Data Warehouse.
An Optimal IoT Device Placement Strategy for Agro-IoT Using Edge Computing
Authors: Gubba Balakrishna and Nageswara R. Moparthi
Background: Developments in remote sensing and automation technology have motivated the wide adoption of technological improvements in all fields of study and application. Objective: Countries with a high reliance on agriculture have improved conventional crop maintenance and procedures with the advent of technologies like remote sensing, sensor-based systems and the Internet of Things (IoT). Methods: The incorporation of advanced technology has demonstrated a high impact on productivity and reachability, and losses due to natural disasters have decreased significantly at the same time. The adoption of these technologies was, however, delayed by the initial knowledge barrier. The major bottlenecks in adapting IoT for agriculture are three. First, cost-benefit optimization cannot be achieved due to the limitations of the technology. Second, the unequal distribution of computing power makes the complete IoT network inflexible to further upgrades, or costly to maintain. Finally, a framework for region-specific agro-IoT is still in demand by the industry. Results: Consequently, this work proposes a novel IoT device placement strategy with the benefits of cost optimization and improvements in transmission; the benefits of the edge computing mechanism are also achieved in this work. Conclusion: Finally, this work also contributes a region-specific framework for Agro-IoT, intended to let economically motivated nations also avail themselves of the benefits of technology shifts.
Research on Edge Detection of Agricultural Pest and Disease Leaf Image Based on LVQ Neural Network
By Tongke Fan
Background: The Roberts, Sobel, Prewitt and other operators are commonly used in image edge detection, but because of the complex backgrounds of agricultural pest and disease images, the detection efficiency of these operators is not ideal. Objective: To improve the accuracy of crop disease image edge detection, the use of LVQ neural networks to detect crop disease image edges was studied. Methods: An LVQ1 neural network is proposed to detect image edges. The commonly used median feature, directional information feature and Krisch operator direction feature are used as the input signals of the LVQ1 neural network for training. Building on the simulations, an image feature capturing pixel-neighborhood consistency is added, and an edge detection algorithm using an LVQ2 neural network is proposed. Computer simulations show that the improved algorithm significantly improves the continuity of the output edge images. Results: The LVQ2 neural network completes edge detection of gray-scale images well; the output edge images have good continuity and clear contours and keep most of the original image information. Compared with the LVQ1 detection results, the edge images detected by the LVQ2 network show an obvious improvement in the processing of small edges, with clearer contours, indicating that the training method converges the network better and obtains more ideal output results. Conclusion: A simulation comparison was carried out on the Matlab platform. The results show that the LVQ2-based detection algorithm with the four image features as input signals significantly improves the continuity of the output edge images, and is more robust and generalizable than the traditional Sobel algorithm and the LVQ1 network.
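The LVQ1 update that both detectors build on is short enough to show directly; a sketch with random stand-in feature vectors, whereas the paper feeds median, directional and Krisch features:

```python
import numpy as np

def lvq1_train(X, y, n_protos_per_class=2, lr=0.1, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    protos = np.vstack([X[y == c][rng.choice((y == c).sum(), n_protos_per_class)]
                        for c in classes])
    labels = np.repeat(classes, n_protos_per_class)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            k = np.argmin(np.linalg.norm(protos - xi, axis=1))  # nearest prototype
            # LVQ1 rule: attract the prototype if labels match, repel otherwise
            protos[k] += lr * (xi - protos[k]) * (1 if labels[k] == yi else -1)
    return protos, labels

X = np.random.rand(100, 4)                  # stand-in per-pixel feature vectors
y = (X[:, 0] > 0.5).astype(int)             # stand-in edge / non-edge labels
protos, labels = lvq1_train(X, y)
print(protos.shape, labels)
```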
Isolated Word-Based Spoken Dialogue System Using Odia Phones
Authors: Basanta K. Swain, Sanghamitra Mohanty and Chiranji L. Chowdhary
Aims: To develop a spoken dialogue system in an Indian language with voice response and a voice-based biometric feature. Background: Most research on spoken dialogue systems is carried out in the U.S. and Europe; currently, a few government-funded projects on spoken dialogue systems (SDS) are carried out in Indian academic institutes. Objective: We have tried to use our spoken dialogue system to eliminate desktop clutter. It is a very normal tendency of computer users to place the most frequently used files, folders and application shortcuts on their computer's desktop. Cluttering the desktop not only slows down the productivity of the computer but also looks very messy and makes files difficult to find. Therefore, we use the spoken dialogue system to eliminate desktop clutter in a painless manner: files, folders and frequently used applications are opened for the user via spoken commands, with voice response. Methods: In this research article, we have attempted to utilize an Indian spoken language for communication with a spoken dialogue system. We have adopted a statistical machine learning algorithm called the Hidden Markov Model (HMM) for the development of the speech recognition engine. The speaker verification module is developed using the fuzzy c-means (FCM) algorithm. Speech synthesis is carried out using a diphone corpus. Results: The speaker verification module yielded satisfactory results, with an average accuracy of 66.2% using the FCM algorithm. It is also seen that fundamental frequency and formant frequencies carry the distinctive characteristics for speaker verification in an Indian spoken language. The vital module of the SDS, the speech recognition engine, is developed using the HMM, a statistical algorithm. The word accuracy of the ASR engine is 78.22% and 62.31% for seen and unseen users respectively. The voice response is given to the user as synthesized speech, whose audio quality is measured using the MOS test; the MOS value is found to be 3.8 and 3.6 over two distinct groups of listeners. Conclusion: In this research paper, we have developed a spoken dialogue system based on the Odia language phone set, integrating a speaker verification module to provide additional biometric-based security.
Analysis of Performance of Two Wavelet Families Using GLCM Feature Extraction for Mammogram Classification of Breast Cancer
Authors: Shivangi Singla and Uma Kumari
Background: Mammogram images are low-dose X-ray images which can detect breast cancer before a woman actually experiences it. Objective: To determine an accurate methodology for feature extraction using different wavelet families and different classification algorithms. Methods: Two wavelet families are used, namely Daubechies (db8) and Biorthogonal (bior3.7). The Gray-Level Co-occurrence Matrix is used to extract 9 features at each sub-band, giving 27 features over the three sub-bands of the Discrete Wavelet Transform. The features are extracted at three levels of decomposition, after which the classification algorithms Naive Bayes, Multilayer Perceptron, Fuzzy-NN and Genetic Programming are applied to the extracted features. The feature selection algorithms Wavelet and Principal Component Analysis are applied for selecting the features, and the classification accuracies are then determined and compared. Results: The Mammographic Image Analysis Society database, including 322 mammogram images from 161 patients, is used. Without feature selection, the Fuzzy-NN classification algorithm gives the best results at the third level of decomposition, with classification accuracy up to 99.68% for the db8 wavelet family and up to 99.98% for the bior3.7 wavelet family. With feature selection, Wavelet with Multilayer Perceptron gives classification accuracy up to 96.27% for the db8 wavelet family and up to 93.47% for bior3.7. Conclusion: The Fuzzy-NN algorithm gives the highest accuracy of 99.98% for the bior3.7 wavelet family. The preferred wavelet family thus differs with and without feature selection: db8 is the better choice with feature selection, and bior3.7 without feature selection.
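A sketch of extracting GLCM features from DWT sub-bands with PyWavelets and scikit-image; the random input stands in for a mammogram, and a few graycoprops outputs stand in for the paper's nine features per sub-band:

```python
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)  # stand-in mammogram

cA, (cH, cV, cD) = pywt.dwt2(img, "db8")           # one DWT level, db8 family
features = []
for band in (cH, cV, cD):                          # three detail sub-bands
    q = np.uint8(255 * (band - band.min()) / (np.ptp(band) + 1e-9))
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=256, normed=True)
    for prop in ("contrast", "homogeneity", "energy"):
        features.append(graycoprops(glcm, prop)[0, 0])
print(len(features), features[:3])
```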
Designing an Expert System for the Diagnosis of Multiple Myeloma by Using Rough Set Theory
Authors: Tooraj Karimi, Arvin Hojati and Reza Razavi
Background: One of the most interesting and important topics in the field of information systems and knowledge management is eliciting rules and collecting the knowledge of human experts in various subjects for use in expert systems. Many scientists have used decision support systems to support business or organizational decision-making activities, including clinical decision support systems for medical diagnosis. Objective: In this study, a rough set based expert system is designed for the diagnosis of one type of blood cancer called multiple myeloma. In order to improve the validity of the generated models, three condition attributes that describe the shape of "Total protein", "Beta2%" and "Gamma%" are added to the models to improve the decision attribute value domain. Methods: In this study, 1100 serum protein electrophoresis tests are investigated, and based on these test results, 15 condition attributes are defined. Four different rule models are obtained by extracting rules from reducts. The Janson and Genetic Algorithms with "Full" and "ORR" approaches have been used to generate the reducts. Results: The GA/ORR model, with 87% accuracy, is used as the inference engine of an expert system, and a unique user interface is designed to automatically analyze test results based on the generated models. Gamma% is detected as a core attribute of the information system. Conclusion: Based on the results of generating reducts, the Gamma% attribute is detected as a core of the information system. This means that the information resulting from this condition attribute has the greatest impact on the diagnosis of multiple myeloma. The GA/ORR model with 87% accuracy is selected as the inference engine of the expert system, and finally a unique user interface is created to help specialists diagnose multiple myeloma.
Multi-Criteria Decision-Making Techniques for Asset Selection
Authors: Shraddha Harode, Manoj Jha and Namita Srivastava
Background: Even with Fuzzy Set Theory (FST), expressing preferences has remained a matter of discussion for years: because of its single-valued membership, a fuzzy set cannot express all the desired information. Its extension, Hesitant Fuzzy Sets (HFSs), which allow all possible membership degrees lying in [0,1], is widely used where hesitancy occurs in taking preferences in decision making. Objective: The aim of this paper is to create a diversified portfolio where the return is maximal and the risk is minimal. Methods: Decision-making methods based on fuzzy soft set theory, namely Fuzzy Soft Set, Mean Potentially Approach and Soft Hesitant Fuzzy Rough Set, are used to construct the optimal portfolio, and a non-fuzzy method is applied for comparison. The Soft Hesitant Fuzzy Rough Set is found to be the best among these methods, and the proportions of the optimal portfolio are then obtained with the help of firefly optimization. Results: The Soft Hesitant Fuzzy Rough Set has the best outcomes on the basis of the performance measures of these methods, and a diversified portfolio is constructed with its help. After constructing the optimal portfolio, the firefly algorithm is applied to obtain the proportions of the seven assets. The results clearly show the firmness of the ranked portfolio, which has maximum return and minimum risk compared to the unranked portfolio. Conclusion: The firefly algorithm is applied to optimize the proportions of the seven optimal assets. The main result is that return and dividend are better and risk is lower compared to the method without ranking; the optimal portfolio with the ranking method is clearly better than without it.
Performance Comparison of Web Backend and Database: A Case Study of Node.JS, Golang and MySQL, Mongo DB
Authors: Faried Effendy, Taufik and Bramantyo Adhilaksono
Aims: This study aims to compare the performance of Golang and Node.js as web application backends in terms of response time, CPU utilization and memory usage, using MySQL and MongoDB as databases. Background: There has been a lot of literature and research addressing web server comparisons and database comparisons, but none has discussed the combination of the two. Node.js and Golang (Go) are popular platforms widely used as web and mobile application backends, while MySQL and MongoDB are two of the best open-source databases, with different characteristics. Objective: To compare the performance of Golang and Node.js as web application backends in terms of response time, CPU utilization and memory usage, using MySQL and MongoDB as databases. Methods: In this study, we compare four combinations of web server and database: Node.js-MySQL, Node.js-MongoDB, Go-MySQL and Go-MongoDB. Each database consists of 25 attributes with 1000 records, and each combination has the same routing URLs. A previous study found a significant time difference between MySQL and MongoDB in query operations with 1000 records, so in this study the routing/showAll URL uses 1000 records. Results: The results show that the combination of Go and MySQL is superior in CPU utilization and memory usage, while the Node.js and MySQL combination is superior in response time. Conclusion: From this study it can be concluded that the Go-MySQL combination is superior in memory usage and CPU utilization, while Node.js-MySQL is superior in response time. Other: With this research, web developers can determine the right platform for their application and reduce application development cost, so that the development process can be completed more quickly. Future research can test the best-performing platform with the WebSocket communication protocol and real-time technology, as these may provide different results.
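The measurement methodology boils down to timing identical routes across the backends; a small client-side sketch of response-time sampling (the endpoint URLs and sample count are placeholders, and the paper's CPU/memory metrics would be collected server-side instead):

```python
import time
import statistics
import urllib.request

ENDPOINTS = {  # placeholder URLs for two of the four backend/database combinations
    "node-mysql": "http://localhost:3000/showAll",
    "go-mysql": "http://localhost:8080/showAll",
}

for name, url in ENDPOINTS.items():
    samples = []
    for _ in range(30):                        # repeated requests per route
        t0 = time.perf_counter()
        urllib.request.urlopen(url).read()     # fetch all 1000 records
        samples.append(time.perf_counter() - t0)
    print(name, "median response:",
          round(statistics.median(samples) * 1000, 1), "ms")
```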
A Hybrid Hyper-Heuristic Flower Pollination Algorithm for Service Composition Problem in IoT
Authors: Neeti Kashyap, A. C. Kumari and Rita Chhikara
Objectives: Modern scientific applications are non-continuous and multivariate in nature, due to which traditional optimization methods suffer a lack of efficiency. Flower pollination is an interesting natural process, and novel optimization algorithms can be designed by employing the evolutionary capability of flower pollination to optimize resources. Methods: This paper introduces a hybrid algorithm named the Hybrid Hyper-Heuristic Flower Pollination Algorithm (HHFPA), which combines the Flower Pollination Algorithm (FPA) and a Hyper-Heuristic Evolutionary Algorithm (HypEA), and compares the basic FPA with the proposed HHFPA. FPA is inspired by the pollination process of flowers, whereas the hyper-heuristic evolutionary algorithm operates on the heuristic search space that contains all the heuristics for finding a solution to a given problem. The proposed algorithm is implemented to solve the Quality of Service (QoS) based Service Composition Problem (SCoP) in the Internet of Things (IoT). With an increasing number of functionally equivalent services on the web, selecting a suitable candidate service based on non-functional characteristics such as QoS has become a target for optimization. Results: The experimental results show that the proposed algorithm finds better solutions than the basic FPA. Conclusion: The empirical analysis also reveals that HHFPA outperforms the basic FPA in solving the SCoP, with a better convergence rate.
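A compact sketch of the basic FPA that HHFPA builds on, with the usual Levy-flight global pollination step; the switch probability and Levy exponent are common textbook defaults, not values from the paper:

```python
import math
import random

def levy(beta=1.5):
    # Mantegna's algorithm for a Levy-distributed step length
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta
                * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return random.gauss(0, sigma) / abs(random.gauss(0, 1)) ** (1 / beta)

def fpa(f, dim=4, pop=20, iters=300, p_switch=0.8):
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    best = min(X, key=f)
    for _ in range(iters):
        for i, x in enumerate(X):
            if random.random() < p_switch:   # global pollination via Levy flight
                cand = [xj + levy() * (bj - xj) for xj, bj in zip(x, best)]
            else:                            # local pollination between flowers
                a, b = random.sample(X, 2)
                cand = [xj + random.random() * (aj - bj)
                        for xj, aj, bj in zip(x, a, b)]
            if f(cand) < f(x):
                X[i] = cand
        best = min(X, key=f)
    return best

print(fpa(lambda x: sum(v * v for v in x)))  # should approach the zero vector
```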
Analysis of Epidemic, PROPHET and Spray and Wait Routing Protocols in the Mobile Opportunistic Networks
Authors: Jasvir Singh and Raman Maini
Background: Opportunistic Mobile Networks (OMNs) are a type of Mobile Ad hoc Network (MANET) with Delay-Tolerant Network (DTN) features, where end-to-end connectivity between sender and receiver rarely exists, due to the dynamic nature of the nodes and network partitions. The real use of OMNs is to provide connectivity in challenged environments. Methods: The paper presents a detailed analysis of three routing protocols, namely Epidemic, PROPHET and Spray and Wait, against variable message sizes and Time To Live (TTL) values in the network. The key contribution of the paper is to explore the routing protocols with mobility models for the dissemination of data to the destination. Routing uses the store-carry-forward mechanism for message transfer, and the network has to strike a compromise between message delivery ratio and delivery delay. Results: The results are generated from experiments with the Opportunistic Network Environment (ONE) simulator. The performance is evaluated based on three metrics: delivery ratio, overhead ratio and average latency. The results show that the minimum message size (256 KB) offers better delivery performance than the larger message size (1 MB). It has also been observed that with Epidemic routing there are more message replicas, which in turn increase the cost of delivery, so with a smaller message the protocol can reduce the overhead ratio by a high proportion. Conclusion: The average latency increases with the TTL of the message in all three protocols as the message size varies from 256 KB to 1 MB.
An Image Encryption Scheme Based on Hybrid Fresnel Phase Mask and Singular Value Decomposition
Authors: Shivani Yadav and Hukum Singh
Background: An asymmetric cryptosystem is suggested in the Affine and Fresnel transform domains using a Hybrid Fresnel phase Mask (HFM), a Hybrid Mask (HM) and Singular Value Decomposition (SVD) to deliver additional security to the scheme. The usage of the Affine Transform (AT) provides randomness in the input plane, which helps enlarge the key space, while SVD introduces nonlinearity into the process. Objective: In the FrT domain, hybrid masks and AT are used in an asymmetric cryptosystem with SVD to make the encoding procedure harder to attack. Methods: The affine transform is first applied to the plain image, which is then convolved with the HFM in the FrT domain with propagation distance Z1; the result is convolved with the HM in the FrT domain with propagation distance Z2, and lastly SVD is applied to the encoded image. Results: The validity of the suggested scheme has been confirmed using MATLAB R2018a (9.4.0.813654). The capability of the recommended scheme has been tested by statistical simulations such as histograms, entropy and correlation coefficients. Noise attack analysis has also been done so that the system becomes robust against attacks. Conclusion: An asymmetric cryptosystem is recommended using a pixel scrambling technique, the affine transform, which shuffles the pixels and hence helps secure the system, while the usage of SVD in the algorithm makes the system robust. Performance and strength analyses are carried out to scrutinize the strength and feasibility of the algorithm.
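Only the final SVD stage is sketched here, since the Fresnel propagation steps need optics-specific kernels; a numpy illustration of how SVD splits an encoded image into three factors that must all be present for exact recovery (random data stands in for the FrT-domain output, and treating the factors as key shares is an assumption about the scheme's intent):

```python
import numpy as np

encoded = np.random.rand(64, 64)           # stand-in for the FrT-domain image

U, s, Vt = np.linalg.svd(encoded)          # nonlinearity / key-splitting stage
# U, s and Vt act as three factors; all are needed for exact reconstruction
recovered = (U * s) @ Vt                   # scale columns of U by s, then apply Vt
print("max reconstruction error:", np.abs(recovered - encoded).max())
```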
Modified Gamma Network: Design and Reliability Evaluation
Authors: Shilpa Gupta and Gobind L. Pahuja
Background: VLSI technology advancements have resulted in requirements for high computational power, which can be achieved by implementing multiple processors in parallel. These processors have to communicate with their memory modules through Interconnection Networks (INs). Multistage Interconnection Networks (MINs) are used as INs, as they provide efficient computing at low cost. Objective: The objective of the study is to introduce a new reliable Gamma MIN, named the Modified Gamma Interconnection Network (MGIN), which provides reliability and fault tolerance with fewer stages of switching elements. Methods: Switching Elements (SEs) of bigger size, i.e. 2×3/3×2, have been employed at the input/output stages instead of 1×3/3×1 SEs, with a reduction of one intermediate stage. Fault tolerance is introduced in the form of disjoint paths formed between each source-destination node pair; hence reliability has been improved. Results: Terminal, Broadcast and Network Reliability have been evaluated using Reliability Block Diagrams for each source-destination node pair. The results show higher reliability values for the newly proposed network, and the cost analysis shows that the new MGIN is cheaper than other Gamma variants. Conclusion: The MGIN has better reliability and fault tolerance than previously proposed Gamma MINs.
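Terminal reliability over redundant disjoint paths follows directly from series-parallel Reliability Block Diagrams; a sketch where the per-SE reliability of 0.95 and the two three-stage disjoint paths are illustrative assumptions:

```python
def path_reliability(stage_reliabilities):
    """Series RBD: a path works only if every SE on it works."""
    r = 1.0
    for se in stage_reliabilities:
        r *= se
    return r

def terminal_reliability(paths):
    """Parallel RBD: the node pair is connected if at least one disjoint path works."""
    fail = 1.0
    for p in paths:
        fail *= 1.0 - path_reliability(p)
    return 1.0 - fail

r_se = 0.95                                  # assumed SE reliability
print(terminal_reliability([[r_se] * 3, [r_se] * 3]))  # two 3-stage disjoint paths
```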
Dynamic Trust Management Model for the Internet of Things and Smart Sensors: The Challenges and Applications
Authors: Anshu K. Dwivedi, A. K. Sharma and Rakesh Kumar
Background: The Internet of Things (IoT) is an important technology that promises a smart human life by allowing communication among objects and machines, together with people. Trust is an important parameter in IoT, closely related to sending a message from source to destination. Objective: In this model, the Internet of Things (IoT) system is enabled to deal with misbehaving nodes whose status is non-deterministic. This paper also presents an overview of trust management models in IoT. The accuracy, robustness and lightness of the proposed model are validated through a wide set of simulations. Methods: In order to achieve the desired objective, the following four contributions have been proposed to improve trust over the Internet of Things (IoT): 1) End-to-end Packet Forwarding Ratio (EPFR), 2) AEC, 3) Packet Delivery Ratio and 4) Detection Probability. Results: The performance of TM-IoT is calculated in terms of End-to-end Packet Forwarding Ratio (EPFR), AEC, Packet Delivery Ratio and Detection Probability. The experimental analysis shows the efficiency of the proposed model compared to existing work. Conclusion: The proposed TM-IoT model shows better experimental results than existing work in terms of End-to-end Packet Forwarding Ratio (EPFR), AEC, Packet Delivery Ratio and Detection Probability.
Audio-Visual Speech Recognition Using LSTM and CNN
Authors: Eslam E. El Maghraby, Amr M. Gody and M. H. Farouk
Background: Multimodal speech recognition has proved to be one of the most promising solutions for robust speech recognition, especially when the audio signal is corrupted by noise. As the visual speech signal is not affected by acoustic noise, it can be used to obtain more information to enhance speech recognition accuracy in noisy systems. The critical stage in designing a robust speech recognition system is choosing a reliable classification method from the large variety of available classification techniques. Deep learning is well known as a technique that can classify nonlinear problems and take into consideration the sequential characteristic of the speech signal. Much research has been done on applying deep learning to Audio-Visual Speech Recognition (AVSR) problems, owing to its amazing achievements in both speech and image recognition. Even though optimistic results have been obtained from continuing studies, research on enhancing accuracy in noisy systems and selecting the best classification technique is still gaining lots of attention. Objective: This paper aims to build an AVSR system that uses acoustic information combined with visual speech information and uses a classification technique based on deep learning to improve recognition performance in clean and noisy environments. Methods: Mel Frequency Cepstral Coefficients (MFCC) and the Discrete Cosine Transform (DCT) are used to extract effective features from the audio and visual speech signals respectively. The audio feature rate is greater than the visual feature rate, so linear interpolation is needed to obtain equal feature vector sizes, before early integration into a combined feature vector. Bidirectional Long Short-Term Memory (BiLSTM), one of the deep learning techniques, is used for the classification process, and the obtained results are compared to other classification techniques such as Convolutional Neural Networks (CNN) and traditional Hidden Markov Models (HMM). The effectiveness of the proposed model is proved on two multi-speaker AVSR datasets termed AVletters and GRID. Results: The proposed model gives promising results. In the case of GRID, using integrated audio-visual features achieves the highest recognition accuracies of 99.07% and 98.47%, with enhancements of up to 9.28% and 12.05% over audio-only for clean and noisy data respectively. For AVletters, the highest recognition accuracy is 93.33%, with an enhancement of up to 8.33% over audio-only. Conclusion: Based on the obtained results, we can conclude that increasing the size of the audio feature vector from 13 to 39 does not give an effective enhancement in recognition accuracy in a clean environment, but gives better performance in a noisy environment. BiLSTM is considered the optimal classifier for a robust speech recognition system when compared to CNN and traditional HMM, because it takes into consideration the sequential characteristic of the speech signal (audio and visual). The proposed model gives a great improvement in recognition accuracy, and decreases the loss value, for both clean and noisy environments compared to using audio-only features. Comparing the proposed model to previously obtained results on the same datasets, our model gives higher recognition accuracy, confirming its robustness.
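A skeletal version of the audio half of the front end plus the BiLSTM classifier, using librosa for 13 MFCCs and Keras for the network; the synthetic tone, layer sizes and two-class toy labels are assumptions, and the visual DCT stream is omitted:

```python
import numpy as np
import librosa
from tensorflow.keras import layers, models

sr = 22050
y_sig = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)       # 1 s stand-in audio
mfcc = librosa.feature.mfcc(y=y_sig, sr=sr, n_mfcc=13).T   # (frames, 13)

model = models.Sequential([
    layers.Input(shape=(mfcc.shape[0], 13)),
    layers.Bidirectional(layers.LSTM(64)),                 # BiLSTM over the sequence
    layers.Dense(2, activation="softmax"),                 # toy 2-word vocabulary
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

X = np.repeat(mfcc[None, ...], 8, axis=0)                  # fake batch of utterances
labels = np.random.randint(0, 2, 8)                        # fake word labels
model.fit(X, labels, epochs=1, verbose=0)
print(model.predict(X[:1], verbose=0))
```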