Recent Advances in Computer Science and Communications - Volume 13, Issue 2, 2020
Into the World of Underwater Swarm Robotics: Architecture, Communication, Applications and Challenges
Authors: Koyippilly S. Keerthi, Bandana Mahapatra and Varun Girijan Menon
Background: Driven by the curiosity of exploring the underwater world, science has devised various technologies and machines that help in performing activities such as exploring, navigating and plunging into the unknown world of oceanography. Underwater robots and vehicles are the outcome of extensive research by scientists aiming to discover the mysterious world of the ocean and how it can benefit humanity. Swarm robotics is an entirely new branch of robotics that has been developed based on swarm intelligence. Although swarm robotics is still in a nascent stage, researchers have contributed immensely to developing this technology. The objective of this paper is to present a comprehensive review covering the various technical and conceptual aspects of underwater swarm robotic systems. Methods: A systematic review of the state of the art has been performed, in which the contributions of various researchers were considered. The study emphasizes the concepts, technical background, architecture and communication medium, along with applicability in various fields, and also covers the issues and challenges faced in attaining them. Results: The incorporation of swarm intelligence into underwater robotics provides an entirely new angle on the working pattern of underwater robotic systems. Conclusion: The article is a systematic presentation of swarm robot technologies, their mechanisms, and the communication media conceived and designed for the vehicle's adaptability to the versatile nature of water. The paper delineates the various conceptual and technical details and their benefit to humanity.
Tree-Based Ant Colony Optimization Algorithm for Effective Multicast Routing in Mobile Adhoc Network
Authors: Priyanka Sharma, Manish K. Nunia, Madhushree Basavarajaish and Sudeep Tanwar
Background: Multimedia transmission over wireless communication is gaining momentum with the rapid use of mobile hand-held devices. Providing a QoS-based routing solution is a major challenge due to the transient and inaccurate state of Mobile Ad hoc Networks (MANETs). Discovering optimal multicast routes is an NP-hard problem, and hence QoS-based routing is typically an optimization problem. Swarm Intelligence is a heuristic approach that finds solutions to complex problems using the principle of collective behaviour of natural agents. Objective: An ACO-based approach for optimizing QoS-based multicast routing for multimedia streaming applications is proposed. The proposed approach performed well in comparison to other state-of-the-art approaches with respect to path maintenance, packet delivery ratio, and end-to-end delay. Methods: The multicast routing model is simulated as a tree structure, where the nodes represent stations and the edges represent the links between stations. Results: Results show that the proposed approach converges much faster than the conventional AntNet. With increasing size of the MANET environment, the convergence time of the proposed approach remains much better than that of AntNet. This is mainly due to trace maintenance, the tree-based approach for path selection, and the implementation of local and global updates of the pheromone values. Conclusion: We can conclude that the proposed approach is a more effective algorithm for multi-constraint multicast routing.
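The local and global pheromone updates the abstract credits for fast convergence can be sketched as follows. This is a hedged illustration using classic ACS-style update rules; the paper's exact formulas, parameter names (rho, alpha, tau0) and values are assumptions, not the authors' implementation.

```python
# Hedged sketch of local/global pheromone updates on a multicast tree
# (ACS-style rules; parameters rho, alpha, tau0 are assumed values).

def local_update(tau, edge, rho=0.1, tau0=0.01):
    """Evaporate pheromone on an edge as an ant traverses it."""
    tau[edge] = (1 - rho) * tau[edge] + rho * tau0
    return tau

def global_update(tau, best_path, best_cost, alpha=0.1):
    """Reinforce only the edges of the best multicast tree found so far."""
    for edge in best_path:
        tau[edge] = (1 - alpha) * tau[edge] + alpha / best_cost
    return tau

# toy 4-node tree: pheromone keyed by edge (u, v)
tau = {(0, 1): 1.0, (1, 2): 1.0, (1, 3): 1.0}
tau = local_update(tau, (0, 1))
tau = global_update(tau, [(0, 1), (1, 2)], best_cost=5.0)
```

Edges off the best tree ((1, 3) here) keep their pheromone, so reinforcement concentrates search on the best tree while local evaporation keeps exploration alive.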
Determining Network Communities Based on Modular Density Optimization
Authors: Seema Rani and Monica Mehrotra
Background: In today's world, complex systems are conceptually observed in the form of network structures. The communities inherently existing in networks provide a recognizable elucidation of how networks are organized. Community discovery in networks has grabbed the attention of researchers from multiple disciplines. The community detection problem has been modeled as an optimization problem, and in broad spectrum, existing community detection algorithms have adopted modularity as the optimizing function. However, modularity is not able to identify communities that are small relative to the size of the network. Methods: This paper addresses the resolution limit problem posed by modularity. The modular density measure succeeds in countering the resolution limit problem. Finding network communities with maximum modular density is an NP-hard problem. In this work, the discrete bat algorithm with modular density as the optimization function is recommended. Results: Experiments are conducted on three real-world datasets. To determine consistency, ten independent runs of the proposed algorithm have been carried out. The experimental results show that our proposed algorithm produces high-quality community structure along with small-size communities. Conclusion: The results are compared with traditional and evolutionary community detection algorithms. The final outcome shows the superiority of the discrete bat algorithm with modular density as the optimization function with respect to the number of communities, maximum modularity, and average modularity.
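The modular density objective the paper optimizes can be sketched as follows: each community contributes twice its internal edge count minus its external edge count, divided by community size. This is a minimal illustration of the measure itself (the discrete bat search is omitted, and the toy graph is an assumption).

```python
# Sketch of the modular density (D-value) objective:
# sum over communities of (2*internal_edges - external_edges) / |community|.

def modular_density(edges, partition):
    total = 0.0
    for comm in partition:
        comm = set(comm)
        internal = sum(1 for u, v in edges if u in comm and v in comm)
        external = sum(1 for u, v in edges if (u in comm) != (v in comm))
        total += (2 * internal - external) / len(comm)
    return total

# two triangles joined by a single bridge edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
score_good = modular_density(edges, [{0, 1, 2}, {3, 4, 5}])
score_bad = modular_density(edges, [{0, 1, 2, 3, 4, 5}])
```

The per-community division by size is what lets small, dense communities score well, countering the resolution limit of plain modularity.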
Learning-Based Task Scheduling Using Big Bang Big Crunch for Cloud Computing Environment
Authors: Pradeep S. Rawat, Priti Dimri and Punit Gupta
Background: Cloud computing is a growing industry offering secure, low-cost, pay-per-use resources. Efficient resource allocation is a challenging issue in the cloud computing environment. Many task scheduling algorithms are used to improve system performance; approaches such as ant colony optimization, genetic algorithms and Round Robin improve performance but are not cost-efficient at the same time. The scheduling issue and resource cost are resolved using improved meta-heuristic approaches. Methods: In this work, a cost-aware algorithm based on Big-Bang Big-Crunch task mapping is proposed, which reduces both the execution time and the cost paid for resources during execution. The proposed Big-Bang Big-Crunch based resource allocation technique is evaluated against the genetic approach. Results: Performance is measured using the optimization criteria of task completion time and resource operational cost over the duration of execution, across varying population sizes and numbers of user requests. The simulation shows that the proposed cost- and time-aware technique outperforms the existing genetic algorithm on the performance measurement parameters (average finish time, resource cost).
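The Big-Bang Big-Crunch loop behind the proposed task mapping can be sketched as follows: candidates scatter around a centre of mass (big bang), then collapse to a fitness-weighted centre (big crunch) with a shrinking radius. The one-dimensional cost function, initial centre and all parameters here are toy assumptions, not the paper's scheduling model.

```python
# Hedged sketch of Big-Bang Big-Crunch optimization on a toy cost
# function standing in for "execution time + resource cost".
import random

def cost(x):
    # assumed stand-in cost, minimised at x = 3
    return (x - 3.0) ** 2

def bbbc_minimise(iterations=60, pop_size=20, radius=5.0):
    centre = 0.0  # assumed initial centre of mass
    for k in range(1, iterations + 1):
        # big bang: scatter candidates, radius shrinking each iteration
        pop = [centre + random.gauss(0, radius / k) for _ in range(pop_size)]
        # big crunch: centre of mass weighted by inverse cost
        weights = [1.0 / (cost(x) + 1e-12) for x in pop]
        centre = sum(w * x for w, x in zip(weights, pop)) / sum(weights)
    return centre

random.seed(7)
best = bbbc_minimise()
```

The inverse-cost weighting pulls the centre of mass toward cheap candidates, and the shrinking radius turns the search from global exploration into local refinement.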
Catechize Global Optimization through Leading Edge Firefly Based Zone Routing Protocol
Authors: Neha Sharma, Sherin Zafar and Usha Batra
Background: The Zone Routing Protocol (ZRP) is evolving as an efficient hybrid routing protocol with extremely high potential owing to its integration of two radically different schemes, proactive and reactive, in such a way that a balance between control overhead and latency is achieved. Its performance is impacted by various network conditions such as zone radius, network size and mobility. Objective: The research work described in this paper focuses on improving the performance of the Zone Routing Protocol by reducing the amount of reactive traffic, which is primarily responsible for degraded network performance in large networks. The route aggregation approach helps reduce routing overhead and also helps achieve performance optimization. Method: The performance of the proposed protocol is assessed under varying node sizes and mobility. The firefly algorithm is then applied, aiming at global optimization, which is difficult to achieve due to the non-linearity of functions and the multimodality of algorithms. For performance evaluation, benchmark metrics such as packet delivery ratio and end-to-end delay are adopted to validate the proposed approach. Results: Simulation results depict better performance of the leading edge firefly algorithm when compared to the Zone Routing Protocol and the route aggregation based Zone Routing Protocol. The proposed leading edge FRA-ZRP approach shows a major improvement over ZRP in packet delivery ratio. FRA-ZRP also outperforms traditional ZRP and RA-ZRP in terms of end-to-end delay, reducing the delay and gaining a substantial QoS improvement. Conclusion: The achievement of the proposed approach can be credited to the formation of a zone head and the attainment of routes from the head, which reduces the queuing of data packets behind control packets. The routing-optimized Zone Routing Protocol using the route aggregation approach and FRA augments QoS, the most crucial parameter for routing performance enhancement in MANETs.
A Comparative Study of Energy Retaining Objective Functions in RPL for Improving Network Lifetime in IoT Environment
Authors: Robin Cyriac and Marimuthu Karuppiah
The Internet of Things (IoT) will be inevitable in all walks of our life, where it becomes necessary for all smart devices to have end-to-end data transfer capability. These low-power, low-cost end devices need to be enabled with IPv6 addresses and a corresponding routing mechanism for participating in the IoT environment. To enable fast and efficient routing in IoT networks constrained by limited energy, the Routing Protocol for Low-power and Lossy Networks (RPL) has been developed by the ROLL Working Group. As RPL is proactive and energy-conserving, it has become the most promising routing protocol for the Internet of Things. Nodes in a Low-power and Lossy Network (LLN) are designed to conserve energy by maintaining radio silence for over 90% of their lifetime. It is possible to further improve each node's lifetime, and thereby considerably extend the network's longevity, by performing sensible routing. Different routing structures in an IoT network can be attained by carefully crafting the objective function for the same set of nodes to satisfy different goals. This paper focuses on the different objective functions in RPL that have been developed over time with an emphasis on conserving energy and maximizing the lifetime of the network. In this work, we have carefully studied the different metric compositions used for creating an objective function. The study revealed that combining metrics provides better results in terms of energy conservation when compared to the single metric defined in the RPL standard. It was also noted that treating some metrics as constraints can increase the rate of route convergence without affecting the performance of the network.
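The two ideas the study highlights, combining metrics and treating a metric as a constraint, can be illustrated with a parent-selection sketch: candidate parents below an energy floor are pruned, and the rest are ranked by an additive ETX-plus-energy cost. The weights, threshold and node data are illustrative assumptions, not part of the RPL standard or any surveyed objective function.

```python
# Hedged sketch of metric composition in an RPL-style objective function:
# an energy constraint prunes parents, then an additive ETX + energy
# cost ranks the rest (weights and threshold are assumed values).

def rank_parents(parents, min_energy=0.2, w_etx=0.7, w_energy=0.3):
    """parents: list of (name, etx, residual_energy in [0, 1])."""
    feasible = [p for p in parents if p[2] >= min_energy]  # constraint
    # lower cost is better: high ETX and low residual energy both penalize
    return sorted(feasible, key=lambda p: w_etx * p[1] + w_energy * (1 - p[2]))

candidates = [("A", 1.5, 0.9), ("B", 1.2, 0.1), ("C", 2.0, 0.8)]
best = rank_parents(candidates)[0][0]
```

Node B has the best ETX but is pruned by the energy constraint, so the composed metric steers traffic away from nearly depleted nodes, which is the energy-conservation effect the survey reports.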
Enhanced Adaptive Distributed Energy-Efficient Clustering (EADEEC) for Wireless Sensor Networks
Authors: Ravi K. Poluru, M. Praveen Kumar Reddy, Syed Muzamil Basha, Rizwan Patan and Suresh Kallam
Background: A Wireless Sensor Network (WSN) is composed of a large number of arbitrarily deployed, energy-constrained sensor nodes. The sensor nodes sense data and transmit it to the sink. A significant amount of energy is consumed while sensing data and transmitting it towards the base station. WSNs therefore pose significant challenges in saving energy and extending network lifetime. The main research goals for WSN routing protocols are robustness, energy efficiency, high reliability, network lifetime, fault tolerance, node deployment and latency. Most of the proposed routing protocols are based on clustering and exploit node heterogeneity; clustering is a vital technique for optimizing energy consumption in WSNs. Methods: To improve network lifetime and stability, we have proposed Enhanced Adaptive Distributed Energy-Efficient Clustering (EADEEC). Results: Simulation results show that the protocol performs better regarding network lifetime and packet delivery capacity compared to the EDEEC and DEEC algorithms. The stability period and network lifetime are improved in EADEEC compared to DEEC and EDEEC. Conclusion: With EADEEC, the overall lifetime of a cluster is improved across the network operations: data transfer, node lifetime and the stability period of the cluster. The EADEEC protocol evidently improves throughput, network lifetime, longevity and stability compared with DEEC and EDEEC.
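The cluster-head election idea underlying the DEEC family, which EADEEC builds on, can be sketched as follows: each node's election probability scales an optimal probability p_opt by its residual energy relative to the network average. This is a hedged sketch of the base DEEC-style rule; EADEEC's specific refinements are not detailed in the abstract, and p_opt and the energy values are assumptions.

```python
# Hedged sketch of DEEC-style cluster-head election probability:
# p_i = p_opt * E_i / E_avg, capped at 1 (p_opt is an assumed value).

P_OPT = 0.1  # assumed optimal fraction of cluster heads per round

def election_probability(residual, residuals):
    avg = sum(residuals) / len(residuals)
    return min(1.0, P_OPT * residual / avg)

energies = [0.9, 0.5, 0.1]  # residual energies of three nodes (assumed, J)
probs = [election_probability(e, energies) for e in energies]
```

Nodes with more remaining energy are elected cluster head more often, spreading the expensive aggregation-and-relay role toward energy-rich nodes and so extending the stability period.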
Energy Harvesting in Cognitive Networks Using Feasible Channel Selection Assignment
Authors: M. Balasubramanian and V. Rajamani
Background: The aim of this paper is to achieve maximum spectrum efficiency and proper channel allotment between the Primary User (PU) and the Secondary User (SU). Both licensed and unlicensed users benefit when channel allotment is properly carried out. Energy-harvesting cognitive radio systems are considered in order to improve both energy feasibility and spectral efficiency. Energy harvesting provides the possibility of sharing energy in wireless networks, which improves the achievable channel capacity. Methods: In this paper, a Token Passing algorithm is proposed that switches channels between the Primary User and the Secondary User. The energy-harvesting decision is taken according to whether the primary user is idle or not: when the primary user is idle, the secondary user cannot harvest any energy, and when the primary channel is occupied, the secondary user harvests energy from the primary user, so that the harvested energy can be used by the secondary user during channel allotment. The proposed algorithm provides both energy harvesting and spectrum efficiency. Results: The results show the maximum achievable throughput R(eh) of the energy-harvesting cognitive radio. The state transitions from busy to idle and from idle to busy are represented as S0 and S1. The other parameters are the sensing energy es, the sampling frequency fs, and the primary signal SNR γp under noise. As the Token Passing algorithm provides tokens for the primary and secondary users, it takes less time and achieves better throughput than the FDMA and suboptimal algorithms. Conclusion: This paper achieves maximum spectrum efficiency and energy harvesting by properly allotting spectrum to both primary and secondary users. Channel allotment is performed efficiently through the idle and busy states, and the Token Passing algorithm handles the energy harvesting. An efficient scheme is thus developed for allocating energy in energy-harvesting cognitive radio systems.
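The harvest-or-transmit decision described in the Methods can be sketched as a small slot loop: the secondary user harvests RF energy while the primary channel is busy and spends harvested energy to transmit while it is idle. The per-slot energy amounts and slot sequence are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of the secondary user's per-slot decision:
# busy primary channel -> harvest; idle channel -> transmit if the
# battery holds enough harvested energy (amounts are assumed units).

E_HARVEST, E_TX = 0.5, 1.0  # assumed energy units per slot

def run_slots(primary_states, battery=0.0):
    sent = 0
    for busy in primary_states:
        if busy:               # primary occupies the channel -> harvest
            battery += E_HARVEST
        elif battery >= E_TX:  # idle channel -> transmit if energy allows
            battery -= E_TX
            sent += 1
    return sent, battery

# assumed slot sequence: busy, busy, idle, busy, idle, idle
sent, left = run_slots([True, True, False, True, False, False])
```

The coupling is the point of the scheme: every transmission opportunity is funded by energy harvested during earlier busy slots, so throughput is limited jointly by spectrum availability and harvested energy.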
A Weighted Assignment Approach Using Recommendation Routing for Congestion Control in Wireless Sensor Networks-WEAR
Authors: Siva R. K. Somayaji and Arun K. Thangavelu
Background: Congestion can occur in WSNs while congregating information and sending it towards the sink. It leads to increased packet delay, indiscriminate packet loss, severe fidelity degradation and wasted node energy. Objective: The aim of the proposed work is to suppress congestion using a dynamic weight assignment scheme, where each sensor node transmits data in accordance with the weight assigned to it, thus ensuring priority fairness and minimizing packet loss. The proposed congestion control technique boosts the overall performance of the system by supporting assured delivery of high-importance events to sinks. Methods: Weights are assigned to each record based on two factors: the delay between sending and receiving, and the change in the values passed by the nodes. If the difference in timestamps is below a threshold (alpha) and the difference in values is above a threshold (beta), the record gets a weight of W1, denoting a high-priority data record. Similarly, if the difference in timestamps is above a threshold (gamma) and the values passed by the nodes have changed by a negligible amount, the record is assigned a weight of W3, the least priority. Results: From the analysis, it is inferred that the proposed method had a higher throughput, and the throughput was equally distributed among all the participating nodes. Conclusion: The proposed method endeavours to avoid congestion by implementing an effective queuing mechanism to reinforce reliable wireless communication.
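The weight-assignment rule in the Methods can be written down almost directly. The concrete threshold values and the middle weight W2 are assumptions (the abstract only defines W1 and W3 explicitly).

```python
# Sketch of the WEAR weight-assignment rule (threshold values and the
# middle weight W2 are assumptions; the abstract defines only W1 and W3).

ALPHA, BETA, GAMMA = 2.0, 5.0, 10.0  # assumed threshold values

def assign_weight(dt, dv):
    """dt: timestamp difference; dv: change in the sensed value."""
    if dt < ALPHA and abs(dv) > BETA:
        return "W1"  # fresh and significant -> highest priority
    if dt > GAMMA and abs(dv) < 1e-6:
        return "W3"  # stale and unchanged -> least priority
    return "W2"      # everything in between (assumed middle priority)

w_urgent = assign_weight(dt=1.0, dv=7.5)
w_stale = assign_weight(dt=12.0, dv=0.0)
```

During congestion, a node's queue can then serve W1 records first and shed W3 records, which is how the scheme trades indiscriminate loss for priority fairness.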
Robotic Path Planning Using Flower Pollination Algorithm
Authors: Ishita Mehta, Geetika Singh, Yogita Gigras, Anuradha Dhull and Priyanka Rastogi
Background: Robotic path planning is an important facet of robotics. Its purpose is to make robots move independently in their work environment from a source to a destination whilst satisfying certain constraints: avoiding collision with obstacles, staying as far as possible from obstacles, traversing the shortest path, taking minimum time, consuming minimum energy and so on. Hence, the robotic path planning problem is a conditional constraint optimization problem. Methods: To solve this problem, the Flower Pollination Algorithm, a metaheuristic approach, is employed. The effectiveness of the Flower Pollination Algorithm is showcased using diverse maps. These maps are composed of several fixed obstacles in different positions, a source and a target position. Initially, the pollinators carrying pollen (candidate solutions) are at the source location. Subsequently, the pollinators must pave a way towards the target location while averting any obstacles encountered en route. The pollinators should also do so with the minimum possible cost in terms of distance. The performance of the algorithm in terms of CPU time is evaluated, and the Flower Pollination Algorithm is compared with the Particle Swarm Optimization and Ant Colony Optimization algorithms. Results: It was observed that the Flower Pollination Algorithm is faster than Particle Swarm Optimization and Ant Colony Optimization in terms of CPU time for the same number of iterations to find an optimized solution for robotic path planning. Conclusion: The Flower Pollination Algorithm can be effectively applied to solving robotic path planning problems with static obstacles.
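One Flower Pollination Algorithm iteration can be sketched on a toy obstacle-free 2-D map where fitness is simply distance to the target. Obstacle handling, the Lévy-flight parameters and the switch probability are simplifications; this is not the paper's planner, just the core global/local pollination mechanics.

```python
# Hedged sketch of Flower Pollination Algorithm mechanics on a toy 2-D
# map (no obstacles; fitness = Euclidean distance to the target).
import math
import random

def levy(beta=1.5):
    # simplified heavy-tailed step for global pollination
    u, v = random.gauss(0, 1), random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def fpa_step(pop, best, target, p_switch=0.8, gamma=0.1):
    new_pop = []
    for x in pop:
        if random.random() < p_switch:  # global pollination toward best
            cand = tuple(xi + gamma * levy() * (bi - xi)
                         for xi, bi in zip(x, best))
        else:  # local pollination: mix two random flowers
            a, b = random.sample(pop, 2)
            eps = random.random()
            cand = tuple(xi + eps * (ai - bi) for xi, ai, bi in zip(x, a, b))
        # greedy selection: keep whichever is closer to the target
        new_pop.append(cand if math.dist(cand, target) < math.dist(x, target)
                       else x)
    return new_pop

random.seed(42)
target = (9.0, 9.0)
pop = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(10)]
initial_best = min(math.dist(x, target) for x in pop)
for _ in range(50):
    best = min(pop, key=lambda x: math.dist(x, target))
    pop = fpa_step(pop, best, target)
final_best = min(math.dist(x, target) for x in pop)
```

Greedy selection makes each pollinator's fitness monotonically non-worsening, while the Lévy steps provide the occasional long jumps that give FPA its fast global exploration.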
SVM and HMM Classifier Combination Based Approach for Online Handwritten Indic Character Recognition
Authors: Rajib Ghosh and Prabhat Kumar
Background: The growing use of smart hand-held devices in people's daily lives creates a need for online handwritten text recognition, which refers to identifying handwritten text at the very moment it is written on a digitizing tablet using a pen-like stylus. Several techniques are available for online handwritten text recognition in English, Arabic, Latin, Chinese, Japanese and Korean scripts. However, limited research is available for Indic scripts. Objective: This article presents a novel approach for online handwritten numeral and character (simple and compound) recognition for three popular Indic scripts - Devanagari, Bengali and Tamil. Methods: The proposed work employs the Zone-wise Slopes of Dominant Points (ZSDP) method for feature extraction from individual characters. Support Vector Machine (SVM) and Hidden Markov Model (HMM) classifiers are used for the recognition process. Recognition efficiency is improved by combining the probabilistic outcomes of the SVM and HMM classifiers using Dempster-Shafer theory. The system is trained using separate as well as combined datasets of numerals, simple characters and compound characters. Results: The performance of the present system is evaluated using large self-generated datasets as well as public datasets. Results obtained from the present work demonstrate that the proposed system outperforms existing work in this regard. Conclusion: This work will be helpful for carrying out research on online recognition of handwritten characters in other Indic scripts, as well as recognition of isolated words in various Indic scripts including those used in the present work.
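The classifier-combination step can be sketched with Dempster's rule of combination, treating each classifier's class posteriors as masses on singleton hypotheses. This is a hedged illustration: the paper's exact mass construction may differ, and the class labels and posterior values below are hypothetical.

```python
# Sketch of fusing two classifiers' outputs with Dempster's rule,
# treating class posteriors as singleton mass functions.

def dempster_combine(m1, m2):
    """m1, m2: dicts mapping class label -> mass (singleton hypotheses)."""
    conflict = sum(m1[a] * m2[b] for a in m1 for b in m2 if a != b)
    k = 1.0 - conflict  # normalisation: mass not lost to conflict
    return {c: m1[c] * m2[c] / k for c in m1}

svm_out = {"ka": 0.6, "kha": 0.3, "ga": 0.1}   # hypothetical SVM posteriors
hmm_out = {"ka": 0.5, "kha": 0.4, "ga": 0.1}   # hypothetical HMM posteriors
fused = dempster_combine(svm_out, hmm_out)
decision = max(fused, key=fused.get)
```

When both classifiers lean toward the same class, the combined belief in it exceeds either individual posterior, which is the accuracy gain the abstract reports from fusion.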
Techniques and Trends for Fine-Grained Opinion Mining and Sentiment Analysis: Recent Survey
Authors: Dalila Bouras, Mohamed Amroune, Hakim Bendjenna and Nabiha Azizi
Background: Nowadays, with the appearance of Web 2.0, more users express their opinions, judgments and thoughts towards certain objects, services, organizations and their attributes via social networks, forum entries, websites, blogs and so on. In this way, the volume of raw content generated by these users increases rapidly to enormous size, and people often find it difficult to identify and summarize the fine-grained sentiments buried in these opinion-rich resources. Traditional opinion mining techniques, which focus on the overall sentiment of a review, fail to uncover the sentiments expressed about individual aspects of the reviewed entity. Researchers in the aspect-based opinion mining community try to solve this problem. Objectives: Our study aims, firstly, to present, survey and compare the important recent aspect-based opinion mining approaches for major languages such as English, Arabic and Chinese, along with the datasets commonly used in the literature, so that future researchers can improve on their results. The cited approaches use the latest techniques in the opinion mining field, namely deep learning models. Secondly, we give special attention to the Arabic language by introducing a dashboard of deep learning methods dedicated to Arabic. Finally, we emphasize the research gaps and future challenges in both English and Arabic that suggest new potential research fields. Methods: We have carefully summarized 48 models according to their algorithms into three categories: supervised, semi-supervised and unsupervised. Due to the large number of approaches with diverse datasets and techniques, we propose statistical graphics to compare the different experimental results, namely precision, recall and F-measure. The study also offers a comparative analysis and a comprehensive discussion of the different approaches and techniques dedicated to the aspect extraction sub-task using the new tendency of deep learning, across Arabic, English and Chinese. We have introduced some future challenges, research gaps and new trends in the opinion mining task, which need more effort and investigation to produce new solutions that make the opinion mining field more pervasive. Conclusion: We have compared the different approaches and techniques dedicated to aspect extraction using the new tendency of deep learning. Our contribution illustrates the added value of deep learning models in the treatment of user reviews expressed in the Arabic language. This work is mainly based on the use of performance evaluation metrics (precision, recall and F-measure).
The Application of the Positive Semi-Definite Kernel Space for SVM in Quality Prediction
Authors: Wang Meng, Dui Hongyan, Zhou Shiyuan, Dong Zhankui and Wu Zige
Background: A transformation toward the 4th Industrial Revolution (Industry 4.0) is being led by Germany, based on Cyber-Physical-System-enabled manufacturing and service innovation. Smart manufacturing is an important feature of Industry 4.0, which uses networked manufacturing systems for smart production. Current manufacturing systems (5M1E systems) require deeper mining of the data generated by the manufacturing process. Objective: Mapping the low-dimensional embedding into the input space meets the requirement of the "kernel trick", allowing the problem to be solved in feature space; in addition, distances can be calculated more precisely. Methods: In this research, we propose a positive semi-definite kernel space constructed by a constant-additive method based on a kernel view of ISOMAP. The algorithm consists of six steps. Results: The classification precision of KMLSVM was better than that of SVM on the enterprise data set, where SVM used the RBF kernel with optimized parameters. Conclusion: We adopted the additive constant method in kernel space construction, and a positive semi-definite kernel was built. A typical mixed data set from an enterprise was used in simulation. We compared SVM and KMLSVM on this data set and optimized the SVM kernel function parameters. The simulation results demonstrated that KMLSVM is a better algorithm than SVM on mixed-type data sets.
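The constant-additive idea can be illustrated in its simplest spectral form: shifting the diagonal of an indefinite similarity matrix by the magnitude of its most negative eigenvalue makes it positive semi-definite. A 2x2 example keeps the eigenvalues analytic; the paper's kernel-ISOMAP construction is more involved, so this is a sketch of the principle only.

```python
# Sketch of the constant-additive PSD repair: K + c*I shifts all
# eigenvalues up by c, so c = -lambda_min makes K positive semi-definite
# (shown on a 2x2 symmetric matrix with closed-form eigenvalues).
import math

def eig2x2_sym(a, b, d):
    """Eigenvalues of [[a, b], [b, d]], smallest first."""
    mean, diff = (a + d) / 2, (a - d) / 2
    r = math.hypot(diff, b)
    return mean - r, mean + r

# an indefinite "similarity" matrix: [[1, 2], [2, 1]]
lo, hi = eig2x2_sym(1.0, 2.0, 1.0)
shift = max(0.0, -lo)  # constant added along the diagonal
lo2, hi2 = eig2x2_sym(1.0 + shift, 2.0, 1.0 + shift)
```

A valid (PSD) kernel matrix is what licenses the "kernel trick": only then does the matrix correspond to inner products in some feature space that an SVM can use.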
The Kernel Rough K-Means Algorithm
Authors: Wang Meng, Dui Hongyan, Zhou Shiyuan, Dong Zhankui and Wu Zige
Background: Clustering is one of the most important data mining methods. The k-means (c-means) family and its derivative methods have been a hotspot of clustering research in recent years. Clustering methods can be divided into two categories according to uncertainty: hard clustering and soft clustering. Within k-means research, Hard C-Means clustering (HCM) belongs to hard clustering while Fuzzy C-Means clustering (FCM) belongs to soft clustering. Linearly inseparable problems are a big challenge for clustering and classification algorithms, and further improvement is required in the big data era. Objective: The Rough K-Means (RKM) algorithm, based on fuzzy roughness, is also a hot topic in current research. Rough set theory and fuzzy theory are powerful tools for depicting uncertainty and are the same in essence; therefore, RKM can be kernelized by the means of KFCM. In this paper, we put forward a Kernel Rough K-Means algorithm (KRKM) to solve nonlinear problems for RKM. KRKM expands RKM's ability to process complex data and addresses the uncertainty of soft clustering. Methods: This paper proposes the procedure of the Kernel Rough K-Means algorithm (KRKM). Clustering accuracy was then contrasted using data sets from the UCI repository. The experimental results showed that KRKM improved clustering accuracy compared with the RKM algorithm. Results: The classification precision of both KFCM and KRKM was improved; KRKM was slightly higher than KFCM, indicating that KRKM is an attractive alternative clustering algorithm with a good clustering effect on nonlinear clustering tasks. Conclusion: Through comparison with the precision of the KFCM algorithm, it was found that KRKM has slight advantages in clustering accuracy. KRKM is an effective clustering algorithm that can be selected for nonlinear clustering.
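The kernel-space distance at the heart of any kernelized k-means variant can be sketched directly: with a kernel k, the squared feature-space distance is ||φ(x) - φ(c)||² = k(x,x) - 2k(x,c) + k(c,c), computed without an explicit feature map. The rough lower/upper-approximation assignment below (a threshold on the distance ratio) is a common RKM-style rule but a simplification of the paper's algorithm; the gamma and threshold values are assumptions.

```python
# Sketch of the kernel distance used by KRKM-style clustering, plus a
# simplified rough assignment: clusters within `threshold` times the
# nearest distance share the point (boundary region).
import math

def rbf(x, y, gamma=0.5):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def kernel_dist2(x, c, gamma=0.5):
    # ||phi(x) - phi(c)||^2 expanded via the kernel trick
    return rbf(x, x, gamma) - 2 * rbf(x, c, gamma) + rbf(c, c, gamma)

def rough_assign(x, centers, threshold=1.2):
    """One index -> lower approximation of that cluster;
    several indices -> the point falls in the boundary region."""
    d = [kernel_dist2(x, c) for c in centers]
    dmin = min(d)
    return [i for i, di in enumerate(d) if di <= threshold * dmin]

centers = [(0.0, 0.0), (4.0, 4.0)]
clear = rough_assign((0.2, 0.1), centers)     # near the first centre only
boundary = rough_assign((2.0, 2.0), centers)  # equidistant -> both clusters
```

Points assigned to a single lower approximation contribute fully to that centroid, while boundary points contribute to several, which is how rough clustering represents assignment uncertainty.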
Improving Recommender Systems Using Co-Appearing and Semantically Correlated User Interests
Authors: Bilal Hawashin, Darah Aqel, Shadi Alzubi and Mohammad Elbes
Background: Recommender systems use user interests to provide recommendations that more accurately match users' actual interests and behavior. Methods: This work aims at improving recommender systems by discovering hidden user interests from the existing ones. User interest expansion contributes to improving the accuracy of recommender systems by finding more user interests from the given ones. Two methods are proposed to perform the expansion: expanding interests using a correlated-interest extractor, and expanding interests using word embeddings. Results: Experimental work shows that such expansion is efficient in terms of accuracy and execution time. Conclusion: Expanding user interests proved to be a promising step in improving recommender system performance.
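The word-embedding expansion idea can be sketched as follows: candidate interests whose vectors lie close (cosine similarity above a threshold) to an existing interest are added as hidden interests. The tiny hand-made 2-D vectors and the threshold are illustrative assumptions, not real embeddings or the paper's parameters.

```python
# Hedged sketch of interest expansion via embedding similarity
# (toy 2-D vectors stand in for real word embeddings).
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# assumed toy embeddings for candidate interest terms
emb = {
    "football": (0.9, 0.1),
    "soccer": (0.85, 0.2),
    "cooking": (0.1, 0.95),
}

def expand_interests(interests, emb, threshold=0.9):
    expanded = set(interests)
    for known in interests:
        for cand, vec in emb.items():
            if cand not in expanded and cosine(emb[known], vec) >= threshold:
                expanded.add(cand)
    return expanded

result = expand_interests({"football"}, emb)
```

The expanded set then feeds the recommender as if the user had declared the hidden interests directly, which is the mechanism behind the reported accuracy gain.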
Fault Diagnosis of Wind Turbine Gearbox Based on Neighborhood QPSO and Improved D-S Evidence Theory
Authors: Jiatang Cheng, Yan Xiong and Li Ai
Background: The gearbox is the key equipment in a wind turbine's drive chain. Due to the harsh operating environment of wind turbines, gearbox failures occur frequently. Methods: To improve the accuracy of fault identification for wind turbine gearboxes, an intelligent fault diagnosis method based on Neighborhood Quantum Particle Swarm Optimization (NQPSO) and improved Dempster-Shafer (D-S) evidence theory is proposed. In the NQPSO algorithm, the best-solution information in the neighborhood is introduced to guide individual search behavior and enhance population diversity. Also, a consistency coefficient is used to determine the weight of each piece of evidence, and the original evidence is amended to enhance the ability of D-S theory to fuse conflicting evidence. Results: Experimental results show that the proposed method can overcome the influence of bad evidence on the diagnosis result and has high reliability. Conclusion: The research can effectively improve the accuracy of fault diagnosis for wind turbine gearboxes and provides a feasible approach to the fault diagnosis of nonlinear complex systems.
A Comparative Study on Transformation of UML/OCL to Other Specifications
Authors: Jagadeeswaran Thangaraj and Senthilkumaran Ulaganathan
Background: Static verification is a sound programming methodology that permits automated reasoning about the correctness of an implementation with respect to its formal specification before its execution. The Unified Modelling Language (UML) is the most commonly used modelling language for describing a client's requirements. The Object Constraint Language (OCL) is a formal language that allows users to express textual constraints on a UML model. Therefore, UML/OCL expresses a formal specification and helps developers implement code according to the client's requirements through software design. Objective: This paper aims to compare the existing approaches for generating Java, C++ or C# code, or JML or Spec# specifications, from UML/OCL. Method: Nowadays, software systems are developed via automatic code generation from software design to implementation using formal specification and static analysis. This study considers transformation from design to implementation and vice versa using model transformation, code generation and other techniques. Results: The related code generation tools do not support verification at the implementation phase. On the other hand, the specification generation tools do not generate all the properties needed for verification at the implementation phase. Conclusion: If the generated system supported verification with all the required properties, code developers would need less effort to produce correct software systems. Therefore, this study recommends introducing a new framework that can act as an interface between design and implementation to generate verified software systems.