Recent Advances in Computer Science and Communications - Volume 14, Issue 3, 2021
-
A Survey of Load Balancing and Implementation of Clustering-Based Approach for Clouds
Authors: Anju Sharma, Rohit Pandey, Simar P. Singh and Rajesh Kumar
Background: Generally, it is observed that there is no single algorithm that classifies tasks using the Quality of Service (QoS) parameters requested by the task; instead, existing work focuses on classifying resources and balancing the tasks using the availability of resources. In past literature, authors divided load balancing solutions into three main parts: workload estimation, decision making, and task transferring. Workload estimation deals with identifying the requirements of the incoming tasks on the system. Decision making analyzes whether or not load balancing should be performed for the given node. If the decision for load balancing has been made, the third step deals with transferring the task to an appropriate node to reach a saturation point where the system is in a stable state.
Objective: To address this issue, our approach focuses on workload estimation, and its main objective is to cluster the incoming heterogeneous tasks into generic groups. Another issue for this approach is that client demand varies with the number of tasks; thus, some attributes may be much more critical to a user than others, and this demand changes from user to user.
Methods: This paper classifies tasks using QoS parameters and focuses on workload estimation. The main objective is to cluster the incoming heterogeneous tasks into generic groups. For this, a K-Medoid-based clustering approach for cloud computing is devised and implemented. This approach is then compared across its different iterations to analyse the workload execution more deeply.
Results: The analysis of our approach is computed using the CloudSim simulator. The results and computations show that the data is very uneven at initial times, as some clusters have only four elements while others have many more. After the 20th iteration, the observed data is more evenly balanced, so the clusters formed after the 20th iteration are more stable than the clusters formed initially, i.e., at the 1st iteration. The number of iterations is also minimized to avoid unnecessary clustering, as after a few steps the changes in medoids are very small.
Conclusion: A brief survey of various load balancing techniques in cloud computing is discussed. These approaches are meta-heuristic in nature, have complex behavior, and can be implemented in cloud computing. In our paper, a K-Medoid-based clustering approach for grouping tasks into similar groups has also been implemented. Implementation is done on the CloudSim simulation package provided by Cloud Labs, which is a Java-based open-source package. The results obtained in our approach are limited to the classification of tasks into various clusters. It would also be useful when a new task arrives, as the task can simply be assigned to a VM that was created for another element of that class. In the future, this work can be expanded to create an effective clustering-based model for load balancing.
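As a rough illustration of the kind of K-Medoid grouping the paper describes, the sketch below clusters task QoS vectors with a PAM-style swap loop; the QoS attributes (CPU, memory, deadline), the sample tasks, and the cluster count are illustrative assumptions, not the authors' CloudSim setup.

```python
# Minimal K-Medoid (PAM-style) sketch for grouping cloud tasks by QoS vectors.
# The QoS attributes (cpu, memory, deadline) and k=2 are illustrative assumptions.
import random

def dist(a, b):
    # Euclidean distance between two QoS vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def total_cost(tasks, medoids):
    # Cost = sum of distances from each task to its nearest medoid.
    return sum(min(dist(t, m) for m in medoids) for t in tasks)

def k_medoids(tasks, k, iterations=20):
    medoids = random.sample(tasks, k)
    for _ in range(iterations):                      # stop after a fixed number of iterations
        improved = False
        for i, _ in enumerate(list(medoids)):
            for t in tasks:                          # try swapping each non-medoid in
                if t in medoids:
                    continue
                candidate = medoids[:i] + [t] + medoids[i + 1:]
                if total_cost(tasks, candidate) < total_cost(tasks, medoids):
                    medoids, improved = candidate, True
        if not improved:                             # medoids stopped changing, clusters are stable
            break
    clusters = {m: [] for m in medoids}
    for t in tasks:
        best = min(medoids, key=lambda m: dist(t, m))
        clusters[best].append(t)
    return clusters

if __name__ == "__main__":
    # Each task is a (cpu, memory, deadline) QoS request.
    tasks = [(2, 4, 10), (2, 3, 12), (8, 16, 5), (9, 14, 6), (1, 2, 15), (8, 12, 4)]
    for medoid, members in k_medoids(tasks, k=2).items():
        print(medoid, "->", members)
```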
-
Distributed Content-Based Image Retrieval of Satellite Images on Hadoop
Authors: Tapan Sharma, Vinod Shokeen and Sunil Mathur
Background: Owing to the increased growth in satellite imagery, the development of an architecture that rapidly and efficiently identifies similar images has become crucial. Hadoop has become a de-facto platform for storing large amounts of data, and Apache Spark and MapReduce have become key frameworks for distributed processing of big data.
Objective: This paper proposes a novel Distributed Content-Based Image Retrieval (DCBIR) architecture that leverages the qualities of these engines, which were not utilized in previous studies.
Methods: Features of 40 satellite images with sizes greater than 500 MB were indexed on a 15-node Hadoop cluster with two different databases, viz. Neo4J, a graph database, and HBase, a columnar database.
Results: Performance and scalability of both the indexing and query phases, along with precision and recall, were observed for both databases.
Conclusion: Experimental results show that the proposed system can efficiently perform image retrieval on large remote sensing images.
-
A Localization Scheme for Underwater Acoustic Wireless Sensor Networks using AoA
Authors: Archana Toky, Rishi P. Singh and Sanjoy Das
Background: Underwater Acoustic Sensor Networks (UWASNs) have been proposed for harsh oceanographic applications where human effort is not possible. In UWASNs, localization is a challenging task owing to the unavailability of the Global Positioning System (GPS), high propagation delay, and the dynamic mobility of the sensor nodes caused by ocean dynamics.
Objective: To address the issues related to the localization of sensors in networks deployed under water, this paper presents a localization scheme specially designed for UWASNs.
Methods: In this paper, we propose a localization scheme using the Angle-of-Arrival (AoA) technique for UWASNs. The proposed localization scheme is divided into an angle estimation phase, a projection phase, and a localization phase. The angle estimation phase estimates the angle of the signal arriving at the sensors. The projection phase converts the 3-Dimensional localization problem into an equivalent 2-Dimensional one by projecting the sensor nodes onto a virtual projection plane. In the localization phase, the position of the sensor nodes is estimated based on the Angle-of-Arrival and the distance information from neighboring nodes.
Results: The simulation results show that the proposed scheme provides a high localization ratio and localization coverage with less energy consumption.
Conclusion: A distributed range-based localization scheme for UWASNs using the AoA technique is presented. The localization scheme projects the sensor nodes onto a virtual plane and calculates the angle of signals initiated by the reference nodes. The scheme achieves great success in node localization and network coverage.
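A toy sketch of the projection and localization phases described above: the reference node is projected onto the virtual plane, and the unknown node's position follows from the estimated angle of arrival and the measured range. The single-reference setup and the sample values are assumptions for illustration, not the paper's full multi-node procedure.

```python
# Toy sketch of the AoA-based localization phase on the virtual projection plane.
# A single reference node, the measured angle, and the range are illustrative assumptions.
import math

def project_to_plane(x, y, z):
    # Projection phase: drop the depth coordinate so the 3-D problem
    # becomes an equivalent 2-D problem on a virtual plane.
    return (x, y)

def localize(ref_xy, angle_rad, distance):
    # Localization phase: place the unknown node at the measured range
    # along the estimated angle of arrival from the reference node.
    rx, ry = ref_xy
    return (rx + distance * math.cos(angle_rad),
            ry + distance * math.sin(angle_rad))

if __name__ == "__main__":
    ref = project_to_plane(10.0, 5.0, -120.0)      # reference (anchor) node at 120 m depth
    est = localize(ref, math.radians(30.0), 50.0)  # 50 m away at a 30-degree angle of arrival
    print("estimated position on the projection plane:", est)
```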
-
Time-Synchronization Free Localization Scheme with Mobility Prediction for UAWSNs
Authors: Archana Toky, Rishi P. Singh and Sanjoy Das
Background: Underwater Acoustic Wireless Sensor Networks support many civil and military applications and have emerged as an effective tool to explore the ocean area of the earth. Sensors deployed underwater can help relate the events occurring underwater to the rest of the world. To achieve this goal, the information gained from the sensors needs to be tagged with their real-time locations.
Objective: To study the effect of the mobility of the sensor nodes on the accuracy of location estimation during the localization period, and to develop a localization scheme that can give accurate results by predicting the mobility behavior of the sensors.
Methods: In this paper, a time-synchronization-free localization scheme for underwater networks is presented. The scheme employs a mobile beacon in the network that moves vertically and broadcasts the beacon messages.
Results: The performance evaluation shows that the scheme reduces the error in location estimation caused by the mobility of the sensors by predicting their further location according to the mobility pattern of the sensor node. In an existing localization scheme, nodes are localized without time-synchronization, but that scheme does not consider the mobility of the sensor between the reception of two messages. The results show that the proposed localization scheme reduces the localization error by introducing the mobility behavior of the sensors into the existing localization scheme.
Conclusion: A localization scheme without the need for time-synchronization is presented. The main reason for the inaccuracy of a localization scheme is the mobility of the sensor node during range estimation. The accuracy of a localization scheme can be improved by predicting the mobility pattern of the sensor during a localization period.
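A minimal sketch of the mobility-prediction idea, assuming a simple constant-velocity drift model (the paper's actual mobility model may differ): the node's velocity is estimated from its last two position fixes and extrapolated to the time of the next beacon reception.

```python
# Hedged sketch of compensating for node drift between two beacon receptions.
# A linear (constant-velocity) mobility model is assumed here purely for illustration.
def predict_position(p_prev, t_prev, p_curr, t_curr, t_future):
    # Estimate velocity from the last two position fixes, then extrapolate.
    vx = (p_curr[0] - p_prev[0]) / (t_curr - t_prev)
    vy = (p_curr[1] - p_prev[1]) / (t_curr - t_prev)
    dt = t_future - t_curr
    return (p_curr[0] + vx * dt, p_curr[1] + vy * dt)

if __name__ == "__main__":
    # Node fixes at t = 0 s and t = 10 s; predict where it will be at t = 15 s,
    # when the next beacon message is expected to arrive.
    print(predict_position((0.0, 0.0), 0.0, (5.0, 2.0), 10.0, 15.0))
```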
-
A Novel Approach for Density-Based Optimal Semantic Clustering of Web Objects via Identification of KingPins
Authors: Sonia Setia, Jyoti Verma and Neelam Duhan
Background: Clustering is one of the important techniques in data mining for grouping related data. Clustering can be applied to numerical data as well as to web objects such as URLs, websites, documents, keywords, etc., which is the building block for many recommender systems as well as prediction models.
Objective: The objective of this research article is to develop an optimal clustering approach that considers the semantics of web objects to cluster them into groups. More importantly, the purpose of the proposed work is to improve the computation time of the clustering process.
Methods: In order to achieve the desired objectives, the following two contributions have been proposed to improve the clustering approach: 1) a semantic similarity measure based on Wu-Palmer similarity, and 2) a two-level density-based clustering technique to reduce the computational complexity of the density-based clustering approach.
Results: The efficacy of the proposed method has been analyzed on AOL search logs containing 20 million web queries. The results show that our approach increases the F-measure and decreases the entropy. It also reduces the computational complexity and provides a competitive alternative strategy for semantic clustering when conventional methods do not provide helpful suggestions.
Conclusion: A clustering model has been proposed, which is composed of two components, i.e., a similarity measure and a density-based two-level clustering technique. The proposed model reduces the time cost of the density-based clustering approach without affecting the performance.
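The sketch below shows the Wu-Palmer measure that the first contribution builds on, computed over a tiny hand-built taxonomy; the taxonomy and the example terms are illustrative assumptions, since the paper applies the measure to web objects drawn from real query logs.

```python
# Minimal sketch of Wu-Palmer similarity between two concepts in a taxonomy.
# The tiny hand-built hierarchy below is an illustrative assumption.
TAXONOMY = {                      # child -> parent
    "entity": None,
    "object": "entity",
    "vehicle": "object",
    "car": "vehicle",
    "truck": "vehicle",
    "animal": "object",
    "dog": "animal",
}

def path_to_root(c):
    path = [c]
    while TAXONOMY[c] is not None:
        c = TAXONOMY[c]
        path.append(c)
    return path                   # e.g. ["car", "vehicle", "object", "entity"]

def depth(c):
    return len(path_to_root(c))   # the root has depth 1

def wu_palmer(c1, c2):
    ancestors1 = set(path_to_root(c1))
    # Least common subsumer = deepest ancestor shared by the two root paths.
    lcs = next(c for c in path_to_root(c2) if c in ancestors1)
    return 2.0 * depth(lcs) / (depth(c1) + depth(c2))

if __name__ == "__main__":
    print(wu_palmer("car", "truck"))   # siblings under "vehicle": high similarity
    print(wu_palmer("car", "dog"))     # only share "object": lower similarity
```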
-
Dimensionality Reduction Techniques for IoT Based Data
Authors: Dimpal Tomar and Pradeep Tomar
Background: The Internet of Things (IoT) plays a vital role by connecting several heterogeneous devices seamlessly via the Internet through new services. Every second, the scale of IoT keeps increasing in various sectors like smart homes, smart cities, health, smart transportation and so on. IoT is therefore the reason for a massive rise in the volume of data, and it is computationally difficult to work with such a huge amount of heterogeneous data. This high dimensionality of data has become a challenge for data mining and machine learning. Hence, with respect to efficiency and effectiveness, dimensionality reduction techniques show the roadmap to resolving this issue by removing redundant, irrelevant, and noisy data, making the learning process faster with respect to computation time and accuracy.
Methods: In this study, we provide a broad overview of advanced dimensionality reduction techniques, organized on the basis of criterion measure, training dataset, and soft-computing inspiration, to facilitate the selection of the features required for IoT-based data analytics and machine learning. This is followed by the significant challenges of dimensionality reduction techniques for IoT-generated data, namely scalability, streaming datasets and features, stability, and sustainability.
Results & Conclusion: In this survey, the various dimensionality reduction algorithms reviewed deliver the essential information needed to recommend future prospects for resolving the current challenges in the use of dimensionality reduction techniques for IoT data. In addition, we highlight a comparative study of the various methods and algorithms with respect to certain factors, along with their pros and cons.
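As a representative example of the feature-extraction style of dimensionality reduction surveyed here, the sketch below applies PCA to a synthetic IoT-like sensor matrix; the choice of PCA and the synthetic data are assumptions for demonstration only.

```python
# Illustrative sketch of one classic dimensionality reduction technique (PCA) of the
# kind surveyed in the paper; the synthetic sensor matrix is a placeholder.
import numpy as np

def pca_reduce(X, n_components):
    # Center the data, then project onto the top principal directions
    # obtained from the singular value decomposition.
    X_centered = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    return X_centered @ Vt[:n_components].T

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 200 IoT readings with 10 correlated features (e.g. temperature, humidity, ...).
    base = rng.normal(size=(200, 3))
    X = np.hstack([base, base @ rng.normal(size=(3, 7)) + 0.01 * rng.normal(size=(200, 7))])
    X_low = pca_reduce(X, n_components=3)
    print(X.shape, "->", X_low.shape)   # (200, 10) -> (200, 3)
```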
-
A Hybrid Fog Architecture: Improving the Efficiency in IoT-Based Smart Parking Systems
Authors: Bhawna Suri, Pijush K.D. Pramanik and Shweta Taneja
Background: The abundant use of personal vehicles has raised the challenge of parking the vehicle in crowded places such as shopping malls. To help the driver with efficient and trouble-free parking, a smart and innovative parking assistance system is required. In addition to discussing the basics of smart parking, the Internet of Things (IoT), Cloud computing, and Fog computing, this paper proposes an IoT-based smart parking system for shopping malls.
Methods: To process the IoT data, a hybrid Fog architecture is adopted in order to reduce the latency, where the Fog nodes are connected across the hierarchy. The advantages of this auxiliary connection are discussed critically by comparing it with other Fog architectures (hierarchical and P2P). An algorithm is defined to support the proposed architecture and is implemented on two real-world use-cases that require identifying the nearest free car parking slot. The implementation is simulated for a single-mall scenario as well as for a campus with multiple malls with parking areas spread across them.
Results: The simulation results show that our proposed architecture achieves lower latency compared with traditional smart parking systems that use a Cloud architecture.
Conclusion: The hybrid Fog architecture minimizes communication latency significantly. Hence, the proposed architecture can suitably be applied to other IoT-based real-time applications.
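A hedged sketch of the lookup a parking-area Fog node might perform: answer the nearest-free-slot query locally, and escalate over the auxiliary Fog-to-Fog link only when the local area is full. The slot layout, distance metric, and escalation rule are illustrative assumptions, not the paper's exact algorithm.

```python
# Hedged sketch of a "nearest free slot" lookup at a parking-area Fog node.
import math

def nearest_free_slot(slots, entry_point):
    # slots: {slot_id: {"pos": (x, y), "free": bool}} held by the local Fog node.
    free = [(sid, s["pos"]) for sid, s in slots.items() if s["free"]]
    if not free:
        return None                       # local area full: escalate to a peer/parent Fog node
    return min(free, key=lambda item: math.dist(item[1], entry_point))[0]

def assign_slot(local_slots, peer_slots, entry_point):
    # Hybrid idea: try the local parking area first; only if it is full,
    # forward the request across the auxiliary Fog-to-Fog connection.
    return nearest_free_slot(local_slots, entry_point) or nearest_free_slot(peer_slots, entry_point)

if __name__ == "__main__":
    local = {"A1": {"pos": (2, 3), "free": False}, "A2": {"pos": (8, 1), "free": True}}
    peer = {"B1": {"pos": (20, 5), "free": True}}
    print(assign_slot(local, peer, entry_point=(0, 0)))   # -> "A2"
```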
-
Safety Monitoring and Warning System for Subway Construction Workers Using Wearable Technology
Authors: Junhua Chen, Dahu Wang and Cunyuan Sun
Objective: This study focuses on the application of wearable technology in safety monitoring and early warning for subway construction workers.
Methods: Real-time video surveillance and RFID positioning applied in construction have realized real-time monitoring and early warning of on-site construction to a certain extent, but some problems remain. Real-time video surveillance technology relies on monitoring equipment whose location is fixed, so it is difficult to achieve full coverage of the construction site. Wearable technologies can solve this problem: they have outstanding performance in collecting workers' information, especially physiological state data and positioning data. Meanwhile, wearable technology has no impact on work and is not subject to interference from the dynamic environment.
Results and Conclusion: The first application of the system to subway construction was a great success. During the construction of the station, safety warnings occurred 43 times while safety accidents occurred 0 times, which showed that the safety monitoring and early warning system played a significant role and worked out perfectly.
-
Dual Data Selection Using Multi-Objective Micro-CHC
Authors: Seema Rathee and Saroj Ratnoo
Objective: Redundant and superfluous features or instances reduce the efficiency and efficacy of data mining algorithms. Hence, selecting relevant and significant features and instances is very important for a data mining process to be able to discern meaningful information. Dual selection deals with the problem of simultaneously generating a small subset of non-redundant features as well as instances from a large and noisy data set. The two main objectives of dual selection are to maximize the classification accuracy and to achieve as much data reduction as possible. The two objectives, accuracy and data reduction rate, are conflicting because maximizing the data reduction rate generally results in a lower accuracy rate and vice versa. These objectives are mutually dependent and must be tackled simultaneously. Therefore, the problem of dual data selection ought to be naturally approached with multi-objective optimization techniques, which give a set of non-dominated solutions instead of a single best solution. The problem of dual selection has an exhaustively large search space and has been addressed through single- and Multi-Objective Genetic Algorithms (MOGAs). More often than not, evolutionary approaches, be they single- or multi-objective, work with large population sizes and take unacceptably long execution times due to computationally expensive fitness functions. These approaches also suffer from premature convergence.
Methods: This paper proposes a hybrid Multi-Objective Micro-CHC (MO-Micro-CHC) to address the task of dual selection. The suggested approach uses a population of only a few individuals and combines the elitism advised in the Micro Genetic Algorithm (Micro-GA), the Heterogeneous Uniform Recombination (HUX) and cataclysmic mutation inspired by CHC, and the non-dominated sorting of NSGA-II, one of the most popular and widely implemented multi-objective genetic algorithms.
Results: We have conducted extensive experimentation using numerous datasets from the UCI data repository. Analysis of the results confirms that MO-Micro-CHC achieves high accuracy and a competitive reduction rate in comparison with similar approaches. In addition, it takes far less execution time than many of its counterparts.
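The fragment below sketches two CHC ingredients reused by MO-Micro-CHC, HUX crossover and a cataclysmic restart, applied to binary masks that mark which features and instances are kept. The population handling, non-dominated sorting, and fitness evaluation are omitted, so this is an illustrative fragment rather than the authors' implementation.

```python
# Sketch of HUX crossover and cataclysmic restart on binary selection masks
# (1 = keep this feature/instance, 0 = drop it).  Illustrative only.
import random

def hux(parent_a, parent_b):
    # Heterogeneous Uniform Recombination: swap exactly half of the differing bits.
    diff = [i for i in range(len(parent_a)) if parent_a[i] != parent_b[i]]
    child_a, child_b = parent_a[:], parent_b[:]
    for i in random.sample(diff, len(diff) // 2):
        child_a[i], child_b[i] = child_b[i], child_a[i]
    return child_a, child_b

def cataclysmic_restart(best, pop_size, flip_rate=0.35):
    # When the tiny population converges, rebuild it from heavily mutated
    # copies of the best mask instead of mutating gradually.
    pop = [best[:]]
    for _ in range(pop_size - 1):
        pop.append([bit ^ 1 if random.random() < flip_rate else bit for bit in best])
    return pop

if __name__ == "__main__":
    a = [1, 0, 1, 1, 0, 0, 1, 0]
    b = [0, 0, 1, 0, 1, 0, 1, 1]
    print(hux(a, b))
    print(cataclysmic_restart(a, pop_size=5))
```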
-
Impact of Chatbot in Transforming the Face of Retailing- An Empirical Model of Antecedents and Outcomes
Authors: Kumari Anshu, Loveleen Gaur and Arun Solanki
Background: The chatbot has emerged as a significant resolution to the swiftly growing customer care demands of recent times and as one of the biggest technological disruptions. In simple words, it is a software agent that facilitates interaction between computers and humans in natural language. It is a simulated, intellectual dialogue agent functional in a range of consumer engagement circumstances, and it is the easiest and simplest means to enable interaction between retailers and customers.
Aim: Most of the research work on chatbots is concerned with the technical aspects. The recent research on chatbots pays little attention to the impact they have created on users' experience. Through this work, the author made an effort to understand the customer-oriented impact that the chatbot has on shoppers. The aim of this study was to develop and empirically test a framework that identifies the customer-oriented attributes of the chatbot and the impact these attributes create on customers.
Objectives: The study intended to bridge the gap between conceptual and actual attributes and their applications on the subject of the chatbot. The following research objectives addressed the various aspects of the chatbot affecting different characteristics of consumers' shopping behavior: a) identification of various attributes of the chatbot that bear an impression on the consumer's shopping behavior; b) evaluation of the impact of the chatbot on the consumer's shopping behavior that leads to the development of chatbot usage and adoption by the customer.
Methodology: For the purpose of analysis, the author carried out factor analysis and multiple regression using SPSS version 23 for the identification of various attributes of the chatbot and their impact on shoppers. A self-administered questionnaire was developed, and industry experts in the field of retailing and academicians evaluated it. Primary information from the respondents was gathered using this questionnaire. The questionnaire comprised Likert-scale items on a scale of 1 to 5, where 1 stands for strongly disagree and 5 stands for strongly agree. Data was collected from 126 respondents, out of which 111 respondents were finally considered for study and analysis purposes.
Results/Findings: The empirical results show that the study identified various attributes of the chatbot, such as trust, usefulness, satisfaction, readiness to use, and accessibility. It was also found that the chatbot greatly influenced customers by providing them with a shopping experience, which can be very helpful to businesses for increasing sales and creating repurchase intention in customers.
Conclusion: The recent research on chatbots pays little attention to the impact they are creating on customers who actually interact with them on a regular basis. This research paper extends information for understanding and appreciating the customer-oriented attributes of an artificially intelligent chatbot. In this regard, the author developed a model framework and proposed the identified attributes. Through this work, the author also made an effort to empirically test the impact of the identified attributes on shoppers.
-
Variable Gain for Iterative Learning Control
Authors: Jianhuan Su, Yinjun Zhang and Mengji Chen
Background: At present, the gain of most Iterative Learning Control (ILC) algorithms is fixed, so the convergence speed of the system depends on the learning law, which leads to complexity in the structure of the learning law. Introducing variable gains into ILC can accelerate the convergence speed without changing the structure of the learning law.
Objective: In this paper, the D-type learning law is used. Firstly, the variable-gain iterative learning controller is designed. Secondly, the convergence of the learning law is analyzed.
Methods: Finally, in order to illustrate the effectiveness of this method, simulations are carried out using MATLAB.
Results and Conclusion: The simulation results show that variable-gain iterative learning control can improve the convergence speed of the iteration and weaken the restrictions on the initial input.
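A small simulation sketch of a D-type update with an iteration-varying gain, u_{k+1}(t) = u_k(t) + gamma_k * (e_k(t+1) - e_k(t)). The first-order plant, the gain schedule, and the reference trajectory are assumptions chosen only to show how variable gain enters the law; the paper's own simulations are carried out in MATLAB.

```python
# Hedged sketch of variable-gain D-type ILC on a simple first-order discrete plant.
import numpy as np

T, ITERS = 50, 30
ref = np.sin(np.linspace(0, np.pi, T + 1))          # desired output trajectory

def plant(u):
    # Assumed plant: x(t+1) = 0.9 x(t) + 0.5 u(t), y = x, zero initial state.
    y = np.zeros(T + 1)
    for t in range(T):
        y[t + 1] = 0.9 * y[t] + 0.5 * u[t]
    return y

u = np.zeros(T)
for k in range(ITERS):
    y = plant(u)
    e = ref - y
    gamma_k = 1.5 / (1 + 0.05 * k)                  # iteration-varying gain (assumed schedule)
    u = u + gamma_k * (e[1:] - e[:-1])              # D-type update using the error difference
    print(f"iteration {k:2d}  max tracking error = {np.abs(e).max():.4f}")
```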
-
The Traffic Sign Detection Algorithm Based on Region of Interest Extraction and Double Filter
Authors: Dongxian Yu, Jiatao Kang, Zaihui Cao and Anand Nayyar
Objective: Owing to the interference of various complex factors, it is difficult to carry out correct detection of traffic signs effectively. To solve this problem, a traffic sign detection algorithm based on region-of-interest extraction and a double filter is designed.
Methods: First, in order to reduce environmental interference, the input image is preprocessed to enhance the main color of each sign. Secondly, in order to improve the extraction of regions of interest, a Region-Of-Interest (ROI) detector based on Maximally Stable Extremal Regions (MSER) and the Wave Equation (WE) is used, and candidate regions are selected through the ROI detector. Then, an effective HOG (Histogram of Oriented Gradients) descriptor is introduced as the detection feature of traffic signs, and an SVM (Support Vector Machine) is used to classify candidates as traffic signs or background. Finally, a context-aware filter and a traffic light filter are used to further reject false traffic signs and improve the detection accuracy. In the GTSDB database, three kinds of traffic signs, namely indicative, prohibitory, and danger signs, are tested.
Results: The results show that the proposed algorithm has higher detection accuracy and robustness compared with current traffic sign recognition techniques.
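A rough Python sketch of the MSER-to-HOG-to-SVM stage of the pipeline. The wave-equation ROI refinement and the context-aware and traffic-light filters are omitted, and the synthetic training patches stand in for GTSDB data, so this is an illustrative skeleton rather than the authors' detector.

```python
# Skeleton of the MSER (candidate ROIs) -> HOG (features) -> SVM (sign/background) stage.
# Uses OpenCV and scikit-learn; the training patches below are placeholders, not GTSDB data.
import cv2
import numpy as np
from sklearn.svm import LinearSVC

hog = cv2.HOGDescriptor()                     # default 64x128 detection window

def hog_feature(patch):
    patch = cv2.resize(patch, (64, 128))      # fit the descriptor window
    return hog.compute(patch).ravel()

def detect_signs(gray, classifier):
    mser = cv2.MSER_create()
    _, boxes = mser.detectRegions(gray)       # candidate regions of interest
    hits = []
    for (x, y, w, h) in boxes:
        feat = hog_feature(gray[y:y + h, x:x + w])
        if classifier.predict([feat])[0] == 1:    # 1 = traffic sign, 0 = background
            hits.append((x, y, w, h))
    return hits

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Placeholder training set: "signs" are bright patches, "background" is noise.
    signs = [np.full((64, 64), 220, np.uint8) for _ in range(5)]
    bg = [rng.integers(0, 255, (64, 64), dtype=np.uint8) for _ in range(5)]
    X = [hog_feature(p) for p in signs + bg]
    y = [1] * 5 + [0] * 5
    clf = LinearSVC().fit(X, y)
    scene = rng.integers(0, 255, (240, 320), dtype=np.uint8)
    scene[40:100, 60:120] = 220               # plant one bright "sign" in the scene
    print(detect_signs(scene, clf))
```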
-
An Efficient Clustering-Based Segmentation Approach for Biometric Image
Authors: Aparna Shukla and Suvendu Kanungo
Background: Image analysis plays a vital role in biometric identification systems. To achieve an effective outcome from any biometric identification system, the inputted biometric image should be of fine quality, as it greatly impacts the decision. Image segmentation is a significant aspect of image analysis that must be carried out to enhance the quality of an image. It efficiently differentiates the foreground and background regions of the inputted biometric image and facilitates further image processing simply by providing a segmented binary image that is more coherent for the system.
Objective: We present an efficient clustering-based image segmentation approach to obtain a quality segmented binary image that is further processed to get a quality decision in the biometric-based identification system.
Methods: A centre-of-mass-based centroid clustering approach for image segmentation is proposed to perform binarization of an image so that adequate and effective results can be obtained.
Results: The proposed approach was applied to different biometric data sets containing different numbers of hand images. The approach provides sharp and lucid images so that good and effective intended results can be obtained.
Conclusion: The centroid-based clustering approach for image segmentation outperforms the existing clustering approach. In order to measure the quality of the segmented binary image, different statistical performance parameters are used: PSNR, Dunn Index, Silhouette, and Run Time (sec).
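A simplified sketch of centroid-driven binarization in the spirit of the proposed approach, assuming the clustering is done directly on grey-level intensities (ISODATA-style two-cluster thresholding); the paper's centre-of-mass formulation may differ in detail.

```python
# Simplified sketch: split pixels into foreground/background clusters and recompute
# the cluster centres until the threshold stabilizes, then output a binary image.
import numpy as np

def centroid_binarize(img, iters=50):
    t = img.mean()                             # initial threshold between the two clusters
    for _ in range(iters):
        fg, bg = img[img >= t], img[img < t]
        if fg.size == 0 or bg.size == 0:
            break
        new_t = (fg.mean() + bg.mean()) / 2.0  # midpoint of the two cluster centres
        if abs(new_t - t) < 0.5:               # centres stopped moving, so stop
            break
        t = new_t
    return (img >= t).astype(np.uint8) * 255, t

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    hand = rng.normal(60, 10, (100, 100))      # dark background
    hand[30:70, 30:70] += 120                  # bright "hand" region
    binary, threshold = centroid_binarize(hand)
    print("threshold:", round(threshold, 1), "foreground pixels:", int(binary.sum() / 255))
```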
-
GRADE: A Novel Gravitational Density-Based Clustering Approach in the Multi-Objective Framework
Authors: Naveen Trivedi and Suvendu Kanungo
Background: Clustering analysis plays a vital role in extracting knowledge from huge data sets in knowledge discovery. Most traditional clustering algorithms do not work well with high-dimensional data. The objective of effective clustering is to obtain well-connected, compact, and separated clusters. DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is one of the popular clustering algorithms that use local density information of data points to detect clusters with arbitrary shapes. The Gravitational Search Algorithm (GSA) is one of the effective approaches inspired by Newton's law of gravitation, where every particle in the universe attracts every other particle with a force.
Objective: The primary aim of this paper is to design and develop a novel multi-objective clustering approach that produces the desired number of valid clusters. Further, these resulting clusters are optimized to obtain an optimal solution.
Methods: In the proposed approach, a hybrid clustering algorithm based on GSA along with DBSCAN is recommended to group the data into the desired number of clusters, and in the next phase of the algorithm, the Particle Swarm Optimization technique is applied in order to optimize the solutions using the fitness functions.
Results: In the analysis of the results, we employed two objective functions, namely quantization error and inter-cluster distance, on four real-life data sets, namely Iris, Wine, Wisconsin, and Yeast, to evaluate the performance of our algorithm.
Conclusion: The effectiveness of the GRADE algorithm is comprehensively demonstrated by comparing it with the well-known traditional K-means algorithm in terms of accuracy and computational time.
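The two objective functions named in the Results, quantization error and inter-cluster distance, might be evaluated as below for a candidate set of centroids. The formulas follow their common definitions; the surrounding GSA, DBSCAN, and PSO machinery is not shown.

```python
# Sketch of the two objectives: quantization error (minimize) and inter-cluster
# distance (maximize), evaluated for a candidate set of cluster centroids.
import numpy as np

def quantization_error(data, centroids, labels):
    # Average, over clusters, of the mean distance of members to their centroid.
    per_cluster = []
    for j, c in enumerate(centroids):
        members = data[labels == j]
        if len(members):
            per_cluster.append(np.linalg.norm(members - c, axis=1).mean())
    return float(np.mean(per_cluster))

def inter_cluster_distance(centroids):
    # Smallest pairwise distance between centroids (larger means better separated).
    d = [np.linalg.norm(a - b) for i, a in enumerate(centroids) for b in centroids[i + 1:]]
    return float(min(d))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    data = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
    centroids = np.array([[0.0, 0.0], [3.0, 3.0]])
    labels = np.argmin(np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2), axis=1)
    print("quantization error :", quantization_error(data, centroids, labels))
    print("inter-cluster dist :", inter_cluster_distance(centroids))
```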
-
Design and Implementation of Low Energy Wireless Network Nodes Based on Hardware Compression Acceleration
Authors: Hui Yang and Anand Nayyar
Background: With the fast development of information technology, data is increasing in geometric multiples, and higher transmission speeds and more storage space are required.
Objective: In order to reduce the use of storage space and further improve the transmission efficiency of data, the data needs to be compressed. In the process of data compression, it is very important to ensure the lossless nature of the data, hence lossless data compression algorithms have appeared. Gradual optimization of the algorithm design can often achieve energy-saving optimization of data compression. Similarly, an energy-saving effect can also be obtained by improving the hardware structure of the node.
Methods: In this paper, a new structure is designed for the sensor node that adopts hardware acceleration, with the data compression module separated from the node microprocessor.
Results: On the basis of the ASIC design of the algorithm, by introducing hardware acceleration, the energy consumption of compressing data was successfully reduced, and the proportions of energy consumption and compression time saved compared with a general-purpose processor were as high as 98.4% and 95.8%, respectively. This greatly reduces the compression time and energy consumption.
-
CACK—A Counter Based Authenticated ACK to Mitigate Misbehaving Nodes from MANETs
Authors: C. Atheeq and M. Munir A. Rabbani
Background: The evolution from wired to wireless networks has been a worldwide pattern over the previous couple of decades. All mobile nodes in a MANET act as both router and host; they send and receive messages directly to one another as long as they are within communication range, and they use multiple hops if the nodes are outside the communication range. The self-organizing property of nodes in MANETs makes them prominent in principal applications like military and emergency rescue sites. Owing to the openness and dynamic nature of mobile nodes, MANETs suffer from malicious nodes. Studies show that existing mechanisms lack cost-effectiveness and reduced overhead in the network.
Objective: It is vital to design a system that detects malicious nodes to guard MANETs from attackers. With technological enhancements, cost-effectiveness, and minimum overhead, our vision is a tremendous expansion of MANETs into modern applications.
Methods: In this article, we present our proposed model, a counter-based authenticated acknowledgement scheme uniquely developed for MANETs, which uses Chebyshev polynomials and a digested acknowledgment message for the detection of misbehaving nodes in MANETs.
Results: Implementation shows that the proposed model outperforms existing schemes in terms of reduced overhead, delay, and packet delivery by mitigating attacks.
Conclusion: Finally, we conclude that we have designed an effective intrusion detection system that is adaptable to MANET applications.
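A toy illustration of why Chebyshev polynomials are convenient for lightweight authenticated acknowledgements: their semigroup property, T_r(T_s(x)) = T_s(T_r(x)) = T_rs(x), lets two nodes derive a shared secret from public values. The real-valued evaluation and tiny exponents below are assumptions made for clarity; a deployable scheme would work over a finite field with large parameters.

```python
# Toy demonstration of the Chebyshev polynomial semigroup property used for
# lightweight key agreement between two nodes.  Illustrative parameters only.
import math

def chebyshev(n, x):
    # T_n(x) = cos(n * arccos(x)) for x in [-1, 1].
    return math.cos(n * math.acos(x))

if __name__ == "__main__":
    x = 0.53          # public seed value
    r, s = 7, 11      # private values of the two communicating nodes
    pub_r = chebyshev(r, x)          # node A publishes T_r(x)
    pub_s = chebyshev(s, x)          # node B publishes T_s(x)
    key_a = chebyshev(r, pub_s)      # A computes T_r(T_s(x))
    key_b = chebyshev(s, pub_r)      # B computes T_s(T_r(x))
    print(round(key_a, 9), round(key_b, 9))   # both sides derive the same value
```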
-
Web Service Discovery Using Bio-Inspired Holistic Matching Based Linked Data Clustering Model for RDF Data
Authors: Manish K. Mehrotra and Suvendu Kanungo
Introduction: The Resource Description Framework (RDF) is the de-facto standard language model for semantic data representation on the semantic web. Designing efficient management of high-volume RDF data and efficient querying techniques are primary research areas in the semantic web.
Methods: So far, several RDF management methods have been offered with data storage designs and query processing algorithms for data retrieval. However, these methods do not adequately address the presence of irrelevant links that degrade the performance of web service discovery. In this paper, we propose a Bio-inspired Holistic Matching based Linked Data Clustering (BHM-LDC) technique for the efficient management and querying of RDF data. This technique is essentially based on three algorithms, which are designed for RDF data storing, clustering the linked data, and web service discovery, respectively. Initially, the BHM-LDC technique stores the RDF dataset as graph-based linked data.
Results and Discussion: Then, an Integrated Holistic Entity Matching based Distributed Genetic Algorithm (IHEM-DGA) is proposed to cluster the linked data. Finally, a sub-graph matching based web service discovery algorithm that uses the clustered triples is proposed to find the best web services. Our experimental results reveal the performance of the proposed web service discovery approach by applying it to a business RDF dataset.
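A minimal sketch of the first stage, holding RDF triples as linked data and answering a basic triple-pattern lookup. The toy triples and the wildcard-style query are illustrative assumptions; BHM-LDC's actual storage layout and the later clustering and sub-graph matching stages are not reproduced here.

```python
# Minimal triple store sketch: keep RDF triples, index by predicate, and answer
# a simple (subject, predicate, object) pattern where None acts as a wildcard.
from collections import defaultdict

class TripleStore:
    def __init__(self):
        self.triples = set()
        self.by_predicate = defaultdict(set)      # small index for faster lookups

    def add(self, s, p, o):
        self.triples.add((s, p, o))
        self.by_predicate[p].add((s, o))

    def match(self, s=None, p=None, o=None):
        # None behaves like a variable in a SPARQL-style triple pattern.
        source = ((x, p, y) for x, y in self.by_predicate[p]) if p else self.triples
        return [(ts, tp, to) for ts, tp, to in source
                if (s is None or ts == s) and (o is None or to == o)]

if __name__ == "__main__":
    store = TripleStore()
    store.add("svc:WeatherAPI", "rdf:type", "msm:Service")
    store.add("svc:WeatherAPI", "msm:hasOperation", "op:getForecast")
    store.add("svc:MapsAPI", "rdf:type", "msm:Service")
    print(store.match(p="rdf:type", o="msm:Service"))   # all registered services
```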
-
Energy-Efficient Routing Protocol for Network Life Enhancement in Wireless Sensor Networks
Authors: Amairullah K. Lodhi, M. Santhi S. Rukmini and Syed Abdulsattar
Background: A Wireless Sensor Network (WSN) is composed of autonomous nodes consisting of sensors that collect the status of the surrounding environment. These nodes are equipped with limited batteries, and one cannot recharge or replace the batteries of the nodes during a mission, as the applications of WSNs include underwater, forest, and mountain-based deployments.
Objective: Thus, the available energy must be utilized effectively, and energy-efficient routing is one of the primary means of energy management. Cluster-based routing in WSNs is a prevalent method for achieving network performance and energy efficiency. In the literature, a number of cluster-based energy-efficient routing protocols have been designed whose route-selection metric is based on the residual energy status of nodes. However, this metric causes some of the intermediate nodes to drain their energy quickly. In wireless networks, this situation causes intermediate nodes to turn into bottleneck nodes, thereby degrading performance in terms of efficiency and packet delivery.
Methods: Thus, our paper aims to design a cluster-based routing protocol that prevents the creation of intermediate bottleneck nodes. We introduce a novel routing metric called "ranking status" to address the bottleneck problem.
Results: Performance results indicate that the proposed routing protocol prevents the creation of intermediate bottleneck nodes and improves the network's performance.
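The abstract does not define the "ranking status" metric, so the sketch below uses a purely assumed placeholder (residual energy discounted by recent relay load) simply to show how such a metric can steer next-hop selection away from would-be bottleneck nodes.

```python
# Hedged sketch of next-hop / cluster-head selection driven by a "ranking status"
# metric.  The ranking formula here is an assumed placeholder, not the paper's metric.
def ranking_status(node):
    # Higher residual energy is good; heavy recent forwarding load is penalized.
    return node["residual_energy"] / (1.0 + node["relay_load"])

def select_next_hop(candidates):
    # Pick the neighbour whose ranking status is currently the best.
    return max(candidates, key=ranking_status)

if __name__ == "__main__":
    neighbours = [
        {"id": "n1", "residual_energy": 0.9, "relay_load": 6.0},   # energetic but overloaded
        {"id": "n2", "residual_energy": 0.7, "relay_load": 1.0},   # balanced choice
        {"id": "n3", "residual_energy": 0.3, "relay_load": 0.5},   # nearly depleted
    ]
    print(select_next_hop(neighbours)["id"])   # -> "n2"
```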