Recent Advances in Computer Science and Communications - Volume 14, Issue 3, 2021
A Survey of Load Balancing and Implementation of Clustering-Based Approach for Clouds
Authors: Anju Sharma, Rohit Pandey, Simar P. Singh and Rajesh Kumar
Background: Generally, it is observed that no single algorithm classifies tasks using the Quality of Service (QoS) parameters requested by the task; instead, existing work focuses on classifying resources and balancing tasks according to resource availability. In past literature, authors divided load balancing solutions into three main parts: workload estimation, decision making, and task transferring. Workload estimation deals with identifying the requirements of incoming tasks on the system. Decision making analyzes whether or not load balancing should be performed for a given node. If the decision for load balancing has been made, the third step transfers the task to an appropriate node so that the system reaches a saturation point and remains in a stable state. Objective: To address this issue, our approach focuses on workload estimation, and its main objective is to cluster incoming heterogeneous tasks into generic groups. A further issue is that client demand varies across tasks: some attributes may be much more critical to one user than others, and this demand changes from user to user. Methods: This paper classifies tasks using QoS parameters and focuses on workload estimation. The main objective is to cluster incoming heterogeneous tasks into generic groups. For this, a K-Medoid-based clustering approach for cloud computing is devised and implemented. This approach is then compared across its iterations to analyze workload execution more deeply. Results: The analysis of our approach is computed using the CloudSim simulator. Results show that the data is very uneven in the initial iterations, as some clusters have only four elements while others have many more, whereas after the 20th iteration the data is more evenly balanced, so the clusters formed after the 20th iteration are more stable than those formed initially, i.e., in the 1st iteration. The number of iterations is also minimized to avoid unnecessary clustering, since after a few steps the changes in medoids are very small. Conclusion: A brief survey of various load balancing techniques in cloud computing is discussed. These approaches are meta-heuristic in nature, have complex behavior, and can be implemented in cloud computing. In our paper, a K-Medoid-based clustering approach for grouping tasks into similar clusters has also been implemented. Implementation is done on the CloudSim simulation package provided by Cloud Labs, which is a Java-based open source package. The results obtained in our approach are limited to the classification of tasks into various clusters. The approach would also be useful when a new task arrives: it can simply be assigned to a VM that was created for another element of its class. In future, this work can be expanded into an effective clustering-based model for load balancing.
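The abstract does not specify the QoS attributes or the distance measure used; a minimal K-Medoids (PAM-style) sketch in Python over hypothetical task vectors (CPU demand, memory, deadline are illustrative, not the paper's attributes) might look like this:

```python
import numpy as np

def k_medoids(X, k, max_iter=20, seed=0):
    """Cluster rows of X around k medoids using Manhattan (L1) distance."""
    rng = np.random.default_rng(seed)
    n = len(X)
    medoids = rng.choice(n, size=k, replace=False)
    dist = np.abs(X[:, None, :] - X[None, :, :]).sum(axis=2)  # pairwise L1
    for _ in range(max_iter):
        labels = np.argmin(dist[:, medoids], axis=1)          # assignment step
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members) == 0:
                continue
            # new medoid: the member minimising total distance to its cluster
            costs = dist[np.ix_(members, members)].sum(axis=1)
            new_medoids[c] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):              # converged
            break
        medoids = new_medoids
    return medoids, labels

# hypothetical task QoS vectors: [cpu_demand, memory_gb, deadline]
tasks = np.array([[2, 4, 10], [2, 5, 12], [8, 16, 3], [9, 14, 2]], float)
medoids, labels = k_medoids(tasks, k=2)
print(medoids, labels)
```

A new task would then be assigned to the cluster of its nearest medoid, matching the conclusion's point about reusing VMs created for the same class.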
Distributed Content-Based Image Retrieval of Satellite Images on Hadoop
Authors: Tapan Sharma, Vinod Shokeen and Sunil Mathur
Background: Owing to the rapid growth of satellite imagery, developing an architecture that quickly and efficiently identifies similar images has become crucial. Hadoop has become a de-facto platform for storing large amounts of data, and Apache Spark and MapReduce have become key frameworks for distributed processing of big data. Objective: This paper proposes a novel Distributed Content-Based Image Retrieval (DCBIR) architecture that leverages the qualities of these engines, which were not utilized in previous studies. Methods: Features of 40 satellite images, each larger than 500 MB, were indexed on a 15-node Hadoop cluster with two different databases: Neo4J, a graph database, and HBase, a columnar database. Results: Performance and scalability of both the indexing and query phases, along with precision and recall, were observed for both databases. Conclusion: Experimental results show that the proposed system can efficiently perform image retrieval on large remote sensing images.
A Localization Scheme for Underwater Acoustic Wireless Sensor Networks using AoA
Authors: Archana Toky, Rishi P. Singh and Sanjoy Das
Background: Underwater Acoustic Sensor Networks (UWASNs) have been proposed for harsh oceanographic applications where human effort is not possible. In UWASNs, localization is a challenging task due to the unavailability of the Global Positioning System (GPS), high propagation delay, and the dynamic mobility of sensor nodes caused by ocean dynamics. Objective: To address the issues related to localizing sensors in networks deployed under water. This paper presents a localization scheme specially designed for UWASNs. Methods: We propose a localization scheme for UWASNs using the Angle-of-Arrival (AoA) technique. The proposed scheme is divided into an angle estimation phase, a projection phase, and a localization phase. The angle estimation phase estimates the angle of the signal arriving at the sensors. The projection phase converts the 3-dimensional localization problem into an equivalent 2-dimensional one by projecting the sensor nodes onto a virtual projection plane. In the localization phase, the position of each sensor node is estimated from the Angle-of-Arrival and the distances to neighboring nodes. Results: Simulation results show that the proposed scheme provides a high localization ratio and localization coverage with less energy consumption. Conclusion: A distributed range-based localization scheme for UWASNs using the AoA technique is presented. The scheme projects the sensor nodes onto a virtual plane and calculates the angle of signals initiated by the reference nodes. It achieves high success in node localization and network coverage.
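For illustration of the localization phase only, here is a minimal 2-D sketch (after the projection step) that intersects bearing rays from two hypothetical reference nodes; the paper's actual angle-estimation and projection procedures are not reproduced:

```python
import numpy as np

def locate_from_aoa(a1, theta1, a2, theta2):
    """Intersect two bearing rays a_i + t_i * (cos theta_i, sin theta_i)."""
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # solve a1 + t1*d1 = a2 + t2*d2  =>  [d1, -d2] @ [t1, t2] = a2 - a1
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.asarray(a2) - np.asarray(a1))
    return np.asarray(a1) + t[0] * d1

# hypothetical reference nodes on the virtual projection plane
node = locate_from_aoa((0.0, 0.0), np.deg2rad(45), (10.0, 0.0), np.deg2rad(135))
print(node)  # -> approximately [5, 5]
```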
Time-Synchronization Free Localization Scheme with Mobility Prediction for UAWSNs
Authors: Archana Toky, Rishi P. Singh and Sanjoy Das
Background: Underwater Acoustic Wireless Sensor Networks support many civil and military applications and have emerged as an effective tool to explore the ocean areas of the earth. Sensors deployed underwater can help relate events occurring underwater to the rest of the world. To achieve this goal, the information gathered by the sensors needs to be tagged with their real-time locations. Objective: To study the effect of sensor-node mobility on the accuracy of location estimation during the localization period, and to develop a localization scheme that gives accurate results by predicting the mobility behavior of the sensors. Methods: A time-synchronization-free localization scheme for underwater networks is presented. The scheme employs a mobile beacon that moves vertically through the network and broadcasts beacon messages. Results: The performance evaluation shows that the scheme reduces the error in location estimation caused by sensor mobility by predicting each node's further location according to its mobility pattern. An existing localization scheme localizes nodes without time-synchronization, but it does not consider the mobility of the sensor between the reception of two messages. The results show that the proposed scheme reduces the localization error by introducing the mobility behavior of the sensors into the existing scheme. Conclusion: A localization scheme that requires no time-synchronization is presented. The main cause of inaccuracy in a localization scheme is the mobility of the sensor node during range estimation; accuracy can be improved by predicting the mobility pattern of the sensor during the localization period.
A Novel Approach for Density-Based Optimal Semantic Clustering of Web Objects via Identification of KingPins
Authors: Sonia Setia, Jyoti Verma and Neelam Duhan
Background: Clustering is one of the important techniques in data mining for grouping related data. It can be applied to numerical data as well as to web objects such as URLs, websites, documents, and keywords, and is a building block for many recommender systems and prediction models. Objective: The objective of this research article is to develop an optimal clustering approach that considers the semantics of web objects when grouping them. More importantly, the proposed work aims to strictly improve the computation time of the clustering process. Methods: To achieve these objectives, two contributions are proposed: 1) a semantic similarity measure based on Wu-Palmer similarity, and 2) a two-level density-based clustering technique that reduces the computational complexity of the density-based clustering approach. Results: The efficacy of the proposed method has been analyzed on AOL search logs containing 20 million web queries. The results show that our approach increases the F-measure and decreases the entropy. It also reduces the computational complexity and provides a competitive alternative strategy for semantic clustering when conventional methods do not provide helpful suggestions. Conclusion: A clustering model has been proposed, composed of two components: a similarity measure and a density-based two-level clustering technique. The proposed model reduces the time cost of the density-based clustering approach without affecting performance.
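The paper's similarity measure over web objects is its own; Wu-Palmer similarity itself is exposed by NLTK's WordNet interface, so a keyword-level sketch (assuming nltk and its wordnet corpus are installed) could be:

```python
# assumes the WordNet corpus has been fetched via nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def wup_keyword_similarity(word1, word2):
    """Best Wu-Palmer similarity over all synset pairs of two keywords."""
    s1, s2 = wn.synsets(word1), wn.synsets(word2)
    if not s1 or not s2:
        return 0.0
    # wup_similarity can return None for incomparable synsets
    scores = [a.wup_similarity(b) or 0.0 for a in s1 for b in s2]
    return max(scores)

print(wup_keyword_similarity("car", "automobile"))  # 1.0 (shared synset)
print(wup_keyword_similarity("car", "boat"))        # high, but below 1.0
```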
Dimensionality Reduction Techniques for IoT Based Data
Authors: Dimpal Tomar and Pradeep Tomar
Background: The Internet of Things (IoT) plays a vital role by seamlessly connecting heterogeneous devices via the Internet through new services. Every second, the scale of the IoT keeps increasing in sectors such as smart homes, smart cities, health, and smart transportation. The IoT has therefore caused a massive rise in data volume, and it is computationally difficult to work with such a huge amount of heterogeneous data. This high dimensionality has become a challenge for data mining and machine learning. Dimensionality reduction techniques offer a roadmap to resolve this issue efficiently and effectively by removing redundant, irrelevant, and noisy data, making the learning process faster in terms of both computation time and accuracy. Methods: This study provides a broad overview of advanced dimensionality reduction techniques that facilitate the selection of features required for IoT-based data analytics and machine learning, organized by criterion measure, training dataset, and inspiration from soft computing. It then discusses the significant challenges these techniques face for IoT-generated data: scalability, streaming datasets and features, stability, and sustainability. Results & Conclusion: The dimensionality reduction algorithms reviewed in this survey deliver the essential information needed to recommend future directions for resolving the current challenges in applying dimensionality reduction to IoT data. In addition, we present a comparative study of the various methods and algorithms with respect to relevant factors, along with their pros and cons.
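As one concrete instance of the surveyed feature-extraction techniques, PCA on a hypothetical IoT sensor matrix with scikit-learn looks like this; the data and the 95% variance threshold are illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA

# hypothetical IoT readings: 1000 samples x 50 sensor channels
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))

# keep just enough components to explain 95% of the variance
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)
print(X.shape, "->", X_reduced.shape)
```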
A Hybrid Fog Architecture: Improving the Efficiency in IoT-Based Smart Parking Systems
Authors: Bhawna Suri, Pijush K.D. Pramanik and Shweta Taneja
Background: The abundant use of personal vehicles has raised the challenge of parking in crowded places such as shopping malls. To help drivers with efficient and trouble-free parking, a smart and innovative parking assistance system is required. In addition to discussing the basics of smart parking, the Internet of Things (IoT), Cloud computing, and Fog computing, this paper proposes an IoT-based smart parking system for shopping malls. Methods: To process the IoT data, a hybrid Fog architecture is adopted to reduce latency, in which the Fog nodes are connected across the hierarchy. The advantages of this auxiliary connection are discussed critically by comparison with other Fog architectures (hierarchical and P2P). An algorithm is defined to support the proposed architecture and is implemented on two real-world use-cases that require identifying the nearest free parking slot. The implementation is simulated for a single-mall scenario as well as for a campus with multiple malls and parking areas spread across them. Results: The simulation results show that our proposed architecture has lower latency than traditional smart parking systems that use a Cloud architecture. Conclusion: The hybrid Fog architecture minimizes communication latency significantly. Hence, the proposed architecture can suitably be applied to other IoT-based real-time applications.
Safety Monitoring and Warning System for Subway Construction Workers Using Wearable Technology
Authors: Junhua Chen, Dahu Wang and Cunyuan Sun
Objective: This study focused on the application of wearable technology to safety monitoring and early warning for subway construction workers. Methods: Real-time video surveillance and RFID positioning, as applied on construction sites, have realized real-time monitoring and early warning of on-site construction to a certain extent, but some problems remain. Real-time video surveillance relies on monitoring equipment whose location is fixed, so full coverage of the construction site is difficult to achieve. Wearable technologies can solve this problem: they perform outstandingly in collecting workers' information, especially physiological-state data and positioning data. Meanwhile, wearable technology does not impact work and is not subject to interference from the dynamic environment. Results and Conclusion: The first application of the system to subway construction was a great success. During the construction of the station, safety warnings occurred 43 times while safety accidents occurred 0 times, which shows that the safety monitoring and early warning system played a significant role and worked out well.
Dual Data Selection Using Multi-Objective Micro-CHC
Authors: Seema Rathee and Saroj Ratnoo
Objective: Redundant and superfluous features or instances reduce the efficiency and efficacy of data mining algorithms. Hence, selecting relevant and significant features and instances is very important for a data mining process to discern meaningful information. Dual selection deals with the problem of simultaneously generating a small subset of non-redundant features and instances from a large and noisy data set. Its two main objectives are to maximize classification accuracy and to achieve as much data reduction as possible. These objectives, accuracy and data reduction rate, conflict: maximizing the data reduction rate generally lowers accuracy and vice versa. They are mutually dependent and must be tackled simultaneously, so the problem of dual data selection is naturally approached with multi-objective optimization techniques, which give a set of non-dominated solutions instead of a single best solution. Dual selection has an exhaustively large search space and has been addressed through single and Multi-Objective Genetic Algorithms (MOGAs). More often than not, evolutionary approaches, be they single- or multi-objective, work with large population sizes and take unacceptably long execution times due to computationally expensive fitness functions. These approaches also suffer from premature convergence. Methods: This paper proposes a hybrid Multi-Objective Micro-CHC (MO-Micro-CHC) to address the task of dual selection. The suggested approach uses a population of only a few individuals and the elitism advised in the Micro Genetic Algorithm (Micro-GA), Heterogeneous Uniform Recombination (HUX) and cataclysmic mutation inspired by CHC, and the non-dominated sorting of NSGA-II, one of the most popular and widely implemented multi-objective genetic algorithms. Results: We have conducted extensive experimentation using numerous datasets from the UCI data repository. Analysis of the results confirms that MO-Micro-CHC achieves high accuracy and a competitive reduction rate in comparison with similar approaches. In addition, it takes far less execution time than many of its counterparts.
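The full MO-Micro-CHC is the paper's contribution; the two CHC ingredients it borrows, HUX recombination and a cataclysmic restart around the elite individual, can be sketched as follows (population size and flip rate are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def hux(p1, p2):
    """HUX: swap exactly half of the non-matching bits between two parents."""
    diff = np.where(p1 != p2)[0]
    if len(diff) < 2:
        return p1.copy(), p2.copy()
    swap = rng.choice(diff, size=len(diff) // 2, replace=False)
    c1, c2 = p1.copy(), p2.copy()
    c1[swap], c2[swap] = p2[swap], p1[swap]
    return c1, c2

def cataclysm(best, pop_size, flip_rate=0.35):
    """Restart: keep the elite, refill with heavily mutated copies of it."""
    pop = [best.copy()]
    for _ in range(pop_size - 1):
        child = best.copy()
        mask = rng.random(len(best)) < flip_rate
        child[mask] ^= True            # flip the selected bits
        pop.append(child)
    return pop

p1 = rng.random(12) < 0.5              # boolean chromosomes (selected items)
p2 = rng.random(12) < 0.5
print(hux(p1, p2))
```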
Impact of Chatbot in Transforming the Face of Retailing- An Empirical Model of Antecedents and Outcomes
Authors: Kumari Anshu, Loveleen Gaur and Arun Solanki
Background: The chatbot has emerged as a significant resolution to swiftly growing customer care demands in recent times and as one of the biggest technological disruptions. In simple words, a chatbot is a software agent that facilitates interaction between computers and humans in natural language: a simulated, intelligent dialogue agent functional in a range of consumer engagement circumstances. It is the easiest and simplest means to enable interaction between retailers and customers. Aim: Most research on chatbots is concerned with technical aspects, and recent work pays little attention to the impact chatbots have on users' experience. Through this work, the authors made an effort to understand the customer-oriented impact that chatbots have on shoppers. The aim of this study was to develop and empirically test a framework that identifies the customer-oriented attributes of chatbots and the impact these attributes create on customers. Objectives: The study intended to bridge the gap between conceptual and actual attributes and their applications on the subject of chatbots. The following research objectives address the aspects of chatbots affecting consumers' shopping behavior: a) identification of the chatbot attributes that bear on consumers' shopping behavior; b) evaluation of the impact of chatbots on consumers' shopping behavior that leads to chatbot usage and adoption by the customer. Methodology: For the analysis, the authors carried out factor analysis and multiple regression using SPSS version 23 to identify the attributes of chatbots and their impact on shoppers. A self-administered questionnaire was developed and evaluated by industry experts in retailing and academicians. Primary information was gathered from respondents using this questionnaire, which comprised Likert-scale items on a scale of 1 to 5, where 1 stands for strongly disagree and 5 for strongly agree. Data was collected from 126 respondents, of whom 111 were finally considered for study and analysis. Results/Findings: The empirical results identified various attributes of chatbots, such as trust, usefulness, satisfaction, readiness to use, and accessibility. It was also found that chatbots greatly influence customers' shopping experience, which can be very helpful to businesses for increasing sales and creating repurchase intention among customers. Conclusion: Recent research on chatbots pays little attention to the impact they create on customers who actually interact with them regularly. This paper extends the information needed for understanding and appreciating the customer-oriented attributes of artificially intelligent chatbots. The authors developed a model framework, proposed the identified attributes, and empirically tested their impact on shoppers.
Variable Gain for Iterative Learning Control
Authors: Jianhuan Su, Yinjun Zhang and Mengji Chen
Background: At present, the gain of most ILC algorithms is fixed, and the convergence speed of the system depends on the learning law, which complicates the structure of the learning law. Introducing variable gains into ILC can accelerate convergence without changing the structure of the learning law. Objective: In this paper, the D-type learning law is used. First, a variable-gain iterative learning controller is designed; second, the convergence of the learning law is analyzed. Methods: Finally, to illustrate the effectiveness of the method, simulations are carried out in MATLAB. Results and Conclusion: The simulation results show that variable-gain iterative learning control can improve the convergence speed of the iteration and weaken the restrictions on the initial input.
The Traffic Sign Detection Algorithm Based on Region of Interest Extraction and Double Filter
Authors: Dongxian Yu, Jiatao Kang, Zaihui Cao and Anand Nayyar
Objective: Interference from various complex factors makes it difficult to detect traffic signs correctly, so a traffic sign detection algorithm based on region-of-interest extraction and a double filter is designed. Methods: First, to reduce environmental interference, the input image is preprocessed to enhance the main color of each sign. Second, to improve the extraction of regions of interest, a Region Of Interest (ROI) detector based on Maximally Stable Extremal Regions (MSER) and the Wave Equation (WE) is used, and candidate regions are selected through the ROI detector. Then, an effective Histogram of Oriented Gradients (HOG) descriptor is introduced as the detection feature, and a Support Vector Machine (SVM) classifies each candidate as a traffic sign or background. Finally, a context-aware filter and a traffic light filter are used to reject false traffic signs and improve detection accuracy. Three kinds of traffic signs, indicative, prohibitory, and danger, are tested on the GTSDB database. Results: The results show that the proposed algorithm has higher detection accuracy and robustness compared with current traffic sign recognition techniques.
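The wave-equation stage and the two filters are specific to the paper; a skeletal version of the generic MSER-to-HOG-to-SVM pipeline (OpenCV, scikit-image, scikit-learn; the patch size, HOG settings, and toy training data are assumptions) might be:

```python
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def candidate_rois(bgr_image):
    """Extract MSER candidate boxes from a grayscale version of the frame."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    mser = cv2.MSER_create()
    _, boxes = mser.detectRegions(gray)
    return boxes                                 # each box is (x, y, w, h)

def hog_feature(gray_patch):
    """Fixed-size HOG descriptor for one candidate region."""
    patch = cv2.resize(gray_patch, (40, 40))
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

# toy training set: HOG features of random patches (stand-ins for labels)
patches = [np.random.randint(0, 255, (40, 40), np.uint8) for _ in range(20)]
X_train = np.array([hog_feature(p) for p in patches])
y_train = np.array([0, 1] * 10)                  # background / sign labels
clf = LinearSVC().fit(X_train, y_train)          # sign-vs-background SVM
```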
An Efficient Clustering-Based Segmentation Approach for Biometric Image
Authors: Aparna Shukla and Suvendu Kanungo
Background: Image analysis plays a vital role in biometric identification systems. To achieve effective outcomes, the input biometric image should be of fine quality, as it greatly impacts the decision. Image segmentation is a significant aspect of image analysis that must be carried out to enhance image quality. It efficiently differentiates the foreground and background regions of the input biometric image and facilitates further processing by providing a segmented binary image that is more coherent to the system. Objective: We present an efficient clustering-based image segmentation approach to obtain a quality segmented binary image that is further processed to reach a quality decision in the biometric-based identification system. Methods: A centre-of-mass-based centroid clustering approach for image segmentation was proposed to binarize images so that adequate and effective results can be obtained. Results: The proposed approach was applied to different biometric data sets with different numbers of hand images. It provides sharp and lucid images from which good and effective results can be obtained. Conclusion: The centroid-based clustering approach for image segmentation outperforms the existing clustering approach. To measure the quality of the segmented binary image, different statistical performance parameters are used: PSNR, Dunn Index, Silhouette, and run time (sec).
GRADE: A Novel Gravitational Density-Based Clustering Approach in the Multi-Objective Framework
Authors: Naveen Trivedi and Suvendu Kanungo
Background: Clustering analysis plays a vital role in extracting knowledge from huge data sets in knowledge discovery, but most traditional clustering algorithms do not work well with high-dimensional data. The objective of effective clustering is to obtain well connected, compact, and separated clusters. Density-Based Clustering (DBSCAN) is a popular clustering algorithm that uses local density information of data points to detect clusters of arbitrary shape. The Gravitational Search Algorithm (GSA) is an effective approach inspired by Newton's law of gravitation, in which every particle in the universe attracts every other particle with a force. Objective: The primary aim of this paper is to design and develop a novel multi-objective clustering approach that produces the desired number of valid clusters and then optimizes the resulting clusters to obtain an optimal solution. Methods: In the proposed approach, a hybrid clustering algorithm based on GSA along with DBSCAN groups the data into the desired number of clusters; in the next phase, the particle swarm optimization technique is applied to optimize the solutions using the fitness functions. Results: In the analysis, we employed two objective functions, quantization error and inter-cluster distance, on four real-life data sets (Iris, Wine, Wisconsin, and Yeast) to evaluate the performance of our algorithm. Conclusion: The effectiveness of the GRADE algorithm is comprehensively demonstrated by comparing it with the well-known traditional K-means algorithm in terms of accuracy and computational time.
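The GSA and PSO stages are the paper's contribution; the DBSCAN building block, with its two local-density parameters, is available directly in scikit-learn (eps and min_samples below are illustrative, not the paper's settings):

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import load_iris

X = load_iris().data
# eps: neighbourhood radius; min_samples: density threshold for a core point
labels = DBSCAN(eps=0.6, min_samples=5).fit_predict(X)
print("clusters:", set(labels) - {-1}, " noise points:", np.sum(labels == -1))
```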
Design and Implementation of Low Energy Wireless Network Nodes Based on Hardware Compression Acceleration
Authors: Hui Yang and Anand Nayyar
Background: With the fast development of information technology, data is increasing in geometric multiples, placing higher demands on transmission speed and storage space. Objective: To reduce storage use and further improve transmission efficiency, data needs to be compressed. In data compression it is very important to preserve the data losslessly, hence the emergence of lossless data compression algorithms. Gradual optimization of the algorithm can often achieve energy savings in data compression; similarly, energy savings can be obtained by improving the hardware structure of the node. Methods: In this paper, a new sensor node structure is designed that adopts hardware acceleration and separates the data compression module from the node microprocessor. Results: On the basis of an ASIC design of the algorithm, introducing hardware acceleration successfully reduced the energy consumption of data compression; the proportions of energy consumption and compression time saved relative to a general-purpose processor were as high as 98.4% and 95.8%, respectively. This greatly reduces both compression time and energy consumption.
CACK—A Counter Based Authenticated ACK to Mitigate Misbehaving Nodes from MANETs
Authors: C. Atheeq and M. Munir A. Rabbani
Background: The evolution from wired to wireless networks has been a worldwide trend over the past couple of decades. All mobile nodes in a MANET act as both router and host, sending and receiving messages directly to one another while within communication range and using multiple hops when outside it. The self-organizing property of nodes has made MANETs prominent in principal applications such as military operations and emergency rescue sites. However, owing to the openness and dynamic nature of mobile nodes, MANETs suffer from malicious nodes. Studies show that existing mechanisms lack cost effectiveness and incur high overhead in the network. Objective: It is vital to design a system that detects malicious nodes to guard MANETs from attackers. With cost-effective, minimum-overhead enhancements, our vision presents a tremendous expansion of MANETs into modern applications. Methods: In this article, we present our proposed model, a counter-based authenticated acknowledgement developed specifically for MANETs, which uses Chebyshev polynomials and a digested acknowledgment message to detect misbehaving nodes. Results: Implementation shows that the proposed model outperforms others in terms of reduced overhead, delay, and packet delivery by mitigating attacks. Conclusion: We conclude that the designed intrusion detection system is effective and adaptable to MANET applications.
Web Service Discovery Using Bio-Inspired Holistic Matching Based Linked Data Clustering Model for RDF Data
Authors: Manish K. Mehrotra and Suvendu Kanungo
Introduction: The Resource Description Framework (RDF) is the de-facto standard language model for semantic data representation on the semantic web. Designing efficient management of high-volume RDF data and efficient querying techniques are primary research areas in the semantic web. Methods: Several RDF management methods have been offered, with data storage designs and query processing algorithms for data retrieval. However, these methods do not adequately address the presence of irrelevant links, which degrades the performance of web service discovery. In this paper, we propose a Bio-inspired Holistic Matching based Linked Data Clustering (BHM-LDC) technique for efficient management and querying of RDF data. The technique is based on three algorithms, designed respectively for storing RDF data, clustering the linked data, and web service discovery. Initially, the BHM-LDC technique stores the RDF dataset as graph-based linked data. Results and Discussion: Then, an Integrated Holistic Entity Matching based Distributed Genetic Algorithm (IHEM-DGA) is proposed to cluster the linked data. Finally, a sub-graph-matching-based web service discovery algorithm that uses the clustered triples is proposed to find the best web services. Our experimental results demonstrate the performance of the proposed web service discovery approach on a business RDF dataset.
Energy-Efficient Routing Protocol for Network Life Enhancement in Wireless Sensor Networks
Authors: Amairullah K. Lodhi, M. Santhi S. Rukmini and Syed Abdulsattar
Background: A Wireless Sensor Network (WSN) is composed of autonomous nodes equipped with sensors that collect the status of the surrounding environment. These nodes run on limited batteries, which cannot be recharged or replaced during the mission, as WSN applications include underwater, forest, and mountain deployments. Objective: Available energy must therefore be utilized effectively, and energy-efficient routing is one of the primary means of energy management. Cluster-based routing in WSNs is a prevalent method for achieving network performance and energy efficiency. In the literature, a number of cluster-based energy-efficient routing protocols design their route selection metric around the residual energy status of nodes. However, this metric causes some intermediate nodes to drain energy instantly. In wireless networks this turns intermediate nodes into bottleneck nodes, degrading performance in terms of efficiency and packet delivery. Methods: This paper therefore designs a cluster-based routing protocol that prevents the creation of intermediate bottleneck nodes, introducing a novel routing metric called "ranking status" for the bottleneck problem. Results: Performance results indicate that the proposed routing protocol prevents the creation of intermediate bottleneck nodes and improves the network's performance.
A Practical Conflicting Role-Based Cloud Security Risk Evaluation Method
Authors: Jin Han, Jing Zhan, Xiaoqing Xia and Xue Fan
Background: Currently, the Cloud Service Provider (CSP) or a third party usually proposes the principles and methods for cloud security risk evaluation, and cloud users have no choice but to accept them. However, since cloud users and cloud service providers have conflicting interests, cloud users may not trust the results of security evaluations performed by the CSP. Different cloud users may also have different security risk preferences, which makes it difficult for a third party to consider all users' needs during evaluation. In addition, current security evaluation indexes for the cloud are too impractical to test (e.g., indexes like interoperability, transparency, and portability are not easy to evaluate). Methods: To solve these problems, this paper proposes a practical cloud security risk evaluation method based on the decision-making of conflicting roles, using the Analytic Hierarchy Process (AHP) with Aggregation of Individual Priorities (AIP). Results: Our method not only puts forward a new risk-source-based index system for cloud security with corresponding practical testing methods, but also obtains evaluation results that reflect the risk preferences of the conflicting roles, namely the CSP and cloud users, laying a foundation for improving mutual trust between them. The experiments show that the method can effectively assess the security risk of cloud platforms; when the number of clouds increased by 100% and 200%, the evaluation time using our methodology increased by only 12% and 30%, respectively. Conclusion: Our method achieves consistent decisions based on conflicting roles, with high scalability and practicability for cloud security risk evaluation.
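The paper's index system and testing methods are its own; the core AHP computation it rests on, deriving a priority vector from a pairwise comparison matrix via the principal eigenvector and checking consistency, can be sketched as follows (the comparison matrix is hypothetical). Under AIP, one such vector would be computed per stakeholder and then aggregated, e.g., by a weighted mean:

```python
import numpy as np

def ahp_priorities(M):
    """Principal-eigenvector priorities and consistency ratio for AHP."""
    vals, vecs = np.linalg.eig(M)
    i = np.argmax(vals.real)
    w = np.abs(vecs[:, i].real)
    w /= w.sum()                           # normalised priority vector
    n = len(M)
    ci = (vals[i].real - n) / (n - 1)      # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]    # Saaty's random index (small n)
    return w, ci / ri

# hypothetical pairwise comparisons of three risk criteria
M = np.array([[1,   3,   5],
              [1/3, 1,   2],
              [1/5, 1/2, 1]])
w, cr = ahp_priorities(M)
print(w, "CR =", cr)   # CR < 0.1 conventionally indicates acceptable consistency
```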
Collaborative Filtering Recommendation Algorithm Based on Class Correlation Distance
Authors: Hanfei Zhang, Yumei Jian and Ping Zhou
Background: Since the proposal of the collaborative filtering algorithm, recommendation systems have become an important approach for users to filter excessive Internet information. Objective: A class-correlation-distance collaborative filtering recommendation algorithm is proposed to solve the problems of category judgment and distance metrics in traditional collaborative filtering, exploiting the distance between samples of the same class and the class-correlation distance. Methods: First, the class correlation distances between the training samples are calculated and stored. Second, the K nearest neighbor samples are selected, and the class correlation distance of the training samples and the difference ratio between the test and training samples are calculated. Finally, samples are classified into types according to the difference ratio. Results: The experimental results show that the algorithm, combined with user rating preference, achieves a lower MAE value and a better recommendation effect. Conclusion: As the value of K changes, the CCDKNN algorithm is clearly better than the KNN and DWKNN algorithms, and its accuracy is more stable. The algorithm improves the accuracy of similarity and predictability, performing better than the traditional algorithms.
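The class-correlation distance itself is the paper's; the surrounding K-nearest-neighbour machinery, with a pluggable distance function, can be sketched like this (Euclidean distance stands in for the paper's metric):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k, dist_fn):
    """Classify x by majority vote among its k nearest training samples."""
    d = np.array([dist_fn(x, xi) for xi in X_train])
    nearest = np.argsort(d)[:k]
    votes = y_train[nearest]
    return np.bincount(votes).argmax()

# Euclidean as a stand-in; the paper would plug in its
# class-correlation distance here instead.
euclid = lambda a, b: np.linalg.norm(a - b)

X = np.array([[1.0, 1.0], [1.2, 0.9], [8.0, 8.0], [7.5, 8.2]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([7.9, 7.8]), k=3, dist_fn=euclid))  # -> 1
```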
Website Quality Analytics Using Metaheuristic Based Optimization
Authors: Akshi Kumar and Anshika Arora
Background: Studies indicate that high-quality websites get better rankings on search engines. A good website provides reliable content, has a good design and user interface, and can address a global audience, but end-users struggle with the predicament of selecting qualitative websites. Although "quality" is a fairly subjective term, there is an obvious need for a useful and valid model that evaluates the quality attributes of a website. "A website quality model essentially consists of a set of criteria used to determine if a website reaches certain levels of fineness." Objective: The quality of a website must be assured in terms of technicality, accuracy of information, response time, design, ease of use, and more. The aim is to identify the features of a website that determine its quality and to build an automatic website quality prediction model. Methods: We conduct an empirical study on 700 websites and run 6 baseline classifiers to categorize websites as good, average, or poor using quality attributes. Subsequently, metaheuristic-based algorithms (Particle Swarm Optimization, Elephant Search Algorithm, and Wolf Search Algorithm) for optimal feature selection are implemented to obtain an optimal subset of quality attributes that predicts the quality of websites more accurately. Results: The study confirms that the proposed use of metaheuristics for feature selection in website quality classification improves the performance of the supervised learning algorithms. An average 12.74% improvement in accuracy was observed using the features selected by Particle Swarm Optimization, a 5.56% average improvement using the Elephant Search Algorithm, and an average improvement of 5.77% using the Wolf Search Algorithm. Conclusion: The study validates that Particle Swarm Optimization for feature selection in the website quality analytics task outperforms the Wolf Search Algorithm and the Elephant Search Algorithm.
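As a sketch of this kind of feature-selection wrapper, a minimal binary PSO (sigmoid transfer function, cross-validated accuracy of a stand-in classifier as fitness) over a public dataset might look as follows; all hyper-parameters are illustrative:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
n_feat, n_particles, n_iter = X.shape[1], 10, 15

def fitness(bits):
    """Cross-validated accuracy of a classifier on the selected features."""
    mask = bits.astype(bool)
    if not mask.any():
        return 0.0
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pos = (rng.random((n_particles, n_feat)) < 0.5).astype(int)   # bit vectors
vel = rng.normal(scale=0.1, size=(n_particles, n_feat))
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmax(pbest_fit)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(vel.shape), rng.random(vel.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = (rng.random(vel.shape) < 1 / (1 + np.exp(-vel))).astype(int)
    fit = np.array([fitness(p) for p in pos])
    better = fit > pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    gbest = pbest[np.argmax(pbest_fit)].copy()

print("selected", gbest.sum(), "of", n_feat, "features;",
      "best CV accuracy:", round(pbest_fit.max(), 3))
```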
A Parallel Algorithm of Association Rules Applicable to Sales Data Analysis
Authors: Guoping Lei, Ke Xiao, Feiyi Cui, Xiuying Luo and Minlu Dai
Background: This paper puts forward a parallel association-rule algorithm applicable to sales data analysis, based on the idea of database partitioning, and designs a mall sales management system including behavior recognition and data analysis functions as the application model of this algorithm, with a clothing store data management system as the study object. Objective: To adapt to the particularities of the study object's data, the improved algorithm, while mining association rules, also considers priority relations, weights, negative association rules, and other factors among the items of the database. Methods: The improvement is applied to the Apriori algorithm: the original database is divided into n local data sets, the local data sets are mined in parallel to find the locally frequent itemsets in each, and finally the supports are counted to determine the overall frequent sets. Results: Experiments verify that this algorithm reduces the number of database visits, shortens the mining time, and improves the effectiveness and adaptability of the mining results. Conclusion: With negative association rules added, more diversified results can be mined when analyzing specific problems; mining efficiency is improved, the accuracy and adaptability of the mining results are guaranteed, and the high efficiency of the algorithm is ensured. Improving incremental mining efficiency as the database is continuously updated will be considered next.
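The partition idea, finding locally frequent itemsets per chunk and then doing one global recount, can be sketched independently of the paper's weighting and negative-rule extensions (itemsets are capped at pairs for brevity):

```python
from itertools import combinations
from collections import Counter

def local_frequent(partition, min_sup):
    """Itemsets (singletons and pairs here) frequent within one partition."""
    counts = Counter()
    for txn in partition:
        for r in (1, 2):
            counts.update(combinations(sorted(txn), r))
    return {s for s, c in counts.items() if c >= min_sup * len(partition)}

def partitioned_frequent(db, n_parts, min_sup):
    size = -(-len(db) // n_parts)          # ceiling division
    parts = [db[i:i + size] for i in range(0, len(db), size)]
    # pass 1: union of local candidates (any globally frequent itemset is
    # locally frequent in at least one partition); pass 2: one global recount
    candidates = set().union(*(local_frequent(p, min_sup) for p in parts))
    counts = Counter()
    for txn in db:
        items = set(txn)
        counts.update(c for c in candidates if items.issuperset(c))
    return {c for c in candidates if counts[c] >= min_sup * len(db)}

db = [("shirt", "tie"), ("shirt", "belt"), ("shirt", "tie"), ("belt",)]
print(partitioned_frequent(db, n_parts=2, min_sup=0.5))
```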
Hybrid Deep Neural Model for Duplicate Question Detection in Trans-Literated Bi-Lingual Data
Authors: Seema Rani, Avadhesh Kumar and Naresh Kumar
Background: Duplicate content often corrupts the filtering mechanism in online question answering. Moreover, as users are usually more comfortable asking questions in their native language, transliteration adds to the challenge of detecting duplicate questions. This compromises response time and increases answer overload. It has therefore become crucial to build clever, intelligent, semantic filters that match linguistically disparate questions. Objective: Most research on duplicate question detection has been done on mono-lingual, mostly English, Q&A platforms. The aim is to build a model that extends the cognitive capabilities of machines to interpret, comprehend, and learn features for semantic matching in transliterated bi-lingual Hinglish (Hindi + English) data acquired from different Q&A platforms. Methods: In the proposed DQDHinglish (Duplicate Question Detection) model, language transformation (transliteration and translation) is first performed to convert a bi-lingual transliterated question into monolingual English-only text. Next, a hybrid of a Siamese neural network containing two identical Long Short-Term Memory (LSTM) models and a multi-layer perceptron network is proposed to detect semantically similar question pairs, with the Manhattan distance function as the similarity measure. Results: A dataset was prepared by scraping 100 question pairs from social media platforms such as Quora and TripAdvisor, and the proposed model was evaluated on the basis of accuracy and F-score. DQDHinglish achieves a validation accuracy of 82.40%. Conclusion: A deep neural model was introduced to find a semantic match between an English question and a Hinglish (Hindi + English) question so that questions with similar intent can be combined, enabling fast and efficient information processing and delivery. A dataset was created and the proposed model was evaluated for accuracy. To the best of our knowledge, this is the first reported study on transliterated Hinglish semantic question matching.
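A skeletal PyTorch rendering of the Siamese LSTM with a Manhattan-distance similarity head might look like this; the MLP stage, the transliteration pipeline, and all dimensions are omitted or assumed:

```python
import torch
import torch.nn as nn

class SiameseLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # the same LSTM weights encode both questions ("Siamese")
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def encode(self, x):
        _, (h, _) = self.lstm(self.embed(x))
        return h[-1]                          # final hidden state per question

    def forward(self, q1, q2):
        h1, h2 = self.encode(q1), self.encode(q2)
        # Manhattan similarity in (0, 1]: 1 when the encodings coincide
        return torch.exp(-torch.sum(torch.abs(h1 - h2), dim=1))

model = SiameseLSTM(vocab_size=5000)
q1 = torch.randint(0, 5000, (2, 12))          # batch of 2 token-id sequences
q2 = torch.randint(0, 5000, (2, 12))
print(model(q1, q2))                          # duplicate-likelihood scores
```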
Modified Local Binary Pattern Algorithm for Feature Dimensionality Reduction
Authors: Manish Kumar, Rahul Gupta, Kota S. Raju and Dinesh Kumar
Background: Biometric authentication is becoming popular nowadays and is becoming an integral part of IoT and other systems. Face recognition is one of the major and important aspects of biometric systems, after fingerprints. Objective: A face recognition algorithm with feature dimensionality reduction is proposed, as recognition systems require both high speed and accuracy. Methods: The proposed algorithm is based on a variant of the Local Binary Pattern (LBP) for face detection and recognition. The features of each block of the face image are extracted, and then a global face feature is constructed from the super-histogram. Results: For recognition, traditional methods are used: the query image is compared with the data sets (ORL, LFW, and Yale) by similarity index and minimum distance, and the maximum similarity defines the class of the query image. The reduction in the number of features is achieved by modifying the traditional LBP process. Conclusion: The proposed modified method is observed to be faster and more efficient for face recognition than the existing algorithms.
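The dimensionality-reducing modification is the paper's; the baseline block-LBP super-histogram it modifies can be sketched with scikit-image (the grid size and LBP parameters are typical choices, not the paper's):

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_super_histogram(gray, grid=(8, 8), P=8, R=1):
    """Concatenate per-block uniform-LBP histograms into one face feature."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2                              # uniform patterns + "other"
    bh, bw = gray.shape[0] // grid[0], gray.shape[1] // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = lbp[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins),
                                   density=True)
            feats.append(hist)
    return np.concatenate(feats)                # 8 * 8 * 10 = 640 values here

face = np.random.randint(0, 256, (112, 92), np.uint8)   # ORL-sized stand-in
print(lbp_super_histogram(face).shape)
```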
Power Grid Cloud Resource Status Data Compression Method Based on Deep-Learning
Authors: Weixuan Liang, Youchan Zhu and Guoliang Li
Background: With the "three-type two-net, world-class" strategy proposed, the key issues to be addressed are that the number of cloud resources in the power grid continues to grow and a large amount of data must be archived every day. Long-term preservation of data, and the use of backup data for operation and maintenance, fault recovery, fault drills, and tracking of the cloud platform, are essential, and traditional compression algorithms face severe challenges. Methods: This paper therefore proposes a deep-learning method for data compression. First, more accurate and complete grid cloud resource status data is gathered through data cleaning, correction, and standardization; the preprocessed data is then compressed by SaDE-MSAE. Results: Experiments show that the SaDE-MSAE method compresses data faster. The data compression ratio based on the neural network is basically between 45% and 60%, which is relatively stable and better than traditional compression algorithms. Conclusion: The method compresses large amounts of power data quickly and efficiently, improving the speed and accuracy of the algorithm while ensuring that the data remains correct and complete, and improving compression time and efficiency through the neural network. It provides a better compression scheme for grid cloud resource data.
HHT-Based Detection Method of Cutter Abnormal Vibration in Spiral Surface Machining
Authors: Xin Li, Yuliang Zhang, Jianping Yu and Xiaolei Deng
Background: Cutter abnormal vibration occurs frequently during spiral surface machining and results in low quality of the finished surface. To suppress abnormal vibration effectively, it must be detected as soon as possible, but analyzing and processing the cutter abnormal vibration signal in spiral surface machining is difficult because of its complicated components and non-linear, non-stationary characteristics. This paper proposes a detection method for the cutter abnormal vibration signal in spiral surface machining based on Empirical Mode Decomposition (EMD) and the Hilbert-Huang Transform (HHT). Methods: First, EMD of the cutter vibration signal is performed to obtain a series of Intrinsic Mode Function (IMF) components in different frequency bands. Second, the variation in the energy of each IMF component in the frequency domain and its correlation with the original signal are analyzed to find the IMF component carrying the most information about abnormal vibration symptoms. Finally, the Hilbert transform is applied to that IMF component to extract the symptom features of abnormal vibration. Results: The Hilbert-Huang spectrogram obtained by the Hilbert transform is a function of both time and frequency, from which the frequency content at any time can be read, including frequency magnitudes and amplitudes and the moments at which they appear; it can thus describe the time-frequency characteristics of a non-stationary, non-linear signal in detail. Experimental results show that the HHT-based analysis of the cutter vibration signal in spiral surface machining can extract the symptoms of abnormal vibration quickly and effectively and can detect cutter abnormal vibration rapidly. Conclusion: The proposed HHT-based method differs fundamentally from traditional time-frequency analysis methods and has achieved good results in practical applications. It can be used successfully for abnormal vibration detection and provides a basis and guarantee for the subsequent suppression of abnormal vibration.
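The EMD step is usually delegated to a library; the final Hilbert step, extracting instantaneous amplitude and frequency from one selected IMF, can be sketched with SciPy (the IMF below is a synthetic stand-in, not machining data):

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                                    # sampling rate, Hz
t = np.arange(0, 1, 1 / fs)
# stand-in IMF: a 50 Hz tone with slow amplitude modulation
imf = np.sin(2 * np.pi * 50 * t) * (1 + 0.5 * np.sin(2 * np.pi * 2 * t))

analytic = hilbert(imf)                        # analytic signal of the IMF
amplitude = np.abs(analytic)                   # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) / (2 * np.pi) * fs  # instantaneous frequency, Hz

print(inst_freq.mean())                        # ~50 Hz for this stand-in
```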
Reliability Analysis of Cold Standby Parallel System Possessing Failure and Repair Rate Under Geometric Distribution
Authors: Jasdev Bhatti and Mohit K. Kakkar
Background and Aim: With increasing demand for the reliability of industrial machines following continuous or discrete distributions, a notable gap is that in previous research on systems with more than one failure mode, no iteration technique was studied to separate the failed unit on the basis of its failure type. The aim of our paper is therefore to analyze real industrial problems with cold standby units arranged in parallel, using a new inspection procedure that detects the exact failure of a failed unit and communicates it to the repairman, so that the exact failed part is repaired, saving time and maintenance cost. Methods: Geometric distributions and regenerative techniques were applied to calculate different reliability measures such as mean time to system failure, system availability, and the times spent in inspection, repair, and unit failure. Results: Graphical and analytical analyses were conducted to study the increasing or decreasing behavior of the profit function with respect to the repair and failure rates. The system responded properly in fulfilling the basic needs. Conclusion: The calculated values of all reliability parameters are helpful for studying other models following the same concept under different environmental conditions. It can be concluded that reliability increases with an increase in the repair rate and decreases with an increase in the failure rate. The results also provide better reliability-testing strategies that help develop new techniques to increase the effectiveness of the system.
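Under a geometric failure law, the single-unit building blocks behave as follows; this toy sketch is not the paper's two-unit regenerative model, just the discrete-time relations its conclusion reflects (reliability rises with the repair rate and falls with the failure rate):

```python
# Geometric failure law: P(failure at cycle t) = p * (1 - p)**(t - 1),
# so the mean time to failure of one unit is 1/p cycles.
def mtsf_single_unit(p_fail):
    return 1.0 / p_fail

# Steady-state availability of a one-unit repairable system modelled as a
# two-state Markov chain with per-cycle failure prob p and repair prob r.
def steady_state_availability(p_fail, r_repair):
    return r_repair / (p_fail + r_repair)

print(mtsf_single_unit(0.02))                 # 50 cycles on average
print(steady_state_availability(0.02, 0.30))  # ~0.9375, rises with r_repair
```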
Analysis and Fitting of a Thorax Path Loss Based on Implantable Galvanic Coupling Intra-Body Communication
Authors: Shuang Zhang, Yao Li, Yuanyu Yu, Jiang-ming Kuang, Jining Yang, Jiujiang Wang and Yihe Liu
Objective: The aim of this research was to study the channel transmission characteristics of living and dead animal bodies and the signal path loss characteristics of implantable communication in the axial direction. Methods: By injecting fentanyl citrate solution, we kept the research object (a piglet) in a comatose state and then in a death state, so as to analyze the channel characteristics in each state. To analyze channel gain for an implantable device with a fixed implantation depth and varying axial distance, we proposed an implantable two-way communication path loss model. Results: Comparing the living-body and dead-body results showed that the channel gain difference was approximately 10 dB for the same position and distance; the heartbeat, pulse, and breathing of the living animal contributed approximately 1 dB of noise. Analysis of the calculated and experimental results showed that the coefficients of determination of the path loss model were 0.999 and 0.998, respectively, and the model predictions agreed closely with the experimental verification. Conclusion: The path loss model not only fits the experimental results but also predicts well for positions that were not measured.