Recent Advances in Computer Science and Communications - Volume 14, Issue 3, 2021
-
A Practical Conflicting Role-Based Cloud Security Risk Evaluation Method
Authors: Jin Han, Jing Zhan, Xiaoqing Xia and Xue Fan
Background: Currently, the Cloud Service Provider (CSP) or a third party usually proposes the principles and methods for cloud security risk evaluation, and cloud users have no choice but to accept them. However, since cloud users and cloud service providers have conflicting interests, cloud users may not trust the results of a security evaluation performed by the CSP. Different cloud users may also have different security risk preferences, which makes it difficult for a third party to consider all users' needs during evaluation. In addition, current security evaluation indexes for the cloud are too impractical to test (e.g., indexes such as interoperability, transparency, and portability are not easy to evaluate).
Methods: To solve the above problems, this paper proposes a practical cloud security risk evaluation method for decision-making based on conflicting roles, using the Analytic Hierarchy Process (AHP) with Aggregation of Individual Priorities (AIP).
Results: Our method not only brings forward a new index system for cloud security based on risk sources, with corresponding practical testing methods, but also obtains evaluation results that reflect the risk preferences of the conflicting roles, namely the CSP and cloud users, which lays a foundation for improving mutual trust between them. The experiments show that the method can effectively assess the security risk of cloud platforms; when the number of clouds increased by 100% and 200%, the evaluation time of our method increased by only 12% and 30%, respectively.
Conclusion: Our method achieves consistent decisions based on conflicting roles, with high scalability and practicability for cloud security risk evaluation.
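As a hedged illustration of the decision step only (not the authors' implementation), the sketch below computes AHP priority vectors as principal eigenvectors of pairwise comparison matrices and aggregates the individual priorities of the two conflicting roles with a weighted geometric mean, the usual AIP scheme; the 3-index comparison matrices and the equal role weights are hypothetical.

    import numpy as np

    def ahp_priorities(pairwise):
        # Principal eigenvector of the pairwise comparison matrix,
        # normalized to sum to 1 (the standard AHP priority vector).
        vals, vecs = np.linalg.eig(pairwise)
        w = np.real(vecs[:, np.argmax(np.real(vals))])
        return w / w.sum()

    # Hypothetical 3-index comparison matrices for the two conflicting roles.
    csp_matrix  = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])
    user_matrix = np.array([[1, 1/4, 1/2], [4, 1, 3], [2, 1/3, 1]])

    p_csp, p_user = ahp_priorities(csp_matrix), ahp_priorities(user_matrix)

    # AIP: aggregate individual priorities with a weighted geometric mean.
    role_weights = [0.5, 0.5]  # assumed equal say for CSP and cloud users
    aggregated = (p_csp ** role_weights[0]) * (p_user ** role_weights[1])
    aggregated /= aggregated.sum()
    print(aggregated)  # combined risk-index weights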
-
Collaborative Filtering Recommendation Algorithm Based on Class Correlation Distance
Authors: Hanfei Zhang, Yumei Jian and Ping Zhou
Background: With the proposal of the collaborative filtering algorithm, recommendation systems have become an important approach for users to filter excessive Internet information.
Objective: A class correlation distance collaborative filtering recommendation algorithm is proposed to solve the problems of category judgment and distance metric in the traditional collaborative filtering recommendation algorithm, exploiting the distance between samples of the same class and the class correlation distance.
Methods: First, the class correlation distances between the training samples are calculated and stored. Second, the K nearest neighbor samples are selected, and the class correlation distances of the training samples and the difference ratios between the test samples and the training samples are calculated respectively. Finally, the samples are classified according to the difference ratio.
Results: The experimental results show that the algorithm, combined with user rating preference, achieves a lower MAE value and a better recommendation effect.
Conclusion: As the value of K changes, the CCDKNN algorithm clearly outperforms the KNN and DWKNN algorithms, and its accuracy is more stable. The algorithm improves the accuracy of similarity and predictability, performing better than the traditional algorithm.
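The class correlation distance itself is specific to the paper, so the minimal sketch below shows only the K-nearest-neighbor skeleton with a pluggable distance, using a correlation-weighted Euclidean distance as a stand-in; the toy ratings, class labels, and feature weights are hypothetical.

    import numpy as np

    def weighted_distance(x, y, weights):
        # Stand-in for the paper's class correlation distance:
        # Euclidean distance with per-feature class-relevance weights.
        return np.sqrt(np.sum(weights * (x - y) ** 2))

    def knn_predict(train_X, train_y, query, k, weights):
        dists = np.array([weighted_distance(query, x, weights) for x in train_X])
        nearest = np.argsort(dists)[:k]        # indices of the k closest samples
        votes = np.bincount(train_y[nearest])  # majority vote among neighbors
        return np.argmax(votes)

    # Hypothetical toy data: 4 rated items, 2 classes, 3 features.
    train_X = np.array([[5, 1, 0], [4, 2, 1], [1, 5, 4], [0, 4, 5]], dtype=float)
    train_y = np.array([0, 0, 1, 1])
    weights = np.array([0.5, 0.3, 0.2])  # assumed class-relevance weights
    print(knn_predict(train_X, train_y, np.array([4.5, 1.0, 0.5]), 3, weights))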
-
Website Quality Analytics Using Metaheuristic Based Optimization
Authors: Akshi Kumar and Anshika Arora
Background: Studies indicate that high-quality websites get better rankings on search engines. A good website provides reliable content, has a good design and user interface, and can address a global audience, yet end-users struggle with the predicament of selecting high-quality websites. Although "quality" is a fairly subjective term, there is an obvious need for a useful and valid model that evaluates the quality attributes of a website. "A website quality model essentially consists of a set of criteria used to determine if a website reaches certain levels of fineness."
Objective: The quality of a website must be assured in terms of technicality, accuracy of information, response time, website design, ease of use, and more. The aim is to identify the features of a website that determine its quality and to build an automatic website quality prediction model.
Methods: We conduct an empirical study on 700 websites and run 6 baseline classifiers to categorize websites into good, average, and poor using quality attributes. Subsequently, metaheuristic-based algorithms (Particle Swarm Optimization, Elephant Search Algorithm and Wolf Search Algorithm) for optimal feature selection are implemented to obtain an optimal subset of quality attributes that predicts the quality of websites more accurately.
Results: The study confirms that the proposed use of metaheuristics for feature selection in website quality classification improves the performance of the supervised learning algorithms. An average improvement in accuracy of 12.74% was observed using the features selected by Particle Swarm Optimization, 5.56% using the Elephant Search Algorithm, and 5.77% using the Wolf Search Algorithm.
Conclusion: The study validates that Particle Swarm Optimization for feature selection in the website quality analytics task outperforms the Wolf Search Algorithm and the Elephant Search Algorithm.
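A minimal sketch of wrapper-style feature selection with binary Particle Swarm Optimization, assuming cross-validated accuracy of a k-NN baseline as the fitness function; the swarm parameters (inertia 0.7, acceleration coefficients 1.5) are conventional defaults, not values from the paper.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    def fitness(mask, X, y):
        # Classification accuracy using only the selected feature subset.
        if mask.sum() == 0:
            return 0.0
        clf = KNeighborsClassifier()
        return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

    def binary_pso(X, y, n_particles=10, n_iter=20):
        rng = np.random.default_rng(0)
        d = X.shape[1]
        pos = rng.integers(0, 2, (n_particles, d))   # bit = feature on/off
        vel = rng.normal(0, 1, (n_particles, d))
        pbest = pos.copy()
        pbest_fit = np.array([fitness(p, X, y) for p in pos])
        gbest = pbest[np.argmax(pbest_fit)].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, d))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            # Sigmoid of velocity gives each bit's probability of being 1.
            pos = (rng.random((n_particles, d)) < 1 / (1 + np.exp(-vel))).astype(int)
            fit = np.array([fitness(p, X, y) for p in pos])
            improved = fit > pbest_fit
            pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
            gbest = pbest[np.argmax(pbest_fit)].copy()
        return gbest  # best feature mask found; use X[:, gbest.astype(bool)]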
-
A Parallel Algorithm of Association Rules Applicable to Sales Data Analysis
Authors: Guoping Lei, Ke Xiao, Feiyi Cui, Xiuying Luo and Minlu Dai
Background: This paper puts forward a parallel association rule algorithm applicable to sales data analysis, based on the idea of database division, and designs a mall sales management system including behavior recognition and data analysis functions as the application model of this algorithm, with a clothing store data management system as the study object.
Objective: To adapt to the particularity of the study object's data, the improved algorithm also considers priority relations, weights, negative association rules, and other factors among the items of the database while mining the association rules.
Methods: The improvement is applied to the Apriori algorithm: the original database is divided into n local data sets, the local data sets are mined in parallel, the local frequent itemsets are found in each local data set, and finally the supports are counted to determine the final global frequent itemsets.
Results: Experiments verify that this algorithm reduces the number of visits to the database, shortens the mining time of the algorithm, and improves the effectiveness and adaptability of the mining results.
Conclusion: With the addition of negative association rules, data with diversified results can be mined when analyzing specific problems; mining efficiency is improved, the accuracy and adaptability of the mining results are guaranteed, and the high efficiency of the algorithm is ensured. Improving the efficiency of incremental mining as the database is continuously updated will be considered next.
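A minimal sketch of the division idea under stated assumptions: each partition is mined independently for local frequent itemsets (only 1- and 2-itemsets here, for brevity), and the union of local results is recounted against the whole database to confirm the global frequent sets; the paper's priority, weight, and negative-rule extensions are omitted, and the toy transactions are hypothetical.

    from itertools import combinations
    from collections import Counter

    def local_frequent(transactions, min_support):
        # Frequent 1- and 2-itemsets in one partition (a minimal Apriori pass).
        n = len(transactions)
        counts = Counter(item for t in transactions for item in t)
        freq1 = {i for i, c in counts.items() if c / n >= min_support}
        pair_counts = Counter(p for t in transactions
                              for p in combinations(sorted(set(t) & freq1), 2))
        freq2 = {p for p, c in pair_counts.items() if c / n >= min_support}
        return {(i,) for i in freq1} | freq2

    def partitioned_apriori(partitions, min_support):
        # Phase 1: mine each partition independently (parallelizable).
        candidates = set().union(*(local_frequent(p, min_support) for p in partitions))
        # Phase 2: recount candidate supports over the whole database.
        all_tx = [t for p in partitions for t in p]
        n = len(all_tx)
        return {c for c in candidates
                if sum(set(c) <= set(t) for t in all_tx) / n >= min_support}

    partitions = [[["shirt", "tie"], ["shirt", "belt"]],
                  [["shirt", "tie"], ["coat"]]]
    print(partitioned_apriori(partitions, min_support=0.5))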
-
Hybrid Deep Neural Model for Duplicate Question Detection in Trans-Literated Bi-Lingual Data
Authors: Seema Rani, Avadhesh Kumar and Naresh Kumar
Background: Duplicate content often corrupts the filtering mechanism in online question answering. Moreover, as users are usually more comfortable conversing in their native language, transliteration adds to the challenges of detecting duplicate questions. This compromises response time and increases answer overload. It has therefore become crucial to build clever, intelligent, and semantic filters that match linguistically disparate questions.
Objective: Most research on duplicate question detection has been done on mono-lingual, mostly English, Q&A platforms. The aim is to build a model that extends the cognitive capabilities of machines to interpret, comprehend, and learn features for semantic matching in transliterated bi-lingual Hinglish (Hindi + English) data acquired from different Q&A platforms.
Methods: In the proposed DQDHinglish (Duplicate Question Detection) model, language transformation (transliteration and translation) is first performed to convert the bi-lingual transliterated question into monolingual English-only text. Next, a hybrid of a Siamese neural network containing two identical Long Short-Term Memory (LSTM) models and a multi-layer perceptron network is proposed to detect semantically similar question pairs. The Manhattan distance function is used as the similarity measure.
Results: A dataset was prepared by scraping 100 question pairs from various social media platforms, such as Quora and TripAdvisor. The performance of the proposed model was evaluated on the basis of accuracy and F-score. The proposed DQDHinglish achieves a validation accuracy of 82.40%.
Conclusion: A deep neural model was introduced to find a semantic match between an English question and a Hinglish (Hindi + English) question so that questions with similar intent can be combined to enable fast and efficient information processing and delivery. A dataset was created and the proposed model was evaluated in terms of accuracy. To the best of our knowledge, this work is the first reported study on transliterated Hinglish semantic question matching.
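A minimal sketch of the Siamese LSTM branch with the Manhattan similarity exp(-||h1 - h2||_1), assuming Keras/TensorFlow; vocab_size, embed_dim, max_len, and the 50-unit LSTM are hypothetical hyperparameters, and the transliteration step and MLP head of the full DQDHinglish model are omitted.

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    vocab_size, embed_dim, max_len = 10000, 64, 30  # assumed hyperparameters

    def encoder():
        # Shared LSTM encoder applied to both questions (Siamese weights).
        inp = layers.Input(shape=(max_len,), dtype="int32")
        x = layers.Embedding(vocab_size, embed_dim)(inp)
        x = layers.LSTM(50)(x)
        return Model(inp, x)

    enc = encoder()
    q1 = layers.Input(shape=(max_len,), dtype="int32")
    q2 = layers.Input(shape=(max_len,), dtype="int32")
    h1, h2 = enc(q1), enc(q2)

    # Manhattan similarity: exp(-||h1 - h2||_1) maps distance into (0, 1].
    sim = layers.Lambda(
        lambda t: tf.exp(-tf.reduce_sum(tf.abs(t[0] - t[1]), axis=1, keepdims=True))
    )([h1, h2])

    model = Model([q1, q2], sim)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])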
-
Modified Local Binary Pattern Algorithm for Feature Dimensionality Reduction
Authors: Manish Kumar, Rahul Gupta, Kota S. Raju and Dinesh Kumar
Background: Biometric authentication is becoming popular nowadays and is becoming an integral part of IoT and other systems. Face recognition is one of the major and most important aspects of biometric systems after the fingerprint.
Objective: A face recognition algorithm with feature dimensionality reduction is proposed, which is much needed in recognition systems for high speed and accuracy.
Methods: The proposed algorithm is based on a variant of the Local Binary Pattern (LBP) for face detection and recognition. The features of each block of the face image are extracted, and then the global feature of the face is constructed from a super histogram.
Results: For recognition, traditional methods are used. The query image is compared with the datasets (ORL, LFW and Yale) using a similarity index and the minimum distance; the maximum similarity defines the class of the query image. The reduction in the number of features is achieved by modifying the traditional LBP process.
Conclusion: The proposed modified method is observed to be faster and more efficient for face recognition than the existing algorithms.
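A minimal sketch of the classic (unmodified) LBP pipeline the paper builds on: per-pixel 8-neighbor codes, per-block histograms, and concatenation into the global super histogram; the 4x4 block grid is an assumption, and the paper's dimensionality-reducing modification is not reproduced here.

    import numpy as np

    def lbp_image(gray):
        # Basic 3x3 LBP: threshold the 8 neighbors against the center pixel
        # and pack the comparison bits into one 8-bit code per pixel.
        padded = np.pad(gray, 1, mode="edge").astype(int)
        center = padded[1:-1, 1:-1]
        offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
        code = np.zeros_like(center)
        for bit, (dy, dx) in enumerate(offsets):
            neighbor = padded[1+dy:padded.shape[0]-1+dy, 1+dx:padded.shape[1]-1+dx]
            code |= (neighbor >= center).astype(int) << bit
        return code

    def super_histogram(gray, blocks=4):
        # Split the face into blocks, histogram each block's LBP codes, and
        # concatenate the histograms into one global descriptor.
        code = lbp_image(gray)
        h, w = code.shape
        feats = []
        for i in range(blocks):
            for j in range(blocks):
                block = code[i*h//blocks:(i+1)*h//blocks, j*w//blocks:(j+1)*w//blocks]
                feats.append(np.bincount(block.ravel(), minlength=256))
        return np.concatenate(feats)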
-
Power Grid Cloud Resource Status Data Compression Method Based on Deep-Learning
Authors: Weixuan Liang, Youchan Zhu and Guoliang Li
Background: With the proposal of the "three-type, two-net, world-class" strategy, the key issues to be addressed are that the number of cloud resources in the power grid continues to grow and a large amount of data must be archived every day. Long-term preservation of data and the use of backup data for operation and maintenance, fault recovery, fault drills, and tracking of the cloud platform are essential, and traditional compression algorithms face severe challenges.
Methods: In this context, this paper proposes a deep-learning method for data compression. First, more accurate and complete grid cloud resource status data are obtained through data cleaning, correction, and standardization; the preprocessed data are then compressed by SaDE-MSAE.
Results: Experiments show that the SaDE-MSAE method can compress data faster. The data compression ratio based on the neural network is basically between 45% and 60%, which is relatively stable and better than traditional compression algorithms.
Conclusion: The method can compress large amounts of power data quickly and efficiently. It improves the speed and accuracy of the algorithm while ensuring that the data are correct and complete, and improves compression time and efficiency through the neural network, giving better compression schemes for grid cloud resource data.
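SaDE-MSAE's evolutionary tuning of the stacked autoencoder is beyond a short sketch; the hedged example below shows only the underlying idea of autoencoder-based compression, where the bottleneck code is what gets stored; n_features, code_dim, the layer sizes, and the training data are all hypothetical.

    import numpy as np
    from tensorflow.keras import layers, Model

    n_features, code_dim = 64, 8  # assumed status-record width and code size

    inp = layers.Input(shape=(n_features,))
    x = layers.Dense(32, activation="relu")(inp)
    code = layers.Dense(code_dim, activation="relu")(x)   # compressed form
    x = layers.Dense(32, activation="relu")(code)
    out = layers.Dense(n_features, activation="linear")(x)

    autoencoder = Model(inp, out)
    encoder = Model(inp, code)  # archive only the bottleneck codes
    autoencoder.compile(optimizer="adam", loss="mse")

    # Hypothetical normalized status records.
    data = np.random.rand(1000, n_features).astype("float32")
    autoencoder.fit(data, data, epochs=5, batch_size=64, verbose=0)
    compressed = encoder.predict(data)  # 64 -> 8 values per record (~12.5%)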
-
HHT-Based Detection Method of Cutter Abnormal Vibration in Spiral Surface Machining
Authors: Xin Li, Yuliang Zhang, Jianping Yu and Xiaolei Deng
Background: Cutter abnormal vibration occurs frequently during spiral surface machining and results in low quality of the finished surface. To suppress cutter abnormal vibration effectively, it is necessary to detect it as soon as possible, but the analysis and processing of the cutter abnormal vibration signal in spiral surface machining are difficult because of its complicated components and non-linear, non-stationary characteristics. In this paper, a detection method for the cutter abnormal vibration signal based on Empirical Mode Decomposition (EMD) and the Hilbert-Huang Transform (HHT) is proposed for spiral surface machining.
Methods: First, EMD of the cutter vibration signal in spiral surface machining is performed to obtain a series of Intrinsic Mode Function (IMF) components in different frequency bands. Second, the variation in the energy of each IMF component in the frequency domain and its correlation with the original signal are analyzed to find the IMF component carrying the largest amount of information on the abnormal vibration symptom. Finally, the Hilbert transform is applied to that IMF component to extract the symptom features of abnormal vibration.
Results: The Hilbert-Huang spectrogram obtained by the Hilbert transform is a two-variable function of time and frequency, from which the frequency information at any time can be obtained, including the frequency magnitude and amplitude and the moments at which they appear; this describes the time-frequency characteristics of the non-stationary, non-linear signal in detail. Experimental results show that the HHT-based analysis of the cutter vibration signal in spiral surface machining can extract the symptom of abnormal vibration quickly and effectively, and can detect cutter abnormal vibration rapidly.
Conclusion: The proposed HHT-based method is fundamentally different from traditional signal time-frequency analysis methods and has achieved good results in practical applications. It can be successfully used in abnormal vibration detection, providing a basis and guarantee for the subsequent suppression of abnormal vibration.
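A minimal sketch of the EMD-plus-Hilbert pipeline under stated assumptions: PyEMD for the decomposition, a synthetic chatter-like test signal, and correlation with the original signal as the IMF selection heuristic (one of the two criteria the abstract mentions).

    import numpy as np
    from scipy.signal import hilbert
    from PyEMD import EMD  # pip install EMD-signal

    fs = 2000                        # assumed sampling rate (Hz)
    t = np.arange(0, 1, 1 / fs)
    # Hypothetical cutter signal: steady tooth-passing tone plus a late chatter burst.
    signal = np.sin(2*np.pi*80*t) + (t > 0.6) * 0.8*np.sin(2*np.pi*350*t)

    imfs = EMD().emd(signal)         # IMF components, high to low frequency

    # Select the IMF most correlated with the original signal as the carrier
    # of the abnormal-vibration symptom.
    corr = [abs(np.corrcoef(imf, signal)[0, 1]) for imf in imfs]
    imf = imfs[int(np.argmax(corr))]

    analytic = hilbert(imf)          # analytic signal via the Hilbert transform
    amplitude = np.abs(analytic)     # instantaneous amplitude envelope
    inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)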
-
Reliability Analysis of Cold Standby Parallel System Possessing Failure and Repair Rate Under Geometric Distribution
Authors: Jasdev Bhatti and Mohit K. Kakkar
Background and Aim: With increasing demands on the reliability of industrial machines following continuous or discrete distributions, an important observation is that in previous research on systems with more than one failure, no inspection technique was studied to separate the failed unit on the basis of its failure. The aim of this paper is therefore to analyze a real industrial problem with cold standby units arranged in parallel, introducing an inspection procedure that detects the exact failure of a failed unit and communicates it to the repairman, so that only the failed part of the unit is repaired, saving time and maintenance cost.
Methods: Geometric distributions and regenerative techniques were applied to calculate different reliability measures such as the mean time to system failure, system availability, and the times spent in inspection, repair, and unit failure.
Results: Graphical and analytical analyses were conducted to study the increasing/decreasing behavior of the profit function with respect to the repair and failure rates. The system responded properly in fulfilling the basic needs.
Conclusion: The calculated values of all reliability parameters are helpful for studying other models following the same concept under different environmental conditions. It can be concluded that reliability increases with an increase in the repair rate and decreases with an increase in the failure rate. The results evaluated in this paper also provide better reliability testing strategies that help develop new techniques to increase the effectiveness of the system.
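The paper's regenerative-process analysis is analytic; as a hedged stand-in, the sketch below estimates the mean time to system failure of a two-unit cold-standby system by Monte Carlo simulation, with geometric (per-time-step) failure and repair probabilities; the rates are hypothetical and the inspection stage is omitted.

    import random

    def simulate_mtsf(p_fail, p_repair, trials=20_000):
        # Monte Carlo estimate of mean time to system failure for a two-unit
        # cold-standby system under geometric failure/repair per time step.
        total = 0
        for _ in range(trials):
            t, standby_ok, repairing = 0, True, False
            while True:
                t += 1
                if repairing and random.random() < p_repair:
                    repairing, standby_ok = False, True   # repaired unit rejoins
                if random.random() < p_fail:              # operating unit fails
                    if standby_ok:
                        standby_ok, repairing = False, True  # switch to standby
                    else:
                        break                              # no unit left: down
            total += t
        return total / trials

    print(simulate_mtsf(p_fail=0.05, p_repair=0.2))  # hypothetical rates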
-
Analysis and Fitting of a Thorax Path Loss Based on Implantable Galvanic Coupling Intra-Body Communication
Authors: Shuang Zhang, Yao Li, Yuanyu Yu, Jiang-ming Kuang, Jining Yang, Jiujiang Wang and Yihe Liu
Objective: The aim of this research was to study the channel transmission characteristics of living and dead animal bodies and the signal path loss characteristics of implantable communication in the axial direction.
Methods: By injecting fentanyl citrate injection solution, we kept the research object (a piglet) in a comatose state and then a death state, so as to analyze the channel characteristics in each state. To analyze the channel gain of an implantable device with a fixed implantation depth and a varying axial distance, we proposed an implantable two-way communication path loss model.
Results: Comparing the living-body and dead-body results showed that the channel gain difference was approximately 10 dB for the same position and distance, and the heartbeat, pulse, and breathing of the living animal contributed approximately 1 dB of noise. Analyzing the calculated and experimental results of the path loss model showed that the coefficients of determination of the model were 0.999 and 0.998, respectively. The model prediction and the experimental verification also agreed closely.
Conclusion: The path loss model not only fits the experimental results but also predicts well for positions that were not measured.
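The paper's model form is not given in the abstract; as a hedged stand-in, the sketch below fits the standard log-distance path loss model to hypothetical axial measurements and reports the coefficient of determination, mirroring the kind of fit quality quoted above.

    import numpy as np
    from scipy.optimize import curve_fit

    def log_distance_loss(d, pl0, n):
        # Standard log-distance model: PL(d) = PL(d0) + 10 n log10(d / d0), d0 = 1.
        return pl0 + 10 * n * np.log10(d)

    # Hypothetical axial distances (cm) and measured path losses (dB).
    dist = np.array([2, 4, 6, 8, 10, 12], dtype=float)
    loss = np.array([41.2, 47.9, 51.8, 54.6, 56.9, 58.7])

    (pl0, n), _ = curve_fit(log_distance_loss, dist, loss)
    pred = log_distance_loss(dist, pl0, n)

    # Coefficient of determination (R^2) of the fit.
    ss_res = np.sum((loss - pred) ** 2)
    ss_tot = np.sum((loss - loss.mean()) ** 2)
    print(pl0, n, 1 - ss_res / ss_tot)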