Recent Advances in Computer Science and Communications - Volume 15, Issue 4, 2022
Inspecting Briquette Machine with Different Faults
Authors: Divesh Garg, Reena Garg and Vanita Garg
Background: A briquette machine is highly useful in modern times, as the demand for energy is increasing rapidly. Given the harm to the environment, studying the briquette machine is a need of the present times. In this paper, the operative unit considered is a briquette (bio-coal) machine, which is used to process agroforestry waste. Objective: A single operative unit has been analyzed stochastically. Inspection at the breakdown of the unit reveals whether it is feasible to repair it under the supervision of either an ordinary or an expert repairman. The inspection reveals two types of faults, minor or major. Minor faults are repaired immediately by the same repairman, whereas a major fault is handled by an expert. Method: It is assumed that the repaired unit needs no further modification once served. Availability, mean time to system failure, and profit are analyzed using the regenerative point graphical technique and the semi-Markov process. Results: The study reveals that the mean time to system failure decreases as the failure rate increases, and availability likewise decreases with a rising failure rate. Moreover, the system's profit falls as the failure rate increases. Conclusion: The findings support the hypothesis that the limits on the failure, repair, and inspection rates have a clear effect on profitability. Moreover, the utility of the scale of operation can easily be derived. The practical importance of biomass briquettes as an alternative to burning coal or wood is well appreciated.
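The abstract's core quantities have simple closed forms under textbook exponential assumptions. The sketch below computes steady-state availability and the mean time to system failure for a single repairable unit with hypothetical failure and repair rates; it only illustrates the quantities involved, not the authors' regenerative point graphical technique or semi-Markov model.

```python
# Minimal sketch: steady-state availability and mean time to system failure (MTSF)
# for a single repairable unit with exponential failure and repair times.
# This is NOT the authors' model; the rates below are hypothetical.

def availability(failure_rate: float, repair_rate: float) -> float:
    """Steady-state availability A = mu / (lambda + mu)."""
    return repair_rate / (failure_rate + repair_rate)

def mtsf(failure_rate: float) -> float:
    """Mean time to system failure under an exponential failure law, MTSF = 1 / lambda."""
    return 1.0 / failure_rate

if __name__ == "__main__":
    for lam in (0.01, 0.02, 0.05):      # increasing failure rate
        print(f"lambda={lam:.2f}  MTSF={mtsf(lam):8.1f}  A={availability(lam, 0.5):.4f}")
    # The output mirrors the paper's qualitative finding: both MTSF and
    # availability fall as the failure rate rises.
```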
IFWG-TOPSIS Model for Supporting Infant Failure Assessment in an Offshore Wind Turbine System
Authors: Daniel O. Aikhuele, Desmond E. Ighravwe and Olubayo Moses Babatunde
Introduction: System failure analysis is an essential aspect of equipment management, as it improves equipment reliability and availability. However, to assess infant failure under dynamic criteria, reliability engineers require special models. Methods: Hence, this study combines the Intuitionistic Fuzzy Weighted Geometric (IFWG) operator and the Intuitionistic Fuzzy Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to develop an IFWG-TOPSIS model for infant failure assessment, applied to a case study of Offshore Wind (OFW) turbine infant failure. Results: During the model evaluation, the study considered infant failure of the turbine's main shaft, blade bearings, pitch system, jacket and monopile support structure, and gearbox. Risk factor, spare-part weight, technical importance, cost, and complexity criteria were used to evaluate these components' reliability. The results show that the blade bearings (S2) and main shaft are the most and least reliable components, respectively. To validate the model's performance, its results were compared with those of Gümüş and Bali's model and the standard VIKOR model; these models selected the same components as the most and least reliable, respectively. Thus, the proposed model is suitable for OFW turbine infant failure assessment. Discussion: It can be deduced that using the modified IFWG operator to calculate the intuitionistic fuzzy distance measure in the standard TOPSIS model has the capacity to produce realistic results that compete with existing decision-making methods. Conclusion: This study has investigated the use of techno-economic criteria for OFW turbine components' infant failure assessment. A fuzzy-based model was used to establish the connection between the criteria and the components, developed using the IFWG and TOPSIS methods. Data for the model's evaluation and validation were obtained from experts' judgments.
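For readers unfamiliar with the base method, the following sketch shows the classical (crisp) TOPSIS ranking step that the authors extend with intuitionistic fuzzy numbers and the IFWG operator. The decision matrix, criterion weights, and cost-criterion flags are hypothetical placeholders, not data from the study.

```python
# Minimal sketch of classical TOPSIS ranking; the paper's IFWG-TOPSIS variant
# replaces the crisp numbers below with intuitionistic fuzzy values.
import numpy as np

def topsis(matrix: np.ndarray, weights: np.ndarray, is_cost: np.ndarray) -> np.ndarray:
    """Return closeness coefficients; higher means closer to the ideal solution."""
    norm = matrix / np.linalg.norm(matrix, axis=0)           # vector normalization
    v = norm * weights                                        # weighted normalized matrix
    ideal = np.where(is_cost, v.min(axis=0), v.max(axis=0))   # positive-ideal solution
    anti = np.where(is_cost, v.max(axis=0), v.min(axis=0))    # negative-ideal solution
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Rows: components (e.g., main shaft, blade bearings, ...); columns: criteria
# (risk factor, spare-part weight, technical importance, cost, complexity).
scores = topsis(np.array([[0.6, 0.3, 0.8, 0.4, 0.5],
                          [0.2, 0.5, 0.9, 0.3, 0.4],
                          [0.7, 0.4, 0.6, 0.6, 0.7]]),
                weights=np.array([0.3, 0.15, 0.25, 0.2, 0.1]),
                is_cost=np.array([True, False, False, True, True]))
print(scores)  # rank components by descending closeness coefficient
```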
Effect of Human Errors on an Inventory Model Under Two Warehouse Environments
Authors: Mandeep Mittal, Sarla Pareek and Aastha Panwar
Background: The assumption in standard inventory models that all produced goods are of good quality turns out to be impractical. In practice, errors may occur in a controlled batch. These items are generally collected during the screening process and sold in a single batch when screening ends. Type I and Type II human errors are considered in this model, arising when the inspector is tired or inattentive during the inspection process. Objective: The objective of this article is to evaluate the storage value and inventory decisions that yield higher profit. The effect of human errors is considered under a two-warehouse environment. Methods: It is also assumed that the holding cost of a rented warehouse (RW) is greater than that of the owned warehouse because of the better preservation facilities in the RW. The results are validated through numerical examples, and a sensitivity analysis is also presented. Result & Conclusion: The results show that as the percentage of error increases, the expected total profit decreases with respect to time.
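To make the role of the two error types concrete, here is a simplified single-cycle expected-profit sketch in which Type I errors reject good items and Type II errors accept defective ones. All parameter values are hypothetical, and the paper's two-warehouse structure is not modeled.

```python
# Minimal sketch of how Type I / Type II inspection errors enter an expected-profit
# calculation for one lot. Simplified single-cycle illustration only.

def expected_lot_profit(lot_size, p_defective, e1, e2,
                        sale_price, scrap_price, unit_cost,
                        screening_cost, return_penalty):
    """e1: Type I error (good item rejected); e2: Type II error (defective item accepted)."""
    accepted = lot_size * ((1 - p_defective) * (1 - e1) + p_defective * e2)
    rejected = lot_size - accepted
    falsely_accepted = lot_size * p_defective * e2        # later returned by customers
    revenue = accepted * sale_price + rejected * scrap_price
    cost = lot_size * (unit_cost + screening_cost) + falsely_accepted * return_penalty
    return revenue - cost

for err in (0.00, 0.02, 0.05, 0.10):                      # hypothetical error percentages
    print(err, round(expected_lot_profit(1000, 0.04, err, err,
                                         50, 10, 25, 0.5, 80), 2))
# As the error percentage grows, the expected profit falls, mirroring the
# paper's qualitative conclusion.
```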
An Adaptive Jamming Device
Background: What all currently known wireless signal jammers have in common is that the geographical area in which jamming takes place is not delimited; it is bounded only by the power of the generated signal, so the covered area is a sphere centered on the emitter of the jamming signal. The reviewed patent provides a solution to the problem of restricting the jammed wireless signal to a specific geographic area. Objective: This paper is a mini-review of US Patent 9,787,425. We analyze the technical problems that this patent tried to solve and discuss the solutions presented. Method: The study starts by analyzing the current state of the art for jamming wireless signals, indicates drawbacks of previous approaches, compares and contrasts these prior arts with the reviewed patent, analyzes whether the proposed solutions are suitable for jamming wireless signals within a delimited geographic area, describes some disadvantages of the reviewed patent, and suggests future research issues. Conclusion: The study also discusses some non-technical issues with the reviewed patent, such as legal issues that depend on the country in which the patent is used, and describes its current and future use.
Meta-heuristic Techniques to Train Artificial Neural Networks for Medical Image Classification: A Review
Authors: Priyanka and Dharmender Kumar
Medical imaging has been utilized in various forms in clinical applications for better diagnosis and treatment of diseases. These imaging technologies help in recognizing the body's ailing region easily. In addition, they cause no pain to the patient, as the interior of the body can be examined without difficulty. Nowadays, various image processing techniques such as segmentation, registration, classification, restoration, and contrast enhancement exist to enhance image quality. Among all these techniques, classification plays an important role in computer-aided diagnosis for easy analysis and interpretation of these images. Image classification not only classifies diseases with high accuracy but also analyses which part of the body is infected. The usage of neural network classifiers in medical imaging applications has opened new doors of opportunity for researchers, spurring them to excel in this domain. Moreover, accuracy in clinical practice and the development of more sophisticated equipment are necessary in the medical field for more accurate and quicker decisions. Therefore, keeping this in mind, researchers have started using meta-heuristic techniques to optimize these classification methods. This paper provides a brief survey on the role of artificial neural networks in medical image classification, the various types of meta-heuristic algorithms applied for optimization purposes, and their hybridization. A comparative analysis showing the effect of applying these algorithms on classification parameters such as accuracy, sensitivity, and specificity is also provided. From the comparison, it can be observed that the usage of these methods significantly optimizes these parameters, leading to the diagnosis and treatment of a number of diseases at an early stage.
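As a concrete illustration of the hybridization the survey covers, the sketch below uses a very small particle swarm optimizer to search the weights of a minimal neural classifier on synthetic data, in place of gradient-based training. The data, network size, and PSO hyperparameters are hypothetical and chosen only to keep the example runnable; real medical-image pipelines use far larger models.

```python
# Minimal sketch: a metaheuristic (tiny particle swarm optimizer) searching the
# weights of a single-layer classifier instead of using gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                       # 200 samples, 5 features (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)     # synthetic labels

def accuracy(w):
    """Single-layer perceptron: weights w[:-1], bias w[-1]."""
    pred = (X @ w[:-1] + w[-1]) > 0
    return (pred == y).mean()

n_particles, dim, iters = 20, 6, 50
pos = rng.normal(size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([accuracy(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    fit = np.array([accuracy(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("best training accuracy:", accuracy(gbest))
```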
Assembly Sequence Planning: A Review
By Han-Ye Zhang
Background: Assembly Sequence Planning (ASP) is an important stage in the process of product design and manufacturing. A reasonable assembly sequence can reduce the complexity of assembly operations, the number of tools and fixtures, assembly time, and assembly costs, while improving assembly efficiency. Methods: The purpose of this paper is to review the objective functions, constraint conditions, and solving methods of ASP. Conclusion: The research methods can be roughly divided into three categories: graph-based methods, knowledge-based methods, and artificial intelligence algorithms. The advantages and disadvantages of the three categories are compared. Finally, future research is discussed, which could provide a reference for ASP.
Automating Duplicate Detection for Lexical Heterogeneous Web Databases
Authors: Anil Ahlawat and Kalpna Sagar
Introduction: The need for efficient search engines has grown with ever-increasing technological advancement and the rapidly growing demand for data on the web. Method: Automating duplicate detection over query results identifies the records from multiple web databases that point to the same real-world entity and returns the non-matching records to end-users. The algorithm proposed in this paper is based on an unsupervised approach with classifiers over heterogeneous web databases and returns more accurate results with high precision, F-measure, and recall. Different assessments have also been executed to analyze the efficacy of the proposed algorithm for the identification of duplicates. Result: Results show that the proposed algorithm has greater precision and F-measure, and the same recall values, as compared to standard UDD. Discussion: This paper aims to introduce an algorithm that automates the process of duplicate detection for lexically heterogeneous web databases. Conclusion: This paper concludes that the proposed algorithm outperforms the standard UDD.
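The basic building block of such duplicate detection is scoring a pair of records by field-level similarity. The sketch below uses a weighted token-Jaccard similarity and a fixed threshold; the fields, weights, threshold, and sample records are hypothetical, and the paper's unsupervised classifier-based approach is more elaborate.

```python
# Minimal sketch of record-pair duplicate detection via weighted field similarity.

def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two field values."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def is_duplicate(rec1: dict, rec2: dict, weights: dict, threshold: float = 0.75) -> bool:
    score = sum(w * jaccard(rec1.get(f, ""), rec2.get(f, ""))
                for f, w in weights.items()) / sum(weights.values())
    return score >= threshold

# Hypothetical records returned by two different web databases for the same product.
r1 = {"title": "Dell Inspiron 15 Laptop", "brand": "Dell", "price": "549"}
r2 = {"title": "Inspiron 15 laptop by Dell", "brand": "Dell", "price": "549"}
print(is_duplicate(r1, r2, weights={"title": 0.6, "brand": 0.3, "price": 0.1}))  # True
```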
Multi-objective Optimization Model for Clustering Based Secure Routing with Congestion Avoidance in Mobile ADHOC Networks
Authors: Srinivasan Murugan and Jeyakarthic Mohan
Introduction: In a Mobile Ad-Hoc Network (MANET), some of the major design issues are clustering, routing, and security. Clustering and routing techniques distribute the load across the network so as to attain better utilization of resources, improved throughput, and reductions in response time and workload. Besides, trust-based schemes help in sending messages in a secure manner and protect the data from attackers by authenticating the sender and receiver inside the network. Aims: The aim of this research article is to propose a new Multi-Objective Optimization (MOO) technique that intends to allocate the available networking resources properly and to balance load, security, and effective data transmission in the network. Methods: The projected model operates on three major processes: clustering, secured routing, and a data aggregation-based transmission scheme. The MOO model involves a Fuzzy Logic (FL)-based clustering process, a Lion Whale optimization algorithm with Congestion Avoidance (LW-CA) technique for the routing process, and an integrated XOR and Huffman (IXH)-based data transmission process. The proposed model is therefore collectively called the FCAXH technique, which achieves energy efficiency, proper load balancing, and security. Results: Simulations were carried out using a network simulator tool, and the model was verified with and without the presence of attackers. The proposed method attained the best results in terms of throughput, Packet Delivery Ratio (PDR), and energy efficiency. Conclusion: The projected FCAXH model achieved energy efficiency, proper load balancing, and security over the other compared approaches, even in the presence of attackers in the network.
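Of the three FCAXH ingredients, the Huffman-coding step is the most standard, so here is a plain Huffman encoder sketch (standard library only). The paper's XOR integration, keys, and message contents are not reproduced; the sample payload is hypothetical.

```python
# Minimal sketch of Huffman coding, one ingredient of the IXH transmission scheme.
import heapq
from collections import Counter

def huffman_codes(message: str) -> dict:
    """Build a prefix code; returns {symbol: bitstring}."""
    heap = [[freq, [sym, ""]] for sym, freq in Counter(message).items()]
    heapq.heapify(heap)
    if len(heap) == 1:                                   # degenerate single-symbol payload
        return {heap[0][1][0]: "0"}
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]                      # prepend bit for the left subtree
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]                      # prepend bit for the right subtree
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {sym: code for sym, code in heap[0][1:]}

payload = "sensor reading 37.2 from cluster head CH-4"   # hypothetical MANET payload
codes = huffman_codes(payload)
encoded = "".join(codes[ch] for ch in payload)
print(len(encoded), "bits after coding vs", 8 * len(payload), "bits raw")
```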
Research on Crack Detection Algorithm of Mining Car Baffle
Authors: Weiwei Li and Fanlei Yan
Introduction: Image processing technology is widely used for crack detection. It is used to build a data acquisition system and applies computer vision techniques for image analysis. Because it is simple, relatively easy to deploy, and low in cost, many image-processing-based detection methods have been proposed. Methods: Heterogeneity of the external light usually changes the appearance of each target in the image, which can seriously compromise the experiment. In this case, the image needs to be processed by a gamma transform. Based on an analysis of the characteristics of mine car baffle images, this paper improves the gamma transform and uses it to enhance the image. Results: It can be concluded that the algorithm in this paper can accurately detect crack areas with an actual width greater than 1.2 mm, and the error between the detected crack length and the actual length lies within (-2, 2) mm. In practice, this error is completely acceptable. Discussion: To compare the performance of the new crack detection method with existing methods, the two most well-known traditional methods, Canny and Sobel edge detection, were selected. Although Sobel edge detection provides some crack information, the surface texture of the mine car baffle causes great interference to crack identification. Conclusion: If cracks appearing on the mine car baffle are not found in time, they often cause accidents, so effective crack detection must be performed. Manual inspection is labor-intensive and prone to missed detections. In order to reduce the labor of crack detection for mine cars and improve detection accuracy, this paper, based on the detection platform built, performs pre-processing, image enhancement, and convolution operations on the collected crack images of the mine car baffle.
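The enhancement step mentioned here rests on the standard power-law (gamma) transform. The following sketch applies that plain transform to a synthetic under-exposed image; the gamma value is hypothetical and the paper's improved, adaptive variant is not reproduced.

```python
# Minimal sketch of the standard gamma (power-law) transform used to normalize
# uneven illumination before crack detection.
import numpy as np

def gamma_transform(image_u8: np.ndarray, gamma: float) -> np.ndarray:
    """Apply s = 255 * (r / 255) ** gamma to an 8-bit grayscale image."""
    normalized = image_u8.astype(np.float64) / 255.0
    return np.clip(255.0 * normalized ** gamma, 0, 255).astype(np.uint8)

dark_image = np.full((4, 4), 60, dtype=np.uint8)      # stand-in for an under-exposed baffle image
print(gamma_transform(dark_image, 0.5))               # gamma < 1 brightens dark regions
```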
Improving Fine-grained Opinion Mining Approach with a Deep Constituency Tree-long Short Term Memory Network and Word Embedding
Authors: Dalila Bouras, Mohamed Amroune, Hakim Bendjenna and Issam Bendib
Objective: One key task of fine-grained opinion mining on product reviews is to extract product aspects and the corresponding opinions expressed by users. Previous work has demonstrated that precise modeling of opinion targets within the surrounding context can improve performance. However, how to effectively and efficiently learn hidden word semantics and better represent targets and context still needs to be studied further. Recent years have seen a revival of the Long Short-Term Memory (LSTM) network, with its effectiveness being demonstrated on a wide range of problems. However, LSTM-based approaches are still limited to linear data processing, since they process information sequentially. As a result, they may perform poorly on user-generated texts, such as product reviews and tweets, whose syntactic structure is not precise. Methods: In this research paper, we propose a constituency tree long short-term memory neural network-based approach. We compare our model with state-of-the-art baselines on the SemEval 2014 datasets. Results: Experimental results show that our models obtain competitive performance compared to various supervised LSTM architectures. Conclusion: Our work contributes to the improvement of state-of-the-art aspect-level opinion mining methods and offers a new approach to support the human decision-making process based on opinion mining results.
Research on Improved Gamma Transform Face Image Preprocessing Fusion Algorithm Under Complex Lighting Conditions
Authors: Xiaolin Tang, Xiaogang Wang, Jin Hou, Huafeng Wu and Ping He
Introduction: Under complex illumination conditions, such as poor light sources and rapidly changing light, the current gamma transform has two disadvantages when pre-processing face images: the transformation parameters need to be set based on experience, and the details of the transformed image are not obvious enough. Objective: To improve the current gamma transform. Methods: This study proposes a weighted fusion algorithm combining an adaptive gamma transform and edge feature extraction. First, an adaptive gamma transform algorithm for face image pre-processing is proposed, in which the transformation parameter is computed from the specific gray values of the input face image. Secondly, a Sobel edge detection operator is used to extract the edge information of the transformed image to obtain an edge detection image. Finally, the adaptively transformed image and the edge detection image are combined through a weighted fusion algorithm to obtain the final result. Results: The contrast of the pre-processed face image is appropriate, and the details of the image are obvious. Conclusion: The method proposed in this paper can enhance the face image while retaining more facial detail, requires no human-computer interaction, and has low computational complexity.
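The following sketch traces the described pipeline end to end: a gamma value chosen adaptively from the image's mean gray level, Sobel edge extraction, and a weighted fusion of the two results. The adaptive rule and the fusion weight are hypothetical stand-ins for the paper's own calculations, shown only to make the flow concrete.

```python
# Minimal sketch of adaptive gamma correction + Sobel edges + weighted fusion.
import numpy as np
from scipy import ndimage

def preprocess_face(image_u8: np.ndarray, edge_weight: float = 0.3) -> np.ndarray:
    img = image_u8.astype(np.float64) / 255.0
    # Hypothetical adaptive rule: darker images (low mean gray) get gamma < 1 (brightening).
    gamma = np.log(0.5) / np.log(img.mean() + 1e-6)
    corrected = img ** gamma
    # Sobel gradient magnitude as the edge-feature image.
    gx = ndimage.sobel(corrected, axis=1)
    gy = ndimage.sobel(corrected, axis=0)
    edges = np.hypot(gx, gy)
    edges /= edges.max() + 1e-6
    fused = (1.0 - edge_weight) * corrected + edge_weight * edges
    return np.clip(255.0 * fused, 0, 255).astype(np.uint8)

face = (np.random.default_rng(1).random((64, 64)) * 120).astype(np.uint8)  # dark test image
print(preprocess_face(face).mean())   # brighter, edge-enhanced output on average
```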
Rule-based Classifiers for Suspect Detection from CCTV Footages
By Sunil Pathak
Background: Significant work has been presented on identifying suspects, gathering information, and examining videos from CCTV footage. This research work aims to recognize suspicious activities during examinations, i.e., object exchange, the entry of a new person, peeping into another's answer sheet, and person exchange, from video captured by a surveillance camera. This requires face recognition, hand recognition, and detecting contact between the face and hands of the same person and between different people. Methods: Segmented frames are given as input to obtain a foreground image with the help of Gaussian filtering and a background modeling method. These foreground images are then passed to an activity recognition model to detect normal or suspicious activity. Result: Accuracy, precision, and recall are calculated for activity detection and contact detection in the best, average, and worst cases. Simulation results are compared across scenarios such as material exchange, position exchange, introduction of a new person, face and hand detection, and multi-person scenarios. Conclusion: In this paper, a framework is prepared for suspect detection. This framework can bring about a revolution in the field of security surveillance in the education sector.
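The foreground-extraction step described in the methods can be illustrated with a running-average background model: smooth each frame with a Gaussian filter, threshold its difference from the background estimate, and slowly update the background. The learning rate, threshold, and synthetic frame below are hypothetical, and the paper's activity recognition stage is not shown.

```python
# Minimal sketch of Gaussian filtering + background modeling for foreground extraction.
import numpy as np
from scipy import ndimage

def foreground_mask(frame_u8, background, alpha=0.05, threshold=25.0):
    """Return (binary foreground mask, updated background model)."""
    smoothed = ndimage.gaussian_filter(frame_u8.astype(np.float64), sigma=1.5)
    mask = np.abs(smoothed - background) > threshold
    background = (1 - alpha) * background + alpha * smoothed   # running-average update
    return mask, background

background = np.full((48, 48), 100.0)                 # learned background estimate
frame = np.full((48, 48), 100, dtype=np.uint8)
frame[10:20, 10:20] = 200                             # an object enters the scene
mask, background = foreground_mask(frame, background)
print(mask.sum(), "foreground pixels detected")
```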
Big Data Analytics for MANET Based Sustainable Smart Healthcare Solution
Authors: Ashu Gautam, Rashima Mahajan and Sherin Zafar
Objective: Collaboration between the most promising upcoming technologies, such as big data and the Internet of Things (IoT), plays a significant role in the sustainable development of the smart world. One important application is the amalgamation of such technologies in the e-healthcare sector, where private information is transferred from one end device to other equipment. Sensors and other instruments engaged in the e-healthcare sector for transferring vital patient information are categorized as Wireless Mesh Networks (WMN). An attacker can introduce many malicious activities via different types of attacks; ultimately, such activities can produce Denial of Service (DoS) of important routines that need to be completed within a stipulated time. Therefore, it is important to showcase the effect of various attacks on the routing methodology of the protocols of such networks. Methods: In this research study, the most suitable routing protocol for handling DDoS attacks is simulated and estimated for Quality of Service (QoS) in a smart network infrastructure in terms of energy consumption and jitter under a changing-nodes scenario, which provides implications for enhancing existing protocols and alleviating the consequences of DDoS attacks. Results: The performance of AODV (Ad hoc On-demand Distance Vector), SAODV (Secured AODV), and HWMP (Hybrid Wireless Mesh Protocol), the protocols most popularly utilized in the healthcare environment, is compared and tabulated. Conclusion: The simulation results show that HWMP outperformed the other two routing protocols in terms of the evaluation metrics, namely energy consumption and jitter, and could therefore be considered much less vulnerable to DDoS attacks prevailing in the sustainable healthcare sector.
Analysis of Univariate and Multivariate Filters Towards the Early Detection of Dementia
Authors: Deepika Bansal, Kavita Khanna, Rita Chhikara, Rakesh K. Dua and Rajeev Malhotra
Objective: Dementia is a progressive neurodegenerative brain disease emerging as a global health problem in adults aged 65 years or above, resulting in the death of nerve cells. The elimination of redundant and irrelevant features from the datasets is necessary for accurate detection and thus timely treatment of dementia. Methods: For this purpose, an ensemble approach of univariate and multivariate feature selection methods is proposed in this study. A comparison of four univariate feature selection techniques (t-Test, Wilcoxon, Entropy and ROC) and six multivariate feature selection approaches (ReliefF, Bhattacharyya, CFSSubsetEval, ClassifierAttributeEval, CorrelationAttributeEval, OneRAttributeEval) has been performed. An ensemble of the best univariate and multivariate filter algorithms is proposed, which helps in acquiring a subset of features that includes only relevant and non-redundant features. Classification is performed using the Naïve Bayes, k-NN, and Random Forest algorithms. Results: Experimental results show that t-Test and ReliefF feature selection can select 10 relevant features that give the same accuracy as when all features are considered. In addition, the accuracy obtained using k-NN with the ensemble approach is 99.96%. The statistical significance of the method has been established using Friedman's statistical test. Conclusion: The new ranking criteria computed by the ensemble method efficiently eliminate insignificant features and reduce the computational cost of the algorithm. The ensemble method has been compared with the other approaches to establish the superiority of the proposed model. Discussion: A remarkable percentage gain in accuracy is noted for all three classifiers after applying feature selection, particularly for Naïve Bayes and k-NN. Among the univariate filter methods, the t-Test stands out while selecting a subset of only 10 features.
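As an illustration of the univariate filter family the study starts from, the sketch below ranks features by the absolute two-sample t statistic between the classes and keeps the top 10. The synthetic data and the cut-off of 10 features are hypothetical; the multivariate stage (e.g., ReliefF) and the ensemble ranking are not shown.

```python
# Minimal sketch of univariate (t-test) filter feature selection.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 40))                       # 120 subjects, 40 features (synthetic)
y = rng.integers(0, 2, size=120)                     # binary label (e.g., dementia vs control)
X[y == 1, :5] += 1.0                                 # make the first 5 features informative

# Rank features by the absolute two-sample t statistic between the classes.
t_stats, _ = stats.ttest_ind(X[y == 1], X[y == 0], axis=0)
top10 = np.argsort(-np.abs(t_stats))[:10]
print("selected feature indices:", sorted(top10.tolist()))
```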
A Computational Model for Driver Risk Evaluation and Crash Prediction Using Contextual Data from On-board Telematics
Authors: Neetika Jain and Sangeeta Mittal
Introduction: Vehicle crashes are hazardous to public safety and may cause infrastructure damage. Risky driving significantly raises the possibility of a vehicle crash. As per statistics from the World Health Organization (WHO), approximately 1.35 million people are involved in road traffic crashes resulting in loss of life or physical disability. WHO attributes road traffic accidents to events such as over-speeding, drunken driving, distracted driving, dilapidated road infrastructure, and unsafe practices such as non-use of helmets and seatbelts. As these driving events negatively affect driving quality and increase the risk of a vehicle crash, they are termed negative driving attributes. Methods: A multi-level hierarchical fuzzy rules-based computational model has been designed to capture risky driving as a driving risk index. Data from the on-board telematics device and the vehicle controller area network are used to capture the required information in a naturalistic way during actual driving conditions. Fuzzy rules-based aggregation and inference mechanisms have been designed to alert about the possibility of a crash due to the onset of risky driving. Results: On-board telematics data of 3213 sub-trips of 19 drivers have been utilized to learn long-term risky driving attributes. Furthermore, the current trip assessment of these drivers demonstrates the efficacy of the proposed model in correctly modeling the driving risk index of all of them, including 7 drivers who were involved in a crash after the monitored trip. Conclusion: In this work, risky driving behavior has been associated not just with rash driving but also with other contextual data, such as the driver's long-term risk aptitude and environmental context, including road type, traffic volume, and weather conditions. Trip-wise risky driving behavior of six out of the seven drivers who met with a crash during the trip was correctly predicted during evaluation. Similarly, for the other 12 drivers, the model accurately predicted safe driving behavior, as these drivers did not meet with any vehicle crash. The proposed model can be used as an alert mechanism to indicate potential crash scenarios to the driver. The current study did not examine lane-changing behavior in detail due to the difficulty of capturing road lanes in the Indian context.
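To make the fuzzy rules-based aggregation concrete, here is a toy single-level risk index: two driving signals are fuzzified with triangular memberships, a few rules are fired with min/max operators, and a weighted average defuzzifies the result. The membership breakpoints, rules, and inputs are hypothetical, not the paper's hierarchical rule base.

```python
# Minimal sketch of a fuzzy rules-based driving risk index.

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def risk_index(speed_kmh: float, harsh_brakes_per_100km: float) -> float:
    speed_high = tri(speed_kmh, 80, 120, 160)
    speed_ok = tri(speed_kmh, 20, 60, 100)
    brake_high = tri(harsh_brakes_per_100km, 3, 8, 15)
    brake_low = tri(harsh_brakes_per_100km, -1, 0, 5)
    # Rule firing strengths: min acts as AND, max groups rules with the same output level.
    high = min(speed_high, brake_high)
    medium = max(min(speed_high, brake_low), min(speed_ok, brake_high))
    low = min(speed_ok, brake_low)
    # Weighted-average defuzzification over representative risk levels (0.2, 0.5, 0.9).
    total = high + medium + low
    return (0.9 * high + 0.5 * medium + 0.2 * low) / total if total else 0.0

print(round(risk_index(speed_kmh=130, harsh_brakes_per_100km=10), 2))  # risky trip -> ~0.9
print(round(risk_index(speed_kmh=55, harsh_brakes_per_100km=1), 2))    # safe trip  -> ~0.2
```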
Fast and Efficient Data Masking Method for Securing Image Over Cloud Computing Environment
Authors: B.K. Siddartha and G.K. Ravikumar
Objective: Preserving the confidentiality of sensitive information is becoming more difficult and challenging, considering current scenarios, as a huge amount of multimedia data is stored and communicated over the internet among users and cloud computing environments. The existing cryptographic security models for storing images on a cloud platform cannot resist various kinds of modern attacks, such as statistical, differential, brute-force, and cropping attacks; therefore, an improved bit scrambling technique using chaotic maps that can resist such attacks is needed. The FEDM cipher image provides low correlation among neighboring pixels, and images can be decrypted even in the presence of noise. This study proposes the FEDM model to achieve better UACI, NPCR, histogram, runtime, and processing time performance than existing image security methods. Methods: An improved bit scrambling (IBS) technique using chaotic maps is designed to resist the attacks listed above. Results: The overall results show that the proposed FEDM model attains much superior performance considering histogram, UACI, NPCR, and runtime. The FEDM model can resist SA. The FEDM model attains better performance because IBS is used in each step of CS; thus, the correlation between adjacent pixels is low, which aids superior security performance. Further, the FEDM model attains better UACI and NPCR performance when compared with existing image encryption models. Conclusion: The FEDM security method can resist DA, noise, cropping attacks, and linear attacks more efficiently due to a larger keyspace. Further, FEDM takes less time to provision security and works smoothly in a cloud computing environment. No prior work has considered runtime performance evaluation under the cloud computing environment. The FEDM model will significantly aid in reducing the overall operational cost of a cloud computing environment, with a reduction in processing time, as cloud charges are based on hours of usage.
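For orientation, the sketch below shows a generic chaotic-map image cipher of the family FEDM belongs to: a logistic-map keystream drives both a pixel permutation and an XOR (bit-scrambling) step, with a round-trip check. The map parameter, seed, and tiny image are hypothetical, and this is not the authors' FEDM construction.

```python
# Minimal sketch of a logistic-map-based image cipher (permutation + XOR).
import numpy as np

def logistic_keystream(x0: float, r: float, n: int) -> np.ndarray:
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)          # logistic map iteration
        xs[i] = x
    return xs

def encrypt(image_u8: np.ndarray, x0: float = 0.3141, r: float = 3.99):
    flat = image_u8.flatten()
    ks = logistic_keystream(x0, r, flat.size)
    perm = np.argsort(ks)                                # chaotic pixel permutation
    keystream = (ks * 255).astype(np.uint8)              # chaotic XOR mask
    cipher = flat[perm] ^ keystream
    # perm is returned here for clarity; in practice it is regenerated from the secret key.
    return cipher.reshape(image_u8.shape), perm

def decrypt(cipher_u8: np.ndarray, perm: np.ndarray, x0: float = 0.3141, r: float = 3.99):
    ks = logistic_keystream(x0, r, cipher_u8.size)
    keystream = (ks * 255).astype(np.uint8)
    shuffled = cipher_u8.flatten() ^ keystream
    flat = np.empty_like(shuffled)
    flat[perm] = shuffled                                # invert the permutation
    return flat.reshape(cipher_u8.shape)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)        # tiny hypothetical image
cipher, perm = encrypt(img)
assert np.array_equal(decrypt(cipher, perm), img)        # round-trip check
```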