Recent Advances in Computer Science and Communications - Volume 14, Issue 6, 2021
-
Research on Edge Detection of Agricultural Pest and Disease Leaf Image Based on LVQ Neural Network
By Tongke Fan
Background: The Roberts, Sobel, Prewitt and other operators are commonly used for image edge detection, but because agricultural pest and disease images have complex backgrounds, these operators do not detect edges efficiently on them. Objective: To improve the accuracy of crop-disease image edge detection, edge detection based on an LVQ neural network is studied. Methods: An LVQ1 neural network is first used to detect image edges. The commonly used median feature, direction-information feature and Kirsch-operator direction feature serve as the input signals for training the network. Building on the simulation results, an additional image feature vector capturing pixel-neighborhood consistency is added, and an edge-detection algorithm based on an LVQ2 neural network is proposed. Computer simulations show that the improved algorithm significantly improves the continuity of the output edge image. Results: The LVQ2 neural network completes edge detection of gray-scale images well; the output edge image has good continuity and clear contours and preserves most of the information of the original image. Compared with the LVQ1 results, the edges detected by the LVQ2 network handle small edges noticeably better, and the contours are clearer, indicating that the training method converges the network better and yields more satisfactory output. Conclusion: Simulation comparisons were carried out on the Matlab platform. The results show that the detection algorithm based on the LVQ2 neural network, with the four image features as input, significantly improves the continuity of the output edge image and is superior to the traditional Sobel algorithm and the LVQ1 network in robustness and generalization.
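The LVQ1 training rule the abstract relies on can be illustrated with a minimal sketch. The function names, prototype counts, learning rate and the toy two-class data are illustrative assumptions, not the paper's settings; in the paper each input vector would be the four per-pixel features and the two classes would be "edge" and "non-edge":

```python
import numpy as np

def train_lvq1(X, y, protos_per_class=2, lr=0.05, epochs=30, seed=0):
    """LVQ1 training: the winning prototype moves toward a sample of its
    own class and away from a sample of a different class."""
    rng = np.random.default_rng(seed)
    proto, proto_y = [], []
    for c in np.unique(y):                      # stratified prototype init
        Xc = X[y == c]
        pick = rng.choice(len(Xc), protos_per_class, replace=False)
        proto.append(Xc[pick])
        proto_y += [c] * protos_per_class
    proto, proto_y = np.vstack(proto).astype(float), np.array(proto_y)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            w = np.argmin(np.linalg.norm(proto - xi, axis=1))  # winner
            step = lr * (xi - proto[w])
            proto[w] += step if proto_y[w] == yi else -step
    return proto, proto_y

def lvq_predict(proto, proto_y, X):
    """Assign each sample the class of its nearest prototype."""
    d = np.linalg.norm(X[:, None, :] - proto[None, :, :], axis=2)
    return proto_y[d.argmin(axis=1)]
```

LVQ2 refines this rule by also updating the second-nearest prototype when the two nearest prototypes disagree on the class, which is what the paper credits for the cleaner small-edge handling.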
-
Isolated Word-Based Spoken Dialogue System Using Odia Phones
Authors: Basanta K. Swain, Sanghamitra Mohanty and Chiranji L. Chowdhary
Aims: To develop a spoken dialogue system in an Indian language with voice response and a voice-based biometric feature. Background: Most research on spoken dialogue systems is carried out in the U.S. and Europe, and currently only a few government-funded projects on spoken dialogue systems (SDS) are running in Indian academic institutes. Objective: We tried to use our spoken dialogue system to eliminate desktop clutter. Computer users commonly place their most frequently used files, folders and application shortcuts on the desktop. A cluttered desktop not only slows down the computer but also looks messy and makes files hard to find. We therefore use the spoken dialogue system to eliminate desktop clutter painlessly: the system opens the user's files, folders and frequently used applications in spoken-command mode, with voice response. Methods: In this research article, we use an Indian spoken language for communication with the spoken dialogue system. A statistical machine-learning algorithm, the Hidden Markov Model (HMM), is adopted to build the speech recognition engine. The speaker verification module is developed using the fuzzy c-means (FCM) algorithm, and speech synthesis is carried out using a diphone corpus. Results: The speaker verification module yielded satisfactory results, with an average accuracy of 66.2% using the FCM algorithm. Fundamental frequency and formant frequencies were found to carry the distinctive characteristics for speaker verification in the Indian spoken language. The vital module of the SDS, the speech recognition engine, achieves word accuracies of 78.22% and 62.31% for seen and unseen users, respectively. The voice response is given to the user as synthesized speech, whose audio quality is measured with the MOS test; the MOS values are 3.8 and 3.6 over two distinct groups of listeners. Conclusion: We have developed a spoken dialogue system based on the Odia phone set and integrated a speaker verification module to provide additional biometric security.
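The fuzzy c-means clustering used for the speaker verification module can be sketched as follows. This is a generic FCM implementation under stated assumptions (two clusters, fuzzifier m = 2, Euclidean distance); the paper's actual feature vectors would be pitch and formant measurements per speaker:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Fuzzy c-means: returns soft memberships U (n x c) and centroids V (c x d).
    Each sample belongs to every cluster with a degree in [0, 1]."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # rows sum to 1
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]               # centroids
        d = np.linalg.norm(X[:, None, :] - V[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))                     # membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, V
```

Verification then reduces to checking whether a claimed speaker's features fall in the cluster enrolled for that speaker with sufficiently high membership.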
-
Analysis of Performance of Two Wavelet Families Using GLCM Feature Extraction for Mammogram Classification of Breast Cancer
Authors: Shivangi Singla and Uma Kumari
Background: Mammogram images are low-dose x-ray images that can detect breast cancer before a woman actually experiences symptoms. Objective: To determine an accurate methodology for feature extraction using different wavelet families and different classification algorithms. Methods: Two wavelet families are used, Daubechies (db8) and Biorthogonal (bior3.7). The Gray-Level Co-occurrence Matrix (GLCM) is used to extract 9 features at each sub-band, giving 27 features across the three sub-bands of the Discrete Wavelet Transform. Features are extracted at three levels of decomposition, after which the classification algorithms Naive Bayes, Multilayer Perceptron, Fuzzy-NN and Genetic Programming are applied to them. The feature selection algorithms Wavelet and Principal Component Analysis are applied to select features, and the resulting classification accuracies are determined and compared. Results: The Mammographic Image Analysis Society database, comprising 322 mammogram images from 161 patients, is used. Without feature selection, Fuzzy-NN gives the best results at the third level of decomposition, with classification accuracy up to 99.68% for the db8 wavelet family and up to 99.98% for bior3.7. With feature selection, Wavelet with Multilayer Perceptron gives accuracy up to 96.27% for db8 and up to 93.47% for bior3.7. Conclusion: The Fuzzy-NN algorithm gives the highest accuracy, 99.98%, for the bior3.7 wavelet family. The preferred wavelet family differs with and without feature selection: db8 performs better with feature selection, while bior3.7 performs better without it.
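The GLCM step can be sketched in a few lines. This is a generic implementation under stated assumptions (a single horizontal pixel offset, an image already quantized to a small number of gray levels) and shows three representative features out of the nine the paper extracts per sub-band:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for one pixel offset.
    `img` must already be quantized to integers in [0, levels)."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            P[img[i, j], img[i + dy, j + dx]] += 1
    return P / P.sum()

def glcm_features(P):
    """Contrast, energy and homogeneity -- three common GLCM features."""
    i, j = np.indices(P.shape)
    return {
        "contrast": float(((i - j) ** 2 * P).sum()),
        "energy": float((P ** 2).sum()),
        "homogeneity": float((P / (1.0 + np.abs(i - j))).sum()),
    }
```

In the paper this computation is applied to each DWT sub-band at each decomposition level, and the per-sub-band feature dictionaries are concatenated into the 27-element feature vector fed to the classifiers.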
-
Designing an Expert System for the Diagnosis of Multiple Myeloma by Using Rough Set Theory
Authors: Tooraj Karimi, Arvin Hojati and Reza Razavi
Background: One of the most interesting and important topics in information systems and knowledge management is eliciting rules and collecting the knowledge of human experts in various subjects for use in expert systems. Many scientists have used decision support systems to support business or organizational decision-making, including clinical decision support systems for medical diagnosis. Objective: In this study, a rough-set-based expert system is designed for the diagnosis of a type of blood cancer called multiple myeloma. To improve the validity of the generated models, three condition attributes describing the shapes of "Total protein", "Beta2%" and "Gamma%" are added to the models to enrich the decision-attribute value domain. Methods: 1100 serum protein electrophoresis tests are investigated and, based on these test results, 15 condition attributes are defined. Four different rule models are obtained by extracting rules from reducts. The Johnson and Genetic Algorithm reducers, each with the "Full" and "ORR" approaches, are used to generate the reducts. Results: The GA/ORR model, with 87% accuracy, is used as the inference engine of an expert system, and a dedicated user interface is designed to analyze test results automatically based on the generated models. Gamma% is detected as a core attribute of the information system. Conclusion: Based on the reduct-generation results, the Gamma% attribute is the core of the information system, meaning that the information carried by this condition attribute has the greatest impact on the diagnosis of multiple myeloma. The GA/ORR model with 87% accuracy is selected as the inference engine of the expert system, and a dedicated user interface is created to help specialists diagnose multiple myeloma.
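The rough-set notion of a core attribute, the finding highlighted in the conclusion, can be made concrete with a small sketch. This is textbook rough set theory, not the paper's code; the toy decision table below (attribute names `g`, `b`, decision `d`) is a hypothetical stand-in for the 15-attribute electrophoresis data:

```python
def partition(rows, attrs):
    """Indiscernibility classes: group objects by their values on attrs."""
    blocks = {}
    for i, row in enumerate(rows):
        blocks.setdefault(tuple(row[a] for a in attrs), []).append(i)
    return list(blocks.values())

def positive_region(rows, attrs, decision):
    """Objects whose indiscernibility class is pure w.r.t. the decision."""
    pos = set()
    for block in partition(rows, attrs):
        if len({rows[i][decision] for i in block}) == 1:
            pos.update(block)
    return pos

def core(rows, attrs, decision):
    """Attributes whose removal shrinks the positive region.
    A core attribute appears in every reduct of the information system."""
    full = positive_region(rows, attrs, decision)
    return [a for a in attrs
            if positive_region(rows, [b for b in attrs if b != a], decision) != full]
```

Finding Gamma% in the core therefore means that no reduct, and hence no rule model, can discriminate the diagnoses without it.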
-
Multi-Criteria Decision-Making Techniques for Asset Selection
Authors: Shraddha Harode, Manoj Jha and Namita Srivastava
Background: Even after the introduction of fuzzy set theory, representing preferences has remained a matter of discussion, because a fuzzy set, with its single-valued membership, cannot express all the desired information. Hesitant fuzzy sets (HFSs), which allow all possible membership degrees in [0,1], are widely used in decision making where hesitancy arises in stating preferences. Objective: The aim of this paper is to construct a diversified portfolio in which the return is maximal and the risk is minimal. Methods: Decision-making methods based on fuzzy soft set theory, namely the Fuzzy Soft Set, the Mean Potentiality Approach and the Soft Hesitant Fuzzy Rough Set, are used to construct the optimal portfolio, and a non-fuzzy method is applied for comparison. The Soft Hesitant Fuzzy Rough Set is found to be the best of these methods. The proportions of the optimal portfolio are then obtained with the help of firefly optimization. Results: The Soft Hesitant Fuzzy Rough Set has the best outcomes on the performance measures of these methods, and a diversified portfolio is constructed with it. After constructing the optimal portfolio, the firefly algorithm is applied to obtain the proportions of the seven assets. The ranked portfolio clearly shows its firmness, having maximum return and minimum risk compared with the unranked portfolio. Conclusion: The firefly algorithm is applied to optimize the proportions of the seven assets in the optimal portfolio. The main result is that return and dividend are better and risk is lower than with the unranked method; the optimal portfolio built with the ranking method is better than the one built without it.
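The firefly-optimization step for portfolio proportions can be sketched generically. The objective (return per unit risk), the two-asset toy data and all parameter values are illustrative assumptions; the paper optimizes proportions over its seven ranked assets:

```python
import numpy as np

def firefly_portfolio(mu, cov, n_fireflies=20, iters=60,
                      beta0=1.0, gamma=1.0, alpha=0.1, seed=0):
    """Firefly search over portfolio weights maximizing return per unit risk.
    Weights are kept non-negative and normalized to sum to 1."""
    rng = np.random.default_rng(seed)
    n = len(mu)
    W = rng.random((n_fireflies, n))
    W /= W.sum(axis=1, keepdims=True)

    def brightness(w):
        return (w @ mu) / np.sqrt(w @ cov @ w)

    for _ in range(iters):
        light = np.array([brightness(w) for w in W])
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if light[j] > light[i]:            # move i toward brighter j
                    r2 = np.sum((W[i] - W[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    W[i] = W[i] + beta * (W[j] - W[i]) + alpha * (rng.random(n) - 0.5)
                    W[i] = np.clip(W[i], 0.0, None)
                    W[i] /= W[i].sum() + 1e-12     # back onto the simplex
        alpha *= 0.97                              # anneal the random step
    best = max(W, key=brightness)
    return best, brightness(best)
```

Attractiveness decays with squared distance (`exp(-gamma * r2)`), so fireflies are pulled most strongly toward nearby brighter solutions, which is what gives the algorithm its local-search character.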
-
Performance Comparison of Web Backend and Database: A Case Study of Node.JS, Golang and MySQL, Mongo DB
Authors: Faried Effendy, Taufik and Bramantyo Adhilaksono
Aims: This study compares the performance of Golang and Node.js as web application backends in terms of response time, CPU utilization and memory usage, using MySQL and MongoDB as databases. Background: A lot of literature and research addresses web server comparisons and database comparisons separately, but the combination of the two has not been discussed. Node.js and Golang (Go) are popular platforms widely used as web and mobile application backends, while MySQL and MongoDB are two leading open-source databases with different characters. Objective: To compare the performance of Golang and Node.js as web application backends in terms of response time, CPU utilization and memory usage, with MySQL and MongoDB as databases. Methods: Four combinations of web server and database are compared: Node.js-MySQL, Node.js-MongoDB, Go-MySQL and Go-MongoDB. Each database consists of 25 attributes with 1000 records, and each combination exposes the same routing URLs. A previous study found a significant time difference between MySQL and MongoDB in query operations over 1000 records, so in this study the routing/showAll URL uses 1000 records. Results: The combination of Go and MySQL is superior in CPU utilization and memory usage, while the Node.js and MySQL combination is superior in response time. Conclusion: The Go-MySQL combination is superior in memory usage and CPU utilization, while Node.js-MySQL is superior in response time. Other: With this research, web developers can choose the right platform for their application and reduce development cost, so that the development process can be completed more quickly. As future work, the best-performing platform can be tested with the WebSocket communication protocol and real-time technology, which may give results different from this research.
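The response-time comparison above reduces to collecting per-request latency samples and summarizing them per backend/database pair. The sketch below is a hypothetical measurement harness, not the study's: `handler` stands in for one HTTP round trip, and the mean/95th-percentile summary is a common convention rather than the metrics the paper reports:

```python
import statistics
import time

def time_requests(handler, n=200):
    """Collect per-request latency samples (in ms) for a backend handler.
    `handler` is any zero-argument callable standing in for one request."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        handler()
        samples.append((time.perf_counter() - t0) * 1000.0)
    return samples

def summarize(samples):
    """Mean and 95th-percentile latency, the usual comparison metrics."""
    ordered = sorted(samples)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]
    return {"mean_ms": statistics.fmean(samples), "p95_ms": p95}
```

Running the same routing URLs through each of the four combinations and comparing the summaries is the essence of the study's response-time experiment; CPU and memory would be sampled separately with OS-level tooling.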
-
A Hybrid Hyper-Heuristic Flower Pollination Algorithm for Service Composition Problem in IoT
Authors: Neeti Kashyap, A. C. Kumari and Rita Chhikara
Objectives: Modern science applications are non-continuous and multivariate in nature, which makes traditional optimization methods inefficient. Flower pollination is an interesting natural procedure in the real world, and novel optimization algorithms can be designed by employing its evolutionary capability to optimize resources. Methods: This paper introduces a hybrid algorithm named the Hybrid Hyper-Heuristic Flower Pollination Algorithm (HHFPA), which combines the Flower Pollination Algorithm (FPA) with a Hyper-Heuristic Evolutionary Algorithm (HypEA), and compares the basic FPA with it. FPA is inspired by the pollination process of flowers, whereas the hyper-heuristic evolutionary algorithm operates on the heuristic search space containing all the heuristics for finding a solution to a given problem. The proposed algorithm is applied to the Quality of Service (QoS) based Service Composition Problem (SCoP) in the Internet of Things (IoT). With an increasing number of functionally equivalent services on the web, selecting a suitable candidate service based on non-functional characteristics such as QoS has become a motivation for optimization. Results: The experimental results show that the proposed algorithm finds better solutions than the basic FPA. Conclusion: The empirical analysis also reveals that HHFPA outperforms the basic FPA in solving the SCoP, with a faster convergence rate.
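The basic FPA that serves as the paper's baseline can be sketched as follows. This is the standard algorithm (global pollination via Lévy flights with switch probability p, local pollination otherwise); the sphere test function and all parameter values are illustrative assumptions, not the paper's SCoP fitness:

```python
import math
import numpy as np

def flower_pollination(f, dim, lo, hi, n=15, iters=200, p=0.8, seed=0):
    """Basic FPA (minimization). With probability p a flower takes a
    Levy-flight step toward the global best; otherwise it mixes with
    two random flowers (local pollination)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n, dim))
    fit = np.array([f(x) for x in X])
    b = fit.argmin()
    best, best_val = X[b].copy(), fit[b]

    beta = 1.5  # Levy exponent, steps drawn via Mantegna's algorithm
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)

    for _ in range(iters):
        for i in range(n):
            if rng.random() < p:                   # global pollination
                step = rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)
                cand = X[i] + 0.1 * step * (best - X[i])
            else:                                  # local pollination
                j, k = rng.choice(n, 2, replace=False)
                cand = X[i] + rng.random() * (X[j] - X[k])
            cand = np.clip(cand, lo, hi)
            fc = f(cand)
            if fc < fit[i]:
                X[i], fit[i] = cand, fc
                if fc < best_val:
                    best, best_val = cand.copy(), fc
    return best, best_val
```

The hyper-heuristic layer of HHFPA would sit above a loop like this, selecting which low-level heuristic to apply at each step instead of fixing the two pollination moves.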
-
Analysis of Epidemic, PROPHET and Spray and Wait Routing Protocols in the Mobile Opportunistic Networks
Authors: Jasvir Singh and Raman Maini
Background: Opportunistic Mobile Networks (OMNs) are a type of Mobile Ad hoc Network (MANET) with Delay-Tolerant Network (DTN) features, in which end-to-end connectivity between sender and receiver rarely exists, owing to the dynamic nature of the nodes and network partitioning. The practical use of OMNs is to provide connectivity in challenged environments. Methods: The paper presents a detailed analysis of three routing protocols, namely Epidemic, PROPHET and Spray and Wait, against variable message sizes and Time To Live (TTL) values in the network. The key contribution of the paper is to explore the routing protocols together with mobility models for disseminating data to the destination. Routing uses the store-carry-forward mechanism for message transfer, and the network has to trade off message delivery ratio against delivery delay. Results: The results are generated from experiments with the Opportunistic Network Environment (ONE) simulator. Performance is evaluated on three metrics: delivery ratio, overhead ratio and average latency. The results show that the minimum message size (256 KB) offers better delivery performance than the larger message size (1 MB). It has also been observed that epidemic routing creates more message replicas, which increases the cost of delivery, so with smaller messages the protocol can reduce the overhead ratio by a large proportion. Conclusion: For all three protocols, the observed average latency increases as the TTL of the message increases, with the message size varied from 256 KB to 1 MB.
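The three evaluation metrics have standard definitions in the ONE simulator's reports, which can be sketched directly (the function and argument names here are my own; the toy counts in the usage below are illustrative):

```python
def routing_metrics(created, delivered, relayed):
    """DTN performance metrics from simulation counts.
    created  : number of messages generated at sources
    delivered: list of (creation_time, delivery_time) for delivered messages
    relayed  : total number of message transmissions (relays) in the network"""
    n = len(delivered)
    delivery_ratio = n / created
    overhead_ratio = (relayed - n) / n if n else float("inf")
    avg_latency = sum(d - c for c, d in delivered) / n if n else float("inf")
    return delivery_ratio, overhead_ratio, avg_latency
```

The overhead ratio counts the transmissions "wasted" per successful delivery, which is why replica-heavy Epidemic routing pays the highest cost and why smaller messages, which complete more transfers per contact, pull it down.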
-
An Image Encryption Scheme Based on Hybrid Fresnel Phase Mask and Singular Value Decomposition
Authors: Shivani Yadav and Hukum Singh
Background: An asymmetric cryptosystem is suggested in the Affine and Fresnel transform domains, using a Hybrid Fresnel phase Mask (HFM), a Hybrid Mask (HM) and Singular Value Decomposition (SVD) to give the scheme additional security. The use of the Affine Transform (AT) introduces randomness in the input plane, which helps enlarge the key space, and SVD introduces nonlinearity into the process. Objective: To make the encoding procedure difficult by using hybrid masks and the AT in an asymmetric cryptosystem with SVD in the Fresnel transform (FrT) domain. Methods: The affine transform is first applied to the plain image, which is then convolved with the HFM in the FrT domain with propagation distance Z1; the result is convolved with the HM in the FrT domain with propagation distance Z2, and finally SVD is applied to the encoded image. Results: The validity of the suggested scheme has been confirmed using MATLAB R2018a (9.4.0.813654). Its capability has been tested with statistical simulations such as the histogram, entropy and correlation coefficient. Noise attack analysis has also been carried out so that the system is robust against attacks. Conclusion: An asymmetric cryptosystem is recommended that uses a pixel scrambling technique, the affine transform, which shuffles the pixels and thus helps secure the system. SVD is used in the algorithm to make the system robust. Performance and strength analyses are carried out to scrutinize the strength and feasibility of the algorithm.
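Two building blocks of the pipeline, the invertible affine pixel scramble and the SVD step whose factors serve as keys, can be sketched with NumPy. This is a simplified illustration under stated assumptions (a modular affine index map per axis in place of the paper's optical affine transform, and no Fresnel propagation):

```python
import numpy as np

def affine_scramble(img, a=5, b=3):
    """Permute rows and columns with the index map (a*i + b) mod N.
    gcd(a, N) must be 1 so the permutation is invertible."""
    n = img.shape[0]
    idx = (a * np.arange(n) + b) % n
    return img[idx][:, idx]

def affine_unscramble(img, a=5, b=3):
    """Invert the permutation by building the inverse index map."""
    n = img.shape[0]
    idx = (a * np.arange(n) + b) % n
    inv = np.empty(n, dtype=int)
    inv[idx] = np.arange(n)
    return img[inv][:, inv]

def svd_split(img):
    """SVD step: the three factors act as the decryption keys."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    return U, s, Vt

def svd_merge(U, s, Vt):
    """Recombine the SVD factors to recover the image exactly."""
    return U @ np.diag(s) @ Vt
```

In the full scheme the two Fresnel-domain convolutions with the HFM and HM sit between the scramble and the SVD, and decryption runs the inverse operations with the SVD factors and masks as keys.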
-
Modified Gamma Network: Design and Reliability Evaluation
Authors: Shilpa Gupta and Gobind L. Pahuja
Background: Advancements in VLSI technology have created a requirement for high computational power, which can be achieved by implementing multiple processors in parallel. These processors have to communicate with their memory modules through Interconnection Networks (IN). Multistage Interconnection Networks (MIN) are used as the IN because they provide efficient computing at low cost. Objective: The objective of the study is to introduce a new reliable Gamma MIN, named the Modified Gamma Interconnection Network (MGIN), which provides reliability and fault tolerance with a smaller number of switching-element stages. Methods: Switching Elements (SE) of larger size, 2×3 at the input stage and 3×2 at the output stage, are employed instead of 1×3/3×1 SEs, with one intermediate stage removed. Fault tolerance is introduced in the form of disjoint paths between each source-destination node pair, and reliability is thereby improved. Results: Terminal, broadcast and network reliability have been evaluated using reliability block diagrams for each source-destination node pair. The results show higher reliability values for the newly proposed network, and the cost analysis shows that the MGIN is cheaper than the other Gamma variants. Conclusion: The MGIN has better reliability and fault tolerance than previously proposed Gamma MINs.
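The reliability-block-diagram evaluation rests on two rules: series blocks (all stages on a path must work) multiply, and parallel blocks (disjoint redundant paths) combine as one-minus-the-product-of-failures. A minimal sketch, with an assumed topology of two disjoint three-stage paths rather than the MGIN's actual path structure:

```python
def series(*rel):
    """Series blocks: the chain works only if every block works."""
    p = 1.0
    for r in rel:
        p *= r
    return p

def parallel(*rel):
    """Parallel (redundant) blocks: works if at least one block works."""
    q = 1.0
    for r in rel:
        q *= (1.0 - r)
    return 1.0 - q

def terminal_reliability(r, n_stages=3, n_paths=2):
    """Terminal reliability for n_paths disjoint paths of n_stages
    switches each, every switch having reliability r (illustrative)."""
    path = series(*([r] * n_stages))
    return parallel(*([path] * n_paths))
```

With per-switch reliability 0.9, one three-stage path gives 0.729, and two disjoint paths lift the terminal reliability to about 0.927, which is the quantitative sense in which the disjoint paths of the MGIN improve reliability.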
-
Dynamic Trust Management Model for the Internet of Things and Smart Sensors: The Challenges and Applications
Authors: Anshu K. Dwivedi, A. K. Sharma and Rakesh Kumar
Background: The Internet of Things (IoT) is an important technology that promises a smart human life by allowing communication among objects, machines and everything, together with people. Trust is an important parameter in the IoT, closely related to sending a message from source to destination. Objective: The model enables an IoT system to deal with misbehaving nodes whose status is non-deterministic. The paper also presents an overview of trust management models in the IoT. The accuracy, robustness and lightness of the proposed model are demonstrated through a wide set of simulations. Methods: To achieve the desired objective, four contributions are proposed to improve trust over the IoT: 1) End-to-end Packet Forwarding Ratio (EPFR), 2) AEC, 3) Packet Delivery Ratio and 4) Detection Probability. Results: The performance of TM-IoT is evaluated in terms of EPFR, AEC, Packet Delivery Ratio and Detection Probability. The experimental analysis shows the efficiency of the proposed model compared with existing work. Conclusion: The proposed TM-IoT model shows better experimental results than existing work in terms of EPFR, AEC, Packet Delivery Ratio and Detection Probability.
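Three of the four metrics can be sketched with simple count-based definitions. These definitions are my assumptions for illustration (the paper does not spell out its formulas here, and AEC is left out since its expansion is not given in the abstract):

```python
def trust_metrics(sent, forwarded, delivered, malicious, detected):
    """Assumed count-based trust-evaluation metrics:
    EPFR      : fraction of injected packets a node forwards end to end,
    PDR       : fraction of sent packets that reach the destination,
    detection : fraction of misbehaving nodes correctly identified."""
    epfr = forwarded / sent if sent else 0.0
    pdr = delivered / sent if sent else 0.0
    detection = detected / malicious if malicious else 0.0
    return epfr, pdr, detection
```

A trust management model raises EPFR and PDR by routing around nodes whose observed forwarding behavior is poor, and its detection probability measures how reliably it flags those nodes in the first place.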
-
Audio-Visual Speech Recognition Using LSTM and CNN
Authors: Eslam E. El Maghraby, Amr M. Gody and M. H. Farouk
Background: Multimodal speech recognition has proved to be one of the most promising solutions for robust speech recognition, especially when the audio signal is corrupted by noise. Since the visual speech signal is not affected by audio noise, it can provide extra information to enhance recognition accuracy in noisy systems. The critical stage in designing a robust speech recognition system is choosing a reliable classification method from the large variety of available techniques. Deep learning is well known for its ability to classify nonlinear problems and to take into account the sequential characteristic of the speech signal. Numerous studies have applied deep learning to Audio-Visual Speech Recognition (AVSR) problems because of its remarkable achievements in both speech and image recognition. Even though optimistic results have been obtained, research on improving accuracy in noisy systems and on selecting the best classification technique is still attracting much attention. Objective: This paper aims to build an AVSR system that combines acoustic with visual speech information and uses a deep-learning-based classification technique to improve recognition performance in clean and noisy environments. Methods: Mel Frequency Cepstral Coefficients (MFCC) and the Discrete Cosine Transform (DCT) are used to extract effective features from the audio and visual speech signals, respectively. The audio feature rate is higher than the visual feature rate, so linear interpolation is needed to obtain feature vectors of equal size, which are then integrated early to get the combined feature vector.
Bidirectional Long Short-Term Memory (BiLSTM), one of the deep learning techniques, is used for classification, and the results are compared with other classification techniques such as the Convolutional Neural Network (CNN) and traditional Hidden Markov Models (HMM). The effectiveness of the proposed model is demonstrated on two multi-speaker AVSR datasets, AVletters and GRID. Results: The proposed model gives promising results. On GRID, integrated audio-visual features achieve the highest recognition accuracies, 99.07% and 98.47%, an enhancement of up to 9.28% and 12.05% over audio-only for clean and noisy data, respectively. On AVletters, the highest recognition accuracy is 93.33%, an enhancement of up to 8.33% over audio-only. Conclusion: Based on the obtained results, increasing the audio feature vector size from 13 to 39 does not give an effective enhancement of recognition accuracy in the clean environment, but it does give better performance in the noisy environment. BiLSTM is the optimal classifier for a robust speech recognition system compared with CNN and traditional HMM, because it takes into consideration the sequential characteristic of the speech signal (audio and visual). The proposed model gives a large improvement in recognition accuracy, and a decrease in loss value, for both clean and noisy environments over using audio-only features. Compared with previously published results on the same datasets, our model gives higher recognition accuracy and confirms its robustness.
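The rate-matching and early-integration step described in the Methods can be sketched directly: visual features are linearly interpolated up to the audio frame rate and then concatenated frame by frame. The function names and the toy dimensions (13 MFCCs, 10 DCT coefficients) are illustrative assumptions:

```python
import numpy as np

def upsample_visual(visual, n_audio_frames):
    """Linearly interpolate visual features to the audio frame rate.
    visual: (n_visual_frames, d_vis) array of per-frame DCT features."""
    t_vis = np.linspace(0.0, 1.0, len(visual))
    t_aud = np.linspace(0.0, 1.0, n_audio_frames)
    return np.stack([np.interp(t_aud, t_vis, visual[:, k])
                     for k in range(visual.shape[1])], axis=1)

def early_fusion(audio, visual):
    """Early integration: concatenate audio and rate-matched visual
    features into one combined vector per frame."""
    visual_up = upsample_visual(visual, len(audio))
    return np.concatenate([audio, visual_up], axis=1)
```

The fused (n_frames, d_audio + d_visual) sequence is what would then be fed to the BiLSTM, CNN or HMM classifier.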
-