Recent Advances in Computer Science and Communications - Volume 15, Issue 2, 2022
Events in Tweets: Graph-Based Techniques
Authors: Abhaya K. Pradhan, Hrushikesha Mohanty and Rajendra Prasad Lal
Background: Mining Twitter streaming posts (i.e., tweets) to find events or topics of interest has become a hot research problem. Over the last decade, researchers have proposed various techniques for detecting events from tweets: bag-of-words techniques, statistical methods, graph-based techniques, topic modelling approaches, NLP and ontology-based approaches, and machine learning and deep learning methods. Among these, graph-based techniques are efficient at capturing the latent structural semantics of tweet content by modelling word co-occurrence relationships as a graph, and they can also capture activity dynamics by modelling user-tweet and user-user interactions.
Discussion: This article presents an overview of different event detection techniques and their methodologies. Specifically, it focuses on graph-based event detection techniques in Twitter and presents a critical survey of these techniques, their evaluation methodologies and the datasets used. Further, some challenges in the area of event detection in Twitter, along with future directions of research, are presented.
Conclusion: Microblogging services and online social networking sites like Twitter provide a massive amount of valuable information on real-world happenings. Mining this information helps in understanding social interest and supports effective decision making in various emergencies. However, event detection techniques must be efficient in terms of time and memory, and accurate, to process such voluminous, noisy and fast-arriving information from Twitter.
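The graph-based idea the abstract describes can be sketched in a few lines: model word co-occurrence as a graph, prune weak edges, and treat dense connected components as candidate event topics. The function names and the `min_weight` pruning rule below are illustrative assumptions, not the paper's method.

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_graph(tweets, min_weight=2):
    """Build a word co-occurrence graph: nodes are words, and an edge
    links two words that appear together in at least `min_weight` tweets."""
    weights = defaultdict(int)
    for tweet in tweets:
        words = sorted(set(tweet.lower().split()))
        for a, b in combinations(words, 2):
            weights[(a, b)] += 1
    graph = defaultdict(set)
    for (a, b), w in weights.items():
        if w >= min_weight:
            graph[a].add(b)
            graph[b].add(a)
    return graph

def components(graph):
    """Connected components of the pruned graph; each cluster of
    frequently co-occurring words is a candidate event topic."""
    seen, comps = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(graph[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps
```

On a stream where "earthquake" and "city" repeatedly co-occur, those words end up in one component while unrelated chatter is pruned away.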
Chaotic Butterfly Optimization Algorithm Applied to Multi-objective Economic and Emission Dispatch in Modern Power System
Authors: Arun K. Sahoo, Tapas Kumar Panigrahi, Soumya Ranjan Das and Aurobinda Behera
Aims: To optimize the economic and emission dispatch of the thermal power plant.
Background: Considering both the economic and environmental aspects, a combined approach has been developed to solve what is known as the Combined Economic and Emission Dispatch (CEED) problem. The CEED problem is a non-linear bi-objective problem with conflicting objectives, subject to all the practical constraints.
Objective: A new optimization method is devised by applying chaotic mapping to the butterfly optimization algorithm. This method is applied to the CEED problem to optimize the fuel cost consumed and the environmental pollutants produced.
Methods: The improved chaotic butterfly algorithm is applied to the combined economic and emission dispatch optimization problem.
Results: The proposed technique is tested on four different test systems with various practical constraints, such as valve-point loading, ramp rate limits and prohibited operating zones. The results obtained from the chaotic butterfly optimization algorithm (CBOA) are compared with those of other optimization techniques, providing an optimum solution for the CEED problem.
Conclusion: Considering the environmental impact, the novel metaheuristic swarm intelligence technique is applied. Different test systems with different practical operational constraints, such as valve-point loading, prohibited operating zones and ramp rate limits, have been analyzed to validate the applicability of the proposed algorithm to real-life CEED problems.
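The "chaotic mapping" ingredient usually means replacing uniform random numbers with values from a chaotic map when initializing the optimizer's population. The abstract does not say which map the authors use; the logistic map below is a common choice and is purely an illustrative assumption.

```python
def logistic_map_sequence(n, x0=0.7, r=4.0):
    """Chaotic sequence in (0, 1) from the logistic map
    x_{k+1} = r * x_k * (1 - x_k); with r = 4 the map is fully chaotic."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def chaotic_population(pop_size, dim, lo, hi):
    """Initialize an optimizer population from chaotic values instead of
    uniform random numbers, spreading candidates over [lo, hi]."""
    seq = logistic_map_sequence(pop_size * dim)
    return [[lo + seq[i * dim + j] * (hi - lo) for j in range(dim)]
            for i in range(pop_size)]
```

In a CEED setting, each row of the population would be a candidate vector of generator outputs, later refined by the butterfly search moves.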
Harmonic Distortion Minimization in Power System Using Differential Evolution Based Active Power Filters
Authors: Alok K. Mishra, Soumya Ranjan Das, Prakash Kumar Ray, Ranjan Kumar Mallick and Himansu Das
Aims: The main focus of this work is to achieve balanced and sinusoidal grid currents by feeding a compensating current at the point of common coupling (PCC).
Background: In recent years, electronic and electrical appliances have become far more advanced and sophisticated, and they require uninterrupted, high-quality power. Therefore, in the growing power system scenario, several issues, such as malfunction of sensitive electrical devices, transformer overheating, communication interference and computer network failures, adversely affect power quality (PQ). These issues arise from the widespread use of non-linear loads in three-phase systems, which inject harmonics into the system. To overcome these PQ issues, several PQ-mitigating custom power devices are integrated into the power distribution network. However, conventional PQ mitigation devices are insufficient to eliminate PQ problems such as current and voltage harmonics, voltage sag/swell and voltage unbalance in the power distribution network.
Objective: The objective of using A-PSO was to find the global optimum of the spread factor parameter at the upper level. A-PSO has a faster convergence speed and a more accurate response than the standard PSO algorithm.
Methods: PSO, A-PSO and the modified p-q scheme were used in this study.
Results: A-PSO gave better results than PSO.
Conclusion: A three-phase system with an SHAPF injecting at the PCC is proposed in this paper. The SHAPF injects a filter current at the PCC to suppress harmonics using a modified p-q scheme. For controlling the PIC, two optimised parameters are discussed, and reducing harmonic distortion using A-PSO is found to give better results than conventional PSO.
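As a point of reference for the A-PSO comparison, here is a minimal sketch of the baseline PSO loop the abstract measures against. All names, coefficients (`w`, `c1`, `c2`) and the toy objective are illustrative, not the paper's controller-tuning setup.

```python
import random

def pso(objective, dim, bounds, n_particles=20, iters=60,
        w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimization: each particle tracks its
    personal best, and all particles are pulled toward the global best."""
    random.seed(seed)
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the paper's context, `objective` would evaluate the filter's harmonic distortion for a candidate controller parameter vector; here a simple sphere function stands in for it.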
Designing a Smart Cart Application with Zigbee and RFID Protocols
Authors: Palvadi S. Kumar, Abhishek Kumar, Rashmi Agrawal and Pramod Singh Rathore
Aim: To improve the smart cart application for better user flexibility.
Background: Users can easily purchase products using a smart cart. We have designed a strategy for these smart carts such that a person does not have to stand in a queue for billing. Stock updates are automatically recorded on the server.
Objective: The objective of this work is to find an alternative, time-saving approach to shopping.
Methods: We have used a radio-frequency identification mechanism for data transmission and a server for storing stock information.
Results: The system is observed to exhibit efficient performance.
Conclusion: We have shown that billing time can be reduced by using our smart cart application and that product identification via image processing is also possible.
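The cart-side bookkeeping the abstract describes amounts to a running bill keyed by RFID tag reads. This toy class is an assumption about the data model (tag-to-price catalogue, scan-in/scan-out events), not the authors' implementation.

```python
class SmartCart:
    """Toy model of a smart cart: RFID tag reads add or remove items,
    and the bill total is maintained as the customer shops."""

    def __init__(self, catalogue):
        self.catalogue = catalogue  # tag id -> (name, price)
        self.items = {}             # tag id -> quantity in the cart

    def scan_in(self, tag):
        """An item placed in the cart: its tag is read and counted."""
        self.items[tag] = self.items.get(tag, 0) + 1

    def scan_out(self, tag):
        """An item removed from the cart: decrement its count."""
        if self.items.get(tag, 0) > 0:
            self.items[tag] -= 1

    def total(self):
        """Current bill, ready at checkout without queueing."""
        return sum(self.catalogue[t][1] * q for t, q in self.items.items())
```

The server side would mirror these events to keep stock counts current, as the abstract notes.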
Sentiment Classification Using Feature Selection Techniques for Text Data Composed of Heterogeneous Sources
Authors: Vaishali Arya and Rashmi Agrawal
Aims: This study analyzes feature selection techniques for sentiment classification of text data composed of heterogeneous sources.
Objectives: The objective of this work is to analyze feature selection techniques for text gathered from different sources in order to increase the accuracy of sentiment classification on microblogs.
Methods: Three feature selection techniques, Bag-of-Words (BOW), TF-IDF and word2vec, were applied to find the most suitable technique for heterogeneous datasets.
Results: TF-IDF outperforms the other selected feature selection techniques for sentiment classification with an SVM classifier.
Conclusion: Feature selection is an integral part of any data preprocessing task and is also important for machine learning algorithms to achieve good classification accuracy. Hence, it is essential to find the most suitable approach for heterogeneous sources of data. Heterogeneous sources are rich sources of information and also play an important role in developing models for adaptable systems. With this in mind, we compared the three techniques on heterogeneous source data and found that TF-IDF is the most suitable one for all types of data, whether balanced or imbalanced, single-source or multi-source. In all cases, the TF-IDF approach is the most promising for classifying user sentiment.
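For readers unfamiliar with the winning technique, TF-IDF can be computed in a few lines: term frequency within each document, down-weighted by how many documents contain the term. This is a standard textbook formulation (one of several TF-IDF variants), not necessarily the exact weighting the authors used.

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute TF-IDF weights for a list of tokenized documents.
    TF is the within-document term frequency; IDF = log(N / df(t))."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # document frequency counts each doc once
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        vectors.append({t: (c / total) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors
```

Terms appearing in every document get weight zero, while source-specific sentiment words are emphasized, which is one intuition for why TF-IDF copes well with heterogeneous corpora.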
Role of Digital Watermarking in Wireless Sensor Network
Authors: Sanjay Kumar, Binod K. Singh, Akshita, Sonika Pundir, Rashi Joshi and Simran Batra
WSNs have flourished in many application areas, such as military, medical and environmental monitoring. The rapid increase in applications brings a proportional increase in security threats, because the communication is wireless. Since sensor nodes are expected to operate beyond human reach and depend on their limited resources, the major challenges are energy consumption and resource reliability. Ensuring the security, integrity and confidentiality of the transmitted data is a major concern for WSNs. Due to the resource limitations of sensor nodes, traditional computation-intensive security mechanisms are not feasible for WSNs. This limitation brought the concept of digital watermarking into existence. Watermarking is an effective way to provide security, integrity, data aggregation and robustness in WSNs. In this paper, several issues and challenges, as well as the various threats to WSNs, are briefly discussed. We also discuss digital watermarking techniques and their role in WSNs.
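A common lightweight scheme of the kind surveyed embeds a fragile watermark into the least significant bits of sensor readings, so the sink can detect tampering without heavyweight cryptography. The keyed-parity scheme below is a minimal sketch under that assumption; the surveyed papers use a variety of more elaborate schemes.

```python
def embed_watermark(readings, key=0x5A):
    """Fragile LSB watermark: replace the least significant bit of each
    integer sensor reading with a keyed parity bit, so tampering with
    any reading (or the mark itself) is detectable at the sink."""
    marked = []
    for i, r in enumerate(readings):
        bit = bin((r >> 1) ^ key ^ i).count("1") % 2  # parity over payload
        marked.append((r & ~1) | bit)
    return marked

def verify_watermark(marked, key=0x5A):
    """Recompute the parity bit for each reading; compare with its LSB."""
    for i, r in enumerate(marked):
        bit = bin((r >> 1) ^ key ^ i).count("1") % 2
        if (r & 1) != bit:
            return False
    return True
```

Embedding distorts each reading by at most its least significant bit, which is why such schemes suit energy- and precision-constrained sensor nodes.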
A Deep Convolutional Neural Network Based Approach for Effective Neonatal Cry Classification
Authors: K Ashwini and P.M. D. R. Vincent
Crying is the universal language babies use to communicate with others. Infant cry classification is a kind of speech recognition problem that should be treated carefully. It has been gaining momentum in the last few years, and researching it in depth will help caretakers and the community at large.
Objective: This study aims to develop a predictive model for infant cry classification by converting audio signals into spectrogram images through a deep convolutional neural network. The network performs end-to-end learning, thereby reducing the complexity of audio signal analysis and improving performance through an optimisation technique.
Method: A time-frequency analysis called the Short-Time Fourier Transform (STFT) is applied to generate the spectrogram, with 256 DFT (Discrete Fourier Transform) points used to compute the Fourier transform. A deep convolutional neural network, AlexNet with a few enhancements, is utilised in this work to classify the recorded infant cries. To improve the effectiveness of this network, Stochastic Gradient Descent with Momentum (SGDM) is used for training.
Results: The deep neural network-based infant cry classification system achieves a maximum accuracy of 95% in classifying sleepy cries. The results show that the convolutional neural network with SGDM optimisation achieves higher prediction accuracy.
Conclusion: The proposed work has been compared with a convolutional neural network trained with SGD and with Naïve Bayes; based on the results, the convolutional neural network with SGDM performs better than the other techniques.
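The STFT front end is easy to state concretely: slide a window over the waveform and take a 256-point DFT of each frame; the stacked magnitude columns form the spectrogram image fed to the CNN. The Hann window and 50% hop below are common defaults and an assumption, since the abstract only fixes the 256 DFT points.

```python
import cmath
import math

def stft(signal, frame_len=256, hop=128):
    """Short-Time Fourier Transform: Hann-window each frame and take a
    frame_len-point DFT. Stacking the one-sided magnitude columns gives
    the spectrogram image. (Naive O(n^2) DFT, for illustration only.)"""
    window = [0.5 - 0.5 * math.cos(2 * math.pi * n / (frame_len - 1))
              for n in range(frame_len)]
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = [signal[start + n] * window[n] for n in range(frame_len)]
        spectrum = [abs(sum(frame[n] * cmath.exp(-2j * math.pi * k * n / frame_len)
                            for n in range(frame_len)))
                    for k in range(frame_len // 2 + 1)]  # one-sided bins
        frames.append(spectrum)
    return frames
```

A 250 Hz tone sampled at 8 kHz falls exactly on bin 8 (250 × 256 / 8000), so its spectrogram columns peak there.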
Multi-Strategy Learning for Recognizing Network Symptoms
Background: Many network symptoms may occur for different reasons in today's computer networks. Detecting some of these symptoms is not straightforward. Therefore, an intelligent system is presented for extracting and recognizing such network symptoms based on prior background knowledge.
Methods: The main target is to build a network-monitoring tool that can discover network symptoms and provide reasonable interpretations of various operational patterns. These interpretations are intended to support network planners and administrators. The tool introduces Multi-Strategy Learning (MSL) to recognize network symptoms. Repeated symptoms, or sometimes a single heavy-traffic event, may reveal network patterns that can be used to discover and solve network problems.
Results: To achieve this goal, the MSL system recognizes network symptoms using two techniques. The first is empirical: it selects subsets of traffic data using certain fields from groups of database records via queries; the data are abstracted and various symptoms are extracted. The second technique is based on explanation-based learning (EBL). It produces a procedure that yields operational rules, which can later help network administrators solve problems. Using only one formal training example together with the network domain knowledge, we can learn and analyze in terms of that knowledge. In this work, Hadoop and a relational database are used to store and maintain the network-monitoring traffic, the network events, and the knowledge base for implementing these techniques.
Discussion: Using EBL alone is not suitable, since it cannot exploit the same kinds of available training data that similarity-based learning (SBL) can. EBL requires not only a complete domain theory but also a consistent one, which reduces its suitability for knowledge acquisition. For this reason, we used EBL to discover a pattern of network malfunction from a single example only, obtaining a complete solution for that example.
Conclusion: The proposed system can thus discover abnormal patterns (symptoms) in the underlying network traffic. A real network using our MSL could recognize these abnormal patterns. The network administrator can adapt the current configuration according to advice and observations from the intelligent system, in order to avoid problems that exist now or may arise in the near future. Finally, the proposed system is capable of extracting different symptoms (behaviours and operational patterns) and provides sensible advice to support network-planning activity.
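The empirical (SBL-style) first technique, selecting traffic records by field and abstracting them into a symptom, can be sketched as a query-plus-threshold rule. The field names, predicate and threshold below are hypothetical placeholders for whatever the abstract's database queries actually select.

```python
def extract_symptom(records, field, predicate, threshold):
    """Empirical symptom extraction: select traffic records whose `field`
    satisfies `predicate`, group them by source, and flag any source
    whose count reaches `threshold` as exhibiting the symptom."""
    counts = {}
    for rec in records:
        if predicate(rec[field]):
            counts[rec["src"]] = counts.get(rec["src"], 0) + 1
    return {src for src, c in counts.items() if c >= threshold}
```

A burst of SYN packets from one host, for instance, is abstracted into a "heavy SYN talker" symptom that the EBL stage can then try to explain.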
A New Method for Community Detection in the Complex Network on the Basis of Similarity
Authors: Munawar Hussain and Awais Akram
Introduction: In complex networks, finding optimal communities has become a key topic in network theory and is crucial for understanding the structure and functionality of the associated networks. In this paper, we propose a new community detection method that works on the Structural Similarity of a Network (SSN).
Methods: The method works in two steps. In the first step, it removes edges between groups of nodes that are not very similar to each other. As a result of this edge removal, the network is divided into many small random communities, referred to as main communities.
Results: In the second step, we apply an Evaluation Method (EM) that chooses the best-quality communities from all the main communities produced in the first step. Finally, we apply evaluation metrics to our proposed method and to benchmark methods, showing that SSN detects comparatively more accurate results than the other methods considered in this paper.
Discussion: This approach is defined for unweighted networks, so further research could apply it to weighted networks and explore new deep-down attributes. Furthermore, it could be applied to weighted Facebook and Twitter data with an artificial intelligence approach.
Conclusion: In this article, we proposed a novel method for community detection in networks, called Structural Similarity of a Network (SSN). It works in two steps. In the first step, it removes low-similarity edges from the network, producing several small disconnected communities, called main communities. Afterwards, the main communities are merged to search for the final communities, which are close to the actual communities of the network.
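The first step, dropping low-similarity edges so the graph falls apart into main communities, can be illustrated with a standard structural similarity: the Jaccard overlap of the endpoints' closed neighbourhoods. The paper's exact SSN similarity formula and threshold may differ; this is only a sketch of the idea.

```python
def structural_similarity(graph, u, v):
    """Similarity of edge (u, v) as the Jaccard overlap of the two
    endpoints' closed neighbourhoods (a common structural measure)."""
    nu, nv = graph[u] | {u}, graph[v] | {v}
    return len(nu & nv) / len(nu | nv)

def detect_main_communities(edges, threshold=0.4):
    """Step 1 of the SSN idea: drop low-similarity edges; the remaining
    connected components are the 'main' communities."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, set()).add(v)
        graph.setdefault(v, set()).add(u)
    kept = {n: set() for n in graph}
    for u, v in edges:
        if structural_similarity(graph, u, v) >= threshold:
            kept[u].add(v)
            kept[v].add(u)
    seen, comms = set(), []
    for node in kept:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n not in comp:
                comp.add(n)
                stack.extend(kept[n] - comp)
        seen |= comp
        comms.append(comp)
    return comms
```

On two triangles joined by a single bridge edge, the bridge scores lowest and is removed, leaving the two triangles as main communities.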
Brain Tumor Detection via Asymmetry Quantification Across Mid Sagittal Plane
Authors: Shoaib A. Banday and Mohammad K. Pandit
Introduction: Brain tumors are among the major causes of morbidity and mortality worldwide. According to the National Brain Tumor Foundation (NBTS), the death rate has increased by as much as 300% over the last couple of decades. Tumors can be categorized as benign (non-cancerous) or malignant (cancerous). The type of a brain tumor depends significantly on various factors, such as the site of its occurrence, its shape and the age of the subject. Meanwhile, Computer-Aided Detection (CAD) has been improving significantly in recent times; the concept, design and implementation of these systems range from fairly simple to computationally intense. For efficient and effective diagnosis and treatment planning in brain tumor studies, it is imperative that an abnormality be detected at an early stage, as this gives medical professionals more time to respond. Early detection of disease has largely been made possible by medical imaging techniques developed over the past several decades, such as CT, MRI, PET, SPECT and fMRI. The detection of brain tumors, however, has always been a challenging task because of the complex structure of the brain and the diverse tumor sizes and locations.
Methods: This paper proposes an algorithm that can detect brain tumors in the presence of Radio-Frequency (RF) inhomogeneity. The algorithm utilizes the Mid-Sagittal Plane as a landmark across which the asymmetry between the two brain hemispheres is estimated using various intensity- and texture-based parameters.
Results: The results show the efficacy of the proposed method for detecting brain tumors with an acceptable detection rate.
Conclusion: In this paper, we calculated three textural features from the two hemispheres of the brain, namely Contrast (CON), Entropy (ENT) and Homogeneity (HOM), and three parameters, namely Root Mean Square Error (RMSE), Correlation Coefficient (CC) and Integral of Absolute Difference (IAD), from the intensity distribution profiles of the two brain hemispheres to predict the presence of pathology. First, a Mid-Sagittal Plane (MSP) is obtained on the magnetic resonance images, virtually dividing the brain into two bilaterally symmetric hemispheres. The block-wise texture asymmetry is then estimated for these hemispheres using the above six parameters.
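The three intensity-profile parameters named in the conclusion (RMSE, CC, IAD) have standard definitions and can be computed directly on the two hemispheres' profiles. How the paper thresholds them to declare pathology is not stated, so the interpretation comment below is only the general intuition.

```python
import math

def asymmetry_metrics(left, right):
    """Compare intensity profiles of the two hemispheres using the three
    parameters named in the paper: Root Mean Square Error (RMSE),
    Correlation Coefficient (CC) and Integral of Absolute Difference
    (IAD). High RMSE/IAD and low CC suggest left-right asymmetry,
    i.e. a possible lesion."""
    n = len(left)
    rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(left, right)) / n)
    iad = sum(abs(a - b) for a, b in zip(left, right))
    ml, mr = sum(left) / n, sum(right) / n
    cov = sum((a - ml) * (b - mr) for a, b in zip(left, right))
    sl = math.sqrt(sum((a - ml) ** 2 for a in left))
    sr = math.sqrt(sum((b - mr) ** 2 for b in right))
    cc = cov / (sl * sr) if sl and sr else 0.0
    return {"RMSE": rmse, "CC": cc, "IAD": iad}
```

For perfectly symmetric hemispheres the profiles coincide: RMSE and IAD are zero and CC is 1, which is the healthy baseline the block-wise comparison deviates from.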
Formal Specification and Verification of Data Separation for Muen Separation Kernel
Authors: Ram C. Bhushan and Dharmendra K. Yadav
Introduction: Integrated mixed-criticality systems are becoming increasingly popular for application-specific systems that need a separation mechanism for the available onboard resources, with processors equipped with hardware virtualization that allows physical resources, including processor cores, memory and I/O devices, to be partitioned among guest Virtual Machines (VMs). Traditional virtual machine systems are inappropriate for building a mixed-criticality computing environment because they use hypervisors to schedule separate VMs on the physical processor cores. This article discusses the design of an environment for mixed-criticality systems: Muen, an x86/64 separation kernel for high assurance. The Muen separation kernel is an open-source microkernel with no run-time errors at the source code level, designed precisely to meet the challenging requirements of high-assurance systems built on the Intel x86/64 platform. Muen is under active development, and none of its kernel properties has been verified yet. In this paper, we present a novel formal demonstration of one of these kernel properties.
Methods: The CTL used in NuSMV is a first-order modal logic with data-dependent processes and regular formulas. CTL is a branching-time logic: its model of time is a tree-like structure in which the future is not determined; there are different paths into the future, any of which might be the path actually realized. We verify all the requirements mentioned in section 3. In the NuSMV tool, the command used to check formulas written in CTL is check_ctlspec -p "CTL-expression". Each occurrence of a variable in the scope of a bound variable with the same name and the same number of arguments is bound by the nearest quantifier.
Results: Formal methods have been applied to various projects for the specification and verification of safety properties, among them the SCOMP, SeaView, LOCK and Multinet Gateway projects. The TLS was written formally, and several mappings were made between the TLS and the SCOMP code: informal English to TLS, TLS to actual code, and TLS to pseudo-code. The authors present an ACL2 model for a generic separation kernel, also known as the GWV approach.
Conclusion: We consider the formal verification of the data separation property, which is one of the crucial modules for achieving separation functionality. The verification of the data separation manager is carried out at the design level using the NuSMV tool. Furthermore, we present the complete model of the data separation unit along with its code written in the NuSMV modelling language. Finally, we have converted the non-functional requirements into formal logic, against which the model has been verified formally.
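The data-separation requirement checked in NuSMV has the shape of an AG(invariant) property: in every reachable state, no partition can see another partition's data. As a language-neutral illustration (the paper itself uses the NuSMV modelling language, not Python), here is an explicit-state check of such an invariant on a toy two-partition model; the state encoding and transition relations are entirely hypothetical.

```python
from collections import deque

def check_invariant(initial, transitions, invariant):
    """Explicit-state check of an AG(invariant) property: explore all
    reachable states breadth-first and return a counterexample state
    if the invariant ever fails, else None."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return state  # counterexample reachable from the initial state
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None  # invariant holds in every reachable state
```

In the toy model below, a state is `(active_partition, scratch_owner)`: a kernel that sanitizes shared scratch memory on every context switch satisfies separation, while one that skips sanitization reaches a state where a partition observes the other's data.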
Position and Pose Measurement of 3-PRS Ankle Rehabilitation Robot Based on Deep Learning
Authors: Guoqiang Chen, Hongpeng Zhou, Junjie Huang, Mengchao Liu and Bingxin Bai
Introduction: Position and pose measurement of a rehabilitation robot plays a very important role in patient rehabilitation movement, and non-contact real-time measurement is of great significance. Rehabilitation training is a relatively complicated process, so it is very important to monitor the training process of the rehabilitation robot, and its accuracy, in real time. Deep learning methods are very effective for monitoring the rehabilitation robot's state.
Methods: The structural sketch and the 3D model of the 3-PRS ankle rehabilitation robot are established, and the mechanism kinematics is analyzed to obtain the relationship between the driving input, the three slider heights, and the position and pose parameters. The whole position and pose measurement network consists of two stages: (1) measuring the slider heights with a convolutional neural network (CNN) applied to the robot image, and (2) calculating the position and pose parameters with a backpropagation neural network (BPNN) from the measured slider heights. Given that the slider heights vary continuously, a CNN with a regression output is proposed to measure them. Based on data generated with the inverse kinematics of the 3-PRS ankle rehabilitation robot, a BPNN is trained to solve the forward kinematics for the position and pose.
Results: The experimental results show that the regression CNN measures the slider heights and the BPNN then accurately computes the corresponding position and pose, so the position and pose parameters are obtained directly from the robot image. Compared with traditional robot position and pose measurement methods, the proposed method has significant advantages.
Conclusion: The proposed 3-PRS ankle rehabilitation position and pose measurement method not only reduces experiment time and cost but also offers excellent timeliness and precision. The proposed approach can help medical staff monitor the status of the rehabilitation robot and help patients in rehabilitation training.
Discussion: The goal of this work is to construct a new position and pose detection network combining the regression CNN and the BPNN. The main contribution is measuring the position and pose of the 3-PRS ankle rehabilitation robot in real time, which improves measurement accuracy and the efficiency of the medical staff's work.
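The second stage, a BPNN trained on inverse-kinematics samples to approximate the forward map from slider heights to a pose parameter, is just a small feed-forward network trained by backpropagation. This minimal one-hidden-layer sketch learns an arbitrary smooth mapping from examples; the network size, learning rate and target function here are illustrative stand-ins, not the paper's configuration.

```python
import math
import random

def train_bpnn(samples, hidden=8, lr=0.3, epochs=2000, seed=0):
    """Tiny one-hidden-layer backpropagation network: sigmoid hidden
    units, linear output, plain SGD on squared error. `samples` is a
    list of (input_vector, target) pairs."""
    random.seed(seed)
    n_in = len(samples[0][0])
    w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [random.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    for _ in range(epochs):
        for x, y in samples:
            h = [sig(sum(w * xi for w, xi in zip(w1[j], x)) + b1[j])
                 for j in range(hidden)]
            out = sum(w * hj for w, hj in zip(w2, h)) + b2
            err = out - y
            for j in range(hidden):  # backpropagate the error one layer
                grad_h = err * w2[j] * h[j] * (1 - h[j])
                w2[j] -= lr * err * h[j]
                b1[j] -= lr * grad_h
                for i in range(n_in):
                    w1[j][i] -= lr * grad_h * x[i]
            b2 -= lr * err
    def predict(x):
        h = [sig(sum(w * xi for w, xi in zip(w1[j], x)) + b1[j])
             for j in range(hidden)]
        return sum(w * hj for w, hj in zip(w2, h)) + b2
    return predict
```

In the paper's pipeline the training pairs would be (slider heights, pose parameter) generated by the inverse kinematics; here a simple averaging target stands in for that mapping.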
A Fast and Reliable Balanced Approach for Detecting and Tracking Road Vehicles
By Wael Farag
Introduction: An advanced, reliable and fast vehicle detection-and-tracking technique is proposed, implemented and tested. The Real-Time Vehicle Detection-and-Tracking (RT_VDT) technique is well suited for Advanced Driving Assistance Systems (ADAS) applications and Self-Driving Cars (SDC).
Methods: The RT_VDT is mainly a pipeline of reliable computer-vision and machine-learning algorithms that augment one another, taking in raw RGB images and producing the bounding boxes of the vehicles that appear in the car's frontal driving space. The main emphasis is the careful fusion of the employed algorithms, some of which work in parallel to strengthen each other and produce a precise, sophisticated real-time output.
Results: The RT_VDT is tested and its performance evaluated on actual road images and videos captured by the car's front-mounted camera, as well as on the KITTI benchmark. The evaluation shows that it reliably detects and tracks vehicle boundaries under various conditions.
Discussion: Robust real-time vehicle detection and tracking is required for Advanced Driving Assistance Systems (ADAS) applications and Self-Driving Cars (SDC).
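When several detectors run in parallel and their outputs are fused into final bounding boxes, a standard step is non-maximum suppression: keep the highest-scoring box and discard boxes that overlap it too much. The abstract does not name the fusion rule RT_VDT uses; the IoU-based NMS below is the common textbook version, shown for illustration.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Merge overlapping detections: keep each highest-scoring box and
    drop any remaining box that overlaps a kept box above the threshold.
    Returns the indices of the kept boxes, best score first."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep
```

Two near-duplicate detections of the same vehicle collapse to the stronger one, while a distant vehicle's box survives untouched.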