Recent Advances in Computer Science and Communications - Volume 13, Issue 4, 2020
Clustering Algorithm for Community Detection in Complex Network: A Comprehensive Review
Authors: Smita Agrawal and Atul Patel
Many real-world social networks exist in the form of complex networks: very large-scale networks with structured or unstructured data, represented as sets of graphs. Complex networks appear as brain graphs, protein structures, food webs, transportation systems and the World Wide Web; these networks are sparsely connected overall, while many of their subgraphs are densely connected. Because of the scale of such graphs, efficient graph generation, complexity, the dynamic nature of graphs and community detection are challenging tasks. Various community detection algorithms that use clustering techniques to find densely connected subgraphs in a complex network are discussed here. In this paper, we discuss the taxonomy of community detection algorithms such as the Structural Clustering Algorithm for Networks (SCAN), Structural-Attribute based Clustering (SA-Cluster) and Community Detection based on Hierarchical Clustering (CDHC). In this comprehensive review, we classify community detection algorithms by their approach, the datasets used in the existing experimental studies and the measures used to evaluate them. Finally, insights into the future scope and research opportunities for community detection are discussed.
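A minimal sketch of the general task this review surveys — grouping a graph into densely connected communities — using a toy benchmark graph and an off-the-shelf modularity-based routine; the graph and the method choice are illustrative stand-ins, not one of the reviewed algorithms (SCAN, SA-Cluster, CDHC).

```python
# Community detection on a small benchmark social network; the modularity-based
# method here is a generic stand-in for the clustering algorithms reviewed above.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()                      # classic toy social network
communities = greedy_modularity_communities(G)  # list of node sets

for i, nodes in enumerate(communities):
    print(f"community {i}: {sorted(nodes)}")
```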
The Estimation of the Critical Flashover Voltage of Insulators Using the Computational Intelligence
Authors: Louiza Dehyadegari and Somayeh Khajehasani
Background: Electric insulation is generally a vital factor in both the technical and economic feasibility of complex power and electronic systems. Several lines of research focus on the behavior of insulators under polluted conditions, including mathematical and physical models of insulators, experiments and simulation programs. Experiments on the critical flashover voltage are also time-consuming and have limitations such as high cost and the need for special equipment. Objective: This paper focuses on optimized prediction of the critical flashover voltage of polluted insulators based on artificial intelligence. Method: Fuzzy logic and artificial neural networks are used to obtain the best estimation of the critical flashover voltage. Results: The correlation index (regression coefficient) improved by about 2% over previous works on the same experimental data sets. Additionally, by using the properties of nonlinear artificial neural networks, a perfect (R = 100%) prediction of the critical flashover voltage on the experimental dataset can be achieved. Conclusion: Two methods for the estimation of the critical flashover voltage of polluted insulators, using fuzzy logic and neural networks, were presented. The regression coefficient R achieved with the optimal parameters is 98.4%, compared with 96.7% in previous work. The neural network model reaches a regression coefficient of 100%, whereas the previous neural network model achieved 99%. The test set is the same as in previous works and was obtained from experiments. These results show that the proposed fuzzy methods are powerful and useful tools that lead to a more accurate, generalized and objective estimation of the critical flashover voltage.
An Empirical Evaluation of Name Semantic Network for Face Annotation
Authors: Kasthuri Anburajan, Suruliandi Andavar and Poongothai Elango
Background: Face annotation is the naming procedure that assigns the correct name to a person who appears in an image. Objective: The main objective of this paper was to compare and evaluate six feature extraction techniques for face annotation on real-world challenging images and to find the feature best suited to face annotation. Method: From the literature review, it has been observed that the Name Semantic Network (NSN) outperforms other annotation methods on various unconstrained images as well as ambiguous tags. However, the NSN's performance can differ with the feature extraction technique used, so its success is influenced by the choice of features. In this work, the NSN's performance is therefore evaluated with various feature extraction methods: the Discrete Cosine Transform Local Binary Pattern (DCT-LBP), Discrete Fourier Transform Local Binary Pattern (DFT-LBP), Local Patterns of Gradients (LPOG), Gist, Local Order-constrained Gradient Orientations (LOGO) and Convolutional Neural Network (CNN) deep features. Results: The different feature extraction approaches show performance variations across a range of face annotation challenges on the Yahoo, LFW and IMFDB databases. The experimental results show that the deep feature method achieves a better recognition rate than the texture features: it copes with several issues in how a face is presented in an image and produces better results. Conclusion: The CNN deep feature is the best feature extraction technique, offering enhanced performance for face annotation.
Complexity and Nesting Evolution in Open Source Software Systems: Experimental Study
Authors: Mamdouh Alenezi, Mohammad Zarour and Mohammed Akour
Background: Software complexity affects quality; complex software is not only harder to read and maintain and less efficient, it can also be less secure, with many vulnerabilities. Complexity metrics, e.g. cyclomatic complexity and nesting levels, are commonly used to predict and benchmark software cost and efficiency, and also to decide whether code refactoring is needed. Objective: Software systems with high complexity need more time to develop and test and may lead to poor understandability and more errors. Deep nesting in control structures can produce more complex software, the so-called nesting problem, which should be reduced by rewriting the code or breaking it into several functional procedures. Method: In this paper, the relationships between nesting levels, cyclomatic complexity and lines of code (LOC) are measured across several software releases. To determine how strongly these factors relate to the nesting level, correlation coefficients are calculated. Moreover, to examine to what extent developers are aware of and tackle the nesting problem, the evolution of nesting levels over ten releases of five open source systems is studied to see whether it improves across successive versions. Results: The results show that the nesting level has varying effects on the cyclomatic complexity and SLOC of the five studied systems. Conclusion: The nesting level tends to correlate positively with the other factors (cyclomatic complexity and LOC).
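A small sketch of the kind of correlation analysis the abstract describes — Pearson correlation between per-release nesting level and the other two metrics; the ten release values below are placeholders, not the paper's measurements.

```python
# Correlation of nesting level with cyclomatic complexity and LOC across
# ten releases; numbers are illustrative placeholders only.
from scipy.stats import pearsonr

releases = {
    "max_nesting":           [3, 3, 4, 4, 5, 5, 5, 6, 6, 7],
    "cyclomatic_complexity": [12, 13, 15, 16, 18, 19, 19, 22, 23, 25],
    "loc_thousands":         [40, 42, 47, 50, 55, 58, 60, 66, 70, 75],
}

for metric in ("cyclomatic_complexity", "loc_thousands"):
    r, p = pearsonr(releases["max_nesting"], releases[metric])
    print(f"nesting vs {metric}: r={r:.3f}, p={p:.3f}")
```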
Power Transformer Fault Diagnosis using DGA and Artificial Intelligence
Authors: Seyed Javad T. Shahrabad, Vahid Ghods and Mohammad Tolou Askari
Background: Power transformers are among the most widely used electricity network devices; they transmit the generator's output power to the network by increasing voltage and decreasing current. Due to the high cost of such devices and the cost of disconnecting a device upon failure, disconnection and failure of the transformer should be avoided as much as possible. Objective: To increase reliability and reduce maintenance costs, such devices should be monitored constantly. Internal faults ionize and warm up the oil, producing gases such as carbon dioxide, methane, ethane, ethylene and acetylene. Various methods have been proposed for diagnosing faults in power transformers, one of the most well-known being dissolved gas analysis (DGA). DGA of the oil is an effective tool for diagnosing incipient faults in transformers. Method: Common fault detection methods based on oil-dissolved gas analysis include Dornenburg, Duval's triangle, the IEC/IEEE standard, key gases and Rogers ratios. In recent years, artificial intelligence techniques such as genetic algorithms, fuzzy logic and neural networks have been used to detect faults using DGA. In this paper, support vector machines (SVM) and decision trees are used to detect internal faults in power transformers. Results: In the evaluation of the proposed methods, the total accuracies of the SVM and decision tree classifiers were 90% and 97.5%, respectively. Conclusion: The decision tree shows better performance and is suggested as a suitable method for obtaining promising results.
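A hedged sketch of the comparison the abstract describes — an SVM and a decision tree trained on dissolved-gas concentrations; the gas features, class labels and synthetic data are assumptions, not the paper's dataset or preprocessing.

```python
# Classifying transformer fault types from DGA gas concentrations (sketch).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# columns: CO2, CH4, C2H6, C2H4, C2H2 (ppm); labels: 0=normal, 1=thermal, 2=arcing
X = rng.uniform(0, 500, size=(200, 5))
y = rng.integers(0, 3, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
for name, clf in [("SVM", SVC(kernel="rbf")), ("DecisionTree", DecisionTreeClassifier())]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```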
Category Classification of the Training Set Combined with Sentence Multiplication for Semantic Data Extraction Using GENI Algorithm
Background: The growth of internet data has raised the priority of data extraction accuracy, i.e. how closely what is retrieved matches what the user requested. The same large data sets that need to be analyzed make retrieving the required information a challenging task. Objective: To propose a new algorithm that improves on traditional methods for classifying the category or group to which each training sentence belongs. Method: The category to which an input sentence belongs is identified by analyzing the noun and verb of each training sentence. NLP is applied to each training sentence, and the group or category classification is performed with the proposed GENI algorithm so that the classifier is trained efficiently to extract the information the user requested. Results: The input sentences are transformed into a data table by applying the GENI algorithm for group categorization. Plotting the results in the R tool shows that the accuracy of the group extracted by the classifier using the GENI approach is higher than that of Naive Bayes and decision trees. Conclusion: Extracting the user-requested data remains a challenging task when the user query is complex. Existing techniques rely more on fixed attributes, and when working with fixed attributes it becomes too complex, or impossible, to determine the common group from the base sentence. Existing techniques are better suited to smaller datasets, whereas the proposed GENI algorithm places no restrictions on group categorization of larger data sets.
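A minimal sketch of the noun/verb extraction step that precedes the group categorization described above; the GENI algorithm itself is not reproduced, and the sample sentence is an invented example (NLTK resource names can differ slightly across versions).

```python
# Extract the nouns and verbs of a training sentence with NLTK POS tagging.
import nltk
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def nouns_and_verbs(sentence):
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
    nouns = [w for w, t in tagged if t.startswith("NN")]
    verbs = [w for w, t in tagged if t.startswith("VB")]
    return nouns, verbs

print(nouns_and_verbs("The striker scored a late goal in the final match."))
```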
Mobile Complex Factors: An Approach for the Prediction of Mobile Size Parameters
Authors: Ziema Mushtaq and Abdul Wahid
Background: Mobile applications and effort estimation have a direct relationship: on the basis of size, mobile application development effort can be determined. Inaccuracy or inappropriateness in this approach can cause underestimation or overestimation. A key step in mobile application development is therefore to standardize the approach for predicting the size of an application. Objectives: The primary objective of this study is to quantify the functionality provided by the software to its end users, for which the size of the application must be known. The paper covers the background of mobile application size measures, mobile complexity factors and future work on the size measure. Methods: This is a survey-based study whose primary endpoint was to assess how well the selected parameters match modern mobile application development. A list of questions, commonly known as a questionnaire, was prepared and sent to more than 140 people, including practitioners, researchers and industry professionals. Results: Out of 40 parameters, 9 were selected for inclusion as mobile complexity factors in calculating the functional size of a mobile application. Hence, a new concept for mobile size measures is introduced. Conclusion: Mobile complexity factors were proposed to form a standard to be used as input to the proposed size metrics for estimating mobile application development. To validate the effectiveness of this work, future goals are: a) to propose new sizing metrics to calculate the size of a mobile application, and b) to propose a model for estimating the cost of mobile application development, so that the resulting values are more accurate and the estimation process more streamlined.
Nickel Foam Surface Defect Identification Based on Improved Probability Extreme Learning Machine
Authors: Binfang Cao, Jianqi Li and Fangyan Nie
Background: In the nickel foam production process, detection and identification of surface defects rely heavily on the operators' experience. However, manual observation involves high labor intensity, low efficiency, strong subjectivity and a high error rate. Objective: This paper therefore proposes a new method for nickel foam surface defect detection and identification based on an improved probability extreme learning machine. Methods: First, a machine vision system for nickel foam is established, and the gray-level co-occurrence matrix is used to compute defect features, which are input to an extreme learning machine to train the defect classifier. A composite differential evolution algorithm is then used to optimize the input weights and hidden-layer thresholds. Finally, an integrated probabilistic ELM is proposed to avoid misjudgments when multiple probability values are almost identical. Conclusion: Experiments show that the proposed method achieves a defect-identification accuracy that meets the enterprise's needs.
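A sketch of a plain extreme learning machine (random hidden weights, output weights solved by least squares), to make the baseline learner concrete; the composite differential evolution step and the probabilistic integration from the paper are omitted, and the feature/label arrays are placeholders.

```python
# Basic ELM: random input weights and biases, closed-form output weights.
import numpy as np

def train_elm(X, Y, n_hidden=50, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # output weights (least squares)
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = np.random.rand(100, 8)                        # e.g. GLCM texture features
Y = np.eye(3)[np.random.randint(0, 3, 100)]       # one-hot defect classes
W, b, beta = train_elm(X, Y)
pred = predict_elm(X, W, b, beta).argmax(axis=1)
```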
BeaTicket - Beacon Based Ticketing System
Authors: Vanita Jain, Yogesh Khurana, Mayank Kharbanda and Kanav Mehta
Objective: In this paper, we implement a Beacon Based Ticketing System, which uses the smartphone's Bluetooth to sense the beacon in the bus and interacts with the backend server to issue a ticket to the user. The entire ticket generation and completion process runs in the background without user interaction. Our model employs a contactless approach to ticket generation, unlike smartcards or tokens that need to be touched against a terminal. Method: The basic model of the Beacon Based Ticketing System consists of three devices: a beacon, a Bluetooth-equipped smartphone and a server. Whenever a passenger boards a bus, the beacon attached to the bus broadcasts information that is detected by an application installed on the user's smartphone. Every beacon has a unique ID, which is used to uniquely determine the bus number. The UID is sent to the server, which checks for a minimum balance in the wallet integrated into the application and generates the ticket accordingly. When the passenger de-boards the bus, the information is sent again to the server, which deducts the fare from the wallet and sends the trip summary back to the user's device. This system increases the efficiency of the process and provides comfort and ease to passengers. Results: The model provides an offline ticket generation system: it allows the user to travel without internet access, and as soon as access becomes available, the locally stored boarding and de-boarding times are sent to the server, which determines the respective locations and deducts the fare from the wallet accordingly. The trip history stores the records of all previous trips and displays them to the user in reverse chronological order. Conclusion: We have successfully implemented a bus ticketing system based on beacon technology. Ticket generation can be done both offline and online, and payment is made using the wallet provided to each user, integrated into the application itself. Offline ticket generation allows the user to travel even without internet access. Our model also provides a contactless feature, so the user does not need to touch the smartphone against any terminal or scan any barcode. The optimizer feature notifies the user when the bus is nearby so that the user's waiting time at the bus stop is minimized.
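A toy sketch of the server-side board/de-board flow described above — wallet check on boarding, fare deduction and trip summary on de-boarding; the minimum balance, fare rule and field names are invented for illustration, not the authors' implementation.

```python
# Server-side trip flow for the beacon ticketing idea (illustrative only).
trips, wallets = {}, {"user1": 50.0}

def on_board(user, beacon_uid, time):
    if wallets[user] < 10.0:                  # assumed minimum wallet balance
        raise ValueError("insufficient balance")
    trips[user] = {"bus": beacon_uid, "board_time": time}

def on_deboard(user, time, fare_per_min=0.5):
    trip = trips.pop(user)
    fare = (time - trip["board_time"]) * fare_per_min
    wallets[user] -= fare
    return {"bus": trip["bus"], "fare": fare, "balance": wallets[user]}

on_board("user1", "beacon-42", time=0)
print(on_deboard("user1", time=20))           # trip summary returned to the app
```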
Defining Theoretical Foundations to Unified Metamodel For Model Reusability
Authors: Jagadeeswaran Thangaraj and Senthilkumaran Ulaganathan
Background: In model-driven development, model transformation converts one model into another between different phases of software engineering. In model transformation, the metamodel plays a vital role: it defines the abstract syntax of models and the interrelationships between their elements. A unified metamodel defines an abstract syntax for both source and target models when they share core elements. Theoretical approaches define language- and platform-independent representations of models in software engineering. This paper investigates the theoretical foundation of this unified metamodelling for consistent transformation. Objective: This paper aims to define formal foundations for the unified metamodel, for generating implementations from design specifications and for model reusability. Method: The study considers transformation from design to implementation and vice versa, using theoretical foundations to build a verified software system. Results: The related tools provide a formal representation of the design phase for verification purposes. Our approach provides a set-theoretical foundation for the unified metamodel, for model transformation from USE (UML/OCL) to Spec#. In other words, it defines the formal foundation for a model that carries all the properties required for verification at both the design and implementation phases. Conclusion: This paper introduces a new theoretical framework that acts as an interface between design and implementation to generate verified software systems.
Improving Sentiment Analysis using Hybrid Deep Learning Model
Authors: Avinash C. Pandey and Dharmveer Singh Rajpoot
Background: Sentiment analysis is contextual mining of text that determines users' viewpoints on sentimental topics commonly discussed on social networking websites. Twitter is one such site, where people express their opinion about any topic in the form of tweets. These tweets can be examined with various sentiment classification methods to find users' opinions. Traditional sentiment analysis methods use manually extracted features for opinion classification; manual feature extraction is a complicated task because it requires predefined sentiment lexicons. Deep learning methods, on the other hand, automatically extract relevant features from data and hence provide better performance and richer representational capacity than traditional methods. Objective: The main aim of this paper is to improve sentiment classification accuracy and reduce computational cost. Method: To achieve this objective, a hybrid deep learning model based on a convolutional neural network and a bi-directional long short-term memory network is introduced. Results: The proposed sentiment classification method achieves the highest accuracy on most of the datasets. Further, the efficacy of the proposed method is validated through statistical analysis. Conclusion: Sentiment classification accuracy can be improved by creating well-designed hybrid models. Performance can also be enhanced by tuning the hyperparameters of deep learning models.
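A compact sketch of a CNN + BiLSTM hybrid of the kind the abstract describes; the vocabulary size, sequence length, layer widths and Keras framing are assumptions, not the paper's configuration.

```python
# Hybrid sentiment classifier: convolution for local n-gram features,
# bidirectional LSTM for long-range context, sigmoid output for polarity.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(200,)),                            # padded token ids
    layers.Embedding(input_dim=20000, output_dim=128),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(1, activation="sigmoid"),                 # positive / negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(padded_token_ids, labels, epochs=3, batch_size=64)
```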
The Role of Management and Strategy in the Development of E-Marketing
Authors: Somayeh Khajehasani, Ahmadreza Abolizadeh and Louiza Dehyadegari
Background: The internet is a global network that can be accessed through computers, mobile phones, PDAs, digital TVs, etc. The number of internet users is constantly growing, and internet communication has become routine. The use of the internet by companies to present their products and brand is commonplace and even inevitable. E-marketing refers to all efforts to adjust and develop marketing strategies in virtual spaces, including the web, social media, etc. Big and powerful internet marketing websites do not make small websites disappear; rather, small websites can gain competitive advantage through a market segmentation strategy. Objective: This paper focuses on marketing over high-speed, low-cost internet. Method: When a customer visits a website, he leaves a trace of data called a digital footprint, which can be used to understand the customer's requirements, desires and requests as well as to improve and enhance the web presence. The real position of e-marketing is assessed through an online survey of Iranian retailers and two in-depth interviews. Results: E-markets and online ordering are among the common ways of managing customer affairs in many commercial organizations and institutions. E-marketing is not only a specific function related to selling products and services but also a managerial process for managing the relationship between the organization and the customer. Conclusion: E-marketing performance can be categorized in three parts. 1. Integration: e-marketing covers all selling stages by the company and its agents as an integrated process. 2. Mediation: e-marketing balances customers' demands and requirements against the company's production and service-provision capacity. 3. Brokerage: e-marketing plays a broker role among different parts of the company, including financial sectors and foreign investors. Smart marketing processes should always focus on the necessary relationships between e-marketing processes and data-mining techniques to develop specific marketing strategies on the internet. Today, access to information is no longer a major advantage for organizations; the optimal use of information is senior managers' main concern. This is only possible with integrated and efficient systems that cover all the organization's activities and provide customers with information in due time. With high-speed internet and people turning to an easier, more comfortable life, customers can shop online. We found that market segmentation provided unexpected strategies for winning the competition with small websites, and our analyses showed that small websites could increase their market share, stay in the competition, or win through market segmentation.
Speech Recognition Using Elman Artificial Neural Network and Linear Predictive Coding
Authors: Somayeh Khajehasani and Louiza Dehyadegari
Background: Today, the demand for automatic intelligent systems has drawn increasing attention to modern interactive techniques between humans and machines. These techniques generally come in two types: audio and visual. The need for algorithms that enable machine recognition of human speech is therefore of high importance and is frequently studied by researchers. Objective: Using artificial intelligence methods has led to better results in human speech recognition, but the basic problem is the lack of an appropriate strategy for selecting the recognition data from the huge amount of speech information, which in practice makes it impossible for the available algorithms to work. Method: In this article, to solve this problem, linear predictive coding coefficient extraction is used to summarize the data related to the pronunciation of English digits. After the database is extracted, it is fed to an Elman neural network to learn the relation between the linear coding coefficients of an audio file and the pronounced digit. Results: The results show that this method performs well compared to other methods. According to the experiments, the network training results (99% recognition accuracy) indicate that the network performs better than RBF despite many errors. Conclusion: The experiments showed that the Elman memory neural network achieves an acceptable performance in recognizing the speech signal compared to the other algorithms. The use of linear predictive coding coefficients along with the Elman neural network leads to higher recognition accuracy and improves the speech recognition system.
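A hedged sketch of the pipeline the abstract outlines — frame-wise LPC coefficients fed to an Elman-style recurrent network (PyTorch's `nn.RNN`); the sample rate, frame sizes, LPC order, layer widths and file name are assumptions, not the paper's settings.

```python
# LPC features per frame -> Elman RNN -> digit logits (illustrative sketch).
import librosa
import numpy as np
import torch
import torch.nn as nn

def lpc_sequence(path, order=12, frame=1024, hop=512):
    y, _ = librosa.load(path, sr=8000)
    frames = librosa.util.frame(y, frame_length=frame, hop_length=hop).T
    return np.stack([librosa.lpc(f, order=order)[1:] for f in frames])  # (T, order)

class ElmanDigitNet(nn.Module):
    def __init__(self, order=12, hidden=32, n_digits=10):
        super().__init__()
        self.rnn = nn.RNN(order, hidden, batch_first=True)  # Elman recurrence
        self.out = nn.Linear(hidden, n_digits)
    def forward(self, x):                  # x: (batch, T, order)
        _, h = self.rnn(x)
        return self.out(h[-1])             # logits over the ten digits

# feats = torch.tensor(lpc_sequence("zero.wav"), dtype=torch.float32)[None]
# logits = ElmanDigitNet()(feats)
```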
Prediction and Analysis of Strawberry Moisture Content based on BP Neural Network Model
Authors: Wei Jiang, Hongmei Xu, Elnaz Akbari, Jiang Wen, Shuang Liu, Chenglong Wang and Jiajun Dong
Background: Moisture content is one of the most important indicators of the quality of fresh strawberries. Several methods are currently employed to detect the moisture content of strawberries; however, these methods are relatively simple, can only detect the moisture content of single samples rather than batches, and may destroy the integrity of the samples. It is therefore important to develop a simple and efficient prediction method for strawberry moisture to facilitate the market circulation of strawberries. Objective: This study aims to establish a novel BP neural network prediction model to predict and analyze strawberry moisture. Methods: Toyonoka and Jingyao strawberries were taken as the research objects. Hyperspectral technology, spectral difference analysis, the correlation coefficient method, principal component analysis and artificial neural network techniques were combined to predict the moisture content of strawberries. Results: The characteristic wavelengths were highly correlated with strawberry moisture content. The stability and predictive performance of the BP neural network model based on characteristic wavelengths were superior to those of the model based on principal components, and the correlation coefficients of the calibration set for Toyonoka and Jingyao reached 0.9532 and 0.9846, respectively, with low standard deviations (0.3204 and 0.3010, respectively). Conclusion: The BP neural network prediction model of strawberry moisture is practicable and can provide a reference for on-line, non-destructive detection of fruits and vegetables.
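A minimal sketch of a back-propagation (MLP) regression of moisture content on reflectance at a few characteristic wavelengths; the number of bands, the synthetic reflectance data and the network size are assumptions, not the paper's hyperspectral measurements.

```python
# BP-network-style regression of moisture content from selected spectral bands.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(0.1, 0.9, size=(120, 6))        # reflectance at 6 selected bands
y = 85 + 10 * X[:, 0] - 5 * X[:, 3] + rng.normal(0, 0.3, 120)  # moisture (%)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=1)
model.fit(X_tr, y_tr)
print("R on test set:", np.corrcoef(y_te, model.predict(X_te))[0, 1])
```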
An Asymmetric Optical Cryptosystem of Double Image Encryption Based on Optical Vortex Phase Mask Using Gyrator Transform Domain
Authors: Hukum Singh and Mehak Khurana
Background: The Optical Vortex (OV) has attracted attention among many researchers. This paper proposes a nonlinear image encryption scheme based on an Optical Vortex and Double Random Phase Encoding (DRPE) in the Gyrator Transform (GT) domain under phase truncation operations. Objective: The amplitude and phase truncation operations in the encryption generate two decryption keys and make the method nonlinear. An opto-electronic implementation has also been proposed. The original image can only be decrypted with the correct values of the OV, the GT rotation angles and the Decryption Keys (DKs). Methods: A novel asymmetric image encryption scheme using an optical vortex mask is proposed, based on amplitude and phase truncation. The scheme is further strengthened by the parameters of the Optical Vortex and by taking the (n)th power in the encryption path and the (n)th root in the decryption path. Results: For each encryption parameter, the binary image shows greater sensitivity than the grayscale image. The scheme increases security by using an OV-based Structured Phase Mask (SPM), which expands the key space. The scheme's robustness and sensitivity have also been investigated against various attacks, such as noise and occlusion attacks, over a number of iterations. Conclusion: The scheme addresses the key-space problem through the GT rotation angles and the OV phase mask, thus enhancing security. It has been verified with various security parameters such as occlusion, noise attacks, correlation coefficient (CC), entropy, etc.
Efficient Dynamic Resource Allocation in Hadoop Multiclusters for Load-Balancing Problem
Authors: Karthikeyan S., Hari Seetha and Manimegalai R.
Background: MapReduce is a framework that processes data by coordinating distributed servers and running the various tasks in parallel. The most important problems in a MapReduce environment are resource allocation in distributed environments and data locality with respect to the corresponding slave nodes. If applications are not scheduled properly, load-unbalancing problems arise in cloud environments. Objective: This research suggests a new dynamic way of allocating resources in a Hadoop multi-cluster environment in order to effectively monitor the nodes for faster computation, load balancing and data locality. Dynamic slot allocation is explained theoretically to address the problems of pre-configuration, speculative execution, scheduling delay and pre-slot allocation in Hadoop environments. Experiments on a Hadoop cluster show that it increases the efficiency of the nodes and solves the load-balancing problem. Methods: The current design of MapReduce Hadoop systems suffers from under-utilization of slots, because the number of map and reduce tasks allotted is often smaller than the available number of map and reduce slots. In Hadoop it is often observed that, under the existing slot allocation policy, mappers or reducers sit idle; unused map slots can therefore be given to overloaded reduce tasks to increase the efficiency of MapReduce jobs, and vice versa. Results: A real-time experiment was implemented with multi-node Hadoop cluster MapReduce jobs on files of 1 to 5 gigabytes, and various performance tests were carried out. Conclusion: This paper focuses on Hadoop MapReduce resource allocation management techniques for multi-cluster environments. It proposes a novel dynamic slot allocation policy that improves the performance of the YARN scheduler and eliminates the load-balancing problem, and shows that dynamic slot allocation outperforms the YARN framework. In future work, CPU bandwidth and processing time will be considered.
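A toy illustration of the dynamic-slot idea described in the Methods: idle map slots are lent to pending reduce tasks (and vice versa) instead of sitting unused. The slot and task counts are made up, and this is not the authors' scheduler implementation.

```python
# Borrowing idle slots across the map and reduce phases (illustrative only).
def assign_slots(pending_maps, pending_reduces, map_slots, reduce_slots):
    run_maps = min(pending_maps, map_slots)
    run_reduces = min(pending_reduces, reduce_slots)
    idle_maps = map_slots - run_maps
    idle_reduces = reduce_slots - run_reduces
    # lend idle slots to the overloaded phase instead of leaving them unused
    run_reduces += min(idle_maps, pending_reduces - run_reduces)
    run_maps += min(idle_reduces, pending_maps - run_maps)
    return run_maps, run_reduces

print(assign_slots(pending_maps=2, pending_reduces=30, map_slots=20, reduce_slots=10))
# -> (2, 28): 18 idle map slots are reused by waiting reduce tasks
```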
Prediction of Tetralogy of Fallot using Fuzzy Clustering
Authors: K.R. K. Devi and V. Deepa
Background: Congenital heart disease is an abnormality in the heart's structure, and predicting tetralogy of Fallot in a heart is a difficult task. A cluster is a collection of data objects that are similar to one another within the same group and different from the objects in other clusters. To detect edges, the clustering mechanism improves its accuracy by using segmentation and colour space conversion of an image, implemented in Fuzzy c-Means with Edge and Local Information. Objective: To predict tetralogy of Fallot in a heart, a clustering mechanism is used. Fuzzy c-Means with Edge and Local Information provides the accuracy needed to detect the edges of the defect and so identify the congenital heart disease efficiently. Methods: One of the finest image clustering methods, Fuzzy c-Means with Edge and Local Information, introduces weights for pixel values to increase edge detection accuracy and identifies the pixel value within its local neighbourhood window to improve exactness. For evaluation, the Adjusted Rand Index metric is used to achieve an accurate measurement. Results: The cluster metrics Adjusted Rand Index and Jaccard index are used to evaluate Fuzzy c-Means with Edge and Local Information, which gives accurate results in identifying the edges. In the evaluation of the clustering technique, the Adjusted Rand Index and Jaccard index give values of 0.2, 0.6363 and 0.8333 compared to other clustering methods. Conclusion: Tetralogy of Fallot is identified accurately, with better performance in detecting edges, and the approach should also be useful for identifying further defects in various heart diseases. Fuzzy c-Means with Edge and Local Information and the gray-level co-occurrence matrix are more promising than other clustering techniques.
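A sketch of plain fuzzy c-means plus the Adjusted Rand Index used for evaluation; the edge/local-information weighting of the paper's variant is omitted, and the two-blob data are synthetic.

```python
# Standard fuzzy c-means (no edge/local-information term) evaluated with ARI.
import numpy as np
from sklearn.metrics import adjusted_rand_score

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))            # fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]    # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        p = 2 / (m - 1)
        U = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=1, keepdims=True))
    return U, centers

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
labels_true = np.array([0] * 50 + [1] * 50)
U, _ = fuzzy_cmeans(X)
print("Adjusted Rand Index:", adjusted_rand_score(labels_true, U.argmax(axis=1)))
```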
A Framework for Multimedia Event Classification With Convoluted Texture Feature
Authors: Kaavya Kanagaraj and Lakshmi P. G.G.
Background: The proposed work builds on two approaches: (i) the LBP approach and (ii) the Kirsch compass mask. Texture classification plays a vital role in discriminating objects in the original image, and LBP is widely used for classifying texture. Many filtering-based methods, co-occurrence matrix methods, etc. have been used, but LBP is adopted most often because of its computational efficiency and invariance to monotonic grey-level changes. Second, as edges play a vital role in discriminating objects visually, the Kirsch compass mask is applied to obtain the maximum edge strength in 8 compass directions; compared to other compass masks, it has the advantage that the mask can be adapted to the user's own requirements. Objective: The objective of this work is to extract better features and model a classifier for the Multimedia Event Detection task. Methods: The proposed work extracts features in two steps: first, an LBP-based approach is used for object discrimination; then, convolution with the Kirsch compass mask is used to determine object magnitude. Eigenvalue decomposition is adopted for feature representation. Finally, a classifier is modelled using a chi-square kernel for the event classification task. Result: The proposed event detection work is evaluated on the Columbia Consumer Video (CCV) dataset, which contains 20 event-based video categories, and is compared with existing works using mean Average Precision (mAP). Several experiments were carried out: LBP vs. non-LBP approaches, Kirsch vs. Robinson compass masks, angle-wise analysis of the Kirsch masks, and comparison of these approaches in the modelled classifier. Two settings are used to compare the proposed work with other existing works: (i) non-clustered events (events considered individually with a one-versus-one strategy) and (ii) clustered events (some events clustered with a one-vs-all strategy, the remaining events non-clustered). Conclusion: The proposed work describes a method for event detection. Feature extraction is performed using an LBP-based approach and convolution with the Kirsch compass mask; for event detection, a classifier model is generated using the chi-square kernel. The accuracy of event classification is further increased using the clustered-events approach. The proposed work is compared with various state-of-the-art methods and shows outstanding performance.
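A sketch of the two feature steps named above — LBP texture codes and the maximum Kirsch edge response over the 8 compass directions; the input image is random, and the eigenvalue-decomposition representation and chi-square-kernel classifier of the paper are not reproduced here.

```python
# LBP codes plus maximum Kirsch compass-mask response (feature-step sketch).
import numpy as np
from scipy.ndimage import convolve
from skimage.feature import local_binary_pattern

def kirsch_masks():
    border = [5, 5, 5, -3, -3, -3, -3, -3]              # clockwise border values
    pos = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    masks = []
    for k in range(8):                                   # 8 compass rotations
        m = np.zeros((3, 3))
        for (r, c), v in zip(pos, border[k:] + border[:k]):
            m[r, c] = v
        masks.append(m)
    return masks

image = np.random.rand(64, 64)                           # stand-in video frame
lbp = local_binary_pattern(image, P=8, R=1, method="uniform")        # texture codes
edge = np.max([convolve(image, m) for m in kirsch_masks()], axis=0)  # max edge strength
print(lbp.shape, edge.shape)
```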
Empowered AODV Protocol in Wireless Sensor Network using Three Variable RSA Cryptosystem
Authors: Amit K. Agarwal, Munesh Chandra and S.S. Sarangdevot
Background: A Wireless Sensor Network (WSN) is a type of network primarily designed for monitoring in remote areas. It consists of communicating nodes (sensors) that share their data with each other and pass information to the central node. Many applications, such as defence, require secure communication of information. However, due to characteristics of WSNs such as the open shared communication channel and the limited memory and processing power of the sensors, these networks are vulnerable to various attacks such as black hole and gray hole attacks. Objective: The objective of the paper is to secure the AODV routing protocol in WSNs using cryptographic techniques. Methods: The Ad hoc On-demand Distance Vector (AODV) routing protocol is chosen for information routing because of its lightweight processing requirements. To provide secure communication in WSNs, the AODV routing protocol is secured using the RSA key generation algorithm; here, RSA with three variables (three prime numbers) is employed instead of two. Results: The effectiveness of the proposed approach in handling the black hole attack is verified through simulation results obtained from experiments conducted with the Network Simulator tool (NS2). Three popular performance metrics, namely average end-to-end delay, packet delivery ratio and average throughput, are used for evaluation. The results are observed under different pause times and varying numbers of malicious nodes. Conclusion: A new three-variable RSA cryptosystem-based security model is proposed to protect communication against the Black Hole (BH) attack in wireless sensor networks. The use of three primes instead of two allows the model to provide more security than other methods. Simulation results obtained from the experiments carried out using NS2 demonstrate the performance of the proposed model over original AODV and other previous models.
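A minimal sketch of RSA key generation with three primes instead of two, which is the cryptographic core the abstract describes; the tiny primes are for illustration only, and the integration with AODV route messages in NS2 is not shown.

```python
# Three-prime RSA: n = p*q*r, phi = (p-1)(q-1)(r-1), same encrypt/decrypt flow.
from math import gcd

p, q, r = 1009, 1013, 1019                 # three primes (toy sizes)
n = p * q * r
phi = (p - 1) * (q - 1) * (r - 1)

e = 65537
assert gcd(e, phi) == 1                     # public exponent must be coprime to phi
d = pow(e, -1, phi)                         # private exponent (Python 3.8+)

message = 123456
cipher = pow(message, e, n)
assert pow(cipher, d, n) == message         # decryption recovers the message
```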