Recent Patents on Computer Science - Volume 12, Issue 3, 2019
Analysis of Non-Linear Activation Functions for Classification Tasks Using Convolutional Neural Networks
Authors: Aman Dureja and Payal Pahwa
Background: In building deep neural networks, activation functions play an important role, and their choice affects both optimization and the quality of the results. Several activation functions have been introduced in machine learning for practical applications, but which activation function should be used at the hidden layers of deep neural networks has not been established. Objective: The primary objective of this analysis was to determine which activation function should be used at the hidden layers of deep neural networks to solve complex non-linear problems. Methods: The comparative model was configured using a two-class (Cat/Dog) dataset. The network used three convolutional layers, each followed by a pooling layer. The dataset was split into two parts: the first 8000 images were used for training the network and the remaining 2000 images for testing it. Results: The experimental comparison was performed by analyzing the network with different activation functions (ReLU, Tanh, SELU, PReLU, ELU) at the hidden layers, measuring validation error and accuracy on the Cat/Dog dataset. Overall, ReLU gave the best performance, with a validation loss of 0.3912 and a validation accuracy of 0.8320 at the 25th epoch. Conclusion: A CNN model with ReLU hidden layers (three here) gives the best results and improves overall performance in terms of accuracy and speed. These advantages of ReLU at the hidden layers support effective and fast retrieval of images from databases.
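The activation functions compared in this study can each be written in a line or two; the sketch below uses plain Python, with the commonly published default values for the ELU/PReLU α parameters and the SELU constants (these defaults are an assumption, not values taken from the paper):

```python
import math

def relu(x):
    # ReLU: zero for negative inputs, identity otherwise.
    return max(0.0, x)

def tanh(x):
    # Hyperbolic tangent: saturates at -1 and +1.
    return math.tanh(x)

def elu(x, alpha=1.0):
    # ELU: smooth exponential saturation for negative inputs.
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

def selu(x, alpha=1.6732632423543772, scale=1.0507009873554805):
    # SELU: scaled ELU with self-normalizing constants.
    return scale * (x if x > 0 else alpha * (math.exp(x) - 1.0))

def prelu(x, alpha=0.25):
    # PReLU: the negative-side slope alpha is learned during training.
    return x if x > 0 else alpha * x
```

ReLU's cheap max(0, x) form, with no exponential to evaluate, is one reason it tends to train faster than the saturating alternatives.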
Classification and Retrieval of Images Based on Extensive Context and Content Feature Set
Authors: Thiriveedhi Y. S. Rao and Pakanati C. Reddy
Background: This paper presents achievements in the classification and retrieval of images, particularly content-based image retrieval, an area that has been very active and successful in the past few years. Objective: Features are extracted based on the bag of visual words (BOW) model using the Scale-Invariant Feature Transform (SIFT) and an improved K-Means clustering method. Methods: Texture is extracted with an improved multi-texton method developed in our study. The retrieval process consists of two stages, classification and retrieval. Images are classified from the extracted features by applying the k-Nearest Neighbor (kNN) algorithm, separating them into classes in order to improve precision and recall. Results: After classification, similar images are retrieved from the relevant class for a given query image.
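The kNN classification stage can be sketched in a few lines: each image is a feature vector (here a toy 2-D vector standing in for the paper's BOW/texture descriptors), and a query is assigned the majority label of its k nearest neighbours:

```python
import math
from collections import Counter

def knn_classify(query, examples, k=3):
    """Assign `query` the majority label among its k nearest examples.

    `examples` is a list of (feature_vector, label) pairs; the toy 2-D
    features below stand in for real image descriptors.
    """
    nearest = sorted(examples, key=lambda e: math.dist(query, e[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

examples = [((0.0, 0.0), "a"), ((0.0, 1.0), "a"),
            ((5.0, 5.0), "b"), ((5.0, 6.0), "b"), ((6.0, 5.0), "b")]
label = knn_classify((0.2, 0.2), examples)
```

Retrieval then only ranks images within the predicted class, which is what improves precision and recall over searching the whole collection.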
Classification of Operational and Financial Variables Affecting the Bullwhip Effect in Indian Sectors: A Machine Learning Approach
Authors: Sachin Gupta and Anurag Saxena
Background: The bullwhip effect is the amplification of variability in production or procurement relative to the variability in demand or sales. It is an impediment to supply chain optimization because it introduces inefficiency into the chain. Operations and supply chain management consultants, managers and researchers have studied the causes of this dynamic behaviour rigorously, listing shorter product life cycles, changing technology, changing consumer preferences and globalization, to name a few. Most of the literature exploring the bullwhip effect is based on simulations and mathematical models; exploring it using machine learning is the novel approach of the present study. Methods: The present study explores the operational and financial variables affecting the bullwhip effect on the basis of secondary data. Data mining and machine learning techniques are used to explore the variables affecting the bullwhip effect in Indian sectors. The RapidMiner tool was used for data mining, with 10-fold cross-validation. A Weka Alternating Decision Tree (w-ADT) was built after classification to help decision makers mitigate the bullwhip effect. Results: Out of the 19 variables considered, 7 were selected as having the highest accuracy with minimum deviation. Conclusion: Classification using machine learning provides an effective tool for exploring the bullwhip effect in supply chain management.
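The amplification described above is commonly quantified as the ratio of order (production/procurement) variance to demand variance, with a ratio above 1 indicating a bullwhip effect. A minimal sketch (the sample series are illustrative, not the study's data):

```python
from statistics import pvariance

def bullwhip_ratio(orders, demand):
    # Variance amplification: > 1 means upstream (order) variability
    # exceeds downstream (demand) variability, i.e. bullwhip is present.
    return pvariance(orders) / pvariance(demand)

demand = [100, 102, 98, 101, 99, 100]
orders = [95, 110, 85, 108, 90, 105]   # an overreacting replenishment policy
ratio = bullwhip_ratio(orders, demand)
```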
Effect of Multiple-Agent Deployment in MANET
Authors: Bandana Mahapatra, Srikant Patnaik and Anand Nayyar
Background: Scaling up MANETs is an important concern, since every node has to maintain up-to-date routing information. Agents are generally deployed to balance the load, but a single agent may not perform satisfactorily when the network has a large set of nodes. Multiple agents therefore become necessary as the network grows, yet launching agents involves computational complexity and power consumption and, in turn, increases network traffic. This paper studies the impact of multiple-agent deployment in MANETs to quantify the number of agents that best balances the computational overhead against the performance gain of involving multiple agents. Methods: The behaviour of a varying number of agents launched by a node in a dynamic network environment is analysed across different network metrics. Considering all the constraints affecting network performance, the optimal number of agents is then determined using the F-Min constrained optimization technique. Results: Pareto-optimal points are generated that closely approximate the exact solution. Conclusion: The paper strikes a balance between the performance gain of multiple agents and constraints such as the power consumption involved in launching them.
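The trade-off being optimized can be illustrated with a toy objective: per-agent overhead (computation, power, traffic) grows roughly linearly, while the load-balancing gain saturates as agents are added. The cost model and coefficients below are hypothetical, chosen only to show the shape of the problem, not the paper's formulation:

```python
def net_cost(n_agents, overhead_per_agent=1.0, max_gain=12.0, half_sat=4.0):
    # Hypothetical model: gain saturates with agent count (diminishing
    # returns), while overhead keeps growing linearly.
    gain = max_gain * n_agents / (n_agents + half_sat)
    return overhead_per_agent * n_agents - gain

# Exhaustive search over a small integer range stands in for the
# constrained optimization used in the paper.
best = min(range(1, 21), key=net_cost)
```

With these coefficients the net cost is minimized at an intermediate agent count: too few agents forgo load-balancing gain, too many pay overhead for little extra benefit.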
A Two-tier Security Solution for Storing Data Across Public Cloud
Authors: Kavita Sharma, Fatima Rafiqui, Diksha, Prabhanshu Attri and Sumit K. Yadav
Background: Protecting data integrity in cloud computing is challenging because users no longer possess their own data, which makes cloud data storage security critically important. Users can resort to legal action against a cloud provider that fails to maintain the integrity of shared data, but there is also a need to secure users' private data across the public cloud. Methods: In this paper, we propose a novel end-to-end solution to ensure the security of data stored on a public cloud. It is a two-tier approach in which the data is stored in encrypted form and only the owner of the data can access the original data shared across the cloud. The OwnData encryption and decryption algorithm is based on AES file encryption and has the flexibility to be implemented across different cloud platforms. Results: The proposed OwnData model successfully secures data in encrypted form, providing privacy and confidentiality, and users gain full control over the accessibility of their data. The application has been improved to minimize page load time, which improves scalability. The algorithm and concatenation (dot) operators impose minimal computation load when uploading data to the cloud platform or downloading it. Conclusion: The algorithm is robust, scalable and secure, and it gives the user complete authorization and control over the data even when the data is stored remotely or on other cloud premises.
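The two-tier idea (data lives on the cloud only in encrypted form; only the key-holding owner recovers plaintext) can be sketched as follows. Python's standard library has no AES implementation, so this sketch substitutes a SHA-256-based XOR keystream purely for illustration; a real deployment would use AES via a vetted cryptographic library, as the paper does:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random byte stream from the owner's key.
    # Illustrative stand-in only: this is NOT AES and must not be
    # used for real security.
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XORing with the same keystream inverts itself

# What the cloud stores is only the ciphertext; the key never leaves the owner.
stored = encrypt(b"owner-secret", b"report.pdf contents")
```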
Construction and Reduction Methods of Web Spam Identification Index System
Authors: Yuancheng Li, Rong Huang and Xiangqian Nie
Background: With the rapid development of the Internet, the amount of web spam has increased dramatically in recent years, wasting search engine storage and computing power on a massive scale. To identify web spam effectively, the content, link, hidden and quality features of web pages are integrated to establish a web spam identification index system. However, the index system is of high dimensionality with strongly correlated indexes. Methods: A stacked autoencoder neural network (SAE), an improved form of the autoencoder, is used to reduce the dimensionality of the web spam identification index system. Results: The experimental results show that our method effectively reduces the web spam indexes and significantly improves the recognition rate in subsequent work. Conclusion: An autoencoder-based web spam index reduction method is proposed in this paper. The experimental results show that it greatly reduces the temporal and spatial complexity of a future web spam detection model.
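The reduce-then-detect idea can be illustrated with the smallest possible autoencoder: a single linear hidden unit compressing 2-D inputs to one dimension and reconstructing them, trained by gradient descent on the reconstruction error. A real SAE stacks several nonlinear layers; this toy (with made-up data lying near a line, so one dimension suffices) only shows the mechanism:

```python
# Toy linear autoencoder: 2-D -> 1-D -> 2-D, trained by plain SGD.
data = [(0.5, 1.0), (1.0, 2.05), (1.5, 2.95), (2.0, 4.1)]  # nearly 1-D

w = [0.5, 0.5]   # encoder weights
v = [0.5, 0.5]   # decoder weights
lr = 0.01

def loss():
    total = 0.0
    for x1, x2 in data:
        h = w[0] * x1 + w[1] * x2          # encode to 1 dimension
        e1, e2 = v[0] * h - x1, v[1] * h - x2  # reconstruction error
        total += e1 * e1 + e2 * e2
    return total / len(data)

initial = loss()
for _ in range(500):
    for x1, x2 in data:
        h = w[0] * x1 + w[1] * x2
        e1, e2 = v[0] * h - x1, v[1] * h - x2
        # Gradients of the squared reconstruction error.
        gv = [2 * e1 * h, 2 * e2 * h]
        gh = 2 * (e1 * v[0] + e2 * v[1])
        gw = [gh * x1, gh * x2]
        v = [v[i] - lr * gv[i] for i in range(2)]
        w = [w[i] - lr * gw[i] for i in range(2)]
final = loss()
```

The trained hidden value h is the reduced index: downstream detection runs on h instead of the full correlated feature set.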
HRED, An Active Queue Management Algorithm for TCP Congestion Control
Authors: Nabhan Hamadneh, Mamoon Obiedat, Ahmad Qawasmeh and Mohammad Bsoul
Background: Active Queue Management (AQM) is a TCP congestion avoidance approach that predicts congestion before sources overwhelm router buffers. Random Early Detection (RED) is an AQM strategy that keeps a history of queue dynamics by estimating an average queue size parameter avg, and drops packets when this average exceeds preset thresholds. Parameter configuration in RED is problematic, and the performance of the whole network can suffer from a wrong setup of these parameters. The drop probability is a further parameter calculated by RED to tune the drop rate to the aggressiveness of arriving traffic. Objective: In this article, we propose an enhancement to the drop probability calculation to increase the performance of RED. Methods: This article studies the drop rate when the average queue size is at the midpoint between the minimum and maximum thresholds, and proposes a nonlinear adjustment of the drop rate in this region. Hence, we call the strategy Half-Way RED (HRED). Results: The strategy is tested using the NS2 simulator and compared with queue management strategies including RED, Tail Drop (TD) and Gentle-RED, with throughput, link utilization and packet drop rate as the calculated performance parameters. Conclusion: Each performance parameter has been plotted in a separate figure, and the robustness of each strategy has been evaluated against these parameters. The results suggest that the proposed function enhances the performance of RED-like strategies in controlling congestion: HRED outperformed the other strategies in this article in terms of throughput, link utilization and packet loss rate.
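In standard RED the drop probability ramps linearly from 0 to max_p as avg moves from the minimum to the maximum threshold. The sketch below contrasts that linear ramp with a nonlinear (here quadratic) one centred on the same interval, in the spirit of HRED's midpoint adjustment; the exact HRED function is defined in the paper, so the quadratic form and the threshold values are only illustrative:

```python
def red_drop_prob(avg, min_th=5.0, max_th=15.0, max_p=0.1):
    # Classic RED: linear ramp between the two thresholds.
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0          # forced drop above max_th
    return max_p * (avg - min_th) / (max_th - min_th)

def nonlinear_drop_prob(avg, min_th=5.0, max_th=15.0, max_p=0.1):
    # Quadratic ramp: gentler than RED below the midpoint, steeper
    # approaching max_th (an HRED-like nonlinear adjustment).
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0
    frac = (avg - min_th) / (max_th - min_th)
    return max_p * frac * frac
```

At the midpoint (avg = 10 here) the linear ramp gives max_p/2 while the quadratic gives max_p/4, i.e. the nonlinear curve drops fewer packets in the half-way region for the same parameters.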
Recent Advances in Supply Chain Costs-based Importance Measures in Supply Chain Systems Reliability
Authors: Dui Hongyan, Li Ceng and Zhang Chi
Background: In recent years, cost has become one of the biggest obstacles to the effective operation of supply chains, and it is increasingly urgent to minimize cost and optimize the supply chain system. However, importance measures have seen little use as a research focus in solving such optimization problems. Methods: In view of the foregoing, this paper combines reliability, importance measures and cost to improve the supply chain. Supply chain costs are divided into three classes: design cost, manufacturing cost and inventory cost. Results: Importance measures based on these costs are then derived and demonstrated in a case study. The results show that the optimal system and minimal cost can be obtained by focusing on the most important parts of supply chain operation. Conclusion: The importance ordering and the identification of key elements help increase the operational efficiency of the supply chain and provide effective methods for improving supply chain management.
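The general idea of a cost-weighted importance measure can be illustrated with the classical Birnbaum measure. For a series system with independent components, system reliability is R = Πᵢ pᵢ, so component k's Birnbaum importance is ∂R/∂pₖ = R/pₖ; dividing by each component's improvement cost gives one plausible cost-based ranking. This is a generic sketch of the concept, not the specific measures defined in the paper:

```python
from math import prod

def birnbaum_importance(reliabilities):
    # Series system: R = prod(p_i), so dR/dp_k = R / p_k.
    R = prod(reliabilities)
    return [R / p for p in reliabilities]

def cost_weighted_ranking(reliabilities, costs):
    # Rank components by reliability importance per unit cost (descending):
    # improve the cheapest-to-fix, most-critical stage first.
    imp = birnbaum_importance(reliabilities)
    return sorted(range(len(imp)), key=lambda k: imp[k] / costs[k], reverse=True)

# Three hypothetical supply chain stages: reliability and improvement cost.
p = [0.90, 0.95, 0.99]
c = [3.0, 1.0, 2.0]
ranking = cost_weighted_ranking(p, c)
```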
Fuzzy based Schematic Component Selection Decision Search with OPAM-Ocaml Engine
Authors: Iqbaldeep Kaur and Rajesh K. Bawa
Background: With the exponential growth of software, both online and offline, the task of digging out precise and relevant software components has become the need of the hour. There is no dearth of techniques for retrieving software components from the available online and offline repositories in the conceptual as well as the empirical literature, but each technique has its own limitations and areas of suitability. Objective: The proposed technique gives concrete decisions using a schematic-based search, yielding better results and higher precision and recall. Methods: In this paper, a component decision and retrieval engine called SR-SCRS (Schematic and Refinement based Software Component Retrieval System) is presented using OPAM, a GitHub-hosted repository of software components (packages) designed by OCamlPro. The engine employs two retrieval techniques for robust decisions: a schematic-based search with fuzzy logic and a refinement-based search. The schematic-based search matches attribute values against the thresholds given by the user, after which the results are optimized with fuzzy logic to rank their relevance. The refinement-based search works on one particular attribute value. The experiments were conducted and validated on the OPAM dataset. Results: The average precision of the schematic-based search and the refinement-based search is 60% and 27.86%, respectively. Conclusion: The performance and efficiency of the proposed work have thus been evaluated and compared across the two retrieval techniques.
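The fuzzy relevance step of a schematic search can be illustrated with a triangular membership function: an attribute matching the user's target exactly scores 1, and the score decays linearly to 0 at a tolerance bound, with a component's overall score averaging its per-attribute degrees. The attribute names and tolerances below are hypothetical; the paper's actual membership functions may differ:

```python
def triangular_membership(value, target, tolerance):
    # Degree (0..1) to which `value` matches `target` within `tolerance`.
    return max(0.0, 1.0 - abs(value - target) / tolerance)

def fuzzy_score(component, query, tolerances):
    # Overall relevance: average membership over all queried attributes.
    degrees = [triangular_membership(component[a], query[a], tolerances[a])
               for a in query]
    return sum(degrees) / len(degrees)

query      = {"version": 4.0, "size_kb": 120.0}   # hypothetical attributes
tolerances = {"version": 2.0, "size_kb": 100.0}
candidate  = {"version": 4.0, "size_kb": 170.0}
score = fuzzy_score(candidate, query, tolerances)
```

Ranking candidates by this score, rather than by hard threshold pass/fail, is what lets near-matches surface instead of being discarded.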
Research on Optimal CDMA Multiuser Detection Based on Stochastic Hopfield Neural Network
Authors: Tongke Fan
Background: Most common multi-user detection techniques suffer from heavy computation and slow operation. Hopfield neural networks offer high-speed search and parallel processing, but they suffer from local convergence problems. Objective: The stochastic Hopfield neural network avoids local convergence by introducing noise into the state variables, and thereby achieves optimal detection. Methods: Building on the CDMA communication model, this paper formulates and models the multi-user detection problem. A new stochastic Hopfield neural network is then obtained by introducing a stochastic disturbance into the traditional Hopfield neural network. Finally, CDMA multi-user detection is simulated. Conclusion: The results show that introducing a stochastic disturbance into the Hopfield neural network helps it jump out of local minima, reaching the global minimum and improving the performance of the network.
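The mechanism of a noisy Hopfield update can be sketched with a tiny bipolar network: Hebbian weights store a pattern, and each asynchronous update adds a random disturbance to the local field before thresholding, with the noise annealed toward zero over sweeps. This toy stores one pattern and restores a corrupted copy; the CDMA detection energy function used in the paper is more involved, so this is illustration only:

```python
import random

random.seed(0)

# Store one bipolar pattern with the Hebbian rule (zero diagonal).
pattern = [1, -1, 1, -1, 1, -1]
n = len(pattern)
W = [[0 if i == j else pattern[i] * pattern[j] for j in range(n)]
     for i in range(n)]

def recall(state, sweeps=20, noise=2.0, decay=0.7):
    # Asynchronous updates with an additive stochastic disturbance on the
    # local field; annealing the noise toward zero lets the state escape
    # shallow local minima while still settling at the end.
    state = list(state)
    for _ in range(sweeps):
        for i in range(n):
            field = sum(W[i][j] * state[j] for j in range(n))
            field += random.uniform(-noise, noise)
            state[i] = 1 if field >= 0 else -1
        noise *= decay
    return state

corrupted = [-1, -1, 1, -1, 1, -1]  # first bit flipped
restored = recall(corrupted)
```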