Recent Advances in Computer Science and Communications - Volume 13, Issue 5, 2020
Secrecy Outage Performance of Cognitive Radio Network with Selection Combining at Eavesdropper
Authors: Khyati Chopra, Ranjan Bose and Anupam Joshi

Background: Based on the idea of cooperative communication, cooperative spectrum access for secure information transmission in a Cognitive Radio Network (CRN) has recently drawn a lot of attention. Security is one of the most important aspects of these networks: due to their open and dynamic nature, they are extremely vulnerable to malicious behavior. Cooperative cognitive radio has emerged as a dynamic spectrum access technique in which an unlicensed (secondary) user is allowed to simultaneously access the licensed channels dedicated to a Primary User (PU), as long as the Quality of Service (QoS) of the primary communication is not affected.

Methods: This paper investigates the secrecy outage performance of a threshold-based cognitive decode-and-forward relay network under interference constraints from the primary licensed user. Threshold-based relaying is considered, where the source message is successfully decoded by the relay only if the received SNR satisfies a particular threshold. Outage probability expressions have been derived for the worst-case scenario, in which only the eavesdropper can achieve the advantage of diversity. The Selection Combining (SC) diversity scheme is employed only at the secondary eavesdropper.

Results: The system secrecy performance is better with the SC diversity scheme at the eavesdropper than with the Maximal Ratio Combining (MRC) scheme, since MRC gives the eavesdropper better diversity performance than SC. We have shown that the desired secrecy rate, the predetermined threshold, the eavesdropper channel quality and the interference constraints all affect the secrecy performance of the cognitive radio system. The outage probability decreases accordingly with an increase in the maximum tolerable interference level at the primary destination.
The outage probability of the Optimal relay Selection (OS) scheme is derived for a multi-relay system when either the Instantaneous Channel State Information (ICSI) or the Statistical Channel State Information (SCSI) is available. We have shown that the secrecy performance of OS with ICSI is better than with SCSI, and that OS improves the performance of the multi-relay system as the number of relays is increased.

Conclusion: The secrecy outage probability of a threshold-based DF underlay cognitive relay network is evaluated. Both interference and maximum transmit power constraints are considered at the secondary source and the secondary relay, and the relay can successfully decode the message only if it meets the pre-defined threshold. We have investigated the performance of the MRC and SC diversity schemes at the secondary eavesdropper and have shown that the system secrecy performance is better for SC than MRC, since MRC gives the eavesdropper better diversity performance. The system secrecy performance is significantly affected by the required secrecy rate, the pre-defined threshold, the interference constraints and the choice of diversity scheme (MRC/SC) at the eavesdropper. Finally, the outage probability of the OS scheme is derived for a multi-relay system when either ICSI or SCSI is available; OS with ICSI outperforms OS with SCSI, and OS improves the performance of the multi-relay system as the number of relays is increased.
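The secrecy outage event described above occurs whenever the secrecy capacity (the gap between the legitimate and eavesdropper channel capacities) falls below the required secrecy rate. A minimal Monte Carlo sketch of this idea for a single Rayleigh-fading hop is shown below; it is an illustrative toy model, not the paper's threshold-based DF relay system with interference constraints, and the unit-mean channel gains are an assumption:

```python
import math
import random

def secrecy_outage_prob(snr_db, target_rate, n_trials=20000, seed=1):
    """Monte Carlo estimate of secrecy outage probability for one
    Rayleigh-fading main link and one eavesdropper link (toy model)."""
    random.seed(seed)
    snr = 10 ** (snr_db / 10)
    outages = 0
    for _ in range(n_trials):
        g_d = random.expovariate(1.0)   # legitimate channel power gain
        g_e = random.expovariate(1.0)   # eavesdropper channel power gain
        # secrecy capacity: excess of main-link capacity over wiretap capacity
        c_s = max(0.0, math.log2(1 + g_d * snr) - math.log2(1 + g_e * snr))
        if c_s < target_rate:           # outage: secrecy rate not supported
            outages += 1
    return outages / n_trials

p_low = secrecy_outage_prob(10, 0.1)
p_high = secrecy_outage_prob(10, 1.0)
print(p_low, p_high)  # outage probability grows with the required secrecy rate
```

Consistent with the abstract, raising the required secrecy rate (or improving the eavesdropper's channel) drives the outage probability up.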
Combinatorial Double Auction Based Meta-scheduler for Medical Image Analysis Application in Grid Environment
Background: Medical image analysis applications have complex resource requirements, and scheduling them on grid resources is a complex task. It is necessary to develop a new model to improve the breast cancer screening process. The proposed novel meta-scheduler algorithm allocates the image analysis applications to local schedulers; each local scheduler submits the job to a grid node, which analyses the medical image and sends the result back to the meta-scheduler. Meta-schedulers are distinct from local schedulers, but both aim at resource allocation and management.

Objective: The main objective of the CDAM meta-scheduler is to maximize the number of jobs accepted.

Methods: In the beginning, the user sends jobs with deadlines to the global grid resource broker. Resource providers send information about the available resources connected in the network to the broker at fixed intervals of time, such as the valuation of each resource and the number of available free resources. CDAM requests the available resource details and the user jobs from the global grid resource broker and, after receiving this information, matches the jobs with the resources. CDAM sends jobs to the local schedulers, and each local scheduler schedules its jobs on the local grid site. The local grid site executes the jobs and sends the results back to CDAM. On successful completion, the job status and resource status are updated in the auction history database. CDAM collects the results from all local grid sites and returns them to the grid users.

Results: CDAM was simulated using a grid simulator. As the number of jobs increases, the percentage of jobs accepted decreases due to the scarcity of resources. CDAM provides 2% to 5% better results than the Fairshare meta-scheduling algorithm.
In the CDAM algorithm, the bid density value is generated from the user requirements and user history, and the ask value is generated from the resource details. Users with the most stringent deadlines generate the highest bid values, and the grid resources with the fastest processors generate the lowest ask values. The highest bid is assigned to the lowest ask, which means that the user with the tightest deadline is assigned to the grid resource with the fastest processor. The deadline represents a time by which the user requires the result; the user can define this deadline, and CDAM will try to find the fastest resource available in order to meet it. If the scheduler detects that the tasks cannot be completed before the deadline, it abandons the current resource, selects the next fastest resource, and retries until the completion of the application meets the deadline. CDAM provides 25% better results than the GridWay meta-scheduler, because the GridWay meta-scheduler allocates jobs to resources on a first-come, first-served basis.

Conclusion: The proposed CDAM model was validated through simulation and evaluated on the number of jobs accepted. The experimental results clearly show that the CDAM model accepts more jobs than a conventional meta-scheduler. We conclude that CDAM is a highly effective meta-scheduler and can be used in situations where jobs have combinatorial requirements.
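The core matching rule above (highest bid paired with lowest ask, i.e. tightest deadline paired with fastest processor) can be sketched as a simple sort-and-zip. This is only an illustration of that pairing step, not the full CDAM combinatorial auction; the job and resource fields (`deadline`, `speed_mips`) are assumed names:

```python
def match_bids_to_asks(jobs, resources):
    """Pair the highest-bidding job with the lowest-asking resource:
    tighter deadline -> higher bid; faster processor -> lower ask."""
    bids = sorted(jobs, key=lambda j: j["deadline"])          # highest bid first
    asks = sorted(resources, key=lambda r: -r["speed_mips"])  # lowest ask first
    return [(j["id"], r["id"]) for j, r in zip(bids, asks)]

jobs = [{"id": "j1", "deadline": 50}, {"id": "j2", "deadline": 10}]
resources = [{"id": "r1", "speed_mips": 800}, {"id": "r2", "speed_mips": 2000}]
pairs = match_bids_to_asks(jobs, resources)
print(pairs)  # j2 (tightest deadline) is matched to r2 (fastest processor)
```

In the real scheduler, a failed deadline check would trigger the fallback described above: drop the chosen resource and retry with the next fastest one.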
Resource Allocation in Cloud using Multi Bidding Model with User Centric Behavior Analysis
Authors: N. Vijayaraj and T. S. Murugan

Background: Numerous resource allocation and bidding schemes have emerged for on-demand provisioning of cloud services. However, accessing and offering cloud services based on reputation alone does not produce fair results in cloud computing, since cloud users look not only for efficient services but, above all, at the cost. We therefore introduce a bidding system that includes an efficient user-centric behavior analysis model to render cloud services and allocate resources at low cost.

Objective: The allocation of resources has not been flexible and dynamic for users in recent systems; this observation motivated the problem statement of the proposed work.

Methods: An online auction framework that provides a multi-bidding mechanism and uses user-centric behavioral analysis to produce efficient and reliable usage of cloud resources according to the user's choice.

Results: We implement Efficient Resource Allocation using a Multi Bidding Model with User Centric Behavior Analysis. The algorithm is implemented and the system is designed to provide better allocation of cloud resources, taking both bidding and user behavior into account.

Conclusion: The algorithm Efficient Resource Allocation using a Multi Bidding Model with User Centric Behavior Analysis is implemented, and the system is designed to provide better allocation of cloud resources that accounts for bidding and user behavior. The user bid data is trained so as to produce efficient resource utilization. Further work can be directed towards data analytics and the prediction of user behavior while allocating cloud resources.
Machine Learning Based Predictive Action on Categorical Non-Sequential Data
Authors: Pradeep S. and Jagadish S. Kallimani

Background: With the advent of data analysis and machine learning, there is a growing impetus for analyzing historic data and generating models from it. The data comes in numerous forms and shapes, with an abundance of challenges. The most sought-after form of data for analysis is numerical data; with the plethora of algorithms and tools available, such data is quite manageable. Another form of data is categorical in nature, which is subdivided into ordinal (ordered) and nominal (unordered) data. This data can be broadly classified as sequential and non-sequential; sequential data is easier to pre-process using existing algorithms.

Objective: This paper deals with the challenge of applying machine learning algorithms to categorical data of a non-sequential nature.

Methods: Upon implementing several data analysis algorithms on such data, we end up with a biased result, which makes it impossible to generate a reliable predictive model. In this paper, we address this problem by walking through a handful of techniques which, during our research, helped us in dealing with large categorical data of a non-sequential nature. In subsequent sections, we discuss possible implementable solutions and the shortfalls of these techniques.

Results: The methods are applied to sample datasets available in the public domain, and the results with respect to classification accuracy are satisfactory.

Conclusion: The best pre-processing technique we observed in our research is one-hot encoding, which breaks the categorical features down into binary columns that can be fed into an algorithm to predict the outcome. The example we took is not abstract but a real-time production services dataset with many complex variations of categorical features. Our future work includes creating a robust model on such data and deploying it into industry-standard applications.
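One-hot encoding, which the conclusion singles out as the most effective pre-processing step, maps each category of a nominal feature to its own binary column. A minimal dependency-free sketch of the idea (real pipelines would typically use a library encoder instead):

```python
def one_hot_encode(values):
    """Expand a non-sequential categorical feature into binary columns:
    each row has a 1 in the column of its category and 0 elsewhere."""
    categories = sorted(set(values))  # fix a stable column order
    encoded = [[1 if v == c else 0 for c in categories] for v in values]
    return encoded, categories

encoded, cats = one_hot_encode(["red", "green", "red", "blue"])
print(cats)     # ['blue', 'green', 'red']
print(encoded)  # each row contains exactly one 1
```

Because the columns carry no implied order, this avoids the spurious ordinal relationships that integer-coding a nominal feature would introduce.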
Aspect-Oriented System Coupling Metric and its Validation
Authors: Amandeep Kaur, Pritam S. Grover and Ashutosh Dixit

Background: Aspect-oriented programming promises to enhance the extensibility and reusability of code through the removal of tangled and crosscutting code. Determining the degree of coupling for Aspect-Oriented Systems (AOSs) would assist in the quantification of various software attributes and hence improve quality.

Objective: The research aims to present a novel Aspect-Oriented System Coupling Metric (COAO) that calculates the coupling for the complete aspect-oriented system as a whole, based on the properties of its elements and the relationships among them.

Methods: The process of defining a metric essentially requires clear, unambiguous definitions of the primary concepts of Aspect-Oriented Programming. First, therefore, novel definitions of basic concepts such as system, element, relation, module and attribute are specified with respect to Aspect-Oriented Programming. Subsequently, a metric for Aspect-Oriented System Coupling is proposed and validated theoretically against the Briand properties for coupling of software systems. Finally, the calculation of the proposed metric is illustrated on an exemplary aspect-oriented system.

Results: The findings reveal that the proposed coupling metric conforms to the five property-based software engineering measurement properties given by Briand et al. for coupling. This indicates that COAO is a valid metric for measuring coupling in aspect-oriented software systems.

Conclusion: The results of the validation, along with the supporting illustration, show that a single metric assessing coupling for the complete aspect-oriented software system is theoretically sound and also eases the calculation of the coupling of a software system.
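To make the "elements and relations" framing concrete, here is a hypothetical sketch of a system-wide coupling measure: the fraction of element-to-element relations that cross module boundaries. This is only an illustration of the general idea; the paper's COAO metric is defined over aspect-oriented concepts (aspects, advice, join points) and its actual formula may differ:

```python
def coupling_ratio(relations, modules):
    """Fraction of relations whose two elements live in different modules.
    A relation entirely inside one module contributes no coupling."""
    module_of = {e: m for m, elems in modules.items() for e in elems}
    cross = sum(1 for a, b in relations if module_of[a] != module_of[b])
    return cross / len(relations) if relations else 0.0

# Hypothetical system: one aspect and one class, three relations
modules = {"Aspect1": {"advice1"}, "Class1": {"m1", "m2"}}
relations = [("advice1", "m1"), ("m1", "m2"), ("advice1", "m2")]
print(coupling_ratio(relations, modules))  # 2 of the 3 relations cross modules
```

A whole-system ratio like this satisfies the intuitive boundary cases of Briand-style coupling properties: it is 0 when no relation crosses a module boundary and grows as more cross-module relations are added.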
Intuitionistic Level Set Segmentation for Medical Image Segmentation
Authors: Jyoti Arora and Meena Tushir

Introduction: Image segmentation is one of the basic practices that involves dividing an image into mutually exclusive partitions. Learning how to partition an image into different segments is considered one of the most critical and crucial steps in the area of medical image analysis.

Objective: The primary objective of the work is to design an integrated approach for automating the process of level set segmentation for medical images. This method helps to overcome the problem of manual initialization of parameters.

Methods: In the proposed method, the input image is simplified by intuitionistic fuzzification of the image. Segmentation is then done by an intuitionistic clustering technique incorporating local spatial information (S-IFCM). The controlling parameters of the level set method are automated by S-IFCM for defining anatomical boundaries.

Results: Experiments were carried out on MRI and CT-scan images of the brain and liver. The results are compared with existing fuzzy level set segmentation and spatial fuzzy level set segmentation using MSE, PSNR and segmentation accuracy. Qualitatively, the results achieved with the proposed segmentation technique show a clearer definition of boundaries. The attained PSNR and MSE values of the proposed algorithm prove its robustness. The segmentation accuracy calculated for the segmentation result of the T-1 weighted axial slice of the MRI image is 0.909.

Conclusion: The proposed method shows good accuracy for the segmentation of medical images. It is a good substitute for the segmentation of different clinical images of different modalities and proves to give better results than the fuzzy technique.
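The intuitionistic fuzzification step mentioned in the Methods can be illustrated with a common construction: assign each pixel a membership degree, derive a non-membership degree via a Sugeno-type fuzzy complement, and keep the remainder as a hesitation degree. This is a generic sketch of intuitionistic fuzzification, not necessarily the exact construction used in S-IFCM, and the parameter `lam` is an assumed tuning knob:

```python
def intuitionistic_fuzzify(intensities, lam=2.0):
    """Map pixel intensities to intuitionistic fuzzy triples
    (membership mu, non-membership nu, hesitation pi = 1 - mu - nu)."""
    lo, hi = min(intensities), max(intensities)
    out = []
    for x in intensities:
        mu = (x - lo) / (hi - lo) if hi > lo else 0.0  # normalised intensity
        nu = (1 - mu) / (1 + lam * mu)                 # Sugeno complement
        pi = 1 - mu - nu                               # hesitation degree
        out.append((mu, nu, pi))
    return out

triples = intuitionistic_fuzzify([0, 64, 128, 255])
print(triples[0])  # darkest pixel: mu = 0, nu = 1, pi = 0
```

The hesitation degree `pi` is what distinguishes intuitionistic fuzzy sets from ordinary fuzzy sets: it models the uncertainty left after committing to membership and non-membership, which is useful for the ambiguous boundary pixels of medical images.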
Deep Learning Based Sentiment Classification on User-Generated Big Data
Authors: Akshi Kumar and Arunima Jaiswal

Background: Sentiment analysis of big data such as Twitter primarily aids organizations with the potential of surveying public opinions or emotions about the products and events associated with them.

Objective: In this paper, we propose the application of a deep learning architecture, namely the Convolutional Neural Network (CNN). The proposed model is implemented on benchmark Twitter corpora (SemEval 2016 and SemEval 2017) and empirically analyzed against other baseline supervised soft computing techniques. The pragmatics of the work include modelling the behavior of a trained Convolutional Neural Network on well-known Twitter datasets for sentiment classification. The performance efficacy of the proposed model is compared and contrasted with existing soft computing techniques, namely Naïve Bayes, Support Vector Machines, k-Nearest Neighbor, Multilayer Perceptron and Decision Tree, using precision, accuracy, recall and F-measure as key performance indicators.

Methods: The majority of studies emphasize feature mining using lexical or syntactic feature extraction, often unequivocally articulated through words, emoticons and exclamation marks. Subsequently, CNN, a deep-learning-based soft computing technique, is used to improve the sentiment classifier's performance.

Results: The empirical analysis validates that the proposed implementation of the CNN model outperforms the baseline supervised learning algorithms, with an accuracy of around 87% to 88%.

Conclusion: Statistical analysis validates that the proposed CNN model outperforms the existing techniques and can thus enhance the viability and coherency of sentiment classification.
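The core operation of a CNN for text is a filter sliding over the word-embedding sequence, followed by max-over-time pooling. A dependency-free toy sketch of that single layer is shown below; it illustrates the mechanism only, not the paper's trained multi-layer model, and the embeddings and filter weights are made-up values:

```python
def conv1d_max_pool(embeddings, kernel):
    """Slide a filter of width len(kernel) over a sequence of word
    embeddings, apply ReLU, then max-pool the feature map to one value."""
    width = len(kernel)
    feature_map = []
    for i in range(len(embeddings) - width + 1):
        window = embeddings[i:i + width]
        # dot product of the flattened window with the flattened kernel
        act = sum(w * k for row, krow in zip(window, kernel)
                  for w, k in zip(row, krow))
        feature_map.append(max(0.0, act))   # ReLU activation
    return max(feature_map)                 # max-over-time pooling

# 4 words with 2-dimensional toy embeddings; one filter of width 2
emb = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]]
kernel = [[1.0, 0.0], [0.0, 1.0]]
print(conv1d_max_pool(emb, kernel))  # strongest bigram response in the tweet
```

In a full sentiment classifier, many such filters of several widths produce a feature vector that is fed to a dense softmax layer; the pooling makes the feature invariant to where in the tweet the salient n-gram occurs.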
Comparative Performance Evaluation of Keyword and Semantic Search Engines using Different Query Set Categories
Authors: Poonam Jatwani, Pradeep Tomar and Vandana Dhingra

Background: Keyword search engines are unable to understand the intention of the user; as a result, they produce an enormous number of results, leaving the user to distinguish between relevant and non-relevant answers to their queries. This has led to a rising requirement to study the search capabilities of different search engines. In this research work, an experimental evaluation based on different metrics is done to distinguish search engines by the type of query they can handle.

Methods: To check semantics-handling performance, four query sets, each consisting of 20 queries from the agriculture domain, were chosen: single-term queries, two-term queries, three-term queries and NLP queries. Queries from each query set were submitted to the Google, DuckDuckGo and Bing search engines. The effectiveness of the search engines for the different kinds of queries is evaluated using graded relevance measures, namely Cumulative Gain, Discounted Cumulative Gain, Ideal Discounted Cumulative Gain and Normalized Discounted Cumulative Gain, in addition to the precision metric.

Results: Our experimental results demonstrate that for single-term queries, Google retrieves more relevant documents and performs better, while DuckDuckGo retrieves more relevant documents for NLP queries.

Conclusion: The analysis done in this research shows that, for NLP queries, DuckDuckGo understands human intention and retrieves more relevant results than the other search engines.
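The graded relevance measures listed in the Methods build on one another: DCG discounts each result's graded relevance by the log of its rank, IDCG is the DCG of the ideal (sorted) ranking, and NDCG is their ratio. A short sketch with the standard formulation (the relevance grades below are hypothetical judgments, not the paper's data):

```python
import math

def dcg(relevances):
    """Discounted Cumulative Gain of a ranked list of graded relevances:
    rank i (0-based) is discounted by log2(i + 2)."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(relevances):
    """Normalised DCG: DCG divided by the ideal DCG (IDCG), so that a
    perfectly ordered ranking scores exactly 1.0."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

ranking = [3, 2, 3, 0, 1]   # hypothetical graded judgments for 5 results
print(round(ndcg(ranking), 3))
```

Unlike precision, which treats relevance as binary, NDCG rewards a search engine both for retrieving highly relevant documents and for placing them near the top, which is why the study uses it alongside precision.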