Recent Advances in Computer Science and Communications - Volume 14, Issue 5, 2021
-
-
Comparative Study of Cryptography for Cloud Computing for Data Security
Authors: Priya Mathur, Amit K. Gupta and Prateek Vashishtha
Background: Cloud computing is an emerging technique by which anyone can access applications as utilities over the internet. Cloud computing is a technology that comprises the characteristics of distributed computing, grid computing, and ubiquitous computing. Cloud computing allows everyone to create, configure and customize business applications online. Cloud computing techniques therefore need security for the information communicated between the sending and receiving entities. Objective: The secure data storage disadvantage of cloud computing can be resolved to some extent by implementing cryptographic algorithms while storing and accessing data from cloud servers. Methods: In this paper we compare four recently implemented cryptographic algorithms: Modified RSA (SRNN), the Elliptic Curve Cryptography algorithm, the Client Side Encryption Technique and the Hybrid Encryption Technique. Conclusion: The Client Side Encryption Technique and the Hybrid Encryption Technique are better than Modified RSA and Elliptic Curve Cryptography. The Client Side Encryption Technique has the advantage that data is encrypted prior to being uploaded to the cloud, i.e., encryption at the client side, which provides an additional layer of security for data on the cloud. The Hybrid Encryption Technique, on the other hand, has the advantage that it combines the fast processing time of symmetric-key cryptography with the key-length robustness of asymmetric-key cryptography.
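The abstract does not reproduce the paper's algorithms, but the hybrid idea it describes — a fast symmetric cipher for the bulk data, with the symmetric key wrapped by an asymmetric cipher — can be sketched generically. The sketch below is illustrative only: the tiny RSA parameters, the SHA-256-based keystream, and the byte-wise key wrapping are all toy assumptions chosen for readability, never for real security.

```python
import hashlib, os

# Toy RSA key pair (tiny primes, illustration only -- never use in practice)
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))  # modular inverse of e (Python 3.8+)

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream from the symmetric key via SHA-256 blocks
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def hybrid_encrypt(msg: bytes):
    sym_key = os.urandom(16)  # fast symmetric layer handles the payload
    ct = bytes(a ^ b for a, b in zip(msg, keystream(sym_key, len(msg))))
    # Wrap each key byte with toy RSA (byte-wise only because n is tiny)
    wrapped = [pow(b, e, n) for b in sym_key]
    return ct, wrapped

def hybrid_decrypt(ct: bytes, wrapped):
    sym_key = bytes(pow(c, d, n) for c in wrapped)
    return bytes(a ^ b for a, b in zip(ct, keystream(sym_key, len(ct))))

ct, wrapped = hybrid_encrypt(b"cloud data")
assert hybrid_decrypt(ct, wrapped) == b"cloud data"
```

The design choice mirrors the conclusion's point: the symmetric layer does the heavy lifting cheaply, while the asymmetric layer only protects the short key.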
-
-
-
Compatibility Study of Installation of an Operating System with Boot Modes and Partitioning Styles Configuration
Authors: Sandeep Tuli and Nitin Jain
Introduction: This manuscript explicates the mutual compatibility of boot modes in a computer system and the partitioning styles of a Hard Disk Drive. Most of us are familiar with these terms and know a little about them. Methods: This manuscript contains ample information about the boot modes of a computer system and the partitioning styles of a Hard Disk Drive (HDD), and their related configuration, through which we get to know about their configuration and durability. It also contains some practically verified case studies of problems that occur due to wrong configuration of boot modes and partitioning styles; though there are many more, the most common ones are discussed along with their solutions. Results: In order to achieve compatibility, it may be necessary to convert the primary HDD to either the GPT or the MBR partitioning scheme. It should be noted that this interconversion wipes the data on the HDD, so the data on the hard drive must be cautiously backed up before any conversion process. Discussion: This is helpful when the system is equipped with the latest configuration, i.e., UEFI: if there is a need to install an older operating system that does not support UEFI boot mode (for example, Windows XP), then CSM can help. In addition, some graphics cards (for example, the GTX 680) do not support UEFI and hence require CSM to boot. This means that CSM can act as a bridge for all hardware (in an updated system with a new configuration) that can run only through a legacy BIOS configuration. Conclusion: The information contained here has been practically verified and is helpful in coping with newer technology trends, which offer more features along with backward compatibility.
-
-
-
Discovering Strong Communities in Social Networks Through Chromatic Correlation Clustering
Authors: Jaishri Gothania and Shashi K. Rathore
Background: Complex systems in biochemistry, neuroscience, physics, engineering and social science are primarily studied and modeled through network structures. The connectivity patterns within these interaction networks are discovered through clustering-like techniques. Community discovery is a related problem of finding patterns in networks. Objectives: Existing algorithms either try to find a few large communities in networks, or try to partition a network into small strongly connected communities; the latter is time-consuming and parameter-dependent. Methods/Results: This paper proposes a chromatic correlation clustering method to discover small, strong communities in an interaction network in a heuristic manner, yielding low time complexity and a parameter-free method. A comparison with other methods over synthetic data is carried out. Conclusion: Interaction networks are very large and sparse, containing a few small dense communities that can be discovered only through methods specifically designed for the purpose.
-
-
-
Distributed Ledger System for Stock Market Using Block Chain
Authors: Era Johri, Bhakti Kantariya, Rachana Gandhi and Unnati Mistry
Background: Over time, the lacunae of traditional stock market systems have been bridged by BlockChain Technology. Use cases designed with BlockChain Technology are well suited to dealing with problems in the financial sector. The sharpest criticism is related to stock markets, where the centralization of the clearing house increases dependency on the central system. Centralization of clearing houses increases risk and delays responses to users of the system. Traditionally, stock market players need to endure the complex multi-layer process of pre-trading, trading and post-trading settlement. The processes in these systems were time-consuming and cost-inefficient in terms of resources utilized, due to the role of intermediaries. Objective: The objective of this paper is to propose a system which will help address the issues of processing, traceability, transparency, and availability of stocks using BlockChain Technology. Methods: We offer a decentralized system for stock market users, and as a result the intermediaries become expendable. The rules and regulations are executed within a smart contract for every trade transaction, making each trade self-regulating. Results: This paper discusses various solutions to the problems that arise with current centralized or decentralized systems. Conclusion: We discuss aspects such as security and transparency of the proposed system while concluding the paper.
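The tamper evidence that the abstract attributes to a distributed ledger rests on hash chaining: each block stores the hash of its predecessor, so altering any recorded trade invalidates every later block. The sketch below is a minimal, generic hash-chained ledger, not the paper's system; the trade-record fields and the fixed timestamp are assumptions made for reproducibility.

```python
import hashlib, json

class Block:
    def __init__(self, index, transactions, prev_hash):
        self.index = index
        self.transactions = transactions   # e.g. stock trade records
        self.prev_hash = prev_hash
        self.timestamp = 0  # fixed here for reproducibility; a real chain uses wall-clock time
        self.hash = self.compute_hash()

    def compute_hash(self):
        # Canonical JSON so the same contents always hash identically
        payload = json.dumps({"i": self.index, "tx": self.transactions,
                              "prev": self.prev_hash, "ts": self.timestamp},
                             sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

class Ledger:
    def __init__(self):
        self.chain = [Block(0, [], "0" * 64)]          # genesis block

    def add_block(self, transactions):
        prev = self.chain[-1]
        self.chain.append(Block(prev.index + 1, transactions, prev.hash))

    def is_valid(self):
        # Each block must reference its predecessor's hash, and its own
        # contents must still re-hash to the stored hash (tamper evidence).
        for prev, cur in zip(self.chain, self.chain[1:]):
            if cur.prev_hash != prev.hash or cur.hash != cur.compute_hash():
                return False
        return True

ledger = Ledger()
ledger.add_block([{"buy": "ACME", "qty": 10, "price": 101.5}])
assert ledger.is_valid()
ledger.chain[1].transactions[0]["qty"] = 1000   # tampering breaks validity
assert not ledger.is_valid()
```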
-
-
-
An Efficient Speculative Task Detection Algorithm for MapReduce Schedulers
Authors: Utsav Upadhyay and Geeta Sikka
Background: The MapReduce programming model was designed for the Google File System to efficiently process large distributed datasets. The open-source implementation of the Google project is called Apache Hadoop. The Hadoop architecture comprises the Hadoop Distributed File System (HDFS) and Hadoop MapReduce. HDFS provides support to Hadoop for effectively managing large datasets over the cluster, and MapReduce helps in efficiently processing large-scale distributed datasets. MapReduce incorporates strategies to re-execute speculative tasks on other nodes in order to finish computation quickly, enhancing the overall Quality of Service (QoS). Several mechanisms have been suggested over the default Hadoop scheduler, such as Longest Approximate Time to End (LATE), the Self-Adaptive MapReduce scheduler (SAMR) and the Enhanced Self-Adaptive MapReduce scheduler (ESAMR), to improve speculative re-execution of tasks over the cluster. Objective: The aim of this research is to develop an efficient speculative task detection mechanism to improve the overall QoS offered over a Hadoop cluster. Methods: Our studies suggest the importance of keeping a regular track of each node's performance in order to re-execute speculative tasks more efficiently. Results: We reduced the detection time of speculative tasks (~15%) and improved the accuracy of correct speculative task detection (~10%) as compared to existing mechanisms. Conclusion: This paper presents an efficient speculative task detection algorithm for MapReduce schedulers to improve the QoS offered by Hadoop clusters.
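The paper's own detection algorithm is not given in the abstract, but the LATE family it builds on follows a simple pattern: estimate each running task's time to end from its progress score and progress rate, and speculatively re-execute the outliers. The sketch below shows that heuristic in the abstract; the threshold factor and the task progress numbers are hypothetical.

```python
def speculative_candidates(tasks, threshold=1.5):
    """LATE-style heuristic sketch: estimate each running task's time to
    end as (1 - progress) / progress_rate, and flag tasks whose estimate
    is far above the average as candidates for speculative re-execution.

    tasks maps task name -> (progress fraction, elapsed seconds)."""
    estimates = {}
    for name, (progress, elapsed) in tasks.items():
        rate = progress / elapsed            # progress per second so far
        estimates[name] = (1.0 - progress) / rate
    avg = sum(estimates.values()) / len(estimates)
    return sorted(n for n, t in estimates.items() if t > threshold * avg)

running = {"map_1": (0.9, 10), "map_2": (0.85, 10), "map_3": (0.2, 10)}
print(speculative_candidates(running))   # map_3 is the straggler
```

Schedulers like SAMR refine this by weighting the per-node historical performance that the abstract emphasizes tracking.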
-
-
-
Minimized False Alarm Predictive Threshold for Cloud Service Providers
Authors: Amandeep S. Arora, Linesh Raja and Barkha Bahl
Aim: Security concerns are a strong hindrance that discourages organisations from moving toward the cloud despite its huge benefits. Distributed denial-of-service attacks operated via distributed systems compromise the availability of cloud services, leaving limited resources for authentic users and causing high expenses for cloud service users and business owners. Objective: Techniques to identify distributed denial-of-service attacks with minimized false positives are highly required to ensure the availability of cloud services to genuine users. The scarcity of solutions that can detect denial-of-service attacks with minimum false positives and reduced detection delay has motivated us to compare classification algorithms for the detection of distributed denial-of-service attacks with a minimum false positive rate. Methods: Classification of incoming requests and outgoing responses using machine learning algorithms is a quite effective means of detection and prevention. We designed a performance-tuned support vector machine algorithm with a k-fold cross-validation strategy. Results: The k-fold cross-validation strategy detects denial-of-service packets with 99.89% accuracy. Conclusion: This system ensures economic sustainability for business owners and mitigates resource exhaustion for authenticated and valid cloud users.
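The abstract's cross-validation strategy is not specified beyond its name, so the sketch below shows only the standard k-fold index split such a tuning loop would wrap around any classifier (the SVM itself is not reproduced): the samples are partitioned into k folds, and each fold serves once as validation data while the rest trains the model.

```python
def k_fold_indices(n_samples, k):
    """Split sample indices into k roughly equal folds and return the k
    (train_indices, validation_indices) pairs; each fold is used once
    for validation and k-1 times for training."""
    indices = list(range(n_samples))
    # Distribute any remainder over the first n_samples % k folds
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(indices[start:start + size])
        start += size
    return [([i for f in folds[:j] + folds[j + 1:] for i in f], folds[j])
            for j in range(k)]

splits = k_fold_indices(10, 5)
assert len(splits) == 5
assert all(len(val) == 2 and len(train) == 8 for train, val in splits)
```

Averaging the detection accuracy over the k validation folds gives a less optimistic estimate than a single train/test split, which matters when tuning for a minimum false-positive rate.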
-
-
-
A Review on Different Biometric Template Protection Methods
Authors: Arpita Sarkar and Binod K. Singh
Biometrics is a universally used automatic identification of persons based on their behavioral and biological traits. Biometric authentication plays a significant role in identity management systems. The protection of biometric authentication systems needs to be discussed, as there are still open issues related to the integrity and public acceptance of biometric systems. The feature extraction module of a biometric authentication system scans the biometric information during enrolment to determine a set of distinctive features. This set of distinctive features is known as a biometric template, and it is effective in distinguishing between different users. These templates are normally stored at enrolment time in a database, indexed by the user's identification data. Protecting biometric templates from attackers is an essential issue, since a compromised template cannot be canceled and reissued like a password or token. Template security is not a trivial task because of the variations present within the extracted biometric features of users. This paper surveys different existing approaches in the literature for designing biometric template protection schemes, along with their strengths and limitations. Some prospective directions in designing template protection schemes are also elaborated in this paper.
-
-
-
Android Quiz Application Based on Face Recognition
Background: Technology in the field of education is permanently developing and growing, and this growth will repeatedly offer new and unusual advances. The significant objective of this project is to encourage students to participate in learning and enhance their insight abilities. Methods: An Android application provides a new technique for conducting a test or quiz using smart devices. This project implements a mobile quiz application based on face recognition as an authentication process to ascertain students' identity. The authentication process was implemented in two steps: face detection using the Mobile Vision APIs, and face recognition using the Speeded-Up Robust Features (SURF) algorithm. An image classification and retrieval process is applied using the SURF algorithm to extract feature vectors, which are then compared with those of all stored images in the server database; the matching process is applied based on the RANSAC algorithm. A Wi-Fi ad hoc network is established using the JmDNS Java library to enable students to access the application. For training purposes, a dataset containing 10 persons, with 5 images per person, is used. Results: A quiz environment was arranged in class with seven examinees, each of whom separately accessed the quiz application with questions randomly chosen by the server. The achieved recognition rate was 85%, with a total average computation time of 8.816 s per user login. Conclusion: This quiz application decreases manual intervention and brings adaptability to users with ease of use.
-
-
-
Technique for Optimization of Association Rule Mining by Utilizing Genetic Algorithm
Authors: Darshana H. Patel, Saurabh Shah and Avani Vasant
Background: Due to the advancement in the usage of the Internet and pattern discovery from the huge amount of data flowing through it, the personal information of an individual or organization can be traced. Hence, protecting private information is becoming extremely crucial, which can be achieved through privacy preserving data mining. Objective: The main objective is to preserve the privacy of data and to maintain the balance between privacy and accuracy by applying a privacy preserving technique and optimization, respectively. Methodology: Class association rules are generated using an associative classification technique, namely class-based association, due to its simplicity, which serves the purpose of classifying the data. Furthermore, the privacy of the data should be maintained, and hence privacy-preserved class association rules are produced by applying a privacy preserving technique, namely anonymization. Optimization techniques, specifically a genetic algorithm as well as a neural network, are then applied to maximize accuracy. Results: Four real datasets have been utilized for the experiments. The Classification Based on Association (CBA) algorithm of the associative classification technique was implemented and provides good accuracy compared with other techniques, with support set to 0.2 and confidence to 0.6. A privacy preserving technique, namely k-anonymization, was implemented to preserve privacy, but it has been observed that as privacy (the k-level) increases, accuracy (percentage) decreases due to the data transformation. Conclusion: Optimization techniques, namely a Genetic Algorithm (GA) and a Neural Network (NN), have been implemented to increase accuracy (by approximately 7-8%). Furthermore, on comparing GA and NN with respect to time, GA performs better.
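The support and confidence thresholds the abstract mentions (0.2 and 0.6) are the standard association-rule measures; the sketch below computes them for a rule over a toy transaction set, purely as background for how CBA-style rules are filtered. The grocery items are hypothetical example data.

```python
def support(transactions, itemset):
    """Fraction of transactions containing every item in the itemset."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(transactions, antecedent, consequent):
    """P(consequent | antecedent) estimated from the transactions."""
    joint = support(transactions, set(antecedent) | set(consequent))
    return joint / support(transactions, antecedent)

data = [{"milk", "bread"}, {"milk", "bread", "eggs"},
        {"bread"}, {"milk", "eggs"}]
# Rule {milk} -> {bread}: both items appear in 2 of 4 transactions,
# milk alone in 3 of 4, so confidence is 2/3.
assert support(data, {"milk", "bread"}) == 0.5
assert abs(confidence(data, {"milk"}, {"bread"}) - 2 / 3) < 1e-9
```

With the paper's thresholds, this rule would survive pruning (support 0.5 ≥ 0.2, confidence 0.67 ≥ 0.6); a GA-based optimizer can then search the rule set for combinations that best recover accuracy lost to anonymization.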
-
-
-
Performance Evaluation of Neural Network for Human Classification Using Blob Dataset
Authors: Monika Mehta and Madhulika Bhadauria
Background: Human classification in public places is an emerging area in the applications of Computational Intelligence. Therefore, modeling an optimal neural network architecture is required to classify humans. Methods: In this work, a blob dataset has been used to train the neural network. This dataset consists of 2408 features of a human blob. Results: Further analysis of this blob dataset has been done on the basis of various characteristic parameters for affirmation of actual training. During training and testing of this dataset, it was observed that with fewer than 10 nodes at the hidden layer the neural network underfits, with more than 10 it overfits, and it works effectively with exactly 10 nodes at the hidden layer. Conclusion: From the experimental work performed in this study, an optimal neural network has been obtained to classify humans using the blob dataset.
-
-
-
Reverse Nearest Neighbors Query of Moving Objects Based on HRNNTree
Authors: Miao Wang, Xiaotong Wang, Xiaodong Liu, Songyang Li and Song Li
Background: The reverse nearest neighbors query is an important means of solving many practical applications based on the concept of influence sets. It is widely used in various fields such as data mining, decision support, resource allocation, knowledge discovery, data streams, bioinformatics and so on. Objective: This work aims to improve the time efficiency of reverse nearest neighbors queries over moving objects at large data scales. Methods: A new spatio-temporal index, the HRNN-tree, is developed. Then an algorithm for reverse nearest neighbors queries based on the HRNN-tree is developed. Results: Our algorithm is superior to the existing method in execution time. The performance of our algorithm is excellent, especially for queries with a large data scale and small values of k. Conclusion: This study devises a new spatio-temporal index, the HRNN-tree, and then develops an algorithm for reverse nearest neighbor search over moving objects based on this index. This algorithm prevents query performance from deteriorating rapidly as the data space grows and performs better for large data spaces. This work will help enrich and improve the abilities of intelligent analysis, mobile computing and distance-based quantitative querying for spatio-temporal databases.
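The HRNN-tree itself is the paper's contribution and is not reproduced here, but the query it accelerates is easy to state: the reverse nearest neighbors of a query point are exactly those points whose own nearest neighbor is the query point. The brute-force baseline below (static points, k = 1, Euclidean distance) is the definition an index like the HRNN-tree exists to avoid computing at scale.

```python
import math

def nearest(points, idx):
    """Index of the nearest neighbor of points[idx] among the other points."""
    return min((j for j in range(len(points)) if j != idx),
               key=lambda j: math.dist(points[idx], points[j]))

def reverse_nn(points, q_idx):
    """Brute-force RNN: every point whose nearest neighbor is points[q_idx].
    Note the asymmetry: q's nearest neighbor need not be in this set."""
    return [j for j in range(len(points))
            if j != q_idx and nearest(points, j) == q_idx]

pts = [(0, 0), (1, 0), (5, 5), (5, 6)]
print(reverse_nn(pts, 0))   # only (1, 0) has (0, 0) as its nearest neighbor
```

Each call to `reverse_nn` costs O(n^2) distance computations, which is why query performance "deteriorates rapidly as the data space grows" without an index.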
-
-
-
Utility-Based SK-Clustering Algorithm for Privacy Preservation of Anonymized Data in Healthcare
Authors: G. Shobana and S. Shankar
Background: The increasing need for various data publishers to release or share healthcare datasets has created a threat to the privacy and confidentiality of Electronic Medical Records. The main goal is to share useful information, thereby maximizing utility, while ensuring that sensitive information is not disclosed. There always exists a utility-privacy tradeoff that needs to be handled properly for researchers to learn the statistical properties of the datasets. Objective: The objective of this research article is to introduce a novel SK-Clustering algorithm that overcomes identity disclosure, attribute disclosure and similarity attacks. The algorithm is evaluated using metrics such as the discernability measure and relative error so as to show its performance compared with other clustering algorithms. Methodology: The SK-Clustering algorithm flexibly adjusts the level of protection for high utility. The size of the clusters is also minimized dynamically based on the protection required, and extra tuples are added accordingly. This drastically reduces information loss, thereby increasing utility. Results: For a k-value of 50, the discernability measure of the SK algorithm is 65,000, whereas the Mondrian algorithm exhibits a discernability measure of 70,000 and the Anatomy algorithm one of 150,000. Similarly, the relative error of our algorithm is less than 10% for a tuple count of 35,000 when compared with other k-anonymity algorithms. Conclusion: The proposed algorithm performs more competently in terms of a minimal discernability measure as well as relative error, thereby demonstrating higher data utility compared with traditionally available algorithms.
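The discernability measure used to evaluate the algorithms can be sketched in one common form: each record is penalized by the size of the equivalence class it was generalized into, so the metric is the sum of squared class sizes, and smaller values mean less information loss. The class labels below are hypothetical illustration data, not the paper's dataset.

```python
from collections import Counter

def discernability(equivalence_class_ids):
    """Discernability metric (one common form): every record pays a penalty
    equal to the size of its equivalence class, giving DM = sum of |E|^2
    over all equivalence classes E. Lower is better for utility."""
    return sum(size ** 2 for size in Counter(equivalence_class_ids).values())

# 6 records anonymized into equivalence classes of sizes 3, 2 and 1
classes = ["a", "a", "a", "b", "b", "c"]
assert discernability(classes) == 3 ** 2 + 2 ** 2 + 1 ** 2   # = 14
```

This makes the tradeoff in the abstract concrete: raising k forces larger classes, which inflates the squared terms, so a lower DM at the same k (65,000 vs. 70,000 vs. 150,000) indicates tighter clustering and higher utility.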
-
-
-
Protein Classification Using Machine Learning and Statistical Techniques
Authors: Chhote L. P. Gupta, Anand Bihari and Sudhakar Tripathi
Background: In the recent era, predicting the enzyme class of an unknown protein is one of the challenging tasks in bioinformatics. The number of known proteins increases day by day, which causes difficulties in clinical verification and classification; as a result, the prediction of enzyme class gives a new opportunity to bioinformatics scholars. Machine learning classification techniques help in protein classification and prediction, but it is imperative to know which classification technique is best suited to protein classification. This study used human protein data extracted from the UniProtKB databank: a total of 4368 protein records with 45 identified features were used for the experimental analysis. Objective: The prime objective of this article is to find an appropriate classification technique to classify the reviewed as well as un-reviewed human enzyme classes of protein data, and to find the significance of different features in protein classification and prediction. Methods: In this article, ten significant classification techniques, namely CRT, QUEST, CHAID, C5.0, ANN, SVM, Bayesian, Random Forest, XgBoost, and CatBoost, have been used to classify the data and discover the importance of features. To validate the results of the different classification techniques, accuracy, precision, recall, F-measure, sensitivity, specificity, MCC, ROC, and AUROC were used. All experiments were done with the help of SPSS Clementine and Python. Results: The classification techniques discussed above give different results, and it was found that the data are imbalanced for classes C4, C5, and C6. All of the classification techniques give acceptable accuracy (above 60%) for these classes, but their precision is very low or negligible. The experimental results highlight that Random Forest gives the highest accuracy as well as AUROC among all, i.e., 96.84% and 0.945, respectively, and also has high precision and recall values. Conclusion: The experiments conducted and analyzed in this article highlight that the Random Forest classification technique can be used for human enzyme protein classification and prediction.
-
-
-
A Concept of Captcha Based Dynamic Password
Authors: Md. A. Haque and Tauseef Ahmad
Background: Conventional text passwords are by far the main means of authentication, and they will continue to be popular as they are easy to deploy and to use. However, they have been largely exposed to different kinds of attacks, such as guessing, phishing and key-logger attacks. Objective: It is effective to use both a CAPTCHA and a text password in user authentication to create an additional layer of security over passwords. The captcha and the password work side by side and independently in such captcha-assisted systems: the captcha filters out suspicious programs from human beings, and the password identifies the legitimate user among the human beings. Methods: In this paper, we suggest a dynamic password scheme combining the traditional text password with a captcha. User authentication is verified by different passwords at different login sessions, based on the captcha presented. Results: The suggested method does not replace the conventional password process but rather modifies it; therefore, users' current sign-in experience is largely preserved. It can be implemented in software alone, increasing the potential for large-scale adoption on the Internet. Conclusion: The scheme is easy to implement and will be useful for improving security to a great extent.
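The abstract does not spell out how the per-session password is derived, so the sketch below shows one plausible construction, not the paper's scheme: the server and user both derive a session password by keying an HMAC with the stored password over the captcha text shown at login, so the transmitted secret changes every session while the base password never travels over the wire. The sample passwords and captcha strings are hypothetical.

```python
import hashlib, hmac

def session_password(base_password: str, captcha_text: str) -> str:
    """Derive a per-session password from the stored base password and the
    captcha presented at login. A key-logger capturing one session's value
    learns nothing useful for the next session's captcha."""
    mac = hmac.new(base_password.encode(), captcha_text.encode(),
                   hashlib.sha256)
    return mac.hexdigest()[:12]   # truncated for typability

p1 = session_password("s3cret", "XK4P9")
p2 = session_password("s3cret", "QW7B2")
assert p1 != p2                                    # changes per captcha
assert p1 == session_password("s3cret", "XK4P9")   # server can re-derive it
```

The determinism in the last line is the point: the server, knowing the base password and the captcha it issued, can verify the session password without any extra round trip.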
-
-
-
Distance Aware VM Allocation Process to Minimize Energy Consumption in Cloud Computing
Authors: Gurpreet Singh, Manish Mahajan and Rajni Mohana
Background: Cloud computing is considered an on-demand service, with resources and applications provided from data centers on a pay-per-use basis. To allocate resources appropriately and satisfy user needs, an effective and reliable resource allocation method is required. Because of increased user demand, the allocation of resources has become a complex and challenging task: when a physical machine is overloaded, Virtual Machines share its load by utilizing the resources of other physical machines. Previous studies fall short on energy consumption and time management when Virtual Machines on different servers are kept in a turned-on state. Aim and Objective: The main aim of this research work is to propose an effective resource allocation scheme for allocating Virtual Machines from an ad hoc sub-server. Methods: The execution of the research has been carried out in two sections: initially, the Virtual Machines and Physical Machines are located with the server, and subsequently the allocation is cross-validated. The Modified Best Fit Decreasing algorithm is used for sorting Virtual Machines, and Multi-Machine Job Scheduling is used for placing jobs on an appropriate host. Results and Conclusion: An Artificial Neural Network, used as a classifier, allocates jobs to the hosts. Measures, viz. Service Level Agreement violation and energy consumption, are considered, and fruitful results have been obtained, with a 37.7% reduction in energy consumption and a 15% improvement in Service Level Agreement violation.
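The paper's Modified Best Fit Decreasing variant is not detailed in the abstract; the sketch below shows the classic Best Fit Decreasing heuristic it modifies, treating each VM as a single resource demand and each host as a fixed capacity (both numbers are hypothetical). Packing VMs tightly onto few hosts is what lets idle hosts be powered down, which is the energy-saving lever the abstract targets.

```python
def best_fit_decreasing(vm_demands, host_capacity):
    """Best Fit Decreasing sketch: sort VMs by demand (largest first) and
    place each on the already-open host with the least remaining capacity
    that still fits it, opening a new host only when none fits."""
    hosts = []       # remaining capacity of each open host
    placement = []   # (demand, host index) pairs
    for demand in sorted(vm_demands, reverse=True):
        fitting = [i for i, free in enumerate(hosts) if free >= demand]
        if fitting:
            i = min(fitting, key=lambda i: hosts[i])   # tightest fit
        else:
            hosts.append(host_capacity)                # power on a new host
            i = len(hosts) - 1
        hosts[i] -= demand
        placement.append((demand, i))
    return placement, hosts

placement, hosts = best_fit_decreasing([5, 4, 3, 2, 2], host_capacity=8)
print(len(hosts))   # number of hosts actually opened
```

Here the 16 units of demand pack into two 8-unit hosts exactly; a naive first-come ordering can open an extra host, which is the gap the decreasing sort closes.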
-
-
-
Lung Cancer Prediction Using Random Forest
Authors: A. Rajini and M.A. Jabbar
Background: In recent years, lung cancer has become a common cancer across the globe. For the early prediction of lung cancer, medical practitioners and researchers require an efficient predictive model, which will reduce the number of deaths. This paper proposes a lung cancer prediction model using the Random Forest classifier, which aims at analyzing symptoms (gender, age, air pollution, weight loss, etc.). Objective: This work addresses the problem of classifying lung cancer data using the Random Forest algorithm. Random Forest is among the most accurate learning algorithms, and many researchers in the healthcare domain use it. Methods: This paper deals with the prediction of lung cancer using the Random Forest classifier. Results: The proposed method (Random Forest classifier), applied to two lung cancer datasets, achieved an accuracy of 100% on lung cancer dataset-1 and 96.31% on dataset-2. In the prediction of lung cancer, the Random Forest algorithm showed improved accuracy compared with other methods. Conclusion: This predictive model will help health professionals predict lung cancer at an early stage.
-
-
-
Software-Defined Security Architecture for a Smart Home Networks Using Token Sharing Mechanism
Authors: Utkarsh Saxena, J.S Sodhi and Yaduveer Singh
Background: Several approaches were proposed earlier to provide secure infrastructure-dependent communication in a smart home network. Some used overlay networks, some used lightweight encryption techniques, and some used honeypot techniques. However, all these approaches are vulnerable to network attacks due to the dependency on the device and server; due to centralization, there exists a higher chance of attacks. Objective: To develop a security architecture that is more resilient to cyber-attacks and less dependent on any complex network parameter, i.e., an encryption algorithm or an overlay network. Methods: An authentication module, along with Squid, performs token generation, and a monitoring module helps devices communicate with each other. An integrity protection module ensures data integrity, and token expiration is handled by the access module using the clock. Our approach meets all the security aspects of a smart home network. Results: The analysis of our secure architecture showed that it provides more flexibility and robustness in terms of load balancing, network lifetime maximization, failure management, energy efficiency, link quality, and heterogeneity of the network as compared with other existing security policies or architectures. Conclusion: The proposed framework ensures and improves all the security requirements of a smart home network. Token-based authentication is much more secure and robust than traditional approaches. This framework is suited for secure communication in a smart home environment, but it lacks control over zero-day attacks. In the future, we will improve its resilience against zero-day attacks and further enhance the security features of the current architecture.
-
-
-
Low Power and High Speed Sequential Circuits Test Architecture
Authors: Ahmed K. Jameil, Yasir A. Abbas and Saad Al-Azawi
Background: Electronic circuit testing and verification are performed to identify faulty devices after IC fabrication and to ensure that the circuit performs its designed functions. The verification process is considered a test for both sequential and combinational logic circuits. Testing sequential circuits is a more complex task than testing combinational circuits; however, dedicated algorithms can be used to test any type of sequential circuit regardless of its complexity. Objective: This paper presents a new Design Under Test (DUT) algorithm for 4- and 8-tap Finite Impulse Response (FIR) filter sequential circuits. The FIR filter and the proposed DUT algorithm are implemented on a Field Programmable Gate Array (FPGA) platform. Methods: The proposed DUT test generation algorithm is implemented using VHDL and the Xilinx ISE V14.5 design suite. The proposed test generation algorithm for the FIR filter filters out redundant faults to obtain a set of target faults for the DUT. Results: The proposed algorithm reduces time delay by up to 50% with a power consumption reduction of up to 70% in comparison with the most recent similar work. Conclusion: The implementation results confirm that a high-speed and low-power architecture can be achieved. Also, the proposed architecture is faster than existing techniques.
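The test generation algorithm itself is not given in the abstract, but the circuit under test is a standard FIR filter, and a software reference model like the one below is a common way to generate the expected responses a hardware DUT is checked against. This is a generic direct-form FIR sketch, not the paper's VHDL design; the moving-average coefficients are a hypothetical test vector chosen because the output is easy to verify by hand.

```python
def fir_filter(coeffs, samples):
    """Direct-form FIR reference model: y[n] = sum_k coeffs[k] * x[n-k],
    with zero initial history. A 4-tap filter has 4 coefficients,
    an 8-tap filter 8."""
    taps = [0.0] * len(coeffs)   # shift register of recent input samples
    out = []
    for x in samples:
        taps = [x] + taps[:-1]   # newest sample enters, oldest falls off
        out.append(sum(c * t for c, t in zip(coeffs, taps)))
    return out

# 4-tap moving average fed a constant input: the output ramps over the
# first 4 samples and then settles at the input value.
y = fir_filter([0.25, 0.25, 0.25, 0.25], [4, 4, 4, 4, 4])
print(y)   # [1.0, 2.0, 3.0, 4.0, 4.0]
```

Comparing the FPGA DUT's cycle-by-cycle outputs against such a golden model for the chosen fault-targeting stimuli is the usual closing step of a test-generation flow.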
-