Recent Advances in Computer Science and Communications - Volume 13, Issue 3, 2020
A Study of the Cloud Computing Adoption Issues and Challenges
Authors: Dhanapal Angamuthu and Nithyanandam Pandian
Background: Cloud computing is the modern trend in high-performance computing. It has become very popular owing to characteristics such as anywhere availability, elasticity, ease of use and cost-effectiveness. Although the cloud offers various benefits, it also carries issues and challenges that deter organizations from adopting it.
Objective: The objective of this paper is to cover several perspectives of cloud computing, including a basic definition of the cloud and its classification by delivery and deployment model. The broad classes of issues and challenges faced by organizations adopting the cloud computing model are explored, for example data-related issues and service-availability issues. The detailed sub-classifications of each issue and challenge are discussed; for instance, data-related issues are further classified into data security, data integrity, data location and multi-tenancy issues. The paper also covers the typical problem of vendor lock-in and analyzes the various possible insider attacks unique to the cloud environment.
Results: Guidelines and recommendations for the different issues and challenges are discussed and, most importantly, potential research areas in the cloud domain are explored.
Conclusion: This paper discussed cloud computing, its classifications and the several issues and challenges faced in adopting the cloud. Guidelines and recommendations for these issues and challenges are covered, and potential research areas in the cloud domain are captured. This helps researchers, academicians and industry to focus on and address the current challenges faced by customers.
Malicious Route Detection in Vehicular Ad-hoc Network using Geographic Routing with Masked Data
Background: A Vehicular Ad-hoc Network (VANET) is a subset of the Mobile Ad-hoc Network, the Intelligent Transport System and the Internet of Things. The acting nodes in a VANET are the vehicles on the road at any moment.
Objective: The anonymity of these vehicles opens up the opportunity for malicious attacks. Malicious routes increase data retransmission and hence degrade routing performance. The main objective of this work is to identify malicious routes, avoid transmitting data over them and increase the packet delivery ratio.
Methods: In the proposed system, called Geographic Routing Protocol with Masked Data, two binary codes, called mask and share, are generated to identify malicious routes. The original data is encoded using these binary codes and routed to the destination using the geographic routing protocol. It is reconstructed at the destination node and, based on the encoding technique, the malicious routes and malicious nodes are identified. Simulations were conducted with varying speed and varying network size over a 20 km2 geographical area.
Results: The average packet delivery ratio is 0.817 with varying speed and 0.733 with varying network size.
Conclusion: The proposed geographic routing protocol with masked data outperforms the traditional geographic protocol and the Detection of Malicious Node protocol by 0.102 and 0.264, respectively, at different speeds, and by 0.065 and 0.1616, respectively, at different network sizes.
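The paper's exact mask/share construction is not given in the abstract; purely as an illustration of how encoding data with two binary codes lets the destination detect in-transit tampering, the sketch below XORs the payload with a random mask and share and checks reconstruction at the receiver (all names and values are hypothetical).

```python
import os

def encode(data: bytes, mask: bytes, share: bytes) -> bytes:
    # XOR the payload with both binary codes before forwarding (illustrative only)
    return bytes(d ^ m ^ s for d, m, s in zip(data, mask, share))

def decode(coded: bytes, mask: bytes, share: bytes) -> bytes:
    # XOR is its own inverse, so applying the same codes recovers the payload
    return bytes(c ^ m ^ s for c, m, s in zip(coded, mask, share))

payload = b"speed=42;pos=17.3,78.4"
mask  = os.urandom(len(payload))   # hypothetical per-route mask
share = os.urandom(len(payload))   # hypothetical per-route share

coded = encode(payload, mask, share)
assert decode(coded, mask, share) == payload
# A route that alters `coded` in transit fails this reconstruction check,
# flagging the path as potentially malicious.
```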
Cost-Aware Ant Colony Optimization for Resource Allocation in Cloud Infrastructure
Authors: Punit Gupta, Ujjwal Goyal and Vaishali Verma
Background: Cloud computing is a growing industry offering secure, low-cost, pay-per-use resources. Efficient resource allocation is a challenging issue in the cloud computing environment. Many task scheduling algorithms, including ant colony optimization, genetic algorithms and Round Robin, are used to improve system performance, but they are not cost-efficient at the same time.
Objective: Earlier task scheduling algorithms do not include network cost, whereas the proposed ACO takes network overhead or cost into consideration, which improves its efficiency compared with previous algorithms. The proposed algorithm aims to improve cost and execution time and to reduce network cost.
Methods: The proposed task scheduling algorithm for the cloud uses ACO with network cost and execution cost as the fitness function. This work improves the existing ACO to give better results in terms of performance and execution cost for a cloud architecture. Our study includes a comparison between various other algorithms and the proposed ACO model.
Results: Performance is measured using task completion time and resource operational cost over the duration of execution as optimization criteria, together with network cost and user requests.
Conclusion: The simulation shows that the proposed cost- and time-aware technique outperforms the alternatives on the measured parameters (average finish time, resource cost, network cost).
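The paper's actual fitness function and pheromone-update rules are not reproduced here; the following minimal sketch only illustrates how an ACO-style task-to-VM assignment can weigh execution cost together with network cost. All VM figures, weights and the update rule are illustrative assumptions.

```python
import random

# Hypothetical VM catalogue: (MIPS capacity, cost per second, network cost per task)
vms = [(1000, 0.05, 0.010), (2000, 0.12, 0.004), (1500, 0.08, 0.006)]
task_lengths = [4000, 12000, 8000, 6000]      # task sizes in million instructions
pheromone = [[1.0] * len(vms) for _ in task_lengths]

def fitness(task_mi, vm):
    mips, cost_per_s, net_cost = vm
    exec_time = task_mi / mips
    return exec_time * cost_per_s + net_cost   # execution cost + network cost

def choose_vm(t):
    # Selection probability proportional to pheromone and inverse cost (ACO heuristic)
    weights = [pheromone[t][j] / fitness(task_lengths[t], vm) for j, vm in enumerate(vms)]
    return random.choices(range(len(vms)), weights=weights)[0]

for _ in range(50):                            # a few ant iterations
    assignment = [choose_vm(t) for t in range(len(task_lengths))]
    total = sum(fitness(task_lengths[t], vms[j]) for t, j in enumerate(assignment))
    for t, j in enumerate(assignment):         # deposit pheromone inversely to total cost
        pheromone[t][j] += 1.0 / total

print(assignment, round(total, 4))
```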
Machine Learning Based Support System for Students to Select Stream (Subject)
Authors: Kapil Sethi, Varun Jaiswal and Mohammad D. Ansari
Background: In most countries, students have to select a subject/stream in the secondary education phase. This selection is crucial because their further career proceeds according to it, and in most cases it cannot be changed later. Inappropriate selection, due to parental pressure, lack of information, etc., can lead to limited success in the chosen stream. Guidance for subject/stream selection, based on information about successful scholars of each stream and on student attributes such as interest, family background and previous education, can enhance career success.
Methods: Data mining and machine learning methods were developed on the above information. Data from different institutions and from students of two different streams were used for training and testing. Different machine learning algorithms were used, and methods with high accuracy (86.72%) were developed.
Result: The developed methods can be extended and used for other subject/stream selections.
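The student dataset and the exact learners used are not available here; the fragment below is only a generic scikit-learn train/test sketch of the kind of stream-classification experiment the abstract describes, with make_classification standing in for the real student records.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for student records (interest, family background, prior marks, ...)
X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                           n_classes=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy: %.2f%%" % (100 * accuracy_score(y_te, model.predict(X_te))))
```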
Performance Analysis of DCF-Two Way Handshake vs RTS/CTS During Train-Trackside Communication in CBTC based on WLAN 802.11b
Authors: Bhupendra Singh and Rajesh Mishra
Background: Wireless Local Area Network (WLAN) is used primarily in CBTC because of the easy availability of commercial WLAN equipment. The WLAN Medium Access Control (MAC) protocol is a well-known protocol used to serve real-time traffic and delay-sensitive applications. Bidirectional train-trackside communication is fundamental to train control in CBTC.
Methods: DCF describes two basic techniques for packet transmission: a Two Way Handshake (TWH) mechanism and a Four Way Handshake (FWH) mechanism. The RTS/CTS FWH protocol specified by IEEE 802.11b was introduced to rectify the Hidden Node Problem (HNP) encountered in the TWH protocol, because of which the TWH mechanism of DCF suffers from higher average packet delay when applied to CBTC. A DCF Four Way Handshake (FWH) Request To Send (RTS) / Clear To Send (CTS) delay model is proposed for the Communication Based Train Control (CBTC) system.
Results: FWH is applied in CBTC to overcome the packet-delay and throughput limitations of the Two Way Handshake (TWH) mechanism of the Distributed Coordination Function (DCF). An experiment was designed to simulate and compare the performance of the RTS/CTS delay model against the TWH mechanism of DCF.
Conclusion: It was found that the average packet delay is slightly higher and the throughput lower for RTS/CTS than for the TWH method; however, for multiple retransmissions at various data rates, the RTS/CTS model had better packet delay time than TWH.
An Energy Efficient Routing Protocol Based On New Variable Data Packet (VDP) Algorithm for Wireless Sensor Networks
Background: Wireless Sensor Networks (WSNs) are groups of sensors used for sensing and monitoring physical data of the environment and organizing the collected data at a central location. These networks enjoy several benefits because of their lower cost and smaller, smarter sensors. However, the limited energy source and lifetime of the sensors are the major setbacks for these networks.
Methods: In this work, an energy-aware algorithm is proposed for the transmission of variable data packets from sensor nodes to the base station according to balanced energy consumption across all nodes of a WSN.
Result: The simulation results verify that the lifetime of the sensor network is significantly enhanced in comparison with other existing clustering-based routing algorithms.
Conclusion: The proposed algorithm is comparatively easy to implement and achieves a higher gain in the lifetime of a WSN while keeping the throughput nearly the same as the LEACH protocol.
Brain Tumor Detection from MR Images Employing Fuzzy Graph Cut Technique
Authors: Jyotsna Dogra, Shruti Jain, Ashutosh Sharma, Rajiv Kumar and Meenakshi Sood
Background: This research aims at the accurate selection of seed points from brain MRI images for the detection of the tumor region. Since the conventional manual seed selection leads to inappropriate tumor extraction, a fuzzy clustering technique is employed for accurate seed selection before performing segmentation with the graph cut method.
Methods: In the proposed method, a Fuzzy Kernel Seed Selection technique partitions the complete brain MRI image into groups of similar intensity. Among these groups, the kernels that show the highest resemblance to the tumor are selected empirically. The concept of fuzziness helps make this selection even in boundary regions.
Results: The proposed fuzzy kernel selection technique is applied to the BraTS dataset; among the four modalities, it is applied to FLAIR images. The dataset consists of Low Grade Glioma (LGG) and High Grade Glioma (HGG) tumor images. The experiment was conducted on more than 40 images and validated by evaluating the following performance metrics: (1) Dice Similarity Coefficient (DSC), (2) Jaccard Index (JI) and (3) Positive Predictive Value (PPV). The mean DSC and PPV values obtained are 0.89 and 0.87 for LGG images and 0.92 and 0.90 for HGG images, respectively.
Conclusion: Compared with existing techniques, the proposed fuzzy kernel selection graph cut approach provides automatic, accurate tumor detection. It is highly efficient and can deliver better performance for HGG and LGG tumor segmentation in clinical applications.
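The three validation metrics named above have standard definitions on binary segmentation masks; the snippet below computes DSC, JI and PPV for a predicted mask against ground truth (the toy masks are illustrative, not BraTS data).

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray):
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    dsc = 2 * tp / (2 * tp + fp + fn)          # Dice Similarity Coefficient
    ji  = tp / (tp + fp + fn)                  # Jaccard Index
    ppv = tp / (tp + fp)                       # Positive Predictive Value
    return dsc, ji, ppv

pred  = np.zeros((64, 64), dtype=bool); pred[20:40, 20:40] = True
truth = np.zeros((64, 64), dtype=bool); truth[22:42, 22:42] = True
print("DSC=%.3f  JI=%.3f  PPV=%.3f" % segmentation_metrics(pred, truth))
```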
SEGIN-Minus: A New Approach to Design Reliable and Fault-Tolerant MIN
Authors: Shilpa Gupta and Gobind Lal Pahuja
Background: Advancements in VLSI technology have created a requirement for high computational power, which can be met by implementing multiple processors in parallel. These processors have to communicate with their memory modules through Interconnection Networks (IN). Multistage Interconnection Networks (MIN) are used as INs because they provide efficient computing at low cost.
Objective: The objective of the study is to introduce a new reliable MIN, named SEGIN-Minus (Shuffle Exchange Gamma Interconnection Network Minus), which provides reliability and fault tolerance with fewer stages.
Methods: A MUX at the input terminal and a DEMUX at the output terminal of SEGIN are employed, with one intermediate stage removed. Fault tolerance is introduced in the form of disjoint paths formed between each source-destination node pair; hence reliability is improved.
Results: Terminal, broadcast and network reliability are evaluated using Reliability Block Diagrams for each source-destination node pair. The results show higher reliability values for the newly proposed network, and the cost analysis shows that SEGIN-Minus is a cheaper network than SEGIN.
Conclusion: SEGIN-Minus has better reliability and fault tolerance than the previously proposed SEGIN.
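The exact reliability block diagrams of SEGIN-Minus are given in the paper; purely as a generic illustration of RBD evaluation, the sketch below combines stage reliabilities in series within a path and redundant disjoint paths in parallel (the stage values are arbitrary).

```python
from math import prod

def series(reliabilities):
    # All blocks in the path must work: R = product of R_i
    return prod(reliabilities)

def parallel(path_reliabilities):
    # At least one disjoint path must work: R = 1 - product of (1 - R_path)
    return 1 - prod(1 - r for r in path_reliabilities)

# Illustrative values: two disjoint source-destination paths, each a series
# of MUX, switching-stage and DEMUX blocks.
path1 = series([0.99, 0.98, 0.98, 0.99])
path2 = series([0.99, 0.97, 0.99])
print("terminal reliability: %.4f" % parallel([path1, path2]))
```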
ANN-Based Relaying Algorithm for Protection of SVC-Compensated AC Transmission Line and Criticality Analysis of a Digital Relay
Authors: Farhana Fayaz and Gobind L. Pahuja
Background: The Static VAR Compensator (SVC) can improve the reliability, operation and control of the transmission system, thereby improving the dynamic performance of the power system. The SVC is a widely used shunt FACTS device and an important tool for reactive power compensation in high-voltage AC transmission systems. Transmission lines compensated with an SVC may experience faults and hence need a protection system that guards against the damage these faults cause while maintaining an uninterrupted supply of power.
Methods: The research work reported in the paper is a successful attempt to reduce the time to detect faults on an SVC-compensated transmission line to less than a quarter of a cycle. The relay algorithm involves two ANNs, one for detection and the other for classification of faults, including identification of the faulted phase or phases. RMS (Root Mean Square) values of line voltages and ratios of sequence components of line currents are used as inputs to the ANNs. Extensive training and testing of the two ANNs were carried out using data generated by simulating an SVC-compensated transmission line in PSCAD at a signal sampling frequency of 1 kHz, with back-propagation used for training and testing. The criticality analysis of the existing relay and the modified relay was performed using three fault-tree importance measures: Fussell-Vesely (FV) Importance, Risk Achievement Worth (RAW) and Risk Reduction Worth (RRW).
Results: The relay detects any type of fault occurring anywhere on the line with 100% accuracy within 4 ms. It also classifies the type of fault and indicates the faulted phase or phases, as the case may be, with 100% accuracy within 15 ms, well before a circuit breaker can clear the fault. As demonstrated, fault detection and classification with ANNs is reliable and accurate when a large dataset is available for training. The criticality analysis shows that the criticality ranking differs between the two designs (the existing relay and the modified relay), with the ranking of the improved measurement system in the modified relay changing from 2 to 4.
Conclusion: A relaying algorithm is proposed for the protection of a transmission line compensated with a Static VAR Compensator (SVC), and a criticality ranking of the different failure modes of a digital relay is carried out. The proposed scheme has significant advantages over more traditional relaying algorithms: it is suitable for high-resistance faults and is affected neither by the fault inception angle nor by the fault location.
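The sequence components used as ANN inputs follow the standard symmetrical-component transform; the snippet below computes the zero-, positive- and negative-sequence phasors from three phase-current phasors (the sample currents are illustrative).

```python
import numpy as np

a = np.exp(2j * np.pi / 3)                 # 120-degree rotation operator
A = np.array([[1, 1,     1    ],
              [1, a,     a**2 ],
              [1, a**2,  a    ]]) / 3

def sequence_components(Ia, Ib, Ic):
    # Returns the (zero, positive, negative) sequence phasors
    return A @ np.array([Ia, Ib, Ic])

I0, I1, I2 = sequence_components(100 + 0j,
                                 100 * np.exp(-1j * 2 * np.pi / 3),
                                 100 * np.exp( 1j * 2 * np.pi / 3))
print(abs(I0), abs(I1), abs(I2))           # balanced set -> only positive sequence remains
```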
An Intelligent Resource Manager Over Terrorism Knowledge Base
Authors: Archana Patel, Abhisek Sharma and Sarika Jain
The complex and chaotic crises created by terrorism demand situation awareness, which is made possible by the proposed Indian Terrorism Knowledge Treasure (ITKT).
Objective: This work is an effort at creating the largest comprehensive knowledge base of terrorism and related activities, the people and agencies involved, and extremist movements, and at providing a platform for society, the government and military personnel to combat the evolving threat of global terrorism.
Methods: To represent domain knowledge semantically, an ontology is used to better integrate data and information from multiple heterogeneous sources. An Indian Terrorism Knowledge Base is created consisting of information about past terrorist attacks, actions taken at the time of those attacks, available resources and more. An Indian Terrorism Resource Manager is conceived, comprising use cases for searching a specified keyword for its description, navigating the complete knowledge base of Indian terrorism and answering queries pertaining to terrorism.
Results: The managerial implications of this work are two-fold. All the involved parties, i.e., government officials, military, police, emergency personnel, the fire department, NGOs, the media, the public, etc., will be better informed in case of emergency and will be able to communicate with each other, thus improving situation awareness and providing decision support.
Dimensionality Reduction Technique in Decision Making Using Pythagorean Fuzzy Soft Matrices
Authors: Rakesh K. Bajaj and Abhishek Guleria
Background: Dimensionality reduction plays an effective role in downsizing data with irregular factors and acquiring an arrangement of the important factors in the information. Often, many of the attributes are correlated and hence redundant. Dimensionality reduction therefore has wide applicability in decision-making problems that involve a large number of factors.
Objective: To handle the impreciseness of the decision-making factors in terms of Pythagorean fuzzy information given in the form of a soft matrix. The information is perceived through the parameters degree of membership, degree of indeterminacy (neutral) and degree of non-membership, for broader coverage of the information.
Methods: We first provide a technique for finding a threshold element and value for information given in the form of a Pythagorean fuzzy soft matrix. Further, the proposed definitions of the object-oriented Pythagorean fuzzy soft matrix and the parameter-oriented Pythagorean fuzzy soft matrix are utilized to outline an algorithm for dimensionality reduction in the decision-making process.
Results: The proposed algorithm is applied to a decision-making problem with the help of a numerical example. A comparative analysis against existing methodologies is also presented, with comparative remarks and additional advantages.
Conclusion: The example clearly validates the contribution and demonstrates that the proposed algorithm efficiently performs dimensionality reduction. The proposed technique may further be applied to enhance the performance of large-scale image retrieval.
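For reference, each entry of a Pythagorean fuzzy soft matrix is a pair of membership degree mu and non-membership degree nu obeying the standard Pythagorean constraint, with the indeterminacy (hesitation) degree following from them:

```latex
0 \le \mu^{2} + \nu^{2} \le 1, \qquad \pi = \sqrt{1 - \mu^{2} - \nu^{2}}
```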
Optimization of PV Based Standalone Hybrid Energy System using Cuckoo Search Algorithm
Authors: Vinay A. Tikkiwal, Sajai Vir Singh and Hariom Gupta
Background: Renewable sources of energy have emerged as a promising eco-friendly alternative to conventional, non-renewable sources. However, the highly variable and intermittent nature of renewables is a big hurdle to their widespread adoption. Hybrid energy systems provide an efficient and reliable solution to this issue, especially for non-grid-connected or stand-alone systems.
Objective: The study deals with the design and optimization of a stand-alone hybrid renewable energy system.
Methods: Two different configurations consisting of PV/W/B/DG components are modeled and optimized for lower annualized cost using cuckoo search, a meta-heuristic algorithm. The system configurations are analyzed to meet the energy demand at the least annualized cost.
Results and Conclusion: Using real-world data for an existing educational organization in India, it is shown that the proposed optimization method meets all the requirements of the system and that the PV/B/DG configuration yields a lower annualized cost as well as lower emissions.
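The paper's system model is not reproduced here; the sketch below is a minimal cuckoo-search loop (Levy flights plus abandonment of the worst nests) minimizing a purely hypothetical annualized-cost function of PV and battery size, only to illustrate how the meta-heuristic is typically wired up.

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(0)

def levy_step(beta=1.5, size=2):
    # Mantegna's algorithm for Levy-distributed step lengths
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

def annualized_cost(x):
    # Hypothetical smooth stand-in for the real PV/battery/DG sizing cost model
    pv_kw, batt_kwh = x
    return 900 * pv_kw + 250 * batt_kwh + 5e5 / (pv_kw * batt_kwh + 1)

lb, ub = np.array([1.0, 1.0]), np.array([200.0, 400.0])
nests = rng.uniform(lb, ub, (15, 2))                 # candidate system sizings
best = nests[np.argmin([annualized_cost(n) for n in nests])].copy()

for _ in range(200):
    for i in range(len(nests)):                      # Levy-flight move around the best nest
        cand = np.clip(nests[i] + 0.01 * levy_step() * (nests[i] - best), lb, ub)
        if annualized_cost(cand) < annualized_cost(nests[i]):
            nests[i] = cand
    worst = np.argsort([annualized_cost(n) for n in nests])[-4:]
    nests[worst] = rng.uniform(lb, ub, (len(worst), 2))   # abandon a fraction of worst nests
    idx = np.argmin([annualized_cost(n) for n in nests])
    if annualized_cost(nests[idx]) < annualized_cost(best):
        best = nests[idx].copy()

print("best sizing:", best.round(1), "annualized cost:", round(annualized_cost(best), 1))
```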
Probabilistic and Fuzzy based Efficient Routing Protocol for Mobile Ad Hoc Networks
Authors: Madan M. Agarwal, Hemraj Saini and Mahesh Chandra Govil
Background: The performance of a network protocol depends on a number of parameters such as re-broadcast probability, mobility, the distance between source and destination, hop count, queue length and residual energy.
Objective: In this paper, a new energy-efficient routing protocol, IAOMDV-PF, is developed based on fixed-threshold re-broadcast probability determination and best-route selection from multiple routes using fuzzy logic.
Methods: In the first phase, the proposed protocol determines a fixed threshold re-broadcast probability, which is used for discovering multiple paths between the source and the destination. The threshold probability at each node decides whether received control packets are rebroadcast to its neighbors, thereby reducing routing overheads and energy consumption. The list of multiple paths from the first phase is supplied to the second phase, in which a fuzzy controller, named the Fuzzy Best Route Selector (FBRS), selects the best path as a function of queue length, the distance between nodes and node mobility.
Results: Comparative analysis of the proposed protocol, named "Improved Ad-Hoc On-demand Multiple Path Distance Vector based on Probabilistic and Fuzzy logic" (IAOMDV-PF), shows that it is more efficient in terms of overheads and energy consumption.
Conclusion: The proposed protocol reduced energy consumption by about 61%, 58% and 30% with respect to the FF-AOMDV, IAOMDV-F and FPAOMDV routing protocols, respectively. The proposed protocol was simulated and analyzed using NS-2.
A Novel Simplified AES Algorithm for Lightweight Real-Time Applications: Testing and Discussion
Authors: Malik Qasaimeh, Raad S. Al-Qassas, Fida Mohammad and Shadi Aljawarneh
Background: Lightweight cryptographic algorithms have been the focus of many researchers in the past few years, inspired by the potential development of lightweight constrained devices and their applications. These algorithms are intended to overcome the limitations of traditional cryptographic algorithms in terms of execution time, complex computation and energy requirements.
Methods: This paper proposes LAES, a lightweight and simplified cryptographic algorithm for constrained environments. It operates on GF(2^4), with a block size of 64 bits and a key size of 80 bits, and is competitive in terms of processing time and randomness level. The fundamental architecture of LAES is expounded using mathematical proofs to compare and contrast it with a comparable lightweight algorithm, PRESENT, in terms of efficiency and randomness level.
Results: Three metrics were used to evaluate LAES according to the NIST cryptographic applications statistical test suite. The testing indicated competitive processing time and randomness level for LAES compared with PRESENT.
Conclusion: The study demonstrates that LAES achieves comparable results to PRESENT in terms of randomness levels and generally outperforms PRESENT in terms of processing time.
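LAES operates over GF(2^4); as background, the snippet below implements nibble multiplication in GF(2^4) using the common irreducible polynomial x^4 + x + 1 (the paper's exact field polynomial and S-box construction may differ).

```python
def gf16_mul(a: int, b: int, poly: int = 0b10011) -> int:
    """Multiply two 4-bit values in GF(2^4) reduced by x^4 + x + 1 (a common,
    but here assumed, choice of irreducible polynomial)."""
    result = 0
    for _ in range(4):
        if b & 1:
            result ^= a          # conditionally add (XOR) the current multiple
        b >>= 1
        a <<= 1
        if a & 0x10:             # reduce modulo the field polynomial
            a ^= poly
    return result & 0xF

# Sanity check: in a field every non-zero element has a multiplicative inverse
for x in range(1, 16):
    assert any(gf16_mul(x, y) == 1 for y in range(1, 16))
print(gf16_mul(0x7, 0xB))
```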
Web Service Composition in Cloud: A Fuzzy Rule Model
Authors: Hussien Alhadithy and Bassam Al-Shargabi
Background: Cloud computing has drawn much attention in the industry due to its cost-efficient schema along with other prospects, such as the elasticity and scalability of cloud computing. One of the main service models of a cloud is Software as a Service, where many web services are published and hosted in the cloud environment. Many web services offered in a cloud have similar functionality but different non-functional characteristics such as Quality of Service (QoS). In addition, individual web services are limited in their capability; therefore, there is a need for composing existing services to create new functionality in the form of a composite service that fulfills the requirements of cloud users for certain processes.
Methods: This paper introduces a fuzzy rule approach to compose web services, based on QoS, from different cloud computing providers. The fuzzy rules are generated based on the QoS of web services discovered in the cloud, in order to compose only the web services that match user requirements. The proposed model is based on an agent that is responsible for discovering and composing only web services that satisfy user requirements.
Result: The experimental results show that the proposed model is efficient in terms of time and in its use of fuzzy rules to compose web services from different cloud providers under different specifications and configurations of the cloud computing environment.
Conclusion: In this paper, an agent-based model is proposed to compose web services based on fuzzy rules in a cloud environment. The agent is responsible for discovering web services and generating composition plans based on the offered QoS of each web service. The agent employs a set of fuzzy rules to carry out an intelligent selection of the best composition plan that fulfills the requirements of the end user. The model was implemented on CloudSim to ensure its validity, and a performance time analysis showed good results with regard to the cloud computing configuration.
Project Management Knowledge Areas and Skills for Managing Software and Cloud Projects: Overcoming Challenges
Authors: Sofyan Hayajneh, Mohammed Hamada and Shadi Aljawarneh
Background: Cloud computing has already started to revolutionize the storage of and access to data. Although cloud computing is on its way to becoming a huge success, some challenges arise while managing cloud services. This reveals many new knowledge areas, skills and consequently new challenges that need to be overcome so that software project managers can cope with and make use of the newly available cloud services. This research aims to identify the challenges faced by project managers in cloud computing and to highlight the knowledge areas and skills required to meet these challenges.
Methods: The findings of this paper are presented through three stages: first, a pre-survey questionnaire to validate the skills and knowledge areas that would be selected and eventually adopted for the main survey; second, interviews with experts in the field to discuss the challenges identified in the literature review, used together with the pre-survey to build the main survey; third, the main survey to identify the critical skills and knowledge areas required for managing cloud projects.
Results: This study drew up recommendations so that systems and tools can be developed and integrated to overcome some of these challenges, providing guidance for managers in the field to improve project management performance.
Conclusion: This study leads to a better understanding of the attributes of a competent cloud manager. It also determines the knowledge areas and skills that help managers overcome the challenges faced in software and cloud projects.
A Novel Model for Aligning Knowledge Management Activities within the Process of Developing Quality Software
By Omar Sabri
Background: Currently, an organization's competitive advantage rests on critical decisions made to achieve its objectives by understanding the power of knowledge as a resource within the organization. However, there is a lack of qualitative models/frameworks for integrating the Knowledge Life Cycle (KLC) within the Software Development Life Cycle (SDLC). Therefore, the goal of this research is to involve knowledge management activities within the SDLC in information technology companies to produce quality software. With the help of knowledge movement within companies, software quality can be used to improve organizational performance and deliver products better and faster.
Methods: This research highlights the importance of knowledge management activities during a typical software development process that delivers software as the final product. Moreover, the paper proposes a model to explain the relationships between knowledge management activities within the software development life cycle for producing quality software, using three basic building blocks: people, organizations and technologies. The success factors for the blocks are selected based on their occurrence in recent literature and their fitness to the nature of this study.
Result: The research proposes a novel model of success factors to evaluate the effects of the building blocks and workflows during the software development process. The selected success factors for the blocks are Training, Leadership, Teamwork, Trust, IT Infrastructure, Culture and Strategies. The research also demonstrates the relationships between KM success factors and the SDLC in producing quality software.
Conclusion: In this research, we proposed a novel model to explain the relationships between knowledge management activities within the software development life cycle for producing quality software, using three basic building blocks: people, organizations and technologies. We selected seven success factors for the blocks depending on (1) their importance and occurrence in the literature and (2) their fitness to the nature of this study. The success factors of the proposed model (Training, Leadership, Teamwork, Trust, IT Infrastructure, Culture and Strategies) can be used to evaluate the effects of people, organizations, technologies and workflows during the software development process so as to obtain the required software quality. Finally, a quantitative study will be implemented to investigate the proposed hypotheses and to measure the factors influencing the suggested model. By assessing the degree to which these factors are present or absent within the SDLC process, managers will be able to address weaknesses by preparing a suitable plan and producing quality software.
Secure Digital Databases using Watermarking based on English-Character Attributes
Authors: Khalaf Khatatneh, Ashraf Odeh, Ashraf Mashaleh and Hind Hamadeen
Introduction: The proposed watermarking procedure is based on the single space and the double space (DS). In this procedure, an image is used to watermark a digital database: the image bytes are divided into binary strings that are embedded into the text attributes of the selected database. We propose an algorithm to defend against four common database attacks.
Objective: To perform embedding and extraction of the watermark, and to describe the principles of both the embedding and the extraction.
Methods: The procedure to extract the watermark does not require knowledge of the original (unwatermarked) database. This feature is extremely important because it allows the discovery of the watermark in a copy of the original database regardless of subsequent updates to the asset. The extraction procedure is a direct, six-step reflection of the procedure used to embed the watermark.
Results: The new algorithm is able to produce a database watermark that makes it difficult for an attacker to remove or change the watermark without discovering the value of the object. To be judged effective, the algorithm had to create a watermark strong enough to sustain the security of the database in the face of four types of attack, including deletion of a sub-dataset and addition of a sub-dataset.
Conclusion: The performance of the proposed algorithm was assessed with respect to its ability to defend the database against the four common attacks over all selected tuples.
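The full six-step embedding/extraction procedure is in the paper; the toy sketch below only illustrates the single-space/double-space idea on a text attribute, treating a single space between words as bit 0 and a double space as bit 1 (attribute value and bit string are hypothetical).

```python
def embed_bits(text: str, bits: str) -> str:
    # Re-join the words: single space encodes 0, double space encodes 1
    words = text.split()
    out, i = [words[0]], 0
    for w in words[1:]:
        gap = "  " if i < len(bits) and bits[i] == "1" else " "
        out.append(gap + w)
        i += 1
    return "".join(out)

def extract_bits(text: str) -> str:
    bits, i = [], 0
    while i < len(text):
        if text[i] == " ":
            if i + 1 < len(text) and text[i + 1] == " ":
                bits.append("1"); i += 2
            else:
                bits.append("0"); i += 1
        else:
            i += 1
    return "".join(bits)

attribute = "Ashraf Odeh Applied Science University Amman"   # hypothetical text attribute
marked = embed_bits(attribute, "1011")
print(extract_bits(marked))   # -> 10110 (unused trailing gaps carry 0 by default)
```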
IP Traceback using Flow Based Classification
Authors: Yerram Bhavani, Vinjamuri Janaki and Rangu Sridevi
Background: Distributed Denial of Service (DDoS) attacks are a major threat on the internet. IP traceback mechanisms defend against DDoS attacks by tracing the path traversed by attack packets. The existing traceback techniques have a few shortcomings: the victim requires a large number of packets to trace the attack path, and this large number of packets results in more combinations and more false positives.
Methods: The Chinese Remainder Theorem is applied to generate a unique value for the IP address of each router in the attack path, which helps in combining the exact parts of the IP address at the victim. We also apply the K-Nearest Neighbor (KNN) algorithm to classify packets according to their traffic flow, which reduces the number of packets needed to reconstruct the attack path.
Results: The proposed approach is compared with existing approaches, and the results demonstrate that the attack graph is constructed effectively, with higher precision and lower combination overhead, under large-scale DDoS attacks. In this approach, packets from diverse flows are separated per flow information by applying the KNN algorithm; hence the reconstruction procedure can be applied to each group separately to construct the multiple attack paths. This yields the complete attack graph with fewer combinations and a lower false positive rate.
Conclusion: In DDoS attacks, reconstruction of the attack path plays a major role in revealing the IP addresses of the participating routers without false positives or false negatives. Our algorithm, FRS, enhances the feasibility of recovering information pertaining to even the farthest routers by incorporating a flag condition while marking the packets. The rates of false positives and false negatives are drastically reduced by applying the Chinese Remainder Theorem to the IP addresses of the routers. At the victim, the application of the KNN algorithm greatly reduces the combination overhead and the computation cost.
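Chinese Remainder Theorem reconstruction itself is standard; as a generic illustration of how residues carried in marked packets could be recombined at the victim into one value encoding a router's IP address, the sketch below solves a CRT system over pairwise-coprime moduli (the moduli and marking layout are illustrative, not the paper's format).

```python
from math import prod

def crt(residues, moduli):
    # Solve x = r_i (mod m_i) for pairwise-coprime moduli m_i
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse (Python 3.8+)
    return x % M

# Hypothetical split of a 32-bit router IP into residues sent in separate packets
ip_value = int.from_bytes(bytes([192, 168, 10, 7]), "big")
moduli = [2**11 - 1, 2**13 - 1, 2**12 - 1]   # pairwise coprime, product exceeds 2^32
residues = [ip_value % m for m in moduli]
assert crt(residues, moduli) == ip_value      # the victim recovers the full IP value
```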
Dynamic Consolidation of Virtual Machine: A Survey of Challenges for Resource Optimization in Cloud Computing
Authors: A.M. S. Kani and D. Paulraj
Background: Virtualization is an efficient technology that enables existing data centers to support heavier application workloads. It is based on guest operating systems and keeps track of the infrastructure, i.e., the real-time usage of hardware and the utilization of software.
Objective: To address the issues with virtualization, this paper analyzes various virtualization terminology, treating the most effective ways to reduce IT expenses while boosting efficiency and deployment for businesses of all sizes.
Methods: This paper discusses the scenarios in which dynamic VM consolidation faces various challenges. Dynamic consolidation of virtual machines can increase the utilization of the physical setup and focuses on reducing power consumption through VM movement over a stipulated period: gathering the needs of all VMs running the application, adjusting the virtual machines and fitting the virtual resources onto suitable physical machines, and profiling and scheduling virtual CPUs onto other physical resources. Utilization can be increased further by performing live migration according to a planned schedule of virtual machine allotment.
Results: Recent trends in dynamic VM consolidation largely rely on heuristic-based techniques, with approaches based on static as well as adaptive utilization thresholds, and on SLA metrics such as SLATAH (SLA violation Time per Active Host), which depends on the time an active host spends at the 100% CPU utilization threshold.
Conclusion: A cloud provider's decision on choosing virtual machines for an application also depends on its decision support system, which considers data storage and other parameters. Approaches are compared for continuous workload distribution as well as against changing computational demands and various VM placement optimization strategies.
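Reading SLATAH as the usual SLA violation Time per Active Host metric, i.e. the mean fraction of its active time each host spends at 100% CPU utilization, a minimal computation looks like this (the utilization traces are illustrative):

```python
def slatah(host_traces, threshold=1.0):
    """host_traces: per-host lists of CPU-utilization samples in [0.0, 1.0].
    Returns the mean fraction of active time each host spent fully saturated."""
    ratios = []
    for samples in host_traces:
        active = [u for u in samples if u > 0.0]           # intervals when the host is on
        if active:
            saturated = sum(1 for u in active if u >= threshold)
            ratios.append(saturated / len(active))
    return sum(ratios) / len(ratios) if ratios else 0.0

traces = [
    [0.4, 0.9, 1.0, 1.0, 0.7],    # host 1: saturated 2 of 5 active intervals
    [0.0, 0.0, 0.8, 1.0, 0.6],    # host 2: saturated 1 of 3 active intervals
]
print(round(slatah(traces), 3))   # (2/5 + 1/3) / 2 ~ 0.367
```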
Peak Average Power Reduction in NOMA by using PTSCT Technique
Authors: Arun Kumar and Manisha Gupta
Background: High peak power is one of several disadvantages that need to be addressed for the effective realization of the Non-Orthogonal Multiple Access (NOMA) system. It hampers system performance due to the use of orthogonal frequency division multiplexing transmission at the sender of the NOMA system.
Objective: In this work, a new Partial Transmission Sequence Circular Transformation (PTSCT) reduction technique is designed for NOMA schemes.
Methods: The partial transmission sequence is considered one of the most efficient techniques for reducing the peak-to-average power ratio, but it leads to high computational complexity. In the proposed technique, a circular transformation and alternate optimization are therefore employed.
Results: Simulation results reveal that the peak-to-average power ratio performance of the proposed technique is better than that of the conventional partial transmission sequence.
Conclusion: It is observed that the proposed technique achieves 80% peak-to-average power ratio reduction and 90% bit error rate performance compared with the conventional partial transmission sequence.
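PAPR and the conventional partial transmit sequence baseline are standard; the sketch below computes the PAPR of an OFDM symbol and performs the exhaustive phase search over sub-blocks that the proposed circular-transformation variant is meant to improve on (block partitioning, phase set and symbol mapping are illustrative).

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
N, V = 64, 4                                   # subcarriers, PTS sub-blocks
phases = [1, -1, 1j, -1j]                      # candidate phase rotations per sub-block

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

X = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], N)  # QPSK symbols on the subcarriers
blocks = [np.where(np.arange(N) % V == v, X, 0) for v in range(V)]   # interleaved partition
time_blocks = [np.fft.ifft(b) for b in blocks]

# Exhaustive PTS search: pick the phase weights giving the lowest PAPR
best = min(
    (sum(w * tb for w, tb in zip(weights, time_blocks))
     for weights in product(phases, repeat=V)),
    key=papr_db,
)
print("original PAPR: %.2f dB, PTS PAPR: %.2f dB"
      % (papr_db(np.fft.ifft(X)), papr_db(best)))
```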
Enhancing Resiliency Feature in Smart Grids through a Deep Learning Based Prediction Model
Authors: Abderrazak Khediri, Mohamed R. Laouar and Sean B. Eom
Background: Enhancing the resiliency of electric power grids is becoming a crucial issue due to the outages that have recently occurred. One solution is the prediction of imminent failures engendered by line contingencies or grid disturbances. A number of researchers have therefore begun investigating techniques for predicting outages; however, extended blackouts can still occur due to the frailty of distribution power grids.
Objective: This paper implements a proactive prediction model based on deep belief networks that predicts imminent outages using previous historical blackouts, triggers alarms and suggests solutions for blackouts. These actions can prevent outages, stop cascading failures and diminish the resulting economic losses.
Methods: The proposed model is divided into three phases: A, B and C. Phase A collects and extracts data and trains the deep belief network using the collected data. Phase B defines the power outage threshold and determines whether the grid is in a normal state. Phase C detects potential unsafe events, triggers alarms and proposes emergency action plans for restoration.
Results: Different machine learning and deep learning algorithms, such as random forests and Bayesian networks, are used in our experiments to validate the proposition. The deep belief network achieves 97.30% accuracy and 97.06% precision.
Conclusion: The obtained findings demonstrate that the proposed model is well suited to blackout prediction and that the deep belief network is a powerful deep learning tool that offers plausible results.
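scikit-learn has no full deep-belief-network implementation, but stacking a BernoulliRBM feature extractor on a logistic classifier gives a rough, greedy layer-wise flavour of the idea on synthetic stand-in data; this is only an illustrative sketch, not the authors' network or dataset.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import accuracy_score, precision_score

# Synthetic stand-in for historical grid-state / outage records (imbalanced classes)
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = Pipeline([
    ("scale", MinMaxScaler()),                 # RBMs expect inputs in [0, 1]
    ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
]).fit(X_tr, y_tr)

pred = model.predict(X_te)
print("accuracy=%.4f precision=%.4f"
      % (accuracy_score(y_te, pred), precision_score(y_te, pred)))
```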
Research on Java Programming Course Based on CDIO and Iterative Engineering Teaching Pattern
By Cai Yang
Background: The Java programming course is widely offered in universities. It covers a great deal of content and is highly practical, so learning Java programming is considered a difficult and challenging task for beginners: students must learn many programming skills to master the course effectively. However, it is often reported that the results of teaching Java programming are poor, mainly reflected in stereotyped teaching methods, a lack of project development experience and so on. To investigate and solve these problems, many education experts have conducted in-depth research. The CDIO (Conceiving-Designing-Implementing-Operating) engineering education model is the latest achievement of engineering education reform in recent years. It takes the life cycle from product development to product operation as the carrier, enabling students to learn engineering in an active, practical and comprehensive way. The concept of CDIO engineering is introduced here to address the problems in the Java programming course.
Methods: Firstly, the research status of the Java programming course and applications of the CDIO model were analyzed. Secondly, the current learning situation was analyzed by means of a questionnaire survey, and the main problems in the current teaching project were listed. The ideas of CDIO engineering education and the iteration mode were then applied to the Java programming course, and the development methods and strategies of the new teaching mode are analyzed in detail from various perspectives. Finally, the teaching model was applied to the existing teaching process, and its effect was verified by statistical data.
Results: The experimental results show that the new teaching mode encouraged students to master programming knowledge as well as problem-solving strategies. Students' interest in learning increased and their comprehensive ability improved. Compared with traditional teaching methods, teachers tend to adopt the CDIO teaching method. The statistics on teaching effect cover six aspects: learning initiative, learning interest, knowledge-related ability, communication and practical ability, practical skills and final examination scores. The final examination results also showed that students taught with the new method performed better than those taught with the older method.
Conclusion: A new teaching model based on graded iteration and the CDIO engineering education mode is proposed for the problems existing in the teaching process of the Java programming course. This paper creatively combines CDIO engineering ideas with the Java programming course and introduces the idea of hierarchical iteration. According to this idea, the knowledge structure of the course is put forward, and the CDIO teaching method is adopted to attract students to study Java programming. The basic characteristics of the teaching mode are that the project is the main line, the teacher is the leading role and the students are the main body, so as to cultivate students' comprehensive engineering ability. By strengthening classroom teaching and practice teaching, the new model improves the Java teaching process and enhances the teaching effect. Teaching practice proves that the new model can mobilize students' enthusiasm and improve their practical ability, and it is worth popularizing.
A Prediction based Cloud Resource Provisioning using SVM
Authors: Vijayasherly Velayutham and Srimathi Chandrasekaran
Aim: To develop a prediction model grounded in machine learning using a Support Vector Machine (SVM).
Background: Prediction of workload in a cloud environment is one of the primary tasks in provisioning resources. Forecasting future workload requirements depends on a prediction technique that can maximize the usage of resources in a cloud computing environment.
Objective: To reduce the training time of the SVM model.
Methods: K-Means clustering is first applied to the training dataset to form n clusters. Then, for every tuple in a cluster, the tuple's class label is compared with the tuple's cluster label. If the two labels are identical, the tuple is rightly classified and would not contribute much to the SVM training process that formulates the separating hyperplane with the lowest generalization error; otherwise, the tuple is added to the reduced training dataset. This selective addition of tuples for SVM training is carried out for all clusters. The support vectors are the few samples in the reduced training dataset that determine the optimal separating hyperplane.
Results: On the Google Cluster Trace dataset, the proposed model reduced the training time and the Root Mean Square Error, with a marginal increase in the R2 score compared with the traditional SVM. The model was also tested on Los Alamos National Laboratory's Mustang and Trinity cluster traces.
Conclusion: CloudSim's CPU utilization (VM and cloudlet utilization) was measured and found to increase when running the same set of tasks through the proposed model.
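One plain reading of the reduction step, mapping each K-Means cluster to its majority class and keeping only the tuples that disagree with it as the "hard" points for SVM training, can be sketched as follows (synthetic data; the real model is trained on cluster workload traces and evaluated with RMSE and the R2 score).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=3000, n_features=10, n_classes=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Step 1: cluster the training set and map each cluster to its majority class
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X_tr)
majority = {c: np.bincount(y_tr[km.labels_ == c]).argmax() for c in range(8)}

# Step 2: keep only tuples whose class disagrees with their cluster's majority,
# i.e. the points most likely to shape the separating hyperplane
keep = np.array([y_tr[i] != majority[km.labels_[i]] for i in range(len(y_tr))])
if not keep.any():
    keep[:] = True                      # degenerate case: fall back to the full set
X_red, y_red = X_tr[keep], y_tr[keep]

print("training set reduced from %d to %d tuples" % (len(X_tr), len(X_red)))
svm = SVC(kernel="rbf").fit(X_red, y_red)
print("accuracy on held-out data: %.3f" % accuracy_score(y_te, svm.predict(X_te)))
```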