Recent Advances in Computer Science and Communications - Volume 13, Issue 3, 2020
IP Traceback using Flow Based Classification
Authors: Yerram Bhavani, Vinjamuri Janaki and Rangu Sridevi
Background: The Distributed Denial of Service (DDoS) attack is a major threat on the internet. IP traceback mechanisms defend against DDoS attacks by tracing the path traversed by attack packets. The traceback techniques proposed to date suffer from several shortcomings: the victim requires a large number of packets to trace the attack path, and this requirement leads to a large number of combinations and many false positives.
Methods: The Chinese Remainder Theorem is applied to generate a unique value for the IP address of each router in the attack path. This helps the victim combine the correct parts of each IP address. The K-Nearest Neighbor (KNN) algorithm is also applied to classify packets according to their traffic flow, which reduces the number of packets needed to reconstruct the attack path.
Results: The proposed approach is compared with existing approaches, and the results demonstrate that the attack graph is constructed effectively, with higher precision and lower combination overhead, under large-scale DDoS attacks. In this approach, packets from diverse flows are separated by flow information using the KNN algorithm, so the reconstruction procedure can be applied to each group separately to construct the multiple attack paths. This yields reconstruction of the complete attack graph with fewer combinations and a lower false positive rate.
Conclusion: In the case of DDoS attacks, reconstruction of the attack path plays a major role in revealing the IP addresses of the participating routers without false positives or false negatives. Our algorithm, FRS, improves the recovery of information about even the farthest routers by incorporating a flag condition while marking the packets. The false positive and false negative rates are drastically reduced by applying the Chinese Remainder Theorem to the routers' IP addresses. At the victim, applying the KNN algorithm greatly reduces the combination overhead and the computation cost.
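The heart of such a scheme is arithmetic: each router marks packets with small residues of its 32-bit IP address, and the victim recombines them uniquely because the moduli are pairwise coprime with a product exceeding 2^32. Below is a minimal Python sketch of that Chinese Remainder Theorem step; the moduli, function names and packet format are illustrative assumptions, not the paper's actual FRS marking scheme.

```python
from functools import reduce

# Pairwise coprime moduli whose product (~4.93e9) exceeds 2^32,
# so a 32-bit IP address is uniquely recoverable from its residues.
MODULI = [257, 263, 269, 271]

def ip_to_int(ip: str) -> int:
    a, b, c, d = (int(x) for x in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def mark(ip: str) -> list[int]:
    """Residues a router could embed into marked packets, one per packet."""
    x = ip_to_int(ip)
    return [x % m for m in MODULI]

def reconstruct(residues: list[int]) -> int:
    """CRT recombination at the victim: the unique x with x % m_i == r_i."""
    M = reduce(lambda a, b: a * b, MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # modular inverse (Python 3.8+)
    return x % M

assert reconstruct(mark("192.168.1.7")) == ip_to_int("192.168.1.7")
```

Because recombination is exact rather than combinatorial guessing, mixing residues from different routers is the main remaining source of false positives, which is precisely what the KNN flow separation is meant to prevent.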
Dynamic Consolidation of Virtual Machine: A Survey of Challenges for Resource Optimization in Cloud Computing
Authors: A.M. S. Kani and D. Paulraj
Background: Virtualization is an efficient technology that enables existing data centers to support application workloads effectively. It is based on a guest operating system layer that tracks the real-time usage of hardware and the utilization of software across the infrastructure.
Objective: To address the issues with virtualization, this paper analyzes various virtualization techniques, seeking the most effective way to reduce IT expenses while boosting efficiency and deployment for businesses of all sizes.
Methods: This paper discusses the scenarios in which dynamic VM consolidation meets various challenges. Dynamic consolidation of virtual machines can increase the utilization of the physical infrastructure and reduce power consumption through VM migration within a stipulated period. It involves gathering the requirements of all VMs running in the application, adjusting each virtual machine, fitting the virtual resources onto suitable physical machines, and profiling and scheduling virtual CPUs onto other physical resources. Consolidation can be improved further by performing live migration according to a planned schedule of virtual machine allotment.
Results: Recent approaches to dynamic VM consolidation are largely heuristic-based, with variants that use either static or adaptive utilization thresholds. One such metric, SLA violation Time per Active Host (SLATAH), measures the time an active host spends at the 100% CPU utilization threshold.
Conclusion: A cloud provider's decision on choosing virtual machines for an application also varies with its decision support system, which considers data storage and other parameters. The approaches are compared under continuous workload distribution as well as under changing computational demands and various VM placement optimization strategies.
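The SLATAH metric cited in the results can be computed directly from per-host utilization traces. The sketch below follows the standard definition (the fraction of its active time each host spends saturated, averaged over all active hosts); the trace format, one list of per-interval CPU utilizations per host, is an assumed representation.

```python
def slatah(host_traces, saturation=1.0):
    """Average, over hosts, of the fraction of active time spent at
    (or above) the saturation threshold (100% CPU by default)."""
    fractions = []
    for trace in host_traces:  # per-interval utilizations in [0, 1] for one host
        active = [u for u in trace if u > 0.0]   # intervals the host was switched on
        if not active:
            continue                             # never-active host: excluded
        saturated = sum(1 for u in active if u >= saturation)
        fractions.append(saturated / len(active))
    return sum(fractions) / len(fractions) if fractions else 0.0

# Host 1 is saturated for 2 of its 4 active intervals; host 2 never is.
print(slatah([[0.6, 1.0, 1.0, 0.8, 0.0],
              [0.3, 0.5, 0.4, 0.2, 0.1]]))  # (0.5 + 0.0) / 2 = 0.25
```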
Peak Average Power Reduction in NOMA by using PTSCT Technique
Authors: Arun Kumar and Manisha Gupta
Background: High peak power is one of several disadvantages that must be addressed for effective regularization of the system. It hampers system performance due to the use of the orthogonal frequency division multiplexing (OFDM) transmission scheme at the sender of the Non-Orthogonal Multiple Access (NOMA) system.
Objective: In this work, a new Partial Transmission Sequence Circular Transformation (PTSCT) reduction technique is designed for NOMA schemes.
Methods: The partial transmission sequence is considered one of the most efficient techniques for reducing the peak-to-average power ratio (PAPR), but it entails high computational complexity. To address this, a circular transformation is additionally implemented: the proposed technique combines circular transformation with alternate optimization.
Results: Simulation results reveal that the PAPR performance of the proposed technique is better than that of the conventional partial transmission sequence.
Conclusion: It is observed that the proposed technique achieves 80% PAPR reduction and 90% bit error rate performance compared with the conventional partial transmission sequence.
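The abstract does not detail the circular-transformation step, so the sketch below shows only the conventional PTS baseline the paper improves upon: partition the frequency-domain symbols into sub-blocks, transform each, and exhaustively search phase factors for the combination with the lowest PAPR. The interleaved partition, phase set and block count here are illustrative choices.

```python
import numpy as np
from itertools import product

def papr_db(x):
    """Peak-to-average power ratio of a time-domain signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def pts(symbols, n_sub=4, phases=(1, -1, 1j, -1j)):
    """Conventional PTS: IFFT each interleaved sub-block, then search
    all phase-factor combinations for the lowest-PAPR candidate."""
    n = len(symbols)
    blocks = []
    for v in range(n_sub):
        sub = np.zeros(n, dtype=complex)
        sub[v::n_sub] = symbols[v::n_sub]   # disjoint interleaved partition
        blocks.append(np.fft.ifft(sub))
    best, best_papr = None, np.inf
    for combo in product(phases, repeat=n_sub):   # 4^4 = 256 candidates
        cand = sum(p * blk for p, blk in zip(combo, blocks))
        c = papr_db(cand)
        if c < best_papr:
            best, best_papr = cand, c
    return best, best_papr

# 64 random QPSK symbols as a toy OFDM block.
qpsk = np.exp(1j * np.pi / 2 * np.random.randint(0, 4, 64))
print(f"PAPR after PTS search: {pts(qpsk)[1]:.2f} dB")
```

The exhaustive search over 4^V phase combinations is the computational-complexity cost the abstract refers to and that the proposed circular transformation aims to reduce.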
Enhancing Resiliency Feature in Smart Grids through a Deep Learning Based Prediction Model
Authors: Abderrazak Khediri, Mohamed R. Laouar and Sean B. Eom
Background: Enhancing the resiliency of electric power grids is becoming a crucial issue due to the outages that have recently occurred. One solution could be the prediction of imminent failures engendered by line contingency or grid disturbances. A number of researchers have therefore initiated investigations into techniques for predicting outages. However, extended blackouts can still occur due to the frailty of distribution power grids.
Objective: This paper implements a proactive prediction model based on deep belief networks that predicts imminent outages from historical blackout data, triggers alarms, and suggests solutions for blackouts. These actions can prevent outages, stop cascading failures and diminish the resulting economic losses.
Methods: The proposed model is divided into three phases: A, B and C. Phase A collects and extracts data and trains the deep belief network on the collected data. Phase B defines the power outage threshold and determines whether the grid is in a normal state. Phase C detects potential unsafe events, triggers alarms and proposes emergency action plans for restoration.
Results: Different machine learning and deep learning algorithms, such as random forests and Bayesian networks, are used in our experiments to validate the proposition. The deep belief network achieves 97.30% accuracy and 97.06% precision.
Conclusion: The obtained findings demonstrate that the proposed model is well suited to blackout prediction and that the deep belief network represents a powerful deep learning tool that can offer plausible results.
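Phases B and C amount to a thresholded decision over the network's predicted outage probability. Below is a minimal sketch assuming a scikit-learn-style predict_proba interface; the MLP is only a stand-in for the paper's deep belief network, and the threshold, features and synthetic data are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier  # stand-in for the trained DBN

def assess_grid_state(features, model, outage_threshold=0.5):
    """Phase B/C sketch: score the current grid state, compare it against
    the outage threshold, and raise an alarm with a restoration plan if unsafe."""
    p_outage = model.predict_proba([features])[0][1]
    if p_outage >= outage_threshold:
        return {"state": "unsafe", "p_outage": p_outage,
                "action": "trigger alarm; propose emergency restoration plan"}
    return {"state": "normal", "p_outage": p_outage, "action": None}

# Synthetic stand-in data: 8 grid-state features, binary outage label.
rng = np.random.default_rng(0)
X = rng.random((200, 8))
y = (X[:, 0] + X[:, 1] > 1.2).astype(int)
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000).fit(X, y)
print(assess_grid_state(X[0], clf))
```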
Research on Java Programming Course Based on CDIO and Iterative Engineering Teaching Pattern
By Cai Yang
Background: Java programming is a widely offered university course. It covers extensive content and is highly practical, so learning Java programming is considered a difficult and challenging task for beginners: students must acquire many programming skills in order to master the course effectively. However, the results of teaching Java programming are often reported to be poor, mainly reflected in stereotyped teaching methods and a lack of project development experience. To investigate and solve these problems, many educational experts have conducted in-depth research. The CDIO (Conceiving-Designing-Implementing-Operating) engineering education model is the latest achievement of engineering education reform in recent years. It takes the life cycle from product development to product operation as its carrier, enabling students to learn engineering in an active, practical and comprehensive way. The concepts of CDIO engineering are introduced here to address the problems in the Java programming course.
Methods: Firstly, the research status of the Java programming course and the application of the CDIO model were analysed. Secondly, the current learning situation was analysed by means of a questionnaire survey, and the main problems in current teaching practice were listed. The ideas of CDIO engineering education and the iterative mode were then applied to the Java programming course, and this paper analyses in detail, from various perspectives, the development methods and strategies of the new teaching mode. Finally, the teaching model was applied to the existing teaching process, and its effect was verified by statistical data.
Results: The experimental results show that the new teaching mode encourages students to master programming knowledge as well as problem-solving strategies. Students' interest in learning increased and their comprehensive ability improved. Compared with traditional methods, teachers tended to adopt the CDIO teaching approach. The statistics on teaching effect cover six aspects: learning initiative, learning interest, knowledge-related ability, communication ability, practical skills and final examination scores. The final examination results also showed that students taught with the new method performed better than those taught with the older method.
Conclusion: A new teaching model based on graded iteration and the CDIO engineering education mode is proposed for the problems in the teaching process of the Java programming course. This paper creatively combines CDIO engineering ideas with the Java programming course and introduces the idea of hierarchical iteration. On this basis, the knowledge structure of the course is laid out and CDIO teaching methods are adopted to attract students to the study of Java programming. The basic characteristics of the teaching mode are that the project is the main line, the teacher the leading role and the students the main body, so as to cultivate students' comprehensive engineering ability. By strengthening classroom teaching and practice teaching, the new model improves the Java teaching process and enhances the teaching effect. Teaching practice shows that the new model can mobilize students' enthusiasm and improve their practical ability, and it is worth popularizing.
A Prediction based Cloud Resource Provisioning using SVM
Authors: Vijayasherly Velayutham and Srimathi Chandrasekaran
Aim: To develop a prediction model grounded in machine learning using the Support Vector Machine (SVM).
Background: Predicting workload in a cloud environment is one of the primary tasks in provisioning resources. Forecasting future workload requirements depends on a prediction technique that can maximize resource usage in a cloud computing environment.
Objective: To reduce the training time of the SVM model.
Methods: K-Means clustering is first applied to the training dataset to form 'n' clusters. Then, for every tuple in a cluster, the tuple's class label is compared with the tuple's cluster label. If the two labels are identical, the tuple is correctly classified and would contribute little to the SVM training process that formulates the separating hyperplane with the lowest generalization error; otherwise, the tuple is added to the reduced training dataset. This selective addition of tuples for SVM training is carried out for all clusters. The support vectors are the few samples in the reduced training dataset that determine the optimal separating hyperplane.
Results: On the Google Cluster Trace dataset, the proposed model achieved a reduction in training time and Root Mean Square Error, and a marginal increase in R² score, compared with the traditional SVM. The model has also been tested on Los Alamos National Laboratory's Mustang and Trinity cluster traces.
Conclusion: CloudSim's CPU utilization (VM and Cloudlet utilization) was measured and found to increase when running the same set of tasks through the proposed model.
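The selective-reduction step can be prototyped with scikit-learn. The abstract does not say how a cluster label is matched against a class label, so this sketch maps each cluster to its majority class, an assumption; the classifier (SVC) is likewise a stand-in, since the paper's final model predicts workload and reports regression metrics (RMSE, R² score).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def reduce_training_set(X, y, n_clusters=10, seed=0):
    """Drop tuples whose class label agrees with their cluster's majority
    class: such points sit deep inside class regions and are unlikely to
    become support vectors, so removing them speeds up SVM training."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
    keep = np.zeros(len(X), dtype=bool)
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        majority = np.bincount(y[idx]).argmax()   # cluster label -> dominant class
        keep[idx] = y[idx] != majority            # retain only disagreeing tuples
    return X[keep], y[keep]

# Synthetic workload-like data with overlapping classes (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)

X_red, y_red = reduce_training_set(X, y)
svm = SVC(kernel="rbf").fit(X_red, y_red)         # train only on the reduced set
print(f"kept {len(X_red)}/{len(X)} tuples; {len(svm.support_)} support vectors")
```

Training time for kernel SVMs grows superlinearly in the number of tuples, which is why pruning the easily classified interior points can cut training time with little loss in the learned hyperplane.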