Recent Advances in Computer Science and Communications - Volume 14, Issue 4, 2021
-
Analysis of Relay Node Failure in Heterogeneous Wireless Sensor Networks
Authors: Gholamreza Kakamanshadi, Savita Gupta and Sukhwinder Singh
Introduction: Fault tolerance is an important issue for assuring data reliability, saving energy and prolonging the lifetime of wireless sensor networks. Since sensor nodes, relay nodes, etc. are prone to failure, an effective fault tolerance mechanism is needed.
Methods: Relay nodes are used as cluster heads, and the concept of two disjoint paths is employed to provide fault tolerance against link failure. To evaluate the fault tolerance level, the mean time to failure and, subsequently, the failure rate are calculated; these reflect the reliability of the network.
Results: The results show that as the area of the network increases, the average fault tolerance level of the network becomes constant. Furthermore, as the mean time to failure of the network decreases, the failure rate increases: the overall reliability of a smaller network is therefore higher than that of a larger one.
Discussion: This paper presents a detailed analysis of relay node failure under distinct network configurations in heterogeneous wireless sensor networks.
Conclusion: This analysis helps network designers decide how many relay nodes to deploy for a given fault tolerance level. It may also help prevent relay node failures by prompting appropriate actions that increase the fault tolerance level and the reliability of the network.
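A minimal sketch of the textbook mean-time-to-failure relationship the abstract relies on, assuming the usual exponential failure model; the MTTF values and mission time are illustrative, not taken from the paper:

    # Failure rate and reliability from MTTF under an exponential model.
    # The numbers below are illustrative, not the paper's data.
    import math

    def failure_rate(mttf_hours: float) -> float:
        """Constant failure rate (per hour) implied by a given MTTF."""
        return 1.0 / mttf_hours

    def reliability(t_hours: float, mttf_hours: float) -> float:
        """R(t) = exp(-t / MTTF) for an exponentially failing relay node."""
        return math.exp(-t_hours / mttf_hours)

    # A network with a larger MTTF has a lower failure rate, hence higher
    # reliability at any mission time t, matching the trend reported above.
    for mttf in (5000.0, 2000.0):
        print(mttf, failure_rate(mttf), reliability(1000.0, mttf))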
-
An Empirical Study of Predictive Model for Website Quality Analytics Using Dataset of Different Domains of Websites
Author: Divya Gupta
Introduction: Web analytics is the process of examining websites to uncover patterns, correlations, trends, insights, and other useful information that can be utilized to optimize web usage and improve the quality of a website.
Methods: This research offers an approach that links website assessment to user satisfaction and acceptance. The proposed WQA (Website Quality Analytics) Model considers websites from seven domains and, using 13 UX-based quality attributes, evaluates the quality of the websites in each domain. The quality assessment is automated using supervised learning models that predict good, average, and bad websites.
Results: A real-time dataset of website domains was assessed, and websites were predicted as good, average, or bad using the algorithms.
Discussion: A website quality model essentially consists of a set of criteria used to determine whether a website reaches a certain level of excellence. User Experience (UX) directly measures the quality of site interactions and is an indirect indicator of site success and customer conversions: a bad UX bounces visitors away to seek a more reliable website, while every second a user spends on a website is attributable to good UX. Hence, evaluating the quality of websites is essential for determining user acceptance; that is, users are the measure of a site's success.
Conclusion: The feature (attribute)-based predictive model for quality analytics is empirically analyzed for five classification algorithms. A qualitative analysis of the domain-wise classification of websites is also presented.
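A hedged sketch of the WQA-style pipeline: 13 UX attribute scores per website, classified into good/average/bad with a supervised model. The data and the choice of a decision tree are illustrative assumptions; the paper evaluates five algorithms on real websites:

    # Toy version of supervised good/average/bad website classification.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.random((300, 13))                      # 13 UX attribute scores in [0, 1]
    y = np.digitize(X.mean(axis=1), [0.45, 0.55])  # 0 = bad, 1 = average, 2 = good

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
    print("accuracy:", clf.score(X_te, y_te))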
-
Adaptive Energy-Aware Algorithms to Minimize Power Consumption and SLA Violation in Cloud Computing
Authors: Monika Singh, Pardeep Kumar and Sanjay Tyagi
Objective: With the establishment of virtualized datacenters on a large scale, cutting-edge technology requires ever more energy to deliver services 24x7. With this expansion, and with information accumulating on a massive scale in data centers, excessive power consumption results in high operational costs. There is therefore an urgent need to make the environment more adaptive and dynamic, in which the over-utilization and under-utilization of hosts are known to the system and proactive measures can be taken accordingly. To serve this purpose, an energy-efficient method for detecting overloaded and under-loaded hosts is proposed in this paper. For VM migration, a VM placement decision is also taken so as to save energy and reduce the SLA (Service Level Agreement) violation rate over the cloud.
Methods: A novel adaptive heuristic approach is presented that performs dynamic consolidation of VMs based on data gathered about the resource usage of the VMs, while ensuring a high level of adherence to the SLA. After identification of under-loaded and overloaded hosts, the VM placement decision is taken in the way that consumes the minimum energy. A minimum-migration policy is adopted to minimize execution time. The effectiveness and efficiency of the suggested approach are validated using real-world workload traces in the CloudSim simulator.
Results: The results show that the proposed methodology is ideal with respect to SLA but costs more in VM migrations.
Conclusion: Existing energy-efficient approaches must be analyzed in depth, and a new platform should be proposed to save energy in real-life deployments.
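A hedged sketch of threshold-based overload/under-load detection and a minimum-migration choice, in the spirit of the consolidation heuristics described above. The thresholds, host data, and largest-VM-first selection are illustrative assumptions; the paper's actual policy may differ:

    # Classify a host by CPU utilization and pick the fewest VMs to migrate.
    OVER, UNDER = 0.80, 0.20   # assumed utilization thresholds

    def classify_host(util: float) -> str:
        if util > OVER:
            return "overloaded"
        if util < UNDER:
            return "under-loaded"
        return "normal"

    def vms_to_migrate(host_util: float, vm_utils: list[float]) -> list[int]:
        """Pick the fewest (largest) VMs whose removal brings the host below OVER."""
        order = sorted(range(len(vm_utils)), key=lambda i: -vm_utils[i])
        chosen = []
        for i in order:
            if host_util <= OVER:
                break
            host_util -= vm_utils[i]
            chosen.append(i)
        return chosen

    print(classify_host(0.91), vms_to_migrate(0.91, [0.10, 0.25, 0.05]))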
-
Enhance the Quality of Collaborative Filtering Using Tagging
Authors: Latha Banda and Karan Singh
Background: Due to the enormous amount of data on web sites, recommending items to every user individually is impossible. To address this problem, Recommender Systems (RS) were introduced. RS are categorized into Content-Based (CB), Collaborative Filtering (CF), and hybrid RS, and recommendations are made to the user based on these techniques. Among these, CF is the most recent technique used in RS, in which tagging features are also provided.
Objective: Three main issues occur in RS: the scalability problem, which occurs when there is a huge amount of data; the sparsity problem, which occurs when rating data is missing; and the cold-start user or item problem, which occurs when a new user or new item enters the system. To avoid these issues, we propose a Tag and Time weight model with a GA for Collaborative Tagging.
Methods: We propose Collaborative Tagging (CT) with a Tag and Time weight model and a real-valued genetic algorithm, which enhances recommendation quality by removing the issues of sparsity and cold-start users with the help of missing-value prediction. In this system, the sparsity problem is removed using missing-value prediction, and cold-start problems are removed using the tag and time weight model with the GA.
Results: In this study, we compared the results of Collaborative Filtering with Cosine Similarity (CF-CS), Collaborative Filtering with Diffusion Similarity (CF-DS), the Tag and Time Weight Model with Diffusion Similarity (TAW-TIW-DS), and the Tag and Time Weight Model using Diffusion Similarity and a Genetic Algorithm (TAW-TIW-DS-GA).
Conclusion: We compared the proposed approach with the baseline approaches using MAE, prediction percentage, hit-rate, and hit-rank as metrics. On these metrics, for every split, TAW-TIW-DS-GA showed the best results compared to the existing approaches.
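A minimal sketch of the kind of time weighting used in tag-aware CF: older interactions contribute less via an exponential decay. The decay form and rate are common choices in the literature, assumed here rather than taken from the paper:

    # Time-weighted rating aggregation for one (user, item) neighborhood.
    import math

    def time_weight(t_event: float, t_now: float, lam: float = 0.01) -> float:
        """Exponential decay weight for an interaction made at t_event (days)."""
        return math.exp(-lam * (t_now - t_event))

    def weighted_rating(ratings_with_days, t_now: float) -> float:
        """Time-weighted average of (rating, day) pairs; recent ratings dominate."""
        num = sum(r * time_weight(t, t_now) for r, t in ratings_with_days)
        den = sum(time_weight(t, t_now) for _, t in ratings_with_days)
        return num / den

    # The recent 5-star rating outweighs the old 2-star one.
    print(weighted_rating([(5.0, 10.0), (2.0, 300.0)], t_now=365.0))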
-
Network Selection in Wireless Heterogeneous Environment Based on Available Bandwidth Estimation
Authors: Kiran Ahuja, Brahmjit Singh and Rajesh Khanna
Background: With multiple wireless network options available simultaneously, Always Best Connected (ABC) operation requires dynamic selection of the best network and access technology.
Objective: In this paper, a novel dynamic access-network selection algorithm based on real-time measurements is proposed. The Available BandWidth (ABW) of each network must be estimated to solve the network selection problem.
Methods: The proposed algorithm estimates available bandwidth by taking averages, peaks, low points, and a bootstrap approximation for network selection. It monitors the real-time internet connection and resolves the connection selection issue. The algorithm adapts to prevailing network conditions in a heterogeneous environment of 2G, 3G, and WLAN networks without user intervention, and is implemented in the temporal and spatial domains to check its robustness. Estimation error, overhead, estimation time with varying traffic size, and reliability are used as the performance metrics.
Results: Numerical results show that the proposed algorithm's ABW estimation based on bootstrap approximation gives improved performance with respect to existing techniques in terms of estimation error (less than 20%), overhead (varying from 0.03% to 83%), and reliability (approximately 99%).
Conclusion: The proposed network selection criterion estimates the available bandwidth by taking averages, peaks, low points, and the bootstrap approximation method (standard deviation) to select a network in the wireless heterogeneous environment. It monitors the real-time internet connection and resolves connection selection issues. Real-time usage and test results demonstrate the productivity and adequacy of available bandwidth estimation with bootstrap approximation as a practical solution for consistent communication among heterogeneous wireless networks through precise network selection for multimedia services.
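A hedged sketch of bootstrap approximation applied to bandwidth samples: resample observed throughput measurements with replacement and use the resampled means and their spread to compare networks. The throughput samples are invented for illustration:

    # Bootstrap estimate of available bandwidth (mean and standard deviation).
    import random

    def bootstrap_abw(samples, n_boot: int = 1000, seed: int = 0):
        rng = random.Random(seed)
        means = []
        for _ in range(n_boot):
            resample = [rng.choice(samples) for _ in samples]
            means.append(sum(resample) / len(resample))
        mu = sum(means) / n_boot
        sd = (sum((m - mu) ** 2 for m in means) / n_boot) ** 0.5
        return mu, sd  # estimated ABW and its bootstrap standard deviation

    wlan = [5.1, 4.8, 6.0, 5.5, 4.9]   # Mbps samples per network (illustrative)
    g3 = [1.2, 1.4, 1.1, 1.3, 1.2]
    best = max([("WLAN", wlan), ("3G", g3)], key=lambda kv: bootstrap_abw(kv[1])[0])
    print("select:", best[0])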
-
Cost-Effective Cluster-Based Energy Efficient Routing for Green Wireless Sensor Network
Authors: Sandeep Verma, Neetu Sood and Ajay K. Sharma
Background: Green Information and Communications Technologies (ICTs) have brought a revolution in efficiently uplifting technology to facilitate the human sector in the best possible way. Green Wireless Sensor Network (WSN) research focuses tactically on improving the survival period of nodes deployed in a target area, as they have limited batteries.
Objective: To address this concern, the main objective is to improve routing in WSNs. Cluster-based routing helps achieve this through appropriate Cluster Head (CH) selection. The use of energy-heterogeneous nodes, which normally include high-energy nodes, puts a heavy financial burden on users because such nodes incur a high cost, becoming a bottleneck for the growth of green WSNs. Therefore, another objective of the study is to reduce this network cost.
Methods: A cost-effective routing protocol is proposed that introduces energy-efficient CH selection by incorporating the parameters of node density, residual energy, total network energy, and a distance factor. The proposed protocol is termed the Cost-Effective Cluster-based Routing Protocol (CECRP), as it performs remarkably well with nodes of only two energy levels compared to state-of-the-art protocols that use nodes of three energy levels.
Results: The simulation results show that CECRP outperforms state-of-the-art protocols on different performance metrics.
Conclusion: Furthermore, the simulation results show that CECRP is 33.33% more cost-effective than the competing protocols; hence CECRP favors green WSNs.
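A minimal sketch of weighted CH selection over the parameters the abstract names (residual energy, node density, total network energy, distance). The weights and the linear scoring form are assumptions for illustration; CECRP's actual scoring may differ:

    # Score each candidate node and elect the highest scorer as CH.
    def ch_score(residual_e, density, total_e, dist_to_sink,
                 w=(0.4, 0.2, 0.2, 0.2)):
        # Higher residual energy and density favor election; distance to the
        # sink is penalized. Inputs other than total_e are normalized to [0, 1].
        return (w[0] * residual_e + w[1] * density
                + w[2] * (residual_e / total_e) - w[3] * dist_to_sink)

    nodes = {
        "n1": ch_score(0.9, 0.7, 10.0, 0.3),
        "n2": ch_score(0.5, 0.9, 10.0, 0.1),
    }
    print("elected CH:", max(nodes, key=nodes.get))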
-
Enhanced Auxiliary Cluster Head Selection Routing Algorithm in Wireless Sensor Networks
Authors: Gaurav K. Nigam and Chetna Dabas
Background & Objective: Wireless sensor networks are made up of a huge number of low-powered small sensor nodes that can monitor the surroundings, collect meaningful data, and send it to the base station. Various energy management schemes that seek to lengthen the lifetime of the overall network have been proposed over the years, but energy conservation remains the major challenge, as the sensor nodes have finite batteries and low computational capability. Cluster-based routing is the most fitting scheme for supporting load balancing, fault tolerance, and reliable communication to improve the performance parameters of a wireless sensor network. Low Energy Adaptive Clustering Hierarchy (LEACH) is an efficient clustering-based hierarchical protocol used to enhance the lifetime of sensor nodes in a wireless sensor network. It has some basic flaws that need to be overcome in order to reduce energy utilization and prolong node lifetime.
Methods: In this paper, an effective auxiliary cluster head selection is used to propose a new enhanced GC-LEACH algorithm that minimizes energy utilization and prolongs the lifespan of the wireless sensor network.
Results & Conclusion: Simulations performed in NS-2 show that GC-LEACH outperforms the conventional LEACH and its existing variants in the context of frequent cluster head rotation across rounds and the number of data packets collected at the base station, reducing energy consumption by 14% - 19% and prolonging the system lifetime by 8% - 15%.
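For context, the classic LEACH cluster-head election rule that GC-LEACH builds on: node n elects itself CH in round r if a uniform random draw falls below the threshold T(n). This is the standard LEACH rule; GC-LEACH's auxiliary-CH refinement is not reproduced here:

    # Standard LEACH self-election threshold, p = desired CH fraction.
    import random

    def leach_threshold(p: float, r: int) -> float:
        """T(n) = p / (1 - p * (r mod 1/p)) for nodes not yet CH this epoch."""
        return p / (1.0 - p * (r % round(1.0 / p)))

    def elects_itself(p: float, r: int, rng: random.Random) -> bool:
        return rng.random() < leach_threshold(p, r)

    rng = random.Random(1)
    print([elects_itself(0.05, r, rng) for r in range(5)])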
-
Reliability Analysis and Modeling of Green Computing Based Software Systems
Authors: Sangeeta Malik, Kapil Sharma and Manju Bala
Background: Software industries are growing very fast to develop new solutions and ease people's lives. Software reliability is considered a critical factor in today's growing digital world, and software reliability models are among the most widely used mathematical tools for estimating it. Such reliability models can be applied to the development of sustainable, green-computing-based software with its constrained development environments.
Objective: This paper proposes a new reliability estimation model for software systems built in a green IT environment.
Methods: A new failure-rate-behavior-based model centered on the green software development life cycle process is developed. The model integrates a new modulation factor to incorporate the changing needs in each phase of the green software development methodology. Parameter estimation for the proposed model is done using a hybrid of Particle Swarm Optimization and the Gravitational Search Algorithm, and the model is tested on real-world datasets.
Results: Experimental results show the enhanced capability of the proposed model in simulating a real green software development environment. On the GC-1 and GC-2 datasets, the proposed model scores about 60.05%, which is more significant than the other models.
Conclusion: This paper proposed a new failure rate model for software developed under a green IT environment.
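A hedged sketch of metaheuristic parameter estimation for a reliability growth model: fit the two parameters (a, b) of a Goel-Okumoto-style mean value function m(t) = a(1 - exp(-bt)) to failure-count data by minimizing squared error with a bare-bones PSO. The model form, the data, and the use of plain PSO rather than the paper's PSO+GSA hybrid are all assumptions:

    # Minimal PSO fit of a failure-count curve to illustrative data.
    import math, random

    t_obs = [1, 2, 3, 4, 5, 6]
    m_obs = [8, 14, 19, 22, 25, 26]          # cumulative failures (illustrative)

    def sse(params):
        a, b = params
        return sum((a * (1 - math.exp(-b * t)) - m) ** 2
                   for t, m in zip(t_obs, m_obs))

    rng = random.Random(0)
    swarm = [[rng.uniform(1, 50), rng.uniform(0.01, 2)] for _ in range(30)]
    vel = [[0.0, 0.0] for _ in swarm]
    pbest = [p[:] for p in swarm]
    gbest = min(swarm, key=sse)[:]
    for _ in range(200):
        for i, p in enumerate(swarm):
            for d in range(2):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - p[d])
                             + 1.5 * rng.random() * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            p[0] = min(max(p[0], 0.1), 200.0)    # keep parameters in sane bounds
            p[1] = min(max(p[1], 1e-3), 5.0)
            if sse(p) < sse(pbest[i]):
                pbest[i] = p[:]
            if sse(p) < sse(gbest):
                gbest = p[:]
    print("fitted (a, b):", gbest, "SSE:", round(sse(gbest), 2))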
-
Multisensory Decision Level Fusion for Improvement in Urban Land Classification
Authors: Rubeena Vohra and Kailash C. Tiwari
Background: Planning and development of urban areas to meet human developmental requirements is an ongoing process across the globe. This, in turn, requires continuous mapping of urban developmental patterns. Remote sensing and image processing techniques have greatly facilitated the study of these patterns by mapping urban areas.
Objective: The objective of the paper is to carry out object-based classification in urban environments by applying fusion techniques to multisensory data in order to classify natural and man-made objects. Multisensory data fusion using spectral and spatial features is performed to improve the classification accuracy.
Methods: The performance of the proposed framework is verified by investigating multistage feature-level fusion. Spatial and spectral features are explored using feature-level fusion between multisensory data, and the database is then classified using a linear SVM classifier. The individual probabilities (confidence measures) from each pair of binary SVMs are combined to uniquely assign the object feature to one of the classes: after summing all the probabilities, the class with the highest probability value represents the object through decision-level fusion.
Results: The results explore spatial and spectral features in a unique way using connected component analysis, and an improvement in classification accuracy is achieved.
Conclusion: The results reveal that the overall accuracies of the SVM classifier are in the range of 53% to 90% for the various classes, which is improved to 96-98% by the majority voting rule.
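A minimal sketch of the decision-level fusion step described above: sum the pairwise (one-vs-one) binary SVM class probabilities and take the argmax. The toy features stand in for the fused spectral/spatial descriptors, which are not reproduced here:

    # Pairwise SVM probability summation as decision-level fusion.
    import numpy as np
    from itertools import combinations
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(150, 4)) + np.repeat(np.arange(3), 50)[:, None]
    y = np.repeat(np.arange(3), 50)          # 3 land-cover classes

    votes = np.zeros((len(X), 3))
    for a, b in combinations(range(3), 2):   # one binary SVM per class pair
        mask = (y == a) | (y == b)
        clf = SVC(kernel="linear", probability=True, random_state=0).fit(X[mask], y[mask])
        proba = clf.predict_proba(X)         # columns follow clf.classes_ = [a, b]
        votes[:, a] += proba[:, 0]
        votes[:, b] += proba[:, 1]

    fused = votes.argmax(axis=1)             # class with highest summed probability
    print("fused accuracy:", (fused == y).mean())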
-
An Energy Efficient Routing Approach to Enhance Coverage for Application-Specific Wireless Sensor Networks Using Genetic Algorithm
Authors: Amandeep K. Sohal, Ajay K. Sharma and Neetu Sood
Background: Information gathering is a typical and important task in agriculture monitoring and military surveillance. In these applications, minimizing energy consumption and maximizing network lifetime are of prime importance for green computing. Since wireless sensor networks comprise a large number of sensors with limited battery power, deployed at remote geographical locations to monitor physical events, it is imperative to consume minimal energy while maintaining network coverage. WSNs help in accurate monitoring of remote environments by collecting data intelligently from the individual sensors.
Objective: The paper is motivated by the green computing aspect of wireless sensor networks, and an Energy-efficient Weight-based Coverage Enhancing protocol using a Genetic Algorithm (WCEGA) is presented. WCEGA is designed to achieve continuous monitoring of remote areas for a longer time with the least power consumption.
Methods: The cluster-based algorithm consists of two phases: cluster formation and data transmission. In cluster formation, the selection of cluster heads and cluster members is based on energy- and coverage-efficiency parameters; the governing parameters are residual energy, overlapping degree, node density, and neighbor degree. Data transmission between the CHs and the sink is based on a well-known evolutionary search algorithm, the Genetic Algorithm.
Results: The results of WCEGA are compared with other established protocols and show significant improvements in full coverage and lifetime of approximately 40% and 45%, respectively.
Conclusion: This paper proposes an evolutionary method that improves an energy-efficient clustering protocol for longer full coverage.
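A bare-bones GA over a CH-selection bitstring, in the spirit of WCEGA's evolutionary phase: the fitness rewards the residual energy of elected CHs and penalizes electing too many. The fitness form, rates, and data are illustrative assumptions, not the paper's exact formulation:

    # Minimal genetic algorithm for cluster-head selection.
    import random

    rng = random.Random(0)
    energy = [rng.random() for _ in range(20)]       # residual energy per node

    def fitness(bits):
        chs = [i for i, b in enumerate(bits) if b]
        if not chs:
            return -1.0
        return sum(energy[i] for i in chs) / len(chs) - 0.05 * len(chs)

    def evolve(pop_size=30, gens=100, p_mut=0.05):
        pop = [[rng.randint(0, 1) for _ in energy] for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]           # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = rng.sample(parents, 2)
                cut = rng.randrange(1, len(energy))  # one-point crossover
                child = a[:cut] + b[cut:]
                child = [1 - g if rng.random() < p_mut else g for g in child]
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    best = evolve()
    print("elected CHs:", [i for i, b in enumerate(best) if b])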
-
SQL Versus NoSQL Databases to Assess Their Appropriateness for Big Data Application
Authors: Mohammad A. Kausar and Mohammad Nasar
Background: Nowadays, the digital world is growing rapidly and becoming very demanding in terms of data volume, variety, and velocity. Recently, there have been two major changes in data management: NoSQL databases and Big Data analytics. While they evolved for different reasons, their independent developments complement each other, and their convergence would greatly benefit organizations in making timely decisions over large amounts of multifaceted data sets that may be semi-structured, structured, or unstructured. Several software solutions have emerged to support Big Data analytics on the one hand, while on the other, several NoSQL database packages are available on the market.
Aim and Methods: The main goal of this article is to give an understanding of their perspectives and a complete study relating the future of the several important emerging NoSQL data models.
Results: Evaluating NoSQL databases for Big Data analytics against traditional SQL performance shows that NoSQL databases are a superior alternative for industry scenarios that need high-performance analytics, adaptability, simplicity, and distributed scalability over large data.
Conclusion: The article concludes with the adoption of NoSQL in various markets.
-
Link Stability Based Approach for Route Discovery in MANET Using DSR
Authors: Ganesh K. Wadhwani, Sunil K. Khatri and Sunil K. Muttoo
Background: A Mobile Ad-hoc NETwork (MANET) is a set of devices capable of communicating with each other without the help of any central entity or fixed infrastructure. The absence of fixed access points makes a MANET flexible and deployable in extreme geographical territories. Each device has routing capabilities to facilitate communication among the nodes of the network.
Objectives: 1) To select the stable path with the lowest hop count. 2) To use a backup path in case of link breakage, minimizing the delay incurred in finding an alternate path.
Methods: Dynamic Source Routing (DSR) is modified to choose the most stable path, and a backup path is cached to save route discovery time in case of link failure.
Results: The modified DSR based on link stability and hop count performs better than DSR most of the time.
Conclusion: A modified DSR is proposed that selects the path using hop count and link stability as parameters. Its advantage is that if a link breaks during data communication, the backup path can be used to carry on the data transfer. The analysis is done by varying the node density and counting the number of packets received. Modified DSR gives better results for light to moderate networks, while DSR performs better once the number of nodes increases beyond a certain limit.
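A sketch of the modified-DSR route choice: among discovered routes, prefer the most stable path, breaking ties by hop count, and cache the runner-up as a backup. The stability scores are illustrative; how the paper derives link stability is not reproduced:

    # Rank candidate routes by (stability desc, hops asc); cache a backup.
    def pick_routes(routes):
        """routes: list of (path, stability, hops); returns (primary, backup)."""
        ranked = sorted(routes, key=lambda r: (-r[1], r[2]))
        primary = ranked[0]
        backup = ranked[1] if len(ranked) > 1 else None
        return primary, backup

    routes = [
        (["S", "A", "B", "D"], 0.90, 3),
        (["S", "C", "D"], 0.90, 2),
        (["S", "E", "F", "G", "D"], 0.95, 4),
    ]
    primary, backup = pick_routes(routes)
    print("primary:", primary[0], "backup:", backup[0])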
-
Supervised Classifier Approach for Intrusion Detection on KDD with Optimal MapReduce Framework Model in Cloud Computing
Authors: Ilayaraja Murugan, Hemalatha S., Manickam P., Sathesh K. K. and Shankar K.
Background: Cloud computing is characterized as the provisioning of resources or available services by cloud service providers over the web to their clients. It delivers everything as a service over the web according to client demand, for example the operating system, hardware organization, storage, resources, and software. Nowadays, Intrusion Detection Systems (IDS) play a powerful role, working under the supervision of experts who act when a system is hacked or under intrusion. Most intrusion detection frameworks are built on machine learning strategies, and since the dataset plays a major role in intrusion detection, the Knowledge Discovery in Databases (KDD) dataset is utilized.
Methods: In this paper, intrusion data is detected and classified using Machine Learning (ML) with a MapReduce model. The primary objective of the Hadoop MapReduce model is to reduce the extent of the database: an optimal weight is decided for the reducer model, and the second stage utilizes a Decision Tree (DT) classifier for data detection. The DT classifier uses an appropriate classifier to decide the class labels of non-homogeneous leaf nodes: the decision-tree segment provides a coarse section profile, while the leaf-level classifier yields information about the attributes that influence the label within a segment.
Results: With the proposed approach, the detection accuracy is 96.21%, in comparison with existing classifiers such as Neural Network (NN), Naive Bayes (NB) and K-Nearest Neighbor (KNN).
Conclusion: This study introduced a Hadoop MapReduce model to create diverse mappers and reduce the data using an OBL-GWO strategy.
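A hedged, single-process sketch of the map/reduce split described above, simulated with plain Python rather than an actual Hadoop cluster: mappers emit (label, record) pairs, the reducer aggregates per label, and a decision tree is trained on the reduced data. The records are toy data, and the OBL-GWO weighting is not reproduced:

    # Simulated map/reduce feeding a decision-tree intrusion classifier.
    from collections import defaultdict
    from sklearn.tree import DecisionTreeClassifier

    records = [([0.1, 0.9], "normal"), ([0.8, 0.2], "attack"),
               ([0.2, 0.8], "normal"), ([0.9, 0.1], "attack")]

    def mapper(record):
        features, label = record
        return label, features                # emit a key-value pair

    def reducer(pairs):
        grouped = defaultdict(list)           # aggregate features per label
        for label, features in pairs:
            grouped[label].append(features)
        return grouped

    grouped = reducer(map(mapper, records))
    X = [f for fs in grouped.values() for f in fs]
    y = [lbl for lbl, fs in grouped.items() for _ in fs]
    clf = DecisionTreeClassifier().fit(X, y)
    print(clf.predict([[0.85, 0.15]]))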
-
Transaction Issues in Mobile Distributed Real-Time Database Systems
Authors: Prakash K. Singh and Udai Shanker
In recent years, a large portion of the population has come to depend on mobile database technology, and it is difficult to imagine our lifestyle in the absence of databases. Today's handy portable mobile devices take part in emerging technologies for sharing distributed applications and/or information between many users, even on the move (from one network to another). Managing the resulting large volume of data in a wireless environment under time constraints such as deadlines makes this a fertile area of research. Fast transaction processing in many industrial applications needs efficient algorithms and protocols in the field of Mobile Distributed Real-Time Database Systems (MDRTDBS). Transaction execution in a mobile environment raises various interesting research issues, such as low bandwidth, storage capacity, power backup, priority scheduling policies, concurrency and commit protocols, security, and check-pointing. This paper first addresses the performance issues that are important to MDRTDBS and then surveys the research done so far. It thus provides ground knowledge for addressing the performance issues important to mobile distributed real-time databases and helps identify future areas of research in the field of MDRTDBS.
-
Predicting Suitable Agile Method Using Fuzzy AHP
Authors: Rajbala Singh, Deepak Kumar and Bharat B. Sagar
Agile methodology embraces changing requirements, and its journey has come a long way in the software industry, giving an edge as well as new dimensions to software products. According to the statistics, agile projects are more successful than traditional projects: the agile team manages the project more efficiently and delivers a quality product. In agile methodology, customer satisfaction is a priority, owing to rapid development and continuous delivery, which is the core of Dynamic System Development (DSD); thus the industry keeps pace with new expertise and changing market situations. This paper presents the use of the Fuzzy Analytic Hierarchy Process (FAHP), a Multi-Criteria Decision Making (MCDM) method used in the software industry, to correlate the selected alternatives using triangular fuzzy numbers and decide on the best agile process. The methodology of Fuzzy AHP for agile process selection is discussed, and by working through Fuzzy AHP comprehensively and numerically, the best likely process selection for agile methodology among the various criteria and decision-making issues is implemented.
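A minimal Fuzzy AHP sketch using triangular fuzzy numbers (l, m, u): fuzzy weights via Buckley's geometric-mean method, then centroid defuzzification. The pairwise judgments are invented for illustration, and the paper's specific criteria are not reproduced:

    # Buckley-style Fuzzy AHP over a 2x2 fuzzy pairwise comparison matrix.
    import math

    def geomean(row):
        """Component-wise geometric mean of a row of TFNs (l, m, u)."""
        n = len(row)
        return tuple(math.prod(t[k] for t in row) ** (1 / n) for k in range(3))

    M = [
        [(1, 1, 1), (2, 3, 4)],        # criterion 1 vs 1, 1 vs 2
        [(1/4, 1/3, 1/2), (1, 1, 1)],  # criterion 2 vs 1, 2 vs 2
    ]
    r = [geomean(row) for row in M]                 # fuzzy geometric means
    total = tuple(sum(ri[k] for ri in r) for k in range(3))
    # Fuzzy weight w_i = r_i / total (divide l by u, m by m, u by l):
    w = [(ri[0] / total[2], ri[1] / total[1], ri[2] / total[0]) for ri in r]
    crisp = [sum(t) / 3 for t in w]                 # centroid defuzzification
    print("criterion weights:", [c / sum(crisp) for c in crisp])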
-
Analysis of Optimum Precoding Schemes in Millimeter Wave System
Authors: Divya Singh and Aasheesh Shukla
Background: Millimeter wave technology is an emerging technology in wireless communication owing to the increased demand for data traffic and its numerous advantages; however, it suffers from severe attenuation. To mitigate this attenuation, phased antenna arrays are used for directional power transmission. Initial access is needed to make a connection between the base station and the users in a millimeter wave system. The high complexity and cost can be mitigated by the use of hybrid precoding schemes, which reduce the complexity, power consumption, and cost by using phase shifters in place of converters. The use of phase shifters also increases the spectral efficiency.
Objective: To analyze optimum precoding schemes in millimeter wave systems.
Methods: In this paper, the suitability of existing hybrid precoding solutions is explored on the basis of the different algorithms and architectures used to increase the average achievable rate. Previous work on hybrid precoding is also compared on the basis of the resolution of the phase shifters and digital-to-analog converters.
Results: Previous work is compared on different parameters, such as the resolution of the phase shifters and digital-to-analog converters, power consumption, and spectral efficiency. Spectral efficiency indicates the average achievable rate of the different algorithms at SNR = 0 dB and 5 dB. The performance achieved by the hybrid precoder in the fully connected structure is also compared with two existing approaches, the dynamic subarray structure with and without switches and the sub-connected (partially connected) structure, and a comparative analysis of hybrid precoding with different resolutions of the phase shifters and DACs is given.
Conclusion: In this paper, the available literature on hybrid precoding in millimeter wave communication is reviewed and summarized. Current hybrid precoding solutions are reviewed and compared in terms of efficiency, power consumption, and effectiveness. The limitations of the existing hybrid precoding algorithms lie in the selection of groups and the resolution of the phase shifters. mmWave massive MIMO is feasible largely thanks to hybrid precoding.
-
Comparative Study of Fuzzy PID and PID Controller Optimized with Spider Monkey Optimization for a Robotic Manipulator System
Authors: Alka Agrawal, Vishal Goyal and Puneet Mishra
Background: Robotic manipulator systems are useful in many areas, such as the chemical industry, automobiles, and medicine. It is therefore essential to implement a controller that controls the end position of a robotic arm effectively. However, with the increasing non-linearity and complexity of robotic manipulator systems, a conventional Proportional-Integral-Derivative (PID) controller has become ineffective. Nowadays, intelligent techniques like fuzzy logic, neural networks, and optimization algorithms have emerged as efficient tools for controlling highly complex nonlinear functions with uncertain dynamics.
Objective: To implement an efficient and robust controller using fuzzy logic to effectively control the end position of a single-link robotic manipulator so that it follows the desired trajectory.
Methods: In this paper, a fuzzy PID controller is implemented whose parameters are obtained with the Spider Monkey Optimization technique, taking the Integral of Absolute Error (IAE) as the objective function.
Results: Simulated outputs of the plants controlled by the fuzzy PID controller are shown, and the superiority of the implemented controller is demonstrated by comparing it with a conventional PID controller and with the Genetic Algorithm optimization technique.
Conclusion: The results make clear that the fuzzy PID controller optimized with Spider Monkey Optimization is more accurate, fast, and robust than the PID controller as well as the controllers optimized with Genetic Algorithm techniques. Comparing the integral absolute error values of all the controllers shows that the controller optimized with Spider Monkey Optimization achieves 99% better efficacy than the Genetic Algorithm technique.
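A sketch of the IAE objective that such an optimizer minimizes: simulate a discrete PID loop on a simple plant and accumulate |error| over time. The first-order plant model and the gains are illustrative assumptions; the paper's manipulator dynamics and fuzzy rule base are not reproduced:

    # IAE cost of a PID controller on a first-order plant G(s) = gain/(tau*s + 1).
    def iae(kp: float, ki: float, kd: float, dt: float = 0.01, t_end: float = 5.0) -> float:
        y, integ, prev_e, total = 0.0, 0.0, 0.0, 0.0
        setpoint, tau, gain = 1.0, 0.5, 1.0
        for _ in range(int(t_end / dt)):
            e = setpoint - y
            integ += e * dt
            deriv = (e - prev_e) / dt
            u = kp * e + ki * integ + kd * deriv     # PID control law
            y += dt * (-y + gain * u) / tau          # Euler step of the plant
            total += abs(e) * dt                     # Integral of Absolute Error
            prev_e = e
        return total

    # Any optimizer (Spider Monkey, GA, ...) would score candidate gains with:
    print(iae(2.0, 1.0, 0.1))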
-
A Historical Data Based Ensemble System for Efficient Stock Price Prediction
Authors: Vijay K. Dwivedi and Manoj M. Gore
Background: Stock price prediction is a challenging task. Social, economic, political, and various other factors cause frequent, abrupt changes in stock prices. This article proposes a historical-data-based ensemble system to predict the closing stock price with higher accuracy and consistency than existing stock price prediction systems.
Objective: The primary objective of this article is to predict the closing price of a stock for the next trading day in a more accurate and consistent manner than the existing methods employed for stock price prediction.
Methods: The proposed system combines various machine-learning-based prediction models, employing the Least Absolute Shrinkage and Selection Operator (LASSO) regression regularization technique to enhance the accuracy of the stock price prediction system beyond any one of the base prediction models.
Results: The analysis of results for all eleven stocks (listed under the Information Technology sector on the Bombay Stock Exchange, India) reveals that the proposed system performs best, on all defined metrics, over training and test datasets comprising all the stocks considered.
Conclusion: The proposed ensemble model consistently predicts stock prices with a higher degree of accuracy than the existing prediction methods.
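A hedged sketch of the ensemble idea: base regressors produce next-day closing-price predictions, and a LASSO model learns how to combine them. Synthetic prices stand in for BSE data, and the choice of base models is an assumption; a production stack would also use out-of-fold predictions for the combiner:

    # LASSO-blended ensemble of base regressors on lagged closing prices.
    import numpy as np
    from sklearn.linear_model import Lasso, LinearRegression
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(0)
    close = np.cumsum(rng.normal(0.1, 1.0, 400)) + 100   # synthetic price series
    window = 5
    X = np.array([close[i:i + window] for i in range(len(close) - window)])
    y = close[window:]                                   # next-day close
    split = 300
    X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

    bases = [LinearRegression(), KNeighborsRegressor(n_neighbors=5)]
    preds_tr = np.column_stack([m.fit(X_tr, y_tr).predict(X_tr) for m in bases])
    preds_te = np.column_stack([m.predict(X_te) for m in bases])

    combiner = Lasso(alpha=0.1).fit(preds_tr, y_tr)      # LASSO blends the bases
    err = np.abs(combiner.predict(preds_te) - y_te).mean()
    print("ensemble MAE:", round(err, 3))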
-
Improving Data-Throughput in Energy Harvesting Wireless Sensor Networks Using a Data Mule
Authors: Naween Kumar and Dinesh Dash
Background: In Energy Harvesting Wireless Sensor Networks (EH-WSNs), sensors harvest energy from the renewable environment to make their operation endless and uninterrupted. However, in such a network, the time-varying nature of harvesting makes obtaining improved data-throughput a challenging issue. Using a static sink in EH-WSNs to improve data-throughput is less reliable because there is no assurance of network connectivity. To alleviate these shortcomings, a Data Mule (MDM) is introduced into the EH-WSN to collect the sensors' data. In this article, the MDM-based distance-constrained tour finding problem is formulated so that data-throughput can be improved within a given delay constraint.
Methods: To solve the problem, we devise two different heuristic algorithms based on two different metrics.
Results: The experimental results demonstrate that the devised algorithms are more effective than the existing algorithms in terms of data-throughput.
Conclusion: The data-throughput values of the first proposed algorithm are about 6.14% and 3.56% better than the others for data gathering time durations of 100 sec and 800 sec, respectively; those of the second proposed algorithm are about 5.03% and 5.25% better for the same two durations.
-
Synthesis of Emotional Speech by Prosody Modification of Vowel Segments of Neutral Speech
Authors: Md S. Fahad, Shreya Singh, Shruti Gupta, Akshay Deepak and Abhinav
Background: Emotional speech synthesis is the process of synthesising emotions into neutral speech (potentially generated by a text-to-speech system) to make artificial human-machine interaction human-like. It typically involves the analysis and modification of speech parameters. Existing work on speech synthesis involving the modification of prosody parameters does so at the sentence, word, and syllable levels. Further fine-grained modification at the vowel level has not been explored yet, which motivates our work.
Objective: To explore prosody parameters at the vowel level for emotion synthesis.
Methods: Our work modifies the prosody features (duration, pitch, and intensity) for emotion synthesis. Specifically, it modifies the duration parameter of vowel-like and pause regions, and the pitch and intensity parameters of only the vowel-like regions. The modification is gender-specific, uses emotional speech templates stored in a database, and is done using the Pitch Synchronous Overlap and Add (PSOLA) method.
Results: A comparison was done with the existing work on prosody modification at the sentence, word, and syllable levels on the IITKGP-SEHSC database. Improvements of 8.14%, 13.56%, and 2.80% in relative mean opinion score were obtained for the emotions angry, happy, and fear, respectively. This was due to: (1) prosody modification at the vowel level being more fine-grained than at the sentence, word, or syllable level, and (2) prosody patterns not being generated for consonant regions, because the vocal cords do not vibrate during consonant production.
Conclusion: Our work shows that emotional speech generated using prosody modification at the vowel level is more convincing than that from prosody modification at the sentence, word, or syllable level.
-
Effectiveness of Online Learning and its Comparison Using Innovative Statistical Approach
Authors: Manoj K. Srivastava, Rajesh Kumar and Ashish Khare
Background: Advances in mobile and Internet technology have produced several online applications, such as smart classes, virtual classes, and online classes. Online courseware promotes better subjective knowledge in learners. The effectiveness of teaching and learning processes must be evaluated, for the benefit of the learners, to select the best approach to learning; this motivated us to evaluate and compare the effectiveness of different online learning courses through statistical approaches.
Objective: The main objective of this paper is to compare the learning effect of the National Program on Technology Enhanced Learning (NPTEL) with the traditional classroom learning approach.
Methods: Final-year Master of Science (Computer Science) students were allowed to learn their subjects in two different groups: in online mode using NPTEL and with the traditional learning approach. After learning the subjects, a series of tests was conducted, and the marks were recorded for comparison of the two learning modes. For comparing the results of the two learning methodologies, two statistical measures, namely the F-test and the t-test, were used. The experimental results demonstrate that the t-test and F-test results for the NPTEL learning method are superior to those of the comparative learning method.
Results: The tests show that the online learning approach provides better learning compared to traditional classroom learning.
Conclusion: The obtained results also indicate that there is a significant improvement for learners through NPTEL video lectures over traditional classroom-based learning.
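A minimal sketch of the two-group comparison described above: an independent-samples t-test for means and an F-ratio test for variances over exam marks. The marks are invented; since scipy has no built-in two-sample variance F-test, the p-value is computed from the F distribution directly:

    # t-test and F-test comparing two groups of marks.
    import numpy as np
    from scipy import stats

    nptel = np.array([78, 82, 75, 88, 84, 79, 81, 86])       # illustrative marks
    classroom = np.array([70, 74, 68, 77, 72, 69, 73, 71])

    t_stat, t_p = stats.ttest_ind(nptel, classroom, equal_var=False)
    f_stat = nptel.var(ddof=1) / classroom.var(ddof=1)       # variance ratio
    f_p = 2 * min(stats.f.cdf(f_stat, len(nptel) - 1, len(classroom) - 1),
                  stats.f.sf(f_stat, len(nptel) - 1, len(classroom) - 1))
    print(f"t = {t_stat:.2f} (p = {t_p:.4f}), F = {f_stat:.2f} (p = {f_p:.4f})")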
-
Classification of Diabetes by Kernel Based SVM with PSO
Authors: Dilip K. Choubey, Sudhakar Tripathi, Prabhat Kumar, Vaibhav Shukla and Vinay K. Dhandhania
Background: Classification methods are required to deduce possible errors and assist doctors, and are used to make suitable decisions in real-world applications. Classification is an efficient, effective, and broadly utilized strategy in several applications, such as medical diagnosis. The prime objective of this research paper is to achieve an efficient and effective classification method for diabetes.
Methods: The proposed methodology comprises two phases. The first phase deals with the description of the Pima Indian Diabetes Dataset and the Localized Diabetes Dataset, whereas in the second phase, the datasets are processed through two different approaches.
Results: The first approach entails classification with Polynomial-Kernel, RBF-Kernel, Sigmoid-Kernel, and Linear-Kernel SVMs on the Pima Indian Diabetes Dataset and the Localized Diabetes Dataset. In the second approach, PSO is utilized as a feature reduction method, followed by the same set of classification methods used in the first approach. PSO_Linear Kernel SVM provides the highest accuracy and ROC for both of the above-mentioned datasets.
Conclusion: The present work includes a comparative analysis of the outcomes, with performance assessed both with and without PSO for the same set of classification methods. It is concluded that PSO selects the relevant features, reducing expense and computation time while improving ROC and accuracy. The methodology could be applied to other medical diseases.
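A sketch of the first-phase kernel comparison: the same SVM is trained with each of the four kernels named above and scored by cross-validation. Synthetic data stands in for the Pima Indian and Localized Diabetes datasets:

    # Compare the four SVM kernels the paper evaluates.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=400, n_features=8, random_state=0)
    for kernel in ("poly", "rbf", "sigmoid", "linear"):
        acc = cross_val_score(SVC(kernel=kernel), X, y, cv=5).mean()
        print(f"{kernel:>8}: {acc:.3f}")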
-
Analysis of Voice Cues in Recognition of Sarcasm
Authors: Basavaraj N. Hiremath and Malini M. Patil
Background: A voice recognition system is about cognizing signals through feature extraction and identification of related parameters; the entire process is referred to as voice analytics.
Objective: The paper aims to analyze and synthesize the phonetics of voice. The work focuses on the facts of voice analytics, i.e., the basic blocks of the 'glottal signature'. The glottal signature and unique voice cues are evaluated to derive the relationship for the utterance of emotional words, which leads to sentimental expression cues. An effort is made to map these further to understand sarcastic behavior in the sounds of human speech.
Methods: The basic blocks of the unique features identified in the work are intensity, pitch, and formants related to spoken, read, interactive, and declarative sentences, based solely on voice cues rather than linguistic theory. Derived features that map to fine-grained details of the voice cues are also identified, to drill into their use in sarcasm detection.
Results: The different unique features identified in the work are intensity, pitch, formants related to read, spoken, interactive, and declarative sentences, and the derived parameters.
Conclusion: The work carried out in the paper also supports the analysis of voice segmentation labelling, analyzes the unique features of voice cues and the physics of voice, and carries the process further to recognize sarcasm.
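A self-contained sketch of two of the voice cues named above, computed on a synthetic vowel-like tone: intensity as RMS energy and pitch via autocorrelation. Real glottal-signature and formant analysis is considerably more involved than this:

    # RMS intensity and autocorrelation pitch on a synthetic 180 Hz frame.
    import numpy as np

    sr = 16000
    t = np.arange(0, 0.05, 1 / sr)
    frame = 0.6 * np.sin(2 * np.pi * 180 * t)        # 180 Hz "vowel" frame

    intensity = np.sqrt(np.mean(frame ** 2))         # RMS intensity
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min, lag_max = sr // 400, sr // 80           # search pitch in 80-400 Hz
    lag = lag_min + np.argmax(ac[lag_min:lag_max])
    print(f"intensity={intensity:.3f}, pitch ~ {sr / lag:.1f} Hz")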
-
METHWORK: An Approach for Ranking of Research Trends with a Case Study for IoET
Authors: Neeraj Kumar, Alka Agrawal and Raess A. Khan
Objective: Ranking in many areas has long been a difficult problem. The authors use a novel ranking approach to find the most popular research interest among all research fields, on the assumption that selecting a research area is a tedious task.
Methods: They propose a mechanism named METHWORK (Methodology With the Opted Related Keywords) to identify popular research trends. Google-based searching was applied to find samples in the initial stage of the approach. METHWORK was tested on a case of ranking research trends within the IoET (Internet of Environmental Things). To derive the ranks, the first phase assesses the popularity of the topics in the existing published research papers. To verify the correctness of the ranking found using METHWORK, the authors performed χ² hypothesis testing against current ranking techniques.
Results: The proposed methodology is a milestone in identifying research trends within a broad research area.
Conclusion: The results of the test indicate that the approach can be applied to determine trends in any research discipline, proving the applicability of the proposed approach.
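A hedged sketch of the validation step: a χ² goodness-of-fit test comparing the topic popularity counts behind a METHWORK-style ranking with the counts implied by an existing ranking. The counts are invented for illustration, and the paper's exact test setup may differ:

    # Chi-square goodness-of-fit between two rankings' topic counts.
    from scipy.stats import chisquare

    methwork_counts = [120, 90, 60, 30]        # papers per topic (observed)
    baseline_counts = [110, 95, 65, 30]        # expected under existing ranking

    stat, p = chisquare(f_obs=methwork_counts, f_exp=baseline_counts)
    print(f"chi2 = {stat:.2f}, p = {p:.3f}")   # large p: the rankings agree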
-
Machine Learning Based Parametric Estimation Approach for Poll Prediction
Authors: Abdul M. Koli and Muqeem Ahmed
Background: The practice of election prediction started long ago, when the common methods were traditional ones such as pundits, hereditary factors, etc. In recent times, however, new methods and techniques are used for election forecasting, such as data mining, data science, big data, and numerous machine learning techniques. Such computational techniques have changed the whole process of political forecasting, and poll predictions are now carried out through them.
Objective: The main objective of this research work is to propose an election prediction model for developing areas, especially for the state of Jammu and Kashmir (India).
Methods: The election prediction model is developed in the Jupyter notebook web application using different supervised machine learning techniques. To obtain optimal results, we perform hyperparameter tuning of all the proposed classifiers. For measuring the performance of the poll prediction system, we use the confusion matrix along with the AUROC curve, which shows that these methods are well suited to political forecasting. An important contribution of this article is the design of a prediction system that can also be used for predictions in other fields, such as cardiovascular disease prediction and weather forecasting.
Results: The model is tested and trained with a real-time dataset of the state of Jammu and Kashmir (India). We applied feature selection techniques such as Random Forest, Decision Tree Classifier, Gradient Boosting Classifier, and Extra Gradient Boosting, and obtained the eight most important parameters for poll prediction (central influence, religion followers, party wave, party abbreviations, sensitive areas, vote bank, incumbent party, and caste factor) together with their mean weightages. Applying the different classifiers to obtain the mean weightages of the parameters, it was observed that party wave obtained the maximum mean weightage of 0.82% compared to the other parameters. After obtaining the vital parameters for political forecasting, we applied various machine learning algorithms, namely decision tree, random forest, K-nearest neighbor, and support vector machine, for the early prediction of elections. Experimental results show that the Support Vector Machine outperformed the other classifiers with a higher accuracy of 0.84%.
Conclusion: In this paper, a clear overview of election prediction models and their potential, techniques, parameters, and limitations is outlined. We conclude that elections can indeed be forecasted with significant parameters, although with caution, owing to the limitations found in developing nations, such as sensitive areas, social unrest, religion, etc. This research work may be considered the first attempt to use multiple classifiers to forecast the Assembly election results of the state of Jammu and Kashmir (India).
-
Train Delay Estimation in Indian Railways by Including Weather Factors Through Machine Learning Techniques
Authors: Mohd Arshad and Muqeem Ahmed
Background: Railway systems all over the world face an uphill task in preventing train delays. In India specifically, the situation is far worse than in other developing countries owing to the high number of passengers and the poor upgrading of the legacy system. As per a report in the Times of India (TOI), a daily newspaper, around 25.3 million people travelled by train in 2006, a number that increased drastically year on year to 80 million in 2018.
Objective: To deploy a machine learning model that predicts the delay in arrival of train(s), in minutes, before starting the journey on a given date.
Methods: In this paper we combine previous train delay data and weather data to predict delays. In the proposed model we use four different machine learning methods (Linear Regression, Gradient Boosting Regression, Decision Tree, and Random Forest), which are compared under different settings to find the most accurate method.
Results: Linear Regression gives 90.01% accuracy, Gradient Boosting Regression measures 91.68%, and the most accurate configuration of the decision tree gives 93.71% accuracy. Implementing the ensemble method, Random Forest regression, achieved 95.36% accuracy.
Conclusion: Trains in India are frequently delayed. This model would assist Indian Railways and the companies concerned by making it possible to find frequent delays during certain times of the week, after which delay preventions could be implemented during those particular times in order to maintain a good on-time arrival rate.
-
Developing a Conceptual Model for Crime Against Women using ISM & MICMAC
Authors: Bhajneet Kaur, Laxmi Ahuja and Vinay Kumar
Background: Crime against women is a major issue in society, resulting in physical, psychological, sexual, or economic harm to women.
Objective: The main objective of this research paper is to propose a conceptual model derived from the contextual relationships among all the identified factors affecting crime against women.
Methods: The Interpretive Structural Modeling (ISM) technique is used to develop the model. This is a mathematical, step-by-step procedure for arranging unstructured items into a structured, hierarchical form to build a conceptual model. Further, MICMAC (Matriced' Impacts Croisés Multiplication Appliquée á un Classement) analysis is applied to segregate the factors into groups of independent, dependent, and linkage factors on the basis of their driving power and dependence power. Driving power indicates the extent to which an individual factor drives the issue, and dependence power indicates the extent to which an individual factor is driven by the other factors.
Results: All 11 identified factors are structured into a well-defined model with groups of linkage, independent, and dependent factors. The model clearly defines the role and contribution of each factor, giving law firms, police departments, and other crime-related organizations good insights for decisions to control or prevent crime against women.
Conclusion: Public and private crime organizations and law firms can use this article to reform their policies and take more security measures along with their implementation.
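A minimal sketch of the MICMAC step: from a final ISM reachability matrix, the driving power of factor i is its row sum and its dependence power is its column sum; factors then fall into the usual four quadrants. The 4-factor matrix is illustrative, not the paper's 11x11 one:

    # Driving/dependence power and MICMAC quadrants from a reachability matrix.
    R = [  # R[i][j] = 1 if factor i leads to factor j (including itself)
        [1, 1, 1, 1],
        [0, 1, 0, 1],
        [0, 1, 1, 1],
        [0, 0, 0, 1],
    ]
    n = len(R)
    driving = [sum(row) for row in R]
    dependence = [sum(R[i][j] for i in range(n)) for j in range(n)]

    def quadrant(drv, dep, mid=n / 2):
        if drv > mid and dep > mid:
            return "linkage"
        if drv > mid:
            return "independent (driver)"
        if dep > mid:
            return "dependent"
        return "autonomous"

    for i in range(n):
        print(f"factor {i + 1}: driving={driving[i]}, "
              f"dependence={dependence[i]}, {quadrant(driving[i], dependence[i])}")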
-
Enhanced MSEEC Routing Protocol Involving Tabu Search with Static and Mobile Nodes in WSNs
Authors: Varsha Sahni, Manju Bala and Manoj Kumar
Background: This paper is set mainly in a heterogeneous network in which three types of nodes are present (normal, advanced, and super nodes) with different amounts of energy. In the designed network, the energy of the super nodes is greater than that of the advanced and normal nodes, and the energy of the advanced nodes is in turn greater than that of the normal nodes. Optimization techniques from swarm intelligence are studied with respect to the different aspects of routing.
Objective: The objective of this paper is to propose a new heterogeneous protocol based on a hybrid meta-heuristic technique. In this technique, the shortest route is selected and the data is forwarded to the sink in a minimal time span to save energy and make the network more stable.
Methods: To evaluate the technique, a new hybrid approach is created in which data transmission is implemented from the beginning. The technique contains the routing process of the algorithm, made available through the hybrid meta-heuristic technique.
Results: Simulation results show that the hybrid meta-heuristic technique achieves higher throughput with fewer dead nodes than existing methods, demonstrating the efficiency and stability of the newly proposed protocol.
Conclusion: This paper presents a novel, energy-efficient technique for randomly deployed sensor nodes in a wireless sensor network; the stability and throughput of the proposed algorithm are enhanced for static as well as mobile nodes.
-
A Trust Based Neighbor Identification Using MCDM Model in Wireless Sensor Networks
Authors: Amit K. Gautam and Rakesh Kumar
Background: Wireless Sensor Networks (WSNs) are a major technology for the Internet of Things (IoT) and are used within IoT systems to facilitate collaboration among heterogeneous information systems and services. Due to their distributed nature, these networks are highly vulnerable to various security threats that adversely affect their performance. Trust is one of the influential factors in the security of WSNs, with applications in cloud systems, e-commerce, etc. Secure and efficient neighbor selection is a Multiple Criteria Decision Making (MCDM) problem in which many Quality of Service (QoS) parameters play a vital role in selecting the best neighbor.
Methods: A dynamic and efficient trust model based on a ranking method is proposed in this paper for recommending an appropriate secure neighbor node. To rank the available neighbors, we use a voting approach together with a hybrid of the Analytical Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to the Ideal Solution (TOPSIS).
Results: The paper includes a case study demonstrating the effectiveness of the proposed method, which maximizes the defense against internal attacks. A complexity analysis shows the superiority of the proposed method: the time complexity of the proposed algorithm is O(n^2), against a compared algorithm whose growth rate is O(2^n).
Conclusion: This method evaluates the trustworthiness of a neighbor node quantitatively, as a fraction in the range [0, 1]. When applied, the proposed algorithm selects the best node among the alternatives.
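A compact TOPSIS sketch for the neighbor-ranking step: QoS criteria are normalized, AHP-style weights are applied, and each neighbor is scored by its closeness to the ideal solution, yielding a trust score in [0, 1]. The weights and the QoS matrix are invented for illustration:

    # TOPSIS ranking of candidate neighbors over benefit-type QoS criteria.
    import numpy as np

    # rows = candidate neighbors; columns = QoS criteria (all benefit-type here)
    D = np.array([[0.9, 0.7, 0.8],
                  [0.6, 0.9, 0.7],
                  [0.8, 0.6, 0.9]])
    w = np.array([0.5, 0.3, 0.2])            # e.g., AHP-derived weights

    V = w * D / np.linalg.norm(D, axis=0)    # weighted normalized matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    closeness = d_neg / (d_pos + d_neg)      # trust score in [0, 1]
    print("best neighbor:", int(closeness.argmax()), closeness.round(3))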