Recent Patents on Engineering - Volume 13, Issue 2, 2019
Trust Models in Grid Computing: A Review
Authors: Dolly Sharma, Shailendra Singh and Mamta Mittal
Background: Grid computing concerns a pool of resources shared by users in a Grid environment. Protecting resources from users, and vice versa, is a significant issue, and this is where the notion of trust comes in. A number of researchers have proposed models for evaluating trust in grid computing, but each overlooks one or more parameters of trust evaluation. The essence of trust models in grid computing is that they offer autonomic trust management. An autonomic trust model was patented by Z. Yan and C. Prehofer in 2009; another patent, published by Anna University in 2010, evaluates the trustworthiness of a resource provider in a Grid environment.
Objective: This paper first identifies and illustrates these essential parameters. Based on them, it then compares several existing trust evaluation models. Finally, common parameters missed by various models are highlighted, pointing the way to improved trust models.
Methods: We previously proposed a trust evaluation model based on a number of real-world trust evaluation parameters. The model treats trust as a three-dimensional entity and computes it mathematically using Dempster-Shafer theory.
Results: Software trust needs to be calculated mathematically, and a large number of real-world parameters must be included in its evaluation.
Conclusion: Because trust models in the literature are based on simulation techniques, it is important to include the real-world factors that affect the trust one entity places in another. Some of these parameters, missed by most models, have been identified for inclusion in future trust models.
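The Dempster-Shafer combination rule at the heart of such mathematical trust evaluation can be sketched in a few lines of Python; the mass values below are illustrative, not taken from the model:

```python
def combine(m1, m2):
    """Dempster's rule of combination over a frame of discernment.

    m1, m2: dicts mapping frozenset hypotheses -> mass (each summing to 1).
    Returns the combined, conflict-normalised mass function.
    """
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:  # compatible evidence reinforces the intersection
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:      # contradictory evidence accumulates as conflict
                conflict += ma * mb
    k = 1.0 - conflict  # normalisation factor
    return {h: m / k for h, m in combined.items()}

# Frame: T = "trustworthy", D = "distrusted"; {T, D} encodes ignorance.
T, D = frozenset("T"), frozenset("D")
TD = T | D
rec1 = {T: 0.6, D: 0.1, TD: 0.3}   # evidence from direct interactions
rec2 = {T: 0.7, D: 0.2, TD: 0.1}   # evidence from recommendations
belief = combine(rec1, rec2)
```

Combining the two bodies of evidence concentrates mass on "trustworthy" while explicitly tracking residual ignorance, which is what distinguishes this approach from a plain weighted average of trust scores.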
Study of Optimized Window Aggregate Function for Big Data Analytics
Authors: Shailender Kumar, Preetam Kumar and Aman Mittal
Background: Window aggregate functions belong to a class of functions that have emerged as a very important tool for big data analytics, supporting analysis and decision-making applications. A window aggregate function computes its result over a limited set of tuples around the current tuple and returns an aggregate for each row, which makes it well suited to big data analytics. We reviewed several patents related to window aggregate functions and their optimization. The cost of big data analytics, especially the processing of window functions, is one of the major limiting factors; however, a number of optimization techniques have now evolved for both single and multiple window aggregate functions.
Methods: The authors discuss various optimization techniques and summarize the latest ones developed through intensive research in this area, comparing them on parameters such as degree of parallelism, support for multiple window functions, and execution time.
Results: After analyzing these techniques, the segment tree data structure appears to be the strongest, outperforming the others on efficiency, memory overhead, execution speed, and degree of parallelism.
Conclusion: For optimizing window aggregate functions, the segment tree technique can markedly improve processing, particularly in big data analytics.
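As a sketch of why a segment tree suits window aggregates: precomputed partial aggregates let each windowed query run in O(log n) instead of rescanning its frame. The following Python is a minimal illustration of the idea, not the optimizer discussed in the paper:

```python
class SegmentTree:
    """Segment tree for range aggregates (here: sum) over a fixed array."""

    def __init__(self, data):
        self.n = len(data)
        self.tree = [0] * (2 * self.n)
        self.tree[self.n:] = data              # leaves hold the raw values
        for i in range(self.n - 1, 0, -1):     # internal nodes hold partial sums
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def query(self, lo, hi):
        """Aggregate over the half-open range [lo, hi) in O(log n)."""
        res = 0
        lo += self.n
        hi += self.n
        while lo < hi:
            if lo & 1:
                res += self.tree[lo]
                lo += 1
            if hi & 1:
                hi -= 1
                res += self.tree[hi]
            lo >>= 1
            hi >>= 1
        return res

values = [3, 1, 4, 1, 5, 9, 2, 6]
st = SegmentTree(values)
# SQL frame "ROWS BETWEEN 2 PRECEDING AND CURRENT ROW", evaluated per tuple:
window_sums = [st.query(max(0, i - 2), i + 1) for i in range(len(values))]
```

Replacing `+` with `min`, `max`, or another associative operator gives the other common window aggregates with the same query cost.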
Design and Analysis of Capacitive Micromachined Ultrasonic Transducer
Authors: Rashmi Sharma, Rekha Agarwal, Ashwani K. Dubey and Anil Arora
Background: Ultrasound is acoustic energy above the human hearing range (20 Hz to 20 kHz) and finds applications in food quality control, medical imaging, non-destructive testing, distance measurement, etc. According to recent patents, ultrasonic transducers are designed to work both as transmitters that generate ultrasound and as receivers. Piezoelectric transducers have long dominated ultrasound generation, but recent developments in micromachining techniques have led to the Capacitive Micromachined Ultrasonic Transducer (CMUT).
Objective: To simulate a Microelectromechanical systems (MEMS) based CMUT working as a transmitter with the existing design and to compare the possible architectural geometries.
Methods: The FEM simulation software COMSOL is used to build a 3D model of the transducer radiating in air. Classical thin-plate theory is employed to solve for a circular CMUT, which is sufficient when the ratio of plate diameter to thickness is very large, as is common in CMUTs. The Galerkin weighted-residual technique is used to solve the thin-plate equation under the assumption that deflections are small compared with the plate thickness.
Results: The resonant frequencies of CMUTs with different geometries have been calculated. The membrane deflection under applied DC bias is shown along with the collapse voltage calculation, and the generated ultrasound is shown with an AC bias superimposed on the DC bias. The change in capacitance with increasing DC voltage is discussed, and the membrane deflection is shown to be maximum at the resonance frequency.
Conclusion: CMUT architectures with different shapes are reviewed. The working behavior of a CMUT with suitable dimensions is simulated in 3D, giving researchers the data to choose a CMUT wisely prior to fabrication. The CMUTs are ranked on characteristics such as wafer area utilization, deflection percentage within the cavity, and transducer durability.
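The thin-plate theory the simulation rests on gives a closed form for the fundamental resonant frequency of a clamped circular plate, f = (lambda^2 / 2 pi a^2) * sqrt(D / rho h) with flexural rigidity D = E h^3 / 12(1 - nu^2). The sketch below evaluates it; the material constants and dimensions are illustrative, not those of the simulated device:

```python
import math

def cmut_fundamental_freq(radius, thickness, E, nu, rho):
    """Fundamental resonant frequency (Hz) of a clamped circular plate
    from classical thin-plate theory."""
    D = E * thickness**3 / (12 * (1 - nu**2))   # flexural rigidity
    lam2 = 10.22                                 # eigenvalue of the (0,1) clamped mode
    return lam2 / (2 * math.pi * radius**2) * math.sqrt(D / (rho * thickness))

# Illustrative silicon membrane (E, nu, rho are nominal bulk-silicon values,
# not the paper's geometry):
f0 = cmut_fundamental_freq(radius=20e-6, thickness=1e-6,
                           E=160e9, nu=0.28, rho=2330)
```

For these assumed dimensions the formula lands in the low-MHz range typical of air-coupled CMUTs, which is why FEM results are usually cross-checked against it before running full 3D simulations.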
Metrics Analysis in Object Oriented and Aspect Oriented Programming
Authors: Preeti Gulia, Manju Khari and Shrikant Patel
Background: Object-oriented programming (OOP) is a paradigm that has been used for many years by the software engineering community. Its best practices have been collected as Design Patterns, which provide guidelines for developing flexible software applications. Recent studies claim that some patterns have limitations and that their implementations can be improved. Researchers argue that Aspect-Oriented Programming (AOP) provides features that overcome the limitations of OOP and its patterns; however, even with the good results achieved using AOP, it can cause side effects in code. We reviewed the applicable patents relating to aspect-oriented programming. This paper implements a subset of the patterns with AOP and identifies merits and demerits compared with traditional OOP implementations. In OOP, when a method is called several times from different classes, the call and any surrounding code must be written manually in each class; AOP removes this duplication.
Methods: Aspect-oriented programming breaks program logic down into distinct parts called concerns. Functionality that spans multiple points of an application is a cross-cutting concern, conceptually separate from the application's business logic. Common examples of aspects are logging, auditing, declarative transactions, security, and caching.
Results: After implementing the AOP concept alongside OOP, response time is reduced and throughput increases; program development also becomes easier and more reliable.
Conclusion: Methods that are called many times during program execution should be written as aspects in AOP, so that they are triggered automatically when the pointcut is reached.
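AspectJ is the usual AOP vehicle in Java; the core idea of weaving a cross-cutting logging concern around business logic can be sketched language-neutrally with a Python decorator (all names here are illustrative):

```python
import functools

calls = []  # trace of advice executions, kept for inspection

def logged(func):
    """'Around' advice: a logging aspect woven onto a join point (function call)."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        calls.append(f"before {func.__name__}")   # before-advice
        result = func(*args, **kwargs)
        calls.append(f"after {func.__name__}")    # after-advice
        return result
    return wrapper

@logged
def transfer(amount):
    """Business logic, free of any logging code."""
    return amount * 2

value = transfer(21)
```

The point of the sketch is the separation: `transfer` contains only business logic, and the logging concern lives in one place instead of being copied into every class that needs it.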
“Water” Use it - Wisely WSN-Irrigation System (WSN-IS) for Smart Home Garden
Authors: Santosh R. Durugkar, Ramesh C. Poonia and Radhakrishna B. Naik
Background: India is a land of agriculture. Agriculture and gardening are practiced not only on a large scale but also on small plots of land, and agriculture is a backbone of the Indian economy. Small gardens are maintained within the boundaries of homes. Water consumption in agriculture and gardening is high, but irregular monsoons and a falling groundwater table make it hard to irrigate farms and gardens. We refer to three patents that motivated us to work toward transforming the agriculture sector.
Objectives: In its initial stage, our scope is limited to the home garden. We propose a GUI through which the end user can obtain soil pH, water conductivity and TDS, the temperature-humidity relationship, the effect of temperature and humidity on moisture, soil analysis, the soil's moisture-holding capacity, etc. Irrigation plays an important role in the yield of any plant. We propose a priority-driven irrigation model that supplies an optimum amount of good-quality water to crops with the help of a Wireless Sensor Network.
Methods: The proposed model senses soil moisture, temperature, humidity, and other factors affecting irrigation, and supplies water according to the priority of each plant's requirement. It is crop independent and can be applied to basic crops, commercial crops, gardens, and orchards, since its basis is to promptly irrigate any plant whose soil moisture level falls too low. Smart irrigation is a pressing need today owing to critical factors such as irregular monsoons and limited water availability; even where sufficient water is available, its quality must be verified for better crop yield. At the same time, temperature, humidity, air flow, and soil moisture all play important roles in crop yield. A wireless sensor network raises a number of issues that must be addressed for the smooth execution of its tasks; this work points out some of those challenges and the issues that must be considered.
Conclusion: Final testing shows how this approach benefits society: from now on, water consumption in agriculture and gardening can be kept low. The proposed system also shows the end user the quality of the water used, in terms of TDS, conductivity, and pH. Likewise for the soil: if excess use of fertilizers and pesticides changes its pH, the system notices this too, avoiding future losses.
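A minimal sketch of the priority-driven idea, assuming a simple "driest plot first" rule; the actual model weighs more factors such as temperature and humidity, so the threshold and ranking here are illustrative assumptions:

```python
def irrigation_priority(readings, moisture_threshold=30.0):
    """Rank plots by moisture deficit; only plots below the threshold qualify.

    readings: list of (plot_id, soil_moisture_percent) from sensor nodes.
    Returns plot ids ordered most-urgent (driest) first.
    """
    dry = [(pid, m) for pid, m in readings if m < moisture_threshold]
    return [pid for pid, _ in sorted(dry, key=lambda x: x[1])]

# Hypothetical home-garden sensor readings:
sensors = [("rose_bed", 12.0), ("lawn", 45.0), ("herbs", 28.0), ("shrubs", 8.5)]
schedule = irrigation_priority(sensors)
```

Plots above the threshold are skipped entirely, which is where the water saving comes from; the sorted order decides which valve the controller opens first.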
Performance Evaluation of Threshold-Based and k-means Clustering Algorithms Using Iris Dataset
Authors: Mamta Mittal, Rajendra K. Sharma and Varinder Pal Singh
Background: Clustering is a data mining tool that partitions raw data sensibly into disjoint clusters. Researchers have developed many algorithms to cluster large data sets based on specific parameters.
Objective: This study centers on the popular partitioning-based technique, k-means. It requires the number of clusters as an input parameter, does not provide a global solution to the problem, and is sensitive to outliers and to initial seed selection.
Methods: The authors discuss a threshold-based clustering method, the single-pass method, which overcomes the above limitations but requires a threshold value as an input parameter. Other researchers' work on k-means published in patent form is noteworthy and paves the way for further research.
Results: To assess clustering quality, numerous validity measures and indices were evaluated on the Iris dataset for both the k-means and the threshold-based algorithm. The experiments show that the threshold-based method generates more separated and compact clusters, with significant improvement in the validity indices.
Conclusion: Threshold-based clustering generates clusters automatically, is insensitive to initial seed selection and outliers, and is more scalable. It can be an efficient partitioning-based approach whenever the threshold value is chosen carefully or new functions are proposed for deciding it.
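The single-pass threshold-based method can be sketched as leader clustering: each incoming point joins the first existing cluster whose leader lies within the threshold, otherwise it founds a new cluster. A minimal Python illustration (not necessarily the authors' exact variant):

```python
def leader_cluster(points, threshold):
    """Single-pass threshold-based clustering.

    Each point joins the first leader within `threshold` (Euclidean
    distance); otherwise it becomes the leader of a new cluster.
    The number of clusters emerges from the data, not from an input k.
    """
    leaders, clusters = [], []
    for p in points:
        for i, lead in enumerate(leaders):
            dist = sum((a - b) ** 2 for a, b in zip(p, lead)) ** 0.5
            if dist <= threshold:
                clusters[i].append(p)
                break
        else:                      # no leader close enough: start a new cluster
            leaders.append(p)
            clusters.append([p])
    return leaders, clusters

pts = [(1.0, 1.0), (1.2, 0.9), (5.0, 5.0), (5.1, 4.8), (9.0, 1.0)]
leaders, clusters = leader_cluster(pts, threshold=1.0)
```

Each point is examined exactly once, which is why the method scales well, and the cluster count falls out of the threshold rather than being fixed in advance, in contrast to k-means.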
Gray-Level Co-occurrence Matrix and Random Forest Based Off-line Odia Handwritten Character Recognition
Authors: Abhisek Sethy, Prashanta K. Patra and Deepak Ranjan Nayak
Background: Over the past decades, handwritten character recognition has received considerable attention from researchers across the globe because of its wide range of applications in daily life. The literature shows that many handwritten Indian scripts, Odia among them, remain little studied. We reviewed some of the patents relating to handwritten character recognition.
Methods: This paper develops an automatic recognition system for offline handwritten Odia characters. Prior to feature extraction, the character images are preprocessed. For feature extraction, the gray-level co-occurrence matrix (GLCM) is first computed from all sub-bands of the two-dimensional discrete wavelet transform (2D DWT); feature descriptors such as energy, entropy, correlation, homogeneity, and contrast are then calculated from the GLCMs to form the primary feature vector. To further reduce the feature space and generate more relevant features, principal component analysis (PCA) is employed. Owing to their several salient properties, random forest (RF) and K-nearest neighbor (K-NN) have become significant choices for pattern classification, and both are applied separately in this study to classify the character images.
Results: All experiments were performed on a system running 64-bit Windows 8 with an Intel(R) i7-4770 CPU @ 3.40 GHz. Simulations were conducted in Matlab 2014a on a standard database, the NIT Rourkela Odia Database.
Conclusion: The proposed system has been validated on a standard database. Simulation results under 10-fold cross-validation demonstrate that the proposed system achieves better accuracy than existing methods while requiring the fewest features. The recognition rates with the RF and K-NN classifiers are 94.6% and 96.4%, respectively.
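The GLCM descriptors named above follow standard textbook definitions. A small pure-Python sketch on a toy 4-level image (omitting the DWT, PCA, and classifier stages, and showing a subset of the descriptors):

```python
import math

def glcm(image, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for one pixel offset, normalised to sum 1."""
    mat = [[0.0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    total = 0
    for y in range(h - dy):
        for x in range(w - dx):
            mat[image[y][x]][image[y + dy][x + dx]] += 1
            total += 1
    return [[v / total for v in row] for row in mat]

def glcm_features(p):
    """Energy, entropy, homogeneity and contrast descriptors from a GLCM."""
    energy = entropy = homogeneity = contrast = 0.0
    for i, row in enumerate(p):
        for j, v in enumerate(row):
            energy += v * v
            if v > 0:
                entropy -= v * math.log2(v)
            homogeneity += v / (1.0 + abs(i - j))
            contrast += v * (i - j) ** 2
    return {"energy": energy, "entropy": entropy,
            "homogeneity": homogeneity, "contrast": contrast}

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
feats = glcm_features(glcm(img))
```

In the paper's pipeline, such a feature vector would be computed for each DWT sub-band of the character image before PCA compresses the result.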
Analysis of NSL KDD Dataset Using Classification Algorithms for Intrusion Detection System
Authors: Srishti Sharma, Yogita Gigras, Rita Chhikara and Anuradha Dhull
Background: Intrusion detection systems (IDS) are responsible for detecting anomalies and network attacks, and building an effective IDS depends on a readily available dataset for training and testing. This research uses the NSL-KDD dataset, an improvement over the original KDD Cup 1999 dataset, because KDD'99 contains a huge number of redundant records that make it difficult to process the data accurately.
Methods: The classification techniques applied to analyze the dataset are decision trees: J48, Random Forest, and Random Tree.
Results: Of the three classification algorithms, Random Forest produced the best results and was therefore used to analyze the data further. The results of feature/attribute selection, obtained by applying all possible combinations, are analyzed and reported in this paper.
Conclusion: A total of eight significant attributes were selected after applying various attribute selection methods to the NSL-KDD dataset.
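Attribute selection of this kind is commonly driven by information gain, the criterion behind J48-style decision trees. A toy sketch with made-up connection records (not real NSL-KDD rows; the attribute names are illustrative):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a class-label list."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(records, labels, attr):
    """Entropy reduction obtained by splitting `records` on `attr`,
    the quantity J48-style trees use to rank candidate attributes."""
    gain = entropy(labels)
    n = len(records)
    for value in set(r[attr] for r in records):
        subset = [lab for r, lab in zip(records, labels) if r[attr] == value]
        gain -= len(subset) / n * entropy(subset)
    return gain

# Toy connection records with two candidate attributes:
records = [{"protocol": "tcp", "flag": "SF"}, {"protocol": "udp", "flag": "SF"},
           {"protocol": "tcp", "flag": "S0"}, {"protocol": "tcp", "flag": "S0"}]
labels = ["normal", "normal", "attack", "attack"]
ranking = sorted(["protocol", "flag"],
                 key=lambda a: information_gain(records, labels, a),
                 reverse=True)
```

On this toy data `flag` separates the classes perfectly (gain 1 bit) while `protocol` does not, so an attribute-selection pass would keep `flag` first; the paper applies the same idea at scale to arrive at its eight attributes.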
EELEACH Clustering Approach to Improve Energy Efficiency in WSN
Authors: Neha D. Desai and Shrihari D. Khatawkar
Background: According to recent patents, a wireless sensor network is a self-organizing network consisting of a large number of sensor nodes and one sink node. Its most important characteristics are restricted resources such as battery power, energy consumption, and communication range. Energy consumption is a central issue in wireless sensor networks, and the challenge is to prolong network lifespan.
Objective: The proposed approach aims to balance energy consumption between member nodes and cluster head nodes during the data transmission stage, and thereby improve the energy efficiency and lifespan of the network.
Methods: An energy-efficient clustering method handles the homogeneous distribution of nodes and the deployment of a tree structure. Network performance is enhanced by electing as head the node with the greater cluster weight and the smallest distance from the sink node. Member nodes send their data to the head node, which forwards it toward the node with the greater weight and on to the sink node in an energy-balanced way.
Results: The existing approach (LEACH) and the proposed approach (EELEACH) are analyzed against metrics such as energy consumption, successful data delivery, throughput, routing overhead, packet delivery fraction, and delay ratio.
Conclusion: The analysis shows that EELEACH improves successful data delivery, throughput, routing overhead, packet delivery fraction, and delay ratio. Its lower energy consumption yields a longer network lifespan and a better data transfer rate.
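Head election by a combined energy/distance weight can be sketched as below; the exact weighting EELEACH uses may differ, so treat this as an illustrative stand-in:

```python
import math

def elect_cluster_head(nodes, sink):
    """Pick the node with the highest weight = residual energy / distance
    to sink, so the transmission burden falls on well-charged nodes that
    are cheap to reach from the sink."""
    def weight(node):
        x, y, energy = node
        dist = math.hypot(x - sink[0], y - sink[1])
        return energy / dist
    return max(nodes, key=weight)

# (x, y, residual_energy_J) for the nodes of one cluster; sink at the origin.
cluster = [(10.0, 0.0, 0.5), (3.0, 4.0, 0.4), (6.0, 8.0, 0.9)]
head = elect_cluster_head(cluster, sink=(0.0, 0.0))
```

Re-running the election each round as residual energies drop is what spreads the head role, and hence the energy drain, across the cluster instead of exhausting one node.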
A Statistical Tool for Time Synchronization Problem in WSN
Authors: D. Upadhyay, A.K. Dubey and P.S. Thilagam
Background: In recent research, time synchronization has taken on great importance across the applications of wireless sensor networks. Localization, tracking, message passing in contention-based schemes, and communication are some of the areas where synchronization between sensor clocks is essential. Several algorithms have therefore been designed to achieve a rational and reliable frame of time within a wireless sensor network. Patents related to time synchronization in WSN were also analyzed.
Methods: This paper presents a powerful statistical tool, based on maximum probability (likelihood) theory, for synchronizing sensor clocks. It is applied to estimate the best value of the clock offset between two sensor clocks, and the proposed algorithm is analyzed by exchanging timing messages between nodes using a two-way message exchange scheme.
Results: The proposed algorithm was also implemented alongside the Timing-sync Protocol for Sensor Networks (TPSN). It reduces the error deviation from 2.32 ms to 0.064 ms compared with TPSN without the proposed scheme.
Conclusion: For a small network, the proposed scheme combined with TPSN was observed to give better, more efficient results.
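In a two-way message exchange with timestamps T1..T4, the per-round offset estimate is ((T2 - T1) - (T4 - T3)) / 2, and under symmetric Gaussian link delays the maximum-likelihood estimate over several rounds is simply the average of the per-round estimates. A sketch with simulated timestamps (true offset 5 ms, assumed jitter; not the paper's data):

```python
def estimate_offset(exchanges):
    """Clock-offset estimate from two-way message exchanges.

    Each exchange is (T1, T2, T3, T4): node A sends at T1 (A's clock),
    B receives at T2 and replies at T3 (B's clock), A receives at T4
    (A's clock). Averaging the per-round estimates plays the role of the
    maximum-likelihood estimator under symmetric Gaussian link delays.
    """
    per_round = [((t2 - t1) - (t4 - t3)) / 2 for t1, t2, t3, t4 in exchanges]
    return sum(per_round) / len(per_round)

# Simulated rounds: B's clock leads A's by 5 ms, one-way delay about 2 ms.
rounds = [(0.0, 7.1, 8.0, 5.1),
          (20.0, 27.0, 28.0, 24.8),
          (40.0, 46.9, 48.0, 45.1)]
offset = estimate_offset(rounds)
```

Because the symmetric round-trip cancels the mean propagation delay, only delay asymmetry biases the estimate; averaging more rounds shrinks the variance, which is the effect the reported error-deviation reduction reflects.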
Design of GA and Ontology based NLP Frameworks for Online Opinion Mining
Authors: Manik Sharma, Gurvinder Singh and Rajinder Singh
Background: For almost every domain, a tremendous amount of data is accessible online and offline. Billions of users post their views or opinions daily through applications such as WhatsApp, Facebook, Twitter, blogs, and Instagram.
Objective: These reviews are valuable for the progress of a venture, a community, a state, or even a nation. However, this vast amount of information is useful only if it is collectively and effectively mined.
Methodology: Opinion mining extracts thoughts, expressions, emotions, criticism, and appraisal from the data posted by different people. It is one of the prevailing research techniques that combines and employs features from natural language processing. Here, an amalgamated approach has been employed to mine online reviews.
Results: To improve the results of a genetic algorithm based opinion mining patent, a hybrid genetic algorithm and ontology based three-tier natural language processing framework named GAO_NLP_OM has been designed. The first tier handles preprocessing and cleaning of the sentences. The middle tier comprises a genetic algorithm based search module, an ontology for English sentences, base words for the reviews, and a complete set of English words with items and their features; the genetic algorithm is used to speed up the polarity mining process. The last tier is responsible for semantic and discourse analysis and feature summarization. Furthermore, the use of ontology helps build a more accurate opinion mining model.
Conclusion: GAO_NLP_OM is expected to improve the performance of the genetic algorithm based opinion mining patent. The combination of genetic algorithm, ontology, and natural language processing should produce faster and more precise results. The proposed framework can mine simple as well as compound sentences; however, affirmatives preceding interrogatives, hidden features, and mixed-language sentences remain a challenge for it.
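A toy genetic algorithm for the polarity-search role of the middle tier might look as follows; the weight encoding, the operators, and the tiny labelled reviews are all illustrative assumptions, not the GAO_NLP_OM design:

```python
import random

random.seed(7)  # fixed seed so the sketch is reproducible

def fitness(weights, reviews):
    """Fraction of reviews whose weighted lexicon score matches the label."""
    correct = 0
    for tokens, label in reviews:
        score = sum(weights.get(t, 0.0) for t in tokens)
        correct += (score > 0) == (label == "pos")
    return correct / len(reviews)

def evolve(vocab, reviews, pop_size=20, generations=30):
    """Minimal GA: truncation selection with elitism, uniform crossover,
    and Gaussian mutation over per-word polarity weights."""
    pop = [{w: random.uniform(-1, 1) for w in vocab} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, reviews), reverse=True)
        parents = pop[: pop_size // 2]           # elitism: best half survives
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = {w: random.choice((a[w], b[w])) for w in vocab}  # crossover
            child[random.choice(list(vocab))] += random.gauss(0, 0.3)  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda ind: fitness(ind, reviews))

reviews = [(["great", "battery"], "pos"), (["poor", "screen"], "neg"),
           (["great", "screen"], "pos"), (["poor", "battery"], "neg")]
best = evolve({"great", "poor", "battery", "screen"}, reviews)
```

The GA's role here, as in the framework, is to search the polarity-weight space faster than exhaustive scoring would; the ontology and discourse tiers the paper describes sit before and after this step.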
A Note on Comparison between Statistical Cluster and Neural Network Cluster
Authors: Jagdish Prasad and Rahul Rajawat
Background: Cluster analysis is a data reduction technique applied to the rows of a data matrix. It is widely used in engineering, biology, social science, pattern recognition, and image processing.
Objective: In this paper, a self-organizing map (SOM) artificial neural network and different statistical clustering techniques are applied to population data for the 33 districts of Rajasthan with 9 variables, for comparison purposes.
Methods: The goal of this work is to identify the most suitable technique for clustering the data among the artificial neural network and the various statistical clustering techniques. We reviewed the patents regarding artificial neural networks and the k-means clustering method.
Results: In some situations, artificial neural network (ANN) self-organizing map cluster analysis, run in MATLAB 8.2.0, gives much the same result as k-means statistical cluster analysis in SPSS 7.0.
Conclusion: k-means cluster analysis is found to be as good as neural network cluster analysis, whereas hierarchical cluster analysis and two-step cluster analysis deviate somewhat from the neural network clustering.
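For reference, the k-means procedure being compared is Lloyd's algorithm: alternately assign each point to its nearest center, then move each center to the mean of its cluster. A compact sketch on toy 2-D data (not the Rajasthan district data):

```python
def kmeans(points, centers, iters=20):
    """Plain Lloyd's algorithm for n-dimensional points, given initial centers."""
    for _ in range(iters):
        # Assignment step: each point goes to its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            idx = min(range(len(centers)),
                      key=lambda i: sum((a - b) ** 2
                                        for a, b in zip(p, centers[i])))
            clusters[idx].append(p)
        # Update step: each center moves to its cluster's mean.
        centers = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

pts = [(1.0, 2.0), (1.5, 1.8), (5.0, 8.0), (8.0, 8.0), (1.0, 0.6), (9.0, 11.0)]
centers, clusters = kmeans(pts, centers=[(1.0, 2.0), (5.0, 8.0)])
```

A SOM performs a conceptually similar competitive assignment but also pulls neighbouring map units toward each winner, which is why its clusters often agree closely with k-means on well-separated data.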
Efficient Computing in Image Processing and DSPs with ASIP Based Multiplier
Authors: Poonam Sharma, Ashwani K. Dubey and Ayush Goyal
Background: With the growing demands of image processing and the use of Digital Signal Processors (DSPs), the efficiency of multipliers and accumulators has become a bottleneck. We reviewed a few patents on Application Specific Instruction Set Processors (ASIPs), in which design considerations are proposed for efficient application-specific computing to enhance throughput.
Objective: The study aims to develop and analyze a computationally efficient method to optimize the speed performance of the multiply-accumulate (MAC) unit.
Methods: The work presented here designs an Application Specific Instruction Set Processor that exploits a multiplier-accumulator integrated as dedicated hardware. This MAC, optimized for high-speed performance, is the application-specific part of the processor (here, the DSP block of an image processor), while a 16-bit Reduced Instruction Set Computer (RISC) processor core gives the design the flexibility for general computing. The design was emulated on a Xilinx Field Programmable Gate Array (FPGA) and tested on various real-time computations.
Results: Synthesis of the hardware logic with the FPGA tools gave the operating frequencies of the legacy methods and the proposed method, and simulation of the logic verified its functionality.
Conclusion: With the proposed method, a significant 16% increase in throughput was observed for 256-step iterations of the multiplier and accumulator on 8-bit sample data. Such an improvement can reduce computation time in the many digital signal processing applications where multiplication and addition are performed iteratively.
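Behaviourally, the MAC under test performs acc = acc + x * c each cycle, with the accumulator truncated to its register width. A Python model of the 256-step, 8-bit experiment (the 24-bit register width is an assumption, not taken from the paper):

```python
def mac(samples, coeffs, acc_bits=24):
    """Behavioural model of a multiply-accumulate unit: 8-bit operands,
    accumulator wrapped to `acc_bits` bits as a hardware register would be."""
    acc = 0
    mask = (1 << acc_bits) - 1
    for x, c in zip(samples, coeffs):
        acc = (acc + x * c) & mask   # multiply, add, truncate to register width
    return acc

# 256-step iteration on 8-bit sample data, mirroring the throughput experiment:
samples = [i & 0xFF for i in range(256)]
coeffs = [1] * 256                   # unit coefficients keep the sum checkable
result = mac(samples, coeffs)
```

Such a behavioural model is what the FPGA simulation is checked against: the hardware is deemed functionally correct when its accumulator matches this reference for the same operand streams.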
3D Finite Element Simulation for Turning of Hardened 45 Steel
Authors: Meng Liu, Guohe Li, Xueli Zhao, Xiaole Qi and Shanshan Zhao
Background: Finite element simulation has become an important method for studying the mechanics of metal machining in recent years.
Objective: To study the cutting mechanism of hardened 45 steel (45 HRC) and improve processing efficiency and quality.
Methods: A 3D oblique finite element model of traditional turning of hardened 45 steel was established in ABAQUS. The feasibility of the model was verified by experiment, and the influence of cutting parameters on cutting force was predicted through simulation-based single-factor and orthogonal experiments. Finally, an empirical formula for the cutting force was fitted in MATLAB. In addition, many patents on 3D finite element simulation of metal machining were studied.
Results: The results show that the 3D oblique finite element model can predict the three cutting force components, the 3D chip shape, and other machining variables; the prediction errors of the three force components are 5%, 9.02%, and 8.56%. The single-factor and orthogonal experiments agree well with similar research, showing that the model meets the needs of engineering applications. The empirical formula and the predicted cutting forces are helpful for parameter optimization and tool design.
Conclusion: A 3D oblique finite element model of traditional turning of hardened 45 steel was established in ABAQUS and validated by comparison with experiment.
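Empirical cutting-force formulas of the form F = C * x^a are conventionally fitted by linear regression in log-log space, which is the kind of fit MATLAB would perform here. A single-factor sketch with illustrative data (not the paper's measurements):

```python
import math

def fit_power_law(xs, ys):
    """Least-squares fit of F = C * x**a via linear regression in
    log-log space: log F = log C + a * log x."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))       # slope = exponent
    C = math.exp(my - a * mx)                    # intercept = log of coefficient
    return C, a

# Illustrative main cutting force vs. feed rate:
feeds = [0.05, 0.10, 0.15, 0.20]           # mm/rev
forces = [120.0, 198.0, 262.0, 320.0]      # N
C, a = fit_power_law(feeds, forces)
```

The full empirical formula fitted in the paper has one such exponent per cutting parameter (speed, feed, depth of cut); the multi-factor version is the same regression with several log-transformed predictors.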