Recent Advances in Computer Science and Communications - Volume 14, Issue 8, 2021
-
-
Stackelberg Game Analysis of Enterprise Operation Improvement Decision and Consumer Choice Behaviour
Authors: Gongliang Zhang and Qian Sun
To explore how enterprise operation decisions can be improved effectively, this paper treats consumer utility as a factor in business decision-making. Under the assumption that both consumers and the manager maximize their own interests, the problem is described and formulated as a Stackelberg game model. A sensitivity-based heuristic algorithm is used to solve the decision model for an operation with two charging items and N consumers, and the corresponding strategies are discussed, with the aim of providing a decision-making reference for managing two charging projects at the same time. The results show that, for an operation with two charging items, the manager should decide whether to improve each item, and in what order, according to the improvement cost and the net utility after improvement.
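As a rough illustration of the backward-induction logic behind such a Stackelberg model (not the authors' actual formulation), the following Python sketch assumes hypothetical prices, improvement costs and net-utility values for two charging items: the N consumers (followers) each pick the item with the higher net utility, and the manager (leader) chooses the improvement plan that maximizes profit given that response.

```python
# Hypothetical Stackelberg sketch: the manager (leader) chooses which of two
# charging items to improve; N consumers (followers) then choose the item with
# the higher net utility. All numbers below are illustrative assumptions.
from itertools import product

N = 100                               # number of consumers
price = {"A": 5.0, "B": 4.0}          # charge per use of each item (assumed)
base_utility = {"A": 6.0, "B": 5.5}
utility_gain = {"A": 2.0, "B": 1.0}   # net-utility gain if the item is improved
improve_cost = {"A": 120.0, "B": 60.0}

def consumer_choice(improved):
    """Followers' best response: pick the item with the larger net utility."""
    net = {i: base_utility[i] + (utility_gain[i] if improved[i] else 0.0) - price[i]
           for i in ("A", "B")}
    return max(net, key=net.get)

best_plan, best_profit = None, float("-inf")
for plan in product([False, True], repeat=2):        # leader's strategies
    improved = dict(zip(("A", "B"), plan))
    chosen = consumer_choice(improved)
    profit = N * price[chosen] - sum(improve_cost[i] for i in ("A", "B") if improved[i])
    if profit > best_profit:
        best_plan, best_profit = improved, profit

print("Leader's best improvement plan:", best_plan, "profit:", best_profit)
```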
-
-
-
Analysis of Influencing Factors of Flip Class Mode in the Application of Psychological Teaching in Colleges and Universities
By Peng Liu
With the advancement of quality-education reform, the era of big data and cloud computing has given rise to the "flipped classroom", an innovative teaching mode built on Internet technology. It overturns the traditional teaching mode centred on classroom instruction and is widely used in the classroom teaching of psychology. Introduction: Using mainly the literature research method and the content analysis method, this article analyses the flipped classroom in colleges and universities. It examines the characteristics and development trends of flipped-classroom research in terms of research fields, research topics, research methods, literature sources, author information and references. Methods: This teaching method fully embodies the concept of quality education: while respecting students' individual differences in learning, it also cultivates their autonomy, allowing them to learn independently and develop their ability to analyse and solve problems. The flipped classroom uses the Internet as a platform to adjust the teaching steps, changing how teaching and learning, in-class and out-of-class work, and teaching and self-study are organized. Results: Teachers can supplement the flipped classroom with other resources. At the same time, teachers should design test questions according to the content of the textbook, participate in students' discussions before class, and answer questions online. Throughout the online learning process, big data support allows teachers to see how well students have mastered the unit's knowledge points, such as which points are difficult, which are mastered, and which students have mastered them well, so that teaching can be adjusted effectively. Before class, the teacher's main task is to check students' replies in the classroom exchange area to understand their grasp of the knowledge points. Conclusion: From the survey of the factors affecting college psychology teaching, the following conclusions can be drawn: university leaders place a certain degree of emphasis on college psychology courses, but teaching management needs further improvement; class sizes in some psychology majors are too large; the teaching objectives of some psychology courses lack requirements for students' social adaptation and scientific research; and most teachers' theoretical teaching content is not comprehensive enough and is too uniform. Discussion: Based on an in-depth analysis of the advantages of domestic SPOC platforms and flipped classrooms, this article analyses the course goals of psychological teaching in colleges and universities and constructs a teaching model for psychology courses based on the SPOC platform. Taking psychology teaching as its starting point, it discusses problems related to the flipped teaching of psychology, with the aim of sorting out the theoretical and practical issues of flipped teaching in psychology lessons and better adapting to psychology teaching in the new era with new technology.
-
-
-
Dynamic Feature Extraction Method of Phone Speakers Based on Deep Learning
Background: Nowadays, speech recognition has become one of the important technologies for human-computer interaction. Speech recognition is essentially a process of speech training and pattern recognition, which makes feature extraction particularly essential. The quality of feature extraction is directly related to the accuracy of speech recognition, and dynamic feature parameters can effectively improve that accuracy, which gives dynamic speech feature extraction a high research value. However, traditional dynamic feature extraction methods tend to generate redundant information, resulting in low recognition accuracy. Methods: Therefore, a new speech feature extraction method based on deep learning is proposed in the present study. Firstly, the speech signal is preprocessed by pre-emphasis, windowing, filtering, and endpoint detection. Then, the Sliding Differential Cepstral (SDC) feature, which contains voice information from the preceding and following frames, is extracted. Finally, this feature is used as the input to a deep self-encoding (autoencoder) neural network, which extracts dynamic features that represent the deep essence of the speech information. Results: The simulation results show that the dynamic features extracted by deep learning have better recognition performance than the original features and work well for speech recognition.
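For readers unfamiliar with SDC-style dynamic features, the sketch below (a generic illustration, not the authors' pipeline) stacks delta-cepstra computed at several frame offsets from a cepstral matrix; the d, p, k parameter names follow the common SDC convention and the input array is synthetic.

```python
import numpy as np

def sdc_features(cepstra, d=1, p=3, k=3):
    """Sliding/shifted delta-cepstral (SDC) dynamic features.

    cepstra : (num_frames, n_coeffs) cepstral matrix (e.g. MFCCs).
    d       : half-width used for each delta computation.
    p       : shift (in frames) between consecutive delta blocks.
    k       : number of delta blocks stacked per frame.
    Returns an array of shape (num_frames, n_coeffs * k).
    """
    num_frames, n_coeffs = cepstra.shape
    padded = np.pad(cepstra, ((d, d + (k - 1) * p), (0, 0)), mode="edge")
    blocks = []
    for j in range(k):
        shift = j * p
        # simple delta over the shifted window: c[t + shift + d] - c[t + shift - d]
        delta = (padded[shift + 2 * d: shift + 2 * d + num_frames]
                 - padded[shift: shift + num_frames])
        blocks.append(delta)
    return np.hstack(blocks)

# toy example with random "cepstra" standing in for a preprocessed utterance
mfcc = np.random.randn(200, 13)
dyn = sdc_features(mfcc, d=1, p=3, k=3)
print(dyn.shape)   # (200, 39)
```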
-
-
-
A Systematic Review on Various Reversible Data Hiding Techniques in Digital Images
Authors: Ankita Vaish and Shweta Jayswal
Nowadays, the Internet has become essential for living and for running businesses smoothly; it has made things simpler, such as online transactions, online shopping, sharing images, videos, audio and messages on social media, and uploading important information to Google Drive. The very first requirement, therefore, is to secure and protect digital content from unauthorized access. Reversible Data Hiding (RDH) is one way to provide security for digital content: useful information can be embedded in the digital content, and at the receiver's end both the cover media and the embedded message can be completely recovered. In this digital era, digital images are the media most widely used for communication, so the security of digital images is in high demand, and RDH in digital images has attracted a lot of interest during the last few decades. This paper presents a systematic review of various RDH techniques for digital images, which can be broadly classified into five categories: lossless compression based, histogram modification based, difference expansion based, interpolation based and encrypted-image based techniques.
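As a concrete taste of the histogram-modification category reviewed here, the sketch below implements the classic peak/zero-bin histogram-shifting idea on a grayscale array; it is a textbook illustration under simplifying assumptions (an empty zero bin exists to the right of the peak), not a method taken from any specific paper in the review.

```python
import numpy as np

def hs_embed(img, bits):
    """Histogram-shifting RDH: embed bits at the peak bin of the histogram."""
    img = img.astype(np.int32)
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(hist.argmax())                              # most frequent gray level
    zero = int(hist[peak + 1:].argmin()) + peak + 1        # empty bin to the right (assumed)
    out = img.copy()
    out[(img > peak) & (img < zero)] += 1                  # shift bins in (peak, zero) right by 1
    idx = np.flatnonzero(img == peak)                      # embeddable positions, scan order
    payload = bits[:len(idx)]
    out.ravel()[idx[:len(payload)]] += np.array(payload, dtype=np.int32)
    return out.astype(np.uint8), peak, zero

def hs_extract(stego, peak, zero):
    """Recover the bits and restore the original cover image exactly."""
    stego = stego.astype(np.int32)
    marks = np.flatnonzero((stego == peak) | (stego == peak + 1))
    bits = (stego.ravel()[marks] == peak + 1).astype(int)  # peak+1 encodes bit 1
    restored = stego.copy()
    restored[(stego > peak) & (stego <= zero)] -= 1        # undo the shift and embedding
    return bits.tolist(), restored.astype(np.uint8)

rng = np.random.default_rng(0)
cover = np.clip(rng.normal(100, 20, (64, 64)), 0, 255).astype(np.uint8)
stego, peak, zero = hs_embed(cover, [1, 0, 1, 1, 0, 1])
bits, restored = hs_extract(stego, peak, zero)
print(bits[:6], np.array_equal(restored, cover))           # embedded bits, True (reversible)
```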
-
-
-
Wavelet-Based Multi-Focus Image Fusion Using Average Method Noise Diffusion (AMND)
Authors: Prabhishek Singh and Manoj Diwakar
Aim: This paper presents a new and upgraded wavelet-based multi-focus image fusion technique using Average Method Noise Diffusion (AMND). Objective: The aim is to enhance the visual appearance of the final fused image, remove blurring, and make objects (fine edges) clearly visible. Methods: The method extends the standard wavelet-based image fusion technique for multi-focus images by incorporating the concepts of method noise and anisotropic diffusion. This hybrid structure is implemented as a post-processing operation in the proposed method. Results: The proposed work shows excellent results in terms of visual appearance and edge preservation. The experimental results of the proposed method are compared with some traditional and non-traditional methods, and the proposed method shows comparatively better results. Conclusion: In the field of image enhancement, this paper demonstrates the robustness, effectiveness and adaptive nature of method noise, especially for image fusion. The performance of the proposed method is analysed qualitatively (good visual appearance) and quantitatively (entropy, spatial frequency and standard deviation). The proposed method can be incorporated into real-time applications such as surveillance in visual sensor networks (VSNs).
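For readers who want to see the wavelet-fusion baseline that such a method builds on, here is a minimal PyWavelets sketch using common fusion rules (average of approximation coefficients, maximum-absolute-value of detail coefficients); the AMND post-processing described by the authors is not reproduced here, and the synthetic input images are placeholders.

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2", level=2):
    """Fuse two registered multi-focus images in the wavelet domain."""
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                     # average approximation band
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        # keep the detail coefficient with the larger magnitude (sharper focus)
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)

# toy example: two synthetic images "defocused" in complementary halves
rng = np.random.default_rng(0)
sharp = rng.random((128, 128))
a, b = sharp.copy(), sharp.copy()
a[:, 64:] = 0.5
b[:, :64] = 0.5
print(wavelet_fuse(a, b).shape)
```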
-
-
-
An Intelligent Artificial Bee Colony and Adaptive Bacterial Foraging Optimization Scheme for Reliable Breast Cancer Diagnosis
Authors: S. Punitha, A. Amuthan and K. S. Joseph
Background: Breast cancer, one of the major threats to women worldwide, needs to be detected at an early, localized stage to enhance the possibility of survival. Most intelligent approaches devised for breast cancer detection require expertise to reliably identify the patterns that indicate the presence of cancerous cells and to determine possible treatments for breast cancer patients in order to improve their chances of survival. Moreover, the majority of existing schemes in the literature are labor- and time-intensive, which has a significant impact on the time needed to diagnose breast cancer. Methods: An Intelligent Artificial Bee Colony and Adaptive Bacterial Foraging Optimization (IABC-ABFO) scheme is proposed to provide better local and global search ability when selecting the optimal feature subsets and the optimal parameters of the ANN used for breast cancer diagnosis. In the proposed IABC-ABFO approach, the traditional ABC algorithm used for cancer detection is improved by integrating an adaptive bacterial foraging process into the onlooker-bee and employed-bee phases, resulting in better exploitation and exploration. Results: An evaluation of the proposed IABC-ABFO approach on the Wisconsin breast cancer dataset showed a mean classification accuracy of 99.52%, which is higher than that of existing breast cancer detection schemes.
-
-
-
A Magic Wand Selection Tool for Surface of 3D Model
Authors: Bangquan Liu, Shaojun Zhu, Dechao Sun, Guangyu Zhou, Weihua Yang, Li Liu and Kai Chen
Introduction: Segmentation of 3D shapes is a fundamental problem in computer graphics and computer-aided design, and it has received much attention in recent years. The analysis and research methods for 3D mesh models have established reliable mathematical foundations in graphics and geometric modeling. Compared with color and texture, shape features describe the shape information of objects through geometric structure and play an important role in a wide range of applications, including mesh parameterization, skeleton extraction, resolution modeling, shape retrieval, character recognition, robot navigation, and many others. Methods: Interactive selection of model surfaces is mainly used for shape segmentation. The common approach is boundary-based selection, which requires the user to input strokes near the edge of the region to be selected or segmented. Chen et al. introduced an approach that joins user-specified points to form the boundaries for region segmentation on the surface. Funkhouser et al. improved the Dijkstra algorithm to find segmentation boundary contours. The graph cut algorithm uses the distance between the surface and its convex hull as the growing criterion to decompose a shape into meaningful components. The watershed algorithm, widely used for image segmentation, is a region-growing algorithm with multiple seed points. Wu and Levine use simulated electrical charge distributions over the mesh to deal with the 3D part segmentation problem. Other methods also use a watershed algorithm for surface decomposition. Results: We implemented our algorithm in C++ with OpenMP and conducted experiments on a PC with a 3.07 GHz Intel(R) Core(TM) i7 CPU and 6 GB memory. Our method obtains similar regions for different interaction vertices within a given region. Figures 6a and 6b show the tolerance-based region selection results of the algorithm for two different interaction points in a region of the kitten model, from which it can be observed that the obtained regions are similar for different vertices in the region. Figures 6c and 6d show two different interaction points in the same region, with region selection results obtained by the region-growing technique. Discussion: In this paper, we propose a novel magic wand selection tool for interactively selecting surface regions of a 3D model. The feature vector is constructed by extracting the HKS feature descriptor and the mean curvature of the 3D model surface, which allows users to input a feature tolerance value for region selection and improves interactivity. Many experiments show that our algorithm has obvious advantages in speed and effectiveness. The interactive generation of region boundaries is very useful for many applications, including model segmentation. Conclusion: Considering the requirements of user-friendliness and effectiveness in model region selection, a novel magic wand selection tool is proposed for interactive selection of 3D model surfaces. First, we pre-compute the heat kernel feature and mean curvature of the surface and form the feature vector of the model. Then, two ways of selecting regions are provided: one selects the region according to the feature tolerance value, and the other automatically selects the region that aligns with a stroke. Finally, we use a geometry optimization approach to improve the performance of computing region contours. Extensive experimental results show that our algorithm is efficient and effective.
-
-
-
Efficiently Computing Geodesic Loop for Interactive Segmentation of a 3D Mesh
Authors: Yun Meng, Shaojun Zhu, Bangquan Liu, Dechao Sun, Li Liu and Weihua Yang
Introduction: Shape segmentation is a fundamental problem in computer graphics and geometric modeling. Although segmentation algorithms for shapes have been widely studied in the mathematics community, little progress has been made on computing them interactively on polygonal surfaces using geodesic loops. Method: We compute geodesic distance fields with the improved Fast Marching Method (FMM) proposed by Xin and Wang. A new algorithm is proposed to compute geodesic loops over a triangulated surface, together with a new interactive shape segmentation approach. Result: The average computation time on a model with 50K vertices is less than 0.08 s. Discussion: In the future, we will use an exact geodesic algorithm and parallel computing techniques to improve our algorithm and obtain smoother geodesic loops. Conclusion: A large number of experimental results show that the algorithm proposed in this paper can effectively compute high-precision geodesic loop paths, and the method can also be used for interactive shape segmentation in real time.
-
-
-
Computing Salient Feature Points of 3D Model Based on Geodesic Distance and Decision Graph Clustering
Authors: Dechao Sun, Nenglun Chen, Renfang Wang, Bangquan Liu and Feng Liang
Introduction: Computing the salient feature points (SFP) of 3D models has important application value in the field of computer graphics. In order to extract SFP more effectively, a novel SFP computation algorithm based on geodesic distance and decision graph clustering is proposed. Method: Firstly, the geodesic distances of the model vertices are calculated based on the heat conduction equation, and then the average geodesic distance and importance weight of each vertex are calculated. Finally, the decision graph clustering method is used to compute the decision graph of the model vertices. Results and Discussion: 3D models from the SHREC 2011 dataset are selected to test the proposed algorithm. Compared with existing algorithms, this method calculates the SFP of a 3D model from a global perspective, and the results show that it is not affected by model posture or noise. Conclusion: Our method maps the SFP of the 3D model onto a 2D decision graph, which simplifies the SFP calculation process, improves calculation accuracy and possesses strong robustness.
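The decision-graph construction follows the density-peaks style of clustering: each point gets a local density and a distance to the nearest point of higher density, and points that score high on both stand out in a 2D decision graph. The sketch below illustrates that construction on a plain pairwise-distance matrix (Euclidean distances here stand in for the geodesic distances used by the authors); it is a generic illustration rather than their exact weighting.

```python
import numpy as np

def decision_graph(dist, dc):
    """Density-peaks style decision graph from a pairwise distance matrix.

    rho   : local density (Gaussian kernel with cutoff dc)
    delta : distance to the nearest point with higher density
    Points with large rho AND large delta are salient candidates.
    """
    n = dist.shape[0]
    rho = np.exp(-(dist / dc) ** 2).sum(axis=1) - 1.0   # exclude self-contribution
    order = np.argsort(-rho)                            # densest first
    delta = np.empty(n)
    delta[order[0]] = dist[order[0]].max()
    for i, p in enumerate(order[1:], start=1):
        delta[p] = dist[p, order[:i]].min()             # nearest higher-density point
    return rho, delta

# toy "vertices": two clusters in the plane, Euclidean distance as a stand-in
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
rho, delta = decision_graph(d, dc=0.5)
salient = np.argsort(-(rho * delta))[:2]                # top scorers ~ cluster centres
print(salient)
```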
-
-
-
Group DEMATEL Decision Method Based on Hesitant Fuzzy Linguistic Term Sets
Authors: Hui Xie, Qian Ren, Wanchun Duan, Yonghe Sun and Wei Han
Background: The Decision-Making Trial and Evaluation Laboratory (DEMATEL) is a practical and concise method for dealing with complicated socioeconomic system problems. However, the original DEMATEL has two defects: on the one hand, traditional expert preference expressions cannot reflect the hesitation and flexibility of experts; on the other hand, expert weights are usually set as equal weights, which cannot scientifically reflect the experts' academic backgrounds, capabilities, experience, risk preferences and so on. To solve these problems, a novel group DEMATEL decision method based on hesitant fuzzy linguistic term sets (HFLTSs) is proposed. Method: Firstly, experts judge the causal relationships among factors using linguistic expressions close to natural human expression, which can easily be transformed into HFLTSs. Next, the hybrid weights of the experts are calculated on the basis of the initial HFLTS direct influence matrices (HDIMs), according to the hesitant degree and the distance between two HDIMs, and each expert's information is aggregated using possibility degrees. Then, the new group DEMATEL decision method based on HFLTSs is constructed. Finally, an illustrative example is given and analysed to demonstrate the effectiveness and validity of the proposed approach. Results: This paper demonstrates that the heterogeneity of decision experts and the hesitation degree of expert information representation must be taken into account when determining the interactions of factors in complex systems with the DEMATEL method. Conclusion: This paper constructs a new, amended group DEMATEL method that provides a new way to integrate each expert's information through hybrid weights and possibility degrees. The method provides a reference for determining the importance of complex system factors more scientifically and objectively.
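For orientation, the classical crisp DEMATEL computation that the proposed HFLTS variant generalizes can be written in a few lines: normalize the aggregated direct-influence matrix, compute the total-relation matrix T = D(I - D)^(-1), and read off prominence (R + C) and relation (R - C). The sketch below uses a small made-up influence matrix, not data from the paper.

```python
import numpy as np

# Aggregated direct-influence matrix among 4 factors (illustrative values only)
A = np.array([[0, 3, 2, 1],
              [1, 0, 3, 2],
              [2, 1, 0, 3],
              [3, 2, 1, 0]], dtype=float)

# Normalize so that the largest row/column sum becomes 1
s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
D = A / s

# Total-relation matrix T = D (I - D)^(-1)
T = D @ np.linalg.inv(np.eye(len(A)) - D)

R = T.sum(axis=1)        # influence dispatched by each factor
C = T.sum(axis=0)        # influence received by each factor
prominence = R + C       # how important the factor is in the system
relation = R - C         # > 0: net cause, < 0: net effect
for i, (p, r) in enumerate(zip(prominence, relation)):
    print(f"factor {i + 1}: prominence={p:.3f}  relation={r:+.3f}")
```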
-
-
-
A Two-Sided Matching Method for Green Suppliers and Manufacturers with Intuitionistic Linguistic Preference Information
Authors: Lan-lin Wang, Zhi Liu, Yue-ling Zheng and Feng-juan Gu
Purpose: Existing methodologies for two-sided matching seldom consider the asymmetry, uncertainty, and fuzziness of preference information; therefore, this study aims to develop a methodology for solving the selection process between green suppliers and manufacturers using intuitionistic linguistic numbers. Methods: This study first constructs evaluation indicators for both sides, which are expressed as intuitionistic linguistic numbers. Subsequently, we redefine the expected function of intuitionistic linguistic numbers based on regret theory. By considering the psychological behaviour arising from decision makers' regret aversion, the study constructs comprehensive perceived values for the decision makers. Furthermore, by maximizing the comprehensive perceived values for the two sides, a multi-objective matching model is established. In addition, this study adopts a min-max method to transform the multi-objective optimization model into a single-objective model. Conclusion: This study considers the fuzziness and hesitancy of the preference information in addition to the psychological behaviour arising from the regret aversion of decision makers. The two-sided matching method proposed in this paper is more valid and effective than existing methods.
-
-
-
Explore the Optimal Node Degree of Interfirm Network for Efficient Knowledge Sharing
Authors: Houxing Tang, Fang Fang and Zhenzhong Ma
Background: Network structure is a critical issue for efficient inter-firm knowledge sharing, and the optimal node degree plays a major role because it is generally regarded as a core proxy of network structural characteristics. This paper aims to examine what node degree makes a network structure efficient. Methods: Based on an interaction rule combining the barter rule and the gift rule, this study first describes and then builds a knowledge diffusion process. Using four factors, namely network size, network randomness, the knowledge endowment of the network, and the knowledge stock of each firm, we then examine the factors that influence the optimal node degree for efficient knowledge sharing. Results: The simulation results show that the optimal node degree can be determined as external factors change. Furthermore, changing the network randomness and network size has little impact on the optimal node degree, whereas both the knowledge endowment of the network and the knowledge stock of each firm have a significant impact on it. Conclusion: An optimal node degree can always be found under any condition, which confirms the existence of a balanced state. Thus, policymakers can determine the appropriate number of links to avoid redundancy and reduce cost in interfirm networks. We also examined how different factors influence the size of the optimal node degree, so policymakers can set an appropriate number of links under different situations.
-
-
-
Fingerprint Presentation Attack Detection in Open-Set Scenario Using Transient Liveness Factor
Authors: Akhilesh Verma, Vijay K. Gupta and Savita Goel
Background: In recent years, fingerprint presentation attack detection (FPAD) proposals have appeared in a variety of forms. Closed-set approaches use a pattern classification technique that best suits a specific context and goal; open-set approaches work in a wider context and are relatively robust to new fabrication materials and independent of sensor type. In both cases, results have been promising but not very generalizable, because unseen conditions do not fit the methods used. Clearly, the two key challenges for FPAD systems, sensor interoperability and robustness to new fabrication materials, have not yet been addressed. Objective: To address these challenges, a liveness detection model is proposed that uses live samples with a transient liveness factor and a one-class CNN. Methods: In our architecture, liveness is predicted using a fusion rule, namely score-level fusion of two decisions. Initially, 'n' high-quality live samples are trained for quality. We observe that fingerprint liveness information is transitory in nature, so variation among different live samples is natural; thus each live sample carries 'transient liveness' (TL) information. We use a no-reference (NR) image quality measure (IQM) as the transient value corresponding to each live sample, and a consensus agreement is reached collectively over the transient values to predict adversarial input. In addition, live samples at the server are trained, with augmented inputs, on a one-class classifier to predict outliers. Score-level fusion of the consensus agreement and the appropriately characterized negative cases (outliers) then predicts liveness. Results: Our approach uses only 30 high-quality live samples out of the 90 images available in the dataset, to reduce learning time. We used time-series images from the LivDet 2015 competition, which provides 90 live images and 45 spoof images made from Body Double, Ecoflex and Playdoh for each person. The fusion rule achieves 100% accuracy in recognising live samples as live. Conclusion: We have presented an architecture with a liveness server for extracting and updating the transient liveness factor. Our work is a significant step towards a generalized and reproducible process, with consideration of a universal scheme as a present-day need. The proposed TLF approach rests on a solid premise and is expected to address dataset heterogeneity, as it incorporates a wider context. Similar results on other datasets are under validation. Implementation currently seems difficult but has several advantages when carried out as part of a transformative process.
-
-
-
Study and Analysis of User Desired Image Retrieval
Authors: John B. P and S. Janakiraman
Background: In the present digital world, Content-Based Image Retrieval (CBIR) has gained significant importance. In this context, image processing technology has become highly sought after, and its demand has increased to a large extent; the rapid growth of computer technology offers a platform for image processing applications. Well-known image retrieval techniques include (1) Text-Based Image Retrieval (TBIR), (2) Content-Based Image Retrieval (CBIR) and (3) Semantic-Based Image Retrieval (SBIR). In the recent past, many researchers have conducted extensive research in the field of CBIR; however, image retrieval and characterization remain substantial problems whose techniques need to be developed progressively. Hence, by bringing together the research conducted in recent years, this survey makes a comprehensive attempt to review the state of the art in this field. Aims: This paper aims to retrieve similar images according to visual properties, defined as shape, colour, texture and edge detection. Objective: This study investigates CBIR for this task because of its essential and fundamental problems; present and future trends are addressed to show contributions and directions and to inspire more research on CBIR methods. Methods: This paper briefly reviews the significance of CBIR and its related developments, including edge detection techniques, various distance metrics (DM), performance measurements and various kinds of datasets, and shows possible ways to overcome the difficulties concerning re-ranking strategies with improved accuracy. Results: We present an in-depth analysis of the state of the art in CBIR methods, explain the methods based on colour, texture, shape and edge detection together with performance evaluation metrics, and discuss some significant future research directions. Conclusion: Finally, we propose another technique for combining different features in a CBIR framework that can give better results than the current strategies.
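As a minimal example of the colour-feature retrieval pipeline discussed in this survey, the sketch below indexes images by normalized colour histograms and ranks them by chi-square distance to the query; it is a generic baseline on synthetic data, not any specific method from the reviewed literature.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Joint RGB histogram (bins^3 dims), L1-normalized, for an HxWx3 uint8 image."""
    h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3, range=((0, 256),) * 3)
    h = h.ravel()
    return h / h.sum()

def chi_square(a, b, eps=1e-10):
    return 0.5 * np.sum((a - b) ** 2 / (a + b + eps))

# toy database of random "images"; a real system would load image files instead
rng = np.random.default_rng(0)
database = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(20)]
index = np.stack([color_histogram(im) for im in database])

query = database[7]                               # query with a known image
q = color_histogram(query)
ranking = np.argsort([chi_square(q, h) for h in index])
print("top-3 matches:", ranking[:3])              # image 7 should rank first
```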
-
-
-
Risk Factor Identification, Classification and Prediction Summary of Chronic Kidney Disease
Authors: Pramila Arulanthu and Eswaran Perumal
The data generated by medical equipment is huge and loaded with valuable information, and such datasets require effective classification for accurate prediction. Predicting health issues is an extremely difficult task, and Chronic Kidney Disease (CKD) is one of the most unpredictable diseases in the medical field. Some medical experts do not have identical awareness and skills for solving their patients' issues, many may reach unsubstantiated conclusions in disease diagnosis, and sometimes patients lose their lives owing to disease severity. As per the Global Burden of Disease report, death by CKD ranked 17th among the causes of death globally in GBD 2015, whereas GBD 2010 ranked it 27th. Death by CKD constituted 2.9% of all deaths between 2010 and 2013 among people aged 15 to 69. As per the World Health Organization (WHO, 2005) report, CKD has been the primary reason behind the death of 58 million people. Hence, this article presents a state-of-the-art review of the classification and prediction of Chronic Kidney Disease. Normally, advanced data mining techniques, fuzzy methods and machine learning algorithms are used to classify medical data and diagnose disease. This study reviews and summarizes many classification techniques and disease diagnosis methods presented earlier. The main intention of this review is to point out and address some of the issues and complications of the existing methods, and it also discusses the limitations and accuracy levels of the existing CKD classification and disease diagnosis methods.
-
-
-
A Novel Hybrid Approach for Multi-Objective Bi-Clustering in Microarray Data
Authors: Naveen Trivedi and Suvendu Kanungo
Background: Today, bi-clustering techniques play a vital role in analysing gene expression data from microarray technology. Such a technique clusters both the rows and the columns of the expression data simultaneously, determining the expression levels of a set of genes under a subset of conditions or samples. The obtained information is collected in the form of sub-matrices of microarray data that satisfy coherent expression patterns of subsets of genes with respect to subsets of conditions; these sub-matrices are called bi-clusters, and the overall process is called bi-clustering. In this paper, we propose a new meta-heuristic hybrid, ABC-MWOA-CC, based on the artificial bee colony (ABC), the modified whale optimization algorithm (MWOA) and the Cheng and Church (CC) algorithm, to optimize the extracted bi-clusters. To validate the algorithm, we also investigate the statistical and biological relevance of the extracted genes with respect to various conditions, since most bi-clustering techniques do not address the biological significance of the genes in the extracted bi-clusters. Objective: The major aim of the proposed work is to design and develop a novel hybrid multi-objective bi-clustering approach for microarray data that produces the desired number of valid bi-clusters and then optimizes these extracted bi-clusters to obtain an optimal solution. Method: In the proposed approach, a hybrid multi-objective bi-clustering algorithm based on ABC along with MWOA is recommended to group the data into the desired number of bi-clusters. Further, the ABC with MWOA multi-objective optimization algorithm is applied to optimize the solutions using a variety of fitness functions. Results: In the analysis of the results, the multi-objective functions employed in the fitness calculation, namely Volume Mean (VM), Mean of Genes (GM), Mean of Conditions (CM) and Mean of MSR (MMSR), improve the performance of the CC bi-clustering algorithm on real-life datasets such as the Yeast Saccharomyces cerevisiae cell cycle gene expression dataset. Conclusion: The effectiveness of the ABC-MWOA-CC algorithm is comprehensively demonstrated by comparing it with the well-known traditional ABC-CC, OPSM and CC algorithms in terms of VM, GM, CM and MMSR.
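Since the CC component of this hybrid is built around the mean squared residue (MSR) score, a short reference implementation of that score may help; the sketch below computes the MSR of a candidate bi-cluster (a sub-matrix selected by row and column index sets) on synthetic data and is a textbook formulation rather than the authors' code.

```python
import numpy as np

def mean_squared_residue(data, rows, cols):
    """Cheng & Church MSR of the bi-cluster data[rows][:, cols].

    Lower MSR means a more coherent bi-cluster (MSR = 0 for a perfectly
    additive pattern a_ij = row_effect_i + col_effect_j + constant).
    """
    sub = data[np.ix_(rows, cols)]
    row_mean = sub.mean(axis=1, keepdims=True)
    col_mean = sub.mean(axis=0, keepdims=True)
    all_mean = sub.mean()
    residue = sub - row_mean - col_mean + all_mean
    return float((residue ** 2).mean())

# toy expression matrix: 20 genes x 10 conditions with a planted additive bi-cluster
rng = np.random.default_rng(0)
expr = rng.normal(0, 1, (20, 10))
rows, cols = [1, 3, 5, 7], [0, 2, 4]
expr[np.ix_(rows, cols)] = rng.normal(0, 1, (4, 1)) + rng.normal(0, 1, (1, 3))
print("planted bi-cluster MSR:", mean_squared_residue(expr, rows, cols))   # ~0
print("random sub-matrix MSR :", mean_squared_residue(expr, [0, 2, 4, 6], [1, 3, 5]))
```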
-
-
-
Coral Reef Classification Using Improved WLD Feature Extraction with Convolution Neural Network Classification
Authors: M. A. Paul, P. A. J. Rani and J. Evangelin Deva Sheela
In this paper, the Improved Weber Local Descriptor (IWLD), a powerful texture descriptor, is employed for coral reef annotation, and its role in coral reef classification is analysed. Background: Coral reefs are among the oldest and most dynamic ecosystems in the world. Manual annotation of coral reefs is impractical because human labeling lacks consistency and objectivity. Objective: Manual annotation consumes an enormous number of person-hours to annotate coral images and video frames; a representative survey states that more than 400 person-hours are required to annotate 1000 images. Moreover, some coral species have different shapes, sizes and colours, while most corals seem indistinguishable to the human eye. In order to avoid contradictory classifications, an expert system that can automatically annotate corals is essential to improve classification accuracy. Method: The proposed improved WLD extracts texture features from six combinations of colour channels, namely the (1) R, (2) G, (3) B, (4) RG, (5) GB and (6) BR channels of an image, in a holistic way while preserving their relations. The extracted features are analysed and classified using a CNN classifier. Results: Experiments are carried out with the EILAT, RSMAS, EILAT 2 and MLC2008 datasets, and the proposed improved WLD based coral reef classification is found to be effective; in terms of accuracy, the improved WLD demonstrates higher accuracy than other state-of-the-art techniques. Conclusion: This paper analyses the role of the Improved WLD for feature extraction in coral reef classification using the EILAT, RSMAS, EILAT 2 and MLC2008 datasets, and the proposed IWLD based classifier gives promising results for coral reef classification.
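To make the texture-descriptor side concrete, here is a minimal sketch of the two classical WLD components, differential excitation and gradient orientation, computed per pixel on a single channel; applying it to the six channel combinations and feeding the resulting features to a CNN, as the paper does, is left out, and the input patch is synthetic.

```python
import numpy as np
from scipy.ndimage import convolve

def wld_components(channel):
    """Classical Weber Local Descriptor components for one image channel.

    xi    : differential excitation, arctan(sum of neighbour differences / centre)
    theta : gradient orientation, arctan2(vertical diff, horizontal diff)
    """
    img = channel.astype(float) + 1e-6                   # avoid division by zero
    neighbour_sum = convolve(img, np.ones((3, 3)), mode="reflect") - img
    xi = np.arctan((neighbour_sum - 8.0 * img) / img)    # sum of (x_i - x_c) over 8 neighbours
    gy = convolve(img, np.array([[0, -1, 0], [0, 0, 0], [0, 1, 0]], dtype=float), mode="reflect")
    gx = convolve(img, np.array([[0, 0, 0], [-1, 0, 1], [0, 0, 0]], dtype=float), mode="reflect")
    theta = np.arctan2(gy, gx)
    return xi, theta

# toy single-channel "coral patch"; a 2D histogram of (xi, theta) would form the feature
patch = (np.random.rand(32, 32) * 255).astype(np.uint8)
xi, theta = wld_components(patch)
print(xi.shape, theta.shape, float(xi.mean()))
```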
-
-
-
Binary Grasshopper Optimization Based Feature Selection for Intrusion Detection System Using Feed Forward Neural Network Classifier
Authors: M. Jeyakarthic and A. Thirumalairaj
Background: The occurrence of intrusions and attacks has increased tremendously in recent years due to the ever-growing technological advancements in the Internet and networking domains. Intrusion Detection Systems (IDS) are employed nowadays to prevent distinct attacks, and several machine learning approaches have been presented for IDS classification. However, IDS suffer from dimensionality issues that increase complexity and reduce resource utilization; consequently, it becomes necessary to identify the significant features of the data in order to reduce dimensionality. Aim: In this article, a new feature selection (FS) based classification system is presented which performs both FS and classification. Methods: In this study, a binary variant of the Grasshopper Optimization Algorithm, called BGOA, is applied as the FS model to retain the useful features and discard the useless ones. The chosen features are given to a Feed-Forward Neural Network (FFNN) model for training and testing on the KDD99 dataset. Results: The presented model was validated using the benchmark KDD Cup 1999 dataset. With the inclusion of the FS process, the classifier results improved, attaining an FPR of 0.43, an FNR of 0.45, a sensitivity of 99.55, a specificity of 99.57, an accuracy of 99.56, an F-score of 99.59 and a kappa value of 99.11. Conclusion: The experimental outcome confirmed the superior performance of the presented model compared to diverse models in several respects, showing it to be an appropriate tool for detecting intrusions.
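The evaluation metrics reported above can all be derived from a binary confusion matrix; the short sketch below shows the computation on dummy predictions so readers can reproduce the metric definitions (the values it prints are illustrative, not those of the paper).

```python
import numpy as np

def ids_metrics(y_true, y_pred):
    """Binary detection metrics (attack = 1, normal = 0) from a confusion matrix."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sens = tp / (tp + fn)                    # sensitivity / recall / detection rate
    spec = tn / (tn + fp)                    # specificity
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp)
    f1 = 2 * prec * sens / (prec + sens)
    # Cohen's kappa: agreement corrected for chance
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / len(y_true) ** 2
    kappa = (acc - pe) / (1 - pe)
    return {"FPR": fp / (fp + tn), "FNR": fn / (fn + tp), "sensitivity": sens,
            "specificity": spec, "accuracy": acc, "F-score": f1, "kappa": kappa}

y_true = np.random.randint(0, 2, 1000)
y_pred = np.where(np.random.rand(1000) < 0.95, y_true, 1 - y_true)  # ~95% correct
print(ids_metrics(y_true, y_pred))
```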
-
-
-
Viability Prediction of Smart Meter Installation to Prevent Non-Technical Losses Using Naïve Bayes Classifier
Authors: Hadiza Umar, Rajesh Prasad and Mathias Fonkam
Background: Energy regulators across the world have resolved to curtail the liability of non-technical losses (NTLs) in power by implementing smart meters to measure consumed power. However, power regulators in developing countries are confronted with a huge metering gap in an era of unprecedented energy theft, which has resulted in revenue deficits, an increase in debts and, subsequently, power cuts. Objective: The objective of this research is to predict whether unmetered customers are eligible to be metered, by identifying customers who are worthy or unworthy of metering given their bill payment history. Methods: The approach analyses the performance accuracy of several machine learning algorithms on small datasets by exploring the classification abilities of deep learning, Naïve Bayes, Support Vector Machine and Extreme Learning Machine, using data obtained from an electricity distribution company in Nigeria. Results: The performance analysis shows that the Naïve Bayes classifier outperformed the deep learning, Support Vector Machine and Extreme Learning Machine algorithms. The deep learning experiments also showed that altering the batch size has a significant effect on the outputs. Conclusion: This paper presents a data-driven methodology for predicting consumers' eligibility to be metered. The research has analysed the performance of deep learning, Naïve Bayes, SVM and ELM on a small dataset. It is anticipated that this research will help utility companies in developing countries with large populations and huge metering gaps to prioritise the installation of smart meters based on consumers' payment history.
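A minimal version of the Naïve Bayes step can be set up with scikit-learn in a few lines; the payment-history features below (months paid on time, average delay, outstanding ratio) and the labelling rule are invented placeholders for the utility's real attributes, not the dataset used in the paper.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for bill-payment history; label 1 = worthy of a smart meter.
rng = np.random.default_rng(42)
n = 500
X = np.column_stack([rng.integers(0, 13, n),          # months paid on time (last year)
                     rng.exponential(10, n),          # average payment delay in days
                     rng.random(n)])                  # unpaid / billed ratio
y = ((X[:, 0] >= 8) & (X[:, 2] < 0.4)).astype(int)    # toy labelling rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("prediction for a new customer:", clf.predict([[10, 4.0, 0.1]]))
```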
-
-
-
Design of PSK Based Trusted DTLS for Smart Sensor Nodes
Authors: Anil Yadav, Sujata Pandey, Rajat Singh and Nitin Rakesh
Background: RSA-based key exchange is a heavy and time-consuming process, as it involves numerous message exchanges between a client and the server. The pre-shared key (PSK) based handshake attempts to reduce the number of messages exchanged between the client and the server during key exchange. Method: This paper extends the TEE-enabled DTLS handshake design based on RSA to a TEE-enabled pre-shared-key based handshake. The DTLS client and server install the pre-shared key in advance so that message exchanges can be reduced during session key generation. Result: The authors significantly reduce the handshake-time penalty by fine-tuning the tdtls algorithm for the PSK-based handshake; on average, the gain is over 2 ms (about 50%, from 3.5 ms to 1.5 ms) across various cipher suites. Conclusion: The tdtls approach increases the security of the session key and its intermediate keying materials, which is a substantial gain compared to the minor increase in handshake time. The algorithm ensures end-to-end security for the PSK-based session key and its keying materials between a DTLS client and server.
-
-
-
IoT with Cloud-Based End to End Secured Disease Diagnosis Model Using Light Weight Cryptography and Gradient Boosting Tree
By K. Shankar
Background: With the evolution of the Internet of Things (IoT) and its associated technologies and devices in the medical domain, the different characteristics of online healthcare applications have become advantageous for human wellbeing. At present, various e-healthcare applications offer online services in diverse dimensions using IoT. Aim: The objective of this paper is to present an IoT and cloud-based secure disease diagnosis model. Method: In this paper, an efficient IoT and cloud-based secure classification model is proposed for disease diagnosis, through which people can avail themselves of efficient and secure services globally over online healthcare applications. The presented model includes effective Gradient Boosting Tree (GBT) based data classification and a lightweight cryptographic technique named Rectangle. The presented GBT-R model offers better diagnosis in a secure way. Results: The proposed model was validated using the Pima Indians diabetes data, and extensive simulation was conducted to prove the consistency of the employed GBT-R model. Conclusion: The experimental outcome strongly suggests that the presented model shows maximum performance, with an accuracy of 94.92%.
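The classification half of such a pipeline can be reproduced with a standard gradient-boosted tree model; the sketch below trains scikit-learn's GradientBoostingClassifier on synthetic data shaped like the Pima Indians diabetes set (8 features, binary outcome) and leaves the Rectangle lightweight cipher and the IoT/cloud transport out of scope.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the Pima Indians diabetes data: 8 features, binary label.
X, y = make_classification(n_samples=768, n_features=8, n_informative=5,
                           n_redundant=1, weights=[0.65, 0.35], random_state=7)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=7)

# Gradient Boosting Tree classifier (the "GBT" part of the GBT-R model)
gbt = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05,
                                 max_depth=3, random_state=7)
gbt.fit(X_tr, y_tr)
print("hold-out accuracy:", round(accuracy_score(y_te, gbt.predict(X_te)), 4))
```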
-
-
-
Empirical Evaluation of NoSQL and Relational Database Systems
Authors: Shivangi Kanchan, Parmeet Kaur and Pranjal Apoorva
Aim: To evaluate the performance of relational and NoSQL databases in terms of execution time and memory consumption during operations on structured data. Objective: To outline the criteria that decision makers should consider when choosing the database best suited to an application. Methods: Extensive experiments were performed on MySQL, MongoDB, Cassandra and Redis, using data from an IMDB movies schema prorated into four datasets of 1000, 10000, 25000 and 50000 records. The experiments involved typical database operations, insertion, deletion, update and read of records, with and without indexing, as well as aggregation operations. Database performance was evaluated by measuring the time taken for operations and computing memory usage. Results: Redis provides the best performance for write, update and delete operations in terms of elapsed time and memory usage, whereas MongoDB gives the worst performance when the size of the data increases, due to its locking mechanism. For read operations, Redis provides better latency than Cassandra and MongoDB, while MySQL shows the worst performance due to its relational architecture. On the other hand, MongoDB shows the best performance among all the databases in terms of efficient memory usage. Indexing improves the performance of any database only for covered queries. Redis and MongoDB perform well for range-based queries and for fetching complete data in terms of elapsed time, whereas MySQL performs worst; MySQL provides better performance for aggregate functions, and NoSQL is not suitable for complex queries and aggregate functions. Conclusion: The extensive empirical analysis shows that NoSQL outperforms SQL-based systems in terms of basic read and write operations. However, SQL-based systems are better if queries on the dataset mainly involve aggregation operations.
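A simple way to reproduce this kind of elapsed-time comparison is to wrap identical insert, read and aggregation workloads in a timer for each target database. The sketch below does this for SQLite (a relational engine available everywhere, standing in for MySQL) purely as a measurement harness, not the authors' benchmark; the same wrapper could be pointed at MongoDB, Redis or Cassandra client calls.

```python
import sqlite3
import time

def timed(label, fn):
    """Run fn(), report elapsed wall-clock time, and return its result."""
    start = time.perf_counter()
    result = fn()
    print(f"{label:<22s}{(time.perf_counter() - start) * 1000:8.2f} ms")
    return result

records = [(i, f"movie_{i}", 1950 + i % 70) for i in range(50_000)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE movies (id INTEGER PRIMARY KEY, title TEXT, year INTEGER)")

def insert_all():
    conn.executemany("INSERT INTO movies VALUES (?, ?, ?)", records)
    conn.commit()

def read_by_year():
    return conn.execute("SELECT COUNT(*) FROM movies WHERE year = 1999").fetchone()

def aggregate():
    return conn.execute("SELECT year, COUNT(*) FROM movies GROUP BY year").fetchall()

timed("insert 50k rows", insert_all)
timed("filtered read", read_by_year)
timed("aggregation", aggregate)
# Pointing timed() at pymongo / redis-py / cassandra-driver calls with the same
# dataset would build a cross-database comparison like the one reported above.
```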
-
-
-
Identification of Coronary Artery Disease using Artificial Neural Network and Case-Based Reasoning
Authors: Varun Sapra, M. L. Saini and Luxmi Verma
Background: Cardiovascular diseases are increasing at an alarming rate, with a very high rate of mortality. Coronary artery disease is one type of cardiovascular disease that is not easily diagnosed in its early stage; prevention is possible only if the disease is diagnosed early and proper medication is given. Objective: An effective diagnosis model is important not only for early diagnosis but also for assessing the severity of the disease. Method: In this paper, a hybrid approach is followed that integrates deep learning (a multilayer perceptron) with case-based reasoning to design an analytical framework. The study has two phases: in the first, the patient is diagnosed for coronary artery disease, and in the second, if the patient is found to be suffering from the disease, case-based reasoning is employed to assess its severity. In the first phase, a multilayer perceptron is implemented on a reduced dataset with time-based learning for stochastic gradient descent. Results: Classification accuracy increased by 4.18% on the reduced dataset using a deep neural network with time-based learning. In the second phase, when a patient was diagnosed positive for coronary artery disease, the case-based reasoning system retrieved the most similar case from the case base to predict the severity of the disease for that patient. The CBR model achieved 97.3% accuracy. Conclusion: The model can be very useful for medical practitioners, supporting the decision-making process; it can save patients from unnecessary medical expenses on costly tests and can improve the quality and effectiveness of medical treatment.
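The two-phase idea (a neural network flags the disease, then case-based reasoning retrieves the most similar past case to gauge severity) can be sketched with scikit-learn as below; the synthetic data, the severity labels and the cosine-similarity retrieval rule are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.metrics.pairwise import cosine_similarity

# Phase 0: synthetic patient records (13 features, label 1 = coronary artery disease)
X, y = make_classification(n_samples=600, n_features=13, n_informative=8, random_state=3)
severity = np.random.default_rng(3).integers(1, 5, size=len(X))   # toy severity grades 1..4

# Phase 1: multilayer perceptron screens for the disease
mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=3)
mlp.fit(X[:500], y[:500])

# Phase 2: case-based reasoning; the case base holds the positive training patients
case_base = X[:500][y[:500] == 1]
case_severity = severity[:500][y[:500] == 1]

for patient in X[500:505]:
    if mlp.predict(patient.reshape(1, -1))[0] == 1:
        sims = cosine_similarity(patient.reshape(1, -1), case_base)[0]
        nearest = int(np.argmax(sims))               # retrieve the most similar case
        print("CAD suspected, predicted severity:", int(case_severity[nearest]))
    else:
        print("no coronary artery disease predicted")
```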
-
-
-
Time Series Features Extraction and Forecast from Multi-feature Stocks with Hybrid Deep Neural Networks
In this paper, we use LSTM and LSTM-CNN models to predict the rise and fall of stock data, and we show that LSTM-based models are powerful tools for time-series stock data forecasting. Background: Forecasting time-series stock data is important in financial work. Stock data usually have multiple features, such as opening price, closing price and so on. Traditional forecasting methods, however, are mainly applied to one feature, the closing price, or to a few features, such as four or five, so the massive information hidden in multi-feature data is not thoroughly discovered and used. Objective: The study aimed to find a method that makes use of all the information in the multi-feature data and to obtain a forecasting model. Method: LSTM-based models are introduced in this paper. For comparison, three models are used: a single LSTM model, a hybrid LSTM-CNN model, and a traditional ARIMA model. Results: Experiments with the different models were performed on stock data with 50 and 230 features, respectively. The results showed that the MSE of the single LSTM model was 2.4% lower than that of the ARIMA model, and the MSE of the LSTM-CNN model was 12.57% lower than that of the single LSTM model on the 50-feature data. On the 230-feature data, the LSTM-CNN model improved forecast accuracy by 23.41%. Conclusion: In this paper, we used three different models, ARIMA, single LSTM and a hybrid LSTM-CNN model, to forecast the rise and fall of multi-feature stock data. We found that the single LSTM model is better than the traditional ARIMA model on average, and that the hybrid LSTM-CNN model is better than the single LSTM model on the 50-feature stock data. Moreover, experiments with the LSTM-CNN model on stock data with 50 and 230 features showed that the results on the 230-feature data were better than those on the 50-feature data. Our work shows that the hybrid LSTM-CNN model is better than the other models and that experiments on stock data with more features can produce better outcomes. We will carry out more work on hybrid models next.
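A compact version of the hybrid architecture can be expressed in Keras as a Conv1D feature extractor followed by an LSTM and a sigmoid output for the rise/fall label; the window length, feature count, layer sizes and toy labelling below are placeholders rather than the configuration used in the paper.

```python
import numpy as np
from tensorflow.keras import layers, models

TIME_STEPS, N_FEATURES = 30, 50        # 30-day windows of 50 stock features (assumed)

model = models.Sequential([
    layers.Input(shape=(TIME_STEPS, N_FEATURES)),
    layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),  # local patterns
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),                                                      # temporal dependencies
    layers.Dense(1, activation="sigmoid"),                                # P(rise)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# toy data: random windows labelled by whether the first feature rose on the last day
rng = np.random.default_rng(0)
X = rng.normal(size=(256, TIME_STEPS, N_FEATURES)).astype("float32")
y = (X[:, -1, 0] > X[:, -2, 0]).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:3], verbose=0).ravel())
```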
-
-
-
Some Methods for Constructing Infinite Families of Quasi-Strongly Regular Graphs
Authors: Gholam H. Shirdel and Adel Asgari
Objective: In this article, we examine some methods of constructing infinite families of quasi-strongly regular graphs. We obtain a necessary condition for the composition of several graphs to be quasi-strongly regular and use it to construct some infinite families of quasi-strongly regular graphs; we also construct infinite families of quasi-strongly regular graphs using the Cartesian product of two graphs. Introduction: A regular graph is called strongly regular if the number of common neighbours of any two adjacent vertices is a non-negative integer λ and the number of common neighbours of any two non-adjacent vertices is a non-negative integer μ. Strongly regular graphs were introduced in 1963; since then, the study of these graphs and of methods for constructing them has been an important part of graph theory, with two major branches of study. Methods: A pairwise balanced incomplete block design (PBIBD) is a collection β of subsets of a v-set X, called blocks, such that every pair of elements of X appears in exactly λ blocks. If each block has k elements, the design is called a 2-(v,k,λ) design, or simply a 2-design or block design. We denote the number of blocks in β by b, and it is easy to see that for each element x of X the number of blocks containing x is a constant, denoted by r. Result: We use a method of constructing new graphs from old ones, introduced here and called the composition of graphs. A block design is usually displayed as an array in which each column represents a block. Discussion: Interesting graphs with certain properties closely related to strongly regular graphs and quasi-strongly regular graphs have been introduced. Conclusion: Strongly regular graphs are an important and interesting family of graphs that have been generalized in a variety of ways; for example, strongly regular digraphs, (λ, μ)-graphs and quasi-strongly regular graphs are some generalizations of these graphs. The present article, in addition to reviewing several methods of constructing strongly regular graphs, constructs some infinite families of quasi-strongly regular graphs.
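For reference, the defining conditions mentioned in the Introduction can be written compactly; the display below states the strongly regular condition and the quasi-strongly regular relaxation, in which the single value μ is replaced by a finite set of admissible values (standard definitions, not notation specific to this paper).

```latex
\[
\text{srg}(n,k,\lambda,\mu):\qquad
|N(u)\cap N(v)| =
\begin{cases}
\lambda, & u \sim v,\\[2pt]
\mu, & u \nsim v,\ u \neq v,
\end{cases}
\]
\[
\text{QSR}(n,k,\lambda;\mu_1,\dots,\mu_c):\qquad
|N(u)\cap N(v)| =
\begin{cases}
\lambda, & u \sim v,\\[2pt]
\mu_i \in \{\mu_1,\dots,\mu_c\}, & u \nsim v,\ u \neq v,
\end{cases}
\]
where the graph is $k$-regular on $n$ vertices and $N(u)$ denotes the neighbourhood of $u$.
```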
-
-
-
Adaptive Privacy Preservation Approach for Big Data Publishing in Cloud using k-anonymization
Authors: Suman Madan and Puneet Goswami
Background: Big data is an emerging technology with numerous applications in fields such as hospitals, government records, social media sites, and so on. As cloud computing can transfer large amounts of data through servers, it has become important for big data; hence, it is important in cloud computing to protect the data so that third-party users cannot access users' information. Methods: This paper develops an anonymization model and an adaptive Dragon Particle Swarm Optimization (adaptive Dragon-PSO) algorithm for privacy preservation in the cloud environment. The proposed adaptive Dragon-PSO integrates an adaptive strategy into the dragon-PSO algorithm, which itself combines the Dragonfly Algorithm (DA) and Particle Swarm Optimization (PSO). The proposed method derives a fitness function for the adaptive Dragon-PSO algorithm so as to attain high values of both privacy and utility. The performance of the proposed method was evaluated using metrics such as information loss and classification accuracy for different anonymization constant values. Conclusion: The proposed method provided minimal information loss and maximal classification accuracy of 0.0110 and 0.7415, respectively, when compared with existing methods.
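To ground the privacy side, the sketch below checks k-anonymity over a set of quasi-identifiers and reports a simple generalization-based information-loss proxy; the column names and data are invented, and the Dragon-PSO search itself is not reproduced.

```python
import pandas as pd

# Toy table to be published; quasi-identifiers are age and zipcode (invented columns).
data = pd.DataFrame({
    "age":     [23, 27, 25, 41, 45, 43, 61, 66],
    "zipcode": ["11001", "11002", "11005", "22010", "22013", "22018", "33101", "33109"],
    "disease": ["flu", "cold", "flu", "diabetes", "flu", "cold", "asthma", "flu"],
})

def generalize(df, age_width=10, zip_digits=3):
    """Generalize quasi-identifiers: age -> band, zipcode -> prefix."""
    g = df.copy()
    g["age"] = (g["age"] // age_width) * age_width            # e.g. 23 -> 20
    g["zipcode"] = g["zipcode"].str[:zip_digits] + "*" * (5 - zip_digits)
    return g

def k_anonymity(df, quasi_ids=("age", "zipcode")):
    """Smallest equivalence-class size over the quasi-identifier combination."""
    return int(df.groupby(list(quasi_ids)).size().min())

anon = generalize(data)
print("k before generalization:", k_anonymity(data))          # 1: records re-identifiable
print("k after  generalization:", k_anonymity(anon))
# crude information-loss proxy: fraction of distinct quasi-identifier values lost
loss = 1 - anon[["age", "zipcode"]].nunique().sum() / data[["age", "zipcode"]].nunique().sum()
print("information-loss proxy:", round(loss, 3))
```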
-
-
-
Unsymmetric Image Encryption Using Lower-Upper Decomposition and Structured Phase Mask in the Fractional Fourier Domain
Authors: Shivani Yadav and Hukum Singh
Background: An asymmetric cryptosystem using a Structured Phase Mask (SPM) and a Random Phase Mask (RPM) in the fractional Fourier transform (FrFT) domain, based on Lower-Upper decomposition with partial pivoting (LUDP), is proposed in order to enhance the security of an existing system. Using a structured phase mask offers additional parameters in encryption. In the encoding process, the phase-truncation (PT) part is replaced by the Lower-Upper decomposition part. Objective: The asymmetric cryptosystem using LUDP is introduced to prevent quick identification of the encrypted image in the FrFT domain. Method: The input image is first convolved with the SPM, transformed by the FrFT and then decomposed by LUDP; the result is then multiplied by the RPM and processed with the inverse FrFT and LUDP. Results: The strength and legitimacy of the proposed scheme have been verified by numerical analysis in MATLAB R2018a (9.4.0.813654). To check the viability of the proposed scheme, mathematical simulations were carried out that determine the performance and quality of the recovered image; the simulations cover key sensitivity, occlusion attacks, noise attacks and histograms. Conclusion: A novel asymmetric cryptosystem is proposed using two phase masks, an SPM and an RPM, together with LUDP, in which the encoding procedure differs from the decoding procedure. Security is enhanced by increasing the number of keys, and the scheme is also robust against attacks. Statistical simulations are also carried out to inspect the strength and viability of the algorithm.
-
-
-
hGWO-SA: A Novel Hybrid Grey Wolf Optimizer-Simulated Annealing Algorithm for Engineering and Power System Optimization Problems
Background: The improved variants of the Grey Wolf Optimizer have good exploration capability for finding the global optimum, but their exploitation competence is poor. Researchers are continuously trying to improve the exploitation phase of the existing Grey Wolf Optimizer, but the improved variants still lack local search capability. In the proposed research, the exploitation phase of the existing Grey Wolf Optimizer is further improved using a simulated annealing algorithm, and the proposed hybrid optimizer is named the hGWO-SA algorithm. Methods: The effectiveness of the proposed hybrid variant was tested on various benchmark problems, including multi-disciplinary optimization and design engineering problems and unit commitment problems of the electric power system, and it was found experimentally that the proposed optimizer performs much better than the existing variants of the Grey Wolf Optimizer. The feasibility of the hGWO-SA algorithm was tested on small and medium-scale power system unit commitment problems, with results evaluated for 4-, 5-, 6-, 7-, 10-, 19-, 20-, 40- and 60-unit systems; the 10-generating-unit case was evaluated with 5% and 10% spinning reserve. Result and Conclusion: The results clearly show that the suggested method gives superior solutions compared to other algorithms.
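To illustrate the hybridization pattern (GWO position updates for exploration, a simulated-annealing acceptance step refining the best wolf for exploitation), here is a compact sketch on a simple sphere benchmark; it follows the general hGWO-SA idea described above rather than the exact published parameter settings or test problems.

```python
import numpy as np

def sphere(x):                       # benchmark objective to minimize
    return float(np.sum(x ** 2))

def hgwo_sa(obj, dim=10, wolves=20, iters=200, bounds=(-10, 10), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (wolves, dim))
    fit = np.array([obj(x) for x in X])
    temp = 1.0                                         # SA temperature
    for t in range(iters):
        a = 2 - 2 * t / iters                          # GWO control parameter
        alpha, beta, delta = X[np.argsort(fit)[:3]]    # three best wolves (copies)
        for i in range(wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                new += leader - A * np.abs(C * leader - X[i])
            X[i] = np.clip(new / 3.0, lo, hi)
            fit[i] = obj(X[i])
        # SA refinement of the current best wolf (exploitation phase)
        best = int(np.argmin(fit))
        trial = np.clip(X[best] + rng.normal(0, 0.1, dim), lo, hi)
        d = obj(trial) - fit[best]
        if d < 0 or rng.random() < np.exp(-d / temp):  # Metropolis acceptance
            X[best], fit[best] = trial, fit[best] + d
        temp *= 0.98                                   # cooling schedule
    best = int(np.argmin(fit))
    return X[best], fit[best]

x_best, f_best = hgwo_sa(sphere)
print("best objective found:", round(f_best, 6))
```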
-