Recent Advances in Computer Science and Communications - Volume 14, Issue 8, 2021
-
Stackelberg Game Analysis of Enterprise Operation Improvement Decision and Consumer Choice Behaviour
Authors: Gongliang Zhang and Qian Sun

To explore how enterprise operation decisions can be improved effectively, this paper treats consumers as a utility factor in business decision-making. Under the assumption that both consumers and the manager maximize their own interests, it formulates the problem and establishes a Stackelberg game model, applying a sensitivity-based heuristic algorithm to solve the operation decision model for two charging items with N consumers. The corresponding strategies are discussed, providing a decision-making reference for managing two charging projects at the same time. The results show that, for an operation with two charging items, the manager should decide whether to improve each item, and in what order, according to the improvement cost and the net utility after improvement.
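As a rough illustration of the leader-follower structure the abstract describes, here is a minimal Python sketch of a two-stage Stackelberg game solved by backward induction over a price grid. The utility forms, parameter values, and grid are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Minimal Stackelberg sketch: the manager (leader) picks a price and an
# improvement decision; consumers (followers) respond by buying or not.
# All functional forms and numbers below are illustrative assumptions.
N = 100           # number of consumers (assumed)
c_improve = 20.0  # cost of improving one charging item (assumed)

def follower_demand(price, quality):
    # Each consumer buys only if net utility (quality - price) is positive.
    return N if quality - price > 0 else 0

def leader_profit(price, improve):
    quality = 1.5 if improve else 1.0   # improvement raises net utility
    revenue = price * follower_demand(price, quality)
    return revenue - (c_improve if improve else 0.0)

best = max(((p, imp) for p in np.linspace(0.1, 2.0, 200)
            for imp in (False, True)),
           key=lambda s: leader_profit(*s))
print("optimal (price, improve):", best, "profit:", leader_profit(*best))
```

Under these toy numbers, improving pays off because the extra net utility supports a higher price than the improvement cost; the paper's sensitivity heuristic plays the role of this brute-force grid search.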
-
Analysis of Influencing Factors of Flip Class Mode in the Application of Psychological Teaching in Colleges and Universities
By Peng Liu

With the advancement of quality-education reform, the era of big data and cloud computing has given rise to the innovative "flipped classroom" teaching mode built on Internet technology. It overturns the traditional teaching mode centered on classroom instruction and is widely used in the classroom teaching of psychology. Introduction: Mainly through the literature research method and the content analysis method, this article analyzes the flipped classroom in universities. It examines the characteristics and development trends of flipped-classroom research in terms of research fields, research topics, research methods, literature sources, author information, and references. Methods: This teaching method fully embodies the concept of quality education: while respecting students' individual differences in learning, it also cultivates students' autonomy, allowing them to learn independently and develop their ability to analyze and solve problems. The flipped classroom uses the Internet as a platform to rearrange the teaching steps, changing the organization of teaching and learning, in-class and out-of-class activities, and instruction and self-study. Results: Teachers can supplement the flipped classroom with other resources and, based on the textbook content, should design test questions. Teachers should also take part in students' pre-class discussions and answer questions online. Throughout the online learning process, big data lets teachers see how well students have mastered the unit's knowledge points (which points are difficult, which are mastered, and which students have mastered them well), so that teaching can be adjusted effectively. Before class, the teacher's main task is to review students' replies in the classroom exchange area to gauge their grasp of the knowledge points. Conclusion: The survey of factors affecting psychology teaching in colleges and universities supports the following conclusions: university leaders place a certain degree of emphasis on college psychology courses, but teaching management needs further improvement; class sizes in some psychology majors are too large; the teaching objectives of some specialized psychology courses lack requirements for students' social adaptation and scientific research; and most teachers' theoretical teaching content is insufficiently comprehensive and one-dimensional. Discussion: Based on an in-depth analysis of the advantages of domestic SPOC platforms and flipped classrooms, this article analyzes the course goals of psychology teaching in colleges and universities and constructs a teaching model for psychology courses based on the SPOC platform. Grounded in the teaching of psychology, it discusses the problems of flipped teaching in psychology lessons, with the aim of sorting out the related theoretical and practical issues and better adapting to psychology teaching in the new era of new technology.
-
Dynamic Feature Extraction Method of Phone Speakers Based on Deep Learning
Background: Speech recognition has become one of the important technologies for human-computer interaction. Speech recognition is essentially a process of speech training and pattern recognition, which makes feature extraction particularly essential: the quality of feature extraction is directly related to the accuracy of recognition. Dynamic feature parameters can effectively improve recognition accuracy, which gives dynamic speech feature extraction high research value. The traditional dynamic feature extraction method tends to generate redundant information, resulting in low recognition accuracy. Methods: Therefore, a new speech feature extraction method based on deep learning is proposed in the present study. Firstly, the speech signal is preprocessed by pre-emphasis, windowing, filtering, and endpoint detection. Then, the Sliding Differential Cepstral (SDC) feature, which carries voice information from the preceding and following frames, is extracted. Finally, this feature is fed into a deep autoencoding neural network to extract dynamic features that represent the deep essence of the speech information. Results: The simulation results show that the dynamic features extracted by deep learning have better recognition performance than the original features and work well for speech recognition.
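For readers unfamiliar with SDC features, here is a minimal sketch of how they are typically computed from a cepstral (e.g. MFCC) matrix by stacking deltas at shifted frame offsets. The parameter names (d, P, k) follow the common SDC convention; the values and the random stand-in frames are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# Sliding/shifted differential cepstral features: for each frame t, stack
# k delta vectors taken at offsets t + i*P, each computed over a +/- d
# frame window, so the feature carries front-and-back frame context.
def sdc(mfcc, d=1, P=3, k=7):
    """mfcc: (T, N) array of cepstral frames -> (T, N*k) SDC features."""
    T, N = mfcc.shape
    feats = np.zeros((T, N * k))
    for t in range(T):
        blocks = []
        for i in range(k):
            lo = min(max(t + i * P - d, 0), T - 1)   # frame t+iP-d, clamped
            hi = min(max(t + i * P + d, 0), T - 1)   # frame t+iP+d, clamped
            blocks.append(mfcc[hi] - mfcc[lo])       # delta across the shift
        feats[t] = np.concatenate(blocks)
    return feats

frames = np.random.randn(100, 13)   # stand-in for real MFCC frames
print(sdc(frames).shape)            # (100, 91)
```

In the paper's pipeline, the output of such a step would then be the input to the deep autoencoder.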
-
A Systematic Review on Various Reversible Data Hiding Techniques in Digital Images
Authors: Ankita Vaish and Shweta Jayswal

Nowadays, the Internet has become essential for daily life and for running businesses smoothly; it has simplified tasks such as online transactions, online shopping, sharing images, videos, audio, and messages on social media, and uploading important information to Google Drive. The very first requirement, therefore, is to secure and protect digital content from unauthorized access. Reversible Data Hiding (RDH) is one way to provide security for digital content: useful information can be embedded in the content, and at the receiver's end both the cover media and the embedded message can be completely recovered. In this digital era, digital images are the media most widely used for communication, so the security of digital images is in high demand, and RDH in digital images has attracted a lot of interest during the last few decades. This paper presents a systematic review of RDH techniques for digital images, which can be broadly classified into five categories: lossless compression based, histogram modification based, difference expansion based, interpolation based, and encrypted image based techniques.
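To make the histogram-modification category concrete, here is a minimal sketch of classic histogram-shifting embedding on a grayscale image. It assumes a near-empty bin exists to the right of the histogram peak (true for most natural images, not for the random stand-in below); extraction and exact recovery are omitted, and all details are illustrative rather than any specific surveyed scheme.

```python
import numpy as np

# Histogram-shifting RDH sketch: shift the bins between the peak and a
# rare "zero" bin up by one to free bin peak+1, then encode each payload
# bit at a peak pixel as peak (0) or peak+1 (1).
def hs_embed(img, bits):
    img = img.astype(np.int32)
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(np.argmax(hist[:255]))                   # keep peak < 255
    zero = int(np.argmin(hist[peak + 1:]) + peak + 1)   # rarest bin right of peak
    out = img.copy()
    out[(img > peak) & (img < zero)] += 1               # shift to free peak+1
    it = iter(bits)
    flat = out.ravel()
    for i in np.flatnonzero(img.ravel() == peak):       # embed at peak pixels
        b = next(it, None)
        if b is None:
            break
        flat[i] += b
    return flat.reshape(img.shape).astype(np.uint8), peak, zero

stego, peak, zero = hs_embed(np.random.randint(0, 256, (64, 64)), [1, 0, 1, 1])
print("embedded around peak bin", peak, "using zero bin", zero)
```

Capacity equals the peak-bin count, and the receiver only needs (peak, zero) to invert the shift, which is what makes the scheme reversible.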
-
Wavelet-Based Multi-Focus Image Fusion Using Average Method Noise Diffusion (AMND)
Authors: Prabhishek Singh and Manoj Diwakar

Aim: This paper presents a new, upgraded wavelet-based multi-focus image fusion technique using average method noise diffusion (AMND). Objective: The aim is to enhance visual appearance, remove blurring in the final fused image, and make objects (fine edges) clearly visible. Methods: The method extends the standard wavelet-based image fusion technique for multi-focus images by incorporating the concepts of method noise and anisotropic diffusion, implemented as a post-processing operation. Results: The proposed work shows excellent results in terms of visual appearance and edge preservation. Its experimental results are compared with several traditional and non-traditional methods, and the proposed method performs comparatively better. Conclusion: In the field of image enhancement, this paper demonstrates the robustness, effectiveness, and adaptive nature of method noise, especially for image fusion. The performance of the proposed method is analyzed qualitatively (visual appearance) and quantitatively (entropy, spatial frequency, and standard deviation). The method is suitable for incorporation in real-time applications such as surveillance in visual sensor networks (VSNs).
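Here is a minimal sketch of the standard wavelet-based multi-focus fusion that the paper extends: average the approximation coefficients and keep the larger-magnitude detail coefficients. The AMND post-processing step (method noise plus anisotropic diffusion) is omitted; PyWavelets is assumed to be available, and the random inputs are stand-ins for two differently focused shots.

```python
import numpy as np
import pywt

# Base wavelet fusion: larger detail coefficients correspond to sharper
# (in-focus) content, so a max-magnitude rule selects the focused regions.
def wavelet_fuse(a, b, wavelet="db2", level=2):
    ca = pywt.wavedec2(a.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(b.astype(float), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                 # average approximations
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(da, db)))
    return pywt.waverec2(fused, wavelet)

img1 = np.random.rand(128, 128)
img2 = np.random.rand(128, 128)
print(wavelet_fuse(img1, img2).shape)
```

In the paper's pipeline, the fused output would then be refined by diffusing its method noise back into the result.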
-
An Intelligent Artificial Bee Colony and Adaptive Bacterial Foraging Optimization Scheme for Reliable Breast Cancer Diagnosis
Authors: S. Punitha, A. Amuthan and K. S. Joseph

Background: Breast cancer, a major threat to women around the globe, must be detected at an early, localized stage to enhance the possibility of survival. Most intelligent approaches devised for breast cancer detection require expertise to reliably identify the patterns that indicate the presence of cancerous cells and to determine suitable treatment for breast cancer patients, in order to improve their chances of survival. Moreover, the majority of existing schemes in the literature are labor- and time-intensive, which strongly affects the time needed to diagnose breast cancer. Methods: An Intelligent Artificial Bee Colony and Adaptive Bacterial Foraging Optimization (IABC-ABFO) scheme is proposed to achieve a better rate of local and global search when selecting the optimal feature subsets and the optimal parameters of the ANN used for breast cancer diagnosis. In the proposed IABC-ABFO approach, the traditional ABC algorithm is improved by integrating an adaptive bacterial foraging process into the onlooker-bee and employed-bee phases, yielding optimal exploitation and exploration. Results: An evaluation of the proposed IABC-ABFO approach on the Wisconsin breast cancer dataset showed a mean classification accuracy of 99.52%, which is higher than that of existing breast cancer detection schemes.
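For orientation, here is a minimal sketch of the employed-bee and onlooker-bee phases of a plain ABC optimizer applied to feature-subset selection. The adaptive bacterial-foraging refinement and the ANN-based fitness of the paper are replaced by a toy fitness; everything below is an illustrative assumption.

```python
import numpy as np

# Plain ABC over binary feature masks: employed bees mutate their own
# solution; onlooker bees mutate solutions chosen proportionally to
# fitness. The paper inserts bacterial foraging into both phases.
rng = np.random.default_rng(0)
n_feat, n_bees, iters = 30, 10, 50
target = rng.random(n_feat) > 0.5          # toy "ideal" feature mask

def fitness(mask):
    return np.sum(mask == target)          # stand-in for ANN accuracy

pop = rng.random((n_bees, n_feat)) > 0.5
for _ in range(iters):
    for i in range(n_bees):                # employed-bee phase
        cand = pop[i].copy()
        j = rng.integers(n_feat)
        cand[j] = ~cand[j]                 # flip one feature bit
        if fitness(cand) > fitness(pop[i]):
            pop[i] = cand
    probs = np.array([fitness(p) for p in pop], dtype=float)
    probs /= probs.sum()
    for _ in range(n_bees):                # onlooker-bee phase
        i = rng.choice(n_bees, p=probs)
        cand = pop[i].copy()
        j = rng.integers(n_feat)
        cand[j] = ~cand[j]
        if fitness(cand) > fitness(pop[i]):
            pop[i] = cand

best = max(pop, key=fitness)
print("best fitness:", fitness(best), "of", n_feat)
```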
-
A Magic Wand Selection Tool for Surface of 3D Model
Authors: Bangquan Liu, Shaojun Zhu, Dechao Sun, Guangyu Zhou, Weihua Yang, Li Liu and Kai Chen

Introduction: Segmentation of 3D shapes is a fundamental problem in computer graphics and computer-aided design, and it has received much attention in recent years. The analysis and research methods for 3D mesh models have established reliable mathematical foundations in graphics and geometric modeling. Compared with color and texture, shape features describe the shape information of objects through geometric structure and play an important role in a wide range of applications, including mesh parameterization, skeleton extraction, resolution modeling, shape retrieval, character recognition, and robot navigation, among many others. Methods: Interactive selection of model surfaces is mainly used for shape segmentation. The common method is boundary-based selection, which requires the user to input strokes near the edge of the region to be selected or segmented. Chen et al. introduced an approach that joins user-specified points to form boundaries for region segmentation on the surface. Funkhouser et al. improved the Dijkstra algorithm to find segmentation boundary contours. The graph-cut algorithm uses the distance between the surface and its convex hull as the growing criterion to decompose a shape into meaningful components. The watershed algorithm, widely used for image segmentation, is a region-growing algorithm with multiple seed points. Wu and Levine used simulated electrical charge distributions over the mesh to deal with the 3D part segmentation problem. Other methods also apply watershed algorithms to surface decomposition. Results: Our algorithm has been implemented in C++ and OpenMP, and experiments have been conducted on a PC with a 3.07 GHz Intel(R) Core(TM) i7 CPU and 6 GB of memory. Our method obtains similar regions for different interaction vertices within a specific region. Figures 6a and 6b show the tolerance region selection results of the algorithm at two different interaction points in a certain region of the kitten model, from which it can be observed that the obtained regions are similar for different vertices in this region. Figures 6c and 6d show two different interaction points in the same region, with selection results obtained by the region-growing technique. Discussion: In this paper, we propose a novel magic wand selection tool for interactively selecting surfaces of 3D models. The feature vector is constructed by extracting the HKS feature descriptor and mean curvature of the model surface, which allows users to input a feature tolerance value for region selection and improves interaction. Many experiments show that our algorithm has clear advantages in speed and effectiveness. The interactive generation of region boundaries is useful for many applications, including model segmentation. Conclusion: Considering the requirements of user-friendliness and effectiveness in model region selection, a novel magic wand selection tool is proposed for interactively selecting surfaces of 3D models. First, we precompute the heat kernel signature and mean curvature of the surface to form the feature vector of the model. Two ways of selecting a region are then provided: one selects the region according to a feature tolerance value, and the other selects the region that aligns with a stroke automatically. Finally, we use a geometry optimization approach to improve the performance of computing region contours. Extensive experimental results show that our algorithm is efficient and effective.
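The tolerance-based selection mode reduces to region growing over mesh vertices, for which a minimal sketch follows. Per-vertex scalar features stand in for the paper's HKS-plus-mean-curvature vectors, and the tiny path graph is an illustrative assumption.

```python
from collections import deque

# "Magic wand" as breadth-first region growing: starting from the clicked
# seed vertex, absorb neighbors whose feature differs from the seed's by
# at most the user-chosen tolerance.
def magic_wand(seed, features, adjacency, tolerance):
    selected, queue = {seed}, deque([seed])
    ref = features[seed]
    while queue:
        v = queue.popleft()
        for n in adjacency[v]:
            if n not in selected and abs(features[n] - ref) <= tolerance:
                selected.add(n)
                queue.append(n)
    return selected

# Tiny example: a path of 5 vertices with scalar features.
feats = {0: 0.10, 1: 0.12, 2: 0.13, 3: 0.50, 4: 0.52}
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(magic_wand(0, feats, adj, tolerance=0.05))   # {0, 1, 2}
```

The growth stops at vertex 3 because its feature jumps past the tolerance, which is exactly the behavior that makes selections from nearby seed vertices come out similar.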
-
Efficiently Computing Geodesic Loop for Interactive Segmentation of a 3D Mesh
Authors: Yun Meng, Shaojun Zhu, Bangquan Liu, Dechao Sun, Li Liu and Weihua Yang

Introduction: Shape segmentation is a fundamental problem in computer graphics and geometric modeling. Although shape segmentation algorithms have been widely studied in the mathematics community, little progress has been made on computing them interactively on polygonal surfaces using geodesic loops. Method: We compute geodesic distance fields with the improved Fast Marching Method (FMM) proposed by Xin and Wang. A new algorithm is proposed to compute geodesic loops over a triangulated surface, together with a new interactive shape segmentation scheme. Result: The average computation time on a 50K-vertex model is less than 0.08 s. Discussion: In the future, we will use an exact geodesic algorithm and parallel computing techniques to improve our algorithm and obtain smoother geodesic loops. Conclusion: A large number of experimental results show that the algorithm proposed in this paper effectively achieves high-precision geodesic loop paths, and the method can also be used for interactive shape segmentation in real time.
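A geodesic distance field is the building block of the loop computation. The sketch below uses Dijkstra over mesh edges, a coarser approximation than the improved FMM the paper uses, but with the same interface: distances from a source vertex that a loop-extraction step could then trace. The small graph is an illustrative assumption.

```python
import heapq

# Dijkstra-based distance field on a triangle mesh's edge graph.
# adjacency maps vertex -> list of (neighbor, edge_length).
def distance_field(source, vertices, adjacency):
    dist = {v: float("inf") for v in vertices}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist[v]:
            continue                      # stale heap entry
        for n, edge_len in adjacency[v]:
            nd = d + edge_len
            if nd < dist[n]:
                dist[n] = nd
                heapq.heappush(heap, (nd, n))
    return dist

verts = [0, 1, 2, 3]
adj = {0: [(1, 1.0), (2, 2.0)], 1: [(0, 1.0), (3, 1.5)],
       2: [(0, 2.0), (3, 1.0)], 3: [(1, 1.5), (2, 1.0)]}
print(distance_field(0, verts, adj))   # {0: 0.0, 1: 1.0, 2: 2.0, 3: 2.5}
```

Edge-graph Dijkstra overestimates true surface geodesics because paths are confined to edges; window-propagation methods such as FMM relax exactly this restriction.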
-
Computing Salient Feature Points of 3D Model Based on Geodesic Distance and Decision Graph Clustering
Authors: Dechao Sun, Nenglun Chen, Renfang Wang, Bangquan Liu and Feng Liang

Introduction: Computing the salient feature points (SFP) of 3D models has important applications in computer graphics. To extract SFP more effectively, a novel SFP computing algorithm based on geodesic distance and decision graph clustering is proposed. Method: Firstly, the geodesic distance of the model vertices is calculated based on the heat conduction equation; then the average geodesic distance and importance weight of each vertex are calculated. Finally, the decision graph clustering method is used to compute the decision graph of the model vertices. Results and Discussion: 3D models from the SHREC 2011 dataset were selected to test the proposed algorithm. Compared with existing algorithms, this method calculates the SFP of a 3D model from a global perspective, and the results show that it is not affected by model pose or noise. Conclusion: Our method maps the SFP of a 3D model onto a 2D decision graph, which simplifies the SFP calculation, improves accuracy, and exhibits strong robustness.
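Decision graph clustering here follows the density-peak idea of Rodriguez and Laio: for each vertex, compute the distance to the nearest vertex of higher importance, and pick vertices that score high on both quantities. The sketch below uses Euclidean distance and a toy importance weight as stand-ins for the paper's geodesic distances and weights; it is illustrative only.

```python
import numpy as np

# Decision-graph scoring: delta[i] = distance to the nearest point with
# higher weight (or the max distance for the global maximum). Salient
# points are those with both high weight and high delta.
def decision_graph(points, weight):
    d = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    n = len(points)
    delta = np.zeros(n)
    for i in range(n):
        higher = np.flatnonzero(weight > weight[i])
        delta[i] = d[i, higher].min() if len(higher) else d[i].max()
    return weight * delta          # per-vertex decision-graph score

pts = np.random.rand(200, 3)                    # stand-in for mesh vertices
w = np.exp(-np.linalg.norm(pts - 0.5, axis=1))  # toy importance weight
scores = decision_graph(pts, w)
print("top salient vertices:", np.argsort(scores)[-5:])
```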
-
Group DEMATEL Decision Method Based on Hesitant Fuzzy Linguistic Term Sets
Authors: Hui Xie, Qian Ren, Wanchun Duan, Yonghe Sun and Wei Han

Background: The decision-making trial and evaluation laboratory (DEMATEL) is a practical and concise method for dealing with complicated socioeconomic system problems. However, the original DEMATEL has two defects. On the one hand, traditional expert preference expressions cannot reflect the hesitation and flexibility of experts; on the other hand, expert weights are usually set to equal values, which cannot scientifically reflect each expert's academic background, capability and experience, risk preference, and so on. To solve these problems, a novel group DEMATEL decision method based on hesitant fuzzy linguistic term sets (HFLTSs) is proposed. Method: Firstly, experts judge the causal relationships among factors using linguistic expressions close to natural human expression, which can easily be transformed into HFLTSs. Next, the hybrid weight of each expert is calculated on the basis of the initial HFLTS direct influence matrix (HDIM), according to the hesitancy degree and the distance between HDIMs, and each expert's information is aggregated using possibility degrees. Then, the new group DEMATEL decision method based on HFLTSs is constructed. Finally, an illustrative example is given and analyzed to demonstrate the effectiveness and validity of the proposed approach. Results: This paper demonstrates that the heterogeneity of decision experts and the hesitancy of expert information representation must be taken into account when determining the interaction of factors in complex systems with the DEMATEL method. Conclusion: This paper constructs a new, amended group DEMATEL method that provides a new way to integrate each expert's information through hybrid weights and possibility degrees. The method provides a reference for determining the importance of complex system factors more scientifically and objectively.
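For context, here is the classic DEMATEL computation that the group method builds on: normalize a direct-influence matrix, derive the total-relation matrix T = D(I - D)^-1, and read off each factor's prominence and relation. The aggregated matrix below is an illustrative assumption; in the paper it would come from aggregating the experts' HFLTS judgments first.

```python
import numpy as np

# Classic DEMATEL on an expert-aggregated direct-influence matrix
# (0-4 influence scale assumed for illustration).
A = np.array([[0, 3, 2],
              [1, 0, 3],
              [2, 1, 0]], dtype=float)
D = A / max(A.sum(axis=1).max(), A.sum(axis=0).max())  # normalization
T = D @ np.linalg.inv(np.eye(3) - D)                   # total-relation matrix
r, c = T.sum(axis=1), T.sum(axis=0)
print("prominence (r+c):", r + c)   # how central each factor is
print("relation   (r-c):", r - c)   # cause (+) vs effect (-) factors
```

A positive relation marks a factor as a net cause, a negative one as a net effect; the paper's contribution lies in how A is obtained from hesitant linguistic judgments and unequal expert weights.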
-
A Two-Sided Matching Method for Green Suppliers and Manufacturers with Intuitionistic Linguistic Preference Information
Authors: Lan-lin Wang, Zhi Liu, Yue-ling Zheng and Feng-juan Gu

Purpose: Existing methodologies on two-sided matching seldom consider the asymmetry, uncertainty, and fuzziness of preference information; this study therefore develops a methodology for the selection process between green suppliers and manufacturers using intuitionistic linguistic numbers. Methods: The study first constructs evaluation indicators for both sides, expressed as intuitionistic linguistic numbers. It then redefines the expected function of intuitionistic linguistic numbers based on regret theory and, by considering the psychological behavior arising from decision makers' regret aversion, constructs the decision makers' comprehensive perceived values. Furthermore, a multi-objective matching model is established by maximizing the comprehensive perceived values of the two sides, and a min-max method is adopted to transform the multi-objective model into a single-objective one. Conclusion: This study accounts for the fuzziness and hesitancy of preference information as well as the psychological behavior arising from decision makers' regret aversion. The two-sided matching method proposed in this paper is more valid and effective than existing methods.
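The min-max transformation can be illustrated on a toy instance: given perceived-value matrices for both sides, pick the assignment whose worse-off side does best. The values below are illustrative stand-ins for the paper's regret-theory-based comprehensive perceived values, and brute-force enumeration replaces the paper's optimization model.

```python
from itertools import permutations

# Max-min scalarization of a bi-objective two-sided matching:
# maximize the smaller of the two sides' total perceived values.
V_sup = [[0.7, 0.4, 0.6],    # supplier i's value for manufacturer j
         [0.5, 0.8, 0.3],
         [0.6, 0.5, 0.9]]
V_man = [[0.6, 0.5, 0.7],    # manufacturer j's value for supplier i
         [0.4, 0.9, 0.5],
         [0.8, 0.3, 0.6]]

def side_totals(match):      # match[i] = manufacturer assigned to supplier i
    s = sum(V_sup[i][j] for i, j in enumerate(match))
    m = sum(V_man[j][i] for i, j in enumerate(match))
    return s, m

best = max(permutations(range(3)), key=lambda m: min(side_totals(m)))
print("matching:", list(enumerate(best)), "totals:", side_totals(best))
```

For realistic sizes the enumeration would be replaced by the single-objective assignment model the paper derives.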
-
Explore the Optimal Node Degree of Interfirm Network for Efficient Knowledge Sharing
Authors: Houxing Tang, Fang Fang and Zhenzhong Ma

Background: Network structure is a critical issue for efficient inter-firm knowledge sharing. The optimal node degree plays a major role because it is generally regarded as a core proxy for network structural characteristics. This paper examines what node degree makes a network structure efficient. Methods: Based on an interaction rule combining a barter rule and a gift rule, this study first describes and then builds a knowledge diffusion process. Using four factors (network size, network randomness, the knowledge endowment of the network, and the knowledge stock of each firm), we then examine what influences the optimal node degree for efficient knowledge sharing. Results: The simulation results show that the optimal node degree can be determined as external factors change. Furthermore, changing the network randomness and network size has little impact on the optimal node degree, whereas both the knowledge endowment of the network and the knowledge stock of each firm have a significant impact. Conclusion: An optimal node degree can always be found under any condition, which confirms the existence of a balanced state. Policymakers can therefore determine the appropriate number of links to avoid redundancy and reduce cost in inter-firm networks. We also examined how different factors influence the size of the optimal node degree, so policymakers can set an appropriate number of links under different situations.
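A simulation of this kind can be sketched in a few lines with networkx. The pure barter rule below (exchange only when both firms can learn something) is a simplified stand-in for the paper's combined barter/gift rule, and all parameters (network size, endowments, rounds) are illustrative assumptions.

```python
import random
import networkx as nx

# Knowledge diffusion on a small-world network: vary the node degree k
# and measure the average knowledge stock after a fixed number of
# pairwise exchanges along random edges.
random.seed(1)
def avg_knowledge(k, n=60, p=0.1, items=20, endow=5, rounds=2000):
    g = nx.watts_strogatz_graph(n, k, p)
    knowledge = {v: set(random.sample(range(items), endow)) for v in g}
    for _ in range(rounds):
        u, v = random.choice(list(g.edges))
        gain_u = knowledge[v] - knowledge[u]
        gain_v = knowledge[u] - knowledge[v]
        if gain_u and gain_v:                    # barter: both sides must gain
            knowledge[u].add(next(iter(gain_u))) # each learns one item
            knowledge[v].add(next(iter(gain_v)))
    return sum(len(s) for s in knowledge.values()) / n

for k in (2, 4, 8, 16):
    print("degree", k, "-> avg knowledge stock:", avg_knowledge(k))
```

Sweeping k while holding the other factors fixed is the basic experiment behind the paper's search for an optimal node degree.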
-
Fingerprint Presentation Attack Detection in Open-Set Scenario Using Transient Liveness Factor
Authors: Akhilesh Verma, Vijay K. Gupta and Savita Goel

Background: In recent years, fingerprint presentation attack detection (FPAD) proposals have emerged in a variety of forms. A closed-set approach uses a pattern classification technique that best suits a specific context and goal; an open-set approach works in a wider context and is relatively robust to new fabrication materials and independent of sensor type. In both cases, results have been promising but not very generalizable, because unseen conditions do not fit the methods used. Two key challenges in FPAD systems, sensor interoperability and robustness to new fabrication materials, remain unaddressed to date. Objective: To address these challenges, a liveness detection model is proposed that uses live samples with a transient liveness factor and a one-class CNN. Methods: In our architecture, liveness is predicted by a fusion rule: score-level fusion of two decisions. Initially, 'n' high-quality live samples are trained for quality. We observed that fingerprint liveness information is transitory in nature; variation across different live samples is natural, so each live sample carries 'transient liveness' (TL) information. We use a no-reference (NR) image quality measure (IQM) as the transient value corresponding to each live sample, and a consensus agreement over transient values is reached collectively to flag adversarial input. In addition, live samples at the server are trained, with augmented inputs, on a one-class classifier to detect outliers. Score-level fusion of the consensus agreement and the appropriately characterized negative cases (outliers) then predicts liveness. Results: Our approach uses only 30 high-quality live samples, out of the 90 images available in the dataset, to reduce learning time. We used time-series images from the LivDet 2015 competition, which provides 90 live images and 45 spoof images (made from Body Double, Ecoflex, and Playdoh) per person. The fusion rule achieves 100% accuracy in recognizing live samples as live. Conclusion: We have presented an architecture for a liveness server that extracts and updates the transient liveness factor. Our work is a significant step toward a generalized and reproducible process, with provision for the universal scheme needed today. The proposed TLF approach rests on a solid presumption that it will address dataset heterogeneity, since it incorporates a wider context. Similar results on other datasets are under validation. Implementation seems difficult at present but has several advantages when carried out during the transformative process.
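The two-branch decision can be sketched loosely as follows: a no-reference quality score per image, a one-class outlier model trained on live samples only, and score-level fusion of the two. Variance-of-Laplacian is an illustrative NR-IQM stand-in (not the paper's measure), a one-class SVM on the quality scalar replaces the paper's one-class CNN on images, and the threshold and weights are assumptions.

```python
import numpy as np
from scipy.ndimage import laplace
from sklearn.svm import OneClassSVM

# Branch 1: consensus on the transient (quality) value of live samples.
# Branch 2: one-class outlier model fit on live samples only.
def nr_quality(img):
    return laplace(img.astype(float)).var()     # crude sharpness-based IQM

rng = np.random.default_rng(0)
live = rng.normal(0.5, 0.1, (30, 32, 32))       # stand-ins for live prints
probe = rng.normal(0.5, 0.1, (32, 32))

q_live = np.array([nr_quality(x) for x in live])
ocsvm = OneClassSVM(nu=0.1).fit(q_live.reshape(-1, 1))

def liveness_score(img, w=0.5):
    q = nr_quality(img)
    consensus = float(abs(q - q_live.mean()) <= 2 * q_live.std())  # branch 1
    outlier = float(ocsvm.decision_function([[q]])[0] > 0)         # branch 2
    return w * consensus + (1 - w) * outlier    # score-level fusion

print("live" if liveness_score(probe) >= 0.5 else "spoof")
```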
-
Study and Analysis of User Desired Image Retrieval
Authors: John B. P and S. Janakiraman

Background: In the present digital world, Content-Based Image Retrieval (CBIR) has gained significant importance, and image processing technology has become highly sought after, with demand increasing accordingly. The rapid growth of computer technology offers a platform for image processing applications. Well-known image retrieval techniques include (1) Text-Based Image Retrieval (TBIR), (2) Content-Based Image Retrieval (CBIR), and (3) Semantic-Based Image Retrieval (SBIR). In the recent past, many researchers have conducted extensive work on CBIR; nevertheless, studies of image retrieval and characterization show it to be a substantial open problem whose techniques need continued development. Hence, by bringing together the research conducted in recent years, this survey makes a comprehensive attempt to review the state of the art in the field. Aims: This paper aims at retrieving similar images according to visual properties, defined by shape, color, texture, and edge detection. Objective: This study investigates CBIR with attention to its essential and fundamental problems, and addresses present and future trends to show contributions and directions that can inspire further CBIR research. Methods: This paper reviews the significance of CBIR and related developments, including edge detection techniques, various distance metrics (DM), performance measurements, and various kinds of datasets, and shows possible ways to overcome the difficulties of re-ranking strategies with improved accuracy. Results: We present an in-depth analysis of state-of-the-art CBIR methods, explaining methods based on color, texture, shape, and edge detection together with performance evaluation metrics, and we discuss some significant future research directions. Conclusion: Finally, we propose a technique for combining multiple features in a CBIR framework that can give better outcomes than current strategies.
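The color branch of CBIR can be made concrete in a few lines: index images by normalized RGB histograms and rank by histogram intersection with the query. Random arrays stand in for a real image collection; the bin count and similarity measure are illustrative choices among the distance metrics such surveys cover.

```python
import numpy as np

# Color-histogram CBIR: one 8x8x8 RGB histogram per image, ranked by
# histogram intersection (1.0 = identical color distributions).
def rgb_hist(img, bins=8):
    h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                          range=((0, 256),) * 3)
    return h.ravel() / h.sum()

def intersection(h1, h2):
    return np.minimum(h1, h2).sum()

rng = np.random.default_rng(0)
gallery = [rng.integers(0, 256, (64, 64, 3)) for _ in range(10)]
index = [rgb_hist(g) for g in gallery]
query = rgb_hist(gallery[3] * 0.95)     # slightly darkened copy of image 3
ranking = np.argsort([-intersection(query, h) for h in index])
print("best matches:", ranking[:3])     # image 3 should rank near the top
```

Texture, shape, and edge features would each produce analogous descriptors, and feature combination, the paper's closing proposal, amounts to fusing these per-feature similarity scores.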
-
Risk Factor Identification, Classification and Prediction Summary of Chronic Kidney Disease
Authors: Pramila Arulanthu and Eswaran Perumal

The data generated by medical equipment is enormous and loaded with valuable information, and it requires effective classification for accurate prediction. Predicting health issues is an extremely difficult task; in particular, Chronic Kidney Disease (CKD) is one of the most unpredictable diseases in the medical field. Some medical experts do not have identical awareness and skills for solving their patients' issues, many may reach unsubstantiated conclusions when diagnosing disease, and patients sometimes lose their lives owing to disease severity. As per the Global Burden of Disease report, death by CKD was ranked 17th among causes of death globally in GBD 2015, whereas GBD 2010 ranked it 27th. Death by CKD constituted 2.9% of all deaths between 2010 and 2013 among people aged 15 to 69. As per a World Health Organization (WHO 2005) report, CKD had been the primary reason behind the death of 58 million people. Hence, this article presents a state-of-the-art review of the classification and prediction of CKD. Typically, advanced data mining techniques and fuzzy and machine learning algorithms are used to classify medical data and diagnose disease. This study reviews and summarizes many previously presented classification techniques and disease diagnosis methods, with the main intention of pointing out and addressing some of the issues and complications of the existing methods, and it also discusses the limitations and accuracy levels of existing CKD classification and diagnosis methods.
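The pipelines this survey covers mostly follow one pattern: tabular clinical features, a supervised classifier, and cross-validated accuracy. A minimal sketch follows; the synthetic features stand in for real CKD attributes (e.g. blood pressure, albumin, serum creatinine), and the classifier choice is an illustrative assumption rather than any specific reviewed method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Generic CKD-style classification pipeline on synthetic tabular data.
rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 6))                 # stand-in clinical measurements
y = (X[:, 0] + 0.8 * X[:, 2]                # toy rule linking two features
     + rng.normal(0, 0.5, n) > 0).astype(int)   # 1 = ckd, 0 = notckd

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```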
-
A Novel Hybrid Approach for Multi-Objective Bi-Clustering in Microarray Data
Authors: Naveen Trivedi and Suvendu Kanungo

Background: Today, bi-clustering techniques play a vital role in analyzing gene expression data from microarray technology. Such techniques cluster rows and columns of expression data simultaneously, determining the expression level of a set of genes under a subset of conditions or samples. The information obtained takes the form of a sub-matrix of the microarray data whose subset of genes exhibits coherent expression patterns over a subset of conditions; these sub-matrices are called bi-clusters, and the overall process is called bi-clustering. In this paper, we propose a new meta-heuristic hybrid, ABC-MWOA-CC, based on the artificial bee colony (ABC) algorithm, a modified whale optimization algorithm (MWOA), and the Cheng and Church (CC) algorithm, to optimize the extracted bi-clusters. To validate the algorithm, we also examine the statistical and biological relevance of the extracted genes with respect to various conditions, which most bi-clustering techniques do not address. Objective: The major aim of the proposed work is to design and develop a novel hybrid multi-objective bi-clustering approach for microarray data that produces the desired number of valid bi-clusters, and to optimize these extracted bi-clusters toward an optimal solution. Method: In the proposed approach, a hybrid multi-objective bi-clustering algorithm based on ABC with MWOA is recommended for grouping the data into the desired number of bi-clusters; the ABC-with-MWOA multi-objective optimization algorithm is then applied to optimize the solutions using a variety of fitness functions. Results: In the analysis of results, the multi-objective fitness functions, Volume Mean (VM), Mean of Genes (GM), Mean of Conditions (CM), and Mean of MSR (MMSR), improve the performance of the CC bi-clustering algorithm on a real-life dataset, the yeast Saccharomyces cerevisiae cell cycle gene expression dataset. Conclusion: The effectiveness of the ABC-MWOA-CC algorithm is comprehensively demonstrated by comparing it with the well-known traditional ABC-CC, OPSM, and CC algorithms in terms of VM, GM, CM, and MMSR.
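At the heart of the Cheng and Church algorithm, and of the MSR-based fitness functions mentioned above, is the mean squared residue of a sub-matrix: low MSR means the selected genes behave coherently across the selected conditions. The expression matrix and index sets below are illustrative assumptions.

```python
import numpy as np

# Mean squared residue of a bi-cluster: deviation of the sub-matrix from
# a perfectly additive (row effect + column effect) model.
def msr(expr, rows, cols):
    sub = expr[np.ix_(rows, cols)]
    row_mean = sub.mean(axis=1, keepdims=True)
    col_mean = sub.mean(axis=0, keepdims=True)
    residue = sub - row_mean - col_mean + sub.mean()
    return (residue ** 2).mean()

rng = np.random.default_rng(0)
expr = rng.normal(size=(50, 20))               # genes x conditions
# Plant a perfectly additive (coherent) block in the corner.
expr[np.ix_(range(10), range(5))] = np.add.outer(np.arange(10.0),
                                                 np.linspace(0, 1, 5))
print("coherent block MSR:", msr(expr, range(10), range(5)))      # ~0
print("random block MSR:  ", msr(expr, range(10, 20), range(5, 10)))
```

The hybrid meta-heuristic searches over row and column subsets to minimize this residue alongside the volume-based objectives (VM, GM, CM).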
-
Coral Reef Classification Using Improved WLD Feature Extraction with Convolution Neural Network Classification
Authors: M. A. Paul, P. A. J. Rani and J. Evangelin Deva Sheela

This paper proposes employing the Improved Weber Local Descriptor (IWLD), a powerful texture descriptor, for coral reef annotation, and analyzes its role in coral reef classification. Background: Coral reefs are among the oldest and most dynamic ecosystems in the world. Manual annotation of coral reefs is impractical because human labeling lacks consistency and objectivity. Objective: Manual annotation consumes an enormous number of person-hours for every coral image and video frame; a representative survey states that more than 400 person-hours are required to annotate 1000 images. Moreover, some coral species vary in shape, size, and color, while most corals look indistinguishable to the human eye. To avoid contradictory classifications, an expert system that can automatically annotate corals is essential for improving classification accuracy. Method: The proposed improved WLD extracts texture features from six combinations of color channels, (1) R, (2) G, (3) B, (4) RG, (5) GB, and (6) BR, of an image in a holistic way while preserving their relations. The extracted features are analyzed and classified using a CNN classifier. Results: Experiments were carried out with the EILAT, RSMAS, EILAT 2, and MLC2008 datasets, and the proposed improved-WLD-based coral reef classification was found to be appropriate. In terms of accuracy, the improved WLD demonstrates higher accuracy than other state-of-the-art techniques. Conclusion: This paper analyzes the role of the Improved WLD in feature extraction for classifying coral reefs, using the EILAT, RSMAS, EILAT 2, and MLC2008 datasets. The proposed IWLD-based classifier gives promising results for coral reef classification.
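For readers new to WLD, here is a minimal sketch of its two classic components, differential excitation and gradient orientation, computed per pixel and pooled into a 2D histogram feature. This is the standard WLD on a single channel, not the paper's improved variant or its six-channel scheme; it is illustrative only.

```python
import numpy as np
from scipy.ndimage import convolve

# Standard WLD: excitation = arctan of the Weber ratio (neighbor
# differences over center intensity); orientation = gradient direction.
def wld_histogram(img, exc_bins=8, ori_bins=8):
    img = img.astype(float) + 1e-6                       # avoid divide-by-zero
    ring = np.array([[1, 1, 1], [1, -8, 1], [1, 1, 1]], dtype=float)
    excitation = np.arctan(convolve(img, ring) / img)    # sum(x_i - x_c) / x_c
    gy = convolve(img, np.array([[0, -1, 0], [0, 0, 0], [0, 1, 0]], float))
    gx = convolve(img, np.array([[0, 0, 0], [-1, 0, 1], [0, 0, 0]], float))
    orientation = np.arctan2(gy, gx)
    hist, _, _ = np.histogram2d(excitation.ravel(), orientation.ravel(),
                                bins=(exc_bins, ori_bins),
                                range=((-np.pi / 2, np.pi / 2), (-np.pi, np.pi)))
    return hist.ravel() / hist.sum()                     # normalized feature

patch = np.random.randint(0, 256, (64, 64))
print(wld_histogram(patch).shape)                        # (64,)
```

In the paper's setting, such histograms from the six channel combinations would feed the CNN classifier.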
-