Domain Knowledge-based BERT Model with Deep Learning for Text Classification

- By Akhilesh Kalia
- Affiliation: Centre for Interdisciplinary Research in Business and Technology, Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
- Source: Demystifying Emerging Trends in Machine Learning, pp. 181-189
- Publication Date: February 2025
- Language: English


The language model BERT, pre-trained on BookCorpus and Wikipedia, performs well on downstream NLP tasks after fine-tuning. Applying BERT, however, requires analysis of fine-tuning strategies and of task-specific and domain-related data. BERT-DL, a BERT-based text-classification model, addresses the problems of task awareness and limited training data through auxiliary sentences, and the pre-training, training, and post-training steps of BERT4TC are described for handling domain challenges. Extensive experiments on seven public datasets investigate fine-tuning choices, including learning rate, sequence length, and the hidden state vectors selected for fine-tuning. BERT4TC is then compared across a variety of auxiliary sentences and post-training objectives. On multi-class datasets, BERT4TC with the most suitable auxiliary sentence outperforms previous state-of-the-art feature-based algorithms and fine-tuning approaches, and BERT4TC post-trained on a domain-related corpus beats BERT on binary sentiment-classification datasets.
