Domain Knowledge-based BERT Model with Deep Learning for Text Classification

Preview this chapter:

The BERT language model, pre-trained on BookCorpus and Wikipedia, performs well on many NLP tasks after downstream fine-tuning. Applying BERT, however, requires analysis of fine-tuning strategies as well as task-specific and domain-related data. BERT-DL, a BERT-based text-classification model, addresses the problems of task awareness and limited training data by constructing auxiliary sentences. The pre-training, training, and post-training steps that address BERT4TC's domain challenges are also provided. Extensive experiments over seven public datasets investigate how the learning rate, the sequence length, and the choice of hidden state vectors affect fine-tuning. BERT4TC models built with different auxiliary sentences and post-training objectives are then compared. On multi-class datasets, BERT4TC with a suitable auxiliary sentence outperforms previous state-of-the-art feature-based methods and fine-tuning approaches. Our BERT4TC, post-trained on a domain-related corpus, also outperforms BERT on binary sentiment classification datasets.
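As a rough illustration of the auxiliary-sentence idea, the sketch below shows how a single text can be paired with an auxiliary sentence so that classification becomes a sentence-pair task for a standard BERT classifier. This is not the authors' released code: it assumes the Hugging Face transformers library, and the checkpoint name, the auxiliary wording, and the binary label count are illustrative assumptions only.

import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Illustrative checkpoint; a domain post-trained BERT would be loaded here instead.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.eval()

review = "The plot was thin, but the acting carried the film."
auxiliary = "What is the sentiment of this review?"  # hypothetical auxiliary sentence

# Encoding the pair as [CLS] review [SEP] auxiliary [SEP] lets BERT attend
# jointly over the input text and the task description.
inputs = tokenizer(review, auxiliary, return_tensors="pt", truncation=True, max_length=128)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_label = logits.argmax(dim=-1).item()  # 0 or 1; meaningful only after fine-tuning

In practice, the model would first be post-trained on a domain-related corpus and then fine-tuned on labelled sentence pairs before such predictions are useful.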
