
Hate and offensive language detection using BERT for English subtask A

  • Additional Information
    • Publication Information:
      RWTH Aachen University
    • Subject:
      2021
    • Collection:
      Jultika - University of Oulu repository / Oulun yliopiston julkaisuarkisto
    • Abstract:
      This paper presents the results and main findings of the HASOC-2021 Hate/Offensive Language Identification Subtask A. The work consisted of fine-tuning pre-trained transformer networks such as BERT and building an ensemble of different models, including CNN and BERT. We used the HASOC-2021 English 3.8k annotated Twitter dataset. We compare current pre-trained transformer networks with and without Masked-Language-Modelling (MLM) fine-tuning on their performance for offensive language detection. Among the MLM fine-tuned models, BERT-base, BERT-large, and ALBERT outperformed the others; however, a BERT and CNN ensemble classifier that applies majority voting (a sketch of the voting step follows this record) performed best overall, achieving an 85.1% F1 score on both hate/non-hate labels. Our final submission achieved a 77.0 F1 score in the HASOC-2021 competition.
    • File Description:
      application/pdf
    • Electronic Access:
      http://urn.fi/urn:nbn:fi-fe2022070551079
    • Rights:
      info:eu-repo/semantics/openAccess ; © 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). ; https://creativecommons.org/licenses/by/4.0/
    • Accession Number:
      edsbas.843AE099
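
The abstract describes combining BERT-based classifiers and a CNN by majority voting over their per-tweet predictions. Below is a minimal sketch of that voting step only; the model names, the HOF/NOT label set, and the example predictions are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a majority-voting ensemble over per-model label predictions.
# Model names and the HOF/NOT labels are assumptions for illustration.
from collections import Counter
from typing import Dict, List


def majority_vote(predictions: Dict[str, List[str]]) -> List[str]:
    """Combine per-model label predictions by simple majority voting.

    `predictions` maps a model name (e.g. "bert-base", "cnn") to its list of
    predicted labels, one label per tweet; all lists must have equal length.
    """
    model_outputs = list(predictions.values())
    n_examples = len(model_outputs[0])
    final_labels = []
    for i in range(n_examples):
        # Count how many models assigned each label to tweet i.
        votes = Counter(output[i] for output in model_outputs)
        # most_common(1) returns the label with the highest vote count;
        # ties fall back to the first label encountered.
        final_labels.append(votes.most_common(1)[0][0])
    return final_labels


if __name__ == "__main__":
    # Hypothetical predictions from three classifiers on four tweets.
    preds = {
        "bert-base":  ["HOF", "NOT", "HOF", "NOT"],
        "bert-large": ["HOF", "HOF", "NOT", "NOT"],
        "cnn":        ["NOT", "HOF", "HOF", "NOT"],
    }
    print(majority_vote(preds))  # -> ['HOF', 'HOF', 'HOF', 'NOT']
```

In this sketch each classifier votes with equal weight, which matches the plain majority-voting scheme the abstract mentions; weighted or confidence-based voting would be a different design choice.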