
Multimodal hate speech detection: a novel deep learning framework for multilingual text and images

  • Additional information
    • Publication information:
      ZU Scholars
    • Subject:
      2025
    • Abstract:
      The rapid proliferation of social media platforms has facilitated the expression of opinions but also enabled the spread of hate speech. Detecting multimodal hate speech in low-resource multilingual contexts poses significant challenges. This study presents a deep learning framework that integrates bidirectional long short-term memory (BiLSTM) and EfficientNetB1 to classify hate speech in Urdu-English tweets, leveraging both text and image modalities. We introduce multimodal multilingual hate speech (MMHS11K), a manually annotated dataset comprising 11,000 multimodal tweets. Using an early fusion strategy, text and image features were combined for classification. Experimental results demonstrate that the BiLSTM+EfficientNetB1 model outperforms unimodal and baseline multimodal approaches, achieving an F1-score of 81.2% for Urdu tweets and 75.5% for English tweets. This research addresses critical gaps in multilingual and multimodal hate speech detection, offering a foundation for future advancements.
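      The early-fusion strategy described in the abstract, where text and image features are combined before classification, can be sketched as follows. This is an illustrative sketch only: the feature dimensions, the random stand-in vectors, and the toy linear classifier are assumptions, not the paper's actual BiLSTM or EfficientNetB1 encoders.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical stand-ins for the two modality encoders: in the paper,
      # a BiLSTM would produce the text embedding and EfficientNetB1 the
      # image embedding. Random vectors illustrate only the fusion step.
      text_feat = rng.standard_normal(128)    # assumed BiLSTM output size
      img_feat = rng.standard_normal(1280)    # assumed EfficientNetB1 pooled size

      # Early fusion: concatenate modality features BEFORE classification,
      # so a single classifier sees one joint feature vector.
      fused = np.concatenate([text_feat, img_feat])   # shape (1408,)

      # Toy dense layer + sigmoid as a binary hate / not-hate classifier head.
      W = rng.standard_normal(fused.shape[0]) * 0.01
      logit = fused @ W
      prob = 1.0 / (1.0 + np.exp(-logit))
      print(fused.shape, float(prob))
      ```

      In practice the fused vector would feed a trained classification head; the point of early fusion is that the classifier can learn cross-modal interactions, unlike late fusion, which combines per-modality predictions.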
    • Relation:
      https://zuscholars.zu.ac.ae/works/7283
    • Identifier:
      10.7717/peerj-cs.2801
    • Electronic access:
      https://zuscholars.zu.ac.ae/works/7283
      https://doi.org/10.7717/peerj-cs.2801
    • Rights:
      http://creativecommons.org/licenses/by/4.0/
    • Identifier:
      edsbas.270F9406