Abstract: Hate speech on social media platforms has become a major concern due to its impact on individuals and society. Traditional machine learning methods such as support vector machines (SVM) and logistic regression have shown limitations in capturing the contextual meaning of text. In this study, we employ the Bidirectional Encoder Representations from Transformers (BERT) model to detect hate speech with high accuracy. Unlike conventional approaches, BERT leverages deep contextual embeddings by considering the meaning of words in both directions, thereby improving the detection of subtle and implicit hate expressions. Using a publicly available labeled dataset, we fine-tuned BERT and evaluated its performance. The results demonstrate that BERT significantly outperforms the baseline methods, achieving improved precision, recall, and F1-score. This study highlights the effectiveness of transformer-based architectures in combating online hate speech.
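As a concrete illustration of the fine-tuning pipeline the abstract describes, the following is a minimal sketch using the Hugging Face Transformers library. The checkpoint name (bert-base-uncased), the two-example placeholder dataset, and all hyperparameters are assumptions for illustration only; the study's actual dataset and training configuration are not specified in this abstract.

```python
# Minimal sketch of fine-tuning BERT for binary hate-speech classification.
# Assumptions: the "bert-base-uncased" checkpoint, placeholder texts/labels,
# and default-ish hyperparameters. Not the paper's exact setup.
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from sklearn.metrics import precision_recall_fscore_support

MODEL_NAME = "bert-base-uncased"  # assumed pretrained checkpoint

# Placeholder data; in practice this would be a labeled hate-speech corpus.
texts = ["example hateful post", "example benign post"]
labels = [1, 0]  # 1 = hate speech, 0 = not hate speech

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

class HateSpeechDataset(Dataset):
    """Wraps tokenized texts and integer labels for the Trainer API."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True, max_length=128)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

def compute_metrics(eval_pred):
    # Report precision, recall, and F1 for the positive (hate) class,
    # matching the metrics the abstract mentions.
    logits, gold = eval_pred
    preds = logits.argmax(axis=-1)
    p, r, f1, _ = precision_recall_fscore_support(gold, preds, average="binary")
    return {"precision": p, "recall": r, "f1": f1}

# BERT with a fresh 2-way classification head on top of the encoder.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

args = TrainingArguments(output_dir="bert-hate-speech",
                         num_train_epochs=3,
                         per_device_train_batch_size=16)

trainer = Trainer(model=model, args=args,
                  train_dataset=HateSpeechDataset(texts, labels),
                  eval_dataset=HateSpeechDataset(texts, labels),
                  compute_metrics=compute_metrics)

trainer.train()
print(trainer.evaluate())  # precision / recall / F1 on the eval split
```

In a real experiment the train and eval datasets would be disjoint splits of the labeled corpus, and the reported scores would come from a held-out test set rather than the toy data used here.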