Abstract: In the contemporary landscape of decision support systems, machine learning (ML) algorithms play a pivotal role across diverse domains, including job screening and loan approval. Despite their widespread use, a persistent challenge is biased outcomes, often influenced by sensitive attributes such as gender and ethnicity. While current research leans heavily on these attributes to enforce fairness, the scarcity of such data due to privacy and legal constraints poses a substantial hurdle. Furthermore, class imbalance in real-world datasets necessitates balancing techniques, but conflicting findings on their impact on bias mitigation and overall model performance complicate the pursuit of fairness. This paper conducts a comprehensive investigation of the challenge of constructing fair models without explicit reliance on sensitive attributes, specifically examining the effectiveness of oversampling methods driven by the Synthetic Minority Over-sampling Technique (SMOTE). The study's findings reveal a significant enhancement in classification performance through SMOTE-driven techniques. These insights advocate for the thoughtful integration of SMOTE-driven oversampling to balance model fairness and accuracy, offering valuable guidance to researchers and practitioners and contributing to the ongoing dialogue on fairness in machine learning models.
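For readers unfamiliar with the class-balancing step the abstract refers to, the following is a minimal sketch of SMOTE-driven oversampling, assuming the imbalanced-learn library together with scikit-learn; the synthetic dataset and logistic regression model are illustrative placeholders, not the paper's experimental setup.

```python
# Minimal sketch: SMOTE-driven oversampling before training a classifier.
# Assumes imbalanced-learn (https://imbalanced-learn.org) and scikit-learn;
# the dataset and model below are illustrative, not from the paper.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Build an imbalanced binary dataset (roughly 90% majority, 10% minority).
X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.9, 0.1], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=42)

# Oversample only the training split, so the test set reflects the
# original class distribution.
X_res, y_res = SMOTE(random_state=42).fit_resample(X_train, y_train)

clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)
print(classification_report(y_test, clf.predict(X_test)))
```

Note that resampling is applied to the training split only; oversampling before the train/test split would leak synthetic copies of minority examples into evaluation and inflate the reported performance.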