
Improving Fairness Generalization Through a Sample-Robust Optimization Method

  • Additional information
    • Contributors:
      Ecole de Technologie Supérieure Montréal (ETS); Équipe Recherche Opérationnelle, Optimisation Combinatoire et Contraintes (LAAS-ROC); Laboratoire d'analyse et d'architecture des systèmes (LAAS); Université Toulouse Capitole (UT Capitole); Institut National des Sciences Appliquées - Toulouse (INSA Toulouse); Université Toulouse - Jean Jaurès (UT2J); Université Toulouse III - Paul Sabatier (UT3); Institut National Polytechnique de Toulouse (Toulouse INP); Centre National de la Recherche Scientifique (CNRS); Université de Toulouse (UT); Université du Québec à Montréal (UQAM); ANR-11-LABX-0040, CIMI, Centre International de Mathématiques et d'Informatique de Toulouse (2011); ANR-19-P3IA-0004, ANITI, Artificial and Natural Intelligence Toulouse Institute (2019)
    • Publication details:
      CCSD
      Springer Verlag
    • Publication year:
      2023
    • Collection:
      Université Toulouse 2 - Jean Jaurès: HAL
    • Abstract:
      Unwanted bias is a major concern in machine learning, raising significant ethical issues in particular when machine learning models are deployed within high-stakes decision systems. A common solution to mitigate it is to integrate and optimize a statistical fairness metric along with accuracy during the training phase. However, one of the main remaining challenges is that current approaches usually generalize poorly in terms of fairness on unseen data. We address this issue by proposing a new robustness framework for statistical fairness in machine learning. The proposed approach is inspired by the domain of Distributionally Robust Optimization and works by ensuring fairness over a variety of samplings of the training set. Our approach can be used both to quantify the robustness of fairness and to improve it when training a model. We empirically evaluate the proposed method and show that it effectively improves fairness generalization. In addition, we propose a simple yet powerful heuristic application of our framework that can be integrated into a wide range of existing fair classification techniques to enhance fairness generalization. Our extensive empirical study using two existing fair classification methods demonstrates the efficiency and scalability of the proposed heuristic approach. (A minimal illustrative sketch of the sample-robust idea follows this record.)
    • DOI:
      10.1007/s10994-022-06191-y
    • Electronic access:
      https://hal.science/hal-03709547
      https://hal.science/hal-03709547v1/document
      https://hal.science/hal-03709547v1/file/HAL_preprint.pdf
      https://doi.org/10.1007/s10994-022-06191-y
    • Rights:
      info:eu-repo/semantics/OpenAccess
    • Accession number:
      edsbas.1C747E9B
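
The abstract's core idea, evaluating a statistical fairness metric over many subsamplings of the training set and tracking the worst case, can be illustrated with a short sketch. This is a minimal illustration under assumed choices: demographic parity as the fairness metric, uniform random subsampling, and a scikit-learn-style classifier exposing a `predict` method. All function names are hypothetical and this is not the paper's exact formulation.

```python
# A sketch of sample-robust fairness evaluation: measure a fairness
# metric over many random subsamples of the data and keep the worst
# case. Demographic parity, the subsampling scheme, and all names here
# are illustrative assumptions, not the paper's exact formulation.
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between the two groups."""
    return abs(y_pred[sensitive == 0].mean() - y_pred[sensitive == 1].mean())

def worst_case_gap(model, X, sensitive, n_subsamples=100, frac=0.8, seed=0):
    """Worst demographic-parity gap of `model` over random subsamples of (X, sensitive)."""
    rng = np.random.default_rng(seed)
    n, gaps = len(X), []
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        s = sensitive[idx]
        if s.min() == s.max():  # subsample contains only one group; skip it
            continue
        gaps.append(demographic_parity_gap(model.predict(X[idx]), s))
    return max(gaps) if gaps else float("nan")
```

A model whose worst-case gap stays small across subsamples is, in the abstract's terms, sample-robustly fair; at training time the same worst-case quantity could be penalized alongside the accuracy loss, which is the spirit of the DRO-inspired framework and of the heuristic the authors integrate into existing fair classification methods.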