On the robustness of randomized classifiers to adversarial examples

  • Additional Information
    • Contributors:
      Ecole Polytechnique Fédérale de Lausanne (EPFL); Laboratoire d'analyse et modélisation de systèmes pour l'aide à la décision (LAMSADE); Université Paris Dauphine-PSL; Université Paris sciences et lettres (PSL); Centre National de la Recherche Scientifique (CNRS); Intelligence Artificielle et Apprentissage Automatique (LI3A (CEA, LIST)); Département Métrologie Instrumentation & Information (DM2I (CEA, LIST)); Laboratoire d'Intégration des Systèmes et des Technologies (LIST (CEA)); Direction de Recherche Technologique (DRT (CEA)); Commissariat à l'énergie atomique et aux énergies alternatives (CEA); Université Paris-Saclay; Machine Intelligence and Learning Systems (MILES); ANR-22-PECY-0008, SuperViz (2022)
    • Publication Information:
      HAL CCSD
      Springer Verlag
    • Subject:
      2022
    • Collection:
      Archive ouverte HAL (Hyper Article en Ligne, CCSD - Centre pour la Communication Scientifique Directe)
    • Abstract:
      This paper investigates the theory of robustness against adversarial attacks. We focus on randomized classifiers (i.e., classifiers that output random variables) and provide a thorough analysis of their behavior through the lens of statistical learning theory and information theory. To this end, we introduce a new notion of robustness for randomized classifiers, enforcing local Lipschitzness using probability metrics. Equipped with this definition, we make two new contributions. The first is a new upper bound on the adversarial generalization gap of randomized classifiers. More precisely, we devise bounds on both the generalization gap and the adversarial gap (i.e., the gap between the risk and the worst-case risk under attack) of randomized classifiers. The second contribution presents a simple yet efficient noise injection method for designing robust randomized classifiers (see the illustrative sketch after this record). We show that our results apply to a wide range of machine learning models under mild hypotheses. We further corroborate our findings with experimental results using deep neural networks on standard image datasets, namely CIFAR-10 and CIFAR-100. On these tasks, we design robust models that simultaneously achieve state-of-the-art accuracy (over 0.82 clean accuracy on CIFAR-10) and enjoy guaranteed robust accuracy bounds (0.45 against ℓ2 adversaries with magnitude 0.5 on CIFAR-10).
    • Relation:
      hal-03916842; https://hal.science/hal-03916842; https://hal.science/hal-03916842/document; https://hal.science/hal-03916842/file/2102.10875.pdf
    • DOI:
      10.1007/s10994-022-06216-6
    • Rights:
      info:eu-repo/semantics/OpenAccess
    • Accession Number:
      edsbas.C756C46E
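
The noise injection idea summarized in the abstract is closely related to randomized smoothing: adding random noise to the input at inference turns a deterministic network into a randomized classifier. The following is a minimal sketch of that idea under stated assumptions, not the paper's exact construction; the function name `noisy_predict`, the Gaussian noise placement, and the values `sigma=0.25` and `n_samples=100` are illustrative choices.

```python
import torch
import torch.nn.functional as F

def noisy_predict(model, x, sigma=0.25, n_samples=100):
    """Class probabilities averaged over Gaussian noise draws.

    model: any torch.nn.Module mapping inputs to logits (assumed).
    x:     batch of inputs, e.g. shape (B, C, H, W).
    sigma: noise standard deviation; an illustrative guess, not a
           value taken from the paper.
    """
    model.eval()
    with torch.no_grad():
        avg = None
        for _ in range(n_samples):
            noisy = x + sigma * torch.randn_like(x)  # inject Gaussian noise
            p = F.softmax(model(noisy), dim=1)       # per-draw class probabilities
            avg = p if avg is None else avg + p
        return avg / n_samples                       # Monte-Carlo average

# Usage with a hypothetical trained network `net` and batch `images`:
# probs = noisy_predict(net, images)
# preds = probs.argmax(dim=1)
```

Averaging softmax outputs over many noise draws is one common way to estimate the randomized classifier's output distribution; the paper's guaranteed robust accuracy bounds concern such noise-injected models, but the exact estimator and certification procedure should be taken from the article itself.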