
Does a sparse ReLU network training problem always admit an optimum?

  • Additional information
    • Contributors:
      Laboratoire de l'Informatique du Parallélisme (LIP); École normale supérieure de Lyon (ENS de Lyon)-Université Claude Bernard Lyon 1 (UCBL); Université de Lyon-Université de Lyon-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS); Optimisation, Connaissances pHysiques, Algorithmes et Modèles (OCKHAM); Université de Lyon-Université de Lyon-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS)-École normale supérieure de Lyon (ENS de Lyon)-Université Claude Bernard Lyon 1 (UCBL); Université de Lyon-Université de Lyon-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS)-Institut Rhône-Alpin des systèmes complexes (IXXI); École normale supérieure de Lyon (ENS de Lyon)-Université Lumière - Lyon 2 (UL2)-Université Jean Moulin - Lyon 3 (UJML); Université de Lyon-Université de Lyon-Université Claude Bernard Lyon 1 (UCBL); Université de Lyon-Institut National des Sciences Appliquées de Lyon (INSA Lyon); Université de Lyon-Institut National des Sciences Appliquées (INSA)-Institut National des Sciences Appliquées (INSA)-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS)-Université Grenoble Alpes (UGA)-Université Lumière - Lyon 2 (UL2)-Université Jean Moulin - Lyon 3 (UJML); Université de Lyon-Institut National des Sciences Appliquées (INSA)-Institut National des Sciences Appliquées (INSA)-Centre National de la Recherche Scientifique (CNRS)-Université Grenoble Alpes (UGA)-Inria Lyon; Institut National de Recherche en Informatique et en Automatique (Inria); ANR-19-CHIA-0009,AllegroAssai,Algorithmes, Approximations, Parcimonie et Plongements pour l'IA(2019)
    • Publication information:
      HAL CCSD
    • Publication year:
      2023
    • Collection:
      Université Jean Moulin - Lyon 3: Publications scientifiques (HAL)
    • Abstract:
      International audience. Given a training set, a loss function, and a neural network architecture, it is often taken for granted that optimal network parameters exist, and a common practice is to apply available optimization algorithms to search for them. In this work, we show that the existence of an optimal solution is not always guaranteed, especially in the context of sparse ReLU neural networks. In particular, we first show that optimization problems involving deep networks with certain sparsity patterns do not always have optimal parameters, and that optimization algorithms may then diverge. Via a new topological relation between sparse ReLU neural networks and their linear counterparts, we derive, using existing tools from real algebraic geometry, an algorithm to verify that a given sparsity pattern suffers from this issue. Then, the existence of a global optimum is proved for every concrete optimization problem involving a shallow sparse ReLU neural network of output dimension one. Overall, the analysis is based on the investigation of two topological properties of the space of functions implementable as sparse ReLU neural networks: a best approximation property and a closedness property, both in the uniform norm. This is studied both for (finite) domains corresponding to practical training on finite training sets and for more general domains such as the unit cube. This allows us to provide conditions for the guaranteed existence of an optimum given a sparsity pattern. The results apply not only to several sparsity patterns proposed in recent works on network pruning/sparsification, but also to classical dense neural networks, including architectures not covered by existing results. (An illustrative sketch of the non-attainment phenomenon follows this record.)
    • Relation:
      info:eu-repo/semantics/altIdentifier/arxiv/2306.02666; hal-04108849; https://inria.hal.science/hal-04108849; https://inria.hal.science/hal-04108849v2/document; https://inria.hal.science/hal-04108849v2/file/neurips_2023.pdf; ARXIV: 2306.02666
    • Electronic access:
      https://inria.hal.science/hal-04108849
      https://inria.hal.science/hal-04108849v2/document
      https://inria.hal.science/hal-04108849v2/file/neurips_2023.pdf
    • Rights:
      http://creativecommons.org/licenses/by/ ; info:eu-repo/semantics/OpenAccess
    • Accession number:
      edsbas.FE2E31CC
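
To make the non-attainment phenomenon in the abstract concrete, here is a minimal NumPy sketch. It is illustrative only, not code from the paper: it uses the linear counterpart the abstract alludes to. With fixed lower-/upper-triangular sparsity patterns, the set of products L @ U is not closed, so minimizing ||L @ U - A||_F for the permutation matrix A below has infimum 0 that no finite factors attain, and near-optimal factor entries diverge, mirroring how training a sparse ReLU network with an ill-behaved sparsity pattern can make optimization algorithms diverge.

```python
import numpy as np

# Illustrative sketch (not from the paper): the "linear counterpart" of the
# non-attainment phenomenon. With fixed triangular sparsity patterns, the set
# {L @ U} is not closed: A below is a limit of such products, yet it admits
# no LU factorization itself, so inf ||L @ U - A||_F = 0 is never attained.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

for eps in [1e-1, 1e-3, 1e-6, 1e-9]:
    # Closed-form near-optimal factors; their entries blow up as eps -> 0,
    # mirroring the divergence of optimization algorithms noted above.
    L = np.array([[1.0,       0.0],
                  [1.0 / eps, 1.0]])    # lower-triangular sparsity pattern
    U = np.array([[eps,  1.0],
                  [0.0, -1.0 / eps]])   # upper-triangular sparsity pattern
    loss = np.linalg.norm(L @ U - A)    # Frobenius norm; equals eps up to rounding
    print(f"eps={eps:.0e}  loss={loss:.1e}  max |factor entry| = {1 / eps:.0e}")
```

Why no exact minimizer exists: with these patterns, the product has top-left entry L[0,0] * U[0,0], and forcing it to 0 while keeping both off-diagonal entries of the product equal to 1 is impossible (it would require L[0,0] * U[0,1] = 1 and L[1,0] * U[0,0] = 1, hence all four factors nonzero). This closedness failure of the realizable set is exactly the kind of topological obstruction the paper studies for sparse ReLU networks.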