
Interplay between depth and width for interpolation in neural ODEs

  • Additional information
    • Contributors:
      UAM. Departamento de Matemáticas
    • Publication data:
      Elsevier
    • Subject:
      2024
    • Collection:
      Universidad Autónoma de Madrid (UAM): Biblos-e Archivo
    • Abstract:
      Neural ordinary differential equations have emerged as a natural tool for supervised learning from a control perspective, yet a complete understanding of the role played by their architecture remains elusive. In this work, we examine the interplay between the width p and the number of transitions between layers L (corresponding to a depth of L + 1). Specifically, we construct explicit controls interpolating either a finite dataset D, comprising N pairs of points in R^d, or two probability measures within a Wasserstein error margin ε > 0. Our findings reveal a balancing trade-off between p and L, with L scaling as 1 + O(N/p) for data interpolation, and as 1 + O(p^(-1) + (1 + p)^(-1) ε^(-d)) for measures. In the high-dimensional and wide setting where d, p > N, our result can be refined to achieve L = 0. This naturally raises the problem of data interpolation in the autonomous regime, characterized by L = 0. We adopt two alternative approaches: either controlling in a probabilistic sense, or relaxing the target condition. In the first case, when p = N, we develop an inductive control strategy based on a separability assumption whose probability increases with d. In the second one, we establish an explicit error decay rate with respect to p, which results from applying a universal approximation theorem to a custom-built Lipschitz vector field interpolating D.
      This paper was supported by the Madrid Government (Comunidad de Madrid, Spain) under the multiannual agreement with UAM in the line for the Excellence of the University Research Staff in the context of the V PRICIT (Regional Programme of Research and Technological Innovation). A. Álvarez-López has been funded by contract FPU21/05673 from the Spanish Ministry of Universities. A. Hadj Slimane has been funded by École Normale Supérieure Paris-Saclay and Université Paris-Saclay. E. Zuazua has been funded by the Alexander von Humboldt Professorship program, the ModConFlex Marie Curie Action (HORIZON-MSCA-2021-DN-01), COST Action MAT-DYN-NET, Transregio 154 Project ...
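      The width/depth trade-off in the abstract concerns neural ODEs with piecewise-constant controls: the state evolves under a vector field whose parameters switch L times over the time horizon. A minimal NumPy sketch, assuming the standard vector field form x'(t) = W(t) tanh(A(t) x + b(t)) used in this literature (the function name `neural_ode_flow` and the toy parameters are illustrative, not from the paper):

      ```python
      import numpy as np

      def neural_ode_flow(x0, params, T=1.0, steps=100):
          """Integrate x'(t) = W(t) @ tanh(A(t) @ x + b(t)) by forward Euler.

          `params` is a list of L + 1 tuples (W, A, b), one per time slice:
          piecewise-constant controls with L switches, i.e. depth L + 1.
          The width p is the number of rows of A (and of entries of b).
          """
          slices = len(params)            # L + 1 time slices
          dt = T / steps
          x = np.asarray(x0, dtype=float)
          for k in range(steps):
              t = k * dt
              # pick the control active on the current time slice
              W, A, b = params[min(int(t / T * slices), slices - 1)]
              x = x + dt * (W @ np.tanh(A @ x + b))
          return x

      # Toy example: dimension d = 2, width p = 3, L = 1 switch (two slices)
      rng = np.random.default_rng(0)
      d, p = 2, 3
      params = [(rng.normal(size=(d, p)),   # W: maps width back to state
                 rng.normal(size=(p, d)),   # A: maps state to width
                 rng.normal(size=p))        # b: bias of width p
                for _ in range(2)]
      xT = neural_ode_flow([1.0, 0.0], params)
      ```

      Interpolating a dataset D then amounts to choosing the controls (W, A, b) on each slice so that the time-T flow map sends each input point to its target; the paper's bounds quantify how many switches L such constructions need for a given width p.
      
      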
    • File Description:
      application/pdf
    • Relation:
      Neural Networks; https://doi.org/10.1016/j.neunet.2024.106640; info:eu-repo/grantAgreement/EC/HE/101073558/EU//ModConFlex; Gobierno de España. PID2020112617GB-C22; Gobierno de España. TED2021-131390B-I00; Neural Networks 180 (2024): 106640; http://hdl.handle.net/10486/714813; 180
    • Identifier:
      10.1016/j.neunet.2024.106640
    • Electronic access:
      http://hdl.handle.net/10486/714813
      https://doi.org/10.1016/j.neunet.2024.106640
    • Rights:
      © 2024 The Authors ; http://creativecommons.org/licenses/by/4.0/ ; Attribution ; openAccess
    • Identifier:
      edsbas.207F60ED