
ONLINE CONTINUAL LEARNING OF DIFFUSION MODELS: MULTI-MODE ADAPTIVE GENERATIVE DISTILLATION

  • Additional Information
    • Contributors:
      École Centrale de Lyon (ECL); Université de Lyon; Laboratoire d'InfoRmatique en Image et Systèmes d'information (LIRIS); Université Lumière - Lyon 2 (UL2)-École Centrale de Lyon (ECL); Université de Lyon-Université de Lyon-Université Claude Bernard Lyon 1 (UCBL); Université de Lyon-Institut National des Sciences Appliquées de Lyon (INSA Lyon); Université de Lyon-Institut National des Sciences Appliquées (INSA)-Institut National des Sciences Appliquées (INSA)-Centre National de la Recherche Scientifique (CNRS); Siléane Saint-Etienne; Extraction de Caractéristiques et Identification (imagine); Université de Lyon-Institut National des Sciences Appliquées (INSA)-Institut National des Sciences Appliquées (INSA)-Centre National de la Recherche Scientifique (CNRS)-Université Lumière - Lyon 2 (UL2)-École Centrale de Lyon (ECL)
    • Publication Data:
      CCSD
    • Publication Year:
      2025
    • Collection:
      Portail HAL de l'Université Lumière Lyon 2
    • Abstract:
      International audience ; Continual learning typically relies on storing real data, which is impractical in privacy-sensitive settings. Generative replay with diffusion models offers a high-fidelity alternative. However, in online continual learning (OCL), these models struggle with catastrophic forgetting and incur high computational costs from frequent updates and sampling. Existing distillation methods reduce generation steps but rely on a fixed teacher model, limiting their effectiveness as data distributions evolve. To address these issues, we introduce Multi-Mode Adaptive Generative Distillation (MAGD), which incorporates two innovative techniques: Noisy Intermediate Generative Distillation (NIGD) and SNR-Guided Generative Distillation (SGGD). NIGD leverages intermediate noisy images, created during the reverse process rather than by adding noise post-generation, to enhance knowledge transfer. SGGD uses a signal-to-noise ratio (SNR) based threshold to optimize the sampling of time steps, reducing unnecessary generation. Guided by an Exponential Moving Average (EMA) teacher, MAGD effectively mitigates catastrophic forgetting as it adapts to new data streams. Experiments on Fashion-MNIST, CIFAR-10, and CIFAR-100 show that MAGD reduces generation overhead by up to 25% relative to standard generative distillation and 92% compared to DDGR-1000, while maintaining generation quality. Furthermore, in class-conditioned diffusion models, MAGD outperforms memory-based methods in terms of classification accuracy.
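      The two mechanisms named in the abstract can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: it assumes a standard linear beta schedule and the usual DDPM definition of SNR at step t as ᾱ_t / (1 − ᾱ_t); the function names (`sggd_timesteps`, `ema_update`), the schedule constants, and the decay value are hypothetical choices for illustration. Since SNR decreases as t grows, thresholding it keeps the low-noise (informative) steps and skips near-pure-noise ones, which is the kind of "unnecessary generation" SGGD avoids; the EMA update shows how a teacher can track an adapting student.

      ```python
      def alpha_bar(t, T, beta_start=1e-4, beta_end=0.02):
          # Cumulative product of (1 - beta_i) up to step t for a
          # linear beta schedule (a common DDPM default, assumed here).
          ab = 1.0
          for i in range(t + 1):
              beta = beta_start + (beta_end - beta_start) * i / (T - 1)
              ab *= 1.0 - beta
          return ab

      def snr(t, T):
          # Signal-to-noise ratio at timestep t: alpha_bar / (1 - alpha_bar).
          ab = alpha_bar(t, T)
          return ab / (1.0 - ab)

      def sggd_timesteps(T, snr_threshold):
          # Keep only timesteps whose SNR exceeds the threshold; since SNR
          # decreases with t, this drops the noisiest (late) steps.
          return [t for t in range(T) if snr(t, T) > snr_threshold]

      def ema_update(teacher, student, decay=0.999):
          # EMA teacher update: teacher <- decay*teacher + (1-decay)*student,
          # applied parameter-wise (parameters modeled as flat lists here).
          return [decay * p_t + (1.0 - decay) * p_s
                  for p_t, p_s in zip(teacher, student)]
      ```

      For example, with T=100 steps and a threshold of 1.0, `sggd_timesteps` retains only the early steps where the signal still dominates the noise, so a distillation loss would be computed on a strict subset of the trajectory.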
    • Electronic Access:
      https://hal.science/hal-04928776
      https://hal.science/hal-04928776v2/document
      https://hal.science/hal-04928776v2/file/ICIP2025.pdf
    • Rights:
      info:eu-repo/semantics/OpenAccess
    • Accession Number:
      edsbas.A0C82175