
Proposal-contrastive pretraining for object detection from fewer data

  • Additional Information
    • Contributors:
      Département Intelligence Ambiante et Systèmes Interactifs (DIASI, CEA LIST); Laboratoire d'Intégration des Systèmes et des Technologies (LIST, CEA); Direction de Recherche Technologique (DRT, CEA); Commissariat à l'énergie atomique et aux énergies alternatives (CEA); Université Paris-Saclay; Laboratoire Hubert Curien (LabHC); Institut d'Optique Graduate School (IOGS); Université Jean Monnet - Saint-Étienne (UJM); Centre National de la Recherche Scientifique (CNRS); Institut universitaire de France (IUF); Ministère de l'Éducation nationale, de l'Enseignement supérieur et de la Recherche (M.E.N.E.S.R.)
    • Publication Data:
      HAL CCSD
    • Publication Year:
      2023
    • Collection:
      Université de Lyon: HAL
    • Abstract:
      The use of pretrained deep neural networks represents an attractive alternative for achieving strong results with little data available. When specializing in dense problems such as object detection, learning local rather than global information in images has proven to be much more efficient. However, for unsupervised pretraining, the popular contrastive learning approach requires a large batch size and, therefore, substantial resources. To address this problem, we are interested in Transformer-based object detectors, which have recently gained traction in the community with good performance by generating many diverse object proposals. In this work, we propose ProSeCo, a novel unsupervised end-to-end pretraining approach that leverages this property of Transformer-based detectors. ProSeCo uses the large number of object proposals generated by the detector for contrastive learning, which allows the use of a smaller batch size, combined with object-level features to learn local information in the images. To improve the effectiveness of the contrastive loss, we introduce localization information into the positive selection to take into account multiple overlapping object proposals. When reusing a pretrained backbone, we advocate for consistency in learning local information between the backbone and the detection head. We show that our method outperforms the state of the art in unsupervised end-to-end pretraining for object detection on standard and novel benchmarks for learning with fewer data.
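      The positive-selection idea in the abstract (contrasting object proposals across two views, with overlapping proposals treated as positives) can be illustrated with a short sketch. The following is a hypothetical PyTorch rendering, not the paper's released code: the function name, the 0.5 IoU threshold, and the temperature value are assumptions for illustration only.

          # Hypothetical sketch of a proposal-level contrastive loss with
          # IoU-based positive selection; names and thresholds are
          # assumptions, not ProSeCo's actual implementation.
          import torch
          import torch.nn.functional as F
          from torchvision.ops import box_iou

          def proposal_contrastive_loss(feats, feats_aug, boxes, boxes_aug,
                                        temperature=0.1, iou_thresh=0.5):
              """Contrast N object proposals from two augmented views.

              feats, feats_aug: (N, D) proposal embeddings from each view.
              boxes, boxes_aug: (N, 4) proposal boxes as (x1, y1, x2, y2).
              Proposal pairs whose boxes overlap above `iou_thresh` are
              treated as positives, so several overlapping proposals of
              the same object attract each other (localization-aware
              positive selection).
              """
              z1 = F.normalize(feats, dim=1)
              z2 = F.normalize(feats_aug, dim=1)
              logits = z1 @ z2.t() / temperature               # (N, N) similarities
              pos_mask = box_iou(boxes, boxes_aug) > iou_thresh  # multi-positive mask
              log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
              # Average log-likelihood over all positives of each proposal.
              pos_count = pos_mask.sum(dim=1).clamp(min=1)
              loss = -(log_prob * pos_mask).sum(dim=1) / pos_count
              return loss[pos_mask.any(dim=1)].mean()

      Because the negatives come from the many proposals within each image rather than from other images in the batch, a loss of this shape can be computed with a small batch size, which is the resource saving the abstract describes.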
    • Relation:
      cea-04041965; https://cea.hal.science/cea-04041965; https://cea.hal.science/cea-04041965/document; https://cea.hal.science/cea-04041965/file/full_paper.pdf
    • Electronic Access:
      https://cea.hal.science/cea-04041965
      https://cea.hal.science/cea-04041965/document
      https://cea.hal.science/cea-04041965/file/full_paper.pdf
    • Rights:
      info:eu-repo/semantics/OpenAccess
    • Identifier:
      edsbas.7D55C9CD