
Pre-training for Speech Translation: CTC Meets Optimal Transport

  • Additional information
    • Contributors:
      Groupe d’Étude en Traduction Automatique/Traitement Automatisé des Langues et de la Parole (GETALP); Laboratoire d'Informatique de Grenoble (LIG); Centre National de la Recherche Scientifique (CNRS)-Université Grenoble Alpes (UGA)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP); Université Grenoble Alpes (UGA)-Centre National de la Recherche Scientifique (CNRS)-Université Grenoble Alpes (UGA)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP); Université Grenoble Alpes (UGA); Meta AI; Meta AI SRA; ANR-19-P3IA-0003, MIAI, MIAI @ Grenoble Alpes (2019)
    • Publisher:
      HAL CCSD
    • Publication year:
      2023
    • Collection:
      Université Grenoble Alpes: HAL
    • Abstract:
      International audience ; The gap between speech and text modalities is a major challenge in speech-to-text translation (ST). Different methods have been proposed to reduce this gap, but most of them require architectural changes in ST training. In this work, we propose to mitigate this issue at the pre-training stage, requiring no change in the ST model. First, we show that the connectionist temporal classification (CTC) loss can reduce the modality gap by design. We provide a quantitative comparison with the more common cross-entropy loss, showing that pre-training with CTC consistently achieves better final ST accuracy. Nevertheless, CTC is only a partial solution and thus, in our second contribution, we propose a novel pre-training method combining CTC and optimal transport to further reduce this gap. Our method pre-trains a Siamese-like model composed of two encoders, one for acoustic inputs and the other for textual inputs, such that they produce representations that are close to each other in the Wasserstein space. Extensive experiments on the standard CoVoST-2 and MuST-C datasets show that our pre-training method applied to the vanilla encoder-decoder Transformer achieves state-of-the-art performance under the no-external-data setting, and performs on par with recent strong multi-task learning systems trained with external data. Finally, our method can also be applied on top of these multi-task systems, leading to further improvements for these models. (An illustrative sketch of the combined CTC + optimal-transport objective appears after this record.)
    • Relation:
      hal-04117237; https://hal.science/hal-04117237; https://hal.science/hal-04117237/document; https://hal.science/hal-04117237/file/2023_ICML_pretraining_ctc_ot.pdf
    • Electronic access:
      https://hal.science/hal-04117237
      https://hal.science/hal-04117237/document
      https://hal.science/hal-04117237/file/2023_ICML_pretraining_ctc_ot.pdf
    • Rights:
      http://creativecommons.org/licenses/by/ ; info:eu-repo/semantics/OpenAccess
    • Identifier:
      edsbas.D0F5940F
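
The abstract above describes a pre-training objective that adds an optimal-transport (Wasserstein) alignment between speech and text encoder representations on top of a CTC loss on the speech branch. The code below is a minimal illustrative sketch of that idea, not the authors' released implementation: it assumes PyTorch, a single (unbatched) example, uniform marginals, and an entropy-regularized Sinkhorn solver; the names sinkhorn_wasserstein, pretraining_loss, and ot_weight are hypothetical.

import torch
import torch.nn.functional as F

def sinkhorn_wasserstein(x, y, eps=0.1, n_iters=50):
    """Entropy-regularized OT cost between two sequences of vectors.

    x: (n, d) speech encoder states, y: (m, d) text encoder states.
    Uniform marginals are assumed for illustration.
    """
    cost = torch.cdist(x, y, p=2)                 # (n, m) pairwise L2 distances
    n, m = cost.shape
    a = torch.full((n,), 1.0 / n, device=x.device)
    b = torch.full((m,), 1.0 / m, device=x.device)
    K = torch.exp(-cost / eps)                    # Gibbs kernel
    u = torch.ones_like(a)
    for _ in range(n_iters):                      # Sinkhorn fixed-point iterations
        v = b / (K.t() @ u)
        u = a / (K @ v)
    transport = torch.diag(u) @ K @ torch.diag(v) # approximate transport plan
    return (transport * cost).sum()               # approximate Wasserstein cost

def pretraining_loss(speech_states, text_states, log_probs, targets,
                     input_lengths, target_lengths, ot_weight=1.0):
    """CTC on the speech branch plus an OT term pulling the two branches together.

    log_probs: (T, N, C) log-softmax outputs of the speech encoder's CTC head.
    """
    ctc = F.ctc_loss(log_probs, targets, input_lengths, target_lengths,
                     blank=0, zero_infinity=True)
    ot = sinkhorn_wasserstein(speech_states, text_states)
    return ctc + ot_weight * ot

In practice the OT term would be computed per batch element and averaged; the regularization strength eps trades alignment sharpness against numerical stability of the Sinkhorn iterations.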