Leveraging Pre-trained Models for Failure Analysis Triplets Generation

  • Additional Information
    • Contributors:
      Laboratoire d'Informatique, de Modélisation et d'Optimisation des Systèmes (LIMOS); Ecole Nationale Supérieure des Mines de St Etienne (ENSM ST-ETIENNE)-Centre National de la Recherche Scientifique (CNRS)-Université Clermont Auvergne (UCA)-Institut national polytechnique Clermont Auvergne (INP Clermont Auvergne); Université Clermont Auvergne (UCA)-Université Clermont Auvergne (UCA); Département Génie mathématique et industriel (FAYOL-ENSMSE); Ecole Nationale Supérieure des Mines de St Etienne (ENSM ST-ETIENNE)-Institut Henri Fayol; École des Mines de Saint-Étienne (Mines Saint-Étienne MSE); Institut Mines-Télécom Paris (IMT); Institut Henri Fayol (FAYOL-ENSMSE); Institut Mines-Télécom Paris (IMT)-Institut Mines-Télécom Paris (IMT); Centre Ingénierie Santé, Saint-Étienne (CIS - MINES); STMicroelectronics Grenoble (ST-GRENOBLE)
    • Publication Information:
      HAL CCSD
    • Subject:
      2022
    • Collection:
      Mines de Saint-Etienne: Archives Ouvertes / Open Archive (HAL)
    • Abstract:
      Pre-trained Language Models recently gained traction in the Natural Language Processing (NLP) domain for text summarization, generation and question answering tasks. This stems from the innovation introduced in Transformer models and their overwhelming performance compared with Recurrent Neural Network models (Long Short-Term Memory (LSTM)). In this paper, we leverage the attention mechanism of pre-trained causal language models such as the Transformer model for the downstream task of generating Failure Analysis Triplets (FATs) - a sequence of steps for analyzing defective components in the semiconductor industry. We compare different transformer models for this generative task and observe that Generative Pre-trained Transformer 2 (GPT2) outperformed the other transformer models on the failure analysis triplet generation (FATG) task. In particular, we observe that GPT2 (with 1.5B parameters) outperforms pre-trained BERT, BART and GPT3 by a large margin on ROUGE. Furthermore, we introduce the Levenshtein Sequential Evaluation metric (LESE) for better evaluation of the structured FAT data and show that it agrees more closely with human judgment than existing metrics (a minimal Levenshtein-distance sketch follows this record).
    • Relation:
      hal-03837798; https://hal.science/hal-03837798; https://hal.science/hal-03837798/document; https://hal.science/hal-03837798/file/PAPER_JAIR.pdf
    • Rights:
      info:eu-repo/semantics/OpenAccess
    • Identifier:
      edsbas.CE967C45
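
The LESE metric named in the abstract is Levenshtein-based. As a rough illustration only, the Python sketch below computes a word-level edit-distance similarity between a generated and a reference failure-analysis step sequence. The helper names (levenshtein, lese_like_score), the whitespace tokenization, and the normalization by the longer sequence are assumptions made for this sketch, not the paper's exact LESE formulation.

```python
# Illustrative sketch (not the paper's exact LESE implementation):
# word-level Levenshtein distance between a generated and a reference
# failure-analysis step sequence, turned into a similarity in [0, 1].

def levenshtein(a, b):
    """Edit distance between two token sequences a and b."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))          # distances for the previous row
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,          # deletion
                          curr[j - 1] + 1,      # insertion
                          prev[j - 1] + cost)   # substitution
        prev = curr
    return prev[n]

def lese_like_score(generated: str, reference: str) -> float:
    """1 minus the word-level edit distance, normalized by the longer sequence."""
    gen_tokens = generated.lower().split()
    ref_tokens = reference.lower().split()
    if not gen_tokens and not ref_tokens:
        return 1.0
    dist = levenshtein(gen_tokens, ref_tokens)
    return 1.0 - dist / max(len(gen_tokens), len(ref_tokens))

if __name__ == "__main__":
    # Hypothetical step sequences, only for demonstration.
    reference = "optical inspection ; decapsulation ; SEM imaging"
    generated = "optical inspection ; SEM imaging"
    print(f"similarity = {lese_like_score(generated, reference):.2f}")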