
Interpretable time series neural representation for classification purposes

  • Additional information
    • Contributors:
      Institut des Systèmes Intelligents et de Robotique (ISIR); Sorbonne Université (SU)-Centre National de la Recherche Scientifique (CNRS); Machine Learning and Information Access (MLIA); Sorbonne Université (SU)-Centre National de la Recherche Scientifique (CNRS)-Sorbonne Université (SU)-Centre National de la Recherche Scientifique (CNRS); Mathématiques et Informatique Appliquées (MIA Paris-Saclay); AgroParisTech-Université Paris-Saclay-Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement (INRAE)
    • Publication data:
      HAL CCSD
      IEEE
    • Subject:
      2023
    • Collection:
      AgroParisTech: HAL (Institut des sciences et industries du vivant et de l'environnement)
    • Abstract:
      Deep learning has made significant advances in creating efficient representations of time series data by automatically identifying complex patterns. However, these approaches lack interpretability, as the time series is transformed into a latent vector that is not easily interpretable. On the other hand, Symbolic Aggregate approximation (SAX) methods allow the creation of symbolic representations that can be interpreted but do not capture complex patterns effectively. In this work, we propose a set of requirements for a neural representation of univariate time series to be interpretable. We propose a new unsupervised neural architecture that meets these requirements. The proposed model produces consistent, discrete, interpretable, and visualizable representations. The model is learned independently of any downstream tasks in an unsupervised setting to ensure robustness. As a demonstration of the effectiveness of the proposed model, we propose experiments on classification tasks using UCR archive datasets. The obtained results are extensively compared to other interpretable models and state-of-the-art neural representation learning models. The experiments show that the proposed model yields, on average, better results than other interpretable approaches on multiple datasets. We also present qualitative experiments to assess the interpretability of the approach.
    • Relation:
      hal-04284273; https://hal.science/hal-04284273; https://hal.science/hal-04284273/document; https://hal.science/hal-04284273/file/dsaa23.pdf
    • Identifier:
      10.1109/DSAA60987.2023.10302534
    • Rights:
      info:eu-repo/semantics/OpenAccess
    • Identifier:
      edsbas.838E15E9
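For context, the Symbolic Aggregate approximation (SAX) baseline that the abstract contrasts with neural representations can be sketched in a few lines. This is a minimal illustration of classic SAX (z-normalization, Piecewise Aggregate Approximation, then discretization with Gaussian breakpoints), not the paper's proposed model; the function name and parameter defaults here are our own.

```python
from statistics import NormalDist

import numpy as np


def sax(series, n_segments=8, alphabet_size=4):
    """Classic SAX: z-normalize, PAA-reduce, then map each segment mean
    to a symbol using breakpoints that split N(0, 1) into equiprobable regions."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-8)          # z-normalization
    # Piecewise Aggregate Approximation: one mean per segment
    paa = np.array([seg.mean() for seg in np.array_split(x, n_segments)])
    breakpoints = [NormalDist().inv_cdf(i / alphabet_size)
                   for i in range(1, alphabet_size)]
    ranks = np.searchsorted(breakpoints, paa)      # symbol index per segment
    return "".join(chr(ord("a") + int(r)) for r in ranks)


# One period of a sine wave collapses to a short, readable 8-symbol word,
# with high symbols on the rising half and low symbols on the falling half.
word = sax(np.sin(np.linspace(0, 2 * np.pi, 64)))
print(word)
```

The resulting string is directly interpretable (each letter summarizes one time segment), which is exactly the property the paper aims to preserve while also capturing the complex patterns that such simple discretizations miss.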