
Coalitional Bayesian autoencoders: Towards explainable unsupervised deep learning with applications to condition monitoring under covariate shift

  • Additional information
    • Publication data:
      Elsevier BV
      Department of Engineering
      //dx.doi.org/10.1016/j.asoc.2022.108912
      Applied Soft Computing
    • Publication year:
      2022
    • Collection:
      Apollo - University of Cambridge Repository
    • Abstract:
      This paper aims to improve the explainability of autoencoder (AE) predictions by proposing two novel explanation methods based on the mean and epistemic uncertainty of log-likelihood estimates, which arise naturally from the probabilistic formulation of the AE, the Bayesian autoencoder (BAE). These formulations contrast with conventional post-hoc explanation methods for AEs, which incur additional modelling effort and implementation. We further extend the methods to sensor-based explanations, aggregating the explanations at the sensor level instead of the lower feature level. (An illustrative sketch of these explanation quantities follows this record.)
    • File Description:
      application/pdf
    • Relation:
      https://www.repository.cam.ac.uk/handle/1810/336630
    • Identifier:
      10.17863/CAM.84051
    • Rights:
      Attribution-NonCommercial-NoDerivatives 4.0 International ; https://creativecommons.org/licenses/by-nc-nd/4.0/
    • Identifier:
      edsbas.9652B314
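
As a rough illustration of the abstract's idea only (not the authors' implementation), the sketch below assumes a trained ensemble of M autoencoders with a Gaussian likelihood of fixed noise scale; the stand-in reconstructions, the feature-to-sensor mapping, and all names are hypothetical. It shows how the two explanation maps described in the abstract could be formed: the mean and the epistemic (ensemble) variance of the per-feature log-likelihood, then aggregated per sensor.

```python
# Minimal sketch (assumptions noted above): derive feature- and sensor-level
# explanations from an ensemble of autoencoder reconstructions of one input x.
import numpy as np

rng = np.random.default_rng(0)

D = 12        # number of features, e.g. 3 sensors x 4 features each (hypothetical)
M = 5         # ensemble size, i.e. posterior samples of the BAE (hypothetical)
sigma = 0.1   # assumed fixed decoder noise of a Gaussian likelihood

x = rng.normal(size=D)                              # one test observation
recon = x + rng.normal(scale=0.2, size=(M, D))      # stand-in for ensemble reconstructions

# Per-feature Gaussian log-likelihood for each ensemble member, shape (M, D)
log_lik = -0.5 * ((x - recon) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

# Explanation 1: mean log-likelihood per feature (low values flag anomalous features)
expl_mean = log_lik.mean(axis=0)

# Explanation 2: epistemic uncertainty per feature (ensemble variance of log-likelihood)
expl_var = log_lik.var(axis=0)

# Sensor-level explanations: average the feature attributions within each sensor's block
sensor_of_feature = np.repeat(np.arange(3), 4)      # hypothetical feature-to-sensor map
sensor_mean = np.array([expl_mean[sensor_of_feature == s].mean() for s in range(3)])
sensor_var = np.array([expl_var[sensor_of_feature == s].mean() for s in range(3)])

print("feature-level mean log-lik:", np.round(expl_mean, 2))
print("sensor-level mean log-lik: ", np.round(sensor_mean, 2))
print("sensor-level epistemic var:", np.round(sensor_var, 2))
```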