Abstract: This paper aims to improve the explainability of autoencoder (AE) predictions by proposing two novel explanation methods based on the mean and epistemic uncertainty of log-likelihood estimates, which arise naturally from the probabilistic formulation of the AE, the Bayesian autoencoder (BAE). These formulations contrast with conventional post-hoc explanation methods for AEs, which incur additional modelling effort and implementation. We further extend the methods to sensor-based explanations, aggregating the explanations at the sensor level instead of the lower feature level.
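The two explanation signals described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the ensemble members stand in for posterior samples of a BAE, their weights are random placeholders rather than trained parameters, the isotropic-Gaussian decoder (per-feature NLL proportional to squared reconstruction error) and the two-sensor feature grouping are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble of M linear autoencoders standing in for BAE
# posterior samples; weights would normally be learned, here they are
# random placeholders to illustrate the computation only.
M, D, H = 5, 6, 3
ensemble = [rng.normal(size=(D, H)) / np.sqrt(D) for _ in range(M)]

x = rng.normal(size=D)  # one test sample with D features

# Per-feature negative log-likelihood under an assumed isotropic Gaussian
# decoder: up to constants, the per-feature squared reconstruction error.
nll = np.stack([(x - (x @ W) @ W.T) ** 2 for W in ensemble])  # shape (M, D)

# Explanation 1: mean of the per-feature NLL across the ensemble.
mean_expl = nll.mean(axis=0)
# Explanation 2: epistemic uncertainty, i.e. the variance across members.
var_expl = nll.var(axis=0)

# Sensor-level aggregation: sum feature attributions per sensor, assuming
# (hypothetically) features 0-2 belong to sensor A and 3-5 to sensor B.
sensors = {"A": [0, 1, 2], "B": [3, 4, 5]}
sensor_mean = {s: mean_expl[idx].sum() for s, idx in sensors.items()}
sensor_var = {s: var_expl[idx].sum() for s, idx in sensors.items()}
print(sensor_mean)
print(sensor_var)
```

Both explanations come for free from the probabilistic model: the mean attributes an anomaly to features the ensemble consistently reconstructs poorly, while the variance highlights features on which the ensemble members disagree.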