Explanations in terms of Hierarchically organised Middle Level Features

Apicella A.;
2021

Abstract

The rapidly growing research area of eXplainable Artificial Intelligence (XAI) focuses on making the decisions of Machine Learning systems more transparent and understandable to humans. One of the most successful XAI strategies is to provide explanations as visualisations and, more specifically, in terms of low-level input features, such as the relevance scores or heat maps over the input produced by sensitivity analysis or layer-wise relevance propagation methods. The main problem with such methods is that, starting from the relevance of low-level features, the human user must identify which overall input properties are salient. Thus, a current line of XAI research attempts to alleviate this weakness of low-level approaches by constructing explanations in terms of input features that represent more salient and understandable input properties for a user, which we call here Middle-Level input Features (MLFs). In addition, another interesting and very recent approach is to consider hierarchically organised explanations. In this paper, we therefore investigate the possibility of combining MLFs with hierarchical organisations. The potential advantage of providing explanations in terms of hierarchically organised MLFs lies in the possibility of presenting explanations at different granularities of MLFs interacting with each other. We experimentally tested our approach on the 300 Birds Species and the Cars datasets. The results seem encouraging.

Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4911128
Citations
  • Scopus: 5