
Sparse dictionaries for the explanation of classification systems

Apicella A.;
2019

Abstract

Providing algorithmic explanations for the decisions of machine learning systems to end users, data protection officers, and other stakeholders in the design, production, commercialization, and use of machine learning systems is an important and challenging research problem. Crucial motivations for addressing this problem can be advanced on both ethical and legal grounds. Notably, explanations of the decisions of machine learning systems appear to be needed to protect the dignity, autonomy, and legitimate interests of people who are subject to automated decision-making. Much work in this area focuses on image classification, where the required explanations can be given in terms of images, thereby making them relatively easy to communicate to end users. In this paper we discuss how the representational power of sparse dictionaries can be used to identify local image properties as the main ingredients for producing human-understandable explanations of the decisions of a classifier built with machine learning methods.
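
As an illustration of the general technique the abstract refers to (not the paper's exact pipeline), the following sketch learns a sparse dictionary of local image patches with scikit-learn; the function name learn_patch_dictionary, the input array img, and parameters such as n_atoms and alpha are assumptions chosen for the example. The few atoms with non-zero coefficients for a patch play the role of the "local image properties" that could be surfaced in an explanation.

```python
# Minimal sketch, assuming scikit-learn and a grayscale image `img`
# given as a 2-D NumPy array. Illustrative only; not the authors' method.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def learn_patch_dictionary(img, patch_size=(8, 8), n_atoms=64, alpha=1.0):
    # Extract local patches from the image and flatten them into vectors.
    patches = extract_patches_2d(img, patch_size, max_patches=2000,
                                 random_state=0)
    X = patches.reshape(len(patches), -1).astype(float)
    X -= X.mean(axis=1, keepdims=True)  # remove per-patch mean intensity

    # Learn a dictionary whose atoms capture local image structure
    # (edges, textures, small shapes).
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=alpha,
                                       batch_size=256, random_state=0)
    codes = dico.fit_transform(X)   # sparse coefficients, one row per patch
    atoms = dico.components_        # dictionary atoms (local image parts)
    return atoms, codes
```

Inspecting which atoms receive non-zero coefficients for the patches of a given input is one plausible way to ground an explanation in concrete, visually displayable image parts.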
Files in this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4911124
Warning: the data shown have not been validated by the university.

Citations
  • PMC: not available
  • Scopus: 4
  • Web of Science: not available