
Visual and textual explainability for a biometric verification system based on piecewise facial attribute analysis

Cascone, L.; Pero, C.
2023-01-01

Abstract

The decisions behind the mechanics of a biometric verification system based on Machine Learning (ML) are difficult to comprehend. Although there is now well-established research in various fields of application, such as health or justice, the use of ML-based methods is accompanied by a lack of confidence that results in their limited use. The explainability of an ML system, and the comprehension of what lies behind its prediction, is one of the numerous characteristics that define "trust" in these systems. Over the years, face-based biometric authentication has been the subject of extensive research in both academia and industry. However, existing biometric authentication systems still have problems regarding accuracy, robustness, and explainability. Still lacking in the literature is a comprehensive examination of the use of post-hoc explainability techniques for such systems. Cognitive neuroscience has long been interested in how people perceive faces; local elements such as the nose, eyes, and mouth are critical to the perception and recognition of a face. In this work, starting from this assumption, we propose a framework of visual and textual explainability based on the parts of a face, analyzing them with respect to the facial attributes annotated in the CelebA dataset. The primary objective is to explain why two pictures of different subjects are distinct. This is done by synthesizing pairs of images that illustrate how dissimilar the various parts of the face under investigation are, and by generating incisive, direct textual explanations of the distinguishing features. A further study analyzes an interpretable mapping between the semantic space of the text and the space of the image. (c) 2023 Elsevier B.V. All rights reserved.
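The core idea of the abstract — comparing face parts through their CelebA attributes and emitting a textual explanation of the distinguishing features — can be illustrated with a minimal sketch. The region grouping, the `explain_difference` helper, and the sentence template below are hypothetical illustrations, not the authors' actual pipeline; only the attribute names come from the CelebA annotation scheme.

```python
# Hypothetical sketch of piecewise facial attribute comparison.
# The region-to-attribute grouping and the explanation template are
# illustrative assumptions; attribute names follow the CelebA scheme.

REGIONS = {
    "eyes": ["Narrow_Eyes", "Arched_Eyebrows", "Bushy_Eyebrows", "Eyeglasses"],
    "nose": ["Big_Nose", "Pointy_Nose"],
    "mouth": ["Big_Lips", "Mouth_Slightly_Open", "Mustache", "Smiling"],
}

def explain_difference(attrs_a, attrs_b):
    """Return one textual explanation per face region whose binary
    attributes differ between the two subjects."""
    lines = []
    for region, names in REGIONS.items():
        differing = [n for n in names
                     if attrs_a.get(n, 0) != attrs_b.get(n, 0)]
        if differing:
            lines.append(f"The {region} region differs: "
                         + ", ".join(differing) + ".")
    return lines

# Toy attribute vectors for two subjects (1 = attribute present).
subject_a = {"Big_Nose": 1, "Narrow_Eyes": 0, "Mustache": 1}
subject_b = {"Big_Nose": 0, "Narrow_Eyes": 0, "Mustache": 0}
print(explain_difference(subject_a, subject_b))
# → ['The nose region differs: Big_Nose.', 'The mouth region differs: Mustache.']
```

In the paper's framework such textual output accompanies synthesized image pairs that visualize the same per-part dissimilarities.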
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4846412
Note: the displayed data have not been validated by the university.

Citations
  • Scopus: 1
  • Web of Science (ISI): 0