
Explainable biometrics: a systematic literature review

Tucci C.;Della Greca A.;Tortora G.;Francese R.
2024-01-01

Abstract

Biometric systems are largely based on Machine Learning (ML) algorithms, which are often considered black boxes. There is a need to provide them with explanations that make their decisions understandable. In this paper, we conduct a Systematic Literature Review investigating the current adoption of explainable Artificial Intelligence (XAI) techniques in biometric systems. We examine the biometric tasks performed in the selected papers (e.g., face detection or face spoofing), the datasets adopted by the different approaches, the ML models considered, the XAI techniques, and their evaluation methods. We started from 496 papers and, after a careful analysis, selected 47 of them. Results revealed that XAI is mainly adopted in biometric systems related to the face biometric cue. The explanations provided were all based on model-centric metrics and did not consider how end users perceive the explanations, leaving wide space for biometric researchers to apply XAI models and enhance explanation evaluation from an HCI perspective.
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4894800