XAI approach for addressing the dataset shift problem: BCI as a case study

Apicella A.
2022

Abstract

In the Machine Learning (ML) literature, a well-known problem is Dataset Shift, in which, contrary to the standard ML assumption, the data in the training and test sets can follow different probability distributions, leading ML systems to poor generalisation performance. Such systems can therefore be unreliable and risky, particularly when used in safety-critical domains. This problem is acutely felt in the Brain-Computer Interface (BCI) context, where bio-signals such as Electroencephalographic (EEG) signals are used. In fact, EEG signals are highly non-stationary, both over time and across different subjects. Despite several efforts to develop BCI systems that can cope with different acquisition times or subjects, performance in many BCI applications remains low. Exploiting the knowledge provided by eXplainable Artificial Intelligence (XAI) methods can help develop EEG-based AI approaches that improve on the performance of current ones. The proposed framework will give BCI systems greater robustness and reliability with respect to the current state of the art, alleviating the dataset shift problem and allowing a BCI system to be used by different subjects at different times without further calibration/training stages.
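
The following is a minimal, hypothetical Python sketch (not the framework proposed in this work) that illustrates the dataset shift problem described above: a classifier fitted on synthetic "EEG-like" features from one subject keeps its accuracy on held-out data from the same distribution, but degrades on a second subject whose features are offset and rescaled, mimicking inter-subject non-stationarity. All names and parameter values (synth_subject, the offset/scale choices) are illustrative assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    def synth_subject(n_trials, offset, scale):
        """Synthetic two-class 'EEG feature' data for one hypothetical subject.
        `offset` and `scale` mimic subject-specific non-stationarity."""
        y = rng.integers(0, 2, n_trials)  # binary task labels
        X = rng.normal(loc=y[:, None].astype(float), size=(n_trials, 4))
        return X * scale + offset, y

    # "Subject A": the distribution the classifier is trained on
    X_tr, y_tr = synth_subject(500, offset=0.0, scale=1.0)
    X_va, y_va = synth_subject(500, offset=0.0, scale=1.0)  # held-out, same distribution
    # "Subject B": same task, but shifted feature distribution (dataset shift)
    X_te, y_te = synth_subject(500, offset=1.5, scale=2.0)

    clf = LogisticRegression().fit(X_tr, y_tr)
    print("same-subject accuracy :", accuracy_score(y_va, clf.predict(X_va)))
    print("cross-subject accuracy:", accuracy_score(y_te, clf.predict(X_te)))

The gap between the two reported accuracies is the kind of generalisation drop the abstract refers to, which calibration-free BCI approaches aim to close.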


Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4911199

Citations
  • Scopus: 4