Monte Carlo-Based Strategy for Assessing the Impact of EEG Data Uncertainty on Confidence in Convolutional Neural Network Classification
Nzakuna Pierre Sedi; Gallo V.; Paciello V.
2025
Abstract
The electroencephalography (EEG) data acquisition process in brain-computer interfaces (BCIs) is inevitably affected by uncertainty, which introduces variability into the data. This variability, often overlooked, affects the training and testing of neural network (NN) models. This study evaluates the impact of systematic bias (±2%) combined with aleatoric uncertainty (2-5% random Gaussian perturbations) on the classification confidence of the EEGNet model for four-class motor imagery (MI) tasks, using the BCI Competition IV 2a dataset. Through two Monte Carlo simulations of 100 iterations each, perturbed datasets were generated to mimic real-world EEG acquisition uncertainties. Softmax outputs were used to analyze the overlap in predicted probabilities and to quantify model confidence in classification decisions. We introduce robust evaluation metrics, including the proportion of the area under the curve (AUC) of probability density functions (PDFs) at or above 70%, overlap coefficients, and percentile-based thresholding, which provide a more comprehensive assessment of model performance, capturing not only accuracy but also confidence and ambiguity in predictions. Results show that the robustness of EEGNet under realistic measurement uncertainties is subject to inter-subject variability, with the model achieving higher confidence for Subject 1 (90.52% on average) than for Subject 2 (62.96%). EEGNet demonstrates resilience to directional calibration shifts in the data, with model confidence varying by 0.22% for Subject 1 and by 6.24% for Subject 2, indicating that aleatoric precision errors dominate over small systematic shifts. Our approach provides a rigorous framework for quantifying the impact of measurement variability on EEG-based BCI classification, thereby enhancing reliability and generalizability in practical BCI deployments.
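
The sketch below illustrates the kind of Monte Carlo perturbation and confidence analysis the abstract describes. It is an assumption-laden reconstruction, not the paper's code: the array shapes, the stand-in classifier, the use of scipy's gaussian_kde for PDF estimation, and all function and parameter names are illustrative. It applies a fixed ±2% gain bias plus 2-5% Gaussian noise over 100 iterations, then computes an overlap coefficient between two confidence PDFs and the proportion of PDF area at or above a 70% confidence threshold.

```python
# Hypothetical sketch of the Monte Carlo uncertainty analysis from the abstract.
# The EEGNet model itself is replaced by a dummy softmax classifier; shapes and
# names are assumptions for demonstration only.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(seed=0)

def perturb(eeg, bias, noise_level):
    """Apply a systematic gain bias plus zero-mean Gaussian noise.

    bias: fixed relative calibration shift (e.g. +0.02 or -0.02 for ±2%).
    noise_level: noise std as a fraction of the per-channel signal std (2-5%).
    """
    sigma = noise_level * eeg.std(axis=-1, keepdims=True)
    return eeg * (1.0 + bias) + rng.normal(0.0, sigma, size=eeg.shape)

def monte_carlo_confidences(eeg, predict_proba, n_iter=100, bias=0.02):
    """Collect max-softmax confidences over n_iter perturbed dataset copies."""
    conf = []
    for _ in range(n_iter):
        noise_level = rng.uniform(0.02, 0.05)   # aleatoric component, 2-5%
        probs = predict_proba(perturb(eeg, bias, noise_level))
        conf.append(probs.max(axis=-1))         # winning-class confidence per trial
    return np.concatenate(conf)

def overlap_coefficient(samples_a, samples_b, grid_size=512):
    """Integral of min(pdf_a, pdf_b) over a shared grid (KDE-estimated PDFs)."""
    lo = min(samples_a.min(), samples_b.min())
    hi = max(samples_a.max(), samples_b.max())
    grid = np.linspace(lo, hi, grid_size)
    dx = grid[1] - grid[0]
    pdf_a, pdf_b = gaussian_kde(samples_a)(grid), gaussian_kde(samples_b)(grid)
    return np.minimum(pdf_a, pdf_b).sum() * dx

def auc_above(samples, threshold=0.70, grid_size=512):
    """Proportion of the confidence PDF's area lying at or above `threshold`."""
    grid = np.linspace(samples.min(), samples.max(), grid_size)
    dx = grid[1] - grid[0]
    pdf = gaussian_kde(samples)(grid)
    return (pdf[grid >= threshold].sum() * dx) / (pdf.sum() * dx)

# Example usage with a stand-in classifier (random softmax over 4 MI classes):
def dummy_predict_proba(x):
    logits = rng.normal(size=(x.shape[0], 4))
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

trials = rng.normal(size=(32, 22, 1000))  # (trials, channels, samples); 22 channels as in BCI IV 2a
conf_pos = monte_carlo_confidences(trials, dummy_predict_proba, bias=+0.02)
conf_neg = monte_carlo_confidences(trials, dummy_predict_proba, bias=-0.02)
print(overlap_coefficient(conf_pos, conf_neg), auc_above(conf_pos))
```

Comparing the two confidence distributions (bias = +2% vs. -2%) via the overlap coefficient is one plausible way to realize the abstract's test of resilience to directional calibration shifts; the percentile-based thresholding metric mentioned in the abstract is not reproduced here.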