Learned-Weight Ensemble Monte Carlo DropBlock for Uncertainty Estimation and EEG Classification

Sedi Nzakuna P.; Gallo V.; Carratù M.; Paciello V.; Pietrosanto A.
2025

Abstract

Reliable predictive uncertainty - in particular predictive entropy used at decision time - in motor imagery electroencephalography (EEG)-based brain-computer interfaces (BCIs) is critical for safe real-world operation. Although Monte Carlo dropout and deep ensembles are effective for modeling predictive uncertainty, structured regularization approaches - such as FT-DropBlock for EEG-targeted convolutional neural networks (CNNs) - remain insufficiently investigated. We present learned-weight ensemble Monte Carlo FT-DropBlock with per-model temperature (LEMC-FTDB), a model-agnostic, generalizable framework that integrates structured regularization, stochastic inference, learned aggregation, and per-model temperature scaling to deliver calibrated probabilities and informative uncertainty estimates for CNN models. We evaluated our framework on the EEGNet and EEG-ITNet backbones using the BCI Competition IV 2a dataset, with metrics including predictive entropy, mutual information, expected calibration error, negative log-likelihood, Brier score, and misclassification-detection area under the curve (AUC). Results show that LEMC-FTDB consistently improves probability quality and calibration, strengthens misclassification detection, and maintains superior accuracy and Cohen's kappa versus Monte Carlo-based, deep-ensemble-based, and deterministic baselines. Crucially, predictive entropy cleanly ranks trial reliability and enables a hold/reject policy that achieves lower risk at the same coverage in risk-coverage analyses, supporting practical deployment. Our code is available at: github.com/SedCore/lemc_ftdb.
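The aggregation the abstract describes can be sketched concisely. The NumPy sketch below is a minimal, hypothetical illustration, not the paper's implementation: it assumes per-model logits from S stochastic FT-DropBlock forward passes, applies per-model temperature scaling, Monte Carlo averaging, and a learned convex combination across models, then derives the predictive-entropy and mutual-information estimates and the entropy-ranked risk-coverage sweep behind the hold/reject policy. All function names, array layouts, and the convex-weight assumption are ours; the authoritative code is at github.com/SedCore/lemc_ftdb.

import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def lemc_ftdb_predict(logits, temps, weights):
    """Aggregate stochastic ensemble passes (shapes and names are assumptions).

    logits  : (M, S, N, C) array - M models, S MC FT-DropBlock samples,
              N trials, C motor imagery classes.
    temps   : (M,) per-model temperatures, fit on held-out data.
    weights : (M,) learned non-negative model weights summing to one.
    """
    probs = softmax(logits / temps[:, None, None, None])        # (M, S, N, C)
    per_model = probs.mean(axis=1)                              # MC average, (M, N, C)
    p_bar = np.einsum('m,mnc->nc', weights, per_model)          # weighted mixture, (N, C)

    # Predictive entropy of the mixture: total uncertainty per trial.
    h_pred = -(p_bar * np.log(p_bar + 1e-12)).sum(axis=-1)      # (N,)

    # Mutual information: predictive entropy minus the (weighted) expected
    # entropy of the individual stochastic members - the epistemic component.
    h_member = -(probs * np.log(probs + 1e-12)).sum(axis=-1)    # (M, S, N)
    e_h = np.einsum('m,mn->n', weights, h_member.mean(axis=1))  # (N,)
    return p_bar, h_pred, h_pred - e_h

def risk_coverage(h_pred, y_pred, y_true):
    """Sweep an entropy threshold: accept low-entropy trials, abstain on the rest."""
    order = np.argsort(h_pred)                        # most confident trials first
    errors = (y_pred[order] != y_true[order]).astype(float)
    kept = np.arange(1, len(order) + 1)
    return kept / len(order), np.cumsum(errors) / kept  # (coverage, risk)

In this reading, thresholding the predictive entropy realizes the hold/reject policy: at any chosen coverage, the trials kept are those the mixture is most certain about, and the empirical error rate among them is the risk the risk-coverage curve reports.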

Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4930796