
AI-based cancer pain assessment through speech emotion recognition and video facial expressions classification

Cascella M.; Conti V.; Sabbatino F.; Piazza O.
2024

Abstract

The effective assessment of cancer pain requires a meticulous analysis of all the components that collectively shape the painful experience. Implementing Automatic Pain Assessment (APA) methods and computational analytical approaches, with a specific focus on emotional content, can facilitate a thorough characterization of pain. The proposed approach combines automatic emotion recognition from speech recordings with a model we previously developed to examine facial expressions of pain. For training and validation, we adopted the EMOVO dataset, which contains simulated recordings of six emotional states (the Big Six). A neural network, consisting of a multilayer perceptron, was trained on 181 prosodic features to classify emotions. For testing, we used a dataset of interviews collected from cancer patients and selected two case studies. Speech annotation and continuous facial expression analysis (resulting in pain/no pain classifications) were carried out using Eudico Linguistic Annotator (ELAN) version 6.7. The emotion analysis model achieved 84% accuracy, with encouraging precision, recall, and F1-score metrics across all classes. These preliminary results suggest that artificial intelligence (AI) strategies can continuously estimate emotional states from video recordings, unveil predominant emotional states, and corroborate the corresponding pain assessment. Despite limitations, the proposed AI framework exhibits potential for holistic, real-time pain assessment, paving the way for personalized pain management strategies in oncological settings. Clinical Trial registration: NCT04726228.
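The classification setup described above (a multilayer perceptron over 181 prosodic features, predicting one of the Big Six emotions) can be sketched as follows. This is not the authors' code: the feature-extraction step is omitted, random vectors stand in for the prosodic features, and the label names, network size, and training settings are illustrative assumptions.

```python
# Illustrative sketch only: an MLP emotion classifier over 181 prosodic
# features, mirroring the setup described in the abstract. Prosodic feature
# extraction from audio is NOT shown; random data stands in for it.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Assumed "Big Six" label set (the abstract does not enumerate the classes).
EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]

rng = np.random.default_rng(0)
n_samples, n_features = 600, 181          # 181 prosodic features per utterance
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, len(EMOTIONS), size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Standardize features, then fit a small MLP (hidden sizes are assumptions).
scaler = StandardScaler().fit(X_train)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
clf.fit(scaler.transform(X_train), y_train)

probs = clf.predict_proba(scaler.transform(X_test))  # per-class probabilities
preds = clf.predict(scaler.transform(X_test))        # hard class labels
```

In practice the random features would be replaced by prosodic descriptors (e.g., pitch, energy, and duration statistics) extracted per utterance, and accuracy, precision, recall, and F1-score would be computed against the held-out labels.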

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4922769

Citations
  • Scopus: 3