
Speeding Up Subjective Video Quality Assessment via Hybrid Active Learning

Di Mauro M.
2022-01-01

Abstract

Subjective video quality assessment (VQA) is the most reliable way to obtain accurate quality scores, providing first-hand data for research on quality of experience (QoE). However, traditional subjective VQA is cumbersome and time-consuming. In this paper, we propose an efficient subjective VQA framework based on hybrid information and active learning, named HA-SVQA. Built on the active-learning principle for data annotation, HA-SVQA iteratively assesses only the most valuable or informative videos, selected using hybrid information from the subject's prior decisions and objective quality predictions. By eliminating redundant (or less valuable) videos from the assessment, HA-SVQA speeds up the subjective VQA process. Concretely, our framework starts with a few videos of known quality to initialize dual regression models. It then uses a scoring-difference stratified sampling strategy to iteratively select the next group of videos to be assessed, namely those with high quality uncertainty. The newly scored videos are used to continually update every part of the framework. In this way, subjective VQA can be stopped early while still meeting the accuracy goal of a full subjective study. We conducted simulation experiments on three datasets: LIVE, LIVE VQC, and an underwater video quality dataset. The results show that HA-SVQA effectively speeds up subjective VQA, reducing the human workload or time cost by about one third when the data are redundant. To investigate the effectiveness of HA-SVQA in more depth, we conducted a field experiment with deep-sea videos and found that HA-SVQA still reduces the overall human workload by about one third, consistent with the simulation results. Finally, we discuss some of the factors that potentially affect QoE modelling and subjective VQA.
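The abstract outlines an active-learning loop: seed a pair of regression models with a few rated videos, repeatedly pick the videos where quality is most uncertain, have the subject rate only those, and retrain until the models agree well enough to stop early. The sketch below illustrates one plausible reading of that loop, using the disagreement between two regressors as the "scoring difference" uncertainty signal (a query-by-committee-style heuristic). It is not the authors' implementation: the feature representation, the oracle interface, the stratification parameters, and the stopping threshold are all assumptions introduced here for illustration.

```python
# Hypothetical sketch of an HA-SVQA-style loop; not the paper's actual code.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.svm import SVR

def select_by_score_difference(diff, n_strata, per_stratum):
    """Stratify candidates by the two models' prediction disagreement
    and take the most uncertain videos from every stratum, so the
    selected batch covers the whole uncertainty range."""
    order = np.argsort(diff)                       # ascending disagreement
    strata = np.array_split(order, n_strata)
    picked = []
    for s in strata:
        picked.extend(s[-per_stratum:])            # most uncertain per stratum
    return np.array(picked, dtype=int)

def ha_svqa_loop(features, oracle, n_init=20, batch_strata=5,
                 per_stratum=2, max_rounds=15):
    """features: precomputed objective features, one row per video.
    oracle(i): returns a subjective score for video i (a human rater
    in practice; simulated from ground-truth MOS in experiments)."""
    n = len(features)
    labeled = list(np.random.choice(n, n_init, replace=False))
    scores = {i: oracle(i) for i in labeled}       # seed videos of known quality
    model_a, model_b = GradientBoostingRegressor(), SVR()
    for _ in range(max_rounds):
        X = features[labeled]
        y = np.array([scores[i] for i in labeled])
        model_a.fit(X, y)                          # dual regression models
        model_b.fit(X, y)
        pool = np.array([i for i in range(n) if i not in scores])
        if len(pool) == 0:
            break
        diff = np.abs(model_a.predict(features[pool])
                      - model_b.predict(features[pool]))
        # Early stop once the models agree everywhere (assumed criterion):
        if diff.max() < 0.05 * (y.max() - y.min() + 1e-9):
            break
        chosen = pool[select_by_score_difference(diff, batch_strata, per_stratum)]
        for i in chosen:                           # rate only informative videos
            scores[i] = oracle(i)
            labeled.append(i)
    # Videos never rated inherit the averaged model prediction.
    preds = (model_a.predict(features) + model_b.predict(features)) / 2
    return {i: scores.get(i, preds[i]) for i in range(n)}
```

Under this reading, the workload saving comes from the final dictionary containing far fewer oracle calls than videos: redundant clips, where both models already agree, are never shown to the subject.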


Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4813397

Citations
  • PMC: not available
  • Scopus: 6
  • Web of Science: 1