Towards Adaptive Peer Assessment for MOOCs

Capuano N.;
2015-01-01

Abstract

The increasing popularity of Massive Open Online Courses (MOOCs) raises new issues related to the huge number of participants in such courses. Among the main challenges is the difficulty of assessing students, especially for complex assignments such as essays or open-ended exercises, where teachers cannot evaluate and provide feedback at large scale. A feasible approach to tackle this problem is peer assessment, in which students also act as assessors for assignments submitted by others. Unfortunately, as students may have different levels of expertise, peer assessment often does not deliver accurate results compared to human experts. In this paper, we describe and compare different methods aimed at mitigating this issue by adaptively combining peer grades on the basis of the detected expertise of the assessors. The possibility of improving these results through optimized techniques for assessor assignment is also discussed. Experimental results with synthetic data show better performance than standard aggregation operators (i.e., median or mean) as well as similar existing approaches.
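The abstract contrasts adaptive, expertise-weighted aggregation of peer grades with plain mean or median aggregation. The paper's actual methods are not reproduced here; the following is only a minimal illustrative sketch of the general idea, assuming an iterative scheme in which each assessor's weight is the inverse of their mean squared deviation from the current grade estimates (all function and variable names are hypothetical):

```python
def aggregate(grades, n_iter=10, eps=1e-6):
    """Sketch of expertise-weighted peer-grade aggregation.

    grades: dict mapping assessor -> {submission: grade}.
    Returns dict mapping submission -> estimated grade.
    """
    assessors = list(grades)
    submissions = {s for g in grades.values() for s in g}

    # Start from the plain (unweighted) mean of the peer grades.
    est = {
        s: sum(g[s] for g in grades.values() if s in g)
           / sum(1 for g in grades.values() if s in g)
        for s in submissions
    }
    weight = {a: 1.0 for a in assessors}

    for _ in range(n_iter):
        # Estimate each assessor's reliability as the inverse of their
        # mean squared deviation from the current grade estimates.
        for a in assessors:
            err = [(grades[a][s] - est[s]) ** 2 for s in grades[a]]
            weight[a] = 1.0 / (sum(err) / len(err) + eps)
        # Re-aggregate grades as a reliability-weighted average.
        for s in submissions:
            num = sum(weight[a] * grades[a][s] for a in assessors if s in grades[a])
            den = sum(weight[a] for a in assessors if s in grades[a])
            est[s] = num / den
    return est
```

For example, with two consistent assessors and one outlier, the estimate moves away from the plain mean and toward the grades of the assessors who agree with the consensus, which is the qualitative behavior the abstract attributes to adaptive combination.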

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4863369
Warning: the displayed data have not been validated by the university.

Citations
  • PMC: not available
  • Scopus: 20
  • Web of Science (ISI): 19