Toward Realistic AI-Generated Student Questions to Support Instructor Training

Pentangelo V.; Lambiase S.; Gravino C.; Palomba F.
2026

Abstract

Instructor effectiveness is fundamental to student learning, with the ability to manage student inquiries serving as a critical component of effective teaching. Student questions represent a valuable training resource for instructors to strengthen their teaching strategies, yet interactions with students are often constrained by several factors. In this paper, we investigate how instructors perceive machine- and student-generated questions, considering the potential for the former to complement the latter in a cost-effective manner. Our study involved 121 undergraduate students and an equivalent number of simulated students modeled using a state-of-the-art large language model, generating over 360 questions in total based on video lectures given by seven university instructors. We assessed whether instructors could distinguish between human- and machine-generated questions and how they evaluated their relevance, clarity, answerability, challenge level, and cognitive depth. Results show that instructors struggle to differentiate between the two sets of questions, with accuracy close to random chance. Instructors tended to (i) rate machine-generated questions slightly higher in relevance, clarity, answerability, and challenge—though only relevance and answerability showed significant differences—and (ii) associate them marginally more often with higher-order cognitive skills. This confirms the potential of machine-generated questions as tools for instructor training. Repository: https://github.com/tail-unica/realistic-ai-generated-questions.
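The abstract describes simulated students modeled with a large language model that generate questions from lecture material. The record does not disclose the exact model or prompt used; the following Python sketch is only a minimal illustration, under assumed details (model name, persona description, and prompt wording are placeholders, not the authors' pipeline), of how such a simulated-student question could be elicited via the OpenAI chat completions API.

# Minimal sketch (not the authors' pipeline): prompt an LLM to act as a
# simulated undergraduate and ask one question about a lecture excerpt.
# The model name, persona fields, and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def simulate_student_question(lecture_excerpt: str, background: str) -> str:
    """Return one student-style question grounded in the given lecture excerpt."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed; the paper only states "state-of-the-art LLM"
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an undergraduate student attending a university lecture. "
                    f"Your background: {background}. "
                    "After reading the lecture excerpt, ask the instructor exactly one "
                    "question, phrased naturally as a student would."
                ),
            },
            {"role": "user", "content": lecture_excerpt},
        ],
        temperature=0.9,  # some variability so different personas ask different questions
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    excerpt = "In today's lecture we introduced test-driven development..."
    print(simulate_student_question(excerpt, "second-year CS student with little testing experience"))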
ISBN: 9783032038692; 9783032038708

Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4920157