Designing An Automated Assessment of Public Speaking Skills Using Multimodal Cues

Authors

  • Lei Chen Educational Testing Service (ETS)
  • Gary Feng Educational Testing Service (ETS)
  • Chee Wee Leong Educational Testing Service (ETS)
  • Jilliam Joe Educational Testing Service (ETS)
  • Christopher Kitchen Educational Testing Service (ETS)
  • Chong Min Lee Educational Testing Service (ETS)

DOI:

https://doi.org/10.18608/jla.2016.32.13

Abstract

Traditional assessments of public speaking skills rely on human scoring. We report an initial study on the development of an automated scoring model for public speaking performances using multimodal technologies. Task design, rubric development, and human rating were conducted according to standards in educational assessment. An initial corpus of 17 speakers, each completing 4 speaking tasks, was collected using audio, video, and 3D motion-capture devices. A scoring model based on basic features of the speech content, speech delivery, and hand, body, and head movements significantly predicts human ratings, suggesting the feasibility of using multimodal technologies in the assessment of public speaking skills.
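
As a concrete illustration of the kind of scoring model the abstract describes, the sketch below regresses human holistic ratings on a handful of multimodal features and evaluates the fit with leave-one-out cross-validation, which suits a corpus of this size. The feature names, the linear-regression choice, and the randomly generated data are illustrative assumptions only; they are not the authors' actual features, model, or corpus.

```python
# Minimal sketch of a multimodal scoring model: regress human holistic
# scores on features drawn from speech content, speech delivery, and
# hand/body/head movement. All feature names and data are placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# 17 speakers x 4 tasks = 68 performances, matching the corpus size;
# each row concatenates hypothetical per-performance features.
n_performances = 17 * 4
X = np.column_stack([
    rng.normal(size=n_performances),  # e.g., speaking rate (delivery)
    rng.normal(size=n_performances),  # e.g., content-relevance score
    rng.normal(size=n_performances),  # e.g., hand-movement energy
    rng.normal(size=n_performances),  # e.g., head-rotation variability
])
# Human holistic ratings on a 1-5 scale (random stand-ins here).
y = rng.integers(1, 6, size=n_performances).astype(float)

# Leave-one-out cross-validation keeps the evaluation honest on a small
# corpus; report the machine-human score correlation.
model = LinearRegression()
y_pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
r, p = pearsonr(y, y_pred)
print(f"machine-human correlation: r = {r:.2f} (p = {p:.3f})")
```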

Author Biography

Lei Chen, Educational Testing Service (ETS)

Research Scientist

Published

2016-09-17

How to Cite

Chen, L., Feng, G., Leong, C. W., Joe, J., Kitchen, C., & Lee, C. M. (2016). Designing An Automated Assessment of Public Speaking Skills Using Multimodal Cues. Journal of Learning Analytics, 3(2), 261-281. https://doi.org/10.18608/jla.2016.32.13

Issue

Vol. 3 No. 2 (2016)

Section

Special section: Multimodal learning analytics