Bridging the Gap Between Theory and Empirical Research in Evaluative Judgment

Authors

Hassan Khosravi, George Gyamfi, Barbara E. Hanna, Jason Lodge, Solmaz Abdi

DOI:

https://doi.org/10.18608/jla.2021.7206

Keywords:

evaluative judgement, educational technologies, learnersourcing, empirical educational research

Abstract

The value of students developing the capacity to accurately judge the quality of their own work and that of others has been widely studied and recognized in the higher education literature. To date, much of the research and commentary on evaluative judgment has been theoretical and speculative in nature, focusing on perceived benefits and proposing strategies thought to hold the potential to foster evaluative judgment; the efficacy of these strategies remains largely untested. The rise of educational tools and technologies that generate data on learning activities at an unprecedented scale, alongside insights from the learning sciences and learning analytics communities, provides new opportunities for fostering and supporting empirical research on evaluative judgment. Accordingly, this paper offers a conceptual framework for data-driven approaches to investigating the enhancement of evaluative judgment, together with an instantiation of that framework in the form of an educational tool called RiPPLE. Two case studies demonstrating how RiPPLE can foster and support empirical research on evaluative judgment are presented.

References

Abdi, S., Khosravi, H., & Sadiq, S. (2021). Modelling learners in adaptive educational systems: A multivariate Glicko-based approach. In Proceedings of the 11th International Conference on Learning Analytics and Knowledge (LAK 2021), 12–16 April 2021, online (pp. 497–503). New York: ACM. https://doi.org/10.1145/3448139.3448189

Abdi, S., Khosravi, H., Sadiq, S., & Demartini, G. (2021). Evaluating the quality of learning resources: A learnersourcing approach. IEEE Transactions on Learning Technologies, 14(1), 81–92. https://doi.org/10.1109/TLT.2021.3058644

Abdi, S., Khosravi, H., Sadiq, S., & Gašević, D. (2019). A multivariate Elo-based learner model for adaptive educational systems. In C. F. Lynch, A. Merceron, M. Desmarais, & R. Nkambou (Eds.), Proceedings of the 12th International Conference on Educational Data Mining (EDM 2019), 2–5 July 2019, Montreal, Quebec, Canada (pp. 228–233). Retrieved from https://files.eric.ed.gov/fulltext/ED599177.pdf

Abdi, S., Khosravi, H., Sadiq, S., & Gašević, D. (2020). Complementing educational recommender systems with open learner models. In Proceedings of the 10th International Conference on Learning Analytics and Knowledge (LAK 2020), 23–27 March 2020, Frankfurt, Germany. New York: ACM. https://doi.org/10.1145/3375462.3375520

Ahmad Uzir, N., Gašević, D., Matcha, W., Jovanović, J., & Pardo, A. (2020). Analytics of time management strategies in a flipped classroom. Journal of Computer Assisted Learning, 36(1), 70–88. https://doi.org/10.1111/jcal.12392

Ajjawi, R., Tai, J., Dawson, P., & Boud, D. (2018). Conceptualising evaluative judgement for sustainable assessment in higher education. In D. Boud, R. Ajjawi, P. Dawson, & J. Tai (Eds.), Developing Evaluative Judgement in Higher Education (pp. 23–33). London: Routledge.

Andrade, H. L., & Brown, G. T. (2016). Student self-assessment in the classroom. In G. Brown & L. Harris (Eds.), Handbook of Human and Social Conditions in Assessment (pp. 319–334). New York: Routledge.

Bates, S. P., Galloway, R. K., Riise, J., & Homer, D. (2014). Assessing the quality of a student-generated question repository. Physical Review Special Topics—Physics Education Research, 10(2), 020105. https://doi.org/10.1103/PhysRevSTPER.10.020105

Bhalerao, A., & Ward, A. (2001). Towards electronically assisted peer assessment: A case study. ALT-J, 9(1), 26–37. https://doi.org/10.1080/09687760108656773

Boud, D. (2000). Sustainable assessment: Rethinking assessment for the learning society. Studies in Continuing Education, 22(2), 151–167. https://doi.org/10.1080/713695728

Boud, D., Lawson, R., & Thompson, D. G. (2013). Does student engagement in self-assessment calibrate their judgement over time? Assessment & Evaluation in Higher Education, 38(8), 941–956. https://doi.org/10.1080/02602938.2013.769198

Boud, D., & Soler, R. (2016). Sustainable assessment revisited. Assessment & Evaluation in Higher Education, 41(3), 400–413. https://doi.org/10.1080/02602938.2015.1018133

Bouwer, R., Lesterhuis, M., Bonne, P., & De Maeyer, S. (2018). Applying criteria to examples or learning by comparison: Effects on students’ evaluative judgment and performance in writing. Frontiers in Education, 3, 86. https://doi.org/10.3389/feduc.2018.00086

Carless, D., Chan, K. K. H., To, J., Lo, M., & Barrett, E. (2018). Developing students’ capacities for evaluative judgement through analysing exemplars. In D. Boud, R. Ajjawi, P. Dawson, & J. Tai (Eds.), Developing Evaluative Judgement in Higher Education: Assessment for Knowing and Producing Quality Work. London: Routledge.

Cho, K., & Schunn, C. D. (2007). Scaffolded writing and rewriting in the discipline: A web-based reciprocal peer review system. Computers & Education, 48(3), 409–426. https://doi.org/10.1016/j.compedu.2005.02.004

Corrin, L., Kennedy, G., French, S., Shum, S. B., Kitto, K., Pardo, A., . . . Colvin, C. (2019). The Ethics of Learning Analytics in Australian Higher Education. Melbourne, Australia: University of Melbourne.

Cowan, J. (2010). Developing the ability for making evaluative judgements. Teaching in Higher Education, 15(3), 323–334. https://doi.org/10.1080/13562510903560036

Darvishi, A., Khosravi, H., & Sadiq, S. (2020). Utilising learnersourcing to inform design loop adaptivity. In C. Alario-Hoyos, M. J. Rodríguez-Triana, M. Scheffel, I. Arnedillo-Sánchez, & S. M. Dennerlein (Eds.), Proceedings of the 15th European Conference on Technology Enhanced Learning (EC-TEL 2020), Addressing Global Challenges and Quality Education, 14–18 September 2020, Heidelberg, Germany (pp. 332–346). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-57717-9_24

Denny, P. (2019). PeerWise Publications. Retrieved from https://peerwise.cs.auckland.ac.nz/docs/publications/

Denny, P., Hamer, J., Luxton-Reilly, A., & Purchase, H. (2008). PeerWise: Students sharing their multiple choice questions. In Proceedings of the Fourth International Workshop on Computing Education Research (ICER 2008), 6–7 September 2008, Sydney, Australia (pp. 51–58). New York: ACM. https://doi.org/10.1145/1404520.1404526

De Raadt, M., Toleman, M., & Watson, R. (2005). Electronic peer review: A large cohort teaching themselves? In Proceedings of the 22nd Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education: Balance, Fidelity, Mobility: Maintaining the Momentum? (ASCILITE 2005), 4–7 December 2005, Brisbane, Australia (Vol. 1, pp. 159–168).

Doroudi, S., Williams, J., Kim, J., Patikorn, T., Ostrow, K., Selent, D., . . . Rosé, C. (2018). Crowdsourcing and education: Towards a theory and praxis of learnersourcing. In Proceedings of the 13th International Conference of the Learning Sciences (ICLS 2018), 23–27 June 2018, London, UK (Vol. 2, pp. 1267–1274). International Society of the Learning Sciences (ISLS).

Drachsler, H., Hoel, T., Scheffel, M., Kismihók, G., Berg, A., Ferguson, R., . . . Manderveld, J. (2015). Ethical and privacy issues in the application of learning analytics. In Proceedings of the Fifth International Conference on Learning Analytics and Knowledge (LAK 2015), 16–20 March 2015, Poughkeepsie, New York (pp. 390–391). New York: ACM. https://doi.org/10.1145/2723576.2723642

Ferguson, R., Clow, D., Macfadyen, L., Essa, A., Dawson, S., & Alexander, S. (2014). Setting learning analytics in context: Overcoming the barriers to large-scale adoption. In Proceedings of the Fourth International Conference on Learning Analytics and Knowledge (LAK 2014), 24–28 March 2014, Indianapolis, Indiana, USA (pp. 251–253). New York: ACM. https://doi.org/10.1145/2567574.2567592

Gyamfi, G., Hanna, B., & Khosravi, H. (2021). The effects of rubrics on evaluative judgement: A randomised controlled experiment. Assessment & Evaluation in Higher Education. https://doi.org/10.1080/02602938.2021.1887081

Hamer, J., Kell, C., & Spence, F. (2007). Peer assessment using Aropä. In Proceedings of the Ninth Australasian Conference on Computing Education (ACE 2007), 30 January–2 February 2007, Ballarat, Victoria, Australia (Vol. 66, pp. 43–54). Australian Computer Society. Retrieved from https://dl.acm.org/doi/10.5555/1273672.1273678

Hastie, R., & Dawes, R. M. (2010). Rational Choice in an Uncertain World: The Psychology of Judgment and Decision Making. Sage. Retrieved from https://us.sagepub.com/en-us/nam/rational-choice-in-an-uncertain-world/book231783

Heffernan, N. (2019). ASSISTments: As a researcher’s tool. Retrieved from https://sites.google.com/site/assistmentsstudies/all-studies

Heffernan, N., Ostrow, K. S., Kelly, K., Selent, D., Van Inwegen, E. G., Xiong, X., & Williams, J. J. (2016). The future of adaptive learning: Does the crowd hold the key? International Journal of Artificial Intelligence in Education, 26(2), 615–644. https://doi.org/10.1007/s40593-016-0094-z

Holstein, K., McLaren, B. M., & Aleven, V. (2019). Co-designing a real-time classroom orchestration tool to support teacher–AI complementarity. Journal of Learning Analytics, 6(2), 27–52. https://doi.org/10.18608/jla.2019.62.3

Jahedi, S., & Méndez, F. (2014). On the advantages and disadvantages of subjective measures. Journal of Economic Behavior & Organization, 98, 97–114. https://doi.org/10.1016/j.jebo.2013.12.016

Joughin, G., Boud, D., & Dawson, P. (2019). Threats to student evaluative judgement and their management. Higher Education Research & Development, 38(3), 537–549. https://doi.org/10.1080/07294360.2018.1544227

Jovanović, J., Gašević, D., Dawson, S., Pardo, A., & Mirriahi, N. (2017). Learning analytics to unveil learning strategies in a flipped classroom. The Internet and Higher Education, 33(4), 74–85. https://doi.org/10.1016/j.iheduc.2017.02.001

Khosravi, H., & Cooper, K. (2018). Topic dependency models: Graph-based visual analytics for communicating assessment data. Journal of Learning Analytics, 5(3), 136–153. https://doi.org/10.18608/jla.2018.53.9

Khosravi, H., Cooper, K., & Kitto, K. (2017). RiPLE: Recommendation in peer-learning environments based on knowledge gaps and interests. Journal of Educational Data Mining, 9(1), 42–67. https://doi.org/10.5281/zenodo.3554627

Khosravi, H., Demartini, G., Sadiq, S., & Gašević, D. (2021). Charting the design and analytics agenda of learnersourcing systems. In Proceedings of the 11th International Conference on Learning Analytics and Knowledge (LAK 2021), 12–16 April 2021, online. New York: ACM. https://doi.org/10.1145/3448139.3448143

Khosravi, H., Gyamfi, G., Hanna, B. E., & Lodge, J. (2020). Fostering and supporting empirical research on evaluative judgement via a crowdsourced adaptive learning system. In Proceedings of the 10th International Conference on Learning Analytics and Knowledge (LAK 2020), 23–27 March 2020, Frankfurt, Germany (pp. 83–88). New York: ACM. https://doi.org/10.1145/3375462.3375532

Khosravi, H., Kitto, K., & Joseph, W. (2019). RiPPLE: A crowdsourced adaptive platform for recommendation of learning activities. Journal of Learning Analytics, 6(3), 91–105. https://doi.org/10.18608/jla.2019.63.12

Khosravi, H., Sadiq, S., & Gašević, D. (2020). Development and adoption of an adaptive learning system: Reflections and lessons learned. In Proceedings of the 2020 ACM SIGCSE Technical Symposium on Computer Science Education (SIGCSE 2020), online. New York: ACM. Retrieved from https://sigcse2020.sigcse.org/online/papers.html

Kovanović, V., Gašević, D., Dawson, S., Joksimović, S., & Baker, R. (2015). Does time-on-task estimation matter? Implications on validity of learning analytics findings. Journal of Learning Analytics, 2(3), 81–110. https://doi.org/10.18608/jla.2015.23.6

Lipnevich, A. A., McCallen, L. N., Miles, K. P., & Smith, J. K. (2014). Mind the gap! Students’ use of exemplars and detailed rubrics as formative assessment. Instructional Science, 42(4), 539–559. https://doi.org/10.1007/s11251-013-9299-9

Lodge, J. M., Kennedy, G., & Hattie, J. (2018). Understanding, assessing and enhancing student evaluative judgement in digital environments. In D. Boud, R. Ajjawi, P. Dawson, & J. Tai (Eds.), Developing Evaluative Judgement in Higher Education (pp. 86–94). London: Routledge. https://doi.org/10.4324/9781315109251

Luxton-Reilly, A. (2009). A systematic review of tools that support peer assessment. Computer Science Education, 19(4), 209–232. https://doi.org/10.1080/08993400903384844

Luxton-Reilly, A., Plimmer, B., & Sheehan, R. (2010). StudySieve: A tool that supports constructive evaluation for free-response questions. In Proceedings of the 11th International Conference of the NZ Chapter of the ACM Special Interest Group on Human-Computer Interaction (CHINZ 2010), 8 July 2010, Auckland, New Zealand (pp. 65–68). New York: ACM. https://doi.org/10.1145/1832838.1832849

Matcha, W., Gašević, D., Uzir, N. A., Jovanović, J., & Pardo, A. (2019). Analytics of learning strategies: Associations with academic performance and feedback. In Proceedings of the Ninth International Conference on Learning Analytics and Knowledge (LAK 2019), 4–8 March 2019, Tempe, Arizona, USA (pp. 461–470). https://doi.org/10.1145/3303772.3303787

Matlock-Hetzel, S. (1997). Basic concepts in item and test analysis. Paper presented at the Annual Meeting of the Southwest Educational Research Association, 23–25 January 1997, Austin, Texas. Retrieved from https://files.eric.ed.gov/fulltext/ED406441.pdf

McConlogue, T. (2012). But is it fair? Developing students’ understanding of grading complex written work through peer assessment. Assessment & Evaluation in Higher Education, 37(1), 113–123. https://doi.org/10.1080/02602938.2010.515010

Nicol, D. (2010). The Foundation for Graduate Attributes: Developing Self-Regulation through Self and Peer-Assessment. Glasgow, Scotland: The Quality Assurance Agency for Higher Education.

Nicol, D., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218. https://doi.org/10.1080/03075070600572090

Panadero, E. (2017). A review of self-regulated learning: Six models and four directions for research. Frontiers in Psychology, 8, 422. https://doi.org/10.3389/fpsyg.2017.00422

Panadero, E., Broadbent, J., Boud, D., & Lodge, J. M. (2019). Using formative assessment to influence self- and co-regulated learning: The role of evaluative judgement. European Journal of Psychology of Education, 34(3), 535–557. https://doi.org/10.1007/s10212-018-0407-8

Papamitsiou, Z., & Economides, A. A. (2014). Learning analytics and educational data mining in practice: A systematic literature review of empirical evidence. Journal of Educational Technology & Society, 17(4), 49–64. Retrieved from https://www.jstor.org/stable/jeductechsoci.17.4.49

Pardo, A., & Siemens, G. (2014). Ethical and privacy principles for learning analytics. British Journal of Educational Technology, 45(3), 438–450. https://doi.org/10.1111/bjet.12152

Park, O. C., & Lee, J. (2004). Adaptive instructional systems. In D. Jonassen (Ed.), Handbook of Research on Educational Communications and Technology (2nd ed., pp. 651–684). Mahwah, New Jersey, USA: Lawrence Erlbaum Associates. Retrieved from https://psycnet.apa.org/record/2004-00176-025

Pirttinen, N., Kangas, V., Nygren, H., Leinonen, J., & Hellas, A. (2018). Analysis of students’ peer reviews to crowdsourced programming assignments. In Proceedings of the 18th Koli Calling International Conference on Computing Education Research, 22–25 November 2018, Koli, Finland (pp. 21:1–21:5). https://doi.org/10.1145/3279720.3279741

Reddy, Y. M., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435–448. https://doi.org/10.1080/02602930902862859

Rosenbaum, P. R., & Rubin, D. B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1), 41–55. https://doi.org/10.2307/2335942

Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(2), 119–144. https://doi.org/10.1007/BF00117714

Sadler, D. R. (2009). Grade integrity and the representation of academic achievement. Studies in Higher Education, 34(7), 807–826. https://doi.org/10.1080/03075070802706553

Sadler, D. R. (2010). Beyond feedback: Developing student capability in complex appraisal. Assessment & Evaluation in Higher Education, 35(5), 535–550. https://doi.org/10.1080/02602930903541015

Shrout, P. E., & Fleiss, J. L. (1979). Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin, 86(2), 420–428. https://doi.org/10.1037/0033-2909.86.2.420

Siemens, G., & Baker, R. S. d. (2012). Learning analytics and educational data mining: Towards communication and collaboration. In Proceedings of the Second International Conference on Learning Analytics and Knowledge (LAK 2012), 29 April–2 May 2012, Vancouver, British Columbia, Canada (pp. 252–254). https://doi.org/10.1145/2330601.2330661

Søndergaard, H. (2009). Learning from and with peers: The different roles of student peer reviewing. ACM SIGCSE Bulletin, 41(3), 31–35. https://doi.org/10.1145/1595496.1562893

Styles, B., & Torgerson, C. (2018). Randomised controlled trials (RCTs) in education research—Methodological debates, questions, challenges. Educational Research, 60(3), 255–264. https://doi.org/10.1080/00131881.2018.1500194

Sullivan, G. M. (2011). Getting off the “gold standard”: Randomized controlled trials and education research. Journal of Graduate Medical Education, 3(3), 285–289. https://doi.org/10.4300/JGME-D-11-00147.1

Sung, Y.-T., Chang, K.-E., Chiou, S.-K., & Hou, H.-T. (2005). The design and application of a web-based self- and peer-assessment system. Computers & Education, 45(2), 187–202. https://doi.org/10.1016/j.compedu.2004.07.002

Tai, J., Ajjawi, R., Boud, D., Dawson, P., & Panadero, E. (2018). Developing evaluative judgement: Enabling students to make decisions about the quality of work. Higher Education, 76(3), 467–481. https://doi.org/10.1007/s10734-017-0220-3

Tai, J., Canny, B. J., Haines, T. P., & Molloy, E. K. (2016). The role of peer-assisted learning in building evaluative judgement: Opportunities in clinical medical education. Advances in Health Sciences Education, 21(3), 659–676. https://doi.org/10.1007/s10459-015-9659-0

Winne, P. H., Teng, K., Chang, D., Lin, M. P.-C., Marzouk, Z., Nesbit, J. C., . . . Vytasek, J. (2019). nStudy: Software for learning analytics about processes for self-regulated learning. Journal of Learning Analytics, 6(2), 95–106. https://doi.org/10.18608/jla.2019.62.7

Published

2021-11-03

How to Cite

Khosravi, H., Gyamfi, G., Hanna, B. E., Lodge, J., & Abdi, S. (2021). Bridging the gap between theory and empirical research in evaluative judgment. Journal of Learning Analytics, 8(3), 117–132. https://doi.org/10.18608/jla.2021.7206

Issue

Vol. 8 No. 3 (2021)

Section

Extended Conference Papers
