The Quality of Assessment for Learning score for evaluating written feedback in anesthesiology postgraduate medical education: a generalizability and decision study
DOI: https://doi.org/10.36834/cmej.75876
Abstract
Background: Competency-based residency programs depend on high-quality feedback from the assessment of entrustable professional activities (EPAs). The Quality of Assessment for Learning (QuAL) score is a tool developed to rate the quality of narrative comments in workplace-based assessments. It has validity evidence for scoring the quality of narrative feedback provided to emergency medicine residents, but it is unknown whether the QuAL score is reliable for assessing narrative feedback in other postgraduate programs.
Methods: Fifty sets of EPA narratives from a single academic year at our competency-based postgraduate anesthesia program were selected by stratified sampling within defined parameters (e.g., resident gender and stage of training, assessor gender, Competency by Design training level, and word count [≥17 or <17 words]). Two competency committee members and two medical students rated the quality of the narrative feedback using a utility score and the QuAL score. We used Kendall's tau-b coefficient to compare the perceived utility of the written feedback with the quality assessed by the QuAL score, and we conducted generalizability and decision studies to estimate the reliability and generalizability coefficients.
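As a minimal sketch of the correlation analysis described above, the following Python snippet compares paired utility and QuAL ratings with Kendall's tau-b; the variable names and example values are hypothetical placeholders, not the study data.

```python
# Minimal sketch of the Kendall's tau-b comparison described in the Methods.
# The ratings below are illustrative placeholders, not the study data.
from scipy.stats import kendalltau

# Hypothetical paired ratings for the same set of EPA narratives:
# a rater's perceived-utility score and the corresponding QuAL score.
utility_scores = [3, 4, 2, 5, 4, 1, 3, 5, 2, 4]
qual_scores    = [2, 4, 1, 5, 3, 1, 2, 5, 2, 3]

# scipy's kendalltau uses the tau-b variant, which corrects for ties.
tau_b, p_value = kendalltau(utility_scores, qual_scores)
print(f"Kendall's tau-b = {tau_b:.3f}, p = {p_value:.3f}")
```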
Results: Utility scores and QuAL scores were moderately correlated for both faculty (r = 0.646, p < 0.001) and trainees (r = 0.667, p < 0.001). The generalizability studies showed that utility scores were reliable with two raters for both faculty (Epsilon = 0.87, Phi = 0.86) and trainees (Epsilon = 0.88, Phi = 0.88).
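For readers unfamiliar with generalizability theory, the sketch below estimates relative (Epsilon) and absolute (Phi) coefficients for a fully crossed narratives-by-raters design and projects them to a two-rater decision study. The function name, data layout, and the assumption of a fully crossed design are ours for illustration, not the authors' analysis code.

```python
# Illustrative G-study / D-study for a fully crossed design:
# rows = EPA narratives (object of measurement), columns = raters.
import pandas as pd


def g_and_d_study(scores: pd.DataFrame, n_raters: int = 2):
    """Return (Epsilon, Phi) for a persons x raters design with n_raters in the D-study."""
    n_p, n_r = scores.shape
    values = scores.to_numpy(dtype=float)
    grand_mean = values.mean()

    # Sums of squares from a two-way ANOVA without replication.
    ss_persons = n_r * ((values.mean(axis=1) - grand_mean) ** 2).sum()
    ss_raters = n_p * ((values.mean(axis=0) - grand_mean) ** 2).sum()
    ss_total = ((values - grand_mean) ** 2).sum()
    ss_residual = ss_total - ss_persons - ss_raters

    ms_persons = ss_persons / (n_p - 1)
    ms_raters = ss_raters / (n_r - 1)
    ms_residual = ss_residual / ((n_p - 1) * (n_r - 1))

    # Variance components (negative estimates truncated at zero).
    var_residual = ms_residual
    var_persons = max((ms_persons - ms_residual) / n_r, 0.0)
    var_raters = max((ms_raters - ms_residual) / n_p, 0.0)

    # D-study: project reliability for n_raters raters.
    epsilon = var_persons / (var_persons + var_residual / n_raters)             # relative
    phi = var_persons / (var_persons + (var_raters + var_residual) / n_raters)  # absolute
    return epsilon, phi


# Hypothetical ratings: 5 narratives scored by 2 raters on a 0-5 scale.
ratings = pd.DataFrame({"rater_1": [4, 2, 5, 1, 3], "rater_2": [4, 3, 5, 1, 2]})
print(g_and_d_study(ratings, n_raters=2))
```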
Conclusions: The QuAL score correlates with faculty- and trainee-rated utility of anesthesia EPA feedback, and both faculty and trainees can reliably apply the QuAL score to anesthesia EPA narrative feedback. This tool has the potential to be used for faculty development and program evaluation in competency-based medical education. Other programs could consider replicating our study in their specialty.
License
Copyright (c) 2023 Eugene K Choo, Rob Woods, Mary Ellen Walker, Jennifer M O'Brien, Teresa M Chan
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.