Providing quality feedback to general internal medicine residents in a competency-based assessment environment

  • Laura Marcotte
  • Rylan Egan
  • Eleftherios Soleas, Queen's University, Faculty of Health Sciences
  • Nancy J Dalgarno
  • Matthew Norris
  • Christopher A Smith

Abstract

Construct: Competency-Based Medical Education (CBME) is designed to use workplace-based assessment (WBA) tools to provide observed assessment of, and feedback on, resident competence. Moreover, WBAs are expected to provide evidence beyond that of more traditional mid- or end-of-rotation assessments [e.g., In-Training Evaluation Records (ITERs)]. In this study we investigate competence in General Internal Medicine (GIM) by contrasting WBA and ITER assessment tools.

Background: WBAs are hypothesized to improve and differentiate written and numerical feedback to support the development and documentation of competence. In this study we investigate residents’ and faculty members’ perceptions of WBA validity, usability, and reliability, and the extent to which WBAs differentiate residents’ performance when compared with ITERs.

Approach: We used a mixed-methods approach over a three-year period, combining perspectives gathered from focus groups and interviews with numerical and narrative comparisons between WBAs and ITERs in one GIM program.

Results: Residents indicated that the narrative component of feedback was more constructive and effective than numerical scores. They perceived that the specific, workplace-based feedback of WBAs was more effective than that of ITERs. However, quantitative analysis showed that overall rates of actionable feedback, across both ITERs and WBAs, were low (26%), with only 9% providing an improvement strategy. The provision of quality feedback did not differ significantly between tools; although WBAs provided more actionable feedback, ITERs provided more improvement strategies. Statistical analyses also showed that more than half of all assessments came from 11 core faculty.

Conclusions: Participants in this study viewed narrative, actionable, and specific feedback as essential, with an overall preference for written feedback over numerical assessments. However, quantitative analyses showed that specific, actionable feedback was rarely documented, despite qualitative emphasis from both groups on its importance for developing competency. Neither formative WBAs nor summative ITERs clearly provided better feedback, and both may still have a role in overall resident evaluation. Participants’ views on roles and responsibilities also differed: residents stated that faculty should be responsible for initiating assessments, and faculty said the reverse. These results reveal a disconnect between resident and faculty perceptions and practice around giving feedback, and highlight opportunities for programs adopting and implementing CBME to address how best to support residents and frontline clinical teachers.
Published: 2019-10-24

Section: Major Contributions