The impact of systematically repairing multiple choice questions with low discrimination on assessment reliability: an interrupted time series analysis


  • Janeve Desy University of Calgary
  • Adrian Harvey University of Calgary
  • Sarah Weeks University of Calgary
  • Kevin D Busche University of Calgary
  • Kerri Martin University of Calgary
  • Michael Paget University of Calgary
  • Christopher Naugler University of Calgary
  • Kevin Mclaughlin University of Calgary



At our centre, we introduced a continuous quality improvement (CQI) initiative during academic year 2018-19 targeting multiple-choice question (MCQ) items with a discrimination index (D) < 0.1 for repair. The purpose of this study was to assess the impact of this initiative on the reliability/internal consistency of our assessments. Our participants were medical students during academic years 2015-16 to 2020-21, and our data were summative MCQ assessments from this period. Since the goal was to systematically review and improve summative assessments in our undergraduate program on an ongoing basis, we used interrupted time series analysis to assess the impact on reliability. Between 2015-16 and 2017-18 there was a significant negative trend in the mean alpha coefficient for MCQ exams (regression coefficient -0.027 [-0.047, -0.008], p = 0.024). In the academic year following the introduction of our initiative (2018-19) there was a significant increase in the mean alpha coefficient (regression coefficient 0.113 [0.063, 0.163], p = 0.010), which was then followed by a significant positive post-intervention trend (regression coefficient 0.056 [0.037, 0.075], p = 0.006). In conclusion, our CQI intervention resulted in an immediate and progressive improvement in the reliability of our MCQ assessments.
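The psychometric quantities named in the abstract can be illustrated with a short sketch. This is not the authors' code: the discrimination index is computed here with the common upper/lower-group method, Cronbach's alpha with the standard item-variance formula, and the interrupted time series model as a segmented regression with a pre-intervention slope, a level-change term, and a slope-change term. The `alpha_means` values and the 27% grouping fraction are illustrative assumptions, not the study's data.

```python
import numpy as np

def discrimination_index(responses, item, frac=0.27):
    """Upper-lower discrimination index for one item.

    responses: (n_students, n_items) matrix of 0/1 item scores.
    Compares the proportion correct in the top and bottom `frac`
    of students, ranked by total test score.
    """
    totals = responses.sum(axis=1)
    order = np.argsort(totals)              # ascending by total score
    k = max(1, int(frac * len(totals)))
    low, high = order[:k], order[-k:]
    return responses[high, item].mean() - responses[low, item].mean()

def cronbach_alpha(responses):
    """Cronbach's alpha (internal consistency) for a 0/1 score matrix."""
    n_items = responses.shape[1]
    item_vars = responses.var(axis=0, ddof=1).sum()
    total_var = responses.sum(axis=1).var(ddof=1)
    return n_items / (n_items - 1) * (1 - item_vars / total_var)

# Segmented (interrupted time series) regression:
#   alpha_t = b0 + b1*t + b2*post_t + b3*(time since intervention)
# where b1 is the pre-intervention trend, b2 the immediate level
# change, and b3 the change in slope after the intervention.
years = np.arange(6, dtype=float)          # 2015-16 ... 2020-21
post = (years >= 3).astype(float)          # intervention in 2018-19
time_since = np.where(post == 1, years - 3, 0.0)
alpha_means = np.array([0.78, 0.75, 0.72, 0.83, 0.89, 0.94])  # illustrative only
X = np.column_stack([np.ones_like(years), years, post, time_since])
b = np.linalg.lstsq(X, alpha_means, rcond=None)[0]
```

In practice a study such as this would fit the segmented model with inferential machinery (e.g. Stata's `itsa` or an OLS routine with Newey-West standard errors) to obtain the confidence intervals and p-values reported above; the least-squares fit here only shows the model's structure.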


How to Cite

Desy J, Harvey A, Weeks S, Busche KD, Martin K, Paget M, Naugler C, Mclaughlin K. The impact of systematically repairing multiple choice questions with low discrimination on assessment reliability: an interrupted time series analysis. Can Med Ed J [Internet]. 2024 Mar 12 [cited 2024 Jul 23];15(3):52-6.



Brief Reports