Searching for significance in the scholarship of teaching and learning and finding none: Understanding non-significant results

Authors

  • April McGrath, Mount Royal University

DOI:

https://doi.org/10.20343/teachlearninqu.4.2.12

Keywords:

Null Hypothesis Significance Testing, Statistical Power, Non-significant Effect

Abstract

Quantitative results from empirical studies are common in the Scholarship of Teaching and Learning (SoTL), but it is important to remain aware of what the results of our studies can, and cannot, tell us. Studies of teaching and learning are often constrained by class size, and small samples reduce statistical power, making non-significant results more likely. When a study yields non-significant results, it is important to consider what conclusions can be drawn from it. This article provides information on null hypothesis significance testing relevant to understanding non-significant results, and it highlights the importance of recognizing underpowered studies in the teaching and learning literature. Factors that can contribute to non-significant findings are also discussed. Awareness of these factors, of statistical power, and of the logic of significance testing will put scholars in a better position to evaluate non-significant results in their own research and that of others.
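The link between class size and power can be made concrete with a short sketch. The helper below is illustrative, not from the article: it uses the standard normal approximation to the power of a two-sided, two-sample t test at alpha = .05. With Cohen's (1992) "medium" effect of d = .5, two classes of 25 students each leave the study with well under 50% power, while roughly 64 students per group brings power near the conventional .80 benchmark.

```python
import math

def normal_cdf(x):
    """Standard normal CDF computed from the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def approx_power(d, n_per_group):
    """Approximate power of a two-sided, two-sample t test at alpha = .05,
    using the normal approximation to the noncentral t distribution."""
    z_crit = 1.96                              # two-sided critical value at alpha = .05
    delta = d * math.sqrt(n_per_group / 2.0)   # noncentrality: effect size scaled by n
    return normal_cdf(delta - z_crit)          # chance the test statistic clears z_crit

# A "medium" effect (d = .5) compared across two classes of 25 students each:
print(round(approx_power(0.5, 25), 2))   # well below .5 -- worse than a coin flip
# The same effect with about 64 students per group (Cohen's 1992 benchmark):
print(round(approx_power(0.5, 64), 2))   # close to the conventional .80
```

Under these assumptions, a non-significant result from the 25-per-class study says little about whether the teaching intervention works; the test simply had a poor chance of detecting a real, medium-sized effect.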

Author Biography

April McGrath, Mount Royal University

April McGrath is an Associate Professor in the Department of Psychology at Mount Royal University. She was a 2012 Nexen Scholar at the Institute for Scholarship of Teaching and Learning.

References

Aron, A., Coups, E. J., & Aron, E. N. (2013). Statistics for psychology (6th ed.). New York, NY: Pearson.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Cohen, J. (1990). Things I have learned (so far). American Psychologist, 45(12), 1304-1312.

Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155-159.

Cumming, G. (2014). The new statistics: Why and how. Psychological Science, 25, 7-29. doi: 10.1177/0956797613504966

Field, A. P. (2009). Discovering statistics using SPSS: (And sex and drugs and rock 'n' roll). Thousand Oaks, CA: SAGE Publications.

Frick, R. W. (1995). Accepting the null hypothesis. Memory and Cognition, 23, 132-138.

Goodman, S. (2008). A dirty dozen: Twelve P-value misconceptions. Seminars in Hematology, 45, 135-140. doi: 10.1053/j.seminhematol.2008.04.003

Grauerholz, L., & Main, E. (2013). Fallacies of SOTL: Rethinking how we conduct our research. In K. McKinney (Ed.), The scholarship of teaching and learning in and across the disciplines (pp. 152-168). Bloomington, IN: Indiana University Press.

Gurung, R. A. R., & Landrum, R. E. (2013). Bottleneck concepts in psychology: Exploratory first steps. Psychology Learning and Teaching, 12(3), 236-245.

Kline, R. B. (2004). Beyond significance testing: Reforming data analysis methods in behavioral research. Washington, DC: American Psychological Association.

Morling, B. (2015). Research methods in psychology: Evaluating a world of information. New York, NY: W. W. Norton & Company, Inc.

Roediger, H. L., & Butler, A. C. (2011). The critical role of retrieval practice in long-term retention. Trends in Cognitive Science, 15, 20-27. doi: 10.1016/j.tics.2010.09.003

Saville, B. K. (2013, February). Interteaching: Ten tips for effective implementation. Observer, 26(2).

Saville, B. K., Pope, D., Lovaas, P., & Williams, J. (2012). Interteaching and the testing effect: A systematic replication. Teaching of Psychology, 39, 280-283. doi: 10.1177/0098628312456628

Saville, B. K., Zinn, T. E., & Elliott, M. P. (2005). Interteaching vs. traditional methods of instruction: A preliminary analysis. Teaching of Psychology, 32, 161-163. doi: 10.1901/jaba.2006.42-05

Saville, B. K., Zinn, T. E., Neef, N. A., Van Norman, R., & Ferreri, S. J. (2006). A comparison of interteaching and lecture in the college classroom. Journal of Applied Behavior Analysis, 39, 49-61. doi: 10.1901/jaba.2009.42-369

Tomcho, T. J., & Foels, R. (2009). The power of teaching activities: Statistical and methodological recommendations. Teaching of Psychology, 36, 96-101. doi: 10.1080/00986280902739743

Tressoldi, P. E., Giofré, D., Sella, F., & Cumming, G. (2013). High impact = high statistical standards? Not necessarily so. PLoS ONE, 8, 1-7. doi: 10.1371/journal.pone.0056180

Trochim, W. M. (2006). Conclusion validity. The research methods knowledge base (2nd ed.).

Tversky, A., & Kahneman, D. (1971). Belief in the law of small numbers. Psychological Bulletin, 76, 105-110.

Wilson-Doenges, G., & Gurung, R. A. R. (2013). Benchmarks for scholarly investigations of teaching and learning. Australian Journal of Psychology, 65, 63-70. doi: 10.1111/ajpy.12011

Zuckerman, M., Hodgins, H. S., Zuckerman, A., & Rosenthal, R. (1993). Contemporary issues in the analysis of data: A survey of 551 psychologists. Psychological Science, 4, 49-53.

Published

2016-09-01

How to Cite

McGrath, April. 2016. “Searching for Significance in the Scholarship of Teaching and Learning and Finding None: Understanding Non-Significant Results”. Teaching and Learning Inquiry 4 (2):150-55. https://doi.org/10.20343/teachlearninqu.4.2.12.