Superficially Plausible Outputs from a Black Box: Problematising GenAI Tools for Analysing Qualitative SoTL Data
DOI: https://doi.org/10.20343/teachlearninqu.13.4
Keywords: Generative AI, qualitative analysis, research methods, academic development
Abstract
Generative AI (GenAI) tools are increasingly used for academic tasks, including qualitative data analysis for the Scholarship of Teaching and Learning (SoTL). In our practice as academic developers, we are frequently asked whether this use of GenAI is reliable, valid, and ethical. Since this is a new field, we have not been able to answer confidently based on the published literature, which contains both very positive and highly cautionary accounts. To fill this gap, we experimented with chatbot-style GenAI tools (namely ChatGPT 4, ChatGPT 4o, and Microsoft Copilot) to support or conduct qualitative analysis of survey and interview data from a SoTL project that had previously been analysed by experienced researchers using thematic analysis. At first sight the output looked plausible, but the results were incomplete and not reproducible. In some instances, the tools interpreted and extrapolated from the data even when the prompt clearly stated that they should only analyse a specified dataset following explicit instructions. Since both the algorithms and the training data of these GenAI tools are undisclosed, it is impossible to know how the outputs were arrived at. We conclude that while results may look plausible initially, digging deeper soon reveals serious problems; the lack of transparency about how analyses are conducted and results are generated means that no reproducible method can be described. We therefore warn against uncritical use of GenAI in the qualitative analysis of SoTL data.
License
Copyright (c) 2025 Mirjam Sophia Glessmer, Rachel Forsyth

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.