Who Wrote This?

Detecting Artificial Intelligence–Generated Text from Human-Written Text

Authors

DOI:

https://doi.org/10.55016/ojs/cpai.v7i1.77675

Keywords:

artificial intelligence, generative AI, GenAI, KMR, academic integrity, detection, Canada

Abstract

This article explores the impact of artificial intelligence (AI) on written compositions in education. The study examines participants’ accuracy in distinguishing between texts written by humans and those produced by generative AI (GenAI). It challenges the assumption that the listed author of a paper is the person who wrote it, which has implications for formal educational systems. If GenAI text becomes indistinguishable from human-written text to a human instructor, marker, or grader, the authenticity of submitted work comes into question. This is particularly relevant in post-secondary education, where academic papers are central to assessing students’ learning, application, and reflection. In the study, 135 participants were each randomly presented with two passages in a single session. The passages addressed the topic “How will technology change education?” and were drawn from one of three pools according to their origin: written by the researchers, generated by AI, or searched for and copied from the internet. Participants identified human-written texts with 63% accuracy but identified AI-generated compositions with only 24% accuracy. The study also had limitations, including a small sample size and the use of a predecessor of current GenAI software. Overall, this study highlights the potential impact of AI on education and the need for further research comparing AI-generated and human-written text.

Author Biographies

Rahul Kumar, Brock University

Dr. Rahul Kumar is an Assistant Professor in the Department of Educational Studies at Brock University. His scholarship is primarily in the field of higher education. He has also written in the fields of educational technologies, artificial intelligence, quality of education at the post-secondary level, and international education.

Michael Mindzak, Brock University

Dr. Michael Mindzak is an Assistant Professor in the Department of Educational Studies at Brock University. His scholarship is primarily in the field of labour relations. He has also written in the fields of educational technologies, artificial intelligence, and the nature of the work of the professoriate.

Published

2024-01-10

How to Cite

Kumar, R., & Mindzak, M. (2024). Who Wrote This? Detecting Artificial Intelligence–Generated Text from Human-Written Text. Canadian Perspectives on Academic Integrity, 7(1). https://doi.org/10.55016/ojs/cpai.v7i1.77675

Issue

Vol. 7 No. 1 (2024)

Section

Peer-reviewed Articles
