Can We Fake Academic Integrity?

Authors

DOI:

https://doi.org/10.11575/cpai.v6i1.76916

Keywords:

academic integrity, artificial intelligence, research, reflection, Canadian Symposium on Academic Integrity

Résumé

In this presentation, Dr Thomas Lancaster will examine how far work on academic integrity can be faked. He will report on some of his ongoing research and experiments, much of which depends on the widespread availability of large language models such as ChatGPT. These models can generate original text and often combine ideas in a way that appears artificially intelligent, albeit different from how a human would approach a problem.

Thomas has developed several case studies of how artificial intelligence and machine learning systems can be used to generate assignment solutions, slides, computer programs, marketing materials and even academic research papers, amongst other areas. Although the ideas behind the generation can be applied to many academic disciplines, Thomas plans to share examples that are most relevant for the academic integrity community. Used as intended, artificial intelligence represents a powerful way to improve the quality of education and to better prepare students for the future. Such use also raises questions surrounding originality and authorship. Join Thomas to discover what is possible and to consider how to future-proof work in the academic integrity field.

Author Biography

Thomas Lancaster, Imperial College London

Senior Teaching Fellow

Department of Computing

Published

2023-07-31

How to Cite

Lancaster, T. (2023). Can We Fake Academic Integrity? Canadian Perspectives on Academic Integrity, 6(1). https://doi.org/10.11575/cpai.v6i1.76916

Issue

Vol. 6 No. 1 (2023)

Section

Canadian Symposium on Academic Integrity