
The Ethics of Using ChatGPT in Education and Academia

Illustration by Alexandra_Koch / Pixabay

There are major potential benefits, but pitfalls too, in the use of AI and Large Language Models such as ChatGPT in education and academia. A balance must be struck in how this use is regulated.

The rapid advancement of artificial intelligence (AI), especially Large Language Models (LLMs) such as ChatGPT, has ushered in a new era of possibilities across various sectors, including education and academia.

ChatGPT (Chat Generative Pre-Trained Transformer) is an AI chatbot developed and trained by OpenAI, a research organisation focused on advancing AI, and launched in November 2022. It uses deep learning techniques to generate human-like text based on the input provided.

Younger people tend to adopt new technologies more readily. AI technology therefore provides a great opportunity, especially for younger students and researchers, to learn and to increase their productivity and the quantity and quality of their research output.

The potential applications of ChatGPT in education are vast, ranging from helping with tests and essays to providing personalised tutoring. This technology can better meet students’ learning needs, improving their efficiency and grades. ChatGPT can also help teachers plan lessons and grade papers and stimulate student interests.

ChatGPT has become an attractive tool for various applications in the academic world. It can generate ideas and hypotheses for research papers, create outlines, summarise papers, draft entire articles, and help with editing.

These capabilities significantly reduce the time and effort required to produce academic work, potentially accelerating the pace of scientific discovery or overcoming writer’s block, a common problem many academics face.

ChatGPT, and LLMs like it, can assist researchers in various tasks, including data analysis, literature reviews, and writing research papers. One of the most significant advantages of using ChatGPT in academic research is its ability to analyse large amounts of data quickly. These tools can process texts with extraordinary success, often producing output that is indistinguishable from human writing.

Yet these LLMs have limitations, such as brittleness [susceptibility to catastrophic failure], unreliability [false or made-up information], and an occasional inability to make elementary logical inferences or deal with simple mathematics, which reveal a decoupling of agency and intelligence.

But is ChatGPT a replacement for human authorship and critical thinking, or is it merely a helpful tool?


Photo by EPA/RITCHIE B. TONGO

Plagiarism, copyright, and integrity

While ChatGPT has the potential to revolutionise the way we approach education and research, its use in these fields brings many ethical issues and challenges that need to be considered. These concern plagiarism, copyright, and the integrity of academic work. ChatGPT can produce medium-quality essays within minutes, blurring the lines between original thought and automated generation.

First and foremost is the issue of plagiarism, as the model may generate text identical or similar to existing text. Plagiarism is presenting someone else's work as one's own, whether by copying it or merely rephrasing it without adding anything original.

Since ChatGPT generates text based on a vast amount of data from the Internet, there is a risk that the tool may inadvertently produce text that closely resembles existing work. Students may be tempted to use text produced by ChatGPT verbatim in their work.

This raises questions about the originality of the work produced using ChatGPT and whether it constitutes plagiarism. It is difficult to ascertain the extent of the contribution made by the AI tool versus the human researcher, which further complicates the issue of authorship, credit, and intellectual property. A related concern is that using ChatGPT may lessen critical thinking and creativity.

Plagiarism, however, predates AI, as Serbia knows. Several cases involving public officials have come to light in recent years, before ChatGPT, including plagiarism of a PhD thesis that was copied from other people’s work.

Another ethical concern relates to copyright infringement. If ChatGPT generates text that closely resembles existing copyrighted material, using such text in an academic article could potentially violate copyright laws.

Using ChatGPT or similar LLMs thus becomes both a moral and a legal issue. The absence of legislation specifically regulating the use of Generative AI poses a significant challenge to its application in practice.

Using text-generating tools in scholarly writing presents challenges to transparency and credibility. Universities, journals, and institutes must revise their policies on acceptable tools.


Photo by EPA/RITCHIE B. TONGO

To ban or not to ban?

Given the concerns raised by academics globally, many schools and universities have banned ChatGPT, although students use it anyway. Others advocate not banning ChatGPT but using it carefully and teaching with it, since cheating with one tool or another is inevitable.

Further, significant questions about copyright have emerged, especially given the broad application of ChatGPT in academic spheres, content creation, and its use by students for completing academic tasks. The questions are: Who holds the intellectual property rights for the content produced by ChatGPT, and who would be liable for copyright violation?

Many educational institutions have already prohibited the use of ChatGPT, while prominent publishers such as Elsevier and Cambridge University Press permit the use of chatbots in academic writing. However, comprehensive guidelines for using AI in science have yet to be provided.

The use of AI tools such as ChatGPT in academic research is currently a matter of debate among journal editors, researchers, and publishers. There is an ongoing discussion about whether citing ChatGPT as an author in published literature is appropriate.

It is also essential for academic institutions and publishers to establish guidelines and policies for using AI-generated text in academic research. Governments and relevant agencies should develop corresponding laws and regulations to protect students’ privacy and rights, ensuring that the application of AI technology complies with educational ethics and moral standards.

The legislative procedure in the EU is still ongoing, and there are estimates that it will take years before the regulations begin to be implemented in practice. Legislation that would regulate the application of ChatGPT in practice, especially in academia in Serbia, also does not exist.

Recently, researchers have been caught copying and pasting text directly from ChatGPT into peer-reviewed papers published in prominent scientific journals, forgetting to remove its telltale phrase ‘As an AI language model…’.

For example, in a paper titled Quinazolinone: Pharmacophore with Endless Pharmacological Actions, published in the European International Journal of Pedagogics, the authors pasted into the Methods section ChatGPT’s answer “as an AI language model; I don’t have access to the full text of the article…”

This has also been the case in some PhD and MA theses.


Photo by EPA-EFE/RONALD WITTEK

Need for guidelines

Emerging in response to the challenge of plagiarism are AI-text detectors, software specifically designed to detect content generated by AI tools.

To address these concerns regarding plagiarism, some scientific publishers, such as Springer Nature and Elsevier, have established guidelines to promote the ethical and transparent use of LLMs. These guidelines advise against crediting LLMs as authors on research papers since AI tools cannot take responsibility for the work. Some guidelines call for the use of LLMs to be documented in their papers’ methods or acknowledgments sections.

To prevent plagiarism using ChatGPT or other AI language models, it is necessary to educate students on plagiarism, what it is, and why it is wrong; to use plagiarism detection tools; and to set clear guidelines for the use of ChatGPT and other resources.

To ensure accountable use of this AI model, it is essential to establish guidelines, filters, and rules that prevent misuse of the model and the generation of unethical content.

Despite the concerns mentioned above, before discussing whether AI tools such as ChatGPT should be academically banned, it is necessary to examine the challenges currently faced by education, and the significant impact and benefits of using ChatGPT in education.

The application of ChatGPT raises various legal and ethical dilemmas. What we need are guidelines, policy and regulatory recommendations, and best practices for students, researchers, and higher education institutions.

A tech-first approach, relying solely on AI detectors, has potential pitfalls. Mere reliance on technological solutions can inadvertently create an environment of suspicion and shift the focus from fostering a culture of integrity to one of surveillance and punishment. This underscores the importance of establishing a culture in which academic honesty is valued intrinsically and not just enforced extrinsically.

Striking the right balance between leveraging the benefits of ChatGPT and maintaining the integrity of the research process will be vital to navigating the ethical minefield associated with using AI tools in academia.

Marina Budić is a Research Assistant at the Institute of Social Sciences in Belgrade. She is a philosopher (ethicist) whose work covers the ethics of AI, applied and normative ethics, and bioethics.
