António Pedro Costa, University of Aveiro (Portugal)
Researcher at the Center for Research in Didactics and Technology in Trainer Training (CIDTFF), Department of Education and Psychology, University of Aveiro and collaborator at the Laboratory of Artificial Intelligence and Computer Science (LIACC), Faculty of Engineering, University of Porto.
One of the great difficulties faced by master’s and doctoral students is writing texts with scientific rigor and quality. With the emergence of Artificial Intelligence (AI) writing assistants, students can bridge this gap by using this new technology ethically and honestly. It is, therefore, crucial to establish guidelines that can raise awareness of students’ academic integrity and distinguish “AI-assisted writing” from “AI-generated writing”.
AI-assisted writing should only be allowed if the student uses an AI writing assistant as a collaborative tool to help them develop and advance their own process. Collaboration with an AI writing assistant can include brainstorming, outlining, and drafting, provided, for example, that the writing, research, and composition are substantially the student's own and not generated solely by AI.
“AI-generated writing” means that there has been little or no involvement of the student as an author, with most of the writing being produced by Artificial Intelligence, whereas “AI-assisted writing” aims to help users develop their writing and critical thinking processes, not replace them. Therefore, using AI to generate writing or compositions without substantial original contributions from the author is neither acceptable nor permitted.
Using Artificial Intelligence in the academic writing process places us in a delicate balance between enriching potential and integrity.
With the proliferation of Generative AI tools, many scientific conferences and publications are already anticipating the possible impact of AI in their sector by imposing rules on authors about the use of these tools. For example, the journal New Trends in Qualitative Research (NTQR) already includes in its statement of ethics and good practice the proviso that “The use of AI systems to generate text is only permitted if their role is properly documented in the article (e.g. by reporting experiences with such systems). However, using AI-powered systems to help polish human-authored text is permitted.”
Establishing precise and detailed guidelines (see table below) on using AI in academic writing is imperative. It will help guide authors concerning the integrity of scientific work, clearly delineating acceptable practices from reprehensible academic behavior. However, even with a well-defined regulatory framework, intentional infractions are still possible and, therefore, require a transparent system of surveillance and penalties that can detect and discourage such deviations before they tarnish the collective body of academic knowledge.
In this sense, universities and publishing institutions need not only to enforce their policies through careful and constant checks but also to promote an ethical culture that naturalizes authenticity and repels copying so that the constant emergence of new tools does not serve as a pretext for malicious fraud under the guise of innovation.
| Acceptable | Not Acceptable |
|---|---|
| **AI-Assisted Writing** | **AI-Generated Writing** |
| Use AI-assisted writing to brainstorm | Cheat on the writing and research process |
| Explore new topics/ideas with AI-assisted writing | Generate large chunks of text with little or no input from the author |
| Use AI-assisted writing to explore potential counterarguments/opposing points of view | Trust something the AI has generated at face value |
| Review your writing by taking suggestions from your AI assistant to make improvements | Use AI-generated text as a substitute for research or critical thinking |
To conclude this short reflection, the text output generated by AI may contain material that is offensive, biased, false, misleading, or potentially harmful, or that otherwise goes against academic ethics; the use of such content may fall outside the protections of academic freedom and/or freedom of expression.
Authors should carefully review all results generated by Artificial Intelligence before using AI suggestions in their academic work. This procedure may require in-depth digital competencies (knowing how to use these tools from a technical point of view, for example, how to write instructions, or prompts) as well as transversal competencies such as critical thinking.
In summary, using Artificial Intelligence in the academic writing process places us in a delicate balance between enriching potential and integrity. By defining ethical criteria, we safeguard the originality and value of the individual contribution. This requires an active commitment to developing the critical and technical capabilities needed to handle AI tools appropriately in research: AI must be seen as a modern ally, one that does not dispense with the wise judgment and human touch that only thoughtful, conscientious authors can provide.
Note: Part of this text was adapted from Marc Watkins, Academic Innovation Fellow and Professor of Writing and Rhetoric, Director of the AI Institute for Writing Teachers at the University of Mississippi. He co-chairs his department’s AI working group and liaises with other departments on campus, exploring the impact of generative AI on teaching and learning. He blogs about AI and education at Rhetorica. Small parts of this text, suitably refined, were also adapted from a friendly conversation with the software lex.page. Lex is an AI-powered word processor that works in much the same way as a traditional Google Doc, with the added feature of allowing users to call on AI assistance whenever necessary. It uses the same model that powers ChatGPT.