This section of the guide was written by Natasha Spencer-Jolliffe of Lion Spirit Media and added in 2025.
How to use AI ethically and effectively in academic research is a complex and challenging question. It affects researchers, writers, publishing houses, educational institutions, students and the wider academic community.
At IFIS Publishing, we recognise that the technology is advancing rapidly and that the conversations around it are growing. As AI continues to develop, sound educational resources are essential to support researchers and writers throughout their academic research.
As a result, we have added a section about AI to our Best Practice guide to help steer effective literature searching while maintaining scientific integrity.
AI tools can support researchers, writers, and students during the academic research process, including the wordsmithing and research stages. The University of Arizona describes wordsmithing tasks as those that do not require search: they relate to generating and honing ideas and forming the building blocks of an academic research paper by focusing on writing level and style.
As highlighted in a Harvard Business Publishing article, AI in academic research can be a partner, not a replacement. The benefits of generative AI in the academic sector centre on its potential to make the writing process more efficient and to enable researchers to communicate their findings more clearly.
A 2024 systematic review found that AI can support academic writing and research by helping to manage complex ideas and comprehensive information. Specifically, it can help in six core ways:
Using AI tools to speed up parts of the writing and research process may also mean published work reaches the wider food industry more quickly, enabling fellow researchers to draw on the findings and start their own studies sooner, ultimately bringing more research into the publishing sphere at a faster rate. AI tools can also help researchers whose native language is not English to write and gather information more easily and efficiently.
Researchers can use AI during the research process for tasks such as:
Researchers can use AI during the ideation and writing process for tasks such as:
Whether an AI tool is grounded in fact-based sources is crucial when choosing one to support academic research. One way to assess a tool's credibility and validity for academic food research is to examine how extensive and objective the data it relies on is.
The University of Arizona notes that ChatGPT 4o mini and Claude 3.5 Sonnet, both free versions, are not grounded in fact-based sources, as they operate by relying solely on their training data.
When platforms or tools like these rely only on their training data, that data can quickly become outdated and restrictive, making the information they provide less reliable. ChatGPT 4o mini's training data extends to October 2023, while Claude 3.5 Sonnet's extends to April 2024.
Other accessible AI tools are grounded in fact-based sources, which means they can also use web search results or other types of search results alongside their AI-generated findings to provide more comprehensive insights on a particular area of food research.
ChatGPT Plus, ChatGPT 4o (available for limited use in free accounts), Perplexity AI (available in both free and pro versions), Microsoft Copilot (in free and pro versions) and Google Gemini (in free and pro versions) are examples of these available and grounded AI tools.
Many tools have free and pro versions. The free versions typically offer reduced functionality and impose usage limits, while the pro versions provide more extensive capabilities and higher or no usage caps.
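To make the distinction concrete, the sketch below shows, in simplified form, how a grounded tool can work: it retrieves current sources first and only then asks the model to answer from them. This is an illustrative sketch only, not the method of any of the tools named above; the search_web and ask_model functions are hypothetical placeholders.

```python
# Minimal sketch of how a "grounded" AI tool can work: retrieve current
# sources first, then pass them to the language model alongside the question.
# search_web() and ask_model() are hypothetical placeholders, not any real
# tool's API.

def search_web(query: str) -> list[dict]:
    """Placeholder for a live web or database search (hypothetical)."""
    return [
        {"title": "Example source on mycotoxin limits", "snippet": "...", "url": "https://example.org/a"},
        {"title": "Example review of detection methods", "snippet": "...", "url": "https://example.org/b"},
    ]

def ask_model(prompt: str) -> str:
    """Placeholder for a call to a generative AI model (hypothetical)."""
    return "Answer drafted from the supplied sources, with citations."

def grounded_answer(question: str) -> str:
    sources = search_web(question)
    # The retrieved snippets are placed in the prompt so the model's answer
    # is anchored to current, checkable sources rather than training data alone.
    context = "\n".join(f"- {s['title']} ({s['url']}): {s['snippet']}" for s in sources)
    prompt = (
        "Using only the sources below, answer the question and cite each source used.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return ask_model(prompt)

if __name__ == "__main__":
    print(grounded_answer("What are the current EU limits for aflatoxins in cereals?"))
```

Because the answer is tied to retrieved, checkable sources, its claims can be verified against those sources rather than taken on trust from the model's training data alone.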
In 2024, at the Special Libraries Association annual conference hosted at the University of Rhode Island, Brian Pitchman, Director of Strategic Innovation at Evolve Project, discussed AI’s new frontiers, including the challenges it’s likely to experience as it evolves.
One of these is described as “garbage in, garbage out”, emphasising the importance of precise, high-quality data. Ultimately, the essential rule of thumb is that the results of any AI-assisted work are only as good as the data put into it in the first place.
1. Accuracy of results and the challenge of generative inbreeding in AI content
A key downside of using AI in academic research and writing is that its output may lack accuracy. Tools are at risk of producing false references and other fictional information, a phenomenon known as artificial hallucination, which is a considerable concern. AI tools can also learn user biases and feed these back into their algorithms, with the potential to produce offensive material, including sexist and racist content.
Whether AI can detect AI is also a problem today because of the sheer amount of content AI generates. If AI tools draw on that content and populate the research sphere with even more AI-based content, it becomes difficult to know what is AI-generated and what is not. The term for this is generative inbreeding.
2. Unethical uses of AI tools in academic research and writing
With AI use rising rapidly following the arrival of its more advanced evolution, generative AI, academic institutions ranging from schools and colleges to universities and professional development organisations have concerns about the proliferation of the technology in education.
Now, peer-reviewed academic journals are also worried about the rate and level at which AI is being deployed to support researchers with writing—from creating research outlines and drafts to completing entire papers.
If work is not vetted by the publishing house, or if academics do not disclose the AI tools used in it, using those tools may be considered plagiarism. AI tools could also spread fake references and insights, producing an inaccurate and non-credible picture of the food research space. Failing to make clear exactly how and where AI has been used in a journal article is a further issue affecting academia.
3. Difficulty in detecting AI, restricting trust and credibility
AI increasingly appears in academic journal searches and writing, ultimately finding its way into final journal articles. However, while it has many uses, it is often hard to detect, limiting its acceptance and uptake in the research community. As a result, it risks eroding academia’s trust in the research process, and potentially in the findings and conclusions themselves, lowering their credibility.
Although ethical watchdogs investigate instances of generative AI use that make their way into scientific writing, there is as yet no detection method that matches AI’s sophistication.
In August 2023, the online publication WIRED drew attention to a peer-reviewed study in the academic journal Resources Policy, published by Elsevier, that contained the sentence: “Please note that as an AI language model, I am unable to generate specific tables or conduct tests, so the actual results should be included in the table.”
Apart from this sentence, the article looked like any other academic research paper. The study’s authors were listed with names and institutions, and the paper did not otherwise appear to have been produced using AI language models. After another researcher posted a screenshot of the sentence on X (formerly Twitter), Elsevier began investigating. In response, Elsevier highlighted its publishing ethics on X, referencing its rules on acceptable use and the required methods of disclosure.
While the publishing house does not prohibit the use of AI tools, it does require disclosure. Without disclosure, readers, including other researchers and the publishers themselves, cannot know whether and how AI was used in the writing and research process, leaving the methods behind the work obscured.
There is no standard definition of, or response to, the use of AI tools in academic research, making it harder for researchers to know the dos and don’ts of using AI tools to assist their work. Typically, journal policies stipulate that it is the author’s responsibility to ensure the validity of any information provided by AI.
The use of AI in academic research appears to be becoming a field of study in its own right. Guillaume Cabanac, a professor of computer science at the University of Toulouse, explored this subject in 2021. With his team, Cabanac identified several telltale signs of text generator and generative AI use in academic research, including “tortured phrases”: complicated or convoluted wording used in place of simple, established terminology.
AI detection tools are one way to counter unethical uses of AI and a lack of vetting or disclosure. In 2023, researchers described a tool that can review science writing and differentiate, with 99% accuracy, texts written by a human from those created by ChatGPT. Rather than building a “one-size-fits-all” approach, the researchers sought to develop an accurate tool focused on a narrow type of writing.
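As a rough illustration of the general idea, and not of the cited study’s actual method, a narrow-focus detector can be framed as an ordinary supervised text classifier trained on labelled examples from one type of writing. The short sketch below assumes scikit-learn is available; the example texts and labels are hypothetical placeholders.

```python
# Illustrative sketch only: a generic supervised text classifier of the kind
# that can separate human-written from AI-generated passages in a narrow
# domain. This is not the cited study's actual method, and the training
# texts below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = human-written, 0 = AI-generated.
texts = [
    "We measured aflatoxin B1 in maize samples using HPLC with fluorescence detection.",
    "The results, shown in Table 2, varied considerably between growing regions.",
    "As an AI language model, I can provide a general overview of mycotoxin analysis.",
    "In conclusion, this comprehensive analysis highlights the importance of food safety.",
]
labels = [1, 1, 0, 0]

# Word-level features plus a linear model: simple, but plausible when the
# classifier is trained on one narrow type of writing rather than all text.
detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

print(detector.predict(["Samples were extracted in acetonitrile before analysis."]))
```

Restricting the training data to one genre of scientific writing is the design choice that makes high accuracy plausible; the same classifier would not be expected to generalise to all text.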
As AI becomes ever more advanced, academia needs to prioritise education on AI tools, and on the opportunities and challenges associated with their use, to retain scientific integrity, trust and credibility.