
Best Practice for Literature Searching

For the sciences of food and health

This page of the guide has been written by IFIS Publishing's Katy Askew, and was added in August 2025.

Using AI as a literature searching tool

AI tools can be helpful when conducting literature searches as part of a comprehensive literature search strategy. But remember: never accept an AI output at face value.

Below, we have outlined some key considerations when using AI tools for literature searching. 

Important! The technology behind generative AI tools and the research community's understanding of its responsible use in academic research are constantly evolving. Always refer to the plagiarism and academic integrity policies at your institution for specific requirements.

When to use AI for literature searching

Consider whether AI use is right for each stage of your literature searching strategy. Generative AI can be great for getting a quick overview, generating ideas, or brainstorming keywords. It can be useful at the beginning of your literature review process to help you define your search question and obtain a broad view of the topic and existing literature.

However, for the more in-depth literature searching stage, or when finding specific evidence, a curated database is indispensable. Use AI if it helps, but always follow up with trustworthy sources from the library.


This table outlines the key elements and differences between generative AI tools and curated databases. By understanding these differences, you can ensure that you are using the right resource for the task.

 

Generative AI (e.g. ChatGPT, Perplexity, Claude)  |  Curated Database (e.g. FSTA, NutriHealth)

Scope of information
  Generative AI:
  • Broad, general knowledge drawn from the open web.
  • May include non-scholarly content.
  • No access to paywalled information.
  Curated database:
  • Focused scholarly content in food, nutrition and health sciences.
  • 50+ years of coverage.

Quality control
  Generative AI:
  • No vetting of sources.
  • Risk of algorithmic / training bias favouring particular perspectives.
  • Does not exclude predatory or unreliable sources.
  Curated database:
  • All sources vetted by food and information scientists.
  • Indexed journals must pass a comprehensive predatory assessment.
  • Sources are continuously monitored for signs of hijacking.

Source transparency
  Generative AI:
  • Provides answers with little or no citation.
  • Can generate fake references.
  Curated database:
  • Every result is a cited publication.
  • Full bibliographic details are provided.

Search method
  Generative AI:
  • Natural language Q&A: easy to ask, but without a controlled vocabulary it may miss nuanced terms and oversimplify.
  Curated database:
  • Controlled vocabulary and expert indexing enable precise, comprehensive discovery.

Comprehensive coverage?
  Generative AI:
  • Some generative AI tools have knowledge cut-offs and no access to paywalled content, so they can miss the latest research.
  Curated database:
  • A&I databases such as FSTA and NutriHealth are updated weekly, adding new articles from both open access and subscription journals.

Transparent use of AI tools

It is important to check the plagiarism and academic integrity policies at your institution for specific requirements. It is generally recommended that you are fully transparent about the AI tools you have used.

Using Generative AI without acknowledgement is usually treated as plagiarism, the same as copying someone else’s work. If your instructor allows AI for certain parts of the work (like brainstorming or first drafts), you must still disclose that use and cite any content from it. Presenting AI-generated text as your own work (or letting it fabricate sources for you) violates academic integrity. Universities and libraries are emphasising that students remain responsible for the work they submit. 

If you do use a Generative AI tool, get permission if required, use it only as allowed, and always give proper credit. If you’re ever unsure, ask your instructor or a librarian for guidance. It’s better to be safe and transparent than risk your academic reputation.

Best practice!

BEST PRACTICE RECOMMENDATION: Keep a research log of any AI queries or content you use during an assignment, just as you would note database searches or article sources. This provides documentation in case questions arise later about originality.
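One lightweight way to keep such a log is a simple spreadsheet or CSV file alongside your database search records. The sketch below is purely illustrative (the file name and column names are our own, not an institutional requirement); adapt the fields to whatever your institution asks you to document.

```python
import csv
from datetime import date
from pathlib import Path

# Illustrative columns -- adjust to your institution's documentation requirements.
FIELDS = ["date", "tool", "prompt", "how_output_was_used"]

def log_ai_query(logfile, tool, prompt, how_output_was_used):
    """Append one AI query to a CSV research log, writing a header
    row first if the file does not exist yet."""
    path = Path(logfile)
    is_new = not path.exists()
    with path.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "tool": tool,
            "prompt": prompt,
            "how_output_was_used": how_output_was_used,
        })

# Example entry: brainstorming keywords before running a proper database search.
log_ai_query(
    "research_log.csv",
    "ChatGPT",
    "Suggest keywords for a search on ultra-processed food and gut health",
    "Brainstorming only; keywords then searched in FSTA",
)
```

A dated, append-only log like this gives you a clear record to point to if questions about originality arise later.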

Citing AI-generated content

When citing an answer from an AI tool you should do so in the same way you would cite a personal communication or an unarchived source, unless your style guide provides specific instructions. For instance:

  • APA guidelines (7th edition) currently suggest citing ChatGPT in-text as a personal communication (e.g., OpenAI, personal communication, date) rather than in the reference list. This is because a ChatGPT response isn’t recoverable by others (they can’t look up your specific chat). 
  • In MLA style, AI tools are not treated as authors but as sources of information. When using AI-generated content, you should cite it in your work, including the prompt, the AI tool name, version, and the date of the response.
  • Under Chicago style guidelines, treat the AI tool as the author, and include details like the prompt used, the date of generation, the AI's developer, and a URL (if available and accessible). 

Always check for updated guidelines, as formal citation rules for AI are new and evolving. And remember, if you quote text that the AI wrote, you should put it in quotation marks and clarify in your paper that it came from an AI (just as you would quote and attribute any author).

Best practice!

BEST PRACTICE RECOMMENDATION: Treat ChatGPT as a source that needs acknowledgement – never just paste its output into your essay without citation.

Critically evaluate any AI output

AI tools can produce information that is incomplete, biased, or even incorrect. Consider these five questions to critically evaluate any answer you get from an AI tool before relying on it in your academic work.

  • What was the AI’s source for this information? Did it provide a citation, a link, or any reference to where it got this information? If there is no clear source, be sceptical. AI often doesn’t disclose where its statements come from.
  • Is there evidence or a citation provided for this claim? Look for specific data, studies, or publications that support the statements. If the AI offers a citation, try to locate it in your library’s databases or Google Scholar to verify that it’s real and says what the AI claims.
  • How current is the information and could it be outdated? Many AI models were trained on old data and may not ‘know’ about the most recent research. This matters especially for rapidly evolving fields like health, nutrition, or technology.
  • Could this answer be biased or incomplete? AI reflects the patterns and biases in the data it was trained on, which means it might emphasise popular or Western-centric viewpoints, or overlook important nuances. Consider whether the answer feels one-sided or lacks multiple perspectives.
  • Have I verified this information in a reliable source? Always cross-check AI outputs with trusted library resources, scholarly databases, or peer-reviewed articles. If you can’t confirm it with credible evidence, think twice before including it in your paper.
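The five questions above can be treated as a simple pass/fail checklist: if any question fails, the claim needs more verification before it goes into your work. This small sketch (the function and list names are our own, for illustration only) shows that idea as code:

```python
# The five evaluation questions, condensed to yes/no checks.
CHECKLIST = [
    "Did the AI provide a source or citation?",
    "Is there evidence (data, studies) supporting the claim?",
    "Is the information current enough for the field?",
    "Is the answer free of obvious bias or one-sidedness?",
    "Have you verified the claim in a reliable library source?",
]

def evaluate_ai_claim(answers):
    """Given one True/False answer per checklist question, return the
    questions that failed; an empty list means the claim passed."""
    if len(answers) != len(CHECKLIST):
        raise ValueError("Answer every checklist question")
    return [q for q, ok in zip(CHECKLIST, answers) if not ok]

# A claim with no citation and no library verification fails two checks:
failed = evaluate_ai_claim([False, True, True, True, False])
```

Anything returned by the function is a reason to go back to a library database before citing the claim.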

Follow the steps in this flowchart for a simple, concrete method that you can use to assess the reliability of facts and citations given by generative AI tools such as ChatGPT, Perplexity, CoPilot and many more.

Remember: Generative AI tools are helpful starting points, but they are not authoritative sources. Always bring your own critical thinking and when in doubt, check it out in a library database!