STEM LibGuide Resources: Citing ChatGPT

Introduction

ChatGPT is a new resource, and citation standards are not yet available from sources such as APA, CSE, MLA, etc. This page collects suggestions for how to cite ChatGPT until official standards become available.

Organizations/Publishers included in this guide: APA - arXiv - IEEE - JAMA Network - MLA - Nature - OpenAI - Science - Taylor & Francis

Note regarding false citations: A UCSB librarian tested ChatGPT's ability to find citations. While it gave her citations on her topic, they were all fake. Another UC librarian stated, "It gave me some great citations -- except they're all made up! Real scholars, real journals, but articles that don't exist, and mostly the authors don't even work on this topic." See also the blog post, "Did ChatGPT Just Lie to Me?"

OpenAI

OpenAI, the company that produces ChatGPT, has its own standards for use:

This sharing and publications policy covers:

  • Social media, livestreaming, and demonstrations
  • Content co-authored with the OpenAI API
  • Research policy

Link: https://openai.com/api/policies/sharing-publication/
Date: November 14, 2022

APA

In an email an APA style expert said:

The APA Style team is currently collecting feedback about citing, quoting, and using ChatGPT and fielding related questions about large language models so that we can construct official guidelines about how to document their usage. What follows is some interim guidance in response to your question, but it should not be considered the final word. Please keep an eye on the APA Style website and the APA Style Blog for a more definitive and detailed update and guidelines on this topic.

Because the purpose of references is to direct readers to the specific sources that a writer used, hopefully the text that ChatGPT generates in any particular chat can be saved, is shareable, or is otherwise retrievable. If so, the reference format in Section 10.10 (Software) can be used, with the company (“OpenAI”) as author, not “ChatGPT.” If the chat has no title, a description in square brackets (that ideally includes information on what prompts were used) would be created. That would give us the following:   

OpenAI. (2023, January 17). [ChatGPT response to a prompt about three prominent themes in Emily Dickinson’s poetry]. https://chat.openai.com/..... 

If the text that ChatGPT generates is not retrievable or sharable, then it falls into our catch-all “personal communication” category, where you cite with an in-text only citation: “(OpenAI, personal communication, January 16, 2023).” However, this is not an entirely satisfactory option, especially because the technology is so new, so both students and instructors are learning about this resource and how to ethically use it. Consider, then, making the ChatGPT conversation retrievable by including the text as an appendix or as online supplemental material. If you do so, then readers may be referred to the appendix or the online supplemental material (where the ChatGPT response may be contextualized) when the ChatGPT conversation is cited. It would be good practice to describe, in the narrative or a note, the prompt that generated the specific ChatGPT response. This too will help inform the understanding of the technology and its outputs.

If you have further insights or comments that you would like to be considered by the APA Style team as it works on its ChatGPT guidance, you may send those comments to this email address or participate in the Twitter thread.

Date: January 25, 2023

arXiv

arXiv states its policy on Authors' Use of Generative AI Language Tools:

We 

  1. continue to require authors to report in their work any significant use of sophisticated tools, such as instruments and software; we now include in particular text-to-text generative AI among those that should be reported consistent with subject standards for methodology.
  2. remind all colleagues that by signing their name as an author of a paper, they each individually take full responsibility for all its contents, irrespective of how the contents were generated. If generative AI language tools generate inappropriate language, plagiarized content, errors, mistakes, incorrect references, or misleading content, and that output is included in scientific works, it is the responsibility of the author(s).
  3. note that generative AI language tools should not be listed as an author; instead, authors should refer to (1).

Link: https://blog.arxiv.org/2023/01/31/arxiv-announces-new-policy-on-chatgpt-and-similar-tools/
Date: January 31, 2023

IEEE

In a personal email, an IEEE representative said:

  • AI-generated material is not considered a valid reference and should not be cited nor included as a reference.

Date: January 26, 2023

JAMA Network

JAMA Network has policies relating to non-human authors and to:

  • Author responsibilities
  • Reproduced and Re-created material
  • Image integrity

Their guide also states that the JAMA Network journals have policies for reporting use of statistical analysis software.

Link: https://jamanetwork.com/journals/jama/fullarticle/2801170
Date: January 31, 2023

MLA

MLA has a page called "How do I cite artificial intelligence?" It does not deal directly with ChatGPT.

Link: https://style.mla.org/citing-artificial-intelligence/
Date: February 6, 2019

Nature

An article in Nature offers guidance:

  • Tools such as ChatGPT threaten transparent science; here are our ground rules for their use.

The submission guidelines for Nature include this:

  • Authors. Corresponding author(s) should be identified with an asterisk. Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria. Notably an attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs. Use of an LLM should be properly documented in the Methods section (and if a Methods section is not available, in a suitable alternative part) of the manuscript.

Link: https://www.nature.com/articles/d41586-023-00191-1
Date: January 24, 2023

Science

In an editorial, the editor of Science stated:

  • For years, authors at the Science family of journals have signed a license certifying that “the Work is an original” (italics added). For the Science journals, the word “original” is enough to signal that text written by ChatGPT is not acceptable: It is, after all, plagiarized from ChatGPT. Further, our authors certify that they themselves are accountable for the research in the paper. Still, to make matters explicit, we are now updating our license and Editorial Policies to specify that text generated by ChatGPT (or any other AI tools) cannot be used in the work, nor can figures, images, or graphics be the products of such tools. And an AI program cannot be an author. A violation of these policies will constitute scientific misconduct no different from altered images or plagiarism of existing works. Of course, there are many legitimate data sets (not the text of a paper) that are intentionally generated by AI in research papers, and these are not covered by this change.

Link: https://www.science.org/doi/10.1126/science.adg7879
Date: January 27, 2023

Taylor & Francis

Taylor & Francis has clarified that:

AI tools must not be listed as an author. Authors must, however, acknowledge all sources and contributors included in their work. Where AI tools are used, such use must be acknowledged and documented appropriately.

Link: https://newsroom.taylorandfrancisgroup.com/taylor-francis-clarifies-the-responsible-use-of-ai-tools-in-academic-content-creation/
Date: February 17, 2023

Additional Suggestions

Librarians discussing this issue have suggested:

  • Include in brackets something like [Artificially generated text] or [AI generated text].
  • Create a Google Doc that shows the original text input to the AI bot plus the original text output. This Google Doc would then be an additional link in the AI citation, showing both the URL for ChatGPT and the link to the supplementary document.

This article in Nature discusses the issue:

ChatGPT listed as author on research papers: many scientists disapprove. (January 18, 2023) https://www.nature.com/articles/d41586-023-00107-z

The article states "Publishers and preprint servers contacted by Nature’s news team agree that AIs such as ChatGPT do not fulfil the criteria for a study author, because they cannot take responsibility for the content and integrity of scientific papers. But some publishers say that an AI’s contribution to writing papers can be acknowledged in sections other than the author list."


Copyright © 2008-2019 The Regents of the University of California, All Rights Reserved.
UCSB Library (805) 893-2478 • Music Library (805) 893-2641 • UCSB, Santa Barbara, CA 93106-9010
Contact Us • Policies