Recommendation for Labeling and Documenting AI-Generated Content
Document status: 2025-08-06

This recommendation describes how to label and document the use of artificial intelligence (AI) in the scientific writing process. Four citation styles (APA, Chicago, MLA, and DIN ISO 690) are presented as examples of how to label AI-generated text. In addition, special applications of AI in the writing process are named, along with how they can be documented. The principle of personal responsibility applies to compliance with legal requirements and to the maintenance of good scientific practice.

1. Principles

For a text to meet scientific standards, the principles of good scientific practice must be observed. This also applies explicitly to the use of AI tools. For the purposes of this recommendation, the term “AI tools” refers to digital tools that are based on artificial intelligence technologies and are used in the writing process of scientific texts.

1.1 Legal compliance

Scientific integrity requires acting within the legal framework. When using AI technology, it is essential to comply with the German Copyright Act (1), the Artificial Intelligence Act of the European Union (2), and the General Data Protection Regulation of the European Union (3). If applicable, examination regulations must also be followed.

1.2 Responsibility

In particular, when using AI-based tools, the user of the tool bears full responsibility for compliance with all requirements placed on them and their scientific work. Compliance with this principle obliges the user to always check AI-generated content using appropriate means to the best of their knowledge and belief and to adapt it if necessary. The tools may support the user's thought processes and intellectual efforts but must not replace them.

1.3 Differentiation between internal and external services

The distinction between one's own work and that of a third party is a fundamental requirement of scientific integrity. It must always be ensured that authorship can be clearly attributed. This means that text passages and statements taken from other individuals or sources must be documented and cited with a verifiable source reference. Since an AI system has no authorship over its output and its products are generally neither reproducible nor guaranteed to be factually correct, the citation of AI-generated text passages cannot be considered a citation of sources in the classical sense. However, unaltered AI-generated text passages can be marked directly in the text (see section 2 “Labeling of AI-generated text passages”). Text passages modified with AI tools can be documented indirectly in an AI Utilization Table (see section 4 “Documenting special applications of generative AI (AI Utilization Table)”).

1.4 Reliability of factual information

In the interests of scientific integrity, factual information that goes beyond the general knowledge and subject-specific basic knowledge of the scientific community addressed must always be checked for its truthfulness, relevance, and accuracy and backed up with citable and trustworthy sources. AI tools that use generative methods are generally not citable sources of factual information unless they can guarantee the accuracy of the information. Since this requirement is generally not met by current generative AI systems, all factual information generated by these systems must be checked in the conventional way and provided with citable sources (4).

1.5 Scope of Labeling and Documentation Requirements

Users of AI tools face the challenge of striking a balance between using AI tools as transparently as possible and keeping the hurdle for their use as low as possible. It must be considered whether and to what extent the use of AI-based tools should be documented directly or indirectly. If the AI output is used merely as inspiration and the independent intellectual effort is clearly evident from the subsequent, self-developed work, the output can be used without labeling.

However, regardless of this general guideline and the recommendations of this document, departments, instructors, or examiners may define subject-, teaching-, or exam-specific requirements, prohibit the use of certain AI tools or AI-generated content, and require specific labeling and documentation. The regulations set by the responsible departments or lecturers/examiners are decisive. Failure to label, i.e., concealing the origin of text passages and statements, may therefore be considered an attempt at deception within the meaning of § 38 APB (5), depending on the type of examination.

2. Labeling of AI-generated text passages

In accordance with principle 1.3, “Differentiation between internal and external services”, an AI-generated text cannot be cited as a source in the classical sense. Nevertheless, the labeling of AI-generated text passages should be based on the existing rules for citing sources. Depending on the chosen citation style, direct and indirect “citations” of AI-generated text can then be marked either directly in the text itself, for example in the form of a footnote, or alternatively with a reference to the bibliography. The following four subsections present four documentation styles as examples for marking AI-generated text passages. Determining the rules for labeling and documenting AI-generated content is the responsibility of the department or the lecturer/examiner (see section 1.5 “Scope of Labeling and Documentation Requirements”).

2.1 American Psychological Association (APA)

Passages from the conversation history with an AI tool are to be marked as the output of an algorithm in accordance with APA Style (6). The labeling in the text follows this scheme:

Scheme: ([AI tool provider], [Year])
Example: (OpenAI, 2023)

“Year” refers to the year of the version used. If the conversation history is documented elsewhere, the reference “(OpenAI, 2023)” can be expanded to “(OpenAI, 2023; see Appendix A for the entire conversation history)” by specifying the corresponding location.

The corresponding entry in the references section follows the template for software in the Publication Manual (American Psychological Association, 2020, Chapter 10.10). The author is the provider of the AI tool, while the date refers to the year of the version used. The entry in the references section is generated according to the following scheme:

Scheme: [AI tool provider]. ([Year]). [AI tool name] ([version]) [[type of AI]]. [Link to AI tool].
Example: OpenAI. (2023). ChatGPT (4o) [Large language model]. https://chat.openai.com/chat

2.2 Chicago Manual of Style (CMOS)

According to the recommendation of the CMOS (7), it is sufficient to mention the AI tool in the text and write, for example, “The following recipe was generated by ChatGPT”. A more detailed identification can be made in the form of a footnote according to the following scheme:

Scheme: Text generated by [AI tool name], [AI tool provider], [date when text was generated], [link to AI tool or conversation history].
Example: Text generated by ChatGPT, OpenAI, 06.03.2024, https://openai.com/index/chatgpt/.

A link does not necessarily have to be included. Prompts can also be included in the footnote according to the following scheme:

Scheme: [AI tool name], response to “[prompt]”, [AI tool provider], [date when the text was generated], [link to AI tool or conversation history].
Example: ChatGPT, response to “Create me a recipe for green sauce”, OpenAI, 06.03.2024.

2.3 Modern Language Association (MLA)

Text passages from the AI response to the prompt “Create me a recipe for green sauce” are marked directly in the text according to the following scheme (8):

Scheme: (“[First words of the prompt]”)
Example: (“Create me a recipe”)

The corresponding entry in the references section is generated according to the following scheme:

Scheme: “[Prompt]”, [AI tool name], [version], [AI tool provider], [date], [link to AI tool or conversation history].
Example: “Create me a recipe for green sauce”, ChatGPT, 4.0, OpenAI, 06.03.2024, https://openai.com/index/chatgpt/.

2.4 DIN ISO 690

An AI-generated text passage must be marked with a consecutive number in square brackets in accordance with the German standard DIN ISO 690:

Scheme: [[consecutive numbering]]
Example: [23]

Detailed documentation according to the following scheme is found only in the references section:

Scheme: [[consecutive numbering]] [AI tool provider], [year], [AI tool name] [version] [type of AI], personal communication [accessed on [date on which the output was generated] [time]]. Available at: [link to AI tool or conversation history].
Example: [23] OpenAI, 2024, ChatGPT 4.0 AI language model, personal communication [accessed on 17.12.2024 approx. 9 pm]. Available at: https://chatgpt.com/.

3. Special applications of generative AI

Generative AI can assist in the following applications:

  • Correcting spelling and grammar (labeling is not required)
  • Summarizing and clarifying texts (no labeling required if checked intellectually)
  • Rephrasing or paraphrasing texts, for example to adopt a specific writing style or to use simple language (labeling is required depending on the scope)
  • Translating texts (no labeling required if checked intellectually)
  • Generating texts or writing down ideas and outlines (labeling is required depending on the scope)
  • Structuring texts (labeling is not required)
  • Creating or optimizing outlines (labeling is required depending on the scope)
  • Collecting ideas and brainstorming (labeling is not required)
  • Identifying topics and gaining an overview of the current state of research (documenting is not required if the information only serves as a starting point for research)
  • Preparing searches, e.g., by identifying suitable search terms (documenting is not required)
  • Researching literature (documenting is not required)
  • Finding pro and con arguments (9) (labeling is required depending on the scope)
  • Transcribing sound recordings (labeling is required; compliance with the General Data Protection Regulation (GDPR) must be observed)
  • Visualization in the form of images and graphics (for labeling, see section 5 “Labeling of AI-generated images”)

Despite these recommendations, the binding rules for labeling and documenting AI-generated content are determined by the department or lecturer/examiner (see section 1.5 “Scope of Labeling and Documentation Requirements”).

4. Documenting special applications of generative AI (AI Utilization Table)

The use of AI tools can be documented indirectly in a table, which supplements the references section of the work. A recommended structure may look as follows:

Table 1: Example table for documenting the use of AI tools (based on the guide “Aus KI zitieren” from the University of Basel)

5. Labeling of AI-generated images

AI-generated images and graphics should be identified in their captions, stating the AI tool used and its provider. If the prompts used to create the graphic are relevant to the work or its traceability and are documented elsewhere in the work, there should be a reference in the caption.

6. Documenting prompts and conversations

6.1 Selecting

Individual prompts or entire conversations should be documented if …

  • Significant contributions to the content are made:
    • The results from the AI tool flow directly into the scientific argumentation, structure, methodology, analysis, or interpretation. For example, an AI tool is used to process or evaluate data and thus becomes part of the methodology.
    • The prompt leads to textual or content-related results that serve as the basis for central statements or conclusions.
  • The AI tool is the subject of research:
    • The work examines the application or performance of AI models. In this case, the conversations become research data and must be documented as such (10).
  • There is a creative or non-trivial use:
    • The design of the prompt requires special expertise or creativity, which can be regarded as an essential intellectual achievement.
  • Reproducibility must be guaranteed:
    • Prompts influence the results to such an extent that other researchers must be able to reproduce or validate them.
    • The work is published in an area or framework that places special demands on traceability and transparency.
  • The use of AI raises ethical questions.

6.2 Saving

In the conversation view within the web browser, ChatGPT from OpenAI offers the option of saving the conversation history by clicking on the “Share” button and making it publicly accessible via a web link.

ChatGPT also offers the option of exporting conversation histories in a desired file format. After entering a prompt such as “Provide me with the conversation history as .txt”, a download link to the file in the requested format is provided. Depending on the intended use, the export can be produced in different file formats:

  • Text file (.txt)
  • Portable Document Format file (.pdf)
  • Microsoft Word file (.docx)
  • Microsoft Excel file (.xlsx)
  • Comma-separated values file (.csv)
  • Extensible Markup Language file (.xml)
  • JavaScript Object Notation file (.json)

It is recommended to save at least one structured and machine-readable version of the data in CSV format so that no information is lost and it remains readily available.
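
The following Python sketch illustrates one possible way to convert an exported conversation history into the recommended CSV form. It assumes a simple JSON export consisting of a list of messages with “role”, “content”, and “timestamp” fields; actual exports may be structured differently and should be inspected before conversion. The file names are placeholders.

  import csv
  import json

  # Assumed export structure: a list of messages, each a JSON object with
  # "role", "content", and "timestamp" fields. Real exports may differ and
  # should be checked before running the conversion.
  def conversation_json_to_csv(json_path: str, csv_path: str) -> None:
      with open(json_path, encoding="utf-8") as source:
          messages = json.load(source)

      with open(csv_path, "w", newline="", encoding="utf-8") as target:
          writer = csv.DictWriter(target, fieldnames=["timestamp", "role", "content"])
          writer.writeheader()
          for message in messages:
              writer.writerow({
                  "timestamp": message.get("timestamp", ""),
                  "role": message.get("role", ""),
                  "content": message.get("content", ""),
              })

  if __name__ == "__main__":
      # Placeholder file names for illustration only.
      conversation_json_to_csv("conversation.json", "conversation.csv")

A conversion of this kind keeps the conversation content available in a structured form even if the original export format can no longer be opened with the tools at hand.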

6.3 Documenting

Individual prompts worthy of documentation can be incorporated directly or via a footnote in the running text. For relevant prompt sequences or shorter conversation sequences, it is advisable to provide a web link to the conversation (if available) or to document them in the appendix. If many and/or long sequences of prompts or conversations are to be documented and this cannot be done by providing a web link, these interactions should not be part of the written work but should be saved separately and made accessible depending on the intended use. It is generally advisable to treat this data as research data and, if available, to record it in the data management plan. If there is no such plan, reference must be made to the data at an appropriate place in the work.

The following applies to all three variants: if the prompts and conversation histories are not self-explanatory, they must be commented on and, where necessary, discussed. In the case of conversation sequences, a tabular presentation with an additional comments column can be useful. This can be particularly useful or even mandatory in the context of Bachelor's or Master's theses, for example, as proof of the necessary AI competence. Depending on the application, further columns are conceivable.

Table 2: Example table for documenting the course of a conversation with an AI assistant. The “Assistant” is ChatGPT 4o from OpenAI. The conversation was held on 04.12.2024 via the web interface at https://openai.com/index/chatgpt/.

Footnotes

(1) https://www.gesetze-im-internet.de/urhg/index.html, last accessed on 28.02.2025

(2) https://eur-lex.europa.eu/legal-content/DE/TXT/HTML/?uri=OJ:L_202401689, last accessed on 28.02.2025

(3) https://eur-lex.europa.eu/legal-content/DE/TXT/HTML/?uri=CELEX:32016R0679, last accessed on 28.02.2025

(4) Sources worthy of citation must be reliable, verifiable (published) and scientifically sound. This includes, in particular, scientific publications from specialist journals, books or specialist conferences. Citation-worthy sources can be searched for using the TUfind search portal of the ULB Darmstadt. In addition, there are numerous (subject-specific) databases with citable sources, which are listed in the database information system DBIS. Further information on DBIS can be found at https://www.ulb.tu-darmstadt.de/artikel_details_1664.de.jsp.

(5) “General Examination Regulations of the Technical University of Darmstadt (APB)”, Tanja Brühl, 7th Amendment, https://www.intern.tu-darmstadt.de/media/dezernat_ii/ordnungen/apb-english.pdf, last accessed on 02.05.2025

(6) “How to cite ChatGPT”, Timothy McAdoo, 07.04.2023, https://apastyle.apa.org/blog/how-to-cite-chatgpt, last accessed on 09.12.2024

(7) https://www.chicagomanualofstyle.org/qanda/data/faq/topics/Documentation/faq0422.html, last accessed on 19.12.2024

(8) https://style.mla.org/citing-generative-ai/, last accessed on 19.12.2024

(9) For example, with the AI tool “ArgumenText” from Summetix, which has been licensed by the ULB Darmstadt. It enables the search for natural language arguments in scientific literature. Neural networks find and summarize the pros and cons of certain topics in real time.

(10) You can find more about research data at https://www.tu-darmstadt.de/tudata/tudata/digitale_forschungsdaten_an_der_tu_tudata/index.de.jsp. The official guidelines for handling digital research data at TU Darmstadt can be found at https://tuprints.ulb.tu-darmstadt.de/23200/.