AI systems can support literature research, but they should not be seen as flawless tools.
AI‑generated content may be incomplete, contain incorrect or non‑citable references (for example, wrong journal titles), and sometimes point to literature resources that do not actually exist, even when they look convincing at first glance. Such systems also do not reliably follow academic standards, which can lead to unintended and biased results.
It is therefore important to use this technology carefully and to check every output critically.
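One practical check is to verify that a reference suggested by an AI tool is actually registered. The following is a minimal sketch, assuming the reference includes a DOI: it queries the public Crossref REST API (api.crossref.org), which returns 404 for DOIs that are not registered. The example DOI in the comment is hypothetical.

```python
import requests

def doi_exists(doi: str, timeout: float = 10.0) -> bool:
    """Check whether a DOI is registered with Crossref.

    A 200 response means Crossref knows the DOI; a 404 response
    is a common sign of a fabricated reference. Other status
    codes raise an exception rather than guessing.
    """
    url = f"https://api.crossref.org/works/{doi}"
    response = requests.get(url, timeout=timeout)
    if response.status_code == 200:
        return True
    if response.status_code == 404:
        return False
    response.raise_for_status()
    return False

# Hypothetical DOI copied from an AI-generated bibliography:
# print(doi_exists("10.1234/made-up.2023.001"))  # likely False
```

Note that a successful lookup only confirms the DOI is registered; the returned metadata still needs to be compared against the citation's title, authors, and journal, since AI tools sometimes attach real DOIs to the wrong works.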
The use of AI tools in literature research requires careful consideration of legal and ethical frameworks.
The EU AI Act regulates AI systems on the basis of risk and mandates the labeling of AI‑generated content. Sensitive or confidential information should not be entered into general AI assistants, as many systems are hosted outside the EU and may store user inputs for training purposes. Uploading copyrighted materials is not permitted; only public‑domain or appropriately licensed works may be used.
Ethically, the substantial resource consumption of large models and the problematic labor conditions involved in global data‑annotation processes must also be taken into account. Moreover, the integration of AI‑generated results into scholarly work requires critical reflection to avoid distortions and to safeguard the integrity of the research process.
AI‑supported systems generate results on the basis of extensive but often non‑transparent datasets. It is often impossible to trace which sources were used, as the underlying data are only disclosed to a limited extent.
Many tools primarily analyse freely accessible, English‑language literature, while current or discipline‑specific content may not be taken into account. Paywalled resources such as e‑books, journals, and databases may also be missing from the results. AI tools usually do not conduct scholarly verification of sources, and their capabilities vary depending on the provider.
The suitability of such tools therefore varies depending on the academic field and the language involved. Responsibility for adhering to academic standards in accordance with good scientific practice lies with the user.
Another significant issue is model bias: AI systems often reproduce prejudices or skewed patterns from the training data, which can result in inaccurate, biased, or discriminatory outputs.
Moreover, AI tools frequently generate statements that appear plausible but are in fact entirely fabricated. Because the sources of such errors are often opaque, users can rarely identify them. These so‑called 'hallucinations' exacerbate existing challenges posed by bias, as AI not only reproduces underlying distortions but also introduces additional inaccuracies.