AI Authorship Guidance
Authors remain responsible and accountable for any use of AI in their research, which must adhere to our guidance and principles. We expect all authors to use AI responsibly and ethically, and any use must be declared as outlined in our policies and below.
The use of generative AI tools should be declared in the Introduction or Acknowledgements of the manuscript. The use of an AI tool for “AI assisted copy editing” purposes does not need to be declared.
AI is a tool and cannot be held accountable. Thus, authorship and other tasks requiring accountability can only be performed by a human.
Authors should familiarise themselves with the known risks of current AI models before using them in their manuscripts. AI models have been known to plagiarise existing work and to fabricate content. As such, authors should carry out due diligence to ensure that any AI-generated content is correct, appropriately referenced, and follows the standards laid out in our Authors' Code of Conduct.
Thinking about using AI?
Chatbots and other AI tools can offer many time-saving advantages, but it is important to keep our AI Principles and policies in mind when using them. The quality of an AI tool's output is largely determined by your input, so it is critical to craft prompts responsibly. Prompts are what you enter into an AI tool in order to generate an output; they can be questions, instructional text, data, images, video, or a combination of these.
- Consider how you craft your prompt in terms of bias, in both the input and the output.
- Do not include personal, sensitive, confidential, or copyrighted information in prompts.
- Do not create prompts that could lead to harmful or malicious outcomes.
- Take accountability for the impact of your prompts.
- Before deploying prompts widely, test them to ensure they lead to appropriate and ethical responses (a simple screening sketch follows this list).
- Be guided by our AI Principles.
- Stay informed about the latest developments in AI ethics and incorporate best practices into prompt creation.
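As one illustration of the points above on sensitive information and pre-deployment testing, the sketch below screens prompts for obviously risky content before they are sent to any AI tool. This is a minimal, hypothetical example: the `screen_prompt` function and the patterns it checks are illustrative assumptions, not part of any Springer Nature tooling, and real screening would need far broader coverage.

```python
import re

# Illustrative patterns only; genuine screening would need much broader
# coverage (names, addresses, unpublished data, copyrighted passages, etc.).
SENSITIVE_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),            # email addresses
    re.compile(r"(?i)\bconfidential\b|\bdo not share\b"),  # confidentiality markers
]

def screen_prompt(prompt: str) -> list:
    """Return warnings for content that probably should not be sent to an AI tool."""
    return [f"matched pattern {p.pattern!r}"
            for p in SENSITIVE_PATTERNS if p.search(prompt)]

# Test prompts before deploying them widely, as the guidance above recommends.
test_prompts = [
    "Summarise this published abstract in plain language.",
    "Rewrite this CONFIDENTIAL referee report; contact jane.doe@example.org.",
]

for prompt in test_prompts:
    warnings = screen_prompt(prompt)
    if warnings:
        print("Do not send; revise the prompt first:", warnings)
    else:
        print("No obvious issues found.")
```

Even with automated checks like this, a human remains accountable for what is entered into, and produced by, an AI tool.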
Can AI tools like ChatGPT be listed as authors?
No. Large Language Models (LLMs) do not meet Springer Nature’s authorship criteria, as they cannot take accountability for the work. Their use must be documented in the manuscript’s Methods section (or equivalent) or, for book authors, in the acknowledgements, introduction, or preface of the manuscript.
Do I need to declare AI-assisted copy editing?
The use of an LLM (or other AI tool) for “AI assisted copy editing” purposes does not need to be declared.
In this context, we define the term "AI assisted copy editing" as AI-assisted improvements to human-generated texts for readability and style, and to ensure that the texts are free of errors in grammar, spelling, punctuation and tone. These AI-assisted improvements may include wording and formatting changes to the texts, but do not include generative editorial work and autonomous content creation. In all cases, there must be human accountability for the final version of the text and agreement from the authors that the edits reflect their original work.
Are generative AI images allowed in publications?
Generally, no. As publishers, we strictly follow existing copyright law and best practice in publication ethics. Because the legal issues relating to AI-generated images and videos remain broadly unresolved, and are highly case-specific, Springer Nature is unable to permit their use for publication. There is a limited range of established exceptions, more information about which can be found here.
What types of images are covered by this policy?
Examples of image types covered by this policy include:
- video and animation, including video stills
- photography
- illustrations, such as scientific diagrams, photo-illustrations and other collages
- editorial illustrations such as drawings, cartoons or other 2D or 3D visual representations
Not included in this policy are text-based and numerical display items, such as tables, flow charts, and other simple graphs that do not contain images. Please note that not all AI tools are generative. The use of non-generative machine learning tools to manipulate, combine, or enhance existing images or figures should be disclosed in the relevant caption upon submission to allow a case-by-case review.
Does Springer Nature ever use any AI tools for the generation of supporting editorial content?
Some Springer Nature journals and books occasionally use internal Springer Nature-developed AI tools to support the generation of accessory content, such as summary points. This content is always edited and fact-checked by the author and/or editor to meet publication standards. Any substantive use of AI beyond accessory content will be declared on an individual article or book chapter basis.
Accessory content can include but is not limited to:
- key points
- editorial summaries
- glossary terms
- plain language summaries
How is content published by Springer Nature used in an AI context?
Our focus is always on how we can best use AI to help researchers get published faster, find relevant content quickly, ensure the integrity of the academic record, and reduce barriers. We have been using AI in support of these goals for over ten years. It remains a fast-moving area, however, and we are always open to developments that will improve the experience for our customers and the wider research community. Whenever we adopt them, we apply the same considerations we have applied to date:
- Does it benefit our customers and the wider research community?
- Is there a ‘human’ in the loop?
- Does it meet our AI governance principles of fairness, transparency, accountability, and respect?
(See the next section for more information.)
Can peer reviewers use generative AI tools?
While Springer Nature explores providing our peer reviewers with access to safe AI tools, we ask that peer reviewers do not upload manuscripts into generative AI tools.
Peer reviewers play a vital role in scientific publishing. Their expert evaluations and recommendations guide editors in their decisions and ensure that published research is valid, rigorous, and credible. Editors select peer reviewers primarily for their in-depth knowledge of the subject matter or methods of the work they are asked to evaluate; this expertise is invaluable and irreplaceable. Peer reviewers are accountable for the accuracy and views expressed in their reports, and the peer review process operates on a principle of mutual trust between authors, reviewers, and editors.

Despite rapid progress, generative AI tools have considerable limitations: they can lack up-to-date knowledge and may produce nonsensical, biased, or false information. Manuscripts may also include sensitive or proprietary information that should not be shared outside the peer review process. For these reasons, we ask that peer reviewers do not upload manuscripts into generative AI tools.
If the evaluation of the claims made in a manuscript was supported in any way by an AI tool, we ask peer reviewers to declare this use transparently in the peer review report.