AI in Scientific Publications: Navigating the Rules (July 2025)

The rapid advancements in Artificial Intelligence (AI), particularly generative AI tools like Large Language Models (LLMs) and image generators, are transforming how research is conducted and communicated. As these tools become more sophisticated, the academic publishing landscape is grappling with how to integrate them ethically and transparently. As of July 2025, clear guidelines are emerging, but it's crucial for researchers to stay informed.

This post will detail the current consensus and specific policies from major publishers regarding the use of AI-generated content and figures, along with broader governmental recommendations on AI in research. Understanding these rules is paramount to maintaining research integrity and ensuring your work meets publication standards.


General Principles: The Core Consensus

While specific wordings may vary, several overarching principles guide most major publishers' policies on AI use:

  • AI Cannot Be an Author: This is a universal rule. AI tools lack accountability, responsibility, and the ability to approve a final manuscript, which are fundamental criteria for authorship.
  • Transparency is Key: Any significant use of AI tools in generating text, figures, or code must be explicitly disclosed. This disclosure typically goes in the Acknowledgments or Methods section.
  • Human Oversight & Responsibility: Authors remain fully responsible and accountable for the accuracy, integrity, originality, and ethical implications of all content, including any generated or assisted by AI. AI outputs must be carefully reviewed and edited by human authors.
  • Confidentiality in Peer Review: Reviewers and editors are generally prohibited from uploading submitted manuscripts into generative AI tools, as this can breach confidentiality and proprietary rights.

AI-Generated Text: What's Allowed and What's Not

The use of AI for text generation is often differentiated:

  • AI-Assisted (Generally Permitted with Disclosure): Using AI for minor improvements like grammar checks, language polishing, spell-checking, or rephrasing for clarity is generally acceptable. This is seen as akin to using advanced editing software. However, some publishers still request disclosure even for this level of assistance if it is substantial.
  • AI-Generated (Restricted or Prohibited, Requires Disclosure): Generating significant portions of text, entire sections, or substantive commentary directly from AI tools is heavily scrutinized. While some publishers permit it with explicit, prominent disclosure (e.g., in the Methods section detailing prompts and verification), others may reject manuscripts with extensive AI-generated content due to concerns about originality, factual accuracy ("hallucinations"), and potential plagiarism/copyright infringement.
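
Where disclosure is required, a short, specific statement usually suffices. The template below is a generic sketch, not any journal's mandated wording (Elsevier, for instance, publishes a similar suggested declaration); replace the bracketed placeholders and follow your target journal's required placement:

```
During the preparation of this work the author(s) used [tool name and version] in order to [reason, e.g., to improve language and readability]. After using this tool, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.
```

If a journal asks for the disclosure in the Methods section, the statement can additionally document the prompts used and how the outputs were verified.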

AI-Generated Figures & Images: A More Cautious Stance

Policies regarding AI-generated visuals tend to be stricter due to concerns about data integrity, reproducibility, and potential misrepresentation:

  • General Prohibition: Many leading journals and publishers currently prohibit the use of generative AI to *create or alter* scientific figures, images, or artwork within the main manuscript. This includes enhancing, obscuring, moving, removing, or introducing specific features.
  • Permitted Adjustments: Basic adjustments like brightness, contrast, or color balance are usually acceptable, provided they do not obscure or eliminate original information (see the scripted example after this list). Publishers may use image forensics tools to detect irregularities.
  • Exceptions & Disclosure:
    • Cover Art/Graphical Abstracts: Some publishers may permit AI-generated images for non-scientific elements like journal cover art or graphical abstracts, but *only* with prior permission from the editor, clear disclosure, and assurance that all necessary rights are cleared.
    • Research on AI Itself: If the paper is specifically about AI-generated images or their analysis, the use of such images would be reviewed on a case-by-case basis and requires explicit labeling.
  • Raw Data: Authors may be asked to provide pre-AI-adjusted versions of images or the composite raw images used to create the final submitted versions for editorial assessment.
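
To make the line between permitted and prohibited edits concrete, here is a minimal sketch of a typically acceptable adjustment: a uniform brightness/contrast change applied to the entire image via a script, with the raw file preserved and the operations logged for later disclosure. It uses the Pillow library; the file names and factors are hypothetical, and anything beyond such uniform, whole-image adjustments (local edits, feature removal, generative fill) falls under the prohibitions above.

```python
# Minimal sketch using Pillow; file names and factors are hypothetical.
# Only uniform, whole-image adjustments are shown; local or generative
# edits to scientific images are typically prohibited.
from PIL import Image, ImageEnhance

raw = Image.open("figure2_raw.tif")  # original file; keep unmodified for editorial review

adjusted = ImageEnhance.Brightness(raw).enhance(1.10)     # +10% brightness, entire image
adjusted = ImageEnhance.Contrast(adjusted).enhance(1.05)  # +5% contrast, entire image
adjusted.save("figure2_final.tif")

# Log the exact operations so they can be reported if the journal asks
# for pre-adjustment versions or a description of processing steps.
with open("figure2_adjustment_log.txt", "w") as log:
    log.write("Pillow ImageEnhance: Brightness x1.10, Contrast x1.05, uniform\n")
```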

Specific Journal Policies (as of July 2025)

It is **imperative** to always check the specific author guidelines of your target journal, as policies can evolve rapidly. Here are examples from major publishers:

Springer Nature (includes Nature, Scientific Reports, etc.)

  • AI Authorship: Does not attribute authorship to AI.
  • Generative AI Images: "SN does not allow the inclusion of generative AI images in our publications." Exceptions may apply for images directly referenced in a piece specifically about AI, provided ethics and copyright are adhered to.
  • AI-Assisted Text: Use of LLMs for "AI assisted copy editing" does not need to be declared. Substantive use should be documented in the Methods section.
  • Peer Reviewers: Asked not to upload manuscripts into generative AI tools.
  • Springer Nature Editorial Policies (AI section)

Elsevier

  • AI Authorship: Disallows AI tools from being listed as authors.
  • AI-Assisted Text: Authors may use generative AI to improve readability and language, but with human oversight and accountability. Disclosure is required.
  • Generative AI Images: "We do not permit the use of Generative AI or AI-assisted tools to create or alter images in submitted manuscripts." This includes enhancing, obscuring, moving, removing, or introducing features. Exceptions for cover art may be allowed with prior permission and disclosure.
  • Elsevier AI Author Policy (Note: the link directs to Elsevier's general AI guidelines; the specific image policy is often found within broader ethical guidelines or author instructions).

Wiley

  • AI Authorship: AI tools cannot be authors.
  • Human Oversight: AI technology may only be used as a "companion" to the writing process, not a replacement. Authors are fully responsible for accuracy.
  • Disclosure: Authors must maintain documentation of AI use and disclose it upon submission. This includes instances where AI "generates supplementary materials, such as images, tables, or charts."
  • Generative AI Images: While their general guidelines emphasize disclosure, specific journal policies within Wiley may explicitly prohibit AI-generated images within the main content. Always check the specific journal's author guidelines.
  • Wiley AI Guidelines for Authors

PLOS (Public Library of Science)

  • AI Authorship: AI tools cannot be listed as authors.
  • Disclosure: "Contributions by artificial intelligence (AI) tools and technologies to a study or to an article's contents must be clearly reported in a dedicated section of the Methods, or in the Acknowledgements section for article types lacking a Methods section."
  • Image Integrity: PLOS has strong policies on image integrity; while its guidelines do not single out AI-generated images as a separate category, the general expectation is that figures represent original, unaltered data.
  • PLOS Ethical Publishing Practice (AI section within)

IEEE (Institute of Electrical and Electronics Engineers)

  • Disclosure: "The use of artificial intelligence (AI)–generated text in an article shall be disclosed in the acknowledgements section of any paper submitted to an IEEE Conference or Periodical." This also applies to figures, images, and code. The specific AI system used should be cited.
  • Authorship: AI tools cannot be authors.
  • IEEE Author Center Submission Policies (See "Guidance for IEEE Publications Regarding AI-Generated Text").

ACM (Association for Computing Machinery)

  • AI Authorship: Generative AI tools may not be listed as authors.
  • Disclosure: Use of generative AI tools to create new content (text, images, tables, code) must be fully disclosed in the Acknowledgments or prominently elsewhere. The level of disclosure should be commensurate with the proportion of new content generated.
  • Responsibility: Authors accept full responsibility for the veracity and correctness of all material, including computer-generated material.
  • ACM Publications Policies - FAQ on AI

Government Recommendations & Broader Ethical Considerations

While direct governmental "rules" for AI in *scientific publications* are less common, governments and research bodies are issuing broader ethical guidelines for AI development and use that implicitly apply to research:

  • Transparency and Explainability: Governments emphasize the need for transparency in AI systems, including how they are trained and how their outputs are generated. This aligns with publishers' disclosure requirements.
  • Fairness and Bias: AI models are trained on data that may contain biases. Governments (e.g., the UK Government's AI Playbook) stress the importance of identifying and mitigating bias in AI outputs to ensure fairness and prevent the perpetuation of stereotypes or misinformation. Researchers using AI must be aware of and address potential biases in their generated content.
  • Human Oversight and Control: A recurring theme is that AI should augment, not replace, human capabilities. Meaningful human control at appropriate stages of AI deployment is crucial. This reinforces the idea that authors are ultimately responsible for their work.
  • Intellectual Property and Copyright: The legal landscape around AI-generated content and copyright is still evolving. Governments are exploring these issues, and researchers must be mindful of the terms of use of any AI tool to ensure they have the necessary rights to publish the generated content.
  • Data Privacy: When using AI tools, especially with sensitive or unpublished data, researchers must ensure compliance with data protection laws and protect confidentiality.
  • Example: the UK Government's AI Playbook: Although aimed at government organizations, this document outlines principles for the safe, responsible, and effective use of AI, including using AI lawfully, ethically, and responsibly; managing the AI life cycle; and being open and collaborative. These principles are highly relevant to academic research.

Conclusion: Responsibility in the Age of AI

The integration of AI into scientific publishing is a dynamic and evolving area. While AI tools offer unprecedented opportunities for efficiency and innovation, they also bring significant ethical and practical challenges. As of July 2025, the clear message from major publishers and emerging governmental guidance is one of **human responsibility, transparency, and caution**. Researchers must meticulously disclose their use of AI, critically evaluate all AI-generated content for accuracy and bias, and remember that the ultimate accountability for the published work rests solely with the human authors. Staying updated with journal-specific policies and broader ethical guidelines is crucial for navigating this new frontier responsibly.

Keywords for SEO: AI in scientific publications, AI generated content, AI figures, journal policies AI, academic publishing AI, generative AI rules, research ethics AI, AI authorship, transparency in AI research, scientific integrity AI, July 2025 AI policy, Nature AI policy, Elsevier AI policy, Springer AI policy, PLOS AI policy, IEEE AI policy, ACM AI policy, government AI guidelines research.
