Generative AI Policy

Guidelines on Artificial Intelligence (AI) Tools and Authorship

These guidelines are published by COPE. The use of artificial intelligence (AI) tools such as ChatGPT or large language models in research publications is expanding rapidly. COPE joins organisations such as WAME and the JAMA Network, among others, in stating that AI tools cannot be listed as an author of a paper.
Use of AI by authors

AI tools cannot meet the requirements for authorship as they cannot take responsibility for the submitted work. As non-legal entities, they cannot assert the presence or absence of conflicts of interest nor manage copyright and license agreements.

Authors who use AI tools in the writing of a manuscript, production of images or graphical elements of the paper, or in the collection and analysis of data, must be transparent in disclosing in the Materials and Methods (or similar section) of the paper how the AI tool was used and which tool was used. Authors are fully responsible for the content of their manuscript, even those parts produced by an AI tool, and are thus liable for any breach of publication ethics.

Use of AI by reviewers

Reviewers are prohibited from submitting manuscript content to external AI systems, as doing so breaches the confidentiality of the peer review process. Reviewers may, however, use AI tools to improve the structure and language of their reviews, provided that no manuscript content is shared with the AI system. Any changes or recommendations made by reviewers must be based on their own analysis of the material.

Use of AI by the editorial team

The editorial team may use AI tools for technical checks of articles (e.g., plagiarism detection or grammar evaluation), automation of some editorial processes, and assistance in maintaining communication with authors and reviewers. However, the editorial team does not use AI to make editorial decisions regarding the publication of articles or to modify the content of manuscripts.

Ethical principles and academic integrity

This policy aims to ensure academic integrity in the use of AI technologies. Authors and reviewers are required to adhere to the following ethical principles:

  • the use of AI should not undermine academic integrity or scientific ethics;
  • all results obtained using AI must be verified and confirmed by the author or reviewer;
  • AI technologies must not be used to manipulate data or create fraudulent results.

Responsibility for AI use

Authors, reviewers, and the editorial team are responsible for their use of AI technologies within the academic process. If the use of AI tools leads to errors or breaches of ethics, these must be disclosed and rectified immediately. The editorial team reserves the right to retract articles that violate ethical norms or to request further clarification from the authors.

Policy updates

This policy will be reviewed and updated in line with the development of AI technologies and changes in international editorial standards (e.g., those of Elsevier). Updates will account for new practices in the use of AI in the academic process as technology advances.