Generative AI policy
This policy aims to provide transparency and guidance for authors, reviewers, and editors regarding the use of artificial intelligence (AI) and AI-assisted technologies in writing. These technologies may be used to improve readability and language clarity; however, they must not replace key authorial tasks, such as generating scientific insights or providing clinical recommendations. Such technologies must be applied only with human oversight, and authors bear full responsibility for the content of their work.
Authors are required to disclose the use of AI in their manuscripts, and AI should not be listed as an author or contributor. Adherence to publishing ethics is essential, ensuring that all contributions meet the standards of originality and proper attribution.
The use of AI to create or modify images in submitted manuscripts is prohibited, with specific exceptions where AI use is part of the research design. Where such an exception applies, the use of AI must be clearly described in the acknowledgments section, detailing the AI tools used and providing the relevant model and manufacturer information. The use of AI to produce artwork, such as graphical abstracts, is likewise not permitted.
Reviewers must treat all manuscripts as confidential and are prohibited from uploading any material into generative AI tools, as this could violate confidentiality and proprietary rights, particularly concerning personally identifiable information. This confidentiality requirement also applies to all communications related to the manuscript, including reviewer reports.
Reviewers should not use generative AI to assist in the review process: the critical thinking and original assessment that peer review requires are beyond the capabilities of these technologies, which may produce incorrect or biased conclusions. Reviewers remain fully accountable for the content of their reviews.