Artificial Intelligence (AI) Policy
Introduction
With the increasing reliance on generative artificial intelligence (AI) technologies in academic writing and publishing, it has become essential to establish clear guidelines that regulate their use. These policies aim to enhance transparency, foster trust among authors, reviewers, editors, and readers, and uphold ethical and scientific standards throughout the publishing process.
I. Policies for Authors
- The use of generative AI tools is permitted only during the writing stage, and strictly for improving language and readability. Final responsibility for the accuracy and integrity of the text lies entirely with the authors.
- Authors must disclose the use of such tools in their manuscripts, and this disclosure will appear in the published article to ensure transparency and accountability.
- Generative AI or AI-based tools must not be listed as an author or co-author, since authorship entails responsibilities and intellectual contributions that only humans can provide.
- Authors remain fully responsible for the originality of their work, the integrity of data, compliance with ethical standards, and respect for intellectual property rights.
Regarding Figures and Illustrations
- The use of generative AI tools to create or modify images, figures, or artwork in submitted manuscripts is not permitted.
- The only exception applies when AI-based tools are an integral part of the research methodology (e.g., biomedical imaging techniques). In such cases, the process must be described in detail in the methodology section, including the tool’s name, version, and technical specifications.
- Adjustments such as brightness, contrast, or color balance are allowed only if they do not obscure or misrepresent the original data.
- The use of generative AI for graphical abstracts, scientific illustrations, or cover artwork requires prior approval from the editorial board.
II. Policies for Reviewers
- Manuscripts submitted for review must be treated as confidential documents and must not be uploaded to or processed through AI tools, in order to safeguard authors’ rights and privacy.
- Peer review reports are likewise confidential and must not be entered into AI tools, even for language editing.
- The peer review process requires critical thinking, expertise, and independent judgment, which lie beyond the capabilities of AI technologies.
- Reviewers are fully accountable for the accuracy and integrity of their reports.
III. Policies for Editors
- Editors must treat all manuscripts as confidential and refrain from uploading any part of them into AI tools.
- This requirement also extends to all editorial correspondence, including decision letters and communications with authors.
- Editorial evaluation and decision-making involve intellectual and ethical responsibilities that cannot be delegated to AI.
- If an editor suspects improper use of AI tools by authors or reviewers, the matter should be reported to the publishing authority.
Scope
This policy applies to all manuscripts submitted to the journal where AI is used at any stage of the research process, including—but not limited to—data analysis, academic writing assistance, experimental design, and interpretation of results.
Transparency and Disclosure
- Declaration of AI Use: Authors must explicitly disclose any use of AI in their research or manuscript preparation, specifying the tools, algorithms, or software employed and their roles in the research process.
- AI Contributions: Any section of the manuscript generated or substantially influenced by AI must be clearly indicated. The use of AI must not diminish the originality or intellectual contribution of the authors.
Ethical Considerations
- Bias and Fairness: Authors must take steps to detect and mitigate potential biases in AI algorithms, ensuring that their use does not perpetuate or amplify existing biases.
- Data Privacy: AI use must comply with all applicable data protection regulations and ethical standards, safeguarding the privacy and confidentiality of individuals.
- Human Oversight: AI should augment, not replace, human judgment. Authors remain fully responsible and accountable for the content and integrity of their research.
Accuracy and Reliability
- Validation and Verification: Authors should validate and verify AI outputs, conduct robustness checks, and ensure the reproducibility of AI-generated results.
- Comprehensive Documentation: Full documentation of AI methodologies—including source code, training datasets, and parameter settings—should be provided to enable replication and verification.
Intellectual Property and Authorship
- Authorship Criteria: AI cannot be listed as an author. Authorship is limited to individuals who meet the criteria for intellectual contribution.
- Attribution: Proper credit must be given to the developers of AI tools and the sources of datasets, with appropriate citations and acknowledgments.
Misuse of AI
- Plagiarism Detection: The journal uses AI-based plagiarism detection to ensure originality. Using AI to generate content without appropriate attribution will result in manuscript rejection.
- Fabrication and Falsification: The use of AI to fabricate or falsify data, results, or any part of the research is strictly prohibited and will lead to severe consequences, including article retraction and future submission bans.
Review and Monitoring
- Peer Review: Manuscripts involving AI undergo rigorous peer review, including evaluation by AI experts to ensure ethical and methodological compliance.
- Policy Updates: This policy will be periodically reviewed and updated to reflect technological advancements in AI and evolving ethical considerations.