GenAI policy

Editorial Team

1. The Editorial Board of Humanities Research in the Russian Far East acknowledges the significant value of artificial intelligence (AI) as an assistive or adaptive technology. While AI use is not inherently unethical and can benefit research and publishing, we are concerned about its uncritical application in scholarly work. We strongly urge authors and reviewers to employ these tools responsibly, ensuring alignment with the principles of academic integrity and research ethics.
2. The editorial team does not use generative AI (GenAI) to select reviewers, evaluate manuscripts, or make editorial decisions.
3. The editorial team actively monitors evolving ethical standards for AI in academia and will revise this policy as needed.

Authors

1. No disclosure is required for routine AI-assisted tasks (e.g., spelling/grammar checks, text formatting, or bibliography verification). However, authors must ensure that AI edits preserve their original meaning, and they remain solely responsible for the final text.
2. The use of GenAI tools requires full disclosure. Authors must specify which GenAI tool was used (including its version), how it was used, and for what purpose. This information should be included in the relevant section of the manuscript (e.g., "Methods"). A note should also be placed on the first page of the manuscript: "The author(s) used [AI tool name and version] for [purpose] in preparing this work. All AI-generated content was reviewed and edited, and the author(s) take full responsibility for the final text". The Editorial Board reserves the right to reject manuscripts with undisclosed or improper AI use at any stage.
3. GenAI must not be used to create or alter images or other visual content.
4. Authors should be aware that GenAI systems may reproduce unattributed content from their training data. Without careful oversight, this poses significant risks of unintentional plagiarism and breaches of publication ethics in submitted manuscripts.
5. Authors must also recognize that GenAI may produce false citations, references, or fabricated data. Without proper verification by authors, such AI-generated content risks the publication of unreliable results and compromises research integrity.
6. Authors bear full responsibility for ensuring the originality, accuracy, and integrity of their manuscripts and supplementary materials, including AI-generated portions. GenAI systems cannot be listed as authors under any circumstances, as they cannot assume accountability for their output.
7. Authors must carefully consider confidentiality and copyright implications before uploading their work or related materials to any AI platform.

Reviewers

1. Reviewers must not use GenAI tools during the peer review process for the following reasons:
Confidentiality concerns: GenAI systems retain uploaded materials, and their data handling practices – including where submissions are stored, who has access, and how they may be used – lack transparency. Therefore, reviewers are prohibited from uploading manuscripts, related files, or draft reviews (e.g., for language polishing or structuring comments) to GenAI platforms, as this risks breaching manuscript confidentiality and copyright protections.
Bias and reliability issues: GenAI is trained on datasets that may contain errors or biased content, potentially leading to outdated, inaccurate, or prejudiced evaluations. Consequently, reviewers must not employ GenAI to assess manuscripts, as it may result in flawed or incomplete judgments about the work’s quality.
The journal reserves the right to terminate collaborations with reviewers who violate these guidelines.
2. If reviewers suspect improper or undisclosed use of GenAI in a manuscript under review, they must document these concerns in their report.