Innovative Educational Research (INNER) is dedicated to upholding the highest standards of academic integrity, originality, and ethical practices. Considering the growing integration of Artificial Intelligence (AI) tools within academic workflows, this policy outlines the acceptable and unacceptable uses of AI tools by authors, reviewers, and editors involved in the publication process.
For Authors
Permitted Uses of AI
Authors may use AI tools to assist with certain non-substantive aspects of manuscript preparation, as outlined below:
- Language and Readability: AI tools may be employed for grammar correction, language refinement, and stylistic adjustments. Authors are responsible for reviewing and verifying the accuracy of any changes made by these tools.
- Illustrative Examples: Authors may include AI-generated examples for explanatory or illustrative purposes, provided that these examples are clearly labeled and properly cited in the manuscript.
- Data Analysis: AI tools may be used to support the analysis of data collected through traditional research methods. The methodology section must include a detailed description of the tools used, the parameters applied, and the rationale for their selection.
Prohibited Uses of AI
- Authorship and Content Creation: Authors are prohibited from using AI tools to generate substantive content for manuscripts, including sections such as abstracts, literature reviews, methodologies, results, or conclusions. Intellectual contributions must be entirely the work of the authors.
- Data Generation: AI tools must not be used to fabricate or simulate data, statistical analyses, or findings. All reported data must be authentic and verifiable.
- Image Creation: The use of AI tools to generate images, figures, or visualizations is prohibited due to concerns related to authenticity and ethical compliance.
Disclosure Requirement
Authors are required to disclose any use of AI tools in their manuscript. A disclosure statement should be included in the Acknowledgments section, specifying the tools used and their intended purpose (e.g., "This manuscript was proofread using [AI Tool Name] for grammar and clarity improvement.").
For Reviewers
Permitted Uses of AI
Reviewers may use AI tools to assist in the preparation of their review reports, including:
- Refining language, grammar, or formatting for clarity and readability.
- Verifying references or basic information related to the manuscript content.
Prohibited Uses of AI
- Evaluation of Intellectual Content: Reviewers must not use AI tools to assess the intellectual content of the manuscript or to generate substantive portions of their review. The evaluation must be entirely their own.
- Confidentiality: Reviewers must not input manuscript content into AI tools, as doing so breaches the confidentiality of the review process.
Responsibilities
Reviewers are expected to report any potential misuse of AI tools by authors, such as the generation of AI-based content or fabricated data, to the editorial team. Reviewers must uphold confidentiality throughout the review process.
For Editors
Permitted Uses of AI
Editors may use AI tools for administrative tasks, including:
- Streamlining workflows or tracking manuscript progress.
- Enhancing communication clarity in editorial decisions or reviewer invitations.
Prohibited Uses of AI
- Substantive Editorial Decisions: Editors must not use AI tools to make substantive editorial decisions or to evaluate the intellectual merit of a manuscript.
- Content Creation: AI tools must not be used to generate content for editorial letters or correspondence without human oversight and validation.
Responsibilities
Editors must ensure compliance with the AI usage policy by:
- Verifying that authors disclose any use of AI tools and address any inconsistencies or ethical concerns.
- Ensuring that the peer review process remains rigorous and free from AI-generated biases.
- Providing reviewers with guidance on AI-related ethical standards and monitoring adherence to the journal’s policies.
Ethical Considerations for All Parties
AI tools may generate plausible-sounding but incorrect or fabricated information, commonly referred to as "AI hallucinations." All parties involved in the publication process—authors, reviewers, and editors—must verify the accuracy and validity of any content influenced by AI. Furthermore, the use of AI must comply with ethical standards, including respect for data privacy, informed consent, and the mitigation of bias.
Consequences of Non-Compliance
Failure to comply with this AI policy may result in the following:
- Authors: Manuscript rejection or retraction of published articles.
- Reviewers: Removal from the reviewer pool for violating confidentiality or misusing AI tools.
- Editors: Investigation and corrective actions for editors who misuse AI tools or fail to enforce the policy.
Commitment to Ethical AI Use
INNER is committed to promoting ethical and responsible AI use in academic publishing. By adhering to this policy, all participants in the publication process contribute to maintaining the integrity, rigor, and trustworthiness of scholarly research.
For further inquiries or clarification, please contact the editorial office at editor@innovativedu.org.