The International Journal of Communication and Art (IJCOMAR) recognizes the potential of generative artificial intelligence (AI) tools, such as ChatGPT and other large language models (LLMs), to assist authors in preparing scholarly manuscripts. However, to ensure responsible use and maintain the integrity of our publications, any use of generative AI during the research or writing process must be clearly disclosed.
At IJCOMAR, we uphold high ethical standards for the use of generative AI technologies (e.g., text-generating LLMs and chatbots such as ChatGPT, and AI-assisted image and graphics tools such as DALL·E) in academic publishing. Below, we outline the key principles and ethical responsibilities of authors, editors, and reviewers, based on the guidance provided by the Committee on Publication Ethics (COPE), the International Committee of Medical Journal Editors (ICMJE), and the World Association of Medical Editors (WAME). These principles are integrated into our publication policy and serve as a reference for all stakeholders.
General Principles
• Only Humans Can Be “Authors”:
AI applications are not accepted as authors. Generative AI tools cannot assume responsibility for a publication's content, declare conflicts of interest, or hold legal rights such as copyright. Therefore, chatbots like ChatGPT or other AI-assisted tools cannot be listed as authors or co-authors. Our journal aligns with the COPE and ICMJE position: authorship criteria entail legal and ethical responsibilities that only humans can fulfill.
• Transparency and Disclosure:
All authors and stakeholders must disclose the use of AI tools with full transparency. If AI tools were used for writing, for data collection or analysis, or for generating figures or other visuals, the specific tool and its role must be described in detail in the relevant section of the manuscript. This is typically reported in the Methods section and, where appropriate, in the Acknowledgements and in the cover letter at submission. Transparency requires a clear description of the extent and nature of the AI's contribution.
• Academic Integrity and Responsibility:
AI use must not compromise the scientific integrity of research. Authors retain full responsibility for the originality and accuracy of their submissions, including content generated with AI assistance. Regardless of AI involvement, all work must adhere to academic integrity standards, such as avoiding plagiarism, data fabrication, or misleading information.
• Accuracy and Reliability:
Generative AI tools may produce plausible yet incorrect, incomplete, or biased outputs. Such content should not be automatically accepted as valid. Human oversight and expertise are essential. Authors, editors, and reviewers must verify any AI-generated information before using or publishing it to prevent the dissemination of false or unverified content. AI tools can assist but must not replace human critical thinking and expertise.
• Privacy and Security:
Confidentiality of research data and manuscript drafts must be maintained when using AI tools. Submitting unpublished manuscripts or confidential data to general-use chatbots (e.g., ChatGPT) may expose sensitive information to unauthorized access. Authors, editors, and reviewers are advised not to upload confidential materials to AI systems. All stakeholders are responsible for protecting privacy and intellectual property rights.
Ethical Responsibilities of Authors
• Declaration of AI Use:
Authors are required to disclose any use of generative AI tools during manuscript preparation. In line with ICMJE recommendations, our journal asks about AI use at submission. Authors must state the tool used and its purpose in the cover letter and describe its use in the relevant sections of the manuscript: AI assistance with writing should be noted in the Acknowledgements, while use for data analysis or figure generation should be detailed in the Methods section.
• Do Not Attribute Authorship to AI:
No AI tool may be listed in the author list or cited as an author. AI systems cannot meet authorship criteria, such as making meaningful intellectual contributions, approving the final version, or being accountable for the work as a whole. Authors must affirm that all listed authors meet these criteria and that AI tools are not included as contributors.
• Responsibility for Content and Corrections:
Authors are accountable for the accuracy of all statements and findings in their manuscript, including content generated with AI assistance. Any AI-generated text or visuals must be critically reviewed and, if necessary, corrected by the author. Authors should not accept AI outputs blindly and must verify their relevance, consistency, and reliability. Ultimately, authors bear responsibility for the accuracy and integrity of all materials submitted.
• Avoidance of Plagiarism and Proper Referencing:
Authors must ensure their work is free from plagiarism, including content generated by AI tools. AI outputs may sometimes replicate or resemble existing work. All quotations or borrowed material must be properly cited. Authors are responsible for identifying the original sources of AI-generated claims or data and providing complete, accurate references. AI tools may produce faulty or fabricated citations, so all references must be checked for accuracy and relevance.
• Clear Disclosure of AI Contributions:
Authors must define the scope of AI use in their work. In accordance with WAME recommendations, if a chatbot was used to draft text, this should be stated in the Acknowledgements along with the exact prompts used. Similarly, if an AI tool was used for analysis, for generating results (e.g., tables, figures), or for coding, this should be disclosed in the manuscript (especially in the Abstract and Methods), along with details such as the tool's name and version, the prompt content, and the date and time of the query. For example, a disclosure might read: "The first draft of the Introduction was produced with ChatGPT (GPT-4, OpenAI) on 10 January 2025; the prompts used are listed in the Acknowledgements." These details are crucial for ensuring scientific transparency and reproducibility.
• Scientific Integrity and Honesty:
Authors must ensure that their article reflects their own original research findings and scholarly contribution, even if assisted by AI. AI tools must never be used to fabricate results or manipulate data. The content of the article must truthfully represent the authors' data and ideas. Misusing AI to generate fabricated or plagiarized content is considered scientific misconduct and a serious ethical violation.
Ethical Responsibilities of Editors
• Enforcing the AI Use Policy:
Editors are responsible for ensuring that authors disclose AI use appropriately. In line with ICMJE guidelines, IJCOMAR requires authors to declare AI usage during submission, and editors must verify whether this information is correctly and fully reflected in the manuscript. In cases of missing or inconsistent declarations, editors should follow up with authors for clarification.
• Confidentiality and Data Security:
Editors must uphold confidentiality and avoid using AI tools that might compromise it. Uploading the content of unpublished manuscripts to AI systems may result in unauthorized third-party access. Editors must not share manuscript content, status, or peer reviews with external parties or AI tools. Editors are also responsible for informing and cautioning reviewers regarding confidentiality.
• Guiding Reviewers:
Editors should provide reviewers with clear guidelines regarding the use of AI tools. Reviewer instructions should outline the permissible scope of AI use (e.g., limited use for language editing) and require reviewers to notify the editor if AI was used. Editors must monitor for potential issues stemming from AI use, such as privacy breaches or factual inaccuracies, and act proactively to maintain integrity in peer review.
• Use of AI Detection Tools:
IJCOMAR employs technological tools to identify AI-generated or AI-modified content and to detect possible ethical breaches. Editors may use software to spot AI-generated text, fake references, or plagiarized content. Such tools should be available to the entire editorial team regardless of the journal's financial resources, to help safeguard scientific integrity and respond to new forms of misconduct arising from advances in AI.
• Editorial Content Responsibility:
Editors must exercise caution when using AI to draft editorial content such as decision letters, review reports, or correspondence. If AI assistance is used, editors are responsible for verifying the accuracy, objectivity, and tone of the output. They must be alert to potential errors or misleading statements produced by AI and accept full responsibility for the content.
• Editorial Transparency Regarding AI Use:
If AI tools are used during the editorial process (e.g., to draft review comments or decision letters), editors must disclose this transparently to relevant reviewers and authors. For example, in cases where AI-assisted decision letters are issued, the extent of AI involvement should be recorded in editorial documentation and reported to the editorial board if necessary. Transparency in the editorial process is essential to maintain trust and prevent misunderstandings.
Ethical Responsibilities of Reviewers
• Confidentiality and Accountability:
Reviewers must treat manuscripts as confidential material and must not share them with third parties. Uploading any part of a manuscript to unauthorized AI tools is strictly prohibited, as it violates confidentiality. In accordance with ICMJE recommendations, reviewers should be informed about AI-related policies and are expected not to discuss the content of the review publicly. Reviewers must use submitted files solely for evaluation, must not upload them to AI platforms, and should destroy them once the review is complete.
• Disclosure of AI Use:
Reviewers may use AI tools in a limited capacity to improve language or clarity in their reports, but they must inform the editor if they do so. If AI was used to generate or refine specific paragraphs or suggestions, reviewers must verify the accuracy and quality of the content before incorporating it into their reports. The use of AI should remain a supplementary tool, never a substitute for the reviewer’s own analysis or judgment.
• Responsibility for Content Accuracy:
Reviewers are responsible for the accuracy of all content and recommendations in their reports, including any information derived from AI tools. If a reviewer uses AI-generated summaries, technical data, or source suggestions, they must personally verify their correctness. Every assessment must be grounded in the reviewer’s scientific expertise. Even if AI is used, the final evaluation must reflect the reviewer’s independent judgment. Reviewers are also accountable for any errors or biases in AI-generated content they incorporate.
• Objectivity and Diligence:
AI tools may reflect certain biases or have limitations. Reviewers should remain critical of AI-generated suggestions and maintain objectivity in their evaluation. For example, AI may overlook critical issues or exaggerate minor points. Reviewers must rely on their subject-matter expertise and apply a critical lens to any AI-generated input. Final reports should reflect the reviewer’s own scholarly evaluation, incorporating AI contributions only after thorough review.