Generative AI Policy

The rapid advancement of generative artificial intelligence (AI) has brought significant changes to academic writing and research practices. Tools such as ChatGPT, Gemini, and other AI-driven platforms are increasingly used to assist authors, editors, and reviewers in the scholarly communication process. While these technologies offer new possibilities for efficiency and creativity, they also raise important questions about research integrity, authorship, originality, and accountability.

Recognizing these opportunities and challenges, Nabawi Journal of Hadith Studies considers it essential to provide clear guidance on the responsible and ethical use of generative AI within the research and publication process. The journal’s policy has therefore been developed with reference to Elsevier’s Generative AI Policies for Journals and the APA Journals Policy on Generative AI, ensuring that human scholarly judgment remains central at all stages of academic work.

Article 1: Purpose

This policy provides clear guidance on the responsible use of generative AI and AI-assisted tools by authors, reviewers, editors, and the publisher in all stages of academic publishing.

Article 2: Definitions

For the purpose of this policy:

  1. Generative AI refers to artificial intelligence systems capable of producing new text, images, or other content in response to user prompts.
  2. Text-Based Generative AI is divided into two categories:
    • Generative AI with verifiable references — provides responses with links to academic sources. Examples: Usul.ai, Perplexity, Scite Assistant, Elicit, Consensus.
    • Generative AI without verifiable references — produces outputs without reference links. Examples: ChatGPT, Gemini, Claude, Copilot.
  3. AI-Assisted Tools refer to non-generative AI (e.g., Grammarly, DeepL, Trinka, LanguageTool) used for grammar, translation, or formatting.
  4. AI Use includes any use of the above systems for writing, editing, data analysis, image generation, or reference searching.

Article 3: Use of AI by Authors

  1. AI may assist writing but must never replace scholarly reasoning or authorship.
  2. Authors may use AI tools under direct human supervision to improve readability or structure.
  3. Authors remain responsible for accuracy, originality, and ethical integrity.
  4. AI use must be declared in the manuscript (Methods or Author’s Note) with tool name and purpose.
  5. AI use must respect privacy, IP rights, and data protection.

Reason: To maintain author accountability and prevent misuse.

Article 4: Use of AI in Images and Illustrations

Because AI-generated visuals may distort or misrepresent scientific information, the following rules apply:

  1. AI-generated or AI-modified images are prohibited.
  2. Minor adjustments (brightness, contrast, color) are allowed, provided they do not alter the underlying information.
  3. Exceptions are permitted only when AI is part of the research methodology and is explained in the Methods section.
  4. AI-generated cover art requires editorial approval.

Reason: To preserve visual integrity and prevent manipulation.

Article 5: Disclosure and Transparency

  1. Authors must disclose AI tools used, their purpose, and human oversight.
  2. AI cannot be cited or listed as an author.
  3. Disclosure should appear in the Methods or Author’s Note section.

Reason: To ensure openness and proper attribution.

Article 6: Authorship Responsibility

  1. Human authors are responsible for verifying all AI-generated content.
  2. AI tools cannot hold authorship or co-author status.
  3. Authors must verify all quotations, data, and interpretations before publication.

Reason: To affirm human accountability and scholarly accuracy.

Article 7: Ethics, Privacy, and Confidentiality

  1. Authors must not upload confidential or unpublished data to open AI systems.
  2. Reviewers/editors must preserve confidentiality and avoid uploading manuscripts to AI tools.
  3. All parties must ensure AI use aligns with ethical and data protection standards.

Reason: To safeguard privacy and research integrity.

Article 8: Use of AI by Reviewers and Editors

  1. AI cannot replace human judgment in editorial or peer review.
  2. Reviewers/editors must not upload manuscripts to AI tools (confidentiality risk).
  3. They may not rely on AI to evaluate or recommend editorial decisions.
  4. However, AI-based academic search tools (e.g., Usul.ai, Perplexity, Scite Assistant) may be used with manual verification.
  5. Full responsibility remains with reviewers and editors.
  6. This provision may be updated as AI tools come to meet ethical and accuracy standards.

Reason: To preserve independence and confidentiality, and to avoid AI feedback loops.

Article 9: Use of AI by the Publisher (LP2M Ma'had Aly Hasyim Asy'ari)

  1. AI may support technical operations but not editorial judgment.
  2. AI tools may assist in technical checks, copyediting, and proof preparation.
  3. All publication decisions remain under human oversight.

Reason: To improve efficiency while maintaining human accountability.

Article 10: Review and Revision of Policy

This policy will be periodically reviewed in response to developments in AI technology and evolving standards of research ethics.

Last update: 29/10/2025