Generative AI Policy

This policy is formulated with reference to Elsevier’s standards on responsible AI use to ensure integrity, transparency, and accountability in the research and publication process.

Article 1: Use of AI by Authors

AI may support writing but must never replace scholarly judgment.

  1. Authors may use generative AI tools to assist manuscript preparation under human supervision.
  2. The author retains full responsibility for the accuracy, originality, and ethical integrity of all content.
  3. Any use of AI must be clearly declared at the time of submission.
  4. AI use must respect data privacy, intellectual property rights, and third-party rights.

Reason: To maintain author accountability and prevent AI misuse that may compromise research authenticity.

Article 2: Use of AI in Images and Illustrations

AI-generated visuals may distort data or mislead interpretation.

  1. The use of AI to create or alter images within the manuscript is prohibited.
  2. Minor adjustments (e.g., to color or contrast) are allowed, provided they do not alter the original information.
  3. Exceptions apply only when AI is part of the research method and fully explained.
  4. AI-generated illustrations or cover art require explicit editorial approval.

Reason: To ensure visual integrity and prevent data manipulation.

Article 3: Disclosure and Transparency

Transparency builds trust and allows proper assessment of AI’s role.

  1. Authors must disclose the AI tool used, its purpose, and the extent of human oversight.
  2. Any use of AI in research must be described in the methodology section.
  3. AI tools may not be listed or cited as authors.

Reason: To uphold openness and ensure proper attribution of scholarly work.

Article 4: Use of AI by Reviewers and Editors

AI cannot replace human judgment or confidentiality in peer review.

  1. Reviewers and editors must not upload manuscripts or related communications to generative AI tools, as doing so may breach confidentiality.
  2. They must not rely on generative AI to assist in the review process, given the risks of inaccuracy and bias.
  3. They may use database-based AI tools for concept searching or plagiarism detection, provided the original references are verified.
  4. Reviewers and editors remain fully responsible for their assessments and decisions.
  5. This policy may be revised once AI tools meet the ethical, security, and accuracy standards set by the journal.

Reason: To preserve the independence, confidentiality, and objectivity of the review and editorial process.

Article 5: Use of AI by the Publisher (LP2M Ma'had Aly Hasyim Asy'ari)

AI may support technical workflows but not editorial judgment.

  1. The publisher may use AI to assist with technical checks, editing, and proof preparation.
  2. Human oversight remains central to all publication decisions.

Reason: To enhance efficiency while ensuring human control and accountability.