Generative AI Policy
This policy is formulated with reference to Elsevier’s standards on responsible AI use to ensure integrity, transparency, and accountability in the research and publication process.
Article 1: Use of AI by Authors
AI may support writing but must never replace scholarly judgment.
- Authors may use generative AI tools to assist manuscript preparation under human supervision.
- The author retains full responsibility for the accuracy, originality, and ethical integrity of all content.
- Any use of AI must be clearly declared at the time of submission.
- AI use must respect data privacy, intellectual property rights, and third-party rights.
Reason: To maintain author accountability and prevent AI misuse that may compromise research authenticity.
Article 2: Use of AI in Images and Illustrations
AI-generated visuals may distort data or mislead interpretation.
- The use of AI to create or alter images within the manuscript is prohibited.
- Minor adjustments (e.g., color, contrast) are allowed if they do not change original information.
- Exceptions apply only when AI is part of the research method and fully explained.
- AI-generated illustrations or cover art require explicit editorial approval.
Reason: To ensure visual integrity and prevent data manipulation.
Article 3: Disclosure and Transparency
Transparency builds trust and allows proper assessment of AI’s role.
- Authors must disclose the AI tool used, its purpose, and the extent of human oversight.
- Any use of AI in research must be described in the methodology section.
- AI tools may not be listed or cited as authors.
Reason: To uphold openness and ensure proper attribution of scholarly work.
Article 4: Use of AI by Reviewers and Editors
AI cannot replace human judgment or confidentiality in peer review.
- Reviewers and editors must not upload manuscripts or communications to generative AI tools, as this may breach confidentiality.
- They must not rely on generative AI to assist in the review process, given the risks of inaccuracy and bias.
- They may use AI-assisted databases for concept searching or plagiarism detection, provided the original references are verified.
- Reviewers and editors are fully responsible for their assessments and decisions.
- This policy may be revised when AI meets ethical, security, and accuracy standards set by the journal.
Reason: To preserve the independence, confidentiality, and objectivity of the review and editorial process.
Article 5: Use of AI by the Publisher (LP2M Ma'had Aly Hasyim Asy'ari)
AI may support technical workflows but not editorial judgment.
- The publisher may use AI to assist with technical checks, editing, and proof preparation.
- Human oversight remains central to all publication decisions.
Reason: To enhance efficiency while ensuring human control and accountability.