Artificial Intelligence (AI) Policy
PENA TEKNIK recognizes the value of artificial intelligence (AI) and its potential to help authors in the research and writing process. PENA TEKNIK welcomes developments in this area that enhance opportunities for generating ideas, accelerating research discovery, synthesizing or analyzing findings, polishing language, and structuring a submission.
Large language models (LLMs) and other generative AI tools offer opportunities to accelerate research and its dissemination. While these opportunities can be transformative, such tools cannot replicate human creative and critical thinking. PENA TEKNIK's policy on the use of AI technology has been developed to help authors, reviewers, and editors make sound judgments about the ethical use of such technology.
For Authors
AI Assistance
We recognize that AI-assisted writing is becoming increasingly common as access to this technology becomes easier. AI tools that suggest improvements to your work, such as tools for correcting language, grammar, or structure, are considered AI assistance tools and do not require disclosure by authors or reviewers. However, authors remain responsible for ensuring that their submissions are accurate and meet rigorous research standards.
Generative AI
The use of AI tools that can generate content such as references, text, images, or other forms of content must be disclosed if used by the author or reviewer. Authors must cite the original source, not the generative AI tool, as the primary source in the references. If your manuscript is partially or entirely generated using AI, this must be disclosed upon submission so that the editorial team can evaluate the generated content.
Authors are required to follow these guidelines:
Clearly state the use of language models in the manuscript, including which model was used and for what purpose. Please use the methods section or acknowledgments, as appropriate.
Verify the accuracy, validity, and appropriateness of the content and citations generated by the language model, and correct any errors, biases, or inconsistencies.
Be aware of the potential for plagiarism, where an LLM may have copied substantial text from other sources. Check the original sources to ensure you are not plagiarizing someone else's work.
Be aware of the potential for fabrication where an LLM may have generated false content, including factual errors, or generated citations that do not exist. Ensure you have verified all claims in your article before submission.
Please note that AI bots such as ChatGPT should not be listed as authors in your submission.
While submissions will not be rejected for disclosed use of generative AI, if the Editor becomes aware of improper, undisclosed use of generative AI in the preparation of a submission, the Editor reserves the right to reject it at any stage of the publication process. Improper use of generative AI includes generating incorrect text or content, plagiarism, and improper attribution of prior sources.
For Reviewers and Editors
The use of AI tools or LLMs for editorial work raises issues of confidentiality and copyright: these tools may learn from the data they receive over time and use it to generate output for others.
AI Assistance
Reviewers may wish to use generative AI to improve the language quality of their reviews. If they do so, they remain responsible for the content, accuracy, and constructiveness of the review.
Journal editors are ultimately responsible for the content published in their journals and act as guardians of the scientific record. Editors may use generative AI tools to help identify suitable peer reviewers.
Generative AI
Reviewers who use ChatGPT or other generative AI tools to produce inappropriate review reports will not be invited to review for the journal again, and their reviews will not be considered in the final decision.
Editors should not use ChatGPT or other generative AI tools to generate decision letters or summaries of unpublished research.
Undisclosed or Inappropriate Use of Generative AI
Reviewers who suspect inappropriate or undisclosed use of generative AI in a submission should report their concerns to the Journal Editor. If Editors suspect the use of ChatGPT or other generative AI tools in a submitted manuscript or review, they should apply this policy in their editorial assessment of the matter or contact a PENA TEKNIK representative for advice.
PENA TEKNIK and the Journal Editor will lead a joint investigation into concerns raised regarding the inappropriate or undisclosed use of generative AI in published articles. The investigation will be conducted in accordance with the guidelines issued by COPE (the Committee on Publication Ethics) and our internal policies.
