Guidelines for the Use of Artificial Intelligence (AI) in the journals of the Faculty of Architecture and Urbanism, University of Chile

Justification

The widespread use of Artificial Intelligence (AI) tools has introduced new challenges for scientific production on a global scale. In this context, establishing guidelines within a framework of good practice is essential to safeguard academic integrity and transparency. In response to this scenario, the journals of the Faculty of Architecture and Urbanism (FAU Journals) establish guidelines to orient authors, reviewers and editorial teams towards the responsible use of these technologies, preventing malpractice and ensuring quality standards. Furthermore, these guidelines seek to promote a critical adoption of AI, aligned with principles of academic rigour that foster the production and circulation of knowledge from a fair and transparent perspective.

Objective

To establish principles, guidelines and good practices for the responsible use of AI in FAU journals, promoting transparency and traceability in the use of AI tools for the preparation of original manuscripts.

Principles

  1. Responsibility and human verification: AI tools are not substitutes for critical thinking and human evaluation. Therefore, if used at any stage of the editorial process, they must always be subject to human supervision and control.
  2. Human authorship: Authors and co-authors are responsible for the content of their work. AI tools cannot be recognised as authors, as authorship entails responsibilities that can only be attributed to human beings.
  3. Integrity, transparency and traceability: The use of AI must be declared to promote academic integrity, transparency and traceability, specifying the AI tool used, the purpose of its use and the extent of human supervision.
  4. Protection of personal data and intellectual property: At all stages of the editorial process, the privacy of personal, sensitive or confidential data must be protected, and the use of AI must not infringe intellectual property rights.

Framework of Reference

This proposal is based on international guidelines and policies regarding the responsible use of AI in scientific communication. The position of the Committee on Publication Ethics (COPE, 2024) and the SciELO Guide (2023) establish that AI cannot be recognised as an author, as authorship must be human. SciELO (2023) states that the use of AI must be declared in the abstract and methodology sections of articles, or their equivalents. Both documents warn that concealing its use constitutes an ethical breach of scientific integrity.

The Heredia Declaration (2024) reinforces the need for transparent, traceable and reproducible use, with human verification, respect for intellectual property and measures to mitigate bias. Elsevier’s editorial policies (2025), for their part, emphasise the requirement to declare in detail the use of AI in articles, promoting respect for confidentiality; consequently, reviewers and editors must not upload manuscripts to AI tools for evaluation. The use of AI-generated figures is also restricted, except when they form part of the methodology.

Good practices from the Directory of Open Access Journals (DOAJ, 2025) highlight the importance of promoting the proper use of AI tools, ensuring continuous human oversight and exercising caution regarding associated risks such as bias, inaccuracies or fabricated references.

In general, these guidelines converge around four key principles: i) exclusively human authorship, ii) mandatory transparency in the declaration of use, iii) full responsibility on the part of authors, and iv) the use of AI as support, not as a substitute for expert judgement.

Based on the above, and following discussions among the editorial teams of FAU Journals, FAU Library and SISIB, the following AI use guidelines are established for authors, reviewers and editors.

AI Use Guidelines for Authors

Permitted uses

  • Grammatical and stylistic correction of manuscripts with human verification.
  • Translation of title, abstract and keywords with human verification.

Non-permitted uses

  • Including AI as an author or co-author.
  • Incorporating fictitious data, citations or references generated by AI.
  • Integrating figures developed or altered by generative AI tools, unless they form part of the methodology.

Declaration of AI use
As a good practice to promote academic integrity, transparency and traceability, authors must declare the use of AI tools. In the journal’s submission checklist, authors must indicate whether their article includes the use of AI in any section, figure or table. If so, authors must complete an “AI Use Declaration” at the end of the submitted manuscript, specifying the following:

  • Tool: identification of the tool used.
  • Purpose of AI use: explicit explanation of the purpose and contribution of AI use to the manuscript.
  • Section of the manuscript where it was used: details of the sections in which AI was used and the degree of human supervision.

If the use of AI forms part of the article’s methodology, it must be described in detail in the methodology section or its equivalent.

If AI was used solely for spelling, grammatical or stylistic correction, this must be indicated in the “AI Use Declaration”; it is not necessary to include it in the methodology.

Authors retain full responsibility for the content and comprehensive review of the text, tables, figures and bibliographic references of submitted and published articles.

AI Use Guidelines for Reviewers

Permitted uses

  • Grammatical and stylistic correction when drafting review comments.

Non-permitted uses

  • Uploading an original, unpublished and confidential manuscript, whether in full or in part, to AI tools for review. The evaluation decision is an exclusively human task.

Reviewers must report to editorial teams any suspicion of fictitious references, fabricated data or improper use of AI.

Guidelines for the Editorial Team

  • Require authors to declare the use of AI at the time of manuscript submission and prior to approval of the final version.
  • Use similarity detection and AI detection tools (e.g., Turnitin), interpreting reports with caution, as they do not constitute definitive proof and require human supervision.
  • Once articles are published, the editorial team may use AI tools for dissemination purposes.

Review and Updating

The guidelines set out in this document will be periodically reviewed and updated in accordance with technological, ethical and regulatory developments in the field.

References

COPE Council. (2024). COPE Position Statement – Authorship and AI. https://doi.org/10.24318/cCVRZBms

DOAJ Blog. (2025). Help or hindrance? Peer review in the age of AI. https://blog.doaj.org/es/2025/09/16/help-or-hindrance-peer-review-in-the-age-of-ai/

Elsevier. (2025). Generative AI policies for journals. https://www.elsevier.com/about/policies-and-standards/generative-ai-policies-for-journals

Penabad-Camacho, L., Penabad-Camacho, M. A., Mora-Campos, A., Cerdas-Vega, G., Morales-López, Y., Ulate-Segura, M., Méndez-Solano, A., Nova-Bustos, N., Vega-Solano, M. F., & Castro-Solano, M. M. (2024). Heredia Declaration: Principles on the use of artificial intelligence in scientific publishing. Revista Electrónica Educare, (28), 1–10. https://doi.org/10.15359/ree.28-S.19967

SciELO. (2023). Guide to the use of Artificial Intelligence tools and resources in the communication of research in the SciELO Network. https://wp.scielo.org/wp-content/uploads/Guia-de-uso-de-herramientas-y-recursos-de-IA-20230914.pdf