Justification
The widespread use of Artificial Intelligence (AI) tools has introduced new challenges for scientific production on a global scale. In this context, establishing guidelines within a framework of good practice is essential to safeguard academic integrity and transparency. In response, the journals of the Faculty of Architecture and Urbanism (FAU Journals) set out guidelines to orient authors, reviewers and editorial teams in the responsible use of these technologies, preventing malpractice and ensuring quality standards. These guidelines also seek to promote a critical adoption of AI, aligned with principles of academic rigour that foster the production and circulation of knowledge from a fair and transparent perspective.
Objective
To establish principles, guidelines and good practices for the responsible use of AI in FAU journals, promoting transparency and traceability in the use of AI tools for the preparation of original manuscripts.
Principles
Framework of Reference
This proposal is based on international guidelines and policies on the responsible use of AI in scientific communication. The position of the Committee on Publication Ethics (COPE, 2023) and the SciELO Guide (2023) establish that AI cannot be recognised as an author, since authorship must be human. SciELO (2023) states that the use of AI must be declared in the abstract and in the methodology section of articles, or their equivalents, and warns that concealing such use constitutes an ethical breach of scientific integrity.
The Heredia Declaration (2024) reinforces the need for transparent, traceable and reproducible use, with human verification, respect for intellectual property and measures to mitigate bias. Elsevier’s editorial policies (2025), for their part, require that the use of AI in articles be declared in detail and that confidentiality be respected: reviewers and editors must not upload manuscripts to AI tools for evaluation. The use of AI-generated figures is also restricted, except when they form part of the methodology.
Good practices from the Directory of Open Access Journals (DOAJ, 2025) highlight the importance of promoting the proper use of these tools, ensuring continuous human oversight and exercising caution regarding associated risks such as bias, inaccuracies and fabricated references.
In general, these guidelines converge around four key principles: i) exclusively human authorship, ii) mandatory transparency in the declaration of use, iii) full responsibility on the part of authors, and iv) the use of AI as support, not as a substitute for expert judgement.
Based on the above, and following discussions among the editorial teams of FAU Journals, FAU Library and SISIB, the following AI use guidelines are established for authors, reviewers and editors.
AI Use Guidelines for Authors
Permitted uses
Non-permitted uses
Declaration of AI use
As a good practice to promote academic integrity, transparency and traceability, authors must declare the use of AI tools. In the journal’s submission checklist, authors must indicate whether their article includes the use of AI in any section, figure or table. If so, authors must complete an “AI Use Declaration” at the end of the submitted manuscript, specifying the following:
If the use of AI forms part of the article’s methodology, it must be described in detail in the methodology section or its equivalent.
If AI was used solely for spelling, grammatical or stylistic correction, this must be indicated in the “AI Use Declaration”; it is not necessary to include it in the methodology.
Authors retain full responsibility for the content and comprehensive review of the text, tables, figures and bibliographic references of submitted and published articles.
AI Use Guidelines for Reviewers
Permitted uses
Non-permitted uses
Reviewers must report to editorial teams any suspicion of fictitious references, fabricated data or improper use of AI.
Guidelines for the Editorial Team
Review and Updating
The guidelines set out in this document will be periodically reviewed and updated in accordance with technological, ethical and regulatory developments in the field.
References
COPE Council. (2023). Authorship and AI tools: COPE position statement. https://doi.org/10.24318/cCVRZBms
DOAJ Blog. (2025, September 16). Help or hindrance? Peer review in the age of AI. https://blog.doaj.org/es/2025/09/16/help-or-hindrance-peer-review-in-the-age-of-ai/
Elsevier. (2025, November 14). Generative AI policies for journals. https://www.elsevier.com/about/policies-and-standards/generative-ai-policies-for-journals
Penabad-Camacho, L., Penabad-Camacho, M. A., Mora-Campos, A., Cerdas-Vega, G., Morales-López, Y., Ulate-Segura, M., Méndez-Solano, A., Nova-Bustos, N., Vega-Solano, M. F., & Castro-Solano, M. M. (2024). Heredia Declaration: Principles on the use of artificial intelligence in scientific publishing. Revista Electrónica Educare, (28), 1–10. https://doi.org/10.15359/ree.28-S.19967
SciELO. (2023). Guide to the use of Artificial Intelligence tools and resources in the communication of research in the SciELO Network. https://wp.scielo.org/wp-content/uploads/Guia-de-uso-de-herramientas-y-recursos-de-IA-20230914.pdf