Position statement on the use of AI in scientific writing and publishing
Keywords:
LLM, large language model, Artificial intelligence, AI, Intelligence artificielle, IA, LLM in science, AI in scientific publications, IA et publications scientifiques, Scholarly publishing, Research integrity, AI disclosure, Scientific authorship
Abstract
Large language models (LLMs) are rapidly reshaping scientific writing, reviewing, and publishing. Journals must respond in ways that safeguard trust while acknowledging the new realities created by these artificial intelligence-based technologies. This contribution details the position developed by the editors of Madagascar Conservation & Development on the use of LLMs in scholarly work and for publication in the journal. These tools can support authors by enhancing clarity and by reducing language barriers and structural inequities in global science, but recent editorial experience shows that LLMs can generate errors, fabricate references, and formulate false claims that may escape traditional peer review. In volunteer-run journals, such failures impose substantial burdens on editors and reviewers. Our position is simple: LLMs may be used to support and enable authors, but their output must never be trusted blindly. Authors remain fully responsible for ensuring the accuracy, originality, and validity of all content, regardless of the tools employed, and any use of LLMs must be disclosed transparently. Protecting scientific integrity remains a shared responsibility.
Résumé
Les grands modèles de langage, ou large language models (LLM), transforment rapidement l’écriture scientifique, l’évaluation par les pairs et les processus de publication. Les revues doivent y répondre de manière à préserver la confiance, tout en reconnaissant les nouvelles réalités induites par ces technologies fondées sur l’intelligence artificielle (IA). Cette contribution présente la position élaborée par les rédacteurs de Madagascar Conservation & Development concernant l’utilisation des LLM dans les travaux scientifiques et pour toute contribution soumise à la revue. L’IA peut aider les auteurs en améliorant la clarté des textes, en réduisant les barrières linguistiques et certaines inégalités structurelles au sein de la science mondiale. Cependant, notre expérience éditoriale récente montre que l’utilisation des LLM peut générer des erreurs, produire des références inexistantes et formuler des affirmations erronées susceptibles d’échapper à l’évaluation par les pairs traditionnelle. Dans les revues reposant exclusivement sur le bénévolat, de telles défaillances se traduisent par une surcharge substantielle de travail pour les rédacteurs et les évaluateurs. Notre position est simple : les LLM peuvent être utilisés pour aider et accompagner les auteurs, mais leurs résultats ne doivent jamais être acceptés sans vérification. Les auteurs demeurent entièrement responsables de l’exactitude, de l’originalité et de la validité de l’ensemble du contenu, quels que soient les outils employés, et toute utilisation de LLM doit être déclarée de manière transparente. La préservation de l’intégrité scientifique reste une responsabilité partagée.
License
Copyright (c) 2025 Madagascar Conservation & Development

This work is licensed under a Creative Commons Attribution 4.0 International License.
All journal content, except where otherwise noted, is licensed under a Creative Commons Attribution 4.0 International License and is published here by Indian Ocean e-Ink under license from the author(s).
