
Revista Colombiana de Obstetricia y Ginecología

Print version ISSN 0034-7434 | Online version ISSN 2463-0225

Rev. colomb. obstet. ginecol. vol. 74 no. 3, Bogotá, Jul./Sep. 2023. Epub Sep 30, 2023

https://doi.org/10.18597/rcog.4139 

Editorial

The use of artificial intelligence and scientific papers published in the Colombian Journal of Obstetrics and Gynecology

Hernando Gaitán-Duarte, MD, MSc 1

1 Colombian Journal of Obstetrics and Gynecology (RCOG), Bogotá (Colombia).


Artificial intelligence (AI) is currently under the spotlight because of what it represents for society but, even more importantly, because of the expectations it raises for the short and medium term. Merriam-Webster defines intelligence as "the ability to learn or understand or to deal with new or trying situations". In turn, AI has been defined as "the science and engineering of manufacturing intelligent machines, especially intelligent applications" 1. It could therefore be said that AI is the science and engineering of building machines that understand and solve problems. More recently, the International Business Machines Corporation (IBM), citing Russell and Norvig 2, stated that there are two approaches to defining AI, classified according to whether systems are measured against human performance or against an ideal standard of rationality, considering both their thought processes and their behavior. One is the human approach, which covers systems that think and act like humans; the second, the ideal approach, covers systems that think and act rationally.

Robots, already available not only in hospitals 3 but also on online shopping platforms for home use 4, are among the best-known AI technologies. Other examples of artificial intelligence are voice assistants such as Siri or Google Assistant, and ChatGPT, the chatbot from the innovative tech organization OpenAI® 5, which communicates through written interaction. The latter has brought about a revolution in the sense that the machine can not only produce simple text in response to a question, but also appears to learn quickly from previous requests and responses. These technological innovations, and the pace at which AI is developing, have prompted all kinds of reactions, ranging from unlimited expectations to denial, incredulity and outright rejection of these new machines. However, what matters is that they are here to stay, and we need to learn about their strengths and weaknesses.

In the academic and scientific world, ChatGPT poses a challenge because its use could undermine scientific integrity in the form of plagiarism in its broadest definition. According to the American Psychological Association (APA) (sections 8.2 and 8.3) 6, plagiarism is "the act of presenting the words, ideas, or images of another as your own; it denies authors or creators of content the credit they are due." I will attempt to explain, from the perspective of a neophyte in these technologies, some of the elements that can help us understand the risks posed by this text-processing tool and its use, in accordance with the recommendations of the International Committee of Medical Journal Editors (ICMJE).

GPT stands for Generative Pre-trained Transformer. It is a machine learning natural language processing tool driven by a Large Language Model (LLM). LLMs digest huge amounts of text data and infer relationships between the words embedded in the text. Launched in 2019, these models encode the input text using computer algorithms and then decode that internal representation to produce an output text. Internally, the model attaches different weights to parts of the text sequence in order to infer meaning and context. The transformer (one or several neural networks) performs language modeling to "correctly predict" the relationships between words and produce a more coherent output. In the 2022 release (GPT-3.5), the transformer draws on books, articles, images and webpages, among other sources. Moreover, it selects the pieces of the input text (tokens) that it "considers" important for understanding the context by repeatedly weighting the importance of each part of the sequence (self-attention mechanisms), and it uses these weights to predict the output text. However, it has limitations when it comes to aligning this text with user needs 7. It is currently available as an application (app).
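
As an illustration of the self-attention idea described above (a standard formulation used in transformer models, not a detail taken from the cited references), the scaled dot-product attention can be sketched as follows:

$$
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V
$$

Here Q, K and V are the "query", "key" and "value" matrices derived from the token embeddings, and d_k is the dimension of the keys; the softmax yields the weights that determine how strongly each token attends to every other token when the model infers context.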

ChatGPT is a form of deep learning with the advantage, over its predecessors, of being able to incorporate human feedback in the training process to better align the model's output with the user's intention (reinforcement learning from human feedback). This means that the machine learns to give the most appropriate answers depending on the user's specific needs and on the human ratings or feedback given to those answers. However, these algorithms cannot generate any form of intellectual reasoning or mental model 8.
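
A minimal sketch of how such human feedback is commonly formalized in the reinforcement-learning-from-human-feedback literature (an illustrative standard formulation, not drawn from the cited references): for a prompt x and two candidate answers where a human rater preferred y_w over y_l, a reward model r_θ is trained to score the preferred answer higher by minimizing

$$
\mathcal{L}(\theta) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\left[\log \sigma\big(r_\theta(x, y_w) - r_\theta(x, y_l)\big)\right],
$$

where σ is the logistic function. The language model is then tuned to produce answers that this reward model scores highly, which is what aligns its output with human ratings of "appropriateness".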

For the matter at hand - academic work or publication of scientific texts - several problems arise in relation to the use of ChatGPT:

  • The inability to determine which part of the information produced by the algorithms is valid and which is not.

  • Apart from preferences, the machine also embeds user biases and can produce meaningless text (hallucinations).

  • Furthermore, given that the algorithm takes books or articles and mixes them, it does not identify the original information source and, consequently, cannot give the corresponding credit.

  • On the other hand, text resulting from requests made to ChatGPT can be incorporated into subsequently generated text; in other words, it becomes part of the ideas thrust into the public domain.

  • In ChatGPT-generated text, it is impossible to determine how much of the content was produced by the language model and how much by a human, making it difficult to know when to trust ChatGPT's output.

These problems result in the loss of authorship rights, which are of the greatest importance for a researcher who puts forward a fact or an idea that becomes incorporated into a field of knowledge. As mentioned, this constitutes plagiarism and undermines scientific integrity. Tools are currently being developed to detect text written by ChatGPT, but their accuracy is still uncertain 9. On the other hand, other chatbots such as Google Bard produce text with references and sources, or allow the user to include references in the text; however, the machine does not verify whether a citation is a primary or a secondary source, a task that still has to be performed by a human 10.

To circumvent this problem, at the Colombian Journal of Obstetrics and Gynecology (RCOG) we explain to authors the importance of making proper use of direct (textual) citations and indirect citations (paraphrasing, while citing the original author of the idea) 11, in order to give the required credit to the authors who originally presented the cited facts. Moreover, the use of primary sources (the original source where the fact or idea was first presented) is prioritized over secondary sources (citations of papers summarized by others), which, if used, must reference the original source of information 12.

Consequently, to avoid these new forms of plagiarism, starting with the next issue, RCOG will join the ICMJE initiative 13, urging authors to disclose the use of artificial intelligence in the submitted manuscript, given that it is the human author who is responsible for the use of AI in the document. Moreover, authors must carefully review and edit the final text, because AI may generate results that appear adequate but are actually wrong, incomplete or biased. This information must be included in the cover letter (title page) as well as in the Materials and Methods section of the submission. GPTs and ChatGPT should not be listed as authors or coauthors because they do not meet the criterion of accountability for the integrity, validity and originality of the work; neither should these AI applications be cited in the bibliography. Authors must be able to vouch for the absence of plagiarism in the text, citations and images, and for the appropriate attribution of all cited material, including complete citations.

In this way, we expect to help our researchers and students understand the responsibility that authoring an academic or scientific work entails, as well as the appropriate use of these machines and their limitations, for the benefit of scientific integrity and of the validity of the information provided to readers.

REFERENCES

1. McCarthy J. From here to human-level AI. Artif Intell. 2007;171(18):1174-82. https://doi.org/10.1016/j.artint.2007.10.009

2. International Business Machines Corporation. What is artificial intelligence (AI)? [Internet]. Available at: https://www.ibm.com/topics/artificial-intelligence

3. Morrell AL, Morrell Jr AC, Morrell AG, Freitas J, Tustumi F, De-Oliveira L, et al. The history of robotic surgery and its evolution: When illusion becomes reality. Rev Col Bras Cir. 2021;48:e20202798. https://doi.org/10.1590/0100-6991e-20202798

4. Alibaba.com [Internet]. Available at: https://www.alibaba.com/showroom/humanoid-robot.html

5. OpenAI [Internet]. Available at: https://chat.openai.com/auth/l

6. American Psychological Association. APA Style. 7th ed. APA [Internet]. 2022. Available at: https://apastyle.apa.org/products/publication-manual-7th-edition

7. Ruby M. How ChatGPT works: The model behind the bot [Internet]. Towards Data Science; 2023. Available at: https://towardsdatascience.com/how-chatgpt-worksthe-models-behind-the-bot-1ce5fca96286

8. Dien J. Editorial: Generative artificial intelligence as a plagiarism problem. Biol Psychol. 2023;181:108621. https://doi.org/10.1016/j.biopsycho.2023.108621

9. Demers T. 16 of the best AI and ChatGPT content detectors compared [Internet]. Search Engine Land; 2023. Available at: https://searchengineland.com/aichatgpt-content-detectors-395957

10. Google Bard [Internet]. Available at: https://bard.google.com/?hl=es

11. Universidad de Puerto Rico. Sistema de Bibliotecas. Manual APA 7a edición. Citas y referencias [Internet]. 2023. Available at: https://uprrp.libguides.com/c.php?g=985694&p=7256246

12. Universidad de Guadalajara. Clasificación general de las fuentes de información [Internet]. 2023. Available at: http://biblioteca.udgvirtual.udg.mx/portal/clasificaciongeneral-de-las-fuentes-de-informacion

13. International Committee of Medical Journal Editors (ICMJE). Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals [Internet]. 2023. Available at: https://www.icmje.org/icmje-recommendations.pdf

This is an open-access article distributed under the terms of the Creative Commons Attribution License.