Discussion Articles

Articles that discuss the use of LLMs in science.

Title | Type of Resource | Link to Resource | Date Recorded | Open Science | Use of LLM | Research Discipline(s) | Description of Resource
ChatGPT Undermines Human Reflexivity, Scientific Responsibility and Responsible Management Research | Discussion Article | March 5, 2024 | Open Source | Other | Business | We herewith certify that this essay represents original and independent scholarship. That is, generative AI was not used in the idea-generating phase of this essay, nor was it used to assist the writing or editing of this essay (with the exception of serving the purpose of a ‘bad’ example). We observe with great concern that many journal publishers – unlike Science – become complicit in undermining the meaning of the term ‘original scholarship’ by allowing the use of generative AI in the research process, while actual enforceability of relevant policies is low. Eventually, we need to be mindful of the deskilling of the (academic) mental sphere, while corporate influences on what constitutes knowledge are set to grow.
The future of research in an artificial intelligence-driven world [multiple essays] | Discussion Article | March 5, 2024 | Preprint | Other | Business, Other | Current and future developments in artificial intelligence (AI) systems have the capacity to revolutionize the research process for better or worse. On the one hand, AI systems can serve as collaborators as they help streamline and conduct our research. On the other hand, such systems can also become our adversaries when they impoverish our ability to learn as theorists, or when they lead us astray through inaccurate, biased, or fake information. No matter which angle is considered, and whether we like it or not, AI systems are here to stay. In this curated discussion, we raise questions about human centrality and agency in the research process, and about the multiple philosophical and practical challenges we are facing now and ones we will face in the future.
Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review | Discussion Article | February 21, 2024 | Open Source | Science Communication, Other | Other | The emergence of systems based on large language models (LLMs) such as OpenAI’s ChatGPT has created a range of discussions in scholarly circles. Since LLMs generate grammatically correct and mostly relevant (yet sometimes outright wrong, irrelevant or biased) outputs in response to provided prompts, using them in various writing tasks including writing peer review reports could result in improved productivity. Given the significance of peer reviews in the existing scholarly publication landscape, exploring challenges and opportunities of using LLMs in peer review seems urgent. After the generation of the first scholarly outputs with LLMs, we anticipate that peer review reports too would be generated with the help of these systems. However, there are currently no guidelines on how these systems should be used in review tasks.
ChatGPT in Thematic Analysis: Can AI become a research assistant in qualitative research? | Discussion Article | January 29, 2024 | Preprint | Data Analysis | Sociology | The release of ChatGPT in November 2022 heralded a new era in various professional fields, yet its application in qualitative data analysis (QDA) remains underdeveloped. This article presents an experiment involving applying ChatGPT (Model GPT-4) to thematic analysis. By employing an adapted version of King et al.’s (2018) Template Analysis framework, this article aims to assess how ChatGPT can help with QDA in a full analytical process of a sample dataset provided by Lumivero. My experiment includes applying ChatGPT in four stages: data familiarization; preliminary coding and initial template formation; clustering and template modification and finalization; and theme development. Findings reveal GPT-4’s efficiency and speed in grasping the data and generating codes, subcodes, clusters, and themes, alongside its learning and adapting capabilities. However, the current version of the model has limitations in terms of effectively handling detailed analysis of large databases and producing consistent results, as well as the need to move across workspaces and the lack of relevant training data for QDA purposes.
Could AI change the scientific publishing market once and for all? | Discussion Article | January 29, 2024 | Preprint | Science Communication | Other | Artificial-intelligence tools in research like ChatGPT are playing an increasingly transformative role in revolutionizing scientific publishing and re-shaping its economic background. They can help academics to tackle such issues as limited space in academic journals, accessibility of knowledge, delayed dissemination, or the exponential growth of academic output. Moreover, AI tools could potentially change scientific communication and the academic publishing market as we know them. They can help to promote Open Access (OA) in the form of preprints, dethrone the entrenched journals and publishers, as well as introduce novel approaches to the assessment of research output. It is also imperative that they should do just that, once and for all.
Generative AI for Economic Research: Use Cases and Implications for Economists | Discussion Article | January 15, 2024 | Open Source | Other | Economics | Generative AI, in particular large language models (LLMs) such as ChatGPT, has the potential to revolutionize research. I describe dozens of use cases along six domains in which LLMs are starting to become useful as both research assistants and tutors: ideation and feedback, writing, background research, data analysis, coding, and mathematical derivations. I provide general instructions and demonstrate specific examples of how to take advantage of each of these, classifying the LLM capabilities from experimental to highly useful. I argue that economists can reap significant productivity gains by taking advantage of generative AI to automate micro tasks. Moreover, these gains will grow as the performance of AI systems across all of these domains will continue to improve. I also speculate on the longer-term implications of AI-powered cognitive automation for economic research. The online resources associated with this paper offer instructions for how to get started and will provide regular updates on the latest capabilities of generative AI that are useful for economists.
Can AI language models replace human participants? | Discussion Article | January 4, 2024 | Data Collection | Psychology | Recent work suggests that language models such as GPT can make human-like judgments across a number of domains. We explore whether and when language models might replace human participants in psychological science. We review nascent research, provide a theoretical model, and outline caveats of using AI as a participant.
Guinea Pigbots: Doing research with human subjects is costly and cumbersome. Can AI chatbots replace them? | Discussion Article | January 4, 2024 | Open Source | Data Collection | Psychology | For Kurt Gray, a social psychologist at the University of North Carolina at Chapel Hill, conducting experiments comes with certain chores. Before embarking on any study, his lab must get ethical approval from an institutional review board, which can take weeks or months. Then his team has to recruit online participants—easier than bringing people into the lab, but Gray says the online subjects are often distracted or lazy. Then the researchers spend hours cleaning the data. But earlier this year, Gray accidentally saw an alternative way to do things. He was working with computer scientists at the Allen Institute for Artificial Intelligence to see whether they could develop an AI system that made moral judgments like humans. But first they figured they’d see if a system from the startup OpenAI could already do the job. The team asked GPT-3.5, which produces eerily humanlike text, to judge the ethics of 464 scenarios, previously appraised by human subjects, on a scale from –4 (unethical) to 4 (ethical)—scenarios such as selling your house to fund a program for the needy or having an affair with your best friend’s spouse. The system’s answers, it turned out, were nearly identical to human responses, with a correlation coefficient of 0.95...
Exploring the Frontiers of LLMs in Psychological [research] Applications: A Comprehensive Review | Discussion Article | January 4, 2024 | Preprint | Other | Psychology | This paper explores the frontiers of large language models (LLMs) in psychology applications. Psychology has undergone several theoretical changes, and the current use of Artificial Intelligence (AI) and Machine Learning, particularly LLMs, promises to open up new research directions. We provide a detailed exploration of how LLMs like ChatGPT are transforming psychological research. The paper discusses the impact of LLMs across various branches of psychology, including cognitive and behavioral, clinical and counseling, educational and developmental, and social and cultural psychology, highlighting their potential to simulate aspects of human cognition and behavior. The paper delves into the capabilities of these models to emulate human-like text generation, offering innovative tools for literature review, hypothesis generation, experimental design, experimental subjects, data analysis, academic writing, and peer review in psychology. While LLMs are essential in advancing research methodologies in psychology, the paper also cautions about their technical and ethical challenges. There are issues like data privacy, the ethical implications of using LLMs in psychological research, and the need for a deeper understanding of these models' limitations. Researchers should responsibly use LLMs in psychological studies, adhering to ethical standards and considering the potential consequences of deploying these technologies in sensitive areas. Overall, the article provides a comprehensive overview of the current state of LLMs in psychology, exploring potential benefits and challenges. It serves as a call to action for researchers to leverage LLMs' advantages responsibly while addressing associated risks.
Control Risk for Potential Misuse of Artificial Intelligence in Science | Discussion Article | December 12, 2023 | Preprint | Other | Other | The expanding application of Artificial Intelligence (AI) in scientific fields presents unprecedented opportunities for discovery and innovation. However, this growth is not without risks. AI models in science, if misused, can amplify risks like creation of harmful substances, or circumvention of established regulations. In this study, we aim to raise awareness of the dangers of AI misuse in science, and call for responsible AI development and use in this domain. We first itemize the risks posed by AI in scientific contexts, then demonstrate the risks by highlighting real-world examples of misuse in chemical science. These instances underscore the need for effective risk management strategies. In response, we propose a system called SciGuard to control misuse risks for AI models in science. We also propose a red-teaming benchmark SciMT-Safety to assess the safety of different systems. Our proposed SciGuard shows the least harmful impact in the assessment without compromising performance in benign tests. Finally, we highlight the need for a multidisciplinary and collaborative effort to ensure the safe and ethical use of AI models in science. We hope that our study can spark productive discussions on using AI ethically in science among researchers, practitioners, policymakers, and the public, to maximize benefits and minimize the risks of misuse.