Articles

Below are articles describing the use of LLMs in research workflows. You can use the Search option to find examples from your discipline, or for specific workflow applications you may be considering.

Each entry lists the Title, Type of Resource, Date Recorded, Open Science status, Use of LLM, Research Discipline(s), and a Description of the Resource.
Artificial Intelligence Can Persuade Humans on Political Issues
Type of Resource: Research Article | Date Recorded: September 28, 2024 | Open Science: Preprint | Use of LLM: Data Collection | Research Discipline(s): Political Science
The emergence of transformer models that leverage deep learning and web-scale corpora has made it possible for artificial intelligence (AI) to tackle many higher-order cognitive tasks, with critical implications for industry, government, and labor markets in the US and globally. Here, we investigate whether the currently most powerful openly available AI model, GPT-3, is capable of influencing the beliefs of humans, a social behavior until recently seen as the unique purview of other humans. Across three preregistered experiments featuring diverse samples of Americans (total N=4,836), we find consistent evidence that messages generated by AI are persuasive across a number of policy issues, including an assault weapon ban, a carbon tax, and a paid parental-leave program. Further, AI-generated messages were as persuasive as messages crafted by lay humans. Compared to the human authors, participants rated the author of AI messages as more factual and logical, but as less angry, less unique, and less likely to use storytelling. Our results show the current generation of large language models can persuade humans, even on polarized policy issues. This work has important implications for regulating AI applications in political contexts, to counter its potential use in misinformation campaigns and other deceptive political activities.
The persuasive effects of political microtargeting in the age of generative artificial intelligence
Type of Resource: Research Article | Date Recorded: September 28, 2024 | Open Science: Open Source | Use of LLM: Data Collection | Research Discipline(s): Political Science
The increasing availability of microtargeted advertising and the accessibility of generative artificial intelligence (AI) tools, such as ChatGPT, have raised concerns about the potential misuse of large language models in scaling microtargeting efforts for political purposes. Recent technological advancements, involving generative AI and personality inference from consumed text, can potentially create a highly scalable "manipulation machine" that targets individuals based on their unique vulnerabilities without requiring human input. This paper presents four studies examining the effectiveness of this putative "manipulation machine." The results demonstrate that personalized political ads tailored to individuals' personalities are more effective than nonpersonalized ads (studies 1a and 1b). Additionally, we showcase the feasibility of automatically generating and validating these personalized ads on a large scale (studies 2a and 2b). These findings highlight the potential risks of utilizing AI and microtargeting to craft political messages that resonate with individuals based on their personality traits. This should be an area of concern to ethicists and policy makers.
Analysing the impact of ChatGPT in research
Type of Resource: Research Article | Date Recorded: September 23, 2024 | Open Science: Open Source | Use of LLM: Science Communication | Research Discipline(s): Other
Large Language Models (LLMs) are a type of machine learning model that handles a wide range of Natural Language Processing (NLP) scenarios. In December 2022, OpenAI released ChatGPT, a tool that, within a few months, became the most representative example of LLMs, automatically generating unique and coherent text on many topics, summarising and rewriting it, or even translating it to other languages. ChatGPT has caused some controversy in academia, since students can use it to generate unique text for writing assessments, and it is sometimes extremely difficult to distinguish whether text comes from ChatGPT or a person. In research, some journals have specifically banned ChatGPT from scientific papers. However, when used correctly, it becomes a powerful tool to rewrite, for instance, scientific papers and thus deliver researchers' messages in a better way. In this paper, we conduct an empirical study of the impact of ChatGPT in research. We downloaded the abstracts of over 45,000 papers published between Dec 2022 and Feb 2023 in over 300 journals belonging to different research publishers. We used four of the best-known ChatGPT detection tools and conclude that ChatGPT played a role in around 10% of the papers published by every publisher, showing that authors from different fields have rapidly adopted such a tool in their research.
Today's Academic Research: The Role of ChatGPT Writing
Type of Resource: Research Article | Date Recorded: September 23, 2024 | Open Science: Open Source | Use of LLM: Describing Results, Science Communication, Other | Research Discipline(s): Education, Other
The purpose of this study is to examine the place of ChatGPT writing in the current academic environment. Significant attention has been drawn to the remarkable capacity of ChatGPT, a sophisticated language model created by OpenAI, to produce text responses that closely mimic human speech. The study examines ChatGPT's effects on a number of academic areas, including writing support, data analysis, literature reviews, and scientific cooperation. It looks at the benefits and drawbacks of using ChatGPT in academic research and offers some insight into prospective future uses for this technology. To efficiently answer the research questions and accomplish the stated goals, the study used a rapid literature review approach. The study identified several uses of ChatGPT in academic writing, including data gathering and collaboration, along with their implications and limitations. The study also looked at how to prevent plagiarism in written work produced using ChatGPT. In conclusion, if ChatGPT is used wisely and responsibly, it has the potential to dramatically enhance and revolutionize academic research, enabling multidisciplinary cooperation.
ChatGPT as Research Scientist: Probing GPT's capabilities as a Research Librarian, Research Ethicist, Data Generator, and Data Predictor
Type of Resource: Research Article | Date Recorded: September 23, 2024 | Open Science: Open Source | Use of LLM: Research Design, Data Generation, Data Analysis | Research Discipline(s): Psychology
How good a research scientist is ChatGPT? We systematically probed the capabilities of GPT-3.5 and GPT-4 across four central components of the scientific process: as a Research Librarian, Research Ethicist, Data Generator, and Novel Data Predictor, using psychological science as a testing field. In Study 1 (Research Librarian), unlike human researchers, GPT-3.5 and GPT-4 hallucinated, authoritatively generating fictional references 36.0% and 5.4% of the time, respectively, although GPT-4 exhibited an evolving capacity to acknowledge its fictions. In Study 2 (Research Ethicist), GPT-4 (though not GPT-3.5) proved capable of detecting violations like p-hacking in fictional research protocols, correcting 88.6% of blatantly presented issues and 72.6% of subtly presented issues. In Study 3 (Data Generator), both models consistently replicated patterns of cultural bias previously discovered in large language corpora, indicating that ChatGPT can simulate known results, an antecedent to usefulness for both data generation and skills like hypothesis generation. Contrastingly, in Study 4 (Novel Data Predictor), neither model was successful at predicting new results absent from their training data, and neither appeared to leverage substantially new information when predicting more vs. less novel outcomes. Together, these results suggest that GPT is a flawed but rapidly improving librarian, a decent research ethicist already, capable of data generation in simple domains with known characteristics, but poor at predicting novel patterns of empirical data to aid future experimentation.
The Value of Generative AI for Qualitative Research: A Pilot Study
Type of Resource: Research Article, Use Case Example | Date Recorded: September 23, 2024 | Open Science: Open Source | Use of LLM: Data Analysis | Research Discipline(s): Data Science
This mixed-methods study investigates the potential of introducing generative AI (ChatGPT 4 and Bard) as part of a deductive qualitative research design that requires coding, focusing on possible gains in cost-effectiveness, coding throughput time, and inter-coder reliability (Cohen's Kappa). The study involved semi-structured interviews with five domain experts and analyzed a dataset of 122 respondents that required categorization into six predefined categories. The results from using generative AI coders were compared with those from a previous study where human coders carried out the same task. In this comparison, we evaluated the performance of AI-based coders against two groups of human coders, comprising three experts and three non-experts. Our findings support the replacement of human coders with generative AI ones, specifically ChatGPT, for deductive qualitative research of limited scope. The experimental group, consisting of three independent generative AI coders, outperformed both control groups in coding effort, with a fourfold (4x) advantage in efficiency and a fifteenfold (15x) advantage in throughput time; the latter can be explained by leveraging parallel processing. Concerning expert vs. non-expert coders, minimal evidence suggests a preference for experts. Although experts code slightly faster (17%), their inter-coder reliability showed no substantial advantage. A hybrid approach, combining ChatGPT and domain experts, shows the most promise. This approach reduces costs, shortens project timelines, and enhances inter-coder reliability, as indicated by higher Cohen's Kappa values.
In conclusion, generative AI, exemplified by ChatGPT, offers a viable alternative to human coders, in combination with human research involvement, delivering cost savings and faster research completion without sacrificing notable reliability. These insights, while limited in scope, show potential for further studies with larger datasets, more inductive qualitative research designs, and other research domains.
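The inter-coder reliability metric used in the study above, Cohen's kappa, corrects raw agreement between two coders for agreement expected by chance. A minimal sketch of the computation follows; the labels and data are invented for illustration and are not taken from the study:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labelling the same items."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: fraction of items both coders labelled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two coders categorizing 10 responses.
a = ["pos", "pos", "neg", "neu", "pos", "neg", "neg", "neu", "pos", "neg"]
b = ["pos", "neg", "neg", "neu", "pos", "neg", "pos", "neu", "pos", "neg"]
print(round(cohens_kappa(a, b), 3))  # → 0.688
```

Raw agreement here is 0.8, but kappa is lower (0.688) because the coders' shared label frequencies make some agreement likely by chance, which is exactly why the study reports kappa rather than raw agreement.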
Beyond the Average: Exploring the Potential and Challenges of Large Language Models in Social Science Research
Type of Resource: Research Article | Date Recorded: September 22, 2024 | Open Science: Open Source | Use of LLM: Other | Research Discipline(s): Computer Science
This paper delves into the integration of Large Language Models (LLMs) in social science research through a case study at the Centre for Consumer Society Research (CCSR). It examines the use of LLMs across the research cycle (model development, data collection, analysis, editing, and reporting), highlighting how they can augment research efficiency and creativity. It also critically addresses the propensity of LLMs to contribute to average-quality research, underscoring the urgency for ethical guidelines and educational initiatives. The paper contributes significantly by mapping out the human, technological, and procedural barriers and enablers to AI integration, providing a multifaceted view of LLM adoption and its implications for academia and policy making. Through empirical investigation and analysis, this study offers practical insights, establishes a baseline of current LLM use, pinpoints perceived limitations, and articulates calls for responsible governance within the social sciences.
Codebook LLMs: Adapting Political Science Codebooks for LLM Use and Adapting LLMs to Follow Codebooks
Type of Resource: Research Article | Date Recorded: September 22, 2024 | Open Science: Preprint | Use of LLM: Data Analysis | Research Discipline(s): Political Science
Codebooks (documents that operationalize constructs and outline annotation procedures) are used almost universally by social scientists when coding unstructured political texts. Recently, to reduce manual annotation costs, political scientists have looked to generative large language models (LLMs) to label and analyze text data. However, previous work using LLMs for classification has implicitly relied on the universal label assumption: that correct classification of documents is possible using only a class label or minimal definition plus the information that the LLM inductively learns during its pre-training. In contrast, we argue that political scientists who care about valid measurement should instead make a codebook-construct label assumption: an LLM should follow the definition and exclusion criteria of a construct/label provided in a codebook. In this work, we collect and curate three political science datasets and their original codebooks and conduct a set of experiments to understand whether LLMs comply with codebook instructions, whether rewriting codebooks improves performance, and whether instruction-tuning LLMs on codebook-document-label tuples improves performance over zero-shot classification. Using Mistral 7B Instruct as our LLM, we find that restructuring the original codebooks gives modest gains in zero-shot performance, but the model still struggles to comply with the constraints of the codebooks. Optimistically, instruction-tuning Mistral on one of our datasets gives significant gains over zero-shot inference (0.76 versus 0.53 micro F1). We hope our conceptualization of the codebook-specific task, assumptions, and instruction-tuning pipeline, as well as our semi-structured LLM codebook format, will help political scientists readily adapt to the LLM era.
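The micro F1 scores reported in the entry above pool true positives, false positives, and false negatives across all codebook labels before computing a single F1. A minimal sketch of that pooling, with invented labels not taken from the paper:

```python
def micro_f1(y_true, y_pred):
    """Micro-averaged F1: pool per-class TP/FP/FN, then compute one F1."""
    labels = set(y_true) | set(y_pred)
    tp = fp = fn = 0
    for c in labels:
        tp += sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp += sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn += sum(t == c and p != c for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: 4 documents, 3 codebook labels, one mislabel.
gold = ["protest", "policy", "protest", "other"]
pred = ["protest", "policy", "policy", "other"]
print(round(micro_f1(gold, pred), 2))  # → 0.75
```

Note that for single-label classification (each document gets exactly one label), every misclassification counts as both one false positive and one false negative, so micro F1 coincides with accuracy; it diverges from accuracy in multi-label settings.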
Using artificial intelligence in academic writing and research: An essential productivity tool
Type of Resource: Research Article | Date Recorded: September 22, 2024 | Open Science: Open Source | Use of LLM: Research Design, Data Collection, Data Generation, Data Analysis, Science Communication | Research Discipline(s): Biology, Computer Science
Background: Academic writing is an essential component of research, characterized by structured expression of ideas, data-driven arguments, and logical reasoning. However, it poses challenges such as handling vast amounts of information and complex ideas. The integration of Artificial Intelligence (AI) into academic writing has become increasingly important, offering solutions to these challenges. This review aims to explore specific domains where AI significantly supports academic writing.
Methods: A systematic review of literature from databases like PubMed, Embase, and Google Scholar, published since 2019, was conducted. Studies were included based on relevance to AI's application in academic writing and research, focusing on writing assistance, grammar improvement, structure optimization, and other related aspects.
Results: The search identified 24 studies, from which six core domains emerged where AI helps academic writing and research: 1) facilitating idea generation and research design, 2) improving content and structuring, 3) supporting literature review and synthesis, 4) enhancing data management and analysis, 5) supporting editing, review, and publishing, and 6) assisting in communication, outreach, and ethical compliance. ChatGPT has shown substantial potential in these areas, though challenges like maintaining academic integrity and balancing AI use with human insight remain.
Conclusion and recommendations: AI significantly revolutionises academic writing and research across various domains. Recommendations include broader integration of AI tools in research workflows, emphasizing ethical and transparent use, providing adequate training for researchers, and maintaining a balance between AI utility and human insight. Ongoing research and development are essential to address emerging challenges and ethical considerations in AI's application in academia.
The Ethics of Generative AI in Social Science Research: A Qualitative Approach for Community-Based AI Research Ethics
Type of Resource: Research Article | Date Recorded: September 22, 2024 | Open Science: Open Source | Use of LLM: Other | Research Discipline(s): Other
Despite growing attention to the ethics of Generative AI, there has been little discussion about how research ethics should be updated for the social sciences. This paper fills this gap at the intersection of AI ethics and social science research ethics. Based on 17 semi-structured interviews, we present three narratives about generative AI and research ethics: 1) the equalizer narrative, 2) the meritocracy narrative, and 3) the community narrative. We argue that the ethics of AI-assisted social-scientific research cannot be reduced to universal checklists. Instead, a community-based approach is necessary to organize "ethics-in-practice." In all narratives, technical functions of Generative AI were merely necessary conditions of unethical practices, while ethical dilemmas started to arise when such functions were situated in the institutional arrangements of academia. Our findings suggest that the ethics of AI-assisted research should encompass not only specific ethical rules concerning AI functionalities but also community engagement, educational imperatives, institutional governance, and the societal impact of such technologies. This calls for democratic deliberation to address the complex, emergent interactions between AI systems and societal structures.