Agent Laboratory: Using LLM Agents as Research Assistants

Historically, scientific discovery has been a lengthy and costly process, demanding substantial time and resources from initial conception to final results. To accelerate scientific discovery, reduce research costs, and improve research quality, we introduce Agent Laboratory, an autonomous LLM-based framework capable of completing the entire research process. This framework accepts a human-provided research idea and […]

Continue Reading

Challenges in Guardrailing Large Language Models for Science

The rapid development of large language models (LLMs) has transformed the landscape of natural language processing and understanding (NLP/NLU), offering significant benefits across various domains. However, when applied to scientific research, these powerful models exhibit critical failure modes related to scientific integrity and trustworthiness. Existing general-purpose LLM guardrails are insufficient to address these unique challenges […]

Continue Reading

Using AI in Grounded Theory research – a proposed framework for a ChatGPT-based research assistant

The purpose of this paper is to explore the potential application of ChatGPT in relation to grounded theory. Our focus is on building a case for its usefulness in supporting the research process as an assistant to the researcher, rather than replacing the intellectual rigour needed to conduct credible grounded theory research. To aid […]

Continue Reading

A Computational Method for Measuring “Open Codes” in Qualitative Analysis

Qualitative analysis is critical to understanding human datasets in many social science disciplines. Open coding is an inductive qualitative process that identifies and interprets “open codes” from datasets. Yet, meeting methodological expectations (such as “as exhaustive as possible”) can be challenging. While many machine learning (ML)/generative AI (GAI) studies have attempted to support open coding, […]

Continue Reading

OpenScholar: Synthesizing Scientific Literature

Scientific progress depends on researchers’ ability to synthesize the growing body of literature. Can large language models (LMs) assist scientists in this task? We introduce OpenScholar, a specialized retrieval-augmented LM that answers scientific queries by identifying relevant passages from 45 million open-access papers and synthesizing citation-backed responses. To evaluate OpenScholar, we develop ScholarQABench, the first […]
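
The retrieve-then-synthesize pattern that the OpenScholar excerpt describes can be illustrated with a minimal sketch. Everything below is a stand-in rather than OpenScholar's actual API: a toy in-memory corpus replaces the index over open-access papers, a naive word-overlap ranker stands in for a real retriever, and `synthesize` only builds the citation-grounded prompt that a language model would receive.

```python
from dataclasses import dataclass


@dataclass
class Passage:
    paper_id: str   # identifier of the source paper (hypothetical)
    text: str       # passage text


# Toy in-memory corpus standing in for an index over open-access papers.
CORPUS = [
    Passage("paper-001", "Retrieval-augmented generation grounds answers in retrieved text."),
    Passage("paper-002", "Citation-backed responses let readers verify each claim."),
    Passage("paper-003", "Scientific literature synthesis requires covering many sources."),
]


def retrieve(query: str, corpus: list[Passage], k: int = 2) -> list[Passage]:
    """Rank passages by naive word overlap with the query (stand-in for a real retriever)."""
    query_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(query_terms & set(p.text.lower().split())),
        reverse=True,
    )
    return scored[:k]


def synthesize(query: str, passages: list[Passage]) -> str:
    """Build a prompt asking the model to answer from the retrieved, cited passages only.

    This sketch returns the prompt; a real system would send it to a language model.
    """
    context = "\n".join(f"[{p.paper_id}] {p.text}" for p in passages)
    return (
        "Answer the question using only the passages below, citing paper ids in brackets.\n"
        f"Passages:\n{context}\n\nQuestion: {query}\nAnswer:"
    )


if __name__ == "__main__":
    question = "How do citation-backed responses help readers?"
    hits = retrieve(question, CORPUS)
    print(synthesize(question, hits))
```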

Continue Reading

AI-Augmented Cultural Sociology

The advent of large language models (LLMs) presents a promising opportunity for how we analyze text and, by extension, study the role of culture and symbolic meanings in social life. Using an illustrative example focused on the concept of “personalized service” within Michelin-starred restaurants, this research note demonstrates how LLMs can reliably identify complex, […]

Continue Reading

Beyond principlism: Practical strategies for ethical AI use in research practices

The rapid adoption of generative artificial intelligence (AI) in scientific research, particularly large language models (LLMs), has outpaced the development of ethical guidelines, leading to a “Triple-Too” problem: too many high-level ethical initiatives, too abstract principles lacking contextual and practical relevance, and too much focus on restrictions and risks over benefits and utilities. Existing approaches—principlism […]

Continue Reading

Designing Reliable Experiments with Generative Agent-Based Modeling: A Comprehensive Guide

In social sciences, researchers often face challenges when conducting large-scale experiments, particularly due to the complexity of simulations and the technical expertise required to develop such frameworks. Agent-Based Modeling (ABM) is a computational approach that simulates agents’ actions and interactions to evaluate how their behaviors influence outcomes. However, the traditional implementation of ABM […]
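
As a rough illustration of the ABM loop that generative agent-based modeling builds on, the sketch below steps a population of agents and tallies their actions. The `decide` method is a stub where a generative approach would instead prompt an LLM with the agent's persona and the current simulation state; the class names and the cooperate/defect toy decision are illustrative, not taken from the paper.

```python
import random


class Agent:
    """One simulated actor; `decide` is where a generative model would be queried."""

    def __init__(self, name: str, cooperativeness: float):
        self.name = name
        self.cooperativeness = cooperativeness

    def decide(self, round_no: int) -> str:
        # Stand-in policy: a generative agent-based model would instead prompt an
        # LLM with this agent's persona and the round's state, then parse the action.
        return "cooperate" if random.random() < self.cooperativeness else "defect"


def run_experiment(agents: list[Agent], rounds: int = 5) -> dict[str, int]:
    """Step every agent each round and tally how often each action occurs."""
    tally = {"cooperate": 0, "defect": 0}
    for round_no in range(rounds):
        for agent in agents:
            tally[agent.decide(round_no)] += 1
    return tally


if __name__ == "__main__":
    random.seed(0)  # fixed seed so repeated runs of the sketch are reproducible
    population = [
        Agent(f"agent-{i}", cooperativeness=0.5 + 0.1 * (i % 3)) for i in range(10)
    ]
    print(run_experiment(population, rounds=20))
```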

Continue Reading