LLMs in Science

Resource Description

Title
Automated Social Science: Language Models as Scientist and Subjects
Description of Resource
We present an approach for automatically generating and testing, in silico, social scientific hypotheses. This automation is made possible by recent advances in large language models (LLMs), but the key feature of the approach is the use of structural causal models. Structural causal models provide a language to state hypotheses, a blueprint for constructing LLM-based agents, an experimental design, and a plan for data analysis. The fitted structural causal model becomes an object available for prediction or the planning of follow-on experiments. We demonstrate the approach with several scenarios: a negotiation, a bail hearing, a job interview, and an auction. In each case, causal relationships are proposed and tested; we find evidence for some and not others. In the auction experiment, we show that the in silico simulation results closely match the predictions of auction theory, but predictions of the clearing prices elicited directly from an LLM are inaccurate. However, the LLM's predictions improve dramatically if the model can condition on the fitted structural causal model. When given a proposed structural causal model for one of the scenarios, the LLM is good at predicting the signs of estimated effects, but it cannot reliably predict the magnitudes of those effects. This suggests that social simulations give the model insight not available purely through direct elicitation. In short, the LLM knows more than it can (immediately) tell.
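The pipeline the description sketches (state a causal hypothesis, run simulated agents under experimental variation, fit the structural model, then use the fitted model for prediction) can be illustrated with a deliberately minimal toy, written as our own sketch rather than the paper's implementation. Here a single linear structural equation Y = a + b*X + noise stands in for one hypothesized causal edge, and `run_agent_simulation` is a hypothetical stand-in for an LLM-agent interaction (e.g., a negotiated price Y given a randomly assigned buyer budget X):

```python
import random

# Hypothetical toy SCM (illustration only, not the authors' code):
# one treatment X -> one outcome Y, with Y = a + b*X + noise.
TRUE_A, TRUE_B = 10.0, 0.5

def run_agent_simulation(x, rng):
    """Stand-in for an LLM-agent scenario; returns the simulated outcome."""
    return TRUE_A + TRUE_B * x + rng.gauss(0, 1.0)

def fit_scm(xs, ys):
    """Estimate the structural coefficients (a, b) by simple OLS."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

rng = random.Random(0)
# Experimental design: randomly vary the hypothesized cause X.
xs = [rng.uniform(50, 150) for _ in range(500)]
# "In silico" experiment: collect one outcome per simulated interaction.
ys = [run_agent_simulation(x, rng) for x in xs]
# Data analysis: the fitted SCM is now available for prediction.
a_hat, b_hat = fit_scm(xs, ys)
print(a_hat, b_hat)  # estimates should be close to TRUE_A, TRUE_B
```

With the coefficients estimated, the fitted model can answer the kind of question the abstract describes eliciting from an LLM directly: the predicted outcome at a new value of X is simply `a_hat + b_hat * x_new`.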
Type of Resource
Discussion Article
Research Discipline(s)
Other
Open Science
Open Source
Use of LLM
Other