Humans efficiently predict in a sequence learning task
ORAL
Abstract
Much of human behavior is driven by prediction of sequences, as this allows one to navigate complex environments. Large language models such as ChatGPT are, in fact, built on prediction of symbols in sequences. But how well, and how efficiently, do human participants predict non-naturalistic sequences? We use three highly artificial stimuli to answer these questions. We find that participants are near-optimal predictors of sequences in an information-theoretic sense and that their strategy appears to be a Bayesian approach to order-R Markov modeling. Some participants appear to predict like Long Short-Term Memory units (LSTMs), state-of-the-art recurrent neural networks; indeed, LSTMs have previously been used to model human behavior. This study confirms the previously theoretical notion that humans are efficient predictors of artificial input and proposes a possible mechanism by which humans model and predict the world.
* This study was supported by the U.S. Air Force Office of Scientific Research, Grant Number FA9550-19-1-0411.
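To make the proposed mechanism concrete, here is a minimal sketch of Bayesian order-R Markov prediction: next-symbol probabilities come from context-conditioned counts smoothed by a symmetric Dirichlet prior. This is not the authors' implementation; the binary alphabet, prior strength alpha, and function name are illustrative assumptions.

    from collections import defaultdict

    def predict_next(sequence, R=2, alphabet=("0", "1"), alpha=1.0):
        """Posterior-predictive next-symbol probabilities under an
        order-R Markov model with a symmetric Dirichlet(alpha) prior
        (illustrative sketch, not the study's exact model)."""
        # Count how often each symbol follows each length-R context.
        counts = defaultdict(lambda: defaultdict(float))
        for t in range(R, len(sequence)):
            context = tuple(sequence[t - R:t])
            counts[context][sequence[t]] += 1
        # Predict from the most recent context, with Dirichlet smoothing.
        context = tuple(sequence[-R:])
        total = sum(counts[context].values()) + alpha * len(alphabet)
        return {s: (counts[context][s] + alpha) / total for s in alphabet}

    # Example: after observing an alternating sequence ending in "...01",
    # the model assigns high probability (5/6) to "0" coming next.
    print(predict_next(list("0101010101"), R=2))

Raising R lets the model capture longer-range structure at the cost of more contexts to estimate, which is the trade-off a Bayesian treatment of the order-R model manages via the prior.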
Presenters
-
Amy Yu
Scripps College
Authors
-
Sarah Marzen
Scripps, Pitzer & CMC
-
Vanessa Ferdinand
University of Melbourne
-
Amy Yu
Scripps College