Keeping it together: How humans coordinate with little information
ORAL
Abstract
Swing in a crew boat, a good jazz riff, an excellent conversation: these are all results of good coordination between individuals. To perform such tasks, we must extract and encode sensory information about how others move in order to mimic and respond. Recently, commercial virtual reality and motion capture platforms have become available, allowing us to manipulate visual and auditory fields while recording motion. Using these platforms to create a virtual reality environment, we study how people mirror the motion of a human avatar under different conditions. We find that people coordinate well when the avatar is fully visible. However, when we limit visual information by intermittently blinking the avatar's visibility, we observe poor coordination, with all individuals exhibiting jerky motion at the blinking frequency. We then examine whether parallel visual or auditory feedback mitigates this transition. Finally, we comment on how such studies might be used to enhance coordination between individuals.
–
Presenters
-
Edward Lee
Cornell University
Authors
-
Edward Lee
Cornell University
-
Itai Cohen
Laboratory of Atomic and Solid State Physics, Department of Physics, Cornell University