Learning to track flows
ORAL
Abstract
Many aquatic animals can follow hydrodynamic trails by sensing and responding to flow signals. Despite numerous studies on this topic, a feedback control strategy that enacts this behavior using only local and instantaneous flow sensing remains elusive. Here, we apply deep reinforcement learning to the problem of following vortical wakes to their generating source. We find that the trained swimmer reaches the source of the wake via two distinct policies: one drives the swimmer toward the region of larger flow speed and the other does the opposite. These policies reveal that a flow sensor at the tail is crucial for tracking traveling-wave signals. Through analysis in a reduced-order signal field, we map the sensor location to the stability of the controller in locating the source. Importantly, the sensory control strategy generalizes to thrust and drag wakes at different Strouhal and Reynolds numbers and to 3D wakes. This work emphasizes the importance of both sensor location and sensor type and has implications for other source-seeking control problems with traveling-wave characteristics.
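To make the setup concrete, the sketch below illustrates the kind of approach the abstract describes: a reinforcement-learning agent that steers toward the source of a traveling-wave signal field using only local, instantaneous readings at a head and a tail sensor. It is a minimal illustration, not the authors' method; the signal field, tabular Q-learning scheme, and all parameter values are assumptions chosen for brevity.

```python
# Minimal sketch (not the authors' code): tabular Q-learning for steering a
# point swimmer toward the source of a reduced-order traveling-wave signal
# field, using only local, instantaneous readings at a head and a tail sensor.
# All field parameters, discretizations, and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Reduced-order wake-like signal: a traveling wave decaying away from the
# centerline y = 0, emanating from a source at the origin and advected in +x.
A, k, omega, decay = 1.0, 2.0 * np.pi, 2.0 * np.pi, 1.0
def signal(x, y, t):
    return A * np.exp(-decay * abs(y)) * (1.0 + np.sin(k * x - omega * t)) / 2.0

def sense(x, y, theta, t, offset=0.3):
    """Signal magnitude at a head and a tail sensor along the body axis."""
    hx, hy = x + offset * np.cos(theta), y + offset * np.sin(theta)
    tx, ty = x - offset * np.cos(theta), y - offset * np.sin(theta)
    return signal(hx, hy, t), signal(tx, ty, t)

def discretize(s_head, s_tail, bins=6):
    """Map the two continuous sensor readings to a single tabular state index."""
    i = min(int(s_head * bins), bins - 1)
    j = min(int(s_tail * bins), bins - 1)
    return i * bins + j

actions = np.array([-0.3, 0.0, 0.3])        # turn left / go straight / turn right (rad)
n_states, n_actions = 36, len(actions)
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps, speed, dt = 0.1, 0.95, 0.2, 0.5, 0.1

for episode in range(2000):
    # Start downstream of the source with a random lateral offset, heading upstream.
    x, y, theta, t = 3.0, rng.uniform(-1.0, 1.0), np.pi, 0.0
    state = discretize(*sense(x, y, theta, t))
    for step in range(200):
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[state]))
        theta += actions[a]
        x += speed * np.cos(theta) * dt
        y += speed * np.sin(theta) * dt
        t += dt
        next_state = discretize(*sense(x, y, theta, t))
        dist = np.hypot(x, y)
        reward = -dist * dt                  # encourage approaching the source
        Q[state, a] += alpha * (reward + gamma * Q[next_state].max() - Q[state, a])
        state = next_state
        if dist < 0.2:                       # reached the source
            break
```

In this toy version the learned policy ends up exploiting the difference between head and tail readings to stay on the wake centerline while moving upstream, which is loosely analogous to the role the abstract attributes to the tail sensor; the actual study uses deep RL in resolved vortical wakes rather than this reduced-order field.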
* NSF CBET-2100209, NSF RAISE award IOS-2034043, ONR grant 12707602, and ONR grant N00014-17-1-2062
–
Publication: Haotian Hang, Yusheng Jiao, Sina Heydari, Feng Ling, Josh Merel, Eva Kanso*, Learning to track flows, (in preparation)
Presenters
-
Haotian Hang
University of Southern California
Authors
-
Haotian Hang
University of Southern California
-
Yusheng Jiao
University of Southern California
-
Sina Heydari
University of Southern California
-
Feng Ling
Helmholtz Zentrum München
-
Eva Kanso
University of Southern California