Multi-agent reinforcement learning for subgrid-scale modeling of environmental turbulence
ORAL
Abstract
The accuracy of large-eddy simulations relies on closures that model the unresolved subgrid effects. Traditionally, such closure models are based on physical models of the structure of the subgrid-scale stress or on energy/enstrophy transfer and self-similarity. More recently, supervised learning approaches have been extensively investigated as an alternative to traditional closure models. These approaches learn the subgrid-scale closures from high-fidelity snapshots of the flow and therefore require large amounts of data, which can be prohibitively expensive to acquire, or simply nonexistent, as for direct numerical simulations of environmental flows in the atmosphere or oceans. We learn closure models using multi-agent deep reinforcement learning. This approach relies on statistics that can be calculated from a few system snapshots. Sparsely distributed agents observe the local invariants of the flow as their state and act to match the expected long-term statistics of the system. We demonstrate that the closure model accurately predicts probability distributions of forced two-dimensional and β-plane turbulent flows. Additionally, we draw interpretable statistical connections between the flow parameters and the learned closure.
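The training loop described above can be illustrated with a minimal sketch. This is not the authors' implementation: the grid, the choice of "local invariants" (vorticity and its discrete Laplacian), the toy closure action, the target statistic, and the REINFORCE-style update are all illustrative assumptions; they only convey the structure of sparse agents whose shared reward scores agreement with long-term statistics.

```python
import numpy as np

# Hedged conceptual sketch (NOT the authors' code): a few sparse agents each
# observe local flow invariants, pick a local subgrid coefficient via a shared
# Gaussian policy, and receive one common reward measuring how well a toy
# long-term statistic matches a target value.

rng = np.random.default_rng(0)
n = 16                                          # toy grid size (assumption)
agents = [(4, 4), (4, 12), (12, 4), (12, 12)]   # sparse agent locations
w = np.zeros(2)                                 # shared linear policy weights
target_stat = 1.0                               # hypothetical target statistic

def invariants(omega, i, j):
    """Toy local 'invariants': vorticity value and its discrete Laplacian."""
    lap = (omega[(i + 1) % n, j] + omega[(i - 1) % n, j]
           + omega[i, (j + 1) % n] + omega[i, (j - 1) % n]
           - 4.0 * omega[i, j])
    return np.array([omega[i, j], lap])

def rollout(w, sigma=0.1):
    """Run a toy 'simulation' episode; each action damps local vorticity."""
    omega = rng.standard_normal((n, n))
    states, actions = [], []
    for _ in range(20):
        for (i, j) in agents:
            s = invariants(omega, i, j)
            a = w @ s + sigma * rng.standard_normal()  # Gaussian policy sample
            omega[i, j] -= 0.1 * a * omega[i, j]       # toy closure effect
            states.append(s)
            actions.append(a)
    stat = np.mean(omega ** 2)                         # toy enstrophy-like statistic
    reward = -abs(stat - target_stat)                  # reward: match target stats
    return reward, np.array(states), np.array(actions)

# REINFORCE-style policy-gradient update with a running reward baseline
baseline = 0.0
for episode in range(200):
    r, S, A = rollout(w)
    advantage = r - baseline
    baseline += 0.05 * (r - baseline)
    # grad of log-density of a Gaussian policy w.r.t. w (sigma^2 absorbed
    # into the learning rate)
    grad = (S * (A - S @ w)[:, None]).mean(axis=0)
    w += 0.5 * advantage * grad
```

In the actual method, the toy damping step would be a large-eddy simulation with the agents' coefficients entering the subgrid closure, and the reward would compare full flow statistics (e.g., spectra or probability distributions) rather than a single scalar.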
* This work was supported by the ONR Young Investigator Program (N00014-20-1-2722), a grant from the NSF CSSI program (OAC-2005123), and by the generosity of Eric and Wendy Schmidt by recommendation of the Schmidt Futures program.
Presenters
- Rambod Mojgani (Rice University)

Authors
- Rambod Mojgani (Rice University)
- Daniel Wälchli (ETHZ)
- Yifei Guan (Rice University)
- Petros Koumoutsakos (Harvard University)
- Pedram Hassanzadeh (University of Chicago)