==Summary==
We investigate the scaling and efficiency of the deep neural network multigrid method (DNN-MG), a novel neural-network-based technique for the simulation of the Navier-Stokes equations that combines an adaptive geometric multigrid solver with a recurrent neural network with memory. In DNN-MG, the neural network replaces one or more of the finest multigrid levels and provides a correction for the classical solver in the next time step. This leads to little degradation in solution quality while substantially reducing the overall computational cost. At the same time, the use of the multigrid solver at the coarse scales permits a compact network that is easy to train, generalizes well, and allows for the incorporation of physical constraints. In this work, we investigate how the network size affects training, solution quality, and the overall runtime of the computations.
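The summary describes one DNN-MG time step as a classical geometric multigrid solve on the coarse levels followed by a recurrent neural network correction on the finest level(s). The sketch below illustrates only that control flow; the function names, array shapes, and the simple tanh recurrence are illustrative assumptions and do not reproduce the paper's Navier-Stokes discretization or network architecture.

<pre>
import numpy as np

# Illustrative sketch only: names, shapes, and the recurrence are assumptions,
# not the DNN-MG implementation described in the paper.

def coarse_multigrid_solve(u_prev, dt):
    # Placeholder for the geometric multigrid solve on the coarse levels;
    # here it merely damps the previous velocity field.
    return 0.9 * u_prev

class RecurrentCorrection:
    """Minimal recurrent correction with a hidden state (memory)."""
    def __init__(self, n, hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = rng.normal(scale=0.1, size=(hidden, n))
        self.Wh = rng.normal(scale=0.1, size=(hidden, hidden))
        self.Wo = rng.normal(scale=0.1, size=(n, hidden))
        self.h = np.zeros(hidden)

    def __call__(self, u_coarse):
        # Update the hidden state from the coarse solution and emit a
        # fine-scale correction for the next time step.
        self.h = np.tanh(self.Wx @ u_coarse + self.Wh @ self.h)
        return self.Wo @ self.h

# Time-stepping loop: coarse solve followed by the network correction.
n = 32                          # assumed number of fine-level unknowns
u = np.zeros(n)
correction = RecurrentCorrection(n)
for step in range(10):
    u_coarse = coarse_multigrid_solve(u, dt=0.01)
    u = u_coarse + correction(u_coarse)
</pre>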
Published on 24/11/22
Accepted on 24/11/22
Submitted on 24/11/22
Volume Science Computing, 2022
DOI: 10.23967/eccomas.2022.271
Licence: CC BY-NC-SA