+ | |||
+ | ==Abstract== | ||
+ | Sparse direct linear solvers are at the computational core of domain decomposition preconditioners and therefore have a strong impact on their performance. In this paper, we consider the Fast and Robust Overlapping Schwarz (FROSch) solver framework of the Trilinos software library, which contains a parallel implementations of the GDSW domain decomposition preconditioner. We compare three different sparse direct solvers used to solve the subdomain problems in FROSch. The preconditioner is applied to different model problems; linear elasticity and more complex fully-coupled deformation diffusion-boundary value problems from chemomechanics. We employ FROSch in fully algebraic mode, and therefore, we do not expect numerical scalability. Strong scalability is studied from 64 to 4 096 cores, where good scaling results are obtained up to 1 728 cores. The increasing size of the coarse problem increases the solution time for all sparse direct solvers. | ||
+ | |||
+ | == Full Paper == | ||
+ | <pdf>Media:Draft_Sanchez Pinedo_543780886pap_364.pdf</pdf> |
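The role a sparse direct solver plays inside an overlapping Schwarz preconditioner can be sketched with SciPy. This is a minimal serial, one-level analogue under illustrative assumptions (a 1D Laplacian test matrix, hypothetical subdomain sizes and overlap), not the FROSch/Trilinos implementation: there is no GDSW coarse level and no MPI parallelism. Each subdomain block is factorized once with a sparse LU, and the preconditioner sums the overlapping local solves.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Stand-in test problem: a 1D Laplacian. The elasticity and chemomechanics
# systems from the paper are not reproduced here.
n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Split the unknowns into overlapping subdomains (sizes and overlap are
# illustrative, not taken from the paper).
n_sub, overlap = 4, 5
size = n // n_sub
subdomains = []
for i in range(n_sub):
    lo = max(0, i * size - overlap)
    hi = min(n, (i + 1) * size + overlap)
    idx = np.arange(lo, hi)
    # Sparse direct factorization of the local block -- the role played by
    # the sparse direct solvers compared in the paper.
    lu = spla.splu(A[idx, :][:, idx].tocsc())
    subdomains.append((idx, lu))

def apply_schwarz(r):
    """One-level additive Schwarz: sum of overlapping local direct solves."""
    z = np.zeros_like(r)
    for idx, lu in subdomains:
        z[idx] += lu.solve(r[idx])
    return z

M = spla.LinearOperator((n, n), matvec=apply_schwarz)
x, info = spla.cg(A, b, M=M)
```

Without a coarse level, iteration counts of such a one-level method grow with the number of subdomains; the GDSW coarse problem studied in the paper is what restores (numerical) scalability, at the cost of a growing coarse solve.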
Published on 02/11/23
Submitted on 02/11/23
Volume Iterative Methods and Preconditioners for Challenging Multiphysics Systems, 2023
DOI: 10.23967/c.coupled.2023.008
Licence: CC BY-NC-SA