Zaspel, Peter: Parallel RBF Kernel-Based Stochastic Collocation for Large-Scale Random PDEs. - Bonn, 2015. - Dissertation, Rheinische Friedrich-Wilhelms-Universität Bonn.
Online-Ausgabe in bonndoc: https://nbn-resolving.org/urn:nbn:de:hbz:5n-40391
@phdthesis{handle:20.500.11811/6484,
urn = {https://nbn-resolving.org/urn:nbn:de:hbz:5n-40391},
author = {{Peter Zaspel}},
title = {Parallel RBF Kernel-Based Stochastic Collocation for Large-Scale Random PDEs},
school = {Rheinische Friedrich-Wilhelms-Universität Bonn},
year = 2015,
month = jul,
note = {In this thesis, the solution of large-scale uncertainty quantification problems is considered. Uncertainty quantification aims to extract stochastic (moment) information from processes with uncertain input data. These processes are here identified with random partial differential equations (PDEs), which have random input with respect to initial/boundary conditions, forcing terms, coefficients or domains. Non-intrusive methods are studied in order to reuse existing PDE solvers. Applications are (elliptic) model problems and incompressible two-phase flows. The main contribution of this thesis is a new framework to solve these problems in a high-order convergent, scalable, parallel and optimal-complexity fashion. To this end, the radial basis function (RBF) kernel-based stochastic collocation method is introduced. It combines the high-order algebraic or even exponential convergence rates of spectral (sparse) tensor-product methods with the optimal preasymptotic convergence of kriging and the well-founded stochastic framework of Gaussian process regression. The new method uses Lagrange bases from special reproducing kernel Hilbert spaces for approximation; those Hilbert spaces are constructed from RBFs.
Numerical results show up to exponential convergence for model problems with high smoothness. For problems with low smoothness, algebraic convergence rates are given. A small error in the preasymptotic regime is always achieved. Convergence results of (quasi-)Monte Carlo and (sparse) spectral tensor-product approaches are often clearly outperformed. An empirical error coupling analysis describes the interplay of all approximations, including conditions to balance all error contributions. Runtime complexity is expressed for a target error. Performance measurements show that a stochastic moment analysis for large-scale two-phase flow problems can be completed within a few hours. This excellent preasymptotic runtime becomes possible by parallelizing all relevant numerical methods, including a two-phase flow solver, iterative dense linear algebra solvers and all parts of the stochastic collocation, on graphics processing units (GPUs). Most approaches scale across clusters of GPUs.
Optimal complexity and substantial speedups are achieved by preconditioning the iterative sparse and dense linear solvers. Dense linear systems arising from interpolation are preconditioned with a localized restricted additive Schwarz method; this yields a new, perfectly scalable preconditioner on multi-GPU clusters. Elliptic problems are solved with a newly implemented optimal Ruge-Stüben algebraic multigrid method. It uses CPU-based C/F splittings and parallelizes all remaining parts of the setup and solve phases on one GPU.
The curse of dimensionality is weakened or even broken for problems with a fast decaying output covariance spectrum. To this end, anisotropic RBF kernel-based stochastic collocation is introduced. Optimal weights for two-phase flow problems are approximated by a Karhunen-Loève expansion of the solution flow field, which requires solving a large-scale dense eigenvalue problem. Greedy optimization is used for optimal sampling in the anisotropic space. Numerical experiments give strong (pre-)asymptotic results for elliptic and two-phase flow problems.
Overall, a combined effort of optimal numerical methods and parallel implementations makes it possible to solve even large-scale uncertainty quantification problems in a short amount of time.},
url = {https://hdl.handle.net/20.500.11811/6484}
}
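
For orientation only, the following is a minimal sketch of the kernel-based collocation idea summarized in the abstract: build a surrogate of a quantity of interest by RBF interpolation over sampled stochastic parameters, then estimate moments from the cheap surrogate. All concrete choices here (Gaussian kernel, Chebyshev collocation points, a placeholder scalar quantity of interest, a uniform parameter distribution, Monte Carlo evaluation of the surrogate) are illustrative assumptions, not taken from the thesis, which instead uses Lagrange bases of reproducing kernel Hilbert spaces, anisotropic kernels, and preconditioned iterative solvers on GPU clusters.

# Minimal, illustrative sketch of RBF kernel-based stochastic collocation
# (isotropic Gaussian kernel, one stochastic dimension). This is NOT the
# thesis implementation: it omits Lagrange bases, anisotropy, preconditioned
# iterative solvers and GPU parallelism, and replaces the random PDE by a
# placeholder scalar quantity of interest.
import numpy as np

def quantity_of_interest(y):
    # Placeholder for "solve the PDE for parameter y and extract a scalar";
    # in the thesis this would be a full (e.g. two-phase flow) PDE solve.
    return np.sin(np.pi * y) + 0.1 * y**2

def gaussian_kernel(r, eps=2.0):
    # Gaussian RBF: phi(r) = exp(-(eps*r)^2)
    return np.exp(-(eps * r) ** 2)

# Collocation points in the stochastic parameter domain [-1, 1]
n = 15
y_colloc = np.cos(np.pi * (2 * np.arange(1, n + 1) - 1) / (2 * n))  # Chebyshev nodes
f_colloc = np.array([quantity_of_interest(y) for y in y_colloc])    # "PDE solves"

# Dense kernel interpolation matrix A_ij = phi(|y_i - y_j|) and a direct solve;
# in the thesis such systems are solved iteratively with a restricted additive
# Schwarz preconditioner on multi-GPU clusters.
A = gaussian_kernel(np.abs(y_colloc[:, None] - y_colloc[None, :]))
coeffs = np.linalg.solve(A, f_colloc)

def surrogate(y):
    # Kernel interpolant s(y) = sum_j c_j * phi(|y - y_j|); y may be an array.
    y = np.atleast_1d(y)
    return gaussian_kernel(np.abs(y[:, None] - y_colloc[None, :])) @ coeffs

# Stochastic moment (mean) of the quantity of interest for y ~ U(-1, 1),
# estimated by sampling the cheap surrogate instead of the expensive PDE.
y_samples = np.random.default_rng(0).uniform(-1.0, 1.0, 100_000)
print(f"estimated mean of QoI: {surrogate(y_samples).mean():.6f}")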