I am a PhD student in Philipp Hennig’s group at the University of Tübingen and the International Max Planck Research School for Intelligent Systems (IMPRS-IS). My research revolves around probabilistic numerical methods for partial differential equations and Gaussian processes. I also work on applications of such methods in scientific machine learning.
news
Dec 26, 2022
A preprint of our recent article Physics-Informed Gaussian Process Regression Generalizes Linear PDE Solvers is now available on arXiv.
Nov 25, 2022
I will be attending NeurIPS 2022 in New Orleans.
selected publications
Physics-Informed Gaussian Process Regression Generalizes Linear PDE Solvers
Linear partial differential equations (PDEs) are an important, widely applied class of mechanistic models, describing physical processes such as heat transfer, electromagnetism, and wave propagation. In practice, specialized numerical methods based on discretization are used to solve PDEs. They generally use an estimate of the unknown model parameters and, if available, physical measurements for initialization. Such solvers are often embedded into larger scientific models or analyses with a downstream application in which error quantification plays a key role. However, by entirely ignoring parameter and measurement uncertainty, classical PDE solvers may fail to produce consistent estimates of their inherent approximation error. In this work, we approach this problem in a principled fashion by interpreting the solution of linear PDEs as physics-informed Gaussian process (GP) regression. Our framework is based on a key generalization of a widely applied theorem for conditioning GPs, from a finite number of direct observations to observations made via an arbitrary bounded linear operator. Crucially, this probabilistic viewpoint allows us to (1) quantify the inherent discretization error; (2) propagate uncertainty about the model parameters to the solution; and (3) condition on noisy measurements. Demonstrating the strength of this formulation, we prove that it strictly generalizes methods of weighted residuals, a central class of PDE solvers including collocation, finite volume, pseudospectral, and (generalized) Galerkin methods such as finite element and spectral methods. This class can thus be directly equipped with a structured error estimate and the capability to incorporate uncertain model parameters and observations. In summary, our results enable the seamless integration of mechanistic models as modular building blocks into probabilistic models.
@misc{Pfoertner2022LinPDEGP,
  author    = {Pf\"ortner, Marvin and Steinwart, Ingo and Hennig, Philipp and Wenger, Jonathan},
  title     = {Physics-Informed {G}aussian Process Regression Generalizes Linear {PDE} Solvers},
  year      = {2022},
  publisher = {arXiv},
  doi       = {10.48550/arxiv.2212.12474},
  url       = {https://arxiv.org/abs/2212.12474},
}
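The central technical step in this paper, conditioning a GP on observations made via a linear operator rather than on point evaluations, fits in a few lines of code. The following NumPy sketch is purely illustrative (all names and settings are my own choices for this demo, not the paper's implementation): a GP with an RBF kernel is conditioned on noisy observations of the derivative of the latent function, using the closed-form kernels obtained by differentiating k.

import numpy as np

# Illustrative sketch: condition a zero-mean GP f ~ GP(0, k) on noisy
# observations of L[f] = f', i.e. on data seen through a linear operator.
# The derivative kernels below follow by differentiating the RBF kernel.

ell = 0.5  # kernel lengthscale (arbitrary demo value)

def k(x, xp):    # k(x, x') = exp(-(x - x')^2 / (2 ell^2))
    return np.exp(-((x - xp) ** 2) / (2.0 * ell**2))

def dk(x, xp):   # Cov(f(x), f'(x')) = dk/dx'
    return (x - xp) / ell**2 * k(x, xp)

def ddk(x, xp):  # Cov(f'(x), f'(x')) = d^2 k / (dx dx')
    return (1.0 / ell**2 - (x - xp) ** 2 / ell**4) * k(x, xp)

rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 10)              # measurement locations
y = 2.0 * np.pi * np.cos(2.0 * np.pi * X)  # derivative of sin(2*pi*x) ...
y += 0.1 * rng.standard_normal(X.shape)    # ... observed with noise

sigma2 = 0.01                              # measurement noise variance
G = ddk(X[:, None], X[None, :]) + sigma2 * np.eye(len(X))  # Gram of f'(X)

xs = np.linspace(0.0, 1.0, 200)            # prediction grid
Kx = dk(xs[:, None], X[None, :])           # Cov(f(x*), f'(X))

post_mean = Kx @ np.linalg.solve(G, y)     # E[f(x*) | data]
post_var = k(xs, xs) - np.einsum("ij,ji->i", Kx, np.linalg.solve(G, Kx.T))

Replacing differentiation with a PDE operator and the measurements with evaluations of the right-hand side turns the same construction into a probabilistic PDE solver; the paper shows that particular choices of such observation functionals recover the methods of weighted residuals listed in the abstract.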
Posterior and Computational Uncertainty in Gaussian Processes
Gaussian processes (GPs) scale prohibitively with the size of the dataset. In response, many approximation methods have been developed, which inevitably introduce approximation error. This additional source of uncertainty, due to limited computation, is entirely ignored when using the approximate posterior. In practice, therefore, GP models are often as much about the approximation method as they are about the data. Here, we develop a new class of methods that provides consistent estimation of the combined uncertainty arising from both the finite number of data observed and the finite amount of computation expended. The most common GP approximations map to an instance in this class, such as methods based on the Cholesky factorization, conjugate gradients, and inducing points. For any method in this class, we prove (i) convergence of its posterior mean in the associated RKHS, (ii) decomposability of its combined posterior covariance into mathematical and computational covariances, and (iii) that the combined variance is a tight worst-case bound for the squared error between the method’s posterior mean and the latent function. Finally, we empirically demonstrate the consequences of ignoring computational uncertainty and show how implicitly modeling it improves generalization performance on benchmark datasets.
@inproceedings{Wenger2022IterGP,
  author    = {Wenger, Jonathan and Pleiss, Geoff and Pf\"ortner, Marvin and Hennig, Philipp and Cunningham, John P.},
  title     = {Posterior and Computational Uncertainty in {G}aussian Processes},
  year      = {2022},
  booktitle = {Advances in Neural Information Processing Systems},
  volume    = {35},
  editor    = {TBA},
  publisher = {TBA},
  pages     = {TBA},
  doi       = {10.48550/arxiv.2205.15449},
  url       = {https://arxiv.org/abs/2205.15449},
}
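The iteration underlying this class of methods is compact enough to sketch directly. The NumPy code below is an illustration under my own naming conventions, not the authors' API: each step selects an action vector via a policy, performs a rank-one update of a low-rank approximation C of the precision matrix, and the remaining gap between C and the exact precision is exactly the computational uncertainty.

import numpy as np

def itergp(K, y, sigma2, policy, num_iters):
    # Illustrative sketch, not the authors' implementation.
    Khat = K + sigma2 * np.eye(len(y))   # K + sigma^2 I
    v = np.zeros(len(y))                 # representer weights, v -> Khat^{-1} y
    C = np.zeros((len(y), len(y)))       # precision approximation, C -> Khat^{-1}
    for i in range(num_iters):
        r = y - Khat @ v                 # residual of the current mean estimate
        s = policy(r, i)                 # action vector chosen by the policy
        z = Khat @ s
        d = s - C @ z                    # direction Khat-conjugate to earlier ones
        eta = z @ d
        if eta <= 1e-12:                 # action already explored; stop
            break
        C += np.outer(d, d) / eta        # rank-one update of the precision approx.
        v += ((s @ r) / eta) * d         # maintains v = C @ y
    return v, C

# Policies recovering familiar approximations (per the paper):
cg_policy = lambda r, i: r                    # conjugate gradients
chol_policy = lambda r, i: np.eye(len(r))[i]  # partial Cholesky

# The combined posterior at test inputs Xs, for a kernel function kern, is
#   mean:       kern(Xs, X) @ v
#   covariance: kern(Xs, Xs) - kern(Xs, X) @ C @ kern(X, Xs)
# where the gap kern(Xs, X) @ (Khat^{-1} - C) @ kern(X, Xs) is the
# computational part of the covariance decomposition proved in the paper.

With the residual policy, the iteration reproduces a conjugate-gradient-based approximation of the posterior mean while additionally returning C, from which the combined uncertainty estimate is read off.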