
Abstracts


Alex Bespalov
Stochastic Galerkin finite element methods for saddle point problems with random data
Whilst there exists a large body of research on the numerical approximation of elliptic PDEs with random data, the case of saddle point variational problems is not so well developed. In this talk we will give examples of saddle point problems with random data and will specifically address the issues involved in the error analysis of stochastic Galerkin approximations to such problems. In particular, we will discuss the inf-sup stability and well-posedness of the continuous and finite-dimensional problems, the regularity of solutions with respect to the parameters describing the random coefficients, and a priori error bounds for stochastic Galerkin approximations in terms of all the discretisation parameters involved.
This is joint work with Catherine Powell and David Silvester (University of Manchester).
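
For orientation, the inf-sup stability referred to above can be made concrete by the standard (Brezzi) saddle point framework below; this is a textbook formulation, not material from the talk, and the notation (V, Q, a, b, beta) is generic. In the stochastic Galerkin setting the bilinear forms depend on the parameters describing the random data, and the constants must be bounded uniformly in those parameters.

```latex
% Standard saddle point framework (generic notation, not from the abstract)
\begin{align*}
  &\text{Find } (u,p) \in V \times Q \text{ such that} \\
  &\qquad a(u,v) + b(v,p) = f(v) \quad \forall v \in V, \\
  &\qquad b(u,q) = g(q) \quad \forall q \in Q, \\
  &\text{well-posed provided } a, b \text{ are bounded, } a \text{ is coercive on } \ker b, \text{ and} \\
  &\qquad \inf_{0 \neq q \in Q} \; \sup_{0 \neq v \in V} \; \frac{b(v,q)}{\|v\|_V \, \|q\|_Q} \;\geq\; \beta > 0.
\end{align*}
```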

Evelyn Buckwar
Stability issues for numerical methods for SDEs
Stochastic Differential Equations (SDEs) have become a standard modelling tool in many areas of science, e.g. from finance to neuroscience. Many numerical methods have been developed in recent decades and analysed for their strong or weak convergence behaviour. In this talk we will provide an overview of recent progress in the analysis of stability properties of numerical methods for SDEs, in particular for systems of equations. We are interested in developing classes of test equations that allow insight into the stability behaviour of the methods, and in developing approaches to analyse the resulting systems of equations.
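
As a concrete illustration of the test-equation viewpoint (a minimal sketch, not taken from the talk), the following checks the classical mean-square stability condition of the Euler-Maruyama method for the scalar linear test equation dX = lam*X dt + mu*X dW; the parameter values below are arbitrary.

```python
import numpy as np

# Illustrative sketch (not from the talk): mean-square stability of
# Euler-Maruyama for the scalar linear test equation
#   dX = lam*X dt + mu*X dW   (real lam, mu),
# which is itself mean-square stable iff 2*lam + mu**2 < 0.
# Euler-Maruyama gives X_{n+1} = X_n*(1 + lam*h + mu*dW_n), so
#   E|X_{n+1}|^2 = E|X_n|^2 * ((1 + lam*h)**2 + mu**2*h),
# i.e. the method is mean-square stable iff R(h) < 1.

def em_ms_stability_function(lam, mu, h):
    """Mean-square amplification factor of Euler-Maruyama per step."""
    return (1.0 + lam * h) ** 2 + mu ** 2 * h

def em_ms_stable(lam, mu, h):
    return em_ms_stability_function(lam, mu, h) < 1.0

if __name__ == "__main__":
    lam, mu = -4.0, 1.0   # the SDE is mean-square stable: 2*lam + mu**2 < 0
    for h in (0.1, 0.4, 0.6):
        print(h, em_ms_stability_function(lam, mu, h), em_ms_stable(lam, mu, h))
```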

Dan Crisan
Particle methods for stochastic partial differential equations
I will review recent and classical results regarding pathwise approximations to a class of parabolic SPDEs with multiplicative noise. These SPDEs play a central role in stochastic filtering: their solution gives the conditional distribution of a stochastic process X (the signal) given an associated observation process Y.
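
The filtering connection can be illustrated by a minimal bootstrap particle filter for a toy discrete-time model (a generic sketch; the talk concerns continuous-time SPDE approximations, and the model, noise levels and particle count below are illustrative assumptions).

```python
import numpy as np

# Minimal bootstrap particle filter for a toy discrete-time model:
#   signal:      X_{k+1} = 0.9*X_k + sigma_x * xi_k
#   observation: Y_k     = X_k + sigma_y * eta_k
rng = np.random.default_rng(0)
N, T = 1000, 50
sigma_x, sigma_y = 0.5, 0.3

x_true = 0.0
particles = rng.normal(0.0, 1.0, N)   # prior ensemble
estimates = []

for k in range(T):
    # propagate the signal and the particles through the dynamics
    x_true = 0.9 * x_true + sigma_x * rng.normal()
    y = x_true + sigma_y * rng.normal()
    particles = 0.9 * particles + sigma_x * rng.normal(size=N)

    # weight by the observation likelihood, then resample (multinomial)
    logw = -0.5 * ((y - particles) / sigma_y) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    particles = rng.choice(particles, size=N, p=w)

    estimates.append(particles.mean())   # approximates E[X_k | Y_1..Y_k]
```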

Arnaud Debussche 

Claude Gittelson 
Multilevel and/or high-order stochastic Galerkin approximation
Solutions of random elliptic boundary value problems admit efficient approximations by polynomials on the parameter domain. Each coefficient in such an expansion is a spatially dependent function, and can be approximated within a hierarchy of finite element spaces. Surprisingly, simply choosing a single level of refinement for all active coefficients sometimes reaches the optimal convergence rate achievable by much more flexible multilevel approximations. This finding is particularly prevalent for high-order spatial discretizations. Thus high-order methods may provide an alternative to sparse tensor product constructions and other multilevel stochastic Galerkin approximations in that both approaches yield improvements in accuracy, but these are often not complementary.

R. Ghanem
Focus on objectives resolves the curse of dimensionality
I will present new methods in uncertainty quantification that demonstrate an interplay between L2 and L1 probabilistic characterizations. While Hilbert space constructions are conducive to approximating the deterministic maps from input to output, they characterize the solution of a typical governing equation as a curve in high-dimensional space. In many cases of interest, including most multi-scale problems and many problems of optimization under uncertainty, such a detailed description is of little interest. An important task, in these cases, pertains to a sufficient characterization of a low-dimensional quantity of interest. I will describe recent mathematical constructions and algorithms for delineating a minimal-dimensional space in which most of the probability measure of the quantity of interest is localized.

M. Hairer
Approximations to stochastic PDEs of Burgers type
We study a class of spatial approximations to stochastic PDEs of Burgers type, driven by space-time white noise. It turns out that these equations exhibit "borderline" regularity, so that standard approximation techniques all fail.
As a consequence of this, we show that different approximations that would all converge to the same limit in the regular case converge to different limits here. This is a spatial analogue to the well-known fact that different temporal approximations of an SDE can converge to its solution interpreted either in the Itô or the Stratonovich sense.
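
The temporal analogue mentioned in the last sentence can be reproduced in a few lines (a standard illustration, not the spatial schemes of the talk): discretising the formal equation dX = X dW with Euler-Maruyama yields the Itô solution, while the stochastic Heun (predictor-corrector) scheme yields the Stratonovich one.

```python
import numpy as np

# Two temporal discretisations of the formal equation dX = X dW, X(0) = 1:
# Euler-Maruyama converges to the Ito solution exp(W_T - T/2), stochastic
# Heun converges to the Stratonovich solution exp(W_T).  Agreement with the
# two limits improves as the step size h -> 0.
rng = np.random.default_rng(1)
T, n = 1.0, 2**14
h = T / n
dW = rng.normal(0.0, np.sqrt(h), n)
W = dW.sum()

x_em, x_heun = 1.0, 1.0
for dw in dW:
    # Euler-Maruyama (Ito)
    x_em = x_em + x_em * dw
    # stochastic Heun (Stratonovich)
    pred = x_heun + x_heun * dw
    x_heun = x_heun + 0.5 * (x_heun + pred) * dw

print("Euler-Maruyama:", x_em,   " vs Ito exp(W-T/2):", np.exp(W - T / 2))
print("Heun          :", x_heun, " vs Strat. exp(W): ", np.exp(W))
```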

Frances Kuo 
Liberating the dimension - quasi Monte Carlo methods for high dimensional integration
High dimensional problems are coming to play an ever more important role in applications, including, for example, option pricing problems in mathematical finance, maximum likelihood problems in statistics, and porous flow problems in computational physics and uncertainty quantification. High dimensional problems pose immense challenges for practical computation, because of a nearly inevitable tendency for the cost of computation to increase exponentially with dimension. Effective and efficient methods that do not suffer from this "curse of dimensionality" are in great demand, especially since some practical problems are in fact infinite dimensional.
In this talk I will start with an introduction to "quasi-Monte Carlo methods", focusing on the theory and construction of "lattice rules" developed in the past decade. Then I will showcase our very latest work on how this modern theory can be "tuned" for a given application. The motivating example will involve an elliptic PDE with random coefficient, which is based on a simplified porous flow problem where the permeability is modeled as a random field.
The PDE application part of this talk is based on a number of joint works with Ivan Graham and Rob Scheichl (Bath), Christoph Schwab (ETH Zurich), Dirk Nuyens (KU Leuven), and Ian Sloan and James Nichols (UNSW).
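
A randomly shifted rank-1 lattice rule is simple to state (a generic sketch, not the constructions of the talk; the generating vector and test integrand below are placeholders, whereas good vectors are obtained by component-by-component constructions).

```python
import numpy as np

# Randomly shifted rank-1 lattice rule: with n points and generating vector z,
# the QMC points are t_i = frac(i*z/n + shift), and the integral of f over the
# unit cube [0,1]^d is approximated by the average of f(t_i).  Several random
# shifts give an unbiased estimate together with a practical error estimate.

def shifted_lattice_rule(f, n, z, n_shifts=8, rng=None):
    rng = rng or np.random.default_rng()
    d = len(z)
    i = np.arange(n).reshape(-1, 1)
    base = (i * np.asarray(z).reshape(1, -1) / n) % 1.0
    estimates = []
    for _ in range(n_shifts):
        pts = (base + rng.uniform(size=d)) % 1.0
        estimates.append(np.mean(f(pts)))
    q = np.mean(estimates)
    err = np.std(estimates, ddof=1) / np.sqrt(n_shifts)
    return q, err

# toy integrand over [0,1]^8 with exact integral 1
f = lambda x: np.prod(1.0 + (x - 0.5) / np.arange(1, x.shape[1] + 1) ** 2, axis=1)
z = [1, 19463, 37943, 30011, 9767, 33029, 14823, 25931]  # placeholder generating vector
print(shifted_lattice_rule(f, 2**16, z))
```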

Omar Lakkis 
Maximum-norm strong approximation rates for noisy reaction-diffusion equations
Pointwise error and maximum norm estimates are important in estimating the risk of rare events. I will present new convergence results for the approximation, by finite elements in space-time and Monte Carlo sampling in probability, of the exact solution process of the stochastic Allen-Cahn equation. Convergence rates are established for the expected maximum norm in space-time, and are "strong" in this sense. This improves previous results by Katsoulakis et al. (2007) and, although we focus on a specific case, our results can be applied to more general reaction-diffusion equations. The results are based on joint work with Georgios Kossioris and Marco Romito.
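
For a rough picture of the discretise-then-sample setup (a hedged sketch under simplifying assumptions, not the authors' finite element scheme): explicit finite differences for the 1D stochastic Allen-Cahn equation with additive space-time white noise, combined with Monte Carlo estimation of the expected space-time maximum norm.

```python
import numpy as np

# Hedged toy sketch: explicit finite differences in space-time plus Monte
# Carlo sampling for du = (u_xx + u - u^3) dt + sigma dW(t,x) on (0,1) with
# homogeneous Dirichlet data, estimating E[ sup_{t,x} |u| ].
rng = np.random.default_rng(2)
J, T, sigma, M = 50, 0.5, 0.1, 200
dx = 1.0 / J
dt = 0.25 * dx**2                       # explicit-scheme stability restriction
n_steps = int(T / dt)

u = np.tile(np.sin(np.pi * np.linspace(0, 1, J + 1)), (M, 1))  # M copies of u0
maxnorm = np.abs(u).max(axis=1)

for _ in range(n_steps):
    lap = np.zeros_like(u)
    lap[:, 1:-1] = (u[:, 2:] - 2 * u[:, 1:-1] + u[:, :-2]) / dx**2
    noise = np.zeros_like(u)
    noise[:, 1:-1] = sigma * np.sqrt(dt / dx) * rng.normal(size=(M, J - 1))
    u = u + dt * (lap + u - u**3) + noise
    u[:, 0] = u[:, -1] = 0.0                       # Dirichlet boundary values
    maxnorm = np.maximum(maxnorm, np.abs(u).max(axis=1))

print("MC estimate of E[ sup_{t,x} |u| ]:", maxnorm.mean())
```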

Kody Law
Accurate filtering of the Navier-Stokes equation
In the perfect model scenario two ideas drive accurate filtering: (i) observe enough low frequency information, and (ii) model variance inflation: trust the observations. In this talk I will illustrate this for 3DVAR applied to the Navier-Stokes equations, in the low and high frequency observation limits.
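
A minimal fixed-gain 3DVAR loop for a toy chaotic map (illustrative only; the talk concerns the Navier-Stokes equations, and the map, gain and noise level below are assumptions) shows how trusting the observations synchronises the estimate with the truth.

```python
import numpy as np

# Toy fixed-gain 3DVAR: for a model map Psi and noisy observations
# y_{j+1} = v_{j+1} + noise of the true state, the analysis update is
#   m_{j+1} = (1 - k) * Psi(m_j) + k * y_{j+1},
# where a gain k close to 1 corresponds to inflating the model variance,
# i.e. trusting the observations.
rng = np.random.default_rng(3)
Psi = lambda x: 4.0 * x * (1.0 - x)      # chaotic logistic map as a stand-in model
k, obs_noise = 0.8, 0.01

v, m = 0.3, 0.8                          # truth and (badly initialised) estimate
errors = []
for j in range(200):
    v = Psi(v)
    y = v + obs_noise * rng.normal()
    m = (1.0 - k) * Psi(m) + k * y       # 3DVAR update with fixed gain k
    errors.append(abs(m - v))

print("mean error after synchronisation:", np.mean(errors[100:]))
```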

Michael Mascagni 
Novel stochastic methods in biochemical electrostatics
Electrostatic forces and the electrostatic properties of molecules in solution are among the most important issues in understanding the structure and function of large biomolecules. Implicit-solvent models, such as the Poisson-Boltzmann equation (PBE), have been used with great success as a way of computing the electrostatic properties of such molecules. We discuss how to solve an elliptic system of partial differential equations (PDEs) involving the Poisson equation and the PBE using probabilistic, path-integral (Feynman-Kac) representations. This leads to a Monte Carlo method for the solution of this system which is specified by a stochastic process and a score function. We use several techniques to simplify the Monte Carlo method and the stochastic process used in the simulation, such as the walk-on-spheres (WOS) algorithm and an auxiliary sphere technique to handle internal boundary conditions. We then specify some optimizations using the error (bias) and variance to balance the CPU time. We show that our approach is as accurate as widely used deterministic codes, but has many desirable properties that these methods do not. In addition, the currently optimized codes consume CPU times comparable to those of the widely used deterministic codes. Thus, we have a very clear example where a Monte Carlo calculation of a low-dimensional PDE is as fast as or faster than deterministic techniques at similar accuracy levels.
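
The walk-on-spheres idea can be sketched in its simplest setting, the Laplace equation in the unit disk with Dirichlet data (a hedged illustration, not the Poisson-Boltzmann solver of the talk, which also needs internal interface handling).

```python
import numpy as np

# Walk-on-spheres for the Laplace equation in the unit disk with Dirichlet
# data g: u(x) = E[g(X_exit)] for Brownian motion started at x.  From the
# current point we jump to a uniform point on the largest circle centred
# there that fits inside the domain, and stop once within eps of the boundary.
rng = np.random.default_rng(4)

def wos_unit_disk(x0, g, eps=1e-4, n_walks=200_000):
    x = np.tile(np.asarray(x0, dtype=float), (n_walks, 1))
    r = 1.0 - np.linalg.norm(x, axis=1)                   # distance to the boundary
    active = r > eps
    while active.any():
        ra = r[active]
        theta = rng.uniform(0.0, 2.0 * np.pi, ra.size)
        x[active] += ra[:, None] * np.c_[np.cos(theta), np.sin(theta)]
        r[active] = 1.0 - np.linalg.norm(x[active], axis=1)
        active[active] = r[active] > eps
    x_exit = x / np.linalg.norm(x, axis=1, keepdims=True)  # snap to the circle
    return g(x_exit).mean()

g = lambda p: p[:, 0] ** 2 - p[:, 1] ** 2                  # harmonic, so u = g inside
print(wos_unit_disk([0.3, 0.2], g), "  exact:", 0.3**2 - 0.2**2)
```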

Robert Scheichl
Randomising the time horizon in multilevel MC simulations of Lévy processes
Standard Monte Carlo simulation techniques for time dependent SDEs and SPDEs usually first discretise the process using, for example, the simple Euler-Maruyama scheme on some fixed time grid, and then evaluate statistics of certain quantities of interest via sampling and averaging.
A key requirement of this method is the ability to sample from the underlying noise process over a fixed time horizon. While this is straightforward for a pure Wiener process, it is not easy, or even possible, for more general Lévy processes. Recently a new Monte Carlo simulation technique was introduced in [Kuznetsov et al., Ann. Appl. Probab. 21, 2011] that samples the Lévy process over exponentially distributed periods instead. It is based on the Wiener-Hopf factorisation and on randomising the terminal time. While this means that, in contrast to other methods, we introduce an additional error due to the randomisation of the time horizon, it allows exact sampling of the increments for a large class of Lévy processes. We pursue this idea further by first improving and, thereafter, combining their technique with the recently introduced multilevel Monte Carlo methodology, which itself has revolutionised the simulation of SDEs and SPDEs. We provide for the first time a theoretical analysis of the new Monte Carlo simulation technique and of its multilevel variant, and find that the rate of convergence is quasi-optimal and better than all other comparable techniques in the literature, uniformly with respect to the "jump activity" (e.g. as characterised by the Blumenthal-Getoor index).
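
For reference, the standard approach described in the first paragraph looks roughly as follows (a generic sketch, not the Wiener-Hopf/randomised-horizon method of the talk; the SDE and quantity of interest are illustrative).

```python
import numpy as np

# Standard approach: discretise an SDE dX = a(X) dt + b(X) dW with
# Euler-Maruyama on a fixed time grid, then estimate E[phi(X_T)] by
# sampling and averaging.
rng = np.random.default_rng(5)

def euler_maruyama_mc(a, b, phi, x0, T, n_steps, n_samples):
    h = T / n_steps
    x = np.full(n_samples, x0, dtype=float)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(h), n_samples)
        x = x + a(x) * h + b(x) * dW
    vals = phi(x)
    return vals.mean(), vals.std(ddof=1) / np.sqrt(n_samples)

# example: geometric Brownian motion, for which E[X_T] = x0 * exp(mu*T)
mu, sig = 0.05, 0.2
est, err = euler_maruyama_mc(lambda x: mu * x, lambda x: sig * x,
                             lambda x: x, 1.0, 1.0, 256, 100_000)
print(est, "+/-", err, "  exact:", np.exp(mu))
```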


Endre Suli 
Greedy approximation of high-dimensional Ornstein-Uhlenbeck operators
We investigate the convergence of a nonlinear approximation method introduced by Ammar et al. (J. Non-Newtonian Fluid Mech. 139:153-176, 2006) for the numerical solution of high-dimensional Fokker-Planck equations featuring in Navier-Stokes-Fokker-Planck systems that arise in kinetic models of dilute polymers. In the case of Poisson's equation on a rectangular domain in R^2, subject to a homogeneous Dirichlet boundary condition, the mathematical analysis of the algorithm was carried out recently by Le Bris, Lelièvre and Maday (Constr. Approx. 30:621-651, 2009), by exploiting its connection to greedy algorithms from nonlinear approximation theory, explored, for example, by DeVore and Temlyakov (Adv. Comput. Math. 5:173-187, 1996); hence, the variational version of the algorithm, based on the minimization of a sequence of Dirichlet energies, was shown to converge. Here, we extend the convergence analysis of the pure greedy and orthogonal greedy algorithms considered by Le Bris et al. to a technically more complicated situation, where the Laplace operator is replaced by an Ornstein-Uhlenbeck operator of the kind that appears in Fokker-Planck equations arising in bead-spring chain type kinetic polymer models with finitely extensible nonlinear elastic potentials, posed on a high-dimensional Cartesian product configuration space D = D_1 x ... x D_N contained in R^(N d), where each set D_i, i = 1, ..., N, is a bounded open ball in R^d, d = 2, 3.
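
The pure greedy algorithm referred to above can be illustrated in its simplest finite-dictionary form (matching pursuit); in the setting of the talk the "dictionary" consists of separable rank-one functions and each greedy step requires solving a nonlinear variational problem, so the sketch below is only a caricature.

```python
import numpy as np

# Pure greedy algorithm (matching pursuit) with a finite dictionary of unit
# vectors: at each step pick the element with the largest inner product with
# the current residual and subtract the corresponding rank-one correction.

def pure_greedy(f, dictionary, n_steps):
    residual = f.copy()
    approx = np.zeros_like(f)
    for _ in range(n_steps):
        inner = dictionary @ residual                 # <r, g> for every g
        k = np.argmax(np.abs(inner))                  # best-matching element
        approx += inner[k] * dictionary[k]
        residual -= inner[k] * dictionary[k]
    return approx, residual

rng = np.random.default_rng(6)
D = rng.normal(size=(200, 50))
D /= np.linalg.norm(D, axis=1, keepdims=True)         # normalise the dictionary
f = rng.normal(size=50)
approx, res = pure_greedy(f, D, 30)
print("relative residual after 30 greedy steps:", np.linalg.norm(res) / np.linalg.norm(f))
```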

Elisabeth Ullmann
Multilevel Monte Carlo Methods for Groundwater Flow Problems in Random Media
The efficient quantification of uncertainties in the simulation of subsurface flows plays an important role in radioactive waste disposal.
The coefficients in such problems are highly uncertain, rough and oscillatory, resulting in very computationally intensive simulations that reach the limits of all existing methods even on the largest supercomputers. To overcome these limits we employ multilevel Monte Carlo (MLMC), a novel variance reduction technique which combines solution samples computed on a hierarchy of physical grids. We outline the principles of MLMC and apply this technique to a model elliptic problem of single phase flow in random media described by correlated lognormal distributions. A rigorous convergence and complexity analysis of MLMC requires the estimation of the error introduced by the finite element discretisation. We extend recent work on the analysis of standard nodal finite elements to mass-conservative lowest order Raviart-Thomas mixed finite elements. This is very important since the use of mass-conservative discretisation schemes is highly desirable in realistic groundwater flow problems. As in the standard case, the analysis is non-trivial due to the limited spatial regularity and the unboundedness of the employed lognormal random fields.
This is joint work with Andrew Cliffe (Uni Nottingham), Mike Giles (Uni Oxford), Minho Park (Uni Nottingham), Robert Scheichl (Uni Bath) and Aretha Teckentrup (Uni Bath).
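
The MLMC principle outlined above can be summarised in a short skeleton (a generic sketch, not the authors' groundwater flow code; the toy coupled sampler and the sample sizes are illustrative).

```python
import math
import numpy as np

# Multilevel Monte Carlo skeleton.  sample_pair(l) returns, for ONE random
# input (e.g. one realisation of the coefficient field), the quantity of
# interest computed on grid level l and on the coarser level l-1 (0.0 on
# level 0); this coupling is what makes the level differences small.  The
# telescoping sum  E[Q_L] = E[Q_0] + sum_{l=1}^{L} E[Q_l - Q_{l-1}]  is then
# estimated level by level, with most samples on the cheap coarse levels.

def mlmc_estimate(sample_pair, samples_per_level):
    total = 0.0
    for level, n in enumerate(samples_per_level):
        diffs = [fine - coarse
                 for fine, coarse in (sample_pair(level) for _ in range(n))]
        total += np.mean(diffs)
    return total

# toy problem: "level l" = Taylor truncation of exp(Z), Z ~ N(0,1)
rng = np.random.default_rng(7)

def sample_pair(level):
    z = rng.normal()                                  # the SAME z on both levels
    q = lambda l: sum(z**k / math.factorial(k) for k in range(l + 2))
    return q(level), (q(level - 1) if level > 0 else 0.0)

est = mlmc_estimate(sample_pair, [100_000, 20_000, 5_000, 1_000, 200])
print("MLMC estimate:", est, "  reference E[exp(Z)] =", np.exp(0.5))
```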

Dimitri Vvedensky