
Uncertainty Quantification

Authors: Theodoros Assiotis, Neil Chada, Pavlos Tsatsoulis, Curtis Wilson

Supervisors: Claudia Schillings, Aretha Teckentrup


We are interested in numerically approximating solutions of the following PDE problem:

$-\nabla\cdot(a(\cdot,\omega)\nabla u(\cdot,\omega)) = f \text{ in } D,$
$u(\cdot,\omega) = 0 \text{ on } \partial D.$

We consider the above problem on a probability space $(\Omega,\mathcal{F},\mathbb{P})$, together with the uniform ellipticity assumption,

$\mathbb{P}\left(a_{\min} \le a(x,\cdot) \le a_{\max}, \ \forall x \in D\right) = 1.$

Furthermore, we consider coefficients of the form,

$a(\cdot,\omega) = a_0(\cdot) + \sum_{k=1}^{K} a_k(\cdot)\, y_k(\omega), \quad y_k(\omega) \in [-1,1] \ \forall k,$

and we rewrite the equation in the following weak form,

$A(u,v;y) = \ell(v) := \int_D f\, v, \quad \forall v \in H_0^1(D),\ y \in \Gamma = [-1,1]^K, \qquad (1)$

where,

$A(u,v;y) = A_0(u,v) + \sum_{k=1}^{K} A_k(u,v)\, y_k,$

and

$A_k(u,v) = \int_D a_k\, \nabla u \cdot \nabla v.$
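
As a concrete illustration of this affine structure, here is a minimal sketch of the offline/online split it enables: each bilinear form $A_k$ is assembled once, and for any $y \in \Gamma$ the full system is recovered as a cheap linear combination. The sketch assumes a piecewise linear FEM on a uniform mesh of $D = [0,1]$ with $f \equiv 1$, $K = 2$, and hypothetical coefficients $a_0 \equiv 1$, $a_k(x) = 0.1\cos(k\pi x)/k^2$, chosen purely for illustration and not those used in the project.

```python
import numpy as np

# Minimal 1D P1-FEM sketch for the affine problem (1) on D = [0,1].
# Illustrative choices only (not the project's coefficients):
# a_0 = 1, a_k(x) = 0.1*cos(k*pi*x)/k^2, f = 1, K = 2.

n_el = 100                             # number of elements
h = 1.0 / n_el
nodes = np.linspace(0.0, 1.0, n_el + 1)
mid = 0.5 * (nodes[:-1] + nodes[1:])   # element midpoints

def stiffness(a_vals):
    """Assemble A_ij = int_D a * phi_i' * phi_j', with a approximated as
    piecewise constant per element; interior nodes only (u = 0 on the boundary)."""
    n = n_el - 1                       # interior degrees of freedom
    A = np.zeros((n, n))
    for e in range(n_el):              # element e couples interior dofs e-1 and e
        k_loc = a_vals[e] / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
        for i_loc, i in enumerate((e - 1, e)):
            for j_loc, j in enumerate((e - 1, e)):
                if 0 <= i < n and 0 <= j < n:
                    A[i, j] += k_loc[i_loc, j_loc]
    return A

K = 2
a_funcs = [lambda x: np.ones_like(x)] + \
          [lambda x, k=k: 0.1 * np.cos(k * np.pi * x) / k**2 for k in range(1, K + 1)]

# Offline stage: one stiffness matrix per term of the affine expansion.
A_terms = [stiffness(a(mid)) for a in a_funcs]   # [A_0, A_1, ..., A_K]
b = h * np.ones(n_el - 1)                        # load vector for f = 1

def solve(y):
    """Online stage: assemble A(y) = A_0 + sum_k y_k A_k and solve, for y in Gamma."""
    A = A_terms[0] + sum(y[k] * A_terms[k + 1] for k in range(K))
    return np.linalg.solve(A, b)

u = solve(np.array([0.3, -0.5]))   # one deterministic realisation of the solution
```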

The general idea of the method we use, the Reduced Basis Method (RBM), is to approximate realisations of the solution (which we compute using a finite element method, FEM) by deterministic solutions obtained from (1) on a finite subset of the parameter space $\Gamma$. An extended description of this method applied to (1) can be found in [1]. More precisely, given a finite subset $\Xi_{train}$ of $\Gamma$, called the training set, we construct a subset $S$ of $\Xi_{train}$ whose solutions are used to approximate any realisation of the solution of (1). The training set must satisfy certain criteria: for example, it should be cheap to compute and should not contain too many redundant samples. However, it is difficult to know a priori whether a given choice of training set is efficient. An extended description of the Lagrange optimal points, together with a numerical construction, is given in [2]. Motivated by [3], where the Reduced Basis Method is compared with the Stochastic Collocation Method, we use this construction to compare the Lagrange optimal points against other training sets in terms of accuracy.
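
The following is a minimal sketch of one such greedy construction of $S$ and the corresponding reduced basis, building on the `solve` routine above. For simplicity the selection criterion here is the true projection error over $\Xi_{train}$; in practice a cheap residual-based a posteriori error estimator would be used instead, as discussed in [1].

```python
# Greedy construction of the reduced basis, building on solve() above.
# Selection uses the true projection error over Xi_train for simplicity;
# in practice a cheap residual-based a posteriori estimator replaces it.

def greedy_rb(Xi_train, n_max=10, tol=1e-8):
    """Greedily pick parameters S from Xi_train and return an orthonormal
    reduced basis V whose columns are (orthogonalised) FEM snapshots."""
    snapshots = {tuple(y): solve(y) for y in Xi_train}   # "truth" solves
    y0 = Xi_train[0]
    V = snapshots[tuple(y0)][:, None]
    V /= np.linalg.norm(V)
    S = [y0]
    for _ in range(n_max - 1):
        # Error of the best approximation from span(V) at every training point.
        errs = [np.linalg.norm(snapshots[tuple(y)] - V @ (V.T @ snapshots[tuple(y)]))
                for y in Xi_train]
        i_best = int(np.argmax(errs))
        if errs[i_best] < tol:
            break
        u_new = snapshots[tuple(Xi_train[i_best])]
        u_new = u_new - V @ (V.T @ u_new)        # Gram-Schmidt against V
        V = np.hstack([V, (u_new / np.linalg.norm(u_new))[:, None]])
        S.append(Xi_train[i_best])
    return V, S

def solve_rb(y, V):
    """Online reduced solve: Galerkin projection of the affine system onto span(V)."""
    A = A_terms[0] + sum(y[k] * A_terms[k + 1] for k in range(K))
    c = np.linalg.solve(V.T @ A @ V, V.T @ b)
    return V @ c
```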

Throughout our work we explore different ways of approximating solutions of (1) in the case $D = [0,1]$. In particular, we are interested in different training sets $\Xi_{train}$, on which we solve our problem, and we then compare the resulting solutions in terms of their accuracy. We consider four types of training sets: random points, a uniform grid, Clenshaw-Curtis points, and Lagrange optimal points. We observe (see Numerical Results) that the Lagrange optimal points outperform the other training sets, while the Clenshaw-Curtis points also exhibit interesting behaviour.
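
For reference, here is a sketch of how the first three types of training set can be generated on $\Gamma = [-1,1]^K$ (shown with $K = 2$; the point counts are arbitrary). The Lagrange optimal points are produced by the numerical optimisation described in [2] and are not generated here.

```python
from itertools import product

# The training-set types on Gamma = [-1,1]^K (here K = 2, sizes arbitrary).
# Lagrange optimal points come from the numerical optimisation in [2] and are
# therefore not generated in this sketch.

def random_points(m, K, seed=0):
    rng = np.random.default_rng(seed)
    return list(rng.uniform(-1.0, 1.0, size=(m, K)))

def uniform_grid(n_1d, K):
    return [np.array(p) for p in product(np.linspace(-1.0, 1.0, n_1d), repeat=K)]

def clenshaw_curtis(n_1d, K):
    # 1D Clenshaw-Curtis nodes cos(j*pi/(n_1d - 1)), j = 0, ..., n_1d - 1, tensorised.
    pts = np.cos(np.pi * np.arange(n_1d) / (n_1d - 1))
    return [np.array(p) for p in product(pts, repeat=K)]

Xi_train = clenshaw_curtis(5, K)   # e.g. a 5 x 5 tensor grid in [-1,1]^2
V, S = greedy_rb(Xi_train)         # reduced basis built on this training set
```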

References

  • [1] R. DeVore. The Theoretical Foundation of Reduced Basis Methods.
  • [2] M. Gunzburger, A. L. Teckentrup. Optimal Point Sets for Total Degree Polynomial Interpolation in Moderate Dimensions. arXiv:1407.3291 [math.NA].
  • [3] P. Chen, A. Quarteroni, G. Rozza. Comparison Between Reduced Basis and Stochastic Collocation Methods for Elliptic Problems. Journal of Scientific Computing, Vol. 59, Issue 1, 2014.

Acknowledgements

We thank our supervisors Dr Aretha Teckentrup and Dr Claudia Schillings for their help.
We also acknowledge funding from EPSRC and support from the MASDOC CDT.

Contact: T.Assiotis at warwick.ac.uk, N.Chada at warwick.ac.uk, P.Tsatsoulis at warwick.ac.uk, Curtis.Wilson at warwick.ac.uk
