
Reduced Basis Method

General Philosophy

It is worth spending a high computational cost to obtain "good approximation" subspaces. These subspaces are built hierarchically, in a greedy manner, until some tolerance is satisfied. At the end we have an approximation subspace,

$X_{N_{max}}=\text{span} \{ u_h(y^1),u_h(y^2), \ldots, u_h(y^{N_{max}})\},$

where $\{u_h(y^i)\}_{i=1}^{N_{max}}$ are called snapshots and $u_h(y^i)$ solves (2) for $y=y^i\in \Xi_{train}$. The points $\{y^i\}_{i=1}^{N_{max}}$ are chosen using the Greedy Algorithm. If we want the solution of (2) for an arbitrary $y\in \Gamma$, we employ a Galerkin projection onto $X_{N_{max}}$,

$(u(y))(x)=\sum_{m=1}^{N_{max}} c_m^{N_{max}}(y) (u_h(y^m))(x).$
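The Galerkin projection therefore amounts to solving a small $N_{max}\times N_{max}$ linear system for the coefficients $c_m^{N_{max}}(y)$. As a rough illustration, here is a minimal Python sketch (not the project's actual code) of that projection, assuming the affine parameter dependence $A(\cdot,\cdot;y)=\sum_{k=0}^{K} y_k A_k(\cdot,\cdot)$ suggested by the error-bound formula below; the names `A_k_red`, `f_red` and `Z` are hypothetical placeholders for the precomputed reduced matrices, the reduced load vector and the snapshot matrix.

```python
# Minimal sketch (not the project's code) of the Galerkin projection onto the
# reduced space, assuming the affine decomposition A(.,.;y) = sum_k y_k A_k(.,.)
# implied by the error-bound formula below.  Hypothetical inputs:
#   A_k_red : list of (N_max x N_max) matrices Z^T A_k Z
#   f_red   : reduced load vector Z^T f
#   Z       : (n_h x N_max) matrix whose columns are the basis snapshots
import numpy as np

def rb_solve(y, A_k_red, f_red):
    """Solve the small reduced system for the coefficients c_m(y)."""
    A_red = sum(y_k * A_k for y_k, A_k in zip(y, A_k_red))
    return np.linalg.solve(A_red, f_red)

def rb_reconstruct(c, Z):
    """Expand the reduced solution u(y) = sum_m c_m(y) u_h(y^m) in the FE space."""
    return Z @ c
```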

Three Key Ingredients

Training Set

In our project we always start off by picking a training set $\Xi_{train}$, consisting of $M$ points from the parameter space $\Gamma$, and we run the greedy algorithm on this set. The training set should be cheap to generate and should not contain too many uninformative samples, so as to avoid unnecessary computation, yet it must be rich enough to capture the most representative snapshots.
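As a concrete (hypothetical) illustration, one simple way to generate such a set is uniform random sampling of a box-shaped parameter domain; the bounds and the value of $M$ below are placeholders, not values from our project.

```python
# Minimal sketch of one possible way to build Xi_train: M points drawn
# uniformly from a box-shaped parameter domain Gamma (bounds are placeholders).
import numpy as np

def build_training_set(M, lower, upper, seed=0):
    """Return an (M x dim) array of parameter samples."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    return lower + (upper - lower) * rng.random((M, lower.size))

# Example: M = 1000 samples of a 4-dimensional parameter in [0.1, 10]^4.
Xi_train = build_training_set(1000, [0.1] * 4, [10.0] * 4)
```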

Greedy Algorithm

Suppose we are given a training set $\Xi_{train}$, a sample set $S_{1}= \{ y^{1} \}$ and a reduced basis space $X_{1}=\text{span}\{u_h(y^1)\}$. We seek to pick $y^{2},y^{3},\ldots,y^{N_{max}}$ and build nested reduced basis spaces $X_{N}=\text{span}\{u_h(y^n),\ 1 \leq n \leq N\}$ in a greedy manner by solving the following optimization problem: for $N=2,\ldots,N_{max}$ find,

$y^{N}=\arg\max_{y\in \Xi_{train}} \Delta_{N-1}(y),$

where $\Delta_{N-1}$ is the a posteriori error bound (see below). We then add $y^N$ to the sample set to get $S_{N}=S_{N-1} \cup \{ y^{N} \}$ and augment our basis space to get $X_N=X_{N-1} \oplus \text{span}\{u_h(y^N)\}$. We finally orthonormalize the snapshots $u_h(y^{n})$ in $X=(V_h, (\cdot,\cdot)_X)$, where $(\cdot,\cdot)_X := A(\cdot,\cdot;0)$; the resulting orthonormalized snapshots are denoted $\zeta_h^n$ below.
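To make the procedure concrete, here is a minimal Python sketch of the greedy loop under the assumptions stated in the comments; `truth_solve`, `error_bound` and `X_inner` are hypothetical helpers standing for the finite element solve of (2), the evaluation of $\Delta_N(y)$, and the inner product $(\cdot,\cdot)_X$, and the stopping tolerance is the one mentioned in the General Philosophy section.

```python
# Minimal sketch of the greedy algorithm described above.  Hypothetical helpers:
#   truth_solve(y)        -> finite element snapshot u_h(y) as a numpy vector
#   error_bound(y, basis) -> a posteriori bound Delta_N(y) for the current basis
#   X_inner(u, v)         -> inner product (u, v)_X = A(u, v; 0)
import numpy as np

def normalize(u, X_inner):
    return u / np.sqrt(X_inner(u, u))

def greedy(Xi_train, y1, N_max, tol, truth_solve, error_bound, X_inner):
    S = [y1]                                          # sample set S_1 = {y^1}
    basis = [normalize(truth_solve(y1), X_inner)]     # X_1 = span{u_h(y^1)}
    for N in range(2, N_max + 1):
        deltas = [error_bound(y, basis) for y in Xi_train]
        i_max = int(np.argmax(deltas))
        if deltas[i_max] < tol:                       # tolerance reached: stop
            break
        y_new = Xi_train[i_max]                       # y^N = argmax Delta_{N-1}
        S.append(y_new)
        zeta = truth_solve(y_new)                     # new snapshot u_h(y^N)
        for z in basis:                               # Gram-Schmidt in (.,.)_X
            zeta = zeta - X_inner(zeta, z) * z
        basis.append(normalize(zeta, X_inner))
    return S, basis
```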

A posteriori error bound

Let $e(y)$ be the true error. Then we have the following bound,

$\|e(y)\|_X\leq \frac{\|\hat{e}(y)\|_X}{a_{LB}(y)}:=\Delta_N(y),$

where $a_{LB}(y)$ is the lower bound for the coercivity constant and $\|\hat{e}(y)\|_X$ is given by the formula,

$\|\hat{e}(y)\|_{X}^{2}= (\Phi,\Phi)_{X}+ \mathscr{G}(\Phi, \Psi, y),$

where,

$\mathscr{G}(\Phi, \Psi, y)= \sum_{k=0}^{K}\sum_{n=1}^{N} y_{k} c_n^{N}(y) \left(2(\Phi,\Psi_{n}^{k})_{X}+\sum_{k'=0}^{K}\sum_{n'=1}^{N} y_{k'} c_{n'}^{N}(y) (\Psi_{n}^{k},\Psi_{n'}^{k'})_{X}\right)$

and $\Phi$ and $\Psi_{n}^{k}$ are the Riesz representatives defined by $(\Phi,v)_{X}=l(v)$ and $(\Psi_{n}^{k},v)_{X}=-A_{k}(\zeta_h^n,v)$, for all $v \in X$, $1\leq n \leq N$, $0 \leq k \leq K$, and $\{\zeta_h^n\}_{n=1}^N$ are the orthonormalised snapshots.
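Because the quantities $(\Phi,\Phi)_X$, $(\Phi,\Psi_n^k)_X$ and $(\Psi_n^k,\Psi_{n'}^{k'})_X$ do not depend on $y$, they can be computed once and the bound $\Delta_N(y)$ can then be evaluated cheaply for every $y\in\Xi_{train}$. The following is a minimal sketch of that evaluation, assuming the precomputed inner products are stored in the (hypothetical) arrays `phi_phi`, `phi_psi` and `psi_psi`, and that `a_LB` evaluates the coercivity lower bound.

```python
# Minimal sketch of evaluating Delta_N(y) from the formula above, assuming
# the y-independent inner products have been precomputed (hypothetical names):
#   phi_phi : scalar (Phi, Phi)_X
#   phi_psi : (K+1, N) array, entry [k, n] = (Phi, Psi_n^k)_X
#   psi_psi : (K+1, N, K+1, N) array, entry [k, n, k', n'] = (Psi_n^k, Psi_n'^k')_X
import numpy as np

def error_bound(y, c, phi_phi, phi_psi, psi_psi, a_LB):
    """Delta_N(y) = ||e_hat(y)||_X / a_LB(y)."""
    yc = np.outer(y, c)                                  # yc[k, n] = y_k * c_n(y)
    cross = 2.0 * np.sum(yc * phi_psi)                   # 2 sum y_k c_n (Phi, Psi_n^k)_X
    quad = np.einsum('kn,knlm,lm->', yc, psi_psi, yc)    # double sum over (k,n), (k',n')
    e_hat_sq = phi_phi + cross + quad
    return np.sqrt(max(e_hat_sq, 0.0)) / a_LB(y)         # clip round-off below zero
```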