Least Squares Polynomial Approximations

Posted by Beetle B. on Mon 11 March 2019

First, just a definition: A monic polynomial is one whose leading coefficient is 1.

We want to find a polynomial \(p\) of degree at most \(n\) that minimizes the least-squares error:

\begin{equation*} \int_{a}^{b}w(x)\left(p(x)-f(x)\right)^{2}\ dx \end{equation*}

Let \(\{T_{i}\}_{i=0}^{n}\) be a set of orthogonal polynomials such that \(T_{i}\) is of degree \(i\), with the largest degree being \(n\). This set is complete (i.e. any polynomial in \(\Pi_{n}\) can be represented in this basis).

Using the inner product notation:

\begin{equation*} <f,g>=\int_{a}^{b}w(x)f(x)g(x)\ dx \end{equation*}

Let \(a_{i}=\frac{<f,T_{i}>}{<T_{i},T_{i}>}\).

Then let \(p^{\ast}=\sum_{i=0}^{n}a_{i}T_{i}\). This is the polynomial with minimum error.

Note that the set \(\{T_{i}\}\) will depend on the choice of weight function and interval.
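As a quick numerical illustration (a NumPy sketch; the choice of \(f(x)=e^{x}\) and the degree are mine, not from the book), the coefficients \(a_{i}\) can be computed with Gauss-Legendre quadrature:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Sketch: least-squares fit of f(x) = e^x on [-1, 1] with w(x) = 1,
# using Legendre polynomials. f and the degree are arbitrary choices.
f = np.exp
n = 3

# Gauss-Legendre quadrature handles the inner-product integrals.
nodes, weights = L.leggauss(50)

coeffs = []
for i in range(n + 1):
    Ti = L.Legendre.basis(i)(nodes)
    num = np.sum(weights * f(nodes) * Ti)  # <f, T_i>
    den = 2.0 / (2 * i + 1)                # <T_i, T_i> for Legendre
    coeffs.append(num / den)

p_star = L.Legendre(coeffs)  # p* = sum a_i T_i

xs = np.linspace(-1, 1, 201)
max_err = np.max(np.abs(p_star(xs) - f(xs)))  # small already at degree 3
```

Here \(a_{0}\) comes out to \(\sinh(1)\approx 1.175\), since \(\frac{1}{2}\int_{-1}^{1}e^{x}\,dx=\sinh(1)\).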

Proof:

For convenience, assume the \(T_{i}\) are normalized, i.e. \(<T_{i},T_{i}>=1\).

If we compute the error with this polynomial, we get \(<f,f>-\sum_{i}<f,T_{i}>^{2}\).

Now let \(g(x)\in\Pi_{n}\), with \(g(x)=\sum_{i}b_{i}T_{i}\). The error with this polynomial is:

\begin{equation*} <f,f>-2\sum b_{i}<f,T_{i}>+\sum b_{i}^{2} \end{equation*}

To minimize this expression, we want to maximize:

\begin{equation*} \sum b_{i}\left[2<f,T_{i}>-b_{i}\right] \end{equation*}

Since the \(b_{i}\) are independent, it suffices to maximize each term individually. Setting the derivative with respect to \(b_{i}\) to zero gives \(b_{i}=<f,T_{i}>\), which is the same polynomial as \(p^{\ast}\).

Now the book has examples of the following orthogonal polynomials:

Legendre Polynomials

Definition

  • \(w(x)=1\)
  • \([a,b]=[-1,1]\)

Defined by:

\begin{equation*} T_{0}(x)=1 \end{equation*}
\begin{equation*} T_{1}(x)=x \end{equation*}
\begin{equation*} T_{n}(x)=\frac{2n-1}{n}xT_{n-1}(x)-\frac{n-1}{n}T_{n-2}(x) \end{equation*}

Scalar Product

For the scalar product:

\begin{equation*} <T_{i},T_{i}>=\frac{2}{2i+1} \end{equation*}

If \(i\ne j\), then \(<T_{i},T_{j}>=0\) (they are orthogonal). Did not derive this.
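A small numerical check of the recurrence and the scalar products above (a NumPy sketch; the specific indices are arbitrary):

```python
import numpy as np

def legendre(n, x):
    """Evaluate the degree-n Legendre polynomial via the recurrence above."""
    t0, t1 = np.ones_like(x), x
    if n == 0:
        return t0
    for k in range(2, n + 1):
        t0, t1 = t1, ((2 * k - 1) * x * t1 - (k - 1) * t0) / k
    return t1

# <T_i, T_j> on [-1, 1] with w(x) = 1, via Gauss-Legendre quadrature.
nodes, weights = np.polynomial.legendre.leggauss(30)

def inner(i, j):
    return np.sum(weights * legendre(i, nodes) * legendre(j, nodes))
```

`inner(3, 3)` comes out to \(2/7\), matching \(2/(2i+1)\), while `inner(2, 5)` is zero to machine precision.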

Chebyshev

Definition

  • \(w(x)=1/\sqrt{1-x^{2}}\)
  • \([a,b]=[-1,1]\)

It is defined by:

\begin{equation*} T_{0}(x)=1 \end{equation*}
\begin{equation*} T_{1}(x)=x \end{equation*}
\begin{equation*} T_{n}(x)=2xT_{n-1}(x)-T_{n-2}(x)=\cos(n\cos^{-1}(x)) \end{equation*}
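The recurrence and the trigonometric form can be checked against each other numerically (a NumPy sketch; \(n=7\) is an arbitrary choice):

```python
import numpy as np

def cheb(n, x):
    """Chebyshev T_n via the recurrence T_n = 2x T_{n-1} - T_{n-2}."""
    t0, t1 = np.ones_like(x), x
    if n == 0:
        return t0
    for _ in range(2, n + 1):
        t0, t1 = t1, 2 * x * t1 - t0
    return t1

xs = np.linspace(-1, 1, 101)
# Both forms agree on [-1, 1] (the trig form only makes sense there).
agree = np.allclose(cheb(7, xs), np.cos(7 * np.arccos(xs)))
```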

Scalar Product

For the scalar product:

\begin{equation*} <T_{0},T_{0}>=\pi \end{equation*}

If \(i\ne 0\):

\begin{equation*} <T_{i},T_{i}>=\pi/2 \end{equation*}

If \(i\ne j\), they are orthogonal.

Explicit Form

You can write the Chebyshev polynomial explicitly:

\begin{equation*} T_{n}(x)=\frac{n}{2}\sum_{k=0}^{\lfloor n/2 \rfloor}\left(-1\right)^{k}\frac{(n-k-1)!}{k!(n-2k)!}\left(2x\right)^{n-2k} \end{equation*}

(I did not prove this).
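The explicit sum can be compared against the trigonometric form for \(n\ge 1\) (a sketch; at \(n=0\) the \((n-k-1)!\) factor breaks down, so start at \(n=1\)):

```python
import math
import numpy as np

def cheb_explicit(n, x):
    """Chebyshev T_n from the explicit sum formula (valid for n >= 1)."""
    total = np.zeros_like(x)
    for k in range(n // 2 + 1):
        total += ((-1) ** k
                  * math.factorial(n - k - 1)
                  / (math.factorial(k) * math.factorial(n - 2 * k))
                  * (2 * x) ** (n - 2 * k))
    return (n / 2) * total

xs = np.linspace(-1, 1, 51)
match = np.allclose(cheb_explicit(6, xs), np.cos(6 * np.arccos(xs)))
```

As a spot check, \(n=2\) gives \(2x^{2}-1\), as expected from the recurrence.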

Roots

\(T_{n}\) has \(n\) real roots, all in \([-1,1]\). For proof, note that \(T_{n}(x)=\cos(n\cos^{-1}(x))\), and the cosine is 0 exactly at the odd multiples of \(\pi/2\), i.e. at \((2k+1)\pi/2\). Thus:

\begin{equation*} n\cos^{-1}(x)=(2k+1)\frac{\pi}{2} \end{equation*}

And ultimately:

\begin{equation*} x=\cos\left(\frac{2k+1}{n}\frac{\pi}{2}\right) \end{equation*}

for \(k=0,1,\dots,n-1\). Since the cosine takes values in \([-1,1]\), so do the roots.
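A numerical spot check of the root formula (\(n=5\) is an arbitrary choice):

```python
import numpy as np

n = 5
k = np.arange(n)
roots = np.cos((2 * k + 1) / n * np.pi / 2)  # the formula above, k = 0..n-1

# Each root should make T_n(x) = cos(n arccos x) vanish.
vals = np.cos(n * np.arccos(roots))
all_zero = np.allclose(vals, 0.0)
```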

Extrema

There are \(n+1\) extrema for \(T_{n}(x)\). They occur at \(\cos\left(\frac{k\pi}{n}\right)\) for \(k=0,1,\dots,n\), and the values at the extrema alternate between \(+1\) and \(-1\).

For proof, simply use the \(T_{n}(x)=\cos(n\cos^{-1}(x))\) definition and differentiate.
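Checking the extrema numerically (\(n=6\) is arbitrary):

```python
import numpy as np

n = 6
k = np.arange(n + 1)
extrema = np.cos(k * np.pi / n)        # the n + 1 extreme points
vals = np.cos(n * np.arccos(extrema))  # T_n at those points

# T_n(cos(k pi / n)) = cos(k pi) = (-1)^k, so the values alternate +-1.
alternates = np.allclose(vals, (-1.0) ** k)
```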

Minimal \(\infty\) Norm

Let’s consider all monic polynomials of degree \(n\). We want to find the one that minimizes the maximum magnitude on the interval \([-1,1]\). This turns out to be the scaled Chebyshev polynomial:

\begin{equation*} \frac{1}{2^{n-1}}T_{n}(x) \end{equation*}

To prove this, assume a monic polynomial \(q(x)\) of degree \(n\) exists with a smaller maximum magnitude. Construct the following polynomial:

\begin{equation*} f_{n}(x)=\frac{1}{2^{n-1}}T_{n}(x)-q(x) \end{equation*}

Since both polynomials are monic, their leading terms cancel, so \(f_{n}\) has degree at most \(n-1\). Now by assumption, we have

\begin{equation*} \left|q(x)\right|<\left|\frac{1}{2^{n-1}}T_{n}(x)\right| \end{equation*}

at the extreme points. This means that \(f_{n}(x)>0\) at the extreme points where \(T_{n}=1\), and \(f_{n}(x)<0\) at those where \(T_{n}=-1\). There are \(n+1\) extrema for \(T_{n}(x)\), alternating in sign, so \(f_{n}\) changes sign \(n\) times and thus has \(n\) roots in the interval. But a nonzero polynomial of degree at most \(n-1\) has at most \(n-1\) roots. This is a contradiction.

Now you can translate/scale this to \([a,b]\) to get:

\begin{equation*} \frac{(b-a)^{n}}{2^{2n-1}}T_{n}\left(\frac{2x-b-a}{b-a}\right) \end{equation*}
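To see the minimal-norm property in action, compare the scaled Chebyshev polynomial with another monic polynomial of the same degree, say \(x^{n}\) (a NumPy sketch; \(n=4\) is arbitrary):

```python
import numpy as np

n = 4
xs = np.linspace(-1, 1, 2001)
Tn = np.cos(n * np.arccos(xs))

monic_cheb = Tn / 2 ** (n - 1)  # the minimizing monic polynomial
naive = xs ** n                 # another monic polynomial of degree n

cheb_norm = np.max(np.abs(monic_cheb))  # 2^(1-n) = 0.125 for n = 4
naive_norm = np.max(np.abs(naive))      # 1.0, eight times larger
```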

Jacobi Polynomials

  • \(w(x)=(1-x)^{\alpha}(1+x)^{\beta}\) with \(\alpha,\beta>-1\)
  • Interval: \([-1,1]\)

It is explicitly given as:

\begin{equation*} T_{n}(x)=\frac{1}{2^{n}}\sum_{m=0}^{n}\dbinom{n+\alpha}{m}\dbinom{n+\beta}{n-m}(x-1)^{n-m}(x+1)^{m} \end{equation*}

The scalar product is too nasty to write.
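The explicit formula can still be sanity-checked numerically, e.g. against SciPy's `eval_jacobi` (assuming SciPy is available; `binom` handles the generalized binomial coefficients, and the parameter values below are arbitrary):

```python
import numpy as np
from scipy.special import binom, eval_jacobi

def jacobi_explicit(n, alpha, beta, x):
    """Jacobi polynomial from the explicit sum formula above."""
    total = np.zeros_like(x)
    for m in range(n + 1):
        total += (binom(n + alpha, m) * binom(n + beta, n - m)
                  * (x - 1) ** (n - m) * (x + 1) ** m)
    return total / 2 ** n

xs = np.linspace(-1, 1, 41)
match = np.allclose(jacobi_explicit(4, 0.5, -0.3, xs),
                    eval_jacobi(4, 0.5, -0.3, xs))
```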

Laguerre Polynomials

  • \(w(x)=e^{-x}\)
  • Interval: \([0,\infty)\)

Definition:

\begin{equation*} T_{n}(x)=\frac{e^{x}}{n!}\frac{d^{n}}{dx^{n}}(x^{n}e^{-x}) \end{equation*}

The scalar product is:

\begin{equation*} <T_{i},T_{i}>=1 \end{equation*}

This was not derived.
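The normalization can be verified with Gauss-Laguerre quadrature, which builds the weight \(e^{-x}\) on \([0,\infty)\) into its nodes and weights (a NumPy sketch; the indices are arbitrary):

```python
import numpy as np
from numpy.polynomial import laguerre as Lag

# Gauss-Laguerre: sum_k w_k g(x_k) approximates int_0^inf e^{-x} g(x) dx.
nodes, weights = Lag.laggauss(20)

def inner(i, j):
    Ti = Lag.Laguerre.basis(i)(nodes)
    Tj = Lag.Laguerre.basis(j)(nodes)
    return np.sum(weights * Ti * Tj)
```

`inner(3, 3)` returns 1 and `inner(2, 5)` returns 0, to machine precision.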