We will often approximate functions by polynomials or rational functions. Doing so introduces two types of error:
- Approximation: a measure of the distance between the function and the polynomial.
- Evaluation: the error introduced by evaluating the polynomial in floating-point arithmetic.
\(\mathcal{P}_{n}\) is the set of all polynomials of degree at most \(n\), with real coefficients.
\(\mathcal{R}_{p,q}\) is the set of all rational functions with real coefficients whose numerator and denominator have degree at most \(p\) and \(q\), respectively.
We will want the precision at which we store the coefficients to be greater than the precision of the desired result. Is there a concrete example of why?
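One concrete (if illustrative) example, sketched in Python with NumPy: take the degree-12 Taylor polynomial of \(e^x\) on \([0,1]\) as a stand-in for a real minimax polynomial, and compare keeping its coefficients in double precision against rounding them to single precision before evaluating. Even though both polynomials are evaluated entirely in double precision, the coefficient rounding alone raises the error well above the polynomial's approximation error.

```python
import math
import numpy as np

# Degree-12 Taylor coefficients of exp around 0 (an illustrative choice,
# not a true minimax polynomial).
coeffs = np.array([1.0 / math.factorial(k) for k in range(13)])
# Round the stored coefficients to single precision, then promote back,
# so only the *storage* precision differs between the two polynomials.
coeffs32 = coeffs.astype(np.float32).astype(np.float64)

def horner(c, x):
    """Evaluate sum_k c[k] * x**k with Horner's rule (all in float64)."""
    acc = np.zeros_like(x)
    for ck in c[::-1]:
        acc = acc * x + ck
    return acc

xs = np.linspace(0.0, 1.0, 1001)
exact = np.exp(xs)
err64 = np.max(np.abs(horner(coeffs, xs) - exact))    # ~ truncation error only
err32 = np.max(np.abs(horner(coeffs32, xs) - exact))  # coefficient rounding dominates
print(err64, err32)
```

In this sketch the full-precision polynomial's error is set by the Taylor truncation (around \(10^{-10}\)), while the single-precision coefficients raise it by more than an order of magnitude — so the stored coefficients must carry more precision than the accuracy we want out.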
Let \(p^{\ast}\) be the polynomial approximation of \(f(x)\). There are two commonly used ways of measuring the approximation error:
- Least Squares approximation:
\(\|p^{\ast}-f\|_{2,[a,b]}=\sqrt{\int_{a}^{b}w(x)\left(p^{\ast}(x)-f(x)\right)^{2}\,dx}\)
- \(w(x)\) is a continuous, nonnegative weight function.
- This measures the (weighted) average error over the interval.
- Minimax:
\(\|p^{\ast}-f\|_{\infty,[a,b]}=\max_{a\le x\le b}w(x)\,|p^{\ast}(x)-f(x)|\)
- \(w(x)=1\) minimizes the absolute error; \(w(x)=1/|f(x)|\) minimizes the relative error.
- This measures the maximum (worst-case) error over the interval.
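Both norms are easy to estimate numerically. As a sketch (assuming Python with NumPy; the degree-4 least-squares fit to \(e^x\) on \([0,1]\) with \(w=1\) is an illustrative choice), we can fit on a dense grid and then report both the \(L_2\) and \(L_\infty\) errors of the same approximation:

```python
import numpy as np

f = np.exp
a, b = 0.0, 1.0
xs = np.linspace(a, b, 2001)

# Discrete least-squares fit on the grid (approximates the w = 1 L2 fit).
coeffs = np.polyfit(xs, f(xs), deg=4)
r = np.polyval(coeffs, xs) - f(xs)  # residual p*(x) - f(x)

# L2 error: approximate the integral of r^2 by a Riemann sum (w = 1).
l2 = np.sqrt(np.mean(r**2) * (b - a))
# L-infinity (minimax-style) error, estimated on the same grid.
linf = np.max(np.abs(r))
print(l2, linf)
```

As expected from the definitions, the averaged \(L_2\) error comes out smaller than the worst-case \(L_\infty\) error for the same polynomial.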
We could instead interpolate at well-chosen points (e.g. the Chebyshev points) and get very good results this way. However, interpolation may require polynomials of high degree, and for performance we prefer polynomials of smaller degree (in the low tens).
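A small experiment illustrates why Chebyshev interpolation works so well (a sketch assuming NumPy; the target \(f(x)=e^x\) on \([0,1]\) and the degrees are illustrative): interpolate at the Chebyshev points of the first kind and watch the maximum error fall rapidly as the degree grows.

```python
import numpy as np

f = np.exp
a, b = 0.0, 1.0
xs = np.linspace(a, b, 2001)  # dense grid for estimating the max error

errs = []
for n in (2, 4, 8):
    k = np.arange(n + 1)
    # Chebyshev points of the first kind on [-1, 1], mapped to [a, b].
    t = np.cos((2 * k + 1) * np.pi / (2 * (n + 1)))
    nodes = 0.5 * (a + b) + 0.5 * (b - a) * t
    # Degree-n polynomial through n+1 nodes: the interpolant.
    coeffs = np.polyfit(nodes, f(nodes), deg=n)
    errs.append(np.max(np.abs(np.polyval(coeffs, xs) - f(xs))))
    print(n, errs[-1])
```

For a smooth function like this, the error drops by several orders of magnitude each time the degree doubles, which is why modest degrees already suffice in practice.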