Polynomials With Exact Representable Coefficients

Posted by Beetle B. on Wed 24 April 2019

An Iterative Method

Compute the minimax approximation in a wider format. Then round the coefficient of the constant term, \(a_{0}\), to the target format. Then recompute the approximation, but enforce that the constant term is fixed. Call this polynomial \(P_{1}\). Now define \(\hat{P}_{1}\) as the polynomial you get from \(P_{1}\) after rounding all the coefficients. Then compare \(\supf{f}{\hat{P}_{1}}\) with \(\supf{f}{P_{1}}\). If the numbers are close, stop. If not, do the same with \(a_{1}\). Keep this up until you get a close enough minimax error.

Now suppose we’ve “frozen” coefficients \(a_{0},a_{1},\dots\). How do we compute the best polynomial under these constraints? Let your new \(f'\) be \(f-a_{0}-a_{1}x-\cdots\) divided by the appropriate power of \(x\), and look for a lower-degree polynomial as the result. So if you fix only \(a_{0}\), the polynomial you are aiming to compute has degree \(n-1\), and so on.

Specifically, if you froze \(a_{0}\) and \(a_{1}\), your new \(f'\) is:

\begin{equation*} \frac{f-a_{0}-a_{1}x}{x^{2}} \end{equation*}
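For example, with \(n=3\) and \(a_{0},a_{1}\) frozen, you compute the degree-1 minimax approximation \(a_{2}+a_{3}x\) to this new function, and the final polynomial is \(a_{0}+a_{1}x+x^{2}\left(a_{2}+a_{3}x\right)\).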

Note that this method does not guarantee the best solution. But it usually yields a good one.
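Here is a minimal sketch of the iteration in Python. It is an illustration under some assumptions of mine: a Chebyshev interpolant stands in for a true minimax (Remez) routine, rounding is to single precision, and the early-stopping comparison of \(\supf{f}{\hat{P}_{k}}\) with \(\supf{f}{P_{k}}\) is skipped (every coefficient simply gets frozen in turn). The helper names are mine, not from the book.

```python
import numpy as np
from numpy.polynomial import Polynomial, chebyshev

def approx(g, a, b, deg):
    # Stand-in for a true minimax (Remez) routine: the Chebyshev
    # interpolant of g on [a, b] is a cheap, near-minimax proxy.
    c = chebyshev.Chebyshev.interpolate(g, deg, domain=[a, b])
    return c.convert(kind=Polynomial)  # power-basis coefficients

def freeze_and_round(f, a, b, n, rnd=np.float32):
    # Freeze a_0, a_1, ... one at a time: approximate the residual,
    # round the new constant term to the target format, repeat.
    # Assumes 0 is outside [a, b], so dividing by x**k is safe.
    frozen = []
    for k in range(n + 1):
        def residual(x, frozen=tuple(frozen), k=k):
            r = f(x) - sum(c * x**i for i, c in enumerate(frozen))
            return r / x**k
        p = approx(residual, a, b, n - k)
        frozen.append(float(rnd(p.coef[0])))  # a_k, rounded and frozen
    return frozen

# Example: degree-4 approximation to exp on [1, 2] with float32 coefficients.
coeffs = freeze_and_round(np.exp, 1.0, 2.0, 4)
xs = np.linspace(1.0, 2.0, 1001)
print(np.max(np.abs(np.exp(xs) - np.polyval(coeffs[::-1], xs))))
```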

An Exact Method

Let \(p\) be the minimax approximation. Let \(\hat{p}\) be the polynomial obtained by rounding the coefficients. Now let \(\epsilon\) be the minimax error of \(p\), and \(\hat{\epsilon}\) be the minimax error of \(\hat{p}\).

Let \(p^{\ast}\) be the best possible polynomial whose coefficients are exactly representable in the target precision. So we have this inequality:

\begin{equation*} \epsilon\le\supf{f}{p^{\ast}}\le\hat{\epsilon} \end{equation*}

Pick an error \(K\) that we want to “beat”. Clearly, if \(K<\epsilon\), we will not find any polynomials. If \(K>\hat{\epsilon}\), we’ll find at least \(\hat{p}\), but likely many more. So pick \(K\) with \(\epsilon\le K<\hat{\epsilon}\).

Take at least \(n+1\) points:

\begin{equation*} a\le x_{0}<x_{1}<\cdots<x_{m}\le b,\quad m\ge n \end{equation*}

and impose the constraints:

\begin{equation*} f(x_{j})-K\le p_{0}^{\ast}+p_{1}^{\ast}x_{j}+p_{2}^{\ast}x_{j}^{2}+\cdots+p_{n}^{\ast}x_{j}^{n}\le f(x_{j})+K,\quad\forall j \end{equation*}

Solve for the coefficients \(p_{k}^{\ast}\), constrained to be exactly representable. If \(n\) is not small, there will be too many candidate solutions, and checking them all for the minimum will be too expensive. So this method works much better for smaller degrees. On the plus side, it does give you the best polynomial.
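As an illustration (my sketch, not the book’s), the constraint system can be handed to a mixed-integer LP solver. Model each representable coefficient as \(m_{k}\cdot 2^{e_{k}}\) with \(m_{k}\) an integer and the exponents \(e_{k}\) fixed up front, which is a simplification of real floating-point coefficients. This assumes SciPy ≥ 1.9 for the integrality option of linprog.

```python
import numpy as np
from scipy.optimize import linprog

def find_poly_beating_K(f, xs, K, exps):
    # Feasibility search: find integers m_k such that the polynomial with
    # coefficients m_k * 2**exps[k] stays within K of f at every node x_j.
    # Returns the coefficients, or None if no such polynomial exists.
    xs = np.asarray(xs, dtype=float)
    scale = np.exp2(exps)
    # Column k of A holds 2**exps[k] * x_j**k, so A @ m = p(x_j).
    A = np.vander(xs, len(exps), increasing=True) * scale
    fx = f(xs)
    # Encode p(x_j) <= f(x_j) + K and -p(x_j) <= -(f(x_j) - K).
    res = linprog(c=np.zeros(len(exps)),
                  A_ub=np.vstack([A, -A]),
                  b_ub=np.concatenate([fx + K, -(fx - K)]),
                  bounds=[(None, None)] * len(exps),
                  integrality=np.ones(len(exps)),  # all m_k integer
                  method="highs")
    return res.x * scale if res.success else None

# Example: beat K = 5e-3 for exp on [1, 2] with degree 3, on a 2**-20 grid.
xs = np.linspace(1.0, 2.0, 16)
p = find_poly_beating_K(np.exp, xs, 5e-3, exps=[-20, -20, -20, -20])
```

Note that, as in the book’s formulation, the error is only constrained at the nodes \(x_{j}\), not on the whole interval.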

A Method Based on Lattice Reduction

This is used in Sollya. Below is the brief idea, without details.

Let \(b_{1},\dots,b_{d}\) be linearly independent elements of \(\mathbb{R}^{d}\). The set of all integer linear combinations of these basis vectors is called a Euclidean lattice. Denote this by \(L\).

Often, given a vector \(x\), we want to find the vector in the lattice that is closest (in some norm) to \(x\). This problem is NP-complete. I assume the hardness comes from the restriction to integer coefficients? Otherwise this would be a standard least-squares problem…

In any case, there is a solution to a similar problem: given \(\gamma>1\), find \(y\in L\) such that \(\lVert x-y\rVert\le\gamma\min_{z\in L}\lVert x-z\rVert\). The book doesn’t give the algorithm.
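One classic way to get such an approximate answer (my addition, not from the book) is Babai’s rounding: express \(x\) in the basis, round the real coordinates to the nearest integers, and map back into the lattice. The achievable \(\gamma\) depends on how “reduced” the basis is, which is why one runs LLL on it first. A minimal sketch for a square basis:

```python
import numpy as np

def babai_rounding(B, x):
    # B: d x d matrix whose columns are the basis vectors b_1, ..., b_d.
    # Write x in the basis (solve B m = x over the reals), round the
    # coordinates to integers, and map back into the lattice L.
    m = np.rint(np.linalg.solve(B, x))
    return B @ m, m.astype(int)  # lattice point near x, and its coordinates
```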

There is a formulation of the minimax problem with representable coefficients that reduces to this approximate closest-vector problem; see the sketch below. Clearly, the result is not optimal, but it is a good solution.
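Here is roughly how the reduction looks, under the same assumptions as before (fixed nodes \(x_{j}\) and fixed coefficient exponents \(e_{k}\), both my simplifications): the basis vectors \(b_{k}=2^{e_{k}}(x_{0}^{k},\dots,x_{d-1}^{k})\) generate the lattice of value vectors of polynomials with representable coefficients, and the target is \((f(x_{j}))_{j}\). A nearby lattice vector then hands you the integer mantissas \(m_{k}\).

```python
import numpy as np

# Nodes and per-coefficient exponents (assumptions for illustration).
f = np.exp
xs = np.linspace(1.0, 2.0, 6)  # d = 6 nodes, so degree 5
exps = np.full(6, -20)         # coefficients on a 2**-20 grid

# Basis vector b_k = 2**e_k * (x_0**k, ..., x_{d-1}**k), stored as columns.
B = np.vander(xs, len(exps), increasing=True) * np.exp2(exps)
target = f(xs)

# Babai rounding, as in the previous sketch. A real implementation would
# LLL-reduce the basis first; without reduction this degenerates into
# merely rounding the interpolant's coefficients to the grid.
m = np.rint(np.linalg.solve(B, target))
coeffs = m * np.exp2(exps)     # exactly representable coefficients
print(np.max(np.abs(np.polyval(coeffs[::-1], xs) - target)))
```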