An Iterative Method

Compute the minimax approximation in a wider format. Then round the coefficient of the constant term, and recompute the approximation while enforcing that the constant term stays fixed. Call this polynomial \(P_{1}\). Now define \(\hat{P}_{1}\) as the polynomial you get from \(P_{1}\) after rounding all the remaining coefficients. Then compare \(\supf{f}{\hat{P}_{1}}\) with \(\supf{f}{P_{1}}\). If the two numbers are close, stop. If not, do the same with \(a_{1}\), and keep going until the minimax error is close enough.
Now suppose we’ve “frozen” coefficients \(a_{0},a_{1},\dots\). How do we compute the best polynomial under these constraints? Let your new target be \(f'=f-a_{0}-a_{1}x-\dots\); what remains to be found is a polynomial with fewer free coefficients. So if you fix only \(a_{0}\), the part left to compute has the form \(a_{1}x+\dots+a_{n}x^{n}=x\,q(x)\) with \(q\) of degree \(n-1\), and so on.
Specifically, say you froze \(a_{0}\) and \(a_{1}\); your new \(f\) is actually
\[
f'(x) = f(x) - a_{0} - a_{1}x,
\]
and the part left to compute has the form \(x^{2}q(x)\) with \(q\) of degree \(n-2\).
Note that this method does not guarantee the best solution. But it usually yields a good one.
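As a concrete illustration, here is a minimal Python sketch of this loop, with two stand-ins of my own: Chebyshev interpolation plays the role of the true minimax (Remez) step, and rounding to a \(2^{-\text{bits}}\) grid plays the role of rounding to a floating-point format. The helper names and the `slack` stopping tolerance are hypothetical, not from the book.

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from numpy.polynomial import polynomial as P

def sup_err(f, coefs, a, b, m=10001):
    """Estimate the sup norm of f - p on [a, b] by dense sampling
    (coefs are power-basis coefficients of p)."""
    x = np.linspace(a, b, m)
    return np.max(np.abs(f(x) - P.polyval(x, coefs)))

def round_to(c, bits):
    """Round to the nearest multiple of 2**-bits -- a crude stand-in
    for rounding a coefficient to a floating-point format."""
    return np.round(c * 2.0**bits) / 2.0**bits

def iterative_rounding(f, a, b, n, bits=24, slack=1.05):
    """Freeze coefficients one at a time, lowest degree first.
    Assumes 0 is not inside [a, b] (we divide by x**k below)."""
    frozen = []                                  # rounded a_0, ..., a_{k-1}
    for k in range(n + 1):
        # Best completion given the frozen terms: fit q of degree n-k
        # to (f - frozen)/x**k, so that P_k = frozen + x**k * q(x).
        g = lambda t: (f(t) - P.polyval(t, frozen or [0.0])) / t**k
        q = C.Chebyshev.interpolate(g, n - k, domain=[a, b])
        qc = list(q.convert(domain=[a, b], kind=np.polynomial.Polynomial,
                            window=[a, b]).coef)
        p_k = frozen + qc                        # constrained fit P_k
        p_hat = frozen + [round_to(c, bits) for c in qc]  # all rounded
        if sup_err(f, p_hat, a, b) <= slack * sup_err(f, p_k, a, b):
            return np.array(p_hat)               # errors close: stop
        frozen.append(round_to(qc[0], bits))     # freeze a_k, iterate
    return np.array(frozen)

# Example: degree-5 approximation to exp on [1, 2], coefficients on a
# 2**-24 grid.
coefs = iterative_rounding(np.exp, 1.0, 2.0, 5)
```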
An Exact Method

Let \(p\) be the minimax approximation. Let \(\hat{p}\) be the polynomial obtained by rounding its coefficients. Now let \(\epsilon\) be the minimax error of \(p\), and \(\hat{\epsilon}\) be the minimax error of \(\hat{p}\).
Let \(p^{\ast}\) be the best possible polynomial whose coefficients are representable in the target precision. So we have this inequality:
\[
\epsilon \;\le\; \supf{f}{p^{\ast}} \;\le\; \hat{\epsilon}.
\]
Pick an error \(K\) that we want to “beat”. Clearly, if \(K<\epsilon\), we will not find any polynomial. If \(K\ge\hat{\epsilon}\), we’ll find at least \(\hat{p}\), but likely many more. So pick \(K\) with \(\epsilon\le K<\hat{\epsilon}\).
Take at least \(n+1\) points in the approximation interval,
\[
x_{0} < x_{1} < \dots < x_{n},
\]
and impose the constraints
\[
f(x_{i}) - K \;\le\; p^{\ast}(x_{i}) \;\le\; f(x_{i}) + K,
\qquad i = 0,\dots,n.
\]
Solve for \(p^{\ast}\): each constraint is linear in the coefficients of \(p^{\ast}\), so together they cut out a polytope in coefficient space, and the candidates are the representable coefficient vectors inside it. Now if \(n\) is not small, you’ll get too many candidates, and checking them all for the minimum will be too expensive. So this method works much better for smaller orders. On the plus side, it does give you the best polynomial.
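For very small degrees the search can be demonstrated with a toy brute-force sketch in Python. Two simplifications here are my own assumptions, not the real method: the scan window around each unconstrained coefficient is a fixed constant rather than being derived from the constraint polytope, and the constraints are checked on a dense grid, which doubles as the sup-error estimate.

```python
import itertools
import numpy as np
from numpy.polynomial import polynomial as P

def exact_search(f, a, b, n, K, bits=8, m=2001, window=2):
    """Brute-force search for the best polynomial with coefficients on
    the grid of multiples of 2**-bits whose error stays below K."""
    x = np.linspace(a, b, m)
    y = f(x)
    c0 = P.polyfit(x, y, n)          # unconstrained fit as a center
    step = 2.0 ** -bits
    # Scan a small window of grid points around each coefficient of c0.
    grids = [np.round(c / step) * step + step * np.arange(-window, window + 1)
             for c in c0]
    best, best_err = None, np.inf
    for cand in itertools.product(*grids):
        err = np.max(np.abs(P.polyval(x, cand) - y))  # sup-error estimate
        if err <= K and err < best_err:
            best, best_err = cand, err
    return best, best_err

# Example: degree 2, coefficients on a 2**-8 grid, trying to beat K.
p_star, err = exact_search(np.exp, 0.0, 1.0, 2, K=0.02)
```

With the window derived from the polytope bounds instead of a fixed constant, this would be the exact method; as written it only demonstrates the shape of the search.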
A Method Based on Lattice Reduction

This is the approach used in Sollya. Below is a brief sketch of the idea, without details.
Let \(b_{1},\dots,b_{d}\) be linearly independent elements of \(\mathbb{R}^{d}\). The set of all integer linear combinations of these basis vectors,
\[
L = \Bigl\{\textstyle\sum_{i=1}^{d} c_{i}b_{i} \;:\; c_{i}\in\mathbb{Z}\Bigr\},
\]
is called a Euclidean lattice.
Often, given a vector \(x\), we want to find the vector in the lattice that is closest to \(x\) (in some norm). This is the closest vector problem, and it is NP-hard. I assume the hardness comes from the restriction to integer coefficients; without it, this is just orthogonal projection onto the span of the \(b_{i}\), a standard least-squares problem.
In any case, there are efficient algorithms for a relaxed version of the problem: given \(\gamma>1\), find \(y\in L\) such that \(\|x-y\|\le\gamma\min_{z\in L}\|x-z\|\). The book doesn’t give the algorithm; a standard choice is Babai’s nearest-plane algorithm on an LLL-reduced basis.
There is a formulation of the minimax problem with machine-representable coefficients that reduces to this approximate closest-vector problem. The resulting polynomial is not guaranteed to be optimal, but it is usually good.
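Here is a minimal Python sketch of the formulation, under assumptions of my own: the function is sampled at a handful of points, column \(j\) of the basis matrix is one grid step of coefficient \(j\) evaluated at those points, and plain Babai-style rounding of the least-squares coordinates stands in for a real approximate-CVP solver (Sollya uses proper lattice reduction, which this sketch skips).

```python
import numpy as np

def lattice_round(f, a, b, n, bits=24, m=None):
    """Round polynomial coefficients via the lattice formulation.
    Lattice basis: column j of B is (2**-bits * x_i**j) over sample
    points x_i, i.e. one grid step of coefficient j; the target is
    the vector of function values f(x_i)."""
    m = m or 4 * (n + 1)
    x = np.linspace(a, b, m)                 # sample points x_i
    step = 2.0 ** -bits                      # coefficient grid spacing
    B = step * np.vander(x, n + 1, increasing=True)
    t = f(x)                                 # target vector
    # Babai-style rounding: real coordinates of t in the basis, then
    # round each coordinate to the nearest integer.
    c, *_ = np.linalg.lstsq(B, t, rcond=None)
    c = np.round(c)
    coefs = step * c                         # machine-grid coefficients
    err = np.max(np.abs(B @ c - t))          # error at the sample points
    return coefs, err

# Example: degree-3 approximation to sin on [0, 1].
coefs, err = lattice_round(np.sin, 0.0, 1.0, 3)
```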