Introduction to Table Based Methods

Posted by Beetle B. on Tue 17 September 2019

In general, when approximating a function, we’ll split the domain into multiple intervals and fit a polynomial to the function on each one. The coefficients of these polynomials are stored in a table.

There is a tradeoff in choosing the interval sizes. Small intervals mean many polynomials (a larger table), but each polynomial can be of low degree and so evaluates quickly. The downside is that a large table can cause cache misses, which can completely negate the gains. If the intervals are large, the table is smaller, but each polynomial needs a higher degree and takes longer to evaluate.
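To make this concrete, here is a minimal sketch in C of the lookup-and-evaluate step, assuming four intervals on \([0,1)\) and a degree-2 polynomial per interval. The names (`eval_table`, `coeffs`) are mine, and the coefficients are just truncated Taylor expansions of \(2^{x}\) at each interval's left endpoint; a real table would come from a proper fitting step (e.g. minimax).

```c
#define N_INTERVALS 4  /* intervals of width 1/4 on [0, 1) */

/* Degree-2 Taylor coefficients of 2^x about x0 = i/4:
 * c0 = 2^x0, c1 = ln(2) * 2^x0, c2 = ln(2)^2 / 2 * 2^x0 */
static const double coeffs[N_INTERVALS][3] = {
    {1.000000, 0.693147, 0.240227},
    {1.189207, 0.824295, 0.285679},
    {1.414214, 0.980258, 0.339733},
    {1.681793, 1.165730, 0.404012},
};

double eval_table(double x) /* x in [0, 1) */
{
    int i = (int)(x * N_INTERVALS);          /* interval index   */
    double t = x - (double)i / N_INTERVALS;  /* offset within it */
    const double *c = coeffs[i];
    return c[0] + t * (c[1] + t * c[2]);     /* Horner form      */
}
```

With a power-of-two number of intervals, the index is a single multiply and truncate; a real implementation would read it straight off the leading bits of the fraction.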

Consider evaluating \(2^{x}\). Let \(k\) be the largest integer less than or equal to \(x\), and let \(f=x-k\), so that \(f\in[0,1)\). Then \(2^{x}=2^{k}2^{f}\). So if we can calculate \(2^{f}\) accurately on this interval, the rest is simply bit shifting. You can take this further by tabulating \(2^{i/256}\) for \(0\le i\le255\): write \(f=i/256+z\) with \(z\in[0,1/256)\), approximate \(2^{z}\) on \([0,1/256)\), multiply by the tabulated value, and bit shift.
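Here is a sketch of that two-level reduction in C, with the same hedging: the names (`my_exp2`, `exp2_table`) and the degree-2 Taylor polynomial for \(2^{z}\) are my choices for illustration; a production routine would use a fitted polynomial of whatever degree the target accuracy demands.

```c
#include <math.h>

static double exp2_table[256]; /* exp2_table[i] = 2^(i/256) */

static void init_table(void)   /* call once before my_exp2 */
{
    for (int i = 0; i < 256; i++)
        exp2_table[i] = exp2(i / 256.0);
}

double my_exp2(double x)
{
    double k = floor(x);          /* 2^x = 2^k * 2^f, f in [0, 1)   */
    double f = x - k;
    int    i = (int)(f * 256.0);  /* f = i/256 + z, z in [0, 1/256) */
    double z = f - i / 256.0;

    /* 2^z = e^(z ln 2) ~ 1 + z ln2 + (z ln2)^2 / 2; since
     * z < 1/256 this already gives roughly eight correct digits */
    double u = z * 0.6931471805599453;
    double p = 1.0 + u * (1.0 + 0.5 * u);

    /* the "bit shift": scale by 2^k via the exponent field */
    return ldexp(exp2_table[i] * p, (int)k);
}
```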

Another thing to consider is what happens at the boundaries of the intervals. Adjacent polynomials approximate the function with independent errors, so their values need not line up where they meet. For example, monotonicity may not be preserved!
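A brute-force way to catch such seam artifacts is to scan a narrow window straddling each boundary and check the ordering directly. A small sketch (the name `check_monotone` is mine; `f` would be an approximation like the `eval_table` sketch above):

```c
#include <stdio.h>

/* Returns 1 if f is non-decreasing at the sampled points of
 * [lo, hi], 0 otherwise. Aim [lo, hi] at a narrow window
 * straddling an interval boundary. */
int check_monotone(double (*f)(double), double lo, double hi, int steps)
{
    double prev = f(lo);
    for (int s = 1; s <= steps; s++) {
        double x = lo + (hi - lo) * s / steps;
        double cur = f(x);
        if (cur < prev) {
            printf("monotonicity broken near x = %.17g\n", x);
            return 0;
        }
        prev = cur;
    }
    return 1;
}
```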