Converting From ULP Errors to Relative Errors

Let \(x\) be in the normal range, and \(|x-X|=\alpha\ulp(x)\). Then the relative error satisfies:
\[\frac{|x-X|}{|x|}\le\alpha\beta^{1-p}\]
This assumes there is no underflow.
The proof: we know that \(\ulp(x)=\beta^{e+1-p}\) and \(|x|\ge\beta^{e}\) for a normal number. Thus:
\[\frac{|x-X|}{|x|}=\alpha\frac{\ulp(x)}{|x|}\le\alpha\frac{\beta^{e+1-p}}{\beta^{e}}=\alpha\beta^{1-p}\]
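As a quick numeric check (a minimal sketch, not part of the notes; the value of \(x\) and the ulp count \(\alpha\) are arbitrary choices), Python's math.ulp gives \(\ulp(x)\) for binary64, where \(\beta=2\) and \(p=53\):

```python
import math

p = 53                       # binary64 precision, so the bound is alpha * 2**(1 - p)

x = 1.75e10                  # an arbitrary normal double (hypothetical example value)
alpha = 3                    # an integer ulp count, so X below is exactly representable
X = x + alpha * math.ulp(x)

rel_err = abs(x - X) / abs(x)
bound = alpha * 2.0 ** (1 - p)

print(f"relative error = {rel_err:.3e}")   # ~6.54e-16
print(f"bound          = {bound:.3e}")     # ~6.66e-16
assert rel_err <= bound    # alpha ulps of error implies rel. error <= alpha * 2**(1-p)
```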
Converting From Relative Errors to ULP Errors
Let the relative error be \(\epsilon_{r}\). Then the error measured in ulps satisfies:
\[|x-X|\le\epsilon_{r}\beta^{p}\ulp(x)\]
The proof:
We know that \(|x|\le\beta^{e+1}\) (i.e. go up one exponent). So \(|x-X|=|x|\epsilon_{r}\le\epsilon_{r}\beta^{e+1}=\epsilon_{r}\beta^{p}\ulp(x)\), since \(\ulp(x)=\beta^{e+1-p}\).
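And in the other direction (again a sketch with arbitrary example values), a given relative error \(\epsilon_{r}\) translates into at most \(\epsilon_{r}2^{p}\) ulps:

```python
import math

p = 53
x = 1.75e10                # the same hypothetical normal double as above
eps_r = 1e-12              # an assumed relative error
err = abs(x) * eps_r       # the absolute error this relative error implies

ulps = err / math.ulp(x)   # the error measured in ulps of x
bound = eps_r * 2.0 ** p

print(f"error in ulps = {ulps:.1f}")   # ~4587.5
print(f"bound         = {bound:.1f}")  # ~9007.2
assert ulps <= bound
```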
Note that this is true even for denormals.

Example: Iterated Products
Say we are dealing with binary and have \(n\) normal floating-point numbers. We multiply all of them together, using round to nearest. Assume no overflow occurs. Then the final result is \(P=x_{1}x_{2}\dots x_{n}K\) where:
\[\left(1-2^{-p}\right)^{n-1}\le K\le\left(1+2^{-p}\right)^{n-1}\]
So the relative error of the final result is less than or equal to \(\left(1+2^{-p}\right)^{n-1}-1\). If \(n\ll2^{p}\), we can take the linear approximation \((n-1)2^{-p}\).
To derive this, consider just multiplying two numbers, \(a,b\). Now we know that \(|o(ab)-ab|\le\frac{1}{2}\ulp(ab)\). Thus, \(\alpha=\frac{1}{2}\). Converting to an inequality for the relative error, we see that \(\epsilon\) is bounded by \(\frac{1}{2}2^{1-p}=2^{-p}\). So now we can say that:
\[o(ab)=ab(1+\epsilon),\qquad|\epsilon|\le2^{-p}\]
Thus:
\[1-2^{-p}\le1+\epsilon\le1+2^{-p}\]
This gives us:
\[ab\left(1-2^{-p}\right)\le o(ab)\le ab\left(1+2^{-p}\right)\]
for \(ab>0\) (the negative case is symmetric). Since we are multiplying \(n\) numbers, the \(n-1\) rounded multiplications each contribute one such \((1+\epsilon)\) factor, so we get:
\[\left(1-2^{-p}\right)^{n-1}\le K\le\left(1+2^{-p}\right)^{n-1}\]
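As a sanity check (a minimal sketch, not from the notes; \(n\) and the random inputs are arbitrary), we can multiply a batch of doubles with ordinary rounded \(\ast\) and compare against the exact rational product, which Python's fractions module can carry:

```python
import random
from fractions import Fraction

p, n = 53, 500             # binary64 precision; n is an arbitrary choice
random.seed(0)
xs = [random.uniform(0.5, 2.0) for _ in range(n)]  # keeps the product far from overflow

prod = 1.0
for x in xs:
    prod *= x              # each * rounds to nearest: one (1 + eps) factor per step

exact = Fraction(1)
for x in xs:
    exact *= Fraction(x)   # Fraction(float) is exact, so this is the true product

rel_err = abs(Fraction(prod) - exact) / exact
bound = (1 + Fraction(1, 2**p)) ** (n - 1) - 1   # = (1 + 2**-p)**(n-1) - 1

print(f"relative error = {float(rel_err):.3e}")
print(f"bound          = {float(bound):.3e}")    # ~ (n - 1) * 2**-53 ~ 5.5e-14
assert rel_err <= bound
```

The observed error is typically far below the worst-case bound, since the individual rounding errors have mixed signs and partially cancel.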