A Prediction Interval for a Single Future Value For a Normal Distribution

Posted by Beetle B. on Sun 16 July 2017

What if you have \(n\) observations from a normal distribution and want to predict the next observation, \(X_{n+1}\)?

The \(100(1-\alpha)\%\) prediction interval is \(\bar{x}\pm t_{\alpha/2,n-1}\,s\sqrt{1+\frac{1}{n}}\).
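
To make the formula concrete, here is a minimal sketch in Python. The sample values, the 95% level, and the use of `scipy.stats` are my own choices for illustration, not part of the original post:

```python
# Compute the prediction interval xbar +/- t * s * sqrt(1 + 1/n)
# for a hypothetical sample (values made up for illustration).
import numpy as np
from scipy import stats

x = np.array([10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0, 10.3])
n = len(x)
alpha = 0.05

xbar = x.mean()
s = x.std(ddof=1)                              # sample standard deviation
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)  # t quantile with n-1 df

half_width = t_crit * s * np.sqrt(1 + 1 / n)
print(f"{100 * (1 - alpha):.0f}% PI: ({xbar - half_width:.3f}, {xbar + half_width:.3f})")
```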

To derive this, note that the expected value of \(X_{n+1}\) is \(\mu\) and that \(X_{n+1}\) is independent of the sample. Then \(E(\bar{X}-X_{n+1})=\mu-\mu=0\) and \(V(\bar{X}-X_{n+1})=V(\bar{X})+V(X_{n+1})=\frac{\sigma^{2}}{n}+\sigma^{2}\)

As \(\bar{X}-X_{n+1}\) is a linear combination of independent normal random variables, it is itself normal, with variance \(\sigma^{2}\left(1+\frac{1}{n}\right)\). Standardizing and replacing \(\sigma\) with \(S\) gives \(\frac{\bar{X}-X_{n+1}}{S\sqrt{1+\frac{1}{n}}}\sim t_{n-1}\), a t-distribution with \(n-1\) degrees of freedom, from which the interval follows.
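
If you want to sanity-check the claimed coverage, a quick Monte Carlo sketch like the one below (with assumed values \(\mu=0\), \(\sigma=1\), \(n=10\), \(\alpha=0.05\)) should capture \(X_{n+1}\) in roughly 95% of repetitions:

```python
# Monte Carlo check: how often does the PI built from n observations
# contain an independent future observation X_{n+1}?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, n, alpha, reps = 0.0, 1.0, 10, 0.05, 100_000
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)

hits = 0
for _ in range(reps):
    sample = rng.normal(mu, sigma, size=n)
    future = rng.normal(mu, sigma)          # X_{n+1}, independent of the sample
    xbar, s = sample.mean(), sample.std(ddof=1)
    half_width = t_crit * s * np.sqrt(1 + 1 / n)
    hits += abs(xbar - future) <= half_width

print(f"Empirical coverage: {hits / reps:.3f}")  # should be close to 0.95
```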

The prediction interval is wider than the confidence interval for \(\mu\). This is because the former involves two sources of randomness (the estimate \(\bar{X}\) and the future observation \(X_{n+1}\)), whereas the latter involves only one.

Important point: As \(n\) approaches \(\infty\), the confidence interval collapses onto the single point \(\mu\). But the PI approaches \(\mu\pm z_{\alpha/2}\sigma\): no matter how much data you collect, the uncertainty in a single future observation never vanishes.
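
A small sketch that tabulates the two half-widths as \(n\) grows (using the true \(\sigma=1\) in place of \(s\), an illustrative simplification rather than anything from the post) makes the contrast visible:

```python
# Half-width of the CI for mu versus the PI for X_{n+1} as n grows.
# The CI half-width shrinks toward 0; the PI half-width approaches z * sigma.
import numpy as np
from scipy import stats

sigma, alpha = 1.0, 0.05
z_crit = stats.norm.ppf(1 - alpha / 2)

for n in (5, 20, 100, 1_000, 100_000):
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
    ci_half = t_crit * sigma * np.sqrt(1 / n)      # -> 0
    pi_half = t_crit * sigma * np.sqrt(1 + 1 / n)  # -> z * sigma
    print(f"n={n:>6}  CI half-width: {ci_half:.4f}  "
          f"PI half-width: {pi_half:.4f}  z*sigma: {z_crit * sigma:.4f}")
```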