large-sample approximations (to be discussed in Chapter 13) to the log-likelihood, which
provide almost identical results to those we obtained above.
7.8 Comments and further reading
Once understood, the phenomenon of regression to the mean is almost obvious, but it is still
consistently misunderstood and considered a fallacy, or a paradox, by many. The history
of its origin with Galton is described by Stigler (1997) in an issue of Statistical Methods in
Medical Research devoted to the subject. More details on other aspects of this concept can
also be found there. The same author has also given an overview of how Galton came up with
the correlation concept (Stigler, 1989). The original publication is Galton (1886).
In the next-to-last section we introduced the ideas behind one-sample multivariate analysis,
based on Gaussian distributions. We will discuss the two-sample case in the next chapter, and
a few words about the general theory of multidimensional Gaussian distributions are given
in Appendix 7.A.2 below. More comprehensive introductions to this area of statistics can be
found in numerous textbooks, including classics such as Anderson (1984) and more practically
oriented ones such as Srivastava and Carter (1983).
The Fieller interval is named after Edgar Fieller, an early statistician in the pharmaceutical
industry. It was developed in connection with work on insulin at the Boots company
during the Second World War, but was described later (Fieller, 1954) as a special case of the
problem of obtaining confidence limits (though he talked about fiducial limits) for the solution
to a polynomial equation with coefficients that have a joint Gaussian distribution. Like us, he
used the Cushny–Peebles data as an illustration for the linear case.
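As a concrete illustration of Fieller's construction for a ratio of two means, the confidence limits are the roots of a quadratic equation in the ratio. The sketch below solves that quadratic directly; the numerical estimates, variances, and t critical value are invented for illustration and are not the Cushny–Peebles values:

```python
import math

def fieller_interval(m1, m2, v11, v22, v12, tcrit):
    """Fieller confidence limits for the ratio rho = m2 / m1.

    The limits are the roots in rho of
        (m2 - rho*m1)^2 = tcrit^2 * (v22 - 2*rho*v12 + rho^2*v11),
    i.e. a*rho^2 - 2*b*rho + c = 0 with the coefficients below.
    Returns (lo, hi), or None when the confidence region is unbounded
    (which happens when m1 is not significantly different from zero).
    """
    a = m1 ** 2 - tcrit ** 2 * v11
    b = m1 * m2 - tcrit ** 2 * v12
    c = m2 ** 2 - tcrit ** 2 * v22
    if a <= 0:
        return None  # denominator mean not significantly nonzero
    disc = b ** 2 - a * c
    if disc < 0:
        return None
    root = math.sqrt(disc)
    return ((b - root) / a, (b + root) / a)

# Hypothetical estimates: means 2 and 3, variances of the means
# 0.04 and 0.09, zero covariance, t critical value 2.
lo, hi = fieller_interval(2.0, 3.0, 0.04, 0.09, 0.0, 2.0)
print(lo, hi)  # the point estimate 3/2 lies inside the interval
```

Note that the interval is not symmetric around the point estimate 3/2, a characteristic feature of Fieller limits for ratios.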
The use of the confidence function in equation (7.10) to obtain univariate confidence
intervals for a particular combination of the means is accurate only if we use straight
lines in the graphical method (and therefore both for a mean difference and for a ratio). It is then a
consequence of the fact that a linear combination of Gaussian variables (also when dependent)
is a univariate Gaussian variable, together with a similar property for the Wishart distribution.
For nonlinear functions no exact general method seems to be available, but we can use the method
outlined to obtain an approximation, which is expected to be better the more nearly linear the
functions are. This gives at least a large-sample justification for applying the method to
nonlinear parameter functions. For more details, see Appendix 7.A.3 below. The simultaneous
approach using the confidence function in (7.9) is always valid, even for nonlinear functions
of the parameters.
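The linearity fact quoted above is all that the univariate interval needs: if the coefficient vector is c, then c'X is univariate Gaussian, so the problem reduces to an ordinary one-sample t interval on the combined observations. A minimal sketch, with invented paired data and an illustrative t quantile supplied by hand rather than looked up in a table:

```python
def linear_combination_interval(data, c, tcrit):
    """t interval for c'mu from multivariate Gaussian observations.

    data: list of p-dimensional observations (rows); c: coefficient
    vector. Since a linear combination of (possibly dependent)
    Gaussian components is itself Gaussian, we reduce each row to
    the scalar y_i = c'x_i and form a one-sample t interval.
    """
    y = [sum(ci * xi for ci, xi in zip(c, row)) for row in data]
    n = len(y)
    mean = sum(y) / n
    var = sum((v - mean) ** 2 for v in y) / (n - 1)  # sample variance
    half = tcrit * (var / n) ** 0.5
    return mean - half, mean + half

# Invented paired measurements; c = (1, -1) gives the mean difference.
data = [(1.0, 2.0), (0.5, 1.8), (1.2, 2.6), (0.8, 2.1), (1.5, 3.0)]
lo, hi = linear_combination_interval(data, (1.0, -1.0), 2.776)  # t quantile for 4 df
print(lo, hi)
```

A ratio of means is not a linear combination, of course, which is why it needs the Fieller construction instead of this reduction.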
Stein’s paradox is really a paradox in estimation theory (Lehmann and Casella, 1998,
Chapter 5) and is more general than we have indicated. A popular introduction can be found in
an article by Efron and Morris (1977), whereas Stigler (1990) gives a discussion more along
our lines.
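Stein’s phenomenon is easy to verify by simulation: in dimension p ≥ 3, shrinking the raw observation vector toward the origin gives a smaller total squared error, on average, than the observations themselves. The sketch below uses a positive-part James–Stein rule with unit variances, an arbitrary true mean vector, and a fixed seed, all of which are illustrative choices rather than anything from the text:

```python
import random

def james_stein(x):
    """Positive-part James-Stein shrinkage of one p-vector x ~ N(theta, I).

    Shrinks x toward the origin by the factor (1 - (p-2)/||x||^2),
    truncated at zero; for p >= 3 this has uniformly smaller total
    squared-error risk than the 'obvious' estimate x itself.
    """
    p = len(x)
    s = sum(v * v for v in x)
    factor = max(0.0, 1.0 - (p - 2) / s)
    return [factor * v for v in x]

def total_sq_error(est, theta):
    return sum((e - t) ** 2 for e, t in zip(est, theta))

random.seed(1)
theta = [1.0] * 10          # arbitrary true mean vector, p = 10
n_sim = 2000
mle_err = js_err = 0.0
for _ in range(n_sim):
    x = [random.gauss(t, 1.0) for t in theta]
    mle_err += total_sq_error(x, theta)
    js_err += total_sq_error(james_stein(x), theta)
print(mle_err / n_sim, js_err / n_sim)  # shrinkage wins on average
```

The average error of the raw estimate hovers near p = 10, its theoretical risk, while the shrinkage estimate does markedly better, even though each component of theta is estimated using data that carry no information about it individually.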
References
Anderson, T.W. (1984) An Introduction to Multivariate Statistical Analysis, second edn. John Wiley &
Sons.
Das, P. and Mulder, P.G.H. (1983) Regression to the mode. Statistica Neerlandica, 37, 15–21.
Efron, B. and Morris, C. (1977) Stein’s paradox in statistics. Scientific American, 236(5), 119–127.