42 CONCEPTS AND TOOLS
mate fit of your model to the data. That is, these fit indexes allow for an “acceptable”
amount of departure from exact fit or perfect fit between model and data. What is
considered “acceptable” departure from perfection is related to the estimated value of
the noncentrality parameter for the χ² statistic that the computer calculates for your
model. Other fit statistics in SEM measure the degree of departure from perfect fit, and
these indexes are generally described by central χ²-distributions. Assessment of model
fit against these two standards, exact versus approximation, is covered in Chapter 8.
Bootstrapping
Bootstrapping is a computer-based method of resampling developed by B. Efron (e.g.,
1979). There are two general kinds of bootstrapping. In nonparametric bootstrapping,
your sample (i.e., data file) is treated as a pseudopopulation. Cases from the original
data set are randomly selected with replacement to generate other data sets, usually
with the same number of cases as the original. Because of sampling with replacement,
(1) the same case can appear in more than one generated data set and (2) the composi-
tion of cases will vary slightly across the generated samples. When repeated many times
(e.g., 1,000), bootstrapping simulates the drawing of numerous random samples from a
population. Standard errors are estimated in this method as the standard deviation in
the empirical sampling distribution of the same statistic across all generated samples.
Nonparametric bootstrapping generally assumes only that the sample distribution has
the same shape as that of the population distribution. In contrast, the distributional
assumptions of many standard statistical tests, such as the t-test for means, are more
demanding (e.g., normal and equally variable population distributions). A raw data file
is necessary for nonparametric bootstrapping. This is not true in parametric bootstrap-
ping, where the computer randomly samples from a theoretical probability density func-
tion specified by the researcher. This is a kind of Monte Carlo method that is used in
computer simulation studies of the properties of particular estimators, including
many of those used in SEM to measure model fit.
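The nonparametric procedure just described can be sketched in a few lines of Python. The data values, seed, and number of replications below are hypothetical, chosen only for illustration:

```python
import random
import statistics

rng = random.Random(1)  # fixed seed for a reproducible illustration

def bootstrap_se(sample, statistic, n_boot=1000):
    """Estimate a statistic's standard error by nonparametric
    bootstrapping: resample cases with replacement, compute the
    statistic in each generated data set, and take the standard
    deviation of the resulting empirical sampling distribution."""
    n = len(sample)
    values = []
    for _ in range(n_boot):
        # Each generated data set has the same N as the original, so
        # the same case can appear more than once in a given resample.
        resample = [rng.choice(sample) for _ in range(n)]
        values.append(statistic(resample))
    return statistics.stdev(values)

# Hypothetical scores; no normality assumption about the population
# is required to estimate the standard error of the mean this way.
data = [4.1, 5.6, 3.9, 6.2, 5.0, 4.8, 5.3, 6.0, 4.4, 5.7]
se_mean = bootstrap_se(data, statistics.mean)

# Parametric bootstrapping instead draws each generated sample from a
# researcher-specified density, e.g., a normal distribution:
param_sample = [rng.gauss(5.1, 0.8) for _ in range(len(data))]
```

Note that only the nonparametric version requires a raw data file; the parametric version needs only the specified distribution and its parameters.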
It is important to realize that bootstrapping is not a magical technique that can
somehow compensate for small or unrepresentative samples, severely non-normal dis-
tributions, or the absence of actual replication samples. In fact, bootstrapping can poten-
tially magnify the effects of unusual features in a small data set (Rodgers, 1999). More
and more SEM computer programs, including Amos, EQS, LISREL, and Mplus, feature
optional bootstrap methods. Some of these methods can be used to estimate the standard
error of a particular parameter estimate or of a fit statistic; bootstrapping can
be used to calculate confidence intervals for these statistics, too. Bootstrapping methods
are also applied in SEM to estimate standard errors for non-normal or categorical data
and when there are missing data.
An example of the use of nonparametric bootstrapping to empirically estimate the
standard error of a Pearson correlation follows. Presented in Table 2.3 is a small data set
for two continuous variables where N = 20 and the observed correlation is r_XY = .3566.
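A minimal Python sketch of that procedure appears below. The bivariate data here are randomly generated rather than taken from Table 2.3, so the resulting correlation, standard error, and confidence interval are illustrative only:

```python
import math
import random

def pearson_r(pairs):
    """Pearson correlation for a list of (x, y) cases."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    syy = sum((y - my) ** 2 for _, y in pairs)
    return sxy / math.sqrt(sxx * syy)

rng = random.Random(7)

# Hypothetical bivariate sample of N = 20 cases (not the Table 2.3 data).
pairs = []
for _ in range(20):
    x = rng.gauss(0, 1)
    pairs.append((x, 0.4 * x + rng.gauss(0, 1)))

n_boot = 1000
rs = []
for _ in range(n_boot):
    # Resample whole cases (x, y pairs) with replacement, never the two
    # variables separately, so each r is computed on intact cases.
    resample = [rng.choice(pairs) for _ in range(len(pairs))]
    rs.append(pearson_r(resample))

rs.sort()
mean_r = sum(rs) / n_boot
# Bootstrap SE = standard deviation of the empirical sampling distribution.
boot_se = math.sqrt(sum((r - mean_r) ** 2 for r in rs) / (n_boot - 1))
ci_95 = (rs[24], rs[975])  # percentile-based 95% confidence interval
```

The percentile interval in the last line illustrates the confidence-interval use of bootstrapping mentioned earlier: the middle 95% of the 1,000 resampled correlations brackets the estimate without assuming any particular sampling distribution for r.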