When the time series are highly persistent (they have unit roots), we must exercise
extreme caution in using them directly in regression models (unless we are convinced the
CLM assumptions from Chapter 10 hold). An alternative to using the levels is to use the
first differences of the variables. For most highly persistent economic time series, the first
difference is weakly dependent. Using first differences changes the nature of the model, but
this method is often as informative as a model in levels. When data are highly persistent,
we usually have more faith in first-difference results. In Chapter 18, we will cover some
recent, more advanced methods for using I(1) variables in multiple regression analysis.
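As a concrete illustration (a NumPy simulation, not from the text) of why differencing helps, the sketch below generates a random walk, the canonical I(1) process, and compares the lag-one sample autocorrelation of its levels with that of its first differences. Persistence shows up as an autocorrelation near one in levels and near zero in differences:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# A random walk: partial sums of i.i.d. innovations (a unit-root process).
y = np.cumsum(rng.normal(size=n))

# First differences recover the i.i.d. innovations, which are weakly dependent.
dy = np.diff(y)

def acf1(z):
    """Lag-one sample autocorrelation of a series."""
    z = z - z.mean()
    return (z[1:] @ z[:-1]) / (z @ z)

# Levels are highly persistent; differences are not.
print(acf1(y), acf1(dy))
```

The same logic carries over to regression: running OLS on the differenced variables sidesteps the spurious-regression problems that levels of I(1) series can create.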
When models have complete dynamics in the sense that no further lags of any variable are needed in the equation, we have seen that the errors will be serially uncorrelated.
This is useful because certain models, such as autoregressive models, are assumed to have
complete dynamics. In static and distributed lag models, the assumption of dynamic completeness is often false, which generally means the errors will be serially correlated. We
will see how to address this problem in Chapter 12.
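The contrast between a dynamically complete model and a static one can be seen in a short simulation (illustrative, with assumed parameter values, not from the text). Data follow an AR(1): regressing y_t on its own lag captures all the dynamics and leaves serially uncorrelated residuals, while a model that omits the lag leaves residuals that inherit the serial dependence:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
e = rng.normal(size=n)

# Simulate an AR(1): y_t = 0.5*y_{t-1} + e_t. The lag captures all dynamics.
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + e[t]

def acf1(z):
    """Lag-one sample autocorrelation of a series."""
    z = z - z.mean()
    return (z[1:] @ z[:-1]) / (z @ z)

# Dynamically complete model: regress y_t on a constant and y_{t-1}.
# Residuals approximate the i.i.d. errors e_t, so they are uncorrelated.
Y = y[1:]
X = np.column_stack([np.ones(n - 1), y[:-1]])
b = np.linalg.lstsq(X, Y, rcond=None)[0]
resid_dyn = Y - X @ b

# Model omitting the lag (constant only): residuals keep the AR(1) dependence.
resid_static = y - y.mean()

print(acf1(resid_dyn), acf1(resid_static))
```

The first autocorrelation is close to zero, the second close to the AR coefficient of 0.5, which is exactly the pattern the serial correlation tests of Chapter 12 are designed to detect.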
The “Asymptotic” Gauss-Markov Assumptions for Time Series Regression
Following is a summary of the five assumptions that we used in this chapter to perform
large-sample inference for time series regressions. Recall that we introduced this new set
of assumptions because the time series versions of the classical linear model assumptions
are often violated, especially the strict exogeneity, no serial correlation, and normality
assumptions. A key point in this chapter is that some sort of weak dependence is required
to ensure that the central limit theorem applies. We used only Assumptions TS.1′ through TS.3′ for consistency (not unbiasedness) of OLS. When we add TS.4′ and TS.5′, we can use the usual confidence intervals, t statistics, and F statistics as being approximately valid in large samples. Unlike the Gauss-Markov and classical linear model assumptions, there is no historically significant name attached to Assumptions TS.1′ through TS.5′. Nevertheless, these assumptions are the analogs of the Gauss-Markov assumptions that allow us to use standard inference. As usual for large-sample analysis, we dispense with the normality assumption entirely.
Assumption TS.1′ (Linearity and Weak Dependence)
The stochastic process {(x_t1, x_t2, ..., x_tk, y_t): t = 1, 2, ..., n} follows the linear model

y_t = β_0 + β_1 x_t1 + β_2 x_t2 + … + β_k x_tk + u_t,

where {u_t: t = 1, 2, ..., n} is the sequence of errors or disturbances. Here, n is the number of observations (time periods).
Assumption TS.2′ (No Perfect Collinearity)
In the sample (and therefore in the underlying time series process), no independent variable is constant nor a perfect linear combination of the others.
Assumption TS.3′ (Zero Conditional Mean)
The explanatory variables are contemporaneously exogenous; that is, E(u_t | x_t1, ..., x_tk) = 0.
Remember, TS.3′ is notably weaker than the strict exogeneity assumption TS.3.
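Why contemporaneous (rather than strict) exogeneity is the relevant condition here can be seen in a simulation (illustrative, with assumed parameters, not from the text). In an AR(1) regression of y_t on y_{t-1}, strict exogeneity necessarily fails and OLS is biased in small samples; yet under contemporaneous exogeneity, the estimator still converges to the true coefficient as n grows:

```python
import numpy as np

rng = np.random.default_rng(2)

def ar1_ols(n, rho=0.5):
    """OLS slope from regressing y_t on y_{t-1} in a simulated AR(1)."""
    e = rng.normal(size=n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = rho * y[t - 1] + e[t]
    x = y[:-1] - y[:-1].mean()
    return (x @ (y[1:] - y[1:].mean())) / (x @ x)

# Average the slope estimate across replications: noticeably biased downward
# for small n, but close to the true rho = 0.5 for large n (consistency).
small_n_avg = np.mean([ar1_ols(25) for _ in range(500)])
large_n_avg = np.mean([ar1_ols(2000) for _ in range(50)])
print(small_n_avg, large_n_avg)
```

This is the sense in which TS.1′ through TS.3′ deliver consistency, not unbiasedness: the finite-sample bias never fully disappears for fixed n, but it shrinks to zero as the sample grows.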