Econometrics by Example

Chapter 16

The primary goal of this chapter was to introduce the reader to four important topics in time series econometrics, namely, (1) forecasting with linear regression models, (2) univariate time series forecasting with Box–Jenkins methodology, (3) multivariate time series forecasting using vector autoregression, and (4) the nature of causality in econometrics.

Linear regression models have long been used in forecasting sales, production, employment, corporate profits, and a host of other economic variables. In discussing forecasting with linear regression, we distinguished between point and interval forecasts, ex post and ex ante forecasts, and conditional and unconditional forecasts. We illustrated these with an example relating real per capita consumption expenditure to real per capita disposable income in the USA for the period 1960–2004, saving the observations for 2005 to 2008 to see how the fitted model performs in the post-estimation period. We also briefly discussed forecasting with autocorrelated errors.
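The mechanics of a point and interval forecast from an OLS regression can be sketched as follows. The data here are simulated stand-ins for the book's consumption (PCE) and income (PDI) series, and the interval uses the normal approximation (1.96) rather than the exact t critical value; all names and numbers are illustrative assumptions.

```python
import numpy as np

# Hypothetical data standing in for per capita consumption and disposable
# income; the book's 1960-2004 US series is not reproduced here.
rng = np.random.default_rng(0)
pdi = np.linspace(10000, 30000, 45)              # "income", 45 annual observations
pce = 500 + 0.9 * pdi + rng.normal(0, 300, 45)   # "consumption" with noise

# Fit pce = b0 + b1 * pdi by OLS.
X = np.column_stack([np.ones_like(pdi), pdi])
beta, *_ = np.linalg.lstsq(X, pce, rcond=None)
resid = pce - X @ beta
n, k = X.shape
sigma2 = resid @ resid / (n - k)                 # residual variance

# Point forecast for a new income level (a conditional forecast: the
# future pdi value is assumed known).
x_new = np.array([1.0, 32000.0])
point = x_new @ beta

# Forecast variance = sigma^2 * (1 + x'(X'X)^{-1}x); the ~95% interval
# below uses 1.96 in place of the exact t critical value.
var_f = sigma2 * (1 + x_new @ np.linalg.inv(X.T @ X) @ x_new)
lo, hi = point - 1.96 * np.sqrt(var_f), point + 1.96 * np.sqrt(var_f)
print(point, lo, hi)
```

An ex post forecast would compare `point` with the actual hold-out observation; an ex ante forecast has no actual value to compare against.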

We then discussed the ARIMA method of forecasting, popularly known as the Box–Jenkins (BJ) methodology. In the BJ approach to forecasting, we analyze a time series strictly on the basis of its own past history, as a moving average of current and past random error terms, or as a combination of the two. The name ARMA combines the AR (autoregressive) and MA (moving average) terms; the "I" in ARIMA stands for "integrated," reflecting any differencing applied to the series. It is assumed that the time series under study is stationary; if it is not, we make it stationary by differencing it one or more times.

ARIMA modeling is a four-step procedure: (1) identification, (2) estimation, (3) diagnostic checking, and (4) forecasting. In developing an ARIMA model, we can examine the features of some of the standard ARIMA models and adapt them to the case at hand. Once a model is identified, it is estimated. To see if the fitted model is satisfactory, we subject it to various diagnostic tests. The key here is to see whether the residuals from the chosen model are white noise. If they are not, we start the four-step procedure once again. Thus the BJ methodology is an iterative procedure.
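The estimation and diagnostic-checking steps can be sketched for a tentatively identified AR(1) model. The data are simulated (not the book's), estimation is by conditional least squares rather than the exact maximum likelihood most packages use, and the residual check is a hand-rolled Ljung–Box-type statistic; all of this is a simplified illustration.

```python
import numpy as np

# Simulate an AR(1) series with phi = 0.6 (illustrative data).
rng = np.random.default_rng(1)
n = 500
y = np.empty(n)
y[0] = 0.0
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + rng.normal()

# Estimation: conditional least squares, regressing y_t on y_{t-1}.
X = np.column_stack([np.ones(n - 1), y[:-1]])
beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
resid = y[1:] - X @ beta

# Diagnostic checking: residual autocorrelations should be near zero
# (white noise). A Ljung-Box-type statistic over the first 10 lags:
m = len(resid)
acf = np.array([np.corrcoef(resid[:-k], resid[k:])[0, 1] for k in range(1, 11)])
lb = m * (m + 2) * np.sum(acf**2 / (m - np.arange(1, 11)))
print(beta[1], lb)   # phi-hat should be close to 0.6
```

If the statistic `lb` exceeds the relevant chi-square critical value, the residuals are not white noise and we would return to the identification step with a different tentative model.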

Once an ARIMA model is finally chosen, it can be used for forecasting future values of the variable of interest. This forecasting can be static as well as dynamic.
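The difference between static and dynamic forecasts can be shown with a fitted AR(1) whose coefficients are assumed already estimated; the hold-out values below are hypothetical numbers chosen for illustration.

```python
import numpy as np

# Assume a fitted AR(1): y_t = 2 + 0.5 * y_{t-1} (coefficients hypothetical).
c, phi = 2.0, 0.5
actual = np.array([4.0, 4.2, 3.9, 4.1, 4.3])   # hypothetical hold-out values

# Static (one-step-ahead): each forecast uses the ACTUAL previous value.
static = c + phi * actual[:-1]

# Dynamic: after the first step, each forecast feeds on the previously
# FORECAST value rather than the actual one.
dynamic = [c + phi * actual[0]]
for _ in range(len(actual) - 2):
    dynamic.append(c + phi * dynamic[-1])
dynamic = np.array(dynamic)

print(static)    # one-step forecasts tracking the actual series
print(dynamic)   # converges toward the unconditional mean c/(1-phi) = 4
```

Static forecasting requires the actual values of the series over the forecast horizon, so it is only feasible ex post; dynamic forecasting is what we must use for genuine ex ante, multi-step-ahead prediction.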

To forecast two or more time series jointly, we need to go beyond the BJ methodology. Vector autoregressive (VAR) models are used for this purpose. In a VAR we have one equation for each variable, and each equation contains only the lagged values of that variable and the lagged values of all the other variables in the system.

As in the univariate case, in VAR we also require the time series to be stationary. If each variable in the VAR is already stationary, each equation in it can be estimated by OLS. If the variables are not stationary, we can estimate the VAR only in the first differences of the series; rarely do we have to difference a time series more than once. However, if the individual variables in the VAR are nonstationary but cointegrated, we can estimate the VAR by including the error correction term obtained from the cointegrating regression. This leads to the vector error correction model (VECM).

We can use the estimated VAR model for forecasting. In such forecasting we use information not only on the variable under consideration but also on all the variables in the system. The actual mechanics are tedious, but software packages now do this routinely.
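A minimal VAR(1) for two simulated series standing in for PCE and PDI can be estimated and used for dynamic forecasting as follows; the coefficient matrix and data are illustrative assumptions, not the book's estimates.

```python
import numpy as np

# Simulate a stationary two-variable VAR(1) (illustrative coefficients).
rng = np.random.default_rng(2)
A = np.array([[0.5, 0.2],
              [0.1, 0.6]])                      # true lag-coefficient matrix
n = 400
y = np.zeros((n, 2))
for t in range(1, n):
    y[t] = A @ y[t - 1] + rng.normal(size=2)

# Each equation regresses one variable on a constant and one lag of BOTH
# variables; stacking them, OLS estimates all equations at once.
X = np.column_stack([np.ones(n - 1), y[:-1]])
B, *_ = np.linalg.lstsq(X, y[1:], rcond=None)   # 3x2: intercepts + lag coeffs

# Forecast two steps ahead, feeding the first forecast back in (dynamic).
last = y[-1]
f1 = B[0] + B[1:].T @ last
f2 = B[0] + B[1:].T @ f1
print(B[1:].T)   # estimated lag matrix, close to A
print(f1, f2)
```

Note that each step-ahead forecast of either variable uses the forecasts of both variables, which is exactly the sense in which VAR forecasting exploits information on all the variables in the system.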

VAR models can also be used to shed light on causality among variables. The basic idea behind VAR causality testing is that the past can cause the present and the future, but not the other way round. Granger causality uses this idea. In the PCE and PDI example, if the lagged values of PDI forecast the current values of PCE better than the lagged values of PCE alone, we may contend that PDI (Granger) causes PCE. Similarly, if the lagged values of PCE forecast the current values of PDI better than the lagged values of PDI alone, we may say that PCE (Granger) causes PDI. These are instances of unilateral causality. But it is quite possible that there is bilateral causality between the two, in that PCE causes PDI and PDI causes PCE.
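The "better forecast" comparison behind a Granger causality test amounts to an F-test of restricted versus unrestricted regressions. The sketch below uses simulated data in which x genuinely Granger-causes y, with one lag for simplicity; the data and lag length are assumptions for illustration.

```python
import numpy as np

# Simulated data: x helps predict y one period ahead.
rng = np.random.default_rng(3)
n = 300
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.3 * y[t - 1] + 0.5 * x[t - 1] + rng.normal()

def rss(X, z):
    """Residual sum of squares from an OLS regression of z on X."""
    b, *_ = np.linalg.lstsq(X, z, rcond=None)
    return np.sum((z - X @ b) ** 2)

# Restricted model: y_t on its own lag only.
Xr = np.column_stack([np.ones(n - 1), y[:-1]])
# Unrestricted model: add the lagged value of x.
Xu = np.column_stack([Xr, x[:-1]])

rss_r, rss_u = rss(Xr, y[1:]), rss(Xu, y[1:])
q = 1                                           # number of restrictions
F = ((rss_r - rss_u) / q) / (rss_u / (n - 1 - Xu.shape[1]))
print(F)   # a large F rejects "x does not Granger-cause y"
```

Running the same test in the other direction (lagged y added to a regression of x on its own lag) would check for bilateral causality.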

In establishing causality, we must make sure that the underlying variables are stationary. If they are not, we have to difference the variables and run the causality test on the differenced variables. However, if the variables are nonstationary but cointegrated, we need to include the error correction term in testing for causality, if any.