Saturday, September 20, 2014

The (Non-) Standard Asymptotics of Dickey-Fuller Tests

One of the most widely used tests in econometrics is the (augmented) Dickey-Fuller (DF) test. We use it in the context of time series data to test the null hypothesis that a series has a unit root (i.e., it is I(1)), against the alternative hypothesis that the series is I(0), and hence stationary. If we apply the test to a first-differenced time series, then the null is that the series is I(2), and the alternative hypothesis is that it is I(1), and so on.


Suppose that the time series in question is {Yt; t = 1, 2, 3, ......, T}. The so-called "Dickey-Fuller regression" is a least squares regression of the form:

                           ΔYt = [α + β t] + γYt-1 + [Σ δj ΔYt-j] + εt   .                 (1)

Here, terms in square brackets are optional; of these, the p ΔYt-j terms are the "augmentation terms", whose role is to ensure that there is no autocorrelation in the equation's residuals.

Standard econometrics packages allow for three versions of (1):
  • No drift - no trend: that is, the (α + β t) terms are omitted.
  • Drift - no trend: the intercept (drift term) is included, but the linear trend term is not.
  • Drift - and - trend: both of the α and (β t) terms are included.
For example, here's the dialogue box that you see when you go to apply the DF test using the EViews package:

Friday, September 19, 2014

Least Squares, Perfect Multicollinearity, & Estimable Functions

This post is essentially an extension of another recent post on this blog. I'll assume that you've read that post, where I discussed the problem of solving linear equations of the form Ax = y, when the matrix A is singular.

Let's look at how this problem might arise in the context of estimating the coefficients of a linear regression model, y = Xβ + ε. In the previous post, I said:
"Least squares estimation leads to the so-called "normal equations":
                                 
                         X'Xb = X'y  .                                                                (1)

If the regressor matrix, X, has k columns, then (1) is a set of k linear equations in the k unknown elements of β. You'll recall that if X has full column rank, k, then (X'X) also has full rank, k, and so (X'X)-1 is well-defined. We then pre-multiply each side of (1) by (X'X)-1, yielding the familiar least squares estimator for β, namely b = (X'X)-1X'y.
So, as long as we don't have "perfect multicollinearity" among the regressors (the columns of X), we can solve (1), and the least squares estimator is defined. More specifically, a unique estimator for each individual element of β is defined.
What if there is perfect multicollinearity, so that the rank of X, and of (X'X), is less than k? In that case, we can't compute (X'X)-1, we can't solve the normal equations in the usual way, and we can't get a unique estimator for the (full) β vector."
I promised that I'd come back to the statement, "we can't get a unique estimator for the (full) β vector". Now's the time to do that.
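To make that statement concrete before going further, here's a small numerical illustration of my own (not from the original post): with perfectly collinear regressors, the normal equations have infinitely many solutions, so individual coefficients are not unique — but estimable functions of β, such as the fitted values Xb, are the same for every solution.

```python
# Hypothetical illustration: perfect multicollinearity and estimable functions.
import numpy as np

rng = np.random.default_rng(0)
n = 20
x1 = rng.standard_normal(n)
x2 = 2.0 * x1                                  # perfectly collinear, so rank(X) = 2 < k = 3
X = np.column_stack([np.ones(n), x1, x2])
y = 1.0 + 3.0 * x1 + rng.standard_normal(n)

b_pinv = np.linalg.pinv(X) @ y                 # the minimum-norm least squares solution
b_alt = b_pinv + np.array([0.0, 2.0, -1.0])    # another solution: [0, 2, -1] lies in the null space of X

# The individual coefficient estimates differ between the two solutions...
print(b_pinv)
print(b_alt)
# ...but the fitted values X @ b (an estimable function) coincide exactly.
print(np.allclose(X @ b_pinv, X @ b_alt))      # True
```

The vector [0, 2, -1] is in the null space of X because 2·x1 − 1·x2 = 2x1 − 2x1 = 0, which is exactly the redundancy that perfect multicollinearity creates.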

Thursday, September 18, 2014

"Inverting" Singular Matrices

You can only invert a matrix if that matrix is non-singular. Right? Actually, that's wrong.

You see, there are various sorts of inverse matrices, and most of them apply to the situation where the original matrix is singular.

Before elaborating on this, notice that this fact may be interesting in the context of estimating the coefficients of a linear regression model, y = Xβ + ε. Least squares estimation leads to the so-called "normal equations":

                                     X'Xb = X'y  .                                                                (1)

If the regressor matrix, X, has k columns, then (1) is a set of k linear equations in the k unknown elements of β. You'll recall that if X has full column rank, k, then (X'X) also has full rank, k, and so (X'X)-1 is well-defined. We then pre-multiply each side of (1) by (X'X)-1, yielding the familiar least squares estimator for β, namely b = (X'X)-1X'y.

So, as long as we don't have "perfect multicollinearity" among the regressors (the columns of X), we can solve (1), and the least squares estimator is defined. More specifically, a unique estimator for each individual element of β is defined.

What if there is perfect multicollinearity, so that the rank of X, and of (X'X), is less than k? In that case, we can't compute (X'X)-1, we can't solve the normal equations in the usual way, and we can't get a unique estimator for the (full) β vector.

Let's look carefully at the last sentence above. There are two parts of it that bear closer scrutiny:
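As a numerical aside of my own (not part of the original post), the Moore-Penrose pseudoinverse is one such generalized inverse, and it exists even when the ordinary inverse does not. A quick check of the defining Penrose conditions:

```python
# Hypothetical illustration: a singular matrix still has a Moore-Penrose inverse.
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])             # rank 1, so A has no ordinary inverse

A_plus = np.linalg.pinv(A)             # the Moore-Penrose pseudoinverse of A

# A+ satisfies the Penrose condition A A+ A = A ...
print(np.allclose(A @ A_plus @ A, A))            # True
# ... and the companion condition A+ A A+ = A+.
print(np.allclose(A_plus @ A @ A_plus, A_plus))  # True
```

For a non-singular matrix, the pseudoinverse reduces to the ordinary inverse, which is why it's the natural generalization to lean on when (X'X) is rank-deficient.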

Saturday, September 13, 2014

The Econometrics of Temporal Aggregation - IV - Cointegration

My previous post on aggregating time series data over time dealt with some of the consequences for unit roots. The next logical thing to consider is the effect of such aggregation on cointegration, and on testing for its presence.

As in the earlier discussion, we'll consider the situation where the aggregation is over "m" high-frequency periods. A lower case symbol will represent a high-frequency observation on a variable of interest; and an upper-case symbol will denote the aggregated series. So,

           Yt = yt + yt-1 + ... + yt-m+1 .

If we're aggregating quarterly (flow) data to annual data, then m = 4. In the case of aggregation from monthly to quarterly data, m = 3, and so on.

We know, from my earlier post, that if yt is integrated of order one (i.e.,  I(1)), then so is Yt.

Suppose that we also have a second temporally aggregated series:

           Xt = xt + xt-1 + ... + xt-m+1 .

Again, if xt is I(1) then Xt is also I(1). There is the possibility that xt and yt are cointegrated. If they are, is the same true for the aggregated series, Xt and Yt?

Friday, September 12, 2014

Unit Root Tests and Seasonally Adjusted Data

We all know why it's common to "seasonally adjust" economic time series data that are recorded on a monthly, quarterly, etc. basis. Students are sometimes surprised to learn that in some countries certain such time series are reported only in seasonally adjusted terms. You can't get the original (unadjusted) data. This applies to some U.S. economic data, for example.

Does this matter?

Wednesday, September 3, 2014

Some New Add-Ins for EViews

In my last post (here) I discussed "Add-ins" for the EViews econometrics package. In particular, I concentrated on an Add-in that makes it easy to get from Quandl into an EViews workfile.

The EViews team has just announced the availability of two new Add-ins:
  • BayesLinear: This add-in estimates a linear Gaussian model by Gibbs sampling.
  • OGARCH: This add-in estimates an Orthogonal GARCH model using a three-step procedure. It is written solely for educational purposes.

As usual, more information about these Add-ins can be found on the EViews website, here, or via the "Add-ins" tab in your installation of EViews:



Check them out!

© 2014, David E. Giles

Tuesday, September 2, 2014

Getting Quandl Data Into EViews

I've sung the praises of Quandl before - e.g., see here. What's not to like about millions of free time series - especially when they're linked back to their original sources, so that updating and accuracy are the least of your worries?

If you can then get your favourite statistics/econometrics package or programming language to access and import these data seamlessly, so much the better. The less you "handle" the data (e.g., by copying and pasting), the less likely you are to introduce unwanted errors. 

One of the great strengths of Quandl is that it facilitates data importing very nicely indeed:

A case in point is with EViews, where it's achieved using an EViews "Add-in". I've recently put together a handout that deals with this for the students in one of the courses I'm teaching this term. There's nothing in it that you couldn't learn from the Quandl and EViews sites. However, it's a "step-by-step guided tour", and I thought that it might be of use more generally. 

You can download it here.


Friday, August 29, 2014

September Reading List

In North America, Labo(u)r Day weekend is upon us. The end of summer. Back to school. Last chance to get some pre-class reading done!

  • Blackburn, M. L., 2014. The relative performance of Poisson and negative binomial regression estimators. Oxford Bulletin of Economics and Statistics, in press.
  • Giannone, D., M. Lenza, and G. E. Primiceri, 2014. Prior selection for vector autoregressions. Review of Economics and Statistics, in press.
  • Gulesserian, S. G. and M. Kejriwal, 2014. On the power of bootstrap tests for stationarity: A Monte Carlo comparison. Empirical Economics, 46, 973-998.
  • Elliott, G. and A. Timmermann, 2008. Economic forecasting. Journal of Economic Literature, 46, 3-56.
  • Kiviet, J. F., 1986. On the rigour of some misspecification tests for modelling dynamic relationships. Review of Economic Studies, 53, 241-261.
  • Otto, G. D. and G. M. Voss, 2014. Flexible inflation forecast targeting: Evidence from Canada. Canadian Journal of Economics, 47, 398-421. 


Tuesday, August 26, 2014

N.Z. Econometrics Study Group, 2015

Thanks to Peter Phillips, the New Zealand Econometric Study Group has been going strong for a quarter of a century. I last mentioned the group after my participation in last year's meeting - see here.

In February of 2015 the NZESG meeting will be going off-shore for the first time, and will be held at the Queensland University of Technology (Australia). In particular, it will be held in collaboration with the National Centre for Econometric Research, about which I've posted previously.

The full details of the upcoming meeting can be found here. Definitely something to look forward to!



Monday, August 25, 2014

On Rockets and Feathers

"Rockets and Feathers" is a term that is often used to describe the (apparent) asymmetric responses of downstream price changes to changes in upstream prices. I believe that the expression was coined by Bacon (1991), and it's been used frequently in the literature in connection with the prices of gasoline and crude oil.

When the price of crude oil falls, does the price of gasoline fall as quickly as it rises when the crude oil price rises? Many studies suggest that the answer is "No". The price goes up like a rocket, but it falls like a feather.

There are several explanations for this apparent phenomenon, and a really good analysis of these competing hypotheses is provided by Douglas and Herrera (2010), for example.

This phenomenon was mentioned in my recent post about the paper that I presented at the Joint Statistical Meetings a few weeks ago. Since then, I've received a nice email from Andrea Bastianin, a post-doctoral fellow at the University of Milan. Andrea sent me a paper that he and his co-authors have completed, and that's to appear in Energy Economics.

While dealing with the "Rockets and Feathers" hypothesis in relation to oil and gasoline prices, their paper has an important and novel twist to it: they focus on the forecasting performance of models that incorporate asymmetry. Here's the abstract: