It never ceases to amaze me that my post titled "**How Many Weeks are There in a Year?**" is at the top of my all-time hits list! Interestingly, the second-placed post is the one I titled "**Testing for Granger Causality**". Let's call that one the *serious* number one post. As with many of my posts, I've received quite a lot of direct emails about that piece on Granger causality testing, in addition to the published comments.
One question that has come up a few times relates to the use of a VAR model for the **levels** of the data as the basis for doing the non-causality testing, even when we believe that the series in question may be cointegrated. Why not use a VECM model as the basis for non-causality testing in this case?

On the face of it, this might seem like a good idea. It's been suggested that as the VECM incorporates the information about the short-run dynamics, tests conducted within that framework may be more powerful than their counterparts within a VAR model. In fact, however, there's a very good reason for *not* using a VECM for this *particular* purpose.

First, let's recall the main message from my earlier post. A simple definition of Granger causality, in the case of two time-series variables, *X* and *Y*, is:

"*X* is said to Granger-cause *Y* if *Y* can be better predicted using the histories of both *X* and *Y* than it can by using the history of *Y* alone."

We can test for the absence of Granger causality by estimating the following VAR model:

*Y*_{t} = *a*_{0} + *a*_{1}*Y*_{t-1} + ..... + *a*_{p}*Y*_{t-p} + *b*_{1}*X*_{t-1} + ..... + *b*_{p}*X*_{t-p} + *u*_{t}    (1)

*X*_{t} = *c*_{0} + *c*_{1}*X*_{t-1} + ..... + *c*_{p}*X*_{t-p} + *d*_{1}*Y*_{t-1} + ..... + *d*_{p}*Y*_{t-p} + *v*_{t}    (2)

Then, testing H_{0}: *b*_{1} = *b*_{2} = ..... = *b*_{p} = 0, against H_{A}: 'Not H_{0}', is a test that *X* *does not* Granger-cause *Y*.

Similarly, testing H_{0}: *d*_{1} = *d*_{2} = ..... = *d*_{p} = 0, against H_{A}: 'Not H_{0}', is a test that *Y* *does not* Granger-cause *X*. In each case, a *rejection* of the null implies there is Granger causality.

Now, if any of the variables are non-stationary (whether or not they are cointegrated), the usual Wald test statistic for this testing will *not* have an asymptotic chi-square distribution. An easy way to deal with this is to use the following procedure proposed by Toda and Yamamoto (1995) - more details are provided in my previous **post**:

1. Test each of the time-series to determine their order of integration.
2. Let the __maximum__ order of integration for the group of time-series be *m*.
3. Set up a VAR model in the **levels** (not the differences) of the data, regardless of the orders of integration of the various time-series.
4. Determine the appropriate maximum lag length for the variables in the VAR, say *p*, using the usual methods.
5. Make sure that the VAR is well-specified.
6. If two or more of the time-series have the same order of integration, at Step 1, then test to see if they are cointegrated. *No matter what you conclude about cointegration at Step 6, this is not going to affect what follows. It just provides a possible cross-check on the validity of your results at the very end of the analysis.*
7. Take the preferred VAR model and add in *m* *additional lags* of each of the variables into each of the equations.
8. Test for Granger non-causality as follows. For expository purposes, suppose that the VAR has two equations, one for *X* and one for *Y*. Test the hypothesis that the coefficients of (only) the first *p* lagged values of *X* are zero in the *Y* equation, using a standard Wald test. Then do the same thing for the coefficients of the lagged values of *Y* in the *X* equation.
9. Make sure that you __don't__ include the coefficients for the 'extra' *m* lags when you perform the Wald tests.
10. The Wald test statistics will be asymptotically chi-square distributed with *p* d.o.f., under the null.
11. Rejection of the null implies a rejection of Granger *non*-causality.
12. Finally, look back at what you concluded in Step 6 about cointegration:

"If two or more time-series are cointegrated, then there *must be* Granger causality between them - either one-way or in both directions. However, the converse is not true."

(This last piece of information may provide a cross-check on your overall conclusions.)

O.K. - now back to VARs *vs.* VECMs!

Suppose that at Step 6 we come to the conclusion that the time-series are cointegrated. In general, the presence of cointegration would suggest that we should **model** the data using a VECM model, rather than using a VAR model. That's **modelling** the data, though, not **testing** for Granger non-causality.
Here's the deal.

To get to the point where we are considering using a VECM model as the basis for the causality testing, we had to go through the prior step of testing for cointegration; and only if we rejected the hypothesis of "no cointegration" would we even consider estimating a VECM model. This is a classic example of "preliminary test testing". That is, the framework (model) chosen as the basis for the non-causality test is *conditional* on the outcome of a previous test - a test for non-cointegration. It's as if the choice between a VAR model and a VECM model (as the framework within which to test for non-causality) is made by flipping a biased coin. (Remember that there are always Type I and Type II errors associated with any classical hypothesis test.)
So, one important question that arises is the following one:

If we first test for non-cointegration, and then (conditional on the outcome of this test) we perform another test, what are the properties of this second test?

You see, the second test (the test for non-causality) will be of one form if we decide to use the VAR, and of a different form if we decide to use a VECM. When we pre-test, the *second* test is actually a random mixture of two tests. The "actual" test statistic is a weighted sum of the test statistic that would be obtained if we used a VAR model, and the test statistic that would be obtained if we used a VECM model. And the weights are random, with values that depend on the properties of the prior (non-cointegration) test.
The upshot of all of this is as follows. When we test for no cointegration, then decide on a VAR model or a VECM model, and then apply a Granger non-causality test, the properties of this last test aren't at all what we think they are. They're really messy, and the best way to find out what's going on is to conduct a Monte Carlo experiment.

In particular, there will almost certainly be some distortion in the significance level (and hence the power) of the final test. We may **think** we're applying the non-causality test at (say) the 5% level, but the **true** significance level (the **actual** rate of rejection of the null hypothesis when this hypothesis is true) may be quite different. And this might (should?) bother us.
[As an aside, if you think that the "size distortion" that can arise from pre-test testing may not be a big deal, then take a look at the results in Table 2 of King and Giles, 1984.]

So, has anyone investigated the issue of the effects of pre-test testing in the case we're interested in here - the case of testing for Granger non-causality, after first testing to see if there is cointegration, so that we effectively randomize the choice of a VAR or VECM model?

Of course they have! You can take a look at the studies by Toda and Phillips (1994), Dolado and Lütkepohl (1996), Zapata and Rambaldi (1997), and Clarke and Mirza (2006) for lots of interesting details. I particularly recommend the last of these papers, by my colleague Judith Clarke and her former student, Sadaf Mirza.

Zapata and Rambaldi (1997, p.294) find that the T-Y Wald test is clearly preferred to the likelihood ratio test used in the context of a VECM model, unless the sample size is *extremely* small. (Would we really want to be going through all of this with a very small sample, especially when cointegration is a long-run phenomenon?)

The big take-home message from this research is very simple:

"We find that the practice of pretesting for cointegration can result in severe overrejections of the noncausal null, whereas overfitting [that's the T-Y methodology; DG] results in better control of the Type I error probability with often little loss in power." (Clarke & Mirza, 2006, p.207.)

**Note:** The links to the following references will be helpful only if your computer's IP address gives you access to the electronic versions of the publications in question. That's why a written References section is provided.

__References__

**Clarke, J. A. and S. Mirza** (2006). A comparison of some common methods for detecting Granger noncausality. *Journal of Statistical Computation and Simulation*, 76, 207-231.

**Dolado, J. J. and H. Lütkepohl** (1996). Making Wald tests work for cointegrated VAR systems. *Econometric Reviews*, 15, 369-386.

**Toda, H. Y. and P. C. B. Phillips** (1994). Vector autoregressions and causality: a theoretical overview and simulation study. *Econometric Reviews*, 13, 259-285.

**Toda, H. Y. and T. Yamamoto** (1995). Statistical inference in vector autoregressions with possibly integrated processes. *Journal of Econometrics*, 66, 225-250.

**Zapata, H. O. and A. N. Rambaldi** (1997). Monte Carlo evidence on cointegration and causation. *Oxford Bulletin of Economics and Statistics*, 59, 285-298.

© 2011, David E. Giles

I am a bit ashamed to say this, but my coworkers and I have been kicking this post around and we are unclear on the 'why' of step 3 "Set up a VAR model in the levels (not the differences) of the data, regardless of the orders of integration of the various time-series."

We've flipped through our textbooks from back in grad school and couldn't find the answer. Could you help?

Anonymous: Thanks for the comment. Not your fault!

There's a bit more information in my April post (linked in the post above). You won't find anything about this in any of the texts written prior to 1994. Indeed, even recent general grad. econometrics texts don't cover it - you'd need to look at something like Helmut Lütkepohl's "Multiple Time Series" text. This is a great example of the books lagging behind the theory (and practice, actually). The point is this. If the data are non-stationary then the usual Wald test (or the LRT, for that matter) for testing the restrictions involved in causality doesn't have its usual asymptotic (chi-square) distribution. The distribution is non-standard and involves unknown "nuisance" parameters, so it can't be tabulated, and you don't have proper critical values to use - even with an infinite amount of data. Now, there are basically 2 equivalent ways to deal with this, the simpler of which to apply is the Toda-Yamamoto "trick". That's all it is - a trick to "fix up" the distribution of the Wald test statistic so that it is asymptotically chi-square. You fit the model in the levels (counter-intuitive, I know, if the data are non-stationary). It's the ADDITION of the extra lags (that are NOT included in the formulation of the test) that gets you the result you want.

Two things to note: (1) This will still be OK even if the data are stationary, so you can use the T-Y approach as an insurance policy, if you even "suspect" that one or more of the series may be I(1) or I(2); (2) This model in the levels, with the extra lags, is ONLY for causality testing. It's not to be used for forecasting, impulse response function analysis, or anything else. For those purposes you would still use a VAR in the differences, if the data were I(1) but not cointegrated, or a VECM if the data are in fact cointegrated.

DO take a look at the T-Y paper: even just the abstract, intro., and conclusions. It really will help. I hope that these comments do too.

Dear Prof. Giles,

To begin with, thank you very much for your extraordinarily clarifying blog. A blog like yours is, I think, an excellent example of how science and teaching can work in the 21st century.

This and your entry about Granger causality explain the procedure sufficiently, while e.g. in Lütkepohl (it is an excellent book nevertheless) these issues are presented less clearly and, I think, are difficult for many students to understand. Actually, I have already read some (recently) published papers where Granger causality tests were implemented in a questionable way. The preferred procedure (or any other mentioned in Lütkepohl) does not seem to be known in applied work all the time. Is there some published material that explains testing for Granger causality with respect to VARs, VECMs, integrated and cointegrated data, etc. in a concise and lucid way like your blog entry? If not, would a clarifying methodological published note not be worth it?

By the way, another methodological question. Testing for cointegration first and choosing the model for the causality test conditional on the first test is "preliminary test testing". However, why is testing for the order of integration first, and including additional lags for the causality test conditional on that first test, not basically "preliminary test testing" in a similar way? (Additional lags are presumably not the same as a different model [VECM] as a whole.)

I will keep in touch with your great blog!

Kind regards,

Georg

Georg: Thank you for your kind comments. It's good to know that the blog is being helpful.

Regarding your first question, I can't think of an easy-to-read piece of material that's published. It's no doubt something that people would find helpful, though.

Regarding your point about pre-testing: Yes! Absolutely - there are important pre-testing issues when you (i) test for unit roots, and then subsequently test for cointegration; (ii) test for unit roots (and/or cointegration) and then test for Granger non-causality, etc.

I've published a number of papers on pre-testing in the past - see my c.v. at

web.uvic.ca/~dgiles/dgiles_cv.pdf .

I have drafts of a couple of posts on pre-testing in general that I plan to put on the blog before too long: one on pre-test estimation; and one on pre-test testing.

Hopefully, these will be of some interest.

Dear Prof Giles,

Aside from the reason you posted in your previous blog entry: "This might occur if your sample size is too small to satisfy the asymptotics that the cointegration and causality tests rely on."

Is there any other reason why there is no Granger causality between two cointegrated variables?

I am investigating oil price benchmarks in real effective exchange rates. In particular, with the Chinese yuan. I have performed the Granger causality test as you have outlined (very clear and helpful, by the way) but there is none present. I'm using data from 1994 to the present, seasonal dummy variables are used (monthly and exogenous), and even when I omit the financial crisis data from 2008 onwards, there is still no Granger causality.

Many thanks in advance for your help!

Kindest regards

Anonymous

Thanks for your question. Despite the data you have omitted, there could be structural breaks that are affecting either the cointegration testing or the causality testing. You say you have monthly data, so another possibility is that there are seasonal unit roots and/or seasonal cointegration.

Dear Prof. Giles,

When you talk about a "VECM model as the basis for non-causality testing", which testing procedure are you referring to? Is it the likelihood ratio test due to Mosconi and Giannini (1992)?

Are these Granger causality tests in a VECM context implemented in any standard econometrics software? (I am using Stata but I could not find any Granger causality test in a VECM framework.)

Thanks to you I can see the problem of a pre-test bias when conducting tests in a VECM. But - given that we have cointegrated variables - shouldn't these tests be more efficient, as we impose correct and more specific restrictions? Is it perhaps that the negative pre-test bias is stronger than the effect of imposing valid restrictions?

Thanks for this great blog!

Best regards,

Manuel

Manuel: Thanks for the comment.

Yes, I had in mind tests like the Mosconi-Giannini test.

I'm not aware of this test being incorporated in any of the standard econometrics packages, but other readers of this blog may be able to help on this point.

You are right that there is a trade-off between the loss of power arising from the pre-test testing, and the gain in power when we impose correct restrictions. This is a common problem, and at the end of the day the net effect will depend on the particular problem we're looking at.

Dear Prof. Giles,

I would like to ask about the coefficient of the ECT. Some researchers say the coefficient of the ECT is considered good if it lies in the range 0-1. What do you think, Prof.? Please advise.

Thanks.

Thanks for the question. We want the coefficient of an ECT to be negative, and we'd like it to be statistically significant.

Dear Prof. Giles,

Wouldn't the desired sign of the coefficient estimate of the ECT be based on which line of the VECM system we're looking at? For example, if we have the second row of the most simple bivariate VECM:

Δx_t = α(y_{t-1} - βx_{t-1}) + e_t

then we would want alpha to be positive such that when y_{t-1} gets "too big", the process x will increase over the next period to correct the disequilibrium?

Dear Prof Giles,

I am currently researching whether remittances Granger-cause GDP and health expenditure. I have tested and transformed for stationarity (the time series are I(2) processes), selected lags using AIC etc., and following this I had originally planned simply to model my data using a VAR and then implement the Granger test in Stata. After reading your extremely useful (thank you!) blog posts I feel I need to employ a test for cointegration for each set of variables (i.e. remittances and GDP, remittances and health expenditure) and then decide whether my data must be modelled using a VAR or a VECM, rather than go straight to a VAR. Am I correct in my thinking? Econometrics is not my strong point!

Thanks in advance.

Thanks for your comment.

The evidence to hand suggests that it is preferable to test for Granger causality using a levels VAR model (modified as per the Toda-Yamamoto procedure), rather than using a VECM model for causality testing.

If you are using Stata, note that the Granger test there does NOT make the required Toda-Yamamoto adjustment. You will need to include (but not test) 2 extra lags of each variable, as some of your data are I(2).

Dear Prof Giles,

Once again, I cannot thank you enough for what can only be described as a truly fantastic institution (your blog).

Having read the T-Y (1995) paper, and undertaken some tests, I was looking to go further and do some kind of robustness check within a VECM framework, but I am struggling to find any commercial software which tests the restriction, which I believe originates in Mosconi-Giannini, and is what I believe is being tested in the EXCELLENT Clarke and Mirza paper - that is: that the product of the two relevant elements of the cointegrating vector and error correction mechanism, and the coefficients on the lagged differenced variables, are jointly equal to zero.

If this is something which I want to pursue further, am I going to have to write up a Matlab file or similar? Can this be coded into EViews somehow? I can obviously estimate the VECM, then estimate this as a system equation by equation, and jointly test \alpha = differenced coefficients = 0... but this is not quite what we're after, as it is a test which restricts the whole cointegrating relationship, not just one variable.

Do you have any suggestions? Presumably, Clarke and Mirza wrote their own code, but this is something which I would obviously be keen to avoid, if at all possible!

Best wishes, thanks again for all of your hard work that goes into the blog!

Thanks for the kind comments. I'm glad that the blog is helpful.

I'll talk with Judith Clarke and see what can be done to get you some code, etc.

Dear Prof Giles,

Thank you very much for this helpful blog.

I am at the first stages of learning econometrics. I am sorry to ask this simple question, but may I know: if the time-series data are I(0) and I(1) (mixed orders of integration), can we employ Granger causality testing based on a VECM?

thanks in advance

The VECM model is only defined when the time-series are cointegrated. For this to be the case the series need to be integrated of the same order. So, the answer to your question is "no".

Dear Prof,

Does a VECM show the direction of dependence in the long run? E.g., if we found that four stock market indices are cointegrated, can the VECM show which is the dependent market in the long run?

Thank you.

No - this is a matter for causality testing.

DG

Dear Professor Giles,

Pesaran's bounds test approach is a way to test for cointegration when the underlying series are not integrated of the same order (am I right on this point?). If this is the case, is there a way to test causality in this situation? Thanks and regards, Kamrul, Murdoch University, Perth, WA

Dear prof,

How can I tell which coefficients are significant in the VECM output?

The estimated coefficients will be asymptotically Normal, so if you have a big enough sample, treat the t-statistics as if they are z-statistics.

Thank you prof.

One more question. If I have four variables (stock indices) and the JJ cointegration test shows two cointegrating equations, should I run the VECM with both cointegrating relationships, or run it with one cointegrating equation at a time? Because I want to know which index is influenced by the other in the long run.

Thanks.

With 2 together.

Dear Prof Dave Giles,

I found your posts regarding the T-Y approach to Granger non-causality very helpful. Thank you! But my question is: is it for short-run or long-run Granger causality?

It's short run - one period. See my response (with a reference) to the same question on the "Testing for Granger Causality" post a couple of days ago.

DG

Dear Prof Dave Giles,

If the equation contains only 2 variables (one dependent and only one independent variable) and the dependent variable is I(0) while the independent variable is I(1), can I test for Engle-Granger cointegration with this kind of data?

And after the test for cointegration, can I continue to estimate a VECM (if they are cointegrated) or a VAR (if they are not cointegrated)?

Thank you very much in advance.

No - the whole concept of cointegration is based on variables that are integrated of the same order. So, if the variables are all I(1) it makes sense to test if a linear combination of them is I(0). If such a linear combination exists, we say the original variables are cointegrated. If you have just 2 variables, one I(1) and one I(0), cointegration isn't possible.

Dear Prof Dave Giles,

Thank you for your prompt reply.

I would like to ask you another question, regarding the critical values for the unit root tests. I use Stata to run the ADF test at the unit-root stage, and my data contain 239 observations. In order to determine whether a series is I(0), can I compare the t-value with the critical value in the ADF results table directly, or do I have to use MacKinnon's critical values for the ADF test?

Thank you very much for the help.

I'm not a Stata user - you'd need to check if the critical values are the asymptotic ones or exact ones from MacKinnon. In EViews, the exact ones are used, together with p-values. If you have n = 239, there won't be much difference between the exact and asymptotic values, but if in doubt, use the MacKinnon values. And check the Stata manual or "help" - it's important to know what the package is giving you. :-)

Dear Prof. Giles,

I'm currently working on a VAR model with one I(0) variable and one I(1) variable. Is there any theoretical foundation for how to do this? Most papers write about VAR models based on differences or levels. Can I model with a differenced and a non-differenced variable? Thank you very much for your time and the very clear explanations on this blog!

Kind regards,

Robin

Robin - if it's causality testing that you're interested in, see this post: http://davegiles.blogspot.ca/2011/04/testing-for-granger-causality.html

Putting causality to one side, if you just want to fit the VAR and use it for forecasting or impulse response functions, you have 2 options:

1. Use the level of the I(0) variable & the first-difference of the I(1) variable.

2. Difference both variables. The differenced I(0) variable will still be stationary. There is a risk of over-differencing the I(0) variable, but overall I'd prefer this option.

I hope this helps.

Dear prof Giles

My research title is "Tourism-led growth hypothesis: a case study of Libya", and I would like to investigate the relationship between tourism and economic growth. My data are annual, from 1995 to 2010, and my variables are GDP, international tourism receipts, and the unemployment rate. I would also like to investigate the short-run and long-run relationships and the causality between these variables.

May I ask what steps I should follow to investigate the relationship and causality between the variables?

My regards

Nagma

Nagma: I have spelled out the steps in detail in my post here:

http://davegiles.blogspot.ca/2011/04/testing-for-granger-causality.html

DG

Dear Prof. Giles,

Does it make sense to run a cointegration test on 2 variables which are both I(0) according to unit root tests?

However, I have run it and the results show that they are cointegrated.

But what I found in prior papers is that they use I(1) variables to test for cointegration, not I(0).

I am not sure whether I can use I(0) to test for cointegration or not?

Thank you in advance.

No - it makes no sense, due to the very definition of the concept of cointegration. Two series are cointegrated if (i) they are both integrated of the same order, but (ii) there exists a linear combination of these variables that is integrated of a lower order. So, both I(1), but with a linear combination that is I(0), for example. In the case of just 2 variables, if such a linear combination exists, then it is unique. This need not be the case if there are more than 2 variables.

The result you mentioned could arise for several reasons, including a small sample, structural breaks in the data, etc. In any event, you can't have 2 I(0) series that are cointegrated.

Thank you for your reply.

I am running the data to test the relationship between spot prices and futures prices. I intend to follow the steps of testing for unit roots, cointegration, and an ECM.

At first, I used the prices to run the unit root tests, and found that both the spot and futures prices are I(1).

However, I have been told that I need to use the returns, calculated as ln(p_t/p_{t-1}), instead of the prices. But using the returns, the results change to I(0) for both variables.

So, does this mean that I cannot go on to the further steps (cointegration and the ECM)?

Thank you so much for your help.

Your returns data are stationary, so cointegration and an ECM are non-issues. You can just model your data using conventional methods.

Sorry for my poor knowledge in this field, but what do you mean by "conventional methods"?

Thank you.

You can just use OLS regression.

Dear Prof.,

Thank you for the excellent service that you provide to the community (industry and academic) by running this blog. It is very valuable.

A follow-up question on the above: what happens if you use, say, 6 time series of returns (stocks, forwards, options, etc.) where all of them are stationary? How would you go about testing for Granger causality among them?

Thank you

Estimate a 6-equation VAR model in the levels of the data and just use the usual Wald test. There is no need to use the modified test.

Hi,

Great blog!

I have found that the variables in my model are either level-stationary or first-difference stationary; however, tests revealed no cointegration.

As a result I am trying to fit a VAR model. Can I fit the model in first differences or should I use a mixture of level and first difference variables?

Fred: Thanks for the comment.

It depends on why you are estimating the VAR. If it's to test for Granger causality, then you should fit it in the levels, and follow the T-Y procedure outlined in the "Testing for Granger Causality" post linked at the beginning of this post.

If you're estimating the model to use it for forecasting or impulse response functions, then from the information you've supplied, I'd difference ALL of the variables. This will make the I(1) variables stationary. The differenced I(0) variables won't be I(0), but they WILL be stationary.

You may have done some over-differencing, but it doesn't sound as if you have a lot of choice?

Are there any structural breaks in your data that may have "contaminated" the unit root/cointegration tests?

I hope this helps.

DG

Thank you very much for your quick reply.

I was planning on doing both Granger causality testing and impulse response functions. Assuming I am dealing with I(1) variables, do I have to fit two VAR models, one in levels and one in first differences, in order to do both?

Fred - one in the levels for the T-Y causality testing. Then one in the differences for the IRFs.

DG

Unfortunately I'm a Stata user. I don't think I am capable of implementing your EViews steps for the T-Y test.

Could you tell me what the main drawbacks are of running the conventional Granger causality test on a first-difference VAR?

All you have to do is fit a levels VAR, include an extra lag of each variable, and then make sure you DON'T include the coefficients of the extra lags when you do the Wald test. This is easy to implement with any of the usual packages.

If you proceed with a first-difference VAR, the test statistic for Granger causality will not be valid - even asymptotically. It won't have an asymptotic chi-square distribution. Its asymptotic distribution will depend on unknown nuisance parameters. In short - you're sunk!

Hello,

What you are saying is that I cannot estimate an ARMAX model with the time series in first differences, in the case that they are cointegrated?

That's right.

Hello sir,

Thank you very much for your posts, which are extremely helpful.

I have a few questions about the macroeconomic data I am trying to model using a VAR. I have 5 data series, including an interest rate, an exchange rate, the money supply, etc. I find that 3 of the series are I(0) and 2 of them have unit roots, with stationarity in first differences. First of all, can I estimate using the first differences of the two series and leave the other three in levels and still use a VAR? Or would I need to take the first differences of all the series in the model? Secondly, when I run the VAR with the first differences of the two non-stationary series and the levels of the 3 stationary series, I get a very low R-squared of around 13%, a log-likelihood of 230, and a determinant residual covariance of 4.71E-19. Can you tell me if I can go ahead with the impulse response testing? I believe the model is a very bad fit based on these; what would be your suggestion to improve it, since I cannot change my data?

Many thanks in advance and regards,

Has

I'd suggest you do an ARDL/Bounds test analysis to see if there is a long-run relationship. Then you can decide whether you should be using a VAR or a VECM. It also sounds as if you may have an essentially-singular system.

Hello sir, thank you for your response, but how can I solve the problem of having a singular system? Can I accept that there is multicollinearity but continue with the model? Could you provide some suggestions as to how to sort out this issue, please?

Greatly appreciated,

Regards

No, you can't continue - there is something about the way you have set up the equations that is causing the problem. You'll have to re-consider the specification of the equations.

Dear Prof. Giles,

I want to test the impact of external shocks (oil, US GDP) on domestic variables (GDP, CPI). Variables like oil, GDP, and CPI are I(1), while US GDP is I(0). When I test for cointegration of all the variables in LEVELS, they are cointegrated. However, in order to impose restrictions, I prefer to use an SVAR model to estimate the IRFs & FEVDs. Is an SVAR OK in this case?

In the SVAR, I need to take differences of oil, GDP, and CPI. I am confused about differencing US GDP (because it is stationary in levels), because the results of the 2 models are quite different.

Could you help me on this issue? Thank you so much :).

If US GDP is I(0) it shouldn't be included in the cointegration testing. In addition, if you difference an I(0) variable it is still stationary (though not I(0)). So, you could difference it along with the other variables in the model.
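A quick illustrative check of that point, in plain Python with simulated data (nothing here is package-specific): differencing white noise leaves a series that is still stationary, but it becomes a non-invertible MA(1) with a first-order autocorrelation of -0.5, rather than an I(0) innovation process.

```python
import random

random.seed(1)

n = 5000
e = [random.gauss(0, 1) for _ in range(n)]    # an I(0) (white noise) series
d = [e[t] - e[t - 1] for t in range(1, n)]    # its first difference

# Sample variance and first-order autocorrelation of the differenced series.
mean = sum(d) / len(d)
var = sum((v - mean) ** 2 for v in d) / len(d)
cov1 = sum((d[t] - mean) * (d[t - 1] - mean) for t in range(1, len(d))) / len(d)
acf1 = cov1 / var
print(acf1)  # population value is -0.5: an over-differenced, non-invertible MA(1)
```

The series stays stationary (its variance is stable at roughly twice that of the original noise), so differencing it alongside the I(1) variables does no real harm; it just isn't I(0) any longer.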

Hi Giles,

My sample is 16 years, because I couldn't find more than 16 observations due to the shortage of data; there are no daily, weekly, or even monthly data. I would like to investigate whether there is a long-run or short-run relationship between tourism and economic growth using a VECM, and also the causality if possible. My question is: is it OK to run a VECM, given that econometricians say it is impossible to work with a time series of fewer than 30 observations? If not, is it OK to run an ARDL model, since in your post you mentioned that it is the best model for small samples? Please, I really need your help.

Thanks

NAGMA

Nagma - I would be very concerned to see such a short time series being used for this sort of analysis.

So, what can I do? I found a paper that used the bounds test for cointegration and T-Y for causality with the same sample size. Please, I'd appreciate your help and a fuller explanation and justification.

There are lots of bad/weak papers out there. You need more data. That's all I can say.

Hi Giles,

I have two I(0) and two I(1) series. When I run a cointegration test, I find that there is cointegration between the series. I also applied the Granger causality test with the T-Y procedure. But I also want to see the impulse-response functions. What should I do? 1- Take differences of all the variables and run a VAR model. 2- Take differences of only the I(1) series and run a VAR model. 3- Run a VECM.

Thank you very much in advance :)

I'd use a VECM for this.

Dear Prof. Giles,

I have estimated a VAR model in the levels to test for Granger causality. It consists of two I(1) and one I(0) time series, and p=1, m=1 lags. Running a Wald test showed that the lags of time series 1, together with those of time series 2, do not differ significantly from zero, so ts1 and ts2 do not Granger-cause ts3. I also ran the Wald test for the other two equations and there is no statistical significance, so there should be no Granger causality, if I understood the procedure right.

Estimating a VAR in the differences (and hoping there will not be any over-differencing of the I(0) variable) shows some significant coefficients (even between the time series). Shouldn't the coefficients be zero when there is no Granger causality? Or is one of the models false? I had to estimate the VAR model in the differences with more than p+m lags in order to make sure that the VAR model is well specified.

Thank you very much :)

Thanks for the comment. No, the coefficients need not be zero. By the way, as you have a mixture of I(1) & I(0) variables, did you allow for this properly when you tested for cointegration? For example, see my post at http://davegiles.blogspot.ca/2011/04/testing-for-granger-causality.html

So if no Granger causality is found when testing the VAR model in the levels with the T-Y approach, then there isn't any Granger causality, no matter what the significance levels and estimated values of the VAR in the differences are?

Correct. I presume your sample size is large enough that you can safely appeal to the asymptotics needed for the T-Y Wald test.

Hello Sir,

I am a bit weak in this field. I want to know what steps to proceed with after testing for cointegration. I want to test the long-run relationship between the US stock market and other stock markets. First, I will do a pairwise cointegration test between the US and the other stock markets. If I want to find the short-term relationship, what should I do? What is the difference between the Granger causality test and the VECM? Why and when should we use them?

Thanks in advance and regards.

If you find that your data are cointegrated then you can use the levels of the data in an OLS regression to estimate the long-run relationship between them. You can also estimate an ECM if you are interested in the short-run dynamics.
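As a hedged illustration of those two steps, here is a small pure-Python sketch on simulated cointegrated data (the data-generating process and coefficient values are invented for the example): a levels regression recovers the long-run slope, and the lagged levels residual in the differenced equation picks up the speed of adjustment.

```python
import random

random.seed(7)

# Simulate cointegrated data: x is a random walk, and y = 2x + u where
# u is a stationary AR(1) equilibrium error.
n = 400
x = [0.0]
u = [0.0]
for _ in range(1, n):
    x.append(x[-1] + random.gauss(0, 1))
    u.append(0.5 * u[-1] + random.gauss(0, 1))
y = [2.0 * xi + ui for xi, ui in zip(x, u)]

# Step 1: OLS of y on x in levels (closed-form simple regression) -> long-run slope.
mx = sum(x) / n
my = sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
resid = [yi - slope * xi - (my - slope * mx) for xi, yi in zip(x, y)]

# Step 2: ECM  dy_t = phi*dx_t + alpha*resid_{t-1} + v_t  (no constant;
# two regressors, so the 2x2 normal equations are solved by hand).
dy = [y[t] - y[t - 1] for t in range(1, n)]
dx = [x[t] - x[t - 1] for t in range(1, n)]
ec = resid[:-1]                      # lagged equilibrium error
a11 = sum(d * d for d in dx)
a12 = sum(d * e for d, e in zip(dx, ec))
a22 = sum(e * e for e in ec)
b1 = sum(d * g for d, g in zip(dx, dy))
b2 = sum(e * g for e, g in zip(ec, dy))
det = a11 * a22 - a12 * a12
phi = (a22 * b1 - a12 * b2) / det    # short-run impact of dx
alpha = (a11 * b2 - a12 * b1) / det  # error-correction coefficient (negative)
print("long-run slope:", slope, " phi:", phi, " alpha:", alpha)
```

With the AR(1) coefficient of 0.5 used here, the error-correction coefficient should come out close to -0.5: about half of any deviation from the long-run relationship is corrected each period.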

Thank you

Dear Dave Giles

I'm interested in estimating the long-run relationship between four variables that are I(1) and are also cointegrated (only one long-run relationship). I have the following questions:

1. Should the Granger causality test be done with the VAR or the VECM?

2. What does it mean if we do not reject H0 in the Granger causality test in the VAR?

I hope you can answer, and I really appreciate your work.

1. I'd use the VAR - see my explanation in this post.

2. If the variables are in fact cointegrated then there HAS to be G-causality in one direction or another. Failure to detect it may be due to the use of a very small sample, or the presence of structural breaks in the data.

Dear Dave

I would like to know:

1. When I use a VAR model, how can I get the impulse response function for a negative shock using EViews?

2. In the case of a structural VAR, how can I create a confidence band for the impulse response function?

1. Click the "impulse" tab, then "impulse definition", then "user specified". Use HELP to see how to specify your impulse.

2. No idea off hand.

Sir,

1) My x & y are both I(1). In fact both are growing. (d(x) & d(y) are both I(0)).

2) d(x) Granger-causes d(y) - and vice versa

3) T-Y tests show Granger-causality from y to x - but NOT vice versa

4) According to Johansen's test, y & x are cointegrated. Impulse response functions from the VEC show y declining in response to a shock to x, and x rising in response to y.

My ECONOMIC conclusion is that "x does NOT drive y". (This is pretty heretical to most economists).

My silly question: Do I need to report (or engage in) steps 2 & 3 at all?

L (an econometrics autodidact).

P.S. Regarding confidence bands for impulse response functions from a VEC: EViews gives these bands for VARs. How about transforming the VEC into a VAR (a VAR in differences, with an exogenous variable defined as the cointegrating equation specified in the VEC)?

Go with your economic reasoning first and foremost.

Hi sir,

I am working on panel data. All my variables are I(1) and cointegrated. I am interested in estimating long-run and short-run causality. How can I do this?

My second question is: can we apply the Dumitrescu and Hurlin (2012) causality test to a multivariate model?

Dear Prof Giles,

First, thank you very much for this helpful blog.

I am writing my thesis and I want to be sure that I have understood correctly.

First: Granger causality testing in a VAR should be implemented with the Toda-Yamamoto approach if there is cointegration.

Second: if there is cointegration, there is for sure Granger causality.

Third: is it possible to implement it in Stata or R?

Thank You very much

ale

Your understanding is correct in all 3 cases. You can do this in any package that allows you to estimate a VAR and to perform a Wald test - that includes Stata, R, etc.

Dear Prof.,

Thank you for all the good work! I have one seemingly simple question: how can I take the logarithm of negative values in EViews? It automatically drops them.

The logarithm of a negative number is not defined, mathematically. So, you're asking for the impossible!

Thank you Professor. I understand the logic but I have many variables with negative values in my regression which I have to transform to logs. What shall I do about them?

You can't. The fact that there are negative values is telling you that such a specification would be incorrect.

Prof.,

If there is cointegration among variables, must they have a long-term relationship?

Thank you!

Yes - that's what cointegration is.

Thank you Prof.

So, that means the error correction term in the vector error correction model must be negative and significant (?)

Yes.

Why should it also be negative? Is significant not enough?

Dear prof,

Is it important for the coefficient of the ECT to be less than one? And if we get a coefficient of the ECT equal to 2 or 3, is that wrong?

The coefficient of an error-correction term should be negative.

Dear prof, what does it mean if the ECT coefficient is less than -1; in other words, if it equals, for example, -1.1?

Then the results make no sense - the error-correction term is "over-correcting" in trying to get back to equilibrium.
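A toy deterministic sketch (in Python, purely illustrative) makes the point: if the ECT coefficient is alpha, the gap from equilibrium evolves roughly as gap_t = (1 + alpha) * gap_{t-1}. For -1 < alpha < 0 the gap decays smoothly toward zero, while for alpha < -1 the adjustment overshoots the equilibrium, so the gap flips sign every period.

```python
def gap_path(alpha, gap0=1.0, steps=6):
    """Path of the equilibrium gap under a simple error-correction rule."""
    path = [gap0]
    for _ in range(steps):
        path.append((1.0 + alpha) * path[-1])
    return path

smooth = gap_path(-0.5)     # -1 < alpha < 0: monotone decay to equilibrium
overshoot = gap_path(-1.1)  # alpha < -1: over-corrects, sign flips each period

print(smooth)
print(overshoot)
```

With alpha = -1.1 the gap still shrinks in absolute value (since |1 + alpha| < 1), but the oscillating sign is exactly the "over-correcting" behaviour described above, which makes no economic sense as a gradual adjustment to equilibrium.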

Prof.

I tested for cointegration using the Johansen cointegration test, but found no cointegration. Can I still use Granger causality testing as an option?

Thank you

Yes - of course. Cointegration implies there must be Granger causality, but the converse isn't true. You can have G-causality even if there is no cointegration.

Dear Prof,

Do you know why a VECM takes the first differences as the dependent variables, while a VAR takes the levels? Thanks.

This is to take account of the non-stationarity of the data.

Dear Professor,

If I test for cointegration between 5 variables using the Johansen cointegration test and a cointegrating vector is found, does it mean that every pair of variables has a cointegrating relationship?

No - this is NOT the correct interpretation. If you use the Johansen method, the precise nature of any cointegrating vectors will be revealed explicitly.

Dear Professor,

Thanks. Would you mind elaborating further on what "the precise nature of any cointegrating vectors will be revealed explicitly" means?

So, in my case, should I run the Johansen cointegration test for each individual pair separately instead of running all 5 together, in order to find out which pairs are exactly cointegrated? Thanks.

No - you test all 5 for cointegration using Johansen's method. Any standard package - e.g., EViews - will provide output that shows how many cointegrating vectors there are and what variables appear (with what weights) in any such vectors. That's the whole point of the Johansen methodology.

Thanks. Can you advise me on what I should do if I want to test which pairs of variables have a long-run relationship? Should I use the Engle-Granger approach?

Anonymous (21 July) - use Johansen's cointegration testing methodology.

Dear Prof,

Have you heard about the exclusion test, which is done after we find a cointegrating vector using the Johansen procedure, to see which variable(s) do not participate in the cointegrating space? If yes, do you know how to run it? Thanks.

Yes - this type of testing was discussed in Johansen's original 1988 paper, as well as in several subsequent papers by Johansen and Juselius (among others). You can see an application in their 1992 paper (in the Oxford Bulletin of Economics & Statistics) involving the demand for money. For a good overview of this type of testing, see: http://www.nuff.ox.ac.uk/economics/papers/2003/w10/BoswijkDoornik.pdf

I'll try to do a post on this at some stage.

Dear Professor,

Do you know why we can't conduct a unit root test in EViews for panel data with N=20, T=6? It shows an insufficient number of observations. But our total number of observations is 20*6=120, correct? Thanks.

What matters is the value of T, and T=6 is insufficient. Questions such as this would be better addressed to the EViews forum, rather than this blog. See http://forums.eviews.com/

Dear Prof.,

Thanks for your great work. It's really helping me a lot.

I'm Kaleswaran R., doing a Ph.D. at Pondicherry Central University in the area of "India's Foreign Trade and Its Contributions on Economic Prosperity", in India. Generally, we have to conduct a cointegration test if two variables are integrated of the same order. In my case, the two variables are I(1), so now I have to conduct a cointegration test. Among the five equations, which one should I choose? My data have a linear trend. Equations 3 and 4 ('Intercept (no trend) in CE and test VAR' and 'Intercept and Trend in CE - no intercept in VAR') show no cointegrating relationship among the variables, but equation 2, 'Intercept (no trend) in CE - no intercept in VAR', shows 1 cointegrating relationship (I'm using EViews 7). Is it correct to choose equation 2 when our data have a linear trend?

I haven't seen your data, but "yes", that sounds correct.

Dear Prof.

Great blog!

If we find no cointegration in the VECM (vector rank zero), then I should not use a VECM; rather, I should use a regular regression of the variables in first differences?

And if I find more than rank 3 for a 2-variable model in the VECM, then can I use the VECM normally?

1. Yes. You don't use a VECM if there's no cointegration.

2. I don't understand your second question.

Hello sir,

I have read the post, but I still don't understand some things, because I'm really new to this field. I did the Engle-Granger cointegration test and my variables are not cointegrated. The variables are all I(1) in levels, i.e., I(0) at first differences. Can I proceed further to a Granger causality test? (I keep seeing in the posts that variables must be cointegrated to have causality.) If I can, is a VAR the appropriate model to use? Do I need to use the variables in first differences or in levels? I'm really sorry, because there was a similar post, but I don't seem to get it. Thanks very much!

You have misread what I said. If the variables are cointegrated, then there must be Granger causality. However, the converse is NOT true: you can have causality without cointegration. If you are really sure that your variables are all I(1) and not cointegrated, you could use a VAR in the first differences for modelling and impulse response purposes. However, if you want to test for Granger causality in your case, I'd be using a levels VAR model and the Toda-Yamamoto (MWALD) approach - see my other posts on this.

Hello Prof.

If I am using a VAR model but there is autocorrelation, should I change the VAR model, are there steps to fix it, or should I just report my findings and leave things as they are?

thanks

Extending the lag length(s) will usually resolve this - if you have enough degrees of freedom to be able to do so.

Hi, I'm studying the effects of macroeconomic variables on stock returns. Four of my variables are I(1) and the other is I(0). Can I run a Johansen test of cointegration?

You can only test for cointegration among variables that are non-stationary. However, take a look at my posts on ARDL models.

Dear Prof. Giles,

I really appreciate your clear and transparent explanation of the procedure for conducting a T-Y Granger causality test. I have several questions awaiting your kind reply. 1. If the T-Y procedure fixes the asymptotic distribution of the Wald test within a VAR framework, would the distribution of the LR or LM test statistic also be the standard one? 2. How large does the sample size have to be to be considered large enough? 3. If the sample size is "extremely" small, is there some other procedure that can be used? For example, can I bootstrap the Wald chi-square statistic? Thank you very much!

Kang: Yes, the same applies to the LM test or LRT. As with any asymptotics, there's no "magic number". In the case of a small sample, bootstrapping the test statistic is the best thing to do.

DG

Hi Prof.

I am conducting research on the relationship between two macroeconomic variables. The ADF test gave me the result that the data are non-stationary in levels, but they became stationary at first differences. Then I employed the Johansen cointegration test and found there is one cointegrating equation, by both the trace and max eigenvalue tests. Now what can I do? What should my next step be?

You just estimate a model using the first-differences of the data.

That is, a regular VAR in the differences.

Hello sir. My series are I(0) and I(1). They are cointegrated according to the ARDL bounds test. What kind of causality test should I use: standard Granger causality, VECM causality, or Toda-Yamamoto?

Toda and Yamamoto.

Dear sir,

In my model, the ECT is negative but its p-value is 0.25. What does this mean, and what should the next step be? Thanks.

Dear Prof.,

I would like to know if it's possible to do the ARDL and VECM analyses for 10 variables with 28 years of annual data?

I truly appreciate your help.

Best Regards

Ali Alshawaf

Ali - no, you won't have enough degrees of freedom to do anything meaningful.

Dear Prof and valued members,

I employ time series data to measure the impact of the time deposit interest rate, M/BV, and a dummy variable on banks' liquidity as measured by customer deposits. However, after checking for unit roots (ADF), I've found that the time deposit interest rate is stationary in levels, I(0); M/BV at the second difference, I(2); the dummy in levels, I(0); and the dependent variable, customer deposits, is stationary at the first difference, I(1). Could you please tell me the best model for this?

Dear Professor,

To begin with, I would like to thank you for your extraordinary blog. It especially helps people like me (read: working professionals) who are new to, or not comfortable with, statistics and have to complete tasks in a set time.

I have been working on a model to find the relationship between the market price and the futures contract prices (3 months) of a stock, e.g. Amazon's stock price (M) with Amazon May (M1), June (M2), and July (M3) futures contract prices on a particular date.

I have collected daily data for the last 4 years. I checked for stationarity using the ADF test in R with the default lag. M is non-stationary with a p-value of 0.6, and M1, M2, and M3 are stationary. But when tested at first differences, all of them are seen as stationary (p=0.01).

I then planned to use a VECM or VAR to find the relationship between them in the form of an equation. But as I read, I have been advised to go for cointegration testing, then a Granger causality test, and then the VECM.

I don't understand why I can't simply apply a VECM or VAR to the time series instead of doing these tests, because, as I understand it, these tests don't alter my time series. Is my approach correct? Should I use a VECM or VAR, or some other model? Should I be carrying out the tests on the time series in levels or in first differences?

Please let me know. I am confused and don't know how to progress further.

Thanks for your comment.

You can't estimate a VECM model unless you have tested for, and found, cointegration. That's not going to be even possible unless ALL of your data are I(1). So, you should be using a VAR model. If you have a 10% significance level in mind, then ALL of your series are stationary according to the ADF test, so you could use the levels of all of the series in your VAR. If you want to be more cautious, you could cross-check the ADF results by also applying the KPSS test for confirmation. Alternatively, you could play it safe by estimating your VAR model using the differences of all of the series.

Dear Sir,

I am analysing time series data using cointegration and a VECM. All the series were tested for a unit root allowing for structural breaks. The tests reveal that all the series are non-stationary and also contain structural breaks. This suggests that I will need to account for the breaks in the VECM. However, the structural breaks in the series have different break dates. As a result, I am not sure how to incorporate the different breaks in the VECM. Could you assist me with the right approach for dealing with the different breaks in the VECM?

Many thanks.

I would imagine that you need several dummy variables - one for each of the breaks.

Dear Prof. Giles,

Thank you very much for this blog. It has been of tremendous help for my Master's thesis. I would be grateful if you could confirm whether the TY procedure can be applied in a panel data (panel VAR) context.

Regards,

Ankesh

Here are a couple of references that may help you:

http://www.aebrjournal.org/uploads/6/6/2/2/6622240/paper_2.pdf

https://www.researchgate.net/publication/227347359_Testing_for_Granger_causality_in_heterogeneous_mixed_panels

Dear Dr. Giles,

I first want to express my sincere thanks for your blog, filled with extraordinary knowledge. If I understand your comments correctly, we cannot use VECM Granger causality unless all variables are I(1) and cointegrated. However, I am wondering what the reasons are why we cannot apply VECM Granger causality to a mix of I(1) and I(0) cointegrated variables.

I(0) variables can't be cointegrated, by definition.

Dear Prof. Giles,

I'm very thankful for all your interesting and wonderfully detailed posts. I find myself in a very uncomfortable situation.

I have a set of 7 variables (I know it is a lot for a VAR), monthly data from 1997 to 2014.

I'm trying to identify how different supply and demand variables affect a commodity price.

Therefore I conducted my study using an SVAR approach which delivered interesting IRFs.

After leaving that project to one side for more than a year, I found this post of yours and wonder if the SVAR analysis was the right approach for my study, and whether I should maybe consider a VECM instead. Here is some of the information:

- From the 7 variables 5 appear to be I(1), the rest are stationary.

- I conducted a Johansen test for all the variables and it finds 3 cointegration relations.

- If I choose only the I(1) variables to test for cointegration relations I obtain only 1 cointegration relation.

I'm very confused about what to do. After investing a lot of time in this project and going very deep into SVARs, I have the feeling my research is wrong. Could you tell me what you think about this?

Thanks a lot for your input,

Chris

Chris - first, your Johansen testing for cointegration should involve only the 5 variables that are I(1). Second, if you have found cointegration then you really need to estimate a VECM model. The other 2 stationary variables should be included in the model in their levels (not differenced).
