Friday, January 24, 2014

Testing Up, or Testing Down?

Students are told that if you're going to go in for sequential testing when determining the specification of a model, then the sequence you follow should be "from the general to the specific". That is, you should start off with a "large" model, and then simplify it - not vice versa.

At least, I hope this is what they're told!

But are they told why they should "test down", rather than "test up"? Judging by some of the things I read and hear, I think the answer to the last question is "no"!

The "general-to-specific" modelling strategy is usually attributed to David Hendry, and an accessible overview of the associated literature is provided by Campos et al. (2005).

Let's take a look at just one aspect of this important topic. 

Rob Hyndman on Forecasting


If you have an interest in forecasting, especially economic forecasting, then Rob Hyndman's name will be familiar to you. Hailing from my old stamping ground - Monash University - Rob is one of the world's top forecasting experts.
Without going into all of the details, Rob is very widely published, and he also has a great blog, Hyndsight. He's the author of the well-known "forecast" package for R (version 5 was just released), and the co-author of several important books.

Last year, Rob taught an on-line forecasting course titled "Time Series Forecasting Using R". It comprised 12 one-hour lectures, on the following topics (with exercises):

  • Introduction to forecasting 
  • The forecaster's toolbox 
  • Autocorrelation and seasonality 
  • White noise and time series decomposition 
  • Exponential smoothing methods 
  • ETS models 
  • Transformations and adjustments 
  • Stationarity and differencing 
  • Non-seasonal ARIMA models 
  • Seasonal ARIMA models 
  • Dynamic regression 
  • Advanced methods

The really good news? You can access these presentations right here!
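
If you'd like to get a feel for the "forecast" package itself before working through the lectures, here's a minimal sketch using the built-in AirPassengers series: auto.arima() selects a seasonal ARIMA specification automatically, and forecast() produces point forecasts together with prediction intervals. This is just an illustration of the package - it isn't taken from the course materials.

library(forecast)                  # install.packages("forecast") if necessary

fit <- auto.arima(AirPassengers)   # automatic (seasonal) ARIMA model selection
fc  <- forecast(fit, h = 24)       # forecasts for the next 24 months
summary(fit)                       # estimated model and fit statistics
plot(fc)                           # point forecasts with prediction intervals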



© 2014, David E. Giles