I saw this image in an article called ‘Doomsday Book’ by Debora MacKenzie in the January 7 issue of New Scientist magazine:


It shows the behavior of the ‘standard run’ of the model in the book The Limits to Growth, which was published in 1972, as solid lines. Also depicted are actual data for the past 40 years.

There’s a quote in the article by scientist Yaneer Bar-Yam, which goes:

“It is reasonable to be concerned about resource limitations in fifty years,” Bar-Yam says, “but the population is not even close to growing [the way Limits projected in 1972].”

OK, answer me this: If the population is ‘not even close’ to what the Limits to Growth study suggested, then why does it seem to look very close in the graph?


  1. In my experience, when I see something like this and it’s all based on some crazy computer model (and we all know computer models can be made to create any outcome you want, but that’s another story), what usually has happened is that they are re-running the model using updated historical data (i.e. ex post) instead of judging it by its original ex ante forecast.

    For instance, taking the model that was developed 30 years ago, plugging real data into that same model up to last year, and then calling the output the “forecast” for this year.

    Usually that makes any “model” come up with pretty good numbers for this year.

    Don’t know if that’s what they’ve done here, but it seems to be standard operating procedure for people trying to make stuff up.

    I once knew a fellow many years ago who did this in the semiconductor industry. He was changing forecasts after the fact by recalculating his model with updated data as it came in, but he would never admit to it. Yet any fool could hold up issues of his newsletter to a light and see that his curves were being changed after the fact. This earned him a great reputation for prescience and doubtless sold many newsletters to gullible subscribers.

    Well, yeah, the numbers look highly predictive if you throw the old forecasts out completely and recalculate your model with more recent data. But that’s not really a forecast in my opinion — at least, not one with actual predictive utility!

    Another example is the U.S. Index of Leading Economic Indicators. Yes, it looks highly predictive, but the initially released figure is later revised as its components are revised, and it’s the revised series that is shown in all the historical graphs you see of it. There was a paper in the California Journal of Management in the early ’90s that examined the LEI by assembling all the *initial* LEI estimates, and it had *much* crappier performance than the revisions would imply. Since the index is used for predicting, it’s the initial estimates that people care about — no one cares how well an older, already-revised number predicts.

    I have a great sophisticated model for predicting the stock market next month – do a 3 month moving average of current stock prices. If you then put all the real history into it except the last 3 months, it predicts the last 200 years amazingly, and the last couple of months and the next month or two pretty darned well. But that doesn’t mean you should base any decisions on it!
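The moving-average “model” from the comment above is easy to demonstrate. Here’s a minimal sketch in Python, using a simulated random-walk price series as a stand-in for real history (the function name, data, and parameters are illustrative, not from any published model): feeding the model all realized data up to each point and then scoring its one-step-ahead “prediction” makes it look accurate, exactly as described.

```python
import random

def moving_average_forecast(prices, window=3):
    """Hypothetical 'model': predict the next value as the mean of the last `window` values."""
    return sum(prices[-window:]) / window

# Simulate a slowly drifting price series as a stand-in for real market history.
random.seed(0)
prices = [100.0]
for _ in range(199):
    prices.append(prices[-1] + random.gauss(0.1, 1.0))

# "Backtest" in the flattering way the comment describes: at each step, hand the
# model all realized data up to that point, then compare its one-step prediction
# with the value that actually followed.
errors = [abs(moving_average_forecast(prices[:t]) - prices[t])
          for t in range(3, len(prices))]
mean_abs_error = sum(errors) / len(errors)

# Because consecutive prices barely move, the trailing average is always close to
# the next value: the "forecast" looks impressive in-sample even though it
# contains no genuine predictive insight about the future.
print(round(mean_abs_error, 2))
```

The small average error here comes entirely from the series being smooth, not from the model knowing anything — which is the commenter’s point about in-sample “validation.”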



