A model that correctly predicts - in advance - eight straight elections would be a damn fine model. So fine, in fact, that the posts and links about it above set off my "smell test" alarm. I decided to look into this further. What I'm trying to discern is whether this model was created over 32 years ago and has, without alteration, been correct each time since, or whether it was created within the last four years or so and designed around retrospectively giving the proper answer for the last eight elections.
The difference is important, though not necessarily critical. To understand what I mean I have to get a bit deep here. There are essentially two ways to create a model. In method #1 you think about causes and effects - about what should influence behavior going forward - and then base your model on that. This is basically an intuitive approach, but one that can be very well informed by experience and knowledge. You have to be smart to get a good model this way. Method #2 requires no real intuitive smarts, just a lot of data and some fast computers. You take something like eight past instances of what you want to predict, collect data about the conditions present at those eight times, and then let the computer figure out what combination of data points correlates best with the eight known outcomes.
Allow me to use a frivolous example to illustrate. Say I want to predict, at the beginning of the season, who will win the World Series. Method #1: I decide, since I'm a baseball genius who's been following the sport closely for decades, that certain key stats are likely to be most critical - perhaps the on-base percentage of my starters over the last three seasons and the average ERA of my pitching staff over the last two seasons, at a weighting of 5 to 3. Simple model. I publish it and hope for the best. Eight years later, when it's proven to be right every single time, I'm hailed as a genius. (Or as someone who's very lucky!) Method #2: I collect all sorts of extraneous data about the last eight World Series winners versus all other teams and perform some very fancy factor analysis and other statistical tours de force. I discover - contrary to anyone's intuitive suspicion - that a combination of average distance traveled to away games during August plus number of doubles hit by infielders, when weighted properly, is a perfect predictor of the LAST eight winners. Whodathunkit? I publish my model - no one pays it much attention.
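For the code-minded, here's a toy sketch of method #2 in Python. Every number in it is fabricated - random "stats," random outcomes, nothing to do with real baseball or with anyone's actual model - but it makes the point: hand a computer enough extraneous factors and it will almost always find some weighted combination that "perfectly predicts" eight past winners by sheer chance.

```python
# Toy illustration of method #2: brute-force a pair of irrelevant "stats"
# that retrodicts eight past World Series outcomes perfectly.
# All data here is random -- that's the whole point.
import itertools
import random

random.seed(42)

N_YEARS = 8    # eight past outcomes to fit
N_STATS = 20   # extraneous stats: August travel miles, infielder doubles...

stats = [[random.gauss(0, 1) for _ in range(N_STATS)] for _ in range(N_YEARS)]
winners = [random.randint(0, 1) for _ in range(N_YEARS)]  # 1 = my team won

def retrodicts_all(i, j, w1, w2):
    """True if w1*stat_i + w2*stat_j calls all eight past outcomes right."""
    return all((w1 * row[i] + w2 * row[j] > 0) == bool(won)
               for row, won in zip(stats, winners))

# Try every pair of stats with a small grid of weights.
for i, j in itertools.combinations(range(N_STATS), 2):
    for w1 in (-3, -2, -1, 1, 2, 3):
        for w2 in (-3, -2, -1, 1, 2, 3):
            if retrodicts_all(i, j, w1, w2):
                print(f"'Perfect' model found: {w1}*stat{i} + {w2}*stat{j} > 0")
                raise SystemExit  # eight for eight, on pure noise
print("No perfect pair this time; collect more stats and rerun.")
```

That's roughly 6,800 candidate models, each with about a 1-in-256 chance of fitting eight coin flips, so the search nearly always "succeeds" - and dutifully hands you a Whodathunkit model.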
I said above that "The difference is important, though not necessarily critical." In the frivolous baseball example you can see the important difference. But it may turn out that model #1 fails on the ninth attempt at prediction whereas the silly model works just fine in predicting the next winner. I purposely picked silly factors for baseball model #2, but in fact that backward-looking method is a damn fine way to make a good model. However, until it's proven itself in practice, getting all excited and saying "Prepare to buckle up boys; freeman2, bust out yer wallet" may be investing a bit too much, a bit prematurely.
The model in question is based on the "economic indicators" of "state and national unemployment rates [and] per capita income". These might or might not be really, really critical variables; they might or might not have predictive value. According to Berry and Bickers, these indicators, when massaged the right way, retrospectively predict the last eight elections. Danivon (I see just now that we're cross-posting and that he's said much the same thing as I'm saying, but MUCH more tersely) says eight instances isn't a rigorous enough test. But a lot of those elections were very close - I think that's significant.
How do we know Berry and Bickers didn't devise their model 32 or more years ago? Here's a pic of Berry: he's too young.

Bickers looks older, but according to his bio he only got his BA in 1981. It's unlikely he and Berry worked together back then and both ended up at CU today. (Bickers' bio, oddly, has no data on which to base an age estimate other than the pic, and that could be horribly out of date.)
You'd never know from the press reports that this model was just created recently. The several different reports I found, including THIS press release at the CU website, sure make it sound like the model was created way back when. Maybe it was, but I doubt it. After it had been right six times running - prospectively! - these profs would surely have published and reaped their just plaudits. Read the Bickers bio - there's no mention of it, yet it would have been a HUGE feather in his cap.
When one thinks that it's been prospectively right eight times in a row, one is naturally going to be more trusting of it than if one realizes it was created specifically to be retroactively correct. I strongly suspect that Dr. Fate has fallen into that trap, which explains his undue enthusiasm. Sorry to burst your bubble, DF.
Now all that said, the real problem with the retrospective method of model-creation is this: they may have programmed their computer not just to predict eight past elections, but those eight plus one more - in other words, they could have (figuratively speaking) said to the computer, "Find me a small set of simple factors that correctly predict the past eight and also show Romney winning." I accuse these profs of no such practice; I merely note that it's possible. It might also be possible to say to the computer, "Find me a small set of simple factors that correctly predict the past eight and also show Obama winning re-election." In fact, a potentially unlimited number of sets of factors (if you had enough data to plug in and a strong enough computer) could be found both ways. Then the best way to proceed might be to see how much computing time it took the computer to find pro-Obama sets versus pro-Romney sets!
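Same toy approach as before, with one twist - and again, all made-up numbers, not the profs' actual data or method. A blind random search over factor weightings will usually cough up a model that goes 8-for-8 on the past AND calls the ninth race for whichever side you ask it to:

```python
# Toy illustration of the "eight past plus one more" trick: search random
# factor weightings until one fits all eight past races AND calls the
# upcoming race the way we've pre-decided. All data is fabricated.
import random

random.seed(1)

N_PAST, N_FACTORS = 8, 12
past = [[random.gauss(0, 1) for _ in range(N_FACTORS)] for _ in range(N_PAST)]
past_winners = [random.randint(0, 1) for _ in range(N_PAST)]  # 1 = incumbent
upcoming = [random.gauss(0, 1) for _ in range(N_FACTORS)]     # this year's data

def calls(weights, row):
    """The model's pick: 1 if the weighted factors sum positive, else 0."""
    return int(sum(w * x for w, x in zip(weights, row)) > 0)

def find_model(desired_ninth, tries=200_000):
    """Random-search for weights that retrodict the past 8 and grant the wish."""
    for _ in range(tries):
        w = [random.gauss(0, 1) for _ in range(N_FACTORS)]
        if (all(calls(w, r) == y for r, y in zip(past, past_winners))
                and calls(w, upcoming) == desired_ninth):
            return w
    return None

for wish, label in ((1, "incumbent"), (0, "challenger")):
    model = find_model(wish)
    print(f"Model that goes 8-for-8 and picks the {label}:",
          "found" if model else "not found")
```

Each random weighting has about a 1-in-512 chance of matching all nine calls, so a couple hundred thousand tries almost always finds one for each side - which is exactly why an 8-for-8 retrospective record, by itself, tells you so little.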
The proof of Drs. Berry and Bickers' pudding will come, to a very small degree, a few months from now, then a bit more in another four years, and more in yet another four, and 32 years from now DF might be able to say, "I told ya' so!"
