Tradition is not the worship of ashes, but the preservation of fire.—Gustav Mahler

Thursday, April 10, 2008

Numbers Don't Lie! (Though sometimes people do bend them).

Larry Bartels, a political science professor at Princeton, has gotten a lot of play in "the internets" (as a friend of mine calls it) for his argument that Democratic presidents have produced more economic growth and more egalitarian distributional effects than their GOP counterparts, at least since WWII. Jim Manzi's dismantling of at least the inequality claims over at NRO almost makes me wish I had spent more time doing stats in grad school and less reading Plato or whoever. Well, almost.

What's remarkable about Manzi's analysis - and deadly for Bartels' claims - is that he makes clear that Bartels is playing a shell game with his numbers. When calculating presidential effects, Bartels gives himself a "lag year," so Jimmy Carter gets credit for whatever happened economically for a year after he stopped being president. (That's 1981 for you young'uns). But as Manzi shows, if you get rid of the lag year or stretch it to two years, all of Bartels' effects disappear. Poof! Now, maybe there's something genuinely robust about a one-year lag, but my guess is that Bartels fiddled around with the lag until he got results that made sense to him.
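To see how much work the lag is doing, here's a toy sketch. Everything below - the party assignments and the growth figures - is invented for illustration; this is not Bartels' actual data or model, just the attribution rule the argument turns on:

```python
# Toy illustration of lag-year attribution. All numbers are invented
# for demonstration; they are NOT Bartels' data.

# Party holding the presidency each year (hypothetical assignment).
party_in_office = {1977: "D", 1978: "D", 1979: "D", 1980: "D",
                   1981: "R", 1982: "R", 1983: "R"}

# Hypothetical annual growth rates (percent).
growth = {1979: 1.0, 1980: -1.0, 1981: 2.0, 1982: -2.0, 1983: 4.0}

def mean_growth_by_party(growth, party_in_office, lag):
    """Attribute growth in year t to the party in office in year t - lag,
    then average over each party's attributed years."""
    totals, counts = {}, {}
    for year, g in growth.items():
        credited = party_in_office.get(year - lag)
        if credited is None:
            continue  # year falls outside the sample window
        totals[credited] = totals.get(credited, 0.0) + g
        counts[credited] = counts.get(credited, 0) + 1
    return {p: totals[p] / counts[p] for p in totals}

for lag in (0, 1, 2):
    print(lag, mean_growth_by_party(growth, party_in_office, lag))
```

In this made-up series, the Democratic years average 0.0 with no lag, but a one-year lag hands 1981 to the outgoing Democrat and pushes the Democratic average up to about 0.67. Nothing about the economy changed - only the attribution rule did - which is exactly why a result that vanishes when the lag moves is suspect.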

5 comments:

James F. Elliott said...

In his footnotes, Bartels tries to acknowledge some of the problems Manzi points out and then pooh-poohs them. Manzi's piece is definitely devastating to Bartels, and quite good on the stats.

Michael Simpson said...

It reminds me of some political economy book we had to read in grad school (can't remember which one, but it had won any number of awards). The book purported to show that the European economies had growth rates similar to or better than US rates in the post-war period. Some of that was true almost by definition: most of the European economies had been devastated in the war, so their "growth rates" were bound to look better. (Except, I might add, for those virtuous Swedes, who managed to stay out of the whole conflict and even profit by trading with the Nazis). But I noticed when reading the book that the author seemed to pick and choose his dates of comparison pretty carefully - enough so that when I reran his arguments using a broader set of dates, the argument collapsed.
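That kind of endpoint-picking is easy to demonstrate. Here's a toy sketch - the index values are invented, not from the book in question or any real national accounts - of how the choice of comparison dates alone can change the growth-rate conclusion:

```python
# Toy demonstration of endpoint sensitivity. The index values are
# invented for illustration; they are NOT real economic data.

# Hypothetical GDP index for some economy, with a postwar trough in 1950.
gdp_index = {1946: 100.0, 1950: 90.0, 1960: 180.0, 1973: 300.0}

def cagr(series, start, end):
    """Compound annual growth rate of an index between two years."""
    years = end - start
    return (series[end] / series[start]) ** (1.0 / years) - 1.0

# Starting the comparison at the 1950 trough substantially inflates the
# apparent annual growth rate relative to the full 1946-1973 window.
print(round(cagr(gdp_index, 1950, 1960), 4))  # cherry-picked window
print(round(cagr(gdp_index, 1946, 1973), 4))  # broader window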

I really don't think that most academics are dishonest in any straightforward way, but like most of us, their own "interpretive lenses" sometimes get the better of their understanding and explanation of things. And, unfortunately, since the academy's biases tend to run overwhelmingly in one direction, those errors can go uncorrected. It's striking, after all, that Manzi's analysis - which I agree is devastating to Bartels' argument - was done by a guy who just knows how to run numbers. Any grad student in political science, sociology, or economics who knows a basic stats package could have done what he did. So why didn't someone else do it before?

James F. Elliott said...

So why didn't someone else do it before?

Was Bartels' piece run in a peer-reviewed journal? Some journals leave it to the readership to pick apart the errors if the argument is "interesting" enough. But I think your "lens" comment is also spot-on. All people look for interpretations of events and facts that support preconceived notions. Politically, I was of course inclined to favor Bartels' work. But because I try to be honest, and because my graduate work gave me a good working knowledge of statistical research methodology, I can't simply ignore Manzi's numbers.

I like the guys over at The American Scene (of whom Manzi is one) very much; he, John J. Miller, and Ramesh Ponnuru bring a welcome dose of sanity to National Review.

Tom Van Dyke said...

Sanity happens even in the best of cabals.

Michael Simpson said...

I think Bartels' paper was a working one, meaning it hadn't been vetted through the review process. But I was thinking of the broader blogoworld - there are lots of people with very advanced econometrics training who could have spent a bit of time unpacking the data side of the argument, but didn't. That said, I doubt that a journal review process would catch something like this, as that would require a reviewer to run his own numbers.