It's the bane of any prognosticator: No matter how often you're right, there's always another election/game/award around the corner that could easily upend your record. Nate Silver was the champion of last November's election, not only predicting an Obama win but coming close to the actual margin of victory. Then came the Super Bowl - and with it an entirely different set of metrics - and Silver wrongly picked the 49ers. Now he's back with the Academy Awards, where he sports a good-but-not-great 75 percent success rate. Time to make some adjustments. From the NYT:
Our method will now look solely at the other awards that were given out in the run-up to the Oscars: the closest equivalent to pre-election polls. These have always been the best predictors of Oscar success. In fact, I have grown wary that methods that seek to account for a more complex array of factors are picking up on a lot of spurious correlations and identifying more noise than signal. If a film is the cinematic equivalent of Tim Pawlenty -- something that looks like a contender in the abstract, but which isn't picking up much support from actual voters -- we should be skeptical that it would suddenly turn things around. Just as our election forecasts assign more weight to certain polls, we do not treat all awards equally. Instead, some awards have a strong track record of picking the Oscar winners in their categories, whereas others almost never get the answer right (here's looking at you, Los Angeles Film Critics Association).
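The weighting idea Silver describes can be sketched as a simple score: each precursor award casts a "vote" for the film it honored, weighted by how often that award has historically matched the Oscar. A minimal illustration in Python - the award names and weights below are invented for the example, not Silver's actual figures:

```python
# Sketch of an award-weighted forecast in the spirit of the method
# described above. All weights and award picks are hypothetical.
from collections import defaultdict

# Historical rate at which each precursor award's winner went on to
# win the Oscar (invented numbers for illustration).
AWARD_WEIGHTS = {
    "Directors Guild": 0.90,
    "Golden Globes": 0.65,
    "BAFTA": 0.60,
    "LA Film Critics": 0.30,  # weak track record -> small weight
}

def forecast(winners_by_award):
    """Map each award's pick to a weighted score, then normalize
    the scores into rough win probabilities."""
    scores = defaultdict(float)
    for award, film in winners_by_award.items():
        scores[film] += AWARD_WEIGHTS.get(award, 0.0)
    total = sum(scores.values())
    return {film: s / total for film, s in scores.items()}

probs = forecast({
    "Directors Guild": "Argo",
    "Golden Globes": "Argo",
    "BAFTA": "Argo",
    "LA Film Critics": "Amour",
})
```

With made-up inputs like these, the film sweeping the high-weight awards dominates the forecast, while a win from a poorly predictive award barely moves the needle.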
Silver's pick for Best Picture won't surprise anyone:
Far less certain is the race for Best Director. The most obvious pick, Ben Affleck for "Argo," wasn't nominated. Neither was "Zero Dark Thirty's" Kathryn Bigelow, who won a bunch of pre-Oscar awards.
Instead, the method defaults to looking at partial credit based on who was nominated for the other awards most frequently. Among the five directors who were actually nominated for the Oscars, Steven Spielberg (for "Lincoln") and Ang Lee ("Life of Pi") were nominated for other directorial awards far more often than the others, and Mr. Spielberg slightly more regularly than Mr. Lee. So the method gives the award to Mr. Spielberg on points, but it's going to be blind luck if we get this one right: you can't claim to have a data-driven prediction when you don't have any data.
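The partial-credit fallback can be sketched the same way: when the precursor winners aren't on the Oscar ballot, count how often each Oscar-nominated director appeared on the other awards' nomination slates. The slates below are invented to mirror the situation the article describes, not real ballots:

```python
# Sketch of the partial-credit fallback: score each Oscar-nominated
# director by how many precursor awards also nominated them.
# Award names and nomination slates are illustrative only.
from collections import Counter

oscar_nominees = {"Spielberg", "Lee", "Haneke", "Russell", "Zeitlin"}

# Precursor award -> its directing nominees (hypothetical slates).
precursor_nominations = {
    "Directors Guild": ["Affleck", "Bigelow", "Spielberg", "Lee", "Hooper"],
    "Golden Globes": ["Affleck", "Bigelow", "Spielberg", "Lee", "Tarantino"],
    "BAFTA": ["Affleck", "Bigelow", "Spielberg", "Haneke", "Tarantino"],
}

def partial_credit(nominations, eligible):
    """Count nominations only for names still eligible for the Oscar;
    Affleck and Bigelow rack up mentions but are filtered out."""
    counts = Counter()
    for slate in nominations.values():
        for name in slate:
            if name in eligible:
                counts[name] += 1
    return counts.most_common()

ranking = partial_credit(precursor_nominations, oscar_nominees)
```

Under these toy slates, Spielberg edges Lee on nomination count, which is exactly the thin margin the article warns is closer to a coin flip than a data-driven call.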
He sees a super-tight race: