Last year I used maths to successfully predict 10 out of 11 key Oscar winners, and now I want to try it again and take all the glory for me and my calculator. I’ve had to leave it relatively late in the day to share my findings, as the Independent Spirit Awards can be key Oscar predictors and only take place the day before the Academy Awards. Below I will take you through my methods and my predictions, and make clear where the data, and my sanity, have let me down.
If at any time the heady mix of maths and popular culture becomes too much excitement for you to bear, please close your browser, have a ten-minute nap, and then come back and resume reading.
For each category I will show three predictions, each worked out in a different way to help me decide the best method to use going forward. The methods are as follows…
Statistical Method 1: This is the same method I used last year. Every award a nominee wins in the run-up to the Oscars increases their chance of winning on the night. How accurately each award has predicted the Oscar winner since the year 2000 determines how much it increases the nominee’s chances: a BAFTA that matches the Oscar 80% of the time carries greater weight than an Independent Spirit Award that agrees with the Oscars 24% of the time. Make sense? Good stuff. All maths here is performed in trusty Excel.
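For the curious, the weighted-scoring idea behind Method 1 can be sketched in a few lines of Python. Everything here is illustrative: the accuracy weights and the lists of precursor wins are made-up numbers, not the real spreadsheet data.

```python
# Toy sketch of Method 1: each precursor award is weighted by how often
# it has matched the eventual Oscar winner since 2000. All names and
# numbers below are invented for illustration.

# Hypothetical historical agreement rates with the Oscars.
AWARD_ACCURACY = {
    "BAFTA": 0.80,
    "Golden Globe": 0.65,
    "Independent Spirit": 0.24,
}

# Hypothetical precursor wins for each nominee.
NOMINEE_WINS = {
    "Boyhood": ["BAFTA", "Golden Globe"],
    "Birdman": ["Independent Spirit"],
}

def score(nominee):
    """Sum the accuracy weights of every award the nominee has won."""
    return sum(AWARD_ACCURACY[award] for award in NOMINEE_WINS[nominee])

def predict(nominees):
    """Return the nominee with the highest weighted score."""
    return max(nominees, key=score)

print(predict(NOMINEE_WINS))  # Boyhood (score 1.45) beats Birdman (0.24)
```

In this toy data Boyhood’s two weighty wins easily outscore Birdman’s single low-accuracy Spirit award, which is exactly the intuition the Excel sheet encodes.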
Statistical Method 2: In order to get a little more statistically robust this year I built a second set of predictive models using stepwise regression in R. This might be a good point to look away if you’re feeling faint. For each category I look at which other awards Oscar nominees win before the Academy Awards, throwing Box Office and Rotten Tomatoes data into the mix too. The resulting model only takes into account factors with a statistically significant correlation with Oscar wins, rendering the rest null and void, and allows for the fact that winning some awards may actually hurt your Oscar chances. Pretty sexy stuff.
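The screening step — keep only factors that correlate strongly with winning, and allow that correlation to be negative — can be illustrated with plain Pearson correlations. This is not the actual stepwise routine from R, just a Python sketch of the filtering idea on invented data, with a simple threshold standing in for a proper significance test.

```python
# Toy sketch of the screening idea behind Method 2: keep only predictors
# whose correlation with winning the Oscar is strong enough, noting that
# the correlation can be negative (some awards, or a high critics score,
# can count *against* you). All data below is invented for illustration.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# One row per past nominee: predictor values, and whether they went on
# to win the Oscar (1) or not (0).
won_oscar = [1, 0, 0, 1, 0, 1]
predictors = {
    "won_BAFTA":        [1, 0, 0, 1, 0, 1],    # tracks the Oscar exactly
    "box_office":       [10, 11, 9, 10, 11, 9],  # weak, gets discarded
    "rt_critics_score": [60, 95, 90, 70, 85, 65],  # strongly *negative*
}

THRESHOLD = 0.5  # crude stand-in for a significance test
kept = {name: round(pearson_r(vals, won_oscar), 2)
        for name, vals in predictors.items()
        if abs(pearson_r(vals, won_oscar)) >= THRESHOLD}
print(kept)  # {'won_BAFTA': 1.0, 'rt_critics_score': -0.95}
```

The box-office column is too weakly correlated and drops out, while the critics score survives with a negative sign — mirroring the odd finding later on that a lower Rotten Tomatoes score can make a win more likely.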
Statistical Method 3: I emailed Adam Richmond from the late and great film podcast Out of the Canon and asked who he thought would win. One of the simpler techniques I’ve used.
Enough bad explanations of statistics. On with the guesswork!
Best Picture
Most commentators seem to have Birdman down as taking the top award, but both my statistical models have settled on Boyhood while Adam has selected Selma. Adam is clearly delusional, and my regression model (Method 2) is very healthy for this category and ranks Birdman second to last, so I think it is Boyhood’s to lose.
Directing
The old Excel technique gives Richard Linklater a clear lead for Boyhood, but the newer model ignores a lot of the data and fixates on the DGA award, which went to Alejandro González Iñárritu for Birdman. Adam agrees with this prediction, so statistical significance and film knowledge win this round.
Prediction: Alejandro González Iñárritu for Birdman
Actress in a Leading Role
Having won endless awards, Julianne Moore has bamboozled both my models into giving her an insurmountable lead for her performance in Still Alice. Interestingly, a lower Rotten Tomatoes critics score here actually makes you more likely to win the Oscar. Make of that what you will. Adam has gone for Felicity Jones in The Theory of Everything, which I would love but is complete fantasy.
Prediction: Julianne Moore for Still Alice
Actor in a Leading Role
What can I say? Michael Keaton, Michael Keaton, Michael Keaton. Consensus feels like it is behind Eddie Redmayne for The Theory of Everything, but he only takes second place in my old method, whereas the regression analysis and Adam give it to Michael Keaton in Birdman easily. For this award a good Rotten Tomatoes audience score helps you win, suggesting there is something going on with the reception of films and the gender of their lead actors. Analysis for another time perhaps.
Prediction: Michael Keaton for Birdman
Actress in a Supporting Role
Another hat trick of predictions here for Patricia Arquette in Boyhood. In model 1 you need as many awards as possible. Done. In model 2 you just need the BAFTA and SAG awards. Done. In model 3 you need Adam’s blessing. Done. Nobody else has a chance.
Prediction: Patricia Arquette for Boyhood
Actor in a Supporting Role
A clean sweep for J.K. Simmons in Whiplash. He has won all but one of my predictor awards, and my second, stricter methodology only looks at who won the Golden Globe, which Simmons has somewhere on his crowded mantelpiece. Even Adam agrees, which means that Simmons will not only win but deserves to do so.
Prediction: J.K. Simmons for Whiplash
Original Screenplay
My old technique for predicting the screenplay awards was a bit shaky, but with consensus across all three methods we can safely give the Oscar to Wes Anderson for The Grand Budapest Hotel now and not even bother having the ceremony. The key predictors here are the BAFTA and WGA awards but, weirdly, the lower the Box Office the better.
Prediction: Wes Anderson for The Grand Budapest Hotel
Adapted Screenplay
Interestingly, methods 1 and 2 use almost completely different factors here but come out with the same result. Method 2, the sexy new technique using only the most robust predictors, eschews nearly all award ceremonies, preferring to look at just the WGA awards, Box Office takings, and Rotten Tomatoes critics scores. Either way you do the sums, The Imitation Game by Graham Moore takes it. Unless you are Adam, in which case Whiplash by Damien Chazelle is the favourite.
Prediction: Graham Moore for The Imitation Game
Animated Feature Film
This is where the new method falls over drunk. The main awards to look at for the Animated Feature category are the BAFTA and Critics Choice which both went to The Lego Movie making it the clear favourite for the Oscar. This category also has the strongest predictive model from a statistical point of view so should be the easiest for me to get right. The only issue here being that it isn’t even nominated. Bugger. Both Adam and my old technique suggest How to Train Your Dragon 2 whereas the new technique is confused and says it could be any nominee but most likely not How to Train Your Dragon 2. Stupid Oscars.
Prediction: How to Train Your Dragon 2
Foreign Language Film
This is the category that is hardest to predict. Very few factors align with the Oscars in a statistically significant way so my new method simply shrugs and says to look at the Golden Globes but that it really isn’t sure. Most of my models claim to be around 60% accurate, 80% for Animated Film (HA!), but here we’re at 12.6%. Model 1 gives the award to Ida with a narrow lead over Leviathan whereas Model 2 and Adam say it is going to Leviathan. Frankly I have no idea.
Documentary Feature
Another tricky one. Citizenfour has the greatest sheer volume of awards, and has Adam’s vote to boot. The regression model, however, is uncertain, and with some hesitation has settled on Virunga. It is worrying how these statistical techniques have taken on human qualities for me now. I am going to have to go with my gut, and Adam’s brain, on this one.
Prediction: Citizenfour
There you have it. Eleven categories predicted in various ways and with differing levels of accuracy. In the early hours of tomorrow morning we will see who was right, and once again wonder if it really matters.