Wednesday, October 4, 2017

Comparing 2016-17 Preseason Rankings

Quick FYI: Our preseason SI college basketball material begins to roll out next week.

One piece of business I wanted to take care of before the rollout of this year's material is a comparison of last year's preseason rankings.

2017 was a weird year. Every ranking system under the sun had Duke #1, and despite immense talent they did not live up to the hype. Many of the top freshmen (I’m thinking of Dennis Smith Jr. and Markelle Fultz) were great statistically but failed to elevate their teams. Meanwhile, Ken Pomeroy added transfers to his model, but it turned out to be a year where transfers were less impactful than usual, and the addition actually hurt Pomeroy in a number of places (see Syracuse). Finally, ESPN launched a second set of preseason rankings, the BPI preseason rankings, but the new system performed worse than the rankings their own ESPN Insider John Gasaway published at the same time. Gasaway's preseason 351 crushed the BPI preseason 351.

Now, some people will look at the variance in college basketball and say that predicting the season is a fool's errand. And while there is always a lot of uncertainty, that doesn't mean that things like star ratings and AAU stats don't have some predictive power. I happen to believe that all of these rankings are useful, and together they paint a fair picture of preseason expectations. In fact, I personally consider the CBS rankings, which have fallen to the back of the pack in recent years, to be among the most important, because they are based on coaching interviews and opinions, and that's an important additional data point that many of the statistical systems don't capture.

As you will see below, our SI rankings beat CBS and ESPN again last year, so of the major websites, we won for a third year in a row. But the folks behind Torvik Rank actually took the top spot this year. Because Torvik Rank finished 5th in 2016, I'm not quite convinced they have found the special sauce yet; I'd like to see a little more consistency first. But after last year, I highly recommend you follow them and read their work. Some of their ideas about evolving coach effects, i.e. allowing for the possibility that Thad Matta and John Thompson III got worse over time, turned out to be an important part of Torvik Rank's win last year. http://adamcwisports.blogspot.com/2015/09/t-rank-2016-preview-nuts-and-bolts.html

OK, so now onto the numbers. In the table below, I compared Sports Illustrated preseason rankings, the ESPN preseason rankings by John Gasaway, the ESPN BPI preseason rankings, the CBS Sports preseason rankings, Ken Pomeroy’s preseason rankings, David Hess’s preseason rankings, and the Torvik Rank preseason rankings.

Then I calculated the total absolute error for each ranking system. The total absolute error is found by taking the absolute value of the difference between each team’s preseason ranking and its end-of-season Sagarin ranking, then summing those differences across all teams.
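For anyone who wants to replicate the scoring, here is a minimal sketch of that calculation in Python. The file name and column names (preseason_rank, sagarin_rank) are placeholders for illustration, not the actual layout of our data.

```python
# Minimal sketch of the total-absolute-error scoring, assuming a CSV with one
# row per team. The file and column names are hypothetical placeholders.
import pandas as pd

rankings = pd.read_csv("preseason_vs_final.csv")  # hypothetical file

# |preseason rank - end-of-season Sagarin rank|, summed over all 351 teams
rankings["abs_error"] = (rankings["preseason_rank"] - rankings["sagarin_rank"]).abs()
total_absolute_error = rankings["abs_error"].sum()

print("Total absolute error:", total_absolute_error)
```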

For the end-of-season rankings, David Hess asked me to use Sagarin instead of Pomeroy so we were not using Pomeroy to score Pomeroy, but I actually ran the numbers both ways and it didn’t make a major difference this year. The astute reader will notice that switching from a Pomeroy system to a Sagarin system did raise David Hess’s ranking in 2016, however. (I kid, I'm sure that was unintentional.)

This is certainly not the only way to compare the rankings. You may prefer to look at NCAA bids or conference titles or something else. But if you care about where every team is ranked, last year Torvik Rank finished first and Sports Illustrated finished second:



I have intentionally left John Gasaway's rankings out of the second table, since they were only available behind a paywall, but I can assure you, he did in fact finish 3rd.

Onto the new season!

Friday, November 4, 2016

Returning Minutes and Number of Players Who Were Former Top 100 Recruits

In our SI projections, we project every player and lineup to get our team projections.

But I still get lots of requests for a list of returning minutes. That isn't a direct input into our model, though it is something I can easily calculate with the roster data I have.

I cannot say that these numbers will be 100% accurate. We typically don't pull walk-on data unless those walk-ons are expected to play a lot. And we have to make some decisions about certain players. For example, this assumes Coastal Carolina's Shivaughn Wiggins will be able to return in the second semester. But it should be mostly accurate.

I also list the number of RSCI Top 100 recruits on each roster. This includes current RSCI Top 100 freshmen, former RSCI Top 100 recruits (who are now sophomores, juniors, and seniors), and players who we think were ranked incorrectly by RSCI because they changed classes.
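For readers curious how these two columns are built, here is a rough sketch of the idea for a single team. The player names, minute totals, and field names below are made-up placeholders; the real numbers come from our full roster data.

```python
# Rough sketch: returning-minutes percentage and RSCI Top 100 count for one
# team. All names, minute totals, and flags below are hypothetical.

# Minutes played last season, for everyone on last year's roster.
last_season_minutes = {"Player A": 1100, "Player B": 900, "Player C": 600, "Player D": 250}

# This year's roster, with a flag for whether each player was an RSCI Top 100 recruit.
current_roster = {
    "Player A": {"rsci_top100": True},   # returning sophomore, former Top 100 recruit
    "Player C": {"rsci_top100": False},  # returning senior
    "Player E": {"rsci_top100": True},   # incoming Top 100 freshman
}

team_total = sum(last_season_minutes.values())
returning = sum(mins for name, mins in last_season_minutes.items() if name in current_roster)
pct_returning = 100.0 * returning / team_total

# Count every current Top 100 recruit, freshmen and returners alike.
top100_count = sum(1 for info in current_roster.values() if info["rsci_top100"])

print(f"Returning minutes: {pct_returning:.1f}% | RSCI Top 100 players: {top100_count}")
```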



 

Wednesday, October 12, 2016

Comparison of 2016 Preseason Rankings

(Quick note: This column is talking about last year's preseason rankings. The 2016-17 preseason rankings will be released soon.)

Ever since I partnered with Luke Winn at Sports Illustrated, we’ve been doing something rather unique when it comes to projecting the college basketball season. We project every D1 player, project every D1 lineup, and use those lineups to project every D1 team. I tend to think this is a worthwhile exercise regardless of its accuracy, but every year we get questions about how our model has done in the past.

This year, we wrote a column explaining why we think we have had the most accurate projections two years in a row. That said, as anyone who knows statistics will tell you, there are often different ways to spin results. Our approach is to judge the preseason rankings against the final 1-351 ranking of teams by margin of victory (MOV). If you focus on NCAA tournament bids, NCAA wins, or conference wins, one of the other models may beat ours. But our feeling is that since season-long MOV does a good job of predicting those other outcomes, it is the best way to evaluate the rankings.

To judge the models, I simply took the absolute value of the difference between each team’s preseason ranking and the team's final MOV ranking and added up that absolute error for each model. (Taking the sum of the squared errors produced the same ordering of the various ranking systems.)
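As a sanity check on that parenthetical, here is a small sketch of how the two metrics can be compared. The CSV layout, the model column, and the column names are illustrative assumptions, not our actual data.

```python
# Sketch: confirm that total absolute error and sum of squared errors order the
# models the same way. One row per model/team pair; names are hypothetical.
import pandas as pd

df = pd.read_csv("preseason_rankings.csv")  # hypothetical file
df["error"] = df["preseason_rank"] - df["final_mov_rank"]

abs_error = df.groupby("model")["error"].apply(lambda e: e.abs().sum())
sq_error = df.groupby("model")["error"].apply(lambda e: (e ** 2).sum())

# Lower is better for both metrics; last year the two orderings matched.
print(abs_error.sort_values())
print(sq_error.sort_values())
```
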
The column linked above also highlighted some of the teams where the SI.com model did better than the other models and the teams where it fell short. But I was asked on Twitter for a full comparison of all five preseason rankings from last year, so in the interest of transparency, I list them below.

The first thing you will notice when you look at the full list of teams is that there were plenty of teams that surprised everyone. College basketball players are at a developmental stage of their careers, and we only have a small sample of useful statistics, so positive and negative surprises are inevitable every season. Still, all of the models meaningfully improved on simply carrying over the final rankings from the previous year.

One final comment: the final 2016 MOV 1-351 ranking is based on Ken Pomeroy’s final 2016 ranking as it appeared on his website from April to August. I did not update this analysis after he recently decided to tweak his formula heading into this year.