Monday, January 31, 2011

Can we ask for a Delayed Reality Series?

Ken Pomeroy asked a new question on Monday. Should the NCAA committee go out and watch games? But that is not his real question. His real question is whether the NCAA committee should care about style of offense, the quality of guard play, and the quality of post play. And he answers with a resounding “no”. He argues that we should not confuse an already ridiculous process with more irrelevant variables. And I cannot imagine anyone disagreeing with him on that point. (Unless you have an injury situation to analyze, these types of data points should not come into the discussion.)

But I think his general question is actually more fascinating. Should the NCAA committee go out and watch college basketball games? I think the answer is a resounding yes. If the NCAA committee were made up of librarians who never watched college basketball, would they really have a handle on how to rank basketball teams? NCAA committee members should care about basketball and want to watch basketball games whenever possible.

Stats Do Not Tell Us Everything in Basketball

One of the things I love about basketball is that it is not baseball. Not every important piece of information can be found in a sabermetric log. There are a ton of things you can learn by watching games that you just cannot pick up in the box score data. As an example, Kevin Pelton recently posted a fantastic discussion of what stats can and cannot tell us about NBA players. Admittedly, it is not a perfect comparison, because Ken Pomeroy is asking about team quality, not player quality. And I cannot currently think of any team-level statistic that is not already captured in the box score. But I would put it this way: having more information is a good thing. I do not think anyone should fault the committee for attempting to learn more about teams throughout the season.

But the real problem that Ken and others identify is what happens if watching a subset of games causes the selection committee to have biased perceptions. This concern about “human bias” has long been discussed in the context of the BCS. The counterpoint is that even if humans can be biased, at least humans are dynamic. People can put the wrong weight on certain pieces of data, but absent new forms of artificial intelligence, computers can only handle the problems they have encountered before. Formulas cannot anticipate or deal with unique or unusual new situations. I don’t know which is the bigger problem, “personal experience bias” or “new situation bias”. But I do know that computers will never win this argument. No selection process will survive if its conclusions do not mesh with popular opinion. And in the BCS, virtually all the weight has been put on the polls because that is the only system people will accept.

The idea that the NCAA selection committee may be biased by seeing a subset of games does not bother me. I happen to believe that people are quite capable of putting things in context. They can watch St. John’s win on Sunday and know that it is only one data point. Perception bias is a risk I am willing to take in order to have an engaged, aware, and thoughtful committee.

Moreover, if we are really concerned about personal biases, I would love to see the NCAA committee institute a monitoring system. We want the committee members to be free to have open and honest discussions, so I would not release the documentary immediately. But what if CBS recorded the NCAA selection process and agreed to air it 10 years after the tournament occurred? Would that be the most fascinating reality series of all time?

Wouldn’t you love to someday go back and listen to the discussion of where Davidson deserved to be seeded when Stephen Curry had led them on that long winning streak? What about when Memphis earned a 1-seed in 2006 with a questionable resume but a dominant late-season performance?

And wouldn’t it be fun to hear the committee debate the age-old questions? What value should we put on a road win relative to a home win? What value do we put on close losses? What value do we put on how a team has played recently?

Ken Pomeroy may find his formula to be the best way to answer these dilemmas, but I think he would agree this is not a one-dimensional question. People can differ in the weights they put on different factors.
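To make that concrete, here is a purely hypothetical sketch of two committee members weighting the same inputs differently and landing on different orderings. The teams, factors, numbers, and weights are all invented for illustration; nothing here reflects the committee’s actual process or Ken Pomeroy’s ratings.

```python
# Purely hypothetical numbers: two made-up teams and two made-up sets of
# member weights, just to show that reasonable weightings of the same
# factors can flip the ordering.

teams = {
    "Team A": {"road_wins": 7, "close_losses": 5, "recent_form": 0.50},
    "Team B": {"road_wins": 3, "close_losses": 1, "recent_form": 0.90},
}

def score(team, weights):
    """Weighted sum of a team's factor values."""
    return sum(weights[factor] * team[factor] for factor in weights)

# Member 1 prizes road wins; Member 2 prizes recent form.
member_1 = {"road_wins": 1.0, "close_losses": -0.5, "recent_form": 2.0}
member_2 = {"road_wins": 0.3, "close_losses": -0.2, "recent_form": 5.0}

for name, weights in [("Member 1", member_1), ("Member 2", member_2)]:
    order = sorted(teams, key=lambda t: score(teams[t], weights), reverse=True)
    print(f"{name} ranks: {order}")
# Member 1 ranks Team A first; Member 2 ranks Team B first.
```

The specific numbers do not matter. The point is that the ordering flips depending on which factor a member happens to value most, and that is a judgment call, not an arithmetic error.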

Ken’s rightful crusade is to try to remove the RPI from team data sheets, because the RPI is very weakly correlated with anything meaningful. And his push to keep non-essential variables like “style of offense” out of the discussion is important. But I would never discourage the committee from following college basketball and collecting more information, even if watching games introduces the possibility of “subset bias”.
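As a footnote for anyone who has not seen it, the RPI that Ken wants off the data sheets is, as commonly published, nothing more than a weighted blend of winning percentages. The sketch below uses made-up records and omits the home/road adjustments the NCAA later bolted on, but it shows why the number says so little about how well a team is actually playing.

```python
def rpi(wp, owp, oowp):
    """Commonly published RPI formula: 25% team winning percentage,
    50% opponents' winning percentage, 25% opponents' opponents'
    winning percentage. No margin of victory, no recency, no context."""
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# Made-up example: a 20-5 team with a soft schedule can trail a
# 16-9 team that played tougher opponents.
print(rpi(wp=0.800, owp=0.450, oowp=0.500))  # 0.550
print(rpi(wp=0.640, owp=0.580, oowp=0.540))  # 0.585
```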