1) Iowa St.’s Fred Hoiberg – Can he really succeed with no college coaching experience?
2) Clemson’s Brad Brownell – Is it a bad sign that he did not make the NCAA tournament the last three years at Wright St.?
3) Colorado’s Tad Boyle – Can a small conference coach successfully jump directly to a BCS league without a stop at a mid-major first?
Today I want to see if we can learn anything from the historical record. One outcome to examine is wins and losses. For example, how many games have Big Sky coaches won after they jumped to a BCS league? Unfortunately, we have a limited sample of hires, and each school’s situation is unique. As an example, Bill Carmody left Princeton for a Northwestern team that has never made the NCAA tournament. A few years later, John Thompson III left Princeton for a Georgetown team with a rich history and an NCAA title. It might not be fair to label Carmody’s tenure a failure just because he has fewer wins per season than Thompson.
Instead, today I am going to focus on whether coaches are meeting expectations. To do this I will use the “termination” model I presented last week. The basic idea is simple: if a coach keeps his job, he is meeting expectations; if he is fired, he is not. (A toy code sketch of this survival setup follows the three questions below.) With that framing, I can ask three questions that may help us evaluate former hires:
1) When a BCS school hired someone who was not a D1 head coach, was he more likely to get fired?
2) When a BCS school hired someone without a recent NCAA appearance, was he more likely to get fired?
3) When a BCS school hired someone directly from a small conference, was he more likely to get fired?
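For readers who like to see the mechanics, here is a toy sketch of that survival setup in Python. It uses the third-party lifelines package and a made-up table with hypothetical column names (tenure_years, fired, d1_head_coach, ncaa_last_year); it is not my actual database or my exact model, just the general shape of the approach.

```python
import pandas as pd
from lifelines import KaplanMeierFitter

# Toy data: one row per BCS hire (all column names and values are invented).
# tenure_years   = seasons coached at the hiring school
# fired          = 1 if fired, 0 if still employed or left voluntarily (censored)
# d1_head_coach  = 1 if hired as a D1 head coach, 0 if assistant/NBA/unemployed/non-D1
# ncaa_last_year = 1 if the coach made the NCAA tournament the year before the hire
hires = pd.DataFrame({
    "tenure_years":   [3, 7, 2, 10, 5, 4, 1, 6, 2, 8, 3, 9],
    "fired":          [1, 0, 1, 0,  1, 1, 1, 0, 0, 0, 1, 0],
    "d1_head_coach":  [1, 1, 0, 1,  0, 1, 0, 1, 0, 1, 0, 0],
    "ncaa_last_year": [1, 0, 0, 1,  1, 0, 0, 1, 0, 1, 0, 1],
})

# Kaplan-Meier estimate: probability a hire still has his job after t seasons.
kmf = KaplanMeierFitter()
kmf.fit(hires["tenure_years"], event_observed=hires["fired"], label="all BCS hires")
print(kmf.survival_function_)
```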
Today I am focusing only on BCS coaching hires made after the 1984-1985 season. This limits my sample substantially, to just 233 coaches. This includes:
-26 from small schools
-48 from mid-majors
-74 from high majors and other BCS schools
-85 from the assistant coaching ranks, the NBA, unemployment, or non-D1 employment
When a BCS school hired someone who was not a D1 head coach, was he more likely to get fired?
The first graph presents the raw survival data. The blue data are hires of D1 head coaches. The red data are hires from the assistant coaching ranks, the NBA, unemployment, or non-D1 employment.
As you may remember from last week’s post, the format of my database includes a number of interim head coaches who are usually assistants. This leads to a large peak in assistant coaches who get fired after 1 year and a large drop in the red line at year 1. But since these are not official hires, I do not want them to skew the results. Thus I’m going to drop all one-year coaches and estimate the rest of the hazard curve. The model also includes controls for NCAA appearances, as discussed last week.
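As a rough illustration of that cleanup step, here is how the one-year coaches could be dropped from the toy table above before estimating the rest of the curve (again, invented data, not my real sample):

```python
from lifelines import KaplanMeierFitter

# Drop one-year coaches so interim appointments do not distort the hazard estimate.
non_interim = hires[hires["tenure_years"] > 1].reset_index(drop=True)

# Separate survival curves for the two groups in the graph:
# blue = hires who were D1 head coaches; red = assistants, NBA, unemployed, non-D1.
for label, grp in non_interim.groupby("d1_head_coach"):
    kmf = KaplanMeierFitter()
    kmf.fit(grp["tenure_years"], event_observed=grp["fired"],
            label="D1 head coaches" if label == 1 else "other hires")
    print(kmf.survival_function_)
```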
The next graph shows an estimate of the probability that a coach will be fired at any point in time. Again, the blue data are hires of D1 head coaches and the red data are the other hires.
The results are only borderline statistically significant, but they do match expectations. Coaches hired from the NBA or the assistant ranks, who have not recently been D1 head coaches, are more likely to struggle and be fired.
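One standard way to produce a firing-hazard estimate like this, with the NCAA-appearance control included, is a Cox proportional hazards model. The sketch below runs it on the toy table from above; it is not my exact specification, just an illustration of the technique.

```python
from lifelines import CoxPHFitter

# Hazard of being fired as a function of whether the hire was a D1 head coach,
# controlling for a tournament appearance in the year before the hire.
cph = CoxPHFitter()
cph.fit(non_interim, duration_col="tenure_years", event_col="fired")
cph.print_summary()
# A positive coefficient on d1_head_coach would mean D1 head-coach hires face a
# higher firing hazard; a negative one means they are safer than the other hires.
```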
When a BCS school hired someone without a recent NCAA appearance, was he more likely to get fired?
Next I include a control for whether the coach made the NCAA tournament in the year prior to taking his current job. The resulting graph looks very similar to the graph above, and in fact the result is driven by the same pattern: it is the lack of success by assistants and NBA hires that makes previous NCAA tournament appearances look meaningful.
When I compare only D1 head coaching hires, the tournament effect disappears. For D1 head coaches who move to new programs, whether or not they made the tournament the previous year has no measurable effect on their future job security.
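On the toy data, that comparison amounts to restricting the sample to D1 head-coach hires and testing the hypothetical ncaa_last_year flag, for example with a log-rank test:

```python
from lifelines.statistics import logrank_test

# Keep only hires who were already D1 head coaches, then split on the
# hypothetical "made the tournament last year" flag.
d1_only      = non_interim[non_interim["d1_head_coach"] == 1]
with_ncaa    = d1_only[d1_only["ncaa_last_year"] == 1]
without_ncaa = d1_only[d1_only["ncaa_last_year"] == 0]

result = logrank_test(
    with_ncaa["tenure_years"], without_ncaa["tenure_years"],
    event_observed_A=with_ncaa["fired"],
    event_observed_B=without_ncaa["fired"],
)
print(result.p_value)  # a large p-value = no detectable tournament effect among D1 hires
```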
Now this doesn’t mean that any D1 coach could just step into a BCS job and do well. But it does say that the candidates who get hired without a recent NCAA tournament appearance have demonstrated their ability in other ways. Brad Brownell may not have a recent signature NCAA tournament appearance, but he’s proven he can win at Wright St. regardless. And Clemson fans should not worry that a second-place Horizon League finish is a permanent black mark on their new coach.
I tend to think people jump on the bandwagon a little too much based on one or two tournament upsets. I prefer to look at the larger body of work for any coach. But there is some information in an NCAA tournament run. Coaches who make the NCAA tournament and win games there do demonstrate something about their ability to build a winning team.
And in fact when I control for NCAA wins in addition to appearances, a run in the tournament does predict future success to some degree. But based on my small sample and the large variation in coaching outcomes, the results remain statistically insignificant.
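Adding tournament wins is just one more covariate in the same kind of model. Here is a self-contained sketch with invented numbers (the ncaa_appear and ncaa_wins columns are hypothetical, and with a toy sample this small the coefficients would not mean much, which mirrors my significance problem):

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy table of D1 head-coach hires, with an appearance flag and tournament wins
# from the season before the hire (all values invented for illustration).
d1_hires = pd.DataFrame({
    "tenure_years": [3, 7, 10, 4, 6, 8, 5, 2, 9, 3],
    "fired":        [1, 0, 0,  1, 0, 0, 1, 1, 0, 0],
    "ncaa_appear":  [1, 0, 1,  0, 1, 1, 0, 0, 1, 1],
    "ncaa_wins":    [1, 0, 2,  0, 0, 3, 0, 0, 2, 1],
})

cph = CoxPHFitter()
cph.fit(d1_hires, duration_col="tenure_years", event_col="fired")
cph.print_summary()
# A negative coefficient on ncaa_wins would mean tournament wins lower the firing hazard.
```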
When a BCS school hired someone directly from a small conference, was he more likely to get fired?
Next I break out small schools, mid-majors, and high majors and see whether any group is more or less likely to keep its job. I find that there is not a statistically significant difference among the three groups.
To directly address my small school question, here is a graph of the raw survival data comparing small schools (in red) to the others (in blue).
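For a three-way comparison like this, a multivariate log-rank test is one natural tool. Here is a sketch on the toy table, with tier labels I made up purely for illustration:

```python
from lifelines.statistics import multivariate_logrank_test

# Hypothetical tier of the school each coach was hired from.
non_interim["tier"] = ["small", "mid", "high", "small", "mid", "high",
                       "small", "mid", "high", "small", "mid"]

result = multivariate_logrank_test(
    non_interim["tenure_years"], non_interim["tier"], non_interim["fired"]
)
print(result.p_value)  # a large p-value = no detectable difference among the three tiers
```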
Again, this does not mean that any small school coach could step into a BCS job and thrive. But the small school coaches that are selected are often quality candidates.
And believe it or not, you can succeed even if you come from a small conference and did not make the NCAA tournament. Did you know that both Mike Montgomery and Ben Howland were initially hired into BCS leagues directly from the Big Sky conference? And did you know that neither played in a post-season tournament the year before he was hired? Tad Boyle seems like a bit of a reach, but if he succeeds after being pulled from Northern Colorado, it would certainly not be unprecedented.
More thoughts
I still question the Boyle hire for another reason. In four years at Northern Colorado, Tad Boyle’s teams have played mostly atrocious defense. I realize the defense improved somewhat in his final year, but I think a quality defensive coach would have made more of an imprint in four years.
But is this fear valid? This is also a testable hypothesis: do coaches with horrible adjusted defense at their previous school struggle in BCS leagues? Sadly, we only have seven years of tempo-free stats on kenpom.com, so we do not have a large enough sample to do this issue justice.
Also, while the above numbers make the Fred Hoiberg hire appear to be the most suspect, that doesn’t necessarily mean anything. Just because certain non-traditional hires have failed in the past does not mean any specific hire will not work out. Any of these coaches could still prove to be great or prove to be mediocre.
Boring Data Notes:
-When defining mid-majors, I use a variation of Kyle Whelliston’s red line. Small schools are in conferences with average men’s basketball budgets under $1.4 million, mid-majors are in conferences from $1.4 to $2.4 million, and high majors are in conferences at $2.5 million and above.
-I forgot to mention it last week, but much of the data is censored. Obviously we do not have data after 2010, so we do not know how things will end for many coaches. But the model accounts for this: it only uses each coach to estimate the shape of the curve for the years in which that coach is actually observed.
-Also, I am only estimating the probability of being fired, not the probability of taking a new job voluntarily. Coaches who leave voluntarily are also treated as censored.
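In code terms, censoring just means the event flag is 1 only when we actually saw the coach get fired; coaches still employed when the data end in 2010 and coaches who left on their own get a 0 and contribute only the seasons we observed. A tiny sketch with invented names and outcomes:

```python
import pandas as pd

# Each coach contributes data only for the seasons we observed him.
outcomes = pd.DataFrame({
    "coach":  ["Coach A", "Coach B", "Coach C", "Coach D"],
    "end":    ["fired", "still employed in 2010", "left voluntarily", "fired"],
    "tenure": [4, 6, 5, 3],
})

# The event is "fired"; everyone else is censored at his last observed season.
outcomes["fired"] = (outcomes["end"] == "fired").astype(int)
print(outcomes)
```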