There is nothing inherently wrong with the NCAA selection process. It does not require legislation to make it better. It does not require a tweak of some BCS formula. The people on the committee have the power to improve the process at any point. Unfortunately, not everyone agrees on what should be changed. Here are four improvements I typically see mentioned, along with my rough sense of how much support each one has:

1. Stop using the RPI as an organizational tool – Support 95%.

While I think almost everyone believes there are flaws in using the RPI as an organizational tool (and I will point out this year’s flaws below), one big issue we have not considered is what would replace the RPI. The problem is that there is no consensus about how to rank teams. As of Monday, Middle Tennessee had a Pomeroy Ranking of 31. But Sagarin’s Predictor ranked Middle Tennessee 48th because that measure caps margin-of-victory. ESPN’s BPI, which uses a similar cap, had the Blue Raiders at 46.

But the NCAA probably wouldn’t consider any of these measures when ranking teams, because they all include margin-of-victory, and the NCAA has made clear that it does not want to encourage teams to run up the score. Even in college football, a sport with far less information, where margin-of-victory is critical to ranking teams accurately, the NCAA won’t use it. So if the NCAA replaced the RPI with anything, we should expect it to be a metric like Sagarin’s ELO Chess, which ignores margin-of-victory entirely. As of Monday, ELO Chess had Middle Tennessee ranked 59th in the country.
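For what it’s worth, the margin-of-victory cap that the Predictor and BPI apply is a simple idea: before a game feeds into the rating, the scoring margin is clamped so a blowout stops earning extra credit past some threshold. I don’t know the exact caps Sagarin and ESPN use, so the 20-point figure in this little sketch is purely a placeholder:

```python
# Toy illustration of capping margin-of-victory before it feeds a rating.
# The 20-point cap is a placeholder, not the actual number any system uses.

def capped_margin(points_for: int, points_against: int, cap: int = 20) -> int:
    """Clamp a game's scoring margin so blowouts stop earning extra credit."""
    margin = points_for - points_against
    return max(-cap, min(cap, margin))

print(capped_margin(95, 60))  # 20 -- a 35-point blowout...
print(capped_margin(81, 61))  # 20 -- ...counts the same as a 20-point win
print(capped_margin(70, 72))  # -2 -- a close loss registers as a small negative
```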

I believe we should eliminate the RPI, because almost any of the established ranking systems would be an improvement. But replacing the RPI won’t make this complaint go away entirely. There will always be some ambiguity about how to measure quality wins.
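As a reminder of what we would be replacing: the RPI is just a weighted blend of winning percentages, 25% a team’s own winning percentage, 50% its opponents’, and 25% its opponents’ opponents’. A stripped-down sketch of that calculation, with made-up teams and ignoring the home/road weighting the real men’s formula applies to the first term, looks like this:

```python
# Stripped-down RPI: 25% winning percentage (WP), 50% opponents' winning
# percentage (OWP), 25% opponents' opponents' winning percentage (OOWP).
# Teams and results are made up; the real formula also weights home and road
# results differently and excludes games against the rated team from OWP.

from collections import defaultdict

games = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "B")]  # (winner, loser)

wins = defaultdict(int)
played = defaultdict(list)
for winner, loser in games:
    wins[winner] += 1
    played[winner].append(loser)
    played[loser].append(winner)

def wp(team):
    return wins[team] / len(played[team])

def owp(team):
    return sum(wp(opp) for opp in played[team]) / len(played[team])

def oowp(team):
    return sum(owp(opp) for opp in played[team]) / len(played[team])

def rpi(team):
    return 0.25 * wp(team) + 0.50 * owp(team) + 0.25 * oowp(team)

for team in sorted(played, key=rpi, reverse=True):
    print(f"{team}: RPI {rpi(team):.3f}")
```

Notice that the scores of the games never appear anywhere in that calculation; the RPI only knows who won, who lost, and who played whom.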

2. Stop giving teams so much credit for close wins and penalizing them for close losses – Support 25%.

I’m certain that on Selection Sunday we will hear someone complain about how Florida is criminally under-seeded and New Mexico is criminally over-seeded. Ken Pomeroy has expressed this view most eloquently, arguing that the NCAA committee’s goal is supposed to be to pick and seed the “Best” teams.

But I think a lot of people view the mandate differently. I would argue the vast majority of college basketball experts believe teams should be selected and seeded based on what they have accomplished, not how impressive they were in victory. To put it bluntly, the question is “who did you beat?”, not “by how much?”

No one believes that Arizona should be given the Pac-12 title this year because they had the best season-long margin-of-victory numbers. No one believes you should take the co-Big East title away from Marquette because they won more close games than Louisville and Georgetown. You play to win the game. (Even Pomeroy has admitted that he enjoys following conference races even if they may not crown the “best” team.)

John Gasaway has argued for selecting teams based on accomplishment, and seeding teams based on quality (i.e., margin-of-victory). And while I completely respect this viewpoint, I think it is far from the consensus. To win a national title (or, for most teams, to get to the Final Four), you have to over-achieve. No team should be favored to win 6 consecutive games (and few should be favored to win 4 consecutive games) in March. The odds are simply stacked against that. But I would argue that some teams over-achieve in the regular season, and some teams over-achieve in the post-season. Even if Marquette has over-achieved in Big East play relative to their ability, I think this should earn them an easier path in the NCAA tournament. To seed them as a 6-seed because their Pomeroy Ranking is 24th, when they won the Big East regular-season title and beat a number of quality teams along the way, seems wrong to me.

The logical counter-argument is that by ignoring margin-of-victory, we often punish good teams by giving them less favorable tournament draws. But at this point, I would say the popular support favors basing things on “who you beat”, even if that isn’t the perfect measure of who is best.

3. Stop using Non-Conference Strength-of-Schedule as a criterion – Support 30%.

Every year the committee tends to leave out one bubble team because that team has a weak non-conference schedule. And some people have argued that this is dumb, since it has nothing to do with picking the “Best” teams for the field.

I tend to endorse this criterion because I want to see more marquee non-conference games. But I agree it is an imprecise tool for encouraging non-conference scheduling. You can’t punish the most egregious teams (like Indiana) without destroying the integrity of the tournament. And many teams’ non-conference schedules simply don’t turn out the way they planned. When Georgetown scheduled UCLA, Texas, and Tennessee this year, they had every reason to expect they were putting together one of the tougher non-conference schedules in the country. But when all three of those teams under-achieved relative to preseason expectations (due mostly to suspensions and injuries), it seems silly to claim that Georgetown’s non-conference scheduling was weak.

There is a legitimate argument for including this criterion in the process. When teams from non-elite conferences have weak non-conference schedules, they can be almost impossible to evaluate. This year, St. Mary’s falls in that category. I have no doubt, after the way they dominated the WCC, that St. Mary’s has a solid team this year. But since their non-conference schedule didn’t include enough Top 100 teams, we have very little information with which to evaluate the Gaels.

(Note: We could look at their margin-of-victory in those non-conference cupcake games, but based on points 1 and 2 above, I don’t see the NCAA moving in that direction. And if you aren’t emphasizing margin-of-victory, wins over teams ranked 101st or worse are almost useless for evaluating a team.)

Unfortunately, there just isn’t enough information in St. Mary’s profile, and the committee is therefore very reluctant to put a team like that in the field. Now, this doesn’t explain why teams like Virginia or Iowa would be excluded on this criterion (since we have more information on those teams), but I suspect there is an equity concern: you don’t want to single out small-conference teams for weak scheduling, so the committee punishes weak non-conference schedules in all conferences, big or small.

This seems like quite a stretch to justify using NCSOS as a criterion, and there probably are better ways to encourage teams to create tougher schedules. But for now, I don’t see the popular support to abandon it.

4. Include more small-conference teams, exclude teams with losing conference records – Support 45%.

I don’t think there is anyone out there who is not impressed with Middle Tennessee’s 19-1 regular season record. I think we would all like to see great seasons rewarded. No one really wants to see some middling major conference team make the tournament.

But the middling major-conference teams are the ones who really generate the eyeballs and revenue for the NCAA. You know the point about how Congress has a 7 percent approval rating overall, but most voters give their own congressman a 60 percent approval rating? This is a bit of the dilemma with including more small-conference teams. While collectively we all might prefer Middle Tennessee to, say, Minnesota or Cincinnati, the folks in Minnesota and Cincinnati sure don’t feel that way. And I guarantee those schools bring in more revenue and have more clout in shaping the NCAA selection process than the small schools do.

I also think we need to think hard about what it means to have a great season. Middle Tennessee’s season was unquestionably special. But Minnesota’s was pretty special too. They beat Indiana, Wisconsin, and Michigan St., all of them great teams. Should Minnesota really be punished because the Big Ten happened to be one of the deepest leagues of the past 10 years? How would Minnesota have fared in Conference USA this season, considering how they dominated Memphis on a neutral floor?

I think I’m in the camp that says teams with losing conference records shouldn’t get at-large bids. But I don’t think we should put too much weight on raw wins and losses when evaluating teams either. As noted earlier, by almost any ranking system other than Pomeroy’s, Middle Tennessee is at best a bubble team.

I also think that UConn’s NCAA title run from a few years ago says we shouldn’t overlook teams from great leagues that have mediocre records.

Remove the RPI as an Organizational Tool

What would happen if we used the consensus of the Computer Rankings instead of the RPI as an organizational tool? How would that change the definition of a Top 50 win this year?

| RPI underrates these wins | Conf | RPI Rank | Pomeroy Rank | Sagarin ELO Chess | Sagarin Predictor | ESPN's BPI |
|---------------------------|------|----------|--------------|-------------------|-------------------|------------|
| Villanova                 | BE   | 52       | 49           | 41                | 45                | 59         |
| Ole Miss                  | SEC  | 56       | 45           | 44                | 40                | 40         |
| Denver                    | WAC  | 57       | 28           | 52                | 43                | 51         |
| Stanford                  | P12  | 64       | 44           | 47                | 46                | 41         |
| Baylor                    | B12  | 61       | 41           | 30                | 30                | 47         |
| Virginia                  | ACC  | 66       | 22           | 33                | 27                | 38         |
| Iowa                      | B10  | 75       | 30           | 37                | 34                | 48         |

| RPI overrates these wins | Conf | RPI Rank | Pomeroy Rank | Sagarin ELO Chess | Sagarin Predictor | ESPN's BPI |
|--------------------------|------|----------|--------------|-------------------|-------------------|------------|
| Louisiana Tech           | WAC  | 46       | 79           | 84                | 86                | 75         |
| Butler                   | A10  | 21       | 55           | 42                | 57                | 49         |
| Southern Miss            | CUSA | 35       | 57           | 67                | 59                | 62         |
| Temple                   | A10  | 38       | 65           | 49                | 55                | 56         |
| Boise State              | MWC  | 37       | 48           | 55                | 60                | 45         |
| La Salle                 | A10  | 41       | 51           | 53                | 53                | 53         |
| California               | P12  | 48       | 54           | 51                | 52                | 52         |

I include both Sagarin measures in the tables because they are computed independently of one another: ELO Chess ignores margin-of-victory, while the Predictor uses it. Denver is a little sketchy to include on the first list, because only Kenpom.com really loves them, but I include them anyway.

Now, what would happen if we used the computer rankings’ definition of a Top 50 win instead of the RPI’s? It turns out that 80 teams would gain or lose a Top 50 victory. But the most impacted teams are as follows:

St. Louis and Xavier would each lose three Top 50 wins from their resumes, and New Mexico, UNLV, Memphis, and Charlotte would each lose two.

Meanwhile, Indiana, Pittsburgh, Missouri, Minnesota, Kansas St., Iowa St., Colorado, Oklahoma, Providence, and USC would each gain two Top 50 wins on their resumes.

A lot of these teams are on the bubble, and I think they would be thrilled to have another Top 50 win. Would anybody be even remotely concerned about Minnesota’s late-season swoon if they had two more Top 50 wins to their name? Wouldn’t Iowa St. be a more obvious pick with two more Top 50 wins on their ledger? Using the RPI as an organizational tool has very real costs.
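If you want to run this recount yourself, the bookkeeping is simple: decide which opponents count as Top 50 under each definition, then diff the counts for every team. Here is a minimal sketch using two rows from the tables above; treating “average computer rank of 50 or better” as the consensus cutoff is my own simplification, not necessarily the right way to combine the rankings:

```python
# Minimal sketch of the Top 50 recount. The two example rows come from the
# tables above; the "average computer rank <= 50" cutoff is a simplification.

from statistics import mean

# team -> (RPI rank, [Pomeroy, Sagarin ELO Chess, Sagarin Predictor, ESPN BPI])
ranks = {
    "Virginia":       (66, [22, 33, 27, 38]),
    "Louisiana Tech": (46, [79, 84, 86, 75]),
    # ... the rest of Division I would go here
}

rpi_top50       = {t for t, (rpi_rank, _) in ranks.items() if rpi_rank <= 50}
consensus_top50 = {t for t, (_, computers) in ranks.items() if mean(computers) <= 50}

def top50_win_swing(beaten_opponents):
    """Net Top 50 wins gained (+) or lost (-) when the consensus definition
    replaces the RPI definition, given the opponents a team has beaten."""
    gained = sum(1 for opp in beaten_opponents if opp in consensus_top50 - rpi_top50)
    lost   = sum(1 for opp in beaten_opponents if opp in rpi_top50 - consensus_top50)
    return gained - lost

print(top50_win_swing(["Virginia"]))        # +1: Virginia becomes a Top 50 win
print(top50_win_swing(["Louisiana Tech"]))  # -1: Louisiana Tech no longer counts
```

The cutoff rule is the judgment call here; a simple average of ranks is the easiest thing to code, but medians or any single system would slot into the same bookkeeping.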

Of course, as stated at the top, the committee can make these adjustments on the fly, and notice that beating Baylor, Virginia, and Iowa is an accomplishment this year. But given the huge amount of information they have to process, I would not count on it. The time to replace the RPI as an organizational tool is now.