How The BCS Rankings Are Calculated

The BCS rankings can be sort of confusing. A lot of people understand that they're a mix of human polls and computer rankings, but I think the details get clouded by generic coverage of the BCS on television, and even on the Internet. Some people have a keen understanding of how the BCS ranks teams, but they are probably in the minority. Since I've started providing a weekly post about the BCS rankings, I've decided I should also attempt to explain, in detail, how the rankings are put together.

Let's start with a little bit of history. The seeds of the BCS were planted in 1990 by the Colorado Buffaloes and the Georgia Tech Yellow Jackets. This was the first of two consecutive seasons in which there was a split national championship. In this case, the AP voters selected Colorado, while Georgia Tech beat out Colorado in the UPI poll by a single point. The very next season (1991), there were two undefeated teams - Miami and Washington - and each was ranked #1 in one of the national polls.

Therefore, in 1992, college football cooked up something called the "Bowl Coalition". This involved the SEC, Big 8, SWC, ACC, and Big East, along with Notre Dame. The design was to get the "top two teams" to play in a national championship game and "have the results be decided on the field". This wound up only lasting through the 1994 season as it had some problems. This was where the policy of exclusion began. The Big 10 and Pac 10 were excluded from the Bowl Coalition because their champions were tied into the Rose Bowl. The WAC, Big West, and MAC were also excluded by virtue of weak schedules. The 1994 season was really an indictment of the Bowl Coalition as Penn State went undefeated, but they couldn't play Nebraska for the national championship because they had to go to the Rose Bowl. Nebraska and Penn State both won their bowl games, and Nebraska was crowned the national champion.

We finish up the history and move on to the calculations after the jump.

The Bowl Coalition was followed by the Bowl Alliance, which was in effect from 1995 to 1997. This was a more focused agreement between the Fiesta, Sugar, and Orange bowls that, once again, tried to match the two best teams to compete for the national title. Its last year was marred by a split national championship between Michigan and Nebraska. I also find it interesting that BYU was, in effect, the Boise State of the mid-1990s. They had consistently won conference championships, and in 1996 they finished the regular season ranked #5 in the AP Poll. Despite that, they were excluded from a Bowl Alliance bowl because their conference was not part of the alliance. They wound up playing Kansas State in the Cotton Bowl, winning, and finishing the season 14-1. LaVell Edwards testified in Congress (sound familiar?) about how the Bowl Alliance was violating antitrust laws.

And so the Bowl Alliance gave way to the BCS in 1998, a system that we all know makes it possible for a non-automatic qualifier to reach a top-tier bowl game. However, it also makes it fairly difficult for the likes of TCU, Boise State, Utah, and so forth to compete for the national title.

Non-AQ Appearances in BCS

Boise State (2-0): 2007 Fiesta (W), 2010 Fiesta (W)
Utah (2-0): 2005 Fiesta (W), 2009 Sugar (W)
Hawaii (0-1): 2008 Sugar (L)
TCU (0-1): 2010 Fiesta (L)

As you can see, these teams made their first appearance in the BCS in 2005, after a full 7 years of being completely shut out. From 2005 onward, they've had 6 BCS bids, an average of one per year. In 2010, though, TCU was forced to play Boise State; in all other BCS games (against automatic qualifiers), the non-AQ teams are 3-1, with the only loss being Hawaii's at the hands of the Georgia Bulldogs. Enough of the history lesson, let's dive into how the BCS is calculated.

The BCS Formula

The BCS employs the Borda count method of voting to select the two teams that play in the national championship game. Here is Wikipedia's description of the Borda count:

The Borda count is a single-winner election method in which voters rank candidates in order of preference. The Borda count determines the winner of an election by giving each candidate a certain number of points corresponding to the position in which he or she is ranked by each voter. Once all votes have been counted the candidate with the most points is the winner. Because it sometimes elects broadly acceptable candidates, rather than those preferred by the majority, the Borda count is often described as a consensus-based electoral system, rather than a majoritarian one.

I thought it was pretty interesting that the BCS formula employs the same voting methodology as presidential elections in the small island nation of Kiribati. The part about electing broadly acceptable candidates rather than those preferred by the majority is important, and we'll get into that later.
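
To make the Borda idea concrete, here's a minimal sketch in Python of a Borda count the way the BCS polls run it - a #1 vote is worth 25 points, a #2 vote 24 points, and so on, with the point totals deciding the order. The ballots and team names below are made up purely for illustration.

    # A minimal Borda count sketch (hypothetical ballots, not real poll data).
    # Each voter submits an ordered ballot; a team gets 25 points for a #1 vote,
    # 24 for #2, and so on, and the point totals decide the order of the poll.
    from collections import defaultdict

    def borda_points(ballots, top_n=25):
        points = defaultdict(int)
        for ballot in ballots:
            for position, team in enumerate(ballot, start=1):
                points[team] += top_n - position + 1  # #1 -> 25, #2 -> 24, ...
        return dict(points)

    # Three toy ballots ranking only a handful of teams.
    ballots = [
        ["Oklahoma", "Oregon", "Boise State"],
        ["Oregon", "Oklahoma", "Boise State"],
        ["Oklahoma", "Boise State", "Oregon"],
    ]

    for team, pts in sorted(borda_points(ballots).items(), key=lambda kv: -kv[1]):
        print(team, pts)   # Oklahoma 74, Oregon 72, Boise State 70

Both human polls work exactly like this, just with a full Top 25 on every ballot.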

Anyway, the current BCS formula employs three components, all equally weighted: the Harris Poll, the Coaches Poll, and the computer rankings.

The Coaches Poll involves 58 voters. Each voter ranks a Top 25. The #1 team on each ballot receives 25 points, the #2 team receives 24 points, and so on. This is why teams in the "receiving votes" category still have point totals. The BCS takes each team's point total and divides it by 1450, the number of points a team would have if it were ranked #1 on every ballot.

The Harris Poll involves 114 voters. The process is the same as the Coaches Poll, except that each team's point total is divided by 2850.
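
If you'd rather see those two divisions in code, here's a rough sketch using made-up point totals. The divisor is just 25 times the number of voters: 1450 for the Coaches Poll and 2850 for the Harris Poll.

    # Turn raw poll points into the 0-to-1 component the BCS uses
    # (the point totals below are hypothetical, not real ballots).
    def poll_component(team_points, num_voters):
        max_points = 25 * num_voters  # points for a unanimous #1 ranking
        return team_points / max_points

    coaches = poll_component(1320, num_voters=58)    # 1320 / 1450 ~ 0.910
    harris = poll_component(2565, num_voters=114)    # 2565 / 2850 = 0.900
    print(f"{coaches:.3f} {harris:.3f}")             # 0.910 0.900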

The Computer Rankings: a Top 25 is determined for each computer poll (described below). The #1 team in each ranking receives 25 points, the #2 team receives 24 points, and so on. The lowest and highest computer ranking for each team are thrown out, and the remaining points are added together and divided by 100, the number of points a team would have if it were ranked #1 in the remaining 4 computer rankings (there's a small sketch of this calculation after the list below). As you would expect, the exact methodology behind specific computer polls is sometimes a guarded secret, but here's what we know about each:

  • Sagarin Ratings: The BCS uses the set of rankings that Sagarin labels ELO Chess, which one might assume is modeled after the Elo rating system used to rank chess players. Without getting into the complicated math, this system considers wins and losses the most important component. If a team wins, then it played at a higher level than its competition. Ratings are adjusted by calculating an expected number of wins for each team: if a team wins more than expected, its rating is bumped up, and vice versa. Teams move up the rankings by winning against strong opponents, and where the game is played is a factor. The rankings start off the season with a Bayesian prior that factors in preseason expectations; around mid-season that prior is dropped and the rankings become unbiased.
  • Anderson and Hester: What I was able to find out is available on their home page. Teams are rewarded for playing and beating quality opponents, and margin of victory is not factored in at all. The rankings are not released until after Week Five, so there is no inherent bias at the beginning of the season. Strength of schedule is judged by opponents, teams the opponents have played, and the strength of the conference of all teams involved. 
  • Richard Billingsley: I am going to do no better job of explaining these ratings than the man does himself, so you can read that here. He does several unconventional things. Billingsley factors in starting position from the preseason polls, which winds up making a minor difference in the rankings during the season. He also designed his system so that an opponent's strength is locked in at the time you play them: if you beat the #2 team in the country and they tank the rest of the season, you are still credited with beating the #2 team in the country. He does use strength of schedule, but it is based on opponents' rank and rating rather than their wins and losses. Like Sagarin, he factors in the location of the game. Billingsley also penalizes losses, so being undefeated is very important in this ranking system.
  • Colley Matrix: Wes Colley not only provides a basic description of his ratings, but also a mathematical one. Yay transparency! Strength of schedule and results on the field play a big role in these rankings. Score margin does not matter. There is no weighting of opponents' winning percentage, or their opponents' winning percentage.
  • Massey Ratings:  Ken Massey also provides a description of his ratings. Only the score, venue, and date of game are used to compute the Massey Ratings. From Massey:
    In essence, each game "connects" two teams via an equation. As more games are played, eventually each team is connected to every other team through some chain of games. When this happens, the system of equations is coupled and a computer is necessary to solve them simultaneously. The ratings are totally interdependent, so that a team's rating is affected by games in which it didn't even play. The solution therefore effectively depends on an infinite chain of opponents, opponents' opponents, opponents' opponents' opponents, etc. The final ratings represent a state of equilibrium in which each team's rating is exactly balanced by its good and bad performances.
    He applies a Bayesian correction, which rewards teams that win consistently, no matter how they do it.
  • Peter Wolfe: He employs a Bradley-Terry model that uses a maximum likelihood estimate. Each team is assigned a rating that is used to predict the expected result against a given opponent. The probability of a team's results thus far is calculated by multiplying together the probabilities of each individual game result, and the ratings he produces are simply the ratings that maximize that probability across all teams. He uses wins and losses, not victory margin.
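
Here's the sketch of the computer calculation promised above, using made-up rankings for a single team across the six systems: convert each ranking to points (25 for #1, 24 for #2, and so on, with zero if a system leaves the team out of its Top 25, as I read the rules), throw out the highest and lowest point values, and divide the sum of the remaining four by 100.

    # Computer component for one team (the rankings are made up for illustration).
    def computer_component(rankings, top_n=25):
        # Convert each computer ranking to points; unranked teams get zero.
        points = sorted(top_n - r + 1 if r <= top_n else 0 for r in rankings)
        trimmed = points[1:-1]             # drop the lowest and highest values
        return sum(trimmed) / (top_n * 4)  # 100 = a unanimous #1 across four systems

    # Hypothetical rankings from Sagarin, Anderson & Hester, Billingsley,
    # Colley, Massey, and Wolfe, in no particular order.
    print(computer_component([1, 3, 3, 4, 5, 6]))   # -> 0.89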

You may have noticed that I wrote things like "teams are rewarded for playing and beating quality opponents, and margin of victory is not factored in at all" an awful lot. The BCS specifically chose computer rankings that do not factor in victory margin. That's because it used to include the New York Times, Dunkel, Matthews, and Rothman ratings, which did factor in victory margin, and that was not terribly popular.

Back to the calculations. As you may have surmised, taking each team's points in each component and dividing by a fixed number produces a fractional score for every component. You may have a team with a score of 0.910 in the Coaches Poll, 0.900 in the Harris Poll, and 0.885 in the computer rankings. You simply average these to come up with the final BCS score. In this example, the BCS score would be 0.898.
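
And, to close the loop, the final averaging step with the example numbers from the paragraph above:

    # Final BCS score: a straight average of the three equally weighted components.
    coaches, harris, computers = 0.910, 0.900, 0.885   # illustrative values from above
    print(round((coaches + harris + computers) / 3, 3))   # -> 0.898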

My Problem With The BCS

A 2009 Quinnipiac poll found that college football fans favor a playoff by almost a 2-to-1 margin (63% in favor of a playoff, 26% opposed). Our own Week 4 Tracking Poll asked a slightly different question, but 60% of you said that the BCS is an unfair system. You can count me among those who favor a playoff. "Why?", you may ask, when the Sooners have generally been shown a whole lot of favor in the BCS rankings, and that looks to be the case again this year provided they keep winning. So why am I so against the BCS?

I'm actually against the heavily entrenched system of "ranking" teams in sports. I think that it really makes no difference what rankings say, and I think that it is a horrible method of determining postseason play. It's fine for power rankings, but that's about where I think it should end. The reason I do a BCS post every week on Crimson and Cream Machine is to show our readers where the Sooners rank, because the BCS is the method used to determine postseason play, and I think that everyone should be informed.

The ranking system is flawed because it assumes you can place 120 teams in some sort of hierarchy in which each team is better than the one below it. That simply isn't true. Sports are about winning, championships, and what you do on the field. Professional football, basketball, baseball, and hockey all employ playoffs at the end of the year, and the teams that are guaranteed spots won their divisions. Why? Because that's the way it should be.

You could tell me that Alabama is a better football team than Western Kentucky, and I wouldn't argue with you. However, both the AP Poll and the Coaches Poll tell me that Auburn is a better team than Utah. Is that true? How can you definitively say that when both teams are undefeated?

The idea that the regular season is a playoff is somewhat true and somewhat of a farce. I think that it's true in the sense that you need to emerge from this playoff relatively unscathed to win your conference. However, in general the conference champions have not had a chance to play each other and determine which of those teams is most worthy to play for a national championship. The regular season is the playoff that determines who the best 10 to 15 teams in college football are, with little doubt. Sure, you can argue about whether a team deserves to be #15 or #16, but in general that doesn't matter a whole lot. The problem arises because, with a few exceptions, the teams in the top 10 have not played one another, so you are relying on a whole lot of assumptions and subjective comparisons to decide who plays in the National Championship Game. Basically, the regular season is a playoff to determine who the "elite" teams are, but often that "elite" category includes more than just 2 teams.

In some years, the system works - there are two undefeated teams who clearly deserve to be ranked #1 and #2, and they face off in the National Championship Game. And even in those years, sometimes there is controversy. For instance, let's say that there is an undefeated Pac 10 team who played a very weak non-conference schedule, an undefeated SEC team that played a strong non-conference schedule, and a 1-loss Big Ten team that played 2 ranked opponents in their non-conference schedule. Who is to say that the Pac 10 team would have gone undefeated if they had played the same non-conference schedule that the Big Ten team did (or even the same conference schedule)? There's no way to determine that. 

Why leave this open for debate? Institute a playoff system and settle all the questions.