College football conference over/under ratings

Previously we showed the perception of individual teams in the polls. As promised, Matlabgeeks analyzed the conference over/under ratings using Matlab, and found similar tendencies in both the AP and Coaches' polls over the last 10 years. A bias toward east coast teams shows up in the preseason, only for those teams/conferences to underperform expectations, while the west coast conferences, including the Pac-10, WAC and MWC, have all exceeded expectations by the postseason rankings. Most notable is the lack of respect shown to the non-BCS conferences (of course). The bottom three conferences in both polls? ACC, Big 12 and SEC. Let's take a look at how the analysis was performed:

Conference Under and Over Ratings
This graph indicates the average rating change for each ranked team within the conference. Interestingly, the Big East also shows a slight positive underrating score, which is doubly striking considering how weak the conference has been the past couple of years. This is perhaps recency bias, as the conference did have solid teams in the early part of the decade thanks to Miami (FL) and Virginia Tech (note: we took teams switching conferences into account). The two most underrated conferences gain much of their points through the now well-known perception and performance of Boise State, Utah and TCU. Overall, the automatic bids for the 6 "elite" conferences seem to indicate a corrupt system, especially this year if (or when?) a West Virginia/Pittsburgh/Connecticut gets in over a much more deserving Boise State or TCU.

Overall, the data shows just how important a role bias can play in the college football system. Conferences and teams that have played at a high level historically are usually favored by the voters, effectively shutting out newcomers. The system is no different from all-star game voting, with the "elite" or "named" schools getting a large head start on each college football season, except in this case the voting actually matters for championships, money and prestige. In our next post we'll analyze who these voters are and where they come from, but if you're interested, scroll down for the Matlab code and perform the current analysis yourself.

In order to do the analysis, we utilized an additional array of data that links teams to conferences. The first column contains the team name, and the second column contains the conference name. We then went through our data from the previous tutorial and replaced each team name with its conference name – voila, conference over/under ratings.
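For reference, `conf_data` below is a two-column cell structure of team and conference names. A minimal sketch of loading it from a text file (the filename and tab-delimited layout here are our assumptions, not part of the original code) might look like:

```
% Sketch only: 'conferences.txt' and its tab-delimited two-column
% layout (team name, conference name) are assumed for illustration.
fid = fopen('conferences.txt');
conf_data = textscan(fid,'%s %s','Delimiter','\t');
fclose(fid);
% conf_data{1} now holds team names, conf_data{2} their conferences
```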

% Map each team in the over/under-rated list to its conference.
% conf_data{1} holds team names, conf_data{2} the matching conferences.
index = zeros(AP_numteams,1);
for i = 1:AP_numteams
   index(i) = find(strcmp(conf_data{1},overratedAP(i)));
end
overratedCONF = conf_data{2}(index);

% Sum and average the over/under-rating scores by conference
conferences = unique(conf_data{2});
numconf = length(conferences);
CONFrank = zeros(numconf,2);
for i = 1:numconf
    index = find(strcmp(overratedCONF,conferences{i}));
    CONFrank(i,1) = sum(AP_sorted(index));   % total conference score
    CONFrank(i,2) = mean(AP_sorted(index));  % average per ranked team
end

% Sort conferences from most overrated (negative) to most underrated
[confrank_order,order] = sort(CONFrank(:,2));
conferences_order = conferences(order);

% Plot the sorted conference scores as a bar chart
figure(1);
subplot(2,1,1);
bar(confrank_order,'r');
% Label the overrated (negative) conferences below their bars...
text(1:6,confrank_order(1:6)-100,conferences_order(1:6),...
       'HorizontalAlignment','Center');
% ...and the underrated ones above their bars
text(7:13,confrank_order(7:13)+100,conferences_order(7:13),...
       'HorizontalAlignment','Center');
ylim([-800 1300]);
set(gca, 'XTickLabelMode', 'Manual')
set(gca, 'XTick', [])
ylabel('Over-rated    Under-rated')
title('AP Poll'); 

It is useful to use the subplot function in order to place the AP and Coaches' plots on the same figure. The three arguments in subplot(2,1,1) signify that we want 2 rows of graphs and 1 column of graphs, and the final "1" selects the current graph we will work with. We only show the code for the AP analysis; similar code can be written for the Coaches' data.
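As a rough sketch, the Coaches'-poll half of the figure could be drawn the same way in the second subplot row. The variable names below (Coaches_confrank_order, Coaches_conf_order) are our assumptions for conference scores built from the Coaches' poll data exactly as the AP variables were above:

```
% Sketch only: assumes Coaches_confrank_order and Coaches_conf_order
% were built from the Coaches' poll the same way as the AP variables.
subplot(2,1,2);                      % second row of the same figure
bar(Coaches_confrank_order,'b');
text(1:6,Coaches_confrank_order(1:6)-100,Coaches_conf_order(1:6),...
       'HorizontalAlignment','Center');
text(7:13,Coaches_confrank_order(7:13)+100,Coaches_conf_order(7:13),...
       'HorizontalAlignment','Center');
ylim([-800 1300]);
set(gca,'XTickLabelMode','Manual');
set(gca,'XTick',[]);
ylabel('Over-rated    Under-rated');
title('Coaches Poll');
```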
