
The NCAA conducts 89 national championships in 23 sports. Competition is conducted in Divisions I, II and III, with 44 championships administered for women and 42 for men. Three are coed.

Each committee member has an advisory group and all conferences have a representative. The committee uses input from these individuals, as well as the RPI data. If a committee member is evaluating two or more teams, a wide difference in RPI rank can be a factor.
How “wide” is “wide”? A general rule of thumb is a gap of 20 or more ranking places, considered along with the actual mathematical difference between the teams’ RPI values. But since every circumstance is different, those ranges can vary significantly from sport to sport and year to year.
No matter how well a team or league is able to “reload” every year, a sports committee is obligated to look only at student-athletes on the current roster, not those who may have helped a team advance in the championship in previous years. Relying on past success would make it very difficult for an emerging team with no recent tournament history to receive a fair evaluation. No selection process at any level of sport, collegiate or professional, uses past success as a factor in determining participation in a playoff or championship.
Like tournament success, potential professional ability does not necessarily translate into team tournament potential. Many collegiate stars never play professionally, while some players who struggled in college have flourished in the professional leagues.
Those who offer opinions about which are the best conferences tend to compare the top four or five teams in one league with the top four or five of another. There is nothing wrong with this type of evaluation, just as there is nothing wrong with a mathematical evaluation that takes into account all teams in one league. The factors that determine the “best” league always will vary depending upon who does the evaluating, and what region of the country they are from.
Because of location, many conferences play only a small percentage of other leagues in their non-conference schedules. If the number of potential opponents is small, the possibility that these teams will “beat up on each other” could mean fewer outstanding records to catch the committee’s eye, resulting in fewer at-large selections for teams in that part of the country.
Mathematically, it certainly can be argued that with fewer teams available it is possible all the teams in that region could “bunch up” with similar records. Those in other parts of the country, however, could argue that if the great majority of these teams are strong clubs, that also reduces the opportunity to play very weak teams that hurt the strength of schedule element of the RPI.
If the committee is asked to look past BOTH a team’s Division I won-lost record AND its strength of schedule, what other factors should it then consider? By eliminating or reducing the impact of these two factors on selection, elements like perception or reputation may play a larger role in the process. While that might benefit the traditional powers, it would make it more difficult for emerging teams to be considered, even those in that particular region.
Should different parts of the country be considered for selection using different criteria? During their deliberations, committees frequently discuss scheduling limitations facing many parts of the country and always have factored them into the decision process.
Without some type of mathematical ranking, it would appear an “RPI-less” committee could run the risk of selecting teams based on reputation rather than facts. An emerging program from a perceived mid- to lower-level conference, experiencing a breakthrough season, might be completely overlooked unless the committee representative from that area could make a compelling argument without a lot of hard data. By contrast, a traditional powerhouse, experiencing a down year, might sneak into the field based solely on reputation and history.
Those who wish to do away with the RPI must be able to offer an alternative set of data for the committee to use.
The Rating Percentage Index is one of many factors used by NCAA sports committees when evaluating Division I teams for postseason selection, seeding and bracketing. Divisions II and III do not use RPI data.
The RPI first was used in 1981 to provide supplemental data for the Division I Men’s Basketball Committee. The Division I Women’s Basketball Committee began using the RPI in 1984 and the Division I Baseball Committee in 1988. Other Division I sports committees now using the RPI are men’s and women’s soccer, men’s and women’s volleyball, women’s field hockey, men’s and women’s lacrosse, softball and women’s water polo.
A team’s RPI ranking consists of three factors that are weighted as follows:

1. Division I winning percentage (25 percent)
2. Opponents’ winning percentage (50 percent)
3. Opponents’ opponents’ winning percentage (25 percent)
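Using the standard 25/50/25 weighting (team winning percentage, opponents’ winning percentage, opponents’ opponents’ winning percentage), the basic calculation can be sketched in Python. The teams and results below are hypothetical, and this simplified illustration omits refinements the NCAA applies in some sports, such as home/away weighting and excluding a team’s own results when computing its opponents’ winning percentages.

```python
from collections import defaultdict

# Hypothetical season results among four Division I teams: (winner, loser).
games = [
    ("A", "B"), ("A", "C"), ("B", "C"),
    ("C", "D"), ("D", "B"), ("A", "D"),
]

wins = defaultdict(int)
losses = defaultdict(int)
opponents = defaultdict(list)
for winner, loser in games:
    wins[winner] += 1
    losses[loser] += 1
    opponents[winner].append(loser)
    opponents[loser].append(winner)

def wp(team):
    """Winning percentage: wins divided by games played."""
    played = wins[team] + losses[team]
    return wins[team] / played if played else 0.0

def owp(team):
    """Average winning percentage of the team's opponents."""
    opps = opponents[team]
    return sum(wp(o) for o in opps) / len(opps) if opps else 0.0

def oowp(team):
    """Average of the opponents' opponents' winning percentages."""
    opps = opponents[team]
    return sum(owp(o) for o in opps) / len(opps) if opps else 0.0

def rpi(team):
    """RPI under the standard 25/50/25 weighting."""
    return 0.25 * wp(team) + 0.50 * owp(team) + 0.25 * oowp(team)

teams = sorted({t for game in games for t in game}, key=rpi, reverse=True)
for t in teams:
    print(f"{t}: {rpi(t):.4f}")
```

Note that until every team has at least one result, winning percentages are undefined, which is why the ratings cannot be compiled at the very start of a season.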
Only contests against Division I teams are factored into the RPI, although both Division I provisional and reclassifying teams also may be included with some limitations.
Unlike some other formulas that create “preseason” ratings for each team, the NCAA’s RPI starts the season with every team exactly equal. That means an RPI cannot be compiled until all teams have played at least one contest.
Last Updated: Oct 17, 2012