NCAA Graduation Rates: A Quarter-Century of Tracking Academic Success

It’s been almost 25 years since the NCAA first began collecting and publishing student-athlete graduation rates, and in all but two of them – the first two, actually – the data show student-athlete rates consistently surpass those of their student-body counterparts.

In January 1990, the NCAA membership passed legislation requiring schools to report rates disaggregated by race, gender and sport (specifically, football, men’s and women’s basketball, baseball, and men’s and women’s track/cross country). The vote at the 1990 NCAA Convention preceded by about 10 months the U.S. Congress’s passage of the Student Right-to-Know Act, which required reporting for all students, in addition to student-athletes.

In the quarter-century since, the NCAA has refined its collection methods and improved the methodology by which the rates are calculated, giving leaders in higher education more accurate information on which to base academic policy. Graduation rates have not only supplied data reflecting student-athlete academic performance over time – they have added insight into how to positively affect that very performance. In that way, grad rates aren’t just an end in themselves, but a means to an end.

How the rate began

While the NCAA as an organization is more than a century old, it wasn’t until 1965 that the Association began linking athletics eligibility with academic performance. For the next two decades, the Association would adjust that linkage through various policy changes. The most consequential move came in 1983 with the adoption of Proposition 48, which required that a student-athlete achieve a minimum 2.0 high school grade-point average (GPA) in 11 core academic courses and a prescribed minimum SAT or ACT score (700 and 15, respectively) to be eligible to compete in athletics as a college freshman.

This was not the NCAA’s first involvement in setting national academic standards, but it was the most aggressive and controversial, and it likely was the primary catalyst in the NCAA membership’s desire to collect academic data, including graduation rates. With more stringent eligibility standards in place, presidents and chancellors wanted to see whether those efforts paid dividends in the number of student-athletes earning their diplomas.

Prop 48 wasn’t actually implemented until 1986, and it had to survive several amendment challenges before taking hold. By the early 1990s, enough time had passed for the NCAA to start measuring Prop 48’s effect.

At the time, Prop 48 proponents and opponents alike were asking tough questions: Why did the NCAA choose these particular minimums on high school GPA and the ACT/SAT? Does the test-score standard unfairly affect racial/ethnic minority student-athletes and student-athletes from disadvantaged backgrounds? Will these standards lead to improved college success for student-athletes?

In short order, many of those questions were answered. Early graduation rates reporting revealed that the first two classes tracked – the entering classes of 1984 and 1985, the ones before Prop 48 became effective – graduated at rates lower than their student-body counterparts. But student-athletes in the first entering class of the Prop 48 era, in 1986, exceeded their student-body peers in graduation success, and subsequent reports would show that 1986 wasn’t an anomaly.

Since then, academic research on student-athletes has matured. The NCAA today compiles national data on aggregate academic performance of teams (graduation rates and Division I Academic Progress Rates) and conducts longitudinal cohort research that follows student-athletes from high school, through college, to graduation and beyond. Taken together, these represent the most comprehensive portfolio of data on the academic trajectories of student-athletes (and among the largest on college students generally) available in the United States.

In tandem with that effort, the NCAA membership has been able to adjust eligibility rules and other academic standards based on these comprehensive data.

The Graduation Success Rate

The NCAA also devised a new metric for measuring graduation in an effort to amend the shortcomings of the federally mandated methodology. The federal rate is limited because it includes only students who enter college as first-time, full-time students in the fall of a specific year. The federal rate is tabulated as the number of students from within an entering cohort who graduate from their institution of original enrollment within the ensuing six years, divided by the number of students in the original entering cohort.

This rate is the most common one used in national reporting, but it does not consider the reality of student transfer patterns. Transfers are simply lost in measuring graduation performance under the federal methodology. As a consequence, the federal rate clearly understates and misstates graduation results, whether for the general student body or for student-athletes. Moreover, for a student-athlete (or any student) who transfers, the institution of original enrollment is penalized because a transfer is treated as a graduation “failure” regardless of the academic status of that transfer.
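The federal calculation described above is simple division, which is exactly why transfers distort it. A minimal sketch (the cohort numbers are hypothetical, for illustration only):

```python
def federal_rate(cohort_size: int, graduated_within_6yrs: int) -> float:
    """Six-year federal graduation rate, as a percentage.

    Only first-time, full-time fall entrants count, and only degrees
    earned at the institution of original enrollment count as successes.
    """
    return 100 * graduated_within_6yrs / cohort_size

# Hypothetical cohort: 100 entering students, of whom 60 graduate here
# and 20 transfer out in good academic standing. The 20 transfers are
# simply counted as non-graduates under the federal methodology.
print(federal_rate(100, 60))  # 60.0
```

Note that the 20 transfers in the example drag the rate down identically whether they later graduate elsewhere or drop out entirely – the distortion the GSR was built to address.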

College and university presidents and chancellors grew increasingly frustrated with the federal rate in the early 2000s because it did not accurately reflect their students’ academic success.

In response, the NCAA in 2002 developed the Graduation Success Rate, which deals more appropriately with the reality of transfers by (1) removing from the calculation for a given institution any student-athlete who transferred out but would have been academically eligible to continue athletics participation at the institution of previous enrollment had they stayed; and (2) including in the calculation for a given institution any student-athlete who transferred into that school.

In essence, the GSR redistributes graduation responsibility from the initial college to the receiving institution, provided that the student-athlete is in good academic standing at the time of departure from the initial school.

Because many transfers do eventually graduate and would be appropriately treated as graduation “successes” in the GSR, values for the GSR tend to be higher than those in the federal rate for a given institution (although they can be lower if a school is less successful at graduating transfers than non-transfers). 
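The two GSR adjustments described above can be sketched as a small extension of the federal calculation. This is a simplified illustration with hypothetical counts, not the NCAA's official computation, which operates on individual student-athlete records:

```python
def gsr(entering: int, graduated: int,
        eligible_transfers_out: int,
        transfers_in: int, transfers_in_graduated: int) -> float:
    """Simplified Graduation Success Rate, as a percentage.

    Student-athletes who leave while academically eligible are removed
    from the denominator; incoming transfers are added to the cohort
    (and to the numerator if they graduate).
    """
    cohort = entering - eligible_transfers_out + transfers_in
    successes = graduated + transfers_in_graduated
    return 100 * successes / cohort

# Hypothetical school: 100 entering student-athletes, 60 graduate,
# 20 leave in good standing, 10 transfer in and 8 of those graduate.
# Federal rate: 60/100 = 60%. GSR: (60 + 8) / (100 - 20 + 10) = 75.6%.
print(round(gsr(100, 60, 20, 10, 8), 1))  # 75.6
```

In the example, the same school moves from 60 percent under the federal methodology to roughly 76 percent under the GSR – consistent with the tendency, noted above, for GSR values to run higher than federal rates.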

Although the GSR is not a perfect representation of a fully student-centered graduation rate (it does not fully track students from school to school across non-traditional academic pathways), it nonetheless was created to provide a more meaningful and inclusive summary of the graduation performance within a cohort of student-athletes. It is certainly more inclusive, as nearly 40% more students figure into the GSR than are part of the federal calculation.

The NCAA and other educators have encouraged the U.S. Department of Education to adopt similar methodology for evaluating the graduation performance of the general student body and any demographic subgroup of interest. To date, these efforts have not been successful. As a result, GSR values for student-athletes cannot be meaningfully compared with results for the general student body or any subgroups, but the GSR remains the best alternative and the most accurate available reflection of graduation success.

Over time, tracking graduation rates has become one of the most important functions the NCAA provides. Not only does it reveal student-athlete academic success (and the positive impact policy changes have had on that performance), but it also provides valuable data on which to base future academic policy.

Graduation Rates Timeline

  • 1983    Designed to bolster sagging student-athlete academic performance, the NCAA adopts Proposition 48, requiring incoming student-athletes to achieve an SAT score of 700 or ACT score of 15 and a 2.0 high school grade-point average in 11 academic core courses for athletics eligibility. The new standards are scheduled to take effect in 1986, and their implementation will prove to be among the driving forces behind the NCAA tracking graduation rates.
  • 1984    The NCAA establishes a “Presidents Commission” within its governance structure to give CEOs more control over intercollegiate athletics. The move is an important one as it relates to graduation rates, a metric of great interest to presidents and chancellors.
  • 1989    To strengthen Prop 48 even further, the NCAA at the 1989 Convention adopts Prop 42, which prevents partial qualifiers from receiving athletics aid in their freshman year.
  • 1990    In the face of intense opposition and protest throughout the previous year, Prop 42 is rescinded at the 1990 Convention.
  • 1990    Delegates at the 1990 NCAA Convention adopt legislation requiring all schools to submit graduation rates by annually completing the Integrated Postsecondary Education Data System Graduation Rates Survey (IPEDS GRS). The new legislation anticipates similar action that will be taken by the federal government later that year (see next entry).
  • 1990    The so-called federal graduation rate methodology is born when the U.S. Congress passes the Student Right-to-Know Act in November. The new law requires institutions “to calculate completion or graduation rates of certificate- or degree-seeking, full-time students entering that institution, and to disclose these rates to all students and prospective students.” It also requires reporting of graduation/completion rates of all students – as well as students receiving athletically related aid – by race/ethnicity and gender and by sport, and the average completion or graduation rate for the four most recent years.
  • 1991    Based on legislation adopted the previous year, the NCAA begins collecting graduation-rates data on the entering class of 1984, the most recent for which schools have results given the six-year graduation window built into the methodology.
  • 1991    The Knight Foundation Commission on Intercollegiate Athletics issues its first report calling for a three-pronged approach (presidential control, and academic and fiscal reform) to improve college sports. This fortifies the NCAA’s insistence that presidents oversee intercollegiate athletics, and it adds credence to tracking graduation rates as a viable concern.
  • 1992    In another effort to boost graduation rates, Division I adopts Prop 16, which modifies Prop 48 by establishing a “sliding scale” of test scores and high school grade-point averages that does not go below a 700 SAT or 17 ACT and a 2.0 GPA.
  • 1995    Division I adopts a proposal that delays Prop 16, which had been scheduled to take effect this year, until the 1996-97 academic year (however, Prop 16’s increase in required core courses from 11 to 13 stayed). Another proposal adjusts the minimum test score for partial qualifiers to a 600 SAT (15 ACT).
  • 1997    The NCAA federates its governance structure to give each of the three divisions more autonomy. Division I adopts a representative governance structure designed to approve legislative changes more frequently throughout the year, while Divisions II and III retain the one-school/one-vote legislative structure culminating at the annual NCAA Convention. Presidential bodies continue to have the ultimate authority in each division.
  • 2002    Division I develops a “Graduation Success Rate” to accompany the annual federal requirement. The GSR is hailed as a more accurate alternative to the federal methodology because it takes transfers into account.
  • 2003    Division I increases core course requirements to 16 and, importantly, eliminates the hard cut-off for test scores, preventing any prospective student-athlete from being eliminated from full qualifier status by test score alone. The change is intended to maximize graduation rates while minimizing adverse impact on minority and disadvantaged populations. The new legislation also eliminates any reference to partial qualifiers.
  • 2003    Division I enhances progress-toward-degree standards by requiring student-athletes to have completed 40 percent of their graduation requirements by the start of their third year, 60 percent by the start of their fourth year and 80 percent by the start of their fifth. In other words, student-athletes must remain on track to graduate in five years or less to maintain athletics eligibility. The new standards replace the previous 25-50-75 progression.
  • 2003    The cornerstone of the academic reform push of the early 2000s arrives with the membership’s request to create a real-time measure of academic success that is strongly predictive of eventual graduation rates. Called the “Academic Progress Rate,” the system rewards teams whose student-athletes are making progress toward a degree. Cut-offs are set on the APR below which teams can be penalized, including preclusion from postseason competition.
  • 2005    NCAA researchers develop the Academic Success Rate (ASR) as a better way of tracking graduation rates for Division II. The methodology parallels the Division I GSR but includes student-athletes not on athletics aid.
  • 2005    The NCAA begins a series of studies of former student-athletes 10 years removed from their athletics eligibility called “The Study of College Outcomes and Recent Experiences (SCORE).” The survey demonstrates that even the GSR and ASR underestimate how many student-athletes earn a bachelor’s degree.
  • 2010    Data from the entering class of 2004 reveal a GSR for Division I student-athletes of 82 percent, fulfilling the goal NCAA President Myles Brand had set years earlier when he declared that the Division I GSR should equal or exceed 80 percent.
  • 2014    Data from the most recent entering class of 2007 show an overall Division I GSR of 84 percent, and an overall Division II ASR of 72 percent.