
Gaming the Rankings

July 5, 2010
By Anonymous

In many respects, college admissions is a game. Students with multiple ethnic backgrounds might try to choose the most “advantageous” ethnicity (or not choose one at all) in terms of affirmative action. They might list biomedical engineering or philosophy instead of pre-medicine as an academic interest to catch the eyes of admissions officers. There are, after all, many more pre-meds than biomedical engineers or philosophers.

What many don’t know, however, is that colleges play the game, too.

“Gaming the rankings,” as it is sometimes called, is the deliberate practice of certain admissions strategies specifically targeted toward raising a school’s rankings, especially those of U.S. News & World Report. In a practice known as “yield protection” or “Tufts syndrome,” named after its most famous perpetrator, colleges sometimes waitlist students whom admissions officers see as unlikely to enroll even if admitted (think: safety-school applicants).

While a school might lose out on a few stellar admits, this practice has the net effect of decreasing the admission rate, a factor in the “student selectivity” category of the US News rankings. Clearly, Tufts does it. Washington University in St. Louis does it. Georgetown does it. A host of top-30 institutions do it.
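To see how the arithmetic works, consider a hypothetical back-of-the-envelope calculation. This is a minimal sketch: every number below (applicants, admits, waitlisted safety-school applicants) is invented for illustration.

    # Hypothetical numbers; all values are invented for illustration.
    applicants = 10_000
    merit_admits = 1_000     # students the college would admit on merit alone
    likely_decliners = 100   # strong "safety school" applicants, waitlisted instead

    rate_without = merit_admits / applicants
    rate_with = (merit_admits - likely_decliners) / applicants

    print(f"admission rate without yield protection: {rate_without:.1%}")  # 10.0%
    print(f"admission rate with yield protection:    {rate_with:.1%}")     # 9.0%

Waitlisting just a hundred likely decliners shaves a full percentage point off the reported admission rate, at little real cost to the entering class.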

Other dubious practices, such as shifting class sizes, target other US News categories. For instance, rumors surfaced last year on Boston.com that Clemson’s meteoric rise in the US News rankings might be at least partially attributed to class-size redistribution. The proportion of classes with fewer than 20 students counts for 30% of the faculty resources category, while the proportion with fewer than 50 counts for only 10%. Clemson allegedly shifted students from classes with a little over 20 students into classes with over 50 students: each such shuffle pushes several sections below the heavily weighted 20-student threshold, while the credit lost when a section crosses the 50-student line is weighted far less. All this points toward an important question: if certain colleges can simply game the rankings, then how much should we trust them?
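To make the incentive concrete, here is a toy calculation. It is a sketch, not the real US News formula: it scores only the two class-size factors above, with the weights as described in the article, and the section sizes and the class_size_score helper are invented for illustration.

    # Toy score using only the two class-size factors described above
    # (a hypothetical simplification; the real US News formula has
    # many more inputs). All section sizes are invented.

    def class_size_score(sizes):
        """Weight the under-20 fraction at 30% and the under-50 fraction at 10%."""
        n = len(sizes)
        under_20 = sum(1 for s in sizes if s < 20) / n
        under_50 = sum(1 for s in sizes if s < 50) / n
        return 0.30 * under_20 + 0.10 * under_50

    # Before: twelve sections of 22 students each; none count as "small."
    before = [22] * 12

    # After: the same 264 students, reshuffled so eleven sections dip
    # below 20 and one section absorbs the overflow, crossing 50.
    after = [19] * 11 + [55]

    print(class_size_score(before))  # 0.10
    print(class_size_score(after))   # ~0.37

The enrollment hasn’t changed at all; only the way students are divided among sections has, yet the score nearly quadruples.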

No one college ranking system can comprehensively capture so subjective a judgment, but much can be done to make rankings more meaningful. One approach entirely discards the traditional notion of measuring a college’s intrinsic qualities. Instead, a 2004 study by researchers at Harvard, Boston University and the University of Pennsylvania relied entirely on the decisions of 3,240 high-achieving students as a measure of revealed preference: the idea that a student’s actual choice reveals which option that student values most. The researchers treated each student’s college decision as a tournament in which the colleges that admitted that student competed against one another, with the college the student chose to attend winning each head-to-head matchup. Using a rating system similar to those of tennis and chess, they determined a ranking for about 100 colleges and universities.
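As a rough picture of how such a tournament rating can work, here is a minimal Elo-style sketch in Python. It is an illustration only: the college names, student decisions, and the simple update rule are all invented, and the study’s actual statistical model is more sophisticated than this.

    # Minimal Elo-style sketch of the revealed preference idea: each
    # student's decision becomes a set of head-to-head "matches" that
    # the chosen college wins against every other college that admitted
    # the student. All data and the update rule are invented; the 2004
    # study used its own, more sophisticated model.

    K = 32  # Elo sensitivity constant; larger K means bigger rating swings

    def expected(rating_a, rating_b):
        """Elo's predicted probability that A beats B."""
        return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

    def update(ratings, winner, loser):
        """Move both ratings by the same Elo step after one 'match'."""
        step = K * (1 - expected(ratings[winner], ratings[loser]))
        ratings[winner] += step
        ratings[loser] -= step

    # Each entry: (college the student attended, other colleges that admitted them).
    decisions = [
        ("College A", ["College B", "College C"]),
        ("College B", ["College C"]),
        ("College A", ["College B"]),
    ]

    ratings = {c: 1500.0 for c in ("College A", "College B", "College C")}
    for chosen, declined in decisions:
        for other in declined:
            update(ratings, chosen, other)

    # Highest rating first: the revealed preference ordering.
    print(sorted(ratings.items(), key=lambda kv: -kv[1]))

Because the ratings depend only on where admitted students actually choose to enroll, a college cannot improve its standing by rejecting or waitlisting applicants it fears will decline.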

"Our method produces a ranking that would be very difficult for a college to manipulate. In contrast, colleges can easily manipulate the matriculation rate and the admission rate," the authors state. "If our ranking method were used, the pressure on colleges to practice strategic admissions would be relieved."

Accordingly, well-known schools that notably game the rankings, including Clemson and Washington University in St. Louis, failed to make the top 100. Even with its advantages, though, the revealed preference ranking has been repeated only once since: the data it requires make it very difficult to replicate.

In the meantime, we can sit back and take college rankings, whether from US News, Forbes, or anyone else, with a rather large chunk of salt.

The author's comments:
Many of my friends (and I, at first) were influenced by college rankings so heavily that they based their entire college process, from choosing which schools to apply to, visit, and attend, on the U.S. News hierarchy. As I looked into the factors behind a ranking’s calculation, I began to question the accuracy of many college rankings.
