
Moving up the rankings

Measure of success: Economists Robert Archibald (left) and David Feldman believe they have a better way of determining college graduation rates.

Two economists propose a better way to compare college graduation rates

It’s not so much comparing apples to oranges. It’s more like comparing apples to pizza, airplanes and kangaroos.

In part because institutions of higher education have vastly different missions and student populations, comparing universities and colleges has long been challenging and controversial. Still, college administrators, parents and prospective students look in earnest each year to rankings such as those from U.S. News & World Report.

But two William & Mary economics professors have found what may be a better way to compare schools. When they run the numbers, the difference in outcomes is significant.

Robert B. Archibald and David H. Feldman actually never intended to create a new ranking system for American colleges and universities.

The two professors for years have been investigating issues of cost, price and access in higher education with the goal of helping people understand why college costs rise more rapidly than inflation rates. In that process, they started looking into how college outputs are measured.

Because of an accountability movement that started in K-12 schools, they said, people are increasingly looking for measurable outputs from colleges and universities for the money that is being put into them.

Standardized testing is commonly used to compare K-12 schools’ successes, but no comparable set of measures exists among the nation’s colleges. Instead, they said, typical rating systems for the effectiveness of colleges are based on what the colleges start off with: the SAT scores of incoming students, the percentage of students at the top of their class and the amount the school spends on each student. One of the few common, measurable outputs that colleges do have is the graduation rate.

However, comparing schools on graduation rates alone is problematic because of the different populations schools serve. Traditional ranking organizations like U.S. News have therefore employed a statistical technique called regression analysis, using input data such as per-student expenditures and the student body’s SAT scores and high-school grades to determine a school’s expected graduation rate. The rankers then compare that expected graduation rate with the school’s actual graduation rate.
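As a rough illustration of that regression step, here is a minimal sketch in Python with made-up schools and input columns; it is not U.S. News’s actual model. The idea is to fit graduation rates on the inputs and then read each school’s over- or under-performance off the residual.

```python
import numpy as np

# Hypothetical inputs, one row per school:
# median SAT (hundreds), spending per student ($000s), percent of class in HS top 10%
X = np.array([
    [13.5, 45.0, 80.0],
    [11.0, 20.0, 40.0],
    [12.0, 30.0, 55.0],
    [10.5, 18.0, 30.0],
    [12.5, 35.0, 60.0],
    [11.5, 25.0, 45.0],
])
actual = np.array([0.92, 0.70, 0.81, 0.60, 0.78, 0.74])  # actual graduation rates (made up)

# Ordinary least squares with an intercept column
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, actual, rcond=None)

expected = X1 @ coef          # regression-predicted ("expected") graduation rate
residual = actual - expected  # positive is a "plus" (better than expected), negative a "minus"

for school, (e, r) in enumerate(zip(expected, residual)):
    print(f"school {school}: expected {e:.2f}, over/under-performance {r:+.3f}")
```

The sign of the residual is the “plus” or “minus” Archibald describes below; every school is being measured against the average relationship between inputs and graduation rates.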

Comparing against the average

“If you do better than your expected graduation rate, you get a plus and if you do worse than that, you get a minus,” said Archibald.

The problem with regression analysis, the professors said, is that it compares schools against an average, not against the best-performing schools. Archibald and Feldman thought there was a better way to compare graduation rates, using a tool called production-frontier analysis, a method often used in the corporate world. They co-authored a paper outlining the benefits of comparing graduation rates with that method. The paper is slated for publication in early 2008 but already has generated a lot of interest, with stories in The Chronicle of Higher Education and elsewhere on the web.

“What our paper did was say there’s another way to think about this,” said Archibald, “not in terms of what is the expected graduation rate, but in terms of what, for a given set of inputs, is the best graduation rate any college has achieved and then to compare yourself to the best instead of to the average.”

First, using raw data collected by U.S. News & World Report over the past six years, the professors took 187 schools in the report’s “National University” category and compared their graduation-rate performance using traditional regression analysis. They then applied production-frontier analysis to the same set of schools. That analysis yielded a frontier, or set, of 35 universities whose graduation rates were higher than those of any other school with similar inputs. A school with a low graduation rate could still be efficient if no other school did better without using more inputs. The standings of all the other schools were determined in comparison with that frontier group.
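The frontier idea can be sketched with a simple dominance check, again on made-up data; the paper’s actual production-frontier estimation is more involved than this. A school sits on the frontier if no other school graduates a higher share of its students while using no more of any input.

```python
import numpy as np

# Hypothetical inputs (same kind of columns as before) and graduation rates
inputs = np.array([
    [13.5, 45.0, 80.0],
    [11.0, 20.0, 40.0],
    [10.5, 18.0, 30.0],
    [14.0, 50.0, 85.0],   # uses more of every input than school 0...
])
grad_rate = np.array([0.92, 0.70, 0.60, 0.78])  # ...but graduates fewer, so it is dominated

def on_frontier(i: int) -> bool:
    """True if no other school graduates more while using no more of any input."""
    for j in range(len(grad_rate)):
        if j == i:
            continue
        uses_no_more = np.all(inputs[j] <= inputs[i])
        graduates_more = grad_rate[j] > grad_rate[i]
        if uses_no_more and graduates_more:
            return False  # school j dominates school i
    return True

frontier = [i for i in range(len(grad_rate)) if on_frontier(i)]
print("frontier schools:", frontier)  # school 2 stays on the frontier despite its low rate
```

Note that school 2, with the lowest graduation rate, remains on the frontier because no school does better with inputs that small, while school 3 falls off because school 0 graduates more students with fewer resources. That is the distinction the professors draw between comparing to the best and comparing to the average.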

When both the regression and production-frontier analyses were complete, the professors compared the results.

“Not surprisingly, in a lot of cases, what we got with production-frontier analysis was very similar to what we got with regression analysis,” said Feldman. “The interesting thing was to explore the situations in which we got different results.”

A set of 12 schools that did very well under the regression technique was found to be “quite inefficient” under the frontier technique, said Feldman. These schools ranked well in U.S. News. But because they did not fare as well in the production-frontier analysis, the professors said, they should not be complacent about their U.S. News ranking and should instead measure their success against the best among their peers.

“Just because U.S. News pats a school on the back, it does not necessarily mean a school is doing as well as it could with the input it has,” said Feldman.

Some win, some lose

On the other side of the coin, Archibald and Feldman’s comparison also included a set of 27 schools that U.S. News downgraded because their graduation rates fell below their predicted rates. Using production-frontier analysis, however, those same schools were shown to be very efficient.

Another significant finding of the professors’ study was that schools serving large numbers of science and engineering students ranked poorly under both analyses.

“This tech-school bias is one of the biases that we uncovered that I don’t know if anyone else has ever talked about,” said Archibald. Feldman said that the amount of money required to produce a science or engineering graduate plays a large part in that outcome.

“A school that has 85 percent of its students studying non-tech disciplines isn’t going to have to spend as much per student as a school that has 85 percent of its students in chemistry or physics labs,” he said. “In other words, there’s another reason why a school that spends more isn’t seemingly getting any output for that—they are. They are getting the output. It’s just that their graduate and your graduate aren’t the same. They’re different. And if you lump them together, you’re mis-measuring things.”

The professors said that their study demonstrates the need for universities to examine rankings carefully and to look for better ways to measure their outcomes.

Quantifying the academy

“Universities are very good at telling stories, but we’re very poor at coming up with measures that can be quantified and compared across universities or across time,” said Archibald.

Even colleges typically considered elite need to show how the extra money that goes into their students results in improvements in the education they provide, said Feldman.

“What we are suggesting is that if other schools spend a third of what you spend per student, are you saying that your students get that much better of an education? How do you demonstrate this?” said Feldman. “There’s a spotlight being shined on what universities are doing and they’re going to have to think more clearly how they measure what they do.”

The paper is scheduled for publication in the February 2008 issue of Research in Higher Education. Their work already has attracted a lot of attention, both positive and negative. But the reaction isn’t unexpected, they said.

“I think some of the pitches that have been thrown at us, like the ones being thrown at U.S. News, reflect a deep-seated anxiety about comparing schools on the basis of overly simplistic measures of output,” said Feldman.

“Or any single way of doing it,” added Archibald. “Because the best school in the nation for student X might not be the best school in the nation for student Y—but one school is the best school in the nation for U.S. News and that gets lots of publicity.”

Both professors said they are not trying to be the new U.S. News. They merely hope their study will make people think twice when looking at college rankings.

“People ought to think very seriously before they look at these rankings because when you throw schools together that have very different missions and you use the same technique to generate the score, there are biases that aren’t obvious until you think about them for a while,” said Archibald.

“We readily plead guilty to the charge that rankings are problematic and I think if you read our paper as though we’re trying to overturn U.S. News with a better way of ranking schools, then you are not taking the right message from the paper,” said Feldman. “We really didn’t write this to come up with a new, sexy ranking of schools.”