Legal Affairs


November/December 2005

Outranked and Underrated

Rankings of law schools are inherently misleading. Ratings, done right, can reveal schools' real strengths and weaknesses.

By Norman Bradburn

TENS OF THOUSANDS OF LAW SCHOOL HOPEFULS are now in the process of requesting admissions applications. For many, the decisions on where to apply and where to go will be influenced in no small part by law school rankings, in particular those published each year by U.S. News & World Report. Law school administrators have long complained about comparative evaluations and have counseled prospective students not to place too much weight on them. And yet, as The New York Times recently reported, many law schools have changed their curricula, retooled their admissions procedures, and manipulated data about their finances in an effort to boost their U.S. News rankings. The cat is loose among the pigeons.

It's not quite the menace it has been made out to be, however. At a time when greater accountability is being demanded at all levels of education, head-to-head assessments are inevitable and often have merit. While many ratings systems are deficient, those that conform to accepted social science standards can convey valuable information about the law schools under scrutiny. The real problem lies with the focus on rankings rather than ratings. Done correctly, ratings—standardized quantitative measures—can be very effective at revealing institutions' relative strengths and weaknesses.

But ratings are invariably distorted by rankings, which use this underlying data to make a claim about the precise ordering of schools. While the law school community should stick to its guns about the problems of rankings, it needs to accept that academic evaluations are here to stay, and it should push for more rigorous and accurate ratings.

Although most professionals recognize that law school rankings represent an attempt to create a precise order out of things that cannot be precisely ordered, it is almost always rankings, rather than ratings, that make the headlines. Is Yale still the one to beat? Has Stanford eclipsed Harvard? Can Duke crack the top five? Rankings are appealing because they provide a simple answer to a simple question—Is A better than B?—and it is this simple answer that draws the interest of the media and apparently ends up swaying so many law school applicants. Not surprisingly, I have yet to find the sponsor of a rating system who has resisted the temptation to turn ratings into rankings. Rankings are clearly good for business, even though they are woefully inadequate when it comes to drawing meaningful distinctions between law schools.

For one thing, rankings are inherently misleading. They imply, for instance, that the distance between adjacent ranks is uniform and that there is a clear, measurable difference between two schools ranked side by side. Take Yale and Harvard, which are ranked first and second in the 2006 U.S. News law school rankings, with overall scores of 100 and 94, respectively. Looking at these numbers, you naturally assume that there is a clear difference between the two schools. But when you look at the numbers within the numbers—the factor assessments used to tabulate the total score—the difference is hard to discern.

Now throw Stanford, the third-ranked law school, into the equation. Its total score was 93, just one point less than Harvard's. Six points separate the first- and second-ranked law schools. One point separates the second- and third-ranked schools. Yet the rankings suggest that the distance between first and second is the same as the distance between second and third. Things get even more confusing further down the ranks. As with Yale and Harvard, there is a six-point difference between the University of California's Hastings College of the Law and the University of Kentucky's law school. Whereas Yale and Harvard are ranked one right on top of the other, Hastings and Kentucky are ranked 17 places apart, in 39th and 56th place, respectively. To put it gently, the U.S. News list is not a model of consistency and logic. (I have made this point directly to U.S. News in my role as a consultant on its rankings.)
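The arithmetic can be made concrete. The sketch below uses only the figures quoted above (the 2006 overall scores for Yale, Harvard, and Stanford, and the rank positions of Hastings and Kentucky) to show how a one-place rank gap can conceal anything from a one-point to a six-point score difference:

```python
# Rank positions and 2006 U.S. News overall scores quoted in the text.
schools = {
    "Yale": (1, 100),
    "Harvard": (2, 94),
    "Stanford": (3, 93),
}

# Both pairs below are adjacent in rank (a gap of one place),
# yet their score gaps differ sixfold.
yale_harvard_score_gap = schools["Yale"][1] - schools["Harvard"][1]
harvard_stanford_score_gap = schools["Harvard"][1] - schools["Stanford"][1]
print(yale_harvard_score_gap, harvard_stanford_score_gap)  # 6 1

# Meanwhile, the same six-point score gap between Hastings and
# Kentucky spans 17 rank positions (39th vs. 56th).
hastings_rank, kentucky_rank = 39, 56
print(kentucky_rank - hastings_rank)  # 17
```

The point is not the specific numbers but the mismatch: rank distance and score distance simply do not track each other.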

Another problem with rankings is that they have a flat distribution: Each rank has only one occupant, except for occasional tied rankings. Yet this type of distribution runs counter to what we intuitively know about quality—namely, that it is not evenly and neatly distributed. There are a few law schools that are truly excellent, a number that are quite good, a great many that are simply good, some that are not so good, and a few that are not very good at all. This is what is known as a "normal distribution," or bell-shaped curve, and is a much more accurate representation of the quality hierarchy among law schools than the flat distribution usually depicted in rankings.
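A minimal simulation illustrates the point. The quality scores below are hypothetical, drawn from a bell-shaped distribution purely for illustration; ranking them forces every school into its own slot, so the uniform one-place gaps between ranks hide how unevenly the underlying quality is actually spread:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical quality scores for 180 law schools, drawn from a
# normal (bell-shaped) distribution: a few excellent, many in the
# middle, a few weak.
scores = [random.gauss(70, 10) for _ in range(180)]
scores.sort(reverse=True)  # descending: rank 1 first

# Ranking flattens this distribution: each rank gets exactly one
# occupant, so adjacent ranks always look "one place" apart even
# though the underlying score gaps vary widely. Scores are sparse
# in the tails and dense in the middle, so the gap at the top is
# typically much larger than a gap in the middle of the pack.
top_gap = scores[0] - scores[1]      # gap between ranks 1 and 2
middle_gap = scores[89] - scores[90]  # gap between ranks 90 and 91
print(round(top_gap, 2), round(middle_gap, 2))
```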

IN CONTRAST TO RANKINGS, which proceed from the flawed premise that any law school can definitively be called "the best" and that all law schools can be lined up in an accurate hierarchy, it is possible to create a credible ratings system—one that is reasonably objective, methodologically sound, and illuminating in its results. What does such a system look like?

For ratings to be credible, they need to be based on believable indicators of quality. Much of the controversy about law school rankings centers on the choice of quality indicators and their relative weights in an overall rating. Take, for instance, LSAT scores. Most people agree that they matter, but there is disagreement about what aspect of the LSAT scores is important for a school—is it the median score of students who go there, the average, or some other measure? There are similar disagreements about how to properly assess admissions standards, faculty scholarship, student-faculty ratios, success rates of graduates taking state bar exams, and a host of other metrics. (The U.S. News formula, though it has been refined over time with feedback from law schools and social scientists, remains the target of dispute. The four major factors are: a law school's reputation among leaders of other schools and among lawyers and judges; its selectivity according to the median LSAT scores and GPAs of its students and the percentage of applicants the school accepts; its success in placing graduates in jobs; and its faculty, financial, and library resources.)

One way of choosing appropriate benchmarks would be to conduct a survey of law school faculty members, practicing lawyers, and judges. What do they believe are the most reliable indicators of quality in a law school? Assuming that there are differences of opinion among these constituencies, separate ratings could be devised based on the indicators favored by each group. You might have a rating based on indicators cited by law professors, another based on opinions of law firms that employ graduates, and a third that uses criteria favored by judges before whom graduates appear.

A more difficult challenge concerns quality indicators for which there are no data, like the integrity of students and faculty. Critics often complain that academic ratings are based on criteria that are easily measured and neglect other attributes that are important yet difficult or impossible to quantify. Measures of a school's reputation are probably the best way around this problem. A well-designed reputation survey can yield valuable information about a school and be an effective means of quantifying wholly subjective judgments.

Once the indicators of quality are selected, the challenge is to find a way of credibly combining them into a single measure—a rating. As with the quality indicators themselves, a sensible way to devise a weighting scheme would be to survey the producers and consumers of law school output. The difficulty is that academic ratings, unlike many other kinds of ratings, lack a "gold standard": a single metric universally recognized as the benchmark against which all other indicators can be statistically validated. However, if enough experts are brought into the process and given a stake in the end result, it is possible to produce a weighting scheme that enjoys broad acceptance.
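Mechanically, such a combination is just a weighted sum. The sketch below uses hypothetical indicator names and weights as stand-ins, not the actual U.S. News formula, to show how standardized indicator scores collapse into a single rating:

```python
# Hypothetical weights for four broad indicator categories.
# These numbers are illustrative only; a real scheme would derive
# them from surveys of faculty, lawyers, and judges as described.
weights = {
    "reputation": 0.40,
    "selectivity": 0.25,
    "placement": 0.20,
    "resources": 0.15,
}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights sum to 1

def rating(indicators: dict) -> float:
    """Weighted sum of standardized indicator scores (each 0-100)."""
    return sum(weights[k] * indicators[k] for k in weights)

# A hypothetical school's standardized scores on each indicator.
school = {"reputation": 90, "selectivity": 85, "placement": 80, "resources": 75}
print(rating(school))  # 0.40*90 + 0.25*85 + 0.20*80 + 0.15*75 = 84.5
```

The contested question is never the summation itself but the weights, which is why broad buy-in on the weighting scheme matters more than the arithmetic.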

No ratings system, however well-designed, can succeed if the data on which it relies is flawed. One of the persistent criticisms of comparative evaluations is that much of the data comes from the institutions being rated. While this creates a temptation for schools to engage in creative accounting, the incentive to cheat is presumably offset by an awareness of the damage that can be done to a school's reputation if it is caught fudging its numbers. Still, the better evaluators prefer to depend on data from public sources, like the Integrated Postsecondary Education Data System, and accrediting agencies, like the American Bar Association, and to minimize the amount that comes from institutions being judged.

Law schools are absolutely right to complain about rankings. And when we talk about law school rankings, it is almost always with the highly influential U.S. News rankings in mind. While the magazine's ratings system is reasonably sound, it presents the final rankings in a way that misleads prospective students and does a disservice to law schools. The law school community can push the U.S. News guide and other evaluations to strengthen their ratings mechanisms by, among other things, adopting a unified position on quality measures and on the methods used to combine them. The cat may be loose, but the pigeons need not stand by helplessly.

Norman Bradburn, a senior fellow at the National Opinion Research Center, is a University of Chicago emeritus professor.
