Expert Q&A: Measuring Partisan Fairness in Maps


Our new Expert Q&A series features conversations with professionals about topics related to democracy. The Democracy Docket team formulated the questions, and the expert provided the answers, which have been lightly edited for clarity.

Nicholas Stephanopoulos is a legal scholar and lawyer focusing on election law and constitutional law. We reached out to him over email to discuss the efficiency gap, a redistricting metric for measuring partisan fairness that he coined with political scientist Eric McGhee.

How do states go about drawing new districts from census data?

The census data includes population counts for every unit of geography, all the way down to the census block. States use this data to assemble new districts that are approximately equal in population. Ever since the 1960s, this redrawing of districts to achieve rough population equality has been required by the U.S. Supreme Court. 

How do states ensure districts are equal in size and fair?

States rely on the census data to ensure districts are about equal in population. With respect to partisan fairness, states have to pursue it only if state law requires them to, and unfortunately, state law usually doesn’t. So, in states where a single party controls the redistricting process, we often see the obsessive pursuit of partisan unfairness — the subordination of most other criteria so the line-drawing party can extract as large an advantage as possible.

How do states take racial and political considerations into account when drawing new districts?

In a nutshell, the Voting Rights Act requires states to draw districts in which minority voters are able to elect their preferred candidates if the minority population is reasonably compact and there exists racial polarization in voting. Determining the extent of racial polarization in voting requires fairly sophisticated empirical analysis. Once completed, this same analysis enables states to figure out if districts will likely “perform” for minority voters — that is, elect their candidates of choice.

As for political considerations, gerrymanderers routinely use past election results to predict how districts will perform — in partisan terms — in the future. They most often use an aggregate of past election results, though more advanced modeling can also be done. This same data can be used for good, not for evil, if it’s harnessed to achieve partisan fairness, not unfairness.
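
As a rough illustration of the simplest version of that approach, the sketch below averages each district’s Democratic share of the two-party vote across several past elections to project its likely lean. The district names and vote totals are hypothetical, and real analyses often layer more sophisticated modeling on top.

```python
# A minimal sketch, using hypothetical districts and vote totals, of the
# simplest aggregation approach: average each district's Democratic share of
# the two-party vote across several past elections to project its likely lean.

past_results = {
    # district: list of (dem_votes, rep_votes) from prior statewide elections
    "District 1": [(42_000, 58_000), (45_000, 55_000), (40_000, 61_000)],
    "District 2": [(63_000, 37_000), (60_000, 41_000), (65_000, 36_000)],
}

for district, results in past_results.items():
    dem_shares = [d / (d + r) for d, r in results]
    avg_share = sum(dem_shares) / len(dem_shares)
    lean = "Democratic" if avg_share > 0.5 else "Republican"
    print(f"{district}: average Dem two-party share {avg_share:.1%} -> leans {lean}")
```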

How can you determine if a district qualifies as a partisan gerrymander? And a racial gerrymander?

I prefer to think of partisan gerrymandering as a plan-wide (not a district-specific) concept. The textbook case of a plan that’s a partisan gerrymander is one that’s (1) drawn with partisan intent, (2) highly biased in that the line-drawing party’s votes translate more efficiently into seats, and (3) more biased than most maps generated without considering partisanship.

Racial gerrymandering, in contrast, is district-specific, not plan-wide. A particular district is an unconstitutional racial gerrymander (under the Supreme Court’s doctrine) if race was the primary motive for the district’s construction and the way the district was drawn is not tailored to any compelling state interest (such as complying with the Voting Rights Act).

What is the efficiency gap (in layman’s terms)?

The efficiency gap is a measure of partisan fairness that Eric McGhee invented (and that Eric and I introduced to a legal audience). The efficiency gap is grounded in Eric’s insight that the two essential techniques of gerrymandering — cracking and packing — both work by “wasting” votes for the targeted party’s candidates. In the case of cracking, all votes cast for the losing candidate in a district are wasted (because that candidate fails to be elected). In the case of packing, all votes cast for the winning candidate in excess of the 50% (plus one) needed to prevail are wasted (because that candidate would still have won without those votes). The efficiency gap is simply one party’s total wasted votes, across all the districts in a map, minus the other party’s total wasted votes, divided by the overall number of votes cast. It tells you, in a single tidy number, which party benefits (and to what extent) from all of a map’s cracking and packing choices.
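
To make that arithmetic concrete, the sketch below works through the definition with hypothetical vote totals: in each district, the losing party wastes all of its votes, the winning party wastes everything above the 50%-plus-one threshold, and the efficiency gap is the difference between the parties’ total wasted votes divided by all votes cast. The district numbers and party labels are invented for illustration.

```python
def wasted_votes(votes_a, votes_b):
    """Wasted votes for each party in one district: all of the loser's votes,
    plus the winner's votes beyond the 50%-plus-one needed to prevail."""
    total = votes_a + votes_b
    threshold = total // 2 + 1  # votes needed to win
    if votes_a > votes_b:
        return votes_a - threshold, votes_b
    return votes_a, votes_b - threshold

# Hypothetical five-district map: Party A is cracked in four districts and
# packed into one.
districts = [
    (45_000, 55_000),
    (48_000, 52_000),
    (47_000, 53_000),
    (80_000, 20_000),
    (46_000, 54_000),
]

wasted_a = wasted_b = total_votes = 0
for a, b in districts:
    wa, wb = wasted_votes(a, b)
    wasted_a += wa
    wasted_b += wb
    total_votes += a + b

# Efficiency gap: difference in total wasted votes over all votes cast.
efficiency_gap = (wasted_a - wasted_b) / total_votes
print(f"Efficiency gap: {efficiency_gap:.1%} (positive means the map disadvantages Party A)")
```

On these exaggerated numbers the gap works out to roughly 36 percent in Party B’s favor, driven by the same cracking-and-packing pattern the measure is designed to detect.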

What was the impetus for developing the efficiency gap?

The methodological impetus was Eric’s dissatisfaction with the then-dominant measure of partisan fairness, a metric known as partisan symmetry. Partisan symmetry asks how different the parties’ seat shares would be if they each hypothetically received the same share of the statewide vote (e.g., 50%). Eric was unhappy with the need to predict the outcome of a counterfactual election. The efficiency gap entirely avoids these kinds of hypothetical scenarios and is thus usable in settings (like safely blue or red states) where partisan symmetry often goes haywire.

The legal impetus for the efficiency gap was Justice Kennedy’s crucial opinion in the 2006 case of LULAC v. Perry. In that opinion, Kennedy showed some interest in quantitative measures of partisan fairness. But he criticized partisan symmetry precisely because of its reliance on counterfactual elections. Because the efficiency gap isn’t subject to this critique, there was some hope that it would appeal to Kennedy and help persuade him that gerrymandering could be judicially curbed. Unfortunately, Kennedy retired before he ever had the chance to evaluate the efficiency gap (and other promising methodological advances) on the merits.

What are the advantages and disadvantages of using the efficiency gap to evaluate a partisan gerrymander?

Again, one advantage of the efficiency gap is that it doesn’t necessitate speculation about hypothetical elections. Another advantage is that it’s usable in essentially any electoral setting, even one where a single party predominates. Moreover, American elections over the last half-century have had a median efficiency gap near zero. So, the measure’s theoretical ideal is close to the most common historical outcome, and maps with large efficiency gaps are biased not just arithmetically but also relative to the historical record.

As for disadvantages, like any metric that’s based on seats won or lost, the efficiency gap is somewhat volatile. This volatility follows from the knife-edge quality of seats themselves (under a plurality voting rule), where a handful of votes can sometimes be the difference between winning and losing. Additionally, a plan’s efficiency gap tells us nothing about the reason for the plan’s bias (or lack thereof). A large efficiency gap could be attributable to intentional gerrymandering or an accident of political geography.

What are some other methods to evaluate partisan gerrymanders?

The efficiency gap is one of several measures that are calculated using past or projected election results under a plan. I mentioned partisan symmetry above; other such metrics include the mean-median difference and the declination. I won’t define those metrics here, but curious readers can find descriptions and scores at PlanScore.

Another highly promising approach is to use a computer algorithm to generate large numbers of maps that don’t consider partisanship but that do match or beat the enacted plan along every nonpartisan dimension (e.g., population equality, compactness, political subdivision splits, Voting Rights Act compliance). After this ensemble of maps is created, partisanship is then brought back into the picture to analyze both the enacted plan and the computer-generated maps. If the enacted plan is more biased than most or all of the computer-generated maps, that’s powerful evidence that the plan has a partisan intent and effect.
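
As a schematic of that comparison, the sketch below assumes the computationally heavy steps (generating compliant, partisan-blind maps and scoring each one with a bias metric such as the efficiency gap) have already been done, and simply asks how the enacted plan’s score stacks up against the ensemble. All of the scores are hypothetical placeholders, not output from any real redistricting sampler.

```python
import random

random.seed(0)

# Hypothetical bias scores (e.g., efficiency gaps) for 1,000 computer-generated
# maps drawn without regard to partisanship, plus a hypothetical score for the
# enacted plan. In practice these would come from a redistricting sampler and a
# scoring tool.
ensemble_scores = [random.gauss(0.0, 0.03) for _ in range(1_000)]
enacted_score = 0.12

# If the enacted plan is more biased than nearly every nonpartisan map, the
# state's political geography alone cannot explain its tilt.
at_least_as_biased = sum(abs(s) >= abs(enacted_score) for s in ensemble_scores)
print(f"Ensemble maps at least as biased as the enacted plan: "
      f"{at_least_as_biased} of {len(ensemble_scores)}")
```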

Do any states use any of these metrics when drawing districts, or only to evaluate maps after the fact?

A handful of states (notably Michigan and Ohio) are now affirmatively required to achieve partisan fairness when they redraw their maps. Unfortunately, Ohio’s Republican-dominated political commission flouted this requirement (and ended up in litigation as a result). In contrast, Michigan’s independent commission has been carefully considering the efficiency gap and other measures of partisan fairness. Its maps aren’t final yet, but they seem likely to be far fairer than their predecessors in the 2010s.

Can minimizing the partisan effects conflict with other map-drawing interests — compactness, geographic breaks, keeping communities intact, etc.?

In theory, any redistricting goal could conflict with any other redistricting goal (especially if it’s framed in maximalist terms). In practice, it’s usually possible to draw a reasonably fair map that also performs reasonably well along every other legally significant dimension. In states where the prior plan was a gerrymander, it’s usually possible to achieve simultaneous improvement along every criterion — so not just better partisan fairness but also greater compactness, more subdivision preservation, and so on.

With big cities containing so many “wasted” Democratic votes, how do the efficiency gap and other metrics account for the impact of the rural-urban divide?

They don’t. As I noted earlier, poor scores on the efficiency gap (and other measures of partisan fairness) shed no light on what the explanation for the partisan bias might be. Fortunately, computer algorithms do incorporate political geography — that is, how Democrats and Republicans happen to be distributed geographically. Those residential patterns are the crucial backdrop for the computer-generated maps just as they are for the enacted plan. So if the enacted plan is more biased than the computer-generated maps, political geography can’t be the explanation because it’s held constant.

Now that the Supreme Court has held partisan gerrymandering claims to be non-justiciable, what are other applications/uses for metrics measuring partisan effects?

The metrics can play the same role in state constitutional litigation that they previously played in the federal courts — as one kind of evidence of partisan gerrymandering. The metrics can also be used by commissions and other line-drawers who are trying to achieve partisan fairness, not unfairness. And the metrics can help mobilize journalistic coverage, popular outrage and reform efforts. That’s one of the goals of sites like PlanScore — to make it blindingly obvious to everyone when maps are biased and thus corrosive of basic democratic values.


Nicholas Stephanopoulos is the Kirkland & Ellis Professor of Law at Harvard Law School. Stephanopoulos and Eric McGhee defined the efficiency gap as a concept and proposed that the metric be used to help determine whether illegal partisan gerrymandering has occurred in a jurisdiction. He is also a co-founder of PlanScore, a website evaluating past, present and proposed district plans.