Measuring Polarization at the Court

Although numerous commentators have characterized the Wisconsin Supreme Court as polarized, this conclusion often seems based on nothing more precise than acrimonious exchanges between justices or the number of 4-3 decisions filed that term.  Furthermore, the “indicators” of polarization cited by observers are not weighed against the impressions left by the court in preceding decades, which makes it impossible to determine whether the court is any more polarized now than in, say, 2010 or 1970.  Today’s post addresses this shortcoming by offering a means of measuring polarized voting and comparing the results over the last hundred years.

The method
As ChatGPT’s coding skills far surpass mine, I asked it to write the code needed for the laborious task of quantifying polarized voting, and I then ran the code, which relies on Pandas (a Python library for data analysis), on data covering the period from 1924-25 through 2024-25.  The program focuses on nonunanimous decisions and omits terms in which more than seven justices participated (because one justice died or resigned partway through the term and was replaced by another).
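I am not reproducing that code here, but a minimal sketch of the filtering step might look like the following, assuming a hypothetical file (scow_votes.csv) with one row per justice per decision and columns for the term, the case, the justice, and whether the justice was in the majority or in dissent:

```python
import pandas as pd

# Hypothetical layout: one row per justice per decision, with columns
# "term", "case_id", "justice", and "vote" ("majority" or "dissent").
votes = pd.read_csv("scow_votes.csv")

# Keep only nonunanimous decisions, i.e., cases with at least one dissent.
dissent_cases = votes.loc[votes["vote"] == "dissent", "case_id"].unique()
nonunanimous = votes[votes["case_id"].isin(dissent_cases)]

# Drop any term in which more than seven justices participated
# (a seat changed hands partway through the term).
justices_per_term = nonunanimous.groupby("term")["justice"].nunique()
kept_terms = justices_per_term[justices_per_term <= 7].index
nonunanimous = nonunanimous[nonunanimous["term"].isin(kept_terms)]
```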

For each term, the program runs through every possible way of dividing the justices into two groups and identifies the pair of groups whose members are least likely to vote in agreement with the justices in the other group.  There could be four justices in one group and three in the other, for example, or five in one group and two in the other.  Once the program has found the two groups with the least agreement between them, it generates the following numbers (a rough sketch of this search in code appears after the list).

Within-group agreement: How often, on average, do two justices in the same group vote in agreement?  This is computed as a percentage—a single-number average for both groups.

Across-group agreement: How often, on average, do two justices in different groups vote in agreement?  This average is also computed as a percentage.

Gap points: This number (not a percentage) is the difference between the within-group and across-group percentages.  The larger the number, the more polarized the voting.[1]
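To make the procedure concrete, here is a rough sketch of the search in Python.  This is not ChatGPT’s code; it assumes a symmetric table (called agree below, either a nested dictionary or a Pandas DataFrame) that holds, for each pair of justices, the percentage of a term’s nonunanimous decisions in which they voted the same way, and it assumes that each group must contain at least two justices so that within-group agreement is defined.

```python
from itertools import combinations

def two_group_splits(justices, min_size=2):
    """Yield every way to divide the bench into two groups.

    Requiring at least two justices per group (an assumption here) keeps
    within-group agreement defined for both sides.
    """
    justices = list(justices)
    seen = set()
    for k in range(min_size, len(justices) - min_size + 1):
        for group_a in combinations(justices, k):
            group_b = tuple(j for j in justices if j not in group_a)
            key = frozenset([frozenset(group_a), frozenset(group_b)])
            if key not in seen:              # skip mirror-image duplicates
                seen.add(key)
                yield group_a, group_b

def split_metrics(agree, group_a, group_b):
    """Within-group %, across-group %, and gap points for one split."""
    within = [agree[x][y] for g in (group_a, group_b)
              for x, y in combinations(g, 2)]
    across = [agree[x][y] for x in group_a for y in group_b]
    within_avg = sum(within) / len(within)
    across_avg = sum(across) / len(across)
    return within_avg, across_avg, within_avg - across_avg

def most_polarized_split(agree, justices):
    """Return the split whose two groups agree with each other least."""
    best = None
    for group_a, group_b in two_group_splits(justices):
        within_avg, across_avg, gap = split_metrics(agree, group_a, group_b)
        if best is None or across_avg < best[1]:
            best = (within_avg, across_avg, gap, group_a, group_b)
    return best
```

Run term by term, the last function yields within-group, across-group, and gap figures of the kind reported below.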

Here are two illustrations.  For the 2016-17 term, the program identified these two groups of justices: (Abrahamson and AW Bradley) and (Roggensack, Ziegler, Gableman, RG Bradley, and Kelly).  It calculated that, on average, two justices in the same group voted in agreement 83% of the time, while two justices in different groups voted together only 20.6% of the time.  This yielded a “gap” of 62.4 points (83 – 20.6), indicating highly polarized voting in nonunanimous decisions.

In contrast, the two most distinct groups of justices[2] for the 1948-49 term produced a “within-group” percentage of 66.9, an “across-group” percentage of 57.9, and thus a “gap” of only 9 points, suggesting a negligible level of polarized voting.
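Restated as arithmetic, the gap in each example is simply the within-group percentage minus the across-group percentage:

```python
# Gap points = within-group % minus across-group %, using the figures above.
examples = {
    "2016-17": (83.0, 20.6),   # (within-group %, across-group %)
    "1948-49": (66.9, 57.9),
}
for term, (within, across) in examples.items():
    print(f"{term}: gap = {within - across:.1f} points")
# 2016-17: gap = 62.4 points
# 1948-49: gap = 9.0 points
```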

Given the errors it produces on occasion, we have all been well advised to use ChatGPT with caution.  The results here seem plausible, but I would want to confirm them with some other technique if fateful consequences were on the line.  Should anyone wish to inspect the code that ChatGPT created, I’m happy to provide it upon request.

Results
The following graph of “gap points” demonstrates that voting has grown more polarized over the decades.  Indeed, the ten smallest gaps all occurred before the mid-1960s, while the ten largest gaps are concentrated after 2010.  By this measure, the most polarized voting of all took place during Justice Michael Gableman’s decade on the bench (2008-09 through 2017-18), which recorded the six largest gaps of the century.  Polarized voting was also conspicuous during the two most recent terms, with 2023-24 accounting for the seventh largest gap and 2024-25 the ninth.

Conclusion
It’s worth noting that simply calculating the number or percentage of 4-3 decisions will not necessarily tell us much about polarization.  It might, if the same clusters of justices usually voted together, but there could also be 4-vote majorities formed by many different combinations of justices.  In 2007-08, for example, the court filed fourteen 4-3 decisions with only two different majorities, while in 1964-65 there were nine 4-3 decisions with eight different majorities.  No surprise, then, that 2007-08 generated well over twice as many “gap points” as 1964-65 (51.6 compared to 21.6). 
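For anyone curious how such a tally could be produced, here is a minimal sketch that reuses the hypothetical votes table from the earlier snippet; it counts a term’s 4-3 decisions and the number of distinct four-justice majorities among them:

```python
def four_three_breakdown(votes, term):
    """Count a term's 4-3 decisions and its distinct four-justice majorities."""
    term_votes = votes[votes["term"] == term]
    majorities = set()
    four_three = 0
    for _, case in term_votes.groupby("case_id"):
        majority = frozenset(case.loc[case["vote"] == "majority", "justice"])
        dissent = frozenset(case.loc[case["vote"] == "dissent", "justice"])
        if len(majority) == 4 and len(dissent) == 3:
            four_three += 1
            majorities.add(majority)
    return four_three, len(majorities)
```

Many 4-3 decisions produced by only a handful of recurring majorities point toward polarized voting; the same count spread across many different majorities does not.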

Other topics take us beyond this post’s scope, one being the question of when polarization becomes unhealthy.  This resembles, from another perspective, a question raised in the previous post regarding the “desirable” level of unanimity.  There will doubtless remain a wide range of views as to whether the degree of polarization of any particular court is troubling and, if so, which justices bear the most blame.

Finally, there are other ways of assessing polarization.  For instance, rather than focus on statistical measures of polarized voting as this post does, one might analyze the changing tone of language employed in dissents over the decades.  I’ve noticed quite a difference, but that’s a subject for another day.

 

[1] The code also generated other measures of polarized voting, all of which created the same impression as the output summarized below.

[2] Justices Hughes and Wickhem in one group and Justices Broadfoot, Martin, Fairchild, Fritz, and Rosenberry in the other.

About Alan Ball

Alan Ball is a Professor of History at Marquette University in Milwaukee, WI.

alan.ball@marquette.edu

SCOWstats offers numerical analysis of the voting by Wisconsin Supreme Court justices on diverse issues over the past 104 years.

