Numerous articles have wrestled with the question of whether amicus briefs influence judicial decisions. These essays generally conclude that in some cases amicus involvement can impact outcomes—by providing technical expertise or discussing the broader implications of a ruling, for instance. However, unless a majority opinion closely follows the line of argument (and even the wording) in an amicus brief, or lauds the brief extensively in some other way, it is often impossible to establish to what degree an amicus party helped shape the findings of a majority opinion. Still more inaccessible is the answer to a bolder question: Would a court have ruled for a different party had amici been absent altogether?
No such mystery cloaks the AI justice employed in a recent post. There, we asked Google’s NotebookLM to consider only the parties’ briefs to determine which among them made the most compelling arguments. Today we’ll add amicus briefs to the mix and see if their contributions ever persuade our AI umpire to change its “mind.”
2015-16
As in the previous post, our experiment utilizes cases from 2015-16 and 2023-24, but filtered now to include just cases with amicus participation: ten cases in 2015-16 and eight in 2023-24. We’ll instruct NotebookLM to begin with 2015-16, analyzing each of the ten cases twice—first by reading only briefs submitted by the parties and reporting which party made the most credible arguments. Next, we’ll introduce the amicus briefs and see if these alter NotebookLM’s “opinion”[1] as to which party proved more convincing.
Table 1 shows that in four of the ten cases, the addition of amicus briefs prompted NotebookLM to switch its opinion. Two of the changes (Milwaukee Police Association and McKellips) brought AI’s assessment into alignment with that of the court’s majority, and in two other cases (Dufour and Wisconsin Pharmacal) the inclusion of amicus briefs persuaded AI to part company with the majority opinions.
Turning to individual justices, the enhancement of NotebookLM’s diet with amicus briefs increased the frequency with which AI “verdicts” aligned with the court’s conservative wing (Table 2). The percentages for Justices Roggensack and Gableman did not budge (nor did they for the court’s two liberals, Justices Abrahamson and A.W. Bradley), but the amicus supplements induced AI to side more often with the two other staunch conservatives (Justices Ziegler and R.G. Bradley).
2023-24
Eight cases attracted amicus briefs in 2023-24, close to the total of ten such cases in 2015-16. Beyond that similarity, however, several differences catch the eye when comparing amicus briefs from the two terms. First, reflecting the large share of politicized cases in 2023-24, far more of these briefs were filed—a total of 40, for an average of five amicus briefs per case. The ten cases from 2015-16 included only 16 amicus contributions, for an average of just 1.6 per case.
As to the issue at hand (how frequently amicus briefs prompted NotebookLM to reconsider its findings), the abundance of amicus briefs in 2023-24 had no pronounced effect. Perhaps these briefs tended to neutralize one another under AI’s scrutiny. Be that as it may, in only two of the eight cases did the addition of amicus briefs convince our AI justice to reverse the assessment it had offered on the basis of the parties’ briefs alone (Table 3), a “reversal rate” of just 25%, compared with 40% for 2015-16.
The 2023-24 term also departed from 2015-16 in the impact of amicus briefs on AI’s alignment with individual justices. We saw that in 2015-16 the inclusion of amicus briefs led AI to conclusions that, overall, brought it closer to the court’s conservative wing than when it relied solely on the parties’ briefs. The opposite happened in 2023-24. As shown in Table 4, AI fell further out of step with all of the court’s conservatives once amicus briefs entered its appraisal.
Conclusion
Amicus briefs did sway our AI justice on occasion, though it is unclear which factors proved decisive. Whatever the explanation, it cannot rest merely on the number of briefs per case. While four of the eight cases in 2023-24 drew six or more amicus briefs, there were just three such briefs in each of the two cases where AI reversed itself. In 2015-16, the ten cases with amicus briefs averaged 1.6 briefs per case; only one case had as many as three, and four cases had two. In the four cases where AI reversed itself, the average was a scarcely higher 1.75.
Nor is it evident that amicus briefs supporting “liberal” or “conservative” viewpoints enjoyed markedly greater success in convincing AI to change the impression that it had formed after reading only the parties’ briefs. Perhaps AI was simply responding to amicus briefs of unusually high quality—or, at any rate, briefs that managed to hit sweet spots programmed into the virtual justice.[2]
[1] In addition to the examples of NotebookLM’s “verdicts” provided in the first post, here are two more—one for Walworth State Bank v. Abbey Springs Condominium Association and another for Priorities USA v. Wisconsin Elections Commission.
[2] Given that the supreme court not only received the briefs but also conducted oral arguments with the parties in all these cases, I had hoped to see if oral-argument transcripts ever affected AI’s “verdicts.” Unfortunately, the transcripts available to me, while certainly intelligible, contained enough glitches that it seemed best to wait in the hope of cleaner documents someday.