The last few years have shown that the Supreme Court is now covered by the same toxic cloud that has enveloped all of the nation’s public discourse. Although the Court is still respected more than most institutions, it’s increasingly viewed through a political lens. What most concerns people is how judicial politics affect the Court’s “legitimacy” — perhaps a subject for another time. But what lessons can we draw from the history of confirmation battles?
Politics Has Always Been Part of the Process
Politics has always been part of the process of selecting judicial nominees, and even more part of the process of confirming them. From the beginning of the republic, presidents have picked justices for reasons that include balancing regional interests, supporting policy priorities, and providing representation to key constituencies. Whether looking to candidates’ partisan labels or “real” politics, they’ve tried to find people in line with their own political thinking, and that of their party and supporters. Even in the old days, it was rare for someone to be on the Supreme Court “short list” of presidents of multiple parties. Look at the judicial battles of John Adams and Thomas Jefferson, with the Midnight Judges Act — the original court-packing — as well as Jefferson’s failed attempts to counter the great Federalist John Marshall with his own appointees. In the years that followed, when U.S. politics were defined by rivalries within the Democratic-Republican Party and its successors, ambitious lawyers knew that their careers depended on navigating the intra-party split. There’s never been a golden age when “merit” as an objective measure of legal acumen was the sole consideration for judicial selection.
When those nominees got to the Senate, they faced another gauntlet, particularly when the president’s party didn’t have a majority. Historically, the Senate has confirmed fewer than 60 percent of Supreme Court nominees under divided government, as compared to just under 90 percent when the president’s party controlled the Senate.1 Timing matters too: over 80 percent of nominees in the first three years of a presidential term have been confirmed, but barely more than half in the fourth (election) year.
Nearly half the presidents have had at least one unsuccessful nomination, starting with George Washington and running all the way through George W. Bush and Barack Obama. James Madison had a nominee rejected, while John Quincy Adams had one “postponed indefinitely” — you have to love that euphemism. Andrew Jackson was able to appoint Roger Taney only after a change in Senate composition, while poor John Tyler, a political orphan after the Whigs kicked him out of their party, had only one successful nomination in nine attempts. Most 19th-century presidents had trouble filling seats, before a run from 1894 until 1968 in which only one nominee was rejected: John Parker under Herbert Hoover in 1930. Since LBJ, every president who has made more than one nomination has had at least one fail, except George H.W. Bush, Bill Clinton, and Donald Trump — who nonetheless had three of the most contentious nominations in our history.
In all, of 164 nominations formally sent to the Senate (counting each submission, even of the same person), only 127 were confirmed, a success rate of 77 percent. Of those 127, one died before taking office and seven declined to serve, the last one in 1882 — an occurrence unlikely ever to happen again. Of the 37 that failed, 12 were rejected, 12 were withdrawn, ten expired without the Senate’s taking any action, and three were postponed or tabled. In other words, for various reasons, only 119 of those 164 nominations — fewer than three-quarters — resulted in someone actually serving on the high court.
Judging by the relative rates of unsuccessful nominations, one could argue that the nomination and confirmation process was more political during the nation’s first century than since. Both the presidency and the Court were relatively weak, and the process was more of an insider’s game, with many picks based on personal loyalty and political philosophy rather than approach to the law. Of the 57 justices confirmed between 1789 and 1898, 17 lacked significant judicial experience.2 As the judiciary took on a greater role, however, nominations attracted more public attention and became more transparent. Interest groups began to matter — unions and the NAACP contributed to Parker’s 1930 rejection — as public relations became just as important as Senate relations. Politics came back into the process, but in a different way: the battle became one over ideology and public perception rather than satisfying intra-party or regional factions.
Confirmation Fights Are Now Driven by Judicial Philosophy
As we’ve seen over the long sweep of American history, confirmation controversies are hardly unprecedented. To a certain extent, the politicization of Supreme Court appointments has tracked political divisions nationally. But the reasons for the controversies of the last few decades are largely new. While inter- and intra-party politics have always played a role, couching opposition in terms of judicial philosophy is a relatively recent phenomenon and a real departure from the past.
Pre-modern controversies tended to revolve around either the president’s relationship with the Senate or deviations from shared understandings of the factors that go into nominations for particular seats — especially geography and patronage. That dynamic is markedly different from the ideological considerations we see now, for at least two reasons. First, modern fights transcend any particular nominee or even president, evolving and growing and filtering into the lower courts. Second, ideological litmus tests cause more of a problem than the geographic, patronage, religious, and other past criteria because there’s no longer widespread acceptance that a president gets to have his choice as long as the nominee meets those other, more neutral criteria. With the two major parties adopting essentially incompatible judicial philosophies, it’s impossible for a president to find an “uncontroversial” nominee.
The conservative legal movement, meanwhile, has learned its lesson: “no more Souters” means there has to be a proven record that shows, rather than tells, a commitment to originalism and textualism, not simply center-right views and affiliations. Considering someone who doesn’t have a long judicial record, or at least academic writings to the same originalist-textualist effect, opens the door to the sort of presidential discretion that has backfired in the past.
The entire reason candidate Trump released his list was to convince Republicans, as well as cultural conservatives who might otherwise have stayed home or voted Democratic, that he could be trusted to appoint the right kind of judges. This was a real innovation, and we could see such lists become standard practice, even if the two parties would use different criteria to shape them. Democrats would likely put more weight on demographic representation, having a broader swath of lawyers — if not necessarily federal judges — to choose from.
The current emphasis on judicial philosophy may well be an updating of the “real politics” approach favored by presidents in the early 1900s — except now applied to intellectual commitments instead of trying to find (or avoid) progressive Republicans or conservative Democrats. But the problem is that there aren’t many progressive originalists or conservative living-constitutionalists, at least not ones whose ideological label doesn’t swallow their philosophical one. Even Merrick Garland, who’s about as much of a moderate as President Obama could find, didn’t budge the Republican Senate.
Modern Confirmations Are Different Because the Political Culture Is Different
The inflection point for our legal culture, as for our social and political culture, was 1968, which ended that 70-year near-perfect run of nominations. Until that point, most justices were confirmed by voice vote, without having to take a roll call. Since then, there hasn’t been a single voice vote, not even for the five justices confirmed unanimously or the four whose no votes were in the single digits. And despite those “easy” confirmations, we’ve seen an upswing in no votes; five of the closest eight confirmation margins have come in the last 30 years. Not surprisingly, the increased opposition and scrutiny have also brought an increase in the time it takes to confirm a justice; six of the eight longest confirmations — and all but one that took longer than 80 days — have come since 1986. Every confirmation since the mid-1970s except Sandra Day O’Connor, Ruth Bader Ginsburg, and the expedited pre-election process for Amy Coney Barrett has taken more than two months.
There are many factors going into the contentiousness of the last half-century: the Warren Court’s activism and then Roe v. Wade, spawning a conservative reaction; the growth of presidential power to the point where the Senate felt the need to reassert itself; the culture of scandal since Watergate; a desire for transparency when technology allows not just a 24-hour media cycle but a constant and instant delivery of information and opinion; and, fundamentally, more divided government.3 As the Senate has grown less deferential, and presidential picks have become more ideological, seeking to achieve a certain legal agenda or empower a certain kind of jurisprudence rather than merely appointing a good party man, the clashes have grown.
And as these philosophical battle lines have hardened, so have the media campaigns orchestrated by supporters and opponents of any given nominee. There’s a straight line from the national TV ads against Robert Bork to the tens of millions of dollars spent on the fight over Brett Kavanaugh, including sophisticated targeting of digital media to voters in states whose senators are the deciding votes. “It’s a war,” explained Leonard Leo, who now chairs the public affairs firm CRC Advisors, “and you have to have troops, tanks, air, and ground support.”4
To put a finer point on it, all but one of the failed nominations since Abe Fortas’s in 1968 have come when the opposite party controlled the Senate. The one exception is Harriet Miers, who withdrew because she was the first nominee since Harrold Carswell in 1970 to be seen as not up to the task. The last nominee rejected by a Senate whose majority was the same party as the president was Parker, by two votes in 1930. For that matter, this turbulent modern period has seen few outright rejections — Nixon’s two and Bork are the only ones, in 53 years — with pre-nomination vetting and Senate consultation obviating most problematic picks.
At the same time, the inability to object to qualifications has led to manufactured outrage and scandal-mongering. This was more evident before considerations of judicial philosophy became standard practice, when Bork was an outlier. “Many people sneer at the notion of litmus tests for purposes of judicial selection or confirmation — even as they unknowingly conduct such tests themselves,” Harvard law professor Randall Kennedy wrote 20 years ago. The real problem, as he saw it, was that not being able to discuss ideology led to a search for scandal. “A transparent process in which ideological objections to judicial candidates are candidly voiced,” he concluded, “is a much-needed antidote to the murky ‘politics of personal destruction.’ ”5 Sounding the same refrain at the same time was one Chuck Schumer: “The taboo [on invoking ideology] has led senators who oppose a nominee for ideological reasons to justify their opposition by finding non-ideological factors, like small financial improprieties from long ago. This ‘gotcha’ politics has warped the confirmation process and harmed the Senate’s reputation.”6
Well, that taboo no longer exists — which is a good, honest thing, because vetting a nominee’s judicial philosophy is important — and yet we still got the Kavanaugh hearings.
Hearings Have Become Kabuki Theater
Public confirmation hearings have only been around for a century, starting with Louis Brandeis’s nomination in 1916. But Brandeis didn’t testify at his own hearing; the first nominee to take unrestricted questions in an open hearing was Felix Frankfurter in 1939. Nominee testimony simply wasn’t regular practice until the 1950s. At that point, the hearings became a chance for Southern Democrats to rail against Brown v. Board of Education. Few senators other than the segregationists even asked the nominees questions. Otherwise, hearings became perfunctory discussions of personal biography, as with Charles Whittaker in 1957 or the man who succeeded him in 1962, Byron White. John Paul Stevens, the first nominee after Roe v. Wade, wasn’t even asked about that case — which was already controversial, make no mistake. The focus in that post-Fortas, post-Watergate time was on ethics, and he was confirmed 19 days after nomination.
Things changed in the 1980s, not coincidentally when the hearings began to be televised. Now all senators ask questions, especially about key controversies and fundamental issues, but nominees largely refuse to answer, creating what Elena Kagan 25 years ago called a “vapid and hollow charade.”7 But even within this conventional narrative, there has been a subtle shift: from Bork in 1987 through Stephen Breyer in 1994, nominees went into some detail about doctrine.8 “This is not to say that nominees during those years made commitments about how they would rule on contested legal issues. But they did discuss their judicial philosophies, their past writings and their beliefs about the role of judges.”9 Clarence Thomas discussed natural law and the role that the Declaration of Independence plays in constitutional interpretation. Ruth Bader Ginsburg talked about gender equality and the relationship between liberty and privacy.
Beginning with John Roberts in 2005, however, nominees have still covered the holdings of cases and what lawyers call “black letter law” — what you need to know to get a good grade in law school — but they have revealed little of their personal views. They speak in platitudes: Roberts with his judges-as-umpires analogy, Sotomayor saying that fidelity to the law was her only guidepost, Kagan accepting that “we’re all originalists now.” President Trump’s nominees, starting with Neil Gorsuch and filtering down to lower-court nominees, have even been hesitant to take a position on whether iconic cases like Brown were correctly decided, lest their inability to similarly approve of another longstanding precedent (notably Roe) cast doubt on its validity.
These days, senators try to get nominees to admit that certain controversial cases are “settled law,” whether Roe when coming from a Democrat or District of Columbia v. Heller from a Republican. Of course, when you’re dealing with the Supreme Court, law is settled until it isn’t, so nominees have come to say that every ruling is “due all the respect of a precedent of the Supreme Court,” or some such. That may or may not be a lot of respect, depending on the future justice’s view of the merits and of the weight of stare decisis — the idea that some erroneous precedent should be allowed to stand to preserve stability in the law and protect reliance interests. And that’s before we even get to the “gotcha” questions, or last-minute accusations of sexual impropriety.
Every Nomination Can Have a Significant Impact
The actual hearings, and the confirmation process more broadly, have very little to do with being a judge or justice. Once that spectacle is over, the new justice takes his or her seat among new colleagues — a lifetime “team of nine,” as Justice Kavanaugh called it at his confirmation hearing — to begin reading briefs and considering technical legal issues. It must be a surreal experience, having run an American Ninja Warrior course to win a life of quiet contemplation and oracular pronouncements. Or, as President Trump’s first White House Counsel Don McGahn put it, “it’s a Hollywood audition to join a monastery.”10
Regardless, once you’re in, you’re in. As Justice White told Justice Thomas when the latter first joined the Court, “It doesn’t matter how you got here. All that matters now is what you do here.”11 After all the nomination hoopla, the Supreme Court is still a court, albeit with a new composition that affects both internal dynamics and external results. White was also fond of saying that every justice creates a new Court, so each change shakes up the previous balance — regardless of how close in “expected” philosophy a new justice might be to his or her predecessor.
That’s why every vacancy is important. Not all historically significant cases would’ve turned out differently if one justice were replaced — Marbury v. Madison and other Marshall Court cases were typically unanimous, Dred Scott was 7–2, Plessy v. Ferguson was 7–1, Korematsu v. United States was 6–3, Wickard v. Filburn was unanimous, as was Brown, while Roe was 7–2 — but some would have. And not simply by changing the party of the president making the appointment. The Slaughterhouse Cases, which eviscerated the Fourteenth Amendment’s protections against state action, were a 5–4 ruling with Lincoln appointees split 2–3, Grant appointees split 2–1, and a Buchanan appointee breaking the tie. Lochner v. New York was another 5–4, with Republican appointees split 3–3 and Democratic appointees split 2–1. The early New Deal cases typically split 6–3 or 5–4 against expansions of federal power, aligning the Four Horsemen (three Republican appointees and James Clark McReynolds) against the Three Musketeers (two Republican appointees and Brandeis), with two other Republican appointees in the middle, culminating in 1937’s “switch in time that saved nine.”
And all that’s before we get to the modern era, when we got used to first Justices Potter Stewart and Lewis Powell, then Sandra Day O’Connor and Anthony Kennedy, as the swing votes on issues ranging from affirmative action and redistricting to religion in the public square and gay rights. So many cases would’ve been decided differently had the conservative Bork been confirmed instead of the moderate Kennedy, and differently still had the libertarian Douglas Ginsburg occupied that seat. For that matter, had Edith Jones been nominated in 1990 instead of David Souter, Kennedy wouldn’t have been the median vote from 2005 to 2018; John Roberts would’ve been. And if Michael Luttig had been picked instead of Roberts in 2005 — whether as chief justice or with Antonin Scalia elevated and Samuel Alito in Scalia’s place — it would’ve been a very different Court these last 16 years.
Moreover, Court majorities are fragile and subject to affinities and clashes. Chief Justice Marshall drew people toward him who normally wouldn’t agree with him. Justice McReynolds pushed everyone away. Justice William Brennan was gregarious and a skilled tactician, often outmaneuvering Chief Justice Warren Burger. Justice O’Connor may have shaded left in response to Justice Scalia’s provocations, or to balance the arrival of the more conservative Justice Thomas.
In part because they’ve been burned so many times, Republicans focus on the Court as an election issue much more than Democrats. Bush v. Gore, Citizens United, and Shelby County, the three biggest progressive losses of the last 25 years, have riled activists and elites, and ratcheted up confirmation battles, but haven’t translated into campaigns regarding judges as such. “Republicans seem conditioned to feel that when they’re not paying attention, the courts will cause them all kinds of trouble,” Co-Chair Bauer once explained to me. “Democrats have come to have a similar concern, but for a long time, with visions of Warren, Brennan, Stevens and the like, they were more optimistic — maybe to a fault.”12
Democrats may now be catching up, even though, during the Garland experience, they didn’t make much of the vacancy or the Republicans’ blockade. The result of the 2016 presidential election is that, for the first time in the modern era, and perhaps more clearly than ever, different judicial methodologies and approaches to legal interpretation line up with partisan preferences. For the foreseeable future, every Supreme Court vacancy is an opportunity to either prolong one party’s control of a particular seat or “flip” it.
Another reason why filling each vacancy is such a big deal is that justices now serve longer. In the late 1700s, when life expectancy was under 40 — skewed by infant mortality, of course — the average age of a Supreme Court nominee was about 50. Today, when life expectancy in the United States is just under 80 — more than that for those who are already in late middle age — the average age of a Supreme Court nominee is still not much above 50. And that includes the outlier Merrick Garland, who at 63 wouldn’t have been picked had it not been for the unusual situation in which President Obama tried to offer a compromise. Since 1972, only one of 16 justices (Ginsburg) was over 55 at confirmation.
To put it another way, before 1970, the average tenure of a Supreme Court justice was less than 15 years. Since then, it’s been more than 25. The expected length of a justice’s service once confirmed has grown from about eight years at the beginning of the republic to 25–30 years today. Justices appointed at or before age 50, like Roberts, Kagan, Gorsuch, and Barrett, are likely to serve 35 years, or about nine presidential terms, projecting the legal-policy impact of Presidents Bush, Obama, and Trump, respectively, as far into the future as Justices Scalia and Kennedy did for President Reagan. Justice Thomas, who was 43 when he joined the Court and has already served nearly 30 years, could serve another decade!
The Hardest Confirmations Come When There’s a Potential for a Big Shift
In addition to divided government, what makes for the most contentious nominations, at a time when the Court’s ideological profile is more clearly defined, is the threat of a shift in the Court’s jurisprudence. Replacing the centrist Powell with the conservative Bork provoked a firestorm, but putting another moderate in that seat was easy. Replacing liberal lion Thurgood Marshall with counterculture conservative Clarence Thomas was a fight, but appointing Scalia to William Rehnquist’s seat when Rehnquist was elevated was a cakewalk. Would Kavanaugh have faced such strong opposition had he been nominated for Thomas’s seat? Probably not.
There have been only two obvious potential shifts in a more liberal direction. The first was Ginsburg’s replacement of White, but that smooth confirmation came at a time when the Democrats had a significant Senate majority (57–43), newly elected Bill Clinton was enjoying his honeymoon — remember when presidents had those? — and White himself had been appointed by a Democratic president. The second was Garland’s nomination to replace Scalia.
Think of it this way: regardless of which party controlled the Senate, would there have been as big a political firestorm last fall if President Trump were replacing Justice Thomas rather than Justice Ginsburg? Will the fight to replace Justice Breyer be fiercer under President Biden or a Republican president?
Of course, presidents aren’t always successful in moving the Court in their preferred direction. Thomas Jefferson tried valiantly to dislodge the powerful Federalist judicial impulse, only to see his nominees fall under John Marshall’s sway. Abraham Lincoln named Treasury Secretary Salmon P. Chase as chief justice, partly to get him out of his hair, but more importantly to uphold the legislation by which the federal government had financed the Civil War, and which Chase had helped draft. Instead, Chief Justice Chase wrote the opinion finding the Legal Tender Act unconstitutional. Ulysses Grant wanted to mold the Court for the post-Civil War world, but it took him eight nominations to seat four justices of varying quality and political direction. Teddy Roosevelt should’ve been pleased with the great progressive Oliver Wendell Holmes, but after a vote in the major antitrust case of the time, TR inveighed that “I could carve out of a banana a judge with more backbone than that.”13
Woodrow Wilson, a renowned scholar of jurisprudence and thus in theory more sensitive to these concerns than most other presidents, named another storied progressive, Brandeis, but also the most retrograde justice of that or possibly any time, McReynolds, who didn’t seem to share any of Wilson’s views other than with regard to antitrust (and bigotry). Calvin Coolidge’s sole nominee, Harlan F. Stone, would end up betraying his benefactor’s laissez-faire proclivities by joining with Justices Holmes and Brandeis in taking the Court in a judicially restrained, and therefore progressive, direction. Harry Truman called putting Tom Clark on the Supreme Court his “biggest mistake” after Justice Clark ruled against his 1952 seizure of steel mills.14 Dwight Eisenhower was disappointed with both Earl Warren and William Brennan, although the latter was more of a political calculation ahead of the 1956 election, intended to help with the Catholic (and crossover Democrat) vote. Nixon’s appointment of Harry Blackmun similarly mitigated the reversal of the Warren Court that he had hoped to achieve, particularly given that Warren Burger wasn’t a particularly strong leader and Lewis Powell became more of a moderate.
Ronald Reagan too advanced his own legal-policy agenda only with Scalia — elevating Rehnquist didn’t add any votes — as O’Connor and Kennedy occupied the Court’s middle rather than pushing originalism, “strict construction,” law-and-order conservatism, or any other articulation of what Republicans wanted. George H.W. Bush of course had Souter in addition to Thomas. His son, looking for reliable conservatives, checked that box with Roberts and Alito but didn’t realize that a focus on judicial restraint could also lead to an over-deference to Congress.
While a justice might feel “loyal to the president who appointed him,” then-Justice Rehnquist told a law school audience in 1984, “institutional pressures … weaken and diffuse the outside loyalties of any new appointee.”15 At the same time, he explained, “one may look at a legal question differently as a judge than one did as a member of the executive branch” — and Rehnquist would know, having been a high Justice Department official. Moreover, a nominee picked for his views on the issues of the day — government expansion under FDR, executive power over national security under George W. Bush — might act contrary to type when the issue mix changes. The judicial restraint of Felix Frankfurter, a New Deal progressive who co-founded the ACLU, made him a conservative in the postwar era, while John Roberts’s similar restraint leads him to defer both to a wartime president and a peacetime Congress.
The Court Rules on So Many Controversies That Political Battles Are Unavoidable
Under the Framers’ Constitution, by which the country lived for its first 150 years, the Supreme Court hardly ever had to curtail a federal law. If you read the records of congressional debates from the 18th and 19th centuries, you’ll see that Congress debated whether particular legislation was constitutional much more than whether it was a good idea. Debates focused on whether something was genuinely for the general welfare or whether it only served a parochial or regional interest. “Do we have the power to do this?” was the central question. In 1887, Grover Cleveland vetoed an appropriation of $10,000 for seeds to Texas farmers who were suffering from a terrible drought because he could find no warrant for such an appropriation in the Constitution.16 Twenty years later, the Supreme Court declared, “the proposition that there are legislative powers affecting the nation as a whole although not expressed in the specific grant of powers is in direct conflict with the doctrine that this is a government of enumerated powers.”17
We also had a stable system of rights that went beyond those listed in the Bill of Rights. These rights were retained by the people under the Ninth Amendment, just as the Tenth Amendment restated the document’s underlying structure: a government of delegated and enumerated, and therefore limited, powers.
Judges play bigger roles today; as the Court has allowed the government to grow, its own power to police the federal programs that its jurisprudence enabled has grown as well. For example, the idea that the General Welfare Clause justifies any legislation that gains a majority in Congress — as opposed to limiting federal reach to national issues — emerged in the Progressive Era. In 1935, FDR wrote to the chairman of the House Ways and Means Committee, “I hope your committee will not permit doubts as to constitutionality, however reasonable, to block the suggested legislation.”18 Decades later, Rexford Tugwell, a New Deal architect, wrote that “to the extent that these [policies] developed they were tortured interpretations of a document intended to prevent them.”19 In the 1930s and ’40s, we thus had the perverse expansion of the Commerce Clause with cases like NLRB v. Jones & Laughlin and Wickard v. Filburn, which gained renewed prominence in the constitutional debate over Obamacare. After the “switch in time that saved nine,” when the Court began approving grandiose legislation it had previously rejected, no federal legislation would be set aside as exceeding congressional power until 1995.
We also had the flipside of the expansion of powers: the warping of rights. In 1938, the infamous Footnote Four in the Carolene Products case bifurcated our rights such that certain rights are more equal than others, in a kind of Animal Farm approach to the Constitution. So it’s the New Deal Court that politicized the Constitution, and thus also the confirmation process, by laying the foundation for judicial mischief of every stripe — but particularly by letting laws sail through that should have been invalidated. The Warren Court picked up that baton by invalidating laws in areas best left to the political branches, micromanaging cultural disputes in a way that made the justices into philosopher kings and elevated and sharpened society’s ideological tensions.
In that light, modern confirmation battles — whether you look at Bork, Thomas, the filibustering of George W. Bush’s lower-court nominees, the scrutiny of Sotomayor’s “wise Latina” comment, or the party-line votes on Trump’s appointees — are all part of, and a logical response to, the political incentives created by judges’ newly expansive role. When judges act as super-legislators, the media and the public understandably want to scrutinize their ideology.
As Roger Pilon wrote presciently nearly 20 years ago, “Because constitutional principles limiting federal power to enumerated ends have been ignored, the scope of federal power and the subjects open to federal concern are determined now by politics alone. Because the rights that would limit the exercise of that power are grounded increasingly not in the Constitution’s first principles but in the subjective understandings of judges about evolving social values, they too increasingly reflect the politics of the day.”20
The ever-expanding size and scope of the federal government has increased the number and complexity of issues brought under Washington’s control, while the collection of those new federal powers into the administrative state has transferred ultimate decision-making authority to the courts. The imbalance between the executive branch and Congress — especially the latter’s abdication of its leading constitutional role by delegating what would otherwise be legislative responsibilities — has made the Supreme Court into the decider both of controversial social issues and complex policy disputes. Senator Ben Sasse (R‑Neb.) wrote about this dynamic in a Wall Street Journal op-ed adapted from his opening remarks at the Kavanaugh hearings: