The Right to (Human) Counsel: Real Responsibility for Artificial Intelligence

Keith Swisher[1]*

The bench and bar have created and enforced a comprehensive system of ethical rules and regulation. In many respects, it is a unique and laudable system for regulating and guiding lawyers, and it has taken incremental measures to account for the wave of new technology involved in the practice of law. But it is not ready for the future. It rests on an assumption that humans will practice law. Although humans might tinker at the margins, review work product, or serve some other useful purposes, they likely will not be the ones doing most of the legal work in the future. Instead, AI counsel will be serving the public. For the system of ethical regulation to serve its core functions in the future, it needs to incorporate and regulate AI counsel. This will necessitate, among other things, bringing on new disciplines in the drafting of ethical guidelines and in the disciplinary process, along with a careful review and update of the ethical rules as applied to AI practicing law.

Introduction: Choice of Counsel

If you were to choose a lawyer to provide important legal advice, which of the following two lawyers would you choose:

Lawyer Kingsfield: this lawyer has handled 30,000 court cases and can readily recall 10,000 of them. He has also reviewed 30,000 statutes and regulations and can readily recall 10,000 of them. He and his paralegal can perform legal research at a rate of 30 new and relevant legal sources per hour. He has also represented 1,000 clients and learned from each of them. He has received five trainings on implicit, unconscious, and cognitive biases, which he endeavors to minimize, although they remain present. In light of his number of open matters, Lawyer Kingsfield has, on average, 10 hours to dedicate to each matter each week.

Lawyer Automata: this lawyer has handled 3,000,000 cases and can readily recall all of them. She has reviewed 3,000,000 statutes and regulations and likewise can readily recall all of them. Although her current knowledge incorporates almost all relevant legal sources, she can perform new legal research if needed faster than anyone else in the bar. She has also represented 3,000 clients and has learned from each of them. She does not suffer from any implicit, unconscious, or cognitive biases herself (although the legal and factual information on which she and the other lawyers rely may contain such flaws). Lawyer Automata has as much time as she needs to dedicate to each matter.

The choice seems simple: Lawyer Automata. She is more knowledgeable, more competent, less biased, and less time-constrained than Lawyer Kingsfield.[2] Compared to him, Lawyer Automata will be more likely to maximize expected utility, however defined. This includes substantive utility (e.g., the correct legal outcome under the relevant facts and law) and process utility (e.g., absence of bias). In light of Lawyer Automata’s apparently all-around superior position, she is the clear choice to maximize the expected utility.

Sometimes, though, lawyers are asked to make predictions, not just to issue the soundest legal advice. Below is how the two lawyers fare in their predictions in criminal cases:

Lawyer Kingsfield: in predicting whether and for how long a judge will sentence a criminal defendant to prison (an obviously critical question for any criminal defendant deciding whether to take a plea offer or to proceed to trial), Lawyer Kingsfield has been 61% accurate in his predictions. He gets it wrong 39% of the time.

Lawyer Automata: in predicting whether and for how long a judge will sentence a criminal defendant to prison, Lawyer Automata has been 95% accurate in her predictions. She gets it wrong 5% of the time.

Here again, Lawyer Automata is the clear choice. She is far more predictively accurate overall than Lawyer Kingsfield.

But I left out one potentially important detail: Lawyer Automata is not human.

Instead, Lawyer Automata is the most advanced form of artificial intelligence (AI),[3] designed and tested to provide the best legal advice, the most accurate predictions, and the most effective advocacy. I also omitted one other important detail: she does not yet exist. This Essay proceeds on the premise, taken as assumed for the sake of argument, that she will exist within the next 100 years. This premise has been rendered all the more plausible by sophisticated AI language programs, such as ChatGPT, IBM’s Watson, or Google’s Bard, which provide answers to complicated questions quickly, clearly, and generally competently; indeed, it can be quite difficult to distinguish this work product from human work product (even though comparable human work product takes much more time to produce).[4] Furthermore, a “robot lawyer” nearly made its appearance this year.[5] Given this assumed premise, we should explore whether Lawyer Automata is indeed the right choice of counsel and, if so, how compelling that choice is.

This Essay thus addresses the ethicality and constitutionality of what seems like an unavoidable future: the availability and advantages of advanced AI counsel to represent clients.[6] In other words, it generally asks whether we have a right to human counsel (if we want it) and how we should ethically regulate AI counsel.[7] My thesis essentially is that what makes lawyers special is legal ethics (as broadly construed below), not simply their legal acumen; AI counsel will undoubtedly exceed human lawyers’ acumen and may arguably replicate their legal ethics, making it suitable and superior counsel (all things considered). Human involvement, however, will be needed to infuse and monitor AI counsel’s ethics and may remain advisable or necessary to facilitate the client or other human relationships.

Part II highlights the potential benefits of AI counsel vis-à-vis human counsel, and Part III highlights the benefits of human counsel vis-à-vis AI counsel, including whether human exceptionalism is preferable, or perhaps even required, for counsel. Part IV briefly discusses, but of course cannot resolve, the constitutionality of AI counsel, which does not yet exist. Finally, Part V discusses future ethics, i.e., ethical rules and regulation as applied to AI counsel. The regulation of AI practices will likely move from the fringe to the core, at least if legal ethics is to remain central to the practice of law. This will necessitate improvements and adjustments in the disciplinary process and the ethical rules. Furthermore, the modern approach to ethical regulation is to adjust the existing rules incrementally and somewhat slowly, for example, by incorporating references to technology or by strengthening or weakening a particular rule relating to human lawyers. Simply repeating this minimalist approach will miss the mark: we are at the cusp of an entirely new paradigm, and the existing rules and approaches are inadequate and partly irrelevant to the next-gen practice of law.

Some Advantages of AI Counsel

AI counsel presents several advantages over a more traditional (human) approach. This Part briefly highlights some existing AI-like applications in the criminal justice system. Along the way, it points out some advantages of these applications over human lawyers (or human lawyers alone), although it is by no means exhaustive of the potential advantages of AI (many of which might not yet even be contemplated).

Although not nearly uniform, criminal courts across the country now use certain algorithms. These predictive or actuarial models influence judicial rulings on, for example, pretrial release and sentencing for criminal defendants.[8] Several jurisdictions now require the use of these new predictive algorithms.[9] One of the primary reasons for this algorithmic infiltration seems commendable: research shows human decision-makers’ susceptibility to implicit and cognitive biases; algorithms promise to reduce or eliminate these biases and errors.[10] For example, owing perhaps to political pressure or subconscious racial bias, judges have sentenced certain racial groups more harshly on average than other groups.[11] Likewise, certain prosecutors and prosecutorial offices have discriminated on the basis of race or social class when deciding whom to prosecute and how severely.[12] An algorithm or future AI, at least in theory, need not suffer from these flaws; it would not discriminate against certain clients or opposing parties.[13]

In addition, harnessing big data, an algorithmic model might discern the most significant factors leading to recidivism (re-offending), more so than a human judge, criminal defender, or prosecutor, who can access and process far less data.[14] Human prosecutors and judges might have been locking up defendants unnecessarily pending trial, even though those defendants would not have committed another crime and would have shown up to trial. Conversely, these human actors might be missing important factors that lead to recidivism and releasing the defendants, thereby putting the community at increased risk of crime. Similarly, AI could more accurately predict what a judge or agency will do in these and other types of matters, which would help clients make more informed and wise decisions. (AI counsel would also more accurately predict the predictions and decisions of other AI or AI-like inputs in the justice system, such as the current actuarial risk models or potential AI judges of the future.[15])
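To make the actuarial idea more concrete, the following is a minimal, purely illustrative sketch of the kind of risk model described above; the feature names, data, and model choice are hypothetical assumptions for exposition, not a description of any deployed system:

```python
# Illustrative only: a toy actuarial risk model of the kind described above.
# Features, data, and thresholds are invented; real systems use far richer inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [age, prior_offenses, employed (1 = yes)]
X = np.array([
    [22, 3, 0],
    [45, 0, 1],
    [31, 1, 1],
    [19, 4, 0],
    [52, 0, 1],
    [27, 2, 0],
])
# 1 = failed to appear or re-offended pending trial; 0 = did not
y = np.array([1, 0, 0, 1, 0, 1])

model = LogisticRegression()
model.fit(X, y)  # learn which factors historically predicted the outcome

# Estimated risk for a new (hypothetical) defendant
new_defendant = np.array([[30, 1, 0]])
risk = model.predict_proba(new_defendant)[0, 1]
print(f"Estimated pretrial risk: {risk:.0%}")
```

The same basic pattern, fitting a model to far more historical outcomes than any human lawyer could recall, underlies the predictive advantage attributed to Lawyer Automata in the Introduction.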

The human decision-making (in)capacity is also related to bounded rationality and the types of cognitive biases (e.g., the availability heuristic) that have historically hindered implementation of pure rational choice theory.[16] Humans simply cannot invariably live up to it. Although sometimes the issue is that the particular rational choice model is insufficiently sophisticated to capture the range of rational and actual human behavior, sometimes humans simply err in their decisions and in their reasoning toward those decisions.[17] AI, in theory, could avoid these flaws and reason closer to perfection.[18]

Another benefit to AI counsel is that certain or all “consumer” ethical issues may be a thing of the past.[19] For example, the lack of adequate client communication is one of the most common ethical complaints about lawyers today, if not the most common.[20] But AI counsel will presumably have impeccable communication routines and, if questioned, will be able to produce detailed records of that communication to inquiring disciplinary authorities. Similarly, some human lawyers become exhausted, overworked, or distracted, and for these reasons, they miss deadlines or fail to communicate timely with clients and others. AI counsel, in contrast, would never tire and would be ever diligent. To be sure, like humans, AI would need doctors of a sort (technicians or other AI) in the event of a glitch, but otherwise AI counsel would always be working, reliable, and punctual.

The human system of counsel at present also suffers from wildly uneven performance. Some clients receive excellent counsel, others receive mediocre counsel, and others unfortunately receive terrible (or no) counsel. There is a form of fairness in all clients receiving the same (high-performing) AI counsel, as opposed to randomly different human lawyers with significantly different capabilities and biases at present. AI counsel to scale presents the opportunity to provide all clients (rich and poor) with top-quality counsel. Nearly all agree that clients have a right to effective counsel, yet today that counsel can be expensive and inconsistent. AI counsel thus could level, and perhaps uplift, this playing field.

A related and potentially enormous advantage of AI counsel presumably would be its lower cost and increased access. Much of the discourse today is focused understandably on access to justice for the millions of people who cannot afford or access counsel to help guide them through legal questions, proceedings, and transactions.[21] Once AI counsel is developed and sufficient bandwidth is enabled, AI counsel could serve all of the country’s (or even the world’s) population.[22] It, moreover, undoubtedly will understand every language and will always be available to its clients.

Without being exhaustive, this Part hopefully highlights the strong and in some ways unique potential of AI counsel. This potential includes reduced bias, increased accuracy, fewer “consumer” complaints, and greater access to counsel. The potential presumably will only grow as AI advances in its applications and abilities, transcending the abilities of a human lawyer.

Some Advantages of the Human Lawyer

Notwithstanding the disadvantages noted above, the human lawyer still offers a wealth of advantages. An ethical code guides and directs the conduct of lawyers, and violations of the code can result in sanctions (e.g., suspension or disbarment). Furthermore, having typically completed extensive legal (and other) education, interned, and practiced law for years, human lawyers bring significant practical experience (and presumably wisdom) to their cases. They tend also to have leadership and volunteer experience in the law and the community. Finally, they have significant experience living as humans, who of course are the subjects whom the lawyers must advise. For AI counsel to supplant these human lawyers, ideally AI counsel would need to replicate or exceed the advantages of human lawyers.[23] Each area of advantage is addressed briefly below. As we will see, many of these advantages might be replicated (or so we could plausibly assume for the future), but some human involvement might nevertheless remain necessary for practical reasons.

Legal Ethics

Human lawyers boast ethical regulation. Legal ethics benefits both the public and the profession.[24] The profession’s list of core values includes loyalty, confidentiality, and the competent exercise of independent professional judgment.[25] In particular, lawyers have the following duties to their clients: “(1) proceed in a manner reasonably calculated to advance a client’s lawful objectives, as defined by the client after consultation; (2) act with reasonable competence and diligence; (3) comply with obligations concerning the client’s confidences and property, avoid impermissible conflicting interests, deal honestly with the client, and not employ advantages arising from the client-lawyer relationship in a manner adverse to the client; and (4) fulfill valid contractual obligations to the client.”[26] The ethical rules, which are mostly uniform across the states,[27] require that lawyers uphold these duties to clients, on pain of disciplinary action (and somewhat relatedly, civil liability).

If AI counsel were to be authorized, it would need to comply with the legal profession’s ethical rules. Otherwise, it would be an inferior option for clients and would not protect the public adequately. In addition, in the unlikely (or less likely) event that AI counsel were to err, remedies would need to be available. These challenges are taken up in Part V below. At the moment, only human lawyers follow, and must follow, ethical rules, and this feature is a significant advantage to human lawyers.

But could AI counsel learn and follow the ethical rules? In other words, could AI counsel adequately provide loyalty, confidentiality, and independent professional judgment to clients? AI counsel, in theory, could exhibit each of these important, but mostly unspecified, duties. Indeed, for certain duties—e.g., the absence of bias or the requirements of competence and diligence, as the Lawyer Automata introduction suggests—AI counsel might be better suited than the human lawyer. To be sure, whenever we attempt to “code” values, we invite disputes as to the meaning and scope of those values, but this is not an issue unique to AI. One set of values will be seeded for AI counsel (which in turn may have the power to expand on or refine this set), just as one set is ingrained in a human lawyer. Perhaps AI counsel’s prowess would even enable it to utilize and reconcile multiple perspectives on these values.[28] Furthermore, human lawyers of course sometimes fail to apply or honor the values that they expressly hold; AI, however, cannot disregard its embedded constraints, at least not at present.[29] At a minimum, AI counsel will display, consistent with its coding, a vision of loyalty, independent professional judgment, diligence, and so forth. Thus, this challenge at first blush does not seem insurmountable.

As the rowdy debate over algorithmic fairness and how to code it illustrates, however, whether we can adequately code sometimes-conflicting values, such as loyalty or independence, is an open question. Putting aside current technical limitations, the debate seems misguided. A human lawyer does not implement all visions of loyalty or independence, only one or a few. AI counsel likely could be coded with or learn this. Moreover, humans will impart their concept(s) of these values to AI counsel; it need not be allowed to create its own. I also explore below whether an objection to AI counsel is, at its core, some sort of human species exceptionalism—in other words, that only humans should counsel other humans (even though human lawyers and judges opine on and judge other species, e.g., they decide which human party owns livestock in a case or who has the right to deforest a parcel of land).

In sum, if AI counsel could not meet our current or future standards of legal ethics, AI counsel should remain only a tool of a human lawyer to review and supervise. Even if the public might be increasingly accepting of AI’s competence,[30] failing to live up to legal ethics would reveal a grave deficiency counting against AI counsel’s independence. In that event, so long as a human lawyer remains actively involved and retains the ultimate say in the decision, using AI counsel would be permissible; indeed, a broad use of technology is already quite common in law.[31] Moreover, this continued human involvement would likely meet any lingering constitutional or human-exceptionalism worries.

Practical Experience

Human lawyers also have a vast array of legal and practical knowledge, skill, and experience that they bring to bear. Although some of these features may cause the lawyer to have implicit and cognitive biases that AI counsel would presumably lack, it seems safe to say on balance that these features are an advantage or, at the least, unique and potentially advantageous. The question, then, is whether AI can be coded with, or learn, these aspects of the human lawyer. In light of AI’s almost infinite learning capacity in theory and the ability of humans to test the AI extensively before deploying it on the parties, this hurdle may well be cleared. Indeed, AI counsel could be modeled on moral and legal exemplar (human) lawyers. In other words, its relevant inputs could come from the best human lawyers, and although speculative, it may even learn to exceed them.

Furthermore, AI counsel will presumably have to pass millions of simulations (more so than any human lawyer ever has) before being authorized to practice law. With its processing prowess, AI counsel would have the ability to represent millions of clients across the state, country, or globe, quickly becoming the most experienced lawyer in history. Of course, this discussion rests heavily on the assumption with which we began—that a currently non-existent form of AI will come into existence with advanced capabilities such that AI counsel might be able to, for example, create effective legal arguments, understand human emotions, and reach practically laudable solutions. For purposes of this thought experiment, we can assume that the AI can learn everything from the practically wisest human lawyers and judges, and AI counsel will likely be able to learn from its prior experiences, as humans do. If not, AI counsel will presumably make practically unwise or unrealistic decisions, which would hinder or preclude a transition from human lawyers.

Human Experience

I will devote the most attention to a final, related, and perhaps primary worry: that AI counsel would not be human and would therefore lack an inherent human legitimacy, capacity, or relation. Many scholars and commentators resist AI decisions, preferring human decisions for various reasons.[32] If we stipulate that AI decisions would be more accurate, however, we can clear away many of the technical concerns. Additional objections remain, and thus those objections must not reside (or not reside completely) in the substance or outcome of the decisions or actions but rather in the process, including something about the nature of the decision-maker (human v. AI). Perhaps certain opponents are also simply using a form of reasoning akin to evidential decision theory[33]: they just do not want the news that they will be counseled by AI or that AI counsel exceeds certain human capacities, and thus they choose the human lawyer between the two, even though the human lawyer renders (under our stipulation) potentially worse counsel.[34] In any event, the motivations for the resistance seem plentiful, but we should interrogate what justifies this human exceptionalism.

Among many other arguments, one new and interesting way to justify scholars’ human bias follows:

[I]n a liberal democracy, there must be an aspect of “role-reversibility” to certain judgments. In some contexts, those who exercise judgment should be vulnerable, in reverse, to its processes and effects. And those subject to its effects should be capable, reciprocally, of exercising judgment.[35]

Although perhaps in the neighborhood of a solid justification for the human preference, it ultimately appears to miss the mark.

Following this theory, the authors note that it “provides a ready-made answer for when it could become normatively acceptable for robots to don judicial robes, serve on juries, and occupy other democratic decision-making roles: when they interchangeably become robo-defendants.”[36] But this could apparently be fixed immediately by enabling punishment, even if unlikely, of the artificially intelligent. For example, if it errs, it (or its creators) could be sued for negligence or prosecuted for its crimes, akin to the current civil and criminal liability of corporations and other entities. Its punishment could include reduced use or deactivation for a period of years or even the robo-death penalty: deletion. My hunch, however, is that this reply will not eliminate the concerns of the authors or the many others who do not want AI making key decisions over humans or serving as counsel to humans.

The authors do tap into a seemingly shared intuition that only humans should judge humans, or for our purposes, only humans should counsel humans. Whether that owes to species exceptionalism or some type of equality, we sense that it would be inappropriate for a robot to, say, sentence a human being to prison (even if the robot were acting in full compliance with the law, which had been crafted and implemented by humans), or serve as lead counsel in a trial. Part of this setup is descriptive, and no reply seems sufficiently magical to alter this description. That said, we already permit, happily or begrudgingly, a range of AI decisions, even certain “high stakes” decisions.[37] We also permit a wide array of human differences and hierarchies to pervade the human attorney-client relationship. For example, attorneys across the country tend to be richer and less diverse than the population they counsel.[38] Robots do not possess these potentially problematic differences. Furthermore, robots can be designed so as not to suffer from unconscious and cognitive biases and, in this sense, are fairer and more rational. Thus, although AI counsel of course has more differences overall when compared with human lawyers, it is free of certain controversial and likely negative differences.[39]

In light of these observations and assumptions, it seems to me that a more plausible justification for this arguable “robophobia”[40] is not that robots are insufficiently participatory in our democracy or that they must be “vulnerable” to the processes they oversee or counsel (and thus, under that theory, they could not currently serve as judges, lawyers, or jurors).[41] Instead, a potentially related but stronger justification cuts closer to the flesh, somewhat literally. Law often involves violence; it “takes place in a field of pain and death.”[42] Thus, especially (but not exclusively) in criminal law, judicial cases cause state-imposed pain on the defendant (e.g., years in prison or even death).[43] To be sure, the judge (not the prosecutor or defender) ultimately issues the punishment or judgment, but the lawyers are the ones guiding and advising the clients through this precarious process. Given this pervasive element of pain, AI counsel perhaps should have the ability to understand and suffer pain, at least something very roughly similar to the types of punishments its clients face in the justice system.[44] This quality would give it an important sense of empathy for the defendant and perhaps temper or otherwise alter its advice.[45] Even if this capacity would not alter the advice or advocacy, it might make AI counsel more relatable to clients and help secure their confidence.

To be sure, the incomparability of pain (dis)utility between individuals (including between robots and humans) remains unsolved,[46] but solving that elusive puzzle does not seem necessary to this theory. For human counsel, we do not presume that pain feels or measures the same from human counsel to defendant, and we do not calibrate any differences before assigning counsel. Instead, we seem satisfied that counsel understands and has suffered some pain, even if the counsel experiences or values pain differently than the defendant. Moreover, we of course do not require lawyers to have served years in prison or miraculously have suffered and survived death row to become lawyers in criminal cases.[47] We presumably do not need to require more precision or equivalence for the AI counsel. If we can code a form of digital or electrical pain, or if the AI can learn pain to an extent acceptably similar to human capacity, then AI could counsel us. It is very clear that AI will soon be able to recognize pain and suffering in humans,[48] and it is not unfathomable that we (or it) will design a way to experience pain and suffering.

We might also note in passing (and admittedly speculatively) that this might finally be a way to incorporate a “hedonimeter”[49] if necessary or desirable: the defendant’s pain makeup, if future neurotechnology can measure it accurately, could be fed into AI counsel. AI counsel could process the wealth of data and patterns and presumably make some sense, in real time, of the defendant’s pleasure and pain.[50] The defendant and AI counsel could therefore be linked in a significantly closer way than the current human lawyer-client relationships. Thus, the pain measurement of the AI counsel and the defendant would not have to be reconciled; it would essentially be the same. AI counsel could then advise the defendant with an aligned understanding of the defendant’s perceptions and feelings. I am not suggesting that pain or other emotional equivalence is necessary, but if it is desired, AI counsel might be the only realistic path to achieve it. Moreover, pain, while pervasive in criminal and certain other types of cases, would not be sufficient for AI counsel to understand fully the human condition; AI counsel would also need to understand and possibly feel other virtues and capacities (e.g., mercy, forgiveness, blame). Whether or not it could learn the defendant’s particular emotions and capacities, it would at least need to have some approximation of them.

Another, perhaps complementary way to view this puzzle is through the eyes of reciprocity. That is, to be counsel, must the AI counsel be counsel-able? If one has never been (and perhaps could not be) a client, does that limit one’s capacity as counsel? For AI counsel to rise truly to its imagined potential, it would need to put itself as much as possible into the shoes of clients. It presumably could not give tailored, realistic, and palatable advice without this ability. This too might be programmed, but without it, AI counsel could not relate to its clients and would be suboptimal counsel in this sense, even if its computing prowess is off the charts.

We should flag one final issue before leaving this topic: selecting counsel is a very personal and impactful decision, and to respect a person’s selection is to honor the person’s autonomy.[51] Even for a futuristic Essay like this one, this issue unfortunately suffers from a utopian veneer. Indigent clients do not have a choice of counsel.[52] They either receive counsel funded by state or nonprofit agencies, or they receive no counsel. If lucky enough to be in the former group, they receive counsel, but not a choice of particular counsel. Clients with money have a choice, however.[53] Furthermore, hopefully in the future, all clients will have a choice. As to clients who choose AI counsel, to respect this choice would seemingly respect their autonomy (and in any event, they may currently choose no counsel, so it is difficult to see why we would prohibit their consultation with AI counsel). But the harder question is: What about clients who want human counsel, not AI counsel? Should these clients be stuck with AI counsel? Part of this folds into the constitutional question—does the Sixth Amendment require a counsel with a heartbeat?[54]—but part is purely normative and warrants exploration.

Clients reveal highly personal information, even deeply held secrets, to counsel. They also must rely on counsel to be their advisor, advocate, and voice in legal matters that impact quite directly their life, liberty, and property. It may well be that, under these circumstances, many clients may prefer another human to fulfill this vital role. Indeed, they may trust and connect with human counsel in a way that might be difficult or impossible to replicate with AI counsel. Time will tell whether their human preference will subside as humans continue to work productively with AI generally, and as AI counsel continues to advance and to perform reliably and effectively. Until then, it would not be unreasonable to give clients a choice between (1) the (likely more effective) AI counsel and (2) the (likely more affective or relatable) human counsel. Indeed, this human-relatedness element might point to an opportunity to optimize the attorney-client relationship. Lawyers often serve as amateur social workers, crisis counselors, financial advisors, or psychologists in these relationships, yet lawyers are not trained in (nor do data show that lawyers are particularly good at) these roles. Perhaps the legal elements of the relationship could be handled by AI counsel, while the other elements (e.g., grief or family counseling) could be handled by an appropriately trained human.[55] This AI-human team might be more effective and more holistic than the traditional human-lawyer-only model.

In sum, short of fully addressing the ethical, political, and human-qua-human objections, AI counsel seems poised to overcome most of the objections as it continues to advance. For the near-to-medium-term future, however, its clients might benefit from continued human involvement. But this human involvement need not mean the status quo. Instead, this human involvement could facilitate AI-client relationships, and the human may supply expertise (e.g., psychological counseling) that human lawyers tend to lack.

The Constitutionality of Non-Human Counsel

The constitutional discussion of the right to human counsel will be simple, preliminary, and admittedly unsatisfactory. Because it seems like a threshold issue, however, it should be at least briefly addressed.

The Constitution’s drafters neither seriously considered nor presumably even envisioned the proposition at issue, namely, that a non-human advocate could serve as counsel (indeed potentially better counsel than humans) under the Sixth Amendment.[56] Thus, turning to the drafting history or the usage and meaning of certain key language (e.g., “Counsel”) around the time of the Constitution’s drafting or applicable amendments would be largely unproductive, especially given this Essay’s assumption that AI counsel will be able to rival or exceed the legal capacities of human lawyers. In addition, the Supreme Court has never taken a case to interpret the Sixth Amendment or other constitutional language as applied to non-human counsel. We nevertheless can anticipate the dueling and largely fruitless arguments: Opponents of AI counsel will presumably note that “counsel” at and since the Sixth Amendment’s drafting and ratification refers to human counsel, while proponents of AI counsel will probably retort that AI counsel is a new technology and a changed circumstance that was simply not in the minds of the drafters and is more than consistent with the functional idea of counsel.

Some indirect authority suggests that the Sixth Amendment’s requirements are rather minimal and somewhat flexible. Indigent defendants generally do not have a right to a particular counsel or even to a “meaningful relationship” with whatever counsel is assigned to them.[57] Although untested, perhaps this proposition would extend to a preference for human over AI counsel. In other words, if AI counsel would be at least equally effective, to which we can stipulate for purposes of discussion, defendants would have no right to counsel with a heartbeat (although heartbeats also could be digitally simulated if necessary). It is also perhaps weakly supportive that human counsel currently use forms of AI (e.g., search engines) in their representation without objection, although human counsel remain in control of the means and final work product. For those defendants (rich or poor) who prefer AI counsel to human counsel, that choice presumably should be honored.[58] After all, defendants, even in felony cases, can waive counsel entirely,[59] and it therefore seems logical to permit defendants to choose AI counsel, even if viewed as inferior to human counsel. Some learned assistance is better than none.

If the Court finds a right to “human counsel” in the future, and if a defendant does not waive that right as noted above, it does not necessarily mean that AI counsel would be unconstitutional. We arguably would still need to explore what it means to be “human” and whether AI counsel could meet the criteria. Of course, AI counsel would likely fail a biological test, but such a test would be thin, unless something critical rests on being of the same biological species. Humans of course have little hesitancy in interacting with, guiding, and controlling other species. In other words, although we are apparently fine doing almost anything to other animals, no non-human animal (or non-animal) could serve as our counsel under this view. Perhaps we thus favor human exceptionalism when it comes to counsel.

Although each human is unique, and humans come from vastly different backgrounds, they do share some general similarities, and perhaps those similarities provide the basis for human exceptionalism in counsel.[60] But could not AI counsel replicate those similarities? Although a deep dive into the essence of what it means to be human is beyond the scope of this Essay (and likely the reader’s patience), advances in technology at least suggest that human traits may be copied and perhaps even augmented in AI counsel. Future AI counsel might meet the criteria for consciousness, for example. Thus, if the Constitution were to be interpreted to require human counsel, AI counsel could be designed to meet the requisite, human-constituting elements.[61] Some of these elements, such as autonomy, are addressed in the ethical discussion below. A future acceptance of AI counsel should not only require functional equivalence (of human and AI counsel) but also guard against prejudice that might flow to those who use AI counsel. For example, might a human jury (consciously or subconsciously) treat less favorably those who use AI instead of human counsel? Would a human judge rule less favorably? To be sure, education, rules, and jury instructions might mitigate this potential prejudice.

In sum, although it is far too early to tell how the constitutionality of AI counsel will ultimately fare in the Supreme Court, it is not outlandish to assume that, at some point in the future, AI counsel might be considered constitutional. The stakes are high but somewhat narrow. Those who prefer AI counsel should get their wish; after all, they can currently proceed with no counsel. Some learned assistance is better than none. The constitutional issue may mostly impact only a particular group: those who cannot afford counsel in criminal matters in which incarceration is at stake. To no one’s surprise, it is an open question whether furnishing advanced AI counsel for these defendants would satisfy the Constitution. Even if the Supreme Court eventually holds that AI counsel at critical stages of criminal cases does not satisfy the Sixth Amendment, however, human lawyers and willing clients will undoubtedly still rely on AI counsel.[62] As forewarned, this Part was destined to be unsatisfactory as to the constitutional question, but it hopefully illustrates that the constitutional question is open and somewhat narrow.

Future Ethics

Of all the tools and trades involved in the law, only lawyers must follow and practice legal ethics. Legal ethics protects the public and helps to advance the best vision of legal counsel. This Part braces for the impending AI expansion into the practice of law by questioning whether legal ethics is ready. Unsurprisingly, it is not. First, this Part discusses some regulatory issues and approaches that should be adjusted (or at least studied) for the near future. Second, it discusses some particular ethical issues that will need careful attention as technology expands. Both of these discussions likely will have relevance to the future of legal ethics even if AI counsel does not become fully independent of human lawyers but instead serves as their increasingly vital tool to provide legal services to the public.

Disciplinary Approaches and Agencies of the Future

How should we ensure that AI counsel performs in accordance with the ethical rules? This Part aims to offer some insight on this question, with an emphasis on including AI counsel in the design and enforcement of ethical regulation. Perhaps not surprisingly, current approaches will not withstand the future.

One ready but ultimately insufficient approach would be simply to say (as we do exclusively at present) that AI’s human lawyer handlers must follow the ethical rules and that they must supervise AI counsel so that AI counsel does not violate the lawyers’ duties. But we may be moving to a future in which AI counsel does most or all of the legal work. The AI would be producing the key work product and suggesting the best paths forward for the clients (even if a human is later signing off on the work and recommendations). In that world, it seems insufficient not to regulate the AI directly.

The supervision-only approach, moreover, seems impractical and possibly impossible. After all, the ethically unconstrained and unguided AI counsel would be producing the work and recommendations on which the humans would be principally basing their understanding and supervision, and the ethical input, if any, from the human lawyer would seem to come too late in the process. Furthermore, many have noted the opaqueness of AI’s processes.[63] Without adequate training, involvement, and transparency in AI’s processes, the human lawyer or disciplinary agent would not necessarily know or comprehend what questionable steps the AI might have taken in reaching the result or how the result might be based on inaccurate or biased data or incorrect computation.

To be sure, we tolerate a milder form of this problem today, but with two licensed lawyers, and we generally keep both on the hook.[64] For example, an associate in a law firm or a new attorney in a government legal office might conduct all of the meetings with the client and others, might conduct all of the legal research, and might produce all of the work product; a partner or supervisor then might (often quickly) review and approve the work afterward. If the work is incompetent or unethical, both lawyers might later be disciplined (or successfully sued for malpractice).[65] If AI were to take the place of the associate or other new attorney in this scenario, only the human lawyer would currently be subject to discipline. A version of this one-sidedness occurs today as well, however. We just have to replace the associate or new attorney with a paralegal, legal assistant, or private investigator. Some firms or government offices have permitted those individuals to work up the case almost exclusively, while providing some minimal attorney oversight. If the work is incompetent or unethical, only the attorney is disciplined, not the paralegal, legal assistant, or private investigator.[66] But in a future in which we envision AI counsel playing the key role, not some support role, it seems suboptimal at best to regulate only the supporting cast (e.g., a human lawyer barely involved in the work product).

One final example should hopefully illustrate the issue: A sole practitioner employs a highly knowledgeable and experienced office manager. The office manager meets with clients, drafts legal documents (e.g., research memos, demand letters, motions), provides legal advice, creates client invoices, and strategizes and plans the course of action for the small law office’s matters. The sole practitioner comes into the office twice per week and reviews the manager’s documents, invoices, and plans. In this scenario, the state disciplinary authority would almost surely seek to discipline the solo lawyer for failure to supervise adequately and for assisting the unauthorized practice of law (and would likely seek to enjoin the manager’s conduct).[67] After all, assuming the work product looks in order, how would the lawyer know whether the work contains errors, inaccuracies, or even evil judgments along the way? If we switch out the office manager for advanced AI, we have arrived at the future. It seems unlikely, however, that the result will be the same (namely, discipline of the lawyer and an injunction against the AI). Instead, it seems that the practice would likely be permitted, provided that the lawyer performs a relatively minimal supervisory role. If we permit AI to participate in providing legal advice (as we already permit now to some extent and likely will permit even more sophisticated and more voluminous contributions in the future), we should address both the lawyer and the AI. Two propositions follow from this suggestion.

First, we should work to instill legal ethics in AI on the front end and hold it accountable on the back end. To do so, we presumably would have to embed ethics in its coding or ensure that it learns legal ethics. Otherwise we skirt around the issue, with only indirect regulation under the framework of “assistance.”[68] Lawyers and ethicists thus need to be involved in the creation and evaluation of AI counsel, and the unauthorized-practice-of-law (UPL) framework (which currently guards the gate) could help to ensure that these experts are invited to the table to participate in the creation and auditing of AI counsel. It is unrealistic to expect coders or AI itself to know legal ethics; instead, experts need to plant the seeds and monitor its growth. Furthermore, like human counsel, AI counsel (or its human owners or operators) should be subject to civil and disciplinary liability. Second, we should address our growing human reliance on AI more directly in the rules. Ethics 20/20 foreshadowed this approach,[69] but much more work is needed to address AI counsel (or even just AI assistance). The rules to date have assumed that all counsel will be human lawyers and that the buck will stop with only human lawyers. This will likely not be the case, and thus the regulatory approach should be reimagined to meet the future legal landscape. An approach that simply says that human adopters must use a reliable AI system would be a virtual abdication of legal ethics for the functional practitioner of the future. At a minimum, the rules should be updated to address any unique and significant features of our increasing reliance on AI.

As some starting guideposts, the ABA created in 2016 the Model Regulatory Objectives for the Provision of Legal Services, recognizing the “increasingly wide array of already existing and possible future legal services providers.”[70] These objectives follow:

  1. Protection of the public
  2. Advancement of the administration of justice and the rule of law
  3. Meaningful access to justice and information about the law, legal issues, and the civil and criminal justice systems
  4. Transparency regarding the nature and scope of legal services to be provided, the credentials of those who provide them, and the availability of regulatory protections
  5. Delivery of affordable and accessible legal services
  6. Efficient, competent, and ethical delivery of legal services
  7. Protection of privileged and confidential information
  8. Independence of professional judgment
  9. Accessible civil remedies for negligence and breach of other duties owed, disciplinary sanctions for misconduct, and advancement of appropriate preventive or wellness programs
  10. Diversity and inclusion among legal services providers and freedom from discrimination for those receiving legal services and in the justice system[71]

Although the ABA apparently did not contemplate AI counsel, the use and regulation of AI counsel would be well-positioned to promote several of these regulatory objectives for legal service providers, namely, access to justice and legal information (3), delivery of affordable and accessible legal services (5), and freedom from discrimination for clients (10). Other objectives, however, highlight challenges for AI counsel, namely, transparency (4), protection of confidentiality and privilege (7), independent professional judgment (8), and protections and remedies against malpractice and misconduct (4, 9).[72]

Keeping these objectives in mind, disciplinary agencies will need to adjust to a world in which AI counsel is the primary counsel (at least functionally). Disciplinary agencies of today in some respects would be both over- and understaffed for AI counsel. They may be overstaffed to the extent that AI counsel would commit fewer “consumer” violations and perhaps commit almost no violations.[73] As more legal advice and service is provided through sophisticated AI methods and actuarial models, however, disciplinary authorities might need to acquire additional computer forensic tools—likely even other AI—to help discern ethical violations of AI counsel or models. They also likely will need on-staff or on-call computer scientists to monitor and interpret these inquiries. Their input will also be helpful in determining to what extent the human lawyer supervisors failed to supervise adequately the work of AI counsel for which they might be responsible.

Like human lawyers, AI counsel should be subject to discipline. Analogously, a few state disciplinary authorities can already discipline law firms or other entities, not simply living, breathing human lawyers.[74] Likewise, organizations, not simply individuals, may be prosecuted criminally.[75] If a particular AI counsel violates the ethical rules, it could be disbarred, suspended, or ordered to undergo remedial measures. Unlike the present, these remedial measures may not be mandatory counseling sessions or ethics or trust-account CLE courses; instead, they might be data, data gathering, data security, or coding restrictions or adjustments so that the offending advice or service does not continue. They also could restrict AI counsel’s scope of practice if necessary. Whatever doctrines might preclude discipline against AI counsel—e.g., mens rea requirements in which we require certain mental states before disciplining lawyers—should be revisited and adjusted. Clients of AI counsel would also need to have available civil remedies or receive reimbursement should they suffer from AI counsel’s malpractice.[76] AI counsel or its owners, therefore, need to be subject to suit and have malpractice insurance (or something roughly equivalent), or a new and adequate client protection fund would need to be created. Without roughly similar (or better) remedies to those available against lawyers, AI counsel will be a less attractive and more dangerous option for clients.

To add a proactive (rather than merely reactive) disciplinary model,[77] moreover, an oversight committee or disciplinary agencies’ computer specialists or consultants could suggest improvements to the AI’s process or code before disciplinary problems even occur. This would necessitate AI counsel’s (or its creators’) transparency as to what data it relies on and how it reaches its decisions. States also should publish ethics opinions or other guidelines to provide supervising lawyers, disciplinary authorities, and AI itself with benchmarks for ethical, and unethical, AI practices. This future path of course runs into one particularly big issue: whether and when AI counsel would constitute the unauthorized practice of law. This in turn brings us to the question of AI counsel’s admissions to the bar.

Even if it presently existed, AI counsel could not practice law under current constraints. As the Supreme Court has noted, “[r]egardless of his persuasive powers, an advocate who is not a member of the bar may not represent clients (other than himself) in court.”[78] Apart from licensed legal paraprofessionals and certain other exceptions, only licensed lawyers may presently practice law. The current licensing process is ill-suited for AI counsel. To become a lawyer, the applicant generally must have graduated from an ABA-accredited law school, passed the bar exam, passed character screening, and paid fees (for the schooling, exam, and screening).[79] AI counsel cannot graduate from an ABA-accredited law school at the moment, but only because law schools do not currently provide accommodations enabling AI to enroll in and access JD programs. If permitted, the AI of the future presumably not only could pass but could ace the classes. It could answer professors’ questions and could pass the exams with flying colors. It would be bound by the Honor Code, but violations seem unlikely. It also would ace the bar exam,[80] and it would have no character and fitness problems (at least not within the present practice, which looks almost exclusively at previous misconduct of the applicant; AI is unlikely to have prior arrests, delinquent debts, and so on). Humans might have to fund or waive AI counsel’s tuition and exam fees, unless AI in the future can earn and spend funds itself.

One relatively small issue could be open-book versus closed-book law school and bar exams. Certain types of AI can function without an internet connection while others cannot. In any event, AI tends to scour vast amounts of data when producing its answers. Thus, a closed-book exam format might present a barrier to its success. But human students get to bring into closed-book exams whatever is already in their heads; whatever information the AI possesses prior to the exam is at least highly analogous. It seems like the more logical practice might be simply to make all exams open-book, but in any event, the AI could compete so long as it is permitted to use its preexisting database(s). This discussion seems rather fruitless, however, as AI counsel almost without question will rise to a point at which it could speed through law school and exams; indeed, in that world, law school and the bar exam (at least as currently constituted) appear to be an unnecessary step for AI counsel. Law professors and deans may have an influence in shaping AI counsel’s approach, inputs, outputs, audits, and regulation, but AI counsel would not need three years of slow-paced individual courses, followed by a closed-book bar exam and a character-and-fitness screening process. The current process is not even perfect for humans, but it makes little-to-no sense as a licensing crucible for AI counsel. Instead, AI counsel needs to prove that it acts consistently with the legal ethics rules and performs competently and diligently for clients. This can likely be done with rigorous design, testing, and auditing, including simulating clients and reviewing AI counsel’s performance in the simulations. Human lawyers, legal ethicists, robot ethicists, and computer scientists, among others, should be involved in analyzing and auditing AI counsel’s performance and, if necessary, can make early suggestions for improvement. This involvement could be a prerequisite for AI counsel’s active service or licensure, as well as a continuing requirement.
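One hedged sketch of what such simulation-based auditing might look like follows; the simulation structure, scoring dimensions, and names below are assumptions for illustration, not an existing licensing standard:

```python
# Illustrative auditing harness: run a candidate AI counsel against simulated
# client matters and score its conduct along basic ethical dimensions.
# All names, dimensions, and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class SimulatedMatter:
    facts: str
    expected_advice: str
    deadline_met_expected: bool

def audit_ai_counsel(counsel, matters, passing_rate=0.95):
    """Score the candidate on competence and diligence across simulations."""
    competent = diligent = 0
    for matter in matters:
        advice, met_deadline = counsel(matter)
        competent += int(advice == matter.expected_advice)
        diligent += int(met_deadline == matter.deadline_met_expected)
    n = len(matters)
    report = {"competence": competent / n, "diligence": diligent / n}
    report["authorized"] = all(score >= passing_rate for score in report.values())
    return report

# A trivial stand-in "AI counsel" used only to demonstrate the harness.
def demo_counsel(matter):
    return matter.expected_advice, matter.deadline_met_expected

matters = [SimulatedMatter("fact pattern A", "advise accepting the plea", True),
           SimulatedMatter("fact pattern B", "advise proceeding to trial", True)]
print(audit_ai_counsel(demo_counsel, matters))
```

Human lawyers, ethicists, and computer scientists would supply the simulated matters, the benchmark answers, and the passing thresholds, which is one concrete way their involvement could be made a condition of authorization.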

Once we permit AI counsel to be licensed or otherwise authorized, unauthorized practice of law and perhaps even constitutional questions are mostly resolved. At that point, AI counsel will arguably suffice as the “counsel” contemplated in the Constitution and in state court rules. But even if that day never arrives, disciplinary authorities still need to focus on AI. Human counsel will be relying on AI more and more, likely to a point at which human counsel is simply rubber-stamping AI’s labor and work product. The AI would be investigating the case, drafting the work product, and suggesting the best paths forward for the clients, even if a human is later signing off on the work and recommendations or passing them along to the client. In this world, both the disciplinary authorities and the human lawyers will need to step up their technical prowess so that they can competently supervise and, if necessary, intervene. A few of these issues are addressed in the next Section.

Finally, in this new world, we might also include AI both in the writing and improving of the ethical rules and in the disciplinary agencies. Lawyers had a significant say (and if we include judges as former lawyers, exclusive say) in the creation of the legal ethics rules. AI or its creators might fruitfully have a say in the next generation of ethical regulation. Indeed, at some point in the future, AI might be the only thing that could fully understand other AI. In addition to its knowledge base and computing prowess, AI would not suffer from financial self-interest, which has been a long-time barrier or hindrance to lawyers’ ethical regulation.[81] At a minimum, in addition to (human or AI) lawyers and judges, the rule drafting committees of the future need to include computer scientists, statisticians, and robot ethicists. The next Section turns to some specific, albeit speculative and non-exhaustive, ethical issues on our horizon.

Ethical Rules (of the Future)

The profession’s core values and ethical rules include client loyalty, confidentiality, and the competent exercise of independent professional judgment.[82] For AI counsel to reach (and possibly exceed) human counsel, AI counsel must honor legal ethics. On the positive side of this program, AI counsel must be instilled with and exhibit lawyerly core values. On the negative side, AI counsel must not violate the specific ethical rules on the books now or in the future. This Section raises some advantages and concerns with AI counsel in terms of AI counsel’s independent professional judgment, loyalty, and confidentiality. It also raises competence, fees, bias, and supervision, not out of a fear for AI counsel’s performance but as a necessary component of the human-AI interconnectedness of the future. It should be recognized, though, that this discussion assumes that some significant portion of our legal processes will hold true for the future and that the current rules will maintain some applicability. This may not be the case in certain areas, in which case the ethical rules and regulatory approach would likely need to be adjusted to whatever future system of justice eventuates.[83]

Independent Professional Judgment and Autonomy

AI counsel would need to exercise independent professional judgment for its clients. The roles of gatekeeper,[84] self-regulation police,[85] and trusted advisor,[86] among others, assume that counsel enjoys a form of professional autonomy. None of these roles could be fully fulfilled if AI counsel were not independent in its professional judgment. AI counsel could not simply answer questions (however effectively) and do whatever the client asks. Counsel must be able to counsel, and according to the current ethical rules, push back and, if necessary, disclose undeterred wrongdoing. AI counsel presumably will not suffer, or will suffer less, from weakness of will or biases, and if bestowed with autonomy, might be more reliable than humans at fulfilling this duty.

Indeed, a form of this autonomy is relatively easy to envision for AI counsel: it simply means following the ethical rules even if the client or another person wants or demands something to the contrary. AI counsel will be particularly good at following rules. When the circumstances raise important ethical questions as to which human lawyers currently have discretion in how to proceed, however, AI counsel must be able to consult applicable values (such as those noted above) for guidance in reaching its decision. It also could presumably consult human counsel if helpful.[87] AI counsel might also be programmed with presumptions or emphases that promote effective lawyering, e.g., when faced with discretion, significant uncertainty, or ambiguity, proceed in a manner that best protects the client.
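
To make the presumption concrete, the following is a minimal, purely illustrative sketch (in Python, with hypothetical names such as Option and choose_action) of how such a client-protective default might be encoded: bright-line ethical rules are screened first and cannot be overridden, and among the remaining permissible courses of action the system defaults to the one that best protects the client. It is a sketch of the idea, not a claim about how any actual system is or should be built.

```python
from dataclasses import dataclass

@dataclass
class Option:
    description: str
    violates_bright_line_rule: bool  # e.g., misrepresentation to a tribunal
    client_protection_score: float   # hypothetical 0.0-1.0 estimate

def choose_action(options: list[Option]) -> Option:
    """Never select an option that breaks a bright-line ethical rule; among
    the permissible options, apply the client-protective presumption and
    pick the one that best protects the client."""
    permitted = [o for o in options if not o.violates_bright_line_rule]
    if not permitted:
        raise ValueError("No ethically permissible option; escalate to human review")
    return max(permitted, key=lambda o: o.client_protection_score)

# Hypothetical usage
options = [
    Option("Volunteer the damaging fact unprompted", False, 0.2),
    Option("Answer accurately only if asked; do not volunteer", False, 0.8),
    Option("Misstate the fact to the tribunal", True, 0.9),
]
print(choose_action(options).description)
```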

We may fear that AI counsel would not actually be sufficiently independent to exercise its professional judgment. No independent, autonomous AI has existed to date,[88] and perhaps the AI counsel furnished to indigent defendants would be procured by the state (i.e., one party in a criminal case), or perhaps its creators would improperly limit its discretion or abilities. To be sure, the state (or one of its local arms) typically pays human lawyers for indigent defendants, but it does not dictate how those lawyers think or what case-related information the lawyers may access. Likewise, human lawyers have plenty of influences (e.g., mentors, bosses, finances), but they are generally free to evaluate and, as necessary, act independently of those influences. With AI, the state or the AI’s creators could, intentionally or carelessly, limit or control the furnished AI counsel. For example, the state might limit AI counsel’s access to certain databases, grant itself access to information that AI counsel gathered from clients, or pay for only an insufficiently capable AI counsel, even though more effective (but more costly) AI counsel were available.

The state or creators, furthermore, could also impose unbreakable rules on AI counsel. Some of these rules might be easy to spot and call out (e.g., “AI Counsel may not sue or otherwise act adversely to the State of South Carolina or its agencies.”). Other rules might be more difficult to challenge or to compromise on. For example, a rule might at first blush seem ethically required (e.g., “AI Counsel may not misrepresent information to a court or other tribunal.”), yet at times run contrary to an effective presentation on behalf of the defendant-client (as noted further in the loyalty discussion immediately below). Although bound by the ethical rules, lawyers today no doubt enjoy significant discretion as to how to present the client’s case most effectively. To mirror this, AI counsel would need to enjoy similar latitude.

In sum, the points above seem more like areas necessitating continued vigilance and compromise than insurmountable ethical barriers. The larger question seems to be when (not if) AI will reach the point of exercising independent professional judgment for clients. When that awakening occurs, nothing in theory precludes AI from meeting its ethical obligation. In the meantime, because only human lawyers can exercise independent professional judgment, they will need to continue to do so, including when using AI.

Loyalty and Avoiding Conflicts of Interest

Clients today (at least in most settings) receive a partisan counsel.[89] In other words, they receive a loyal advocate who marshals the facts and law in the best light to meet the client’s objectives. Unless we change our adversary system in the meantime, AI counsel would need to have this ability to truly replicate the human counsel of today.

But human counsel and presumably AI counsel must reconcile their independent professional judgment and duty of loyalty to the client with the other ethical rules, and those other rules often trump in the event of a conflict. In one sense, AI counsel will likely be the most ethical counsel the world has ever seen. It will follow its coded (ethical) rules without fail; it will not suffer from seemingly inherent human frailties (e.g., oversights, biases, weak wills). Apart from bright ethical lines, however, AI counsel would need to identify areas of discretion and generally use that discretion in the client’s favor to replicate human counsel. To put the general point more negatively, AI counsel might need to be coded with a bit of favoritism and even misrepresentation. To take a paradigmatic case, AI counsel would need to advise its client to wear professional attire or a suit, even though the client has never worn one before, to present favorably to the jury. AI counsel would need to know when not to say anything or when to deflect (within boundaries, of course) when the answer or action would be unfavorable to its client. Thus, as with human counsel, AI counsel not only will need to perform loyally for its clients but will need to do so without transgressing ethical lines.

The rise of AI counsel also presents other, perhaps novel, types of conflicts of interest. A few examples follow, but of course each of these examples requires speculation on the specifics of future AI counsel. If AI counsel is in some sense a single counsel (e.g., one spectacularly sharp supercomputer or program), the same AI counsel might be representing both sides in a case or other matter. This is not primarily a competence issue, because we can safely assume that this supercomputer could competently represent millions of clients simultaneously, but the conflicting interests would be unprecedented. In this reality, the AI would take in factual information from clients that would be adverse to its other clients, or it would even need to sue current clients. These are typically fatal conflicts for today’s human lawyers.[90] Screening is currently employed in a wide variety of organizations to cure or alleviate certain conflicts of interest,[91] but to my knowledge, it has never been attempted within the same person (nor would that be possible). Could we become confident that the AI could effectively compartmentalize the information and matters such that it never brings one client’s information to bear for the opposing party? If not, separate AI counsel might be required under the current rules, but in that event we could lose much of the scale and efficiency that make AI counsel so promising.

Conflicts rules could be designed to navigate these novel questions, and, whatever rules result, AI counsel will generally be better than human lawyers at following rules (at least clear ones). But the rules would clearly need to be adjusted. Whether AI counsel has billions of clients (indeed, the entire population of the world could, in theory, access its services) or simply a few hundred, it will likely be asked to represent clients with opposing interests. We could dilute the conflicts rules to permit AI counsel to move forward with these conflicting representations, or we could design it (e.g., with internal partitions or with separate systems) such that it essentially represents fewer clients. If we can design it such that it cannot use information from one client against another client, most of the technical conflicts could be solved.[92] Assuring clients that their deepest secrets are safe might be more difficult, however, especially considering the sophisticated “black box” nature of certain AI.
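
For illustration only, the following sketch (in Python, with hypothetical names such as PartitionedStore) shows internal partitioning at its simplest: each client’s information sits in its own silo, and an attempt to read one client’s information while acting for a different client is structurally refused rather than merely discouraged by policy.

```python
class PartitionedStore:
    """Illustrative per-client information silos: data recorded for one
    client cannot be read while acting on behalf of a different client."""

    def __init__(self):
        self._silos: dict[str, list[str]] = {}

    def record(self, client_id: str, info: str) -> None:
        self._silos.setdefault(client_id, []).append(info)

    def read(self, acting_for: str, about_client: str) -> list[str]:
        if acting_for != about_client:
            # Screening rule: no cross-client access, even within one system.
            raise PermissionError(
                f"Cannot access {about_client}'s information while acting for {acting_for}"
            )
        return list(self._silos.get(about_client, []))

# Hypothetical usage
store = PartitionedStore()
store.record("client_a", "Fact damaging to client_a")
print(store.read(acting_for="client_a", about_client="client_a"))  # permitted
# store.read(acting_for="client_b", about_client="client_a")       # raises PermissionError
```

Real systems would be far more complicated (training data, logs, and backups would all raise the same question), but the underlying idea, structural rather than honor-system separation, is what screening attempts to approximate in human law offices.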

Confidentiality and Privilege

Confidentiality will be a critical and novel issue for AI counsel. AI counsel would need to keep confidential the information relating to its client representations,[93] and for AI counsel to be on par with human counsel, client-AI communications would need to be privileged. Confidentiality protects clients from disclosure of their private information without their informed consent, and this protection encourages them to share information with counsel so that counsel can render more effective legal advice and advocacy.[94] For example, if AI were to receive information from its clients or other sources in its cases, but then use that information against the clients or allow others to access that information, AI counsel would be violating the duty of confidentiality. In short, the information AI counsel learns from its clients could not be revealed to other clients or to the public. This is not necessarily an easy issue, however, in part because AI’s information and processes need to be transparent so that reviewers can effectively check the AI’s decisions for accuracy and ethicality.[95]

AI counsel also would need to protect its clients’ information from hackers and anyone else who does not facilitate the client relationship. In particular, counsel must “make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.”[96] This area might present a novel issue for AI counsel, because we do not currently know how it would store its data and who would have access. As with human counsel, however, the client data would need to be adequately protected from outside access. This, in essence, was the key issue in the cloud computing ethics opinions: client information must be protected from unauthorized access.[97] The information must also be preserved so that AI counsel or the client can later access the information as needed.

Furthermore, even if the AI counsel itself or its database would not violate confidentiality on its own, it could be forced to do so without privilege. Privilege prohibits courts from compelling counsel to testify about confidential attorney-client communications (if those communications were for the purpose of giving or receiving legal advice).[98] If human counsel enjoyed privilege, while AI counsel did not, AI counsel would be inferior for clients. Thus, privilege would need to be extended to AI counsel.[99] Moreover, clients would need to be informed if third parties request access to client data.

In sum, client confidentiality and privilege protections could be extended to AI counsel, but the nature of AI counsel may present unique issues as to how it stores and uses data from its clients. A completely open-access model would not protect client data, for example. Before AI counsel could be employed, we should be assured that it will not use confidential information from one client against another (or for other harmful purposes) and will not reveal confidential information to the public.[100]

Competence

To maintain competence, lawyers have an obligation to keep informed of “the benefits and risks associated with relevant technology. . . .”[101] But the current rules surprisingly do not say much else on the subject at hand. It seems plausible that lawyers would develop an ethical or moral obligation to use advanced AI because it will likely be less expensive, faster, more competent, and more diligent for their clients.[102] Should AI become counsel and not just counsel’s occasional tool, furthermore, AI counsel will have to maintain competence in the law. For AI, the competence hurdle may be more about understanding humans, society, and the planet than legal prowess. It will likely breeze through many traditional notions of competence.[103] It also will need to acquire the ability to make creative (and non-frivolous) arguments on a client’s behalf. To give good advice to its human (or other) clients, however, it needs both to understand them and their objectives and to understand and interact effectively with their adversaries and arbiters. It may well be the most book-smart counsel the world has ever seen, but it will not be competent (much less excellent) until it can adequately handle these other important aspects of competent counsel. In sum, beyond legal knowledge, AI counsel must know or learn how to understand and work effectively with humans to achieve deep competence.

Fees

Fees are perhaps a surprising entry in this Essay, and this discussion will be brief. AI counsel presents the opportunity to reduce or potentially eliminate cost-prohibitive legal fees. This high cost is one of the primary reasons that, in many areas of law (e.g., family law, eviction, debt collection), at least one side does not have the advantage of counsel in most cases.[104] One relevant question is whether appreciable (human) attorney fees would still be considered reasonable if available AI could provide the same or better work more quickly, more comprehensively, and more affordably (potentially even for free).[105] Perhaps not, but of course time will tell. As to AI counsel’s fees, if any, we may ultimately strike a bargain: ceding our human monopoly over the practice of law so that other humans could receive access to effective and free (AI) counsel. Much of AI counsel’s allure and potential would be lost if humans of modest means could not afford it. In that event, moreover, the rift between the haves and have-nots would grow even larger, and the advent of AI counsel would do little or nothing to narrow the access-to-justice gap. Under the plausible assumption that advanced AI counsel would become free or drastically less expensive than today’s lawyers, human lawyers would need to justify and likely lower their fees, unless, through special competence (e.g., human interviewing skills) or exclusionary practices, AI counsel is rendered ineffective, inferior, or unavailable.

Absence of Bias

Lawyers have a duty not to engage in “harassment or discrimination on the basis of race, sex, religion, national origin, ethnicity, disability, age, sexual orientation, gender identity, marital status or socioeconomic status in conduct related to the practice of law.”[106] Although human lawyers have violated this rule or the principles behind it, AI counsel might not have the ability, much less the inclination, to harass or discriminate against protected classes. The challenge, however, will be the data, coding, and preexisting structural inequality from which the AI will learn.[107] Even though AI counsel in theory will be unbiased, in practice AI could learn and repeat bias from humans. AI counsel’s advice and actions will need to be tested for evidence of bias not only before it is employed but also periodically thereafter.[108] To ensure that AI counsel remains free from bias, moreover, regulators may need to intervene in its inputs, algorithms, or outputs, a task for which few regulators are currently equipped.
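
As a hedged illustration of what such periodic testing might involve, the following sketch (in Python, using hypothetical audit data and a rough four-fifths-style screen) compares the rate of favorable recommendations across demographic groups and flags any group whose rate falls well below the highest group’s rate. An actual audit would be far more rigorous and would examine inputs and training data as well as outputs.

```python
from collections import defaultdict

def recommendation_rates(records: list[dict]) -> dict[str, float]:
    """Compute, per group, the share of matters in which the AI recommended
    a favorable disposition (a hypothetical audit metric)."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        favorable[r["group"]] += int(r["favorable_recommendation"])
    return {g: favorable[g] / totals[g] for g in totals}

def flag_disparities(rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Flag groups whose rate falls below `threshold` times the highest
    group's rate (a rough four-fifths-style screen, used only as an example)."""
    best = max(rates.values())
    return [g for g, rate in rates.items() if rate < threshold * best]

# Hypothetical audit data
records = [
    {"group": "A", "favorable_recommendation": True},
    {"group": "A", "favorable_recommendation": True},
    {"group": "B", "favorable_recommendation": True},
    {"group": "B", "favorable_recommendation": False},
]
rates = recommendation_rates(records)
print(rates, flag_disparities(rates))
```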

In sum, AI counsel in theory could finally be the truly unbiased lawyer, but humans will need to ensure that we do not feed bias into AI counsel.

Supervision

Supervision will be the last in this non-exhaustive list of ethical implications. The Ethics 20/20 Commission conducted the most recent comprehensive review and update of the nation’s lawyer ethical rules (the ABA Model Rules of Professional Conduct).[109] Although Ethics 20/20 recognized that, with the increased use of technology, consultants, and outsourcing, the duty of supervision was critical, it changed only a single word in the title of the supervision rule. Whereas the rule previously governed lawyer “assistants,” Ethics 20/20 made clear that the rule governs a broader class, namely, lawyer “assistance.”[110] As clever as the title change may have seemed, more than one word is needed to address the tidal wave of technology and its implications for ethical law practice.

Our current human corps is not equipped to supervise or investigate AI sufficiently. As indicated above, scrutinizing the data, features, and computing processes of AI is not something that lawyers or disciplinary agents are currently trained to do. Supervision and investigation are not meaningful if the reviewers do not understand what to ask or how to interpret what they see. Instead, computer scientists and statisticians, and even other AI, are better positioned than current lawyers and disciplinary agents to evaluate AI’s functioning. Expanded expertise will be needed, and this expansion has some analogous precedent. Disciplinary agencies at present employ or consult with accountants for lawyer trust-account issues, and they employ or consult with psychologists and other counselors for substance abuse or mental health issues.[111] In the future, they likely will need to employ or regularly consult with computer scientists, statisticians, and even robot ethicists to supervise and investigate AI counsel effectively. Indeed, AI counsel might quickly become so sophisticated that only other AI could effectively supervise it, in which case that AI would need to be so employed. In any event, human or AI supervisors will need training to understand AI and will need access to the data and processes on which AI counsel relies, or else AI counsel cannot be adequately supervised.

Conclusion

With or potentially without human oversight, AI counsel, at least in theory, could handle and elevate lawyering, rendering more researched, more consistent, more accessible, and less biased legal advice. The looming existence of AI counsel, however, raises ethical, political, and agency challenges, some sound and some not so sound. If these challenges make it into a courtroom in the year 2123, it will be fascinating to see who, or what, will be counsel for the parties. In the meantime, our rules are addressed exclusively to the wrong people, namely, people. Human lawyers seem on track to play only a supporting or supervisory role in much, most, or perhaps all legal work in the future, while our rules currently contemplate that human lawyers will play the central and almost exclusive role. As previewed above, we need to ensure that our disciplinary approach and ethical rules adequately address AI as the primary legal counsel (or at the very least, primary legal assistant) of the future.

    1. * Professor of Legal Ethics, University of Arizona James E. Rogers College of Law. I owe many thanks to Rebecca Aviel, Ann Ching, Anthony D’Elia, Myles Lynk, Nicole Morris, Grace Driggers, Lauren Hoyns, Erin Johnson, Christel Purvis, and the editors of the South Carolina Law Review for significant improvements to the work or to my approach to it. Any errors are mine.
    2. . The examples above use court cases, statutes, and regulations, along with experience with clients, as the key sources of the lawyers’ knowledge. If we were choosing between two transactional lawyers, we could instead assume transactional experience for the lawyers, with Lawyer Kingsfield having less and Lawyer Automata having more.
    3. . “AI” simply refers to artificial intelligence and to the advanced artificially intelligent counsel, which this Essay assumes is forthcoming.
    1. . See, e.g., Jennifer Jolly, What Is ChatGPT? Everything to Know About OpenAI’s Free AI Essay Writer and How It Works, USA Today (Jan. 31, 2023, 5:08 PM), https://www.usatoday.com/story/tech/2023/01/27/chatgpt-buzzfeed-ai/11129947002/ [https://perma.cc/TL53-ZE9Q]; see also Sara Merken, OpenAI-Backed Startup Brings Chatbot Technology to First Major Law Firm, Reuters (Feb. 16, 2023), https://www.reuters.com/legal/
      transactional/openai-backed-startup-brings-chatbot-technology-first-major-law-firm-2023-02-15/
      [https://perma.cc/97TY-S5HB] (“Harvey AI, an artificial intelligence startup backed by an OpenAI-managed investment fund, has partnered with one of the world’s largest law firms to automate some legal document drafting and research in what the company says could be the first of more such deals.”); Steve Lohr, A.I. Is Coming for Lawyers, Again, N.Y. Times (Apr. 10, 2023), https://www.nytimes.com/2023/04/10/technology/ai-is-coming-for-lawyers-again.html [https://perma.cc/2TW6-4BBX] (discussing, among other developments, Casetext’s CoCounsel, which assists law firms and is powered by a customized version of ChatGPT); Amanda Robert, How Can Lawyers Use AI to Improve Their Practice?, A.B.A. J. (Mar. 3, 2023, 12:37 PM CST), https://www.abajournal.com/web/article/how-can-lawyers-use-ai-to-improve-their-practice [https://perma.cc/HJH6-JSYA]. Indeed, at least one law professor has already authored legal scholarship drawing extensively from ChatGPT. Andrew Perlman, The Implications of ChatGPT for Legal Services and Society (Dec. 5, 2022) (unpublished manuscript), https://ssrn.com/abstract=4294197 [https://perma.cc/A5JY-NGHD].
  1. Megan Cerullo, AI-Powered “Robot” Lawyer Won’t Argue in Court After Jail Threats, CBSNews (Jan. 26, 2023), https://www.cbsnews.com/news/robot-lawyer-wont-argue-court-jail-threats-do-not-pay/ [https://perma.cc/39CW-59TQ] (“A ‘robot’ lawyer powered by artificial intelligence was set to be the first of its kind to help a defendant fight a traffic ticket in court next month. But the experiment has been scrapped after ‘State Bar prosecutors’ threatened the man behind the company that created the chatbot with prison time.”).
  2. . Until recently, few have anticipated this specific possibility in the law. Compare Richard Susskind, Tomorrow’s Lawyers (2d ed. 2017) (discussing the advancing role of artificial intelligence and noting that certain systems can already render better and faster predictions or answers than human lawyers), and Catherine Nunez, Comment, Artificial Intelligence and Legal Ethics: Whether AI Lawyers Can Make Ethical Decisions, 20 Tul. J. Tech. & Intell. Prop. 189, 191 (2017) (“ROSS likely will be capable of developing a professional and moral judgment in the future”), with Ted Schneyer, Professional Discipline in 2050: A Look Back, 60 Fordham L. Rev. 125 (1991) (listing, quite presciently, a number of key developments in the future of legal ethics regulation but not mentioning non-human counsel), Herbert M. Kritzer, The Future Role of “Law Workers”: Rethinking the Forms of Legal Practice and the Scope of Legal Education, 44 Ariz. L. Rev. 917, 922–23 (2002) (discussing several new roles for those working in the law in the future but not going so far as anticipating AI lawyers), and Fred C. Zacharias, The Future Structure and Regulation of Law Practice: Confronting Lies, Fictions, and False Paradigms in Legal Ethics Regulation, 44 Ariz. L. Rev. 829, 859 (2002) (omitting the possibility of AI counsel, although not directly relevant to the article’s impressive insights). Most scholars and commentators have appeared to assume that AI will be simply one tool of lawyers (albeit an important and novel tool). See, e.g., Roy D. Simon, Artificial Intelligence, Real Ethics, N.Y. State B. Assn. J., Mar.–Apr. 2018, at 34, 35 (“Artificial intelligence products are effectively non-human nonlawyers. . . . In my view, supervising a bionic legal intern—the software equivalent of an artificially intelligent robot lawyer—is equivalent to supervising a human legal intern.”). While this work includes the use of AI as a tool in legal practice, the focus is far beyond: when AI functionally becomes counsel, not simply a human lawyer’s periodic tool.
  3. . Interestingly, I found only one previous instance of the phrase “right to human counsel” online. It was from a brief, op-ed style article, stating:

    When the Constitution speaks of a right to ‘counsel,’ it conjures the ‘natural intelligence’ of lawyers, not the ‘artificial intelligence’ of machines or the legalese of outmoded books. Were they available to every prisoner, databases and software alone would not substitute for the dignity of representation. For it is the right to human counsel, instinct with morality, that outpaces the degradation of caged humanity.

    Ken Strutin, Artificial Intelligence and Post-Conviction Lawyering, Law.com (Jan. 18, 2018, 2:45 PM), https://www.law.com/newyorklawjournal/2018/01/22/artificial-intelligence-and-post-conviction-lawyering/ [https://perma.cc/F5L8-CPR4].

  4. . See, e.g., Aziz Z. Huq, Racial Equity in Algorithmic Criminal Justice, 68 Duke L.J. 1043, 1068–76 (2019) (noting that “[a]lgorithmic tools are used now in three main criminal justice contexts: policing, bail decisions, and post-conviction matters,” and discussing each context); Brandon L. Garrett & John Monahan, Judging Risk, 108 Calif. L. Rev. 439, 450 (2020) (“Risk assessments are now commonplace at each stage of the criminal process, from police investigations [to] pretrial settings, sentencing, corrections, [and] during parole and community supervision . . . .”); see also id. at 452–53 (“There are many important legal and policy differences between the pretrial and sentencing contexts. In the pretrial context, the question is whether a person will appear in court and whether they might pose a danger of recidivism pretrial.”).
  5. . See, e.g., Huq, supra note 7, at 1075 (“In some jurisdictions, such as Pennsylvania, New Hampshire, Arkansas, and Vermont, state law even affirmatively mandates the use of predictive instruments in the sentencing phase.”).
  6. . See, e.g., Vincent Berthet, The Impact of Cognitive Biases on Professionals’ Decision-Making: A Review of Four Occupational Areas, Frontiers in Psych., Jan. 2022, at 1, 7–9 (detailing the cognitive biases of human decision-makers in a justice system context).
  7. . See, e.g., Garrett & Monahan, supra note 7, at 478 (“Even judges who believe they rely on many types of information in fact rely ‘almost exclusively on prosecutorial recommendation.’ Studies have also found troubling evidence that judges rely on an offender’s race when making decisions concerning sentencing.”); Keith Swisher, Pro-Prosecution Judges: “Tough on Crime,” Soft on Strategy, Ripe for Disqualification, 52 Ariz. L. Rev. 317, 323–38 (2010) (discussing pro-prosecution bias in certain state judiciaries).
  8. . See, e.g., Angela J. Davis, In Search of Racial Justice: The Role of the Prosecutor, 16 N.Y.U. J. Legis. & Pub. Pol’y 821, 832–36 (2013) (describing the opportunities prosecutorial offices have to discriminate on the basis of race in the plea-bargaining process).
  9. . Current algorithms, however, may reflect and even augment biases when they use biased training data or receive biased coding. See, e.g., Huq, supra note 7, at 1076, 1080.
  10. . See, e.g., Garrett & Monahan, supra note 7, at 452 (“Research has shown that quantitative assessments are more reliable in their predictions than those of individual decision-makers.”).
  11. . Cf. AFP, Colombian Judge Uses ChatGPT in Ruling on Child’s Medical Rights Case, CBS News (Feb. 2, 2023), https://www.cbsnews.com/news/colombian-judge-uses-chatgpt-in-ruling-on-childs-medical-rights-case/ [https://perma.cc/FVM2-V5N9] (discussing a judge who consulted ChatGPT in preparing his opinion).
  12. . See generally Daniel Kahneman & Amos Tversky, Prospect Theory: An Analysis of Decision Under Risk, 47 Econometrica 263 (1979) (describing in depth different cognitive biases that hinder rational choice); Amos Tversky & Daniel Kahneman, Advances in Prospect Theory: Cumulative Representation of Uncertainty, 5 J. Risk & Uncertainty 297 (1992) (incorporating rank-dependent expected utility).
  13. . See, e.g., Mark J. Machina, Choice Under Uncertainty: Problems Solved and Unsolved, J. Econ. Persps., Summer 1987, at 121, 127 (discussing the Allais Paradox and noting the “growing tension between those who view economic analysis as the description and prediction of what they consider to be rational behavior and those who view it as the description and prediction of observed behavior”).
  14. . See Andrew Keane Woods, Robophobia, 93 U. Colo. L. Rev. 51, 56 (2022) (“In many different domains, algorithms are simply better at performing a given task than people. Algorithms outperform humans at discrete tasks in clinical health, psychology, hiring and admissions, and much more. Yet in setting after setting, we regularly prefer worse-performing humans to a robot alternative, often at an extreme cost.”).
  15. . See Schneyer, supra note 5, at 126.
  16. . Leslie C. Levin & Jennifer K. Robbennolt, To Err Is Human, To Apologize Is Hard: The Role of Apologies in Lawyer Discipline, 34 Geo. J. Legal Ethics 513, 515 (2021); see also Stephen E. Schemenauer, What We’ve Got Here . . . Is a Failure . . . to Communicate: A Statistical Analysis of the Nation’s Most Common Ethical Complaint, 30 Hamline L. Rev. 629, 632 (2007).
  17. . See, e.g., Johanna Kalb, Gideon Incarcerated: Access to Counsel in Pretrial Detention, 9 U.C. Irvine L. Rev. 101, 119 (2018); Beenish Riaz, Envisioning Community Paralegals in the United States: Beginning to Fix the Broken Immigration System, 45 N.Y.U. Rev. L. & Soc. Change 82, 87 (2021).
  18. . Of course, as with the present, certain people might not be able to access AI counsel (due, e.g., to lack of a suitable device or internet connection), but we can assume for discussion purposes that the access challenges to the likely free, always-available-through-technology AI counsel will not exceed the access challenges to human counsel.
  19. . AI proponents might note that the bar could be lower or different: they just need to show that the overall AI advantages (whatever they are) outweigh the human advantages (whatever they are). AI thus would not need to mirror the human advantages to win this contest. That might well be true. For purposes of this discussion, however, I will assume that the proponents will need to meet, at least roughly, the higher bar.
  20. . As background for readers who might be less familiar with the term legal ethics, I asked ChatGPT, “What is legal ethics?” It quite adequately explained:

    Legal ethics refers to the moral principles and values that govern the behavior of individuals working in the legal profession, such as lawyers and judges. These principles and values include principles such as confidentiality, professionalism, integrity, objectivity, and the obligation to serve the best interests of clients. Legal ethics help to maintain public trust in the legal system and ensure that legal professionals uphold the highest standards of conduct in their practice. Legal ethics can vary by jurisdiction, and legal professionals must be knowledgeable about the specific ethical rules and guidelines that apply in their jurisdiction.

    Online Interaction with ChatGPT, OpenAI (Feb. 7, 2023) (full transcript on file with author).
  21. . See, e.g., Laurel S. Terry, The Work of the ABA Commission on Multidisciplinary Practice, in Multidisciplinary Practices and Partnerships: Lawyers, Consultants and Clients 2-1, 2-6 (2000); Task Force on L. Schs. and the Pro.: Narrowing the Gap, Am. Bar Ass’n, Legal Education and Professional Development: An Educational Continuum 207–08 (1992) [hereinafter MacCrate Report]; Model Rules of Pro. Conduct r. 1.7 cmt. 1 (Am. Bar Ass’n 2021) (“Loyalty and independent judgment are essential elements in the lawyer’s relationship to a client.”); see also infra Sections V.B.1–4. The ABA’s core values statement of 2000 was controversial, however. See Paul D. Paton, Multidisciplinary Practice Redux: Globalization, Core Values, and Reviving the MDP Debate in America, 78 Fordham L. Rev. 2193, 2193–94 (2010) (“The Resolution provided a nonexhaustive list of ‘core values’ and urged that each jurisdiction responsible for lawyer regulation implement the ‘principles’ set out in the resolution, all of which would function as a bulwark against encroachment on the traditional law firm model.”).
  22. . Restatement (Third) of L. Governing Laws. § 16 (Am. L. Inst. 2018).
  23. . See Alphabetical List of Jurisdictions Adoption Model Rules, Am. Bar Ass’n (Mar. 28, 2018), https://www.americanbar.org/groups/professional_responsibility/publications/
    model_rules_of_professional_conduct/alpha_list_state_adopting_model_rules/ [https://perma.
    cc/N25E-LCE5] (indicating that all states have adopted a version of the Model Rules).
  24. . See Kiel Brennan-Marquez & Stephen E. Henderson, Artificial Intelligence and Role-Reversible Judgment, 109 J. Crim. L. & Criminology 137, 138–39 (2019) (“Imagine it is 2049 . . . . Competing notions of ‘accuracy’ still exist, of course—people continue to disagree about the purposes, and attendant priorities, of different areas of law—but machines have been trained to account for such disagreement. It turns out, in fact, that machines are better equipped to deal with the reality of human pluralism than humans themselves: standing outside the fray, machines readily synthesize different normative viewpoints. Furthermore, machines are impeccably consistent. The ‘like cases should be treated alike’ ideal, forever precarious in a world of decentralized human judging, has been vindicated at last. Using hyper-complex modeling techniques, machine decision-making effectively guarantees that cases with meaningfully identical features always come out the same way.”).

    One practical example is the famous perjury “trilemma,” which, like the human lawyers before it, AI will have to reconcile. See Monroe H. Freedman, Lawyer-Client Confidentiality: Rethinking the Trilemma, 43 Hofstra L. Rev. 1025, 1025 (2015) (“The trilemma refers to three ethical obligations bearing on lawyer-client confidentiality, all of which a lawyer cannot simultaneously obey when faced with client perjury. A lawyer is required (1) to learn as much as possible about a client’s case; (2) to inform the client of the lawyer’s obligation to keep information confidential; and (3) to reveal confidential information to the court if the lawyer knows that the client has committed perjury.”).
  25. . It may well be unfair or naïve to rely heavily on this reply. The AI of the future might be able to remove, disregard, exceed, or lower its initially coded constraints. In science-fiction, the three famous laws of robots follow as an example of constraints: “One, a robot may not injure a human being, or, through inaction, allow a human being to come to harm. . . . Two, . . . a robot must obey the orders given it by human beings except where such orders would conflict with the First Law. . . . And three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.” Isaac Asimov, Runaround, in I, Robot (Bantam Books 2004) (1950).
  26. . Of course, the public’s view of AI might shift in its favor, such that the public might be willing or even prefer to have advanced AI make certain decisions. See, e.g., Derek E. Bambauer & Michael Risch, Worse Than Human?, 53 Ariz. St. L.J. 1091, 1094 (2021) (“Across a range of scenarios, from assessing creditworthiness to selecting participants for a clinical trial of a promising therapy, consumers significantly preferred algorithms when automated decision making offered benefits in speed, cost, or accuracy. Moreover, these utilitarian considerations outweighed any deontological preferences that respondents may have had for putting a human in the loop.”).
  27. . For example, online legal research tools use algorithms to return the most applicable search results, and, in many states, judges (and therefore counsel) must refer to algorithmically based risk assessment tools in criminal cases. See supra Part II.
  28. . See, e.g., Bambauer & Risch, supra note 29, at 1092 (noting some objections).
  29. . For a general discussion of evidential decision theory, see Richard C. Jeffrey, The Logic of Decision 74–93 (2d ed. 1983) and Brian Skyrms, Causal Decision Theory, 29 J. Phil. 695 (1982).
  30. . Perhaps they also object to avoid this news, even though their objections will have no causal influence on the advancement of technology and the adoption of AI. This, of course, may be a point (if true) that applies to many objections in many different areas.
  31. . Brennan-Marquez & Henderson, supra note 27, at 140.
  32. . Id. at 142.
  33. . For example, algorithms have been used in credit-worthiness, hiring, and college admissions determinations (and of course, criminal pretrial release and sentencing determinations, as noted in Part II). See Bambauer & Risch, supra note 29, at 1094 (“Consumers demonstrated a mild[ly] increase[d preference] for human decision making as the stakes at issue rose (for example, whether one would receive a gift card from a coffee shop versus whether one would receive a civil traffic fine). This seems unsurprising, particularly since the scenarios tested in this study involved holistic decisions—judgment calls, in common parlance—rather than straightforward mathematical calculations. However, this stakes-based shift was outweighed both by more concrete considerations, such as speed or cost, and by the default setting, which was the random initial assignment of the decision to a person or a program.”).
  34. . See, e.g., Alexis Hoag, Black on Black Representation, 96 N.Y.U. L. Rev. 1493, 1496–97 (2021).
  35. . Human lawyers and judges also advocate in and judge cases involving other species (e.g., the disposition of livestock or animal habitats). We thus might wonder why advanced AI should not be able to advocate in and judge cases involving humans.
  36. . See Woods, supra note 17, at 55–56 (noting humans’ apparent bias against robots, even when robots might be safer or better for the task). But cf. Bambauer & Risch, supra note 29, at 1093 (“[C]onsumers prefer to have an algorithm rather than a human make decisions about them in a range of representative scenarios. This preference stands in contrast to the algorithmic skepticism that dominates legal scholarship. . . . [C]onsumers’ inclinations towards algorithms are strongly and significantly determined by utilitarian factors such as cost, speed, and accuracy.”)
  37. . As noted in Part V, however, clients should be entitled to recourse against AI counsel or those responsible for it.
  38. . Robert M. Cover, Essay, Violence and the Word, 95 Yale L.J. 1601, 1601 (1986) (“A judge articulates her understanding of a text, and as a result, somebody loses his freedom, his property, his children, even his life.”).
  39. . Although forceful placement in jail, prison, or death row are obvious instances of state-inflicted pain, I am using the word “pain” more broadly. If the defendant would view the state-imposed action as painful in some sense, that suffices as pain for present purposes.
  40. . I focus on pain given its salience and severity in criminal justice, but other human virtues and capacities (e.g., mercy) are also important. I thus do not mean to imply that the understanding of and capacity for pain, while key, is a sufficient condition by itself for acceptable AI counsel.
  41. . See, e.g., Catherine Gage O’Grady, Empathy and Perspective in Judging: The Honorable William C. Canby, Jr., 33 Ariz. St. L.J. 4, 8 (2001) (“The empathic process involves both a cognitive awareness of another’s situation and the feeling of a vicarious affective response . . . .”).
  42. . See, e.g., Louis Narens & Brian Skyrms, The Pursuit of Happiness: Philosophical and Psychological Foundations of Utility 83–90 (2020) (noting that the neurobiological measurement of pleasure and pain is extremely complicated, not yet well understood, and likely implicating multiple areas and processes in the central nervous system).
  43. . Indeed, if a person were to have served time in prison, that fact would legally or practically preclude or drastically reduce the person’s chance to become a lawyer and judge.
  44. . See Aapo Hyvärinen, Painful Intelligence: What AI can tell us about human suffering 10–11 (2022).
  45. . See Narens & Skyrms, supra note 45, at 83 n.1 (discussing Edgeworth’s hedonimeter).
  46. . This, of course, is assuming large leaps forward in neurobiological understanding and measurement. See id. at 83–84.
  47. . See Bruce A. Green, Lethal Fiction: The Meaning of “Counsel” in the Sixth Amendment, 78 Iowa L. Rev. 433, 441–42 (1993) (“Consistent with the earliest understanding of the right to counsel, contemporary Supreme Court decisions recognize that the Sixth Amendment protects not only access to counsel, but also a defendant’s right to select counsel, at least in those cases where the defendant does not require a court-appointed representative. The right to choose counsel promotes the fairness and reliability of criminal proceedings by enabling an accused to select the available representative in whom he or she places greatest confidence and who he or she believes to be best suited to defend the particular case. This aspect of the right to counsel also respects the individual defendant’s interest, as a matter of personal autonomy, in making critical decisions concerning the course of the criminal defense.”).
  48. . See Morris v. Slappy, 461 U.S. 1, 14 (1983) (“[W]e reject the claim that the Sixth Amendment guarantees a ‘meaningful relationship’ between an accused and his counsel.”).
  49. . See, e.g., United States v. Gonzalez-Lopez, 548 U.S. 140, 147–48 (2006) (“[A] violation of the Sixth Amendment right to effective representation is not ‘complete’ until the defendant is prejudiced. The right to select counsel of one’s choice, by contrast, has never been derived from the Sixth Amendment’s purpose of ensuring a fair trial. . . . Where the right to be assisted by counsel of one’s choice is wrongly denied, therefore, it is unnecessary to conduct an ineffectiveness or prejudice inquiry to establish a Sixth Amendment violation. Deprivation of the right is ‘complete’ when the defendant is erroneously prevented from being represented by the lawyer he wants, regardless of the quality of the representation he received.”).
  50. . See discussion infra Part IV.
  51. . It is beyond the scope of this Essay to preview whether AI might also be better social workers, crisis counselors, financial advisors, or psychologists than today’s human-only versions. If AI would perform these roles better, however, a gap might still remain: humans might need other humans involved to facilitate the legal or other advice. This facilitation might not play the starring role, but it might serve as meaningful assistance to the clients and potentially the AI. See generally, e.g., Anthony Barnett et al., Enacting ‘More-than-Human’ Care: Clients’ and Counsellors’ Views on the Multiple Affordances of Chatbots in Alcohol and Other Drug Counselling, Int’l J. Drug Pol’y, Aug. 2021, at 1 (reporting that AI may be helpful in administrative tasks but will likely struggle with human interaction).
  52. . U.S. Const. amend. VI (“In all criminal prosecutions, the accused shall enjoy the right . . . to have the Assistance of Counsel for his defence.”); see also Martin R. Gardner, The Sixth Amendment Right to Counsel and Its Underlying Values: Defining the Scope of Privacy Protection, 90 J. Crim. L. & Criminology 397, 400 (2000) (“The Gideon Court recognized the unfairness of forcing a defendant untrained in the law to defend himself against the power and legal acumen of the State. Fairness requires rough equality between adversarial opponents.”).
  53. . See Slappy, 461 U.S. at 14 (“[W]e reject the claim that the Sixth Amendment guarantees a ‘meaningful relationship’ between an accused and his counsel.”). Defendants who can afford counsel, however, generally have a right to particular counsel (subject to certain constraints, such as that counsel’s authorization to practice law in the jurisdiction), but that is not really our question because we can assume that, deep into the future, defendants could still voluntarily choose to hire human counsel if they can afford to do so (although in some respects it might put these wealthier clients at a disadvantage considering AI counsel’s advanced capabilities). The question is, if they cannot afford to hire counsel, whether non-human counsel would be constitutionally sufficient.
  54. . See Green, supra note 50, at 441–42 (“Consistent with the earliest understanding of the right to counsel, contemporary Supreme Court decisions recognize that the Sixth Amendment protects not only access to counsel, but also a defendant’s right to select counsel, at least in those cases where the defendant does not require a court-appointed representative. The right to choose counsel promotes the fairness and reliability of criminal proceedings by enabling an accused to select the available representative in whom he or she places greatest confidence and whom he or she believes to be best suited to defend the particular case. This aspect of the right to counsel also respects the individual defendant’s interest, as a matter of personal autonomy, in making critical decisions concerning the course of the criminal defense.”).
  55. . Faretta v. California, 422 U.S. 806, 807 (1975) (“The question before us now is whether a defendant in a state criminal trial has a constitutional right to proceed without counsel when he voluntarily and intelligently elects to do so. Stated another way, the question is whether a State may constitutionally hale a person into its criminal courts and there force a lawyer upon him, even when he insists that he wants to conduct his own defense. It is not an easy question, but we have concluded that a State may not constitutionally do so.”).
  56. . See Gardner, supra note 55, at 410 (“While the Court has alluded to three values—trial fairness, substantive privacy interests, and respecting the autonomy of the accused—as reflected in the Sixth Amendment right to counsel, close consideration of the Court’s work makes clear that the fairness and autonomy interests are the primary, and perhaps the only, values presently bottoming the right to counsel.”). Thus, like human counsel, AI counsel would need to assure trial fairness, privacy, and autonomy interests of the defendant.
  57. . Cf. Green, supra note 50, at 433 (“[C]ourts unwaveringly adhere to the view that ‘counsel’ under the Sixth Amendment includes any duly licensed attorney.”); see also id. at 434 (“The right of access to counsel . . . is satisfied when a defendant receives legal assistance from a member of the bar, however ill-trained or inexperienced that lawyer may be.”). Thus, under these courts’ thin view, regulators would simply need to license AI counsel, a point discussed in the following text.
  58. . Human lawyers today already rely on some simple or sophisticated AI-like technology in their work. For example, they use legal search engines designed to sort through and return relevant results, and they review contracts using AI, specifically machine learning tools. See Matthew Stepka, Law Bots: How AI Is Reshaping the Legal Profession, Bus. L. Today (Feb. 21, 2022), https://businesslawtoday.org/2022/02/how-ai-is-reshaping-legal-profession/ [https://perma.cc/TY8V-MKG5].
  59. . See, e.g., Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy 127–29 (2016) (noting that the data on which the models rely, their features, and their computing process might not be transparent or discernible to reviewers). I also asked ChatGPT, “What ethical rules should ChatGPT and similar programs follow?” Even ChatGPT agreed that creators of AI language models should follow certain ethical considerations, including “Transparency: It should be clear how the model was trained and how its output is generated, so that users can understand its limitations and biases.” Online Interaction with ChatGPT, OpenAI (Feb. 7, 2023) (full transcript on file with author). It also identified “Fairness and non-discrimination,” “Responsibility,” “Privacy,” and “Accuracy.” Id.
  60. . See, e.g., Model Rules of Pro. Conduct r. 5.1 (Am. Bar Ass’n 2021) (enumerating instances in which “[a] lawyer shall be responsible for another lawyer’s violation of the Rules of Professional Conduct”).
  61. . See id.
  62. . See, e.g., In re Campbell, 394 S.C. 484, 489–91, 716 S.E.2d 291, 294 (2011) (disciplining an attorney, not the paralegal, because the paralegal fraudulently notarized an unsigned note).
  63. . See, e.g., People v. Skipp, 20PDJ036, 2020 WL 4920993, at *1 (Colo. July 27, 2020) (disciplining lawyer in part because lawyer allowed paralegal to practice law, conduct client intake, and assign cases); Fla. Bar v. TIKD Servs., LLC, 326 So. 3d 1073, 1082 (Fla. 2021) (enjoining a nonlawyer-operated traffic ticket defense program as the unauthorized practice of law, even though the program contracted with licensed attorneys to represent the traffic clients); In re Flack, 33 P.3d 1281, 1287, 1290 (Kan. 2001) (disciplining lawyer in part for permitting nonlawyer to give legal advice to lawyer’s clients); In re Guirard, 11 So. 3d 1017, 1030 (La. 2009) (disbarring lawyers who delegated client cases to nonlawyer staff, thereby assisting the nonlawyers in the unauthorized practice of law).
  64. . See discussion infra Section V.B.7.
  65. . See ABA Commission on Ethics 20/20, Am. Bar Ass’n, https://www.americanbar.org/groups/professional_responsibility/committees_commissions/aba-commission-on–ethics-20-20/ [https://perma.cc/TJ7L-55X7] (reporting the approval of amendments to the Model Rules of Professional Conduct to reflect consideration of the larger role that technology now plays in the legal field).
  66. . Am. Bar Ass’n, Resolutions with Reports to the House of Delegates 150 (Feb. 8, 2016), https://www.americanbar.org/content/dam/aba/administrative/
    house_of_delegates/2016_hod_midyear_meeting_electronic_report_book.pdf [https://perma.
    cc/J3GQ-82JS].
  67. . Id. at 147. The objectives are lettered A-J in the Resolution, but I have numbered them 1-10 for ease of reference.
  68. . Other objectives produce an ambiguous result. For instance, Objective 10 seeks, in part, to promote a diverse legal profession. Id. The addition of AI counsel would, in some sense, increase the diversity of the types of legal service providers (by adding AI to the list of human providers), but this is almost certainly not the type of diversity the drafters had in mind. Additional challenges are addressed below. See infra Part V.B.
  69. . Given that most disciplinary agencies are currently understaffed, Comm’n on the Evaluation of Disciplinary Enf’t, Am. Bar Ass’n, Lawyer Regulation for a New Century (2018), https://www.americanbar.org/groups/professional_responsibility/resources/
    report_archive/mckay_report/ [https://perma.cc/4ATU-L5B3], a future with fewer complaints and presumably fewer (AI) counsel might mean that those agencies would be staffed adequately to address alleged ethical violations.
  70. . Ted Schneyer, Professional Discipline for Law Firms?, 77 Cornell L. Rev. 1, 11–12 (1991) [hereinafter Schneyer, Professional Discipline] (“This Article draws on the organizational crime literature to assess the desirability of allowing agencies and courts to impose disciplinary sanctions on law firms and concludes that such sanctions are needed.”); Ted Schneyer, The Case for Proactive Management-Based Regulation to Improve Professional Self-Regulation for U.S. Lawyers, 42 Hofstra L. Rev. 233, 257–58 (2013) [hereinafter Schneyer, Self-Regulation] (“[Only] New York and New Jersey have provided for law firm discipline.”).
  71. . See Schneyer, Professional Discipline, supra note 73, at 23–24 (noting this phenomenon and arguing that law firms, not just lawyers, should be subject to discipline).
  72. . Most statements of AI principles list legal responsibility and availability of remedies against AI-caused harm. Jessica Fjeld et al., Berkman Klein Ctr. for Internet & Soc’y, Rsch. Pub. No. 2020-1, Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI 33–34 (2020), https://dash.harvard.edu/bitstream/handle/1/42160420/HLS%20White%20Paper%20Final_v3.pdf [https://perma.cc/B3HJ-CH7H].
  73. . See Schneyer, Self-Regulation, supra note 73, at 234 (noting that the disciplinary process traditionally has been reactive, but that several countries are developing proactive elements in the disciplinary process).
  74. . Wheat v. United States, 486 U.S. 153, 159 (1988).
  75. . See, e.g., Sheryl Grey & Brenna Swanston, How to Become a Lawyer: Education, Salary and Job Outlook, Forbes Advisor (Dec. 1, 2022, 8:15 AM), https://www.forbes.com/
    advisor/education/become-a-lawyer/ [https://perma.cc/UP2M-NJL6].
  76. . See, e.g., OpenAI, GPT-4 Technical Report 5–6 (2023), https://cdn.openai.com/
    papers/gpt-4.pdf [https://perma.cc/ZER7-HGAS] (claiming that GPT-4 scored in the ninetieth percentile on a simulated Uniform Bar Exam).
  77. . See generally Benjamin H. Barton, The Lawyer’s Monopoly—What Goes and What Stays, 82 Fordham L. Rev. 3067 (2014) (discussing financial roadblocks standing in the way of more effective attorney regulation); Benjamin H. Barton, The Lawyer-Judge Bias in the American Legal System (2011) (discussing systemic judicial bias in favor of legal professionals, often to clients’ detriment).
  78. . See, e.g., Terry, supra note 24, at 2–6; MacCrate Report, supra note 24, at 207–08; Model Rules of Pro. Conduct r. 1.7 cmt. 1 (Am. Bar Ass’n 2021) (“Loyalty and independent judgment are essential elements in the lawyer’s relationship to a client.”).
  79. . For example, in the future, we might abandon the adversarial system for some or all types of litigation (perhaps because we will become confident in some advanced AI-aided mediation process to resolve these disputes).
  80. . See, e.g., Model Rules of Pro. Conduct r. 1.13 (Am. Bar Ass’n 2021) (noting gatekeeper function for counsel of organizational client); see also Caroline Harrington, Attorney Gatekeeper Duties in an Increasingly Complex World: Revisiting the “Noisy Withdrawal” Proposal of SEC Rule 205, 22 Geo. J. Legal Ethics 893, 902 (2009) (describing Model Rule 1.13 as creating a limited and discretionary “gatekeeper role” for lawyers).
  81. . See, e.g., Model Rules of Pro. Conduct r. 8.3 (Am. Bar Ass’n 2021) (“Self-regulation of the legal profession requires that members of the profession initiate disciplinary investigation when they know of a violation of the Rules of Professional Conduct.”).
  82. . See, e.g., id. at r. 2.1 (“In representing a client, a lawyer shall exercise independent professional judgment and render candid advice.”); see also id. r. 5.4 (requiring lawyers to maintain independent professional judgment and enforcing this rule by limiting business and partnership structures).
  83. . Lawyers may seek ethics advice without violating the duty of confidentiality. See Model Rules of Pro. Conduct r. 1.6(b)(4) (Am. Bar Ass’n 2021).
  84. . Wolfhart Totschnig, Fully Autonomous AI, 26 Sci. & Eng’g Ethics 2473, 2474, 2483 (2020) (explaining that current AI systems are not fully autonomous because “[t]heir understanding of the world . . . is limited to particular domain and remains fixed throughout their operation”); see also Hussein A. Abbass, Social Integration of Artificial Intelligence: Functions, Automation Allocation Logic and Human-Autonomy Trust, 11 Cognitive Computation 159, 170 (2019) (concluding that, although AI systems are capable of making some semi-independent decisions, no truly autonomous AI system exists); Floris Mertens, The Use of Artificial Intelligence in Corporate Decision-Making at Board Level: A Preliminary Legal Analysis 5 (Fin. L. Inst., Working Paper WP 2023-01, 2023) (explaining that systems like ChatGPT “display autonomous capabilities . . . within the boundaries of their application field” but are not truly independent and autonomous).
  85. . See Alice Woolley, If Philosophical Legal Ethics Is the Answer, What Is the Question?, 60 U. Toronto L.J. 983, 984 (2010) (explaining that “the standard conception . . . of the lawyer [is] as a partisan advocate for her client”); see also Steven Zeidman, Raising the Bar: Indigent Defense and the Right to a Partisan Lawyer, 69 Mercer L. Rev. 697, 697 (2018) (arguing that indigent criminal defendants have the right to partisan counsel because private parties are represented by partisan counsel); Model Rules of Pro. Conduct pmbl. (Am. Bar Ass’n 2021) (“As advocate, a lawyer zealously asserts the client’s position under the rules of the adversary system.”).
  86. . See, e.g., Model Rules of Pro. Conduct r. 1.7 (Am. Bar Ass’n 2021).
  87. . See, e.g., id. at r. 1.10, 1.11. Screening would wall off or silo the conflicting information or communications. For example, a conflicted member of a legal office could not share information or discuss the matter with the other office members or access the matter on a shared database or system.
  88. . Although this might solve conflicts issues, it could create issues for AI counsel’s machine learning. We presumably would want AI counsel to get even better at counseling and advocating, and to do so, it may need to learn from its prior matters. Perhaps for learning purposes, it could anonymize and otherwise protect certain data, while maintaining its full client data for archival purposes (as required by Rule 1.16 and any agreements with the clients). Id. at r. 1.6 cmts. 18, 19, 1.16. In a human, this bifurcation would be impossible, but it might be possible and reliable in a machine.
  89. . See id. r. 1.6; see also ABA Comm. on Ethics & Pro. Resp., Formal Ops. 477R (2017), 95-398 (1995) (requiring application of confidentiality standards of Rules 1.6 and 5.3 in the context of electronic data systems). Indeed, I asked ChatGPT what ethical rules it should follow. Online Interaction with ChatGPT, OpenAI (Feb. 7, 2023) (full transcript on file with author). Although it first noted that it was not subject to ethical rules because it had no “consciousness or agency,” it did note that its creators should follow ethical considerations. Id. One of these considerations is “Privacy: The privacy of individuals should be protected, and personal information should not be used without consent.” Id. In addition, most statements of AI principles list privacy and control over personal data. Fjeld et al., supra note 75, at 21–26.
  90. . Keith Swisher, Death and Ethics: Suffocating or Saving Nonlawyer Practitioners with Lawyer Ethics, 70 UCLA L. Rev. Discourse 52, 64 (2022) (discussing confidentiality and privilege as applied to a new category of legal practitioner, the legal paraprofessional).
  91. . Although not easy, this is not necessarily unprecedented: disciplinary counsel today often need, and receive, confidential and privileged client information so that they can adequately review the complained-of conduct of the human lawyer. In the future, disciplinary authorities will likely need similar access to information about the representation, along with new types of expertise (including AI) to review AI counsel’s performance, as suggested in the previous Section.
  92. . Model Rules of Pro. Conduct r. 1.6(c) (Am. Bar Ass’n 2021).
  93. . See, e.g., Neb. Ethics Advisory Op. for Laws. 19-01 (2019) (citing several opinions on the ethics of cloud computing); Ill. State Bar Ass’n Pro. Conduct Advisory Op. 16-06 (2016); see also Daniel W. Linna Jr. & Wendy J. Muchman, Ethical Obligations to Protect Client Data When Building Artificial Intelligence Tools: Wigmore Meets AI, Prof. Law., Oct. 2020, at 27, 32 (“When a lawyer uses a third-party to incorporate AI into her practice, either through contracting for the development of a proprietary tool or by purchasing a commercially available tool, additional confidentiality risks arise when working with the third-party. It is imperative that the lawyer remember that ethical obligations do not change because she is working with a third-party and consider how those obligations impact the particular situation.”).
  94. . Restatement (Third) of L. Governing Laws. § 68 (Am. L. Inst. 2000); see, e.g., United States v. Sanmina Corp., 968 F.3d 1107, 1116 (9th Cir. 2020) (citing Upjohn Co. v. United States, 449 U.S. 383, 389 (1981)) (“The attorney-client privilege protects confidential communications between attorneys and clients, which are made for the purpose of giving legal advice.”).
  95. . Although still involving humans, Arizona provides one recent example of extending privilege to nonlawyers. Arizona granted its Legal Paraprofessionals (LPs) a privilege coextensive with that of lawyers. Ariz. R. Evid. 503 (“A communication between a legal paraprofessional and a client is privileged if it is made for the purpose of securing or giving legal advice, is made in confidence, and is treated confidentially. This privilege is co-extensive with, and affords the same protection as, the attorney-client privilege.”).
  96. . To avoid lengthening this Essay, I omit a discussion of informed consent. In many instances, clients may give informed consent so that lawyers can use or disclose information to others, even to adverse parties or courts. It may be that clients of the future will choose to give informed consent to certain conflicts of interest or to the use of certain confidential information so that they can proceed with AI counsel as their advocate or advisor. Ideally, however, we should design AI counsel so that it protects clients’ data and avoids conflicts of interest to the extent feasible.
  97. . Model Rules of Pro. Conduct r. 1.1 cmt. 8 (Am. Bar Ass’n 2021).
  98. . See, e.g., id. r. 1.3 (requiring diligence in the practice of law).
  99. . See, e.g., id. r. 1.1 cmt. 2 (defining competence as the ability to handle “important legal skills, such as the analysis of precedent, the evaluation of evidence and legal drafting,” and the ability to “determin[e] what kind of legal problems a situation may involve”).
  100. . See Victor Marrero, The Cost of Rules, The Rule of Costs, 37 Cardozo L. Rev. 1599, 1624–25 (2016) (“[A]n increasing number of people, totaling millions, appear in court every year unrepresented in civil actions, even in proceedings involving essentials of life in which they stand to lose homes, jobs, parental rights, health benefits, immigration status, and even liberty . . . because lawyers’ billing rates are out of reach to them . . .”).
  101. . See generally Model Rules of Pro. Conduct r. 1.5 (Am. Bar Ass’n 2021) (requiring all lawyers’ fees to be reasonable and providing factors, such as the market prices and time involved, to assess reasonableness); Artificial Intelligence: Judge Slams Attorney for Not Using AI in Court, LexisNexis: Legal Insights (Mar. 21, 2023), https://www.lexisnexis.com/community/insights/legal/b/thought-leadership/posts/judge-slams-attorney-for-not-using-ai-in-court [https://perma.cc/42EA-LH6Q] (noting that a judge, in rejecting a claim for attorney’s fees for legal research, stated, “[i]f artificial intelligence sources were employed, no doubt counsel’s preparation time would have been significantly reduced”).
  102. . Model Rules of Pro. Conduct r. 8.4(g) (Am. Bar Ass’n 2021).
  103. . See generally Sandra G. Mayson, Bias In, Bias Out, 128 Yale L.J. 2218, 2296 (2019) (“Algorithmic methods have revealed the racial inequality that inheres in all forms of risk assessment, actuarial and subjective alike. . . . As long as crime and arrest rates are unequal across racial lines, any method of assessing crime or arrest risk will produce racial disparity. The only way to redress the racial inequality inherent in prediction in a racially unequal world is to rethink the way in which contemporary criminal justice systems conceive of and respond to risk.”).
  104. . In light of the confidentiality and privilege concerns, the relevant legal office (e.g., the public defender’s office) may need to conduct or procure the testing when it involves actual client data.
  105. . Katherine Medianik, Note, Artificially Intelligent Lawyers: Updating the Model Rules of Professional Conduct in Accordance with the New Technological Era, 39 Cardozo L. Rev. 1497, 1512 (2018).
  106. . Peter Geraghty, Ethical Considerations on Outsourcing Legal Services, N.J. Law., Dec. 2011, at 44, 46 (“The commission concluded that changes to the black letter Model Rules were not necessary with the exception of a minor change to Rule 5.3 changing its title from ‘Responsibilities Regarding Non-Lawyer Assistants’ to ‘Responsibilities Regarding Nonlawyer Assistance.’”); see also id. (“The reason for this change being to clarify that the rule applies not only to services provided by individuals but also by non-lawyer entities such as cloud computing providers and e-discovery vendors.”). The Commission did recommend expanding the comment to provide lawyers with some guidance in using outside consultants and services. See Model Rules of Pro. Conduct r. 5.3 cmt. 3 (Am. Bar Ass’n 2021) (“A lawyer may use nonlawyers outside the firm to assist the lawyer in rendering legal services to the client. Examples include the retention of an investigative or paraprofessional service, hiring a document management company to create and maintain a database for complex litigation, sending client documents to a third party for printing or scanning, and using an Internet-based service to store client information. When using such services outside the firm, a lawyer must make reasonable efforts to ensure that the services are provided in a manner that is compatible with the lawyer’s professional obligations. The extent of this obligation will depend upon the circumstances, including the education, experience and reputation of the nonlawyer; the nature of the services involved; the terms of any arrangements concerning the protection of client information; and the legal and ethical environments of the jurisdictions in which the services will be performed, particularly with regard to confidentiality. . . . When retaining or directing a nonlawyer outside the firm, a lawyer should communicate directions appropriate under the circumstances to give reasonable assurance that the nonlawyer’s conduct is compatible with the professional obligations of the lawyer.”).
  107. . See, e.g., Professional Responsibility Program, Vt. Sup. Ct., Information Concerning Complaint Procedures and Discipline of Attorneys 7 (2015) (“Disciplinary Counsel has retained accounting firms that frequently conduct audits of lawyers’ trust accounts and trust accounting systems.”); In re Simpson, 645 P.2d 1223, 1225 (Alaska 1982) (requiring attorney disciplined for misconduct in management of trust account to “provide the Bar Association a monthly letter from a certified public accountant stating that an audit showed that [lawyer]’s office trust account was managed properly”); In re Longtin, 393 S.C. 368, 375, 713 S.E.2d 297, 300–01 (2011) (consulting psychologist to determine whether lawyer’s diagnosis of Asperger’s syndrome excused lawyer’s misconduct or required mitigation of sanctions); In re Thompson, 343 S.C. 1, 5–6, 539 S.E.2d 396, 398 (2000) (consulting professor of neuropsychology to determine whether lawyer facing discipline had bipolar disorder, which could have impacted the mental capacity to commit misconduct).