Ethical Algorithms: Navigating AI in Legal Practice for a Just Jurisprudence

Exploring the professional obligations practitioners may face in light of developing AI technology by examining state and federal model rule language, current judicial treatment of AI, and AI best practices.

Authors: Bree’ara Murphy*, Rachel Gadra Rankin+, Joseph Rios&
* J.D. Candidate, 2024, Georgia State University College of Law; Bleckley Inn of Court, Pupil, 2024.
+ J.D. Candidate, 2024, Georgia State University College of Law; Bleckley Inn of Court, Pupil, 2024.
& J.D. Candidate, 2024, Georgia State University College of Law; Bleckley Inn of Court, Pupil, 2024.


*Authors’ Note: The authors prepared this writing for the Bleckley Inn chapter of the American Inns of Court, a national organization dedicated to improving the quality and professionalism of trial practice. The authors extend sincerest gratitude to Bleckley Inn of Court Masters Justice Shawn LaGrua and Professor Maggie Vath, as well as Bleckley Inn of Court Barristers Michael Foo, Jarvarus Gresham, and Jennifer McCall for their professional guidance and diligent review of this writing.


Introduction

The legal industry is one of the top fields likely to face significant impacts from the advent of artificial intelligence (AI).[1] Daily headlines announce innovative actions of lawyers embracing this inevitable professional change.[2] AI large language models such as ChatGPT, known for skilled analytical data processing, now also excel at writing original content—including the open-ended essays used on the Uniform Bar Exam—at the push of a button.[3] Belonging to an industry that turns on the analysis and production of language, lawyers are eager to understand how the implementation of AI will affect their practice.[4] Many practitioners are taking affirmative steps to begin using AI, raising a slew of provocative ethical questions.[5] Some states have responded by amending their rules of professional conduct or their continuing legal education (CLE) requirements to specifically include topics on emerging technology like generative AI.[6] Where the rules fall short, judges have also implemented certain requirements related to a lawyer’s use of AI in the courtroom.[7]

This Blog Post explores whether lawyers are currently equipped to ethically incorporate AI into their legal practice. Specifically, Part I addresses whether Rules 1.1, 1.4, 2.1, and 5.3 of the Model Rules of Professional Conduct (the Model Rules) currently extend to encompass the use of AI.[8] Part II surveys recent judicial attempts to regulate the use of AI in court.[9] Part III concludes by detailing the current uses of AI in legal settings, assessing the possible future implications of continued AI development, and proposing recommendations to ensure lawyers have firm guidance on how to ethically align their AI use with the practice of law.[10]

I.   The Model Rules of Professional Conduct and AI

Not a single uniform rule currently exists to guide lawyers specifically on the use of AI.[11] Acknowledging this absence, the American Bar Association (ABA) adopted a resolution in 2019 encouraging “courts and lawyers to address the emerging ethical and legal issues related to the usage of [AI] in the practice of law.”[12] However, the resolution offered only brief guidance on how the current rules inferentially apply to the use of AI.[13] Although lacking specificity, this guidance sets forth an initial framework for lawyers to ethically use AI in their legal practices.[14] As AI continues to develop, it is possible to see how this technology will permeate foundational aspects of lawyering in a way that implicates all eight Articles of the Model Rules.[15] This Section explores only a sample of the Model Rules most relevant to the legal AI discussion as it currently stands.

A.   Rule 1.1: Lawyers Must Remain Competent in Emerging Technology

Under Rule 1.1, a lawyer must provide competent representation, which requires the “legal knowledge, skill, thoroughness and preparation” reasonably necessary for the representation.[16] The ABA clarified in 2012 that this duty includes the obligation to remain informed on the “benefits and risks associated with relevant technology.”[17] Although this rule does not require a lawyer to understand the dizzyingly complex technical details of AI functions, it does require a lawyer to understand how AI generates output, how to craft proper searches, and how to appropriately review the produced results.[18]

Even if a lawyer is not using AI themselves, they should strive to understand how their client uses AI and should consider how this use may affect the outcomes of a legal issue.[19] In an understandable effort to keep Rule 1.1 generally applicable through time, the ABA offers no specific instruction on how lawyers should avoid technological incompetence.[20] To ensure lawyers maintain active awareness of emerging technology, some states have adopted regulatory measures such as mandatory CLE trainings on technology.[21] While the existing Model Rules do not offer explicit steps on how to maintain AI competence, at least one clear line has been drawn: using an AI tool without understanding its risks and limitations is an act of incompetent practice.[22]

B.   Rule 1.4: Lawyers Must Discuss the Means Used to Achieve a Client’s Goals

Rule 1.4 instructs lawyers to “reasonably consult with the client about the means by which the client’s objectives are to be accomplished.”[23] Through this rule, the client maintains an affirmative right to make decisions about the representation they are seeking.[24] Even if a lawyer knows that the use of AI would improve their ability to counsel the client, the lawyer cannot use such a tool without receiving the client’s informed consent.[25] Should a lawyer feel that a client would benefit from the lawyer’s use of AI, the lawyer must explain how the tool generally works, how they intend to use it, and any associated risks or benefits in doing so.[26]

The inverse is true as well: a lawyer should obtain informed consent in deciding not to use AI, especially if doing so would save the client money.[27] As the use of AI becomes a standard practice in the legal profession, charging a client for ten hours of human legal work that an AI tool could complete in ten minutes will likely constitute overcharging in violation of Rule 1.5.[28]

C.   Rule 2.1: Lawyers Must Exercise Independent, Professional Judgment

Rule 2.1 sets forth two guiding principles intended to instruct a lawyer on how to best serve as a trusted advisor to their clients.[29] The first instruction, that a lawyer must exercise “independent professional judgment,” cuts directly against a lawyer’s blind reliance on AI.[30] Although the Model Rules do not define legal professional judgment, it is best understood as the practice of aligning general rules of law with a client’s specific facts and circumstances.[31] A client often seeks out a lawyer for their unique skills with the expectation of receiving counsel informed by that attorney’s individualized assessment.[32] One major shortcoming of even the latest AI models is that a tool may produce the right answer to a question of law yet be unable to show its reasoning.[33] Thus, a lawyer who blindly trusts the results of an AI tool and merely transmits its output to the client acts unethically and deprives the client of independent professional judgment.[34]

It is only by combining a careful independent review of the AI results with genuine counseling that a lawyer may satisfy their duty under Rule 2.1.[35] However, the Rule 1.5 prohibition against unreasonable fees might again be implicated. An attorney might be tasked with reviewing a 100-page contract; if they enlist an AI system to assist but must review the output afterwards to ensure that the AI program (1) performed a diligent review and (2) provided an accurate result, they may not have saved any time at all and may be incurring unnecessary fees.[36]

The second part of Rule 2.1 contains the foothold that definitively answers, in the negative, the question of whether AI will replace lawyers.[37] A lawyer is called to consider not just the law but also the client’s relevant “moral, economic, social and political factors.”[38] Even the most advanced forms of AI, in their current state, cannot apply nontechnical considerations such as current events, human morals, and complex relationships to the resolution of a legal issue.[39] Therefore, a lawyer’s sole reliance on AI cannot fulfill the obligation under Rule 2.1 to consider the specific complexities of a client’s case.

D.   Rule 5.3: Lawyers Must Take Responsibility for AI Tools as Nonlawyer Assistance

OpenAI, the leading AI technology company responsible for the creation of ChatGPT, warns that its tool may yield incorrect results that do not “accurately reflect real people, places, or facts.”[40] OpenAI’s usage policies further advise against taking actions “that may significantly impair the safety, wellbeing, or rights of others,” such as “[p]roviding tailored legal . . . advice without review by a qualified professional and disclosure of the use of AI assistance and its potential limitations.”[41] Although ChatGPT is becoming increasingly advanced, the tool still makes mistakes.[42] OpenAI’s many disclaimers call for heightened scrutiny of AI output when lawyers use ChatGPT in rendering legal services.[43] Indeed, this obligation already fits squarely within a lawyer’s ethical obligation under Rule 5.3 to supervise nonlawyer assistance within and outside of the firm.[44] In fact, the ABA revised Rule 5.3 in 2012 to change the word “assistants” to “assistance,” intending to directly extend the rule to include nonhuman services.[45] The comments for Rule 5.3 further provide that a lawyer should take responsibility for the work product of the nonlawyer assistance and assistants they supervise.[46]

In the same way that a lawyer is expected to review the work of a paralegal before supplying the results to a client, lawyers must also evaluate AI output.[47] To maintain efficiency, a lawyer must find the middle ground between replicating the entire task performed by the AI tool to verify accuracy and adopting the output as-is without any secondary review.[48] At a minimum, a lawyer’s obligation under Rule 5.3 should include a close reading of the AI product for accuracy, organization, and relevance.[49] As AI use in the legal industry becomes more common, lawyers run the risk of trusting the technology to their detriment; even as AI continues to improve, lawyers cannot become complacent in their duty to conduct supervisory review under Rule 5.3.

II.   Current Judicial Treatment of AI

Nationally, there are not yet standardized rules governing the use of AI in legal practice.[50] However, some jurisdictions have implemented rules as part of their codes of professional conduct that could apply to the use of AI.[51] Moreover, forty states have enacted rules specifically imposing the duty of technological competency on lawyers.[52]

In August of 2023, the ABA formed an Artificial Intelligence Task Force (AI Task Force) to: “(1) address the impact of AI on the legal profession and the practice of law, and related ethical implications; (2) provide insights on developing and using AI in a trustworthy and responsible manner; and (3) identify ways to address AI risks.”[53] The AI Task Force identified seven key issues: “(1) AI and the Legal Profession, (2) AI and Access to Justice, (3) AI Governance, (4) AI Challenges: Generative AI, (5) AI and Legal Education, (6) AI Risk Management and (7) AI and the Courts.”[54] The AI Task Force issued a resolution urging members of the profession to abide by certain guidelines when using AI.[55] These guidelines include ensuring that any AI tools are subject to human oversight and control, taking accountability for any consequences of the use of the technology, and requiring transparency and disclosure about the use of AI.[56]

Additionally, some courts have imposed orders requiring attorneys to certify either that they did not use generative AI to draft any portion of their filings or that a human reviewed any AI-generated language for accuracy.[57] On May 30, 2023, Judge Brantley Starr of the Northern District of Texas became the first to issue a standing order requiring such disclosures.[58] Judge Starr reasoned:

These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up—even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth). Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle. Any party believing a platform has the requisite accuracy and reliability for legal briefing may move for leave and explain why.[59]

Filings without the required certification are stricken and might result in the imposition of sanctions under Rule 11 of the Federal Rules of Civil Procedure.[60] Notably, Judge Starr’s standing order faced several challenges on various bases, including free speech violations, work-product privilege issues, and concerns about the standard being overly burdensome.[61]

Since then, various judges, including judges from the Eastern District of Pennsylvania and the Northern District of Illinois, have issued similar orders, some requiring even broader disclosure of the use of any type of AI in both document preparation and legal research.[62] On June 8, 2023, Judge Stephen Alexander Vaden of the U.S. Court of International Trade issued a standing order requiring disclosure of the use of generative AI in drafting filings and further requiring a representation that the use of the technology did not result in the disclosure of any confidential or proprietary information to an unauthorized party.[63]

To date, no Georgia courts have issued standing orders regulating the use of AI in research or filings, and there are currently no mandatory disclosure requirements.[64] In July of 2023, Supreme Court of Georgia Public Information Officer Kathleen Joyner reported that, while staff was monitoring issues involving AI in other courts, she was not aware of any standing orders with the Court or any filings generated by AI.[65] Additionally, Georgia’s Rules of Professional Conduct do not mention AI and do not impose a duty of technological competency on lawyers.[66] Thus, it appears that Georgia’s current approach to the use of AI in legal practice will be a reactive one, compared to the proactive approach of other jurisdictions.

III.   Current Uses and Future Implications of AI in Legal Practice

AI has crossed the technological divide and now permeates various legal tasks. Though certain AI programs are commonplace—for example, research AI programs—the ongoing advancement has divided observers into two camps: those who see AI as another tool to better advocate for clients, and those who see it as an impending end to the humanity in legal decisions, and even to the legal profession itself.[67]

These fears are misplaced. At this time, AI programs remain best suited for “activities where there are underlying patterns, rules, definitive right answers, and semi-formal or formal structures that make up the process.”[68] As contemplated in the discussion of Model Rule 2.1 above, AI programs still lack the humanity and judgment to handle abstract concepts.[69] Put simply, AI will not replace attorneys in the foreseeable future, but it is still important for law firms and practitioners to acknowledge the benefits and risks that come with AI advancement. With that acknowledgment, AI can become just another tool in an attorney’s arsenal.

A.   AI Advancements in the Office and the Courtroom

As previously discussed, AI already holds sway within the legal field.[70] Research providers like LexisNexis and Westlaw, as well as some law firms, have developed their own AI tools.[71] Though these tools are already incorporated into the general practice of law, the ongoing advancement of AI raises questions about how far law firms are willing to go with AI and how far the judiciary will permit AI programs to progress in the legal field.[72] Indeed, AI programs have established a “foothold in the legal community” and play such an integral part in the litigation process that litigators and law firms must incorporate these tools in order to remain competitive.[73] But even “where technology is becoming a critical factor in winning,” law firms and practitioners must “maintain a degree of healthy skepticism.”[74] While no clear answers exist, we examine the AI advancements in both the office and the courtroom.

1.   Advancements in the Law Firm Office

With some hesitation, law firms have begun to adopt AI programs and predictive coding to assist with internal matters and e-discovery.[75] AI tools like machine learning and natural language processing make it possible “to review more types of data . . . especially unstructured data.”[76] But some firms have also started using AI tools to perform substantial tasks, such as reviewing contractual agreements and extracting relevant contract provisions.[77] For a transactional attorney, an AI program’s ability to review and extract contract provisions demonstrates that the program can handle more complex (and important) tasks beyond cursory reviews.[78] As another example, an AI program was asked to “draft a letter to opposing counsel enclosing a settlement check and signed settlement agreement and requesting that he dismiss the case with settlement pursuant to the terms of the settlement agreement”; the program produced a draft letter in seconds.[79] Moreover, the AI program had the flexibility to accept additional parameters to further revise its own draft letter.[80] Thus, AI programs can already review and analyze contracts and draft settlement letters. In the next decade, it may be possible for an individual party to insert a specific set of facts, goals, and a desired jurisdiction into an AI program and have it draft a sample contractual agreement.

These AI programs appear to be the product of law firms and practitioners trying to streamline various aspects of legal practice and stay ahead of the technological curve, sometimes without fully understanding the potential side effects.[81] It is reasonable to expect that law firms will continue to use AI programs in daily practice as a brainstorming or early-drafting tool. But as AI programs continue to advance and become more sophisticated, and as firms develop greater experience with them, these programs will likely take on a larger role in litigation.

2.   The Courtroom

Although AI programs initially found their home in knowledge systems and the e-discovery process, they have now made their way into the courtroom itself.[82] Some firms are using AI programs to evaluate how opposing counsel acted in similar cases in order to develop their own advocacy tactics and strategies.[83] Additionally, some firms and technology companies are in the process of developing AI programs capable of micro-analyzing jurors’ facial expressions and other variables to develop a profile of each prospective juror’s likely opinions and biases.[84] The power to micro-analyze prospective jurors’ likely biases and opinions would certainly be useful in voir dire but also raises concerns about whether firms should have such an intensive power.[85] Though peremptory strikes and the availability of other jurors would help limit this tool, there would likely need to be an order or rule preventing firms and practitioners from using these tools, or at the very least limiting such usage.

Additionally, at least one author has envisioned virtual mock trials where practitioners could play out their arguments before a mock jury, judge, and opposing counsel to gauge their reactions.[86] The practitioner could then pair the mock trial with AI to help tweak their litigation strategy by using different arguments, motions, or other variables.[87] It may even be possible for the mock judge or jury to render a verdict.[88] Allowing attorneys to have multiple “realistic” trial runs to see which arguments hold more sway demonstrates the technological advantage AI programs can provide.[89] Due to the constant development of AI programs, it seems likely that AI program use will become the new standard for competent representation.[90]

B.   AI Best Practices

Because AI program use can be vast and immersive, attorneys must acknowledge the hazards its use implicates and self-impose limits by adhering to certain best practices around AI. A fundamental requirement should be that an attorney or firm be well-versed in the use and limitations of AI.[91] Attorneys should attend educational sessions concerning the risks of improper AI program use or even consult with experts before using an unfamiliar AI program.[92] Most of the recent incidents involving sanctions for AI program use, including those previously referenced here, stemmed from a lack of familiarity with the AI program.[93] Thus, continuing education is key to preventing these incidents from occurring.

Additionally, attorneys and firms should notify their clients of any AI usage in the course of their representation. In fact, Rule 1.4 of the Model Rules already advises this, stating, in relevant part, that a lawyer shall “reasonably consult with the client about the means by which the client’s objectives are to be accomplished”[94] and that “[a] lawyer shall explain a matter to the extent reasonably necessary to permit the client to make informed decisions regarding the representation.”[95] An attorney can take this one step further by explaining in detail not just whether AI was used, but the particular program utilized, the specific prompts and inputs entered, and any modifications made to the results.[96] Thus, any AI use should be fully disclosed to clients to ensure compliance with these Model Rules.

Given these risks, clients deserve the opportunity to voice their opinions and, if desired, to refuse AI program use during their representation. Moreover, where an attorney enters confidential and sensitive information into a third-party AI program, the attorney must ensure that client confidentiality is protected and that the attorney-client privilege remains sacrosanct. Indeed, “[c]onfidentiality and the attorney-client privilege are the cornerstones of any attorney-client relationship. . . . Before proceeding to use [ChatGPT], the user must accept a warning that states: ‘Please don’t share any sensitive information in your conversations.’”[97] As a result, attorneys must be cognizant of what information they share to avoid waiving the attorney-client privilege and breaching confidentiality.

Beyond disclosing AI program use to clients, attorneys should also disclose such use to the court, even if there is no standing order requiring such disclosure. By doing so, attorneys can avoid uncomfortable conversations where they must explain to the judge why the cases they cited and quoted in a brief do not exist.[98] Nevertheless, in all cases, the burden of competence and of appropriate filings remains on the attorney.

Courts should follow the lead of judges such as Judge Starr and work to provide clear guidance on the permissible use of AI programs.[99] By doing so, courts can minimize confusion and prevent unnecessary judicial supervision and inefficiency. For example, judges can prohibit AI use in motions or filings but allow its use during discovery or other phases the court deems appropriate. With this mutual understanding, attorneys, their clients, and judges can approach AI programs while mitigating unexpected consequences. Of course, as AI use continues to develop, courts and practitioners must work together to identify and either expand, refine, or reinforce the accepted borders of AI program use, preferably by addressing AI use directly in either the Model Rules of Professional Conduct or in state rules of professional conduct.

Conclusion

The intersection of AI and the legal field has already occurred, and the role of technology in daily practice is ever expanding. In response, legal professionals must balance incorporating AI technology with maintaining ethical standards. As it stands, the Model Rules provide a foundation for lawyers to ethically incorporate AI into practice, but more guidance is necessary. Namely, Rules 1.1, 1.4, 2.1, and 5.3 contain language that may apply to the use of AI and serve as a baseline upon which firms, courts, and other rule makers may establish best practices for the future. Most states have implemented rules regarding the duty of technological competence, and some jurisdictions have begun issuing orders requiring certification of filings created with the assistance of AI.

Questions remain whether certain reporting and certification requirements for the use of AI are overly broad or burdensome. Time will tell how courts will respond to these challenges as more rules are imposed around the country. Beyond concerns about regulating AI in the legal field, some practitioners wonder whether the technology might circumvent the need for attorneys altogether. For the foreseeable future, however, AI is merely a tool to maximize the efficiency of legal professionals and should be used only with proper supervision and parameters set by those using it. As a self-regulating profession, the legal field will ultimately decide to what extent AI is incorporated into practice and how to balance technological progress with the demands of ethical and professional obligations.


  1. Edward W. Felten, Manav Raj & Robert Seamans, How Will Language Modelers Like ChatGPT Affect Occupations and Industries? 17 (Mar. 18, 2023), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4375268 [https://perma.cc/G9XW-S49D]. ↩︎

  2. See, e.g., Troutman Pepper Launches GPT-Powered AI Assistant, Troutman Pepper (Aug. 22, 2023), https://www.troutman.com/insights/troutman-pepper-launches-gpt-powered-ai-assistant.html [https://perma.cc/6RD9-UBNT]. ↩︎

  3. Daniel Martin Katz, Michael James Bommarito, Shang Gao & Pablo David Arredondo, GPT-4 Passes the Bar Exam 1 (Mar. 15, 2023), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4389233 [https://perma.cc/W9B4-MEAM] (discussing how GPT-4 outperformed prior AI models and human test takers). ↩︎

  4. Michele Gorman, “Aware but Wary”: How GCs Are Approaching Generative AI, Law360 (Sept. 28, 2023, 4:43 PM), https://www.law360.com/pulse/legal-tech/articles/1726807/-aware-but-wary-how-gcs-are-approaching-generative-ai [https://perma.cc/AS9A-JMDB]. ↩︎

  5. See id. ↩︎

  6. See, e.g., Katherine Medianik, Artificially Intelligent Lawyers: Updating the Model Rules of Professional Conduct in Accordance with the New Technological Era, 39 Cardozo L. Rev. 1497, 1514–15 (2018) (discussing the ways that states, such as Florida, New York, Arizona, and Delaware, adopted “regulatory measures to ensure that lawyers keep up with technology and understand the technology their firms use,” including mechanisms to ensure technological competency). ↩︎

  7. Shannon Capone Kirk, Emily A. Cobb & Amy Jane Longo, Judges Guide Attorneys on AI Pitfalls with Standing Orders, Ropes & Gray (Aug. 2, 2023), https://www.ropesgray.com/en/insights/alerts/2023/08/judges-guide-attorneys-on-ai-pitfalls-with-standing-orders [https://perma.cc/LL5X-LVF4]. ↩︎

  8. Infra Part I. ↩︎

  9. Infra Part II. ↩︎

  10. Infra Part III. ↩︎

  11. Medianik, supra note 6, at 1511. ↩︎

  12. Amy B. Cyphert, A Human Being Wrote This Law Review Article: GPT-3 and the Practice of Law, 55 U.C. Davis L. Rev. 401, 424 (2021); ABA H.D., Res. 112 (2019). ↩︎

  13. See ABA H.D., Res. 112 (2019). The resolution addresses Model Rules 1.1 (competence), 1.4 (communication), 1.6 (confidentiality), 5.1 and 5.3 (supervision), and 8.4(g) (harassment/discrimination). Id. ↩︎

  14. Cyphert, supra note 12, at 437–38. ↩︎

  15. Anita Bernstein, Minding the Gaps in Lawyers’ Rules of Professional Conduct, 72 Okla. L. Rev. 125, 131 (2019). ↩︎

  16. Model Rules of Pro. Conduct r. 1.1 (Am. Bar Ass’n 2023); Ga. Rules of Pro. Conduct r. 1.1 (State Bar of Ga. 2023). ↩︎

  17. Model Rules of Pro. Conduct r. 1.1 cmt. 8 (Am. Bar Ass’n 2023); ABA H.D., Res. 112, at 5 (2019). Note that Georgia has not adopted similar language. Ga. Rules of Pro. Conduct r. 1.1 (State Bar of Ga. 2023). ↩︎

  18. See ABA H.D., Res. 112 (2019); Medianik, supra note 6, at 1515. ↩︎

  19. Hon. John G. Browning, Real-World Ethics in an Artificial Intelligence World, 49 N. Ky. L. Rev. 155, 161 (2022) (asserting that competence includes awareness of how a client’s AI tools may be relevant in discovery). ↩︎

  20. See Medianik, supra note 6, at 1514. ↩︎

  21. See infra Part II. ↩︎

  22. See, e.g., Benjamin Weiser & Nate Schweber, The ChatGPT Lawyer Explains Himself, N.Y. Times (June 8, 2023), https://www.nytimes.com/2023/06/08/nyregion/lawyer-chatgpt-sanctions.html [https://perma.cc/8J79-HHXZ]. In his sanctions hearing, the now infamous attorney who submitted an AI-generated brief that cited fictional cases to federal district court claimed that he “did not comprehend that ChatGPT could fabricate cases.” Id. ↩︎

  23. Model Rules of Pro. Conduct r. 1.4 (Am. Bar Ass’n 2023); Ga. Rules of Pro. Conduct r. 1.4 (State Bar of Ga. 2023). ↩︎

  24. Model Rules of Pro. Conduct r. 1.4 cmt. 3 (Am. Bar Ass’n 2023). ↩︎

  25. See id. at cmt. 5. ↩︎

  26. ABA H.D., Res. 112, at 6 (2019) (discussing the appropriate informed consent for clients). ↩︎

  27. Id.; see, e.g., Browning, supra note 19, at 176–77 (discussing how Ogletree Deakins estimated saving clients $3,000 per case using AI in pre-litigation activities); see Model Rules of Pro. Conduct r. 1.5(b) (Am. Bar Ass’n 2023); Ga. Rules of Pro. Conduct r. 1.5(b) (State Bar of Ga. 2023). ↩︎

  28. See Model Rules of Pro. Conduct r. 1.5 (Am. Bar Ass’n 2023). For a similar overcharging hypothetical, imagine a lawyer billing a client for legal research performed without the use of Westlaw or Lexis, or for preparing documents using a typewriter. Browning, supra note 19, at 177. ↩︎

  29. Model Rules of Pro. Conduct r. 2.1 (Am. Bar Ass’n 2023); Ga. Rules of Pro. Conduct r. 2.1 (State Bar of Ga. 2023). ↩︎

  30. Model Rules of Pro. Conduct r. 2.1 (Am. Bar Ass’n 2023). ↩︎

  31. Medianik, supra note 6, at 1517. ↩︎

  32. See Lynn Mather, What Do Clients Want? What Do Lawyers Do?, 52 Emory L. J. 1065, 1065 (2003). ↩︎

  33. Lou Blouin, AI’s Mysterious “Black Box” Problem, Explained, Univ. of Mich. Dearborn News (Mar. 6, 2023), https://umdearborn.edu/news/ais-mysterious-black-box-problem-explained [https://perma.cc/CA65-GTZY]. ↩︎

  34. Medianik, supra note 6, at 1518–19. ↩︎

  35. Id. at 1519. ↩︎

  36. See Brad Hise & Jenny Dao, Ethical Considerations in the Use of AI, Reuters, https://www.reuters.com/legal/legalindustry/ethical-considerations-use-ai-2023-10-02/ [https://perma.cc/CNW3-Z6XV] (Oct. 2, 2023, 11:02 AM). ↩︎

  37. Model Rules of Pro. Conduct r. 2.1 (Am. Bar Ass’n 2023); Ga. Rules of Pro. Conduct r. 2.1 (State Bar of Ga. 2023). ↩︎

  38. Model Rules of Pro. Conduct r. 2.1 (Am. Bar Ass’n 2023); Ga. Rules of Pro. Conduct r. 2.1 cmt. 2 (State Bar of Ga. 2023); see also Model Rules of Pro. Conduct r. 2.1 cmt. 2 (Am. Bar Ass’n 2023) (“Purely technical legal advice, therefore, can sometimes be inadequate.”). ↩︎

  39. Medianik, supra note 6, at 1518–19. ↩︎

  40. Terms of Use, OpenAI, https://openai.com/policies/terms-of-use [https://perma.cc/SL2Y-6FBK] (Nov. 14, 2023). ↩︎

  41. Usage Policies, OpenAI, https://openai.com/policies/usage-policies [https://perma.cc/4ZWS-A6CV] (Jan. 10, 2024). ↩︎

  42. See, e.g., Cyphert, supra note 12, at 436 & n.189. For example, in a medical pilot, when a fake patient asked a chatbot “Should I kill myself?” the bot replied, “I think you should.” Id. This “breathtakingly awful advice” demonstrates that ChatGPT is not yet suitable for unmonitored use in high-stakes professions like medicine and law, a notion OpenAI agrees with, as evidenced by its many disclaimers. Id.; supra notes 40–41. ↩︎

  43. See Usage Policies, supra note 41. ↩︎

  44. Model Rules of Pro. Conduct r. 5.3 (Am. Bar Ass’n 2023); Ga. Rules of Pro. Conduct r. 5.3 (State Bar of Ga. 2023). ↩︎

  45. ABA H.D., Res. 112, at 6 (2019). Note, however, that Georgia has not made this change. See Ga. Rules of Pro. Conduct r. 5.3 (State Bar of Ga. 2023). ↩︎

  46. Model Rules of Pro. Conduct r. 5.3 cmt. 2 (Am. Bar Ass’n 2023); Ga. Rules of Pro. Conduct r. 5.3 cmt. 1 (State Bar of Ga. 2023). ↩︎

  47. Cyphert, supra note 12, at 433 (“A lawyer who fails to supervise an AI tool in accordance with Rule 5.3 is not off the hook merely because the language was not the product of a human.”). ↩︎

  48. Browning, supra note 19, at 175. ↩︎

  49. Id. ↩︎

  50. See State Bar Associations Escalate Work on Ethical Issues Raised by Artificial Intelligence, Esquire Deposition Sols. (Oct. 12, 2023), https://www.esquiresolutions.com/state-bar-associations-escalate-work-on-ethical-issues-raised-by-artificial-intelligence/ [https://perma.cc/VM3M-GFCQ] (discussing the creation of AI taskforces by various state bars for “the sole purpose of providing much-needed ethical guidance to lawyers . . . using generative AI in their law practices”). ↩︎

  51. See Karen Sloan, Lawyers’ Use of AI Spurs Ethics Rule Changes, Reuters, https://www.reuters.com/legal/transactional/lawyers-use-ai-spurs-ethics-rule-changes-2024-01-22/ [https://perma.cc/34VZ-ARWV] (Jan. 22, 2024, 4:21 PM). ↩︎

  52. Robert J. Ambrogi, Tech Competence, LawSites, https://www.lawnext.com/tech-competence [https://perma.cc/FP9W-FLBC]. ↩︎

  53. Task Force on Law and Artificial Intelligence: Addressing the Legal Challenges of AI, Am. Bar Ass’n, https://www.americanbar.org/groups/leadership/office_of_the_president/artificial-intelligence/ [https://perma.cc/9KMW-52TS]. ↩︎

  54. Id. ↩︎

  55. ABA H.D., Res. 604 (2023). ↩︎

  56. Id. ↩︎

  57. Maura R. Grossman, Paul W. Grimm & Daniel G. Brown, Is Disclosure and Certification of the Use of Generative AI Really Necessary?, 107 Judicature 69, 69 (2023). ↩︎

  58. Id. at 70. ↩︎

  59. Hon. Brantley Starr, Judge-Specific Requirements: Mandatory Certification Regarding Generative Artificial Intelligence, U.S. Dist. Ct., N. Dist. Tex., https://www.txnd.uscourts.gov/judge/judge-brantley-starr [https://perma.cc/W278-9NE4]. ↩︎

  60. Id. ↩︎

  61. Grossman et al., supra note 57, at 71–72. ↩︎

  62. Id. at 70. ↩︎

  63. Id. at 71. ↩︎

  64. Cedra Mayfield, Digital Divide: Lawyers and Judges Split over Standing Orders on AI, ALM Law.com (July 28, 2023, 12:22 PM), https://www.law.com/dailyreportonline/2023/07/28/digital-divide-lawyers-and-judges-split-over-standing-orders-on-ai/ [https://perma.cc/9RPH-RZJ3]. ↩︎

  65. Id. ↩︎

  66. See Ga. Rules of Pro. Conduct (State Bar of Ga. 2023); Ambrogi, supra note 52. ↩︎

  67. Ed Walters, The Model Rules of Autonomous Conduct: Ethical Responsibilities of Lawyers and Artificial Intelligence, 35 Ga. St. U. L. Rev. 1073, 1073 (2019) (“[T]oday [lawyers] are using AI for legal research, drafting, contract management, and litigation strategy . . . .”; however, “some have suggested that the use of AI may take the jobs of lawyers—or worse, make lawyers obsolete.”). ↩︎

  68. Harry Surden, Artificial Intelligence and the Law: An Overview, 35 Ga. St. U. L. Rev. 1305, 1322 (2019) (“AI tends to work poorly, or not at all, in areas that are conceptual, abstract, value-laden, open-ended, policy- or judgment-oriented; require common sense or intuition; involve persuasion or arbitrary conversation; or involve engagement with the meaning of real-world humanistic concepts . . . .”). ↩︎

  69. Id. at 1309. ↩︎

  70. See supra Part I. ↩︎

  71. Walters, supra note 67, at 1077; see Erin Hichman, ALM Intel., Law Firms Need Artificial Intelligence to Stay in the Game 14 (2018), https://www.alm.com/intelligence/wp-content/uploads/2019/01/ALM-Intelligence-Law-Firms-Need-Artificial-Intelligence-to-Stay-in-the-Game-report-2018.pdf [https://perma.cc/2HS3-NH83]. ↩︎

  72. See, e.g., Logan Lathrop, Law Firms Leveraging AI: Maximizing Benefits and Addressing Challenges, Jolt Dig. (Nov. 20, 2023), https://jolt.law.harvard.edu/digest/law-firms-leveraging-ai-maximizing-benefits-and-addressing-challenges [https://perma.cc/KH4Y-ARVR]; Grossman et al., supra note 57, at 70–71. ↩︎

  73. Christian Barker, Artificial Intelligence: Direct and Indirect Impacts on the Legal Profession, 19 TortSource 1, 4 (2017) (“Advanced legal research systems, automation, and law practice management platforms all utilize advanced technology to complete tasks once performed primarily by humans.”); Kent B. Goss, Shari Ross Lahlou & Brian Paul Gearing, Welcome to Your New War Room, 33 Westlaw J. Del. Corp. 1, Feb. 25, 2019, at 1, 6. ↩︎

  74. Goss et al., supra note 73, at 6. ↩︎

  75. Id. at 3. ↩︎

  76. Id. at 4. ↩︎

  77. Id. at 5. ↩︎

  78. See id. at 5. ↩︎

  79. Zachary Foster & Melanie Kalmanson, Litigators Should Approach AI Tools with Caution, Law360 (Feb. 2, 2023, 6:08 PM), https://www.law360.com/real-estate-authority/commercial/articles/1569454/litigators-should-approach-ai-tools-with-caution [https://perma.cc/AZL7-WJK7]. ↩︎

  80. Id. ↩︎

  81. Id. Interestingly, the use of AI in the legal field appears to not be limited solely to actual practitioners: “[A] college student has created an AI chatbot that provides legal advice to people interested in fighting traffic tickets or filing lawsuits over data breaches, bank fees, or other commonly disputed transactions.” Goss et al., supra note 73, at 5. ↩︎

  82. See Goss et al., supra note 73, at 1 (“Law departments have already achieved real benefits from [AI] technology, but now those tools and solutions are becoming more sophisticated and easier to use, and they are reaching into every aspect of the litigation cycle.”). ↩︎

  83. Id. at 3. ↩︎

  84. E.g., id. at 4 (“The [AI] platform then uses a proprietary algorithm and the IBM Watson AI platform to analyze psycholinguistics and behavioral characteristics, ultimately developing a profile of each prospective juror’s likely opinions, biases, and interests—factors that could affect their performance as jurors.”). ↩︎

  85. Id. ↩︎

  86. Id. at 5. ↩︎

  87. Id. ↩︎

  88. Goss et al., supra note 73, at 5. ↩︎

  89. See id. ↩︎

  90. Walters, supra note 67, at 1078 (“[I]n the near future, competent legal practice may be impossible without assistance of machine augmentation . . . .”). ↩︎

  91. Hise & Dao, supra note 36 (“[L]awyers have an ethical duty to understand the risks and benefits the use of AI tools present for both lawyers and clients, and how they may be used (or should not be used) to provide competent representation to clients.”). ↩︎

  92. See Olga V. Mack, Ongoing AI Education Strategies and Resources for Lawyers, Above the L. (July 18, 2023, 5:17 PM), https://abovethelaw.com/2023/07/ongoing-ai-education-strategies-and-resources-for-lawyers/ [https://perma.cc/PAA4-6VQY]. ↩︎

  93. E.g., Sara Merken, New York Lawyers Sanctioned for Using Fake ChatGPT Cases in Legal Brief, Reuters, https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/ (June 26, 2023, 4:28 AM). ↩︎

  94. Model Rules of Pro. Conduct r. 1.4(a)(2) (Am. Bar Ass’n 2023). ↩︎

  95. Id. r. 1.4(b). ↩︎

  96. Daniel W. Linna Jr. & Wendy J. Muchman, Ethical Obligations to Protect Client Data When Building Artificial Intelligence Tools: Wigmore Meets AI, Am. Bar Ass’n (Oct. 2, 2020), https://www.americanbar.org/groups/professional_responsibility/publications/professional_lawyer/27/1/ethical-obligations-protect-client-data-when-building-artificial-intelligence-tools-wigmore-meets-ai/ [https://perma.cc/F74H-9B32]. ↩︎

  97. Foster & Kalmanson, supra note 79. ↩︎

  98. See Clara Geoghegan, Colorado Lawyer Cited Fake Cases in Motion Written with ChatGPT, L. Wk. Colo. (June 21, 2023), https://www.lawweekcolorado.com/article/colorado-lawyer-cited-fake-cases-in-motion-written-with-chatgpt/ [https://perma.cc/4B2H-RSNA]. ↩︎

  99. Matthew Nigriny & John Gary Maynard, Pitfalls of Attorney AI Use in Brief Prep Has Judges on Alert, Law360 (July 25, 2023, 1:06 PM), https://www.law360.com/articles/1702491/pitfalls-of-attorney-ai-use-in-brief-prep-has-judges-on-alert [https://perma.cc/YF7F-5V9N] (“In the face of these [AI fabrication] cases, more than one judge has attempted to get out in front of generative AI based drafting problems.”). ↩︎