Upholding Ethical Obligations When Exploring Generative Artificial Intelligence

By Libby Gorman

            Generative artificial intelligence (GAI) is currently ubiquitous in the news and in wider public discourse. Conversations range from optimistic speculation about how GAI can help humans to pessimistic predictions of robot overlords taking charge (though the latter may just be the science fiction readers).[1] There is no doubt, however, that most people who are thinking seriously about using GAI recognize both its potential benefits and its potential dangers.[2] The legal profession is likewise exploring how GAI can enhance the representation of clients and what concerns its use raises for lawyers.[3]

            This blog post examines lawyers’ adoption of GAI first through the lens of the American Bar Association (ABA) Standing Committee on Ethics and Professional Responsibility’s Formal Opinion 512 (the “Opinion”).[4] It then considers Everett M. Rogers’s innovativeness and adopter categories. Finally, this blog post argues that most lawyers should take the early majority or late majority approach to GAI rather than seeking to be innovators or early adopters.[5]

            Opinion 512, issued in 2024, uses the Model Rules of Professional Conduct to interpret the ethical responsibilities of lawyers who use GAI.[6] The Opinion states that lawyers’ responsibilities to their clients remain the same when using GAI as for other tools.[7] It then goes on to analyze how those responsibilities can be applied to the use of GAI in practice.[8] The Opinion discusses GAI in light of the duties of competence, confidentiality, communication with clients, supervision of non-lawyers, advancing meritorious claims, candor toward the tribunal, and charging reasonable fees.[9] This blog post will focus on the three duties of competence, confidentiality, and supervision.

            One common area of concern for attorneys using any kind of technology is the duty of competent representation, which “requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.”[10] Although technology is not mentioned in the text of the rule, Comment [8] notes that keeping up with technology is part of the requirement of competence.[11] The fast-changing nature of GAI may make this requirement more difficult, such that the Committee on Ethics and Professional Responsibility recommends consulting with technology experts, if needed, for help understanding the abilities and risks of a GAI tool.[12] Whatever means are chosen to better understand GAI tools under consideration, lawyers must recognize that they are accountable for the results those tools generate.[13]

            Lawyers who use GAI must also ensure that they do not compromise clients’ confidential information.[14] The way that GAI develops and learns from past data makes this responsibility more difficult.[15] Submitting one client’s information to a GAI tool may inadvertently lead to the disclosure of that information when the tool is used by another employee in the law firm, or even by a person in a different firm who uses the same tool.[16] This creates an ethical temptation for lawyers: GAI tools that are built to protect client data will most likely be proprietary tools, which tend to cost more.[17] The motivation to keep costs down by using less expensive or free tools could lead to a costly result if client information is inadvertently disclosed.

            Another area of obligation that lawyers must consider with GAI is the duty to supervise subordinate lawyers and non-lawyers.[18] Opinion 512 focuses on how this might affect the required training of firm employees and the considerations involved in hiring outside services.[19] A recent webinar presentation went further, describing AI as a “legal assistant.”[20] If firms begin to use GAI in the role of legal assistant, the nature of GAI may complicate the supervisory responsibilities of lawyers. Computer scientist Josephine Wolff noted that one of the differences between past technologies and GAI is the relative inability to predict what results a GAI tool will produce, even with extensive testing.[21] If attorneys and firms rely on a technology that produces unpredictable outcomes, they must consider instituting robust review procedures before making use of the work generated.

            Competence, confidentiality, and supervision are all ethical duties of an attorney that may be complicated by using GAI.[22] Both GAI’s status as a newly evolving technology and its unpredictable nature require lawyers to exercise care in using this technology while still fulfilling their ethical obligations.[23] For these reasons, it may benefit lawyers who wish to use GAI to take a slower, deliberate approach to adopting this innovation in practice.

            In Diffusion of Innovations, communications professor Everett M. Rogers considers how people react to new ideas and technologies and how innovations spread.[24] Within his work, Rogers describes five “ideal types” of “adopter categories,” or groups of people classed by the rate at which they are likely to embrace a new idea or technology.[25] The adopter categories are innovators, early adopters, early majority, late majority, and laggards or late adopters.[26] The categories range from those who not only adopt but actively seek out innovations to those who are the last to adopt an innovation, often well after it is an accepted practice within a community.[27] The position of this blog post is that most lawyers who are interested in GAI should wait to adopt it as part of the early majority or late majority.

            The “majority” titles in both early majority and late majority provide one reason for taking this approach. In Rogers’s characterization, each of these categories makes up about one-third of the members of a community, so the two categories together account for two-thirds.[28] This means that once early majority and, especially, late majority adopters take on an innovation, it has gained wide acceptance within the community. This wider acceptance usually comes with more infrastructure, such as multiple options for well-developed tools and more resources for understanding the technology. A larger selection of well-developed GAI tools may include options designed specifically for the practice of law, with safeguards related to confidentiality and supervision built in. More professional development resources will make it easier for lawyers to use GAI in a competent way. Rogers also shows a correlation between earlier adoption and a greater tolerance for risk and uncertainty.[29] In general, the ethical obligations of lawyers require reducing risk, such as ensuring the required knowledge of law, taking measures to avoid inadvertent disclosure of confidential information, and taking responsibility for the actions of subordinates.[30] Adopting GAI later naturally lowers the risk that lawyers must accept when they do adopt it. Finally, there is a correlation between greater financial resources and earlier adoption.[31] For solo practitioners or lawyers in small firms on a tight budget, waiting for the infrastructure around GAI to develop may make financial sense.

            It should be noted that the adopter categories are ideal types: descriptions of how people behave within a community.[32] While it makes sense for many lawyers to wait before adopting GAI, the innovators and early adopters within the legal community will pave the way for later adopters. Still, for those lawyers who are either worried about GAI taking their jobs or worried about the risks GAI poses, the recognition that a slower approach to innovation is a normal part of most innovation cycles may quiet fears about “keeping up.” Choosing to take a more deliberate approach will also empower lawyers to explore GAI tools in a manner that best upholds their ethical obligations.

[1] See, e.g., David Martin, AI in the Military: Testing a New Kind of Air Force, CBS NEWS (Oct. 5, 2025), https://www.cbsnews.com/news/ai-in-the-military-testing-a-new-kind-of-air-force/. For an example of a compelling science fiction novel in which an AI character both destroys and protects human life (sometimes at the same time), see Amie Kaufman & Jay Kristoff, Illuminae (2015).

[2] Martin, supra note 1.

[3] ABA Standing Comm. on Ethics & Pro. Resp., Formal Op. 512 (2024) (discussing ethical obligations that arise in the use of generative AI), https://www.americanbar.org/content/dam/aba/administrative/professional_responsibility/ethics-opinions/aba-formal-opinion-512.pdf.

[4] Id.

[5] Everett M. Rogers, Diffusion of Innovations (4th ed. 1995).

[6] ABA Standing Comm. on Ethics & Pro. Resp., supra note 3.

[7] Id. at 1.

[8] Id. at 2.

[9] Id. at 1.

[10] Model Rules of Pro. Conduct r. 1.1 (Am. Bar Ass’n 1983).

[11] Model Rules of Pro. Conduct r. 1.1 cmt. 8 (Am. Bar Ass’n 1983).

[12] ABA Standing Comm. on Ethics & Pro. Resp., supra note 3, at 3.

[13] Id. at 4.

[14] Id. at 6.

[15] Id.

[16] Id. at 7.

[17] Pamela Langham & Ryan Jansen, Webinar on Ethical Uses of Generative AI in the Practice of Law, MD. STATE BAR ASS’N (Sept. 23, 2025), https://www.msba.org/site/site/rise/Store/StoreLayouts/Item-Detail.aspx?iProductCode=ETHAION2025.

[18] Model Rules of Pro. Conduct r. 5.1, 5.3 (Am. Bar Ass’n 1983).

[19] ABA Standing Comm. on Ethics & Pro. Resp., supra note 3, at 10–11.

[20] Langham & Jansen, supra note 17.

[21] Josephine Wolff, Professor of Cybersecurity Pol’y & Comput. Sci., The Fletcher Sch., Tufts Univ., Panel on AI and the First Amendment at Vermont Law Review Symposium: Free Speech on Trial (Oct. 4, 2025).

[22] ABA Standing Comm. on Ethics & Pro. Resp., supra note 3, at 14–15.

[23] Id.

[24] Rogers, supra note 5.

[25] Id. at 263.

[26] Id. at 263–66.

[27] Id. at 264–65.

[28] Id. at 265.

[29] Id. at 273.

[30] Model Rules of Pro. Conduct r. 1.1, 1.6, 5.1, 5.3 (Am. Bar Ass’n 1983).

[31] Rogers, supra note 5, at 264.

[32] Id. at 263.
