Artificial Intelligence and Health Care: Reviewing the Algorithmic Accountability Act in Light of the European Artificial Intelligence Act
Clelia Casciola (Clelia’s full Note was published in Vermont Law Review, Volume 47 and can be found here!)
—
In 2015, British theoretical physicist Stephen Hawking commented, “Computers will overtake humans with AI . . . within the next 100 years. When that happens, we need to make sure the computers have goals aligned with ours.”[1] Hawking’s words evoke a science-fiction reality in which robots possessing human-like features, such as the ability to think, live alongside humans and even overtake them. His words also highlight the role that computers play, and will continue to play, in our society, and they raise significant questions about Artificial Intelligence (AI) and new technologies more generally. To what extent do we want to allow technological development? And should we control and regulate this technological advancement to ensure that it aligns with human goals?
In recent years, legislators and policy-makers around the world have struggled with these questions. On the one hand, they recognize the positive effects AI can have on the economy; on the other hand, they recognize the negative effects AI can have on humans if left unchecked.[2] For years, the European Union (EU) has advocated for AI development that respects the legal and societal values it upholds.[3] Its recent proposal on AI, the Artificial Intelligence Act,[4] is an attempt by the European Union and its Member States to address AI-related issues while still allowing the research and development of such technologies.[5] The proposal seeks to prevent the algorithmic bias and discrimination that AI systems can produce.[6]
Similarly, the United States has attempted to pass legislation at the federal level to ensure that AI systems work in ways that do not harm or discriminate against consumers.[7] For instance, the Algorithmic Accountability Act of 2019 specifically tackles the issue of algorithmic bias and discrimination.[8] Algorithmic bias and discrimination can occur in AI systems used across a variety of industries.[9] The health care industry is one of them: studies have shown that certain AI systems used in health care management programs can have discriminatory effects on patients.[10] To date, however, Congress has not passed any legislation on this issue.
This Note argues that the U.S. Congress should enact comprehensive legislation to prevent the use of AI systems built on algorithmic biases, specifically in the health care industry, by expanding on the Algorithmic Accountability Act of 2019 and by looking to the EU Artificial Intelligence Act as an example. Part I provides an overview of AI’s economic benefits and ethical concerns and of AI discrimination in the health care industry; it also presents the legal and policy landscape of AI in the European Union and the United States. Part II analyzes specific articles of the EU Artificial Intelligence Act that can affect AI systems in industries like health care. Part III analyzes the U.S. attempt to enact legislation at the federal level, focusing on the 2019 Algorithmic Accountability Act, and compares the U.S. bill to the EU proposal in light of AI discrimination in health care. This Part critically addresses the various arguments against the Algorithmic Accountability Act and describes solutions to those arguments based on the EU proposal. Finally, Part III briefly addresses the Algorithmic Accountability Act of 2022.
—
[1] Lisa Eadicicco, In the Next 100 Years ‘Computers Will Overtake Humans’ and We Need to Be Prepared, Says Stephen Hawking, Bus. Insider (May 13, 2015), https://www.businessinsider.com/stephen-hawking-on-artificial-intelligence-2015-5.
[2] See Christian Pazzanese, Ethical Concerns Mount as AI Takes Bigger Decision-making Role in More Industries, Harv. Gazette (Oct. 26, 2020), https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/ (providing examples of AI economic benefits and related ethical concerns, such as privacy, surveillance, bias, and discrimination).
[3] See Kelvin Chan, EU Proposes Rules for Artificial Intelligence to Limit Risks, AP News (Feb. 19, 2020), https://apnews.com/article/artificial-intelligence-technology-business-europe-ursula-von-der-leyen-19ec99f8a970fe14a99a84d52017ec22 (presenting EU plans and projects for AI legislation with a focus on human rights and interests).
[4] Commission Proposal for a Regulation of the European Parliament and of the Council (Artificial Intelligence Act), COM (2021) 206 final (Apr. 21, 2021).
[5] Chan, supra note 3.
[6] See High-Level Expert Grp. on A.I., The Assessment List for Trustworthy Artificial Intelligence (ALTAI) for Self Assessment, at 16 (July 17, 2020), https://op.europa.eu/en/publication-detail/-/publication/73552fcd-f7c2-11ea-991b-01aa75ed71a1 (providing ethics guidelines for the development of trustworthy AI systems and addressing the specific requirements of non-discrimination and fairness to prevent discrimination against historically marginalized communities, requirements also present in the EU proposal).
[7] Algorithmic Accountability Act of 2019, H.R. 2231, 116th Cong. (2019-2020); S. 1198, 116th Cong. (2019-2020). In February 2022, U.S. legislators reintroduced the bill as the Algorithmic Accountability Act of 2022. H.R. 6550, 117th Cong. (2021-2022); S. 3572, 117th Cong. (2021-2022).
[8] H.R. 2231.
[9] Pazzanese, supra note 2.
[10] See Charlotte Jee, A Biased Medical Algorithm Favored White People for Health-care Programs, MIT Tech. Rev. (Oct. 25, 2019), https://www.technologyreview.com/2019/10/25/132184/a-biased-medical-algorithm-favored-white-people-for-healthcare-programs/ (describing a case study of algorithmic bias in a health management algorithm that discriminated against people of color).