Artificial Intelligence and Stalking and Harassment: Is liability the answer?

By Emily Stewart

Technology is a fast-changing and elusive reality of today’s society. In recent years, Artificial Intelligence (AI) has made its way into the general population’s lives.[1] AI is a technology involving computer systems that perform complex tasks traditionally done by humans, such as reasoning, decision-making, or solving problems.[2] Common examples used by the general public include ChatGPT, Google Translate, and Apple’s Siri.[3] As these programs become more ingrained in the lives of average individuals, there is a question of how this new technology will affect the judicial system. Specifically, as AI programs are integrated into existing electronic devices and communications, how will criminal harassment and stalking laws respond?

On the federal level, using an electronic communication system with the intent to harass or intimidate another individual is criminalized.[4] Although 18 U.S.C. § 2261A covers interactive computer services and electronic communication systems, artificial intelligence is notably absent.[5] State governments are becoming more aware of the gaps in AI legislation.[6] In the 2025 legislative session, all 50 states, Puerto Rico, the Virgin Islands, and Washington, D.C., introduced legislation on the topic.[7] Notably, North Dakota’s new law prohibits individuals from using an AI-powered robot to stalk or harass others, expanding its existing harassment and stalking laws.[8] These state efforts serve as important models for others to follow. State governments can lead the way in tackling these rapidly evolving issues by ensuring that technological progress does not compromise personal safety.

The mens rea element of intent is critical to stalking and harassment charges. True threats of violence lie outside the bounds of the First Amendment’s protection.[9] In these instances, the State need not prove that the defendant had any more specific intent to threaten the victim.[10] But the State must prove the defendant had some understanding of the statements’ threatening character.[11] The analysis becomes more complicated when no “true threat” exists, and murkier still when electronic communications are the primary means of stalking and harassment.

The Supreme Court has previously addressed electronic communications allegedly used to stalk and harass a victim.[12] Elonis v. United States involved an individual’s Facebook posts concerning his soon-to-be ex-wife, police officers, and an FBI agent.[13] The Court held that the statute’s mens rea element requires proof that the defendant transmitted the communication for the purpose of issuing a threat, or with knowledge that it would be viewed as a threat.[14] To establish criminal liability, the defendant must have subjectively understood the threatening character of the transmitted communication.[15] Because the jury instructions required only a negligence standard, the Court reversed the lower court’s decision.

If AI software falls under “interactive computer service or electronic communication service,” does subjective intent still matter when the communication is delivered via third-party software?[16] And what must the government prove to meet the same standard applied to threats made over social media and other electronic communications?

While AI’s impact on criminal litigation remains uncertain, the same cannot be said of potential product liability and tort claims. In Garcia v. Character Technologies, Inc., a mother filed several claims on behalf of her deceased son against the creators of AI software.[17] The lawsuit also named Google as a defendant because the software’s creators had first worked as engineers there.[18] The plaintiff’s son, who suffered from an anxiety and mood disorder, had become addicted to interacting with “Characters” imitating fictional personas on the Character A.I. platform.[19] The addiction worsened until he tragically took his own life, just minutes after his final communication with an AI Character.[20] The defendants moved to dismiss the claims, but the court granted dismissal only of the intentional infliction of emotional distress claim, because the mother lacked standing to bring it.[21]

Moving forward, defendants will likely argue on appeal that large language models function more like services than physical products, placing them beyond the reach of strict products liability.[22] Plaintiffs, by contrast, will contend that AI software functions like a tangible product, so strict liability should apply.[23] Either way, tech companies must now consider that AI systems may be scrutinized not only as tools of expression but also as potentially dangerous products.[24]

Technological advancements over the past few decades have had an immense impact on younger generations.[25] As technology and AI grow and society adapts, the judicial system will face pressure to determine liability, criminal or otherwise. These pressures could give vulnerable individuals additional avenues for judicial remedies, and might also prompt AI software developers to enhance safety measures to lower their liability risks. While criminal liability for stalking and harassment carried out with AI might seem distant, product liability and tort claims are not. Ultimately, how society tackles the gaps in AI legislation will shape both the legal options available to plaintiffs and the ethical limits imposed on AI developers.

[1] Amanda Peterson, How AI Has Advanced During the 21st Century and Where It’s Headed, PRO-SAPIEN, https://www.pro-sapien.com/blog/how-ai-has-advanced-during-21st-century-and-where-its-headed/.

[2] Coursera Staff, What Is Artificial Intelligence? Definition, Uses, and Types, COURSERA, https://www.coursera.org/articles/what-is-artificial-intelligence (last updated Sept. 30, 2025).

[3] Id.

[4] 18 U.S.C. § 2261A(2)(B).

[5] Id.

[6] Artificial Intelligence 2025 Legislation, NCSL (July 10, 2025), https://www.ncsl.org/technology-and-communication/artificial-intelligence-2025-legislation.

[7] Id.

[8] Id.

[9] Counterman v. Colorado, 600 U.S. 66, 72 (2023).

[10] Id.

[11] Id.

[12] Elonis v. United States, 575 U.S. 723 (2015).

[13] Id. at 731.

[14] Id. at 741.

[15] Id. at 737.

[16] 18 U.S.C. § 2261A(2).

[17] Garcia v. Character Technologies, Inc., 785 F. Supp. 3d 1157 (M.D. Fla. 2025).

[18] Id. at 1166.

[19] Id. at 1167.

[20] Id. at 1169.

[21] Id. at 1186.

[22] Peter J. Gregory, Peter Gregory Authors Article on Ramifications of Major Federal AI Ruling, GOLDBERG SEGALLA (June 30, 2025), https://www.goldbergsegalla.com/news-and-knowledge/news/peter-gregory-authors-article-on-ramifications-of-major-federal-ai-ruling/.

[23] Id.

[24] Id.

[25] Jessica Slack, The Impact of Technology on Millennials and Gen-Z, LIMELY (May 25, 2022), https://www.limely.co.uk/blog/the-impact-of-technology-on-millennials-and-gen-z.
