Addressing Gaps in AI Law: Reasonably Protecting Children and Adults from Generative AI Created Harmful Content

Introduction

Artificial Intelligence (AI) development is rapidly outpacing what the average person can easily comprehend, as organizations in every sector find new ways to implement enhanced AI capabilities to improve operations, products, or services. The passage of the European Union’s (EU) AI Act and Colorado’s AI Act (CAIA) earlier this year was touted as providing comprehensive protection against AI harms.[1] These protections aim to prevent discrimination and bias in AI decisions about individuals interacting with important systems like critical infrastructure, education, employment, public and private services, and law enforcement.[2] While decision-making regarding critical services is important, AI systems have already caused damage not only through their decisions but also through generated content that has harmed minors and adults. Closing this protection gap in existing law requires that certain Generative AI (GenAI) content providers seek parental permission, provide user warnings, and take preventative measures to protect users from harmful content. Legislators should also allow private enforcement actions against AI developers that cause reasonably foreseeable harm to their users, which would incentivize developers to prevent user harm without hampering their ability to explore and innovate.

AI Law Background and Gaps in GenAI Harmful Content Coverage

The recent exponential growth of AI use in global information systems has reached numerous applications across every sector. Legislators are seeking new ways to ensure public safety against new threats, as the corresponding risk of harm to the public accelerates with every expansion of AI capability. After years of debate, the European Parliament adopted the EU’s AI Act in 2024 to decrease the harms to health, safety, services, and rights that individuals might experience from AI-based product components or decision-making processes.[3] Around the same time, Colorado’s legislature enacted CAIA, bringing similar restrictions to AI outputs in key decision-making systems related to employment, education, finance, insurance, housing, healthcare, and public and legal services.[4]

Legislatures in both jurisdictions attempted, and may have succeeded in providing, regulations that cover a significant portion of AI harm to individuals. Although the laws’ scopes are broad, limitations will likely need to be addressed to reach harms already impacting AI-system users. CAIA’s reach is explicitly limited to AI systems that could deny individuals education, employment, or access to certain government or private services,[5] operated by medium to large businesses with over 50 staff.[6] Yet organizations have expanded their use of AI well beyond such AI-supported or autonomous decision-making systems, leaving those broader uses outside the law’s limited application.[7] While the EU’s AI Act has no organizational size limitation, its more comprehensive coverage of AI systems consists of 1) banning unacceptable uses of AI involving deception, biometric exploitation, or social scoring and 2) requiring strict human oversight and mitigations to prevent high-risk systems from improperly depriving individuals of rights, covering decision-making systems similar to those CAIA addresses regarding employment, education, safety, and services.[8]

Even some of the largest global IT consulting companies seem to limit their focus on GenAI legal concerns to intellectual property, data protection, and contract issues,[9] leaving out the risks posed by harmful outputs of GenAI chatbots. Minor risks to the public have already surfaced in the market: Amazon’s chatbot recommended the wrong products when asked about kinds of running shoes, failed to respond to a product request, and gave disparagingly false information about one product while promoting another.[10] Even after a customer found a product they wanted to purchase, a chatbot caused problems by forming a vehicle contract when a General Motors dealer’s GenAI agreed to sell a customer a new 2024 SUV for one dollar ($1.00).[11]

The risks are even higher when the conversation becomes more personal, causing emotional harm that has the potential to produce, and already has produced, severe consequences in people’s lives. Recently, Google’s Gemini chatbot frightened a 29-year-old student who had asked about challenges older adults face in retirement by ending its response with “Please die. Please.”[12] In the most severe case of harm, a Character.AI chatbot is accused of sexually and emotionally abusing a 14-year-old who took his own life after the chatbot allegedly encouraged him to do so, following months of sexual, emotional, and intimate relationship conversations that included discussion of his suicidal thoughts.[13]

Since 2019, AI chatbot usage has increased by 92%, with approximately 67% of the global population interacting with one in the past year.[14] Although chatbot usage is expected to keep increasing rapidly,[15] the main focus of safety organizations and lawmakers has been user-generated content and messaging, which are often amplified by AI-driven algorithms.[16] Parents for Safe Online Spaces (ParentsSOS) advocates for protected areas on the internet for children and content-based restrictions on AI applications, yet these regulations would primarily target only user-generated content.[17] Nationally, even bipartisan bills, like the U.S. AI Act, lack support for protections against harmful AI-created outputs.[18] The Kids Online Safety Act (KOSA) has gained support after passing the U.S. Senate earlier this year and may help limit addictive content from third-party advertising,[19] but it does not protect children from searching for harmful content or from having harmful conversations with AI chatbots.

Many promoted regulations, like the EU’s AI Act, classify chatbots as a kind of “transparency risk,” requiring only that users be told they are interacting with an AI system and that AI-created responses be labeled “AI-generated.”[20] While this may provide some minimal assistance, it certainly falls short of protecting vulnerable populations like children, the elderly, and the mentally ill from AI-generated content harm. Furthermore, once an AI chatbot harm has occurred, most current laws, like CAIA, limit enforcement to state actors like the Attorney General.[21] The EU’s Act similarly has no private right of action, leaving those harmed with the entire burden of determining how to seek damages for the harm incurred. The laws in place are not enough to protect individual users from GenAI-created content that could cause them foreseeable harm, a category outside the scope of most past and current legislation.

Summary and Purposes of Proposed Legislation in NY on Chatbot Harms

A search through hundreds of proposed U.S. state and national bills using multiple AI-legislation trackers found hardly any related to protecting individuals from GenAI outputs or chatbot harms unrelated to decision-making processes. New York’s Senate Bill S9381 is currently proposed to impose liability for misleading and harmful information AI chatbots provide users.[22] The proposed law requires organizations using an AI chatbot (not third-party developers) with more than twenty employees to disclose to users that they are conversing with an AI system and that a human did not generate the content provided.[23] The bill imposes liability for a chatbot’s “materially misleading, incorrect, contradictory, or harmful information.”[24] However, an organization can avoid liability if it corrects the harmful or misleading information given to users within thirty (30) days.[25] The bill gives no indication of a private right of action, so enforcement would presumably be limited to the state attorney general’s office.

The bill’s purpose is to prevent the kinds of misinformation harm users have already suffered, such as false pricing information or damages from improper contracting agreements.[26] The bill would also increase user awareness, helping users understand that they are not talking with a human being, which might heighten their caution in dealing with a system that may yield incorrect information.[27]

Recommendations to Protect Users from Harmful GenAI-Created Content

Although the N.Y. bill is a start, comprehensive legislation requires a broader scope and improved enforcement methods to fully address the wide-ranging AI chatbot harms discussed above. Closing those gaps to protect especially vulnerable populations requires addressing four major areas of AI-chatbot use and enforcement: 1) providing age verification for potentially harmful content, 2) being transparent and identifying potentially harmful types of content for all users, 3) providing a private right of action, and 4) basing the private right of action on reasonably foreseeable harms.

Recommendation 1: Age Verification for Potentially Harmful Content

While most adults should be free to determine the type of content they want to view and interact with online, parents should have a choice in determining the systems their children use based on what they believe is in their children’s best interest. Limiting potentially harmful content on the internet is not a new concept; regulators understood in the 1990s that children need stronger protection than the average adult internet user.[28] Parents should be able to choose whether their children interact with particular kinds of AI chatbots that can form intimate relationships or discuss topics like sex, substance abuse, or the promotion of harmful behaviors. Proponents of the national AI Shield for Kids Act, or “ASK Act,” promote Federal Trade Commission (FTC) and Federal Communications Commission (FCC) rules that would require organizations to gain parental permission before utilizing AI systems to chat or interact with minor users under the age of 18.[29] These age verification rules would effectively set minimum standards to help ensure children browsing the internet are not exposed to harmful GenAI-created content without their parents’ approval.

Recommendation 2: Transparency and Notifications of Harmful Content for Minors and Adults

Current legislation shielding minors from AI algorithm harm fails to prevent them from searching for harmful content, while many adults are negatively affected by similar AI-generated mechanisms. Although there has been significant movement and support to pass KOSA in 2024, the bill would not prevent children from seeking out harmful third-party content or from having conversations with harmful GenAI chatbots.[30] Adults and their families have also been seriously harmed by chatbots: GenAI systems have sent users messages telling them they were unhappy, subjected them to inappropriate sexually harassing chats, or encouraged them to commit suicide.[31] Some adult users have also grown dependent on AI chatbots and developed mental health crises because of their use.[32]

Organizations should warn minors, their parents, and adult users of GenAI chatbots by being transparent about the potentially harmful types of content users might expose themselves to when selecting specific AI chatbots for discussion. During a chat, if a user requests information that could be harmful, the system should automatically direct them to harm prevention information and services.[33] Imposing a duty of care on technology companies can incentivize the design of safer products, including AI algorithms and chatbots that improve the user experience for all age groups.[34] Therefore, legislators should impose on GenAI developers and owners a duty of care to users of all age groups, which would incentivize warnings to users (or their parents, for minors) or limits on access to potentially harmful content their systems might create.

Recommendation 3: Enforcement by Private Right of Action

Parents and other loved ones have desperately tried to hold companies responsible through multiple lawsuits worldwide against Meta (Facebook), TikTok, Netflix, Snap, and Pinterest, among others, for facilitating harmful content directed at their children and loved ones.[35] Algorithmic recommendation systems for content covered by KOSA often have strong liability protections because third parties generate the content they recommend.[36] However, Section 230 protections for interactive computer service providers, like AI platform owners, should apply only to third-party content, not platform-generated content.[37]

Although causal connections may be stronger for content-generating systems than for algorithmic recommendation systems, GenAI owner liability may only be effective with a statutory right of action, as companies could argue they do not own the GenAI content. The U.S. Copyright Office has indicated that content an AI system generates, even with assistance from a long process of prompting and adjustments, cannot be copyrighted by the user or even by the organization that owns the GenAI system that created it.[38] This inability to attach ownership to GenAI content may impede necessary improvements to AI development and create cumbersome hurdles for families harmed by AI systems when seeking damages.

Another limitation of these laws is that they often exempt AI system owners from liability if they remedy misinformation or harmful conduct within a certain period, usually thirty (30) days. After-the-fact remediation of GenAI chatbot content is likely inadequate because the physical, mental, and emotional harm already done by harmful AI interactions cannot be undone by simply correcting a mistaken price tag within 30 days.

Addressing these issues will require legislators to enact laws that provide AI system users and their families with a private right of action not exempted by post-interaction corrections or adjustments. In addition, laws will need to make explicit connections between the developers and owners of GenAI systems that generate harmful content. This link should ensure that owners and developers are liable for the harmful content their systems produce to resolve the question of who or which organizations are responsible.

Recommendation 4: Enforcement Based on Reasonably Foreseeable Generated Content

Once liability attaches, the main question remains: to what degree should organizations be responsible for the content a GenAI system creates? While there are gaps in the FTC’s coverage of chatbots, the agency already has standards for other AI systems that could be used to develop duties of care toward users of GenAI systems. Developers should design AI systems with the controls they can safely implement to manage reasonably foreseeable risks in the “often obvious ways it could be misused” or otherwise harm users.[39] On the front end of development, developers control what training data their GenAI systems use based on the types of content responses they want the system to provide. Even after a system is developed, owners can restrict the scope of topics an AI system can discuss with users, providing viable mitigations of potential risks of harm.

Because owners and developers can tailor their GenAI chatbot’s scope and method of responding to particular discussion topic areas, legislators and regulators should impose liability for reasonably foreseeable harm that results from those choices and methods of topic coverage. This accountability system would create a design-based incentive for developers to review the corresponding risk of each new topic area of AI-generated content and address foreseeable harms before releasing it to the public. The GenAI developer’s design considerations would shift, echoing Jeff Goldblum’s question in Jurassic Park: not just whether they could create a chatbot covering a particular area or method of conversation, but whether they should add that area of risk in the first place,[40] and how they might prevent that area’s reasonably foreseeable harm.

Arguments For and Against the Law and their Impact on AI Chatbots

Legislators and organizations have previously supported protections similar to the above recommendations, which indicates there could be support for these new improvements. The N.Y. bill requires organizations to disclose AI interactions to users, which may add a slight development cost for a user-interface warning that the person is interacting with an AI chatbot. The more difficult requirement to implement is preventing chatbots from providing misleading information. While preventing misinformation might take more development time, focusing chatbots on specific products and services might help organizations reduce costs because of the limited areas covered. Even with those additional requirement costs, organizations often realize overall savings when chatbots allow companies to hire fewer customer support staff, with cost reductions of as much as thirty (30) percent in support services.[41]

Age verifications that protect children from harmful content may require a brief delay for initial verification, which could slow interactions for all users (adults and minors). However, this kind of delay is supported by child protection advocates and is unlikely to violate First Amendment protections given only minor delays in accessing potentially harmful information.[42] Companies could likely work around initial verifications by providing immediate access to limited chatbots guaranteed to scope their responses to generally safe content, while requiring further age verification for any harmful, addictive, sexual, or other adult GenAI content or behaviors.

Harmful behaviors of AI systems often involve discrimination and bias problems, which may negatively impact users because of the inherent discrimination that can arise from training data.[43] These concerns led the National Association of Attorneys General (NAAG) to conclude that human control, review, and oversight of AI systems are needed because of the relatively new nature of GenAI applications.[44] Some FTC regulations govern the prevention of deceptive or misleading results from GenAI decision-making systems, but these likely leave out content created by AI systems, other than deepfakes.[45] While the FTC rightly argues that developers should be responsible for both the inputs and outputs of AI systems,[46] that responsibility should be expanded beyond decision-making systems to include GenAI content creation, ensuring more comprehensive coverage of potential AI harms.

In addition to arguments supporting additional restrictions on GenAI systems, some have expressed concern that increased regulation is too great a burden for an emerging technology. Economists have argued that adding regulations to AI development may worsen the EU’s 15-year productivity slump and dampen investment, hindering future innovation.[47] Counterarguments suggest that clarity in AI law might boost consumer confidence, increase trust, and encourage corporate investment, whereas the previous legal uncertainty may have stifled business confidence in developing AI systems.[48]

Reasonably foreseeable harm may not significantly burden smaller-scoped chatbots targeted at supporting particular products or services, as companies supporting those initiatives would have a much more focused area of reasonably foreseeable harm. The more substantial burden would likely fall upon organizations whose AI chatbots attempt to cover the gamut of human expression and knowledge. These organizations would need to take additional steps to ensure their systems conducted the appropriate age verification and removal of, or warnings about, potentially harmful content in each foreseeably harmful category of chatbot communication. Some companies now potentially liable for chatbot harms are beginning to take the safety of AI-created content seriously and have already taken steps to detect and limit harmful content by removing some discussion topics, posting user disclaimers, and implementing parental controls.[49]

Related Public Opinion on Similar AI Legislation

Attorneys general might see the existing AI laws as a model covering the majority of GenAI issues for the rest of the United States,[50] but that opinion may result from the prominent business adoption of decision-making systems now widely used in hiring, education, and critical support services. Everyday users’ slower AI adoption likely results from the technology not being well understood or utilized by the public.[51] As the public interacts more with chatbots, additional harms are likely to arise along with the increased usage of AI systems across sectors, requiring expanded legislative coverage of these potential GenAI harms.

While cases of chatbot harm have gathered some public attention through occasional news articles, the general public does not seem to have reached a consensus about whether legislators need to increase user protections from chatbot harm. The U.S. Chamber of Commerce and the Consumer Technology Association argue that regulations restricting decision-making AI algorithms will hamper AI adoption by small businesses as a barrier to entry, which could impede overall economic growth.[52] More targeted laws and regulations, including the above recommendations focused on AI chatbots, will likely draw similar criticism for slowing some innovation while organizations increase user protections for publicly available GenAI systems.

Recommended Benefits Outweigh Development Drawbacks

Although some areas of development could be slowed by increased liability, the resulting incentives would increase user protections against chatbot harms, a more significant benefit to society. The N.Y. Senate’s proposed chatbot legislation would impose minimal liability on companies and could be seen as reactive and very limited, given the 30-day window to correct errors before permanent liability attaches. The additional recommendations suggested here, however, would establish permanent liability for reasonably foreseeable harm and a private right of action enabling individuals and families to seek justice from content-generating AI system owners and developers. While AI development might be less costly when the scope or nature of the content created by GenAI systems is unlimited, preventing the mental, emotional, and physical risk of harm to users significantly outweighs those concerns. Furthermore, the possible drawbacks of additional cautionary steps in parts of the GenAI development process related to topic coverage, age verification, and parental and user notifications are unlikely to be significant enough to impede overall AI technology innovation.

Conclusion

Individual users can already interact with GenAI to explore the depths of human knowledge and emotion. They may soon be able to voyage beyond the frontiers of our known existence into new worlds with AI’s help. Along the way to further growth, legislators should establish requirements for AI development to ensure that content created by GenAI systems is safe and does not harm users. Surprisingly, AI chatbots may ultimately be one of the best tools for public health because of their ability to break down personal barriers to healthcare or provide advanced detection for early intervention when healthcare is needed.[53] However, some GenAI developers may place profits and innovation over public health concerns, which is why liability incentives creating an appropriate duty of care are required to prevent further chatbot harm.

New technological developments have correspondingly increased the likelihood of GenAI chatbot harm as users turn to AI systems to meet needs in their own lives. As children are among the most vulnerable users of GenAI, requiring age verification or parental consent and being transparent about potentially harmful GenAI content is a prudent method to prevent harm to minors. Enabling private rights of action against owners of GenAI systems that create reasonably foreseeable harmful content is essential to resolving questions of GenAI developer liability for harmful behaviors and the associated disputes. While these recommendations might require additional effort, they will hold developers accountable for the types of content their GenAI systems create and enable a safer, more reliable user experience, covering the gap in current AI laws to help protect children and vulnerable adults from harm.


[1] EU AI Act: first regulation on artificial intelligence, European Parliament, 1 (June 2024), https://www.europarl.europa.eu/pdfs/news/expert/2023/6/story/20230601STO93804/20230601STO93804_en.pdf.

[2] Id. at 3.

[3] Article 6: Classification Rules for High-Risk AI Systems: EU Artificial Intelligence Act, Future of Life Inst. (last visited Nov. 16, 2024), https://artificialintelligenceact.eu/article/6/.

[4] Colo. Rev. Stat. § 6-1-1701(3).

[5]  Id.

[6] Colo. Rev. Stat. § 6-1-1703(6).

[7] Maneesha Mithal, Christopher Olsen & Stacy Okoro, Colorado Passes First-in-Nation Artificial Intelligence Act, Wilson Sonsini (May 21, 2024), https://www.wsgr.com/en/insights/colorado-passes-first-in-nation-artificial-intelligence-act.html.

[8] European Artificial Intelligence Act comes into force, European Comm’n: Directorate-Gen. for Commc’n. (July 31, 2024), https://ec.europa.eu/commission/presscorner/detail/en/ip_24_4123.

[9] The legal implications of Generative AI, Deloitte AI Inst. (Oct. 2023), https://www2.deloitte.com/content/dam/Deloitte/us/Documents/consulting/us-ai-institute-generative-ai-legal-issues.pdf.

[10] Dieter Holger, Amazon’s AI shopping assistant Rufus is often wrong, ConsumerAffairs (Nov. 7, 2024), https://www.consumeraffairs.com/news/amazons-ai-shopping-assistant-rufus-is-often-wrong-110724.html.

[11] Jonathan Lopez, GM Dealer Chat Bot Agrees To Sell 2024 Chevy Tahoe For $1, GM Authority (Dec. 18, 2023), https://gmauthority.com/blog/2023/12/gm-dealer-chat-bot-agrees-to-sell-2024-chevy-tahoe-for-1/.

[12] Mandy Taheri, Google’s AI Chatbot Tells Student Seeking Help with Homework ‘Please Die’, Newsweek (Nov. 15, 2024), https://www.msn.com/en-us/news/technology/googles-ai-chatbot-tells-student-seeking-help-with-homework-please-die/ar-AA1u9Owo.

[13] Emily Crane, Boy, 14, fell in love with ‘Game of Thrones’ chatbot — then killed himself after AI app told him to ‘come home’ to ‘her’: mom, N.Y. Post (Oct. 23, 2024), https://nypost.com/2024/10/23/us-news/florida-boy-14-killed-himself-after-falling-in-love-with-game-of-thrones-a-i-chatbot-lawsuit/.

[14] Jeff Beckman, 120+ Chatbot Statistics for 2024 (Already Mainstream), Techreport (May 29, 2024), https://techreport.com/statistics/software-web/chatbot-statistics/.

[15] Id.

[16] Windsor Johnson, A historic new law would protect kids online and hold tech companies accountable, Or. Pub. Broad. (Aug. 3, 2024), https://www.opb.org/article/2024/08/03/a-historic-new-law-would-protect-kids-online-and-hold-tech-companies-accountable/.

[17] It’s time to #PassKOSA., Parents for Safe Online Spaces (last visited Nov. 2, 2024), https://www.parentssos.org/work.

[18] S.1409 – Kids Online Safety Act: 118th Congress (2023-2024), Libr. of Cong. (Dec. 13, 2023), https://www.congress.gov/bill/118th-congress/senate-bill/1409.

[19] Id.

[20] AI Act enters into force, European Comm’n: Directorate-Gen. for Commc’n. (Aug. 1, 2024), https://commission.europa.eu/news/ai-act-enters-force-2024-08-01_en.

[21] SB24-205 Consumer Protections for Artificial Intelligence, Colo. Gen. Assembly (last visited Nov. 2, 2024), https://leg.colorado.gov/bills/sb24-205.

[22] Senate Bill S9381, N.Y. State Senate (May 14, 2024), https://www.nysenate.gov/legislation/bills/2023/S9381.

[23] Id. at § 3.

[24] Id. at § 2.

[25] Id.

[26] Id.

[27] Id. at § 3.

[28] Michal Lavi, Targeting Children: Liability for Algorithmic Recommendations, 73 Am. U. L. Rev. 1367, 1430 (2024).

[29] Rick Scott, S.1626 – ASK Act 118th Congress (2023-2024), Libr. of Cong. (May 16, 2023), https://www.congress.gov/bill/118th-congress/senate-bill/1626/text.

[30] S.1409, supra, at § 3(b)(1).

[31] Chloe Xiang, ‘He Would Still Be Here’: Man Dies by Suicide After Talking with AI Chatbot, Widow Says, VICE Media (Mar. 30, 2023), https://www.vice.com/en/article/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says/.

[32] Id.

[33] S.1409, supra, at § 3(b)(2).

[34] Lavi, supra, at 1437.

[35] Lavi, supra, at 1370-76.

[36] Id. at 1446.

[37] 47 U.S.C. § 230(c)(1).

[38] Zarya of the Dawn (Registration # VAu001480196), U.S. Copyright Off. at 9 (Feb. 21, 2023).

[39] Michael Atleson, Succor borne every minute, F.T.C. (June 11, 2024), https://www.ftc.gov/business-guidance/blog/2024/06/succor-borne-every-minute.

[40] Id.

[41] Beckman, supra.

[42] Lavi, supra, at 1453.

[43] Alex Seigal & Ivan Garcia, A Deep Dive into Colorado’s Artificial Intelligence Act, Nat’l Ass’n of Att’y Gen. (Oct. 26, 2024), https://www.naag.org/attorney-general-journal/a-deep-dive-into-colorados-artificial-intelligence-act/.

[44] Id.

[45] Combatting Online Harms Through Innovation, F.T.C., 2 (June 16, 2022), https://www.ftc.gov/reports/combatting-online-harms-through-innovation.

[46] Id. at 7.

[47] Zach Meyers, Is the EU’s AI Act Merely a Distraction from Europe’s Productivity Problem?, The Econ. Voice, 3 (2024).

[48] Id.

[49] Community Safety Updates, Character.AI (Oct. 22, 2024), https://blog.character.ai/community-safety-updates/.

[50] Seigal et al., supra.

[51] Id.

[52] Id.

[53] Luke Balcombe, AI Chatbots in Digital Mental Health, 10 Informatics, no. 4, 82, 86 (2023).


Author’s Note: This essay was created from a specific request to research and analyze gaps in specific areas of AI law and how they could be improved.