A Deep Look at Legislation
Deepfake legislation in the U.S. is advancing swiftly to combat the rising risks associated with synthetic media, addressing critical areas such as cybersecurity, privacy, election integrity, and intellectual property. Federal and state lawmakers are enacting and refining laws to curb the misuse of deepfake technology, focusing on issues like fraud, defamation, election manipulation, and non-consensual explicit content. These evolving regulations aim to safeguard individuals, institutions, and democratic processes against the growing challenges posed by this rapidly advancing technology.
With the rapid advancement of technology, it is crucial to establish proper legislation to ensure that these powerful tools are used responsibly. Clear guidelines are necessary to maximize their benefits while minimizing risks, protecting against malicious use, and safeguarding societal well-being.
What Federal Laws Address Deepfakes or Artificial Intelligence (AI)?
National Defense Authorization Act (NDAA)
In 2024, updates to the NDAA outlined measures to address deepfake technology in cybersecurity and military operations, with a focus on its potential use in misinformation campaigns by adversarial nations. The Act includes a National Deepfake Detection Program, election security provisions, and the criminalization of malicious deepfakes, and it formalizes the NIST AI Risk Management Framework.
Federal Trade Commission Act (FTC Act)
The FTC Act (§ 5) prohibits “unfair or deceptive acts or practices” in commerce. This can include service providers overpromising their AI capabilities or using AI-generated deepfakes for malicious deception.
The FTC can take enforcement actions against entities or individuals using deepfakes or AI-generated media to engage in fraud, false advertising, or scams. Recent news highlighted how a deepfake convinced a professional to transfer millions of dollars to a fraudulent account.
U.S. Criminal Code (18 U.S.C.)
Fraud and Computer Misuse:
- 18 U.S.C. § 1030 (Computer Fraud and Abuse Act – CFAA) is a federal law that prohibits computer fraud and abuse. It criminalizes unauthorized computer access, which can extend to the malicious creation or deployment of deepfake content used for fraud. The CFAA aims to prevent computer crime and balance the interests of the federal government and the states.
- 18 U.S.C. § 1343: Wire fraud statutes could also apply to deepfake-based scams, such as fake audio or video used to defraud victims.
Defamation and Non-Consensual Content:
- Deepfakes used in harassment or defamation may fall under laws addressing cyberstalking and harassment, particularly under statutes like 18 U.S.C. § 2261A (stalking).
What Are Some Enacted State Laws Addressing Deepfakes or Artificial Intelligence?
Many states have proposed legislation regarding the use of AI. Quite a few bills are pending; here is an overview of some that have already been enacted.
ALABAMA
HB 161 Crimes & Offenses, Prohibits a Person from Creating a Private Image Without Consent
Quick Summary: This prohibits two crimes: “(1) distributing a private image of an individual without their consent, and (2) creating a private image of an individual without their consent.”
Quick Summary: It is “illegal to distribute AI-generated deceptive media if the distributor knows it falsely represents a person and intends to influence an election or harm a candidate’s electoral prospects.”
ARIZONA
HB 2394 Digital Impersonation; Injunctive Relief; Requirements
Quick Summary: This makes the “digital impersonation of a candidate for public or political party office, without consent, unlawful.”
S 1359 Election Communications; Deep Fakes; Prohibition
Quick Summary: People cannot create and distribute a synthetic media message that they know is a deceptive or fraudulent depiction of a candidate on a ballot unless the media includes a clear disclosure that it contains content generated by artificial intelligence.
CALIFORNIA
AB-2839 Elections: deceptive media in advertisements.
Quick Summary: It is illegal to knowingly share fake or altered videos, images, or audio in ads or election messages if the goal is to mislead voters or raise money for a campaign. If someone breaks this rule, affected parties can take legal action to stop the content and seek damages.
AB-2355 Political Advertisements: Artificial Intelligence
Quick Summary: AB 2355 addresses AI in political ads. Specific rules apply if an image, video, or audio clip is entirely generated or substantially altered by AI in a way that could mislead viewers about the original content.
SB-926 Crimes: distribution of intimate images.
Quick Summary: It is a crime to intentionally share or distribute AI-generated or deepfake images that realistically depict another person’s intimate body parts or sexual acts, if the image is made to appear authentic and could mislead a reasonable person into believing it is real.
COLORADO
HB24-1147 Candidate Election Deepfake Disclosures
Quick Summary: This legislation addresses the implications of utilizing deepfake technology in communications concerning candidates for elective office. It outlines the necessity for disclosure regarding the use of such technology, establishes mechanisms for enforcement, and introduces a private cause of action for candidates affected by deepfake misuse.
FLORIDA
CS/HB 919 Artificial Intelligence Use in Political Advertising
Quick Summary: This regulation mandates that specific political advertisements, electioneering communications, and miscellaneous advertisements must include a designated disclaimer. It outlines the requirements for this disclaimer, establishes both criminal and civil penalties for non-compliance, and permits any individual to file relevant complaints. Additionally, the regulation provides for expedited hearings to address these complaints.
HAWAII
Quick Summary: SB 2697 prohibits any individual, with specific exceptions, from recklessly distributing or entering into agreements to distribute materially deceptive media from the first working day of February in every even-numbered year until the next general election. Such actions must be conducted with reckless disregard for the potential harm to a candidate’s reputation or electoral prospects, as well as the impact on voter behavior. The provision also establishes criminal penalties and remedies for affected parties.
IDAHO
HB 575 Disclosing Explicit Synthetic Media
Quick Summary: HB 575 states that an individual is guilty of disclosing explicit synthetic media if they knowingly disclose such media, know or reasonably should know that an identifiable person depicted in the media did not consent to its disclosure, and the disclosure is likely to cause substantial emotional distress to that person. The bill outlines criminal penalties for violations and specifies certain exceptions.
HB 664 Freedom From AI-Rigged (FAIR) Elections Act
Quick Summary: HB 664 allows a candidate whose actions or speech are misrepresented through synthetic media in election-related communications to seek injunctive or other equitable relief to prevent the publication of such media. That candidate may pursue a damages claim for the deceptive representation.
ILLINOIS
HB 4762 Digital Voice and Likeness Protection Act
Quick Summary: The Digital Voice and Likeness Protection Act is designed to safeguard individuals from unauthorized use of their digital replicas. It addresses the concern of misuse of digital likenesses created through technologies like generative AI.
Quick Summary: This amendment to the Right of Publicity Act enhances enforcement rights and remedies specifically for recording artists. It establishes liability for any individual who materially contributes to, induces, or facilitates a violation of the act by another party, provided that the individual had actual knowledge that the work contains an unauthorized digital replica.
INDIANA
HB 1133 Digitally Altered Media in Elections
Quick Summary: HB 1133 defines fabricated media and stipulates that if a campaign communication features fabricated media depicting a candidate, the individual or entity that financed the communication must include a disclaimer that is distinct from any other disclaimers. Furthermore, a candidate who is portrayed in fabricated media within a campaign communication that lacks the required disclaimer has the right to initiate a civil action.
LOUISIANA
Quick Summary: SB 6 pertains to computer-related crimes, specifically addressing the unlawful dissemination or sale of images generated by artificial intelligence. It outlines relevant definitions and stipulates associated penalties for violations.
SB 488 Unethical Election Practices
Quick Summary: SB 488 prohibits specific false statements made by political committees. It stipulates that no candidate or political committee may, with the intent of misleading voters, distribute or facilitate the distribution of any oral, visual, digital, or written material that contains statements they know, or should reasonably be expected to know, are false regarding another candidate in the election.
MARYLAND
HB 333 Election Misinformation and Election Disinformation
Quick Summary: The State Board of Elections must create a portal on its website for the public to report election misinformation and disinformation. The Board will periodically review submissions and, if necessary, provide corrective information or refer cases to the State Prosecutor. Additionally, “influence” is defined for legal provisions prohibiting improper voting influence.
MICHIGAN
Quick Summary: If an individual, committee, or entity creates, publishes, or distributes a qualified political advertisement, it must clearly state that the advertisement was generated wholly or largely by AI, if applicable. This statement must be presented in a clear and conspicuous manner, meeting specified requirements.
MINNESOTA
Quick Summary: 4772 addresses elections by implementing policy and technical changes related to election administration, campaign finance, lobbying, and census redistricting. It establishes the Minnesota Voting Rights Act and modifies the offense of using deepfakes to influence elections.
MISSISSIPPI
S 2577 Wrongful Dissemination of Digitizations
Quick Summary: 2577 imposes criminal penalties for the unauthorized sharing of digitizations.
NEW HAMPSHIRE
Quick Summary: This legislation addresses the crime of fraudulent use of deepfakes and outlines associated penalties. It establishes a legal cause of action for such fraudulent use and prohibits the registration of lobbyists found guilty of using deepfakes in specific cases. A person will be charged with a Class B felony if they knowingly create, distribute, or present a deepfake—defined as any video, audio, or other media likeness of an identifiable individual—intended for certain purposes.
HB 1688 Artificial Intelligence
Quick Summary: Addresses the use of AI by state agencies, prohibiting them from manipulating, discriminating against, or surveilling the public.
NEW MEXICO
Quick Summary: This legislation mandates that any advertisement containing materially deceptive media, including AI-generated content, must include a disclaimer. It also establishes penalties for distributing or agreeing to distribute such deceptive media to mislead voters, with both civil and criminal consequences.
NEW YORK
S 9678 Materially Deceptive Media in Political Communications
Quick Summary: This law addresses deceptive media in political communications. It requires any individual or organization that distributes political content containing deceptive media, and is aware of its misleading nature, to disclose this information. There are exceptions for bona fide news entities distributing such media for specific purposes.
TENNESSEE
HB 2091 Protection of Personal Rights
Quick Summary: House Bill 2091, known as the Ensuring Likeness, Voice, and Image Security (ELVIS) Act, establishes personal rights protections in Tennessee. It replaces the Personal Rights Protection Act and affirms that individuals hold property rights over the use of their name, photograph, voice, or likeness across all media. The law also states that the exclusive right to commercially exploit these rights ends if an executor, assignee, heir, or devisee can prove non-use for commercial purposes.
UTAH
SB 131 Information Technology Act Amendments
Quick Summary: Pertains to audio or visual communications aimed at influencing votes for or against candidates or ballot propositions in state elections. It mandates that audio communications using synthetic media must clearly state specific words at the beginning and end, while visual communications must display these words during segments that include synthetic media.
WISCONSIN
AB 684 Artificial Intelligence Content Disclosure
Quick Summary: Addresses the disclosure of AI-generated content in political ads, grants rule-making authority, and establishes penalties.
As AI use becomes more mainstream, we all need to be aware of the risks, our rights, and the regulations that govern its use. For an overview of deepfakes, recent scams, and a new cybersecurity poster to remind your teams of the looming threat they pose, visit ‘How DeepFake is Your Love?’
SOURCES:
https://www.ncsl.org/technology-and-communication/deceptive-audio-or-visual-media-deepfakes-2024-legislation
https://www.citizen.org/article/tracker-legislation-on-deepfakes-in-elections/