Missouri’s “Taylor Swift Act” and the new liability risks for sharing AI images
A new Missouri bill creates a $150,000 civil hammer and fresh felony charges for “digital depictions,” with special rules for minors and an election escalator.
Missouri lawmakers are pitching a simple promise: stop the spread of nonconsensual, realistic, digitally manipulated sexual images of real people. The fine print reshapes consent, lawsuits, and criminal risk for creators, sharers, and hosts.
House Bill 1887 (HB 1887) takes that promise and turns it into a named brand, “The Taylor Swift Act,” designed to ride public disgust and headline attention into legislative momentum.
The name will do its job. The legal mechanics will do something else: create a broad new civil right to sue and a new felony offense that reaches well beyond the narrow phrase “AI deepfake nudes,” depending on how courts interpret a few loaded words.
What HB 1887 is trying to stop
Start with the core problem the bill targets: realistic sexual imagery of a real person, shared without consent, where the image was created or altered through digital manipulation.
The underlying bill text is explicit about the framework, including the definitions that do most of the work, and it is worth reading straight from the source in the introduced version from the Missouri House of Representatives bill PDF.
The bill is not written as a sweeping “regulate AI” platform. It is written as a liability instrument. It creates a path to sue. It creates a felony for disclosure. Then it adds carve-outs, conditions, and escalation triggers that decide who actually carries the risk.
The definition that makes the bill bigger than “deepfakes”
HB 1887 does not use “deepfake” as its primary legal hook. It uses “digital depiction.”
A “digital depiction” is defined as a realistic visual depiction created or altered using digital manipulation. That can include generative AI, but the language can also reach other forms of realistic editing or compositing, depending on how “realistic” gets interpreted over time. The bill’s own definitions are laid out in the introduced bill text.
That matters because “realistic” is not a tiny detail. It is the boundary between a narrow deepfake porn law and a broader speech and liability regime that could catch many types of manipulated imagery that people share casually.
“Intimate digital depictions” and what counts
The bill then narrows the adult-focused category by defining “intimate digital depiction.”
This is the bucket that includes explicit nudity, sexual fluids, or sexually explicit conduct, with an enumerated list of what qualifies. The practical takeaway is that it is built to cover fake porn, and not just implied sexual content. That definition also appears in the Missouri House bill PDF.
So far, that looks like a generative-era upgrade to existing nonconsensual intimate imagery laws.
Then HB 1887 makes a key choice.
Minors get broader protection, with broader consequences
For people under 18, HB 1887 is not limited to “intimate” content. The civil cause of action extends to any “digital depiction” of a minor that is disclosed without consent.
That is either a feature, a bug, or both.
It can be read as a way to cover realistic AI-altered bullying of minors even when it is not sexual. It can also sweep in a wide range of non-sexual, arguably non-harmful speech that happens to be digitally manipulated and “realistic.” The bill does not resolve that tension in advance. Courts will end up drawing the lines, after the cases show up.
The civil lawsuit option, and why it is designed to move fast
HB 1887 creates a new civil right to sue someone who discloses either:
a digital depiction of a minor, or
an intimate digital depiction of any individual,
when the discloser knows, or recklessly disregards, that the depicted person did not consent.
Two parts of the civil section are likely to shape behavior immediately.
First, the consent rules are written to shut down common defenses. Consent to create an image does not automatically mean consent to disclose it. Second, for intimate digital depictions, consent to disclosure is only “validly given” if it is in a plain-language written agreement, signed knowingly and voluntarily, and describing the depiction and any audiovisual work it will be part of. Those details are spelled out in the introduced bill language.
That written-consent requirement is meant to stop gamesmanship. It also raises the bar in a way that could surprise people who treat “they sent it to me once” as permission.
The money at stake is not subtle
HB 1887’s civil remedies are aggressive by design. A plaintiff can pursue:
the defendant’s monetary gain tied to the image
actual damages, including emotional distress, or liquidated damages of $150,000
punitive damages
attorney’s fees and court costs
injunctions, including temporary restraining orders and preliminary injunctions
The bill also includes options to protect victims during litigation, including pseudonyms and in camera proceedings. Those remedies, and the leverage they create, are in the civil section of the bill text.
That mix matters. Liquidated damages plus fee shifting plus quick injunctions can make it feasible for victims to act quickly. It can also turn vague boundaries into expensive disputes, especially if parties fight over what “realistic” means in a given image.
The carve-outs exist, but they are not a universal shield
HB 1887 includes several limits on civil actions. It bars suits if the disclosure was made in good faith to law enforcement or as part of a legal proceeding, if the depiction was a matter of legitimate public concern or public interest, or if the disclosure was reasonably intended to assist the depicted individual.
Two details in these carve-outs are easy to miss.
The bill says something is not automatically of legitimate public concern just because the depicted person is a public figure. That is a direct rejection of the “they’re famous so it’s news” excuse, and it is in the statutory language itself in the bill PDF.
It also says a disclaimer is not a defense. In other words, a caption like “this is fake” does not get you off the hook if you disclose an image that triggers the statute. Again, that appears in the bill text.
That is sensible in the classic abuse scenario. It is also a warning for meme culture and “joke” sharing, because the statute is written around disclosure and harm risk, not around intent to entertain.
The criminal offense, and the escalation that changes the stakes
HB 1887 also adds a criminal section that makes certain disclosures, or threats to disclose, a felony.
The felony covers:
a digital depiction of a person under 18, or
an intimate digital depiction disclosed with intent to harass, threaten, alarm, or cause substantial financial or reputational harm, or with actual knowledge or reckless disregard that it will cause physical, emotional, reputational, or economic harm
A first violation is a Class E felony. Subsequent violations can become a Class C felony.
Then comes the steep escalation clause: the charge can also jump to Class C if the depiction could reasonably be expected to affect the conduct of a government proceeding, including elections or foreign relations, or facilitate violence. The bill does not require the effect to actually occur. That structure is visible in the criminal section of the introduced bill PDF.
This is the point where a law aimed at deepfake nudes starts to look like a tool that can reach high-pressure political contexts, depending on how aggressively prosecutors interpret the “could reasonably be expected” language.
The fiscal note signals real criminal justice costs
Missouri’s own fiscal analysis anticipates incarceration and supervision costs tied to the new felony categories.
You can see the bill framing and the projected net effect discussion in the state’s fiscal note PDF for HB 1887.
Even if you focus purely on the moral clarity of stopping nonconsensual fake sexual images, the fiscal note is a reminder that felony creation is never just symbolic. It builds a pipeline that the state then has to staff, fund, and supervise.
The quiet winner: platforms and hosting services
One of the least headline-friendly choices in HB 1887 is also one of the most consequential.
The bill gives interactive computer services a civil shield, and it says providers do not commit the criminal offense through good-faith efforts to restrict access to the material or to provide the technical means to restrict access. The platform protection language is part of the bill text.
In practice, that means the bill is aimed primarily at individual uploaders and sharers, rather than at forcing platforms into a mandatory takedown clock.
That is a defensible policy choice. It can also leave victims with the hardest part of enforcement: figuring out who actually posted or forwarded the image, serving them, and collecting on a judgment.
Platforms will still have incentives to remove aggressively, because “good faith” moderation is protected and risk management is their default posture.
Why the “Taylor Swift Act” label is part of the policy
The title is not just marketing. It is part of how the bill moves.
By tying the statute to a celebrity hook, the bill frames a widely loathed abuse in a way that is easy to sell, while the text itself builds broader categories and escalators that can reach beyond the initial outrage. The bill even states that the relevant sections “shall be known and may be cited as” the name, in the official bill language.
This also fits into a wider legislative moment in Missouri. Local reporting has described lawmakers hearing multiple AI abuse proposals and discussing consolidation into a larger package, which is how narrow-sounding bills can turn into broader bundles over time, as covered by ABC17NEWS.
What readers should do now
HB 1887 is still a proposal, but it signals where policy is heading. If your work touches generative tools, hosting, moderation, or even casual group-chat sharing, the safest reading is that “disclosure” will be treated broadly in practice.
If you build or host generative AI tools
Assume the riskiest behavior is not just creation, but sharing.
Design guardrails that prevent generating or distributing realistic intimate depictions of real people without verifiable consent. HB 1887 goes out of its way to say that a disclaimer is not a defense, and that consent to create does not automatically equal consent to disclose, in the introduced bill language.
If your product includes sharing, reposting, or export tools, treat that as a high-risk surface, not a convenience feature.
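To make “treat sharing as a high-risk surface” concrete, here is a minimal sketch of a disclosure gate in Python, assuming a hypothetical pipeline where every share or export request carries flags for realism and intimacy plus a consent record. The names (ConsentRecord, DepictionRequest, can_disclose) are illustrative, not taken from the bill, and the check is a conservative engineering reading of the bill’s consent rules, not legal advice.

```python
# A minimal sketch of a disclosure gate, not a legal compliance tool.
# All names here are hypothetical and not drawn from HB 1887.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ConsentRecord:
    """Hypothetical record of the written consent the bill describes."""
    signed_written_agreement: bool   # plain-language, signed agreement exists
    describes_depiction: bool        # the agreement describes this specific depiction
    covers_disclosure: bool          # consent to create is NOT consent to disclose
    depicted_person_is_minor: bool


@dataclass
class DepictionRequest:
    """Hypothetical share/export request for a generated or edited image."""
    depicts_real_person: bool
    is_realistic: bool
    is_intimate: bool
    has_disclaimer: bool             # never checked: the bill says a disclaimer is not a defense
    consent: Optional[ConsentRecord]


def can_disclose(req: DepictionRequest) -> bool:
    """Return True only when sharing looks clearly outside the bill's risk zone."""
    # Content that is not a realistic depiction of a real person falls outside
    # the "digital depiction" category as the bill describes it.
    if not (req.depicts_real_person and req.is_realistic):
        return True

    c = req.consent

    # Unknown subject or a minor: block. For minors the civil action reaches
    # any realistic depiction, not just intimate ones.
    if c is None or c.depicted_person_is_minor:
        return False

    # Intimate depictions of adults need written, signed consent that covers
    # disclosure specifically; consent to create is not consent to disclose.
    if req.is_intimate:
        return (
            c.signed_written_agreement
            and c.describes_depiction
            and c.covers_disclosure
        )

    # Realistic but non-intimate depictions of consenting adults sit outside
    # this bill's civil action, though other laws may still apply.
    return True


# Example: an intimate, realistic image of a real adult with creation-only consent is blocked.
request = DepictionRequest(
    depicts_real_person=True,
    is_realistic=True,
    is_intimate=True,
    has_disclaimer=True,  # "this is fake" changes nothing
    consent=ConsentRecord(
        signed_written_agreement=True,
        describes_depiction=True,
        covers_disclosure=False,  # consented to creation, not disclosure
        depicted_person_is_minor=False,
    ),
)
assert can_disclose(request) is False
```

The design choice worth copying is not the exact flags; it is that the gate sits at the share or export step, because “disclosure,” not generation, is where the bill concentrates liability.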
If you are experimenting privately
Keep experiments private. Avoid real-person likenesses, especially minors.
HB 1887 protects minors more broadly than adults by extending civil exposure to any disclosed “digital depiction” of a minor, not only intimate content, in the statute’s civil section. The best way to avoid becoming a test case is not to generate borderline material in the first place.
If you are a victim
Preserve evidence and metadata as early as possible, including screenshots, URLs, and any account identifiers that might later disappear.
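If it helps to make that concrete, here is a small sketch of an evidence manifest in Python, assuming you have already saved screenshots or downloaded copies into a local folder. The function name, the manifest format, and the example paths are all hypothetical, and a manifest like this supplements, rather than replaces, guidance from counsel.

```python
# A small sketch of an evidence manifest: hash saved files and record where they came from.
# The function name and manifest format are hypothetical.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def record_evidence(folder: str, source_notes: dict[str, str], out: str = "manifest.json") -> None:
    """Write a JSON manifest of every file in `folder` with a SHA-256 hash and a UTC timestamp.

    `source_notes` maps file names to free-text context such as the URL,
    the account handle that posted the image, or the group chat it appeared in.
    """
    entries = []
    for path in sorted(Path(folder).iterdir()):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        entries.append({
            "file": path.name,
            "sha256": digest,  # lets you show later that the copy was not altered
            "recorded_at_utc": datetime.now(timezone.utc).isoformat(),
            "source": source_notes.get(path.name, ""),  # URL, account identifier, chat name, etc.
        })
    Path(out).write_text(json.dumps(entries, indent=2))


# Example (hypothetical paths and URL):
# record_evidence("evidence/", {"post.png": "https://example.com/status/123, posted by @handle"})
```

A manifest like this does not make the copies admissible on its own; it simply gives you hashes, timestamps, and source notes to point at once the original posts start disappearing.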
HB 1887 is designed to make injunctions and fee recovery feasible, which can matter when time is the enemy and the harm is spreading. The civil remedies and confidentiality tools are in the bill text.
If you care about free speech and overbreadth
Focus on the pressure points that will decide how far this reaches in practice: the meaning of “realistic,” the boundaries of “legitimate public concern,” and the election and government proceeding escalation clause.
Those are the provisions that can keep the law targeted on nonconsensual fake sexual images, or push it toward a multipurpose prosecutorial tool.
Where this fits in the bigger national and global push
Missouri is not operating in a vacuum. Debates about how quickly platforms should remove nonconsensual intimate imagery and AI-altered sexual content are playing out in other jurisdictions too.
Recent coverage has highlighted how quickly “takedown rules” can become central to these debates, including reporting on proposed rapid-removal approaches by Reuters and The Guardian.
In the United States, federal-level conversations around deepfakes and nonconsensual intimate images continue to evolve, with mainstream coverage tracking the policy arguments and First Amendment friction, including reporting by AP News.
The bottom line
HB 1887 is not a general “AI bill.” It is a focused response to a specific abuse, wrapped in a celebrity label that makes it easier to pass.
Still, the fine print matters. The bill’s broad definition of “digital depiction,” the minor provisions, and the escalation clause tied to government proceedings are the parts most likely to expand the statute’s practical reach over time.
If you want the cleanest, most current reference point for where it stands, track the bill on the state’s official page, house.mo.gov’s HB 1887 bill information.
That is where a narrow-sounding fix either stays narrow, or quietly becomes something bigger.