TrustMark
April 2026 — Fact-Checked Research Brief

Where the World Is Heading With AI Identity Law

Every claim on this page is sourced. Every date is verified. Every law is cited with its official reference. This is not a marketing document — it is a research brief on the regulatory landscape that shapes AI identity rights globally.

Jun 9, 2026 — New York A8887-B effective
Aug 2, 2026 — EU AI Act Article 50 in force
🇺🇸

The United States

Two tracks, one tension: federal preemption vs. state-level protections

Track 1: Federal Preemption Strategy

Phase 1 — January 23, 2025: Executive Order 14179

Revoked Biden's entire AI executive order. The stated purpose: “removing every barrier to American AI dominance.”

Phase 2 — December 11, 2025: Executive Order 14365

“Ensuring a National Policy Framework for Artificial Intelligence” — the most consequential AI policy action in U.S. history. Key mechanisms:

  • DOJ AI Litigation Task Force — established January 9, 2026. Mission: challenge state AI laws in federal court on Dormant Commerce Clause grounds, preemption, and First Amendment violations.
  • $42 billion BEAD broadband funding conditioned on states not enacting “onerous” AI laws.
  • Colorado AI Act explicitly named and criticized — delayed from Feb 1 to June 30, 2026 as a direct result.

Phase 3 — March 20, 2026: National Policy Framework for AI

Full legislative blueprint urging Congress to adopt a “light-touch” federal regime. Seven pillars: child safety, communities, creators, censorship, competitiveness, workforce, and preemption of state AI laws.

The Creator/Performer Carveout

The Framework explicitly recommends “establishing safeguards against unauthorized digital replicas of individuals' voice, likeness, or other attributes.” This is the administration acknowledging that identity rights need federal protection — even while dismantling everything else.

Sullivan & Cromwell — National Policy Framework

Track 2: State Laws That Are Now In Force

New York A8887-B

Jun 9, 2026 — Effective

Signed December 11, 2025. Any person who produces an advertisement must conspicuously disclose use of a “synthetic performer” — a digitally created human not recognizable as any real person. Penalties: $1,000 first violation, $5,000 each subsequent. No private cause of action. Carveouts for audio-only, language translation, and expressive work promotions.

California AB 2602 — Effective January 1, 2025

Contract provisions that allow an AI digital replica of a living performer to be used in place of work the performer would otherwise have performed in person are unenforceable unless specific requirements are met. A direct labor protection — studios cannot bury consent in boilerplate contracts.

California AB 1836

Protects deceased personalities from unauthorized AI-generated digital replicas in audiovisual works and sound recordings.

Federal TAKE IT DOWN Act — Signed May 2025

First U.S. federal deepfake law. Criminalizes non-consensual intimate imagery including AI-generated fakes. Platforms must remove flagged content within 48 hours. By May 2026, all platforms hosting user content must have notice-and-takedown systems.

Pending Federal Bills

NO FAKES Act (April 9, 2025) — Would make it unlawful to create or distribute AI replicas of voice or likeness without consent. Limited exceptions for satire and commentary. The closest thing to a federal identity rights law.
DEFIANCE Act — Civil cause of action for victims of non-consensual sexual deepfakes, up to $250,000 statutory damages.
Protect Elections from Deceptive AI Act — Bans AI-generated deceptive content about federal candidates.

The Market Signal

Per Dynamis LLP: 68% of consumers frequently wonder if content is real. 50% prefer brands that avoid generative AI in consumer-facing content. 63% say brands have a duty to disclose AI use. Disclosure has become a brand signal, not just a legal requirement.

Dynamis LLP — AI Disclosure in 2026
🇪🇺

The European Union

The August 2 deadline — the world's most consequential AI law enters full enforcement

EU AI Act — Article 50

Aug 2, 2026 — Full enforcement

What it requires:

  • Providers of AI systems generating synthetic outputs must mark those outputs in machine-readable form.
  • Deployers (brands, agencies, studios) must disclose deepfakes and AI-generated public-interest content to end users.
  • The European Commission's March 3, 2026 Code of Practice draft mandates prominent user-facing disclosure — not buried metadata.

Penalties

Up to €15 million or 3% of global annual turnover, whichever is higher.

Geographic reach

Any brand or creator selling into or advertising in the EU — regardless of where they are based — is within the law's reach as a deployer. A Hollywood studio running a campaign in Germany must comply.

The C2PA connection

The EU's Code of Practice explicitly points toward Content Credentials (C2PA) as the technical standard for provenance. Google, Meta, and TikTok have already integrated C2PA functionality.
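What "machine-readable provenance" looks like in practice can be sketched as a small JSON manifest check. This is a minimal illustration only: the field names loosely follow C2PA Content Credentials conventions but are simplified, not the normative schema, and the tool name is hypothetical.

```python
import json

# Illustrative provenance manifest, loosely modeled on C2PA Content
# Credentials. Field names are simplified, not the normative schema;
# "example-render-service" is a hypothetical claim generator.
manifest = {
    "claim_generator": "example-render-service/1.0",
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        # IPTC digital source type signalling AI generation
                        "digitalSourceType": "trainedAlgorithmicMedia",
                    }
                ]
            },
        }
    ],
}


def is_marked_as_ai(manifest: dict) -> bool:
    """Machine-readable check: does any action declare AI generation?"""
    for assertion in manifest.get("assertions", []):
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") == "trainedAlgorithmicMedia":
                return True
    return False


# The manifest travels with the asset as JSON metadata; any downstream
# platform can parse it without human review.
payload = json.dumps(manifest)
print(is_marked_as_ai(json.loads(payload)))
```

The point of the standard is exactly this: the disclosure is a structured field a platform can test programmatically, not a caption a moderator has to read.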

High-Risk AI obligations

Originally scheduled for 2026, now delayed to 2027 per European Commission proposal. Biometric identification, emotion recognition, and AI systems that manipulate human behavior fall under the highest-risk tier with mandatory conformity assessments, human oversight requirements, and registration in an EU database.

🇩🇰

Denmark

The world's most radical identity law — personal identity as intellectual property

Denmark has done something no other country has done: treated personal identity as intellectual property. Culture Minister Jakob Engel-Schmidt announced an amendment to Danish copyright law giving every citizen the right to their own body, facial features, and voice. The bill passed with nine-in-ten MP support — the broadest cross-party consensus on any tech legislation in European history.

What the law does:

  • Right of Removal — Any citizen can demand immediate takedown of AI-generated content using their face, voice, or body, regardless of intent.
  • Compensation for Damage — Right to claim damages without proving reputational harm or malicious intent.
  • Platform Liability — Tech platforms face severe fines for failing to respond quickly to removal requests, aligned with the EU Digital Services Act.
  • 50-Year Post-Death Protection — Performers and regular citizens alike are protected for 50 years after death against unauthorized AI reproductions.
  • Parody/Satire Exception — Permitted, though enforcement criteria are still being defined.

Global significance

Denmark held the EU Council Presidency in late 2025 and used it to push this model to France, Ireland, and other EU members. The Good Lobby reports that Denmark explicitly framed this as a European blueprint — not just a domestic law.

🇯🇵

Japan

The opposite direction — and why it still matters

The AI Promotion Act (May 28, 2025)

Japan's first comprehensive AI law is explicitly non-binding. No fines. No mandatory compliance. The government issues guidance; companies are expected to follow it voluntarily. The enforcement mechanism is “name and shame” — public disclosure of non-compliance.

The Privacy Law Reversal (April 7, 2026)

Japan's Cabinet approved amendments to the Personal Information Protection Act that remove the requirement for opt-in consent before sharing personal data for AI development. Japan's Digital Transformation Minister explicitly called existing consent requirements “a very big obstacle to AI development.”

  • Organizations do not need authorization to use personal data for AI development if it doesn't identify individuals.
  • Facial scans are fair game — organizations must explain how they handle the data, but opt-out is not mandatory.
  • Health data can be used without consent if it improves public health.
  • Children under 16 require parental approval.
  • Fines for malicious misuse are pegged to the profits gained from improper data use.

The Copyright Paradox

Japan's Copyright Act Article 30-4 (2018 amendment) permits non-expressive uses of copyrighted works for AI training without authorization — the most permissive AI training regime in any major economy. However, if models are fine-tuned to imitate specific styles (e.g., LoRA training), the exemption may no longer apply.

Why Japan matters for identity infrastructure

Japan is betting that being the “easiest country to develop AI” will attract global AI investment. This creates a regulatory arbitrage problem — companies can train on Japanese data with minimal consent requirements, then deploy globally. Provenance infrastructure becomes the answer: even if training happened in Japan without consent, a render-time authorization layer ensures that deployment requires valid authorization regardless of where the model was trained.

🇨🇳

China

The most technically rigorous AI labeling regime in the world

Measures for Labeling of AI-Generated Synthetic Content — In force September 1, 2025

  • Mandatory dual watermarking — both visible (watermark/caption) and invisible (embedded digital signature in metadata) — for all AI-generated content.
  • Platforms must detect watermarks; unmarked content gets labeled “suspected synthetic.”
  • Altering or removing AI watermarks is banned.

Regula Forensics — Deepfake regulations worldwide
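The dual-marking requirement pairs a human-visible label with a machine-readable one. A minimal sketch of the idea, assuming hypothetical field names and label text (the Chinese measures prescribe their own wording and technical formats):

```python
# Minimal sketch of dual watermarking: a visible label plus an invisible
# mark embedded in metadata. The "aigc" field name, label text, and
# provider string are illustrative assumptions, not the mandated format.
def mark_content(caption: str, metadata: dict) -> tuple[str, dict]:
    visible = "[AI-generated] " + caption  # visible watermark/caption
    invisible = {**metadata, "aigc": {"generated": True, "provider": "example"}}
    return visible, invisible


def platform_label(metadata: dict) -> str:
    """Platform-side check: content without a valid mark gets flagged."""
    if metadata.get("aigc", {}).get("generated"):
        return "ai-generated"
    return "suspected synthetic"


caption, meta = mark_content("City skyline at dusk", {})
print(platform_label(meta))
print(platform_label({}))
```

Note the default path: under the rules, the absence of a mark does not mean "authentic" — it means "suspected synthetic," shifting the burden onto content that cannot prove its origin.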
🌍

Other Key Signals

South Korea, United Kingdom, France

🇰🇷 South Korea

Rolled out measures to curb deepfake pornography, including harsher criminal punishments and stepped-up platform regulation. Criminal penalties apply to both creation and distribution.

🇬🇧 United Kingdom

The Online Safety Act 2023 implementation continued through 2025. 2025 amendments target creators directly — intentionally crafting sexually explicit deepfake images without consent, with intent to cause distress, carries up to two years in prison. Age verification for adult sites mandatory since July 25, 2025.

🇫🇷 France

Bill No. 675 pending — mandatory labeling of AI-generated images on social networks. Fines up to €3,750 for users, €50,000 per offense for platforms. Article 226-8-1 (2024) already criminalizes non-consensual sexual deepfakes: up to 2 years imprisonment and €60,000 fine.

🔭

The Three-Speed World

Where the global regulatory landscape is splitting

Speed 1: Authorization-First

EU + Denmark + UK + France + South Korea

Identity = property right. Consent required before use. Provenance mandatory. Violations carry criminal or severe civil penalties. The direction of travel: identity rights become as fundamental as copyright.

Speed 2: Innovation-First

USA + Japan

Consent requirements loosened or preempted at the federal level. State-level protections survive but are under legal attack. Even these regimes recognize that identity rights for performers and public figures need some protection — just not the kind that slows down AI development.

Speed 3: State-Controlled

China

Mandatory watermarking and provenance — but controlled by the state, not by individuals. The infrastructure is technically rigorous. The governance model is the opposite of individual rights. It proves that technical provenance infrastructure is achievable at scale.

The Convergence Point

Despite the three speeds, every major jurisdiction is converging on one technical requirement: machine-readable provenance. The EU calls it Article 50 compliance. China calls it mandatory watermarking. Denmark calls it the right to demand removal. The NO FAKES Act calls it consent documentation. The C2PA standard is the technical layer underneath all of them.

Shared identity infrastructure that operates across all three speeds would generate: a render receipt that satisfies EU Article 50's machine-readable provenance requirement, a watermark that satisfies China's dual-watermarking mandate, an authorization token that satisfies Denmark's consent requirement, a contract reference that satisfies California AB 2602's performer protection, and a compliance bundle that satisfies New York's synthetic performer disclosure law. One infrastructure. Every jurisdiction.
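The compliance bundle described above can be sketched as a data structure: one record per render event, with each artifact mapped to the rule it is meant to satisfy. This is a hypothetical illustration, not an implemented system; all field names, IDs, and jurisdiction checks are assumptions for the sketch.

```python
from dataclasses import dataclass

# Hypothetical "compliance bundle" for one render event. Each field maps
# an artifact from the brief to the rule it addresses; names are
# illustrative, not a real schema.
@dataclass
class ComplianceBundle:
    render_receipt: str       # machine-readable provenance (EU AI Act Art. 50)
    watermark_id: str         # visible + invisible marks (China labeling rules)
    authorization_token: str  # subject's consent record (Denmark amendment)
    contract_ref: str         # performer contract terms (California AB 2602)
    disclosure_text: str      # synthetic-performer notice (New York A8887-B)

    def jurisdictions_covered(self) -> list[str]:
        checks = {
            "EU": bool(self.render_receipt),
            "CN": bool(self.watermark_id),
            "DK": bool(self.authorization_token),
            "US-CA": bool(self.contract_ref),
            "US-NY": bool(self.disclosure_text),
        }
        return [j for j, ok in checks.items() if ok]


bundle = ComplianceBundle(
    render_receipt="c2pa:abc123",            # placeholder identifiers
    watermark_id="wm-7f2e",
    authorization_token="auth-9d41",
    contract_ref="contract-2025-118",
    disclosure_text="This ad features a synthetic performer.",
)
print(bundle.jurisdictions_covered())
```

The design point is that the bundle is assembled once, at render time, rather than retrofitted per jurisdiction at distribution time.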

📅 The Timeline That Matters

Date | Event | Impact
Jan 2025 | Trump revokes Biden AI EO | US deregulation begins
May 2025 | TAKE IT DOWN Act signed | First US federal deepfake law
May 2025 | Japan AI Promotion Act | Non-binding but signals direction
Jun 2025 | Denmark copyright amendment announced | Identity = IP, European first
Sep 2025 | China AI labeling rules in force | Mandatory watermarking at scale
Dec 2025 | Trump EO 14365 | Federal preemption strategy launched
Dec 2025 | New York A8887-B signed | Synthetic performer disclosure law
Apr 2026 | Japan privacy law relaxed | Authorization-free AI training data
Jun 9, 2026 | New York law effective | Disclosure mandatory for ads
Aug 2, 2026 | EU AI Act Article 50 in force | Machine-readable provenance mandatory
2026–2027 | NO FAKES Act likely passage | Federal US identity rights law
2027 | EU high-risk AI obligations | Biometric AI fully regulated

Sources