TrustMark
[Hero image: geometric decomposition of a heroic character silhouette dissolving into grid data]

“Character identity exists beyond its creator.”

The Hardest Question

How do you enforce the copyright of a human?

A fictional character has a creator. A human being doesn't. So when AI generates a synthetic version of a real person — their face, their voice, their movement, their behavioral patterns — what law applies? Whose rights were violated? How do you prove it?

The problem in three parts

Copyright protects the work

Not the person depicted in it. A photograph of you is copyrighted by the photographer. A painting of you belongs to the artist. Your likeness in a film is owned by the studio under work-for-hire. Copyright was never designed to protect the person — only the artifact.

Trademark protects the name

Not the identity behind it. You can trademark your stage name, your logo, your brand. But trademark doesn't cover your voice, your movement, your behavioral patterns, or the way emotion crosses your face. It protects the label, not the person.

Right of publicity protects commercial use

But only in some states, only after the fact, and only when you can prove damages. It's reactive. It requires litigation. And it varies wildly by jurisdiction — strong in California and Tennessee, almost nonexistent in most of the world.

What TrustMark adds

Not a replacement for law. The infrastructure that makes law enforceable.

Timestamped cryptographic record

A record that the identity existed — before any AI used it. Not a claim of ownership. A provable, timestamped artifact that establishes priority.
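A minimal sketch of what such a record might look like, assuming a simple hash-and-timestamp scheme; the `register_identity` function and its field names are illustrative, not a published TrustMark API.

```python
import hashlib
import json
import time

def register_identity(descriptor: dict) -> dict:
    """Create a timestamped record proving an identity descriptor
    existed at a point in time. Only the hash is stored, never the
    raw biometric data, so the record itself reveals nothing."""
    payload = json.dumps(descriptor, sort_keys=True).encode()
    return {
        "identity_hash": hashlib.sha256(payload).hexdigest(),
        # In practice this timestamp would be anchored by an
        # independent timestamping authority or public ledger.
        "timestamp": int(time.time()),
        "algorithm": "sha256",
    }

record = register_identity({"name": "Jane Doe", "voiceprint_id": "vp-001"})
```

The same descriptor always produces the same hash, so a later synthetic copy can be matched against the record without ever exposing the underlying data.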

Machine-readable authorization layer

AI pipelines must check the authorization status before rendering. Not a legal request — a technical pre-condition built into the infrastructure.
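The pre-condition can be pictured as a gate the pipeline cannot skip; this is a sketch under assumed names (`AUTHORIZATION_REGISTRY`, `check_authorization`, `render_synthetic` are hypothetical, not a real TrustMark interface).

```python
# Hypothetical registry mapping identity hashes to authorized uses.
AUTHORIZATION_REGISTRY = {
    "3f2a": {"advertising", "film"},
}

class AuthorizationError(Exception):
    """Raised when a rendering request lacks recorded consent."""

def check_authorization(identity_hash: str, intended_use: str) -> None:
    """Fail closed: unknown identities and unlisted uses are refused.
    A technical pre-condition, not a post-hoc legal request."""
    allowed = AUTHORIZATION_REGISTRY.get(identity_hash, set())
    if intended_use not in allowed:
        raise AuthorizationError(
            f"use '{intended_use}' not authorized for {identity_hash}")

def render_synthetic(identity_hash: str, intended_use: str) -> str:
    check_authorization(identity_hash, intended_use)  # gate runs first
    return f"rendered output for {identity_hash}"     # placeholder for the model call
```

The design choice that matters is failing closed: absence of a record means no rendering, rather than rendering by default and litigating later.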

Provenance chain on every output

"Was this authorized?" has a technical answer, not just a legal argument. Every output carries a provenance chain that can be verified independently.
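One common way to make such a chain independently verifiable is hash-linking, where each entry commits to its predecessor; the sketch below assumes that construction, and the entry names (`identity_record`, `authorization`, `generation`) are illustrative.

```python
import hashlib
import json

def append_entry(chain: list, data: dict) -> list:
    """Append a hash-linked entry; each entry commits to the one
    before it, so altering any step breaks every later link."""
    prev = chain[-1]["entry_hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev, "data": data}, sort_keys=True).encode()
    chain.append({"prev": prev, "data": data,
                  "entry_hash": hashlib.sha256(body).hexdigest()})
    return chain

def verify_chain(chain: list) -> bool:
    """Recompute every link from scratch. Anyone can run this,
    with no access to TrustMark's systems."""
    prev = "0" * 64
    for entry in chain:
        body = json.dumps({"prev": prev, "data": entry["data"]},
                          sort_keys=True).encode()
        if entry["prev"] != prev:
            return False
        if entry["entry_hash"] != hashlib.sha256(body).hexdigest():
            return False
        prev = entry["entry_hash"]
    return True

chain = []
append_entry(chain, {"step": "identity_record", "hash": "3f2a"})
append_entry(chain, {"step": "authorization", "use": "film"})
append_entry(chain, {"step": "generation", "model": "example-v1"})
```

Because verification only recomputes hashes, a court, platform, or rights-holder can check the chain without trusting the party that produced the output.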

Statutory grounding

The legal landscape is converging. These are the statutes and precedents that TrustMark operationalizes.

Legislation

Tennessee ELVIS Act (2024)

First US law to explicitly protect voice from AI replication. Covers name, image, likeness, and voice.

California AB 2655 & AB 1836 (2024)

AB 2655 requires large platforms to label or remove deceptive AI-generated election content; AB 1836 extends the right of publicity to digital replicas of deceased performers, protected under California law for 70 years post-mortem.

Illinois BIPA (2008/2024)

Biometric Information Privacy Act. Requires informed consent before collecting biometric identifiers.

EU AI Act (2025–2026)

Article 50 mandates disclosure of AI-generated content. Penalties under the Act reach €35M or 7% of global annual turnover.

Case Law

Midler v. Ford (1988)

Voice and performance style are protectable beyond copyright. Ford hired a Bette Midler sound-alike — the court ruled that distinctive voice is identity.

White v. Samsung (1992)

Identity is protectable beyond name and likeness. Samsung used a robot in Vanna White's pose — the court ruled that the totality of identity carries rights.

Guild Contracts

SAG-AFTRA

AI provisions in the 2023 contract mandate informed consent for digital replicas. No shared infrastructure exists to deliver it.

WGA

Section 5 AI provisions establish that AI-generated material is not "literary material." Consent is required for AI use of existing scripts.

DGA

DGA-AMPTP negotiations include AI identity authorization. No technical standard exists to operationalize it.

The record that makes identity enforceable

TrustMark doesn't claim to own identity. It creates the timestamped, cryptographic record that an identity existed — so when a synthetic version appears, there's a technical answer to “was this authorized?”