Statement of Graham Davies President and Chief Executive Officer
Digital Media Association (DiMA)
Before the Senate Judiciary Committee
Subcommittee on Intellectual Property
Hearing on: “The NO FAKES Act: Protecting Americans from Unauthorized Digital Replicas.”
April 30, 2024
Chairman Coons, Ranking Member Tillis, and Members of the Committee, thank you for the opportunity to testify before you today at this important hearing examining the NO FAKES Act and issues related to unauthorized digital replicas. My name is Graham Davies, and I am the President and CEO of the Digital Media Association, or DiMA. DiMA represents the world’s leading audio streaming companies, whose innovations are the engine that has revitalized the music industry.
We appreciate this Committee’s work to explore issues related to digital replicas in the rapidly evolving AI landscape, in a measured way that takes into account the views of all stakeholders.
I. The importance of safeguarding the music streaming success story
DiMA and its members – Amazon, Apple Music, Feed.fm, Pandora, Spotify, and YouTube – advocate for policies that ensure the continued success of the music streaming economy, where music fans have legal access to music anytime, anywhere they want it, and artists and songwriters can connect with existing fans and make new ones around the world.
DiMA’s members have ushered the music industry into the modern era, empowering creators and returning the record industry to year-over-year growth after years of decline. In 2023, music streaming generated $14.4 billion in the U.S. alone – representing 84% of recorded music revenues.1 Music streaming services contribute enormously to the creator economy. In addition to the substantial royalties they pay – approximately two-thirds of their revenue flows to music rights holders – streaming services are critical to moving consumers from illegally accessing music to using legal, affordable, and innovative services. These services also provide fans with new ways to connect with artists as well as each other – creating a powerful community focused on music. The services we represent have invested in personnel and resources throughout the world to support music creators, working with up-and-coming and established artists alike, and creating opportunities for visibility and engagement. Often, emerging artists get their first big break on streaming services, connecting with audiences and developing fans who would have been unreachable in the record-store era.
Streaming services are central to the music ecosystem, and their success is built on a foundation of strong working relationships with rights owners, intermediaries, and partners. The industry operates on global supply chains and agreements that enable the distribution of tens of millions of recordings to listeners around the world. This is a complex process that involves precise technical specifications and business practices governing staggering volumes of data, including recordings, album art and other visual information, lyrics, the metadata that accompanies those recordings, and the usage reports provided by the services to a wide variety of rights owners in a wide variety of formats. Relevant to today’s hearing, rights owners and services have established robust processes within this complex, finely tuned supply chain to address and, where appropriate, remove content that violates the rights of third parties or is otherwise harmful. A well-functioning digital music supply chain is critical to the success of the music industry as a whole. For this success story to continue, it is imperative that the work of streaming services be bolstered for the benefit of music rights owners, music creators, and music fans.
II. Responding appropriately to the risks and opportunities of Artificial Intelligence
The use and impact of artificial intelligence, and how AI intersects with existing law, are important areas of focus for all music industry stakeholders, including DiMA and our member companies. AI has been used as a tool in the music industry for many years, and as the technology continues to rapidly evolve, it can assist creators and artists – including musicians, producers, and songwriters – and improve the way music is created, distributed, discovered, and consumed.
A healthy music ecosystem is one where consumers have legal access to the best content creators can devise. Creativity is based on the relentless pursuit of ideas and has always incorporated the latest that technology can offer. As we address the impacts of AI technology, we must ensure that freedom of expression and creativity can be fostered, while protecting against deliberate harmful acts.
DiMA members understand that AI technology, particularly generative AI, can raise questions around the integrity of an individual’s likeness and voice that have particular resonance in the music industry. Digital service providers have nothing to gain from deceptive music in their supply chain. Those who would falsely capitalize on the creative identity and expression of the artists their customers love should be held accountable. We therefore believe there should be appropriate safeguards to protect an individual’s personhood (name, image, likeness, voice) in this context. However, it is vital that any new law set clear, appropriate bounds for all parties involved, in order to support innovation and adapt to future changes in technology while ensuring individuals can protect their personhood.
Notably, we are not seeing an epidemic of AI-generated voice clones climbing the charts. To the contrary, the most famous example of an AI-generated voice-clone track is still the song “Heart on My Sleeve” – a purported collaboration between Drake and The Weeknd. The so-called ‘Fake Drake’ track was released more than a year ago and, since that time, no successor has achieved nearly that level of attention. Critically, as a result of the close working relationships between digital music services and rights holders, the track was quickly taken down from streaming services. It is important to recognize that the removal of this content was done in a timely and effective way under the current legal regime and existing business relationships.
I do not intend to suggest that the potential harm of voice cloning is not a real concern. It certainly is, and I have no doubt that you will hear many examples today. However, the responses to the development of AI technology should be proportional to the scale of the issue, and should not inhibit future legitimate uses, such as parody or satire. AI-generated content is not having a negative impact on the revenue streams generated from streaming services, nor is it interfering with the continued growth of the industry, and it is imperative to ensure any new reforms do not disrupt the successful streaming economy.
III. DiMA principles for new legislation
DiMA supports appropriate safeguards to protect an individual’s personhood and is committed to working toward federal solutions that afford such protections in the age of AI. But it is critical to ensure that any such protections do not have a chilling effect on creative freedoms or interfere with or disrupt the thriving digital music supply chain. It is also critical that any new legislation clarifies and simplifies the law, instead of adding to the current morass of conflicting and overlapping state regulations in this area. While any successful legislation must reflect input from across the industry, DiMA believes it must satisfy the following criteria:
First: Legislation should be based on existing rights of privacy and publicity.
Any new legislation should be narrowly tailored to address the particular risks that are at issue. Specifically, legislation should be designed to protect the elements of personhood and right of performance while not encroaching on accepted and legitimate uses of technology. As other stakeholders have noted, there is a wide array of legitimate and constitutionally protected uses of digital replicas, and any new protection should be designed to steer clear of those protected uses. Importantly, intellectual property law – including, in particular, copyright law – is not the best foundation for fashioning any new protections. Intellectual property laws provide economic incentives to create new works and devise new innovations. This construct does not readily translate to the realm of protecting personhood and right of performance, where a voice or visage already exists without any such inducement, and the goal is to protect those elements from misappropriation. Instead, any legislation aimed at protecting individuals from unauthorized digital replicas should be grounded in existing rights of privacy and publicity, which have developed over time to protect an individual’s identity against misappropriation.
Second: Legislation should not disrupt important existing commercial relationships.
Any legislation in this area must recognize and take account of the fact that there are already extensive, long-standing commercial relationships between rights owners and music streaming services. The supply chain from rights owners to streaming services is complex, having developed over the course of the digital age to support the massive scale at the heart of streaming’s economic success. Rights holders and streaming services alike have made significant investments in infrastructure and industry standards to enable the automated, real-time delivery of music to distributors, the ingestion of that music into the disparate technological systems services deploy, and the nearly instantaneous availability of that music and all the important information that goes with it to listeners around the world. With over 100 million songs available on major music streaming services and millions added each month, the function and efficiency of the music supply chain cannot be taken for granted and can be easily disrupted by new requirements. Importantly, as business partners, digital music services and rights holders may have processes already in place for identifying content that may be infringing (or otherwise unlawful or inappropriate) and assessing whether the content should be removed. Any new law should tread carefully to avoid disruption to this crucial supply chain.
Third: Legislation should focus liability on the original content creator.
The primary targets of any future claims brought under a new federal right should be the individuals or organizations that create the violative content. That is both the fairest approach – liability should rest with the person who intended to cause the harm – and the best way to ensure that only illegitimate content is targeted and removed, because the originator will be in the best position to defend the replica.
By the same token, it is imperative that downstream distributors of potentially violative content do not face liability for making it available in the ordinary course of business, except in the most extreme circumstances. If legitimate digital music services faced the same risk of liability for misappropriated personhood as the original creator of the content, they would have a substantial incentive to remove all content at the slightest sign of a dispute or question over authenticity, even if the content were constitutionally protected or otherwise lawful. Limiting the liability of service providers safeguards against the risk of chilling effects on freedom of expression and First Amendment rights – principles that have been essential to cultivating the rich heritage of American music.
Fourth: Legislation should be designed to cover all types of content, not just sound recordings and motion pictures.
Any digital replica legislation should deal with image and likeness protection comprehensively, covering audio works, audiovisual works, and purely visual works. This means the legislation should reflect input from all stakeholders involved in the creation of any type of content. This is important to ensure the continuation of a well-functioning market and to realize the policy goals of the legislation, as well as to make the law more likely to withstand the strict scrutiny that content-based restrictions on speech must undergo.
Fifth: Legislation must preempt the patchwork of state laws.
Current rights of publicity are subject to an inconsistent and confusing patchwork of state laws, instead of a set of uniform national rules. This problem has only been exacerbated in the age of AI, as industry groups are actively pursuing new legislation in states throughout the country at a rapid pace, often without careful deliberation of unintended consequences. This began with the introduction and prompt adoption of the ELVIS Act in Tennessee and is being pursued in one state after another. The music industry does not operate on a state-by-state basis. The advances that music streaming has brought in developing a truly global industry mean that these are not local issues; rather, AI is a global phenomenon, and the rights at issue affect global companies. If Congress acts, it should create a single, uniform, national rule – consistent with the principles articulated above – instead of permitting the proliferation of varying and sometimes conflicting state laws. The success of any legislation in this area will be measured by the extent to which it establishes a comprehensive and effective framework to which others around the world will look for guidance and as a model.
IV. Specific issues with the NO FAKES Act discussion draft
We appreciate the Committee’s work towards a solution to address these novel and challenging issues. The discussion draft of the NO FAKES Act represents an important effort. As the authors of that legislation noted, the draft was meant to foster further conversation among stakeholders. We appreciate the opportunity to have this discussion, and the recognition that these are complicated issues that warrant careful consideration. But as invested stakeholders in this process, DiMA’s members believe that the NO FAKES Act, as currently contemplated, strays from the principles we set forth above. Specifically, I want to draw attention to three main issues:
1: The draft is not narrowly tailored to the risks presented
The draft legislation is not narrowly tailored to the problem it seeks to address – namely, the challenge of preventing harmful misuse of a person’s likeness or voice in the age of generative AI. Rather than precisely targeting the kind of conduct that has caused understandable concern, it attempts to create an entirely new species of intellectual property right out of whole cloth.
As I previously mentioned, we do not believe that intellectual property is the right framework for this kind of legislation. And the NO FAKES Act would create a “right” that goes far beyond any other IP right that has previously been recognized. Intellectual property rights, such as copyrights and patents, are specific (and limited) exclusive rights intended to stimulate investment in the creation of new works and inventions, ultimately to promote the public interest. Those works pass into the public domain and enrich our collective cultural heritage.
Name, image, likeness, and voice rights have an entirely different origin and purpose, grounded in personal privacy rights. Most obviously, there is no need to create economic incentives for the creation of a person’s identity. Nor should a person’s immutable characteristics – indeed their very personhood – be considered a commodity to be marketed, bought, sold, divested in bankruptcy, or seized by creditors. Indeed, if the goal of the legislation is to protect an individual’s interest in preventing inappropriate uses of their likeness or voice, treating personhood as transferable intellectual property will be counterproductive. Once a person has sold off their name, image, likeness, or voice rights, they lose the right to object to uses ‘authorized’ by the new owner of those rights, even if they find those uses objectionable. In other words, we are concerned that the “intellectual property” approach taken by the NO FAKES Act is likely to make offensive uses of name, image, likeness, and voice by those who have acquired broad rights to them more common, not less.
2: The scope of liability in NO FAKES is too broad
The NO FAKES Act improperly expands the scope of liability in a manner that runs headlong into First Amendment concerns. The NO FAKES Act, like all right-of-publicity laws, is a content-based restriction on speech. The First Amendment protects freedom of speech, and where state laws of this kind exist, they have been narrowly tailored to serve compelling state interests while protecting free speech.
State right of publicity laws are traditionally aimed at instances of commercial exploitation, consumer confusion, or unfair competition – in other words, economic harms. Those state laws, in addition, typically carve out specific categories of protected content, further narrowing their scope. The NO FAKES Act, however, is not so limited. In fact, it makes unlawful any knowing creation or transmission of an unauthorized digital replica, regardless of economic harm. The exceptions for protected conduct that it includes do not, by themselves, make the prohibition “narrowly tailored” (as is constitutionally required) to the legitimate purpose of protecting an individual’s identity – and those exceptions would not cover all potentially protected uses.
Moreover, the damages provision of the NO FAKES Act is unprecedented in nature and scope. While state laws permit recovery of the actual economic injury suffered by the individual, the NO FAKES Act entirely does away with the requirement to show injury, instead authorizing statutory damages of $5,000 per violation. There is no basis for creating a statutory damages regime for unauthorized digital replicas; as with right of publicity and defamation torts under state law, actual damages and injunctive relief are sufficient to prevent unlawful activity. Additionally, the statute is ambiguous as to whether that provision is meant to apply per transmission or per unauthorized digital replica. If the former, it could expose a company to potentially ruinous liability based merely on the posting of a single digital replica that gets viewed many times: at $5,000 per transmission, a single replica streamed one million times would generate $5 billion in claimed statutory damages. Such an unbounded damages regime is hardly “narrowly tailored” to serve compelling state interests.
Similarly, the NO FAKES Act appears to sweep a broad range of downstream activities within its scope. As I explained before, liability for unauthorized digital replicas should be assigned directly to the creator of the violative content, not to parties downstream in the chain of commerce who unknowingly transmit that content. But the NO FAKES Act would not merely render unlawful the “production” of digital replicas; it would broadly sweep in secondary actors that merely “transmit” a digital replica, subjecting them to the full range of remedies that can be levied against the actual bad actors. As currently drafted, NO FAKES seeks to punish good and bad actors alike.
There is little precedent for this approach in the law. Unlike in copyright law, there is no well-developed or widely recognized regime for secondary liability based on violations of rights of publicity. The NO FAKES Act attempts to craft such a regime out of whole cloth, but, as drafted, it is not workable. Most critically, while the bill purports to limit secondary liability to those services that distribute or make content available with “knowledge” that the content is an unauthorized digital replica, that is still an amorphous standard that is likely to invite significant litigation and attendant business uncertainty. For example, in the context of the music industry, profound and pervasive metadata challenges make it difficult for services to know who owns what content; in fact, much of the music industry’s infrastructure is organized around the fact that rights owners often do not know what they own or control. This problem is even more acute with respect to digital replicas; there is no practical way for services to know what was created using generative AI, much less whether those works were created with the requisite consent, fall into an enumerated exception, or are otherwise permissible.
Moreover, music streaming services do not need an additional incentive to remove illegal content. We have seen that parties throughout the supply chain have been quick to act, in the context of existing commercial terms and relationships, to address the issues that have arisen to date.
By placing a significant risk of liability on downstream services that have no involvement in the creation of the offending content, the bill as currently drafted would incentivize services to overly restrict or remove constitutionally protected and otherwise lawful content. Faced with unbounded liability under uncertain legal standards, services would likely over-screen or censor content significantly – threatening free speech, creative freedom (and revenue for legitimate work), and consumer choice.
Accordingly, if the Committee intends to adopt a secondary liability regime for improper uses of individuals’ appearance or voice, it must put in place appropriate safeguards to ensure that any regime is targeted at truly objectionable conduct. For instance, any such liability should be premised on a refusal to remove or disable violative content once the service has actual knowledge of the specific violation. And “actual knowledge” must be carefully defined: notices identifying infringing content are often incorrect (as a matter of fact and/or law) even where they purport to contain all of the necessary information. Even where a notice is correct and complete on the facts, a distributor may have good reason to think that a First Amendment exception applies, or consent has been obtained. Merely receiving a notice alleging a violation does not, and must not as a matter of law, establish actual knowledge of an unauthorized digital replica.2
Finally, while a take-down notice identifying content cannot be sufficient, in and of itself, to establish such knowledge for purposes of liability, there should be a clear and straightforward safe harbor against liability where the service responds promptly to a specific take-down notice regarding the content. Likewise, service providers should be immune from liability for the replacement of content where a proper counter-notice procedure is followed.
3: The draft does not solve the “patchwork” approach to rights of publicity
Finally, the discussion draft does not address the problem of the patchwork of state laws. To the contrary, the discussion draft does the opposite, by explicitly declining to preempt any law that “provides protection against the unauthorized use of the image, voice, or visual likeness of the individual.” That is a step in the wrong direction and will only encourage further fragmentation and confusion among the various state and federal laws, rather than reflecting the careful policymaking that Congress is well positioned to undertake. This is a national challenge, and it calls for leadership to devise a national solution.
1 See RIAA, Year-End 2023 RIAA Statistics, available at https://www.riaa.com/wp-content/uploads/2024/03/2023-Year-End-Revenue-Statistics.pdf.
2 Among other things, any notice-and-takedown system is open to the risk of abusive and improper notices. Simply receiving a notice cannot be enough to establish knowledge of a potential violation.
* * *
DiMA and its members thank the Subcommittee for its time and focus on this important issue. DiMA supports the effort to create stronger protections against misappropriation of personhood in the age of AI and is encouraged by the process to date. We look forward to working with policymakers and industry stakeholders to advance solutions that protect creators’ personhood, the ability of streaming services and the broader music industry to innovate, and every American’s creative speech and First Amendment rights.