It seems lately that every day provides an opportunity to consider AI and the ethics around it.
Last week, for example, I learned that a client who has moved into an AI-related business was using a portion of a past French narration of mine in a presentation demonstrating their AI's ability to switch languages, so that a presentation could be instantly adapted and localized.
The client neither asked my permission nor sought to compensate me for using my intellectual property – my voice – for a new use of a long-ago recording.
This was one of two things: an unfortunate decision born of ignorance that my voice is my intellectual property, or a deliberate decision based on the assumption that I'd likely never hear about it, and what was I going to do about it anyway? Understand that this client is using this narration to sell their technology, so essentially to profit from it without compensating the voice actors involved. Again, the benefit of the doubt allows that this is a whole new frontier and that it was done with no ill intent. However, independent of intent, theft is theft.
A couple of things crossed my mind:
1 - The successful legal battle that Canadian peer Bev Standing was forced into to have TikTok remove her voice from the platform and compensate her appropriately. TikTok had acquired audio files from another company that had recorded Bev's voice; that company sold the files to TikTok without Bev's knowledge or consent. TikTok then used her voice, a voice actor's intellectual property, to create new intellectual property for other clients without compensating her. The eventual settlement and resolution resulted from the hard work of a US attorney who rightly pointed out that the platform had illegally profited from the unauthorized use of her voice.
2 - That directly led attorney Robert Sciglimpaglia Jr to work with an American industry association called NAVA to craft an AI rider that freelance voice actors could add to their contracts when working with clients.
It’s important to note here that most freelance voice actors working without a union contract go to great pains to vet their clients and essentially work on the principle of a virtual handshake, assuming that agreed-upon use terms, etc., will be respected.
So I'd like to share the AI rider, now attached to my terms of use and used by many actors across North America and Europe. Please note the guidance to voice actors (shown in red in the original rider document). The association added those notes to make actors aware that although these terms are designed to clearly delineate the voice actor's IP rights, the rider as a whole is intended, with very few exceptions, to be part of a negotiation. So there is flexibility for the terms to be adjusted on a case-by-case basis.
PLEASE NOTE:
1 - This is not a “take it or leave it” Rider. The client may have issues or want to change some of the language. Changing language is normal for any contract. If they want to change it and you don’t know what to do, or if you aren’t sure it’s already covered in their agreement, please contact NAVA, and we will refer you to an Attorney to review the rider and any requested changes.
ARTIFICIAL INTELLIGENCE RIDER
THIS RIDER is attached to the Agreement dated ___________________ between the parties _______________ (Talent) and __________________________ (Client) and is intended to replace and supersede any conflicting language in that Agreement. (This sentence means that if something is different in the original agreement, then this agreement will control in Court.)
1 - Client expressly agrees not to utilize any portion of the Talent’s file, recording or performance of Talent for purposes other than those specified in the initial Agreement between the parties, including but not limited to creation of synthetic or “cloned” voices or for machine learning.
2 - Specifically, Client shall not utilize any recording or performance of Talent to simulate Talent’s voice or likeness, or to create any synthesized or “digital double” voice or likeness of Talent.
Most clients agree with the language in Paragraphs 1 and 2. If they push back on these, then they may be planning on using the recordings for voice cloning or AI, so be sure to ask for more info about usage.
3 - Client specifically agrees not to sell or transfer ownership to all or part of any of the original files recording the performance of Talent to any third party for purposes of using the files for Artificial Intelligence, such as text to speech, or speech to speech uses, without Talent’s knowledge and consent.
4 - Client agrees not to enter into any agreements or contracts on behalf of Talent which utilizes all or any part of any of the original files recording the performance of Talent for purposes of using the files for Artificial Intelligence, such as text to speech, or speech to speech uses, without Talent’s knowledge and consent.
Some clients may think you are trying to restrict how they sell or use the end product after production with this paragraph, but that is not the intent. Paragraphs 3 and 4 say that the client won’t sell or transfer your original recordings so that your voice can’t be cloned. If the client wants to use the files to fix or add something in this particular job that they hired you for, that is not prevented by these paragraphs.
5 - Client agrees to use good faith efforts to prevent any files of recordings or performances stored in digital format containing Talent’s voice or likeness from unauthorized access by third parties, and if such files are stored in “the cloud” Client agrees to utilize services that offer safeguards through encryption or other “up-to-date” technological means from unauthorized third-party access.
This is self-explanatory, and it is not a be-all and end-all in case a hacker gets hold of your files. It just says the client will use their best efforts to prevent that instead of doing nothing.
This rider is available to members and non-members of NAVA at this link: https://navavoices.org/synth-ai/ai-voice-actor-resources/#nava-synth-ai-rider.
One of the recommended uses of this rider is to approach past clients and have an agreement in place to govern the use of past narrations. This would be particularly applicable to voice actors working in fields being cannibalized by AI, unethically or otherwise.
Off the top of my head, I can think of IVR/telephony, eLearning companies, and corporate narration/explainer video companies.
I will be following up with my client about that long-ago narration. I expect a goodwill, win-win resolution, which could include anything from a commitment not to use the narration anymore to, ideally, compensation for the current use and a written commitment to compensate me for each and every instance my voice is cloned and used for a new project. I’ll post an amendment to this post with the conclusion of that conversation at a later date.