Scarlett Johansson Said No to Voicing ChatGPT, But It Sounds Like OpenAI Did It Anyway

OpenAI has pulled the Sky voice option from its popular ChatGPT generative AI app. If you’ve been using ChatGPT, you know the voice, and you’ve likely heard people say it sounds eerily similar to actor Scarlett Johansson. The news comes just as Johansson has released a statement saying, “Last September, I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system. He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI.”

Despite declining this offer for personal reasons, Johansson later discovered that the ChatGPT voice named “Sky” sounded strikingly like her, leading to widespread confusion among her friends, family, and the public. “When I heard the released demo, I was shocked, angered, and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference,” Johansson said. Her dismay was compounded by the suggestion that the resemblance was not coincidental, highlighted by Altman’s cryptic tweet of the single word “her,” a direct nod to her role in the film Her, in which she voiced an AI that forms a deep connection with a human.

AI systems generate output based on the data they were trained on. That makes it plausible that OpenAI used Johansson’s voice, or something modeled on it, for the Sky voice, whether or not the company admits it was intentional. Or perhaps OpenAI expected her to sign the deal she mentions in her statement, then shipped the product without her signature and hoped no one would notice. Either way, this raises serious intellectual property concerns, as it suggests that her voice, an identifiable personal attribute, was replicated without consent.

When an AI like ChatGPT mimics a specific individual’s voice, it raises the question: where did the AI learn this voice, and was consent given? Intellectual property and publicity laws are designed to protect identifiable personal attributes, including a person’s voice, from unauthorized use, potentially making OpenAI’s actions, as alleged by Johansson, a legal violation. We should be asking what sort of data AI companies are using, and what sort of audits are being done on these vast datasets. Is a practice ethical just because a company claims it is? In this new age, where are the lines between legal and illegal?

As this unfolds, we’ll be keenly watching the next steps each party takes. What will OpenAI say in response to these allegations? How will Scarlett Johansson continue to address what she sees as a misuse of her personal likeness? And, importantly, will this incident prove to be a landmark in defining the boundaries of ethical AI use and the legal frameworks that govern artificial intelligence? As we grapple with the rapid advancement of AI, the outcome of this dispute may set a crucial precedent for how personal attributes are treated in the digital age, ensuring that innovation does not come at the expense of individual rights.
