Sting on AI
Sting has expressed concern over the increasing use of artificial intelligence in songwriting. He believes musicians face a battle to defend their work against AI-generated songs, insisting that the building blocks of music belong to humans. His comments follow several tracks that used AI to clone famous artists’ vocals, such as David Guetta’s use of an imitation of Eminem’s voice and a faked duet between Drake and The Weeknd. Sting compared computer-generated music to CGI in movies: he gets bored watching computer-generated imagery, and he imagines he would feel the same way about AI-created music.
In response to the rise of AI-written music, the recording industry has formed the “Human Artistry Campaign,” which warns that AI companies are violating copyright laws by training their software on commercially released music. While the question of whether AI-generated music can be copyrighted is still under debate, some artists view the technology as potentially helpful. Pet Shop Boys frontman Neil Tennant, for example, suggested that AI could help musicians overcome writer’s block. Sting agreed with Tennant’s observation but emphasized that humans must remain in control of these tools and not allow machines to take over completely. 1
Tom Hanks on AI
Tom Hanks has discussed the possibility of his career extending beyond his lifetime with the help of artificial intelligence. He said advances in AI could be used to recreate his image, ensuring his continued presence in films “from now until kingdom come.” However, Hanks acknowledges that these developments present artistic and legal challenges. He first glimpsed the technology’s potential during the production of The Polar Express in 2004, which used digital motion-capture animation; the capabilities of AI have grown dramatically since then.
In response to these technological advancements, discussions are being held within the film industry to determine how best to protect actors from potential negative effects. Guilds, agencies and legal firms are working together to establish guidelines for intellectual property rights regarding faces and voices. This development in AI allows actors like Hanks to appear as their younger selves in movies indefinitely. While this offers new creative opportunities, it also presents moral dilemmas surrounding authenticity and consent for AI-generated appearances. Similar technology has already been used in films such as the latest Indiana Jones installment, where Harrison Ford was digitally “de-aged” for an opening sequence. 2
ChatGPT is now officially available on iPhones
ChatGPT has officially made its debut as a smartphone app on iPhones, a development that could be both good and bad news for users. As a highly anticipated application, it offers an ad-free experience for users to interact with the artificial intelligence chatbot. The free app became available in the U.S. on Thursday for iPhones and iPads, and will later come to Android devices. The mobile version on Apple’s iOS operating system is even more versatile than its web counterpart, allowing users to ask questions using their voice.
However, the delay in bringing ChatGPT to phones gave rise to clones built on similar technology. Security firm Sophos has described some of these copycats as “fleeceware” for their deceptive free trials and intrusive advertising tactics. OpenAI’s official ChatGPT app may eventually starve these clones of new users, but those who have already downloaded a clone are likely to keep using it, unknowingly having their personal data harvested and sold. The launch underscores the importance of user awareness when choosing AI chatbot applications and highlights the broader problem of copycat apps in the market. 3
U.S. legislators seek to establish an AI regulator
At a recent Senate Judiciary subcommittee hearing, senators from both parties and OpenAI CEO Sam Altman expressed the need for a new federal agency dedicated to regulating AI. The urgency to protect people’s rights against potential AI-related harm has increased since the release of OpenAI’s ChatGPT last November. While several US federal agencies already regulate AI use, some senators argue that these existing agencies cannot keep up with the rapid pace of technological change. The proposed AI regulator would have the power to grant or revoke licenses for creating AI above a certain threshold of capability.
During the hearing, various regulatory responses were discussed, such as requiring public documentation of AI systems’ limitations or datasets used in their creation, akin to an “AI nutrition label.” Additionally, lawmakers and industry witnesses called for mandatory disclosure when people engage with a language model instead of a human or when AI technology makes critical decisions with life-changing consequences. Despite growing interest from governments and tech insiders in establishing guardrails for AI development, some argue against creating a new regulator for AI and suggest updating existing laws and allowing federal agencies to incorporate this oversight within their current regulatory work. 4
A Fanfic Sex Trope and AI
The rise of generative AI has prompted concerns about legal rights and plagiarism among artists whose work is scraped by these tools. Writers, for example, worry about their creations being exploited by AI. One interesting case involves a sex trope called “the Omegaverse,” known mainly to a tight-knit online community of fan-fiction writers. The Omegaverse is an act of collective sexual worldbuilding that originated in the Supernatural TV series fandom, and its unique terms and phrases make it ideal for testing how generative AI systems scrape the web.
When given specific Omegaverse-related words or questions, Sudowrite (a writing tool built on OpenAI’s GPT-3) demonstrated knowledge of the trope, suggesting that it had learned about the Omegaverse from fan-fiction sites such as Archive of Our Own. Fan-fiction writers were displeased to discover that their non-commercial work had been used to train AI systems like Sudowrite and others built on GPT-3. As arguments over ownership and moral commitments continue to grow between writers and AI developers, finding a harmonious solution becomes increasingly challenging. 5