AI News – May 9, 2023

Voice Banking: Acapela’s “My Own Voice”

Acapela Group, a veteran of the text-to-speech industry, has introduced its free “My Own Voice” service, which lets anyone create a synthetic version of their own voice. Acapela co-founder Remy Cadic noted that customizing synthetic voices used to be both time-consuming and poor in quality; with the advent of consumer-scale machine learning, individuals can now train an AI voice profile from just 50 short sentences in roughly 10 minutes.

The recording and banking process is free, and the resulting synthetic voice can be downloaded for a fee to use on any compatible speech-generation system. The service is especially beneficial for people facing degenerative conditions, cancer, or certain procedures, who know that they may lose their ability to speak well or at all in the future. Acapela also customizes the process for children, and the quality of their synthetic voices has been significantly improved. The company has emphasized the importance of diversity and thoughtfulness in the training process and is developing solutions for users with different backgrounds and disabilities.1

“Godfather of AI” Geoffrey Hinton Expresses Concerns and Resigns from Google

Geoffrey Hinton, a renowned artificial intelligence (AI) researcher, has left his position at Google to openly discuss the potential dangers of AI. Hinton, who won the Turing Award in 2018 for his work on the foundations of AI, believes that the fast pace of AI development poses a risk to society. He warned that generative AI products, which create false text, images, and videos, could lead to a situation where the truth becomes unclear. Additionally, Hinton expressed concerns about the impact of AI on the job market, as machines could eventually replace human roles.

Hinton’s concerns are not unfounded: AI has already been used to create deepfakes and to automate work, leading to job losses. Still, some experts believe AI can benefit society by detecting diseases, flagging fraud, and reducing traffic accidents. To ensure AI is developed responsibly and ethically, many organizations have published guidelines. Hinton’s resignation from Google, and the concerns behind it, highlight the need for AI development that minimizes its risks and maximizes its benefits.2

The Need for a More Cautious and Inclusive Approach to AI Development

The development of artificial intelligence (AI) is progressing at an alarming pace, prompting concerns about its potential misuse and unintended consequences. Many people, from bad actors to the engineers and politicians tasked with realizing AI’s potential, exhibit a misplaced certainty about what AI can achieve and how it can be controlled, based on flawed assumptions. Despite broad recognition that more work is needed to develop ethical AI, there is little consideration of whether ethical AI is even a technical possibility. Furthermore, the notion of “unbiased” AI is pure fiction, and the questions of what constitutes an acceptable bias and who should have the authority to decide remain largely unaddressed. Calls for anti-woke AI from certain quarters also raise the question of who should determine what AI is allowed to say and do. A more cautious, humanistic approach is required, one that looks beyond the tech realm for solutions to the problems posed by AI.

To achieve this, a wide diversity of voices, particularly those most likely to be imperilled by AI, must be included in the conversation. The predominant assumptions of AI policy reflect the views of only a narrow set of stakeholders, neglecting the interests of those most likely to be harmed, regardless of whether the AI is superintelligent or not. It is essential to promote a culture of admitting uncertainty and opening up the conversation to a more diverse range of stakeholders to recalibrate how we talk about AI. These are urgent questions that require consideration by a wider audience, especially historically marginalized groups most likely to be impacted by the misuse of AI.3

Amazon Boosts Podcasts with Snackable AI Acquisition

Last December, Amazon acquired Estonia-based Snackable AI, an audio content discovery engine that specializes in using AI to add metadata and structure to video and audio, to enhance its podcast features. The terms of the deal were not disclosed, but Amazon confirmed that the Snackable AI team joined Amazon Music to work on existing podcast projects. Snackable AI founder and CEO Mari Joller is now an AI and machine learning product leader at Amazon Music, where she leads a team of engineers, applied scientists, and computational linguists to build AI-powered products for Amazon Music Podcasts’ customers. Amazon added podcasts to its Music platform in 2020 and has since been building out the offering and adding new features, such as synched transcripts. The move is seen as part of Amazon’s effort to compete with other music services like Apple Music and Spotify.4

DeepFloyd IF Integrates Legible Text into Images

DeepFloyd has unveiled its text-to-image model, DeepFloyd IF, which can integrate legible text into images, generating pictures from prompts such as “a teddy bear wearing a shirt that reads ‘Deep Floyd’” in multiple styles. Unlike other models, it uses a large language model to understand and represent prompts as a vector, making it suitable for text-heavy designs such as logos, web design, posters, billboards, and memes. It performs diffusion in several stages: generating a 64×64px image, then upscaling it to 256×256px, and finally to 1024×1024px. Because it works directly with pixels rather than a latent space, it is also better at rendering hands and spatial relationships. However, the model’s ability to generate legible text in images raises concerns about its potential to produce harmful content, and its creators note that it is biased towards western and white cultures.5
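The three-stage pixel cascade described above (a 64px base image followed by two 4× upscales) can be illustrated with a minimal sketch; `cascade_stages` is a hypothetical helper written for this article, not part of DeepFloyd’s code:

```python
# Illustrative sketch of DeepFloyd IF's cascaded resolution schedule,
# as described above: a 64x64 base image upscaled twice by a factor of 4.
def cascade_stages(base=64, factors=(4, 4)):
    sizes = [base]
    for f in factors:
        sizes.append(sizes[-1] * f)  # each stage upscales the previous output
    return sizes

print(cascade_stages())  # [64, 256, 1024]
```

Each stage runs its own diffusion pass on the output of the previous one, which is why the model can stay in pixel space throughout the pipeline.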

White House Urges Big Tech Bosses to Protect Public from AI Risks and Considers Further Regulation

On Thursday, tech industry leaders including Sundar Pichai of Google, Satya Nadella of Microsoft, and Sam Altman of OpenAI were summoned to the White House and told that they have a “moral” responsibility to protect society from the potential dangers of artificial intelligence (AI). The administration warned that it may impose further regulation on the sector if necessary. There are concerns that AI could replace people’s jobs, spread misinformation, flout copyright laws, and exacerbate fraud. However, advocates like Bill Gates argue that instead of a “pause,” the focus should be on how best to use the developments in AI.6

Illustrator Withdraws from Bradford Literature Festival Due to AI-Generated Publicity Images

Bradford Literature Festival has faced controversy after book illustrator Chris Mould pulled out of the event when he learned that AI was used to create its publicity images. Mould, who was due to lead a masterclass, said he couldn’t “tell people they can go to art school” while under the same roof as AI-generated artwork. Other speakers including Sir Lenny Henry, Sir Michael Palin, and Lemn Sissay are still due to attend. Nicola Solomon, CEO of The Society of Authors, also expressed concerns and received a “constructive response” from the festival’s director. The festival admitted it “should have been more explicit” about the use of AI.7

  1. []
  2. []
  3. []
  4. []
  5. []
  6. []
  7. []
