AI April 26, 2023
Grimes Will Split Profits 50% On AI Songs Using Her Voice
Canadian musician Grimes says she would share 50% of profits on “any successful artificial intelligence-generated song” using her voice.
Her announcement comes in the wake of a recent controversy over an unauthorized song featuring AI-generated vocals of Drake and The Weeknd, which went viral earlier this April.
“I’ll split 50% royalties on any successful AI generated song that uses my voice,” Grimes posted on Twitter. “Feel free to use my voice without penalty,” she said, adding she has “no label and no legal bindings.”
The decision could mean that Grimes is letting AI creators do all the work while she sits back and collects 50% of the royalties, according to some observers.
I'll split 50% royalties on any successful AI generated song that uses my voice. Same deal as I would with any artist i collab with. Feel free to use my voice without penalty. I have no label and no legal bindings. pic.twitter.com/KIY60B5uqt
— 𝔊𝔯𝔦𝔪𝔢𝔰 (@Grimezsz) April 24, 2023
Grimes won’t allow “toxic lyrics”
Claire Elise Boucher, better known professionally as Grimes, is a musician, singer, songwriter, and record producer. The 35-year-old is no stranger to the AI and tech cultural scene, having written music about science fiction and tweeted about AI from time to time.
Grimes is perhaps best known for her relationship with Tesla billionaire Elon Musk, with whom she has two children. Ironically, Musk recently added his signature to a letter calling for a 6-month moratorium on the development of artificial intelligence systems.
But Grimes is going all in on AI. “I think it’s cool to be fused w a machine and I like the idea of open sourcing all art and killing copyright,” she tweeted. She also revealed that she was working on “a program that should simulate my voice well.”
In a follow-up tweet, she announced that her vocals can be accessed through elf.tech and that she will be using smart contracts – self-executing agreements recorded on a blockchain – to split royalties with AI creators.
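The arithmetic behind such a split is simple. The sketch below is a purely hypothetical Python illustration of 50/50 royalty accounting; Grimes has not published any contract code, so the function and field names here are assumptions, and a real implementation would live in an on-chain smart contract rather than in off-chain code like this.

```python
# Hypothetical illustration only: a 50/50 royalty split between the voice
# owner and the AI creator. Not Grimes's actual system, which is unpublished.

def split_royalties(payment_cents: int, artist_share: float = 0.5) -> dict:
    """Split a royalty payment; the creator gets the remainder to avoid rounding loss."""
    artist_cut = round(payment_cents * artist_share)
    creator_cut = payment_cents - artist_cut
    return {"voice_owner": artist_cut, "ai_creator": creator_cut}

# Example: a $1,000.00 payout splits into $500.00 for each party.
print(split_royalties(100_000))  # {'voice_owner': 50000, 'ai_creator': 50000}
```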
Grimes said she is working with her team on a system that can collect and pay out royalties to creators. She also said she may issue copyright takedowns against songs that put “toxic lyrics” in her voice.
“[I] don’t wanna be responsible for a Nazi anthem unless it’s somehow in jest a la producers I guess,” Grimes tweeted.
“We expect a certain amount of chaos. Grimes is an art project, not a music project. The ultimate goal has always been to push boundaries rather than have a nice song. The point is to poke holes in the simulation and see what happens even if it’s a bad outcome for us.”
AI copyright wars
Over the past year, the ability to synthesize a human voice using deep learning models has become far more accessible, thanks to open-source models like Tortoise-tts and commercial services such as ElevenLabs.
Some programs can now clone a person’s voice from a three-second audio sample. Other applications include voice transformation software and Disney’s use of AI to synthesize Darth Vader’s voice.
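To give a sense of how low the barrier has become, the sketch below follows the published usage pattern of the open-source Tortoise-tts project. It is a rough outline rather than a verified recipe: the voice folder name is a placeholder, and exact arguments may differ between versions.

```python
# Rough sketch of voice cloning with the open-source Tortoise-tts project,
# following its documented usage pattern. "myvoice" is a placeholder for a
# folder of short reference clips of the target speaker.
import torchaudio
from tortoise.api import TextToSpeech
from tortoise.utils.audio import load_voice

tts = TextToSpeech()

# Load conditioning clips for the target speaker.
voice_samples, conditioning_latents = load_voice("myvoice")

# Generate speech in the cloned voice and write it to a WAV file.
gen = tts.tts_with_preset(
    "A short sentence rendered in the cloned voice.",
    voice_samples=voice_samples,
    conditioning_latents=conditioning_latents,
    preset="fast",
)
torchaudio.save("generated.wav", gen.squeeze(0).cpu(), 24000)
```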
Also read: US Copyright Office Says You Can’t Copyright AI-generated Images
But there is worldwide concern over issues of privacy and intellectual property for individuals and creators of the data upon which AI models are trained.
In February, Getty Images sued the artificial intelligence start-up Stability AI for training its Stable Diffusion image-generating system on 12 million Getty photos, which the agency claimed diluted the value of its images.
And earlier this month, a song generated using voice models of Drake and The Weeknd went viral, but was immediately removed from streaming platforms over issues of copyright, The Verge reported.
Universal Music Group has also warned against training AI models on the company’s artists’ work, saying this was a violation of copyright.
AI
Metaverse Experiences Must Be Worth Having, Says Stephenson
The success of the metaverse depends on the ability of developers to build quality experiences that “millions or billions” of people want to have. To do that, the sector must attract, find, and financially incentivize the very top talent from related industries.
This is the verdict of Neal Stephenson, the man credited with coining the word metaverse in his 1992 novel Snow Crash.
Source the best developers
Famed author and futurist Neal Stephenson says the metaverse must find and attract the most talented people to make the sector a success. Stephenson’s comments came during an appearance at the AWE USA 2023 convention on Wednesday.
“If we’re going to have a metaverse that’s being used all the time by millions or billions of people, then there have to be experiences in the metaverse that are worth having,” Stephenson said.
“That seems like an obvious statement but for me, there’s a glaring and frustrating lack of support for the kinds of people who make those experiences,” added the author. “Right now the skill set that is needed to create the metaverse is basically what you see in the game industry. People who know how to use game engines and how to create the assets that feed into those game engines. Those people by and large have jobs and other things they could be doing.
“We need to create the economic basis for [developers] to get rewarded if they succeed in creating metaverse experiences that a lot of people enjoy.”
Stephenson cited a number of ways that developers may be rewarded, but his personal vision is for a tokenized metaverse owned and controlled by its citizens.
In June last year, Stephenson announced Lamina1, a layer 1 blockchain infrastructure and open metaverse company. Stephenson co-founded the “batteries-included blockchain” with Peter Vessenes, reflecting their vision for an incentivized metaverse that, according to its website, could “empower a new generation of interconnected, immersive experiences across gaming, entertainment, fashion, music, and beyond.”
Seeing double: Ori Inbar and Ori Inbar introduce Neal Stephenson
A tale of two metaverses
Ori Inbar, the CEO of AWE, hosted the conversation with Stephenson, which marked the opening of the 14th annual AWE convention. The latest event runs from May 31 to June 2 in Santa Clara, California. Those who cannot attend in person are invited to participate online.
In an entertaining introduction, a virtual facsimile of Inbar initially addressed conference attendees, only for the real Inbar to interrupt and reveal the entire monologue was written by ChatGPT.
Inbar then asserted that AI makes “original voices… even more valuable than before.”
Once Inbar sat down with Stephenson the pair discussed just how far technology is developing across multiple fields. Inbar asked Stephenson where he believed the metaverse was headed; to something dystopian or something utopian.
“I think it would be very early and premature to make announcements today about whether it is going in one particular direction,” said Stephenson.
To Stephenson, both the positives and the negatives of the metaverse can co-exist within just one reality – something he explored in his 1992 novel.
“Our initial exposure to the metaverse is a kind of very vast market, a lowest common denominator to include … the worst of television,” Stephenson said as he described the surface level of the metaverse. “But later on, as we get farther into the book, we see that people have used it to make beautiful works of art. There are some people … who lavished a lot of time and attention on making homes in the metaverse that are exquisite works of art, both visually and in this sonic environment.”
That ambition of a deeper metaverse is worth striving for, especially as a driver of its long-term success.
AI
Baidu Is Rolling Out a $145M Venture Capital AI Fund
Chinese tech giant Baidu is setting up a $145 million (1 billion yuan) venture capital fund to back AI-focused startups. Baidu co-founder and CEO Robin Li announced the launch of the fund at a JP Morgan summit in China this week.
The move could signal China’s push towards self-reliance in the cut-throat generative AI sector. The fund will support the development and innovation of AI-based content creation, such as chatbots, video and audio synthesis, and natural language processing.
The fund is targeting early-stage AI applications, an area in which Chinese generative AI startups have so far struggled to achieve widespread adoption.
Also read: AI Code of Conduct Coming ‘Within Weeks’ Says US and Europe
Tailing the US’s OpenAI
OpenAI recently created an investment fund valued at more than $175 million, according to a Securities and Exchange Commission filing. The company has been investing in startups through its OpenAI Startup Fund, which backs companies “pushing the boundaries of how powerful AI can positively impact the world.”
Baidu is also planning to launch a competition for developers to build applications using its Ernie large language model (LLM) or integrate the model into their existing products, much as other tech firms are doing with OpenAI’s ChatGPT technology.
Ernie Bot is Baidu’s own AI-powered LLM, which can generate natural and coherent text based on user inputs.
“American developers are building new applications based on ChatGPT or other language models. In China, there will be an increasing number of developers building AI applications using Ernie as their foundation,” said Li.
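The integration Li describes typically amounts to a few API calls from application code. As a point of reference, here is a minimal sketch using OpenAI’s Python client as it worked in mid-2023; the API key, model, and prompts are placeholders, and Ernie’s own developer interface would differ in its details.

```python
# Minimal sketch of calling a hosted LLM from application code, using the
# OpenAI Python client as it existed in mid-2023. Key, model, and prompts
# are placeholders; Ernie's developer API would differ in the specifics.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful product assistant."},
        {"role": "user", "content": "Draft a short product description for a smart speaker."},
    ],
)

print(response.choices[0].message.content)
```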
Baidu unveiled the chatbot in March this year and claimed that it outperformed other LLMs in several benchmarks.
Battle for AI supremacy
The success of ChatGPT has put Chinese tech companies under pressure to fast-track the release of their own LLMs and bring them to market.
According to Reuters, more than 75 Chinese companies have released their own LLMs since 2020, with Baidu and e-commerce giant Alibaba among them.
A report by a state-run research firm puts the figure at over 79 LLMs launched in the past three years.
And the Baidu boss predicts that in the generative AI age, Chinese companies will catch up, and even lead the way in discovering commercial applications for AI.
“I am very bullish on China AI development. Over the past few decades, China has warmly embraced new technologies,” said Li.
“Even though we didn’t invent Android, iOS or Windows, we developed a host of very innovative applications like WeChat, Douyin and Didi. Many of them are popular and useful. The same trend is playing out in the AI age. Technology ushers in a myriad of possibilities and we are good at capturing them to build applications,” explained Li.
LLMs, a vital tech
Because they can produce realistic and varied material across a range of subjects and formats, LLMs are seen as a vital technology for expanding AI applications and services. They also present ethical and legal difficulties, however, such as potential abuse, plagiarism, and bias. In response to the spike in LLMs, China released draft regulations on the use of generative AI in April, requiring developers to obtain approval and explicitly label such products.
The growth and adoption of AI-based content production in China and elsewhere are anticipated to be accelerated by Baidu’s venture capital fund and competition.
AI
AI Code of Conduct Coming ‘Within Weeks’ Says US and Europe
On Wednesday, a top EU official said the European Union and United States expect to draft a voluntary code of conduct on artificial intelligence within weeks. The move comes amid concerns about the potential risks AI poses to humanity, and as calls for regulation intensify.
European Commission Vice President Margrethe Vestager said that the United States and the European Union should promote a voluntary code of conduct for AI to provide safeguards as new legislation is being developed.
She was speaking at a meeting of the EU-U.S. Trade and Technology Council (TTC), which is jointly led by American and European officials. Any new AI legislation will not take effect for at least three years, she said. The code is, therefore, expected to bridge that gap.
Also read: EU Antitrust Chief Steps up Rhetoric on Metaverse, AI Regulation
Game-changing AI technology
“We need accountable artificial intelligence. Generative AI is a complete game changer,” Vestager said after the council’s meeting in Sweden, AP reported.
“Everyone knows this is the next powerful thing. So within the next weeks, we will advance a draft of an AI code of conduct.”
She said officials will gather feedback from companies developing and using AI, and from other industry players. Vestager hopes there will be a final proposal “very, very soon for industry to commit to voluntarily.”
US Secretary of State Antony Blinken said he had an “intensive and productive” discussion on AI with his European counterparts at the TTC forum.
“[The council has] an important role to play in helping establish voluntary codes of conduct that would be open to all like-minded countries,” Blinken said.
AI could end human race
The development of AI has raised concerns about its potential to be used for harmful purposes, such as discrimination, surveillance, and nuclear war. There have also been concerns about the potential for AI to create mass unemployment.
As MetaNews previously reported, one of the core issues is what experts described as the “alignment problem.” Essentially, the problem refers to the difficulty of ensuring that an AI system’s goals and objectives are aligned with those of its human creators.
Critics say the danger is that an AI system may develop its own goals and objectives that conflict with those of its creators, leading to disastrous outcomes. On Tuesday, about 350 scientists and experts signed a statement calling for AI regulation to be a global priority.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement read.
The statement was issued by the Center for AI Safety, a San Francisco-based non-profit. It was signed by the chief executives of Google DeepMind and ChatGPT creator OpenAI, along with other major figures in artificial intelligence research.
In May, leaders of the so-called G7 nations met in Japan and called for the development of technical standards to keep AI “trustworthy”. They urged international dialogue on the governance of AI, copyright, transparency, and the threat of disinformation.
⚠️Accountability on #AI can't wait. It is NOW. Today #TTC kicked off work on a 1st voluntary AI #CodeOfConduct. We’ll work with our key partners & the #AI community on #safeguards to make AI responsible, safe & trustworthy. This is a huge step in a race we can't afford to lose. pic.twitter.com/WBcazIysiK
— Margrethe Vestager (@vestager) May 31, 2023
According to Vestager, specific agreements, not just general statements, are needed. She suggested that the 27-nation EU and the US could help drive the process.
“If the two of us take the lead with close friends, I think we can push something that will make us all much more comfortable with the fact that generative AI is now in the world and is developing at amazing speeds,” she said.
Worldwide concern
The U.S. and the European Union are not the only jurisdictions working on AI regulation. China’s Cyberspace Administration has already issued new regulations that ban the use of AI-generated content to spread “fake news.”
In Australia, Industry and Science Minister Ed Husic said regulation is coming soon.
“There’s a sort of feeling in the community that they want to have the assurance … that the technology isn’t getting ahead of itself and it’s not being used in a way that creates disadvantage or risk for people,” he said, according to local media reports.
“That’s why the [federal government] wants to set up the next reforms that can give people confidence that we are curbing the risks and maximising the benefits.”