David Guetta, a famous French DJ and producer, has used AI to create a track featuring the voice of none other than the iconic rapper Eminem. What’s more, the rapper was not even aware of the track’s existence until it was played at a concert. The vocals were created by synthesizing Slim Shady’s voice with a neural network and layering them over the beat.
While it’s yet to be seen if this track will ever see a commercial release, the case opens up endless possibilities for using AI in music. So, in the future, it’s not only possible that we’ll see more holograms of dead famous artists but also that we’ll be able to have them perform new songs.
You can’t call the track a full-fledged feature, though: “Eminem” simply repeats the same phrase several times (a phrase that was itself generated by AI).
It’s not just that AI can be used to create realistic holograms of dead musicians (something that has already been done). Now, it’s also possible to use AI to “force” living artists to perform new songs, even without their consent. This could have a major impact on the music industry, and it will be interesting to see how it all unfolds.
This raises serious questions about the legal implications of using AI to create music. Guetta evidently did not have Eminem’s permission to use his voice, so it’s unclear whether the track could ever be released publicly.
Of course, the legal side of things will need to be sorted out before any of this can happen on a larger scale. But it’s definitely an exciting time to be alive for music lovers!
- In January, Google introduced MusicLM, a model for generating high-fidelity music from text descriptions. It outperforms previous systems in audio quality and adherence to the text description, and it can be conditioned on a melody as well as text. Trained on a large corpus of music audio, MusicLM can generate music in a variety of genres, including classical, jazz, and rock. It is an important development in AI-generated music, as it opens up the possibility of generating long, complex pieces that could be used in movies, video games, or other media.
- In December, the Riffusion project found an innovative way to apply image-generation AI to music composition. Stable Diffusion, fine-tuned on spectrogram images, can produce new music by generating or altering spectrograms in response to text prompts, and it can also modify existing sound compositions and synthesize sample music. Because the project is open source and distributed under the MIT license, anyone can use it to make their own music.
- Stanford University has announced EDGE, a powerful music-to-dance AI that generates dance based on audio input. It uses Jukebox, a potent music feature extractor, and a transformer-based diffusion model to create physically believable, realistic dances while adhering to any supplied music. Human raters greatly favor dances generated by EDGE.
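The Riffusion approach described above rests on one core idea: a spectrogram is just an image whose columns are time frames and whose rows are frequency bins, so once an image model has produced one, it can be turned back into sound. As a rough illustration (not Riffusion’s actual pipeline, which uses Griffin-Lim phase reconstruction), here is a toy, NumPy-only additive-synthesis sketch; the function name and parameters are illustrative:

```python
import numpy as np

def spectrogram_to_audio(spec, sr=22050, hop=256):
    """Toy inverse transform: each column of `spec` is one time frame,
    each row a linearly spaced frequency bin. For every frame, sum one
    sinusoid per active bin (no phase continuity, so expect clicks)."""
    n_bins, n_frames = spec.shape
    freqs = np.linspace(0.0, sr / 2.0, n_bins)  # bin index -> frequency in Hz
    t = np.arange(hop) / sr                     # time axis of a single frame
    audio = np.zeros(n_frames * hop)
    for i in range(n_frames):
        frame = np.zeros(hop)
        for b in np.flatnonzero(spec[:, i]):    # skip silent bins
            frame += spec[b, i] * np.sin(2 * np.pi * freqs[b] * t)
        audio[i * hop:(i + 1) * hop] = frame
    return audio

# A "spectrogram" with a single bright horizontal band yields a steady tone.
spec = np.zeros((128, 40))
spec[5, :] = 1.0  # bin 5 of 128 is roughly 434 Hz at sr=22050
audio = spectrogram_to_audio(spec)
```

Real systems replace the per-frame sinusoid sum with an inverse STFT plus phase estimation, which is what lets a generated image become listenable audio rather than a clicky approximation.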