Riffusion is a real-time music generation app that uses Stable Diffusion
Since the early days of AI, scientists have tried to use it to generate new and interesting music. The team behind the Riffusion project found a highly original way to apply image-generation AI to music production. They fine-tuned the open Stable Diffusion model on spectrogram images, which depict the frequency and amplitude of a sound wave over time, paired with text descriptions. As a result, the model can generate new spectrograms from your text prompts, and when those spectrograms are converted back into audio, you hear music.
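The key step in this pipeline is turning a generated spectrogram image back into audio. Since a magnitude spectrogram discards phase information, a phase-reconstruction algorithm such as Griffin-Lim is typically used. Below is a minimal NumPy sketch of that idea (the window size, hop length, and iteration count are illustrative choices, not Riffusion's actual parameters):

```python
import numpy as np

def stft(x, n_fft=512, hop=128):
    """Short-time Fourier transform with a Hann window."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.fft.rfft(np.stack(frames), axis=1)

def istft(S, n_fft=512, hop=128):
    """Inverse STFT via windowed overlap-add."""
    win = np.hanning(n_fft)
    n = (S.shape[0] - 1) * hop + n_fft
    x = np.zeros(n)
    norm = np.zeros(n)
    for i, frame in enumerate(np.fft.irfft(S, n=n_fft, axis=1)):
        x[i * hop:i * hop + n_fft] += frame * win
        norm[i * hop:i * hop + n_fft] += win ** 2
    return x / np.maximum(norm, 1e-8)

def griffin_lim(mag, n_iter=32, n_fft=512, hop=128):
    """Recover a waveform from a magnitude-only spectrogram by
    iteratively re-estimating the missing phase."""
    rng = np.random.default_rng(0)
    phase = np.exp(2j * np.pi * rng.random(mag.shape))
    for _ in range(n_iter):
        audio = istft(mag * phase, n_fft, hop)
        phase = np.exp(1j * np.angle(stft(audio, n_fft, hop)))
    return istft(mag * phase, n_fft, hop)

# Round-trip demo: spectrogram of a 440 Hz tone, then reconstruct audio.
t = np.linspace(0, 1, 8192, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)
mag = np.abs(stft(tone))
reconstructed = griffin_lim(mag)
```

In a real deployment, faster neural vocoders can replace Griffin-Lim, but the principle is the same: the image generated by the diffusion model carries enough spectral information to synthesize an audible waveform.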
As with image-to-image editing in Stable Diffusion, the method can modify existing sound compositions and synthesize sample-based music. You can also combine different styles, transition smoothly from one style to another, or alter an existing recording, for example by raising the volume of individual instruments, changing the rhythm, or replacing instruments.
The approach is already showing a lot of promise for music generation. And since the project's code is open source and released under the MIT license, anyone can use it to create their own music. On the project website, you can listen to samples of generated music.