In early March, Stability AI acquired France’s Init ML, the maker of the Clipdrop suite of AI imaging applications. This is Stability AI’s first acquisition since its recent fundraising round.
Paris-based Init ML was founded in July 2020 with seed funding from the venture capital firm Air Street Capital. Since then, Clipdrop has attracted over 15 million users across its Relight, Text Remover, Remove/Replace Background, Super Resolution, and Clean Up tools. Init ML will operate as a wholly owned independent subsidiary of Stability AI, and all of its employees will remain on staff. According to the ClipDrop website, “This acquisition is expected to bring together the expertise of both companies to enhance their AI-powered solutions.” Stability AI aims to leverage Init ML’s capabilities to provide more innovative and efficient services to its clients.
Now this collaboration between Stability AI and Init ML has led to a new product, Stable Diffusion Reimagine. Rather than recreating images from the original data, Stable Diffusion Reimagine creates new images inspired by the originals.
It’s like a generator of new ideas based on a single image. Alternatively, it can be viewed as copy-paste at maximum speed: a close analogy to asking ChatGPT, “Take this text and rewrite it differently.”
New images are generated from the source image itself. The image first passes through an encoder, and some noise is then added to the encoded representation to create variations. This approach produces similar images with different details and composition. Unlike the image-to-image algorithm, the original image is fully encoded first, which means the generator does not reuse any pixels from the original image. This design helps shield the tool from accusations of plagiarizing artists’ work.
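The encode-then-perturb idea described above can be sketched in a few lines of NumPy. This is a toy illustration only: `encode` and `reimagine` are hypothetical stand-ins for the real model components, the random projection is not an actual image encoder, and the “image” is just a random array.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(image):
    # Stand-in for a semantic image encoder: the real model maps pixels
    # to an embedding; here we just flatten and apply a fixed projection.
    flat = image.reshape(-1)
    projection = np.random.default_rng(42).standard_normal((64, flat.size))
    return projection @ flat

def reimagine(image, n_variations=3, noise_scale=0.5):
    # Fully encode the source image, then perturb the embedding with
    # Gaussian noise so each variation shares the concept of the
    # original but none of its pixels.
    embedding = encode(image)
    return [embedding + noise_scale * rng.standard_normal(embedding.shape)
            for _ in range(n_variations)]

image = rng.standard_normal((8, 8, 3))  # toy "image"
variants = reimagine(image)
```

Because only the noisy embeddings (not the pixels) feed the generator, every output is a fresh synthesis rather than an edit of the source.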
Stability AI is committed to open source and promises to publish the code on GitHub, which is very welcome. In the meantime, you can try the tool for free on Clipdrop.
Freshly generated examples follow below:
Meanwhile, users are waiting for a generator of sites, presentations, pitch decks, and glamorous magazines with one button: “Enter a URL or file; our AI will rewrite the texts and regenerate the pictures.”
- Stability AI, Hugging Face, and Canva are establishing a new non-profit organization for AI research: EleutherAI, a community research group founded by Connor Leahy, Leo Gao, and Sid Black, is setting up a non-profit foundation.
- In November, Stability AI published a blog post on Stable Diffusion 2.0, a new algorithm that is more efficient and robust than its predecessor, benchmarked against other state-of-the-art methods. The release features robust text-to-image models trained with a new text encoder (OpenCLIP), developed by LAION with support from Stability AI, which significantly improves the quality of generated images over the previous V1 releases. The models were trained on an aesthetic subset of the LAION-5B dataset created by Stability AI’s DeepFloyd team, filtered with LAION’s NSFW filter to exclude adult content.
- In October, Stability AI announced Harmonai, an AI music generator based on the Dance Diffusion model. Harmonai is a community-driven organization that publishes open-source generative audio tools to broaden everyone’s access to, and enjoyment of, music production. Its Dance Diffusion model generates never-before-heard sounds through a process called diffusion.
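The diffusion process behind tools like Dance Diffusion starts by gradually corrupting a clean signal with noise; the model then learns to reverse that corruption. A minimal sketch of the forward (noising) step, assuming a simple DDPM-style linear noise schedule rather than Dance Diffusion’s actual one:

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear noise schedule over T steps (a simplification; real models
# such as Dance Diffusion use their own schedules and parameters).
T = 100
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)

def noise_sample(x0, t):
    # Forward diffusion: blend the clean waveform x0 with Gaussian
    # noise so that at large t the signal is almost pure noise.
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1 - alphas_bar[t]) * eps

audio = np.sin(np.linspace(0, 8 * np.pi, 1024))  # toy waveform
noisy = noise_sample(audio, T - 1)
```

Generation runs this process in reverse: starting from pure noise, a trained network denoises step by step until a novel waveform emerges.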