AI can extract sensitive images and expose our private lives


In Brief

AI models can memorize specific training data, including individual images, which raises privacy concerns.

This memorization may lead to new legal disputes.



Every cutting-edge technology that inspires wonder seems to attract a lawsuit, and this time it is the turn of Stability AI, the developer of the Stable Diffusion model. Stable Diffusion is an open-source, free-to-use AI system that generates images from a text prompt. You can use it to create photorealistic images or images inspired by other artists' styles.
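For readers who have not tried it, here is a minimal sketch of what "generating images from a text prompt" looks like in practice, using Hugging Face's diffusers library. The checkpoint ID and prompt are illustrative choices, not anything specific to the lawsuit, and a CUDA GPU is assumed for reasonable speed.

```python
# A minimal sketch of text-to-image generation with Stable Diffusion via
# Hugging Face's diffusers library. The checkpoint ID and prompt are
# illustrative; a CUDA GPU is assumed for reasonable generation speed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a publicly hosted checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# One text prompt in, one synthesized image out.
image = pipe("a photorealistic portrait, 35mm film style").images[0]
image.save("generated.png")
```

The key point for the discussion below is that the user supplies only text; everything visual comes from patterns the model absorbed during training.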


Artists, unsurprisingly, do not appreciate this; ArtStation, a sort of Instagram for artists, even staged a "No AI" protest. Their key argument is that the model trains on images that are not Stability AI's property. Imagine someone takes a snapshot of you at a meetup with friends and uploads it to Instagram. An AI crawler then gets its hands on the photo, and the model analyzes it and uses it for training.

The concern is that models can memorize specific data, including individual images. In other words, a blackmailer could craft a prompt about a gathering with friends and synthesize a near-copy of your uploaded photo. This is already somewhat worrisome, especially given that Stable Diffusion's training set contains more than 5 billion images scraped from the Internet, including personal photos that were at some point posted publicly. The misuse of private images does happen, unfortunately: in 2013, a doctor snapped a picture of a patient, and the picture later appeared on the clinic's website.

In the side-by-side comparison of the original and generated photographs, the silhouettes are identical.

Here is an article that demonstrated how images could be extracted from various AI models and how closely those extractions matched the training images (spoiler: there is some noise, but the matches are generally very similar). These findings are relevant to the aforementioned lawsuit, and a jury may view these models differently in this instance, since it can be argued that they memorize and replicate content (without the rights to do so).
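To make "very similar" concrete, here is a hedged sketch of one simple way to score how closely a generation matches a training image: downsample both pictures and take a normalized L2 distance. This is a simplification of the metrics used in such extraction papers, and the file names and threshold below are assumptions for illustration only.

```python
# A rough proxy for "how similar is this generation to this training image":
# downsample both pictures and take a normalized L2 (root-mean-square) distance.
import numpy as np
from PIL import Image

def l2_distance(path_a: str, path_b: str, size: int = 64) -> float:
    """Normalized L2 distance between two images after downsampling."""
    def load(path: str) -> np.ndarray:
        img = Image.open(path).convert("RGB").resize((size, size))
        return np.asarray(img, dtype=np.float32) / 255.0
    a, b = load(path_a), load(path_b)
    return float(np.sqrt(np.mean((a - b) ** 2)))

THRESHOLD = 0.1  # assumption: would need tuning on known duplicate/non-duplicate pairs
if l2_distance("generation.png", "training_image.png") < THRESHOLD:
    print("Possible memorized training image - flag for manual inspection.")
```

A low distance does not prove memorization on its own; it only flags a pair that a human should look at, which is exactly how the researchers used such scores.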

However, it is too early to claim that "models only memorize and never create anything new": the authors found only about 100 memorized images among the roughly 5 billion training images (they manually inspected the top 1,000 most similar generations for the most common prompts, and most turned out not to be duplicates).
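For a sense of what that manual inspection step involves, here is a rough sketch: score every generation against every candidate training image, sort by distance, and keep only the closest pairs for human review. The file lists are placeholders, the cutoff of 1,000 mirrors the paper's hand-checked set, and l2_distance is the helper from the sketch above.

```python
# Triage step: rank (generation, training image) pairs by similarity and
# keep the top matches for manual review. Assumes l2_distance from the
# previous sketch is in scope; file lists are placeholders.
from itertools import product

generations = ["gen_000.png", "gen_001.png"]        # generated samples
training_set = ["train_000.png", "train_001.png"]   # candidate source images

pairs = [(g, t, l2_distance(g, t)) for g, t in product(generations, training_set)]
pairs.sort(key=lambda p: p[2])  # smallest distance first, i.e. most similar

# The paper's authors hand-checked roughly the top 1,000 matches.
for gen, train, dist in pairs[:1000]:
    print(f"{gen} vs {train}: distance = {dist:.4f}")
```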



Damir Yalalov
