Opinion News Report Technology
September 08, 2023

Facing the AI-Generated Image Threat: Why Awareness Is Imperative

In Brief

Tech companies and governments worldwide have begun implementing measures to shield citizens from the growing menace of AI-generated images.

AI technology continually blurs the line between reality and fiction, saturating our visual world, from advertising to entertainment, with lifelike images. These tools make it possible to fabricate images of recognizable public figures, such as politicians, and use them to spread misinformation or propaganda.

So, what consequences and concerns accompany the surge in AI-generated images?

While AI-generated images and videos bring forth benefits, such as fostering creativity and innovation, they also harbor potential risks. Generative AI technology empowers the creation of highly realistic images depicting events that never occurred, serving as a potent instrument for the propagation of falsehoods and the manipulation of public opinion.

Over the past six months, AI photography, branded as “promptography” by artist Boris Eldagsen, has reached a chilling level of realism.

It is now possible to conjure images from text that leave viewers questioning their authenticity. These AI-generated photos have deceived judges, won photography contests, and been exploited by scammers during events like the Turkey-Syria earthquake.

Tech conglomerates and governments worldwide have begun implementing measures to shield citizens from the growing menace of AI-generated images. Even photographers themselves are expressing concerns, as the proliferation of AI technology in their craft poses a risk: their work may become indistinguishable from that of their peers.

A rising threat sparking unease globally

Generative AI technologies are evolving rapidly, making it increasingly challenging to differentiate between computer-generated images, also referred to as “synthetic imagery,” and those crafted without the aid of AI systems.

The homogenization of AI-generated images threatens diversity and originality in photography, making it harder for photographers to set their work apart and for audiences to tell one photographer’s work from another’s.

Furthermore, if AI-generated images become the norm, they may erode the perceived worth of photography. Images might no longer be seen as unique or precious, potentially reducing demand for original photographic work.

Artificial intelligence tools could be exploited to produce child abuse images and terrorist propaganda, Australia’s eSafety Commissioner has cautioned, recently announcing an industry standard that requires tech giants such as Google, Microsoft’s Bing and DuckDuckGo to eradicate such material from AI-powered search engines.

This new industry code governing search engines requires these tech giants to eliminate child abuse material from their search results and to take preventive measures so that generative AI products cannot be used to generate synthetic versions of such material.

Julie Inman Grant, the eSafety Commissioner, stressed the need for companies to take a proactive stance in minimizing the harms stemming from their products. She warned that “synthetic” child abuse material and terrorist propaganda are already emerging, emphasizing the urgency of addressing these issues.
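The code does not spell out how search providers are expected to detect known material, but a common building block in such systems is perceptual hashing: an image is reduced to a compact fingerprint that survives resizing and re-encoding, then compared against a database of fingerprints of known harmful images. The sketch below is a toy average hash for illustration only; production systems use far more robust proprietary schemes such as Microsoft’s PhotoDNA.

```python
import numpy as np

def average_hash(image: np.ndarray, size: int = 8) -> np.ndarray:
    """Toy perceptual hash: block-average the grayscale image down to a
    size x size grid, then threshold each cell at the overall mean,
    yielding size*size bits that are stable under small edits."""
    h, w = image.shape
    bh, bw = h // size, w // size
    blocks = image[: bh * size, : bw * size].reshape(size, bh, size, bw)
    small = blocks.mean(axis=(1, 3))
    return (small > small.mean()).ravel()

def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
    """Number of differing bits; small distances indicate near-duplicates."""
    return int(np.count_nonzero(a != b))
```

A lightly re-encoded copy of an image hashes within a few bits of the original, while an unrelated image differs in roughly half its bits, which is how a match against a known-image database can survive cosmetic edits.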

Microsoft and Google have recently announced plans to integrate AI chatbots into their consumer search engines: ChatGPT-powered Bing and Bard, respectively. Inman Grant noted that the progress of AI technology necessitates a reevaluation of the “search code” governing these platforms.

Suspected Chinese operatives have also harnessed artificial intelligence to simulate American voters online and disseminate disinformation on divisive political topics as the 2024 US election approaches, according to a warning from Microsoft analysts.

In the past nine months, these operatives have posted striking AI-generated images featuring the Statue of Liberty and the Black Lives Matter movement on social media platforms, with a focus on disparaging US political figures and symbols.

This alleged Chinese influence network employed multiple accounts on Western social media platforms to disseminate AI-generated images. Although the images were computer-generated, real individuals, whether knowingly or unknowingly, shared them on social media, amplifying their impact.

Tech Conglomerates Unite to Safeguard Image Authenticity

Content and technology firm Thomson Reuters has partnered with Canon and Starling Lab, an academic research lab, to launch a pilot program aimed at verifying the authenticity of images used in news reporting. This collaborative initiative seeks to ensure that AI-generated images do not pass as genuine photographs, especially in news content, where accuracy is paramount.
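The pilot’s exact mechanics are not described here, but provenance systems of this kind generally work by hashing an image’s bytes at the moment of capture and cryptographically signing that hash, so any later change to the pixels breaks verification. The sketch below is a minimal illustration using an HMAC as a stand-in for a camera’s signing key; real systems such as the C2PA standard use public-key signatures and richer metadata.

```python
import hashlib
import hmac

def sign_image(image_bytes: bytes, signing_key: bytes) -> str:
    """Hash the image bytes and sign the digest with a secret key,
    as a camera or newsroom tool might do at capture time."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(signing_key, digest, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, signing_key: bytes, signature: str) -> bool:
    """Recompute the signature; any change to the bytes invalidates it."""
    return hmac.compare_digest(sign_image(image_bytes, signing_key), signature)
```

A downstream editor who trusts the key can then confirm that a photo is byte-for-byte what the camera produced, while a doctored or AI-generated substitute fails the check.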

This initiative is particularly timely in the battle against the growing tide of misinformation. Rickey Rogers, Global Editor of Reuters Pictures, emphasized the vital importance of trust in news reporting. 

“Trust in news is paramount. However, recent technological advancements in image generation and manipulation are prompting more individuals to question the authenticity of visual content. Reuters remains committed to exploring new technologies that guarantee the accuracy and trustworthiness of the content we deliver,” said Rogers. 

Likewise, Google has launched SynthID, a tool for watermarking and identifying AI-generated images, releasing a beta version in collaboration with Google Cloud. The technology embeds a digital watermark directly into an image’s pixels for later verification, yet the mark remains invisible to the naked eye.

SynthID is initially available to a select group of Vertex AI customers using Imagen, one of Google’s latest text-to-image models, which takes a text prompt and produces photorealistic images.

Researchers designed SynthID to preserve image quality while keeping the watermark detectable even after alterations such as filters, color changes, or the lossy compression typically used for JPEGs.

SynthID employs two deep learning models—one for watermarking and one for identification—trained on a diverse set of photos. The combined model is finely tuned to achieve multiple objectives, including accurate recognition of watermarked information and aesthetic alignment of the watermark with the original content.
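SynthID’s models are not public, but the underlying idea of a watermark that is invisible yet survives distortion can be illustrated with a classic spread-spectrum scheme: add a faint key-derived noise pattern across all the pixels, then detect it later by correlation. This is a toy sketch of the principle, not Google’s method; the strength value and image sizes are arbitrary choices for the example.

```python
import numpy as np

def embed_watermark(image: np.ndarray, key: int, strength: float = 0.05) -> np.ndarray:
    """Add a faint pseudo-random pattern derived from `key` to the pixels.
    At this strength the change is imperceptible on a [0, 1] image."""
    pattern = np.random.default_rng(key).standard_normal(image.shape)
    return image + strength * pattern

def detect_watermark(image: np.ndarray, key: int) -> float:
    """Correlate the image with the key's pattern. Scores near the embedding
    strength indicate a watermark; scores near zero indicate none."""
    pattern = np.random.default_rng(key).standard_normal(image.shape)
    centered = image - image.mean()
    return float(np.mean(centered * pattern))
```

Because the pattern is spread across every pixel, the correlation score degrades gracefully under added noise or mild processing instead of vanishing outright, which is the robustness property the article describes.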

Addressing this issue demands action from photographers, AI developers, and the broader photography industry. This may entail the development of ethical guidelines and best practices for utilizing AI in photography and encouraging the exploration of new forms of photography that leverage AI technology’s unique capabilities while preserving the artistic integrity of the field.

Disclaimer

In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.

About The Author

Victor is a Managing Tech Editor/Writer at Metaverse Post and covers artificial intelligence, crypto, data science, metaverse and cybersecurity within the enterprise realm. He has half a decade of media and AI experience at well-known outlets such as VentureBeat, DatatechVibe and Analytics India Magazine. A media mentor at universities including Oxford and USC, with a Master’s degree in data science and analytics, Victor is deeply committed to staying abreast of emerging trends and offers readers the latest and most insightful narratives from the Tech and Web3 landscape.
