“This Content Created by Humans”: Why Users Need Content Credential Adoption in the AI Era
Have you encountered those YouTube or TikTok videos where a robotic voice delivers jokes over a backdrop of stock visuals? Honestly, I occasionally watch such videos myself. And I’m not alone; they enjoy significant popularity, racking up millions of views.
Recently we shared an extensive guide detailing how to automatically generate dozens of such videos daily. There are scripts that scrape websites for content on a chosen topic, segment the text into blocks, rephrase it for uniqueness, and then employ AI voices for narration. The narrations are overlaid on video clips from a library, and voilà! The content is ready. All that’s left is to generate a clickbait title and description, which the script also handles.
But that’s not all; you can go a step further and clone the most successful, highly viewed content. A script ingests trending videos in real time, rehashes their text with a variety of voices, superimposes different footage, and produces “new” videos that can be uploaded to other channels. Individuals can manage hundreds or even thousands of such channels, with content now being generated automatically on servers.
|Related: HeyGen’s Mind-boggling AI-translated Video Generator Disrupts the Film Translation Industry|
So, why am I sharing this? To highlight that information is on the verge of becoming a factory-produced commodity. Nowadays, most goods are mass-produced; think of identical cakes and cookies from factory lines, or the countless factories producing clothes for brands like Zara, which often iterate on and paraphrase ideas from a handful of fashion houses. Unique or handmade items are considered premium and elite. I’m not criticizing this process; it makes basic products more accessible. However, this transformation has yet to fully reach information and ideas themselves. We are now entering the next phase of automation.
It appears that soon the concept of authorship may become nearly obsolete, as the majority of content will be automatically generated. This doesn’t mean such content will be of poor quality; quite the opposite, a new standard of quality will emerge. It’s like buying a pair of boots at a marketplace: you know what to expect. They won’t rival a model individually crafted by a skilled artisan, but they are unlikely to be poor either.
Additionally, this content can be tailored to human psychology, selecting optimal pacing, duration, and visual appeal. Such content will be more straightforward and enjoyable to consume compared to what we have today.
If watching black-and-white films, Tarkovsky’s movies, or even Tarantino has become a marker of intellectualism, then very soon the marker will be content created by humans. It will be less efficient at capturing attention, less polished by algorithms. Consuming such content will be a deliberate, thoughtful experience, akin to how not everyone can appreciate Lars von Trier’s work.
Do the majority even desire this? I believe this question is timeless, awaiting its next incarnation. The fact remains: niche author-driven cinema currently garners relatively modest revenue and is often produced for prestige rather than profit.
Do you envision a future where 99% of content will be automatically generated without human authorship within a decade?
Symbols for Identifying AI-Generated Content
Recently, Adobe introduced an icon for marking generative content, known as the “Icon of Transparency.” This symbol can now be applied to synthetic images within Adobe’s photo and video editing software, and it will be incorporated into images generated with the Firefly neural network. In the near future, Microsoft’s Bing search engine will also include this symbol in its image generator.
Users can hover over the symbol to access information regarding the image’s generation process and its creator. This information will also be accessible in the image’s metadata.
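Because the provenance record travels inside the image file itself, software can check for it without any external service. In JPEGs, the C2PA specification embeds a signed manifest in APP11 segments as JUMBF boxes labeled “c2pa.” A real verifier (such as the open-source c2pa SDK) must parse and cryptographically validate that manifest; the sketch below is only a rough presence heuristic, not a spec-compliant parser, and the byte patterns it scans for are a simplification.

```python
def looks_like_c2pa_jpeg(data: bytes) -> bool:
    """Rough heuristic: does this byte string look like a JPEG
    carrying an embedded C2PA (Content Credentials) manifest?

    Checks for the JPEG signature, an APP11 marker (0xFFEB, the
    segment type C2PA uses), and the "c2pa" JUMBF label. A real
    check must parse segment lengths and verify the signature.
    """
    is_jpeg = data[:2] == b"\xff\xd8"     # JPEG start-of-image marker
    has_app11 = b"\xff\xeb" in data       # APP11 segment marker
    has_label = b"c2pa" in data           # JUMBF content-type label
    return is_jpeg and has_app11 and has_label
```

A positive result only means a manifest appears to be present; whether the credential is authentic is a separate cryptographic question.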
The development of this symbol was a collaborative effort involving not only Adobe but also other members of the C2PA group, which includes Microsoft and Intel as well as camera manufacturers like Nikon and Leica.
It is anticipated that this new symbol will become as widespread and recognizable as the copyright symbol. However, it’s worth noting that other major companies are developing their own methods for identifying AI-generated content. Additionally, without a legal framework, the adoption of this marking will remain voluntary and potentially lacking significance.
- In August, DeepMind introduced SynthID, a tool that can mark and identify AI-generated images. The tool embeds a watermark directly into the image pixels, making it invisible to the human eye and, unlike other methods, difficult to remove. SynthID is currently in beta testing and works only with Google’s Imagen neural network.
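SynthID’s actual algorithm is proprietary and far more robust than anything shown here, but the general idea of pixel-level watermarking can be illustrated with the classic least-significant-bit (LSB) technique: each carrier pixel’s value changes by at most 1 out of 255, which is imperceptible to the eye yet trivially readable by software. This is a toy sketch of the concept only; unlike SynthID, an LSB mark is easily destroyed by recompression.

```python
def embed_watermark(pixels, mark_bits):
    """Hide a bit string in the least-significant bits of a flat
    list of 0-255 pixel values. Each marked pixel shifts by at most 1."""
    out = list(pixels)
    for i, bit in enumerate(mark_bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract_watermark(pixels, length):
    """Read back the first `length` embedded bits."""
    return [p & 1 for p in pixels[:length]]

# Embedding a 4-bit mark into six grayscale pixels: the marked image
# is visually identical, but the mark survives for any reader that
# knows where to look.
image = [200, 201, 202, 203, 204, 205]
marked = embed_watermark(image, [1, 0, 1, 1])
```

The design trade-off this exposes is exactly the one watermarking research wrestles with: the less a mark perturbs the pixels, the easier it is to erase with cropping, resizing, or re-encoding, which is why production systems like SynthID spread the signal robustly across the image rather than in raw LSBs.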
In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.