India’s IT Ministry Raises Alarm on Deepfake Menace, Urges Action from Social Media Platforms
Indian government officials and cybersecurity experts are collaborating to address and overcome challenges posed by deepfake content.
In a proactive step to address the escalating threat of deepfake technology, India’s Union Minister for Electronics and Information Technology and Communications, Ashwini Vaishnaw, held a high-level meeting today.
The meeting, attended by Indian government officials and cybersecurity experts, focused on strategies to confront and overcome the challenges presented by deepfake content.
Indian PM Narendra Modi, speaking at the opening of a virtual summit of the G20 nations, of which India holds the presidency, outlined the dangers of AI-generated content.
“The world is worried about the negative effects of AI. India thinks that we have to work together on the global regulations for AI. Understanding how dangerous deepfake is for society and individuals, we need to work forward,” said PM Modi. “Artificial Intelligence (AI) should reach people and must be safe for society.”
Deepfakes are media content generated with AI technologies, a growing tool for misinformation and digital impersonation.
These deceptive creations combine machine-learning algorithms with facial-mapping software, allowing a person’s likeness to be inserted into digital content without permission. When executed with precision, the outcome can be an incredibly convincing yet entirely fabricated text, video or audio clip depicting a person doing or saying something they did not.
Concern about deepfakes has been growing in India since a deepfake video of Indian actress Rashmika Mandanna circulated widely on social media platforms.
Commenting on the social media platform X (formerly Twitter), the actress said, “I feel really hurt to share this and have to talk about the deepfake video of me being spread online. Something like this is honestly, extremely scary not only for me but also for each one of us who today is vulnerable to so much harm because of how technology is being misused.”
Social Media Platforms Must Counter Deepfake
The Indian IT minister is urging social media platforms to adopt a more aggressive approach in countering deepfake content. Ashwini Vaishnaw emphasized that the ‘Safe Harbour’ Clause, which traditionally shields social media platforms, may not apply if they fail to take adequate measures against deepfakes on their platforms.
“The Safe Harbour Clause that most of the social media platforms have been enjoying doesn’t apply if the platforms do not take adequate steps to remove the deepfakes from their platforms,” he added.
Recently, some social media platforms have introduced measures to curb deepfakes.
In a recent blog post, YouTube announced that in the coming months, it will “enable users to submit removal requests for AI-generated or other altered content” that simulates an identifiable individual, encompassing their face or voice, commonly known as deepfakes.
The company clarified that not all reported content will be automatically taken down. When reviewing removal requests, it will consider factors such as whether the content qualifies as parody or satire, whether the person making the request can be uniquely identified, and whether the featured individual is a public figure, in which case the bar for removal may be higher.
Another tech giant, Microsoft, has announced its plan to introduce a service allowing election candidates to digitally sign and authenticate content through digital watermarking. Termed ‘Content Credentials as a Service’ by Microsoft, the service uses the Coalition for Content Provenance and Authenticity’s (C2PA) digital watermarking credentials – a set of metadata that cryptographically encodes details about the content’s origin.
Content featuring the digital watermark will include information on how, when and by whom it was created or edited, explicitly indicating if it was generated by AI, such as in the case of deepfakes.
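The real C2PA manifest format is considerably more involved (it uses X.509 certificates, asymmetric signatures, and embedded binary manifests), but the core idea – cryptographically binding provenance metadata to a piece of content so that any later edit is detectable – can be sketched in a few lines. The Python example below is a highly simplified illustration under those assumptions; the function names, fields, and shared HMAC key are illustrative and are not part of the C2PA specification.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Stand-in for a real signing key; actual C2PA uses certificate-based
# asymmetric signatures, not a shared secret.
SECRET_KEY = b"demo-signing-key"

def attach_credential(content: bytes, creator: str, ai_generated: bool) -> dict:
    """Bind provenance metadata to content via a hash and a signature."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "created": datetime.now(timezone.utc).isoformat(),
        "ai_generated": ai_generated,  # explicit flag for AI-made content
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credential(content: bytes, manifest: dict) -> bool:
    """Check both the signature and that the content hash still matches."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

video = b"raw video bytes"
cred = attach_credential(video, "Campaign Media Team", ai_generated=False)
assert verify_credential(video, cred)              # untouched content verifies
assert not verify_credential(video + b"x", cred)   # any edit breaks the credential
```

Because the signature covers the metadata and the metadata includes the content hash, a viewer can confirm how, when, and by whom the content was created – and any tampering with either the content or the claims invalidates the credential.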
The rising number of deepfake cases is a concern, and appropriate AI regulation involving multiple players, including governments across the globe, is one of the best ways forward. The time is ripe to ensure that safe and trustworthy AI reaches people.
In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.