
New Delhi: In response to the growing spread of AI-generated deepfake content online, the Ministry of Electronics and Information Technology (MeitY) has issued revised guidelines for social media platforms such as Facebook, Instagram, and YouTube. The new rules require platforms to clearly label all AI-generated content and ensure that such material carries embedded digital identifiers.
Under the updated regulations, social media companies must remove AI-generated or deepfake content within three hours of it being flagged by the government or ordered taken down by a court. The notification also prohibits platforms from removing or tampering with AI labels or associated metadata once they are applied.
The government has instructed intermediaries to implement automated systems and technical safeguards to detect and prevent the spread of illegal, misleading, or sexually exploitative AI-generated content. As per the MeitY directive, platforms must also inform users at least once every three months, through their policies or agreements, about the consequences of misusing AI technology.
The guidelines further state that once an intermediary becomes aware of violations involving the creation, hosting, sharing, or distribution of synthetically generated content, it must take swift and appropriate action. Platforms are also required to deploy effective technical mechanisms to prevent users from creating or sharing AI-generated content that violates existing laws, including the Bharatiya Nyaya Sanhita, 2023, the Protection of Children from Sexual Offences (POCSO) Act, 2012, and the Explosive Substances Act, 1908.
Additionally, the rules require users to disclose when they post AI-generated or modified content, while platforms must adopt technologies to verify such declarations. Several social media platforms have already introduced features that allow users to label content created or altered using artificial intelligence.
With inputs from IANS