The Delhi Police’s Special Cell is investigating a case of forgery involving a deepfake video of actress Rashmika Mandanna that went viral on social media. The video was created using artificial intelligence (AI) to morph Mandanna’s face onto another woman’s body. The police have faced several challenges in tracing the video’s origin and the motive behind it, as US-based tech firms have not shared the required data with them. Here are the details of the case and the hurdles the police have faced.
What is the case and how did it come to light?
The case was registered on November 10, 2023, after the fact-checking website Alt News reported that a video of Mandanna entering a lift was a deepfake. The original video was of a British-Indian woman, who had posted it on her Instagram account. The accused had used AI software to replace her face with Mandanna’s and circulated it on various social media platforms. The video had garnered millions of views and comments, many of which were abusive and derogatory towards Mandanna.
The police took suo motu cognizance of the matter and lodged a case under sections 465 (forgery), 468 (forgery for cheating), 469 (forgery to harm reputation), and 471 (using as genuine a forged document) of the Indian Penal Code (IPC) and section 66D (cheating by personation by using computer resource) of the Information Technology (IT) Act. The case was initially handled by the Intelligence Fusion and Strategic Operations (IFSO) unit of the Special Cell but was later transferred to the Cyber Cell.
What are the challenges faced by the police in the investigation?
The police have faced several difficulties in tracing the source and the motive of the video, as the US-based tech firms have not cooperated with them. The police sent requests to Facebook, Instagram, Twitter, and YouTube asking for details of the accounts that uploaded or shared the video, such as IP addresses, locations, devices, and email IDs. However, the firms have yet to share the data, citing their privacy policies and legal constraints.
The police have also tried to use AI tools to detect the deepfake, but without success. According to investigators, the video was created with advanced AI software that makes it difficult to tell the real face from the fake one. The video was also uploaded on multiple platforms and deleted after a short time, which makes its trail hard to track.
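For context, automated deepfake screening typically works by pulling face crops out of video frames and scoring each crop with a classifier trained to spot manipulation artifacts. The sketch below is purely illustrative and is not the tool the investigators used; only the OpenCV frame-extraction and face-detection calls are real, and score_face is a hypothetical placeholder for a trained detection model.

```python
# Illustrative sketch of frame-level deepfake screening (not the police's tool).
# Only the OpenCV calls are real; score_face() stands in for a trained model.
import cv2


def score_face(face_bgr) -> float:
    """Placeholder: a real system would run a trained manipulation-detection
    model on the face crop and return a probability that it is synthetic."""
    return 0.5  # dummy score for illustration only


def screen_video(path: str, every_n_frames: int = 10) -> list[float]:
    """Sample frames from a video, detect faces, and score each face crop."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    cap = cv2.VideoCapture(path)
    scores, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % every_n_frames == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
                scores.append(score_face(frame[y:y + h, x:x + w]))
        frame_idx += 1
    cap.release()
    return scores
```

Even this kind of per-frame screening tends to struggle when the generation software is sophisticated, which is consistent with the investigators' account of why the fake face is hard to distinguish from the real one.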
What are the legal implications and the possible outcomes of the case?
The case has raised several legal and ethical issues regarding the use and misuse of AI technology, especially in creating and spreading deepfake content. It has also highlighted the need for better regulation and for cooperation between law enforcement agencies and tech firms, at both the national and international levels.
The police have said that they are determined to crack the case and bring the culprits to justice. They are also exploring other options to obtain the data from the tech firms, such as approaching the Ministry of External Affairs or the US Department of Justice. The police have also appealed to the public to refrain from sharing or viewing the video, as doing so is a crime and an invasion of privacy.