{"id":5184,"date":"2025-07-01T01:12:17","date_gmt":"2025-06-30T19:42:17","guid":{"rendered":"https:\/\/lawjurist.com\/?p=5184"},"modified":"2025-07-01T01:25:06","modified_gmt":"2025-06-30T19:55:06","slug":"the-terminator-deepfake-ai-a-threat-to-human-civilization","status":"publish","type":"post","link":"https:\/\/lawjurist.com\/index.php\/2025\/07\/01\/the-terminator-deepfake-ai-a-threat-to-human-civilization\/","title":{"rendered":"THE TERMINATOR DEEPFAKE AI: A THREAT TO HUMAN CIVILIZATION"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-post\" data-elementor-id=\"5184\" class=\"elementor elementor-5184\">\n\t\t\t\t<div class=\"elementor-element elementor-element-2174b19a e-flex e-con-boxed e-con e-parent\" data-id=\"2174b19a\" data-element_type=\"container\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t\t\t<div class=\"elementor-element elementor-element-25d1f7e0 elementor-widget elementor-widget-text-editor\" data-id=\"25d1f7e0\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\n<p>Author: Tummaganti.VamsiBabu Naidu, Advocate and LL.M (Corporate and commerical law)<\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-4816831 e-flex e-con-boxed e-con e-parent\" data-id=\"4816831\" data-element_type=\"container\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t\t\t<div class=\"elementor-element elementor-element-bb5d714 elementor-widget elementor-widget-text-editor\" data-id=\"bb5d714\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><b>Abstract<\/b><b>:<\/b> <span style=\"font-weight: 400;\">Deep\u00a0learning\u00a0has\u00a0been\u00a0remarkably\u00a0effective\u00a0at\u00a0resolving\u00a0challenging\u00a0issues\u00a0in\u00a0a \u00a0 variety\u00a0of\u00a0diverse\u00a0fields, 
including computer vision and human-level control. But <\/span><span style=\"font-weight: 400;\">deep-learning developments (deepfake AI) have also resulted in applications that threaten national security, democracy, and privacy. Artificial intelligence (AI) and machine learning (ML) are used to construct deepfakes: fake digital media that mimic real-world content such as audio, video, and photographs. The technology has both beneficial and harmful potential uses. Deepfake technology is one such application; it uses deep-learning algorithms to create convincingly realistic fake photographs and videos.<\/span><span style=\"font-weight: 400;\"> The article discusses at length the challenges, laws, favourable and unfavourable issues, and detection methods concerning deepfake technologies. The data is collected from various e-resources, websites, and other articles.<\/span><\/p>\n<p><b>Keywords<\/b><span style=\"font-weight: 400;\">:<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Deep learning, <\/span><span style=\"font-weight: 400;\">Generative Adversarial Networks (GANs), Multimodal AI chatbots, Right to privacy, Machine learning, Artificial intelligence, National security, Visual search engine, Information Technology Act, Cyber security, Intellectual property laws.<\/span><\/p>\n<p>\u00a0<\/p>\n<p><b>INTRODUCTION:<\/b><\/p>\n<p><span style=\"font-weight: 400;\">The term &#8220;deepfake&#8221; combines &#8220;deep&#8221;, from AI deep-learning technology, with &#8220;fake&#8221;, reflecting that what is generated is not authentic or true, and encompasses data manipulation through photo editing and censoring. 
Deepfakes are created using a technique called Generative Adversarial Networks (GANs): two competing neural networks, a generator and a discriminator, that together produce fake images or videos that look realistic. Those who target people often collect data from the internet and social media without consent. Deepfakes may be used to falsify evidence, manipulate public opinion, and harm people&#8217;s reputations; they have the potential to scam people and companies, sway elections, propagate false information, and erode confidence in democratic institutions. They can result in social exclusion, loss of employment, and, in severe cases, extortion that can even prove fatal. A deepfake is synthetic media that uses AI to manipulate content and deceive someone intentionally, creating virtual video scenes, audio, photos, and text. Nowadays the most concerning application of AI is the deepfake, regarded alongside disinformation as a top global danger to society; when it comes to telling the real from the fake in an ever more immersive virtual world, a zero-trust mentality will be crucial<\/span><span style=\"font-weight: 400;\">. 
Highly customized and effective forms of manipulation will become possible through the use of deepfakes by agenda-driven, real-time multimodal AI chatbots.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u00a0<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">\u00a0<\/span><b>FAVOURABLE ISSUES:<\/b><\/h2>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Gaming applications.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Entertainment (i.e. cinema, music, artistic expression, etc.).<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Recreating historical figures.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Helping mentally disabled persons express themselves online.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Recovering a lost voice.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Educational uses.<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">The education and medical fields benefit most from this technology.<\/span><\/p>\n<h4><b>UNFAVOURABLE ISSUES:<\/b><\/h4>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">The main concern is the effect on the dignity and well-being of women and celebrities, in multiple ways.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Extortion and blackmail of individuals and organisations.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Broadcasting fake news and misinformation.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Violating the privacy and dignity of individuals, which affects democracy.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Damaging the reputation and credibility of celebrities, politicians, women, and social activists.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Creating fake pornography.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Manipulating elections and public opinion.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Influencing voters.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Provoking religious tensions.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Even leading to coups.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Attackers create new identities and false documents, and fake a real victim&#8217;s voice.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Scams, financial fraud, identity theft, and spreading disinformation.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Misuse of sensitive defence-sector data (attackers can bypass security measures and gain unauthorized access to systems).<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Misuse of corporate intellectual property rights, e.g. trademark and copyright infringement, geographical indications, patents, etc.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Stock manipulation: deepfake technology is used to create and spread false information (fake audio, video, or images related to the stock market) to influence stock prices.<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">SEBI Act, 1992: Section 12A 
&#8211; prohibits manipulative and deceptive practices.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Punishment: fine and imprisonment up to 10 years.<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Celebrity also makes a person an obvious target for AI scammers, who replicate his or her face and voice to create fake endorsements; as the technology becomes more ubiquitous, that kind of impersonation could make us all prone to fraud.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">This misuse of AI and modern technology also feeds a black market.<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">Deepfake videos and AI voice cloning are being used to create fake KYC documents, passports, and driver&#8217;s licenses.<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">The technology has been overwhelmingly used to create nonconsensual pornography. Deepfake pornography is a severe sexual offense that has targeted hundreds of thousands of women.<\/span><\/li>\n<\/ul>\n<p>\u00a0<\/p>\n<h5><b>HOW TO TELL WHETHER IT IS REAL OR FAKE:<\/b><\/h5>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Unnatural body expressions: unnatural eye movements or odd facial expressions that do not fit the audio or context are common in deepfakes.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Awkward head and hair: keep an eye out for strange head and hair rendering, inconsistent facial characteristics, and backdrop anomalies.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Bad lip syncing: at times there may be errors in the voice, or the audio may not match the lip movements exactly.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Facial morphing, patchy skin tone, and uneven or blurred face edges.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">A regulator is needed to keep checks on websites using AI.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Reverse image search: a visual search engine looks for pictures and patterns using an algorithm that can identify them and provide relevant information via a selective or pattern-match approach.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Using blockchain to verify the source: blockchain lets AI deliver more actionable insights, govern data consumption and model sharing, and establish a transparent and reliable data economy by granting access to vast amounts of data from both inside and outside an organization.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Look at the spectacles. 
Is there any glare? Is there too much glare? Does the glare&#8217;s angle change when the individual moves? Deepfakes often fail to accurately capture the physics of illumination.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Keep an eye out for facial moles. Does the mole appear real?<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">These tips are meant to assist those searching for deepfakes. Although high-quality deepfakes can be difficult to spot, practice helps people develop the intuition needed to tell the real from the fake. At Detect Fakes, you may try your hand at spotting deepfakes.<\/span><\/p>\n<p><b>HOW TO PREVENT THE CREATION OF DEEPFAKES: APPLICABLE LAWS.<\/b><\/p>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Online Safety Act 2023 (UK) &#8211; creating, sharing, or threatening to disclose intimate images, including deepfakes, without consent is prohibited by the Act, which adds new offences under the Sexual Offences Act 2003. Courts have the power to seize devices used in these offences, and offenders may be imprisoned for up to two years.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Indian Penal Code 1860.<\/span><\/li>\n<\/ol>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Section 509 (acts intended to insult the modesty of a woman).<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">Punishment: simple imprisonment up to 3 years, and a fine may also be imposed.<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Sections 499 &amp; 500 (criminal defamation): creating a deepfake to harm someone&#8217;s reputation.<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">Punishment: up to 2 years of imprisonment or fine.<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Section 506 (punishment for criminal intimidation).<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">Punishment: up to 2 years, or up to 7 years, or fine, or both, based on the seriousness of the offence.<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Section 505 (promoting enmity or hatred): deepfakes that incite public unrest or violence.<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">Punishment: up to 3 years of imprisonment.<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Section 153A (spreading hatred on communal lines).<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">Punishment: up to 3 or 5 years, or fine, or both, based on the seriousness of the offence.<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Sections 465, 468, 471 (forgery or fabrication of evidence or documents for the purpose of cheating using 
deepfakes).<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">Punishment: 2 to 7 years, depending on the offence.<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Information Technology Act 2000:\u00a0<\/span><\/li>\n<\/ol>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Section 66E: penalties for violating privacy by capturing or transmitting images of a person&#8217;s private area.<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">Punishment: up to 3 years&#8217; imprisonment or fine up to 2 lakh.<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Sections 67 and 67A: publishing or transmitting obscene and sexually explicit material electronically is prohibited.<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">Punishment: 3 to 7 years of imprisonment, based on the offence.<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Section 66D: deals with cheating by personation using a computer resource, such as phony audio and video; for example, anyone using a deepfake to impersonate another person (e.g. a politician or celebrity) for a fraudulent purpose.<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">Punishment: up to 3 years&#8217; imprisonment and\/or fine up to 1 lakh.<\/span><\/p>\n<p><span style=\"font-weight: 400;\"><strong>Limitations:<\/strong> these are reactive laws, applied after harm has been done. They don&#8217;t directly control the production or dissemination of deepfakes.<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Cyber Security Law &#8211; cybersecurity and data protection regulations provide a legal basis for addressing the issues raised by deepfake AI technology. 
Existing laws provide channels for deterrence and remedy, even if no statute specifically addresses deepfakes.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Indecent Representation of Women (Prohibition) Act, 1986 &#8211; deepfakes involving the indecent or sexually suggestive representation of women.<\/span><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\"><strong>Punishment:<\/strong> 2 to 5 years of imprisonment and fine up to 5 lakh.<\/span><\/p>\n<ol start=\"6\">\n<li><span style=\"font-weight: 400;\"> Intellectual property rights (copyright, trademark, etc.) &#8211; the framework of intellectual property rights (IPR) provides little safeguard against the abuse of deepfake technology, especially when it comes to material produced by artificial intelligence. Although certain features are addressed by current rules, large loopholes remain in the protections against the illegal use of names and likenesses.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Digital Personal Data Protection Act (protection of data) &#8211; it seeks to control the processing of personal data and protect people&#8217;s digital privacy, creating a thorough data protection framework that takes India&#8217;s particular requirements into consideration while adhering to international norms.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Information Technology Rules, 2021 (mandate the removal of fake <\/span>content within 36 hours).<\/li>\n<\/ol>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">India doesn&#8217;t have any law specifically for deepfakes, unlike the USA (Deepfake Task Force Act).<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">The Government needs to bring in regulatory and restrictive measures to keep checks on websites using AI.<\/span><\/li>\n<\/ul>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">In 2019, a U.K. energy firm lost about $243,000 when an employee fell victim to a deepfake of a CEO&#8217;s voice.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">In 2024, deepfake sexual images of pop star Taylor Swift flooded social media.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">In September 2019, more than 15,000 deepfake pornographic videos were found online; 96% mapped women&#8217;s faces onto adult performers.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">In October 2021, fraudsters used a cloned voice of a corporate director to swindle $35 million.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Ukrainian president <\/span><span style=\"font-weight: 400;\">Volodymyr Zelensky, former US President Barack Obama, former President Joe Biden, and Mark Zuckerberg have all been victims of deepfake AI.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">People can&#8217;t tell what is real (true) from what is fake.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Public awareness is important so that people can protect themselves from the technology and avoid sharing private information on digital platforms.<\/span><\/li>\n<li style=\"font-weight: 
400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">What you see with your eyes\u00a0 could be\u00a0 untrue\u00a0 and what you\u00a0 hear with your ears could\u00a0 be\u00a0 untrue, thorough investigation\u00a0 is the real truth\u201d.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Before utilizing sophisticated AI techniques that may synthesize voices or faces, user verification (KYC).<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Limit verified researchers or licensed professionals access to open source deepfake models or high-resolution synthesis tools.<\/span><\/li>\n<\/ul>\n<p><b>Notable Examples :<\/b><\/p>\n<p><b>CEO Voice Fraud: Deepfake Scam (2019).<\/b><span style=\"font-weight: 400;\"><br \/><\/span><span style=\"font-weight: 400;\">Application: Deception in finance.<\/span><span style=\"font-weight: 400;\"><br \/><\/span><span style=\"font-weight: 400;\">Incident: By imitating a CEO&#8217;s voice using artificial intelligence, criminals were able to persuade an employee to transfer $243,000.<\/span><span style=\"font-weight: 400;\"><br \/><\/span><span style=\"font-weight: 400;\">Technology: speech synthesis powered by AI.<\/span><span style=\"font-weight: 400;\"><br \/><\/span><span style=\"font-weight: 400;\">consequences: Among the earliest recorded instances of voice fraud powered by AI.<\/span><\/p>\n<p><b>Manoj Tiwari, an Indian politician (2020).<\/b><b><br \/><\/b><span style=\"font-weight: 400;\">Application: Deepfake films for multilingual political campaigns.<\/span><span style=\"font-weight: 400;\"><br \/><\/span><span style=\"font-weight: 400;\">The goal was to simulate multilingual talks in order to reach larger audiences.<\/span><span style=\"font-weight: 400;\"><br \/><\/span><span style=\"font-weight: 400;\">consequences:sparked questions about election campaign legitimacy and misinformation.<\/span><\/p>\n<p><span 
style=\"font-weight: 400;\">\u00a0<\/span><b>Nancy Pelosi video from 2019.<\/b><\/p>\n<p><span style=\"font-weight: 400;\">Application: Political disinformation<\/span><span style=\"font-weight: 400;\"><br \/><\/span><span style=\"font-weight: 400;\">Technologies: The video was modified and slowed down; it was a &#8220;shallow fake&#8221; rather than a deepfake.<\/span><span style=\"font-weight: 400;\"><br \/><\/span><span style=\"font-weight: 400;\">The impact: Raised awareness of politically swayed media; Facebook declined to take it down, claiming the right to free speech.<\/span><\/p>\n<p><b>Judicial Interpretation :<\/b><\/p>\n<p><b>Anil Kapoor versus Simply Life India and Others: (Defamation).<\/b><\/p>\n<p><span style=\"font-weight: 400;\">Renowned Bollywood actor Anil Kapoor sued 16 defendants, including Simply Life India, VisionComputers, and Giphy, for using his name, voice, picture, and famous catchphrase &#8220;jhakaas&#8221; without permission. These organizations have produced and disseminated distorted photos, GIFs, and ringtones with Kapoor&#8217;s picture for profit using AI algorithms. The court acknowledged that Kapoor&#8217;s personality rights might be violated and his reputation could be damaged by such unapproved usage.In terms of safeguarding individual rights in the digital era, especially with regard to the abuse of AI technology, this case establishes a noteworthy precedent in India. It emphasizes the necessity of legal frameworks to protect people&#8217;s rights to their name and likeness and to address the issues raised by AI-generated material. 
In line with worldwide trends, where courts have increasingly recognized the need to safeguard people&#8217;s personas from illegal commercial exploitation, the verdict also emphasizes how crucial it is for India to recognize and uphold personality rights. In light of developing AI technology, this ruling is an important step in guaranteeing that people and celebrities have control over how their identities are used.<\/span><\/p>\n<p><b>Justice K.S. Puttaswamy (Retd.) &amp; Anr. vs. Union of India &amp; Ors. (right to privacy).<\/b><\/p>\n<p><span style=\"font-weight: 400;\">The Court ruled that the protection of privacy is an essential component of the freedoms provided by Part III of the Constitution as well as the right to life and personal liberty under Article 21. Privacy was held to be essential to human autonomy and dignity. The ruling acknowledged a number of facets, including decisional autonomy, physical privacy, and informational privacy. By emphasizing data privacy and individual consent, this ruling set the stage for further examination in the Aadhaar case (2018), even though it did not directly address the legitimacy of Aadhaar. Deepfakes frequently exploit voice and facial data to create phony videos, infringing on people&#8217;s privacy and compromising their dignity. Deepfakes are commonly employed for malicious impersonation (such as political manipulation and fake pornography), which violates the autonomy and dignity of the subject. The fundamental privacy standards upheld in Puttaswamy are violated when deepfakes are produced or disseminated without consent.<\/span><\/p>\n<p><b>Arijit Singh v. Codible Ventures LLP.<\/b><\/p>\n<p><span style=\"font-weight: 400;\">Arijit Singh sought protection of his name, voice, signature, portrait, image, caricature, likeness, persona, and other characteristics against unapproved commercial usage. 
The defendants allegedly employed artificial intelligence (AI) techniques to sell products bearing his image without permission, produce deepfake videos, and mimic his voice. These acts were said to have infringed his moral rights under Section 38-B of the Copyright Act, 1957. In his decision, Justice R.I. Chagla recognized Singh&#8217;s standing as a &#8220;notable singer&#8221; with &#8220;immense goodwill and reputation&#8221; and &#8220;celebrity status.&#8221; Celebrities have a right to have their personality features shielded from unapproved commercial use, the court emphasized. It was observed that the defendants were violating Singh&#8217;s publicity and personality rights by using his fame to increase traffic to their platforms and generate revenue without his permission.<\/span><\/p>\n<p><b>Shreya Singhal v. Union of India (2015).<\/b><\/p>\n<p><span style=\"font-weight: 400;\">Deepfake AI regulation and online expression in the digital era will be greatly affected by the Supreme Court&#8217;s ruling in Shreya Singhal v. Union of India (2015) 5 SCC 1, which invalidated Section 66A of the IT Act, 2000 for being ambiguous and infringing upon Article 19(1)(a) (freedom of speech). Section 66A had declared illegal online communication that was &#8220;grossly offensive,&#8221; &#8220;menacing,&#8221; or likely to cause &#8220;annoyance&#8221; or &#8220;inconvenience.&#8221;<\/span> <span style=\"font-weight: 400;\">The court also determined that Section 69A of the IT Act, which allows blocking public access to material, is constitutionally valid. The case raised important questions about the limits and scope of free expression in the digital age, as well as the balance between government control and individual liberty in the online world.<\/span><\/p>\n<p><b>Mahendra Patel was arrested in India in 2025 for disseminating a deepfake video.<\/b><span style=\"font-weight: 400;\"><br \/><\/span><span style=\"font-weight: 400;\">Mahendra Patel, a resident of Vansda, Gujarat, was arrested in 2025 for allegedly uploading a deepfake video of Prime Minister Narendra Modi in a WhatsApp group called &#8220;Enjoy Group.&#8221; This was a significant milestone in India. Along with derogatory remarks on Operation Sindoor and the Indian Army, the video showed a staged attack scenario involving the prime minister. According to authorities, the content incited fear and circulated false information, endangering public safety and national security. Patel, who was identified via his cell phone and a Facebook link, was charged under Sections 197(1)(d) and 353(1)(b) of the Bharatiya Nyaya Sanhita (BNS) and Section 66C of the Information Technology Act with producing and disseminating the offensive content. The case serves as a reminder of the value of good digital citizenship and the need to be aware of the potential legal repercussions of disseminating manipulated material online. People must be cautious and confirm the legitimacy of material before sharing it, especially as deepfake technology develops. 
This case could provide a precedent for future legal proceedings pertaining to the distribution of AI-generated content, as the legal environment around deepfakes in India is constantly evolving.<\/span><\/p>\n<p><b>Hugh Nelson Found Guilty of Creating Child Abuse Pictures Using AI (UK, 2024).<\/b><span style=\"font-weight: 400;\"><br \/><\/span><span style=\"font-weight: 400;\">Hugh Nelson was found guilty of using Daz 3D&#8217;s technology to produce and disseminate artificial intelligence (AI)-generated pictures of child abuse. Nelson admitted 16 child sexual abuse offences, including turning photographs of children into offensive content. He received an extended sentence of 18 years, meaning he will not be eligible for release until he has completed two-thirds of his term. This case highlights the severe legal repercussions of such violations and establishes a precedent for how AI-generated abusive imagery is treated in court proceedings.<\/span><\/p>\n<p><b>Amitabh Bachchan v. Rajat Negi &amp; Ors. (India, 2023).<\/b><\/p>\n<p><span style=\"font-weight: 400;\">The Delhi High Court dealt with the illegal use of deepfake technology to imitate Amitabh Bachchan&#8217;s appearance and demeanor. Without Bachchan&#8217;s permission, the defendants produced and distributed AI-generated material that altered his voice and appearance, infringing on his privacy and personality rights. The court&#8217;s ad-interim ex-parte order prohibited the defendants from using Bachchan&#8217;s name, image, voice, or any other characteristic specifically associated with him for profit or personal benefit. The court further ordered telecom service providers to restrict access to phone numbers used by the defendants to spread the illegal material and ordered the removal of infringing content from internet platforms. 
As this case demonstrates, deepfake technology, which creates realistic-looking but phony audio and video content using sophisticated AI techniques like deep learning and generative adversarial networks (GANs), is an increasing problem. Misuse of this technology to impersonate someone can result in privacy breaches and damage to reputation. Although deepfakes are not specifically covered by Indian law, existing statutes like the Indian Penal Code and the Information Technology Act, 2000 provide legal options for addressing this misuse. The court&#8217;s ruling in this case highlights the necessity for legal frameworks to address the issues presented by developing technologies like deepfakes and establishes a precedent for safeguarding people&#8217;s personality rights against the exploitation of AI-generated content.<\/span><\/p>\n<h5><b>\u00a0<\/b><b>Suggestions:\u00a0<\/b><\/h5>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">The Union Government needs to bring in strict laws and actions to control deepfakes.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Raise awareness and teach people about deepfakes; otherwise deepfakes may undermine faith in authentic footage, leading to a credibility crisis for video evidence.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Use detection platforms such as Sensity AI, which apply deep learning to identify indicators of synthetic media, much as antimalware programs hunt for virus and malware signatures; when a user encounters a deepfake, they receive an email notification.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">In my opinion, every piece of media should carry a unique, semi-transparent code that tracks the exact date, channel, creator, etc. 
If you enter this code into a search engine and discover that the content you are viewing may be fake, you can access an archive of the original work. This alone, however, could not prevent state-sponsored historical revisionism: the original work could also be altered unless the archive is continuously replicated by an independent, pro-democracy organization, backed by strict laws addressing deepfake AI and the misuse of new technologies against individuals and society.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">For transparency, require disclosure tags or invisible watermarks on AI-generated content.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Strengthen notice-and-takedown procedures for deepfake-related complaints (particularly political manipulation, CSAM, and impersonation).<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Promote cross-border frameworks to combat deepfake crimes, particularly CSAM, election interference, and impersonation fraud.<\/span><span style=\"font-weight: 400;\"><br \/><\/span><span style=\"font-weight: 400;\">Adopt or support international standards for synthetic media and AI ethics (such as the UNESCO and OECD AI principles).<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Report deepfake material to platforms (per the IT Rules, 2021).<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">File a formal complaint for impersonation, defamation, or obscenity under the IT Act\/IPC.<\/span><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<ul>\n<li aria-level=\"1\"><span style=\"font-weight: 400;\">For prompt action, consult legal counsel and cybercrime cells.<\/span><\/li>\n<\/ul>\n<p><b>Conclusion:<\/b><\/p>\n<p><span 
style=\"font-weight: 400;\">Understanding the combination of modern technology and AI, particularly in the case of deep fakes, as well as the societal implications, is critical if this technology is to be properly contextualized and the difficulties it presents handled successfully. The debates show that the integration of technology and deep fakes has a variety of effects on people, society as a whole, and future generations. Researchers can create efficient ways to identify and lessen the negative consequences of deepfakes by comprehending the methodologies used to create them.The development of dependable and effective deepfake detection techniques is crucial to halting the spread of misinformation, hate speech, and political unrest. Robust detection algorithms can be employed to detect and mark manipulated media content, hence mitigating the probable adverse outcomes linked to deepfakes. Additionally, it is critical to educate the general public about deepfakes. People may become more critical media consumers and be better able to discern between authentic and manipulated material by learning about the existence of deepfakes and their ramifications. This could possibly lessen the damage that deepfakes do to people&#8217;s trust.This technology is always considered as new, so it is challenging to detect and combat in the future. 
The problem lies in how humans use technology, for good or for ill: AI itself is not the problem, but how we use it.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Is prevention possible at all, or are regulation and detection the only attainable goals?\u00a0<\/span><\/p>\n<p><i><span style=\"font-weight: 400;\">Is this the \u201cbeginning of the end\u201d? Will it destroy the future of humanity?<\/span><\/i><\/p>\n<p><b>Bibliography:<\/b><\/p>\n<p><span style=\"font-weight: 400;\">https:\/\/www.brookings.edu\/articles\/artificial-intelligence-deepfakes-and-the-uncertain-future-of-truth\/<\/span><\/p>\n<p><span style=\"font-weight: 400;\">https:\/\/www.wipo.int\/wipo_magazine\/en\/2022\/02\/article_0003.html<\/span><\/p>\n<p><span style=\"font-weight: 400;\">https:\/\/www.techtarget.com\/whatis\/definition\/deepfake<\/span><\/p>\n<p><span style=\"font-weight: 400;\">https:\/\/www.thehindu.com\/sci-tech\/technology\/the-danger-of-deepfakes\/article66327991.ece<\/span><\/p>\n<p><span style=\"font-weight: 400;\">https:\/\/www.researchgate.net\/publication\/342795647_Artificial_Intelligence_in_Digital_Media_The_Era_of_Deepfakes<\/span><\/p>\n<p><span style=\"font-weight: 400;\">https:\/\/papers.ssrn.com\/sol3\/papers.cfm?abstract_id=4651093<\/span><\/p>\n<p><span style=\"font-weight: 400;\">https:\/\/www.mondaq.com\/india\/copyright\/1506050\/decoding-the-protection-of-personality-rights-in-india-arijit-singh-v-codible-ventures-llp-ors#:~:text=Recently%2C%20the%20Bombay%20High%20Court,likeliness%20and%20other%20personality%20traits<\/span><\/p>\n<p><a href=\"https:\/\/www.livelaw.in\/high-court\/delhi-high-court\/delhi-high-court-deepfake-ai-technology-267918\"><span style=\"font-weight: 400;\">https:\/\/www.livelaw.in\/high-court\/delhi-high-court\/delhi-high-court-deepfake-ai-technology-267918<\/span><\/a><\/p>\n<p><span style=\"font-weight: 400;\">Arijit Singh v. 
Codible Ventures LLP, COM IPR SUIT (L) NO. 23443 OF 2024.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">https:\/\/www.ijnrd.org\/papers\/IJNRD2310407.pdf<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Artificial Unintelligence: How Computers Misunderstand the World by Meredith Broussard.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Deepfakes: The Coming Infocalypse by Nina Schick.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A Comprehensive Review of DeepFake Detection Using Advanced Machine Learning and Fusion Methods.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Deepfakes Generation and Detection: State-of-the-art, open challenges, countermeasures, and way forward.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Microsoft warns against &#8216;deepfake fraud&#8217; and deceptive AI as the company begs government leaders to take action.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Artificial Intelligence and Political Deepfakes: Shaping Citizen Perceptions Through Misinformation<\/span><span style=\"font-weight: 400;\"><br \/><\/span><span style=\"font-weight: 400;\">This article examines how AI-generated deepfakes influence public opinion and democracy, highlighting the challenges in detection and regulation.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Code Dependent: Living in the Shadow of AI by Madhumita Murgia<\/span><span style=\"font-weight: 400;\"><br \/><\/span><span style=\"font-weight: 400;\">This book explores the darker side of AI, including issues like deepfake pornography and predictive policing, shedding light on the exploitation inherent in AI 
development.<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"<p>Author: Tummaganti.VamsiBabu Naidu, Advocate and LL.M (Corporate and commerical law) Abstract: Deep\u00a0learning\u00a0has\u00a0been\u00a0remarkably\u00a0effective\u00a0at\u00a0resolving\u00a0challenging\u00a0issues\u00a0in\u00a0a \u00a0 variety\u00a0of\u00a0diverse\u00a0fields, including\u00a0computer\u00a0vision, human-level\u00a0control, But the Deep-learning developments(deepfakeAI) have resulted in applications that threaten national security, democracy, and privacy. Artificial intelligence(AI) and machine learning(ML) are used to construct deepfakes, which are fake digital media that mimic real-world content, such as audio, video, and [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":5037,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[85],"tags":[],"_links":{"self":[{"href":"https:\/\/lawjurist.com\/index.php\/wp-json\/wp\/v2\/posts\/5184"}],"collection":[{"href":"https:\/\/lawjurist.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lawjurist.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lawjurist.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/lawjurist.com\/index.php\/wp-json\/wp\/v2\/comments?post=5184"}],"version-history":[{"count":4,"href":"https:\/\/lawjurist.com\/index.php\/wp-json\/wp\/v2\/posts\/5184\/revisions"}],"predecessor-version":[{"id":5188,"href":"https:\/\/lawjurist.com\/index.php\/wp-json\/wp\/v2\/posts\/5184\/revisions\/5188"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/lawjurist.com\/index.php\/wp-json\/wp\/v2\/media\/5037"}],"wp:attachment":[{"href":"https:\/\/lawjurist.com\/index.php\/wp-json\/wp\/v2\/media?parent=5184"}],"wp:term":[{"taxonomy":
"category","embeddable":true,"href":"https:\/\/lawjurist.com\/index.php\/wp-json\/wp\/v2\/categories?post=5184"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lawjurist.com\/index.php\/wp-json\/wp\/v2\/tags?post=5184"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}