Author: Niharika Kaithwas, a 3rd Year B.A. LL.B. (Hons.) student at Sri Sathya Sai Law College for Women, Bhopal, Madhya Pradesh
Abstract
The digital age has transformed the contours of personality rights. Artificial intelligence (AI), notably deepfakes, voice cloning, and algorithmic identity replication, has created unprecedented challenges for courts and legislatures. This article scrutinizes judicial trends in India and abroad, pinpointing how courts are protecting digital personas from AI misuse. It analyses doctrinal foundations, statutory frameworks, and landmark cases, while proposing reforms to safeguard human dignity in the digital era of algorithmic identity.
Key words – Personality Rights, Algorithmic identity, Digital Personas, Human dignity
Introduction
In addition to being a socio-technological problem, the relationship between algorithmic identity and personality rights is a complex legal conundrum that is still developing. Algorithmic identity goes far beyond the conventional definition of personality rights, which previously focused on celebrity photographs, endorsements, or likenesses. It represents a new frontier where machines replicate, simulate, or even forecast human qualities. Today, technologies such as speech synthesis, facial impersonation, and behavioural prediction produce digital personas that are susceptible to abuse, manipulation, and commercialisation. Serious questions of autonomy, consent, dignity, and the fundamental nature of human identity arise when these artificial identities are misused.
Courts in India have started to acknowledge that the right to life and personal liberty guaranteed by Article 21 of the Constitution must be extended into the digital realm. In a time when algorithmic profiling, viral deepfakes, and AI-generated impersonations may quickly damage people’s reputations, skew public opinion, and undermine trust, this jurisprudential enlargement is essential. Therefore, the judiciary’s duty is both reactive by offering remedies that adjust to the rate of innovation and anticipatory by developing doctrines that anticipate technological exploitation. In order to ensure that technological advancement does not come at the expense of human dignity, courts must strike a balance between protection against exploitation and freedom of expression.
Algorithmic identification also pushes the limits of data security and privacy. Algorithmic impersonation involves intangible but highly personal characteristics, such as tone of voice, micro-expressions, or behavioural inclinations, that are more difficult to control than traditional infringements, where unapproved use of a photograph or endorsement could be sued. This poses important questions: Who is the owner of a person’s digital replica? When algorithms anticipate actions that a person is unaware of, can permission be meaningfully given? When artificial content goes viral across channels and damages someone’s reputation, how should blame be allocated?
By extending Article 21 into cyberspace, the Indian judiciary acknowledges that human rights and digital autonomy are inextricably linked. Injunctions against misuse, acknowledging algorithmic impersonation as a breach of personality rights, and creating new torts to address reputational harm in the digital era are among possible remedies.
Simultaneously, anticipatory legislation becomes crucial, such as regulating AI systems to be transparent, implementing consent procedures, and holding platforms that host deepfakes accountable. In the end, the problem is to make sure that technology promotes human flourishing rather than undermining it, protecting people’s autonomy and dignity in a world where identity itself may be copied, altered, and profited from.
This article explores:
- The doctrinal foundations of personality rights
- The emergence of algorithmic identity
- Judicial trends in India and abroad
- Challenges in balancing innovation with dignity
- Lessons for future statutory codification
Doctrinal Foundations of Personality Rights
Personality rights now extend beyond the protection of names and likenesses to encompass algorithmic identity.
Three pillars support the doctrinal foundation:
Autonomy: People must retain control over the digital representation of their identity.
Dignity: The constitutional value of human dignity is compromised by the misuse of digital identities.
Consent: Informed consent is required before any persona replication, particularly when using AI training datasets.
Constitutional Basis: In Justice K.S. Puttaswamy (Retd.) v. Union of India, the Supreme Court held that privacy and dignity are protected under Article 21 of the Indian Constitution. Courts have since interpreted digital personas as a component of the “right to life and personal liberty.”
Judicial Expansion: Indian courts are beginning to acknowledge personality rights as a separate area of privacy, emphasising personal control over one’s online persona.
Statutory Anchors
Copyright Act, 1957
The Copyright Act, 1957 safeguards performers’ rights by prohibiting the commercial exploitation of their voice, gestures, and expressions without permission. Section 38A gives performers the exclusive right to make sound and visual recordings of their performances. Under Section 38B, which grants moral rights, performers can object to misuse or distortion of their work. In the context of AI, this extends to unauthorised voice cloning and synthetic performance replication.
Trade Marks Act, 1999
To prevent unauthorised commercial use, celebrities and prominent figures frequently register their names, signatures, or likenesses as trademarks. Trademark protection ensures that consumers cannot be deceived by AI-generated endorsements or impersonations. Although trademarks offer a legal means of protecting identity in commerce, courts have acknowledged that publicity rights are distinct from trademark rights.
Information Technology Act, 2000
The Act contains provisions on identity theft, cybercrime, and data protection, as well as remedies against the misuse of digital content. Section 66D of the Information Technology Act, 2000 (IT Act), which penalises cheating by personation using computer resources, may cover AI-driven impersonation. Section 67 of the IT Act prohibits the publication of obscene content, which is relevant to the exploitation of deepfakes. Although these provisions are not specifically tailored to personality rights, courts have applied them imaginatively to instances of AI misuse.
Common law tort of passing off
The common law tort of passing off prohibits misrepresenting someone’s identity or persona in a way that harms their goodwill or reputation. Originally applied in commercial settings (such as fraudulent endorsements), it now extends to digital identities. Courts have employed passing off to restrain unauthorised AI-generated content that causes public confusion or deception.
Algorithmic Identity and AI Misuse
- Deepfakes: AI-generated videos that mimic real people and are frequently exploited to deceive audiences and damage reputations.
- Voice cloning: Unauthorised replication of speech patterns, a growing concern for singers, actors, and public figures.
- Synthetic identity: AI systems that produce biometric “truths” that compete with human testimony and test evidential norms.
- Judicial concern: To balance innovation with dignity, courts must decide whether the misuse of algorithmic identities constitutes misappropriation.
Judicial Trends in India
Significant Cases:
Amitabh Bachchan v. Rajat Sharma & Ors.
The Delhi High Court granted a dynamic injunction against misuse on digital platforms, recognising Bachchan’s personality rights against the unauthorised use of his voice and image.
Sadhguru Jagadish Vasudev & Anr. v. Igor Isakov & Ors.
The Delhi High Court emphasised the dignity and authenticity of digital personas while restraining the dissemination of deepfake videos.
Asha Bhosle v. Mayk Inc
The Bombay High Court upheld performers’ rights under the Copyright Act by providing protection against unauthorised voice cloning.
Kumar Sanu v. AI Platform
The Delhi High Court upheld the protection of the singer’s voice and image, stressing that commercial exploitation without consent is actionable.
Titan Industries Ltd. v. M/s Ramkumar Jewellers
This case laid the foundation for contemporary jurisprudence by recognising celebrity endorsement rights, even though it predates AI misuse.
ICC Development (International) Ltd. v. Ever Green Service Station & Anr.
This case strengthened the conceptual basis by establishing the independence of publicity rights from trademark rights.
Judicial Innovation:
Indian courts have demonstrated remarkable innovation:
Dynamic injunctions: These prevent future abuse by enabling orders to evolve with technology. For instance, in Amitabh Bachchan v. Rajat Sharma, the Delhi High Court crafted a flexible injunction prohibiting unauthorised use of Bachchan’s persona.
Preventive Remedies: Courts increasingly prioritize prevention over compensation, recognizing that reputational harm in the digital age is often irreparable.
Intermediary Liability: Reflecting a move toward proactive regulation, platforms are increasingly held responsible for hosting AI-generated abuse.
These patterns show that, even in the absence of formal codification, the judiciary is prepared to modify doctrine to accommodate technological realities.
Comparative Global Perspectives
United States: Although each state frames the right of publicity differently, decisions such as Zacchini v. Scripps-Howard Broadcasting Co. (1977) demonstrate the judiciary’s readiness to defend performance identity. Nevertheless, the fragmented state-level approach produces discrepancies.
European Union: The General Data Protection Regulation (GDPR) prioritises consent and data protection. Digital personas are covered by the right to be forgotten, which gives people control over their algorithmic identities.
China: Deepfake prevention laws forbid damaging impersonation and mandate disclosure. A regulatory mindset that places more emphasis on deterrent than litigation is reflected in this proactive approach.
Emerging Jurisdictions: Nations like Japan and South Korea are investigating hybrid models that combine judicial innovation and statutory codification.
Challenges and Doctrinal Shifts
Authorship and Mens Rea: AI-generated misuse frequently lacks a human author, complicating the attribution of culpability. Are platforms, developers, or deployers accountable?
Consent in Algorithmic Training: There are moral and legal concerns when personal data is used to train AI without consent. It might be necessary for courts to develop a theory of “algorithmic informed consent.”
Balancing Rights and Innovation: Overregulation risks stifling innovation, while underregulation risks undermining dignity. Innovation coupled with responsibility is the crucial middle ground.
Cross–Border Enforcement: AI abuse frequently occurs across national boundaries. To guarantee adequate remedies, international treaties or harmonised standards might be required.
Lessons from Judicial Trends
There are various lessons to be learned from judicial innovation:
- Acknowledgement of Digital Dignity: Courts are beginning to see digital identification as essential to human dignity.
- Article 21: Privacy jurisprudence was expanded to include algorithmic identity, strengthening constitutional protections.
- Codification is essential: enforcement remains piecemeal in the absence of clear statutes. Remedies could be consolidated under a dedicated Personality Rights statute.
- Intermediaries’ role: In accordance with global best practices, platforms must implement proactive monitoring.
Conclusion
In the era of artificial intelligence, judicial developments in India and worldwide show a complex acknowledgement of personality rights. To stop algorithmic identity abuse, courts have implemented remedies like dynamic injunctions, intermediary liability, and platform accountability. Recognition that AI-driven impersonations, deepfakes, and synthetic personas constitute abuses of autonomy, privacy, and dignity rather than mere reputational damages is reflected in these advances.
Statutory codification remains essential, even when judicial ingenuity has produced temporary fixes. To prevent fragmented remedies, a legislative framework would provide openness, predictability, and consistency in enforcing rights. Codification would also offer security across settings, giving people confidence to confront violations of digital identities.
Similar to the EU’s GDPR, which upholds consent, transparency, and accountability in data processing, this may entail incorporating personality rights into Indian data protection laws or drafting a separate law addressing algorithmic impersonation. American jurisprudence on the right of publicity, which shields individuals against unlawful commercial exploitation of likeness, provides a comparative paradigm for balancing free expression and autonomy.
The conflict between personality rights and algorithmic identity is not limited to celebrity protection or economic endorsement. It is a broader struggle to maintain dignity, autonomy, and consent in the digital era. As technology blurs the distinction between authentic and synthetic identity, the law must anticipate misuse, respond to risks, and affirm that humans cannot be reduced to data points or forecasts. Courts, lawmakers, and regulators must collaborate to ensure innovation aligns with constitutional ideals and human rights.
Finally, recognising and protecting personality rights in this frontier means preserving identity itself. By incorporating safeguards into statute law, India and other countries may ensure the digital era reinforces autonomy, dignity, and liberty rather than undermining them amid technological change.

