Author: Sakshi Tripathi, student of BBA.LLB (3rd Year), United University, Prayagraj
Introduction
Artificial intelligence (AI) has moved from futuristic idea to everyday reality in today's rapidly changing digital world. It can be found in self-driving cars, content curation, smart surveillance systems, smartphone apps, and even the legal system. An important question that emerges as AI is incorporated into Indian legal and law enforcement procedures is whether machine-generated evidence can be admitted into court and, if so, how to ensure that it is fairly regulated.
In Indian legal discussions, the subject of AI-generated evidence is becoming more and more prominent. This kind of evidence, which includes algorithmic reconstructions, deepfake detection, predictive crime mapping, and facial recognition data, presents both tremendous opportunities and formidable obstacles. Although Indian courts have not yet reached a definitive ruling on the admissibility of such evidence, this issue will soon take centre stage. This article will look at what AI-generated evidence is, how Indian laws currently handle it (or don't), the ethical and legal issues it brings up, and what changes might be required to maintain accountability in technology and justice.
The advancement of artificial intelligence (AI) has changed how we create, store, and analyse data. Deepfakes, AI-written documents, facial recognition data, and predictive policing tools are increasingly being considered as potential evidence in legal systems worldwide, including India. This technological development, however, poses a serious challenge to established standards of evidence. Can evidence produced by AI be used in Indian courts? If so, under what conditions? This article examines the admissibility, reliability, and legal challenges of AI-generated evidence in Indian courts.
Understanding AI-Generated Evidence
Digital data or content created entirely or in part by artificial intelligence systems is referred to as AI-generated evidence. This comprises:
- Deepfake audio or video recordings
- AI-generated documents or emails
- Facial recognition matches
- AI-powered predictive analysis
- Conversation logs from chatbots
In contrast to traditional evidence, AI-generated materials are not the direct result of human intention or action, which raises significant questions regarding manipulation, authorship, and dependability.
Legal Framework in India
India's rules of evidence are anchored in the Indian Evidence Act of 1872. This colonial-era law, however, has struggled to adjust to the new complexities brought about by the digital and artificial intelligence eras.
Section 65B – Electronic Evidence
Section 65B of the Indian Evidence Act governs the admissibility of electronic evidence. Under this section, electronic records accompanied by a certificate attesting to their integrity and provenance may be used as evidence in court. However, this provision was designed with conventional digital formats in mind (emails, SMS messages, computer files, and digital photos), not the autonomous outputs of self-learning AI systems. Therefore, even though AI-generated outputs could be regarded as electronic records in theory, there isn't a formal legal procedure in place to confirm how an AI system came to its conclusions. When the AI's decision-making process is opaque, a straightforward Section 65B certificate might not be enough.
The Bharatiya Sakshya Adhiniyam, 2023
The Bharatiya Sakshya Adhiniyam, 2023, the recently enacted evidence law replacing the Indian Evidence Act, is an attempt to update India's evidentiary regime. Even though it acknowledges digital evidence more explicitly, it still lacks specific provisions to handle machine-generated outputs or artificial intelligence (AI) analyses. This legislative ambiguity creates a grey area that may result in uneven application of the law.
Judicial Precedents and Current Practice
Although Indian courts have started to consider digital evidence, few decisions specifically deal with AI-generated information. The Supreme Court made clear in Anvar P.V. v. P.K. Basheer (2014) that electronic evidence must comply with Section 65B. More recently, in Arjun Panditrao Khotkar v. Kailash Kushanrao Gorantyal (2020), the Court reaffirmed that digital evidence is inadmissible without Section 65B certification. These decisions, however, address electronic evidence in general rather than AI-generated data specifically, where the issue is not only authenticity but also the reasoning behind the output, which AI systems frequently do not disclose.
Admissibility Difficulties
- Authentication: How can it be demonstrated that evidence produced by AI has not been altered? For example, deepfakes are notoriously hard to spot without specialised equipment.
- Chain of Custody: AI systems might not keep thorough records of all data input, processing, and output, which makes the chain of custody harder to manage (see the illustrative sketch after this list).
- Reliability and Bias: AI systems can only be as objective as the data they are trained on. Facial recognition systems, for instance, have shown higher error rates for certain ethnic groups. Using such results as evidence may offend principles of fairness.
- Absence of Regulation: India currently has no specific legislation governing the use of AI in court. This regulatory gap makes judicial interpretation unpredictable and inconsistent.
- Expert Testimony: Interpreting AI results frequently calls for in-depth knowledge of data science, machine learning models, and algorithms, expertise that is still in its infancy in Indian legal proceedings.
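To make the authentication and chain-of-custody points concrete, here is a minimal illustrative sketch of how a digital exhibit's integrity could be anchored with cryptographic hashes and a timestamped custody log. This is not a forensic standard; the file name, actors, and manifest format are all hypothetical.

```python
# Illustrative sketch only: the file name, actors, and log format are
# hypothetical; real forensic practice follows standardised procedures.
import hashlib
import json
from datetime import datetime, timezone

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_custody_event(log: list, path: str, actor: str, action: str) -> None:
    """Append a timestamped, hash-stamped entry to a chain-of-custody log."""
    log.append({
        "file": path,
        "sha256": sha256_of_file(path),
        "actor": actor,
        "action": action,
        "utc_time": datetime.now(timezone.utc).isoformat(),
    })

# Create a placeholder file standing in for a real exhibit.
with open("exhibit_video.mp4", "wb") as f:
    f.write(b"placeholder bytes standing in for real footage")

custody_log: list = []
record_custody_event(custody_log, "exhibit_video.mp4", "Investigating Officer", "seized")
record_custody_event(custody_log, "exhibit_video.mp4", "Forensic Analyst", "examined")

# If a later re-hash differs from the recorded digest, the exhibit was
# altered somewhere between the two custody events.
assert custody_log[0]["sha256"] == custody_log[1]["sha256"]
print(json.dumps(custody_log, indent=2))
```

The point is not the code itself but the principle: every transfer or examination of an exhibit leaves a verifiable fingerprint that a court can check, which is precisely what many AI pipelines currently fail to record.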
AI and Deepfakes: A Looming Threat to Evidence Integrity
Deepfake technology, a particularly hazardous offshoot of artificial intelligence, poses a serious threat to the Indian legal system. Deepfakes use AI to produce highly realistic but completely fake audio, video, or image content, and can be very challenging to spot with the unaided eye. This technology presents a terrifying prospect for law enforcement and courts: fake evidence that appears to be authentic. Consider a deepfake video that uses publicly accessible voices and images to depict an innocent person committing a crime. Such a video could cause irreparable harm to justice if it is used as evidence without undergoing a rigorous technical review. Furthermore, because deepfakes are now simple to create using free tools, even a layperson can produce convincingly false evidence.
Deepfakes are not only a cyberthreat in India, where digital literacy is still in its infancy, but they also have the potential to be used as a tool for character assassination, false implication, or communal unrest. Fake videos of celebrities and politicians have recently gone viral in an effort to sway public opinion, and it is not implausible to assume that litigants will try to use similar content in civil or criminal cases in the future.
At present, India lacks a specialised legal or forensic framework to verify the authenticity of AI-generated audio or video evidence. Section 65B of the Indian Evidence Act (and its updated counterpart in the Bharatiya Sakshya Adhiniyam, 2023) mandates certain certification requirements for electronic records, but these provisions were never designed to deal with deepfakes and other synthetic content.
To combat this threat, Indian courts may soon have to:
- Create standardised forensic procedures to check for the presence of deepfakes.
- Require digital submissions to include disclosures about AI-generated content.
- Work together with cybersecurity specialists and international AI watchdogs.
- Educate judges on how this kind of deception can happen.
The challenge lies not just in detecting deepfakes, but also in ensuring that real evidence is not wrongly dismissed as fake due to suspicion. The solution lies in balancing technical expertise with legal sensitivity, and in creating a legal environment where truth is not just sought but technologically verified.
As AI tools get smarter, so must our courts.
Use of AI by Law Enforcement in India
Numerous Indian law enforcement organisations have started experimenting with AI tools, frequently without public discussion or legislative approval.
- Facial Recognition: Police departments in Hyderabad, Delhi, and elsewhere have deployed facial recognition systems (FRS) for surveillance and investigation.
- Crime Mapping and Prediction: Some cities employ tools that forecast future crime hotspots based on historical crime data.
- Interrogation Tools: Despite their dubious reliability, experimental tools such as voice stress analysis and AI lie detectors are occasionally employed during interrogations.
What unites all of these deployments is the absence of supervision, transparency, and judicial accountability. In a criminal justice system where people's rights are at stake, that is dangerous.
International Developments: A Learning Opportunity
India is not the only country grappling with the legal issues raised by AI. Several nations have already addressed the question of AI-generated evidence:
United States
In State v. Loomis (2016), a risk-assessment algorithm was used in sentencing. The defendant challenged the algorithm's opacity; the court permitted its use but cautioned against exclusive reliance on it. In the United States, AI evidence also sits uneasily with the Sixth Amendment, which protects the right to confront and cross-examine witnesses.
European Union
The EU's AI Act (2024) places judicial and law enforcement AI under the "high-risk" category and mandates accountability, transparency, and human oversight. European courts are adopting a rights-first stance.
United Kingdom
In Bridges v. South Wales Police (2020), the UK Court of Appeal held that the police use of facial recognition technology was unlawful, in part because the legal framework governing it was unclear. India can learn a great deal from these cases and put protective barriers in place sooner rather than later.
Ethical and Constitutional Dilemmas
Right to a Fair Trial (Article 21)
The use of opaque AI-generated evidence may violate the right to a fair trial. An individual cannot meaningfully contest a "decision" made by an algorithm whose logic is inaccessible.
Right to Privacy (Puttaswamy Judgment)
AI tools like surveillance analytics and facial recognition gather and process large volumes of personal data. In the absence of safeguards, this infringes upon the right to privacy recognised in Justice K.S. Puttaswamy v. Union of India (2017).
Presumption of Innocence
The presumption that a person is innocent until proven guilty may be threatened by predictive policing or risk-scoring algorithms that cast suspicion on someone without any concrete proof of a crime.
The Road Ahead: Recommendations for Reform
India must quickly embrace a forward-thinking, rights-based approach to AI-generated evidence to make sure justice is not lost to technology.
1. Enact Specific Legislation on AI in Criminal Justice
A comprehensive law governing the use, limits, and admissibility of AI tools in criminal and civil matters is needed. This law must cover:
- Certification of AI tools
- Transparency requirements
- Disclosure of training data
- Independent audits
2. Establish a Legal Definition of AI Evidence
Legislators and courts need to clarify what "AI-generated evidence" is and set it apart from conventional electronic evidence.
3. Mandate Explainability and Open Access
Legal proceedings should only use AI tools whose operation can be audited and explained; a toy illustration of what explainable output looks like follows this list. Black-box algorithms have no place in criminal trials.
4. Include Human Supervision at All Levels
No AI-generated evidence should be used without a human expert confirming its validity. Judges need to be trained to assess these tools critically rather than accept their outputs at face value.
5. Form a Committee on Judicial Technology
To examine and authorise AI tools for legal use, a national committee comprising judges, technologists, ethicists, and lawyers ought to be formed.
6. Invest in Training and Legal Education
AI literacy must be taught in law schools and judicial academies. Future judges and lawyers need to understand the strengths and weaknesses of algorithmic systems.
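To illustrate what the explainability recommendation could mean in practice, the following toy sketch contrasts an itemised, contestable score with an unexplained verdict. The factors and weights are entirely hypothetical and carry no empirical validity; the point is only that every contribution to the output is visible and therefore open to challenge.

```python
# Toy illustration of "explainable" output: the factors and weights below
# are entirely hypothetical and carry no empirical validity.
WEIGHTS = {
    "prior_convictions": 0.4,
    "missed_court_dates": 0.4,
    "age_under_25": 0.2,
}

def explainable_risk_score(subject: dict) -> tuple[float, dict]:
    """Return a score together with the per-factor contributions that
    produced it, so each input can be examined and contested in court."""
    contributions = {
        factor: weight * float(subject.get(factor, 0))
        for factor, weight in WEIGHTS.items()
    }
    return sum(contributions.values()), contributions

score, breakdown = explainable_risk_score(
    {"prior_convictions": 1, "missed_court_dates": 2, "age_under_25": 0}
)
print(f"score = {score:.2f}")
for factor, value in breakdown.items():
    # Every contribution is visible, unlike a black-box verdict.
    print(f"  {factor}: {value:+.2f}")
```

A black-box model, by contrast, would return only the final number, leaving the defence nothing to cross-examine.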
Conclusion
Artificial intelligence has the potential to make justice faster, smarter, and more efficient if it is used responsibly, transparently, and with a strong commitment to human rights. Without robust safeguards, however, AI-generated evidence could become a double-edged sword that accelerates injustice rather than preventing it. India is at a turning point. The legal community, legislature, and courts must work together to ensure that technology advances justice rather than the other way around. AI evidence must always be accepted on sound legal reasoning and constitutional principles, never on novelty or convenience. As we look to the future, we must remember that justice is not a formula. It is a commitment to treat everyone equally, with respect and care. And a machine should never take precedence over that.
Whether artificial intelligence will be integrated into the Indian legal system is a question of "when", not "if". AI should never take the place of human judgement, empathy, or accountability, even though it has the potential to significantly increase the effectiveness and precision of criminal investigations and court cases. In a nation as diverse, democratic, and constitutionally rich as India, we must make sure that technology advances justice rather than impedes it. Even though evidence produced by AI might seem objective and scientific, it is ultimately produced and trained by humans, who have their own prejudices, presumptions, and constraints. If we ignore this fact, we risk turning our courtrooms into data centres where justice is reduced to code and human stories are lost in algorithms. An AI tool's incorrect identification is more than just a technical error: it can lead to wrongful arrest, humiliation, incarceration, or even death.
In any civilised society, such outcomes are unacceptable. The real danger with AI is not in its use, but rather in its uncritical and unregulated adoption. We violate basic legal principles like the presumption of innocence, the right to a fair trial, and the right to be heard when courts begin to rely on results that even experts are unable to fully explain. Despite advancements in technology, these principles remain unalterable.
References & Sources
- Indian Evidence Act, 1872, Section 65B – Admissibility of electronic records
- Bharatiya Sakshya Adhiniyam, 2023 – Legislation replacing the Indian Evidence Act, 1872
- Anvar P.V. v. P.K. Basheer, (2014) 10 SCC 473 – Landmark case on the admissibility of electronic evidence
- Arjun Panditrao Khotkar v. Kailash Kushanrao Gorantyal, (2020) 7 SCC 1 – Mandatory nature of Section 65B certification
- State (NCT of Delhi) v. Navjot Sandhu (Parliament Attack Case), (2005) 11 SCC 600 – Use of digital evidence
- Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1 – Landmark judgment recognising the Right to Privacy