AI-Generated Evidence in Indian Courts: Admissibility and Legal Challenges

Published by Law Jurist on 2 July 2025 in Articles

Author: Sakshi Tripathi, BBA. LL.B. (3rd Year), United University, Prayagraj

Introduction 

Artificial intelligence (AI) has evolved from a futuristic idea to a reality in today’s quickly evolving digital world. It can be found in self-driving cars, content curation, smart surveillance systems, smartphone apps, and even the legal system. An important question that emerges as AI is incorporated into Indian legal and law enforcement procedures is whether or not machine-generated evidence can be admitted into court and, if so, how to ensure that it is fairly regulated.

In Indian legal discussions, the subject of AI-generated evidence is becoming more and more prominent. This kind of evidence, which includes algorithmic reconstructions, deepfake detection, predictive crime mapping, and facial recognition data, presents both tremendous opportunities and formidable obstacles. Although Indian courts have not yet reached a definitive ruling on the admissibility of such evidence, this issue will soon take centre stage. This article will look at what AI-generated evidence is, how Indian laws currently handle it (or don’t), the ethical and legal issues it brings up, and what changes might be required to maintain accountability in technology and justice.

The advance of artificial intelligence (AI) has changed how we create, store, and analyse data. Deepfakes, AI-written documents, facial recognition data, and predictive policing tools are increasingly being considered as potential evidence in legal systems worldwide, including India. This technological development, however, strains established standards of evidence. Can evidence produced by AI be used in Indian courts, and if so, under what conditions? This article examines the admissibility, reliability, and legal challenges of AI-generated evidence in Indian courts.

Understanding AI-Generated Evidence

AI-generated evidence refers to digital data or content created entirely or in part by artificial intelligence systems. This comprises:

  • Deepfake audio or video recordings
  • AI-generated documents or emails
  • Facial recognition matches
  • AI-powered predictive analysis
  • Conversation logs from chatbots

Unlike traditional evidence, AI-generated materials are not the direct result of human intention or action, which raises significant questions regarding manipulation, authorship, and reliability.

Legal Framework in India

India's rules of evidence are rooted in the Indian Evidence Act, 1872. This colonial-era law, however, has struggled to adjust to the new complexities brought about by the digital and artificial intelligence eras.

Section 65B – Electronic Evidence

Section 65B of the Indian Evidence Act governs a crucial component of electronic evidence. Under this section, electronic records accompanied by a certificate attesting to their integrity and provenance may be admitted in court. This clause, however, was designed with conventional digital formats in mind, such as emails, SMS messages, computer files, and digital photos, rather than the autonomous outputs of self-learning AI systems. So even though AI-generated outputs could in theory be regarded as electronic records, there is no formal legal procedure for confirming how an AI system came to its conclusions. When the AI's decision-making process is opaque, a straightforward Section 65B certificate might not be enough.
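To make the certification requirement concrete, the sketch below models, purely as an illustration, the particulars that a Section 65B(4) certificate typically attests: identification of the electronic record and the manner of its production, particulars of the device involved, and the details of the signatory occupying a responsible official position. The field names and the is_complete helper are assumptions made for this example, not a statutory format.

from dataclasses import dataclass

@dataclass
class Section65BCertificate:
    """Illustrative model of the particulars a Section 65B(4) certificate attests to."""
    record_description: str       # identifies the electronic record and how it was produced
    device_particulars: str       # details of the computer or device that produced the record
    signatory_name: str           # person giving the certificate
    signatory_position: str       # responsible official position in relation to the device or activity
    conditions_of_65b2_affirmed: bool  # confirms the conditions in Section 65B(2) were satisfied

    def is_complete(self) -> bool:
        # A purely formal check: every particular is supplied and the
        # Section 65B(2) conditions are affirmed.
        return all([self.record_description, self.device_particulars,
                    self.signatory_name, self.signatory_position,
                    self.conditions_of_65b2_affirmed])

# Hypothetical example of a certificate accompanying CCTV footage.
cert = Section65BCertificate(
    record_description="CCTV footage exported as MP4 from the premises DVR on 01-05-2025",
    device_particulars="DVR operating regularly during the relevant period",
    signatory_name="A. Kumar",
    signatory_position="System administrator responsible for the DVR",
    conditions_of_65b2_affirmed=True,
)
print(cert.is_complete())  # True only when every particular is supplied

Even a formally complete certificate of this kind speaks only to the provenance of the record; it says nothing about how a self-learning system arrived at its output, which is exactly the gap the preceding paragraph identifies.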

The Bharatiya Sakshya Adhiniyam, 2023

The Bharatiya Sakshya Adhiniyam, 2023, the new evidence law that replaces the Indian Evidence Act, is an attempt to update India's evidentiary framework. Although it acknowledges digital evidence more explicitly, it still lacks specific provisions for machine-generated outputs or artificial intelligence (AI) analyses. This legislative ambiguity creates a grey area that may result in uneven application of the law.

Judicial Precedents and Current Practice

Although Indian courts have started to consider digital evidence, few decisions deal specifically with AI-generated information. In Anvar P.V. v. P.K. Basheer (2014), the Supreme Court made clear that electronic evidence must comply with Section 65B. More recently, in Arjun Panditrao Khotkar v. Kailash Kushanrao Gorantyal (2020), the Court reaffirmed that digital evidence is inadmissible without the Section 65B certificate. These decisions, however, address electronic evidence in general rather than AI-generated data specifically, where the issue is not only authenticity but also the reasoning behind the output, which AI systems frequently do not make explicit.

Admissibility Difficulties 

  1. Authentication: How can it be demonstrated that evidence produced by AI has not been altered? Deepfakes, for example, are notoriously hard to spot without specialised equipment.
  2. Chain of Custody: AI systems might not keep thorough records of all data input, processing, and output, which makes the chain of custody harder to establish (a brief technical sketch of how file integrity can at least be verified follows this list).
  3. Reliability and Bias: AI systems are only as objective as the data they are trained on. Facial recognition systems, for instance, have shown higher error rates for certain ethnic groups, and relying on such results as evidence may offend principles of fairness.
  4. Absence of Regulation: India currently has no specific legislation governing the use of AI in court. This regulatory gap makes judicial interpretation unpredictable and inconsistent.
  5. Expert Testimony: Interpreting AI outputs frequently calls for in-depth knowledge of data science, machine learning models, and algorithms, expertise that is still scarce in Indian legal proceedings.
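On the authentication and chain-of-custody points above, one basic technical safeguard is cryptographic hashing: recording a fixed digest of an evidence file when it is seized and re-computing it at every later stage makes any subsequent alteration detectable. The following is a minimal illustrative sketch using Python's standard hashlib module; the file path, handler names, and log format are assumptions made for the example, and hashing establishes only that a file is unchanged, not how an AI system produced its content.

import hashlib
import json
from datetime import datetime, timezone

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def log_custody_event(path: str, handler: str, action: str) -> dict:
    """Record who handled the evidence file, when, and its digest at that moment."""
    return {
        "file": path,
        "sha256": sha256_of_file(path),
        "handler": handler,
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical usage: the digest recorded at seizure must match the digest at trial.
seizure = log_custody_event("evidence/clip.mp4", "Investigating Officer", "seized")
production = log_custody_event("evidence/clip.mp4", "Court registry", "produced in court")
print(json.dumps([seizure, production], indent=2))
print("Unaltered since seizure:", seizure["sha256"] == production["sha256"])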

 AI and Deepfakes: A Looming Threat to Evidence Integrity 

Deepfake technology, a particularly hazardous offshoot of artificial intelligence, poses a serious threat to the Indian legal system. Because deepfakes use AI to produce incredibly realistic but entirely fabricated audio, video, or image content, they can be very challenging to spot with the unaided eye. This technology presents a troubling prospect for law enforcement and courts: fake evidence that appears authentic. Consider a deepfake video that uses publicly available voices and images to depict an innocent person committing a crime; admitted as evidence without rigorous technical review, such a video could cause irreparable harm to justice. Worse, because deepfakes are now simple to create with free tools, even a layperson can produce convincingly false evidence.

Deepfakes are not only a cyberthreat in India, where digital literacy is still in its infancy, but they also have the potential to be used as a tool for character assassination, false implication, or communal unrest. Fake videos of celebrities and politicians have recently gone viral in an effort to sway public opinion, and it is not implausible to assume that litigants will try to use similar content in civil or criminal cases in the future.

At the moment, India lacks a specialised legal or forensic framework to verify the veracity of AI-generated audio or video evidence. Section 65B of the Indian Evidence Act (and its counterpart in the Bharatiya Sakshya Adhiniyam, 2023) mandates certain certification requirements for electronic records, but these provisions were never designed with deepfakes and other synthetic content in mind.

Indian courts may soon have to do the following to combat this threat:

  • Create standardised forensic procedures to check for the presence of deepfakes.
  • Require digital submissions to include disclosures about AI-generated content.
  • Work together with cybersecurity specialists and international AI watchdogs.
  • Educate judges on how this kind of deception can happen.

The challenge lies not just in detecting deepfakes, but also in ensuring that real evidence is not wrongly dismissed as fake due to suspicion. The solution lies in balancing technical expertise with legal sensitivity, and in creating a legal environment where truth is not just sought but technologically verified.

As AI tools get smarter, so must our courts.

Use of AI by Law Enforcement in India

Numerous Indian law enforcement organisations have started experimenting with AI tools, frequently without public discussion or legislative approval.

  • Facial Recognition: Police departments in Hyderabad, Delhi, and other states have deployed facial recognition systems (FRS) for surveillance and investigation.
  • Crime Mapping and Prediction: Some cities employ tools that forecast future crime hotspots based on historical crime data.
  • Interrogation Tools: Despite their dubious reliability, experimental tools such as voice stress analysis and AI lie detectors are occasionally employed during interrogations.

What unites all of these deployments is the absence of oversight, transparency, and judicial accountability. In a criminal justice system where people's rights are at stake, that is a dangerous gap.

International Developments: A Learning Opportunity

India is not the only country battling the legal issues raised by AI. The issue of AI-generated evidence has already been addressed by a number of nations:

United States

In State v. Loomis (2016), a risk-assessment algorithm was used to inform sentencing. The defendant challenged the algorithm's opacity; the court permitted its use but cautioned against exclusive reliance on it. In the United States, AI evidence also sits uneasily with the Sixth Amendment right to confront and cross-examine witnesses.

European Union 

The EU's AI Act (2024) places judicial and law enforcement AI under the "high-risk" category and mandates accountability, transparency, and human oversight. European courts are adopting a rights-first stance.

United Kingdom

In Bridges v. South Wales Police (2020), the UK Court of Appeal held that the police's use of facial recognition technology was unlawful, largely because the legal framework governing it was insufficiently clear. India can learn from these cases and put safeguards in place sooner rather than later.

Ethical and Constitutional Dilemmas

Right to a Fair Trial (Article 21)

The use of opaque AI-generated evidence may violate the right to a fair trial. An individual cannot rationally defend against a "decision" made by an algorithm whose logic is inaccessible.

Right to Privacy (Puttaswamy Judgment)

AI tools such as surveillance analytics and facial recognition gather and process large volumes of personal data. In the absence of safeguards, this infringes the right to privacy recognised in Justice K.S. Puttaswamy v. Union of India (2017).

Presumption of Innocence

Predictive policing and risk-scoring algorithms that cast suspicion on a person without concrete proof of any crime threaten the principle that a person is innocent until proven guilty.

The Road Ahead: Recommendations for Reform

India has to quickly embrace a forward-thinking, rights-based approach to AI-generated evidence to make sure justice is not lost to technology.

  1. Enact Specific Legislation on AI in Criminal Justice

A comprehensive law governing the use, limits, and admissibility of AI tools in criminal and civil matters is needed. This law must cover:

  • Certification of AI tools
  • Transparency requirements
  • Disclosure of training data
  • Independent audits

  2. Establish a Legal Definition of AI Evidence

Legislators and courts need to clarify what "AI-generated evidence" is and set it apart from conventional electronic evidence.

  3. Mandate Explainability and Open Access

Legal proceedings should only use AI tools whose operation can be audited and explained. Black-box algorithms have no place in criminal trials.

  4. Include Human Supervision at All Levels

No AI-generated evidence should be used without a human expert confirming its validity. Judges need to be trained to assess these tools critically rather than accept their outputs at face value.

  5. Form a Committee on Judicial Technology

A national committee comprising judges, technologists, ethicists, and lawyers ought to be formed to examine and authorise AI tools for legal use.

  6. Invest in Legal Training and Education

AI literacy must be taught in law schools and judicial academies. Future judges and lawyers need to be aware of the advantages and disadvantages of algorithmic systems.

Conclusion

Artificial intelligence has the potential to make justice faster, smarter, and more efficient if it is used responsibly, transparently, and with a strong commitment to human rights. Without robust safeguards, however, AI-generated evidence could become a double-edged sword that accelerates rather than prevents injustice. India is at a turning point. The legal community, the legislature, and the courts must work together to ensure that technology advances justice rather than undermines it. AI evidence must always be accepted on the basis of sound legal reasoning and constitutional principles, never on novelty or convenience. As we look to the future, we must remember that justice is not a formula. It is a commitment to treat everyone equally, respectfully, and fairly, and no machine should ever take precedence over that.

The integration of artificial intelligence into the Indian legal system is a question of "when", not "if". AI should never replace human judgement, empathy, or accountability, even though it has the potential to significantly improve the effectiveness and precision of criminal investigations and court proceedings. In a nation as diverse, democratic, and constitutionally rich as India, we must make sure that technology advances justice rather than impedes it. Even though evidence produced by AI might seem objective and scientific, it is ultimately built and trained by humans, who have their own prejudices, presumptions, and constraints. If we ignore this fact, we risk turning our courtrooms into data centres where justice is reduced to code and human stories are lost in algorithms. An incorrect identification by an AI tool is more than a technical error; it can lead to wrongful arrest, humiliation, incarceration, or even death.

In any civilised society, such outcomes are unacceptable. The real danger with AI is not in its use, but rather in its uncritical and unregulated adoption. We violate basic legal principles like the presumption of innocence, the right to a fair trial, and the right to be heard when courts begin to rely on results that even experts are unable to fully explain. Despite advancements in technology, these principles remain unalterable.

References & Sources

  1. Indian Evidence Act, 1872, Section 65B – Admissibility of electronic records (Bare Act, Indian Kanoon)
  2. Bharatiya Sakshya Adhiniyam, 2023 – Legislation replacing the Indian Evidence Act (PRS Legislative Research summary)
  3. Anvar P.V. v. P.K. Basheer, (2014) 10 SCC 473 – Landmark case on the admissibility of electronic evidence (Supreme Court judgment)
  4. State (NCT of Delhi) v. Navjot Sandhu (Parliament Attack Case), (2005) 11 SCC 600 – Use of digital evidence (case summary)
  5. Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1 – Landmark judgment recognising the Right to Privacy (judgment text)
