Ammara Mehvish
3rd Year Law, Government Law College, Mumbai
Introduction
In our era of digital everything, the justice system in India is facing one of its biggest transformations: the rise of artificial intelligence (AI)-based evidence and algorithmic decision-making. The question is whether the new procedural law, the Bharatiya Nagarik Suraksha Sanhita (BNSS), 2023, is ready for that challenge. This article explores how AI evidence is emerging, what the BNSS brings in terms of procedure, and whether the law is equipped to handle the algorithmic justice issues that follow.
What do we mean by “AI-based evidence”?
When I say “AI-based evidence”, I mean things like forensic analysis done by algorithms, facial recognition outputs, predictive policing data, or even summaries generated by AI of witness statements. These are not just “digitised versions of old evidence” but evidence where an algorithm has processed, analysed or generated the material. As one study notes: “the ‘black box’ problem of AI … where the reasoning behind an algorithm’s output is opaque” poses significant barriers to fairness and due process.
In the Indian context, as per a research brief, AI is already used for data review, facial recognition, surveillance and supporting investigations under the criminal justice system. So the question becomes: when these algorithmic tools feed into the courts, is our procedure law (BNSS) ready to ensure fairness, transparency and reliability?
BNSS – what it brings to procedure
The BNSS, which replaces the old Code of Criminal Procedure (CrPC), was enacted to modernise criminal procedure. On its face, the BNSS contains procedural reforms such as electronic filing of complaints, video conferencing of proceedings, and digital workflows.
Paired with the Bharatiya Sakshya Adhiniyam (BSA) for evidence law, these new statutes recognise digital records and workflows (through the BSA's provisions on electronic records) and thereby open the way for evidence that has algorithmic components.
So yes, in theory the BNSS and BSA together provide the apparatus for a digital, tech-assisted criminal justice process. But of course "recognise" is not the same as "fully manage the risks". Which leads us to…
The algorithmic justice challenges
Here are some of the big issues that AI-based evidence raises and how BNSS might struggle:
Transparency and explainability
AI systems often generate outputs without making clear to humans how the conclusion was reached. When such output is used as evidence, the accused must have a chance to challenge or question it. Scholars have argued that "the law must mandate a 'right to explanation' for AI-generated evidence." Under the BNSS/BSA, frameworks for electronic records exist, but do we have sufficient rules to demand algorithmic audits, model disclosure, error rates, training-data provenance and the like? Not yet, at least not clearly.
Bias and fairness
Algorithms trained on skewed data may perpetuate or amplify discrimination along lines of caste, gender or region. In India especially, where dataset quality is uneven, the risk is large. One study observed that "in India, data is not always reliable due to socio-economic factors … AI evokes unquestioning aspiration", which is worrying. Procedural law must guard against such bias, but the BNSS does not yet include specific algorithmic fairness safeguards.
Chain of custody, authenticity and integrity
With AI evidence, you must ask: where did the algorithm process the data, was the data tampered with, and how do we verify the output? The BSA contains provisions for proving electronic records (e.g., Section 63), and BNSS procedure provides digital filing and search powers. But again, these are still generic; they are not tailored for "the AI camera recognised the suspect" or "the algorithm predicted a high-risk offender".
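To make the integrity question concrete, here is a minimal sketch (purely illustrative, not any statutory procedure) of how digital evidence integrity is typically verified in practice: a cryptographic hash of the material is recorded at seizure, and any later change to the file produces a different hash. The record contents below are hypothetical.

```python
# Illustrative sketch of evidence-integrity checking via a cryptographic hash.
# Any statutory names or procedures are NOT implied; this only shows the idea
# that a hash recorded at seizure can later expose tampering.
import hashlib

def evidence_fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest acting as an integrity fingerprint."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical AI output record seized during investigation.
original = b"frame_0412: face match score 0.93"
recorded_hash = evidence_fingerprint(original)

# Later, in court: recompute and compare.
assert evidence_fingerprint(original) == recorded_hash      # unchanged material
tampered = b"frame_0412: face match score 0.99"
assert evidence_fingerprint(tampered) != recorded_hash      # tampering detected
```

The point of the sketch is that integrity of the *file* can be checked mechanically, but it says nothing about whether the algorithm's output was correct in the first place, which is exactly the gap the text identifies.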
Human oversight
A central principle of justice is that a human decision-maker should weigh evidence, not just accept algorithmic output. As one commentary puts it: "Judicial oversight, ensuring AI remains assistive, not determinative." The BNSS allows digital processes, but does it ensure that judges and magistrates have the ability and training to interrogate algorithmic evidence? Probably not sufficiently yet.
Legal backing and regulation
A major challenge is that India does not yet have a dedicated horizontal statute for AI governance in justice or policing. General constitutional principles apply (notably Article 21), but when AI evidence enters court we need robust regulation. The BNSS is a big step forward, but by itself it may not plug all the gaps.
Can BNSS handle AI-based evidence?
In a nutshell: partially yes, but not fully. BNSS gives the procedural platform for digital processes, and when paired with BSA (evidence law) there is recognition of electronic records, digital filing, video-based hearings etc. That means the system can handle AI-based evidence in theory. But in practice many crucial pieces are missing or weak, so there is risk of justice being compromised.
Here are some points:
- BNSS allows courts to summon things and documents (including digital material) and conduct searches, which is helpful for obtaining algorithmic data.
- The evidence law under the BSA explicitly treats electronic records as primary evidence; this means outputs from AI systems can be placed on the record.
- But for fairness and legitimacy we need: transparency (an explanation of how the algorithm worked), auditability (error rates, dataset bias), human supervision (judges who understand the evidence), data quality (correct inputs to the AI) and regulatory safeguards (algorithmic accountability). Many of these are currently underdeveloped in Indian law.
- The procedural law must ensure that the accused can challenge AI-based evidence, e.g., by asking for source data, algorithmic logs and audit trails. Without that, we risk mechanical justice.
Suggestions for strengthening
Since I am still an intern trying to think these issues through, here are some suggestions:
- The BNSS (or supplemental rules under it) should mandate, for AI-derived evidence, a certificate from a certified forensic expert stating how the algorithm processed the data, its error margin, its limitations and the provenance of its training data.
- Judges and magistrates should receive training to understand algorithmic evidence; the law should make such training compulsory.
- The accused and the defence should have a right to inspect algorithmic logs, challenge dataset bias and seek re-analysis by an independent expert.
- Audit trails for AI systems used by police or investigators must be preserved and producible in court; the BNSS could include a procedure for the preservation of such audit logs.
- There should be oversight mechanisms, perhaps an independent regulator or body (state or national), to ensure that algorithmic systems used in justice are fair, transparent and accountable.
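The audit-trail suggestion above can be illustrated with a small, hedged sketch of a tamper-evident log: each entry stores the hash of the previous entry, so any retroactive edit breaks the chain and can be detected in court. The class and field names (`AuditLog`, `append`, `verify`) are entirely hypothetical and not drawn from any statute or real system.

```python
# Illustrative hash-chained audit log: editing any past entry breaks the chain.
# All names here are hypothetical; this is a sketch of the concept only.
import hashlib
import json

class AuditLog:
    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> None:
        """Record an event, chaining it to the hash of the previous entry."""
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"event": event, "prev": prev, "hash": digest})

    def verify(self) -> bool:
        """Recompute every hash; return False if any entry was altered."""
        prev = "0" * 64
        for entry in self.entries:
            payload = json.dumps(entry["event"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True

log = AuditLog()
log.append({"tool": "face_recognition_v2", "input": "cctv_clip_17", "score": 0.93})
log.append({"analyst": "lab_01", "action": "manual_review"})
assert log.verify()                           # chain intact
log.entries[0]["event"]["score"] = 0.99       # retroactive tampering
assert not log.verify()                       # tampering is detected
```

A preservation rule in the BNSS could then require that such logs be produced in court intact, with a broken chain itself being a fact the defence can point to.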
Conclusion
So to conclude, algorithmic justice is not some distant sci-fi concept: in India we are already seeing AI in investigations, forensic analysis and digital courts. The BNSS is a welcome reform of criminal procedure, and it opens the door for AI-based evidence. But opening the door is different from making sure every step inside is safe, fair and just. There are still gaps in transparency, bias mitigation, human oversight and accountability. For the BNSS to handle AI-based evidence properly, the procedural law must evolve further, and stakeholders (judges, lawyers, law enforcement) must be equipped and constrained in the right way.
If we get it right – then AI can become a helpful tool in delivering justice more swiftly, not a monster that undermines it. If we don’t – then we risk algorithmic injustice layering on top of old systemic injustices. And that would be a big failure of our law reforms.
References
- Vidushi Marda, Artificial Intelligence and the Law in India, Internet Freedom Foundation Research Brief.
- NITI Aayog, National Strategy for Artificial Intelligence: #AIForAll (2018).
- Barfield, W., The Cambridge Handbook of the Law of Algorithms (Cambridge University Press, 2020).
- World Economic Forum, Guidelines for AI Procurement (2019).
- Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1.

