Authors: Alapan Dawn & Sakshi Kothari, Advocates
The Shift to the Machine
India’s shift to a fully “Faceless” tax administration was meant to eliminate human discretion and corruption. But recent High Court rulings in 2025 show a paradox: while courts are strictly enforcing mandatory AI allocation to prevent officer interference, the law offers almost no safeguards for algorithmic bias, data privacy, or transparency. As the taxman is forced to yield to the machine, an alarming constitutional vacuum is emerging.
The Death of the Assessing Officer
For decades, the interface between the taxpayer and the state was defined by the physical presence of the Jurisdictional Assessing Officer (JAO). This relationship, often criticized for subjective discretion and harassment, was statutorily dismantled by the Finance Act, 2021.
The introduction of the Faceless Assessment Scheme was not just an administrative upgrade; it was a fundamental shift in constitutional governance. By mandating Automated Allocation, Parliament effectively transferred the power to decide which officer assesses which case from human administrators to an AI-driven algorithm. Yet 2024–25 has witnessed a peculiar litigation trend: while the legislature pushed for automation, Revenue officers frequently tried to bypass the digital wall to assert human jurisdiction. The resulting judicial pushback has cemented the supremacy of the algorithm but left open a critical question: if the algorithm is the new master, who accounts for its errors?
The Statutory Mandate: No Room for Humans
The legal backbone of this transition is Section 151A of the Income Tax Act, 1961. It empowers the Central Government to eliminate the human interface to the extent technologically feasible.
Crucially, the E-Assessment of Income Escaping Assessment Scheme, 2022, notified under this section, defines “automated allocation” explicitly as an algorithm using artificial intelligence and machine learning.
Paragraph 3 of the Scheme uses the mandatory word “shall”, dictating that the issuance of notice shall be through automated allocation. This is significant because it shifts the locus of constitutional responsibility: the fairness of an assessment now depends not on an officer’s integrity, but on the logic of the code governing the allocation.
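The Scheme’s actual allocation logic is not public. Purely as an illustration of how “automated allocation” can remove human choice from assignment (all names and the hashing approach here are hypothetical, not the Department’s system), consider a deterministic hash-based allocator:

```python
import hashlib

def allocate_case(case_id: str, officers: list) -> str:
    """Map a case to an assessment unit using a cryptographic hash.

    No human decides the mapping: the same case_id always yields the
    same unit, and a small change in case_id shifts the outcome
    unpredictably. Illustrative sketch only; the real Scheme's
    algorithm is not published.
    """
    if not officers:
        raise ValueError("officer pool must not be empty")
    digest = hashlib.sha256(case_id.encode("utf-8")).digest()
    # Interpret the first 8 bytes as an integer and reduce it onto the pool.
    index = int.from_bytes(digest[:8], "big") % len(officers)
    return officers[index]
```

One design point worth noting: a deterministic scheme like this is auditable after the fact, since anyone holding the case identifier and the officer pool can re-verify the assignment, which is one way an allocation could be made both non-discretionary and reviewable.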
The Judiciary as the Guardian of the Algorithm
Following the Supreme Court’s intervention in Union of India v. Ashish Agarwal, which saved the Faceless regime from a procedural collapse, High Courts have taken a hardline stance against human interference.
In early 2025, the Rajasthan High Court in Khatri Ceramics Private Limited v. Income Tax Officer dealt with a batch of petitions where notices were issued by local JAOs instead of the Faceless Centre. The Revenue argued that JAOs held concurrent jurisdiction to supervise the system.
The Court rejected this, holding that concurrent jurisdiction would defeat the very purpose of the statute. It ruled that when the law mandates AI allocation to ensure randomness, reintroducing human choice violates the scheme. The message was clear: the process must be random to be fair.
No Exceptions for Search and Seizure
The Revenue attempted to carve out exceptions for complex matters, arguing that Search and Seizure cases require human application of mind that an algorithm cannot provide.
In Marudhar Bhumi Developers Pvt. Ltd. v. DCIT, the High Court dismissed this argument. Relying on the Bombay High Court’s reasoning in Hexaware Technologies, the Court held that reading an exception into the scheme would do “violence to the language of the statute.”
By quashing these notices, the judiciary has effectively constitutionalised the Faceless Scheme. It views the Black Box of automated allocation as a necessary safeguard against the arbitrariness that Article 14 prohibits.
The Emerging Constitutional Crisis
While the Courts are right to enforce the rule of law against the Department, this victory for procedure opens a void for rights. We are moving toward a system of Algorithmic Governance without the necessary checks and balances.
- The Accountability Vacuum (Article 14)
Once AI determines jurisdiction or flags a file for scrutiny, Article 14 demands transparency. A taxpayer has a right to know why they were selected. However, the Risk Management Strategy algorithms are proprietary and opaque. If an algorithm hallucinates or flags a taxpayer based on biased historical data, the current legal framework offers no mechanism to challenge the “reason to believe” formed by the code.
- Procedural Fairness and the Black Box
Natural justice requires a reasoned decision. But how does one cross-examine an algorithm? As the EU AI Act recognizes, high-risk administrative AI requires human oversight and explainability. India’s Income Tax Act currently lacks any provision for an “algorithmic audit,” leaving taxpayers subject to a decision-making process they cannot see or question.
- The Data Protection Void
Most concerning is the interaction with the Digital Personal Data Protection (DPDP) Act, 2023. While the Act governs the processing of digital data, Section 17(2)(a) allows the government to exempt “instrumentalities of the State” from its application.
Unlike the GDPR, which gives citizens the right not to be subject to automated decision-making, the DPDP Act potentially allows the Tax Department to operate its AI behind an iron curtain of state exemption. The citizen is thus governed by the algorithm but legally barred from questioning it.
The Way Forward: Policy Recommendations
The judiciary has done its part by enforcing the mandate of the Finance Act, 2021. Now, Parliament must step in to ensure that this digital state remains a democratic one. To prevent the Faceless regime from becoming an unaccountable technocracy, three reforms are urgent:
- Mandatory Algorithmic Audits: The Income Tax Department must subject its allocation and risk assessment algorithms to independent third-party audits to check for bias and errors.
- Right to Explanation: Taxpayers must have a statutory right to understand the parameters used by the AI to select their case for scrutiny, akin to the “right to explanation” in global data laws.
- Human-in-the-Loop Safeguards: While allocation should be automated, the final “reason to believe” for income escaping assessment must require a recorded, human validation that is reviewable by courts.
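To make the audit recommendation concrete, here is a minimal sketch of one standard check an independent auditor could run: comparing scrutiny-selection rates across taxpayer groups using the “four-fifths” disparate-impact heuristic borrowed from employment-discrimination practice. The data, group labels, and threshold are all hypothetical; a real audit would use the Department’s actual selection logs and a legally settled threshold.

```python
def selection_rates(records):
    """records: iterable of (group, selected) pairs.
    Return the fraction of cases selected for scrutiny per group."""
    totals, selected = {}, {}
    for group, picked in records:
        totals[group] = totals.get(group, 0) + 1
        if picked:
            selected[group] = selected.get(group, 0) + 1
    return {g: selected.get(g, 0) / n for g, n in totals.items()}

def over_scrutinised_groups(records, ratio=0.8):
    """Flag groups selected so often that the least-selected group's
    rate falls below `ratio` of theirs (four-fifths heuristic, with
    scrutiny selection treated as the adverse outcome)."""
    rates = selection_rates(records)
    lowest = min(rates.values())
    return sorted(g for g, r in rates.items() if r > 0 and lowest / r < ratio)
```

For example, if group A is selected in 8 of 10 cases and group B in 2 of 10, the lowest rate (0.2) is only a quarter of A’s rate (0.8), so A would be flagged for review. A rate disparity is not proof of bias, but it is the kind of signal an independent audit exists to surface and explain.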
The Faceless regime was designed to be a shield against harassment. Without robust data rights and transparency, there is a risk it becomes a weapon of opaque control.
References:
- Ministry of Finance (Dep’t of Revenue), Notification No. 18/2022, S.O. 1466(E) (Mar. 29, 2022).
- Khatri Ceramics Pvt. Ltd. v. Income Tax Officer, 2025 Latest Caselaw 9065 Raj (Rajasthan H.C. Mar. 19, 2025).
- Marudhar Bhumi Developers Pvt. Ltd. v. Deputy Comm’r of Income Tax, No. 21235/2025, 2025:RJ-JD:46961-DB (Rajasthan H.C. Oct. 30, 2025).
- Hexaware Techs. Ltd. v. Assistant Comm’r of Income Tax, 464 I.T.R. 430 (Bombay H.C. 2024).
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), Annex III.
- The Digital Personal Data Protection Act, 2023, § 17(2)(a) (India).

