Author: Adv. Yogesh, pursuing an LL.M. at Dayananda Sagar University, Bengaluru.
Introduction:
The growing use of generative AI tools such as ChatGPT in legal research has exposed a new vulnerability: “hallucinations,” false facts, quotes, or case citations presented with high confidence and clarity. Left unchecked, AI can fabricate entire decisions or misquote earlier cases, and advocates who rely on it blindly may end up filing petitions citing cases that never existed. This has already caused dismay around the world. In the U.S.A., for example, a Manhattan judge penalised two advocates for incorporating AI-generated content in a filing, stating they had “abandoned their responsibilities” by relying on non-existent quotes and citations from fake judicial opinions produced by the AI. A Reuters investigation also documented at least seven recent incidents around the world in which courts reprimanded advocates for using fictitious, bogus citations generated by AI tools. Generative models deliver statements confidently precisely because they do not check facts; they merely predict which words are likely to come next based on statistical patterns and their algorithmic design. In one U.S. matter, a law firm’s chatbot invented five to six fake, non-existent cases in a short span, leading to a $5,000 penalty.
The Indian Context
A similar scenario is already unfolding in India. In September 2025, the Honorable Delhi High Court detected a petition from homebuyers that relied on fake precedents generated by ChatGPT and other AI tools. Senior counsel for the respondents pointed out that the petitioners had cited a string of fake or non-existent cases and judgments in their filing. The bench, dismayed by this improper use of AI, allowed the petition to be withdrawn and warned that such conduct could attract contempt proceedings or even perjury charges. The case echoes others in which advocates have been reminded that they must be candid with the court, uphold the spirit of the law, and that ignorance of AI’s drawbacks will not serve as an excuse.
Supreme Court’s Prudence on AI Misuse
Judges of the Honorable Supreme Court have also begun to sound the alarm. CJI B.R. Gavai warned in March 2025 that “platforms like ChatGPT have made up legal facts and fake case citations.” Justice Vikram Nath likewise observed, “AI may help the process of justice, but only human intelligence can deliver the essence of justice.” These comments underscore a fundamental principle: AI can expedite research or translation, but it cannot supplant the judge’s duty to authenticate every citation and precedent.
Ethical Duty and Judicial Integrity
The Honorable Supreme Court itself has repeatedly reminded advocates of this duty. In Bhagwan Singh v. State of U.P. (2024), for instance, the Court rebuked advocates for assisting unscrupulous litigants in filing false proceedings and falsified documents, calling it “a matter of utmost serious concern when Advocates who are officers of the Court assist others in misusing the process of law.” Although Bhagwan Singh dealt with forged Vakalatnamas, its ethical lesson applies equally to AI misuse: knowingly or negligently citing made-up cases is a form of deception and misrepresentation before the court.
Honorable Kerala High Court’s Guidelines on the Effective Use of AI (2025)
Against this backdrop, the Honorable Kerala High Court made a pioneering move. In July 2025, under Honorable Chief Justice P.V. Kunhikrishnan, it released the “Policy Regarding Use of Artificial Intelligence Tools in District Judiciary,” the first known policy in India regulating the use of AI technology in legal processes. The policy states explicitly that AI may be used only in an assistive capacity: judges are instructed never to use AI tools to make adjudications, grant relief, or pronounce judgments under any circumstances. Crucially, every AI-generated output must be thoroughly scrutinized by a human.
Key Requirements of the Kerala AI Guidelines
1. AI as a Supportive Tool: AI technologies may be used for legal research or administrative tasks, but not for adjudication. Judges must remain vigilant that AI tools never substitute for legal reasoning and critical thinking.
2. Assessing AI Outputs: All AI-generated content, such as case citations, judgments, or references, must be checked and scrutinized against official sources.
3. Prohibiting Unvetted Generative Models: Open-ended chatbots like ChatGPT and Gemini cannot be used for court-related work because they risk data leakage and breaches of data privacy.
4. Audit Trails and Training: Courts must keep records of their use of AI tools, conduct regular audits, and train personnel in AI ethics and related concepts.
5. Disciplinary Enforcement: Failure to follow the rules can attract disciplinary action, making courts more accountable and transparent.
Why These Guidelines Matter
Court proceedings rely on precision, brevity, and veracity. Allowing AI hallucinations into case filings erodes public faith in the judicial system. Justice Nath put it well: “A judge is not an algorithm… a machine can’t understand how a victim feels or how complicated social situations are.” Advocates are officers of the Court and must adhere to the Bar Council’s rules and regulations. The Honorable Supreme Court has held numerous times that abusing the legal process, as by fabricating documents or citations, will invite harsh and stringent punishment.
Implications for the Indian Legal System and the Future of AI
The Honorable Kerala High Court’s move is pivotal for India’s legal system. It makes clear that technology can never substitute for an advocate’s diligence. The Bar Council’s code of ethics insists that advocates be honest, sincere, and trustworthy: they cannot cite authorities they have not cross-checked. AI literacy must now become an integral part of legal education and bar preparation so that advocates learn both the merits and the limitations of AI tools. In the meantime, Indian courts and law firms should develop AI models trained on verifiable legal records rather than relying on untrustworthy general-purpose chatbots. Technologists are already working on explainable AI models that cite their own sources, which aligns with the courts’ need for transparency.
Conclusion
Kerala has laid a foundation for the whole nation. The High Court has made it clear that AI may assist legal work, but bogus, fictitious law and hallucinations have no place in court; an advocate or judge who crosses that threshold could be held in contempt. Technology can make proceedings more expeditious, but human judgment remains the most important part of justice. Kerala’s policy reminds professionals that honesty and verification are non-negotiable in the age of AI.

