
Using AI in Court Pleadings? Supreme Court’s Warning and the Future of Legal Drafting in India

Introduction

The increasing use of artificial intelligence in legal drafting has raised serious ethical and procedural concerns within the Indian judicial system. In a recent matter before the Supreme Court of India, the Court cautioned lawyers against filing pleadings containing fabricated or unverified case citations generated through AI tools. The observation came in the context of a petition that relied on non-existent judgments, prompting the Court to emphasise the advocate’s duty of verification and accuracy.

In State of Uttar Pradesh v. Reena (2025) (oral proceedings before the Supreme Court of India), the Bench made it clear that technological assistance cannot dilute professional responsibility. The Court orally observed:

“Because of artificial intelligence, lawyers and judges have an additional duty to verify whether the citations and materials relied upon are genuine and not fabricated.”

This marks one of the earliest direct judicial engagements in India with the risks posed by AI-assisted legal drafting.

Background: The Rise of AI in Legal Drafting

Artificial intelligence tools are now routinely used in legal practice for research, drafting petitions, preparing written submissions, and summarising case law. Their speed and efficiency make them attractive, particularly in high-volume litigation environments. However, these tools are not infallible. A well-documented limitation of generative AI is the production of “hallucinated” content, including non-existent judgments, incorrect citations, and misquoted legal principles.

In the legal domain, such inaccuracies are not merely technical errors. Court pleadings are formal documents submitted to a constitutional court, and every assertion within them carries legal consequences. When AI-generated inaccuracies enter pleadings, they directly affect judicial time, procedural integrity, and the credibility of the advocate.

The Supreme Court’s Core Concern: Fake Citations in Pleadings

The Supreme Court’s warning was triggered by a petition that cited judgments which did not exist in any recognised law report. Upon noticing this, the Court dismissed the matter and cautioned the legal fraternity against blindly relying on AI-generated material. The Bench underscored that the duty to verify citations is non-delegable and rests squarely on the advocate.

The Court observed:

“A lawyer has a duty to cross-verify the judgments and authorities cited in pleadings. Reliance on artificial intelligence without verification cannot be an excuse for placing incorrect material before the Court.”

This statement reinforces a foundational principle of advocacy: accuracy is not optional. It is an ethical obligation.

Duty of Candour and Professional Responsibility

The advocate’s duty of candour to the court is a cornerstone of the justice system. Courts rely heavily on precedents cited in pleadings and written submissions. When a lawyer cites a judgment, the court assumes that the authority exists, is correctly cited, and accurately supports the proposition advanced.

Filing AI-generated pleadings containing fabricated citations undermines this trust. The Court’s warning makes it clear that responsibility cannot be shifted to software or technological tools. The signature of counsel on a pleading signifies personal verification and due diligence.

Even if an error originates from AI, the court treats the submission as the advocate’s own representation.

AI Hallucinations and Their Impact on Judicial Process

AI hallucinations pose a unique threat to judicial efficiency. Fabricated citations compel courts to spend valuable time verifying authorities, delaying proceedings and burdening an already strained judicial system. In high-stakes litigation, reliance on non-existent precedents can also distort legal arguments and affect adjudication.

The Supreme Court’s concern reflects a broader institutional objective: preserving the integrity of the adjudicatory process. Courts function on verified legal sources such as SCC, AIR, and official court databases. Introducing unverifiable AI-generated citations disrupts this structured system of legal reasoning.

Not a Ban on AI, But a Call for Responsible Use

It is crucial to note that the Supreme Court has not prohibited the use of AI in legal research or drafting. Instead, the Court has adopted a balanced and pragmatic approach. AI may be used as an assistive tool, but it cannot replace human legal judgment or professional diligence.

The Court’s oral remarks indicate a clear distinction:

“Technology can assist legal research, but the responsibility for accuracy in pleadings always lies with the counsel who files them.”

This position aligns with the traditional ethos of the legal profession, where tools may evolve but professional accountability remains constant.

Ethical Obligations Under Professional Conduct Rules

Under the Advocates Act, 1961 and the Bar Council of India Rules, advocates are bound to maintain accuracy, diligence, and fairness in their submissions. Filing incorrect citations, even inadvertently, may amount to professional misconduct if it reflects a lack of due care.

The increasing reliance on AI tools introduces a new dimension to professional ethics. Lawyers must now exercise technological diligence in addition to legal diligence. Blind reliance on AI-generated drafts without verification may expose advocates to judicial criticism, reputational harm, and, in extreme cases, disciplinary scrutiny.

Broader Judicial and Institutional Implications

The Supreme Court’s warning signals the beginning of a larger jurisprudential shift concerning technology in litigation. As AI tools become more prevalent, courts may evolve formal expectations regarding their use. This could include mandatory verification standards, disclosure norms for AI-assisted drafting, and stricter scrutiny of citations in pleadings.

Globally, courts have already begun cautioning lawyers against submitting AI-generated documents without verification. The Indian Supreme Court’s observations place India within this emerging global discourse on responsible AI usage in legal systems.

Impact on Litigation Practice in India

For litigating advocates, the implications are immediate and significant. Every citation generated through AI must be independently verified from authoritative sources. Law firms and chambers will likely need to implement internal protocols to ensure that AI-assisted drafts are thoroughly vetted before filing.

The Court's observations also reinforce the importance of traditional legal discipline. Verifying citations, reading original judgments, and cross-checking legal propositions remain indispensable aspects of advocacy. Technological convenience cannot substitute for professional competence.

The Future of AI in the Indian Legal Profession

The legal profession has historically adapted to technological change, from typewritten pleadings to e-filing and virtual hearings. AI represents the next phase of this evolution. However, unlike earlier technological tools, AI generates substantive legal content, which directly interacts with judicial reasoning.

The Supreme Court’s intervention suggests that while AI will continue to be integrated into legal workflows, its use will be accompanied by heightened standards of accountability. The profession is likely to witness the development of best practices governing AI-assisted drafting and research.

Conclusion

The Supreme Court’s observations in State of Uttar Pradesh v. Reena (2025) serve as a timely reminder that technological advancement does not dilute professional responsibility. Artificial intelligence may assist lawyers in research and drafting, but it cannot replace the advocate’s duty of verification, accuracy, and candour to the court.

The message from the Court is clear and unequivocal: AI is a tool, not an authority. The ultimate responsibility for every pleading rests with the lawyer who signs and files it. As the legal profession enters the age of AI-assisted practice, the foundational principles of diligence, integrity, and accountability remain unchanged. Courts do not adjudicate algorithms; they adjudicate arguments presented by officers of the court, who must ensure that every citation, precedent, and legal submission placed before the Bench is genuine, accurate, and verified.

Law Wire Team