
The Supreme Court on Monday (November 10, 2025) agreed to hear after a fortnight a petition highlighting that the indiscriminate use of generative artificial intelligence (GenAI) in judicial work can lead to “hallucinations”, producing fictitious verdicts and research materials and, worse, entrenching permanent bias.
Chief Justice of India B.R. Gavai, before whose Bench the case came up, responded that judges were aware of, and vocal about, the intrusions of AI into the functioning of the judiciary.
Advocate Kartikeya Rawal, represented by lawyer Abhinav Shrivastava, urged the apex court to formulate a strict policy, or at least framework guidelines, for the regulated, transparent, safe and uniform use of GenAI in courts, tribunals and other quasi-judicial bodies until a law is in place.
Petition raises concerns
The petition warned that the opaque use of artificial intelligence and machine learning technologies in the judicial system and governance would raise constitutional and human rights concerns. It said the judiciary must have access only to unbiased data, and the ownership of such data must be transparent enough to ensure the accountability of the parties involved.
“GenAI’s ability to use advanced neural networks and unsupervised learning to generate new data, uncover hidden patterns and automate complex processes can lead to ‘hallucinations’, resulting in false case law, AI bias and lengthy observations… This process of hallucinating would mean that GenAI would rely not on precedent, but on law that might not even exist,” the petition said.
GenAI is able to produce original content based on prompts or queries. It can create realistic images, generate content such as graphics and text, answer questions, explain complex concepts and translate language into code, Mr. Rawal said.
He further stated that GenAI algorithms can also “replicate, perpetuate, exacerbate” pre-existing prejudice, discrimination and stereotyping practices, raising profound ethical and legal challenges.
Vagaries of AI
The petition pointed out that judicial reasoning and decisions cannot be left to the vagaries of AI, as the public has a right to know the reasoning behind judgments. Judicial considerations and sources cannot be arbitrary; they must be transparent. Mr. Rawal said artificial intelligence can help with administrative efficiency but cannot replace “the prudence, moral judgment and human discretion that are necessary for judicial decision-making”.
“For the safe and constitutionally compliant deployment of GenAI in the court system, it is essential that court operators maintain a ‘human in the loop’ principle and ensure that adequately trained professionals supervise and verify outputs generated by GenAI,” the petition states.
He said one of the biggest dangers of integrating AI into judicial work is data opacity, largely due to the “black box algorithms” used in GenAI.
“The term ‘black box’ refers to a technological system that is inherently opaque, whose inner workings or underlying logic are not properly understood, or whose outputs and effects cannot be explained. This can make it extremely difficult to detect erroneous outputs, especially in GenAI systems that discover patterns in the underlying data without supervision. Such opacity may even mean that the creators themselves do not fully understand the system’s internal logic, raising the risk of arbitrariness and discrimination,” the petition states.
Published – 10 Nov 2025 20:40 IST





