
Former RBI Governor Raghuram Rajan recently questioned whether the world could enter an “AI nightmare” and urged governments to prepare for numerous transformational outcomes.
While cautioning against exaggerated doomsday narratives, he noted in a Bloomberg interview that adoption will be gradual, allowing time for retraining and adjustment.
However, this gradual transition should not encourage political complacency.
The speed of AI development and its capacity to reshape markets, workforces and information flows make careful governance imperative.
The challenge is to mitigate damage without stifling innovation, a balance that major economies strike in different ways.
Different global approaches
The US has largely taken a sector-specific, non-prescriptive approach. Rather than enacting an overarching AI law, it relies on executive orders, existing regulators and voluntary standards.
Bodies such as the National Institute of Standards and Technology have issued risk-based frameworks to promote trustworthy AI while preserving flexibility for innovation. This approach prioritizes competitiveness and rapid commercialization.
The EU’s interventionist path prioritizes the protection of rights and security, even at the cost of regulatory complexity.
Its Artificial Intelligence Act of 2024 requires impact assessments, documentation and transparency from developers and deployers of high-risk AI systems. Non-compliance can attract penalties of up to 7% of global annual turnover.
In contrast, China follows a security-focused model, with the state strictly overseeing the deployment of artificial intelligence while encouraging domestic innovation.
Its AI governance framework combines data-localization requirements, algorithm registration, transparency rules and content-moderation obligations.
India’s unique position
India’s regulatory approach has been quite different. Rather than retrofitting safeguards after deployment, control mechanisms are often built into system design.
Offered at zero cost and widely adopted across sectors, India’s Digital Public Infrastructure (DPI) is globally unique in both scale and interoperability: Aadhaar and UPI embed authentication, oversight and accountability into their design. This offers useful signposts for AI governance.
A strict, compliance-heavy framework like the EU’s, or a heavily centralized one like China’s, risks undermining India’s comparative advantages, particularly its entrepreneurial startup ecosystem and globally competitive IT services sector.
India needs a credible regulatory framework that is agile, adaptable and based on real-world risks.
Policy options before India
Recent policy signals reflect this emerging consensus. India’s recently released AI Governance Guidelines emphasize sector-specific governance, India-focused risk assessments, voluntary safeguards and reliance on existing legal frameworks over rushed new legislation.
Policy experts are warning against economy-wide AI laws, advocating instead responsible-AI codes that build safety, accountability and explainability into design, shielding smaller innovators from compliance burdens while allowing regulatory intervention when risks materialize.
An example is the report of the RBI’s FREE-AI (Framework for Responsible and Ethical Enablement of AI) Committee, which articulates seven principles, including trust, people-first design, accountability and security.
Treating innovation and risk mitigation as complementary, it recommended creating shared infrastructure and AI regulatory sandboxes to democratize access to AI.
Choosing the future
To avoid the AI nightmare while realizing AI’s promise of growth and social impact, India needs a governance framework that encourages experimentation and adoption, manages risk and deepens inclusion, all while avoiding excessive regulation.
Amar Gupta is the joint managing partner of JSA Advocates & Solicitors
