
The India summit serves as a microcosm for this global shift. (File) | Photo credit: REUTERS
If we were to chart the trajectory of global AI governance, geographic indicators would tell a story of waning caution. When I attended the Responsible AI in the Military Domain (REAIM) Summit in The Hague in 2023, it was a meeting defined by a grim gravity: nations coming together to discuss the military applications of artificial intelligence and the urgent need for a “responsible” framework. The mood was sombre. Since then, the diplomatic caravan has moved through Bletchley Park, Seoul and Paris, and has recently arrived at the AI Impact Summit in India.
But something fundamental shifted along the way. I’d like to map this shift with an index I’ll call the “Responsibility Index”—a measure of how much weight security and ethics carry relative to speed and scale. On this index, safety is trending down and big money is trending up. Recent negotiations in India confirm the transition: the era of asking whether we should build certain things has been definitively replaced by a race to see how quickly we can finance them.
The India Summit serves as a microcosm of this global shift. While the rhetoric of “security” still features in the press releases, the atmosphere has changed. The conversation has moved from the philosophical concerns of researchers to the logistical requirements of industrialists. At The Hague, the stars of the show were ethicists, diplomats and military strategists steeped in the laws of war. In the current cycle, the attention has been hijacked by the cheque writers. “Big money” has effectively overshadowed “deep talent”.
This overshadowing of talent by capital is a crucial difference. In the early days of AI’s generative wave—which seems like decades ago, but was only in 2022—the power lay with the architects of the technology. The authors of the “Attention Is All You Need” paper, or the early teams at DeepMind, held the leverage because they possessed the rare cognitive surplus needed to build these models. Today, the barrier to entry is no longer just genius; it is capital expenditure on the scale of a nation’s GDP. As the primary requirement for relevance shifts from intelligence to computational power, incentives shift from scientific rigour to return on investment.
Nothing illustrates this commoditisation of intelligence better than the rhetoric of industry leaders like Sam Altman, head of OpenAI and the face of this AI revolution. On the sidelines of India’s AI summit, Mr. Altman compared the energy consumption of massive data centres to the cost of raising and training a single human being for twenty years. Such a comparison should have stopped the industry in its tracks; it barely registered.
His statement is revealing: it suggests a worldview in which biological and synthetic intelligence are merely competing line items on a balance sheet. If a data centre can produce equivalent cognitive output for a fraction of the time and money it takes to raise, educate and train a human, the market will inevitably choose the silicon option. When human development is framed as an inefficient trade-off against GPU clusters, the “responsibility” to protect human-centric systems naturally erodes. The goal ceases to be the expansion of human capability and becomes making the “expensive” human obsolete for the sake of margin.
This is why the Responsibility Index is falling. Responsibility is expensive. It requires friction, audits, pauses, and the occasional decision not to release a product. In the frantic atmosphere of the summit in India, and of the earlier summits in Paris and Seoul, friction is the enemy. The focus has turned entirely to infrastructure—energy grids, chip production and data sovereignty. The questions are no longer about the morality of the algorithm, but about who owns the pipeline it runs through.
We have officially entered the industrialisation phase of AI. Just as the Industrial Revolution eventually stopped caring about the craftsmanship of the individual weaver and focused on the output of the loom, the AI revolution is moving beyond the “craft” of responsible coding to the brute force of scaling laws. The REAIM summit at The Hague felt like a warning; recent summits feel like ribbon-cutting ceremonies.
As the heavy machinery of global capital grinds into gear, the voices calling for a pause or a safety check grow quieter, drowned out by the hum of cooling fans in sprawling data centres. The technology is undoubtedly getting smarter, but the wisdom behind its deployment seems to diminish with each new summit.
Published – 28 February 2026 08:00 IST





