On 13 August 2025, the Reserve Bank of India released the report of the FREE-AI Committee — the Framework for Responsible and Ethical Enablement of Artificial Intelligence. The committee, constituted in December 2024, set out a coherent policy architecture for how AI should be designed, deployed, and governed by entities the RBI regulates: commercial banks, cooperative banks, NBFCs, payment system operators, and other financial institutions under RBI supervision.
This guide unpacks the framework: the survey baseline that motivated it, the 7 Sutras at its foundation, the 6 strategic Pillars that organise the work, the headline recommendations, how it sits next to the DPDP Act and broader Indian AI regulation, and the operational steps RBI-regulated entities (REs) should take now even though the recommendations are not yet binding regulation.
The Baseline — Where Indian BFSI Actually Is
The committee's report draws on an RBI survey of regulated entities. The headline numbers: 20.8% of surveyed REs are already deploying AI in production, predominantly for customer support, sales, credit underwriting, and cybersecurity, and 67% expressed interest in exploring AI use cases. India's BFSI sector is no longer experimenting at the margins; it is operationalising AI at the centre of how credit is assessed, fraud is detected, and customers are served.
The 23 April 2026 review led by Finance Minister Nirmala Sitharaman with the RBI, MeitY, and bank chiefs on systemic AI risk in the financial sector underscored the urgency. FREE-AI is the RBI's structured response to a sector that is past the question of whether to adopt and into the harder question of how to govern.
The 7 Sutras — Foundational Principles
The Sutras are the seven overarching principles that underpin every recommendation in the report. They are not technology-specific; they are the values the RBI expects AI deployments in the regulated sector to express:
- Public trust as the foundation. Trust is the asset; AI systems must be designed to build and sustain it.
- Disclosure and the right to override. Customers must know when AI is involved in decisions, and individuals retain the final authority to override AI determinations.
- Responsible innovation, not cautionary restraint. The framework explicitly favours socially useful innovation over excessive caution — a pragmatic stance that avoids over-regulation while demanding accountability.
- Fairness, equity, and inclusion by design. AI systems must be tested for bias across protected and relevant attributes before they enter production.
- Understandability. Deploying entities must understand how the AI systems they operate work — outsourcing comprehension to vendors is not acceptable.
- Safety, sustainability, and resilience. AI systems must be safe under physical and cyber risk, including model-specific attack surfaces like prompt injection and data poisoning.
- Accountability regardless of autonomy. The deploying entity is accountable for AI decisions, no matter how autonomous the system. There is no shifting blame to the model.
The 6 Pillars — Where the Work Sits
The Pillars organise the 26 recommendations into the six dimensions along which a regulated entity must build capability:
1. Infrastructure
Sector-wide AI data infrastructure and innovation sandboxes. The recommendation includes treating financial-sector data infrastructure as a Digital Public Infrastructure (DPI) — a shared backbone the sector can rely on rather than each RE building from scratch.
2. Policy
Clear institutional and national AI policies that guide adoption and risk management. At the RE level, this translates to board-approved AI policies that are reviewed and updated as the technology and risk landscape evolves.
3. Capacity
Building AI skills, knowledge sharing, and expertise in fairness and explainability across the sector. This is a workforce and education recommendation as much as a technology one.
4. Governance
Board-level accountability, reporting, and oversight of AI initiatives. Quarterly board reviews of AI risk are now expected practice for serious deployments. AI disclosures move from optional to part of the annual report.
5. Protection
Consumer-facing controls — disclosures of AI involvement, grievance and redressal mechanisms for AI-driven decisions, and fairness in outcomes that affect customers.
6. Assurance
Independent audits, impact assessments, and ongoing evaluations to prove that AI systems remain reliable, fair, and trustworthy after deployment. The audit profession is being asked to develop AI-specific assurance practices.
The Headline Recommendations
Among the 26 recommendations in the report, several stand out for their immediate operational implications for REs:
- AI disclosures in annual reports — REs to include AI-related disclosures covering governance frameworks, areas of AI adoption, consumer protection measures, and grievance redressal mechanisms.
- Board-approved AI policies — covering governance, AI lifecycle management, risk controls, and third-party vendor liabilities.
- AI sandbox — an environment to test, validate, and develop AI solutions in a controlled setting before production deployment.
- Financial-sector data infrastructure as DPI — to enable AI use cases at scale without each RE building isolated data foundations.
- Grievance and redressal — clear consumer paths to dispute AI-driven decisions and trigger human review.
- Independent audits and impact assessments — recurring third-party evaluation of AI systems for reliability, fairness, and risk.
Most of these are achievable with existing governance and engineering disciplines applied to AI specifically. The work is not invention; it is bringing AI inside the same operational rigour banks already apply to credit, fraud, and operational risk.
How FREE-AI Sits Next to DPDP and the Broader Stack
An RE building AI in India operates under multiple regimes simultaneously:
- DPDP Act 2023 and DPDP Rules 2025 govern personal-data processing. Phase 1 enforcement has been live since 14 November 2025; consent rules arrive in November 2026. See our DPDP Act AI compliance guide.
- FREE-AI governs AI deployment specifically inside RBI-regulated entities.
- Existing RBI master directions on outsourcing, IT governance, cyber security, and data localisation continue to apply.
- SEBI guidance applies to securities-market participants; IRDAI guidance applies to insurers.
- NITI Aayog's Principles for Responsible AI provide the broader directional framework.
The good news: these regimes are largely compatible. An AI governance programme built to satisfy FREE-AI naturally generates much of what DPDP and the cyber-security directions require — board approval, risk classification, audit trails, consumer disclosures, grievance paths. The trick is designing once, deliberately.
What an RE Should Do in the Next Six Months
Six concrete steps that get an RE from "aware of FREE-AI" to "operationally aligned":
- Inventory. List every AI deployment in production and pilot, classified by function, data sensitivity, customer-facing impact, and degree of automation.
- Board-approved AI policy. Drafted against the 7 Sutras, structured along the 6 Pillars, reviewed by the board and reissued annually.
- Governance owner. A named executive (often a Chief AI Officer or designated CRO/CDO) accountable for AI risk, with quarterly board reporting.
- Annual report disclosures. Drafted ahead of the next reporting cycle so the disclosures match operational reality.
- Internal AI sandbox. A non-production environment where new models, prompts, and agents are evaluated against ground truth before any customer touchpoint.
- Audit and impact-assessment programme. A repeatable framework for periodic independent review, using internal audit or third-party assurance.
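The inventory step above can be sketched as a simple classification record. Everything in this sketch is an assumption for illustration: the enum categories, field names, and tiering thresholds are not prescribed by FREE-AI, and a real RE would calibrate them against its own risk framework.

```python
from dataclasses import dataclass
from enum import Enum

class DataSensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    PERSONAL = 3       # DPDP-scope personal data
    FINANCIAL = 4      # credit, transaction, KYC data

class Automation(Enum):
    ADVISORY = 1       # human decides, AI suggests
    HUMAN_IN_LOOP = 2  # AI decides, human approves
    AUTONOMOUS = 3     # AI decides and acts

@dataclass
class AIDeployment:
    name: str
    function: str      # e.g. "credit underwriting", "fraud detection"
    sensitivity: DataSensitivity
    automation: Automation
    customer_facing: bool

    def risk_tier(self) -> str:
        """Illustrative tiering rule, not an RBI-prescribed formula."""
        score = self.sensitivity.value + self.automation.value
        if self.customer_facing:
            score += 2
        return "high" if score >= 7 else "medium" if score >= 4 else "low"

underwriter = AIDeployment(
    "retail-credit-scorer", "credit underwriting",
    DataSensitivity.FINANCIAL, Automation.HUMAN_IN_LOOP, True,
)
print(underwriter.risk_tier())  # high
```

A record like this gives the board report, the annual disclosure, and the audit programme a single source of truth, which is why the inventory is listed first among the six steps.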
For broader Responsible AI design that integrates RBI, DPDP, NITI Aayog, and the engineering controls underneath, see our Responsible AI in India guide and humaineeti's Responsible AI service.
The Direction of Travel
Specific recommendations from the FREE-AI report will likely be operationalised through RBI master directions and circulars over the next 12–24 months. The detail will differ in places; the direction is set. Mature REs are not waiting for binding rules — they are building the governance, disclosure, and assurance capabilities now, on the assumption that what is "expected practice" today is "regulated requirement" tomorrow. That is the safer bet.