Responsible AI Policy Study

PI(s): Balaraman Ravindran


NITI Aayog serves as the apex public policy think tank of the Government of India and the nodal agency tasked with catalyzing economic development and fostering cooperative federalism by involving the State Governments of India in the economic policy-making process through a bottom-up approach.

The National Strategy for Artificial Intelligence (NSAI) document, published by NITI Aayog in June 2018, recommended establishing clear mechanisms to ensure that AI is used responsibly. Instilling trust in the functioning of AI systems was identified as a critical enabling factor for large-scale adoption in a manner that harnesses the best the technology has to offer while protecting citizens. The document also underlined the need for a fine balance between protecting society (individuals and communities) and not stifling research and innovation in the field. The future of AI is shaped by a diverse group of stakeholders, including researchers, private organisations, governments, standard-setting bodies, regulators and citizens. Around the world, various countries and organisations have defined principles to guide the responsible management of AI for these stakeholders.

NITI Aayog commissioned studies to develop strategies and approaches for ensuring the responsible use of AI, and published two documents in this area. Prof. Balaraman Ravindran and his team from RBCDSAI, IIT Madras contributed to both documents published by NITI Aayog: Responsible AI Approach Document for India Part 1: Principles for Responsible AI (published in February 2021) and Part 2: Operationalizing Principles for Responsible AI (published in August 2021). Part 3 of the Responsible AI approach was put forth by NITI Aayog as a discussion paper in November 2022, addressing the adoption of facial recognition technologies. The paper discusses the challenges, risks, opportunities and guidelines involved in the design, development and deployment of facial recognition technologies.

The National Association of Software and Service Companies (NASSCOM) has also published its own views and guidelines on Responsible AI. NASSCOM's Responsible AI Governance Framework helps AI-led enterprises identify and assess risks as they develop, deploy and monitor AI solutions. NASSCOM has also published a Responsible AI Architect's Guide prescribing ethical best practices for the responsible design, development, deployment and adoption of AI solutions.

At the global level, the OECD AI Policy Observatory (OECD.AI) and the United Nations Educational, Scientific and Cultural Organization (UNESCO) have issued ethics guidelines for the development and use of AI. In May 2019, the OECD published its AI Principles, intended to promote innovation and trustworthiness grounded in human rights and democratic values. The principles are: Inclusive growth, sustainable development and well-being; Human-centred values and fairness; Transparency and explainability; Robustness, security and safety; and Accountability. In November 2021, UNESCO published its Recommendation on the Ethics of Artificial Intelligence, which sets out the principles of: Proportionality and Do No Harm; Safety and Security; Fairness and Non-discrimination; Sustainability; Right to Privacy and Data Protection; Human Oversight and Determination; Transparency and Explainability; Responsibility and Accountability; Awareness and Literacy; and Multi-stakeholder and Adaptive Governance and Collaboration.