CIS @cis_india
Today, @NITIAayog released a Discussion Paper titled ‘National Strategy for Artificial Intelligence’. #AIforAll
niti.gov.in/writereaddata/…
We provide our initial thoughts on the paper here. 1/n
We welcome this initiative by @NITIAayog, though a call for comments would have been a useful addition. The paper takes important steps forward from the #AI Task Force report released earlier this year by the DIPP, Ministry of Commerce and Industry (dipp.nic.in/sites/default/…). 2/n
This paper attempts a more holistic look at a broader range of issues concerning AI, including #regulation, #ethics, #fairness, #transparency and #accountability. However, a number of issues remain. 3/n
To begin with, the paper references several foreign policies, standards and practices to address challenges in #AI, without going into detail or contextualising them for implementation in India. 4/n
There are no specific recommendations or detailed plans to implement and improve #openstandards, #opendata, or #FOSS. While the lack of #opendata is mentioned as a challenge to implementing #AI in #healthcare, no alternatives or solutions are provided. 5/n
The paper concludes that self-regulation is sufficient to regulate #AI, instead of considering a wider spectrum of regulatory approaches and tools. This is in contrast to the recommendations of the #IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. 6/n
The section on transparency/opening the #blackbox has several lacunae. Source code must be made available in certain circumstances, since #ExplainableAI alone cannot solve all problems of #transparency. 7/n
First, AI/algorithms used by the government must, to a required and acceptable extent, be available in the public domain for audit (if not released as #FOSS), particularly for uses that impinge on fundamental rights. 8/n
Second, if the AI/algorithm is used in the private sector, the Indian Copyright Act provides a right to reverse engineer it, which the paper does not account for. 9/n
Furthermore, if the AI/algorithm was involved in the commission of a crime or violation of human rights, law enforcement officials and regulators will need access to the source code. 10/n
On problems of fairness/bias, the paper makes no mention of regulatory tools such as a) self-certification, b) certification by a self-regulatory body, c) discrimination impact assessments, or d) investigations by the privacy regulator (a sketch of one such assessment follows below). 11/n
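As an illustration of option (c), here is a minimal sketch in Python of what a discrimination impact assessment could automate: the "80% rule" (disparate impact ratio) applied to a model's decisions. The data, names and threshold are our own hypothetical example, not anything proposed in the paper.

```python
# Hypothetical sketch of a discrimination impact assessment:
# compute each group's selection rate relative to the best-off group
# and flag any group below the conventional 80% ("four-fifths") rule.

def disparate_impact_ratios(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Selection rate per group, normalised to the highest-rate group."""
    counts: dict[str, list[int]] = {}
    for group, selected in outcomes:
        total, positive = counts.setdefault(group, [0, 0])
        counts[group] = [total + 1, positive + int(selected)]
    rates = {g: pos / total for g, (total, pos) in counts.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical loan-approval decisions: (group label, approved?)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
for group, ratio in disparate_impact_ratios(decisions).items():
    flag = "review" if ratio < 0.8 else "ok"  # 80% rule threshold
    print(f"group {group}: impact ratio {ratio:.2f} -> {flag}")
```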
The paper does not recommend kill switches, which should be mandatory for all kinetic #AI systems. Additionally, there is no recommendation for a mandatory human-in-the-loop in all systems posing significant risks to safety and #humanrights (a sketch of both safeguards follows below). 12/n
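A minimal sketch, on our own assumptions, of how these two missing safeguards could look in code: a kill switch that halts all actions, and a human-in-the-loop gate that holds high-risk actions for review. All class and action names here are hypothetical.

```python
# Hypothetical sketch: a kill switch plus a human-in-the-loop gate
# for actions flagged as high-risk (e.g. kinetic systems).

import threading

class GuardedController:
    def __init__(self, high_risk_actions: set[str]):
        self._killed = threading.Event()  # kill switch: once set, nothing runs
        self._high_risk = high_risk_actions

    def kill(self) -> None:
        """Hard stop: no further actions execute after this."""
        self._killed.set()

    def execute(self, action: str, human_approved: bool = False) -> str:
        if self._killed.is_set():
            return f"{action}: blocked (kill switch engaged)"
        if action in self._high_risk and not human_approved:
            return f"{action}: held for human review"  # human-in-the-loop gate
        return f"{action}: executed"

ctl = GuardedController(high_risk_actions={"kinetic_response"})
print(ctl.execute("route_traffic"))                          # executed
print(ctl.execute("kinetic_response"))                       # held for human review
print(ctl.execute("kinetic_response", human_approved=True))  # executed
ctl.kill()
print(ctl.execute("route_traffic"))                          # blocked
```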
The law chapter of the #IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems has been ignored in favour of the ‘Personal #Data and Individual Access Control’ chapter of Ethically Aligned Design as the recommended international standard. 13/n
The call for sectoral regulatory frameworks is silent on the regulations themselves. This could easily delay implementation, with frameworks being called for before any sector-specific regulation is produced. 14/n
While this paper endorses the 7 #dataprotection principles given by the Justice Srikrishna Committee, we believe that these principles are generic and not specific to #dataprotection. 15/n
Our broader response to the Justice Srikrishna Committee’s report can be seen here: cis-india.org/internet-gover… 16/n
One of the recommendations in the paper is focused on spreading public awareness, while there is no explicit call for AI-specific regulation. Awareness is essential, but it must complement regulation, especially given India’s limited regulatory budget. 17/n
The paper lacks information on the use of #AI in the military, which is worrying since India is chairing the Group of Governmental Experts on Lethal Autonomous Weapons Systems (#LAWS) in 2018. 18/n
The paper recommends corporate data sharing for “social good” and making #datasets from the social sector publicly available. However, there is no mention of privacy-enhancing technologies/standards such as pseudonymisation, anonymisation standards or differential privacy (a sketch of the latter follows below). 19/n
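To show what even the simplest of these technologies involves, here is a minimal sketch in Python of differentially private release of a count via the Laplace mechanism. The dataset and the epsilon value are hypothetical; this illustrates the technique and is not a proposal from the paper.

```python
# Hypothetical sketch: release a count with Laplace noise so that any one
# record's presence or absence has only a bounded effect on the output.

import random

def dp_count(values: list[bool], epsilon: float) -> float:
    """Noisy count; the sensitivity of a counting query is 1."""
    scale = 1.0 / epsilon  # Laplace scale = sensitivity / epsilon
    # Difference of two exponentials with rate 1/scale is Laplace(0, scale).
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return sum(values) + noise

# Hypothetical social-sector records: did each household receive a benefit?
records = [True, False, True, True, False, True, False, True]
print(f"noisy count (epsilon=0.5): {dp_count(records, epsilon=0.5):.1f}")
```

A smaller epsilon means more noise and stronger privacy; choosing it is a policy decision, which is exactly the kind of standard the paper leaves unaddressed.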
Section 3(k) of the Patents Act excludes algorithms from patentability. The paper advocates reworking the existing IP framework to allow patenting of AI/algorithms, which would require a revised standard. 20/n
No concrete reasons have been provided to answer the question of whether #AI should be patentable at all, or to what extent. There also needs to be a standard that distinguishes between #AI #algorithms and non-AI algorithms. 21/n
The #CRI guidelines require novel hardware to accompany software for it to be patented as a computer programme. Amending either will have knock-on effects in other domains, so making these amendments will be far more difficult than it may seem. 22/n
Additionally, given that there is no historical precedent showing patent rights are required to incentivise the creation of #AI, alternative investment protection mechanisms may be a wiser way forward. 23/n
Facilitating rampant patenting will create a patent thicket that prevents other companies from using the #AI. 24/n
The paper identifies five focus areas; manufacturing/production technology is not among them. Regulatory suggestions are also absent for the retail and #manufacturing industries, despite manufacturing being said to have the largest user base. 25/n
The paper positions safe harbour as the only alternative to strict liability; however, depending on the application of #AI, different legal principles such as negligence, product liability and malpractice may be considered. 26/n
Safe harbours may not be the optimal solution, especially where #AI use is untested, and should be restricted to select situations where liability arises from user changes. 27/n
In safety-critical #AI uses such as #healthcare and automobiles, appropriate testing and QA standards need to evolve before safe harbours and immunity may be discussed. 28/n
The paper mentions historical bias as a challenge that must be dealt with, but suggests no detailed or satisfactory solutions, nor does it address historical bias in a specifically #Indian context. 29/n
The paper suggests sophisticated surveillance systems to monitor people’s movements, and the use of social media intelligence platforms for crime prevention, but does not address in detail the issue of #privacy and the use of publicly available #data for surveillance. 30/n
The paper also does not advocate that such actions must be in line with international #humanrights norms. CIS’s suggestions for surveillance reform can be found here: cis-india.org/internet-gover… 31/n
Additionally, the paper suggests predicting “potential activities that could disrupt public #peace,” but does not elaborate on what measures can be taken based on such predictions. 32/n
This is at variance with constitutional standards of due process and the criminal law principles of reasonable ground and reasonable suspicion. Further, such methods will raise issues of judicial scrutiny, since the predictions are inscrutable to courts. 33/n
There’s no recommendation of immunities or incentives for whistleblowers or researchers to report on #privacy breaches and vulnerabilities. 34/n
The technocratic view that systems will self-correct as training #data grows is inconsistent with #FundamentalRights. #Policy objectives of #AI innovation cannot come at the cost of an interim denial of rights and services. 35/n
The paper provides blanket recommendations without examining their viability in each sector; e.g., what works for #healthcare might not work for agriculture. Additionally, societal, cultural and sectoral challenges are barely touched upon. 36/n
The #government should be able to prevent the formation of monopolies by restraining companies from hoarding user #data. To that end, companies should be required to share anonymised #data (subject to end-user privacy) with other companies (a sketch of one anonymisation step follows below). 37/n
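A minimal sketch, on our own assumptions, of one anonymisation step such sharing would need: replacing direct identifiers with keyed pseudonyms and coarsening quasi-identifiers before the data leaves the company. The key, field names and record here are hypothetical.

```python
# Hypothetical sketch: pseudonymise a user record before sharing it.

import hashlib
import hmac

SECRET_KEY = b"held-by-the-data-holder-only"  # hypothetical; never shared

def pseudonymise(record: dict) -> dict:
    # Keyed hash: stable pseudonym, not reversible without the key.
    token = hmac.new(SECRET_KEY, record["user_id"].encode(), hashlib.sha256)
    return {
        "user_token": token.hexdigest()[:16],
        "age_band": f"{(record['age'] // 10) * 10}s",  # coarsen quasi-identifier
        "purchases": record["purchases"],
    }

print(pseudonymise({"user_id": "u-1042", "age": 34, "purchases": 7}))
# e.g. {'user_token': '…', 'age_band': '30s', 'purchases': 7}
```

Pseudonymisation alone is weak against re-identification; it would need to sit alongside the anonymisation standards and differential privacy mentioned above.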
In general, accountability has not been recommended for #data collection and use, development of solutions, security and responses, or integration within work or as a service. Accountability is mentioned only in terms of the explainability of #AI. 38/n
#Transparency within human-AI interaction is absent from the paper. Key questions on transparency, such as whether an #AI should disclose its identity to a human, have not been addressed. 39/n
The paper sees the use of #autonomousAI as an economic boost, but does not sufficiently address the potential risks involved. A welcome recommendation would be for all autonomous #AI to undergo #humanrights impact assessments. 40/n
The authors of these tweets are @pranavbarkwhip, @says_shweta, @Swagam_Dasgupta, @ambersinha07, @sunil_abraham, @elonnai, @swarajpb, Senthil Kumar, and Vishnu Ramachandran. Thanks to @VidushiMarda for her leadership in this area.