The Centre for Internet and Society
http://editors.cis-india.org
These are the search results for the query, showing results 11 to 25.
Cryptocurrency Regulation in India – A brief history
http://editors.cis-india.org/internet-governance/blog/cryptocurrency-regulation-in-india-2013-a-brief-history
<b>In March 2020, the Supreme Court of India quashed the RBI order passed in 2018 that banned financial services firms from trading in virtual currency or cryptocurrency.
Keeping this policy window in mind, the Centre for Internet & Society will be releasing a series of blog posts and policy briefs on cryptocurrency regulation in India.
</b>
<p style="text-align: justify;" dir="ltr">The story of cryptocurrencies
started in 2008, when a paper titled “Bitcoin: A Peer to Peer Electronic
Cash System” was published by a pseudonymous developer, or group of
developers, under the name Satoshi Nakamoto. The actual network took
some time to start, with the first transactions taking place only in
January 2009. The first actual sale of an item using Bitcoin took place a
year later, when a user swapped 10,000 Bitcoin for two pizzas in 2010,
attaching a cash value to the cryptocurrency for the first time. By
2011 other cryptocurrencies began to emerge, with Litecoin, Namecoin
and Swiftcoin all making their debut. Meanwhile, Bitcoin, the
cryptocurrency that started it all, began to attract criticism after
claims emerged that it was being used on the so-called “dark web”,
particularly on sites such as Silk Road, as a means of payment for
illegal transactions. Over the next five years cryptocurrencies steadily
gained traction, with a growing number of transactions, and the price of
Bitcoin, the most popular cryptocurrency, shot up from around 5 dollars
at the beginning of 2012 to almost 20,000 dollars at the end of 2017.</p>
<p style="text-align: justify;" dir="ltr">Riding on the back of this
wave of popularity, a number of cryptocurrency exchanges started
operating in India between 2012 and 2017, providing much needed depth and
volume to the Indian cryptocurrency market. These included popular
exchanges such as Zebpay, Coinsecure, Unocoin, Koinex, Pocket Bits and
Bitxoxo. With the price of cryptocurrencies shooting up, and with their
increasing popularity and adoption by users outside their
traditional cult following, regulators worldwide began to take notice of
this new technology; in India, the RBI issued a Press Release cautioning
the public against dealing in virtual currencies, including Bitcoin, as
far back as 2013. However, transaction volumes and adoption of
cryptocurrencies in India picked up in earnest only after the
demonetisation of high value currency notes in November 2016, when
the government’s emphasis on digital payments pushed alternatives to
traditional online banking, such as cryptocurrencies, into the public
consciousness. Indian cryptocurrency exchanges started
acquiring users at a much higher pace, which drove up the volume of
cryptocurrency transactions on all Indian exchanges. The growing
popularity of cryptocurrencies and their adoption by large numbers of
Indian users prompted the RBI to issue another Press Release in February
2017, reiterating the concerns it had raised in its earlier Press
Release of 2013.</p>
<p style="text-align: justify;" dir="ltr">In October and November 2017,
two public interest petitions were filed in the Supreme Court of India,
one by Siddharth Dalmia and another by Dwaipayan Bhowmick: the former
asked the Supreme Court to restrict the sale and purchase of
cryptocurrencies in India, while the latter asked for cryptocurrencies in
India to be regulated. Both petitions are currently pending before the
Supreme Court.</p>
<p style="text-align: justify;" dir="ltr">In November 2017, the
Government of India constituted a high level Inter-ministerial Committee
under the chairmanship of Shri Subhash Chandra Garg, Secretary,
Department of Economic Affairs, Ministry of Finance, comprising
Shri Ajay Prakash Sawhney (Secretary, Ministry of Electronics and
Information Technology), Shri Ajay Tyagi (Chairman, Securities and
Exchange Board of India) and Shri B.P. Kanungo (Deputy Governor, Reserve
Bank of India). The mandate of the Committee was to study various
issues pertaining to virtual currencies and to propose specific actions
that may be taken in relation to them. The Committee submitted its
report in July 2019, recommending a ban on private cryptocurrencies in
India.</p>
<p style="text-align: justify;" dir="ltr">In December 2017, both the RBI
and the Ministry of Finance issued Press Releases cautioning the
general public about the dangers and risks associated with
cryptocurrencies, with the Ministry of Finance likening cryptocurrencies
to Ponzi schemes and declaring that they are not currencies or coins. It
should be noted that although, until the end of March 2018, the RBI and
the Finance Ministry had issued various Press Releases cautioning people
against the risks of cryptocurrencies, neither had taken any legal
action or issued any enforceable directions against them. All of this
changed with the RBI circular dated April 6, 2018, whereby the RBI
prohibited Commercial and Co-operative Banks, Payments Banks, Small
Finance Banks, NBFCs, and Payment System Providers from dealing in
virtual currencies themselves, and directed them to stop providing
services to all entities that deal with virtual currencies.</p>
<p style="text-align: justify;" dir="ltr">The effect of the circular was
that cryptocurrency exchanges, which relied on normal banking channels
for sending and receiving money to and from their users, could not
access any banking services within India. This essentially crippled
their business operations since converting cash to cryptocurrencies and
vice versa was an essential part of their operations. Even pure
cryptocurrency exchanges which did not deal in fiat currency, were
unable to carry out their regular operations such as paying for office
space, staff salaries, server space, vendor payments, etc. without
access to banking services. </p>
<p>As a result, the operations of cryptocurrency exchanges took a severe
hit, and the number of transactions on these exchanges reduced
substantially. People who had bought cryptocurrencies on these exchanges
as an investment were forced to sell their crypto assets and cash out
before they lost access to banking facilities. The cryptocurrency
exchanges themselves found it hard to sustain operations in the face of
the dual hit of reduced transaction volumes and loss of access to
banking services. Faced with such an existential threat, a number of
exchanges that were members of the Internet and Mobile Association of
India (IAMAI) filed a writ petition in the Supreme Court on May 15,
2018, titled Internet and Mobile Association of India v. Reserve Bank of
India. The final arguments in the case were heard by the Supreme Court
of India in January 2020, and the judgment is awaited. If the Supreme
Court agrees with the arguments of the petitioners, cryptocurrency
exchanges would be able to restart operations in India; as a result, the
cryptocurrency ecosystem in India may be revived and cryptocurrencies
may become a viable investment alternative again.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/cryptocurrency-regulation-in-india-2013-a-brief-history'>http://editors.cis-india.org/internet-governance/blog/cryptocurrency-regulation-in-india-2013-a-brief-history</a>
</p>
No publisher | vipul | Cybersecurity, internet governance, Bitcoin, Internet Governance, Cryptocurrencies, Cyber Security | 2020-03-05T18:36:09Z | Blog Entry
A Compilation of Research on the PDP Bill
http://editors.cis-india.org/internet-governance/blog/compilation-of-research-on-data-protection
<b>The most recent step in India’s initiative to create an effective and comprehensive Data Protection regime was the call for comments to the Personal Data Protection Bill, 2019, which closed last month. Leading up to the comments, CIS has published numerous research pieces with the goal of providing a comprehensive overview of how this legislation would place India within the global scheme, and how the local situation has developed, as well as analysing its impacts on citizens’ rights.</b>
<p> </p>
<p>In addition to general and clause-by-clause comments and recommendations, we
have compiled an annotated version of the Personal Data Protection
Bill, which lays out our <a class="external-link" href="https://cis-india.org/internet-governance/blog/comments-to-the-personal-data-protection-bill-2019">commentary</a> in an easy-to-follow format.</p>
<p> </p>
<p><img src="https://cis-india.org/internet-governance/pdp-bill-compilation-post-image/" alt="null" width="100%" /></p>
<p> </p>
<p>Below, you can find our other recent research on Data Protection:</p>
<p> </p>
<ul>
<li>Pallavi Bedi has put together a <a class="external-link" href="https://cis-india.org/internet-governance/blog/divergence-between-the-general-data-protection-regulation-and-the-personal-data-protection-bill-2019">note</a> on the divergence between the EU’s General Data Protection Regulation (GDPR) and the Personal Data Protection Bill.</li>
<li>In addition, Pallavi has also <a class="external-link" href="https://cis-india.org/internet-governance/blog/comparison-of-the-personal-data-protection-bill-with-the-general-data-protection-regulation-and-the-california-consumer-protection-act-2">contrasted</a> the Personal Data Protection Bill with the GDPR and the California Consumer Protection Act, in the contexts of jurisdiction and scope, rights of the data principal, obligations of data fiduciaries, exemptions, the data protection authority, and breach of personal data.</li>
<li>On IAPP’s blog <em>Privacy Perspectives</em>, D. Shweta Reddy has <a class="external-link" href="https://iapp.org/news/a/grade-sheet-for-indias-adequacy-status/">assessed</a> whether the Personal Data Protection Bill 2019 is sufficient for India to receive adequacy status from the EU.</li>
<li>Along with Justin Sherman, Arindrajit Basu has <a class="external-link" href="https://www.lawfareblog.com/key-global-takeaways-indias-revised-personal-data-protection-bill">outlined</a> the key global takeaways from the Personal Data Protection Bill 2019 on <em>Lawfare</em>.</li>
<li>On <em>The Diplomat</em>, Arindrajit has also <a class="external-link" href="https://thediplomat.com/2020/01/the-retreat-of-the-data-localization-brigade-india-indonesia-and-vietnam/">traced</a> the narrowing localization provisions in India, Vietnam, and Indonesia, and studied the actors and geopolitical tussles that have shaped these provisions.</li>
<li>Through a string of publicly available submissions, press statements, and other media reports, Arindrajit and Amber Sinha have <a class="external-link" href="https://www.epw.in/engage/article/politics-indias-data-protection-ecosystem">tracked</a> the political evolution of the data protection ecosystem in India on <em>EPW Engage</em>, and how this has shaped, and will continue to shape, legislative and policy developments.</li>
<li>Gurshabad Grover and Tanaya Rajwade have <a class="external-link" href="https://thewire.in/tech/indias-privacy-bill-regulates-social-media-platforms">written</a> on <em>The Wire</em> about how the Personal Data Protection Bill regulates social media.</li>
<li>Amber was also a guest on <em>Suno India’s <a class="external-link" href="https://www.sunoindia.in/cyber-democracy/personal-data-protection-bill-what-does-it-mean-for-your-right-to-privacy/">Cyber Democracy podcast</a></em>, with Srinivas Kodali, to discuss how the latest version of the Personal Data Protection Bill will impact the right to privacy.</li>
</ul>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/compilation-of-research-on-data-protection'>http://editors.cis-india.org/internet-governance/blog/compilation-of-research-on-data-protection</a>
</p>
No publisher | pranav | internet governance, Internet Governance, Data Protection, Privacy | 2020-03-05T08:04:24Z | Blog Entry
Governing ID: Kenya’s Huduma Namba Programme
http://editors.cis-india.org/internet-governance/blog/governing-id-kenya2019s-huduma-namba-programme
<p>In our fourth case-study, we use our Evaluation Framework for Digital ID to examine the use of Digital ID in Kenya.</p>
<p>Read the <a class="external-link" href="https://digitalid.design/evaluation-framework-case-studies/kenya.html">case-study</a> or download as <a href="http://editors.cis-india.org/internet-governance/digital-id-kenya-case-study" class="internal-link" title="Digital ID Kenya Case Study">PDF</a>.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/governing-id-kenya2019s-huduma-namba-programme'>http://editors.cis-india.org/internet-governance/blog/governing-id-kenya2019s-huduma-namba-programme</a>
</p>
No publisher | amber | internet governance, Internet Governance, Digital ID, Digital Identity | 2020-03-02T13:19:15Z | Blog Entry
Governing ID: Use of Digital ID in the Healthcare Sector
http://editors.cis-india.org/internet-governance/blog/governing-id-use-of-digital-id-in-the-healthcare-sector
<p>In our third case-study, we use our Evaluation Framework for Digital ID to examine the use of Digital ID in the healthcare sector.</p>
<p><img src="https://cis-india.org/internet-governance/image-digital-id-healthcare-case-study/" alt="null" width="100%" /></p>
<p>Read the <a class="external-link" href="https://digitalid.design/evaluation-framework-case-studies/healthcare.html">case-study</a> or download as <a href="http://editors.cis-india.org/internet-governance/digital-id-healthcare-case-study" class="internal-link" title="Digital ID Healthcare Case Study">PDF</a>.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/governing-id-use-of-digital-id-in-the-healthcare-sector'>http://editors.cis-india.org/internet-governance/blog/governing-id-use-of-digital-id-in-the-healthcare-sector</a>
</p>
No publisher | Shruti Trikanad | internet governance, Internet Governance, Digital ID, Digital Identity | 2020-03-02T13:21:22Z | Blog Entry
Governing ID: India’s Unique Identity Programme
http://editors.cis-india.org/internet-governance/governing-id-india2019s-unique-identity-programme
<div class="content">
<p>In our second case-study, we use our Evaluation Framework for Digital ID to assess India’s Unique Identity Programme.</p>
<p>Read the <a class="external-link" href="https://digitalid.design/evaluation-framework-case-studies/india.html">case-study</a> or download as <a href="http://editors.cis-india.org/internet-governance/digital-id-india-case-study" class="internal-link" title="Digital ID India Case Study">PDF</a>.</p>
</div>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/governing-id-india2019s-unique-identity-programme'>http://editors.cis-india.org/internet-governance/governing-id-india2019s-unique-identity-programme</a>
</p>
No publisher | Vrinda Bhandari | internet governance, Internet Governance, Digital ID, Digital Identity | 2020-03-02T11:38:51Z | Blog Entry
Governing ID: Use of Digital ID for Verification
http://editors.cis-india.org/internet-governance/blog/governing-id-2028use-of-digital-id-for-verification
<p>This is the first in a series of case studies, using our recently-published <a href="https://digitalid.design/evaluation-framework-02.html">Evaluation Framework for Digital ID</a>. It looks at the use of digital identity programmes for the purpose of verification, often using the process of deduplication.</p>
<p><img src="https://cis-india.org/internet-governance/image-governing-id-use-of-digital-id-for-verification/" alt="null" width="100%" /></p>
Read the <a class="external-link" href="https://digitalid.design/evaluation-framework-case-studies/verification.html">case-study</a> or download as <a href="http://editors.cis-india.org/internet-governance/use-of-digital-id-for-verification" class="internal-link" title="Use of Digital ID for Verification">PDF</a>.
<p> </p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/governing-id-2028use-of-digital-id-for-verification'>http://editors.cis-india.org/internet-governance/blog/governing-id-2028use-of-digital-id-for-verification</a>
</p>
No publisher | Shruti Trikanad | internet governance, Internet Governance, Digital ID, Digital Identity | 2020-03-02T11:16:19Z | Blog Entry
Governing ID: A Framework for Evaluation of Digital Identity
http://editors.cis-india.org/internet-governance/blog/governing-id-a-framework-for-evaluation-of-digital-identity
<p>As governments across the globe implement new and foundational
digital identification systems (Digital ID), or modernize existing ID
programs, there is an urgent need for more research and discussion about
appropriate uses of Digital ID systems. This significant momentum for
creating Digital ID has been accompanied by concerns about the privacy,
surveillance and exclusion harms of state-issued Digital IDs in several
parts of the world, resulting in campaigns and litigation in countries
such as the UK, India, Kenya, and Jamaica. Given the sweeping range of
considerations required to evaluate Digital ID projects, it is necessary
to formulate evaluation frameworks that can be used for this purpose.</p>
<p>This work began with the question of what the appropriate uses
of Digital ID can be, but through the research process, it became clear
that the question of use cannot be divorced from the fundamental
attributes of Digital ID systems and their governance structures. This
framework provides tests which can be used to evaluate the governance
of Digital ID across jurisdictions, as well as to determine whether a
particular use of Digital ID is legitimate. Through three kinds of
checks — Rule of Law tests, Rights based tests, and Risk based tests —
this scheme is a ready guide for the evaluation of Digital ID.</p>
<p><img src="https://cis-india.org/internet-governance/image-governing-id-principles-for-evalution/" alt="null" width="100%" /></p>
<p> </p>
<p>View the <a class="external-link" href="https://digitalid.design/evaluation-framework-02.html">framework</a> or download as <a href="http://editors.cis-india.org/internet-governance/governing-id-principles-for-evalution" class="internal-link" title="Governing ID: Principles for Evaluation">PDF</a>.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/governing-id-a-framework-for-evaluation-of-digital-identity'>http://editors.cis-india.org/internet-governance/blog/governing-id-a-framework-for-evaluation-of-digital-identity</a>
</p>
No publisher | Vrinda Bhandari, Shruti Trikanad, and Amber Sinha | internet governance, Internet Governance, Digital ID, Digital Identity | 2020-03-02T13:22:43Z | Blog Entry
Governing ID: Introducing our Evaluation Framework
http://editors.cis-india.org/internet-governance/blog/governing-id-introducing-our-evaluation-framework
<div class="content">
<p>With the rise of national digital identity systems (Digital ID) across the world, there is a growing need to examine their impact on human rights. In several instances, national Digital ID programmes started with a specific scope of use, but have since been deployed for different applications, and in different sectors. This raises the question of how to determine appropriate and inappropriate uses of Digital ID. In April 2019, our research began with this question, but it quickly became clear that a determination of the legitimacy of uses hinged on the fundamental attributes and governing structure of the Digital ID system itself. Our evaluation framework is intended as a series of questions against which Digital ID may be tested. We hope that these questions will inform the trade-offs that must be made while building and assessing identity programmes, to ensure that human rights are adequately protected.</p>
<h4>Rule of Law Tests</h4>
<p>Foundational Digital ID must only be implemented along with a
legitimate regulatory framework that governs all aspects of Digital ID,
including its aims and purposes, the actors who have access to it, etc.
In the absence of such a framework, there is nothing that precludes
Digital IDs from being leveraged by public and private actors for
purposes outside the intended scope of the programme. Our rule of law
principles mandate that the governing law should be enacted by the
legislature, be devoid of excessive delegation, be clear and accessible
to the public, and be precise and limiting in its scope for discretion.
These principles are substantiated by the criticism that met the Kenyan
Digital ID, the Huduma Namba, when it was legalized through
a Miscellaneous Amendment Act, an instrument meant only for small or
negligible amendments and typically passed without any deliberation.
This set of tests responds to the haste with which Digital ID has been
implemented, often in the absence of an enabling law that adequately
addresses its potential harms.</p>
<h4>Rights based Tests</h4>
<p>Digital ID, because of its collection of personal data and its
determination of the eligibility and rights of users, intrinsically
involves restrictions on certain fundamental rights. The use of Digital
ID for essential functions of the State, including the delivery of
benefits and welfare and the maintenance of civil and sectoral records,
enhances the impact of these restrictions. Accordingly, the entire
identity framework, including its architecture, uses, actors, and
regulators, must be evaluated at every stage against the rights it
potentially violates. Only then will we be able to determine whether
such violation is necessary and proportionate to the benefits it offers.
In Jamaica, the National Identification and Registration Act, which
mandated citizens’ biometric enrolment at the risk of criminal
sanctions, was held to be a disproportionate violation of privacy, and
therefore unconstitutional.</p>
<h4>Risk based Tests</h4>
<p>Even with a valid rule of law framework that seeks to protect
rights, the design and use of Digital ID must be based on an analysis of
the risks that the system introduces. This could take the form of
choosing between a centralized and federated data-storage framework,
based on the effects of potential failure or breach, or of restricting
the uses of the Digital ID to limit the actors that will benefit from
breaching it. Aside from the design of the system, the regulatory
framework that governs it should also be tailored to the potential risks
of its use. The primary rationale behind a risk assessment for an
identity framework is that it should be tested not merely against
universal metrics of legality and proportionality, but also against an
examination of the risks and harms it poses. Implicit in a risk based
assessment is also the requirement of implementing a responsive
mitigation strategy to the risks identified, both while creating and
governing the identity programme.</p>
<p>Digital ID programmes create an inherent power imbalance
between the State and its residents because of the personal data they
collect and the consequent determination of significant rights,
potentially creating risks of surveillance, exclusion, and
discrimination. The accountability and efficiency gains they promise
must not lead to hasty or inadequate implementation.</p>
</div>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/governing-id-introducing-our-evaluation-framework'>http://editors.cis-india.org/internet-governance/blog/governing-id-introducing-our-evaluation-framework</a>
</p>
No publisher | Shruti Trikanad | internet governance, Internet Governance, Digital ID, Digital Identity | 2020-03-02T08:03:49Z | Blog Entry
How to Shut Down Internet Shutdowns
http://editors.cis-india.org/internet-governance/events/how-to-shutdown-internet-shutdowns
<b>This talk will focus on the challenges and opportunities for research on internet shutdowns after the judgement of the Supreme Court in Anuradha Bhasin v. Union of India. Stepping beyond the judgement, there will be a wider discussion on the practice of whitelists and the blocking powers of the central government.
</b>
<p> </p>
<p><img src="https://cis-india.org/How-to-Shut-Down-Internet-Shutdowns-Details/" alt="null" width="100%" /></p>
<p> </p>
<h3><strong>About the Speaker</strong> </h3>
<p>Apar Gupta is the Executive Director of the Internet Freedom Foundation.</p>
<p>Apar has been fighting the good fight for digital rights. While in law school almost 20 years ago, he wrote a legal commentary on the IT Act that is now in its third edition. As a lawyer in the Supreme Court, he worked on landmark cases such as those on Section 66A, Intermediary Liability, Internet Shutdowns, and the Right to Privacy.</p>
<p>He also helped create public campaigns to advance net neutrality, reform defamation laws, fight Internet shutdowns and create a privacy statute. Apar previously ran his own successful law firm, was profiled in Outlook Magazine and listed in Forbes India's list of 30 under 30. He has also worked as a commercial litigator and partner in top law firms, written papers cited widely in local and international publications and taught courses at NLS and NLU.</p>
<p>RSVP <a class="external-link" href="https://forms.gle/CGei6wNUbR4t92549">here</a>, or send an email to Torsha (torsha@cis-india.org).</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/events/how-to-shutdown-internet-shutdowns'>http://editors.cis-india.org/internet-governance/events/how-to-shutdown-internet-shutdowns</a>
</p>
No publisher | pranav | internet governance, Event, Internet Governance | 2020-02-03T11:13:12Z | Event
Automated Facial Recognition Systems and the Mosaic Theory of Privacy: The Way Forward
http://editors.cis-india.org/internet-governance/automated-facial-recognition-systems-and-the-mosaic-theory-of-privacy-the-way-forward
<b> Arindrajit Basu and Siddharth Sonkar have co-written this blog as the third of their three-part blog series on AI Policy Exchange under the parent title: Is there a Reasonable Expectation of Privacy from Data Aggregation by Automated Facial Recognition Systems? </b>
<p> </p>
<p><strong>The Mosaic Theory of Privacy</strong></p>
<p>Whether the data collected by the AFRS should be treated similarly to
face photographs taken for the purposes of ABBA is not clear in the
absence of judicial opinion. The AFRS would ordinarily collect
significantly more data than facial photographs do during authentication.
This can be explained with the help of the <em><a href="https://www.lawfareblog.com/defense-mosaic-theory" rel="noreferrer noopener" target="_blank">mosaic theory of privacy</a></em>.</p>
<p>The mosaic theory of privacy suggests that data collected about an
individual over long durations can be qualitatively different from single
instances of observation. It argues that aggregating data from different
instances can create a picture of an individual that affects her
reasonable expectation of privacy. This is because a mere slice of
information reveals far less than the same information does when
contextualised within a broader pattern: a mosaic.</p>
<p>The mosaic theory of privacy does not find explicit reference in
Puttaswamy II. The petitioners had argued that seeding of Aadhaar data
into existing databases would bridge information across silos so as to
make real time surveillance possible. This is because information when
integrated from different silos becomes more than the sum of its parts.</p>
<p>The Court, however, dismissed this argument, accepting UIDAI’s
submission that the data collected remains in different silos and
merging is not permitted within the Aadhaar framework. Therefore, the
Court did not examine whether it is constitutionally permissible to
integrate data from different silos; it simply rejected the possibility
of surveillance as a result of Aadhaar authentication.</p>
<p>Jurisprudence in other jurisdictions is more advanced. In <em>United States v. Jones</em>,
the United States Supreme Court observed that the insertion of a
global positioning system device into Antoine Jones’ Jeep, in the absence
of a warrant and without his consent, invaded his privacy, entitling him
to Fourth Amendment protection. In that case, the movement of Jones’
vehicle was monitored for a period of twenty-eight days. Five concurring
justices in <em>Jones</em> acknowledged that aggregated and extensive
surveillance is capable of violating the reasonable expectation of
privacy irrespective of whether the surveillance has taken place in
public.</p>
<p>The Court distinguished between prolonged surveillance and short term
surveillance. Surveillance in the short run does not reveal what a
person repeatedly does, as opposed to sustained surveillance, which can
reveal significantly more about a person. The Court took the example of
how a sequence of trips to a bar, a bookie, a gym or a church can tell a
lot more about a person than the story of any single visit viewed in
isolation.</p>
<p>Most recently, in <a href="https://www.supremecourt.gov/opinions/17pdf/16-402_h315.pdf" rel="noreferrer noopener" target="_blank"><em>Carpenter v. United States</em></a>,
the Supreme Court of the United States held that the collection of
historical cell-site data by the government exposes the physical
movements of an individual to potential surveillance, and that an
individual holds a reasonable expectation of privacy against such
collection. The Court admitted that historical cell-site information
allows the government to go back in time and retrace the exact
whereabouts of a person.</p>
<p>Judicial decisions have not specifically addressed whether facial
recognition by law enforcement constitutes a search under the
Fourth Amendment or a “mere visual observation”.</p>
<p>The common thread linking CCTV footage and cellular data is the
unique ability to track the movement of an individual from one place to
another, enabling extreme forms of surveillance. It is perhaps this
crucial link that would make AFRS-enabled CCTVs prejudicial to
individual privacy.</p>
<p>The mosaic theory as understood in <em>Carpenter</em> helps one
appreciate the extent to which an AFRS can augment the capacities of law
enforcement in India. This, in turn, can help in determining whether it
is constitutionally permissible to install such systems across the
country.</p>
<p>AFRS-enabled CCTV footage from different cameras, if viewed in
conjunction, could reveal a sequence of movements of an individual,
enabling long-term surveillance of a nature that is qualitatively
distinct from isolated observations across unrelated CCTV
footage.</p>
<p>Subsequent to <em>Carpenter</em>, <a href="https://www.lawfareblog.com/four-months-later-how-are-courts-interpreting-carpenter" rel="noreferrer noopener" target="_blank">federal district courts</a>
in the United States have declined to apply <em>Carpenter</em> to video
surveillance cases, since the judgement did not “call into question
conventional surveillance techniques and tools, such as security
cameras.”</p>
<p>The extent of processing that an AFRS-enabled CCTV exposes an
individual to would be significantly greater. This is because every time
an individual is in the zone of an AFRS-enabled CCTV, the facial image
will be compared to a common database. Snippets from different CCTVs
capturing the individual’s physical presence in two different locations
may not be meaningful per se; when observed together, however, the AFRS
will make it possible to identify the individual’s movement from one
place to another.</p>
<p>For instance, the AFRS will be able to identify the person when they
are on Street A at a particular time and when they are on Street B in the
immediately subsequent hour recorded by respective CCTV cameras,
indicating the person’s physical movement from A to B. While a CCTV
camera only records movement of an individual in video format, AFRS
translates that digital information into individualised data with the
help of a comparison of facial features with a pre-existing database.</p>
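The aggregation step described above can be sketched in a few lines: each AFRS-enabled camera emits a timestamped match event, and grouping those events by identity yields a movement trajectory. This is a hypothetical illustration only, not the design of any actual AFRS; the event format and all names are invented for the example:

```python
from collections import defaultdict

# Hypothetical AFRS match events: (timestamp, camera location, matched identity).
# Each event on its own is an isolated observation; aggregation links them.
sightings = [
    ("2020-01-02T10:00", "Street A", "person_X"),
    ("2020-01-02T10:05", "Street A", "person_Y"),
    ("2020-01-02T11:00", "Street B", "person_X"),
]

def trajectories(events):
    """Group match events by identity, sorted chronologically."""
    paths = defaultdict(list)
    for timestamp, location, identity in sorted(events):
        paths[identity].append((timestamp, location))
    return dict(paths)

# person_X's sightings, read together, reveal movement from Street A to Street B.
print(trajectories(sightings)["person_X"])
```

The point of the sketch is that no single camera records anything more than a presence; it is the trivial act of sorting and joining the events that produces the individualised movement data the mosaic theory is concerned with.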
<p>Through data aggregation, which appears to be the aim of the Indian
government in its tender linking three databases, it is apparent
that the right to privacy is in danger. Yet, at present, there exists
no case law or legislation that can render such efforts illegal.</p>
<p><strong>Conclusions and The Way Forward</strong></p>
<p>Despite a lack of judicial recognition of the potential
unconstitutionality of deploying AFRS, it is clear that the introduction
of these systems poses a clear and present danger to civil rights and
human dignity. Algorithmic surveillance alters a human being’s life in
ways that even the subject of this surveillance cannot fully comprehend.
As an individual’s data is manipulated and aggregated to derive a
pattern about that individual’s world, the individual or their data no
longer exists for itself but is massaged into various categories.</p>
<p>Louise Amoore terms this a ‘<a href="https://journals.sagepub.com/doi/abs/10.1177/0263276411417430?journalCode=tcsa" rel="noreferrer noopener" target="_blank">data-derivative</a>’,
which is an abstract conglomeration of data that continuously shapes
our futures without us having a say in their framing. The branding of an
individual as a criminal and the aggregation of their data cause
emotional distress, as individuals move about in fear of the state gaze
and of their association with activities branded as potentially
dangerous, thereby suppressing the right to dissent, as exemplified by
the reported use of these systems during the recent protests in Hong Kong.</p>
<p>Case law both in India and abroad has clearly suggested that a right
to privacy is contextual and is not surrendered merely because an
individual is in a public place. However, the jurisprudence protecting
public photography or videography under the umbrella of privacy remains
less clear globally and non-existent in India.</p>
<p>The mosaic theory of privacy is useful in this regard as it guards
against mass ‘data-veillance’ of individual behaviour and accurately
identifies the unique power that the volume, velocity and variety of Big
Data provide to the state. It is therefore imperative that the judiciary
recognise safeguards from data aggregation as an essential component of a
reasonable expectation of privacy. At the same time, legislation could
also provide the required safeguards.</p>
<p>In the US, Senators Coons and Lee recently introduced a draft Bill titled ‘<a href="https://www.coons.senate.gov/imo/media/doc/ALB19A70.pdf" rel="noreferrer noopener" target="_blank">The Facial Recognition Technology Warrant Act of 2019’</a>.
The Bill aims to impose reasonable restrictions on the use of facial
recognition technology by law enforcement. The Bill creates safeguards
against sustained tracking of physical movements of an individual in
public spaces. The Bill terms such tracking ‘ongoing surveillance’ when
it occurs over a period of more than 72 hours, whether in real time or
through application of technology to historical records. The Bill requires that
ongoing surveillance only be conducted for law enforcement purposes <em>and</em> in pursuance of a Court Order (unless it is impractical to do so).</p>
<p>While the Bill has its textual problems, it is definitely worth
considering as a model going forward, to ensure that AFR systems are
deployed in line with a rights-respecting reading of a reasonable
expectation of privacy. <a href="http://datagovernance.org/report/adoption-and-regulation-of-facial-recognition-technologies-in-india" rel="noreferrer noopener" target="_blank">Parsheera</a>
suggests that the legislation should provide for narrow tailoring of the
objects and purposes for deployment of AFRS, restrictions on the persons
whose images may be scanned from the databases, judicial approval for its
use on a case-by-case basis, and effective mechanisms of oversight,
analysis and verification.</p>
<p>Appropriate legal intervention is crucial. A failure to implement
this effectively jeopardizes the expression of our true selves and the
core tenets of our democracy.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/automated-facial-recognition-systems-and-the-mosaic-theory-of-privacy-the-way-forward'>http://editors.cis-india.org/internet-governance/automated-facial-recognition-systems-and-the-mosaic-theory-of-privacy-the-way-forward</a>
</p>
No publisherArindrajit Basu, Siddharth SonkarCybersecurityCyber Securityinternet governanceInternet Governance2020-01-02T14:12:38ZBlog EntryAutomated Facial Recognition Systems (AFRS): Responding to Related Privacy Concerns
http://editors.cis-india.org/internet-governance/automated-facial-recognition-systems-afrs-responding-to-related-privacy-concerns
<b>Arindrajit Basu and Siddharth Sonkar have co-written this blog as the second of their three-part blog series on AI Policy Exchange under the parent title: Is there a Reasonable Expectation of Privacy from Data Aggregation by Automated Facial Recognition Systems? </b>
<p> </p>
<p> </p>
<p>The Supreme Court of India, in <a href="https://indiankanoon.org/doc/91938676/">Puttaswamy I</a> recognized that
the right to privacy is not surrendered merely because the individual
is in a public place. Privacy is linked to the individual as it is an
essential facet of human dignity. Justice Chelameswar further clarified
that privacy is contextual. Even in a public setting, people trying to
converse in whispers would signal a claim to the right to privacy.
Speaking on a loudspeaker would naturally not signal the same claim.</p>
<p>The Supreme Court of Canada has also affirmed the notion of
contextual privacy. As recently as 7 March 2019, the Supreme Court
of Canada <a href="http://www.thecourt.ca/r-v-jarvis-carving-out-a-contextual-approach-to-privacy/" rel="noreferrer noopener" target="_blank">in a landmark decision</a> defined privacy rights in public areas, implicitly applying <a href="https://crypto.stanford.edu/portia/papers/RevnissenbaumDTP31.pdf">Helen Nissenbaum’s theory of contextual integrity</a>.
Nissenbaum’s theory explains the extent to which the right to
privacy is eroded in public spaces.</p>
<p>Nissenbaum suggests that labelling information as exclusively public
or private fails to take into account the context which rationalises the
desire of the individual to exercise her privacy in public. To explain
this with an illustration, there exists a reasonable expectation of
privacy in the restroom of a restaurant, even though it is in a public
space.</p>
<p>In <a href="http://www.thecourt.ca/r-v-jarvis-carving-out-a-contextual-approach-to-privacy/"><em>R v Jarvis</em></a> (Jarvis), the Court overruled a Court of Appeal for Ontario <a href="https://www.canlii.org/en/on/onca/doc/2017/2017onca778/2017onca778.pdf">decision</a>
to hold that people can have a reasonable expectation of privacy even
in public spaces. In this case, Jarvis was charged with the offence of
voyeurism for secretly recording his students. The primary issue that
the Supreme Court of Canada was concerned with was whether the students
filmed by Mr. Jarvis enjoyed a reasonable expectation of privacy at
their school.</p>
<p>The Court in this case unanimously held that the students did indeed
have a reasonable expectation of privacy. The Court identified nine
contextual factors relevant in determining whether a person has a
reasonable expectation of privacy. The listed factors were:</p>
<p>“1. The location the person was in when he or she was observed or recorded,</p>
<p>2. The nature of the impugned conduct (whether it consisted of observation or recording),</p>
<p>3. Awareness of or consent to potential observation or recording,</p>
<p>4. The manner in which the observation or recording was done,</p>
<p>5. The subject matter or content of the observation or recording,</p>
<p>6. Any rules, regulations or policies that governed the observation or recording in question,</p>
<p>7. The relationship between the person who was observed or recorded and the person who did the observing or recording,</p>
<p>8. The purpose for which the observation or recording was done, and</p>
<p>9. The personal attributes of the person who was observed or recorded.” (paragraph 29 of the judgement).</p>
<p>The Court emphasized that the factors are not an exhaustive list, but
rather were meant to be a guiding tool in determining whether a
reasonable expectation of privacy existed in a given context. It is not
necessary that each of these factors is present in a given situation to
give rise to an expectation of privacy.</p>
<p>Compared to the above-mentioned factors in Jarvis, the Indian Supreme Court in <a href="https://indiankanoon.org/doc/127517806/">Justice K.S. Puttaswamy (Retd.) v. Union of India</a>: Justice Sikri (Puttaswamy II),
the case which upheld the constitutionality of the Aadhaar project,
relied on the following factors to determine a reasonable expectation of
privacy in a given context:</p>
<p>“(i) What is the context in which a privacy claim is set up?</p>
<p>(ii) Does the claim relate to private or family life, or a confidential relationship?</p>
<p>(iii) Is the claim a serious one or is it trivial?</p>
<p>(iv) Is the disclosure likely to result in any serious or significant injury and the nature and extent of disclosure?</p>
<p>(v) Is disclosure relates to personal and sensitive information of an identified person?</p>
<p>(vi) Does disclosure relate to information already disclosed publicly? If so, its implication?”</p>
<p>These factors (acknowledged in Puttaswamy II in paragraph 292) seem
very similar to the ones laid down in Jarvis, i.e., there is a
strong reliance on context in both cases. While there is no explicit
mention of the personal attributes of the individual claiming a
reasonable expectation, the holding that children should be given an
opt-out indicates that the Court implicitly takes personal attributes
(e.g. age) into account as well.</p>
<p>The Court in Jarvis further (in paragraph 39) took the example of a
woman in a communal change room at a public pool. She may expect other
users to incidentally observe her undress but she would continue to
expect only other women in the change room to observe her and reserve
her rights against the general public. She would also expect not to be
video recorded or photographed while undressing, both from other users
of the pool and by the general public. </p>
<p>If it is later found out that the change room had one-way glass
which allowed the pool staff to view the users change, or that a
concealed camera was recording persons while they were changing, she
could claim a breach of her reasonable expectation of privacy; such
observation or recording would constitute an invasion of privacy.</p>
<p><strong>So, in the context of an AFRS, an individual walking down a
public road may still signal that they wish to avail of their right to
privacy. In such contexts, a concerted surveillance mechanism may come
up against constitutional roadblocks.</strong></p>
<p><strong>What is the nature of information being collected?</strong></p>
<p>The second big question is the nature of the information being
collected, which plays a role in determining the extent to which
a person can exercise their reasonable expectation of privacy.
Puttaswamy II laid down that the collection of core biometric
information, such as fingerprints and iris scans, in the context of
Aadhaar-Based Biometric Authentication (‘ABBA’) is constitutionally
permissible. The basis of this conclusion is that the Aadhaar Act does
not deal with the individual’s intimate or private sphere.</p>
<p>The judgement of the Supreme Court in Puttaswamy II is in a very
specific context (i.e. the ABBA). It does not explain or identify the
contextual factors which determine the extent to which privacy may be
reasonably expected over biometrics generally. In this judgment, the
Court observed that demographic information and photographs do not raise
a reasonable expectation of privacy under Article 21 unless there exist
special circumstances, such as the disclosure of the identity of a
juvenile in conflict with law or of a rape victim.</p>
<p><strong>Most importantly, the Court held that face photographs for
the purpose of identification are not covered by a reasonable
expectation of privacy. The Court distinguished face photographs from
intimate photographs or those photographs which concern confidential
situations. </strong></p>
<p><strong>Face photographs, according to the Court, are shared by
individuals in the ordinary course of conduct for the purpose of
obtaining a driving </strong>l<strong>icense, voter id, passport,
examination admit cards, employment cards, and so on. Face photographs
by themselves reveal no information.</strong></p>
<p>Naturally, this pronouncement of the Apex Court is a huge boost for the introduction of AFRS in India.</p>
<p>Abroad, however, on 4 September 2019, in <a href="https://www.judiciary.uk/wp-content/uploads/2019/09/bridges-swp-judgment-Final03-09-19-1.pdf">Edward Bridges v. Chief Constable of South Wales Police</a>, a Division Bench of the High Court in England and Wales heard a challenge against an AFRS introduced by law enforcement (<em>see</em>
Endnote 1). The High Court rejected a claim for judicial review holding
that the AFRS in question does not violate inter alia the right to
privacy under Article 8 of the European Convention of Human Rights
(‘ECHR’).</p>
<p>According to the Court, the AFRS was used for specific and limited
purposes, i.e., only when the image of the public matched a person on an
existing watchlist. The use of the AFRS was therefore considered a
lawful and fair restriction.</p>
<p>The Court, however, acknowledged that extracting biometric data
through AFRS is “well beyond the expected and unsurprising”. This seems
to be a departure from the Indian Supreme Court’s observation in
Puttaswamy II that there is no reasonable expectation of privacy over
biometric data in the context of ABBA, and may be a wiser approach for
the Indian courts to adopt.</p>
<h6><strong>Endnote </strong></h6>
<p>1. The challenge was put forth by Edward Bridges, a civil liberties
campaigner from Cardiff, who was caught on camera in two particular
deployments of the AFRS: a) when he was at Queen Street, a busy shopping
area in Cardiff, and b) when he was at the Defence Procurement, Research,
Technology and Exportability Exhibition held at the Motorpoint Arena.</p>
<p> </p>
<p>This was published by <a class="external-link" href="https://aipolicyexchange.org/2019/12/28/automated-facial-recognition-systems-afrs-responding-to-related-privacy-concerns/">AI Policy Exchange</a>.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/automated-facial-recognition-systems-afrs-responding-to-related-privacy-concerns'>http://editors.cis-india.org/internet-governance/automated-facial-recognition-systems-afrs-responding-to-related-privacy-concerns</a>
</p>
No publisherArindrajit Basu, Siddharth SonkarCybersecurityCyber Securityinternet governanceInternet Governance2020-01-02T14:09:14ZBlog EntryDecrypting Automated Facial Recognition Systems (AFRS) and Delineating Related Privacy Concerns
http://editors.cis-india.org/internet-governance/decrypting-automated-facial-recognition-systems-afrs-and-delineating-related-privacy-concerns
<b>Arindrajit Basu and Siddharth Sonkar have co-written this blog as the first of their three-part blog series on AI Policy Exchange under the parent title: Is there a Reasonable Expectation of Privacy from Data Aggregation by Automated Facial Recognition Systems?</b>
<p> </p>
<p> </p>
<p>The use of aggregated Big Data by governments has the potential to
exacerbate power asymmetries and erode civil liberties like few
technologies of the past. In order to guard against the aggressive
aggregation and manipulation of the data generated by individuals who
are branded as suspect, it is critical that our firmly established
constitutional rights protect human dignity in the face of this
potential erosion.</p>
<p>The increasing ubiquity of Automated Facial Recognition Systems
(AFRS) serves as a prime example of the rising desire of governments to
push fundamental rights to the brink. With AFRS, the core fundamental
right in question is privacy, although questions have been posed
regarding the potential violation of other related rights, such as the
Right to Equality and the Right to Free Speech and Expression, as well.</p>
<p>There is a rich corpus of literature (see <a href="https://indianexpress.com/article/opinion/columns/digital-identification-facial-recognition-system-ncrb-5859072/" rel="noreferrer noopener" target="_blank">here</a>, <a href="http://www.unswlawjournal.unsw.edu.au/wp-content/uploads/2017/09/40-1-11.pdf" rel="noreferrer noopener" target="_blank">here</a> and an excellent recent paper by Smriti Parsheera <a href="http://datagovernance.org/report/adoption-and-regulation-of-facial-recognition-technologies-in-india" rel="noreferrer noopener" target="_blank">here</a>)
from a diverse coterie of scholars that call out the challenges posed
by AFRS, particularly with respect to its proportionality as a
restriction over the right to privacy. Our contribution to this
discourse focuses on a very specific question around a ‘reasonable
expectation of privacy’ — the standard identified for the protection of
privacy in public spaces across jurisdictions, including in India. This
is because at this juncture, the precise nature of the AFRS which will
eventually be used and the regulations it will be subject to are not
clear. </p>
<p>In <a href="https://indiankanoon.org/doc/91938676/" rel="noreferrer noopener" target="_blank">Justice K.S. Puttaswamy (Retd.) v. Union of India</a>:
Justice Chandrachud (Puttaswamy I), the Indian Supreme Court was
concerned with the question whether there exists a fundamental right to
privacy under the Indian Constitution. A nine-judge bench of the Court
recognized that the right to privacy is a fundamental right implicit
inter alia in the right to life within Article 21 of the Constitution.</p>
<p>The right to privacy protects people, not places. Every person is,
however, entitled to a reasonable expectation of privacy. The
expectation of privacy must be twofold. First, the person must prove
that the alleged act could inflict some harm; such harm must be real,
not speculative or imaginary. Second, society must recognize this
expectation as reasonable. The test of reasonable expectations is
contextual, i.e., the extent to which it safeguards privacy depends on
the place the individual is in.</p>
<p>In order to pass any constitutional test, therefore, AFRS must
satisfy the ‘reasonable expectation’ test articulated in Puttaswamy.
However, in this context, the test itself has multiple contours. Do we
have a right to privacy in a public place? Is AFRS collecting any data
that specifically violates a right to privacy? Is the aggregation of
that data a potential violation?</p>
<p>After providing a brief introduction to the use cases of AFRS in
India and across the world, we embark upon answering all these
questions.</p>
<p><strong>Primer on Automated Facial Recognition Systems (AFRS)</strong></p>
<p>Facial recognition is a biometric technology that utilises cameras to
match stored or live footage of individuals (including both stills and
moving footage) with images or video from an existing database. Some
systems might also be used to analyze broader demographic trends or
conduct sentiment analysis through crowd scanning.</p>
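The matching step described above can be sketched as a nearest-neighbour comparison of facial feature vectors (“embeddings”). The following is a deliberately simplified, hypothetical illustration: real systems derive embeddings from deep networks rather than three-number vectors, and the names, values and threshold here are all invented for the example:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def match(probe, database, threshold=0.9):
    """Return the best-matching enrolled identity, or None below threshold."""
    best_id, best_score = None, threshold
    for identity, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id

# Invented enrolled database of identity -> embedding.
database = {
    "person_A": [0.9, 0.1, 0.3],
    "person_B": [0.1, 0.8, 0.5],
}

# A probe embedding extracted from live footage, close to person_A's.
print(match([0.88, 0.12, 0.31], database))
```

Note the design implication: every face in frame is compared against the entire enrolled database, which is what distinguishes AFRS from a camera that merely records.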
<p>While the use of photographs and video footage has been a core
component of police investigation, the use of algorithms to process
vast tracts of Big Data (characterized by ‘Volume, Velocity, and
Variety’) and compare disparate and discrete data points allows for the
derivation of hitherto unfeasible insights on the subjects of Big Data.</p>
<p>The utilisation of AFRS for law enforcement is rapidly spreading around the world. <a href="https://carnegieendowment.org/2019/09/17/global-expansion-of-ai-surveillance-pub-79847" rel="noreferrer noopener" target="_blank">A Global AI Surveillance Index</a>
compiled by the Carnegie Endowment for International Peace found that
at least sixty-four countries are incorporating facial recognition
systems into their AI surveillance programs.</p>
<p>Chinese technology company Yitu has entered into a partnership with
security forces in Malaysia to equip police officers with facial
recognition body cameras that, powered by enabling technologies, would
allow a comparison of images caught by the live body cameras with images
from several central databases.</p>
<p>In <a href="https://news.sky.com/story/met-polices-facial-recognition-tech-has-81-error-rate-independent-report-says-11755941" rel="noreferrer noopener" target="_blank">England and Wales</a>,
London Metropolitan Police, South Wales Police, and Leicestershire
Police are all in the process of developing technologies that allow for
the identification and comparison of live images with those stored in a
database.</p>
<p>The technology is being developed by Japanese firm NEC and the police
force has limited ability to oversee or modify the software, given its
proprietary nature. The Deputy Chief of South Wales Police stated that
“the tech is given to [them] as a sealed box… [and the police force
themselves] have no input – whatever it does, it does what it does.”</p>
<p>In the US, <a href="https://www.americanbar.org/groups/criminal_justice/publications/criminal-justice-magazine/2019/spring/facial-recognition-technology/" rel="noreferrer noopener" target="_blank">Baltimore’s police</a>
set up facial recognition cameras to track and arrest protestors — a
system that reached its zenith during the 2018 riots in the city. </p>
<p>It is suspected that authorities in <a href="https://www.japantimes.co.jp/news/2019/10/23/asia-pacific/hong-kong-protests-ai-facial-recognition-tech/#.Xf1Fs_zhVPY" rel="noreferrer noopener" target="_blank">Hong Kong</a> are also using AFRS to clamp down on the ongoing pro-democracy protests.</p>
<p>In India, the Ministry of Home Affairs, through the National Crime Records Bureau put out a <a href="http://ncrb.gov.in/TENDERS/AFRS/RFP_NAFRS.pdf" rel="noreferrer noopener" target="_blank">tender for a new AFRS</a>,
whose stated objective is to “act as a foundation for national level
searchable platform of facial images.” The AFRS will pull facial image
data from CCTV feeds and compare these with existing records across
databases including the Crime and Criminal Tracking Networks and Systems
(CCTNS), Inter-operable Criminal Justice System (or ICJS), Immigration
Visa Foreigner Registration Tracking (IVFRT), Passport, Prisons and
state police records.</p>
<p>Plans are also afoot to integrate this with the yet to be deployed
National Automated Fingerprint Identification System (NAFIS), thereby
creating a multi-faceted surveillance system.</p>
<p>Despite raising eyebrows due to its potentially all-pervasive scope,
this tender is not the first instance of AFRS being used by Indian
authorities. Punjab Police, <a href="https://www.livemint.com/AI/DIh6fmR6croUJps6x7JW5K/Meet-Staqu-a-startup-helping-Indian-law-enforcement-agencie.html" rel="noreferrer noopener" target="_blank">in partnership with Gurugram-based start-up Staqu</a>
has launched and commenced implementation of the Punjab Artificial
Intelligence System (PAIS), which uses digitised criminal records and
automated facial recognition to retrieve information on a suspected
criminal and essentially track their public whereabouts, posing
potential constitutional questions.</p>
<p> </p>
<p>This was published by <a class="external-link" href="https://aipolicyexchange.org/2019/12/26/decrypting-automated-facial-recognition-systems-afrs-and-delineating-related-privacy-concerns/">AI Policy Exchange</a>.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/decrypting-automated-facial-recognition-systems-afrs-and-delineating-related-privacy-concerns'>http://editors.cis-india.org/internet-governance/decrypting-automated-facial-recognition-systems-afrs-and-delineating-related-privacy-concerns</a>
</p>
No publisherArindrajit Basu, Siddharth SonkarCybersecurityCyber Securityinternet governanceInternet Governance2020-01-02T14:01:48ZBlog EntryCall for Comments: Model Security Standards for the Indian Fintech Industry
http://editors.cis-india.org/internet-governance/call-for-comments-model-security-standards-for-the-indian-fintech-industry
<b></b>
<p>The Centre for Internet and Society is pleased to make available the Draft document of Model Security Standards for the Indian Fintech Industry, for feedback and comments from all stakeholders. The objective of this document which was first published in November 2019, is to ensure that the data of users is dealt with in a secure and safe manner by the Fintech Industry, and that smaller businesses in the Fintech industry have a specific standard to look at in order to limit their liabilities for any future breaches. <br /><br />We invite any parties interested in the field of technology policy, including but not limited to lawyers, policy researchers, and engineers, to send in your feedback/comments on the draft document by the 16th of January 2020. We intend to publish our final draft by the end of January 2020. We look forward to receiving your contributions to make this document more comprehensive and effective. Please find a copy of the draft document <a href="http://editors.cis-india.org/internet-governance/resources/security-standards-for-the-financial-technology-sector-in-india" class="internal-link" title="Security Standards for the Financial Technology Sector in India">here</a>.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/call-for-comments-model-security-standards-for-the-indian-fintech-industry'>http://editors.cis-india.org/internet-governance/call-for-comments-model-security-standards-for-the-indian-fintech-industry</a>
</p>
No publisherpranavFinancial TechnologyCybersecurityinternet governanceInternet GovernanceCyber Security2019-12-16T13:16:25ZBlog EntryICANN Begins its Sojourn into Open Data
http://editors.cis-india.org/internet-governance/icann-begins-its-sojourn-into-open-data
<b>The Internet Corporation for Assigned Names and Numbers (ICANN) recently announced that it will now set up a pilot project in order to introduce an Open Data initiative for all data that it generates. We would like to extend our congratulations to ICANN on the development of this commendable new initiative, and would be honoured to support the creation of this living document to be prepared before ICANN 58.</b>
<p> </p>
<p style="text-align: justify;">To quote the ICANN blog directly, the aim of this project is to “<em>bring selected data sets into the open, available through web pages and programming APIs, for the purposes of external party review and analysis</em>” <a href="#ftn1">[1]</a>. This will play out through the setting up of three components:</p>
<ol><li>Development of a catalogue of existing data sets which will be appropriate for publication</li>
<li>Selection of the technology necessary for managing the publication of these data sets.</li>
<li>Creation of a process to prioritise the order in which the data sets are made available <a href="#ftn2">[2]</a>.</li></ol>
<h3><strong>Principles in Question</strong></h3>
<p style="text-align: justify;">The Centre for Internet and Society firmly believes in the value of accessible, inclusive open data standards as a tool for enhancing transparency in any system. Greater transparency goes a long way towards bringing a regulatory authority closer to those who are governed under it – be it a state or a body such as ICANN. It is, in fact, an indispensable component of a multistakeholder model of governance to facilitate informed participation by all parties concerned in the decision making process.</p>
<p style="text-align: justify;">The right to information that a regulatory authority owes those it regulates has two kinds of components. The first may be described as reactive disclosure – “<em>when individual members of the public file requests for and receive information</em>” <a href="#ftn3">[3]</a>. The second is disclosure that is more proactive in nature – “<em>when information is made public at the initiative of the public body, without a request being filed</em>” <a href="#ftn4">[4]</a>. The former is epitomized by initiatives such as the Freedom of
Information Act <a href="#ftn5">[5]</a> in the United States, the Right to Information Act in India <a href="#ftn6">[6]</a>, or ICANN’s very own Documentary Information Disclosure Policy <a href="#ftn7">[7]</a>.</p>
<p style="text-align: justify;">Proactive disclosure policies, on the other hand, operate out of the principle that the provision of information by those in positions of regulatory authority will ensure free and timely flow of information to the public, and the information so provided will be equally accessible to everyone, without the need for individual requests being filed <a href="#ftn8">[8]</a>. Proactive disclosure also goes a long way towards preventing officials from denying or manipulating information subsequent to publication <a href="#ftn9">[9]</a>. Scholars have touted proactive disclosure as the “<em>future of the right to know</em>” <a href="#ftn10">[10]</a>.</p>
<p style="text-align: justify;">At the Centre for Internet and Society, much of our research has pointed towards the direction of creating better open data standards for governments (Please see “<a href="http://cis-india.org/openness/blog-old/open-government-data-study">Open Data Government Study: India</a>”). We are one of the Lead Stewards of the International Open Data Charter <a href="#ftn11">[11]</a> and have maintained that it is crucial for governments to maintain open data standards in the interest of transparency and accountability. We firmly believe that the same principles extend also to ICANN – a body which, as per its own by-laws commits towards operating “…<em>to the maximum extent feasible in an open and transparent manner and consistent with procedures designed to ensure fairness</em>”<a href="#ftn12">[12]</a>.</p>
<h3><strong>Suggestions</strong></h3>
<p style="text-align: justify;">While this policy is in its nascent stage, we would like to put forward certain principles which we believe ought to be kept in mind before it gets chalked out, in the best interest of the ICANN community:</p>
<ol><li>To determine what data sets should be made publicly accessible, it would be useful to carry out an analysis of existing DIDP requests to understand trends in the kind of information that the ICANN community is interested in accessing, which can then be proactively disclosed. It would be redundant on ICANN’s part to disclose, under this Open Data Policy, data which is already publicly available.</li>
<li>ICANN should first develop a catalog of all existing data sets with ICANN, apply the principles for deciding appropriateness for publication, and then make publicly available both the full catalog and the actual data sets identified for publication. ICANN should make clear the kind of information it is not going to make accessible
under these open data standards, and justify the principles on the basis of which it is choosing to do so (analogous to the exceptions clauses under the DIDP).</li>
<li>With respect to technology to be selected for managing the publication of data sets, free and open source software (such as CKAN) ought to be used, and open standards should be adopted for the use and licensing of such data.</li>
<li>Such data ought to be downloadable in bulk in CSV/JSON/XML formats.</li>
<li>DIDP responses and the open data work flows ought to be integrated so that all the responses to DIDP requests are automatically published in a machine-readable format as open data.</li>
<li>Qualitative (text of speeches, slides from presentations, recordings of sessions, etc.) and quantitative data should both be included under this new policy.</li></ol>
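<p style="text-align: justify;">Suggestions 4 and 5 above amount to a small export step in the publication workflow: each DIDP response record would be serialised into bulk-downloadable open formats such as JSON and CSV. The sketch below illustrates this with Python's standard library; the record fields are hypothetical and do not reflect ICANN's actual DIDP schema.</p>

```python
import csv
import io
import json

# Hypothetical DIDP response records; field names are illustrative,
# not ICANN's actual schema.
didp_responses = [
    {
        "request_id": "20161109-1",
        "date_received": "2016-11-09",
        "subject": "Board briefing materials",
        "outcome": "partially disclosed",
        "exception_invoked": "internal deliberative processes",
    },
]

def to_json(records):
    """Serialise the records as a JSON string for bulk download."""
    return json.dumps(records, indent=2, sort_keys=True)

def to_csv(records):
    """Serialise the same records as CSV with a header row."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=sorted(records[0]))
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()
```

<p style="text-align: justify;">In a real workflow these files would be uploaded to an open data portal (e.g. a CKAN instance, as suggested above) rather than written by hand, but the machine-readable formats remain the same.</p>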
<p style="text-align: justify;">In conclusion, we would like to extend our congratulations to ICANN on the development of this commendable new initiative, and would be honoured to support the creation of this living document before ICANN 58.</p>
<hr align="left" size="1" width="33%" />
<h3><strong>Endnotes</strong></h3>
<div id="ftn1">
<p>[1] Internet Corporation for Assigned Names and Numbers, <em>ICANN Kicks off Open Data Initiative Pilot</em>, (November 6, 2016), available at <a href="https://www.icann.org/news/blog/icann-kicks-off-open-data-initiative-pilot">https://www.icann.org/news/blog/icann-kicks-off-open-data-initiative-pilot</a> (Last visited on November 9, 2016).</p>
</div>
<div id="ftn2">
<p>[2] Id.</p>
</div>
<div id="ftn3">
<p>[3] Naniette Coleman, <em>Proactive vs. Reactive Transparency</em>, (February 8, 2010), available at: <a href="http://blogs.worldbank.org/publicsphere/proactive-vs-reactive-transparency">http://blogs.worldbank.org/publicsphere/proactive-vs-reactive-transparency</a> (Last visited on November 9, 2016).</p>
</div>
<div id="ftn4">
<p>[4] Id.</p>
</div>
<div id="ftn5">
<p>[5] Freedom of Information Act, 1966, 5 U.S.C. § 552.</p>
</div>
<div id="ftn6">
<p>[6] Right to Information Act, 2005, available at <a href="http://righttoinformation.gov.in/rti-act.pdf">http://righttoinformation.gov.in/rti-act.pdf</a>.</p>
</div>
<div id="ftn7">
<p>[7] ICANN, <em>Documentary Information Disclosure Policy</em>, available at <a href="https://www.icann.org/resources/pages/didp-2012-02-25-en">https://www.icann.org/resources/pages/didp-2012-02-25-en</a> (Last visited on November 9, 2016).</p>
</div>
<div id="ftn8">
<p>[8] Helen Darbishire, <em>Proactive Transparency: The future of the right to information?</em> Working paper. N.p.: World Bank, (2009).</p>
</div>
<div id="ftn9">
<p>[9] Id.</p>
</div>
<div id="ftn10">
<p>[10] Darbishire, <em>supra</em> note 8.</p>
</div>
<div id="ftn11">
<p>[11] Open Data Charter, <em>Who We Are</em>, available at <a href="http://opendatacharter.net/who-we-are/">http://opendatacharter.net/who-we-are/</a> (Last visited on November 10, 2016).</p>
</div>
<div id="ftn12">
<p>[12] Article III(1), Bylaws For Internet Corporation For Assigned Names And Numbers</p>
</div>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/icann-begins-its-sojourn-into-open-data'>http://editors.cis-india.org/internet-governance/icann-begins-its-sojourn-into-open-data</a>
</p>
No publisher · Padmini Baruah and Sumandro Chattapadhyay · Open Data · ICANN · internet governance · 2016-11-12T01:17:24Z · Blog Entry
An Evidence based Intermediary Liability Policy Framework: Workshop at IGF
http://editors.cis-india.org/internet-governance/blog/igf-workshop-an-evidence-based-intermediary-liability-policy-framework
<b>CIS is organising a workshop at the Internet Governance Forum 2014. The workshop will be an opportunity to present and discuss ongoing research on the changing definition of intermediaries and their responsibilities across jurisdictions and technologies and contribute to a comprehensible framework for liability that is consistent with the capacity of the intermediary and with international human-rights standards.</b>
<p style="text-align: justify; ">The Centre for Internet and Society, India and Centre for Internet and Society, Stanford Law School, USA, will be organising a workshop to analyse the role of intermediary platforms in relation to freedom of expression, freedom of information and freedom of association at the Internet Governance Forum 2014. <span>The aim of the workshop is to highlight the increasing importance of digital rights and broad legal protections of stakeholders in an increasingly knowledge-based economy. The workshop will discuss public policy issues associated with Internet intermediaries, in particular their roles, legal responsibilities and related liability limitations in the context of the evolving nature and role of intermediaries in the Internet ecosystem.</span></p>
<p style="text-align: justify; "><b>Online Intermediaries: Setting the context</b></p>
<p style="text-align: justify; ">The Internet has facilitated unprecedented access to information and amplified avenues for expression and engagement by removing the limits of geographic boundaries and enabling diverse sources of information and online communities to coexist. Against the backdrop of a broadening base of users, the role of intermediaries that enable economic, social and political interactions between users in global networked communication is ubiquitous. Intermediaries are essential to the functioning of the Internet, as many producers and consumers of content on the Internet rely on the action of some third party – the so-called intermediary. Such intermediation ranges from the mere provision of connectivity to more advanced services, such as providing online storage spaces for data, acting as platforms for the storage and sharing of user-generated content (UGC), or operating platforms that provide links to other Internet content.</p>
<p style="text-align: justify; ">Online intermediaries enhance economic activity by reducing costs, inducing competition by lowering the barriers for participation in the knowledge economy and fuelling innovation through their contribution to the wider ICT sector as well as through their key role in operating and maintaining Internet infrastructure to meet the network capacity demands of new applications and of an expanding base of users.</p>
<p style="text-align: justify; ">Intermediary platforms also provide social benefits, by empowering users and improving choice through social and participative networks, or web services that enable creativity and collaboration amongst individuals. By enabling platforms for self-expression and cooperation, intermediaries also play a critical role in establishing digital trust, protection of human rights such as freedom of speech and expression, privacy and upholding fundamental values such as freedom and democracy.</p>
<p style="text-align: justify; ">However, the economic and social benefits of online intermediaries are conditional on a framework that protects intermediaries against legal liability for the communication and distribution of content which they enable.</p>
<p style="text-align: justify; "><b>Intermediary Liability</b></p>
<p style="text-align: justify; ">Over the last decade, rights holders, service providers and Internet users have been locked in a debate on the potential liability of online intermediaries. The debate has raised global concerns over issues such as the extent to which Internet intermediaries should be held responsible for content produced by third parties using their Internet infrastructure, and how the resultant liability would affect online innovation and the free flow of knowledge in the information economy.</p>
<p style="text-align: justify; ">Given the impact of their services on communications, intermediaries find themselves either directly liable for their own actions, or indirectly (or “secondarily”) liable for the actions of their users. Requiring intermediaries to monitor the legality of online content poses an insurmountable task. Even if monitoring the legality of content against all applicable legislation were possible, the costs of doing so would be prohibitively high. Therefore, placing liability on intermediaries can deter their willingness and ability to provide services, hindering the development of the Internet itself.</p>
<p style="text-align: justify; ">The economics of intermediaries depend on scale: the cost of evaluating the legality of an individual post typically exceeds the profit from hosting that speech, and in the absence of judicial oversight this can lead to a private censorship regime. Intermediaries that are liable for content, or face legal exposure, have powerful incentives to police content and limit user activity to protect themselves. The result is the curtailing of legitimate expression, especially where obligations relating to, and definitions of, illegal content are vague. Content-policing mandates also impose significant compliance costs, limiting the innovation and competitiveness of such platforms.</p>
<p style="text-align: justify; ">More importantly, placing liability on intermediaries has a chilling effect on freedom of expression online. Gate keeping obligations by service providers threaten democratic participation and expression of views online, limiting the potential of individuals and restricting freedoms. Imposing liability can also indirectly lead to the death of anonymity and pseudonymity, pervasive surveillance of users' activities, extensive collection of users' data and ultimately would undermine the digital trust between stakeholders.</p>
<p style="text-align: justify; ">Thus, effectively, imposing liability on intermediaries creates a chilling effect on Internet activity and speech, creates new barriers to innovation and stifles the Internet's potential to promote broader economic and social gains. To avoid these issues, legislators have defined 'safe harbours', limiting the liability of intermediaries under specific circumstances.</p>
<p style="text-align: justify; ">Online intermediaries do not have direct control over what information is exchanged via their platforms and might not be aware of illegal content per se. Limited liability regimes, a key framework for online intermediaries, provide exceptions from liability rules for third-party intermediaries, addressing this asymmetry of information between content producers and intermediaries.</p>
<p style="text-align: justify; ">However, it is important to note that significant differences exist concerning the subjects of these limitations, the scope of their provisions, and their procedures and modes of operation. 'Notice and takedown' procedures are at the heart of the safe harbour model and can be subdivided into two approaches:</p>
<p style="text-align: justify; ">a. The vertical approach, where the liability regime applies to specific types of content, exemplified by the US Digital Millennium Copyright Act (DMCA)</p>
<p style="text-align: justify; ">b. The horizontal approach, based on the EU E-Commerce Directive (ECD), where different levels of immunity are granted depending on the type of activity at issue</p>
<p style="text-align: justify; "><b>Current framework </b></p>
<p style="text-align: justify; ">Globally, three broad but distinct models of liability for intermediaries have emerged within the Internet ecosystem:</p>
<p style="text-align: justify; ">1. The strict liability model, under which intermediaries are liable for third-party content, used in countries such as China and Thailand</p>
<p style="text-align: justify; ">2. The safe harbour model, granting intermediaries immunity provided they comply with certain requirements</p>
<p style="text-align: justify; ">3. The broad immunity model, which grants intermediaries broad or conditional immunity from liability for third-party content and exempts them from any general requirement to monitor content.</p>
<p style="text-align: justify; ">While the models described above can provide useful guidance for the drafting or the improvement of current legislation, they are limited in their scope and application as they fail to account for the different roles and functions of intermediaries. Legislators and courts are facing increasing difficulties in interpreting these regulations and adapting them to a new economic and technical landscape that involves unprecedented levels of user-generated content and new kinds of online intermediaries.</p>
<p style="text-align: justify; ">The nature and role of intermediaries vary considerably across jurisdictions, and in relation to social, economic and technical contexts. In addition to the dynamic nature of intermediaries, the different categories of Internet intermediaries are frequently not clear-cut, with actors often playing more than one intermediation role. Several of these intermediaries offer a variety of products and services and may have a number of roles; conversely, several intermediaries perform the same function. For example, blogs, video services and social media platforms are considered to be 'hosts', while search engine providers have been treated as both 'hosts' and 'technical providers'.</p>
<p style="text-align: justify; ">The failure of existing models to recognise that different types of intermediaries perform different functions or roles, and should therefore bear different liability, poses an interesting area for research and global deliberation. Establishing a classification of intermediaries will also help analyse existing patterns of influence in relation to content, for example when the removal of content by upstream intermediaries results in undue over-blocking.</p>
<p style="text-align: justify; ">Distinguishing intermediaries on the basis of their roles and functions in the Internet ecosystem is critical to ensuring a balanced system of liability and addressing concerns for freedom of expression. Rather than the highly abstracted view of intermediaries as providing a single unified service of connecting third parties, the definition of intermediaries must expand to include the specific role and function they have in relation to users' rights. A successful intermediary liability regime must balance the needs of producers, consumers, affected parties and law enforcement, address the risk of abuses for political or commercial purposes, safeguard human rights and contribute to the evolution of uniform principles and safeguards.</p>
<p style="text-align: justify; "><b>Towards an evidence based intermediary liability policy framework</b></p>
<p style="text-align: justify; ">This workshop aims to bring together leading representatives from a broad spectrum of stakeholder groups to discuss liability related issues and ways to enhance Internet users’ trust.</p>
<p style="text-align: justify; ">Questions to address at the panel include:</p>
<p style="text-align: justify; ">1. What are the varying definitions of intermediaries across jurisdictions?</p>
<p style="text-align: justify; ">2. What are the specific roles and functions that allow for classification of intermediaries?</p>
<p style="text-align: justify; ">3. How can we ensure the legal framework keeps pace with technological advances and the changing roles of intermediaries?</p>
<p style="text-align: justify; ">4. What are the gaps in existing models in balancing innovation, economic growth and human rights?</p>
<p style="text-align: justify; ">5. What could be the respective role of law and industry self-regulation in enhancing trust?</p>
<p style="text-align: justify; ">6. How can we enhance multi-stakeholder cooperation in this space?</p>
<p style="text-align: justify; ">Confirmed Panel:</p>
<p style="text-align: justify; ">Technical Community: Malcolm Hutty: Internet Service Providers Association (ISPA)<br />Civil Society: Gabrielle Guillemin: Article 19<br />Academic: Nicolo Zingales: Assistant Professor of Law at Tilburg University<br />Intergovernmental: Rebecca MacKinnon: Consent of the Networked, UNESCO project<br />Civil Society: Anriette Esterhuysen: Association for Progressive Communications (APC)<br />Civil Society: Francisco Vera: Advocacy Director: Derechos Digitales<br />Private Sector: Titi Akinsanmi: Policy and Government Relations Manager, Google Sub-Saharan Africa<br />Legal: Martin Husovec: Max Planck Institute</p>
<p style="text-align: justify; "><span>Moderator(s): </span><span>Giancarlo Frosio, Centre for Internet and Society (CIS) and </span><span>Jeremy Malcolm, Electronic Frontier Foundation </span></p>
<p style="text-align: justify; "><span><span>Remote Moderator: </span><span>Anubha Sinha, New Delhi</span></span></p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/igf-workshop-an-evidence-based-intermediary-liability-policy-framework'>http://editors.cis-india.org/internet-governance/blog/igf-workshop-an-evidence-based-intermediary-liability-policy-framework</a>
</p>
No publisher · jyoti · human rights · Digital Governance · internet governance · Freedom of Speech and Expression · Internet Governance Forum · Human Rights Online · Intermediary Liability · Policies · Multi-stakeholder · 2014-07-04T06:41:10Z · Blog Entry