Security: Privacy, Transparency and Technology

Posted by Sunil Abraham on Aug 19, 2015, 02:30 AM
The Centre for Internet and Society (CIS) has been involved in privacy and data protection research for the last five years. It has participated as a member of the Justice A.P. Shah Committee, which has influenced the draft Privacy Bill being authored by the Department of Personnel and Training. It has organised 11 multistakeholder roundtables across India over the last two years to discuss a shadow Privacy Bill drafted by CIS with the participation of privacy commissioners and data protection authorities from Europe and Canada.


The article was co-authored by Sunil Abraham, Elonnai Hickok and Tarun Krishnakumar. It was published by Observer Research Foundation, Digital Debates 2015: CyFy Journal Volume 2.

Our centre’s work on privacy was considered incomplete by some stakeholders because of a lack of focus on cyber security; we have therefore initiated research in this area from this year onwards. In this article, we undertake a preliminary examination of the theoretical relationships between the national security imperative and privacy, transparency and technology.

Security and Privacy

Daniel J. Solove has identified the tension between security and privacy as a false dichotomy: "Security and privacy often clash, but there need not be a zero-sum tradeoff." [1] Further unpacking this false dichotomy, Bruce Schneier says, "There is no security without privacy. And liberty requires both security and privacy." [2] Effectively, it could be said that privacy is a precondition for security, just as security is a precondition for privacy. A secure information system cannot be designed without guaranteeing the privacy of its authentication factors, and it is not possible to guarantee the privacy of authentication factors without having confidence in the security of the system. Policymakers often talk about striking a balance between the privacy and security imperatives—in other words, a zero-sum game. This balancing approach is foolhardy, as it simultaneously undermines both imperatives. The relationship between privacy and security should instead be framed as an optimisation problem. Indeed, at a time when oversight mechanisms have failed even in so-called democratic states, the regulatory power of technology [3] should be seen as an increasingly key ingredient in the solution to that optimisation problem.

Data retention is required in most jurisdictions for law enforcement, intelligence and military purposes. Here are three examples of how security and privacy can be optimised when it comes to Internet Service Provider (ISP) or telecom operator logs:

  1. Data Retention: We propose that the office of the Privacy Commissioner generate a cryptographic key pair for each internet user and give one key to the ISP / telecom operator. This key would be used to encrypt logs, thereby preventing unauthorised access. Once there is executive or judicial authorisation, the Privacy Commissioner could hand over the second key to the authorised agency. There could even be an emergency procedure and the keys could be automatically collected by concerned agencies from the Privacy Commissioner. This will need to be accompanied by a policy that criminalises the possession of unencrypted logs by ISP and telecom operators.

  2. Privacy-Protective Surveillance: Ann Cavoukian and Khaled El Emam [4] have proposed combining intelligent agents, homomorphic encryption and probabilistic graphical models to provide “a positive-sum, ‘win–win’ alternative to current counter-terrorism surveillance systems.” They propose limiting collection of data to “significant” transactions or events that could be associated with terrorist-related activities, limiting analysis to wholly encrypted data, which then does not just result in “discovering more patterns and relationships without an understanding of their context” but rather “intelligent information—information selectively gathered and placed into an appropriate context to produce actual knowledge.” Since fully homomorphic encryption may be unfeasible in real-world systems, they have proposed use of partially homomorphic encryption. But experts such as Prof. John Mallery from MIT are also working on solutions based on fully homomorphic encryption.

  3. Fishing Expedition Design: Madan Oberoi, Pramod Jagtap, Anupam Joshi, Tim Finin and Lalana Kagal have proposed a standard [5] that could be adopted by authorised agencies, telecom operators and ISPs. Instead of giving authorised agencies complete access to logs, they propose a format for database queries that authorised agencies could send to the telecom operator or ISP. The telecom operator or ISP would then process the query and anonymise/obfuscate the result-set in an automated fashion based on applicable privacy policies/regulations. Authorised agencies would then home in on the subset of the result-set for which they need personal identifiers intact; this smaller result-set would then be shared with them.
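The split-key arrangement in the first proposal can be illustrated with a toy public-key scheme: the ISP receives only an encryption key, so possession of the encrypted logs plus the ISP's own key material is not enough to read them. The sketch below uses textbook ElGamal over a Mersenne prime purely for illustration; the parameters and the function names (`pc_keygen`, `isp_encrypt`, `agency_decrypt`) are our hypothetical choices, not production cryptography.

```python
import secrets

# Textbook ElGamal over the Mersenne prime 2^521 - 1 -- a toy illustration
# of the proposed split: the ISP holds only an encryption key, while the
# Privacy Commissioner retains the decryption key until authorisation.
P = 2 ** 521 - 1   # prime modulus (toy choice, not a vetted parameter set)
G = 3              # generator base (toy choice)

def pc_keygen():
    """Privacy Commissioner: generate the pair and hand only `y` to the ISP."""
    x = secrets.randbelow(P - 2) + 1   # decryption key, retained by Commissioner
    y = pow(G, x, P)                   # encryption key, given to the ISP
    return x, y

def isp_encrypt(y: int, record: bytes) -> tuple:
    """ISP: encrypt a log record; the ISP cannot decrypt with `y` alone."""
    m = int.from_bytes(record, "big")
    assert 0 < m < P, "record too long for this toy field"
    k = secrets.randbelow(P - 2) + 1
    return pow(G, k, P), (m * pow(y, k, P)) % P

def agency_decrypt(x: int, c1: int, c2: int) -> bytes:
    """Authorised agency: decrypt after the Commissioner releases `x`."""
    m = (c2 * pow(c1, P - 1 - x, P)) % P   # c1^(P-1-x) == c1^(-x) mod P
    return m.to_bytes((m.bit_length() + 7) // 8, "big")

# Round trip: Commissioner generates keys, ISP encrypts a log line.
x, y = pc_keygen()
c1, c2 = isp_encrypt(y, b"2015-08-19 10.0.0.1 example.com 443")
```

Under this arrangement an audit of the ISP would only ever find `(c1, c2)` pairs on disk, which is what a rule criminalising possession of unencrypted logs would test for.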
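The additive variant of partial homomorphism invoked in the second proposal can be made concrete with the Paillier cryptosystem, in which multiplying two ciphertexts yields an encryption of the sum of their plaintexts, so an analyst can aggregate encrypted counts without decrypting any individual record. The sketch below is our own illustration, not the cited proposal, and uses toy-sized primes:

```python
import math
import secrets

# Toy Paillier cryptosystem (additively homomorphic): the product of two
# ciphertexts decrypts to the SUM of the plaintexts, so aggregate analysis
# can run on data that stays encrypted. Toy primes -- not secure parameters.
p, q = 104729, 1299709          # small primes for illustration only
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)    # decryption exponent lambda
mu = pow(lam, -1, n)            # modular inverse of lambda mod n

def encrypt(m: int) -> int:
    r = secrets.randbelow(n - 2) + 2
    while math.gcd(r, n) != 1:            # r must be invertible mod n
        r = secrets.randbelow(n - 2) + 2
    return ((1 + m * n) * pow(r, n, n2)) % n2   # uses the g = n + 1 simplification

def decrypt(c: int) -> int:
    u = pow(c, lam, n2)
    return ((u - 1) // n) * mu % n

# Homomorphic property: multiply ciphertexts to add the hidden values (17 + 25).
c_total = (encrypt(17) * encrypt(25)) % n2
```

Fully homomorphic schemes would additionally support multiplication of hidden values, which is what makes them attractive but, as the authors note, currently expensive.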
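The two-stage disclosure in the third proposal can be sketched as a pseudonymisation layer on the operator side: the first response replaces subscriber identifiers with keyed pseudonyms, and only the narrowed follow-up request is answered with identifiers intact. The field and function names below are our own illustrative choices, not part of the cited standard:

```python
import hmac
import hashlib

# Hypothetical operator-side query handler: stage one returns pseudonymised
# rows; stage two re-identifies only the pseudonyms the agency has narrowed
# down to after authorisation. The HMAC key never leaves the operator.
SECRET = b"operator-held pseudonymisation key"

def pseudonym(subscriber_id: str) -> str:
    return hmac.new(SECRET, subscriber_id.encode(), hashlib.sha256).hexdigest()[:16]

def stage_one(rows: list, predicate) -> list:
    """Answer the agency's query with identifiers replaced by pseudonyms."""
    return [{"pseudonym": pseudonym(r["id"]), "site": r["site"]}
            for r in rows if predicate(r)]

def stage_two(rows: list, wanted: set) -> list:
    """Re-identify only the narrowed subset the agency is authorised to see."""
    return [r for r in rows if pseudonym(r["id"]) in wanted]

logs = [{"id": "9198700001", "site": "example.org"},
        {"id": "9198700002", "site": "example.net"}]
coarse = stage_one(logs, lambda r: r["site"].endswith(".org"))
subset = stage_two(logs, {row["pseudonym"] for row in coarse})
```

Because the pseudonyms are keyed, the agency cannot reverse them offline; re-identification happens only through the operator's second stage.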

An optimisation approach to resolving the false dichotomy between privacy and security will not allow for a total surveillance regime as pursued by the US administration. Total surveillance brings with it the ‘honey pot’ problem: if all the metadata and payload data of citizens are harvested and stored, the data store becomes a single point of failure and an attractive target for attack. The next Snowden may not have honourable intentions and might decamp with this ‘honey pot’ itself, with disastrous consequences.

If total surveillance will completely undermine the national security imperative, what then should be the optimal level of surveillance in a population? The answer depends upon the existing security situation. If this is represented on a graph with security on the y-axis and the proportion of the population under surveillance on the x-axis, the benefits of surveillance could be represented by an inverted hockey-stick curve. To begin with, there would already be some degree of security. As a small subset of the population is brought under surveillance, security would increase till an optimum level is reached, after which, enhancing the number of people under surveillance would not result in any security pay-off. Instead, unnecessary surveillance would diminish security as it would introduce all sorts of new vulnerabilities. Depending on the existing security situation, the head of the hockey-stick curve might be bigger or smaller. To use a gastronomic analogy, optimal surveillance is like salt in cooking—necessary in small quantities but counter-productive even if slightly in excess.

In India the designers of surveillance projects have fortunately rejected the total surveillance paradigm. For example, the objective of the National Intelligence Grid (NATGRID) is to streamline and automate targeted surveillance; it is introducing technological safeguards that will allow express combinations of result-sets from 22 databases to be made available to 12 authorised agencies. This is not to say that the design of the NATGRID cannot be improved.

Security and Transparency

There are two views on the relationship between security and transparency: one, security via obscurity, as advocated by vendors of proprietary software; and two, security via transparency, as advocated by free/open source software (FOSS) advocates and entrepreneurs. Over the last two decades, public and industry opinion has swung towards security via transparency. This position rests on Linus's Law: "given enough eyeballs, all bugs are shallow." But does this mean that transparency is a sufficient condition for security? Unfortunately not, and therefore it is not necessarily true that FOSS and open standards will be more secure than proprietary software and proprietary standards.


The recent detection of the Heartbleed [6] security bug in OpenSSL, [7] which allowed more data to be read from memory than should have been permitted, and Snowden’s revelations about the compromise of some open cryptographic standards based on elliptic curves, developed by the US National Institute of Standards and Technology, are stark examples. [8]

At the same time, however, open standards and FOSS are crucial to maintaining the balance of power in information societies, as civil society and the general public are able to resist the powers of authoritarian governments and rogue corporations using cryptographic technology. These technologies allow for anonymous speech, pseudonymous speech, private communication, online anonymity and circumvention of surveillance and censorship. For the media, these technologies enable anonymity of sources and the protection of whistle-blowers—all phenomena that are critical to the functioning of a robust and open democratic society. But these very same technologies are also required by states and by the private sector for a variety of purposes—national security, e-commerce, e-banking, protection of all forms of intellectual property, and services that depend on confidentiality, such as legal or medical services.

In other words, all governments, with the exception of the US government, have common cause with civil society, the media and the general public when it comes to increasing the security of open standards and FOSS. Unfortunately, this can be an expensive task because re-securing open cryptographic standards depends on mathematicians. Of late, mathematical research outputs that can be militarised are no longer available in the public domain because the biggest employers of mathematicians worldwide today are the US military and intelligence agencies. If other governments invest a few billion dollars through mechanisms like Knowledge Ecology International’s proposed World Trade Organization agreement on the supply of knowledge as a public good, it would be possible to internationalise participation in standard-setting organisations and provide market incentives for greater scrutiny of cryptographic standards and the patching of vulnerabilities in FOSS. This would go a long way towards addressing the trust deficit that exists on the internet today.

Security and Technology

A techno-utopian understanding of security assumes that more technology, more recent technology and more complex technology will necessarily lead to better security outcomes.

This is because the security discourse is dominated by vendors with sales targets who do not present a balanced or accurate picture of the technologies they are selling. As a result, state agencies and the general public have an exaggerated understanding of the capabilities of surveillance technologies, one more aligned with Hollywood movies than with everyday reality.

More Technology

Increasing the number of x-ray machines or full-body scanners at an airport by a factor of ten or a hundred will make the airport less secure unless human oversight is increased proportionately. Even with increased human oversight, all that has been accomplished is an increase in the number of locations that can be compromised. The process of hardening a server usually involves stopping non-essential services and removing non-essential software. This reduces the amount of software that must be audited, continuously monitored for vulnerabilities and patched as soon as possible. Audits, ongoing monitoring and patching all cost time and money; therefore, for governments with limited budgets, any additional unnecessary technology should be seen as a drain on the security budget. As with the airport example, even for a single server on the internet it is clear that, from a security perspective, more technology without a proper functionality and security justification is counter-productive. To reiterate, throwing ever more technology at a problem does not make things more secure; rather, it results in a proliferation of vulnerabilities.
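The server-hardening logic above amounts to an allowlist audit: any running service without an explicit functionality justification is flagged, since each one adds to the audit, monitoring and patching bill. A minimal sketch (the service names and allowlist are hypothetical):

```python
# Hypothetical hardening audit: every running service must appear on an
# explicitly justified allowlist; everything else enlarges the attack
# surface and the security budget, and is flagged for removal.
ESSENTIAL = {"sshd", "nginx"}   # services with a stated functionality justification

def hardening_report(running: set) -> dict:
    return {
        "keep": sorted(running & ESSENTIAL),     # justified, stays under audit
        "remove": sorted(running - ESSENTIAL),   # unjustified, pure liability
    }

report = hardening_report({"sshd", "nginx", "telnetd", "ftpd"})
```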

Latest Technology

Reports that a number of state security agencies are contemplating a return to typewriters for sensitive communications in the wake of Snowden’s revelations make it clear that some older technologies are harder to compromise than modern ones. [9] Comparing iris- and fingerprint-based biometric authentication: it would logically be easier for a criminal to harvest iris images in bulk, using a high-resolution camera fitted with a zoom lens in a public location, than to lift fingerprints en masse.

Complex Technology

Fifteen years ago, Bruce Schneier said, "The worst enemy of security is complexity. This has been true since the beginning of computers, and it’s likely to be true for the foreseeable future." [10] This is because complexity increases fragility; every feature is also a potential source of vulnerabilities and failures. The simpler Indian electronic voting machines used until the 2014 elections are far more secure than the Diebold voting machines used in the 2004 US presidential elections. Similarly, when it comes to authentication, a PIN is harder to defeat without the user’s conscious cooperation than iris- or fingerprint-based biometric authentication.

In the following section of the paper we have identified five threat scenarios [11] relevant to India and identified solutions based on our theoretical framing above.

Threat Scenarios and Possible Solutions

Hacking the NIC Certifying Authority
One of the critical functions served by the National Informatics Centre (NIC) is that of a Certifying Authority (CA). [12] In this capacity, the NIC issues digital certificates that authenticate web services and allow for the secure exchange of information online. [13] Operating systems and browsers maintain lists of trusted CA root certificates as a means of easily verifying authentic certificates. Certificates issued under India’s Controller of Certifying Authorities are included in the Microsoft Root list and recognised by the majority of programmes running on Windows, including Internet Explorer and Chrome. [14]

In 2014, the NIC CA’s infrastructure was compromised, and digital certificates were issued in NIC’s name without its knowledge. [15] Reports indicate that NIC did not "have an appropriate monitoring and tracking system in place to detect such intrusions immediately." [16] The implication is that websites could masquerade as another domain using the fake certificates, and the personal data of users could be intercepted or accessed by third parties through the masquerading website. The breach also rendered web servers and websites of government bodies vulnerable to attack, and end users could no longer be sure that data on these websites was accurate and had not been tampered with. [17] The NIC CA was forced to revoke all 250,000 SSL server certificates issued until that date [18] and is not issuing digital certificates for the time being. [19]

Public key pinning is a means through which websites can specify which certifying authorities have issued certificates for that site; it can prevent man-in-the-middle attacks that rely on fake digital certificates. [20] Certificate Transparency allows anyone to check whether a certificate has been properly issued, since certifying authorities must publicly publish information about the digital certificates they have issued. Though this approach does not prevent fake digital certificates from being issued, it allows for quick detection of misuse. [21]
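A minimal form of pinning can be sketched as comparing a hash of the certificate a server presents against a pin recorded earlier. Real HTTP Public Key Pinning hashes only the subject public key info, so that pins survive routine certificate renewal; the stdlib-only sketch below pins the whole DER-encoded certificate to stay self-contained, and the byte strings stand in for real certificates:

```python
import hashlib

# Hypothetical pin check: a client refuses a TLS peer whose certificate
# hash does not match one of the pins recorded for that host. Pinning the
# whole DER certificate is a simplification; HPKP-style pinning hashes
# only the SubjectPublicKeyInfo structure.
def cert_fingerprint(der_cert: bytes) -> str:
    return hashlib.sha256(der_cert).hexdigest()

def check_pin(der_cert: bytes, pinned: set) -> bool:
    return cert_fingerprint(der_cert) in pinned

# On first (trusted) contact, record the pin...
legitimate = b"stand-in for the DER bytes of the genuine certificate"
pins = {cert_fingerprint(legitimate)}

# ...later, a forged certificate hashes differently and fails the check,
# even if it chains to a trusted (but compromised) CA.
forged = b"stand-in for the DER bytes of a rogue certificate"
```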

‘Logic Bomb’ against Airports
Passenger operations in New Delhi’s Indira Gandhi International Airport depend on a centralised operating system known as the Common User Passenger Processing System (CUPPS). The system integrates numerous critical functions such as the arrival and departure times of flights, and manages the reservation system and check-in schedules. [22] In 2011, a logic bomb attack was remotely launched against the system to introduce malicious code into the CUPPS software. The attack disabled the CUPPS operating system, forcing a number of check-in counters to shut down completely, while others reverted to manual check-in, resulting in over 50 delayed flights. Investigations revealed that the attack was launched by three disgruntled employees who had assisted in the installation of the CUPPS system at the New Delhi Airport. [23] Although in this case the impact of the attack was limited to flight delay, experts speculate that the attack was meant to take down the entire system. The disruption and damage resulting from the shutdown of an entire airport would be extensive.

Adoption of open hardware and FOSS is one strategy to avoid and mitigate the risk of such vulnerabilities. The use of devices that embrace open hardware and software specifications should be encouraged, as this enables the FOSS community to remain vigilant in detecting and reporting design deviations and to investigate probable vulnerabilities.

Attack on Critical Infrastructure
The Nuclear Power Corporation of India encounters and prevents numerous cyber attacks every day. [24] The best known example of a successful nuclear plant hack is the Stuxnet worm that thwarted the operation of an Iranian nuclear enrichment complex and set back the country’s nuclear programme. [25]

The worm had the ability to spread over the network and would activate when a specific configuration of systems was encountered [26] and connected to one or more Siemens programmable logic controllers. [27] The worm was suspected to have been initially introduced through an infected USB drive into one of the controller computers by an insider, thus crossing the air gap. [28] The worm used information that it gathered to take control of normal industrial processes (to discreetly speed up centrifuges, in the present case), leaving the operators of the plant unaware that they were being attacked. This incident demonstrates how an attack vector introduced into the general internet can be used to target specific system configurations. When the target of a successful attack is a sector as critical and secured as a nuclear complex, the implications for a country’s security and infrastructure are potentially grave.

Security audits and other transparency measures to identify vulnerabilities are critical in sensitive sectors. Incentive schemes such as prizes, contracts and grants could be evolved for the private sector and academia to identify vulnerabilities in critical infrastructure, thereby enabling and promoting its security auditing.

Micro Level: Chip Attacks
Semiconductor devices are ubiquitous in electronic devices. The US, Japan, Taiwan, Singapore, Korea and China are the primary countries hosting manufacturing hubs of these devices. India currently does not produce semiconductors, and depends on imported chips. This dependence on foreign semiconductor technology can result in the import and use of compromised or fraudulent chips by critical sectors in India. For example, hardware Trojans, which may be used to access personal information and content on a device, may be inserted into the chip. Such breaches/transgressions can render equipment in critical sectors vulnerable to attack and threaten national security. [29]

Indigenous production of critical technologies and the development of manpower and infrastructure to support these activities are needed. The Government of India has taken a number of steps towards this. For example, in 2013, the Government of India approved the building of two Semiconductor Wafer Fabrication (FAB) manufacturing facilities [30] and as of January 2014, India was seeking to establish its first semiconductor characterisation lab in Bangalore. [31]

Macro Level: Telecom and Network Switches

The possibility of foreign equipment containing vulnerabilities and backdoors built into its software and hardware gives rise to concerns that India’s telecom and network infrastructure is vulnerable to being hacked and accessed by foreign governments (or non-state actors) through spyware and malware that exploit such vulnerabilities. In 2013, some firms, including ZTE and Huawei, were barred by the Indian government from participating in a bid to supply technology for the development of its National Optical Fibre Network project due to security concerns. [32] Similar concerns have resulted in the Indian government holding back the conferment of ‘domestic manufacturer’ status on both these firms. [33]

Following reports that Chinese firms were responsible for transnational cyber attacks designed to steal confidential data from overseas targets, there have been moves to establish laboratories to test imported telecom equipment in India. [34] Despite these steps, in a February 2014 incident the state-owned telecommunication company Bharat Sanchar Nigam Ltd’s network was hacked, allegedly by Huawei. [35]


A successful hack of the telecom infrastructure could result in massive disruption of internet and telecommunications services. Large-scale surveillance and espionage by foreign actors would also become possible, placing government secrets and individuals’ personal information, among other things, at risk.

While India cannot afford to impose a general ban on the import of foreign telecommunications equipment, a number of steps can be taken to address the risk of inbuilt security vulnerabilities. Common International Criteria for security audits could be evolved by states to ensure compliance of products with international norms and practices. While India has already established common criteria evaluation centres, [36] the government monopoly over the testing function has resulted in only three products being tested so far. A Code Escrow Regime could be set up where manufacturers would be asked to deposit source code with the Government of India for security audits and verification. The source code could be compared with the shipped software to detect inbuilt vulnerabilities.


Cyber security cannot be enhanced without a proper understanding of the relationship between security and other national imperatives such as privacy, transparency and technology. This paper has provided an initial sketch of those relationships, but sustained theoretical and empirical research is required in India so that security practitioners and policymakers avoid the zero-sum framing prevalent in popular discourse and take on the hard task of solving the optimisation problem by shifting policy, market and technological levers simultaneously. These solutions must then be applied in multiple contexts or scenarios to determine how they should be customised to provide maximum security bang for the buck.

[1]. Daniel J. Solove, Chapter 1 in Nothing to Hide: The False Tradeoff between Privacy and Security (Yale University Press: 2011),

[2]. Bruce Schneier, “What our Top Spy doesn’t get: Security and Privacy aren’t Opposites,” Wired, January 24, 2008, and Bruce Schneier, “Security vs. Privacy,” Schneier on Security, January 29, 2008,

[3]. There are four sources of power in internet governance: Market power exerted by private sector organisations; regulatory power exerted by states; technical power exerted by anyone who has access to certain categories of technology, such as cryptography; and finally, the power of public pressure sporadically mobilised by civil society. A technically sound encryption standard, if employed by an ordinary citizen, cannot be compromised using the power of the market or the regulatory power of states or public pressure by civil society. In that sense, technology can be used to regulate state and market behaviour.

[4]. Ann Cavoukian and Khaled El Emam, “Introducing Privacy-Protective Surveillance: Achieving Privacy and Effective Counter-Terrorism,” Information & Privacy Commissioner, September 2013, Ontario, Canada,

[5]. Madan Oberoi, Pramod Jagtap, Anupam Joshi, Tim Finin and Lalana Kagal, “Information Integration and Analysis: A Semantic Approach to Privacy”(presented at the third IEEE International Conference on Information Privacy, Security, Risk and Trust, Boston, USA, October 2011),

[6]. Bruce Byfield, “Does Heartbleed disprove ‘Open Source is Safer’?,” Datamation, April 14, 2014,

[7]. “Cybersecurity Program should be more transparent, protect privacy,” Centre for Democracy and Technology Insights, March 20, 2009,

[8]. “Cracked Credibility,” The Economist, September 14, 2013,

[9]. Miriam Elder, “Russian guard service reverts to typewriters after NSA leaks,” The Guardian, July 11, 2013, and Philip Oltermann, “Germany ‘may revert to typewriters’ to counter hi-tech espionage,” The Guardian, July 15, 2014,

[10]. Bruce Schneier, “A Plea for Simplicity,” Schneier on Security, November 19, 1999,

[11]. With inputs from Pranesh Prakash of the Centre for Internet and Society and Sharathchandra Ramakrishnan of Srishti School of Art, Technology and Design.

[12]. “Frequently Asked Questions,” Controller of Certifying Authorities, Department of Electronics and Information Technology, Government of India,

[13]. National Informatics Centre Homepage, Government of India,

[14]. Adam Langley, “Maintaining Digital Certificate Security,” Google Security Blog, July 8, 2014,

[15]. This is similar to the kind of attack carried out against DigiNotar, a Dutch certificate authority. See:

[16]. R. Ramachandran, “Digital Disaster,” Frontline, August 22, 2014,

[17]. Ibid.

[18]. “NIC’s digital certification unit hacked,” Deccan Herald, July 16, 2014,

[19]. National Informatics Centre Certifying Authority Homepage, Government of India,

[20]. Mozilla Wiki, “Public Key Pinning,”

[21]. “Certificate Transparency - The quick detection of fraudulent digital certificates,” Ascertia, August 11, 2014,

[22]. “Indira Gandhi International Airport (DEL/VIDP) Terminal 3, India,” Airport,

[23]. “How techies used logic bomb to cripple Delhi Airport,” Rediff, November 21, 2011,

[24]. Manu Kaushik and Pierre Mario Fitter, “Beware of the bugs,” Business Today, February 17, 2013,

[25]. “Stuxnet ‘hit’ Iran nuclear plants,” BBC, November 22, 2010,

[26]. In this case, systems using Microsoft Windows and running Siemens Step7 software were targeted.

[27]. Jonathan Fildes, “Stuxnet worm ‘targeted high-value Iranian assets’,” BBC, September 23, 2010,

[28]. Farhad Manjoo, “Don’t Stick it in: The dangers of USB drives,” Slate, October 5, 2010,

[29]. Ibid.

[30]. “IBM invests in new $5bn chip fab in India, so is chip sale off?,” ElectronicsWeekly, February 14, 2014,

[31]. NT Balanarayan, “Cabinet Approves Creation of Two Semiconductor Fabrication Units,” Medianama, February 17, 2014,

[32]. Jamie Yap, “India bars foreign vendors from national broadband initiative,” ZD Net, January 21, 2013,

[33]. Kevin Kwang, “India holds back domestic-maker status for Huawei, ZTE,” ZD Net, February 6, 2013. Also see “Huawei, ZTE await domestic-maker tag,” The Hindu, February 5, 2013,

[34]. Ellyne Phneah, “Huawei, ZTE under probe by Indian government,” ZD Net, May 10, 2013,

[35]. Devidutta Tripathy, “India investigates report of Huawei hacking state carrier network,” Reuters, February 6, 2014,

[36]. “Products Certified,” Common Criteria Portal of India,