Openness

by Subbiah Arunachalam and Anirudh Sridhar
The philosophy of openness concerns itself with shifting power over knowledge from centralized authorities, such as owners, to the community in all its varied roles: users, producers and contributors.

Many people think of openness as being merely about the digitization of pre-existing knowledge or content but it is far more than that. Often, as Nishant Shah puts it in his article “Big Data, People's Lives, and the Importance of Openness”[1] “it (openness) is about claiming access to knowledge and information hidden behind paywalls and gateways that are often produced using public resources.” Openness is important for the same reasons that access to knowledge is important, but it takes many different forms. We will be discussing Open Content, Open Access, Open (Government) Data, Free and Open Source Software and Open Standards.

After a quick overview of what we mean by the commons and content, we move on to open access to science and scholarship. We distinguish the openness of knowledge as it prevails today from the public libraries of the print era, and then trace the developments that led to the open access movement. We then discuss the status of open access in India and end with the bright future awaiting open access.

The notion of the ‘commons’ (meaning open to all) has been in existence for a very long time. As early as the 4th century BC, Aristotle commented, “What is common to the greatest number gets the least care!” [1] Ecologist Garrett Hardin developed this notion into the ‘tragedy of the commons’ to explain the numerous environmental crises and ecological dilemmas we face today. [2]

The commons is defined as “resources accessible to all members of a society.” A good example of the commons is the village greens in Great Britain, around which people reside and have their church and school. Then there are grazing lands for cattle, and water bodies, which no one owns but everyone can use. The moment someone holds a title deed for a piece of land, he ‘encloses’ it with a fence. Even then, if people have long used that piece of land to cross to the other side, the owner keeps a narrow footpath open.

It is only three or four decades ago that the commons became an object of serious study. The idea of the ‘knowledge commons’ draws upon the work of people like Elinor Ostrom on ‘common pool resources,’ ‘natural resource commons’ and ‘public goods’ such as forests, water systems, fisheries, grazing fields and the global atmosphere, all of which are common-pool resources of immense importance for the survival of humans on this earth [3-5]. Ostrom and her colleague Charlotte Hess also contributed to the study of knowledge commons, and in particular to our understanding of scholarly communication and cultural resources as commons. Their work brought out the essential role of collective action and self-governance in making commons work [6].

Definitions
Before talking about knowledge commons let us define these terms:

  1. Knowledge includes all useful ideas, information and data, in whatever form they are expressed or obtained, and useful knowledge can be indigenous, scientific, scholarly, or non-academic. It also includes music and the visual and theatrical arts – humanity’s literary, artistic and cultural heritage.
  2. Ostrom and Hess define a commons as a resource shared by a group of people that is subject to social dilemmas.
  3. Social dilemma in the context of knowledge includes enclosure by intellectual property (IP) regulations, loss due to inadequate preservation or simple neglect, and different laws being applied to print and digital forms.
  4. The Open Knowledge Definition defines openness in relation to content and data thus: a piece of content or data is open if anyone is free to use, reuse, and redistribute it without technical or legal restrictions, subject only, at most, to the requirement to attribute and/or share-alike [http://opendefinition.org]. And the ‘digital commons’ is defined as "information and knowledge resources that are collectively created and owned or shared between or among a community and that is (generally freely) available to third parties. Thus, they are oriented to favour use and reuse, rather than to exchange as a commodity."

Free and Open Source Software

Definition
Free and open source software (FOSS) is software that is both free and open source. Free software is software whose source code is released when it is distributed, and whose users are free to study, adapt and redistribute it.[2] Most commercially available software is proprietary, so free software is largely developed cooperatively. The free software movement, launched in 1983, is a social movement for securing these freedoms for software users. It draws upon the hacker culture of the 1970s; the movement’s founder, Richard Stallman, started the GNU Project in 1983.[3] Open source software (OSS) is released with its source code under a license in which the copyright holder extends to users the right to study, change and distribute the software to anyone and for any purpose. OSS, too, is often developed collaboratively and in public. Many software packages use free software or open source licenses instead of proprietary licenses with restrictive copyright terms. Usually, modifications and bug fixes are made available under the same free and open licenses, which creates a kind of living software. Proponents argue that such software helps reduce costs, increase productivity, enhance security and improve standards compliance, and that FOSS presents the lowest risk among software options because it offers the best long-term investment protection.

UNESCO has recognized the importance of FOSS as a practical tool in development and in achieving the Millennium Development Goals (MDGs).[4]

It recognizes that:

  1. Software plays a crucial role in access to information and knowledge;
  2. Different software models, including proprietary, open-source and free software, have many possibilities to increase competition, access by users, diversity of choice and to enable all users to develop solutions which best meet their requirements;
  3. The development and use of open, interoperable, non-discriminatory standards for information handling and access are important elements in the development of effective infostructures;
  4. Community approaches to software development have great potential to contribute to operationalizing the concept of Knowledge Societies;
  5. The Free and Open Source Software (FOSS) model provides interesting tools and processes with which people can create, exchange, share and exploit software and knowledge efficiently and effectively;
  6. FOSS can play an important role as a practical instrument for development as its free and open aspirations make it a natural component of development efforts in the context of the Millennium Development Goals (MDGs);
  7. Consistent support plays an important role in the success and sustainability of FOSS solutions;
  8. All software choices should be based upon the solution's ability to achieve the best overall return on technology investments.[5]

Organizations[6]
There is no rule that excludes anyone who wants to support FOSS from doing so. Usually, however, it is non-profit organizations (NPOs), academic institutions, developers and support/service businesses that invest their time and resources in these projects. Here are some of the important organizations that have supported FOSS:

  1. FLOSS Manuals -- FLOSS Manuals is a community that creates free manuals for free and open source software.
  2. FOSS Learning Centre – An international NPO that serves as a centre for information and training about FOSS.
  3. GOSLING – "Getting Open Source Logic Into Governments" is a knowledge-sharing community that assists with the introduction and use of free/libre software solutions in Canadian federal and other government operations.
  4. International Open Source Network -- "The vision is that developing countries in the Asia-Pacific Region can achieve rapid and sustained economic and social development by using effective FOSS ICT solutions to bridge the digital divide."
  5. Open Source for America – A coalition of NGOs, academic institutions, associations and technology industry leaders that advocates for and helps raise awareness of FOSS in the US government.
  6. Open Source Initiative – The organization that first gave mass-market appeal to the term “open source.” It is the recognized certification authority for whether or not a given software license is FOSS.
  7. Open Source Software Institute – Another NPO, with government, academic and corporate representation, that encourages open source solutions in US government agencies and academic entities.
  8. OSS Watch – This is a public institution in the UK which provides advice on the development and licensing of FOSS.
  9. SchoolForge – It offers references to open texts and lessons, open curricula, and free and open source software in education.

Types of Licenses[7]
Source code: This is code that is readable by humans. It is made up of statements such as a simple helloButton() method.

When a computer runs a program, the source code has been translated into binary code, which is not readable or modifiable by humans and looks something like 01011001101.
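As a small illustrative sketch (the function name below is hypothetical, not taken from any particular program), the following Python snippet shows the contrast: the function definition is source code that a person can read and modify, while the disassembly printed at the end is the low-level, machine-oriented form that the interpreter actually executes.

```python
import dis

# Source code: human-readable statements that anyone can study and modify.
def hello_button():
    print("Hello, world!")

hello_button()

# The interpreter compiles the function into low-level bytecode instructions,
# a numeric form (ultimately stored as binary) that is not meant to be edited
# by hand; dis.dis() prints that machine-oriented representation.
dis.dis(hello_button)
```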

Two licenses illustrate the range that FOSS licenses span: the GPL (the most restrictive) and the BSD licenses (which are almost public domain). The primary distinction between the two is the way each treats source code as opposed to binary code.

The GPL differed from earlier licenses in stipulating that the source code has to be provided along with the binary code, which meant that licensees could use and change the source. This requirement was an important part of the domino effect in driving innovation, since old industrial norms of secrecy did not carry over to software. Under the BSD licenses, by contrast, licensees are free to use the binaries they receive, but there is no requirement to make the source available: legally, the release of BSD-licensed source is completely at the discretion of the releasing entity.

The following table compares different kinds of FOSS licenses. To be considered FOSS, a license must, at a minimum, pass the first four tests in the table.[8]

| Criterion | GPL v3 | Mozilla (v1.1) | BSD |
| --- | --- | --- | --- |
| Source must be free | Y | Y | Y |
| Must retain copyright notice | Y | Y | Y |
| Can sell executable without restriction | Y | Y | Y |
| Modifications covered under license | Y | Y | Y |
| Prevented from use for software or data locking | Y | N | N |
| Linked code covered under license | N | N | N |
| New updates to license will apply | Y | Y | N |
| Patent retaliation, loss of use if suit brought | ? | Y | N |
| Can sell source code | N | N | Y |
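In practice, a project signals which of these licenses applies through a notice at the top of each source file. The sketch below is a hypothetical example (the file, author and wording are illustrative, using the widely adopted SPDX identifier convention rather than text mandated by the licenses themselves) of how such a header might look:

```python
# SPDX-License-Identifier: GPL-3.0-or-later
# Copyright (C) 2014 Example Author (hypothetical)
#
# Distributed under the GNU GPL v3 or later: anyone who redistributes a
# binary built from this file must also offer the corresponding source,
# and modified versions remain under the same license.
# A BSD-licensed file would instead carry "SPDX-License-Identifier: BSD-3-Clause"
# and would not oblige redistributors to release their source.

def hello():
    print("Hello from a FOSS-licensed module.")

if __name__ == "__main__":
    hello()
```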

Differences[9]
The most salient distinction between the two types of software comes from the principles behind them. For the “open source” movement, making software open source is a practical matter, not an ethical one. For the free software movement, the problem with proprietary software licenses is a social one, for which free software is the solution.

[Image: Openness poster depicting the four freedoms of Free and Open Source Software.]

FOSS in India

Many support groups, like the Free Software Movement of India and various NGOs, have sprung up to campaign for FOSS in India.[10]

The National Resource Centre for Free and Open Source Software (NRCFOSS) was launched by the Department of Information Technology (DIT) in 2005 to serve as the central point for all FOSS-related activities in India. Through awareness campaigns, training programmes and workshops, large communities of FOSS-trained teachers and students have been formed across India.[11] In many technical institutes, FOSS is even offered as an elective in the curriculum. The Department of Electronics and Information Technology (DeitY) describes “BOSS – Bharat Operating System Solutions” as “a GNU/Linux based localized Operating System distribution that supports 18 Indian languages – Assamese, Bengali, Bodo, Gujarati, Hindi, Kannada, Kashmiri, Konkani, Maithili, Malayalam, Manipuri, Marathi, Oriya, Punjabi, Sanskrit, Tamil, Telugu and Urdu.”[12]

Case Study: Curoverse[13]
Open source software is a mainstream enterprise that can benefit society, academia and companies alike. This was the underlying assumption when $1.5 million was invested in an open source genomics tool project at Curoverse, Boston. The Personal Genome Project (PGP) endeavours to sequence 100,000 human genomes in the US. The storage of these massive amounts of data is facilitated by Arvados, an open source computational platform. Curoverse, which grew out of the PGP, is planning to release its commercial products next year, and in anticipation Boston Global Ventures and Common Angels have invested $1.5 million. According to George Church, the PGP's creator, the database needed to hold almost one exabyte of data for researchers to analyze it efficiently. Among the functions required were the ability to share data between research centres and to ensure that complex analyses could be reproduced. To satisfy these requirements, the software had to be open source. Although similar to newer cloud computing platforms, Arvados was designed to hold extremely large amounts of genetic data. It can run on both public and private cloud services, so it will be available on Amazon as well as other cloud platforms. Although development of the software began in 2006, the project had not officially taken off until this investment in open source software from high-impact technology investors like Boston Global Ventures.
Case Study: Open-Sorcerers[14]

Many magic tricks can be protected by copyright. For example, Teller, of Penn and Teller fame, is suing a Dutch magician for allegedly stealing his “shadow” illusion. Litigating these matters is proving extremely difficult, so magicians, like programmers, are taking the route of open source licenses. This does not mean that they would simply share magical secrets, in violation of the Alliance of Magicians, on a forum like YouTube. It is more in keeping with what open source technology activists advocate: collaboration. If magicians work with more technologists, artists, programmers, scientists and other magicians, there could be better illusions and a general cross-pollination of magical ideas among various disciplines. For this, the technology behind these illusions needs to be freely available, and the licenses have to open up for open sorcerers.

Techno-illusionist Marco Tempest and Kieron Kirkland, from a digital creativity development studio in Bristol, are the main proponents of open source in magic. Tempest has stated that today famous magicians contract illusion engineers, technologists or other magicians to design new effects for their acts, make them all sign secrecy agreements, and leave the creators with no ownership of what they have created. This has been detrimental to innovation and to the perfection of techniques, as creators are not allowed to refine their work over time. If ownership were instead shared and freely available to the co-creators and developers, it would lead to better illusions and a faster pace of development.

Open Standards

Definition
Interoperability has many social, technical and economic benefits, and interoperability on the internet magnifies these benefits many fold. Unlike many other economically beneficial changes, interoperability did not simply emerge from adapting markets. What modest existence it has came about through a concerted effort, through the processes and practices of the IETF, the W3C and the Interop conferences, among others.[15]

Open standards can apply to any application programming interface, hardware interface, file format, communication protocol, specification of user interactions, or any other form of data interchange and program control.[16]

The billions of dollars of capital investment in the years since the internet’s advent into the mainstream have come from an understanding of some very basic laws of the market. Metcalfe’s law says the value of interoperability increases geometrically with the number of compatible participants; Reed’s law states that a network’s utility increases exponentially as the number of possible subgroups increases. The problem with setting standards for this interoperability is that an open standard needs to be either maximally open or maximally inclusive, and unlike in many other cases we have discussed, here it cannot be both. To be inclusive, the standard should permit any license, whether free, closed or open, and any type of implementation by any implementer.[17] To support the idea of openness, on the other hand, best practice will exclude certain market practices, such as proprietary standards. And although declaring a set of standards to be best practice is traditionally meant to incentivize compliance, some vendors try to distinguish themselves in the market by adding properties that are not part of the open standards while still claiming to implement “open standards” for strategic advantage. Others even defy the logic of having standards by claiming that their new additions embody open standards better.
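In their usual mathematical statements (standard formulations of these laws, not spelled out in the text above), Metcalfe’s law values a network of n compatible participants in proportion to the number of possible pairwise connections, while Reed’s law counts the number of possible subgroups:

```latex
V_{\mathrm{Metcalfe}}(n) \;\propto\; \binom{n}{2} = \frac{n(n-1)}{2} \approx n^{2},
\qquad
V_{\mathrm{Reed}}(n) \;\propto\; 2^{n} - n - 1
```

Both quantities grow much faster than n itself, which is why interoperable networks gain value far more quickly than they gain members.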

As we have seen, because of these differing conceptions of what openness in standards should achieve, there is no universally accepted definition of open standards. The FOSS community largely accepts the following definition, though parts of the industry contest it:

[S]ubject to full public assessment and use without constraints [royalty-free] in a manner equally available to all parties; without any components or extensions that have dependencies on formats or protocols that do not meet the definition of an open standard themselves; free from legal or technical clauses that limit its utilization by any party or in any business model; managed and further developed independently of any single vendor in a process open to the equal participation of competitors and third parties; available in multiple complete implementations by competing vendors, or as a complete implementation equally available to all parties.[18]

A standard can be considered open if it achieves the following goals: it increases the market for a particular technology by facilitating investment in that technology by both consumers and suppliers, and it does so while ensuring that these investors do not have to pay monopoly rents or deal with trade secret, copyright, patent or trademark problems. In retrospect, we have learned that the only standards that have achieved these goals are those that encourage an open source philosophy.

Proprietary software manufacturers, vendors and their lobbyists often provide a definition of open standards that is not in line with the above definitions on two counts (Nah, 2006).

One, they do not think it is necessary for an open standard to be available on a royalty-free basis as long as it is available under a “reasonable and non-discriminatory” (RAND) licence. This means that there are some patents associated with the standard and the owners of the patents have agreed to license them under reasonable and non-discriminatory terms (W3C, 2002). One example is the audio format MP3, an ISO/IEC [International Organisation for Standardisation/International Electrotechnical Commission] standard where the associated patents are owned by Thomson Consumer Electronics and the Fraunhofer Society of Germany. A developer of a game with MP3 support would have to pay USD 2,500 as royalty for using the standard. While this may be reasonable in the United States (US), it is unthinkable for an entrepreneur from Bangladesh. Additionally, RAND licences are incompatible with most FOSS licensing requirements. Simon Phipps of Sun Microsystems says that FOSS “serves as the canary in the coalmine for the word ‘open’. Standards are truly open when they can be implemented without fear as free software in an open source community” (Phipps, 2007). RAND licences also retard the growth of FOSS, since they are patented in a few countries. Despite the fact that software is not patentable in most parts of the world, the makers of various distributions of GNU/Linux do not include reverse-engineered drivers, codecs, etc., in the official builds for fear of being sued. Only the large corporation-backed distributions of GNU/Linux can afford to pay the royalties needed to include patented software in the official builds (in this way enabling an enhanced out-of-the-box experience). This has the effect of slowing the adoption of GNU/Linux, as less experienced users using community-backed distributions do not have access to the wide variety of drivers and codecs that users of other operating systems do (Disposable, 2004). This vicious circle effectively ensures negligible market presence of smaller community-driven projects by artificial reduction of competition.

Two, proprietary software promoters do not believe that open standards should be “managed and further developed independently of any single vendor,” as the following examples will demonstrate. This is equally applicable to both new and existing standards.

Microsoft’s Office Open XML (OOXML) is a relatively new standard which the FOSS community sees as a redundant alternative to the existing Open Document Format (ODF). During the OOXML process, delegates were unhappy with the fact that many components were specific to Microsoft technology, amongst other issues. By the end of a fast-track process at the ISO, Microsoft stands accused of committee stuffing: that is, using its corporate social responsibility wing to coax non-governmental organisations to send form letters to national standards committees, and haranguing those who opposed OOXML. Of the twelve new national board members that joined ISO after the OOXML process started, ten voted “yes” in the first ballot (Weir, 2007). The European Commission, which has already fined Microsoft USD 2.57 billion for anti-competitive behaviour, is currently investigating the allegations of committee stuffing (Calore, 2007). Microsoft was able to use its financial muscle and monopoly to fast-track the standard and get it approved. In this way it has managed to subvert the participatory nature of a standards-setting organisation. So even though Microsoft is ostensibly giving up control of its primary file format to the ISO, it still exerts enormous influence over the future of the standard.

HTML, on the other hand, is a relatively old standard which was initially promoted by the Internet Engineering Task Force (IETF), an international community of techies. However, in 2002, seven years after the birth of HTML 2.0, the US Department of Justice alleged that Microsoft used the strategy of “embrace, extend, and extinguish” (US DoJ, 1999) in an attempt to create a monopoly among web browsers. It said that Microsoft used its dominance in the desktop operating system market to achieve dominance in the web-authoring tool and browser market by introducing proprietary extensions to the HTML standard (Festa, 2002). In other words, financial and market muscle have been employed by proprietary software companies – in these instances, Microsoft – to hijack open standards.

The Importance
There are many technical, social and ethical reasons for the adoption and use of open standards. Some of the reasons that should concern governments and other organisations utilising public money – such as multilaterals, bilaterals, civil society organisations, research organisations and educational institutions – are listed below.

Innovation/competitiveness: Open standards are the bases of most technological innovations, the best example of which would be the internet itself (Raymond, 2000). The building blocks of the internet and associated services like the world wide web are based on open standards such as TCP/IP, HTTP, HTML, CSS, XML, POP3 and SMTP. Open standards create a level playing field that ensures greater competition between large and small, local and foreign, and new and old companies, resulting in innovative products and services. Instant messaging, voice over internet protocol (VoIP), wikis, blogging, file-sharing and many other applications with large-scale global adoption were invented by individuals and small and medium enterprises, and not by multinational corporations.

Greater interoperability: Open standards ensure the ubiquity of the internet experience by allowing different devices to interoperate seamlessly. It is only due to open standards that consumers are able to use products and services from competing vendors interchangeably and simultaneously in a seamless fashion, without having to learn additional skills or acquire converters. For instance, the mail standard IMAP can be used from a variety of operating systems (Mac, Linux and Windows), mail clients (Evolution, Thunderbird, Outlook Express) and web-based mail clients. Email would be a completely different experience if we were not able to use our friends’ computers, our mobile phones, or a cybercafé to check our mail.
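As a hedged sketch of what this interoperability looks like in practice (the host name and credentials below are placeholders, not real services), any client on any operating system can speak the same open IMAP standard using nothing more than a standard library:

```python
import imaplib

HOST = "imap.example.org"            # placeholder: any standards-compliant IMAP server
USER = "alice@example.org"           # hypothetical account
PASSWORD = "application-password"    # hypothetical credential

# The same few protocol commands work whether the client is a desktop
# program, a phone app or a short script, because IMAP is an open standard.
with imaplib.IMAP4_SSL(HOST) as mailbox:
    mailbox.login(USER, PASSWORD)
    mailbox.select("INBOX", readonly=True)
    status, data = mailbox.search(None, "UNSEEN")
    unread = data[0].split() if data and data[0] else []
    print("Unread messages:", len(unread))
```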

Customer autonomy: Open standards also empower consumers and transform them into co-creators or “prosumers” (Toffler, 1980). Open standards prevent vendor lock-in by ensuring that the customer is able to shift easily from one product or service provider to another without significant efforts or costs resulting from migration.

Reduced cost: Open standards eliminate patent rents, resulting in a reduction of total cost of ownership. This helps civil society develop products and services for the poor.

Reduced obsolescence: Software companies can leverage their clients’ dependence on proprietary standards to engineer obsolescence into their products and force their clients to keep upgrading to newer versions of software. Open standards ensure that civil society, governments and others can continue to use old hardware and software, which can be quite handy for sectors that are strapped for financial resources.

Accessibility: Operating system-level accessibility infrastructure such as magnifiers, screen readers and text-to-voice engines require compliance to open standards. Open standards therefore ensure greater access by people with disabilities, the elderly, and neo-literate and illiterate users. Examples include the US government’s Section 508 standards, and the World Wide Web Consortium’s (W3C) WAI-AA standards.

Free access to the state: Open standards enable access without forcing citizens to purchase or pirate software in order to interact with the state. This is critical given the right to information and the freedom of information legislations being enacted and implemented in many countries these days.

Privacy/security: Open standards enable the citizen to examine communications between personal and state-controlled devices and networks. For example, open standards allow users to see whether data from their media player and browser history are being transmitted to government servers when they file their tax returns. Open standards also help prevent corporate surveillance.

Data longevity and archiving: Open standards ensure that the expiry of software licences does not prevent the state from accessing its own information and data. They also ensure that knowledge that has been passed on to our generation, and the knowledge generated by our generation, is safely transmitted to all generations to come.

Media monitoring: Open standards ensure that the voluntary sector, media monitoring services and public archives can keep track of the ever-increasing supply of text, audio, video and multimedia generated by the global news, entertainment and gaming industries. In democracies, watchdogs should be permitted to reverse-engineer proprietary standards and archive critical ephemeral media in open standards.

Principles[19]

  1. Availability: Open standards should be available for everyone to access.
  2. Maximize End-User Choice: Open standards should lead to a competitive and fair market and should not restrict consumer choice.
  3. No Royalty: Open standards should be free of cost for any entity to implement, though there may be a fee for certification of compliance.
  4. No Discrimination: Open standards should not favour one implementer over another, except for the tautological reason of compliance with the standard itself. The authorities that certify these implementations should offer a low- or zero-cost certification path.
  5. Extension or Subset: Implementations of open standards may be permitted in subset form or with extensions, but certifying authorities may decline to certify subset implementations and may impose specific conditions on extensions.
[Image: HTTP, HTML, TCP/IP, SSL and other royalty-free open standards are the building blocks of the Internet.]
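To make the point concrete, here is a minimal sketch (assuming only ordinary network access to example.com) of speaking HTTP directly from its published, royalty-free specification; no proprietary library or licence fee is needed to implement the protocol:

```python
import socket

# HTTP is plain text over TCP: anyone can implement it from the open spec.
request = (
    "GET / HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        response += chunk

# The status line, e.g. "HTTP/1.1 200 OK", comes straight from the standard.
print(response.split(b"\r\n", 1)[0].decode("ascii", "replace"))
```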

OSI Criteria[20]
In addition, to make sure that the Open Standards also promote an open source philosophy, the Open Source Initiative (OSI), which is the steward of the open source definition, has a set of criteria for open standards.

  1. No Intentional Secrets: The standard MUST NOT withhold any detail necessary for interoperable implementation. As flaws are inevitable, the standard MUST define a process for fixing flaws identified during implementation and interoperability testing and to incorporate said changes into a revised version or superseding version of the standard to be released under terms that do not violate the OSR.
  2. Availability: The standard MUST be freely and publicly available (e.g., from a stable web site) under royalty-free terms at reasonable and non-discriminatory cost.
  3. Patents: All patents essential to implementation of the standard must:
    - be licensed under royalty-free terms for unrestricted use, or
    - be covered by a promise of non-assertion when practiced by open source software
  4. No Agreements: There must not be any requirement for execution of a license agreement, NDA, grant, click-through, or any other form of paperwork to deploy conforming implementations of the standard.
  5. No OSR-Incompatible Dependencies: Implementation of the standard must not require any other technology that fails to meet the criteria of this Requirement.

W3C Criteria[21]
The W3C also has a list of criteria that a specification must meet to be called an “Open Standard”:

  1. Transparency (due process is public, and all technical discussions, meeting minutes, are archived and referencable in decision making)
  2. Relevance (new standardization is started upon due analysis of the market needs, including requirements phase, e.g. accessibility, multi-linguism)
  3. Openness (anybody can participate, and everybody does: industry, individual, public, government bodies, academia, on a worldwide scale)
  4. Impartiality and consensus (guaranteed fairness by the process and the neutral hosting of the W3C organization, with equal weight for each participant)
  5. Availability (free access to the standard text, both during development and at final stage, translations, and clear IPR rules for implementation, allowing open source development in the case of Internet/Web technologies)
  6. Maintenance (ongoing process for testing, errata, revision, permanent access)

Case Study: Digital Colonialism

Imagine a world in which a foreign power leases you a piece of land and you grow crops on it. You have produced crops there for many seasons and used the sales to buy a nice windmill. One day, the lease expires and the foreign power comes and seizes not only your crops but also your windmill. Now apply the same story to a proprietary standards regime: imagine you were to license a copy of Microsoft Office for 28 days. You have stored documents in .doc, .xls and .ppt format. On the day the license expires, you will not only lose your ability to use Word, Excel and PowerPoint, you will in fact lose access to all your documents in .doc, .xls and .ppt formats!

Additional Readings

  1. Internet Engineering Task Force, OpenStandards.net, http://www.openstandards.net/viewOSnet2C.jsp?showModuleName=Organizations&mode=1&acronym=IETF
  2. Standards, W3C, http://www.w3.org/standards/
  3. Open Standards, http://www.open-std.org/
  4. Pranesh Prakash, “Report on Open Standards for GISW 2008”, Centre for Internet and Society, 2008, http://cis-india.org/publications-automated/cis/sunil/Open-Standards-GISW-2008.pdf/at_download/file
  5. Sunil Abraham, “Response to the Draft National Policy on Open Standards for e-Governance”, Centre for Internet and Society, 2008, http://cis-india.org/openness/publications/standards/the-response
  6. Pranesh Prakash, “Second Response to Draft National Policy on Open Standards for e-Governance”, Centre for Internet and Society, 2008, http://cis-india.org/openness/publications/standards/second-response
  7. Definition of “Open Standards”, International Telecommunication Union, http://www.itu.int/en/ITU-T/ipr/Pages/open.aspx

Open Content

Definition
The premise of an Open Content license is that, unlike most copyright licenses, which impose stringent conditions on the usage of the work, the Open Content licenses enable users to have certain freedoms by granting them rights. Some of these rights are usually common to all Open Content licenses, such as the right to copy the work and the right to distribute the work. Depending on the particular license, the user may also have the right to modify the work, create derivative works, perform the work, display the work and distribute the derivative works.

When choosing a license, the first thing that you will have to decide is the extent to which you are willing to grant someone rights over your work. For instance, let us suppose you have created a font. If you do not have a problem if people create other versions of it, then you can choose a license that grants the user all rights. If, on the other hand, you are willing to allow people to copy the font and distribute it, but you do not want them to change the typeface or create versions of it, then you can choose a more restrictive license that only grants them the first two rights.

Most open content licenses share a few common features that distinguish them from traditional copyright licenses.

These can be understood in the following ways:

  • Basis of the license / validity of the license (discussed above)
  • Rights granted (discussed above)
  • Derivative works
  • Commercial / non-commercial usage
  • Procedural requirements imposed
  • Appropriate credits
  • They do not affect fair use rights
  • Absence of warranty
  • Standard legal clauses

Derivative Works
Any work that is based on an original work created by you is a derivative work. The key difference between different kinds of Open Content licenses is the method that they adopt to deal with the question of derivative works. This issue is an inheritance from the licensing issues in the Free Software environment. The GNU GPL, for instance, makes it mandatory that any derivative work created from a work licensed under the GNU GPL must also be licensed under the GNU GPL. This is a means of ensuring that no one can create a derivative work from a free work which can then be licensed with restrictive terms and conditions. In other words, it ensures that a work that has been made available in the public domain cannot be taken outside of the public domain.

On the other hand, you may have a license like the Berkeley Software Distribution (BSD) software license that may allow a person who creates a derivative work to license that derivative work under a proprietary or closed source license. This ability to control a derivative work through a license is perhaps the most important aspect of the Open Content licenses. They ensure, in a sense, their own perpetuation. Since a person cannot make a derivative work without your permission, your permission is granted on the condition that s/he also allows others to use the derivative work freely. In Open Content licenses, the right to create a derivative work normally includes the right to create it in all media. Thus, if I license a story under an Open Content license, I also grant the user the right to create an audio rendition of it. The obligation to ensure that the derivative work is also licensed under the terms and conditions of the Open Content license is not applicable, however, in cases where the work is merely aggregated into a collection / anthology / compilation. For instance, suppose that I have drawn and written a comic called X, which is being included in a general anthology. In such a case, the other comics in the anthology may be licensed under different terms, and the Open Content license is not applicable to them and will only be applicable to my comic X in the anthology.

Commercial / Non-Commercial Usage
Another important aspect of Open Content licenses is the question of commercial / non-commercial usages. For instance, I may license a piece of video that I have made, but only as long as the user is using it for non-commercial purposes. On the other hand, a very liberal license may grant the person all rights, including the right to commercially exploit the work.

Procedural Requirements Imposed
Most Open Content licenses require strict adherence to procedures that have to be followed by the end user if s/he wants to distribute the work, and this holds true even for derivative works. The licenses normally demand, for instance, that a copy of the license accompany the work, or that the work carry some sign or symbol indicating the license under which it is being distributed, along with information about where this license may be obtained. This procedure is critical to ensure that all the rights granted and all the obligations imposed under the license are also passed on to third parties who acquire the work.

Appropriate Credits
The next procedural requirement that has to be strictly followed is that there should be appropriate credits given to the author of the work. This procedure applies in two scenarios. In the first scenario, when the end user distributes the work to a third party, then s/he should ensure that the original author is duly acknowledged and credited. The procedure also applies when the end user wants to modify the work or create a derivative work. Then, the derivative work should clearly mention the author of the original and also mention where the original can be found.

The importance of this clause arises from the fact that, while Open Content licenses seek to create an alternative ethos of sharing and collaboration, they also understand the importance of crediting the author. Very often, in the absence of monetary incentive, other motivating factors such as recognition, reputation and honour become very important. Open Content licenses, far from ignoring the rights of the author, insist on strict procedures so that these authorial rights are respected. The GNU Free Documentation License, for instance, puts it thus: “You may copy and distribute the Document in any medium, either commercially or non-commercially, provided that this License, the copyright notices, and the license notice saying this license applies to the document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute.”

Open content licenses do not affect fair use rights
Under copyright law, there is an exception to infringement known as the fair use exception. Fair use exceptions generally include using portions of a work for critique or review, and certain non-commercial or educational academic uses. Open content licenses make it clear that the terms and conditions of the license do not affect your fair use rights. Thus, even if someone is in disagreement with the terms and conditions, and refuses to enter into the open content license, s/he may still have the freedom to use the work to the extent allowed by his/her fair use rights.

Absence of warranty
Since more often than not the work is being made available at no financial cost and also gives the user certain freedoms, most open content licenses have a standard clause which states that the work is provided without any warranty, on an ‘as is’ basis. The licensor is not in a position to provide any warranty on the work. A few licenses, however, give the end user the option of providing a warranty on services, or a warranty on the derivative work, so long as that warranty is one between the licensee and the third party.

Standard legal clauses
A few other clauses that appear at the end of most open content licenses are the standard legal clauses that are included in most legal agreements, and you don’t have to worry too much about them while choosing a license.

These generally include:

  1. Severability: This means that even if one portion of the license is held to be invalid the other portions will still continue to have effect.
  2. Limitation on liability: The licenses normally state that the licensor will not be liable for anything arising from the use of the work. Thus, for instance, an end-user cannot claim that he suffered mental trauma as a result of the work.
  3. The licenses do not allow you to modify any portion of the license while redistributing works, etc.
  4. Termination: Most licenses state that the rights granted to the licensee are automatically terminated the moment s/he violates any obligation under the license.

Libraries as Content Providers and the Role of Technology
Content is for people’s use. First it was the library that facilitated access to knowledge for use by the lay public. The first among the five laws enunciated by the famous Indian librarian Ranganathan [7] emphasizes this point: “Books are for use.” And it was technology that enabled large-scale production of content in the form of books and subsequently facilitated ease of access.

Let us take text as content first. Before Gutenberg invented printing with movable type (c. 1436-1440), scribes used to write on vellum by hand. It was a painfully slow process and the reach was very limited. Gutenberg brought about probably the greatest game-changing technology, which within a very few years revolutionized many aspects of human life and history like never before. Peter Drucker has captured this revolution beautifully in an article in The Atlantic [8].

The public library became the content commons in the print era. Of course, long before Gutenberg there were some great libraries, e.g., the Royal Library of Alexandria (Egypt), the Taxila University Library, the Nalanda University Library (Bihar, India), Bayt al-Hikma (Baghdad, Iraq) and the Imperial Library of Constantinople (in the capital of the Byzantine Empire). None of these could survive the ravages of time. Thanks to printing, the numbers increased rapidly and the library movement spread to far corners of the globe.

The major public libraries of today are performing a great job with huge collections. The US Library of Congress in Washington DC has 155 million items occupying 838 miles of shelf space, of which 35 million are print material, 68 million are manuscripts, and 5.4 million are maps. Besides these, LoC has 6.5 million pieces of sheet music, 13.6 million photographs and 3.4 million recordings.

The British Library in London has more than 150 million items with 3 million being added annually. If one reads 5 items a day, it will take 80,000 years to complete the current collection. The National Library of Russia stocks more than 36.4 million items. The Russian State Library, the legendary 'Leninka,' comprises a unique collection of Russian and foreign documents in 247 languages, stocking over 43 million items.

Now every major library emphasizes improved access. Here are some excerpts from Mission statements of some large institutions around the world.

  1. British Library: “Enable access to everyone who wants to do research.”
  2. National Library of the Netherlands: “Our core values are accessibility, sustainability, innovation and cooperation.”
  3. German Federal Archives: “legal responsibility of permanently preserving the federal archival documents and making them available for use.”
  4. Danish National Gallery: “Through accessibility, education, and exhibition.”
  5. Victoria & Albert Museum: “To provide diverse audience with the best quality experience and optimum access to our collections, physically and digitally.”

I have included galleries, archives and museums in this sample as well, since all of them deal with cultural content. Indeed, the Open Knowledge Foundation has a major project called OpenGLAM.

In India the first network of public libraries covering a whole state was set up more than a hundred years ago by the Maharaja of Baroda (Sayaji Rao Gaekwad III), a truly benevolent king [9]. In the US though, the public library movement was essentially the gift of a ruthless industrialist who was believed to have been unfair to the workers in his steel mills. But the more than 2,000 libraries Andrew Carnegie helped set up are truly a democratizing force.

Today the Bill and Melinda Gates Foundation promotes libraries in developing and emerging economies, and through its Access to Knowledge award it encourages the use of ICT in libraries.

While public libraries opened up a vast treasure of knowledge to a large number of people, many of whom would otherwise not have had an opportunity to read even a few of the books in their collections, they did not provide ‘open access.’ That had to wait a little longer.

The Internet era not only helped traditional libraries to introduce new services but also gave birth to many free and open libraries such as Internet Archive and Project Gutenberg. The Internet Archive  aims to provide ‘universal access to all knowledge’ and includes texts, audio, moving images, and software as well as archived web pages, and provides specialized services for adaptive reading and information access for the blind and other persons with disabilities. Project Gutenberg encourages the creation of ebooks.

The best known examples of more recent initiatives are Europeana and the Digital Public Library of America (DPLA) both of which take full advantage of the possibilities offered by the Internet. Europeana provides access to 22.6 million objects (from over 2,000 institutions). These include 14.6 million images – paintings, photographs, etc. and 8.4 million books, magazines, newspapers, diaries, etc. DPLA is not even a year old but it already provides access to more than 5.4 million items from a number of libraries, archives and museums.

In India there are efforts to digitize print material, paintings, images, music, films, etc. The Digital Library of India (DLI) and the Indira Gandhi National Centre for the Arts (IGNCA) are two examples. Currently, the Ministry of Culture is toying with the idea of setting up a National Virtual Library.

Apart from libraries, which provide electronic access to millions, a very large number of newspapers, magazines and websites are also freely accessible on the net.

Perhaps the most important development in Open Content affecting people’s access to knowledge worldwide has been Wikipedia. Alexa ranks it 6th among all websites globally, and approximately 365 million users worldwide read Wikipedia content.

The Creative Commons System
Critiquing a system is merely one side of the coin; offering viable alternatives or solutions to the lacunae identified in the status quo significantly buttresses critical claims. The most promising alternatives have embraced the internet and the logic of its read-write culture. New media such as YouTube and platforms like WordPress have made each one of us not a mere consumer of information but a potential author or film maker. Any viable alternative must contemplate this transformation of the internet from a read-only culture to a read-write culture.

Creative Commons (CC) is a non-profit organization that functions across the world to provide licensing tools to authors of creative works. The key distinguishing feature of this system is that the authors have the right to decide under what license they want to make their work available. The system was conceptualized by a number of individuals at the helm of the copyleft movement, of whom the most prominent was Professor Lawrence Lessig.

The creative commons system stands for ‘Some Rights Reserved’, a deviation from the ‘all rights reserved’ model of strict copyright law. The rights to be reserved are left to the discretion of the author.

Types of Licenses
1.    Attribution: CC BY
2.    Attribution-ShareAlike: CC BY-SA
3.    Attribution-NoDerivatives: CC BY-ND
4.    Attribution-NonCommercial: CC BY-NC
5.    Attribution-NonCommercial-ShareAlike: CC BY-NC-SA
6.    Attribution-NonCommercial-NoDerivatives: CC BY-NC-ND

Exceptions to Open Content
There are two kinds of critiques that have been made about the limitations of Open Content initiatives. The first is a policy-level critique, which argues that the voluntary nature of Open Content projects diverts attention from the larger need for urgent structural transformations in the global copyright regime. It is argued, for instance, that by relying on copyright, even in a creative variation of it, Open Content ends up strengthening the copyright system. The larger problem of access to knowledge and culture can only be solved through a long-term intervention in the global copyright regime, from the Berne Convention to the TRIPS agreement.

Open Content has also been criticized on the grounds that it privileges the traditional idea of the author at the centre of knowledge and culture at the cost of focusing on users. By giving authors the right to participate in a flexible licensing policy, Open Content initiatives end up privileging the notion that it is desirable to create property rights in expressions; cultural and literary products are treated as commodities, albeit ones that the creator can decide to make accessible (or not), much like a person can decide whether or not to invite someone into his/her house.

A second-level critique questions the relevance of Open Content projects, given their heavy reliance on the Internet. According to the Copysouth group:
It is unlikely that more than a tiny percentage of the works created on a global basis in any year will be available under Creative Commons (CC) licenses. Will the percentage be even less within the Southern Hemisphere? This seems likely. Hence, CC licenses will be of limited value in meeting the expansive access needs of the South in the near future. Nor do CC licenses provide access to already published works or music that are still restricted by copyright laws; these form the overwhelming majority of current material. Focusing on CC licenses may potentially sideline or detour people from analyzing how existing copyright laws block access and how policy changes on a societal level, rather than the actions of individual "good guys", are the key to improving access and the related problems of copyright laws and ideology which are discussed elsewhere in this draft dossier. Nor does it confront the fact that many creators (e.g. most musicians, most academic authors) may be required, because of unequal bargaining power, to assign copyright in their own work to a record company or publisher as a condition of getting their work produced or published.

Finally, a number of Open Content initiatives have an uncomfortable take on the other modes through which most people in developing nations gain access to knowledge and cultural commodities, namely piracy, and its critical relation to infrastructure. The emphasis of Open Content on the creation of new content also raises the questions of who uses the new content, and how such content relates to the democratization of infrastructure.

In most cases, the fall in the price of electronic goods and computers, greater access to material, the increase in photocopiers (the infrastructure of information flows), etc., are not caused in any manner by a radical revolution such as Free Software or Open Content, but by the easier availability of standard mainstream commodities like Microsoft software and Hollywood films. Open Content is unable to provide a solution to the problem of content that is locked up within current copyright regimes. As much as one would like to promote new artists, new books, etc., the fact remains that the bulk of the people do want the latest Hollywood / Bollywood films at a cheaper cost; they do want the latest proprietary software at a cheaper cost; and they do want to read Harry Potter without paying a ransom.

We can either take the moral higher ground and speak of their real information needs or provide crude theories of how they are trapped by false consciousness. Or, we can move away from these judgmental perspectives, and look at other aspects of the debate, such as the impact that the expansion of the grey market for these goods has on their general pricing, the spread of computer/IT culture, the fall in price of consumables such as blank CDs, DVDs, the growing popularity of CD-writing equipment, etc.

There is no point in having a preachy and messianic approach that lectures people on the kind of access that should be given. While in an ideal world, we would also use Free Software and Open Content, this cannot be linked in a sacrosanct manner to the question of spreading access.


Wikipedia


History of Wikipedia
January 15th is known as Wikipedia Day to Wikipedians. On this day 13 years ago, in 2001, Jimmy Wales and Larry Sanger launched a wiki-based project after experimenting with another project called Nupedia. Nupedia was also a web-based project, whose content was written by experts in order to have high-quality articles comparable to those of a professional encyclopedia. Nupedia approved only 21 articles in its first year, compared to Wikipedia’s 200 articles in its first month and 18,000 in its first year.

In concept, Wikipedia was intended to complement Nupedia by providing additional high-quality articles. In practice, Wikipedia overtook Nupedia, becoming a global project providing free information in multiple languages.

As of January 2014, Wikipedia includes over 30.5 million articles, written in 287 languages (including over 20 Indian languages) by 44 million registered users and numerous anonymous volunteers.[1] Wikipedia is the world’s sixth most popular internet property, with about 450 million unique visitors every month, according to Alexa Internet.[2]

Wikipedia in Indian Languages
With one of the globe’s largest populations, the world’s largest democracy, dozens of languages and hundreds of dialects, and a rich heritage of culture, religion, architecture, art, literature and music, India presents a remarkable opportunity for Wikipedia. For the Wikimedia movement, India represents a largely untapped opportunity to dramatically expand its impact and move toward the vision of a world in which everyone can freely share in – and contribute to – the sum of human knowledge.

Although the Indian population makes up about 20% of humanity, Indians account for only 4.7% of global Internet users, and India represents only 2.0% of global pageviews and 1.6% of global page edits on Wikimedia's sites. Wikipedia projects in 20 Indic languages will become increasingly important as the next 100 million Indians come onto the Internet, given that they are likely to be increasingly using the Internet in languages other than English. Demographically, Indic languages represent a good growth opportunity, since estimates suggest only about 150 million of the total Indian population of 1.2 billion have working fluency in English.

To drive the growth of Indian language Wikipedias, the Wikimedia Foundation (WMF) initiated the Access to Knowledge Programme (A2K) with the Centre for Internet & Society in 2012.

Challenges Faced by Indian Language Wikipedias
The current challenges of Indian language Wikipedias can be summarized as below:

1.    Indian language Wikipedias are under-represented in reader, editor and article counts.
2.    The editor base is relatively small. Further, growth in editors and articles is still relatively low, even on a small base.
3.    Technical barriers exist for the use of Indian language Wikipedias, especially for editing.
4.    Internet penetration is low (~150 million users), though it is growing rapidly and projected to double by 2015.[3]
Hari Prasad Nadig, a Wikipedian since 2004, an active Kannada Wikipedian and a sysop on both the Kannada and Sanskrit Wikipedias, talks about the challenges and opportunities of Indian language Wikipedias in a video.[22]

Development of Indian Language Wikipedias
Between 2002 and 2004, about 18 Indian language Wikipedias were started. As of January 2014, Hindi Wikipedia is the largest project, with over one lakh (100,000) articles, and Malayalam Wikipedia has the best quality articles amongst all the Indian language Wikipedia projects.

In India, there are two main organisational bodies:

First is the Wikimedia India Chapter, an independent, not-for-profit organization that supports, promotes and educates the general Indian public about the availability and use of free and open educational content, including the ability to access, develop and contribute to encyclopaedias, dictionaries, books, images, etc. The chapter helps coordinate the various Indian language Wikipedias and other Wikimedia projects and spreads the word in India. The chapter's latest updates can be accessed from its official portal wiki.wikimedia.in.

Second is the Access to Knowledge programme at the Centre for Internet & Society (CIS-A2K), which provides support to the Indian Wikimedia community on various community-led activities, including outreach events across the country, meetups, contests, conferences, and connections to GLAMs and other institutions. CIS-A2K's latest updates can be accessed from its page on Meta-Wiki.[23]

Some ideas for the development of Indian language Wikipedias (also adopted by the India Chapter and CIS-A2K) are:

Content addition/donation in Indian languages
Particular emphasis is placed on generating and improving content in Indic languages. The Indian language Wikipedias can be strengthened by finding content that is relevant and useful to the Wikimedia movement and is either (a) already in the public domain or (b) contributed to the movement under an acceptable copyright license. Such content includes, but is not limited to, dictionaries, thesauruses, encyclopedias and other encyclopedia-like compilations.

A precedent for content addition/donation exists in the gift of an encyclopedia to the Wikimedia movement by the Kerala government in 2008 and the Goa government in 2013.

Institutional Partnerships
Partner with higher education institutions to develop thematic projects and create a network of academicians who will actively use Indian language Wikipedias as part of their pedagogy. Conduct outreach workshops mainly to spread awareness and to explore possibilities for long-term partnerships.

An example of this is the 1,600 undergraduate students of Christ University who study a second language as part of their course and are enrolled in a programme in which they build content on the Hindi, Kannada, Tamil, Sanskrit and Urdu Wikipedias.

Strengthening existing community
Facilitate more qualitative interactions amongst current contributors, with the aim of (a) fostering the creation of new project ideas; (b) periodically reviewing and mitigating troublesome issues; and (c) fostering a culture of collective review of the expansion of Indian language Wikipedias.

This is currently being done through capacity-building meet-ups and advanced user trainings organized for existing Wikimedia volunteers from different language communities.

Tapping into User Interest Groups
Setting up smaller special interest groups by tapping into existing virtual communities (Facebook pages/groups, blogger communities, other open source groups/mailing lists) and physical communities, and supporting key Wikipedians in bringing new Wikipedians on board.

Building ties with DiscoverBhubaneshwar in Odisha [4] and Goa.me in Goa [5], which are photographers' communities. Useful pictures from different states can feed into Wikipedia articles, thereby enriching the content. Collaboration with the Media Lab at Jadavpur University, Kolkata, has helped create articles on Indian cinema and media, Indian film history, etc.

Creating awareness
Creation of short online video tutorials on editing, and editing guides, published on Wikimedia Commons, YouTube, Facebook and similar websites, could help reach larger audiences. Producing videos in local languages avoids issues with global videos, such as low comprehension because of accents and limited relevance.

Hindi Wikipedia tutorial videos were produced in collaboration with Christ University students, faculty and staff as part of the Wikipedia-in-the-UG-Language-Classroom programme. A total of 10 videos were produced to teach anyone how to edit Hindi Wikipedia.[24] Video tutorials for Kannada Wikipedia are currently in the pipeline.

Technical support
Liaising between the language communities and the WMF and its Language Committee to find effective solutions for script issues, input method issues, rendering issues or other bugs.

Case Study: Wikipedians Speak

Netha Hussain is a 21-year-old medical student from Kerala, India. She first began editing Wikipedia in May 2010, contributing to English Wikipedia and Malayalam Wikipedia along with uploading photos to Wikimedia Commons. She said “I started editing Wikipedia every day. In school, we studied subjects like microbiology, pathology, pharmacology and forensic medicine. After class, I'd go straight to Wikipedia. I'd review the information related to the day's lecture, and add a few more facts and sources. It was a lot of work, and I always went to bed tired, but it was worth it. Everybody reads Wikipedia. If they want to learn something, they turn to Wikipedia first. I know I’ve helped a little — maybe even a lot. And that’s the greatest feeling I know.”[25]

Netha Hussain

Image Attribution: Netha Hussain by Adam Novak, under CC-BY-SA 3.0 Unported, from Wikimedia Commons.

Poongothai Balasubramanian is a retired Math teacher from Tamil Nadu, India. She began editing Wikipedia in 2010. Since then, she has created 250 articles and recorded pronunciations for 6,000 words. She has created several articles on quadratic functions, probability, charts, graphs and more on Tamil Wikipedia. She has over 7,000 Wikipedia edits. She said, “As a teacher and a mother, I was always busy. But now that I'm retired and my children are grown, my time is my own — all 24 hours of it! And I spend every day on Wikipedia. I'm a volunteer. No one pays me. But helping edit Wikipedia has become my life's work. Even though I'm not in the classroom, I'm still doing what I care about most: helping a new generation of students learn, in the language I love.”[26]

Poongothai Balasubramanian

Image Attribution: Balasubramanian Poongothai by Adam Novak, under CC-BY-SA 3.0 Unported, from Wikimedia Commons.

Additional Reading

  1. Geert Lovink and Nathaniel Tkacz (eds.), “Critical Point of View: A Wikipedia Reader”, Centre for Internet and Society and the Institute of Network Cultures, http://www.networkcultures.org/_uploads/%237reader_Wikipedia.pdf.
  2. Links to 2 videos
  3. Yochai Benkler

Open Access

Definition
Open-access (OA) literature is digital, online, free of charge, and free of most copyright and licensing restrictions.

OA removes price barriers (subscriptions, licensing fees, pay-per-view fees) and permission barriers (most copyright and licensing restrictions). The PLoS shorthand definition —"free availability and unrestricted use"— succinctly captures both elements.

There is some flexibility about which permission barriers to remove. For example, some OA providers permit commercial re-use and some do not. Some permit derivative works and some do not. But all of the major public definitions of OA agree that merely removing price barriers, or limiting permissible uses to "fair use" ("fair dealing" in the UK), is not enough.

Here's how the Budapest Open Access Initiative put it: "There are many degrees and kinds of wider and easier access to this literature. By 'open access' to this literature, we mean its free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself. The only constraint on reproduction and distribution, and the only role for copyright in this domain, should be to give authors control over the integrity of their work and the right to be properly acknowledged and cited."

Here's how the Bethesda and Berlin statements put it: For a work to be OA, the copyright holder must consent in advance to let users "copy, use, distribute, transmit and display the work publicly and to make and distribute derivative works, in any digital medium for any responsible purpose, subject to proper attribution of authorship...."

The Budapest (February 2002), Bethesda (June 2003), and Berlin (October 2003) definitions of "open access" are the most central and influential for the OA movement. We sometimes refer to them collectively, or to their common ground, as the BBB definition.

When we need to refer unambiguously to sub-species of OA, we can borrow terminology from the kindred movement for free and open-source software. Gratis OA removes price barriers alone, and libre OA removes price barriers and at least some permission barriers as well. Gratis OA is free of charge, but not free of copyright or licensing restrictions. Users must either limit themselves to fair use or seek permission to exceed it. Libre OA is free of charge and expressly permits uses beyond fair use. To adapt Richard Stallman's famous formulation (originally applied to software), gratis OA is free as in 'free beer', while libre OA is also free as in 'free speech'.

In addition to removing access barriers, OA should be immediate, rather than delayed, and should apply to full texts, not just abstracts or summaries.

It is true that many libraries and other content-providing organizations provide free access to vast quantities of textual (and other kinds of) information. Today a variety of content is thrown open by its creators, including hundreds of educational courses, open government data, open monographs, open images and so on.

But when we talk of 'open access', the term is restricted to science and scholarship, especially to research publications and in particular journal articles. Unlike most newspaper publishers, not all publishers of professional journals are ready to allow free use of the material they publish. Indeed, they levy hefty subscription prices, and some journals cost in the range of US $20,000–30,000 per year. Large publishing houses earn profit margins upwards of 35%: "Elsevier's reported margins are 37%, but financial analysts estimate them at 40–50% for the STM publishing division before tax" [10].

Publishers protect their ‘rights’ with copyright and are ever vigilant in protecting those rights.

Case Study: Aaron Swartz

Let us begin with an extreme example – the case of Aaron Swartz, the hacker-activist, who was driven to end his life in January 2013 after being pursued by the US Department of Justice.

What did Aaron do? He downloaded a very large number of full text papers from JSTOR, a database of scholarly journal articles, from an MIT server.

Why should anyone think downloading scholarly research articles was a crime in the first place? “Why, twenty years after the birth of the modern Internet, is it a felony to download works that academics chose to share with the world?” asks Michael Eisen, a renowned biologist at UC Berkeley and co-founder of the Public Library of Science [11].

The most important component of the Internet, the World Wide Web, was invented by CERN researchers essentially to help scientists communicate and share their research.

Today we can view thousands of videos on Indian weddings and pruning roses, but we are barred from downloading or reading research papers without paying a large sum. These are papers written by scientists, reviewed by scientists, their research often paid for by government agencies. And the knowledge therein is of relevance not only to other scientists but to the lay public as well, especially health-related research.

And yet JSTOR, a not-for-profit organization founded with support from the Andrew W. Mellon Foundation, and MIT were keen to go to court, and the prosecutor was keen to argue for the severest punishment.

Case Study: Rover Research
Recently, Michael Eisen placed on his website four research papers resulting from the Rover exploration of Mars, published in the AAAS journal Science. This is something no one had done before. His logic: the research was funded by NASA, a US government agency, and most of the authors were working in government institutions, and therefore citizens have the right to access the papers. While everyone was expecting AAAS and the authors to drag Eisen to court for violating copyright, the authors also made the papers freely available on their institutions' websites! But I wonder if Eisen could have got away so easily had he placed papers published in a journal from Elsevier or Springer. Possibly not. Recently Elsevier sent thousands of takedown notices to Academia.edu for hosting papers published in Elsevier journals (in the final PDF version) on its site. Elsevier has also sent similar missives to many individual scientists and universities, including Harvard, for a similar 'offence' [12].

Scientists do research and communicate results to other scientists. They build on what is already known, on what others have done – the 'shoulders of giants', as Newton said. Getting to know the work and results of others' research is essential for the progress of knowledge. Any barrier, including a cost barrier, will hurt science, or for that matter the production of knowledge in any field.

When it comes to information (and knowledge), scientists everywhere face two problems, viz. access and visibility. These problems are acutely felt by scientists in poorer countries.

  1. They are unable to access what other scientists have done, because of the high costs of access. With an annual per capita GDP of about US $3,500 (PPP) or even less, libraries in most developing countries cannot afford to subscribe to the key journals needed by their users. Most scientists are forced to work in a situation of information poverty. Thanks to spiralling costs, many libraries are forced to cancel subscriptions to several journals, making the situation even worse.
  2. Scientists elsewhere are unable to access what developing country researchers are publishing, leading to low visibility and low use of their work. Take, for example, India. As Indian scientists publish their research in thousands of journals, small and big, from around the world, their work is often not noticed by other scientists, even those within India working in the same and related areas. Thus Indian work is hardly cited.

Both these handicaps can be overcome to a considerable extent if open access is adopted widely both within and outside the country.

Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities[27]
Due to the changes that have come about in the production and distribution of scientific and cultural knowledge in the age of the internet, there needed to be an agreement to move towards a global and interactive representation of human knowledge with worldwide access guaranteed. The Berlin Declaration of 2003 was an attempt at just that, and it was in accordance with the spirit of the Budapest Open Access Initiative, the ECHO Charter and the Bethesda Statement on Open Access Publishing. The declaration lays down the measures that need to be adopted by research institutions, funding agencies, libraries, archives and museums, among others, in order to use the internet for open access to knowledge. There are more than 450 signatories, including various governments, funding agencies, academic and other knowledge-based institutions. According to the Declaration, open access contributions include:

"Original scientific research results, raw data and metadata, source materials, digital representations of pictorial and graphical materials and scholarly multimedia material.

  1. Open access contributions must satisfy two conditions:The author(s) and right holder(s) of such contributions grant(s) to all users a free, irrevocable, worldwide, right of access to, and a license to copy, use, distribute, transmit and display the work publicly and to make and distribute derivative works, in any digital medium for any responsible purpose, subject to proper attribution of authorship (community standards, will continue to provide the mechanism for enforcement of proper attribution and responsible use of the published work, as they do now), as well as the right to make small numbers of printed copies for their personal use.
  2. A complete version of the work and all supplemental materials, including a copy of the permission as stated above, in an appropriate standard electronic format is deposited (and thus published) in at least one online repository using suitable technical standards (such as the Open Archive definitions) that is supported and maintained by an academic institution, scholarly society, government agency, or other well-established organization that seeks to enable open access, unrestricted distribution, inter operability, and long-term archiving."

Open Access – Green and Gold
With the Internet and the Web becoming ubiquitous, we need not suffer these problems. If science is about sharing, then the Net has the potential to liberate the world of science and scholarship and make it a level playing field.

Till a few decades ago, scholarly communication was a quiet affair. Scientists and professors did research in their laboratories and sent the papers they wrote to the editors of refereed journals. These journals were often published by professional societies, academies and, in some countries, government departments devoted to science. Many societies handed over the responsibility of bringing out their journals to commercial publishing houses. These publishers found in journal publishing a great business opportunity and started raising subscription prices. Initially no one seemed to notice or bother. But from around 1980, the rise in the cost of journals outstripped general inflation by a factor of 3 or 4. Members of the Association of Research Libraries felt the pinch; many academic libraries had to cut down on their purchase of books and monographs so as to be able to subscribe to as many journals as possible. Then they had to cut down on the number of journals. Their levels of service to their academic clients fell badly. The 'serials crisis' forced them to protest. By then web technologies and online sharing of information had advanced sufficiently. Together these two developments led to the open access movement.

There are two ways research papers published in journals can be made open access: Open access journals and open access repositories.

Open Access Journals - A journal can allow free downloading of its papers by anyone, anywhere, without payment. Such journals are called open access journals, and making papers open by this method is referred to as the Gold route to open access. Traditionally, journals charged a subscription fee from libraries (or from individuals who chose to take personal subscriptions) and did not charge authors submitting papers for publication. Occasionally, some journals would request authors to pay a small fee to cover the colour printing of illustrations. Many open access journals do charge a fee from the authors, which is often paid by the author's institution. The article processing charge (APC) collected by different journals varies from a few hundred dollars to a few thousand.

But not all OA journals levy an article processing charge; for example, journals published by the Indian Academy of Sciences, the Council of Scientific and Industrial Research (CSIR-NISCAIR), the Indian Council of Medical Research and the Indian Council of Agricultural Research do not charge authors or their institutions. As of today, there are more than 9,800 OA journals published from 124 countries, and these are listed in the Directory of Open Access Journals (www.doaj.org), an authoritative database maintained at Lund University. On average, four new journal titles are added to DOAJ every day.

Open Access Repositories - Authors of research papers may make them available to the rest of the world by placing them in archives or repositories. This is the 'Green route' to open access. There are two kinds of repositories: central, and distributed (institutional). arXiv is a good example of a central repository. Any researcher working in a relevant field can place their paper in arXiv, and it can be seen almost instantaneously by other researchers worldwide. Developed in 1991 as a means of circulating scientific papers prior to publication, arXiv initially focused on e-prints in high energy physics (HEP). In time, its focus broadened to related disciplines. All content in arXiv is freely available to all users. Currently, it provides access to more than 900,000 "e-prints in Physics, Mathematics, Computer Science, Quantitative Biology, Quantitative Finance and Statistics." There are other central repositories such as SSRN (Social Science Research Network,[28] with abstracts of over 521,000 scholarly working papers and forthcoming papers and an Electronic Paper Collection of over 426,600 downloadable full-text documents), Research Papers in Economics[29] (and ideas.RePEc.org; 1.4 million items, of which 1.3 million are downloadable full texts), and CiteSeerX (for computer and information science).[30]

Then there are institutional repositories. The Registry of Open Access Repositories (ROAR)[31] lists more than 2,900 repositories from around the world. The Directory of Open Access Repositories (OpenDOAR)[32] lists more than 2,550 repositories, linking to more than 50 million items and growing at the rate of about 21,000 items per day, which can be searched through the Bielefeld Academic Search Engine (BASE). A database called SHERPA-RoMEO lists the open access and self-archiving policies of journals.

These repositories are different from the usual websites that individual scientists may maintain. They run on one of many standard software packages such as EPrints, DSpace, Fedora or Greenstone. And they are all interoperable and 'OAI-compliant', which means that anyone searching for information need not know about a particular paper or the repository in which it is deposited; a mere keyword search will find the paper if it is relevant.
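This interoperability rests on the OAI-PMH protocol (the Open Archives Initiative Protocol for Metadata Harvesting), which compliant repositories expose over plain HTTP. The following is a minimal sketch, in Python, of how a harvester or an aggregator such as BASE might pull Dublin Core metadata from a compliant repository; it assumes the widely used 'requests' package, and the repository base URL shown is only a placeholder, not a real endpoint.

```python
# Minimal OAI-PMH harvesting sketch (assumes the 'requests' package is installed;
# REPOSITORY_BASE_URL is a placeholder for any OAI-compliant repository).
import requests
import xml.etree.ElementTree as ET

REPOSITORY_BASE_URL = "http://repository.example.org/oai"  # hypothetical endpoint

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def harvest_titles(base_url):
    """Fetch one page of Dublin Core records and return (title, identifier) pairs."""
    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    response = requests.get(base_url, params=params, timeout=30)
    response.raise_for_status()
    root = ET.fromstring(response.content)

    records = []
    for record in root.iter(OAI + "record"):
        title = record.find(".//" + DC + "title")
        identifier = record.find(".//" + DC + "identifier")
        records.append((
            title.text if title is not None else "",
            identifier.text if identifier is not None else "",
        ))
    return records

if __name__ == "__main__":
    for title, identifier in harvest_titles(REPOSITORY_BASE_URL):
        print(title, "->", identifier)
```

Because every OAI-compliant repository answers the same ListRecords request in the same format, a single harvester can aggregate metadata from thousands of repositories without knowing anything about the software (EPrints, DSpace, Fedora or Greenstone) running behind them.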

The Prophets of Open Access
The Net and the Web have not merely replaced print by speeding things up; they have inherently changed the way we can do science (e.g., eScience and grid computing), collaborate, mine data, and deal with datasets of unimaginable size. But the potential is not fully realized, largely because most of us are conditioned by our past experience and are inherently resistant to change. Our thinking and actions are conditioned by the print-on-paper era. Added to that is the apathy of science administrators.

Three individuals have made seminal contributions to realizing the potential of the Net in scholarly communication and may be considered pioneers in ushering in an era of open access. Tony Hey calls them ‘prophets of open access.’

  1. Paul Ginsparg, creator of arXiv, an open access repository for preprints of much of the physics and astronomy literature.
  2. David Lipman, Director of the NCBI (National Center for Biotechnology Information), known for his leadership in making biomedical data and health information publicly and easily available to all, including scientists, medical professionals, patients, and students. By organizing and integrating genomic data for developing diagnostic and clinical applications, NCBI serves as a bridge from research to the medical community. Each day, more than 3 million users access NCBI's 40 interlinked genomic and bibliographic databases and download more than 30 terabytes of data. NCBI is home to PubMed Central (PMC) and PubChem, two essential databases for biomedical researchers. PMC is a full-text (e-prints) database of published research papers, and PubChem is a database of about 31 million biologically important chemical compounds and their bioassays.
  3. Stevan Harnad, author of the 'subversive proposal', founder of Cogprints and tireless evangelist for Green Open Access [13]. Harnad writes frequently on all aspects of scholarly communication and open access in his blog 'Open Access Archivangelism', addresses conferences and answers questions sent to him. There are also some institutions which have contributed substantially, including the Open Society Institute (OSI), now rechristened the Open Society Foundations, which facilitated the formulation of the Budapest Open Access Initiative and the Budapest Declaration, and the Association of Research Libraries. Surprisingly, Microsoft, not a great admirer of open source software, is promoting eScience through its External Research Division, formed especially for this purpose under the leadership of Prof. Tony Hey, former dean at the University of Southampton.

Open Access in India
The situation with accessing overseas journals has improved considerably thanks to the many consortia which facilitate access for large groups of scientists in India (especially those in CSIR laboratories, the Indian Institutes of Technology and the Indian Institute of Science). Many universities have benefited through INFLIBNET. ICMR labs and selected medical institutions have formed ERMED, their own consortium. The Rajiv Gandhi University of Health Sciences, Bangalore, provides access to the literature through the HELINET consortium to a number of medical colleges in the South.

But the increased availability has not been taken full advantage of by our researchers. A study at IISc in 2008 showed that the faculty and students had not used even half the journals subscribed to – either for publishing their research or for citing papers published in them. We seem to be paying for journals we do not use. Many of these journals are published by commercial publishers, who make huge profits. Publishers force consortia to buy journals as packages (bundling).

On the open courseware front, the NPTEL programme, under which top-notch IIT and IISc professors produce both web-based and video lessons in many subjects (available on YouTube as well), has a huge worldwide following.

Many physicists in the better-known institutions use arXiv, which has a mirror site in India, both for placing their preprints and postprints and for reading preprints of others. But many others are not aware of it. What we need is advocacy and more advocacy.

Open access is gaining traction in India. For example, professors at the National Institute of Technology, Rourkela, the first Indian institution to mandate open access for all faculty (and student) research publications, have received invitations to attend international conferences and offers of collaboration after their papers were made available through the institutional repository. Indian journals which embraced the open access model started recording higher impact factors, e.g., the Indian Journal of Medical Research and the Journal of Postgraduate Medicine. MedKnow, the publisher of JPGM, and Bioline International have plenty of data to show the advantages of going open access.

And yet many researchers are reluctant to embrace OA. They fear that the journal publishers may sue them if they deposit their published papers in IRs. They have concerns about copyright violation.

Organizations such as the Open Society Foundations, ARL, SPARC and JISC (UK) and the seven research councils of UK are championing open access. Unfortunately some professional societies, notably ACS, are trying to stall the march of open access.

The best way to promote open access in India is to encourage self-archiving.

As Alma Swan says, we can do that by highlighting the increased visibility and impact, requiring authors to self-archive, and requiring them to self-archive in an institutional repository [14].

Why an institutional repository? Because it fulfils an institution's mission to engender, encourage and disseminate scholarly work; an institution can mandate self-archiving across all subject areas. It enables an institution to compile a complete record of its intellectual effort; it forms a permanent record of all digital output from an institution. It enables standardised online CVs for all researchers. It can be used as a 'marketing' tool for institutions [14].

An institutional repository provides researchers with secure storage (for completed work and for work-in-progress). It provides a location for supporting data yet to be published. It facilitates one-input-many outputs (CVs, publications) [14].

First, we must help institutions build an archive and teach researchers including students how to deposit (do it for them in the beginning if necessary) [14].

Eventually, in fact pretty soon, OA will be accepted by the vast majority of scientists and institutions. For only with OA can scientific literature and data be fully used. OA, by making scientific literature and data free, is the only way to liberate the immense energy of distributed production. The moral, economic and philosophical imperatives for open access are indeed strong.

Even pharmaceutical companies like GlaxoSmithKline, Novartis and Novo Nordisk have started sharing their hard-earned data in the area of drug development.

The openness movement in science and scholarship does not end with OA journals and OA repositories – both central and distributed. It includes open data initiatives, eScience and open science.

To learn more about open access, please visit the Open Access Tracking Project led by Peter Suber, EOS (www.openscholarship.org) and OASIS (openoasis.org), and join the GOAL discussion group moderated by Richard Poynder.

To know more about open science, read the articles by Paul David and Tony Hey.

What is Already There?
Thanks to the initiatives taken by Prof. M S Valiathan, former President of the Indian National Science Academy, the journals published by INSA were made OA a few years ago.

The Academy also signed the Berlin Declaration. The Indian Academy of Sciences converted all its eleven journals into OA. The Indian Medlars Centre at the National Informatics Centre brings out the OA versions of about 40 biomedical journals published mostly by professional societies.

All journals published by CSIR-NISCAIR (17), ICAR (2), ICMR and AIIMS are OA journals. No one needs to pay either to publish or to read papers in these journals.

A Bombay-based private company called MedKnow brings out more than 300 journals, most of them OA, on behalf of their publishers, mostly professional societies. This company was acquired by Wolters Kluwer, which has decided to keep the journals OA.

Current Science and Pramana, the physics journal of the Indian Academy of Sciences, were the first to go open access among Indian journals. In all, the number of Indian OA journals is about 650.

The Indian Institute of Science, Bangalore, was the first to set up an institutional repository in India. It uses the GNU EPrints software. Today the repository has about 33,000 papers, not all of them full text. IISc also leads the Million Books Digital Library project's India efforts under the leadership of Prof. N. Balakrishnan.

Today there are about 60 repositories in India (as seen from ROAR and OpenDOAR), including those at the National Institute of Oceanography, the National Aerospace Laboratories, the Central Marine Fisheries Research Institute, the Central Food Technological Research Institute, CECRI and the Raman Research Institute. The National Institute of Technology, Rourkela, was the first Indian institution to have mandated OA for all faculty publications.

Both ICRISAT and NIO have also mandated OA.

A small team at the University of Mysore is digitizing doctoral dissertations from select Indian universities under a programme called Vidyanidhi.

Problems and the Future
Despite concerted advocacy and many individual letters addressed to policy makers, the heads of government departments of science and the research councils do not seem to have applied their minds to opening up access to research papers. The examples of the research councils in the UK, the Wellcome Trust, the Howard Hughes Medical Institute and the NIH have had virtually no impact. Many senior scientists, directors of research laboratories and vice chancellors of universities do not have a clear appreciation of open access and its advantages and implications.

Among those who understand the issues, many would rather publish in high-impact journals, as far as possible, and would not take the trouble to set up institutional archives.

Most Indian researchers have not bothered to look up the several addenda (to copyright agreement forms) that are now available. Many scientists I spoke to are worried that a publisher may not publish their papers if they attach an addendum! Publishing firms work in subtle ways to persuade senior librarians to keep away from OA initiatives. There have been no equivalents of FreeCulture.org among Indian student bodies and no equivalent of the Taxpayers' Alliance to influence policy at the political level.

Both the National Knowledge Commission and the Indian National Science Academy have recommended OA. The Indian Academy of Sciences has set up a repository for publications by all its Fellows, and it has more than 90,000 papers (many of them only metadata and abstracts). The Centre for Internet and Society has brought out a status report on OA in India. The Director General of CSIR has instructed all CSIR labs to set up and populate institutional repositories as soon as possible. The Director General of ICAR has come up with an OA policy. Dr Francis Jayakanth of IISc received the EPT Award for Advancing Open Access in the Developing World in its inaugural year. That should encourage many librarians to take to promoting OA.

The government should mandate by legislation self-archiving of all research output immediately upon acceptance for publication by peer-reviewed journals. The self-archiving should preferably be in the researcher's own institution's Institutional Repository.

The mandate should be by both institutions and funders.

Science journal publishers in the government and academic sectors should be mandated to make their journals OA. (This can be achieved by adopting the Open Journal Systems software developed at the University of British Columbia and Simon Fraser University, already in use by more than 10,000 journals; expertise is available in India. Alternatively, some journals can join Bioline International.)

We should organize a massive training programme (in partnership with IISc, ISI-DRTC, NIC, etc.) on setting up OA repositories.

Authors should have the freedom to publish in journals of their choice, but they should be required to make their papers available through institutional repositories. In addition, they should use the addenda suggested by SPARC, Science Commons, etc. while signing copyright agreements with journal publishers, and not surrender copyright to (commercial) publishers. Some OA journals charge for publication; the Indian government, funders and institutions should definitely not offer to pay journal publication charges.

Again, OA for all India's research output is covered by simply mandating OA self-archiving of all articles.

Brazil and the rest of Latin America have made great strides in open access. The excellent developments in Brazil, especially the government support (particularly in the state of Sao Paulo) and the work of SciELO (for OA journals) and IBICT in supporting an OA repository network, are worthy of emulation in India and other developing countries.

Argentina has enacted a law that mandates OA to all research publications. India can follow their example.

John Holdren, Director of the US Office of Science and Technology Policy, has issued a memorandum directing the major government funding agencies in the US to insist on open access to the research they fund. Indian funding agencies can do the same.

While our focus should be on digitizing and throwing open the current research papers and data, we may also make available our earlier work.

In particular, we may create an OA portal for the papers of great Indian scientists of the past: Ramanujan, J C Bose, S N Bose, M N Saha, K S Krishnan, Y Subba Rao, Sambhu Nath De, Mahalanobis, Maheshwari. C V Raman’s papers are already available on open access.

We may proactively advance OA in international forums such as IAP, IAC, ICSU and UNESCO. Two things can hasten the adoption of OA in India:

  1. If the political left is convinced that research paid for by the government is not freely available to the people and, what is worse, that the copyright to the research papers is gifted away to commercial publishers in the advanced countries, then they may act. In the same way, the political right will come forward to support open access if we impress upon them that the copyright to much of the knowledge generated in our motherland is gifted away to publishing houses in the West.
  2. If students are attracted to the idea that fighting for open access is the in thing to do, then they will form Free Culture-like pressure groups and fight for the adoption of open access.

References

  1. Aristotle, “Politics”, Book 2, Part 3, Oxford: Clarendon Press, 1946, 1261b.
  2. G. Hardin, “The Tragedy of the Commons”, Science, December 13, 1968.
  3. Vincent Ostrom and Elinor Ostrom, “Public Goods and Public Choices”, in E. S. Savas (ed.), Alternatives for Delivering Public Services: Toward Improved Performance, Boulder, CO: Westview Press, 1977, pp. 7–49.
  4. Elinor Ostrom, “Governing the Commons: The Evolution of Institutions for Collective Action”, Cambridge University Press, 1990.
  5. E. Ostrom, “The Rudiments of a Theory of the Origins, Survival, and Performance of Common Property Institutions”, in D. W. Bromley (ed.), Making the Commons Work: Theory, Practice and Policy, San Francisco: ICS Press, 1992.
  6. Charlotte Hess and Elinor Ostrom (eds.), “Understanding Knowledge as a Commons: From Theory to Practice”, MIT Press, 2006, http://mitpress.mit.edu/authors/charlotte-hess and http://mitpress.mit.edu/authors/elinor-ostrom.
  7. S. R. Ranganathan, “Five Laws of Library Science”, Sarada Ranganathan Endowment for Library Science, Bangalore, 1966.
  8. Peter F. Drucker, “Beyond the Information Revolution”, The Atlantic, October 1, 1999.
  9. M. L. Nagar, “Shri Sayajirao Gaikwad, Maharaja of Baroda: The Prime Promoter of Public Libraries”, 1917.
  10. Richard Van Noorden, “Open Access: The True Cost of Science Publishing”, Nature, 495 (7442), March 27, 2013.
  11. Michael Eisen, “The Past, Present and Future of Scholarly Publishing”, It Is Not Junk, March 28, 2013, http://www.michaeleisen.org/blog/?p=1346
  12. Kim-Mai Cutler, “Elsevier's Research Takedown Notices Fan Out To Startups, Harvard, Individual Academics”, TechCrunch, December 19, 2013, http://techcrunch.com/author/kim-mai-cutler/, http://techcrunch.com/2013/12/19/elsevier/
  13. S. Harnad, “A Subversive Proposal”, in Ann Okerson and James O'Donnell (eds.), Scholarly Journals at the Crossroads: A Subversive Proposal for Electronic Publishing, Association of Research Libraries, June 1995, http://www.ecs.soton.ac.uk/~harnad/subvert.html
  14. A. Swan, “Policy Guidelines for the Development and Promotion of Open Access”, UNESCO, Paris, 2012.
  15. Glover Wright, Pranesh Prakash, Sunil Abraham and Nishant Shah, “Open Government Data Study”, Centre for Internet and Society and Transparency and Accountability Initiative, 2011, http://cis-india.org/openness/blog/publications/open-government.pdf

Open (Government) Data

Definition
“Open data is data that can be freely used, reused and redistributed by anyone – subject only, at most, to the requirement to attribute and share alike.”[33] This has become an increasingly important issue in the age of the internet, when governments can gather unprecedented amounts of data about citizens and store various kinds of data, which can actually be made available to people more easily.

Types of Government Data
Image: Open (Government) Data.[34]

This does not necessarily mean that all of the government's data should be open according to the definition laid out above. There have been many arguments articulated against this:

  1. Since the government is responsible for the efficient use of taxpayers' money, data that is commissioned and useful only for a small subsection of society (e.g., corporations) should be paid for by that subsection.
  2. There may be privacy concerns that limit the use of data to particular users or to particular sub-sets of the data.
  3. Often, the data may not be usable without further processing and analysis that requires more investment from other sources. Groups that would usually commission such projects lose their incentive to do so if everyone has access to the information, e.g., biological, medical and environmental data.

However, this kind of utilitarian calculus is not possible when deciding which data should be open and which should not. Some theorists argue that government data should be open for the following reasons:[35]

  1. An open democratic society requires that its citizens know what the government is doing and that there is a high level of transparency. Free access is essential for this, and in order for that information to be intelligible, the data should be reusable as well, so that it can be analyzed further.
  2. In the information age, commercial and even social activity requires data, and having government data open can be a way to fuel economic and social activity within society.
  3. If public taxpayer money was used to fund the government data, then the public should have access to it.

The Open Data Handbook lays out the steps required to start making government data more open.[36] The summarized gist of it is to:

1.    Choose the data sets that need to be made open.
2.    Apply an open license:
a.    Find out what kind of intellectual property rights exist on that data.
b.    Select an appropriate open license that incorporates all of the criteria (usability, reusability, etc.) discussed above.
3.    Make the data available, either in bulk or through an Application Programming Interface (API).
4.    Make this open data discoverable by posting it on the web or adding it to a data catalogue (a minimal sketch of steps 2–4 follows this list).
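The sketch below, in Python, illustrates steps 2–4 under stated assumptions: the file names, the sample rows and the choice of a CC-BY 4.0 licence are all illustrative, not drawn from any real dataset. It writes the data out as a bulk CSV file together with a small machine-readable metadata file recording the licence, which is what a portal or catalogue would later index to make the dataset discoverable.

```python
# Sketch of publishing a dataset in bulk with open-licence metadata.
# File names, dataset contents and the chosen licence are illustrative only.
import csv
import json

rows = [
    {"district": "Pune", "year": 2013, "rainfall_mm": 722},
    {"district": "Nagpur", "year": 2013, "rainfall_mm": 1064},
]

# Step 3: make the data available in bulk as a plain CSV file.
with open("rainfall_2013.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["district", "year", "rainfall_mm"])
    writer.writeheader()
    writer.writerows(rows)

# Step 2: record the open licence in a machine-readable metadata file.
metadata = {
    "title": "District-wise annual rainfall, 2013 (illustrative sample)",
    "description": "Illustrative sample of district rainfall figures.",
    "license": {
        "name": "CC-BY-4.0",
        "url": "https://creativecommons.org/licenses/by/4.0/",
    },
    "distribution": [{"format": "CSV", "path": "rainfall_2013.csv"}],
}
with open("rainfall_2013.metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)

# Step 4: the CSV and metadata file can now be posted on a portal or
# added to a public data catalogue so that the dataset is discoverable.
```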
Application Programming Interface (API) vs. Bulk Data[37]

  1. Bulk is the only way to ensure that the data is accessible to everyone.
  2. Bulk access is a lot cheaper than providing API access (an API specifies how software components should interact with each other). Therefore, it is acceptable for the provider to charge for API access as long as the data is also provided in bulk.
  3. An API is not a guarantee of open access, but it is good if it is provided (a short sketch contrasting the two access modes follows this list).
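A short sketch, again in Python with the 'requests' package, contrasts the two access modes. The portal URL, endpoints and query parameters are purely hypothetical and do not describe any real government API; the point is only that bulk access is a single download of the whole file, whereas API access typically means paging through the data request by request.

```python
# Bulk download vs. paged API access: a sketch against a hypothetical portal.
# 'data.example.gov.in' and the query parameters are assumptions, not a real API.
import requests

PORTAL = "https://data.example.gov.in"  # hypothetical open data portal

def fetch_bulk(dataset_id):
    """Bulk access: one request returns the complete dataset as a CSV file."""
    response = requests.get(f"{PORTAL}/datasets/{dataset_id}.csv", timeout=60)
    response.raise_for_status()
    return response.text  # the whole file, reusable offline

def fetch_via_api(dataset_id, page_size=100):
    """API access: the same data arrives in pages of JSON records."""
    records, offset = [], 0
    while True:
        response = requests.get(
            f"{PORTAL}/api/datasets/{dataset_id}",
            params={"offset": offset, "limit": page_size},
            timeout=60,
        )
        response.raise_for_status()
        page = response.json().get("records", [])
        if not page:
            break
        records.extend(page)
        offset += page_size
    return records
```

Either way, the data itself should carry the same open licence; the API is a convenience layered on top of the bulk file, not a substitute for it.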

Open Government Data in India
At a recent annual summit in London, where an open government data report was produced, India ranked 34th among 77 countries.

Image: Data Availability and Openness.[38]

In India, open government data is currently about closing the loopholes and gaps in the Right to Information (RTI) Act and realizing its promise of transparency as envisioned by the Knowledge Commission. In its Tenth Five Year Plan (2002–2007), the Indian government announced its plan to become SMART (Simple, Moral, Accountable, Responsible and Transparent).[39]

In 2012, India launched the Open Government Platform, a software platform that attempts to enhance the public's access to government data. It was jointly developed by India and the US as part of their Open Government Initiative.[40] Data.gov.in is a platform under this which provides single-point access to datasets and apps published by the government's ministries, departments and organizations, in compliance with the National Data Sharing and Accessibility Policy (NDSAP).[41]

The Right to Information Act, 2005[42]
Around 82 countries around the world currently have laws in place that force the government to disclose information to its citizenry, but this is a rather recent phenomenon. In India, the RTI Act was passed in 2005 after a prolonged struggle by civil society. This Act effectively replaces and overrides many state-level RTI acts, the Freedom of Information Act, 2002 and the Official Secrets Act, 1923. We have come to learn, based on responses to RTI requests, that the government is not obliged to provide access to some pieces of information, such as the minutes of a cabinet meeting.

The RTI Act defines information as:

‘Any material in any form, including records, documents, memos, e-mails, opinions, advices, press releases, circulars, orders, logbooks, contracts, reports, papers, samples, models, data material held in any electronic form and information relating to any private body which can be accessed by a public authority under any other law for the time being in force.’

This capacious vision of the Act indicated a shift in the government's philosophy from secrecy to transparency. According to the Global Integrity report, in the category 'public access to government information', India went from 78 points to 90 points between 2006 and 2011. During the same period, the United States went only from 78 points to 79 points. However, according to a study conducted by PricewaterhouseCoopers, 75% of the respondents said they were dissatisfied with the information provided by public authorities.

Government Copyright
The government owns the copyright to any work that is produced by the government or government employees in India, as well as any material produced by an Indian legislative or judicial body. This provision is laid down in the Copyright Act, 1957[43] (section 17(d) read with section 2(k)), which gives the copyright a lifespan of 60 years (section 28). The exceptions to the copyright are small and are laid down in section 52(1)(q):

‘52(1) The following acts shall not constitute an infringement of copyright, namely: (q) the reproduction or publication of — (i) any matter which has been published in any Official Gazette except an Act of a Legislature; (ii) any Act of a Legislature subject to the condition that such Act is reproduced or published together with any commentary thereon or any other original matter; (iii) the report of any committee, commission, council, board or other like body appointed by the Government if such report has been laid on the Table of the Legislature, unless the reproduction or publication of such report is prohibited by the Government; (iv) any judgement or order of a court, tribunal or other judicial authority, unless the reproduction or publication of such judgment or order is prohibited by the court, the tribunal or other judicial authority, as the case may be.’

Although this exception is small, in practice the government has rarely prosecuted to enforce copyright when data is requested by an individual or group, even when the reason for the request is commercial in nature.

IP Protection for the Government

Most of the data compiled by or commissioned by the government is raw data in the form of figures and statistics. Generally, non-original literary works are not protected by copyright law, and this issue was decided in a landmark Supreme Court case in 2007. The standard of originality was changed from the labour expended on compiling the information (also known as the 'sweat of the brow' doctrine)[44] to the creativity, skill and judgment required in the process. This meant that most of the government's data would not qualify as creative enough to hold a copyright.

Case Study: The Department of Agriculture, Maharashtra

The Department of Agriculture (DoA) in Pune started using ICTs as early as 1986, when it used a computerized system to process census data. The DoA currently uses ICT for internal administrative work and also for processing and disseminating data to farmers across Maharashtra, both online and through SMS. The website is bilingual, in Marathi (the local language of the state) and English.

Some of the information available includes:[45]

  1. The participation of Maharashtra farmers in the National Agriculture Insurance Scheme
  2. Annual growth rates of agriculture and animal husbandry
  3. Rainfall recording and analysis
  4. Soil and crop, horticultural, soil/water conservation, agricultural inputs, statistical and district-wise fertility maps.
  5. Farmers can sign up for SMSs that give information specific to the crop requested.

Even though, in 2010, information was available on 43 different crops and was sent to 40,000 farmers, many people do not have the technology to access all this information, usually because of a lack of reliable electricity, internet and mobile phone access. The question is whether the government's open data responsibility ends once the data is made available. Sometimes, the government also has to make a discretionary decision not to make certain data available to the common man in the interest of public order. For example, news that a crop is infested with a disease or a pest could cause mass panic not only among farmers but also among general consumers.

Case Study: Indian Water Portal

The Indian Water Portal[46], based in Bangalore, describes itself as "an open, inclusive, web-based platform for sharing water management knowledge amongst practitioners and the general public. It aims to draw on the rich experience of water-sector experts, package their knowledge and add value to it through technology and then disseminate it to a larger audience through the Internet."[47]

Based on the recommendations of the National Knowledge Commission (NKC), the IWP has adopted best practices. It has been running on the open source software Drupal since 2007, and it is available in Hindi, Kannada and English. The portal also has an educational aspect, as it provides reading material to students who wish to learn about water issues. Although the website was set up with the support of the national government, it has not received much support from ministries and departments, which is problematic as they produce the largest amount of information on water and sanitation.

This is, however, a great example of a public-private partnership that has led to accessible open government data. The only problem is that it is accessible only to people with access to the web, but that may be a problem better solved by increasing access to the web.


[1]. Read more at http://dmlcentral.net/blog/nishant-shah/big-data-peoples-lives-and-importance-openness

[2]. For more see GNU Operating System, “The Free Software Definition”, available at http://www.gnu.org/philosophy/free-sw.html, last accessed on January 26, 2014.

[3]. Read more at http://freeopensourcesoftware.org/index.php?title=History

[4]. For more see Millennium Development Goals, United Nations, available at http://www.un.org/millenniumgoals/bkgd.shtml, last accessed on January 26, 2014.

[5]. For more see “Free and Open Source Software”, Communication and Information, UNESCO, available at  http://www.unesco.org/new/en/communication-and-information/access-to-knowledge/free-and-open-source-software-foss/, last accessed on January 26, 2014.

[6]. Read more at http://freeopensourcesoftware.org/index.php?title=Organizations

[7]. Read more at http://freeopensourcesoftware.org/index.php?title=Licenses

[8]. See citation 6 above.

[9]. For more see GNU Operating System, Why “Free Software” is better than “Open Source” https://www.gnu.org/philosophy/free-software-for-freedom.html, last accessed on January 26, 2014.

[10]. For more see Free Software Movement of India, available at http://www.fsmi.in/, last accessed on January 26, 2014.

[11]. See the Department of Electronics and Information Technology, Ministry of Communications & Information Technology, Government of India, Free and Open Source Software available at http://deity.gov.in/content/free-and-open-source-software, last accessed on January 26, 2014.

[12]. See citation above.

[13]. For more see Curoverse Gets $1.5M to Develop Open Source Genomics Tool, available at http://www.xconomy.com/boston/2013/12/18/curoverse-gets-1-5m-develop-open-source-genomics-tool/2/, last accessed on January 26, 2014.

[14]. For more see The Open-Sorcerers, available at http://slate.me/18NNx4x, last accessed on January 24, 2014.

[15]. For more see “Open Standards Requirements for Software – Rationale”, Open Source Initiative, available at http://opensource.org/osr-rationale, last accessed on January 26, 2014.

[16]. See citation above.

[17]. Ibid.

[18]. For more see “An emerging understanding of Open Standards”, available at http://blogs.fsfe.org/greve/?p=160, last accessed on January 26, 2014.

[19]. http://perens.com/OpenStandards/Definition.html

[20]. For more see Open Standards Requirements for Software – Rationale, available at http://opensource.org/osr, last accessed on January 26, 2014.

[21]. See “Definition of Open Standards”, available at http://www.w3.org/2005/09/dd-osd.html, last accessed on January 27, 2014.

[22]. Hari Prasad Nadig talking about Wikipedia Community building at Train the Trainer Program organised by CIS, November 29, 2013, available at http://www.youtube.com/watch?v=scEZewFJXUU, last accessed on February 1, 2014.

[23]. India Access to Knowledge meta page, available at http://meta.wikimedia.org/wiki/India_Access_To_Knowledge, last accessed on February 1, 2014.

[24]. What is Hindi Wikipedia?, CIS-A2K, available at http://www.youtube.com/watch?v=96Lzxglp5W4&list=PLe81zhzU9tTTuGZg41mXLXve6AMboaxzD, last accessed on February 1, 2014.

[25]. Interview with Netha Hussain at WikiWomenCamp in Buenos Aires 2012, available at http://commons.wikimedia.org/wiki/File:WWC-Netha-Hussain.ogv, last accessed on February 2, 2014.

[26]. See interview of Poongothai Balasubramanian at http://wikimediafoundation.org/wiki/Thank_You/Poongothai_Balasubramanian, last accessed on February 1, 2014.

[27]. For more see Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities, available at http://openaccess.mpg.de/286432/Berlin-Declaration, last accessed on February 1, 2014.

[28]. See Social Science Research Network, available at http://www.ssrn.com/, last accessed on January 27, 2014.

[29]. RePEc, available at http://www.repec.org/, last accessed on January 26, 2014.

[30]. CiteSeerX, available at http://citeseerx.ist.psu.edu/, last accessed on January 26, 2014.

[31]. Registry of Open Access Repositories, available at http://roar.eprints.org/, last accessed on January 26, 2014.

[32]. The Directory of Open Access Repositories, available at http://www.opendoar.org/, last accessed on January 26, 2014.

[33]. For more see Why Open Data, available at http://okfn.org/opendata/, last accessed on January 26, 2014.

[34]. Image obtained from http://okfn.org/opendata/

[35]. For more see Glover Wright, Pranesh Prakash, Sunil Abraham, Nishant Shah and Nisha Thompson, “Report on Open Government Data in India, Version 2 Draft”, Centre for Internet and Society, available at  http://cis-india.org/openness/publications/ogd-draft-v2/, last accessed on January 25, 2014.

[36]. For more see Open Data Handbook, available at http://opendatahandbook.org/en/, last accessed on January 29, 2014.

[37]. For more see Janet Wagner, “Government Data: Web APIs vs. Bulk Data Files”, programmable web, available at http://blog.programmableweb.com/2012/03/28/government-data-web-apis-vs-bulk-data-files/, last accessed on January 31, 2014.

[38]. Read more at http://www.thehindu.com/opinion/blogs/blog-datadelve/article5314288.ece

[39]. For more see Glover Wright, Pranesh Prakash, Sunil Abraham and Nishant Shah, “Open Government Data Study: India”, Centre for Internet and Society, available at http://cis-india.org/openness/publications/open-government.pdf, last accessed on January 26, 2014.

[40]. Read more at http://pib.nic.in/newsite/erelease.aspx?relid=82025

[41]. Read the guidelines at http://data.gov.in/sites/default/files/NDSAP_Implementation_Guidelines-2.1.pdf

[42]. See the Right to Information Act, 2005, available at http://rti.gov.in/rti-act.pdf, last accessed on January 25, 2014.

[43]. See the Copyright Act, 1957, available at http://www.indiaip.com/india/copyrights/acts/act1957/act1957.htm, last accessed on January 25, 2014.

[44]. See note above.

[45]. See note above.

[46]. For more see Glover Wright, Pranesh Prakash, Sunil Abraham and Nishant Shah, “Open Government Data Study: India”, Centre for Internet and Society, available at http://cis-india.org/openness/publications/open-government.pdf, last accessed on January 26, 2014.

[47]. For more see India Water Portal, available at http://www.indiawaterportal.org/, last accessed on January 26, 2014.