
i4D Interview: Social Networking and Internet Access

Nishant Shah, the Director for Research at CIS, was recently interviewed in i4D in a special section on Social Networking and Governance, as a lead-up to the Internet Governance Forum in December in Hyderabad.

Mechanism of Self-Governance Needed for Social Networks

Should social networking sites be governed, and if yes, in what way?

Nishant Shah: A call for either monitoring or censoring Social Networking Sites (SNS) has long proved ineffectual, with users always finding new ways of circumventing the bans or blocks that are put into place. However, given the ubiquitous nature of SNS and the varied age-groups and interests represented there, governance that is non-intrusive and actually enables a better and more effective experience of the site is always welcome. The presumed notion of governance is that it will set processes and procedures in place which will eventually crystallise into laws or regulations. However, there is also another form of governance: governance as provided by a safe-keeper or a guardian, somebody who creates symbols of caution and warns us to be careful in certain areas. In the physical world, we constantly face these signs and symbols which remind us of the need to be aware and safe. The creation of a vocabulary of warnings, signs and symbols that remind us of the dangers within SNS is a form of governance that needs to be worked out. This can be a participatory governance where each community develops its own concerns and addresses them. What is needed is a way of making sure that these signs are present and garner the attention of the user.

How do we address the concerns that some of the social networking spaces are not "child safe"? 

The question of child safety online has resulted in a raging debate. Several models, from the cybernanny to monitoring the child's activities online, have been suggested at different times and have more or less failed. The concerns about what happens to a child online are the same as those about what happens to a child in the physical world. When the child goes off to school, or to the park to play, we train and educate them about things that they should not be doing -- suggesting that they do not talk to strangers, do not take sweets from strangers, do not tell people where they live, do not wander off alone -- and hope that these will be sufficient safeguards for their well-being. As an added precaution, we also sometimes supervise their activities and their media consumption. More than finding technical solutions for safety online, it is a question of education, training and some amount of supervision to ensure that the child is complying with your idea of what is good for them. A call for sanitising the internet is more or less redundant; in fact, it only adds to the dark glamour of the web and incites younger users to go and search for material which they would otherwise have ignored.

What are the issues, especially around identities and profile information privacy rights of users of social networking sites?  

The main set of issues, as I see it, around the question of identities is the mapping of digital identities to physical selves. The questions would be: What constitutes the authentic self? What is the responsibility of the digital persona? Are we looking at a post-human world where online identities are equally a part of who we are, and are sometimes even more a part of who we are than our physical selves? Does the older argument of the Original and the Primary (characteristics of Representation aesthetics) still work when we are talking about a world of 'perfect copies' and 'interminable networks of selves' (characteristics of Simulation)? How do we create new models of verification, trust and networking within an SNS? Sites like Facebook and Orkut -- with their ability to establish looped relationships between users, the notion of inheritance ('friend of a friend of a friend of a friend'), and testimonials and open 'walls' and 'scraps' for messaging -- are already approaching these new models of trust and friendship.

How do we strike a balance between the freedom of speech and the need to maintain law and order when it comes to monitoring social networking sites?

I am not sure that the 'freedom of speech and expression' and the 'maintaining of law and order' need to be posited as antithetical to each other. Surely the whole idea of 'maintaining law and order' already includes maintaining conditions within which freedom of speech and expression can be practised. Instead of monitoring social networking sites to censor and chastise (as has happened in some of the recent debates around Orkut, for example), it is a more fruitful exercise to ensure that speech, as long as it is not directed offensively towards an individual or a community, is registered and heard. Hate speech of any sort should not be tolerated, but that is already covered by judicial systems around the world.

What, perhaps, is needed online is a mechanism of self-governance where the community can decide the kinds of actions and speech that are valid and acceptable to them. People who engage in trollish behaviour or hate speech automatically get chastised and punished in different ways by the community itself. Looking at models of better self-governance and community mobilisation might be more productive than producing this schism between freedom of speech on the one hand and the maintenance of law and order on the other.
