Is That a Friend on Your Wall?
However, what remains constant in all these different equations is the process by which trust is established in physical spaces.
How do we trust somebody? How do we know that we are safe with them? We rely on answers like “instincts”, “vibes” or “feelings”, which cannot be easily quantified or explained. The reason we do not have rational explanations for why and how we trust somebody is that we depend upon a social design of trust from the beginning of our social interactions. As young children, we were told not to speak with strangers or accept candy from them. As adults, we were taught that people who look like us and sound like us are probably safer for us. We learn, through signs and experience, how to be safe in our daily lives. Some signs are obvious, like “Beware of pickpockets”.
Others are learned: the way somebody looks at us warns us of impending danger. We have learned to decode physical appearances, intonations, backgrounds and body language in order to build relationships of trust with the people we meet.
Often, these relationships are mediated by structures that we trust. We believe that the students who study with our children are not going to cause them harm because their schools would have screened out undesirable people. In public places, we are not paranoid that a gunman is going to start shooting at us because we believe that law and order systems would have produced conditions of safety.
However, when we go online, the instincts we have been trained to rely on to decode people’s social performances suddenly become inadequate. Social cues online are difficult to decipher.
We no longer have the luxury of studying people “in person”. Instead, we have to engage with them through interfaces, where their avatars become the faces we talk to. As the famous cartoon goes, a dog at a computer tells another canine friend, “On the internet, nobody knows you are a dog!” For digital natives who populate these virtual worlds with great ease, this is perhaps one of the biggest challenges. Without a social design that helps them evolve measures of safety, or the advice of older generations (there are no older digital natives!), it becomes difficult for them to figure out how to trust somebody online. This lack of design often feeds the paranoia about predators, about young users being exploited by those more skilled at navigating these environments, and about the bullying and exclusion that often happen in online spaces. Digital immigrants and settlers look to digital natives for clues about how to trust somebody online.
Economic structures like banks, corporations and governments advise people on how to trust transactions online. Your bank has probably sent you information about phishing scams. Companies like Facebook also ask you to check URLs and alert you when you navigate to a page outside the Facebook universe. Governments are investing in hi-tech encryption services that can protect citizen data against fraud or misuse.
Browsers like Firefox have their own checks that warn you when a webpage looks dubious. All these measures, while they help to protect us online, still do not help us determine how and why we trust somebody online.
This question needs to be emphasised because the solutions do not reside in technology implementations. Trust is not a technology problem, and the answers to these questions are not going to be found within technologies either.
As we begin the second decade of the 21st century, it is time to start figuring out how we shall learn to identify elements of digital identities. Reminders and signposts about not sharing sensitive information online, and trust-based designs in which users earn credentials through their participation in a community and trust ratings from their peers, will become an integral part of digital identities. We need to learn how to analyse online identities: searching databases, reading the larger narratives behind avatars through reference sites that can validate information about a user, and remembering that online conversations also carry an element of risk. This will help us decode digital behaviour and ensure that we make informed choices about trust online.
Read the original in the Indian Express here.