Old Isn't Always Gold: FaceApp and Its Privacy Policies

Posted by Mira Swaminathan and Shweta Reddy on Jul 21, 2019
Leaving aside the Red Scare for a moment, FaceApp's own rebuttal of privacy worries is itself highly problematic.

The article by Mira Swaminathan and Shweta Reddy was published in The Wire on July 20, 2019.


If you, much like a large number of celebrities, have spammed your followers with images of ‘how you may look in your old age’, you have successfully been part of the FaceApp fad that went viral this week.

The problem with the FaceApp trend isn’t that it has penetrated most social circles, but that it has gone viral with minimal scrutiny of its vaguely worded privacy policy. We click ‘I agree’ without understanding that our so-called ‘explicit consent’ gives the app permission to use our likeness, name and username for any purpose, without our knowledge or further consent, even after we delete the app. FaceApp is currently the most downloaded free app on the Apple App Store, thanks to the large number of people downloading it to ‘turn their old selfies grey’.

There are many things the app could have done differently. It could process images on your device rather than send submitted photos to an outside server. Instead, it uploads your photos to the cloud without making it clear to you that processing is not taking place locally on your device.
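For contrast, here is a minimal, hypothetical sketch of what purely on-device editing looks like on iOS, using Apple’s Core Image framework. The filter name is a stand-in: FaceApp’s actual ageing model is proprietary and its pipeline is not public.

```swift
import UIKit
import CoreImage

// Hypothetical sketch of on-device editing: the filter runs locally,
// so the photo never has to leave the phone. "CIPhotoEffectNoir" is a
// stand-in for FaceApp's proprietary ageing model.
func applyFilterLocally(to input: UIImage) -> UIImage? {
    guard let ciInput = CIImage(image: input),
          let filter = CIFilter(name: "CIPhotoEffectNoir") else { return nil }
    filter.setValue(ciInput, forKey: kCIInputImageKey)

    guard let output = filter.outputImage else { return nil }
    let context = CIContext() // renders on the device's own CPU/GPU
    guard let rendered = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: rendered)
}
```

Processed this way, the photo would stay on the phone entirely; the privacy question arises only because the app ships the image to a remote server instead.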

Further, if you have an Apple product, the iOS app appears to override your settings even if you have denied it access to your camera roll. People have reported that they could still select and upload a photo despite the app not having permission to access their photos. This ‘allowed behaviour’ in iOS is quite concerning, especially when paired with apps whose terms and conditions are loosely worded.
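This behaviour is not unique to FaceApp: since iOS 11, the system image picker runs out-of-process, and the app receives only the single photo the user explicitly selects, so no photo-library permission prompt appears. A minimal illustrative sketch follows; the class and method names are ours, not FaceApp’s.

```swift
import UIKit

// Illustrative sketch: UIImagePickerController runs in a separate process,
// so presenting it does not trigger a photo-library permission prompt on
// iOS 11 and later. The app receives only the single photo the user picks.
class PhotoPickViewController: UIViewController,
    UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    func pickPhoto() {
        let picker = UIImagePickerController()
        picker.sourceType = .photoLibrary // no PHPhotoLibrary authorization needed
        picker.delegate = self
        present(picker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        // Delivered even though the app was never granted blanket
        // access to the user's camera roll.
        let selected = info[.originalImage] as? UIImage
        picker.dismiss(animated: true)
        _ = selected // e.g. hand off for editing or upload
    }
}
```

The upshot is that denying photo access in Settings does not prevent a user-initiated, one-photo hand-off, which is exactly what people observed.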

FaceApp responded to these privacy concerns by issuing a statement with a list of defences. The statement clarified that FaceApp performs most of the photo processing in the cloud, that it only uploads the photo a user has selected for editing, and that it never transfers any other images from the phone to the cloud. However, even in this clarificatory statement, FaceApp said it ‘might’ store an uploaded photo in the cloud, explained that the main reason for doing so is “performance and traffic”, and stated that only ‘most’ images are deleted from its servers within 48 hours of upload.

Further, the statement ends by saying that “all pictures from the gallery are uploaded to our servers after a user grants access to the photos”. This is highly problematic.

In the table below, we have set out the concerns arising from the privacy policy with reference to the global gold standards: the OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data, the APEC Privacy Framework, the Report of the Group of Experts on Privacy chaired by Justice A.P. Shah, and the General Data Protection Regulation:

Privacy Domain: Transparency

OECD Guidelines: There should be a general policy of openness about developments, practices and policies with respect to personal data.

APEC Privacy Framework: Personal information controllers should provide clear and easily accessible statements about their practices and policies with respect to personal data.

Report of the Group of Experts on Privacy: A data controller shall give a simple-to-understand notice of its information practices to all individuals, in clear and concise language, before any personal information is collected from them.

General Data Protection Regulation: The controller shall take appropriate measures to provide information relating to processing to the data subject in a concise, transparent, intelligible and easily accessible form, using clear and plain language. The Article 29 Working Party guidelines on transparency add that the information should be concrete and definitive; it should not be phrased in abstract or ambivalent terms or leave room for different interpretations. An example of poor practice: “We may use your personal data to develop new services” (as it is unclear what the services are or how the data will help develop them).

FaceApp Privacy Policy, under “Information we collect”: “When you visit the Service, we may use cookies and similar technologies… provide features to you.”; “We may ask advertisers or other partners to serve ads or services to your devices, which may use cookies or similar technologies placed by us or the third party.”; “We may also collect similar information from emails sent to our Users…”

FaceApp Privacy Policy, under “Sharing your information”: “We may share User Content and your information with businesses…”; “We also may share your information as well as information from tools like cookies, log files…”; “We may also combine your information with other information…”

A plain reading of these guidelines against FaceApp’s privacy policy shows that the terms used by the latter are ambiguous and vague. Every ‘may’ conceals a ‘may not’, and that uncertainty can have a huge impact on the privacy of the user.

The entire point of ‘transparency’ in a privacy policy is for the user to understand the extent of processing undertaken by the organisation and then choose whether to consent. Vague phrases give no clear indication of the extent to which an individual’s personal data will be processed.

Privacy Domain: Security Safeguards

OECD Guidelines: Personal data should be protected by reasonable security safeguards against such risks as loss or unauthorised access, destruction, use, modification or disclosure of data.

APEC Privacy Framework: Personal information controllers should protect personal information that they hold with appropriate safeguards against risks such as loss or unauthorised access to personal information, or unauthorised destruction, use, modification or disclosure of information or other misuses.

Report of the Group of Experts on Privacy: A data controller shall secure personal information that they have either collected or have in their custody by reasonable security safeguards against loss, unauthorised access, destruction, use, processing, storage, modification, deanonymisation, unauthorised disclosure or other reasonably foreseeable risks.

General Data Protection Regulation: The controller and processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk.

FaceApp Privacy Policy, under “How we store your information”: “We use commercially reasonable safeguards to help keep the information collected through the Service secure and take reasonable steps… However, FaceApp cannot ensure the security of any information you transmit to FaceApp or guarantee that information on the Service may not be accessed, disclosed, altered, or destroyed.”

The obligation to implement reasonable security measures that prevent unauthorised access and misuse of personal data rests on the organisations processing such data. FaceApp’s privacy policy assures users that reasonable security measures, in line with commercially accepted standards, have been implemented. Yet, by waiving liability and stating that it cannot ensure the security of information against being accessed, disclosed, altered or destroyed, FaceApp itself concedes that the policy is flawed.

The privacy concerns and the issue of transparency (or the lack thereof) in FaceApp are not isolated. After all, as a BuzzFeed analysis of the app noted, while there appeared to be no data going back to Russia, this could change at any time given its overly broad privacy policy.

The business model of most mobile applications currently being developed relies heavily on collecting users’ personal data. Users’ awareness of the type of information accessed through the permissions they grant to a mobile application is questionable.

In May 2018, Symantec tested the top 100 free Android and iOS apps with the primary aim of identifying cases where apps request ‘excessive’ access to user information relative to the functions they perform. The study found that 89% of Android apps and 39% of iOS apps request what can be classified as ‘risky’ permissions: permissions through which the app requests data or resources involving the user’s private information, or that could potentially affect the user’s locally stored data or the operation of other apps.

Requesting risky permissions may not on its own be objectionable, provided individuals are given clear and transparent information, in the form of a concise privacy notice, about the processing that takes place once permission is granted. The study concluded that 4% of the Android apps and 3% of the iOS apps seeking risky permissions did not even have a privacy policy.

The lack of clarity around potentially sensitive user data being siphoned off by mobile applications became even more apparent in the case of a Hyderabad-based fintech company that gained access to sensitive user data by embedding a backdoor inside popular apps.

In the case of the Hyderabad-based fintech company, the affected user data included GPS locations, business SMS text messages from e-commerce websites and banks, personal contacts, and more. This data was used to power the company’s self-learning algorithms, which helped organisations determine the creditworthiness of loan applicants. It is pertinent to note that even when apps do have privacy policies, users can still find it difficult to navigate the long, content-heavy documents.

The New York Times, as part of its Privacy Project, analysed the length and readability of the privacy policies of around 150 popular websites and apps. It concluded that the vast majority of the policies analysed exceeded the college reading level. Vague language like “adequate performance” and “legitimate interest”, and the wide interpretation such phrases permit, allows organisations to use data in extensive ways while offering individuals little clarity about the processing activity.

The Data Protection Authorities operating under the General Data Protection Regulation are paying close attention to the openness and transparency of organisations’ processing activities. The French Data Protection Authority fined Google for violating its obligations of transparency and information. The UK Information Commissioner’s Office issued an enforcement notice to a Canadian data analytics firm for failing to provide information to data subjects in a transparent manner.

Thus, in the age of digital transformation, the unwelcome panic caused by FaceApp should be channelled towards a broader discussion of the information paradox that currently exists between individuals and organisations. Organisations need to stop viewing ambiguous and opaque privacy policies as a get-out-of-jail-free card. On the contrary, a clear and concise privacy policy outlining the details of processing activity in simple language can go a long way in gaining consumer trust.

The next time an “AI-based Selfie App” goes viral, let’s take a step back and analyse how it makes use of user-provided data and information, both over and under the hood. If data is the new gold, we are squarely in the midst of a gold rush.