I recently had a chance to submit a series of written questions to Facebook following the company's appearance before the Oireachtas Committee on Communications, Climate Action and the Environment.
Below are the responses provided to those questions.
(1) Facebook's new policy is based on opt-in facial recognition being applied to inform Facebook users of their faces appearing on photos uploaded by other users. Does this mean that Facebook will index all facial profiles on any photo uploaded, regardless of any consent by any person depicted? Please answer yes or no and explain. If yes, how is this not in contradiction with Art 9 of the GDPR on sensitive data where biometric data was included?
No, Facebook does not index all facial profiles – or 'create and maintain a face template', in our terminology – on any photo uploaded regardless of consent by any person depicted. Over the past two months, we rolled out a notice to every Facebook user in Ireland asking if they would like to participate in face recognition features for use on Facebook; we only create a face template for each of these users if he or she explicitly consents.
At that point, we use the person's profile photo and images he or she is tagged in to create a face template for him or her. This template is linked to the user's Facebook user ID. Facebook does not use face recognition to identify a person from an image unless that image matches a face template linked to a Facebook user ID, which would only be created if the person chose to enable face recognition.
(2) More specifically, will FB refrain from analysing any photograph uploaded by any user for biometric data about persons depicted on those photos until it has received an opt-in by every person depicted on those photos? Please answer with yes or no.
When users upload images to share on Facebook, Facebook will analyse images and videos to detect whether they contain human faces. If they do, then Facebook will determine whether any of those faces correspond to a pre-existing face template (which are only created with explicit consent of the user). These initial steps do not involve the analysis or processing of biometric data. Facebook will only identify and process biometric data for those individuals within a photo who have opted-in to our face recognition technology for use on Facebook. Indeed, the identification process will simply fail if a person's face recognition setting is off because we have not created a reference template for that person.
So, although as described above, we do analyse the photograph, we do not analyse biometric data about persons depicted in photographs unless they have explicitly consented to this processing. Indeed, GDPR specifically classifies information as biometric data only if the processing of that data “allow or confirm the unique identification of [a] natural person” (emphasis added). Facebook only uniquely identifies an individual depicted in a photograph using face recognition (i.e., processes biometric data) if we have a face template for that individual (created only in respect of users who have face recognition enabled).
In general, we note that GDPR permits the processing of biometric data under several different legal bases, not only consent (as anticipated in your question), such as to protect the vital interests of the data subject or where necessary for reasons of substantial public interest. If in the future Facebook relied on other legal bases for processing of face recognition data, we would, of course, do so only in a way that fully conformed with our legal obligations.
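The gating logic described in this answer (biometric matching runs only against templates of users who opted in, and identification simply fails for everyone else) can be sketched roughly as follows. All names and data structures are hypothetical illustrations, not Facebook's actual implementation:

```python
# Hypothetical sketch of consent-gated face matching, as described above.
# Names and data structures are illustrative, not Facebook's actual code.

def matches_template(face, template):
    # Placeholder comparison; a real system would compare feature vectors.
    return face == template

def identify_faces(photo_faces, face_templates):
    """Return user IDs for faces that match an opted-in user's template.

    face_templates maps user_id -> template, and an entry exists only
    for users who explicitly enabled face recognition.
    """
    matches = {}
    for face in photo_faces:
        for user_id, template in face_templates.items():
            if matches_template(face, template):  # biometric comparison
                matches[face] = user_id
                break
        # Faces of people with no template simply go unidentified.
    return matches

# Example: only "alice" has opted in, so only her face is identified.
templates = {"alice": "face_A"}
print(identify_faces(["face_A", "face_B"], templates))  # {'face_A': 'alice'}
```

The key property is that the reference data needed for identification exists only for consenting users, so the lookup cannot succeed for anyone else.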
(3) You state the following: “Second, we'll ask people who've previously chosen to share their political, religious and “interested in” information in their profile to check that they want to continue to share it.” Does the above mean that any of the above data will be deleted if Facebook does not receive an explicit consent to retain it?
Yes, if we do not have explicit consent from people to retain the information they chose to add to these profile fields, it will be deleted from their profile fields and from our servers.
As laid out in the question, if people have chosen to share political or religious views, or whom they are “interested in”, on their profile, as part of our GDPR “user engagement experience” we’ll ask if they want to continue sharing this information with others on Facebook and for it to be used to personalise product experiences on Facebook (e.g. to suggest a Facebook page or a Facebook group).
(4) If by “sharing” it is meant that the scope of the discontinuation is limited to sharing with other FB users and/or Facebook affiliates, how does Facebook consider that this complies with the requirements of Art. 9 GDPR for processing these special categories of data?
The discontinuation is not limited to sharing with other Facebook users / affiliates; if people do not wish to consent to Facebook's processing of their special category data, their information is deleted and not further processed by Facebook Ireland.
Our consent flow for use of sensitive personal data (“special categories of data”) complies with Art. 9 of the GDPR. Specifically:
· We comply with Art. 9 by asking individuals for explicit consent to share their political, religious, or “interested in” information in their profile fields and for it to be used to personalise product experiences on Facebook.
· If people decide not to provide explicit consent and to remove the information, the information is deleted.
· If people choose to provide explicit consent to continue to share the information, they also select with whom they want to share it on Facebook, which can be anything from public, to friends, to 'only me'. Their selected audience can be amended at any time, and they can delete or amend the information as they wish.
· Finally, again, this special category profile information is not used for the purpose of serving targeted advertisements. It is, however, used to personalise product experiences on Facebook (e.g. to suggest a Facebook page or a Facebook group), as described in the user experience through which users may give their explicit consent.
(5) Does Facebook track information about Irish users when they are not on the website or app? How and why? Will this change?
This information allows Facebook to deliver the personalised experience that users want when they use our services, and to provide accurate measurement and delivery to our advertisers and other partners. At the same time, we are constantly working to improve transparency and controls in this area, and are doing more to listen to policymakers, regulators, privacy experts and users to understand what controls would be most meaningful to them.
· Facebook receives data about people on other websites and apps when those websites and apps choose to use Facebook's services. These services include:
§ 'Social plugins', such as our Like and Share buttons, which make other websites more social and help people share content on Facebook directly from that website;
§ 'Facebook Login', which lets people use their Facebook account to log into another website or app;
§ 'Facebook Analytics', which helps websites and apps better understand how aggregate groups of people use their services; and,
§ Facebook ads and measurement tools, which enable websites and apps to show ads from Facebook advertisers, to run their own ads on Facebook or elsewhere, and to understand the effectiveness of their ads.
· Facebook also uses this information to help protect the security of Facebook.
§ For example, receiving data about the sites a particular browser has visited can help us identify bad actors. If someone tries to log into your account using an IP address from a different country, we might ask some questions to verify it’s you. Or if a browser has visited hundreds of sites in the last five minutes, that’s a sign the device might be a bot, and we’ll ask the user to complete additional security checks to prove they’re a real person.
· The information we receive also helps us improve the content and ads we show on Facebook.
§ If someone visits a lot of sports sites that use our services, they might see sports-related stories higher up in their News Feed. If someone has visited travel sites, we can show them ads for hotels and rental cars.
· When people visit a site or app that uses our services in the ways described above, we receive information that could include data like a device's IP address, information about the web browser and operating system of the device being used, identifiers such as cookies or advertising identifiers, and the website or app the person is using.
· We require websites and apps that use our tools to tell people they're sharing information with us, and to have a valid legal basis (such as a person’s permission or consent) to do so. We also give people control over how this data is used to provide more relevant ads in ad preferences, including providing them the ability to hide certain ads, opt out from specific advertisers and manage their advertising interests.
· The process we describe above is core to how most internet services work, and helps Facebook to provide the personalised service that people want when they come to Facebook.
· However, we have just announced our plans to build a new tool called 'Clear History'. This feature will enable people to see the websites and apps that send us information when people use them, delete this information from their account, and turn off our ability to store it associated with their account going forward.
· It will take a few months to build Clear History. We’ll work with privacy advocates, academics, policymakers and regulators to get their input on our approach, including how we plan to remove identifying information and the rare cases where we need information for security purposes.
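The security heuristic mentioned above (a browser visiting hundreds of sites within a few minutes suggests a bot) amounts to a simple rate check over recent visit timestamps. The sketch below is illustrative only; the window and threshold values are invented assumptions, not Facebook's actual parameters:

```python
def looks_like_bot(visit_timestamps, window_seconds=300, threshold=100):
    """Flag a browser as suspicious if it visited more than `threshold`
    sites within any `window_seconds` span. Values are illustrative."""
    times = sorted(visit_timestamps)
    start = 0
    for end in range(len(times)):
        # Slide the window start forward so it spans at most window_seconds.
        while times[end] - times[start] > window_seconds:
            start += 1
        if end - start + 1 > threshold:
            return True  # would trigger additional security checks
    return False

# 150 site visits in a couple of minutes is flagged; 10 spread over an hour is not.
print(looks_like_bot(list(range(150))))            # True
print(looks_like_bot([i * 360 for i in range(10)]))  # False
```

A production system would combine many such signals rather than relying on a single counter, but the sliding-window idea captures the example given in the answer.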
(6) The Transparent Referendum Initiative is attempting to crowdsource a non-partisan database of all the political ads Irish Facebook users are exposed to before the referendum on the 8th amendment. But Facebook already has the comprehensive picture of this spend. In the light of the Cambridge Analytica crisis, the concerns over interference with the US presidential election and uncertainty about conduct of the campaigns in relation to Brexit, will you release that data?
We met with TRI early on in the campaign and welcomed their efforts to bring greater transparency to this process. We also opened up a dedicated channel of communication for them and others campaigning on both sides of the debate to surface any issues they identified directly to us.
We agree that, when it comes to advertising on Facebook, people should be able to see all the ads that a Page is running. And, when it comes to political ads in particular, they should be clearly labeled to show who paid for them.
We are working hard to build our new political ads transparency tools and to roll them out, but it takes time to do that and, most importantly, to get them right. The additional tools we are currently building will include an archive of political ads run by Pages, and the amount of money that was spent on each ad. Unfortunately, that is not data that we have readily available in relation to the recent referendum, but we are working hard to deploy the tool for future elections/referenda.
As we informed the Joint Oireachtas Committee in April, we did add Ireland to our pilot program for the first phase of our political ads transparency efforts – the View Ads tool. Since April 25, this has enabled Irish Facebook users to see all of the ads that any Page is currently running on Facebook targeting users in Ireland. We hope this brought greater transparency to the referendum-related ads that Irish people saw on Facebook.
(7) Privacy International created a new Facebook profile to test default settings. By default, everyone can see your friends list and look you up using the phone number you provided. This is not what proactive privacy protections look like. How does this protect users by default under Article 25 of the GDPR?
As part of our recent privacy reviews, we have already suspended the ability to search for people by phone number in the Facebook search bar. Prior to this, we had permitted people to look up phone numbers on Facebook, and also gave them the choice to opt-out of this if they wished to. We have shut down this tool so that look-ups are no longer permitted.
More generally, we develop our approach to settings in a way that is privacy protective and that also recognises that people join Facebook specifically in support of their goal of connecting with others. For example, we believe that knowing whether you have mutual friends with someone is an important signal in deciding whether a friend request — that is, a request to connect with you and access information that you have shared only with friends — that you receive is legitimate. We hope that people will scrutinise friend requests if they note that they do not share any friends with the requester. However, we realise that we cannot share information about mutual friendships unless a person's friend list is publicly visible.
Of course, people can always control the audiences that see the information they choose to share. They can also control the audience for their friend list itself. Specifically, we offer everyone a granular control which they can use to tailor who can see their list of friends, ranging from public, to friends, to a customised audience, or 'only me'. This setting is easily accessible from your Settings and in-line when viewing your own friends list.
We’ve updated our Settings and Privacy Shortcuts so people can easily access their information, understand how it is shared on Facebook, and make the privacy choices that are right for them.
(8) Why do defaults allow for maximum exposure/sharing? Why not have each setting be a choice before profile becomes functional?
The defaults do not allow for maximum sharing. For example, the default audience for new accounts is friends. When joining Facebook, people come to find their friends and have an expectation to be able to find and connect with people they may know. We set the defaults in such a way as to be consistent with these expectations, while still protecting people’s privacy. To meet these expectations we are:
· Asking users to make certain choices up front when they create their account;
· Providing in-context information and optionality around choices directly in the product - for example, as you go to share a post, we may ask you if you're sure you want to make it public or if you'd rather share it with just your friends or a selection of your friends;
· Encouraging people to take our Privacy Check-up, which allows them to review some of their most important privacy settings to make sure they are set the way the person wants;
· Launching our new centralised Settings and Privacy Shortcuts to enable people to easily access, manage, delete and download their information;
· As explained above, we are also launching a “Clear History” control which will enable people to clear their off-Facebook history. Details are still being finalised.
(9) According to your notification, a 'small number of people who logged into This is Your Digital Life also shared their own News Feed, timeline, posts and messages which may have included posts and messages from you'. Why was this data breach not notified to the appropriate national authorities immediately? Are other apps also able to share/receive messages from me?
The people who shared their messages and posts with Kogan’s app expressly consented to do so after a clear notification. Users were not able to provide apps with access to their friends’ News Feed or message inbox. Because access to the News Feed and message inbox only occurred with the authorisation of the user, there was no unauthorised access to data.
With the benefit of hindsight, we wish we had notified people whose information may have been impacted. Facebook has since notified all people potentially impacted with a detailed notice at the top of their News Feed.
(10) If a similar situation to the one involving CA were, despite your efforts, to arise again, who would be responsible - Facebook Inc or Facebook Ireland?
Facebook did not permit or agree to the transfer of Facebook user data obtained through the app by Dr. Kogan onward to Cambridge Analytica. Any onward data sharing that happened was in violation of Facebook's Platform Policy, and the potential misuse of that data by Dr. Kogan and Cambridge Analytica is being investigated on an ongoing basis both by Facebook and responsible data protection authorities. Accordingly, while we have taken steps to make it harder for third parties to misuse Facebook's systems in the future, we consider that Dr. Kogan, Cambridge Analytica, and their related parties are responsible in the first instance for what happened in this case.
With regard to Facebook's own data processing (or processing in respect of which it is the responsible entity under applicable law), Facebook Ireland is the responsible entity for individuals based within Europe, and Facebook, Inc. is the responsible entity for those individuals based outside Europe.
(11) Max Schrems specifically highlighted the issue of third party developer access to friends' data - without their knowledge or consent - in his original complaint in Ireland in 2011. The Irish DPC also focused specifically on this issue and asked that it be addressed, after auditing Facebook in 2011 and 2012. Yet Facebook did not close that access point until 2014 and 2015. If FB had complied with the IDPC audit, CA could not and would not have happened. Why was the company so remiss in addressing this serious problem in a timely manner? And why should the public, the EU or regulators have any confidence the company will take data privacy seriously or proactively when it did not address this major problem for two years, despite direct instruction to do so?
As our Vice President of Global Public Policy, Joel Kaplan, shared in his testimony to the Joint Oireachtas Committee, we agree that we took too long to respond.
In 2014, we announced changes to our platform APIs to limit the data apps could access to build new social experiences: (a) as a general matter, apps could no longer access detailed information about people's friends, and (b) in most cases, the only friend information apps could get was the list of friends who also use the app. This change would prevent an app like Kogan's from being able to access so much data today. At that time we made clear that existing apps would have a year to transition – at which point they would be forced (1) to migrate to the more restricted API and (2) to be subject to Facebook’s new review and approval protocols. A small number of developers asked for and were granted short-term extensions beyond the one-year transition period, the longest of which lasted several months. These extensions ended several years ago. A transition period of this kind is standard when platforms implement significant changes to their technology base and was necessary here to avoid disrupting the experience of millions of people. New apps that launched after April 30, 2014 were required to use our more restrictive platform APIs.
The audit conducted by the Irish Data Protection Commissioner in 2011 and the follow-up in 2012 was very comprehensive and involved code-level reviews of our systems. The DPC issued approximately 60 recommendations that required a response from Facebook and, in many instances, significant modifications to our service.
The DPC specifically requested that we provide additional education and notice to users about options for reporting problematic apps and utilising controls to prevent apps from accessing data. In response, we made the placement of the controls for users (to choose whether they wanted to allow a friend to share their information with an app developer) and reporting tools more prominent, including in a new user experience flow that we built into registration for all users.
We have also recently announced a number of additional steps: https://newsroom.fb.com/news/2018/04/restricting-data-access/
Separately, Facebook has a set of APIs that enable certain partners, primarily operating systems and device manufacturers, to provide people with Facebook-like experiences (e.g., Facebook apps, news feed notifications, address book syncs) in their products. We developed these APIs, which are commonly known as “device-integrated APIs,” in the early days of mobile when the demand for Facebook outpaced our ability to build versions of our product that worked on every phone or operating system. Several dozen companies still used them at the start of the year, including Amazon, Apple, Blackberry, HTC, Microsoft, Huawei, Lenovo and Samsung, among others. On April 23, 2018, we announced that we would wind down these APIs. So far over 30 of these partnerships have been ended, including with Huawei.
These device-integrated APIs are different from the platform APIs that were used by Alexandr Kogan, which went to the heart of the Cambridge Analytica matter. Third party developers using our platform APIs built new, social experiences incorporating information that Facebook users brought with them; by contrast, the very point of our device-integrated APIs was to enable other companies to create Facebook functionality, primarily for devices and operating systems. The experiences that partners built using our device-integrated APIs were reviewed and approved by Facebook, and partners could not integrate the user’s Facebook features without the user’s permission.
(12) Why do privacy settings continue to only focus on what friends can and can't see? If the recent FB scandal has showed one thing, it's that FB's ad policies have far reaching consequences for users' privacy. When are you going to treat ad settings as privacy settings?
While we believe it is important to give people control over who sees what they share, we do treat ads settings as privacy settings. We've recently updated our Settings tool to make it even easier for people to access their privacy settings, including for ads. In fact, the Privacy Shortcuts tool that we recently launched has an entire section devoted to advertising controls.
Among the ads settings we offer is the Ad Preferences tool, which is directly accessible from our new centralised Settings and Privacy Shortcuts, and includes the following controls:
· Manage Preferences
· See and delete advertisers a user interacted with
· Manage whether FB can show ads based on the following profile fields: relationship status, employer, job title, education
· Decide whether FB can show ads based on data from partners
· Decide whether FB can show ads, seen elsewhere, based on the user's activity on Facebook Company Products
· Decide who can see ads that include the user's social actions
· Hide advertisement topics
This tool also includes clear information about how advertising works on Facebook: https://www.facebook.com/ads/about/?entry_product=ad_preferences
These controls can also be accessed via every ad a user sees on the platform.
(13) The GDPR includes new provisions on profiling and automated decision-making. How are you going to change your ad targeting practices to be compliant?
Facebook does not carry out any automated decision-making with legal (or “similarly significant”) effects pursuant to Article 22 of the GDPR. If Facebook were to carry out this type of automated decision-making in the future, it would comply with the requirements of Article 22 of the GDPR. The Article 29 Working Party has clearly established in its guidelines that, “In many typical cases, the decision to present targeted advertising based on profiling will not have a similarly significant effect on individuals, for example an advertisement for a mainstream online fashion outlet based on a simple demographic profile: ‘women in the Brussels region aged between 25 and 35 who are likely to be interested in fashion and certain clothing items’”.
We are transparent in our Data Policy about the type of data we collect, and how it is used, as part of the Facebook service.
(14) The Economist recently reported on how difficult it is for Europeans to download their personal data from FB. Can you confirm that the downloaded files contain all the data that FB holds on each user? MZ's testimony described your systems as more transparent than they actually are. How and when, if at all, do you plan to address these issues?
The Download Your Information (“DYI”) tool is Facebook’s data portability tool. It was first launched in 2010 to let people download many types of information that we maintain about them, with a focus on data that a person may wish to bring with them to another online service, and we have upgraded it in preparation for GDPR. Users now are able to select the data they want to download, as well as the date range, the format of the file (HTML or JSON) and the quality of the file. The data in DYI includes all of the data each user has provided to us, as per the requirements set out under the GDPR.
In preparation for the GDPR we've also introduced a new Access Your Information (“AYI”) tool.
With AYI, people can access and manage (edit or delete) their data. This new tool is, like DYI, structured to show you (1) Your Information and (2) Information about you.
These updated tools make it much easier to access, delete, manage, and download your data. We are continuing our efforts to give people more transparency and control over their data.
As also noted in a previous answer, we have just announced our plans to build a new tool called 'Clear History'. This feature will enable people to see the websites and apps that send us information when people use them, delete this information from their account, and turn off our ability to store it associated with their account going forward.
(15) You claim to offer a single place to control your privacy. This does not seem to include ways to opt out of ad targeting or to avoid being tracked outside Facebook. Will you offer a single place where users can control every privacy aspect of Facebook, even for people who have no FB account? Will you stop creating “shadow profiles” of people who have not registered a Facebook account and never gave their consent to being tracked?
We do offer a single place to manage your privacy settings. Specifically, we updated our Settings for GDPR to make it easier to find privacy and other controls in one place. And we also launched a new Privacy Shortcuts tool that provides centralised access to the most important privacy controls.
As with many ad-supported services, advertising is central not only to our ability to operate Facebook, but to the core service that we provide, so we do not offer the ability to disable advertising altogether. But our privacy options include industry-leading controls over advertising — including the ability to opt out of seeing ads based on certain interests or to turn off ads based on apps and websites you visit off of Facebook.
Facebook does not create profiles for people without a Facebook account. When people visit apps or websites that feature our technologies — like the Facebook Like or Comment button — our servers automatically log (i) standard browser or app records of the fact that a particular device or user visited the website or app (this connection to Facebook's servers occurs automatically when a person visits a website or app that contains our technologies, such as a Like button, and is an inherent function of Internet design); and (ii) any additional information the publisher of the app or website chooses to share with Facebook about the person's activities on that site (such as the fact that a purchase was made on the site). This is a standard feature of the Internet, and most websites and apps share this same information with multiple different third parties whenever people visit their website or app. For example, as our Chief Technology Officer Michael Schroepfer testified during a hearing before the Digital Culture, Media and Sports Committee of the UK Parliament, the Parliament's website www.parliament.uk collects and shares browser and cookie information with six different companies (Google, LinkedIn, Twitter, Hotjar, Pingdom and Facebook), so when a person visits Parliament's website, the person's browser makes a connection to each of those third parties. This blog post has more information about how this works. This information is not used to create profiles on non-Facebook users.
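The connections described above happen because a browser automatically fetches every resource a page embeds, and each fetch carries standard request data. The sketch below illustrates the kind of record the resource's server can log; the domain names, fields and values are all invented for illustration:

```python
# Illustrative record of what a server can log when a browser automatically
# fetches an embedded resource (e.g. a social plugin). Values are invented.
plugin_request_log = {
    "resource": "https://social.example/like-button.js",
    "referer": "https://news.example/article-42",       # the page being visited
    "client_ip": "203.0.113.7",                         # connection-level data
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",    # browser and OS info
    "cookies": {"id": "abc123"},                        # sent only if previously set
}

# The embedding site, not the visitor, chose to include the third-party
# resource; the browser sends this request data for any such fetch.
print(sorted(plugin_request_log))
```

This is why a single page visit can result in several third parties receiving browser and cookie information, as in the www.parliament.uk example above.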
As noted in an earlier response, we also recently announced that we are going to build a 'Clear History' feature which will enable people on Facebook to delete all data we have collected about them based on their activity off-Facebook.
(16) The GDPR gives individuals the right to access and verify their profiles, including marketing profiles based on so called derived data (data that were not disclosed by the user but interpreted from his/her behaviour). Is FB going to give its users full access to their marketing profiles? Please answer with “yes” or “no” and explain.
Yes. In our Ads Preferences tool, people on Facebook can access and manage the ad targeting interests associated with their user account.
(17) Speaking about derived data and marketing profiles, does FB process for marketing purposes any data that reveal (directly or indirectly) political opinions of its users? Please answer with “yes” or “no” and explain.
No. Facebook does not process data about any user's political opinions for marketing purposes in the EU.
If a Facebook user 'likes' the Facebook Page of a political party or political representative, which allows the owner of the Page to see who liked it, Facebook does not take that as an indication of their political opinion, but rather as an interest in the topic represented by the Page. Additionally, we do not give advertisers the personal information (e.g. name and email address) of people who see their advertisements on Facebook, unless the person explicitly directs us to do so.
Also, as previously set out in the answer to question 3, where an individual provides explicit consent to Facebook's sharing and use of their political opinion as published in their profile for product personalisation (e.g. to suggest a Facebook page or a Facebook group), this data will not be used to inform advertising targeted to that user.
(18) Is it the case that Whatsapp are able to access data of people in my contact file and use such data without getting their direct consent from the person involved? Do you have any plans to change such practices?
All major phone manufacturers and operating systems (iOS, Android, etc.) allow people to upload the contacts or address book file on their device to third-party apps, including WhatsApp. These contacts may not be uploaded without the individual’s explicit consent.
When WhatsApp users upload their contacts or address book to WhatsApp, they do so with explicit consent. WhatsApp has implemented additional technical and security measures to restrict use of phone numbers that are not associated with the uploading user. WhatsApp processes those numbers of non-users to allow WhatsApp users to find and contact each other and not for other purposes.
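The restriction described here (numbers of non-users are used only to let existing users find each other) amounts to an intersection lookup against the set of registered numbers. A minimal sketch, with invented phone numbers and function names:

```python
def discover_contacts(uploaded_numbers, registered_numbers):
    """Return only the uploaded numbers that belong to registered users.

    Numbers of non-users fall outside the intersection; in this sketch
    they are not returned or used for any other purpose.
    """
    return sorted(set(uploaded_numbers) & set(registered_numbers))

# Only the second contact is a registered user, so only it is surfaced.
uploaded = ["+353851111111", "+353852222222", "+353853333333"]
registered = {"+353852222222", "+353859999999"}
print(discover_contacts(uploaded, registered))  # ['+353852222222']
```

Real contact-discovery protocols add further technical measures (the answer mentions restrictions on non-user numbers), but the matching step is essentially this set intersection.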
(19) Facebook has voluntary agreements with the Swedish intelligence services to share data. How do you reconcile that with the GDPR?
The Swedish intelligence agency has no special relationship with Facebook, and our response to law enforcement access requests fully complies with applicable laws.
We take seriously our responsibility to maintain the safety and security of our platform, and the privacy of those using it, and are committed to being transparent in the way that we do this. We also recognise that there are many situations where it is in the interests of people using our service that their governments carry out an investigation into suspected criminal activity.
We publish details of content restrictions made pursuant to local law, as well as details of our process for handling these requests, in our Transparency Report.
(20) Is FB sharing or has FB shared user data with any other intelligence service?
As outlined above in response to Question 19, we recognise that there are situations where it is in the interests of people using our service that their governments carry out an investigation into suspected criminal activity. But we also recognise that we must disclose information in connection with those investigations only in limited circumstances and with due attention to the privacy of the individuals involved and in accordance with applicable law.
We publish details of our process for handling these requests in our Transparency Report.
(21) We all know that FISA Section 702 was re-authorised recently and the Irish High Court in its referral to the EU Court of Justice also stated as a factual finding that the mass surveillance by US authorities still takes place. Is FB already preparing for a situation where the EU-US Privacy Shield has been declared invalid? Do you agree that a stable solution would involve changes in US law?
The European Commission recently conducted a review of the Privacy Shield and concluded that it ensures an adequate level of protection for personal data of Europeans transferred to the United States.
It is important to have a strong framework to enable a predictable way for companies in Europe — including many SMBs and start-ups — to transfer data to the US and continue doing business across the Atlantic.
The recent case in the Irish High Court was concerned with model clauses. The purpose of model clauses is to provide additional protections where third countries' protections are not deemed adequate. We believe this is still a valid mechanism. It is relied on by many European businesses.
(22) Is FB aware of the ongoing surveillance by the British Government of all data being transferred between the island of Ireland and the UK? What actions are the company considering taking to secure such data or adapt to regulatory changes that may come from the exit of the UK from the EU?
We are only aware of media reports referencing Edward Snowden.
Facebook has repeatedly refuted the false allegations that it has provided the U.S. government (or any other government) with mass, indiscriminate, or "backdoor" access to its servers. In June 2013, in the midst of early reporting on government intelligence programs, Mark Zuckerberg emphatically denied reports of Facebook's purported involvement in any mass data collection:
I want to respond personally to the outrageous press reports about PRISM:
Facebook is not and has never been part of any program to give the US or any other government direct access to our servers. We have never received a blanket request or court order from any government agency asking for information or metadata in bulk, like the one Verizon reportedly received. And if we did, we would fight it aggressively. We hadn't even heard of PRISM before yesterday.
When governments ask Facebook for data, we review each request carefully to make sure they always follow the correct processes and all applicable laws, and then only provide the information if [it] is required by law. We will continue fighting aggressively to keep your information safe and secure.
We strongly encourage all governments to be much more transparent about all programs aimed at keeping the public safe. It's the only way to protect everyone's civil liberties and create the safe and free society we all want over the long term.
We are aware of numerous attempts by state-sponsored actors to compromise our systems. As a result, we have deployed a sophisticated and constantly evolving security programme. Further detailed information on this is available here: https://www.facebook.com/security/
We are closely following any decisions the EU makes in terms of UK adequacy after Brexit. We will continue to process EU users' data under the legal requirements of the GDPR.