Author: Natasha Lomas

Facebook moves to shrink its legal liabilities under GDPR

Facebook has another change in the works to respond to the European Union’s beefed up data protection framework — and this one looks intended to shrink its legal liabilities under GDPR, and at scale.

Late yesterday Reuters reported on a change incoming to Facebook’s T&Cs that it said will be pushed out next month — meaning all non-EU international users are switched from having their data processed by Facebook Ireland to Facebook USA.

With this shift, Facebook will ensure that the privacy protections afforded by the EU’s incoming General Data Protection Regulation (GDPR) — which applies from May 25 — will not cover the ~1.5BN+ international Facebook users who aren’t EU citizens (but currently have their data processed in the EU, by Facebook Ireland).

The U.S. does not have a data protection framework comparable to GDPR, while the incoming EU framework substantially strengthens penalties for data protection violations. That makes the move a pretty logical one for Facebook’s lawyers thinking about how to shrink the company’s GDPR liabilities.

Reuters says Facebook confirmed the change to it, though the company played down the significance — repeating its claim that it will be making the same privacy “controls and settings” available everywhere. (Though, as has been previously pointed out, this does not mean the same GDPR principles will be applied by Facebook everywhere.)

At the time of writing Facebook had not responded to a request for comment on the change.

Critics have couched the T&Cs shift as regressive — arguing it’s a reduction in the level of privacy protection that would otherwise have applied for international users, thanks to GDPR. Although whether these EU privacy rights would really have been enforceable for non-Europeans is questionable.

According to Reuters the T&Cs shift will affect more than 70 per cent of Facebook’s 2BN+ users. As of December, Facebook had 239M users in the US and Canada; 370M in Europe; and 1.52BN users elsewhere.

It also reports that Microsoft-owned LinkedIn is one of several other multinational companies planning to make the same data processing shift for international users — with LinkedIn’s new terms set to take effect on May 8, moving non-Europeans to contracts with the U.S.-based LinkedIn Corp.

In a statement to Reuters about the change LinkedIn also played it down, saying: “We’ve simply streamlined the contract location to ensure all members understand the LinkedIn entity responsible for their personal data.”

One interesting question is whether these sorts of data processing shifts could encourage regulators in international regions outside the EU to push for a similarly extraterritorial scope for their data protection laws — or face their citizens’ data falling between the jurisdiction cracks via processing arrangements designed to shrink companies’ legal liabilities.

Another interesting question is how Facebook (or any other multinational making the same shift) can be entirely sure it’s not risking violating any of its EU users’ fundamental rights if it accidentally misclassifies an individual as a non-EU international user — and processes their data via Facebook USA.

Keeping data processing processes properly segmented can be difficult. As can definitively identifying a user’s legal jurisdiction based on their location (if that’s even available). So while Facebook’s T&C change here looks intended to shrink its legal liabilities under GDPR, it’s possible the change will open up another front for individuals to pursue strategic litigation in the coming months.

Data experts on Facebook’s GDPR changes: Expect lawsuits

Make no mistake: Fresh battle lines are being drawn in the clash between data-mining tech giants and Internet users over people’s right to control their personal information and protect their privacy.

An update to European Union data protection rules next month — called the General Data Protection Regulation — is the catalyst for this next chapter in the global story of tech vs privacy.

A fairytale ending would remove that ugly ‘vs’ and replace it with an enlightened ‘+’. But there’s no doubt it will be a battle to get there — requiring legal challenges and fresh case law to be set down — as an old guard of dominant tech platforms marshal their extensive resources to try to hold onto the power and wealth gained through years of riding roughshod over data protection law.

Payback is coming though. Balance is being reset. And the implications of not regulating what tech giants can do with people’s data has arguably never been clearer.

The exciting opportunity for startups is to skate to where the puck is going — by thinking beyond exploitative legacy business models that amount to embarrassing black boxes whose CEOs dare not publicly admit what the systems really do — and come up with new ways of operating and monetizing services that don’t rely on selling the lie that people don’t care about privacy.


More than just small print

Right now the EU’s General Data Protection Regulation can take credit for a whole lot of spilt ink as tech industry small print is reworded en masse. Did you just receive a T&C update notification about a company’s digital service? Chances are it’s related to the incoming standard.

The regulation is generally intended to strengthen Internet users’ control over their personal information, as we’ve explained before. But its focus on transparency — making sure people know how and why data will flow if they choose to click ‘I agree’ — combined with supersized fines for major data violations represents something of an existential threat to ad tech processes that rely on pervasive background harvesting of users’ personal data to be siphoned as biofuel for their vast, proprietary microtargeting engines.

This is why Facebook is not going gentle into a data processing goodnight.

Indeed, it’s seizing on GDPR as a PR opportunity — shamelessly stamping its brand on the regulatory changes it lobbied so hard against, including by taking out full page print ads in newspapers…

This is of course another high gloss plank in the company’s PR strategy to try to convince users to trust it — and thus to keep giving it their data. Because — and only because — GDPR gives consumers more opportunity to lock down access to their information and close the shutters against countless prying eyes.

But the pressing question for Facebook — and one that will also test the mettle of the new data protection standard — is whether or not the company is doing enough to comply with the new rules.

One important point re: Facebook and GDPR is that the standard applies globally, i.e. for all Facebook users whose data is processed by its international entity, Facebook Ireland (and thus within the EU); but not necessarily universally — with Facebook users in North America not legally falling under the scope of the regulation.

Users in North America will only benefit if Facebook chooses to apply the same standard everywhere. (And on that point the company has stayed exceedingly fuzzy.)

It has claimed it won’t give US and Canadian users second tier status vs the rest of the world where their privacy is concerned — saying they’re getting the same “settings and controls” — but unless or until US lawmakers spill some ink of their own there’s nothing but an embarrassing PR message to regulate what Facebook chooses to do with Americans’ data. It’s the data protection principles, stupid.

Zuckerberg was asked by US lawmakers last week what kind of regulation he would and wouldn’t like to see laid upon Internet companies — and he made a point of arguing for privacy carve outs to avoid falling behind, of all things, competitors in China.

Which is an incredibly chilling response when you consider how few rights — including human rights — Chinese citizens have. And how data-mining digital technologies are being systematically used to expand Chinese state surveillance and control.

The ugly underlying truth of Facebook’s business is that it also relies on surveillance to function. People’s lives are its product.

That’s why Zuckerberg couldn’t tell US lawmakers to hurry up and draft their own GDPR. He’s the CEO saddled with trying to sell an anti-privacy, anti-transparency position — just as policymakers are waking up to what that really means.


Plus ça change?

Facebook has announced a series of updates to its policies and platform in recent months, which it’s said are coming to all users (albeit in ‘phases’). The problem is that most of what it’s proposing to achieve GDPR compliance is simply not adequate.

Coincidentally many of these changes have been announced amid a major data mishandling scandal for Facebook, in which it’s been revealed that data on up to 87M users was passed to a political consultancy without their knowledge or consent.

It’s this scandal that led Zuckerberg to be perched on a booster cushion in full public view for two days last week, dodging awkward questions from US lawmakers about how his advertising business functions.

He could not tell Congress there wouldn’t be other such data misuse skeletons in the company’s closet. Indeed the company has said it expects it will uncover additional leaks as it conducts a historical audit of apps on its platform that had access to “a large amount of data”. (How large is large, one wonders… )

But whether Facebook’s business having enabled — in just one example — the clandestine psychological profiling of millions of Americans for political campaign purposes ends up being the final, final straw that catalyzes US lawmakers to agree their own version of GDPR is still tbc.

Any new law will certainly take time to formulate and pass. In the meanwhile GDPR is it.

The most substantive GDPR-related change announced by Facebook to date is the shuttering of a feature called Partner Categories — in which it allowed the linking of its own information holdings on people with data held by external brokers, including (for example) information about people’s offline activities.

Evidently finding a way to close down the legal liabilities and/or engineer consent from users to that degree of murky privacy intrusion — involving pools of aggregated personal data gathered by goodness knows who, how, where or when — was a bridge too far for the company’s army of legal and policy staffers.

Other notable changes it has so far made public include consolidating settings onto a single screen vs the confusing nightmare Facebook has historically required users to navigate just to control what’s going on with their data (remember the company got a 2011 FTC sanction for “deceptive” privacy practices); rewording its T&Cs to make it more clear what information it’s collecting for what specific purpose; and — most recently — revealing a new consent review process whereby it will be asking all users (starting with EU users) whether they consent to specific uses of their data (such as processing for facial recognition purposes).

As my TC colleague Josh Constine wrote earlier in a critical post dissecting the flaws of Facebook’s approach to consent review, the company is — at the very least — not complying with the spirit of the GDPR.

Indeed, Facebook appears pathologically incapable of abandoning its long-standing modus operandi of socially engineering consent from users (doubtless fed via its own self-reinforced A/B testing ad expertise). “It feels obviously designed to get users to breeze through it by offering no resistance to continue, but friction if you want to make changes,” was his summary of the process.

But, as we’ve pointed out before, concealment is not consent.

To get into a few specifics, pre-ticked boxes — which is essentially what Facebook is deploying here, with a big blue “accept and continue” button designed to grab your attention as it’s juxtaposed against an anemic “manage data settings” option (which if you even manage to see it and read it sounds like a lot of tedious hard work) — aren’t going to constitute valid consent under GDPR.

Nor is this what ‘privacy by default’ looks like — another staple principle of the regulation. On the contrary, Facebook is pushing people to do the opposite: Give it more of their personal information — and fuzzing why it’s asking by bundling a range of usage intentions.

The company is risking a lot here.

In simple terms, seeking consent from users in a way that’s not fair because it’s manipulative means consent is not being freely given. Under GDPR, it won’t be consent at all. So Facebook appears to be seeing how close to the wind it can fly to test how regulators will respond.

Safe to say, EU lawmakers and NGOs are watching.


“Yes, they will be taken to court”

“Consent should not be regarded as freely given if the data subject has no genuine or free choice or is unable to refuse or withdraw consent without detriment,” runs one key portion of GDPR.

Now compare that with: “People can choose to not be on Facebook if they want” — the paper-thin defense Facebook’s deputy chief privacy officer, Rob Sherman, offered reporters for the lack of an overall opt-out from its targeted advertising.

Data protection experts who TechCrunch spoke to suggest Facebook is failing to comply with, not just the spirit, but the letter of the law here. Some were exceedingly blunt on this point.

“I am less impressed,” said law professor Mireille Hildebrandt discussing how Facebook is railroading users into consenting to its targeted advertising. “It seems they have announced that they will still require consent for targeted advertising and refuse the service if one does not agree. This violates [GDPR] art. 7.4 jo recital 43. So, yes, they will be taken to court.”

“Zuckerberg appears to view the combination of signing up to T&Cs and setting privacy options as ‘consent’,” adds cyber security professor Eerke Boiten. “I doubt this is explicit or granular enough for the personal data processing that FB do. The default settings for the privacy settings certainly do not currently provide for ‘privacy by default’ (GDPR Art 25).

“I also doubt whether FB Custom Audiences work correctly with consent. FB finds out and retains a small bit of personal info through this process (that an email address they know is known to an advertiser), and they aim to shift the data protection legal justification on that to the advertisers. Do they really then not use this info for future profiling?”

That looming tweak to the legal justification of Facebook’s Custom Audiences feature — a product which lets advertisers upload contact lists in a hashed form to find any matches among its own user-base (so those people can be targeted with ads on Facebook’s platform) — also looks problematical.
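
Mechanically, Custom Audiences-style matching is just hashed set intersection. The sketch below is a minimal illustration, not Facebook’s actual pipeline, and the email addresses are invented — but it shows why the platform necessarily learns which uploaded contacts are its own users (the “small bit of personal info” Boiten refers to above):

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    """Lowercase and trim the address, then SHA-256 it,
    so the advertiser uploads a hash rather than the raw email."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

def match_audience(advertiser_emails, platform_hashes):
    """Return the uploaded hashes that match the platform's user base."""
    uploaded = {normalize_and_hash(e) for e in advertiser_emails}
    return uploaded & platform_hashes

# Hypothetical data for illustration only
platform_users = {normalize_and_hash(e) for e in
                  ["alice@example.com", "bob@example.com"]}
upload = ["Alice@Example.com ", "carol@example.com"]

matches = match_audience(upload, platform_users)
# Normalization makes Alice's two spellings hash identically, so the
# platform learns that an advertiser holds Alice's contact details
```

Hashing hides the raw address in transit, but the intersection itself is new personal information the platform did not have before — which is exactly the retention question being raised.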

Here the company seems to be intending to try to claim a change in the legal basis, pushed out via new terms in which it instructs advertisers to agree they are the data controller (and it is merely a data processor). And thereby seek to foist a greater share of the responsibility for obtaining consent to processing user data onto its customers.

However such legal determinations are simply not a matter of contract terms. They are based on the fact of who is making decisions about how data is processed. And in this case — as other experts have pointed out — Facebook would be classed as a joint controller with any advertisers that upload personal data. The company can’t use a T&Cs change to opt out of that.

Wishful thinking is not a reliable approach to legal compliance.


Fear and manipulation of highly sensitive data

Over many years of privacy-hostile operation, Facebook has shown it has a major appetite for even very sensitive data. And GDPR does not appear to have blunted that.

Let’s not forget, facial recognition was a platform feature that got turned off in the EU, thanks to regulatory intervention. Yet here Facebook is now trying to use GDPR as a route to process this sensitive biometric data for international users after all — by pushing individual users to consent to it by dangling a few ‘feature perks’ at the moment of consent.

Veteran data protection and privacy consultant, Pat Walshe, is unimpressed.

“The sensitive data tool appears to be another data grab,” he tells us, reviewing Facebook’s latest clutch of ‘GDPR changes’. “Note the subtlety. It merges ‘control of sharing’ such data with FB’s use of the data “to personalise features and products”. From the info available that isn’t sufficient to amount to consent for such sensitive data and nor is it clear folks can understand the broader implications of agreeing.

“Does it mean ads will appear in Instagram? WhatsApp etc? The default is also set to ‘accept’ rather than ‘review and consider’. This is really sensitive data we’re talking about.”

“The face recognition suggestions are woeful,” he continues. “The second image is using an example… to manipulate and stoke fear: ‘we can’t protect you’.

“Also, the choices and defaults are not compatible with [GDPR] Article 25 on data protection by design and default nor Recital 32… If I say no to facial recognition it’s unclear if other users can continue to tag me.”

Of course it goes without saying that Facebook users will keep uploading group photos, not just selfies. What’s less clear is whether Facebook will be processing the faces of other people in those shots who have not given (and/or never even had the opportunity to give) consent to its facial recognition feature.

People who might not even be users of its product.

But if it does that it will be breaking the law. Yet Facebook does indeed profile non-users — despite Zuckerberg’s claims to Congress not to know about its shadow profiles. So the risk is clear.

It can’t give non-users “settings and controls” not to have their data processed. So it’s already compromised their privacy — because it never gained consent in the first place.

New Mexico Representative Ben Lujan made this point to Zuckerberg’s face last week and ended the exchange with a call to action: “So you’re directing people that don’t even have a Facebook page to sign up for a Facebook page to access their data… We’ve got to change that.”

(Photo caption: Mark Zuckerberg prepares to testify before the House Energy and Commerce Committee on Capitol Hill, April 11, 2018, his second day of testimony before Congress. Photo by Chip Somodevilla/Getty Images)

But nothing in the measures Facebook has revealed so far, as its ‘compliance response’ to GDPR, suggest it intends to pro-actively change that.

Walshe also critically flags how — again, at the point of consent — Facebook’s review process deploys examples of the social aspects of its platform (such as how it can use people’s information to “suggest groups or other features or products”) as a tactic for manipulating people to agree to share religious affiliation data, for example.

“The social aspect is not separate to but bound up in advertising,” he notes, adding that the language also suggests Facebook uses the data.

Again, this smells a whole lot more like manipulation than GDPR compliance.

“I don’t believe FB has done enough,” adds Walshe, giving a view on Facebook’s GDPR preparedness ahead of the May 25 deadline for the framework’s application — as Zuckerberg’s Congress briefing notes suggested the company itself believes it has. (Or maybe it just didn’t want to admit to Congress that U.S. Facebook users will get lower privacy standards vs users elsewhere.)

“In fact I know they have not done enough. Their business model is skewed against privacy — privacy gets in the way of advertising and so profit. That’s why Facebook has variously suggested people may have to pay if they want an ad free model & so ‘pay for privacy’.”

“On transparency, there is a long way to go,” adds Boiten. “Friend suggestions, profiling for advertising, use of data gathered from like buttons and web pixels (also completely missing from “all your Facebook data”), and the newsfeed algorithm itself are completely opaque.”

“What matters most is whether FB’s processing decisions will be GDPR compliant, not what exact controls are given to FB members,” he concludes.

US lawmakers also pumped Zuckerberg on how much of the information his company harvests on people who have a Facebook account is revealed to them when they ask for it — via its ‘Download your data’ tool.

His answers on this appeared to intentionally misconstrue what was being asked — presumably in a bid to mask the ugly reality of the true scope and depth of the surveillance apparatus he commands. (Sometimes with a few special ‘CEO privacy privileges’ thrown in — like being able to selectively retract just his own historical Facebook messages from conversations, ahead of bringing the feature to anyone else.)

‘Download your Data’ is clearly partial and self-serving — and thus it also looks very far from being GDPR compliant.


Not even half the story

Facebook is not even complying with the spirit of current EU data protection law on data downloads. Subject Access Requests give individuals the right to request not just the information they have voluntarily uploaded to a service, but also personal data the company holds about them, including a description of the personal data; the reasons it is being processed; and whether it will be given to any other organizations or people.

Facebook not only does not include people’s browsing history in the info it provides when you ask to download your data — which, incidentally, its own cookies policy confirms it tracks (via things like social plug-ins and tracking pixels on millions of popular websites) — it also does not include a complete list of advertisers on its platform that have your information.
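
To see why that browsing history exists at all: a tracking pixel is just a tiny image the browser fetches from the tracker’s domain, and the request itself leaks the visited page while the browser attaches the tracker’s identifying cookie. A minimal sketch, with an invented endpoint and invented identifiers — not Facebook’s actual pixel implementation:

```python
from urllib.parse import urlencode, urlparse, parse_qs

TRACKER = "https://tracker.example.com/px"  # hypothetical tracker endpoint

def pixel_url(page_url: str, event: str = "PageView") -> str:
    """URL a publisher embeds as a 1x1 <img>; the query string carries
    the visited page, and the browser sends the tracker's cookie along."""
    return f"{TRACKER}?{urlencode({'ev': event, 'dl': page_url})}"

def log_hit(request_url: str, cookie_user_id: str) -> dict:
    """What the tracker can record from a single pixel request."""
    qs = parse_qs(urlparse(request_url).query)
    return {"user": cookie_user_id,
            "event": qs["ev"][0],
            "page": qs["dl"][0]}

hit = log_hit(pixel_url("https://news.example.org/article-42"),
              cookie_user_id="u123")
# One request suffices to tie a known user to a specific off-platform page
```

Multiply that by millions of sites carrying the pixel and the off-platform browsing profile assembles itself — which is exactly the data absent from the download tool.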

Instead, after a wait, it serves up an eight-week snapshot. But even this two-month view can still stretch to hundreds of advertisers per individual.

If Facebook gave users a comprehensive list of advertisers’ access to their information the number of third party companies would clearly stretch into the thousands. (In some cases thousands might even be a conservative estimate.)

There’s plenty of other information harvested from users that Facebook also intentionally fails to divulge via ‘Download your data’. And — to be clear — this isn’t a new problem either. The company has a very long history of blocking this type of request.

In the EU it currently invokes an exception in Irish law to circumvent fuller compliance — which, even setting GDPR aside, raises some interesting competition law questions, as Paul-Olivier Dehaye told the UK parliament last month.

“‘All your Facebook data’ isn’t a complete solution,” agrees Boiten. “It misses the info Facebook uses for auto-completing searches; it misses much of the information they use for suggesting friends; and I find it hard to believe that it contains the full profiling information.”

“‘Ads Topics’ looks rather random and undigested, and doesn’t include the clear categories available to advertisers,” he further notes.

Facebook wouldn’t comment publicly about this when we asked. But it maintains its approach towards data downloads is GDPR compliant — and says it has reviewed what it offers with regulators to get feedback.

Earlier this week it also put out a wordy blog post attempting to defuse this line of attack by pointing the finger of blame at the rest of the tech industry — saying, essentially, that a whole bunch of other tech giants are at it too.

Which is not much of a moral defense even if the company believes its lawyers can sway judges with it. (Ultimately I wouldn’t fancy its chances; the EU’s top court has a robust record of defending fundamental rights.)


Think of the children…

What its blog post didn’t say — yet again — was anything about how all the non-users it nonetheless tracks around the web are able to have any kind of control over its surveillance of them.

And remember, some Facebook non-users will be children.

So yes, Facebook is inevitably tracking kids’ data without parental consent. Under GDPR that’s a major no-no.

TC’s Constine had a scathing assessment of even the on-platform system that Facebook has devised in response to GDPR’s requirements on parental consent for processing the data of users who are between the ages of 13 and 15.

“Users merely select one of their Facebook friends or enter an email address, and that person is asked to give consent for their ‘child’ to share sensitive info,” he observed. “But Facebook blindly trusts that they’ve actually selected their parent or guardian… [Facebook’s] Sherman says Facebook is “not seeking to collect additional information” to verify parental consent, so it seems Facebook is happy to let teens easily bypass the checkup.”

So again, the company is being shown doing the minimum possible — in what might be construed as a cynical attempt to check another compliance box and carry on its data-sucking business as usual.

Given that intransigence it really will be up to the courts to bring the enforcement stick. Change, as ever, is a process — and hard won.

Hildebrandt is at least hopeful that a genuine reworking of Internet business models is on the way, though — albeit not overnight. And not without a fight.

“In the coming years the landscape of all this silly microtargeting will change, business models will be reinvented and this may benefit both the advertisers, consumers and citizens,” she tells us. “It will hopefully stave off the current market failure and the uprooting of democratic processes… Though nobody can predict the future, it will require hard work.”

Telegram plays down Russian block — claiming no “significant” impact yet

A court in Russia ordered a block of messaging app Telegram this week but founder Pavel Durov has shrugged off the impact of the ban 24 hours in — claiming the app hasn’t seen “a significant drop in user engagement so far”.

Russia began (trying to) block Telegram yesterday, following a court ruling in Moscow earlier this week. The state communication watchdog had filed a lawsuit to limit access to the service after Telegram refused to hand over encryption keys — and the court granted the block.

In an update posted to his Telegram channel, Durov writes: “For the last 24 hours Telegram has been under a ban by internet providers in Russia. The reason is our refusal to provide encryption keys to Russian security agencies. For us, this was an easy decision. We promised our users 100% privacy and would rather cease to exist than violate this promise.

“Despite the ban, we haven’t seen a significant drop in user engagement so far, since Russians tend to bypass the ban with VPNs and proxies. We also have been relying on third-party cloud services to remain partly available for our users there.”

Durov goes on to thank Telegram users in Russia for their support — saying the country accounts for about 7% of the app’s user base. (Last month Telegram announced passing 200M monthly active users, which suggests it has about 14M users in Russia.)

He also name-checks four U.S. tech giants — Apple, Google, Amazon and Microsoft — for, as he puts it, “not taking part in political censorship”.

Telegram moved some of its infrastructure to third-party cloud services to try to make it harder for authorities to block access to its app. But the Russian state responded by blocking millions of IP addresses belonging to Amazon Web Services and Google Cloud, apparently causing collateral damage to swathes of other digitally delivered services. (Even reportedly to some credit card terminals.)
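
The collateral damage follows directly from how the block was reportedly implemented: blacklisting entire cloud CIDR ranges rather than individual hosts, so any service sharing an address range with Telegram’s proxies becomes unreachable too. A minimal sketch — the CIDR ranges below are invented for illustration, not the actual Russian blocklist:

```python
import ipaddress

# Illustrative cloud-provider-style ranges only (not real blocklist entries)
BLOCKED = [ipaddress.ip_network("52.58.0.0/15"),
           ipaddress.ip_network("35.192.0.0/14")]

def is_blocked(ip: str) -> bool:
    """True if the address falls anywhere inside a blocked range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKED)

# Every host in a blocked range goes dark, whether or not it serves Telegram
collateral = [ip for ip in ("52.59.1.2", "35.195.0.9", "8.8.8.8")
              if is_blocked(ip)]
```

Range-level blocking is cheap for the censor but indiscriminate by construction — which is why swathes of unrelated services reportedly broke.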

So how long those other tech companies stand firm remains to be seen.

In some cases direct political pressure — not just the collateral damage of service disruption — appears to be being brought to bear on them by the Russian state.

According to the Interfax news agency (via Reuters) the Russian telecoms agency has informed Amazon and Google that a “significant” number of their IP addresses are being blocked on the basis of the court ruling to block Telegram.

We’ve reached out to both to ask whether they will continue to host Telegram on their cloud services.

Reuters also reports that Russia’s state telecommunications regulator has asked Google and Apple to remove the Telegram messenger service from their app stores, citing the Interfax news agency.

We’ve asked Apple how it intends to respond too. Last year the company bowed to state pressure in China and removed major VPN apps from its App Store, saying it was complying with a local regulation that requires VPN apps to be licensed by the government.

Russian authorities have claimed they need access to Telegram’s encryption for counter-terrorism purposes. However opponents of Vladimir Putin’s regime argue the Russian president uses claims of combating terrorism as an instrument to consolidate his own undemocratic grip on power.

Durov concludes his update about the block saying he intends to give out “millions” of dollars’ worth of Bitcoin grants this year — to “individuals and companies who run socks5 proxies and VPN” — and thus who help to bolster the resilience of Internet infrastructure against state attempts to control access.

“I am happy to donate millions of dollars this year to this cause, and hope that other people will follow. I called this Digital Resistance — a decentralized movement standing for digital freedoms and progress globally,” he adds.

The company is in the midst of a billion dollar ICO, raising money via a token sale to develop a cryptocurrency and blockchain platform of its own.

Cambridge Analytica’s ex-CEO backs out of giving evidence to UK parliament

Alexander Nix, the former CEO of the political consultancy firm at the center of a storm about mishandled Facebook users’ data, has backed out of re-appearing in front of the UK parliament for a second time.

Nix had been scheduled to take questions from the DCMS committee that’s probing online misinformation tomorrow afternoon.

In a press notice today, the committee said: “The former CEO of Cambridge Analytica, Alexander Nix, is now refusing to appear before the Digital, Culture, Media and Sport Committee at a public session tomorrow, Wednesday 18th April, at 2.15pm. He cites the Information Commissioner’s Office’s ongoing investigation as a reason not to appear.”

Nix has already given evidence to the committee — in February — but last month it recalled him, saying it has fresh questions for him in light of revelations that millions of Facebook users had their data passed to CA in violation of Facebook’s policies.

It has also said it’s keen to press him on some of his previous answers, as a result of evidence it has heard since — including detailed testimony from CA whistleblower Chris Wylie late last month.

In a statement today about Nix’s refusal to appear, committee chair Damian Collins said it might issue a formal summons.

“We do not accept Mr Nix’s reason for not appearing in a public session before the Committee. We have taken advice and he has not been charged with any criminal offence and there are no active legal proceedings, and we plan to raise this with the Information Commissioner when we meet her this week. There is therefore no legal reason why Mr Nix cannot appear,” he said.

“The Committee is minded to issue a formal summons for him to appear on a named day in the very near future. We’ll make a further statement about this next week.”

When Nix attended the hearing on February 27 he claimed Cambridge Analytica does not “work with Facebook data”, also telling the committee: “We do not have Facebook data”, though he said the company uses the social media platform to advertise, and also “as a means to gather data”, adding: “We roll out surveys on Facebook that the public can engage with if they elect to.”

Since then Facebook has said information on as many as 87 million users of its platform could have been passed to CA, via a quiz app that was able to exploit its friends API to pull data on Facebook users’ friends.

The Facebook CEO, Mark Zuckerberg, has also been asked to give evidence to the committee — but has declined repeat requests to appear.

Today the committee heard from a former CA director, Brittany Kaiser, who suggested CA had in fact been able to obtain information on far more than 87M Facebook users — by the use of a series of additional quiz apps designed to be deployed on Facebook’s platform.

She claimed viral tactics were used to harvest Facebookers’ data, naming two additional survey apps it had deployed on Facebook’s platform as a ‘sex compass’ app and a music quiz app claiming to determine your personality. She said she believed the point of the quizzes was to harvest Facebook user data.

Facebook finally suspended Cambridge Analytica from its platform last month — although the company has admitted it had been aware of allegations linking the firm to a quiz app that harvested Facebook users’ data since at least December 2015, when the Guardian published its first article on the story.

Last month the UK’s data protection agency obtained a warrant to enter and search the offices of Cambridge Analytica — as part of an ongoing investigation into the use of data analytics for political purposes which it kicked off in May 2017.

The information commissioner said the warrant had been necessary as CA failed to meet an earlier deadline to hand over information that it had requested.

Meanwhile Nix himself was suspended as CEO by CA last month, after a Channel 4 News investigation broadcast video footage of Nix talking to an undercover reporter and appearing to suggest the firm uses a range of dubious tactics, including front companies and subcontractors, to secretly engage in political campaigns.

In a statement at the time, CA said the secretly recorded comments — and “other allegations” — “do not represent the values or operations of the firm and his suspension reflects the seriousness with which we view this violation”.

It’s since been reported that Julian Wheatland, the chair of the company’s UK counterpart, SCL Group, will be taking over as CA CEO — though this has not yet been publicly confirmed. The company has said that the acting CEO, Dr Alexander Taylor, who took over from Nix last month, has returned to his former role as chief data officer.

CA also used ‘sex compass’ and other quiz apps for sucking Facebook data, says former employee

Brittany Kaiser, a former Cambridge Analytica employee who left the company in January and is today giving evidence to a UK parliament committee that’s investigating online misinformation, has suggested that data on far more Facebook users may have found its way into the consultancy’s hands than the up to 87M people Facebook has so far said had personal data compromised as a result of a personality quiz app — developed by an academic working with CA — which ran on its platform.

Another former CA employee, Chris Wylie, previously told the committee the company worked with professor Aleksandr Kogan to gather Facebook users’ data — via his thisisyourdigitallife quiz app — because Kogan had agreed to work on gathering and processing the data first, instead of negotiating commercial terms up front.

CA’s intent was to use Facebookers’ data for political microtargeting, according to evidence provided by Wylie.

In her written evidence to the committee Kaiser claims:

I should emphasise that the Kogan/GSR datasets and questionnaires were not the only Facebook-connected questionnaires and datasets which Cambridge Analytica used. I am aware in a general sense of a wide range of surveys which were done by CA or its partners, usually with a Facebook login – for example, the “sex compass” quiz. I do not know the specifics of these surveys or how the data was acquired or processed. But I believe it is almost certain that the number of Facebook users whose data was compromised through routes similar to that used by Kogan is much greater than 87 million; and that both Cambridge Analytica and other unconnected companies and campaigns were involved in these activities.

Asked to expand on this point during today’s hearing, Kaiser said Cambridge Analytica’s internal creative, psychology and data science teams worked together to design questionnaires for deploying on Facebook’s platform.

“I am aware now of what the questionnaire was that professor Kogan used, although I didn’t know about it when I joined the company. But I would see questionnaires — for example there was one called the ‘Sex Compass’ to find out what your personal preferences were privately. And then there was another one on your ‘Music Personality’,” she told the committee.

“In my pitches I used to give examples even to clients that if you go on Facebook and you see these viral personality quizzes — not all of them would have been designed by Cambridge Analytica/SCL Group or our affiliates but that these applications were designed specifically to harvest data from individuals, using Facebook as the tool.”

“I know at least of those two examples,” she added. “Therefore it can be inferred or implied that there were many additional individuals as opposed to just the ones, through Aleksandr Kogan’s test, whose data may have been compromised.”

Committee chair Damian Collins asked Kaiser whether the viral app approach to harvesting Facebook data, which CA had developed for work in the U.S., would have been used by the company in other markets too.

“That was the idea — although in Europe it’s quite difficult because of the data protection laws,” responded Kaiser.

“Well if you observe them,” quipped Collins.

“Correct,” said Kaiser.

A little later another committee member returned to the topic, asking Kaiser to confirm whether these survey apps would definitely have been able to pass on users’ Facebook data if those users provided their Facebook login details at the end of the survey process — noting that the committee had been told by former CA CEO Alexander Nix that Facebook users’ personal data may not have been accessed by CA via the surveys.

“What you’re saying is that that was not the case — that actually the purpose of the survey was to gather [Facebook] information and by completing it with your Facebook login as well then CA would also get access to your data on Facebook too?”

“I believe that was the point of the quizzes in the first place, yes,” responded Kaiser.

Since the data misuse scandal blew up last month, Facebook has said it is conducting a full audit of any apps which had access to “a large amount” of information before it changed app permissions on its platform in mid 2015 to prevent developers from being able to suck out data on Facebook users’ friends.

We’ve reached out to Facebook to ask whether it can provide an estimate on the total number of users’ data that could have also been compromised by additional quiz apps running on its platform — and will update this story with any response.

Earlier this month the company confirmed that as many as 87 million Facebook users could have had information passed to Cambridge Analytica as a result of just Kogan’s app — which was downloaded around 270,000 times.

CEO Mark Zuckerberg has said the full audit process of third party apps with access to lots of user data will take some time.

Also earlier this month the company revealed that another historical feature — intended to be used for search and account recovery — had been systematically exploited by “malicious actors” to scrape public information from Facebook users’ profiles.

It warned that “most” Facebook users will have had their public info scraped by unknown entities as a result of this security loophole. The company’s platform has more than 2BN active users now, meaning that between 1BN and 2BN people will have had some of their Facebook information taken without their consent.

How to save your privacy from the Internet’s clutches

Another week, another massive privacy scandal. When it’s not Facebook admitting it allowed data on as many as 87 million users to be sucked out by a developer on its platform who sold it to a political consultancy working for the Trump campaign, or dating app Grindr ‘fessing up to sharing its users’ HIV status with third party A/B testers, some other ugly facet of the tech industry’s love affair with tracking everything its users do slides into view.

Suddenly, Android users discover to their horror that Google’s mobile platform tells the company where they are all the time — thanks to baked-in location tracking bundled with Google services like Maps and Photos. Or Amazon Echo users realize Jeff Bezos’ ecommerce empire has amassed audio recordings of every single interaction they’ve had with their cute little smart speaker.

The problem, as ever with the tech industry’s teeny-weeny greyscaled legalese, is that the people it refers to as “users” aren’t genuinely consenting to having their information sucked into the cloud for goodness knows what. Because they haven’t been given a clear picture of what agreeing to share their data will really mean.

Instead one or two select features, with a mote of user benefit, tend to be presented at the point of sign up — to socially engineer ‘consent’. Then the company can walk away with a de facto license to perpetually harvest that person’s data by claiming that a consent box was once ticked.

A great example of that is Facebook’s Nearby Friends. The feature lets you share your position with your friends so — and here’s that shiny promise — you can more easily hang out with them. But do you know anyone who is actively using this feature? Yet millions of people started sharing their exact location with Facebook for a feature that’s now buried and mostly unused. Meanwhile Facebook is actively using your location to track your offline habits so it can make money targeting you with adverts.

Terms & Conditions are the biggest lie in the tech industry, as we’ve written before. (And more recently: It was not consent, it was concealment.)

Senator Kennedy of Louisiana also made the point succinctly to Facebook founder Mark Zuckerberg this week, telling him to his face: “Your user agreement sucks.” We couldn’t agree more.

Happily, disingenuous T&Cs are on borrowed time — at least for European tech users, thanks to a new European Union data protection framework that will come into force next month. The GDPR tightens consent requirements — mandating that clear and accurate information be provided to users at the point of sign up. Data collection is also more tightly tied to specific functions.

From next month, holding onto personal data without a very good reason to do so will be far more risky — because GDPR is also backed up with a regime of supersized fines that are intended to make privacy rules much harder to ignore.

Of course U.S. tech users can’t bank on benefiting from European privacy regulations. And while there are now growing calls in the country for legislation to protect people’s data — in a bid to head off the next democracy-denting Cambridge Analytica scandal, at very least — any such process will take a lot of political will.

It certainly will not happen overnight. And you can expect tech giants to fight tooth and nail against laws being drafted and passed — as indeed Facebook, Google and others lobbied fiercely to try to get GDPR watered down.

Facebook has already revealed it will not be universally applying the European regulation — which means people in North America are likely to get a lower degree of privacy than Facebook users everywhere else in the world. Which doesn’t exactly sound fair.

When it comes to privacy, some of you may think you have nothing to hide. But that’s a straw man. It’s especially hard to defend this line of thinking now that big tech companies have attracted so much soft power they can influence elections, inflame conflicts and divide people in general. It’s time to think about the bigger impact of technology on the fabric of society, and not just your personal case.

Shifting the balance

So what can Internet users do right now to stop tech giants, advertisers and unknown entities tracking everything you do online — and trying to join the dots of your digital activity to paint a picture of who they think you are? At least, everything short of moving to Europe, where privacy is a fundamental right.

There are some practical steps you can take to limit day-to-day online privacy risks by reducing third party access to your information and shielding more of your digital activity from prying eyes.

Not all these measures are appropriate for every person. It’s up to you to determine how much effort you want (or need) to put in to shield your privacy.

You may be happy to share a certain amount of personal data in exchange for access to a certain service, for example. But even then it’s unlikely that the full trade-off has been made clear to you. So it’s worth asking yourself if you’re really getting a good deal.

Once people’s eyes are opened to the fine-grained detail and depth of personal information being harvested, even some very seasoned tech users have reacted with shock — saying they had no idea, for example, that Facebook Messenger was continuously uploading their phone book and logging their calls and SMS metadata.

This is one of the reasons why the U.K.’s information commissioner has been calling for increased transparency about how and why data flows. Because for far too long tech savvy entities have been able to apply privacy hostile actions in the dark. And it hasn’t really been possible for the average person to know what’s being done with their information. Or even what data they are giving up when they click ‘I agree’.

Why does an A/B testing firm need to know a person’s HIV status? Why does a social network app need continuous access to your call history? Why should an ad giant be able to continuously pin your movements on a map?

Are you really getting so much value from an app that you’re happy for the company behind it and anyone else they partner with to know everywhere you go, everyone you talk to, the stuff you like and look at — even to have a pretty good idea what you’re thinking?

Every data misuse scandal shines a bit more light on some very murky practices — which will hopefully generate momentum for rule changes to disinfect data handling processes and strengthen individuals’ privacy by spotlighting trade-offs that have zero justification.

With some effort — and good online security practices (which we’re taking as a given for the purposes of this article, but one quick tip: Enable 2FA everywhere you can) — you can also make it harder for the web’s lurking watchers to dine out on your data.

Just don’t expect the lengths you have to go to protect your privacy to feel fair or just — the horrible truth is this fight sucks.

But whatever you do, don’t give up.

How to hide on the internet

Action: Tape over all your webcams
Who is this for: Everyone — even Mark Zuckerberg!
How difficult is it: Easy peasy lemon squeezy
Tell me more: You can get fancy removable stickers for this purpose (noyb has some nice ones). Or you can go DIY and use a bit of masking tape — on your laptop, your smartphone, even your smart TV… If your job requires you to be on camera, such as for some conference calls, and you want to look a bit more pro you can buy a webcam cover. Sadly locking down privacy is rarely this easy.

Action: Install HTTPS Everywhere
Who is this for: Everyone — seriously do it
How difficult is it: Mild effort
Tell me more: Many websites offer encryption. With HTTPS, people running the network between your device and the server hosting the website you’re browsing can’t see your requests and your internet traffic. But some websites still load unencrypted pages by default (HTTP), which is a security risk. The EFF has developed a browser extension that makes sure that you access all websites that offer HTTPS using… HTTPS.
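The basic rewrite the extension performs can be sketched in a few lines. This is only an illustration of the idea, not the extension’s actual logic: the real HTTPS Everywhere consults per-site rulesets rather than blindly upgrading every URL.

```python
from urllib.parse import urlsplit, urlunsplit

def upgrade_to_https(url: str) -> str:
    """Rewrite an http:// URL to https://, leaving other schemes alone."""
    parts = urlsplit(url)
    if parts.scheme == "http":
        # The real extension only upgrades hosts known to serve the
        # same content over HTTPS; this sketch upgrades everything.
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)

print(upgrade_to_https("http://example.com/page"))  # https://example.com/page
```

The point is simply that the request never leaves your browser over the unencrypted scheme when an encrypted one is available.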

Action: Use tracker blockers
Who is this for: Everyone — except people who like being ad-stalked online
How difficult is it: Mild effort
Tell me more: ‘Trackers’ refers to a whole category of privacy-hostile technologies designed to follow and record what web users are doing as they move from site to site, and even across different devices. Trackers come in a range of forms these days, and some of the more sophisticated tracking methods are definitely harder to thwart than others. But to combat trackers being deployed on popular websites — which are probably also making the pages slower to load than they otherwise would be — there’s now a range of decent, user-friendly tracker blockers to choose from. Pro-privacy search engine DuckDuckGo recently added a tracker blocker to its browser extensions, for example. Firefox also has a built-in tracker blocker, which is now enabled by default in its mobile apps. If you’re curious and want to see the list of trackers running on popular websites, you can also install Kimetrak to get a sense of how widespread the issue is.

Action: Use an ad blocker
Who is this for: Everyone who can support the moral burden
How difficult is it: Fairly easy these days but you might be locked out of the content on some news websites as a result
Tell me more: If you’ve tried using a tracker blocker, you may have noticed that many ads have been blocked in the process. That’s because most ads load from third-party servers that track you across multiple sites. So if you want to go one step further and block all ads, you should install an ad blocker. Some browsers like Opera come with an ad blocker. Otherwise, we recommend uBlock Origin on macOS, Windows, Linux and Android. 1Blocker is a solid option on iOS.
But let’s be honest: TechCrunch makes some money with online ads. If 100% of web users installed an ad blocker, many websites you know and love would simply go bankrupt. While your individual choice won’t have a material impact on the bottom line, consider whitelisting the sites you like. And if you’re angry at how many trackers your favorite news site is running, try emailing them to ask (politely) if they can at least reduce the number of trackers they use.

Action: Make a private search engine your default
Who is this for: Most people
How difficult is it: A bit of effort because your search results might become slightly less relevant
Tell me more: Google probably knows more about you than even Facebook does, thanks to the things you tell it when you type queries into its search engine. Though that’s just the tip of how it tracks you — if you use Android it will keep running tabs on everywhere you go unless you opt out of location services. It also has its tracking infrastructure embedded on three-quarters of the top million websites. So chances are it’s following what you’re browsing online — unless you also take steps to lock down your browsing (see below).
But one major way to limit what Google knows about you is to switch to using an alternative search engine when you need to look something up on the Internet. This isn’t as hard as it used to be as there are some pretty decent alternatives now — such as DuckDuckGo, which Apple will let you set as the default search engine on iOS — or Qwant for French-speaking users. German users can check out Cliqz. You will also need to remember to be careful about any voice assistants you use as they often default to using Google to look stuff up on the web.

Action: Use private browser sessions
Who is this for: Most people
How difficult is it: Not at all if you understand what a private session is
Tell me more: All browsers on desktop and mobile now let you open a private window. While this can be a powerful tool, it is often misunderstood. By default, private sessions don’t make you more invisible — you’ll get tracked from one tab to another. But private sessions let you start with a clean slate. Every time you close your private session, all your cookies are erased. It’s like you disappear from everyone’s radar. You can then reopen another private session and pretend that nobody knows who you are. That’s why using a private session for weeks or months doesn’t do much, but short private sessions can be helpful.

Action: Use multiple browsers and/or browser containers
Who is this for: People who don’t want to stop using social media entirely
How difficult is it: Some effort to not get in a muddle
Tell me more: Using different browsers for different online activities can be a good way of separating portions of your browsing activity. You could, for example, use one browser on your desktop computer for your online banking, and a different browser for your social networking or ecommerce activity. Taking this approach further, you could use different mobile devices when you want to access different apps. The point of dividing your browsing across different browsers/devices is to try to make it harder to link all your online activity to you. That said, lots of adtech effort has been put into developing cross-device tracking techniques — so it’s not clear that fragmenting your browsing sessions will successfully beat all the trackers.
In a similar vein, in 2016 Mozilla added a feature to its Firefox browser that’s intended to help web users segregate online identities within the same browser — a containers system, now available as the Multi-Account Containers extension. This approach gives users some control but it does not stop their browser being fingerprinted and all their web activity in it linked and tracked. It may help reduce some cookie-based tracking, though.
Last month Mozilla also updated the container feature to add one that specifically isolates a Facebook user’s identity from the rest of the web. This limits how Facebook can track a user’s non-Facebook web browsing — which, yes, Facebook does do, whatever Zuckerberg tried to claim in Congress — so again it’s a way to reduce what the social network giant knows about you. (Though it should also be noted that clicking on any Facebook social plug-ins you encounter on other websites will still send Facebook your personal data.)

Action: Get acquainted with Tor
Who is this for: Activists, people with high risks attached to being tracked online, committed privacy advocates who want to help grow the Tor network
How difficult is it: Patience is needed to use Tor. Also some effort to ensure you don’t accidentally do something that compromises your anonymity
Tell me more: For the most robust form of anonymous web browsing there’s Tor. Tor’s onion network works by encrypting and routing your Internet traffic randomly through a series of relay servers to make it harder to link a specific device with a specific online destination. This does mean it’s definitely not the fastest form of web browsing around. Some sites can also try to block Tor users so the Internet experience you get when browsing in this way may suffer. But it’s the best chance of truly preserving your online anonymity. You’ll need to download the relevant Tor browser bundle to use it. It’s pretty straightforward to install and get going. But expect very frequent security updates which will also slow you down.
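The layering idea behind onion routing can be illustrated with a toy sketch. To be clear, there is no real cryptography here and the relay names are made up: Tor encrypts each layer so that a relay can decrypt only its own, whereas this just shows the nesting, where each peel reveals only the next hop.

```python
import json

def wrap_onion(message, relays):
    # Wrap the message in one layer per relay, innermost layer first,
    # so the outermost layer names the entry relay.
    layer = message
    for relay in reversed(relays):
        layer = json.dumps({"next_hop": relay, "payload": layer})
    return layer

def peel_layer(onion):
    # What a single relay does: learn one hop, forward the inner payload.
    d = json.loads(onion)
    return d["next_hop"], d["payload"]

onion = wrap_onion("fetch example.com", ["relay-A", "relay-B", "relay-C"])
hop, rest = peel_layer(onion)
print(hop)  # relay-A -- the destination stays buried in the inner layers
```

Peeling the remaining layers one at a time yields relay-B, then relay-C, and only then the original request, which is why no single relay can link you to your destination.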

Action: Switch to another DNS
Who is this for: People who don’t trust their ISP
How difficult is it: Moderately
Tell me more: When you type an address into the address bar, your device asks a Domain Name System (DNS) server to translate that address into an IP address (a unique combination of numbers and dots). By default, your ISP or your mobile carrier runs the DNS servers their users query. It means that they can see all your web history. Big telecom companies are going to take advantage of that to ramp up their advertising efforts. By default, your DNS queries are also unencrypted and can be intercepted by people running the network. Some governments also ask telecom companies to block some websites on their DNS servers — some countries block Facebook for censorship reasons, others block The Pirate Bay for online piracy reasons.
You can configure each of your devices to use another public DNS service. But don’t use Google’s public DNS! It’s an ad company, so it really wants to see your web history. Both Quad9 and Cloudflare’s public DNS have strong privacy policies. But Quad9 is a not-for-profit organization, so it’s easier to trust them.
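To see why an unencrypted DNS query is so easy to snoop on, here is a sketch that builds a classic DNS query packet by hand (the transaction ID is a made-up constant; real resolvers randomize it). The domain you are looking up sits in the packet as plain, readable bytes, which is exactly what anyone on the network path can read.

```python
import struct

def build_dns_query(hostname):
    """Build a minimal, unencrypted DNS query packet (RFC 1035, A record)."""
    header = struct.pack(
        ">HHHHHH",
        0x1234,   # transaction ID (illustrative constant)
        0x0100,   # flags: standard query, recursion desired
        1,        # one question
        0, 0, 0,  # no answer/authority/additional records
    )
    # The question section encodes each label as <length><bytes>.
    question = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question += struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

packet = build_dns_query("example.com")
print(b"example" in packet, b"com" in packet)  # True True
```

Encrypted alternatives (DNS over TLS or DNS over HTTPS, which resolvers like Quad9 and Cloudflare support) wrap this same packet in an encrypted channel so on-path observers can no longer read the hostname.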

Action: Disable location services
Who is this for: Anyone who feels uncomfortable with the idea of being kept under surveillance
How difficult is it: A bit of effort finding and changing settings, and a bit of commitment to stay on top of any ‘updates’ to privacy policies which might try to revive location tracking. You also need to be prepared to accept some reduction in the utility and/or convenience of the service because it won’t be able to automatically customize what it shows you based on your location
Tell me more: The tech industry is especially keen to keep tabs on where its users are at any given moment. And thanks to the smash hit success of smartphones with embedded sensors it’s never been easier to pervasively track where people are going — and therefore to infer what they’re doing. For ad targeting purposes location data is highly valuable of course. But it’s also hugely intrusive. Did you just visit a certain type of health clinic? Were you carrying your phone loaded with location-sucking apps? Why then it’s trivially easy for the likes of Google and Facebook to connect your identity to that trip — and link all that intel to their ad networks. And if the social network’s platform isn’t adequately “locked down” — as Zuckerberg would put it — your private information might leak and end up elsewhere. It could even get passed around between all sorts of unknown entities — as the up to 87M Facebook profiles in the Cambridge Analytica scandal appear to have been. (Whistleblower Chris Wylie has said that Facebook data-set went “everywhere”.)
There are other potential risks too. Insurance premiums being assessed based on covertly collected data inputs. Companies that work for government agencies using social media info to try to remove benefits or even have people deported. Location data can also influence the types of adverts you see or don’t see. And on that front there’s a risk of discrimination if certain types of ads — jobs or housing, for example — don’t get served to you because you happen to be a person of color, say, or a Muslim. Excluding certain protected groups of people from adverts can be illegal — but that hasn’t stopped it happening multiple times on Facebook’s platform. And location can be a key signal that underpins this kind of prejudicial discrimination.
Even the prices you are offered online can depend on what is being inferred about you via your movements. The bottom line is that everyone’s personal data is being made to carry a lot of baggage these days — and most of the time it’s almost impossible to figure out exactly what that unasked for baggage might entail when you consent to letting a particular app or service track where you go.
Pervasive tracking of location at very least risks putting you at a disadvantage as a consumer. Certainly if you live somewhere without a proper regulatory framework for privacy. It’s also worth bearing in mind how lax tech giants can be where location privacy is concerned — whether it’s Uber’s infamous ‘god view’ tool or Snapchat leaking schoolkids’ location or Strava accidentally revealing the locations of military bases. Their record is pretty terrible.
If you really can’t be bothered to go and hunt down and switch off every location setting one fairly crude action you can take is to buy a Faraday cage carry case — Silent Pocket makes an extensive line of carry cases with embedded wireless shielding tech, for instance — which you can pop your smartphone into when you’re on the move to isolate it from the network. Of course once you take it out it will instantly reconnect and location data will be passed again so this is not going to do very much on its own. Nixing location tracking in the settings is much more effective.

Action: Approach VPNs with extreme caution
Who is this for: All web users — unless free Internet access is not available in your country
How difficult is it: No additional effort
Tell me more: While there may be times when you feel tempted to sign up and use a VPN service — say, to try to circumvent geoblocks so you can stream video content that’s not otherwise available in your country — if you do this you should assume that the service provider will at very least be recording everything you’re doing online. They may choose to sell that info or even steal your identity. Many of them promise you perfect privacy and great terms of service. But you can never know for sure if they’re actually doing what they say. So the rule of thumb about all VPNs is: Assume zero privacy — and avoid if at all possible. Facebook even has its own VPN — which it’s been aggressively pushing to users of its main app by badging it as a security service, with the friendly-sounding name ‘Protect’. In reality the company wants you to use this so it can track what other apps you’re using — for its own business intelligence purposes. So a more accurate name for this ‘service’ would be: ‘Protect Facebook’s stranglehold on the social web’.

Action: Build your own VPN server
Who is this for: Developers
How difficult is it: You need to be comfortable with the Terminal
Tell me more: The only VPN server you can trust is the one you built yourself! In that case, VPN servers can be a great tool if you’re on a network you don’t trust (a hotel, a conference or an office). We recommend using Algo VPN and a hosting provider you trust.

Action: Take care with third-party keyboard apps
Who is this for: All touchscreen device users
How difficult is it: No additional effort
Tell me more: Keyboard apps are a potential privacy minefield given that, if you allow cloud-enabled features, they can be in a position to suck out all the information you’re typing into your device — from passwords to credit card numbers to the private contents of your messages. That’s not to say that all third-party keyboards are keylogging everything you type. But the risk is there — so you need to be very careful about what you choose to use. Security is also key. Last year, sensitive personal data from 31M+ users of one third-party keyboard, AI.type, leaked online after the company had failed to properly secure its database server, as one illustrative example of the potential risks.
Google knows how powerful keyboards can be as a data-sucker — which is why it got into the third-party keyboard game, outing its own Gboard keyboard app first for Apple’s iOS in 2016 and later bringing it to Android. If you use Gboard you should know you are handing the adtech giant another firehose of your private information — though it claims that only search queries and “usage statistics” are sent by Gboard to Google (the privacy policy further specifies: “Anything you type other than your searches, like passwords or chats with friends, isn’t sent. Saved words on your device aren’t sent.”). So, if you believe that, Gboard is not literally a keylogger. But it is watching what you search for and how you use your phone.
Also worth remembering: Data will still be passed by Gboard to Google if you’re using an e2e encrypted messenger like Signal. So third party keyboards can erode the protection afforded by robust e2e encryption — so again: Be very careful what you use.

Action: Use end-to-end encrypted messengers
Who is this for: Everyone who can
How difficult is it: Mild effort unless all your friends are using other messaging apps
Tell me more: Choosing friends based on their choice of messaging app isn’t a great option so real world network effects can often work against privacy. Indeed, Facebook uses the fuzzy feelings you have about your friends to manipulate Messenger users to consent to continuously uploading their phone contacts, by suggesting you have to if you want to talk to your contacts. (Which is, by the by, entirely bogus.)
But if all your friends use a messaging app that does not have end-to-end encryption, chances are you’ll feel forced to use that same non-privacy-safe app too, given that the other option is to exclude yourself from the digital chatter of your friend group. Which would clearly suck.
Facebook-owned WhatsApp does at least have end-to-end encryption — and is widely used (certainly internationally). Though you still need to be careful to opt out of any privacy-eroding terms the company tries to push. In summer 2016, for example, a major T&Cs change sought to link WhatsApp users’ accounts with their Facebook profiles (and thus with all the data Facebook holds on them) — as well as sharing sensitive stuff like your last seen status, your address book, your BFFs in WhatsApp and all sorts of metadata with Zuck’s ‘family’ of companies. Thankfully most of this privacy-hostile data sharing has been suspended in Europe, after Facebook got in trouble with local data protection agencies.

Action: Use end-to-end encryption if you use cloud storage
Who is this for: Dedicated privacy practitioners, anyone worried about third parties accessing their stuff
How difficult is it: Some effort, especially if you have lots of content stored in another service that you need to migrate
Tell me more: Dropbox IPO’d last month — and the markets signalled their approval of its business. But someone who doesn’t approve of the cloud storage giant is Edward Snowden — who in 2014 advised: “Get rid of Dropbox”, arguing the company is hostile to privacy. The problem is that Dropbox does not offer zero access encryption — because it retains encryption keys, meaning it can technically decrypt and read the data you store with it if it decides it needs to or is served with a warrant.
Cloud storage alternatives that do offer local encryption with no access to the encryption keys are available, such as SpiderOak. And if you’re looking for a cloud backup service, Backblaze also offers the option to let you manage the encryption key. Another workaround if you do still want to use a service like Dropbox is to locally encrypt the stuff you want to store before you upload it — using another third party service such as Boxcryptor.
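For a sense of what ‘encrypt locally before you upload’ looks like in practice, here’s a minimal Python sketch using the third-party cryptography library’s Fernet recipe (symmetric, authenticated encryption). The file contents here are purely illustrative, and a real tool like Boxcryptor is doing something more elaborate:

```python
# Sketch: encrypt data locally before handing it to any cloud service.
# Assumes the third-party `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

# Generate a key once and keep it somewhere safe, *off* the cloud service.
key = Fernet.generate_key()
f = Fernet(key)

plaintext = b"contents of a sensitive document"  # illustrative only
ciphertext = f.encrypt(plaintext)  # this opaque blob is what you'd upload

# Only someone holding the key can recover the original.
assert f.decrypt(ciphertext) == plaintext
```

The point of this arrangement is that the cloud provider only ever sees the ciphertext; without the key, which never leaves your machine, it cannot read your files even if compelled to hand them over.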

Action: Use an end-to-end encrypted email service
Who is this for: Anyone who wants to be sure their email isn’t being data mined
How difficult is it: Some effort — largely around migrating data and/or contacts from another email service
Tell me more: In the middle of last year Google finally announced it would no longer be data-mining the emails inside its Gmail free email service. (For a little perspective on how long it took to give up data-mining your emails, Gmail launched all the way back in 2004.) The company probably feels it has more than enough alternative data points feeding its user profiling at this point. Plus, with the rise of end-to-end encrypted messaging apps, data-mining email risks pushing the company over the ‘creepy line’ it’s been so keen to avoid in order to stave off the kind of privacy backlash currently engulfing Facebook.
So does it mean that Gmail is now 100% privacy safe? No, because the service is not end-to-end encrypted. But there are now some great webmail clients that do offer robust end-to-end encryption — most notably the Swiss service ProtonMail. Really it’s never been easier to access a reliable, user-friendly, pro-privacy email service. If you want to go one step further, you should set up PGP encryption keys and share them with your contacts. This is a lot more difficult though.
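For the curious, the core idea behind PGP, public-key encryption, can be illustrated with a simplified Python sketch. This uses the third-party cryptography library and is not actual OpenPGP; real PGP encrypts a random session key this way and then uses that key to encrypt the message body:

```python
# Simplified illustration of the public-key idea behind PGP (not real OpenPGP).
# Assumes the third-party `cryptography` package.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# You generate a key pair; the public half is what you share with contacts.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# A contact encrypts a (short) message using only your public key...
ciphertext = public_key.encrypt(b"meet at noon", oaep)

# ...and only you, holding the private key, can read it.
assert private_key.decrypt(ciphertext, oaep) == b"meet at noon"
```

This asymmetry is what makes the scheme practical: anyone can write to you securely without any shared secret being exchanged in advance.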

Action: Choose iOS over Android
Who is this for: Mainstream consumers, Apple fans
How difficult is it: Depends on the person. Apple hardware is generally more expensive so there’s a cost premium
Tell me more: No connected technology is 100% privacy safe but Apple’s hardware-focused business model means the company’s devices are not engineered to try to harvest user data by default. Apple does also invest in developing pro-privacy technologies. Whereas there’s no getting around the fact Android-maker Google is an adtech giant whose revenues depend on profiling users in order to target them with adverts. Basically the company needs to suck your data to make a fat profit. That’s why Google asks you to share all your web and app activity and location history if you want to use Google Assistant, for instance.
Android is a more open platform than iOS, though, and it’s possible to configure it in many different ways — some of which can be more locked down as regards privacy than others (the Android Open Source Project can be customized and used without Google services as default preloads, for example). But doing that kind of configuration is not going to be within reach of the average person. So iOS is the obvious choice for mainstream consumers.

Action: Delete your social media accounts
Who is this for: Committed privacy lovers, anyone bored with public sharing
How difficult is it: Some effort — mostly feeling like you’re going to miss out. But third party services can sometimes require a Facebook login (a workaround for that would be to create a dummy Facebook account purely for login purposes — using a name and email you don’t use for anything else, and not linking it to your usual mobile phone number or adding anyone you actually know IRL)
Tell me more: Deleting Facebook clearly isn’t for everyone. But ask yourself how much you use it these days anyway? You might find yourself realizing it’s not really that central to what you do on the Internet after all. The center of gravity in social networking has shifted away from mass public sharing into more tightly curated friend groups anyway, thanks to the popularity of messaging apps.
But of course Facebook owns Messenger, Instagram and WhatsApp too. So ducking out of its surveillance dragnet entirely is especially hard. Ideally you would also need to run tracker blockers (see above) as the company tracks non-Facebook users around the Internet via the pixels it has embedded on lots of popular websites.
While getting rid of your social media accounts is not a privacy panacea, removing yourself from mainstream social network platforms at least reduces the risk of a chunk of your personal info being scraped and used without your say so. Though it’s still not absolutely guaranteed that when you delete an account the company in question will faithfully remove all your information from their servers — or indeed from the servers of any third party they shared your data with.
If you really can’t bring yourself to ditch Facebook (et al) entirely, at least dive into the settings and make sure you lock down as much access to your data as you can — including checking which apps have been connected to your account and removing any that aren’t relevant or useful to you anymore.

Action: Say no to always-on voice assistants
Who is this for: Anyone who values privacy more than gimmickry
How difficult is it: No real effort
Tell me more: There’s a rash of smart speaker voice assistants on shop shelves these days, marketed in a way that suggests they’re a whole lot smarter and more useful than they actually are. In reality they’re most likely to be used for playing music (albeit, audio quality can be very poor) or as very expensive egg timers.
Something else the PR for gadgets like Amazon’s (many) Echos or Google Home doesn’t mention is the massive privacy trade off involved with installing an always-on listening device inside your home. Essentially these devices function by streaming whatever you ask to the cloud and will typically store recordings of things you’ve said in perpetuity on the companies’ servers. Some do offer a delete option for stored audio but you would have to stay on top of deleting your data as long as you keep using the device. So it’s a tediously Sisyphean task. Smart speakers have also been caught listening to and recording things their owner didn’t actually want them to — because they got triggered by accident. Or when someone on the TV used the trigger word.
The privacy risks around smart speakers are clearly very large indeed. Not least because this type of personal data is of obvious and inevitable interest to law enforcement agencies. So ask yourself whether that fake fart dispenser gizmo you’re giggling about is really worth the trade off of inviting all sorts of outsiders to snoop on the goings on inside your home.

Action: Block some network requests
Who is this for: Paranoid people
How difficult is it: Need to be tech savvy
Tell me more: On macOS, you can install something called Little Snitch to get an alert every time an app tries to talk with a server. You can approve or reject each request and create rules. If you don’t want Microsoft Word to talk with Microsoft’s servers all the time for instance, it’s a good solution — but is not really user friendly.
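Conceptually, tools like Little Snitch apply a per-app, per-host rule table to each outbound connection. This toy Python sketch shows the idea only; the apps, hostnames and rules are all invented for illustration and this is in no way Little Snitch’s actual engine:

```python
# Toy model of per-app outbound-connection rules, in the spirit of tools
# like Little Snitch. All apps, hostnames and rules below are invented.
# Each rule maps (app, host) to a decision; "*" matches any host.
RULES = [
    ("Word",   "updates.example-microsoft.com", "deny"),
    ("Word",   "*",                             "deny"),
    ("Signal", "*",                             "allow"),
]

def decide(app: str, host: str, default: str = "ask") -> str:
    """Return the first matching rule's decision; otherwise ask the user."""
    for rule_app, rule_host, decision in RULES:
        if rule_app == app and rule_host in ("*", host):
            return decision
    return default

print(decide("Word", "updates.example-microsoft.com"))  # deny
print(decide("Dropbox", "api.example.com"))             # ask (no rule yet)
```

In practice the real tool sits at the network layer, intercepts each connection attempt, and prompts you to create a rule the first time an (app, host) pair has no match, which is exactly the "approve or reject each request" flow described above.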

Action: Use a privacy-focused operating system
Who is this for: Edward Snowden
How difficult is it: Need to be tech savvy
Tell me more: If you really want to lock everything down, you should consider using Tails as your desktop operating system. It’s a Linux distribution that leaves no trace and routes all network requests through the Tor network by default. But it’s not exactly user friendly, and it’s quite complicated to install on a USB drive. One for those whose threat model really is ‘bleeding edge’.

Action: Write to your political reps to demand stronger privacy laws
Who is this for: Anyone who cares about privacy, and especially Internet users in North America right now
How difficult is it: A bit of effort
Tell me more: There appears to be bipartisan appetite among U.S. lawmakers to bring in some form of regulation for Internet companies. And with tougher new rules coming into force in Europe next month it’s an especially opportune moment to push for change in the U.S., where web users are facing reduced standards vs international users after May 25. So it’s a great time to write to your reps reminding them you’re far more interested in your privacy being protected than Facebook winning some kind of surveillance arms race with the Chinese. Tell them it’s past time for the U.S. to draft laws that prioritize the protection of personal data.

Action: Throw away all your connected devices — and choose your friends wisely
Who is this for: Fugitives and whistleblowers
How difficult is it: Privacy doesn’t get harder than this
Tell me more: Last month the former Catalan president, Carles Puigdemont — who, in October, dodged arrest by the Spanish authorities by fleeing to Brussels after the region’s abortive attempt to declare independence — was arrested by German police, after crossing the border from Denmark in a car. Spanish intelligence agents had reportedly tracked his movements via the GPS on the mobile device of one or more of his friends. The car had also been fitted with a tracker. Trusting anything not to snitch on you is a massive risk if your threat model is this high. The problem is you also need trustworthy friends to help you stay ahead of the surveillance dragnet that’s out to get you.

Action: Ditch the Internet entirely
Who is this for: Fugitives and whistleblowers
How difficult is it: Privacy doesn’t get harder than this
Tell me more: Public administrations can ask you to do pretty much everything online these days — and even if it’s not mandatory to use their Internet service it can be incentivized in various ways. The direction of travel for government services is clearly digital. So eschewing the Internet entirely is getting harder and harder to do.
One wild card option — that’s still not a full Internet alternative (yet) — is to use a different type of network that’s being engineered with privacy in mind. The experimental, decentralized MaidSafe network fits that bill. This majorly ambitious project has already clocked up a decade’s worth of R&D on the founders’ mission to rethink digital connectivity without compromising privacy and security by doing away with servers — and decentralizing and encrypting everything. It’s a fascinating project. Just sadly not yet a fully-fledged Internet alternative.

Telegram hit with block in Russia over encryption

A Russian court has ordered a block on access to the Telegram messaging app — with the block coming into force immediately, according to the BBC.

The messaging platform has been under pressure to hand over encryption keys to Russian authorities so they can access user data — which they claim is needed for counterterrorism purposes — but has so far refused.

However late last month Telegram lost a bid before the Supreme Court to block security services from getting access to users’ data, though it said it planned to appeal.

The court gave it 15 days to hand over the encryption keys. Again it refused. So last week Russia’s state communication watchdog filed a lawsuit to limit access to the service — and a court in Moscow has now granted the block.

In a tweet responding to the news, founder Pavel Durov wrote: “Privacy is not for sale, and human rights should not be compromised out of fear or greed.”

Durov is himself Russian but has lived in exile since 2014 after claiming he’d been forced to hand control of his former social networking company, VK, to allies of Russian president Vladimir Putin — also as a result of refusing to hand user data to authorities.

In a longer post on his Telegram channel today, Durov adds: “The power that local governments have over IT corporations is based on money. At any given moment, a government can crash their stocks by threatening to block revenue streams from its markets and thus force these companies to do strange things (remember how last year Apple moved iCloud servers to China).

“At Telegram, we have the luxury of not caring about revenue streams or ad sales. Privacy is not for sale, and human rights should not be compromised out of fear or greed.”

Telegram’s service has faced temporary blocks in Iran — over content being spread on the platform that the regime dislikes. Last summer the Indonesian government also used blocks to wring content-related concessions out of Telegram.

But it remains to be seen whether the company will agree to any concessions to get the Russian block removed. Durov’s first response suggests it has no intention of backing down over encryption.

Telegram’s lawyer, Pavel Chikov, has also described the move by the Russian authorities as “unconstitutional” — and claimed it “cannot be fulfilled technically and legally”.

Meanwhile, the messaging platform announced last month it now has more than 200 million monthly active users globally.

And while Durov claims not to care about money, he is in the midst of a billion-dollar ICO, raising money via a token sale to develop a cryptocurrency and blockchain platform.

Reuters suggests some Russians will seek to circumvent the block via the use of VPN technology.