Category Archives: Android

Watch Google CEO Sundar Pichai testify in Congress — on bias, China and more

Google CEO Sundar Pichai has managed to avoid the public political grillings that have come for tech leaders at Facebook and Twitter this year. But not today.

Today he will be in front of the House Judiciary Committee for a hearing entitled: “Transparency & Accountability: Examining Google and its Data Collection, Use and Filtering Practices”.

The hearing kicks off at 10:00 ET — and will be streamed live via our YouTube channel (with the feed also embedded above in this post).

Announcing the hearing last month, committee chairman Bob Goodlatte said it would “examine potential bias and the need for greater transparency regarding the filtering practices of tech giant Google”.

Republicans have been pressuring the Silicon Valley giant over what they claim is ‘liberal bias’ embedded at the algorithmic level.

This summer President Trump publicly lashed out at Google, expressing displeasure about news search results for his name in a series of tweets in which he claimed: “Google & others are suppressing voices of Conservatives and hiding information and news that is good.”

Google rejected the allegation, responding then that: “Search is not used to set a political agenda and we don’t bias our results toward any political ideology.”

In his prepared remarks ahead of the hearing, Pichai reiterates this point.

“I lead this company without political bias and work to ensure that our products continue to operate that way. To do otherwise would go against our core principles and our business interests,” he writes. “We are a company that provides platforms for diverse perspectives and opinions—and we have no shortage of them among our own employees.”

He also seeks to paint a picture of Google as a proudly patriotic “American company” — playing up its role as a creator of local jobs and a bolster for the wider US economy, likely in the hopes of defusing some of the expected criticism from conservatives on the committee.

However, his statement makes no mention of a separate controversy that’s been dogging Google this year — after news leaked this summer that it had developed a censored version of its search service for a potential relaunch in China.

The committee looks certain to question Google closely on its intentions vis-a-vis China.

In statements ahead of the hearing last month, House majority leader, Kevin McCarthy, flagged up reports he said suggested Google is “compromising its core principles by complying with repressive censorship mandates from China”.

Trust in general is a key theme, with lawmakers expressing frustration at both the opacity of Google’s blackbox algorithms, which ultimately shape content hierarchies on its platforms, and the difficulty they’ve had in getting facetime with its CEO to voice questions and concerns.

At a Senate Intelligence committee hearing three months ago, which was attended by Twitter CEO Jack Dorsey and Facebook COO Sheryl Sandberg, senators did not hide their anger that Pichai had turned down their invitation — openly ripping into company leaders for not bothering to show up. (Google offered to send its chief legal officer instead.)

“For months, House Republicans have called for greater transparency and openness from Google. Company CEO Sundar Pichai met with House Republicans in September to answer some of our questions. Mr. Pichai’s scheduled appearance in front of the House Judiciary Committee is another important step to restoring public trust in Google and all the companies that shape the Internet,” McCarthy wrote last month.

Other recent news that could inform additional questions for Pichai from the committee includes the revelation of yet another massive security breach at Google+, and a New York Times investigation of how mobile apps are location-tracking users — with far more Android apps found to contain location-sharing code than iOS apps.

Seized cache of Facebook docs raise competition and consent questions

A UK parliamentary committee has published the cache of Facebook documents it dramatically seized last week.

The documents were obtained via a legal discovery process by a startup that’s suing the social network in a California court, in a case related to Facebook changing data access permissions back in 2014/15.

The court had sealed the documents but the DCMS committee used rarely deployed parliamentary powers to obtain them from the Six4Three founder, during a business trip to London.

You can read the redacted documents here — all 250 pages of them.

In a series of tweets regarding the publication, committee chair Damian Collins says he believes there is “considerable public interest” in releasing them.

“They raise important questions about how Facebook treats users data, their policies for working with app developers, and how they exercise their dominant position in the social media market,” he writes.

“We don’t feel we have had straight answers from Facebook on these important issues, which is why we are releasing the documents. We need a more public debate about the rights of social media users and the smaller businesses who are required to work with the tech giants. I hope that our committee investigation can stand up for them.”

The committee has been investigating online disinformation and election interference for the best part of this year, and has been repeatedly frustrated in its attempts to extract answers from Facebook.

But it is protected by parliamentary privilege — hence it’s now published the Six4Three files, having waited a week in order to redact certain pieces of personal information.

Collins has included a summary of the key issues as the committee sees them after reviewing the documents, drawing attention to six in particular.

Here is his summary:

  1. White lists: Facebook have clearly entered into whitelisting agreements with certain companies, which meant that after the platform changes in 2014/15 they maintained full access to friends data. It is not clear that there was any user consent for this, nor how Facebook decided which companies should be whitelisted or not.
  2. Value of friends data: It is clear that increasing revenues from major app developers was one of the key drivers behind the Platform 3.0 changes at Facebook. The idea of linking access to friends data to the financial value of the developers’ relationship with Facebook is a recurring feature of the documents.
  3. Reciprocity: Data reciprocity between Facebook and app developers was a central feature in the discussions about the launch of Platform 3.0.
  4. Android: Facebook knew that the changes to its policies on the Android mobile phone system, which enabled the Facebook app to collect a record of calls and texts sent by the user, would be controversial. To mitigate any bad PR, Facebook planned to make it as hard as possible for users to know that this was one of the underlying features of the upgrade of their app.
  5. Onavo: Facebook used Onavo to conduct global surveys of the usage of mobile apps by customers, and apparently without their knowledge. They used this data to assess not just how many people had downloaded apps, but how often they used them. This knowledge helped them to decide which companies to acquire, and which to treat as a threat.
  6. Targeting competitor apps: The files show evidence of Facebook taking aggressive positions against apps, with the consequence that denying them access to data led to the failure of that business.

The publication of the files comes at an awkward moment for Facebook — which remains on the back foot after a string of data and security scandals, and has just announced a major policy change — ending a long-running ban on apps copying its own platform features.

Albeit the timing of Facebook’s policy shift announcement hardly looks incidental — given Collins said last week the committee would publish the files this week.

The policy in question has been used by Facebook to close down competitors in the past, such as — two years ago — when it cut off style transfer app Prisma’s access to its live-streaming Live API when the startup tried to launch a livestreaming art filter (Facebook subsequently launched its own style transfer filters for Live).

So its policy reversal now looks intended to defuse regulatory scrutiny around potential antitrust concerns.

But emails in the Six4Three files suggesting that Facebook took “aggressive positions” against competing apps could spark fresh competition concerns.

In one email dated January 24, 2013, a Facebook staffer, Justin Osofsky, discusses Twitter’s launch of its short video clip app, Vine, and says Facebook’s response will be to close off its API access.

“As part of their NUX, you can find friends via FB. Unless anyone raises objections, we will shut down their friends API access today. We’ve prepared reactive PR, and I will let Jana know our decision,” he writes.

Osofsky’s email is followed by what looks like a big thumbs up from Zuckerberg, who replies: “Yup, go for it.”

Also of concern on the competition front is Facebook’s use of a VPN startup it acquired, Onavo, to gather intelligence on competing apps — either for acquisition purposes or to target as a threat to its business.

The files show various Onavo industry charts detailing reach and usage of mobile apps and social networks — with each of these graphs stamped ‘highly confidential’.

Facebook bought Onavo back in October 2013. Shortly after, it shelled out $19BN to acquire rival messaging app WhatsApp — which one Onavo chart in the cache indicates was beasting Facebook on mobile, accounting for well over double the daily message sends at that time.

The files also spotlight several issues of concern relating to privacy and data protection law, with internal documents raising fresh questions over how or even whether (in the case of Facebook’s whitelisting agreements with certain developers) it obtained consent from users to process their personal data.

The company is already facing a number of privacy complaints under the EU’s GDPR framework over its use of ‘forced consent’, given that it does not offer users an opt-out from targeted advertising.

But the Six4Three files look set to pour fresh fuel on the consent fire.

Collins’ fourth line item — related to an Android upgrade — also speaks loudly to consent complaints.

Earlier this year Facebook was forced to deny that it collects calls and SMS data from users of its Android apps without permission. But, as we wrote at the time, it had used privacy-hostile design tricks to sneak expansive data-gobbling permissions past users. So, put simply, people clicked ‘agree’ without knowing exactly what they were agreeing to.

The Six4Three files back up the notion that Facebook was intentionally trying to mislead users.

In one email dated November 15, 2013, Matt Scutari, manager of privacy and public policy, suggests ways to prevent users from choosing to set a higher level of privacy protection, writing: “Matt is providing policy feedback on a Mark Z request that Product explore the possibility of making the Only Me audience setting unsticky. The goal of this change would be to help users avoid inadvertently posting to the Only Me audience. We are encouraging Product to explore other alternatives, such as more aggressive user education or removing stickiness for all audience settings.”

Another awkward trust issue for Facebook which the documents could stir up afresh relates to its repeat claim — including under questions from lawmakers — that it does not sell user data.

In one email from the cache — sent by Mark Zuckerberg, dated October 7, 2012 — the Facebook founder appears to be entertaining the idea of charging developers for “reading anything, including friends”.

Yet earlier this year, when he was asked by a US lawmaker how Facebook makes money, Zuckerberg replied: “Senator, we sell ads.”

He did not include a caveat that he had apparently personally entertained the idea of liberally selling access to user data.

Responding to the publication of the Six4Three documents, a Facebook spokesperson told us:

As we’ve said many times, the documents Six4Three gathered for their baseless case are only part of the story and are presented in a way that is very misleading without additional context. We stand by the platform changes we made in 2015 to stop a person from sharing their friends’ data with developers. Like any business, we had many internal conversations about the various ways we could build a sustainable business model for our platform. But the facts are clear: we’ve never sold people’s data.

Zuckerberg has repeatedly refused to testify in person to the DCMS committee.

At its last public hearing — which was held in the form of a grand committee comprising representatives from nine international parliaments, all with burning questions for Facebook — the company sent its policy VP, Richard Allan, leaving an empty chair where Zuckerberg’s bum should be.

China’s fast-rising Bullet Messenger hit with copyright complaint

Bullet Messenger, a fast-rising Chinese messaging upstart that’s gunning to take on local behemoth, WeChat, has been pulled from the iOS App Store owing to what its owners couch as a copyright complaint.

Reuters reported the development earlier, saying Bullet’s owner, Beijing-based Kuairu Technology, claimed in a social media posting that the app had been taken down from Apple’s app store because of a complaint related to image content provided by a partner.

“We are verifying the situation with the partner and will inform you as soon as possible when download capabilities are resumed,” it said in a statement on its official Weibo account.

The company did not specify which part of the app has been subject to a complaint.

We’ve reached out to Apple to ask if it can provide any more details.

According to checks by Reuters earlier today, the Bullet Messenger app was still available on China’s top Android app stores — including stores owned by WeChat owner Tencent, as well as Baidu and Xiaomi stores — which the news agency suggests makes it less likely the app has been pulled from iOS as a result of censorship by the state, saying apps targeted by regulators generally disappear from local app stores too.

Bullet Messenger only launched in August but quickly racked up four million users in just over a week, and also snagged $22M in funding.

By September it had claimed seven million users, and Chinese smartphone maker Smartisan — an investor in Bullet — said it planned to spend 1 billion yuan (~$146M) over the next six months in a bid to reach 100M users. Though in a battle with a competitive Goliath like WeChat (1BN+ active users) that would still be a relative minnow.

The upstart messenger has grabbed attention with its fast growth, apparently attracting users via its relatively minimal messaging interface and a feature that enables speech-to-text transcription text in real time.

Albeit the app has also raised eyebrows for allowing pornographic content to be passed around.

It’s possible that element of the app caught the attention of Chinese authorities which have been cracking down on Internet porn in recent years — even including non-visual content (such as ASMR) which local regulators have also judged to be obscene.

Although it’s equally possible Apple itself is responding to a porn complaint about Bullet’s iOS app.

Earlier this year the Telegram messaging app fell foul of the App Store rules and was temporarily pulled, as a result of what its founder described as “inappropriate content”.

Apple’s developer guidelines for iOS apps include a section on safety that proscribes “upsetting or offensive content” — including frowning on: “Apps with user-generated content or services that end up being used primarily for pornographic content.”

In Telegram’s case, the App Store banishment was soon resolved.

There’s nothing currently to suggest that Bullet’s app won’t also soon be restored.

Facebook’s ex-CSO, Alex Stamos, defends its decision to inject ads in WhatsApp

Alex Stamos, Facebook’s former chief security officer, who left the company this summer to take up a role in academia, has made a contribution to what’s sometimes couched as a debate about how to monetize (and thus sustain) commercial end-to-end encrypted messaging platforms in order that the privacy benefits they otherwise offer can be as widely spread as possible.

Stamos made the comments via Twitter, where he said he was indirectly responding to the fallout from a Forbes interview with WhatsApp co-founder Brian Acton — in which Acton hit out at his former employer for being greedy in its approach to generating revenue off of the famously anti-ads messaging platform.

Both WhatsApp founders’ exits from Facebook have been blamed on disagreements over monetization. (Jan Koum left some months after Acton.)

In the interview, Acton said he suggested Facebook management apply a simple business model atop WhatsApp, such as metered messaging for all users after a set number of free messages. But management pushed back — with Facebook COO Sheryl Sandberg telling him they needed a monetization method that generates greater revenue “scale”.

And while Stamos has avoided making critical remarks about Acton (unlike some current Facebook staffers), he clearly wants to lend his weight to the notion that some kind of trade-off is necessary in order for end-to-end encryption to be commercially viable (and thus for the greater good (of messaging privacy) to prevail); and therefore his tacit support to Facebook and its approach to making money off of a robustly encrypted platform.

Stamos’ own departure from the fb mothership was hardly under such acrimonious terms as Acton’s, though he has had his own disagreements with the leadership team — as set out in a memo he sent earlier this year that was obtained by BuzzFeed. So his support for Facebook combining e2e and ads perhaps counts for something, though isn’t really surprising given the seat he occupied at the company for several years, and his always fierce defence of WhatsApp encryption.

(Another characteristic concern that also surfaces in Stamos’ Twitter thread is the need to keep the technology legal, in the face of government attempts to backdoor encryption, which he says will require “accepting the inevitable downsides of giving people unfettered communications”.)

This summer Facebook confirmed that, from next year, ads will be injected into WhatsApp statuses (aka the app’s Stories clone). So it is indeed bringing ads to the famously anti-ads messaging platform.

For several years the company has also been moving towards positioning WhatsApp as a business messaging platform to connect companies with potential customers — and it says it plans to meter those messages, also from next year.

So there are two strands to its revenue generating playbook atop WhatsApp’s e2e encrypted messaging platform — both with knock-on impacts on privacy, given Facebook targets ads and marketing content by profiling users via harvested personal data.

This means that while WhatsApp’s e2e encryption means Facebook literally cannot read WhatsApp users’ messages, it is ‘circumventing’ the technology (for ad-targeting purposes) by linking accounts across different services it owns — using people’s digital identities across its product portfolio (and beyond) as a sort of ‘trojan horse’ to negate the messaging privacy it affords them on WhatsApp.

Facebook is using different technical methods (including the very low-tech method of phone number matching) to link WhatsApp user and Facebook accounts. Once it’s been able to match a Facebook user to a WhatsApp account it can then connect what’s very likely to be a well fleshed out Facebook profile with a WhatsApp account that nonetheless contains messages it can’t read. So it’s both respecting and eroding user privacy.
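The phone number matching method described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration — the normalization rules, table shapes and hashing step are hypothetical, not Facebook's actual pipeline: normalize each number to a canonical form, hash it, and join the two account tables on the hash.

```python
import hashlib

def normalize(number: str, default_country_code: str = "1") -> str:
    """Reduce a phone number to a canonical digits-only form.
    (Illustrative rules only -- real systems use full E.164 parsing.)"""
    digits = "".join(ch for ch in number if ch.isdigit())
    if len(digits) == 10:  # assume a US-style national number
        digits = default_country_code + digits
    return "+" + digits

def hashed(number: str) -> str:
    # Hashing lets two datasets be joined without comparing raw numbers.
    return hashlib.sha256(normalize(number).encode()).hexdigest()

# Two hypothetical account tables, each keyed by a hashed phone number.
facebook_accounts = {hashed("(415) 555-0142"): "fb_user_17"}
whatsapp_accounts = {hashed("+1 415-555-0142"): "wa_user_99"}

# Linking step: any hash present in both tables ties the accounts together,
# even though the two services stored the number in different formats.
links = {h: (fb_id, whatsapp_accounts[h])
         for h, fb_id in facebook_accounts.items() if h in whatsapp_accounts}
print(links)  # one linked pair: ("fb_user_17", "wa_user_99")
```

The point of the sketch is that no message content is needed for the join — a single shared identifier, normalized consistently, is enough to connect the profiles.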

This approach means Facebook can carry out its ad targeting activities across both messaging platforms (as it will from next year). And do so without having to literally read messages being sent by WhatsApp users.

As trade-offs go, it’s clearly a big one — and one that’s got Facebook into regulatory trouble in Europe.

It is also, at least in Stamos’ view, a trade off that’s worth it for the ‘greater good’ of message content remaining strongly encrypted and therefore unreadable. Even if Facebook now knows pretty much everything about the sender, and can access any unencrypted messages they sent using its other social products.

In his Twitter thread Stamos argues that “if we want that right to be extended to people around the world, that means that E2E encryption needs to be deployed inside of multi-billion user platforms”, which he says means: “We need to find a sustainable business model for professionally-run E2E encrypted communication platforms.”

On the sustainable business model front he argues that two models “currently fit the bill” — either Apple’s iMessage or Facebook-owned WhatsApp. Though he doesn’t go into any detail on why he believes only those two are sustainable.

He does say he’s discounting the Acton-backed alternative, Signal, which now operates via a not-for-profit (the Signal Foundation) — suggesting that rival messaging app is “unlikely to hit 1B users”.

In passing he also throws it out there that Signal is “subsidized, indirectly, by FB ads” — i.e. because Facebook pays a licensing fee for use of the underlying Signal Protocol used to power WhatsApp’s e2e encryption. (So his slightly shade-throwing subtext is that privacy purists are still benefiting from a Facebook sugardaddy.)

Then he gets to the meat of his argument in defence of Facebook-owned (and monetized) WhatsApp — pointing out that Apple’s sustainable business model does not reach every mobile user, given its hardware is priced at a premium. Whereas WhatsApp running on a cheap Android handset ($50, or perhaps even $30 in future) can.

Other encrypted messaging apps can also of course run on Android but presumably Stamos would argue they’re not professionally run.

“I think it is easy to underestimate how radical WhatsApp’s decision to deploy E2E was,” he writes. “Acton and Koum, with Zuck’s blessing, jumped off a bridge with the goal of building a monetization parachute on the way down. FB has a lot of money, so it was a very tall bridge, but it is foolish to expect that FB shareholders are going to subsidize a free text/voice/video global communications network forever. Eventually, WhatsApp is going to need to generate revenue.

“This could come from directly charging for the service, it could come from advertising, it could come from a WeChat-like services play. The first is very hard across countries, the latter two are complicated by E2E.”

“I can’t speak to the various options that have been floated around, or the arguments between WA and FB, but those of us who care about privacy shouldn’t see WhatsApp monetization as something evil,” he adds. “In fact, we should want WA to demonstrate that E2E and revenue are compatible. That’s the only way E2E will become a sustainable feature of massive, non-niche technology platforms.”

Stamos is certainly right that Apple’s iMessage cannot reach every mobile user, given the premium cost of Apple hardware.

Though he elides the important role that second hand Apple devices play in helping to reduce the barrier to entry to Apple’s pro-privacy technology — a role Apple is actively encouraging via support for older devices (and by its own services business expansion which extends its model so that support for older versions of iOS (and thus secondhand iPhones) is also commercially sustainable).

Stamos’ suggestion that robust encryption is only possible via multi-billion user platforms essentially boils down to a usability argument — that mainstream app users simply will not seek encryption out unless it’s plated up for them in a way they don’t even notice it’s there.

The follow-on conclusion is that only a giant like Facebook has the resources to maintain and serve this tech up to the masses.

There’s certainly substance in that point. But the wider question is whether the privacy trade-offs entailed by Facebook’s monetization of WhatsApp — linking Facebook and WhatsApp accounts, and thereby looping in the various less than transparent data-harvesting methods it uses to gather intelligence on web users generally — substantially erode the value of the e2e encryption now being bundled with Facebook’s ad-targeting surveillance, and so used as a selling aid for otherwise privacy-eroding practices.

Yes WhatsApp users’ messages will remain private, thanks to Facebook funding the necessary e2e encryption. But the price users are having to pay is very likely still their personal privacy.

And at that point the argument really becomes about how much profit a commercial entity should be able to extract from a product that’s being marketed as securely encrypted and thus ‘pro-privacy’. How much revenue “scale” is reasonable or unreasonable in that scenario?

Other business models are possible, which was Acton’s point. But likely less profitable. And therein lies the rub where Facebook is concerned.

How much money should any company be required to leave on the table, as Acton did when he left Facebook without the rest of his unvested shares, in order to be able to monetize a technology that’s bound up so tightly with notions of privacy?

Acton wanted Facebook to agree to make as much money as it could without users having to pay it with their privacy. But Facebook’s management team said no. That’s why he’s calling them greedy.

Stamos doesn’t engage with that more nuanced point. He just writes: “It is foolish to expect that FB shareholders are going to subsidize a free text/voice/video global communications network forever. Eventually, WhatsApp is going to need to generate revenue” — thereby collapsing the revenue argument into an all or nothing binary without explaining why it has to be that way.

Twitter now puts live broadcasts at the top of your timeline

Twitter will now put live streams and broadcasts started by accounts you follow at the top of your timeline, making it easier to see what they’re doing in realtime.

In a tweet, Twitter said that the new feature will include breaking news, personalities and sports.

The social networking giant included the new feature in its iOS and Android apps, updated this week. Among the updates, Twitter said it’s now also supporting audio-only live broadcasts, both in its own apps and through its sister broadcast service Periscope.

Last month, Twitter discontinued its app for iOS 9 and lower versions, which according to Apple’s own data still harbor some 5 percent of all iPhone and iPad users.

Apple defends decision not to remove InfoWars’ app

Apple has commented on its decision to continue to allow conspiracy theorist profiteer InfoWars to livestream video podcasts via an app in its App Store, despite removing links to all but one of Alex Jones’ podcasts from its iTunes and podcast apps earlier this week.

At the time Apple said the podcasts had violated its community standards, emphasizing that it “does not tolerate hate speech”, and saying: “We believe in representing a wide range of views, so long as people are respectful to those with differing opinions.”

Yet the InfoWars app allows iOS users to livestream the same content Apple just pulled from iTunes.

In a statement given to BuzzFeed News, Apple explains its decision not to pull the InfoWars app — saying:

We strongly support all points of view being represented on the App Store, as long as the apps are respectful to users with differing opinions, and follow our clear guidelines, ensuring the App Store is a safe marketplace for all. We continue to monitor apps for violations of our guidelines and if we find content that violates our guidelines and is harmful to users we will remove those apps from the store as we have done previously.

Multiple tech platforms have moved to close the door on Jones or limit his reach in recent weeks, including Google, which shuttered his YouTube channel, and Facebook, which removed a series of videos and banned Jones’ personal account for 30 days as well as issuing the InfoWars page with a warning strike. Spotify, Pinterest, LinkedIn, MailChimp and others have also taken action.

Twitter, though, has not banned or otherwise censured Jones — despite InfoWars’ continued presence on its platform threatening CEO Jack Dorsey’s claimed push to improve conversational health on the service. Snapchat is also merely monitoring Jones’ continued presence on its platform.

In an unsurprising twist, the additional exposure Jones/InfoWars has gained as a result of news coverage of the various platform bans appears to have given his apps some passing uplift…

So Apple’s decision to remove links to Jones’ podcasts yet allow the InfoWars app looks contradictory.

The company is certainly treading a fine line here. But there’s a technical distinction between a link to a podcast in a directory, where podcast makers can freely list their stuff (with the content hosted elsewhere), vs an app in Apple’s App Store which has gone through Apple’s review process and the content is being hosted by Apple.

When it removed Jones’ podcasts Apple was, in effect, just removing a pointer to the content, not the content itself. The podcasts also represented discrete content — meaning each episode which was being pointed to could be judged against Apple’s community standards. (And one podcast link was not removed, for example, though five were.)

Whereas Jones (mostly) uses the InfoWars app to livestream podcast shows. Meaning the content in the InfoWars app is more ephemeral — making it more difficult for Apple to cross-check against its community standards. The streamer has to be caught in the act, as it were.

Google has also not pulled the InfoWars app from its Play Store despite shuttering Jones’ YouTube channel, and a spokesperson told BuzzFeed: “We carefully review content on our platforms and products for violations of our terms and conditions, or our content policies. If an app or user violates these, we take action.”

That said, both the iOS and Android versions of the app also include ‘articles’ that can be saved by users, so some of the content appears to be less ephemeral.

The iOS listing further claims the app lets users “stay up to date with articles as they’re published from Infowars.com” — which at least suggests some of the content is identical to what’s being spouted on Jones’ own website (where he’s only subject to his own T&Cs).

But in order to avoid falling foul of Apple and Google’s app store guidelines, Jones is likely carefully choosing which articles are funneled into the apps — to avoid breaching app store T&Cs against abuse and hateful conduct, and (most likely also) to hook more eyeballs with more soft-ball conspiracy nonsense before, once they’re pulled into his orbit, blasting people with his full bore BS shotgun on his own platform.

Sample articles depicted in screenshots in the App Store listing for the app include one claiming that George Soros is “literally behind Starbucks’ sensitivity training” and another, from the ‘science’ section, pushing some junk claims about vision correction — so all garbage, but not at the same level of anti-truth toxicity that Jones has become notorious for on his shows. The Play Store listing flags a different selection of sample articles with a slightly more international flavor — including several on European far right politics, in addition to U.S.-focused political stories about Trump and some outrage about domestic ‘political correctness gone mad’. So the static sample content at least isn’t enough to violate any T&Cs.

Still, the livestream component of the apps presents an ongoing problem for Apple and Google — given both have stated that his content elsewhere violates their standards. And it’s not clear how sustainable it will be for them to continue to allow Jones a platform to livestream hate from inside the walls of their commercial app stores.

Beyond that, narrowly judging Jones — a purveyor of weaponized anti-truth (most egregiously his claim that the Sandy Hook Elementary School shooting was a hoax) — by the content he uploads directly to their servers also ignores the wider context (and toxic baggage) around him.

And while no tech companies want their brands to be perceived as toxic to conservative points of view, InfoWars does not represent conservative politics. Jones peddles far right conspiracy theories, whips up hate and spreads junk science in order to generate fear and make money selling supplements. It’s cynical manipulation not conservatism.

Both companies should revisit their decision. Hateful anti-truth damages the marketplace of ideas they claim to want to champion, and chills free speech through the violent bullying of the minorities and individuals it turns into targets and victims.

Earlier this week 9to5Mac reported that CNN’s Dylan Byers said the decision to remove links to InfoWars’ podcasts had been made at the top of Apple, after a meeting between CEO Tim Cook and SVP Eddy Cue. Byers reported it was also the execs’ decision not to remove the InfoWars app.

We’ve reached out to Apple to ask whether it will be monitoring InfoWars’ livestreams directly for any violations of its community standards and will update this story with any response.

Apple got even tougher on ad trackers at WWDC

Apple unveiled a handful of pro-privacy enhancements for its Safari web browser at its annual developer event yesterday, building on an ad tracker blocker it announced at WWDC a year ago.

The feature — which Apple dubbed ‘Intelligent Tracking Prevention’ (ITP) — places restrictions on cookies based on how frequently a user interacts with the website that dropped them. After 30 days of a site not being visited, Safari purges its cookies entirely.
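The 30-day rule itself is simple enough to sketch in a few lines of Python — a toy illustration of the policy only, not WebKit’s implementation (the function name here is hypothetical):

```python
from datetime import datetime, timedelta

def should_purge_cookies(last_interaction: datetime, now: datetime,
                         window_days: int = 30) -> bool:
    """Purge a site's cookies once the user hasn't interacted
    with that site for the full window (30 days in ITP)."""
    return now - last_interaction > timedelta(days=window_days)

# A site last visited 45 days ago gets its cookies purged:
should_purge_cookies(datetime(2018, 4, 1), datetime(2018, 5, 16))  # → True
```

The key design point is that the clock resets on *interaction*, so sites a user actually visits keep their cookies indefinitely, while trackers riding along on sites the user never deliberately visits age out.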

Since ITP debuted, a major data misuse scandal has engulfed Facebook, and consumer awareness of how social platforms and data brokers track people around the web — eroding their privacy by building detailed profiles to target them with ads — has likely never been higher.

Apple was ahead of the pack on this issue and, by getting even tougher on trackers, is now nicely positioned to surf a rising wave of concern about how web infrastructure watches what users are doing.

Cupertino’s business model also of course aligns with privacy, given the company’s main money spinner is device sales. And features intended to help safeguard users’ data remain one of the clearest and most compelling points of differentiation vs rival devices running Google’s Android OS, for example.

“Safari works really hard to protect your privacy and this year it’s working even harder,” said Craig Federighi, Apple’s SVP of software engineering during yesterday’s keynote.

He then took direct aim at social media giant Facebook — highlighting how social plugins such as Like buttons, and comment fields which use a Facebook login, form a core part of the tracking infrastructure that follows people as they browse across the web.

In April US lawmakers also closely questioned Facebook’s CEO Mark Zuckerberg about the information the company gleans on users via their offsite web browsing, gathered via its tracking cookies and pixels — receiving only evasive answers in return.

Facebook subsequently announced it will launch a Clear History feature, claiming this will let users purge their browsing history from Facebook. But it’s less clear whether the control will allow people to clear their data off of Facebook’s servers entirely.

The feature requires users to trust that Facebook is doing what it claims to be doing. And plenty of questions remain. So, from a consumer point of view, it’s much better to defeat or dilute tracking in the first place — which is what the clutch of features Apple announced yesterday are intended to do.

“It turns out these [like buttons and comment fields] can be used to track you whether you click on them or not. And so this year we are shutting that down,” said Federighi, drawing sustained applause and appreciative woos from the WWDC audience.

He demoed how Safari will show a pop-up asking users whether or not they want to allow the plugin to track their browsing — letting users “decide to keep your information private”, as he put it.

Safari will also immediately partition cookies for domains that Apple has “determined to have tracking abilities” — removing the 24-hour window after a website interaction that Apple allowed in the first version of ITP.

It has also engineered a feature designed to detect when a domain is used solely as a “first party bounce tracker” — i.e. it is never used as a third party content provider but tracks the user purely through navigational redirects — with Safari purging website data in such instances too.

Another pro-privacy enhancement detailed by Federighi yesterday is intended to counter browser fingerprinting techniques that are also used to track users from site to site — and which can be a way of doing so even when/if tracking cookies are cleared.

“Data companies are clever and relentless,” he said. “It turns out that when you browse the web your device can be identified by a unique set of characteristics, like its configuration, the fonts you have installed, and the plugins you might have installed on the device.

“With Mojave we’re making it much harder for trackers to create a unique fingerprint. We’re presenting websites with only a simplified system configuration. We show them only built-in fonts. And legacy plugins are no longer supported so those can’t contribute to a fingerprint. And as a result your Mac will look more like everyone else’s Mac and it will be dramatically more difficult for data companies to uniquely identify your device and track you.”

In a post detailing ITP 2.0 on its WebKit developer blog, Apple security engineer John Wilander writes that Apple researchers found that cross-site trackers “help each other identify the user”.

“This is basically one tracker telling another tracker that ‘I think it’s user ABC’, at which point the second tracker tells a third tracker ‘Hey, Tracker One thinks it’s user ABC and I think it’s user XYZ’. We call this tracker collusion, and ITP 2.0 detects this behavior through a collusion graph and classifies all involved parties as trackers,” he explains, warning developers they should therefore “avoid making unnecessary redirects to domains that are likely to be classified as having tracking ability” — or else risk being mistaken for a tracker and penalized by having website data purged.
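Wilander’s description amounts to propagating a ‘tracker’ label across a graph of observed redirects. Here’s a toy sketch of that idea — purely illustrative, not Apple’s classifier (which ITP builds with machine learning and far richer signals); the function name and domains are hypothetical:

```python
def classify_colluders(redirects, known_trackers):
    """Given observed cross-domain redirects as (source, destination)
    pairs, classify any domain that redirects to a classified tracker
    as a tracker too, propagating until the set stops growing."""
    trackers = set(known_trackers)
    changed = True
    while changed:
        changed = False
        for src, dst in redirects:
            if dst in trackers and src not in trackers:
                trackers.add(src)
                changed = True
    return trackers

redirects = [("siteA.example", "tracker.example"),
             ("siteB.example", "siteA.example")]
classify_colluders(redirects, {"tracker.example"})
# → {"tracker.example", "siteA.example", "siteB.example"}
```

This is also why Wilander’s warning to developers matters: an innocent domain that habitually redirects into tracker territory can end up inside the collusion set.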

ITP 2.0 will also downgrade the referrer header that a tracker can receive to “just the page’s origin for third party requests to domains that the system has classified as possible trackers and which have not received user interaction” (Apple specifies this is not just a visit to a site but must include an interaction, such as a tap or click).

Apple gives the example of a user visiting ‘https://store.example/baby-products/strollers/deluxe-navy-blue.html’, and that page loading a resource from a tracker — which prior to ITP 2.0 would have received a request containing the full referrer (which contains details of the exact product being bought and from which lots of personal information can be inferred about the user).

But under ITP 2.0, the referrer will be reduced to just “https://store.example/”. Which is a very clear privacy win.
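In code terms, the downgrade just strips everything after the origin. A minimal sketch, assuming a Python helper (not WebKit’s actual code):

```python
from urllib.parse import urlparse

def downgrade_referrer(url: str) -> str:
    """Reduce a full referrer URL to its origin (scheme + host),
    as ITP 2.0 does for requests to classified trackers that have
    not received user interaction."""
    parts = urlparse(url)
    return f"{parts.scheme}://{parts.netloc}/"

downgrade_referrer(
    "https://store.example/baby-products/strollers/deluxe-navy-blue.html")
# → "https://store.example/"
```

The tracker still learns which site the user came from, but no longer which page — which is exactly the path-level inference the change is designed to kill.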

Another welcome privacy update for Mac users that Apple announced yesterday — albeit one where it’s really just playing catch-up with Windows and iOS — is expanded privacy controls in Mojave around the camera and microphone, which are now protected by default from any app you run. The user has to authorize access, much as on iOS.

Facebook, Google face first GDPR complaints over “forced consent”

After two years coming down the pipe at tech giants, Europe’s new privacy framework, the General Data Protection Regulation (GDPR), is now being applied — and long-time Facebook privacy critic Max Schrems has wasted no time in filing four complaints relating to (certain) companies’ ‘take it or leave it’ stance on consent.

The complaints have been filed on behalf of (unnamed) individual users — with one filed against Facebook; one against Facebook-owned Instagram; one against Facebook-owned WhatsApp; and one against Google’s Android.

Schrems argues that the companies are using a strategy of “forced consent” to continue processing the individuals’ personal data — when in fact the law requires that users be given a free choice unless a consent is strictly necessary for provision of the service. (And, well, Facebook claims its core product is social networking — rather than farming people’s personal data for ad targeting.)

“It’s simple: Anything strictly necessary for a service does not need consent boxes anymore. For everything else users must have a real choice to say ‘yes’ or ‘no’,” Schrems writes in a statement.

“Facebook has even blocked accounts of users who have not given consent,” he adds. “In the end users only had the choice to delete the account or hit the “agree”-button — that’s not a free choice, it more reminds of a North Korean election process.”

We’ve reached out to all the companies involved for comment and will update this story with any response.

The European privacy campaigner most recently founded a not-for-profit digital rights organization to focus on strategic litigation around the bloc’s updated privacy framework, and the complaints have been filed via this crowdfunded NGO — which is called noyb (aka ‘none of your business’).

As we pointed out in our GDPR explainer, the provision in the regulation allowing for collective enforcement of individuals’ data rights is an important one, with the potential to strengthen the implementation of the law by enabling non-profit organizations such as noyb to file complaints on behalf of individuals — thereby helping to redress the imbalance between corporate giants and consumer rights.

That said, the GDPR’s collective redress provision is a component that Member States can choose to derogate from, which helps explain why the first four complaints have been filed with data protection agencies in Austria, Belgium, France and Hamburg in Germany — regions that also have data protection agencies with a strong record defending privacy rights.

Given that the Facebook companies involved in these complaints have their European headquarters in Ireland it’s likely the Irish data protection agency will get involved too. And it’s fair to say that, within Europe, Ireland does not have a strong reputation for defending data protection rights.

But the GDPR allows for DPAs in different jurisdictions to work together in instances where they have joint concerns and where a service crosses borders — so noyb’s action looks intended to test this element of the new framework too.

Under the penalty structure of GDPR, major violations of the law can attract fines as large as 4% of a company’s global revenue which, in the case of Facebook or Google, implies they could be on the hook for more than a billion euros apiece — if they are deemed to have violated the law, as the complaints argue.

That said, given how freshly fixed in place the rules are, some EU regulators may well tread softly on the enforcement front — at least in the first instances, to give companies some benefit of the doubt and/or a chance to make amends to come into compliance if they are deemed to be falling short of the new standards.

However, in instances where companies themselves appear to be attempting to deform the law with a willfully self-serving interpretation of the rules, regulators may feel they need to act swiftly to nip any disingenuousness in the bud.

“We probably will not immediately have billions of penalty payments, but the corporations have intentionally violated the GDPR, so we expect a corresponding penalty under GDPR,” writes Schrems.

Only yesterday, for example, Facebook founder Mark Zuckerberg — speaking in an on stage interview at the VivaTech conference in Paris — claimed his company hasn’t had to make any radical changes to comply with GDPR, and further claimed that a “vast majority” of Facebook users are willingly opting in to targeted advertising via its new consent flow.

“We’ve been rolling out the GDPR flows for a number of weeks now in order to make sure that we were doing this in a good way and that we could take into account everyone’s feedback before the May 25 deadline. And one of the things that I’ve found interesting is that the vast majority of people choose to opt in to make it so that we can use the data from other apps and websites that they’re using to make ads better. Because the reality is if you’re willing to see ads in a service you want them to be relevant and good ads,” said Zuckerberg.

He did not mention that the dominant social network does not offer people a free choice on accepting or declining targeted advertising. The new consent flow Facebook revealed ahead of GDPR only offers the ‘choice’ of quitting Facebook entirely if a person does not want to accept targeted advertising. Which, well, isn’t much of a choice given how powerful the network is. (Additionally, it’s worth pointing out that Facebook continues tracking non-users — so even deleting a Facebook account does not guarantee that Facebook will stop processing your personal data.)

Asked about how Facebook’s business model will be affected by the new rules, Zuckerberg essentially claimed nothing significant will change — “because giving people control of how their data is used has been a core principle of Facebook since the beginning”.

“The GDPR adds some new controls and then there’s some areas that we need to comply with but overall it isn’t such a massive departure from how we’ve approached this in the past,” he claimed. “I mean I don’t want to downplay it — there are strong new rules that we’ve needed to put a bunch of work into into making sure that we complied with — but as a whole the philosophy behind this is not completely different from how we’ve approached things.

“In order to be able to give people the tools to connect in all the ways they want and build community, a lot of the philosophy that is encoded in a regulation like GDPR is really how we’ve thought about all this stuff for a long time. So I don’t want to understate the areas where there are new rules that we’ve had to go and implement but I also don’t want to make it seem like this is a massive departure in how we’ve thought about this stuff.”

Zuckerberg faced a range of tough questions on these points from the EU parliament earlier this week. But he avoided answering them in any meaningful detail.

So EU regulators are essentially facing a first test of their mettle — i.e. whether they are willing to step up and defend the line of the law against big tech’s attempts to reshape it in their business model’s image.

Privacy laws are nothing new in Europe but robust enforcement of them would certainly be a breath of fresh air. And now at least, thanks to GDPR, there’s a penalties structure in place to provide incentives as well as teeth, and spin up a market around strategic litigation — with Schrems and noyb in the vanguard.

Schrems also makes the point that small startups and local companies are less likely to be able to use the kind of strong-arm ‘take it or leave it’ tactics on users that big tech is able to use to extract consent on account of the reach and power of their platforms — arguing there’s a competition concern that GDPR should also help to redress.

“The fight against forced consent ensures that the corporations cannot force users to consent,” he writes. “This is especially important so that monopolies have no advantage over small businesses.”

Image credit: noyb.eu

It was not consent, it was concealment 

Facebook’s response to the clutch of users who — triggered by the data misuse scandal and #DeleteFacebook backlash to delve into their settings — are suddenly waking up to the fact that the social behemoth quietly and continuously harvests sensitive personal data about them and their friends tells you everything you need to know about the rotten state of the tech industry’s ad-supported business models.

“People have to expressly agree to use this feature,” the company wrote in a defensively worded blog post at the weekend, defending how it tracks some users’ SMS and phone call metadata — a post it had the impressive brass neck to self-describe as a “fact check”.

“Call and text history logging is part of an opt-in feature for people using Messenger or Facebook Lite on Android. This helps you find and stay connected with the people you care about, and provides you with a better experience across Facebook.”

So, tl;dr, if you’re shocked to see what Facebook knows about you, well, that’s your own dumb fault because you gave Facebook permission to harvest all that personal data.

Not just Facebook either, of course. A fair few Android users appear to be having a similarly rude awakening about how Google’s mobile platform (and apps) slurp location data pervasively — at least unless the user is very, very careful to lock everything down.

But the difficulty of A) knowing exactly what data is being collected for what purposes and B) finding the cunning concealed/intentionally obfuscated master setting which will nix all the tracking is by design, of course.

Privacy hostile design.

No accident then that Facebook has just given its settings pages a haircut — as it scrambles to rein in user outrage over the still snowballing Cambridge Analytica data misuse scandal — consolidating user privacy controls onto one screen instead of the full TWENTY they had been scattered across before.

Ahem.

Insert your ‘stable door being bolted’ GIF of choice right here.

Another example of Facebook’s privacy hostile design: As my TC colleague Romain Dillet pointed out last week, the company deploys misleading wording during the Messenger onboarding process which is very clearly intended to push users towards clicking on a big blue “turn on” (data-harvesting) button — inviting users to invite the metaphorical Facebook vampire over the threshold so it can perpetually suck data.

Facebook does this by implying that if they don’t bare their neck and “turn on” the continuous contacts uploading they somehow won’t be able to message any of their friends…

An image included with Facebook’s statement.

That’s complete nonsense of course. But opportunistic emotional blackmail is something Facebook knows a bit about — having been previously caught experimenting on users without their consent to see if it could affect their mood.

Add to that, the company has scattered its social plugins and tracking pixels all around the World Wide Web, enabling it to expand its network of surveillance signals — again, without it being entirely obvious to Internet users that Facebook is watching and recording what they are doing and liking outside its walled garden.

According to pro-privacy search engine DuckDuckGo Facebook’s trackers are on around a quarter of the top million websites. While Google’s are on a full ~three-quarters.

So you don’t even have to be a user to be pulled into this surveillance dragnet.

In its tone-deaf blog post trying to defang user concerns about its SMS/call metadata tracking, Facebook doesn’t go into any meaningful detail about exactly why it wants this granular information — merely writing vaguely that: “Contact importers are fairly common among social apps and services as a way to more easily find the people you want to connect with.”

It’s certainly not wrong that other apps and services have also been sucking up your address book.

But that doesn’t make the fact Facebook has been tracking who you’re calling and messaging — how often/for how long — any less true or horrible.

This surveillance is controversial not because Facebook gained permission to data mine your phone book and activity — which, technically speaking, it will have done, via one of the myriad socially engineered, fuzzily worded permission pop-ups starring cutesy looking cartoon characters.

But rather because the consent was not informed.

Or to put it more plainly, Facebookers had no idea what they were agreeing to let the company do.

Which is why people are so horrified now to find what the company has been routinely logging — and potentially handing over to third parties on its ad platform.

Phone calls to your ex? Of course Facebook can see them. Texts to the number of a health clinic you entered into your phonebook? Sure. How many times you phoned a law firm? Absolutely. And so on and on it goes.

This is the rude awakening that no number of defensive ‘fact checks’ from Facebook — nor indeed defensive tweet storms from current CSO Alex Stamos — will be able to smooth away.

“There are long-standing issues with organisations of all kinds, across multiple sectors, misapplying, or misunderstanding, the provisions in data protection law around data subject consent,” says data protection expert Jon Baines, an advisor at UK law firm Mishcon de Reya LLP and also chair of NADPO, when we asked what the Facebook-Cambridge Analytica data misuse scandal says about how broken the current system of online consent is.

“The current European Data Protection Directive (under which [the UK] Data Protection Act sits) says that consent means any freely given specific and informed indication of their wishes by which a data subject signifies agreement to their personal data being processed. In a situation under which a data subject legitimately later claims that they were unaware what was happening with their data, it is difficult to see how it can reasonably be said that they had “consented” to the use.”

Ironically, given recent suggestions by the founder of defunct Facebook rival Path of a possible reboot to cater to the #DeleteFacebook crowd — Path actually found itself in an uncomfortable privacy hotseat all the way back in 2012, when it was discovered to have been uploading users’ address book information without asking for permission to do so.

Having been caught with its fingers in the proverbial cookie jar, Path apologized and deleted the data.

The irony is that while Path suffered a moment of outrage, Facebook is only facing a major privacy backlash now — after it’s spent so many years calmly sucking up people’s contacts data, also without them being aware because Facebook nudged them to think they needed to tap that big blue ‘turn on’ button.

Exploiting users’ trust — and using a technicality to unhook people’s privacy — is proving pretty costly for Facebook right now though.

And the risks of attempting to hoodwink consent out of your users are about to step up sharply too, at least in Europe.

Baines points out that the EU’s updated privacy framework, GDPR, tightens the existing privacy standard — adding the words “clear affirmative act” and “unambiguous” to consent requirements.

More importantly, he notes it introduces “more stringent requirements, and certain restrictions, which are not, or are not explicit, in current law, such as the requirement to be able to demonstrate that a data subject has given (valid) consent” (emphasis his).

“Consent must also now be separable from other written agreements, and in an intelligible and easily accessible form, using clear and plain language. If these requirements are enforced by data protection supervisory authorities and the courts, then we could well see a significant shift in habits and practices,” he adds.

The GDPR framework is also backed up by a new regime of major penalties for data protection violations which can scale up to 4% of a company’s global turnover.

And the risk of fines so large will be much harder for companies to ignore — and thus playing fast and loose with data, and moving fast and breaking things (as Facebook used to say), doesn’t sound so smart anymore.

As I wrote back in 2015, the online privacy lie is unraveling.

It’s taken a little longer than I’d hoped, for sure. But here we are in 2018 — and it’s not just the #MeToo movement that’s turned consent into a buzzword.