Category Archives: Android

Is Europe closing in on an antitrust fix for surveillance technologists?

The German Federal Cartel Office’s decision to order Facebook to change how it processes users’ personal data this week is a sign the antitrust tide could at last be turning against platform power.

One European Commission source we spoke to, who was commenting in a personal capacity, described it as “clearly pioneering” and “a big deal”, even without Facebook being fined a dime.

The FCO’s decision instead bans the social network from linking user data across different platforms it owns, unless it gains people’s consent (nor can it make use of its services contingent on such consent). Facebook is also prohibited from gathering and linking data on users from third party websites, such as via its tracking pixels and social plugins.

The order is not yet in force, and Facebook is appealing, but should it come into force the social network faces being de facto shrunk by having its platforms siloed at the data level.

To comply with the order Facebook would have to ask users to freely consent to being data-mined — which the company does not do at present.

Yes, Facebook could still try to manipulate users toward the outcome it wants, but doing so would open it to further challenge under EU data protection law, as its current approach to consent is already being challenged.

The EU’s updated privacy framework, GDPR, requires consent to be specific, informed and freely given. That standard supports challenges to Facebook’s (still fixed) entry ‘price’ to its social services. To play you still have to agree to hand over your personal data so it can sell your attention to advertisers. But legal experts contend that’s neither privacy by design nor default.

The only ‘alternative’ Facebook offers is to tell users they can delete their account. Not that doing so would stop the company from tracking you around the rest of the mainstream web anyway. Facebook’s tracking infrastructure is also embedded across the wider Internet so it profiles non-users too.

EU data protection regulators are still investigating a very large number of consent-related GDPR complaints.

But the German FCO, which said it liaised with privacy authorities during its investigation of Facebook’s data-gathering, has dubbed this type of behavior “exploitative abuse”, having also deemed the social service to hold a monopoly position in the German market.

So there are now two lines of legal attack — antitrust and privacy law — threatening Facebook’s (and indeed other adtech companies’) surveillance-based business model across Europe.

A year ago the German antitrust authority also announced a probe of the online advertising sector, responding to concerns about a lack of transparency in the market. Its work here is by no means done.

Data limits

The lack of a big flashy fine attached to the German FCO’s order against Facebook makes this week’s story less of a major headline than recent European Commission antitrust fines handed to Google — such as the record-breaking $5BN penalty issued last summer for anticompetitive behaviour linked to the Android mobile platform.

But the decision is arguably just as, if not more, significant, because of the structural remedies being ordered upon Facebook. These remedies have been likened to an internal break-up of the company — with enforced internal separation of its multiple platform products at the data level.

This of course runs counter to (ad) platform giants’ preferred trajectory, which has long been to tear modesty walls down; pool user data from multiple internal (and indeed external) sources, in defiance of the notion of informed consent; and mine all that personal (and sensitive) stuff to build identity-linked profiles to train algorithms that predict (and, some contend, manipulate) individual behavior.

Because if you can predict what a person is going to do you can choose which advert to serve to increase the chance they’ll click. (Or as Mark Zuckerberg puts it: ‘Senator, we run ads.’)

This means that a regulatory intervention that interferes with an ad tech giant’s ability to pool and process personal data starts to look really interesting. Because a Facebook that can’t join data dots across its sprawling social empire — or indeed across the mainstream web — wouldn’t be such a massive giant in terms of data insights. And nor, therefore, surveillance oversight.

Each of its platforms would be forced to be a more discrete (and, well, discreet) kind of business.

Competing against data-siloed platforms with a common owner — instead of a single interlinked mega-surveillance-network — also starts to sound almost possible. It suggests a playing field that’s reset, if not entirely levelled.

(Whereas, in the case of Android, the European Commission did not order any specific remedies — allowing Google to come up with ‘fixes’ itself; and so to shape the most self-serving ‘fix’ it can think of.)

Meanwhile, just look at where Facebook is now aiming to get to: A technical unification of the backend of its different social products.

Such a merger would collapse even more walls and fully enmesh platforms that started life as entirely separate products before they were folded into Facebook’s empire (also, let’s not forget, via surveillance-informed acquisitions).

Facebook’s plan to unify its products on a single backend platform looks very much like an attempt to throw up technical barriers to antitrust hammers. It’s at least harder to imagine breaking up a company if its multiple, separate products are merged onto one unified backend which functions to cross and combine data streams.

Set against Facebook’s sudden desire to technically unify its full-flush of dominant social networks (Facebook Messenger; Instagram; WhatsApp) is a rising drum-beat of calls for competition-based scrutiny of tech giants.

This has been building for years, as the market power — and even democracy-denting potential — of surveillance capitalism’s data giants has telescoped into view.

Calls to break up tech giants no longer carry a suggestive punch. Regulators are routinely asked whether it’s time. As the European Commission’s competition chief, Margrethe Vestager, was when she handed down Google’s latest massive antitrust fine last summer.

Her response then was that she wasn’t sure breaking Google up is the right answer — preferring to try remedies that might allow competitors to have a go, while also emphasizing the importance of legislating to ensure “transparency and fairness in the business to platform relationship”.

But it’s interesting that the idea of breaking up tech giants now plays so well as political theatre, suggesting that wildly successful consumer technology companies — which have long dined out on shiny convenience-based marketing claims, made ever so saccharine sweet via the lure of ‘free’ services — have lost a big chunk of their populist pull, dogged as they have been by so many scandals.

From terrorist content and hate speech, to election interference, child exploitation, bullying, abuse. There’s also the matter of how they arrange their tax affairs.

The public perception of tech giants has matured as the ‘costs’ of their ‘free’ services have scaled into view. The upstarts have also become the establishment. People see not a new generation of ‘cuddly capitalists’ but another bunch of multinationals; highly polished but remote money-making machines that take rather more than they give back to the societies they feed off.

Google’s trick of naming each Android iteration after a different sweet treat makes for an interesting parallel to the (also now shifting) public perceptions around sugar, following closer attention to health concerns. What does its sickly sweetness mask? And after the sugar tax, we now have politicians calling for a social media levy.

Just this week the deputy leader of the main opposition party in the UK called for setting up a standalone Internet regulator with the power to break up tech monopolies.

Talking about breaking up well-oiled, wealth-concentration machines is being seen as a populist vote winner. And companies that political leaders used to flatter and seek out for PR opportunities find themselves treated as political punchbags; called to attend awkward grillings by hard-grafting committees, or verbally taken to task at the highest profile public podia. (Though some non-democratic heads of state are still keen to press tech giant flesh.)

In Europe, Facebook’s repeat snubs of the UK parliament’s requests last year for Zuckerberg to face policymakers’ questions certainly did not go unnoticed.

Zuckerberg’s empty chair at the DCMS committee has become both a symbol of the company’s failure to accept wider societal responsibility for its products, and an indication of market failure; the CEO so powerful he doesn’t feel answerable to anyone; neither his most vulnerable users nor their elected representatives. Hence UK politicians on both sides of the aisle making political capital by talking about cutting tech giants down to size.

The political fallout from the Cambridge Analytica scandal looks far from done.

Quite how a UK regulator could successfully swing a regulatory hammer to break up a global Internet giant such as Facebook, which is headquartered in the U.S., is another matter. But policymakers have already crossed the Rubicon of public opinion and are relishing talking up having a go.

That represents a sea-change vs the neoliberal consensus that allowed competition regulators to sit on their hands for more than a decade as technology upstarts quietly hoovered up people’s data and bagged rivals, and basically went about transforming themselves from highly scalable startups into market-distorting giants with Internet-scale data-nets to snag users and buy or block competing ideas.

The political spirit looks willing to go there, and now the mechanism for breaking platforms’ distorting hold on markets may also be shaping up.

The traditional antitrust remedy of breaking a company along its business lines still looks unwieldy when faced with the blistering pace of digital technology. The problem is delivering such a fix fast enough that the business hasn’t already reconfigured to route around the reset. 

Commission antitrust decisions on the tech beat have stepped up impressively in pace on Vestager’s watch. Yet it still feels like watching paper pushers wading through treacle to try and catch a sprinter. (And Europe hasn’t gone so far as trying to impose a platform break up.) 

But the German FCO decision against Facebook hints at an alternative way forward for regulating the dominance of digital monopolies: Structural remedies that focus on controlling access to data which can be relatively swiftly configured and applied.

Vestager, whose term as EC competition chief may be coming to its end this year (even if other Commission roles remain in potential and tantalizing contention), has championed this idea herself.

In an interview on BBC Radio 4’s Today program in December she poured cold water on the stock question about breaking tech giants up — saying instead the Commission could look at how larger firms got access to data and resources as a means of limiting their power. Which is exactly what the German FCO has done in its order to Facebook. 

At the same time, Europe’s updated data protection framework has gained the most attention for the size of the financial penalties that can be issued for major compliance breaches. But the regulation also gives data watchdogs the power to limit or ban processing. And that power could similarly be used to reshape a rights-eroding business model or snuff out such business entirely.

The merging of privacy and antitrust concerns is really just a reflection of the complexity of the challenge regulators now face trying to rein in digital monopolies. But they’re tooling up to meet that challenge.

Speaking in an interview with TechCrunch last fall, Europe’s data protection supervisor, Giovanni Buttarelli, told us the bloc’s privacy regulators are moving towards more joint working with antitrust agencies to respond to platform power. “Europe would like to speak with one voice, not only within data protection but by approaching this issue of digital dividend, monopolies in a better way — not per sectors,” he said. “But first joint enforcement and better co-operation is key.”

The German FCO’s decision represents tangible evidence of the kind of regulatory co-operation that could — finally — crack down on tech giants.

Blogging in support of the decision this week, Buttarelli asserted: “It is not necessary for competition authorities to enforce other areas of law; rather they need simply to identify where the most powerful undertakings are setting a bad example and damaging the interests of consumers. Data protection authorities are able to assist in this assessment.”

He also had a prediction of his own for surveillance technologists, warning: “This case is the tip of the iceberg — all companies in the digital information ecosystem that rely on tracking, profiling and targeting should be on notice.”

So perhaps, at long last, the regulators have figured out how to move fast and break things.

Twitter bug revealed some Android users’ private tweets

Twitter accidentally revealed some users’ “protected” (aka, private) tweets, the company disclosed this afternoon. The “Protect your Tweets” setting typically allows people to use Twitter in a non-public fashion. These users get to approve who can follow them and who can view their content. For some Android users over a period of several years, that may not have been the case – their tweets were actually made public as a result of this bug.

The company says that the issue impacted Twitter for Android users who made certain account changes while the “Protect your Tweets” option was turned on.

For example, if the user had changed their account email address, the “Protect your Tweets” setting was disabled.

Twitter tells TechCrunch that’s just one example of an account change that could have prompted the issue. We asked for other examples, but the company declined to share any specifics.
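The mechanism Twitter describes — an unrelated account change silently flipping a privacy setting off — is a classic settings-regression bug. Here is a minimal sketch of that bug class; all names and logic are entirely hypothetical, since Twitter has not published its actual code or root cause:

```python
# Hypothetical sketch of a settings-regression bug: an account-update
# handler rebuilds the settings record from defaults instead of copying
# the existing record, silently resetting the "protected" privacy flag.
# None of these names reflect Twitter's real implementation.

DEFAULT_SETTINGS = {"protected": False, "email": None}

def update_email_buggy(account: dict, new_email: str) -> dict:
    # Bug: starts from defaults, so a previously-enabled "protected"
    # flag is dropped as a side effect of changing the email address.
    updated = dict(DEFAULT_SETTINGS)
    updated["email"] = new_email
    return updated

def update_email_fixed(account: dict, new_email: str) -> dict:
    # Fix: copy every existing setting forward, change only the email.
    updated = dict(account)
    updated["email"] = new_email
    return updated

account = {"protected": True, "email": "old@example.com"}
# Buggy path: tweets become public as a side effect of an email change.
assert update_email_buggy(account, "new@example.com")["protected"] is False
# Fixed path: the privacy flag survives the unrelated change.
assert update_email_fixed(account, "new@example.com")["protected"] is True
```

The sketch also illustrates why such bugs can go unnoticed for years: the account change itself succeeds, and nothing in the update path validates that untouched settings were preserved.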

What’s fairly shocking is how long this issue has been happening.

Twitter says that users may have been impacted by the problem if they made these accounts changes between November 3, 2014, and January 14, 2019 – the day the bug was fixed. 

The company has now informed those who were affected by the issue, and has re-enabled the “Protect your Tweets” setting if it had been disabled on those accounts. But Twitter says it’s making a public announcement because it “can’t confirm every account that may have been impacted.” (!!!)

The company explains to us that it was only able to notify people whose accounts it could confirm were impacted, and says it doesn’t have a complete list of impacted accounts. For that reason, it’s unable to offer an estimate of how many Twitter for Android users were affected in total.

This is a sizable mistake on Twitter’s part, as it essentially made content that users had explicitly indicated they wanted private available to the public. It’s unclear at this time whether the issue will result in a GDPR violation and fine.

The one bright spot is that some of the impacted users may have noticed their account had become public because they would have received alerts – like notifications that people were following them without their direct consent. That could have prompted the user to re-enable the “protect tweets” setting on their own. But they may have chalked up the issue to user error or a small glitch, not realizing it was a system-wide bug.

“We recognize and appreciate the trust you place in us, and are committed to earning that trust every day,” wrote Twitter in a statement. “We’re very sorry this happened and we’re conducting a full review to help prevent this from happening again.”

The company says it believes the issue is now fully resolved.

Twitter’s de-algorithmizing ‘sparkle button’ rolls out on Android

After launching on iOS, Twitter is giving Android users the ability to easily switch between seeing the reverse-chronological “latest tweets” and the algorithmic “top tweets” feeds on their home page. The company announced the rollout at a media event in New York.

The “sparkle button” is a way for Twitter to appease long-time power tweeters while also shifting more of its user base to the algorithmic feed, which the company says has served to increase the number of conversations happening on the platform.


Facebook is the new crapware

Welcome to 2019 where we learn Facebook is the new crapware.

Sorry #DeleteFacebook, you never stood a chance.

Yesterday Bloomberg reported that the scandal-beset social media behemoth has inked an unknown number of agreements with Android smartphone makers, mobile carriers and OSes around the world to not only pre-load Facebook’s eponymous app on hardware but render the software undeleteable; a permanent feature of your device, whether you like how the company’s app can track your every move and digital action or not.

Bloomberg spoke to a U.S. owner of a Samsung Galaxy S8 who, after reading forum discussions about Samsung devices, found his own pre-loaded Facebook app could not be removed. It could only be “disabled”, with no explanation available to him as to what exactly that meant.

The Galaxy S8 retailed for $725+ when it went on sale in the U.S. two years ago.

A Facebook spokesperson told Bloomberg that a disabled permanent app doesn’t continue collecting data or sending information back to the company, but declined to specify exactly how many such pre-install deals Facebook has globally.

Samsung, meanwhile, told the news organization it provides a pre-installed Facebook app on “selected models” with options to disable it, adding that once disabled the app is no longer running.

After Bloomberg’s report was published, mobile researcher and regular Facebook technical tipster Jane Manchun Wong chipped in via Twitter, describing the pre-loaded Facebook app on Samsung devices as a “stub”.

Aka “basically a non-functional empty shell, acts as the placeholder for when the phone receives the ‘real’ Facebook app as app updates”.

Albeit many smartphone users have automatic updates enabled, and an omnipresent disabled app is always there to be re-enabled at a later date (and thus revived from a zombie state into a fully fledged Facebook app one future day).

While you can argue that having a popular app pre-installed can be helpful to consumers (though not at all helpful to Facebook competitors), a permanent pre-install is undoubtedly an anti-consumer move.

Crapware is named crapware for a reason. Having paid to own hardware, why should people be forever saddled with unwanted software, stub or otherwise?

And while Facebook is not the only such permanent app around (Apple got a lot of historical blowback for its own undeleteable apps, for instance; finally adding the ability to delete some built-in apps with iOS 12) it’s an especially egregious example given the company’s long and storied privacy hostile history.

Consumers who do not want their digital activity and location surveilled by the people-profiling giant will likely crave the peace of mind of not having any form of Facebook app, stub or otherwise, taking up space on their device.

But an unknown number of Android users are now finding out they don’t have that option.

Not cool, Facebook, not cool.

Another interesting question the matter raises is how permanent Facebook pre-installs are counted in Facebook’s user metrics, and indeed for ad targeting purposes.

In recent years the company has had to revise its ad metrics several times. So it’s valid to wonder whether a disabled Facebook app pre-install is being properly accounted for by the company (i.e. as minus one pair of eyeballs for its ad targeting empire) or not.

We asked Facebook about this point but at the time of writing it declined to comment beyond its existing statements to Bloomberg.

Watch Google CEO Sundar Pichai testify in Congress — on bias, China and more

Google CEO Sundar Pichai has managed to avoid the public political grillings that have come for tech leaders at Facebook and Twitter this year. But not today.

Today he will be in front of the House Judiciary committee for a hearing entitled: Transparency & Accountability: Examining Google and its Data Collection, Use and Filtering Practices.

The hearing kicks off at 10:00 ET — and will be streamed live via our YouTube channel (with the feed also embedded above in this post).

Announcing the hearing last month, committee chairman Bob Goodlatte said it would “examine potential bias and the need for greater transparency regarding the filtering practices of tech giant Google”.

Republicans have been pressuring the Silicon Valley giant over what they claim is ‘liberal bias’ embedded at the algorithmic level.

This summer President Trump publicly lashed out at Google, expressing displeasure about news search results for his name in a series of tweets in which he claimed: “Google & others are suppressing voices of Conservatives and hiding information and news that is good.”

Google rejected the allegation, responding then that: “Search is not used to set a political agenda and we don’t bias our results toward any political ideology.”

In his prepared remarks ahead of the hearing, Pichai reiterates this point.

“I lead this company without political bias and work to ensure that our products continue to operate that way. To do otherwise would go against our core principles and our business interests,” he writes. “We are a company that provides platforms for diverse perspectives and opinions—and we have no shortage of them among our own employees.”

He also seeks to paint a picture of Google as a proudly patriotic “American company” — playing up its role as a creator of local jobs and a bolster for the wider US economy, likely in the hopes of defusing some of the expected criticism from conservatives on the committee.

However his statement makes no mention of a separate controversy that’s been dogging Google this year — after news leaked this summer that it had developed a censored version of its search service for a potential relaunch in China.

The committee looks certain to question Google closely on its intentions vis-a-vis China.

In statements ahead of the hearing last month, House majority leader, Kevin McCarthy, flagged up reports he said suggested Google is “compromising its core principles by complying with repressive censorship mandates from China”.

Trust in general is a key theme, with lawmakers expressing frustration at both the opacity of Google’s blackbox algorithms, which ultimately shape content hierarchies on its platforms, and the difficulty they’ve had in getting facetime with its CEO to voice questions and concerns.

At a Senate Intelligence committee hearing three months ago, which was attended by Twitter CEO Jack Dorsey and Facebook COO Sheryl Sandberg, senators did not hide their anger that Pichai had turned down their invitation — openly ripping into company leaders for not bothering to show up. (Google offered to send its chief legal officer instead.)

“For months, House Republicans have called for greater transparency and openness from Google. Company CEO Sundar Pichai met with House Republicans in September to answer some of our questions. Mr. Pichai’s scheduled appearance in front of the House Judiciary Committee is another important step to restoring public trust in Google and all the companies that shape the Internet,” McCarthy wrote last month.

Other recent news that could inform additional questions for Pichai from the committee includes the revelation of yet another massive security breach at Google+, and a New York Times investigation of how mobile apps are location-tracking users — with far more Android apps found to contain location-sharing code than iOS apps.

Seized cache of Facebook docs raise competition and consent questions

A UK parliamentary committee has published the cache of Facebook documents it dramatically seized last week.

The documents were obtained via a legal discovery process by a startup that’s suing the social network in a California court, in a case related to Facebook changing data access permissions back in 2014/15.

The court had sealed the documents but the DCMS committee used rarely deployed parliamentary powers to obtain them from the Six4Three founder, during a business trip to London.

You can read the redacted documents here — all 250 pages of them.

In a series of tweets regarding the publication, committee chair Damian Collins says he believes there is “considerable public interest” in releasing them.

“They raise important questions about how Facebook treats users data, their policies for working with app developers, and how they exercise their dominant position in the social media market,” he writes.

“We don’t feel we have had straight answers from Facebook on these important issues, which is why we are releasing the documents. We need a more public debate about the rights of social media users and the smaller businesses who are required to work with the tech giants. I hope that our committee investigation can stand up for them.”

The committee has been investigating online disinformation and election interference for the best part of this year, and has been repeatedly frustrated in its attempts to extract answers from Facebook.

But it is protected by parliamentary privilege — hence it’s now published the Six4Three files, having waited a week in order to redact certain pieces of personal information.

Collins has included a summary of key issues, as the committee sees them after reviewing the documents, in which he draws attention to six issues.

Here is his summary of the key issues:

  1. White Lists Facebook have clearly entered into whitelisting agreements with certain companies, which meant that after the platform changes in 2014/15 they maintained full access to friends data. It is not clear that there was any user consent for this, nor how Facebook decided which companies should be whitelisted or not.
  2. Value of friends data It is clear that increasing revenues from major app developers was one of the key drivers behind the Platform 3.0 changes at Facebook. The idea of linking access to friends data to the financial value of the developers relationship with Facebook is a recurring feature of the documents.
  3. Reciprocity Data reciprocity between Facebook and app developers was a central feature in the discussions about the launch of Platform 3.0.
  4. Android Facebook knew that the changes to its policies on the Android mobile phone system, which enabled the Facebook app to collect a record of calls and texts sent by the user, would be controversial. To mitigate any bad PR, Facebook planned to make it as hard as possible for users to know that this was one of the underlying features of the upgrade of their app.
  5. Onavo Facebook used Onavo to conduct global surveys of the usage of mobile apps by customers, and apparently without their knowledge. They used this data to assess not just how many people had downloaded apps, but how often they used them. This knowledge helped them to decide which companies to acquire, and which to treat as a threat.
  6. Targeting competitor Apps The files show evidence of Facebook taking aggressive positions against apps, with the consequence that denying them access to data led to the failure of that business.

The publication of the files comes at an awkward moment for Facebook — which remains on the back foot after a string of data and security scandals, and has just announced a major policy change — ending a long-running ban on apps copying its own platform features.

Albeit the timing of Facebook’s policy shift announcement hardly looks incidental — given Collins said last week the committee would publish the files this week.

The policy in question has been used by Facebook to close down competitors in the past, such as — two years ago — when it cut off style transfer app Prisma’s access to its live-streaming Live API when the startup tried to launch a livestreaming art filter (Facebook subsequently launched its own style transfer filters for Live).

So its policy reversal now looks intended to defuse regulatory scrutiny around potential antitrust concerns.

But emails in the Six4Three files suggesting that Facebook took “aggressive positions” against competing apps could spark fresh competition concerns.

In one email dated January 24, 2013, a Facebook staffer, Justin Osofsky, discusses Twitter’s launch of its short video clip app, Vine, and says Facebook’s response will be to close off its API access.

“As part of their NUX, you can find friends via FB. Unless anyone raises objections, we will shut down their friends API access today. We’ve prepared reactive PR, and I will let Jana know our decision,” he writes.

Osofsky’s email is followed by what looks like a big thumbs up from Zuckerberg, who replies: “Yup, go for it.”

Also of concern on the competition front is Facebook’s use of a VPN startup it acquired, Onavo, to gather intelligence on competing apps — either for acquisition purposes or to target as a threat to its business.

The files show various Onavo industry charts detailing reach and usage of mobile apps and social networks — with each of these graphs stamped ‘highly confidential’.

Facebook bought Onavo back in October 2013. Shortly after, it shelled out $19BN to acquire rival messaging app WhatsApp — which one Onavo chart in the cache indicates was beasting Facebook on mobile, accounting for well over double the daily message sends at that time.

The files also spotlight several issues of concern relating to privacy and data protection law, with internal documents raising fresh questions over how or even whether (in the case of Facebook’s whitelisting agreements with certain developers) it obtained consent from users to process their personal data.

The company is already facing a number of privacy complaints under the EU’s GDPR framework over its use of ‘forced consent’, given that it does not offer users an opt-out from targeted advertising.

But the Six4Three files look set to pour fresh fuel on the consent fire.

Collins’ fourth line item — related to an Android upgrade — also speaks loudly to consent complaints.

Earlier this year Facebook was forced to deny that it collects calls and SMS data from users of its Android apps without permission. But, as we wrote at the time, it had used privacy-hostile design tricks to sneak expansive data-gobbling permissions past users. So, put simply, people clicked ‘agree’ without knowing exactly what they were agreeing to.

The Six4Three files back up the notion that Facebook was intentionally trying to mislead users.

One email dated November 15, 2013, from Matt Scutari, manager of privacy and public policy, suggests ways to prevent users from choosing to set a higher level of privacy protection: “Matt is providing policy feedback on a Mark Z request that Product explore the possibility of making the Only Me audience setting unsticky. The goal of this change would be to help users avoid inadvertently posting to the Only Me audience. We are encouraging Product to explore other alternatives, such as more aggressive user education or removing stickiness for all audience settings.”

Another awkward trust issue for Facebook which the documents could stir up afresh relates to its repeat claim — including under questions from lawmakers — that it does not sell user data.

In one email from the cache — sent by Mark Zuckerberg, dated October 7, 2012 — the Facebook founder appears to be entertaining the idea of charging developers for “reading anything, including friends”.

Yet earlier this year, when he was asked by a US lawmaker how Facebook makes money, Zuckerberg replied: “Senator, we sell ads.”

He did not include a caveat that he had apparently personally entertained the idea of liberally selling access to user data.

Responding to the publication of the Six4Three documents, a Facebook spokesperson told us:

As we’ve said many times, the documents Six4Three gathered for their baseless case are only part of the story and are presented in a way that is very misleading without additional context. We stand by the platform changes we made in 2015 to stop a person from sharing their friends’ data with developers. Like any business, we had many internal conversations about the various ways we could build a sustainable business model for our platform. But the facts are clear: we’ve never sold people’s data.

Zuckerberg has repeatedly refused to testify in person to the DCMS committee.

At its last public hearing — which was held in the form of a grand committee comprising representatives from nine international parliaments, all with burning questions for Facebook — the company sent its policy VP, Richard Allan, leaving an empty chair where Zuckerberg’s bum should be.

China’s fast-rising Bullet Messenger hit with copyright complaint

Bullet Messenger, a fast-rising Chinese messaging upstart that’s gunning to take on local behemoth, WeChat, has been pulled from the iOS App Store owing to what its owners couch as a copyright complaint.

Reuters reported the development earlier, saying Bullet’s owner, Beijing-based Kuairu Technology, claimed in a social media posting that the app had been taken down from Apple’s app store because of a complaint related to image content provided by a partner.

“We are verifying the situation with the partner and will inform you as soon as possible when download capabilities are resumed,” it said in a statement on its official Weibo account.

The company did not specify which part of the app has been subject to a complaint.

We’ve reached out to Apple to ask if it can provide any more details.

According to checks by Reuters earlier today, the Bullet Messenger app was still available on China’s top Android app stores — including stores owned by WeChat owner Tencent, as well as Baidu and Xiaomi stores — which the news agency suggests makes it less likely the app has been pulled from iOS as a result of censorship by the state, saying apps targeted by regulators generally disappear from local app stores too.

Bullet Messenger only launched in August but quickly racked up four million users in just over a week, and also snagged $22M in funding.

By September it had claimed seven million users, and Chinese smartphone maker Smartisan — an investor in Bullet — said it planned to spend 1 billion yuan (~$146M) over the next six months in a bid to reach 100M users. Though in a battle with a Goliath like WeChat (1BN+ active users) even that would leave it a relative minnow.

The upstart messenger has grabbed attention with its fast growth, apparently attracting users via its relatively minimal messaging interface and a feature that enables speech-to-text transcription in real time.

Though the app has also raised eyebrows for allowing pornographic content to be passed around.

It’s possible that element of the app caught the attention of Chinese authorities which have been cracking down on Internet porn in recent years — even including non-visual content (such as ASMR) which local regulators have also judged to be obscene.

Although it’s equally possible Apple itself is responding to a porn complaint about Bullet’s iOS app.

Earlier this year the Telegram messaging app fell foul of the App Store rules and was temporarily pulled, as a result of what its founder described as “inappropriate content”.

Apple’s developer guidelines for iOS apps include a section on safety that proscribes “upsetting or offensive content” — including frowning on: “Apps with user-generated content or services that end up being used primarily for pornographic content.”

In Telegram’s case, the App Store banishment was soon resolved.

There’s nothing currently to suggest that Bullet’s app won’t also soon be restored.

Facebook’s ex-CSO, Alex Stamos, defends its decision to inject ads in WhatsApp

Alex Stamos, Facebook’s former chief security officer, who left the company this summer to take up a role in academia, has made a contribution to what’s sometimes couched as a debate about how to monetize (and thus sustain) commercial end-to-end encrypted messaging platforms in order that the privacy benefits they otherwise offer can be as widely spread as possible.

Stamos made the comments via Twitter, where he said he was indirectly responding to the fallout from a Forbes interview with WhatsApp co-founder Brian Acton — in which Acton hit out at his former employer for being greedy in its approach to generating revenue off of the famously anti-ads messaging platform.

Both WhatsApp founders’ exits from Facebook have been blamed on disagreements over monetization. (Jan Koum left some months after Acton.)

In the interview, Acton said he suggested Facebook management apply a simple business model atop WhatsApp, such as metered messaging for all users after a set number of free messages. But management pushed back — with Facebook COO Sheryl Sandberg telling him they needed a monetization method that generates greater revenue “scale”.

And while Stamos has avoided making critical remarks about Acton (unlike some current Facebook staffers), he clearly wants to lend his weight to the notion that some kind of trade-off is necessary in order for end-to-end encryption to be commercially viable, and thus for the greater good of messaging privacy to prevail; and, therefore, to lend his tacit support to Facebook and its approach to making money off of a robustly encrypted platform.

Stamos’ own departure from the fb mothership was hardly on such acrimonious terms as Acton’s, though he has had his own disagreements with the leadership team — as set out in a memo he sent earlier this year that was obtained by BuzzFeed. So his support for Facebook combining e2e and ads perhaps counts for something, though isn’t really surprising given the seat he occupied at the company for several years, and his always fierce defence of WhatsApp encryption.

(Another characteristic concern that also surfaces in Stamos’ Twitter thread is the need to keep the technology legal, in the face of government attempts to backdoor encryption, which he says will require “accepting the inevitable downsides of giving people unfettered communications”.)

This summer Facebook confirmed that, from next year, ads will be injected into WhatsApp statuses (aka the app’s Stories clone). So it is indeed bringing ads to the famously anti-ads messaging platform.

For several years the company has also been moving towards positioning WhatsApp as a business messaging platform to connect companies with potential customers — and it says it plans to meter those messages, also from next year.

So there are two strands to its revenue-generating playbook atop WhatsApp’s e2e encrypted messaging platform. Both have knock-on impacts on privacy, given that Facebook targets ads and marketing content by profiling users via their harvested personal data.

This means that while WhatsApp’s e2e encryption means Facebook literally cannot read WhatsApp users’ messages, it is ‘circumventing’ the technology (for ad-targeting purposes) by linking accounts across different services it owns — using people’s digital identities across its product portfolio (and beyond) as a sort of ‘trojan horse’ to negate the messaging privacy it affords them on WhatsApp.

Facebook is using different technical methods (including the very low-tech method of phone number matching) to link WhatsApp user and Facebook accounts. Once it’s been able to match a Facebook user to a WhatsApp account it can then connect what’s very likely to be a well fleshed out Facebook profile with a WhatsApp account that nonetheless contains messages it can’t read. So it’s both respecting and eroding user privacy.
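Phone number matching is “low-tech” because it is, conceptually, just a join on a normalized identifier. The sketch below is purely illustrative: every function name, account ID and phone number is invented for this example, and it says nothing about how Facebook’s actual systems work — it only shows why matching two account databases on phone numbers is technically trivial once the numbers are normalized to a common format.

```python
# Purely illustrative sketch of account linking via phone number matching.
# All names, IDs and numbers here are hypothetical; this does not depict
# any real company's systems. The point: once numbers are normalized to a
# common form, linking accounts is a simple dictionary join.
import re

def normalize(number: str, default_country_code: str = "1") -> str:
    """Strip formatting and coerce to a rough E.164-style string."""
    digits = re.sub(r"\D", "", number)   # drop spaces, dashes, parens, '+'
    if number.strip().startswith("+"):
        return "+" + digits              # already has a country code
    if len(digits) == 10:                # assume a national number
        return "+" + default_country_code + digits
    return "+" + digits

def link_accounts(service_a: dict, service_b: dict) -> list:
    """Return (a_id, b_id) pairs whose normalized phone numbers match."""
    b_by_phone = {normalize(phone): uid for uid, phone in service_b.items()}
    return [
        (a_id, b_by_phone[normalize(phone)])
        for a_id, phone in service_a.items()
        if normalize(phone) in b_by_phone
    ]

# Hypothetical sample data: one user registered the same number on both.
accounts_a = {"a_1001": "(415) 555-0100", "a_1002": "+44 20 7946 0958"}
accounts_b = {"b_9001": "+14155550100", "b_9002": "+33 1 99 00 11 22"}
print(link_accounts(accounts_a, accounts_b))  # → [('a_1001', 'b_9001')]
```

The same join works at any scale, which is why a phone number shared between two services acts as a ready-made cross-service identifier unless users actively avoid reusing it.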

This approach means Facebook can carry out its ad targeting activities across both messaging platforms (as it will from next year). And do so without having to literally read messages being sent by WhatsApp users.

As trade-offs go, it’s clearly a big one — and one that’s got Facebook into regulatory trouble in Europe.

It is also, at least in Stamos’ view, a trade off that’s worth it for the ‘greater good’ of message content remaining strongly encrypted and therefore unreadable. Even if Facebook now knows pretty much everything about the sender, and can access any unencrypted messages they sent using its other social products.

In his Twitter thread Stamos argues that “if we want that right to be extended to people around the world, that means that E2E encryption needs to be deployed inside of multi-billion user platforms”, which he says means: “We need to find a sustainable business model for professionally-run E2E encrypted communication platforms.”

On the sustainable business model front he argues that two models “currently fit the bill” — either Apple’s iMessage or Facebook-owned WhatsApp. Though he doesn’t go into any detail on why he believes only those two are sustainable.

He does say he’s discounting the Acton-backed alternative, Signal, which now operates via a not-for-profit (the Signal Foundation) — suggesting that rival messaging app is “unlikely to hit 1B users”.

In passing he also throws it out there that Signal is “subsidized, indirectly, by FB ads” — i.e. because Facebook pays a licensing fee for use of the underlying Signal Protocol used to power WhatsApp’s e2e encryption. (So his slightly shade-throwing subtext is that privacy purists are still benefiting from a Facebook sugardaddy.)

Then he gets to the meat of his argument in defence of Facebook-owned (and monetized) WhatsApp — pointing out that Apple’s sustainable business model does not reach every mobile user, given its hardware is priced at a premium. Whereas WhatsApp running on a cheap Android handset ($50, or perhaps even $30 in future) can.

Other encrypted messaging apps can also of course run on Android but presumably Stamos would argue they’re not professionally run.

“I think it is easy to underestimate how radical WhatsApp’s decision to deploy E2E was,” he writes. “Acton and Koum, with Zuck’s blessing, jumped off a bridge with the goal of building a monetization parachute on the way down. FB has a lot of money, so it was a very tall bridge, but it is foolish to expect that FB shareholders are going to subsidize a free text/voice/video global communications network forever. Eventually, WhatsApp is going to need to generate revenue.

“This could come from directly charging for the service, it could come from advertising, it could come from a WeChat-like services play. The first is very hard across countries, the latter two are complicated by E2E.”

“I can’t speak to the various options that have been floated around, or the arguments between WA and FB, but those of us who care about privacy shouldn’t see WhatsApp monetization as something evil,” he adds. “In fact, we should want WA to demonstrate that E2E and revenue are compatible. That’s the only way E2E will become a sustainable feature of massive, non-niche technology platforms.”

Stamos is certainly right that Apple’s iMessage cannot reach every mobile user, given the premium cost of Apple hardware.

Though he elides the important role that second hand Apple devices play in reducing the barrier to entry to Apple’s pro-privacy technology — a role Apple is actively encouraging via support for older devices, and via its expanding services business, which makes supporting older versions of iOS (and thus secondhand iPhones) commercially sustainable too.

Stamos’ claim that robust encryption is only possible via multi-billion user platforms essentially boils down to a usability argument — which is to suggest that mainstream app users will simply not seek encryption out unless it’s plated up for them in a way they don’t even notice it’s there.

The follow-on conclusion is that only a well-resourced giant like Facebook can maintain and serve such tech up to the masses.

There’s certainly substance in that point. But the wider question is whether the privacy trade-offs entailed by Facebook’s methods of monetizing WhatsApp (linking Facebook and WhatsApp accounts, and thereby looping in the various less than transparent data-harvesting methods it uses to gather intelligence on web users generally) substantially erode the value of the e2e encryption that is now being bundled with Facebook’s ad-targeting people surveillance, and so used as a selling aid for otherwise privacy-eroding practices.

Yes WhatsApp users’ messages will remain private, thanks to Facebook funding the necessary e2e encryption. But the price users are having to pay is very likely still their personal privacy.

And at that point the argument really becomes one about how much profit a commercial entity should be able to extract from a product that’s being marketed as securely encrypted and thus ‘pro-privacy’. How much revenue “scale” is reasonable or unreasonable in that scenario?

Other business models are possible, which was Acton’s point. But likely less profitable. And therein lies the rub where Facebook is concerned.

How much money should any company be required to leave on the table, as Acton did when he left Facebook without the rest of his unvested shares, in order to be able to monetize a technology that’s bound up so tightly with notions of privacy?

Acton wanted Facebook to agree to make as much money as it could without users having to pay it with their privacy. But Facebook’s management team said no. That’s why he’s calling them greedy.

Stamos doesn’t engage with that more nuanced point. He just writes: “It is foolish to expect that FB shareholders are going to subsidize a free text/voice/video global communications network forever. Eventually, WhatsApp is going to need to generate revenue” — thereby collapsing the revenue argument into an all or nothing binary without explaining why it has to be that way.

Twitter now puts live broadcasts at the top of your timeline

Twitter will now put live streams and broadcasts started by accounts you follow at the top of your timeline, making it easier to see what they’re doing in realtime.

In a tweet, Twitter said that the new feature will include breaking news, personalities and sports.

The social networking giant included the new feature in its iOS and Android apps, updated this week. Among the updates, Twitter said it now also supports audio-only live broadcasts, both in its own apps and through its sister broadcast service Periscope.

Last month, Twitter discontinued its app for iOS 9 and lower versions, which according to Apple’s own data still harbors some 5 percent of all iPhone and iPad users.

Apple defends decision not to remove InfoWars’ app

Apple has commented on its decision to continue to allow conspiracy theorist profiteer InfoWars to livestream video podcasts via an app in its App Store, despite removing links to all but one of Alex Jones’ podcasts from its iTunes and podcast apps earlier this week.

At the time Apple said the podcasts had violated its community standards, emphasizing that it “does not tolerate hate speech”, and saying: “We believe in representing a wide range of views, so long as people are respectful to those with differing opinions.”

Yet the InfoWars app allows iOS users to livestream the same content Apple just pulled from iTunes.

In a statement given to BuzzFeed News, Apple explains its decision not to pull the InfoWars app, saying:

We strongly support all points of view being represented on the App Store, as long as the apps are respectful to users with differing opinions, and follow our clear guidelines, ensuring the App Store is a safe marketplace for all. We continue to monitor apps for violations of our guidelines and if we find content that violates our guidelines and is harmful to users we will remove those apps from the store as we have done previously.

Multiple tech platforms have moved to close the door or limit Jones’ reach on their platforms in recent weeks, including Google, which shuttered his YouTube channel, and Facebook, which removed a series of videos and banned Jones’ personal account for 30 days, as well as issuing the InfoWars page with a warning strike. Spotify, Pinterest, LinkedIn, MailChimp and others have also taken action.

Twitter, though, has not banned or otherwise censured Jones — despite InfoWars’ continued presence threatening CEO Jack Dorsey’s claimed push to improve conversational health on his platform. Snapchat is also merely monitoring Jones’ continued presence on its platform.

In an unsurprising twist, the additional exposure Jones/InfoWars has gained as a result of news coverage of the various platform bans appears to have given his apps some passing uplift…

So Apple’s decision to remove links to Jones’ podcasts yet allow the InfoWars app looks contradictory.

The company is certainly treading a fine line here. But there’s a technical distinction between a link to a podcast in a directory, where podcast makers can freely list their stuff (with the content hosted elsewhere), vs an app in Apple’s App Store, which has gone through Apple’s review process and whose content is hosted by Apple.

When it removed Jones’ podcasts Apple was, in effect, just removing a pointer to the content, not the content itself. The podcasts also represented discrete content — meaning each episode which was being pointed to could be judged against Apple’s community standards. (And one podcast link was not removed, for example, though five were.)

Whereas Jones (mostly) uses the InfoWars app to livestream podcast shows. Meaning the content in the InfoWars app is more ephemeral — making it more difficult for Apple to cross-check against its community standards. The streamer has to be caught in the act, as it were.

Google has also not pulled the InfoWars app from its Play Store despite shuttering Jones’ YouTube channel, and a spokesperson told BuzzFeed: “We carefully review content on our platforms and products for violations of our terms and conditions, or our content policies. If an app or user violates these, we take action.”

That said, both the iOS and Android versions of the app also include ‘articles’ that can be saved by users, so some of the content appears to be less ephemeral.

The iOS listing further claims the app lets users “stay up to date with articles as they’re published from Infowars.com” — which at least suggests some of the content is identical to what’s being spouted on Jones’ own website (where he’s only subject to his own T&Cs).

But in order to avoid falling foul of Apple and Google’s app store guidelines, Jones is likely carefully choosing which articles are funneled into the apps — to avoid breaching app store T&Cs against abuse and hateful conduct, and (most likely also) to hook more eyeballs with more soft-ball conspiracy nonsense before, once they’re pulled into his orbit, blasting people with his full bore BS shotgun on his own platform.

Sample articles depicted in screenshots in the App Store listing for the app include one claiming that George Soros is “literally behind Starbucks’ sensitivity training” and another, from the ‘science’ section, pushing junk claims about vision correction — all garbage, but not at the level of anti-truth toxicity Jones has become notorious for on his shows. The Play Store listing flags a different selection of sample articles with a slightly more international flavor, including several on European far right politics, in addition to U.S.-focused political stories about Trump and some outrage about domestic ‘political correctness gone mad’. So the static sample content, at least, isn’t enough to violate any T&Cs.

Still, the livestream component of the apps presents an ongoing problem for Apple and Google — given both have stated that his content elsewhere violates their standards. And it’s not clear how sustainable it will be for them to continue to allow Jones a platform to livestream hate from inside the walls of their commercial app stores.

Beyond that, narrowly judging Jones — a purveyor of weaponized anti-truth (most egregiously his claim that the Sandy Hook Elementary School shooting was a hoax) — by the content he uploads directly to their servers also ignores the wider context (and toxic baggage) around him.

And while no tech companies want their brands to be perceived as toxic to conservative points of view, InfoWars does not represent conservative politics. Jones peddles far right conspiracy theories, whips up hate and spreads junk science in order to generate fear and make money selling supplements. It’s cynical manipulation not conservatism.

Both should revisit their decisions. Hateful anti-truth merely damages the marketplace of ideas they claim to want to champion, and chills free speech through the violent bullying of minorities and the other people it makes into targets and thus victimizes.

Earlier this week 9to5Mac reported that CNN’s Dylan Byers said the decision to remove links to InfoWars’ podcasts had been made at the top of Apple, after a meeting between CEO Tim Cook and SVP Eddy Cue. Byers reported it was also the execs’ decision not to remove the InfoWars app.

We’ve reached out to Apple to ask whether it will be monitoring InfoWars’ livestreams directly for any violations of its community standards and will update this story with any response.