Author: Natasha Lomas

Facebook has quietly removed three bogus far right networks in Spain ahead of Sunday’s elections

Facebook has quietly removed three far right networks that were engaged in coordinated inauthentic behavior intended to spread politically divisive content in Spain ahead of a general election in the country, which takes place on Sunday.

The networks had a total reach of almost 1.7M followers and had generated close to 7.4M interactions in the past three months alone, according to analysis by the independent group that identified the bogus activity on Facebook’s platform.

The fake far right activity was apparently not picked up by Facebook.

Instead activist not-for-profit Avaaz unearthed the inauthentic content, and presented its findings to the social networking giant earlier this month, on April 12. In a press release issued today the campaigning organization said Facebook has now removed the fakes — apparently vindicating its findings.

“Facebook did a great job in acting fast, but these networks are likely just the tip of the disinformation iceberg — and if Facebook doesn’t scale up, such operations could sink democracy across the continent,” said Christoph Schott, campaign director at Avaaz, in a statement.

“This is how hate goes viral. A bunch of extremists use fake and duplicate accounts to create entire networks to fake public support for their divisive agenda. It’s how voters were misled in the U.S., and it happened again in Spain,” he added.

We reached out to Facebook for comment but at the time of writing the company had not responded to the request or to several questions we also put to it.

Avaaz said the networks it found comprised around thirty pages and groups spreading far right propaganda — including anti-immigrant, anti-LGBT, anti-feminist and anti-Islam content.

Examples of the inauthentic content can be viewed in Avaaz’s executive summary of the report. They include fake data about foreigners committing the majority of rapes in Spain; fake news about Catalonia’s pro-independence leader; and various posts targeting leftwing political party Podemos — including an image superimposing the head of its leader onto the body of Hitler performing a Nazi salute.

One of the networks — which Avaaz calls Unidad Nacional Española (after the most popular page in the network) — was apparently created and coordinated by an individual called Javier Ramón Capdevila Grau, who operated multiple personal Facebook accounts, itself a contravention of Facebook’s community standards.

This network, which had a reach of more than 1.2M followers, comprised at least 10 pages that Avaaz identified as working in a coordinated fashion to spread “politically divisive content”.

Its report details how word-for-word identical posts were published across multiple Facebook pages and groups in the network just minutes apart, with nothing to indicate they weren’t original postings on each page. 

Here’s an example post it found copy-pasted across the bogus network:

Translated, the posted text reads: ‘In Spain, if a criminal enters your house without your permission the only thing you can do is hide, since if you touch a hair on his head or prevent him from being able to rob you, you’ll spend more time in prison than him.’
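Avaaz hasn’t published its tooling, but the pattern its report describes (word-for-word identical text appearing across multiple pages within minutes) is straightforward to flag programmatically. A minimal sketch, assuming a hypothetical list of (page, text, timestamp) records and an assumed 15-minute coordination window:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical post records: (page_name, post_text, timestamp).
posts = [
    ("Page A", "En España, si un delincuente entra en tu casa...", datetime(2019, 3, 1, 10, 0)),
    ("Page B", "En España, si un delincuente entra en tu casa...", datetime(2019, 3, 1, 10, 4)),
    ("Page C", "Totally unrelated post", datetime(2019, 3, 1, 11, 0)),
]

WINDOW = timedelta(minutes=15)  # assumed window for "minutes apart" posting

# Group posts by exact text, then flag any text published by two or more
# distinct pages with all copies falling inside the window.
by_text = defaultdict(list)
for page, text, ts in posts:
    by_text[text.strip()].append((page, ts))

for text, entries in by_text.items():
    entries.sort(key=lambda e: e[1])
    pages = {page for page, _ in entries}
    if len(pages) > 1 and entries[-1][1] - entries[0][1] <= WINDOW:
        print(f"Possible coordinated posting across {len(pages)} pages: {text[:50]!r}")
```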

Avaaz found another smaller network targeting leftwing views, called Todos Contra Podemos, which included seven pages and groups with around 114,000 followers — also apparently run by a single individual (in this case using the name Antonio Leal Felix Aguilar) who also operated multiple Facebook profiles.

A third network, Lucha por España​, comprised 12 pages and groups with around 378,000 followers.

Avaaz said it was unable to identify the individual/s behind that network. 

Facebook has not publicized the removals of these particular political disinformation networks, despite its now steady habit of issuing PR when it finds and removes ‘coordinated inauthentic behavior’ on its platform (though of course there’s no way to be sure it’s disclosing everything it finds). However, test searches for the main pages identified by Avaaz returned either no results or what appear to be other unrelated Facebook pages using the same name.

Since the 2016 U.S. presidential election was (infamously) targeted by divisive Kremlin propaganda seeded and amplified via social media, Facebook has launched what it markets as “election security” initiatives in a handful of countries around the world — such as searchable ad archives and political ad authentication and/or disclosure requirements.

However these efforts continue to face criticism for being patchy, piecemeal and, even in countries where they have been applied to its platform, weak and trivially easy to work around.

Its political ads transparency measures do not always apply to issue-based ads (and/or content), for instance, which punches a democracy-denting hole in the self-styled ‘guardrails’ by allowing divisive propaganda to continue to flow.

In Spain Facebook has not even launched a system of political ad transparency, let alone launched systems addressing issue-based political ads — despite the country’s looming general election on April 28; its third in four years. (Since 2015 elections in Spain have yielded heavily fragmented parliaments — making another imminent election not at all unlikely.)

In February, when we asked Facebook whether it would commit to launching ad transparency tools in Spain before the April 28 election, it offered no such commitment — saying instead that it sets up internal cross-functional teams for elections in every market to assess the biggest risks, and make contact with the relevant electoral commission and other key stakeholders.

Again, it’s not possible for outsiders to assess the efficacy of such internal efforts. But Avaaz’s findings suggest Facebook’s risk assessment of Spain’s general election has had a pretty hefty blind spot when it comes to proactively picking up malicious attempts to inflate far right propaganda.

Yet, at the same time, a regional election in Andalusia late last year returned a shock result and warning signs — with the tiny (and previously unelected) far right party, Vox, gaining around 10 per cent of the vote to take 12 seats.

Avaaz’s findings vis-a-vis the three bogus far right networks suggest that, as well as seeking to slur leftwing/liberal political views and parties, some of the inauthentic pages were involved in actively trying to amplify Vox — with one bogus page, Orgullo Nacional España, sharing a pro-Vox Facebook page 155 times in a three-month period.

Avaaz used the Facebook-owned social media monitoring tool CrowdTangle to get a read on how much impact the fake networks might have had.

It found that while the three inauthentic far right Facebook networks produced just 3.7% of the posts in its Spanish elections dataset, they garnered an impressive 12.6% of total engagement over the three month period it pulled data on (between January 5 and April 8) — despite consisting of just 27 Facebook pages and groups out of a total of 910 in the full dataset. 

Or, to put it another way, a handful of bad actors managed to generate enough divisive politically charged noise that more than one in ten of those engaging in Spanish election chatter on Facebook, per its dataset, at the very least took note.
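Those proportions are simple to reproduce from a CrowdTangle-style export. A minimal sketch, assuming a hypothetical CSV with one row per post and made-up column names (page_name, interactions, and an is_flagged_network flag marking posts from the 27 identified pages and groups):

```python
import pandas as pd

# Hypothetical export covering January 5 - April 8, one row per post.
df = pd.read_csv("spain_election_posts.csv")

flagged = df[df["is_flagged_network"]]

post_share = 100 * len(flagged) / len(df)
engagement_share = 100 * flagged["interactions"].sum() / df["interactions"].sum()

print(f"{flagged['page_name'].nunique()} of {df['page_name'].nunique()} pages/groups flagged")
print(f"{post_share:.1f}% of posts, {engagement_share:.1f}% of total engagement")
```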

It’s a finding which neatly illustrates that divisive content being more clickable is not at all a crazy idea — whatever the founder of Facebook once said.

Facebook agrees to clearer T&Cs in Europe

Facebook has agreed to amend its terms and conditions under pressure from EU lawmakers.

The new terms will make it plain that free access to its service is contingent on users’ data being used to profile them to target with ads, the European Commission said today.

“The new terms detail what services, Facebook sells to third parties that are based on the use of their user’s data, how consumers can close their accounts and under what reasons accounts can be disabled,” it writes.

The exact wording of the new terms has not yet been published, though, and the company has until the end of June 2019 to comply — so it remains to be seen how clear ‘clear’ will be.

Nonetheless the Commission is couching the concession as a win for consumers, trumpeting the forthcoming changes to Facebook’s T&C in a press release in which Vera Jourová, commissioner for justice, consumers and gender equality, writes:

Today Facebook finally shows commitment to more transparency and straight forward language in its terms of use. A company that wants to restore consumers trust after the Facebook/ Cambridge Analytica scandal should not hide behind complicated, legalistic jargon on how it is making billions on people’s data. Now, users will clearly understand that their data is used by the social network to sell targeted ads. By joining forces, the consumer authorities and the European Commission, stand up for the rights of EU consumers.

The change to Facebook’s T&Cs follows pressure applied to it in the wake of the Cambridge Analytica data misuse scandal, according to the Commission.

Along with national consumer protection authorities it says it asked Facebook to clearly inform consumers how the service gets financed and what revenues are derived from the use of consumer data as part of its response to the data-for-political-ads scandal.

“Facebook will introduce new text in its Terms and Services explaining that it does not charge users for its services in return for users’ agreement to share their data and to be exposed to commercial advertisements,” it writes. “Facebook’s terms will now clearly explain that their business model relies on selling targeted advertising services to traders by using the data from the profiles of its users.”

We reached out to Facebook with questions — including asking to see the wording of the new terms — but at the time of writing the company had declined to provide any response.

It’s also not clear whether the amended T&Cs will apply universally or only for Facebook users in Europe.

European commissioners have been squeezing social media platforms including Facebook over consumer rights issues since 2017 — when Facebook, Twitter and Google were warned the Commission was losing patience with their failure to comply with various consumer protection standards.

Aside from unclear language in their T&Cs, specific issues of concern for the Commission include terms that deprive consumers of their right to take a company to court in their own country or require consumers to waive mandatory rights (such as their right to withdraw from an online purchase).

Facebook has now agreed to several other T&Cs changes under pressure from the Commission, i.e. in addition to making it plainer that ‘if it’s free, you’re the product’.

Namely, the Commission says Facebook has agreed to: 1) amend its policy on limitation of liability — saying Facebook’s new T&Cs “acknowledges its responsibility in case of negligence, for instance in case data has been mishandled by third parties”; 2) amend its power to unilaterally change terms and conditions by “limiting it to cases where the changes are reasonable also taking into account the interest of the consumer”; 3) amend the rules concerning the temporary retention of content which has been deleted by consumers  — with content only able to be retained in “specific cases” (such as to comply with an enforcement request by an authority), and only for a maximum of 90 days when retained for “technical reasons”; and 4) amend the language clarifying the right to appeal of users when their content has been removed.

The Commission says it expects Facebook to make all the changes by the end of June at the latest — warning that the implementation will be closely monitored.

“If Facebook does not fulfil its commitments, national consumer authorities could decide to resort to enforcement measures, including sanctions,” it adds.

UK sets out safety-focused plan to regulate Internet firms

The UK government has laid out proposals to regulate online and social media platforms, setting out the substance of its long-awaited White Paper on online harms today — and kicking off a public consultation.

The Online Harms White Paper is a joint proposal from the Department for Digital, Culture, Media and Sport (DCMS) and Home Office.

It follows the government’s announcement of its policy intent last May, and a string of domestic calls for greater regulation of the Internet as politicians have responded to rising concern about the mental health impacts of online content.

The government is now proposing to put a mandatory duty of care on platforms to take reasonable steps to protect their users from a range of harms — including, but not limited to, illegal material such as terrorist content and child sexual exploitation and abuse, which will be covered by further stringent requirements under the plan.

The approach is also intended to address a range of content and activity that’s deemed harmful.

Examples provided by the government of the sorts of broader harms it’s targeting include inciting violence and violent content; encouraging suicide; disinformation; cyber bullying; and inappropriate material being accessed by children.

Content promoting suicide has been thrown into the public spotlight in the UK in recent months, following media reports about a schoolgirl whose family found out she had been viewing pro-suicide content on Instagram after she killed herself.

The Facebook-owned platform subsequently agreed to change its policies towards suicide content, saying it would start censoring graphic images of self-harm, after pressure from ministers.

Commenting on the publication of the White Paper today, digital secretary Jeremy Wright said: “The era of self-regulation for online companies is over. Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough. Tech can be an incredible force for good and we want the sector to be part of the solution in protecting their users. However those that fail to do this will face tough action.

“We want the UK to be the safest place in the world to go online, and the best place to start and grow a digital business and our proposals for new laws will help make sure everyone in our country can enjoy the Internet safely.”

In another supporting statement Home Secretary Sajid Javid added: “The tech giants and social media companies have a moral duty to protect the young people they profit from. Despite our repeated calls to action, harmful and illegal content – including child abuse and terrorism – is still too readily available online.

“That is why we are forcing these firms to clean up their act once and for all. I made it my mission to protect our young people – and we are now delivering on that promise.”

Children’s charity, the NSPCC, was among the sector bodies welcoming the proposal.

“This is a hugely significant commitment by the Government that once enacted, can make the UK a world pioneer in protecting children online,” wrote CEO Peter Wanless in a statement.

“For too long social networks have failed to prioritise children’s safety and left them exposed to grooming, abuse, and harmful content. So it’s high time they were forced to act through this legally binding duty to protect children, backed up with hefty punishments if they fail to do so.”

However, the Internet Watch Foundation, which works to stop the spread of child exploitation imagery online, warned against unintended consequences from badly planned legislation — and urged the government to take a “balanced approach”.

The proposed laws would apply to any company that allows users to share or discover user generated content or interact with each other online — meaning companies both big and small.

Nor is it just social media platforms: file hosting sites, public discussion forums, messaging services, and search engines are among those falling under the planned law’s remit.

The government says a new independent regulator will be introduced to ensure Internet companies meet their responsibilities, with ministers consulting on whether this should be a new or existing body.

Telecoms regulator Ofcom has been rumored as one possible contender, though the UK’s data watchdog, the ICO, has previously suggested it should be involved in any Internet oversight given its responsibility for data protection and privacy. (According to the FT a hybrid entity combining the two is another possibility — although it reports that the government remains genuinely undecided on who the regulator will be.)

The future Internet watchdog will be funded by industry in the medium term, with the government saying it’s exploring options such as an industry levy to put it on a sustainable footing.

On the enforcement front, the watchdog will be armed with a range of tools — with the government consulting on powers for it to issue substantial fines; block access to sites; and potentially to impose liability on individual members of senior management.

So there’s at least the prospect of a high profile social media CEO being threatened with UK jail time in future if they don’t do enough to remove harmful content.

On the financial penalties front, Wright suggested during an interview on Sky News that the government is entertaining GDPR-level fines of as much as 4% of a company’s annual global turnover…

Other elements of the proposed framework include giving the regulator the power to force tech companies to publish annual transparency reports on the amount of harmful content on their platforms and what they are doing to address it; to compel companies to respond to users’ complaints and act to address them quickly; and to comply with codes of practice issued by the regulator, such as requirements to minimise the spread of misleading and harmful disinformation with dedicated fact checkers, particularly during election periods.

A long-running enquiry by a DCMS parliamentary committee into online disinformation last year, which was continuously frustrated in its attempts to get Facebook founder Mark Zuckerberg to testify before it, concluded with a laundry list of recommendations for tightening regulations around digital campaigning.

The committee also recommended clear legal liabilities for tech companies to act against “harmful or illegal content”, and suggested a levy on tech firms to support enhanced regulation.

Responding to the government’s White Paper in a statement today, DCMS committee chair Damian Collins broadly welcomed the government’s proposals — though he also pressed for the future regulator to have the power to conduct its own investigations, rather than relying on self reporting by tech firms.

“We need a clear definition of how quickly social media companies should be required to take down harmful content, and this should include not only when it is referred to them by users, but also when it is easily within their power to discover this content for themselves,” Collins wrote.

“The regulator should also give guidance on the responsibilities of social media companies to ensure that their algorithms are not consistently directing users to harmful content.”

Another element of the government’s proposal is a “Safety by Design” framework that’s intended to help companies incorporate online safety features in new apps and platforms from the start.

The government also wants the regulator to head up a media literacy strategy that’s intended to equip people with the knowledge to recognise and deal with a range of deceptive and malicious behaviours online, such as catfishing, grooming and extremism.

It writes that the UK is committed to a free, open and secure Internet — and makes a point of noting that the watchdog will have a legal duty to pay “due regard” to innovation, and also to protect users’ rights online by being particularly mindful not to infringe privacy and freedom of expression.

It therefore suggests technology will be an integral part of any solution, saying the proposals are designed to promote a culture of continuous improvement among companies — and highlighting technologies such as Google’s “Family Link” and Apple’s Screen Time app as examples of the sorts of developments it wants the policy framework to encourage.

Such caveats are unlikely to do much to reassure those concerned that the approach will chill online speech, and/or place an impossible burden on smaller firms with fewer resources to monitor what their users are doing.

“The government’s proposals would create state regulation of the speech of millions of British citizens,” warns digital and civil rights group the Open Rights Group, in a statement by its executive director Jim Killock. “We have to expect that the duty of care will end up widely drawn with serious implications for legal content that is deemed potentially risky, whether it really is or not.

“The government refused to create a state regulator for the press because it didn’t want to be seen to be controlling free expression. We are skeptical that state regulation is the right approach.”

UK startup policy advocacy group Coadec was also quick to voice concerns — warning that the government’s plans will “entrench the tech giants, not punish them”.

“The vast scope of the proposals means they cover not just social media but virtually the entire internet – from file sharing to newspaper comment sections. Those most impacted will not be the tech giants the Government claims they are targeting, but everyone else. It will benefit the largest platforms with the resources and legal might to comply – and restrict the ability of British startups to compete fairly,” said Coadec executive director Dom Hallas in a statement. 

“There is a reason that Mark Zuckerberg has called for more regulation. It is in Facebook’s business interest.”

UK tech industry association techUK also put out a response statement warning about the need to avoid disproportionate impacts.

“Some of the key pillars of the Government’s approach remain too vague,” said Vinous Ali, head of policy, techUK. “It is vital that the new framework is effective, proportionate and predictable. Clear legal definitions that allow companies in scope to understand the law and therefore act quickly and with confidence will be key to the success of the new system.

“Not all of the legitimate concerns about online harms can be addressed through regulation. The new framework must be complemented by renewed efforts to ensure children, young people and adults alike have the skills and awareness to navigate the digital world safely and securely.”

The government has launched a 12-week consultation on the proposals, after which it says it will set out the action it will take in developing its final proposals for legislation.

Last month a House of Lords committee recommended an overarching super regulator be established to plug any gaps and/or handle overlaps in rules on Internet platforms, arguing that “a new framework for regulatory action” is needed to handle the digital world.

Though the government appears confident at this stage that an Internet regulator will be able to navigate any legislative patchwork and keep tech firms in line on its own.

The House of Lords committee was another that came down in support of a statutory duty of care for online services hosting user-generated content, suggesting it should have a special focus on children and “the vulnerable in society”. And there’s no doubt the concept of regulating Internet platforms has broad consensus among UK politicians — on both sides of the aisle.

But how to do that effectively and proportionately is another matter.

We reached out to Facebook and Google for a response to the White Paper.

Commenting on the Online Harms White Paper in a statement, Rebecca Stimson, Facebook’s head of UK public policy, said: “New rules for the internet should protect society from harm while also supporting innovation, the digital economy and freedom of speech. These are complex issues to get right and we look forward to working with the Government and Parliament to ensure new regulations are effective.”

Stimson also reiterated how Facebook has expanded the number of staff it has working on trust and safety issues to 30,000 in recent years, as well as claiming it’s invested heavily in technology to help prevent abuse — while conceding that “we know there is much more to do”.

Last month the company revealed shortcomings with its safety measures around livestreaming, after it emerged that a massacre in Christchurch, New Zealand, which was livestreamed to Facebook’s platform, had not been flagged for accelerated review by moderators because it was not tagged as suicide-related content.

Facebook said it would be “learning” from the incident and “re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review”.

In its response to the UK government White Paper today, Stimson added: “The internet has transformed how billions of people live, work and connect with each other, but new forms of communication also bring huge challenges. We have responsibilities to keep people safe on our services and we share the government’s commitment to tackling harmful content online. As Mark Zuckerberg said last month, new regulations are needed so that we have a standardised approach across platforms and private companies aren’t making so many important decisions alone.”

WhatsApp adds a tip-line for checking fakes in India ahead of elections

Facebook-owned messaging platform WhatsApp has launched a fact-checking tipline for users in India ahead of elections in the country.

The fact-checking service consists of a phone number (+91-9643-000-888) where users can send dubious messages if they think they might not be true or otherwise want them verified.

The messaging giant is working with a local media skilling startup, Proto, to run the fact-checking service — in conjunction with digital strategy consultancy Dig Deeper Media and San Francisco-based Meedan, which builds tools for journalists, to provide the platform for verifying submitted content, per TNW.

We’ve reached out to Proto and WhatsApp with questions.

The Economic Times of India reports that the startup intends to use the submitted messages to build a database to help study misinformation during elections for a research project commissioned and supported by WhatsApp.

“The goal of this project is to study the misinformation phenomenon at scale. As more data flows in, we will be able to identify the most susceptible or affected issues, locations, languages, regions, and more,” said Proto’s co-founders Ritvvij Parrikh and Nasr ul Hadi in a statement quoted by Reuters.

WhatsApp also told the news agency: “The challenge of viral misinformation requires more collaborative efforts and cannot be solved by any one organisation alone.”

According to local press reports, suspicious messages can be shared to the WhatsApp tipline in four regional languages, with the fact-checking service covering videos and pictures, as well as text. The submitter is asked to confirm they want a fact-check and, on doing so, will get a subsequent response indicating whether the shared message is classified as true, false, misleading, disputed or out of scope.

Other related information may also be provided, the Economic Times reports.
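That flow is easy to model in outline. A purely illustrative sketch (not Proto’s or WhatsApp’s actual code) of the verdict categories and the confirm-then-respond step described in those reports:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Verdict(Enum):
    TRUE = "true"
    FALSE = "false"
    MISLEADING = "misleading"
    DISPUTED = "disputed"
    OUT_OF_SCOPE = "out of scope"

@dataclass
class TiplineResponse:
    verdict: Verdict
    notes: str = ""  # any "other related information" returned to the submitter

def classify(message_text: str) -> Verdict:
    # Placeholder for the human/assisted fact-checking step.
    return Verdict.OUT_OF_SCOPE

def handle_submission(message_text: str, confirmed: bool) -> Optional[TiplineResponse]:
    """Only messages whose submitter confirms they want a check get a verdict."""
    if not confirmed:
        return None
    return TiplineResponse(verdict=classify(message_text))
```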

WhatsApp has faced major issues with fakes being spread on its end-to-end encrypted platform — a robust security technology that makes the presence of bogus and/or maliciously misleading content harder to spot and harder to manage since the platform itself does not have access to it.

The spread of fakes has become a huge problem for social media platforms generally. One that’s arguably most acute in markets where literacy (and digital literacy) rates can vary substantially. And in India WhatsApp fakes have led to some truly tragic outcomes — with multiple reports in recent years detailing how fast-spreading digital rumors sparked or fuelled mob violence that’s led to death and injury.

India’s general election, which is due to take place in several phases starting later this month and running until the middle of next month, presents a more clearly defined threat — with the risk of a democratic process and outcome being manipulated by weaponized political disinformation.

WhatsApp’s platform is squarely in the frame given the app’s popularity in India.

It has also been accused of fuelling damaging political fakes during elections in Brazil last year, with Reuters reporting that the platform was flooded with falsehoods and conspiracy theories.

An outsized presence on social media appears to have aided the election of right-winger Jair Bolsonaro, while the leftwing candidate he beat in a presidential runoff later claimed businessmen backing Bolsonaro had paid to flood WhatsApp with misleading propaganda.

In India, local press reports suggest politicians across the spectrum are being accused of seeking to manipulate the forthcoming elections by seeding fakes on the popular encrypted messaging platform.

It’s clear that WhatsApp offers a conduit for spreading unregulated and unaccountable propaganda at scale with even limited resources. So whether a tipline can offer a robust check against weaponized political disinformation very much remains to be seen.

There certainly look to be limitations to this approach. Though it could also be developed and enhanced — such as if it gets more fully baked into the platform.

For now it looks like WhatsApp is testing the water and trying to gather more data to shape a more robust response.

The most obvious issue with the tipline is it requires a message recipient to request a check — an active step that means the person must know about the fact-check service, have the number available in their contacts, and trust the judgements of those running it.

Many WhatsApp users will fall outside those opt-in bounds.

It also doesn’t take much effort to imagine purveyors of malicious rumors spreading fresh fakes claiming the fact-checks/checkers are biased or manipulated to try to turn WhatsApp users against it.

This is likely why local grassroots political organizations are also being encouraged to submit any rumors they see circulating across the different regions during the election period. And why WhatsApp is talking about the need for collective action to combat the disinformation problem.

It will certainly need engagement across the political spectrum to counter any bias charges and plug gaps resulting from limited participation by WhatsApp users themselves.

How information on debunked fakes can be credibly fed back to Indian voters in a way that broadly reaches the electorate is what’s really key, though.

There’s no suggestion, here and now, that’s going to happen via WhatsApp itself — only those who request a check are set to get a response.

That could change in future. But, equally, the company may be wary of being seen to accept a role in the centralized distribution of (even fake) political propaganda. That way more accusations of bias likely lie.

In recent years Facebook has taken out adverts in traditional Indian media to warn about fakes. It has also experimented with other tactics to try to combat damaging WhatsApp rumors — such as using actors to role-play fakes in public to warn against false messages.

So the company looks to be hoping to develop a multi-stakeholder, multi-format information network off of its own platform to help get the message out about fakes spreading on WhatsApp.

Albeit, that’s clearly going to take time and effort. It’s also still not clear whether it will be effective vs an app that’s always on hand and capable of feeding in fresh fakes. 

The tipline also, inevitably, looks slow and painstaking beside the wildfire spread of digital fakes. And it’s not clear how much of a check on spread and amplification it can offer in this form. Certainly initially — given the fact-checking process itself necessarily takes time.

A startup, even one that’s being actively supported by WhatsApp, is unlikely to have the resources to speedily fact-check the volume of fakes that will be distributed across such a large market, fuelled by election interests. Yet timely intervention is critical to prevent fakes going viral.

So, again, this initiative looks unlikely to stop the majority of bogus WhatsApp messages from being swallowed and shared. But the data-set derived from the research project which underpins the tipline may help the company fashion a more responsive and proactive approach to contextualizing and debunking malicious rumors in future.

Proto says it plans to submit its learnings to the International Center for Journalists to help other organizations learn from its efforts.

The Economic Times also quotes Fergus Bell, founder and CEO of Dig Deeper Media, suggesting the research will help create “global benchmarks” for those wishing to tackle misinformation in their own markets.

In the meantime, though, the votes go on.

YouTube tightens restrictions on channel of UK far right activist — but no ban

YouTube has placed new restrictions on the channel of a UK far right activist which are intended to make hate speech less easy to discover on its platform.

Restrictions on Stephen Yaxley-Lennon’s YouTube channel include removing certain of his videos from recommendations. YouTube is also taking away his ability to livestream to his now close to 390,000 channel subscribers.

Yaxley-Lennon, who goes by the name ‘Tommy Robinson’ on social media, was banned from Twitter a year ago.

Buzzfeed first reported the new restrictions. A YouTube spokesperson confirmed the shift in policy, telling us: “After consulting with third party experts, we are applying a tougher treatment to Tommy Robinson’s channel in keeping with our policies on borderline content. The content will be placed behind an interstitial, removed from recommendations, and stripped of key features including livestreaming, comments, suggested videos, and likes.”

Test searches for ‘Tommy Robinson’ on YouTube now return a series of news reports — instead of Yaxley-Lennon’s own channel, as was the case just last month.

YouTube had already demonetized Yaxley-Lennon’s channel back in January for violating its ad policies.

But as we reported last month Google has been under increasing political pressure in the UK to tighten its policies over the far right activist.

The policy shift applies to videos uploaded by Yaxley-Lennon that aren’t illegal or otherwise in breach of YouTube’s community standards (as the company applies them) but which have nonetheless been flagged by users as potential violations of the platform’s policies on hate speech and violent extremism.

In such instances YouTube says it will review the videos; those not in violation of its policies, but which nonetheless contain controversial religious or extremist content, will be placed behind an interstitial, removed from recommendations, and stripped of key features including comments, suggested videos, and likes.

Such videos will also not be eligible for monetization.
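Reduced to a decision flow, the treatment YouTube describes amounts to a review step followed by a fixed bundle of restrictions. A purely illustrative sketch (not YouTube’s actual systems), assuming the two review outcomes as inputs:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class BorderlineTreatment:
    """Restrictions applied to borderline videos that stay up."""
    behind_interstitial: bool = True
    recommended: bool = False
    comments_enabled: bool = False
    suggested_videos_shown: bool = False
    likes_enabled: bool = False
    livestreaming_allowed: bool = False
    monetized: bool = False

def review_flagged_video(violates_policy: bool, is_borderline: bool) -> Union[str, BorderlineTreatment]:
    if violates_policy:
        return "remove"                 # breaches hate speech / violent content policies
    if is_borderline:
        return BorderlineTreatment()    # stays on the platform, heavily restricted
    return "no action"
```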

The company says its goal with the stricter approach to Yaxley-Lennon’s content is to strike a balance between upholding free expression and a point of public and historical record, and keeping hateful content from being spread or recommended to others.

YouTube said it carefully considered Yaxley-Lennon’s case — consulting with external experts and UK academics — before deciding to apply the tougher treatment.

Affected videos will still remain on YouTube — albeit behind an interstitial. They also won’t be recommended, and will be stripped of the usual social features including comments, suggested videos, and likes.

Of course it remains to be seen how tightly YouTube will apply the new, more restrictive policy in this case. And whether Yaxley-Lennon himself will adapt his video strategy to work around tighter rules on that channel.

The far right is very well versed in using coded language and dog whistle tactics to communicate with its followers and spread racist messages under the mainstream radar.

Yaxley-Lennon has had a presence on multiple social media channels, adapting the content to the different platforms. Though YouTube is the last mainstream channel still available to him after Facebook kicked him off its platform in February. Albeit, he was quickly able to work around Facebook’s ban simply by using a friend’s Facebook account to livestream himself harassing a journalist at his home late at night.

Police were called out twice in that instance. And in a vlog uploaded to YouTube after the incident Yaxley-Lennon threatened other journalists to “expect a knock at the door”.

Shortly afterwards the deputy leader of the official opposition raised his use of YouTube to livestream harassment in parliament, telling MPs then that: “Every major social media platform other than YouTube has taken down Stephen Yaxley-Lennon’s profile because of his hateful conduct.”

The secretary of state for digital, Jeremy Wright, responded by urging YouTube to “reconsider their judgement” — saying: “We all believe in freedom of speech. But we all believe too that that freedom of speech has limits. And we believe that those who seek to intimidate others, to potentially of course break the law… that is unacceptable. That is beyond the reach of the type of freedom of speech that we believe should be protected.”

YouTube claims it removes videos that violate its hate speech and violent content policies. But in previous instances involving Yaxley-Lennon it has told us that specific videos of his — including the livestreamed harassment that was raised in parliament — do not constitute a breach of its standards.

It’s now essentially admitting that those standards are too weak in instances of weaponized hate.

Yaxley-Lennon, a former member of the neo-Nazi British National Party and one of the founders of the far right, Islamophobic English Defence League, has used social media to amplify his message of hate while also soliciting donations to fund individual far right ‘activism’ — under the ‘Tommy Robinson’ moniker.

The new YouTube restrictions could reduce his ability to leverage the breadth of Google’s social platform to reach a wider and more mainstream audience than he otherwise would.

Albeit, it remains trivially easy for anyone who already knows the ‘Tommy Robinson’ ‘brand’ to work around the YouTube restrictions by using another mainstream Google-owned technology. A simple Google search for “Tommy Robinson YouTube channel” returns direct links to his channel and content at the top of search results.

Yaxley-Lennon’s followers will also continue to be able to find and share his YouTube content by sharing direct links to it — including on mainstream social platforms.

Though the livestream ban is a significant restriction — if it’s universally applied to the channel — which will make it harder for Yaxley-Lennon to communicate instantly at a distance with followers in his emotive vlogging medium of choice.

He has used the livestreaming medium skilfully to amplify and whip up hate while presenting himself to his followers as a family man afraid for his wife and children. (For the record: Yaxley-Lennon’s criminal record includes convictions for violence, public order offences, drug possession, financial and immigration frauds, among other convictions.)

If Google is hoping to please everyone by applying a ‘third route’ of tighter restrictions for a hate speech weaponizer yet no total ban it will likely just end up pleasing no one and taking flak from both sides.

The company does point out it removes channels of proscribed groups and any individuals formally linked to such groups. And in this case the related far right groups have not been proscribed by the UK government. So the UK government could certainly do much more to check the rise of domestic far right hate.

But YouTube could also step up and take a leadership position by setting robust policies against individuals who seek to weaponize hate.

Instead it continues to fiddle around the edges — trying to fudge the issue by claiming it’s about ‘balancing’ speech and community safety.

In truth hate speech suppresses the speech of those it targets with harassment. So if social networks really want to maximize free speech across their communities they have to be prepared to weed out bad actors who would shrink the speech of minorities by weaponizing hate against them.

European parliament votes for controversial copyright reform (yes, again)

The European Parliament has voted to pass a controversial reform of online copyright rules that critics contend will result in big tech platforms pre-filtering user generated content uploads.

The results of the final vote in the EU parliament were 348 in favor vs 274 against.

An amendment that would have thrown out the most controversial component of the copyright reform — aka Article 13, which makes platforms liable for copyright infringements committed by their users — was rejected by just five votes.

In an earlier vote last fall the EU Parliament also backed the copyright reform proposal, passing negotiations to the EU Council. Months of closed door negotiations followed between representatives of EU Member States and institutions, in so called trilogue discussions, culminating in a final text being agreed last month — which was then handed back to parliament for its final vote today.

Tweaks to the reform agreed by Member States during trilogue appear intended to address criticism that it imposes so-called ‘upload filters’ by default — instead requiring larger platforms to obtain licences for certain types of protected content ahead of time. Though critics still aren’t impressed.

Speaking out against the proposals in the parliament ahead of the vote, Pirate Party member and MEP Julia Reda, who is part of Group of the Greens/European Free Alliance in the EU parliament, highlighted the scale of popular protests against the copyright reform, saying 200,000 people attended demonstrations in the region this weekend and five million have signed a petition against the reform — claiming there has “never been such broad protest” against an EU directive.

She also accused the parliament of “thoroughly ignoring” the popular protests and warned it risks convincing young people there’s no point in engaging with democratic protest.

“The most tragic thing about this process is a new generation who are voting in the European elections for the first time this year are learning a lesson: Your protests aren’t worth anything, politics will spread lies about you, and won’t care for factual arguments if geopolitical interests are at stake,” said Reda in an impassioned speech in parliament this afternoon ahead of the vote.

Her speech was interrupted several times by shouts from other MEPs disagreeing.

Freedom of expression vs creative industry

The copyright reform campaign has been massively polarized throughout, with one side claiming it means the end of the free Internet and the death of memes because it will result in all online uploads being pre-filtered; and the other accusing opponents of being in the pay of tech giants, which they accuse of freeloading and leeching off Europe’s creative industries by monetizing copyrighted content without paying for use.

Both sides have also accused each other of spreading disinformation to further their cause. There’s been zero love lost across this divide as lobbyists from the two sides have piled on (and on).

Another element of the reform, Article 11, is a proposal to extend digital copyright to cover the ledes of news stories — which aggregators such as Google News scrape and display.

Unsurprisingly that measure has strong support among European media giants like Axel Springer, and critics of the reform accuse its architects of being in hock to the newspaper industry, which hopes to benefit financially by being able to charge link aggregator platforms like Google for displaying its content in future.

In recent years a couple of individual EU member states have passed similar laws to extend copyright to news snippets — which led Google to pull Google News entirely from Spain, while in Germany publishers ended up providing their snippets for free. An EU-wide rule could change the dynamics, though.

It’s certainly a much bigger business decision for Google to pull the plug on Google News across the whole of Europe, rather than just in Spain. Though, equally, Google could just come up with a compliance workaround to evade the requirement to pay.

Less discussed elements of the reform include proposals around text and data mining (TDM), which have implications for AI research — including a mandatory copyright exception for TDM conducted for research purposes. Teaching and educational purposes are also exempt. But rightholders can opt out of having their works datamined by entities other than research organisations.

The European Commission’s VP for the Digital Single Market tweeted in support of the parliament’s vote today — dubbing it a “big step ahead” which he said will reduce fragmentation across the bloc.

But in a follow up tweet he sought to address concerns that the reform will chill freedom of expression online, writing: “I know there are lots of fears about what users can do or not – now we have clear guarantees for , teaching and online creativity. Member States must make full use of these safeguards in national law.”

In a press release following the parliament’s vote the Commission confirms the text will need to be formally endorsed by the Council of the European Union — which will take place via another vote in the coming weeks, so likely early next month.

Assuming the Council gives its thumbs up, the final text will be published in the Official Journal of the EU, and Member States will then have 24 months to transpose the rules into their national legislation. So the timetable for the copyright directive coming into force is likely 2021.

An accompanying Commission memo on the directive also seeks to address some of the criticisms, with the Commission claiming it “protects freedom of expression [and] sets strong safeguards for users, making clear that everywhere in Europe the use of existing works for purposes of quotation, criticism, review, caricature as well as parody are explicitly allowed”.

“This means that memes and similar parody creations can be used freely. The interests of the users are also preserved through effective mechanisms to swiftly contest any unjustified removal of their content by the platforms,” it adds, in what critics will surely dub cold comfort attempts to paper over the overarching chilling effect on expression from pushing content liability onto platforms.

In another section of the memo, the Commission also writes that the directive does not “impose uploading filters” — nor add any specific technology to recognise illegal content.

“Under the new rules, certain online platforms will be required to conclude licensing agreements with right holders — for example, music or film producers — for the use of music, videos or other copyright protected content. If licences are not concluded, these platforms will have to make their best efforts to ensure that content not authorised by the right holders is not available on their website. The “best effort” obligation does not prescribe any specific means or technology,” it writes.

Though, again, critics argue that will simply translate into upload filters in practice anyway — as platforms will be encouraged to “over-comply” with the rules to “stay on the safe side”, as Reda tells it.

Also critical of the reform was former MEP Catherine Stihler, who’s now CEO of an open data advocacy not-for-profit called the Open Knowledge Foundation.

In a reaction statement she dubbed the vote “a massive blow for every internet user in Europe”. “We now risk the creation of a more closed society at the very time we should be using digital advances to build a more open world where knowledge creates power for the many, not the few,” she suggested.

Following the vote, Tal Niv, GitHub’s VP of law and policy, also took a critical but more nuanced position, writing: “We’re thankful that policymakers listened and excluded ‘open source software developing and sharing platforms’ from the potential requirement to implement upload filters, which would have made the software ecosystem more fragile. However, the Directive that passed still contains challenges for developers.”

“Anyone developing a platform with EU users that involves sharing links or content faces great uncertainty. The ramifications include being unable to develop features that web users currently expect, and having to implement very expensive and inaccurate automated filtering. On the other hand, inclusion of a mandatory copyright exception for text and data mining in the Directive is welcome, and puts EU developers on a more even playing field relative to their US peers in the development of machine learning and artificial intelligence; looking ahead it will be crucial for member states to implement this exception in a consistent fashion.”

The Computer & Communications Industry Association reacted with disappointment too, warning in a statement that Article 13 undermines the legality of the social and sharing tools and websites that Europeans use every day and saying the reform falls short of “a balanced and modern framework for copyright” despite citing some “recent improvements”.

“We fear it will harm online innovation and restrict online freedoms in Europe. We urge Member States to thoroughly assess and try to minimize the consequences of the text when implementing it,” added Maud Sacquet, CCIA Europe’s senior policy manager.

Monique Goyens, director general of The European Consumer Organisation, BEUC, also described it as a “very unbalanced copyright law”.

“Despite the warnings and concerns of academics, privacy bodies, UN representatives and hundreds of thousands of consumers across Europe, the European Parliament has given its go-ahead to a very unbalanced copyright law. Consumers will have to bear the consequences of this decision,” she warned.

On the flip side professional content creators were jubilant.

“Through this historic vote, a message was sent by Europe to the world, in favour of culture, creation, authors, artists and journalists, and their right to fair remuneration in the digital world,” wrote the Society of Authors, Composers and Publishers of Music in a wordy statement that goes into detail in an attempt to rebut various specific charges laid against the reform. (Such as pointing out that the final text of Article 13 includes an exception for startups — “whose growth will be promoted by clarifying their situation for the use of content protected by authors’ rights”, it suggests.)

“This vote was an act of European sovereignty and a victory for democracy, because it was possible despite one of the most violent campaigns of lobbying and disinformation in the history of the European Union, on the part of those who wanted at all costs to avoid adopting a balanced text,” it added.

In an analysis following the vote law firm Linklaters’ Kathy Berry suggests the controversy and polarization around the copyright reform debate is part of a broader “Hollywood v Silicon Valley” tension — between “content creators that want a high level of copyright protection based on traditional models, and the tech industry that wants to clear the path for new and innovative ways to use and share content”.

“While Article 13 may have noble aims, in its current form it functions as little more than a set of ideals, with very little guidance on exactly which service providers will be caught by it or what steps will be sufficient to comply,” she writes, delving into the implications for big tech. “This is likely to result in an ongoing lack of legal and commercial certainty until the scope of the Directive is fleshed out by either the Commission’s proposed guidance or by European jurisprudence.”

On Article 11 extending copyright to news snippets Berry says the final version of the text is “much watered down” — noting that it excludes both hyperlinks and “very short extracts” of publications — going on to suggest it’s “unlikely to have any significant impact on news aggregators like Google News after all”.

Telegram adds ‘delete everywhere’ nuclear option — killing chat history

Telegram has added a feature that lets a user delete messages in one-to-one and/or group private chats, after the fact, and not only from their own inbox.

The new ‘nuclear option’ delete feature allows a user to selectively delete their own messages and/or messages sent by any/all others in the chat. They don’t even have to have composed the original message or begun the thread to do so. They can just decide it’s time.

Let that sink in.

All it now takes is a few taps to wipe all trace of a historical communication — from both your own inbox and the inbox(es) of whoever else you were chatting with (assuming they’re running the latest version of Telegram’s app).

Just over a year ago Facebook’s founder Mark Zuckerberg was criticized for silently and selectively testing a similar feature by deleting messages he’d sent from his interlocutors’ inboxes — leaving absurdly one-sided conversations. The episode was dubbed yet another Facebook breach of user trust.

Facebook later rolled out a much diluted Unsend feature — giving all users the ability to recall a message they’d sent but only within the first 10 minutes.

Telegram has gone much, much further. This is a perpetual, universal unsend of anything in a private chat.

The “delete any message in both ends in any private chat, anytime” feature has been added in an update to version 5.5 of Telegram — which the messaging app bills as offering “more privacy”, among a slate of other updates including search enhancements and more granular controls.

To delete a message from both ends a user taps on the message, selects ‘delete’ and is then offered a choice of ‘delete for [the name of the other person in the chat, or ‘everyone’]’ or ‘delete for me’. Selecting the former deletes the message everywhere, while the latter just removes it from your own inbox.
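The same capability is exposed to third-party clients through Telegram’s API, which libraries such as Telethon wrap. A minimal sketch, assuming Telethon and placeholder credentials; revoke=True corresponds to deleting for everyone, revoke=False to deleting only from your own side:

```python
# pip install telethon
from telethon import TelegramClient

API_ID = 12345                   # placeholder: API ID from my.telegram.org
API_HASH = "0123456789abcdef"    # placeholder: API hash

async def purge(chat, message_ids):
    async with TelegramClient("session", API_ID, API_HASH) as client:
        # revoke=True removes the messages for every participant in the chat;
        # revoke=False would only remove them from this account's own view.
        await client.delete_messages(chat, message_ids, revoke=True)

# Usage (hypothetical chat and message IDs):
#   import asyncio
#   asyncio.run(purge("some_contact", [101, 102, 103]))
```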

Explaining the rationale for adding such a nuclear option via a post to his public Telegram channel yesterday, founder Pavel Durov argues the feature is necessary because of the risk of old messages being taken out of context — suggesting the problem is getting worse as the volume of private data stored by chat partners continues to grow exponentially.

“Over the last 10-20 years, each of us exchanged millions of messages with thousands of people. Most of those communication logs are stored somewhere in other people’s inboxes, outside of our reach. Relationships start and end, but messaging histories with ex-friends and ex-colleagues remain available forever,” he writes.

“An old message you already forgot about can be taken out of context and used against you decades later. A hasty text you sent to a girlfriend in school can come haunt you in 2030 when you decide to run for mayor.”

Durov goes on to claim that the new wholesale delete gives users “complete control” over messages, regardless of who sent them.

However that’s not really what it does. More accurately it removes control from everyone in any private chat, and opens the door to the most paranoid; lowest common denominator; and/or a sort of general entropy/anarchy — allowing anyone in any private thread to choose to edit or even completely nuke the chat history if they so wish at any moment in time.

The feature could allow for self-serving, silent and/or malicious edits that are intended to gaslight/screw with others, such as by making them look mad or bad. (A quick screengrab later and a ‘post-truth’ version of a chat thread is ready for sharing elsewhere, where it could be passed off as a genuine conversation even though it’s manipulated and therefore fake.)

Or else the motivation for editing chat history could be a genuine concern over privacy, such as to be able to remove sensitive or intimate stuff — say after a relationship breaks down.

Or just for kicks/the lolz between friends.

Either way, whoever deletes first seizes control of the chat history — taking control away from everyone else in the process. RIP consent. This is possible because Telegram’s implementation of the super delete feature covers all messages, not just your own, and literally removes all trace of the deleted comms.

So unlike rival messaging app WhatsApp — which also lets users delete a message for everyone in a chat after sending it, though strictly only messages they sent themselves — there is no notification automatically baked into the chat history to record that a message was deleted.

There’s no record, period. The ‘record’ is purged. There’s no sign at all there was ever a message in the first place.

We tested this — and, well, wow.

It’s hard to think of a good reason not to create, at the very least, a record that a message was deleted — which would offer a check on misuse.

But Telegram has not offered anything. Anyone can secretly and silently purge the private record.

Again, wow.

There’s also no way for a user to recall a deleted message after deleting it (even for the person who hit the delete button). At face value it appears to be gone for good. (A security audit would be required to determine whether a copy lingers anywhere on Telegram’s servers for standard chats; only its ‘secret chats’ feature uses end-to-end encryption, which Telegram claims “leave[s] no trace on our servers”.)

In our tests on iOS we also found that no notification is sent when a message is deleted from a Telegram private chat — so other people in an old convo might simply never notice changes have been made, or not until long after. After all, human memory is far from perfect, and chat threads are exactly the sort of fast-flowing communication medium where it’s easy to forget the exact details of what was said.

Durov makes that point himself in defence of the feature, arguing that silly stuff you once said shouldn’t be dredged back up to haunt you.

But it cuts both ways. (The other way being the ability for the sender of an abusive message to delete it and pretend it never existed, for example, or for a flasher to send and subsequently delete dick pics.)

The feature is so powerful there’s clearly massive potential for abuse — whether by criminals using Telegram to sell drugs or traffic other stuff illegally, hitting the delete everywhere button to cover their tracks and purge any record of their nefarious activity; or by coercive or abusive individuals seeking to screw with a former friend or partner.

The best way to think of Telegram now is that all private communications in the app are essentially ephemeral.

Anyone you’ve ever chatted to could decide to delete everything you said (or they said) and just go ahead — without your knowledge, let alone your consent.

The lack of any notification that a message has been deleted will certainly open Telegram to accusations it’s being irresponsible by offering such a nuclear delete option with zero guard rails. (And, indeed, there’s no shortage of angry comments on its tweet announcing the feature.)

Though the company is no stranger to controversy and has structured its business intentionally to minimize the risk of it being subject to any kind of regulatory and/or state control, with servers spread opaquely all over the world, and a nomadic development operation which sees its coders regularly switch the country they’re working out of for months at a time.

Durov himself acknowledges there is a risk of misuse of the feature in his channel post, where he writes: “We know some people may get concerned about the potential misuse of this feature or about the permanence of their chat histories. We thought carefully through those issues, but we think the benefit of having control over your own digital footprint should be paramount.”

Again, though, that’s a one-sided interpretation of what’s actually being enabled here. Because the feature inherently removes control from anyone it’s applied to. So it only offers ‘control’ to the person who first thinks to exercise it. Which is in itself a form of massive power asymmetry.

For historical chats the person who deletes first might be someone with something bad to hide. Or it might be the most paranoid person with the best threat awareness and personal privacy hygiene.

But suggesting the feature universally hands control to everyone simply isn’t true.

It’s an argument in line with a libertarian way of thinking that lauds the individual as having agency — and therefore seeks to empower the person who exercises it. (And Durov is a long-time advocate of libertarianism, so the design choice meshes with his personal philosophy.)

On a practical level, the presence of such a nuclear delete on Telegram’s platform arguably means the only sensible option for users who don’t want to abandon the platform is to proactively delete all private chats on a regular, rolling basis — to minimize the risk of future misuse and/or manipulation of their chat history. (Albeit, what doing that will do to your friendships is a whole other question.)

Users may also wish to back up their own chats, since they can no longer rely on Telegram to do that for them.

While, at the other end of the spectrum — for those who want to be really sure they nuke all trace of a message — there are a couple of practical pitfalls that could throw a spanner in the works.

In our tests we found Telegram’s implementation did not delete push notifications. So, for recently sent and deleted messages, it was still possible to view the content of a deleted message via a persisting push notification even after the message itself had been removed within the app.

Though of course, for historical chats — which is where this feature is aimed, aka rewriting chat history — there aren’t likely to be any push notifications still floating around months or even years later to cause a headache.

The other major issue is that the feature is unlikely to function properly on earlier versions of Telegram. So if you go ahead and ‘delete everywhere’ and a message isn’t successfully purged because someone in the chat was still running an older version of the app, there’s no way back to try and delete it again.

Plus of course if anyone has screengrabbed your chats already there’s nothing you can do about that.

In terms of wider impact, the nuclear delete might also have the effect of encouraging more screengrabbing (or other backups) — as users hedge against future message manipulation and/or purging. Or to make sure they have a record of abuse.

Which would just create more copies of your private messages in places you can’t control at all — and where they could potentially leak if the person creating the backups doesn’t secure them properly. So the whole thing risks being counterproductive to privacy and security, really.

Durov claims he’s comfortable with the contents of his own Telegram inbox, writing on his channel that “there’s not much I would want to delete for both sides” — while simultaneously claiming that “for the first time in 23 years of private messaging, I feel truly free and in control”.

The truth is the sensation of control he’s feeling is fleeting and relative.

In another test we performed we were able to delete private messages from Durov’s own inbox, including missives we’d sent to him in a private chat and one he’d sent us. (At least, in so far as we could tell — not having access to Telegram servers to confirm. But the delete option was certainly offered and content (both ours and his) disappeared from our end after we hit the relevant purge button.)

Only Durov could confirm for sure that the messages have gone from his end too. And most probably he’d have trouble doing so as it would require incredible memory for minor detail.

But the point is if the deletion functioned as Telegram claims it does, purging equally at both ends, then Durov was not in control at all because we reached right into his inbox and selectively rubbed some stuff out. He got no say at all.

That’s a funny kind of agency and a funny kind of control.

One thing certainly remains in Telegram users’ control: The ability to choose your friends — and choose who you talk to privately.

Turns out you need to exercise that power very wisely.

Otherwise, well, other encrypted messaging apps are available.

Facebook staff raised concerns about Cambridge Analytica in September 2015, per court filing

Further details have emerged about when and how much Facebook knew about data-scraping by the disgraced and now defunct Cambridge Analytica political data firm.

Last year a major privacy scandal hit Facebook after it emerged CA had paid GSR, a developer with access to Facebook’s platform, to extract personal data on as many as 87M Facebook users without proper consents.

Cambridge Analytica’s intention was to use the data to build psychographic profiles of American voters to target political messages — with the company initially working for the Ted Cruz and later the Donald Trump presidential campaigns.

But employees at Facebook appear to have raised internal concerns about CA scraping user data in September 2015 — i.e. months earlier than Facebook previously told lawmakers it became aware of the GSR/CA breach (December 2015).

The latest twist in the privacy scandal has emerged via a redacted court filing in the U.S. — where the District of Columbia is suing Facebook in a consumer protection enforcement case.

Facebook is seeking to have documents pertaining to the case sealed, while the District argues there is nothing commercially sensitive to require that.

In its opposition to Facebook’s motion to seal the document, the District includes a redacted summary (screengrabbed below) of the “jurisdictional facts” it says are contained in the papers Facebook is seeking to keep secret.

According to the District’s account a Washington D.C.-based Facebook employee warned others in the company about Cambridge Analytica’s data-scraping practices as early as September 2015.

Under questioning in Congress last April, Mark Zuckerberg was asked directly by congressman Mike Doyle when Facebook had first learned about Cambridge Analytica using Facebook data — and whether specifically it had learned about it as a result of the December 2015 Guardian article (which broke the story).

Zuckerberg responded with a “yes” to Doyle’s question.

Facebook repeated the same line to the UK’s Digital, Culture, Media and Sport (DCMS) committee last year, over a series of hearings with less senior staffers.

Damian Collins, the chair of the DCMS committee — which made repeated requests for Zuckerberg himself to testify in front of its enquiry into online disinformation, only to be repeatedly rebuffed — tweeted yesterday that the new detail could suggest Facebook “consistently mislead” the British parliament.

The DCMS committee has previously accused Facebook of deliberately misleading its enquiry on other aspects of the CA saga, with Collins taking the company to task for displaying a pattern of evasive behavior.

The earlier charge that it misled the committee refers to a hearing in Washington in February 2018 — when Facebook sent its UK head of policy, Simon Milner, and its head of global policy management, Monika Bickert, to field DCMS’ questions — where the pair failed to inform the committee about a legal agreement Facebook had made with Cambridge Analytica in December 2015.

The committee’s final report was also damning of Facebook, calling for regulators to instigate antitrust and privacy probes of the tech giant.

Meanwhile, questions have continued to be raised about Facebook’s decision to hire GSR co-founder Joseph Chancellor, who reportedly joined the company around November 2015.

The question now is: if Facebook knew there were concerns about CA data-scraping prior to hiring the co-founder of the company that sold scraped Facebook user data to CA, why did it go ahead and hire Chancellor?

The GSR co-founder has never been made available by Facebook to answer questions from politicians (or press) on either side of the pond.

Last fall he was reported to have quietly left Facebook, with no comment from Facebook on the reasons behind his departure — just as it had never explained why it hired him in the first place.

But the new timeline that’s emerged of what Facebook knew when makes those questions more pressing than ever.

Reached for a response to the details contained in the District of Columbia’s court filing, a Facebook spokeswoman sent us this statement:

Facebook was not aware of the transfer of data from Kogan/GSR to Cambridge Analytica until December 2015, as we have testified under oath

In September 2015 employees heard speculation that Cambridge Analytica was scraping data, something that is unfortunately common for any internet service. In December 2015, we first learned through media reports that Kogan sold data to Cambridge Analytica, and we took action. Those were two different things.

Facebook did not engage with questions about any of the details and allegations in the court filing.

A little later in the court filing, the District of Columbia writes that the documents Facebook is seeking to seal are “consistent” with its allegations that “Facebook has employees embedded within multiple presidential candidate campaigns who… knew, or should have known… [that] Cambridge Analytica [was] using the Facebook consumer data harvested by [GSR’s] [Aleksandr] Kogan throughout the 2016 [United States presidential] election.”

It goes on to suggest that Facebook’s concern to seal the document is “reputational”, suggesting — in another redacted segment (below) — that it might “reflect poorly” on Facebook that a DC-based employee had flagged Cambridge Analytica months prior to news reports of its improper access to user data.

“The company may also seek to avoid publishing its employees’ candid assessments of how multiple third-parties violated Facebook’s policies,” it adds, chiming with arguments made last year by GSR’s Kogan who suggested the company failed to enforce the terms of its developer policy, telling the DCMS committee it therefore didn’t have a “valid” policy.

As we’ve reported previously, the UK’s data protection watchdog — which has an ongoing investigation into CA’s use of Facebook data — was passed information by Facebook as part of that probe which showed that three “senior managers” had been involved in email exchanges, prior to December 2015, concerning the CA breach.

It’s not clear whether these exchanges are the same correspondence the District of Columbia has obtained and which Facebook is seeking to seal. Or whether there were multiple email threads raising concerns about the company.

The ICO passed the correspondence it obtained from Facebook to the DCMS committee — which last month said it had agreed at the request of the watchdog to keep the names of the managers confidential. (The ICO also declined to disclose the names or the correspondence when we made a Freedom of Information request last month — citing rules against disclosing personal data and its ongoing investigation into CA meaning the risk of release might be prejudicial to its investigation.)

In its final report the committee said this internal correspondence indicated “profound failure of governance within Facebook” — writing:

[I]t would seem that this important information was not shared with the most senior executives at Facebook, leading us to ask why this was the case. The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests.

We reached out to the ICO for comment on the information to emerge via the Columbia suit, and also to the Irish Data Protection Commission, the lead DPA for Facebook’s international business, which currently has 15 open investigations into Facebook or Facebook-owned businesses related to various security, privacy and data protection issues.

Last year the ICO issued Facebook with the maximum possible fine under UK law for the CA data breach.

Shortly after, Facebook announced it would appeal, saying the watchdog had not found evidence that any UK users’ data was misused by CA.

A date for the hearing of the appeal set for earlier this week was canceled without explanation. A spokeswoman for the tribunal court told us a new date would appear on its website in due course.

Facebook’s AI couldn’t spot mass murder

Facebook has given another update on measures it took and what more it’s doing in the wake of the livestreamed video of a gun massacre by a far right terrorist who killed 50 people in two mosques in Christchurch, New Zealand.

Earlier this week the company said the video of the slayings had been viewed less than 200 times during the livestream broadcast itself, and about 4,000 times before it was removed from Facebook — with the stream not reported to Facebook until 12 minutes after it had ended.

None of the users who watched the killings unfold on its platform in real time apparently reported the stream, according to the company.

It also previously said it removed 1.5 million versions of the video from its site in the first 24 hours after the livestream, with 1.2M of those caught at the point of upload — meaning it failed to stop 300,000 uploads at that point. Though as we pointed out in our earlier report those stats are cherrypicked — and only represent the videos Facebook identified. We found other versions of the video still circulating on its platform 12 hours later.

In the wake of the livestreamed terror attack, Facebook has continued to face calls from world leaders to do more to make sure such content cannot be distributed by its platform.

The prime minister of New Zealand, Jacinda Ardern told media yesterday that the video “should not be distributed, available, able to be viewed”, dubbing it: “Horrendous.”

She confirmed Facebook had been in contact with her government but emphasized that in her view the company has not done enough.

She also later told the New Zealand parliament: “We cannot simply sit back and accept that these platforms just exist and that what is said on them is not the responsibility of the place where they are published. They are the publisher. Not just the postman.”

We asked Facebook for a response to Ardern’s call for online content platforms to accept publisher-level responsibility for the content they distribute. Its spokesman avoided the question — pointing instead to its latest piece of crisis PR which it titles: “A Further Update on New Zealand Terrorist Attack”.

Here it writes that “people are looking to understand how online platforms such as Facebook were used to circulate horrific videos of the terrorist attack”, saying it therefore “wanted to provide additional information from our review into how our products were used and how we can improve going forward”, before going on to reiterate many of the details it has previously put out.

Including that the massacre video was quickly shared to the 8chan message board by a user posting a link to a copy of the video on a file-sharing site. This was prior to Facebook itself being alerted to the video being broadcast on its platform.

It goes on to imply 8chan was a hub for broader sharing of the video — claiming that: “Forensic identifiers on many of the videos later circulated, such as a bookmarks toolbar visible in a screen recording, match the content posted to 8chan.”

So it’s clearly trying to make sure it’s not singled out by political leaders seeking policy responses to the challenge posed by online hate and terrorist content.

A further detail it chooses to dwell on in the update is how the AI it uses to aid the human review process for flagged Facebook Live streams is in fact tuned to “detect and prioritize videos that are likely to contain suicidal or harmful acts” — with the AI pushing such videos to the top of human moderators’ content heaps, above all the other stuff they also need to look at.

Clearly “harmful acts” were involved in the New Zealand terrorist attack. Yet Facebook’s AI was unable to detect a massacre unfolding in real time. A mass killing involving an automatic weapon slipped right under the robot’s radar.

Facebook explains this by saying it’s because it does not have the training data to create an algorithm that understands it’s looking at mass murder unfolding in real time.

It also implies the task of training an AI to catch such a horrific scenario is exacerbated by the proliferation of videos of first person shooter videogames on online content platforms.

It writes: “[T]his particular video did not trigger our automatic detection systems. To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare. Another challenge is to automatically discern this content from visually similar, innocuous content – for example if thousands of videos from live-streamed video games are flagged by our systems, our reviewers could miss the important real-world videos where we could alert first responders to get help on the ground.”

The videogame element is a chilling detail to consider.

It suggests that a harmful real-life act that mimics a violent video game might just blend into the background, as far as AI moderation systems are concerned; invisible in a sea of innocuous, virtually violent content churned out by gamers. (Which in turn makes you wonder whether the Internet-steeped killer in Christchurch knew — or suspected — that filming the attack from a videogame-esque first person shooter perspective might offer a workaround to dupe Facebook’s imperfect AI watchdogs.)

Facebook’s post is doubly emphatic that AI is “not perfect” and is “never going to be perfect”.

“People will continue to be part of the equation, whether it’s the people on our team who review content, or people who use our services and report content to us,” it writes, reiterating yet again that it has ~30,000 people working in “safety and security”, about half of whom are doing the sweating hideous toil of content review.

This is, as we’ve said many times before, a fantastically tiny number of human moderators given the vast scale of content continually uploaded to Facebook’s 2.2BN+ user platform.

Moderating Facebook remains a hopeless task because so few humans are doing it.

Moreover AI can’t really help. (Later in the blog post Facebook also writes vaguely that there are “millions” of livestreams broadcast on its platform every day, saying that’s why adding a short broadcast delay — such as TV stations do — wouldn’t at all help catch inappropriate real-time content.)

At the same time Facebook’s update makes it clear how much its ‘safety and security’ systems rely on unpaid humans too — aka Facebook users taking the time (and presence of mind) to report harmful content.

Some might say that’s an excellent argument for a social media tax.

The fact Facebook did not get a single report of the Christchurch massacre livestream while the terrorist attack unfolded meant the content was not prioritized for “accelerated review” by its systems, which it explains prioritize reports attached to videos that are still being streamed — because “if there is real-world harm we have a better chance to alert first responders and try to get help on the ground”.

Though it also says it expanded its acceleration logic last year to “also cover videos that were very recently live, in the past few hours”.

But again it did so with a focus on suicide prevention — meaning the Christchurch video would only have been flagged for accelerated review in the hours after the stream ended if it had been reported as suicide content.

So the ‘problem’ is that Facebook’s systems don’t prioritize mass murder.

“In [the first] report, and a number of subsequent reports, the video was reported for reasons other than suicide and as such it was handled according to different procedures,” it writes, adding it’s “learning from this” and “re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review”.

No shit.

Facebook also discusses its failure to stop versions of the massacre video from resurfacing on its platform, having been — as it tells it — “so effective” at preventing the spread of propaganda from terrorist organizations like ISIS with the use of image and video matching tech.

It claims its tech was outfoxed in this case by “bad actors” creating many different edited versions of the video to try to thwart filters, as well as by the various ways “a broader set of people distributed the video and unintentionally made it harder to match copies”.

So, essentially, the ‘virality’ of the awful event created too many versions of the video for Facebook’s matching tech to cope.

“Some people may have seen the video on a computer or TV, filmed that with a phone and sent it to a friend. Still others may have watched the video on their computer, recorded their screen and passed that on. Websites and pages, eager to get attention from people seeking out the video, re-cut and re-recorded the video into various formats,” it writes, in what reads like another attempt to spread blame for the amplification role that its 2.2BN+ user platform plays.

In all Facebook says it found and blocked more than 800 visually-distinct variants of the video that were circulating on its platform.

It reveals it resorted to using audio matching technology to try to detect videos that had been visually altered but had the same soundtrack. And again claims it’s trying to learn and come up with better techniques for blocking content that’s being re-shared widely by individuals as well as being rebroadcast by mainstream media. So any kind of major news event, basically.
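To illustrate why edited re-uploads are such a headache for matching systems in general — and to be clear, this is a generic sketch using the open-source Pillow and imagehash packages with example file names, not Facebook’s technology — consider a simple perceptual-hash comparison on video frames:

    # Generic perceptual-hash sketch (assumes the Pillow and imagehash
    # packages; file names are examples) — an illustration of hash matching
    # in general, not Facebook's system.
    from PIL import Image
    import imagehash

    # Hamming-distance threshold between 64-bit perceptual hashes; a small
    # distance means "probably the same content". The value is illustrative.
    MATCH_THRESHOLD = 10

    def matches_known_frame(candidate_path: str, known_path: str) -> bool:
        candidate = imagehash.phash(Image.open(candidate_path))
        known = imagehash.phash(Image.open(known_path))
        # imagehash overloads subtraction to return the Hamming distance.
        return (candidate - known) <= MATCH_THRESHOLD

    # A crop, a re-filmed screen, a watermark or a heavy re-encode flips
    # enough hash bits to push the distance past the threshold — which is why
    # each visually distinct variant has to be found and hashed separately.

Audio matching leans on the same compare-against-a-known-reference principle.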

In a section on next steps Facebook says improving its matching technology to prevent inappropriate viral videos from spreading is its priority.

But audio matching clearly won’t help if malicious re-sharers simply re-edit the visuals and switch the soundtrack too in future.

It also concedes it needs to be able to react faster “to this kind of content on a live streamed video” — though it has no firm fixes to offer there either, saying only that it will explore “whether and how AI can be used for these cases, and how to get to user reports faster”.

Another priority it claims among its “next steps” is fighting “hate speech of all kinds on our platform”, saying this includes more than 200 white supremacist organizations globally “whose content we are removing through proactive detection technology”.

It’s glossing over plenty of criticism on that front too though — including research that suggests banned far right hate preachers are easily able to evade detection on its platform. Plus its own foot-dragging on shutting down far right extremists. (Facebook only finally banned one infamous UK far right activist last month, for example.)

In its last PR sop, Facebook says it’s committed to expanding its industry collaboration to tackle hate speech via the Global Internet Forum to Counter Terrorism (GIFCT), which formed in 2017 as platforms were being squeezed by politicians to scrub ISIS content — in a collective attempt to stave off tighter regulation.

“We are experimenting with sharing URLs systematically rather than just content hashes, are working to address the range of terrorists and violent extremists operating online, and intend to refine and improve our ability to collaborate in a crisis,” Facebook writes now, offering more vague experiments as politicians call for content responsibility.

Snap is under NDA with UK Home Office discussing how to centralize age checks online

Snap is under NDA with the UK’s Home Office as part of a working group tasked with coming up with more robust age verification technology that’s able to reliably identify children online.

The detail emerged during a parliamentary committee hearing, as MPs on the Digital, Culture, Media and Sport (DCMS) committee questioned Stephen Collins, Snap’s senior director for public policy international, and Will Scougal, its director of creative strategy EMEA.

A spokesman in the Home Office press office hadn’t immediately heard of any discussions with the messaging company on the topic of age verification. But we’ll update this story with any additional context on the department’s plans if more info is forthcoming.

Under questioning by the committee, Snap conceded its current age verification systems are not able to prevent under-13s from signing up to use its messaging platform.

The DCMS committee’s interest here stems from the enquiry it’s running into immersive and addictive technologies.

Snap admitted that the most popular means of signing up to its app (i.e. on mobile) is where its age verification system is weakest, with Collins saying it had no ability to drop a cookie to keep track of mobile users to try to prevent repeat attempts to get around its age gate.
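The cookie approach Collins was referring to is a basic web-only trick: a browser-based sign-up flow can remember a failed age-gate attempt, whereas a native mobile app install has no equivalent marker to check. Here’s a minimal sketch of that web-side check, assuming the Flask framework, with hypothetical route, cookie and form-field names:

    # Minimal web age-gate sketch (assumes the Flask framework; route, cookie
    # and form-field names are hypothetical) — the kind of browser-side check
    # that has no direct equivalent in a native mobile sign-up flow.
    from datetime import date
    from flask import Flask, make_response, request

    app = Flask(__name__)

    @app.route("/age-check", methods=["POST"])
    def age_check():
        # A previous attempt from this browser already failed the gate —
        # refuse the retry rather than accepting a freshly invented birthday.
        if request.cookies.get("age_gate_failed") == "1":
            return "Sign-up unavailable.", 403

        birth_year = int(request.form.get("birth_year", 0))
        if date.today().year - birth_year < 13:
            resp = make_response("You must be 13 or older to sign up.", 403)
            # Remember the failed attempt for a year (on this browser only).
            resp.set_cookie("age_gate_failed", "1", max_age=60 * 60 * 24 * 365)
            return resp

        return "Welcome aboard."

Even on the web it’s a weak control, of course — clearing cookies or switching browsers resets it.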

But he emphasized Snap does not want underage users on its platform.

“That brings us no advantage, that brings us no commercial benefit at all,” he said. “We want to make it an enjoyable place for everybody using the platform.”

He also said Snap analyzes patterns of user behavior to try to identify underage users — investigating accounts and banning those which are “clearly” determined not to be old enough to use the service.

But he conceded there’s currently “no foolproof way” to prevent under 13s from signing up.

Discussing alternative approaches to verifying kids’ age online the Snap policy staffer agreed parental consent approaches are trivially easy for children to circumvent — such as by setting up spoof email accounts or taking a photo of a parent’s passport or credit card to use for verification.

Facebook is one company that relies on a ‘parental consent’ system to ‘verify’ the age of teen users — though, as we’ve previously reported, it’s trivially easy for kids to work around.

“I think the most sustainable solution will be some kind of central verification system,” Collins suggested, adding that such a system is “already being discussed” by government ministers.

“The home secretary has tasked the Home Office and related agencies to look into this — we’re part of that working group,” he continued.

“We actually met just yesterday. I can’t give you the details here because I’m under an NDA,” Collins added, suggesting Snap could send the committee details in writing.

“I think it’s a serious attempt to really come to a proper conclusion — a fitting conclusion to this kind of conundrum that’s been there, actually, for a long time.”

“There needs to be a robust age verification system that we can all get behind,” he added.

The UK government is expected to publish a White Paper setting out its policy ideas for regulating social media and safety before the end of the winter.

The details of its policy plans remain under wraps, so it’s unclear whether the Home Office intends to include setting up a centralized system of online age verification for robustly identifying kids on social media platforms as part of its safety-focused regulation. But much of the debate driving the planned legislation has fixed on content risks for kids online.

Such a step would also not be the first time UK ministers have pushed the envelope around online age verification.

A controversial system of age checks for viewing adult content is due to come into force shortly in the UK under the Digital Economy Act — albeit, after a lengthy delay. (And ignoring all the hand-wringing about privacy and security risks; not to mention the fact age checks will likely be trivially easy to dodge by those who know how to use a VPN etc, or via accessing adult content on social media.)

But a centralized database of children for age verification purposes — if that is indeed the lines along which the Home Office is thinking — sounds rather closer to Chinese government Internet controls.

In recent years, for instance, the Chinese state has been pushing games companies to age-verify users in order to enforce limits on play time for kids (apparently also in response to health concerns around video gaming addiction).

The UK has also pushed to create centralized databases of web browsers’ activity for law enforcement purposes, under the 2016 Investigatory Powers Act. (Parts of which it’s had to rethink following legal challenges, with other legal challenges ongoing.)

In recent years it has also emerged that UK spy agencies maintain bulk databases of citizens — known as ‘bulk personal datasets’ — regardless of whether a particular individual is suspected of a crime.

So building yet another database to contain children’s ages isn’t perhaps as off piste as you might imagine for the country.

Returning to the DCMS committee’s enquiry, other questions for Snap from MPs included several critical ones related to its ‘streaks’ feature — whereby users who have been messaging each other regularly are encouraged not to stop the back and forth.

The parliamentarians raised constituent and industry concerns about the risk of peer pressure being piled on kids to keep the virtual streaks going.

Snap’s reps told the committee the feature is intended to be a “celebration” of close friendship, rather than being intentionally designed to make the platform sticky and so encourage stress.

Though they conceded users have no way to opt out of streak emoji appearing.

They also noted they have previously reduced the size of the streak emoji to make it less prominent.

But they added they would take concerns back to product teams and re-examine the feature in light of the criticism.

You can watch the full committee hearing with Snap here.