Category Archives: Facebook

Facebook Dating arrives in Canada and Thailand

On the heels of Tinder’s plans to go more casual, Facebook is today expanding access to its own dating service, Facebook Dating. First launched two months ago in Colombia for testing purposes, Facebook Dating is now rolling out to Canada and Thailand. The company is also adding a few new features to coincide with the launch, including the ability to re-review people you passed on and to take a break by putting the service on pause, among other things.

If that latter feature sounds familiar, it’s because it’s also something dating app Bumble recently announced.

Bumble in September launched a Snooze button for its own app, which addressed a problem many online daters have – the need for a detox from dating apps for a bit. Sometimes that’s due to frustration or just being busy; other times it’s because they’ve matched with someone and want to give them a chance.

Facebook says you can still message people you’ve already matched with while on pause.

Meanwhile, offering daters a chance to give someone a second look is also common among dating apps, though it’s presented in different ways. For example, OkCupid may resurface people you’ve passed on, while Tinder’s newer “Feed” feature lets you keep track of updates from matches that you had earlier decided to ignore.

Second Look will be in Facebook Dating’s Settings, and show people in reverse chronological order. You can go back through your Suggested Matches and even review people you may have accidentally passed on – features other dating apps charge for.

Also new today is the ability to review a blocked list, support for non-metric units (for things like range and height), and more interactive profile content, including tappable entry points for conversations – like a shared hometown or school.

These features will arrive in the new version of Facebook Dating, rolling out today, the company says.

It has tweaked the user interface a bit, too. Now, when scrolling through Groups and Events to unlock, these will appear vertically, instead of horizontally as before.

Facebook says it’s also working now on a pre-emptive block list, based on user feedback.

This would let you search for people who are not already your Facebook friends in Facebook Dating that you know you don’t want to see – for example, an ex you’ve unfriended but not blocked on Facebook, a family member, etc., the company tells TechCrunch.

You’ll be able to search for specific people regardless of whether or not you know they have a Dating profile, and doing so won’t reveal to you if that person has a profile on Facebook Dating or not.

Pre-emptive blocking is actually fairly clever, given that many dating apps today surprise you with people you’d rather not see.

Since originally announcing the service at F8 this May, Facebook has already figured out some of the larger details of how it wants its dating service to operate. That includes its decision to limit users to expressing interest in no more than 100 people per day, and other settings that open the service to matching with strangers or with friends-of-friends.

There’s a certain (evil) genius in launching a Facebook Dating service, given that Facebook is already the place people go – along with Instagram – to research their new matches and potential dates, once things progress to that point. Plus, the service can leverage Facebook’s data. After all, if anyone knows who you are and what you’re like, it’s them. That could save users time in answering the ‘getting to know you’ questions some apps pose to their users to help perfect their matching algorithms.

It also helps that Facebook is positioning the service for those who want relationships, given the leading dating app – Tinder – is known for the opposite. Match is preparing to focus Tinder more on young, casual dating while building out Hinge for those interested in serious dating.

Facebook’s challenge is that user trust in the company today is lacking. And dating is something many consider very private – not something they’d want exposed on a network where they’re connected with work colleagues, industry peers, and extended family. While Facebook vows to maintain user privacy, its track record on this front is poor, which could limit the service’s growth.

Facebook has not said when the service will launch in the U.S., nor has it detailed the number of signups to date.

“We don’t have any specific metrics to share, but we’ve been pleased with the response in Colombia thus far and are excited to roll it out to Thailand and Canada,” a spokesperson said.

So I sent my mom that newfangled Facebook Portal

“Who am I going to be worried about? Oh Facebook seeing? No, I’m not worried about Facebook seeing. They’re going to look at my great art collection and say they want to come steal it? No, I never really thought about it.” That’s my 72-year-old mother Sally Constine’s response to whether she’s worried about her privacy now that she has a Facebook Portal video chat device. The gadget goes on sale and starts shipping today at $349 for the 15.6-inch swiveling screen Portal+, $199 for the 10-inch Portal, and $100 off for buying any two.

The sticking point for most technology reporters — that it’s creepy or scary to have a Facebook camera and microphone in your house — didn’t even register as a concern with a normal tech novice like my mom. “I don’t really think of it as any different from a phone call,” she says. “It’s not a big deal for me.”

While Facebook has been mired in privacy scandals after a year of Cambridge Analytica fallout and its biggest-ever data breach, the idea that it can’t be trusted hasn’t necessarily trickled down to everyone. And without that coloring her perception, my mom found the Portal to be an easy way to video chat with family, and a powerful reminder to do so.

For a full review of Facebook Portal, check out TechCrunch hardware editor Brian Heater’s report.

As a quick primer, Portal and Portal+ are smart video screens and Bluetooth speakers that offer an auto-zooming camera that follows you around the room as you video chat. They include both Facebook’s own voice assistant for controlling Messenger, as well as Amazon Alexa. There’s also a third-party app platform for speech-activated Spotify and Pandora, plus video clips from The Food Network and Newsy, and the device can slideshow through your Facebook photos while it’s idle. For privacy, communications are encrypted, AI voice processing is done locally on the device, there’s an off switch that disconnects the camera and mic, and it comes with a physical lens cover so you know no one’s watching you. It compares favorably with Amazon’s Echo Show, Google Home Hub, and other smart displays on price, specs, and privacy features.

When we look at our multi-functional smartphones and computers, connecting with loved ones isn’t always the first thing that comes to mind, the way it was with an old-school home telephone. But with the Portal in picture frame mode rotating through our Facebook photos of those loved ones, and with it at the beck and call of our voice commands, it felt natural to spend those in-between times we might otherwise have scrolled through Instagram chatting face to face instead.

My mother found setting up the Portal to be quite simple, though she wished the little instructional card used a bigger font. She had no issue logging in to her Facebook, Amazon Alexa, and Spotify accounts. “It’s all those things in one. If you had this, you could put Alexa in a different room,” the Constine matriarch says.

She found the screen to be remarkably sharp, though some of the on-screen buttons could be better labeled, at least at first. But once she explored the device’s software, she was uncontrollably giggling while trying on augmented reality masks as we talked. She even used the AR Storytime feature to read me a bedtime tale like she would 30 years ago. If I were still a child, I think I would have loved this way to play with a parent who was away from home. The intuitive feature instantly had her reading a modernized Three Little Pigs story while illustrations filled our screens. And when she found herself draped in an AR big bad wolf costume during his lines, she knew to adopt his gruff voice.

One of the few problems she found was that when Facebook’s commercials for Portal came on the TV, they’d end up accidentally activating her Portal. Facebook might need to train the device to ignore its own ads, perhaps by muting them in a certain part of the audio spectrum as one Reddit user suggested Amazon may have done to prevent causing trouble with its Super Bowl commercial.

My mom doesn’t Skype or FaceTime much. She’s just so used to a lifetime of audio calls with her sister back in England that she rarely remembers that video is an option. Having a dedicated device in the kitchen kept the idea top-of-mind. “I really want to have a conversation seeing her. I think I would really feel close to her if I could see her like I’m seeing you now,” she tells me.

Convincing jaded younger adults to buy a Portal might be a steep challenge for Facebook. But perhaps Facebook understands that. Rather than being ignorant of or calloused about the privacy climate it’s launching Portal into, the company may be purposefully conceding the tech news wonks, including those who’ll be reviewing Portal, but not the much larger mainstream audience. If it concentrates on seniors and families with young children who might not have the same fears of Facebook, it may have found a way to actually bring us closer together in the way its social network is supposed to.

Facebook is facing an EU data probe over fake ads

The UK’s privacy watchdog has asked Facebook’s lead EU regulator to look into ongoing data protection concerns about its ad platform — including how its platform is being used to target and spread fake adverts to try to manipulate voters.

Facebook’s international HQ is in Ireland so the regulator in play here is the Irish Data Protection Commission.

The ICO noted the action in a 113-page report to parliament yesterday giving an update on its long-running investigation into the use of data analytics in political campaigns — writing:

We have referred our ongoing concerns about Facebook’s targeting functions and techniques that are used to monitor individuals’ browsing habits, interactions and behaviour across the internet and different devices to the IDPC. Under the GDPR, the IDPC is the lead authority for Facebook in the EU. We will work with both the Irish regulator and other national data protection authorities to develop a long-term strategy on how we address these issues.

A spokesperson for the watchdog told us these concerns fall outside the remit of that still partially ongoing investigation, which was triggered by the Cambridge Analytica data misuse scandal.

So the issues of concern are not the same issues that the ICO fined Facebook for last month, when it handed the company the maximum possible penalty under the UK’s previous data protection regime. Hence the referral to the Irish DPC.

We’ve reached out to Facebook for comment on the referral.

A spokesman for the Irish regulator told us: “The DPC has yet to receive any information from the ICO.”

Giving one example of its concerns, the ICO’s spokesperson pointed to recent news reports flagging fake political ads that had passed Facebook’s checks and been able to circulate on the platform — until being spotted by journalists, after which they got pulled by Facebook.

Responding to one such ad, badged as being paid for by the now defunct and disgraced data company Cambridge Analytica, Facebook said: “This ad was not created by Cambridge Analytica. It is fake, violates our policies and has been taken down. We believe people on Facebook should know who is behind the political ads they’re seeing which is why we are creating the Ads Library so that you can see who is accountable for any political ad. We have tools for anyone to report suspicious activity such as this.”

Such an obvious fake slipping through Facebook’s checks on political ads — which were only rolled out in the UK a few weeks ago, in first phase form — suggests they can be trivially gamed.

In related news, the Guardian reports that Facebook has delayed a requirement that UK political advertisers verify their identity — pushing it back from an initial deadline of today to sometime in “the next month”, with the company saying it wants to take more time to strengthen the system after a spate of failures.

“We have learnt that some people may try to game the disclaimer system by entering inaccurate details and have been working to improve our review process to detect and prevent this kind of abuse,” a Facebook spokesperson told the newspaper.

The fake ads issue also highlights how self-styled ‘transparency’ without proper accountability can just further muddy already murky waters — where masses of personal data and opaque ad platforms are concerned.

During a hearing in front of the UK’s DCMS committee yesterday, the UK’s information commissioner, Elizabeth Denham, also raised concerns about the use of so-called ‘lookalike audiences’ for targeting voters on Facebook — saying a system that makes inferences in order to target people with political ads needs to be looked at closely in light of Europe’s new GDPR privacy framework.

She also told policymakers that Facebook needs to change its business model. And said all platforms “need to take much greater responsibility”.

“I don’t think that we want to use the same model that sells us holidays and shoes and cars to engage with people and voters. I think that people expect more than that. This is a time for a pause, to look at codes, to look at the practices of social media companies, to take action where they’ve broken the law,” she said.

Committee members raised some of their own political ad concerns with Denham, querying the lawfulness of a crop of ads recently circulating on Facebook, targeting MPs and their constituents, urging policymakers to ‘chuck chequers’ — a reference to the UK prime minister’s current Brexit proposal to the EU — which are badged as being paid for by an organization called ‘Mainstream Network’, without it being clear who on earth is behind that…

“We are investigating those matters and will be looking at whether or not there was a contravention of the GDPR by that organization in sending out those communications,” Denham told the committee.

But wider concerns about how Facebook’s ad platform operates have now been handed over to the Irish DPC to investigate — a far smaller, less well-resourced watchdog than the ICO, which is the largest such agency in Europe.

Any future audit of Facebook’s platform — as has been recently called for by the EU parliament — would also be led by Ireland, Denham confirmed to the committee.

She was asked whether she had any concerns about the smaller regulator being able to handle its burgeoning caseload. “We can work with,” she replied, noting the ICO likely has greater capacity to conduct technical audits. “We certainly can support them and work with them.”

She noted too that the newly established European Data Protection Board — which is responsible for ensuring consistency in the application of the GDPR — is working on “a more holistic way” to co-ordinate regulating social media platforms across Europe.

“[It] is looking at… what we need to do as a community with Facebook and other social media platforms,” she told the committee, adding that under the GDPR the Irish DPC is the “lead authority on Facebook because that’s where Facebook is based in Europe so they would [be] the lead on an audit that’s going forward in the future”.

“Regulators need to look at the effectiveness of their processes,” she added. “That’s really at the heart of this — and there’s a fundamental tension between the advertising business model of Facebook and fundamental rights like protection of privacy. And that’s where we’re at right now.

“It’s a very big job both for the regulators but for the policymakers to ensure that the right requirements and oversight and sanctions are in place.”

Where’s the accountability, Facebook?

Facebook has yet again declined an invitation for its founder and CEO Mark Zuckerberg to answer international politicians’ questions about how disinformation spreads on his platform and undermines democratic processes.

But policymakers aren’t giving up — and have upped the ante by issuing a fresh invitation signed by representatives from another three national parliaments. So the call for global accountability is getting louder.

Now representatives from a full five parliaments have signed up to an international grand committee calling for answers from Zuckerberg, with Argentina, Australia and Ireland joining the UK and Canada to try to pile political pressure on Facebook.

The UK’s Digital, Culture, Media and Sport (DCMS) committee has been asking for Facebook’s CEO to attend its multi-month enquiry for the best part of this year, without success…

The twist in its last request was that it came not just from the DCMS inquiry into online disinformation but also from the Canadian Standing Committee on Access to Information, Privacy and Ethics.

This year policymakers on both sides of the Atlantic have been digging down the rabbit hole of online disinformation — before and since Cambridge Analytica erupted into a major global scandal — announcing last week that they will form an ‘international grand committee’ to further their enquiries.

The two committees will convene for a joint hearing in the UK parliament on November 27 — and they want Zuckerberg to join them to answer questions related to the “platform’s malign use in world affairs and democratic process”, as they put it in their invitation letter.

Facebook has previously despatched a number of less senior representatives to talk to policymakers probing the damage caused by disinformation — including its CTO, Mike Schroepfer, who went before the DCMS committee in April.

But both Schroepfer and Zuckerberg have admitted the accountability buck stops with Facebook’s CEO.

The company’s nine-month-old ‘Privacy Principles‘ also makes the following claim [emphasis ours]:

We are accountable

In addition to comprehensive privacy reviews, we put products through rigorous data security testing. We also meet with regulators, legislators and privacy experts around the world to get input on our data practices and policies.

The increasingly pressing question, though, is to whom is Facebook actually accountable?

Zuckerberg went personally to the US House and Senate to face policymakers’ questions in April. He also attended a meeting of the EU parliament’s Conference of Presidents in May.

But the rest of the world continues being palmed off with minions. Despite some major, major harms.

Facebook’s 2BN+ user platform does not stop at the US border. And Zuckerberg himself has conceded the company probably wouldn’t be profitable without its international business.

Yet so far only the supranational EU parliament has managed to secure a public meeting with Facebook’s CEO. And MEPs there had to resort to heckling Zuckerberg to try to get answers to their actual questions.

“Facebook say that they remain “committed to working with our committees to provide any additional relevant information” that we require. Yet they offer no means of doing this,” tweeted DCMS chair Damian Collins today, reissuing the invitation for Zuckerberg. “The call for accountability is growing, with representatives from 5 parliaments now meeting on the 27th.”

The letter to Facebook’s CEO notes that the five nations represent 170 million Facebook users.

“We call on you once again to take up your responsibility to Facebook users, and speak in person to their elected representatives,” it adds.

The UK’s information commissioner said yesterday that Facebook needs to overhaul its business model, giving evidence to parliament on the “unprecedented” data investigation her office has been running which was triggered by the Cambridge Analytica scandal. She also urged policymakers to strengthen the rules on the use of people’s data for digital campaigning.

Last month the European parliament also called for Facebook to let in external auditors in the wake of Cambridge Analytica, to ensure users’ data is being properly protected — yet another invitation Facebook has declined.

Meanwhile an independent report assessing the company’s human rights impact in Myanmar — which Facebook commissioned but chose to release yesterday on the eve of the US midterms when most domestic eyeballs would be elsewhere — agreed with the UN’s damning assessment that Facebook did not do enough to prevent its platform from being used to incite ethnic violence.

The report also said Facebook is still not doing enough in Myanmar.

Facebook connects Russia to 100+ accounts it removed ahead of mid-terms

The 115 accounts Facebook took down yesterday for inauthentic behavior ahead of the mid-term elections may indeed have been linked to the Russia-based Internet Research Agency, according to a new statement from the company. It says that a site claiming association with the IRA today posted a list of Instagram accounts it had made, which included many Facebook had taken down yesterday, and it has since removed the rest. The IRA was previously labeled as responsible for using Facebook to interfere with US politics and the 2016 presidential election.

Facebook’s head of cyber security policy Nathaniel Gleicher issued this statement to TechCrunch:

“Last night, following a tip off from law enforcement, we blocked over 100 Facebook and Instagram accounts due to concerns that they were linked to the Russia-based Internet Research Agency (IRA) and engaged in coordinated inauthentic behavior, which is banned from our services. This evening a website claiming to be associated with the IRA published a list of Instagram accounts they claim to have created. We had already blocked most of these accounts yesterday, and have now blocked the rest. This is a timely reminder that these bad actors won’t give up — and why it’s so important we work with the US government and other technology companies to stay ahead.”

Yesterday, Facebook said it would provide an update on whether the removed accounts were connected to Russia, as some were in Russian:

On Sunday evening, US law enforcement contacted us about online activity that they recently discovered and which they believe may be linked to foreign entities . . .  Almost all the Facebook Pages associated with these accounts appear to be in the French or Russian languages, while the Instagram accounts seem to have mostly been in English — some were focused on celebrities, others political debate . . . Typically, we would be further along with our analysis before announcing anything publicly. But given that we are only one day away from important elections in the US, we wanted to let people know about the action we’ve taken and the facts as we know them today. Once we know more — including whether these accounts are linked to the Russia-based Internet Research Agency or other foreign entities — we will update this post.”

Foreign interference in politics via social media can be difficult to accurately attribute, however. Facebook could have provided stronger wording in this update regarding its own evidence about the connection between Russia and the 80 Facebook accounts and 35 Instagram accounts it removed yesterday. Now with the mid-term results being counted, we’ll see if politicians or researchers suggest election interference could have influenced any of the results.

Tinder now has 4.1M paying users, expects $800M in revenue this year

Facebook Dating is no challenger to Tinder-owner Match Group (NASDAQ: MTCH), which posted third-quarter earnings per share of 44 cents on Tuesday.

The company, which owns several brands of internet dating services, including Tinder, Hinge, OkCupid and PlentyOfFish, surpassed analysts’ forecast revenue of $437 million, reporting Q3 revenue of $444 million, a 29 percent increase year-over-year.

Match says it expects to bring in a total of $1.72 billion in annual revenue.

Despite positive earnings, the company’s Q4 outlook failed to satisfy Wall Street. Match said it expects between $440 million and $450 million in revenue in Q4, falling short of analysts’ estimate of $454.5 million. Shares of Match sank 10 percent in after-hours trading as a result.

Year-to-date, Match’s stock is up roughly 60 percent.

Tinder, the location-based mobile dating application, continues to be Match’s growth engine, responsible for roughly half its paid users and half its projected annual revenue. Match’s total number of paid subscribers came in at 8.1 million, up from 7.7 million in Q2 and a 23 percent increase YoY. Much of that growth comes from Tinder Gold, Tinder’s premium subscription tier that lets users see who’s already liked them without doing any swiping. Overall, Tinder’s paying user base is up to 4.1 million from 3.8 million the previous quarter.

Tinder is expected to bring in $800 million in revenue in 2018.

Hinge, another app-based dating service acquired by Match in June, is on its way up. Match says it’s seen a 5x increase in downloads since it first invested.

Match also announced that it would, for the first time, issue a special cash dividend of $2.00 per share on Match Group common stock and Class B common stock, to be paid out on December 19.

Match continues to be on the prowl for strategic M&A opportunities, chief executive officer Mandy Ginsberg said in a statement.

“[We] have the financial flexibility to acquire companies when we find innovative products with long-term potential,” she said.

The company has reportedly attempted to acquire Tinder-competitor Bumble on more than one occasion, though the nasty legal battle playing out between the dating powerhouses makes that combination unlikely. Most recently, Bumble said it was dropping its $400 million lawsuit against Match, which had claimed Match fraudulently obtained trade secrets during acquisition talks. Bumble may refile that suit at the state level.

Dallas-based Match is owned by IAC, which will itself report earnings tomorrow after the closing bell.

Facebook must change and policymakers must act on data, warns UK watchdog

The UK’s data watchdog has warned that Facebook must overhaul its privacy-hostile business model or risk burning user trust for good.

Comments the information commissioner made today have also raised questions over the legality of using so-called lookalike audiences to target political ads at users of its platform.

Information commissioner Elizabeth Denham was giving evidence to the Digital, Culture, Media and Sport committee in the UK parliament this morning. She’s just published her latest report to parliament, on the ICO’s (still ongoing) investigation into the murky world of data use and misuse in political campaigns.

Since May 2017 the watchdog has been pulling on myriad threads attached to the Cambridge Analytica Facebook data misuse scandal — to, in the regulator’s words, “follow the data” across an entire ecosystem of players; from social media firms to data brokers to political parties, and indeed beyond to other still unknown actors with an interest in also getting their hands on people’s data.

Denham readily admitted to the committee today that the sprawling piece of work had opened a major can of worms.

“I think we were astounded by the amount of data that’s held by all of these agencies — not just social media companies but data companies like Cambridge Analytica; political parties the extent of their data; the practices of data brokers,” she said.

“We also looked at universities, and the data practices in the Psychometric Centre, for example, at Cambridge University — and again I think universities have more to do to control data between academic researchers and the same individuals that are then running commercial companies.

“There’s a lot of switching of hats across this whole ecosystem — that I think there needs to be clarity on who’s the data controller and limits on how data can be shared. And that’s a theme that runs through our whole report.”

“The major concern that I have in this investigation is the very disturbing disregard that many of these organizations across the entire ecosystem have for personal privacy of UK citizens and voters. So if you look across the whole system that’s really what this report is all about — and we have to improve these practices for the future,” she added. “We really need to tighten up controls across the entire ecosystem because it matters to our democratic processes.”

Asked whether she would personally trust her data to Facebook, Denham told the committee: “Facebook has a long way to go to change practices to the point where people have deep trust in the platform. So I understand social media sites and platforms and the way we live our lives online now is here to stay but Facebook needs to change, significantly change their business model and their practices to maintain trust.”

“I understand that platforms will continue to play a really important role in people’s lives but they need to take much greater responsibility,” she added when pressed to confirm that she wouldn’t trust Facebook.

A code of practice for lookalike audiences

In another key portion of the session Denham confirmed that inferred data is personal data under the law. (Although of course Facebook has a different legal interpretation of this point.)

Inferred data refers to inferences made about individuals based on data-mining their wider online activity — such as identifying a person’s (non-stated) political views by examining which Facebook Pages they’ve liked. Facebook offers advertisers an interests-based tool to do this — by creating so-called lookalike audiences comprised of users with similar interests.

But if the information commissioner’s view of data protection law is correct, it implies that use of such tools to infer the political views of individuals could be in breach of European privacy law, unless explicit consent is gained beforehand for people’s personal data to be used for that purpose.

“What’s happened here is the model that’s familiar to people in the commercial sector — or behavioural targeting — has been transferred, I think transformed, into the political arena,” said Denham. “And that’s why I called for an ethical pause so that we can get this right.

“I don’t think that we want to use the same model that sells us holidays and shoes and cars to engage with people and voters. I think that people expect more than that. This is a time for a pause, to look at codes, to look at the practices of social media companies, to take action where they’ve broken the law.”

She told MPs that the use of lookalike audiences should be included in a Code of Practice which she has previously called for vis-a-vis political campaigns’ use of data tools.

Social media platforms should also disclose the use of lookalike audiences for targeting political ads at users, she said today — a data-point that Facebook has nonetheless omitted to include in its newly launched political ad disclosure system.

“The use of lookalike audiences should be made transparent to the individuals,” she argued. “They need to know that a political party or an MP is making use of lookalike audiences, so I think the lack of transparency is problematic.”

Asked whether the use of Facebook lookalike audiences to target political ads at people who have chosen not to publicly disclose their political views is legal under current EU data protection laws, she declined to make an instant assessment — but told the committee: “We have to look at it in detail under the GDPR but I’m suggesting the public is uncomfortable with lookalike audiences and it needs to be transparent.”

We’ve reached out to Facebook for comment.

Links to known cyber security breaches

The ICO’s latest report to parliament and today’s evidence session also turned up a few new nuggets of intel on the Cambridge Analytica saga, including the fact that some of the misused Facebook data — which had found its way to Cambridge University’s Psychometric Centre — was not only accessed by IP addresses that resolve to Russia, but that some of those IP addresses have been linked to other known cyber security breaches.

“That’s what we understand,” Denham’s deputy, James Dipple-Johnstone told the committee. “We don’t know who is behind those IP addresses but what we understand is that some of those appear on lists of concern to cyber security professionals by virtue of other types of cyber incidents.”

“We’re still examining exactly what data that was, how secure it was and how anonymized,” he added, saying it is “part of an active line of enquiry”.

The ICO has also passed the information on “to the relevant authorities”, he added.

The regulator also revealed that it now knows exactly who at Facebook was aware of the Cambridge Analytica breach at the earliest instance — saying it has internal emails related to the issue which have “quite a large distribution list”. Although it’s still not been made public whether or not Mark Zuckerberg’s name is on that list.

Facebook’s CTO previously told the committee the person with ultimate responsibility where data misuse is concerned is Zuckerberg — a point the Facebook founder has also made personally (just never to this committee).

When pressed on whether Zuckerberg was on the distribution list for the breach emails, Denham declined to say today, explaining “we just don’t want to get it wrong”.

The ICO said it would pass the list to the committee in due course.

Which means it shouldn’t be too long before we know exactly who at Facebook was responsible for not disclosing the Cambridge Analytica breach to relevant regulators (and indeed parliamentarians) sooner.

The committee is pressing on this because Facebook gave earlier evidence to its online disinformation enquiry yet omitted to mention the Cambridge Analytica breach entirely. (Hence its accusation that senior management at Facebook deliberately withheld pertinent information.)

Denham agreed it would have been best practice for Facebook to notify relevant regulators at the time it became aware of the data misuse — even without the GDPR’s new legal requirement being in force then.

She also agreed with the committee that it would be a good idea for Zuckerberg to personally testify to the UK parliament.

Last week the committee issued yet another summons for the Facebook founder — this time jointly with a Canadian committee which has also been investigating the same knotted web of social media data misuse.

Though Facebook has yet to confirm whether or not Zuckerberg will make himself available this time.

How to regulate Internet harms?

This summer the ICO announced it would be issuing Facebook with the maximum penalty possible under the country’s old data protection regime for the Cambridge Analytica data breach.

At the same time Denham also called for an ethical pause on the use of social media microtargeting of political ads, saying there was an urgent need for “greater and genuine transparency” about the use of such technologies and techniques to ensure “people have control over their own data and that the law is upheld”.

She reiterated that call for an ethical pause today.

She also said the fine the ICO handed Facebook last month for the Cambridge Analytica breach would have been “significantly larger” under the rebooted privacy regime ushered in by the pan-EU GDPR framework this May — adding that it would be interesting to see how Facebook responds to the fine (i.e. whether it pays up or tries to appeal).

“We have evidence… that Cambridge Analytica may have partially deleted some of the data but even as recently as 2018, Spring, some of the data was still there at Cambridge Analytica,” she told the committee. “So the follow up was less than robust. And that’s one of the reasons that we fined Facebook £500,000.”

Data deletion assurances that Facebook had sought from various entities after the data misuse scandal blew up don’t appear to be worth the paper they’re written on — with the ICO also noting that some of these confirmations had not even been signed.

Dipple-Johnstone also said the ICO believes that a number of additional individuals and academic institutions received “parts” of the Cambridge Analytica Facebook data-set — i.e. additional to the multiple known entities in the saga so far (such as GSR’s Aleksandr Kogan, and CA whistleblower Chris Wylie).

“We’re examining exactly what data has gone where,” he said, saying it’s looking into “about half a dozen” entities — but declining to name names while its enquiry remains ongoing.

Asked for her views on how social media should be regulated by policymakers to rein in data abuses and misuses, Denham suggested a system-based approach that looks at effectiveness and outcomes — saying it boils down to accountability.

“What is needed for tech companies — they’re already subject to data protection law but when it comes to the broader set of Internet harms that your committee is speaking about — misinformation, disinformation, harm to children in their development, all of these kinds of harms — I think what’s needed is an accountability approach where parliament sets the objectives and the outcomes that are needed for the tech companies to follow; that a Code of Practice is developed by a regulator; backstopped by a regulator,” she suggested.

“What I think’s really important is the regulators looking at the effectiveness of systems like takedown processes; recognizing bots and fake accounts and disinformation — rather than the regulator taking individual complaints. So I think it needs to be a system approach.”

“I think the time for self regulation is over. I think that ship has sailed,” she also told the committee.

On the regulatory powers front, Denham was generally upbeat about the potential of the new GDPR framework to curb bad data practices — pointing out that not only does it allow for supersized fines but companies can be ordered to stop processing data, which she suggested is an even more potent tool to control rogue data-miners.

She also suggested another new power — to go in and inspect companies and conduct data audits — will help it get results.

But she said the ICO may need to ask parliament for another tool to be able to carry out effective data investigations. “One of the areas that we may be coming back to talk to parliament, to talk to government about is the ability to compel individuals to be interviewed,” she said, adding: “We have been frustrated by that aspect of our investigation.”

Both the former CEO of Cambridge Analytica, Alexander Nix, and Kogan, the academic who built the quiz app used to extract Facebook user data so it could be processed for political ad targeting purposes, had refused to appear for an interview with the ICO under caution, she said today.

On the wider challenge of regulating a full range of “Internet harms” — spanning the spread of misinformation, disinformation and also offensive user-generated content — Denham suggested a hybrid regulatory model might ultimately be needed to tackle this, suggesting the ICO and communications regulator Ofcom might work together.

“It’s a very complex area. No country has tackled this yet,” she conceded, noting the controversy around Germany’s social media take down law, and adding: “It’s very challenging for policymakers… Balancing privacy rights with freedom of speech, freedom of expression. These are really difficult areas.”

Asked what her full ‘can of worms’ investigation has highlighted for her, Denham summed it up as: “A disturbing amount of disrespect for personal data of voters and prospective voters.”

“The main purpose of this [investigation] is to pull back the curtain and show the public what’s happening with their personal data,” she added. “The politicians, the policymakers need to think about this too — stronger rules and stronger laws.”

One committee member suggestively floated the idea of social media platforms being required to have an ICO officer inside their organizations — to grease their compliance with the law.

Smiling, Denham responded that it would probably make for an uncomfortable prospect on both sides.

Study of political junk on Facebook raises fresh questions about its metrics

A midterms election study of political disinformation being fenced by Facebook’s platform supports the company’s assertion that a clutch of mostly right-leaning and politically fringe Pages it removed in October for “inauthentic activity” were pulled for gaming its engagement metrics.

Though it remains unclear why it took Facebook so long to act against such prolific fakers — which the research suggests had been doping their metrics unchallenged on Facebook for up to five years.

The three-month research project carried out by Jonathan Albright of the Tow Center for Digital Journalism has largely focused on domestic political disinformation.

In a third and final blog detailing his findings he says some of the removed Pages had put up Facebook interaction numbers in the billions, and many of their videos consistently showed engagement in the tens of millions.

“I found that at least three of the Pages — removed less than a month ago — reported near-astronomical engagement numbers over the past five years,” he writes. “These are the kind of numbers that would be difficult to justify in almost any scenario — even in the case of a very large and sustained advertising spend on Facebook.”

One of the Pages with suspiciously high engagement flagged by Albright is a Page called Right Wing News.

“Less than a month before the 2018 midterm elections, when the Page was removed, Right Wing News had reported more engagement on Facebook over the past five years than the New York Times, The Washington Post, and Breitbart…combined,” he writes.

He also flags two other Pages that were removed by Facebook which had suspiciously high video views, called Silence is Consent and Daily Vine.

We’ve reached out to Facebook for a response.

The company is currently facing legal action from an unrelated group of advertisers who allege that Facebook knowingly misreported video metrics, inflating views for more than a year, and accusing the company of ad fraud. Facebook disputes the advertisers’ allegations.

In his blog, Albright also details how Facebook has seemingly failed to properly enforce a ban on conspiracy theorist and hate speech purveyor Alex Jones, whose personal Facebook Page and disinformation outlet, InfoWars, it pulled from its platform in August — writing: “Jones’ show and much of the removed InfoWars news content appears to have moved swiftly back onto the Facebook platform.”

How has Jones circumvented the ban on his main pages? By creating lots of similarly branded alternative Pages…

Albright writes that Facebook’s algorithms pushed Jones’ livestream show into search results when he was looking for Soros conspiracies: “And what did I get? The live high-definition stream of Jones’ show on Facebook — broadcast on one of the many InfoWars-branded Pages that is inconspicuously named ‘News Wars.’”

According to his analysis, Jones’ InfoWars broadcasts appear to be almost back to where they were — in terms of views/engagement — before the Facebook ‘ban’ took down his two largest pages. Albright describes the “censorship” case as “a gross enforcement failure by Facebook”.

“Granular enforcement isn’t just reactive takedowns; it’s about proactive measures. This involves considering the factors — even the simple guerrilla marketing tactics — that play into how things like banned InfoWars live streams get further propagated,” he writes, summing up his findings.

“From what I’ve seen in this extensive look into Facebook’s platform, especially in regards to the company’s capacity to deal with the misuse of its platform as shown in the cases above — exactly two years after the end of the last election — I will argue that common sense approaches to platform integrity and manipulation still appear to be less of a priority for Facebook than automated detection and removal publicity.”

“The infinite gray area of information-sharing poses the real challenge: it’s the slippery soft conspiracy questions, the repetition of messages seen on shocking memes and statements like the “Soros Beto” caption [cited in the post], and the emotional clickbait that’s regularly shown in Jones’ InfoWars video cover stills. Without granular enforcement, the non-foreign bad actors will only get better, and refine their tactics to increase Americans’ exposure to [hyperpartisan junk news],” he adds.

“Information integrity is more than the scrutiny of provable statements or the linking of some data to shared content with an “i.” Transparency involves more than verifying one Page manager, putting it alongside a date and voluntary disclosure for a paid political campaign, and adding to a political “ad archive.”

Albright has posted additional findings from his three-month trawl through the Facebook fake-o-sphere this week — including raising concerns about political Pages running ads targeting the US midterms which have changed moderator structure and included foreign-based administrators, as well as finding some running political ads that lacked a ‘Paid for’ disclosure label.

He also identifies a shift in tactics by political disinformation operators to sharing content in closed Facebook Groups, where it’s less visible to outsiders trying to track junk news — yet can still be shared across Facebook’s platform to skew voters’ opinions.

Facebook admits it didn’t do enough to prevent ‘offline violence’ in Myanmar

A night before the U.S. midterm elections, Facebook has dropped an independent report into the platform's effect in Myanmar.

The report into Facebook's impact on human rights within the country was commissioned by the social media giant, but completed by non-profit organization BSR (Business for Social Responsibility).

And it affirms what many have suspected: Facebook didn't do enough to prevent violence and division in Myanmar.

"The report concludes that, prior to this year, we weren’t doing enough to help prevent our platform from being used to foment division and incite offline violence. We agree that we can and should do more," Facebook's product policy manager Alex Warofka wrote in a statement.


Bots distorted the 2016 Election. Will the midterms be a sequel?

The fact that Russian-linked bots penetrated social media to influence the 2016 U.S. presidential election has been well documented, and the details of the deception are still trickling out.

In fact, on October 17, Twitter disclosed that foreign interference dating back to 2016 involved 4,611 accounts — most affiliated with the Internet Research Agency, a Russian troll farm. There were more than 10 million suspicious tweets and more than 2 million GIFs, videos and Periscope broadcasts.

In this season of another landmark election — a recent poll showed that about 62 percent of Americans believe the 2018 midterm elections are the most important midterms in their lifetime — it is natural to wonder if the public and private sectors have learned any lessons from the 2016 fiasco — and what is being done to better protect against this malfeasance by nation-state actors.

There is good news and bad news here. Let’s start with the bad.

Two years after the 2016 election, social media still sometimes looks like a reality show called “Propagandists Gone Wild.” Hardly a major geopolitical event takes place in the world without automated bots generating or amplifying content that exaggerates the prevalence of a particular point of view.

In mid-October, Twitter suspended hundreds of accounts that simultaneously tweeted and retweeted pro-Saudi Arabia talking points about the disappearance of journalist Jamal Khashoggi.

On October 22, The Wall Street Journal reported that Russian bots helped inflame the controversy over NFL players kneeling during the national anthem. Researchers from Clemson University told the newspaper that 491 accounts affiliated with the Internet Research Agency posted more than 12,000 tweets on the issue, with activity peaking soon after a September 22, 2017 speech by President Trump in which he said team owners should fire players for taking a knee during the anthem.

The problem hasn’t persisted only in the United States. Two years after bots were blamed for helping sway the 2016 Brexit vote in Britain, Twitter bots supporting the anti-immigration Sweden Democrats increased significantly this spring and summer in the lead-up to that country’s elections.

These and other examples of continuing misinformation-by-bot are troubling, but it’s not all doom and gloom. I see positive developments, too.


First, awareness is the first step in solving any problem, and cognizance of bot meddling has soared in the last two years amid all the disturbing headlines.

About two-thirds of Americans have heard of social media bots, and the vast majority of those people are worried bots are being used maliciously, according to a Pew Research Center survey of 4,500 U.S. adults conducted this summer. (It’s concerning, however, that far fewer of the respondents said they’re confident they can actually recognize when accounts are fake.)

Second, lawmakers are starting to take action. When California Gov. Jerry Brown on September 28 signed legislation making it illegal as of July 1, 2019 to use bots — to try to influence voter opinion or for any other purpose — without divulging the source’s artificial nature, California followed the anti-ticketing-bot laws passed nationally and in New York State, the first bot-fighting statutes in the United States.

While I support the increase in awareness and focused interest by legislators, I do feel the California law has some holes. The measure is difficult to enforce because it’s often very hard to identify who is behind a bot network, the law’s penalties aren’t clear and an individual state is inherently limited in what it can do to attack a national and global issue. However, the law is a good start, and shows that governments are starting to take the problem seriously.

Third, the social media platforms — which have faced congressional scrutiny over their failure to address bot activity in 2016 — have become more aggressive in pinpointing and eliminating bad bots.

It’s important to remember that while they have some responsibility, Twitter and Facebook are victims here too, taken for a ride by bad actors who have hijacked these commercial platforms for their own political and ideological agendas.

While it can be argued that Twitter and Facebook should have done more sooner to differentiate the human from the non-human fakes in their user rolls, it bears remembering that bots are a newly acknowledged cybersecurity challenge. The traditional paradigm of a security breach has been a hacker exploiting a software vulnerability. Bots don’t do that — they attack online business processes and are thus difficult to detect through customary vulnerability-scanning methods.
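Because bots abuse processes rather than exploit code, detection tends to lean on behavioral signals instead of vulnerability scans. A toy sketch of the idea, with made-up thresholds and data (no platform's real detector works this simply):

```python
# Illustrative behavioral scorer: automated accounts often post at
# inhumanly high rates with near-metronomic gaps between posts, while
# humans post slowly and irregularly. Thresholds here are arbitrary.
def bot_score(post_timestamps, window_seconds=3600):
    """Score an account 0..1 on two crude signals:
    posting rate and regularity of gaps between posts."""
    if len(post_timestamps) < 3:
        return 0.0  # too little activity to judge
    ts = sorted(post_timestamps)
    # Posts per hour over the account's active span.
    rate = len(ts) / max(ts[-1] - ts[0], 1) * window_seconds
    # Low variance in inter-post gaps => machine-like regularity.
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    mean_gap = sum(gaps) / len(gaps)
    variance = sum((g - mean_gap) ** 2 for g in gaps) / len(gaps)
    regularity = 1.0 / (1.0 + variance / max(mean_gap, 1) ** 2)
    rate_signal = min(rate / 60.0, 1.0)  # >60 posts/hour saturates
    return 0.5 * rate_signal + 0.5 * regularity

human = [0, 40, 300, 1000, 4000]         # slow, irregular posting
bot = [i * 10 for i in range(100)]       # every 10 seconds, like clockwork
print(bot_score(bot) > bot_score(human))  # prints True
```

Real systems layer many more signals (account age, network structure, content similarity across accounts), but the contrast with vulnerability scanning is the point: nothing here inspects software for flaws.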

I thought there was admirable transparency in Twitter’s October 17 blog accompanying its release of information about the extent of misinformation operations since 2016. “It is clear that information operations and coordinated inauthentic behavior will not cease,” the company said. “These types of tactics have been around for far longer than Twitter has existed — they will adapt and change as the geopolitical terrain evolves worldwide and as new technologies emerge.”

Which leads to the fourth reason I’m optimistic: technological advances.

In the earlier days of the internet, in the late 1990s and early 2000s, networks were extremely susceptible to worms, viruses and other attacks because protective technology was in its early stages of development. Intrusions still happen, obviously, but security technology has grown much more sophisticated and many attacks occur due to human error rather than failure of the defense systems themselves.

Bot detection and mitigation technology keeps improving, and I think we’ll get to a state where it becomes as automatic and effective as email spam filters are today. Security capabilities that too often are siloed within networks will integrate more and more into holistic platforms better able to detect and ward off bot threats.

So while we should still worry about bots in 2018, and the world continues to wrap its arms around the problem, we’re seeing significant action that should bode well for the future.

The health of democracy and companies’ ability to conduct business online may depend on it.