Author: Jon Russell

Twitter vows to continue spam fight despite negative impact on user numbers

Twitter has no intention of easing up on its fight against spam users and other factors that jeopardize the “health” of its service, despite the approach costing it three million ‘lost’ monthly active users.

Investor panic sent Twitter’s stock price down by nearly 20 percent in early trading today following its latest financial report. Twitter posted a record profit of $100 million for Q2, but its monthly user count dropped by one million, with its U.S. number in particular down to 68 million from 69 million in the previous quarter.

The company said on an earnings call that efforts aimed at “prioritizing the health of the platform” combined with other factors cost it three million monthly users — a number which could have turned the user decline into a more favorable story of growth.

The company is anticipating another drop in the next quarter as it continues to double down on fighting spam and bots on its service. That isn’t the only factor reducing numbers, however. A reassessment of its paid partnerships with carriers worldwide — which help bring in and retain new users — in response to the development of its Lite app is also forecast to reduce MAU.

Investors may be concerned, but Twitter is bullish that an increase in the quality of users is ultimately better in the long run than the short-term gain of higher numbers.

Answering questions on an earnings call, Twitter CEO Jack Dorsey said the clean-up strategy would be ongoing as Twitter intends to “build [concerns for platform health] into our DNA.”

“When we do focus on removing some of the burden of people blocking/muting, we see positive results in our numbers,” he added. “We believe this will encourage our growth story.”

Yet the execs also played down the material impact by explaining that “many” of the “tens of millions” of removed accounts were already not counted within Twitter’s MAU metrics. Some, they added, had never been counted because they had been identified as questionable right from when they were registered.

Twitter explained as much in its earnings release:

When we suspend accounts, many of the removed accounts have already been excluded from MAU or DAU, either because the accounts were already inactive for more than one month at the time of suspension, or because they were caught at signup and were never included in MAU or DAU. We will continue to work hard to improve the health of the platform, providing updates on our progress at least quarterly, and prioritizing health efforts regardless of the near-term impact on metrics, as we believe the best driver of long-term growth of Twitter as a daily utility is a healthy conversation.

On the positive side, the executives played up the development of overseas revenue, which grew 44 percent year-on-year and now accounts for 48 percent of Twitter’s total income.

Facebook trips on its own moderation failures

After weeks of speculation around how it plans to handle conspiracy website Infowars, its creator Alex Jones and others that spread false information, Facebook finally gave us an answer: inconsistently.

The company hit Jones with a 30-day ban after it removed four videos that he shared on the Infowars Facebook Page.

The move is Facebook’s first that curtails the reach of Jones, who has been a major talking point in the media because he is continually allowed a voice on the social network, despite spreading “alternative theories” on events like 9/11 and the San Bernardino shootings.

Confusion

Sounds good so far, but, for a six-hour period today, it didn’t seem as though Facebook itself even knew what was going on.

CNET reported that Jones had been hit by a 30-day suspension for posting four videos that violated Facebook’s community standards on the Infowars page, which counts him as a moderator. When reached by TechCrunch to confirm the report, Facebook said Jones had only been handed a warning and that, in the event of another warning, a 30-day ban would then follow.

After hours of waiting for further confirmation and emails to the contrary, Facebook clarified that in fact Jones’ personal account was given a 30-day ban, while Infowars received a warning but no ban.

Facebook is literally shooting the messenger but allowing the page — which pushed the video out to its audience — to remain in place.

In subsequent emails, Facebook explained that the inconsistency is because Jones’ personal account had already received a past warning, which triggers the 30-day ban. Surprisingly, though, this is a first warning for the Infowars page.

At least, that’s what we think has happened, because Facebook hasn’t fully clarified the exact sequence of events. (We have asked.)

Beyond the four videos, there’s a lot riding on this decision — it sets a precedent. Infowars is one of the largest of its kind, but there are plenty of other organizations that thrive on pumping out misleading/false content that plays into insecurities, misplaced nationalistic pride and more.

That’s why Infowars (involuntarily) became the subject of two Facebook video events held with press this month. On both occasions, Facebook executives said that even those peddling false information deserve to have a voice on the social network, no matter how questionable or inflammatory their views may be. CEO Mark Zuckerberg himself even said Holocaust deniers have free speech on the service.

Based on today, so long as they spew their message within the Facebook community rules, they are fine.

Follow fast

In fact, you could take it further and suggest that if they don’t raise the suspicions of rival platforms like YouTube, they’ll remain untouched on Facebook.

The Jones/Infowars videos were pulled by Facebook days after being removed from YouTube. Indeed, one of the Facebook videos had even survived a review after it was flagged to Facebook moderators last month. The reviewer marked the video as acceptable and it remained on the platform — until this week.

Facebook called that decision a mistake, but arguably it’s a mistake that wouldn’t have been rectified had YouTube not raised the alarm by banning the videos on its platform first. (YouTube has well-documented content moderation problems of its own, so the fact that it is running circles around Facebook should draw much concern from the social network’s management.)

That Facebook is unable to communicate a significant decision like this in a cohesive manner doesn’t inspire confidence that it has its house in order when it comes to video moderation. If anything, it shows that the social network is playing catch-up and winging it on what is a critical topic.

Its platform is being used nefariously worldwide, whether to sway elections or incite racial violence in foreign lands, so now, more than ever, Facebook needs to nail down the basics of handling malicious content like Infowars, which, unlike those other threats, is hiding in plain sight.

Facebook also removes 4 Infowars videos, including one it previously cleared

Days after defending its decision to give a voice to conspiracy theory peddler Alex Jones and his Infowars site, Facebook has removed four of his videos for violating its community standards.

But one of the four had already been allowed to slip through the firm’s review system. A source within Facebook told TechCrunch that one of the videos had previously been flagged for review in June but, after being looked over by a checker, it was allowed to remain on the social network. That decision was described as “erroneous” and the video has now been removed.

Facebook’s removal of the videos comes days after YouTube scrubbed four videos from Jones from its site for violating its policies on content. The Facebook source confirmed that three of the videos it has removed were flagged for the first time on Wednesday — presumably after, or in conjunction with, being flagged to YouTube — but the fact that one had previously gotten the all-clear once again raises question marks about the consistency of Facebook’s review process.

Contrary to some media reports, Jones has not received a 30-day ban from Facebook following these removals. TechCrunch understands that such a ban will be issued if Jones violates the company’s policies in the future, but, for now, he has been given a warning.

“Our Community Standards make it clear that we prohibit content that encourages physical harm [bullying], or attacks someone based on their religious affiliation or gender identity [hate speech]. We remove content that violates our standards as soon as we’re aware of it. In this case, we received reports related to four different videos on the Pages that Infowars and Alex Jones maintain on Facebook. We reviewed the content against our Community Standards and determined that it violates. All four videos have been removed from Facebook,” a spokesperson said in a statement.

Earlier this month, the company’s head of News Feed John Hegeman said of Infowars content — which includes claims 9/11 was an inside job and alternate theories to the San Bernardino shootings — that “just for being false, doesn’t violate the community standards.” He added: “We created Facebook to be a place where different people can have a voice.”

Facebook seemed to double down on that stance on Monday when, at another event, VP of product Fidji Simo called Infowars “absolutely atrocious” but then said that “if you are saying something untrue on Facebook, you’re allowed to say it as long as you’re an authentic person and you are meeting the community standards.”

It’s not been a good week for Facebook. A poor earnings report spooked investors and caused its valuation to drop by $123 billion in what is the largest single-day market cap wipeout in U.S. trading history. That’s not the kind of record Facebook will want to own.

RIP Klout

Remember Klout?

The influence-measuring service that purportedly let social media influencers get free stuff is finally closing its doors this month.

Perhaps, like me, you’re surprised that Klout is still running in 2018, but time is nearly up. The closure will happen May 25 — you have until then to see what topics you’re apparently an expert on. The shutdown comes more than four years after it was acquired by social media software company Lithium Technologies for a reported $200 million. The plan was for Lithium to IPO, but that never happened.

Lithium operates a range of social media services, including products that handle social media marketing campaigns and engagement with customers, and now it has decided that Klout is no longer part of its vision.

“The Klout acquisition provided Lithium with valuable artificial intelligence (AI) and machine learning capabilities but Klout as a standalone service is not aligned with our long-term strategy,” CEO Pete Hess wrote in a short note.

Hess said those apparent AI and ML smarts will be put to work in the company’s other product lines.

He did tease a potential Klout replacement in the form of “a new social impact scoring methodology based on Twitter” that Lithium is apparently planning to release soon. I’m pretty sure someone out there is already pledging to bring Klout back on the blockchain and is frantically writing up an ICO whitepaper as we speak because that’s how it is these days.

RIP Klout

Twitter doesn’t care that someone is building a bot army in Southeast Asia

Facebook’s lack of attention to how third parties are using its service to reach users ended up with CEO Mark Zuckerberg taking questions from Congressional committees. With that in mind, you’d think that others in the social media space might be more attentive than usual to potentially malicious actors on their platforms.

Twitter, however, is turning the other way and insisting all is normal in Southeast Asia, despite the emergence of thousands of bot-like accounts that have followed prominent users in the region en masse over the past month.

Scores of reporters and Twitter users with large followings — yours truly included — have noticed that swarms of accounts with generic names, no profile photo, no bio and no tweets have followed them over the past month.

These accounts might be evidence of a new ‘bot farm’ — the creation of large numbers of accounts for sale or on-demand use, a practice Twitter has cracked down on — or the groundwork for more nefarious activities; it’s too early to tell.

In what appears to be the first regional Twitter bot campaign, a flood of suspicious new followers has been reported by users across Southeast Asia and beyond, including Thailand, Myanmar, Cambodia, Hong Kong, China, Taiwan and Sri Lanka, among other places.

While it is true that the new accounts have done nothing yet, the fact that a large number of newly created accounts have popped up out of nowhere with the aim of following the region’s most influential voices should be enough to concern Twitter — especially since this is Southeast Asia, a region where Facebook is beset with controversies, from its role inciting ethnic hatred in Myanmar, to allegedly assisting censors in Vietnam, to seeing users jailed for violating lese majeste in Thailand, to aiding the election of controversial Philippines leader Duterte.

Then there are governments themselves. Vietnam has pledged to build a cyber army to combat “wrongful views,” while other regimes in Southeast Asia have clamped down on social media users.

Despite that, Twitter isn’t commenting.

The U.S. company issued a flat “no comment” when TechCrunch asked for further information about this rush of new accounts and what action Twitter will take.

A source close to the company suggested that the sudden accumulation of new followers is “a pretty standard sign-up, or onboarding, issue” that is down to new accounts opting to follow the accounts Twitter suggests during the sign-up process.

Twitter is more than 10 years old, and this is the first example of this happening in Southeast Asia, so that explanation seems inadequate at face value. More generally, the dismissive approach seems particularly naive. Twitter should be looking into the issue more closely, even if the apparent bot army isn’t being put to use yet.

Facebook is considered to be the internet by many in Southeast Asia, and the social network is considerably more popular than Twitter in the region, but there remains a cause for concern here.

“If we’ve learned anything from the Facebook scandal, it’s that what can at first seem innocuous can be leveraged to quite insidious and invasive effect down the line,” Francis Wade, who recently published a book on violence in Myanmar, told the Financial Times this week. “That makes Twitter’s casual dismissal of concerns around this all the more unsettling.”

Facebook is again criticized for failing to prevent religious conflict in Myanmar

Today marks the start of Facebook CEO Mark Zuckerberg’s much-anticipated trip to Washington as he attends a hearing with the Senate before moving on to a House hearing tomorrow.

Away from the U.S. political capital, Zuckerberg is engaged in serious discussions about Myanmar with a group of six civil society organizations in the country who took umbrage at his claim that Facebook’s systems had prevented messages aimed at inciting violence between Buddhists and Muslims last September.

Following an open letter to Facebook on Friday that claimed the social network had relied on local sources and remains ill-equipped to handle hate speech, Zuckerberg himself stepped in to personally respond.

“Thank you for writing it and I apologize for not being sufficiently clear about the important role that your organizations play in helping us understand and respond to Myanmar-related issues, including the September incident you referred to,” Zuckerberg wrote.

“In making my remarks, my intention was to highlight how we’re building artificial intelligence to help us better identify abusive, hateful or false content even before it is flagged by our community,” he added.

Zuckerberg also claimed Facebook is working to implement new features that include the option to report inappropriate content inside Messenger, and adding more Burmese language reviewers — two suggestions that the Myanmar-based group had raised.

The group has, however, fired back again to criticize Zuckerberg’s response, which it said is “nowhere near enough to ensure that Myanmar users are provided with the same standards of care as users in the U.S. or Europe.”

In particular, the six organizations are asking Facebook and Zuckerberg to share data about the company’s efforts, including the number of abuse reports it has received, how many reported posts it has removed, how quickly it has acted, and its progress on banning accounts.

In addition, the group asked for clarity on the number of Burmese content reviewers on staff, the exact mechanisms that are in place for detecting hate speech, and an update on what action Facebook has taken following its last meeting with the group in December.

“When things go wrong in Myanmar, the consequences can be really serious — potentially disastrous,” it added.

The Cambridge Analytica story has become mainstream news in the U.S. and other parts of the world, yet less is known of Facebook’s role in spreading religious hatred in Myanmar, where the government stands accused of ethnic cleansing following its treatment of the minority Muslim Rohingya population.

A recent UN Fact-Finding Mission concluded that social media has played a “determining role” in the crisis, with Facebook the chief actor.

“We know that the ultranationalist Buddhists have their own [Facebook pages] and really [are] inciting a lot of violence and a lot of hatred against the Rohingya or other ethnic minorities. I’m afraid that Facebook has now turned into a beast, [instead of] what it was originally intended to be used [for],” the UN’s Yanghee Lee said to media.

Close to 30 million of Myanmar’s 50 million people are said to use the social network, making it a hugely effective way to reach large audiences.

“There’s this notion to many people [in Myanmar] that Facebook is the internet,” Jes Petersen, CEO of Phandeeyar — one of the companies involved in the correspondence with Zuckerberg — told TechCrunch in an interview last week.

Despite that huge popularity and high levels of abuse that Facebook itself has acknowledged, the social network does not have an office in Myanmar. In fact, its Burmese language reviewers are said to be stationed in Ireland while its policy team is located in Australia.

That doesn’t seem like the right combination, but it is also unclear whether Facebook is prepared to make changes to focus on user safety in Myanmar. The company declined to say whether it had plans to open an office on the ground when we asked last week.

Here’s Zuckerberg’s letter in full:

Dear Htaike Htaike, Jes, Victoire, Phyu Phyu and Thant,

I wanted to personally respond to your open letter. Thank you for writing it and I apologize for not being sufficiently clear about the important role that your organizations play in helping us understand and respond to Myanmar-related issues, including the September incident you referred to.

In making my remarks, my intention was to highlight how we’re building artificial intelligence to help us better identify abusive, hateful or false content even before it is flagged by our community.

These improvements in technology and tools are the kinds of solutions that your organizations have called on us to implement and we are committed to doing even more. For example, we are rolling out improvements to our reporting mechanism in Messenger to make it easier to find and simpler for people to report conversations.

In addition to improving our technology and tools, we have added dozens more Burmese language reviewers to handle reports from users across all our services. We have also increased the number of people across the company on Myanmar-related issues and we now have a special product team working to better understand the specific local challenges and build the right tools to help keep people there safe.

There are several other improvements we have made or are making, and I have directed my teams to ensure we are doing all we can to get your feedback and keep you informed.

We are grateful for your support as we map out our ongoing work in Myanmar, and we are committed to working with you to find more ways to be responsive to these important issues.

Mark

And the group’s reply:

Dear Mark,

Thank you for responding to our letter from your personal email account. It means a lot.

We also appreciate your reiteration of the steps Facebook has taken and intends to take to improve your performance in Myanmar.

This doesn’t change our core belief that your proposed improvements are nowhere near enough to ensure that Myanmar users are provided with the same standards of care as users in the US or Europe.

When things go wrong in Myanmar, the consequences can be really serious – potentially disastrous. You have yourself publicly acknowledged the risk of the platform being abused towards real harm.

Like many discussions we have had with your policy team previously, your email focuses on inputs. We care about performance, progress and positive outcomes.

In the spirit of transparency, we would greatly appreciate if you could provide us with the following indicators, starting with the month of March 2018:

  • How many reports of abuse have you received?
  • What % of reported abuses did your team ultimately remove due to violations of the community standards?
  • How many accounts were behind flagging the reports received?
  • What was the average time it took for your review team to provide a final response to users of the reports they have raised?
  • What % of the reports received took more than 48 hours to receive a review?
  • Do you have a target for review times? Data from our own monitoring suggests that you might have an internal standard for review – with most reported posts being reviewed shortly after the 48 hrs mark. Is this accurate?
  • How many fake accounts did you identify and remove?
  • How many accounts did you subject to a temporary ban? How many did you ban from the platform?

Improved performance comes with investments and we would also like to ask for more clarifications around these. Most importantly, we would like to know:

  • How many Myanmar speaking reviewers did you have, in total, as of March 2018? How many do you expect to have by the end of the year? We are specifically interested in reviewers working on the Facebook service and looking for full-time equivalents figure.
  • What mechanisms do you have in place for stopping repeat offenders in Myanmar? We know for a fact that fake accounts remain a key issue and that individuals who were found to violate the community standards on a number of occasions continue to have a presence on the platform.
  • What steps have you taken to date to address the duplicate posts issue we raised in the briefing we provided your team in December 2017?

We’re enclosing our December briefing for your reference, as it further elaborates on the challenges we have been trying to work through with Facebook.

Best,

Myanmar group blasts Zuckerberg’s claim on Facebook hate speech prevention

It’s becoming common to say that Mark Zuckerberg is coming under fire, but the Facebook CEO is again being questioned, this time over a recent claim that Facebook’s internal monitoring system is able to thwart attempts to use its services to incite hatred.

Speaking to Vox, Zuckerberg used the example of Myanmar, where he claimed Facebook had successfully rooted out and prevented hate speech through a system that scans chats inside Messenger. In this case, Messenger had been used to send messages to Buddhists and Muslims with the aim of creating conflict on September 11 last year.

Zuckerberg told Vox:

The Myanmar issues have, I think, gotten a lot of focus inside the company. I remember, one Saturday morning, I got a phone call and we detected that people were trying to spread sensational messages through — it was Facebook Messenger in this case — to each side of the conflict, basically telling the Muslims, “Hey, there’s about to be an uprising of the Buddhists, so make sure that you are armed and go to this place.” And then the same thing on the other side.

So that’s the kind of thing where I think it is clear that people were trying to use our tools in order to incite real harm. Now, in that case, our systems detect that that’s going on. We stop those messages from going through. But this is certainly something that we’re paying a lot of attention to.

That claim has been rejected in a letter signed by six organizations in Myanmar, including tech accelerator firm Phandeeyar. Far from a success, the group said, the incident shows why Facebook is not equipped to respond to hate speech in international markets: the company relied entirely on information from people on the ground — where it does not have an office — to learn of the issue.

The messages referenced by Zuckerberg, and translated to English by the Myanmar-based group

The group — which includes hate speech monitor Myanmar ICT for Development Organization and the Center for Social Integrity — explained that some four days elapsed between the sending of the first message and Facebook responding with a view to taking action.

In your interview, you refer to your detection ‘systems’. We believe your system, in this case, was us – and we were far from systematic. We identified the messages and escalated them to your team via email on Saturday the 9th September, Myanmar time. By then, the messages had already been circulating widely for three days.

The Messenger platform (at least in Myanmar) does not provide a reporting function, which would have enabled concerned individuals to flag the messages to you. Though these dangerous messages were deliberately pushed to large numbers of people – many people who received them say they did not personally know the sender – your team did not seem to have picked up on the pattern. For all of your data, it would seem that it was our personal connection with senior members of your team which led to the issue being dealt with.

The group added that it has not had feedback following the Messenger incident, and it is still waiting to hear back on ideas raised at its last meeting with Facebook in November.

Myanmar has only recently embraced the internet, thanks to the slashing of the cost of a SIM card — once as much as $300 — but already most people in the country are online. Its internet revolution has taken place over just the last five years, yet the level of Facebook adoption per person is one of the highest in the world.

“Out of a 50 million population, there are nearly 30 million active users on Facebook every month,” Phandeeyar CEO Jes Petersen told TechCrunch. “There’s this notion to many people that Facebook is the internet.”

Facebook optimistically set out to connect the world, and particularly to facilitate communication between governments and people. At face value, that statistic may appear to fit the company’s goal, but the platform has been abused in Myanmar.

Chiefly, that abuse has centered on stoking tension between the Muslim and Buddhist populations in the country.

The situation in the country is so severe that an estimated 700,000 Rohingya refugees are thought to have fled to neighboring Bangladesh following a Myanmar government crackdown that began in August. U.S. Secretary of State Rex Tillerson has labeled the actions as ethnic cleansing, as has the UN.

Tensions inflamed, Facebook has been a primary outlet for racial hatred from high-profile individuals inside Myanmar. One of them, the monk Ashin Wirathu, who is barred from public speaking due to his past, moved online to Facebook, where he quickly found an audience. Though his Facebook account was shuttered, he has vowed to open new ones in order to continue to amplify his voice via the social network.

Beyond visible figures, the platform has been ripe for anti-Muslim and anti-Rohingya memes and false news stories to go viral. UN investigators last month said Facebook has “turned into a beast” and played a key role in spreading hate.

Petersen said that Phandeeyar — which helped Facebook draft its local language community standards page — and others have held regular information meetings with the social network on the occasions that it has visited Myanmar. But the fact that it does not have an office in the country nor local speakers on its permanent staff has meant that little to nothing has been done.

Likewise, there is no organizational structure to handle the challenging situation in Myanmar, with many of its policy team based in Australia, and Facebook itself is not customized to solicit feedback from users in the country.

“If you are serious about making Facebook better, we urge you to invest more into moderation — particularly in countries, such as Myanmar, where Facebook has rapidly come to play a dominant role in how information is accessed and communicated,” the group wrote.

“We urge you to be more intent and proactive in engaging local groups, such as ours, who are invested in finding solutions, and — perhaps most importantly — we urge you to be more transparent about your processes, progress and the performance of your interventions, so as to enable us to work more effectively together,” they added in the letter.

Facebook has offices covering five of Southeast Asia’s largest countries — Singapore, Thailand, Indonesia, Malaysia and the Philippines — and its approach to expansion has seemed to focus on advertising sales opportunities, with most staff in the region being sales or account management personnel. Using that framing, Myanmar — with a nascent online advertising space — isn’t likely to qualify for an office, but Phandeeyar’s Petersen believes there’s a strong alternative case.

“Myanmar could be a really good test market for how you fix these problems,” he said in an interview. “The issues are not exclusive to Myanmar, but Facebook is so dominant and there are serious issues in the country — here is an opportunity to test ways to mitigate hate speech and fake news.”

Indeed, Zuckerberg has been praised for pushing to make Facebook less addictive, even at the expense of reduced advertising revenue. By the same token, Facebook could sacrifice profit and invest in opening more offices worldwide to help live up to the responsibility of being the de facto internet in many countries. Hiring local people to work hand-in-hand with communities would be a huge step forward to addressing these issues.

With over $4 billion in profit per quarter, it’s hard to argue that Facebook can’t justify the cost of a couple of dozen people in countries where it has acknowledged that there are local issues. Like the News Feed changes, there is probably a financially motivated argument that a safer Facebook is better for business, but the humanitarian responsibility alone should be enough to justify the costs.

In a statement, Facebook apologized that Zuckerberg had not acknowledged the role of the local groups in reporting the messages.

“We took their reports very seriously and immediately investigated ways to help prevent the spread of this content. We should have been faster and are working hard to improve our technology and tools to detect and prevent abusive, hateful or false content,” a spokesperson said.

The company said it is rolling out a feature to allow Messenger users to report abusive content inside the app. It also said that it has added more Burmese language reviewers to handle content across its services.

“There is more we need to do and we will continue to work with civil society groups in Myanmar and around the world to do better,” the spokesperson added.

The company didn’t respond when we asked if there are plans to open an office in Myanmar.

Zuckerberg’s interview with Vox was itself one of the first steps of a media campaign that the Facebook supremo has embarked on in response to the wave of criticism and controversy the company has weathered over the way it handles user data.

Facebook was heavily criticized last year for allowing Russian parties to disrupt the 2016 U.S. election using its platform, but the drama has intensified in recent weeks.

The company’s data privacy policy came under fire after it emerged that a developer named Dr. Aleksandr Kogan used the platform to administer a personality test app that collected data about participants and their friends. That data was then passed to Cambridge Analytica, where it may have been leveraged to optimize political campaigns, including that of 2016 presidential candidate Donald Trump and the Brexit vote — allegations that Cambridge Analytica vehemently denies. Regardless of how the data was employed to political ends, the lax data sharing was enough to ignite a firestorm around Facebook’s privacy practices.

Zuckerberg himself fronted a rare call with reporters this week in which he answered questions on a range of topics, including whether he should resign as Facebook CEO. (He said he won’t.)

Most recently, Facebook admitted that as many as 87 million people on the service may have been impacted by Cambridge Analytica’s activities. That’s some way above its initial estimate of 50 million. Zuckerberg is scheduled to appear in front of Congress to discuss the affair, and likely a whole lot more, on April 11. The following day, he has a date with the Senate to discuss, we presume, more of the same.

Following the Cambridge Analytica revelations, the company’s stock dropped precipitously, wiping more than $60 billion off its market capitalization after a prior period of stable growth.

Added to this data controversy, Facebook has been found to have deleted messages that Zuckerberg and other senior executives sent to some users, as TechCrunch’s Josh Constine reported this week. That’s despite the fact that Facebook and its Messenger product do not allow ordinary users to delete sent messages from a recipient’s inbox.