Author: Taylor Hatmaker

Facebook bans the Proud Boys, cutting the group off from its main recruitment platform

Facebook is moving to ban the Proud Boys, a far-right men’s organization with ties to white supremacist groups. Business Insider first reported the decision. Facebook confirmed to TechCrunch the decision to ban the Proud Boys from Facebook and Instagram, indicating that the group (and presumably its leader, Gavin McInnes) now meets the company’s definition of a hate organization or figure.

Facebook provided the following statement:

Our team continues to study trends in organized hate and hate speech and works with partners to better understand hate organizations as they evolve. We ban these organizations and individuals from our platforms and also remove all praise and support when we become aware of it. We will continue to review content, Pages, and people that violate our policies, take action against hate speech and hate organizations to help keep our community safe.

Even compared to other groups on the far right with online origins, the Proud Boys maximize their impact through social networking. The organization, founded by provocateur and Vice co-founder McInnes, relies on Facebook as its primary recruitment tool. As we reported in August, the Proud Boys operate a surprisingly sophisticated network for getting new members into the fold via many local and regional Facebook groups. All of it relies on Facebook — the Proud Boys homepage even links out to the web of Facebook groups to guide potential recruits toward next steps.

At the time of writing, Facebook’s ban appeared to affect some Proud Boys groups and not others. The profile of Proud Boys founder McInnes appears to still be functional. Facebook’s decision to act against the organization is likely tied to the recent arrest of five Proud Boys members in New York City on charges including assault, criminal possession of a weapon and gang assault.

Twitter’s U.S. midterms hub is a hot mess

Today, Jack Dorsey tweeted a link to his company’s latest gesture toward ongoing political relevance, a U.S. midterms news center collecting “the latest news and top commentary” on the country’s extraordinarily consequential upcoming election. If curated and filtered properly, that could be useful! Imagine. Unfortunately, the tool is rife with fake news, making it just another of Twitter’s small yet increasingly consequential disasters.

Beyond a promotional tweet from Dorsey, Twitter’s new offering is kind of buried — probably for the best. On desktop it’s a not particularly useful mash of national news reporters, local candidates and assorted unverifiable partisans. As BuzzFeed News details, the tool is swimming with conspiracy theories, including ones involving the migrant caravan. According to his social media posts, the Pittsburgh shooter was at least partially motivated by similar conspiracies, so this is not a good look to say the least.

Why launch a tool like this before performing the most cursory scan for the kind of low-quality sources that already have your company in hot water? Why have your chief executive promote it? Why, why, why?

A few hours after Dorsey’s tweet, likely after the prominent callout, the main feed looked a bit tamer than it did at first glance. Subpages for local races appear mostly populated by candidates themselves, while the national feed looks more like an algorithmically generated echo chamber version of my regular Twitter feed, with inexplicably generous helpings of MSNBC pundits and more lefty activists.

For Twitter users already immersed in conspiracies, particularly those that incubate so successfully on the far right, does this feed offer yet another echo chamber disguised as a neutral news source? In spite of its sometimes dubious left leanings, my feed is still peppered with tweets from undercover video provocateur James O’Keefe — not exactly a high-quality source.

In May, Twitter announced that political candidates would get a special badge, making them stand out from other users and potential imposters. That was useful! Anything that helps Twitter function as a fast news source with light context is a positive step, but unfortunately we haven’t seen a whole lot in this direction.

Social media companies need to stop launching additional amplification tools into the ominous void. No social tech company has yet exhibited a meaningful understanding of the systemic shifts that need to happen — possibly product-rending shifts — to dissuade bad actors and straight up disinformation from spreading like a back-to-school virus. 

Unfortunately, a week before the U.S. midterm elections, Twitter looks as uninterested as ever in the social disease wreaking havoc on its platform, even as users suffer its real-life consequences. Even more unfortunate for any members of its still-dedicated, weary userbase, Twitter’s latest wholly avoidable minor catastrophe comes as a surprise to no one.

Twitter suspends accounts linked to mail bomb suspect

At least two Twitter accounts linked to the man suspected of sending explosive devices to more than a dozen prominent Democrats were suspended on Friday afternoon.

Facebook moved fairly quickly to suspend the account of the suspect, Cesar Sayoc, though two Twitter accounts that appeared to belong to Sayoc remained online and accessible until around 2:30 p.m. Pacific. Both accounts featured numerous tweets, many of which contained far-right political conspiracy theories, graphic images and specific threats.

TechCrunch was able to review the accounts extensively before they were removed. Both known accounts, @hardrockintlet and @hardrock2016, contained many tweets that appeared to threaten violence against perceived political enemies, including Keith Ellison and Joe Biden, an intended recipient of an explosive device.

In one case, those threats had been previously reported to Twitter. Democratic commentator Rochelle Ritchie tweeted that she reported a tweet from @hardrock2016 following her appearance on Fox News. According to a screenshot, Twitter received the report and on October 11 responded that it found “no violation of the Twitter rules against abusive behavior.”

The tweet stated, “We will see u 4 sure. Hug your loved ones real close every time you leave home,” and was accompanied by a photo of Ritchie, a screenshot of a news story about a body found in the Everglades and the tarot card representing death.

Between the two accounts linked to Sayoc, many of the threats were depicted with graphic images in sequence. In one tweet on September 18 directed at former Vice President Joe Biden, the account posted images of an air boat, a symbol depicting an hourglass with a scythe and graphic images of a decapitated goat.

Threatening messages that emerge out of a sequence of images would likely be more difficult for machine learning moderation tools to parse, though any human content moderator would have no trouble extracting their meaning. In most cases the threatening images were paired with a verbal threat. At least one archive of a Twitter account linked to Sayoc remains online.

In a statement to TechCrunch, Twitter stated only that “This is an ongoing law enforcement investigation. We do not have a comment.” The company indicated that the accounts were suspended for violating Twitter’s rules, though it did not specify which.

A ton of people don’t know that Facebook owns WhatsApp

Americans looking to reduce their reliance on products from tech’s most alarmingly megalithic companies might be surprised to learn just how far their reach extends.

Privacy-minded browser company DuckDuckGo conducted a small study to look into that phenomenon and the results were pretty striking.

“… As Facebook usage wanes, messaging apps like WhatsApp are growing in popularity as a ‘more private (and less confrontational) space to communicate,'” DuckDuckGo wrote in the post. “That shift didn’t make much sense to us because both services are owned by the same company, so we tried to find an explanation.”

DuckDuckGo gathered a random sample of 1,297 adult Americans who are “collectively demographically similar to the general population of U.S. adults” (i.e. not just DuckDuckGo diehards) using SurveyMonkey’s audience tools. The survey found that 50.4 percent of those surveyed who had used WhatsApp in the prior six months (247 participants) did not know the company is owned by Facebook.

Similarly, DuckDuckGo found that 56.4 percent of those surveyed who had used Waze in the past six months (291 participants) had no idea that the navigation app is owned by Google. A similar study conducted back in April found the same phenomenon when it came to Facebook/Instagram and Google/YouTube, though for Instagram the effect was even stronger (wow).

If you’re reading TechCrunch, it’s probably almost impossible to imagine that average people aren’t tracing the lines between tech’s biggest companies and the products scooped up or built under their wings. And yet, it is so.

Even as companies like Google and Facebook suffer blowback from privacy crises, it’s clear that they can lean on the products they’ve picked up along the way to chart a path forward. If this survey is any indication, half of U.S. consumers will have no idea that they’ve jumped ship from a big tech product into a lifeboat captained by the very same company they sought to escape.

And for the biggest tech companies, it’s at least one reason that keeping satellite products at arm’s length from their respective motherships is advantageous for maintaining trust — especially while aggressive data sharing happens behind the scenes.

Twitter tests out ‘annotations’ in Moments

Twitter is trying out a small new change to Moments that would provide contextual information within its curated stories. Spotted by Twitter user @kwatt and confirmed by a number of Twitter product team members, the little snippets appear sandwiched between tweets in a Moment.

Called “annotations” — not to be confused with Twitter’s metadata annotations of yore — the morsels of info aim to clarify and provide context for the tweets that comprise Twitter’s curated trending content. According to the product team, they are authored by Twitter’s curation group.

In our testing, annotations only appear on the mobile app and not on the same Moments on desktop. So far we’ve seen them on a story about the NFL, one about MoviePass and another about staffing changes in the White House.

While it’s a tiny feature tweak, annotations are another sign that Twitter is exploring ways to infuse its platform with value and veracity in the face of what so far appears to be an intractable misinformation crisis.

Instagram’s app-based 2FA is live now, here’s how to turn it on

If you’d like to be sure you’re the only one posting elaborately staged yet casual selfies to your Instagram feed, there’s now a powerful new option to help you keep your account safe.

In late September, Instagram announced that it would be adding non-SMS-based two-factor authentication to the app. Instagram confirmed to TechCrunch that the company rolled out the security feature last week and that non-SMS two-factor authentication is live now for all users.

Enabling two-factor authentication (2FA) adds an additional “check” to an account so you can be sure you’re the only one who can log in. Instagram previously only offered less secure SMS-based 2FA, which is vulnerable to SIM hijacking attacks but still better than nothing.

Now, the app supports authenticator apps that generate a code or send a user a prompt in order to prove that they are in fact the authorized account holder. When it’s available, enabling 2FA is one of the easiest, most robust basic security precautions anyone can take to protect any kind of account.
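Under the hood, authenticator apps generate time-based one-time passwords (TOTP, per RFC 6238): the service and the app share a secret during setup, and each independently derives a short-lived code from that secret and the current time. A minimal Python sketch of the algorithm is below; the secret used in the test is the RFC’s published test value, not anything specific to Instagram.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, step=30, digits=6, now=None):
    """Generate a time-based one-time password (RFC 6238, HMAC-SHA-1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of `step`-second intervals elapsed.
    counter = int((time.time() if now is None else now) // step)
    # HOTP (RFC 4226): HMAC the counter, then dynamically truncate.
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

Because clocks drift, a verifying server typically accepts codes from one or two adjacent time steps as well, which is why a code keeps working for a short grace period after it rolls over.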

If you’d like to enable app-based 2FA now, and you really should, here’s how to do it.

Open Instagram and navigate to the Settings menu. Scroll down into the Privacy and Security section and select Two-Factor Authentication. There, you’ll see two toggle options: Text Message and Authentication App. Choose Authentication App. On the next screen, Instagram will either detect existing authentication apps on your device, invite you to download one (Google Authenticator by default, Authy is a fine option too) or allow you to set up 2FA manually. Follow whichever option works best for you.

You’ll be asked to authenticate the device you’re on now, but you won’t have to do this every time for trusted devices once they have been authenticated. See? Not so bad. It took a long time for such a popular, well-resourced app to offer proper 2FA, but we’re glad it’s here now.

Additional reporting by Sarah Perez.

Facebook, are you kidding?

Facebook is making a video camera. The company wants you to take it home, gaze into its single roving-yet-unblinking eye and speak private thoughts to your loved ones into its many-eared panel.

The thing is called Portal and it wants to live on your kitchen counter or in your living room or wherever else you’d like friends and family to remotely hang out with you. Portal adjusts to keep its subject in frame as they move around to enable casual at-home video chat. The device minimizes background noise to boost voice clarity. These tricks are neat but not revelatory.

Sounds useful, though. Everyone you know is on Facebook. Or they were anyway… things are a bit different now.

Facebook, champion of bad timing

As many users are looking for ways to compartmentalize or scale back their reliance on Facebook, the company has invited itself into the home. Portal is voice activated, listening for a cue phrase (in this case “Hey Portal”) and leveraging Amazon’s Alexa voice commands as well. The problem is that plenty of users are already creeped out enough by Alexa’s always-listening functionality and habit of picking up snippets of conversation from the next room over. It may have the best social graph in the world, but in 2018 people are looking to use Facebook for less — not more.

Facebook reportedly planned to unveil Portal at F8 this year but held the product back due to the Cambridge Analytica scandal, among other scandals. The fact that the company released the device on the tail end of a major data breach disclosure suggests that the company couldn’t really hold back the product longer without killing it altogether and didn’t see a break in the clouds coming any time soon. Facebook’s Portal is another way for Facebook to blaze a path that its users walk daily to connect to one another. Months after its original intended ship date, the timing still couldn’t be worse.

Over the last eight years, Facebook has insisted time and time again that it is not and never will be a hardware company. I remember sitting in the second row at a mysterious Menlo Park press event five years ago as reporters muttered that we might at last meet the mythological Facebook phone. Instead, Mark Zuckerberg introduced Graph Search.

It’s hard to overstate just how much better the market timing would have been back in 2013. For privacy advocates, the platform was already on notice, but most users still bobbed in and out of Facebook regularly without much thought. Friends who’d quit Facebook cold turkey were still anomalous. Soul-searching over social media’s inexorable impact on social behavior wasn’t quite casual conversation except among disillusioned tech reporters.

Trusting Facebook (or not)

Onion headline-worthy news timing aside, Facebook showed a glimmer of self-awareness, promising that Portal was “built with privacy and security in mind.” It makes a few more promises:

“Facebook doesn’t listen to, view, or keep the contents of your Portal video calls. Your Portal conversations stay between you and the people you’re calling. In addition, video calls on Portal are encrypted, so your calls are always secure.”

“For added security, Smart Camera and Smart Sound use AI technology that runs locally on Portal, not on Facebook servers. Portal’s camera doesn’t use facial recognition and doesn’t identify who you are.”

“Like other voice-enabled devices, Portal only sends voice commands to Facebook servers after you say, ‘Hey Portal.’ You can delete your Portal’s voice history in your Facebook Activity Log at any time.”

This stuff sounds okay, but it’s standard. And, like any Facebook product testing the waters before turning the ad hose on full-blast, it’s all subject to change. For example, Portal’s camera doesn’t identify who you are, but Facebook commands a powerful facial recognition engine and is known for blurring the boundaries between its major products, a habit that’s likely to worsen with some of the gatekeepers out of the way.

Facebook does not command a standard level of trust. To recover from recent lows, Facebook needs to establish an extraordinary level of trust with users. A fantastic level of trust. Instead, it’s charting new inroads into their lives.

Hardware is hard. Facebook isn’t a hardware maker and its handling of Oculus is the company’s only real trial with the challenges of making, marketing — and securing — something that isn’t a social app. In 2012, Zuckerberg declared that hardware has “always been the wrong strategy” for Facebook. Two years later, Facebook bought Oculus, but that was a bid to own the platform of the future after missing the boat on the early mobile boom — not a signal that Facebook wanted to be a hardware company.

Reminder: Facebook’s entire raison d’être is to extract personal data from its users. For intimate products — video chat, messaging, kitchen-friendly panopticons — it’s best to rely on companies with a business model that is not diametrically opposed to user privacy. Facebook isn’t the only one of those companies (um, hey Google) but Facebook’s products aren’t singular enough to be worth fooling yourself into a surfeit of trust.

Gut check

Right now, as consumers, we only have so much leverage. A small handful of giant tech companies — Facebook, Apple, Amazon, Google and Microsoft — make products that are ostensibly useful, and we decide how useful they are and how much privacy we’re willing to trade to get them. That’s the deal and the deal sucks.

As a consumer it’s worth really sitting with that. Which companies do you trust the least? Why?

It stands to reason that if Facebook cannot reliably secure its flagship product — Facebook itself — then the company should not be trusted with experimental forays into wildly different products, i.e. physical ones. Securing a software platform that serves 2.23 billion users is an extremely challenging task, and adding hardware to that equation just complicates existing concerns.

You don’t have to know the technical ins and outs of security to make secure choices. Trust is leverage — demand that it be earned. If a product doesn’t pass the smell test, trust that feeling. Throw it out. Better yet, don’t invite it onto your kitchen counter to begin with.

If we can’t trust Facebook to safely help us log in to websites or share news stories, why should we trust Facebook to move an always-on, counter-mounted speaker capable of collecting incredibly sensitive data into our homes? tl;dr: We shouldn’t! Of course we shouldn’t. But you knew that.

What Instagram users need to know about Facebook’s security breach

Even if you never log into Facebook itself these days, the other apps and services you use might be impacted by Facebook’s latest big, bad news.

In a follow-up call regarding Friday’s revelation that Facebook suffered a security breach affecting at least 50 million accounts, the company clarified that Instagram users were not out of the woods — nor were any other third-party services that utilize Facebook Login. Facebook Login is the tool that allows users to sign in with a Facebook account instead of traditional login credentials, and many users choose it as a convenient way to sign into a variety of apps and services.

Third-party apps and sites affected too

Due to the nature of the hack, Facebook cannot rule out the possibility that attackers also accessed any Instagram account linked to an affected Facebook account through Facebook Login. Still, it’s worth remembering that while Facebook can’t rule it out, the company has no evidence (yet) of this kind of activity.

“So the vulnerability was on Facebook, but these access tokens enable someone to use [a connected account] as if they were the account holder themselves — this does mean they could have access other third party apps that were using Facebook login,” Facebook Vice President of Product Management Guy Rosen explained on the call.

“Now that we have reset all of those access tokens as part of protecting the security of people’s accounts, developers who use Facebook login will be able to detect that those access tokens has been reset, identify those users and as a user, you will simply have to log in again into those third party apps.”
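What that detection looks like varies by integration, but the common pattern is to inspect error payloads for an OAuth failure and send the user back through login rather than retrying. A hypothetical Python sketch is below; it assumes the Graph API’s documented OAuthException error shape, where code 190 covers invalid or expired access tokens, but treat the specifics as an illustration rather than a complete integration.

```python
def token_was_reset(api_response):
    """Check a Graph API JSON error payload for an invalidated access
    token. If this returns True, the app should clear its stored token
    and prompt the user to log in with Facebook again."""
    error = api_response.get("error") or {}
    # "OAuthException" with code 190 is the invalid/expired-token family;
    # error_subcode (when present) distinguishes expiry from invalidation.
    return error.get("type") == "OAuthException" and error.get("code") == 190
```

In other words, from a third-party app’s perspective the mass token reset looks just like any other expired session: the next API call fails, and the user is asked to sign in again.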

Rosen reiterated that there is plenty Facebook does not know about the hack, including the extent to which attackers manipulated the three security bugs in question to obtain access to external accounts through Facebook Login.

“The vulnerability was on Facebook itself and we’ve yet to determine, given the investigation is really early, [what was] the exact nature of misuse and whether there was any access to Instagram accounts, for example,” Rosen said.

Anyone with a Facebook account affected by the breach — you should have been automatically logged out and will receive a notification — will need to unlink and relink their Instagram account to Facebook in order to continue cross-posting content to Facebook.

How to relink your Facebook account and do a security check

To relink your Instagram account to Facebook, if you choose to, first unlink it: open Instagram Settings > Linked Accounts and select the checkbox next to Facebook, then click Unlink and confirm your selection. To reconnect Instagram with Facebook, select Facebook in the Linked Accounts menu and log in with your credentials as normal.

If you know your Facebook account was affected by the breach, it’s wise to check for suspicious activity on your account. You can do this on Facebook through the Security and Login menu.

There, you’ll want to browse the activity listed to make sure you don’t see anything that doesn’t look like you — logins from other countries, for example. If you’re concerned or just want to play it safe, you can always find the link to “Log Out Of All Sessions” by scrolling toward the bottom of the page.

While we know a little bit more now about Facebook’s biggest security breach to date, there’s still a lot that we don’t. Expect plenty of additional information in the coming days and weeks as Facebook surveys the damage and passes that information along to its users. We’ll do the same.

Facebook is blocking users from posting some stories about its security breach

Some users are reporting that they are unable to post today’s big story about a security breach affecting 50 million Facebook users. The issue appears to only affect particular stories from certain outlets, at this time one story from The Guardian and one from the Associated Press, both reputable press outlets.

When going to share the story to their news feed, some users, including members of the staff here at TechCrunch who were able to replicate the bug, were met with an error message that prevented them from sharing the story.

According to the message, Facebook is flagging the stories as spam due to how widely they are being shared, or, as the message puts it, the system’s observation that “a lot of people are posting the same content.”

To be clear, this isn’t one Facebook content moderator sitting behind a screen rejecting the link somewhere or the company conspiring against users spreading damning news. The situation is another example of Facebook’s automated content flagging tools marking legitimate content as illegitimate, in this case calling it spam. Still, it’s strange and difficult to understand why such a bug wouldn’t affect many other stories that regularly go viral on the social platform.

This instance is by no means a first for Facebook. The platform’s automated tools — which operate at unprecedented scale for a social network — are well known for at times censoring legitimate posts and flagging benign content while failing to detect harassment and hate speech. We’ve reached out to Facebook for details about how this kind of thing happens but the company appears to have its hands full with the bigger news of the day.

While the incident is nothing particularly new, it’s an odd quirk — and in this instance quite a bad look given that the bad news affects Facebook itself.

Facebook policy head makes a surprising cameo at the Kavanaugh hearing

Facebook might be doing its best to stay out of political scandals in the latter half of 2018, but the company had a presence, front and center, at one of the most contentious Senate hearings in modern history.

Facebook’s Vice President of Global Public Policy, Joel Kaplan, was spotted sitting prominently near his wife, Laura Cox Kaplan, in the section for Brett Kavanaugh’s supporters. He is pictured on the left side of the header image, second row, in a blue tie.

For reference, below is an image of Kaplan to the immediate right of Mark Zuckerberg during a Senate Judiciary joint hearing in April of this year.

WASHINGTON, DC – APRIL 10: Facebook co-founder, Chairman and CEO Mark Zuckerberg concludes his testimony before a combined Senate Judiciary and Commerce committee hearing in the Hart Senate Office Building on Capitol Hill April 10, 2018 in Washington, DC. (Photo by Win McNamee/Getty Images)

Kaplan has not made any public commentary on Twitter or Facebook about his support for the Supreme Court nominee, though through retweets, Kaplan’s wife appears to be of the mind that the hearing is part of a “smear campaign” against the family friend.

Kaplan is also featured in this viral image, making the rounds on Twitter.

His appearance during the hearing is a show of personal support, though it still turns heads for such a prominent Facebook employee to make a visible statement during such a politically divisive event. Kaplan is not representing Facebook in a formal capacity.

Kaplan served as a policy adviser on George W. Bush’s 2000 election campaign and went on to serve as a policy assistant to the president and as the deputy director of the Office of Management and Budget (OMB) and a deputy chief of staff. Kavanaugh worked for the Bush administration during the same period, joining the former president’s legal team and going on to work on the nomination of Chief Justice John Roberts to the Supreme Court.

Kaplan joined Facebook in 2011 as its VP of U.S. public policy. He continues to serve in a heavily influential political role with the company today, leading its Washington, D.C. office, which serves as the company’s lobbying arm.