Author: Taylor Hatmaker

After Christchurch, Reddit bans communities infamous for sharing graphic videos of death

In the aftermath of the tragic mosque massacre that claimed 49 lives in Christchurch, New Zealand, tech companies scrambled to purge their platforms of promotional materials that the shooter left behind. As most of the internet is now unfortunately aware, the event was broadcast live on Facebook, making it one of the most horrific incidents of violence to spread through online communities in real time.

As Twitter users cautioned others against sharing the extraordinarily graphic video, some Reddit users actively sought the video and knew exactly where to look. The infamous subreddit r/watchpeopledie was quarantined (making it unsearchable) in September 2018 but until today remained active for anyone to visit directly. The subreddit has a long history of sharing extremely graphic videos following tragic events and acts of violence, like the 2018 murder of two female tourists in Morocco.

After Thursday’s shooting, the subreddit became extremely active with users seeking out a copy of the video, which was shot in first-person perspective from a head-mounted camera.

After the flurry of interest, one of the subreddit’s moderators locked a thread about the video and posted this statement:

“Sorry guys but we’re locking the thread out of necessity here. The video stays up until someone censors us. This video is being scrubbed from major social media platforms but hopefully Reddit believes in letting you decide for yourself whether or not you want to see unfiltered reality. Regardless of what you believe, this is an objective look into a terrible incident like this.

Remember to love each other.”

Late Thursday, the subreddit’s members were actively sharing mirrored links to the Christchurch video, though they did so largely via direct messaging. After watching the footage, many users returned to the thread to express that the content was extremely disturbing and to caution even their most violence-hardened peers against seeking the video.

The subreddit remained active until sometime late Friday morning Pacific Time, when Reddit banned the controversial community.

Reddit declined to provide details about its decision to ban the long-running community after this particular act of violence. “We are very clear in our site terms of service that posting content that incites or glorifies violence will get users and communities banned from Reddit,” a company spokesperson told TechCrunch. “Subreddits that fail to adhere to those site-wide rules will be banned.”

The subreddit’s many detractors consider the act of seeking and sharing such graphic depictions of death both inherently disturbing and disrespectful to victims and their families.

The subreddit is unquestionably grisly but remains surprisingly well-loved by some devotees, who insist that its graphic depictions of death are in fact life-affirming.

“Definitely saved me and helped me figure out I didn’t necessarily have tomorrow to get my shit in order,” one former member said in a thread discussing the since-banned community.

“Don’t think it is the kind of place to spend too much time in but, we all need reminders.”

Reddit banned the adjacent subreddits r/gore and r/wpdtalk (“watch people die talk”) on Friday as well.

Slack removes 28 accounts linked to hate groups

Today in a short blog post, Slack announced that it had removed 28 accounts with a “clear affiliation with known hate groups.” Few details were provided about the accounts or how Slack identified them, but we have reached out to the company for more information.

The announcement, brief as it is, comes as a surprise. To date, Slack has managed to stay out of the conversation around what happens when sometimes-violent, politically extreme organizations use popular social platforms to organize. While that conversation should be fairly clear-cut when it comes to public-facing content somewhere like Facebook, it’s a bit more nuanced on messaging platforms where communication is private in nature.

Slack is mostly used for workplace communication, but the chat platform Discord, popular with gamers, has been grappling with the same issues. In 2017, Discord removed a public server tied to AltRight.com that the company said violated its rules against harassment and calls for violence. As fringe groups are booted off of mainstream platforms, it will be interesting to see where they wind up and how their new platforms of choice will handle their unsavory new clusters of users.

For reference, here’s Slack’s full blog post:

Today we removed 28 accounts because of their clear affiliation with known hate groups. The use of Slack by hate groups runs counter to everything we believe in at Slack and is not welcome on our platform. Slack is designed to help businesses communicate better and more collaboratively so people can do their best work. Using Slack to encourage or incite hatred and violence against groups or individuals because of who they are is antithetical to our values and the very purpose of Slack. When we are made aware of an organization using Slack for illegal, harmful, or other prohibited purposes, we will investigate and take appropriate action and we are updating our terms of service to make that more explicit.

Instagram and Facebook will start censoring ‘graphic images’ of self-harm

In light of a recent tragedy, Instagram is updating the way it handles pictures depicting self-harm. Instagram and Facebook announced changes to their policies around content depicting cutting and other forms of self-harm in dual blog posts Thursday.

The changes come in light of the 2017 suicide of Molly Russell, a 14-year-old UK resident who took her own life. Following her death, her family discovered that Russell had engaged with accounts that depicted and promoted self-harm on the platform.

As the controversy unfolded, Instagram Head of Product Adam Mosseri penned an op-ed in the Telegraph to atone for the platform’s at-times high-consequence shortcomings. Mosseri previously announced that Instagram would implement “sensitivity screens” to obscure self-harm content, but the new changes go a step further.

Starting soon, both platforms will no longer allow any “graphic images of self-harm,” most notably those that depict cutting. This content was previously allowed because the platforms worked under the assumption that allowing people to connect and confide around these issues was better than the alternative. After a “comprehensive review with global experts and academics on youth, mental health and suicide prevention,” those policies are shifting.

“… It was advised that graphic images of self-harm – even when it is someone admitting their struggles – has the potential to unintentionally promote self-harm,” Mosseri said.

Instagram will also begin burying non-graphic images about self-harm (pictures of healed scars, for example) so they don’t show up in search, relevant hashtags or on the Explore tab. “We are not removing this type of content from Instagram entirely, as we don’t want to stigmatize or isolate people who may be in distress and posting self-harm related content as a cry for help,” Mosseri said.

According to the blog post, after consulting with groups like the Centre for Mental Health and Save.org, Instagram tried to strike a balance that would still allow users to express their personal struggles without encouraging others to hurt themselves. For self-harm, as with disordered eating, that’s a particularly difficult line to walk. It’s further complicated by the fact that not all people who self-harm have suicidal intentions, and the behavior has its own nuances apart from suicidality.

“Up until now, we’ve focused most of our approach on trying to help the individual who is sharing their experiences around self-harm. We have allowed content that shows contemplation or admission of self-harm because experts have told us it can help people get the support they need. But we need to do more to consider the effect of these images on other people who might see them. This is a difficult but important balance to get right.”

Mental health research and treatment teams have long been aware of “peer influence processes” that can make self-destructive behaviors take on a kind of social contagiousness. While online communities can also serve as a vital support system for anyone engaged in self-destructive behaviors, the wrong kind of peer support can backfire, reinforcing the behaviors or even popularizing them. Instagram’s failure to sufficiently safeguard against the potential impact this kind of content can have on a hashtag-powered social network is fairly remarkable considering that both Instagram and Facebook claim to have worked with mental health groups to get it right.

These changes are expected in the “coming weeks.” For now, a simple search of Instagram’s #selfharm hashtag still reveals a huge ecosystem of self-harmers on Instagram, including self-harm related memes (some hopeful, some not) and many very graphic photos of cutting.

“It will take time and we have a responsibility to get this right,” Mosseri said. “Our aim is to have no graphic self-harm or graphic suicide related content on Instagram… while still ensuring we support those using Instagram to connect with communities of support.”

Facebook just removed a new wave of suspicious activity linked to Iran

Facebook just announced its latest takedown of “coordinated inauthentic behavior,” this time out of Iran. The company took down 262 Pages, 356 accounts, three Facebook groups and 162 Instagram accounts that exhibited “malicious-looking indicators” and patterns that identify them as potentially state-sponsored or otherwise deceptive and coordinated activity.

As Facebook Head of Cybersecurity Policy Nathaniel Gleicher noted in a press call, Facebook coordinated closely with Twitter to discover these accounts, and by collaborating early and often the company “[was] able to use that to build up our own investigation.” Today, Twitter published a postmortem on its efforts to combat misinformation during the U.S. midterm election last year.

Example of the content removed

As the Newsroom post details, the activity affected a broad swath of areas around the globe:

There were multiple sets of activity, each localized for a specific country or region, including Afghanistan, Albania, Algeria, Bahrain, Egypt, France, Germany, India, Indonesia, Iran, Iraq, Israel, Libya, Mexico, Morocco, Pakistan, Qatar, Saudi Arabia, Serbia, South Africa, Spain, Sudan, Syria, Tunisia, US, and Yemen. The Page administrators and account owners typically represented themselves as locals, often using fake accounts, and posted news stories on current events… on topics like Israel-Palestine relations and the conflicts in Syria and Yemen, including the role of the US, Saudi Arabia, and Russia.

Today’s takedown is the result of an internal investigation linking the newly discovered activity to other content out of Iran late last year. Remarkably, the activity Facebook flagged today dates back to 2010.

The Iranian activity was not focused on creating real-world events, as we’ve seen in other cases. In many cases, the content “repurposed” reporting from Iranian state media and spread ideas that could benefit Iran’s positions on various geopolitical issues. Still, Facebook declined to link the newly identified activity to Iran’s government directly.

“Whenever we make an announcement like this we’re really careful,” Gleicher said. “We’re not in a position to directly assert who the actor is in this case, we’re asserting what we can prove.”

Facebook users who quit the social network for a month feel happier

New research out of Stanford and New York University took a look at what happens when people step back from Facebook for a month.

Through Facebook, the research team recruited 2,488 people who averaged an hour of Facebook use each day. After assessing their “willingness to accept” the idea of deactivating their account for a month, the study assigned eligible participants to an experimental group that would deactivate their accounts or a control group that would not.

Over the course of the month-long experiment, researchers monitored compliance by checking participants’ profiles. The participants self-reported a rotating set of well-being measures in real time, including happiness, what emotion a participant felt over the last 10 minutes and a measure of loneliness.

As the researchers report, leaving Facebook correlated with improvements on well-being measures. They found that the group tasked with quitting Facebook ended up spending less time on other social networks too, instead devoting more time to offline activities like spending time with friends and family (good) and watching television (maybe not so good). Overall, the group reported that it spent less time consuming news in general.

The group that quit Facebook also reported spending less time on the social network after the study-imposed hiatus was up, suggesting that the break might have given them new insight into their own habits.

“Reduced post-experiment use aligns with our finding that deactivation improved subjective well-being, and it is also consistent with the hypotheses that Facebook is habit forming… or that people learned that they enjoy life without Facebook more than they had anticipated,” the paper’s authors wrote.

There are a few things to be aware of with the research. The paper notes that subjects were told they would “keep [their] access to Facebook Messenger.” Though the potential impact of letting participants remain on Messenger isn’t mentioned again, it sounds like they were still freely using one of the platform’s main functions, though perhaps one with fewer potential negative effects on mood and behavior.

Unlike some recent research, this study was conducted by economics researchers. That’s not unusual for social psych-esque stuff like this but does inform aspects of the method, measures used and perspective.

Most important for a bit more context, the research was conducted in the run-up to the 2018 U.S. midterm elections. That fact is likely to have informed participants’ attitudes around social media, both before and after the election.

While the participants reported that they were less informed about current events, they also showed evidence of being less politically polarized, “consistent with the concern that social media have played some role in the recent rise of polarization in the US.”

In an era of ubiquitous threats to quit the world’s biggest social network, the fact remains that we mostly have no idea what our online habits are doing to our brains and behavior. Given that, we also don’t know what happens when we step back from social media environments like Facebook and give our brains a reprieve. With its robust sample size and fairly thorough methodology, this study provides us a useful glimpse into those effects. For more insight into the research, you can read the full paper here.

EFF lawyer joins WhatsApp as privacy policy manager

In an effort to bolster its public credibility in the wake of a very rough year, Facebook is bringing a fierce former critic into the fold.

Next month, longtime Electronic Frontier Foundation (EFF) counsel Nate Cardozo will join WhatsApp, Facebook’s encrypted chat app. Cardozo most recently held the position of Senior Information Security Counsel with the EFF, where he worked on cybersecurity policy. As his bio there reads, Cardozo is “an expert in technology law and civil liberties” and already works with private companies on privacy policies that protect user rights.

Cardozo announced the move in a post to Facebook on Tuesday.

“Personal news!

After six and a half years at the Electronic Frontier Foundation (EFF), I’ll be leaving at the end of next week. I’m incredibly sad to be leaving such a great organization and I’ll miss my colleagues with all my heart.

Where to? Starting 2/19, I’ll be the Privacy Policy Manager for WhatsApp!! I could NOT be more excited.

If you know me at all, you’ll know this isn’t a move I’d make lightly. After the privacy beating Facebook’s taken over the last year, I was skeptical too. But the privacy team I’ll be joining knows me well, and knows exactly how I feel about tech policy, privacy, and encrypted messaging. And that’s who they want at managing privacy at WhatsApp. I couldn’t pass up that opportunity.

It’s going to be an enormous challenge professionally but I’m ready for it.”

Though it also does more cooperative work with major tech companies, the EFF frequently finds itself on the opposite side of the ring. Cardozo’s own background reflects that adversarial relationship, and he certainly hasn’t minced words about his new employer. In a 2015 op-ed, Cardozo hit the nail on the head about Facebook’s lucrative habit of tracking its users’ every move.

“It’s creepy, but maybe you don’t care enough about a faceless corporation’s data mining to go out of your way to protect your privacy, and anyway you don’t have anything to hide,” Cardozo wrote. “Facebook counts on that; its business model depends on our collective confusion and apathy about privacy.”

Personally, we’d sleep ever so slightly better at night knowing that the guy who wrote the sentence “If a business model depends on deception and apathy, it deserves to fail” is taking a turn on the inside.

The cognitive dissonance of a well-regarded privacy advocate moving over to Facebook is notable, though not without precedent. For all its privacy blunders, Facebook does own the most popular digital messaging app in most countries around the world — an app it opts to keep end-to-end encrypted by default (so far, anyway).

As far as WhatsApp goes, Cardozo’s hiring comes at a critical time: Last week, The New York Times reported Facebook’s intention to integrate WhatsApp, Instagram and Facebook Messenger. The massive change has some security and privacy-minded people happy (more end-to-end encryption!) and plenty more worried about what else the integration will mean.

Leading into the change, if it materializes, Facebook would be smart to hire as many prominent voices in online privacy as it can attract. Public criticism of the company hasn’t waned exactly, but hiring critics is a straightforward way to build trust in the meantime. For a company not known for public dissent and open dialogue, Facebook’s critics may prove a valuable asset if they can be recruited for a tour of duty behind the big blue line.

Update: Cardozo isn’t alone in making the switch from privacy advocacy to Facebook. The company has also hired Robyn Greene from the Open Technology Institute. As she announced in a tweet, Greene will focus on law enforcement access and data protection in her new role with Facebook.

Research finds heavy Facebook users make impaired decisions like drug addicts

Researchers at Michigan State University are exploring the idea that there’s more to “social media addiction” than casual joking about being too online might suggest. Their paper, titled “Excessive social media users demonstrate impaired decision making in the Iowa Gambling Task” (Meshi, Elizarova, Bender and Verdejo-Garcia) and published in the Journal of Behavioral Addictions, indicates that people who use social media sites heavily actually display some of the behavioral hallmarks of someone addicted to cocaine or heroin.

The study asked 71 participants to first rate their own Facebook usage with a measure known as the Bergen Facebook Addiction Scale. The study subjects then went on to complete something called the Iowa Gambling Task (IGT), a classic research tool that evaluates impaired decision making. The IGT presents participants with four virtual decks of cards associated with rewards or punishments and asks them to choose cards from the decks to maximize their virtual winnings. As the study explains, “Participants are also informed that some decks are better than others and that if they want to do well, they should avoid the bad decks and choose cards from the good decks.”
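To make the deck structure concrete, here is a minimal, hypothetical sketch in Python of an IGT-style payoff scheme. The specific values and the simplified random-penalty schedule are illustrative assumptions rather than the study’s actual materials, but they capture why the “bad” decks look attractive draw-to-draw yet lose money over many draws:

    import random

    # Hypothetical payoff structure loosely modeled on the classic IGT:
    # decks A and B ("bad") pay large immediate rewards but carry penalties
    # that make their expected value negative; decks C and D ("good") pay
    # smaller rewards with milder penalties and a positive expected value.
    DECKS = {
        "A": {"reward": 100, "penalty": -250, "penalty_chance": 0.5},   # bad
        "B": {"reward": 100, "penalty": -1250, "penalty_chance": 0.1},  # bad
        "C": {"reward": 50, "penalty": -50, "penalty_chance": 0.5},     # good
        "D": {"reward": 50, "penalty": -250, "penalty_chance": 0.1},    # good
    }

    def draw(deck_name):
        """Return the net payoff of a single card drawn from the named deck."""
        deck = DECKS[deck_name]
        payoff = deck["reward"]
        if random.random() < deck["penalty_chance"]:
            payoff += deck["penalty"]
        return payoff

    def simulate(choices):
        """Total winnings over a sequence of deck choices, e.g. ['A', 'C', ...]."""
        return sum(draw(name) for name in choices)

    # Over 100 draws, sticking to a "bad" deck tends toward a net loss,
    # while a "good" deck tends toward a net gain.
    print(simulate(["A"] * 100))
    print(simulate(["C"] * 100))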

What the researchers found was telling. Study participants who self-reported as excessive Facebook users actually performed worse than their peers on the IGT, frequenting the two “bad” decks that offer immediate gains but ultimately result in losses. That difference in behavior was statistically significant in the latter portion of the IGT, when a participant has had ample time to observe the decks’ patterns and knows which decks present the greatest risk.

The IGT has been used to study everything from patients with frontal lobe brain injuries to heroin addicts, but using it as a measure to examine social media addicts is novel. Alongside deeper, structural research, it’s clear that much of the existing methodological framework for studying substance addiction can be applied to social media users.

The study is narrow, but interesting, and offers a few paths for follow-up research. As the researchers recognize, an ideal study would observe participants’ actual social media usage and sort them into categories of high or low use based on behavior rather than a survey they fill out.

Future research could also delve more deeply into excessive users across different social networks. The study only looked at Facebook use, “because it is currently the most widely used [social network] around the world,” but one could expect to see similar results among Instagram’s billion-plus monthly users and potentially the substantially smaller portion of people on Twitter.

Ultimately, we know that social media is shifting human behavior and potentially its neurological underpinnings; we just don’t know the extent of it — yet. Due to the methodical nature of behavioral research and the often extremely protracted process of publishing it, we likely won’t know for years to come the results of studies conducted now. Still, as this study shows, there are researchers at work examining how social media is impacting our brains and our behavior — we just might not be able to see the big picture for some time.

People lost their damn minds when Instagram accidentally went horizontal

Earlier today, when Instagram suddenly transformed into a landscape-oriented Tinder-esque nightmare, the app’s dedicated users extremely lost their minds and immediately took to Twitter to be vocal about it.

As we reported, the company admitted that the abrupt shift from Instagram’s well-established vertical scrolling was a mistake. The mea culpa came quickly enough, but Instagram’s accidental update was already solidified as one of the last meme-able moments of 2018.

Why learn about the thing itself and why it happened when you could watch the meta-story play out in frantic, quippy tweets, all vying for relevance as we slide toward 2019’s horrific gaping maw? If you missed it the first time around, here you go.

A handful of memes even managed to incorporate another late-2018 meme, Sandra Bullock in Bird Box — a Netflix original that is not a birds-on-demand service, we are told.

Unupdate might not be a word, but it is absolutely a state of mind.

For better or worse, the Met got involved with what we can only assume is a Very Important Artifact for the cause.

But can we ever really go back? Can we unsee a fate so great, one still looming on some distant social influencer shore? Probably yeah, but that doesn’t mean we won’t all lose it if it happens again.

Facebook defends allowing third parties access to user messages

In a new blog post, Facebook VP of Product Partnerships Ime Archibong addressed the company’s latest user privacy controversy. The rebuttal is the second round of Facebook’s pushback against Tuesday’s report by the New York Times detailing some of Facebook’s special partnerships and extensive data sharing with major tech players.

In the new post, Archibong specifically argues that Facebook never allowed its partners to access private Facebook messages without a user’s permission. While Facebook did in fact share user messages with third parties, the company claims it only did so “if they chose to use Facebook Login.” Facebook Login allows users to log into third-party sites without creating a separate set of login credentials.

As Archibong writes:

“We worked closely with four partners to integrate messaging capabilities into their products so people could message their Facebook friends — but only if they chose to use Facebook Login. These experiences are common in our industry — think of being able to have Alexa read your email aloud or to read your email on Apple’s Mail app.”

He goes on to claim that these features “were experimental and have now been shut down for nearly three years.” Facebook is being purposefully quite specific here about what this particular timeline applies to, as the New York Times story reports that the company engaged in some forms of “special access” data sharing with third parties “as recently as this summer, despite public statements that it had stopped that type of sharing years earlier.”

As to the question of why Facebook would grant these partners such deep messaging access:

“That was the point of this feature — for the messaging partners mentioned above, we worked with them to build messaging integrations into their apps so people could send messages to their Facebook friends…

In order for you to write a message to a Facebook friend from within Spotify, for instance, we needed to give Spotify “write access.” For you to be able to read messages back, we needed Spotify to have “read access.” “Delete access” meant that if you deleted a message from within Spotify, it would also delete from Facebook. No third party was reading your private messages, or writing messages to your friends without your permission.”

Facebook’s post provides screenshots of these messaging integrations, which happened long enough ago that most of us don’t remember them at all. What Facebook declined to provide in this post: the permissions screens that users saw when granting this access. Those will be key in determining just how informed users were of what they were handing over when casually enabling these integrations.

screenshot via Facebook

Still, no matter how clearly Facebook might have worded the permissions screens, social media users are only just now broadly awakening to the fact that something is unsettling about all of this data sharing. The fact remains that even if users clicked to grant their consent for a feature like this, it’s a problem that they didn’t understand the privacy implications of doing so.

In this instance, it isn’t just Facebook’s problem. With privacy regulation looming on the horizon in the U.S. and the GDPR already making major waves for consumer privacy in the EU, it’s only a matter of time before all major tech companies that rent user data to advertisers face a reckoning that could change everything about the way they do business.

Washington D.C. Attorney General sues Facebook over Cambridge Analytica scandal

Facebook users might have already moved on to the company’s next notable outrage, but the company is still answering for its privacy missteps from earlier this year.

Washington D.C. Attorney General Karl Racine filed a lawsuit against Facebook on Wednesday, alleging that the company has not fulfilled its responsibility to protect user data. Racine’s office specifically cites the Cambridge Analytica scandal in the suit, noting that Facebook’s lax data sharing policies with third parties led to users having their personal data harvested for profit without their consent.

“Facebook failed to protect the privacy of its users and deceived them about who had access to their data and how it was used,” Attorney General Racine said of his decision to sue the company. “Facebook put users at risk of manipulation by allowing companies like Cambridge Analytica and other third-party applications to collect personal data without users’ permission. Today’s lawsuit is about making Facebook live up to its promise to protect its users’ privacy.”

According to its announcement, the D.C. AG’s office will seek an injunction to pressure Facebook to implement “protocols and safeguards” to oversee user data sharing as well as privacy tools that simplify protections for users. The full text of the suit is embedded below.