Author: Taylor Hatmaker

The queer dating app Her expands with curated community spaces

After carving out a niche as the first dating app by and for queer women, Her is broadening its mission. Today, the app formerly known as Dattch is launching a Communities feature — kind of like a set of mini queer subreddits — to let people connect around interests and identity as a group.

“We spent the past three years bringing people together in one-on-one conversations and introductions — communities is about taking it beyond the one-on-one,” Her founder Robyn Exton told TechCrunch.

“We started paying attention to the number of queer spaces that are closing,” Exton said, noting that women’s centers, lesbian bars, queer bookshops and other queer IRL spaces have been closing in record numbers in recent years. “We actually think they’re needed more than ever.”

Her’s new Communities feature aims to create a digital version of those collective queer spaces, letting users connect with interest and identity-based groups, with message boards custom built for Her’s unique user base. Users can post content in Communities or follow another person’s feed to stay up to date on what’s going on in the Her universe.

A curated starter pack of Communities launches today, though Exton plans to add more over time with the potential for user-generated Communities and pop-ups around specific events. The first set includes a space for queer women of color, one centered around mindfulness and wellbeing and another for news and entertainment, among others.

The categories are pretty broad for now, but it sounds like Her plans to adapt Communities to whatever its users end up wanting. That flexibility, coupled with Exton’s commitment to maintaining a space that’s “so ragingly queer,” sets Her apart from dating apps that generally fumble any dating experience that isn’t explicitly for straight people or gay men.

Her also plans to push toward internationalization in 2018 to grow its 3 million registered users. The app is already live in 55 countries and its largest non-English speaking markets are France, Germany, Spain, Italy, Brazil, Mexico, Indonesia and the Philippines. The app will host events tailored toward each of those locales in the coming year.

Just in time for Pride Month, Her is also launching a rebrand aimed at making the app more inclusive and reflective of what Exton calls “the future of fluidity that we believe in.”

“Our community and our audience has changed hugely, even in the last three years,” Exton said. “We needed to reflect that as a brand.”

According to Exton, there’s been a massive spike in Her users under the age of 29 describing their gender as non-binary or their sexuality as pansexual — a shift reflective of language and identity evolution in the queer community at large. The language of the rebrand describes a vision in which “sexuality and gender are found on a spectrum, where labels remain but are not set in stone.”

Exton hopes that Communities will create meaningful spaces in which Her users can gather and explore their own identities as they evolve and change. “So much queerness happens inside of Her,” Exton said. “People describe it as feeling like you’re coming home.”

Facebook’s policy on white supremacy plays right into a racist agenda

In an ongoing series over at Motherboard, we’re learning quite a bit about how Facebook polices hate speech and hate organizations on its platform. Historically, the company has been far less than transparent about its often inconsistent censorship practices, even as white supremacist content — and plenty of other forms of hate targeted at marginalized groups — runs rampant on the platform.

Now we know more about why. For one, according to a series of internal slides on white supremacy, Facebook walks a fine line that arguably doesn’t exist at all. According to these post-Charlottesville training documents, the company opted to officially differentiate between white nationalism and white supremacy, allowing the former and forbidding the latter.

White nationalism gets the green light

Facebook appears to take the distinction between white nationalism and white supremacy seriously, but many white nationalists don’t, opting only for the slightly more benign term to soften their image. This is a well-documented phenomenon, as anyone who has spent time in these online circles can attest. It’s also the first sentence in the Anti-Defamation League (ADL) entry on white nationalism:

White nationalism is a term that originated among white supremacists as a euphemism for white supremacy.

Eventually, some white supremacists tried to distinguish it further by using it to refer to a form of white supremacy that emphasizes defining a country or region by white racial identity and which seeks to promote the interests of whites exclusively, typically at the expense of people of other backgrounds.

As Motherboard reports, Facebook notes “overlaps with white nationalism/separatism” as a challenge in its relevant training notes section for white supremacy, adding that “Media reports also use the terms interchangeably (for example referring to David Duke as white supremacist even though he doesn’t explicitly identify himself as one).”

Facebook’s own articulation of white supremacy offers considerable concessions:

Although there doesn’t seem to be total agreement among academics on whether white supremacy always implies racial hatred, the fact that it is based on a racist premise is widely acknowledged. [original emphasis]

Most of Facebook’s slides on hate speech and hate groups read like embarrassingly simplistic CliffsNotes, lacking nuance and revealing the company’s apparently slapdash approach to the issue of racial hate. Tellingly, some portions of Facebook’s training text copy Wikipedia’s own language verbatim.

Here are the first few sentences of the Wikipedia entry on white supremacy:

White supremacy or white supremacism is a racist ideology based upon the belief that white people are superior in many ways to people of other races and that therefore white people should be dominant over other races.

White supremacy has roots in scientific racism and it often relies on pseudoscientific arguments. Like most similar movements such as neo-Nazism, white supremacists typically oppose members of other races as well as Jews.

Facebook’s training note on white supremacy, which matches Wikipedia word for word until the final clause:

White supremacy or white supremacism is a racist ideology based upon the belief that white people are superior in many ways to people of other races and that therefore white people should be dominant over other races. White supremacy has roots in scientific racism and it often relies on pseudoscientific arguments. Like most similar movements such as neo-Nazism, white supremacists typically oppose people of color, Jews and non-Protestants.

Facebook slides recreated by Motherboard

Bafflingly, Facebook also notes that “White nationalism and calling for an exclusively white state is not a violation for our policy unless it explicitly excludes other PCs [protected characteristics],” which, by definition, a white state does.

According to slides recreated by Motherboard, Facebook asserts that “we don’t allow praise, support and representation of white supremacy as an ideology” but stipulates that it does “allow praise, support and representation” for both white nationalism and white separatism. [Again, emphasis theirs.]

Facebook further clarifies:

By the same token, we allow to call for the creation of white ethno-states (e.g. “The US should be a white-only nation”).

White supremacy versus white nationalism

By failing to recognize the political motivations behind white nationalism as an identity, Facebook legitimates white nationalism as something meaningfully distinct from white supremacy. While not all white nationalists call for the dream of a white ethnostate to be achieved through racial domination — and arguably the two could be studied distinctly from a purely academic perspective — they have far more in common than they have differences. Even with such thin sourcing, Facebook has devoted a surprising amount of language to differentiating the two.

In grappling with this question after Charlottesville, the Associated Press offered this clarification for its own coverage:

For many people the terms can be used almost interchangeably. Both terms describe groups that favor whites and support discrimination by race.

The AP also mentions the “subtle difference” that white supremacists believe whites to be superior.

For white nationalists, that attitude at times appears more implicit than explicit but that doesn’t mean it’s not there. From my own reading and considerable hours spent immersed online in white nationalist groups and forums, there is massive observable ideological overlap between the two groups. The instances in which white supremacists and white nationalists truly espouse wholly distinct ideologies are rare.

Further, it’s impossible to ignore that violence against non-whites is a central thread running throughout white nationalism, whether stated or implied. Imagining a white ethnostate that does not directly come about at the cost of the safety, wellbeing and financial security of racial minorities is pure fantasy — a fantasy Facebook is apparently content to entertain in pretending that the “white state” would not “explicitly exclude” anyone based on the protected characteristic of race.

The Southern Poverty Law Center (SPLC) defines white nationalism in similarly broad strokes, tying it directly to white supremacy and stating that “white nationalist groups espouse white supremacist or white separatist ideologies, often focusing on the alleged inferiority of nonwhites.”

The SPLC, an organization devoted to studying hate, explains the expedient fallacy of the white ethnostate as a nonviolent goal:

These racist aspirations are most commonly articulated as the desire to form a white ethnostate — a calculated idiom favored by white nationalists in order to obscure the inherent violence of such a radical project. Appeals for the white ethnostate are often disingenuously couched in proclamations of love for members of their own race, rather than hatred for others.

Apparently, Facebook ignored most dissenting definitions linking white nationalist goals directly to white supremacy. Naively or not, the company bought into white supremacy’s slightly more palatable public-facing image in shaping its policy platforms. In sourcing its policies, Facebook was apparently content to pick and choose which points supported its decision to allow white nationalism on its platform while supposedly casting out white supremacy.

“White nationalist groups espouse white separatism and white supremacy,” the Wikipedia page that Facebook drew from states. “Critics argue that the term ‘white nationalism’ and ideas such as white pride exist solely to provide a sanitized public face for white supremacy, and that most white nationalist groups promote racial violence.”

Sadly, for anyone who has watched many virulent strains of racism flourish and even organize on Facebook, the company’s shoddily crafted internal guidance on white supremacy comes as little surprise. Nor does the fact that the company failed to dedicate even a sliver of its considerable resources to understanding the nuance of white supremacist movements, aims and language.

We reached out to Facebook to see if these alarmingly reductive policies on racial hate have evolved in recent months (these materials are less than a year old), but the company only pointed us to the broad public-facing “Community Standards.” Any further detail on the actual implementation of policies around hate remains opaque.

Though it may have learned some harsh lessons in 2018, for Facebook, opacity is always the best policy.

Twitter will give political candidates a special badge during US midterm elections

Ahead of 2018 U.S. midterm elections, Twitter is taking a visible step to combat the spread of misinformation on its famously chaotic platform. In a blog post this week, the company explained how it would be adding “election labels” to the profiles of candidates running for political office.

“Twitter has become the first place voters go to seek accurate information, resources, and breaking news from journalists, political candidates, and elected officials,” the company wrote in its announcement. “We understand the significance of this responsibility and our teams are building new ways for people who use Twitter to identify original sources and authentic information.”

These labels feature a small government building icon and text identifying the position a candidate is running for and the state or district where the race is taking place. The label information included in the profile will also appear elsewhere on Twitter, even when tweets are embedded off-site.

The labels will start popping up after May 30 and will apply to candidates in state governor races as well as those campaigning for a seat in the Senate or the House of Representatives.

Twitter will partner with nonpartisan political nonprofit Ballotpedia to create the candidate labels. In a statement announcing its partnership, Ballotpedia explains how that process will work:

Ballotpedia covers all candidates in every upcoming election occurring within the 100 most-populated cities in the U.S., plus all federal and statewide elections, including ballot measures. After each state primary, Ballotpedia will provide Twitter with information on gubernatorial and Congressional candidates who will appear on the November ballot. After receiving consent from each candidate, Twitter will apply the labels to each candidate profile.

The decision to create a dedicated process to verify political profiles is a step in the right direction for Twitter. With major social platforms still in upheaval over revelations around foreign misinformation campaigns during the 2016 U.S. presidential election, Twitter and Facebook need to take decisive action now if they intend to inoculate their users against a repeat threat in 2018.

Instagram adds emoji slider stickers to spice up polls

If you’ve been meaning to ask your friends just how eggplant emoji your new summer cutoffs are, you’re in luck. Today, Instagram is introducing a feature it’s calling the “emoji slider,” a new audience feedback sticker that polls your viewers on a rating scale using any emoji. The updated Instagram app is available now in the App Store and in Google Play.

For example, if you decide to stay in on a Friday night and take risqué selfies, you could ask your friends to rate just how angel emoji or how inexplicably-purple-devil-emoji your behavior is. Or say you see an animal and can’t quite figure out if it’s a snake or a salamander with those little tiny legs: you could poll your Instagram story-goers to ask how snake emoji the thing was on a scale of no snake emoji to 100 percent snake emoji. The impractical applications are endless.

Instagram says the emoji slider grew out of the natural popularity of the poll sticker, which is admittedly a pretty fun way to pressure your friends and admirers into spontaneous audience participation. With the emoji slider, you can ask how [emoji] something is instead of just asking your followers to operate under a binary set of options, because binaries are over, man.

If you’re into it, you can find the emoji slider in the sticker tray with most of the other excellent stoner nonsense. Just select it, write out your question, slap that baby on your story and wait for the sweet, sweet feedback to roll in.

Tech watchdogs call on Facebook and Google for transparency around censored content

If a company like Facebook can’t even understand why its moderation tools work the way they do, then its users certainly don’t have a fighting shot. Anyway, that’s the idea behind what a coalition of digital rights groups is calling The Santa Clara Principles (PDF), “a set of minimum standards” aimed at Facebook, Google, Twitter and other tech companies that moderate the content published on their platforms.

The suggested guidelines grew out of a set of events addressing “Content Moderation and Removal at Scale,” the second of which is taking place today in Washington, D.C. The group participating in these conversations shared the goal of coming up with a suggested ruleset for how major tech companies should disclose which content is being censored, why it is being censored and how much speech is censored overall.

“Users deserve more transparency and greater accountability from platforms that play an outsized role — in Myanmar, Australia, Europe, and China, as well as in marginalized communities in the U.S. and elsewhere — in deciding what can be said on the internet,” Electronic Frontier Foundation (EFF) Director for International Freedom of Expression Jillian C. York said.

As the Center for Democracy and Technology explains, The Santa Clara Principles (PDF) ask tech companies to disclose three categories of information:

  • Numbers (of posts removed, accounts suspended);
  • Notice (to users about content removals and account suspensions); and
  • Appeals (for users impacted by content removals or account suspensions).

“The Santa Clara Principles are the product of years of effort by privacy advocates to push tech companies to provide users with more disclosure and a better understanding of how content policing works,” EFF Senior Staff Attorney Nate Cardozo added.

“Facebook and Google have taken some steps recently to improve transparency, and we applaud that. But it’s not enough. We hope to see the companies embrace The Santa Clara Principles and move the bar on transparency and accountability even higher.”

Participants in drafting The Santa Clara Principles include the ACLU Foundation of Northern California, Center for Democracy and Technology, Electronic Frontier Foundation, New America’s Open Technology Institute and a handful of scholars from departments studying ethics and communications.

Facebook’s Free Basics program ended quietly in Myanmar last year

As recently as last week, Facebook was touting the growth of its Internet.org app Free Basics, but the program isn’t working out everywhere. As the Outline originally reported and TechCrunch confirmed, the Free Basics program has ended in Myanmar, perhaps Facebook’s most controversial non-Western market at the moment.

Its mission statement pledging to “bring more people online and help improve their lives” is innocuous enough, but Facebook’s Internet.org strategy is extremely aggressive, optimized for explosive user growth in markets that the company has yet to penetrate. Free Basics, an initiative under Internet.org, is an app that offers users in developing markets a “free” Facebook-curated version of the broader internet.

The app provides users with internet access — stuff like the weather and local news — but keeps them within Facebook’s walled garden. The result in some countries with previously low connectivity rates was that the social network became synonymous with the internet itself — and as we’ve seen, that can lead to a whole host of very real problems.

While the Outline reports that Free Basics has ended in “half a dozen nations and territories” including Bolivia, Papua New Guinea, Trinidad and Tobago, Republic of Congo, Anguilla, Saint Lucia and El Salvador, Facebook notes that only two international mobile providers have ended the program, leaving room for interpretation about how other countries ended their involvement and why.

As a Facebook spokeswoman told TechCrunch, Facebook is still moving forward with the program:

“We’re encouraged by the adoption of Free Basics. It is now available in more than 50 countries with 81 mobile operator partners around the world. Today, more than 1,500 services are available on Free Basics worldwide, provided to people in partnership with mobile operators.

Free Basics remains live with the vast majority of participating operators who have opted to continue offering the service. We remain committed to bringing more people around the world online by breaking down barriers to connectivity.”

Facebook confirmed to TechCrunch that Free Basics did indeed end in Myanmar in September 2017, a little over a year since its June 2016 launch in the country. The company clarified that Myanmar’s state-owned telecom Myanma Posts and Telecommunications (MPT) cooperated with the Myanmar government to shut down access to all free services, including Free Basics in September of last year. The move was part of a broader regulatory effort by the Myanmar government.

In a press release, MPT described how the regulation shaped policy for the country’s three major telecoms:

“… As responsible operators, [MPT, Ooredoo and Telenor] abide by sound price competition practices – hallmarks of a healthy marketplace and to adhere to industry best practices and ethical business guidelines.

This [includes] compliance with the authority imposed floor pricing as set out in the Post and Telecommunications Department’s Pricing and Tariff Regulatory Framework of 28 June 2017, including refraining from behavior such as free distribution or sales of SIM cards and supplying services and handsets at below the cost including delivery.”

In Myanmar, Facebook’s Free Basics offering ran afoul of the same price floor regulations that restricted the distribution of free SIM cards.

Elsewhere, Facebook’s Free Basics program is winding down for other reasons. Last fall, the telecom Digicel ended access to Free Basics in El Salvador and some of its Caribbean markets. Digicel confirmed to TechCrunch that it stopped offering Free Basics due to commercial reasons on its end and that the decision was not a result of any action by Facebook or Internet.org.

As the Free Basics program is part of a partnership between Facebook and local mobile providers, the latter can terminate access to the app at will. Still, it’s not clear if that was the case in all the countries in which the app is no longer available.

In 2016, India regulated Facebook’s free internet deal out of existence, effectively blocking Facebook’s access to its most sought-after new market in the process. Since then, vocal critics have called Facebook’s Internet.org efforts everything from digital colonialism to a spark in the tinderbox for countries dealing with targeted violence against religious minorities.

Still, according to Facebook, even as some markets dry up, the program is quietly expanding. In late 2017 Facebook added Sudan and Cote d’Ivoire to its Free Basics roster. This year, Facebook launched the initiative in Cameroon and added additional mobile partners in Colombia and Peru.

Myanmar’s access to Free Basics is now restricted, but Facebook indicated that its efforts to connect the country — and its 54 million newly minted or yet-to-be-converted Facebook users — are not over.

Pro-Trump social media duo accuses Facebook of anti-conservative censorship

Following up on a recurring thread from Mark Zuckerberg’s congressional appearance earlier this month, the House held a hearing today on perceived bias against conservatives on Facebook and other social platforms. The hearing, ostensibly about “how social media companies filter content on their platforms,” focused on the anecdotal accounts of social media stars Diamond and Silk (Lynnette Hardaway and Rochelle Richardson), a pro-Trump viral web duo that rose to prominence during Trump’s presidential campaign.

“Facebook used one mechanism at a time to diminish reach by restricting our page so that our 1.2 million followers would not see our content, thus silencing our conservative voices,” Diamond and Silk said in their testimony.

“It’s not fair for these Giant Techs [sic] like Facebook and YouTube get to pull the rug from underneath our platform and our feet and put their foot on our neck to silence our voices; it’s not fair for them to put a strong hold on our finances.”

During the course of their testimony, Diamond and Silk repeated their unfounded assertions that Facebook targeted their content as a deliberate act of political censorship.

What followed was mostly a partisan back-and-forth. Republicans who supported the hearing’s mission asked the duo to elaborate on their claims and Democrats pointed out their lack of substantiating evidence and their willingness to denounce documented facts as “fake news.”

Controversially, they also denied that they had accepted payment from the Trump campaign, in spite of public evidence to the contrary. On November 22, 2016, the pair received $1,274.94 for “field consulting,” as documented by the FEC.

Earlier in April, Zuckerberg faced a question about the pair’s Facebook page from Republican Rep. Joe Barton:

Why is Facebook censoring conservative bloggers such as Diamond and Silk? Facebook called them “unsafe” to the community. That is ludicrous. They hold conservative views. That isn’t unsafe.

At the time, Zuckerberg replied that the perceived censorship was an “enforcement error” and that Facebook had been in contact with Diamond and Silk to reverse its mistake. Senator Ted Cruz also asked Zuckerberg about what he deemed a “pervasive pattern of bias and political censorship” against conservative voices on the platform.

Today’s hearing, which California Rep. Ted Lieu dismissed as “stupid and ridiculous,” was little more than an exercise in idle hyper-partisanship, but it’s notable for a few reasons. For one, Diamond and Silk are two high-profile creators who managed to take their monetization grievances with tech companies, however misguided, all the way to Capitol Hill. Beyond that, and the day’s strange role-reversal of regulatory stances, the hearing was the natural escalation of censorship claims made by some Republicans during the Zuckerberg hearings. Remarkably, those accusations only comprised a sliver of the two days’ worth of testimony; in a rare display of bipartisanship, Democrats and Republicans mostly cooperated in grilling the Facebook CEO on his company’s myriad failures.

Congressional hearing or not, the truth of Facebook’s platform screw-ups is far more universal than political claims on the right or left might suggest. As Zuckerberg’s testimony made clear, Facebook’s moderation tools don’t exactly work as intended and the company doesn’t even really know the half of it. Facebook users have been manipulating the platform’s content reporting tools for years, and unfortunately that phenomenon coupled with Facebook’s algorithmic and moderation blind spots punishes voices on both sides of the U.S. political spectrum — and everyone in between.