Category Archives: Google+

Here’s Twitter’s position on Alex Jones (and hate-peddling anti-truthers) — hint: It’s a fudge

The number of tech platforms taking action against Alex Jones, the far-right InfoWars conspiracy theorist and hate speech preacher, has been rising in recent weeks, with full or partial bans from Google, Apple and Facebook, among others.

However, as we noted earlier, Twitter is not among them, even though it has banned known hate peddlers before.

Jones continues to be allowed a presence on Twitter’s platform — and is using his verified Twitter account to scream about being censored all over mainstream media, hyperventilating at one point in the past 16 hours that ‘censoring Alex Jones is censoring everyone’ — because, and I quote, “we’re all Alex Jones now”.

(Fact check: No, we’re not… And, Alex, if you’re reading this, we suggest you take heart from the ideas in this Onion article and find a spot in your local park.)

We asked Twitter why it has not banned Jones outright, given that its own rules proscribe hate speech and hateful conduct…

Abuse: You may not engage in the targeted harassment of someone, or incite other people to do so. We consider abusive behavior an attempt to harass, intimidate, or silence someone else’s voice.

Hateful conduct: You may not promote violence against, threaten, or harass other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease. Read more about our hateful conduct policy.

Add to that, CEO Jack Dorsey has made it his high-profile mission of late to (try to) improve conversational health on the platform. So it seems fair to wonder how Twitter continuing to enable a peddler of toxic lies and hate is going to achieve that.

While Twitter would not provide a statement about Jones’ continued presence on its platform, a spokesman told us that InfoWars and Jones’ personal account are not in violation of Twitter’s (or Periscope’s) ToS. At least not yet. Though he pointed out it could of course take action in the future — i.e. if it’s made aware of particular tweets that violate its rules.

Twitter’s position therefore appears to be that the content posted by InfoWars to other social media platforms is different to the content Jones posts to Twitter itself — ergo, its (hedgy & fudgy) argument essentially boils down to saying Jones is walking a fine enough line on Twitter itself to avoid a ban, because he hasn’t literally tweeted content that violates the letter of Twitter’s ToS.

(Though he has tweeted stuff like “the censorship of Infowars just vindicates everything we’ve been saying” — and given the hate-filled, violently untruthful things he has been saying all over the Internet, he’s essentially re-packaged all those lies into that single tweet, so… )

To spell out Twitter’s fudge: The fact of Jones being a known conspiracy theorist and widely visible hate preacher is not being factored into its ToS enforcement decisions.

The company says it’s judging the man by his output on Twitter — which means it’s failing to take into account the wider context around Jones’ tweets, i.e. all the lies and hate he peddles elsewhere (and indeed all the insinuating nods and dog whistles he makes to his followers on Twitter) — and by doing so it is in fact enabling the continued spread of hate via the wink-wink-nod-nod back door.

Twitter’s spokesman did not want to engage in a lengthy back and forth conversation, healthy or otherwise, about Jones/InfoWars so it was not possible to get a response from the company on that point.

However it does argue, i.e. in defense of its fudged position, that keeping purveyors of false news on its platform allows for an open, real-time debate which in turn allows for their lies to be challenged and debunked by people who are in their right minds — so, basically, this is the ‘fight bad speech with more speech’ argument that’s so beloved of people already enjoying powerful privilege.

The problem with that argument (actually, there are many) is it does not factor in the human cost; the people suffering directly because toxic lies impact their lives. Nor the cost to truth itself; to belief in the veracity and authenticity of credible sources of information, which are under sustained and vicious attack by anti-truthers like Jones; the corrosive impact on professional journalism from lies being packaged and peddled under the lying banner of self-styled ‘truth journalism’ that Jones misappropriates. Nor the cost to society from hate speech whose very purpose is to rip up the social fabric and take down civic values — and, in the case of Jones’ particular bilious flavor, to further bang the drum of abuse via the medium of toxic disinformation — to further amplify and spread his pollution, via the power of untruth — to whip up masses of non-critically thinking, conspiracy-prone followers. I could go on. (I have here.)

The amplification effect of social media platforms — combined with cynical tricks used by hate peddlers to game algorithms, such as bots retweeting and liking content to make it seem more popular than it is — makes this stuff a major, major problem.

‘Bad speech’ on such powerful platforms can become not just something to roll your eyes at and laughingly dismiss, but a toxic force that bullies, beats down and drowns out other types of speech — perhaps most especially truthful speech, because falsehood flies (and online it’s got rocket fuel) — and so can have a very deleterious impact on conversational health.

Really, it needs to be handled in a very different way. Which means Twitter’s position on Jones, and hateful anti-truthers in general, looks both flawed and weak.

It’s also now looking increasingly isolated, as other tech platforms are taking action.

Twitter’s spokesman also implied the company is working on tuning its systems to actively surface high quality counter-narratives and rebuttals to toxic BS — such as in replies to known purveyors of fake news like InfoWars.

But while such work is to be applauded, working on a fix also means you don’t actually have a fix yet. Meanwhile the lies you’re not stopping are spreading on your platform — at horrible and high cost to people and society.

It’s hard to see this as a defensible position.

And while Twitter keeps sitting on its fence, Jones’ hate speech and toxic lies, broadcast to millions as a weapon of violent disinformation, have got his video show booted from YouTube (which first issued a strike yesterday and then terminated his page for “violating YouTube’s Community Guidelines”).

The platform had removed ads from his channel back in March — but had not then (as Jones falsely claimed at the time) banned it. It took YouTube almost another half a year to arrive at that decision.

Also yesterday, almost all of Jones’ podcasts were pulled by Apple, with the company saying it does not tolerate hate speech. “We believe in representing a wide range of views, so long as people are respectful to those with differing opinions,” it added.

Earlier this month, music streaming service Spotify also removed some of Jones’ podcasts for violating its hate-speech policy.

Even Facebook removed a bunch of Jones’ videos late last month, for violating its community standards — albeit after some dithering, and what looked like a lot of internal confusion.

The social media behemoth also imposed a 30-day ban on Jones’ personal account for posting the videos, and served him a warning notice for the InfoWars Facebook Page he controls.

Facebook later clarified it had banned Jones’ personal profile because he had previously received a warning — whereas the InfoWars Page had not, hence the latter only getting a strike.

There have even been bans from some unlikely quarters: YouPorn just announced action against Jones for a ToS violation — nixing his ability to try to pass off anti-truth hate preaching as a porn alternative on its platform.

Pinterest, too, removed Jones’ ‘hate, lies & supplements’ page after Mashable made enquiries.

So, uh, responses other than Twitter’s (of doing nothing) are clearly possible.

On Twitter, Jones also benefits from being able to distinguish his account from any would-be imitators or satirists, because he has a verified account — denoted on the platform by a blue check mark badge.

We asked Twitter why it hasn’t removed Jones’ blue badge — given that the company has, until relatively recently, been rethinking its verification program. And last year it actively removed blue badges from a number of white supremacists because it was worried it looked like it had been endorsing them. Yet Jones — who spins the gigantic lie of ‘white genocide’ — continues to keep his.

Twitter’s spokesman pointed us to this tweet last month from product lead, Kayvon Beykpour, who wrote that updating the program “isn’t a top priority for us right now”.

Beykpour went on to explain that while Twitter had “paused” public verification last November (because “we wanted to address the issue that verifying the authenticity of an account was being conflated with endorsement”), it subsequently paused its own ‘pause for thought’ on having verified some very toxic individuals, with Beykpour writing in an email to staff in July:

Though the current state of Verification is definitely not ideal (opaque criteria and process, inconsistency in our procedures, external frustration from customers), I don’t believe we have the bandwidth to address this holistically (policy, process, product, and a plan around how & when these fit together) without coming at the cost of our other priorities and distracting the team.

At the same time Beykpour admits in the thread that Twitter has been ‘unpausing’ its pause on verification in some circumstances (“we still verify accounts ad hoc when we think it serves the public conversation & is in line with our policy”); but not, evidently, going so far as to unpause its pause on removing badges from hateful people who gain unjustified authenticity and authority from the perceived endorsement of Twitter verification — such as in ‘ad hoc’ situations where doing so might be terribly, terribly appropriate. Like, uh, this one.

Beykpour wrote that verification would be addressed by Twitter post-election. So it’s presumably sticking to its lack of a policy at all, for now. (“I know this isn’t the most satisfying news, but I wanted to be transparent about our priorities,” he concluded.)

Twitter’s spokesman told us it doesn’t have anything further to share on verification at this point.

Jones’ toxic activity on social media has included spreading the horrendous lie that children who died in the Sandy Hook U.S. school shooting were ‘crisis actors’.

So, for now, a man who lies about the violent death of little children continues to be privileged with a badge on his not-at-all-banned Twitter account.

Two of the parents of a child who died at the school wrote an open letter to Facebook’s founder, Mark Zuckerberg, last month, describing how toxic lies about the school shooting spread via social media had metastasized into violent hate and threats directed at them.

“Our families are in danger as a direct result of the hundreds of thousands of people who see and believe the lies and hate speech, which you have decided should be protected,” wrote Lenny Pozner and Veronique De La Rosa, the parents of Noah, who died on 14 December, 2012, at the age of six.

“What makes the entire situation all the more horrific is that we have had to wage an almost inconceivable battle with Facebook to provide us with the most basic of protections to remove the most offensive and incendiary content.”

YouTube is testing its own ‘Explore’ tab on iPhone

YouTube CEO Susan Wojcicki on Friday promised the company would do a better job with communicating to creators about its experiments and tests. Today, YouTube is making good on that commitment with an update about a new feature it’s testing out: an Explore tab, aimed at offering viewers a more diverse set of video recommendations.

The news was announced via the Creator Insider channel – the same channel Wojcicki highlighted in her recent update as the “unofficial” resource operated by YouTube employees. The channel offers weekly updates, responses to creator feedback, and behind-the-scenes info on product launches.

According to the announcement, the new Explore feature is currently in testing with just 1 percent of iPhone YouTube app viewers, so there’s a good chance you won’t see the option in your own app.

However, if you do happen to be in the test group, then you’ll notice the bottom navigation bar of the app looks different. Instead of the tabs Home, Trending, Subscriptions, Inbox and Library you have today, you’ll instead see Home, Explore, Subscriptions, Activity and Library.

The idea behind Explore is to offer YouTube viewers a wider variety of what-to-watch suggestions than they receive today. Currently, personalized video recommendations are very much influenced by past viewing activity and other behavior, which can then create a sort of homogeneous selection of recommended content.

“Explore is designed to help you be exposed to different kinds of topics, videos or channels that you might not otherwise encounter, but they’re still personalized,” said Tom Leung, Director of Product Management, in a YouTube video.

That is, the videos are still based on viewing activity.

For example, he explains, a viewer who was watching videos about telescopes might be recommended videos about high-end cameras.

“It’s just going to give you a little more variety,” says Leung.
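To make that concrete, here’s a toy sketch of what topic-adjacency diversification can look like. It’s a simplified illustration on our part, not YouTube’s actual system; the topic graph and function names are hypothetical.

```python
# A simplified, hypothetical sketch of topic-level diversification --
# not YouTube's actual recommendation system.
RELATED_TOPICS = {
    "telescopes": ["high-end cameras", "astrophotography"],
    "high-end cameras": ["lenses", "telescopes"],
}

def explore_topics(watched_topics):
    """Suggest adjacent topics the viewer hasn't already been watching."""
    suggestions = []
    for topic in watched_topics:
        for neighbor in RELATED_TOPICS.get(topic, []):
            if neighbor not in watched_topics and neighbor not in suggestions:
                suggestions.append(neighbor)
    return suggestions

# Leung's example: a telescope watcher gets nudged toward camera content.
print(explore_topics(["telescopes"]))  # ['high-end cameras', 'astrophotography']
```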

The tab will also feature a “Trending” section at the top of the screen, which directs users to the same sort of content featured in the Trending tab of the current version of the YouTube app.

The hope, however, with the new Explore tab is to offer creators the ability to reach more viewers, even if their content doesn’t “trend.”

Whether or not that theory proves true remains to be seen. YouTube will review the data from the experiment before deciding whether to roll out the Explore tab to more users.

Early feedback from YouTube creators in the comments section of the video seems cautiously optimistic, with many expressing hopes that the new tab would provide exposure to smaller creators rather than just the well-known names.

Calling the tab “Explore” makes sense in light of the increased threat from Instagram, whose own Explore section features personalized video suggestions and which has launched a YouTube rival with IGTV. YouTube has responded by offering its stars big, five- to six-figure checks to post their best stuff on YouTube, according to Business Insider. (YouTube downplayed the report, saying it has “always invested in creators’ success.”)

But an experiment involving YouTube’s own Explore section makes it clear that the company is interested in taking on Instagram head-on when it comes to offering a home for discovering new video content through algorithmic recommendations.

If successful, YouTube’s Explore tab would connect viewers to more creator channels they’ll like and subscribe to, as well as increase their time spent in app. That, in turn, could potentially decrease viewers’ time in apps like IGTV, Facebook, Instagram and elsewhere.


Facebook, Google and more unite to let you transfer data between apps

The Data Transfer Project is a new team-up between tech giants to let you move your content, contacts, and more across apps. Founded by Facebook, Google, Twitter, and Microsoft, the DTP today revealed its plans for an open source data portability platform any online service can join. While many companies already let you download your information, that’s not very helpful if you can’t easily upload and use it elsewhere — whether you want to evacuate a social network you hate, back up your data somewhere different, or bring your digital identity along when you try a new app. The DTP’s tool isn’t ready for use yet, but the group today laid out a white paper for how it will work.
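For a sense of how such a platform can work in practice, here is a minimal sketch of the adapter idea described in the white paper: each service translates its data into a shared model that any other participating service can import. The class and method names below are our own illustration, not the DTP’s actual interfaces.

```python
# A minimal, illustrative sketch of the DTP's adapter idea: exporters and
# importers meeting at a shared data model. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Photo:
    """A common data model both services agree on."""
    title: str
    url: str

class ServiceAExporter:
    def export_photos(self):
        # A real adapter would call Service A's API with the user's
        # authorization; we return one fake record for illustration.
        return [Photo(title="Beach day", url="https://a.example/photos/1")]

class ServiceBImporter:
    def import_photos(self, photos):
        for photo in photos:
            # A real adapter would write via Service B's upload API.
            print(f"Uploading '{photo.title}' to Service B")

# A transfer chains one service's exporter into another's importer.
ServiceBImporter().import_photos(ServiceAExporter().export_photos())
```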

Creating an industry standard for data portability could force companies to compete on utility instead of being protected by data lock-in that traps users because it’s tough to switch services. The DTP could potentially offer a solution to a major problem with social networks I detailed in April: you can’t find your friends from one app on another. We’ve asked Facebook for details on if and how you’ll be able to transfer your social connections and friends’ contact info, which it’s historically hoarded.

From porting playlists in music streaming services to health data from fitness trackers to our reams of photos and videos, the DTP could be a boon for startups. Incumbent tech giants maintain a huge advantage in popularizing new functionality because they instantly interoperate with a user’s existing data rather than making them start from scratch. Even if a social networking startup builds a better location sharing feature, personalized avatar, or payment system, it might be a lot easier to use Facebook’s clone of it because that’s where your profile, friends, and photos live.

If the DTP gains industry-wide momentum and its founding partners cooperate in good faith rather than at some bare minimum level of involvement, it could lower the barrier for people to experiment with new apps. Meanwhile, the tech giants could argue that the government shouldn’t step in to regulate them or break them up because DTP means users are free to choose whichever app best competes for their data and attention.

Dems and GOP unite, slamming Facebook for allowing violent Pages

In a rare moment of agreement, members of the House Judiciary Committee from both major political parties agreed that Facebook needed to take down Pages that bullied shooting survivors or called for more violence. The hearing on social media filtering practices saw policy staffers from Facebook, Google and Twitter answering questions, though Facebook absorbed the brunt of the ire. It also saw Republican Representative Steve King ask, “What about converting the large behemoth organizations that we’re talking about here into public utilities?”

The meatiest part of the hearing centered on whether social media platforms should delete accounts of conspiracy theorists and those inciting violence, rather than just removing the offending posts.

The issue has been a huge pain point for Facebook this week after giving vague answers for why it hasn’t deleted known faker Alex Jones’ Infowars Page, and tweeting that “We see Pages on both the left and the right pumping out what they consider opinion or analysis – but others call fake news.” Facebook’s Head of Global Policy Management Monica Bickert today reiterated that “sharing information that is false does not violate our policies.”

As I detailed in this opinion piece, I think the right solution is to quarantine the Pages of Infowars and similar fake news, preventing their posts or shares of links to their web domain from getting any visibility in the News Feed. But deleting the Page without instances of it directly inciting violence would make Jones a martyr and strengthen his counterfactual movement. Deletion should be reserved for those that blatantly encourage acts of violence.

Rep. Ted Deutch (D-Florida) asked how Infowars’ claims in YouTube videos that the Parkland shooting’s survivors were crisis actors squared with the company’s policy. Google’s Global Head of Public Policy and Government Relations for YouTube Juniper Downs explained that “We have a specific policy that says that if you say a well-documented violent attack didn’t happen and you use the name or image of the survivors or victims of that attack, that is a malicious attack and it violates our policy.” She noted that YouTube has a “three strikes” policy, that it is “demoting low-quality content and promoting more authoritative content,” and that it’s now showing boxes atop result pages for problematic searches, like “is the earth flat?”, with facts to dispel conspiracies.

Facebook’s answer was much less clear. Bickert told Deutch that “We do use a strikes model. What that means is that if a Page, or profile, or group is posting content and some of that violates our policies, we always remove the violating posts at a certain point” (emphasis mine). That’s where Facebook became suddenly less transparent.

“It depends on the nature of the content that is violating our policies. At a certain point we would also remove the Page, or the profile, or the group at issue,” Bickert continued. Deutch then asked how many strikes conspiracy theorists get. Bickert noted that “crisis actors” claims violate its policy and it removes that content. “And we would continue to remove any violations from the Infowars Page.” But regarding Page-level removals, she got wishy-washy, saying, “If they posted sufficient content that it would violate our threshold, then the page would come down. The threshold varies depending on the different types of violations.”

“The threshold varies”

Rep. Matt Gaetz (R-Florida) gave the conservatives’ side of the same argument, citing two posts by the Facebook Page “Milkshakes Against The Republican Party” that called for violence, including one saying “Remember the shooting at the Republican baseball game? One of those should happen every week.”

While these posts have been removed, Gaetz asked why the Page hadn’t. Bickert noted that “There’s no place for any calls for violence on Facebook.” Regarding the threshold, she did reveal that “When someone posts an image of child sexual abuse imagery their account will come down right away. There are different thresholds for different violations.” But she repeatedly refused to make a judgement call about whether the Page should be removed until she could review it with her team.


Showing surprising alignment in such a fractured political era, Democratic Representative Jamie Raskin of Maryland said, “I’m agreeing with the chairman about this and I think we arrived at the exact same place when we were talking about at what threshold does Infowars have their Page taken down after they repeatedly denied the historical reality of massacres of children in public school.”

Facebook can’t rely on a shadowy “the threshold varies” explanation any more. It must outline exactly what types of violations incur not only post removal but strikes against their authors. Perhaps that’s something like “one strike for posts of child sexual abuse, three posts for inciting violence, five posts for bullying victims or denying documented tragedies occurred, and unlimited posts of less urgently dangerous false information.”
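As a sketch of how simple such a rubric would be to state, and to enforce mechanically, consider the following; the violation types and thresholds are illustrative guesses on our part, not Facebook’s actual policy (their absence being exactly the problem).

```python
# A hypothetical strikes rubric: each violation type gets an explicit
# removal threshold. These numbers are illustrative, not Facebook policy.
REMOVAL_THRESHOLDS = {
    "child_sexual_abuse": 1,        # zero tolerance: one strike removes the Page
    "inciting_violence": 3,
    "bullying_victims": 5,
    "denying_documented_tragedy": 5,
}

def should_remove_page(strike_counts):
    """Return True once any violation type reaches its stated threshold."""
    return any(
        strike_counts.get(violation, 0) >= threshold
        for violation, threshold in REMOVAL_THRESHOLDS.items()
    )

print(should_remove_page({"inciting_violence": 2}))   # False: still under threshold
print(should_remove_page({"child_sexual_abuse": 1}))  # True: immediate removal
```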

Whatever the specifics, Facebook needs to provide specifics. Until then, both liberals and conservatives will rightly claim that enforcement is haphazard and opaque.

For more from today’s hearing:

House Rep suggests converting Google, Facebook, Twitter into public utilities

Amidst vague and uninformed questions during today’s House Judiciary hearing with Facebook, Google, and Twitter on social media filtering practices, Representative Steve King (R-Iowa) dropped a bombshell. “What about converting the large behemoth organizations that we’re talking about here into public utilities?”

King’s suggestion followed his inquiries about right-wing outlet Gateway Pundit losing reach on social media and how Facebook’s algorithm worked. The insinuation was that these companies cannot properly maintain fair platforms for discourse.

The Representative also suggested there may be a need for “review” of Section 230 of the Communications Decency Act, which protects interactive computer services from being treated as the publisher of content users post on their platforms. If that rule were changed, social media companies could be held responsible for illegal content, from copyright infringement to child pornography, appearing on their platforms. That would potentially cripple the social media industry, requiring extensive pre-vetting of any content they display.

The share prices of the tech giants did not see significant declines upon the Representative’s comments, indicating the markets don’t necessarily fear that overbearing regulation of this nature is likely.

Representative Steve King questions Google’s Juniper Downs

Here’s the exchange between King and Google’s Global Head of Public Policy and Government Relations for YouTube Juniper Downs:

King: “Ms Downs, I think you have a sense of my concern about where this is going. I’m all for freedom of speech, and free enterprise, and for competition and finding a way that competition itself does its own regulation so government doesn’t have to. But if this gets further out of hand, it appears to me that Section 230 needs to be reviewed.

And one of the discussions that I’m hearing is ‘what about converting the large behemoth organizations that we’re talking about here into public utilities?’ How do you respond to that inquiry?”

Downs: “As I said previously, we operate in a highly competitive environment; the tech industry is incredibly dynamic, we see new entrants all the time. We see competitors across all of our products at Google, and we believe that the framework that governs our services is an appropriate way to continue to support innovation.”

Unfortunately, many of the Representatives frittered away their five minutes each asking questions that companies had already answered in previous congressional hearings or public announcements, allowing them to burn the time without providing much new information. Republican reps focused many questions on whether social media platforms are biased against conservatives. Democrats cited studies saying metrics do not show this bias, and concentrated their questions on how the platforms could protect elections from disinformation.


Protestors during the hearing held up signs behind Facebook’s Head of Global Policy Management Monica Bickert showing Facebook CEO Mark Zuckerberg and COO Sheryl Sandberg as the heads of an octopus sitting upon a globe; the protestors were later removed.

One surprise came when Representative Jerrold Nadler (D-New York) motioned to cut the hearing for an executive session to discuss President Trump’s comments at yesterday’s Helsinki press conference, which Nadler said were submissive to Russian President Vladimir Putin. However, the motion was defeated 12-10.


Republicans think social media companies censor opposing political viewpoints

I guess it's easier than just admitting your tweets are bad.

A majority of Republicans surveyed by the nonpartisan Pew Research Center claim that the technology companies of the world are liberal, and, what's more, that social media companies specifically censor opposing political viewpoints on their platforms. 

In other words, Republicans believe they're the victims of a vast Silicon Valley conspiracy that will do anything in its power to keep the lid on the Truth. Or something. 

This, of course, is nonsense — but don't tell that to the Republicans surveyed by Pew. The report, released today, quantifies the kind of conspiratorial thinking that gave rise to the likes of alt-right social media platforms such as Gab. And the picture those numbers paint isn't pretty.


Study calls out ‘dark patterns’ in Facebook and Google that push users towards less privacy

More scrutiny than ever is in place on the tech industry, and while high-profile cases like Mark Zuckerberg’s appearance in front of lawmakers garner headlines, there are subtler forces at work. This study from a Norwegian watchdog group eloquently and painstakingly describes the ways that companies like Facebook and Google push their users towards making choices that negatively affect their own privacy.

It was spurred, like many other new inquiries, by Europe’s GDPR, which has caused no small amount of consternation among companies for whom collecting and leveraging user data is their main source of income.

The report (PDF) goes into detail on exactly how these companies create an illusion of control over your data while simultaneously nudging you towards making choices that limit that control.

Although the companies and their products will be quick to point out that they are in compliance with the requirements of the GDPR, there are still plenty of ways in which they can be consumer-unfriendly.

In going through a set of privacy popups put out in May by Facebook, Google, and Microsoft, the researchers found that the first two especially feature “dark patterns, techniques and features of interface design meant to manipulate users…used to nudge users towards privacy intrusive options.”

Flowchart illustrating the Facebook privacy options process – the green boxes are the “easy” route.

It’s not big obvious things — in fact, that’s the point of these “dark patterns”: that they are small and subtle yet effective ways of guiding people towards the outcome preferred by the designers.

For instance, in Facebook and Google’s privacy settings process, the more private options are simply disabled by default, and users not paying close attention will not know that there was a choice to begin with. You’re always opting out of things, not in. To enable these options is also a considerably longer process: 13 clicks or taps versus 4 in Facebook’s case.
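To illustrate why the default position matters so much, here is a toy contrast between the two approaches; the setting names are hypothetical, not Facebook’s or Google’s actual options.

```python
# Hypothetical settings models contrasting dark-pattern defaults with
# privacy by default. Field names are illustrative only.
from dataclasses import dataclass

@dataclass
class DarkPatternSettings:
    # Data sharing defaults to ON, so a user's inaction counts as consent.
    ad_targeting: bool = True
    face_recognition: bool = True
    location_history: bool = True

@dataclass
class PrivacyByDefaultSettings:
    # The GDPR's "data protection by default" ideal: sharing is opt-in.
    ad_targeting: bool = False
    face_recognition: bool = False
    location_history: bool = False

# A user who taps "Accept and continue" without opening any sub-menus
# keeps whichever defaults the designer chose.
print(DarkPatternSettings())       # everything enabled
print(PrivacyByDefaultSettings())  # everything disabled until chosen
```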

That’s especially troubling when the companies are also forcing this action to take place at a time of their choosing, not yours. And Facebook added a cherry on top, almost literally, with the fake red dots that appeared behind the privacy popup, suggesting users had messages and notifications waiting for them even if that wasn’t the case.

When choosing the privacy-enhancing option, such as disabling face recognition, users are presented with a tailored set of consequences: “we won’t be able to use this technology if a stranger uses your photo to impersonate you,” for instance, to scare the user into enabling it. But nothing is said about what you will be opting into, such as how your likeness could be used in ad targeting or automatically matched to photos taken by others.

Disabling ad targeting on Google, meanwhile, warns you that you will not be able to mute some ads going forward. People who don’t understand the mechanism of muting being referred to here will be scared of the possibility — what if an ad pops up at work or during a show and I can’t mute it? So they agree to share their data.

Before you make a choice, you have to hear Facebook’s case.

In this way users are punished for choosing privacy over sharing, and are always presented only with a carefully curated set of pros and cons intended to cue the user to decide in favor of sharing. “You’re in control,” the user is constantly told, though those controls are deliberately designed to undermine what control you do have and exert.

Microsoft, while guilty of some of the same biased phrasing, received much better marks in the report. Its privacy setup process put the less and more private options right next to each other, presenting them as equally valid choices rather than some tedious configuration tool that might break something if you’re not careful. Subtle cues do push users towards sharing more data or enabling voice recognition, but users aren’t punished or deceived the way they are elsewhere.

You may already have been aware of some of these tactics, as I was, but it makes for interesting reading nevertheless. We tend to discount these things when it’s just one screen here or there, but seeing them all together along with a calm explanation of why they are the way they are makes it rather obvious that there’s something insidious at play here.