Category Archives: Facebook

Facebook is hiring a director of human rights policy to work on “conflict prevention” and “peace-building”

Facebook is advertising for a human rights policy director to join its business, located either at its Menlo Park HQ or in Washington DC — with “conflict prevention” and “peace-building” among the listed responsibilities.

In the job ad, Facebook writes that as the reach and impact of its various products continues to grow “so does the responsibility we have to respect the individual and human rights of the members of our diverse global community”, saying it’s:

… looking for a Director of Human Rights Policy to coordinate our company-wide effort to address human rights abuses, including by both state and non-state actors. This role will be responsible for: (1) Working with product teams to ensure that Facebook is a positive force for human rights and apply the lessons we learn from our investigations, (2) representing Facebook with key stakeholders in civil society, government, international institutions, and industry, (3) driving our investigations into and disruptions of human rights abusers on our platforms, and (4) crafting policies to counteract bad actors and help us ensure that we continue to operate our platforms consistent with human rights principles.

Among the minimum requirements for the role, Facebook lists experience “working in developing nations and with governments and civil society organizations around the world”.

It adds that “global travel to support our international teams is expected”.

The company has faced fierce criticism in recent years over its failure to take greater responsibility for the spread of disinformation and hate speech on its platform, especially in international markets it has targeted for business growth via its Internet.org initiative, which seeks to get more people ‘connected’ to the Internet (and thus to Facebook).

More connections means more users for Facebook’s business and growth for its shareholders. But the costs of that growth have been cast into sharp relief over the past several years as the human impact of handing millions of people lacking in digital literacy some very powerful social sharing tools — without a commensurately large investment in local education programs (or even in moderating and policing Facebook’s own platform) — has become all too clear.

In Myanmar Facebook’s tools have been used to spread hate and accelerate ethnic cleansing and/or the targeting of political critics of authoritarian governments — earning the company widespread condemnation, including a rebuke from the UN earlier this year which blamed the platform for accelerating ethnic violence against Myanmar’s Muslim minority.

In the Philippines Facebook also played a pivotal role in the election of president Rodrigo Duterte — who now stands accused of plunging the country into its worst human rights crisis since the dictatorship of Ferdinand Marcos in the 1970s and 80s.

While in India the popularity of the Facebook-owned WhatsApp messaging platform has been blamed for accelerating the spread of misinformation — leading to mob violence and the deaths of several people.

Facebook famously failed even to spot mass manipulation campaigns going on in its own backyard — when in 2016 Kremlin-backed disinformation agents injected masses of anti-Clinton, pro-Trump propaganda into its platform and garnered hundreds of millions of American voters’ eyeballs at a bargain basement price.

So it’s hardly surprising the company has been equally naive in markets it understands far less. Though also hardly excusable — given all the signals it has access to.

In Myanmar, for example, local organizations that are sensitive to the cultural context repeatedly complained to Facebook that it lacked Burmese-speaking staff — complaints that apparently fell on deaf ears for the longest time.

The cost to American society of social media-enabled political manipulation and increased social division is certainly very high. The costs of the weaponization of digital information in markets such as Myanmar look incalculable.

In the Philippines Facebook also indirectly has blood on its hands — having provided services to the Duterte government to help it make more effective use of its tools. This same government is now waging a bloody ‘war on drugs’ that Human Rights Watch says has claimed the lives of around 12,000 people, including children.

Facebook’s job ad for a human rights policy director includes the pledge that “we’re just getting started” — referring to its stated mission of helping people “build stronger communities”.

But when you consider the impact its business decisions have already had in certain corners of the world it’s hard not to read that line with a shudder.

Citing the UN Guiding Principles on Business and Human Rights (and “our commitments as a member of the Global Network Initiative”), Facebook writes that its product policy team is dedicated to “understanding the human rights impacts of our platform and to crafting policies that allow us both to act against those who would use Facebook to enable harm, stifle expression, and undermine human rights, and to support those who seek to advance rights, promote peace, and build strong communities”.

Clearly it has an awful lot of “understanding” to do on this front. And hopefully it will now move fast to understand the impact of its own platform, circa fifteen years into its great ‘society reshaping experience’, and prevent Facebook from being repeatedly used to trash human rights.

As well as representing the company in meetings with politicians, policymakers, NGOs and civil society groups, Facebook says the new human rights director will work on formulating internal policies governing user, advertiser, and developer behavior on Facebook. “This includes policies to encourage responsible online activity as well as policies that deter or mitigate the risk of human rights violations or the escalation of targeted violence,” it notes. 

The director will also work with internal public policy, community ops and security teams to try to spot and disrupt “actors that seek to misuse our platforms and target our users” — while also working to support “those using our platforms to foster peace-building and enable transitional justice”.

So you have to wonder how, for example, Holocaust denial continuing to be protected speech on Facebook will square with that stated mission for the human rights policy director.

At the same time, Facebook is currently hiring for a public policy manager in Francophone Africa — who it writes can “combine a passion for technology’s potential to create opportunity and to make Africa more open and connected, with deep knowledge of the political and regulatory dynamics across key Francophone countries in Africa”.

That job ad does not explicitly reference human rights — talking only about “interesting public policy challenges… including privacy, safety and security, freedom of expression, Internet shutdowns, the impact of the Internet on economic growth, and new opportunities for democratic engagement”.

As well as “new opportunities for democratic engagement”, among the role’s other listed responsibilities is working with Facebook’s Politics & Government team to “promote the use of Facebook as a platform for citizen and voter engagement to policymakers and NGOs and other political influencers”.

So here, in a second policy job, Facebook looks to be continuing its ‘business as usual’ strategy of pushing for more political activity to take place on Facebook.

And if Facebook wants an accelerated understanding of human rights issues around the world it might be better advised to take a more joined-up approach to human rights across its own policy staff, and at least include human rights among the listed responsibilities of all the policy shapers it’s looking to hire.

Facebook unveils ‘SapFix’ AI auto-debugger and AI chip partners like Intel

Facebook has quietly built and deployed an artificial intelligence programming tool called SapFix that scans code, automatically identifies bugs, tests different patches and suggests the best ones that engineers can choose to implement. Revealed today at Facebook’s @Scale engineering conference, SapFix is already running on Facebook’s massive code base and the company plans to eventually share it with the developer community.

“To our knowledge, this marks the first time that a machine-generated fix — with automated end-to-end testing and repair — has been deployed into a codebase of Facebook’s scale,” writes Facebook’s developer tool team. “It’s an important milestone for AI hybrids and offers further evidence that search-based software engineering can reduce friction in software development.” SapFix can run with or without Sapienz, Facebook’s previous automated bug spotter. The company uses the two in conjunction, with SapFix suggesting solutions to problems Sapienz discovers.
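Facebook hasn’t published SapFix’s internals in detail, but the basic loop the team describes (generate candidate patches, test them automatically, surface the survivors to engineers) can be sketched in a few lines. The Python below is a minimal illustration under assumed tooling: git-applied unified diffs and a pytest test suite stand in for whatever Facebook actually uses.

```python
# Minimal sketch of a search-based "suggest fixes that pass the tests" loop.
# Assumes candidate patches are unified diffs and the project uses pytest;
# none of this is Facebook's actual SapFix code.
import shutil
import subprocess
import tempfile
from pathlib import Path


def run_tests(repo_dir: Path) -> bool:
    """Run the project's test suite; True means everything passed."""
    return subprocess.run(["pytest", "-q"], cwd=repo_dir).returncode == 0


def apply_patch(repo_dir: Path, patch_file: Path) -> bool:
    """Apply a unified diff to the working tree; True on a clean apply."""
    return subprocess.run(
        ["git", "apply", str(patch_file.resolve())], cwd=repo_dir
    ).returncode == 0


def suggest_fixes(repo_dir: Path, candidate_patches: list[Path]) -> list[Path]:
    """Try each candidate patch in a scratch copy of the repo and keep
    the ones that make the (previously failing) test suite pass."""
    accepted = []
    for patch in candidate_patches:
        with tempfile.TemporaryDirectory() as scratch:
            work_copy = Path(scratch) / "repo"
            shutil.copytree(repo_dir, work_copy)
            if apply_patch(work_copy, patch) and run_tests(work_copy):
                accepted.append(patch)  # surviving patch, ready for human review
    return accepted
```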

These types of tools could allow smaller teams to build more powerful products, or let big corporations save a ton on wasted engineering time. That’s critical for Facebook as it has so many other problems to worry about.

Glow AI hardware partners

Meanwhile, Facebook is pressing forward with its strategy of reorienting the computing hardware ecosystem around its own machine learning software. Today it announced that top silicon manufacturers, including Cadence, Esperanto, Intel, Marvell, and Qualcomm, have signed up to support its Glow compiler for machine learning hardware acceleration. The plan mirrors Facebook’s Open Compute Project for open sourcing server designs and Telecom Infra Project for connectivity technology.

Glow works with a wide array of machine learning frameworks and hardware accelerators to speed up how they perform deep learning processes. It was open sourced earlier this year at Facebook’s F8 conference.

“Hardware accelerators are specialized to solve the task of machine learning execution. They typically contain a large number of execution units, on-chip memory banks, and application-specific circuits that make the execution of ML workloads very efficient,” Facebook’s team writes. “To execute machine learning programs on specialized hardware, compilers are used to orchestrate the different parts and make them work together . . . Hardware partners that use Glow can reduce the time it takes to bring their product to market.”

Essentially, Facebook needs help in the silicon department. Instead of isolating itself and building its own chips like Apple and Google, it’s effectively outsourcing the hardware development to the experts. That means it might forego a competitive advantage from this infrastructure, but it also allows it to save money and focus on its core strengths.

The technologies aside, the Scale conference was evidence that Facebook will keep hacking, policy scandals be damned. There was nary a mention of Cambridge Analytica or election interference as a packed room of engineers chuckled at nerdy jokes during keynotes packed with enough coding jargon to make the unindoctrinated assume it was in another language. If Facebook is burning, you couldn’t tell from here.

 

Facebook rolls out photo/video fact checking so partners can train its AI

Sometimes fake news lives inside of Facebook as photos and videos designed to propel misinformation campaigns, instead of off-site on news articles that can generate their own ad revenue. To combat these politically rather than financially motivated meddlers, Facebook has to be able to detect fake news inside of images and the audio that accompanies video clips. Today it’s expanding its photo and video fact-checking program from four countries to all 23 of its fact-checking partners in 17 countries.

“Many of our third-party fact-checking partners have expertise evaluating photos and videos and are trained in visual verification techniques, such as reverse image searching and analyzing image metadata, like when and where the photo or video was taken,” says Facebook product manager Antonia Woodford. “As we get more ratings from fact-checkers on photos and videos, we will be able to improve the accuracy of our machine learning model.”

The goal is for Facebook to be able to automatically spot manipulated images, out of context images that don’t show what they say they do, or text and audio claims that are provably false.

In last night’s epic 3,260-word security manifesto, Facebook CEO Mark Zuckerberg explained that “The definition of success is that we stop cyberattacks and coordinated information operations before they can cause harm.” That means using AI to proactively hunt down false news rather than waiting for it to be flagged by users. For that, Facebook needs AI training data that will be produced as exhaust from its partners’ photo and video fact checking operations.

Facebook is developing technology tools to assist its fact checkers in this process. “We use optical character recognition (OCR) to extract text from photos and compare that text to headlines from fact-checkers’ articles. We are also working on new ways to detect if a photo or video has been manipulated,” Woodford notes, referring to deepfakes that use AI video editing software to make someone appear to say or do something they haven’t.
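Facebook doesn’t spell out the matching logic beyond the quote above, but the OCR-and-compare step can be approximated with open-source parts. In this sketch the pytesseract wrapper around Tesseract does the text extraction and a plain string-similarity ratio stands in for whatever comparison Facebook actually runs against fact-checkers’ headlines; the function names and threshold are illustrative, not the company’s pipeline.

```python
# Rough stand-in for "extract text from photos and compare that text to
# headlines from fact-checkers' articles"; pytesseract and difflib are
# illustrative choices, not Facebook's internal stack.
import difflib

import pytesseract
from PIL import Image


def extract_text(image_path: str) -> str:
    """Pull any visible text out of an image with OCR."""
    return pytesseract.image_to_string(Image.open(image_path))


def matches_debunked_claim(image_path: str,
                           factcheck_headlines: list[str],
                           threshold: float = 0.6) -> bool:
    """Flag an image whose embedded text closely resembles a headline
    that fact-checkers have already rated false."""
    text = extract_text(image_path).strip().lower()
    if not text:
        return False
    return any(
        difflib.SequenceMatcher(None, text, headline.lower()).ratio() >= threshold
        for headline in factcheck_headlines
    )
```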

Image memes were one of the most popular forms of disinformation used by the Russian IRA election interferers. The problem is that since they’re so easily re-shareable and don’t require people to leave Facebook to view them, they can get viral distribution from unsuspecting users who don’t realize they’ve become pawns in a disinformation campaign.

Facebook could potentially use the high level of technical resources necessary to build fake news meme-spotting AI as an argument for why it shouldn’t be broken up. With Facebook, Messenger, Instagram, and WhatsApp combined, the company gains economies of scale when it comes to fighting the misinformation scourge.

 

Facebook’s ‘Rosetta’ system helps the company understand memes

Memes are the language of the web and Facebook wants to better understand them.

Facebook’s AI teams have made substantial advances over the years in both computer vision and natural language recognition. Today, they’ve announced some of their latest work combining advances in the two fields. A new system, codenamed “Rosetta,” helps teams at Facebook and Instagram identify text within images to better understand what their subject is and more easily classify them for search or to flag abusive content.

It’s not all memes; the tool scans over a billion images and video frames daily across multiple languages in real time, according to a company blog post.

Rosetta makes use of recent advances in optical character recognition (OCR) to first scan an image and detect text that is present, at which point the characters are placed inside a bounding box that is then analyzed by convolutional neural nets that try to recognize the characters and determine what’s being communicated.
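Rosetta’s own detection and recognition models aren’t reproduced here, but the two-stage shape of the pipeline (propose regions that look like text, then read each region) can be mimicked with generic tools. The sketch below uses simple OpenCV contour heuristics in place of Rosetta’s region-proposal step and Tesseract in place of its character-recognition nets; it’s an analogy, not the production system.

```python
# Two-stage detect-then-recognize flow, loosely mirroring the pipeline above.
# OpenCV contour heuristics stand in for the region-proposal model and
# Tesseract for the character-recognition nets; thresholds are arbitrary.
import cv2
import pytesseract


def detect_text_regions(image_bgr):
    """Stage 1: propose rough bounding boxes that may contain text."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # [-2] picks the contour list under both OpenCV 3 and 4 return signatures
    contours = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    boxes = [cv2.boundingRect(c) for c in contours]
    return [(x, y, w, h) for x, y, w, h in boxes if w > 20 and h > 10]


def recognize_text(image_bgr, boxes):
    """Stage 2: run character recognition on each proposed region."""
    texts = []
    for x, y, w, h in boxes:
        crop = cv2.cvtColor(image_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        texts.append(pytesseract.image_to_string(crop).strip())
    return [t for t in texts if t]


if __name__ == "__main__":
    image = cv2.imread("meme.jpg")  # hypothetical input file
    print(recognize_text(image, detect_text_regions(image)))
```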


This technology has been in practice for a while — Facebook has been working with OCR since 2015 — but implementing this across the company’s vast networks provides a crazy degree of scale that motivated the company to develop some new strategies around character detection and recognition.

If you’re interested in some of the more technical details of what they did here, check out the team’s research paper on the topic.

Facebook has plenty of reasons to be interested in the text that is accompanying videos or photos, particularly when it comes to their content moderation needs.

Identifying spam is pretty straightforward when the text description of a photo is “Bruh!!!” or “1 like = 1 prayer,” but videos and photos that employ similar techniques seem to be more prevalent in timelines as Facebook tweaks its algorithm to promote “time well spent.” The same goes for hate speech, which can much more easily be shared when all the messaging is encapsulated in one image or video, which makes text overlays a useful tool.

The company says that this system presents new challenges for them in terms of multi-language support as it’s currently running off a unified model for languages and the bulk of available training data is currently in the Latin alphabet. In the company’s research paper, the team details that it has some strategies to conjure up new language support by repurposing existing databases.

As Facebook looks to offload work from human content moderators and allow its news feed algorithms to sort content based on assigned classifications, a tool like this has a lot of potential to shape how Facebook identifies harmful content, but also put more interesting content in front of you.

Teens know social media is manipulative, but they still use it more than ever

Whoa, the teens really are woke.

The organization Common Sense Media released a research report on Monday that aims to paint a picture of the role that social media plays in teens' lives. Entitled 'Social Media, Social Life,' the survey covered topics like how much and what kinds of social media teens use, as well as how they feel about these apps, how social media makes them feel about themselves, how it affects their relationships, and more. 

Teens' social media use has increased by 36 percentage points since 2012. Unsurprisingly, their favorite apps are Snapchat and Instagram (Facebook is for communicating "with my grandparents" — not even parents, now... ouch).


Hate speech, collusion, and the constitution

Half an hour into their two-hour testimony on Wednesday before the Senate Intelligence Committee, Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey were asked about collaboration between social media companies. “Our collaboration has greatly increased,” Sandberg stated before turning to Dorsey and adding that Facebook has “always shared information with other companies.” Dorsey nodded in response, and noted for his part that he’s very open to establishing “a regular cadence with our industry peers.”

Social media companies have established extensive policies on what constitutes “hate speech” on their platforms. But discrepancies between these policies open the possibility for propagators of hate to game the platforms and still get their vitriol out to a large audience. Collaboration of the kind Sandberg and Dorsey discussed can lead to a more consistent approach to hate speech that will prevent the gaming of platforms’ policies.

But collaboration between competitors as dominant as Facebook and Twitter are in social media poses an important question: would antitrust or other laws make their coordination illegal?

The short answer is no. Facebook and Twitter are private companies that get to decide what user content stays and what gets deleted off of their platforms. When users sign up for these free services, they agree to abide by their terms. Neither company is under a First Amendment obligation to keep speech up. Nor can it be said that collaboration on platform safety policies amounts to collusion.

This could change based on an investigation into speech policing on social media platforms being considered by the Justice Department. But it’s extremely unlikely that Congress would end up regulating what platforms delete or keep online – not least because it may violate the First Amendment rights of the platforms themselves.

What is hate speech anyway?

Trying to find a universal definition for hate speech would be a fool’s errand, but in the context of private companies hosting user generated content, hate speech for social platforms is what they say is hate speech.

Facebook’s 26-page Community Standards include a whole section on how Facebook defines hate speech. For Facebook, hate speech is “anything that directly attacks people based on . . . their ‘protected characteristics’ — race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability or disease.” While that might be vague, Facebook then goes on to give specific examples of what would and wouldn’t amount to hate speech, all while making clear that there are cases – depending on the context – where speech will still be tolerated if, for example, it’s intended to raise awareness.

Twitter uses a “hateful conduct” prohibition which they define as promoting “violence against or directly attacking or threatening other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.” They also prohibit hateful imagery and display names, meaning it’s not just what you tweet but what you also display on your profile page that can count against you.

Both companies constantly reiterate and supplement their definitions, as new test cases arise and as words take on new meaning. For example, two common slang words, used by Russians to describe Ukrainians and by Ukrainians to describe Russians, were determined to be hate speech after war erupted in Eastern Ukraine in 2014. An internal review by Facebook found that what used to be common slang had turned into derogatory, hateful language.

Would collaboration on hate speech amount to anticompetitive collusion?

Under U.S. antitrust laws, companies cannot collude to make anticompetitive agreements or try to monopolize a market. A company which becomes a monopoly by having a superior product in the marketplace doesn’t violate antitrust laws. What does violate the law is dominant companies making an agreement – usually in secret – to deceive or mislead competitors or consumers. Examples include price fixing, restricting new market entrants, or misrepresenting the independence of the relationship between competitors.

A Pew survey found that 68% of Americans use Facebook. According to Facebook’s own records, the platform had a whopping 1.47 billion daily active users on average for the month of June and 2.23 billion monthly active users as of the end of June – with over 200 million in the US alone. While Twitter doesn’t disclose its number of daily users, it does publish the number of monthly active users which stood at 330 million at last count, 69 million of which are in the U.S.

There can be no question that Facebook and Twitter are overwhelmingly dominant in the social media market. That kind of dominance has led to calls for breaking up these giants under antitrust laws.

Would those calls hold more credence if the two social giants began coordinating their policies on hate speech?

The answer is probably not, but it does depend on exactly how they coordinated. Social media companies like Facebook, Twitter, and Snapchat have grown large internal product policy teams that decide the rules for using their platforms, including on hate speech. If these teams were to get together behind closed doors and coordinate policies and enforcement in a way that would preclude smaller competitors from being able to enter the market, then antitrust regulators may get involved.

Antitrust would also come into play if, for example, Facebook and Twitter got together and decided to charge twice as much for advertising that includes hate speech (an obviously absurd scenario) – in other words, using their market power to affect pricing of certain types of speech that advertisers use.

In fact, coordination around hate speech may reduce anti-competitive concerns. Given the high user engagement around hate speech, banning it could lead to reduced profits for the two companies and provide an opening to upstart competitors.

Sandberg and Dorsey’s testimony Wednesday didn’t point to executives hell-bent on keeping competition out through collaboration. Rather, their potential collaboration is probably better seen as an industry deciding on “best practices,” a common occurrence in other industries including those with dominant market players.

What about the First Amendment?

Private companies are not subject to the First Amendment. The Constitution applies to the government, not to corporations. A private company, no matter its size, can ignore your right to free speech.

That’s why Facebook and Twitter already can and do delete posts that contravene their policies. Calling for the extermination of all immigrants, referring to Africans as coming from shithole countries, and even anti-gay protests at military funerals may be protected in public spaces, but social media companies get to decide whether they’ll allow any of that on their platforms. As Harvard Law School’s Noah Feldman has stated, “There’s no right to free speech on Twitter. The only rule is that Twitter Inc. gets to decide who speaks and listens–which is its right under the First Amendment.”

Instead, when it comes to social media and the First Amendment, courts have been more focused on not allowing the government to keep citizens off of social media. Just last year, the U.S. Supreme Court struck down a North Carolina law that made it a crime for a registered sex offender to access social media if children use that platform. During the hearing, judges asked the government probing questions about the rights of citizens to free speech on social media from Facebook, to Snapchat, to Twitter and even LinkedIn.

Justice Ruth Bader Ginsburg made clear during the hearing that restricting access to social media would mean “being cut off from a very large part of the marketplace of ideas [a]nd [that] the First Amendment includes not only the right to speak, but the right to receive information.”

The Court ended up deciding that the law violated the fundamental First Amendment principle that “all persons have access to places where they can speak and listen,” noting that social media has become one of the most important forums for expression of our day.

Lower courts have also ruled that public officials who block users off their profiles are violating the First Amendment rights of those users. Judge Naomi Reice Buchwald, of the Southern District of New York, decided in May that Trump’s Twitter feed is a public forum. As a result, she ruled that when Trump blocks citizens from viewing and replying to his posts, he violates their First Amendment rights.

The First Amendment doesn’t mean Facebook and Twitter are under any obligation to keep up whatever you post, but it does mean that the government can’t just ban you from accessing your Facebook or Twitter accounts – and probably can’t block you off of their own public accounts either.

Collaboration is Coming?

Sandberg made clear in her testimony on Wednesday that collaboration is already happening when it comes to keeping bad actors off of platforms. “We [already] get tips from each other. The faster we collaborate, the faster we share these tips with each other, the stronger our collective defenses will be.”

Dorsey for his part stressed that keeping bad actors off of social media “is not something we want to compete on.” Twitter is here “to contribute to a healthy public square, not compete to have the only one, we know that’s the only way our business thrives and helps us all defend against these new threats.”

He even went further. When it comes to the drafting of their policies, beyond collaborating with Facebook, he said he would be open to a public consultation. “We have real openness to this. . . . We have an opportunity to create more transparency with an eye to more accountability but also a more open way of working – a way of working for instance that allows for a review period by the public about how we think about our policies.”

I’ve already argued why tech firms should collaborate on hate speech policies; the question that remains is whether doing so would be legal. The First Amendment does not apply to social media companies. Antitrust laws don’t seem to stand in their way either. And based on how Senator Burr, Chairman of the Senate Select Committee on Intelligence, chose to close the hearing, the government seems supportive of social media companies collaborating. Addressing Sandberg and Dorsey, he said, “I would ask both of you. If there are any rules, such as any antitrust, FTC, regulations or guidelines that are obstacles to collaboration between you, I hope you’ll submit for the record where those obstacles are so we can look at the appropriate steps we can take as a committee to open those avenues up.”

Highlights from the Senate Intelligence hearing with Facebook and Twitter

Another day, another political grilling for social media platform giants.

The Senate Intelligence Committee’s fourth hearing took place this morning, with Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey present to take questions as U.S. lawmakers continue to probe how foreign influence operations are playing out on Internet platforms — and eye up potential future policy interventions. 

During the session US lawmakers voiced concerns about “who owns” data they couched as “rapidly becoming me”. An uncomfortable conflation for platforms whose business is human surveillance.

They also flagged the risk of more episodes of data manipulation intended to incite violence, such as has been seen in Myanmar — and Facebook especially was pressed to commit to having both a legal and moral obligation towards its users.

The value of consumer data was also raised, with committee vice chair, Sen. Mark Warner, suggesting platforms should actively convey that value to their users, rather than trying to obfuscate the extent and utility of their data holdings. A level of transparency that will clearly require regulatory intervention.

Here’s our round-up of some of the other highlights from this morning’s session.

Google not showing up

Today’s hearing was a high profile event largely on account of two senior bums sitting on the seats before lawmakers — and one empty chair.

Facebook sent its COO Sheryl Sandberg. Twitter sent its bearded wiseman CEO Jack Dorsey (whose experimental word of the month appears to be “cadence” — as in he frequently said he would like a greater “cadence” of meetings with intelligence tips from law enforcement).

But Google sent the offer of its legal chief in place of Larry Page or Sundar Pichai, who the committee had actually asked for.

Which meant the company instantly became the politicians’ favored punchbag, with senator after senator laying into Alphabet for empty chairing them at the top exec level.

Whatever Page and Pichai were too busy doing to answer awkward questions about the company’s business activity and ambitions in China, the move looks like a major own goal for Alphabet, as it was open season for senators to slam it.

Page staying away also made Facebook and Twitter look the very model of besuited civic responsibility and patriotism just for bothering to show up.

We got “Jack” and “Sheryl” first name terms from some of the senators, and plenty of “thanks for turning up” heaped on them from all corners — with some very particular barbs reserved for Google.

“I want to commend both of you for your appearance here today for what was no doubt going to be some uncomfortable questions. And I want to commend your companies for making you available. I wish I could say the same about Google,” said Senator Tom Cotton, addressing those in the room. “Both of you should wear it as a badge of honor that the Chinese Communist Party has blocked you from operating in their country.”

“Perhaps Google didn’t send a senior executive today because they’ve recently taken actions such as terminating a co-operation they had with the American military on programs like artificial intelligence that are designed not just to protect our troops and help them fight in our country’s wars but to protect civilians as well,” he continued, warming to his theme. “This is at the very same time that they continue to co-operate with the Chinese Communist Party on matters like artificial intelligence or partner with Huawei and other Chinese telecom companies who are effectively arms of the Chinese Communist Party.

“And credible reports suggest that they are working to develop a new search engine that would satisfy the Chinese Communist Party’s censorship standards after having disclaimed any intent to do so eight years ago. Perhaps they did not send a witness to answer these questions because there is no answer to these questions. And the silence we would hear right now from the Google chair would be reminiscent of the silence that that witness would provide.”

Even Sandberg seemed to cringe when offered the home-run opportunity to stick the knife into Google — when Cotton asked both witnesses whether their companies would consider taking these kinds of actions.

But after a split second’s hesitation her media training kicked in — and she found her way of diplomatically giving Google the asked for kicking. “I’m not familiar with the specifics of this at all but based on how you’re asking the question I don’t believe so,” was her reply.

After his own small pause, Dorsey, the man of fewer words, added: “Also no.”

 

Dorsey repeat apologizing 

‘We haven’t done a good job of that’ was the most common refrain falling from Dorsey’s bearded lips this morning as senators asked why the company hasn’t managed to suck less from all sorts of angles — whether that’s by failing to provide external researchers with better access to data to help them help it with malicious interference; or failing to inform individual users who’ve been the targeted victims of Twitter fakery that that abuse has been happening to them; or just failing to offer any kind of contextual signal to its users that some piece of content they’re seeing is (or might be) maliciously fake.

But then this is the man who has defended providing a platform to people who make a living selling lies, so…

“We haven’t done a good job of that in the past,” was certainly the phrase of the morning for a contrite Dorsey. And while admitting failure is at least better than denying you’re failing, it’s still just that: Failure.

And continued failure has been a Twitter theme for so long now, when it comes to things like harassment and abuse, it’s starting to feel intentional. (As if, were you able to cut Twitter you’d find the words ‘feed the trolls’ running all the way through its business.)

Sadly the committee seemed to be placated by Dorsey’s repeat confessions of inadequacy. And he really wasn’t pressed enough. We’d have liked to see a lot more grilling of him over short term business incentives that tie his hands on fighting abuse.

Amusingly, one senator rechristened Dorsey “Mr Darcey”, after somehow tripping over the two syllables of his name. But actually, thinking about it, ‘Pride and Prejudice’ might be a good theme for the Twitter CEO to explore during one of his regular meditation sessions.

Y’know, as he ploughs through a second turgid decade of journeying towards self-awareness — while continuing to be paralyzed, on the business, civic and, well, human being, front, by rank indecision about which people and points of view to listen to (Pro-Tip: If someone makes money selling lies and/or spreading hate you really shouldn’t be letting them yank your operational chain) — leaving his platform (the would be “digital public square”, as he kept referring to it today), incapable of upholding the healthy standards it claims to want to have. (Or daubed with all manner of filthy graffiti, if you want a visual metaphor.)

The problem is Twitter’s stated position/mission, in Dorsey’s prepared statements to the committee, of keeping “all voices on the platform” is hubris. It’s a flawed ideology that results in massive damage to the free speech and healthy conversation he professes to want to champion because nazis are great at silencing people they hate and harass.

Unfortunately Dorsey still hasn’t had that eureka moment yet. And there was no sign of any imminent awakening judging by this morning’s performance.

 

Sandberg’s oh-so-smooth operation — but also an exchange that rattled her

The Facebook COO isn’t chief operating officer for nothing. She’s the queen of the polished, non-committal soundbite. And today she almost always had one to hand — smoothly projecting the impression that the company is always doing something. Whether that’s on combating hate speech, hoaxes and “inauthentic” content, or IDing and blocking state-level disinformation campaigns — thereby shifting attention off the deeper question of whether Facebook is doing enough. (Or even whether its platform might not be the problem itself.)

Albeit the bar looks very low indeed when your efforts are being set against Twitter and an empty chair.  (Aka the “invisible witness” as one senator sniped at Google.)

Very many of her answers courteously informed senators that Facebook would ‘follow up’ with answers and/or by providing some hazily non-specific ‘collaborative work’ at some undated future time — which is the most professional way to kick awkward questions into the long grass.

Though do it long enough and the grass can turn on you and start to bite back because it’s got so long and unkempt it now contains some very angry snakes.

Senator Kamala Harris, very clearly seething at this point — having had her questions to Facebook knocked about since November 2017, when its general counsel first testified to the committee on the disinformation topic — was determined to get under Sandberg’s skin. And she did.

The exchange that rattled the Facebook COO started off around how much money it makes off of ads run by fake accounts — such as the Kremlin-backed Internet Research Agency.

Sandberg slickly reframed “inauthentic content” as the even more boring-sounding “inorganic content” — now several psychological steps removed from the shockingly outrageous Kremlin propaganda that the company eventually disclosed.

She added it was equivalent to .004% of content in News Feed (hence Facebook’s earlier contention to Harris that it’s “immaterial to earnings”).

It’s not so much the specific substance of the question that’s the problem here for Facebook — with Sandberg also smoothly reiterating that the IRA had spent about $100k (which is petty cash in ad terms) — it’s the implication that Facebook’s business model profits off of fakes and hate, and is therefore amorously entwined in bed with fakes and hate.

“From our point of view, Senator Harris, any amount is too much,” continued Sandberg after she rolled out the $100k figure, and now beginning to thickly layer on the emulsion.

Harris cut her off, interjecting: “So are you saying that the revenue generated was .004% of your annual revenue”, before adding the pointed observation: “Because of course that would not be immaterial” — which drew a rare stuttered double “so” from Sandberg.

“So what metric are you using to calculate the revenue that was generated associated with those ads, and what is the dollar amount that is associated then with that metric?” pressed Harris.

Sandberg couldn’t provide the straight answer being sought, she said, because “ads don’t run with inorganic content on our service” — claiming: “There is actually no way to firmly ascertain how much ads are attached to how much organic content; it’s not how it works.”

“But what percentage of the content on Facebook is organic?” rejoined Harris.

That elicited a micro-pause from Sandberg, before she fell back on the usual: “I don’t have that specific answer but we can come back to you with that.”

Harris pushed her again, wondering if it’s “the majority of content”?

“No, no,” said Sandberg, sounding almost flustered.

“Your company’s business model is complex but it benefits from increased user engagement… so, simply put, the more people that use your platform the more they are exposed to third party ads, the more revenue you generate — would you agree with that,” continued Harris, starting to sound boring but only to try to reel her in.

After another pause Sandberg asked her to repeat this hardly complex question — before affirming “yes, yes” and then hastily qualifying it with: “But only I think when they see really authentic content because I think in the short run and over the long run it doesn’t benefit us to have anything inauthentic on our platform.”

Harris continued to hammer on how Facebook’s business model benefits from greater user engagement as more ads are viewed via its platform —  linking it to “a concern that many have is how you can reconcile an incentive to create and increase your user engagement with the content that generates a lot of engagement is often inflammatory and hateful”.

She then skewered Sandberg with a specific example of Facebook’s hate speech moderation failure — and by suggestive implication a financially incentivized policy and moral failure — referencing a ProPublica report from June 2017 which revealed the company had told moderators to delete hate speech targeting white men but not black children — because the latter were not considered a “protected class”.

Sandberg, sounding uncomfortable now, said this was “a bad policy that has been changed”. “We fixed it,” she added.

“But isn’t that a concern with hate, period, that not everyone is looked at the same way?” wondered Harris.

Facebook “cares tremendously about civil rights” said Sandberg, trying to regain the PR initiative. But she was again interrupted by Harris — wondering when exactly Facebook had “addressed” that specific policy failure.

Sandberg was unable to put a date on when the policy change had been made. Which obviously now looked bad.

“Was the policy changed after that report? Or before that report from ProPublica?” pressed Harris.

“I can get back to you on the specifics of when that would have happened,” said Sandberg.

“You’re not aware of when it happened?”

“I don’t remember the exact date.”

“Do you remember the year?”

“Well you just said it was 2017.”

“So do you believe it was 2017 when the policy changed?”

“Sounds like it was.”

The awkward exchange ended with Sandberg being asked whether or not Facebook had changed its hate speech policies to protect not just those people who have been designated legally protected classes of people.

“I know that our hate speech policies go beyond the legal classifications, and they are all public, and we can get back to you on that,” she said, falling back on yet another pledge to follow up.

Twitter agreeing to bot labelling in principle  

We flagged this earlier but Senator Warner managed to extract from Dorsey a quasi-agreement to labelling automation on the platform in future — or at least providing more context to help users navigate what they’re being exposed to in tweet form.

He said Twitter has been “definitely” considering doing this — “especially this past year”.

Although, as we noted earlier, he had plenty of caveats about the limits of its powers of bot detection.

“It’s really up to the implementation at this point,” he added.

How exactly ‘bot or not’ labelling will come to Twitter isn’t clear. Nor was there any timeframe.

But it’s at least possible to imagine the company could add some sort of suggestive percentage of automated content to accounts in future — assuming Dorsey can find his first, second and third gears.

Lawmakers worried about the impact of deepfakes

Deepfakes, aka AI-powered manipulation of video to create fake footage of people doing things they never did, are, perhaps unsurprisingly, already on the radar of reputation-sensitive U.S. lawmakers — even though the technology itself is hardly in widespread, high-volume use.

Several senators asked whether (and how comprehensively) the social media companies archive suspended or deleted accounts.

Clearly politicians are concerned. No senator wants to be ‘filmed in bed with an intern’ — especially one they never actually went to bed with.

The response they got back was a qualified yes — with both Sandberg and Dorsey saying they keep such content if they have any suspicions.

Which is perhaps rather cold comfort when you consider that Facebook had — apparently — zero suspicions about all the Kremlin propaganda violently coursing across its platform in 2016 and generating hundreds of millions of views.

Since that massive fuck-up the company has certainly seemed more proactive on the state-sponsored fakes front  — recently removing a swathe of accounts linked to Iran which were pushing fake content, for example.

Although unless lawmakers regulate for transparency and audits of platforms there’s no real way for anyone outside these commercially walled gardens to be 110% sure.

Sandberg’s clumsy affirmation of WhatsApp encryption 

Since the WhatsApp founders left Facebook, earlier this year and in the fall of last year, there have been rumors that the company might be considering dropping the flagship end-to-end encryption that the messaging platform boasts — specifically to help with its monetization plans around linking businesses with users.

And Sandberg was today asked directly if WhatsApp still uses e2e encryption. She replied by affirming Facebook’s commitment to encryption generally — saying it’s good for user security.

“We are strong believers in encryption,” she told lawmakers. “Encryption helps keep people safe, it’s what secures our banking system, it’s what secures the security of private messages, and consumers rely on it and depend on it.”

Yet on the specific substance of the question, which had asked whether WhatsApp is still using end-to-end encryption, she pulled out another of her professionally caveated responses — telling the senator who had asked: “We’ll get back to you on any technical details but to my knowledge it is.”

Most probably this was just her habit of professional caveating kicking in. But it was an odd way to reaffirm something as fundamental as the e2e encrypted architecture of a product used by billions of people on a daily basis. And whose e2e encryption has caused plenty of political headaches for Facebook — which in turn is something Sandberg has been personally involved in trying to fix.

Should we be worried that the Facebook COO couldn’t swear under oath that WhatsApp is still e2e encrypted? Let’s hope not. Presumably the day job has just become so fettered with fixes she just momentarily forgot what she could swear she knows to be true and what she couldn’t.