Author: Danny Crichton

The ethics of internet culture: a conversation with Taylor Lorenz

Taylor Lorenz was in high demand this week. As a prolific journalist at The Atlantic and a soon-to-be member of Harvard’s prestigious Nieman Fellowship for journalism, that’s perhaps not surprising. Nor was this the first time she’s had a bit of a moment: Lorenz has served as an in-house expert on social media and the internet for several major companies, and has written and edited for publications as diverse as The Daily Beast, The Hill, People, The Daily Mail, and Business Insider, all while remaining hip and in touch enough to serve as a kind of youth zeitgeist translator on her beat as a technology writer for The Atlantic.

Lorenz is in fact publicly busy enough that she’s one of only two people I personally know to have openly ‘quit email,’ the other being my friend Russ, an 82-year-old retired engineer and MIT alum who literally spends all day, most days, working on a plan to reinvent the bicycle.

I wonder if any of Lorenz’s previous professional experiences, however, could have matched the weight of the events she encountered these past several days, when the nightmarish massacre in Christchurch, New Zealand brought together two of her greatest areas of expertise: political extremism (which she covered for The Hill), and internet culture. As her first Atlantic piece after the shootings said, the Christchurch killer’s manifesto was “designed to troll.” Indeed, his entire heinous act was a calculated effort to manipulate our current norms of Internet communication and connection, for fanatical ends.


Lorenz responded with characteristic insight, focusing on the ways in which the stylized insider subcultures the Internet supports can be used to confuse, distract, and mobilize millions of people for good and for truly evil ends:

“Before people can even begin to grasp the nuances of today’s internet, they can be radicalized by it. Platforms such as YouTube and Facebook can send users barreling into fringe communities where extremist views are normalized and advanced. Because these communities have so successfully adopted irony as a cloaking device for promoting extremism, outsiders are left confused as to what is a real threat and what’s just trolling. The darker corners of the internet are so fragmented that even when they spawn a mass shooting, as in New Zealand, the shooter’s words can be nearly impossible to parse, even for those who are Extremely Online.”

Such insights are among the many reasons I was so grateful to be able to speak with Taylor Lorenz for this week’s installment of my TechCrunch series interrogating the ethics of technology.

As I’ve written in my previous interviews with author and inequality critic Anand Giridharadas, and with award-winning Google exec turned award-winning tech critic James Williams, I come to tech ethics from 25 years of studying religion. My personal approach to religion, however, has essentially always been that it plays a central role in human civilization not only or even primarily because of its theistic beliefs and “faith,” but because of its culture — its traditions, literature, rituals, history, and the content of its communities.

And because I don’t mind comparing technology to religion (not saying they are one and the same, but that there is something to be learned from the comparison), I’d argue that if we really want to understand the ethics of the technologies we are creating, particularly the Internet, we need to explore, as Taylor and I did in our conversation below, “the ethics of internet culture.”

What resulted was, like Lorenz’s work in general, at times whimsical, at times cool enough to fly right over my head, but at all times fascinating and important.

Editor’s Note: we ungated the first of 11 sections of this interview. Reading time: 22 minutes / 5,500 words.

Joking with the Pope

Greg Epstein: Taylor, thanks so much for speaking with me. As you know, I’m writing for TechCrunch about religion, ethics, and technology, and I recently discovered your work when you brought all those together in an unusual way. You subtweeted the Pope, and it went viral.

Taylor Lorenz: I know. [People] were freaking out.

Greg: What was that experience like?

Taylor: The Pope tweeted some insane tweet about how Mary, Jesus’ mother, was the first influencer. He tweeted it out, and everyone was spamming that tweet to me because I write so much about influencers, and I was just laughing. There’s a meme on Instagram about Jesus being the first influencer and how he killed himself or faked his death for more followers.


I just tweeted it out. I think a lot of people didn’t know the joke, the meme, and I think they just thought that it was new and funny. Also [some people] were saying, “how can you joke about Jesus wanting more followers?” I’m like, the Pope literally compared Mary to a social media influencer, so calm down. My whole family is Irish Catholic.

A bunch of people were sharing my tweet. I was like, oh, god. I’m not trying to lead into some religious controversy, but I did think about whether my Irish Catholic mother would laugh. She has a really good sense of humor. I thought, I think she would laugh at this joke. I think it’s fine.

Greg: I loved it because it was a real Rorschach test for me. Sitting there looking at that tweet, I was one of the people who didn’t know that particular meme. I’d like to think I love my memes but …

Taylor: I can’t claim credit.

Greg: No, no, but anyway most of the memes I know are the ones my students happen to tell me about. The point is I’ve spent 15-plus years being a professional atheist. I’ve had my share of religious debates, but I also have had all these debates with others I’ll call Professional Strident Atheists, who are more aggressive in their anti-religion than I am. And I’m thinking, “Okay, this is clearly a tweet that Richard Dawkins would love. Do I love it? I don’t know. Wait, I think I do!”

Taylor: I treated it with the greatest respect for all faiths. I thought it was funny to drag the Pope on Twitter.

The influence of Instagram


The adversarial persuasion machine: a conversation with James Williams

James Williams may not be a household name yet in most tech circles, but he will be.

For this second in what will be a regular series of conversations exploring the ethics of the technology industry, I was delighted to be able to turn to one of our current generation’s most important young philosophers of tech.

Around a decade ago, Williams won the Founder’s Award, Google’s highest honor for its employees. Then in 2017, he won an even rarer award, this time for his scorching criticism of the entire digital technology industry in which he had worked so successfully. The inaugural winner of Cambridge University’s $100,000 “Nine Dots Prize” for original thinking, Williams was recognized for the fruits of his doctoral research at Oxford University, on how “digital technologies are making all forms of politics worth having impossible, as they privilege our impulses over our intentions and are designed to exploit our psychological vulnerabilities in order to direct us toward goals that may or may not align with our own.” In 2018, he published his brilliantly written book Stand Out of Our Light, an instant classic in the field of tech ethics.

In an in-depth conversation by phone and email, edited below for length and clarity, Williams told me about how and why our attention is under profound assault. At one point, he points out that the artificial intelligence which beat the world champion at the game Go is now aimed squarely — and rather successfully — at beating us, or at least convincing us to watch more YouTube videos and stay on our phones a lot longer than we otherwise would. And while most of us have sort of observed and lamented this phenomenon, Williams believes the consequences of things like smartphone compulsion could be much more dire and widespread than we realize, ultimately putting billions of people in profound danger while testing our ability to even have a human will.

It’s a chilling prospect, and yet somehow, if you read to the end of the interview, you’ll see Williams manages to end on an inspiring and hopeful note. Enjoy!

Editor’s note: this interview is approximately 5,500 words / 25 minutes read time. The first third has been ungated given the importance of this subject. To read the whole interview, be sure to join the Extra Crunch membership. ~ Danny Crichton

Introduction and background

Greg Epstein: I want to know more about your personal story. You grew up in West Texas. Then you found yourself at Google, where you won the Founder’s Award, Google’s highest honor. Then at some point you realized, “I’ve got to get out of here.” What was that journey like?

James Williams: This is going to sound neater and more intentional than it actually was, as is the case with most stories. In a lot of ways my life has been a ping-ponging back and forth between tech and the humanities, trying to bring them into some kind of conversation.


I spent my formative years in a town called Abilene, Texas, where my father was a university professor. It’s the kind of place where you get the day off school when the rodeo comes to town. Lots of good people there. But it’s not exactly a tech hub. Most of my tech education consisted of spending late nights, and full days in the summer, up in the university computer lab with my younger brother just messing around on the fast connection there. Later when I went to college, I started studying computer engineering, but I found that I had this itch about the broader “why” questions that on some deeper level I needed to scratch. So I changed my focus to literature.

After college, I started working at Google in their Seattle office, helping to grow their search ads business. I never, ever imagined I’d work in advertising, and there was some serious whiplash from going straight into that world after spending several hours a day reading James Joyce. Though I guess Leopold Bloom in Ulysses also works in advertising, so there’s at least some thread of a connection there. But I think what I found most compelling about the work at the time, and I guess this would have been in 2005, was the idea that we were fundamentally changing what advertising could be. If historically advertising had to be an annoying, distracting barrage on people’s attention, it didn’t have to anymore because we finally had the means to orient it around people’s actual intentions. And search, that “database of intentions,” was right at the vanguard of that change.

The adversarial persuasion machine


Greg: So how did you end up at Oxford, studying tech ethics? What did you go there to learn about?

James: What led me to go to Oxford to study the ethics of persuasion and attention was that I didn’t see this reorientation of advertising around people’s true goals and intentions ultimately winning out across the industry. In fact, I saw something really concerning happening in the opposite direction. The old attention-grabby forms of advertising were being uncritically reimposed in the new digital environment, only now in a much more sophisticated and unrestrained manner. These attention-grabby goals, which are goals that no user anywhere has ever had for themselves, seemed to be cannibalizing the design goals of the medium itself.

In the past advertising had been described as a kind of “underwriting” of the medium, but now it seemed to be “overwriting” it. Everything was becoming an ad. My whole digital environment seemed to be transmogrifying into some weird new kind of adversarial persuasion machine. But persuasion isn’t even the right word for it. It’s something stronger than that, something more in the direction of coercion or manipulation that I still don’t think we have a good word for. When I looked around and didn’t see anybody talking about the ethics of that stuff, in particular the implications it has for human freedom, I decided to go study it myself.

Greg: How stressful of a time was that for you when you were realizing that you needed to make such a big change or that you might be making such a big change?

James: The big change being shifting to do doctoral work?

Greg: Well that, but really I’m trying to understand what it was like to go from a very high place in the tech world to becoming essentially a philosopher critic of your former work.

James: A lot of people I talked to didn’t understand why I was doing it. Friends, coworkers, I think they didn’t quite understand why it was worthy of such a big step, such a big change in my personal life to try to interrogate this question. There was a bit of, not loneliness, but a certain kind of motivational isolation, I guess. But since then, it’s certainly been heartening to see many of them come to realize why I felt it was so important. Part of that is because these questions are so much more in the foreground of societal awareness now than they were then.

Liberation in the age of attention

Greg: You write about how when you were younger you thought “there were no great political struggles left.” Now you’ve said, “The liberation of human attention may be the defining moral and political struggle of our time.” Tell me about that transition, intellectually or emotionally or both. How good did you think the world was back then, and how concerned are you now?


James: I think a lot of people in my generation grew up with this feeling that there weren’t really any more existential threats to the liberal project left for us to fight against. It’s the feeling that, you know, the car’s already been built, the dashboard’s been calibrated, and now to move humanity forward you just kind of have to hold the wheel straight and get a good job and keep recycling and try not to crash the car as we cruise off into this ultra-stable sunset at the end of history.

What I’ve realized, though, is that this crisis of attention brought upon by adversarial persuasive design is like a bucket of mud that’s been thrown across the windshield of the car. It’s a first-order problem. Yes, we still have big problems to solve like climate change and extremism and so on. But we can’t solve them unless we can give the right kind of attention to them. In the same way that, if you have a muddy windshield, yeah, you risk veering off the road and hitting a tree or flying into a ravine. But the first thing is that you really need to clean your windshield. We can’t really do anything that matters unless we can pay attention to the stuff that matters. And our media is our windshield, and right now there’s mud all over it.

Greg: One of the terms that you either coin or use for the situation that we find ourselves in now is the age of attention.

James: I use this phrase “Age of Attention” not so much to advance it as a serious candidate for what we should call our time, but more as a rhetorical counterpoint to the phrase “Information Age.” It’s a reference to the famous observation of Herbert Simon, which I discuss in the book, that when information becomes abundant it makes attention the scarce resource.

Much of the ethical work on digital technology so far has addressed questions of information management, but far less has addressed questions of attention management. If attention is now the scarce resource so many technologies are competing for, we need to give more ethical attention to attention.

Greg: Right. I just want to make sure people understand how severe this may be, how severe you think it is. I went into your book already feeling totally distracted and surrounded by totally distracted people. But when I finished the book, and it’s one of the most marked-up books I’ve ever owned by the way, I came away with the sense of acute crisis. What is being done to our attention is affecting us profoundly as human beings. How would you characterize it?

James: Thanks for giving so much attention to the book. Yeah, these ideas have very deep roots. In the Dhammapada the Buddha says, “All that we are is a result of what we have thought.” The book of Proverbs says, “As a man thinketh in his heart, so is he.” Simone Weil wrote that “It is not we who move, but images pass before our eyes and we live them.” It seems to me that attention should really be seen as one of our most precious and fundamental capacities, cultivating it in the right way should be seen as one of the greatest goods, and injuring it should be seen as one of the greatest harms.

In the book, I was interested to explore whether the language of attention can be used to talk usefully about the human will. At the end of the day I think that’s a major part of what’s at stake in the design of these persuasive systems, the success of the human will.

“Want what we want?”


Greg: To translate those concerns about “the success of the human will” into simpler terms, I think the big concern here is, what happens to us as human beings if we find ourselves waking up in the morning and going to bed at night wanting things that we really only want because AI and algorithms have helped convince us we want them? For example, we want to be on our phone chiefly because it serves Samsung or Google or Facebook or whomever. Do we lose something of our humanity when we lose the ability to “want what we want?”

James: Absolutely. I mean, philosophers call these second-order volitions as opposed to just first-order volitions. A first-order volition is, “I want to eat the piece of chocolate that’s in front of me.” But the second-order volition is, “I don’t want to want to eat that piece of chocolate that’s in front of me.” Creating those second-order volitions, being able to define what we want to want, requires that we have a certain capacity for reflection.

What you see a lot in tech design is essentially the equivalent of a circular argument about this, where someone clicks on something and then the designer will say, “Well, see, they must’ve wanted that because they clicked on it.” But that’s basically taking evidence of effective persuasion as evidence of intention, which is very convenient for serving design metrics and business models, but not necessarily a user’s interests.

AI and attention


Greg: Let’s talk about AI and its role in the persuasion you’ve been describing. You talk a number of times about the AI behind the system that beat the world champion at the board game Go. I think that’s a great example: that AI has since been deployed to keep us watching YouTube longer, and billions of dollars are literally being spent to figure out how to get us to look at one thing over another.

Why we lie to ourselves, every day

Human action requires motivation, but what exactly are those motivations? Donating money to a charity might be motivated by altruism, and yet only 1% of donations are anonymous. Donors don’t just want to be altruistic; they also want credit for that altruism, plus badges that signal their altruistic ways to others.

Worse, we aren’t even aware of our true motivations — in fact, we often strategically deceive ourselves to make our behavior appear more pure than it really is. It’s a pattern that manifests itself across all kinds of arenas, including consumption, politics, education, medicine, religion and more.

In their book Elephant in the Brain, Kevin Simler, formerly a long-time engineer at Palantir, and Robin Hanson, an associate professor of economics at George Mason University, take the most dismal parts of the dismal science of economics and weave them together into a story of humans acting badly (but believing they are great!). As the authors write in their intro, “The line between cynicism and misanthropy — between thinking ill of human motives and thinking ill of humans — is often blurry.” No kidding.

Elephant in the Brain by Kevin Simler and Robin Hanson. Oxford University Press, 2018

The eponymous elephant in the brain is essentially our self-deception and hidden motivations regarding the actions we take in everyday life. Like the proverbial elephant in the room, this elephant in the brain is visible to those who search for it, but we often avoid looking at it lest we get discouraged at our selfish behavior.

Humans care deeply about being perceived as prosocial, but we are also locked into constant competition over status, careers, and spouses. We want to signal our community spirit, but we also want to selfishly benefit from our work. We solve for this dichotomy by creating rationalizations and excuses to do both simultaneously. We give to charity for the status as well as the altruism, much as we go to college to learn, but also to earn a degree that signals to employers that we will be hard workers.

The key is that we self-deceive: we don’t realize we are taking advantage of the duality of our actions. We truly believe we are being altruistic, just as much as we truly believe we are in college to learn and explore the arts and humanities. That self-deception is critical, since it lowers the cost of demonstrating our prosocial bona fides: we would be heavily cognitively taxed if we had to constantly pretend as if we cared about the environment when what we really care about is being perceived as an ethical consumer.

Elephant in the Brain advances a bold yet synthetic thesis. Simler and Hanson build upon a number of research advances, such as Jonathan Haidt’s work on the righteous mind and Robert Trivers’ work on evolutionary psychology, to undergird their thesis in the first few chapters, and then they apply that thesis to a series of other fields (ten, in fact) in relatively brief and facile chapters to describe how the elephant in the brain affects us in every sphere of human activity.

Refreshingly, far from being polemicists, the authors are quite curious and investigatory about this pattern of human behavior, and they realize they are pushing at least some of their readers into uncomfortable territory. They even begin the book by stating that “we expect the typical reader to accept roughly two-thirds of our claims about human motives and institutions.”

Yet, the book is essentially making one claim, just applied in a myriad of ways. It’s unclear to me who the reader would be who accepts only parts of the book’s premise. Either you have come around to the cynical view of humans (pre or post book), or you haven’t — there doesn’t seem to me to be a middle point between those two perspectives.

Worse, even after reading the book, I am left unsure of what exactly to do with the thesis now that I have read it. There is something of a lukewarm conclusion in which the authors push for us to have greater situational awareness, and a short albeit excellent section on designing better institutions to account for hidden motivations. The book’s observations ultimately don’t lead to any greater project, no path toward a more enlightened society. That’s fine, but disappointing.

Indeed, for a book that arguably strives to be optimistic, I fear its results will be nothing more than cynical fodder for Silicon Valley product designers. Don’t design products for what humans say they want, but design them to punch the buttons of their hidden motivations. Viewed in this light, Elephant in the Brain is perhaps a more academic version of the Facebook product manual.

The dismal science is dismal precisely because of this cynicism: because as a project, as a set of values, it leads pretty much nowhere. Everyone is secretly selfish and obsessed with status, and they don’t even know it. As the authors conclude in their final line, “We may be competitive social animals, self-interested and self-deceived, but we cooperated our way to the god-damned moon.” Yes we did, and it is precisely that surprise from such a dreary species that we should take solace in. There is indeed an elephant in our brain, but its influence can wax and wane — and ultimately humans hold their agency in their own hands.

Hate speech, collusion, and the constitution

Half an hour into their two-hour testimony on Wednesday before the Senate Intelligence Committee, Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey were asked about collaboration between social media companies. “Our collaboration has greatly increased,” Sandberg stated before turning to Dorsey and adding that Facebook has “always shared information with other companies.” Dorsey nodded in response, and noted for his part that he’s very open to establishing “a regular cadence with our industry peers.”

Social media companies have established extensive policies on what constitutes “hate speech” on their platforms. But discrepancies between these policies open the possibility for propagators of hate to game the platforms and still get their vitriol out to a large audience. Collaboration of the kind Sandberg and Dorsey discussed can lead to a more consistent approach to hate speech that will prevent the gaming of platforms’ policies.

But collaboration between competitors as dominant as Facebook and Twitter are in social media poses an important question: would antitrust or other laws make their coordination illegal?

The short answer is no. Facebook and Twitter are private companies that get to decide what user content stays and what gets deleted off of their platforms. When users sign up for these free services, they agree to abide by their terms. Neither company is under a First Amendment obligation to keep speech up. Nor can it be said that collaboration on platform safety policies amounts to collusion.

This could change based on an investigation into speech policing on social media platforms being considered by the Justice Department. But it’s extremely unlikely that Congress would end up regulating what platforms delete or keep online – not least because it may violate the First Amendment rights of the platforms themselves.

What is hate speech anyway?

Trying to find a universal definition for hate speech would be a fool’s errand, but in the context of private companies hosting user generated content, hate speech for social platforms is what they say is hate speech.

Facebook’s 26-page Community Standards include a whole section on how Facebook defines hate speech. For Facebook, hate speech is “anything that directly attacks people based on . . . their ‘protected characteristics’ — race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability or disease.” While that might be vague, Facebook then goes on to give specific examples of what would and wouldn’t amount to hate speech, all while making clear that there are cases – depending on the context – where speech will still be tolerated if, for example, it’s intended to raise awareness.

Twitter uses a “hateful conduct” prohibition which they define as promoting “violence against or directly attacking or threatening other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.” They also prohibit hateful imagery and display names, meaning it’s not just what you tweet but what you also display on your profile page that can count against you.

Both companies constantly reiterate and supplement their definitions as new test cases arise and as words take on new meaning. For example, the two common slang words used by Russians to describe Ukrainians and by Ukrainians to describe Russians were determined to be hate speech after war erupted in Eastern Ukraine in 2014. An internal review by Facebook found that what used to be common slang had turned into derogatory, hateful language.

Would collaboration on hate speech amount to anticompetitive collusion?

Under U.S. antitrust laws, companies cannot collude to make anticompetitive agreements or try to monopolize a market. A company which becomes a monopoly by having a superior product in the marketplace doesn’t violate antitrust laws. What does violate the law is dominant companies making an agreement – usually in secret – to deceive or mislead competitors or consumers. Examples include price fixing, restricting new market entrants, or misrepresenting the independence of the relationship between competitors.

A Pew survey found that 68% of Americans use Facebook. According to Facebook’s own records, the platform had a whopping 1.47 billion daily active users on average for the month of June and 2.23 billion monthly active users as of the end of June – with over 200 million in the US alone. While Twitter doesn’t disclose its number of daily users, it does publish the number of monthly active users which stood at 330 million at last count, 69 million of which are in the U.S.

There can be no question that Facebook and Twitter are overwhelmingly dominant in the social media market. That kind of dominance has led to calls for breaking up these giants under antitrust laws.

Would those calls hold more credence if the two social giants began coordinating their policies on hate speech?

The answer is probably not, but it does depend on exactly how they coordinated. Social media companies like Facebook, Twitter, and Snapchat have grown large internal product policy teams that decide the rules for using their platforms, including on hate speech. If these teams were to get together behind closed doors and coordinate policies and enforcement in a way that would preclude smaller competitors from being able to enter the market, then antitrust regulators may get involved.

Antitrust would also come into play if, for example, Facebook and Twitter got together and decided to charge twice as much for advertising that includes hate speech (an obviously absurd scenario) – in other words, using their market power to affect pricing of certain types of speech that advertisers use.

In fact, coordination around hate speech may reduce anti-competitive concerns. Given the high user engagement around hate speech, banning it could lead to reduced profits for the two companies and provide an opening to upstart competitors.

Sandberg and Dorsey’s testimony Wednesday didn’t point to executives hell-bent on keeping competition out through collaboration. Rather, their potential collaboration is probably better seen as an industry deciding on “best practices,” a common occurrence in other industries including those with dominant market players.

What about the First Amendment?

Private companies are not subject to the First Amendment. The Constitution applies to the government, not to corporations. A private company, no matter its size, can ignore your right to free speech.

That’s why Facebook and Twitter already can and do delete posts that contravene their policies. Calling for the extermination of all immigrants, referring to Africans as coming from shithole countries, and even anti-gay protests at military funerals may be protected in public spaces, but social media companies get to decide whether they’ll allow any of that on their platforms. As Harvard Law School’s Noah Feldman has stated, “There’s no right to free speech on Twitter. The only rule is that Twitter Inc. gets to decide who speaks and listens–which is its right under the First Amendment.”

Instead, when it comes to social media and the First Amendment, courts have been more focused on not allowing the government to keep citizens off of social media. Just last year, the U.S. Supreme Court struck down a North Carolina law that made it a crime for a registered sex offender to access social media if children use that platform. During the hearing, the justices asked the government probing questions about the rights of citizens to free speech on social media, from Facebook, to Snapchat, to Twitter and even LinkedIn.

Justice Ruth Bader Ginsburg made clear during the hearing that restricting access to social media would mean “being cut off from a very large part of the marketplace of ideas [a]nd [that] the First Amendment includes not only the right to speak, but the right to receive information.”

The Court ended up deciding that the law violated the fundamental First Amendment principle that “all persons have access to places where they can speak and listen,” noting that social media has become one of the most important forums for expression of our day.

Lower courts have also ruled that public officials who block users off their profiles are violating the First Amendment rights of those users. Judge Naomi Reice Buchwald, of the Southern District of New York, decided in May that Trump’s Twitter feed is a public forum. As a result, she ruled that when Trump blocks citizens from viewing and replying to his posts, he violates their First Amendment rights.

The First Amendment doesn’t mean Facebook and Twitter are under any obligation to keep up whatever you post, but it does mean that the government can’t just ban you from accessing your Facebook or Twitter accounts – and probably can’t block you off of their own public accounts either.

Collaboration is coming?

Sandberg made clear in her testimony on Wednesday that collaboration is already happening when it comes to keeping bad actors off of platforms. “We [already] get tips from each other. The faster we collaborate, the faster we share these tips with each other, the stronger our collective defenses will be.”

Dorsey for his part stressed that keeping bad actors off of social media “is not something we want to compete on.” Twitter is here “to contribute to a healthy public square, not compete to have the only one, we know that’s the only way our business thrives and helps us all defend against these new threats.”

He even went further. When it comes to the drafting of their policies, beyond collaborating with Facebook, he said he would be open to a public consultation. “We have real openness to this. . . . We have an opportunity to create more transparency with an eye to more accountability but also a more open way of working – a way of working for instance that allows for a review period by the public about how we think about our policies.”

I’ve already argued why tech firms should collaborate on hate speech policies; the question that remains is whether doing so would be legal. The First Amendment does not apply to social media companies. Antitrust laws don’t seem to stand in their way either. And based on how Senator Burr, Chairman of the Senate Select Committee on Intelligence, chose to close the hearing, government seems supportive of social media companies collaborating. Addressing Sandberg and Dorsey, he said, “I would ask both of you. If there are any rules, such as any antitrust, FTC, regulations or guidelines that are obstacles to collaboration between you, I hope you’ll submit for the record where those obstacles are so we can look at the appropriate steps we can take as a committee to open those avenues up.”

It’s time for Facebook and Twitter to coordinate efforts on hate speech

Since the election of Donald Trump in 2016, there has been burgeoning awareness of the hate speech on social media platforms like Facebook and Twitter. While activists have pressured these companies to improve their content moderation, few groups (outside of the German government) have outright sued the platforms for their actions.

That’s because of a legal distinction between media publications and media platforms that has made solving hate speech online a vexing problem.

Take, for instance, an op-ed published in the New York Times calling for the slaughter of an entire minority group.  The Times would likely be sued for publishing hate speech, and the plaintiffs may well be victorious in their case. Yet, if that op-ed were published in a Facebook post, a suit against Facebook would likely fail.

The reason for this disparity? Section 230 of the Communications Decency Act (CDA), which provides platforms like Facebook with a broad shield from liability when a lawsuit turns on what its users post or share. The latest uproar against Alex Jones and Infowars has led many to call for the repeal of section 230 – but that may lead to government getting into the business of regulating speech online. Instead, platforms should step up to the plate and coordinate their policies so that hate speech will be considered hate speech regardless of whether Jones uses Facebook, Twitter or YouTube to propagate his hate. 

A primer on section 230 

Section 230 is considered a bedrock of freedom of speech on the internet. Passed in the mid-1990s, it is credited with freeing platforms like Facebook, Twitter, and YouTube from the risk of being sued for content their users upload, and therefore powering the exponential growth of these companies. If it weren’t for section 230, today’s social media giants would have long been bogged down with suits based on what their users post, with the resulting necessary pre-vetting of posts likely crippling these companies altogether. 

Instead, in the more than twenty years since its enactment, courts have consistently found section 230 to be a bar to suing tech companies for user-generated content they host. And it’s not only social media platforms that have benefited from section 230; sharing economy companies have used section 230 to defend themselves, with the likes of Airbnb arguing they’re not responsible for what a host posts on their site. Courts have even found section 230 broad enough to cover dating apps. When a man sued one for not verifying the age of an underage user, the court tossed out the lawsuit finding an app user’s misrepresentation of his age not to be the app’s responsibility because of section 230.

Private regulation of hate speech 

Of course, section 230 has not meant that hate speech online has gone unchecked. Platforms like Facebook, YouTube and Twitter all have their own extensive policies prohibiting users from posting hate speech. Social media companies have hired thousands of moderators to enforce these policies and to hold violating users accountable by suspending them or blocking their access altogether. But the recent debacle with Alex Jones and Infowars presents a case study on how these policies can be inconsistently applied.  

Jones has for years fabricated conspiracy theories, like the one claiming that the Sandy Hook school shooting was a hoax and that Democrats run a global child-sex trafficking ring. With thousands of followers on Facebook, Twitter, and YouTube, Jones’ hate speech has had real life consequences. From the brutal harassment of Sandy Hook parents to a gunman storming a pizza restaurant in D.C. to save kids from the restaurant’s nonexistent basement, his messages have had serious deleterious consequences for many. 

Alex Jones and Infowars were finally suspended from ten platforms by our count – with even Twitter falling in line and suspending him for a week after first dithering. But the varying and delayed responses exposed how different platforms handle the same speech.  

Inconsistent application of hate speech rules across platforms, compounded by recent controversies involving the spread of fake news and the contribution of social media to increased polarization, have led to calls to amend or repeal section 230. If the printed press and cable news can be held liable for propagating hate speech, the argument goes, then why should the same not be true online – especially when fully two-thirds of Americans now report getting at least some of their news from social media.  Amid the chorus of those calling for more regulation of tech companies, section 230 has become a consistent target. 

Should hate speech be regulated? 

But if you need convincing as to why the government is not best placed to regulate speech online, look no further than Congress’s own wording in section 230. The section enacted in the mid-90s states that online platforms “offer users a great degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops” and “a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.”  

Section 230 goes on to declare that it is the “policy of the United States . . . to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet.”  Based on the above, section 230 offers the now infamous liability protection for online platforms.  

From the simple fact that most of what we see on our social media is dictated by algorithms over which we have no control, to the Cambridge Analytica scandal, to increased polarization because of the propagation of fake news on social media, one can quickly see how Congress’s words in 1996 read today as a catalogue of inaccurate predictions. Even Ron Wyden, one of the original drafters of section 230, himself admits today that the drafters never expected an “individual endorsing (or denying) the extermination of millions of people, or attacking the victims of horrific crimes or the parents of murdered children” to be enabled through the protections offered by section 230.

It would be hard to argue that today’s Congress – having shown little understanding in recent hearings of how social media operates to begin with – is any more qualified at predicting the effects of regulating speech online twenty years from now.   

More importantly, the burden of complying with new regulations will definitely result in a significant barrier to entry for startups and therefore have the unintended consequence of entrenching incumbents. While Facebook, YouTube, and Twitter may have the resources and infrastructure to handle compliance with increased moderation or pre-vetting of posts that regulations might impose, smaller startups will be at a major disadvantage in keeping up with such a burden.

Last chance before regulation 

The answer has to lie with the online platforms themselves. Over the past two decades, they have amassed a wealth of experience in detecting and taking down hate speech. They have built up formidable teams with varied backgrounds to draft policies that take into account an ever-changing internet. Their profits have enabled them to hire away top talent, from government prosecutors to academics and human rights lawyers.  

These platforms also have been on a hiring spree in the last couple of years to ensure that their product policy teams – the ones that draft policies and oversee their enforcement – are more representative of society at large. Facebook proudly announced that its product policy team now includes “a former rape crisis counselor, an academic who has spent her career studying hate organizations . . . and a teacher.” Gone are the days when a bunch of engineers exclusively decided where to draw the lines. Big tech companies have been taking the drafting and enforcement of their policies ever more seriously.

What they now need to do is take the next step and start to coordinate policies so that those who wish to propagate hate speech can no longer game policies across platforms. Waiting for controversies like Infowars to become a full-fledged PR nightmare before taking concrete action will only increase calls for regulation. Proactively pooling resources when it comes to hate speech policies and establishing industry-wide standards will provide a defensible reason to resist direct government regulation.

The social media giants can also build public trust by helping startups get up to speed on the latest approaches to content moderation. While any industry consortium around coordinating hate speech is certain to be dominated by the largest tech companies, they can ensure that policies are easy to access and widely distributed.

Coordination between fierce competitors may sound counterintuitive. But the common problem of hate speech and the gaming of online platforms by those trying to propagate it call for an industry-wide response. Precedent exists for tech titans coordinating when faced with a common threat. Just last year, Facebook, Microsoft, Twitter, and YouTube formalized their “Global Internet Forum to Counter Terrorism” – a partnership to curb the threat of terrorist content online. Fighting hate speech is no less laudable a goal.

Self-regulation is an immense privilege. To the extent that big tech companies want to hold onto that privilege, they have a responsibility to coordinate the policies that underpin their regulation of speech and to enable startups and smaller tech companies to get access to these policies and enforcement mechanisms.

M17 delays IPO debut after pricing this morning on NYSE

M17 Entertainment, a Taipei-based live streaming and dating app group, priced its IPO this morning on the NYSE and was expected to open trading today, according to its final press release. But with just a little more than two hours to go before the market close, it’s still not trading, and no one seems to know why.

An interview I had scheduled with the CEO earlier this afternoon was canceled at the last minute, with the company’s representative saying that M17 couldn’t comment since its shares were not yet actively trading, and thus the company remains under an SEC-mandated quiet period.

M17 has had a rocky non-debut so far. Originally targeting a fundraise of $115 million of American Depositary Receipts (shares of foreign companies listed domestically on the NYSE), the company concluded its roadshow raising only about half of its target, for a final investment of $60.1 million. The company priced its ADR shares at $8 each, with each ADR representing eight shares of the stock’s Class A security.
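For readers keeping score, here is a quick back-of-the-envelope sketch of what those deal terms imply (illustrative only; the authoritative share counts are in the company’s filing, not this arithmetic):

```python
# Rough arithmetic on the reported deal terms (illustrative only).
raised_usd = 60.1e6      # final amount raised in the offering
price_per_adr = 8.0      # offer price per ADR
shares_per_adr = 8       # Class A shares underlying each ADR

adrs_sold = raised_usd / price_per_adr
underlying_shares = adrs_sold * shares_per_adr

print(f"ADRs sold: ~{adrs_sold / 1e6:.1f} million")                          # ~7.5 million ADRs
print(f"Underlying Class A shares: ~{underlying_shares / 1e6:.1f} million")  # ~60 million shares
```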

My colleague Jon Russell has covered the company’s rapid growth over the past three years. It was formed from the merger of dating app company Paktor and live streaming business 17 Media. Joseph Phua, who was CEO of Paktor, became CEO of the joint M17 company following the merger. Together, the two halves have raised tens of millions in venture capital.

M17 provides live streaming and dating apps throughout “Developed Asia”

The company’s main offering is a live streaming product where creators can build their fanbases and brands. Fans can purchase virtual gifts to send to their favorite artists, and those purchases are proving to be extraordinarily lucrative for the company. The company, according to its amended F-1 statement, has seen tremendous revenue growth, netting $37.9 million of revenue in the first three months of this year. The company has also been able to attract more live streaming talent, increasing its contracted artists from 999 at the end of December 2016 to 7,719 at the end of March this year.

That’s where the good news ends for the company though. Despite that revenue growth, operating losses are torrential, with the company losing $24.8 million in the first three months of this year. The company in its statement says that it has $31.4 million in cash and cash equivalents, giving it limited runway to continue operations without a strong IPO debut.

User growth has been mostly stagnant. Monthly active users increased from 1.5 million to 1.7 million between March 31, 2017 and March 31, 2018. What the company has succeeded in doing is monetizing those users much better. The percentage of users paying on the platform has more than doubled over the same time period, and the value of those users has increased more than 40% to $355 per user per month.

The big challenge for M17 is revenue quality. Live streaming represents 91.4% of the company’s revenues, but those revenues are concentrated on a handful of “whales” who buy a freakishly high number of virtual gifts. The company’s top ten users represent 11.8% of all revenues (that’s $447,220 a user in the first three months this year!), and its top 500 users accounted for almost a majority of total revenues. That concentration on the demand side is just as heavy on the supply side. M17’s top 100 artists accounted for more than a third of the company’s revenue.
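Those concentration figures are worth double-checking against the reported revenue; here is a quick sanity check (illustrative only, using the numbers cited above):

```python
# Back-of-the-envelope check on the revenue concentration figures cited above.
q1_revenue = 37.9e6     # reported revenue for the first three months of the year
top10_share = 0.118     # top ten users' share of all revenues

top10_revenue = q1_revenue * top10_share
per_user = top10_revenue / 10

print(f"Top-10 users' combined revenue: ${top10_revenue:,.0f}")  # ~$4.47 million
print(f"Average per top-10 user:        ${per_user:,.0f}")       # ~$447,220, matching the figure above
```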

That concentration has improved over the past few months, according to the company’s filing. But Wall Street investors have learned after Zynga and other whale-based revenue models that the sustainability of these businesses can be tough.

Finally, one complication for many investors wary of the increasing use of dual-class stock issues is the governance of the company. Phua, the CEO, will have 56.3% of the voting rights of the company, and M17 will be a controlled company under NYSE rules, according to the company’s amended filing. Class B shares carry 20 votes each, versus one vote for each Class A share.
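To see how much leverage a 20:1 structure gives insiders, here is a generic illustration (this is not M17’s actual cap table, which isn’t broken out in the excerpt above):

```python
# Generic illustration of 20:1 super-voting shares (not M17's actual share counts).
def voting_share(class_b_fraction, votes_per_b=20, votes_per_a=1):
    """Fraction of total votes held by Class B holders, given their share of total equity."""
    b_votes = class_b_fraction * votes_per_b
    a_votes = (1 - class_b_fraction) * votes_per_a
    return b_votes / (b_votes + a_votes)

# Even a small slice of the equity translates into control of the vote.
print(f"{voting_share(0.05):.1%}")  # ~51.3% of votes from just 5% of shares
print(f"{voting_share(0.10):.1%}")  # ~69.0% of votes from 10% of shares
```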

All of this is to say that while the company has had some dizzying growth in its revenue numbers over the past 24 months, that success is moderated by some significant challenges in revenue concentration that will have to be a top priority for M17 going forward. Why the company priced and hasn’t traded though remains a mystery, and we have reached out for more comments.

Subscription hell

Another week, another paywall. This time, it’s Bloomberg, which announced that it would be adding a comprehensive paywall to its news service and television channel (except TicToc, its media partnership with Twitter). A paywall was hardly a surprise, but what was surprising was the price: the standard subscription is $35 a month (up from $0 a month), or $40 a month including access to online and print editions of Businessweek.

And people say avocado toast is expensive.

That’s not the only subscription coming up though. Now Facebook is considering adding an ad-free subscription option. These rumors have come and gone in the past, with no sign of change in the company’s resolute focus on advertising as its core business model. Post-Cambridge Analytica and post-GDPR though, it seems the company’s position is more malleable, and could be following the plan laid out by my colleague Josh Constine recently. He pegged the potential price at $11 a month, given the company’s revenue per user.

I’m an emphatic champion of subscription models, particularly in media. Subscriptions align incentives in a way that advertising can never do, while also avoiding the morass of privacy and ethics that plague ad targeting. Subscription revenues are also more reliable than ad dollars, making it easier to budget and improve operational efficiency for an organization.

Incentive alignment is one thing, and my wallet is another. All of these subscriptions are starting to add up. These days, my media subscriptions are hovering around $80 a month, and I don’t even have TV. Storage costs for Google, Apple, and Dropbox are another $13 a month. Cable and cell service are another $200 a month combined. Software subscriptions are probably about $20 a month (although so many are annualized it’s hard to keep track of them). Amazon Prime and a few others total around $25 a month.
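Tallied up, that personal ledger looks something like this (a rough sketch using the rounded figures above):

```python
# Tallying the monthly subscription costs listed above (rounded figures from the text).
monthly_costs = {
    "media subscriptions": 80,
    "cloud storage (Google, Apple, Dropbox)": 13,
    "cable + cell service": 200,
    "software subscriptions": 20,
    "Amazon Prime and others": 25,
}

total_monthly = sum(monthly_costs.values())
print(f"Monthly total: ${total_monthly}")       # $338
print(f"Annualized:    ${total_monthly * 12}")  # $4,056 a year, before any new paywalls
```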

Worse, subscriptions aren’t getting any cheaper. Amazon Prime just increased its price to $120 a year, Netflix increased its popular middle-tier plan to $11 a month late last year, and YouTube increased its TV pricing to $40 a month last month. Add in new paywalls, and the burden of subscriptions is rising far faster than consumer incomes.

I’m frustrated with this hell. I’m frustrated that the web’s promise of instant and free access to the world’s information appears to be dying. I’m frustrated that subscription usually means just putting formerly free content behind a paywall. I’m frustrated that the price for subscriptions seems wildly high compared to the ad dollars that the fees substitute for. And I’m frustrated that subscription pricing rarely seems to account for other subscriptions I have, even when content libraries are similar.

Subscriptions can be a great tool, but everyone seems to be doing them wrong. We need to transform our thinking here if we are to move on from the manacles of the ad networks.

Before we dive in though, let’s be clear: the web needs a business model. We didn’t need paywalls on the early web because we focused on plain text from other users. Plain text is easier to produce, lowering the friction for people to contribute, and it’s also cheaper to store and transmit, lowering the cost of bandwidth.

Today’s consumers though have significantly higher standards than the original users of the web. Consumers want immersive experiences, well-designed pages with fonts, graphics, photos, and videos coming together into a compelling format. That “quality” costs enormous sums in engineering and design talent, not to mention massively increasing bandwidth and storage costs.

Take my colleague Connie Loizos’ article from yesterday reporting on a new venture fund. The text itself is about 3.5 kilobytes uncompressed, but the total payload of the page, if nothing is cached, is more than 10 MB, or roughly 3,000x the data usage of the actual text itself. This pattern has become so common that it has been called the website obesity crisis. Yet all of our research shows people want high-definition images with their stories, instant loading of articles on the site, and interactivity. Those features have to be paid for somehow, leaving us with the advertising and subscription models we see today.
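That ratio is easy to verify from the two figures alone (a quick sketch; the exact byte counts will vary by page load):

```python
# Sanity-checking the page-weight ratio cited above (exact counts vary per load).
text_bytes = 3.5 * 1024          # ~3.5 KB of uncompressed article text
page_bytes = 10 * 1024 * 1024    # ~10 MB total page payload, uncached

ratio = page_bytes / text_bytes
print(f"Payload is ~{ratio:,.0f}x the size of the text")  # ~2,926x, i.e. roughly 3,000x
```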

The other cost is content production itself. Volunteers just haven’t produced the information we are seeking. Wikipedia is an extraordinary resource, but its depth falters when we start looking for information about our local communities, or news, or individuals who aren’t famous. The reality is that information gathering is hard work, and in a capitalist system, we need to compensate people to do it. My colleagues and I are passionate about startups and technology, but we need to eat to publish.

While an open, free, and democratized web is ideal, these two challenges demonstrate that a business model had to be attached to make it function. Advertising is one such model, with massive privacy violations required to optimize it. The other approach is charging for access.

Unfortunately, subscription seems to be an area filled with product engineers and marketers led by brain-dead executives. The default choice of Bloomberg this week and so many other publications is to simply put formerly free content behind a paywall. No consumer wants to pay for something they formerly got for free, and yet we repeatedly see examples of subscriptions designed this way.

I don’t know when media started hiring IRS accountants, but subscriptions should be seen as an upgrade, not a tax. A subscription should provide new features, content, and capabilities that didn’t exist before while maintaining the former product that consumers have enjoyed for years.

Take MoviePass for instance. Consumers can continue to watch movies as they always have in the past, but now they have a new subscription option to watch potentially more movies for a set price. Among my friends, MoviePass has completely changed the way they think of films. Instead of just seeing one blockbuster every month, they are heading to an art house film because “we’ve essentially already paid for it, so why not try it?” The pricing is clearly too cheap, but that shouldn’t distract from a product that offered a completely new experience from a subscription.

The hell is even worse though. We not only get paywalls where none existed before, but the prices of those subscriptions are always vastly more expensive than consumers ever wanted. It’s not just Bloomberg and media — it’s software too. I used to write everything in Ulysses, a syncing Markdown editor for OS X and iOS. I paid $70 to buy the apps, but then the company switched to a $40 a year annual subscription, and as the dozens of angry reviews and comments illustrate, that price is vastly out of proportion to the cost of providing the software (which, I might add, is entirely hosted on iCloud infrastructure).

For product marketers, the default mentality is to extract a lot of value from the 1% of readers or users that are going to convert to paid. Subscriptions are always positioned as all-or-nothing, with limited metering or tiering, to try to force the conversion. To my mind though, the question is not how to get 1% of readers to pay an exorbitant price, but how to get say 20% of your readers to pay you a cheaper price. It’s not about exclusion, but about participation.

One way we could fix that situation would be to allow subscriptions to be bundled together more cheaply. We are starting to see this already: Spotify, Hulu, and Scribd appear to be investigating a deal in which consumers can get a joint subscription to these services at a lower rate. Setapp is a set of more than one hundred OS X apps that come bundled for about $10 a month.

I’d love to see more of these partnerships, because they are far fairer to the consumer and ultimately allow smaller subscription companies to compete with the likes of Google, Amazon, Apple, and others. Cross-marketing lowers subscriber acquisition costs, and those savings should ultimately flow down to the consumer.

Subscription hell is real, but that doesn’t mean the business model is flawed. Rather, we need to completely transform our thinking around these models, including the marketing behind them and the features that they offer. We also need to consider consumers and their wallets more holistically, since no one buys a subscription in a vacuum. For too long, paywall playbooks have just been copied rather than innovated upon. It’s time for product leaders to step up and build a better future.

As Chinese censorship intensifies, gays are back while teenage mothers and tattoos are out

Following the passage of a new cybersecurity law and the removal of term limits from Chinese president Xi Jinping, China’s government is conducting a comprehensive crackdown on online discussions and content, with few companies spared the rod by the central government.

Among the casualties has been Bytedance, the high-flying media unicorn valued at $20 billion, which was forced to publicly apologize for content that degraded the character of the nation. The government forced the company to shut down its popular Neihan Duanzi comedy app and to pull its headline news app, Jinri Toutiao, from app stores for three weeks. The company announced that it would expand its ranks of human censors from 6,000 to 10,000.

Another high-flying media unicorn, Kuaishou, has been under fire for allowing teenage moms to be depicted in a positive light. The app is unique among China’s top social networks in centering on ordinary Chinese, particularly people outside of large cities like Beijing and Shanghai. The company has faced public criticism from the central television channel CCTV, as well as from regulators who have demanded it act more aggressively in removing such content, a demand the company has acquiesced to.

Meanwhile, over at Sina Weibo, China’s Twitter-like service, the company announced on Friday that it would ban violent and gay content from its service, following instructions from the State Administration of Press, Publication, Radio, Film, and Television. LGBT content has been in the crosshairs of the country’s media regulators for years; for example, censors banned “abnormal sexual behaviors” from being depicted in any media or mobile apps in 2017, a term which includes homosexuality.

However, in a rare about-face for corporate China and its internet censors, the company announced that it would reverse its ban on LGBT-themed content, following thousands of comments and discussions online from gay Chinese citizens. The company’s crackdown on other content, though, is expected to continue.

There are other forms of censorship underway in China these days. The country’s soccer players were banned a few weeks ago from displaying tattoos, since they are said to depict a “dispirited culture,” which is banned across all media. Perhaps most importantly, the government has banned the use of private VPNs in order to better control online discourse.

China’s censorship regime is certainly not new, but its intensity around culture and how it is depicted is relatively novel. While the Chinese government has generally kept a tight lid on political dissent, particularly since the Tiananmen Protests in 1989, it has generally used a lighter touch on non-political subjects.

However, the Communist Party of China is now attempting to control the culture much more directly, not just on broadcast media like television, but also on apps and devices throughout the Middle Kingdom.

Following the National People’s Congress in March, the regulation of China’s media has been reassigned from the government to the party’s Central Propaganda Department. Since then, the party has been working in overdrive to tamp down content that it deems to be foreign, crude, vulgar, or not in the best spirit of the Chinese people.

While China’s media startups generally focus heavily on the mainland, their apps are also available in app stores in other countries. Bytedance, which was forced to shut down its comedy app and temporarily pull its news app, also owns musical.ly, the popular music video app used by approximately 14% of American teenagers, according to some estimates. China’s censorship regime doesn’t stop at the nation’s borders, then, but can extend its influence far wider.

Another example is Grindr, the popular gay dating app, which sold a majority share of its ownership to Beijing Kunlun Tech Company in early 2016.

The crackdown on speech is expected to continue over the coming weeks as the new rules are applied uniformly across the country. The situation is a reminder of the challenges facing Chinese companies operating in such a heavily controlled market.

Although there are many trade tensions between the U.S. and China these days, a key issue has been access to the Chinese market for American technology companies. Even if China were to open its borders though, it remains unclear how U.S. companies could faithfully apply the law of China while maintaining their own moral standards.

Congress should demand Zuckerberg move to “one share, one vote”

Mark Zuckerberg is an autocrat, and not hypothetically. Through the special voting rights attached to Facebook’s Class B shares, he wields absolute command of the company while owning just a handful of percentage points of its equity.

Like any autocrat, he has taken extraordinary measures to maintain control over his realm. He produced a plan exactly two years ago that would have zeroed out the voting rights for everyday shareholders with a new voteless Class C share, only to pull back at the last minute as a Delaware court case was set to begin. He has received the irrevocable proxies of many Facebook insiders, allowing him to control their votes indefinitely. Plus, any Class B shares that are sold are converted to Class A shares, allowing him to continue to consolidate power as people leave.

And now, borrowing a page straight out of George Orwell’s 1984, he has even tried to retract and disappear his own messages to others on his platform (a practice that was itself retracted once it became public).

While Congress is right to focus on Cambridge Analytica, and electoral malfeasance, and political ads, and a whole crop of other controversies surrounding Facebook, it should instead direct its attention to the single solution that would begin to solve all of this: dissolve Facebook’s dual-class share structure and thereby democratize its ownership.

Just as congressmen are elected under the principle of “one man, one vote,” Congress should demand that Facebook follow the higher standard used by most other publicly listed companies and return to “one share, one vote.”

Zuckerberg himself should certainly agree with this. After all, the original logic for creating a voteless share class was that the company’s financial performance was strong and Zuckerberg needed to be protected in order to keep it that way. The plan was announced the same quarter that Facebook crushed its earnings, and there was a clear, if implied, connection between those results and the controlling stake held by Zuckerberg.

Yet in the two months from its intraday peak of $195.32 per share on February 1, 2018, to today’s price of $160, Facebook has lost more than $100 billion in market cap. If Congressional inquiries eventually lead to further regulation, that could erode the value of the stock even more. It’s easy to argue that a chief executive should be protected when the performance of a company is rocketing upward. It’s much harder when everything is crumbling and no one is being held accountable.
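As a rough sanity check on that figure, here is a back-of-the-envelope sketch; the share count is my own approximation, not an official number:

```python
# Back-of-the-envelope check of the market-cap decline, assuming roughly
# 2.9 billion shares outstanding (an approximation, not an official figure).
shares_outstanding = 2.9e9
peak_price = 195.32
current_price = 160.00

lost_value = shares_outstanding * (peak_price - current_price)
print(f"Approximate market cap lost: ${lost_value / 1e9:.0f} billion")  # ~$102 billion
```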

Shareholders may have been blinded by Facebook’s dizzying growth over the past few years, but we now know that the edifice of that growth is far more tenuous than we ever knew before. Zuckerberg’s 15-year apology tour can no longer sustain the view that corporate governance should be ignored for the good of the share price.

There’s just one problem though, and it is the problem that confronts any country with a tyrant: shareholders have no power here to effect change. They can’t change the composition of the board; they can’t change the management team. They can’t change anything at all, since one person controls the realm with an iron fist. A proposal back in 2015 to move to “one share, one vote” was struck down at Facebook’s shareholder meeting.

I am not asking for Zuckerberg to be fired, or to resign. I think people should clean up their own messes, and few people have the means to clean up Facebook right now other than him. But I do think there should be consequences, and so far, there have been exactly zero. Zuckerberg has to personally relinquish his control, and no act of mea culpa would better show that he understands the consequences of his actions.

There is a counter-argument, which is that ravenous mobs of private investors would swoop into Facebook and force the company to steal even more data from users to sell to advertisers if Zuckerberg lost control. I am wholly unconvinced though, mostly because Facebook has basically done precisely that over its entire history. Plus, any further deterioration of trust with users would strike at the heart of its financial results.

Zuckerberg says in his prepared statement that, “My top priority has always been our social mission of connecting people, building community and bringing the world closer together.” There are few things he could do that would better build the community around Facebook’s leadership than sharing the burdens and the responsibilities with a wider, more diverse set of people. Take a page from American history, and abolish the discrimination inherent in the dual-class share vote.

RSS is undead

RSS died. Whether you blame Feedburner, or Google Reader, or Digg Reader last month, or any number of other product failures over the years, the humble protocol has managed to keep on trudging along despite all evidence that it is dead, dead, dead.

Now, with Facebook’s scandal over Cambridge Analytica, there is a whole new wave of commentators calling for RSS to be resuscitated. Brian Barrett at Wired said a week ago that “… anyone weary of black-box algorithms controlling what you see online at least has a respite, one that’s been there all along but has often gone ignored. Tired of Twitter? Facebook fatigued? It’s time to head back to RSS.”

Let’s be clear: RSS isn’t coming back alive so much as it is officially entering its undead phase.

Don’t get me wrong, I love RSS. At its core, it is a beautiful manifestation of some of the most visionary principles of the internet, namely transparency and openness. The protocol really is simple and human-readable. It feels like the web as it was originally designed: static, full-text articles in plain HTML. Perhaps most importantly, it is decentralized, with no power structure trying to stuff other content in front of your face.
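To see just how simple the format is, here is a minimal sketch: a bare-bones RSS 2.0 feed, with placeholder titles and URLs, parsed using nothing but Python’s standard library:

```python
# A bare-bones RSS 2.0 feed (placeholder titles and URLs), parsed with only
# the standard library. The format is plain, human-readable XML.
import xml.etree.ElementTree as ET

FEED = """<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example Site</title>
    <link>https://example.com</link>
    <description>Full-text articles, in the open</description>
    <item>
      <title>A story worth reading</title>
      <link>https://example.com/a-story-worth-reading</link>
      <pubDate>Mon, 09 Apr 2018 12:00:00 GMT</pubDate>
      <description>The entire article text can live right here.</description>
    </item>
  </channel>
</rss>"""

channel = ET.fromstring(FEED.encode("utf-8")).find("channel")
for item in channel.findall("item"):
    print(item.findtext("title"), "->", item.findtext("link"))
```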

It’s wonderfully idealistic, but the reality of RSS is that it lacks the features required by nearly every actor in the modern content ecosystem, and I would strongly suspect that its return is not forthcoming.

Now, it is important before diving in here to separate out RSS the protocol from RSS readers, the software that interprets that protocol. While some of the challenges facing this technology are reader-centric and therefore fixable with better product design, many of these challenges are ultimately problems with the underlying protocol itself.

Let’s start with users. I, as a journalist, love having hundreds of RSS feeds organized in chronological order, allowing me to see every single news story published in my areas of interest. That use case, though, represents a minuscule fraction of all users, most of whom aren’t paid to report on the news comprehensively. Instead, users want personalization and prioritization — they want a feed or stream that shows them the most important content first, since they are busy and lack the time to digest enormous volumes of content.

To get a flavor of this, try subscribing to the published headlines RSS feed of a major newspaper like the Washington Post, which publishes roughly 1,200 stories a day. Seriously, try it. It’s an exhausting experience wading through articles from the style and food sections just to run into the latest update on troop movements in the Middle East.
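If you want to try it programmatically, a sketch along these lines will do; the feed URL is a placeholder, and it assumes the third-party feedparser package is installed:

```python
# A quick way to feel the firehose: pull a headline feed and count the entries.
# The URL is a placeholder; substitute the publisher's actual headline feed.
# Requires the third-party feedparser package (pip install feedparser).
import feedparser

FEED_URL = "https://example.com/rss/headlines"  # placeholder, not a real feed

feed = feedparser.parse(FEED_URL)
print(f"{len(feed.entries)} entries in a single fetch")
for entry in feed.entries[:10]:
    print("-", entry.get("title", "(untitled)"))
```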

Some sites try to get around this by offering an array of RSS feeds built around keywords. Yet stories are almost always assigned more than one keyword, and keyword selection can vary tremendously in quality across sites. The result is that I see duplicate stories and still manage to miss others I wanted to see.
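A reader, or a sufficiently motivated user, can paper over some of that duplication by merging keyword feeds and collapsing entries that share a GUID or link. A rough sketch, with hypothetical feed URLs and the same third-party feedparser package assumed:

```python
# Merging several keyword feeds and collapsing duplicates by GUID/link.
# The URLs are hypothetical; requires feedparser (pip install feedparser).
import feedparser

KEYWORD_FEEDS = [
    "https://example.com/rss/keyword/startups",    # hypothetical
    "https://example.com/rss/keyword/venture",     # hypothetical
    "https://example.com/rss/keyword/technology",  # hypothetical
]

seen = set()
merged = []
for url in KEYWORD_FEEDS:
    for entry in feedparser.parse(url).entries:
        key = entry.get("id") or entry.get("link")  # GUID if present, else link
        if key and key not in seen:
            seen.add(key)
            merged.append(entry)

print(f"{len(merged)} unique stories across {len(KEYWORD_FEEDS)} keyword feeds")
```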

Ultimately, all of media is prioritization: every site, every newspaper, every broadcast has editors involved in determining the hierarchy of information presented to users. Somehow, RSS (at least in its current incarnation) never understood that. This is a failure both of the readers themselves and of the protocol, which never forced publishers to provide signals about what was most and least important.

Another enormous challenge is discovery and curation. How exactly do you find good RSS feeds? Once you have found them, how do you group and prune them over time to maximize signal? Curation is one of the biggest on-boarding challenges for social networks like Twitter and Reddit, and it has prevented both from reaching the stratospheric numbers of Facebook. The cold-start problem is perhaps RSS’ greatest failing today, although it could potentially be solved by better RSS reader software without protocol changes.
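The closest thing the ecosystem has to a curation primitive is OPML, the XML format most readers use to import and export subscription lists. A minimal sketch for bootstrapping from someone else’s exported list (the file name is a placeholder):

```python
# Reading an exported OPML subscription list, the de facto format readers use
# to share and import feed collections. The file name is a placeholder.
import xml.etree.ElementTree as ET

OPML_FILE = "subscriptions.opml"  # placeholder path to an exported list

tree = ET.parse(OPML_FILE)
for outline in tree.iter("outline"):
    feed_url = outline.get("xmlUrl")
    if feed_url:
        print(outline.get("title") or outline.get("text"), "->", feed_url)
```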

RSS’ true failings, though, are on the publisher side, with the most obvious issue being analytics. RSS doesn’t allow publishers to track user behavior. It’s nearly impossible to get a sense of how many RSS subscribers there are, due to the way that RSS readers cache feeds. No one knows how much time someone spends reading an article, or whether they opened it at all. In this way, RSS shares a product design problem with podcasting: user behavior is essentially a black box.
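Publishers do have one crude workaround: some hosted feed fetchers report how many subscribers they poll on behalf of in the User-Agent string they send, and those counts can be scraped out of access logs. A rough sketch, with made-up log entries:

```python
# Some hosted feed fetchers advertise how many subscribers they fetch for in
# their User-Agent string (e.g. "...; 1500 subscribers"). Publishers can sum
# those counts from access logs. The log lines below are invented examples.
import re

user_agents = [
    "Feedly/1.0 (+http://www.feedly.com/fetcher.html; 1500 subscribers)",
    "SomeReader/2.3 (42 subscribers; feed-id=abc123)",
    "Mozilla/5.0 (a single human with a desktop RSS reader)",
]

total = 0
for ua in user_agents:
    match = re.search(r"(\d+)\s+subscribers", ua)
    if match:
        total += int(match.group(1))

print(f"Estimated subscribers reported by fetchers: {total}")  # 1542
```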

For some users, that lack of analytics is a privacy boon. The reality though is that the modern internet content economy is built around advertising, and while I push for subscriptions all the time, such an economy still looks very distant. Analytics increases revenues from advertising, and that means it is critical for companies to have those trackers in place if they want a chance to make it in the competitive media environment.

RSS also offers very few opportunities for branding content effectively. Given that the brand equity for media today is so important, losing your logo, colors, and fonts on an article is an effective way to kill enterprise value. This issue isn’t unique to RSS — it has affected Google’s AMP project as well as Facebook Instant Articles. Brands want users to know that the brand wrote something, and they aren’t going to use technologies that strip out what they consider to be a business critical part of their user experience.

These are just some of the product issues with RSS, and together they ensure that the protocol will never reach the ubiquity required to supplant centralized tech corporations. So, what are we to do then if we want a path away from Facebook’s hegemony?

I think the solution is a set of improvements. RSS as a protocol needs to be expanded so that it can offer more data around prioritization as well as other signals critical to making the technology more effective at the reader layer. This isn’t just about updating the protocol, but also about updating all of the content management systems that publish an RSS feed to take advantage of those features.
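To be concrete about what a prioritization signal might look like: nothing like this exists in the spec today, but RSS supports namespaced extensions, so a publisher-supplied priority element is the kind of addition I have in mind. A purely hypothetical sketch:

```python
# A purely hypothetical sketch of a publisher-supplied priority signal, added
# through RSS's namespace extension mechanism. The namespace and <prio:rank>
# tag are invented for illustration; nothing like this exists in the spec.
import xml.etree.ElementTree as ET

FEED = """<rss version="2.0" xmlns:prio="https://example.com/ns/priority">
  <channel>
    <title>Example Site</title>
    <item><title>Major investigation</title><prio:rank>1</prio:rank></item>
    <item><title>Recipe of the day</title><prio:rank>5</prio:rank></item>
  </channel>
</rss>"""

NS = {"prio": "https://example.com/ns/priority"}
items = ET.fromstring(FEED).find("channel").findall("item")
# A reader could sort by the publisher-declared rank instead of pure recency.
for item in sorted(items, key=lambda i: int(i.findtext("prio:rank", "9", NS))):
    print(item.findtext("title"))
```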

That leads to the most significant challenge: solving RSS as a business model. There needs to be some sort of commerce layer around feeds, so that there is an incentive to improve and optimize the RSS experience. I would gladly pay for an Amazon Prime-like subscription that gave me unlimited text-only feeds from a bunch of major news sources at a reasonable price. It would also let me get my privacy back to boot.

Next, RSS readers need to get a lot smarter about marketing and on-boarding. They need to actively guide users to find where the best content is, and help them curate their feeds with algorithms (with some settings so that users like me can turn it off). These apps could be written in such a way that the feeds are built using local machine learning models, to maximize privacy.
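A toy version of that on-device approach could be as simple as scoring unread headlines by their word overlap with articles the user has already read, so no reading history ever leaves the machine. A deliberately crude sketch; a real reader would use a proper local model:

```python
# A deliberately crude sketch of on-device personalization: rank unread
# headlines by word overlap with articles the user has already read, so the
# reading history never leaves the machine.
from collections import Counter

read_history = [
    "new venture fund raises for seed stage startups",
    "media subscription models and paywalls",
]
unread = [
    "another venture fund targets seed startups",
    "recipe of the day: weeknight pasta",
    "why paywalls fail and subscription models struggle",
]

# Build a simple word-frequency profile from what the user has finished reading.
profile = Counter(word for title in read_history for word in title.split())

def score(title: str) -> int:
    return sum(profile[word] for word in title.split())

for title in sorted(unread, key=score, reverse=True):
    print(score(title), title)
```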

Do I think such a solution will become ubiquitous? No, I don’t, and certainly not in the decentralized way that many would hope for. I don’t think users actually, truly care about privacy (Facebook has been stealing it for years — has that stopped its growth at all?) and they certainly aren’t news junkies either. But with the right business model in place, there could be enough users to make such a renewed approach to streams viable for companies, and that is ultimately the critical ingredient you need to have for a fresh news economy to surface and for RSS to come back to life.