Author: Danny Crichton

Why we lie to ourselves, every day

Human action requires motivation, but what exactly are those motivations? Donating money to a charity might be motivated by altruism, and yet only 1% of donations are anonymous. Donors don’t just want to be altruistic; they also want credit for that altruism, plus badges that signal their altruistic ways to others.

Worse, we aren’t even aware of our true motivations — in fact, we often strategically deceive ourselves to make our behavior appear more pure than it really is. It’s a pattern that manifests itself across all kinds of arenas, including consumption, politics, education, medicine, religion and more.

In their book The Elephant in the Brain, Kevin Simler, formerly a long-time engineer at Palantir, and Robin Hanson, an associate professor of economics at George Mason University, take the most dismal parts of the dismal science of economics and weave them together into a story of humans acting badly (but believing they are great!). As the authors write in their introduction, “The line between cynicism and misanthropy — between thinking ill of human motives and thinking ill of humans — is often blurry.” No kidding.

The Elephant in the Brain by Kevin Simler and Robin Hanson. Oxford University Press, 2018

The eponymous elephant in the brain is essentially our self-deception and hidden motivations regarding the actions we take in everyday life. Like the proverbial elephant in the room, this elephant in the brain is visible to those who search for it, but we often avoid looking at it lest we get discouraged at our selfish behavior.

Humans care deeply about being perceived as prosocial, but we are also locked into constant competition over status, careers, and spouses. We want to signal our community spirit, but we also want to selfishly benefit from our work. We resolve this tension by creating rationalizations and excuses to do both simultaneously. We give to charity for the status as well as the altruism, much as we attend college to learn, but also to earn a degree that signals to employers that we will be hard workers.

The key is that we self-deceive: we don’t realize we are taking advantage of the duality of our actions. We truly believe we are being altruistic, just as much as we truly believe we are in college to learn and explore the arts and humanities. That self-deception is critical, since it lowers the cost of demonstrating our prosocial bona fides: we would be heavily taxed cognitively if we had to constantly pretend that we cared about the environment when what we really care about is being perceived as an ethical consumer.

The Elephant in the Brain is a bold yet synthetic thesis. Simler and Hanson build upon a number of research advances, such as Jonathan Haidt’s work on the righteous mind and Robert Trivers’ work on evolutionary psychology, to undergird their thesis in the first few chapters, and then they apply that thesis to a series of other fields (ten, in fact) in relatively brief and facile chapters that describe how the elephant in the brain affects us in every sphere of human activity.

Refreshingly, far from being polemicists, the authors are quite curious and investigatory about this pattern of human behavior, and they realize they are pushing at least some of their readers into uncomfortable territory. They even begin the book by stating that “we expect the typical reader to accept roughly two-thirds of our claims about human motives and institutions.”

Yet the book essentially makes one claim, applied in a myriad of ways. It’s unclear to me who the reader would be who accepts only parts of the book’s premise. Either you have come around to the cynical view of humans (pre- or post-book), or you haven’t — there doesn’t seem to me to be a middle point between those two perspectives.

Worse, even after reading the book, I am left unsure of what exactly to do with its thesis. There is something of a lukewarm conclusion in which the authors push for us to have greater situational awareness, and a short albeit excellent section on designing better institutions to account for hidden motivations. The book’s observations ultimately don’t lead to any greater project, no path toward a more enlightened society. That’s fine, but disappointing.

Indeed, for a book that arguably strives to be optimistic, I fear its results will be nothing more than cynical fodder for Silicon Valley product designers. Don’t design products for what humans say they want; design them to push the buttons of their hidden motivations. Viewed in this light, The Elephant in the Brain is perhaps a more academic version of the Facebook product manual.

The dismal science is dismal precisely because of this cynicism: because as a project, as a set of values, it leads pretty much nowhere. Everyone is secretly selfish and obsessed with status, and they don’t even know it. As the authors conclude in their final line, “We may be competitive social animals, self-interested and self-deceived, but we cooperated our way to the god-damned moon.” Yes we did, and it is precisely that surprise from such a dreary species that we should take solace in. There is indeed an elephant in our brain, but its influence can wax and wane — and ultimately humans hold their agency in their own hands.

Hate speech, collusion, and the constitution

Half an hour into their two-hour testimony on Wednesday before the Senate Intelligence Committee, Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey were asked about collaboration between social media companies. “Our collaboration has greatly increased,” Sandberg stated before turning to Dorsey and adding that Facebook has “always shared information with other companies.” Dorsey nodded in response, and noted for his part that he’s very open to establishing “a regular cadence with our industry peers.”

Social media companies have established extensive policies on what constitutes “hate speech” on their platforms. But discrepancies between these policies open the possibility for propagators of hate to game the platforms and still get their vitriol out to a large audience. Collaboration of the kind Sandberg and Dorsey discussed can lead to a more consistent approach to hate speech that will prevent the gaming of platforms’ policies.

But collaboration between competitors as dominant in social media as Facebook and Twitter poses an important question: would antitrust or other laws make their coordination illegal?

The short answer is no. Facebook and Twitter are private companies that get to decide what user content stays on and what gets deleted from their platforms. When users sign up for these free services, they agree to abide by the companies’ terms. Neither company is under a First Amendment obligation to keep speech up. Nor can it be said that collaboration on platform safety policies amounts to collusion.

This could change based on an investigation into speech policing on social media platforms being considered by the Justice Department. But it’s extremely unlikely that Congress would end up regulating what platforms delete or keep online – not least because it may violate the First Amendment rights of the platforms themselves.

What is hate speech anyway?

Trying to find a universal definition of hate speech would be a fool’s errand, but in the context of private companies hosting user-generated content, hate speech for social platforms is whatever they say it is.

Facebook’s 26-page Community Standards include a whole section on how Facebook defines hate speech. For Facebook, hate speech is “anything that directly attacks people based on . . . their ‘protected characteristics’ — race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability or disease.” While that might be vague, Facebook then goes on to give specific examples of what would and wouldn’t amount to hate speech, all while making clear that there are cases – depending on the context – where speech will still be tolerated if, for example, it’s intended to raise awareness.

Twitter uses a “hateful conduct” prohibition, which it defines as promoting “violence against or directly attacking or threatening other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.” It also prohibits hateful imagery and display names, meaning it’s not just what you tweet but also what you display on your profile page that can count against you.

Both companies constantly revise and supplement their definitions as new test cases arise and as words take on new meaning. For example, two common slang terms, one used by Russians to describe Ukrainians and the other by Ukrainians to describe Russians, were determined to be hate speech after war erupted in Eastern Ukraine in 2014. An internal review by Facebook found that what used to be common slang had turned into derogatory, hateful language.

Would collaboration on hate speech amount to anticompetitive collusion?

Under U.S. antitrust laws, companies cannot collude to make anticompetitive agreements or try to monopolize a market. A company that becomes a monopoly by having a superior product in the marketplace doesn’t violate antitrust laws. What does violate the law is dominant companies making agreements, usually in secret, to deceive or mislead competitors or consumers. Examples include price fixing, restricting new market entrants, and misrepresenting the independence of the relationship between competitors.

A Pew survey found that 68% of Americans use Facebook. According to Facebook’s own records, the platform had a whopping 1.47 billion daily active users on average for the month of June and 2.23 billion monthly active users as of the end of June – with over 200 million in the US alone. While Twitter doesn’t disclose its number of daily users, it does publish the number of monthly active users which stood at 330 million at last count, 69 million of which are in the U.S.

There can be no question that Facebook and Twitter are overwhelmingly dominant in the social media market. That kind of dominance has led to calls for breaking up these giants under antitrust laws.

Would those calls hold more credence if the two social giants began coordinating their policies on hate speech?

The answer is probably not, but it does depend on exactly how they coordinated. Social media companies like Facebook, Twitter, and Snapchat have grown large internal product policy teams that decide the rules for using their platforms, including on hate speech. If these teams were to get together behind closed doors and coordinate policies and enforcement in a way that would preclude smaller competitors from being able to enter the market, then antitrust regulators may get involved.

Antitrust would also come into play if, for example, Facebook and Twitter got together and decided to charge twice as much for advertising that includes hate speech (an obviously absurd scenario) – in other words, using their market power to affect pricing of certain types of speech that advertisers use.

In fact, coordination around hate speech may reduce anti-competitive concerns. Given the high user engagement around hate speech, banning it could lead to reduced profits for the two companies and provide an opening to upstart competitors.

Sandberg and Dorsey’s testimony Wednesday didn’t point to executives hell-bent on keeping competition out through collaboration. Rather, their potential collaboration is probably better seen as an industry deciding on “best practices,” a common occurrence in other industries including those with dominant market players.

What about the First Amendment?

Private companies are not subject to the First Amendment. The Constitution applies to the government, not to corporations. A private company, no matter its size, can ignore your right to free speech.

That’s why Facebook and Twitter already can and do delete posts that contravene their policies. Calling for the extermination of all immigrants, referring to Africans as coming from shithole countries, and even anti-gay protests at military funerals may be protected in public spaces, but social media companies get to decide whether they’ll allow any of that on their platforms. As Harvard Law School’s Noah Feldman has stated, “There’s no right to free speech on Twitter. The only rule is that Twitter Inc. gets to decide who speaks and listens–which is its right under the First Amendment.”

Instead, when it comes to social media and the First Amendment, courts have been more focused on not allowing the government to keep citizens off of social media. Just last year, the U.S. Supreme Court struck down a North Carolina law that made it a crime for a registered sex offender to access social media platforms that children use. During the hearing, the justices asked the government probing questions about citizens’ rights to free speech on social media, from Facebook to Snapchat to Twitter and even LinkedIn.

Justice Ruth Bader Ginsburg made clear during the hearing that restricting access to social media would mean “being cut off from a very large part of the marketplace of ideas [a]nd [that] the First Amendment includes not only the right to speak, but the right to receive information.”

The Court ended up deciding that the law violated the fundamental First Amendment principle that “all persons have access to places where they can speak and listen,” noting that social media has become one of the most important forums for expression of our day.

Lower courts have also ruled that public officials who block users from their profiles are violating those users’ First Amendment rights. Judge Naomi Reice Buchwald of the Southern District of New York decided in May that Trump’s Twitter feed is a public forum. As a result, she ruled that when Trump blocks citizens from viewing and replying to his posts, he violates their First Amendment rights.

The First Amendment doesn’t mean Facebook and Twitter are under any obligation to keep up whatever you post, but it does mean that the government can’t just ban you from accessing your Facebook or Twitter accounts, and public officials probably can’t block you from their own accounts either.

Collaboration is coming?

Sandberg made clear in her testimony on Wednesday that collaboration is already happening when it comes to keeping bad actors off of platforms. “We [already] get tips from each other. The faster we collaborate, the faster we share these tips with each other, the stronger our collective defenses will be.”

Dorsey for his part stressed that keeping bad actors off of social media “is not something we want to compete on.” Twitter is here “to contribute to a healthy public square, not compete to have the only one, we know that’s the only way our business thrives and helps us all defend against these new threats.”

He even went further. When it comes to the drafting of their policies, beyond collaborating with Facebook, he said he would be open to a public consultation. “We have real openness to this. . . . We have an opportunity to create more transparency with an eye to more accountability but also a more open way of working – a way of working for instance that allows for a review period by the public about how we think about our policies.”

I’ve already argued why tech firms should collaborate on hate speech policies; the question that remains is whether doing so would be legal. The First Amendment does not apply to social media companies. Antitrust laws don’t seem to stand in their way either. And based on how Senator Burr, chairman of the Senate Select Committee on Intelligence, chose to close the hearing, the government seems supportive of social media companies collaborating. Addressing Sandberg and Dorsey, he said, “I would ask both of you. If there are any rules, such as any antitrust, FTC, regulations or guidelines that are obstacles to collaboration between you, I hope you’ll submit for the record where those obstacles are so we can look at the appropriate steps we can take as a committee to open those avenues up.”

It’s time for Facebook and Twitter to coordinate efforts on hate speech

Since the election of Donald Trump in 2016, there has been burgeoning awareness of hate speech on social media platforms like Facebook and Twitter. While activists have pressured these companies to improve their content moderation, few groups (outside of the German government) have outright sued the platforms for their actions.

That’s because of a legal distinction between media publications and media platforms that has made solving hate speech online a vexing problem.

Take, for instance, an op-ed published in the New York Times calling for the slaughter of an entire minority group. The Times would likely be sued for publishing hate speech, and the plaintiffs might well prevail. Yet if that same piece were published in a Facebook post, a suit against Facebook would likely fail.

The reason for this disparity? Section 230 of the Communications Decency Act (CDA), which provides platforms like Facebook with a broad shield from liability when a lawsuit turns on what its users post or share. The latest uproar against Alex Jones and Infowars has led many to call for the repeal of section 230 – but that may lead to government getting into the business of regulating speech online. Instead, platforms should step up to the plate and coordinate their policies so that hate speech will be considered hate speech regardless of whether Jones uses Facebook, Twitter or YouTube to propagate his hate. 

A primer on section 230 

Section 230 is considered a bedrock of freedom of speech on the internet. Passed in the mid-1990s, it is credited with freeing platforms like Facebook, Twitter, and YouTube from the risk of being sued for content their users upload, and therefore powering the exponential growth of these companies. If it weren’t for section 230, today’s social media giants would have long been bogged down with suits based on what their users post, with the resulting necessary pre-vetting of posts likely crippling these companies altogether. 

Instead, in the more than twenty years since its enactment, courts have consistently found section 230 to be a bar to suing tech companies for user-generated content they host. And it’s not only social media platforms that have benefited from section 230; sharing economy companies have used section 230 to defend themselves, with the likes of Airbnb arguing they’re not responsible for what a host posts on their site. Courts have even found section 230 broad enough to cover dating apps. When a man sued one for not verifying the age of an underage user, the court tossed out the lawsuit finding an app user’s misrepresentation of his age not to be the app’s responsibility because of section 230.

Private regulation of hate speech 

Of course, section 230 has not meant that hate speech online has gone unchecked. Platforms like Facebook, YouTube and Twitter all have their own extensive policies prohibiting users from posting hate speech. Social media companies have hired thousands of moderators to enforce these policies and to hold violating users accountable by suspending them or blocking their access altogether. But the recent debacle with Alex Jones and Infowars presents a case study on how these policies can be inconsistently applied.  

Jones has for years fabricated conspiracy theories, like those claiming that the Sandy Hook school shooting was a hoax and that Democrats run a global child-sex trafficking ring. With thousands of followers on Facebook, Twitter, and YouTube, Jones’ hate speech has had real-life consequences: from the brutal harassment of Sandy Hook parents to a gunman storming a pizza restaurant in D.C. to save kids from the restaurant’s nonexistent basement, his messages have done serious harm to many.

Alex Jones and Infowars were finally suspended from ten platforms by our count, with even Twitter falling in line and suspending him for a week after first dithering. But the varying and delayed responses exposed how differently the platforms handle the same speech.

Inconsistent application of hate speech rules across platforms, compounded by recent controversies involving the spread of fake news and the contribution of social media to increased polarization, has led to calls to amend or repeal section 230. If the printed press and cable news can be held liable for propagating hate speech, the argument goes, then why should the same not be true online, especially when fully two-thirds of Americans now report getting at least some of their news from social media? Amid the chorus of those calling for more regulation of tech companies, section 230 has become a consistent target.

Should hate speech be regulated? 

But if you need convincing as to why the government is not best placed to regulate speech online, look no further than Congress’s own wording in section 230. The section enacted in the mid-90s states that online platforms “offer users a great degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops” and “a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.”  

Section 230 goes on to declare that it is the “policy of the United States . . . to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet.” On those premises, section 230 offers online platforms its now-infamous liability protection.

From the simple fact that most of what we see on social media is dictated by algorithms over which we have no control, to the Cambridge Analytica scandal, to increased polarization because of the propagation of fake news, one can quickly see how Congress’s words in 1996 read today as a catalogue of inaccurate predictions. Even Ron Wyden, one of the original drafters of section 230, admits today that its drafters never expected an “individual endorsing (or denying) the extermination of millions of people, or attacking the victims of horrific crimes or the parents of murdered children” to be enabled through the protections offered by section 230.

It would be hard to argue that today’s Congress, having shown little understanding in recent hearings of how social media operates to begin with, is any better qualified to predict the effects of regulating speech online twenty years from now.

More importantly, the burden of complying with new regulations would create a significant barrier to entry for startups and therefore have the unintended consequence of entrenching incumbents. While Facebook, YouTube, and Twitter may have the resources and infrastructure to handle the increased moderation or pre-vetting of posts that regulations might impose, smaller startups would be at a major disadvantage in keeping up with such a burden.

Last chance before regulation 

The answer has to lie with the online platforms themselves. Over the past two decades, they have amassed a wealth of experience in detecting and taking down hate speech. They have built up formidable teams with varied backgrounds to draft policies that take into account an ever-changing internet. Their profits have enabled them to hire away top talent, from government prosecutors to academics and human rights lawyers.  

These platforms also have been on a hiring spree in the last couple of years to ensure that their product policy teams – the ones that draft policies and oversee their enforcement – are more representative of society at large. Facebook proudly announced that its product policy team now includes “a former rape crisis counselor, an academic who has spent her career studying hate organizations . . . and a teacher.” Gone are the days when a bunch of engineers exclusively decided where to draw the lines. Big tech companies have been taking the drafting and enforcement of their policies ever more seriously.

What they now need to do is take the next step and start to coordinate policies so that those who wish to propagate hate speech can no longer game policies across platforms. Waiting for controversies like Infowars to become a full-fledged PR nightmare before taking concrete action will only increase calls for regulation. Proactively pooling resources when it comes to hate speech policies and establishing industry-wide standards will provide a defensible reason to resist direct government regulation.

The social media giants can also build public trust by helping startups get up to speed on the latest approaches to content moderation. While any industry consortium around hate speech policy is certain to be dominated by the largest tech companies, those companies can ensure that policies are easy to access and widely distributed.

Coordination between fierce competitors may sound counterintuitive. But the common problem of hate speech and the gaming of online platforms by those trying to propagate it call for an industry-wide response. Precedent exists for tech titans coordinating when faced with a common threat. Just last year, Facebook, Microsoft, Twitter, and YouTube formalized their “Global Internet Forum to Counter Terrorism” – a partnership to curb the threat of terrorist content online. Fighting hate speech is no less laudable a goal.

Self-regulation is an immense privilege. To the extent that big tech companies want to hold onto that privilege, they have a responsibility to coordinate the policies that underpin their regulation of speech and to enable startups and smaller tech companies to get access to these policies and enforcement mechanisms.

M17 delays IPO debut after pricing this morning on NYSE

M17 Entertainment, a Taipei-based live streaming and dating app group, priced its IPO this morning on the NYSE and was expected to open trading today, according to its final press release. But with a little more than two hours to go before market close, it’s still not trading, and no one seems to know why.

An interview I had scheduled with the CEO earlier this afternoon was canceled at the last minute, with the company’s representative saying that M17 couldn’t comment since its shares were not yet actively trading, and thus the company remains under an SEC-mandated quiet period.

M17 has had a rocky non-debut so far. Originally targeting a fundraise of $115 million in American Depositary Receipts (shares of foreign companies listed domestically on the NYSE), the company concluded its roadshow raising just over half of its target, for a final investment of $60.1 million. The company priced its ADRs at $8 each, with each ADR representing eight shares of the company’s Class A stock.
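For the back-of-envelope inclined, here is the offering math implied by those reported figures (a rough sketch in Python; fees and any over-allotment are ignored):

```python
# Back-of-envelope math on the offering, using only the figures reported above.
ADR_PRICE = 8.00          # dollars per ADR
RAISED = 60_100_000       # final raise in dollars
TARGET = 115_000_000      # original target in dollars
SHARES_PER_ADR = 8        # Class A shares represented by each ADR

adrs_sold = RAISED / ADR_PRICE
print(f"ADRs sold: ~{adrs_sold / 1e6:.1f}M")                                   # ~7.5M
print(f"Class A equivalent: ~{adrs_sold * SHARES_PER_ADR / 1e6:.1f}M shares")  # ~60.1M
print(f"Fraction of target raised: {RAISED / TARGET:.0%}")                     # 52%
```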

My colleague Jon Russell has covered the company’s rapid growth over the past three years. It was formed from the merger of dating app company Paktor and live streaming business 17 Media. Joseph Phua, who was CEO of Paktor, became CEO of the joint M17 company following the merger. Together, the two halves have raised tens of millions in venture capital.

M17 provides live streaming and dating apps throughout “Developed Asia”

The company’s main offering is a live streaming product where creators can build their fanbases and brands. Fans can purchase virtual gifts to send to their favorite artists, and those gifts are proving to be extraordinarily lucrative for the company. According to its amended F-1 statement, M17 has seen tremendous revenue growth, netting $37.9 million of revenue in the first three months of this year. The company has also been able to attract more live streaming talent, increasing its contracted artists from 999 at the end of December 2016 to 7,719 at the end of March this year.

That’s where the good news ends for the company, though. Despite that revenue growth, operating losses are torrential: the company lost $24.8 million in the first three months of this year. In its filing, the company says it has $31.4 million in cash and cash equivalents, giving it limited runway to continue operations without a strong IPO debut.

User growth has been mostly stagnant, with monthly active users increasing from 1.5 million to 1.7 million between March 31, 2017 and March 31, 2018. What the company has succeeded in doing is monetizing those users much better. The percentage of users paying on the platform has more than doubled over the same period, and the value of those users has increased more than 40% to $355 per user per month.

The big challenge for M17 is revenue quality. Live streaming represents 91.4% of the company’s revenues, but those revenues are concentrated on a handful of “whales” who buy a freakishly high number of virtual gifts. The company’s top ten users represent 11.8% of all revenues (that’s $447,220 a user in the first three months this year!), and its top 500 users accounted for almost a majority of total revenues. That concentration on the demand side is just as heavy on the supply side. M17’s top 100 artists accounted for more than a third of the company’s revenue.
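A quick sanity check on that per-whale figure, using only the revenue numbers reported in the filing (a sketch; figures as reported, rounded):

```python
# Verify the per-whale spend cited above from the reported Q1 numbers.
q1_revenue = 37_900_000   # total revenue, first three months of the year
top10_share = 0.118       # top ten users' share of all revenues

per_whale = q1_revenue * top10_share / 10
print(f"average spend per top-10 user: ${per_whale:,.0f}")  # $447,220
```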

That concentration has improved over the past few months, according to the company’s filing. But after Zynga and other whale-dependent revenue models, Wall Street investors have learned that such businesses can be tough to sustain.

Finally, there is the governance of the company, a complication for the many investors wary of the increasing use of dual-class stock structures. Phua, the CEO, will hold 56.3% of the company’s voting rights, and M17 will be a controlled company under NYSE rules, according to its amended filing. Class B shares carry 20 votes each, versus one vote for each Class A share.
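To see how starkly a 20:1 structure can separate control from economics, here is a minimal sketch. The share counts below are invented for illustration, not M17’s actuals; only the 20:1 voting ratio comes from the filing.

```python
# Sketch: voting power vs. economic ownership under a 20:1 dual-class split.
# Share counts are hypothetical; only the 20:1 ratio is from the filing.
def voting_share(b_held, b_total, a_total, ratio=20):
    return b_held * ratio / (b_total * ratio + a_total)

class_b_total = 10_000_000    # hypothetical Class B shares outstanding
class_a_total = 100_000_000   # hypothetical Class A shares outstanding
founder_b = 6_000_000         # hypothetical founder Class B holding

economics = founder_b / (class_b_total + class_a_total)
control = voting_share(founder_b, class_b_total, class_a_total)
print(f"economic ownership: {economics:.1%}")  # 5.5%
print(f"voting power: {control:.1%}")          # 40.0%
```

Even a modest slice of the high-vote class translates into outsized control, which is how a CEO can command a majority of the votes with a small minority of the equity.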

All of this is to say that while the company has seen dizzying growth in its revenue numbers over the past 24 months, that success is moderated by significant challenges in revenue concentration that will have to be a top priority for M17 going forward. Why the company priced but hasn’t traded remains a mystery, and we have reached out for comment.

Subscription hell

Another week, another paywall. This time, it’s Bloomberg, which announced that it would be adding a comprehensive paywall to its news service and television channel (except TicToc, its media partnership with Twitter). A paywall was hardly a surprise, but what was surprising was the price: the standard subscription is $35 a month (up from $0 a month), or $40 a month including access to online and print editions of Businessweek.

And people say avocado toast is expensive.

That’s not the only subscription coming up though. Facebook is now considering adding an ad-free subscription option. These rumors have come and gone in the past, with no sign of change in the company’s resolute focus on advertising as its core business model. Post-Cambridge Analytica and post-GDPR though, the company’s position seems more malleable, and it could follow the plan laid out by my colleague Josh Constine recently. He pegged the potential price at $11 a month, given the company’s revenue per user.

I’m an emphatic champion of subscription models, particularly in media. Subscriptions align incentives in a way that advertising can never do, while also avoiding the morass of privacy and ethics that plague ad targeting. Subscription revenues are also more reliable than ad dollars, making it easier to budget and improve operational efficiency for an organization.

Incentive alignment is one thing, and my wallet is another. All of these subscriptions are starting to add up. These days, my media subscriptions are hovering around $80 a month, and I don’t even have TV. Storage costs for Google, Apple, and Dropbox are another $13 a month. Cable and cell service are another $200 a month combined. Software subscriptions are probably about $20 a month (although so many are annualized it’s hard to keep track of them). Amazon Prime and a few others total around $25 a month.

Worse, subscriptions aren’t getting any cheaper. Amazon Prime just increased its price to $120 a year, Netflix increased its popular middle-tier plan to $11 a month late last year, and YouTube increased its TV pricing to $40 a month last month. Add in new paywalls, and the burden of subscriptions is rising far faster than consumer incomes.

I’m frustrated with this hell. I’m frustrated that the web’s promise of instant and free access to the world’s information appears to be dying. I’m frustrated that subscription usually means just putting formerly free content behind a paywall. I’m frustrated that the price for subscriptions seems wildly high compared to the ad dollars that the fees substitute for. And I’m frustrated that subscription pricing rarely seems to account for other subscriptions I have, even when content libraries are similar.

Subscriptions can be a great tool, but everyone seems to be doing them wrong. We need to transform our thinking here if we are to move on from the manacles of the ad networks.

Before we dive in though, let’s be clear: the web needs a business model. We didn’t need paywalls on the early web because we focused on plain text from other users. Plain text is easier to produce, lowering the friction for people to contribute, and it’s also cheaper to store and transmit, lowering the cost of bandwidth.

Today’s consumers though have significantly higher standards than the original users of the web. Consumers want immersive experiences, well-designed pages with fonts, graphics, photos, and videos coming together into a compelling format. That “quality” costs enormous sums in engineering and design talent, not to mention massively increasing bandwidth and storage costs.

Take my colleague Connie Loizos’ article from yesterday reporting on a new venture fund. The text itself is about 3.5 kilobytes uncompressed, but the total payload of the page, if nothing is cached, is more than 10 MB, or roughly 3,000 times the size of the actual text. This pattern has become so common that it has been called the website obesity crisis. Yet all of our research shows people want high-definition images with their stories, instant loading of articles, and interactivity. Those features have to be paid for somehow, begetting the advertising and subscription models we see today.
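You can estimate this text-to-payload ratio for any article with a few lines of Python. This is a rough sketch: it only fetches statically referenced images, scripts, and stylesheets, so it understates the true weight of a modern page (no lazy-loaded or JavaScript-triggered requests), and the URL is a placeholder.

```python
# Rough estimate of a page's "obesity": article text vs. total payload.
from html.parser import HTMLParser
from urllib.parse import urljoin
import requests

class AssetExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.assets = []      # URLs of statically referenced assets
        self.text_bytes = 0   # rough size of the page's visible text

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("img", "script") and attrs.get("src"):
            self.assets.append(attrs["src"])
        elif tag == "link" and attrs.get("href"):
            self.assets.append(attrs["href"])

    def handle_data(self, data):
        self.text_bytes += len(data.strip().encode("utf-8"))

url = "https://example.com/article"  # placeholder: substitute any article page
page = requests.get(url, timeout=10)

parser = AssetExtractor()
parser.feed(page.text)

total = len(page.content)
for asset in parser.assets:
    try:
        total += len(requests.get(urljoin(url, asset), timeout=10).content)
    except requests.RequestException:
        pass  # skip unreachable assets

print(f"text:    {parser.text_bytes / 1024:.1f} KB")
print(f"payload: {total / 1024 / 1024:.1f} MB")
print(f"ratio:   {total / max(parser.text_bytes, 1):.0f}x")
```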

The other cost is content production itself. Volunteers just haven’t produced the information we are seeking. Wikipedia is an extraordinary resource, but its depth falters when we start looking for information about our local communities, or news, or individuals who aren’t famous. The reality is that information gathering is hard work, and in a capitalist system, we need to compensate people to do it. My colleagues and I are passionate about startups and technology, but we need to eat to publish.

While an open, free, and democratized web is ideal, these two challenges demonstrate that a business model had to be attached to make it function. Advertising is one such model, with massive privacy violations required to optimize it. The other approach is charging for access.

Unfortunately, subscription seems to be an area filled with product engineers and marketers led by brain-dead executives. The default choice of Bloomberg this week and so many other publications is to simply put formerly free content behind a paywall. No consumer wants to pay for something they formerly got for free, and yet we repeatedly see examples of subscriptions designed this way.

I don’t know when media started hiring IRS accountants, but subscriptions should be seen as an upgrade, not a tax. A subscription should provide new features, content, and capabilities that didn’t exist before while maintaining the former product that consumers have enjoyed for years.

Take MoviePass for instance. Consumers can continue to watch movies as they always have in the past, but now they have a new subscription option to watch potentially more movies for a set price. Among my friends, MoviePass has completely changed the way they think of films. Instead of just seeing one blockbuster every month, they are heading to an art house film because “we’ve essentially already paid for it, so why not try it?” The pricing is clearly too cheap, but that shouldn’t distract from a product that offered a completely new experience from a subscription.

The hell is even worse though. We not only get paywalls where none existed before, but the prices of those subscriptions are vastly higher than consumers ever wanted. It’s not just Bloomberg and media — it’s software too. I used to write everything in Ulysses, a syncing Markdown editor for OS X and iOS. I paid $70 for the apps, but then the company switched to a $40-a-year subscription, and as the dozens of angry reviews and comments illustrate, that price is vastly out of proportion to the cost of providing the software (which, I might add, is hosted entirely on iCloud infrastructure).

For product marketers, the default mentality is to extract a lot of value from the 1% of readers or users that are going to convert to paid. Subscriptions are always positioned as all-or-nothing, with limited metering or tiering, to try to force the conversion. To my mind though, the question is not how to get 1% of readers to pay an exorbitant price, but how to get say 20% of your readers to pay you a cheaper price. It’s not about exclusion, but about participation.

One way we could fix that situation would be to allow subscriptions to combine more cheaply. We are starting to see this too: Spotify, Hulu, and Scribd appear to be investigating a deal in which consumers can get a joint subscription to these services for a lower rate. Setapp is a set of more than one hundred OS X apps that come bundled for about $10 a month.

I’d love to see more of these partnerships, because they are much more fair to the consumer and ultimately allow smaller subscription companies to compete with the likes of Google, Amazon, Apple, and others. Cross-marketing lowers subscriber acquisition costs, and those savings should ultimately flow down to the consumer.

Subscription hell is real, but that doesn’t mean the business model is flawed. Rather, we need to completely transform our thinking around these models, including the marketing behind them and the features that they offer. We also need to consider consumers and their wallets more holistically, since no one buys a subscription in a vacuum. For too long, paywall playbooks have just been copied rather than innovated upon. It’s time for product leaders to step up and build a better future.

As Chinese censorship intensifies, gays are back while teenage mothers and tattoos are out

Following the passage of a new cybersecurity law and the removal of term limits from Chinese president Xi Jinping, China’s government is conducting a comprehensive crackdown on online discussions and content, with few companies spared the rod by the central government.

Among the casualties has been Bytedance, the high-flying $20 billion media unicorn, which was forced to publicly apologize for content that the government said degraded the character of the nation. The government forced the company to shut down its popular Neihan Duanzi comedy app and to remove its headline news app, Jinri Toutiao, from app stores for three weeks. The company announced that it would expand its number of human censors from 6,000 to 10,000.

Another high-flying media unicorn, Kuaishou, has been under fire for allowing teenage moms to be depicted in a positive light. The app is unique among China’s top social networks in focusing on ordinary Chinese, particularly people outside of large cities like Beijing and Shanghai. The company has faced public criticism from central television channel CCTV, as well as from regulators who have demanded the company act more aggressively in removing the content, a demand to which it has acquiesced.

Meanwhile, over at Sina Weibo, China’s Twitter-like service, the company announced on Friday that it would ban violent and gay content from its service, following instructions from the State Administration of Press, Publication, Radio, Film, and Television. LGBT content has been in the crosshairs of the country’s media regulators for years; for example, censors banned “abnormal sexual behaviors” from being depicted in any media or mobile apps in 2017, a term which includes homosexuality.

However, in a rare about-face for corporate China and internet censors, the company announced that it would reverse its ban on LGBT-themed content, following thousands of comments and discussions online by gay Chinese citizens. The company’s crackdown on other content, though, is expected to continue.

There are other forms of censorship underway these days in China. China’s soccer players were banned a few weeks ago from displaying tattoos, since they depict a “dispirited culture,” which is banned from all media. Perhaps most importantly, the government has banned the use of private VPNs in order to better control online discourse.

China’s censorship regime is certainly not new, but its intensity around culture and how it is depicted is relatively novel. While the Chinese government has generally kept a tight lid on political dissent, particularly since the Tiananmen Protests in 1989, it has generally used a lighter touch on non-political subjects.

However, the Communist Party of China is now attempting to control the culture much more directly, not just on broadcast media like television, but also on apps and devices throughout the Middle Kingdom.

Following the National People’s Congress in March, the regulation of China’s media has been reassigned from the government to the party’s Central Propaganda Department. Since then, the party has been working in overdrive to tamp down content that it deems to be foreign, crude, vulgar, or not in the best spirit of the Chinese people.

While China’s media startups generally focus heavily on the mainland, their apps are also available in app stores in other countries. Bytedance, which was forced to shut down its comedy app, also owns musical.ly, the popular music video app used by approximately 14% of American teenagers, according to some estimates. China’s censorship regime doesn’t stop at the nation’s borders, then, but can extend its influence far wider.

Another example is Grindr, the popular gay dating app, which sold a majority share of its ownership to Beijing Kunlun Tech Company in early 2016.

The crackdown on speech is expected to continue over the coming weeks as the new rules are applied uniformly across the country. The situation is a reminder of the challenges Chinese companies face operating in their heavily controlled home market.

Although there are many trade tensions between the U.S. and China these days, a key issue has been access to the Chinese market for American technology companies. Even if China were to open its borders though, it remains unclear how U.S. companies could faithfully apply the law of China while maintaining their own moral standards.

Congress should demand Zuckerberg move to “one share, one vote”

Mark Zuckerberg is an autocrat, and not hypothetically. Through the special voting rights attached to Facebook’s Class B shares, he wields absolute command of the company while owning just a handful of percentage points of its equity.

Like any autocrat, he has taken extraordinary measures to maintain control over his realm. He produced a plan exactly two years ago that would have zeroed out the voting rights for everyday shareholders with a new voteless Class C share, only to pull back at the last minute as a Delaware court case was set to begin. He has received the irrevocable proxies of many Facebook insiders, allowing him to control their votes indefinitely. Plus, any Class B shares that are sold are converted to Class A shares, allowing him to continue to consolidate power as people leave.
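The arithmetic of that consolidation is worth spelling out. Facebook’s Class B shares carry ten votes each, so when other holders’ Class B shares are sold and convert to Class A, the remaining holder’s voting share rises without him buying a thing. A minimal sketch (the share counts below are invented for illustration, not actual holdings):

```python
# Sketch of control consolidating as other insiders sell: sold Class B
# converts to Class A (one vote each), so the holdout's voting share grows.
# Class B carries 10 votes per share; all share counts here are invented.
def voting_power(ceo_b, other_b, class_a, votes_per_b=10):
    ceo_votes = ceo_b * votes_per_b
    total_votes = (ceo_b + other_b) * votes_per_b + class_a
    return ceo_votes / total_votes

ceo_b, other_b, class_a = 400e6, 100e6, 2_400e6

before = voting_power(ceo_b, other_b, class_a)
# Other insiders sell half their Class B, which converts to Class A.
after = voting_power(ceo_b, other_b / 2, class_a + other_b / 2)
print(f"CEO voting power before: {before:.1%}")  # 54.1%
print(f"CEO voting power after:  {after:.1%}")   # 57.6%
```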

And now, borrowing a page straight out of George Orwell’s 1984, he has even tried to retract and disappear his own messages to others on his platform, an effort that was itself retracted after it became public.

While Congress is right to focus on Cambridge Analytica, and electoral malfeasance, and political ads, and a whole crop of other controversies surrounding Facebook, it should instead direct its attention to the single solution that would begin to solve all of this: dissolve Facebook’s dual-class share structure and thereby democratize its ownership.

Just as congressmen are elected under the principle of “one man, one vote,” Congress should demand that Facebook follow the highest standard used by most other publicly listed companies and return to “one share, one vote.”

Zuckerberg himself should certainly agree with this. After all, the original logic of creating a voteless share class was that the company’s financial performance was strong and Zuckerberg needed to be protected to keep it that way. The plan was announced the same quarter that Facebook crushed its financial results, and the connection between those results and Zuckerberg’s controlling stake was all but explicit.

Yet in the two months from its intraday peak of $195.32 a share on February 1, 2018 to today’s price of around $160, Facebook has lost more than $100 billion in market cap. If Congressional inquiries eventually lead to further regulation, the value of the stock could erode further. It’s easy to argue that a chief executive should be protected when the performance of a company is rocketing up. It’s much harder when everything is crumbling and no one is being held accountable.

Shareholders may have been blinded by Facebook’s dizzying growth over the past few years, but we now know that the edifice of that growth is far more tenuous than we ever knew before. Zuckerberg’s 15-year apology tour can no longer sustain the view that corporate governance should be ignored for the good of the share price.

There’s just one problem though, and it is the problem that confronts any country with a tyrant: shareholders have no power here to effect change. They can’t change the composition of the board, and they can’t change the management team. They can’t change anything at all, since one person controls the realm with an iron fist. A proposal back in 2015 to move to “one share, one vote” was struck down at Facebook’s shareholder meeting.

I am not asking for Zuckerberg to be fired, or to resign. I think people should clean up their own messes, and few people have the means to clean up Facebook right now other than him. But I do think there should be consequences, and so far, there have been exactly zero. Zuckerberg has to personally relinquish his control, and no act of mea culpa would better show that he understands the consequences of his actions.

There is a counter-argument, which is that ravenous mobs of private investors would swoop into Facebook and force the company to steal even more data from users to sell to advertisers if Zuckerberg lost control. I am wholly unconvinced though, mostly because Facebook has basically done precisely that over its entire history. Plus, any further deterioration of trust with users would strike at the heart of its financial results.

Zuckerberg says in his prepared statement that, “My top priority has always been our social mission of connecting people, building community and bringing the world closer together.” Few things would do more to build community around Facebook’s leadership than sharing its burdens and responsibilities with a wider, more diverse set of people. Take a page from American history, and abolish the discrimination inherent in the dual-class share vote.

RSS is undead

RSS died. Whether you blame Feedburner, or Google Reader, or the shutdown of Digg Reader last month, or any number of other product failures over the years, the humble protocol has managed to keep on trudging along despite all evidence that it is dead, dead, dead.

Now, with Facebook’s scandal over Cambridge Analytica, there is a whole new wave of commentators calling for RSS to be resuscitated. Brian Barrett at Wired said a week ago that “… anyone weary of black-box algorithms controlling what you see online at least has a respite, one that’s been there all along but has often gone ignored. Tired of Twitter? Facebook fatigued? It’s time to head back to RSS.”

Let’s be clear: RSS isn’t coming back alive so much as it is officially entering its undead phase.

Don’t get me wrong, I love RSS. At its core, it is a beautiful manifestation of some of the most visionary principles of the internet, namely transparency and openness. The protocol really is simple and human-readable. It feels like how the internet was originally designed with static, full-text articles in HTML. Perhaps most importantly, it is decentralized, with no power structure trying to stuff other content in front of your face.
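To make “simple and human-readable” concrete, here is a complete, valid RSS 2.0 document parsed with nothing but Python’s standard library (the feed content is invented for illustration):

```python
# A complete, minimal RSS 2.0 feed, parsed with only the standard library.
import xml.etree.ElementTree as ET

FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>https://example.com</link>
    <description>A hypothetical feed</description>
    <item>
      <title>Hello, world</title>
      <link>https://example.com/hello</link>
      <pubDate>Mon, 02 Apr 2018 09:00:00 GMT</pubDate>
      <description>The full text of the article can go right here.</description>
    </item>
  </channel>
</rss>"""

channel = ET.fromstring(FEED).find("channel")
for item in channel.findall("item"):
    print(item.findtext("title"), "->", item.findtext("link"))
```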

It’s wonderfully idealistic, but the reality of RSS is that it lacks the features required by nearly every actor in the modern content ecosystem, and I would strongly suspect that its return is not forthcoming.

Now, it is important before diving in here to separate out RSS the protocol from RSS readers, the software that interprets that protocol. While some of the challenges facing this technology are reader-centric and therefore fixable with better product design, many of these challenges are ultimately problems with the underlying protocol itself.

Let’s start with users. I, as a journalist, love having hundreds of RSS feeds organized in chronological order allowing me to see every single news story published in my areas of interest. This use case though is a minuscule fraction of all users, who aren’t paid to report on the news comprehensively. Instead, users want personalization and prioritization — they want a feed or stream that shows them the most important content first, since they are busy and lack the time to digest enormous sums of content.

To get a flavor of this, try subscribing to an RSS feed of all published headlines from a major newspaper like the Washington Post, which publishes roughly 1,200 stories a day. Seriously, try it. It’s an exhausting experience, wading through articles from the style and food sections just to run into the latest update on troop movements in the Middle East.

Some sites try to get around this by offering an almost endless array of RSS feeds built around keywords. Yet stories are almost always assigned more than one keyword, and keyword selection can vary tremendously in quality across sites. Now I see duplicate stories and still manage to miss others I wanted to see.

Ultimately, all of media is prioritization: every site, every newspaper, every broadcast has editors involved in determining the hierarchy of information to be presented to users. Somehow, RSS (at least in its current incarnation) never understood that. This is a failure both of the readers themselves and of the protocol, which never forced publishers to provide signals about what was most and least important.

Another enormous challenge is discovery and curation. How exactly do you find good RSS feeds? Once you have found them, how do you group and prune them over time to maximize signal? Curation is one of the biggest onboarding challenges for social networks like Twitter and Reddit, and it has prevented both from reaching Facebook’s stratospheric numbers. The cold start problem is perhaps RSS’ greatest failing today, although it could potentially be solved by better RSS reader software without protocol changes.

RSS’ true failings, though, are on the publisher side, with the most obvious issue being analytics. RSS doesn’t allow publishers to track user behavior. It’s nearly impossible to get a sense of how many RSS subscribers there are, due to the way RSS readers cache feeds. No one knows how long someone spends reading an article, or whether they opened it at all. In this way, RSS shares a product design problem with podcasting: user behavior is essentially a black box.

For some users, that lack of analytics is a privacy boon. The reality though is that the modern internet content economy is built around advertising, and while I push for subscriptions all the time, such an economy still looks very distant. Analytics increases revenues from advertising, and that means it is critical for companies to have those trackers in place if they want a chance to make it in the competitive media environment.

RSS also offers very few opportunities for branding content effectively. Given that the brand equity for media today is so important, losing your logo, colors, and fonts on an article is an effective way to kill enterprise value. This issue isn’t unique to RSS — it has affected Google’s AMP project as well as Facebook Instant Articles. Brands want users to know that the brand wrote something, and they aren’t going to use technologies that strip out what they consider to be a business critical part of their user experience.

These are just some of the product issues with RSS, and together they ensure that the protocol will never reach the ubiquity required to supplant centralized tech corporations. So, what are we to do then if we want a path away from Facebook’s hegemony?

I think the solution is a set of improvements. RSS as a protocol needs to be expanded so that it can offer more data around prioritization as well as other signals critical to making the technology more effective at the reader layer. This isn’t just about updating the protocol, but also about updating all of the content management systems that publish an RSS feed to take advantage of those features.
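What might that look like in practice? Here is a hypothetical sketch: a made-up `<ex:priority>` extension element (invented here, not part of any RSS spec) that publishers could attach to items, and a few lines of reader-side code that sort on it, falling back to recency.

```python
# Hypothetical prioritization extension: the <ex:priority> element and its
# namespace are invented for illustration; readers sort by it, then recency.
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

NS = {"ex": "https://example.com/rss-priority"}  # invented namespace

FEED = """<?xml version="1.0"?>
<rss version="2.0" xmlns:ex="https://example.com/rss-priority">
  <channel>
    <title>Example Paper</title>
    <item>
      <title>Minor style-section update</title>
      <pubDate>Mon, 02 Apr 2018 10:00:00 GMT</pubDate>
      <ex:priority>0.2</ex:priority>
    </item>
    <item>
      <title>Major breaking story</title>
      <pubDate>Mon, 02 Apr 2018 09:00:00 GMT</pubDate>
      <ex:priority>0.9</ex:priority>
    </item>
  </channel>
</rss>"""

items = ET.fromstring(FEED).find("channel").findall("item")

def rank(item):
    priority = float(item.findtext("ex:priority", "0.5", NS))
    published = parsedate_to_datetime(item.findtext("pubDate"))
    return (-priority, -published.timestamp())  # priority first, then recency

for item in sorted(items, key=rank):
    print(item.findtext("title"))
```

A real standard would need buy-in from the content management systems that generate feeds, as noted above, but the reader-side logic really is this small.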

That leads to the most significant challenge: solving RSS as a business model. There needs to be some sort of commerce layer around feeds, so that there is an incentive to improve and optimize the RSS experience. I would gladly pay for an Amazon Prime-like subscription that gave me unlimited text-only feeds from a bunch of major news sources at a reasonable price. It would also let me get my privacy back to boot.

Next, RSS readers need to get a lot smarter about marketing and on-boarding. They need to actively guide users to find where the best content is, and help them curate their feeds with algorithms (with some settings so that users like me can turn it off). These apps could be written in such a way that the feeds are built using local machine learning models, to maximize privacy.

Do I think such a solution will become ubiquitous? No, I don’t, and certainly not in the decentralized way that many would hope for. I don’t think users actually, truly care about privacy (Facebook has been stealing it for years — has that stopped its growth at all?) and they certainly aren’t news junkies either. But with the right business model in place, there could be enough users to make such a renewed approach to streams viable for companies, and that is ultimately the critical ingredient you need to have for a fresh news economy to surface and for RSS to come back to life.

Princeton study finds very few affiliate marketers make required disclosures on YouTube and Pinterest

Convincing humans to buy products is a massive business called marketing, and few areas of marketing are growing as fast as influencer marketing. Influencers on platforms like Instagram, Pinterest, and YouTube can command prodigious fees based on their audience size and engagement: some data suggests that a single video on YouTube by a top influencer can command as much as $300,000.

While top influencers often have direct partnerships with product companies, others with smaller audiences often take advantage of affiliate networks to build their revenues. These networks allow an influencer to take a small cut of any sales that are generated through their unique affiliate link, and their flexibility means that influencers can prioritize products that they believe best match their audience.

This industry is regulated by the Federal Trade Commission, which has set out a series of rules requiring paid affiliate links to be disclosed to users. There’s just one problem, according to a new analysis by Princeton researchers: very little of the content with affiliate links on sites like YouTube and Pinterest actually discloses its monetization.

Computer scientists Arunesh Mathur, Arvind Narayanan, and Marshini Chetty compiled a random sample of hundreds of thousands of videos on YouTube and millions of pins on Pinterest. They then used text extraction and frequency analysis to investigate URLs located in the descriptions of these items and determine whether each URL, or any redirect behind it, connected to an affiliate network.
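The paper’s pipeline is more sophisticated, but the core detection step can be sketched in a few lines (my own reconstruction, not the authors’ code; the affiliate domain list is a tiny illustrative subset, not the study’s):

```python
# Simplified sketch of affiliate-link detection: extract URLs from a
# description, follow redirects, and flag any that touch a known network.
import re
import requests

AFFILIATE_DOMAINS = {"amzn.to", "rstyle.me", "shareasale.com", "go.magik.ly"}

URL_RE = re.compile(r"https?://\S+")

def affiliate_links(description):
    flagged = []
    for url in URL_RE.findall(description):
        try:
            # Follow the redirect chain; shorteners often hide the network.
            resp = requests.head(url, allow_redirects=True, timeout=10)
            chain = [url] + [r.url for r in resp.history] + [resp.url]
        except requests.RequestException:
            chain = [url]  # network failure: judge the raw URL alone
        if any(d in link for d in AFFILIATE_DOMAINS for link in chain):
            flagged.append(url)
    return flagged

# Hypothetical example; the shortened URL is invented.
print(affiliate_links("New camera! https://amzn.to/hypothetical-id"))
```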

For all the growth in affiliate marketing, the researchers found that less than 1% of the videos and pins in their random sample had affiliate links attached. Some categories had significantly higher percentages, though: science and technology videos on YouTube averaged 3.61%, and women’s fashion pins on Pinterest had a rate of 4.62%.

What’s more interesting is that content with affiliate links was statistically more engaging than content without them. The researchers found that affiliated videos had longer run times as well as more likes and views, and a similar pattern held on Pinterest. The incentives around affiliate marketing, then, are clearly working.

The researchers next investigated the text of content with affiliate links and analyzed whether they made any disclosures about their economics to users. Among content that had affiliate links, 10.49% of YouTube videos and 7.03% of pins on Pinterest had disclosures. Worse, the disclosure language recommended by the FTC was only included on roughly 2% of affiliated content across the two platforms.

Given the NLP and basic machine learning methodology of the paper, these numbers should be perceived as a lower bound on disclosures. Nonetheless, it is clear that much of the influencer economy on these platforms remains cloaked from everyday users, despite being in clear violation of FTC guidelines and rules.

These results raise a series of challenging product and policy questions for startup companies with user-generated content. In the wake of the 2016 election, where fake news factories built viral content and generated serious advertising revenues, social networks like Facebook have had to confront the tradeoff between a maniacal focus on quantitative engagement (page views, time on site) and the quality of that engagement. If affiliated content statistically has higher engagement, as this study showed, that poses a dilemma for companies looking to boost revenue while also improving engagement quality at the expense of quantity.

For instance, the authors of the study suggest that products like YouTube should have better native features for disclosing affiliate sponsorships. Placing disclosures, though, could dampen enthusiasm for some clearly high-engagement content. How then can companies craft ethical policies that follow FTC requirements while also ensuring their products reach the right metrics?

Finally — and much harder to measure — is evaluating the effect of disclosures on affiliate revenue. Do people click on links less if they know they were placed there because of marketing economics? If proper disclosures dampen the influencer industry, that could put a brake on its breakneck growth.

Such policy and product challenges aren’t simple to answer, but the intensity of the problem is only going to increase as more and more money flows into the influencer economy. This research clearly shows a wide gap between what the government requires and what affiliate marketers actually do, and that gap needs to be rectified.

No one wants to build a “feel good” internet

If there is one policy dilemma facing nearly every tech company today, it is what to do about “content moderation,” the almost-Orwellian term for censorship. Charlie Warzel of Buzzfeed pointedly asked the question a little more than a week ago: “How is it that the average untrained human can do something that multibillion-dollar technology companies that pride themselves…”