Category Archives: Google+

Google CEO admits company must better address the spread of conspiracy theories on YouTube

Google CEO Sundar Pichai admitted today that YouTube needs to do better in dealing with conspiracy content on its site that can lead to real-world violence. During his testimony on Tuesday before the House Judiciary Committee, the exec was questioned on how YouTube handles extremist content that promotes conspiracy theories like Pizzagate and, more recently, a Hillary Clinton-focused conspiracy theory dubbed Frazzledrip.

According to an article in Monday’s Washington Post, Frazzledrip is a variation on Pizzagate that began spreading on YouTube this spring.

In a bizarre series of questions, Rep. Jamie Raskin (D-MD) asked Pichai if he knew what Frazzledrip was.

Pichai replied that he was “not aware of the specifics about it.”

Raskin went on to explain that the recommendation engine on YouTube has been suggesting videos that claim politicians, celebrities and other leading figures were “sexually abusing and consuming the remains of children, often in satanic rituals.” He said these new conspiracist claims were echoing the discredited Pizzagate conspiracy, which two years ago led to a man firing shots into a Washington, D.C. pizzeria, in search of the children he believed were held as sex slaves by Democratic Party leaders.

He also explained the new Frazzledrip theory in more detail, which he read about in The Washington Post’s report about the still rampant hateful conspiracies being hosted by YouTube. This newer conspiracy claims that Hillary Clinton and longtime aide Huma Abedin sexually assaulted a girl and drank her blood.

The Post said some of the video clips were removed after first appearing in April, and had been debunked, but its review of the matter found dozens of videos where the claims were still being discussed. Combined, these videos had been viewed millions of times over the past eight months. In addition, the investigation found that YouTube’s search box would highlight these videos when people typed in terms like “HRC video” or “Frazzle.”

YouTube’s policy doesn’t prevent people from uploading falsehoods, the Post’s report noted.

Raskin asked Pichai about this type of extremist propaganda.

“What is your company policy on that? And are you trying to deal with it?” he questioned.

Pichai admitted, essentially, that YouTube needed to do better.

“We are constantly undertaking efforts to deal with misinformation. We have clearly stated policies and we have made lots of progress in many of the areas where over the past year — so, for example, in areas like terrorism, child safety, and so on,” said Pichai. “We are looking to do more,” he said.

As for the Frazzledrip theory specifically, Pichai said it was a recent development.

“But I’m committed to following up on it and making sure we are evaluating these against our policies,” the CEO promised.

The issue with videos like Frazzledrip is that YouTube’s current policies don’t fully encompass how to handle extremist propaganda. Instead, as the Post also said, its policies focus on videos with hateful, graphic and violent content directed at minorities and other protected groups. Meanwhile, it seeks to allow freedom of speech to others who upload content to its site, despite the disinformation they may spread or their potential to lead to violence.

The balance between free speech and content policies is a delicate matter — and an important one, given YouTube’s power to influence dangerous individuals. In addition to the Pizzagate shooter, the mass shooter who killed 11 people at the Pittsburgh synagogue in October had been watching neo-Nazi propaganda on YouTube, the Post’s report pointed out, in another example.

Asked what YouTube was doing about all this, Pichai didn’t offer specifics.

The CEO instead admitted that YouTube struggles with evaluating videos individually because of the volume of content it sees.

“We do get around 400 hours of video every minute. But it’s our responsibility, I think, to make sure YouTube is a platform for freedom of expression, but it’s responsible and contributes positively to society,” Pichai said. He added that its policies allow it to take down videos that “incite harm or hatred or violence.” But conspiracy videos don’t always directly incite violence — they just radicalize individuals, who then sometimes act out violently as a result.
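The scale problem Pichai alludes to is easy to quantify from the one figure he gave; a quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope arithmetic on the "400 hours of video every minute"
# figure Pichai cited, to show why per-video human review can't scale.

HOURS_PER_MINUTE = 400  # Pichai's figure

hours_per_day = HOURS_PER_MINUTE * 60 * 24    # minutes per hour * hours per day
years_per_day = hours_per_day / (24 * 365)    # hours in a (non-leap) year

print(f"{hours_per_day:,} hours of video uploaded per day")
print(f"≈ {years_per_day:.0f} years of viewing time arriving every single day")
```

At that volume, only automated systems plus targeted human review are even feasible — which is why the precise wording of YouTube's policies, not case-by-case judgment, ends up doing most of the work.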

“It’s an area we acknowledge there’s more work to be done, and we’ll definitely continue doing that,” Pichai said. “But I want to acknowledge there is more work to be done. With our growth comes more responsibility. And we are committed to doing better as we invest more in this area,” he said.

Watch Google CEO Sundar Pichai testify in Congress — on bias, China and more

Google CEO Sundar Pichai has managed to avoid the public political grillings that have come for tech leaders at Facebook and Twitter this year. But not today.

Today he will be in front of the House Judiciary Committee for a hearing entitled “Transparency & Accountability: Examining Google and its Data Collection, Use and Filtering Practices.”

The hearing kicks off at 10:00 ET — and will be streamed live via our YouTube channel.

Announcing the hearing last month, committee chairman Bob Goodlatte said it would “examine potential bias and the need for greater transparency regarding the filtering practices of tech giant Google”.

Republicans have been pressuring the Silicon Valley giant over what they claim is ‘liberal bias’ embedded at the algorithmic level.

This summer President Trump publicly lashed out at Google, expressing displeasure about news search results for his name in a series of tweets in which he claimed: “Google & others are suppressing voices of Conservatives and hiding information and news that is good.”

Google rejected the allegation, responding then that: “Search is not used to set a political agenda and we don’t bias our results toward any political ideology.”

In his prepared remarks ahead of the hearing, Pichai reiterates this point.

“I lead this company without political bias and work to ensure that our products continue to operate that way. To do otherwise would go against our core principles and our business interests,” he writes. “We are a company that provides platforms for diverse perspectives and opinions—and we have no shortage of them among our own employees.”

He also seeks to paint a picture of Google as a proudly patriotic “American company” — playing up its role as a creator of local jobs and a bolster for the wider US economy, likely in the hopes of defusing some of the expected criticism from conservatives on the committee.

However his statement makes no mention of a separate controversy that’s been dogging Google this year — after news leaked this summer that it had developed a censored version of its search service for a potential relaunch in China.

The committee looks certain to question Google closely on its intentions vis-a-vis China.

In statements ahead of the hearing last month, House majority leader, Kevin McCarthy, flagged up reports he said suggested Google is “compromising its core principles by complying with repressive censorship mandates from China”.

Trust in general is a key theme, with lawmakers expressing frustration at both the opacity of Google’s black-box algorithms, which ultimately shape content hierarchies on its platforms, and the difficulty they’ve had in getting facetime with its CEO to voice questions and concerns.

At a Senate Intelligence committee hearing three months ago, which was attended by Twitter CEO Jack Dorsey and Facebook COO Sheryl Sandberg, senators did not hide their anger that Pichai had turned down their invitation — openly ripping into company leaders for not bothering to show up. (Google offered to send its chief legal officer instead.)

“For months, House Republicans have called for greater transparency and openness from Google. Company CEO Sundar Pichai met with House Republicans in September to answer some of our questions. Mr. Pichai’s scheduled appearance in front of the House Judiciary Committee is another important step to restoring public trust in Google and all the companies that shape the Internet,” McCarthy wrote last month.

Other recent news that could inform additional questions for Pichai from the committee includes the revelation of yet another massive security breach at Google+, and a New York Times investigation into how mobile apps track users’ locations — with far more Android apps than iOS apps found to contain location-sharing code.

Tech giants offer empty apologies because users can’t quit

A true apology consists of a sincere acknowledgement of wrongdoing, a show of empathetic remorse for the wrong done and the harm it caused, and a promise of restitution by improving one’s actions to make things right. Without the follow-through, saying sorry isn’t an apology; it’s a hollow ploy for forgiveness.

That’s the kind of “sorry” we’re getting from tech giants — an attempt to quell bad PR and placate the afflicted, often without the systemic change necessary to prevent repeated problems. Sometimes it’s delivered in a blog post. Sometimes it’s in an executive apology tour of media interviews. But rarely is it in the form of change to the underlying structures of a business that caused the issue.

Intractable Revenue

Unfortunately, tech company business models often conflict with the way we wish they would act. We want more privacy but they thrive on targeting and personalization data. We want control of our attention but they subsist on stealing as much of it as possible with distraction while showing us ads. We want safe, ethically built devices that don’t spy on us but they make their margins by manufacturing them wherever’s cheap with questionable standards of labor and oversight. We want groundbreaking technologies to be responsibly applied, but juicy government contracts and the allure of China’s enormous population compromise their morals. And we want to stick to what we need and what’s best for us, but they monetize our craving for the latest status symbol or content through planned obsolescence and locking us into their platforms.

The result is that even if their leaders earnestly wanted to impart meaningful change to provide restitution for their wrongs, their hands are tied by entrenched business models and the short-term focus of the quarterly earnings cycle. They apologize and go right back to problematic behavior. The Washington Post recently chronicled a dozen times Facebook CEO Mark Zuckerberg has apologized, yet the social network keeps experiencing fiasco after fiasco. Tech giants won’t improve enough on their own.

Addiction To Utility

The threat of us abandoning ship should theoretically hold the captains in line. But tech giants have evolved into fundamental utilities that many have a hard time imagining living without. How would you connect with friends? Find what you needed? Get work done? Spend your time? What hardware or software would you cuddle up with in the moments you feel lonely? We live our lives through tech, have become addicted to its utility, and fear the withdrawal.

If there were principled alternatives to switch to, perhaps we could hold the giants accountable. But the scalability, network effects, and aggregation of supply by distributors have led to near monopolies in these core utilities. The second-place solution is often distant. What’s the next best social network that serves as an identity and login platform that isn’t owned by Facebook? The next best premium mobile and PC maker behind Apple? The next best mobile operating system for the developing world beyond Google’s Android? The next best ecommerce hub that’s not Amazon? The next best search engine? Photo feed? Web hosting service? Global chat app? Spreadsheet?

Facebook is still growing in the US & Canada despite the backlash, proving that tech users aren’t voting with their feet. And if not for a calculation methodology change, it would have added 1 million users in Europe this quarter too.

One of the few tech backlashes that led to real flight was #DeleteUber. Workplace discrimination, shady business practices, exploitative pricing and more combined to spur the movement to ditch the ride-hailing app. But what was different here is that US Uber users did have a principled alternative to switch to without much hassle: Lyft. The result was that “Lyft benefitted tremendously from Uber’s troubles in 2018,” eMarketer’s forecasting director Shelleen Shum told USA Today in May. Uber missed eMarketer’s projections while Lyft exceeded them, narrowing the gap between the car services. And meanwhile, Uber’s CEO stepped down as it tried to overhaul its internal policies.

But in the absence of viable alternatives to the giants, leaving these mainstays is inconvenient. After all, they’re the ones that made us practically allergic to friction. Even after massive scandals, data breaches, toxic cultures, and unfair practices, we largely stick with them to avoid the uncertainty of life without them. Even Facebook added 1 million monthly users in the US and Canada last quarter despite seemingly every possible source of unrest. Tech users are not voting with their feet. We’ve proven we can harbor ill will towards the giants while begrudgingly buying and using their products. Our leverage to improve their behavior is vastly weakened by our loyalty.

Inadequate Oversight

Regulators have failed to adequately step up either. This year’s congressional hearings about Facebook and social media often devolved into inane and uninformed questioning, like how Facebook earns money if it doesn’t charge users. “Senator, we run ads,” Facebook CEO Mark Zuckerberg said with a smirk. Other times, politicians were so intent on scoring partisan points by grandstanding or advancing conspiracy theories about bias that they were unable to make any real progress. A recent survey commissioned by Axios found that “In the past year, there has been a 15-point spike in the number of people who fear the federal government won’t do enough to regulate big tech companies — with 55% now sharing this concern.”

When regulators do step in, their attempts can backfire. GDPR was supposed to help tamp down the dominance of Google and Facebook by limiting how they could collect user data and making them more transparent. But the high cost of compliance simply hindered smaller players or drove them out of the market while the giants had ample cash to spend on jumping through government hoops. Google actually gained ad tech market share and Facebook saw the smallest loss, while smaller ad tech firms lost 20 or 30 percent of their business.

Europe’s GDPR privacy regulations backfired, reinforcing Google and Facebook’s dominance. Chart via Ghostery, Cliqz, and WhoTracksMe.

Even the Honest Ads Act, which was designed to bring political campaign transparency to internet platforms following election interference in 2016, has yet to be passed, despite support from Facebook and Twitter. There hasn’t been meaningful discussion of blocking social networks from acquiring their competitors in the future, let alone actually breaking Instagram and WhatsApp off of Facebook. Governments like the U.K.’s, which just forcibly seized documents related to Facebook’s machinations surrounding the Cambridge Analytica debacle, provide some indication of willpower. But clumsy regulation could deepen the moats of the incumbents, and prevent disruptors from gaining a foothold. We can’t depend on regulators to sufficiently protect us from tech giants right now.

Our Hope On The Inside

The best bet for change will come from the rank and file of these monolithic companies. With the war for talent raging, rock star employees able to have huge impact on products, and compensation costs to keep them around rising, tech giants are vulnerable to the opinions of their own staff. It’s simply too expensive and disruptive to have to recruit new high-skilled workers to replace those who flee.

Google declined to renew a contract with the government after 4,000 employees petitioned and a few resigned over Project Maven’s artificial intelligence being used to target lethal drone strikes. Change can even flow across company lines. Many tech giants, including Facebook and Airbnb, have removed their forced arbitration rules for harassment disputes after Google did the same in response to 20,000 of its employees walking out in protest.

Thousands of Google employees protested the company’s handling of sexual harassment and misconduct allegations on Nov. 1.

Facebook is desperately pushing an internal communications campaign to reassure staffers it’s improving in the wake of damning press reports from the New York Times and others. TechCrunch published an internal memo from Facebook’s outgoing VP of communications Elliot Schrage in which he took the blame for recent issues and encouraged employees to avoid finger-pointing; in the same memo, COO Sheryl Sandberg tried to reassure employees, writing, “I know this has been a distraction at a time when you’re all working hard to close out the year — and I am sorry.” These internal apologies could come with much more contrition and real change than those paraded for the public.

And so, after years of relying on these tech workers to build the products we use every day, we must now rely on them to save us from those products. It’s a weighty responsibility: to move their talents where the impact is positive, or to stand up against the business imperatives of their employers. We as the public and media must in turn celebrate when they do what’s right for society, even when it reduces value for shareholders. And we must accept that shaping the future for the collective good may be inconvenient for the individual.


Google lays outs narrow “EU election advertiser” policy ahead of 2019 vote

Google has announced its plan for combating election interference in the European Union, ahead of elections next May when up to 350 million voters across the region will vote to elect 705 Members of the European Parliament.

In a blog post laying out a narrow approach to democracy-denting disinformation, Google says it will introduce a verification system for “EU election advertisers to make sure they are who they say they are”, and require that any election ads disclose who is paying for them.

The details of the verification process are not yet clear so it’s not possible to assess how robust a check this might be.

But Facebook, which also recently announced checks on political advertisers, had to delay its UK launch of ID checks earlier this month, after the beta system was shown to be embarrassingly easy to game. So just because a piece of online content has an ‘ID badge’ on it does not automatically make it bona fide.

Google’s framing of “EU election advertisers” suggests it will exclude non-EU based advertisers from running election ads, at least as it’s defining these ads. (We’ve asked Google to confirm that.)

What’s very clear from the blog post is that the adtech giant is defining political ads as an extremely narrow category — with only ads that explicitly mention political parties, candidates or a current officeholder falling under the scope of the policy.

Here’s how Google explains what it means by “election ads”:

“To bring people more information about the election ads they see across Google’s ad networks, we’ll require that ads that mention a political party, candidate or current officeholder make it clear to voters who’s paying for the advertising.”

So any ads still intended to influence public opinion — and thus sway potential voters — but which cite issues, rather than parties and/or politicians, will fall entirely outside the scope of its policy.
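To make that gap concrete, here is a deliberately naive sketch of a filter scoped the way the policy is worded — everything in it (the names, the lists, the `is_election_ad` function) is hypothetical illustration, not Google's actual system:

```python
# Hypothetical sketch of a filter built to the letter of the stated policy:
# an ad only counts as an "election ad" if it explicitly mentions a party,
# candidate or current officeholder. All names below are made up.

PARTIES = {"example party"}        # placeholder list, not real data
CANDIDATES = {"jane candidate"}    # placeholder

def is_election_ad(ad_text: str) -> bool:
    """Flag only ads that explicitly name a party or candidate."""
    text = ad_text.lower()
    return any(name in text for name in PARTIES | CANDIDATES)

# An ad naming a candidate is caught...
assert is_election_ad("Vote for Jane Candidate on May 23!")
# ...but a divisive issue ad with no party or candidate mention sails through,
# even though it is plainly intended to sway voters.
assert not is_election_ad("Immigration is destroying our way of life. Act now.")
```

Anything scoped this way necessarily waves through the entire category of issue-based propaganda.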

Yet of course issues are material to determining election outcomes.

Issue-based political propaganda is also — as we all know very well now — a go-to tool for the shadowy entities using Internet platforms for highly affordable, mass-scale online disinformation campaigns.

The Kremlin seized on divisive issues for much of the propaganda it deployed across social media ahead of the 2016 US presidential elections, for example.

Russia didn’t even always wrap its politically charged infowar bombs in an ad format either.

All of which means that any election ‘security’ effort that fixes on a narrow definition (like “election ads”) seems unlikely to offer much more than a micro bump in the road for anyone wanting to pay to play with democracy.

The only real fix for this problem is likely full disclosure of all advertising and advertisers: who’s paying for every online ad, regardless of what it contains — plus a powerful interface for parsing that data mountain.

Of course neither Google nor Facebook is offering that — yet.

Because, well, this is self-regulation, ahead of election laws catching up.

What Google is offering for the forthcoming EU parliament elections is an EU-specific Election Ads Transparency Report (akin to the one it already launched for the US mid-terms) — which it says it will introduce (before the May vote) to provide a “searchable ad library to provide more information about who is purchasing election ads, whom they’re targeted to, and how much money is being spent”.

“Our goal is to make this information as accessible and useful as possible to citizens, practitioners, and researchers,” it adds.

The rest of its blog post is given over to puffing up a number of unrelated steps it says it will also take, in the name of “supporting the European Union Parliamentary Elections”, but which don’t involve Google itself having to be any more transparent about its own ad platform.

So it says it will —

  • work with data from Election Commissions across the member states to “make authoritative electoral information available and help people find the info they need to get out and vote”
  • offer in-person security training to the most vulnerable groups, who face increased risks of phishing attacks (“We’ll be walking them through Google’s Advanced Protection Program, our strongest level of account security and Project Shield, a free service that uses Google technology to protect news sites and free expression from DDoS attacks on the web.”)
  • collaborate — via its Google News Lab entity — with news organizations across all 27 EU Member States to “support online fact checking”. (The Lab will “be offering a series of free verification workshops to point journalists to the latest tools and technology to tackle disinformation and support their coverage of the elections”)

No one’s going to turn their nose up at security training and freebie resources.

But the scale of the disinformation challenge is rather larger and more existential than a few free workshops and an anti-DDoS tool can fix.

The bulk of Google’s padding here also fits comfortably into its standard operating philosophy where the user-generated content that fuels its business is concerned; aka ‘tackle bad speech with more speech’. Crudely put: More speech, more ad revenue.

Though, as independent research has repeatedly shown, fake news flies much faster and is much, much harder to unstick than truth.

Which means fact checkers, and indeed journalists, are faced with the Sisyphean task of unpicking all the BS that Internet platforms are liberally fencing and accelerating (and monetizing as they do so).

The economic incentives inherent in the dominant adtech platform of the Internet should really be front and center when considering the modern disinformation challenge.

But of course Google and Facebook aren’t going to say that.

Meanwhile lawmakers are on the back foot. The European Commission has done something, signing tech firms up to a voluntary Code of Practice for fighting fake news — Google and Facebook among them.

Although, even in that dilute, non-legally binding document, signatories are supposed to have agreed to take action to make both political advertising and issue-based advertising “more transparent”.

Yet here’s Google narrowly defining election ads in a way that lets issues slide on past.

We asked the company what it’s doing to prevent issue-based ads from interfering in EU elections. At the time of writing it had not responded to that question.

Safe to say, ‘election security’ looks to be a very long way off indeed.

Not so the date of the EU poll. That’s fast approaching: May 23 through 26, 2019.

How a small French privacy ruling could remake adtech for good

A ruling in late October against a little-known French adtech firm, which popped up on the national data watchdog’s website earlier this month, is causing ripples of excitement among privacy watchers in Europe who believe it signals the beginning of the end for creepy online ads.

The excitement is palpable.

Impressively so, given that the dry CNIL decision against mobile ‘demand-side platform’ Vectaury was published only in the regulator’s dense native French legalese.

Digital advertising trade press AdExchanger picked up on the decision yesterday.

Here’s the killer paragraph from CNIL’s ruling — translated into “rough English” by my TC colleague Romain Dillet:

The requirement based on the article 7 above-mentioned isn’t fulfilled with a contractual clause that guarantees validly collected initial consent. The company VECTAURY should be able to show, for all data that it is processing, the validity of the expressed consent.

In plainer English this is being interpreted by data experts as the regulator stating that consent to processing personal data cannot be gained through a framework arrangement which bundles a number of uses behind a single ‘I agree’ button that, when clicked, passes consent to partners via a contractual relationship.

CNIL’s decision suggests that bundling consent to partner processing in a contract is not, in and of itself, valid consent under the European Union’s General Data Protection Regulation (GDPR) framework.

Consent under this regime must be specific, informed and freely given. It says as much in the text of GDPR.

But now, on top of that, the CNIL’s ruling suggests a data controller has to be able to demonstrate the validity of the consent — so cannot simply tuck consent inside a contractual ‘carpet bag’ that gets passed around to everyone else in their chain as soon as the user clicks ‘I agree’.
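In code terms, the distinction the regulator is drawing might be sketched like this — a hypothetical illustration, nothing CNIL prescribes: a contractual clause merely asserts that consent was collected somewhere upstream, whereas a stored per-user record lets a controller demonstrate it on request:

```python
# Illustrative sketch (not CNIL-prescribed code) of what "demonstrating"
# consent could look like: a per-user record of exactly what was agreed to,
# when, and via which consent screen. All names here are made up.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    user_id: str
    purposes: list          # the specific processing purposes agreed to
    collected_at: datetime  # when consent was given
    ui_version: str         # which consent screen the user actually saw

def demonstrate_consent(records: dict, user_id: str, purpose: str) -> bool:
    """Can the controller show this user's consent for this specific purpose?"""
    rec = records.get(user_id)
    return rec is not None and purpose in rec.purposes

records = {
    "user-123": ConsentRecord(
        user_id="user-123",
        purposes=["ad_personalisation"],
        collected_at=datetime.now(timezone.utc),
        ui_version="consent-screen-v2",
    )
}

# Consent is demonstrable only per user, per purpose — a blanket contractual
# clause covering "all partners, all purposes" produces no such record.
assert demonstrate_consent(records, "user-123", "ad_personalisation")
assert not demonstrate_consent(records, "user-123", "geolocation_sharing")
assert not demonstrate_consent(records, "user-456", "ad_personalisation")
```

The point of the sketch is the asymmetry: a contract passes a boolean down the chain; a record lets each controller answer the regulator's question for every user and every purpose.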

This is important because many widely used digital advertising consent frameworks rolled out to websites in Europe this year — in claimed compliance with GDPR — are using a contractual route to obtain consent, and bundling partner processing behind often hideously labyrinthine consent flows.

The experience for web users in the EU right now is not great. But it could be leading to a much better Internet down the road.

Where’s the consent for partner processing?

Even on a surface level the current crop of confusing consent mazes look problematic.

But the CNIL ruling suggests there are deeper and more structural problems lurking and embedded within. And as regulators dig in and start to unpick adtech contradictions it could force a change of mindset across the entire ecosystem.

As ever, when talking about consent and online ads the overarching point to remember is that no consumer given a genuine full disclosure about what’s being done with their personal data in the name of behavioral advertising would freely consent to personal details being hawked and traded across the web just so a bunch of third parties can bag a profit share.

This is why, despite GDPR being in force (since May 25), there are still so many tortuously confusing ‘consent flows’ in play.

The long-standing online T&Cs trick of obfuscating and socially engineering consent remains an unfortunately standard playbook. But, less than six months into GDPR we’re still very much in a ‘phoney war’ phase. More regulatory rulings are needed to lay down the rules by actually enforcing the law.

And CNIL’s recent activity suggests more to come.

In the Vectaury case, the mobile ad firm used a template framework for its consent flow that had been created by industry trade association and standards body, IAB Europe.

It did make some of its own choices, using its own wording on an initial consent screen and pre-ticking the purposes (another big GDPR no-no). But the bundling of data purposes behind a single opt in/out button is the core IAB Europe design. So CNIL’s ruling suggests there could be trouble ahead for other users of the template.

IAB Europe’s CEO, Townsend Feehan, told us it’s working on a statement reaction to the CNIL decision but suggested Vectaury fell foul of the regulator because it may not have implemented the “Transparency & Consent Framework-compliant” consent management platform (CMP) framework — as it’s tortuously known — correctly.

So either “the ‘CMP’ that they implemented did not align to our Policies, or choices they could have made in the implementation of their CMP that would have facilitated compliance with the GDPR were not made”, she suggested to us via email.

Though that sidesteps the contractual crux point that’s really exciting privacy advocates — and making them point to the CNIL as having slammed the first of many unbolted doors.

The French watchdog has made a handful of other decisions in recent months involving geolocation-harvesting adtech firms, also for processing data without consent.

So regulatory activity on the GDPR+adtech front has been ticking up.

Its decision to publish these rulings suggests it has wider concerns about the scale and privacy risks of current programmatic ad practices in the mobile space than can be attached to any single player.

So the suggestion is that just publishing the rulings looks intended to put the industry on notice…

Meanwhile adtech giant Google has also made itself unpopular with publisher ‘partners’ over its approach to GDPR by forcing them to collect consent on its behalf. And in May a group of European and international publishers complained that Google was imposing unfair terms on them.

The CNIL decision could sharpen that complaint too — raising questions over whether audits of publishers that Google said it would carry out will be enough for the arrangement to pass regulatory muster.

For a demand-side platform like Vectaury, which was acting on behalf of more than 32,000 partner mobile apps with user eyeballs to trade for ad cash, achieving GDPR compliance would mean either asking users for genuine consent and/or having a very large number of contracts that it’s doing actual due diligence on.

Yet Google is orders of magnitude more massive of course.

The Vectaury file gives us a fascinating little glimpse into adtech ‘business as usual’. Business which also wasn’t, in the regulator’s view, legal.

The firm was harvesting a bunch of personal data (including people’s location and device IDs) on its partners’ mobile users via an SDK embedded in their apps, and receiving bids for these users’ eyeballs via another standard piece of the programmatic advertising pipe — ad exchanges and supply side platforms — which also get passed personal data so they can broadcast it widely via the online ad world’s real time bidding (RTB) system. That’s to solicit potential advertisers’ bids for the attention of the individual app user… The wider the personal data gets spread, the more potential ad bids.

That scale is how programmatic works. It also looks horrible from a GDPR ‘privacy by design and default’ standpoint.
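For a sense of what actually gets broadcast, here is a simplified bid request loosely modelled on the IAB's OpenRTB format — all values are made up, and the real format carries many more fields:

```python
import json

# Simplified, illustrative bid request loosely modelled on the IAB's OpenRTB
# format (every value here is made up). This is the kind of payload a
# supply-side platform broadcasts to many bidders at once — note how much
# personal data rides along with the ad slot itself.
bid_request = {
    "id": "req-0001",                                   # auction ID
    "imp": [{"id": "1", "banner": {"w": 320, "h": 50}}],  # the ad slot for sale
    "app": {"bundle": "com.example.app"},               # hypothetical publisher app
    "device": {
        "ifa": "6D92078A-8246-4BA4-AE5B-76104861E7DC",  # resettable ad ID
        "geo": {"lat": 48.8566, "lon": 2.3522},         # precise location
    },
    "user": {"id": "user-abc-123"},                     # platform's ID for this person
}

payload = json.dumps(bid_request)
# In real-time bidding this payload goes out to tens or hundreds of bidders
# in parallel; every recipient — winner or not — now holds the device ID
# and location.
print(payload)
```

Every bidder that receives this payload keeps the personal data whether or not it wins the auction, which is exactly the wide, uncontrolled spread the GDPR complaints target.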

The sprawling process of programmatic explains the very long list of ‘partners’ nested non-transparently behind the average publisher’s online consent flow. The industry, as it is shaped now, literally trades on personal data.

So if the consent rug it’s been squatting on for years suddenly gets ripped out from underneath it there would need to be radical reshaping of ad targeting practices to avoid trampling on EU citizens’ fundamental rights.

GDPR’s really big change was supersized fines. So ignoring the law would get very expensive.

Oh hai real time bidding!

In Vectaury’s case CNIL discovered the company was holding the personal data of a staggering 67.6 million people when it conducted an on-site inspection of the company in April 2018.

That already sounds like A LOT of data for a small mobile adtech player. Yet it might actually have been a tiny fraction of the personal data the company was routinely handling — given that Vectaury’s own website claims 70% of collected data is not stored.

The decision carried no fine, but CNIL ordered the firm to delete all data it had not already deleted (having judged the collection illegal given that consent was not valid), and to stop processing data without consent.

But given the personal-data-based hinge of current-gen programmatic adtech that essentially looks like an order to go out of business. (Or at least out of that business.)

And now we come to another interesting GDPR adtech complaint that’s not yet been ruled on by the two DPAs in question (Ireland and the UK) — but which looks even more compelling in light of the CNIL Vectaury decision because it picks at the adtech scab even more daringly.

Filed last month with the Irish Data Protection Commission and the UK’s ICO, this adtech complaint — the work of three individuals, Johnny Ryan of private web browser Brave; Jim Killock, exec director of digital and civil rights group, the Open Rights Group; and University College London data protection researcher, Michael Veale — targets the RTB system itself.

Here’s how Ryan, Killock and Veale summarized the complaint when they announced it last month:

Every time a person visits a website and is shown a “behavioural” ad on a website, intimate personal data that describes each visitor, and what they are watching online, is broadcast to tens or hundreds of companies. Advertising technology companies broadcast these data widely in order to solicit potential advertisers’ bids for the attention of the specific individual visiting the website.

A data breach occurs because this broadcast, known as a “bid request” in the online industry, fails to protect these intimate data against unauthorized access. Under the GDPR this is unlawful.

The GDPR, Article 5, paragraph 1, point f, requires that personal data be “processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss.” If you cannot protect data in this way, then the GDPR says you cannot process the data.

Ryan tells TechCrunch that the crux of the complaint is not related to the legal basis of the data sharing but rather focuses on the processing itself — arguing “that it itself is not adequately secure… that there aren’t adequate controls”.

Though he says there’s a consent element too, and so sees the CNIL ruling bolstering the RTB complaint. (On that keep in mind that CNIL judged Vectaury should not have been holding the RTB data of 67.6M people because it did not have valid consent.)

“We do pick up on the issue of consent in the complaint. And this particular CNIL decision has a bearing on both of those issues,” he argues. “It demonstrates in a concrete example that involved investigators going into physical premises and checking the machines — it demonstrates that even one small company was receiving tens of millions of people’s personal data in this illegal way.

“So the breach is very real. And it demonstrates that it’s not unreasonable to suggest that the consent is meaningless in any case.”

Reaching for a handy visual explainer, he continues: “If I leave a briefcase full of personal data in the middle of Charing Cross station at 11am and it’s really busy that’s a breach. That would have been a breach back in the 1970s. If my business model is to drive up to Charing Cross station with a dump-truck and dump briefcases onto the street at 11am in the full knowledge that my business partners will all scramble around and try and grab them — and then to turn up at 11.01am and do the same thing. And then 11.02am. And every microsecond in between. That’s still a fucking data breach!

“It doesn’t matter if you think you’ve consent or anything else. You have to [comply with GDPR Article 5, paragraph 1, point f] in order to even be able to ask for a legal basis. There are plenty of other problems but that’s the biggest one that we highlighted. That’s our reason for saying this is a breach.”

“Now what CNIL has said is this company, Vectaury, was processing personal data that it did not lawfully have — and it got them through RTB,” he adds, spelling the point out. “So back to the GDPR — GDPR is saying you can’t process data in a way that doesn’t ensure protection against unauthorized or unlawful processing.”

In other words, RTB as a funnel for processing personal data looks to be on inherently shaky ground because it’s inherently putting all this personal data out there and at risk…

What’s bad for data brokers…

In another loop back, Ryan says the regulators have been in touch since their RTB complaint was filed to invite them to submit more information.

He says the CNIL Vectaury decision will be incorporated into further submissions, predicting: “This is going to be bounced around multiple regulators.”

The trio is keen to generate extra bounce by working with NGOs to enlist other individuals to file similar complaints in other EU Member States — to make the action a pan-European push, just like programmatic advertising itself.

“We now have the opportunity to connect our complaint with the excellent work that Privacy International has done, showing where these data end up, and with the excellent work that CNIL has done showing exactly how this actually applies. And this decision from CNIL takes, essentially, my report that went with our complaint and shows exactly how that applies in the real world,” he continues.

“I was writing in the abstract — CNIL has now made a decision that is very much not in the abstract, it’s in the real world affecting millions of people… This will be a European-wide complaint.”

But what does programmatic advertising that doesn’t entail trading on people’s grubbily obtained personal data actually look like? If there were no personal data in bid requests, Ryan believes quite a few things would happen. Such as, for example, the demise of clickbait.

“There would be no way to take your TechCrunch audience and buy it cheaper on some shitty website. There would be no more of that arbitrage stuff. Clickbait would die! All that nasty stuff would go away,” he suggests.

(And, well, full disclosure: We are TechCrunch — so we can confirm that does sound really great to us!)

He also reckons ad values would go up. Which would also be good news for publishers. (“Because the only place you could buy the TechCrunch audience would be on TechCrunch — that’s a really big deal!”)

He even suggests ad fraud might shrink because the incentives would shift. Or at least they could so long as the “worthy” publishers that are able to survive in the new ad world order don’t end up being complicit with bot fraud anyway.

As it stands, publishers are being squeezed between the twin plates of the dominant adtech platforms (Google and Facebook), having to give up a majority of their ad revenue — leaving the media industry with a shrinking slice of ad revenues (that can be as lean as ~30%).

That then has a knock-on impact on funding newsrooms and quality journalism. And, well, on the wider web too, given all the weird incentives that operate in today’s Internet, dominated by big tech social media platforms.

Meanwhile, a privacy-sucking programmatic monster is something that only shadowy background data brokers, which lack any meaningful relationship with the people whose data feeds the beast, could truly love.

And, well, Google and Facebook.

Ryan’s view is that the reason an adtech duopoly exists boils down to the “audience leakage” being enabled by RTB. Leakage which, in his view, also isn’t compliant with EU privacy laws.

He reckons the fix for this problem is equally simple: Keep doing RTB but without any personal data.

A real-time ad bidding system that’s been stripped of personal data does not mean no targeted ads. It could still support ad targeting based on real-time factors such as an approximate location (say to a city region) and/or generic and aggregated data.

Crucially, it would not use unique identifiers that enable linking ad bids to an individual’s entire digital footprint and bid request history — as is the case now. Which essentially translates into: RIP privacy rights.
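As an illustration of what that stripping could look like, here's a hypothetical sketch that removes the identifying fields from an OpenRTB-style bid request before broadcast, leaving only coarse, non-identifying targeting signals. The field names and the `strip_personal_data` helper are invented for illustration; a genuinely privacy-safe RTB would need this enforced across the whole ecosystem, not bolted on by one party.

```python
import copy

def strip_personal_data(bid_request):
    """Return a copy of a bid request with unique identifiers removed
    and precise location coarsened to a city-level region.

    Hypothetical sketch only; field names follow OpenRTB loosely.
    """
    req = copy.deepcopy(bid_request)
    device = req.get("device", {})
    # Drop identifiers that let bidders link this auction to a person.
    for key in ("ifa", "ip", "dpidsha1", "macsha1"):
        device.pop(key, None)
    # Replace precise coordinates with a coarse region label.
    if "geo" in device:
        device["geo"] = {"region": "city-level-only"}
    # No cross-request user ID at all.
    req.pop("user", None)
    return req

original = {
    "id": "req-8f3a",
    "device": {"ifa": "5e2a-ad-id", "ip": "203.0.113.7",
               "geo": {"lat": 48.8566, "lon": 2.3522}},
    "user": {"id": "exchange-user-123"},
}
clean = strip_personal_data(original)
```

Bidders would still see the auction, the app context and a rough region — just nothing that ties the bid to one person's history.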

Ryan argues that RTB without personal data would still offer plenty of “value” to advertisers — who could still reach people based on general locations and via real-time interests. (It’s a model that sounds much like what privacy search engine DuckDuckGo is doing, and which has also been growing.)

The really big problem, though, is turning the behavioral ad tanker around. Given that the ecosystem is embedded, even as the duopoly milks it.

That’s also why Ryan is so hopeful now, though, having parsed the CNIL decision.

His reading is that regulators will play a decisive role in pulling the ad industry’s trigger — and forcing through much-needed change in its targeting behavior.

“Unless the entire industry moves together, no one can be the first to remove personal data from bid requests but if the regulators step in in a big way… and say you’re all going to go out of business if you keep putting personal data into bid requests then everyone will come together — like the music industry was forced to eventually, under Steve Jobs,” he argues. “Everyone can together decide on a new short term disadvantageous but long term highly advantageous change.”

Of course such a radical reshaping is not going to happen overnight. Regulatory triggers tend to be slow motion unfoldings at the best of times. You also have to factor in the inexorable legal challenges.

But look closely and you’ll see both momentum massing behind privacy — and regulatory writing on the wall.

“Are we going to see programmatic forced to be non-personal and therefore better for every single citizen of the world (except, say, if they work for a data broker),” adds Ryan, posing his own concluding question. “Will that massive change, which will help society and the web… will that change happen before Christmas? No. But it’s worth working on. And it’s going to take some time.

“It could be two years from now that we have the finality. But a finality there will be. Detroit was only able to fight against regulation for so long. It does come.”

Who’d have thought ‘taking back control’ could ever sound so good?

Read the mud-slinging pitches Facebook’s PR firm sent us 

Facebook’s latest PR crisis has cast a lurid spotlight on a GOP-led publicity firm called Definers Public Affairs, after a New York Times investigation revealed last week the firm had sought to discredit Facebook critics by, in one instance, linking them to the liberal financier George Soros — a long-time target of anti-semitic conspiracy theories.

The sight of any company paying a firm to leverage anti-semitic and antisocial sentiment on its behalf is, to put it very politely, not a good look.

For Facebook, whose platform is aflame with socially divisive fakes, it’s bombshell bad news.

Although it’s not the only tech firm caught tapping Definers’ oppo research tactics. An internal personnel announcement the PR firm emailed us last month, in happier times for its own reputation, detailing promotions and moves in its Washington office, enthused about Definers adding “three new team members to its Bay Area office in California”.

“Today, Definers is a team of 40 with locations in Washington, D.C., San Francisco, and an affiliate operation in London,” the upbeat announcement ended.

How well the Definers brand survives its brush with Facebook remains to be seen.

Tarnishing

Facebook was quick to issue a rebuttal to the NYT article, claiming it had never asked Definers to generate fake news or anti-semitic memes in an attempt to smear its critics.

But it could not deny it had hired a mud-slinger in the first place, raising questions about due diligence, business oversight and, well, whether Facebook has any perspective on itself at all in the midst of a global brand trust scandal.

Zooming out for a second, you do also have to pause and wonder at quite how radioactive the corporate culture must be when the ‘solution’ to a string of hugely damaging disinformation scandals is to reach for whataboutery and even actual fake news, as the NYT has claimed, to try to muddy the waters in your favor.

It’s almost as if manipulation is in the corporate DNA.

Though again Facebook has denied knowledge of exactly what Definers was up to on its behalf. Yet not knowing isn’t any kind of defense when your business stands accused of defective oversight, self-serving opacity and having a vacuum where its moral compass should be. Accountability? Facebook’s algorithms keep saying no.

It’s still not clear which individual (or individuals) at Facebook actually signed on the line to put a controversial PR outfit to work slinging mud on its behalf.

In a call with reporters the day after the NYT story broke Facebook’s founder Mark Zuckerberg claimed not to know — suggesting: “Someone on our comms team must have hired them.”

He then went on to imply — in the same breath — that there could be more skeletons in the closet, reaching for his favorite solution to self-made scandals (another self-audit), by saying: “In general we need to go through and look at all the relations we have and see if there are more like this.”

As we reported earlier Facebook’s comms department has a bunch of ties to Definers. While Joel Kaplan, its longtime chief lobbyist, looks a very likely candidate for an intimate acquaintance with ‘oppo research’ dark arts — if indeed COO Sheryl Sandberg is in the clear on this one.

But without an actual answer from Facebook we’re left to speculate.

Meanwhile, Facebook users, investors and lawmakers should absolutely be left staggered at the WTFuckery of all this. How is it possible that no one in senior Facebook management knew what its left hand was doing? Where was even basic oversight of its own crisis PR response?

And who in its exec team actually feels accountability for all these fuck ups, since no one with actual responsibility has fallen on their sword (though CSO Alex Stamos left recently, apparently of his own volition) — despite 2018 being another annus horribilis for Facebook, with a freshly cracked Pandora’s box of privacy scandals, trust breaches and PR own-goals.

Zuckerberg’s artful political question-dodging on home turf and over the pond, in the European parliament, has merely served to further enrage lawmakers who — much like journalists — really don’t like being fobbed off with PR guff.

As a strategy the tactic necessarily burns its own runway. And it already looks to have boxed Facebook’s leadership in.

This is also — let’s not forget — the year that Zuckerberg made it his personal mission to ‘fix Facebook’. Frankly he might have had more success with another f-word.

Mud sticks

Whoever at Facebook made the call to bring in Definers opened the door to dirt-digging and smear tactics that are euphemistically passed off in political circles with the vanilla-sounding label of ‘opposition research’.

More knowingly, it’s referred to as ‘the dark arts’.

The basic modus operandi is to locate (or indeed generate) selective information and seed it to the media (or, nowadays, the socials) with the intention of discrediting an opponent. 

These tactics are typically associated with the free-for-all of campaign season politics. And even there it’s always a dirty, unpleasant and ugly business.

Smear tactics and cynically spun counter narratives are also of course the bread and butter of murky interest groups seeking to manipulate public opinion without disclosing their actual agenda (and funders).

Plenty of wealthy individuals and industry groups have been fingered on the non-transparent lobbying front. And social media platforms like Facebook have, ironically enough, made it easier for shadowy agenda-pushers to deploy astroturfing techniques to mask and pass off their self-interested lobbying as grassroots activism — and thus to try to shift public opinion without being caught in the act.

Facebook engaging a PR firm to fling mud on its behalf squares this virtue-less circle.

And the connective tissue is that all these self-interests are being very well-served indeed by unregulated social media.

Since the NYT story broke, Facebook has claimed journalists were well aware that Definers was working on its behalf. But the truth is rather murkier there too.

We checked our inboxes and none of the pitches Definers sent to TechCrunch made an explicit disclosure that the messages they contained had been paid for by Facebook to push a pro-Facebook agenda. They all required the recipient to join those dots themselves.

A proper journalist engaging their critical faculties should have been able to deduce Facebook was the paying customer, given the usually obvious skew.

But if Definers was also sending this stuff (and indeed worse things than we were pitched) out more widely, to content seeders and fencers that trade on framed outrage to drive online clicks, their tasty-sounding tidbits would not have been so critically parsed. And angles they were pushing likely still flowed where they could influence opinion — thanks to the ‘inverse’ osmosis of social media.

(As far as we can tell none of the Definers’ oppo research pitches that we received ended up in a TechCrunch article — well, until now… )

You might find it interesting…

Here’s an example of Definers’ oppo mud-slinging we were sent targeting Apple and Google on Facebook’s behalf:

Just came across this – thought you might find it interesting: https://digitalcontentnext.org/blog/2018/08/21/google-data-collection-research/

“A major part of Google’s data collection occurs while a user is not directly engaged with any of its products. The magnitude of such collection is significant, especially on Android mobile devices, arguably the most popular personal accessory now carried 24/7 by more than 2 billion people.”
The study’s findings are rather shocking… It really highlights how other tech companies should be looked at critically – scrutiny shouldn’t just be on FB for data misuse. Apple & Google have been perpetrators of data abuse as well… 

“Scrutiny shouldn’t just be on FB for data misuse” is the key line there, though it’s still hardly a plain English disclosure that Facebook paid for the message to be sent.

We received multiple Definers’ pitches on behalf of what looks to be three different tech companies — and only one of these is explicitly badged as a press release from the firm paying Definers to do PR. (In that case, e-scooter startup Lime.)

We weren’t entirely convinced even then — given the sender was a random public affairs company — and ended up emailing our own Lime contacts and CCing their press email to double-check.

Generally, though, the Definers pitches we received looked nothing like traditional press releases.

A different pitch that was also sent (we must assume) on Lime’s behalf sought not, as the aforementioned press release did, to trumpet a positive PR goal (of Lime shooting to make its global fleet carbon neutral) but to fling dirt on rival scooter startup, Bird.

Dirt doesn’t fit in a traditional press release template though. So instead we got this email…

I read your piece on Bird’s custom scooter and delivery. Just wanted to flag that Bird’s numbers seem off based on what they have listed on their website: https://www.bird.co/
They’ve taken a bunch off the list. Seems odd since they just announced 100 cities two weeks ago. Thought you’d find this interesting.

Other similarly mud-slinging Definers pitches we received included more fulsome info dumps in the body of the email — not just a link or few lines trailing something selectively “interesting”.

Sometimes these data dumps came with key lines highlighted. Sometimes there was also a chattily worded email intro (like the one above) to frame the content — typically including a clickbait-style appeal to journalistic curiosity. (The word “interesting” seems to be a popular choice with Definers flaks.)

At other times the pitches didn’t include much or any foreplay at all.

One “ICYMI” email subject line pitch was introed in the email body text without fanfare — with just two words: “see below”. Another had no intro text at all.

The “see below” content in the aforementioned pitch referred to this Mashable article — literally pasted word for word but with two paragraphs highlighted, drawing attention to the author’s claim that the next iPhone “could have significantly slower LTE data speeds than competing Android phones”; and to an “independent” speedtest study cited in the article (which was actually carried out by a company owned by Mashable’s own parent company… ) — and which the author concludes “revealed just how inferior Intel’s modems are compared to Qualcomm’s latest modems”.

It’s not yet been confirmed who Definers was working for to spread that particular cut-n-paste conjecture — but one obvious candidate is Qualcomm. (And for the why, the Mashable article includes an accidentally helpful pointer, noting the pair’s legal disputes over patent royalties and Apple moving away from using Qualcomm chips.)

Another “ICYMI” cut-n-paste job that Definers sent us also targeted Apple — though likely, in that case, the mud was being flung on Facebook’s behalf.

Here the pasted content was this article, by the National Legal and Policy Center, reporting on an Apple shareholder filing a proposal for the company to make a report on human rights and free speech.

So for free speech read ‘Facebook’ as the most likely self-interested source.

(The NYT article also suggested Zuckerberg was especially unhappy about Apple CEO Tim Cook publicly blasting privacy hostile business models — suggesting Facebook might have been keen to find a way to throw shade at its claim to ‘human rights’-based moral high ground.)

As an aside, the Apple-China talking point surfaced by Definers via the aforementioned National Legal and Policy Center article is also, interestingly enough, something Facebook’s former CSO Stamos has sought to hammer hard on in public…

And while Stamos may have left the building at 1 Hacker Way he’s continued to speak up on behalf of his former employer and its choices in public — and liberally fling blame at Facebook’s critics.

That Facebook’s ex-CSO is using the exact same attack points as Definers is interesting in terms of the PR alignment. How deep does that strategic ‘infowars’ rabbit hole go?

Returning again to Definers, in another instance the firm reached out to me via email to “pass along some context” after I wrote this article — about a tool created by Oxford University’s Oxford Internet Institute to aggregate junk news being shared on Facebook.

“Facebook ahas [sic] been working to curb the proliferation of this kind of news and there have been encouraging results from three different studies in the past month,” wrote the flak, flagging three studies to back up his claim — summarizing them in short bullet points (without linking to the cited research).

The ‘context’ being pitched here boiled down to:

  • an academic study that Definers claimed suggested “interactions with fake news sites declined by more than half on Facebook after the 2016 election”;
  • a metric created by another university to measure the Facebook distribution of the number of sites that share misinformation — again with the pitch claiming ‘dramatic improvements’ for Facebook at the same time as flinging shade on Twitter (Definers wrote: “The metric was very high for Facebook in 2016 — much higher than Twitter’s — but beginning in mid 2017 it was dramatically improved, and now Facebook has 50% less of what the University of Michigan calls “Iffy Quotient content” than Twitter”);
  • and a study by a French newspaper looking at 630 French websites and claiming that “Facebook engagement with ‘unreliable or dubious sites’ has halved in France since 2015”.

As another aside Facebook policy staffers recently cited the exact same ‘Iffy Quotient’ metric in a letter to the UK’s DCMS committee — which has been running a multi-month enquiry into online disinformation and trying (unsuccessfully) to get Zuckerberg to personally answer its questions — as part of several pages of ‘contextual filler’ Facebook used to pad out yet another letter to UK lawmakers that contained the word ‘no’.

Committee chair Damian Collins was not impressed by Facebook’s attention-sapping tactics.

“We will not let the matter rest there, and are not reassured in any way by the corporate puff piece that passes off as Facebook’s letter back to us,” he wrote. “The fact that the University of Michigan believes that Facebook’s ‘Iffy Quotient’ scores have recently improved means nothing to the victims of Facebook data breaches.”

Well, quite.

Further reflections

Facebook’s approach to its own publicity brings to mind something that academic and techno-sociologist Zeynep Tufekci wrote earlier this year — when she asserted: “The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself.”

Although, in that moment, she was actually talking more about online disinformation tactics than the distribution platforms themselves.

Yet the point does seem to stand when, in Facebook’s case, the platform business appears to be reflecting (or, well, channeling, via its PR) the same problematic qualities that mire content on Facebook itself.

Again, returning to how Definers sought to engage with us, in another more labor intensive episode, it pitched another TechCrunch journalist — ahead of a Senate Intelligence hearing which was attended by Facebook’s COO, Sheryl Sandberg and Twitter’s CEO Jack Dorsey. But not by any senior execs from Google.

Here the firm worked to flag up and critically frame Google’s absence, after the Facebook adtech rival had declined to send either of the two C-suite execs the committee had asked for.

“Hey… Are you covering Google’s lack of cooperation for next week’s Senate Intel hearing with Twitter & FB? If so, let me know. May have a new angle for you,” was its opening gambit to a TC colleague in an email sent on the last day of August (the committee hearing took place on September 5) — which earned it a “happy to entertain a pitch” response from the journalist in question.

Definers then suggested a phone call. But after about an hour of radio silence it emailed again, now fleshing out its ‘Google isn’t taking the committee’s concerns seriously’ angle:

I’m sure this is on your radar, but wanted to flag something for you. Google isn’t sending an exec to testify at next week’s Senate Intel hearing:
From all reports on the Hill, it will be an empty chair. Given recent news that disruption campaigns have been launched by the Russians and Iranians, it seems very irresponsible on their part. After all, Google is not only the most powerful search engine, it also has one of the largest market shares on digital ads.
I think there is an interesting story on how Twitter and Facebook (while both are far from perfect) are taking the committee’s concerns seriously and Google is absent.
Thoughts?

Note the “both are far from perfect” fillip aimed at Twitter and Facebook to lay down a little light covering fire for a reframed double-barrel assault on Google as the really big baddie for not even showing up.

A few days later the same Definers’ staffer pitched this reporter again, now the day before the Senate hearing — offering “an interesting backgrounder re the committee’s members’ campaign expenses for FB ads, campaign contributions from big tech, and the data tools senators are using to track visitors to their website”.

After getting through on the phone this time they emailed to hammer home a final thought: “Check out the attached docs – there’s a level of hypocrisy here especially before tomorrow’s hearing with FB & Twitter”.

More smear tactics — now aimed directly at the lawmakers who would be asking Facebook tough questions by seeking to attack their moral right to defend privacy.

A month later the Definers operator was back pitching the same TC reporter. Though here it’s even less clear who’s the paymaster behind this particular pitch.

“Hey – any interest in taking a look at Apple employees’ political contributions from the last 14 years or so?” the PR opened.

The pitch was for a report written by another Washington-based PR firm, called GovPredict — whose website describes its business as “research, analytics, and actionable intelligence for winning public affairs campaigns” — which Definers said it could share ahead of release time, under embargo.

The report in question consisted of a six-page proprietary “analysis” conducted by the other PR firm which claimed to summarize the recipients of political contributions of Apple employees — slicing the self-structured data by political party and breaking out contributions to key individuals (e.g. Hillary Clinton, Obama etc).

“In total, 91% of Apple employee contributions have gone to Democrats, and 9% to Republicans,” concluded the ‘report’ — which had been compiled by a PR firm whose stated business is “winning public affairs campaigns” on behalf of its clients, and which was seeded to a journalist by another PR firm being paid by an unknown tech firm to daub Apple in partisan colors.

Whoever was paying to paint a picture of Apple in near pure Democrat blue clearly had an agenda to peddle. Just as clearly, they didn’t want to be seen doing the peddling themselves.

Nor did they need to — given the mushrooming influencer PR industry that’s more than happy to be paid to fling mud on the tech industry’s behalf. (Even, seemingly, at the same company for different paying clients. Nice but dirty business if you can get it then.)

Yet many of the wider problems of big tech which are the root cause of their brand trust crises boil down to a problematic lack of transparency. And the chain-linked lack of accountability that flows from that.

Throwing more mud at this problem doesn’t look like a fix for or an answer to anything.

Nor is it a great look for a scandal-hit adtech giant like Facebook, whose founder claims to be hard at work fixing a flawed platform philosophy that’s failed repeatedly on integrity, transparency and responsibility, to be found dipping into a murky oppo research well — even as it’s simultaneously trying to cast the specter of regulation from the door.

For dark arts read fresh scandals, as Facebook has now found.

Yet it’s interesting that someone at the company — realizing it was in a trust hole — only knew how to keep digging.

Facebook Portal needs more. At least it just added YouTube

To offset the creepiness of having Facebook’s camera and microphone in your house, its new Portal video chat gadget needs best-in-class software.  Its hardware is remarkably well done, plus Messenger and the photo frame feature work great. But its third-party app platform was pretty skimpy when the device launched this week.

Facebook is increasingly relying on its smart display competitors to boost Portal’s capabilities. It already comes with Amazon Alexa inside. And now, Google’s YouTube is part of the Portal app platform. “Yes, YouTube.com is available through an optional install in the ‘Portal Apps’ catalog,” a Facebook spokesperson tells me. You can open it with a “Hey Portal” command, but there currently seems to be no way to queue up specific videos or control playback via voice.

The addition gives Portal much greater flexibility when it comes to video. Previously it could only play videos from Facebook Watch, Food Network, or Newsy. It also brings the device closer to parity with Google’s Home Hub screen, the Google Assistant-powered smart displays from JBL and Lenovo, and the Amazon Echo Show 2, which Google blocked from using YouTube before Amazon added a web browser to the device to reopen YouTube access.

YouTube makes the most of the $349 Portal+’s 15.6-inch 1080p screen, the biggest and sharpest of the smart display crop. Whether for watching shows or recipe videos while making dinner, instructional clips while putting together furniture, or Baby Shark to keep the kids busy, Portal becomes a lot more useful with YouTube.

But we’re still waiting for the most exciting thing Facebook has planned for Portal: Google Assistant. A month ago Facebook’s VP of Portal Rafa Camargo told me, “We definitely have been talking to Google as well. We view the future of these home devices . . . as where you will have multiple assistants and you will use them for whatever they do best . . . We’d like to expand and integrate with them.” Now a Facebook spokesperson tells me that they “Don’t have an update on Google Assistant today but we’re working on adding new experiences to Portal.”

The potential to put both Google’s and Amazon’s voice assistants on one device could make Portal’s software stronger than either competitor’s devices. Many critics have asked if Facebook was naive or callous to launch Portal in the wake of privacy issues like the Cambridge Analytica scandal and its recent data breach. But as I found when testing the Portal with my 72-year-old mother, not everyone is concerned with Facebook’s privacy problems; some instead see Portal as a way for the social network to truly bring them closer to their loved ones. With Amazon and Google racing to win the smart display market, Facebook may see the tech insider backlash as worth enduring for a shot at mainstream success before it’s boxed out.

Big tech must not reframe digital ethics in its image

Facebook founder Mark Zuckerberg’s visage loomed large over the European parliament this week, both literally and figuratively, as global privacy regulators gathered in Brussels to interrogate the human impacts of technologies that derive their power and persuasiveness from our data.

The eponymous social network has been at the center of a privacy storm this year. And every fresh Facebook content concern — be it about discrimination or hate speech or cultural insensitivity — adds to a damaging flood.

The overarching discussion topic at the privacy and data protection confab, both in the public sessions and behind closed doors, was ethics: How to ensure engineers, technologists and companies operate with a sense of civic duty and build products that serve the good of humanity.

So, in other words, how to ensure people’s information is used ethically — not just in compliance with the law. Fundamental rights are increasingly seen by European regulators as a floor not the ceiling. Ethics are needed to fill the gaps where new uses of data keep pushing in.

As the EU’s data protection supervisor, Giovanni Buttarelli, told delegates at the start of the public portion of the International Conference of Data Protection and Privacy Commissioners: “Not everything that is legally compliant and technically feasible is morally sustainable.”

As if on cue Zuckerberg kicked off a pre-recorded video message to the conference with another apology. Albeit this was only for not being there to give an address in person. Which is not the kind of regret many in the room are now looking for, as fresh data breaches and privacy incursions keep being stacked on top of Facebook’s Cambridge Analytica data misuse scandal like an unpalatable layer cake that never stops being baked.

Evidence of a radical shift of mindset is what champions of civic tech are looking for — from Facebook in particular and adtech in general.

But there was no sign of that in Zuckerberg’s potted spiel. Rather he displayed the kind of masterfully slick PR maneuvering that’s associated with politicians on the campaign trail. It’s the natural patter for certain big tech CEOs too, these days, in a sign of our sociotechnical political times.

(See also: Facebook hiring ex-UK deputy PM, Nick Clegg, to further expand its contacts database of European lawmakers.)

And so the Facebook founder seized on the conference’s discussion topic of big data ethics and tried to zoom right back out again. Backing away from talk of tangible harms and damaging platform defaults — aka the actual conversational substance of the conference (from talk of how dating apps are impacting how much sex people have and with whom they’re doing it; to shiny new biometric identity systems that have rebooted discriminatory caste systems) — to push the idea of a need to “strike a balance between speech, security, privacy and safety”.

This was Facebook trying to reframe the idea of digital ethics — to make it so very big-picture-y that it could embrace its people-tracking, ad-funded business model as a fuzzily wide public good, with a sort of ‘oh go on then’ shrug.

“Every day people around the world use our services to speak up for things they believe in. More than 80 million small businesses use our services, supporting millions of jobs and creating a lot of opportunity,” said Zuckerberg, arguing for a ‘both sides’ view of digital ethics. “We believe we have an ethical responsibility to support these positive uses too.”

Indeed, he went further, saying Facebook believes it has an “ethical obligation to protect good uses of technology”.

And from that self-serving perspective almost anything becomes possible — as if Facebook is arguing that breaking data protection law might really be the ‘ethical’ thing to do. (Or, as the existentialists might put it: ‘If god is dead, then everything is permitted’.)

It’s an argument that radically elides some very bad things, though. And glosses over problems that are systemic to Facebook’s ad platform.

A little later, Google’s CEO Sundar Pichai also dropped into the conference in video form, bringing much the same message.

“The conversation about ethics is important. And we are happy to be a part of it,” he began, before an instant hard pivot into referencing Google’s founding mission of “organizing the world’s information — for everyone” (emphasis his), before segueing — via “knowledge is empowering” — to asserting that “a society with more information is better off than one with less”.

Is having access to more information of unknown and dubious or even malicious provenance better than having access to some verified information? Google seems to think so.

The pre-recorded Pichai didn’t have to concern himself with all the mental ellipses bubbling up in the thoughts of the privacy and rights experts in the room.

“Today that mission still applies to everything we do at Google,” his digital image droned on, without mentioning what Google is thinking of doing in China. “It’s clear that technology can be a positive force in our lives. It has the potential to give us back time and extend opportunity to people all over the world.

“But it’s equally clear that we need to be responsible in how we use technology. We want to make sound choices and build products that benefit society. That’s why earlier this year we worked with our employees to develop a set of AI principles that clearly state what types of technology applications we will pursue.”

Of course it sounds fine. Yet Pichai made no mention of the staff who’ve actually left Google because of ethical misgivings. Nor the employees still there and still protesting its ‘ethical’ choices.

It’s almost as if the Internet’s adtech duopoly is singing from the same ‘ads for greater good trumping the bad’ hymn sheet. In fact, the Internet’s adtech duopoly is doing exactly that.

The ‘we’re not perfect and have lots more to learn’ line that also came from both CEOs seems mostly intended to manage regulatory expectation vis-a-vis data protection — and indeed on the wider ethics front.

They’re not promising to do no harm. Nor to always protect people’s data. They’re literally saying they can’t promise that. Ouch.

Meanwhile, another common FaceGoog message — an intent to introduce ‘more granular user controls’ — just means they’re piling even more responsibility onto individuals to proactively check (and keep checking) that their information is not being horribly abused.

This is a burden neither company can speak to in any other fashion, because the real solution would be for their platforms not to hoard people’s data in the first place.

The other ginormous elephant in the room is big tech’s massive size; which is itself skewing the market and far more besides.

Neither Zuckerberg nor Pichai directly addressed the notion of overly powerful platforms themselves causing structural societal harms, such as by eroding the civically minded institutions that are essential to defend free societies and indeed uphold the rule of law.

Of course it’s an awkward conversation topic for tech giants if vital institutions and societal norms are being undermined because of your cut-throat profiteering on the unregulated cyber seas.

A great tech fix to avoid answering awkward questions is to send a video message in your CEO’s stead. And/or a few minions. Facebook VP and chief privacy officer, Erin Egan, and Google’s SVP of global affairs Kent Walker, were duly dispatched and gave speeches in person.

They also had a handful of audience questions put to them by an on stage moderator. So it fell to Walker, not Pichai, to speak to Google’s contradictory involvement in China in light of its foundational claim to be a champion of the free flow of information.

“We absolutely believe in the maximum amount of information available to people around the world,” Walker said on that topic, after being allowed to intone on Google’s goodness for almost half an hour. “We have said that we are exploring the possibility of ways of engaging in China to see if there are ways to follow that mission while complying with laws in China.

“That’s an exploratory project — and we are not in a position at this point to have an answer to the question yet. But we continue to work.”

Egan, meanwhile, batted away her trio of audience concerns — about Facebook’s lack of privacy by design/default; and how the company could ever address ethical concerns without dramatically changing its business model — by saying it has a new privacy and data use team sitting horizontally across the business, as well as a data protection officer (an oversight role mandated by the EU’s GDPR; into which Facebook plugged its former global deputy chief privacy officer, Stephen Deadman, earlier this year).

She also said the company continues to invest in AI for content moderation purposes. So, essentially, more trust us. And trust our tech.

She also replied in the affirmative when asked whether Facebook will “unequivocally” support a strong federal privacy law in the US — with protections “equivalent” to those in Europe’s data protection framework.

But of course Zuckerberg has said much the same thing before — while simultaneously advocating for weaker privacy standards domestically. So who now really wants to take Facebook at its word on that? Or indeed on anything of human substance.

Not the EU parliament, for one. MEPs sitting in the parliament’s other building, in Strasbourg, this week adopted a resolution calling for Facebook to agree to an external audit by regional oversight bodies.

But of course Facebook prefers to run its own audit. And in a response statement the company claims it’s “working relentlessly to ensure the transparency, safety and security” of people who use its service (so bad luck if you’re one of those non-users it also tracks then). Which is a very long-winded way of saying ‘no, we’re not going to voluntarily let the inspectors in’.

Facebook’s problem now is that trust, once burnt, takes years and mountains’ worth of effort to restore.

This is the flip side of ‘move fast and break things’. (Indeed, one of the conference panels was entitled ‘move fast and fix things’.) It’s also the hard-to-shift legacy of an unapologetically blind ~decade-long dash for growth regardless of societal cost.

Given that, it looks unlikely that Zuckerberg’s attempt to paint a portrait of digital ethics in his company’s image will do much to restore trust in Facebook.

Not so long as the platform retains the power to cause damage at scale.

It was left to everyone else at the conference to discuss the hollowing out of democratic institutions, societal norms, human interactions and so on — as a consequence of data (and market capital) being concentrated in the hands of the ridiculously powerful few.

“Today we face the gravest threat to our democracy, to our individual liberty in Europe since the war and the United States perhaps since the civil war,” said Barry Lynn, a former journalist and senior fellow at the Google-backed New America Foundation think tank in Washington, D.C., where he had directed the Open Markets Program — until it was shut down after he wrote critically about, er, Google.

“This threat is the consolidation of power — mainly by Google, Facebook and Amazon — over how we speak to one another, over how we do business with one another.”

Meanwhile the original architect of the World Wide Web, Tim Berners-Lee, who has been warning about the crushing impact of platform power for years, is now working on trying to decentralize the net’s data hoarders via new technologies intended to give users greater agency over their data.

On the democratic damage front, Lynn pointed to how news media is being hobbled by an adtech duopoly now sucking hundreds of billions of ad dollars out of the market annually — by renting out what he dubbed their “manipulation machines”.

Not only do they sell access to these ad targeting tools to mainstream advertisers — to sell the usual products, like soap and diapers — they’re also, he pointed out, taking dollars from “autocrats and would-be autocrats and other social disruptors to spread propaganda and fake news to a variety of ends, none of them good”.

The platforms’ unhealthy market power is the result of a theft of people’s attention, argued Lynn. “We cannot have democracy if we don’t have a free and robustly funded press,” he warned.

His solution to the society-deforming might of platform power? Not a newfangled decentralization tech but something much older: Market restructuring via competition law.

“The basic problem is how we structure or how we have failed to structure markets in the last generation. How we have licensed or failed to license monopoly corporations to behave.

“In this case what we see here is this great mass of data. The problem is the combination of this great mass of data with monopoly power in the form of control over essential pathways to the market combined with a license to discriminate in the pricing and terms of service. That is the problem.”

“The result is to centralize,” he continued. “To pick and choose winners and losers. In other words the power to reward those who heed the will of the master, and to punish those who defy or question the master — in the hands of Google, Facebook and Amazon… That is destroying the rule of law in our society and is replacing rule of law with rule by power.”

For an example of an entity that’s currently being punished by Facebook’s grip on the social digital sphere you need look no further than Snapchat.

Also on the stage in person: Apple’s CEO Tim Cook, who didn’t mince his words either — attacking what he dubbed a “data industrial complex” which he said is “weaponizing” people’s personal data against them for private profit.

The adtech modus operandi sums to “surveillance”, Cook asserted.

Cook called this a “crisis”, painting a picture of technologies being applied in an ethics-free vacuum to “magnify our worst human tendencies… deepen divisions, incite violence and even undermine our shared sense of what is true and what is false” — by “taking advantage of user trust”.

“This crisis is real… And those of us who believe in technology’s potential for good must not shrink from this moment,” he warned, telling the assembled regulators that Apple is aligned with their civic mission.

Of course Cook’s position also aligns with Apple’s hardware-dominated business model — in which the company makes most of its money by selling premium priced, robustly encrypted devices, rather than monopolizing people’s attention to sell their eyeballs to advertisers.

The growing public and political alarm over how big data platforms stoke addiction and exploit people’s trust and information — and the idea that an overarching framework of not just laws but digital ethics might be needed to control this stuff — dovetails neatly with the alternative track that Apple has been pounding for years.

So for Cupertino it’s easy to argue that the ‘collect it all’ approach of data-hungry platforms is both lazy thinking and irresponsible engineering, as Cook did this week.

“For artificial intelligence to be truly smart it must respect human values — including privacy,” he said. “If we get this wrong, the dangers are profound. We can achieve both great artificial intelligence and great privacy standards. It is not only a possibility — it is a responsibility.”

Yet Apple is not only a hardware business. In recent years the company has been expanding and growing its services business. It even involves itself in (a degree of) digital advertising. And it does business in China.

It is, after all, still a for-profit business — not a human rights regulator. So we shouldn’t be looking to Apple to spec out a digital ethical framework for us, either.

No profit making entity should be used as the model for where the ethical line should lie.

Apple sets a far higher standard than other tech giants, certainly, even as its grip on the market is far more partial because it doesn’t give its stuff away for free. But it’s hardly perfect where privacy is concerned.

One inconvenient example for Apple is that it takes money from Google to make the company’s search engine the default for iOS users — even as it offers iOS users a choice of alternatives (if they go looking to switch) which includes pro-privacy search engine DuckDuckGo.

DDG is a veritable minnow vs Google, and Apple builds products for the consumer mainstream, so it is supporting privacy by putting a niche search engine alongside a behemoth like Google — as one of just four choices it offers.

But defaults are hugely powerful. So Google search being the iOS default means most of Apple’s mobile users will have their queries fed straight into Google’s surveillance database, even as Apple works hard to keep its own servers clear of user data by not collecting their stuff in the first place.

There is a contradiction there. So there is a risk for Apple in amping up its rhetoric against a “data industrial complex” — and making its naturally pro-privacy preference sound like a conviction principle — because it invites people to dial up critical lenses and point out where its defence of personal data against manipulation and exploitation does not live up to its own rhetoric.

One thing is clear: In the current data-based ecosystem all players are conflicted and compromised.

Though only a handful of tech giants have built unchallengeably massive tracking empires via the systematic exploitation of other people’s data.

And as the apparatus of their power gets exposed, these attention-hogging adtech giants are making a dumb show of papering over the myriad ways their platforms pound on people and societies — offering paper-thin promises to ‘do better next time’ — when ‘better’ is not even close to being enough.

Call for collective action

Increasingly powerful data-mining technologies must be sensitive to human rights and human impacts, that much is crystal clear. Nor is it enough to be reactive to problems after or even at the moment they arise. No engineer or system designer should feel it’s their job to manipulate and trick their fellow humans.

Dark pattern designs should be repurposed into a guidebook of what not to do and how not to transact online. (If you want a mission statement for thinking about this it really is simple: Just don’t be a dick.)

Sociotechnical Internet technologies must always be designed with people and societies in mind — a key point that was hammered home in a keynote by Berners-Lee, the inventor of the World Wide Web, and the tech guy now trying to defang the Internet’s occupying corporate forces via decentralization.

“As we’re designing the system, we’re designing society,” he told the conference. “Ethical rules that we choose to put in that design [impact society]… Nothing is self evident. Everything has to be put out there as something that we think we will be a good idea as a component of our society.”

The penny looks to be dropping for privacy watchdogs in Europe: the idea that assessing fairness — not just legal compliance — must be a key component of their thinking going forward, and so set the direction of regulatory travel.

Watchdogs like the UK’s ICO — which just fined Facebook the maximum possible penalty for the Cambridge Analytica scandal — said so this week. “You have to do your homework as a company to think about fairness,” said Elizabeth Denham, when asked ‘who decides what’s fair’ in a data ethics context. “At the end of the day if you are working, providing services in Europe then the regulator’s going to have something to say about fairness — which we have in some cases.”

“Right now, we’re working with some Oxford academics on transparency and algorithmic decision making. We’re also working on our own tool as a regulator on how we are going to audit algorithms,” she added. “I think in Europe we’re leading the way — and I realize that’s not the legal requirement in the rest of the world but I believe that more and more companies are going to look to the high standard that is now in place with the GDPR.

“The answer to the question is ‘is this fair?’ It may be legal — but is this fair?”

So the short version is data controllers need to prepare themselves to consult widely — and examine their consciences closely.

Rising automation and AI makes ethical design choices even more imperative, as technologies become increasingly complex and intertwined, thanks to the massive amounts of data being captured, processed and used to model all sorts of human facets and functions.

The closed session of the conference produced a declaration on ethics and data in artificial intelligence — setting out a list of guiding principles to act as “core values to preserve human rights” in the developing AI era — which included concepts like fairness and responsible design.

Few would argue that a powerful AI-based technology such as facial recognition isn’t inherently in tension with a fundamental human right like privacy.

Nor that such powerful technologies aren’t at huge risk of being misused and abused to discriminate and/or suppress rights at vast and terrifying scale. (See, for example, China’s push to install a social credit system.)

Biometric ID systems might start out with claims of the very best intentions — only to shift function and impact later. The dangers to human rights of function creep on this front are very real indeed. And are already being felt in places like India — where the country’s Aadhaar biometric ID system has been accused of rebooting ancient prejudices by promoting a digital caste system, as the conference also heard.

The consensus from the event is it’s not only possible but vital to engineer ethics into system design from the start whenever you’re doing things with other people’s data. And that routes to market must be found that don’t require dispensing with a moral compass to get there.

The notion of data-processing platforms becoming information fiduciaries — i.e. having a legal duty of care towards their users, as a doctor or lawyer does — was floated several times during public discussions. Though such a step would likely require more legislation, not just adequately rigorous self examination.

In the meantime, civic society must get to grips, and grapple proactively, with technologies like AI so that people and societies can come to collective agreement about a digital ethics framework. This is vital work to defend the things that matter to communities so that the anthropogenic platforms Berners-Lee referenced are shaped by collective human values, not the other way around.

It’s also essential that public debate about digital ethics does not get hijacked by corporate self interest.

Tech giants are not only inherently conflicted on the topic but — right across the board — they lack the internal diversity to offer a broad enough perspective.

People and civic society must teach them.

A vital closing contribution came from the French data watchdog’s Isabelle Falque-Pierrotin, who summed up discussions that had taken place behind closed doors as the community of global data protection commissioners met to plot next steps.

She explained that members had adopted a roadmap for the future of the conference to evolve beyond a mere talking shop and take on a more visible, open governance structure — to allow it to be a vehicle for collective, international decision-making on ethical standards, and so alight on and adopt common positions and principles that can push tech in a human direction.

The initial declaration document on ethics and AI is intended to be just the start, she said — warning that “if we can’t act we will not be able to collectively control our future”, and couching ethics as “no longer an option, it is an obligation”.

She also said it’s essential that regulators get with the program and enforce current privacy laws — to “pave the way towards a digital ethics” — echoing calls from many speakers at the event for regulators to get on with the job of enforcement.

This is vital work to defend values and rights against the overreach of the digital here and now.

“Without ethics, without an adequate enforcement of our values and rules our societal models are at risk,” Falque-Pierrotin also warned. “We must act… because if we fail, there won’t be any winners. Not the people, nor the companies. And certainly not human rights and democracy.”

If the conference had one short sharp message it was this: Society must wake up to technology — and fast.

“We’ve got a lot of work to do, and a lot of discussion — across the boundaries of individuals, companies and governments,” agreed Berners-Lee. “But very important work.

“We have to get commitments from companies to make their platforms constructive and we have to get commitments from governments to look at whenever they see that a new technology allows people to be taken advantage of, allows a new form of crime to get onto it by producing new forms of the law. And to make sure that the policies that they do are thought about in respect to every new technology as they come out.”

This work is also an opportunity for civic society to define and reaffirm what’s important. So it’s not only about mitigating risks.

But, equally, not doing the job is unthinkable — because there’s no putting the AI genie back in the bottle.

Should cash-strapped Snapchat sell out? To Netflix?

Snapchat needs a sugar daddy. Its cash reserves are dwindling from giant quarterly losses. Poor morale from a battered share price and cost-cutting measures sap momentum. And intense competition from Facebook is preventing rapid growth. With just $1.4 billion in assets remaining at the end of a brutal Q3 2018 and analysts estimating it will lose $1.5 billion in 2019 alone, Snapchat could run out of money well before it’s projected to break even in 2020 or 2021.

So what are Snap’s options?

A long and lonely road

Snap’s big hope is to show a business turnaround story like Twitter, which saw its stock jump 14 percent this week despite losing monthly active users, by deepening daily user engagement and producing profits. But without some change that massively increases daily time spent while reducing costs, it could take years for Snap to reach profitability. The company laid off 120 employees in March, or 7 percent of its workforce. And 40 percent of the remaining 3,000 employees plan to leave — up 11 percentage points from Q1 2018, according to internal survey data obtained by Cheddar’s Alex Heath.

Snapchat is relying on the Project Mushroom engineering overhaul of its Android app to speed up performance, and thereby accelerate user growth and retention. Snap neglected the developing world’s Android market for years as it focused on iPhone-toting US teens. Given Snapchat is all about quick videos, slow load times made it nearly unusable, especially in markets with slower network connections and older phones.

Looking at the competitive landscape, WhatsApp’s Snapchat Stories clone Status has grown to 450 million daily users while Instagram Stories has reached 400 million dailies — much of that coming in the developing world, thereby blocking Snap’s growth abroad as I predicted when Insta Stories launched. Snap actually lost 3 million daily users in Q2 2018. Snap Map hasn’t become ubiquitous, Snap’s Original Shows still aren’t premium enough to drag in tons of new users, Discover is a clickbait-overloaded mess, and Instagram has already copied the best parts of its ephemeral messaging.

As BTIG’s Rich Greenfield points out, CEO Evan Spiegel claims Snapchat is the fastest way to communicate, but it’s not for text messaging, and the default that chats disappear makes it unreliable for utilitarian chat. And if WhatsApp were to add an ephemeral messaging feature of its own, growth for Snapchat could get even tougher. Snap will have to hope it can hold on to its existing users and squeeze more cash out of them to keep reducing losses.

All those product missteps and market neglect have metastasized into a serious growth problem for Snapchat. It lost another 2 million users this quarter, and expects to sink further in Q4. Even with the Android rebuild, Spiegel’s assurances of renewed user growth in 2019 seem spurious. That means it’s highly unlikely that Snapchat will achieve Spiegel’s goal of hitting profitability in 2019. It needs either an investor or acquirer to come to its aid.

A bailout check

Snap could sell more equity to raise money. $500 million to $1 billion would probably give it the runway necessary to get into the black. But from where? With all the scrutiny on Saudi Arabia, Snap might avoid taking money from the kingdom. Saudi Arabia’s Prince Alwaleed bin Talal already invested $250 million to buy 2.5 percent of Snap on the open market.

Snap’s best bet might be to take more money from Chinese internet giant Tencent. The massive corporation already spent around $2 billion to buy a 12 percent stake in Snap from the open market. The WeChat owner has plenty of synergies with Snapchat, especially since it runs a massive gaming business and Snap is planning to launch a third-party developer gaming platform.

Tencent could still be a potential acquirer for Snap, but given President Trump’s trade war with China, he might push regulators to block a sale. The state of American social networks like Twitter and Facebook that are under siege by foreign election interference, trolls, and hackers might make the US government understandably concerned about a Chinese giant owning one of the top teen apps.

Regardless of who would invest, they’d likely demand real voting rights — something Snap has denied investors through its governance structure. Spiegel and his co-founder Bobby Murphy each get 10 votes per share, estimated to amount to 89 percent of the voting rights. Shares issued in the IPO came with zero voting rights.


But that surely wouldn’t sit well with any investor willing to pour hundreds of millions of dollars into the beleaguered company. Spiegel has taken responsibility for pushing the disastrous redesign earlier this year that coincided with a significant drop in Snapchat’s download rank. It also inspired a tweet from mega-celebrity Kylie Jenner bashing the app that shaved $1.3 billion off the company’s market cap.

Between the redesign flop, stagnant product innovation, and Spiegel laughing off Facebook’s competition only to be crushed by it, the CEO no longer has the sterling reputation that allowed him to secure total voting control for the co-founders. That means investors will want assurance that if they inject a ton of cash, they’ll have some recourse if Spiegel mismanages it. He may need to swallow his pride, issue voting shares, and commit to milestones he’s required to hit to retain his role as chief executive.

A soft landing somewhere else

Snap could alternatively surrender as an independent company and be acquired by a deep-pocketed tech giant. Without having to worry about finances or short-term goals, Snap could invest in improving its features and app performance for the long-term. Social networks are tough to kill entirely, so despite competition, Snap could become lucrative if aided through this rough spot.

Again, the biggest barrier to this path is Spiegel. Combine totalitarian voting control with the $637 million bonus Spiegel got for taking Snap public, and he has little financial incentive or shareholder pressure compelling him to sell. Even if the company was bleeding out much worse than it is already, Spiegel could ride it into the ground. The only way to get a deal done might be to make Spiegel perceive it as a win.

Selling to Disney could be spun as such. Disney hasn’t really figured out mobile amidst distraction from superheroes and Star Wars. Its core tween audience is addicted to YouTube and Snapchat even if they shouldn’t be on them. They’re both LA companies. And Disney already ponied up $350 million to buy kids’ desktop social networking game Club Penguin. Becoming head of mobile, or something like that, for the most iconic entertainment company ever could be a vaunted enough position to entice Spiegel. I could see him being a Disney CEO candidate one day.

What about walking in the footsteps of Steve Jobs? Apple isn’t social. It failed so badly with efforts like its Ping music social network that it’s basically abdicated the whole market. iMessage and its cutesy Animoji are its only stakes. Meanwhile, it’s getting tougher and tougher to differentiate with mobile hardware. Each new iPhone seems closer to the last. Apple has resorted to questionable decisions like ditching the oft-missed headphone jack and reliable TouchID to keep the industrial design in flux.

Increasingly, Apple must rely on its iOS software to compete for customers with Android handsets. But you know who’s great at making interesting software? Snapchat. You know who has a great relationship with the next generation of phone owners? Snapchat. And do you know whose CEO could probably smile earnestly beside Tim Cook announcing a brighter future for social media unlocked by two privacy-focused companies joining forces? Snapchat. Plus, think of all the fun Snapple jokes.

There’s a chance to take revenge on Facebook if Snapchat wanted to team up with Mark Zuckerberg’s old arch nemesis Google. After Zuck declared “Carthage must be destroyed,” Google+ flopped and Google’s messaging apps became a fragmented mess. Alphabet has since leaned away from social networking. Of course it still has the juggernaut that is YouTube — a perennial teen favorite alongside Snapchat and Instagram. And it’s got the perfect complement to Snap’s ephemerality in the form of Google Photos, the best-in-class permanent photo archiving tool. With the consumer side of Google+ shutting down after accidentally exposing user data, Google still lacks a traditional social network where being a friend comes before being a fan.

What Google does have is a reputation for delivering the future. From Waymo’s self-driving cars to Calico’s plan to make you live forever, Google is an inventive place where big ideas come to fruition. Spiegel could frame Google as aligned with Snap’s philosophy of creating new ways to organize and consume information that adapt to human behavior. He surely wouldn’t mind being lumped in with internet visionaries like Larry Page and Sergey Brin. Google’s Android expertise could reinvigorate Snap in emerging markets. And together they could take a stronger swing at Facebook.

But there are problems with all of these options. Buying Snap would be a massive bet for Disney, and Snap’s lingering bad rap as a sexting app might dissuade Mickey Mouse’s overlords. Apple rarely buys such late-stage public companies. CEO Tim Cook has been able to take the moral high ground because Apple makes its money from hardware rather than from monetizing personal info through ad targeting. If Apple owned Snap, it’d be in the data exploitation business just like everyone else.

And Google’s existing dominance in software might draw the attention of regulators. The prevailing sentiment is that it was a massive mistake to let Facebook acquire Instagram and WhatsApp, as it centralized power and created a social empire. With Google already owning YouTube, the government might see problems with it buying one of the other most popular teen apps.

That’s why I think Netflix could be a great acquirer for Snap. They’re both video entertainment companies at the vanguard of cultural relevance, yet have no overlap in products. Netflix already showed its appreciation for Snapchat’s innovation by adopting a Stories-like vertical video clip format for discovering and previewing what you could watch. The two could partner to promote Netflix Originals and subscriptions inside of Snapchat. Netflix could teach Snap how to win at exclusive content while gaining a place to distribute video that’s under 20 minutes long.

With a $130 billion market cap, Netflix could certainly afford it. Though since Netflix already has $6 billion in debt from financing Originals, it would have to either sell more debt or issue Netflix shares to Snapchat’s owners. But given Netflix’s high-flying performance, massive market share, and cultural primacy, the big question is whether Snap would drag it down.

So how much would it potentially cost? Snap’s market cap is hovering around $8.8 billion with a $6.28 share price. That’s around its all-time low and just over a quarter of the high it hit in its IPO pop. Acquiring Snap would surely require paying a premium above the market cap. Remember, Google already reportedly offered to acquire Snap for $30 billion prior to its final funding round and IPO. But that was before Snap’s growth rate sunk and it started losing the Stories war to Facebook. A much smaller offer could look a lot prettier now.

Social networks are hard to kill. If Snap can cut costs, fix its product, improve revenue per user, and score some outside investment, it could survive and slowly climb. If Twitter is any indication, aging social networks can reflower into lucrative businesses given enough time and product care. But if Snapchat wants to play in the big leagues and continue having a major influence on the mobile future, it may have to snap out of the idea that it can win on its own.

Google Maps takes on Facebook Pages with new ‘Follow’ feature for tracking businesses

Google Maps has been steadily rolling out new features to make its app more than just a way to find places and navigate to them. In recent months, it’s added things like group trip planning, music controls, commuter tools, ETA sharing, personalized recommendations, and more. Now, it’s introducing a new way for users to follow their favorite businesses as well — like restaurants, bars, or stores, for example — in order to stay on top of their news and updates.

If that sounds a lot like Google Maps’ own version of Facebook Pages, you’re right.

Once you tap the new “Follow” button to track a business, the company explains, you’ll be able to see news from those places — like upcoming events, offers, and other updates — right in the “For You” tab on Google Maps.

Events, deals and photo-filled posts designed to encourage foot traffic? That definitely sounds like a Facebook Page competitor aimed at the brick-and-mortar crowd.

Businesses can also use the Google Maps platform to start reaching potential customers before they open to the public, Google notes.

After a business builds a Business Profile using Google My Business that includes its opening date, it will be surfaced in users’ searches on the mobile web and in the app up to three months before its opening.

This profile will display the opening date in orange just below the business name, and users can save the business to one of their lists, if they choose. Users can also view all the other usual business information, like address, phone, website and photos.

The new “Follow” feature will be available for the over 150 million places already on Google Maps, as well as the millions of users who are seeking them out.

The feature has been spotted in the wild for some time before Google’s official announcement this week, and is rolling out over the next few weeks, initially on Android.

The “For You” tab is currently available in limited markets, with more countries coming soon, says Google.