The war for the countertop has begun. Google, Amazon, and Facebook all revealed their new smart displays this month. Each hopes to become the center of your internet of things-equipped home and a window to your loved ones. The $149 Google Home Hub is a cheap, privacy-safe smart home controller. The $229 Amazon Echo Show 2 gives Alexa a visual complement. And the $199 Facebook Portal and $349 Portal+ offer a Smart Lens that automatically zooms in and out to keep you in frame while you video chat.
For consumers, the biggest questions to consider are how much you care about privacy, whether you really video chat, which smart home ecosystem you’re building around, and how much you want to spend.
For the privacy-obsessed, Google’s Home Hub is the only one without a camera, and it’s dirt cheap at $149.
For the privacy-agnostic, Facebook’s Portal+ offers the best screen and video chat functionality.
For the chatty, the Amazon Echo Show 2 handles messaging and video chat over Alexa, can call phone numbers, and is adding Skype.
If you want to go off-brand, there’s also the Lenovo Smart Display with stylish hardware in a $249 10-inch 1080p version and a $199 8-inch 720p version. And for the audiophile, there’s the $199 JBL Link View. While those hit the market earlier than the platform-owned versions we’re reviewing here, they’re not likely to benefit from the constant iteration Google, Amazon, and Facebook are working on for their tabletop screens.
Here’s a comparison of the top smart displays, including their hardware specs, unique software, killer features, and pros and cons:
Google+ is shutting down at last. Google announced today it’s sunsetting its consumer-facing social network due to lack of user and developer adoption, low usage and engagement. Oh, and a data leak. It even revealed how poorly the network performs today, noting that 90% of Google+ user sessions are less than five seconds long. Yikes.
But things weren’t always like this. Google+ was once heralded as a serious attempt to topple Facebook’s stranglehold on social networking, and was even met with excitement in its first days.
June: The Unveiling
The company originally revealed its new idea for social networking in June 2011. It wasn’t Google’s first foray into social, however. Google had made numerous attempts to offer a social networking service of some sort: Orkut, launched in 2004 and shuttered in fall 2014; Google Friend Connect in 2008 (retired in 2012); and Google Buzz in 2010 (closed the next year).
But Google+ was the most significant attempt the company had made, proclaiming at the time: “we believe online sharing is broken.”
The once top-secret project was the subject of several leaks ahead of its launch, allowing consumer interest in the project to build.
Led by Vic Gundotra and Bradley Horowitz, Google’s big idea to fix social was to get users to create groups of contacts – called “Circles” – in order to have more control over social sharing. That is, there are things that are appropriate for sharing with family or close friends, and other things that make more sense to share with coworkers, classmates, or those who share a similar interest – like biking or cooking, for example.
But getting users to create groups is difficult because the process can be tedious. Google, instead, cleverly designed a user interface that made organizing contacts feel simpler – even fun, some argued. It was also better than the system for contact organization that Facebook was offering at the time.
Next thing you know, everyone was setting up their Circles by dragging-and-dropping little profile icons into these groups, and posting updates and photos to their newly created micro-networks.
Another key feature, “Sparks,” helped users find news and content related to a user’s particular interests. This way, Google could understand what people liked and wanted to track, without having an established base of topical pages for users to “Like,” as on Facebook. But it also paved the way for a new type of search. Instead of just returning a list of blue links, a search on Google+ could return the profiles of people relevant to the topic at hand, matching pages, and other content.
Google+ also introduced Hangouts, a way to video chat with up to 10 people in one of your Circles at once.
At the time, the implementation was described as almost magical. This was due to a number of innovative features, like the way the software focused in on the person talking, for example, and the way everyone could share content within a chat.
Early growth looked promising
Within two weeks, it seemed Google had a hit on its hands, as the network had reached 10 million users. Just over a month after launch, it had grown to 25 million. By October 2011, it reached 40 million. And by year-end, 90 million. Even if Google was only tracking sign-up numbers, it still appeared like a massive threat to Facebook.
Facebook CEO Mark Zuckerberg’s first comment about Google+, however, smartly pointed out that any Facebook competitor would have to build up a social graph to be relevant. Facebook, which had 750 million users at the time, had already done this. Google+ was getting the sign-ups, but whether users would remain active over time was still in question.
July: Backlashes over brands and Real Names policy
In an effort to compete with Facebook, Google+ also enforced a “real names” policy. This angered many users who wanted to use pseudonyms or nicknames, especially when Google began deleting their accounts for non-compliance. This was a larger issue than merely losing social networking access, because losing a Google account meant losing Gmail, Documents, Calendar and access to other Google products, too.
It wouldn’t fix some of these problems for years, in fact. Eric Schmidt even reportedly once suggested finding another social network if you didn’t want to use your real name – a comment that came across as condescending.
If you can’t beat ’em, force ’em! Google began to require users to have a Google+ account in order to sign up for Gmail. It was not a user-friendly change, and was the start of a number of forced integrations to come.
March: Criticism mounts
TechCrunch’s Devin Coldewey argued that Google failed to play the long game in social, and was too ambitious in its attempt with Google+. All the network really should have started with was its “+1” button – the clicks would generate piles of data tied to users that could then be searchable, private by default, and shareable elsewhere.
June: Event spam goes viral
Spam remained an issue on Google+. This time, event spam had emerged, thanks to all the nifty integrations between Google+ and mission-critical products like Calendar.
Users were not thrilled that other people were able to “invite” them to events, which automatically showed up on their Calendars – even if they had not yet confirmed they would be attending. It made using Google+ feel like a big mistake.
November: Hangouts evolves
A year after Google+’s launch, there was already a lot of activity around Hangouts – which, interestingly, has since become one of the big products that will outlive its original Google+ home.
Video was a tough space to get right – which is why businesses like Skype were still thriving. And while Hangouts was designed for friends and family to use in Google+, Google was already seeing companies adopt the technology for meetings, and brands like the NBA using it to connect with fans.
December: Google+ adds Communities
The focus on user interests in Google+ also continued to evolve this year with the launch of Communities – a way for people to set up topic-based forums on the site. The move was made in hopes of attracting more consumer interest, as growth had slowed.
The backlash to Google+’s forced integration with YouTube comments was a notable indication of how little love people had for Google+. YouTubers were downright pissed. One even crafted a profane music video in response, with lyrics like “You ruined our site and called it integration / I’m writing this song just to vent our frustration / Fuck you, Google Plusssssss!”
April: Vic Gundotra, Father of Google+, leaves Google
Google+ lost its founder. In April 2014, it was announced that Vic Gundotra, the father of Google+, was leaving the company. Google CEO Larry Page said at the time that the social network would still see investment, but it was a signal that a shift was coming in terms of Google’s approach.
The forced integrations of the past would be walked back, like those in Gmail and YouTube, and teams would be reshuffled.
July: Hangouts breaks free
Perhaps one of the most notable changes was letting Hangouts go free. Hangouts was a compelling product – too important to require a tie to Google+. In July 2014, Hangouts began to work without a Google+ account, rolled out to businesses and got itself an SLA.
July: Google+ drops its real name rule and apologizes
While Google had started rolling back the real-name policy in January 2012 by loosening the rules to include maiden names and select nicknames, it still displayed your real name alongside your chosen name. It was nowhere near what people wanted.
Now, Google straight up apologized for its decision around real names and hoped the change would bring users back. It did not. It was too late.
May: Google Photos breaks free
Following Hangouts, Google realized that Google+’s photo-sharing features also deserved to become their own, standalone product.
At Google I/O 2015, the company announced its Google Photos revamp. The new product took advantage of A.I. and machine learning capabilities that originated on Google+. This included allowing users to search photos for people, places and things, as well as an updated version of Google+’s “auto awesome” feature, which became the more robust Google Photos Assistant.
Bradley Horowitz, Google’s VP of Photos and Streams, and Product Director Luke Wroblewski had teamed up to redesign Google+ around what Google’s data indicated was working: Communities and Collections. Essentially, the new Google+ was focused on users and their interests. It let people network around topics, but not necessarily their personal connections.
Horowitz explained at the time that Google had heard from users “that it doesn’t make sense for your Google+ profile to be your identity in all the other Google products you use,” and it was responding accordingly.
August: Hangouts on Air moved to YouTube Live
One of the social network’s last exclusive features, Hangouts on Air – a way to broadcast a Hangout – moved to YouTube Live in 2016, as well.
Google+ went fairly quiet. The site was still there, but the communities were filling with spam. Community moderators said they couldn’t keep up. Google’s inattention to the problem was a signal in and of itself that the grand Google+ experiment may be coming to a close.
In January 2017, Google stopped allowing users to switch back to the old look. It also took the time to highlight groups that were popular on Google+ to counteract the narrative that the site was “dead.” (Even though it was.)
August: Google+ removed share count from +1 button
The once ubiquitous “+1” button, launched in spring 2011, was getting a revamp. It would no longer display the number of shares. Google said this was to make the button load more quickly. But it was really because the share counts were not worth touting anymore.
October 2018: Google+ got its Cambridge Analytica moment
A security bug allowed third-party developers to access Google+ user profile data from 2015 until Google discovered it in March 2018, but the company decided not to inform users. In total, 496,951 users’ full names, email addresses, birth dates, genders, profile photos, places lived, occupations and relationship statuses were potentially exposed. Google says it doesn’t have evidence the data was misused, but it decided to shut down the consumer-facing Google+ site anyway, given its lack of use.
Data misuse scandals like Cambridge Analytica have damaged Facebook and Twitter’s reputations, but Google+ wasn’t similarly impacted. After all, Google was no longer claiming Google+ to be a social network. And, as its own data shows, the network that remained was largely abandoned.
But the company still had piles of user profile data on hand, which were put at risk. That may lead Google to face a similar fate as the more active social networks, in terms of being questioned by Congress or brought up in lawmakers’ discussions about regulations.
In hindsight, then, maybe it would have been better if Google had shut down Google+ years ago.
Google is about to have its Cambridge Analytica moment. A security bug allowed third-party developers to access Google+ user profile data from 2015 until Google discovered and patched it in March 2018, but the company decided not to inform the world. When a user gave an app permission to access their public profile data, the bug also let those developers pull their own and their friends’ non-public profile fields. Some 496,951 users’ full names, email addresses, birth dates, genders, profile photos, places lived, occupations and relationship statuses were potentially exposed, though Google says it has no evidence the data was misused by the 438 apps that could have had access.
The company decided against informing the public because it would lead to “us coming into the spotlight alongside or even instead of Facebook despite having stayed under the radar throughout the Cambridge Analytica scandal” according to an internal memo. Now Google+, which was already a ghost town largely abandoned or never inhabited by users, has become a massive liability for the company.
The news comes from a damning Wall Street Journal report that said Google is expected to announce a slew of privacy reforms today in response to the bug. Google made that announcement about the findings of its Project Strobe security audit minutes after the WSJ report was published. The changes include stopping most third-party developers from accessing Android phone SMS data, call logs, and some contact info. Gmail will restrict add-on building to a small number of developers. Google+ will cease all its consumer services, winding down over the next 10 months with an opportunity for users to export their data, while Google refocuses on making G+ an enterprise product.
Google will also change its Account Permissions system for giving third-party apps access to your data such that you have to confirm each type of access individually rather than all at once. Gmail Add-Ons will be limited to those “directly enhancing email functionality”, including email clients, backup, CRM, mail merge, and productivity tools.
90 percent of Google+ sessions were less than 5 seconds
Embarrassingly, Google admits that “This review crystallized what we’ve known for a while: that while our engineering teams have put a lot of effort and dedication into building Google+ over the years, it has not achieved broad consumer or developer adoption, and has seen limited user interaction with apps. The consumer version of Google+ currently has low usage and engagement: 90 percent of Google+ user sessions are less than five seconds.” For more on G+’s demise, read our 2014 take on the beginning of the end.
Since the bug and subsequent security hole started in 2015 and was discovered in March, before Europe’s GDPR went into effect in May, Google will likely be spared a fine of up to 2 percent of global annual revenue for failing to disclose the issue within 72 hours. The company could still face class-action lawsuits and public backlash. On the bright side, G+ posts and messages, Google account data and phone numbers, and G Suite enterprise content weren’t exposed.
How Google+ looked, in case you can’t remember
Given it’s unclear whether the G+ user data was scraped or whether it will be employed for a nefarious purpose, the news of the bug itself might have eventually blown over, similar to how I wrote that Facebook’s recent 50 million user privacy breach may be forgotten if no evil use is found. But because Google tried to cover up the problem, deciding it didn’t meet some threshold of severity, the company looks much worse. That casts doubt on whether Google is being transparent on tons of other controversial questions about its practices.
The fiasco could thrust Google into the same churning sea of scrutiny currently drowning Facebook, just as the company feared. Google has managed to float above much of the criticism leveled at Facebook and Twitter, in part by claiming it’s not really a social network. But now its failed Facebook knock-off from seven years ago could drag down the search giant and see it endure increasing calls for testimony before Congress and regulation.
European Union lawmakers are facing a major vote on digital copyright reform proposals on Wednesday — a process that has set the Internet’s hair fully on fire.
Here’s a run down of the issues and what’s at stake…
The most controversial component of the proposals concerns user-generated content platforms such as YouTube, and the idea they should be made liable for copyright infringements committed by their users — instead of the current regime of takedowns after the fact (which locks rights holders into having to constantly monitor and report violations — y’know, at the same time as Alphabet’s ad business continues to roll around in dollars and eyeballs).
Critics of the proposal argue that shifting the burden of rights liability onto platforms will flip them from champions to chillers of free speech, making them reconfigure their systems to accommodate the new level of business risk.
More specifically they suggest it will encourage platforms into algorithmically pre-filtering all user uploads — aka #censorshipmachines — and then blinkered AIs will end up blocking fair use content, cool satire, funny memes etc etc, and the free Internet as we know it will cease to exist.
Backers of the proposal see it differently, of course. These people tend to be creatives whose professional existence depends upon being paid for the sharable content they create, such as musicians, authors, filmmakers and so on.
Their counter argument is that, as it stands, their hard work is being ripped off because they are not being fairly recompensed for it.
Consumers may be the ones technically freeloading by uploading and consuming others’ works without paying to do so but creative industries point out it’s the tech giants that are gaining the most money from this exploitation of the current rights rules — because they’re the only ones making really fat profits off of other people’s acts of expression. (Alphabet, Google’s ad giant parent, made $31.16BN in revenue in Q1 this year alone, for example.)
YouTube has been a prime target for musicians’ ire — who contend that the royalties the company pays them for streaming their content are simply not fair recompense.
The second controversy attached to the copyright reform concerns the use of snippets of news content.
European lawmakers want to extend digital copyright to also cover the ledes of news stories which aggregators such as Google News typically ingest and display — because, again, the likes of Alphabet are profiting off bits of others’ professional work without paying them to do so. And, on the flip side, media firms have seen their profits hammered by the Internet serving up free content.
The reforms would seek to compensate publishers for their investment in journalism by letting them charge for use of these text snippets — instead of only being ‘paid’ in traffic (i.e. by becoming yet more eyeball fodder in Alphabet’s aggregators).
Critics don’t see it that way of course. They see it as an imposition on digital sharing — branding the proposal a “link tax” and arguing it will have a wider chilling effect of interfering with the sharing of hyperlinks.
They argue this because links can also contain words from the content being linked to. And much debate has raged over how the law would (or could) define what is and isn’t a protected text snippet.
They also claim the auxiliary copyright idea hasn’t worked where it’s already been tried (in Germany and Spain). Google just closed its News aggregator in the latter market, for example. Though at the pan-EU level it would have to at least pause before taking a unilateral decision to shutter an entire product.
Germany’s influential media industry is a major force behind Article 11. But in Germany a local version of a snippet law that was passed in 2013 ended up being watered down — so news aggregators were not forced to pay for using snippets, as had originally been floated.
Without mandatory payment (as is the case in Spain) the law has essentially pitted publishers against each other. This is because Google said it would not pay and also changed how it indexes content for Google News in Germany to make it opt-in only.
That means any local publishers that don’t agree to zero-license their snippets to Google risk losing visibility to rivals that do. So major German publishers have continued to hand their snippets over to Google.
But they appear to believe a pan-EU law might manage to tip the balance of power. Hence Article 11.
Awful amounts of screaming
For critics of the reforms, who often sit on the nerdier side of the spectrum, their reaction can be summed up by a screamed refrain that IT’S THE END OF THE FREE WEB AS WE KNOW IT.
A coalition of original Internet architects, computer scientists, academics and others — including the likes of world wide web creator Sir Tim Berners-Lee, security veteran Bruce Schneier, Google chief evangelist Vint Cerf, Wikipedia founder Jimmy Wales and entrepreneur Mitch Kapor — also penned an open letter to the European Parliament’s president to oppose Article 13.
In it they wrote that while “well-intended” the push towards automatic pre-filtering of users uploads “takes an unprecedented step towards the transformation of the Internet from an open platform for sharing and innovation, into a tool for the automated surveillance and control of its users”.
There is more than a little irony there, though, given that (for example) Google’s ad business conducts automated surveillance of the users of its various platforms for ad targeting purposes — and through that process it’s hoping to control the buying behavior of the individuals it tracks.
At the same time as so much sound and fury has been directed at attacking the copyright reform plans, another very irate, very motivated group of people have been lustily bellowing that content creators need paying for all the free lunches that tech giants (and others) have been helping themselves to.
But the death of memes! The end of fair digital use! The demise of online satire! The smothering of Internet expression! Hideously crushed and disfigured under the jackboot of the EU’s evil Filternet!
And so on and on it has gone.
(For just one e.g., see the below video — which was actually made by an Australian satirical film and media company that usually spends its time spoofing its own government’s initiatives but evidently saw richly viral pickings here… )
For a counter example, to set against the less than nuanced yet highly sharable satire-as-hyperbole on show in that video, is the Society of Authors — which has written a 12-point breakdown defending the actual substance of the reform (at least as it sees it).
A topline point to make right off the bat is it’s hardly a fair fight to set words against a virally sharable satirical video fronted by a young lady sporting very pink lipstick. But, nonetheless, debunk the denouncers these authors valiantly attempt to.
To wit: They reject claims the reforms will kill hyperlinking or knife sharing in the back; or do for online encyclopedias like Wikimedia; or snuff out memes; or strangle free expression — pointing out the explicit exceptions that have been written in to qualify what it would (and would not) target and how it’s intended to operate in practice.
Wikipedia, for example, has been explicitly stated as being excluded from the proposals.
But they are still pushing water uphill — against the tsunami of DEATH OF THE MEMES memes pouring the other way.
Russian state propaganda mouthpiece RT has even joined in the fun, because of course Putin is no fan of the EU…
The Society of Authors makes the very pertinent point that tech giants have spent millions lobbying against the reforms. They also argue this campaign has been characterised by “a loop of misinformation and scaremongering”.
So, basically, Google et al stand accused of spreading (even more) fake news with a self-interested flavor. Who’d have thunk it?!
The EU’s (voluntary) Transparency Register records Google directly spending between $6M and $6.4M on regional lobbying activities in 2016 alone. (Although that covers not just copyright related lobbying but a full laundry list of “fields of interest” its team of 14 smooth-talking staffers apply their Little Fingers to.)
But the company also seeks to exert influence on EU political opinion via membership of additional lobbying organizations.
And the register lists a full TWENTY-FOUR organizations that Google is therefore also speaking through (by contrast, Facebook is merely a member of eleven bodies) — from the American Chamber of Commerce to the EU to dry-sounding thinktanks, such as the Center for European Policy Studies and the European Policy Center. It is also embedded in startup associations, like Allied for Startups. And various startup angles have been argued by critics of the copyright reforms — claiming Europe is going to saddle local entrepreneurs with extra bureaucracy.
Google’s dense web of presence across tech policy influencers and associations amplifies the company’s regional lobbying spend to as much as $36M, music industry bosses contend.
Though again that dollar value would be spread across multiple GOOG interests — so it’s hard to sum the specific copyright lobbying bill. (We asked Google — it didn’t answer). Multiple millions looks undeniable though.
Of course the music industry and publishers have been lobbying too.
But probably not at such a high dollar value. Though Europe’s creative industries have the local contacts and cultural connections to bend EU politicians’ ears. (As, well, they probably should.)
Seasoned European commissioners have professed themselves astonished at the level of lobbying — and that really is saying something.
Yes there are actually two sides to consider…
Returning to the Society of Authors, here’s the bottom third of their points — which focus on countering the copyright reform critics’ counterarguments:
The proposals aren’t censorship: that’s the very opposite of what most journalists, authors, photographers, film-makers and many other creators devote their lives to.
Not allowing creators to make a living from their work is the real threat to freedom of expression.
Not allowing creators to make a living from their work is the real threat to the free flow of information online.
Not allowing creators to make a living from their work is the real threat to everyone’s digital creativity.
Stopping the directive would be a victory for multinational internet giants at the expense of all those who make and enjoy creative works.
Certainly some food for thought there.
But as entrenched, opposing positions go, it’s hard to find two more perfect examples.
And with such violently opposed and motivated interest groups attached to the copyright reform issue there hasn’t really been much in the way of considered debate or nuanced consideration on show publicly.
But being exposed to endless DEATH OF THE INTERNET memes does tend to have that effect.
What’s that about Article 3 and AI?
There is also debate about Article 3 of the copyright reform plan — which concerns text and data-mining. (Or TDM, as the Commission sexily abbreviates it.)
The original TDM proposal, which was rejected by MEPs, would have limited data mining to research organisations for the purposes of scientific research (though Member States would have been able to choose to allow other groups if they wished).
This portion of the reforms has attracted less attention (but, again, it’s difficult to be heard above screams about dead memes). Though there have been concerns raised from certain quarters that it could impact startup innovation — by throwing up barriers to training and developing AIs, putting rights blocks around (otherwise public) data-sets that could (otherwise) be ingested and used to train algorithms.
Or that “without an effective data mining policy, startups and innovators in Europe will run dry”, as a recent piece of sponsored content inserted into Politico put it.
That paid for content was written by — you guessed it! — Allied for Startups.
Aka the organization that counts Google as a member…
The most fervent critics of the copyright reform proposals — i.e. those who would prefer to see a pro-Internet-freedoms overhaul of digital copyright rules — support a ‘right to read is the right to mine’ style approach on this front.
So basically a free for all — to turn almost any data into algorithmic insights. (Presumably these folks would agree with this kind of thing.)
Middle ground positions which are among the potential amendments now being considered by MEPs would support some free text and data mining — but, where legal restrictions exist, then there would be licenses allowing for extractions and reproductions.
And now the amendments, all 252 of them…
The whole charged copyright saga has delivered one bit of political drama already — when the European Parliament voted in July to block proposals agreed only by the legal affairs committee, thereby reopening the text for amendments and fresh votes.
So MEPs now have the chance to refine the parliament’s position via supporting select amendments — with that vote taking place next week.
There are 252 in all! Which just goes to show how gloriously messy the democratic process is.
It also suggests the copyright reform could get entirely stuck — if parliamentarians can’t agree on a compromise position which can then be put to the European Council and go on to secure final pan-EU agreement.
So, for example, Pirate Party MEP Julia Reda, one of the reforms’ most prominent critics, argues that amendments to add limited exceptions for platform liability would still constitute “upload filters” (and therefore “censorship machines”).
Her preference would be deleting the article entirely and making no change to the current law. (Albeit that’s not likely to be a majority position, given the July vote on the original Juri text of the copyright reform proposals: 278 MEPs voted in favor, losing out to 318 against.)
But she concedes that limiting the scope of liability to only music and video hosting platforms would be “a step in the right direction, saving a lot of other platforms (forums, public chats, source code repositories, etc.) from negative consequences”.
She also flags an interesting suggestion — via another tabled amendment — of “outsourcing” the inspection of published content to rightsholders via an API.
“With a fair process in place [it] is an interesting idea, and certainly much better than general liability. However, it would still be challenging for startups to implement,” she adds.
Reda has also tabled a series of additional amendments to try to roll back what she characterizes as “some bad decisions narrowly made by the Legal Affairs Committee” — including adding a copyright exception for user generated content (which would essentially get platforms off the hook insofar as rights infringements by web users are concerned); adding an exception for freedom of panorama (aka the taking and sharing of photos in public places, which is currently not allowed in all EU Member States); and another removing a proposed extra copyright added by the Juri committee to cover sports events — which she contends would “filter fan culture away”.
So is the free Internet about to end?
MEP Catherine Stihler, a member of the Progressive Alliance of Socialists and Democrats, who also voted in July to reopen debate over the reforms, reckons nearly every parliamentary group is split — ergo the vote is hard to call.
“It is going to be an interesting vote,” she tells TechCrunch. “We will see if any possible compromise at the last minute can be reached but in the end parliament will decide which direction the future of not just copyright but how EU citizens will use the internet and their rights on-line.
“Make no mistake, this vote affects each one of us. I do hope that balance will be struck and EU citizens fundamental rights protected.”
So that sort of sounds like a ‘maybe the Internet as you know it will change’ then.
Other views are available, though, depending on the MEP you ask.
We reached out to Axel Voss, who led the copyright reform process for the Juri committee, and is a big proponent of Article 13, Article 11 (and the rest), to ask if he sees value in the debate having been reopened rather than fast-tracked into EU law — to have a chance for parliamentarians to achieve a more balanced compromise. At the time of writing Voss hadn’t responded.
Voting to reopen the debate in July, Stihler argued there are “real concerns” about the impact of Article 13 on freedom of expression, as well as flagging the degree of consumer concern parliamentarians had been seeing over the issue (doubtless helped by all those memes + petitions), adding: “We owe it to the experts, stakeholders and citizens to give this directive the full debate necessary to achieve broad support.”
MEP Marietje Schaake, a member of the Alliance of Liberals and Democrats for Europe, was willing to hazard a politician’s prediction that the proposals will be improved via the democratic process — albeit, what would constitute an improvement here of course depends on which side of the argument you stand.
But she’s rooting for exceptions for user generated content and additional refinements to the three debated articles to narrow their scope.
Her spokesman told us: “I think we’ll end up with new exceptions on user generated content and freedom of panorama, as well as better wording for article 3 on text and data mining. We’ll end up probably with better versions of articles 11 and 13, the extent of the improvement will depend on the final vote.”
The vote will be held during an afternoon plenary session on September 12.
Another day, another political grilling for social media platform giants.
The Senate Intelligence Committee’s fourth hearing took place this morning, with Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey present to take questions as U.S. lawmakers continue to probe how foreign influence operations are playing out on Internet platforms — and eye up potential future policy interventions.
During the session US lawmakers voiced concerns about “who owns” data they couched as “rapidly becoming me”. An uncomfortable conflation for platforms whose business is human surveillance.
They also flagged the risk of more episodes of data manipulation intended to incite violence, such as has been seen in Myanmar — and Facebook especially was pressed to commit to having both a legal and moral obligation towards its users.
The value of consumer data was also raised, with committee vice chair, Sen. Mark Warner, suggesting platforms should actively convey that value to their users, rather than trying to obfuscate the extent and utility of their data holdings. A level of transparency that will clearly require regulatory intervention.
Here’s our round-up of some of the other highlights from this morning’s session.
Google not showing up
Today’s hearing was a high profile event largely on account of two senior bums sitting on the seats before lawmakers — and one empty chair.
Facebook sent its COO Sheryl Sandberg. Twitter sent its bearded wiseman CEO Jack Dorsey (whose experimental word of the month appears to be “cadence” — as in he frequently said he would like a greater “cadence” of meetings with intelligence tips from law enforcement).
But Google offered its legal chief in place of Larry Page or Sundar Pichai, whom the committee had actually asked for.
Which meant the company instantly became the politicians’ favored punchbag, with senator after senator laying into Alphabet for empty chairing them at the top exec level.
Whatever Page and Pichai were too busy doing to answer awkward questions about Alphabet’s business activities and ambitions in China, the no-show looks like a major own goal, leaving senators free to slam the company at will.
Page staying away also made Facebook and Twitter look like the very model of besuited civic responsibility and patriotism just for bothering to show up.
We got “Jack” and “Sheryl” first name terms from some of the senators, and plenty of “thanks for turning up” heaped on them from all corners — with some very particular barbs reserved for Google.
“I want to commend both of you for your appearance here today for what was no doubt going to be some uncomfortable questions. And I want to commend your companies for making you available. I wish I could say the same about Google,” said Senator Tom Cotton, addressing those in the room. “Both of you should wear it as a badge of honor that the Chinese Communist Party has blocked you from operating in their country.”
“Perhaps Google didn’t send a senior executive today because they’ve recently taken actions such as terminating a co-operation they had with the American military on programs like artificial intelligence that are designed not just to protect our troops and help them fight in our country’s wars but to protect civilians as well,” he continued, warming to his theme. “This is at the very same time that they continue to co-operate with the Chinese Communist Party on matters like artificial intelligence or partner with Huawei and other Chinese telecom companies who are effectively arms of the Chinese Communist Party.
“And credible reports suggest that they are working to develop a new search engine that would satisfy the Chinese Communist Party’s censorship standards after having disclaimed any intent to do so eight years ago. Perhaps they did not send a witness to answer these questions because there is no answer to these questions. And the silence we would hear right now from the Google chair would be reminiscent of the silence that that witness would provide.”
Even Sandberg seemed to cringe when offered the home-run opportunity to stick the knife into Google, as Cotton asked both witnesses whether their companies would consider taking these kinds of actions.
But after a split second’s hesitation her media training kicked in, and she found a way of diplomatically giving Google the asked-for kicking. “I’m not familiar with the specifics of this at all but based on how you’re asking the question I don’t believe so,” was her reply.
After his own small pause, Dorsey, the man of fewer words, added: “Also no.”
Dorsey apologizing on repeat
‘We haven’t done a good job of that’ was the most common refrain falling from Dorsey’s bearded lips this morning as senators asked why the company hasn’t managed to suck less from all sorts of angles — whether by failing to provide external researchers with better access to data to help them help it fight malicious interference; failing to inform individual users who’ve been the targeted victims of Twitter fakery that the abuse has been happening to them; or just failing to offer any kind of contextual signal to its users that some piece of content they’re seeing is (or might be) maliciously fake.
But then this is the man who has defended providing a platform to people who make a living selling lies, so…
“We haven’t done a good job of that in the past,” was certainly phrase of the morning for a contrite Dorsey. And while admitting failure is at least better than denying you’re failing, it’s still just that: Failure.
And continued failure has been a Twitter theme for so long now, when it comes to things like harassment and abuse, that it’s starting to feel intentional. (As if, were you able to cut Twitter open, you’d find the words ‘feed the trolls’ running all the way through its business.)
Sadly the committee seemed placated by Dorsey’s repeat confessions of inadequacy. He really wasn’t pressed enough; we’d have liked to see a lot more grilling over the short-term business incentives that tie his hands on fighting abuse.
Amusingly, one senator rechristened Dorsey “Mr Darcey”, after somehow tripping over the two syllables of his name. But actually, thinking about it, ‘pride and prejudice’ might be a good theme for the Twitter CEO to explore during one of his regular meditation sessions.
Y’know, as he ploughs through a second turgid decade of journeying towards self-awareness — while continuing to be paralyzed, on the business, civic and, well, human front, by rank indecision about which people and points of view to listen to (Pro-Tip: If someone makes money selling lies and/or spreading hate you really shouldn’t be letting them yank your operational chain) — leaving his platform (the would-be “digital public square”, as he kept referring to it today) incapable of upholding the healthy standards it claims to want. (Or daubed with all manner of filthy graffiti, if you want a visual metaphor.)
The problem is that Twitter’s stated position/mission, per Dorsey’s prepared statement to the committee, of keeping “all voices on the platform” is hubris. It’s a flawed ideology that does massive damage to the free speech and healthy conversation he professes to want to champion, because Nazis are great at silencing the people they hate and harass.
Unfortunately Dorsey still hasn’t had that eureka moment yet. And there was no sign of any imminent awakening judging by this morning’s performance.
Sandberg’s oh-so-smooth operation — but also an exchange that rattled her
The Facebook COO isn’t chief operating officer for nothing. She’s the queen of the polished, non-committal soundbite. And today she almost always had one to hand — smoothly projecting the impression that the company is always doing something. Whether that’s on combating hate speech, hoaxes and “inauthentic” content, or IDing and blocking state-level disinformation campaigns — thereby shifting attention off the deeper question of whether Facebook is doing enough. (Or even whether its platform might not be the problem itself.)
Albeit the bar looks very low indeed when your efforts are being set against Twitter and an empty chair. (Aka the “invisible witness” as one senator sniped at Google.)
Very many of her answers courteously informed senators that Facebook would ‘follow up’ with answers and/or by providing some hazily non-specific ‘collaborative work’ at some undated future time — which is the most professional way to kick awkward questions into the long grass.
Though do it long enough and the grass can turn on you and start to bite back because it’s got so long and unkempt it now contains some very angry snakes.
Senator Kamala Harris, very clearly seething at this point — having had her questions to Facebook knocked about since November 2017, when its general counsel first testified to the committee on the disinformation topic — was determined to get under Sandberg’s skin. And she did.
The exchange that rattled the Facebook COO started off around how much money it makes off of ads run by fake accounts — such as the Kremlin-backed Internet Research Agency.
Sandberg slickly reframed “inauthentic content” as the even more boring-sounding “inorganic content” — now several psychological steps removed from the shockingly outrageous Kremlin propaganda that the company eventually disclosed.
She added it was equivalent to .004% of content in News Feed (hence Facebook’s earlier contention to Harris that it’s “immaterial to earnings”).
It’s not so much the specific substance of the question that’s the problem here for Facebook — with Sandberg also smoothly reiterating that the IRA had spent about $100k (which is petty cash in ad terms) — it’s the implication that Facebook’s business model profits off of fakes and hate, and is therefore amorously entwined with both.
“From our point of view, Senator Harris, any amount is too much,” continued Sandberg after she rolled out the $100k figure, and now beginning to thickly layer on the emulsion.
Harris cut her off, interjecting: “So are you saying that the revenue generated was .004% of your annual revenue?”, before adding the pointed observation: “Because of course that would not be immaterial” — which drew a rare stuttered double “so” from Sandberg.
“So what metric are you using to calculate the revenue that was generated associated with those ads, and what is the dollar amount that is associated then with that metric?” pressed Harris.
Sandberg couldn’t provide the straight answer being sought, she said, because “ads don’t run with inorganic content on our service” — claiming: “There is actually no way to firmly ascertain how much ads are attached to how much organic content; it’s not how it works.”
“But what percentage of the content on Facebook is organic?” rejoined Harris.
That elicited a micro-pause from Sandberg, before she fell back on the usual: “I don’t have that specific answer but we can come back to you with that.”
Harris pushed her again, wondering if it’s “the majority of content”?
“No, no,” said Sandberg, sounding almost flustered.
“Your company’s business model is complex but it benefits from increased user engagement… so, simply put, the more people that use your platform the more they are exposed to third party ads, the more revenue you generate — would you agree with that?” continued Harris, belaboring the obvious only to reel her in.
After another pause Sandberg asked her to repeat this hardly complex question — before affirming “yes, yes” and then hastily qualifying it with: “But only I think when they see really authentic content because I think in the short run and over the long run it doesn’t benefit us to have anything inauthentic on our platform.”
Harris continued to hammer on how Facebook’s business model benefits from greater user engagement as more ads are viewed via its platform — linking it to “a concern that many have is how you can reconcile an incentive to create and increase your user engagement with the content that generates a lot of engagement is often inflammatory and hateful”.
She then skewered Sandberg with a specific example of Facebook’s hate speech moderation failure — and by suggestive implication a financially incentivized policy and moral failure — referencing a ProPublica report from June 2017 which revealed the company had told moderators to delete hate speech targeting white men but not black children — because the latter were not considered a “protected class”.
Sandberg, sounding uncomfortable now, said this was “a bad policy that has been changed”. “We fixed it,” she added.
“But isn’t that a concern with hate, period: that not everyone is looked at the same way?” wondered Harris.
Facebook “cares tremendously about civil rights” said Sandberg, trying to regain the PR initiative. But she was again interrupted by Harris — wondering when exactly Facebook had “addressed” that specific policy failure.
Sandberg was unable to put a date on when the policy change had been made. Which obviously now looked bad.
“Was the policy changed after that report? Or before that report from ProPublica?” pressed Harris.
“I can get back to you on the specifics of when that would have happened,” said Sandberg.
“You’re not aware of when it happened?”
“I don’t remember the exact date.”
“Do you remember the year?”
“Well you just said it was 2017.”
“So do you believe it was 2017 when the policy changed?”
“Sounds like it was.”
The awkward exchange ended with Sandberg being asked whether or not Facebook had changed its hate speech policies to protect not just those people who have been designated legally protected classes of people.
“I know that our hate speech policies go beyond the legal classifications, and they are all public, and we can get back to that on that,” she said, falling back on yet another pledge to follow up.
Twitter agreeing to bot labelling in principle
We flagged this earlier but Senator Warner managed to extract from Dorsey a quasi-agreement to labelling automation on the platform in future — or at least providing more context to help users navigate what they’re being exposed to in tweet form.
He said Twitter has been “definitely” considering doing this — “especially this past year”.
Although, as we noted earlier, he had plenty of caveats about the limits of its powers of bot detection.
“It’s really up to the implementation at this point,” he added.
How exactly ‘bot or not’ labelling will come to Twitter isn’t clear. Nor was there any timeframe.
But it’s at least possible to imagine the company could add some sort of suggestive percentage of automated content to accounts in future — assuming Dorsey can find his first, second and third gears.
Lawmakers worried about the impact of deepfakes
Deepfakes (aka AI-powered manipulation of video to create fake footage of people doing things they never did) are, perhaps unsurprisingly, already on the radar of reputation-sensitive U.S. lawmakers, even though the technology itself is hardly in widespread use yet.
Several senators asked whether (and how comprehensively) the social media companies archive suspended or deleted accounts.
Clearly politicians are concerned. No senator wants to be ‘filmed in bed with an intern’ — especially one they never actually went to bed with.
The response they got back was a qualified yes — with both Sandberg and Dorsey saying they keep such content if they have any suspicions.
Which is perhaps rather cold comfort when you consider that Facebook apparently had zero suspicions about all the Kremlin propaganda violently coursing across its platform in 2016 and generating hundreds of millions of views.
Since that massive fuck-up the company has certainly seemed more proactive on the state-sponsored fakes front — recently removing a swathe of accounts linked to Iran which were pushing fake content, for example.
Although unless lawmakers regulate for transparency and audits of platforms there’s no real way for anyone outside these commercially walled gardens to be 110% sure.
Sandberg’s clumsy affirmation of WhatsApp encryption
Since the WhatsApp founders left Facebook (one last fall, the other earlier this year), there have been rumors that the company might be considering dropping the flagship end-to-end encryption the messaging platform boasts, specifically to help with its monetization plans around linking businesses with users.
And Sandberg was today asked directly if WhatsApp still uses e2e encryption. She replied by affirming Facebook’s commitment to encryption generally — saying it’s good for user security.
“We are strong believers in encryption,” she told lawmakers. “Encryption helps keep people safe, it’s what secures our banking system, it’s what secures the security of private messages, and consumers rely on it and depend on it.”
Yet on the specific substance of the question, which had asked whether WhatsApp is still using end-to-end encryption, she pulled out another of her professionally caveated responses — telling the senator who had asked: “We’ll get back to you on any technical details but to my knowledge it is.”
Most probably this was just her habit of professional caveating kicking in. But it was an odd way to reaffirm something as fundamental as the e2e encrypted architecture of a product used by billions of people on a daily basis. And whose e2e encryption has caused plenty of political headaches for Facebook — which in turn is something Sandberg has been personally involved in trying to fix.
Should we be worried that the Facebook COO couldn’t swear under oath that WhatsApp is still e2e encrypted? Let’s hope not. Presumably the day job has become so fettered with fixes that she momentarily forgot what she could swear she knows to be true and what she couldn’t.
Alphabet’s decision to decline to send its CEO Larry Page to today’s Senate Intelligence Committee hearing — to answer questions about what social media platforms are doing to thwart foreign influence operations intended to sow political division in the U.S. — has earned it a stinging rebuke from the committee’s vice chair, Sen. Mark Warner.
“I’m deeply disappointed that Google – one of the most influential digital platforms in the world – chose not to send its own top corporate leadership to engage this committee,” said Warner in his opening remarks, after praising Facebook and Twitter for agreeing to send their COO and CEO respectively.
Alphabet offered its SVP of global affairs and chief legal officer, Kent Walker, to testify in front of lawmakers but declined to send CEO Page or Google CEO Sundar Pichai.
Committee chairman, Richard Burr, was slightly less stinging in his opening remarks but also professed himself “disappointed that Google decided against sending the right senior level executive”.
“If the answer is regulation let’s have an honest dialogue about what that looks like. If the key is more resources or legislation that facilitates information sharing and government co-operation let’s get it out there,” he concluded. “If it’s national security policies that punish the kind of information and influence operations that we’re talking about this morning to the point that they aren’t even considered in foreign capitals then let’s acknowledge that. But whatever the answer is we’ve got to do this collaboratively and we’ve got to do this now. That’s our responsibility to the American people.”
Warner said committee members have “difficult questions about structural vulnerabilities on a number of Google’s platforms that we will need answered”, calling out a number of Google products by name and identifying abuse associated with those services.
“From Google Search, which continues to have problems surfacing absurd conspiracies….To YouTube, where Russian-backed disinformation agents promoted hundreds of divisive videos….To Gmail, where state-sponsored operatives attempt countless hacking attempts, Google has an immense responsibility in this space. Given its size and influence, I would have thought the leadership at Google would want to demonstrate how seriously it takes these challenges and to lead this important public discussion.”
We’ve reached out to Google for a response.
Warner concluded his opening remarks with some policy suggestions for regulating social media platforms, saying he wanted to get the companies’ constructive thoughts on issues such as whether platforms should identify bots to their users; whether there’s a public interest in ensuring more anonymized data is available to researchers and academics to help identify potential problems and misuse; why terms of service are “so difficult to find and nearly impossible to read”; why US lawmakers shouldn’t adopt ideas such as data portability, data minimization, or first party consent — which are already baked into EU privacy law — and what further accountability there should be related to platforms’ “flawed advertising model”.
Update: A Google spokesperson sent us its earlier statement — in which it writes:
Over the last 18 months we’ve met with dozens of Committee Members and briefed major Congressional Committees numerous times on our work to prevent foreign interference in US elections. Our SVP of Global Affairs and Chief Legal Officer, who reports directly to our CEO and is responsible for our work in this area, will be in Washington, D.C. on September 5, where he will deliver written testimony, brief Members of Congress on our work, and answer any questions they have. We had informed the Senate Intelligence Committee of this in late July and had understood that he would be an appropriate witness for this hearing.
Facebook’s chief operating officer Sheryl Sandberg has admitted that the social networking giant could have done more to prevent foreign interference on its platforms, but said that the government also needs to step up its intelligence sharing efforts.
“We were too slow to spot this and too slow to act,” said Sandberg in prepared remarks. “That’s on us.”
The hearing comes in the aftermath of Russian interference in the 2016 presidential election. Social media companies have been increasingly under the spotlight after foreign actors, believed to be working for or closely with the Russian government, used disinformation-spreading tactics to try to influence the outcome of that election, as well as in the run-up to the midterm elections later this year.
Both Facebook and Twitter have removed accounts and bots from their sites believed to be involved in spreading disinformation and false news. Google said last year that it found Russian meddling efforts on its platforms.
“We’re getting better at finding and combating our adversaries, from financially motivated troll farms to sophisticated military intelligence operations,” said Sandberg.
But Facebook’s second-in-command also said that the US government could do more to help companies understand the wider picture from Russian interference.
“We continue to monitor our service for abuse and share information with law enforcement and others in our industry about these threats,” she said. “Our understanding of overall Russian activity in 2016 is limited because we do not have access to the information or investigative tools that the U.S. government and this Committee have.”
Later, Twitter’s Dorsey also said in his own statement: “The threat we face requires extensive partnership and collaboration with our government partners and industry peers,” adding: “We each possess information the other does not have, and the combined information is more powerful in combating these threats.”
Both Sandberg and Dorsey are subtly referring to classified information that the government has but private companies don’t get to see — information that is considered a state secret.
Tech companies have in recent years pushed for more access to the knowledge that federal agencies hold, not least to help protect against increasing cybersecurity threats and hostile nation-state actors. The theory goes that sharing intelligence can help companies defend against the best-resourced hackers. But efforts to introduce legislation have proven controversial, because critics argue that in sharing threat information with the government, private user data would also be collected and sent to U.S. intelligence agencies for further investigation.
Instead, tech companies are now pushing for information from Homeland Security to better understand the threats they face — to independently fend off future attacks.
As reported, tech companies last month met in secret to discuss preparations to counter foreign manipulation on their platforms. But attendees, including Facebook, Twitter, Google and Microsoft, are said to have “left the meeting discouraged” that they received little insight from the government.