Whenever a company appears to be guilty of something, from petty neglect to grand deception, a class action lawsuit usually follows. But until a judge certifies the class, the threat remains fairly empty. Unfortunately for Facebook, one major suit from 2015 has just been given that critical go-ahead.
The case concerns an Illinois law that prohibits collection of biometric information, including facial recognition data, in the way that Facebook has done for years as part of its photo-tagging systems.
BIPA, the Illinois law, is a real thorn in Facebook’s side. The company has not only been pushing to have the case dismissed, but it has been working to have the whole law changed by supporting an amendment that would defang it — but more on that another time.
Judge James Donato in California’s Northern District has made no determination as to the merits of the case itself; first, it must be shown that there is a class of affected people with a complaint that is supported by the facts.
For now, he has found (you can read the order here) that “plaintiffs’ claims are sufficiently cohesive to allow for a fair and efficient resolution on a class basis.” The class itself will consist of “Facebook users located in Illinois for whom Facebook created and stored a face template after June 7, 2011.”
An earlier, broader class suggested by the plaintiffs included all Illinois users who appeared in a photograph on Facebook, but the judge, commendably, decided that this would include people who appeared in images but were not in fact recognized or recorded as face templates by the recognition systems. The more limited class will still amount to millions of people.
Facebook’s attempt to discredit the suit, quibbling over definitions and saying the plaintiffs “know almost nothing” about the systems in question, did not go over well with the judge. “The deposition testimony by the named plaintiffs shows a perfectly adequate understanding of the case, and it clearly manifests their concerns about Facebook’s treatment of personal biometric data,” he writes.
Its suggestion that no “actual” harm was caused also fails to hold water: “As the Court has already found, there is no question that plaintiffs here have sufficiently alleged that intangible injury.” Requiring “actual” injury would severely limit the reach of a rule like BIPA, because the harm caused is to one’s privacy and security, not to one’s body or wallet. Of course, the question of whether users consented to this “intangible injury” has yet to be settled, and may be a major crux of the case.
Facebook also tries the old chestnut of saying its servers aren’t in Illinois, so Illinois law doesn’t apply. “Contrary to Facebook’s suggestion,” writes Donato, “the geographic location of its data servers is not a dispositive factor. Server location may be one factor in the territoriality inquiry, but it is not the exclusive one.”
Lastly and most absurdly, Facebook argued that to establish legitimacy it would be necessary to check which users’ face templates were derived from scans of printed photographs instead of natively digital shots. “This too is unavailing,” says Donato, citing a total lack of evidence presented by Facebook.
When contacted for comment, Facebook provided a simple statement:
We are reviewing the ruling. We continue to believe the case has no merit and will defend ourselves vigorously.
The case will go ahead as ordered, though as before, at a snail’s pace.
Senator Kamala Harris (D-CA) spent her portion of today’s epic-length questioning of Mark Zuckerberg getting the CEO to squeeze himself deeper and deeper between a rock and a hard place. He didn’t reveal anything particularly damning, but he also — with her help — made himself look ineffective and clueless.
Her questioning had Zuckerberg contradicting himself on a serious topic: how the decision was made in 2015 to not inform the 87 million users that their data had been improperly sold off. If he didn’t know about how that decision was made, what kind of leadership was that? But if he did know, then how could no conversation have taken place about the decision before it was made? It was one of the few times in the hearing where Zuckerberg’s prepared remarks proved wholly insufficient.
Harris, who sounded bored — as well she might be after some of the softballs that had been lobbed in Zuckerberg’s direction — began by saying that she was “concerned” by what she’d heard.
“During the course of this hearing these last four hours you’ve been asked several critical questions for which you don’t have answers,” she began.
We were also tracking the many, many times Zuckerberg declined to answer clearly or deferred with the standard “we’ll follow up.” For the record, Harris listed that Zuckerberg did not address:
Whether Facebook can track activity across devices
Who is Facebook’s biggest competitor (Senator Graham pursued this with vigor)
Whether Facebook “may store up to 96 categories of user information” (I would be surprised if it is that few)
Whether he knew about Aleksandr Kogan’s terms of service or whether Kogan could sell or transfer data under them
But her main issue, aside from reminding Zuckerberg that these points had not been forgotten, was the fact that in 2015, Facebook learned that the data of millions of users had been abused, and yet did not inform those users.
“A concern of mine is that you, meaning Facebook, and I’m going to assume you personally as CEO, became aware in December of 2015 that Dr Kogan and Cambridge Analytica misappropriated data from 87 million Facebook users. That’s 27 months ago,” she said. “However, a decision was made not to notify the users. So my question is did anyone at Facebook have a conversation, at the time that you became aware of this breach, have a conversation wherein the decision was made not to contact the users?”
Here Zuckerberg attempted the defense of not being able to know every conversation at Facebook “because I wasn’t in a lot of them… I mean, I’m not sure what other people discussed.”
[Photo: Mark Zuckerberg testifies before a combined Senate Judiciary and Commerce committee hearing in the Hart Senate Office Building on Capitol Hill, April 10, 2018. Chip Somodevilla/Getty Images]
Harris did not take the bait, and when Zuckerberg attempted to steer the conversation toward the known facts of how Facebook responded in 2015, she pressed on:
“Were you part of a discussion that resulted in the decision not to inform your users?”
“I don’t remember a conversation like that,” Zuckerberg responded, and attempted to expand with “for the reason why—” only to be cut off by Harris again.
“Are you aware of anyone in leadership at Facebook who was in a conversation where a decision was made not to inform your users,” she asked, “or do you believe no such conversation ever took place?”
This was an excellent move. She’d limited Zuckerberg’s options to either admitting he was unaware of conversations among leadership choosing to withhold news of this data abuse from users (unrealistic), or admitting that leadership did not have those conversations (deeply troubling). Both reflect poorly on him, his executives, and the company. Zuckerberg prudently chose to plead ignorance.
“I’m not sure whether there was a conversation about that,” he said, yet immediately hit on a prepared line. “But I can tell you about the thought process at the time, of the company, which was that in 2015 when we heard about this, we banned the developer and we demanded that they delete all the data and stop using it, and the same with Cambridge Analytica. They told us they had—”
But Harris had no intention of allowing him to run out the clock with recycled, irrelevant statements, as he had many times in the previous hours.
“I’ve heard your testimony in that regard,” she cut in before finally taking her chance to bear down on him.
“But I’m talking about notification of the users. This relates to the issue of transparency and the relationship of trust — informing the user about what you know in terms of how their personal information has been misused. When you personally became aware of this, did you or senior leadership do an inquiry to find out who at Facebook had this information, and did they not have a discussion about whether or not the users should be informed, back in December of 2015?”
Zuckerberg was again faced with a poor choice, and opted instead for a show of humility.
“Senator, in retrospect I think we clearly view it as a mistake that we didn’t inform people, and we did that based on false information that we thought that the case was closed and that the data had been deleted.”
[Photo: Mark Zuckerberg arrives to testify before a joint hearing of the US Senate Commerce, Science and Transportation Committee and Senate Judiciary Committee on Capitol Hill, April 10, 2018. Jim Watson/AFP/Getty Images]
Harris jumped on this admission that “we did that”: “So there was a decision made on that basis not to inform the users, is that correct?”
“That’s my understanding, yes. But in retrospect I think that was a mistake and knowing what we know now we should have handled a lot of things here differently,” he continued, abjectly.
Harris politely dismissed this sad act (“And I appreciate that point”) and returned to business for one last question on this: “Do you know when that decision was made not to inform the users?”
“I don’t,” Zuckerberg said simply.
So to sum up: in 2015, it became clear to Facebook and certainly to senior leadership that the data of 87 million people had been sold against the company’s terms. Whether or not to inform those users seems like a fundamental question, yet Zuckerberg claimed to have no recollection of any discussion thereof. That hardly seems possible — especially since he later said that they had in fact had that discussion, and that the decision was made on bad information. But he doesn’t remember when this discussion, which he does or doesn’t remember, did or didn’t take place!
While this poor showing likely doesn’t rise to outright falsehood, Zuckerberg’s blatant dissimulation leaves him looking like a liar and a sap. For a hearing where the Senators themselves were often the ones making fools of themselves, it was nice to see the shoe on the other foot. I look forward to Senator Harris’s continuing attentions; her parting shot was telling Zuckerberg and everyone else how subpar their answers to her 50 (!) written questions from a previous hearing were. Here’s hoping she gets answers.
You can watch the full video below (courtesy of ABC):
The app permissions that led to 87 million Facebook users’ data being harvested and sold to Cambridge Analytica may have also allowed access to those users’ inboxes, the company confirmed today. This wasn’t achieved by any underhanded means, exactly, but people might not have realized that they were granting permission to read and record their private messages as well as more public data like location and interests.
That messages may have been collected by CA was revealed first by Facebook itself as part of its warning issued to the 87 million users in question. “A small number of people who logged into ‘This Is Your Digital Life’ also shared their own News Feed, timeline, posts and messages which may have included posts and messages from you,” reads the warning.
Access to messages had not been previously disclosed. And of course if someone affected had chatted with you, then your messages would also have been collected.
The permission used to do this was called “read_mailbox,” though it would have been put in more everyday terms when a user was agreeing to it. The dialog box would have said something along the lines of, “This app will be able to access your wall posts, friend list, contacts, messages…” in bullet points.
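For the curious, here is a rough sketch of how a third-party app would have triggered that dialog under Facebook’s legacy Graph API login flow. The “read_mailbox” scope name is the real permission reported above; the app ID and redirect URI are placeholders, and this is an illustration of the general OAuth-scope mechanism, not Facebook’s exact implementation.

```python
# Illustrative sketch: building the login-dialog URL that prompted users
# to grant extended permissions under the legacy Graph API (v1.x era).
# "read_mailbox" is the real scope name; everything else is a placeholder.
from urllib.parse import urlencode

def build_oauth_dialog_url(app_id: str, redirect_uri: str, scopes: list) -> str:
    """Assemble the OAuth dialog URL whose permission list drove the prompt."""
    params = {
        "client_id": app_id,
        "redirect_uri": redirect_uri,
        "scope": ",".join(scopes),  # each scope became a bullet in the dialog
    }
    return "https://www.facebook.com/dialog/oauth?" + urlencode(params)

url = build_oauth_dialog_url(
    app_id="HYPOTHETICAL_APP_ID",
    redirect_uri="https://example.com/callback",
    scopes=["user_friends", "read_mailbox"],
)
print(url)
```

The key point is that message access was just one more comma-separated entry in that scope list, easy for a user to wave through alongside the more innocuous-sounding permissions.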
This Is Your Digital Life, the app created by researcher Aleksandr Kogan which served as the harvester for all this data, requested “read_mailbox” privileges for some period and, as Facebook tells Wired, a total of 1,500 people granted that permission.
It’s unclear why the number is so low if hundreds of thousands agreed to the terms, but the app may only have requested messaging access for a brief period — stopping, perhaps, upon finding that people balked at granting it.
Still, even if only 1,500 people had their messages collected directly, the number of people whose messages were indirectly collected could be orders of magnitude higher. After all, look at your inbox, if you have one — there are likely dozens of conversations, perhaps with hundreds of people. So that 1,500 could balloon to 150,000 real fast.
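The back-of-envelope math above can be made explicit. The 1,500 figure is Facebook’s; the average number of distinct correspondents per inbox is purely an assumption for illustration.

```python
# Rough estimate of indirect exposure from direct message collection.
# direct_users is Facebook's reported figure; correspondents_per_inbox
# is an assumed average (dozens of conversations, hundreds of people).
direct_users = 1_500
correspondents_per_inbox = 100  # assumption, not a reported number

indirectly_exposed = direct_users * correspondents_per_inbox
print(indirectly_exposed)  # 150000
```

Even halving or doubling the assumed average, the pool of people whose messages were swept up plausibly lands in the tens to hundreds of thousands.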
I’ve asked Facebook for clarification on how the 1,500 number was determined and what the number of secondary affected users is.
Today’s testimony by Mark Zuckerberg in front of a Senate joint committee was often boring or redundant with previous statements. But there was an exchange near the two-hour mark that was pleasantly refreshing: Senator Lindsey Graham (R-SC) doggedly pursuing a common-sense answer from Zuckerberg on the question of whether Facebook has any real competition.
Graham doesn’t let Zuckerberg employ his spin on the admittedly complex question of what Facebook’s competitors are. Demanding a simpler answer by employing a folksy car-buying metaphor, he makes it clear that at least from one perspective, Facebook is more or less without a real competitor — with the possible exception of Instagram, which it of course opted to buy for a fortune rather than allow it to exist as a credible rival.
The Senator also makes it clear that he doesn’t think Facebook should be allowed to self-regulate — but his invitation to Zuckerberg to collaborate on rules sure sounds like he wants the company to have a say in how it should or should not be bound by law.
I’ve transcribed the exchange below:
Graham: Who’s your biggest competitor?

Zuckerberg: Senator, we have a lot of competitors.

Graham: Who’s your biggest?

Zuckerberg: Mmm… I think the categories of… do you want just one? I’m not sure I can give one. But can I give a bunch?

Graham: Mmhm.

Zuckerberg: So there are three categories I would focus on. One are [sic] the other tech platforms, so Google, Apple, Amazon, Microsoft, we overlap with them in different ways.

Graham: Do they do, do they provide the same service that you provide?

Zuckerberg: Um, in different ways, different parts of it yes.

Graham: Let me put it this way. If I buy a Ford and it doesn’t work well and I don’t like it, I can buy a Chevy. If I’m upset with Facebook, what’s the equivalent product that I can go sign up for?

Zuckerberg: Ah well, the second category that I was going to talk about was…

Graham: I’m not talking about categories. I’m talking about is there real competition you face. Because car companies face a lot of competition. If they make a defective car, it gets out in the world, people stop buying that car, they buy another one. Is there an alternative to Facebook in the private sector?

Zuckerberg: Yes Senator, the average American uses 8 different apps…

Graham: OK.

Zuckerberg: …to communicate with their friends and stay in touch with people, ranging from text to email.

Graham: OK, which is the same service that you provide.

Zuckerberg: Well, we provide a number of different services.

Graham: Is Twitter the same as what you do?

Zuckerberg: It overlaps with a portion of what we do.

Graham: You don’t think you have a monopoly?

Zuckerberg: (long pause) Ah, it certainly doesn’t feel like that to me! (laughter)

Graham: OK, so it doesn’t. So, Instagram. You bought Instagram. Why did you buy Instagram?

Zuckerberg: Because they were very talented app developers who were making good use of our platform and understood our values.

Graham: It was a good business decision. My point is that one way to regulate a company is through competition, through government regulation. Here’s the question that all of us got to answer. What do we tell our constituents, given what’s happened here, why we should let you self-regulate? What would you tell people in South Carolina, that given all the things we’ve just discovered here, it’s a good idea for us to rely on you to regulate your own business practices?

Zuckerberg: Well Senator, my position is not that there should be no regulation. I think the internet is increasing in…

Graham: Mmkay. You’d embrace regulation?

Zuckerberg: I think the real question as the internet becomes more important in people’s lives, is what is the right regulation, not whether there should be regulation.

Graham: But you as a company welcome regulation?

Zuckerberg: I think if it’s the right regulation then yes.

Graham: Do you think the Europeans have it right?

Zuckerberg: Ah, I think that they get… things right.

Graham: Have you ever submitted… (laughter) That’s true. So would you work with us in terms of what regulations you think are necessary in your industry?

Zuckerberg: Absolutely.

Graham: OK, would you submit to us and propose regulations?

Zuckerberg: Yes and I’ll have my team follow up with you so that way we can have this discussion across the different categories where I think this discussion needs to happen.

Graham: Looking forward to it.
While it’s admittedly not the toughest questioning, it does baldly address the simple idea that Graham and others consider Facebook effectively a monopoly and intend to craft regulations or legislation to remedy what they perceive as a regulatory gap.
Facebook has previously officially noted that 470 accounts associated with Russia’s Internet Research Agency have been banned in relation to the 2016 election, plus 270 more in Russia just last week. But in today’s testimony Mark Zuckerberg also mentioned a much higher estimate of “tens of thousands,” though the confidence in that number is also much lower.
“In the IRA specifically, the ones we’ve pegged back to the IRA, we can identify 470 in the American elections, and the 270 that we went after in Russia last week,” he began in response to Senator Dianne Feinstein (D-CA), who had asked about the numbers of accounts associated with this type of coordinated disinformation campaign.
But then he continued:
“There are many others that our systems catch which are more difficult to attribute specifically to Russian intelligence, but the number would be in the tens of thousands of fake accounts…”
The tens of thousands number must be taken with a grain of salt, since clearly Facebook has not been able to definitively attribute more than the stated 740 or so to the IRA and Russian intel. But it is still significant; this is clearly different from the 30,000-odd accounts banned in relation to France’s election. That was a specific number and also not mentioned in connection with Russia specifically, as this estimate was.
It seems clear that Facebook is being conservative in its enumeration of Russian-linked accounts, and that very well may be the responsible thing to do. But Zuckerberg’s remarks today establish a ceiling in the tens of thousands in addition to the floor of several hundred. That’s worth keeping in mind.
AggregateIQ, a Canadian advertising tech and audience intelligence company, has been suspended by Facebook for allegedly being closely connected with SCL, the parent company of Cambridge Analytica, reported the National Observer.
Whistleblower Christopher Wylie described the company’s origins this way in the Guardian’s reporting: “Essentially it was set up as a Canadian entity for people who wanted to work on SCL projects who didn’t want to move to London. That’s how AIQ got started: originally to service SCL and Cambridge Analytica projects.”
AggregateIQ denies any such connection. In a statement, the company said: “AggregateIQ has never been and is not a part of Cambridge Analytica or SCL. AggregateIQ has never entered into a contract with Cambridge Analytica. Chris Wylie has never been employed by AggregateIQ. AggregateIQ has never managed, nor did we ever have access to, any Facebook data or database allegedly obtained improperly by Cambridge Analytica.”
But the reporting in the Guardian makes these claims hard to take seriously. For instance, a founding member was listed on Cambridge Analytica’s website as working at “SCL Canada,” the company had no website or phone number of its own for some time, and until 2016, AIQ’s only client was Cambridge Analytica. It really looks as if AIQ is simply a Canadian shell under which operations could be said to be performed independent of CA and SCL.
Whatever the nature of the connection, it was convincing enough for Facebook to put them in the same bucket. The company said in a statement to the National Observer:
In light of recent reports that AggregateIQ may be affiliated with SCL and may, as a result, have improperly received (Facebook) user data, we have added them to the list of entities we have suspended from our platform while we investigate.
That will put a damper on SCL Canada’s work for a bit — it’s hard to do social media targeting work when you’re not allowed on the premises of the biggest social network of them all. Note that no specific wrongdoing on AIQ’s part is suggested — it’s enough that it may be affiliated with SCL and as such may have had access to the dirty data.
I’ve asked both companies for confirmation and will update this post if I hear back.
At a time when the models of traditional social networks are being questioned, it’s more important than ever to experiment with alternatives. Arbtr is a proposed social network that limits users to sharing a single thing at any given time, encouraging “ruthless self-editing” and avoiding “nasty things” like endless feeds filled with trivial garbage.
Now, I know what you’re thinking. “Why would I give money to maybe join a social network eventually that might not have any of my friends on it? That is, if it ever even exists?” Great question.
The answer is: how else do you think we’re going to replace Facebook? Someone with a smart, different idea has to come along and we have to support them. If we won’t spare the cost of a cup of coffee for a purpose like that, then we deserve the social networks we’ve got. (And if I’m honest, I’ve had very similar ideas over the last few years and I’m eager to see how they might play out in reality.)
The fundamental feature is, of course, the single-sharing thing. You can only show off one item at a time, and when you post a new one, the old one (and any discussion, likes, etc) will be deleted. There will be options to keep logs of these things, and maybe premium features to access them (or perhaps metrics), but the basic proposal is, I think, quite sound — at the very least, worth trying.
Some design ideas for the app. I like the text one but it does need thumbnails.
If you’re sharing less, as Arbtr insists you will, then presumably you’ll put more love behind those things you do share. Wouldn’t that be nice?
We’re in this mess because we bought wholesale the idea that the more you share, the more connected you are. Now that we’ve found that isn’t the case – and in fact we were in effect being fattened for a perpetual slaughter — I don’t see why we shouldn’t try something else.
Will it be Arbtr? I don’t know. Probably not, but we’ve got a lot to gain by giving ideas like this a shot.