Author: Devin Coldewey

Facebook simulates itself up a better, more gradual product launch

When you’re launching a new social media product, like an image-sharing app or niche network, common wisdom is to make it available to everyone as soon as it’s ready. But simulations carried out by Facebook — and let’s be honest, a few actual launches — suggest that may be a good way to kneecap your product from the start.

It’s far from a simple problem to simulate, but in the spirit of the “spherical cow in a vacuum” it’s easy enough to build a plausible model in which to test some basic hypotheses. In this case the researchers crafted a network of nodes into which a virtual “product” could be seeded; depending on conditions, a node would either spread the product to other nodes or “churn” permanently, meaning it deleted the app in disgust.

If you’re familiar with Conway’s Game of Life it’s broadly similar but not so elegant.

In the researchers’ simulation, the spread of the product is based more or less on a handful of assumptions:

  • User satisfaction is largely governed by whether their friends are on the app
  • Users start using the app at a low rate and use it either more or less based on their satisfaction
  • If a user is unsatisfied, they leave permanently

Based on these (and a whole lot of complex math) the researchers tried various scenarios in which different numbers and groupings of nodes were given access to the product at once.

It wouldn’t be unreasonable to guess that under these basic conditions, giving it to as many people as possible (not everyone, since that’s not realistic) would be the right move. But the model showed that this isn’t the case, and in fact creating a few concentrated clusters of nodes had the best results.

If you think about it, it becomes clear why: When you make it available to a large number of people, the next thing that happens is a large die-off of nodes that didn’t have enough friends at the start or whose friends weren’t active enough. This die-off limits the reach of other nearby nodes, which then die off as well, and although it doesn’t start an extinction-level event for the virtual app, it does permanently limit its reach due to the number of people who have churned.

On the other hand, if you seed a few clusters that are self-sufficient and keep usage high, then introduce it to others adjacent at a regular rate, you see steady growth, low churn, and a higher usage cap since far fewer people will have bounced off the product at the beginning.
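To make the dynamics above concrete, here is a minimal sketch of this kind of model in Python. Every number in it (graph size, satisfaction threshold, invite probability) is an invented illustration, not the paper’s actual math; it simply compares a broad random seeding against one concentrated cluster on the same small-world graph.

```python
import random

def ring_graph(n, k, rewire_p, rng):
    """Ring lattice: each node links to its k nearest neighbors on each side,
    with each link rewired to a random target with probability rewire_p
    (a rough Watts-Strogatz-style small world)."""
    nbrs = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            j = (i + d) % n
            if rng.random() < rewire_p:
                j = rng.randrange(n)
            if j != i:
                nbrs[i].add(j)
                nbrs[j].add(i)
    return nbrs

def simulate(nbrs, seeds, steps=30, satisfy=0.25, invite_p=0.3, seed=0):
    """Toy adoption model: unsatisfied users churn for good,
    satisfied users occasionally pull in inactive neighbors."""
    rng = random.Random(seed)
    active, churned = set(seeds), set()
    for _ in range(steps):
        # A user whose fraction of active friends is below the
        # threshold deletes the app permanently ("churn").
        for v in list(active):
            friends = nbrs[v]
            if len(friends & active) / max(len(friends), 1) < satisfy:
                active.discard(v)
                churned.add(v)
        # Satisfied users invite inactive, never-churned neighbors.
        for v in list(active):
            for u in nbrs[v]:
                if u not in active and u not in churned and rng.random() < invite_p:
                    active.add(u)
    return len(active), len(churned)

rng = random.Random(42)
graph = ring_graph(500, 4, 0.05, rng)

# Broad launch: 100 random users scattered across the network.
broad = simulate(graph, rng.sample(range(500), 100))
# Clustered launch: one contiguous block of 100 users.
clustered = simulate(graph, range(100))

print("broad seeding (active, churned):", broad)
print("clustered seeding (active, churned):", clustered)
```

Under parameters like these, the broad launch tends to trigger the early die-off described above, since a random seed’s neighbors are mostly inactive, while the contiguous cluster keeps its members satisfied and grows outward at its edges. The real study’s model is, of course, far more sophisticated.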

You can see how this would work in real life: get the app to a few small, active communities (socially active photographers, celebrities, or influencers and their networks) and then create adjacent nodes through invitations sent out by existing users.

Turns out, lots of apps already do this! But now it’s supported by science.

Will this affect the next big Facebook product rollout? Probably not. Chances are the people in charge have a few other factors that figure into these decisions. But research like this, simulating crowds and group decision-making, will surely only increase in accuracy and usage.

The study, by Facebook’s Shankar Iyer and Lada A. Adamic, will be presented at the International Conference on Complex Networks and their Applications.

Limiting social media use reduced loneliness and depression in new experiment

The idea that social media can be harmful to our mental and emotional well-being is not a new one, but little has been done by researchers to directly measure the effect; surveys and correlative studies are at best suggestive. A new experimental study out of the University of Pennsylvania, however, directly links more social media use to worse emotional states, and less use to better.

To be clear on the terminology here, a simple survey might ask people to self-report that using Instagram makes them feel bad. A correlative study would, for example, find that people who report more social media use are more likely to also experience depression. An experimental study compares the results from an experimental group with their behavior systematically modified, and a control group that’s allowed to do whatever they want.

This study, led by Melissa Hunt at the University of Pennsylvania’s psychology department, is the latter — which, despite intense interest in this field and phenomenon, is quite rare. The researchers identified only two other experimental studies, both of which addressed only Facebook use.

A total of 143 students from the school were monitored for three weeks after being assigned either to limit their social media use to about ten minutes per app (Facebook, Snapchat, and Instagram) per day or to continue using it as they normally would. They were monitored for a baseline before the experimental period and assessed weekly on a variety of standard tests for depression, social support, and so on. Social media usage was tracked via the iOS battery use screen, which shows per-app usage.

The results are clear. As the paper, published in the latest issue of the Journal of Social and Clinical Psychology, puts it:

The limited use group showed significant reductions in loneliness and depression over three weeks compared to the control group. Both groups showed significant decreases in anxiety and fear of missing out over baseline, suggesting a benefit of increased self-monitoring.

Our findings strongly suggest that limiting social media use to approximately 30 minutes per day may lead to significant improvement in well-being.

It’s not the final word on this, however. Some scores did not see improvement, such as self-esteem and social support. And later follow-ups to see whether feelings reverted or habit changes persisted were limited because most of the subjects couldn’t be compelled to return. (Psychology, often summarized as “the study of undergraduates,” relies on student volunteers who have no reason to take part except for course credit, and once that’s given, they’re out.)

That said, it’s a straightforward causal link between limiting social media use and improving some aspects of emotional and social health. The exact nature of the link, however, is something about which Hunt could only speculate:

Some of the existing literature on social media suggests there’s an enormous amount of social comparison that happens. When you look at other people’s lives, particularly on Instagram, it’s easy to conclude that everyone else’s life is cooler or better than yours.

When you’re not busy getting sucked into clickbait social media, you’re actually spending more time on things that are more likely to make you feel better about your life.

The researchers acknowledge the limited nature of their study and suggest numerous directions for colleagues in the field to take it from here. A more diverse population, for instance, or including more social media platforms. Longer experimental times and comprehensive follow-ups well after the experiment would help as well.

The 30-minute limit was chosen because it was conveniently measurable, but the team does not intend to say that it is by any means the “correct” amount. Perhaps half or twice as much time would yield similar or even better results, they suggest: “It may be that there is an optimal level of use (similar to a dose response curve) that could be determined.”

Until then, we can use common sense, Hunt suggested: “In general, I would say, put your phone down and be with the people in your life.”

Facebook removed 14 million pieces of terrorist content this year, and the numbers are rising

Facebook must exert constant vigilance to prevent its platform from being taken over by ne’er-do-wells, but how exactly it does that is only really known to itself. Today, however, the company has graced us with a bit of data on what tools it’s using and what results they’re getting — for instance, more than 14 million pieces of “terrorist content” removed this year so far.

More than half of that 14 million was old content posted before 2018, some of which had been sitting around for years. But as Facebook points out, that content may very well have also been unviewed that whole time. It’s hard to imagine a terrorist recruitment post going unreported for 970 days (the median age for content in Q1) if it was seeing any kind of traffic.

Perhaps more importantly, the amount of newer content removed (with, to Facebook’s credit, a quickly shrinking delay) appears to be growing steadily. In Q1, 1.2 million items were removed; in Q2, 2.2 million; in Q3, 2.3 million. User-reported content removals are growing as well, though they are much smaller in number — around 16,000 in Q3. Indeed, 99 percent of removals, Facebook proudly reports, happen “proactively.”

Something worth noting: Facebook is careful to avoid positive or additive verbs when talking about this content. For instance, it won’t say that “terrorists posted 2.3 million pieces of content,” but rather that this was the number of “takedowns” or items of content “surfaced.” This type of phrasing is more conservative and technically correct, since the company can really only be sure of its own actions, but it also serves to soften the fact that terrorists are posting hundreds of thousands of items monthly.

The numbers are hard to contextualize. Is this a lot or a little? Both, really. The amount of content posted to Facebook is so vast that almost any number looks small next to it, even a scary one like 14 million pieces of terrorist propaganda.

It is impressive, however, to hear that Facebook has greatly expanded the scope of its automated detection tools:

Our experiments to algorithmically identify violating text posts (what we refer to as “language understanding”) now work across 19 languages.

And it fixed a bug that was massively slowing down content removal:

In Q2 2018, the median time on platform for newly uploaded content surfaced with our standard tools was about 14 hours, a significant increase from Q1 2018, when the median time was less than 1 minute. The increase was prompted by multiple factors, including fixing a bug that prevented us from removing some content that violated our policies, and rolling out new detection and enforcement systems.

The Q3 number is two minutes. It’s a work in progress.

No doubt we all wish the company had applied this level of rigor somewhat earlier, but it’s good to know that the work is being done. Notable is that a great deal of this machinery is not focused on simply removing content, but on putting it in front of the constantly growing moderation team. So the most important bit is still, thankfully and heroically, done by people.

Facebook bans hundreds of clickbait farms for ‘coordinated inauthentic behavior’

Facebook has announced a relatively small but significant purge of bad actors from the platform: 810 pages and accounts that have “consistently broken our rules against spam and coordinated inauthentic behavior.” It may not seem like a lot, but it sounds like the company is erring on the side of disclosure even if the news isn’t particularly hard-hitting.

These were not, as far as Facebook could tell, part of an organized nation-state effort or political interference campaign, like the Iranian and Russian groups previously highlighted in these ban alert posts. These are pages that use networks of fake accounts and pages to drive traffic to clickbait articles strictly for the purpose of ad revenue.

810 can’t be much more than a drop in the bucket of fake accounts on Facebook — of which there are millions — but the company’s focus right now isn’t individual bad actors but coordinated ones.

A few hundred accounts working together to do a bit of ad fraud produce a sort of digital footprint that might look similar to a few hundred accounts working together to push a political narrative or sow discontent. And one can turn into the other quite easily.

There are patterns of logins, likes, visits, account creation, and so on that Facebook has been working hard to identify — recently, at least. Although they’ve designed their net to catch the nation-state actors and large-scale operations that have previously been uncovered, small fry like these spammers are getting tangled up as well. Not a bad thing.

“Given the activity we’ve seen — and its timing ahead of the US midterm elections — we wanted to give some details about the types of behavior that led to this action,” the company wrote on its blog.

No doubt they also want to give the impression that there is indeed a cop on the beat. Expect more announcements like this through the midterms as Facebook strives to make it clear that it is working round the clock to keep you, its valuable product users, safe.

Facebook can’t keep you safe

Another day, another announcement from Facebook that it has failed to protect your personal information. Were you one of the 50 million (and likely far more, given the company’s graduated disclosure style) users whose accounts were completely exposed by a coding error in play for more than a year? If not, don’t worry — you’ll get your turn being failed by Facebook. It’s incapable of keeping its users safe.

Facebook has proven over and over again that it prioritizes its own product agenda over the safety and privacy of its users. And even if it didn’t, the nature and scale of its operations make it nearly impossible to avoid major data breaches that expose highly personal data.

For one thing, the network has grown so large that its surface area is impossible to secure completely. That was certainly demonstrated Friday when it turned out that a feature rollout had let hackers essentially log in as millions of users and do who knows what. For more than a year.

This breach wasn’t a worst case scenario exactly, but it was close. To Facebook it would not have appeared that an account was behaving oddly — the hacker’s activity would have looked exactly like normal user activity. You wouldn’t have been notified via two-factor authentication, since it would be piggybacking on an existing login. Install some apps? Change some security settings? Export your personal data? All things a hacker could have done, and may very well have.

This happened because Facebook is so big and complicated that even the best software engineers in the world, many of whom do in fact work there, could not reasonably design and code well enough to avoid unforeseen consequences like the bugs in question.

I realize that sounds a bit hand-wavy, and I don’t mean simply that “tech is hard.” I mean that, realistically speaking, Facebook has too many moving parts for the mere humans who run it to manage infallibly. It’s a testament to their expertise that so few breaches have occurred; the big ones, like Cambridge Analytica, were failures of judgment, not code.

A failure is not just inevitable but highly incentivized in the hacking community. Facebook is by far the largest and most valuable collection of personal data in history. That makes it a natural target, and while it is far from an easy mark, these aren’t script kiddies trying to find sloppy scripts in their free time.

Facebook itself said that the bugs discovered Friday weren’t simple; it was a coordinated, sophisticated process to piece them together and produce the vulnerability. The people who did this were experts, and it seems likely that they have reaped enormous rewards for their work.

The consequences of failure are also huge. All your eggs are in the same basket. A single problem like this one could expose all the data you put on the platform, and potentially everything your friends make visible to you as well. Not only that, but even a tiny error, a highly specific combination of minor flaws in the code, will affect astronomical numbers of people.

Of course, a bit of social engineering or a badly configured website elsewhere could get someone your login and password as well. This wouldn’t be Facebook’s error, exactly, but it is a simple fact that because of the way Facebook has been designed — a centralized repository of all the personal data it can coax out of its users — a minor error could result in a total loss of privacy.

I’m not saying other social platforms could do much better. I’m saying this is just another situation in which Facebook has no way to keep you safe.

And if your data doesn’t get taken, Facebook will find a way to give it away. Because it’s the only thing of value that they have; the only thing anyone will pay for.

The Cambridge Analytica scandal, while it was the most visible, was only one of probably hundreds of operations that leveraged lax access controls into enormous datasets scraped with Facebook’s implicit permission. It was Facebook’s job to keep that data safe, and it gave the data to anyone who asked.

It’s worth noting here that not only does it take just one failure along the line to expose all your data, but failures beyond the first are in a way redundant. All that personal information you’ve put online can’t be magically sucked back in. In a situation where, for example, your credit card has been skimmed and duplicated, the risk of abuse is real, but it ends as soon as you get a new card. For personal data, once it’s out there, that’s it. Your privacy is irreversibly damaged. Facebook can’t change that.

Well, that’s not exactly right. It could, for example, sandbox all data older than three months and require verification to access it. That would limit breach damage considerably. It could also limit its advertising profiles to data from that period, so it isn’t building a sort of shadow profile of you based on analysis of years of data. It could even opt not to read everything you write and instead let you self-report categories for advertising. That would solve a lot of privacy issues right there. It won’t, though. No money in that.

One more thing Facebook can’t protect you from is the content on Facebook itself. The spam, bots, hate, echo chambers — all of that is baked in. The 20,000-strong moderation team they’ve put on the task is almost certainly inadequate, and of course the complexity of the global stage, with all its cultures and laws, ensures that there will always be conflict and unhappiness on this subject. At the very best, it can remove the worst of it after it’s already been posted or streamed.

Again, it’s not really Facebook’s fault exactly that there are people abusing its platform. People are the worst, after all. But Facebook can’t save you from them. It can’t prevent the new category of harm that it has created.

What can you do about it? Nothing. It’s out of your hands. Even if you were to quit Facebook right now, your personal data may already have been leaked and no amount of quitting will stop it from propagating online forever. If it hasn’t already, it’s probably just a matter of time. There’s nothing you, or Facebook, can do about it. The sooner we, and Facebook, accept this as the new normal, the sooner we can get to work taking real measures towards our security and privacy.

Skype finally adds call recording

Skype is the communication tool of choice (and necessity) for millions, but it has always lacked a basic feature that no doubt many of those millions have requested: call recording. Well, Microsoft finally heard our cries, and recording is now built into Skype on both desktop and mobile.

Inexplicably, the ability is available in the latest version of the app on every platform except Windows 10. Apparently it’ll be added in a few weeks.

Recording is pretty simple to activate. Once you’re in a call, just hit the plus sign on the lower right and then select “Start recording.”

The others on the call will see a little banner announcing the call is being recorded, “so there are no surprises.” But Microsoft is clearly leery of consent laws and reminds you via that same banner to verbally inform your interlocutors that you’re recording it.

When the call is finished, the recording — video and audio — is stored online as an MP4 for up to 30 days, during which time you and anyone who was on the call can save it locally or share a link to it.

It doesn’t seem like there’s a way to record only audio, which is a bit annoying. A call with three people on video can get big fast. Hopefully they’ll address that in an update.

People have used third-party apps for years to record their Skype conversations; I’ve been using MP3 Skype Recorder, and it’s been pretty solid. I’m afraid that it might not survive the duplication of its key feature — recording, obviously — inside the app on which it piggybacks. But because, among other things, I’m paranoid, I’ll probably keep it installed as a backup. I’ve asked the creator what he thinks of Skype’s latest feature and what it means for apps like his.

In the meantime everyone except Windows 10 users should start Skyping like never before and recording everything to do a bit of system stressing for Microsoft. It’s what they’d want.

Political anonymity may help us see both sides of a divisive issue online

Some topics are so politically charged that even to attempt a discussion online is to invite toxicity and rigid disagreement among participants. But a new study finds that exposure to the views of others, minus their political affiliation, could help us overcome our own biases.

Researchers from the University of Pennsylvania, led by sociologist Damon Centola, examined how people’s interpretations of some commonly misunderstood climate change data changed after seeing those of people in opposing political parties.

The theory is that by exposing people to information sans partisan affiliation, we might be able to break the “motivated reasoning” that leads us to interpret data in a preconceived way.

The data in this case came from a NASA study indicating that sea ice levels will decrease, but the chart is frequently misinterpreted as suggesting the opposite. The misunderstanding isn’t entirely partisan in nature: 40 percent of self-identified Republicans and 26 percent of Democrats polled in the study adopted the mistaken view.

The NASA graph used in the study. As you can see it’s not crazy to think that the sea ice levels would increase, though it is incorrect.

Thousands of people from both parties, recruited via Mechanical Turk, were asked to indicate whether sea ice levels were rising or falling, and by how much. After their initial guess, they were shown how others had answered and allowed to adjust their own answer. Some were shown their peers’ answers along with those peers’ political affiliations, and some were shown the answers without any affiliation.

When political party was not attached to the answers, there was a considerable effect on people’s accuracy. Republicans jumped from about 65 percent getting it right to around 90 percent, and Democrats went from 75 to 85 percent. When party was shown, improvements were much smaller, and when people were only exposed to answers from their own party, there was practically no improvement at all.

Obviously this isn’t going to fix the problem of viral misinformation or the near-constant flame wars raging across every major online service. But it’s amazing that doing something as simple as stripping the political context from communications may lead to those communications being taken more seriously.

Perhaps something along these lines could help put the brakes on runaway articles: showing highly cited views from people with no indication of their political beliefs. Will you be so quick to dismiss or accept someone’s argument if you can’t be sure of their agenda? At worst it may force people to take a second and evaluate those ideas on their merits, and that’s hardly a bad thing.

The study was published today in the Proceedings of the National Academy of Sciences.

What is this weird Twitter army of Amazon drones cheerfully defending warehouse work?

Here is a strange little online community to puzzle at. Amazon has developed an unnerving, Stepford-like presence on Twitter in the form of several accounts of definitely real on-the-floor workers who regurgitate talking points and assure the world that all is right in the company’s infamously punishing warehouse jobs.

After Flamboyant Shoes Guy called out the phenomenon, I found 15 accounts (please don’t abuse them — they get enough of that already). All with “Amazon smiles” as their backgrounds and several with animals as profile pictures. All have the same bio structure: “(Job titles) @(warehouse shorthand location). (Duration) Amazonian. (2- or 3-item list of things they like.)” All have “FC Ambassador” in their name. All have links to an Amazon warehouse tour service.

And all ceaselessly communicate upbeat messages about how great it is to work at an Amazon warehouse, assuring everyone that they are not being forced to do this. The messages all seem cut from the same cloth, frequently following the same exact patterns.

The workers say that they don’t receive compensation for being ambassadors; it’s a “totally optional role” they have taken on voluntarily. They also claim to be warehouse employees in the ordinary sense. If so, they’re putting their numbers at risk by taking the time out to bang out long tweets hourly on how great they’re doing.

Their most frequent topics of conversation are how they get bathroom breaks, the pleasant temperature of the warehouses, the excellent benefits and suitable wages, friendly management and how the job isn’t monotonous or tiring at all. FC Ambassador Carol, for example, is downright elated to be a picker, and is clearly a Bezos admirer.

You can practically hear the smile on her face.

I have a friend who worked as a picker for a while, admittedly some years back. He said it was some of the most mind-numbing yet physically demanding work he’s ever done. I understand that some folks may just be happy to have a job with full pay and benefits — I’d never begrudge anyone that, I’ve sure felt that — but the unanimous and highly specific positivity on display in these ambassador accounts really seems like something else.

It’s no secret, after all, that Amazon has an image problem when it comes to labor. Reports have for years described grueling labor at these “fulfillment centers,” where footsore workers must meet ever-increasing daily goals, their time rigidly structured and room for advancement cramped. Just recently Gizmodo’s Brian Menegus has had a couple of great stories on current — not past — labor conditions at the company, and of course there have been dozens of such stories detailing exploitation or generally poor conditions over the last few years. And not just here in the U.S., either.

Certainly Amazon may have improved those conditions. And certainly they would want to get the message out. But these accounts are equally certainly not the grassroots advocacy they seem to be. (There’s already a parody account, naturally, or perhaps one of the ambassadors slipped the leash.)

I’ve asked Amazon for more details on what this program really consists of, and how it comes to pass that warehouse workers — supposedly unpaid for the role — are monitoring Twitter, regularly rebutting critics with clearly canned stats and the kind of forced humor one would imagine they would indulge in if their overalls hid a shock collar. I’ll update this post if I hear back.

Facebook bans first app since Cambridge Analytica, myPersonality, and suspends hundreds more

Facebook announced today that it had banned the app myPersonality for improper data controls and suspended hundreds more. So far this is only the second app to be banned as a result of the company’s large-scale audit begun in March. But as myPersonality hasn’t been active since 2012, and was to all appearances a legitimate academic operation, it’s a bit of a mystery why the company bothered.

The total number of app suspensions has reached 400, twice the number we last heard Facebook announce publicly. Suspensions aren’t listed publicly, however, and apps may be suspended and reinstated without any user notification. The only other app to be banned via this process is Cambridge Analytica.

myPersonality was created by researchers at the Cambridge Psychometrics Centre (no relation to Cambridge Analytica — this is an actual academic institution) to source data from Facebook users via personality quizzes. It operated from 2007 to 2012, quite successfully, gathering data on some four million users (directly, not via friends).

The dataset was used for the Centre’s own studies and other academics could request access to it via an online form; applications were vetted by CPC staff and had to be approved by the petitioner’s university’s ethics committee.

It transpired in May that a more or less complete set of the project’s data was available for anyone to download from GitHub, put there by some misguided scholar who had received access and decided to post it where their students could access it more easily.

Facebook suspended the app around then, saying “we believe that it may have violated Facebook’s policies.” That suspension has graduated into a ban, because the creators “fail[ed] to agree to our request to audit and because it’s clear that they shared information with researchers as well as companies with only limited protections in place.”

This is, of course, a pot-meet-kettle situation, as well as something of a self-indictment. I contacted David Stillwell, one of the app’s creators and currently deputy director of the CPC, having previously heard from him and collaborator Michel Kosinski about the dataset and Facebook’s sudden animosity.

“Facebook has long been aware of the application’s use of data for research,” Stillwell said in a statement. “In 2009 Facebook certified the app as compliant with their terms by making it one of their first ‘verified applications.’ In 2011 Facebook invited me to a meeting in Silicon Valley (and paid my travel expenses) for a workshop organised by Facebook precisely because it wanted more academics to use its data, and in 2015 Facebook invited Dr Kosinski to present our research at their headquarters.”

During that time, Kosinski and Stillwell both told me, dozens of universities had published more than a hundred social science research papers in total using the data. No one at Facebook or elsewhere seems to have raised any issues with how the data was stored or distributed during all that time.

“It is therefore odd that Facebook should suddenly now profess itself to have been unaware of the myPersonality research and to believe that the data may have been misused,” Stillwell said.

Examples of datasets available via the myPersonality project.

A Facebook representative told me they were concerned that the vetting process for getting access to the dataset was too loose, and furthermore that the data was not adequately anonymized.

But Facebook would, ostensibly, have approved these processes during the repeated verifications of myPersonality’s data. Why would it suddenly decide in 2018, when the app had been inactive for years, that it had been in violation all that time? The most obvious answer would be that its auditors never looked very closely in the first place, despite a cozy relationship with the researchers.

“When the app was suspended three months ago I asked Facebook to explain which of their terms was broken but so far they have been unable to cite any instances,” said Stillwell.

Ironically, Facebook’s accusation that myPersonality failed to secure user data correctly is exactly what the company itself appears to be guilty of, and at a far greater scale. Just as CPC could not control what a researcher did with the data (for example, mistakenly post it publicly) once they had been approved by multiple other academics, Facebook could not control what companies like Cambridge Analytica did with data once it had been siphoned out under the respectable guise of research purposes. (Notably, it is projects like myPersonality that seem to have made that guise respectable to begin with.)

Perhaps Facebook’s standards have changed and what was okay by them in 2012 — and, apparently, in 2015 — is not acceptable now. Good — users want stronger protections. But this banning of an app inactive for years and used successfully by real academics for actual research purposes has an air of theatricality. It helps no one and will change nothing about myPersonality itself, which Stillwell and others stopped maintaining years ago, or the dataset it created, which may very well still be analyzed for new insights by some enterprising social science grad student.

Facebook has mobilized a full-time barn door closing operation years after the horses bolted, as evidenced by today’s ban. So when you and the other four million people get a notification that Facebook is protecting your privacy by banning an app you used a decade ago, take it with a grain of salt.

Facebook and Twitter remove hundreds of accounts linked to Iranian and Russian political meddling

Facebook has removed hundreds of accounts and pages for what it calls “coordinated inauthentic behavior,” generally networks of ostensibly independent outlets that were in fact controlled centrally by Russia and Iran. Some of these accounts were identified as much as a year ago.

In a post by the company’s head of cybersecurity policy, Nathaniel Gleicher, the company described three major operations that it had monitored and eventually rolled up with the help of security firm FireEye. The latter provided its own initial analysis, with more to come.

Notably, few or none of these were focused on manipulating the 2018 midterm elections here in the States, but rather had a variety of topics and apparent goals. The common theme is certainly attempting to sway political opinion — just not in Ohio.

For instance, a page may purport to be an organization trying to raise awareness about violence perpetrated by immigrants, but is in fact operated by a larger shadowy group attempting to steer public opinion on the topic. The networks seem to originate in Iran, and were promoting narratives including “anti-Saudi, anti-Israeli, and pro-Palestinian themes, as well as support for specific U.S. policies favorable to Iran,” as FireEye describes them.

The first network Facebook describes, “Liberty Front Press,” comprised 74 pages, 70 accounts, and 3 groups on Facebook, and 76 accounts on Instagram. Some 155,000 people followed at least one piece of the Facebook network and they had 48,000 Instagram followers. They were generally promoting political views in the Middle East and only recently expanded to the States; they spent $6,000 on ads beginning in January 2015 up until this month.

A related network to this one also engaged in cyberattacks and hacking attempts. Its 12 pages and 66 accounts, plus 9 on Instagram, were posing as news organizations.

A third network had accounts going back to 2011; it was sharing content in the Middle East as well, about local, U.S., and U.K. political issues. With 168 pages and 140 Facebook accounts and 31 Instagram accounts, this was a big one. As you’ll recall, the big takedown of Russia’s IRA accounts only amounted to 135. (The full operation was of course much larger than that.)

This network had 813,000 accounts following it on Facebook and 10,000 on Instagram, and had also spent about $6,000 on ads between 2012 and April of this year. Notably, that means Facebook was taking ad dollars from a network it was investigating for “coordinated inauthentic behavior.” I’ve asked Facebook to explain this — perhaps it held off on a ban so as not to tip off the network that it was under investigation.

Interestingly this network also hosted 25 events, meaning it was not just a bunch of people in dark rooms posting under multiple pseudonyms and fake accounts. People attended real-life events for these pages, suggesting the accounts supported real communities despite being sockpuppets for some other organization.

Twitter, almost immediately after Facebook’s post, announced that it had banned 284 accounts for “coordinated manipulation” originating in Iran.

The Iranian networks were not necessarily alleged to be the product of state-backed operations, but of course the implication is there and not at all unreasonable. But Facebook also announced that it was removing pages and accounts “linked to sources the U.S. government has previously identified as Russian military intelligence services.”

Facebook did not go into detail about the number and nature of these accounts, except to say that their activity was focused more on Syrian and Ukrainian political issues. “To date, we have not found activity by the accounts targeting the U.S.,” the post reads. But at least the origin is relatively clear: Russian state actors.

This should be a warning that it isn’t just the U.S. that is the target of coordinated disinformation campaigns online — wherever one country has something to gain by promoting a certain viewpoint or narrative, you will find propaganda and other efforts underway via whatever platforms are available.

Senator Mark Warner (D-VA) issued a brief I-told-you-so following the news.

“I’ve been saying for months that there’s no way the problem of social media manipulation is limited to a single troll farm in St. Petersburg, and that fact is now beyond a doubt,” he said in a statement. “We also learned today that the Iranians are now following the Kremlin’s playbook from 2016. While I’m encouraged to see Facebook taking steps to rid their platforms of these bad actors, there’s clearly more work to be done.”

He said he plans to bring this up at the Senate Intelligence Committee’s grilling of Facebook, Twitter, and Google leadership on September 5th.