Author: Zack Whittaker

Facebook bug let websites read ‘likes’ and interests from a user’s profile

Facebook has fixed a bug that let any website pull information from a user’s profile — including their ‘likes’ and interests — without that user’s knowledge.

Those are the findings of Ron Masas, a security researcher at Imperva, who discovered that Facebook search results weren’t properly protected from cross-site request forgery (CSRF) attacks. In other words, a website could quietly siphon off certain bits of data from your logged-in Facebook profile in another tab.

Masas demonstrated how a website acting in bad faith could embed an IFRAME — used to nest a webpage within a webpage — to silently collect profile information.

“This allowed information to cross over domains — essentially meaning that if a user visits a particular website, an attacker can open Facebook and can collect information about the user and their friends,” said Masas.

The malicious website could open several Facebook search queries in a new tab and run queries that return “yes” or “no” answers, such as whether a Facebook user likes a particular page. Masas said the search queries could also return more complex results: all of a user’s friends with a particular name, a user’s posts containing certain keywords, and even personal demographics, such as all of a person’s friends of a certain religion in a named city.
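The frame-counting trick behind the bug can be sketched in a few lines. What follows is an illustrative reconstruction, not Masas’s actual proof-of-concept; the search URL shape and the iframe baseline are assumptions made for the example.

```typescript
// Illustrative sketch of a cross-site search leak (an "XS-Leak").
// An attacker page opens a Facebook search in a window it controls.
// The same-origin policy blocks reading the page's content, but the
// number of nested frames (window.frames.length) was readable across
// origins, so the attacker infers "results" vs. "no results" from it.

// Hypothetical search endpoint shape, for illustration only.
function searchUrl(query: string): string {
  return (
    "https://www.facebook.com/search/str/" +
    encodeURIComponent(query) +
    "/stories-keyword"
  );
}

// The inference step: pages with results embed more iframes than empty
// ones. The baseline of 0 is an assumption, not Facebook's real markup.
function hasResults(frameCount: number, emptyBaseline = 0): boolean {
  return frameCount > emptyBaseline;
}

// In a real browser, the driver would look roughly like:
//   const win = window.open(searchUrl("pages liked named X"), "_blank");
//   setTimeout(() => record(hasResults(win!.frames.length)), 3000);
// Each yes/no query extracts one bit about the logged-in victim.
```

Each query leaks a single boolean, which is why the technique suits targeted questions (“does this user have a friend of religion X in city Y?”) rather than bulk extraction.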

“The vulnerability exposed the user and their friends’ interests, even if their privacy settings were set so that interests were only visible to the user’s friends,” he said.

A snippet from a proof-of-concept built by Masas to show him exploiting the bug. (Image: Imperva/supplied)

In fairness, it’s not a problem unique to Facebook, nor is it particularly covert. But given the kind of data available, Masas said it would be “attractive” to ad companies.

Imperva privately disclosed the bug in May. Facebook fixed the bug days later by adding CSRF protections and paid out $8,000 in two separate bug bounties.
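The CSRF protections Facebook added belong to a well-understood class of defense. As a rough sketch of how a synchronizer-style CSRF token works in general (this is not Facebook’s implementation; the function names and session-binding scheme are invented for illustration):

```typescript
import { createHmac, randomBytes, timingSafeEqual } from "crypto";

// Minimal sketch of a synchronizer-token CSRF defense. Hypothetical
// names and scheme; illustrative of the general class of fix only.

const SECRET = randomBytes(32); // per-deployment signing key

// Issue a token cryptographically bound to the user's session ID.
function issueToken(sessionId: string): string {
  return createHmac("sha256", SECRET).update(sessionId).digest("hex");
}

// Validate the token a request carries against the session it claims.
// A cross-site attacker can make the victim's browser send cookies,
// but cannot read or forge this token, so forged requests fail here.
function validateToken(sessionId: string, token: string): boolean {
  const expected = Buffer.from(issueToken(sessionId), "hex");
  const given = Buffer.from(token, "hex");
  return given.length === expected.length && timingSafeEqual(given, expected);
}
```

Because the token is derived from a server-side secret and tied to the session, a malicious site can trigger authenticated requests from the victim’s browser, but it cannot supply the token those requests must carry.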

Facebook told TechCrunch that the company hasn’t seen any abuse.

“We appreciate this researcher’s report to our bug bounty program,” said Facebook spokesperson Margarita Zolotova in a statement. “As the underlying behavior is not specific to Facebook, we’ve made recommendations to browser makers and relevant web standards groups to encourage them to take steps to prevent this type of issue from occurring in other web applications.”

It’s the latest in a string of data exposures and bugs that have put Facebook user data at risk after the Cambridge Analytica scandal this year, which saw a political data firm vacuum up profiles on 87 million users to use for election profiling — including users’ likes and interests.

Months later, the social media giant admitted that millions of user account tokens had been stolen by hackers who exploited a chain of bugs.

Hours before U.S. election day, Facebook pulls dozens of accounts for ‘coordinated inauthentic behavior’

Facebook has pulled the plug on 30 Facebook accounts and 85 Instagram accounts that the company says were engaged in “coordinated inauthentic behavior.”

Facebook’s head of cybersecurity policy Nathaniel Gleicher revealed the latest batch of findings in a late-night blog post Monday.

“On Sunday evening, U.S. law enforcement contacted us about online activity that they recently discovered and which they believe may be linked to foreign entities,” said Gleicher, without naming the law enforcement agency. “We immediately blocked these accounts and are now investigating them in more detail.”

The company didn’t have much more to share, only that the Facebook Pages associated with the accounts “appear to be in the French or Russian languages, while the Instagram accounts seem to have mostly been in English — some were focused on celebrities, others political debate,” he said.

In his post, Gleicher conceded that the company typically “would be further along with our analysis before announcing anything publicly,” but pledged to post more once the company digs in — including whether the accounts are linked to the earlier account takedowns tied to Iran.

When reached, a Facebook spokesperson did not comment further.

It’s the latest batch of account takedowns in recent weeks ahead of the U.S. midterm elections — later on Tuesday — when millions of Americans will go to the polls to vote for new congressional lawmakers and state governors. The election is largely seen as a barometer for the health of the Trump administration, two years after the president was elected amid a concerted effort by Russian intelligence to spread disinformation and discord about his Democratic opponent.

Earlier on Monday, a new report from Columbia University’s Tow Center for Digital Journalism found that election interference remains a major problem for the platform, despite repeated promises from high-level executives that the company is doing what it can to fight false news and misinformation.

Twitter removes thousands of accounts that tried to dissuade Democrats from voting

Twitter has deleted thousands of automated accounts posting messages that tried to dissuade voters from casting their ballots in next week’s election.

Some 10,000 accounts were removed across late September and early October after they were first flagged by staff at the Democratic Party, the company has confirmed.

“We removed a series of accounts for engaging in attempts to share disinformation in an automated fashion – a violation of our policies,” said a Twitter spokesperson in an email to TechCrunch. “We stopped this quickly and at its source.” But the company did not provide examples of the kinds of accounts it removed, or say who or what might have been behind the activity.

The accounts posed as Democrats and tried to convince key demographics to stay at home and not vote, likely in an attempt to sway the results in key election battlegrounds, according to Reuters, which first reported the news.

A spokesperson for the Democratic National Committee did not return a request for comment outside its business hours.

The removals are a drop in the ocean compared to the wider threats Twitter faces. Earlier this year, the social networking giant deleted 1.2 million accounts for sharing and promoting terrorist content. In May alone, the company deleted just shy of 10 million accounts each week for sending malicious, automated messages.

Twitter had 335 million monthly active users as of its latest earnings report in July.

But the company has faced criticism from lawmakers for not doing more to proactively remove content that violates its rules or spreads disinformation and false news. With just days before Americans are set to vote in the U.S. midterms, this latest batch of takedowns is likely to spark further concern that Twitter did not automatically detect the malicious accounts.

Following the publication of Reuters’ report, Yoel Roth, Twitter’s head of site integrity, said in a tweet thread that public research identifying bots is often “deeply flawed” and that many are identifying bots “based on probability, not certainty,” since “nobody other than Twitter can see non-public, internal account data.”

Twitter does not have a strict policy on the spread of disinformation in the run-up to election season, unlike Facebook, which recently banned content that tried to suppress voters with false and misleading information. Instead, Twitter said last year that its “open and real-time nature” is a “powerful antidote to the spreading of all types of false information.” But researchers have been critical of that approach. Research published last month found that more than 700,000 accounts that were active during the 2016 presidential election are still active to this day — pushing a million tweets each day.

A Twitter spokesperson added that for the election this year, the company has “established open lines of communication and direct, easy escalation paths for state election officials, Homeland Security, and campaign organizations from both major parties to help us enforce our policies vigorously and protect conversational health on our service.”

Facebook takes down more disinformation activity linked to Iran

Facebook has removed 82 pages, groups and accounts for “coordinated inauthentic behavior” that originated out of Iran.

The social networking giant discovered the “inauthentic behavior” late last week, according to a blog post by the company’s cybersecurity policy chief Nathaniel Gleicher. He said the operation relied on posing as U.S. and U.K. citizens, and “posted about politically charged topics such as race relations, opposition to the President, and immigration.” The company said that although its investigation is in its early stages, it traced the activity back to Iran but does not yet know who is responsible.

Facebook said that a little over one million accounts followed at least one of the pages run by the Iranian actors. The takedown also included 16 accounts on Instagram.

The company shared its findings with the FBI prior to the takedowns, Gleicher added on a call.

It’s the latest batch of account and content takedowns in recent months. Facebook took down hundreds of accounts and pages in August with help from security firm FireEye, which found a widespread Iranian influencing operation on the social media platform. Although Facebook’s previous takedowns focused on accounts spreading disinformation aimed at elections, the Iranian-backed campaign targeted a scattering of issues. FireEye said in its analysis that the various narratives employed by the Iranians include “anti-Saudi, anti-Israeli, and pro-Palestinian themes, as well as support for specific U.S. policies favorable to Iran, such as the U.S.-Iran nuclear deal.”

Tech titans like Facebook have faced increasing pressure from lawmakers to better police their platforms from disinformation and the spread of false news from state-backed actors in the wake of the 2016 presidential election.

Although much of the focus has been on activity linked to trolls working for the Russian government, which used disinformation-spreading tactics to try to influence the outcome of the election, Iran has emerged as a separate powerhouse in spreading disinformation on the platform.

More soon…

Twitter now puts live broadcasts at the top of your timeline

Twitter will now put live streams and broadcasts started by accounts you follow at the top of your timeline, making it easier to see what they’re doing in real time.

In a tweet, Twitter said that the new feature will include breaking news, personalities and sports.

The social networking giant included the new feature in its iOS and Android apps, updated this week. Among the updates, Twitter said it now also supports audio-only live broadcasts, both in its apps and through its sister broadcast service, Periscope.

Last month, Twitter discontinued its app for iOS 9 and earlier versions, which according to Apple’s own data still account for some 5 percent of all iPhone and iPad users.

Justice Dept. says social media giants may be ‘intentionally stifling’ free speech

The Justice Department has confirmed that Attorney General Jeff Sessions has expressed a “growing concern” that social media giants may be “hurting competition” and “intentionally stifling” free speech and expression.

The comments come as Facebook chief operating officer Sheryl Sandberg and Twitter chief executive Jack Dorsey gave testimony to the Senate Intelligence Committee on Wednesday, as lawmakers investigate foreign influence campaigns on their platforms.

Social media companies have been under the spotlight in recent years after threat actors, believed to be working closely with the Russian and Iranian governments, used disinformation-spreading tactics to try to influence the outcome of the 2016 presidential election.

“The Attorney General has convened a meeting with a number of state attorneys general this month to discuss a growing concern that these companies may be hurting competition and intentionally stifling the free exchange of ideas on their platforms,” said Justice Department spokesman Devin O’Malley in an email.

It’s not exactly clear if the Justice Department is pushing for regulation or actively investigating the platforms for issues relating to competition — or antitrust. Social media companies aren’t bound by US free speech laws like the First Amendment, which restricts the government rather than private companies, but have long said they support free speech and expression across their platforms, including for users in parts of the world where freedom of speech is more restricted.

When reached, Facebook did not immediately have comment. Twitter declined to comment.

Facebook, Twitter: US intelligence could help us more in fighting election interference

Facebook’s chief operating officer Sheryl Sandberg has admitted that the social networking giant could have done more to prevent foreign interference on its platforms, but said that the government also needs to step up its intelligence sharing efforts.

The remarks come ahead of an open hearing at the Senate Intelligence Committee on Wednesday, where Sandberg and Twitter chief executive Jack Dorsey will testify on foreign interference and election meddling on social media platforms. Google’s Larry Page was invited, but declined to attend.

“We were too slow to spot this and too slow to act,” said Sandberg in prepared remarks. “That’s on us.”

The hearing comes in the aftermath of Russian interference in the 2016 presidential election. Social media companies have been increasingly under the spotlight after foreign actors, believed to be working for or closely with the Russian government, used disinformation-spreading tactics to try to influence the outcome of that election, as well as in the run-up to the midterm elections later this year.

Both Facebook and Twitter have removed accounts and bots from their sites believed to be involved in spreading disinformation and false news. Google said last year that it found Russian meddling efforts on its platforms.

“We’re getting better at finding and combating our adversaries, from financially motivated troll farms to sophisticated military intelligence operations,” said Sandberg.

But Facebook’s second-in-command also said that the US government could do more to help companies understand the wider picture from Russian interference.

“We continue to monitor our service for abuse and share information with law enforcement and others in our industry about these threats,” she said. “Our understanding of overall Russian activity in 2016 is limited because we do not have access to the information or investigative tools that the U.S. government and this Committee have.”

Later, Twitter’s Dorsey also said in his own statement: “The threat we face requires extensive partnership and collaboration with our government partners and industry peers,” adding: “We each possess information the other does not have, and the combined information is more powerful in combating these threats.”

Both Sandberg and Dorsey are subtly referring to classified information that the government has but private companies don’t get to see — information that is considered a state secret.

Tech companies have in recent years pushed for more access to knowledge that federal agencies have, not least to help protect against increasing cybersecurity threats and hostile nation state actors. The theory goes that sharing intelligence can help companies defend against the best-resourced hackers. But efforts to introduce legislation have proven controversial, because critics argue that in sharing threat information with the government, private user data would also be collected and sent to US intelligence agencies for further investigation.

Instead, tech companies are now pushing for information from Homeland Security to better understand the threats they face — to independently fend off future attacks.

As reported, tech companies last month met in secret to discuss preparations to counter foreign manipulation on their platforms. But attendees, including Facebook, Twitter, Google and Microsoft, are said to have “left the meeting discouraged” that they received little insight from the government.

Google, Facebook, Twitter chiefs called back to Senate Intelligence Committee

Twitter chief executive Jack Dorsey and Facebook chief operating officer Sheryl Sandberg will testify in an open hearing at the Senate Intelligence Committee next week, the committee’s chairman has confirmed.

Larry Page, chief executive of Google parent company Alphabet, was also invited but has not confirmed his attendance, a committee spokesperson confirmed to TechCrunch.

Sen. Richard Burr (R-NC) said in a release that the social media giants will be asked about their responses to foreign influence operations on their platforms in an open hearing on September 5.

It will be the second time the Senate Intelligence Committee, which oversees the government’s intelligence and surveillance efforts, will have called the companies to testify. But it will be the first time that senior leadership will attend — though Facebook chief executive Mark Zuckerberg did attend a House Energy and Commerce Committee hearing in April.

It comes in the wake of Twitter and Facebook recently announcing the suspension of accounts from their platforms that they believe to be linked to Iranian and Russian political meddling. Social media companies have been increasingly under the spotlight in recent years following Russian efforts to influence the 2016 presidential election with disinformation.

A Twitter spokesperson said the company didn’t yet have details to share on the committee’s prospective questions. TechCrunch also reached out to Google and Facebook for comment and will update when we hear back.