Author: Zack Whittaker

Facebook now says its password leak affected ‘millions’ of Instagram users

Facebook has confirmed its password-related security incident last month now affects “millions” of Instagram users, not “tens of thousands” as first thought.

The social media giant confirmed the new information in its updated blog post, first published on March 21.

“We discovered additional logs of Instagram passwords being stored in a readable format,” the company said. “We now estimate that this issue impacted millions of Instagram users. We will be notifying these users as we did the others.”

“Our investigation has determined that these stored passwords were not internally abused or improperly accessed,” the updated post said, but the company still has not said how it made that determination.

The social media giant did not say how many millions were affected, however.

Last month, Facebook admitted it had inadvertently stored “hundreds of millions” of user account passwords in plaintext for years, in some cases dating as far back as 2012. The company said the unencrypted passwords were stored in logs accessible to some 2,000 engineers and developers. The data was not leaked outside of the company, however. Facebook still hasn’t explained how the bug occurred.

Facebook posted the update at 10am ET — an hour before the Special Counsel’s report into Russian election interference was set to be published.

When reached, spokesperson Liz Bourgeois said Facebook does not have “a precise number” yet to share, and declined to say exactly when the additional discovery was made.

Facebook admits it stored ‘hundreds of millions’ of account passwords in plaintext

Flip the “days since last Facebook security incident” counter back to zero.

Facebook confirmed Thursday in a blog post, prompted by a report by cybersecurity reporter Brian Krebs, that it stored “hundreds of millions” of account passwords in plaintext for years.

The discovery was made in January, said Facebook’s Pedro Canahuati, as part of a routine security review. None of the passwords were visible to anyone outside Facebook, he said. Facebook admitted the security lapse months later, after Krebs said logs were accessible to some 2,000 engineers and developers.

Krebs said the bug dated back to 2012.

“This caught our attention because our login systems are designed to mask passwords using techniques that make them unreadable,” said Canahuati. “We have found no evidence to date that anyone internally abused or improperly accessed them,” he added, though he did not say how the company reached that conclusion.

Facebook said it will notify “hundreds of millions of Facebook Lite users,” a lighter version of Facebook for users where internet speeds are slow and bandwidth is expensive, and “tens of millions of other Facebook users.” The company also said “tens of thousands of Instagram users” will be notified of the exposure.

Krebs said as many as 600 million users could be affected, about one-fifth of the company’s 2.7 billion users, but Facebook has yet to confirm the figure.

Facebook also didn’t say how the bug came to be. Storing passwords in readable plaintext is insecure. Companies like Facebook are supposed to hash and salt passwords, two techniques that scramble them into an unreadable form, so that a user’s password can be verified without the company ever knowing what it is.
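As a rough illustration of what “hash and salt” means in practice, here is a generic sketch, not Facebook’s actual implementation:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, hash). The salt is random per user, so two users
    with the same password end up with different stored hashes."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Re-derive the hash from the candidate password and compare in
    constant time; the plaintext itself is never stored anywhere."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)
```

The point of the scheme is that a leaked log of salts and hashes doesn’t reveal anyone’s password, which is exactly the property Facebook’s plaintext logs lacked.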

Twitter and GitHub were hit by similar but independent bugs last year. Both companies said passwords were stored in plaintext and not scrambled.

It’s the latest in a string of embarrassing security issues at the company, prompting congressional inquiries and government investigations. It was reported last week that Facebook’s deals allowing other tech companies to access account data without consent were under criminal investigation.

It’s not known why Facebook took months to confirm the incident, or whether the company informed state or international regulators as required under U.S. breach notification and European data protection laws. We asked Facebook, but a spokesperson did not immediately comment beyond the blog post.

We also contacted the Irish data protection office, which covers Facebook’s European operations, but did not hear back.

Even years later, Twitter doesn’t delete your direct messages

When does “delete” really mean delete? Not always, or even at all, if you’re Twitter.

Twitter retains direct messages for years, including messages you and others have deleted, as well as data sent to and from accounts that have been deactivated and suspended, according to security researcher Karan Saini.

Saini found years-old messages in a file from an archive of his data obtained through the website, including messages from accounts that were no longer on Twitter. He also filed a similar bug, found a year earlier but not disclosed until now, that allowed him to use a since-deprecated API to retrieve direct messages even after a message was deleted by both the sender and the recipient, though the bug couldn’t retrieve messages from suspended accounts.

Saini told TechCrunch that he had “concerns” that the data was retained by Twitter for so long.

Direct messages once let users “unsend” messages from someone else’s inbox, simply by deleting them from their own. Twitter changed this years ago, and now only allows a user to delete messages from their own account. “Others in the conversation will still be able to see direct messages or conversations that you have deleted,” Twitter says in a help page. Twitter also says in its privacy policy that anyone wanting to leave the service can have their account “deactivated and then deleted.” After a 30-day grace period, the account disappears, along with its data.

But, in our tests, we could recover direct messages from years ago — including old messages that had since been lost to suspended or deleted accounts. By downloading your account’s data, it’s possible to download all of the data Twitter stores on you.

A conversation, dated March 2016, with a suspended Twitter account was still retrievable today. (Image: TechCrunch)

Saini says this is a “functional bug” rather than a security flaw, but argued that it gives anyone a “clear bypass” of the mechanisms Twitter uses to prevent access to suspended or deactivated accounts.

But it’s also a privacy matter, and a reminder that “delete” doesn’t mean delete, especially with your direct messages. That can open up users, particularly high-risk accounts like journalists and activists, to government data demands that call for data from years earlier.

That’s despite Twitter telling law enforcement that once an account has been deactivated, there is only “a very brief period in which we may be able to access account information, including tweets.”

A Twitter spokesperson said the company was “looking into this further to ensure we have considered the entire scope of the issue.”

Retaining direct messages for years may put the company in a legal grey area under Europe’s new data protection laws, which allow users to demand that a company delete their data.

Neil Brown, a telecoms, tech and internet lawyer at U.K. law firm Decoded Legal, said there’s “no formality at all” to how a user can ask for their data to be deleted. Any request from a user to delete their data that’s directly communicated to the company “is a valid exercise” of a user’s rights, he said.

Companies can be fined up to four percent of their annual turnover for violating GDPR rules.

“A delete button is perhaps a different matter, as it is not obvious that ‘delete’ means the same as ‘exercise my right of erasure’,” said Brown. Given that there’s no case law yet under the new General Data Protection Regulation regime, it will be up to the courts to decide, he said.

When asked if Twitter thinks that consent to retain direct messages is withdrawn when a message or account is deleted, Twitter’s spokesperson had “nothing further” to add.

Facebook is not equipped to stop the spread of authoritarianism

After the driver of a speeding bus ran over and killed two college students in Dhaka in July, student protesters took to the streets. They forced the ordinarily disorganized local traffic to drive in strict lanes and stopped vehicles to inspect license and registration papers. They even halted the vehicle of the Chief of Bangladesh Police Bureau of Investigation and found that his license was expired. And they posted videos and information about the protests on Facebook.

The fatal road accident that led to these protests was hardly an isolated incident. Dhaka, Bangladesh’s capital, which was ranked the second least livable city in the world in the Economist Intelligence Unit’s 2018 global liveability index, scored 26.8 out of 100 in the infrastructure category included in the rating. But the regional government chose to stifle the highway safety protests anyway. It went so far as to raid residential areas adjacent to universities to check students’ social media activity, leading to the arrest of 20 students. Although there were many images of Bangladesh Chhatra League, or BCL, men committing acts of violence against students, none of them were arrested. (The BCL is the student wing of the ruling Awami League, one of the major political parties of Bangladesh.)

Students were forced to log into their Facebook profiles and were arrested or beaten for their posts, photographs, and videos. In one instance, BCL men called three students into the dorm’s guestroom, quizzed them over Facebook posts, beat them, and then handed them over to police. They were reportedly tortured in custody.

A pregnant school teacher was arrested and jailed for just over two weeks for “spreading rumors” due to sharing a Facebook post about student protests. A photographer and social justice activist spent more than 100 days in jail for describing police violence during these protests; he told reporters he was beaten in custody. And a university professor was jailed for 37 days for his Facebook posts.

A Dhaka resident who spoke on the condition of anonymity out of fear for their safety said that the crackdown on social media posts essentially silenced student protesters, many of whom removed photos, videos, and status updates about the protests from their profiles entirely. While the person thought that students were continuing to be arrested, they said, “nobody is talking about it anymore — at least in my network — because everyone kind of ‘got the memo’ if you know what I mean.”

This isn’t the first time Bangladeshi citizens have been arrested for Facebook posts. As just one example, in April 2017, a rubber plantation worker in southern Bangladesh was arrested and detained for three months for liking and sharing a Facebook post that criticized the prime minister’s visit to India, according to Human Rights Watch.

Bangladesh is far from alone. Government harassment to silence dissent on social media has occurred across the region and in other regions as well — and it often comes hand-in-hand with governments filing takedown requests with Facebook and requesting data on users.

Facebook has removed posts critical of the prime minister in Cambodia and reportedly “agreed to coordinate in the monitoring and removal of content” in Vietnam. Facebook was criticized for not stopping the repression of Rohingya Muslims in Myanmar, where military personnel created fake accounts to spread propaganda which human rights groups say fueled violence and forced displacement. Facebook has since undertaken a human rights impact assessment in Myanmar, and it has also taken down coordinated inauthentic accounts in the country.

UNITED STATES – APRIL 10: Facebook CEO Mark Zuckerberg testifies during the Senate Commerce, Science and Transportation Committee and Senate Judiciary Committee joint hearing on “Facebook, Social Media Privacy, and the Use and Abuse of Data” on Tuesday, April 10, 2018. (Photo By Bill Clark/CQ Roll Call)

Protesters scrubbing their Facebook data for fear of repercussions isn’t uncommon. Over and over again, authoritarian-leaning regimes have utilized low-tech strategies to quell dissent. And aside from providing resources related to online privacy and security, Facebook still has little in place to protect its most vulnerable users from these pernicious efforts. As various countries pass laws calling for a local presence and increased regulation, it is possible that the social media conglomerate doesn’t always even want to.

“In many situations, the platforms are under pressure,” said Raman Jit Singh Chima, policy director at Access Now. “Tech companies are being directly sent takedown orders, user data requests. The danger of that is that companies will potentially be overcomplying or responding far too quickly to government demands when they are able to push back on those requests,” he said.

Elections are often a critical moment for oppressive behavior from governments; Uganda, Chad, and Vietnam have all specifically targeted citizens and candidates during election time. Facebook announced just last Thursday that it had taken down nine Facebook pages and six Facebook accounts for engaging in coordinated inauthentic behavior in Bangladesh. These pages, which Facebook believes were linked to people associated with the Bangladesh government, were “designed to look like independent news outlets and posted pro-government and anti-opposition content.” The sites masqueraded as news outlets, including fake BBC Bengali, BDSNews24, and Bangla Tribune pages with photoshopped blue checkmarks, according to the Atlantic Council’s Digital Forensic Research Lab.

Still, the imminent election in Bangladesh doesn’t bode well for anyone who might wish to express dissent. In October, a digital security bill that regulates some types of controversial speech was passed in the country, signaling to companies that as the regulatory environment tightens, they too could become targets.

More restrictive regulation is part of a greater trend around the world, said Naman M. Aggarwal, Asia policy associate at Access Now. Some countries, like Brazil and India, have passed “fake news” laws. (A similar law was proposed in Malaysia, but it was blocked in the Senate.) These types of laws are frequently followed by content takedowns. (In Bangladesh, the government warned broadcasters not to air footage that could create panic or disorder, essentially halting news programming on the protests.)

Other governments in the Middle East and North Africa — such as Egypt, Algeria, United Arab Emirates, Saudi Arabia, and Bahrain — clamp down on free expression on social media under the threat of fines or prison time. And countries like Vietnam have passed laws requiring social media companies to localize their storage and have a presence in the country — typically an indication of greater content regulation and pressure on the companies from local governments. In India, WhatsApp and other financial tech services were told to open offices in the country.

And crackdowns on posts about protests on social media come hand-in-hand with government requests for data. Facebook’s biannual transparency report details the percentage of government requests the company complies with in each country, but most people don’t know until long after the fact. Between January and June, the company received 134 emergency requests and 18 legal processes from Bangladeshi authorities for 205 users or accounts. Facebook turned over at least some data in 61 percent of emergency requests and 28 percent of legal processes.

Facebook said in a statement that it “believes people deserve to have a voice, and that everyone has the right to express themselves in a safe environment,” and that it handles requests for user data “extremely carefully.”

The company pointed to its Facebook for Journalists resources and said it is “saddened by governments using broad and vague regulation or other practices to silence, criminalize or imprison journalists, activists, and others who speak out against them,” but the company said it also helps journalists, activists, and other people around the world to “tell their stories in more innovative ways, reach global audiences, and connect directly with people.”

But there are policies that Facebook could enact that would help people in these vulnerable positions, like allowing users to post anonymously.

“Facebook’s real names policy doesn’t exactly protect anonymity, and has created issues for people in countries like Vietnam,” said Aggarwal. “If platforms provide leeway, or enough space for anonymous posting, and anonymous interactions, that is really helpful to people on ground.”

BERLIN, GERMANY – SEPTEMBER 12: A visitor uses a mobile phone in front of the Facebook logo at the #CDUdigital conference on September 12, 2015 in Berlin, Germany. (Photo by Adam Berry/Getty Images)

A German court found the policy illegal under the country’s decade-old privacy law in February. Facebook said it plans to appeal the decision.

“I’m not sure if Facebook even has an effective strategy or understanding of strategy in the long term,” said Sean O’Brien, lead researcher at Yale Privacy Lab. “In some cases, Facebook is taking a very proactive role… but in other cases, it won’t.” In any case, these decisions require a nuanced understanding of the population, culture, and political spectrum in various regions — something it’s not clear Facebook has.

Facebook isn’t responsible for government decisions to clamp down on free expression. But the question remains: How can companies stop assisting authoritarian governments, inadvertently or otherwise?

“If Facebook knows about this kind of repression, they should probably have… some sort of mechanism to at the very least heavily try to convince people not to post things publicly that they think they could get in trouble for,” said O’Brien. “It would have a chilling effect on speech, of course, which is a whole other issue, but at least it would allow people to make that decision for themselves.”

This could be an opt-in feature, but O’Brien acknowledges that it could create legal liabilities for Facebook, leading the social media giant to create lists of “dangerous speech” or profiles on “dissidents,” and could theoretically shut them down or report them to the police. Still, Facebook could consider rolling out a “speech alert” feature to an entire city or country if that area becomes politically volatile and dangerous for speech, he said.

O’Brien says that social media companies could consider responding to situations where a person is being detained illegally and potentially coerced into giving their passwords, perhaps by triggering a temporary account reset or freeze to prevent anyone from accessing the account without proper legal process. Events that might trigger the reset or freeze, as evaluated by humans, could include news of an individual’s arrest (if Facebook is alerted to it), contact from the authorities, or contact from friends and loved ones. There could even be a “panic button” trigger, like Guardian Project’s PanicKit, but for Facebook, allowing users to wipe or freeze their own accounts, or posts tagged preemptively with a codeword only the owner knows.
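To make O’Brien’s PanicKit-style suggestion concrete, here is one way such a trigger could look. Everything below, including the function name and the freeze flag, is hypothetical; no such Facebook feature exists:

```python
def freeze_matching_posts(account: dict, panic_codeword: str) -> int:
    """Hypothetical panic trigger: hide every post the owner pre-tagged
    with their secret codeword, then lock the account so it can't be
    accessed without proper legal process. Returns posts hidden."""
    hidden = 0
    for post in account["posts"]:
        if panic_codeword in post.get("tags", []):
            post["visible"] = False  # hide from everyone, owner included
            hidden += 1
    if hidden:
        account["frozen"] = True  # block all logins until formally unfrozen
    return hidden
```

Because only the owner knows the codeword, an adversary coercing a password out of a detainee couldn’t tell which posts were pre-tagged, which is the property O’Brien’s proposal relies on.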

“One of the issues with computer interfaces is that when people log into a site, they get a false sense of privacy even when the things they’re posting in that site are widely available to the public,” said O’Brien. Case in point: this year, women anonymously shared their experiences of abusive coworkers in a shared Google Doc — the so-called “Shitty Media Men” list, likely without realizing that a lawsuit could unmask them. That’s exactly what is happening.

Instead, activists and journalists often need to tap into resources and assistance from groups like Access Now, which runs a digital security helpline, and the Committee to Protect Journalists. These organizations can provide advice tailored to a person’s specific country and situation. Users can access Facebook over the Tor anonymity network, and use VPNs, end-to-end encrypted messaging tools, and non-phone-based two-factor authentication methods. But many may not realize what the threat is until it’s too late.

The violent crackdown on free speech in Bangladesh accompanied government-imposed internet restrictions, including the throttling of internet access around the country. Users at home with a broadband connection did not feel the effects of this, but “it was the students on the streets who couldn’t go live or publish any photos of what was going on,” the Dhaka resident said.

Elections will take place in Bangladesh on December 30.

In the few months leading up to the election, Access Now says it has noticed an increase in Bangladeshi residents expressing concern that their data has been compromised and seeking assistance from its digital security helpline.

Other rights groups have also found an uptick in malicious activity.

Meenakshi Ganguly, South Asia director at Human Rights Watch, said in an email that the organization is “extremely concerned about the ongoing crackdown on the political opposition and on freedom of expression, which has created a climate of fear ahead of national elections.”

Ganguly cited politically motivated cases against thousands of opposition supporters, many of whom have been arrested, as well as candidates who have been attacked.

Human Rights Watch issued a statement about the situation, warning that the Rapid Action Battalion, a “paramilitary force implicated in serious human rights violations including extrajudicial killings and enforced disappearances,” has been “tasked with monitoring social media for ‘anti-state propaganda, rumors, fake news, and provocations.’” This is in addition to a nine-member monitoring cell and around 100 police teams dedicated to quashing so-called “rumors” on social media, amid the looming threat of news website shutdowns.

“The security forces continue to arrest people for any criticism of the government, including on social media,” Ganguly said. “We hope that the international community will urge the Awami League government to create conditions that will uphold the rights of all Bangladeshis to participate in a free and fair vote.”

Facebook bug let websites read ‘likes’ and interests from a user’s profile

Facebook has fixed a bug that let any website pull information from a user’s profile — including their ‘likes’ and interests — without that user’s knowledge.

Those are the findings of Ron Masas, a security researcher at Imperva, who found that Facebook search results weren’t properly protected from cross-site request forgery (CSRF) attacks. In other words, a website could quietly siphon off certain bits of data from your logged-in Facebook profile in another tab.

Masas demonstrated how a website acting in bad faith could embed an IFRAME — used to nest a webpage within a webpage — to silently collect profile information.

“This allowed information to cross over domains — essentially meaning that if a user visits a particular website, an attacker can open Facebook and can collect information about the user and their friends,” said Masas.

The malicious website could open several Facebook search queries in a new tab and run queries that return “yes” or “no” responses, such as whether a Facebook user likes a page. Masas said the queries could also return more complex results, such as all of a user’s friends with a particular name, a user’s posts containing certain keywords, and even personal demographics, like all of a person’s friends of a certain religion in a named city.

“The vulnerability exposed the user and their friends’ interests, even if their privacy settings were set so that interests were only visible to the user’s friends,” he said.
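The class of fix Facebook later applied, CSRF protection, generally works by refusing any request that doesn’t carry proof it originated from the site itself. Here is a minimal, framework-agnostic sketch of the synchronizer-token pattern (not Facebook’s actual code; the function names are illustrative):

```python
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    """Attach a fresh random token to the user's server-side session;
    the site embeds this token in its own pages and requests."""
    token = secrets.token_hex(16)
    session["csrf_token"] = token
    return token

def is_request_allowed(session: dict, submitted_token: str) -> bool:
    """Reject any request whose token doesn't match the session's.
    A third-party page embedding the endpoint in an iframe can make
    the browser send the request, but it cannot read the token, so
    its forged requests fail this check."""
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted_token)
```

The token lives in the server-side session and in pages the site itself serves, so a malicious site can trigger cross-origin requests but can never supply the matching value.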

A snippet from a proof-of-concept built by Masas demonstrating the exploit. (Image: Imperva/supplied)

In fairness, it’s not a problem unique to Facebook, nor is it particularly covert. But given what could be gleaned, Masas said this kind of data would be “attractive” to ad companies.

Imperva privately disclosed the bug in May. Facebook fixed the bug days later by adding CSRF protections and paid out $8,000 in two separate bug bounties.

Facebook told TechCrunch that the company hasn’t seen any abuse.

“We appreciate this researcher’s report to our bug bounty program,” said Facebook spokesperson Margarita Zolotova in a statement. “As the underlying behavior is not specific to Facebook, we’ve made recommendations to browser makers and relevant web standards groups to encourage them to take steps to prevent this type of issue from occurring in other web applications.”

It’s the latest in a string of data exposures and bugs that have put Facebook user data at risk after the Cambridge Analytica scandal this year, which saw a political data firm vacuum up profiles on 87 million users to use for election profiling — including users’ likes and interests.

Months later, the social media giant admitted millions of user account tokens had been stolen by hackers who exploited a chain of bugs.

Hours before U.S. election day, Facebook pulls dozens of accounts for ‘coordinated inauthentic behavior’

Facebook has pulled the plug on 30 Facebook accounts and 85 Instagram accounts that the company says were engaged in “coordinated inauthentic behavior.”

Facebook’s head of cybersecurity policy Nathaniel Gleicher revealed the latest batch of findings in a late-night blog post Monday.

“On Sunday evening, U.S. law enforcement contacted us about online activity that they recently discovered and which they believe may be linked to foreign entities,” said Gleicher, without naming the law enforcement agency. “We immediately blocked these accounts and are now investigating them in more detail.”

The company didn’t have much more to share, only that the Facebook Pages associated with the accounts “appear to be in the French or Russian languages, while the Instagram accounts seem to have mostly been in English — some were focused on celebrities, others political debate,” he said.

In his post, Gleicher conceded that typically the company “would be further along with our analysis before announcing anything publicly,” but pledged to share more once the company digs in, including whether the accounts are linked to earlier takedowns tied to Iran.

When reached, a Facebook spokesperson did not comment further.

It’s the latest batch of account takedowns in recent weeks ahead of the U.S. midterm elections, later on Tuesday, when millions of Americans will go to the polls to vote for new congressional lawmakers and state governors. The election is largely seen as a barometer for the health of the Trump administration, two years after the president was elected amid a concerted state-backed effort by Russian intelligence to spread disinformation about his Democratic opponent and sow discord.

Only earlier on Monday, a new report from Columbia University’s Tow Center for Digital Journalism found that election interference remains a major problem for the platform, despite repeated promises from high-level executives that the company is doing what it can to fight false news and misinformation.

Twitter removes thousands of accounts that tried to dissuade Democrats from voting

Twitter has deleted thousands of automated accounts posting messages that tried to dissuade voters from casting their ballots in next week’s election.

Some 10,000 accounts were removed in late September and early October after they were first flagged by staff at the Democratic Party, the company has confirmed.

“We removed a series of accounts for engaging in attempts to share disinformation in an automated fashion – a violation of our policies,” said a Twitter spokesperson in an email to TechCrunch. “We stopped this quickly and at its source.” But the company did not provide examples of the kinds of accounts it removed, or say who or what might have been behind the activity.

The accounts posed as Democrats and tried to convince key demographics to stay at home and not vote, likely in an attempt to sway the results in key election battlegrounds, according to Reuters, which first reported the news.

A spokesperson for the Democratic National Committee did not return a request for comment outside its business hours.

The removals are a drop in the ocean compared to the wider threats Twitter faces. Earlier this year, the social networking giant deleted 1.2 million accounts for sharing and promoting terrorist content. In May alone, the company deleted just shy of 10 million accounts each week for sending malicious, automated messages.

Twitter had 335 million monthly active users as of its latest earnings report in July.

But the company has faced criticism from lawmakers for not doing more to proactively remove content that violates its rules or spreads disinformation and false news. With just days before Americans are set to vote in the U.S. midterms, this latest batch of takedowns is likely to spark further concern that Twitter did not automatically detect the malicious accounts.

Following the publication of Reuters’ report, Yoel Roth, Twitter’s head of site integrity, said in a tweet thread that public research identifying bots is often “deeply flawed” and that many are identifying bots “based on probability, not certainty,” since “nobody other than Twitter can see non-public, internal account data.”

Twitter does not have a strict policy on the spread of disinformation in the run-up to election season, unlike Facebook, which recently banned content that tried to suppress voters with false and misleading information. Instead, Twitter said last year that its “open and real-time nature” is a “powerful antidote to the spreading of all types of false information.” But researchers have been critical of that approach. Research published last month found that more than 700,000 accounts that were active during the 2016 presidential election are still active to this day — pushing a million tweets each day.

A Twitter spokesperson added that for the election this year, the company has “established open lines of communication and direct, easy escalation paths for state election officials, Homeland Security, and campaign organizations from both major parties to help us enforce our policies vigorously and protect conversational health on our service.”

Facebook takes down more disinformation activity linked to Iran

Facebook has removed 82 pages, groups and accounts for “coordinated inauthentic behavior” that originated out of Iran.

The social networking giant discovered the “inauthentic behavior” late last week, according to a blog post by the company’s cybersecurity policy chief Nathaniel Gleicher. He said the operation relied on posing as U.S. and U.K. citizens, and “posted about politically charged topics such as race relations, opposition to the President, and immigration.” The company said that although its investigation is in its early stages, it traced the activity back to Iran but does not yet know who is responsible.

Facebook said that a little over one million accounts followed at least one of the pages run by the Iranian actors. The takedown also included 16 accounts on Instagram.

The company shared its findings with the FBI prior to the takedowns, Gleicher added on a call.

It’s the latest batch of account and content takedowns in recent months. Facebook took down hundreds of accounts and pages in August with help from security firm FireEye, which found a widespread Iranian influence operation on the social media platform. While Facebook’s previous takedowns targeted accounts spreading disinformation aimed at elections, the Iranian-backed campaign targeted a scattering of issues. FireEye said in its analysis that the narratives employed by the Iranians included “anti-Saudi, anti-Israeli, and pro-Palestinian themes, as well as support for specific U.S. policies favorable to Iran, such as the U.S.-Iran nuclear deal.”

Tech titans like Facebook have faced increasing pressure from lawmakers to better police their platforms from disinformation and the spread of false news from state-backed actors in the wake of the 2016 presidential election.

Although much of the focus has been on activity linked to trolls working for the Russian government, which used disinformation-spreading tactics to try to influence the outcome of the election, Iran has emerged as a separate powerhouse in spreading disinformation on the platform.


Twitter now puts live broadcasts at the top of your timeline

Twitter will now put live streams and broadcasts started by accounts you follow at the top of your timeline, making it easier to see what they’re doing in real time.

In a tweet, Twitter said that the new feature will include breaking news, personalities and sports.

The social networking giant included the new feature in its iOS and Android apps, updated this week. Among the updates, Twitter said it now also supports audio-only live broadcasts, both on Twitter and through its sister broadcast service Periscope.

Last month, Twitter discontinued its app for iOS 9 and lower, which according to Apple’s own data still accounts for some 5 percent of all iPhone and iPad users.

Justice Dept. says social media giants may be ‘intentionally stifling’ free speech

The Justice Department has confirmed that Attorney General Jeff Sessions has expressed a “growing concern” that social media giants may be “hurting competition” and “intentionally stifling” free speech and expression.

The comments come as Facebook chief operating officer Sheryl Sandberg and Twitter chief executive Jack Dorsey gave testimony to the Senate Intelligence Committee on Wednesday, as lawmakers investigate foreign influence campaigns on their platforms.

Social media companies have been under the spotlight in recent years after threat actors, believed to be working closely with the Russian and Iranian governments, used disinformation-spreading tactics to try to influence the outcome of the 2016 presidential election.

“The Attorney General has convened a meeting with a number of state attorneys general this month to discuss a growing concern that these companies may be hurting competition and intentionally stifling the free exchange of ideas on their platforms,” said Justice Department spokesman Devin O’Malley in an email.

It’s not exactly clear whether the Justice Department is pushing for regulation or actively investigating the platforms over issues relating to competition, or antitrust. Social media companies aren’t covered under U.S. free speech laws, like the First Amendment, but have long said they support free speech and expression across their platforms, including for users in parts of the world where freedom of speech is more restrictive.

When reached, Facebook did not immediately have comment. Twitter declined to comment.