Author: Jon Russell

Jack Dorsey and Twitter ignored opportunity to meet with civic group on Myanmar issues

Responding to criticism of his recent trip to Myanmar, Twitter CEO Jack Dorsey said he’s keen to learn about the country’s racial tension and human rights atrocities, but it has emerged that both he and Twitter’s public policy team ignored an opportunity to connect with a key civic group in the country.

A loose alliance of six companies in Myanmar has engaged with Facebook in a bid to improve how the social network’s services are used in the country — often with frustrating results. When key members of that group, including the Omidyar-backed accelerator Phandeeyar, learned that the CEO was visiting Myanmar, they contacted Dorsey via Twitter DM and emailed the company’s public policy contacts.

The plan was to arrange a forum to discuss social media concerns in Myanmar and help Dorsey gain an understanding of life on the ground in one of the world’s fastest-growing internet markets.

“The Myanmar tech community was all excited, and wondering where he was going,” Jes Kaliebe Petersen, the Phandeeyar CEO, told TechCrunch in an interview. “We wondered: ‘Can we get him in a room, maybe at a public event, and talk about technology in Myanmar or social media, whatever he is happy with?'”

The DMs went unread. In a response to the email, a Twitter staff member told the group that Dorsey was visiting the country strictly on personal time with no plans for business. The Myanmar-based group responded with an offer to set up a remote, phone-based briefing for Twitter’s public policy team with the ultimate goal of getting information to Dorsey and key executives, but that email went unanswered.

When we contacted Twitter, a spokesperson initially pointed us to a tweet from Dorsey in which he said: “I had no conversations with the government or NGOs during my trip.”

However, within two hours of our inquiry, a member of Twitter’s team responded to the group’s email in an effort to restart the conversation and set up a phone meeting in January.

“We’ve been in discussions with the group prior to your outreach,” a Twitter spokesperson told TechCrunch in a subsequent email exchange.

That statement is incorrect.

Still, on the bright side, it appears that the group may get an opportunity to brief Twitter on its concerns about social media usage in the country after all.

The micro-blogging service isn’t as widely used in Myanmar as Facebook, which has some 20 million monthly users and is practically the de facto internet there, but it is not without problems. For one thing, there has been the development of a somewhat sinister bot army in Myanmar and other parts of Southeast Asia, and Twitter remains a key platform for influencers and thought leaders.

“[Dorsey is] the head of a social media company and, given the massive issues here in Myanmar, I think it’s irresponsible of him to not address that,” Petersen told TechCrunch.

“Twitter isn’t as widely used as Facebook but that doesn’t mean it doesn’t have concerns happening with it,” he added. “As we’d tell Facebook or any large tech company with a prominent presence in Myanmar, it’s important to spend time on the ground like they’d do in any other market where they have a substantial presence.”

The UN has concluded that Facebook plays a “determining” role in accelerating ethnic violence in Myanmar. While Facebook has tried to address the issues, it hasn’t committed to opening an office in the country and it released a key report on the situation on the eve of the U.S. mid-term elections, a strategy that appeared designed to deflect attention from the findings. All of which suggests that it isn’t really serious about Myanmar.

Sheryl Sandberg claims she didn’t know Facebook hired agency behind ‘abhorrent’ anti-Soros campaign

Sheryl Sandberg has denied that she obstructed early investigations into election meddling and claimed that she was unaware Facebook was involved with an agency that ran “abhorrent” anti-Semitic campaigns that targeted George Soros, among others.

Facebook, the world’s largest social network, with more than 2.2 billion users, spent Thursday doing its best to fight a media relations forest fire sparked by an explosive New York Times article that revealed, among other things, a campaign to smear George Soros.

The company fired PR and research firm Definers, which was at the center of parts of the story; disputed allegations that it tried to hide details around Russian hacking; and held an hour-long call between journalists and CEO Mark Zuckerberg.

Now Sandberg has joined Zuckerberg and Facebook itself in pushing back against some of the core claims of the Times report, which paints her in a particularly poor light.

“On a number of issues – including spotting and understanding the Russian interference we saw in the 2016 election – Mark and I have said many times we were too slow,” she wrote in a rebuttal posted to Facebook. “But to suggest that we weren’t interested in knowing the truth, or we wanted to hide what we knew, or that we tried to prevent investigations, is simply untrue.”

Sandberg repeated a common refrain at Facebook: the company wasn’t aware of the scale of the attacks it received until it was too late, and it is now committed to “investing heavily” to prevent recurrences.

“While we will always have more work to do, I believe we’ve started to see some of that work pay off, as we saw in the recent US midterms and elections around the world where we have found and taken down further attempts at interference,” she wrote.

But perhaps the most striking part of Sandberg’s post is a brief passage in which she claims that she — Facebook’s chief operating officer — was unaware of the exact scope of Definers’ work for the company, which included disinformation campaigns against Apple, Google and the George Soros-backed Open Society Foundations.

From her post:

I also want to address the issue that has been raised about a PR firm, Definers. We’re no longer working with them but at the time, they were trying to show that some of the activity against us that appeared to be grassroots also had major organizations behind them. I did not know we hired them or about the work they were doing, but I should have. I have great respect for George Soros – and the anti-Semitic conspiracy theories against him are abhorrent.

Indeed, the claim that Sandberg didn’t even know the agency worked for Facebook flies in the face of the company’s original response, in which it wrote that its “relationship with Definers was well-known by the media.”

According to those statements, the relationship was well-known by the media but unknown to the company’s COO? OK then.

The New York Times’ allegations are hugely serious, enough to elicit swift and concerned responses from a multitude of politicians and to send Facebook’s PR machine spluttering into frenzied activity. Don’t expect this issue to disappear soon.

Here’s a quick recap of what you need to know so far.

If you haven’t done so yet, go read The New York Times report.

And here’s the response from Sandberg in full:

I want to address some of the claims that have been made in the last 24 hours.

On a number of issues – including spotting and understanding the Russian interference we saw in the 2016 election – Mark and I have said many times we were too slow. But to suggest that we weren’t interested in knowing the truth, or we wanted to hide what we knew, or that we tried to prevent investigations, is simply untrue. The allegations saying I personally stood in the way are also just plain wrong. This was an investigation of a foreign actor trying to interfere in our election. Nothing could be more important to me or to Facebook.

As Mark and I both told Congress, leading up to Election Day in November 2016, we detected and dealt with several threats with ties to Russia and reported what we found to law enforcement. These were known traditional cyberattacks like hacking and malware. It was not until after the election that we became aware of the widespread misinformation campaigns run by the IRA. Once we were, we began investing heavily in more people and better technology to protect our platform. While we will always have more work to do, I believe we’ve started to see some of that work pay off, as we saw in the recent US midterms and elections around the world where we have found and taken down further attempts at interference.

I also want to address the issue that has been raised about a PR firm, Definers. We’re no longer working with them but at the time, they were trying to show that some of the activity against us that appeared to be grassroots also had major organizations behind them. I did not know we hired them or about the work they were doing, but I should have. I have great respect for George Soros – and the anti-Semitic conspiracy theories against him are abhorrent.

At Facebook, we are making the investments that we need to stamp out abuse in our system and ensure the good things people love about Facebook can keep happening. It won’t be easy. It will take time and will never be complete. This mission is critical and I am committed to seeing it through.

India’s Meesho, which enables social commerce via WhatsApp, raises $50M

Meesho, a Bangalore-based social commerce startup, has closed a $50 million investment to grow its business in its Indian homeland ahead of future international expansion.

This Series C round means that Meesho, which graduated Y Combinator in 2016, has now raised three funding rounds in the past year. Its $3.4 million Series A came in October 2017, with an $11.5 million Series B closing in June of this year. That’s quite the fundraising pace, and over the last year Meesho has seen its top-line revenue grow by more than 100x, co-founder and CEO Vidit Aatrey told TechCrunch in an interview.

This time around, the $50 million raise includes new investors Shunwei Capital from China, DST Partners and RPS Ventures, as well as returning backers Sequoia India, SAIF Partners, Venture Highway and Y Combinator.

Meesho has adjusted its focus considerably since it graduated YC, and today it operates as an enabler for people in India who want to sell products using social media. The primary focus is WhatsApp, the world’s most popular messaging app, which counts India as its largest market with more than 200 million monthly users.

The company provides sellers with products (which it sources from suppliers), as well as inventory management and other basic seller tools. In turn, sellers hawk their catalog to friends and family as they please. Meesho handles all payment and logistics, and pays sellers a cut of each transaction.

Interestingly, there’s no fixed price for products. That means that sellers can vary the price and even haggle with their customers just as they’d do in real life.

“We want to simulate the exact experience that happens offline,” Aatrey explained. “Sellers have the liberty to sell to 10 different people at 10 different prices.”

Sales typically happen between friends and family because there is a trusted relationship. Selling consistently to family members doesn’t seem like an easy task, but Meesho operates in a range of verticals, including fashion, living, cosmetics and more, which the company said makes repeat custom easier. The firm is working on technology that helps sellers figure out which products to push to their customer list, but Aatrey believes a good seller has a knack for what their customers will want on a given day or week.

Aatrey — who started Meesho with fellow IIT-Delhi graduate Sanjeev Barnwal in 2015 — told TechCrunch that the startup is picky about who it selects as a seller, and those who are not active enough are removed from the platform — although he said the latter doesn’t happen a lot. Instead, Meesho offers training and skill development programs to sellers who perform well.

“We go and intentionally invest more to scale up the sellers who show more promise,” he explained.

(Left to right) Meesho founders Sanjeev Barnwal and Vidit Aatrey

Meesho says it has registered some two million sellers to date, but the goal is to reach 20 million by 2020. A majority, 80 percent, are female because the startup initially targeted housewives, although Aatrey said it is increasingly seeing the number of male sellers grow. Nearly one-third of sellers are students, and many others use the app part-time to supplement an existing source of income.

In one example, Aatrey explained that a household earning 30,000 INR ($410) per month can typically make 8,000-10,000 INR in additional income if one of the homemakers uses Meesho full-time. That’s an increase of roughly 27-33 percent, a pretty significant addition.

One of the more intriguing aspects of the Meesho business is that, by tapping into people’s trusted relationships and offering them incentives to sell products without requiring operating capital, the company has cut a lot of the expensive overheads associated with e-commerce. Customer acquisition cost is low, for example, and there’s no need to dole out discounts, both of which are expensive line items for Amazon India and its rival Flipkart, which is owned by Walmart.

“We don’t burn a lot of money,” Aatrey said, although he declined to provide specific financial information.

With this new money in the bank, Meesho is working to go deeper into its existing areas of business. That’ll include offering more product categories, bringing on more suppliers, extending its supply chain and developing tools to help sellers sell better.

Aatrey also confirmed that the company is looking to develop a supply chain in China, which is where Shunwei and its network will come into play. He also revealed that the company is beginning to think about the potential for its own-label products — an Amazon-style move — although that isn’t likely to happen just yet.

Another longer-term objective is international expansion.

“For the next 12 months we won’t go beyond India,” Aatrey explained. “But what we are doing here is very similar to Southeast Asia, Latin America and even the Middle East so at some point we’ll think about venturing overseas.”

Having closed three funding rounds in the past year, Meesho is well capitalized, Aatrey said, but he didn’t rule out raising money again.

“If we get a good offer that makes sense for the growth of the business, we are open to it,” he said.

Facebook bans Myanmar military accounts for ‘enabling human rights abuses’

Facebook is cracking down on the military leadership in Myanmar, the Southeast Asian country where the social network has been identified as a factor contributing to ethnic tension and violence.

The U.S. company said today that it removed accounts belonging to Senior General Min Aung Hlaing, who is commander-in-chief of the armed forces, and the military-owned Myawady television network.

In total, the purge has swept up 18 Facebook accounts, 52 Facebook Pages and an Instagram account after the company “found evidence that many of these individuals and organizations committed or enabled serious human rights abuses in the country.”

Some 30 million of Myanmar’s 50 million people are estimated to use Facebook, making it a hugely effective broadcast network. But with wide reach comes the potential for misuse, as has been most evident in the U.S.

But the Facebook effect is also huge far from the U.S. A report from the UN issued in March determined that Facebook had played a “determining role” in Myanmar’s crisis. The situation in the country is so severe that an estimated 700,000 Rohingya Muslim refugees are thought to have fled to neighboring Bangladesh following a Myanmar government crackdown that began in August. U.S. Secretary of State Rex Tillerson has labeled the actions as ethnic cleansing.

Facebook’s action today comes a week after an investigative report from Reuters found more than 1,000 posts, comments and images that attacked Rohingya and other Muslim users on the platform.

“During a recent investigation, we discovered that they used seemingly independent news and opinion Pages to covertly push the messages of the Myanmar military. This type of behavior is banned on Facebook because we want people to be able to trust the connections they make,” Facebook said in a statement.

“While we were too slow to act, we’re now making progress – with better technology to identify hate speech, improved reporting tools, and more people to review content,” it added.

Twitter puts Infowars’ Alex Jones in the ‘read-only’ sin bin for 7 days

Twitter has finally taken action against Infowars creator Alex Jones, but it isn’t what you might think.

While Apple, Facebook, Google/YouTube, Spotify and many others have removed Jones and his conspiracy-peddling organization Infowars from their platforms, Twitter has remained unmoved, maintaining that Jones hasn’t violated the rules of its platform.

That was helped in no small way by the mysterious removal of some tweets last week, but now Jones has been found to have violated Twitter’s rules, as CNET first noted.

Twitter is punishing Jones for a tweet that violates its community standards, but it isn’t locking him out forever. Instead, a spokesperson for the company confirmed that Jones’ account is in “read-only mode” for up to seven days.

That means he will still be able to use the service and look up content via his account, but he’ll be unable to engage with it: no tweets, likes, retweets, comments, etc. He’s also been ordered to delete the offending tweet — more on that below — in order to qualify for a fully functioning account again.

That restoration doesn’t happen immediately, though. Twitter policy states that the read-only sin bin can last for up to seven days “depending on the nature of the violation.” We’re imagining Jones got the full one-week penalty, but we’re waiting on Twitter to confirm that.

The offending tweet is a link to a story claiming that President “Trump must take action against web censorship.” It looks like the tweet has already been deleted, but not before Twitter judged that it violated its policy on abuse:

Abuse: You may not engage in the targeted harassment of someone, or incite other people to do so. We consider abusive behavior an attempt to harass, intimidate, or silence someone else’s voice.

When you consider the things Infowars and Jones have said or written — 9/11 conspiracies, harassment of Sandy Hook victim families and more — the content in question seems fairly innocuous. Indeed, you could look at President Trump’s tweets and find seemingly more punishable content without much difficulty.

But here we are.

The weirdest part of this Twitter caning is one of the reference points that the company gave to media. These days, it is common for the company to point reporters to specific tweets that it believes encapsulate its position on an issue, or provide additional color in certain situations.

In this case, Twitter pointed us — and presumably other reporters — to this tweet from Infowars’ Paul Joseph Watson:

WTF, Twitter…

Twitter vows to continue spam fight despite negative impact on user numbers

Twitter has no intention of easing up on its fight against spam users and other factors that jeopardize the “health” of its service, despite the approach costing it three million ‘lost’ monthly active users.

Investor panic sent Twitter’s stock price down by nearly 20 percent in early trading today following its latest financial report. Twitter posted a record profit of $100 million for Q2, but its monthly user count dropped by one million, with its U.S. number in particular down to 68 million from 69 million in the previous quarter.

The company said on an earnings call that efforts aimed at “prioritizing the health of the platform” combined with other factors cost it three million monthly users — a number which could have turned the user decline into a more favorable story of growth.

The company is anticipating another drop in the next quarter as it continues to double down on fighting spam and bots on its service. That isn’t the only factor reducing numbers, however. A reassessment of its paid partnerships with carriers worldwide — which help bring in and retain new users — in response to the development of its Lite app is also forecast to reduce MAU.

Investors may be concerned, but Twitter is bullish that an increase in the quality of users is ultimately better in the long run than the short-term gain of higher numbers.

Answering questions on an earnings call, Twitter CEO Jack Dorsey said the clean-up strategy would be ongoing as Twitter intends to “build [concerns for platform health] into our DNA.”

“When we do focus on removing some of the burden of people blocking/muting, we see positive results in our numbers,” he added. “We believe this will encourage our growth story.”

Yet the execs also played down the material impact by explaining that “many” of the “tens of millions” of removed accounts were already not counted within Twitter’s MAU metrics. Some, they added, had never been counted because they had been identified as questionable right from when they were registered.

Twitter explained as much in its earnings release:

When we suspend accounts, many of the removed accounts have already been excluded from MAU or DAU, either because the accounts were already inactive for more than one month at the time of suspension, or because they were caught at signup and were never included in MAU or DAU. We will continue to work hard to improve the health of the platform, providing updates on our progress at least quarterly, and prioritizing health efforts regardless of the near-term impact on metrics, as we believe the best driver of long-term growth of Twitter as a daily utility is a healthy conversation.

On the positive side, the executives played up the development of overseas revenue, which grew 44 percent year-on-year and now accounts for 48 percent of Twitter’s total income.

Facebook trips on its own moderation failures

After weeks of speculation around how it plans to handle conspiracy website Infowars, its creator Alex Jones and others that spread false information, Facebook finally gave us an answer: inconsistently.

The company hit Jones with a 30-day ban after it removed four videos that he shared on the Infowars Facebook Page.

The move is Facebook’s first that curtails the reach of Jones, who has been a major talking point in the media because he is continually allowed a voice on the social network, despite spreading “alternative theories” on events like 9/11 and the San Bernardino shootings.

Confusion

Sounds good so far, but, for a six-hour period today, it didn’t seem as though Facebook itself even knew what was going on.

CNET reported that Jones had been hit by a 30-day suspension for posting four videos that violate its community standards on the Infowars page, which counts him as a moderator. When reached by TechCrunch to confirm the report, Facebook said Jones had only been handed a warning and that, in the event of another warning, a 30-day ban would then follow.

After hours of waiting for further confirmation and emails to the contrary, Facebook clarified that in fact Jones’ personal account was given a 30-day ban, while Infowars received a warning but no ban.

Facebook is effectively shooting the messenger while allowing the page — which pushed the videos out to its audience — to remain in place.

In subsequent emails, Facebook explained that the inconsistency is because Jones’ personal account had already received a past warning, which triggers the 30-day ban. Surprisingly, though, this is a first warning for the Infowars page.

At least, that’s what we think has happened because Facebook hasn’t fully clarified the exact summary of events. (We have asked.)

Beyond the four videos, there’s a lot riding on this decision — it sets a precedent. Infowars is one of the largest outlets of its kind, but there are plenty of other organizations that thrive on pumping out misleading or false content that plays into insecurities, misplaced nationalistic pride and more.

That’s why Infowars (involuntarily) became the subject of two Facebook video events held with press this month. On both occasions, Facebook executives said that even those peddling false information deserve to have a voice on the social network, no matter how questionable or inflammatory their views may be. CEO Mark Zuckerberg himself even said Holocaust deniers have free speech on the service.

Based on today’s events, so long as they spew their message within Facebook’s community rules, they are fine.

Follow fast

In fact, you could take it further and suggest that if they don’t raise the suspicions of rival platforms like YouTube, they’ll remain untouched on Facebook.

The Jones/Infowars videos were pulled by Facebook days after being removed from YouTube. Indeed, one of the Facebook videos had even survived a review after it was flagged to Facebook moderators last month. The reviewer marked the video as acceptable and it remained on the platform — until this week.

Facebook called that decision a mistake, but arguably it’s a mistake that wouldn’t have been rectified had YouTube not raised the alarm by banning the videos on its platform first. (YouTube has well-documented content moderation problems of its own, so the fact that it is running circles around Facebook should be of real concern to the social network’s management.)

That Facebook is unable to communicate a significant decision like this in a cohesive manner doesn’t inspire confidence that it has its house in order when it comes to video moderation. If anything, it shows that the social network is playing catch-up and winging it on what is a critical topic.

Its platform is being used nefariously worldwide, whether to sway elections or incite racial violence in foreign lands, so now, more than ever, Facebook needs to nail down the basics of handling malicious content like Infowars, which, unlike those other threats, is hiding in plain sight.

Facebook also removes 4 Infowars videos, including one it previously cleared

Days after defending its decision to give a voice to conspiracy theory peddler Alex Jones and his Infowars site, Facebook has removed four of his videos for violating its community standards.

But one of the four had already been allowed to slip through the firm’s review system. A source within Facebook told TechCrunch that one of the videos had previously been flagged for review in June but, after being looked over by a checker, was allowed to remain on the social network. That decision was described as “erroneous” and the video has now been removed.

Facebook’s removal of the videos comes days after YouTube scrubbed four videos from Jones from its site for violating its policies on content. The Facebook source confirmed that three of the videos it has removed were flagged for the first time on Wednesday — presumably after, or in conjunction with, their being highlighted to YouTube — but the fact that one had previously gotten the all-clear once again raises questions about the consistency of Facebook’s review process.

Contrary to some media reports, Jones has not received a 30-day ban from Facebook following these removals. TechCrunch understands that such a ban will be issued if Jones violates the company’s policies in the future, but, for now, he has been given a warning.

“Our Community Standards make it clear that we prohibit content that encourages physical harm [bullying], or attacks someone based on their religious affiliation or gender identity [hate speech]. We remove content that violates our standards as soon as we’re aware of it. In this case, we received reports related to four different videos on the Pages that Infowars and Alex Jones maintain on Facebook. We reviewed the content against our Community Standards and determined that it violates. All four videos have been removed from Facebook,” a spokesperson said in a statement.

Earlier this month, the company’s head of News Feed, John Hegeman, said of Infowars content — which includes claims that 9/11 was an inside job and alternative theories about the San Bernardino shootings — that “just for being false, doesn’t violate the community standards.” He added: “We created Facebook to be a place where different people can have a voice.”

Facebook seemed to double down on that stance on Monday when, at another event, VP of product Fidji Simo called Infowars “absolutely atrocious” but then said that “if you are saying something untrue on Facebook, you’re allowed to say it as long as you’re an authentic person and you are meeting the community standards.”

It’s not been a good week for Facebook. A poor earnings report spooked investors and caused its valuation to drop by $123 billion, the largest single-day market cap wipeout in U.S. trading history. That’s not the kind of record Facebook will want to own.

RIP Klout

Remember Klout?

The influencer marketing service that purportedly let social media influencers get free stuff is finally closing its doors this month.

Perhaps, like me, you’re surprised that Klout is still running in 2018, but time is nearly up. The closure will happen May 25 — you have until then to see what topics you’re apparently an expert on. The shutdown comes more than four years after it was acquired by social media software company Lithium Technologies for a reported $200 million. The plan was for Lithium to IPO, but that never happened.

Lithium operates a range of social media services, including products that handle social media marketing campaigns and engagement with customers, and now it has decided that Klout is no longer part of its vision.

“The Klout acquisition provided Lithium with valuable artificial intelligence (AI) and machine learning capabilities but Klout as a standalone service is not aligned with our long-term strategy,” CEO Pete Hess wrote in a short note.

Hess said those apparent AI and ML smarts will be put to work in the company’s other product lines.

He did tease a potential Klout replacement in the form of “a new social impact scoring methodology based on Twitter” that Lithium is apparently planning to release soon. I’m pretty sure someone out there is already pledging to bring Klout back on the blockchain and is frantically writing up an ICO whitepaper as we speak because that’s how it is these days.

RIP Klout

Twitter doesn’t care that someone is building a bot army in Southeast Asia

Facebook’s lack of attention to how third parties are using its service to reach users ended up with CEO Mark Zuckerberg taking questions from Congressional committees. With that in mind, you’d think that others in the social media space might be more attentive than usual to potentially malicious actors on their platforms.

Twitter, however, is turning the other way and insisting all is normal in Southeast Asia, despite the emergence of thousands of bot-like accounts that have followed prominent users in the region en masse over the past month.

Scores of reporters and Twitter users with large followings — yours truly included — have noticed that swarms of accounts with generic names, no profile photo, no bio and no tweets have followed them over the past month.

These accounts might be evidence of a new ‘bot farm’ — the creation of large numbers of accounts for sale or on-demand use, a practice Twitter has cracked down on — or the groundwork for more nefarious activities; it’s too early to tell.

In what appears to be the first regional Twitter bot campaign, a flood of suspicious new followers has been reported by users across Southeast Asia and beyond, including in Thailand, Myanmar, Cambodia, Hong Kong, China, Taiwan and Sri Lanka, among other places.

While it is true that the new accounts have done nothing yet, the fact that a large number of newly created accounts have popped up out of nowhere with the aim of following the region’s most influential voices should be enough to concern Twitter. Especially since this is Southeast Asia, a region where Facebook is beset with controversies — from its role in inciting ethnic hatred in Myanmar, to allegedly assisting censors in Vietnam, to users being jailed for violating lese majeste laws in Thailand, to its part in the election of controversial Philippines leader Duterte.

Then there are governments themselves. Vietnam has pledged to build a cyber army to combat “wrongful views,” while other regimes in Southeast Asia have clamped down on social media users.

Despite that, Twitter isn’t commenting.

The U.S. company issued a no comment to TechCrunch when we asked for further information about this rush of new accounts, and what action Twitter will take.

A source close to the company suggested that the sudden accumulation of new followers is “a pretty standard sign-up, or onboarding, issue” that is down to new accounts selecting to follow the suggested accounts that Twitter proposes during the new account creation process.

Twitter is more than 10 years old, and since this is the first example of this happening in Southeast Asia, that explanation seems inadequate at face value. More generally, the dismissive approach seems particularly naive. Twitter should be looking into the issue more closely, even if the apparent bot army isn’t being put to use yet.

Facebook is considered to be the internet by many in Southeast Asia, and the social network is considerably more popular than Twitter in the region, but there remains a cause for concern here.

“If we’ve learned anything from the Facebook scandal, it’s that what can at first seem innocuous can be leveraged to quite insidious and invasive effect down the line,” Francis Wade, who recently published a book on violence in Myanmar, told the Financial Times this week. “That makes Twitter’s casual dismissal of concerns around this all the more unsettling.”