Category Archives: Social Media

Facebook mistakenly leaked developer analytics reports to testers

Set the “days without a Facebook privacy problem” counter to zero. This week, an alarmed developer contacted TechCrunch, informing us that their Facebook App Analytics weekly summary email had been delivered to someone outside their company. The email contains sensitive business information, including weekly average users, page views, and new users.

43 hours after we contacted Facebook about the issue, the social network now confirms to TechCrunch that 3 percent of apps using Facebook Analytics had their weekly summary reports sent to their app’s testers, instead of only the app’s developers, admins, and analysts.

Testers are often people outside of a developer’s company. If the leaked info got to an app’s competitors, it could provide them an advantage. At least they weren’t allowed to click through to view more extensive historical analytics data on Facebook’s site.

Facebook tells us it has fixed the problem and no personally identifiable information or contact info was improperly disclosed. It plans to notify all impacted developers about the leak today and has already begun. Below you can find the email the company is sending:

Subject line: We recently resolved an error with your weekly summary email

We wanted to let you know about a recent error where a summary e-mail from Facebook Analytics about your app was sent to testers of your app ‘[APP NAME WILL BE DYNAMICALLY INSERTED HERE]’. As you know, we send weekly summary emails to keep you up to date with some of your top-level metrics — these emails go to people you’ve identified as Admins, Analysts and Developers. You can also add Testers to your account, people designated by you to help test your apps when they’re in development.

We mistakenly sent the last weekly email summary to your Testers, in addition to the usual group of Admins, Analysts and Developers who get updates. Testers were only able to see the high-level summary information in the email, and were not able to access any other account information; if they clicked “View Dashboard” they did not have access to any of your Facebook Analytics information.

We apologize for the error and have made updates to prevent this from happening again.

One affected developer told TechCrunch “Not sure why it would ever be appropriate to send business metrics to an app user. When I created my app (in beta) I added dozens of people as testers as it only meant they could login to the app…not access info!” They’re still waiting for the disclosure from Facebook.

Facebook wouldn’t disclose a ballpark number of apps impacted by the error. Last year it announced 1 million apps, sites, and bots were on Facebook Analytics. However, this issue only affected apps, and only 3% of them.

The mistake comes just weeks after a bug caused 14 million users’ Facebook status update composers to change their default privacy setting to public. And Facebook has had problems with misdelivering business information before. In 2014, Facebook accidentally sent advertisers receipts for other businesses’ ad campaigns, causing significant confusion. The company has also misreported metrics about Page reach and more on several occasions. Though user data didn’t leak and today’s issue isn’t as severe as others Facebook has dealt with, developers still consider their business metrics to be private, making this a breach of that privacy.

While Facebook has been working diligently to patch app platform privacy holes since the Cambridge Analytica scandal, removing access to many APIs and strengthening human reviews of apps, issues like today’s make it hard to believe Facebook has a proper handle on the data of its 2 billion users.

Facebook prototypes tool to show how many minutes you spend on it

Are you ready for some scary numbers? After months of Mark Zuckerberg talking about how “Protecting our community is more important than maximizing our profits”, Facebook is preparing to turn that commitment into a Time Well Spent product.

Buried in Facebook’s Android app is an unreleased “Your Time On Facebook” feature. It shows the tally of how much time you spent on the Facebook app on your phone on each of the last seven days, and your average time spent per day. It lets you set a daily reminder that alerts you when you’ve reached your self-imposed limit, plus a shortcut to change your Facebook notification settings.

Facebook confirmed the feature development to TechCrunch, with a spokesperson telling us “We’re always working on new ways to help make sure people’s time on Facebook is time well spent.”

The feature could help Facebook users stay mindful of how long they’re staring at the social network. This self-policing could be important since both iOS and Android are launching their own screen time monitoring dashboards that reveal which apps are dominating your attention and can alert you or lock you out of apps when you hit your time limit. When Apple demoed the feature at WWDC, it used Facebook as an example of an app you might use too much.

Images of Facebook’s digital wellbeing tool come courtesy of our favorite tipster and app investigator Jane Manchun Wong. She previously helped TechCrunch scoop the development of features like Facebook Avatars, Twitter encrypted DMs, and Instagram Usage Insights — a Time Well Spent feature that looks very similar to this one on Facebook.

Our report on Instagram Usage Insights led the sub-company’s CEO Kevin Systrom to confirm the upcoming feature, saying “It’s true . . . We’re building tools that will help the IG community know more about the time they spend on Instagram – any time should be positive and intentional . . . Understanding how time online impacts people is important, and it’s the responsibility of all companies to be honest about this. We want to be part of the solution. I take that responsibility seriously.”

Facebook has already made changes to its News Feed algorithm designed to reduce the presence of low-quality but eye-catching viral videos. That led to Facebook’s first ever usage decline in North America in Q4 2017, with a loss of 700,000 daily active users in the region. Zuckerberg said on the earnings call that this change “reduced time spent on Facebook by roughly 50 million hours every day.”

Zuckerberg has been adamant that all time spent on Facebook isn’t bad. Instead, as we argued in our piece “The Difference Between Good And Bad Facebooking”, it’s asocial, zombie-like passive browsing and video watching that’s harmful to people’s wellbeing, while active sharing, commenting, and chatting can make users feel more connected and supported.

But that distinction isn’t visible in this prototype of the “Your Time On Facebook” tool, which appears to treat all time spent the same. If Facebook were able to measure our active vs. passive time on its app and convey the health difference, it could start to encourage us to either put down the app, or use it to communicate directly with friends when we find ourselves mindlessly scrolling the feed or enviously viewing people’s photos.

Messenger Kids expands outside the U.S., rolls out ‘kindness’ features

Facebook’s kid-friendly messaging app, Messenger Kids, is expanding to its first countries outside the U.S. today, with launches in Canada and Peru. It’s also introducing French and Spanish versions of its app, and rolling out a handful of new features focused on promoting respect and empathy, including a “Messenger Kids Pledge” and something called “Kindness Stickers,” which are meant to inspire more positive emotions when communicating online.

The stickers say things like “MY BFF” or “Well Done!” or “Best Artist,” and are designed to be placed on shared photos.

Also helpful is the new “Messenger Kids Pledge,” which is designed for both parents and children to read together, and includes some basic guidelines about how to behave online. For example, it reminds everyone to “be kind when you communicate,” and to “be respectful,” explaining also that when someone doesn’t respond right away, they may just be too busy. “Be safe” and “have fun” are also a part of the guidelines.

This seems like a small addition, but it’s the kind of thing parents should already be doing with their kids when they introduce new technology – and many do not. Some parents don’t even know what apps kids are using, which has allowed those less secure apps to become hunting grounds for predators.

Messenger Kids works differently, as it requires parental involvement. Kids can’t add any friends without parental approval, and the app can be managed directly from parents’ Facebook.

While it’s understandable that people have a hard time trusting Facebook these days, there isn’t any viable alternative that allows kids to “practice” communicating or socializing online in a more controlled environment. Kids instead beg for apps aimed at adults and older teens, like Snapchat, Instagram, and Musical.ly – apps I personally won’t install for a “tween.”

Messenger Kids at least gives kids a way to privately socialize with approved people – kids whose parents you know and trust, and family members on Facebook. They’re at an age where you can still look over their shoulder, and correct bad behavior as it arises.

The alternative to using Messenger Kids is what a lot of parents do – they refuse all social apps until kids reach a certain age, then throw them to the wolves on the internet. Is that really better?

Despite its sandboxed nature, kids like Messenger Kids because it has the features they actually want from the adult-oriented apps – like photo filters and stickers. (If the app would please add Facebook’s new lip-sync feature so I could stop hearing the begs for Musical.ly on a daily basis, I’d be very appreciative.)

Related to its push for kindness and respect, Messenger Kids will also soon roll out an interactive guide within its app called the “Appreciation Mission” which will encourage kids to discover and express appreciation for their friends and family. This will live in the “Mission” section of the app, where kids learn how to use features, like starting a video call or sending a photo.

Facebook says it consults with the Yale Center for Emotional Intelligence and a global group of advisors on the development of the features focused on these principles of social and emotional learning. (The Yale Center is a paid advisor.)

Come to think of it, a lot of adults could benefit from these sorts of features, too. Maybe Facebook and Twitter should add their own in-app kindness reminders, as well?

Messenger Kids also added support for two parents to manage kids’ accounts, based on customer feedback.

The app is a free download on iOS and Android.

 

Messenger Kids | Safer Messaging and Video Chat

Messenger Kids is a free, safer messaging and video chat app that provides more control for parents and more fun for kids.

Posted by Messenger Kids on Tuesday, June 19, 2018

Happn takes on Tinder Places with an interactive map of missed connections

Dating app Happn, whose “missed connections” type of dating experience connects people who have crossed paths in real life, is fighting back at Tinder. Seemingly inspired by Happn’s location-based features, Tinder recently began piloting something called Tinder Places – a feature that tracks your location to match you with those people who visit your same haunts – like a favorite bar, bookshop, gym, restaurant, and more.

Of course Tinder’s move into location-based dating should worry Happn, which had built its entire dating app around the idea of matching up people who could have met in real life, but just missed doing so.

Now, Happn is challenging Tinder Places with a new feature of its own. It’s debuting an interactive map where users can discover those people they’ve crossed paths with over the past seven days.

Happn founder, French entrepreneur Didier Rappaport, dismisses the Tinder threat.

“We don’t see it as a threat at all but as a good thing,” he tells TechCrunch. “Find the people you’ve crossed paths with has always been in Happn’s DNA since the beginning….We are very flattered that Tinder wants to include the same feature in its product. However, we will never use the swipe in our product,” he says.

Rappaport believes swiping is wrong because it makes you think of the other person as a product, and that’s not Happn’s philosophy.

“We want to [give our users a chance] to interact or not with a person, to take their time to decide, to be able to move back in their timeline if suddenly they change their mind and want to have a second chance,” he notes.

To use Happn’s map, you’ll tap on a specific location you’ve visited, and are then presented with potential matches who have been there too, or within 250 meters of that spot. The map will use the same geolocation data that Happn already uses to create its timeline, but just displays it in another form.
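That 250-meter cutoff is a straightforward geodistance check. As a minimal illustrative sketch (not Happn’s actual implementation), a great-circle haversine calculation is the standard way to decide whether two coordinates fall within such a radius:

```python
import math

def within_radius(lat1, lon1, lat2, lon2, radius_m=250):
    """Return True if two coordinates lie within radius_m meters of
    each other, using the haversine great-circle distance."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a)) <= radius_m
```

At city scale this is accurate to well under a meter, which is more than enough precision for a “you were both near this spot” feature.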

For those who aren’t comfortable sharing their location all the time with a dating app (um, everyone?), Happn also offers an “invisibility” mode that lets people hide their location during particular parts of the day – for example, while they’re at work.

While Happn’s new feature is a nice upgrade for regular users, Tinder’s location-based features – we’re sorry to report – are more elegantly designed.

Today, Happn’s invisibility mode has to be turned on when you want to use it, or you have to pay for a subscription to schedule it to come on automatically at certain times. That means it requires more effort to use on a day-to-day basis.

Meanwhile, Tinder Places lets you block a regular place you visit – like, say, the gym – from ever being recorded as a place you want to show up for matches. It also automatically removes places that would be inappropriate, including your home and work addresses, and alerts you when it’s adding a new one – so you can quickly take action to remove it, if you choose. Tinder Places is also free. (It’s just not rolled out worldwide at this time).

Happn, however, does offer a way to hide your profile information and other details from select users, and, like Tinder, it never shows your current location in real time.

Happn, which launched back in 2014, now claims nearly 50 million users worldwide, across 50 major cities and 40 countries. It claims to have 6.5 million monthly users – but that’s much smaller, compared with Tinder’s estimated 50 million actives.

And with Tinder parent Match Group snatching up Hinge, suing Bumble, and effectively copying the idea of using “missed connections,” one has to wonder how much life rival dating apps, especially those of Happn’s size, have left.

The app is a free download on the App Store, Play Store and Windows Store.

Fb Messenger auto-translation chips at US/Mexico language wall

Facebook’s been criticized for tearing America apart, but now it will try to help us forge bonds with our neighbors to the south. Facebook Messenger will now offer optional auto-translation of English to Spanish and vice-versa for all users in the United States and Mexico. It’s a timely launch given the family separation troubles at the nations’ border.

The feature could facilitate cross-border and cross-language friendships, business, and discussion that might show people in the two countries that deep down we’re all just human. It could be especially powerful for US companies looking to use Messenger for conversational commerce without having to self-translate everything.

Facebook tells me “we were pleased with the results” following a test using AI to translate the language pair in Messenger for US Facebook Marketplace users in April.

Now when users receive a message in a language different from their default, Messenger’s AI assistant M will ask if they want it translated. All future messages in that thread will be auto-translated unless a user turns it off. Facebook plans to bring the feature to more language pairs and countries soon.

A Facebook spokesperson tells me “The goal with this launch is really to enable people to communicate with people they wouldn’t have been able to otherwise, in a way that is natural and seamless.”

Starting in 2011, Facebook began offering translation technology for News Feed posts and comments. For years it relied on Microsoft Bing’s translation technology, but Facebook switched to its own stack in mid-2016. By then it was translating 2 billion pieces of text a day for 800 million users.

Conversational translation is a lot tougher than social media posts, though. When we chat with friends, it’s more colloquial and full of slang. We’re also usually typing in more of a hurry and can be less accurate. But if Facebook can reliably figure out what we’re saying, Messenger could become the modern-day Babel Fish. At 2016’s F8, Facebook CEO Mark Zuckerberg threw shade on Donald Trump, saying “instead of building walls, we can build bridges.” Trump still doesn’t have that wall, and now Zuck is building a bridge with technology.

Facebook expands fact-checking program, adopts new technology for fighting fake news

Facebook this morning announced an expansion of its fact-checking program and other actions it’s taking to combat the scourge of fake news on its social network. The company, which was found to be compromised by Russian trolls whose disinformation campaigns around the November 2016 presidential election reached 150 million Americans, has been increasing its efforts at fact-checking news through a combination of technology and human review in the months since.

The company began fact-checking news on its site last spring, with help from independent third-party fact-checkers certified through the non-partisan International Fact-Checking Network. These fact-checkers rate the accuracy of stories, allowing Facebook to take action on those rated false by lowering them in the News Feed, and reducing the distribution of Pages that are repeat offenders.

Today, Facebook says it has since expanded this program to 14 countries around the world, and plans to roll it out to more countries by year-end. It also claims the impact of fact-checking reduced the distribution of fake news by an average of 80 percent.

The company also announced the expansion of its program for fact-checking photos and video. First unveiled this spring, the effort targets things like manipulated videos or misused photos, where images are taken out of context in order to push a political agenda. This is a huge issue, because memes have become a popular way of rallying people around a cause on the internet, but they often do so by completely misrepresenting the facts, using images from different events, places, and times.

One current example of this is the photo used by Drudge Report showing young boys holding guns in a story about the U.S.-Mexico border battle. The photo was actually taken nowhere near the border, but rather was snapped in Syria in 2012 and was captioned: “Four young Syrian boys with toy guns are posing in front of my camera during my visit to Azaz, Syria. Most people I met were giving the peace sign. This little city was taken by the Free Syrian Army in the summer of 2012 during the Battle of Azaz.”

Using fake or misleading images to stoke fear, disgust, or hatred of another group of people is a common way photos and videos are misused online.

Facebook also says it’s taking advantage of new machine learning technology to help it find duplicates of already debunked stories, and will work with fact-checking partners to use Schema.org‘s Claim Review, an open-source framework that will allow fact-checkers to share ratings with Facebook so the company can act more quickly, especially in times of crisis.
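Schema.org’s Claim Review framework is an open vocabulary, so the shape of a shared rating is public. Below is an illustrative sketch of a ClaimReview record as JSON-LD, built in Python; the claim text, organization name, and URLs are invented for the example, not real fact-checks:

```python
import json

# An illustrative ClaimReview record using the Schema.org vocabulary.
# The claim, rating, organization, and URLs are made up for this example.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "claimReviewed": "Photo shows armed children at the border",
    "itemReviewed": {
        "@type": "Claim",
        "appearance": {"@type": "CreativeWork", "url": "https://example.com/original-post"},
    },
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",
    },
    "author": {"@type": "Organization", "name": "Example Fact-Checker"},
    "url": "https://example.com/fact-check/border-photo",
}

# Serialized as JSON-LD, a record like this is machine-readable, which is
# what lets a platform ingest third-party ratings and act on them quickly.
payload = json.dumps(claim_review)
```

The point of the shared schema is interoperability: any fact-checker emitting records in this shape can be consumed by any platform without a bespoke integration.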

And the company says it will expand its efforts in downranking fake news by using machine learning to demote foreign Pages that are spreading financially-motivated hoaxes to people in other countries.

In the weeks ahead, an elections research commission working in partnership with Facebook to measure the volume and effect of misinformation on the social network will launch its website and its first request for proposals.

The company had already announced its plans to further investigate the role social media plays in elections and in democracy. The commission will receive access to privacy-protected data sets with a sample of links that people engaged with on Facebook, which will allow it to understand what sort of content is being shared. Facebook says the research will “help keep us accountable and track our progress.”

Twitter acquires anti-abuse technology provider Smyte

Twitter this morning announced it has agreed to buy San Francisco-based technology company Smyte, which describes itself as “trust and safety as a service.” Founded in 2014 by former Google and Instagram engineers, Smyte offers tools to stop online abuse, harassment, and spam, and protect user accounts.

Terms of the deal were not disclosed, but this is Twitter’s first acquisition since buying consumer mobile startup Yes, Inc. back in December 2016.

Online harassment has been of particular concern to Twitter in recent months, as the level of online discourse across the web has become increasingly hate-filled and abusive. The company has attempted to combat this problem with new policies focused on the reduction of hate speech, violent threats, and harassment on its platform, but it’s fair to say that problem is nowhere near solved.

As anyone who uses Twitter will tell you, the site continues to be filled with trolls, abusers, bots, and scams – and especially crypto scams, as of late.

This is where Smyte’s technology – and its team – could help.

The company was founded by engineers with backgrounds in fighting spam, fraud, and security threats.

Smyte CEO Pete Hunt previously led Instagram’s web team, built Instagram’s business analytics products, and helped to open source Facebook’s React.js; co-founder Julian Tempelsman worked on Gmail’s spam and abuse team, and before that Google Wallet’s anti-fraud team and the Google Drive anti-abuse team; and co-founder Josh Yudaken was a member of Instagram’s core infrastructure team.

The startup launched out of Y Combinator in 2015, with a focus on preventing online fraud.

Today, its solutions are capable of stopping all sorts of unwanted online behavior, including phishing, spam, fake accounts, cyberbullying, hate speech and trolling, the company’s website claims.

Smyte offers customers access to its technology via a REST API, or it can pull data directly from a customer’s app or data warehouse to analyze. Smyte then imports the customer’s existing rules, and uses machine learning to create new rules and other machine learning models suited to the business’s specific needs.
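Smyte hasn’t published its API, so purely as a hypothetical sketch of what reporting an event to a trust-and-safety REST API might look like – the endpoint and every field name below are invented, not Smyte’s actual interface:

```python
import json

# Hypothetical endpoint for a trust-and-safety service (not Smyte's real API).
API_URL = "https://api.example-moderation.com/v1/events"

def build_event(actor_id, action, content):
    """Package a user action as a JSON event body for classification.

    The service would run its rules and ML models against this event
    and return a verdict (e.g. allow, flag for review, or block)."""
    return json.dumps({
        "actor": actor_id,
        "action": action,      # e.g. "post_comment", "send_message"
        "content": content,    # the text to evaluate for spam/abuse
        "metadata": {"source": "web"},
    })
```

In practice a client would POST this body to the API and branch on the returned verdict; the appeal of the approach is that the rules live server-side, so they can be updated without shipping new client code.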

Customers’ data scientists can also use Smyte to deploy (but not train) their own custom machine learning models.

Smyte’s system includes a dashboard where analysts can surface emerging trends in real-time, as well as conduct manual reviews of individual entities or clusters of related entities and take bulk actions.

Non-technical analysts could use Smyte to create custom rules tested on historical data, then roll them out to production and watch how they perform in real-time.

For Twitter, the use case for Smyte is obvious – its technology will be integrated with Twitter itself and its backend systems for monitoring and managing reports of abuse, while also taking aim at bots, scammers and a number of other threats today’s social networks typically face.

Of course, combatting abuse and bullying will remain Twitter’s most pressing area of concern – especially as it’s the place where President Trump tweets, and the daily news is reported and discussed (and angrily fought about).

But Twitter could use some help with its troll and bot problem, too. The company, along with Facebook, was home to Russian propaganda during the 2016 U.S. presidential election. In January, Twitter notified at least 1.4 million users that they had seen content created by Russian trolls; it was also found to have hosted roughly 50,000 Russian bots tweeting election-related content in November 2016.

Presumably, Smyte’s technology could help weed out some of these bad actors, if it works as well as described.

Twitter didn’t provide much detail as to how, specifically, it plans to put Smyte’s technology to use.

Instead, the company largely touted the team’s expertise and the “proactive” nature of Smyte’s anti-abuse systems, in today’s announcement:

From ensuring safety and security at some of the world’s largest companies to specialized domain expertise, Smyte’s years of experience with these issues brings valuable insight to our team. The Smyte team has dealt with many unique issues facing online safety and believes in the same proactive approach that we’re taking for Twitter: stopping abusive behavior before it impacts anyone’s experience. We can’t wait until they join our team to help us make changes that will further improve the health of the public conversation.

According to Smyte’s website, the company has a number of high-profile clients, including Indiegogo, GoFundMe, npm, Musical.ly, TaskRabbit, Meetup, OLX, ThredUp, YouNow, 99 Designs, Carousell, and Zendesk.

Twitter tells us that Smyte will wind down its operations with those customers – it didn’t acquire Smyte for its revenue-generation potential, but rather for its talent and IP.

 

LinkedIn reports there are only a couple dozen employees at Smyte today, including the founders. But Smyte’s own website lists just nineteen. Twitter wouldn’t confirm Smyte’s current headcount, but says it’s working to find positions for all.

Terms of the deal were not disclosed, but Smyte had raised $6.3 million in funding from Y Combinator, Baseline Ventures, Founder Collective, Upside Partnership, Avalon Ventures, and Harrison Metal, according to Crunchbase.