
Instagram’s fundraiser stickers could lure credit card numbers

Mark Zuckerberg recently revealed that commerce is a huge part of the 2019 roadmap for Facebook’s family of apps. But before people can easily buy things from Instagram and Facebook’s other apps, the company needs their credit card info on file. That’s a potentially lucrative side effect of Instagram’s plan to launch a Fundraiser sticker in 2019. Facebook’s own Donate buttons have raised $1 billion, and bringing them to Instagram’s 1 billion users could do a lot of good while furthering Facebook’s commerce strategy.

New code and imagery dug out of Instagram’s Android app reveals how the Fundraiser stickers will allow you to search for non-profits and add a Donate button for them to your Instagram Story. After you’ve donated to something once, Instagram could offer instant checkout on stuff you want to buy using the same payment details.

Back in 2013 when Facebook launched its Donate button, I suggested that it could add a “remove credit card after checkout” option to its fundraisers if it wanted to make it clear that the feature was purely altruistic. Facebook never did that. You still need to go into your payment settings, or click through the See Receipt option after donating and then edit your account settings, to remove your credit card. We’ll see if Instagram is any different. We’ve also asked whether Instagrammers will be able to raise money for personal causes, which would make the feature more of a competitor to GoFundMe, which has sadly become the social safety net for many facing healthcare crises.

Facebook mentioned at its Communities Summit earlier this month that it’d be building Instagram Fundraiser stickers, but the announcement was largely overshadowed by the company’s reveal of new Groups features. This week, TechCrunch tipster Ishan Agarwal found code in the Instagram Android app detailing how users will be able to search for non-profits or browse collections of Suggested charities and ones they follow. They can then overlay a Donate button sticker on their Instagram Story that their followers can click through to contribute.

We then asked reverse engineering specialist Jane Manchun Wong to take a look, and she was able to generate the screenshots seen above that show a green heart icon for the Fundraiser sticker plus the non-profit search engine. Facebook spokespeople tell me that “We are in early stages and working hard to bring this experience to our community . . . Instagram is all about bringing you closer to the people and things you love, and a big part of that is showing support for and bringing awareness to meaningful communities and causes. Later this year, people will be able to raise money and help support nonprofits that are important to them through a donation sticker in Instagram Stories. We’re excited to bring this experience to our community and will share more updates in the coming months.”

Zuckerberg said during the Q4 2018 earnings call last month that “In Instagram, one of the areas I’m most excited about this year is commerce and shopping . . . there’s also a very big opportunity in basically enabling the transactions and making it so that the buying experience is good”. Streamlining those transactions through saved payment details means more people will complete their purchase rather than abandoning their cart. Facebook CFO David Wehner noted on the call that “Continuing to build good advertising products for our e-commerce clients on the advertising side will be a more important contributor to revenue in the foreseeable future”. Even though Facebook isn’t charging a fee on transactions, powering higher commerce conversion rates convinces merchants to buy more ads on the platform.

With all the talk of envy spiraling, phone addiction, bullying, and political propaganda, enabling donations is at least one way Instagram can prove it’s beneficial to the world. Snapchat lacks formal charity features, and Twitter appears to have ended its experiment allowing non-profits to tweet donate buttons. Despite all the flak Facebook rightfully takes, the company has shown a strong track record with philanthropy that mirrors Zuckerberg’s own $47 billion commitment through the Chan Zuckerberg Initiative. And if having some relatively benign secondary business benefit speeds companies towards assisting non-profits, that’s a trade-off we should be willing to embrace.

YouTube under fire for recommending videos of kids with inappropriate comments

More than a year on from a child safety content moderation scandal on YouTube, it takes just a few clicks for the platform’s recommendation algorithms to redirect a search for “bikini haul” videos of adult women towards clips of scantily clad minors engaged in body-contorting gymnastics, or taking an ice bath or ice lolly sucking “challenge”.

A YouTube creator called Matt Watson flagged the issue in a critical Reddit post, saying he found scores of videos of kids where YouTube users are trading inappropriate comments and timestamps below the fold, denouncing the company for failing to prevent what he describes as a “soft-core pedophilia ring” from operating in plain sight on its platform.

He has also posted a YouTube video demonstrating how the platform’s recommendation algorithm pushes users into what he dubs a pedophilia “wormhole”, accusing the company of facilitating and monetizing the sexual exploitation of children.

We were easily able to replicate the YouTube algorithm’s behavior that Watson describes in a history-cleared private browser session which, after clicking on two videos of adult women in bikinis, suggested we watch a video called “sweet sixteen pool party”.

Clicking on that led YouTube’s sidebar to serve up multiple videos of prepubescent girls in its ‘up next’ section, where the algorithm tees up related content to encourage users to keep clicking.

Videos recommended in this sidebar included thumbnails showing young girls demonstrating gymnastics poses, showing off their “morning routines”, or licking popsicles or ice lollies.

Watson said it was easy for him to find videos containing inappropriate/predatory comments, including sexually suggestive emoji and timestamps that appear intended to highlight, shortcut and share the most compromising positions and/or moments in the videos of the minors.

We also found multiple examples of timestamps and inappropriate comments on videos of children that YouTube’s algorithm recommended we watch.

Some comments by other YouTube users denounced those making sexually suggestive remarks about the children in the videos.

Back in November 2017 several major advertisers froze spending on YouTube’s platform after an investigation by the BBC and the Times discovered similarly obscene comments on videos of children.

Earlier the same month YouTube was also criticized over low quality content targeting kids as viewers on its platform.

The company went on to announce a number of policy changes related to kid-focused video, including saying it would aggressively police comments on videos of kids and that videos found to have inappropriate comments about the kids in them would have comments turned off altogether.

Some of the videos of young girls that YouTube recommended we watch had already had comments disabled — suggesting its AI had previously identified large numbers of inappropriate comments being shared, given its policy of switching off comments on clips containing kids when comments are deemed “inappropriate” — yet the videos themselves were still being suggested for viewing in a test search that originated with the phrase “bikini haul”.

Watson also says he found ads being displayed on some videos of kids containing inappropriate comments, and claims that he found links to child pornography being shared in YouTube comments too.

We were unable to verify those findings in our brief tests.

We asked YouTube why its algorithms skew towards recommending videos of minors, even when the viewer starts by watching videos of adult women, and why inappropriate comments remain a problem on videos of minors more than a year after the same issue was highlighted via investigative journalism.

The company sent us the following statement in response to our questions:

Any content — including comments — that endangers minors is abhorrent and we have clear policies prohibiting this on YouTube. We enforce these policies aggressively, reporting it to the relevant authorities, removing it from our platform and terminating accounts. We continue to invest heavily in technology, teams and partnerships with charities to tackle this issue. We have strict policies that govern where we allow ads to appear and we enforce these policies vigorously. When we find content that is in violation of our policies, we immediately stop serving ads or remove it altogether.

A spokesman for YouTube also told us it’s reviewing its policies in light of what Watson has highlighted, adding that it’s in the process of reviewing the specific videos and comments featured in his video — specifying also that some content has been taken down as a result of the review.

The spokesman emphasized, though, that the majority of the videos flagged by Watson are innocent recordings of children doing everyday things. (Though of course the problem is that innocent content is being repurposed and time-sliced for abusive gratification and exploitation.)

The spokesman added that YouTube works with the National Center for Missing and Exploited Children to report accounts found making inappropriate comments about kids to law enforcement.

In wider discussion about the issue the spokesman told us that determining context remains a challenge for its AI moderation systems.

On the human moderation front he said the platform now has around 10,000 human reviewers tasked with assessing content flagged for review.

The volume of video content uploaded to YouTube is around 400 hours per minute, he added.
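
Taking those two figures at face value, the mismatch is easy to quantify. Here’s a rough back-of-envelope sketch (the inputs are the spokesman’s approximations above, not audited numbers):

```python
# Back-of-envelope estimate of YouTube's moderation asymmetry, using the
# approximate figures cited above: ~400 hours of video uploaded per minute
# and ~10,000 human reviewers.
upload_hours_per_minute = 400
human_reviewers = 10_000

hours_uploaded_per_day = upload_hours_per_minute * 60 * 24  # 576,000 hours/day
hours_per_reviewer_per_day = hours_uploaded_per_day / human_reviewers

print(f"{hours_uploaded_per_day:,} hours uploaded per day")
print(f"{hours_per_reviewer_per_day:.1f} hours of new video per reviewer per day")
# ~57.6 hours each per day; far more than anyone could watch, even before
# counting the platform's existing back catalog.
```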

There is still very clearly a massive asymmetry around content moderation on user-generated content platforms: AI is poorly suited to plug the gap, given its ongoing weakness in understanding context, even as platforms’ human moderation teams remain hopelessly under-resourced and outgunned versus the scale of the task.

Another key point which YouTube failed to mention is the clear tension between advertising-based business models that monetize content based on viewer engagement (such as its own), and content safety issues that require careful consideration of both the substance of the content and the context in which it’s consumed.

It’s certainly not the first time YouTube’s recommendation algorithms have been called out for negative impacts. In recent years the platform has been accused of automating radicalization by pushing viewers towards extremist and even terrorist content — which led YouTube to announce another policy change in 2017 related to how it handles content created by known extremists.

The wider societal impact of algorithmic suggestions that inflate conspiracy theories and/or promote bogus, anti-factual health or scientific content has also been repeatedly raised as a concern — including on YouTube.

And only last month YouTube said it would reduce recommendations of what it dubbed “borderline content” and content that “could misinform users in harmful ways”, citing examples such as videos promoting a fake miracle cure for a serious illness, or claiming the earth is flat, or making “blatantly false claims” about historic events such as the 9/11 terrorist attack in New York.

“While this shift will apply to less than one percent of the content on YouTube, we believe that limiting the recommendation of these types of videos will mean a better experience for the YouTube community,” it wrote then. “As always, people can still access all videos that comply with our Community Guidelines and, when relevant, these videos may appear in recommendations for channel subscribers and in search results. We think this change strikes a balance between maintaining a platform for free speech and living up to our responsibility to users.”

YouTube said that change of algorithmic recommendations around conspiracy videos would be gradual, and only initially affect recommendations on a small set of videos in the US.

It also noted that implementing the tweak to its recommendation engine would involve both machine learning tech and human evaluators and experts helping to train the AI systems.

“Over time, as our systems become more accurate, we’ll roll this change out to more countries. It’s just another step in an ongoing process, but it reflects our commitment and sense of responsibility to improve the recommendations experience on YouTube,” it added.

It remains to be seen whether YouTube will expand that policy shift and decide it must exercise greater responsibility in how its platform recommends and serves up videos of children for remote consumption in the future.

Political pressure may be one motivating force, with momentum building for regulation of online platforms — including calls for Internet companies to face clear legal liabilities and even a legal duty of care towards users vis-a-vis the content they distribute and monetize.

For example, UK regulators have made legislating on Internet and social media safety a policy priority — with the government due to publish a White Paper this winter setting out its plans for regulating platforms.

The UK government thinks it’s time for Facebook to be regulated

For the last 18 months, UK lawmakers have investigated Facebook, and they have now recommended that it and other social media giants be regulated.

A damning report released on Monday said that, after years of self-regulation, these companies had proven unable to protect users' data and privacy, or to protect them from disinformation.

The UK parliament's Digital, Culture, Media and Sport Committee (DCMS) final report recommended that an independent regulator be set up (like the UK's Ofcom or the FCC), along with a compulsory code of ethics for social media companies.


UK parliament calls for antitrust, data abuse probe of Facebook

A final report by a British parliamentary committee which spent months last year investigating online political disinformation makes very uncomfortable reading for Facebook — with the company singled out for “disingenuous” and “bad faith” responses to democratic concerns about the misuse of people’s data.

In the report, published today, the committee has also called for Facebook’s use of user data to be investigated by the UK’s data watchdog.

In an evidence session to the committee late last year, the Information Commissioner’s Office (ICO) suggested Facebook needs to change its business model — warning the company risks burning user trust for good.

Last summer the ICO also called for an ethical pause of social media ads for election campaigning, warning of the risk of developing “a system of voter surveillance by default”.

Interrogating the distribution of ‘fake news’

The UK parliamentary enquiry looked both into Facebook’s own use of personal data to further its business interests — such as by providing access to user data to developers and advertisers in order to increase revenue and/or usage — and into what Facebook claimed was ‘abuse’ of its platform by the disgraced (and now defunct) political data company Cambridge Analytica, which in 2014 paid a developer with access to Facebook’s developer platform to extract information on millions of Facebook users in order to build voter profiles and try to influence elections.

The committee’s conclusion about Facebook’s business is a damning one, with the company accused of operating a business model that’s predicated on selling abusive access to people’s data.

“Far from Facebook acting against ‘sketchy’ or ‘abusive’ apps, of which action it has produced no evidence at all, it, in fact, worked with such apps as an intrinsic part of its business model,” the committee argues. “This explains why it recruited the people who created them, such as Joseph Chancellor [the co-founder of GSR, the developer which sold Facebook user data to Cambridge Analytica]. Nothing in Facebook’s actions supports the statements of Mark Zuckerberg who, we believe, lapsed into ‘PR crisis mode’, when its real business model was exposed.”

“This is just one example of the bad faith which we believe justifies governments holding a business such as Facebook at arms’ length. It seems clear to us that Facebook acts only when serious breaches become public. This is what happened in 2015 and 2018.”

“We consider that data transfer for value is Facebook’s business model and that Mark Zuckerberg’s statement that ‘we’ve never sold anyone’s data’ is simply untrue,” the committee also concludes.

We’ve reached out to Facebook for comment on the committee’s report.

Last fall the company was issued the maximum possible fine under relevant UK data protection law for failing to safeguard user data in the Cambridge Analytica saga — though Facebook is appealing the ICO’s penalty, claiming there’s no evidence UK users’ data was misused.

During the course of a multi-month enquiry last year investigating disinformation and fake news, the Digital, Culture, Media and Sport (DCMS) committee heard from 73 witnesses in 23 oral evidence sessions, as well as taking in 170 written submissions. In all, the committee says it posed more than 4,350 questions.

Its wide-ranging, 110-page report makes detailed observations on a number of technologies and business practices across the social media, adtech and strategic communications space, and culminates in a long list of recommendations for policymakers and regulators — reiterating its call for tech platforms to be made legally liable for content.

Among the report’s main recommendations are:

  • clear legal liabilities for tech companies to act against “harmful or illegal content”, with the committee calling for a compulsory Code of Ethics overseen by an independent regulator with statutory powers to obtain information from companies, instigate legal proceedings and issue (“large”) fines for non-compliance
  • privacy law protections to cover inferred data so that models used to make inferences about individuals are clearly regulated under UK data protection rules
  • a levy on tech companies operating in the UK to support enhanced regulation of such platforms
  • a call for the ICO to investigate Facebook’s platform practices and use of user data
  • a call for the Competition and Markets Authority to comprehensively “audit” the online advertising ecosystem, and also to investigate whether Facebook specifically has engaged in anti-competitive practices
  • changes to UK election law to take account of digital campaigning, including “absolute transparency of online political campaigning” — including “full disclosure of the targeting used” — and more powers for the Electoral Commission
  • a call for a government review of covert digital influence campaigns by foreign actors (plus a review of legislation in the area to consider if it’s adequate) — including the committee urging the government to launch independent investigations of recent past elections to examine “foreign influence, disinformation, funding, voter manipulation, and the sharing of data, so that appropriate changes to the law can be made and lessons can be learnt for future elections and referenda”
  • a requirement on social media platforms to develop tools to distinguish between “quality journalism” and low quality content sources, and/or work with existing providers to make such services available to users

Among the areas the committee’s report covers off with detailed commentary are data use and targeting; advertising and political campaigning — including foreign influence; and digital literacy.

It argues that regulation is urgently needed to restore democratic accountability and “make sure the people stay in charge of the machines”.

Ministers are due to produce a White Paper on social media safety regulation this winter and the committee writes that it hopes its recommendations will inform government thinking.

“Much has been said about the coarsening of public debate, but when these factors are brought to bear directly in election campaigns then the very fabric of our democracy is threatened,” the committee writes. “This situation is unlikely to change. What does need to change is the enforcement of greater transparency in the digital sphere, to ensure that we know the source of what we are reading, who has paid for it and why the information has been sent to us. We need to understand how the big tech companies work and what happens to our data.”

The report calls for tech companies to be regulated as a new category, “not necessarily either a ‘platform’ or a ‘publisher’”, but one which legally tightens their liability for harmful content published on their platforms.

Last month another UK parliamentary committee also urged the government to place a legal ‘duty of care’ on platforms to protect users under the age of 18 — and the government said then that it has not ruled out doing so.

“Digital gangsters”

Competition concerns are also raised several times by the committee.

“Companies like Facebook should not be allowed to behave like ‘digital gangsters’ in the online world, considering themselves to be ahead of and beyond the law,” the DCMS committee writes, going on to urge the government to investigate whether Facebook specifically has been involved in any anti-competitive practices and conduct a review of its business practices towards other developers “to decide whether Facebook is unfairly using its dominant market position in social media to decide which businesses should succeed or fail”. 

“The big tech companies must not be allowed to expand exponentially, without constraint or proper regulatory oversight,” it adds.

The committee suggests existing legal tools are up to the task of reining in platform power, citing privacy laws, data protection legislation, antitrust and competition law — and calling for a “comprehensive audit” of the social media advertising market by the UK’s Competition and Markets Authority, and a specific antitrust probe of Facebook’s business practices.

“If companies become monopolies they can be broken up, in whatever sector,” the committee points out. “Facebook’s handling of personal data, and its use for political campaigns, are prime and legitimate areas for inspection by regulators, and it should not be able to evade all editorial responsibility for the content shared by its users across its platforms.”

The social networking giant was the recipient of many awkward queries during the course of the committee’s enquiry but it refused repeated requests for its founder Mark Zuckerberg to testify — sending a number of lesser staffers in his stead.

That decision continues to be seized upon by the committee as evidence of a lack of democratic accountability. It also accuses Facebook of having an intentionally “opaque management structure”.

“By choosing not to appear before the Committee and by choosing not to respond personally to any of our invitations, Mark Zuckerberg has shown contempt towards both the UK Parliament and the ‘International Grand Committee’, involving members from nine legislatures from around the world,” the committee writes.

“The management structure of Facebook is opaque to those outside the business and this seemed to be designed to conceal knowledge of and responsibility for specific decisions. Facebook used the strategy of sending witnesses who they said were the most appropriate representatives, yet had not been properly briefed on crucial issues, and could not or chose not to answer many of our questions. They then promised to follow up with letters, which—unsurprisingly—failed to address all of our questions. We are left in no doubt that this strategy was deliberate.”

It doubles down on the accusation that Facebook sought to deliberately mislead its enquiry — pointing to incorrect and/or inadequate responses from staffers who did testify.

“We are left with the impression that either [policy VP] Simon Milner and [CTO] Mike Schroepfer deliberately misled the Committee or they were deliberately not briefed by senior executives at Facebook about the extent of Russian interference in foreign elections,” it suggests.

In an unusual move late last year the committee used rare parliamentary powers to seize a cache of documents related to an active US lawsuit against Facebook filed by a developer called Six4Three.

The cache of documents is referenced extensively in the final report, and appears to have fuelled antitrust concerns, with the committee arguing that the evidence obtained from the internal company documents “indicates that Facebook was willing to override its users’ privacy settings in order to transfer data to some app developers, to charge high prices in advertising to some developers, for the exchange of that data, and to starve some developers… of that data, thereby causing them to lose their business”.

“It seems clear that Facebook was, at the very least, in violation of its Federal Trade Commission [privacy] settlement,” the committee also argues, citing evidence from the former chief technologist of the FTC, Ashkan Soltani.

On Soltani’s evidence, it writes:

Ashkan Soltani rejected [Facebook’s] claim, saying that up until 2012, platform controls did not exist, and privacy controls did not apply to apps. So even if a user set their profile to private, installed apps would still be able to access information. After 2012, Facebook added platform controls and made privacy controls applicable to apps. However, there were ‘whitelisted’ apps that could still access user data without permission and which, according to Ashkan Soltani, could access friends’ data for nearly a decade before that time. Apps were able to circumvent users’ privacy or platform settings and access friends’ information, even when the user disabled the Platform. This was an example of Facebook’s business model driving privacy violations.

While Facebook is singled out for the most eviscerating criticism in the report (and targeted for specific investigations), the committee’s long list of recommendations is addressed to social media businesses and online advertisers generally.

It also calls for far more transparency from platforms, writing that: “Social media companies need to be more transparent about their own sites, and how they work. Rather than hiding behind complex agreements, they should be informing users of how their sites work, including curation functions and the way in which algorithms are used to prioritise certain stories, news and videos, depending on each user’s profile. The more people know how the sites work, and how the sites use individuals’ data, the more informed we shall all be, which in turn will make choices about the use and privacy of sites easier to make.”

The committee also urges a raft of updates to UK election law — branding it “not fit for purpose” in the digital era.

Its interim report, published last summer, made many of the same recommendations.

Russian interest

But despite pressing the government for urgent action there was only a cool response from ministers then, with the government remaining tied up trying to shape a response to the 2016 Brexit vote which split the country (with social media’s election-law-deforming help). Instead it opted for a ‘wait and see’ approach.

The government accepted just three of the preliminary report’s forty-two recommendations outright, and fully rejected four.

Nonetheless, the committee has doubled down on its preliminary conclusions, reiterating earlier recommendations and pushing the government once again to act.

It cites fresh evidence, including from additional testimony, as well as pointing to other reports (such as the recently published Cairncross Review) which it argues back up some of the conclusions reached. 

“Our inquiry over the last year has identified three big threats to our society. The challenge for the year ahead is to start to fix them; we cannot delay any longer,” writes Damian Collins MP, chair of the DCMS Committee, in a statement. “Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms we use every day. Much of this is directed from agencies working in foreign countries, including Russia.

“The big tech companies are failing in the duty of care they owe to their users to act against harmful content, and to respect their data privacy rights. Companies like Facebook exercise massive market power which enables them to make money by bullying the smaller technology companies and developers who rely on this platform to reach their customers.”

“These are issues that the major tech companies are well aware of, yet continually fail to address. The guiding principle of the ‘move fast and break things’ culture often seems to be that it is better to apologise than ask permission. We need a radical shift in the balance of power between the platforms and the people,” he added.

“The age of inadequate self-regulation must come to an end. The rights of the citizen need to be established in statute, by requiring the tech companies to adhere to a code of conduct written into law by Parliament, and overseen by an independent regulator.”

The committee says it expects the government to respond to its recommendations within two months — noting rather dryly: “We hope that this will be much more comprehensive, practical, and constructive than their response to the Interim Report, published in October 2018. Several of our recommendations were not substantively answered and there is now an urgent need for the Government to respond to them.”

It also makes a point of including an analysis of Internet traffic to the government’s own response to its preliminary report last year — in which it highlights a “high proportion” of online visitors hailing from Russian cities including Moscow and Saint Petersburg…

(Chart: traffic to the government’s response. Source: Web and publications unit, House of Commons)

“This itself demonstrates the very clear interest from Russia in what we have had to say about their activities in overseas political campaigns,” the committee remarks, criticizing the government response to its preliminary report for claiming there’s no evidence of “successful” Russian interference in UK elections and democratic processes.

“It is surely a sufficient matter of concern that the Government has acknowledged that interference has occurred, irrespective of the lack of evidence of impact. The Government should be conducting analysis to understand the extent of Russian targeting of voters during elections,” it adds.

Three senior managers knew

Another interesting tidbit from the report is confirmation that the ICO has shared the names of three “senior managers” at Facebook who knew about the Cambridge Analytica data breach prior to the first press report in December 2015 — which is the date Facebook has repeatedly told the committee was when it first learnt of the breach, contradicting what the ICO found via its own investigations.

The committee’s report does not disclose the names of the three senior managers — saying the ICO has asked the names to remain confidential (we’ve reached out to the ICO to ask why it is not making this information public) — and implies the execs did not relay the information to Zuckerberg.

The committee dubs this an example of “a profound failure” of internal governance, branding it evidence of “fundamental weakness” in how Facebook manages its responsibilities to users.

Here’s the committee’s account of that detail:

We were keen to know when and which people working at Facebook first knew about the GSR/Cambridge Analytica breach. The ICO confirmed, in correspondence with the Committee, that three “senior managers” were involved in email exchanges earlier in 2015 concerning the GSR breach before December 2015, when it was first reported by The Guardian. At the request of the ICO, we have agreed to keep the names confidential, but it would seem that this important information was not shared with the most senior executives at Facebook, leading us to ask why this was the case.

The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests.

VCs aren’t falling in love with dating startups

Some 17 years ago, when internet dating was popular but still kind of embarrassing to talk about, I interviewed an author who was particularly bullish on the practice. Millions of people, he said, have found gratifying relationships online. Were it not for the internet, they would probably never have met.

A lot of years have passed since then. Yet thanks to Joe Schwartz, an author of a 20-year-old dating advice book, “gratifying relationship” is still the term that sticks in my mind when contemplating the end-goal of internet dating tools.

Gratifying is a vague term, yet also uniquely accurate. It encompasses everything from the forever love of a soul mate to the temporary fix of a one-night stand. Romantics can talk about true love. Yet when it comes to the algorithm-and-swipe-driven world of online dating, it’s all about gratification.

It is with this in mind, coincident with the arrival of Valentine’s Day, that Crunchbase News is taking a look at the state of that most awkward of pairings: startups and the pursuit of finding a mate.

Pairing money

Before we go further, be forewarned: This article will do nothing to help you navigate the features of new dating platforms, fine-tune your profile or find your soul mate. It is written by someone whose core expertise is staring at startup funding data and coming up with trends.

So, if you’re OK with that, let’s proceed. We’ll start with the initial observation that while online dating is a vast and often very profitable industry, it isn’t a huge magnet for venture funding.

In 2018, for instance, venture investors put $127 million globally into 27 startups categorized by Crunchbase as dating-focused. While that’s not chump change, it’s certainly tiny compared to the more than $300 billion in global venture investment across all sectors last year.

In the chart below, we look at global venture investment in dating-focused startups over the past five years. The general finding is that round counts fluctuate moderately year-to-year, while investment totals fluctuate heavily. The latter is due to a handful of giant funding rounds for China-based startups.

While the U.S. gets the most commitments, China gets the biggest ones

While the U.S. is home to the majority of funded startups in the Crunchbase dating category, the bulk of investment has gone to China.

In 2018, for instance, nearly 80 percent of dating-related investment went to a single company, China-based Blued, a Grindr-style hookup app for gay men. In 2017, the bulk of capital went to Chinese mobile dating app Tantan, and in 2014, Beijing-based matchmaking site Baihe raised a staggering $250 million.

Meanwhile, in the U.S., we are seeing an assortment of startups raising smaller rounds, but no big disclosed financings in the past three years. In the chart below, we look at a few of the largest funding recipients.

 

Dating app outcomes

Dating sites and apps have generated some solid exits in the past few years, as well as some less-stellar outcomes.

Mobile-focused matchmaking app Zoosk is one of the most heavily funded players in the space that has yet to generate an exit. The San Francisco company raised more than $60 million between 2008 and 2012, but had to withdraw a planned IPO in 2015 due to flagging market interest.

Startups without known venture funding, meanwhile, have managed to bring in some bigger outcomes. One standout in this category is Grindr, the geolocation-powered dating and hookup app for gay men. China-based tech firm Kunlun Group bought 60 percent of the West Hollywood-based company in 2016 for $93 million and reportedly paid around $150 million for the remaining stake a year ago. Another apparent success story is OkCupid, which sold to Match.com in 2011 for $50 million.

As for venture-backed companies, one of the earlier-funded startups in the online matchmaking space, eHarmony, did score an exit last fall with an acquisition by German media company ProSiebenSat.1 Media SE. But terms weren’t disclosed, making it difficult to gauge returns.

One startup VCs are assuredly happy they passed on is Ashley Madison, a site best known for targeting married people seeking affairs. A venture investor pitched by the company years ago told me its financials were quite impressive, but its focus area would not pass muster with firm investors or the VCs’ spouses.

The dating site eventually found itself engulfed in scandal in 2015 when hackers stole and released virtually all of its customer data. Notably, the site is still around, a unit of Canada-based dating network ruby. It has changed its motto, however, from “Life is short. Have an affair,” to “Find Your Moment.”

An algorithm-chosen match

With the spirit of Valentine’s Day in the air, it occurs to me that I should restate the obvious: Startup funding databases do not contain much about romantic love.

The Crunchbase data set produced no funded U.S. startups with “romantic” in their business descriptions. Just five used the word “romance” (of which one is a cold brew tea company).

We get it. Our cultural conceptions of romance are decidedly low-tech. We think of poetry, flowers, loaves of bread and jugs of wine. We do not think of algorithms and swipe-driven mobile platforms.

Dating sites, too, seem to prefer promoting themselves on practicality and effectiveness, rather than romance. Take how Match Group, the largest publicly traded player in the dating game, describes its business via that most swoon-inducing of epistles, the 10-K report: “Our strategy focuses on a brand portfolio approach, through which we attempt to offer dating products that collectively appeal to the broadest spectrum of consumers.”

That kind of writing might turn off romantics, but shareholders love it. Shares of Match Group, whose portfolio includes Tinder, have more than tripled since Valentine’s Day 2017. Its current market cap is around $16 billion.

So, complain about the company’s dating products all you like. But it’s clear investors are having a gratifying relationship with Match. When it comes to startups, however, it appears they’re still mostly swiping left.

Even years later, Twitter doesn’t delete your direct messages

When does “delete” really mean delete? Not always, or even at all, if you’re Twitter.

Twitter retains direct messages for years, including messages you and others have deleted, but also data sent to and from accounts that have been deactivated and suspended, according to security researcher Karan Saini.

Saini found years-old messages in a file from an archive of his data obtained through the website, including messages from accounts that were no longer on Twitter. He also filed a similar bug, found a year earlier but not disclosed until now, that allowed him to use a since-deprecated API to retrieve direct messages even after a message was deleted from both the sender and the recipient — though the bug wasn’t able to retrieve messages from suspended accounts.

Saini told TechCrunch that he had “concerns” that the data was retained by Twitter for so long.

Direct messages once let users “unsend” messages from someone else’s inbox, simply by deleting them from their own. Twitter changed this years ago, and now only allows a user to delete messages from their own account. “Others in the conversation will still be able to see direct messages or conversations that you have deleted,” Twitter says in a help page. Twitter also says in its privacy policy that anyone wanting to leave the service can have their account “deactivated and then deleted.” After a 30-day grace period, the account disappears, along with its data.

But, in our tests, we could recover direct messages from years ago — including old messages that had since been lost to suspended or deleted accounts. By requesting a copy of your account’s data, it’s possible to download all of the data Twitter stores on you.

A conversation, dated March 2016, with a suspended Twitter account was still retrievable today. (Image: TechCrunch)
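
Curious users can verify this against their own archive. Below is a minimal sketch of how one might scan the direct-message file in a downloaded Twitter data archive for old messages; note that the “direct-messages.js” filename, the window.YTD wrapper and the JSON field names are assumptions about the archive layout at the time of writing, and may differ between archive versions:

```python
import json
import re

def old_messages(path, before_year=2017):
    """Print direct messages older than `before_year` from a Twitter
    data-archive file. Field names are assumptions about the archive
    format and may differ between archive versions."""
    with open(path, encoding="utf-8") as f:
        raw = f.read()
    # The archive wraps the JSON in a JavaScript assignment such as
    # "window.YTD.direct_messages.part0 = [...]"; strip it to get JSON.
    payload = re.sub(r"^window\.YTD\.[\w.]+\s*=\s*", "", raw)
    for convo in json.loads(payload):
        for entry in convo["dmConversation"]["messages"]:
            msg = entry.get("messageCreate", {})
            created = msg.get("createdAt", "")  # e.g. "2016-03-01T12:00:00.000Z"
            if created and int(created[:4]) < before_year:
                print(created, msg.get("senderId"), msg.get("text"))

old_messages("direct-messages.js")
```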

Saini says this is a “functional bug” rather than a security flaw, but argued that it gives anyone a “clear bypass” of Twitter’s mechanisms to prevent access to suspended or deactivated accounts.

But it’s also a privacy matter, and a reminder that “delete” doesn’t mean delete — especially with your direct messages. That can open up users, particularly high-risk accounts like journalists and activists, to government data demands that call for data from years earlier.

That’s despite Twitter’s claim to law enforcement that once an account has been deactivated, there is only “a very brief period in which we may be able to access account information, including tweets.”

A Twitter spokesperson said the company was “looking into this further to ensure we have considered the entire scope of the issue.”

Retaining direct messages for years may put the company in a legal grey area under Europe’s new data protection laws, which allow users to demand that a company delete their data.

Neil Brown, a telecoms, tech and internet lawyer at U.K. law firm Decoded Legal, said there’s “no formality at all” to how a user can ask for their data to be deleted. Any request from a user to delete their data that’s directly communicated to the company “is a valid exercise” of a user’s rights, he said.

Companies can be fined up to €20 million or four percent of their annual global turnover, whichever is higher, for violating GDPR rules.
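
For a sense of what that cap means in practice, here’s a trivial worked example; the turnover figure is hypothetical, and the upper tier under GDPR is the greater of €20 million or four percent of worldwide annual turnover:

```python
# GDPR upper-tier fine cap: the greater of EUR 20 million or 4% of
# worldwide annual turnover. The turnover figure is hypothetical.
annual_turnover_eur = 3_000_000_000  # hypothetical: EUR 3 billion

max_fine_eur = max(20_000_000, 0.04 * annual_turnover_eur)
print(f"Maximum possible fine: EUR {max_fine_eur:,.0f}")  # EUR 120,000,000
```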

“A delete button is perhaps a different matter, as it is not obvious that ‘delete’ means the same as ‘exercise my right of erasure’,” said Brown. Given that there’s no case law yet under the new General Data Protection Regulation regime, it will be up to the courts to decide, he said.

When asked if Twitter thinks that consent to retain direct messages is withdrawn when a message or account is deleted, Twitter’s spokesperson had “nothing further” to add.

TikTok spotted testing native video ads

TikTok is testing a new ad product: a sponsored video ad that directs users to the advertiser’s website. The test was spotted in the beta version of the U.S. TikTok app, where a video labeled “Sponsored” from the bike retailer Specialized is showing up in the main feed, along with a blue “Learn More” button that directs users to tap to get more information.

Presumably, this button could be customized to send users to the advertiser’s website or any other web address, but for the time being it only opened the Specialized Bikes (@specializedbikes) profile page within the TikTok app.

However, the profile page itself also sported a few new features, including what appeared to be a tweaked version of the verified account badge.

Below the @specializedbikes username was “Specialized Bikes Page” and a blue checkmark (see below). On other social networks, checkmarks like this usually indicate a user whose account has gone through a verification process of some kind.

Typical TikTok user profiles don’t look like this — they generally only include the username. In some cases, we’ve seen them sport other labels like “popular creator” or “Official Account” — but these have been tagged with a yellowish-orange checkmark, not a blue one.

In addition, a pop-up banner overlay appeared at the bottom of the profile page, which directed users to “Go to Website” followed by another blue “Learn More” button.

Oddly, this pop-up banner didn’t show up all the time, and the “Learn More” button didn’t work — it only re-opened the retailer’s profile page.

As for the video itself, it features a Valentine’s Day heart that you can send to a crush, and, of course, some bikes.

The music backing the clip is Breakbot’s “By Your Side,” but is labeled “Promoted Music.” Weirdly, when you tap on the “Promoted Music” you’re not taken to the soundbite on TikTok like usual, but instead get an error message saying “Ad videos currently do not support this feature.”

The glitches indicate this video ad unit is still very much in the process of being tested, and not a publicly available ad product at this time.

TikTok parent ByteDance only just began to experiment with advertising in the U.S. and U.K. in January.

So far, public tests have only included an app launch pre-roll ad. But according to a leaked pitch deck published by Digiday, there are four TikTok ad products in the works: a brand takeover, an in-feed native video ad, a hashtag challenge and a Snapchat-style 2D lens filter for photos; 3D and AR lenses were listed as “coming soon.”

TikTok previously worked with GUESS on a hashtag challenge last year, and has more recently been running app launch pre-roll ads for companies like GrubHub, Disney’s Kingdom Hearts and others. However, a native video ad hadn’t yet been spotted in the wild until now.

According to estimates from Sensor Tower, TikTok has grown to nearly 800 million lifetime installs, not counting Android in China. Factoring that in, it’s fair to say the app has topped 1 billion downloads. As of last July, TikTok claimed to have more than 500 million monthly active users worldwide, excluding the 100 million users it gained from acquiring Musical.ly.

That’s a massive user base, and attractive to advertisers. Plus, native video ads like the one seen in testing would allow brands to participate in the community, instead of interrupting the experience the way video pre-rolls do.

We’ve reached out to TikTok for comment, but the company was not able to provide one at this time. We’ll update if that changes. Specialized declined to comment.

First look at Twitter’s Snapchatty new Camera feature

Twitter has been secretly developing an enhanced camera feature that’s accessible with a swipe from the home screen and allows you to overlay captions on photos, videos, and Live broadcasts before sharing them to the timeline. People already use Twitter to post pictures and videos, but as it builds up its profile as a media company in the age of Snapchat and Instagram, it is working on the feature in hopes of getting them to do that even more.

Described in Twitter’s code as the “News Camera”, the Snapchat-style visual sharing option could turn more people into citizen journalists… or just get them sharing more selfies, reaction shots, and the world around them. Getting more original visual content into Twitter spices up the feed and could also help photo and video ads blend in.

Prototypes of the new Twitter camera were first spotted by social media consultant Matt Navarra a week ago, and he produced a video of the feature today.

He describes the ability to swipe left from the homescreen to bring up the new unified capture screen. After you shoot some media, overlays appear prompting you to add a location and a caption to describe “what’s happening”. Users can choose from six colored backgrounds for the caption and location overlay card before posting, which lets you unite words and imagery on Twitter for the first time to make a splash with your tweets.

Meanwhile, code digger and frequent TechCrunch tipster Jane Manchun Wong has found Twitter code describing how users should “Try the updated Twitter camera” to “capture photos, videos, and go live”. Bloomberg and CNBC had previously reported that Twitter was building an improved camera, but without feature details or screenshots.

Twitter confirmed to TechCrunch that it’s currently developing the new camera feature. A Twitter spokesperson told us “I can confirm that we’re working on an easier way to share things like images and videos on Twitter. What you’re seeing is in mid-development so it’s tough to comment on what things will look like in the final stage. The team is still actively working on what we’ll actually end up shipping.” When asked when it would launch, the spokesperson told us “Unfortunately we don’t have a timeline right now. You could expect the first half of this year.”

Twitter has largely sat by as visual sharing overtook the rest of the social media landscape. It’s yet to launch a Snapchat Stories feature like almost every other app — although you could argue that Moments was an effort to do that — and it seems to have neglected Periscope as the Live broadcasting trend waned. But the information density of all the words on Twitter might make it daunting to mainstream users compared to something easy and visual like Instagram.

This month, as it turns away from reporting monthly active users, Twitter reported daily active users for the first time, revealing it has 126 million monetizable daily users compared to Snapchat’s 186 million, while Instagram has over 500 million.

The new Twitter camera could make the service more appealing for people who see something worth sharing, but don’t always know what to say.

Iranian spies allegedly used Facebook to target U.S. intelligence agents

It was just a simple friend request. However, nothing is ever simple when the U.S. intelligence community is involved.

A press release issued Wednesday by the Department of Justice details an alleged effort by Iranian government agents to use Facebook to hack members of the American intelligence community. And they had unexpected help: a former Department of Defense contractor turned Iranian agent.

The details of this case are pretty wild, and focus on 39-year-old Monica Elfriede Witt. Witt, the press release notes, is both a former Air Force intelligence specialist and a special agent of the Air Force Office of Special Investigations. She also worked as a Department of Defense contractor, and was granted a "high-level" security clearance. That was all before 2012, when things allegedly took a turn for the treasonous.
