Category Archives: Instagram

Alpaca accounts are underrated social media treasures

In the vast world of animals with social media accounts, common household pets like cats and dogs typically reign supreme. But if you’re not following your fair share of alpacas on the internet, you’re sorely missing out.

Though social media accounts dedicated to alpacas are rare, they're remarkable — like delicious pieces of hay in the ridiculous needle stack that is the internet. You have to do a bit more searching than you would to find a cat or dog account, sure. But when you do happen upon a dedicated farm or fan posting camelid content, it doesn't disappoint.

Since following several alpaca accounts like Alpacas of Instagram, Barnacre Alpacas, and The Woolly Army, I've found the animals' presence in my daily digital life, though small, to be a real mood booster. After noticing that lighthearted alpaca content makes Twitter and Instagram significantly more bearable, I decided to reach out to some leaders of the alpaca social media movement to learn more about the underrated animals, and what it's like making a space for them online.


Instagram confirms that a bug is causing follower counts to change

Instagram confirmed today that an issue has been causing some accounts’ follower numbers to change. Users began noticing the bug about 10 hours ago and the drastic drop in followers caused some to wonder if Instagram was culling inactive and fake accounts, as part of its fight against spam.

“We’re aware of an issue that is causing a change in account follower numbers for some people right now. We’re working to resolve this as quickly as possible,” the company said on Twitter.

The Instagram bug comes a few hours after a Twitter bug messed with the Like count on tweets, causing users to wonder if accounts were being suspended en masse or if they were just very bad at tweeting.

Instagram seems to be testing direct messaging on web

There's no dearth of messaging platforms on the web, but Instagram DMs have likely become a big part of your online life.

A prototype, spotted by software engineer Jane Manchun Wong, shows the platform making moves toward making its direct messaging service, Direct, accessible via your browser.

Given how many of our interactions happen on Instagram these days, it makes sense to make Direct available outside of the app. If it turns out to be a thing, Direct appears to be available for both desktop and mobile.


Instagram is now testing a web version of Direct messages

Insta-chat addicts, rejoice. You could soon be trading memes and emojis from your computer. Instagram is internally testing a web version of Instagram Direct messaging that lets people chat without the app. If, or more likely when, this rolls out publicly, users on a desktop or laptop (Mac or PC), or those accessing Instagram via a mobile web browser, will be able to privately message other Instagrammers.

Instagram web DMs were one of the features I called for in a product wishlist I published in December, alongside a See More Like This button for the feed and an upload quality indicator so your Stories don’t look crappy if you’re on a slow connection.

A web version could make Instagram Direct a more full-fledged SMS alternative rather than just a tacked-on feature for discussing the photo and video app’s content. Messages are a massive driver of engagement that frequently draws people back to an app, and knowing friends can receive them anywhere could get users sending more. While Facebook doesn’t monetize Instagram Direct itself, it could get users browsing through more ads while they wait for replies.

Given Facebook’s own chat feature started on the web before going mobile and getting its own Messenger app, and WhatsApp launched a web portal in 2015 followed by desktop clients in 2016, it’s sensible for Instagram Direct to embrace the web too. It could also pave the way for Facebook’s upcoming unification of the backend infrastructure for Messenger, WhatsApp, and Instagram Direct that should expand encryption and allow cross-app chat, as reported by the New York Times’ Mike Isaac.

Mobile reverse-engineering specialist and frequent TechCrunch tipster Jane Manchun Wong alerted us to Instagram’s test. It’s not available to users yet, as it’s still being internally “dogfooded” — used heavily by employees to identify bugs or necessary product changes. But she was able to dig past security and access the feature from both a desktop computer and mobile web browser.

In the current design, Direct on the web is available from a Direct arrow icon in the top right of the screen. The feature looks like it will use an Instagram.com/direct/…. URL structure. If the feature becomes popular, perhaps Facebook will break it out with its own Direct destination website, similar to https://www.messenger.com, which launched in 2015. Instagram began testing a standalone Direct app last year, but it has yet to officially launch and doesn’t seem exceedingly popular.

Instagram did not respond to requests for comment before press time. The company rarely provides a statement on internal features in development until they’re being externally tested on the public, at which point it typically tells us “We’re always testing ways to improve the Instagram experience.”

After cloning Snapchat Stories to create Instagram Stories, the Facebook-owned app decimated Snap’s growth rate. That left Snapchat to focus on premium video and messaging. Last year Instagram built IGTV to compete with Snapchat Discover. And now with it testing a web version of Direct, it seems poised to challenge Snap for chat too.

This automation tool could change the way you use Instagram (for the better)

Connecting with your Instagram followers is fun... until it isn't. Once it becomes part of your job description for marketing purposes, the hassle of scheduling, posting, and following up on #content can swiftly outweigh the thrill of racking up likes. And if those tiny red heart notifications stop sparking joy for you entirely, well, what's the point of anything anymore?

That's why Postable created an Instagram automation tool: to make it easier for individuals, brands, and bloggers alike to maintain an active presence on the photo-sharing platform. Once you begin a subscription, you can connect to your Dropbox or OneDrive accounts and upload photos from there to save time, then easily schedule posts to drive optimal follower engagement on an unlimited number of accounts.


Dating apps face questions over age checks after report exposes child abuse

The UK government has said it could legislate to require age verification checks on users of dating apps, following an investigation into underage use of dating apps published by the Sunday Times yesterday.

The newspaper found more than 30 cases of child rape have been investigated by police related to use of dating apps including Grindr and Tinder since 2015. It reports that one 13-year-old boy with a profile on the Grindr app was raped or abused by at least 21 men. 

The Sunday Times also found 60 further instances of child sex offences related to the use of online dating services — including grooming, kidnapping and violent assault, according to the BBC, which covered the report.

The youngest victim is reported to have been just eight years old. The newspaper obtained the data via freedom of information requests to UK police forces.

Responding to the Sunday Times’ investigation, a Tinder spokesperson told the BBC it uses automated and manual tools, and spends “millions of dollars annually”, to prevent and remove underage users and other inappropriate behaviour, saying it does not want minors on the platform.

Grindr also reacted to the report, providing the Times with a statement saying: “Any account of sexual abuse or other illegal behaviour is troubling to us as well as a clear violation of our terms of service. Our team is constantly working to improve our digital and human screening tools to prevent and remove improper underage use of our app.”

We’ve also reached out to the companies with additional questions.

The UK’s secretary of state for digital, media, culture and sport (DCMS), Jeremy Wright, dubbed the newspaper’s investigation “truly shocking”, describing it as further evidence that “online tech firms must do more to protect children”.

He also suggested the government could expand forthcoming age verification checks for accessing pornography to include dating apps — saying he would write to the dating app companies to ask “what measures they have in place to keep children safe from harm, including verifying their age”.

“If I’m not satisfied with their response, I reserve the right to take further action,” he added.

Age verification checks for viewing online porn are due to come into force in the UK in April, as part of the Digital Economy Act.

Those age checks, which are clearly not without controversy given the huge privacy considerations of creating a database of adult identities linked to porn viewing habits, have also been driven by concern about children’s exposure to graphic content online.

Last year the UK government committed to legislating on social media safety too, although it has yet to set out the detail of its policy plans. But a white paper is due imminently.

A parliamentary committee which reported last week urged the government to put a legal ‘duty of care’ on platforms to protect minors.

It also called for more robust systems for age verification. So it remains at least a possibility that some types of social media content could be age-gated in the country in future.

Last month the BBC reported on the death of a 14-year-old schoolgirl who killed herself in 2017 after being exposed to self-harm imagery on Instagram.

Following the report, Instagram’s boss met with Wright and the UK’s health secretary, Matt Hancock, to discuss concerns about the impact of suicide-related content circulating on the platform.

After the meeting Instagram announced it would ban graphic images of self-harm last week.

Earlier the same week the company responded to the public outcry over the story by saying it would no longer allow suicide related content to be promoted via its recommendation algorithms or surfaced via hashtags.

Also last week, the government’s chief medical advisors called for a code of conduct for social media platforms to protect vulnerable users.

The medical experts also called for greater transparency from platform giants to support public interest-based research into the potential mental health impacts of their platforms.

Is Europe closing in on an antitrust fix for surveillance technologists?

The German Federal Cartel Office’s decision to order Facebook to change how it processes users’ personal data this week is a sign the antitrust tide could at last be turning against platform power.

One European Commission source we spoke to, who was commenting in a personal capacity, described it as “clearly pioneering” and “a big deal”, even without Facebook being fined a dime.

The FCO’s decision instead bans the social network from linking user data across different platforms it owns, unless it gains people’s consent (nor can it make use of its services contingent on such consent). Facebook is also prohibited from gathering and linking data on users from third party websites, such as via its tracking pixels and social plugins.

The order is not yet in force, and Facebook is appealing, but should it come into force the social network faces being de facto shrunk by having its platforms siloed at the data level.

To comply with the order Facebook would have to ask users to freely consent to being data-mined — which the company does not do at present.

Yes, Facebook could still manipulate the outcome it wants from users but doing so would open it to further challenge under EU data protection law, as its current approach to consent is already being challenged.

The EU’s updated privacy framework, GDPR, requires consent to be specific, informed and freely given. That standard supports challenges to Facebook’s (still fixed) entry ‘price’ to its social services. To play you still have to agree to hand over your personal data so it can sell your attention to advertisers. But legal experts contend that’s neither privacy by design nor default.

The only ‘alternative’ Facebook offers is to tell users they can delete their account. Not that doing so would stop the company from tracking you around the rest of the mainstream web anyway. Facebook’s tracking infrastructure is also embedded across the wider Internet so it profiles non-users too.

EU data protection regulators are still investigating a very large number of consent-related GDPR complaints.

But the German FCO, which said it liaised with privacy authorities during its investigation of Facebook’s data-gathering, has dubbed this type of behavior “exploitative abuse”, having also deemed the social service to hold a monopoly position in the German market.

So there are now two lines of legal attack — antitrust and privacy law — threatening Facebook (and indeed other adtech companies’) surveillance-based business model across Europe.

A year ago the German antitrust authority also announced a probe of the online advertising sector, responding to concerns about a lack of transparency in the market. Its work here is by no means done.

Data limits

The lack of a big flashy fine attached to the German FCO’s order against Facebook makes this week’s story less of a major headline than recent European Commission antitrust fines handed to Google — such as the record-breaking $5BN penalty issued last summer for anticompetitive behaviour linked to the Android mobile platform.

But the decision is arguably just as, if not more, significant, because of the structural remedies being ordered upon Facebook. These remedies have been likened to an internal break-up of the company — with enforced internal separation of its multiple platform products at the data level.

This of course runs counter to (ad) platform giants’ preferred trajectory, which has long been to tear modesty walls down; pool user data from multiple internal (and indeed external sources), in defiance of the notion of informed consent; and mine all that personal (and sensitive) stuff to build identity-linked profiles to train algorithms that predict (and, some contend, manipulate) individual behavior.

Because if you can predict what a person is going to do you can choose which advert to serve to increase the chance they’ll click. (Or as Mark Zuckerberg puts it: ‘Senator, we run ads.’)

This means that a regulatory intervention that interferes with an ad tech giant’s ability to pool and process personal data starts to look really interesting. Because a Facebook that can’t join data dots across its sprawling social empire — or indeed across the mainstream web — wouldn’t be such a massive giant in terms of data insights. And nor, therefore, surveillance oversight.

Each of its platforms would be forced to be a more discrete (and, well, discreet) kind of business.

Competing against data-siloed platforms with a common owner — instead of a single interlinked mega-surveillance-network — also starts to sound almost possible. It suggests a playing field that’s reset, if not entirely levelled.

(Whereas, in the case of Android, the European Commission did not order any specific remedies — allowing Google to come up with ‘fixes’ itself; and so to shape the most self-serving ‘fix’ it can think of.)

Meanwhile, just look at where Facebook is now aiming to get to: A technical unification of the backend of its different social products.

Such a merger would collapse even more walls and fully enmesh platforms that started life as entirely separate products before they were folded into Facebook’s empire (also, let’s not forget, via surveillance-informed acquisitions).

Facebook’s plan to unify its products on a single backend platform looks very much like an attempt to throw up technical barriers to antitrust hammers. It’s at least harder to imagine breaking up a company if its multiple, separate products are merged onto one unified backend which functions to cross and combine data streams.

Set against Facebook’s sudden desire to technically unify its full-flush of dominant social networks (Facebook Messenger; Instagram; WhatsApp) is a rising drum-beat of calls for competition-based scrutiny of tech giants.

This has been building for years, as the market power — and even democracy-denting potential — of surveillance capitalism’s data giants has telescoped into view.

Calls to break up tech giants no longer carry a suggestive punch. Regulators are routinely asked whether it’s time. As the European Commission’s competition chief, Margrethe Vestager, was when she handed down Google’s latest massive antitrust fine last summer.

Her response then was that she wasn’t sure breaking Google up is the right answer — preferring to try remedies that might allow competitors to have a go, while also emphasizing the importance of legislating to ensure “transparency and fairness in the business to platform relationship”.

But it’s interesting that the idea of breaking up tech giants now plays so well as political theatre, suggesting that wildly successful consumer technology companies — which have long dined out on shiny convenience-based marketing claims, made ever so saccharine sweet via the lure of ‘free’ services — have lost a big chunk of their populist pull, dogged as they have been by so many scandals.

From terrorist content and hate speech, to election interference, child exploitation, bullying, abuse. There’s also the matter of how they arrange their tax affairs.

The public perception of tech giants has matured as the ‘costs’ of their ‘free’ services have scaled into view. The upstarts have also become the establishment. People see not a new generation of ‘cuddly capitalists’ but another bunch of multinationals; highly polished but remote money-making machines that take rather more than they give back to the societies they feed off.

Google’s trick of naming each Android iteration after a different sweet treat makes for an interesting parallel to the (also now shifting) public perceptions around sugar, following closer attention to health concerns. What does its sickly sweetness mask? And after the sugar tax, we now have politicians calling for a social media levy.

Just this week the deputy leader of the main opposition party in the UK called for setting up a standalone Internet regulator with the power to break up tech monopolies.

Talking about breaking up well-oiled wealth-concentration machines is being seen as a populist vote winner. And companies that political leaders used to flatter and seek out for PR opportunities find themselves treated as political punchbags; called to attend awkward grillings by hard-grafting committees, or taken to vicious verbal task at the highest-profile public podia. (Though some non-democratic heads of state are still keen to press tech giant flesh.)

In Europe, Facebook’s repeat snubs of the UK parliament’s requests last year for Zuckerberg to face policymakers’ questions certainly did not go unnoticed.

Zuckerberg’s empty chair at the DCMS committee has become both a symbol of the company’s failure to accept wider societal responsibility for its products, and an indication of market failure; the CEO so powerful he doesn’t feel answerable to anyone; neither his most vulnerable users nor their elected representatives. Hence UK politicians on both sides of the aisle making political capital by talking about cutting tech giants down to size.

The political fallout from the Cambridge Analytica scandal looks far from done.

Quite how a UK regulator could successfully swing a regulatory hammer to break up a global Internet giant such as Facebook, which is headquartered in the U.S., is another matter. But policymakers have already crossed the Rubicon of public opinion and are relishing talking up having a go.

That represents a sea-change vs the neoliberal consensus that allowed competition regulators to sit on their hands for more than a decade as technology upstarts quietly hoovered up people’s data and bagged rivals, and basically went about transforming themselves from highly scalable startups into market-distorting giants with Internet-scale data-nets to snag users and buy or block competing ideas.

The political spirit looks willing to go there, and now the mechanism for breaking platforms’ distorting hold on markets may also be shaping up.

The traditional antitrust remedy of breaking a company along its business lines still looks unwieldy when faced with the blistering pace of digital technology. The problem is delivering such a fix fast enough that the business hasn’t already reconfigured to route around the reset. 

Commission antitrust decisions on the tech beat have stepped up impressively in pace on Vestager’s watch. Yet it still feels like watching paper pushers wading through treacle to try and catch a sprinter. (And Europe hasn’t gone so far as trying to impose a platform break up.) 

But the German FCO decision against Facebook hints at an alternative way forward for regulating the dominance of digital monopolies: Structural remedies that focus on controlling access to data which can be relatively swiftly configured and applied.

Vestager, whose term as EC competition chief may be coming to its end this year (even if other Commission roles remain in potential and tantalizing contention), has championed this idea herself.

In an interview on BBC Radio 4’s Today program in December she poured cold water on the stock question about breaking tech giants up — saying instead the Commission could look at how larger firms got access to data and resources as a means of limiting their power. Which is exactly what the German FCO has done in its order to Facebook. 

At the same time, Europe’s updated data protection framework has gained the most attention for the size of the financial penalties that can be issued for major compliance breaches. But the regulation also gives data watchdogs the power to limit or ban processing. And that power could similarly be used to reshape a rights-eroding business model or snuff out such business entirely.

The merging of privacy and antitrust concerns is really just a reflection of the complexity of the challenge regulators now face trying to rein in digital monopolies. But they’re tooling up to meet that challenge.

Speaking in an interview with TechCrunch last fall, Europe’s data protection supervisor, Giovanni Buttarelli, told us the bloc’s privacy regulators are moving towards more joint working with antitrust agencies to respond to platform power. “Europe would like to speak with one voice, not only within data protection but by approaching this issue of digital dividend, monopolies in a better way — not per sectors,” he said. “But first joint enforcement and better co-operation is key.”

The German FCO’s decision represents tangible evidence of the kind of regulatory co-operation that could — finally — crack down on tech giants.

Blogging in support of the decision this week, Buttarelli asserted: “It is not necessary for competition authorities to enforce other areas of law; rather they need simply to identify where the most powerful undertakings are setting a bad example and damaging the interests of consumers. Data protection authorities are able to assist in this assessment.”

He also had a prediction of his own for surveillance technologists, warning: “This case is the tip of the iceberg — all companies in the digital information ecosystem that rely on tracking, profiling and targeting should be on notice.”

So perhaps, at long last, the regulators have figured out how to move fast and break things.

Instagram thinks you want IGTV previews in your home feed

If you can’t beat or join them… force feed ’em? That appears to be Instagram’s latest strategy for IGTV, which is now being shoved right into Instagram’s main feed, the company announced today. Instagram says that it will now add one-minute IGTV previews to the feed, making it “even easier” to discover and watch content from IGTV.

Uh.

IGTV, you may recall, was launched last year as a way for Instagram to woo creators. With IGTV, creators are able to share long-form videos within the Instagram platform instead of just short-form content to the Feed or Stories.

The videos, before today, could be viewed in Instagram itself by tapping the IGTV icon at the top-right of the screen, or within the separate IGTV standalone app. Instagram’s hope was that IGTV would give the company a means of better competing with larger video sites, like Google’s YouTube or Amazon’s Twitch.

Its users, however, haven’t found IGTV as compelling.

As of last fall, few creators were working on content exclusively for IGTV, and rumor was the viewing audience for IGTV content remained quite small, compared with rivals like Snapchat or Facebook. Many creators just weren’t finding it worth investing additional resources into IGTV, so were repurposing content designed for other platforms, like YouTube or Snapchat.

That means the bigger creators weren’t developing premium content or exclusives for IGTV, but were instead experimenting by replaying the content their fans could find elsewhere. Many are still not even sure what the IGTV audience wants to watch.

IGTV’s standalone app doesn’t seem to have gained much of a following either.

The app today is ranked a lowly No. 228 on the U.S. App Store’s “Photo and Video” top chart. Despite being run by Instagram — an app that topped a billion monthly users last summer, and is currently the No. 1 free app on iOS — fewer are downloading IGTV.

After seeing 1.5 million downloads in its first month last year — largely out of curiosity — the IGTV app today has only grown to 3.5 million total installs worldwide, according to Sensor Tower data. While those may be good numbers for a brand-new startup, for a spin-off from one of the world’s biggest apps, they’re relatively small.

Instagram’s new video initiative also represents another shot across the bow of Instagram purists.

As BuzzFeed reporter Katie Notopoulos opined last year, “I’m Sorry To Report Instagram Is Bad Now.” Her point of concern was the impact that Stories had on the Instagram Feed — people were sharing to Stories instead of the Feed, which made the Feed pretty boring. And yet, the Stories content wasn’t good either, having become a firehose of throwaway posts that didn’t merit being shared directly on users’ profiles.

On top of all this, it seems the Instagram Feed is now going to be cluttered with IGTV previews. That’s. Just. Great.

Instagram says you’ll see the one-minute previews in the Feed, and can tap on them to turn on the audio. Tap the IGTV icon on the preview and you’ll be able to watch the full version in IGTV. When the video is finished, you’re returned to the Feed. Or, if you want to see more from IGTV, you can swipe up while the video plays to start browsing.

IGTV previews are only one way Instagram has been developing the product to attract more views in recent months. It has also integrated IGTV into Explore, allowed the sharing of IGTV videos to Stories, added the ability to save IGTV videos and launched IGTV web embeds.

Instagram and Facebook will start censoring ‘graphic images’ of self-harm

In light of a recent tragedy, Instagram is updating the way it handles pictures depicting self-harm. Instagram and Facebook announced changes to their policies around content depicting cutting and other forms of self harm in dual blog posts Thursday.

The changes come in light of the 2017 suicide of Molly Russell, a 14-year-old girl from the UK who took her own life. Following her death, her family discovered that Russell had engaged with accounts that depicted and promoted self-harm on the platform.

As the controversy unfolded, Instagram Head of Product Adam Mosseri penned an op-ed in the Telegraph to atone for the platform’s at-times high-consequence shortcomings. Mosseri previously announced that Instagram would implement “sensitivity screens” to obscure self-harm content, but the new changes go a step further.

Starting soon, both platforms will no longer allow any “graphic images of self-harm,” most notably those that depict cutting. This content was previously allowed because the platforms worked under the assumption that allowing people to connect and confide around these issues was better than the alternative. After a “comprehensive review with global experts and academics on youth, mental health and suicide prevention,” those policies are shifting.

“… It was advised that graphic images of self-harm – even when it is someone admitting their struggles – has the potential to unintentionally promote self-harm,” Mosseri said.

Instagram will also begin burying non-graphic images about self-harm (pictures of healed scars, for example) so they don’t show up in search, relevant hashtags or on the Explore tab. “We are not removing this type of content from Instagram entirely, as we don’t want to stigmatize or isolate people who may be in distress and posting self-harm related content as a cry for help,” Mosseri said.

According to the blog post, after consulting with groups like the Centre for Mental Health and Save.org, Instagram tried to strike a balance that would still allow users to express their personal struggles without encouraging others to hurt themselves. For self harm, like disordered eating, that’s a particularly difficult line to walk. It’s further complicated by the fact that not all people who self harm have suicidal intentions and the behavior has its own nuances apart from suicidality.

“Up until now, we’ve focused most of our approach on trying to help the individual who is sharing their experiences around self-harm. We have allowed content that shows contemplation or admission of self-harm because experts have told us it can help people get the support they need. But we need to do more to consider the effect of these images on other people who might see them. This is a difficult but important balance to get right.”

Mental health research and treatment teams have long been aware of “peer influence processes” that can make self-destructive behaviors take on a kind of social contagiousness. While online communities can also serve as a vital support system for anyone engaged in self-destructive behaviors, the wrong kind of peer support can backfire, reinforcing the behaviors or even popularizing them. Instagram’s failure to sufficiently safeguard against the potential impact this kind of content can have on a hashtag-powered social network is fairly remarkable, considering that both Instagram and Facebook claim to have worked with mental health groups to get it right.

These changes are expected in the “coming weeks.” For now, a simple search of Instagram’s #selfharm hashtag still reveals a huge ecosystem of self-harmers on Instagram, including self-harm related memes (some hopeful, some not) and many very graphic photos of cutting.

“It will take time and we have a responsibility to get this right,” Mosseri said. “Our aim is to have no graphic self-harm or graphic suicide related content on Instagram… while still ensuring we support those using Instagram to connect with communities of support.”

Tech platforms called to support public interest research into mental health impacts

The tech industry has been called on to share data with public sector researchers so the mental health and psychosocial impacts of their services on vulnerable users can be better understood, and also to contribute to funding the necessary independent research over the next ten years.

The UK’s chief medical officers have made the call in a document setting out advice and guidance for the government about children’s and young people’s screen use. They have also called for the industry to agree a code of conduct around the issue.

Concerns have been growing in the UK about the mental health impacts of digital technologies on minors and vulnerable young people.

Last year the government committed to legislate on social media and safety. It’s due to publish a white paper setting out the detail of its plans before the end of the winter, and there have been calls for platforms to be regulated as publishers by placing a legal duty of care on them to protect non-adult users from harm, though it’s not yet clear whether the government intends to go that far.

“The technology industry must share data they hold in an anonymised form with recognised and registered public sector researchers for ethically agreed research, in order to improve our scientific evidence base and understanding,” the chief medical officers write now.

After reviewing the existing evidence the CMOs say they were unable to establish a clear link between screen-based activities and mental health problems.

“Scientific research is currently insufficiently conclusive to support UK CMO evidence-based guidelines on optimal amounts of screen use or online activities (such as social media use),” they note, hence calling for platforms to support further academic research into public health issues.

Last week the UK parliament’s Science and Technology Committee made a similar call for high quality anonymised data to be provided to further public interest research into the impacts of social media technologies.

We asked Facebook-owned Instagram whether it will agree to provide data to public sector mental health and wellbeing researchers earlier this week. But at the time of writing we’re still waiting for a response. We’ve also reached out to Facebook for a reaction to the CMOs’ recommendations.

Update: A Facebook spokesperson said:

We want the time young people spend online to be meaningful and, above all, safe. We welcome this valuable piece of work and agree wholeheartedly with the Chief Medical Officers on the need for industry to work closely together with government and wider society to ensure young people are given the right guidance to help them make the most of the internet while staying safe.

Instagram’s boss, Adam Mosseri, is meeting with the UK health secretary today to discuss concerns about underage users being exposed to disturbing content on the social media platform.

The meeting follows public outrage over the suicide of a schoolgirl whose family said she had been exposed to Instagram accounts that shared self-harm imagery, including some accounts they said actively encouraged suicide. Ahead of the meeting Instagram announced some policy tweaks — saying it would no longer recommend self-harm content to users, and would start to screen sensitive imagery, requiring users click to view it.

In the guidance document the CMOs write that they support the government’s move to legislate “to set clear expectations of the technology industry”. They also urge the technology industry to establish a voluntary code of conduct to address how they safeguard children and young people using their platforms, in consultation with civil society and independent experts.

Areas that the CMOs flag for possible inclusion in such a code include “clear terms of use that children can understand”, as well as active enforcement of their own T&Cs — and “effective age verification” (they suggest working with the government on that).

They also suggest platforms include commitments to “remove addictive capabilities” from the UX design of their services, a criticism of so-called “persuasive” design.

They also suggest platforms commit to ensure “appropriate age specific adverts only”.

The code should ensure that “no normalisation of harmful behaviour (such as bullying and self-harming) occurs”, they suggest, and should incorporate ongoing work on safety issues such as bullying and grooming.

In advice to parents and carers also included in the document, the CMOs encourage the setting of usage boundaries around devices — saying children should not be allowed to take devices into their bedrooms at bedtime to prevent disruption to sleep.

Parents are also encouraged to institute screen-free mealtimes so families can “enjoy face-to-face conversation”.

The CMOs also suggest parents and guardians talk to children about device use to encourage sensible social sharing — also pointing out adults should never assume children are happy for their photo to be shared. “When in doubt, don’t upload,” they add.