Category Archives: Instagram

Openbook is the latest dream of a digital life beyond Facebook

As tech’s social giants wrestle with antisocial demons that appear to be both an emergent property of their platform power, and a consequence of specific leadership and values failures (evident as they publicly fail to enforce even the standards they claim to have), there are still people dreaming of a better way. Of social networking beyond outrage-fuelled adtech giants like Facebook and Twitter.

There have been many such attempts to build a ‘better’ social network of course. Most have ended in the deadpool. A few are still around with varying degrees of success/usage (Snapchat, Ello and Mastodon are three that spring to mind). None has usurped Zuckerberg’s throne of course.

This is principally because Facebook acquired Instagram and WhatsApp. It has also bought and closed down smaller potential future rivals (tbh). So by hogging network power, and the resources that flow from that, Facebook the company continues to dominate the social space. But that doesn’t stop people imagining something better — a platform that could win friends and influence the mainstream by being better ethically and in terms of functionality.

And so meet the latest dreamer with a double-sided social mission: Openbook.

The idea (currently it’s just that; a small self-funded team; a manifesto; a prototype; a nearly spent Kickstarter campaign; and, well, a lot of hopeful ambition) is to build an open source platform that rethinks social networking to make it friendly and customizable, rather than sticky and creepy.

Their vision to protect privacy as a for-profit platform involves a business model that’s based on honest fees — and an on-platform digital currency — rather than ever watchful ads and trackers.

There’s nothing exactly new in any of their core ideas. But in the face of massive and flagrant data misuse by platform giants, these ideas increasingly sound like sense. So the element of timing is perhaps the most notable thing here — with Facebook facing greater scrutiny than ever before, and even taking some hits to user growth and to its perceived valuation as a result of ongoing failures of leadership and a management philosophy that’s been attacked by at least one of its outgoing senior execs as manipulative and ethically out of touch.

The Openbook vision of a better way belongs to Joel Hernández, who has been nursing the idea for a couple of years, brainstorming on the side of other projects, and gathering similarly minded people around him to collectively come up with an alternative social network manifesto — whose primary pledge is a commitment to be honest.

“And then the data scandals started happening and every time they would, they would give me hope. Hope that existing social networks were not a given and immutable thing, that they could be changed, improved, replaced,” he tells TechCrunch.

Rather ironically Hernández says it was overhearing the lunchtime conversation of a group of people sitting near him — complaining about a laundry list of social networking ills; “creepy ads, being spammed with messages and notifications all the time, constantly seeing the same kind of content in their newsfeed” — that gave him the final push to pick up the paper manifesto and have a go at actually building (or, well, trying to fund building… ) an alternative platform. 

At the time of writing Openbook’s Kickstarter crowdfunding campaign has a handful of days to go and is only around a third of the way to reaching its (modest) target of $115k, with just over 1,000 backers chipping in. So the funding challenge is looking tough.

The team behind Openbook includes crypto(graphy) royalty, Phil Zimmermann — aka the father of PGP — who is on board as an advisor initially, but billed as its “chief cryptographer”, since that’s what he’d be building for the platform if/when the time comes.

Hernández worked with Zimmermann at the Dutch telecom KPN, building security and privacy tools for internal use — so he called him up and invited him for a coffee to get his thoughts on the idea.

“As soon as I opened the website with the name Openbook, his face lit up like I had never seen before,” says Hernández. “You see, he wanted to use Facebook. He lives far away from his family and Facebook was the way to stay in the loop with his family. But using it would also mean giving away his privacy and therefore accepting defeat on his life-long fight for it, so he never did. He was thrilled at the possibility of an actual alternative.”

On the Kickstarter page there’s a video of Zimmermann explaining the ills of the current landscape of for-profit social platforms, as he views it. “If you go back a century, Coca Cola had cocaine in it and we were giving it to children,” he says here. “It’s crazy what we were doing a century ago. I think there will come a time, some years in the future, when we’re going to look back on social networks today, and what we were doing to ourselves, the harm we were doing to ourselves with social networks.”

“We need an alternative to the social network revenue model that we have today,” he adds. “The problem with having these deep machine learning neural nets that are monitoring our behaviour and pulling us into deeper and deeper engagement is they already seem to know that nothing drives engagement as much as outrage.

“And this outrage deepens the political divides in our culture, it creates attack vectors against democratic institutions, it undermines our elections, it makes people angry at each other and provides opportunities to divide us. And that’s in addition to the destruction of our privacy by revenue models that are all about exploiting our personal information. So we need some alternative to this.”

Hernández actually pinged TechCrunch’s tips line back in April — soon after the Cambridge Analytica Facebook scandal went global — saying “we’re building the first ever privacy and security first, open-source, social network”.

We’ve heard plenty of similar pitches before, of course. Yet Facebook has continued to harvest global eyeballs by the billions. And even now, after a string of massive data and ethics scandals, it’s all but impossible to imagine users leaving the site en masse. Such is the powerful lock-in of The Social Network effect.

Regulation could present a greater threat to Facebook, though others argue more rules will simply cement its current dominance.

Openbook’s challenger idea is to apply product innovation to try to unstick Zuckerberg. Aka “building functionality that could stand for itself”, as Hernández puts it.

“We openly recognise that privacy will never be enough to get any significant user share from existing social networks,” he says. “That’s why we want to create a more customisable, fun and overall social experience. We won’t follow the footsteps of existing social networks.”

Data portability is an important ingredient to even being able to dream this dream — getting people to switch from a dominant network is hard enough without having to ask them to leave all their stuff behind as well as their friends. Which means that “making the transition process as smooth as possible” is another project focus.

Hernández says they’re building data importers that can parse the archive users are able to request from their existing social networks — to “tell you what’s in there and allow you to select what you want to import into Openbook”.
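To make that importer idea concrete, here is a minimal sketch of how such a tool could scan an export archive and then import only the categories a user selects, assuming the export arrives as a ZIP of JSON files (roughly how the archives social networks provide are structured). The file names, JSON shape and function names below are illustrative assumptions, not Openbook’s actual code.

    import json
    import zipfile
    from pathlib import Path

    # Hypothetical sketch only: the file layout, JSON shape and function names
    # are illustrative assumptions, not Openbook's actual importer code.

    def list_archive_contents(archive_path):
        """Report what kinds of content an exported archive contains."""
        contents = {}
        with zipfile.ZipFile(archive_path) as archive:
            for name in archive.namelist():
                if not name.endswith(".json"):
                    continue
                with archive.open(name) as fh:
                    try:
                        data = json.load(fh)
                    except json.JSONDecodeError:
                        continue  # skip files that aren't valid JSON
                category = Path(name).stem  # e.g. "posts", "photos", "messages"
                contents[category] = len(data) if isinstance(data, list) else 1
        return contents

    def import_selected(archive_path, selected_categories):
        """Import only the categories the user ticked in the selection UI."""
        with zipfile.ZipFile(archive_path) as archive:
            for name in archive.namelist():
                if name.endswith(".json") and Path(name).stem in selected_categories:
                    with archive.open(name) as fh:
                        items = json.load(fh)
                    for item in (items if isinstance(items, list) else [items]):
                        # Placeholder: a real importer would send this to the
                        # new platform's API instead of printing it.
                        print(f"Would import {Path(name).stem}: {str(item)[:60]}")

    if __name__ == "__main__":
        print("Found in archive:", list_archive_contents("facebook-export.zip"))
        import_selected("facebook-export.zip", {"posts", "photos"})

A real importer would obviously need per-network parsers and an upload step, but the scan-then-select flow Hernández describes maps onto roughly this shape.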

These sorts of efforts are aided by updated regulations in Europe — which bolster portability requirements on controllers of personal data. “I wouldn’t say it made the project possible but… it provided us with a unique opportunity no other initiative had before,” says Hernández of the EU’s GDPR.

“Whether it will play a significant role in the mass adoption of the network, we can’t tell for sure but it’s simply an opportunity too good to ignore.”

On the product front, he says they have lots of ideas — reeling off a list that includes the likes of “a topic-roulette for chats, embracing Internet challenges as another kind of content, widgets, profile avatars, AR chatrooms…” for starters.

“Some of these might sound silly but the idea is to break the status quo when it comes to the definition of what a social network can do,” he adds.

Asked why he believes other efforts to build ‘ethical’ alternatives to Facebook have failed he argues it’s usually because they’ve focused on technology rather than product.

“This is still the most predominant [reason for failure],” he suggests. “A project comes up offering a radical new way to do social networking behind the scenes. They focus all their efforts in building the brand new tech needed to do the very basic things a social network can already do. Next thing you know, years have passed. They’re still thousands of miles away from anything similar to the functionality of existing social networks and their core supporters have moved into yet another initiative making the same promises. And the cycle goes on.”

He also reckons disruptive efforts have fizzled out because they were too tightly focused on being just a solution to an existing platform problem and nothing more.

So, in other words, people were trying to build an ‘anti-Facebook’, rather than a distinctly interesting service in its own right. (The latter innovation, you could argue, is how Snap managed to carve out a space for itself in spite of Facebook sitting alongside it — even as Facebook has since sought to crush Snap’s creative market opportunity by cloning its products.)

“This one applies not only to social network initiatives but privacy-friendly products too,” argues Hernández. “The problem with that approach is that the problems they solve or claim to solve are most of the time not mainstream. Such as the lack of privacy.

“While these products might do okay with the people that understand the problems, at the end of the day that’s a very tiny percentage of the market. The solution these products often present to this issue is educating the population about the problems. This process takes too long. And in topics like privacy and security, it’s not easy to educate people. They are topics that require a knowledge level beyond the one required to use the technology and are hard to explain with examples without entering into the conspiracy theorist spectrum.”

So the Openbook team’s philosophy is to shake things up by getting people excited about alternative social networking features and opportunities, with the added benefit of being neither hostile to privacy nor algorithmically chained to stoking the fires of human outrage.

The reliance on digital currency for the business model does present another challenge, though, as getting people to buy into this could be tricky. After all, payments equal friction.

To begin with, Hernández says the digital currency component of the platform would be used to let users list secondhand items for sale. Down the line, the vision extends to being able to support a community of creators getting a sustainable income — thanks to the same baked in coin mechanism enabling other users to pay to access content or just appreciate it (via a tip).

So the idea is that creators on Openbook would be able to benefit from the social network effect via direct financial payments derived from the platform (instead of merely ad-based payments, such as are available to YouTube creators) — albeit that assumes reaching the necessary critical mass of usage. Which of course is the really, really tough bit.

“Lower cuts than any existing solution, great content creation tools, great administration and overview panels, fine-grained control over the view-ability of their content and more possibilities for making a stable and predictable income such as creating extra rewards for people that accept to donate for a fixed period of time such as five months instead of a month to month basis,” says Hernández, listing some of the ideas they have to stand out from existing creator platforms.

“Once we have such a platform and people start using tips for this purpose (which is not such a strange use of a digital token), we will start expanding on its capabilities,” he adds. (He’s also written the requisite Medium article discussing some other potential use cases for the digital currency portion of the plan.)

At this nascent prototype and still-not-actually-funded stage they haven’t made any firm technical decisions on this front either. They also don’t want to end up accidentally getting into bed with an unethical technology.

“Digital currency wise, we’re really concerned about the environmental impact and scalability of the blockchain,” he says — which could risk Openbook contradicting stated green aims in its manifesto and looking hypocritical, given its plan is to plough 30% of its revenues into ‘give-back’ projects, such as environmental and sustainability efforts and also education.

“We want a decentralised currency but we don’t want to rush into decisions without some in-depth research. Currently, we’re going through IOTA’s whitepapers,” he adds.

They do also believe in decentralizing the platform — or at least parts of it — though that would not be their first focus on account of the strategic decision to prioritize product. So they’re not going to win fans from the (other) crypto community. Though that’s hardly a big deal given their target user-base is far more mainstream.

“Initially it will be built in a centralised manner. This will allow us to focus on innovating in regards to the user experience and product functionality rather than coming up with a brand new behind-the-scenes technology,” he says. “In the future, we’re looking into decentralisation from very specific angles and for different things. Application wise, resiliency and data ownership.”

“A project we’re keeping an eye on and that shares some of our vision on this is Tim Berners-Lee’s MIT Solid project. It’s all about decoupling applications from the data they use,” he adds.

So that’s the dream. And the dream sounds good and right. The problem is finding enough funding and wider support — call it ‘belief equity’ — in a market so denuded of competitive possibility as a result of monopolistic platform power that few can even dream an alternative digital reality is possible.

In early April, Hernández posted a link to a basic website with details of Openbook to a few online privacy and tech communities asking for feedback. The response was predictably discouraging. “Some 90% of the replies were a mix between critiques and plain discouraging responses such as ‘keep dreaming’, ‘it will never happen’, ‘don’t you have anything better to do’,” he says.

(Asked this April by US lawmakers whether he thinks he has a monopoly, Zuckerberg paused and then quipped: “It certainly doesn’t feel like that to me!”)

Still, Hernández stuck with it, working on a prototype and launching the Kickstarter. He’s got that far — and wants to build so much more — but getting enough people to believe that a better, fairer social network is even possible might be the biggest challenge of all. 

For now, though, Hernández doesn’t want to stop dreaming.

“We are committed to make Openbook happen,” he says. “Our back-up plan involves grants and impact investment capital. Nothing will be as good as getting our first version through Kickstarter though. Kickstarter funding translates to absolute freedom for innovation, no strings attached.”

You can check out the Openbook crowdfunding pitch here.

Chilling effects

The removal of InfoWars’ conspiracy-enthusiast content brings us to an interesting and important point in the history of online discourse. The current form of Internet content distribution has made it a broadcast medium akin to television or radio. Apps distribute our cat pics, our workouts, and our YouTube rants to specific audiences of followers, audiences that were nearly impossible to monetize in the early days of the Internet but, thanks to gullible marketing managers, can be sold as influencer media.

The source of all of this came from Gen X’s deep love of authenticity. They formed a new vein of content that, after breeding DIY music and zines, begat blogging, and, ultimately, created an endless expanse of user generated content (UGC). In the “old days” of the Internet this Cluetrain-manifesto-waving, post-gatekeeper attitude served the slacker well. But this move from a few institutional voices into a scattered legion of micro-fandoms led us to where we are today: in a shithole of absolute confusion and disruption.

As I wrote a year ago, user generated content supplanted and all but destroyed “real news.” While much of what is published now is true in a journalistic sense, the ability for falsehood and conspiracy to masquerade as truth is the real problem, and it is what caused a vacuum as old media slowed down and new media sped up. In this emptiness a number of parasitic organisms sprang up, including sites like Gizmodo and TechCrunch, micro-celebrity systems like Instagram and Vine, and sites catering to a different consumer, sites like InfoWars and Stormfront. It should be noted that InfoWars has been spouting its deep-state meanderings since 1999 and Alex Jones himself was a gravelly-voiced radio star as early as 1996. The Internet allowed any number of niche content services to juke around the gatekeepers of propriety and give folks like Jones and, arguably, TechCrunch founder Mike Arrington, Gawker founder Nick Denton, and countless members of the “Internet-famous club,” deep influence over the last two decades’ media landscape.

The last twenty years have been good for UGC. You could get rich making it, get informed reading it, and its traditions and habits began redefining how news-gathering operated. There is no longer just a wall between advertising and editorial. There is also a wall between editorial and the myriad bloggers who write about poop on Mt. Everest. In this sort of world we readers find ourselves at a distinct loss. What is true? What is entertainment? When the Internet is made flesh in the form of Pizzagate shootings and Unite the Right Marches, who is to blame?

The simple answer? We are to blame. We are to blame because we scrolled endlessly past bad news to get to the news that was applicable to us. We trained robots to spoon feed us our opinions and then force feed us associated content. We allowed ourselves to enter into a pact with a devil so invisible and pernicious that it easily convinced the most confused among us to mobilize against Quixotic causes and immobilized the smartest among us who were lulled into a Soma-like sleep of liking, sharing, and smileys. And now a new reckoning is coming. We have come full circle.

Once upon a time old gatekeepers were careful to let only carefully controlled views and opinions out over the airwaves. The medium was so immediate that in the 1940s the networks forbade the transmission of recordings and instead forced their reporters to offer only live events. This was wonderful if you had the time to mic a children’s choir at Christmas, but this rigidity was bad for a reporter’s health. Take William Shirer and Edward R. Murrow’s complaints about being unable to record and play back bombing raids in Nazi-held territories – their chafing at old ideas is almost palpable to modern bloggers.

There were other handicaps to the ban on recording that hampered us in taking full advantage of this new medium in journalism. On any given day there might be several developments, each of which could have been recorded as it happened and then put together and edited for the evening broadcast. In Berlin, for example, there might be a bellicose proclamation, troop movements through the capital, sensational headlines in the newspapers, a protest by an angry ambassador, a fiery speech by Hitler, Goring or Goebbels threatening Nazi Germany’s next victim—all in the course of the day. We could have recorded them at the moment they happened and put them together for a report in depth at the end of the day. Newspapers could not do this. Only radio could. But [CBS President] Paley forbade it.

Murrow and I tried to point out to him that the ban on recording was not only hampering our efforts to cover the crisis in Europe but would make it impossible to really cover the war, if war came. In order to broadcast live, we had to have a telephone line leading from our mike to a shortwave transmitter. You could not follow an advancing or retreating army dragging a telephone line along with you. You could not get your mike close enough to a battle to cover the sounds of combat. With a compact little recorder you could get into the thick of it and capture the awesome sounds of war.

And so now instead of CBS and the Censorship Bureau we have Facebook and Twitter. Instead of calling for the ability to record and playback an event we want permission to offer our own slants on events, no matter how far removed we are from the action. Instead of working diligently to spread only the truth, we consume the truth as others know it. And that’s what we are now chafing against: the commercialization and professionalization of user generated content.

Every medium goes through this confusion. From Penny Dreadfuls to Pall Mall sponsoring nearly every single new television show in the 1940s, media has grown, entered a disruptive phase that changes all media around it, and is then curtailed into boredom and commoditization. It is important to remember that we are in the era of Peak TV not because we all have more time to watch 20 hours of Breaking Bad. We are in Peak TV because we have gotten so good at making good shows – and the average consumer is ravenous for new content – that there is no financial reason not to take a flyer on a miniseries. In short, it’s gotten boring to make good TV.

And so we are now entering the latest stage of Internet content, the blowback. This blowback is not coming from governments. Trump, for his part, sees something wrong but cannot or will not verbalize it past the idea of “Fake News”. There is absolutely a Fake News problem but it is not what he thinks it is. Instead, the Fake News problem is rooted in the idea that all content deserves equal respect. My Medium post is as good as a CNN story, which is as good as an InfoWars screed about pedophiles on Mars. In a world defined by free speech, all speech is protected. Until, of course, it affects the bottom line of the company hosting it.

So Facebook and Twitter are walking a thin line. They want to remain true to the ancillary Gen X credo that can be best described as “garbage in, garbage out”, but many of their users have taken that deeply open invitation to share their lives far too openly. These platforms have come to define personalities. They have come to define news cycles. They have driven men and women into hiding and they have given the trolls weapons they never had before, including the ability to destroy media organizations at will. They don’t want to censor but now that they have shareholders, they simply must.

So get ready for the next wave of media. And the next. And the next. As it gets more and more boring to visit Facebook I foresee a few other rising and falling media outlets based on new media – perhaps through VR or video – that will knock social media out of the way. And wait for more wholesale destruction of UGC creators new and old as monetization becomes more important than “truth.”

I am not here to weep for InfoWars. I think it’s garbage. I’m here to tell you that InfoWars is the latest in a long line of disrupted modes of distribution that began with the printing press and will end god knows where. There are no chilling effects here, just changes. And we’d best get used to them.

Instagram’s CEO on vindication after 2 years of reinventing Stories

“I think the mistake everyone made was to think that Stories was a photography product” says Instagram CEO Kevin Systrom. “If you look at all these interactivity features we’ve added, we’ve really made Stories something else. We’ve really innovated and made it our own.”

His version of the ephemeral slideshow format turns two years old today. By all accounts, it’s a wild success. Instagram Stories has 400 million daily users, compared to 191 million on Snapchat which pioneered Stories. While the first year was about getting to parity with augmented reality filters and stickers, the two have since diverged. Instagram chose the viral path.

Snapchat has become more and more like Photoshop, with its magic eraser for removing objects, its green screen-style background changer, scissors for cut-and-pasting things, and its fill-in paint bucket. These tools are remarkably powerful for living in a teen-centric consumer app. But many of these artistic concepts are too complicated for day-to-day Snapping. People don’t even think of using them when they could. And while what they produce is beautiful, those creations get tapped past and disappear just like any other photo or video.

Instagram could have become Photoshop. Its early photo-only feed’s editing filters and sliders pointed in that direction. Instead, it chose to focus not on the “visual” but the “communication”. Instagram increasingly treats Stories as a two-way connection between creators and fans, or between friends. It’s not just one-to-many. It’s many-to-one as well.

Instagram Stories arrived three years after Snapchat Stories, yet it was the first to let you tag friends so they’d get a notification. Now those friends can repost Stories you tag them in, or public posts they want to comment on. You could finally dunk on other Instagrammers like you do with quote-tweets. It built polls with sliders friends can move to give you feedback about “how ridiculous is my outfit today?” Music stickers let you give a corny joke a corny soundtrack or share the epic song you heard in your head while looking out upon a beautiful landscape. And most recently, it launched the Question sticker so you can query friends through your Story and then share their answers there too. Suddenly, anyone could star in their own “Ask Me Anything”.

None of these Instagram tools require much ‘skill’. They’re designed not for designers, but for normal people trying to convey how they feel about the world around them. Since we’re social creatures, that perception is largely colored by our friends or audience. Instagram lets you make them part of the Story. And the result is a product that grabs non-users or casual users and pulls them deeper into the Instagram universe, exposing people to the joy of creating something that lasts until tomorrow, not always forever.

Snap has been trying to get more interactive too, adding tagging for instance. It’s also got new multiplayer filter games called Snappables you play with your face and can then post the footage to your Story. But again, they feel overly involved and therefore less accessible than where Instagram is going.

Mimicking Photoshop reinforces the idea that everything has to look polished. That’s the opposite of what Systrom was going for with the launch of Instagram Stories. “There will always be an element in any public broadcast system of trying to show off” Systrom explains. “But what I see is it moving in the other direction. GIF stickers allow you to be way more informal than you used to be. Type mode means now people are just typing in thoughts rather than actually taking photos. Things like Superzoom with the TV effect or the beats — it’s anything but polished. If anything it’s a joke. Quantitatively people feel comfortable to post way more stories than to feed.”

Systrom is about to go on paternity leave, and has been using Stories from friends with kids to collect ideas about what to do with his own. When asked if he thinks Stories produces less of the dangerous envy inherent in the feeds of social media success theater we passively consume, Systrom tells me “Just personally, it’s inspired me rather than it’s created any sense that I’m missing out”. Of course, that might be related to the fact that his life of attending the Met Gala and bicycling through Europe doesn’t leave much to envy.

AR filters have become table-stakes for Stories. On the left, Instagram. On the right, Snapchat.

The sense of comfort powered by Instagram purposefully pushing Stories to diverge from its classy feed has contributed to its explosion in popularity — not just for Stories but Instagram as a whole. It now has over 1 billion users, in part driven by it introducing Stories to developing countries Snapchat never penetrated.

“Remember how at the launch of Stories, I said it was a format and we want to make it original? And there was a bunch of criticism around us adopting this format? My response was this is a format and we’re going to innovate and make it our own. The whole idea there is to make it not just about photography but about expression. It’s a canvas for you to express yourself.” Stories emerged as how Instagram expressed itself too, allowing it to break away from the staid perfection of the feed, becoming something much more goofy.

Last week when Facebook announced its revenue was decelerating as users shifted attention from its lucrative News Feed to Stories, where it’s still educating advertisers, its share price tanked, deleting $120 billion in market cap. Yet imagine how much further it would have dropped if Systrom hadn’t been willing to put his pride aside, take Snapchat Stories, and give it the Insta spin. Instead, it led the way to Facebook now having over 1.1 billion (duplicated) daily Stories users across its apps. That poises Facebook and Instagram to earn a ton off of Stories.

“There was a long time that desktop advertising worked really really well but we knew the future was mobile and we’d have to go there. There was some short term pain. Everyone was worried that it wouldn’t monetize as well” Systrom remembers. “We believe the future is the combination of feed and Stories, and it just takes time for Stories to get to the same level or even exceed feed.”

So does he feel vindicated in that once-derided decision to think of Stories not as Evan Spiegel’s property but a medium meant for everyone? “I don’t wake up everyday trying to feel vindicated. I wake up everyday trying to make sure our billion users have amazing stuff to use. I just feel lucky that they love what we produce” Systrom says with a laugh. “I don’t know if that fits your definition of vindicated.”

Facebook’s ‘Time Management’ tool shows it hasn’t stopped treating users like psychological guinea pigs

Facebook is evolving to tackle problems the company itself unwittingly enabled, like election meddling and screen addiction. But the fast-paced nature of Silicon Valley is ultimately holding it back from true accountability.

On Wednesday, Facebook launched "Time Management," essentially a set of tools that allow users to see how much time they're spending in the app, mute notifications, and set a reminder to limit how much they use the app.

Time Management is part of Facebook's overarching goal to improve "wellbeing" on Facebook. Or, to make people enjoy the time they spend on Facebook again, and reverse users' perception of it as an addictive, time-sucking venue for spam and endless flame wars.

Facebook and Instagram now show how many minutes you use them

It’s passive zombie feed scrolling, not active communication with friends, that hurts our health, according to studies Facebook has been pointing to for the last seven months. Yet it’s treating all our social networking the same with today’s launch of its digital wellbeing screentime management dashboards for Facebook and Instagram in the US, before rolling them out to everyone in the coming weeks.

Giving users a raw count of the minutes they’ve spent in each app per day over the last week, plus their average across the week, is a good start to making people more mindful. But by burying the dashboards largely out of sight, giving users no real way to compel less usage, and not distinguishing between passive and active behavior, the tools seem destined to be ignored while missing the point the company itself stresses.

TechCrunch scooped the designs of the two separate but identical Instagram and Facebook tools over the past few months thanks to screenshots generated from the apps’ code by Jane Manchun Wong. What’s launching today is what we saw, with the dashboards located in Facebook’s “Settings” -> “Your Time On Facebook” and Instagram’s “Settings” -> “Your Activity”.

Beyond the daily and average minute counts, you can set a daily “limit” in minutes after which either app will send you a reminder that you’ve crossed your self-imposed threshold. But they won’t stop you from browsing and liking, or force you to dig into the settings menu to extend your limit. You’ll need the willpower to cut yourself off. The tools also let you mute push notifications (you’ll still see in-app alerts), but only for as much as 8 hours. If you want anything more permanent, you’ll have to dig into their separate push notification options menu or your phone’s settings.

The announcement follows Instagram CEO Kevin Systrom’s comments about our original scoop, where he tweeted “It’s true . . . We’re building tools that will help the IG community know more about the time they spend on Instagram – any time should be positive and intentional . . . Understanding how time online impacts people is important, and it’s the responsibility of all companies to be honest about this. We want to be part of the solution. I take that responsibility seriously.”

Users got their first taste of Instagram trying to curtail overuse with its “You’re All Caught Up” notices that show when you’ve seen all your feed posts from the past two days. Both apps will now provide callouts to users teaching them about the new activity monitoring tools. Facebook says it has no plans to use whether you open the tools or set daily limits to target ads. It will track how people use the tools to tweak the designs, but it sounds like that’s more about what time increments to show in the Daily Reminder and Mute Notifications options than any drastic strengthening of the tools’ muscle. Facebook will quietly keep a tiny fraction of users from getting the features, to measure whether the launch impacts behavior.

“It’s really important for people who use Instagram and Facebook that the time they spend with us is time well spent” Ameet Ranadive, Instagram’s Product Director of Well-Being, told reporters on a conference call. “There may be some tradeoff with other metrics for the company and that’s a tradeoff we’re willing to live with, because in the longer term we think this is important to the community and we’re willing to invest in it.”

Facebook Needs Stronger Screen Time Tools That Deter Passive Browsing

Facebook has already felt some of the brunt of that tradeoff. It’s been trying to improve digital wellbeing by showing fewer low-quality viral videos and clickbait news stories, and more from your friends, since a big algorithm change in January. That’s contributed to a flatlining of its growth in North America, and even a temporary drop of 700,000 users early this year, while it also lost 1 million users in Europe this past quarter. That led to Facebook’s slowest user growth rates in history, triggering a 20 percent drop in its share price that erased $120 billion in market cap. “The changes to the News Feed back in January were one step . . . giving people a sense of their time so they’re more mindful of it is the second step” says Ranadive.

The fact that Facebook is willing to put its finances on the line for digital wellbeing is a great step. It’s a smart long-term business decision too. If we feel good about our overall usage, we won’t ditch the apps entirely and could keep seeing their ads for another decade. But it’s likely to be changes to the Facebook and Instagram feeds that prioritize content you’ll comment on rather than look at and silently scroll past that will contribute more to healthy social networking than today’s toothless tools.

While iOS 12’s Screen Time and Android’s new Digital Wellbeing features both count your minutes on different apps too, they offer more drastic ways to enforce your own good intentions. iOS will deliver a weekly usage report to remind you the features exist. Android’s is best-in-class because it grays out an app’s icon and requires you to open your settings to unlock an app after you exceed your daily limit.

iOS Screen Time (left) and Android Digital Wellbeing (right)

To live up to the responsibility Systrom promised, Facebook and Instagram will have to do more to actually keep us mindful of the time we spend in their apps and help us help ourselves. Let us actually lock ourselves out of the apps, turn them grayscale, fade their app icons, or persistently show our minute count onscreen once we pass our limit. Anything to make being healthy on their apps something you can’t just ignore like any other push notification.

Or follow the research and have the dashboards actually divide our sharing, commenting, and messaging time from our feed scrolling, Stories tapping, video watching, and photo stalking. The whole point is that social networking isn’t all bad, but there are behaviors that hurt. Most of us aren’t going to give up Facebook and Instagram. Even just trying to spend less time on them is difficult. But by guiding us towards the activities that interconnect us rather than isolate us, Facebook could get us to shift our time in the right direction.

See the trippy propaganda images attacking the midterms on Facebook

Facebook just confirmed that an unknown group is waging a propaganda war against the U.S. midterm elections. Using images and event invites to rallies in Washington next week, the attackers are attempting to sow discord into the American political landscape. Facebook has not identified whether Russian intelligence organizations were responsible, as they were for the 2016 election attacks, because this operation was more sophisticated than previous strategies that Facebook has since implemented safeguards to thwart.

For now, Facebook has removed 32 pages and accounts associated with the group, including “Mindful Being” and “Resisters,” some of which shared psychedelic memes in an attempt to ingratiate themselves with receptive users. Last week I wrote that Facebook had dodged the question of whether it had evidence of attacks on the midterm elections. Now we have the answer: yes.

Scroll down or click through to see a sample of the images used in the attacks. TechCrunch does not endorse any of this imagery.

For more info, read our full story on these attacks on democracy.

Fake news inquiry calls for social media levy to defend democracy

A UK parliamentary committee which has been running a multi-month investigation into the impact of online disinformation on political campaigning — and on democracy itself — has published a preliminary report highlighting what it describes as “significant concerns” over the risks to “shared values and the integrity of our democratic institutions”.

It’s calling for “urgent action” from government and regulatory bodies to “build resilience against misinformation and disinformation into our democratic system”.

“We are faced with a crisis concerning the use of data, the manipulation of our data, and the targeting of pernicious views,” the DCMS committee warns. “In particular, we heard evidence of Russian state-sponsored attempts to influence elections in the US and the UK through social media, of the efforts of private companies to do the same, and of law-breaking by certain Leave campaign groups in the UK’s EU Referendum in their use of social media.”

The inquiry, which was conceived of and begun in the previous UK parliament — before relaunching in fall 2017, after the June General Election — has found itself slap-bang in the middle of one of the major scandals of the modern era, as revelations about the extent of disinformation and social media data misuse and allegations of election fiddling and law bending have piled up thick and fast, especially in recent months (albeit, concerns have been rising steadily, ever since the 2016 US presidential election and revelations about the cottage industry of fake news purveyors spun up to feed US voters, in addition to Kremlin troll farm activity.)

Yet the Facebook-Cambridge Analytica data misuse saga (which snowballed into a major global scandal this March) is just one of the strands of the committee’s inquiry. Hence they’ve opted to publish multiple reports — the initial one recommending urgent actions for the government and regulators, which will be followed by another report covering the inquiry’s “wider terms of reference” and including a closer look at the role of advertising. (The latter report is slated to land in the fall.)

For now, the committee is suggesting “principle-based recommendations” designed to be “sufficiently adaptive to deal with fast-moving technological developments”. 

Among a very long list of recommendations are:

  • a levy on social media and tech giants to fund a “major investment” in expanding the UK’s data watchdog so the body is able to “attract and employ more technically-skilled engineers who not only can analyse current technologies, but have the capacity to predict future technologies” — with the tech company levy operating in “a similar vein to the way in which the banking sector pays for the upkeep of the Financial Conduct Authority”. Additionally, the committee also wants the government to put forward proposals for an educational levy to be raised by social media companies, “to finance a comprehensive educational framework (developed by charities and non-governmental organisations) and based online”. “Digital literacy should be the fourth pillar of education, alongside reading, writing and maths,” the committee writes. “The DCMS Department should co-ordinate with the Department for Education, in highlighting proposals to include digital literacy, as part of the Physical, Social, Health and Economic curriculum (PSHE). The social media educational levy should be used, in part, by the government, to finance this additional part of the curriculum.” It also wants to see a rolling unified public awareness initiative, part-funded by a tech company levy, to “set the context of social media content, explain to people what their rights over their data are… and set out ways in which people can interact with political campaigning on social media”. “The public should be made more aware of their ability to report digital campaigning that they think is misleading, or unlawful,” it adds.
  • amendments to UK Electoral Law to reflect use of new technologies — with the committee backing the Electoral Commission’s suggestion that “all electronic campaigning should have easily accessible digital imprint requirements, including information on the publishing organisation and who is legally responsible for the spending, so that it is obvious at a glance who has sponsored that campaigning material, thereby bringing all online advertisements and messages into line with physically published leaflets, circulars and advertisements”. It also suggests government should “consider the feasibility of clear, persistent banners on all paid-for political adverts and videos, indicating the source and making it easy for users to identify what is in the adverts, and who the advertiser is”. And urges the government to carry out “a comprehensive review of the current rules and regulations surrounding political work during elections and referenda, including: increasing the length of the regulated period; definitions of what constitutes political campaigning; absolute transparency of online political campaigning; a category introduced for digital spending on campaigns; reducing the time for spending returns to be sent to the Electoral Commission (the current time for large political organisations is six months)”.
  • the Electoral Commission to establish a code for advertising through social media during election periods “giving consideration to whether such activity should be restricted during the regulated period, to political organisations or campaigns that have registered with the Commission”. It also urges the Commission to propose “more stringent requirements for major donors to demonstrate the source of their donations”, and backs its suggestion of a change in the rules covering political spending so that limits are put on the amount of money an individual can donate.
  • a major increase in the maximum fine that can be levied by the Electoral Commission (currently just £20,000) — saying this should rather be based on a fixed percentage of turnover. It also suggests the body should have the ability to refer matters to the Crown Prosecution Service, before their investigations have been completed; and urges the government to consider giving it the power to compel organisations that it does not specifically regulate, including tech companies and individuals, to provide information relevant to their inquiries, subject to due process.
  • a public register for political advertising — “requiring all political advertising work to be listed for public display so that, even if work is not requiring regulation, it is accountable, clear, and transparent for all to see”. So it also wants the government to conduct a review of UK law to ensure that digital campaigning is defined in a way that includes online adverts that use political terminology but are not sponsored by a specific political party.
  • a ban on micro-targeted political advertising to lookalikes online, and a minimum limit for the number of voters sent individual political messages to be agreed at a national level. The committee also suggests the Electoral Commission and the ICO should consider the ethics of Facebook or other relevant social media companies selling lookalike political audiences to advertisers during the regulated period, saying they should consider whether users should have the right to opt out from being included in such lookalike audiences.
  • a recommendation to formulate a new regulatory category for tech companies that is not necessarily either a platform or a publisher, and which “tightens tech companies’ liabilities”.
  • a suggestion that the government consider (as part of an existing review of digital advertising) whether the Advertising Standards Agency could regulate digital advertising. “It is our recommendation that this process should establish clear legal liability for the tech companies to act against harmful and illegal content on their platforms,” the committee writes. “This should include both content that has been referred to them for takedown by their users, and other content that should have been easy for the tech companies to identify for themselves. In these cases, failure to act on behalf of the tech companies could leave them open to legal proceedings launched either by a public regulator, and/or by individuals or organisations who have suffered as a result of this content being freely disseminated on a social media platform.”
  • another suggestion that the government consider establishing a “digital Atlantic Charter as a new mechanism to reassure users that their digital rights are guaranteed” — with the committee also raising concerns that the UK risks a privacy loophole opening up after it leaves the EU, when US-based companies will be able to take UK citizens’ data to the US for processing without the protections afforded by the EU’s GDPR framework (as the UK will then be a third country).
  • a suggestion that a professional “global Code of Ethics” should be developed by tech companies, in collaboration with international governments, academics and other “interested parties” (including the World Summit on Information Society), in order to “set down in writing what is and what is not acceptable by users on social media, with possible liabilities for companies and for individuals working for those companies, including those technical engineers involved in creating the software for the companies”. “New products should be tested to ensure that products are fit-for-purpose and do not constitute dangers to the users, or to society,” it suggests. “The Code of Ethics should be the backbone of tech companies’ work, and should be continually referred to when developing new technologies and algorithms. If companies fail to adhere to their own Code of Ethics, the UK Government should introduce regulation to make such ethical rules compulsory.”
  • the committee also suggests the government avoids using the (charged and confusing) term ‘fake news’ — and instead puts forward an agreed definition of the words ‘misinformation’ and ‘disinformation’. It should also support research into the methods by which misinformation and disinformation are created and spread across the internet, including support for fact-checking. “We recommend that the government initiate a working group of experts to create a credible annotation of standards, so that people can see, at a glance, the level of verification of a site. This would help people to decide on the level of importance that they put on those sites,” it writes.
  • a suggestion that tech companies should be subject to security and algorithmic auditing — with the committee writing: “Just as the finances of companies are audited and scrutinised, the same type of auditing and scrutinising should be carried out on the non-financial aspects of technology companies, including their security mechanisms and algorithms, to ensure they are operating responsibly. The Government should provide the appropriate body with the power to audit these companies, including algorithmic auditing, and we reiterate the point that the ICO’s powers should be substantially strengthened in these respects”. The committee also floats the idea that the Competition and Markets Authority considers conducting an audit of the operation of the advertising market on social media (given the risk of fake accounts leading to ad fraud).
  • a requirement for tech companies to make full disclosure of targeting used as part of advert transparency. The committee says tech companies must also address the issue of shell corporations and “other professional attempts to hide identity in advert purchasing”.

How the government will respond to the committee’s laundry list of recommendations for cleaning up online political advertising remains to be seen, although the issue of Kremlin-backed disinformation campaigns was at least raised publicly by the prime minister last year. Theresa May has been rather quieter, though, on revelations about EU referendum-related data misuse and election law breaches.

While the committee uses the term “tech companies” throughout its report to refer to multiple companies, Facebook specifically comes in for some excoriating criticism, with the committee accusing the company of misleading by omission and actively seeking to impede the progress of the inquiry.

It also reiterates its call — for something like the fifth time at this point — for founder Mark Zuckerberg to give evidence. Facebook has provided several witnesses to the committee, including its CTO, but Zuckerberg has declined its requests he appear, even via video link. (And even though he did find time for a couple of hours in front of the EU parliament back in May.)

The committee writes:

We undertook fifteen exchanges of correspondence with Facebook, and two oral evidence sessions, in an attempt to elicit some of the information that they held, including information regarding users’ data, foreign interference and details of the so-called ‘dark ads’ that had reached Facebook users. Facebook consistently responded to questions by giving the minimal amount of information possible, and routinely failed to offer information relevant to the inquiry, unless it had been expressly asked for. It provided witnesses who have been unwilling or unable to give full answers to the Committee’s questions. This is the reason why the Committee has continued to press for Mark Zuckerberg to appear as a witness as, by his own admission, he is the person who decides what happens at Facebook.

Tech companies are not passive platforms on which users input content; they reward what is most engaging, because engagement is part of their business model and their growth strategy. They have profited greatly by using this model. This manipulation of the sites by tech companies must be made more transparent. Facebook has all of the information. Those outside of the company have none of it, unless Facebook chooses to release it. Facebook was reluctant to share information with the Committee, which does not bode well for future transparency. We ask, once more, for Mr Zuckerberg to come to the Committee to answer the many outstanding questions to which Facebook has not responded adequately, to date.

The committee suggests that the UK’s Defamation Act 2013 means Facebook and other social media companies have a duty to publish and to follow transparent rules — arguing that the Act has provisions which state that “if a user is defamed on social media, and the offending individual cannot be identified, the liability rests with the platform”.

“We urge the government to examine the effectiveness of these provisions, and to monitor tech companies to ensure they are complying with court orders in the UK and to provide details of the source of disputed content — including advertisements — to ensure that they are operating in accordance with the law, or any future industry Codes of Ethics or Conduct. Tech companies also have a responsibility to ensure full disclosure of the source of any political advertising they carry,” it adds.

The committee is especially damning of Facebook’s actions in Burma (as indeed many others have also been), condemning the company’s failure to prevent its platform from being used to spread hate and fuel violence against the Rohingya ethnic minority — and citing the UN’s similarly damning assessment.

“Facebook has hampered our efforts to get information about their company throughout this inquiry. It is as if it thinks that the problem will go away if it does not share information about the problem, and reacts only when it is pressed. Time and again we heard from Facebook about mistakes being made and then (sometimes) rectified, rather than designing the product ethically from the beginning of the process. Facebook has a ‘Code of Conduct’, which highlights the principles by which Facebook staff carry out their work, and states that employees are expected to “act lawfully, honestly, ethically, and in the best interests of the company while performing duties on behalf of Facebook”. Facebook has fallen well below this standard in Burma,” it argues.

The committee also directly blames Facebook’s actions for undermining the UK’s international aid efforts in the country — writing:

The United Nations has named Facebook as being responsible for inciting hatred against the Rohingya Muslim minority in Burma, through its ‘Free Basics’ service. It provides people free mobile phone access without data charges, but is also responsible for the spread of disinformation and propaganda. The CTO of Facebook, Mike Schroepfer, described the situation in Burma as “awful”, yet Facebook cannot show us that it has done anything to stop the spread of disinformation against the Rohingya minority.

The hate speech against the Rohingya—built up on Facebook, much of which is disseminated through fake accounts—and subsequent ethnic cleansing, has potentially resulted in the success of DFID’s [the UK Department for International Development] aid programmes being greatly reduced, based on the qualifications they set for success. The activity of Facebook undermines international aid to Burma, including the UK Government’s work. Facebook is releasing a product that is dangerous to consumers and deeply unethical. We urge the Government to demonstrate how seriously it takes Facebook’s apparent collusion in spreading disinformation in Burma, at the earliest opportunity. This is a further example of Facebook failing to take responsibility for the misuse of its platform.

We reached out to Facebook for a response to the committee’s report, and in an email statement — attributed to Richard Allan, VP policy — the company told us:

The Committee has raised some important issues and we were pleased to be able to contribute to their work.

We share their goal of ensuring that political advertising is fair and transparent and agree that electoral rule changes are needed. We have already made all advertising on Facebook more transparent. We provide more information on the Pages behind any ad and you can now see all the ads any Facebook Page is running, even if they are not targeted at you. We are working on ways to authenticate and label political ads in the UK and create an archive of those ads that anyone can search. We will work closely with the UK Government and Electoral Commission as we develop these new transparency tools.

We’re also investing heavily in both people and technology to keep bad content off our services. We took down 2.5 million pieces of hate speech and disabled 583 million fake accounts globally in the first quarter of 2018 — much of it before anyone needed to report this to Facebook. By using technology like machine learning, artificial intelligence and computer vision, we can detect more bad content and take action more quickly.

The statement makes no mention of Burma. Nor indeed of the committee’s suggestion that social media firms should be taxed to pay for defending democracy and civil society against the damaging excesses of their tools.

On Thursday, rolling out its latest ads transparency features, Facebook announced that users could now see the ads a Page is running across Facebook, Instagram, Messenger and its partner network “even if those ads aren’t shown to you”.

To do so, users have to log into Facebook, visit any Page and select “Info and Ads”. “You’ll see ad creative and copy, and you can flag anything suspicious by clicking on ‘Report Ad’,” it added.

It also flagged a ‘more Page information’ feature that users can use to get more details about a Page such as recent name changes and the date it was created.

“The vast majority of ads on Facebook are run by legitimate organizations — whether it’s a small business looking for new customers, an advocacy group raising money for their cause, or a politician running for office. But we’ve seen that bad actors can misuse our products, too,” Facebook wrote, adding that the features being announced “are just the start” of its efforts “to improve” and “root out abuse”.

Brexit drama

The committee’s interim report was pushed out at the weekend ahead of the original embargo as a result of yet more Brexiteer-induced drama — after the campaign director of the UK’s official Brexit supporting ‘Vote Leave’ campaign, Dominic Cummings, deliberately broke the embargo by publishing the report on his blog in order to spin his own response before the report had been widely covered by the media.

Last week the Electoral Commission published its own report following a multi-month investigation into Brexit campaign spending. The oversight body concluded that Vote Leave broke UK electoral law by massively overspending via a joint working arrangement with another Brexit-supporting campaign (BeLeave) — an arrangement via which almost half a million pounds’ worth of additional targeted Facebook ads were co-ordinated by Vote Leave in the final days of the campaign, when it had already reached its spending limit. (Facebook finally released some of the 2016 Brexit campaign ads that had been microtargeted at UK voters via its platform to the committee, which published these ads last week.) Many of Vote Leave’s (up to that point ‘dark’) adverts show the official Brexit campaign generating fake news of its own, with ads that, for example, claim Turkey is on the cusp of joining the EU and flooding the UK with millions of migrants, or spread the widely debunked claim that the UK would be able to spend £350M more per week on the NHS if it left the EU.

In general, dog whistle racism appears to have been Vote Leave’s preferred ‘persuasion’ tactic of microtargeted ad choice — and thanks to Facebook’s ad platform, no one other than each ad’s chosen recipients would have been likely to see the messages.

Cummings also comes in for a major roasting in the committee’s report after his failure to appear before it to answer questions, despite multiple summons (including an unprecedented House of Commons motion ordering him to appear — which he nonetheless also ignored).

“Mr Cummings’ contemptuous behaviour is unprecedented in the history of this Committee’s inquiries and underlines concerns about the difficulties of enforcing co-operation with Parliamentary scrutiny in the modern age,” it writes, adding: “We will return to this issue in our Report in the autumn, and believe it to be an urgent matter for consideration by the Privileges Committee and by Parliament as a whole.”

On his blog, Cummings claims the committee offered him dates they knew he couldn’t do; slams its investigation as ‘fake news’; moans copiously that the committee is made up of Remain-supporting MPs; and argues that the investigation should be conducted under oath — as his major defense seems to be that key campaign whistleblowers are lying. That’s despite ex-Cambridge Analytica employee Chris Wylie and ex-BeLeave treasurer Shahmir Sanni having provided copious amounts of documentary evidence to back up their claims; evidence which both the Electoral Commission and the UK’s data watchdog, the ICO, have found convincing enough to announce some of the largest fines they can issue. In the latter case, the ICO announced its intention to fine Facebook the maximum penalty possible (under the prior UK data protection regime) for failing to protect users’ information. The data watchdog is continuing to investigate multiple aspects of what is a hugely complex (technically and politically) online ad ops story, and earlier this month commissioner Elizabeth Denham called for an ‘ethical pause’ on the use of online ad platforms for microtargeting voters with political messages, arguing — as the DCMS committee is — that there are very real and very stark risks to democratic processes.

There’s much, much more self-pitying whining on Cummings’ blog for anyone who wants to make themselves queasy reading it. But do also bear in mind the Electoral Commission’s withering criticism of the Vote Leave campaign specifically — not so much for a failure to co-operate with its investigation as for intentional obstruction. (See: Pages 12-13 of the Commission’s report.)