Category Archives: Twitter

Twitter tests out ‘annotations’ in Moments

Twitter is trying out a small new change to Moments that would provide contextual information within its curated stories. Spotted by Twitter user @kwatt and confirmed by a number of Twitter product team members, the little snippets appear sandwiched between tweets in a Moment.

Called “annotations” — not to be confused with Twitter’s metadata annotations of yore — the morsels of info aim to clarify and provide context for the tweets that comprise Twitter’s curated trending content. According to the product team, they are authored by Twitter’s curation group.

In our testing, annotations only appear in the mobile app, not on the same Moments on desktop. So far we’ve seen them on a story about the NFL, one about MoviePass and another about staffing changes in the White House.

While it’s a tiny feature tweak, annotations are another sign that Twitter is exploring ways to infuse its platform with value and veracity in the face of what so far appears to be an intractable misinformation crisis.

Macaw will curate Twitter for you, help expand your network

Twitter today inserts activity-based tweets into your timeline, alerting you to things like the popular tweets liked by people you follow, or those Twitter accounts that a lot of people in your network have just started to follow. These alerts can be useful, but their timing is sporadic and they can be easily missed. Plus, if you turn off Twitter’s algorithmic timeline (as may be possible for some), you’ll lose access to this sort of info. A new Twitter app called Macaw aims to help.

Macaw, which recently launched on Product Hunt, offers a similar set of information to what Twitter surfaces, with a few changes.

Macaw works by first pulling in a list of people you follow. It then tracks what tweets they like throughout the day and turns that into a feed of tweets that were most popular. Macaw does the same thing for users, too – that is, it shows you if a number of people have suddenly started following someone, for example.

Beyond this, Macaw will also show you the “Latest” tweets receiving likes from your network in a separate tab, as well as tweets where someone has asked a question.

This “Asks” section will highlight tweets where someone on Twitter has asked something like “Does anyone know…?” or “what are the best…?”, for example. This can help you find new conversations to participate in and help you expand your network.

The end result is a curated version of Twitter, where you can catch up with what’s important, without so much endless scrolling through your timeline.

Even if you’re on Twitter itself a lot, Macaw can still be useful.

Its default setting will hide top tweets posted by someone in your network – because, chances are, you’ve already read them. With this setting turned on, you’ll only be shown top tweets by users you don’t yet follow.

You can also configure how many likes are required for something to be considered a “top” tweet. By default, this is set to 25, but you can change it to 10, 100, or even 1,000. You can also adjust the maximum age of the tweets shown, from the default of 6 hours to 2, 24, or 96 hours, depending on how often you check in.
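As described, Macaw’s “top tweet” filter boils down to a like-count threshold, a time window, and an optional rule to hide tweets from people you already follow. Here is a minimal sketch of that logic — the `Tweet` structure, field names, and defaults are our own illustration of the settings described above, not Macaw’s actual code:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Tweet:
    author: str
    text: str
    likes_from_network: int  # likes observed from people you follow
    posted_at: datetime


def top_tweets(tweets, min_likes=25, max_age_hours=6,
               hide_followed=True, following=()):
    """Keep tweets that are popular enough, recent enough, and
    (optionally) not authored by someone you already follow."""
    cutoff = datetime.utcnow() - timedelta(hours=max_age_hours)
    followed = set(following)
    return [
        t for t in tweets
        if t.likes_from_network >= min_likes
        and t.posted_at >= cutoff
        and not (hide_followed and t.author in followed)
    ]
```

The 25-like and 6-hour defaults mirror the app’s defaults mentioned above; passing different values corresponds to changing them in Macaw’s settings.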

The app, however, is not a Twitter client.

That is, it doesn’t take the place of Twitter or other apps like Twitterrific or Tweetbot, as you can’t use it to post tweets, access direct messages, update your profile, or follow users. You’ll need a different app, like the main Twitter client, for that. But a tap in Macaw will launch Twitter for you, making the transition feel seamless.

The app was built by Zachary Hamed, who had previously built Daily 140 for tracking a similar set of data, shared via email. He says he started building Macaw as a side project and launched it into private beta in August. It doesn’t currently have a business model, beyond a plan to maybe charge for additional features later on.

In some ways, Macaw is similar to Nuzzel, another Twitter summarization app that provides a list of top links that your network is sharing and discussing. But many of the best things on Twitter aren’t links, they’re individual tweets or tweetstorms. (Like that recent Google+ rant, for example).

Hamed admits Nuzzel was a source of inspiration for Macaw (a bird that screams constantly, by the way. Ha!)

“I was actually inspired by those notifications in the main Twitter app since I’ve always found them fascinating and by Nuzzel, which is one of my most used apps – and whose founder Jonathan I really respect,” Hamed says. “I think there is a lot of hidden insight to be found in posts people have liked and who they start following, especially if there is momentum around certain names or topics. As of now, Twitter only shares one to two of those recommendations, not all of it,” he adds.

While we do like Macaw, the app, one thing we’re not a fan of is the fake reviews on the Macaw website, which pretend to be from @Jack, Mary Meeker, and Chamath Palihapitiya. It’s obviously meant as a joke, but it falls flat – Macaw doesn’t need this sort of false promotion, and it could confuse less savvy users.

Macaw is a free download on the App Store.

 

This perfect song reminds us why it’s not such a scary time for men

It's a super scary time for men, according to our president. But one musician is out to prove just how ridiculous an idea that is with her latest song.

On Monday, Lynzy Lab tweeted a video of herself singing an anthem for all of the scared men and boys. "It's a really scary time for dudes right now. So I wrote a song about it. Go #vote friends!  #TheResistance #1Thing @ACLU @WC4SJ #letsmakesomenoise," she wrote.

— Lynzy Lab (@mercedeslynz) October 8, 2018


David Harbour agrees to officiate a wedding as Hellboy for 666,000 retweets

David Harbour is back into the retweet game.

The Hellboy star promised to officiate film reporter Spencer Perry's wedding in full Hellboy costume — that is, if Perry gets 666,000 retweets. 

666k
Of this tweet.
Big Red officiates. Full Gear. In his saintly best.
Impossible number?
Think of how difficult it will be for me to get this character ordained by a Christian church😈🙄
(P.S. - I’ll knock off 500k if you can get @artofmmignola to read a poem at the service) https://t.co/cnzHrcnsOo

— David Harbour (@DavidKHarbour) October 6, 2018

Harbour's done similar stunts before, but this one has a particularly fun twist: He pledged to knock 500K retweets off the minimum if Hellboy creator Mike Mignola promised to read a poem at the ceremony.


A Twitter convo about self-appreciation was the best thing on the internet this week

This is One Good Thing, a weekly column where we tell you about one of the few nice things that happened this week.

This week, a bunch of people had a really nice conversation about change, self-acceptance, and personal growth. On Twitter. (Yes, that is possible.) 

It all started when writer and 112BK host Ashley Ford posed a question to her followers on Sunday. "What's something you hated about yourself as a kid or teenager that you now consider a strength?" she asked.

What's something you hated about yourself as a kid or teenager that you now consider a strength?

— Ashley C. Ford (@iSmashFizzle) September 30, 2018


Twitter widens its view of bad actors to fight election fiddlers

Twitter has announced more changes to its rules to try to make it harder for people to use its platform to spread politically charged disinformation and thereby erode democratic processes.

In an update on its “elections integrity work” yesterday, the company flagged several new changes to the Twitter Rules which it said are intended to provide “clearer guidance” on behaviors it’s cracking down on.

In the problem area of “spam and fake accounts”, Twitter says it’s responding to feedback that, to date, it’s been too conservative in how it thinks about spammers on its platform, and only taking account of “common spam tactics like selling fake goods”. So it’s expanding its net to try to catch more types of “inauthentic activity” — by taking into account more factors when determining whether an account is fake.

“As platform manipulation tactics continue to evolve, we are updating and expanding our rules to better reflect how we identify fake accounts, and what types of inauthentic activity violate our guidelines,” Twitter writes. “We now may remove fake accounts engaged in a variety of emergent, malicious behaviors.”

Some of the factors it says it will now also take into account when making a ‘spammer or not’ judgement are:

  • Use of stock or stolen avatar photos
  • Use of stolen or copied profile bios
  • Use of intentionally misleading profile information, including profile location

Kremlin-backed online disinformation agents have been known to use stolen photos for avatars, and to claim their accounts are US-based despite the spambots being operated out of Russia. So it’s pretty clear why Twitter is cracking down on fake profile pics and location claims.
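Taken together, the signals Twitter lists amount to a multi-factor scoring problem rather than a single rule. A toy sketch of how such signals might be combined — the signal names, weights, and threshold here are invented for illustration, not Twitter’s actual detection system:

```python
# Toy heuristic: combine weighted account signals into a spam score.
# The weights and threshold are illustrative assumptions only.
SIGNAL_WEIGHTS = {
    "stock_or_stolen_avatar": 0.4,
    "copied_profile_bio": 0.3,
    "misleading_location": 0.3,
    "sells_fake_goods": 0.6,  # the older, "conservative" signal
}


def spam_score(signals):
    """signals: dict mapping signal name -> bool for one account."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))


def looks_fake(signals, threshold=0.6):
    """Flag an account when its combined signal weight crosses the bar."""
    return spam_score(signals) >= threshold
```

The point of the sketch is the policy shift: under the old approach only the `sells_fake_goods`-style signal would trip the flag, while the expanded approach lets several weaker profile signals add up to a removal.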

Less clear: Why it took so long for Twitter’s spam detection systems to be able to take account of these suspicious signals. But, well, progress is still progress.

(Intentionally satirical ‘Twitter fakes’ (aka parody accounts) should not be caught in this net, as Twitter has had a longstanding policy of requiring parody and fan accounts to be directly labeled as such in their Twitter bios.)

Pulling the threads of spambots

In another major-sounding policy change, the company says it’s targeting what it dubs “attributed activity” — so that when/if it “reliably” identifies an entity behind a rule-breaking account it can apply the same penalty actions against any additional accounts associated with that entity, regardless of whether the accounts themselves were breaking its rules or not.

This is potentially a very important change, given that spambot operators often create accounts long before they make active malicious use of them, leaving these spammer-in-waiting accounts entirely dormant, or doing something totally innocuous, sometimes for years before they get deployed for an active spam or disinformation operation.

So if Twitter is able to link an active disinformation campaign with spambots lurking in waiting to carry out the next operation it could successfully disrupt the long term planning of election fiddlers. Which would be great news.
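The “attributed activity” change can be pictured as propagating one enforcement action across every account linked to the same entity, dormant accounts included. A minimal sketch of that propagation — the entity-to-account mapping and the suspend step are hypothetical, purely to illustrate the policy’s mechanics:

```python
# Hypothetical sketch: once an entity is "reliably" attributed,
# the same penalty is applied to every account tied to it.
entity_accounts = {
    "troll_farm_x": ["@active_spammer", "@dormant_1", "@dormant_2"],
}


def enforce_against_entity(entity, suspended):
    """Suspend all accounts associated with an attributed entity,
    even those that have not themselves broken any rule yet."""
    for account in entity_accounts.get(entity, []):
        suspended.add(account)
    return suspended
```

The interesting consequence, as the text notes, is that `@dormant_1` and `@dormant_2` get swept up before they are ever activated — which is what would disrupt the long-term planning of election fiddlers.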

Albeit, the devil will be in the detail of how Twitter enforces this new policy — such as how high a bar it’s setting itself with the word “reliably”.

Obviously there’s a risk that, if defined too loosely, Twitter could shut innocent newbs off its platform by incorrectly connecting them to a previously identified bad actor. Which it clearly won’t want to do.

The hope is that behind the scenes Twitter has got better at spotting patterns of behavior it can reliably associate with spammers — and will thus be able to put this new policy to good use.

There’s certainly good external research being done in this area. For example, recent work by Duo Security has yielded an open source methodology for identifying account automation on Twitter.

The team also dug into botnet architectures — and were able to spot a cryptocurrency scam botnet which Twitter had previously been recommending other users follow. So, again hopefully, the company has been taking close note of such research, and better botnet analysis underpins this policy change.

There’s also more on this front: “We are expanding our enforcement approach to include accounts that deliberately mimic or are intended to replace accounts we have previously suspended for violating our rules,” Twitter also writes.

This additional element is also notable. It essentially means Twitter has given itself a policy allowing it to act against entire malicious ideologies — i.e. against groups of people trying to spread the same sort of disinformation, not just a single identified bad actor connected to a number of accounts.

Take the example of Alex Jones, the fake news peddler behind InfoWars, whom Twitter finally permanently banned last month. Under the new policy, any attempt by Jones’ followers to create ‘in the style of’ copycat InfoWars accounts — i.e. to try to indirectly return Jones’ disinformation to Twitter — would, or at least could, face the same enforcement action already meted out to Jones’ own accounts.

Though Twitter does have a reputation for inconsistently applying its own policies. So it remains to be seen how it will, in fact, act.

And how enthusiastic it will be about slapping down disinformation ideologies — given its longstanding position as a free speech champion, and in the face of criticism that it is ‘censoring’ certain viewpoints.

Hacked materials

Another change being announced by Twitter now is a clampdown on the distribution of hacked materials via its platform.

Leaking hacked emails of political officials at key moments during an election cycle has been a key tactic for democracy fiddlers in recent years — such as the leak of emails sent by top officials in the Democratic National Committee during the 2016 US presidential election.

Or the last-minute email leak in France during last year’s presidential election.

Twitter notes that its rules already prohibit the distribution of hacked material which contains “private information or trade secrets, or could put people in harm’s way” — but says it’s now expanding “the criteria for when we will take action on accounts which claim responsibility for a hack, which includes threats and public incentives to hack specific people and accounts”.

So it seems, generally, to be broadening its policy to cover a wider support ecosystem around election hackers — or hacking more generally.

Twitter’s platform does frequently host hackers — who use anonymous Twitter accounts to crow about their hacks and/or direct attack threats at other users…

Presumably Twitter will be shutting that kind of hacker activity down in future.

Though it’s unclear what the new policy might mean for a hacktivist group like Anonymous (which is very active on Twitter).

Twitter’s new policy might also have repercussions for Wikileaks — which was directly involved in the spreading of the DNC leaked emails, for example, yet nonetheless has not previously been penalized by Twitter. (And thus remains on its platform so far.)

One also wonders how Twitter might respond to a future tweet from, say, US president Trump encouraging the hacking of a political opponent….

Safe to say, this policy could get pretty murky and tricky for Twitter.

“Commentary about a hack or hacked materials, such as news articles discussing a hack, are generally not considered a violation of this policy,” it also writes, giving itself a bit of wiggle room on how it will apply (or not apply) the policy.

Daily spam decline

In the same blog post, Twitter gives an update on detection and enforcement actions related to its stated mission of improving “conversational health” and information integrity on its platform — including reiterating the action it took against Iran-based disinformation accounts in August.

It also notes that it removed ~50 accounts that had been misrepresenting themselves as members of various state Republican parties that same month and using Twitter to share “media regarding elections and political issues with misleading or incorrect party affiliation information”.

“We continue to partner closely with the RNC, DNC, and state election institutions to improve how we handle these issues,” it adds. 

On the automated detections front — where Twitter announced a fresh squeeze just three months ago — it reports that in the first half of September it challenged an average of 9.4 million accounts per week. (Though it does not specify how many of the challenged accounts turned out to be bona fide spammers, or how many challenges went unanswered.)

It also reports a continued decline in the average number of spam-related reports from users — down from an average of ~17,000 daily in May, to ~16,000 daily in September.

This summer it introduced a new registration process for developers requesting access to its APIs — intended to prevent the registration of what it describes as “spammy and low quality apps”.

Now it says it’s suspending, on average, ~30,000 applications per month as a result of efforts “to make it more difficult for these kinds of apps to operate in the first place”.

Elsewhere, Twitter also says it’s working on new proprietary systems to identify and remove “ban evaders at speed and scale”, as part of ongoing efforts to improve “proactive enforcements against common policy violations”.

In the blog, the company flags a number of product changes it has made this year too, including a recent change it announced two weeks ago which brings back the chronological timeline (via a setting users can toggle) — and which it now says it has rolled out.

“We recently updated the timeline personalization setting to allow people to select a strictly reverse-chronological experience, without recommended content and recaps. This ensures you have more control of how you experience what’s happening on our service,” it writes, saying this is also intended to help people “stay informed”.

Though, given that the chronological timeline is still not the default on Twitter — algorithmically surfaced ‘interesting tweets’ remain what’s most actively pushed at users — it seems unlikely this change will have a major impact on mitigating disinformation campaigns.

Letting the subset of users who know they can change a setting stay better informed is not how election fiddling will be defeated.

US midterm focus

Twitter also says it’s continuing to roll out new features to show more context around accounts — giving the example of the launch of election labels earlier this year, as a beta for candidates in the 2018 U.S. midterm elections. Though it’s clearly got lots of work to do on that front — given all the other elections continuously taking place in the rest of the world.

With an eye on the security of the US midterms as a first focus, Twitter says it will send election candidates a message prompt to ensure they have two-factor authentication enabled on their account to boost security.

“We are offering electoral institutions increased support via an elections-specific support portal, which is designed to ensure we receive and review critical feedback about emerging issues as quickly as possible. We will continue to expand this program ahead of the elections and will provide information about the feedback we receive in the near future,” it adds, again showing that its initial candidate support efforts are US-focused.

On the civic engagement front, Twitter says it is also actively encouraging US-based users to vote and to register to vote, as well as aiming to increase access to relevant voter registration info.

“As part of our civic engagement efforts, we are building conversation around the hashtag #BeAVoter with a custom emoji, sending U.S.-based users a prompt in their home timeline with information on how to register to vote, and drawing attention to these conversations and resources through the top US trend,” it writes. “This trend is being promoted by @TwitterGov, which will create even more access to voter registration information, including election reminders and an absentee ballot FAQ.”

Someone mashed up Kavanaugh’s testimony with ‘Pulp Fiction’ and it works disturbingly well

Are you a big Pulp Fiction fan? Let us ruin that for you.

The production company Elara Pictures posted a disturbing video mashup to Instagram Friday: the hamburger scene from Pulp Fiction alongside a few choice moments from Brett Kavanaugh's petulant testimony before the Senate Judiciary Committee.

Remember how, in the movie, Samuel L. Jackson says "Check out the big brain on Brett?" And remember when Kavanaugh (a Brett) bragged about going to Yale Law School yesterday? You see where this is going.

CHECK OUT THE BIG BRAIN ON BRETT

— A post shared by Elara (@elarapictures)


Congrats to Brett Kavanaugh on getting to be angry

After Dr. Christine Blasey Ford's heart-wrenching and credible testimony before the Senate Judiciary Committee on Thursday, pundits wondered whether Brett Kavanaugh would adopt a gentler tone during his own appearance.

He did not. Instead, he leaned in to being angry.

He spoke in a near-constant loud monotone. At times, he yelled. At other points, his face scrunched up with rage, appearing nearly vicious. Unlike Ford, who was sympathetic, measured, and vulnerable, Kavanaugh was aggressive. Unabashedly emotional. Self-righteously angry even when he began to cry.

Kavanaugh lashes out at Democrats over allegations and attacks: “You may defeat me in the final vote, but you’ll never get me to quit” pic.twitter.com/h5RmNhdqmP

— Marcus Gilmer (@marcusgilmer) September 27, 2018


Twitter says it will now ask everyone for feedback about its policy changes, starting today

Twitter says it’s going to change the way it creates rules regarding the use of its service to also now include community feedback. Previously, the company followed its own policy development process, including taking input from its Trust and Safety Council and various experts. Now, it says it’s going to try something new: it’s going to ask its users.

According to an announcement published this morning, Twitter says it will ask everyone for feedback on a new policy before it becomes part of Twitter’s official Rules.

It’s kicking off this change by asking for feedback on its new policy around dehumanizing language on Twitter, it says.

Over the past three months, Twitter has been working to create a policy that addresses language that “makes someone feel less than human” – something that can have real-world repercussions, including “normalizing serious violence,” the company explains.

To some extent, dehumanizing language is covered under Twitter’s existing hateful conduct policy, which addresses hate speech that includes the promotion of violence, or direct attacks or threats against people based on factors like their race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.

However, there are still ways to be abusive on Twitter outside of those guidelines, and dehumanizing language is one of them.

The new policy is meant to expand the hateful conduct policy to also prohibit language that dehumanizes others based on “their membership in an identifiable group, even when the material does not include a direct target,” says Twitter.

The company isn’t soliciting user feedback over email or Twitter, however.

Instead, it has launched a survey.

Available until October 9 at 6:00 AM PT, the survey asks only a few questions after presenting the new policy’s language for you to read through.

For example, it asks users to rate the clarity of the policy itself on a scale of one to five. It then gives you 280 characters max – just like on Twitter – to suggest how the policy could be improved. Similarly, you have 280 characters to offer examples of speech that contribute to a healthy conversation, but may violate this policy – Twitter’s attempt at finding any loopholes or exceptions.

And it gives you another 280 characters to offer additional feedback or thoughts.

You also have to provide your age, gender, (optionally) your username, and say if you’re willing to receive an email follow-up if Twitter has more questions about your responses.

Twitter doesn’t say how much community feedback will guide its decision-making, though. It simply says that after the feedback, it will then continue with its regular process, which passes the policy through a cross-functional working group, including members of its policy development, user research, engineering, and enforcement teams.

The idea to involve the community in policy-making is a notable change, and one that could make people feel more involved with the definition of the rules, and therefore – perhaps! – more likely to respect them.

But Twitter’s issues around abuse and hate speech on its network don’t really stem from poor policies – its policies actually spell things out fairly well, in many cases, about what should be allowed and what should not.

Twitter’s problems tend to stem from lax enforcement. The company has far too often declined to penalize or ban users whose content is clearly hateful in its nature, in an effort to remain an open platform for “all voices” – including those with extreme ideologies. Case in point: it was effectively the last of the large social platforms to ban the abusive content posted by Alex Jones and his website Infowars.

Users also regularly complain that they have been subject to tweets that violate Twitter guidelines and rules, but no action is taken.

It’s interesting, at times, to consider how differently Twitter could have evolved if community moderation – similar to the moderation on Reddit or even the moderation that takes place on open source Twitter clone Mastodon – had been a part of Twitter’s service from day one. Or how things would look if marginalized groups and those who are often victims of harassment and hate speech had been involved directly with building the platform in the early days. Would Twitter be a different place?

But that’s not where we are.

The new dehumanization policy Twitter is asking about is below:

Twitter’s Dehumanization Policy

You may not dehumanize anyone based on membership in an identifiable group, as this speech can lead to offline harm.

Definitions:

Dehumanization: Language that treats others as less than human. Dehumanization can occur when others are denied of human qualities (animalistic dehumanization) or when others are denied of human nature (mechanistic dehumanization). Examples can include comparing groups to animals and viruses (animalistic), or reducing groups to their genitalia (mechanistic).

Identifiable group: Any group of people that can be distinguished by their shared characteristics such as their race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, serious disease, occupation, political beliefs, location, or social practices.

Zendaya (Meechee) is a big fan of the ‘Zendaya is Meechee’ meme

One need not understand memes to enjoy them fully, which is lucky because they become more nonsensical by the day. 

The latest: Zendaya is Meechee, a meme based on a fake song about a promotional poster for the upcoming animated film Smallfoot, which is about a group of Yeti who discover that humans are real. Simple!

The song was written by YouTuber Gabriel Gundacker, whom you might also recognize from this flawless series of Vines. As songs go, it is almost unbearably catchy, to the extent that you should think carefully before viewing. Do you want to have "Zendaya is Meechee" stuck in your head for the rest of the day, or perhaps the week? If so, proceed.
