Author: John Biggs

Robots can develop prejudices just like humans

In a fascinating study by researchers at Cardiff University and MIT, we learn that robots can develop prejudices when working together. The robots, which ran inside a teamwork simulator, expressed prejudice against other robots not on their team. In short, write the researchers, “groups of autonomous machines could demonstrate prejudice by simply identifying, copying and learning this behavior from one another.”

To test the theory, researchers ran a simple game in a simulator. The game involved donating to parties outside or inside the robot’s personal group based on reputation as well as donation strategy. They were able to measure the level of prejudice against outsiders. As the simulation ran, they saw a rise in prejudice against outsiders over time.
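The paper does not publish its simulation code, but the mechanics it describes – donate, observe payoffs, copy the strategies of higher scorers – can be sketched in a few lines. Everything below (population size, payoff values, the single-propensity strategy) is an illustrative assumption, not the study's actual model:

```python
import random

def run_simulation(agents=100, groups=4, rounds=200, seed=0):
    rng = random.Random(seed)
    # Each agent belongs to a group and has a propensity to donate
    # to out-group partners ("p_out"); in-group partners always receive.
    pop = [{"group": i % groups, "p_out": rng.random()} for i in range(agents)]
    for _ in range(rounds):
        payoffs = [0.0] * agents
        # Donation phase: each agent meets one random partner.
        for i, agent in enumerate(pop):
            j = rng.randrange(agents)
            if i == j:
                continue
            in_group = pop[j]["group"] == agent["group"]
            if in_group or rng.random() < agent["p_out"]:
                payoffs[i] -= 1.0  # cost to the donor
                payoffs[j] += 2.0  # benefit to the recipient
        # Imitation phase: copy the strategy of a better-scoring random
        # agent -- the "copying and learning" the researchers highlight.
        for i, agent in enumerate(pop):
            j = rng.randrange(agents)
            if payoffs[j] > payoffs[i]:
                agent["p_out"] = pop[j]["p_out"]
    # Mean willingness to donate outside the group (lower = more prejudice).
    return sum(agent["p_out"] for agent in pop) / agents
```

Because donating to outsiders carries a cost the donor never recoups in this toy setup, imitating higher scorers tends to drive out-group giving down over time – the copying-breeds-prejudice dynamic the researchers describe.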

The researchers found the prejudice was easy to grow in the simulator, a fact that should give us pause as we give robots more autonomy.

“Our simulations show that prejudice is a powerful force of nature and through evolution, it can easily become incentivised in virtual populations, to the detriment of wider connectivity with others. Protection from prejudicial groups can inadvertently lead to individuals forming further prejudicial groups, resulting in a fractured population. Such widespread prejudice is hard to reverse,” said Cardiff University Professor Roger Whitaker. “It is feasible that autonomous machines with the ability to identify with discrimination and copy others could in future be susceptible to prejudicial phenomena that we see in the human population.”

Interestingly, prejudice fell when there were “more distinct subpopulations being present within a population,” an important consideration in human prejudice as well.

“With a greater number of subpopulations, alliances of non-prejudicial groups can cooperate without being exploited. This also diminishes their status as a minority, reducing the susceptibility to prejudice taking hold. However, this also requires circumstances where agents have a higher disposition towards interacting outside of their group,” Professor Whitaker said.

Chilling effects

The removal of InfoWars’ conspiracy content from major platforms brings us to an interesting and important point in the history of online discourse. The current form of Internet content distribution has made it a broadcast medium akin to television or radio. Apps distribute our cat pics, our workouts, and our YouTube rants to specific audiences of followers, audiences that were nearly impossible to monetize in the early days of the Internet but, thanks to gullible marketing managers, can be sold as influencer media.

The source of all of this came from Gen X’s deep love of authenticity. They formed a new vein of content that, after breeding DIY music and zines, begat blogging, and, ultimately, created an endless expanse of user generated content (UGC). In the “old days” of the Internet this Cluetrain-manifesto-waving, post-gatekeeper attitude served the slacker well. But this move from a few institutional voices into a scattered legion of micro-fandoms led us to where we are today: in a shithole of absolute confusion and disruption.

As I wrote a year ago, user generated content supplanted and all but destroyed “real news.” While much of what is published now is true in a journalistic sense, the ability for falsehood and conspiracy to masquerade as truth is the real problem and it is what caused a vacuum as old media slowed down and new media sped up. In this emptiness a number of parasitic organisms sprang up, including sites like Gizmodo and TechCrunch, micro-celebrity systems like Instagram and Vine, and sites catering to a different consumer, sites like InfoWars and Stormfront. It should be noted that InfoWars has been spouting its deep-state meanderings since 1999 and Alex Jones himself was a gravelly-voiced radio star as early as 1996. The Internet allowed any number of niche content services to juke around the gatekeepers of propriety and give folks like Jones and, arguably, TechCrunch founder Mike Arrington, Gawker founder Nick Denton, and countless members of the “Internet-famous club” deep influence over the last two decades’ media landscape.

The last twenty years have been good for UGC. You could get rich making it, get informed reading it, and its traditions and habits began redefining how news-gathering operated. There is no longer just a wall between advertising and editorial. There is also a wall between editorial and the myriad bloggers who write about poop on Mt. Everest. In this sort of world we readers find ourselves at a distinct loss. What is true? What is entertainment? When the Internet is made flesh in the form of Pizzagate shootings and Unite the Right Marches, who is to blame?

The simple answer? We are to blame. We are to blame because we scrolled endlessly past bad news to get to the news that was applicable to us. We trained robots to spoon feed us our opinions and then force feed us associated content. We allowed ourselves to enter into a pact with a devil so invisible and pernicious that it easily convinced the most confused among us to mobilize against Quixotic causes and immobilized the smartest among us who were lulled into a Soma-like sleep of liking, sharing, and smileys. And now a new reckoning is coming. We have come full circle.

Once upon a time the old gatekeepers were careful to let only carefully controlled views and opinions out over the airwaves. The medium was so immediate that in the 1940s the networks forbade the transmission of recordings and forced reporters to offer only live events. This was wonderful if you had the time to mic a children’s choir at Christmas, but the rigidity was bad for a reporter’s health. Take William Shirer and Edward R. Murrow’s complaints about being unable to record and play back bombing raids in Nazi-held territories – their chafing at old ideas is almost palpable to modern bloggers.

There were other handicaps to the ban on recording that hampered us in taking full advantage of this new medium in journalism. On any given day there might be several developments, each of which could have been recorded as it happened and then put together and edited for the evening broadcast. In Berlin, for example, there might be a bellicose proclamation, troop movements through the capital, sensational headlines in the newspapers, a protest by an angry ambassador, a fiery speech by Hitler, Goring or Goebbels threatening Nazi Germany’s next victim—all in the course of the day. We could have recorded them at the moment they happened and put them together for a report in depth at the end of the day. Newspapers could not do this. Only radio could. But [CBS President] Paley forbade it.

Murrow and I tried to point out to him that the ban on recording was not only hampering our efforts to cover the crisis in Europe but would make it impossible to really cover the war, if war came. In order to broadcast live, we had to have a telephone line leading from our mike to a shortwave transmitter. You could not follow an advancing or retreating army dragging a telephone line along with you. You could not get your mike close enough to a battle to cover the sounds of combat. With a compact little recorder you could get into the thick of it and capture the awesome sounds of war.

And so now instead of CBS and the Censorship Bureau we have Facebook and Twitter. Instead of calling for the ability to record and play back an event we want permission to offer our own slants on events, no matter how far removed we are from the action. Instead of working diligently to spread only the truth, we consume the truth as others know it. And that’s what we are now chafing against: the commercialization and professionalization of user generated content.

Every medium goes through this confusion. From Penny Dreadfuls to Pall Mall sponsoring nearly every single new television show in the 1940s, media has grown, entered a disruptive phase that changes all media around it, and is then curtailed into boredom and commoditization. It is important to remember that we are in the era of Peak TV not because we all have more time to watch 20 hours of Breaking Bad. We are in Peak TV because we have gotten so good at making good shows – and the average consumer is ravenous for new content – that there is no financial reason not to take a flyer on a miniseries. In short, it’s gotten boring to make good TV.

And so we are now entering the latest stage of Internet content: the blowback. This blowback is not coming from governments. Trump, for his part, sees something wrong but cannot or will not verbalize it past the idea of “Fake News.” There is absolutely a Fake News problem but it is not what he thinks it is. Instead, the Fake News problem is rooted in the idea that all content deserves equal respect. My Medium post is as good as a CNN report, which is as good as an InfoWars screed about pedophiles on Mars. In a world defined by free speech, all speech is protected. Until, of course, it affects the bottom line of the company hosting it.

So Facebook and Twitter are walking a thin line. They want to remain true to the ancillary Gen X credo that can be best described as “garbage in, garbage out,” but many of their users have taken that deeply open invitation to share their lives far too openly. These platforms have come to define personalities. They have come to define news cycles. They have driven men and women into hiding and they have given the trolls weapons they never had before, including the ability to destroy media organizations at will. They don’t want to censor, but now that they have shareholders, they simply must.

So get ready for the next wave of media. And the next. And the next. As it gets more and more boring to visit Facebook I foresee a few other rising and falling media outlets based on new media – perhaps through VR or video – that will knock social media out of the way. And wait for more wholesale destruction of UGC creators new and old as monetization becomes more important than “truth.”

I am not here to weep for InfoWars. I think it’s garbage. I’m here to tell you that InfoWars is the latest in a long line of disrupted modes of distribution that began with the printing press and will end god knows where. There are no chilling effects here, just changes. And we’d best get used to them.

Researchers find that filters don’t prevent porn

In a paper entitled Internet Filtering and Adolescent Exposure to Online Sexual Material, Oxford Internet Institute researchers Victoria Nash and Andrew Przybylski found that Internet filters rarely work to keep adolescents away from online porn.

“It’s important to consider the efficacy of Internet filtering,” said Dr. Nash. “Internet filtering tools are expensive to develop and maintain, and can easily ‘underblock’ due to the constant development of new ways of sharing content. Additionally, there are concerns about human rights violations – filtering can lead to ‘overblocking’, where young people are not able to access legitimate health and relationship information.”

This research follows the controversial news that the UK government was exploring a country-wide porn filter, a product that will most likely fail. The UK would join countries around the world that filter the public Internet for religious or political reasons.

The bottom line? Filters are expensive and they don’t work.

Given these substantial costs and limitations, it is noteworthy that there is little consistent evidence that filtering is effective at shielding young people from online sexual material. A pair of studies reporting on data collected in 2005, before the rise of smartphones and tablets, provides tentative evidence that Internet filtering might reduce the relative risk of young people encountering sexual material. A more recent study, analyzing data collected a decade after these papers, provided strong evidence that caregivers’ use of Internet filtering technologies did not reduce children’s exposure to a range of aversive online experiences including, but not limited to, encountering sexual content that made them feel uncomfortable. Given studies on this topic are few in number and the findings are decidedly mixed, the evidence base supporting the widespread use of Internet filtering is currently weak.

The researchers found that Internet filtering tools were ineffective and, in most cases, “an insignificant factor in whether young people had seen explicit sexual content.”

The study’s most interesting finding was that between 17 and 77 households “would need to use Internet filtering tools in order to prevent a single young person from accessing sexual content” and even then a filter “showed no statistically or practically significant protective effects.”

The study looked at 9,352 male and 9,357 female subjects from the EU and the UK and found that almost 50 percent of the subjects had some sort of Internet filter at home. Regardless of the filters installed, subjects still saw approximately the same amount of porn.
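That “17 to 77 households” range is a number-needed-to-treat style statistic: the reciprocal of the absolute difference in exposure rates between unfiltered and filtered homes. A minimal sketch, with made-up rates for illustration (the paper’s actual rates differ):

```python
def households_needed(rate_unfiltered, rate_filtered):
    """Households that must filter to prevent one young person's
    exposure: 1 / absolute risk reduction."""
    arr = rate_unfiltered - rate_filtered
    if arr <= 0:
        raise ValueError("filtering shows no protective effect")
    return 1.0 / arr

# Hypothetical rates: 50% exposure without filters vs. 48% with them
# works out to roughly 50 households per exposure prevented.
print(round(households_needed(0.50, 0.48)))
```

The smaller the real protective effect, the larger this number gets, which is why the study’s wide 17-to-77 range already signals a weak effect.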

“Many caregivers and policy makers consider Internet filters a useful technology for keeping young people safe online. Although this position might make intuitive sense, there is little empirical evidence that Internet filters provide an effective means to limit children’s and adolescents’ exposure to online sexual material. There are nontrivial economic, informational, and human rights costs associated with filtering that need to be balanced against any observed benefits,” wrote the researchers. “Given this, it is critical to know possible benefits can be balanced against their costs. Our studies were conducted to test this proposition, and our findings indicated that filtering does not play a practically significant protective role.”

Given the popularity – and lucrative nature – of filtering software, this news should encourage parents and caregivers to look more closely at how and why they are filtering their home Internet. Ultimately, they might find, supervision is more important than software.

Photos on social media can predict the health of neighborhoods

The images that appear on social media – happy people eating, cultural happenings, and smiling dogs – can actually predict the likelihood that a neighborhood is “healthy” as well as its level of gentrification.

From the report:

So says a groundbreaking study published in Frontiers in Physics, in which researchers used social media images of cultural events in London and New York City to create a model that can predict neighborhoods where residents enjoy a high level of wellbeing — and even anticipate gentrification by 5 years. With more than half of the world’s population living in cities, the model could help policymakers ensure human wellbeing in dense urban settings.

The idea is based on the concept of “cultural capital” – the more there is, the better the neighborhood becomes. For example, if there are many pictures of fun events in a certain spot you can expect a higher level of well-being in that area’s denizens. The research also suggests that investing in arts and culture will actively improve a neighborhood.

“Culture has many benefits to an individual: it opens our minds to new emotional experiences and enriches our lives,” said Dr. Daniele Quercia. “We’ve known for decades that this ‘cultural capital’ plays a huge role in a person’s success. Our new model shows the same correlation for neighborhoods and cities, with those neighborhoods experiencing the greatest growth having high cultural capital. So, for every city or school district debating whether to invest in arts programs or technology centers, the answer should be a resounding ‘Yes!'”

The Cambridge-based team looked at “millions of Flickr images” taken at cultural events in New York and London and overlaid them on maps of these cities. The findings, as we can imagine, were obvious.
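The overlay step the team describes – mapping geotagged photo counts onto city neighborhoods – can be approximated by binning coordinates into a coarse grid. The coordinates, cell size, and helper name below are illustrative assumptions, not the study’s actual method:

```python
import math
from collections import Counter

def cultural_density(photos, cell_deg=0.01):
    """Count cultural-event photos per grid cell.

    photos: iterable of (lat, lon) pairs for event images.
    cell_deg: cell edge in degrees (roughly 1 km at London's latitude).
    """
    counts = Counter()
    for lat, lon in photos:
        cell = (math.floor(lat / cell_deg), math.floor(lon / cell_deg))
        counts[cell] += 1
    return counts

# Five photos clustered in central London versus one outlier: the dense
# cell stands in for a high-"cultural capital" neighborhood.
photos = [(51.5074, -0.1278)] * 5 + [(51.60, -0.20)]
density = cultural_density(photos)
```

A real model would then relate later home values or median income to these per-cell counts; the sketch shows only the counting step.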

“We were able to see that the presence of culture is directly tied to the growth of certain neighborhoods, rising home values and median income. Our model can even predict gentrification within five years,” said Quercia. “This could help city planners and councils think through interventions to prevent people from being displaced as a result of gentrification.”

The team expects to be able to assess the health of citizens using the same method, overlaying pictures of food on maps in order to find food deserts and spots where cafes and croissants are on the rise. Just imagine: all those Instagrammed photos of your favorite sandwiches will some day help researchers build happier cities.

The erosion of Web 2.0

It seems quaint to imagine now but the original vision for the web was not an information superhighway. Instead, it was a newspaper that fed us only the news we wanted. This was the central thesis brought forward in the late 1990s and prophesied by thinkers like Bill Gates – who expected a beautiful, customized “road ahead” – and Clifford Stoll, who saw only snake oil. At the time, it was the most compelling use of the Internet those thinkers thought possible. This concept – that we were to be coddled by a hive brain designed to show us exactly what we needed to know when we needed to know it – continued apace until it was supplanted by the concept of User Generated Content – UGC – a related movement that tore down gatekeepers and all but destroyed propriety in the online world.

That was the arc of Web 2.0: the move from one-to-one conversations in Usenet or IRC and into the global newspaper. Further, this created a million one-to-many conversations targeted at tailor-made audiences of fans, supporters, and, more often, trolls. This change gave us what we have today: a broken prism that refracts humanity into none of the colors except black or white. UGC, that once-great idea that anyone could be as popular as a rock star, fell away to an unmonetizable free-for-all that forced brands and advertisers to rethink how they reached audiences. After all, on a UGC site it’s not a lot of fun for Procter & Gamble to have Downy Fabric Softener advertised next to someone’s racist rant against Muslims in a Starbucks.

Still, the Valley took these concepts and built monetized cesspools of self-expression. Facebook, Instagram, YouTube, and Twitter are the biggest beneficiaries of outrage culture and the eyeballs brought in by its continuous refreshment feed their further growth. These sites are Web 2.0 at its darkest epitome, a quiver of arrows that strikes at our deepest, most cherished institutions and bleeds us of kindness and forethought.

So when advertisers faced either the direct monetization of random hate speech or the erosion of customer privacy, they chose the latter. Facebook created lookalike audiences that let advertisers sell to a certain subset of humanity on a deeply granular level, a move that delivered us the same shoe advertisement constantly, from site to site, until we were all sure we had gone mad. In the guise of saving our sanity further, we invited always-on microphones into our homes that could watch our listening and browsing habits and sell to us against them. We gave up our very DNA to companies like Ancestry and 23andMe, a decision that mankind may soon regret. We shared everything with everyone in the grand hope that our evolution into homo ligarus – the networked man – would lead us to become homo deus.

This didn’t happen.

And so the pendulum swings back. The GDPR, as toothless as it is, is a wake-up call to every spammer that ever slammed your email or followed you around the web. Further, Apple’s upcoming cookie control software in Safari should make those omnipresent ads disappear, forcing the advertiser to sell to an undifferentiated mob rather than a single person. This is obviously cold comfort in an era defined by both the reification of the Internet as a font of all knowledge (correct or incorrect) and the genesis of a web-based political cobra that whips back to bite its handlers with regularity. But it’s a start.

We are currently in an interstitial period of technology, a cake baked of the hearty camaraderie and “Fuck the system” punk rock of Gen X but frosted with millennial pragmatism and desire for the artisanal. As we move out of the era of UGC and Web 2.0 we will see the old ways cast aside, the old models broken, and the old invasions of privacy inverted. While I won’t go so far as to say that blockchain will save us all, pervasive encryption and full data control will pave the way toward true control of our personal lives as well as the beginnings of a research-based minimum income. We should be able to sell our opinions, our thoughts, and even our DNA to the highest bidder and once the rapacious Web 2.0 vultures are all shooed away, we will find ourselves in an interesting new world.

As a technoutopianist I’m sure that we are heading in the right direction. We are, however, taking turns that none of us could have imagined in the era of Clinton and the fax machine and there are still more turns to come. Luckily, we are coming out of our last major skid.


Minds aims to decentralize the social network

Decentralization is the buzzword du jour. Everything – from our currencies to our databases – is supposed to exist, immutably, in this strange new world. And Bill Ottman wants to add our social media to the mix.

Ottman, an intense young man with a passion to fix the world, is the founder of Minds.com, a New York-based startup that has been receiving waves of new users as zealots and the not-so-zealous have been leaving other networks. In fact, Zuckerberg’s bad news is music to Ottman’s ears.

Ottman started Minds in 2011 “with the goal of bringing a free, open source and sustainable social network to the world,” he said. He and his CTO, Mark Harding, have worked in various non-profits including Code To Inspire, a group that teaches Afghan women to code. He said his vision is to get us out from under social media’s thumb.

“We started Minds in my basement after being disillusioned by user abuse on Facebook and other big tech services. We saw spying, data mining, algorithm manipulation, and no revenue sharing,” he said. “To us, it’s inevitable that an open source social network becomes dominant, as was the case with Wikipedia and proprietary encyclopedias.”

His efforts have paid off. The team now has over 1 million registered users and over 105,000 monthly active users. They are working on a number of initiatives, including an ICO, and the site makes money through “boosting” – essentially the ability to pay to have a piece of content float higher in the feed.

The company raised $350K in 2013 and then a little over a million dollars in a Reg CF Equity Crowdfunding raise.

Unlike Facebook, Minds is built on almost radical transparency. The code is entirely open source and it includes encrypted messenger services and optional anonymity for users. The goal, ultimately, is for the data to be decentralized, with any user able to remove his or her data. It’s also non-partisan, a fact that Ottman emphasized.

“We are not pushing a political agenda, but are more concerned with transparency, Internet freedom and giving control back to the user,” he said. “It’s a sad state of affairs when every network that cares about free speech gets lumped in with extremists.”

He was disappointed, for example, when people read that Reddit’s choice to shut down toxic subreddits was a success. It wasn’t, he said. Instead, those users just flocked to other, more permissive sites. However, he doesn’t think those sites have to be cesspools of hate.

“We are a community-owned social network dedicated to transparency, privacy and rewarding people for their contributions. We are called Minds because it’s meant to be a representation of the network itself,” he said. “Our mission is Internet freedom with privacy, transparency, free speech within the law and user control. Additionally, we want to provide our users with revenue opportunity and the ability to truly expand their reach and earn rewards for their contributions to the network.”

Microsoft can ban you for using offensive language

A report by CSOOnline presented the possibility that Microsoft would be able to ban “offensive language” from Skype, Xbox, and, inexplicably, Office. The post, which cites Microsoft’s new terms of use, said that the company would not allow users to “publicly display or use the Services to share inappropriate content or material (involving, for example, nudity, bestiality, pornography, offensive language, graphic violence, or criminal activity)” and that you could lose your Xbox Live Membership if you curse out a kid in Overwatch.

“We are committed to providing our customers with safe and secure experiences while using our services. The recent changes to the Microsoft Service Agreement’s Code of Conduct provide transparency on how we respond to customer reports of inappropriate public content,” said a Microsoft spokesperson. The company notes that “Microsoft Agents” do not watch Skype calls and that they can only respond to complaints with clear evidence of abuse. The changes, which go into effect May 1, allow Microsoft to ban you from its services if you’re found passing “inappropriate content” or using “offensive language.”

These new rules give Microsoft more power over abusive users and it seems like Microsoft is cracking down on bad behavior on its platforms. This is good news for victims of abuse in private communications channels on Microsoft products and may give trolls pause before they yell something about your mother on Xbox. We can only dare to dream.