Author: Jonathan Shieber

Are algorithms hacking our thoughts?

 

As Facebook shapes our access to information, Twitter dictates public opinion and Tinder influences our dating decisions, the algorithms we’ve developed to help us navigate choice are now actively driving every aspect of our lives.

But as we increasingly rely on them for everything from how we seek out news to how we relate to the people around us, have we automated the way we behave? Is human thinking beginning to mimic algorithmic processes? And is the Cambridge Analytica debacle a warning sign of what’s to come — and of what happens when algorithms hack into our collective thoughts?

It wasn’t supposed to go this way. Overwhelmed by choice — in products, people and the sheer abundance of information coming at us at all times — we’ve programmed a better, faster, easier way to navigate the world around us. Using clear parameters and a set of simple rules, algorithms help us make sense of complex issues. They’re our digital companions, solving real-world problems we encounter at every step, and optimizing the way we make decisions. What’s the best restaurant in my neighborhood? Google knows it. How do I get to my destination? Apple Maps to the rescue. What’s the latest Trump scandal making the headlines? Facebook may or may not tell you.

Wouldn’t it be nice if code and algorithms knew us so well — our likes, our dislikes, our preferences — that they could anticipate our every need and desire? That way, we wouldn’t have to waste any time thinking about it: We could just read the one article that’s best suited to reinforce our opinions, date whoever meets our personalized criteria and revel in the thrill of familiar surprise. Imagine all the time we’d free up, so we could focus on what truly matters: carefully curating our digital personas and projecting our identities on Instagram.

It was Karl Marx who first said our thoughts are determined by our machinery, an idea that Ellen Ullman references in her 1997 book, Close to the Machine, which predicts many of the challenges we’re grappling with today. Beginning with the invention of the internet, the algorithms we’ve built to make our lives easier have ended up programming the way we behave.


Here are three algorithmic processes and the ways in which they’ve hacked their way into human thinking, hijacking our behavior.

Product comparison: From online shopping to dating

Amazon’s algorithm allows us to browse and compare products, save them for later and eventually make our purchase. But what started as a tool designed to improve our e-commerce experience now extends far beyond that. We’ve internalized this algorithm and are applying it to other areas of our lives — like relationships.

Dating today is much like online shopping. Enabled by social platforms and apps, we browse endless options, compare their features and select the one that taps into our desires and perfectly fits our exact personal preferences. Or we just endlessly save them for later, as we navigate the illusion of choice that permeates both the world of e-commerce and the digital dating universe.

Online, the world becomes an infinite supply of products, and now, people. “The web opens access to an unprecedented range of goods and services from which you can select the one thing that will please you the most,” Ullman explains in Life in Code. “[There is the idea] that from that choice comes happiness. A sea of empty, illusory, misery-inducing choice.”

We all like to think that our needs are completely unique — and there’s a certain sense of seduction and pleasure that we derive from the promise of finding the one thing that will perfectly match our desires.

Whether it’s shopping or dating, we’ve been programmed to constantly search, evaluate and compare. Driven by algorithms, and in a larger sense, by web design and code, we’re always browsing for more options. In Ullman’s words, the web reinforces the idea that “you are special, your needs are unique, and [the algorithm] will help you find the one thing that perfectly meets your unique need and desire.”

In short, the way we go about our lives mimics the way we engage with the internet. Algorithms are an easy way out, because they allow us to take the messiness of human life, the tangled web of relationships and potential matches, and do one of two things: Apply a clear, algorithmic framework to deal with it, or just let the actual algorithm make the choice for us. We’re forced to adapt to and work around algorithms, rather than use technology on our terms.

Which leads us to another real-life phenomenon that started with a simple digital act: rating products and experiences.

Quantifying people: Ratings & reviews

As with all other well-meaning algorithms, this one is designed with you and only you in mind. Using your feedback, companies can better serve your needs, provide targeted recommendations just for you and serve you more of what you’ve historically shown you like, so you can carry on mindlessly consuming it.

From your Uber ride to your Postmates delivery to your Handy cleaning appointment, nearly every real-life interaction is rated on a scale of 1-5 and reduced to a digital score.

As a society we’ve never been more concerned with how we’re perceived, how we perform and how we compare to others’ expectations. We’re suddenly able to quantify something as subjective as our Airbnb host’s design taste or cleanliness. And the sense of urgency with which we do it is incredible — you’re barely out of your Uber car when you neurotically tap all five stars, tipping with wild abandon in a quest to improve your passenger rating. And the rush of being reviewed in return! It just fills you with utmost joy.

Yes, you might be thinking of that dystopian Black Mirror scenario, or that oddly relatable Portlandia sketch, but we’re not too far off from a world where our digital score simultaneously replaces and drives all meaning in our lives.

We’ve automated the way we interact with people, where we’re constantly measuring and optimizing those interactions in an endless cycle of self-improvement. It started with an algorithm, but it’s now second nature.

As Jaron Lanier wrote in his introduction to Close to the Machine, “We create programs using ideas we can feed into them, but then [as] we live through the program. . .we accept the ideas embedded in it as facts of nature.”

That’s because technology makes abstract and often elusive, desirable qualities quantifiable. Through algorithms, trust translates into ratings and reviews, popularity equals likes and social status means followers. Algorithms create a sort of Baudrillardian simulation, where each rating has completely replaced the reality it refers to, and where the digital review feels more real, and certainly more meaningful, than the actual, real-life experience.

In facing the complexity and chaos of real life, algorithms help us find ways to simplify it; to take the awkwardness out of social interaction and the insecurity that comes with opinions and real-life feedback, and make it all fit neatly into a ratings box.

But as we adopt programming language, code and algorithms as part of our own thinking, are human nature and artificial intelligence merging into one? We’re used to thinking of AI as an external force, something we have little control over. What if the most immediate threat of AI is less about robots taking over the world, and more about technology becoming more embedded into our consciousness and subjectivity?

In the same way that smartphones became extensions of our senses and our bodies, as Marshall McLuhan might say, algorithms are essentially becoming extensions of our thoughts. But what do we do when they replace the very qualities that make us human?

And, as Lanier asks, “As computers mediate human language more and more over time, will language itself start to change?”


Automating language: Keywords and buzzwords

Google indexes and ranks search results based on keywords. SEO is the set of tactics that pushes websites to the top of those results. To get there, we work around the algorithm, figure out what makes it tick, and sprinkle our websites with the keywords most likely to make them stand out in Google’s eyes.

But much like Google’s algorithm, our mind prioritizes information based on keywords, repetition and quick cues.
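For illustration, here is a minimal sketch of keyword-weighted scoring, with made-up pages, a made-up query and a made-up boost for title matches; real search ranking relies on hundreds of additional signals.

```python
# Toy keyword-based relevance scoring: count how often query keywords appear
# in each page, with a small boost for keywords in the title.
# Illustration only; the pages, query and weights are invented.
from collections import Counter

pages = {
    "page_a": {"title": "AI blockchain crypto startup", "body": "Our AI-driven blockchain platform disrupts everything"},
    "page_b": {"title": "A quiet essay on gardening", "body": "Notes on soil, compost and patience"},
}

def score(page, keywords, title_boost=3):
    body_words = Counter(page["body"].lower().split())
    title_words = Counter(page["title"].lower().split())
    return sum(body_words[k] + title_boost * title_words[k] for k in keywords)

query = ["ai", "blockchain"]
ranked = sorted(pages, key=lambda name: score(pages[name], query), reverse=True)
print(ranked)  # pages stuffed with the right buzzwords float to the top
```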

It started as a strategy we built around technology, but it now seeps into everything we do — from the way we write headlines to how we generate “engagement” with our tweets to how we express ourselves in business and everyday life.

Take the buzzword mania that dominates both the media landscape and the startup scene. A quick look at some of the top startups out there will show that the best way to capture people’s attention — and investors’ money — is to add “AI,” “crypto” or “blockchain” into your company manifesto.

Companies are being valued based on what they’re signifying to the world through keywords. The buzzier the keywords in the pitch deck, the higher the chances a distracted investor will throw some money at it. Similarly, a headline that contains buzzwords is far more likely to be clicked on, so the buzzwords start outweighing the actual content — clickbait being one symptom of that.

Where do we go from here?

Technology gives us clear patterns; online shopping offers simple ways to navigate an abundance of choice. So there’s no need to think — we just operate under the assumption that algorithms know best. We don’t exactly understand how they work, because the code is hidden: we can’t see it; the algorithm just magically presents results and solutions. As Ullman warns in Life in Code, “When we allow complexity to be hidden and handled for us, we should at least notice what we are giving up. We risk becoming users of components. . .[as we] work with mechanisms that we do not understand in crucial ways. This not-knowing is fine while everything works as expected. But when something breaks or goes wrong or needs fundamental change, what will we do except stand helpless in the face of our own creations?”

Cue fake news, misinformation and social media targeting in the age of Trump.


So how do we encourage critical thinking, how do we spark more interest in programming, how do we bring back good-old-fashioned debate and disagreement? What can we do to foster difference of opinion, let it thrive and allow it to challenge our views?

When we operate within the bubble of distraction that technology creates around us, and when our social media feeds consist of people who think just like us, how can we expect social change? What ends up happening is we operate exactly as the algorithm intended us to. The alternative is questioning the status quo, analyzing the facts and arriving at our own conclusions. But no one has time for that. So we become cogs in the Facebook machine, more susceptible to propaganda, blissfully unaware of the algorithm at work — and of all the ways in which it has inserted itself into our thought processes.

As users of algorithms rather than programmers or architects of our own decisions, our own intelligence becomes artificial. It’s “program or be programmed,” as Douglas Rushkoff would say. If we’ve learned anything from Cambridge Analytica and the 2016 U.S. elections, it’s that it is surprisingly easy to reverse-engineer public opinion, to influence outcomes and to create a world where data, targeting and bots lead to a false sense of consensus.

What’s even more disturbing is that the algorithms we trust so much — the ones that are deeply embedded in the fabric of our lives, driving our most personal choices — continue to hack into our thought processes, in increasingly bigger and more significant ways. And they will ultimately prevail in shaping the future of our society, unless we reclaim our role as programmers, rather than users of algorithms.

Can data science save social media?

The unfettered internet is too often used for malicious purposes and is frequently woefully inaccurate. Social media — especially Facebook — has failed miserably at protecting user privacy and blocking miscreants from sowing discord.

That’s why CEO Mark Zuckerberg was just forced to testify about user privacy before both houses of Congress. And now governmental regulation of Facebook and other social media appears to be a fait accompli.

At this key juncture, the crucial question is whether regulation — in concert with Facebook’s promises to aggressively mitigate its weaknesses — will correct the privacy abuses while still fulfilling Facebook’s goal of giving people the power to build transparent communities and bring the world closer together.

The answer is maybe.

What has not been said is that Facebook must embrace data science methodologies initially created in the bowels of the federal government to help protect its two billion users. Simultaneously, Facebook must still enable advertisers — its sole source of revenue — to get the user data required to justify their expenditures.

Specifically, Facebook must promulgate and embrace what is known in high-level security circles as homomorphic encryption (HE), often considered the “Holy Grail” of cryptography, and data provenance (DP). HE would enable Facebook, for example, to generate aggregated reports about its user psychographic profiles so that advertisers could still accurately target groups of prospective customers without knowing their actual identities.

Meanwhile, data provenance — the process of tracing and recording true identities and the origins of data and its movement between databases — could unearth the true identities of Russian perpetrators and other malefactors, or at least identify unknown provenance, adding much-needed transparency in cyberspace.

Both methodologies are extraordinarily complex. IBM and Microsoft, in addition to the National Security Agency, have been working on HE for years, but the technology has suffered from significant performance challenges. Progress is being made, however. IBM, for example, has been granted a patent on a particular HE method — a strong hint it’s seeking a practical solution — and last month proudly announced that its rewritten HE encryption library now works up to 75 times faster. Maryland-based ENVEIL, a startup staffed by the former NSA HE team, has broken the performance barriers required to produce a commercially viable version of HE, benchmarking millions of times faster than IBM in tested use cases.

How homomorphic encryption would help Facebook

HE is a technique used to operate on and draw useful conclusions from encrypted data without decrypting it, simultaneously protecting the source of the information. It is useful to Facebook because its massive inventory of personally identifiable information is the foundation of the economics underlying its business model. The more comprehensive the data sets about individuals, the more precisely advertising can be targeted.

HE could keep Facebook information safe from hackers and inappropriate disclosure, but still extract the essence of what the data tells advertisers. It would convert encrypted data into strings of numbers, do math with those strings, then decrypt the results to get the same answer it would have gotten if the data weren’t encrypted at all.
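To make the “do math on encrypted numbers” idea concrete, here is a minimal, deliberately insecure sketch of the textbook Paillier scheme, in which multiplying two ciphertexts produces a ciphertext of the sum of the plaintexts. The tiny primes are for illustration only, and nothing here reflects how Facebook, IBM or ENVEIL would actually deploy HE.

```python
# Toy Paillier cryptosystem (additively homomorphic) with tiny primes.
# Insecure, illustration only: multiplying two ciphertexts yields a
# ciphertext of the SUM of the plaintexts, so an untrusted party can
# add up numbers it is never able to read.
import math
import random

p, q = 293, 433                                  # toy primes; real keys use ~2048-bit primes
n = p * q
n_sq = n * n
g = n + 1                                        # standard choice of generator
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
mu = pow(lam, -1, n)                             # valid shortcut because g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    L = (pow(c, lam, n_sq) - 1) // n
    return (L * mu) % n

a, b = encrypt(20), encrypt(22)
combined = (a * b) % n_sq          # homomorphic "addition" on ciphertexts
assert decrypt(combined) == 42     # 20 + 22, computed without decrypting a or b
print(decrypt(combined))
```

Because anyone holding only the public key can combine ciphertexts this way, an untrusted party can compute aggregates over data it never sees in the clear.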

A particularly promising sign for HE emerged last year, when Google revealed a new marketing measurement tool that relies on this technology to allow advertisers to see whether their online ads result in in-store purchases.

Unearthing this information requires analyzing data sets belonging to separate organizations, even though those organizations have pledged to protect the privacy and personal information of the data subjects. HE squares that circle by generating aggregated, non-specific reports about the comparisons between these data sets.

In pilot tests, HE enabled Google to successfully analyze encrypted data about who clicked on an advertisement in combination with another encrypted multi-company data set that recorded credit card purchase records. With this data in hand, Google was able to provide reports to advertisers summarizing the relationship between the two databases to conclude, for example, that five percent of the people who clicked on an ad wound up purchasing in a store.
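As a rough sketch of that kind of aggregate-only reporting, the snippet below uses the open-source python-paillier library (phe) to sum encrypted purchase flags so that only the total, never an individual record, is decrypted. The click and purchase data are invented, and this is not Google’s actual protocol.

```python
# Hypothetical aggregate-only conversion report: each 0/1 "purchased" flag is
# encrypted, the ciphertexts are summed, and only the total is decrypted.
# Uses the open-source python-paillier library (pip install phe); data is made up.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# 1 = this ad-clicker later bought in store, 0 = did not (fabricated data)
purchase_flags = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]
encrypted_flags = [public_key.encrypt(flag) for flag in purchase_flags]

# Whoever holds only the public key can add the encrypted values...
encrypted_total = sum(encrypted_flags[1:], encrypted_flags[0])

# ...and only the key holder decrypts the aggregate, never the individual rows.
conversions = private_key.decrypt(encrypted_total)
print(f"{100 * conversions / len(purchase_flags):.0f}% of clickers purchased in store")
```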

Data provenance

Data provenance has a markedly different core principle. It’s based on the fact that digital information is atomized into 1s and 0s with no intrinsic truth. The digits exist only to disseminate information, whether accurate or wholly fabricated. A well-crafted lie can easily be indistinguishable from the truth and distributed across the internet. What counts is the source of those 1s and 0s. In short, is it legitimate? What is the history of the 1s and 0s?

The art market, as an example, deploys DP to combat fakes and forgeries of the world’s greatest paintings, drawings and sculptures. It uses DP techniques to create a verifiable chain of custody for each piece of artwork, preserving the integrity of the market.

Much the same thing can be done in the online world. For example, a Facebook post referencing a formal statement by a politician, with an accompanying photo, would have provenance records directly linking the post to the politician’s press release and even the specifics of the photographer’s camera. The goal — again — is ensuring that data content is legitimate.
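As a toy illustration of that chain-of-custody idea, the sketch below hash-links provenance records so that tampering with any earlier event breaks verification. The events, URLs and camera details are invented, and production provenance systems are considerably more elaborate.

```python
# Minimal hash-chained provenance log: each record commits to its content and
# to the hash of the previous record, so tampering anywhere breaks the chain.
# The events and URLs below are invented for illustration.
import hashlib
import json

def add_record(chain, event):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    for i, rec in enumerate(chain):
        body = {"event": rec["event"], "prev_hash": rec["prev_hash"]}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != rec["hash"]:
            return False
        if i and rec["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_record(chain, {"type": "photo_taken", "camera": "hypothetical DSLR, serial 1234"})
add_record(chain, {"type": "press_release", "url": "https://example.gov/statement"})
add_record(chain, {"type": "social_post", "post_id": "hypothetical-post-id"})
print(verify(chain))   # True: every record links back to a legitimate origin

chain[1]["event"]["url"] = "https://forged.example.org"  # tamper with history
print(verify(chain))   # False: the provenance no longer checks out
```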

Companies such as Walmart, Kroger, British-based Tesco and Swedish-based H&M, an international clothing retailer, are using or experimenting with new technologies to provide provenance data to the marketplace.

Let’s hope that Facebook and its social media brethren begin studying HE and DP thoroughly and implement them as soon as feasible. Other strong measures — such as the upcoming implementation of the European Union’s General Data Protection Regulation, which will use a big stick to secure personally identifiable information — essentially should be cloned in the U.S. What is best, however, are multiple avenues to enhance user privacy and security, while hopefully preventing breaches in the first place. Nothing less than the long-term viability of social media giants is at stake.

Virtual Instagram celebrity ‘Lil Miquela’ has had her account hacked

The Instagram account for the virtual celebrity known as Lil Miquela has been hacked.

The multi-racial fashionista and advocate for multiculturalism, whose account is followed by nearly 1 million people, has had “her” account taken over by another animated Instagram account holder named “Bermuda.”

Welcome to the spring of 2018.

The hack of the @Lilmiquela account started earlier today, but the Bermuda avatar has long considered Miquela her digital nemesis and has taken steps to hack others of Miquela’s social accounts — like Spotify — before.

Because this is the twenty-first century — and given the polarization of the current political climate — it’s not surprising that the very real culture wars between proponents of pluralism and the Make America Great Again movement would take their fight to feuding avatars.

In posts on the Lil Miquela account, Bermuda proudly flaunts her artificial identity… and a decidedly pro-Trump message.

Unlike Miquela, whose account plays with the notion of a physical presence for a virtual avatar, Bermuda is very clearly a simulation. And one with political views that are diametrically opposed to those espoused by Miquela (whose promotion of openness and racial equality has been a feature that’s endeared the account to followers and fashion and culture magazines alike).

Miquela Sousa, a Brazilian-American from Downey, Calif., launched her Instagram account in 2016. Since the account’s appearance, Miquela has been a subject of speculation in the press and online.

Appearing on magazine covers and consenting to interviews with reporters, Miquela has been exploring notions of celebrity, influence and culture since her debut on the Facebook-owned platform.

A person familiar with the Lil Miquela account said that Instagram was working on regaining control.

Palmer Luckey, political martyr?

In the middle of testimony over Facebook’s privacy scandal, Sen. Ted Cruz of Texas took a moment to grill Mark Zuckerberg over his company’s political loyalties.

In the course of a testy exchange between Sen. Cruz and Zuckerberg, the senator brought up the dismissal of Palmer Luckey, the controversial founder of virtual reality tech development pioneer Oculus.

It was part of Cruz’s broader questioning about whether or not Facebook is biased in the ways it moderates the posts and accounts of members — and in its staffing policies.

Here’s the exchange:

Cruz: Do you know the political orientation of those 15,000 to 20,000 people engaged in content review?

Zuckerberg: No senator, we do not generally ask people about their political orientation when they’re joining the company.

Cruz: So, as CEO, have you ever made hiring or firing decisions based on political positions or what candidates they supported?

Zuckerberg: No.

Cruz: Why was Palmer Luckey fired?

Zuckerberg: That is a specific personnel matter that seems like it would be inappropriate to speak to here.

Cruz: You just made a specific representation that you didn’t make decisions based on political views, is that accurate?

Zuckerberg: I can commit that it was not because of a political view.

Luckey left Facebook last March, after reports surfaced that he had been funding a pro-Trump troll group called Nimble America.

His departure followed a lengthy absence from public view brought about by the Daily Beast piece revealing his involvement with and funding of the group. News of his support came at a time when very few figures in Silicon Valley were publicly backing candidate Trump, the most notable being Peter Thiel, an early investor in Facebook who started the VC firm Founders Fund, which backed Oculus as well.

Though Luckey initially denied funding the group, he ultimately took to social media to apologize in the midst of an upheaval that had many developers threatening to leave the platform. His last public statement (on Facebook, of course) was a mixture of regret and defense, reading, in part, “I am deeply sorry that my actions are negatively impacting the perception of Oculus and its partners. The recent news stories about me do not accurately represent my views… my actions were my own and do not represent Oculus. I’m sorry for the impact my actions are having on the community.”

Sheryl Sandberg says Facebook leadership should have spoken sooner, is open to regulation

The days of silence from Facebook’s top executives after the company banned the political advisory service Cambridge Analytica from its platform were a mistake, according to Sheryl Sandberg.

In a brief interview on CNBC, Sandberg said that the decision for her and company chief executive and founder Mark Zuckerberg to wait before speaking publicly about the evolving crisis was a mistake.

“Sometimes we speak too slowly,” says Sandberg. “If I look back I would have had Mark and myself speak sooner.”

It was the only significant new word from the top level of leadership at Facebook following the full-court press made by Mark Zuckerberg yesterday.

The firestorm that erupted over Facebook’s decision to ban Cambridge Analytica — and the ensuing revelations that the user data of 50 million Facebook users were accessed by the political consulting and marketing firm without those users’ permission — has slashed Facebook stock and brought calls for regulation for social media companies.

Even as $60 billion of shareholder value disappeared, Zuckerberg and Sandberg remained quiet.

The other piece of information from Sandberg’s CNBC interview was her admission that the company is “open” to government regulation. But even that formulation suggests what is a basic misunderstanding at best and cynical contempt at worst for the role of government in the process of protecting Facebook’s users.

Ultimately, it doesn’t matter whether Facebook is open to regulation or not. If the government and U.S. citizens want more controls, the regulations will come.

And it looks like Facebook’s proposed solution will end up costing the company a pretty penny as well, as it brings in forensic auditors to track who else might have abused the data harvesting permissions that the company had put in place in 2007 and only sunset in 2015. 

Before the policy change, companies that aggressively acquired data from Facebook would come in for meetings with the social media company and discuss how the data was being used. One company founder — who was a power user of Facebook data — said that the company’s representatives had told him “If you weren’t pushing the envelope, we wouldn’t respect you.”

Collecting user data before 2015 was actually something the company encouraged, under the banner of increased utility for Facebook users — so that calendars could bring in information about the birthdays of friends, for instance.

Indeed, the Obama campaign used Facebook data from friends in much the same way as Cambridge Analytica, albeit with a far greater degree of transparency.

The issue is that users don’t know where their data went in the years before Facebook shut the door on collecting data from a user’s network of friends in 2015.

That’s what both Facebook and the government are trying to find out.

 

After selling his company to Facebook for $19B, Brian Acton joins #deleteFacebook

Brian Acton, the co-founder of messaging service WhatsApp (which Facebook bought in 2014 for $19 billion), is now joining the chorus of the #deletefacebook movement.

A tipster alerted us to the fact that Acton made the same call… on Facebook… as well.

Since the sale of WhatsApp (which has made Acton an incredibly wealthy man), Acton has been actively financing more secure (and private) messaging platforms for users.

Acton has already used some of his WhatsApp wealth to give $50 million to the Signal Foundation.

While some may say it’s hypocritical to reap millions from Facebook and then call for users to jump ship, Acton has always had a penchant for supporting privacy. Back in its earliest days, WhatsApp’s stated goal was to never make money from ads:

Why we don’t sell ads

No one wakes up excited to see more advertising, no one goes to sleep thinking about the ads they’ll see tomorrow. We know people go to sleep excited about who they chatted with that day (and disappointed about who they didn’t). We want WhatsApp to be the product that keeps you awake… and that you reach for in the morning. No one jumps up from a nap and runs to see an advertisement.

Advertising isn’t just the disruption of aesthetics, the insults to your intelligence and the interruption of your train of thought. At every company that sells ads, a significant portion of their engineering team spends their day tuning data mining, writing better code to collect all your personal data, upgrading the servers that hold all the data and making sure it’s all being logged and collated and sliced and packaged and shipped out… And at the end of the day the result of it all is a slightly different advertising banner in your browser or on your mobile screen.

Remember, when advertising is involved you the user are the product. – June 18, 2012 — WhatsApp blog

It may be that this latest scandal was the straw that borked the camel’s back.

I’ve reached out to Acton for comment.

Facebook hired a forensics firm to investigate Cambridge Analytica as stock falls 7%

Hoping to tamp down the furor that erupted over reports that its user data was improperly acquired by Cambridge Analytica, Facebook has hired the digital forensics firm Stroz Friedberg to perform an audit on the political consulting and marketing firm.

In a statement, Facebook said that Cambridge Analytica has agreed to comply and give Stroz Friedberg access to their servers and systems.

Facebook has also reached out to the whistleblower, Christopher Wylie, and Aleksandr Kogan, the Cambridge University professor who developed an application that collected data that he then sold to Cambridge Analytica.

Kogan has consented to the audit, but Wylie, who has positioned himself as one of the architects of the data collection scheme before becoming a whistleblower, declined, according to Facebook.

The move comes after a brutal day for Facebook’s stock on the Nasdaq stock exchange. Facebook shares plummeted 7 percent, erasing roughly $40 billion in market capitalization amid fears that the growing scandal could lead to greater regulation of the social media juggernaut.

Indeed both the Dow Jones Industrial Average and the Nasdaq fell sharply as worries over increased regulations for technology companies ricocheted around trading floors, forcing a sell-off.

“This is part of a comprehensive internal and external review that we are conducting to determine the accuracy of the claims that the Facebook data in question still exists. This is data Cambridge Analytica, SCL, Mr. Wylie, and Mr. Kogan certified to Facebook had been destroyed. If this data still exists, it would be a grave violation of Facebook’s policies and an unacceptable violation of trust and the commitments these groups made,” Facebook said in a statement.

However, as more than one Twitter user noted, this is an instance where they’re trying to close Pandora’s Box but the only thing that the company has left inside is… hope.

The bigger issue is that Facebook had known about the data leak as early as two years ago, but did nothing to inform its users — because the violation was not a “breach” of Facebook’s security protocols.

Facebook’s own argument for the protections it now has in place is a sign of its too-little, too-late response to a problem it created for itself with its initial policies.

“We are moving aggressively to determine the accuracy of these claims. We remain committed to vigorously enforcing our policies to protect people’s information. We also want to be clear that today when developers create apps that ask for certain information from people, we conduct a robust review to identify potential policy violations and to assess whether the app has a legitimate use for the data,” the company said in a statement. “We actually reject a significant number of apps through this process. Kogan’s app would not be permitted access to detailed friends’ data today.”

It doesn’t take a billionaire Harvard dropout genius to know that allowing third parties to access personal data without an individual’s consent is shady. And that’s what Facebook’s policies used to allow by letting Facebook “friends” basically authorize the use of a user’s personal data for them.

As we noted when the API changes first took effect in 2015:

Apps don’t have to delete data they’ve already pulled. If someone gave your data to an app, it could go on using it. However, if you request that a developer delete your data, it has to. However, how you submit those requests could be through a form, via email, or in other ways that vary app to app. You can also always go to your App Privacy Settings and remove permissions for an app to pull more data about you in the future.