Author: Devin Coldewey

Facebook says it gave ‘identical support’ to Trump and Clinton campaigns

Facebook’s hundreds of pages of follow-ups to Senators make for decidedly uninteresting reading. Give lawyers a couple months and they will always find a way to respond non-substantively to the most penetrating questions. One section may at least help put a few rumors to rest about Facebook’s role in the 2016 Presidential campaigns, though of course much is still left to the imagination.

Senator Kamala Harris (D-CA), whose dogged questioning managed to put Mark Zuckerberg on his back foot during the hearing, had several pages of questions sent over afterwards. Among the many topics was that of the 2016 campaign and reports that Facebook employees were “embedded” in the Trump campaign specifically, as claimed by the person who ran the digital side of that campaign.

This has raised questions as to whether Facebook was offering some kind of premium service to one candidate or another, or whether one candidate got tips on how to juice the algorithm, how to target better, and so on.

Here are the takeaways from the answers, which you can find in full on page 167 of the document at the bottom of this post.

  • The advice to the campaigns is described as similar to that given to “other, non-political” accounts.
  • No one was “assigned full-time” on either the Trump or Clinton campaign.
  • Campaigns did not get to hand pick who from Facebook came to advise them.
  • Facebook provided “identical support” and tools to both campaigns.
  • Sales reps are trained to comply with federal election law, and to report “improper activity.”
  • No such “improper activity” was reported by Facebook employees on either campaign.
  • Facebook employees did work directly with Cambridge Analytica employees.
  • No one identified any issues with Cambridge Analytica, its data, or its intended use of that data.
  • Facebook did not work with Cambridge Analytica or related companies on other campaigns (e.g. Brexit).

It’s not exactly fire, but we don’t really need more fire these days. This at least is on the record and relatively straightforward; whatever Facebook’s sins during the election cycle may have been, it does not appear that preferential treatment of the two major campaigns was among them.

Incidentally, if you’re curious whether Facebook finally answered Sen. Harris’s questions about who made the decision not to inform users of the Cambridge Analytica issue back in 2015, or how that decision was made — no, it didn’t. In fact the silence here is so deafening it almost certainly indicates a direct hit.

Harris asked how and when it came to the decision not to inform users that their data had been misappropriated, who made that decision and why, and lastly when Zuckerberg entered the loop. Facebook’s response does not even come close to answering any of these questions:

When Facebook learned about Kogan’s breach of Facebook’s data use policies in December 2015, it took immediate action. The company retained an outside firm to assist in investigating Kogan’s actions, to demand that Kogan and each party he had shared data with delete the data and any derivatives of the data, and to obtain certifications that they had done so. Because Kogan’s app could no longer collect most categories of data due to changes in Facebook’s platform, the company’s highest priority at that time was ensuring deletion of the data that Kogan may have accessed before these changes took place. With the benefit of hindsight, we wish we had notified people whose information may have been impacted. Facebook has since notified all people potentially impacted with a detailed notice at the top of their newsfeed.

This answer has literally nothing to do with the questions.

It seems likely from the company’s careful and repeated refusal to answer this question that the story is an ugly one — top executives making a decision to keep users in the dark for as long as possible, if I had to guess.

At least with the campaign issues Facebook was more forthcoming, and its answers should put several lines of speculation to rest. Not so with this evasive maneuver.

Embedded below are Facebook’s answers to the Senate Judiciary Committee, and the other set is here:

How Facebook’s new 3D photos work

In May, Facebook teased a new feature called 3D photos, and it’s just what it sounds like. But beyond a short video and the name, little was said about it. Now the company’s computational photography team has published the research behind how the feature works and, having tried it myself, I can attest that the results are really quite compelling.

In case you missed the teaser, 3D photos will live in your news feed just like any other photos, except when you scroll by them, touch or click them, or tilt your phone, they respond as if the photo is actually a window into a tiny diorama, with corresponding changes in perspective. It works for ordinary pictures of people and dogs, as well as landscapes and panoramas.

It sounds a little hokey, and I’m about as skeptical as they come, but the effect won me over quite quickly. The illusion of depth is very convincing, and it does feel like a little magic window looking into a time and place rather than some 3D model — which, of course, it is. Here’s what it looks like in action:

I talked about the method of creating these little experiences with Johannes Kopf, a research scientist at Facebook’s Seattle office, where its Camera and computational photography departments are based. Kopf is co-author (with University College London’s Peter Hedman) of the paper describing the methods by which the depth-enhanced imagery is created; they will present it at SIGGRAPH in August.

Interestingly, the origin of 3D photos wasn’t an idea for how to enhance snapshots, but rather how to democratize the creation of VR content. That content is all synthetic, Kopf pointed out, and no casual Facebook user has the tools or inclination to build 3D models and populate a virtual space.

One exception to that is panoramic and 360 imagery, which is usually wide enough that it can be effectively explored via VR. But the experience is little better than looking at the picture printed on butcher paper floating a few feet away. Not exactly transformative. What’s lacking is any sense of depth — so Kopf decided to add it.

The first version I saw had users moving their ordinary cameras in a pattern capturing a whole scene; by careful analysis of parallax (essentially how objects at different distances shift different amounts when the camera moves) and phone motion, that scene could be reconstructed very nicely in 3D (complete with normal maps, if you know what those are).

But inferring depth data from a single camera’s rapid-fire images is a CPU-hungry process and, though effective in a way, also rather dated as a technique. Especially when many modern phones actually have two cameras, like a tiny pair of eyes. And it is dual-camera phones that will be able to create 3D photos (though there are plans to bring the feature downmarket).

By capturing images with both cameras at the same time, parallax differences can be observed even for objects in motion. And because the device is in the exact same position for both shots, the depth data is far less noisy, involving less number-crunching to get into usable shape.

Here’s how it works. The phone’s two cameras take a pair of images, and immediately the device does its own work to calculate a “depth map” from them, an image encoding the calculated distance of everything in the frame. The result looks something like this:

Apple, Samsung, Huawei, Google — they all have their own methods for doing this baked into their phones, though so far it’s mainly been used to create artificial background blur.
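Those vendor pipelines are proprietary, but the textbook stereo approach they build on is simple enough to sketch. Below is a minimal illustration using OpenCV’s block matcher; the filenames, focal length and baseline are placeholder assumptions, and real on-device systems are far more refined.

```python
# A minimal sketch of stereo depth estimation, standing in for the
# proprietary on-device methods mentioned above. Filenames and camera
# parameters are hypothetical.
import cv2
import numpy as np

left = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

# Block matching estimates, per pixel, how far features shift between
# the two views (disparity). Nearby objects shift more than distant ones.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Depth is inversely proportional to disparity:
#   depth = focal_length_px * baseline_m / disparity
focal_px, baseline_m = 1000.0, 0.012  # assumed values for a dual-camera phone
depth_map = np.where(disparity > 0, focal_px * baseline_m / disparity, 0.0)
```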

The problem is that the depth map created doesn’t have any kind of absolute scale — light yellow doesn’t always mean 10 feet and dark red 100 feet. An image taken a few feet to the left with a person in it might have yellow indicating 1 foot and red meaning 10. The scale is different for every photo, which means that if you take more than one, let alone dozens or a hundred, there’s little consistent indication of how far away a given object actually is, which makes stitching them together realistically a pain.

That’s the problem Kopf and Hedman and their colleagues took on. In their system, the user takes multiple images of their surroundings by moving their phone around; it captures an image (technically two images and a resulting depth map) every second and starts adding it to its collection.

In the background, an algorithm looks at both the depth maps and the tiny movements of the camera captured by the phone’s motion detection systems. Then the depth maps are essentially massaged into the correct shape to line up with their neighbors. This part is impossible for me to explain because it’s the secret mathematical sauce that the researchers cooked up. If you’re curious and like Greek, click here.
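That secret sauce is exactly what I can’t reproduce here, so the following is only a toy stand-in to convey the idea: each capture’s depth values are consistent only up to an unknown per-image scale and offset, and a least-squares fit over points two captures share can recover a correction that lines them up. Everything below is illustrative, not the authors’ method.

```python
# Toy illustration of depth-map alignment: solve for the scale s and
# offset t that best map one capture's depth samples onto a neighbor's.
# (The paper's real optimization is far more sophisticated.)
import numpy as np

def fit_scale_offset(depth_a, depth_b):
    """Least-squares fit of s, t minimizing ||s*depth_a + t - depth_b||^2,
    using depth samples for scene points visible in both captures."""
    A = np.stack([depth_a, np.ones_like(depth_a)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, depth_b, rcond=None)
    return s, t

# Hypothetical overlapping samples from two consecutive captures,
# where capture B's scale happens to be about twice capture A's:
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([2.1, 4.0, 6.1, 7.9])
s, t = fit_scale_offset(a, b)
print(f"scale={s:.2f}, offset={t:.2f}")  # roughly 2.0 and 0.0
```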

Not only does this create a smooth and accurate depth map across multiple exposures, but it does so really quickly: about a second per image, which is why the tool they created shoots at that rate, and why they call the paper “Instant 3D Photography.”

Next the actual images are stitched together, the way a panorama normally would be. But by utilizing the new and improved depth map, this process can be expedited and reduced in difficulty by, they claim, around an order of magnitude.

Because different images captured depth differently, aligning them can be difficult, as the left and center examples show — many parts will be excluded or produce incorrect depth data. The one on the right is Facebook’s method.

Then the depth maps are turned into 3D meshes (a sort of two-dimensional model or shell) — think of it like a papier-mache version of the landscape. But then the mesh is examined for obvious edges, such as a railing in the foreground occluding the landscape in the background, and “torn” along these edges. This spaces out the various objects so they appear to be at their various depths, and move with changes in perspective as if they are.
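To make the tearing idea concrete, here’s a simplified sketch: build a regular grid mesh from the depth map, then simply omit any triangle that spans a large depth jump, so foreground and background separate along occlusion edges. The threshold and the mesh layout are my own illustrative assumptions, not the paper’s actual mesh processing.

```python
# Simplified depth-map-to-mesh conversion with "tearing" at depth edges.
import numpy as np

def grid_mesh_with_tears(depth, max_jump=0.5):
    """One vertex per pixel; skip triangles spanning a depth discontinuity."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    vertices = np.stack([xs.ravel(), ys.ravel(), depth.ravel()], axis=1)

    flat = depth.ravel()
    faces = []
    for y in range(h - 1):
        for x in range(w - 1):
            i = y * w + x
            quad = [i, i + 1, i + w, i + w + 1]
            # "Tear" the mesh: drop faces that cross an occlusion edge.
            if flat[quad].max() - flat[quad].min() < max_jump:
                faces.append((quad[0], quad[1], quad[2]))
                faces.append((quad[1], quad[3], quad[2]))
    return vertices, np.array(faces)

# A tiny depth map with a near strip (depth 1) against a far background
# (depth 5); no triangles bridge the jump between them.
depth = np.array([[1.0, 1.0, 5.0, 5.0],
                  [1.0, 1.0, 5.0, 5.0]])
verts, faces = grid_mesh_with_tears(depth)
```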

Although this effectively creates the diorama effect I described at first, you may have guessed that the foreground would appear to be little more than a paper cutout, since, if it were a person’s face captured from straight on, there would be no information about the sides or back of their head.

This is where the final step comes in: “hallucinating” the remainder of the image via a convolutional neural network. It’s a bit like a content-aware fill, guessing at what goes where based on what’s nearby. If there’s hair, well, that hair probably continues along. And if it’s a skin tone, it probably continues too. So it convincingly recreates those textures along an estimation of how the object might be shaped, closing the gap so that when you change perspective slightly, it appears that you’re really looking “around” the object.
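Facebook’s network is trained for exactly this job and its details aren’t public; as a rough classical analogue, though, OpenCV’s inpainting fills holes from surrounding pixels in the same content-aware-fill spirit. The filename and hole region below are placeholders.

```python
# Classical inpainting as a stand-in for the CNN "hallucination" step:
# fill the regions exposed when the mesh tears apart, using nearby texture.
import cv2
import numpy as np

image = cv2.imread("photo.jpg")               # hypothetical input photo
mask = np.zeros(image.shape[:2], np.uint8)    # 255 marks pixels to fill
mask[100:140, 200:260] = 255                  # example disoccluded hole

# Telea's method propagates surrounding color and texture into the hole.
filled = cv2.inpaint(image, mask, 5, cv2.INPAINT_TELEA)
cv2.imwrite("filled.jpg", filled)
```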

The end result is an image that responds realistically to changes in perspective, making it viewable in VR or as a diorama-type 3D photo in the news feed.

In practice it doesn’t require anyone to do anything different, like download a plug-in or learn a new gesture. Scrolling past these photos changes the perspective slightly, alerting people to their presence, and from there all the interactions feel natural. It isn’t perfect — there are artifacts and weirdness in the stitched images if you look closely and of course mileage varies on the hallucinated content — but it is fun and engaging, which is much more important.

The plan is to roll the feature out mid-summer. For now the creation of 3D photos will be limited to devices with two cameras — that’s a limitation of the technique — but anyone will be able to view them.

But the paper does also address the possibility of single-camera creation by way of another convolutional neural network. The results, only briefly touched on, are not as good as the dual-camera systems, but still respectable and better and faster than some other methods currently in use. So those of us still living in the dark age of single cameras have something to hope for.

Facebook’s latest privacy blunder has already attracted congressional ire

The news that Facebook until just recently offered partners a form of the friend-scraping capability it claimed to have discontinued back in 2014 has, within hours, brought rebuke and a call to action from the House of Representatives.

“It’s deeply concerning that Facebook continues to withhold critical details about the information it has and shares with others. This is just the latest example of Facebook only coming forward when forced to do so by a media outlet,” reads a statement from Rep. Frank Pallone (D-NJ).

Indeed, the question of whether and how a user’s friends’ data was being shared with third parties was brought up during Zuckerberg’s testimony. It is, after all, likely that this is the vector by which millions of users’ data was exfiltrated by agents both malicious and benign.

In the same line of thinking as “don’t talk to the cops,” the CEO was almost certainly instructed not to volunteer any disadvantageous information unless directly asked. Therefore, it should surprise no one that he failed to mention that there existed until quite recently a similar program allowing third parties to collect data on unsuspecting friends.

It’s telling of Facebook’s current predicament that before it can adequately answer some questions, even more arise.

“Our Committee is also still waiting for a lot of answers from Facebook to questions Mr. Zuckerberg could not or would not answer at our hearing,” Pallone said.

He also called for the FTC to get involved: “The Federal Trade Commission must conduct a full review to determine if the consent decree was violated.” I’ve asked if the Representative will be appealing to the FTC directly, and/or whether any existing investigation (the FTC is quiet about these) will be affected.

Pallone is just one among hundreds of senators and representatives, but he is one of the crew responsible for the pending Congressional Review Act rollback of the FCC’s new, weaker net neutrality rules. So it’s not a surprise to see him weigh in quickly on another tech issue. Here’s hoping it helps keep Facebook accountable.

It’s OK to leave Facebook

The slow-motion privacy train wreck that is Facebook has many users, perhaps you, thinking about leaving or at least changing the way you use the social network. Fortunately for everyone but Mark Zuckerberg, it’s not nearly as hard to leave as it once was. The main thing to remember is that social media is for you to use, and not vice versa.

Social media has now become such an ordinary part of modern life that, rather than have it define our interactions, we can choose how we engage with it. That’s great! It means that everyone is free to design their own experience, taking from it what they need instead of participating to an extent dictated by social norms or the progress of technology.

Here’s why now is a better time than ever to take control of your social media experience. I’m going to focus on Facebook, but much of this is applicable to Instagram, Twitter, LinkedIn, and other networks as well.

Stalled innovation means a stable product

The Facebooks of 2005, 2010, and 2015 were very different things and existed in very different environments. Among other things over that eventful ten-year period, mobile and fixed broadband exploded in capabilities and popularity; the modern world of web-native platforms matured and became secure and reliable; phones went from dumb to smart to, for many, their primary computer; and internet-based companies like Google, Facebook, and Amazon graduated from niche players to embrace and dominate the world at large.

It’s been a transformative period for lots of reasons and in lots of ways. And products and services that have been there the whole time have been transformed almost continuously. You’d probably be surprised at what they looked like and how limited they were not long ago. Many things we take for granted today online were invented and popularized just in the last decade.

But the last few years have seen drastically diminished returns. Where Facebook used to add features regularly that made you rely on it more and more, now it is desperately working to find ways to keep people online. Why is that?

Well, we just sort of reached the limit of what a platform like Facebook can or should do, that’s all! Nothing wrong with that.

It’s like improving a car — no matter how many features you add or engines you swap in, it’ll always be a car. Cars are useful things, and so is Facebook. But a car isn’t a truck, or a bike, or an apple, and Facebook isn’t (for example) a broadcast medium, a place for building strong connections, or a VR platform (as hard as they’re trying).

The things that Facebook does well and that we have all found so useful — sharing news and photos with friends, organizing events, getting and staying in contact with people — haven’t changed considerably in a long time. And as the novelty has worn off those things, we naturally engage in them less frequently and in ways that make more sense to us.

Facebook has become the platform it was intended to be all along, with its own strengths and weaknesses, and its failure to advance beyond that isn’t a bad thing. In fact, I think stability is a good thing. Once you know what something is and will be, you can make an informed choice about it.

The downsides have become obvious

Every technology has its naysayers, and social media was no exception — I was and to some extent remain one myself. But over the years of changes these platforms have gone through, some fears were shown to be unfounded or old-fashioned.

The idea that people would cease interacting in the “real world” and live in their devices has played out differently from how we expected, surely; trying to instruct the next generation on the proper way to communicate with each other has never worked out well for the olds. And if you told someone in 2007 that foreign election interference would be as much a worry for Facebook as oversharing and privacy problems, you might be met with incredulous looks.

Other downsides were for the most part unforeseen. The development of the bubble or echo chamber, for instance, would have been difficult to predict when our social media systems weren’t also our news-gathering systems. And the phenomenon of seeing only the highlights of others’ lives posted online, leading to self-esteem issues in those who view them with envy, is an interesting but sad development.

Whether some risk inherent to social media was predicted or not, or proven or not, people now take such risks seriously. The ideas that one can spend too much time on social networks, or suffer deleterious effects from them, or feel real pain or turmoil because of interactions on them are accepted (though sadly not always without question).

Taking the downsides of something as seriously as the upsides is another indicator of the maturity of that thing, at least in terms of how society interacts with it. When the hype cycle winds down, realistic judgment takes its place and the full complexities of a relationship like the one between people and social media can be examined without interference.

Between the stability of social media’s capabilities and the realism with which those capabilities are now being considered, choice is no longer arbitrary or absolute. Your engagement is not being determined by them any more.

Social media has become a rich set of personal choices

Your experience may differ from mine here, but I feel that in those days of innovation among social networks your participation was more of a binary. You were either on or you were off.

The way they were advancing and changing defined how you engaged with them by adding and opting you into features, or changing layouts and algorithms. It was hard to really choose how to engage in any meaningful way when the sands were shifting under your feet (or rather, fingertips). Every few months brought new features and toys and apps, and you sort of had to be there, using them as prescribed, or risk being left behind. So people either kept up or voluntarily stayed off.

Now all that has changed. The ground rules are set, and have been for long enough that there is no risk that if you left for a few months and came back, things would be drastically different.

As social networks have become stable tools used by billions, any combination or style of engagement with them has become inherently valid.

Your choice to engage with Facebook or Instagram does not boil down to simply whether you are on it or not any more, and the acceptance of social media as a platform for expression and creation as well as socializing means that however you use it or present on it is natural and no longer (for the most part) subject to judgment.

That extends from choosing to make it an indispensable tool in your everyday life to quitting and not engaging at all. There’s no longer an expectation that the former is how a person must use social media, and there is no longer a stigma to the latter of disconnectedness or Luddism.

You and I are different people. We live in different places, read different books, enjoy different music. We drive different cars, prefer different restaurants, like different drinks. Why should we be the same in anything as complex as how we use and present ourselves on social media?

It’s analogous, again, to a car: you can own one and use it every day for a commute, or use it rarely, or not have one at all — who would judge you? It has nothing to do with what cars are or aren’t, and everything to do with what a person wants or needs in the circumstances of their own life.

For instance, I made the choice to remove Facebook from my phone over a year ago. I’m happier and less distracted, and engage with it deliberately, on my terms, rather than it reaching out and engaging me. But I have friends who maintain and derive great value from their loose network of scattered acquaintances, and enjoy the immediacy of knowing and interacting with them on the scale of minutes or seconds. And I have friends who have never been drawn to the platform in the first place, content to select from the myriad other ways to stay in touch.

These are all perfectly good ways to use Facebook! Yet only a few years ago the zeitgeist around social media and its exaggerated role in everyday life — resulting from novelty for the most part — meant that to engage only sporadically would be more difficult, and to disengage entirely would be to miss out on a great deal (or to fear missing out enough that quitting became fraught with anxiety). People would be surprised that you weren’t on Facebook and wonder how you got by.

Try it and be delighted

Social networks are here to improve your life the same way that cars, keyboards, search engines, cameras, coffee makers, and everything else are: by giving you the power to do something. But those networks and the companies behind them were also exerting power over you and over society in general, the way (for example) cars and car makers exerted power over society in the ’50s and ’60s, favoring highways over public transportation.

Some people and some places, more than others, are still subject to the influence of car makers — ever try getting around L.A. without one? And the same goes for social media — ever try planning a birthday party without it? But the last few years have helped weaken that influence and allow us to make meaningful choices for ourselves.

The networks aren’t going anywhere, so you can leave and come back. Social media doesn’t control your presence.

It isn’t all or nothing, so you can engage at 100 percent, or zero, or anywhere in between. Social media doesn’t decide how you use it.

You won’t miss anything important, because you decide what is important to you. Social media doesn’t share your priorities.

Your friends won’t mind, because they know different people need different things. Social media doesn’t care about you.

Give it a shot. Pick up your phone right now and delete Facebook. Why not? The absolute worst that will happen is you download it again tomorrow and you’re back where you started. But it could also be, as it was for me and has been for many people I’ve known, like shrugging off a weight you didn’t even realize you were bearing. Try it.

Teens dump Facebook for YouTube, Instagram and Snapchat

A Pew survey of teens and the ways they use technology finds that kids have largely ditched Facebook for the visually stimulating alternatives of Snapchat, YouTube, and Instagram. Nearly half said they’re online “almost constantly,” which will probably be used as a source of FUD, but really is just fine. Even teens, bless their honest little hearts, have doubts about whether social media is good or evil.

The survey is the first by Pew since 2015, and plenty has changed. The biggest driver of that change seems to be the ubiquity and power of smartphones, which 95 percent of respondents said they had access to. Fewer, especially among lower income families, had laptops and desktops.

This mobile-native cohort has opted for mobile-native content and apps, which means highly visual and easily browsable. That’s much more the style on the top three apps: YouTube takes first place with 85 percent reporting they use it, then Instagram at 72 percent, and Snapchat at 69.

Facebook, at 51 percent, is a far cry from the 71 percent who used it back in 2015, when it was top of the heap by far. Interestingly, the 51 percent average is not representative of any of the income groups polled; 36 percent of teens from higher income households used it, while 70 percent of teens from lower income households did.

What could account for this divergence? The latest and greatest hardware isn’t required to run the top three apps, nor (necessarily) an expensive data plan. With no data to go on from the surveys and no teens nearby to ask, I’ll leave this to the professionals to look into. No doubt Facebook will be interested to learn this — though who am I kidding, it probably knows already. (There’s even a teen tutorial.)

Twice as many teens as in 2015 reported being online “almost constantly,” but really, it’s hard to say when any of us is truly “offline.” Teens aren’t literally looking at their phones all day, much as that may seem to be the case, but they — and the rest of us — are rarely more than a second or two away from checking messages, looking something up, and so on. I’m surprised the “constantly” number isn’t higher, honestly.

Gaming is still dominated by males, almost all of whom play in some fashion, but 83 percent of teen girls also said they gamed, so the gap is closing.

When asked whether social media had a positive or negative effect, teens were split. They valued it for connecting with friends and family, finding news and information, and meeting new people. But they decried its use in bullying and spreading rumors, its complicated effect on in-person relationships, and how it distracts from and distorts real life.

Here are some quotes from real teens demonstrating real insight.

Those who feel it has an overall positive effect:

  • “I feel that social media can make people my age feel less lonely or alone. It creates a space where you can interact with people.”
  • “My mom had to get a ride to the library to get what I have in my hand all the time. She reminds me of that a lot.”
  • “We can connect easier with people from different places and we are more likely to ask for help through social media which can save people.”
  • “It has given many kids my age an outlet to express their opinions and emotions, and connect with people who feel the same way.”

And those who feel it’s negative:

  • “People can say whatever they want with anonymity and I think that has a negative impact.”
  • “Gives people a bigger audience to speak and teach hate and belittle each other.”
  • “It makes it harder for people to socialize in real life, because they become accustomed to not interacting with people in person.”
  • “Because teens are killing people all because of the things they see on social media or because of the things that happened on social media.”

That last one is scary.

You can read the rest of the report and scrutinize Pew’s methodology here.

Students confront the unethical side of tech in ‘Designing for Evil’ course

Whether it’s surveilling or deceiving users, mishandling or selling their data, or engendering unhealthy habits or thoughts, tech these days is not short on unethical behavior. But it isn’t enough to just say “that’s creepy.” Fortunately, a course at the University of Washington is equipping its students with the philosophical insights to better identify — and fix — tech’s pernicious lack of ethics.

“Designing for Evil” just concluded its first quarter at UW’s Information School, where prospective creators of apps and services like those we all rely on daily learn the tools of the trade. But thanks to Alexis Hiniker, who teaches the class, they are also learning the critical skill of inquiring into the moral and ethical implications of those apps and services.

What, for example, is a good way of going about making a dating app that is inclusive and promotes healthy relationships? How can an AI imitating a human avoid unnecessary deception? How can something as invasive as China’s proposed citizen scoring system be made as user-friendly as possible?

I talked to all the student teams at a poster session held on UW’s campus, and also chatted with Hiniker, who designed the course and seemed pleased at how it turned out.

The premise is that the students are given a crash course in ethical philosophy that acquaints them with influential ideas such as utilitarianism and deontology.

“It’s designed to be as accessible to lay people as possible,” Hiniker told me. “These aren’t philosophy students — this is a design class. But I wanted to see what I could get away with.”

The primary text is Harvard philosophy professor Michael Sandel’s popular book Justice, which Hiniker felt combined the various philosophies into a readable, integrated format. After ingesting this, the students grouped up and picked an app or technology that they would evaluate using the principles described, and then prescribe ethical remedies.

As it turned out, finding ethical problems in tech was the easy part — and fixes for them ranged from the trivial to the impossible. Their insights were interesting, but I got the feeling from many of them that there was a sort of disappointment at the fact that so much of what tech offers, or how it offers it, is inescapably and fundamentally unethical.

I found the students fell into one of three categories.

Not fundamentally unethical (but could use an ethical tune-up)

WebMD is of course a very useful site, but it was plain to the students that it lacked inclusivity: its symptom checker is stacked against non-English-speakers and those who might not know the names of symptoms. The team suggested a more visual symptom reporter, with a basic body map and non-written symptom and pain indicators.

Hello Barbie, the doll that chats back to kids, is certainly a minefield of potential legal and ethical violations, but there’s no reason it can’t be done right. With parental consent and careful engineering it will be in line with privacy laws, but the team said that it still failed some tests of keeping the dialogue with kids healthy and parents informed. The scripts for interaction, they said, should be public — which is obvious in retrospect — and audio should be analyzed on device rather than in the cloud. Lastly, a set of warning words or phrases indicating unhealthy behaviors could warn parents of things like self-harm while keeping the rest of the conversation secret.

WeChat Discover allows users to find others around them and see recent photos they’ve taken — it’s opt-in, which is good, but it can be filtered by gender, promoting a hookup culture that the team said is frowned on in China. It also obscures many user controls behind multiple layers of menus, which may cause people to share location when they don’t intend to. Some basic UI fixes were proposed by the students, and a few ideas on how to combat the possibility of unwanted advances from strangers.

Netflix isn’t evil, but its tendency to promote binge-watching has robbed its users of many an hour. This team felt that some basic user-set limits like two episodes per day, or delaying the next episode by a certain amount of time, could interrupt the habit and encourage people to take back control of their time.

Fundamentally unethical (fixes are still worth making)

FakeApp is a way to face-swap in video, producing convincing fakes in which a politician or friend appears to be saying something they didn’t. It’s fundamentally deceptive, of course, in a broad sense, but really only if the clips are passed on as genuine. Watermarks visible and invisible, as well as controlled cropping of source videos, were this team’s suggestion, though ultimately the technology won’t yield to these voluntary mitigations. So really, an informed populace is the only answer. Good luck with that!

China’s “social credit” system is not actually, the students argued, absolutely unethical — that judgment involves a certain amount of cultural bias. But I’m comfortable putting it here because of the massive ethical questions it has sidestepped and dismissed on the road to deployment. Their highly practical suggestions, however, were focused on making the system more accountable and transparent. Contest reports of behavior, see what types of things have contributed to your own score, see how it has changed over time, and so on.

Tinder’s unethical nature, according to the team, was based on the fact that it was ostensibly about forming human connections but is very plainly designed to be a meat market. Forcing people to think of themselves as physical objects first and foremost in pursuit of romance is not healthy, they argued, and causes people to devalue themselves. As a countermeasure, they suggested having responses to questions or prompts be the first thing you see about a person. You’d have to swipe based on that before seeing any pictures. I suggested having some dealbreaker questions you’d have to agree on, as well. It’s not a bad idea, though open to gaming (like the rest of online dating).

Fundamentally unethical (fixes are essentially impossible)

The League, on the other hand, was a dating app that proved intractable to ethical guidelines. Not only was it a meat market, but it was a meat market where people paid to be among the self-selected “elite” and could filter by ethnicity and other troubling categories. Their suggestions of removing the fee and these filters, among other things, essentially destroyed the product. Unfortunately, The League is an unethical product for unethical people. No amount of tweaking will change that.

Duplex was taken on by a smart team that nevertheless clearly only started their project after Google I/O. Unfortunately, they found that the fundamental deception intrinsic in an AI posing as a human is ethically impermissible. It could, of course, identify itself — but that would spoil the entire value proposition. But they also asked a question I didn’t think to ask myself in my own coverage: why isn’t this AI exhausting all other options before calling a human? It could visit the site, send a text, use other apps, and so on. AIs in general should default to interacting with websites and apps first, then to other AIs, then and only then to people — at which time it should say it’s an AI.


To me the most valuable part of all these inquiries was learning what hopefully becomes a habit: to look at the fundamental ethical soundness of a business or technology and be able to articulate it.

That may be the difference in a meeting between saying something vague and easily blown off, like “I don’t think that’s a good idea,” and describing a specific harm and the reason that harm is important — and perhaps how it can be avoided.

As for Hiniker, she has some ideas for improving the course should it be approved for a repeat next year. A broader set of texts, for one: “More diverse writers, more diverse voices,” she said. And ideally it could even be expanded to a multi-quarter course so that the students get more than a light dusting of ethics.

With any luck the kids in this course (and any in the future) will be able to help make those choices, leading to fewer Leagues and Duplexes and more COPPA-compliant smart toys and dating apps that don’t sabotage self-esteem.

Elon Musk has a very bad idea for a website rating journalists

Elon Musk has, as I imagine he often does during meetings or long car rides, come up with an idea for a new thing. Unlike the Hyperloop, which was cool, and various space-related ideas, which we know he’s at least partly expert about, this one is just plain bad. It’s basically Yelp But For Journalism.

He may as well have said, I found this great can marked “worms” and I’m going to open it, or, I’ve determined a new method for herding cats.

The idea of holding publications and people accountable is great. Unfortunately it is the kind of problem that does not yield to even the best of intentions and smart engineering, because it is quickly complicated by the ethical, procedural and practical questions of crowdsourcing “the truth.”

He agreed with another Twitter user, whose comment is indistinguishable from sarcasm:

My guess is Musk does not often use Yelp, and has never operated a small business like a restaurant or salon.

Especially in today’s fiercely divided internet landscape, there is no reliable metric for truth or accountability. Some will say The New York Times is the most trusted newspaper in America — others will call it a debased rag with a liberal agenda. Individual stories will receive the same treatment, with some disputing what they believe are biases and others disputing those same things as totally factual.

And while the truth lies somewhere in-between these extremes, it is unlikely to be the mathematical mean of them. The “wisdom of the crowd,” so often evoked but so seldom demonstrated, cannot depend on an equal number of people being totally wrong in opposite ways, producing a sort of stable system of bias.

The forces at work here — psychological, political, sociological, institutional — are subtle and incalculable.

The origins of this faith, and of the idea that there is somehow a quorum of truth-seekers in this age of deception, are unclear.

Facebook’s attempts to crowdsource the legitimacy of news stories have had mixed results, and the predictable outcome is, of course, that people simply report as false any news with which they disagree. Independent adjudicators are needed, and Facebook has hired and fired them by the hundreds, but has yet to arrive at a system that produces results worth talking about.

Fact-checking sites perform an invaluable service, but they are labor-intensive, not a self-regulating system like what Musk proposes. Such systems are inevitably and notoriously ruled by chaos, vote brigades, bots, infiltrators, agents provocateurs and so on.

Easier said than done — in fact, often said and never done, for years and years and years, by some of the smartest people in the industry. It’s not to say it is impossible, but Musk’s glib positivity and ignorance or dismissal of a decade and more of efforts on this front are not inspiring. (Nate Silver, for one, is furious.)

Likely as a demonstration of his “faith in the people,” if there are any on bot-ridden Twitter, he has put the idea up for public evaluation.

Currently the vote is about 90 percent yes. It’s hard to explain how dumb this is. Yet like most efforts it will be instructive, both to others attempting to tame the zeitgeist, and hopefully to Musk.