
By Friday, January 8, over two months after the Presidential election and mere days after a series of seditious events that need no explaining, President Donald Trump had been banned from Twitter. 

And Facebook. 

And Snapchat. And Twitch. And nearly every other digital platform on which he had a presence. 

These moves aren’t all that surprising; in fact, they’d been in the making for some time. Two months earlier, Facebook’s Mark Zuckerberg and Twitter’s Jack Dorsey spoke before the Senate about the proliferation of political misinformation. The hearings highlighted what Americans have known since the 2016 election: content moderation hinges on identifying truth at a moment when truth itself has become a partisan issue.

Building trust with ethical design, in the post-truth, pandemic landscape of 2021, is a challenge facing politicians, businesses, marketers, and neighbors alike. It’s also one that the builders of the interfaces through which we interact with the world are particularly well-suited to tackle. Here are some of the ways that user experience experts are thinking about ethical design in 2021 and beyond.

Design for the worst

There’s a common perception that the too-big-to-regulate tech companies are expertly engineered to manipulate their users. Rob Walker addresses this in a celebrated blog post on Medium, “Why Every CEO Needs to Think Like a Hacker, Stalker, or White Nationalist.” He explains that the great irony of digital product design is that even though we may feel these products were built to take advantage of us, the truth is that many of these companies simply failed to anticipate the bad actors who would try to manipulate them. In other words, what we’re really experiencing is negligence. “It would have been smart,” Walker writes, “to think ahead about how neo-Nazis might use Twitter, how pedophiles might use YouTube, or how a mass murderer might use Facebook Live.”

Walker cites the startup Superhuman, an invitation-only email experience that costs $30 a month, as an example. In June of 2019, the company came under fire for one particular feature that bordered on surveillance: a read status, which could let email senders learn not only whether recipients opened their emails, but when, where, and how many times. Superhuman’s founder Rahul Vohra responded to the backlash by making some changes to the app’s read statuses, and apologized by saying, “When we built Superhuman, we focused only on the needs of our customers. We did not consider potential bad actors. I wholeheartedly apologize for not thinking through this more fully.” 
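Read statuses of this kind are typically implemented with an invisible tracking pixel: the email embeds a unique one-pixel image URL, and every time the recipient’s mail client fetches that image, the sender’s server logs an “open” along with the requester’s IP address and device details. The sketch below is purely illustrative (the Flask endpoint, field names, and in-memory log are assumptions, not Superhuman’s actual implementation), but it shows how little code it takes to turn an email into a time-and-location beacon.

```python
# Illustrative sketch of a generic email "read status" tracking pixel.
# Hypothetical endpoint and field names; not Superhuman's actual code.
import base64
import datetime

from flask import Flask, Response, request

app = Flask(__name__)

# A 1x1 transparent GIF, so the tracker is invisible in the email body.
TRANSPARENT_GIF = base64.b64decode(
    "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAICRAEAOw=="
)

# In-memory log of opens; a real service would persist this to a database.
opens: dict[str, list[dict]] = {}

@app.route("/pixel/<message_id>.gif")
def track_open(message_id: str) -> Response:
    # Each time the recipient's mail client loads the image, the sender
    # learns when the message was opened, roughly where (via IP address),
    # and how many times; exactly the data that drew criticism.
    opens.setdefault(message_id, []).append({
        "opened_at": datetime.datetime.utcnow().isoformat(),
        "ip": request.remote_addr,
        "user_agent": request.headers.get("User-Agent", ""),
    })
    return Response(TRANSPARENT_GIF, mimetype="image/gif")
```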

Walker argues that this confession, while naive, should be taken at face value: Vohra really didn’t think about bad actors, and his lack of awareness speaks to that of so many other CEOs who similarly aren’t thinking about how their tools might be used beyond their intended purpose.

The solution, as Walker sees it, is to practice what he calls Design for the Worst. That is, designing any tool, any app, with the worst possible scenarios in mind. “Imagine,” he writes, “a sort of Black Mirror Department, devoted to nothing but figuring out how the product can be abused—and thus how to minimize malign misuse.”

Design that’s worse

Of course, this doesn’t speak to the issue of intentionally manipulative designs. “Dark patterns,” the manipulative tricks employed to get users to do things they don’t intend to, are everywhere. One recent example comes from Google, whose 2020 redesign of organic search results made it nearly impossible to tell the difference between a search result and an ad.

UX designer Harry Brignull, who coined the term in 2010, will be the first to tell you that if Google is any indication, dark patterns are still proliferating. “Dark patterns have gotten worse, not better,” Brignull said. “We need to ask why.” The thing is, we more or less know the answer: manipulative design is an effective way to improve the bottom line, and the tech industry is notoriously unregulated.

In recent years, the tech industry has tried the dubious tactic of self-regulation: touting company principles for AI applications and making public pledges to protect privileged data. These showy tactics are less about regulation (or ethics) than about public perception. As Ben Wagner told The Verge, the tech industry’s focus on ethical design is a form of “ethics washing.”

“Most of the ethics principles developed now lack any institutional framework,” Wagner said. “They’re non-binding.” In other words, they’re more about keeping a watchful government at bay than about reform or the prevention of dark patterns.

This ethical tension is also currently at the center of the debate around deepfakes, the highly deceptive synthetic audio and visual media that leverages AI and machine learning to transpose one person’s likeness onto another. 

https://www.youtube.com/watch?v=cQ54GDm1eL0

Designers, journalists, and policymakers have been calling for oversight of deepfakes since they cropped up on Reddit. But while we wait for Congress to pass the DEEPFAKES Accountability Act, others are grappling with the non-technological ramifications—the social implications—of this technology. In a recent article about how UX designers can respond to deepfakes (and cheap fakes), James Cartwright puts forth the idea that the best defense against deepfakes actually rests with the users. “Can improving society’s digital literacy suppress the sensationalized false narratives and compelling confirmation biases on which deepfake content thrives?” he asks. 

What if the way to combat unethical technology is to empower users, across social, political, and cultural hierarchies, to disengage from it? After all, deepfakes aren’t the first truth-bending media; truth is something all media has a complicated relationship with. The only difference now, Cartwright argues, is the speed at which this particular kind of disinformation travels.

Design for the good?

They say that no information is better than bad information. But that’s an adage that no longer applies to our social-media-addicted world, where 500 million tweets are posted every day, over 1 billion hours of YouTube video are watched daily, and 43% of U.S. adults report getting their news from Facebook. In this endless stream, opting for “no information” isn’t a neutral choice, and never really was.

Creating anything, any tool or app, no matter how good the intentions behind it, comes with a host of these same ethical issues. Sharlene Gandhi touches on some of these questions in her recent piece about carbon-tracking apps, which let users track their individual carbon footprint (something that is incredibly difficult to measure accurately). Although these apps are intended to foster a sense of responsibility for the environment in their users, there’s no real way to test whether they’re having a positive effect on the environment or influencing a person’s behavior in a lasting way. “Beyond the figure representing the carbon footprint of the user,” she writes, “there is very little push from these apps to drive climate-conscious activity outside of the app.”

This raises the question: if such a tracking app isn’t having a proven effect on the environment, is it possible that it’s doing harm? Is it making its users complacent, as so many apps can, leading them to believe they’re doing enough simply because they’re monitoring themselves? Are push notifications, as Gandhi suggests, a way to encourage positive user behavior, or do they push users into frustrating states of information overload that eventually cause them to disengage entirely?

What’s the election got to do with it? 

Ethical design feels like a particularly potent issue in the shadow of the 2020 Presidential election. The right to vote is a reminder that misinformation is a structural issue, not just in terms of the big tech companies’ policies but across society as a whole. And information, like any resource, comes down to access. Designers, as the creative agents behind the world’s systems, are beautifully poised to solve the ethical design problems facing this democracy, one light pattern at a time.
