In Today’s Digital World, How Should Humans Play a Role?
February 14, 2018

Categories: Trends, Tactics

New in Digital is a blog series dedicated to highlighting digital news from across the web and explaining what those developments mean for organizations in the public affairs environment.

If there’s one lesson we learned about digital platforms last year, it’s that we can’t expect algorithms or vague guidelines to replace ethical human judgment. We’re still learning how much bots and outside actors were able to use social media to influence the 2016 election. Since then, Facebook has received significant criticism for hosting fake news, Twitter has faced backlash over its verification badge, and YouTube’s content moderation has come under scrutiny after Logan Paul’s ill-considered video about his visit to Japan’s suicide forest. All three companies have since altered their processes to include more policing and monitoring by humans. The question, however, is whether more human involvement will help or hinder users’ online experience. Two recent stories explore different sides of the issue.

Ethical design could solve some of social media’s problems

When people think of user experience design, they typically consider how the technology responds to a user’s wants or needs. For example, an organization with a consistent stream of think pieces could add a recommendation engine to its website that suggests content based on what each visitor has previously read.
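To make the idea concrete, here is a minimal sketch of that kind of recommendation engine, assuming (purely for illustration) that articles carry topic tags and that unread candidates are ranked by tag overlap with a visitor’s reading history; the article slugs and tags below are hypothetical:

```python
# A minimal sketch of the recommendation engine described above.
# Assumption (not from the original post): articles carry topic tags,
# and candidates are scored by tag overlap with the visitor's history.

ARTICLES = {
    "why-advocacy-needs-data": {"advocacy", "data", "strategy"},
    "grassroots-campaigns-101": {"advocacy", "grassroots"},
    "measuring-digital-impact": {"data", "analytics", "strategy"},
    "social-media-trends-2018": {"social-media", "trends"},
}

def recommend(read_slugs, limit=2):
    """Suggest unread articles whose tags overlap most with reading history."""
    interest = set().union(*(ARTICLES[slug] for slug in read_slugs))
    candidates = [
        (len(ARTICLES[slug] & interest), slug)
        for slug in ARTICLES
        if slug not in read_slugs
    ]
    # Highest tag overlap first; drop articles with no overlap at all.
    candidates.sort(reverse=True)
    return [slug for score, slug in candidates if score > 0][:limit]

print(recommend({"why-advocacy-needs-data"}))
# ['measuring-digital-impact', 'grassroots-campaigns-101']
```

A production system would of course use richer signals (reading time, collaborative filtering, recency), but the core loop is the same: infer interests from behavior, then rank what the visitor hasn’t seen yet.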

However, while that methodology is useful and necessary, digital designers and developers are starting to realize that a key element is missing: ethical design. It’s not only about how people can use the technology, but also about how the technology makes them feel.

Today, organizations have a social responsibility to anticipate how their digital channels could affect the well-being of their visitors, and only humans possess the contextual judgment to make the final call on whether an impact is positive or negative.

Human moderators may seem like the answer, but are they?

As we mentioned earlier, many of the big social platforms have recently updated their guidelines and processes to keep problematic content off their channels. While adding more human gatekeepers may seem like a good and even necessary step, the approach also raises concerns.

We can all probably agree that zero censorship on social networks is not an option (nobody wants offensive and hateful content in their feed). But can we trust big tech companies like Twitter, Facebook, Google, and YouTube to police and monitor content? Humans have biases, and when they get involved in deciding what is and isn’t acceptable, those biases creep in. That can cause inconsistent enforcement, drive users away from certain platforms, and potentially turn each network into “its own massive filter bubble.”

Walking the line between monitoring and censoring online content has always been tricky. The past couple of months have made the digital world realize that there’s still a lot of work to do to get the balance right.

Stay tuned for more New in Digital posts.