Today’s cancel culture has corporations of all sizes reacting quickly to avoid online controversy of any kind. But if they cast their net too wide – implementing broad, defensive policies and regulations designed to protect their brand at all costs – they may end up harming the very causes they stand for in the first place.
That’s the finding of a recent study led by communications and culture researcher Steph Hill, who is calling for a more thoughtful approach to managing online content, one that moves beyond brand reputation to balance controversy avoidance with social advocacy, ensuring meaningful conversations about important issues can still take place.
Hill, a PhD candidate in the joint Communication and Culture graduate program at Ryerson and York Universities, will discuss the increasing pressure companies are under to manage online content at a time when it’s impossible to predict when the next social media firestorm will hit. Concern over brand reputation has put companies on the defensive, she said, leading to actions that limit the type of content that’s allowed to exist in the public sphere – sometimes to the detriment of legitimate causes.
“What’s happening is we’re teaching companies to be afraid of associating with any and all controversy, when the goal should be to be afraid of racism, misogyny or more specific political terms,” she explained, noting that companies now live in constant fear of being the one that didn’t get it right. “Online platforms are becoming a minefield and that’s why we’re seeing these blanket, controversy-averse policies emerge.”
Hill points to the COVID-19 pandemic as an example. In the first few months, many corporations blocked their ads from appearing next to online content that referenced the word “coronavirus” because they didn’t want to appear to be “doing anything untoward during a global crisis,” she said. “It may seem harmless on the surface, but those policies blocked ads on most major news outlets, so all of a sudden we saw this drop in funding for journalism at a time when there was an important story to tell.”
Similarly, a move by YouTube to block advertisers from using social and racial justice terms such as “Black Lives Matter” to target videos was intended to keep ads critical of the movement away from that content. But the same restriction can just as easily cut off support for legitimate content, said Hill. “If a Black Lives Matter activist wants to talk about their experience in a video, there should be a way to support that,” she said.
Hill is calling for companies to be more decisive about the causes they are trying to support and more careful about the content they want to remove or distance themselves from. “It’s tricky to navigate, but with careful thought, more targeted policies can be achieved,” she said. “What we don’t want is an online system that disadvantages perfectly acceptable content simply to protect a brand.”