How marginalized creators are using “algospeak” to bypass the algorithms and reach wider audiences
Al Gore Rhythms. Unalive. Seggs. Out of context, the words might sound like newfangled slang—Gen Z’s new way of saying “on fleek.” But there’s a specific reason creators use words and phrases like these online. Simply put, they have to.
It’s known as “algospeak,” and its premise is simple. Some online platforms use algorithms to censor or ban content by seeking out key phrases. Creators find sneaky ways of not saying those phrases while still getting their basic meaning across to their audience. Algorithms becomes Al Gore Rhythms. Dead becomes unalive. Sex becomes seggs.
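To see why these substitutions work, consider a minimal sketch of keyword-based moderation. This is not any platform’s actual system — the blocklist, function name, and matching logic here are all illustrative assumptions — but it shows the basic weakness creators exploit: an exact-match filter catches the literal term and nothing else.

```python
# Hypothetical, simplified keyword filter -- real platforms use far more
# sophisticated systems, but the bypass principle is the same.
BLOCKED_TERMS = {"dead", "sex", "algorithm"}  # illustrative blocklist

def is_flagged(post: str) -> bool:
    """Flag a post if any blocked term appears as a standalone word."""
    words = post.lower().split()
    return any(word.strip(".,!?") in BLOCKED_TERMS for word in words)

is_flagged("The algorithm buried my video.")           # exact match: flagged
is_flagged("The Al Gore Rhythms buried my video.")     # algospeak: slips through
is_flagged("Let's talk about seggs ed.")               # respelling: slips through
```

A filter like this has no notion of context or intent, so any respelling, pun, or emoji stand-in defeats it — which is exactly the gap algospeak lives in.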
It’s not just a snarky way to avoid TikTok or Twitter dinging your content and sending you to the digital equivalent of a timeout. For some, it’s a necessity. One study examined how “social media work is not just materially concealed, but rendered socially invisible through its lack of … marginal status.”
In other words, creators from marginalized groups, like the LGBTQ community, can face penalties just for discussing these issues in public online. That’s why we’ve seen the rise of this alternative form of slang—and why creators call LGBTQ issues “leg booty.”
How we got here: the rise of algospeak
It’s tempting to say algospeak got its start at the dawn of social media platforms. Facebook first appeared in 2004, Reddit in 2005, Twitter in 2006, and TikTok in 2017. But in a recent episode of Fast Company’s Creative Control podcast, KC Ifeanyi asked Sean Szolek-VanValkenburgh, a social media manager, to trace the origins of algospeak. And it turns out that algospeak arises more out of the Terms of Service of each platform than anything else.
This, VanValkenburgh says, leads to people stumbling on terms of service they didn’t even know existed. And in many cases, updated Terms of Service grant the platform all sorts of powers users never knew they’d agreed to.
“People think [platforms] have to have a reason to pull your content down,” said VanValkenburgh. “If [your content] is unfavorable to the platform, they have full right to pull it down. You’ve agreed to it.”
This puts any creator in a tough spot. If they tackle controversial subject matter—or anything the platform deems worthy of a violation—they’ll naturally get less exposure than creators working in sponsor-friendly niches.
This could prove troublesome to many. What if an influencer wants to have a frank discussion about mental health and suicide on TikTok, offering genuinely helpful guidance on the subject?
As the Washington Post notes, “When young people began to discuss struggling with mental health, they talked about ‘becoming unalive’ in order to have frank conversations about suicide without algorithmic punishment.”
Controversial topics tend to get the most scrutiny. The COVID-19 pandemic became the “Backstreet Boys Reunion Tour.” Sex workers sharing safety tips? They might have to refer to themselves as “accountants.” Which may come as a big surprise to anyone looking to TikTok for tips on filling out their 1099-MISC forms.
And therein lies the rub: the more controversial and tricky a topic is, the more creators may have to rely on “algospeak” to have a frank discussion about it. That fact is weighing on marginalized creators who don’t have many options for building a following online.
How “algospeak” is the only option for some marginalized creators
It may all trace back to the “adpocalypse.”
In 2017, big-time brands like Coca-Cola started noticing that their ads might play during controversial and potentially upsetting videos on YouTube, and started pulling their money out of the platform.
YouTube made waves by updating its ToS—sound familiar?—and empowering advertisers to opt out of videos that addressed controversial or potentially risky subject matter. This series of events became known as the “adpocalypse” and gave us the tight-lipped social media platform policies we know today.
In doing so, social media platforms started to codify their corporate-friendly ad policies into their Terms of Service. By enforcing those policies through algorithms that seek out content on advertisers’ behalf, platforms can make it difficult for creators to reach their audiences when addressing more controversial subjects.
“[Algo-speak] allows content creators to open discussion on critical social issues,” said Nunzio Ross, founder and CEO of Majesty Coffee. “Social media algorithms aggressively attack words that pertain directly to these conversations, enforcing a disproportionate impact on marginalized and vulnerable groups wanting to share their experiences and stories online.”
“[Algo-speak] disproportionately affects the LGBTQIA community and the BIPOC community because we’re the people creating that verbiage and coming up with the colloquiums,” VanValkenburgh told the Washington Post. “There’s a line we have to toe, it’s an unending battle of saying something and trying to get the message across without directly saying it.”
Having to type “le$bian” or say “le dollar bean” in place of “lesbian” has an obviously disproportionate impact on creators who address LGBTQ issues. “This happens on TikTok, Instagram, and Twitter,” wrote Lesbians Who Tech & Allies on Medium. They even have trouble with email platforms. “In fact, many of our emails go directly to spam due to the name ‘Lesbians Who Tech & Allies.’”
Additionally, anyone who wants to address sensitive topics may have to adopt algo-speak just to stay relevant. “Topics that include the words ‘sex’ or ‘contraceptives’ are ranked lower on TikTok,” said Josh Tyler, CEO of GiantFreakinRobot. “But informative discussions and sex education are beneficial to anyone. The inability of algorithms to tag context into these filtered words and topics drives content creators advocating for social issues to leverage ‘algo-speak.’”
“Algospeak” as a barometer of the times
It can’t be easy running a social media platform. Genuinely hateful and disturbing content has to be swept aside to protect the general public. And determining what is and isn’t appropriate for a social media platform can be a matter of differing personal opinion.
“While algospeak is one way to promote freedom of speech on social media platforms,” says Nat Miletic, CEO of Clio Websites, “it’s also a driver for harmful communities to propagate their thoughts online. It’s dangerous when Facebook groups promoting misleading information online bypass bannable filters to spread hate and sow mistrust. It’s exactly why tech experts are looking for more ways to add context to social media algorithms to identify discussions that should or shouldn’t exist on the platform.”
In other words, algospeak can empower marginalized creators to continue to engage online and possibly spread a positive message, in spite of the sweeping rules brought on by algorithms blindly enforcing terms of service. But it can also provide nefarious groups with a way to speak in code—and gain followers online.
On one hand, using sunflower emojis to symbolize solidarity with Ukraine in its ongoing war with Russia is a way for people to express support without algorithms misclassifying them as geopolitical bots. On the other hand, the Washington Post reports that pro-anorexia groups use variations on keywords to avoid getting picked up by algorithms.
Where platforms go from here
TikTok removed 113 million videos between April and June of this year, according to the company. It’s fair to ask whether these algorithms sometimes throw the baby out with the bathwater.
Take the experience of Kahlil Greene, TikTok’s “Gen Z historian,” who reportedly altered a Martin Luther King Jr. quote because it included key phrases like “Ku Klux Klanner” and “white moderate.” When MLK quotes require algospeak, it’s clear the Terms of Service and content-banning algorithms of the major social media platforms have some catching up to do with the world.
Until then, it seems clear creators may continue to speak in slang that dodges algorithms to keep their discussions alive.