Social media platforms are often celebrated as spaces where anyone can have a voice. Whether you’re sharing a funny video, promoting your art, or advocating for change, these platforms offer the promise of connection and expression. However, for many marginalized communities, this promise falls short. While social media has allowed people from all walks of life to share their perspectives, it’s also a space where voices can be silenced, devalued, or outright erased.
The erasure of marginalized voices on social media isn’t always obvious, but it’s a widespread issue. From algorithmic bias to targeted harassment, systemic disadvantages often echo in the digital world. This phenomenon limits the reach of important conversations around race, gender, disability, and more. By learning about how this erasure happens, we can push for platforms that truly empower everyone, not just the most privileged.
What Does "Erasure" Mean?
Before digging into specifics, it’s important to understand what we mean by the erasure of voices. Erasure can happen in several ways. It could mean literally removing someone's posts, whether through platform moderation, mass user reporting, or automated filtering. It can also mean minimizing or overlooking certain perspectives, drowning them out with louder, more widely accepted narratives.
Sometimes, this erasure is deliberate, such as targeted campaigns to silence marginalized creators. Other times, it’s the result of biases baked into platform designs or algorithms. Regardless of intent, the results are the same: people in marginalized communities are prevented from fully engaging in discussions and sharing their experiences.
How Algorithms Contribute to Erasure
One major way marginalized voices are silenced is through algorithms, the ranking systems social media platforms use to decide what appears in your feed. The goal is to show you whatever the platform predicts you’re most likely to engage with. But the way these systems work can push certain voices and perspectives into the background.
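To make the mechanism concrete, here is a minimal Python sketch of engagement-based ranking. It is illustrative only: real platforms use far more elaborate models, and names like `predicted_engagement` are assumptions for the example, not any company’s actual code.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    predicted_engagement: float  # a model's guess at likes, shares, comments

def rank_feed(posts: list[Post], limit: int = 10) -> list[Post]:
    """Order a candidate pool purely by predicted engagement.

    Anything the model scores low -- however important the topic --
    simply never surfaces near the top of a user's feed.
    """
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)[:limit]
```

The design choice to optimize a single engagement number is exactly where bias enters: whatever the model under-scores, for whatever reason, becomes invisible.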
The Problem with "Neutrality"
Social media algorithms are often described as "neutral," but that’s far from the truth. These systems are created by humans, and humans have biases. These biases can unintentionally be built into the way algorithms function. For example, posts that align with popular opinions or trends are more likely to be boosted, while those that challenge the status quo might not get the same attention.
For marginalized creators, this can mean their work isn’t given equal visibility. If an algorithm favors content that’s considered “mainstream,” it might suppress posts that deal with issues affecting smaller or underrepresented groups. Over time, this creates a cycle where only certain narratives dominate while others are ignored.
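A toy simulation (purely illustrative, with made-up numbers) shows how this cycle compounds. Each round, the top-scored post gets extra exposure, which raises its score further, so a tiny initial lead hardens into dominance:

```python
def simulate_feedback_loop(scores: list[float], rounds: int = 50,
                           boost: float = 0.05) -> list[float]:
    """Each round, whichever post currently ranks first gets more
    exposure, which in turn raises its score for the next round."""
    scores = list(scores)  # copy so the caller's list is untouched
    for _ in range(rounds):
        top = max(range(len(scores)), key=lambda i: scores[i])
        scores[top] += boost  # exposure begets engagement begets exposure
    return scores

# Two posts start almost tied; after 50 rounds the gap is enormous.
print(simulate_feedback_loop([1.00, 0.99]))  # -> approximately [3.5, 0.99]
```

Nothing in the loop is malicious; the skew emerges from the feedback structure itself.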
Shadowbanning
Another common issue is shadowbanning. This happens when a platform limits the reach of your posts without telling you. People in marginalized communities often report being shadowbanned after posting about issues like racism, LGBTQ+ rights, or police brutality. By quietly reducing the visibility of these creators, platforms limit their ability to spread awareness and grow their communities.
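Platforms don’t publish how shadowbanning works, so any code can only model the reported effect: a silent down-weighting applied during distribution. The function below is hypothetical, including the `restricted_topic` flag:

```python
def effective_reach(base_reach: int, flags: set[str]) -> int:
    """Hypothetical distribution step. Posts carrying certain internal
    flags are quietly down-weighted: the author sees no error and no
    notification -- the post just travels a fraction as far."""
    multiplier = 0.05 if "restricted_topic" in flags else 1.0
    return int(base_reach * multiplier)

print(effective_reach(10_000, set()))                 # 10000 potential viewers
print(effective_reach(10_000, {"restricted_topic"}))  # 500, with no warning to the author
```

Because nothing changes on the author’s side, the drop in reach is nearly impossible to prove, which is part of what makes shadowbanning so corrosive to trust.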
The Role of Harassment in Silencing Voices
Beyond algorithms, targeted harassment is another tool used to erase marginalized voices on social media. Often, when someone from an underrepresented group speaks out, they become the target of hateful comments, threats, or even coordinated attacks.
Coordinated Harassment Campaigns
Some harassment campaigns are carefully organized, with groups targeting specific creators. This is especially common for women, people of color, and LGBTQ+ individuals. When an individual becomes a target, they may feel forced to leave the platform for their safety or mental well-being.
The Cost of Emotional Labor
Even when the harassment isn’t as extreme, the constant stream of hateful or dismissive comments takes a toll. Marginalized creators often say they feel burdened by the emotional labor of constantly defending themselves and their communities online. This added pressure can discourage them from continuing to share their experiences, further silencing their voices.
Lack of Platform Accountability
Social media platforms frequently fail to protect users from harassment. Reporting tools are often flawed, with many victims finding that their complaints are ignored or dismissed. Meanwhile, harmful content can remain online for weeks or months, contributing to the unsafe environment marginalized users face daily.
Moderation Biases and Unequal Enforcement
Content moderation is supposed to create a safer environment for everyone, but it often works more harshly against marginalized groups. Posts from Black users discussing racism or from LGBTQ+ creators celebrating their identity can be flagged and removed for "violating community guidelines," even when they don’t break any rules. At the same time, harmful or offensive posts targeting these groups are sometimes overlooked.
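One technical root of this unequal enforcement is context-blind filtering. The sketch below is a deliberately crude keyword filter, not any platform’s real classifier, but it shows how a rule with no sense of speaker or intent ends up flagging the people discussing hate rather than the people spreading it:

```python
# A naive moderation rule: flag any post containing a listed term,
# regardless of who is speaking or why. (Illustrative only.)
FLAGGED_TERMS = {"racism", "queer"}  # terms targeted communities also use about themselves

def should_flag(post: str) -> bool:
    words = {word.strip(".,!?'\"").lower() for word in post.split()}
    return bool(words & FLAGGED_TERMS)

print(should_flag("My essay on surviving racism in tech"))    # True: the victim gets flagged
print(should_flag("Those people don't belong in this city"))  # False: coded hate passes
```

Real classifiers are more sophisticated, but audits have repeatedly found this same failure mode: identity terms raise a post’s toxicity score even in neutral or affirming contexts.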
Examples of Double Standards
These double standards came into focus during movements like #BlackLivesMatter and #StopAsianHate, with activists frequently reporting their posts being taken down while hate speech against their communities remained visible. For example, hashtags associated with these movements were restricted or suppressed at times, leading to widespread outrage over how platforms were failing the very groups they claimed to champion.
Who Gets to Speak Freely?
Platforms often justify their actions by citing their neutrality or efforts to avoid conflict. However, by disproportionately silencing people discussing systemic oppression, they create an online space where some voices are deemed more acceptable than others.
Cultural Appropriation and the Exploitation of Marginalized Creators
Another way marginalized voices are erased is through cultural appropriation. While underrepresented creators often struggle to gain recognition for their work, their ideas, trends, or art can be copied and popularized by more dominant groups, leaving the originators behind.
For instance, dance challenges on platforms like TikTok often originate with Black creators, yet those creators don’t always get credit. When a mainstream creator replicates their moves or ideas, the followers, brand deals, and recognition that should have gone to the originator flow elsewhere. This pattern erases the contributions of marginalized individuals, handing the rewards of their work to people who are already more visible.
Fighting Back Against Erasure
Though the challenges are significant, many users aren’t giving up. Marginalized communities have found ways to push back against erasure and reclaim their space on social media.
Community Support
One of the greatest strengths of social media is its ability to connect people. Marginalized users often rely on each other for support, creating online communities where they can share advice, amplify each other’s voices, and celebrate their culture.
Platform Accountability Movements
Some users are holding platforms accountable through movements like #FixTheAlgorithm. By drawing attention to the ways algorithms and moderation practices harm marginalized voices, these movements pressure social media companies to create fairer and more inclusive policies.
Uplifting Smaller Creators
Another way to counter erasure is by actively supporting marginalized creators. Following, sharing, and engaging with their content helps boost their visibility, breaking through the barriers imposed by algorithms.
What Needs to Change?
To ensure that all voices are heard, social media companies need to confront the biases ingrained in their platforms. This might include:
- Improving algorithms to promote diverse perspectives, not just those that appeal to mainstream audiences.
- Hiring diverse teams to oversee moderation practices and ensure fair treatment for marginalized users.
- Providing stronger protections against harassment, including better reporting tools and quicker responses to flagged content.
- Being transparent about policies like shadowbanning to build trust with users.