May 25, 2021

How Trust & Safety Addresses Violent Extremism on Discord


Since the shocking events of the U.S. Capitol Insurrection on January 6, 2021, the issue of violent extremism has been front and center of many conversations in the United States. At Discord, we consider violent extremism to be the support, encouragement, promotion, or organization of violent acts or ideologies that advocate for the destruction of society, often by blaming certain individuals or groups and calling for violence against them.

We don’t allow users or servers to engage in this behavior, and we also don’t allow the glorification of known violent extremist groups or movements. With this topic on the minds of many, we wanted to delve into it a bit more and explain how we review this kind of content, when we take action against it, and why it’s important to us to stop the proliferation of violent extremism on Discord.

Our History with Violent Extremism

We’ve been paying close attention to violent extremist groups and movements ever since we learned how the organizers of the 2017 Unite the Right Rally in Charlottesville, Virginia utilized Discord to plan their hateful activities.

Back then, Discord Trust & Safety was a team of one, just beginning to make difficult decisions about how to properly moderate the platform. Almost four years later, our Trust & Safety team makes up 15% of Discord’s nearly 400 employees and splits its time between responding to user reports and proactively finding and removing servers and users engaged in high-harm activity like violent extremist organizing.

Trust & Safety has spent a lot of time since 2017 trying to ensure that another event like Charlottesville isn’t planned on our platform. Our team developed frameworks based on academic research on violent extremist radicalization and behavior to better identify extremist users who try to use Discord to recruit or organize. We keep up-to-date on research that can lend insight into how to evaluate and understand extremist behavior online, and our recent partnerships with organizations like the Global Internet Forum for Countering Terrorism (GIFCT) and Tech Against Terrorism (TAT) are intended to support this effort.

How We Identify Violent Extremism

Categorizing violent extremism itself is difficult because not all extremists have the same motives or believe in the same ideas. Some individuals who adopt violent ideologies act on their beliefs by joining organized hate, terrorist, or violent extremist groups.

Others don’t want to officially identify themselves as belonging to a particular movement, and may instead form looser connections with others who have adopted the same worldview. Different cultural contexts also influence belief systems and behaviors, so violent extremist ideologies in one country will naturally be different from those on the other side of the world.

Violent extremism is nuanced, and the ideologies and tactics behind it evolve quickly. We don’t try to apply our own labels or identify a certain “type” of extremism.

Instead, we evaluate user accounts, servers, and content that is flagged to us based on common characteristics and patterns of behavior, such as:

  • Individual accounts, servers, or organized hate groups promote or embrace radical and dangerous ideas that are intended to cause or lead to real-world violence.
  • These accounts, servers, or groups target other groups or individuals who they perceive as enemies of their community, usually based on a sensitive attribute.
  • They don’t allow opinions or ideas opposing their ideologies to be expressed or accepted.
  • They express a desire to recruit others who are like them or believe in the same things to their communities and cause.

It’s important to note that the presence of one or two of these signals doesn’t automatically mean that we would classify a server as “violent extremist.” While we might use these signs to help us determine a user or space’s intent or purpose, we always want to understand the context in which user content is posted before taking any action.

How Did We Do on January 6?

On the day of the Insurrection, our Trust & Safety agents were reviewing reports of hate speech, glorification of violence, and misinformation about what was transpiring. We feel very fortunate that our team was able to locate and remove many of the most harmful servers dedicated to coordinating violence on January 6.

Our ability to move proactively on servers advocating for violence was thanks to two main factors: first, we were able to quickly surface user reports about these spaces; and second, our Trust & Safety agents dedicated to countering violent extremism had been tracking these spaces ever since allegations of fraud in the 2020 U.S. presidential election had begun to spread.

We believe it’s important to talk about the line we walk with Discord users who discuss politics or organize political activities like protests. Many people are frustrated with how society works and how some governmental or societal systems are structured. Naturally, people have strong opinions on how things should or shouldn’t change.

Now more than ever, it’s important for meaningful conversations, debates, and exchanges of ideas to take place. We’re glad that users across the world have turned to Discord to discuss their opinions and beliefs, to organize, and to advocate for the change they want to see.

Discord Trust & Safety’s objective is to ensure that no harm comes to our users, or to society at large, because of actions taken on Discord, which is why we don’t tolerate activities that promote or advocate for violence. When we’re reviewing reports for violent extremism, while it’s sometimes clear when users or servers have crossed that line, in many cases there’s a lot more context to consider. One of the most difficult responsibilities of our work is balancing the mitigation of potential harm without appearing as if we are overstepping any boundaries or censoring meaningful conversation.

Looking to the Future

Because of these values, we plan to continue standing firm against ideologies of hate that violent extremist communities espouse, and we are excited to work with other platforms and organizations that seek to do the same. Stay tuned for more updates.

We know that we can’t solve violent extremism alone, but we’ll continue to do our best to make sure that the communities on Discord reflect our company values. We want Discord — and the internet as a whole — to be a space for positive interactions and creating belonging.

If you would like to report dangerous or harmful activity to the Trust & Safety team, please do so using our report form. If you’re unsure how to report a user or server, take a look at dis.gd/HowToReport.

Tags:
Policy
Reporting
Server Safety
User Safety
Transparency
