You went undercover in order to research various extremist online groups. How does this type of investigation actually work?
In recent years, there have been numerous studies dealing with extremism on the Internet. In particular, these studies analysed changes in the language of extremist groups as well as the scope of campaigns and their target groups. What I found missing, however, was access at the human level. I wanted to better understand the social processes within extremist groups. For this reason, over the past two years I’ve built up a variety of online identities in order to go through different recruiting processes and gain access to the groups. In some cases, I have also met members of these groups offline.
How does the recruiting and building up of these radical groups work?
Right-wing extremist groups in particular carry out their admission processes via Discord, a popular voice and text chat app designed for video gaming, with over 250 million users. Many of these groups first run a background check on the social media accounts you specify in your application. In some instances, they also put together a questionnaire on ideological attitudes, political views or cultural outlook.
In other groups, religion plays an important role. This is often followed by voice chats or interviews, a step intended to prevent infiltration by journalists or the authorities. The recruitment process tightened considerably after the right-wing extremist rally in Charlottesville in 2017. For example, a neo-Nazi group in the US had me post a photo of my wrist, marked with the group’s logo and a timestamp, to prove that I was white. Sometimes they also want to see the results of genetic testing so that they can confirm ethnic origin.
Did you maintain your false identity when you met with individual members?
In the beginning it was not my goal to meet anyone offline. That’s why, for example, I used a profile photo of a blonde woman; later, I got a blonde wig to match. There were also offline meetings as part of being admitted to the Identitarian movement: I was required to meet an Austrian Identitarian in Vienna. Afterwards, I had a Skype conversation with the head of the movement in Scotland, who was involved in setting up the new British and Irish offshoot. As a result, I was invited to London for the kick-off meeting of this British-Irish branch.
So your research developed a momentum of its own. Could you go over this development and tell me: what do these groups offer their members? And which people are particularly receptive to the subject matter of groups like these?
The sad part, actually, was that many of the new members in these groups were in search of love, friendship, companionship and identity. Many were in a personal crisis. This became quite clear, for example, in the misogynist “Frauenfeind*innen” group, where many members came from failed relationships and felt unloved. So ultimately it’s about finding a substitute for a family or a circle of friends, and that’s exactly what these groups offer. That’s why the susceptibility profile is very diverse: people from different age groups, educational backgrounds and socioeconomic situations apply for membership.
What these people have in common is that they are all struggling with an identity crisis. Anyone in a state like this is susceptible to radicalisation. In these groups, individual frustration is then raised to a collective level and an explanation offered. A process of socialisation takes place, to which the radicalisation and ideological indoctrination are actually initially subordinated. The group develops its own vocabulary and set of jokes, so that at times the whole thing seems a bit less serious. Ultimately, however, they always define a distinct group of enemies and propagate an extreme ideology.
Have you also identified yourself as a woman in these misogynistic groups, and thus adopted a female identity?
It varied. I set up an account in groups of female misogynists in particular. That was really fascinating, because what I encountered there was a completely different kind of radicalisation: a group in which hatred is directed not against an Other, but against people like themselves. Through my male account, I then immersed myself in the male side of this sphere of opinion. But my main focus was on the female misogynists, because I had never encountered anything like this before.
Even in the media, this group doesn’t have any substantial presence.
Exactly. But it was interesting, because some groups even welcome women with open arms – for example, the Identitarian movement or some US alt-right groups. They place a great deal of importance on having women in the top ranks, because that gives the group a more legitimate and innocuous appearance. In other groups, I was accepted despite the fact that I’m a woman. That was the case with the US neo-Nazi group, where I had to show my genetic test results.
What role do conspiracy theories play within these groups? To an outsider, these theories often come across as being rather far-fetched and one wonders how someone can seriously believe such theories.
In some cases, the recruiting specifically targets conspiracy theorists or people who are known from studies to have a “conspiracy mentality”. On the other hand, conspiracy theories can also be transmitted through a gradual, drawn-out process in which the ideological component is added step by step. This is called “redpilling”, a reference to the film “The Matrix”.
The best example of a conspiracy theory that generates hatred for a target of hostility is the one aimed at a supposed Jewish global elite. In this way, personal fears are projected onto an image of a perpetrator. These fears mostly concern migration, sexual violence, societal decline or terrorist attacks. The conspiracy theory then provides an easy explanation and a simple image of the enemy. The notion of a Jew who secretly controls the entire world is actually a very old conspiracy theory, but it is now being linked to current political events and new social dynamics and processes.
People in these groups often hold very extreme positions, but one might assume they remain ultimately harmless as long as the group stays within the network. From the outside, is it possible to actually distinguish dangerous tendencies on such platforms from rather harmless interactions?
The problem is that many of these conspiracy theories and relevant ideologies propagate a kind of existential threat. The rather apocalyptic visions can then lead certain individuals to stop believing in political or metapolitical solutions. This can greatly promote the willingness to resort to violence. A racial, cultural or religious war is then the only logical form of self-defence. This can very quickly inspire the sort of terrorists that we saw in Christchurch, New Zealand, as well as in the US and in Halle, Germany.
Do the Internet and social media serve as the catalyst? Such rapid networking and radicalisation are hard to imagine outside the digital space.
“Catalyst” is exactly the right term, because the dynamics I’ve observed online aren’t very different from traditional radicalisation processes in offline networks. What’s new, however, is the way in which the groups network and mobilise at the international level.
Today, marginalised groups can make themselves heard and recruit members beyond traditional target groups. Since there are different subcultures in the individual countries, they can adapt their communication and propaganda in a precisely targeted manner. Moreover, algorithms and the infrastructure of most tech platforms are perfectly suited to their purposes. Especially in the case of “recommendation algorithms”, radical content tends to be at the top of the list, so people quickly slip into extremist echo chambers without necessarily being politically or ideologically predisposed to this.
Can individual dynamics be distinguished between the different radical groups? We’ve been talking primarily about right-wing groups – what about Islamist groups?
For several years now, Islamist groups – or at least their propaganda – have been combatted much more strongly online. There’s the Global Coalition Against Daesh, an international effort that works primarily against IS propaganda. There’s also the alliance of the four major tech companies – Microsoft, Google, Facebook and Twitter – which has worked to remove propaganda videos as quickly as possible and prevent their re-uploading on their platforms.
But about 90 per cent of those efforts targeted Islamist propaganda, not right-wing extremism. Accordingly, right-wing extremist groups have been able to remain active much longer, unobserved by the authorities and tech companies, and build up their networks. Moreover, unlike traditional neo-Nazi groups, right-wing groups today work more with satire and exploit grey areas by rebranding symbols and vocabulary.
You say that tech companies are trying to stop radicalisation on their sites. Often, however, they’re also the focus of criticism, accused of only half-heartedly deleting hateful content. What’s your view – are tech companies doing enough?
I would say that in most cases they’re concerned about their reputation. You can see this in the fact that their actions are generally of a reactive nature. Theoretically, they would have to change their entire business model. Also, algorithms very strongly reflect the human psyche, and unfortunately, our attention is most strongly attracted to extreme content. In the past, we enjoyed watching gladiator fights, and today as well, it’s violence that holds our attention.
So it would take a more humane approach or a complete change in the algorithms and business models of these companies. In any case, political pressure can be used to a certain degree to have violent content removed. This should be done not only on the large platforms, but also on the smaller, mostly more extreme and in part ultra-Identitarian platforms that have turned into real hotbeds of extremism. These platforms are generally completely isolated and allow the emergence of extremist echo chambers that call for violence. This is where political intervention could play a much larger role.
How well are law enforcement authorities keeping abreast of developments in identifying and monitoring radical actors on the Internet?
There’s definitely a need to catch up. Especially after Christchurch, it became clear that after 9/11 the focus on (Islamist) terrorism had been too strong. The right-wing networks on the Internet had received hardly any attention. There was a failure to understand this entire subculture better, which would have made it possible to evaluate it adequately: What is potentially threatening for democracy? What can trigger a shift to violence? What, exactly, is trolling? In any case, at this time neither the German nor the international authorities have a full overview of right-wing extremist online groups. By contrast, we are much better equipped on the Islamist side.
What about left-wing extremist groups?
In the course of my research I looked in all different ideological directions, but tried to map the networks with the most influence and the greatest threat to democracy and society. I noticed that it was on the right-wing extremist side where there was a strong focus on discipline and order. The far-right groups are much more coordinated and have a larger network that resembles military structures. As a result, they operate much more effectively in the online space. This applies to recruitment, communication and international networking. On the left, I didn’t get the impression of a similar degree of coordination or such a focus on discipline and order.
On the other hand, there’s also a strong ideological difference. In both Islamist and far-right groups there’s this very strong development of hatred against an out-group, mostly against a minority group. This is ideologically not present on the left. Instead, what we see is mobilisation in response to right-wing extremism. Particularly in response to Charlottesville, there were also left-wing militant groups who wanted to take revenge. But this was just reactive, rather than hatred directed against a human out-group.
The exception is anti-authoritarianism and the anti-state mobilisation of left-wing extremists. But at the tactical level they are simply not as sophisticated or militarily organised as the right. I’m often told that I’m blind in my left eye – that I overlook left-wing extremism – but if you look objectively at the different networks, there’s nothing comparable to the right-wing extremists.
Let’s get back to your own role. You’ve written a book based on your research in extremist online groups. Are you in danger these days if you make a public appearance? Have there been any threats?
It’s amazingly quiet right now. I’ve received quite a few threats in the past and had prepared myself for the possibility of threats or hate campaigns after the book’s release. But it was crucial to me to understand what drives people into these circles and how to bring them back out again. I’ve put a lot of effort into not revealing the identities of non-public extremists, so I hope that perhaps in return I’ll be shown a measure of humanity. I’ve seen the human side of even the most extreme radicals, and I have the feeling that these insights could be used as a starting point for de-radicalisation programmes in the online space.
I imagine it’s difficult to deal with radicalised positions and agitation for so long and yet still manage to keep the human side in mind.
It’s a mixed bag. Of course I was disgusted by racist jokes or conspiracy-laden ideologies that I would have liked to refute immediately. But on the other hand, I realised that this process of radicalisation involves a great deal of human motivation and that to a large degree it all begins with a search for love or recognition. Especially with the younger members, I couldn’t help but feel some compassion.
This interview was conducted by Claudia Detsch.