In this edition of the Auschwitz Institute podcast, Jared Knoll speaks with Andrew Stroehlein, European Media Director for Human Rights Watch, former Director of Communications at the International Crisis Group, writer and speaker on a multitude of conflict issues, and avid watcher and participant in the world of social media. His writing has appeared in the Financial Times, the Guardian, the Washington Post, and the International Herald Tribune, among others. He has also been an instructor for the Auschwitz Institute’s Raphael Lemkin Seminar for Genocide Prevention.
Welcome. I’m Jared Knoll with the Auschwitz Institute. Social media and the constantly evolving ways of instant, online reporting have dramatically changed the ways we communicate, and we see more and more how they play into conflict situations. From Egypt’s spring rebellion to Iran’s so-called “Twitter revolution” and all manner of human rights issues being brought to the digital forefront, social media demands our attention in the field of genocide and conflict prevention. Here to raise it with me is Andrew Stroehlein, European Media Director for Human Rights Watch and prolific author on the relationships between media and violent conflict for nearly two decades. Hello, Andrew. Wonderful to have you with us.

Thank you for having me.

As the media director for Human Rights Watch, how do you view the evolving role of social media, in terms of what they can do to prevent or transform violence?

Well, I think, as with all forms of media that have been developed over the ages, you see both positive and negative uses. I think we have to be careful of going to one extreme or the other, and you do see that, particularly with social media: people say, ‘This is absolutely the greatest thing since sliced bread, this is going to solve all our problems,’ and another group of people say, ‘No, actually, this is just horrific and this is just going to lead to mob violence or something.’ And the truth is both and neither; it falls somewhere in the middle. Like a lot of things, social media has potential to be used for some good, and we’ve seen that, and it also has potential to get quite ugly, and we’ve seen that too.

What are the biggest challenges to harnessing that potential for good, or for ameliorating the ways that it can cause harm?

Well, I think one of the things that has to be done is some kind of media monitoring. I think that’s always key, particularly in conflict or conflict-prone situations. You see good media monitoring happening in other media in conflict or post-conflict situations where conflict is likely to restart: following closely, in the vernacular languages, what’s happening, how situations are being framed, what narratives are developing, and who is really stoking things up in terms of hate speech or even incitement to violence. And having multiple eyes on that is vitally important so you can catch things early — nip problems in the bud, as it were.

Do you see that as being a role taken up by coalitions of governments, or by the United Nations, or maybe by citizens’ advocacy?

Well, I think there’s always a danger when governments do it, isn’t there? But I think there should be multiple monitors on these things. The UN has done some good media monitoring in the past, in places, and there’s perhaps a role in some places for the UN. In some cases it’s NGOs that deal with media and media development that could take on this or at least advise on it. In other countries that role is also fulfilled, or maybe more likely fulfilled, by an independent media monitor made up of journalists and other media professionals who can create projects and create systems to keep an eye on things. Again, so you don’t have hate speech building and building, and so you don’t have incitement to violence developing. Before violence starts, you can always see these trends building in the media, and knowing when to jump in is also a very difficult question as well.

Is there a challenge for us in evaluating the effectiveness in the end? Take the Boston Marathon bombing as an example: there was a lot of Twitter activity — police using it, people sending photos and theories all over the internet — and it’s been controversial whether it actually helped or whether it hurt by making the picture more convoluted. Are there lessons we can take from that? Or is it always going to be difficult to know whether it was good or bad?

Yeah, it will always be difficult to know whether something was good or bad, because you’re always working against what would have happened “if,” and you don’t really know — you’re trying to play out what really happened, scenario A, against scenarios B, C, and D, which didn’t happen, and it’s impossible really to tell what would have been better or worse. But there are still lessons to be learned in each of these cases. The Boston example is perhaps one that we would all hope would not be repeated, in many ways. There was some absolutely atrocious reporting, and you did see on the edges the development of some really nasty language. Even in some pretty mainstream media, you saw things that were really quite racist, bordering on hate speech. Then of course if you go a level or two down from that, into individual Twitter feeds and other social media, you saw some just appalling things, which were bordering on incitement to violence. But just the disinformation — or misinformation, I should say, because some of it must have been, I suppose, not intentional — that spread. In a way it made a mockery of the idea that social media was going to be revolutionary. And honestly, it was a bit of a disaster, wasn’t it?

Do you find yourself optimistic that things will continue to improve, in terms of social media being used as a tool to prevent violence?

Well, I would hate to see people believe that it’s automatically going to be a force for good. I think there’s serious potential for mass violence to be driven, at least in part, by social media. It’s not revolutionary thinking exactly; we’ve seen it with every other form of media. We’ve seen mass atrocity crimes perpetrated with the help of posters, with newspapers, with radio — it’s almost inevitable that some kind of mob violence will come out of this. But it’s also inevitable that the tool will be used for good as well. And I think that’s why multiple NGOs and perhaps some international agencies, or associations of journalists, need to have systems in place in particularly tense areas to keep an eye on how the narratives are developing, and, to put it at its absolute most blunt, apply some kind of “cockroach rule” to this. Once people start talking about vermin and cockroaches, that’s always the tip-off that something bad could, and perhaps will, go down. So I’m optimistic that these things can be used to help people who want to prevent mass atrocity crimes. I’m also very aware that these tools could be used to help perpetrate mass atrocity crimes.

Do you think that it’s likely that we’re going to have to make some compromises on protection of free speech, in order to prevent that kind of negative rallying using social media?

Well, there are significant debates on this, and of course the American approach to free speech is very different from, say, the European concept of free speech. I don’t think free speech includes being able to incite violence. There are people who will disagree with that, but having seen how media plays into mass violence in places, it’s not just yelling fire in a theater — it’s telling someone to start a fire in a theater. You don’t have that right. So I do think keeping an eye on things matters — there have already been cases where people were basically inciting violence using social media, and people don’t have a right to do that. Your free speech ends before that. And that will upset some free speech advocates, but that’s just how I see it.

How did you get started in the human rights field, and become involved with the Auschwitz Institute?

I was working at the International Crisis Group for many years, and I cannot actually remember who got in touch with me in the first place or how it came to be, but they essentially invited me out to one of the conferences, and it was fascinating — I’ve written about it in Foreign Policy magazine and elsewhere. I found the whole experience very enriching. To go and talk about mass atrocity crimes at Auschwitz is obviously very powerful. But the point is to take that lesson, with diplomats and military people and others, and realize that genocide and other mass atrocity crimes happen in places that look very different from Auschwitz. This one case is absolutely appalling, but the way it was done is not necessarily the way it will be done the next time. Each genocide, each mass atrocity, develops with its own specifics, and you have to try to find the warning signs in what’s going on. Not every genocide is the kind of industrial process the Nazis ran. There are other forms, and that is what people have to look out for — and for that early warning, media monitoring generally is crucial.

What can people listening do in terms of prevention who are themselves connected to this very powerful system of social media communication?

Well, I think it’s very helpful when people call out others for hate speech and racist speech. I think that’s absolutely essential. When you’re on social media and you see others making comments that are just blatant hate speech — and you don’t even have to get to the “vermin and cockroach rule,” because even before that you see hateful things being said — you have to realize the power that those sorts of statements have. Simply calling people out makes a difference.

Well, Andrew, I hope you’ll keep that keen eye on the media horizon, and continue to write and tell us how we can watch out for the negative aspects of all this.

Thank you again for having me.