
iHub Research | By Angela Okune / January 30, 2013

What’s Dangerous, What’s Not


As Kenya’s general election quickly approaches, we are faced with a critical decision as Kenyans. Do we fall back into tribal lines and risk another bout of violence like the one we saw in 2007/2008? Or do we foster constructive dialogue in order to have fruitful discussions around the issues that concern us?

As our guest editor Jessica Musila has aptly summarized for us, the results from our Umati research project provide a wake-up call to Kenyans online that we need to be more responsible about the content we are putting out. Instead of using our blogs like personal diaries where we vent and air our frustrations, we need to realize that there is a wider audience who can potentially be incited by some of the messaging that we are pushing out.

The Umati team has been advising Permanent Secretary Bitange Ndemo and his team at the Ministry of Information and Communication about the role of “dangerous speech” in catalyzing violence. The following are guidelines we have detailed to the National Steering Committee on Media Monitoring, which we hope will also be of use to you as you produce online content. Please feel free to share and critique. We welcome your contributions.

What is dangerous speech?

Dangerous speech is communication that may help catalyze mass violence by moving an audience to condone – or even take part in – such violence.

We focus on the speech’s effect on the audience, not the state of mind of the speaker – hence the term “dangerous” rather than “hate.” Since we want to protect freedom of expression as much as possible, we are most concerned with speech that has the potential to disrupt public order or to bring about violence (Susan Benesch).

Any or all of these five factors can make speech more dangerous, in the context in which it was made or disseminated:

  • Powerful speaker with influence over the audience most likely to react
  • Audience vulnerable to incitement, e.g. fearful
  • A speech act understood as a call to violence by the audience most likely to react
  • Conducive social and historical context
  • Influential means of dissemination

In studying many cases of dangerous speech, Susan Benesch has noticed three common patterns. These patterns do not automatically make speech dangerous by themselves, but they are often found in speech that is dangerous – and we have found them in Kenyan speech online also. They are:

  • Comparing another group of people to animals or insects, or referring to them with a derogatory term in mother tongue,
  • Suggesting that the audience faces a serious threat of violence from another group (“accusation in a mirror”),
  • Suggesting that some people from another group are spoiling the purity or integrity of the speakers’ group.

There are four ways to deal with dangerous speech. We strongly prefer the first two, since they don’t silence anyone – and they are the ones that the online community can carry out.

  1. Empower the audience to be immune to incitement
  2. Discredit the speaker: “help the audience spot the lie” by educating people to recognize dangerous speech, so that the speaker loses credibility and, with it, power
  3. Punish the speaker (e.g. through the NCIC, the National Cohesion and Integration Commission)
  4. Limit the means of dissemination (take down Facebook/Twitter accounts, etc.)

YOU can do something to discredit the speaker and stop dangerous speech from spreading. How?

When you come across speech online that you suspect might be dangerous, you can:

 1.  “Help the Audience spot the lie”         

Help the speaker lose credibility with the audience by educating people to spot dangerous speech, or by pointing out that the speech is destructive.

 2.  Correct rumors and lies 

Instead of automatically re-tweeting content (which could in fact be a lie), help verify or debunk the story by posting supporting or countering information that can correct rumors or lies.

 3. Report Hate Speech

https://docs.google.com/a/ihub.co.ke/spreadsheet/viewform?formkey=dEVUZk5fUDlJQkpBUUdRbmJBWlQyLXc6MQ

 4. Put out good content

Think carefully about the content you are producing. Instead of talking about whole communities (which can often feel like a personal attack on those communities), focus on the real underlying issues. Do your research and write informed content that shows both sides of the story. Don’t forget that while you have the right to freedom of speech, you must still be responsible and sensible about the content you produce.

5.  Post your commitment against the use of hate online!

Now that you know more about dangerous speech, you have a responsibility to pass that information on! Post your commitment not to use or tolerate hate online so that we can mobilize the online community to take a stand against online hate.

For any questions, please contact umati@ihub.co.ke.


Author : Angela Okune

Angela is Research Lead at iHub. She is keen on growing knowledge on the uptake and utility of ICTs in East Africa. She is also co-lead of Waza Experience, an iHub community initiative aimed at encouraging underprivileged children to explore innovation and entrepreneurship concepts grounded in real-world experience.


2 Comments
  • Muthuri Kinyamu at 20:39 on Tuesday, February 5, 2013

    Isn’t monitoring social media rather a reactionary strategy that is time-consuming and expensive? Set up keywords to monitor, track conversations, identify frequent sources of ‘hate speech’, curate the content or evidence, move on to arrest these perpetrators….

    Due to the viral nature of social media, sticky content spreads very fast. By sticky I mean emotional content that is either positive or negative! Not NEUTRAL! So if the Umati initiative below seeks to deter hate online then they cannot do it! People will share pics of violence while asking others to stop….what that does is we all start preaching peace while preparing for WAR because we can see what’s happening elsewhere.

    I conceptualized a campaign dubbed “I am A SocialPRO” to raise awareness, support public advocacy, and promote responsible and ethical ways of engaging online, as well as education on social media. The campaign sought to promote responsible ways of engaging on social media, sensitize content creators to the legal risks of creating and sharing content that is inflammatory and full of hate, and educate Kenyans on social media etiquette and ethical ways of communicating online for societal good.

    I was thinking maybe we could also focus on public advocacy campaign, have regular meet ups with the social media influencers and other content creators to push this agenda.

    Alternatively, we could launch a digital campaign to push this agenda; that would be the best approach. People need to know the impact of sharing unverified, inflammatory content full of propaganda. We need to ensure people do more to guard against the passage of this hate content. Currently a few of the influential Kenyans on social media could have been paid to further the ambitions of any coalition. So as you monitor these content creators, do not forget that every active social media user influences a community, no matter how small, by the content he/she creates, publishes and shares; therefore everyone can fan and propagate hate speech.

  • Angela Crandall at 16:56 on Thursday, March 14, 2013

    Thank you for your insights, Muthuri. “I am A SocialPRO” sounds very interesting and in line with our “NipeUkweli” campaign, which we started a month ago under the Umati project to focus on raising awareness of the potential impact of speech (both online and offline). We have also hosted two roundtable workshops with the Kenyan blogging community to work with them to leverage their networks and promote good, truthful content (to help counter the polarizing, hateful content).



