
Anecdotal evidence might suggest that hate speech by individuals and groups is widespread on social media, but one of the first academic studies to examine the empirical data concludes that these extreme forms of speech on Facebook are marginal compared with total content.

Online hate speech (image: Lyn Lomasi, CC BY-ND 2.0)

Researchers from the University of Oxford and Addis Ababa University examined thousands of comments made by Ethiopians on Facebook over four months around the time of Ethiopia's general election in 2015. Hate speech is defined as statements intended to incite others to discriminate or act against individuals or groups on grounds of their ethnicity, nationality, religion or gender. Using a representative sample of total online statements, the researchers found that only a tiny percentage, just 0.7%, could be classed as such. The paper says the findings may have wide implications for the many countries trying to address growing concerns about the role played by social media in promoting radicalisation or violence.

There have been increasing demands for research that can detect and monitor these types of online behaviour, says the report. Yet, until now, very little systematic research has been carried out into how people use social media to whip up hostility against others. The international research team used Ethiopian online conversations as a case study because the country's distinct language meant they could target Ethiopians living both in their home country and abroad. This made the task far more controlled and contained than trying to track English-language speakers, for instance.

Read more on the University website