
Anecdotal evidence might suggest that hate speech by individuals and groups is widespread on social media, but one of the first academic studies to examine the empirical data concludes that these extreme forms of speech on Facebook are marginal compared with total content.

Photo: 'Online hate speech' by Lyn Lomasi, www.flickr.com (CC BY-ND 2.0)

Researchers from the University of Oxford and Addis Ababa University examined thousands of comments made by Ethiopians on Facebook during the four months around Ethiopia's general election in 2015. Hate speech is defined as statements intended to incite others to discriminate or act against individuals or groups on grounds of their ethnicity, nationality, religion or gender. Using a representative sample of total online statements, the researchers found that only a tiny percentage, just 0.7%, could be classed as such. The paper says the findings may have wide implications for the many countries trying to address growing concerns about the role played by social media in promoting radicalisation or violence.

There have been increasing demands for research that can detect and monitor these types of online behaviour, says the report. Yet, until now, very little systematic research has been carried out into how people use social media to whip up hostility against others. The international research team used Ethiopian online conversations as a case study because of the country's distinct language, which meant they could target Ethiopians living in their home country and abroad. This made the task far more controlled and contained than trying to track English-language speakers, for instance.

Read more on the University website