TY - JOUR AU - Esau, Katharina PY - 2021/03/26 Y2 - 2024/03/29 TI - Hate speech (Hate Speech/Incivility) JF - DOCA - Database of Variables for Content Analysis JA - DOCA VL - 1 IS - 5 SE - User-Generated Media Content DO - 10.34778/5a UR - https://www.hope.uzh.ch/doca/article/view/5a SP - AB - <p>The variable <strong>hate speech</strong> is an indicator used to describe communication that expresses and/or promotes hatred towards others (Erjavec &amp; Kovačič, 2012; Rosenfeld, 2012; Ziegele, Koehler, &amp; Weber, 2018). A second element is that <strong>hate speech</strong> is directed against others on the basis of their ethnic or national origin, religion, gender, disability, sexual orientation or political conviction (Erjavec &amp; Kovačič, 2012; Rosenfeld, 2012; Waseem &amp; Hovy, 2016) and typically uses terms to denigrate, degrade and threaten others (Döring &amp; Mohseni, 2020; Gagliardone, Gal, Alves, &amp; Martínez, 2015). <strong>Hate speech</strong> and <strong>incivility</strong> are often used synonymously as hateful speech is considered part of <strong>incivility</strong> (Ziegele et al., 2018).</p><p><em><strong>Field of application/theoretical foundation:</strong></em></p><p><strong>Hate speech</strong> (see also <strong>incivility</strong>) has become an issue of growing concern both in public and academic discourses on user-generated online communication.</p><p><em><strong>References/combination with other methods of data collection:</strong></em></p><p><strong>Hate speech</strong> is examined through content analysis and can be combined with comparative or experimental designs (Muddiman, 2017; Oz, Zheng, &amp; Chen, 2017; Rowe, 2015). 
In addition, content analyses can be accompanied by interviews or surveys, for example to validate the results of the content analysis (Erjavec &amp; Kovačič, 2012).</p><p><em><strong>Example studies:</strong></em></p><p><strong>Research question/research interest: </strong>Previous studies have been interested in the extent of <strong>hate speech</strong> in online communication (e.g. in one specific online discussion, in discussions on a specific topic, or in discussions on a specific platform or across different platforms in comparison) (Döring &amp; Mohseni, 2020; Poole, Giraud, &amp; Quincey, 2020; Waseem &amp; Hovy, 2016).</p><p><strong>Object of analysis: </strong>Previous studies have investigated <strong>hate speech</strong> in user comments, for example on news websites, social media platforms (e.g. Twitter) and social live streaming services (e.g. YouTube, YouNow).</p><p><strong>Level of analysis: </strong>Most manual content analysis studies measure <strong>hate speech</strong> at the level of the message, for example at the level of user comments. At a higher level of analysis, the level of <strong>hate speech</strong> of a whole discussion thread or online platform could be measured or estimated. At a lower level of analysis, <strong>hate speech</strong> can be measured at the level of utterances, sentences or words, which are the preferred levels of analysis in automated content analyses.</p><p>Table 1. 
Previous manual and automated content analysis studies and measures of hate speech</p><div style="overflow-x: auto;"><table><tbody><tr><td class="t"><p><strong>Example study </strong><strong><em>(type of content analysis)</em></strong></p></td><td class="t"><p><strong>Construct</strong></p></td><td class="t"><p><strong>Dimensions/variables</strong></p></td><td class="t"><p><strong>Explanation/<br />example</strong></p></td><td class="t"><p><strong>Reliability</strong></p></td></tr><tr><td class="t" rowspan="10"><p>Waseem &amp; Hovy (2016) <br /><em>(automated content analysis)</em></p></td><td class="t" rowspan="10"><p><strong>hate speech</strong></p></td><td class="t"><p>sexist or racial slur</p></td><td class="t"><p>-</p></td><td class="t"><p>-</p></td></tr><tr><td class="t"><p>attack of a minority</p></td><td class="t"><p>-</p></td><td class="t"><p>-</p></td></tr><tr><td class="t0"><p>silencing of a minority</p></td><td class="t"><p>-</p></td><td class="t"><p>-</p></td></tr><tr><td class="t"><p>criticizing of a minority without argument or straw man argument</p></td><td class="t"><p>-</p></td><td class="t"><p>-</p></td></tr><tr><td class="t"><p>promotion of hate speech or violent crime</p></td><td class="t"><p>-</p></td><td class="t"><p>-</p></td></tr><tr><td class="t"><p>misrepresentation of truth or seeking to distort views on a minority</p></td><td class="t"><p>-</p></td><td class="t"><p>-</p></td></tr><tr><td class="t"><p>problematic hash tags, 
e.g.</p><p>“#BanIslam”, “#whoriental”, “#whitegenocide”</p></td><td class="t"><p>-</p></td><td class="t"><p>-</p></td></tr><tr><td class="t"><p>negative stereotypes of a minority</p></td><td class="t"><p>-</p></td><td class="t"><p><em>-</em></p></td></tr><tr><td class="t"><p>defending xenophobia or sexism</p></td><td class="t"><p>-</p></td><td class="t"><p><em>-</em></p></td></tr><tr><td class="t"><p>user name that is offensive, as per the previous criteria</p></td><td class="t"><p>-</p></td><td class="t"><p><em>-</em></p></td></tr><tr><td class="t"><p> </p></td><td class="t"><p><strong> </strong></p></td><td class="t"><p><strong>hate speech</strong></p></td><td class="t"><p>-</p></td><td class="t"><p class="p1">κ = .84</p></td></tr><tr><td class="t" rowspan="2"><p>Döring &amp; Mohseni (2020) <br /><em>(manual content analysis)</em></p></td><td class="t" rowspan="2"><p><strong>hate speech</strong></p></td><td class="t"><p>explicitly or aggressively sexual hate</p></td><td class="t"><p>e.g. “are you single, and can I lick you?”</p></td><td class="t"><p>κ = .74; <br />PA = .99</p></td></tr><tr><td class="t"><p>racist or sexist hate</p></td><td class="t"><p>e.g. “this is why ignorant whores like you belong in the fucking kitchen”, “oh my god that accent sounds like crappy American”</p></td><td class="t"><p>κ = .66;</p><p>PA = .99</p></td></tr><tr><td class="t b"><p> </p></td><td class="t b"><p><strong> </strong></p></td><td class="t b"><p><strong>hate speech</strong></p></td><td class="t b"><p> </p></td><td class="t b"><p>κ = .70</p></td></tr></tbody></table></div><p><em>Note</em>: Previous studies used different inter-coder reliability statistics; κ = Cohen’s Kappa; PA = percentage agreement.</p><p> </p><p>Further coded variables and their definitions, as used in the study by Döring and Mohseni (2020), are available at: <a href="https://osf.io/da8tw/">https://osf.io/da8tw/</a></p><p> </p><p><strong>References</strong></p><p>Döring, N., &amp; Mohseni, M. R. (2020). 
Gendered hate speech in YouTube and YouNow comments: Results of two content analyses. <em>SCM Studies in Communication and Media</em>, <em>9</em>(1), 62–88. https://doi.org/10.5771/2192-4007-2020-1-62</p><p>Erjavec, K., &amp; Kovačič, M. P. (2012). “You Don't Understand, This is a New War!” Analysis of Hate Speech in News Web Sites' Comments. <em>Mass Communication and Society</em>, <em>15</em>(6), 899–920. https://doi.org/10.1080/15205436.2011.619679</p><p>Gagliardone, I., Gal, D., Alves, T., &amp; Martínez, G. (2015). <em>Countering online hate speech</em>. <em>UNESCO Series on Internet Freedom</em>. Retrieved from http://unesdoc.unesco.org/images/0023/002332/233231e.pdf</p><p>Muddiman, A. (2017). Personal and public levels of political incivility. <em>International Journal of Communication</em>, <em>11</em>, 3182–3202.</p><p>Oz, M., Zheng, P., &amp; Chen, G. M. (2017). Twitter versus Facebook: Comparing incivility, impoliteness, and deliberative attributes. <em>New Media &amp; Society</em>, <em>20</em>(9), 3400–3419. https://doi.org/10.1177/1461444817749516</p><p>Poole, E., Giraud, E. H., &amp; Quincey, E. de (2020). Tactical interventions in online hate speech: The case of #stopIslam. <em>New Media &amp; Society</em>, 146144482090331. https://doi.org/10.1177/1461444820903319</p><p>Rosenfeld, M. (2012). Hate Speech in Constitutional Jurisprudence. In M. Herz &amp; P. Molnar (Eds.), <em>The Content and Context of Hate Speech </em>(pp. 242–289). Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9781139042871.018</p><p>Rowe, I. (2015). Civility 2.0: A comparative analysis of incivility in online political discussion. <em>Information, Communication &amp; Society</em>, <em>18</em>(2), 121–138. https://doi.org/10.1080/1369118X.2014.940365</p><p>Waseem, Z., &amp; Hovy, D. (2016). Hateful Symbols or Hateful People? Predictive Features for Hate Speech Detection on Twitter. In J. Andreas, E. Choi, &amp; A. 
Lazaridou (Chairs), <em>Proceedings of the NAACL Student Research Workshop</em>.</p><p>Ziegele, M., Koehler, C., &amp; Weber, M. (2018). Socially Destructive? Effects of Negative and Hateful User Comments on Readers’ Donation Behavior toward Refugees and Homeless Persons. <em>Journal of Broadcasting &amp; Electronic Media</em>, <em>62</em>(4), 636–653. https://doi.org/10.1080/08838151.2018.1532430</p> ER -