Belgium has laws that make discriminatory speech punishable. Yet there is a grey area of statements that are not punishable by law but that can be perceived as discriminatory and thus feed hatred. On behalf of the equal opportunities body Unia, scholars from the Vrije Universiteit Brussel investigated the origin and content of such messages on social media; in parallel, UCLouvain carried out an analysis for the French-speaking part of the country. The research shows that in Flanders such messages are published only by right-wing parties. Their authors choose implicit and suggestive language in order to avoid criticism.
To map the strategies and linguistic characteristics of expressions that lie between opinion and hatred, Prof Martina Temmerman, Prof Roel Coesemans and Dr Raymond Harder analysed Facebook messages and tweets published on the accounts of the main Flemish political parties (Open VLD, N-VA, sp.a, PVDA, Groen, CD&V and Vlaams Belang), their party leaders and two influential party members who are outspoken on the themes that Unia works on.
This corpus of 3,121 Facebook messages and 24,764 tweets from 35 accounts contained 95 Facebook messages and 102 tweets that, although not illegal, could be experienced as discriminatory. The vast majority of these messages were posted by the official party account of Vlaams Belang and its politicians; a small number were posted by N-VA, and none by the other parties. The messages were mainly about religion and origin.
Words carefully weighed and packaged
The researchers also found that the authors of potentially discriminatory messages tended to suggest, rather than explicitly claim, that certain groups in society are a problem or a danger. They do so, for example, by presenting two groups as opposed (insiders vs outsiders), portraying these outsiders negatively on the basis of assumed characteristics, or categorising them as a homogeneous group. In addition, they often use exaggerated language, such as hyperbole or metaphor, and undermine other opinions.
Prof Temmerman gives examples from the analysis: "A designation such as 'scabies, malaria and TB migrants' applies the characteristic 'disease' to an entire group. Moreover, the name also suggests that the group could be contagious to anyone who comes into contact with them. Another example: in a sentence such as 'our girls are increasingly being harassed', a group of 'our girls' is defined in relation to an unspecified other group of girls that is 'not ours'. Or if an article about the Asian hornet is shared on Facebook which states that this invasive alien wasp is a threat to honeybees, we see in the responses that readers interpret the post metaphorically and make comments such as 'I am training my honeybees to be able to defend themselves against this alien Asian hornet'."
Prof Temmerman further explains: "By communicating so indirectly and implicitly, the authors can guard against the criticism that they are crossing certain legal boundaries. For example, they can defend themselves by claiming they have been misunderstood. But the reactions to the messages clearly show that the readers have understood the message as intended. In other words, the authors do not need to be explicit to make their point. In science, we call this phenomenon the 'dog whistle'. We also note that the readers often go a step further than the authors."
The full report can be found here; the report of the parallel UCLouvain analysis for the French-speaking part of the country can be found here as well.