19 October 2022
10:00-16:00 (Spain, UTC+2)
Location: Online
Freedom of speech in democratic societies is critical to guaranteeing and protecting civil and political rights. However, the overwhelming presence and intensity of information in digital settings, especially on social networks, do not necessarily encourage democratic principles and debate. Indeed, they can provoke the opposite effect, as the anonymity and immediacy of much content (fake news, emotional appeals) introduce elements that hinder forms of debate based on an analysis of the issues. All of this contributes to a liquid modernity in which the growing proliferation of content (disinformation or otherwise) hinders coexistence based on democratic values through the dissemination of hateful messages, or of messages that foster intolerance and incitement directed at certain easily identifiable groups (e.g., migrants, the LGTBI community, members of specific religious movements).
In the current digital ecosystem, the most effective communication strategies have become those that spread among a larger number of citizens (users). They increase the weight of messages that appeal to emotions, disseminate personal beliefs, and undermine objective or truthful facts about specific topics, in order to attract larger audiences (which are increasingly closed off) in the various digital communication spaces.
As the borders between the production of news and non-news content become blurred in digital settings, a communication context has arisen that is increasingly prone to exacerbating fears of the foreign (the different) in public opinion. This fosters the viral spread of hate speech toward groups easily defined by their religious beliefs or ethnic and cultural origins, hindering social cohesion, peaceful coexistence, and political stability in democratic states.
Strategies for disseminating disinformation (e.g., astroturfing) used in politics, public relations, and advertising take advantage of the recurring self-positioning of these “apparently anonymous” users as outsiders and anti-system actors to legitimize a “disinformation culture” that favors the polarization of public opinion and hate speech directed against specific groups. The aim is to lay the foundations, starting from digital environments, for certain (ideologically extremist or populist) belief systems associated with social and political groups.
All of this hinders the development of democratic values and freedom of expression. It also undermines the gatekeeping role of the mainstream media amid the increasing prominence of social networks and the hybrid communication system they create. This combination of factors fosters the dissemination of unfounded content that promulgates negative expressions, prejudices, and stereotypes, even though there is no shortage of verified information.
The transformation of the digital ecosystem has lessened the influence of professional and general-interest media on citizen-users enclosed in digital communication spaces dominated by narrative strategies based on personal feelings and beliefs. This decisively influences the construction of public opinion, individually and collectively, and facilitates the normalization of hate speech and coexistence with disinformation content. However, it also prompts a rethinking of the role of the news media as gatekeeper and a debate about new ways to understand and monitor the spread of this kind of content.
This pre-conference looks at how disinformation and hate speech spread in public opinion from different perspectives. It focuses not only on understanding how such content is disseminated and affects general populations, but also on discerning the role assumed by fact-checkers, journalists, and news media in the growing recognition of this kind of content at the local and national levels, and on recognizing different approaches (theoretical and methodological) that help to understand and identify mechanisms and practices for monitoring and controlling the use of disinformation and hate speech from and through social media and digital news media. Contributions may address, among others, the following topics:
– Identifying how journalists’ biases and frames influence their perception of certain phenomena and groups and the preparation of news about them;
– Identifying methods that help to detect hate speech and disinformation content through digital and news media;
– Identifying fact-checkers and project initiatives for detecting hate speech and disinformation content;
– Understanding how hate speech and disinformation content are built and spread in the media;
– Understanding the practices and routines of journalists who disseminate hate speech and disinformation content or help combat its spread in public opinion;
– Studying roles and stereotypes disseminated through the news that underpin hateful attitudes towards certain groups;
– Identifying the weaknesses of the media industry in managing hate speech and disinformation content on a massive scale;
– Identifying successful monitoring and control strategies in the media to combat hate speech;
– Understanding the narrative structure of hate speech and disinformation content on social media and digital news media;
– Understanding the role assumed by local media and journalism in the spread of hate speech and disinformation content on the internet.
All abstracts will be reviewed by the pre-conference organizing committee:
– Prof. Dr. Elias Said-Hung, Universidad Internacional de La Rioja (Spain)
– Prof. Marta Sánchez-Esparza, Rey Juan Carlos University (Spain)
– Prof. Pedro Jerónimo, Beira Interior University (Portugal)
– Prof. Julio Montero, Universidad Internacional de La Rioja (Spain)
Notice of acceptance will be given by September 9, 2022.
Hatemedia Project (PID2020-114584GB-I00), funded by the Spanish Ministry of Science and Innovation – https://www.hatemedia.es/
MediaTrust.Lab Project (PTDC/COM-JOR/386/2020), funded by the Portuguese Foundation for Science and Technology – https://mediatrust.ubi.pt/