iTrust-Project

Linguistically Analysing Polarisation on Social Media

ABSTRACT

Polarisation of society is a key threat arising from the shift of our communication to social media. It is often stressed that online technologies, such as recommendation algorithms or friend clustering, encourage and escalate divisions into polarised (extreme and isolated) groups, that is, into in-groups (‘us’, ‘the good guys’, ‘pals’) and out-groups (‘them’, ‘the bad guys’, ‘the evil ones’). While psychology and sociology have devoted considerable attention to the causes and effects of polarisation, less work has examined its linguistic manifestations. Even though hate speech, understood as offensive and emotional language and extensively studied in computational linguistics, often overlaps with polarisation, the two are not the same phenomenon: I can be vulgar or emotional towards my friends, and I can be perfectly polite and cold with my enemies. In this report, we take the first step towards directly addressing the question: how do we use language when we are polarised, that is, what are the features of polarising and polarised language? The document is the result of a workshop organised in March 2023 by the iTRUST project “Interventions against polarisation in society for TRUSTworthy social media: From diagnosis to therapy”. The event was structured around flash talks, a panel discussion and working groups, bringing together the iTRUST team and collaborators who contributed approaches from various areas of philosophy, linguistics, psychology, sociology, media studies and computer science.

REFERENCES

Ewelina Gajewska, Katarzyna Budzynska, Barbara Konat, Marcin Koszowy, Eds. (2023) Linguistically Analysing Polarisation on Social Media, The New Ethos Reports, vol. 1, Warsaw, Poland: Warsaw University of Technology, pp. 1-32, DOI 10.17388/WUT.2023.0001.AINS.
