In recent years, digital wildfires, i.e. fast-spreading online misinformation, have been identified as a considerable risk to developed societies, raising the need for strategies to mitigate that risk. However, the speed at which online information spreads today, combined with its immense volume, makes human monitoring of the Internet infeasible, which gives rise to the need for an automated system.
Our project aims to develop improved prevention and preparedness techniques to counteract this type of misinformation. While several approaches have been developed in the recent past, almost all of them attack the problem purely from the technical side, generally using machine-learning techniques. Our approach differs in that we study the problem from both sides: the technical one, but also the human one, by performing experiments and interviews aimed at understanding how people assess trustworthiness online, which content is likely to spread widely, and why actors spread misinformation.
The five main objectives of UMOD are:
- Develop a computer program capable of detecting the topic of online news articles and the relationships between them.
- Perform experiments on how people assess the truthfulness of news items, and interview journalists on current fact-checking practices.
- Develop algorithms capable of analysing how news propagates from the original source.
- Analyse, using discourse analysis, the content and agendas of the misinformation discovered by the automated system, and assess its threat potential.
- Formulate detailed recommendations on how to best prepare for digital wildfires, based on the knowledge gathered in the four preceding points, and train the system to detect harmful misinformation early.
The overall objective is the prevention of digital wildfires via automated early warnings from the system, as well as enhanced preparedness for such events through intensive study of how such wildfires spread and of the reasons underlying the phenomenon.