Please use this identifier to cite or link to this item:
http://umt-ir.umt.edu.my:8080/handle/123456789/20680
Title: | MULTIMODAL FAKE NEWS DETECTION |
Authors: | PSNZ |
Keywords: | Fake news detection; Crossmodal attention; Residual network; Convolutional neural network |
Issue Date: | Aug-2024 |
Publisher: | Universiti Malaysia Terengganu |
Abstract: | In recent years, social media has increasingly become one of the most popular ways for people to consume news. Because the proliferation of fake news on social media has negative impacts on individuals and society, automatic fake news detection has been explored by different research communities as a means of combating it. With the development of multimedia technology, a phenomenon that cannot be ignored is that more and more social media news contains information in different modalities, e.g., text, pictures and videos. These multiple modalities provide more evidence about news events and present new opportunities for detecting features of fake news. First, the multimodal fake news detection task faces the challenge of preserving the unique properties of each modality while fusing the relevant information between modalities. Second, for some news, the fusion of information between modalities may produce noise that degrades a model's performance. Unfortunately, existing methods fail to handle these challenges. To address these problems, we propose a multimodal fake news detection framework based on Crossmodal Attention Residual and Multichannel convolutional neural Networks (CARMN). The Crossmodal Attention Residual Network (CARN) selectively extracts the information relevant to a target modality from another source modality while maintaining the unique information of the target modality. The Multichannel Convolutional neural Network (MCN) mitigates the influence of noise that may be generated by the crossmodal fusion component by extracting textual feature representations from the original and the fused textual information simultaneously. We conduct extensive experiments on four real-world datasets and demonstrate that the proposed model outperforms state-of-the-art methods and learns more discriminable feature representations. |
URI: | http://umt-ir.umt.edu.my:8080/handle/123456789/20680 |
Appears in Collections: | SDI UMT 2024 |
Files in This Item:
File | Description | Size | Format
---|---|---|---
8. Multimodal Fake News Detection.pdf | | 37.62 MB | Adobe PDF
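The abstract above describes two architectural pieces: a crossmodal attention block with a residual connection (CARN), which lets a target modality (e.g. text) pull in relevant information from a source modality (e.g. image) without losing its own representation, and a multichannel CNN (MCN) that convolves over both the original and the fused text features so fusion noise cannot dominate. The following PyTorch sketch illustrates that general idea only; the single-head attention formulation, all dimensions, and the class names (CrossmodalAttentionResidual, MultichannelTextCNN, CARMNSketch) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of CARN-style crossmodal attention with a residual connection
# and an MCN-style multichannel text CNN. Shapes and layer choices are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossmodalAttentionResidual(nn.Module):
    """Attend from a target modality to a source modality, then add the attended
    features back to the target so its unique information is preserved."""

    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, target: torch.Tensor, source: torch.Tensor) -> torch.Tensor:
        # target: (batch, target_len, dim), source: (batch, source_len, dim)
        q, k, v = self.query(target), self.key(source), self.value(source)
        scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
        attended = F.softmax(scores, dim=-1) @ v
        # Residual connection: fused features plus the unchanged target.
        return self.norm(target + attended)


class MultichannelTextCNN(nn.Module):
    """Parallel 1-D convolutions over two channels: original text features and
    crossmodally fused text features."""

    def __init__(self, dim: int, num_filters: int = 64, kernel_sizes=(3, 4, 5)):
        super().__init__()
        self.convs = nn.ModuleList(
            # Channel 0 = original text, channel 1 = fused text.
            [nn.Conv2d(2, num_filters, (k, dim)) for k in kernel_sizes]
        )

    def forward(self, original: torch.Tensor, fused: torch.Tensor) -> torch.Tensor:
        x = torch.stack([original, fused], dim=1)   # (batch, 2, seq_len, dim)
        feats = []
        for conv in self.convs:
            c = F.relu(conv(x)).squeeze(-1)         # (batch, filters, seq_len - k + 1)
            feats.append(c.max(dim=-1).values)      # max-over-time pooling
        return torch.cat(feats, dim=-1)             # (batch, filters * len(kernel_sizes))


class CARMNSketch(nn.Module):
    """Text attends to image features, the multichannel CNN reads both text views,
    and a linear head classifies real vs. fake."""

    def __init__(self, dim: int = 256, num_classes: int = 2):
        super().__init__()
        self.carn = CrossmodalAttentionResidual(dim)
        self.mcn = MultichannelTextCNN(dim)
        self.classifier = nn.Linear(64 * 3, num_classes)

    def forward(self, text: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
        fused_text = self.carn(text, image)   # text enriched with image context
        features = self.mcn(text, fused_text)  # original + fused channels
        return self.classifier(features)


if __name__ == "__main__":
    # Toy shapes: 8 samples, 30 text tokens, 49 image regions, 256-dim features.
    text = torch.randn(8, 30, 256)
    image = torch.randn(8, 49, 256)
    print(CARMNSketch()(text, image).shape)  # torch.Size([8, 2])
```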