Please use this identifier to cite or link to this item:
http://umt-ir.umt.edu.my:8080/handle/123456789/20680
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | PSNZ | - |
dc.date.accessioned | 2024-08-21T15:25:31Z | - |
dc.date.available | 2024-08-21T15:25:31Z | - |
dc.date.issued | 2024-08 | - |
dc.identifier.uri | http://umt-ir.umt.edu.my:8080/handle/123456789/20680 | - |
dc.description.abstract | In recent years, social media has increasingly become one of the most popular ways for people to consume news. Because the proliferation of fake news on social media harms both individuals and society, automatic fake news detection has been explored by different research communities as a way to combat it. With the development of multimedia technology, a phenomenon that cannot be ignored is that more and more social media news contains information in multiple modalities, e.g., text, images and videos. These multiple modalities provide more evidence about news events and present new opportunities for detecting fake news. This raises two challenges. First, in the multimodal fake news detection task, it is challenging to preserve the unique properties of each modality while fusing the relevant information across modalities. Second, for some news items, fusing information across modalities may introduce noise that degrades the model's performance. Existing methods fail to handle these challenges. To address these problems, we propose a multimodal fake news detection framework based on Crossmodal Attention Residual and Multichannel convolutional neural Networks (CARMN). The Crossmodal Attention Residual Network (CARN) selectively extracts information relevant to a target modality from another source modality while maintaining the unique information of the target modality. The Multichannel Convolutional neural Network (MCN) mitigates the influence of noise that may be generated by the crossmodal fusion component by extracting textual feature representations from the original and the fused textual information simultaneously. We conduct extensive experiments on four real-world datasets and demonstrate that the proposed model outperforms state-of-the-art methods and learns more discriminative feature representations. | en_US |
dc.language.iso | en | en_US |
dc.publisher | Universiti Malaysia Terengganu | en_US |
dc.subject | Fake news detection | en_US |
dc.subject | Crossmodal attention | en_US |
dc.subject | Residual network | en_US |
dc.subject | Convolutional neural network | en_US |
dc.title | MULTIMODAL FAKE NEWS DETECTION | en_US |
dc.type | Article | en_US |
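The abstract names two components: a crossmodal attention mechanism with a residual connection (so the target modality keeps its own information while absorbing relevant source-modality information) and a multichannel CNN that reads the original and the fused textual representations in parallel. The record gives no implementation details, so the following is only a minimal NumPy sketch of those two ideas; all function names, dimensions, and filter shapes are illustrative assumptions, not the authors' actual CARMN code.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def crossmodal_attention_residual(target, source):
    """Attend from the target modality to the source modality, then add
    a residual connection so the target's own features are preserved."""
    d = target.shape[-1]
    scores = target @ source.T / np.sqrt(d)       # (n_t, n_s) relevance
    attended = softmax(scores, axis=-1) @ source  # source info per target token
    return target + attended                      # residual keeps target info

def conv_maxpool(x, filt):
    """Valid 1D convolution of token sequence x (n, d) with one filter
    (k, d), followed by max-over-time pooling to a scalar feature."""
    k = filt.shape[0]
    vals = [float(np.sum(x[i:i + k] * filt)) for i in range(len(x) - k + 1)]
    return max(vals)

def multichannel_cnn(original_text, fused_text, filters):
    """Extract features from the original and the crossmodal-fused text
    channels in parallel, so noise introduced by fusion cannot fully
    displace the clean original-text features."""
    feats = [conv_maxpool(ch, f)
             for ch in (original_text, fused_text)
             for f in filters]
    return np.array(feats)

rng = np.random.default_rng(0)
text = rng.standard_normal((12, 16))    # 12 word vectors, dim 16 (assumed)
image = rng.standard_normal((9, 16))    # 9 image-region vectors, dim 16
filters = [rng.standard_normal((3, 16)) for _ in range(4)]

fused = crossmodal_attention_residual(text, image)   # (12, 16)
features = multichannel_cnn(text, fused, filters)    # 2 channels x 4 filters
print(fused.shape, features.shape)
```

The residual sum is what lets the fused representation fall back on the target modality's own evidence when the attended source information is uninformative, which is the property the abstract attributes to CARN.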
Appears in Collections: | SDI UMT 2024 |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
8. Multimodal Fake News Detection.pdf | | 37.62 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.