Item type: 文献 / Documents(1)
Release date: 2024-12-20
Access rights: embargoed access
Resource type: journal article
Resource type identifier: http://purl.org/coar/resource_type/c_6501
Publication type: NA
Publication type resource: http://purl.org/coar/version/c_be7fb7dd8ff6fe43
Title (en): Affective knowledge assisted bi-directional learning for Multi-modal Aspect-based Sentiment Analysis
Authors: Shi, Xuefeng; Yang, Ming; Hu, Min; 任, 福継; 康, 鑫; Ding, Weiping
Abstract (en):
As a fine-grained task in the field of Multi-modal Sentiment Analysis (MSA), Multi-modal Aspect-based Sentiment Analysis (MABSA) is challenging; it has attracted considerable research attention, and prominent progress has been achieved in recent years. However, effective strategies for aligning features across modalities are still lacking, and further exploration is needed. This paper therefore proposes a novel MABSA method that enhances sentiment feature alignment, namely Affective Knowledge-Assisted Bi-directional Learning (AKABL) networks, which learn sentiment information from different modalities through multiple perspectives. Specifically, AKABL obtains textual semantic and syntactic features by encoding the text modality with the pre-trained language model BERT and the syntax parser SpaCy, respectively. To strengthen the expression of sentiment information in the syntactic graph, the affective knowledge base SenticNet is introduced to help AKABL comprehend textual sentiment information. On the image side, the pre-trained Vision Transformer (ViT) is employed to extract the necessary image features. To integrate the obtained features, the Single Modality GCN (SMGCN) module is used to produce a joint textual semantic and syntactic representation, and the Double Modalities GCN (DMGCN) module is devised to extract sentiment information from both modalities simultaneously. To close the alignment gap between the text and image features, the paper further devises a novel alignment strategy that relates the two representations by measuring their difference with the Jensen-Shannon divergence from bi-directional perspectives. Cross-attention and cosine distance-based similarity are also applied in the proposed AKABL. To validate the effectiveness of the proposed method, extensive experiments are conducted on two widely used public benchmark datasets; the results show that AKABL clearly improves task performance, outperforming the strongest baseline by 0.47% and 0.72% in accuracy on the two datasets.
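The alignment strategy described in the abstract measures the difference between the text and image representations with the Jensen-Shannon divergence from bi-directional perspectives. Below is a minimal sketch of such a term, assuming PyTorch, softmax-normalized (batch, dim) representations, and a reading of "bi-directional" as two cross-modal views (e.g., a text-attended-to-image view and an image-attended-to-text view); the function names and the way the divergence enters the loss are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def js_divergence(p_feat, q_feat):
        # Turn (batch, dim) feature vectors into probability distributions.
        # The softmax normalization is an assumption; the abstract does not
        # say how the representations are normalized.
        p = F.softmax(p_feat, dim=-1)
        q = F.softmax(q_feat, dim=-1)
        m = 0.5 * (p + q)
        # JSD(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m).
        # F.kl_div expects log-probabilities as its first argument.
        return 0.5 * (F.kl_div(m.log(), p, reduction="batchmean")
                      + F.kl_div(m.log(), q, reduction="batchmean"))

    def bidirectional_alignment_loss(text_view, image_repr, image_view, text_repr):
        # Hypothetical reading of "bi-directional": one divergence term per
        # cross-modal direction, summed into the training objective.
        return (js_divergence(text_view, image_repr)
                + js_divergence(image_view, text_repr))

Since the Jensen-Shannon divergence is itself symmetric, the bi-directional aspect is read here as applying it to two direction-specific representations rather than merely swapping arguments.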
Keywords (en, scheme: Other): Multi-modal Aspect-based Sentiment Analysis; Affective knowledge; Bi-directional learning; Cross-attention; Cosine similarity
Bibliographic information: Computer Speech & Language (en), Vol. 91, p. 101755, issue date 2024-12-10
Source identifiers: ISSN 0885-2308; ISSN 1095-8363; NCID AA10677208; NCID AA11545097
Publisher (en): Elsevier
Note (ja): The full text of the article is scheduled to become available on or after 2026-12-10.
Rights (en): © 2024. This manuscript version is made available under the CC-BY-NC-ND 4.0 license https://creativecommons.org/licenses/by-nc-nd/4.0/
EID: 415558 (identifier type: URI)
Language: eng