Item type | Documents (1)
Publication date | 2021-09-30
Access rights | open access
Resource type identifier | http://purl.org/coar/resource_type/c_6501
Resource type | journal article
Publisher DOI
  Identifier type | DOI
  Related identifier | https://doi.org/10.1109/ACCESS.2020.3021531
  Language | ja
  Related name | 10.1109/ACCESS.2020.3021531
Publication type | VoR
Publication type resource | http://purl.org/coar/version/c_970fb48d4fbd8a85
Title | LAUN Improved StarGAN for Facial Emotion Recognition
Title language | en
Authors | Wang, Xiaohua; Gong, Jianqiao; Hu, Min; Gu, Yu; Ren, Fuji
Abstract (Description type: Abstract; Language: en)
In the field of facial expression recognition, deep learning is extensively used. However, insufficient and unbalanced facial training data in available public databases is a major challenge for improving the expression recognition rate. Generative Adversarial Networks (GANs) can produce more one-to-one faces with different expressions, which can be used to enhance databases. StarGAN can perform one-to-many translations for multiple expressions. Compared with original GANs, StarGAN can increase the efficiency of sample generation. Nevertheless, there are some defects in essential areas of the generated face, such as the mouth and the fuzzy side face image generation. To address these limitations, we improved StarGAN to alleviate the defects of images generation by modifying the reconstruction loss and adding the Contextual loss. Meanwhile, we added the Attention U-Net to StarGAN's generator, replacing StarGAN's original generator. Therefore, we proposed the Contextual loss and Attention U-Net (LAUN) improved StarGAN. The U-shape structure and skip connection in Attention U-Net can effectively integrate the details and semantic features of images. The network's attention structure can pay attention to the essential areas of the human face. The experimental results demonstrate that the improved model can alleviate some flaws in the face generated by the original StarGAN. Therefore, it can generate person images with better quality with different poses and expressions. The experiments were conducted on the Karolinska Directed Emotional Faces database, and the accuracy of facial expression recognition is 95.97%, 2.19% higher than that by using StarGAN. Meanwhile, the experiments were carried out on the MMI Facial Expression Database, and the accuracy of expression is 98.30%, 1.21% higher than that by using StarGAN. Moreover, experiment results have better performance based on the LAUN improved StarGAN enhanced databases than those without enhancement.
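The abstract above describes adding a Contextual loss to StarGAN's training objective. As a rough illustration only (not the authors' code, and with an assumed bandwidth parameter `h`), a minimal NumPy sketch of a contextual-similarity loss in the style of Mechrez et al., which compares two sets of feature vectors without requiring spatial alignment, might look like:

```python
import numpy as np

def contextual_loss(x, y, h=0.5, eps=1e-5):
    """Illustrative contextual loss sketch (bandwidth h is an assumption).

    x, y: (N, d) arrays of feature vectors, e.g. from generated and
    target images. Returns -log(CX); the value is low when every
    feature in y has a close match somewhere in x.
    """
    # Center both feature sets on the target's mean, then L2-normalize.
    mu = y.mean(axis=0, keepdims=True)
    xc, yc = x - mu, y - mu
    xn = xc / (np.linalg.norm(xc, axis=1, keepdims=True) + eps)
    yn = yc / (np.linalg.norm(yc, axis=1, keepdims=True) + eps)

    # Cosine distance between every (target, source) feature pair.
    d = 1.0 - yn @ xn.T                          # shape (N_y, N_x)
    # Normalize each row by its smallest distance (relative distances).
    d_tilde = d / (d.min(axis=1, keepdims=True) + eps)
    # Convert distances to similarities with an exponential kernel.
    w = np.exp((1.0 - d_tilde) / h)
    cx_ij = w / w.sum(axis=1, keepdims=True)
    # CX: best match per target feature, averaged over targets.
    cx = cx_ij.max(axis=1).mean()
    return -np.log(cx + eps)
```

Identical feature sets drive the loss toward its minimum, while unrelated sets keep it positive; in a full LAUN-style objective this term would be weighted and summed with the adversarial, classification, and reconstruction losses.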
Keywords (Language: en; Subject scheme: Other) | Facial expression recognition; data enhancement; generative adversarial networks; self-attention
Bibliographic information | IEEE Access (en), vol. 8, pp. 161509-161518, published 2020-09-03
Journal identifier | ISSN 2169-3536
Publisher | IEEE (Language: en)
Rights information (Language: en) | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/
EID | 371481 (Identifier type: URI)
Language | eng