Perceptual Face Completion Using Self-Attention Generative Adversarial Networks
DOI: https://doi.org/10.61841/w0gbkv62
Keywords: Attention Mechanism, Image Completion, Semantic Completion, Neural Network, Computer Vision
Abstract
This paper proposes a method based on self-attention generative adversarial networks (SAGAN) for image completion in which the completed images are both globally and locally consistent. Using self-attention GANs with contextual and other constraints, the generator can draw realistic images in which fine details are generated inside the damaged region and coordinated semantically with the whole image. To train a consistent generator, i.e. the image completion network, we use global and local discriminators: the global discriminator is responsible for evaluating the consistency of the whole image, while the local discriminator assesses local consistency by analyzing only the local areas containing the completed regions. Finally, an attentive recurrent neural block is introduced to produce an attention map for the missing part of the image, which helps the subsequent completion network fill in content better. Comparing experimental results of various approaches on the CelebA data set, our technique shows relatively good results. Traditional convolutional GANs generate high-resolution details as a function of only spatially local points in lower-resolution feature maps; in SAGAN, details can be generated using cues from all feature locations, and the discriminator can verify that highly detailed features in distant parts of the image are consistent with one another. Moreover, recent work has shown that generator conditioning affects GAN performance. Leveraging this insight, we apply spectral normalization to the GAN generator and find that this improves training dynamics.
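
To make the self-attention mechanism concrete, the following is a minimal sketch of a SAGAN-style self-attention layer in PyTorch. It illustrates the idea described above, namely that every output location attends over all spatial locations of the feature map; the module name, the 1/8 channel reduction for the query/key projections, and the zero-initialized residual gate are common SAGAN conventions assumed for illustration, not details confirmed by this paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """SAGAN-style self-attention over spatial feature maps.

    Each output location attends to every spatial location of the
    input, so details can be generated from cues anywhere in the map,
    not only from a local convolutional neighborhood.
    """
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions project features into query/key/value spaces;
        # the 1/8 channel reduction follows the usual SAGAN convention.
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        # Learnable gate, initialized to 0 so the block starts as identity.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        n = h * w
        q = self.query(x).view(b, -1, n).permute(0, 2, 1)  # (b, n, c/8)
        k = self.key(x).view(b, -1, n)                     # (b, c/8, n)
        attn = F.softmax(torch.bmm(q, k), dim=-1)          # (b, n, n)
        v = self.value(x).view(b, c, n)                    # (b, c, n)
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x                        # gated residual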
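
The global/local training scheme can likewise be sketched as a pair of adversarial losses. The hinge formulation, the square crop described by mask_box, and the weight alpha below are illustrative assumptions, not the paper's exact losses.

import torch
import torch.nn.functional as F

def discriminator_losses(global_d, local_d, real, completed, mask_box, alpha=1.0):
    """Hinge adversarial losses for a global and a local discriminator.

    global_d scores whole images; local_d scores only the patch around
    the completed region, given by mask_box = (top, left, size).
    """
    t, l, s = mask_box
    real_patch = real[:, :, t:t + s, l:l + s]
    fake_patch = completed[:, :, t:t + s, l:l + s]

    # Global discriminator: consistency of the whole image.
    d_global = (F.relu(1.0 - global_d(real)).mean()
                + F.relu(1.0 + global_d(completed.detach())).mean())
    # Local discriminator: consistency of the completed region only.
    d_local = (F.relu(1.0 - local_d(real_patch)).mean()
               + F.relu(1.0 + local_d(fake_patch.detach())).mean())

    # Generator tries to fool both discriminators.
    g_adv = -(global_d(completed).mean() + alpha * local_d(fake_patch).mean())
    return d_global + d_local, g_adv

In a typical training loop the first return value updates both discriminators, while g_adv is added to the generator's reconstruction objective.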
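
Finally, spectral normalization is available out of the box in PyTorch as torch.nn.utils.spectral_norm. A minimal sketch of applying it to every convolution in the generator, as described above, might look like this (the apply_sn helper is an assumed utility, not code from the paper).

import torch.nn as nn
from torch.nn.utils import spectral_norm

def apply_sn(module: nn.Module) -> nn.Module:
    """Wrap every Conv2d/ConvTranspose2d/Linear layer in `module` with
    spectral normalization, constraining each layer's spectral norm to 1
    to stabilize GAN training dynamics."""
    for name, child in module.named_children():
        if isinstance(child, (nn.Conv2d, nn.ConvTranspose2d, nn.Linear)):
            setattr(module, name, spectral_norm(child))
        else:
            apply_sn(child)  # recurse into nested blocks
    return module

A typical use is generator = apply_sn(generator) before constructing the optimizer; each wrapped layer then divides its weight by a power-iteration estimate of its largest singular value on every forward pass.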
License

This work is licensed under a Creative Commons Attribution 4.0 International License.