Softmax GAN
Min Lin
arXiv e-Print archive, 2017
Keywords: cs.LG, cs.NE

Summary by Léo Paillier

Objective: Replace the usual GAN loss with a softmax cross-entropy loss to stabilize GAN training.

Dataset: CelebA

Inner workings:

This work is linked to recent approaches such as WGAN or Loss-Sensitive GAN, which focus on objective functions with non-vanishing gradients, avoiding the situation where the discriminator D becomes too good and the generator's gradient vanishes.

Thus they first introduce two target distributions over a batch $B = B_+ \cup B_-$ of real samples $B_+$ and generated samples $B_-$, one for the discriminator D and one for the generator G:

$$t_D(x) = \begin{cases} \dfrac{1}{|B_+|} & \text{if } x \in B_+ \\ 0 & \text{if } x \in B_- \end{cases}$$

$$t_G(x) = \dfrac{1}{|B|} \quad \text{for all } x \in B$$

And then the two new losses, the cross-entropies between these targets and the softmax over the batch, $p(x) = e^{-D(x)} / Z_B$ with $Z_B = \sum_{x' \in B} e^{-D(x')}$:

$$L_D = \sum_{x \in B_+} \frac{D(x)}{|B_+|} + \ln Z_B$$

$$L_G = \sum_{x \in B_+} \frac{D(x)}{2|B_+|} + \sum_{x \in B_-} \frac{D(x)}{2|B_-|} + \ln Z_B$$
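
A minimal PyTorch sketch of these two losses (the function name is mine, and `logsumexp` is used for numerical stability; it assumes equal-sized real and generated halves, so the $2|B_+|$ and $2|B_-|$ coefficients reduce to halved means):

```python
import torch

def softmax_gan_losses(d_real, d_fake):
    """Softmax GAN losses as written above (assumes |B+| == |B-|).

    d_real, d_fake: raw discriminator outputs D(x) on the real and
    generated halves of the batch, each a 1-D tensor.
    """
    # ln Z_B, where Z_B = sum over the whole batch of exp(-D(x));
    # logsumexp computes it in a numerically stable way.
    log_z = torch.logsumexp(torch.cat([-d_real, -d_fake]), dim=0)
    # L_D: the target puts mass 1/|B+| on each real sample, 0 on fakes.
    loss_d = d_real.mean() + log_z
    # L_G: the target is uniform (1/|B|) over the whole batch.
    loss_g = 0.5 * (d_real.mean() + d_fake.mean()) + log_z
    return loss_d, loss_g
```

Note that both players minimize their own cross-entropy: minimizing $L_D$ pushes all the softmax probability mass onto the real samples, while minimizing $L_G$ spreads it uniformly over the whole batch.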

Architecture:

They use the DCGAN architecture, simply swapping in the new loss and removing batch normalization and the other empirical tricks usually needed to stabilize training.
They show that Softmax GAN still trains robustly without them; the sketch below illustrates how the losses would drive the alternating updates.
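
A sketch of one training step using the losses above (the `netD`/`netG` modules, optimizers, and latent shape are assumed DCGAN-style placeholders, not details from the paper):

```python
import torch

def train_step(netD, netG, real, opt_d, opt_g, z_dim=100):
    """One alternating update; both players minimize their own loss."""
    n = real.size(0)

    # --- Discriminator update: minimize L_D ---
    fake = netG(torch.randn(n, z_dim, 1, 1)).detach()  # no gradient into G
    loss_d, _ = softmax_gan_losses(netD(real).flatten(),
                                   netD(fake).flatten())
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # --- Generator update: minimize L_G (gradient flows through netG) ---
    fake = netG(torch.randn(n, z_dim, 1, 1))
    _, loss_g = softmax_gan_losses(netD(real).flatten(),
                                   netD(fake).flatten())
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```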
