Objective: Replace the standard GAN loss with a softmax cross-entropy loss to stabilize GAN training.
Dataset: CelebA
Inner workings:
The approach is linked to recent work such as WGAN and Loss-Sensitive GAN, which focuses on objective functions with non-vanishing gradients to avoid the situation where the discriminator D becomes too good and the generator's gradient vanishes.
Thus they first introduce two targets, one for the discriminator D and one for the generator G. Writing $B = B^+ \cup B^-$ for a training batch made of real samples $B^+$ and generated samples $B^-$, the target for D puts all probability mass uniformly on the real samples, while the target for G spreads it uniformly over the whole batch:

$$t_D(x) = \begin{cases} \frac{1}{|B^+|} & \text{if } x \in B^+ \\ 0 & \text{if } x \in B^- \end{cases} \qquad\qquad t_G(x) = \frac{1}{|B|} \quad \forall x \in B$$
Per-sample probabilities come from a softmax over the whole batch, $p(x) = e^{-D(x)} / \sum_{x' \in B} e^{-D(x')}$, and the two new losses are the cross-entropies $-\sum_{x \in B} t(x) \ln p(x)$ between these probabilities and the targets above:

$$L_D = \sum_{x \in B^+} \frac{D(x)}{|B^+|} + \ln \sum_{x' \in B} e^{-D(x')}$$

$$L_G = \sum_{x \in B^+} \frac{D(x)}{|B|} + \sum_{x \in B^-} \frac{D(x)}{|B|} + \ln \sum_{x' \in B} e^{-D(x')}$$
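A minimal PyTorch sketch of these two losses, assuming D outputs a raw scalar score per sample (the function name and the logsumexp formulation are mine, not the paper's):

```python
import torch

def softmax_gan_losses(d_real, d_fake):
    """Softmax GAN losses from raw discriminator scores on one batch.

    d_real: 1-D tensor of D(x) over the real samples B+
    d_fake: 1-D tensor of D(x) over the generated samples B-
    """
    d_all = torch.cat([d_real, d_fake])      # scores over the whole batch B
    log_z = torch.logsumexp(-d_all, dim=0)   # ln sum_{x' in B} exp(-D(x'))

    # L_D: cross-entropy against the target putting all mass on real samples
    loss_d = d_real.mean() + log_z
    # L_G: cross-entropy against the uniform target over the whole batch
    loss_g = d_all.mean() + log_z
    return loss_d, loss_g
```

D is trained to minimize `loss_d` and G to minimize `loss_g`; the shared log-partition term is what couples all samples in the batch.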
Architecture:
They use the DCGAN architecture, changing only the loss, and remove batch normalization and the other empirical techniques commonly used to stabilize training.
They show that the Softmax GAN still trains robustly even without these tricks.
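A toy end-to-end training step under this setup, with tiny fully connected stand-ins instead of the actual DCGAN networks (all module shapes, learning rates, and batch sizes here are illustrative assumptions):

```python
import torch
from torch import nn

# Tiny stand-ins for the DCGAN generator/discriminator, without batch norm.
netG = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))
netD = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(netG.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(netD.parameters(), lr=2e-4)

real = torch.randn(8, 32)           # placeholder batch of real samples B+
fake = netG(torch.randn(8, 16))     # generated batch B-

# Discriminator step: minimize L_D over the joint batch B = B+ u B-,
# reusing softmax_gan_losses() from the sketch above.
loss_d, _ = softmax_gan_losses(netD(real).squeeze(1),
                               netD(fake.detach()).squeeze(1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: minimize L_G; gradients flow through D into G.
_, loss_g = softmax_gan_losses(netD(real).squeeze(1),
                               netD(fake).squeeze(1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```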