Optimizing the Latent Space of Generative Networks
Piotr Bojanowski, Armand Joulin, David Lopez-Paz, Arthur Szlam
arXiv e-Print archive, 2017


Summary by Min Lin 7 years ago

This paper proposes an algorithm named GLO (Generative Latent Optimization). The GLO objective is:

$$\min_{\theta}\frac{1}{N}\sum_{i=1}^N[\min_{z_i}l(g^\theta(z_i),x_i)]$$
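
A minimal sketch of this joint optimization in PyTorch, including the projection of $z$ onto the unit ball mentioned below; the architecture, sizes, optimizer, and data are placeholder assumptions rather than the paper's actual setup:

```python
import torch

# All sizes, the architecture, and the optimizer below are illustrative
# assumptions; the paper's actual setup differs.
N, d = 10_000, 128                          # dataset size, latent dimension
images = torch.randn(N, 3, 32, 32)          # placeholder for real images

generator = torch.nn.Sequential(            # stand-in for g^theta
    torch.nn.Linear(d, 3 * 32 * 32),
    torch.nn.Tanh(),
    torch.nn.Unflatten(1, (3, 32, 32)),
)

# One learnable latent vector z_i per training image.
Z = torch.nn.Parameter(torch.randn(N, d))
opt = torch.optim.SGD(list(generator.parameters()) + [Z], lr=0.1)

for step in range(1_000):
    idx = torch.randint(0, N, (64,))        # minibatch of image indices
    loss = torch.nn.functional.mse_loss(generator(Z[idx]), images[idx])
    opt.zero_grad()
    loss.backward()
    opt.step()                              # updates both theta and the z_i
    with torch.no_grad():                   # project z_i back onto the unit ball
        Z.div_(Z.norm(dim=1, keepdim=True).clamp(min=1.0))
```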

This idea dates back to Dictionary Learning.

It can be viewed as a nonlinear version of dictionary learning, obtained by

  1. replacing the dictionary $D$ with the function $g^{\theta}$,
  2. replacing the sparse code $r$ with the latent vector $z$, and
  3. using the $\ell_2$ loss function.
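
For reference, the classical dictionary-learning objective has the same bilevel structure (this is the standard linear formulation, written out here to make the analogy explicit; the usual sparsity penalty on $r_i$ is omitted):

$$\min_{D}\frac{1}{N}\sum_{i=1}^N\left[\min_{r_i}\|Dr_i-x_i\|_2^2\right]$$

Substituting the nonlinear $g^{\theta}(z_i)$ for the linear $Dr_i$ recovers the GLO objective above.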

Although this lets the generator be learned without the instabilities of the GAN objective, it introduces problems of its own.

With this method, the distribution of the latent vectors $z$ is left uncontrolled. Although the authors project $z$ onto the unit ball whenever it falls outside, there is no guarantee that the trained $z$ remain distributed like the Gaussian from which they were initialized.

As a consequence, not every sampled noise vector maps to a valid image, and linear interpolation between latent codes can be problematic if the support of the marginal $p(z)$ is not a convex set.
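
One possible mitigation, sketched here as a common post-hoc fix rather than anything covered in this summary, is to fit a simple density model such as a full-covariance Gaussian to the trained latents and sample from that instead of the initialization prior:

```python
import numpy as np

# Hypothetical post-hoc fix: fit a full-covariance Gaussian to the trained
# latents and sample from it, rather than from the initial prior.
Z = np.random.randn(10_000, 128)            # placeholder for trained latents

mu = Z.mean(axis=0)                         # empirical mean, shape (d,)
cov = np.cov(Z, rowvar=False)               # empirical covariance, shape (d, d)

# Samples drawn here stay close to the region the generator was trained on,
# though this still does not make the support of p(z) convex.
z_new = np.random.multivariate_normal(mu, cov, size=16)
```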
