Instance Normalization: The Missing Ingredient for Fast Stylization
Dmitry Ulyanov and Andrea Vedaldi and Victor Lempitsky
arXiv e-Print archive - 2016 via Local arXiv
Keywords: cs.CV

Summary by Alexander Jung 7 years ago
  • Style transfer, in its original form, works by iteratively modifying a content image so that its style matches the style of a chosen style image more and more closely.
  • That iterative process is very slow.
  • Alternatively, one can train a single feed-forward generator network to apply a style in one forward pass. The network is trained on a dataset of input images and their stylized versions (stylized versions can be generated using the iterative approach).
  • So far, these generator networks have been much faster than the iterative approach, but their image quality has been lower.
  • They describe a simple change to these generator networks to increase the image quality (up to the same level as the iterative approach).
How
  • In the generator networks, they simply replace all batch normalization layers with instance normalization layers.
  • Batch normalization computes its normalization statistics over the whole batch, while instance normalization normalizes each feature map of each image on its own (see the sketch after this list).
  • Equations
    • Let $T$ be the batch size, $H$ the height and $W$ the width of a feature map, and let $x_{tijk}$ denote the element at spatial position $(j, k)$ of the $i$-th feature map of the $t$-th image in the batch.
    • Batch Normalization:
      • $\mu_i = \frac{1}{HWT} \sum_{t=1}^{T} \sum_{l=1}^{W} \sum_{m=1}^{H} x_{tilm}, \qquad \sigma_i^2 = \frac{1}{HWT} \sum_{t=1}^{T} \sum_{l=1}^{W} \sum_{m=1}^{H} (x_{tilm} - \mu_i)^2$
      • $y_{tijk} = \dfrac{x_{tijk} - \mu_i}{\sqrt{\sigma_i^2 + \epsilon}}$
    • Instance Normalization:
      • $\mu_{ti} = \frac{1}{HW} \sum_{l=1}^{W} \sum_{m=1}^{H} x_{tilm}, \qquad \sigma_{ti}^2 = \frac{1}{HW} \sum_{l=1}^{W} \sum_{m=1}^{H} (x_{tilm} - \mu_{ti})^2$
      • $y_{tijk} = \dfrac{x_{tijk} - \mu_{ti}}{\sqrt{\sigma_{ti}^2 + \epsilon}}$
  • They apply instance normalization at test time too (identically).
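
A minimal sketch of the two normalizations defined above (my own illustration in NumPy, assuming an input tensor of shape (T, C, H, W); the learned scale and shift applied after normalization in practice are omitted):

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # One mean/variance per channel, shared across the whole batch.
    mu = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def instance_norm(x, eps=1e-5):
    # One mean/variance per channel of each individual image:
    # every feature map is normalized on its own, independent of the batch.
    mu = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

x = np.random.randn(4, 3, 32, 32)  # (T, C, H, W)
y = instance_norm(x)               # same shape as x, each feature map normalized on its own
```
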
Results
  • Same image quality as the iterative approach, at a fraction of the runtime.
  • One content image with two different styles using their approach: [example figure]
Summary by David Stutz 5 years ago

In the context of stylization, Ulyanov et al. propose to use instance normalization instead of batch normalization. In detail, instance normalization does not compute the mean and standard deviation used for normalization over the current mini-batch during training; instead, these statistics are computed per instance, individually for each sample. This also has the benefit that training and testing use the same procedure, i.e. normalization behaves identically in both cases, in contrast to batch normalization.
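
To make the train/test point concrete, here is a small sketch (assuming a PyTorch-style implementation, which the summary itself does not use): batch normalization switches to its running statistics in evaluation mode, while instance normalization keeps computing statistics from each sample, so it behaves the same during training and testing.

```python
import torch
import torch.nn as nn

x = torch.randn(8, 64, 32, 32)   # (batch, channels, H, W)

bn = nn.BatchNorm2d(64)          # tracks running statistics for test time
inorm = nn.InstanceNorm2d(64)    # per-sample statistics, no batch dependence

bn.eval()
inorm.eval()

# In eval mode, BatchNorm normalizes with its stored running estimates,
# whereas InstanceNorm still normalizes each sample with its own mean and
# variance, exactly as it does during training.
y_bn = bn(x)
y_in = inorm(x)
```
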

Also find this summary at davidstutz.de.
