Unrolled Generative Adversarial Networks, NIPS 2016 | Luke Metz, Google Brain
Preserve Knowledge

Published on Aug 30, 2017

Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein

https://arxiv.org/abs/1611.02163

Spotlight talk at the NIPS 2016 Workshop on Adversarial Training

We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator. This allows training to be adjusted between using the optimal discriminator in the generator's objective, which is ideal but infeasible in practice, and using the current value of the discriminator, which is often unstable and leads to poor solutions. We show how this technique solves the common problem of mode collapse, stabilizes training of GANs with complex recurrent generators, and increases diversity and coverage of the data distribution by the generator.
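To make the idea concrete, here is a minimal JAX sketch of a generator update taken through K unrolled discriminator ascent steps. The flat parameter vectors, the placeholder value function f, the step size eta, and the helper names (unrolled_surrogate, train_step) are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of unrolled GAN training, assuming flat parameter vectors
# and a placeholder value function f(theta_G, theta_D). Names like
# `unrolled_surrogate` and `train_step`, and the step size `eta`, are
# illustrative, not taken from the paper's code.
import jax
import jax.numpy as jnp

def f(theta_G, theta_D):
    # Placeholder for the GAN value function
    # E[log D(x)] + E[log(1 - D(G(z)))]; swap in real networks here.
    return jnp.sum(theta_D * jnp.tanh(theta_G))

def unrolled_surrogate(theta_G, theta_D, K=5, eta=1e-2):
    # f_K: the value of f after K gradient-ascent steps on the
    # discriminator parameters, kept differentiable w.r.t. theta_G.
    theta_D_k = theta_D
    for _ in range(K):
        theta_D_k = theta_D_k + eta * jax.grad(f, argnums=1)(theta_G, theta_D_k)
    return f(theta_G, theta_D_k)

def train_step(theta_G, theta_D, K=5, eta=1e-2):
    # Generator descends the unrolled objective f_K, so its gradient
    # flows back through the K discriminator updates ...
    g_grad = jax.grad(unrolled_surrogate, argnums=0)(theta_G, theta_D, K, eta)
    theta_G = theta_G - eta * g_grad
    # ... while the discriminator takes an ordinary single ascent step on f
    # (the unrolled steps are never applied to its stored parameters).
    d_grad = jax.grad(f, argnums=1)(theta_G, theta_D)
    return theta_G, theta_D + eta * d_grad

# Toy usage: two small parameter vectors, a few alternating updates.
theta_G, theta_D = jnp.ones(4), jnp.zeros(4)
for _ in range(3):
    theta_G, theta_D = train_step(theta_G, theta_D)
```

Setting K = 0 recovers standard alternating GAN training; increasing K moves the generator's objective toward one defined against an approximately optimal discriminator, which is how the method interpolates between the two regimes described in the abstract.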
