The International Conference on Learning Representations (ICLR) took place last week, and I had the pleasure of participating in it. ICLR is an event dedicated to research on all aspects of representation learning, commonly known as deep learning. Due to the coronavirus pandemic, the conference couldn’t take place in Addis Ababa as planned and went virtual instead. That didn’t change the great atmosphere of the event – quite the opposite: it was engaging and interactive, and it attracted an even bigger audience than last year. If you’re interested in what the organizers think about the unusual online arrangement of the conference, you can read about it here.
As an attendee, I was inspired by the presentations from over 1300 speakers and decided to create a series of blog posts summarizing the best papers in four main areas. You can catch up with the first post, on deep learning papers, here, and the second post, on reinforcement learning papers, here.
This brings us to the third post of the series – here are the 7 best generative models papers from ICLR.
Best Generative Models Papers
1. Generative Models for Effective ML on Private, Decentralized Datasets
Generative Models + Federated Learning + Differential Privacy gives data scientists a way to analyze private, decentralized data (e.g., on mobile devices) where direct inspection is prohibited.
(TL;DR, from OpenReview.net)
2. Defending Against Physically Realizable Attacks on Image Classification
Defending Against Physically Realizable Attacks on Image Classification.
(TL;DR, from OpenReview.net)
3. Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets
We identify the security weakness of skip connections in ResNet-like neural networks
(TL;DR, from OpenReview.net)

First author: Dongxian Wu
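The TL;DR points at gradients travelling through skip connections as the source of the weakness. Below is a minimal, illustrative PyTorch sketch of one way this idea can be exploited when crafting adversarial examples: down-weighting the gradient that flows through the residual branch of a block so that the attack’s gradient signal travels mainly via the skip connection. The class name, function name, and decay factor are assumptions for illustration, not the paper’s exact implementation.

```python
import torch

class ScaleGradient(torch.autograd.Function):
    """Identity in the forward pass; scales the gradient by `gamma` in the backward pass."""
    @staticmethod
    def forward(ctx, x, gamma):
        ctx.gamma = gamma
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Down-weight the gradient flowing back through the residual branch.
        return grad_output * ctx.gamma, None

def residual_block_with_gradient_decay(x, residual_fn, gamma=0.5):
    # y = x + f(x), but gradients through f(x) are scaled by gamma < 1,
    # so the skip connection dominates the gradient used by the attack.
    return x + ScaleGradient.apply(residual_fn(x), gamma)
```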
4. Enhancing Adversarial Defense by k-Winners-Take-All
We propose a simple change to existing neural network structures for better defending against gradient-based adversarial attacks, using the k-winners-take-all activation function.
(TL;DR, from OpenReview.net)
First author: Chang Xiao
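To make the idea concrete, here is a minimal PyTorch sketch of what a k-winners-take-all activation might look like: only the k largest activations per sample are kept, and the rest are zeroed out. The class name and sparsity value are illustrative assumptions; the paper’s exact formulation may differ.

```python
import torch
import torch.nn as nn

class KWinnersTakeAll(nn.Module):
    """Keep the k largest activations per sample; zero out the rest."""
    def __init__(self, sparsity=0.1):
        super().__init__()
        self.sparsity = sparsity  # fraction of units kept active (illustrative value)

    def forward(self, x):
        # Flatten all non-batch dimensions and pick the top-k per sample.
        flat = x.flatten(start_dim=1)
        k = max(1, int(self.sparsity * flat.shape[1]))
        topk_vals, _ = flat.topk(k, dim=1)
        threshold = topk_vals[:, -1].unsqueeze(1)  # k-th largest value per sample
        mask = (flat >= threshold).float()
        return (flat * mask).view_as(x)
```

Such an activation could be dropped into an existing network in place of ReLU, which is what makes the defense a “simple change to existing neural network structures”.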
5. Real or Not Real, that is the Question
Generative Adversarial Networks (GANs) have been widely adopted across a range of applications. In the common setup, the discriminator outputs a single scalar value. Here, a novel formulation is proposed in which the discriminator outputs a discrete distribution instead of a scalar.
First author: Yuanbo Xiangli
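As a rough illustration of a discriminator that outputs a distribution rather than a scalar, here is a PyTorch sketch in which the discriminator head produces a discrete distribution over a fixed number of outcomes and is trained to match “anchor” distributions for real and fake samples. The names, the number of outcomes, and the choice of anchors are assumptions, not the paper’s exact setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_OUTCOMES = 10  # number of bins in the discrete "realness" distribution (illustrative)

class DistributionDiscriminatorHead(nn.Module):
    """Discriminator head that outputs a discrete distribution instead of a scalar."""
    def __init__(self, feature_dim):
        super().__init__()
        self.fc = nn.Linear(feature_dim, NUM_OUTCOMES)

    def forward(self, features):
        return F.log_softmax(self.fc(features), dim=1)  # log-probabilities over outcomes

# Illustrative anchor distributions for real and fake samples (an assumption, not the paper's exact choice)
anchor_real = F.softmax(torch.linspace(-1.0, 1.0, NUM_OUTCOMES), dim=0)
anchor_fake = F.softmax(torch.linspace(1.0, -1.0, NUM_OUTCOMES), dim=0)

def discriminator_loss(log_probs_real, log_probs_fake):
    # Match the predicted distributions to the anchors with KL divergence.
    loss_real = F.kl_div(log_probs_real, anchor_real.expand_as(log_probs_real), reduction="batchmean")
    loss_fake = F.kl_div(log_probs_fake, anchor_fake.expand_as(log_probs_fake), reduction="batchmean")
    return loss_real + loss_fake
```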
6. Adversarial Training and Provable Defenses: Bridging the Gap
We propose a novel combination of adversarial training and provable defenses which produces a model with state-of-the-art accuracy and certified robustness on CIFAR-10.
(TL;DR, from OpenReview.net)
7. Optimal Strategies Against Generative Attacks
In the GAN community, defending against generative attacks is a topic of growing importance. Here, the authors formulate the problem formally and examine it in terms of the sample complexity and time budget available to the attacker. The problem concerns the falsification or modification of data for malicious purposes.
Summary
The depth and breadth of the ICLR publications are quite inspiring. This post focuses on the “generative models” topic, which is only one of the areas discussed during the conference. As you can read in this analysis, ICLR covered these main areas:
- Deep learning (here)
- Reinforcement learning (here)
- Generative models (covered in this post)
- Natural Language Processing/Understanding (here)
To create a more complete overview of the top papers at ICLR, we are building a series of posts, each focused on one of the topics mentioned above. This is the third post, so you may want to check out the previous ones as well.
Feel free to share other interesting generative models papers with us. We would be happy to extend our list!