EPFL Accepted papers at ICML 2020

ICML 2020 poster image https://icml.cc/Conferences/2020


23 EPFL papers have been accepted to the Thirty-seventh International Conference on Machine Learning (ICML 2020), which will take place in July 2020. Due to the COVID-19 pandemic, this edition will exceptionally be held virtually.
We take this opportunity to congratulate all the authors of the accepted EPFL papers.
If you are taking part in ICML 2020, we hope to meet you, this time over Zoom, and discuss!

List of accepted EPFL papers:

LazyIter: A Fast Algorithm for Counting Markov Equivalent DAGs and Designing Experiments
Ali AhmadiTeshnizi (Sharif University of Technology) · Saber Salehkaleybar (Sharif University of Technology) · Negar Kiyavash (École Polytechnique Fédérale de Lausanne)

Characterizing Distribution Equivalence and Structure Learning for Cyclic and Acyclic Directed Graphs
AmirEmad Ghassami (University of Illinois at Urbana-Champaign) · Alan Yang (University of Illinois at Urbana-Champaign) · Negar Kiyavash (École Polytechnique Fédérale de Lausanne) · Kun Zhang (Carnegie Mellon University)

Dissecting Non-Vacuous Generalization Bounds based on the Mean-Field Approximation
Konstantinos Pitas (École Polytechnique Fédérale de Lausanne)

Online metric algorithms with untrusted predictions
Antonios Antoniadis (MPII) · Christian Coester (Centrum Wiskunde & Informatica) · Marek Elias (École polytechnique fédérale de Lausanne) · Adam Polak (Jagiellonian University) · Bertrand Simon (University of Bremen)

Reliable Fidelity and Diversity Metrics for Generative Models
Muhammad Ferjad Naeem (Technical University of Munich) · Seong Joon Oh (Clova AI Research, NAVER Corp.) · Yunjey Choi (Clova AI Research, NAVER Corp.) · Youngjung Uh (Clova AI Research, NAVER Corp.) · Jaejun Yoo (EPFL)

SCAFFOLD: Stochastic Controlled Averaging for Federated Learning
Sai Praneeth Reddy Karimireddy (EPFL) · Satyen Kale (Google) · Mehryar Mohri (Google Research and Courant Institute of Mathematical Sciences) · Sashank Jakkam Reddi (Google) · Sebastian Stich (EPFL) · Ananda Theertha Suresh (Google Research)

Conditional gradient methods for stochastically constrained convex minimization
Maria-Luiza Vladarean (EPFL) · Ahmet Alacaoglu (EPFL) · Ya-Ping Hsieh (EPFL) · Volkan Cevher (EPFL)

Random extrapolation for primal-dual coordinate descent
Ahmet Alacaoglu (EPFL) · Olivier Fercoq (Telecom Paris) · Volkan Cevher (EPFL)

Scalable and Efficient Comparison-based Search without Features
Daniyar Chumbalov (EPFL) · Lucas Maystre (Spotify) · Matthias Grossglauser (EPFL)

Double-Loop Unadjusted Langevin Algorithm
Paul Rolland (École Polytechnique Fédérale de Lausanne) · Armin Eftekhari (Umeå University) · Ali Kavis (EPFL) · Volkan Cevher (EPFL)

Adaptive Gradient Descent without Descent
Konstantin Mishchenko (King Abdullah University of Science & Technology (KAUST)) · Yura Malitsky (EPFL)

Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention
Angelos Katharopoulos (Idiap & EPFL) · Apoorv Vyas (Idiap Research Institute) · Nikolaos Pappas (University of Washington) · Francois Fleuret (Idiap Research Institute)

Implicit Regularization of Random Feature Models
Arthur Jacot (EPFL) · Berfin Simsek (EPFL) · Francesco Spadaro (EPFL) · Clement Hongler (EPFL) · Franck Gabriel (EPFL)

Efficient proximal mapping of the path-norm regularizer of shallow networks
Fabian Latorre (EPFL) · Paul Rolland (École Polytechnique Fédérale de Lausanne) · Nadav Hallak (EPFL) · Volkan Cevher (EPFL)

A new regret analysis for Adam-type algorithms
Ahmet Alacaoglu (EPFL) · Yura Malitsky (EPFL) · Panayotis Mertikopoulos (CNRS) · Volkan Cevher (EPFL)

A Unified Theory of Decentralized SGD with Changing Topology and Local Updates
Anastasiia Koloskova (EPFL) · Nicolas Loizou (Mila, Université de Montréal) · Sadra Boreiri (EPFL) · Martin Jaggi (EPFL) · Sebastian Stich (EPFL)

Bayesian Differential Privacy for Machine Learning
Aleksei Triastcyn (EPFL) · Boi Faltings (EPFL)

Extrapolation for Large-batch Training in Deep Learning
Tao Lin (EPFL) · Lingjing Kong (EPFL) · Sebastian Stich (EPFL) · Martin Jaggi (EPFL)

Optimizer Benchmarking Needs to Account for Hyperparameter Tuning
Prabhu Teja Sivaprasad (Idiap Research Institute) · Florian Mai (Idiap Research Institute) · Thijs Vogels (EPFL) · Martin Jaggi (EPFL) · Francois Fleuret (Idiap Research Institute)

Near Input Sparsity Time Kernel Embeddings via Adaptive Sampling
Amir Zandieh (EPFL) · David Woodruff (CMU)

Is Local SGD Better than Minibatch SGD?
Blake Woodworth (Toyota Technological Institute at Chicago) · Kumar Kshitij Patel (Toyota Technological Institute at Chicago) · Sebastian Stich (EPFL) · Zhen Dai (University of Chicago) · Brian Bullins (Toyota Technological Institute at Chicago) · H. Brendan McMahan (Google) · Ohad Shamir (Weizmann Institute of Science) · Nati Srebro (Toyota Technological Institute at Chicago)

Training Binary Neural Networks using the Bayesian Learning Rule
Xiangming Meng (RIKEN) · Roman Bachmann (EPFL) · Mohammad Emtiyaz Khan (RIKEN)

On Convergence-Diagnostic based Step Sizes for Stochastic Gradient Descent
Scott Pesme (EPFL) · Aymeric Dieuleveut (École polytechnique) · Nicolas Flammarion (EPFL)