PhD Defense of Leello Dadi

Leello Dadi PhD public defense. © 2026 EPFL/Volkan Cevher

On February 27, 2026, Leello Dadi, a PhD student at the LIONS lab, successfully defended his PhD thesis. The thesis, entitled "Noisy Gradient Descent in Machine Learning: generalization, games, and sampling," was supervised by Professor Volkan Cevher. Congratulations to Leello!

Abstract:

This thesis investigates Noisy Gradient Descent (NGD), an idealized version of Stochastic Gradient Descent, in three different settings. While commonly understood as an undesirable artifact, we demonstrate that the noise in NGD can be harnessed to ensure convergence to equilibrium in multi-agent games with limited feedback, to guarantee generalization of the outputs of a learning scheme, and to serve as an essential tool for generative modeling. We explore these benefits by progressively decreasing our control over the noise term.
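As a rough illustration (not taken from the thesis itself), the NGD update studied throughout is an ordinary gradient step plus injected noise. The sketch below runs it on a hypothetical toy quadratic objective; the step size, noise scale, and objective are illustrative choices only:

```python
import numpy as np

def ngd_step(x, grad, step_size, noise_scale, rng):
    """One Noisy Gradient Descent update: a gradient step plus injected Gaussian noise."""
    return x - step_size * grad(x) + noise_scale * rng.standard_normal(x.shape)

# Toy example: minimize f(x) = ||x||^2 / 2, whose gradient is x.
rng = np.random.default_rng(0)
x = np.ones(5)
for _ in range(500):
    x = ngd_step(x, lambda z: z, step_size=0.1, noise_scale=0.01, rng=rng)

# The iterate settles near the minimizer, fluctuating at the noise scale.
print(np.linalg.norm(x))
```

With small noise the iterates behave like gradient descent; as the noise grows, the process instead explores the landscape, which is the regime the later chapters exploit.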

First, we study multi-agent repeated games, a setting where we have complete control over the noise distribution. We show that NGD becomes a low-regret learning algorithm for each selfish player. When adopted by all players in a congestion game, this strategy guarantees convergence to a Nash Equilibrium in polynomial time, thereby demonstrating that selfish behavior can lead to stable and optimal collective outcomes.

Next, we transition to a setting where the noise is less controlled, modeling the stochasticity from mini-batching with a tractable Gaussian distribution. This approximation allows us to analyze NGD in non-strongly-convex optimization landscapes, where we establish novel generalization and privacy guarantees in unbounded settings. This chapter formalizes the intuition that the noise inherent in SGD-like methods acts as a regularizer, preventing overfitting and protecting data privacy.

Finally, we observe that NGD with Gaussian noise is equivalent to Langevin Monte Carlo (LMC), a well-known but biased sampling algorithm. We first re-frame LMC as an approximate denoising scheme. This perspective reveals the sources of its bias and allows us to devise an improved sampler. By replacing the approximate denoiser with an exact one (a Restricted Gaussian Oracle, or RGO), we propose a sampling algorithm whose subproblems are all tractable.
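To make the NGD/LMC equivalence concrete, here is a minimal sketch (an assumption of this post, not code from the thesis): running gradient descent on a potential f with Gaussian noise of variance 2·step produces samples from the density proportional to exp(-f). The target below is a standard Gaussian, f(x) = x²/2, so the true variance is 1; the slight excess variance in the output is exactly the discretization bias the thesis addresses:

```python
import numpy as np

def grad_f(x):
    # Gradient of the potential f(x) = x^2 / 2 (standard Gaussian target).
    return x

def lmc(n_chains=10_000, n_steps=2_000, step=0.1, seed=0):
    """Langevin Monte Carlo: NGD on f with noise scale sqrt(2 * step)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_chains)  # many independent chains in parallel
    for _ in range(n_steps):
        x = x - step * grad_f(x) + np.sqrt(2 * step) * rng.standard_normal(n_chains)
    return x

samples = lmc()
print(samples.mean(), samples.var())
```

For this quadratic target the chain's stationary variance is 1/(1 - step/2) ≈ 1.053 rather than 1, a small but systematic bias that shrinks only as the step size does; replacing the inner step with an exact denoiser removes this trade-off.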