Congratulations to Dr. Viet Anh Nguyen for obtaining his PhD

Prof. Daniel Kuhn and Dr. Viet Anh Nguyen © 2019 EPFL

Dr. Viet Anh Nguyen obtained his PhD in September 2019. His dissertation, supervised by Prof. Daniel Kuhn, is entitled "Adversarial Analytics".


Adversarial learning is an emerging technique that improves the security of machine learning systems by deliberately protecting them against specific vulnerabilities of the learning algorithms. Many adversarial learning problems can be cast equivalently as distributionally robust optimization problems that hedge against the least favorable probability distribution in a certain ambiguity set. The main objective of this thesis is to develop novel analytics toolboxes using advanced probability and statistics machinery within the distributionally robust optimization/adversarial learning framework. Using a type-2 Wasserstein ambiguity set and its Gelbrich hull, which constitutes a conservative outer approximation, we propose new solutions with strong performance guarantees to several problems in statistical learning and risk management, while at the same time mitigating the curse of dimensionality inherent in these problems.

The first chapter proposes a distributionally robust inverse covariance estimator that minimizes the worst-case Stein's loss. The optimal estimator admits a closed-form representation and exhibits many desirable properties, none of which are imposed ad hoc but all of which arise naturally from the distributionally robust optimization approach. The optimal estimator is closely related to a nonlinear eigenvalue shrinkage estimator; for this reason we refer to it as the Wasserstein shrinkage estimator. Furthermore, the Wasserstein shrinkage estimator can also be interpreted as a robust maximum likelihood estimator.

The second chapter proposes a distributionally robust minimum mean square error estimator. Under a mild assumption on the nominal distribution of the uncertain data, we show that the optimal estimator is an affine function of the observations, which can be constructed efficiently using a first-order optimization method to solve the underlying semidefinite program.
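As background, the Gelbrich hull mentioned above is built from mean–covariance pairs; the Gelbrich distance between two such pairs coincides with the type-2 Wasserstein distance when both distributions are Gaussian, and it can be evaluated in closed form. A minimal sketch (the function name and inputs are illustrative, not from the thesis):

```python
import numpy as np
from scipy.linalg import sqrtm

def gelbrich_distance(mu1, sigma1, mu2, sigma2):
    """Gelbrich distance between the mean-covariance pairs (mu1, sigma1)
    and (mu2, sigma2):
        G^2 = ||mu1 - mu2||^2 + tr(S1 + S2 - 2 (S2^{1/2} S1 S2^{1/2})^{1/2}).
    For Gaussian distributions this equals the type-2 Wasserstein distance.
    """
    mu1, mu2 = np.asarray(mu1, dtype=float), np.asarray(mu2, dtype=float)
    root2 = sqrtm(sigma2)                    # matrix square root of sigma2
    cross = sqrtm(root2 @ sigma1 @ root2)    # (S2^{1/2} S1 S2^{1/2})^{1/2}
    mean_term = np.sum((mu1 - mu2) ** 2)
    cov_term = np.trace(sigma1) + np.trace(sigma2) - 2 * np.trace(cross).real
    return float(np.sqrt(max(mean_term + cov_term, 0.0)))
```

For example, two distributions with identical (identity) covariances but means shifted by the vector (3, 4) are at Gelbrich distance 5, since the covariance term vanishes and only the Euclidean mean shift remains.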
The third chapter studies distributionally robust risk measures under the Gelbrich hull ambiguity set, an outer approximation of the type-2 Wasserstein ambiguity set. We prove that the robustified Gelbrich risk of many popular law-invariant risk measures admits a closed-form expression. The result is extended to provide tractable reformulations for the worst-case expected loss as well as the worst-case value-at-risk of nonlinear portfolios.
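Closed-form robustified risk measures of this kind are easiest to see in a simpler, classical setting: when only the mean and covariance of the asset returns are known, the worst-case value-at-risk of a linear portfolio over all distributions matching those moments admits a well-known closed form. The sketch below illustrates that moment-based bound only; it is not the Gelbrich-hull result of the thesis, and the function name is hypothetical:

```python
import numpy as np

def worst_case_var(w, mu, sigma, eps):
    """Worst-case value-at-risk at level eps of the portfolio loss -w @ x,
    taken over ALL distributions of x with mean mu and covariance sigma
    (classical moment-based bound, shown here for illustration):
        WC-VaR = -w @ mu + sqrt((1 - eps) / eps) * sqrt(w @ sigma @ w).
    """
    w = np.asarray(w, dtype=float)
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    loss_mean = -w @ mu                      # expected portfolio loss
    loss_std = np.sqrt(w @ sigma @ w)        # standard deviation of the loss
    return float(loss_mean + np.sqrt((1.0 - eps) / eps) * loss_std)
```

For a single zero-mean, unit-variance asset held with weight 1, the bound equals sqrt((1 - eps) / eps): it is 1 at eps = 0.5 and grows to 2 at eps = 0.2, reflecting the price of robustness as the confidence level tightens.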

Keywords: distributionally robust optimization; Wasserstein distance; adversarial learning; statistical estimation; inverse covariance matrix; minimum mean square error; risk measure