Tutorial by Daniel Kuhn at the INFORMS Annual Meeting in Seattle

© 2019 EPFL

Daniel Kuhn delivered a tutorial on Wasserstein Distributionally Robust Optimization and Machine Learning at the INFORMS Annual Meeting in Seattle, October 20-23, 2019.

The TutORials in Operations Research series is published annually by INFORMS as an introduction to emerging and classical subfields of operations research and management science. The chapters are designed to be accessible to all constituents of the INFORMS community, including current students, practitioners, faculty, and researchers. The publication allows readers to keep pace with new developments in the field and serves as supplementary material for a selection of the tutorial presentations offered at the INFORMS Annual Meetings. The 2019 edition of the INFORMS TutORials volume features 10 chapters on the common topic of “Management Science and Operations Research in the Age of Analytics.” Daniel Kuhn contributed a tutorial titled “Wasserstein Distributionally Robust Optimization: Theory and Applications in Machine Learning.”

Abstract:

Many decision problems in science, engineering and economics are affected by uncertain parameters whose distribution is only indirectly observable through samples. The goal of data-driven decision-making is to learn a decision from finitely many training samples that will perform well on unseen test samples. This learning task is difficult even if all training and test samples are drawn from the same distribution — especially if the dimension of the uncertainty is large relative to the training sample size. Wasserstein distributionally robust optimization seeks data-driven decisions that perform well under the most adverse distribution within a certain Wasserstein distance from a nominal distribution constructed from the training samples. In this tutorial we will argue that this approach has many conceptual and computational benefits. Most prominently, the optimal decisions can often be computed by solving tractable convex optimization problems, and they enjoy rigorous out-of-sample and asymptotic consistency guarantees. We will also show that Wasserstein distributionally robust optimization has interesting ramifications for statistical learning and motivates new approaches for fundamental learning tasks such as classification, regression, maximum likelihood estimation or minimum mean square error estimation, among others.
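To make the setup concrete, the decision problem sketched in the abstract can be stated schematically as follows. The notation here is generic (decision x, uncertain parameter ξ, loss h) and is an illustrative paraphrase of the standard formulation, not a formula taken from the tutorial itself:

```latex
% Schematic Wasserstein DRO problem (generic notation): choose a decision x
% that hedges against every distribution Q within Wasserstein distance
% \varepsilon of the empirical distribution \widehat{\mathbb{P}}_N built
% from the N training samples \widehat{\xi}_1, \dots, \widehat{\xi}_N.
\[
  \min_{x \in \mathcal{X}} \;
  \sup_{\mathbb{Q} \,:\, W(\mathbb{Q}, \widehat{\mathbb{P}}_N) \le \varepsilon}
  \mathbb{E}_{\xi \sim \mathbb{Q}} \bigl[ h(x, \xi) \bigr],
  \qquad
  \widehat{\mathbb{P}}_N = \frac{1}{N} \sum_{i=1}^{N} \delta_{\widehat{\xi}_i},
\]
% where the (type-1) Wasserstein distance between two distributions is the
% minimal cost of transporting one onto the other:
\[
  W(\mathbb{Q}_1, \mathbb{Q}_2)
  = \inf_{\Pi} \int \lVert \xi_1 - \xi_2 \rVert \,
    \Pi(\mathrm{d}\xi_1, \mathrm{d}\xi_2),
\]
% the infimum ranging over all couplings \Pi with marginals
% \mathbb{Q}_1 and \mathbb{Q}_2.
```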
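The abstract's claim that optimal decisions "can often be computed by solving tractable convex optimization problems" can be illustrated on a simple special case. For linear regression with the absolute loss, and a type-1 Wasserstein ball that perturbs only the features under a Euclidean ground metric, the min-sup problem is known to collapse to the empirical loss plus a norm penalty on the weights scaled by the radius. The sketch below solves this reformulation with cvxpy on synthetic data; the data, radius, and variable names are assumptions of this sketch, not code from the tutorial.

```python
# Minimal sketch: Wasserstein DRO for absolute-loss linear regression.
# Assumptions: type-1 Wasserstein ball, feature perturbations only,
# Euclidean ground metric. Under these assumptions the worst case over
# the ball equals the empirical loss plus eps * ||w||_2.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
N, d = 50, 5                        # sample size and feature dimension
X = rng.standard_normal((N, d))     # synthetic features (illustration only)
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(N)

eps = 0.1                           # Wasserstein radius (hyperparameter)
w = cp.Variable(d)                  # regression weights, the "decision"

empirical_loss = cp.sum(cp.abs(y - X @ w)) / N
# Dual-norm regularizer induced by the ambiguity set; with a Euclidean
# ground metric the dual norm is again the 2-norm.
objective = cp.Minimize(empirical_loss + eps * cp.norm(w, 2))
cp.Problem(objective).solve()
print("robust weights:", w.value)
```

The point of the sketch is the structural one made in the abstract: the inner supremum over infinitely many distributions disappears, leaving an ordinary convex program whose only trace of the ambiguity set is the regularization term, with the Wasserstein radius playing the role of the regularization weight.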