Emmanuel Abbé part of team that wins deep-learning research award
Two international teams of scientists have been awarded a total of $10 million in grants from the US National Science Foundation and the Simons Foundation to research the mathematics of deep learning. Among them is Professor Emmanuel Abbé at EPFL.
The grant is one of two research awards sponsored by the National Science Foundation (NSF) Directorates for Mathematical and Physical Sciences, Computer and Information Science and Engineering, and Engineering, and the Simons Foundation Division of Mathematics and Physical Sciences.
The aim of the grants is to establish two new research collaborations between mathematicians, statisticians, electrical engineers, and theoretical computer scientists who will work on some of the most challenging questions in the general area of Mathematical and Scientific Foundations of Deep Learning (MoDL).
The NSF-Simons grant, totaling $10 million, has now been awarded to two large international teams of scientists, who will split the amount. One of the teams includes Professor Emmanuel Abbé of EPFL’s School of Basic Sciences. Professor Abbé holds EPFL’s Chair of Mathematical Data Science, and his research focuses on fundamental questions in machine learning and information theory.
Alongside Abbé, the team includes scientists from UC Berkeley (P. Bartlett, B. Yu), Stanford (A. Montanari), MIT (E. Mossel, S. Rakhlin, N. Sun), UCSD (M. Belkin), TTIC (N. Srebro), UCI (R. Vershynin), and the Hebrew University of Jerusalem (A. Daniely).
At EPFL, the collaboration will also operate a “training center” that will host various activities for visitors, including talks, workshops, summer schools, and postdoc programs.
“It’s a great honor to get the award and an exciting time to start such a collaboration,” says Emmanuel Abbé. “The team has a unique synergy and I’m very glad to be part of it. We are all eager to now dive into the research program and to get the activities started across the schools.”
The success of deep learning has had a major impact across industry, commerce, science and society. But many aspects of this technology differ markedly from classical methodology and remain poorly understood. Gaining a theoretical understanding will be crucial for overcoming its drawbacks.
The Collaboration on the Theoretical Foundations of Deep Learning aims to address these challenges: understanding the mathematical mechanisms that underpin the practical success of deep learning, using this understanding to elucidate the limitations of current methods and to extend them beyond the domains where they are currently applicable, and initiating the study of the array of mathematical problems that emerge.
The team has planned a range of mechanisms to facilitate collaboration, including teleconference and in-person research meetings, a centrally organized postdoc program, and a program for visits between institutions by postdocs and graduate students.
Research outcomes from the collaboration have strong potential to directly impact the many application domains for deep learning. The collaboration will also have broad impact through its education, human resource development and broadening participation programs: training a diverse cohort of graduate students and postdocs using an approach that emphasizes strong mentorship, flexibility, and breadth of collaboration opportunities; an annual summer school that will deliver curriculum in the theoretical foundations of deep learning to a diverse group of graduate students, postdocs, and junior faculty; and targeting broader participation in the collaboration’s research workshops and summer schools.
The collaboration’s research agenda is built on the following hypotheses: that overparametrization allows efficient optimization; that interpolation with implicit regularization enables generalization; and that depth confers representational richness through compositionality. It aims to formulate and rigorously study these hypotheses as abstract mathematical phenomena, with the objective of understanding deep learning, extending its applicability, and developing new methods.
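As a simple illustration of the first two hypotheses, consider an overparametrized random-features model: when there are more features than training points, the minimum-norm least-squares solution fits the training data exactly (interpolation), and the pseudoinverse's preference for small norm acts as a form of implicit regularization. The Python sketch below is purely illustrative; the toy sine target, the random ReLU features, and all parameter choices are assumptions for demonstration, not part of the collaboration's research program.

```python
import numpy as np

# Illustrative sketch: minimum-norm interpolation with an overparametrized
# random-features model. All choices here (target function, feature map,
# sizes) are hypothetical, chosen only to make the phenomenon visible.

rng = np.random.default_rng(0)

def target(x):
    return np.sin(2 * np.pi * x)

n_train, n_features = 20, 500          # overparametrized: features >> samples
x_train = rng.uniform(-1, 1, n_train)
y_train = target(x_train) + 0.1 * rng.standard_normal(n_train)

# Random ReLU features: phi_j(x) = max(0, w_j * x + b_j)
w = rng.standard_normal(n_features)
b = rng.standard_normal(n_features)

def features(x):
    return np.maximum(0.0, np.outer(x, w) + b)

# The pseudoinverse returns the minimum-norm solution among all
# interpolating solutions -- a simple form of implicit regularization.
theta = np.linalg.pinv(features(x_train)) @ y_train

x_test = np.linspace(-1, 1, 200)
train_mse = np.mean((features(x_train) @ theta - y_train) ** 2)
test_mse = np.mean((features(x_test) @ theta - target(x_test)) ** 2)
print(f"train MSE: {train_mse:.2e}  (near zero: the model interpolates)")
print(f"test  MSE: {test_mse:.2e}  (can stay moderate despite exact fitting)")
```

In this toy setting the training error is essentially zero because the system is underdetermined, yet the minimum-norm solution typically avoids the wild oscillations that naive intuition about overfitting would predict, which is the kind of phenomenon the collaboration aims to study rigorously.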
Beyond enabling the development of improved deep learning methods based on principled design techniques, understanding the mathematical mechanisms that underlie the success of deep learning will also have repercussions on statistics and mathematics, including a new perspective on classical statistical methods, such as reproducing kernel Hilbert spaces and decision forests, and new research directions in nonlinear matrix theory and in understanding random landscapes.
In addition, the research workshops organized by the collaboration will be open to the public and will serve the broader research community in addressing these key challenges.