PhD Defense of Fabian Latorre
On August 18, 2023, Fabian Latorre, a PhD student at the LIONS lab, successfully defended his PhD thesis. The thesis, entitled "Robust Training and Verification of Deep Neural Networks", was supervised by Prof. Volkan Cevher. Congratulations to Fabian!
Abstract: According to the proposed Artificial Intelligence Act by the European Commission (expected to pass at the end of 2023), autonomous driving vehicles based on Deep Learning would be classified as a High-Risk AI System (Title III). Article 15 in the aforementioned legislation specifies that such systems must at least comply with the following: "High-risk AI systems shall be resilient as regards attempts by unauthorised third parties to alter their use or performance by exploiting the system vulnerabilities. The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent and control for attacks trying to manipulate the training dataset (data poisoning), inputs designed to cause the model to make a mistake (adversarial examples), or model flaws".
The body of work comprising my Ph.D. thesis is a stepping stone towards robust Deep Learning systems that comply with the requirements of the AI Act. In my research, I have developed theory and algorithms to efficiently train Deep Neural Networks with guarantees of robustness to adversarial perturbations as well as random noise. The main contributions of my Ph.D. thesis are:

(1) The first theoretically correct descent method for Adversarial Training, the most common algorithm for training robust networks (ICLR 2023).
(2) The first analysis of the generalization and robustness of Deep Polynomial Networks, together with a novel regularization scheme that improves their robustness (ICLR 2022).
(3) An explicit regularization scheme for Quadratic Neural Networks with a guaranteed improvement in robustness to random noise, compared to SVMs (NeurIPS 2021).
(4) A connection between the 1-path-norm of a Neural Network and its robustness to adversarial examples, together with the first algorithm with guarantees for performing 1-path-norm regularization for Shallow Networks (ICML 2020).
(5) The first algorithm for certifying the robustness of Deep Neural Networks using Polynomial Optimization, by upper-bounding their Lipschitz constant (ICLR 2020).
(6) An ADMM algorithm with guarantees of fast convergence for the problem of denoising adversarial examples using Generative Adversarial Networks as a prior (NeurIPS 2019).
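To make the notion of an adversarial example concrete (this is a generic illustration, not an algorithm from the thesis), here is a minimal NumPy sketch of the fast gradient sign method (FGSM) attacking a toy logistic-regression "network". Adversarial Training, mentioned in contribution (1), repeatedly solves exactly this kind of inner loss-maximization over a small perturbation ball and then trains the weights against it. The model, weights, and epsilon below are all hypothetical choices made for the sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, x, y):
    # Binary cross-entropy for a single example with model p = sigmoid(w @ x).
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def input_gradient(w, x, y):
    # For logistic regression, d(loss)/dx = (p - y) * w.
    p = sigmoid(w @ x)
    return (p - y) * w

def fgsm(w, x, y, eps):
    # One-step maximization of the loss within an L-infinity ball of radius eps:
    # x_adv = x + eps * sign(grad_x loss), the classic adversarial perturbation.
    return x + eps * np.sign(input_gradient(w, x, y))

# Toy example: a clean input and its adversarial counterpart.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.1, 0.8])
y = 1.0
x_adv = fgsm(w, x, y, eps=0.1)
print(loss(w, x, y), loss(w, x_adv, y))  # adversarial loss is strictly higher
```

An adversarially trained model would minimize the loss at `x_adv` rather than at `x`; the difficulty analyzed in the thesis is doing this inner maximization and outer minimization with actual convergence guarantees.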