Google PhD Fellowship
EPFL PhD student Maksym Andriushchenko from the Theory of Machine Learning Laboratory (IC), headed by Prof. Nicolas Flammarion, was recently awarded a prestigious Google PhD Fellowship (North America/Europe). “These awards have been presented to exemplary PhD students in computer science and related fields. We have given these students unique fellowships to acknowledge their contributions to their areas of specialty and provide funding for their education and research. We look forward to working closely with them as they continue to become leaders in their respective fields”, said Google in a press release.
The award covers full tuition, fees, and a bursary for up to 3 years.
We recently caught up with Maksym to learn more about his past endeavors and current research interests. Read on to find out more!
1. What was your background before coming to EPFL?
Before starting my PhD at EPFL, I did my master's studies in computer science at Saarland University and my master's thesis at the University of Tübingen. At that time, I started doing research in machine learning, which made me very interested in pursuing an academic career. It was very exciting to work on questions that had not been answered before and to expand our knowledge around them, even by a tiny bit. However, my interest in machine learning started even earlier, when I was working as a data scientist at the largest Ukrainian bank, where I could clearly see how machine learning algorithms are used at the scale of millions of people. This made me realize how crucial it is to develop a better understanding of these algorithms, as their impact on our lives is becoming increasingly important.
2. What brought you to EPFL?
When I was applying for PhD positions, I had already read multiple scientific articles by EPFL authors, so I knew EPFL for its very high standards in research. I also knew several people at EPFL who confirmed that it is a great place to be and that Switzerland is very welcoming to international students. In hindsight, it was definitely the right decision to join EPFL, as it has provided a truly great environment in which to grow. Also, I can't help mentioning how great the Alps and Swiss nature are!
3. Can you summarize your PhD research in only one sentence?
Current state-of-the-art machine learning models are surprisingly non-robust to small perturbations of the input data, and my research focuses on addressing this problem in a principled manner.
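To give a concrete, purely illustrative picture of what "small perturbations" means here, below is a minimal sketch of the classic Fast Gradient Sign Method (Goodfellow et al., 2015) in PyTorch. This is not Maksym's own method, and model, image, and label are hypothetical placeholders for a trained image classifier and one correctly classified example.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, eps=8 / 255):
    """Return an adversarially perturbed copy of `image` (pixel values in [0, 1])."""
    # Track gradients with respect to the input, not the model weights.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Take one small step in the direction that increases the loss the most.
    adv_image = image + eps * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()

# Usage (hypothetical): adv = fgsm_perturb(model, image, label)
# The perturbation is bounded by eps per pixel and is barely visible,
# yet model(adv) often predicts a different class than model(image).

The point of the sketch is only to show how little the input needs to change: a perturbation of at most 8/255 per pixel, invisible to a human, can already flip the prediction of a non-robust model.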
4. What impact do you foresee your research having on society?
The lack of robustness of machine learning models is a fundamental problem which can have a negative impact in various domains. As an example, I recently finished an internship with Adobe Research, where I worked on improving the robustness of content authenticity models. These models aim to establish the authenticity of images distributed online in order to prevent the spread of manipulated content and fake news. The fact that these models can be non-robust creates a vulnerability that malicious actors can exploit to bypass them and keep spreading inauthentic images. Thus, it is very important that such sensitive applications rely on robust machine learning models.
5. Why is your research a good match for the missions of Google?
Machine learning in general is one of the most important research areas for Google. It is the core technology behind their search engine, and it is widely applied across many of their products. Ensuring that the machine learning algorithms behind, e.g., Google Image Search or the Google Cloud Vision API are robust is therefore very important. Moreover, research on robustness is also of general interest, as it can shed more light on how state-of-the-art machine learning models actually work and arrive at their decisions. This matters because most of these models are very large black boxes trained on very large datasets, and their inner workings are often a mystery. Research on robustness is one way to improve our understanding of them and make them more interpretable.
Interested in learning more about Google PhD Fellowships? Contact us at the Research Office.