EPFL Doctorate Award - 2025 - Steffen Schneider

Robust machine learning for neuroscientific inference
EPFL thesis n°12067
Thesis director: Prof. Mackenzie Mathis
For his pioneering work at the intersection of neuroscience and artificial intelligence. His development of novel methods for explainable neural dynamics provides unprecedented insight into complex brain activity and artificial networks, opening new frontiers in interpretable machine learning and systems neuroscience.
Modern neuroscience produces large-scale datasets, from thousands of simultaneously recorded neurons to behavioral measurements spanning months. To uncover the hidden causes of such complex systems and their dynamics, we need robust methods for processing, analyzing, and interpreting data that also guide future experiments. For processing, I developed adaptation techniques that improve the robustness of machine learning models at deployment time. For analysis, I introduced new identifiability theory for contrastive learning and proposed CEBRA, a framework that jointly models neural and behavioral data for neuroscientific discovery and hypothesis testing. Finally, I extended this model to enable interpretability and proposed an identifiable approach to generating attribution maps. This work represents a step toward making modern machine learning a reliable tool for scientific discovery, built on reproducibility and robustness.
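As a concrete illustration of the joint modeling of neural and behavioral data mentioned above, the sketch below uses the scikit-learn-style interface of the publicly released cebra Python package on synthetic data. All array shapes and hyperparameter values are illustrative assumptions, not the settings used in the thesis.

```python
import numpy as np
from cebra import CEBRA

# Synthetic stand-ins for real recordings: 1000 time bins, 50 neurons,
# and a 1-D continuous behavioral variable (e.g. position on a track).
neural = np.random.randn(1000, 50).astype("float32")
behavior = np.random.rand(1000, 1).astype("float32")

# CEBRA exposes a scikit-learn-style estimator; these hyperparameters are
# illustrative, not the values from the thesis.
model = CEBRA(
    model_architecture="offset10-model",
    conditional="time_delta",
    batch_size=512,
    output_dimension=3,
    max_iterations=1000,
)

# Contrastive training conditioned on the behavioral variable, then projection
# of the neural data into the learned low-dimensional latent space.
model.fit(neural, behavior)
embedding = model.transform(neural)
print(embedding.shape)  # (1000, 3)
```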
