IC research back on the European stage

Anastasia Ailamaki, Mathias Payer, Lenka Zdeborová and Pascal Fua © 2025 EPFL

Anastasia Ailamaki, Mathias Payer and Lenka Zdeborová have been selected by the European Research Council in the 2024 call for ERC Advanced Grants for cutting-edge research into data systems, systems security and neural networks. In addition, Pascal Fua has received an Advanced Grant from the Swiss National Science Foundation for research in computer-assisted engineering.

Decision-making tools powered by artificial intelligence and machine learning are transforming the business ecosystem, yet AI's explosive growth depends on equivalent advances in hardware and in database management systems for seamless data access and processing.

Software performance, however, faces a trade-off: current systems can either scale through specialization for a hardware platform or be portable across platforms; they cannot do both. At the same time, the surge in data volume and ML/AI demands strains resources.

To sustain AI’s growth, Database Management Systems must optimize the utilization of resources at execution time, moving any pre-execution assumptions and planning to execution time, when workload and available resources are known. They must also be prepared to adapt to real-time workload and resource changes.

Professor Anastasia Ailamaki, head of the Data-Intensive Applications and Systems Laboratory, has been awarded funding to develop Prodasys, a high-risk, high-gain system design approach that maximizes adaptability, resource utilization, and scalability.

“Prodasys will be a ground-breaking framework for system design using declarative and adaptive cross-component optimization, which is demonstrated through an end-to-end system infrastructure that manifests the aforementioned principles and executes ad-hoc data management tasks at maximum performance and resource utilization,” said Ailamaki.

-------------------------------------------------------

We face a trust crisis in software development. Despite extensive research in system security, software bugs are common and remain a primary attack vector for compromising systems. Unfortunately, formal techniques that give strong security guarantees don’t scale to today’s codebases, which often exceed 100 million lines of code. And alternative techniques like automated software testing are inherently incomplete and will inevitably miss some bugs.

In the second grant awarded to the IC School, associate professor Mathias Payer, head of the HexHive Laboratory, will develop LEAPS (LEAst Privilege compartmentS), an innovative approach to fine-grained, automatic compartmentalization.

LEAPS will decompose complex applications into simple, isolated compartments that interact through well-defined interfaces. Each compartment is granted minimal privileges, confining bugs to where they occur. Achieving LEAPS will require ground-breaking research across software security, computer systems, and computer architecture. On the policy side, Payer and his team will work to define the necessary constraints and properties for least-privilege compartments.

“The ever-increasing complexity of software has outpaced the ability of developers to secure all interacting components. Current security measures only raise the cost of attacks without fully preventing them, meaning that they remain common. That’s why we are proposing a ground-breaking approach: leveraging modularity to create robust security guarantees by compartmentalizing code and data automatically and efficiently into least-privilege fault compartments,” explained Payer.

All prototypes developed during the LEAPS project will be released as open source, contributing to the broader security community.

----------------------------------------------------------

Artificial intelligence is driving significant technological advancements, yet our understanding of how to train neural networks more efficiently—with less energy, using smaller datasets, and achieving greater robustness—remains frustratingly limited. Establishing solid theoretical principles behind these technologies is crucial for developing even more efficient, reliable, and safer AI applications.

Now, Professor Lenka Zdeborová, who is affiliated with both the School of Computer Science and the School of Basic Sciences and heads the Statistical Physics of Computation Laboratory, will use her grant to close the gap between the well-established theoretical foundations of earlier AI models, such as multi-layer fully-connected neural networks, and the theoretical understanding of learning with attention-based components and sequence models, which lags significantly behind.

Her research project will address questions including: How do AI abilities emerge with the scale of the neural network? What are the minimal data and computational resources required to achieve a specific performance level? And how can we theoretically justify the design choices in neural networks, considering data structure and computational efficiency constraints?

“Inspired by theoretical physics, our approach focuses on simplified, analytically tractable settings to uncover the inner workings of sequence models and attention-based neural networks. By working in the large-size limit, we will leverage methods from the statistical physics of disordered systems, statistics, and probability to generate insights that can enhance the training algorithms and architecture design of future AI systems,” Zdeborová explained.
The questions addressed in this proposal encompass fields such as artificial intelligence, machine learning, statistical data processing, and the broader theory of learning.

----------------------------------------------------------

We live in a world full of manufactured objects of ever-increasing complexity that require clever engineering to be functional. Today, individual parts of these objects are often optimized separately, control is rarely accounted for at design time, and much manual tinkering is required.

These shortcomings can largely be ascribed to the composite nature of industrially designed objects. Essentially, these are assemblies of shapes that must fit together and should be optimized jointly. Unfortunately, current optimization techniques often rely on handcrafted, low-dimensional shape parameterizations or on dimensionality reduction techniques with limited generalization abilities, and are unable to fully explore the typically immense design space while handling all the necessary physical and design constraints.

Professor Pascal Fua, head of the Computer Vision Laboratory, has received funding for an ambitious project to rectify these shortcomings by developing end-to-end trainable pipelines that simultaneously optimize individual parts, enforce all the required constraints among them, and effectively exploit synergies between shape and controller design, without having to limit the expressiveness of the models.

“This research will require the integration of physics, geometry and topology into Deep Networks, characterizing and sampling the Design Space, and co-designing for shape and control. As such, it has the potential to radically change the way Computer Assisted Engineering is done,” outlined Fua.


Author: Tanya Petersen

Source: Computer and Communication Sciences | IC

This content is distributed under a Creative Commons CC BY-SA 4.0 license. You may freely reproduce the text, videos and images it contains, provided that you indicate the author’s name and place no restrictions on the subsequent use of the content. If you would like to reproduce an illustration that does not contain the CC BY-SA notice, you must obtain approval from the author.