Self-supervised, label-free 3D cell imaging is here

Image: © Mackenzie Mathis (EPFL)
EPFL researchers introduce CellSeg3D, a self-supervised tool for 3D cell segmentation in fluorescence microscopy, eliminating the need for manual labeling and enhancing accessibility for various biological studies.
In the realm of biological imaging, accurately identifying and segmenting cells in three dimensions (3D) has been a persistent challenge, often requiring extensive manual labeling. Although new deep learning approaches have emerged in recent years that work very well for 2D and some 3D applications, a major limitation has been the need for supervised training data.
State-of-the-art AI-based 3D cell segmentation methods rely heavily on supervised learning, which requires large datasets of manually annotated images. This creates a significant barrier, especially for diverse tissue types or rare cell populations where annotated data is scarce or nonexistent. For example, training a new model to recognize cells in whole-brain imaging data, an area of growing interest, can take hundreds of hours of manual 3D annotation.
A team led by Professor Mackenzie Mathis at EPFL has now developed both a new 3D dataset, which is crucial for testing models, and a new Python software package called CellSeg3D. The package includes state-of-the-art supervised models, but, critically, it also introduces a new self-supervised learning approach called WNet3D that segments 3D fluorescence microscopy images without the need for manual labels. By leveraging inherent patterns within the imaging data, CellSeg3D learns to identify and delineate cellular structures autonomously.
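For readers curious to try it, the sketch below shows one way to get started, assuming the plugin is installed from PyPI as napari-cellseg3d and the data is a 3D TIFF stack; the file name is a placeholder and the exact menu entry may differ between versions, so this is an illustration rather than the authors' prescribed workflow.

```python
# Minimal sketch: open a 3D fluorescence stack in napari, where the
# CellSeg3D plugin (installed via `pip install napari-cellseg3d`) can then
# be launched from the Plugins menu for label-free inference with WNet3D.
# The file path below is a placeholder, not a file from the paper.
import napari
import tifffile

# Load a 3D (Z, Y, X) fluorescence microscopy volume from disk.
volume = tifffile.imread("my_cleared_brain_stack.tif")

# Start a napari viewer and display the raw volume.
viewer = napari.Viewer()
viewer.add_image(volume, name="nuclei", colormap="gray")

# With napari-cellseg3d installed, its widgets are available under the
# Plugins menu of the viewer; the segmentation labels they produce appear
# as additional layers alongside the raw image.
napari.run()
```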
CellSeg3D began with the lack of 3D models when the team wanted to analyze whole-brain data from mice. With support from the Wyss Institute, the team worked with the new mesoSPIM microscope to collect cleared whole-brain data, then set to work benchmarking existing models and ultimately developing CellSeg3D.
“We developed the code base nearly completely in the open, on GitHub,” says Mathis. “I am a big proponent of making usable, open source tools, but this was a particularly fun experiment spanning a team of undergrad, master’s and PhD students, a software engineer, a technician, and post-docs in my group.” This in-the-open development paid off: even before the version of record was formally published in eLife, the package had 95K installations.
The researchers ran rigorous tests across four diverse datasets, including a mouse brain dataset captured with the mesoSPIM light-sheet microscope. In all tests, CellSeg3D performed on par with or better than supervised tools: it consistently segmented densely packed nuclei and complex tissue structures, all without needing a single labeled image. This makes CellSeg3D especially appealing to researchers working with under-studied organisms or tissues, where labeled training data simply doesn’t exist.
CellSeg3D lowers the barrier to entry for 3D biological image analysis. That means more labs, including those without specialized computational teams, can turn raw imaging data into usable insights. It could speed up research in areas like brain mapping, cancer biology, and regenerative medicine.
It also carries an important educational message: several of the paper’s authors were students who contributed to the development of CellSeg3D after using it in their course work at EPFL.
Cyril Achard, the first author of the eLife paper presenting the software, who started in Mathis’s lab as an undergraduate and then as a NeuroX master’s student, says: “Going all the way from a simpler 3D annotation tool to a more substantial workflow for label-free cell segmentation was an excellent way to learn more about all these concepts; and, later on, being confronted with the whole process of writing a publication made it into a very complete and unique learning experience that strongly shaped my interests during my master’s and beyond.”
The project’s open development model didn’t just result in better software—it also trained the next generation of interdisciplinary scientists.
List of contributors
- EPFL Brain Mind Institute and Neuro X
- Wyss Center for Bio and Neuroengineering
- The Vallee Foundation
- Wyss Institute
- Bertarelli Foundation
Cyril Achard, Timokleia Kousi, Markus Frey, Maxime Vidal, Yves Paychère, Colin Hofmann, Asim Iqbal, Sebastien B. Hausmann, Stéphane Pagès, Mackenzie Weygandt Mathis. CellSeg3D: self-supervised 3D cell segmentation for fluorescence microscopy. eLife, 24 June 2025. DOI: 10.7554/eLife.99848