Computerized Tomography and Reproducing Kernels

© 2025 EPFL
Published in the prestigious journal SIAM Review, a new framework developed by Ho Yun and Victor Panaretos replaces traditional approaches based on the Fourier transform with a reconstruction built on reproducing kernels, producing sharper images under noisy or limited-angle conditions and opening new perspectives in medical and scientific imaging.
The Mathematics of Tomography: from Frustration to Simplification.
During a Christmas break spent in southern France, a graduate student sat flipping through a book on tomography. Ho Yun, a PhD student at EPFL, had borrowed the book to dive into the mathematical intricacies of this essential imaging technique. Tomography, used in medical scanners and other fields, involves reconstructing three-dimensional objects from two-dimensional projections, and is a cornerstone of modern imaging science.
But this complexity left Ho frustrated. The mathematical framework underpinning tomography was dominated by a complex interplay of Fourier transforms, algebraic reconstruction, and discretization. "The theory felt unnecessarily intricate," Ho said. "I believed there had to be a more intuitive way to formulate it."
Motivated by this frustration, Ho returned to EPFL determined to streamline the theory. He shared his opinion with his supervisor, Victor Panaretos, who empathized and referred to a ball-and-stick model he had employed nearly 15 years earlier, in the context of cryo-electron microscopy, and that had simplified things substantially. Could this model be turned into a complete framework? Over the subsequent weeks, under his supervisor's guidance, Ho refined the ideas and turned these heuristics into rigorous mathematics. The outcome of this effort was a new and simplified formalism for tomography in the elegant context of Reproducing Kernel Hilbert Spaces (RKHS). The consequence? A simple yet powerful theory that could enhance computational efficiency and reliability in fields like medical imaging and microscopy. It also offers pedagogical advantages, making the subject far more accessible to advanced students and researchers.
It’s a story of how reimagining a classic problem is not about adding more complexity, but about stripping it away.
What is Tomography?
Tomography has had a transformative impact on fields ranging from medical imaging to materials science. At its core, it is about seeing inside an object without cutting it open. Imagine a CT scanner (a Computed Tomography scanner) rotating around a patient, collecting dozens or hundreds of X-ray images from different angles. Each X-ray image is essentially a shadow, a projection of the object’s internal density onto a flat plane. These shadow-like images are then used to reconstruct the internal structure of the body — for example, creating a detailed image of the brain. This reconstruction is the mathematical heart of CT: how to recover a 3D object from a series of 2D projections.
This invention proved remarkably successful, earning its inventors — Godfrey Hounsfield and Allan Cormack — the 1979 Nobel Prize in Physiology or Medicine. Their breakthrough revolutionized diagnostics, enabling earlier, more accurate detection, and ultimately, better outcomes for patients.
The Traditional Math of Tomography
To understand Ho and Panaretos’ contribution, it helps to appreciate the mathematical challenges inherent in the imaging problem.
In a typical CT scan, X-ray sources and detectors capture multiple projection images – the shadows – from different angles. The resulting projections form what’s known as a sinogram: a collection of measurements that often appears as wavy patterns. In practice, sinograms face two main limitations. First, they can be noisy or distorted. Second, they contain only a finite number of projections, resulting in discrete, incomplete data. Reconstructing the object’s internal structure involves two key steps (see Figure 1). The first step is to interpolate this noisy, discrete sinogram, filling in gaps and filtering out noise. This interpolation yields what’s known as an ideal sinogram: a more coherent representation that feeds the second, critical step of identifying and reconstructing the object. But both interpolation and reconstruction pose technical challenges, each requiring careful mathematical treatment to obtain robust solutions.
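To make the idea of a sinogram concrete, here is a minimal sketch (our own illustration, not the authors' method): it collects the projections of a toy 2-D phantom by rotating the sampling grid and summing along vertical lines. The `sinogram` function and the square phantom are illustrative choices.

```python
import numpy as np

def sinogram(image, angles):
    """Collect line-integral projections ('shadows') of a 2-D image,
    one column of the output per viewing angle."""
    n = image.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    out = np.zeros((n, len(angles)))
    for k, theta in enumerate(angles):
        ct, st = np.cos(theta), np.sin(theta)
        # rotate the sampling grid about the centre (nearest neighbour)
        xr = ct * (xs - c) - st * (ys - c) + c
        yr = st * (xs - c) + ct * (ys - c) + c
        xi = np.clip(np.round(xr).astype(int), 0, n - 1)
        yi = np.clip(np.round(yr).astype(int), 0, n - 1)
        rotated = image[yi, xi]
        out[:, k] = rotated.sum(axis=0)  # line integrals along columns
    return out

# toy phantom: a centred square of density 1, viewed from 90 angles
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
sino = sinogram(img, np.linspace(0.0, np.pi, 90, endpoint=False))
print(sino.shape)  # (64, 90): detector position x angle
```

Plotting `sino` as an image would show the wavy bands the article describes: as the square rotates, its shadow widens and narrows periodically.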
Figure 1 – From right to left: A real-world sinogram, its interpolation into an ideal sinogram (middle — one can notice the wavy pattern), and the corresponding reconstruction of the real object.
The process relies mathematically on the X-ray transform, a fundamental integral operator in imaging that maps a function, representing the object’s density, to its line integrals, corresponding to the projections recorded by a scanner. These density functions typically belong to the L2 Hilbert space, meaning their squared values remain finite when integrated across their domain. Reconstructing the object typically means inverting the X-ray transform to recover its density function. This introduces two main challenges:
- Given an ideal sinogram, the X-ray transform is invertible, but its inverse is unbounded, making it highly sensitive to observation errors.
- In practical settings, with only a finite number of angles, the X-ray transform becomes non-invertible.
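For readers who want the operator in symbols, one standard way to write the two-dimensional X-ray transform (which coincides with the Radon transform in two dimensions; notation varies across texts) is:

```latex
(Xf)(\theta, s) \;=\; \int_{\mathbb{R}} f\big(s\,\theta^{\perp} + t\,\theta\big)\,dt,
\qquad \theta \in S^{1},\; s \in \mathbb{R},
```

where $\theta$ is the direction of the ray, $\theta^{\perp}$ the unit normal, and $s$ the signed detector offset. A sinogram is precisely the function $(\theta, s) \mapsto (Xf)(\theta, s)$, and reconstruction means recovering the density $f$ from (samples of) this function.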
To address these challenges, methods like Filtered Backprojection (FBP) — which combine Fourier analysis with frequency filtering — are commonly used. FBP works in two stages. First, it cleans up each projection by applying a filter in the Fourier domain (hence “filtered”) to correct distortions inherent to the projection process. Then it back-projects the corrected data, smearing each projection back over its original region, and combining them to approximate the image. In essence, back-projection throws the shadows back to where they came from in space, and the filtering step ensures that when they overlap, the contrast is adjusted just right to reconstruct the original object.
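The two FBP stages can be sketched in a few lines. The toy implementation below (a bare-bones approximation of our own: no zero-padding or apodized filter, which production implementations use) ramp-filters an analytically computed sinogram of a centred disc, then backprojects it with nearest-neighbour sampling:

```python
import numpy as np

def fbp(sino, angles):
    """Minimal filtered backprojection: ramp-filter each projection in
    the Fourier domain, then smear it back across the image plane."""
    n, _ = sino.shape
    # stage 1, 'filtered': apply the ramp filter |freq| per projection
    freqs = np.fft.fftfreq(n)
    filtered = np.fft.ifft(
        np.fft.fft(sino, axis=0) * np.abs(freqs)[:, None], axis=0
    ).real
    # stage 2, 'backprojection': each pixel accumulates the filtered
    # value at the detector position it projects to, angle by angle
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    recon = np.zeros((n, n))
    for k, theta in enumerate(angles):
        s = (xs - c) * np.cos(theta) + (ys - c) * np.sin(theta) + c
        si = np.clip(np.round(s).astype(int), 0, n - 1)
        recon += filtered[si, k]
    return recon * np.pi / len(angles)  # approximate the angle integral

# analytic sinogram of a centred disc of radius r and density 1:
# its projection 2*sqrt(r^2 - s^2) is the same at every angle
n, r = 64, 20
s = np.arange(n) - (n - 1) / 2.0
proj = 2.0 * np.sqrt(np.maximum(r**2 - s**2, 0.0))
angles = np.linspace(0.0, np.pi, 180, endpoint=False)
sino = np.tile(proj[:, None], (1, len(angles)))
recon = fbp(sino, angles)  # near 1 inside the disc, near 0 outside
```

Dropping the ramp-filter line reproduces the failure mode the article mentions: plain backprojection yields a blurred disc, because overlapping shadows over-weight low frequencies.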
However, FBP has well-known quirks and limitations. Since the inversion is unbounded, the method is particularly sensitive to small measurement errors and can be significantly affected by noise or inconsistencies in the data collection. Small deviations in how the angles are distributed can lead to artifacts in the reconstructed image, making it challenging to maintain accuracy under real-world conditions. While modern techniques like machine learning address some of these limitations, they still inherit much of the conceptual and computational complexity of that traditional model.
There’s also a dimensional issue: in higher-dimensional settings, the properties of the X-ray transform make inversion impractical. Moreover, the filtering step, based on the Fourier transform, depends on the parity of the dimension, adding yet another layer of complexity.
In addition, the mathematical machinery required to overcome some of these challenges, involving Sobolev spaces, Besov spaces, or Schwartz distributions, is highly abstract, making it challenging both to implement and to understand theoretically.
This complexity motivated the PhD student and his supervisor to seek a more elegant and robust mathematical formulation. "It was possible to build a framework that was both simpler and inherently robust," Ho explained.
A Simpler Framework Through RKHS
Ho and Panaretos’ key insight was to reinterpret the X-ray transform as an operator acting within a Reproducing Kernel Hilbert Space (RKHS), rather than the commonly used L2 Hilbert space. The inspiration came from a radial basis function model that Panaretos had introduced nearly 15 years earlier, in order to simplify and analyse the cryo-EM reconstruction problem. But this was a model – an assumption. Could it be turned into a mathematical framework by moving from radial basis functions to reproducing kernels? This is the programme that was ultimately executed, leading to the RKHS framework, which eliminates the need for complex techniques and dramatically simplifies both the conceptual and computational structure of tomography. It reduces the problem to a regression problem – a well-understood statistical task.
How? Kernels, a cornerstone of this methodology, provide the mathematical basis for reproducing all functions within an RKHS. A kernel is a function that defines an inner product. It acts like a gentle glue, binding data points to functions. An RKHS built from that kernel is a space where complex functions can be exactly reconstructed from their values at sample points, using the kernel itself as a building block. Kernels introduce linearity to problems that might otherwise lack it. This becomes a powerful tool with wide-ranging implications.
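A small example of the reproducing idea, using a Gaussian kernel (a standard choice for illustration, not necessarily the kernel used in the paper): a smooth function is recovered from a handful of samples as a kernel expansion whose coefficients solve a linear system.

```python
import numpy as np

def gaussian_kernel(x, y, scale=0.5):
    """k(x, y) = exp(-(x - y)^2 / (2 scale^2)), a reproducing kernel on R."""
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * scale**2))

# sample a smooth function at a handful of points
x_obs = np.linspace(-3, 3, 15)
y_obs = np.sin(x_obs)

# the RKHS interpolant is f(x) = sum_i c_i * k(x, x_i); its
# coefficients solve K c = y (a tiny ridge term aids stability)
K = gaussian_kernel(x_obs, x_obs)
c = np.linalg.solve(K + 1e-8 * np.eye(len(x_obs)), y_obs)

# evaluate the interpolant anywhere, using the kernel as building block
x_new = np.linspace(-3, 3, 200)
f_new = gaussian_kernel(x_new, x_obs) @ c
print(np.max(np.abs(f_new - np.sin(x_new))))  # small interpolation error
```

Note that fitting is literally a linear solve: this is the sense in which kernels "introduce linearity" and reduce function recovery to a regression-like computation.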
First, Ho and Panaretos make it possible to unify interpolation and reconstruction into a single, clean step. This unification marks a major departure from the conventional approach. To understand how this works, imagine the collection of all possible X-ray sinograms as another space of functions — call it the sinogram space. In the L2 framework, naive backprojection fails to render the original image accurately. This is why a filtering step is necessary. In the new RKHS framework, the sinogram space is itself an RKHS that is directly linked to the object’s RKHS. In other words, the RKHS is a well-crafted function space in which one does not need the filtering step. X-ray projection becomes a simple shadow play, not a complex operation needing correction. This involves determining coefficients in a finite-dimensional space, a straightforward process akin to solving a linear regression problem.
Second, they show that within this well-designed structure, the X-ray transform behaves like a simple Euclidean projection. This means that backprojection is no longer a messy inverse, but a clean, stable mapping. In practical terms, projecting and back-projecting within RKHS becomes intrinsically stable and lossless.
Third, the RKHS model incorporates regularization naturally, acting as a stabilizing force. Indeed, the kernel method ensures that the reconstruction process stays within a bounded context. As Ho explained, “Instead of getting infinite cylinders, we get French baguettes.” So their method is much less vulnerable to inconsistencies in the angular distribution of projections and measurement perturbations. In the context of classical FBP methods, by contrast, regularization is usually tacked on afterward because of the practical lack of invertibility. These methods can then struggle when the angles at which data are collected are unevenly spaced. Here, regularization is built in from the start, and limits the influence of noise and incomplete data.
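Schematically, in generic RKHS regression (not necessarily the authors' exact formulation), this built-in regularization takes the form of a penalized least-squares problem:

```latex
\hat{f} \;=\; \operatorname*{arg\,min}_{f \in \mathcal{H}}\;
\sum_{i=1}^{n} \big( y_i - (Xf)(\ell_i) \big)^2
\;+\; \lambda \, \| f \|_{\mathcal{H}}^{2},
```

where the $y_i$ are the measured line integrals along lines $\ell_i$, $\mathcal{H}$ is the RKHS, and $\lambda > 0$ controls the trade-off between fitting the data and keeping the reconstruction stable. By the representer theorem, the minimizer is a finite linear combination of kernel sections, which is what confines the reconstruction to a bounded, well-behaved region.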
Figure 2: Comparing backprojection. L2 backprojection produces "infinite cylinders," leading to a blurred reconstruction (top) and requiring additional filtering. In contrast, RKHS backprojection forms a "baguette" (bottom), naturally suppressing signals outside the bounded region.
As a result, the RKHS methodology achieves high stability and precision, even with discrete, noisy, or incomplete data. That reliability can have a major impact in real-world scenarios involving highly complex datasets, where modern techniques often struggle to balance accuracy and computational feasibility. Together, these advantages could make RKHS-based tomography both theoretically robust and practically viable, opening new doors for tackling real-world imaging challenges.
Applications Beyond Tomography
While this new theory may not immediately replace current medical imaging algorithms, it has the potential to improve them. In CT workflows, neural networks are often used to post-process or even directly reconstruct images, helping to compensate for the limitations of FBP techniques. However, even these machine learning algorithms typically begin with a quick FBP reconstruction, essentially giving the neural network a head start. If the initial FBP image contains artifacts, the neural network must compensate for them, either by learning to correct or to ignore the distortions. Ho and Panaretos propose that their kernel method could offer a better starting point for such hybrid approaches. A clearer initial reconstruction could lead to improved outcomes for these learning-based pipelines.
Its broader implications extend beyond medical imaging. One particularly promising application lies in cryo-electron microscopy (Cryo-EM). This technique, for whose development Jacques Dubochet shared the 2017 Nobel Prize in Chemistry, allows scientists to visualize biological molecules at near-atomic resolution. In Cryo-EM, proteins, viruses, and other particles are suspended in random orientations, resulting in projection data with highly irregular angular distributions. As mentioned earlier, the FBP approach struggles to handle such irregularities, leading to poor reconstructions or significant artifacts. In contrast, the RKHS-based strategy is inherently more robust to random angular sampling. Its ability to interpolate and reconstruct data without relying on uniform angular coverage makes it a natural fit for Cryo-EM.
Teaching Simplicity
The mathematical progress is noteworthy, but an equally important motivation was pedagogical. Panaretos highlighted the teaching advantages of the RKHS formulation. “Simplifying the underlying theory makes tomography far easier to teach and understand,” he remarked. Instead of requiring weeks of preparatory material, advanced students can now grasp the core concepts in a matter of sessions. “With some basic Hilbert space theory under your belt, you are good to go,” Panaretos added.
Conclusion
Their work ultimately stands as a beautiful example of mathematical discovery advancing clarity and accessibility. By recasting tomography within the elegant simplicity of RKHS, Ho and Panaretos have provided not just a powerful new tool for imaging scientists, but a testament to the power of intuitive mathematics. This story underscores how frustration with complexity can sometimes lead to beautifully simple solutions, reshaping not only how scientists approach tomography, but also how the next generation learns to see beneath the surface.