CoreNEURON – helping to deliver science sooner

© 2019 EPFL


In order to improve performance and benefit from new computing architectures, the EPFL Blue Brain Project has isolated the core functionality of the NEURON simulator and optimized it into a new simulation engine, CoreNEURON. In a paper published in Frontiers in Neuroinformatics, Blue Brain explains how CoreNEURON helps existing NEURON users simulate their models faster, better utilize computing resources and, ultimately, deliver science sooner.

Understanding the brain is one of the largest Big Data challenges we have today. After years of theory and experimentation, simulation neuroscience has become the third pillar of the scientific method, complementing the two traditional pillars. Studying models of brain components, brain tissue or even whole brains provides new ways to integrate anatomical and physiological data, and allows insights into causal mechanisms that cross scales and link structure to function.

Over the past three decades, the NEURON simulator has been developed to enable neuroscientists to model the electrical activity of neuronal networks. The more detail these models include and the larger they become, the greater the computational requirements of simulators such as NEURON, making it necessary to embrace advanced computational concepts and faster computers. This is the case for the Blue Brain Project, where computational requirements influence the amount of detail that can be incorporated when building biologically detailed digital reconstructions and simulations of the mouse brain.

Reducing memory requirements and improving time to simulation

Taking into account the thousands of existing NEURON models, Blue Brain has developed CoreNEURON in collaboration with Michael Hines and his team at Yale University, and with the European Human Brain Project, with the goal of minimizing memory footprint and maximizing scalability on large supercomputers. It supports graphics processing units, makes use of memory-optimized data structures, and allows efficient use of compute hardware capabilities such as SIMD units, on desktops as well as on large supercomputing platforms. Memory usage is reduced by 4-7x and simulation time by 2-7x, helping existing NEURON users to simulate their models faster. In addition, the CoreNEURON simulator handles spiking network simulations, including gap-junction coupling, with the fixed time step method, which better utilizes computing resources and therefore helps to deliver science sooner.

“Improving memory usage and execution time with CoreNEURON is helping the Blue Brain to progress faster by adding new regions and greater detail to our models,” explains Pramod Kumbhar, HPC Architect at Blue Brain. “All of our large scale simulations at the Blue Brain Project are now being done on CoreNEURON.”

Flexibility for model building and efficiency for model simulation

A further benefit to the neuroscience community is that the offline execution mode of CoreNEURON provides the flexibility to build and simulate large network models that cannot be simulated with NEURON alone. This is possible thanks to the modular design of the CoreNEURON library, which can be dynamically loaded at runtime by NEURON, or run as a standalone application on a model built by the NEURON simulator.
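As a rough illustration of the runtime-loaded mode, switching an existing NEURON run over to CoreNEURON can look like the following. This is a minimal sketch, assuming a recent NEURON build with CoreNEURON support; the `coreneuron` module with its `enable`/`gpu` flags and the `ParallelContext.psolve` call reflect typical NEURON usage rather than code from this article.

```python
from neuron import h, coreneuron  # coreneuron module assumed available in this build

h.load_file("stdrun.hoc")  # load NEURON's standard run system

# Build the model with NEURON as usual (a trivial single-compartment cell here)
soma = h.Section(name="soma")
soma.insert("hh")

# Hand the simulation over to the dynamically loaded CoreNEURON library
coreneuron.enable = True
coreneuron.gpu = False  # set True to target GPUs, if built with GPU support

pc = h.ParallelContext()
h.stdinit()
pc.psolve(100.0)  # CoreNEURON executes the fixed time step run to t = 100 ms
```

In the standalone (offline) mode, NEURON instead writes the built model to disk and the CoreNEURON executable is launched separately on that data, which is what makes it possible to simulate models too large to fit alongside NEURON's own data structures.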

A tool for the community

CoreNEURON is part of the open NEURON simulation ecosystem, which is used by thousands of researchers around the world.

“Simulation of the detail and scale we and our collaborators do, requires innovation to make the best use of the computer architectures of today and tomorrow”, explains Prof. Felix Schürmann, Blue Brain’s Computing Director. “At the same time, it is important to make the entry threshold as low as possible to make this innovation useful for the wider community. By integrating CoreNEURON with NEURON, we believe we are addressing both challenges,” he concludes.

CoreNEURON is open source and in use today, enabling simulation science of detailed tissue models at unprecedented scale. Oren Amsalem of the Department of Neurobiology, Hebrew University of Jerusalem, uses CoreNEURON to simulate the somatosensory cortex with gap junctions: “CoreNEURON accelerated our simulations allowing us to replicate full length visual and auditory experiments. This enabled us to gain a deeper understanding of experimental results by uncovering the underlying biophysical and molecular mechanisms.”

NEURON and CoreNEURON are, furthermore, core technologies for the European Human Brain Project, where these simulators are being integrated into a larger research infrastructure for brain research, enabling the community to build large-scale brain region models, such as the biophysically detailed model of the hippocampus CA1 of a young rat shown in the infographic above.

CoreNEURON and the code generation program MOD2C are open source and available on GitHub.

Read the paper:

Kumbhar, P., Hines, M., Fouriaux, J., Ovcharenko, A., King, J., Delalondre, F., and Schürmann, F. (2019). CoreNEURON: An optimized compute engine for the NEURON simulator. Front. Neuroinform. 13, 63.


This work has been funded by the EPFL Blue Brain Project (funded by the Swiss ETH board), NIH grant number R01NS11613 (Yale University), the European Union Seventh Framework Program (FP7/2007-2013) under grant agreement no. 604102 (HBP), and the European Union’s Horizon 2020 Framework Programme for Research and Innovation under Grant Agreement no. 720270 (Human Brain Project SGA1) and Grant Agreement no. 785907 (Human Brain Project SGA2).


CoreNEURON is developed in a joint collaboration between the EPFL Blue Brain Project, Yale University and the European Human Brain Project. The Blue Brain Project would particularly like to thank Michael Hines for his ongoing support and his extensive knowledge of NEURON use cases over the years, which has helped prioritize the features that have gone into CoreNEURON.