Microsoft PhD Fellowship awarded to Simla Harma
EPFL PhD student Simla Harma from the Parallel Systems Architecture Laboratory (IC) was recently awarded a prestigious Microsoft PhD Fellowship in the field of cloud architecture. According to Microsoft, “the Microsoft Research PhD Fellowship is a global program that identifies and empowers the next generation of exceptional computing research talent.”
The award confers $15,000 toward her thesis research, as well as an internship opportunity at a Microsoft campus.
I am from Turkey. I got my Bachelor's in Computer Engineering at TOBB University of Economics and Technology (TOBB ETU) and finished a second major in Mathematics, both with a 4.00/4.00 GPA, and ranked first in the university. During my undergraduate studies, I did four internships in both academia and industry. My last internship was at EPFL through the Summer@EPFL Program, in the Parallel Systems Architecture Laboratory (PARSA) under the supervision of Prof. Babak Falsafi, where I worked on accelerating Deep Neural Network microservices. I completed my Master's degree on the same topic in TOBB ETU's Computer Engineering department under the supervision of Prof. Oğuz Ergin while collaborating with Prof. Falsafi.
During my undergraduate years, I always found projects to work on and research tasks to pursue. I am deeply motivated to improve myself and contribute to the world of science. The Summer@EPFL Program was a great opportunity for me to learn more about EPFL and several interesting research projects carried out by its faculty members. Following my internship experience and my collaborative work during my Master's, I knew I wanted to do my PhD at PARSA.
My research focuses on investigating a clean-slate algorithm/architecture co-design of unified accelerators (that can accurately train and perform inference) in both single-node and multi-node systems.
Deep neural networks have become an essential tool for applications in a wide range of areas such as science, health, and entertainment. However, the explosion in the size and complexity of models and datasets has led to a corresponding increase in computing demand. This results in significant electricity consumption, carbon emissions, and an ever-growing environmental footprint. According to the European Commission, the ICT sector is responsible for 5 to 9 percent of global electricity consumption and more than 2 percent of all emissions.
My research targets deep neural network platforms that maximize performance per watt and improve utilization by unifying the infrastructure for training and inference. My work aims to reduce the growing environmental impact of deep neural network workloads.
Microsoft Research works on the challenges introduced by growing DNN model size, model complexity, and data volume. It has multiple projects aiming to build energy- and time-efficient platforms for DNN training and inference. One of these projects, Project Brainwave, uses a block-floating-point-based number format, Microsoft Floating Point (MSFP), to improve logic density for DNN inference.
DNN training is more sensitive to quantization than inference, so we are studying the whole parameter space of Hybrid Block Floating Point (HBFP) for training. This number format creates the opportunity to consolidate two divergent DNN accelerator designs, one for training and one for inference, into a unified accelerator. By using a number format closely related to that of the Project Brainwave inference accelerator, we aim to design accelerators capable of both training and inference.
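The key idea behind block floating point is that a group of values shares a single exponent, so each value stores only a short fixed-point mantissa. The sketch below illustrates that idea in plain Python; the block size and mantissa width are illustrative placeholders, not the actual MSFP or HBFP parameters.

```python
import math

def bfp_quantize(values, block_size=4, mantissa_bits=8):
    """Illustrative block-floating-point quantization: each block of
    values shares one exponent (taken from the largest magnitude in
    the block), and each value keeps a fixed-point mantissa of
    `mantissa_bits` bits relative to that shared exponent.
    Parameters are hypothetical, for illustration only."""
    out = []
    for i in range(0, len(values), block_size):
        block = values[i:i + block_size]
        # Shared exponent: exponent of the largest magnitude in the block.
        max_abs = max(abs(v) for v in block)
        shared_exp = math.frexp(max_abs)[1] if max_abs != 0 else 0
        scale = 2.0 ** (shared_exp - mantissa_bits)
        # Round each value to the nearest representable mantissa step.
        out.extend(round(v / scale) * scale for v in block)
    return out
```

Note how values much smaller than the block's largest element lose precision or flush to zero: this is why training, whose gradients span a wide dynamic range, is more sensitive to such formats than inference.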
The overlap between our work and Microsoft's research shows the potential for new collaborations and new discoveries in this area.
Interested to learn more about funding from Microsoft Research? Contact the Research Office!