Harnessing light to build deep neural networks

Demetri Psaltis © 2023 EcoCloud/EPFL

Demetri Psaltis has long been known as a pioneer in the field of optical neural networks, but the technology fell out of favor in the 1990s. It is back, however, and the STI professor has renewed his interest in this area.

It is not often that professors from EPFL's School of Engineering (STI) are featured in the pages of The Economist. In an article from December 2022, entitled "Artificial intelligence and the rise of optical computing", the author stated that Demetri Psaltis of the Optics Laboratory “was among the first to use optical neural networks for face recognition.” Psaltis was indeed among the first to build such networks as early as the 1980s. However, he has not been focused solely on this subject for 40 years: it is a lot more complicated – and interesting – than that.

Neural networks, lattices of simple processing units (nodes) joined by weighted connections (edges), process data in a way inspired by our nervous system. Rather than being programmed explicitly, they learn by adjusting the strengths of their connections from examples; ChatGPT is built on one such deep neural network. Optical neural networks carry data with photons rather than electrons, exploiting light’s speed and inherent capacity for parallel processing.
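
To make that picture slightly more concrete, here is a minimal sketch (an illustration for this article, not one of Psaltis’s models) of how such a network computes: each layer takes a weighted sum over its incoming connections and passes the result through a simple nonlinearity. The weights below are random placeholders; a real network learns them from data.

    import numpy as np

    # Minimal sketch of a tiny two-layer network (placeholder weights;
    # a real network learns these values from examples).
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # layer 1: 3 inputs -> 4 hidden nodes
    W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)   # layer 2: 4 hidden nodes -> 2 outputs

    def forward(x):
        h = np.maximum(0.0, W1 @ x + b1)   # weighted sum over edges + nonlinearity
        return W2 @ h + b2                 # second weighted sum gives the output

    print(forward(np.array([0.5, -1.0, 2.0])))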

Psaltis described the basis of this new technology in Applied Optics in 1985: "Optical techniques offer an effective means for the implementation of programmable global interconnections of very large numbers of identical parallel logic elements."

The world was not ready for optical neural networks

In the 1990s, Psaltis abandoned optical neural networks due to a lack of practical devices that could be integrated and an insufficient understanding of complex neural networks. While he may have decided to leave neural networks on the back burner, Psaltis found new ways to expand the field of optics. Chemical advances in optical components, reconfigurable optical circuits, low-energy applications for optical devices, and microfluidics all feature in Psaltis’s work at Caltech until 2007 and at EPFL thereafter, but there is no mention of neural networks in his publications from 2003 until 2019. Then the subject is back, with a vengeance.

"What we have now is Google, supercomputers, fiber optic networks,” explains the professor. “So, everybody is talking about optical neural networks again. In industry, over a billion dollars has been invested in this technology!"

So what are Psaltis and his researchers working on now, and how do neural networks feature in it? We can look briefly at two examples.

From optics to neural networks – and back again

Just last year, in a paper written with Christophe Moser of the Laboratory of Applied Photonics Devices, Psaltis wrote: “We use modern data-driven deep neural networks-based methods for imaging, projection in scattering media and specifically multimode fibers.” Multimode fiber technology sends many rays of light through the same fiber simultaneously, each travelling at a different reflection angle, so it can carry many channels of data in parallel. This makes it faster and more energy-efficient than traditional binary electronics.
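
As a rough illustration of the idea (not the group’s actual method), a multimode fiber is often modelled as a “transmission matrix” that mixes all the input channels at once; recovering the original pattern from the scrambled output is the kind of inverse problem that data-driven deep networks are trained to solve. The matrix and sizes below are invented for the sketch.

    import numpy as np

    # Illustrative sketch only: approximate a multimode fiber by a fixed random
    # "transmission matrix" T that scrambles many light channels (modes) at once.
    rng = np.random.default_rng(1)
    n_modes = 64
    T = rng.normal(size=(n_modes, n_modes)) + 1j * rng.normal(size=(n_modes, n_modes))

    x = rng.normal(size=n_modes)   # input pattern launched into the fiber
    y = T @ x                      # all modes propagate and mix in parallel

    # Recovering x from the scrambled output y is the inverse problem; here T is
    # known, so a direct linear solve suffices for this toy case, whereas the
    # research setting uses trained deep networks.
    x_hat = np.linalg.solve(T, y)
    print(np.allclose(x_hat, x))   # True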

As well as using optical computing to build neural networks, he is building neural networks that help design components for optical computing. During the manufacture of microlenses, engineers can now simulate the behavior of light passing through them using MaxwellNet: a deep neural network with a difference. "We built a deep neural network out of Maxwell's equations. With MaxwellNet, the laws of physics are the rules. You don't have to compile a database or train the model so much as to run it through our application, and let James Clerk Maxwell decide if your model will work or not."
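
As a toy illustration of that “physics as the rules” idea (a sketch for this article, not the published MaxwellNet code), one can score a candidate light field by how strongly it violates a wave equation derived from Maxwell’s equations, here a one-dimensional Helmholtz equation discretized with finite differences, and use that violation as the training loss instead of labelled data.

    import numpy as np

    # Toy "physics as loss" sketch (not the actual MaxwellNet code): score a
    # candidate field E(x) by how badly it violates the 1-D Helmholtz equation
    #     E''(x) + (k * n(x))**2 * E(x) = 0,
    # which follows from Maxwell's equations for a monochromatic wave.
    def helmholtz_residual_loss(E, n, dx, k):
        d2E = (E[2:] - 2 * E[1:-1] + E[:-2]) / dx**2   # finite-difference E''
        residual = d2E + (k * n[1:-1])**2 * E[1:-1]    # equation residual
        return np.mean(residual**2)                    # training would drive this to 0

    x = np.linspace(0.0, 1.0, 201)
    dx = x[1] - x[0]
    k = 2 * np.pi / 0.5                 # wavenumber for a 0.5-unit wavelength
    n = np.ones_like(x)                 # homogeneous medium, e.g. inside a simple lens

    E_good = np.sin(k * x)              # exact solution of the equation
    E_bad = np.sin(0.5 * k * x)         # a field that does not satisfy it

    print(helmholtz_residual_loss(E_good, n, dx, k))   # close to zero
    print(helmholtz_residual_loss(E_bad, n, dx, k))    # orders of magnitude larger

Training a network against such a residual, rather than against a database of measured examples, is what lets the laws of physics act as the rules.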

It sounds like science fiction, but it is just another chapter in a lifelong adventure: harnessing light, surfing photons, conjuring matrices of data, and watching them dance.

This article was written by John Maxwell, head of communication at EcoCloud. Read the original full article on the EcoCloud website here.