Physics + Machine Learning — Nanophotonic Neural Networks
Nanophotonic neural networks are a nascent area at the intersection of physics and machine learning, promising low-energy, ultra-high-throughput ML implemented optically.
Most computing hardware is computationally wasteful when applied to artificial neural networks, because it was designed for traditional von Neumann computing schemes.
“One of the major problems is that [von Neumann computing schemes] generally create intermediate data to implement the inference. The data movement, especially off-chip, causes a lot of penalty in energy and latency. That is a bottleneck.” ~ Meng-Fan Chang, National Tsing Hua University Professor
One of the advantages of optical systems, in terms of energy requirements, is that they are largely passive: no active computation is needed, so they generate virtually no heat. Further, the "clock rate" of an optical system is not as limited as in traditional electronic computing; it is bounded by the laser modulation frequency, which can range from roughly 100 GHz to 1 THz.
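A back-of-envelope calculation shows why that clock rate matters. The sketch below assumes (my assumption, not a claim from the article) that an N×N photonic mesh performs N² multiply-accumulates per pass of light, clocked at the laser modulation frequency:

```python
# Back-of-envelope throughput of an optical matrix-vector multiply.
# Assumption: an N x N photonic mesh does N*N multiply-accumulates
# per "clock", where the clock is the laser modulation frequency.

def optical_macs_per_second(n: int, modulation_hz: float) -> float:
    """Multiply-accumulate operations per second for an n x n mesh."""
    return n * n * modulation_hz

# A 64 x 64 mesh at the article's lower bound of 100 GHz:
rate = optical_macs_per_second(64, 100e9)
print(f"{rate:.2e} MAC/s")  # on the order of 4e14 MAC/s
```

Even at the conservative end of the stated range, a modest mesh would reach hundreds of tera-operations per second.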
Of course, it’s a very new idea, so you can’t expect the same maturity as in standard electronics fabrication.
A Stanford University group recently published a paper describing physically implementable optical activation functions for nanophotonic neural networks (NNNs, anyone?). They also open-sourced their simulator.
Their simulation platform, written in Python, can be installed with `pip install neuroptica`. You can play around with it yourself, or follow their demo of planar data classification, which includes a cell specifying an electro-optic activation.
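The electro-optic activation in the paper taps off a fraction of the optical power, detects it electronically, and feeds the result back as a phase shift on the remaining signal. As a rough illustration of the idea, here is a toy intensity-dependent nonlinearity in NumPy; this is a simplified stand-in, not the paper's exact transfer function, and the parameter names `g` and `phi_b` are my own labels:

```python
import numpy as np

def toy_electro_optic_activation(z, g=1.0, phi_b=0.0):
    """Toy intensity-dependent nonlinearity: the detected optical
    power |z|^2 modulates the signal itself. A simplified stand-in
    for an electro-optic activation, NOT the published formula.
    g is a hypothetical feedback gain, phi_b a hypothetical bias phase."""
    return np.cos(g * np.abs(z) ** 2 + phi_b) * z

z = np.linspace(0.0, 2.0, 5).astype(complex)
print(toy_electro_optic_activation(z))
```

The key point is that the nonlinearity acts on optical intensity rather than a digital value, which is what makes it implementable in hardware.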
Conclusion, or “Why Optical”?
Optical neural networks process information at the speed of light, though the technology has not yet reached a scalable manufacturing phase.
This article was written by Frederik Bussler, former CEO at bitgrit. Join our data scientist community or our Telegram for insights and opportunities in data science.