Calculations at the Speed of Light: A Photon-Based Analog AI Accelerator Developed in the US

Team of Engineers at the University of Pennsylvania Develops New Silicon Photonic Chip

Scientists from the University of Pennsylvania's School of Engineering and Applied Science have created a new chip that uses light waves instead of electricity to perform complex mathematical operations. The silicon photonic chip can be manufactured with existing fabrication technology and can serve as a co-processor to Graphics Processing Units (GPUs) for machine-learning tasks.

Numerous Implications for Large-Scale Analog Computing Platforms

The researchers successfully tested the chip on 2×2 and 3×3 vector-matrix multiplication operations, and also demonstrated its operation with a 10×10 matrix. These tests suggest that the proposed approach holds promise for building large-scale analog computing platforms based on light waves. The scientists detail the research in an article in the journal Nature Photonics.
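For reference, the operation the chip evaluates optically is an ordinary vector-matrix multiplication. The sketch below is purely illustrative and is not the authors' code; it simply computes the same 2×2, 3×3, and 10×10 products digitally with NumPy so the reader can see what the optical hardware is standing in for.

```python
import numpy as np

def vector_matrix_multiply(x: np.ndarray, M: np.ndarray) -> np.ndarray:
    """Return y = x @ M, the operation the photonic chip performs in the optical domain."""
    return x @ M

# 2x2 and 3x3 cases, matching the matrix sizes reported in the tests.
x2, M2 = np.array([1.0, 2.0]), np.array([[0.5, 1.0], [2.0, -1.0]])
x3, M3 = np.ones(3), np.eye(3)
print(vector_matrix_multiply(x2, M2))  # [ 4.5 -1. ]
print(vector_matrix_multiply(x3, M3))  # [1. 1. 1.]

# 10x10 demonstration with random values.
rng = np.random.default_rng(0)
x10, M10 = rng.standard_normal(10), rng.standard_normal((10, 10))
print(vector_matrix_multiply(x10, M10).shape)  # (10,)
```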

Key Features of the Silicon Photonic Chip

The design of the new chip is based on a proof of concept for manufacturing waveguides and amorphous lenses directly on a silicon wafer using standard etching and wafer-processing techniques. Current methods of making such structures face limitations such as narrow bandwidth and high sensitivity to manufacturing errors, which impede the scalability of such architectures.

As the developers explain, instead of using a silicon wafer of uniform height, the height is reduced in certain regions by about 150 nanometers. These height modifications control how light propagates through the chip: by patterning where the height changes occur, the light can be scattered in specific ways, enabling mathematical computations to be performed at the speed of light.
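As a rough illustration of why a small height change matters, the toy calculation below estimates the extra optical phase accumulated by light travelling through a region whose effective refractive index has been lowered by etching. The wavelength, index values, and propagation length are assumed for illustration only and are not taken from the paper; the point is simply that a sub-wavelength height change can produce a phase shift of several wavelengths over a short distance, which is what lets a patterned height profile reshape the light field.

```python
import numpy as np

# Assumed, illustrative values (not from the paper).
wavelength = 1.55e-6    # telecom wavelength in metres
n_eff_full = 2.8        # assumed effective index of the full-height silicon slab
n_eff_etched = 2.6      # assumed effective index where material has been etched away
length = 20e-6          # assumed propagation length through the modified region

# Phase accumulated over the region: phi = 2*pi*n_eff*L/lambda.
phi_full = 2 * np.pi * n_eff_full * length / wavelength
phi_etched = 2 * np.pi * n_eff_etched * length / wavelength

# The difference between the two regions is what scatters the light into a chosen pattern.
delta_phi = phi_full - phi_etched
print(f"Relative phase shift: {delta_phi:.2f} rad ({delta_phi / (2 * np.pi):.2f} wavelengths)")
```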

Essentially, waveguides are etched into the silicon and a lens system is created on the same wafer. Light signals then pass through the network of waveguides and are transformed in a fixed way determined by the input signals, so the structure itself carries out the computation. This allows certain calculations to be offloaded from a conventional GPU, speeding up computations for artificial-intelligence and machine-learning tasks.
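To suggest what such offloading could look like from the software side, the hypothetical sketch below routes the matrix-multiply step of a small neural-network layer to an accelerator when one is available and falls back to the CPU otherwise. The PhotonicAccelerator class and its matvec method are invented for illustration; they do not correspond to any real driver or API described in the article, and the "optical" product is simply emulated digitally.

```python
import numpy as np

class PhotonicAccelerator:
    """Hypothetical stand-in for a photonic co-processor that evaluates y = x @ M optically."""

    def __init__(self, matrix: np.ndarray):
        # On a real device the matrix would be encoded in the chip's optical structure;
        # here we just store it and emulate the result digitally.
        self.matrix = matrix

    def matvec(self, x: np.ndarray) -> np.ndarray:
        # Emulated "optical" vector-matrix product.
        return x @ self.matrix

def dense_layer(x, weights, accelerator=None):
    """Compute a dense layer, offloading the multiply to the accelerator if one is present."""
    y = accelerator.matvec(x) if accelerator is not None else x @ weights
    return np.maximum(y, 0.0)  # ReLU nonlinearity stays on the host

# Usage: a 10x10 layer, matching the largest matrix size demonstrated on the chip.
rng = np.random.default_rng(1)
W = rng.standard_normal((10, 10))
x = rng.standard_normal(10)
print(dense_layer(x, W, accelerator=PhotonicAccelerator(W)))
```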
