Calculations at the Speed of Light: A Photon-Based Analog AI Accelerator Developed in the US

Team of Engineers at the University of Pennsylvania Develops New Silicon Photonic Chip

Scientists from the University of Pennsylvania's School of Engineering and Applied Science have created a new chip that uses light waves instead of electricity to perform complex mathematical operations. The silicon photonic chip can be manufactured using existing fabrication technology and can serve as a co-processor alongside Graphics Processing Units (GPUs) for machine learning tasks.

Numerous Implications for Large-Scale Analog Computing Platforms

The researchers successfully tested the chip with 2×2 and 3×3 vector-matrix multiplication operations, and also demonstrated its operation with a 10×10 matrix. These experiments suggest that the proposed methods hold promise for building large-scale analog computing platforms based on light waves. The scientists detailed this research in an article in the journal Nature Photonics.
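To make the tested operation concrete, here is the digital equivalent of the 3×3 vector-matrix multiplication the chip performs optically; the matrix and vector values below are purely illustrative, not taken from the paper.

```python
import numpy as np

# Hypothetical 3x3 matrix standing in for the fixed linear transformation
# encoded in the photonic structure.
M = np.array([[1.0, 0.5, 0.0],
              [0.2, 1.0, 0.3],
              [0.0, 0.4, 1.0]])

# Illustrative input vector (on the chip, these would be light amplitudes).
x = np.array([1.0, 2.0, 3.0])

# One vector-matrix multiplication: three dot products computed at once.
y = M @ x
print(y)  # [2.  3.1 3.8]
```

The photonic chip evaluates all the dot products in parallel as light propagates through it, which is why the operation happens at the speed of light rather than over many sequential clock cycles.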

Key Features of the Silicon Photonic Chip

The design of the new chip builds on a proof of concept for manufacturing waveguides and amorphous lenses directly on a silicon wafer using standard etching and wafer-processing techniques. Existing methods for making such structures face limitations, such as narrow bandwidth and high sensitivity to manufacturing errors, which impede the scalability of these architectures.

As the developers explain, rather than using a silicon wafer of uniform height, they reduce the height in certain areas by about 150 nanometers. These height modifications provide a means of controlling how light moves through the chip: their distribution can be tuned to scatter light in specific patterns, enabling mathematical computations to be performed at the speed of light.

Essentially, waveguides are etched into the silicon and a lens system is created on it. Light signals then pass through the network of waveguides, which transforms them deterministically according to the input signals. This allows certain calculations to be offloaded from a typical GPU, speeding up computations for artificial intelligence and machine learning tasks.
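One way to picture the co-processor role described above is to treat the etched waveguide-and-lens region as a fixed linear operator applied to the input light amplitudes. The sketch below is a conceptual digital model only; the `PhotonicMatMul` class and its interface are hypothetical and do not reflect the authors' actual hardware or API.

```python
import numpy as np

rng = np.random.default_rng(0)

class PhotonicMatMul:
    """Hypothetical stand-in for a photonic co-processor that holds a
    fixed matrix and applies it to input vectors."""

    def __init__(self, T):
        # T models the fixed linear transformation realized by the
        # etched silicon structure.
        self.T = np.asarray(T, dtype=float)

    def forward(self, x):
        # On the real chip this happens as light propagates through the
        # structure; here we model the ideal linear behavior digitally.
        return self.T @ x

# Illustrative 10x10 matrix, matching the largest size demonstrated.
T = rng.standard_normal((10, 10))
x = rng.standard_normal(10)

chip = PhotonicMatMul(T)
y = chip.forward(x)

# The analog result should match the digital reference computation,
# which is what lets the device offload this step from a GPU.
assert np.allclose(y, T @ x)
```

In a real system, the GPU would handle the nonlinear and control-heavy parts of a machine learning workload while repeated matrix multiplications, the dominant cost in neural networks, are dispatched to the photonic device.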

This post was last modified on 02/22/2024

Author: Julia Jackson