Penn engineers are developing a new chip using a deep neural network of optical waveguides capable of classifying nearly 2 billion frames per second

This article is written as a summary by Marktechpost Staff based on the paper 'An on-chip photonic deep neural network for image classification'. All credit for this research goes to the researchers of this project. Check out the paper and post.


Penn engineers have designed a new chip that uses a deep neural network of optical waveguides to recognize and classify an image in less than a nanosecond without the need for a separate processor or memory unit.

The study, published in Nature, explains how the chip’s many optical neurons are linked together by optical wires, or ‘waveguides’, to build a deep network of multiple layers of neurons that resembles the human brain. Information flows through the layers of the network, with each step helping to classify the input image into one of the learned categories. The images classified by the chip in the study were hand-drawn, letter-like characters.

Artificial intelligence (AI) is used in systems ranging from text prediction to medical diagnosis. Many of these systems are built on artificial neural networks: brain-inspired electrical analogs of biological neurons that are first trained on a known data set, such as a collection of photographs, and then used to detect or classify new data points.
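As a rough illustration of the layered classification idea (software only, not the photonic hardware itself), a small feedforward network can be sketched in NumPy; the layer sizes are arbitrary and the weights are random placeholders standing in for values a real network would learn from training data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-layer network: 64-pixel (8x8) input -> 16 -> 16 -> 4 output classes.
# These weights are random placeholders; a trained network learns them from data.
W1 = rng.normal(size=(64, 16))
W2 = rng.normal(size=(16, 16))
W3 = rng.normal(size=(16, 4))

def relu(x):
    """Simple nonlinearity applied between layers."""
    return np.maximum(0, x)

def classify(image_pixels):
    """Pass a flattened 8x8 image through the layers; return the winning class index."""
    h = relu(image_pixels @ W1)   # first layer of neurons
    h = relu(h @ W2)              # second layer
    logits = h @ W3               # output layer: one score per category
    return int(np.argmax(logits))

print(classify(rng.random(64)))  # an index in 0..3
```

Each matrix multiplication plays the role of one layer of interconnected neurons; on the photonic chip, analogous operations are carried out by light propagating through waveguides rather than by sequential arithmetic.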

The researchers’ chip, which measures less than a square centimeter, can recognize and classify an image in less than a nanosecond without using a separate processor or memory unit.

Source: https://penntoday.upenn.edu/news/penn-engineers-chip-can-classify-nearly-two-billion-images-second

In the conventional neural networks used for image recognition, the image of the target object is first formed on an image sensor, such as a smartphone’s digital camera. The sensor converts the light into electrical signals, which are then converted into binary data that computer processors can store, analyze and classify. Accelerating these capabilities is essential for a variety of applications, including facial recognition, automatic recognition of text in photographs, and helping self-driving cars spot obstacles.
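The sequential pipeline described above can be sketched as three stages run one after another; every name and threshold here is hypothetical, chosen only to make the stages concrete:

```python
# Hypothetical sketch of the conventional electronic pipeline the article
# describes: sense light -> digitize -> classify. Each stage must finish
# before the next begins, which is the sequential bottleneck in question.

def sense_light(scene):
    """Image sensor: photons -> analog electrical signal (toy attenuation)."""
    return [p * 0.9 for p in scene]

def digitize(analog):
    """Analog-to-digital conversion: signal -> 8-bit binary pixel values."""
    return [round(v * 255) for v in analog]

def classify(pixels):
    """Processor stage: store, analyze, and classify the binary data."""
    return "letter" if sum(pixels) > 300 else "blank"

print(classify(digitize(sense_light([0.2, 0.8, 0.5, 0.9]))))  # prints "letter"
```

The photonic chip collapses these stages: light from the scene is processed directly in the waveguides, so no conversion to electrical signals or binary data is needed.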

While consumer-grade image classification in most applications is well served by digital chips performing billions of calculations per second, more demanding tasks, such as identifying moving objects, recognizing 3D objects, and classifying microscopic body cells, push even the most sophisticated technology to its limits. The current speed limit of these technologies is the linear sequence of computational steps in a processor governed by a clock-based schedule.

Penn engineers have developed the first scalable chip that classifies and recognizes images almost instantly, overcoming this limitation. An electrical and systems engineering professor, along with a postdoctoral fellow and a graduate student, eliminated the four main time-consuming components of a traditional computer chip: the conversion of optical signals to electrical ones, the conversion of input data to binary format, a large memory module, and clock-based computations.

They achieved this by using a deep optical neural network on a 9.3-square-millimeter chip to directly process the light received from the object of interest.

