Researchers at Oregon State University are making key advances with a new type of optical sensor that more closely mimics the human eye's ability to perceive changes in its visual field. The sensor is a major breakthrough for fields such as image recognition, robotics and artificial intelligence. Findings by OSU College of Engineering researcher John Labram and graduate student Cinthya Trujillo Herrera were published today in Applied Physics Letters.
"You can think of it as a single pixel doing something that would currently require a microprocessor," said Labram, who is leading the research effort with support from the National Science Foundation.
The new sensor could be a perfect match for the neuromorphic computers that will power the next generation of artificial intelligence in applications like self-driving cars, robotics and advanced image recognition, Labram said. Unlike traditional computers, which process information sequentially as a series of instructions, neuromorphic computers are designed to emulate the human brain's massively parallel networks.
"People have tried to replicate this in hardware and have been reasonably successful," Labram said. "However, even though the algorithms and architecture designed to process information are becoming more and more like a human brain, the information these systems receive is still decidedly designed for traditional computers."
A spectacularly complex organ, the eye contains around 100 million photoreceptors. However, the optic nerve has only 1 million connections to the brain, a roughly hundredfold reduction. This means that a significant amount of preprocessing and dynamic compression must take place in the retina before the image can be transmitted.
As it turns out, our sense of vision is particularly well adapted to detect moving objects and is comparatively "less interested" in static images, Labram said. Thus, our visual circuitry gives priority to signals from photoreceptors detecting a change in light intensity. You can demonstrate this yourself by staring at a fixed point until objects in your peripheral vision start to disappear, a phenomenon known as the Troxler effect.
Conventional sensing technologies, like the chips found in digital cameras and smartphones, are better suited to sequential processing, Labram said. Images are scanned across a two-dimensional array of sensors, pixel by pixel, at a set frequency. Each sensor generates a signal with an amplitude that varies directly with the intensity of the light it receives, meaning a static image will result in a more or less constant output voltage from the sensor.
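The conventional behavior described above can be illustrated with a minimal numeric sketch. This is not a model of any real camera chip; the gain constant and the `conventional_pixel` function are purely illustrative, standing in for "output voltage varies directly with intensity":

```python
# Illustrative sketch of a conventional pixel: output voltage is
# simply proportional to the instantaneous light intensity, so a
# static scene produces a constant output. K is an arbitrary gain.

K = 0.5  # volts per unit intensity (hypothetical value)

def conventional_pixel(intensity):
    """Output voltage proportional to the light intensity received."""
    return K * intensity

# The same intensity sampled at four consecutive frames of a
# static scene yields the same voltage every time:
static_scene = [2.0, 2.0, 2.0, 2.0]
voltages = [conventional_pixel(i) for i in static_scene]
print(voltages)  # → [1.0, 1.0, 1.0, 1.0]
```

A static image therefore carries no information in the *change* of the signal, which is exactly what the retinomorphic design exploits.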
In Labram's retinomorphic sensor, the light-sensitive material, a perovskite semiconductor, is applied in ultrathin layers, just a few hundred nanometers thick, and functions essentially as a capacitor that varies its capacitance under illumination. A capacitor stores energy in an electric field.
"The way we test it is, basically, we leave it in the dark for a second, then we turn the lights on and just leave them on," he said. "As soon as the light goes on, you get this big voltage spike, then the voltage quickly decays, even though the intensity of the light is constant. And that's what we want."
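The spike-and-decay behavior Labram describes can be sketched as a leaky differentiator: the output responds to *changes* in light level and relaxes back toward zero under constant illumination. This is a toy model, not the published device physics; the `DECAY` constant is an arbitrary illustrative time constant:

```python
# Toy model of a change-sensitive pixel: a leaky differentiator
# that spikes when the light level changes and decays back toward
# zero while the light stays constant. DECAY is illustrative only.

DECAY = 0.5  # fraction of the previous voltage retained each step

def retinomorphic_response(intensities):
    """Return the voltage trace for a sequence of light intensities."""
    voltage, prev, trace = 0.0, intensities[0], []
    for i in intensities:
        voltage = DECAY * voltage + (i - prev)  # respond to change only
        prev = i
        trace.append(voltage)
    return trace

# Dark for two steps, then the light switches on and stays on:
print(retinomorphic_response([0, 0, 1, 1, 1]))
# → [0.0, 0.0, 1.0, 0.5, 0.25]: a spike at switch-on, then decay
```

The qualitative shape matches the experiment quoted above: a large voltage spike when the light goes on, followed by a rapid decay even though the intensity is constant.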
"We can convert video to a set of light intensities and then put that into our simulation," Labram said. "Regions where a higher-voltage output is predicted from the sensor light up, while the lower-voltage regions remain dark. If the camera is relatively static, you can clearly see all the things that are moving respond strongly. This stays reasonably true to the paradigm of optical sensing in mammals."
"The good thing is that, with this simulation, we can input any video into one of these arrays and process that information in essentially the same way the human eye would," Labram said. "For example, you can imagine these sensors being used by a robot tracking the motion of objects. Anything static in its field of view would not elicit a response, but a moving object would register a high voltage. This would tell the robot immediately where the object was, without any complex image processing."
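The array-level idea in the two quotes above can be sketched by applying a change-sensitive response to every pixel of a sequence of frames. The tiny one-dimensional "frames" and the `frame_response` helper below are hypothetical illustrations, not the researchers' simulation code:

```python
# Hedged sketch of the array simulation described above: each pixel
# outputs the magnitude of its intensity change between frames, so
# static regions stay dark and only moving objects respond.

def frame_response(prev_frame, frame):
    """Per-pixel output: magnitude of the intensity change."""
    return [abs(b - a) for a, b in zip(prev_frame, frame)]

# A bright object (value 9) moving right across a static background:
frames = [
    [9, 1, 1, 1],
    [1, 9, 1, 1],
    [1, 1, 9, 1],
]
for prev, cur in zip(frames, frames[1:]):
    print(frame_response(prev, cur))
# → [8, 8, 0, 0]
# → [0, 8, 8, 0]
# Only the pixels the object leaves and enters respond; the
# static background pixels read zero.
```

This mirrors the behavior Labram describes: the moving object's position is read directly from where the output is high, with no further image processing.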
MEDICA-tradefair.com; Source: Oregon State University