Today, digital cameras paired with computer vision algorithms running on massively parallel hardware (such as GPUs) provide the main sensing source for the surveillance of environments. In recent years, a radically different, neurobiologically inspired type of vision sensor has emerged: the event-based vision camera. These sensors do not operate on video frames; instead, they mimic how the biological eye perceives light, asynchronously reporting changes of illumination at individual pixels as “events” (or “neuronal spikes”). They overcome several limitations of traditional cameras:
- they offer minimal latency
- they adjust computing resources to the scene complexity
- they show no motion blur
- they require only low-bandwidth data transfers
- they provide a perceivable illumination (dynamic) range of ≥ 100 dB.
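
To make the event data model concrete, the sketch below shows how a single event can be represented and how a batch of events can be accumulated into an image for visualization. This is a generic illustration, not the interface of any particular camera or driver; the names `Event` and `events_to_frame` are our own.

```python
from dataclasses import dataclass
from typing import Iterable

import numpy as np


@dataclass
class Event:
    """A single brightness-change event reported by one pixel."""
    x: int         # pixel column
    y: int         # pixel row
    t: float       # timestamp, e.g. in microseconds
    polarity: int  # +1 = brightness increase, -1 = decrease


def events_to_frame(events: Iterable[Event], width: int, height: int) -> np.ndarray:
    """Accumulate a batch of events into a 2D image for visualization.

    The sensor itself never produces such a frame; pixels in static parts
    of the scene emit no events at all, which is where the bandwidth and
    power savings listed above come from.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    for ev in events:
        frame[ev.y, ev.x] += ev.polarity
    return frame
```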
Overall, such sensors are extremely well suited to low-power continuous monitoring, especially when combined with neuromorphic algorithms. In this research, we explore exactly this combination of novel sensing hardware, novel processors, and novel algorithms.
This unique combination of novel algorithms, low-power sensors, and low-power computing hardware enables new embedded vision systems that consume significantly less power than standard computer vision technology, e.g. for continuous surveillance.
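
As a rough, purely illustrative sketch of why the event-driven model maps to low-power monitoring, the loop below stays essentially idle while the scene is static and only invokes a downstream detector when the event rate within a short time window exceeds a threshold. The names `monitor`, `detector`, `window_us`, and `wake_threshold` are hypothetical and not part of any existing system described here.

```python
from collections import deque
from typing import Callable, Iterable, List, Tuple

# An event as an (x, y, timestamp_us, polarity) tuple.
Event = Tuple[int, int, float, int]


def monitor(event_stream: Iterable[Event],
            detector: Callable[[List[Event]], None],
            window_us: float = 10_000.0,
            wake_threshold: int = 50) -> None:
    """Event-driven monitoring loop whose cost scales with scene activity.

    While the scene is static, almost no events arrive and the loop does
    essentially no work; a burst of events within the sliding time window
    wakes the (comparatively expensive) detector.
    """
    window: deque = deque()
    for ev in event_stream:
        window.append(ev)
        # Discard events that have fallen out of the sliding time window.
        while window and ev[2] - window[0][2] > window_us:
            window.popleft()
        if len(window) >= wake_threshold:
            detector(list(window))
            window.clear()
```

In a deployed system, the detector stand-in would be replaced by a neuromorphic or conventional recognition algorithm; the point of the sketch is only that computation is triggered by scene activity rather than by a fixed frame rate.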