Forum Numerica - Benoit Cottereau

Benoit Cottereau
Abstract

Deep neural networks can achieve impressive performance on numerous visual tasks but remain vulnerable to data corruptions in practical scenarios. Moreover, they usually rely on a large number of real-valued parameters, which limits their deployment on neuromorphic chips for embedded applications. In this talk, I will present recent work from my group showing how event-based cameras and spiking neural networks (SNNs) can be combined to develop robust and energy-efficient artificial vision systems for scene understanding that do not suffer from these limitations.
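To give a feel for the spiking dynamics underlying SNNs, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron driven by a binary event stream, such as one produced by an event-based camera pixel. The parameters and the `lif_neuron` helper are illustrative choices for this sketch, not taken from the speaker's work:

```python
def lif_neuron(spike_train, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0, w=0.5):
    """Simulate a leaky integrate-and-fire neuron on a binary input spike train.

    The membrane potential decays toward zero with time constant tau and is
    incremented by w for each input event; crossing v_thresh emits an output
    spike and resets the potential.
    """
    v = 0.0
    out = []
    for s in spike_train:
        v += dt * (-v / tau) + w * s  # leaky integration of weighted events
        if v >= v_thresh:
            out.append(1)   # output spike
            v = v_reset     # reset after firing
        else:
            out.append(0)
    return out

# A burst of three input events is needed to push the neuron over threshold.
events = [1, 1, 1, 0, 0, 1, 1, 1, 0, 0]
print(lif_neuron(events))  # → [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]
```

Because the neuron only computes when events arrive and communicates with binary spikes rather than real-valued activations, this style of processing maps naturally onto low-power neuromorphic hardware, which is the efficiency argument made in the abstract.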

About the speaker

CNRS research director at the IPAL laboratory (Singapore) and at the CerCo laboratory (Toulouse, France). I lead the Efficient AI research group at IPAL and the Spatial Vision team at CerCo. I am also responsible for the integrative neuroscience master's program at Toulouse III University. My research focuses on visual processing in biological and artificial systems, with a particular interest in visual scene understanding, e.g., semantic segmentation, depth estimation, and motion prediction. I notably develop bio-inspired vision systems based on event-based cameras and spiking neural networks. My work has numerous applications in terms of technological outputs (e.g., for designing bio-inspired AI systems) and in the clinical domain (e.g., for assisting patients with visual pathologies).