0-MMS: Zero-Shot Multi-Motion Segmentation With A Monocular Event Camera


Chethan M. Parameshwara
Nitin J. Sanket
Chahat Deep Singh
Cornelia Fermüller
Yiannis Aloimonos

Perception and Robotics Group
at
University of Maryland, College Park





Abstract

Segmentation of moving objects in dynamic scenes is a key process in scene understanding for navigation tasks. Classical cameras suffer from motion blur in such scenarios, rendering them ineffective. Event cameras, on the contrary, are tailor-made for this problem because of their high temporal resolution and lack of motion blur. We present an approach for monocular multi-motion segmentation that combines bottom-up feature tracking and top-down motion compensation into a unified pipeline, the first of its kind to our knowledge. Using the events within a time interval, our method segments the scene into multiple motions by splitting and merging. We further speed up our method using the concepts of motion propagation and cluster keyslices. The approach was successfully evaluated on both challenging real-world and synthetic scenarios from the EV-IMO, EED, and MOD datasets, outperforming the previous state-of-the-art detection rate by 12% and achieving new state-of-the-art average detection rates of 81.06%, 94.2%, and 82.35% on these datasets, respectively. To enable further research and systematic evaluation of multi-motion segmentation, we present and open-source a new dataset/benchmark called MOD++, which includes challenging sequences and extensive data stratification in terms of camera and object motion, velocity magnitude, direction, and rotational speed.
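The split-and-merge idea in the abstract can be pictured with a small, self-contained sketch. The snippet below illustrates one generic way to group events by image-plane motion: each candidate cluster is motion-compensated by the 2D velocity that maximizes the contrast (variance) of its warped event image, and clusters whose fitted velocities are nearly identical are merged. This is not the authors' 0-MMS implementation; all function names, the velocity grid, the merge tolerance, and the toy data are assumptions made purely for illustration.

```python
# Illustrative sketch only -- NOT the 0-MMS implementation.
# Groups events by image-plane motion using per-cluster motion
# compensation (contrast maximization) followed by a merge step.
import numpy as np

def warp_events(xy, t, v, t_ref=0.0):
    """Warp event coordinates to t_ref under a constant 2D velocity v (px/s)."""
    return xy - (t - t_ref)[:, None] * v[None, :]

def contrast(xy_warped, shape=(180, 240)):
    """Variance of the warped event-count image (higher = sharper edges)."""
    img, _, _ = np.histogram2d(
        xy_warped[:, 1], xy_warped[:, 0],
        bins=shape, range=[[0, shape[0]], [0, shape[1]]])
    return img.var()

def fit_velocity(xy, t, v_grid):
    """Pick the velocity from a coarse grid that maximizes contrast."""
    scores = [contrast(warp_events(xy, t, v)) for v in v_grid]
    return v_grid[int(np.argmax(scores))]

def merge_clusters(labels, velocities, tol=15.0):
    """Merge clusters whose fitted velocities differ by less than tol px/s."""
    ids = list(velocities.keys())
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if np.linalg.norm(velocities[a] - velocities[b]) < tol:
                labels[labels == b] = a
    return labels

# Toy usage: two synthetic event clusters moving with different velocities.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 0.05, size=2000)                      # 50 ms event slice
xy = np.vstack([rng.uniform(20, 60, (1000, 2)) + t[:1000, None] * 200.0,
                rng.uniform(120, 160, (1000, 2)) - t[1000:, None] * 150.0])
labels = np.array([0] * 1000 + [1] * 1000)                 # initial split (e.g. from feature tracks)

v_grid = np.array([[vx, vy] for vx in range(-300, 301, 50)
                            for vy in range(-300, 301, 50)], float)
velocities = {c: fit_velocity(xy[labels == c], t[labels == c], v_grid)
              for c in np.unique(labels)}
labels = merge_clusters(labels, velocities)
print({c: velocities[c] for c in np.unique(labels)})
```

In the paper's pipeline, as the abstract states, the initial hypotheses come from bottom-up feature tracking and are refined top-down by motion compensation; the toy version above only mirrors the general split/merge spirit of that process.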





Figure: Multi-Motion Segmentation with a monocular event camera on an EV-IMO dataset sequence. Top row: the event frames are color-coded by cluster membership. The corresponding grayscale frames are shown in the bottom row. Bounding boxes on the images are color-coded with respect to the objects for reference. Note that grayscale images are not used for computation and are provided for visualization purposes only. All the images in this paper are best viewed in color.




Paper

Chethan M. Parameshwara, Nitin J. Sanket, Chahat Deep Singh, Cornelia Fermüller, Yiannis Aloimonos. "0-MMS: Zero-Shot Multi-Motion Segmentation With A Monocular Event Camera."




[pdf]
[arXiv]
[GitHub]