EVDodgeNet: Deep Dynamic Obstacle Dodging with Event Cameras


Nitin J. Sanket*1
Chethan M. Parameshwara*1
Chahat Deep Singh1
Ashwin Kuruttukulam1
Cornelia Fermüller1
Davide Scaramuzza2
Yiannis Aloimonos1
*Equal Contribution

1Perception and Robotics Group
at
University of Maryland, College Park

and
2Robotics & Perception Group
at
University of Zurich | ETH Zürich

Presented at ICRA 2020
Code and Dataset Released: GitHub






Abstract

Dynamic obstacle avoidance on quadrotors requires low latency, and event cameras are a class of sensors particularly suited to such scenarios. In this paper, we present a deep-learning-based solution for dodging multiple dynamic obstacles on a quadrotor with a single event camera and onboard computation. Our approach uses a series of shallow neural networks to estimate both the ego-motion and the motion of independently moving objects. The networks are trained in simulation and transfer directly to the real world without any fine-tuning or retraining. We evaluate and demonstrate the proposed approach in numerous real-world experiments with obstacles of different shapes and sizes, achieving an overall success rate of 70%, including objects of unknown shape and a low-light testing scenario. To our knowledge, this is the first deep-learning-based solution to the problem of dynamic obstacle avoidance using event cameras on a quadrotor. Finally, we extend our work to the pursuit task by merely reversing the control policy, demonstrating that our navigation stack can cater to different scenarios.
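For intuition, here is a minimal, hypothetical sketch of how such a perception-to-action pipeline could be wired together: a shallow network predicts a moving-object mask and optical flow from an event frame, the mask-weighted mean flow gives the object's image-plane motion, the dodge command opposes it, and pursuit simply negates the dodge command. All class and function names below are illustrative assumptions, not the paper's actual architecture or code.

    # Hypothetical sketch of an EVDodgeNet-style pipeline (not the authors' code).
    # A shallow CNN segments moving objects in an event frame; the object's mean
    # optical flow gives a dodge direction; pursuit simply negates the command.
    import torch
    import torch.nn as nn

    class ShallowSegFlowNet(nn.Module):
        """Toy stand-in for the paper's shallow networks: predicts a per-pixel
        object mask and a 2-channel optical flow from a 2-channel event frame."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            )
            self.mask_head = nn.Conv2d(16, 1, 1)   # object segmentation logits
            self.flow_head = nn.Conv2d(16, 2, 1)   # (u, v) flow per pixel

        def forward(self, event_frame):
            feats = self.encoder(event_frame)
            return torch.sigmoid(self.mask_head(feats)), self.flow_head(feats)

    def dodge_command(mask, flow, eps=1e-6):
        """Velocity command opposite to the mask-weighted mean object flow."""
        w = mask / (mask.sum() + eps)
        mean_flow = (flow * w).sum(dim=(-1, -2)).squeeze()  # (u, v)
        return -mean_flow  # move against the object's image-plane motion

    if __name__ == "__main__":
        net = ShallowSegFlowNet()
        frame = torch.randn(1, 2, 128, 128)       # fake 2-channel event frame
        mask, flow = net(frame)
        v_dodge = dodge_command(mask, flow)
        v_pursue = -v_dodge                        # pursuit: reversed policy
        print("dodge:", v_dodge.tolist(), "pursue:", v_pursue.tolist())

The sign flip in the last step mirrors the paper's observation that pursuit falls out of the same stack by merely reversing the control policy.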





Figure: A real quadrotor running EVDodgeNet to dodge two obstacles thrown at it simultaneously. From left to right in the bottom row: (a) Raw event frame as seen from the front event camera. (b) Segmentation output. (c) Segmentation-flow output, which combines segmentation and optical flow. (d) Simulation environment in which EVDodgeNet was trained. (e) Segmentation ground truth. (f) Simulated front-facing event frame. All images in this paper are best viewed in color.
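The "raw event frame" in (a) is typically built by accumulating asynchronous events over a short time window. The snippet below sketches one common such representation, assuming events arrive as (x, y, t, polarity) tuples with one channel per polarity; this layout is an illustrative choice, not necessarily the exact representation used in the paper.

    # Sketch of accumulating asynchronous events into a 2-channel event frame
    # (one channel per polarity). The (x, y, t, polarity) layout is assumed.
    import numpy as np

    def events_to_frame(events, height, width):
        """events: (N, 4) array of (x, y, t, polarity) with polarity in {-1, +1}."""
        frame = np.zeros((2, height, width), dtype=np.float32)
        x = events[:, 0].astype(int)
        y = events[:, 1].astype(int)
        pol = (events[:, 3] > 0).astype(int)   # channel 0: negative, 1: positive
        np.add.at(frame, (pol, y, x), 1.0)     # count events per pixel/polarity
        return frame

    # Example: 1000 random events on a 128x128 sensor
    rng = np.random.default_rng(0)
    ev = np.stack([rng.integers(0, 128, 1000),   # x
                   rng.integers(0, 128, 1000),   # y
                   np.sort(rng.random(1000)),    # timestamps
                   rng.choice([-1, 1], 1000)], axis=1).astype(float)
    frame = events_to_frame(ev, 128, 128)
    print(frame.shape, frame.sum())              # (2, 128, 128) 1000.0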




Paper

Nitin J. Sanket*, Chethan M. Parameshwara*, Chahat Deep Singh, Ashwin Kuruttukulam, Cornelia Fermüller, Davide Scaramuzza, Yiannis Aloimonos. "EVDodgeNet: Deep Dynamic Obstacle Dodging with Event Cameras." IEEE International Conference on Robotics and Automation (ICRA), 2020.

* Equal Contribution





[pdf]
[Supplementary]
[arXiv]
[IEEE ICRA 2020]
[Github]
[Bibtex]