EVDodgeNet: Deep Dynamic Obstacle Dodging with
Event Cameras

Nitin J. Sanket*1
Chethan M Parameshwara*1
Chahat Deep Singh1
Cornelia Fermüller1
Davide Scaramuzza2
Yiannis Aloimonos1
*Equal Contribution

1Perception and Robotics Group
University of Maryland, College Park

2Robotics & Perception Group
University of Zurich | ETH Zürich

New! To be presented at ICRA 2020
Code and Dataset Released: GitHub


Dynamic obstacle avoidance on quadrotors requires low latency. Event cameras are a class of sensors particularly well suited to such scenarios. In this paper, we present a deep-learning-based solution for dodging multiple dynamic obstacles on a quadrotor with a single event camera and onboard computation. Our approach uses a series of shallow neural networks to estimate both the ego-motion and the motion of independently moving objects. The networks are trained in simulation and transfer directly to the real world without any fine-tuning or retraining. We successfully evaluate and demonstrate the proposed approach in many real-world experiments with obstacles of different shapes and sizes, achieving an overall success rate of 70%, including objects of unknown shape and a low-light testing scenario. To our knowledge, this is the first deep-learning-based solution to the problem of dynamic obstacle avoidance using event cameras on a quadrotor. Finally, we also extend our work to the pursuit task by merely reversing the control policy, demonstrating that our navigation stack can cater to different scenarios.
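The last sentence of the abstract notes that dodging and pursuit share the same navigation stack, with only the control policy reversed. A minimal, hypothetical sketch of that idea is below; `avoidance_command`, its `gain`, and the use of the obstacle's image-plane velocity are illustrative assumptions, not the authors' implementation:

```python
def avoidance_command(obstacle_velocity, gain=1.0, pursue=False):
    """Map a tracked obstacle's 2D velocity (e.g., from segmentation flow)
    to a quadrotor velocity command.

    Dodge mode commands motion opposite to the obstacle's velocity;
    setting pursue=True merely flips the sign, turning the same policy
    into a pursuit controller.
    """
    vx, vy = obstacle_velocity
    sign = 1.0 if pursue else -1.0  # dodge: move away; pursue: move toward
    return (sign * gain * vx, sign * gain * vy)


# Example: the same tracked motion yields opposite commands in the two modes.
dodge = avoidance_command((1.0, -2.0), gain=0.5)               # (-0.5, 1.0)
chase = avoidance_command((1.0, -2.0), gain=0.5, pursue=True)  # (0.5, -1.0)
```

The single sign flip is the point: no retraining of the perception networks is needed to switch tasks.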

Figure: A real quadrotor running EVDodgeNet to dodge two obstacles thrown at it simultaneously. From left to right in bottom row: (a) Raw event frame as seen from the front event camera. (b) Segmentation output. (c) Segmentation flow output which includes both segmentation and optical flow. (d) Simulation environment where EVDodgeNet was trained. (e) Segmentation ground truth. (f) Simulated front facing event frame. All the images in this paper are best viewed in color.
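The "raw event frame" in panel (a) is an image-like accumulation of the asynchronous event stream, a common way to feed events to a CNN. A hedged sketch of one such accumulation follows, assuming events arrive as (x, y, timestamp, polarity) tuples; the function name and layout are illustrative, not the paper's code:

```python
def events_to_frame(events, width, height):
    """Accumulate events into a 2D frame by summing polarities per pixel.

    events: iterable of (x, y, timestamp, polarity) with polarity in {+1, -1}.
    Returns a height x width nested list; timestamps are ignored here, so
    this is the simplest possible event-frame representation.
    """
    frame = [[0] * width for _ in range(height)]
    for x, y, _t, p in events:
        if 0 <= x < width and 0 <= y < height:  # drop out-of-bounds events
            frame[y][x] += p
    return frame


# Example: two positive events at (0, 0) and one negative event at (1, 1).
f = events_to_frame([(0, 0, 0.1, 1), (0, 0, 0.2, 1), (1, 1, 0.3, -1)], 2, 2)
# f == [[2, 0], [0, -1]]
```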

