EVDodge: Embodied AI for High-Speed Dodging on a Quadrotor Using Event Cameras

Nitin J. Sanket*1
Chethan M Parameshwara*1
Chahat Deep Singh1
Cornelia Fermüller1
Davide Scaramuzza2
Yiannis Aloimonos1
*Equal Contribution

1Perception and Robotics Group
University of Maryland, College Park

2Robotics & Perception Group
University of Zurich | ETH Zürich


Figure: Top row (left to right): simulation environment used to generate event frames to train our networks; real quadrotor running the network trained in simulation while dodging two obstacles thrown at it simultaneously. Bottom row (left to right): front- and down-facing event frames generated in simulation; simulation ground-truth segmentation corresponding to the front-facing event frame; front- and down-facing event frames from a real experiment; predicted segmentation mask corresponding to the real front-facing event frame; and finally, the segmentation-flow output, which includes both segmentation and optical flow.

The human fascination with ultra-efficient, agile flying beings like birds and bees has propelled decades of research on obstacle avoidance for micro aerial robots. However, most prior research has focused on static obstacle avoidance, largely due to the lack of high-speed visual sensors and scalable visual algorithms. The last decade has seen exponential growth in neuromorphic sensors, which are inspired by nature and have the potential to become the de facto standard for visual motion estimation.
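For readers unfamiliar with event cameras: rather than frames, they emit asynchronous events, each a tuple of pixel location, timestamp, and brightness-change polarity. A common pre-processing step (and the one suggested by the event frames shown in the figure above) is to accumulate events over a short time window into a 2D image. A minimal sketch in pure Python; the event values, frame size, and window below are illustrative, not from the EVDodge pipeline:

```python
# Minimal sketch: accumulate asynchronous events into a signed event frame.
# Each event is (x, y, t, polarity) with polarity in {+1, -1}.
# All numbers below are illustrative, not taken from the paper.

def accumulate_event_frame(events, width, height, t_start, t_end):
    """Sum event polarities per pixel over the time window [t_start, t_end)."""
    frame = [[0] * width for _ in range(height)]
    for x, y, t, polarity in events:
        if t_start <= t < t_end:
            frame[y][x] += polarity
    return frame

events = [
    (1, 0, 0.001, +1),  # brightness increase at pixel (1, 0)
    (1, 0, 0.002, +1),  # second increase at the same pixel
    (2, 1, 0.003, -1),  # brightness decrease at pixel (2, 1)
    (0, 0, 0.020, +1),  # falls outside the window, ignored
]
frame = accumulate_event_frame(events, width=3, height=2,
                               t_start=0.0, t_end=0.010)
# frame == [[0, 2, 0], [0, 0, -1]]
```

In practice the window length trades off latency against frame density; very short windows yield sparse frames, which is one reason learned pre-processing (as used in this work) helps before segmentation.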

After re-imagining the navigation stack of a micro air vehicle as a series of hierarchical competences, we develop a purposive, artificial-intelligence-based formulation of the problem of general navigation. We call this framework "Embodied AI": AI designed around knowledge of the agent's hardware limitations and timing/computation constraints. Following this design philosophy, we develop a complete AI navigation stack for dodging multiple dynamic obstacles on a quadrotor with a monocular event camera and on-board computation. We also present an approach for directly transferring shallow neural networks trained in simulation to the real world by subsuming the pre-processing, itself performed by a neural network, into the pipeline.

We evaluate and demonstrate the proposed approach in many real-world experiments with obstacles of different shapes and sizes, achieving an overall success rate of 70%, including objects of unknown shape and a low-light testing scenario. To our knowledge, this is the first deep-learning-based solution to the problem of dynamic obstacle avoidance using event cameras on a quadrotor. Finally, we extend our work to the pursuit task by merely reversing the control policy, demonstrating that our navigation stack can cater to different scenarios.
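The dodge-to-pursuit extension mentioned above amounts to flipping the sign of the commanded velocity relative to the tracked object. A hypothetical sketch of that idea; the function name, the unit-vector convention, and the mode argument are assumptions for illustration, not the paper's actual controller:

```python
# Hypothetical sketch: a dodge policy becomes a pursuit policy by
# reversing the sign of the velocity command along the direction to
# the tracked object. Names and conventions are illustrative.

def velocity_command(direction_to_object, speed, mode="dodge"):
    """Velocity command: away from the object to dodge, toward it to pursue.

    direction_to_object: unit vector toward the object's projected trajectory.
    speed: commanded speed magnitude (m/s).
    """
    sign = -1.0 if mode == "dodge" else +1.0
    return [sign * speed * d for d in direction_to_object]

dodge = velocity_command([1.0, 0.0, 0.0], speed=2.0, mode="dodge")
# dodge == [-2.0, 0.0, 0.0]  (move away from the object)
pursue = velocity_command([1.0, 0.0, 0.0], speed=2.0, mode="pursuit")
# pursue == [2.0, 0.0, 0.0]  (move toward the object)
```

The point of the sketch is that the perception stack (event frames, segmentation, flow) is unchanged between the two tasks; only the final control mapping is negated.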

