AcTExplore
Active Tactile Exploration on Unknown Objects

Amir-Hossein Shahidzadeh
Seong Jong Yoo
Pavan Mantripragada
Chahat Deep Singh
Cornelia Fermüller
Yiannis Aloimonos

Perception and Robotics Group
University of Maryland, College Park


IEEE International Conference on Robotics and Automation (ICRA) 2024

TL;DR: We explore and reconstruct an object's surface with robotic fingers equipped with tactile sensors.

Paper
Supplementary Material
arXiv
Demo Video
Code (Coming Soon)
Poster
BibTeX

Abstract

Tactile exploration plays a crucial role in understanding object structures for fundamental robotics tasks such as grasping and manipulation. However, efficiently exploring objects with tactile sensors is challenging, primarily due to large-scale unknown environments and the limited sensing coverage of these sensors. To this end, we present AcTExplore, an active tactile exploration method driven by reinforcement learning for object reconstruction at scale, which automatically explores object surfaces in a limited number of steps. Through sufficient exploration, our algorithm incrementally collects tactile data and reconstructs the 3D shapes of the objects, which can serve as a representation for higher-level downstream tasks. Our method achieves an average of 95.97% IoU coverage on unseen YCB objects while being trained only on primitive shapes.
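To make the coverage metric concrete, a minimal sketch of how IoU coverage between a reconstructed surface and the ground-truth surface could be computed is shown below. It assumes both surfaces are available as point clouds and are voxelized at a common resolution; the voxel size and function names are illustrative, not taken from the paper's code.

```python
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float = 0.005) -> set:
    """Map a point cloud of shape (N, 3) to a set of occupied voxel indices."""
    return set(map(tuple, np.floor(points / voxel_size).astype(int)))

def iou_coverage(reconstructed_pts: np.ndarray,
                 ground_truth_pts: np.ndarray,
                 voxel_size: float = 0.005) -> float:
    """Intersection-over-union between the occupied voxel sets of two surfaces."""
    rec = voxelize(reconstructed_pts, voxel_size)
    gt = voxelize(ground_truth_pts, voxel_size)
    return len(rec & gt) / len(rec | gt)
```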





Fig. 1: Reconstruction of a hammer: (a) shows the tactile sensor's trajectory in 3D. (b) illustrates the corresponding intermediate tactile readings on the hammer's surface; a yellow-to-red gradient denotes time. After thorough tactile exploration, we obtain a complete reconstruction of the object (c), indicating that the tactile sensor can cover the entire object surface through our active strategy.





Overview: This figure illustrates the key steps and components of AcTExplore in a scenario where the sensor moves upward along the jar's edge. We employ Temporal Tactile Averaging for the state representation f to encode consecutive observations, enabling the perception of the sensor's movement, which is vital for learning dexterous actions. We also incorporate an Upper Confidence Bound (UCB) exploration bonus to encourage effective exploration.
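As a rough illustration of these two ingredients, the sketch below fuses the last few tactile depth frames into a single state image (so that recent contact changes, i.e., sensor motion, remain visible) and adds a count-based UCB-style bonus for visiting new surface regions. The buffer length, weighting scheme, and bonus coefficient are illustrative assumptions, not the paper's exact formulation.

```python
from collections import deque
import numpy as np

class TemporalTactileAveraging:
    """Fuse the last k tactile depth frames into one state image."""
    def __init__(self, k: int = 3):
        self.frames = deque(maxlen=k)

    def __call__(self, depth_frame: np.ndarray) -> np.ndarray:
        self.frames.append(depth_frame)
        # Newer frames get larger weights so the average reflects recent motion.
        weights = np.arange(1, len(self.frames) + 1, dtype=np.float32)
        weights /= weights.sum()
        return np.tensordot(weights, np.stack(self.frames), axes=1)

def ucb_bonus(visit_counts: dict, voxel: tuple, total_steps: int, c: float = 0.5) -> float:
    """Count-based UCB-style exploration bonus for touching a surface voxel."""
    n = visit_counts.get(voxel, 0) + 1
    return c * np.sqrt(np.log(total_steps + 1) / n)
```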





Qualitative results on unseen YCB objects with different state and reward settings. From active tactile exploration, we obtain point cloud data of tactile depth readings on the object's surface. To generate a mesh, we apply the Poisson surface reconstruction algorithm.
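For reference, the point-cloud-to-mesh step could look like the following Open3D sketch. The file names and parameters (normal-estimation radius, Poisson depth, density trim quantile) are illustrative assumptions, not the paper's released code.

```python
import numpy as np
import open3d as o3d

# Load the accumulated tactile point cloud (path is a placeholder).
pcd = o3d.io.read_point_cloud("tactile_points.ply")

# Poisson reconstruction needs consistent normals; estimate and orient them.
pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(k=15)

# Run screened Poisson surface reconstruction.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)

# Trim low-density vertices that Poisson tends to extrapolate far from the data.
d = np.asarray(densities)
mesh.remove_vertices_by_mask(d < np.quantile(d, 0.05))

o3d.io.write_triangle_mesh("reconstructed_mesh.ply", mesh)
```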




Acknowledgement

The authors thank Kaiqing Zhang for helpful discussions on the research. The support of the Brin Family Foundation, the Northrop Grumman Mission Systems University Research Program, ONR, and the National Science Foundation is gratefully acknowledged.


Paper

Amir-Hossein Shahidzadeh, Seong Jong Yoo, Pavan Mantripragada, Chahat Deep Singh, Cornelia Fermüller, Yiannis Aloimonos.
AcTExplore: Active Tactile Exploration on Unknown Objects.
IEEE International Conference on Robotics and Automation (ICRA), 2024.