Sim2Real Manipulation on Unknown Objects with Tactile-based Reinforcement Learning

ICRA 2024

Paper

Abstract

Using tactile sensors for manipulation remains one of the most challenging problems in robotics. At the heart of these challenges is generalization: how can we train a tactile-based policy that can manipulate unseen and diverse objects? In this paper, we propose to perform reinforcement learning with only visual tactile sensing inputs on diverse objects in a physical simulator. Training with large-scale data in simulation enables the policy to generalize to unseen objects. However, leveraging simulation introduces the Sim2Real transfer problem. To mitigate this problem, we study different tactile representations and evaluate how each affects real-robot manipulation results after transfer. We conduct our experiments on a large set of real-world objects and show significant improvements over baselines. Interestingly, we find that a particular abstraction of the tactile image leads to better Sim2Real results than using the full information.

Pivoting

We use tactile images to train the reinforcement learning policy for pivoting tasks. The tactile images on the right side of the video show, from top to bottom, the RGB image, the difference image, and the binary image.
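
As a concrete illustration, the sketch below shows one plausible way to derive these three representations from a raw tactile frame. The function, the no-contact reference frame, and the threshold value are illustrative assumptions, not details taken from the paper.

```python
import cv2
import numpy as np

def tactile_representations(frame, reference, threshold=20):
    """Derive three tactile representations from a raw sensor frame.

    `frame` and `reference` are HxWx3 uint8 tactile images, where
    `reference` was captured with no contact. The threshold value is
    an assumption for illustration, not a value from the paper.
    """
    # 1) RGB: the raw tactile image as captured by the sensor camera.
    rgb = frame

    # 2) Difference image: per-pixel deviation from the no-contact
    #    reference, which isolates the gel deformation.
    diff = cv2.absdiff(frame, reference)

    # 3) Binary image: threshold the grayscale difference so that only
    #    the contact region remains, discarding sensor-specific appearance.
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)

    return rgb, diff, binary
```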

Real-World Experiments

Tactile-Based Policy

Tactile-Based Policy on a Soft Surface

PCA Angle Policy
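
The name suggests a baseline policy that observes a scalar contact angle, extracted with PCA from the contact region of the binary tactile image, instead of the full image. The sketch below is a hedged guess at such a feature extractor; the function name and all details are assumptions inferred from the baseline's name, not from the paper.

```python
import numpy as np

def pca_contact_angle(binary):
    """Estimate the contact-line angle (radians) from a binary tactile image.

    Hypothetical sketch: fit the principal axis of the contact pixels.
    """
    ys, xs = np.nonzero(binary)                 # coordinates of contact pixels
    if len(xs) < 2:
        return 0.0                              # no reliable contact region
    pts = np.stack([xs, ys], axis=1).astype(np.float64)
    pts -= pts.mean(axis=0)                     # center before the covariance
    cov = pts.T @ pts / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    major = eigvecs[:, np.argmax(eigvals)]      # principal axis of the patch
    return float(np.arctan2(major[1], major[0]))
```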

Simulation

Tactile-Based Policy

The transparent green objects show the desired orientation of each object relative to the gripper's current orientation.
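
Such a goal visualization implies an orientation-tracking objective. Below is a minimal sketch of how the remaining angular error between the current and desired object orientations might be measured, assuming quaternion orientations expressed in the gripper frame; the exact reward used in the paper may differ.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def orientation_error(q_obj, q_goal):
    """Angle (radians) still needed to rotate the object onto the goal.

    `q_obj` and `q_goal` are (x, y, z, w) quaternions in the gripper
    frame. A hedged sketch; the paper's reward may be defined differently.
    """
    # Relative rotation that maps the current orientation onto the goal.
    r_rel = R.from_quat(q_goal) * R.from_quat(q_obj).inv()
    # The norm of the rotation vector equals the rotation angle.
    return float(np.linalg.norm(r_rel.as_rotvec()))
```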