Using tactile sensors for manipulation remains one of the most challenging problems in robotics.
At the heart of these challenges is generalization: how can we train a tactile-based policy that manipulates diverse, unseen objects? In this paper, we propose performing reinforcement learning with only visual tactile sensing as input, on diverse objects in a physical simulator.
Training on large-scale data in simulation enables the policy to generalize to unseen objects.
However, leveraging simulation introduces the Sim2Real transfer problem. To mitigate this problem, we study different tactile representations and evaluate how each affects real-robot manipulation results after transfer.
We conduct experiments on a large and diverse set of real-world objects and show significant improvements over baselines. Interestingly, we find that a particular abstraction of the tactile image yields better Sim2Real results than using the full tactile information.
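To make the idea of abstracting a tactile image concrete, the sketch below shows one plausible form such an abstraction could take: reducing a raw RGB tactile frame to a coarse binary contact mask before it reaches the policy. This is an illustrative assumption, not the representation used in the paper; the function name, thresholds, and resolutions are hypothetical.

```python
# Minimal sketch of a tactile-image abstraction (illustrative only):
# collapse a raw RGB tactile frame into a low-resolution binary contact mask.
import numpy as np

def tactile_contact_mask(rgb_frame: np.ndarray,
                         baseline: np.ndarray,
                         threshold: float = 0.1,
                         out_size: int = 16) -> np.ndarray:
    """Convert a tactile RGB frame (H, W, 3) in [0, 1] into a
    coarse binary contact map of shape (out_size, out_size)."""
    # Per-pixel deviation from the no-contact baseline image.
    diff = np.abs(rgb_frame.astype(np.float32)
                  - baseline.astype(np.float32)).mean(axis=-1)
    contact = (diff > threshold).astype(np.float32)  # binary contact per pixel
    # Block-average pooling to a coarse grid, then re-binarize.
    h, w = contact.shape
    bh, bw = h // out_size, w // out_size
    pooled = contact[:bh * out_size, :bw * out_size] \
        .reshape(out_size, bh, out_size, bw).mean(axis=(1, 3))
    return (pooled > 0.5).astype(np.float32)

# Usage: the policy would consume the abstracted mask instead of raw pixels.
baseline = np.zeros((240, 320, 3))           # hypothetical no-contact frame
frame = np.random.rand(240, 320, 3)          # hypothetical tactile frame
obs = tactile_contact_mask(frame, baseline)  # shape (16, 16)
```

Discarding fine appearance details in this way is one intuition for why an abstracted signal can transfer better: the coarse contact pattern is easier to match between simulation and the real sensor than full-resolution pixel values.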