**Stabilizing Contrastive RL: Techniques for Offline Goal Reaching**
Chongyi Zheng, Benjamin Eysenbach, Homer Walke, Patrick Yin, Kuan Fang,
Ruslan Salakhutdinov, Sergey Levine
Paper, Code

*__Abstract__:
In the same way that the computer vision (CV) and natural language processing (NLP) communities have developed self-supervised methods, reinforcement learning (RL) can be cast as a self-supervised problem: learning to reach any goal, without requiring human-specified rewards or labels. However, actually building a self-supervised foundation for RL faces some important challenges. Building on prior contrastive approaches to this RL problem, we conduct careful ablation experiments and discover that a shallow and wide architecture, combined with careful weight initialization and data augmentation, can significantly boost the performance of these contrastive RL approaches on challenging simulated benchmarks. Additionally, we demonstrate that, with these design decisions, contrastive approaches can solve real-world robotic manipulation tasks, with tasks being specified by a single goal image provided after training.*
Evaluation on Real Manipulation Tasks
===============================================================================
Below, we show examples of the behavior learned by stable contrastive RL and the baselines, GC-IQL and GCBC, on the real manipulation tasks. All methods successfully solve the simplest task, "reach the eggplant," while stable contrastive RL achieves a 60% success rate on the other two tasks, where the baselines fail. Note that our method casts reinforcement learning as a self-supervised problem, solving these goal-conditioned control tasks without using any reward function.
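To illustrate what "self-supervised, without any reward function" means here, below is a minimal sketch (with our own hypothetical names) of a contrastive critic objective: state-action embeddings are scored against goal embeddings, and the only supervision is which goal was actually reached later in the same trajectory.

```python
# Sketch of an InfoNCE-style goal-conditioned critic loss; tensor names and
# shapes are assumptions for illustration, not the authors' implementation.
import torch
import torch.nn.functional as F


def contrastive_critic_loss(sa_repr: torch.Tensor, goal_repr: torch.Tensor) -> torch.Tensor:
    """sa_repr:   (B, D) embeddings of (state, action) pairs, e.g. phi(s, a).
    goal_repr: (B, D) embeddings of goals reached later in the same trajectory,
               e.g. psi(g). Row i of both tensors comes from the same trajectory;
               the other rows in the batch serve as negatives.
    """
    logits = sa_repr @ goal_repr.T                        # (B, B) similarity matrix
    labels = torch.arange(logits.shape[0], device=logits.device)
    return F.cross_entropy(logits, labels)                # diagonal entries are positives
```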
**TASK:** Pick the red spoon and place it on the left of the pot.
 
 
**TASK:** Push the can to the wall.
 
 
**TASK:** Reach the eggplant on the table.
 
 
Evaluation on Simulated Manipulation Tasks
===============================================================================
We also visualize examples of the behavior learned by stable contrastive RL and the same baselines on the simulated manipulation tasks. Stable contrastive RL is able to solve multi-stage goal-conditioned control tasks, while the baselines complete only a single stage or fail to complete the tasks.
**TASK:** Pick the green object in the drawer, place it in the tray, and then close the drawer.
 
 
**TASK:** Push the orange block on the table and then open the drawer.
 
 
**TASK:** Pick the green object on the table and place it in the tray.
 
 
**TASK:** Close the drawer.
 
 
Failure Cases
-------------------------------------------------------------------------------
For the task below, the agent pushes the orange block but fails to close the drawer.
**TASK:** Push the orange block on the table and then close the drawer.
 
--------------------------