One of the fundamental challenges in reinforcement learning (RL) for robotics is that collecting sufficient data can be both time-consuming and expensive. In this paper, we formalize a concept of time reversal symmetry in Markov decision processes (MDPs), building on the established structure of dynamically reversible Markov chains (DRMCs) and time reversibility in classical physics. We then investigate the utility of this concept in reducing the sample complexity of robot learning. We observe that exploiting the structure of time reversal in an MDP allows every transition experienced by an agent to be transformed into a feasible reverse-time transition, effectively doubling the number of experiences gathered from the environment. To test the usefulness of this synthesized data, we introduce a novel approach called time symmetric data augmentation (TSDA) and investigate its application to both proprioceptive and pixel-based observations in off-policy, model-free RL. Empirical evaluations show that these synthetic transitions can enhance the sample efficiency of RL agents in time-reversible scenarios without friction or contact. We also test the method in less idealized environments, where TSDA can significantly degrade sample efficiency and policy performance, yet can still improve sample efficiency under the right conditions. Ultimately, we conclude that time symmetry shows promise in robot learning, but only if the environment and reward structure are of an appropriate form.
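The core mechanism described above — turning each stored transition into a feasible reverse-time transition — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the functions `reverse_state`, `reverse_action`, and `reverse_reward` are hypothetical, environment-specific maps (here instantiated for a frictionless 1-D point mass, where time reversal flips velocity and negates the applied force, and the reward is assumed symmetric).

```python
from collections import deque

def time_reverse_transition(s, a, r, s_next,
                            reverse_state, reverse_action, reverse_reward):
    """Map a forward transition (s, a, r, s') to a reverse-time
    counterpart (T(s'), A(a), R(r), T(s)), where T, A, R are
    environment-specific time-reversal maps (assumed, not from the paper)."""
    return (reverse_state(s_next), reverse_action(a),
            reverse_reward(r), reverse_state(s))

# Hypothetical example environment: 1-D frictionless point mass,
# state = (position, velocity).
reverse_state = lambda s: (s[0], -s[1])  # time reversal flips velocity
reverse_action = lambda a: -a            # applied force changes sign
reverse_reward = lambda r: r             # assume reward is time-symmetric

buffer = deque(maxlen=100_000)  # stand-in for an off-policy replay buffer

def store_with_tsda(transition):
    """Store the real transition plus its synthetic reverse,
    doubling the experience collected from each environment step."""
    buffer.append(transition)
    s, a, r, s_next = transition
    buffer.append(time_reverse_transition(
        s, a, r, s_next, reverse_state, reverse_action, reverse_reward))

# One real environment step yields two replay entries.
store_with_tsda(((0.0, 1.0), 0.5, 1.0, (1.0, 1.5)))
```

Under these assumed reversal maps, the single stored step produces the synthetic transition ((1.0, -1.5), -0.5, 1.0, (0.0, -1.0)) alongside the original; in environments with friction or contact this reversed transition may not be dynamically feasible, which is the failure mode the abstract warns about.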
Contributors: Brett Barkley, Amy Zhang, David Fridovich-Keil