By M Ganeshkumar, PhD Student, NUS Graduate School of Science and Engineering (NGS), Department of Electrical & Computer Engineering (ECE), on 20 January 2020
GPUs/TPUs are used to increase processing speed when training deep learning models because of their parallel processing capabilities. Reinforcement learning, on the other hand, is predominantly CPU-intensive because of the sequential interaction between the agent and the environment. With the recent popularity of deep reinforcement learning (deep RL) algorithms, understanding how to shorten training time given the available resources becomes imperative. The purpose of this report is to highlight some considerations for making the best use of the computational resources available for deep RL, specifically the size of the network used.
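To make the bottleneck concrete, the sketch below illustrates why the agent-environment loop is CPU-bound: each `env.step()` must complete before the next action can be chosen, and the per-step forward pass through a small policy network is too cheap for GPU parallelism to help. This is an illustrative example only; it assumes PyTorch and the classic `gym` API (4-tuple `step()` return) with a `CartPole-v1` environment, none of which are specified in the report.

```python
import time

import gym
import torch
import torch.nn as nn

# A small policy network; the per-step forward pass is tiny, so on a GPU
# host-to-device transfer overhead can dominate the actual compute.
policy = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))

env = gym.make("CartPole-v1")  # hypothetical environment choice
obs = env.reset()

start = time.time()
for _ in range(1000):
    # The interaction is inherently sequential: the next observation
    # only exists after the environment has processed the last action.
    with torch.no_grad():
        logits = policy(torch.as_tensor(obs, dtype=torch.float32))
    action = int(logits.argmax())
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
print(f"1000 sequential steps took {time.time() - start:.2f}s")
```

Profiling a loop like this under different network sizes and devices is one way to see where the compute budget is actually spent before committing to a hardware configuration.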