Speaker: Tamas Endrei.

Abstract:

Although deep reinforcement learning has been around for more than a decade, traditional deep learning best practices have until recently seen little adoption in the field. This talk covers deep learning techniques uncovered through RL-motivated research, touching on topics such as parallelizable recurrent neural networks, rational activation functions, bias-free networks, and dropout variants, among others. The aim is to highlight underappreciated deep learning ideas—whether introduced in RL or in other fields—that can be valuable for a wide range of applications.