The difference between memory and prediction in recurrent networks

Invited

Abstract

Recurrent networks are often trained to memorize their inputs more accurately, in the hope that better memory will improve the network's ability to predict. We show that networks designed to memorize their input can be arbitrarily bad at prediction. We also find, for several types of input, that one-node networks optimized for prediction come close to the upper bounds on predictive capacity given by Wiener filters, and perform roughly as well as randomly generated five-node networks. Our results suggest that maximizing memory capacity leads to very different networks than maximizing predictive capacity does. We also discuss how close trained recurrent networks can come to optimal prediction.
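For reference, the Wiener-filter bound mentioned above comes from the linear minimum-mean-square-error predictor, which is obtained by solving the Toeplitz normal equations built from the input's autocorrelation. The sketch below (Python/NumPy; the function names and the AR(1) test signal are illustrative assumptions, not taken from the abstract) shows one way such a baseline one-step predictor can be computed.

    import numpy as np

    def wiener_predictor(x, order):
        # Fit a linear MMSE (Wiener) one-step-ahead predictor of the given
        # order to a stationary scalar series x, by solving the Toeplitz
        # normal equations R w = r built from sample autocorrelations.
        x = np.asarray(x, dtype=float)
        n = len(x)
        # Biased sample autocorrelations at lags 0..order.
        acf = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
        R = np.array([[acf[abs(i - j)] for j in range(order)] for i in range(order)])
        r = acf[1:order + 1]
        return np.linalg.solve(R, r)

    def predict(x, w):
        # One-step-ahead predictions: x_hat[t] = sum_k w[k] * x[t - 1 - k].
        p = len(w)
        return np.array([np.dot(w, x[t - 1::-1][:p]) for t in range(p, len(x))])

    # Illustrative input: an AR(1) process, for which the order-1 Wiener
    # predictor is already optimal (leading coefficient should be ~0.8).
    rng = np.random.default_rng(0)
    x = np.zeros(10000)
    for t in range(1, len(x)):
        x[t] = 0.8 * x[t - 1] + rng.standard_normal()
    w = wiener_predictor(x, order=5)
    mse = np.mean((x[5:] - predict(x, w)) ** 2)
    print(w, mse)  # MSE should approach the innovation noise variance (~1)

The resulting mean squared error gives a ceiling against which a trained recurrent network's predictions can be compared, in the spirit of the comparison described in the abstract.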

Presenters

  • Sarah Marzen

    Massachusetts Institute of Technology

Authors

  • Sarah Marzen

    Massachusetts Institute of Technology

  • Alexander Hsu

    Mathematics, Claremont McKenna College