Training recurrent networks online without backtracking
=======================================================

What Is This?
--------------

This code is a demonstrative implementation of the 
algorithms mentioned in the preliminary paper (1).
It implements online optimization algorithms suited 
for recurrent neural network-like models, based on
low-rank approximations of the full differential state.

Rank-one and rank-n approximations are provided, as
well as standard optimization algorithms (truncated
backpropagation through time (BPTT) and real-time
recurrent learning (RTRL)). Recurrent neural networks
(RNNs) and leaky recurrent neural networks (LRNNs) are
implemented, and convenience routines to train them
with the above-mentioned algorithms are also provided.
Finally, the .tar.gz archive contains a test file,
'longmus4', on which you can directly train the models.

How to build?
--------------

To build the code, simply extract the tar.gz file. Check that the C++
linear algebra library Eigen >3.0 (http://eigen.tuxfamily.org/) and
the Boost library are installed, and that the compiler can find them
via #include<eigen3/...> and #include<boost/...>. Then run make from
the exp folder, where the Makefile is located. The code uses C++11,
so make sure your version of GCC is recent enough to build it
consistently.
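Before running make, you can quickly verify that the compiler finds both
header trees. This is only a sketch: it assumes g++ is your compiler, and
the file name header_check.cpp is arbitrary.

```shell
# Write a minimal program that includes one Eigen and one Boost header,
# then ask the compiler to syntax-check it (no binary is produced).
cat > header_check.cpp <<'EOF'
#include <eigen3/Eigen/Core>
#include <boost/version.hpp>
int main() { return 0; }
EOF
if g++ -std=c++11 -fsyntax-only header_check.cpp; then
  echo "headers found"
else
  echo "headers not found"
fi
```

If this prints "headers not found", install Eigen and Boost (or add their
include paths) before building.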

How to use the executables?
-------------------------------------

Once the project is built, you will have access to
executables that allow you to train the two models 
described above with a set of predefined learning 
algorithms in the target folder. An example of such
use is the following::

    target/lrnn_rk1_qdop 20 10 100 longmus4

This command trains an LRNN model used as a predictor
with the QDOP version of the rank-one approximation.
The LRNN will have 20 neurons and 10 randomly selected
connections per neuron. The model will be trained for
100 seconds on the longmus4 example. You can easily
experiment with the different parameters and change the
input file.
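For instance, a sweep over the number of neurons (the first argument) can be
scripted. The sketch below is a dry run that only echoes the commands it
would launch; remove the echo to actually start training.

```shell
# Dry run of a hypothetical parameter sweep: print one training command
# per neuron count, keeping the other arguments fixed.
for n in 10 20 40; do
  echo target/lrnn_rk1_qdop "$n" 10 100 longmus4
done
```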

Advanced usage
--------------

You can freely use and modify the source code.
You may want to try creating new models to test
the algorithms. This is entirely possible, even though
the current algorithm code might need some adaptation.
To produce working models for a given algorithm, you only
have to follow the prerequisites stated in the algorithm's
header file. Keep in mind that there are some (loose)
restrictions on the class of models that can be used with a
given algorithm.

Contacts
--------

If you find any buggy behaviour or implementation mistakes,
feel free to contact us at
corentin.tallec@polytechnique.edu

References
----------
(1) Training recurrent networks online without backtracking,
Yann Ollivier, Corentin Tallec, Guillaume Charpiat
