
(T8) Recurrent Neural Nets with applications to Language Modeling

Bruno Gonçalves

Abstract:
Many phenomena in our increasingly technological world evolve and change over time, from state to state or from value to value, with each state depending, implicitly or explicitly, on its recent history. Obvious examples are time series (sequences of numeric values) and language (sequences of words). To properly understand and model them, we must take into account the sequence of values seen in previous steps and even long-term temporal correlations. In this tutorial we will explore how to use Recurrent Neural Networks to model and forecast series of events, using language as an intuitive example. Their advantages and disadvantages with respect to more traditional approaches will be highlighted, and simple implementations using the Keras Python library will be demonstrated. You will implement deep neural networks that are capable of guessing what the next letter in a word, or even what the next word in a sentence, might be. Code and slides will be made available on GitHub.
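To make the recurrence idea concrete before the tutorial's Keras implementations, the sketch below shows a single-layer recurrent step in plain NumPy rather than Keras, with untrained random weights: the hidden state is updated from both the current character and the previous state, and a softmax over the output gives a distribution over the next character. The corpus, hidden size, and weight shapes are illustrative assumptions, not material from the tutorial itself.

```python
import numpy as np

# Toy vocabulary built from a tiny corpus (illustrative assumption).
vocab = sorted(set("hello"))          # ['e', 'h', 'l', 'o']
char_to_idx = {c: i for i, c in enumerate(vocab)}
V, H = len(vocab), 8                  # vocabulary size, hidden size

rng = np.random.default_rng(0)        # untrained weights, for illustration only
Wxh = rng.normal(0, 0.1, (H, V))      # input-to-hidden weights
Whh = rng.normal(0, 0.1, (H, H))      # hidden-to-hidden (recurrent) weights
Why = rng.normal(0, 0.1, (V, H))      # hidden-to-output weights

def step(x_idx, h):
    """One recurrent step: consume one character, update the hidden state."""
    x = np.zeros(V)
    x[x_idx] = 1.0                    # one-hot encode the input character
    h = np.tanh(Wxh @ x + Whh @ h)    # new state depends on input AND history
    logits = Why @ h
    p = np.exp(logits - logits.max()) # softmax over the vocabulary
    return h, p / p.sum()

# Feed the prefix "hell" one character at a time; after the loop,
# p is the model's distribution over the next character.
h = np.zeros(H)
for c in "hell":
    h, p = step(char_to_idx[c], h)
```

In Keras the same pattern is expressed declaratively (e.g. a `SimpleRNN` or `LSTM` layer followed by a `Dense` softmax layer), with the weights learned from data instead of drawn at random; the tutorial's GitHub repository contains the worked examples.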

See: 
https://github.com/bmtgoncalves/RNN

Presenter:
Bruno Gonçalves, JPMorgan Chase