Data Augmentation

In this demo, we observe the generative model behind data augmentation in action. When given an input sentence, the model adapts itself to produce the 5 best possible alterations of it while staying as similar as possible to the input. These alterations are what you see on your screen after a couple of minutes.
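To picture what the demo returns, here is a minimal sketch that produces five candidate alterations of a sentence with beam search over a generic sequence-to-sequence model. The model name (`t5-small`), the prompt format, and the decoding settings are illustrative assumptions, not the demo's own (non-public) model.

```python
# Hypothetical sketch: produce the 5 best alterations of an input sentence
# with beam search over a generic seq2seq model. "t5-small" and the
# "paraphrase:" prompt are assumptions for illustration only.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

sentence = "The service was quick and the staff were friendly."
inputs = tokenizer("paraphrase: " + sentence, return_tensors="pt")

# Beam search keeps several high-probability candidates and returns
# the 5 best sequences, each a close alteration of the input.
outputs = model.generate(
    **inputs,
    num_beams=5,
    num_return_sequences=5,
    max_length=64,
    early_stopping=True,
)

for i, out in enumerate(outputs, start=1):
    print(i, tokenizer.decode(out, skip_special_tokens=True))
```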

Explanation of the demo

Previous approaches tackled this kind of problem with recurrent neural networks such as LSTMs; although they performed well, they were very slow to train. This generative model uses an attention-based architecture, similar to Google's `transformer`, and runs about 3 times faster while also improving the results. This speed increase allows us to fine-tune the model on the fly to adapt it to its own input.
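The on-the-fly adaptation can be pictured as a few quick gradient steps on the incoming sentence before generating. The sketch below uses a simple reconstruction objective and generic PyTorch training code as an assumed stand-in; the actual objective, learning rate, and number of steps used in the demo are not specified here.

```python
# Hypothetical sketch of fine-tuning on the fly: take a few gradient steps
# on the incoming sentence (here with a trivial reconstruction objective)
# before generating alterations. Objective, learning rate, and step count
# are illustrative assumptions, not the demo's actual recipe.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

sentence = "The service was quick and the staff were friendly."
batch = tokenizer("paraphrase: " + sentence, return_tensors="pt")
labels = tokenizer(sentence, return_tensors="pt").input_ids

model.train()
for _ in range(3):  # only a handful of steps, so adaptation stays fast
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.eval()
# ...then generate the 5 alterations as in the previous sketch.
```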

Demo information

Date: January 2019
Author: Guillaume Raille (Master Student)