Artificial intelligence is everywhere! It helps us drive our cars, unlocks our phones with our faces, and even chooses the next movie we watch on Netflix. With artificial intelligence emerging at every corner of our lives, we might wonder what the limits of AI are, or where it cannot beat a human mind, assuming such a limit exists. To help us find out, let's take a look at a classic intelligent application development workflow.

Cooking a meal…

To represent this workflow, let's place ourselves in the kitchen of a restaurant, where the chef and the cooking assistants play the roles of the manager and the engineering team. To cook a meal, the first step is to have the ingredients at our disposal. The ingredients of AI development are the data, i.e. any piece of information useful for making an intelligent prediction on the specific problem. For example, in the case of the Netflix recommender system, the data is what you and your fellow users liked in the past.

Once the ingredients are gathered on the table, we are ready for step two: cooking. Cooking the ingredients can be seen as applying a recipe (a series of operations) to the ingredients in order to produce a good meal. Similarly, in data science, once the data is gathered, we apply a mathematical model to it in order to produce a prediction.

Now, imagine the cooking team has finished preparing a meal and sends it to a customer. A couple of minutes later the meal comes back: the customer didn't like it and said something was wrong. Any good chef would immediately change and try to improve the recipe to come up with a better meal next time, right? Well, the same thing happens in AI: once our model produces a prediction, we evaluate it, and if the prediction is wrong, we slightly modify the model so that it produces a better prediction next time. This step is referred to as model training.

Finally, after many trials and errors, the cooking team ends up convinced they have the perfect recipe. They know this recipe will most probably not be liked by every single customer, but according to all the iterations they have seen, it is the one with the highest probability of being liked. The final recipe corresponds to a trained model in AI: a model that can finally be applied without any modification and that will provide predictions with a given confidence.
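The evaluate-and-adjust loop above can be sketched in a few lines of code. This is a toy, hypothetical example (the data, learning rate, and single-weight model are all illustrative assumptions, not anything from a real system): a one-parameter model "tastes the feedback" on each prediction and nudges its recipe slightly in the right direction.

```python
import random

# Toy data: pairs (x, y) roughly following y = 3 * x, plus a little noise.
random.seed(0)
data = [(x, 3 * x + random.uniform(-0.1, 0.1)) for x in range(1, 20)]

weight = 0.0           # the "recipe": initially a bad guess
learning_rate = 0.001  # how strongly we react to each complaint

for epoch in range(200):
    for x, y in data:
        prediction = weight * x               # serve the meal
        error = prediction - y                # customer feedback
        weight -= learning_rate * error * x   # adjust the recipe slightly

# After many trials and errors, the weight settles near the true value 3.
```

After enough rounds of feedback, the adjustments become tiny and the "final recipe" (the trained weight) is kept fixed, which is exactly the trained model described above.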

When the ingredients are lacking…

The restaurant analogy can also explain one of the biggest limitations of such AI applications. When making a meal, a large part of the final taste relies on the ingredients: their quality needs to be as high as possible and their quantity sufficient. The same is true in AI: when the quality or the quantity of the data is lacking, it is simply impossible to train a model that makes interesting predictions. Acquiring enough data may not be a problem for big companies such as Google, Facebook or Netflix, but for smaller companies and projects it is often too expensive, or simply impossible, to find the data needed to develop their own artificial intelligence application.

Data augmentation to solve it all…

When ingredients are missing in a restaurant, the only solution is to go and buy new ingredients. Fortunately, in AI, when data is missing, other solutions exist. One of them, called "data augmentation", is the one I will focus on. Imagine you possess only half of the necessary ingredients to cook a meal, or half the data needed to train a model. Using data augmentation in this scenario consists of applying a totally independent model to the little data you managed to find in order to generate completely new, synthetic data. Such a model is called a generative model: it has been trained on another, larger source of unrelated data, which allows it to generate data for many different applications. Once the generative model has been applied, you end up with enough data to follow the workflow described previously and create a model that gives confident predictions.
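A minimal sketch of the idea follows. To keep it self-contained, the "generative model" here is just random jitter around the real samples; in practice you would use a proper pretrained generative model (for images, text, etc.). Everything in this snippet (the sample values, the `noise` level, the function name) is an illustrative assumption, not part of the article.

```python
import random

random.seed(42)

# The little data we managed to find: half of what we would need.
real_samples = [1.0, 2.0, 3.0, 4.0, 5.0]

def generate_synthetic(samples, n_new, noise=0.2):
    """Create n_new synthetic points by lightly perturbing random real samples.

    A stand-in for a generative model: each synthetic point is plausible
    because it stays close to a real observation.
    """
    synthetic = []
    for _ in range(n_new):
        base = random.choice(samples)
        synthetic.append(base + random.uniform(-noise, noise))
    return synthetic

# Double the dataset: real data plus freshly generated synthetic data.
augmented = real_samples + generate_synthetic(real_samples, n_new=5)
```

The augmented dataset can then be fed into the usual training workflow described earlier, as if we had gathered twice as many ingredients in the first place.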


Data augmentation aims to solve one of the largest problems of any artificial intelligence development pipeline: the lack of data. It does so by augmenting the data at hand using a generative model trained on a larger, general-knowledge dataset, enabling better performance for any specific model training that would otherwise suffer from the lack of data.

Many thanks,
Guillaume
