A common pain point in machine learning projects is the amount of annotated data needed to train models. In this paper, the authors introduce Contrastive Predictive Coding (CPC), a method that lets models learn representations (features) in an unsupervised manner. The ultimate goal is a training process that requires far less annotated data.
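To make the idea concrete, here is a minimal NumPy sketch of the contrastive objective at the heart of CPC (the InfoNCE loss): a context vector is scored against one "positive" sample (a true future embedding, correlated with the context) and several "negative" distractors, and the model is penalized by the negative log-probability of picking the positive. All names, dimensions, and the toy data below are illustrative, not from the paper.

```python
import numpy as np

def info_nce_loss(context, positive, negatives):
    """InfoNCE-style contrastive loss: score the true sample (positive)
    against distractors (negatives) given a context vector."""
    candidates = np.vstack([positive, negatives])   # shape (1+K, d), positive at row 0
    scores = candidates @ context                   # dot-product similarities, shape (1+K,)
    scores -= scores.max()                          # subtract max for numerical stability
    log_probs = scores - np.log(np.exp(scores).sum())  # log-softmax over candidates
    return -log_probs[0]                            # -log p(positive | context)

# Toy data: the positive is correlated with the context, negatives are random.
rng = np.random.default_rng(0)
d = 8
context = rng.normal(size=d)
positive = context + 0.1 * rng.normal(size=d)
negatives = rng.normal(size=(5, d))
loss = info_nce_loss(context, positive, negatives)
```

Minimizing this loss pushes the encoder to produce representations where the context is more similar to true future samples than to distractors, which is how useful features emerge without any labels.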