Neural Word Embeddings as Implicit Matrix Factorization

Neural Word Embeddings as Implicit Matrix Factorization.
Omer Levy and Yoav Goldberg. NIPS 2014. [pdf]

We analyze skip-gram with negative-sampling (SGNS), a word embedding method introduced by Mikolov et al., and show that it is implicitly factorizing a word-context matrix, whose cells are the pointwise mutual information (PMI) of the respective word and context pairs, shifted by a global constant.
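To make the claim concrete, here is a minimal sketch of the explicit counterpart the paper studies: building a shifted positive PMI (SPPMI) matrix from word-context counts and factorizing it with truncated SVD. The toy corpus, window size, shift k, and embedding dimension are illustrative choices, not the paper's experimental settings, and the sqrt-of-singular-values weighting is just one common way to split the factorization into word vectors.

```python
import numpy as np
from collections import Counter

# Toy corpus; in practice this would be a large text collection.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
    "the cat chased the dog across the mat".split(),
]
window, k, dim = 2, 5, 3   # context window, negative samples k, embedding size

# Count (word, context) co-occurrences within a symmetric window.
pair_counts = Counter()
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if i != j:
                pair_counts[(w, sent[j])] += 1

vocab = sorted({w for w, _ in pair_counts})
idx = {w: i for i, w in enumerate(vocab)}
D = sum(pair_counts.values())
w_marg, c_marg = Counter(), Counter()
for (w, c), n in pair_counts.items():
    w_marg[w] += n
    c_marg[c] += n

# Shifted positive PMI: max(PMI(w, c) - log k, 0),
# where PMI(w, c) = log( #(w,c) * D / (#(w) * #(c)) ).
M = np.zeros((len(vocab), len(vocab)))
for (w, c), n in pair_counts.items():
    pmi = np.log(n * D / (w_marg[w] * c_marg[c]))
    M[idx[w], idx[c]] = max(pmi - np.log(k), 0.0)

# Factorize with truncated SVD; take W = U_d * sqrt(Sigma_d) as word vectors
# (one of several possible weightings of the factorization).
U, S, _ = np.linalg.svd(M)
W = U[:, :dim] * np.sqrt(S[:dim])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

print("cat ~ dog:", cosine(W[idx["cat"]], W[idx["dog"]]))
```

On a real corpus the same pipeline yields dense embeddings whose objective the paper shows to be closely related to what SGNS optimizes implicitly.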

