Category: Semantic Similarity

A Strong Baseline for Learning Cross-Lingual Word Embeddings from Sentence Alignments

Omer Levy, Anders Søgaard, and Yoav Goldberg. EACL 2017. This paper draws both empirical and theoretical parallels between the embedding and alignment literatures, and suggests that incorporating sources of information beyond the traditional signal of bilingual sentence-aligned corpora may substantially improve cross-lingual word embeddings.
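
One of the paper's central observations is that a very simple signal, the IDs of the aligned sentences a word appears in, already yields a strong baseline. A minimal sketch of that idea, assuming an SGNS-style trainer consumes (word, context) pairs (the `src:`/`tgt:` prefixes and the function name are mine):

```python
def sentence_id_pairs(aligned_corpus):
    """Emit (word, context) pairs where the context is the aligned-sentence ID.

    aligned_corpus: iterable of (source_tokens, target_tokens) sentence pairs.
    Words from both languages share sentence-ID contexts, so translation
    pairs end up with similar vectors under SGNS-style training.
    """
    for sent_id, (src, tgt) in enumerate(aligned_corpus):
        for tok in src:
            yield f"src:{tok}", f"sent:{sent_id}"
        for tok in tgt:
            yield f"tgt:{tok}", f"sent:{sent_id}"
```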

Modeling Extractive Sentence Intersection via Subtree Entailment

Modeling Extractive Sentence Intersection via Subtree Entailment. Omer Levy, Ido Dagan, Gabriel Stanovsky, Judith Eckle-Kohler, and Iryna Gurevych. COLING 2016. [pdf] Sentence intersection captures the semantic overlap of two texts, generalizing over paradigms such as textual entailment and semantic text similarity. Despite its modeling power, it has received little attention because it is difficult for […]

Annotating Relation Inference in Context via Question Answering

Annotating Relation Inference in Context via Question Answering. Omer Levy and Ido Dagan. ACL 2016. [pdf] [supplementary] [slides] We convert the inference task to one of simple factoid question answering, allowing us to easily scale up to 16,000 high-quality examples.

Code: The code used to extract assertions and create the dataset is available here.

Data: […]

A Simple Word Embedding Model for Lexical Substitution

A Simple Word Embedding Model for Lexical Substitution. Oren Melamud, Omer Levy, and Ido Dagan. VSM Workshop 2015. [pdf] We propose a simple model for lexical substitution, based on the popular skip-gram word embedding model. The novelty of our approach is in explicitly leveraging the context embeddings generated within the skip-gram model, which […]
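
The key point is that skip-gram training produces two matrices, one of word embeddings and one of context embeddings, and that a good substitute should be similar both to the target word (in word space) and to the surrounding words (via the context space). A sketch of an additive measure in this spirit, assuming `W` and `C` are the word and context matrices from skip-gram training (the names and helper are mine):

```python
import numpy as np

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def add_score(W, C, index, substitute, target, context_words):
    """Average the substitute's similarity to the target (word embeddings)
    with its similarity to each context word (context embeddings)."""
    s, t = W[index[substitute]], W[index[target]]
    ctx = [C[index[c]] for c in context_words]
    return (cos(s, t) + sum(cos(s, c) for c in ctx)) / (len(ctx) + 1)
```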

Improving Distributional Similarity with Lessons Learned from Word Embeddings

Improving Distributional Similarity with Lessons Learned from Word Embeddings. Omer Levy, Yoav Goldberg, and Ido Dagan. TACL 2015. [pdf] [errata] [slides] We reveal that much of the performance gains of word embeddings are due to certain system design choices and hyperparameter optimizations, rather than the embedding algorithms themselves.

Code: The word representations used in this […]
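
Two of the hyperparameters the paper transfers from SGNS to traditional count-based methods are context distribution smoothing (raising context counts to the power 0.75) and shifting PMI by log k, mirroring SGNS with k negative samples. A minimal sketch over a dense co-occurrence count matrix (the function name and array layout are mine):

```python
import numpy as np

def shifted_ppmi(counts, cds=0.75, k=5):
    """counts[w, c] = co-occurrence count of word w with context c
    (marginals assumed nonzero)."""
    total = counts.sum()
    p_w = counts.sum(axis=1, keepdims=True) / total
    p_c = counts.sum(axis=0, keepdims=True) ** cds   # context distribution smoothing
    p_c = p_c / p_c.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((counts / total) / (p_w * p_c))
    return np.maximum(pmi - np.log(k), 0)            # shift by log(k), then clip
```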

Do Supervised Distributional Methods Really Learn Lexical Inference Relations?

Do Supervised Distributional Methods Really Learn Lexical Inference Relations? Omer Levy, Steffen Remus, Chris Biemann, and Ido Dagan. Short paper in NAACL 2015. [pdf] [slides] Distributional representations of words have recently been used in supervised settings for recognizing lexical inference relations between word pairs, such as hypernymy and entailment. We investigate a collection of these […]
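
The supervised setup under investigation typically represents a candidate pair (x, y) by concatenating the two word vectors and training a linear classifier; the paper's diagnosis is that such classifiers largely learn whether y alone is a prototypical hypernym, mostly ignoring x. A minimal sketch of that setup (the names are mine):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_features(W, index, pairs):
    """Concatenate the two word vectors for each (x, y) candidate pair."""
    return np.array([np.concatenate([W[index[x]], W[index[y]]])
                     for x, y in pairs])

# clf = LogisticRegression(max_iter=1000).fit(
#     pair_features(W, index, train_pairs), train_labels)
```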

Neural Word Embeddings as Implicit Matrix Factorization

Neural Word Embeddings as Implicit Matrix Factorization. Omer Levy and Yoav Goldberg. NIPS 2014. [pdf] We analyze skip-gram with negative-sampling (SGNS), a word embedding method introduced by Mikolov et al., and show that it is implicitly factorizing a word-context matrix, whose cells are the pointwise mutual information (PMI) of the respective word and context pairs, […]
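
Concretely, the paper shows that SGNS with k negative samples implicitly factorizes a matrix whose cells are PMI(w, c) - log k. One practical consequence explored there: build the positive variant of this shifted PMI matrix explicitly and factorize it with SVD to obtain embeddings. A sketch over a dense count matrix (the square-root weighting of the singular values is one of several options discussed in the paper and its follow-up work):

```python
import numpy as np

def svd_embeddings(counts, k=5, dim=100):
    """Factorize the shifted positive PMI matrix with truncated SVD."""
    total = counts.sum()
    p_w = counts.sum(axis=1, keepdims=True) / total
    p_c = counts.sum(axis=0, keepdims=True) / total
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((counts / total) / (p_w * p_c))
    sppmi = np.maximum(pmi - np.log(k), 0)            # shifted positive PMI
    U, S, Vt = np.linalg.svd(sppmi, full_matrices=False)
    return U[:, :dim] * np.sqrt(S[:dim])              # symmetric sqrt weighting
```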

Proposition Knowledge Graphs

Proposition Knowledge Graphs. Gabriel Stanovsky, Omer Levy, and Ido Dagan. AHA! Workshop 2014. [pdf] This position paper proposes a novel representation for Information Discovery: Proposition Knowledge Graphs. These extend the Open IE paradigm by representing semantic inter-proposition relations in a traversable graph.

Linguistic Regularities in Sparse and Explicit Word Representations (Best Paper, CoNLL 2014)

Linguistic Regularities in Sparse and Explicit Word Representations. Omer Levy and Yoav Goldberg. CoNLL 2014. [pdf] [slides] Mikolov et al. showed that word analogies can be recovered from neural embeddings with simple vector arithmetic. This fascinating result raises a question: to what extent are the relational semantic properties a result of the embedding process? Experiments show that the RNN-based embeddings are superior to other dense representations, but how crucial is it for […]
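
The analogy recovery methods at issue can be stated compactly: 3CosAdd searches for the word closest to b - a + a*, while 3CosMul (proposed in this paper) multiplies and divides the shifted cosine similarities instead. A numpy sketch of both, applicable to dense embeddings and sparse explicit vectors alike (the vocabulary handling is mine):

```python
import numpy as np

def analogies(W, vocab, a, a_star, b, eps=1e-3):
    """Answer 'a is to a_star as b is to ?' with 3CosAdd and 3CosMul."""
    W = W / np.linalg.norm(W, axis=1, keepdims=True)  # rows are unit vectors
    index = {w: i for i, w in enumerate(vocab)}
    ia, ia_star, ib = index[a], index[a_star], index[b]
    sim = lambda i: (W @ W[i] + 1) / 2                # cosines shifted to [0, 1]
    add = W @ (W[ia_star] - W[ia] + W[ib])            # 3CosAdd objective
    mul = sim(ib) * sim(ia_star) / (sim(ia) + eps)    # 3CosMul objective
    for scores in (add, mul):
        scores[[ia, ia_star, ib]] = -np.inf           # exclude the question words
    return vocab[int(np.argmax(add))], vocab[int(np.argmax(mul))]
```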

Dependency-Based Word Embeddings

Dependency-Based Word Embeddings. Omer Levy and Yoav Goldberg. Short paper in ACL 2014. [pdf] [slides] While continuous word embeddings are gaining popularity, current models are based solely on linear contexts. In this work, we generalize the skip-gram model with negative sampling introduced by Mikolov et al. to include arbitrary contexts.

Code: The code used in […]
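
Here the contexts are (word, relation) pairs read off a dependency parse: each word takes its modifiers as contexts and its head marked with the inverse relation. A hedged sketch using spaCy as the parser (the paper used a different parser and also collapses prepositions into the relation, which this sketch omits):

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def dependency_contexts(sentence):
    """Yield (word, context) pairs for word2vecf-style training."""
    for tok in nlp(sentence):
        if tok.dep_ == "ROOT":
            continue
        head, word = tok.head.text.lower(), tok.text.lower()
        yield word, f"{head}/{tok.dep_}-1"   # context from the head (inverse relation)
        yield head, f"{word}/{tok.dep_}"     # context from the modifier
```

On the paper's running example, "Australian scientist discovers star with telescope", this style of extraction gives the verb contexts like scientist/nsubj and star/dobj rather than a flat window of neighboring words.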