Named Entity Disambiguation for Noisy Text. Yotam Eshel, Noam Cohen, Kira Radinsky, Shaul Markovitch, Ikuya Yamada, and Omer Levy. CoNLL 2017. [pdf] We present WikilinksNED, a large-scale Named Entity Disambiguation dataset of text fragments from the web, which is significantly noisier and more challenging than existing news-based datasets. Code & Data: The code and data are […]

Zero-Shot Relation Extraction via Reading Comprehension. Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. CoNLL 2017. [pdf] [slides] We show that relation extraction can be reduced to answering simple reading comprehension questions, by associating one or more natural-language questions with each relation slot. Code & Data: The code and data are available here. […]
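
The reduction itself is easy to state, so a minimal sketch may help; the templates below and the answer(question, context) interface are illustrative stand-ins, not the paper's actual code or question inventory.

    # Each relation slot is paired with one or more natural-language
    # question templates; "XXX" marks where the entity is filled in.
    # (Illustrative examples, not the paper's template set.)
    TEMPLATES = {
        "educated_at": ["Where did XXX study?", "Which university did XXX attend?"],
        "occupation": ["What did XXX do for a living?"],
    }

    def extract(relation, entity, sentence, answer):
        """Reduce slot filling to QA. `answer(question, context)` stands in
        for any extractive reading comprehension model that returns a span,
        or None when the question is unanswerable (i.e., the relation does
        not hold in this sentence)."""
        for template in TEMPLATES[relation]:
            span = answer(template.replace("XXX", entity), sentence)
            if span is not None:
                return span
        return None

Because the questions are phrased in natural language rather than tied to relations seen in training, a strong reading comprehension model can in principle answer questions about relation types it never observed, which is what makes the setup zero-shot.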

A Strong Baseline for Learning Cross-Lingual Word Embeddings from Sentence Alignments. Omer Levy, Anders Søgaard, and Yoav Goldberg. EACL 2017. This paper draws both empirical and theoretical parallels between the embedding and alignment literatures, and suggests that sources of information beyond the traditional signal of bilingual sentence-aligned corpora may substantially improve cross-lingual word embeddings.

Modeling Extractive Sentence Intersection via Subtree Entailment. Omer Levy, Ido Dagan, Gabriel Stanovsky, Judith Eckle-Kohler, and Iryna Gurevych. COLING 2016. [pdf] Sentence intersection captures the semantic overlap of two texts, generalizing over paradigms such as textual entailment and semantic text similarity. Despite its modeling power, it has received little attention because it is difficult for […]

Annotating Relation Inference in Context via Question Answering. Omer Levy and Ido Dagan. ACL 2016. [pdf] [supplementary] [slides] We convert the inference task to one of simple factoid question answering, allowing us to easily scale up to 16,000 high-quality examples. Code: The code used to extract assertions and create the dataset is available here. Data: […]

Learning to Exploit Structured Resources for Lexical Inference. Vered Shwartz, Omer Levy, Ido Dagan, and Jacob Goldberger. CoNLL 2015. [pdf] [supplementary] This paper presents a supervised framework for automatically selecting an optimized subset of resource relations for a given target inference task. Our approach enables the use of large-scale knowledge resources, thus providing a rich […]
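
To make "selecting an optimized subset of resource relations" concrete, here is a purely illustrative greedy forward-selection sketch; it is not the paper's actual optimization procedure, and evaluate is a hypothetical scoring function over held-out inference data.

    def select_relations(relations, evaluate):
        """Greedy forward selection over resource relations (illustrative).

        `relations` is an iterable of relation names, e.g. WordNet's
        "hypernym" or "synonym"; `evaluate(subset)` is a hypothetical
        function scoring a subset on the target inference task."""
        chosen, best = set(), evaluate(set())
        improved = True
        while improved:
            improved = False
            for rel in sorted(set(relations) - chosen):
                score = evaluate(chosen | {rel})
                if score > best:
                    best, pick, improved = score, rel, True
            if improved:
                chosen.add(pick)
        return chosen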

A Simple Word Embedding Model for Lexical Substitution. Oren Melamud, Omer Levy, and Ido Dagan. VSM Workshop 2015. [pdf] We propose a simple model for lexical substitution, which is based on the popular skip-gram word embedding model. The novelty of our approach is in explicitly leveraging the context embeddings generated within the skip-gram model, which […]
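
A minimal numpy sketch of the additive scoring measure along the lines the paper describes, assuming two lookup tables from the same trained skip-gram model: W for the word (input) embeddings and C for the usually discarded context (output) embeddings. The names are mine; the idea of scoring against C is the paper's.

    import numpy as np

    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    def substitute_score(sub, target, context, W, C):
        """Score candidate `sub` as a substitute for `target`. Fit to the
        target uses word embeddings; fit to the surrounding context words
        uses the skip-gram model's context embeddings."""
        target_fit = cos(W[sub], W[target])
        context_fit = sum(cos(W[sub], C[c]) for c in context)
        return (target_fit + context_fit) / (len(context) + 1)

Ranking the vocabulary by such a score yields substitutes that fit both the target's meaning and the concrete context, with no task-specific training.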

Improving Distributional Similarity with Lessons Learned from Word Embeddings. Omer Levy, Yoav Goldberg, and Ido Dagan. TACL 2015. [pdf] [errata] [slides] We reveal that much of the performance gains of word embeddings are due to certain system design choices and hyperparameter optimizations, rather than the embedding algorithms themselves. Code: The word representations used in this […]
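
One of the transferable design choices the paper identifies is context distribution smoothing: word2vec's trick of raising context counts to the 0.75 power, ported to a traditional PPMI matrix. A dense toy sketch (variable names are mine); real vocabularies need sparse matrices.

    import numpy as np

    def smoothed_ppmi(counts, alpha=0.75):
        """Positive PMI over a (words x contexts) co-occurrence matrix,
        with the context distribution raised to `alpha` and renormalized,
        mirroring word2vec's negative-sampling distribution."""
        counts = np.asarray(counts, dtype=float)
        total = counts.sum()
        p_wc = counts / total
        p_w = counts.sum(axis=1, keepdims=True) / total
        p_c = counts.sum(axis=0, keepdims=True) ** alpha
        p_c /= p_c.sum()
        with np.errstate(divide="ignore"):
            pmi = np.log(p_wc / (p_w * p_c))
        return np.maximum(pmi, 0.0)  # clip negative cells: PPMI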

Do Supervised Distributional Methods Really Learn Lexical Inference Relations? Omer Levy, Steffen Remus, Chris Biemann, and Ido Dagan. Short paper in NAACL 2015. [pdf] [slides] Distributional representations of words have been recently used in supervised settings for recognizing lexical inference relations between word pairs, such as hypernymy and entailment. We investigate a collection of these […]
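
The supervised setup under investigation is easy to sketch: represent a word pair by concatenating its two embeddings (or taking their difference) and train a linear classifier. Everything below is a generic illustration of that setup, with emb standing in for any pretrained embedding table.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def pair_features(pairs, emb):
        """Concatenate the embeddings of x and y for each pair (x, y);
        the difference emb[y] - emb[x] is a common alternative."""
        return np.array([np.concatenate([emb[x], emb[y]]) for x, y in pairs])

    clf = LogisticRegression(max_iter=1000)
    # clf.fit(pair_features(train_pairs, emb), train_labels)

The diagnostic is a lexical split, ensuring no word appears in both training and test pairs; the performance drop under this split suggests such models memorize which y words are prototypical hypernyms rather than learning a relation between x and y.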

Neural Word Embedding as Implicit Matrix Factorization. Omer Levy and Yoav Goldberg. NIPS 2014. [pdf] We analyze skip-gram with negative-sampling (SGNS), a word embedding method introduced by Mikolov et al., and show that it is implicitly factorizing a word-context matrix, whose cells are the pointwise mutual information (PMI) of the respective word and context pairs, […]
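
The result suggests a direct alternative to SGNS training: build the shifted positive PMI (SPPMI) matrix, where the shift is log k for k negative samples, and factorize it with SVD. A dense toy sketch; the symmetric square-root split of the singular values is one common choice, and real corpora need sparse matrices.

    import numpy as np

    def sppmi_svd(counts, k=5, dim=50):
        """Explicitly factorize the matrix SGNS implicitly factorizes:
        cell (w, c) holds max(PMI(w, c) - log k, 0), i.e. shifted PPMI."""
        counts = np.asarray(counts, dtype=float)
        total = counts.sum()
        p_w = counts.sum(axis=1, keepdims=True) / total
        p_c = counts.sum(axis=0, keepdims=True) / total
        with np.errstate(divide="ignore"):
            pmi = np.log((counts / total) / (p_w * p_c))
        sppmi = np.maximum(pmi - np.log(k), 0.0)
        U, S, _ = np.linalg.svd(sppmi, full_matrices=False)
        return U[:, :dim] * np.sqrt(S[:dim])  # word vectors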