Named Entity Disambiguation for Noisy Text

Named Entity Disambiguation for Noisy Text. Yotam Eshel, Noam Cohen, Kira Radinsky, Shaul Markovitch, Ikuya Yamada, Omer Levy. CoNLL 2017. [pdf] We present WikilinksNED, a large-scale Named Entity Disambiguation dataset of text fragments from the web, which is significantly noisier and more challenging than existing news-based datasets. Code & Data: The code and data are […]

Zero-Shot Relation Extraction via Reading Comprehension

Zero-Shot Relation Extraction via Reading Comprehension. Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. CoNLL 2017. [pdf] [slides] We show that relation extraction can be reduced to answering simple reading comprehension questions, by associating one or more natural-language questions with each relation slot. Code & Data: The code and data are available here. […]
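
As a rough illustration of the reduction described above, the sketch below pairs each relation slot with natural-language question templates and delegates extraction to a reading-comprehension model. The templates and the `qa_model` callable are hypothetical placeholders, not the paper's released code or schema.

```python
# Illustrative reduction of relation extraction to reading comprehension:
# each relation slot is associated with one or more question templates,
# and a QA model answers the instantiated question over the sentence.
# QUESTION_TEMPLATES and the qa_model interface are hypothetical placeholders.

QUESTION_TEMPLATES = {
    "educated_at": ["Where did {subject} study?",
                    "Which university did {subject} attend?"],
    "occupation":  ["What is {subject}'s job?"],
}

def extract_relation(qa_model, sentence, subject, relation):
    """Return an answer span for (subject, relation), or None if no template
    yields an answer (interpreted as "relation not expressed")."""
    for template in QUESTION_TEMPLATES[relation]:
        question = template.format(subject=subject)
        answer = qa_model(question=question, context=sentence)
        if answer is not None:  # the QA model may abstain
            return answer
    return None

# Usage with any span-extraction QA model wrapped as a callable:
# extract_relation(my_qa_model,
#                  "Turing studied mathematics at King's College, Cambridge.",
#                  subject="Turing", relation="educated_at")
```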

A Strong Baseline for Learning Cross-Lingual Word Embeddings from Sentence Alignments

A Strong Baseline for Learning Cross-Lingual Word Embeddings from Sentence Alignments. Omer Levy, Anders Søgaard, and Yoav Goldberg. EACL 2017. This paper draws both empirical and theoretical parallels between the embedding and alignment literature, and suggests that additional sources of information, beyond the traditional signal of bilingual sentence-aligned corpora, may substantially improve cross-lingual word embeddings.
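
As a hedged sketch of the kind of signal sentence alignments provide, the snippet below pairs every word in an aligned sentence pair with the pair's shared sentence ID, so that translation equivalents acquire similar context distributions; the resulting (word, context) pairs could then be fed to any skip-gram-with-negative-sampling trainer that accepts explicit pairs. This is one natural instantiation of the idea, not the paper's exact recipe.

```python
# Hedged sketch: turn a sentence-aligned bilingual corpus into (word, context)
# pairs where the context is the shared sentence ID. Words that tend to appear
# in mutually aligned sentences end up with similar context distributions.

def sentence_id_pairs(aligned_corpus):
    """aligned_corpus: iterable of (source_tokens, target_tokens) sentence pairs.
    Yields (word, context) pairs where the context is the shared sentence ID."""
    for sent_id, (src_tokens, tgt_tokens) in enumerate(aligned_corpus):
        context = f"SENT_{sent_id}"
        for token in src_tokens:
            yield f"src:{token}", context  # language prefixes keep vocabularies apart
        for token in tgt_tokens:
            yield f"tgt:{token}", context

# Example:
# corpus = [(["the", "cat"], ["le", "chat"]), (["a", "dog"], ["un", "chien"])]
# list(sentence_id_pairs(corpus))[:3]
# -> [('src:the', 'SENT_0'), ('src:cat', 'SENT_0'), ('tgt:le', 'SENT_0')]
```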

Modeling Extractive Sentence Intersection via Subtree Entailment

Modeling Extractive Sentence Intersection via Subtree Entailment. Omer Levy, Ido Dagan, Gabriel Stanovsky, Judith Eckle-Kohler, and Iryna Gurevych. COLING 2016. [pdf] Sentence intersection captures the semantic overlap of two texts, generalizing over paradigms such as textual entailment and semantic text similarity. Despite its modeling power, it has received little attention because it is difficult for […]

Annotating Relation Inference in Context via Question Answering

Annotating Relation Inference in Context via Question Answering. Omer Levy and Ido Dagan. ACL 2016. [pdf] [supplementary] [slides] We convert the inference task to one of simple factoid question answering, allowing us to easily scale up to 16,000 high-quality examples. Code: The code used to extract assertions and create the dataset is available here. Data: […]
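
The snippet below is a hedged sketch of how a relation inference candidate might be rendered as a simple factoid question for annotators; the phrasing and the data format are illustrative assumptions rather than the paper's actual annotation interface.

```python
# Hedged sketch: present a candidate relation inference (does relation r_h
# follow from relation r_p for the same argument pair?) as a yes/no factoid
# question. The question wording and data format are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class InferenceCandidate:
    subject: str
    obj: str
    premise_relation: str     # e.g. "acquired"
    hypothesis_relation: str  # e.g. "own"

def to_annotation_question(c: InferenceCandidate) -> str:
    """Render a candidate inference as a yes/no question for annotators."""
    premise = f"{c.subject} {c.premise_relation} {c.obj}"
    question = f"Does {c.subject} {c.hypothesis_relation} {c.obj}?"
    return f'Given: "{premise}."  Question: {question}'

# Example:
# to_annotation_question(InferenceCandidate("Google", "YouTube", "acquired", "own"))
# -> 'Given: "Google acquired YouTube."  Question: Does Google own YouTube?'
```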

Learning to Exploit Structured Resources for Lexical Inference

Learning to Exploit Structured Resources for Lexical Inference. Vered Shwartz, Omer Levy, Ido Dagan and Jacob Goldberger. CoNLL 2015. [pdf] [supplementary] This paper presents a supervised framework for automatically selecting an optimized subset of resource relations for a given target inference task. Our approach enables the use of large-scale knowledge resources, thus providing a rich […]

A Simple Word Embedding Model for Lexical Substitution

A Simple Word Embedding Model for Lexical Substitution. Oren Melamud, Omer Levy, and Ido Dagan. VSM Workshop 2015. [pdf] We propose a simple model for lexical substitution, which is based on the popular skip-gram word embedding model. The novelty of our approach lies in explicitly leveraging the context embeddings generated within the skip-gram model, which […]
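
The sketch below illustrates the general idea with an additive substitutability score that combines a candidate's similarity to the target word with its compatibility with the surrounding words, measured against the skip-gram context ("output") embeddings. It is a simplified illustration under these assumptions, not necessarily the paper's exact measure.

```python
# Hedged sketch: score a substitute by combining its similarity to the target
# word (via the skip-gram word/"input" embeddings) with its compatibility with
# the surrounding words (via the context/"output" embeddings), averaged additively.

import numpy as np

def cos(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def substitute_score(sub, target, context_words, word_vecs, context_vecs):
    """word_vecs / context_vecs: dicts mapping words to the skip-gram word
    ("input") and context ("output") embeddings, respectively."""
    target_sim = cos(word_vecs[sub], word_vecs[target])
    context_sims = [cos(word_vecs[sub], context_vecs[c]) for c in context_words]
    # Additive combination of target similarity and context compatibility
    return (target_sim + sum(context_sims)) / (len(context_sims) + 1)

# Example: rank candidate substitutes for "bright" in "a bright student"
# ranked = sorted(["smart", "shiny"], key=lambda s: substitute_score(
#     s, "bright", ["a", "student"], word_vecs, context_vecs), reverse=True)
```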