
Typed Tensor Decomposition of Knowledge Bases for Relation Extraction

Kai-Wei Chang, Wen-tau Yih, Bishan Yang, and Chris Meek, in EMNLP, 2014.

Download the full text


Abstract

While relation extraction has traditionally been viewed as a task relying solely on textual data, recent work has shown that by taking as input existing facts in the form of entity-relation triples from both knowledge bases and textual data, the performance of relation extraction can be improved significantly. Following this new paradigm, we propose a tensor decomposition approach for knowledge base embedding that is highly scalable, and is especially suitable for relation extraction. By leveraging relational domain knowledge about entity type information, our learning algorithm is significantly faster than previous approaches and is better able to discover new relations missing from the database. In addition, when applied to a relation extraction task, our approach alone is comparable to several existing systems, and improves the weighted mean average precision of a state-of-the-art method by 10 points when used as a subcomponent.
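As a rough illustration of the kind of scoring the abstract alludes to (a bilinear, RESCAL-style model restricted by entity types), the Python sketch below scores a triple as e_s^T W_r e_o and skips any entity pair whose types do not match the relation's type signature. This is not the authors' released code; every name, type, and value here is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(0)
d = 50  # latent dimension (illustrative choice)

# Hypothetical toy data: one vector per entity, one square matrix per relation,
# plus the type metadata that a typed model exploits.
entities = {"einstein": rng.normal(size=d), "ulm": rng.normal(size=d)}
entity_type = {"einstein": "person", "ulm": "city"}
relations = {"born_in": rng.normal(size=(d, d))}
relation_signature = {"born_in": ("person", "city")}  # (subject type, object type)

def score(subj, rel, obj):
    """Bilinear score e_s^T W_r e_o, computed only for type-compatible pairs.

    Skipping type-incompatible pairs is the kind of relational domain
    knowledge the abstract refers to: it shrinks the set of candidate
    triples the model ever has to score or train on.
    """
    s_type, o_type = relation_signature[rel]
    if entity_type[subj] != s_type or entity_type[obj] != o_type:
        return None  # type mismatch: never scored
    return float(entities[subj] @ relations[rel] @ entities[obj])

print(score("einstein", "born_in", "ulm"))   # type-compatible: returns a real score
print(score("ulm", "born_in", "einstein"))   # type mismatch: returns None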



Bib Entry

@inproceedings{chang2014typed,
  author = {Chang, Kai-Wei and Yih, Wen-tau and Yang, Bishan and Meek, Chris},
  title = {Typed Tensor Decomposition of Knowledge Bases for Relation Extraction},
  booktitle = {EMNLP},
  year = {2014}
}

Related Publications

  1. Multi-Relational Latent Semantic Analysis

    Kai-Wei Chang, Wen-tau Yih, and Chris Meek, in EMNLP, 2013.
    We present Multi-Relational Latent Semantic Analysis (MRLSA), which generalizes Latent Semantic Analysis (LSA). MRLSA provides an elegant approach to combining multiple relations between words by constructing a 3-way tensor. Similar to LSA, a low-rank approximation of the tensor is derived using a tensor decomposition. Each word in the vocabulary is thus represented by a vector in the latent semantic space and each relation is captured by a latent square matrix. The degree of two words having a specific relation can then be measured through simple linear algebraic operations. We demonstrate that by integrating multiple relations from both homogeneous and heterogeneous information sources, MRLSA achieves state-of-the-art performance on existing benchmark datasets for two relations, antonymy and is-a.
    @inproceedings{chang2013mrlsa,
      author = {Chang, Kai-Wei and Yih, Wen-tau and Meek, Chris},
      title = {Multi-Relational Latent Semantic Analysis},
      booktitle = {EMNLP},
      year = {2013}
    }
    
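The MRLSA abstract above describes each word as a vector in a latent space and each relation as a latent square matrix, with the strength of a relation computed by simple linear algebra. The sketch below is one plausible way to instantiate that description (mapping one word's vector through the relation's matrix and taking the cosine with the other word's vector); it is not the paper's exact measure, and all names, sizes, and values are made up for illustration.

import numpy as np

rng = np.random.default_rng(1)
d = 30  # latent dimension (illustrative)
vocab = ["hot", "cold", "dog", "animal"]

# Made-up parameters standing in for the output of the tensor decomposition:
# one latent vector per word, one latent square matrix per relation.
word_vec = {w: rng.normal(size=d) for w in vocab}
relation_mat = {"antonym": rng.normal(size=(d, d)), "is_a": rng.normal(size=(d, d))}

def relation_degree(w1, rel, w2):
    """Map w1 through the relation's matrix, then take the cosine with w2.

    This is one plausible reading of 'simple linear algebraic operations';
    the paper's exact measure may differ, and these random parameters are
    purely illustrative.
    """
    mapped = word_vec[w1] @ relation_mat[rel]
    v = word_vec[w2]
    return float(mapped @ v / (np.linalg.norm(mapped) * np.linalg.norm(v)))

print(relation_degree("hot", "antonym", "cold"))
print(relation_degree("dog", "is_a", "animal"))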