Relationship Extraction
Relationship extraction is the task of extracting semantic relationships from text. Extracted relationships usually occur between two or more entities of a certain type (e.g. Person, Organisation, Location) and fall into a number of semantic categories (e.g. married to, employed by, lives in).
SUMMARY: Do the entities need to be specially annotated?
Capturing discriminative attributes (SemEval 2018 Task 10)
Capturing discriminative attributes (SemEval 2018 Task 10) is a binary classification task where participants were asked to identify whether an attribute could help discriminate between two concepts. Unlike other word similarity prediction tasks, this task focuses on the semantic differences between words.
e.g. red (attribute) can be used to discriminate apple (concept1) from banana (concept2) -> label 1
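One simple baseline for this task compares how close the attribute is to each concept in an embedding space. The sketch below illustrates this decision rule with toy hand-written vectors and a hypothetical margin threshold; a real system would load pre-trained embeddings and tune the margin on the training data.

```python
import math

# Hypothetical 3-dimensional word vectors, for illustration only.
vectors = {
    "apple":  [0.9, 0.1, 0.3],
    "banana": [0.2, 0.8, 0.3],
    "red":    [0.8, 0.0, 0.2],
}

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_discriminative(concept1, concept2, attribute, margin=0.1):
    """Label 1 if the attribute is notably closer to concept1 than to concept2."""
    sim1 = cosine(vectors[attribute], vectors[concept1])
    sim2 = cosine(vectors[attribute], vectors[concept2])
    return int(sim1 - sim2 > margin)

print(is_discriminative("apple", "banana", "red"))  # -> 1 with these toy vectors
```

With these vectors, "red" is far more similar to "apple" than to "banana", so the rule outputs label 1, matching the example above.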
FewRel
The Few-Shot Relation Classification Dataset (FewRel) uses a different setting from the previous datasets. It consists of 70K sentences expressing 100 relations, annotated by crowdworkers on the Wikipedia corpus. The few-shot learning task follows the N-way K-shot meta-learning setting. It is currently both the largest supervised relation classification dataset and the largest few-shot learning dataset.
The public leaderboard is available on the FewRel website.
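In the N-way K-shot setting, each evaluation episode samples N relations, gives the model K labelled support sentences per relation, and asks it to classify held-out query sentences. The sketch below shows how such an episode can be sampled; the toy corpus and names are illustrative, not the actual FewRel data format.

```python
import random

def sample_episode(data, n_way=5, k_shot=1, n_query=1, rng=random):
    """Sample one meta-learning episode: a support set of N relations with
    K labelled examples each, plus query examples from the same N relations."""
    relations = rng.sample(sorted(data), n_way)
    support, query = [], []
    for rel in relations:
        examples = rng.sample(data[rel], k_shot + n_query)
        support += [(sent, rel) for sent in examples[:k_shot]]
        query += [(sent, rel) for sent in examples[k_shot:]]
    return support, query

# Toy corpus: relation -> sentences (hypothetical placeholders).
toy = {f"relation_{i}": [f"sentence_{i}_{j}" for j in range(10)] for i in range(10)}

support, query = sample_episode(toy, n_way=5, k_shot=1, n_query=1)
print(len(support), len(query))  # 5 support and 5 query examples
```

FewRel is commonly evaluated at 5-way 1-shot, 5-way 5-shot, 10-way 1-shot, and 10-way 5-shot, which correspond to different `n_way`/`k_shot` settings of this sampler.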
Multi-Way Classification of Semantic Relations Between Pairs of Nominals (SemEval 2010 Task 8)
SemEval-2010 introduced ‘Task 8 - Multi-Way Classification of Semantic Relations Between Pairs of Nominals’. The task is, given a sentence and two tagged nominals, to predict the relation between those nominals and the direction of the relation. The dataset contains nine general semantic relations together with a tenth ‘OTHER’ relation.
Example:
There were apples, pears and oranges in the bowl.
(content-container, pears, bowl)
The main evaluation metric used is macro-averaged F1, averaged across the nine proper relationships (i.e. excluding the OTHER relation), taking directionality of the relation into account.
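A simplified version of this metric can be sketched as below. Note this is not the official SemEval scorer: here each directed label (e.g. "Cause-Effect(e1,e2)" vs. "Cause-Effect(e2,e1)") is treated as a distinct class, whereas the official scorer aggregates the two directions of each relation before macro-averaging; the example labels are illustrative.

```python
def macro_f1(gold, pred, excluded=("Other",)):
    """Macro-averaged F1 over all observed labels except the excluded ones.
    Direction matters: 'Cause-Effect(e1,e2)' and 'Cause-Effect(e2,e1)'
    are distinct labels in this simplified version."""
    labels = {label for label in gold + pred if label not in excluded}
    f1_scores = []
    for label in labels:
        tp = sum(1 for g, p in zip(gold, pred) if g == p == label)
        fp = sum(1 for g, p in zip(gold, pred) if p == label and g != label)
        fn = sum(1 for g, p in zip(gold, pred) if g == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1_scores.append(2 * precision * recall / (precision + recall)
                         if precision + recall else 0.0)
    return sum(f1_scores) / len(f1_scores) if f1_scores else 0.0

gold = ["Cause-Effect(e1,e2)", "Cause-Effect(e2,e1)", "Other"]
pred = ["Cause-Effect(e1,e2)", "Cause-Effect(e1,e2)", "Other"]
print(round(macro_f1(gold, pred), 3))  # -> 0.333
```

The second prediction gets the relation right but the direction wrong, so it counts as an error, and the OTHER examples are simply left out of the average.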
Several papers have used additional data (e.g. pre-trained word embeddings, WordNet) to improve performance. The figures reported here are the highest achieved by the model using any external resources.
End-to-End Models
*: Uses external resources such as WordNet, part-of-speech tags, dependency tags, and named entity tags.