Knowledge Graph Embedding by Double Limit Scoring Loss
ABSTRACT :
Knowledge graph embedding is an effective way to represent a knowledge graph, and it greatly enhances performance on knowledge graph completion tasks, e.g. entity or relation prediction. For knowledge graph embedding models, designing a powerful loss framework is crucial for discriminating between correct and incorrect triplets. Margin-based ranking loss is a commonly used negative sampling framework that enforces a suitable margin between the scores of positive and negative triples. However, this loss cannot ensure ideally low scores for positive triplets and high scores for negative triplets, which is detrimental to knowledge completion tasks. In this paper, we present a double limit scoring loss that separately sets an upper bound for correct triplets and a lower bound for incorrect triplets, providing more effective and flexible optimization for knowledge graph embedding. Upon the presented loss framework, we build several knowledge graph embedding models, including TransE-SS, TransH-SS, TransD-SS, ProjE-SS and ComplEx-SS. Experimental results on link prediction and triplet classification show that our proposed models achieve significant improvements over state-of-the-art baselines.
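To make the contrast concrete, the sketch below compares a standard margin-based ranking loss with a double limit scoring loss. It is a minimal PyTorch illustration, assuming TransE-style scores where a lower score means a more plausible triple; the bound values upper and lower and the weight lam are illustrative placeholders, not the paper's exact formulation or hyperparameters.

import torch

def margin_ranking_loss(pos_score, neg_score, margin=1.0):
    # Constrains only the gap between scores: a positive triple may still
    # receive a poor (high) score as long as its negative scores worse.
    return torch.relu(margin + pos_score - neg_score).mean()

def double_limit_scoring_loss(pos_score, neg_score, upper=1.0, lower=3.0, lam=0.5):
    # Push positive-triple scores below an absolute upper bound...
    pos_term = torch.relu(pos_score - upper)
    # ...and negative-triple scores above an absolute lower bound (lower > upper).
    neg_term = torch.relu(lower - neg_score)
    return (pos_term + lam * neg_term).mean()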
EXISTING SYSTEM :
• We introduce KBGAN, an adversarial learning framework that improves the performance of a wide range of existing knowledge graph embedding models.
• Because knowledge graphs typically contain only positive facts, sampling useful negative training examples is a nontrivial task (a minimal uniform sampler is sketched after this list).
• Other approaches might leverage external ontological constraints such as entity types (Krompaß et al., 2015) to generate negative examples, but such resources do not always exist or are not always accessible.
• Other forms of loss functions exist; for example, ConvE uses a triple-wise logistic function to model how likely a triple is to be true, but the two described above are by far the most common.
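For reference, the following is a minimal sketch of the common uniform corruption sampler; the function and variable names are ours, not KBGAN's. KBGAN's motivation is that such uniformly sampled negatives are often trivially easy, which is what an adversarial generator is meant to fix.

import random

def corrupt_triple(triple, num_entities, known_triples):
    # Uniformly replace the head or the tail with a random entity.
    # Reject candidates that are themselves known positive facts,
    # since knowledge graphs store only positives.
    h, r, t = triple
    while True:
        e = random.randrange(num_entities)
        candidate = (e, r, t) if random.random() < 0.5 else (h, r, e)
        if candidate not in known_triples:
            return candidate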
DISADVANTAGE :
• Despite the gains in prediction accuracy achieved by multi-class loss approaches, they can have scalability issues in real-world knowledge graphs with a large number of entities, because they use the full entity vocabulary as negative instances.
• On the other hand, multi-class based models are trained to rank positive triples against all of their possible corruptions, treating the task as a multi-class problem whose range of classes is the set of all entities.
• In our experiments, we alleviate this problem by filtering out positive instances from the triple corruptions; MRR and Hits@k are therefore computed using the knowledge graph's original triples and non-positive corruptions only (a sketch of this filtered evaluation follows this list).
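A minimal sketch of this filtered evaluation is given below, assuming a score_fn where a lower score means a more plausible triple; only tail corruption is shown, and head corruption is symmetric. All names are illustrative.

def filtered_tail_rank(score_fn, test_triple, num_entities, known_triples):
    # Rank the true tail against all tail corruptions, skipping corruptions
    # that are themselves known positives (the "filtered" setting).
    h, r, t = test_triple
    true_score = score_fn(h, r, t)
    rank = 1
    for e in range(num_entities):
        if e == t or (h, r, e) in known_triples:
            continue
        if score_fn(h, r, e) < true_score:  # lower score = more plausible
            rank += 1
    return rank

def mrr_and_hits(ranks, k=10):
    # Mean Reciprocal Rank and Hits@k over a list of test ranks.
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits_at_k = sum(r <= k for r in ranks) / len(ranks)
    return mrr, hits_at_k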
PROPOSED SYSTEM :
• A large number of knowledge graph embedding models, which represent entities and relations in a knowledge graph with vectors or matrices, have been proposed in recent years.
• Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) were originally proposed for generating samples in a continuous space such as images. A GAN consists of two parts: a generator and a discriminator.
• To evaluate our proposed framework, we test its performance for the link prediction task with different generators and discriminators.
• Experimentally, we tested the proposed ideas with four commonly used KGE models on three datasets, and the results showed that the adversarial learning framework brought consistent improvements to various KGE models under different settings (a single-step sketch of this adversarial setup follows this list).
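The sketch below illustrates one training step of such an adversarial setup in PyTorch. It assumes a distance-style discriminator (lower score = more plausible) and precomputed scores over a pool of candidate negatives; it is a simplified single-step illustration under those assumptions, not KBGAN's full training procedure.

import torch

def adversarial_step(gen_scores, disc_pos_score, disc_neg_scores, margin=1.0):
    # Generator: sample one negative from a softmax over its own scores
    # (a higher generator score marks a negative it believes is harder).
    probs = torch.softmax(gen_scores, dim=-1)
    idx = torch.multinomial(probs, 1).squeeze(-1)
    # Discriminator: margin ranking loss on the sampled negative.
    disc_loss = torch.relu(margin + disc_pos_score - disc_neg_scores[idx]).mean()
    # Generator: REINFORCE update; the reward is the negated discriminator
    # distance, so negatives the discriminator finds plausible are rewarded.
    reward = -disc_neg_scores[idx].detach()
    gen_loss = -(reward * torch.log(probs[idx] + 1e-9)).mean()
    return disc_loss, gen_loss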
ADVANTAGE :
• In this paper, we focus on comparing different loss functions when applied to several representative KGE models.
• By systematically analyzing the performance (in terms of Mean Reciprocal Rank, MRR) of different models using different loss functions, we aim to improve the understanding of how loss functions influence the behaviour of KGE models across different benchmark datasets.
• We propose a new formulation for a KGE loss that can enhance the performance of KGE models.
• Moreover, the loss function we have proposed, PSL, enhances models' performance on multiple datasets.