Knowledge Base Completion: Baselines Strike Back

Cite (ACL): Rudolf Kadlec, Ondrej Bajgar, and Jan Kleindienst. 2017. Knowledge Base Completion: Baselines Strike Back. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 69–74, Vancouver, Canada. Association for Computational Linguistics.

Cite (Informal): Knowledge Base Completion: Baselines Strike Back (Kadlec et al., RepL4NLP 2017)

Anthology ID: W17-2609
Volume: Proceedings of the 2nd Workshop on Representation Learning for NLP
Month: August
Year: 2017
Address: Vancouver, Canada
Venue: RepL4NLP
SIG: SIGREP
Publisher: Association for Computational Linguistics
Pages: 69–74
DOI: 10.18653/v1/W17-2609
Bibkey: kadlec-etal-2017-knowledge

Abstract
Many papers have been published on the knowledge base completion task in the past few years. Most of these introduce novel architectures for relation learning that are evaluated on standard datasets like FB15k and WN18. This paper shows that the accuracy of almost all models published on the FB15k dataset can be outperformed by an appropriately tuned baseline - our reimplementation of the DistMult model. Our findings cast doubt on the claim that the performance improvements of recent models are due to architectural changes as opposed to hyper-parameter tuning or different training objectives. This should prompt future research to re-consider how the performance of models is evaluated and reported.
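The DistMult baseline that the paper reimplements scores a candidate triple (head, relation, tail) with a simple trilinear product of the three embeddings. The sketch below is a minimal illustration of that scoring function and of the tail-ranking step used in link prediction on FB15k and WN18; the names (`E`, `R`, `score`, `rank_tails`), embedding sizes, and random initialization are assumptions for the example, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, dim = 5, 2, 4

# Entity and relation embeddings; in a real system these are learned
# from training triples, not sampled at random as they are here.
E = rng.normal(size=(n_entities, dim))
R = rng.normal(size=(n_relations, dim))

def score(head: int, rel: int, tail: int) -> float:
    """DistMult score: trilinear product sum_i e_h[i] * w_r[i] * e_t[i]."""
    return float(np.sum(E[head] * R[rel] * E[tail]))

def rank_tails(head: int, rel: int) -> np.ndarray:
    """Rank all entities as candidate tails for (head, rel), best first,
    as in the standard link-prediction evaluation protocol."""
    scores = np.sum(E[head] * R[rel] * E, axis=1)  # vectorized over all tails
    return np.argsort(-scores)

print(score(0, 1, 2))    # score of one triple
print(rank_tails(0, 1))  # entity indices ordered by score
```

Because the relation embedding enters as an element-wise weighting, DistMult is symmetric in head and tail; the paper's point is that even this simple model, tuned carefully, matches or beats most architecturally more elaborate models on FB15k.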