
Knowledge Context for Entity and Relation Linking

dc.contributor.advisor Auer, Sören
dc.contributor.author Onando, Isaiah Mulang'
dc.date.accessioned 2021-10-28T12:57:13Z
dc.date.available 2021-10-28T12:57:13Z
dc.date.issued 28.10.2021
dc.identifier.uri https://hdl.handle.net/20.500.11811/9384
dc.description.abstract Knowledge graphs (KGs) are structures that provide a compendious representation of real-world facts about entities and their relationships. The last decade has seen an increase in the number, size, and application of knowledge graphs, owing especially to the easy accessibility of the World Wide Web as a knowledge store. Adding structure to this data means that machines can more easily interpret, reason with, and infer meanings across different domains.
Such rich stores of structured data have been proven to boost performance in core Natural Language Processing (NLP) tasks such as relation extraction, question answering, dialogue systems, and web search. Furthermore, owing to these vast structured knowledge stores, new research and application areas have emerged, viz. automatic KG construction, KG completion, and KG alignment. Central to these tasks is the need to align entities and their relations in text to their equivalents in referent knowledge bases. However, the difference in how such relations are represented in unstructured text compared to formally structured knowledge bases manifests major challenges, namely: the lexical gap, ambiguity, complex and implicit relations, the unpredictability of natural language versus formulaic knowledge bases, and the complex grammar used in text. Numerous research efforts have sought to provide tools and approaches for text-to-KG disambiguation. Nevertheless, the aforementioned challenges remain obstacles to overcome.
This thesis makes two considerations to address entity and relation linking. We envision tools that harness both the power of deep learning methods and traditional Artificial Intelligence techniques. We also view the KG as a source of information that can be anchored as features to inform machine learning models. In this view, we propose encoding this curated information for the linking models. We first devise an approach called ReMatch to perform end-to-end relation linking. ReMatch represents essential attributes of the relations in short text and models KG relations in a complementary structure to enhance the similarity scoring process. A terminology graph is used to augment these two structures with synonym relations. Next, we perform end-to-end entity linking via an attention-based encoder-decoder neural network that captures signals from an infused background KG. In this context, our approach Arjun is a first attempt to encode entity information from the Wikidata KG as contextual signals in a neural network architecture. Arjun uses two neural encoders, the first of which recognises entity mentions. We create a local KG infused from two open-domain KGs to associate entities with their aliases. The infused KG is used to power a second encoder network for disambiguation. In a subsequent implementation, we extend the Arjun idea to perform end-to-end entity linking by leveraging the power of state-of-the-art transformers. A fine-tuned transformer model recognises entity mentions in text, while allowing a mix-and-match approach to the candidate generation step. We then utilise entity descriptions in a second transformer model for disambiguation. In another direction, we experiment with KG triples to evaluate the impact of KG context on transformer models. We aim to unearth underlying nuances in KG entities and to define appropriate representations of them for the learning models.
This work provides insightful results for the community on the types, encoding, and extent of KG context for NLP. Finally, we employ the intuition gained to enhance a model for the explanation regeneration task in elementary science QA. Our contributions target a broader research agenda by providing efficient approaches that leverage the information in KGs, and by propelling efforts to obtain the best of both KGs and NLP.
en
dc.language.iso eng
dc.rights In Copyright
dc.rights.uri http://rightsstatements.org/vocab/InC/1.0/
dc.subject Wissenskontext
dc.subject Wissensgraph
dc.subject Entitäts-Disambiguierung
dc.subject Entitäts-Verknüpfung
dc.subject Beziehungs-Verknüpfung
dc.subject Beziehungs-Extraktion
dc.subject Aufmerksame Neuronale Netze
dc.subject Neuronale Netze
dc.subject Deep Learning
dc.subject Natürliche Sprachverarbeitung
dc.subject Natürliches Sprachverständnis
dc.subject Knowledge Context
dc.subject Knowledge Graph
dc.subject Entity Disambiguation
dc.subject Entity Linking
dc.subject Relation Linking
dc.subject Relation Extraction
dc.subject Attentive Neural Networks
dc.subject Neural Networks
dc.subject Natural Language Processing
dc.subject Natural Language Understanding
dc.subject NLP
dc.subject NLU
dc.subject KGs
dc.subject.ddc 004 Informatik
dc.title Knowledge Context for Entity and Relation Linking
dc.type Dissertation oder Habilitation
dc.publisher.name Universitäts- und Landesbibliothek Bonn
dc.publisher.location Bonn
dc.rights.accessRights openAccess
dc.identifier.urn https://nbn-resolving.org/urn:nbn:de:hbz:5-63968
ulbbn.pubtype Erstveröffentlichung
ulbbnediss.affiliation.name Rheinische Friedrich-Wilhelms-Universität Bonn
ulbbnediss.affiliation.location Bonn
ulbbnediss.thesis.level Dissertation
ulbbnediss.dissID 6396
ulbbnediss.date.accepted 01.09.2021
ulbbnediss.institute Mathematisch-Naturwissenschaftliche Fakultät : Fachgruppe Informatik / Institut für Informatik
ulbbnediss.fakultaet Mathematisch-Naturwissenschaftliche Fakultät
dc.contributor.coReferee Lehmann, Jens
ulbbnediss.contributor.orcid https://orcid.org/0000-0002-0554-0511
ulbbnediss.contributor.gnd 1246392445


