Triggering Multi-Hop Reasoning for Question Answering in Language Models using Soft Prompts and Random Walks


Despite readily memorizing world knowledge about entities, pre-trained language models (LMs) struggle to compose two or more facts to perform multi-hop reasoning in question-answering tasks. In this work, we propose techniques that improve upon this limitation by relying on random walks over structured knowledge graphs. Specifically, we use soft prompts to guide LMs to chain together their encoded knowledge by learning to map multi-hop questions to random-walk paths that lead to the answer. Applying our methods to two T5 LMs shows substantial improvements over standard tuning approaches in answering questions that require multi-hop reasoning.
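
Below is a minimal sketch of this recipe using Hugging Face `transformers` and `peft`: a frozen T5 model is steered by a small set of trainable soft-prompt embeddings to map a multi-hop question to a random-walk path over a knowledge graph whose final entity is the answer. The toy knowledge graph, the path string format, the model size, and all hyperparameters here are illustrative assumptions, not the authors' released implementation.

```python
import random

import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast
from peft import PromptTuningConfig, TaskType, get_peft_model

# Toy knowledge graph: (head entity, relation) -> tail entity. Purely
# illustrative; the paper uses walks over a structured knowledge graph.
KG = {
    ("Barack Obama", "spouse"): "Michelle Obama",
    ("Michelle Obama", "born in"): "Chicago",
    ("Chicago", "located in"): "Illinois",
}

def random_walk(start: str, hops: int) -> list:
    """Sample a random walk of up to `hops` edges starting from `start`."""
    path, node = [start], start
    for _ in range(hops):
        edges = [(r, t) for (h, r), t in KG.items() if h == node]
        if not edges:
            break
        rel, node = random.choice(edges)
        path += [rel, node]
    return path

# Walks like random_walk("Barack Obama", 2) expose the LM to graph traversal;
# for prompt tuning, each question is paired with the walk that ends at its
# answer. The " ; "-separated path format is an assumption.
question = "Where was Barack Obama's spouse born?"
target_path = " ; ".join(
    ["Barack Obama", "spouse", "Michelle Obama", "born in", "Chicago"]
)

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
base = T5ForConditionalGeneration.from_pretrained("t5-small")

# Soft-prompt tuning: only `num_virtual_tokens` prompt embeddings are
# trained; the T5 parameters themselves stay frozen.
peft_config = PromptTuningConfig(
    task_type=TaskType.SEQ_2_SEQ_LM, num_virtual_tokens=20
)
model = get_peft_model(base, peft_config)

inputs = tokenizer(question, return_tensors="pt")
labels = tokenizer(target_path, return_tensors="pt").input_ids

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-3)
model.train()
for _ in range(10):  # a few toy gradient steps on one example
    loss = model(**inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# At inference time, the tuned prompt steers T5 to emit a path; the final
# entity in the decoded string is read off as the answer.
model.eval()
pred = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(pred[0], skip_special_tokens=True))
```

In a real run, training would of course iterate over many (question, path) pairs rather than a single example; the point of the sketch is the parameter-efficient setup, where only the soft prompt learns to trigger the model's path-following behavior.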

In Findings of the Association for Computational Linguistics: ACL 2023
Kanishka Misra