On Semantic Cognition, Inductive Generalization, and Language Models


Abstract

My doctoral research focuses on understanding semantic knowledge in neural network models trained solely to predict natural language (referred to as language models, or LMs), by drawing on insights from the study of concepts and categories in cognitive science. I propose a framework inspired by ‘inductive reasoning,’ a phenomenon that sheds light on how humans use background knowledge to make inductive leaps and generalize from new pieces of information about concepts and their properties. Drawing on experiments that study inductive reasoning, I propose to analyze semantic inductive generalization in LMs using phenomena observed in the human induction literature, investigate inductive behavior on tasks such as implicit reasoning and emergent feature recognition, and analyze and relate induction dynamics to the learned conceptual representation space.

Publication
In Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI) Doctoral Consortium, 2022.
Kanishka Misra
Research Assistant Professor at Toyota Technological Institute at Chicago

My research interests include Natural Language Processing, Cognitive Science, and Deep Learning.