I am a postdoctoral fellow in the linguistics department at UT Austin, working with Dr. Kyle Mahowald. In September 2024, I will start as a Research Assistant Professor at the Toyota Technological Institute at Chicago!

Previously, I was a PhD student at Purdue University, where I worked on Natural Language Understanding with Dr. Julia Taylor Rayz at the AKRaNLU Lab. I also worked closely with Dr. Allyson Ettinger and her lab at UChicago.

My research focuses on evaluating and analyzing large language models from the perspective of human semantic cognition, investigating capacities such as their ability to encode typicality effects, recall property knowledge, demonstrate property inheritance, and perform human-like category-based induction. Together, these capacities shed light on the extent to which LMs encode and extract conceptual meaning from their input contexts. Through my work, I hope to help bridge the experimental paradigms used in the study of human cognition with those used to study artificial intelligence systems.

My email is kmisra [at] utexas [dot] edu.

Other things:

  • I am also the author of minicons, a Python package that facilitates large-scale behavioral analyses of transformer language models (see the sketch after this list).

  • I spent Fall 2022 as a Research Intern at Google AI, working on multi-hop reasoning and language models!

  • I was selected to be a Graduate Student Fellow in the inaugural Purdue Graduate School Mentoring Fellows program!

  • In the summer of 2022, I hosted a two-part discussion group on Neural Nets for Cognition @ CogSci 2022.
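
To give a flavor of what minicons enables, here is a minimal sketch that scores a toy typicality contrast with an autoregressive LM. It follows the scorer usage shown in the minicons README; the model choice (distilgpt2) and the example sentences are illustrative assumptions, not results from my work:

```python
from minicons import scorer

# Load a small autoregressive LM on CPU.
lm = scorer.IncrementalLMScorer('distilgpt2', 'cpu')

# A toy typicality contrast: a typical vs. an atypical bird exemplar.
stimuli = [
    "A robin is a bird.",
    "A penguin is a bird.",
]

# Summed log-probability of each sentence under the model.
scores = lm.sequence_score(stimuli, reduction=lambda x: x.sum(0).item())

for sentence, score in zip(stimuli, scores):
    print(f"{score:8.2f}  {sentence}")
```

Comparing the two log-probabilities gives a simple behavioral probe: if the model assigns a higher score to the robin sentence, its behavior is consistent with the typicality effects documented in human semantic cognition.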

Recent News

  • November 2023: Presenting work on lexical category abstraction in BERT at BUCLD with Najoung Kim!

  • October 2023: Guest lecture at Eunsol Choi’s CS 395T class at UT Austin!

  • September 2023: Invited talks at C.Psyd (Marty van Schijndel’s lab) and TTIC!

  • August 2023: Moving to Austin to start my Postdoc with Kyle Mahowald at UT Austin! Excited for (KM)$^2$ projects!

  • July 2023: Successfully defended my dissertation, On semantic cognition, inductive generalization, and language models! Watch this space for what the future holds!

  • July 2023: Najoung Kim and I will be presenting our work on understanding lexical category membership inferences in BERT at BUCLD! Excited to visit Boston (and meet 🍪) in November!

Recent Posts

Introducing $\texttt{minicons}$: Running large scale behavioral analyses on transformer language models

In this post, I showcase my new Python library, which implements simple computations to facilitate large-scale evaluation of transformer language models.

Contact