I am an Assistant Professor in the Linguistics Department at UT Austin, and a Harrington Faculty Fellow for 2025-26. I am a member of the Computational Linguistics Research Group and the wider NLP Research Community at UT. I maintain a courtesy appointment at the Toyota Technological Institute at Chicago.
My research program lies at the intersection of Cognitive Science, Linguistics, and Artificial Intelligence. I am primarily interested in characterizing the statistical mechanisms that underlie the acquisition and generalization of complex linguistic phenomena and conceptual meaning. To this end, I: (1) develop methods to evaluate and analyze AI models from the perspective of semantic cognition; and (2) use AI models as simulated learners to test and generate novel hypotheses about language acquisition and generalization. My research has been recognized with awards at EACL 2023, ACL 2023, and EMNLP 2024!
I am looking to recruit PhD students through the Linguistics Department, expected to start in Fall 2026. I am primarily interested in working with students whose interests lie at the intersection of AI and CogSci/NLP. Students can learn more about applications here.
Unfortunately, I am not recruiting MS students or interns at the moment.
Previously, I was a Research Assistant Professor at the Toyota Technological Institute at Chicago, a philanthropically endowed academic computer science institute located on the University of Chicago campus. Before that, I was a postdoctoral fellow in the Linguistics Department at UT Austin, working with Dr. Kyle Mahowald, and earlier a PhD student at Purdue University, where I worked on Natural Language Understanding with Dr. Julia Taylor Rayz at the AKRaNLU Lab. I also worked closely with Dr. Allyson Ettinger and her lab at UChicago.
My email is kmisra [at] utexas [dot] edu.
Kanishka Misra, Julia Rayz, and Allyson Ettinger. 2023. COMPS: Conceptual Minimal Pair Sentences for testing Robust Property Knowledge and its Inheritance in Pre-trained Language Models. EACL 2023. Best Paper Award.
Kanishka Misra and Kyle Mahowald. 2024. Language Models Learn Rare Phenomena from Less Rare Phenomena: The Case of the Missing AANNs. EMNLP 2024. Outstanding Paper Award.
Kanishka Misra and Najoung Kim. 2024. Generating novel experimental hypotheses from language models: A case study on cross-dative generalization. arXiv preprint.
Juan Diego Rodriguez, Aaron Mueller, and Kanishka Misra. 2025. Characterizing the Role of Similarity in the Property Inferences of Language Models. NAACL 2025.
Yulu Qin*, Dheeraj Varghese*, Adam Dahlgren Lindström, Lucia Donatelli, Kanishka Misra†, and Najoung Kim†. 2025. Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It. NeurIPS 2025.
Recent news and upcoming talks:
January 2026: New paper with Sriram Padmanabhan and Siyuan Song on understanding VLMs’ use of generics in inductive inferences.
January 2026: Two papers accepted for presentation at EACL 2026 in March: (1) inferences from discourse connectives; (2) at-issue sensitivity in LMs.
December 2025: Joint talk at UCSD with Najoung Kim, titled “Whence Insights? Delineating Human and Machine CogSci, and hypothesis generation as a potential bridge.”
December 2025: Presenting a poster at NeurIPS with Yulu Qin and Najoung Kim.
October 2025: New preprint with Sanghee Kim on at-issue sensitivity in Language Models.
October 2025: New preprint with Daniel Brubaker, Will Sheffield, and Jessy Li on inferences from discourse connectives!