I am an Assistant Professor in the Linguistics Department at UT Austin and a Harrington Faculty Fellow for 2025-26. I am a member of the Computational Linguistics Research Group and the wider NLP Research Community at UT, and I hold a courtesy appointment at the Toyota Technological Institute at Chicago.
My research program lies at the intersection of Cognitive Science, Linguistics, and Artificial Intelligence. I am primarily interested in characterizing the statistical mechanisms that underlie the acquisition and generalization of complex linguistic phenomena and conceptual meaning. To this end, I: (1) develop methods to evaluate and analyze AI models from the perspective of semantic cognition; and (2) use AI models as simulated learners to test and generate novel hypotheses about language acquisition and generalization. My research has been recognized with awards at EACL 2023, ACL 2023, and EMNLP 2024!
I am looking to recruit PhD students through the Linguistics Department, expected to start in Fall 2026. I am primarily interested in working with students whose interests lie at the intersection of AI and CogSci/NLP. Prospective students can learn more about applications here.
Unfortunately, I am not recruiting MS students or interns at the moment.
Previously, I was a Research Assistant Professor at the Toyota Technological Institute at Chicago, a philanthropically endowed academic computer science institute located on the University of Chicago campus. Before that, I was a postdoctoral fellow in the Linguistics Department at UT Austin, working with Dr. Kyle Mahowald. Earlier still, I was a PhD student at Purdue University, where I worked on Natural Language Understanding with Dr. Julia Taylor Rayz in the AKRaNLU Lab. I also worked closely with Dr. Allyson Ettinger and her lab at UChicago.
My email is kmisra [at] utexas [dot] edu.
Kanishka Misra, Julia Rayz, and Allyson Ettinger. 2023. COMPS: Conceptual Minimal Pair Sentences for testing Robust Property Knowledge and its Inheritance in Pre-trained Language Models. EACL 2023. Best Paper Award.
Kanishka Misra and Kyle Mahowald. 2024. Language Models Learn Rare Phenomena from Less Rare Phenomena: The Case of the Missing AANNs. EMNLP 2024. Outstanding Paper Award.
Kanishka Misra and Najoung Kim. 2024. Generating novel experimental hypotheses from language models: A case study on cross-dative generalization. arXiv preprint.
Juan Diego Rodriguez, Aaron Mueller, and Kanishka Misra. 2025. Characterizing the Role of Similarity in the Property Inferences of Language Models. NAACL 2025.
Yulu Qin,* Dheeraj Varghese,* Adam Dahlgren Lindström, Lucia Donatelli, Kanishka Misra,† and Najoung Kim.† 2025. Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It. arXiv preprint, under review.
August 2025: Moving to UT Austin Linguistics to start as Assistant Professor and Harrington Faculty Fellow (2025-26) this fall! Excited to be back at my favorite Linguistics Department!
July 2025: I’ll be at CogSci! Excited to meet old friends and make new ones!
July 2025: New paper by Yulu Qin (lead), Dheeraj Varghese (lead), Adam Dahlgren Lindström, Lucia Donatelli, and Najoung Kim on the impact of vision-and-language training on the taxonomic knowledge of LMs!
July 2025: Qing’s paper on datives was accepted at COLM 2025, and Sriram’s paper on suspicious coincidences was accepted at the PragLM workshop at COLM! See you all in Montreal!
June 2025: New paper with Will Sheffield (lead), Valentina Pyatkin, Ashwini Deo, Kyle Mahowald, and Jessy Li, to be presented at ACL 2025!
April 2025: New paper with Sriram Padmanabhan (lead), Kyle Mahowald, and Eunsol Choi on LMs’ sensitivity to suspicious coincidences in their input.