Kanishka Misra

Assistant Professor of Linguistics and Harrington Fellow at UT-Austin

UT Austin CompLing Group

UT Austin NLP

I am an Assistant Professor in the Linguistics Department at UT Austin, and a Harrington Faculty Fellow for 2025-26. I am a member of the Computational Linguistics Research Group as well as the wider NLP Research Community at UT, and I hold a courtesy appointment at the Toyota Technological Institute at Chicago.

My research program lies at the intersection of Cognitive Science, Linguistics, and Artificial Intelligence. I am primarily interested in characterizing the statistical mechanisms that underlie the acquisition and generalization of complex linguistic phenomena and conceptual meaning. To this end, I: (1) develop methods to evaluate and analyze AI models from the perspective of semantic cognition; and (2) use AI models as simulated learners to test and generate novel hypotheses about language acquisition and generalization. My research has been recognized with awards at EACL 2023, ACL 2023, and EMNLP 2024!

I am looking to recruit PhD students through the Linguistics Department, to start in Fall 2026. I am primarily interested in working with students whose interests lie at the intersection of AI and CogSci/NLP. Students can learn more about applications here.

Unfortunately, I am not recruiting any MS students or interns at the moment.

Previously, I was a Research Assistant Professor at the Toyota Technological Institute at Chicago, a philanthropically endowed academic computer science institute located on the University of Chicago campus. Before that, I was a postdoctoral fellow in the Linguistics Department at UT Austin, working with Dr. Kyle Mahowald, and earlier a PhD student at Purdue University, where I worked on Natural Language Understanding with Dr. Julia Taylor Rayz at the AKRaNLU Lab. I also worked closely with Dr. Allyson Ettinger and her lab at UChicago.

My email is kmisra [at] utexas [dot] edu (why is it like that?).

Other things:

  • I am the author of minicons, a Python library that facilitates large-scale behavioral analyses of transformer language models (see the sketch after this list).
  • I used to co-organize the UChicago/TTIC NLP Seminar, along with the wonderful Mina Lee and Zhewei Sun.
  • I spent Fall 2022 as a Research Intern at Google AI working on multi-hop reasoning and language models.
  • In the summer of 2022, I hosted a two-part discussion group on Neural Nets for Cognition @ CogSci 2022.
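
To give a concrete sense of what minicons is for, here is a minimal sketch of a behavioral analysis with it: scoring a subject-verb agreement minimal pair with a causal LM. The model choice and call details are illustrative; please consult the minicons documentation for the current API.

```python
from minicons import scorer

# Wrap a Hugging Face causal LM in a minicons scorer (model name is just an example).
lm = scorer.IncrementalLMScorer("gpt2", "cpu")

# A subject-verb agreement minimal pair: the grammatical sentence should score higher.
stimuli = [
    "The keys to the cabinet are on the table.",
    "The keys to the cabinet is on the table.",
]

# Summed token log-probabilities for each sentence.
print(lm.sequence_score(stimuli, reduction=lambda x: x.sum(0).item()))
```

Because each stimulus comes back as a single score, comparing the members of a minimal pair, or running thousands of such pairs, takes only a few lines of code.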

Representative Papers

Recent News

  • August 2025: Moving to UT Austin Linguistics to start as Assistant Professor and Harrington Faculty Fellow (2025-26) this fall! Excited to be back at my favorite Linguistics Department!

  • July 2025: I’ll be at CogSci; excited to meet old friends and make new ones!

  • July 2025: New paper by Yulu Qin (lead), Dheeraj Varghese (lead), Adam Dahlgren Lindstrom, Lucia Donatelli, and Najoung Kim on the impact of vision-language (VL) training on the taxonomic knowledge of LMs!

  • July 2025: Qing’s paper on datives was accepted at COLM 2025, and Sriram’s paper on suspicious coincidences was accepted at PragLM workshop (COLM)! See you all in Montreal!

  • June 2025: New paper with Will Sheffield (lead), Valentina Pyatkin, Ashwini Deo, Kyle Mahowald, and Jessy Li, to be presented at ACL 2025!

  • April 2025: Paper with Sriram Padmanabhan (lead), Kyle Mahowald, and Eunsol Choi on LMs’ sensitivity to suspicious coincidences in their input.

Recent Posts

Introducing minicons: Running large scale behavioral analyses on transformer language models

In this post, I showcase my new Python library, which implements simple computations to facilitate large-scale evaluation of transformer language models.

Contact

Note: For PhD admissions, please see the recruitment note above. Feel free to reach out if I can be of assistance or can point you to other fantastic researchers who are looking for students, but I will unfortunately not respond to your email if its content suggests you have not read my website.