Kanishka Misra

Research Assistant Professor at Toyota Technological Institute at Chicago

Speech & Language @ TTIC

UT Austin CompLing Group

I am a Research Assistant Professor at the Toyota Technological Institute at Chicago, a philanthropically endowed academic computer science institute located on the University of Chicago campus.

My research program lies at the intersection of Cognitive Science, Linguistics, and Artificial Intelligence. I am primarily interested in characterizing the statistical mechanisms that underlie the acquisition and generalization of complex linguistic phenomena and conceptual meaning. To this end, I: (1) develop methods to evaluate and analyze AI models from the perspective of semantic cognition; and (2) use AI models as simulated learners to test and generate novel hypotheses about language acquisition and generalization. My research has been recognized with awards at EACL 2023, ACL 2023, and EMNLP 2024!

Previously, I was a postdoctoral fellow in the Linguistics department at UT Austin, working with Dr. Kyle Mahowald. Before that, I was a PhD student at Purdue University, where I worked on Natural Language Understanding with Dr. Julia Taylor Rayz at the AKRaNLU Lab. I also worked closely with Dr. Allyson Ettinger and her lab at UChicago.

My email is kanishka [at] ttic [dot] edu.

Note: I am not currently looking to hire any PhD students, but please reach out to me if I can be of assistance or point you to other fantastic researchers who are looking for students. I will unfortunately not respond to your email if its content suggests you have not read my website.

Other things:

  • I co-organize the UChicago/TTIC NLP Seminar, along with the wonderful Mina Lee and Zhewei Sun. We are constantly looking for external speakers – please reach out if you are interested in presenting! I strongly encourage people from underrepresented and marginalized groups to reach out.

  • I am the author of minicons, a Python package that facilitates large-scale behavioral analyses of transformer language models (a short usage sketch follows this list).

  • I spent Fall 2022 as a Research Intern at Google AI working on multi-hop reasoning and language models.
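For a quick sense of what minicons looks like in practice, here is a minimal sketch based on the package's IncrementalLMScorer interface. The model choice (distilgpt2) and the stimuli are illustrative, not anything prescribed by the library:

```python
# Minimal sketch: scoring a pair of sentences with minicons.
from minicons import scorer

# Load an autoregressive (left-to-right) LM as a scorer on CPU;
# distilgpt2 is just a small illustrative choice.
lm = scorer.IncrementalLMScorer("distilgpt2", "cpu")

stimuli = [
    "The keys to the cabinet are on the table.",  # grammatical
    "The keys to the cabinet is on the table.",   # agreement violation
]

# sequence_score returns one score per sentence; reducing the token
# log probabilities by their mean gives a length-normalized score.
scores = lm.sequence_score(stimuli, reduction=lambda x: x.mean(0).item())

for sentence, score in zip(stimuli, scores):
    print(f"{score:.3f}\t{sentence}")
```

Here the mean token log probability serves as a length-normalized sentence score, so the grammatical sentence should typically receive the higher (less negative) value; exact numbers will depend on the model used.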


Recent Posts

Introducing minicons: Running large-scale behavioral analyses on transformer language models

In this post, I showcase my new Python library, which implements simple computations to facilitate large-scale evaluation of transformer language models.
