I am a Research Assistant Professor at the Toyota Technological Institute at Chicago, a philanthropically endowed academic computer science institute located on the University of Chicago campus.
My research program lies at the intersection of Cognitive Science, Linguistics, and Artificial Intelligence. I am primarily interested in characterizing the statistical mechanisms that underlie the acquisition and generalization of complex linguistic phenomena and conceptual meaning. To this end, I: (1) develop methods to evaluate and analyze AI models from the perspective of semantic cognition; and (2) use AI models as simulated learners to test and generate novel hypotheses about language acquisition and generalization.
Previously, I was a postdoctoral fellow in the Linguistics department at UT Austin, working with Dr. Kyle Mahowald. Before that, I was a PhD student at Purdue University, where I worked on Natural Language Understanding with Dr. Julia Taylor Rayz at the AKRaNLU Lab. I also worked closely with Dr. Allyson Ettinger and her lab at UChicago.
My email is kanishka [at] ttic [dot] edu (written this way to deter spam crawlers; replace the bracketed words with the usual symbols).
I am the author of minicons, a Python package that facilitates large-scale behavioral analyses of transformer language models.
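For instance, comparing a minimal pair of sentences under GPT-2 takes only a few lines. This is a minimal sketch based on minicons' scorer module; defaults such as the reduction applied to token log probabilities may vary across versions:

```python
from minicons import scorer

# Load GPT-2 as an incremental (autoregressive) LM scorer on CPU.
lm = scorer.IncrementalLMScorer("gpt2", "cpu")

# Score a minimal pair: the grammatical sentence should receive
# a higher (less negative) log-probability score than the
# ungrammatical one.
stimuli = [
    "The keys to the cabinet are on the table.",
    "The keys to the cabinet is on the table.",
]
print(lm.sequence_score(stimuli))
```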
I spent Fall 2022 as a Research Intern at Google AI working on multi-hop reasoning and language models.
October 2024: New preprint on characterizing the role of similarity in LMs’ property inferences with Juan Diego Rodriguez and Aaron Mueller. My first paper as a supervising author!
October 2024: Two papers accepted at EMNLP 2024: (1) on using experimental contexts to elicit robust property inferences from LMs, with Kyle and Allyson; and (2) on characterizing LMs’ generalization to rare linguistic phenomena, with Kyle! Congrats to my co-authors!
October 2024: Talk at the Research at TTIC Seminar Series!
September 2024: I am co-organizing the TTIC/UChicago NLP Seminar along with Mina Lee and Zhewei Sun!
August 2024: I have now moved to Chicago! Excited to start as Research Faculty at TTIC!
July 2024: New preprint with Najoung Kim on using language models to generate novel experimental hypotheses.