About Us
We are a community of linguists, computer scientists, and cognitive scientists excited about using computational methods to study language in humans and artificial intelligence. Our ultimate goal is to track down how abstractions—generalized representations or computations over more specific instances—emerge in language learners. We are specifically interested in tracing the origins of abstractions that underlie linguistic generalization (surface-form competition, word order, selectional restrictions, etc.) and general-purpose conceptual behaviors (inductive generalization, property inheritance, conceptual organization, etc.). To this end, we:
- Develop tests for AI systems that target their ability to capture these abstractions.
- Develop methods for understanding which cues in a computational model’s learning environment enable it to demonstrate a specific abstraction.
- Use AI models as simulated learners to test and generate novel hypotheses about linguistic abstraction and generalization.
Our work has been recognized with paper awards at NAFIPS 2021, EACL 2023, ACL 2023, and EMNLP 2024.
Koalab was officially launched in Fall 2025, shortly after the name was proposed by Najoung Kim, a longtime friend of the lab.
Note: It is fine to pronounce our name as Koala-lab.
Members
Principal Investigator
UT Austin
PhD Student (advised by John Beavers)
UT Austin
Undergrad (Senior)
UT Austin
Undergrad (Senior)
UT Austin
Undergrad (Junior)
UT Austin
Student Affiliates
PhD Student (advised by Kyle Mahowald)
UT Austin
PhD Student (advised by Allyson Ettinger)
University of Chicago
PhD Student (advised by Najoung Kim)
Boston University
PhD Student (advised by Greg Durrett and Katrin Erk)
UT Austin
PhD Student (advised by Jessy Li)
UT Austin
PhD Student (advised by Karen Livescu)
TTIC
PhD Student (advised by Kyle Mahowald)
UT Austin
Undergrad (Senior)
UW Madison