minicons: Enabling Flexible Behavioral and Representational Analyses of Transformer Language Models

Abstract

We present $\texttt{minicons}$, an open-source library that provides a standard API for researchers interested in conducting behavioral and representational analyses of transformer-based language models (LMs). Specifically, $\texttt{minicons}$ enables researchers to apply analysis methods at two levels: (1) at the prediction level, by providing functions to efficiently extract word/sentence-level probabilities; and (2) at the representational level, by facilitating efficient extraction of word/phrase-level vectors from one or more layers. In this paper, we describe the library and apply it to two motivating case studies: one focusing on the learning dynamics of the BERT architecture on relative grammatical judgments, and the other on benchmarking 23 different LMs on zero-shot abductive reasoning. $\texttt{minicons}$ is available at https://github.com/kanishkamisra/minicons.
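The two analysis levels map onto the library's $\texttt{scorer}$ and $\texttt{cwe}$ modules. Below is a minimal sketch of both, following the usage documented in the project README; the model names and stimuli are illustrative, not prescribed by the paper.

```python
from minicons import scorer, cwe

# (1) Prediction level: per-sentence log probabilities from an
# autoregressive LM (scorer.MaskedLMScorer works analogously for BERT-style models).
lm = scorer.IncrementalLMScorer("gpt2", "cpu")
stimuli = [
    "The keys to the cabinet are on the table.",
    "The keys to the cabinet is on the table.",
]
# Summed log probability per sentence; swap in a different reduction
# (e.g., lambda x: -x.sum(0).item()) to get surprisal instead.
print(lm.sequence_score(stimuli, reduction=lambda x: x.sum(0).item()))

# (2) Representational level: contextualized vectors for a word in context.
encoder = cwe.CWE("bert-base-uncased")
instances = [
    ("I went to the bank to deposit money.", "bank"),
    ("We sat on the bank of the river.", "bank"),
]
# One vector per (sentence, word) pair, extracted from layer 12.
embeddings = encoder.extract_representation(instances, layer=12)
print(embeddings.shape)  # torch.Size([2, 768]) for bert-base-uncased
```

Here a grammatical/ungrammatical minimal pair gets a relative acceptability judgment from the scorer, while the two occurrences of "bank" yield distinct contextualized embeddings suitable for representational analyses.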

Publication
In arXiv preprint
Kanishka Misra
Research Assistant Professor at Toyota Technological Institute at Chicago

My research interests include Natural Language Processing, Cognitive Science, and Deep Learning.