Language Models Learn Rare Phenomena from Less Rare Phenomena: The Case of the Missing AANNs

Abstract

Language models learn rare syntactic phenomena, but it has been argued that they do so by rote memorization rather than grammatical generalization. Using corpora of human scale in size (100M words), we iteratively trained transformer language models on systematically manipulated corpora and then evaluated their learning of a particular rare grammatical phenomenon: the English Article+Adjective+Numeral+Noun (AANN) construction (“a beautiful five days”). We first compared how well this construction was learned on the default corpus relative to a counterfactual corpus in which the AANN sentences were removed. Even when the AANN sentences were removed, AANNs were still learned better than systematically perturbed variants of the construction. Using additional counterfactual corpora, we suggest that this learning occurs through generalization from related constructions (e.g., “a few days”). A further experiment showed that this learning is enhanced when there is more variability in the input. Taken together, our results provide an existence proof that models learn rare grammatical phenomena by generalization from less rare phenomena.
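
The kind of evaluation described above, comparing a model's preference for attested AANN sentences over word-order-perturbed variants, can be illustrated with a minimal sketch. This is a hypothetical example, not the paper's actual pipeline: the model name, example sentences, and scoring function are all illustrative assumptions.

```python
# Hypothetical sketch: score an AANN sentence against a perturbed variant
# under a causal LM. Model name and sentences are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # stand-in for a model trained on a 100M-word corpus
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def sentence_logprob(sentence: str) -> float:
    """Summed token log-probability of a sentence under the causal LM."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    # out.loss is the mean negative log-likelihood over the scored tokens
    # (all tokens except the first), so undo the mean to get a sum.
    n_scored = enc["input_ids"].size(1) - 1
    return -out.loss.item() * n_scored

aann = "The family spent a beautiful five days in London."       # AANN order
perturbed = "The family spent a five beautiful days in London."  # perturbed order

print("AANN:     ", round(sentence_logprob(aann), 2))
print("Perturbed:", round(sentence_logprob(perturbed), 2))
# A higher log-probability for the AANN sentence indicates a preference
# for the attested construction over the perturbed variant.
```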

Publication
arXiv preprint, 2024
Kanishka Misra
Postdoc at UT Austin

My research interests include Natural Language Processing, Cognitive Science, and Deep Learning.