Research

What Can AI Teach Us About Human Language Learning?

AI language models like GPT can achieve remarkably human-like fluency just from processing human language itself. This raises an intriguing question: could humans – especially children – be learning in a similar way?

My research draws inspiration from these models and tests whether the mechanisms they use are plausible for human learners. Specifically, I investigate whether the kinds of simple patterns that drive learning in models, like how often words co-occur, are also present in the language children hear, and whether they’re powerful enough to support learning. Across several studies, I’ve found that even young children can build rich word knowledge from surprisingly simple statistical patterns in everyday language, without needing complex reasoning or large-scale data.
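
As a toy illustration of what such a pattern looks like, the sketch below (my own illustrative example, with a made-up miniature corpus standing in for child-directed speech) counts how often words co-occur in the same utterance and then compares words by the overlap of their co-occurrence profiles:

```python
# Minimal sketch of a simple co-occurrence statistic: count, for each word,
# how often every other word appears in the same utterance, then compare
# words by the similarity of those profiles. The utterances are invented.
from collections import Counter, defaultdict
from math import sqrt

utterances = [
    "do you want the apple or the orange",
    "the mango is a sweet fruit",
    "eat your apple it is a sweet fruit",
    "the dog ran after the ball",
]

# Build a co-occurrence profile for each word.
cooc = defaultdict(Counter)
for utt in utterances:
    words = utt.split()
    for i, w in enumerate(words):
        for j, c in enumerate(words):
            if i != j:
                cooc[w][c] += 1

def cosine(a, b):
    """Cosine similarity between two co-occurrence profiles."""
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Words that appear in similar contexts end up with similar profiles.
print(cosine(cooc["mango"], cooc["apple"]))  # relatively high: shared food contexts
print(cosine(cooc["mango"], cooc["dog"]))    # relatively low: little contextual overlap
```

Here similarity is plain cosine over raw counts; richer models weight or normalize the counts, but the underlying signal is the same simple co-occurrence statistic.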

Representative papers:

Talks:

How Does Language Support Word Learning?

Children acquire thousands of words just from hearing language around them, without formal instruction or looking everything up in a dictionary. What features of everyday language make this possible?

My work examines how conversational context supports word learning. For example, children might infer that a “mango” is a fruit just from hearing it in the same kinds of contexts as words like “apple” and “orange.” I study how both the diversity and consistency of the language surrounding a word affect how easily it’s learned, and how these patterns help link unfamiliar words to meanings already in a child’s vocabulary.
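
As a rough illustration (again my own sketch, not the measures reported in the papers), "diversity" could be operationalized as the number of distinct words that ever surround a target word, and "consistency" as the average overlap between the contexts of any two of its occurrences:

```python
# Illustrative measures of contextual diversity and consistency for a target
# word, computed over a tiny invented set of utterances.
from itertools import combinations

def contexts(target, utterances):
    """Collect the set of surrounding words for each occurrence of `target`."""
    out = []
    for utt in utterances:
        words = utt.split()
        if target in words:
            out.append({w for w in words if w != target})
    return out

def diversity(ctxs):
    """How many distinct words ever appear around the target."""
    return len(set().union(*ctxs)) if ctxs else 0

def consistency(ctxs):
    """Average Jaccard overlap between the contexts of pairs of occurrences."""
    pairs = list(combinations(ctxs, 2))
    if not pairs:
        return 0.0
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

utterances = [
    "put the mango in the bowl",
    "the mango is a sweet fruit",
    "this mango tastes like a peach",
]
ctxs = contexts("mango", utterances)
print(diversity(ctxs), round(consistency(ctxs), 2))
```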

Representative papers:

Learning Without Even Trying

Much of what we know about the world is organized into categories like “dog,” “cup,” or “chair” that guide how we think about and interact with them. For example, just from recognizing something as a dog, we can anticipate that it is likely to walk on the ground rather than take flight, interpret its behavior as friendly or threatening, and discuss it in conversation. But how do we learn these categories in the first place?

I investigate the idea that categories emerge through incidental learning – from everyday, happenstance encounters, such as passing dogs on the street or in the park. My research shows that even when people aren’t trying to learn, they can absorb category-relevant information, with meaningful downstream consequences for building category knowledge.

Representative papers: