Title
The language system in the broader landscape of the human brain
Bio
Dr. Fedorenko is a cognitive neuroscientist who studies the human language system and its relationship with other systems. She received her Bachelor’s degree from Harvard University in 2002 and her Ph.D. from MIT in 2007. She was then awarded a K99/R00 Pathway to Independence Career Development Award from the NIH. In 2014, she joined the faculty at MGH/HMS, and in 2019 she returned to MIT, where she is currently an Associate Professor in the Department of Brain and Cognitive Sciences and a member of the McGovern Institute for Brain Research. Dr. Fedorenko uses fMRI, intracranial recordings and stimulation, EEG, MEG, and computational modeling to study adults and children, including those with developmental and acquired brain disorders, and otherwise atypical brains.
Abstract
I seek to understand how our brains understand and produce language.
I will talk about three things that my lab has discovered about the “language network”, a set of frontal and temporal brain areas that store thousands of words and constructions and use these representations to extract meaning from word sequences (to understand or decode linguistic messages) and to convert abstract ideas into word sequences (to produce or encode messages). First, the language network is highly selective for language processing. Language areas show little neural activity when individuals solve math problems, listen to music, or reason about others’ minds. Further, some individuals with severe aphasia lose the ability to understand and produce language but can still do math, play chess, and reason about the world. Thus, language does not appear to be necessary for thinking and reasoning. Second, processing the meanings of individual words and putting words together into phrases and sentences are not spatially segregated in the language network: every region within the language network is robustly sensitive to both word meanings and linguistic structure. This finding overturns the popular idea of an abstract syntactic module but aligns with evidence from behavioral psycholinguistic work, language development, and computational modeling. And third, representations from large language models like GPT-2 predict neural responses during language processing in humans, which suggests that these language models capture something about how the human language system represents linguistic information.
In the second part of the talk, I will discuss more recent and emergent research directions. These include: a) investigations of how the language system emerges during development, how it changes with experience, in aging, and in populations who use language to a greater extent (like polyglots), and how it recovers from damage; b) work that aims to understand the constraints on the functional architecture of the brain, including modularity and lateralization of function, through the study of individuals with atypical brains (e.g., individuals growing up without a temporal lobe due to early stroke); c) studies of the relationship between language and social cognition; and d) work that builds on the discovery of neural-network-to-brain alignment to determine which model features produce this alignment, as a route toward an eventual mechanistic-level understanding of how we interpret and produce language.