Abstract: |
Understanding spoken language involves a complex set of processes that transform the auditory input into a meaningful interpretation. Our percept is not of acoustic-phonetic detail but of the speaker’s intended meaning. This transition occurs on millisecond timescales, with remarkable speed and accuracy, and with no awareness of the complex computations on which it depends. How is this achieved? What are the processes and representations that support the transition from sound to meaning, and what are the neurobiological systems in which they are instantiated? Surprisingly little is known about the spatiotemporal patterning and neurocomputational properties of this complex dynamic system. In our current research, we address these issues by combining advanced techniques from neuroimaging, multivariate statistics, and computational linguistics to probe the dynamic patterns of neural activity that are elicited by spoken words and the incremental processes that combine them into syntactically and semantically coherent sentences. Computational linguistic analyses of language corpora enable us to build quantifiable models of different dimensions of language interpretation, from phonetics and phonology to argument structure and semantic integration, and we test for their presence using multivariate methods on combined electro- and magnetoencephalography (EMEG) data as the utterance unfolds in real time. In this talk, I will present the novel account of speech comprehension that is emerging from this research.