University of Chicago Linguistics Colloquium
Andrea Martin, Max Planck Institute for Psycholinguistics
Computing (de)compositional linguistic representations within the constraints of neurophysiology
Human language is a fundamental biological signal with computational properties that differ from those of other perception-action systems: hierarchical relationships between sounds, words, phrases, and sentences, and the unbounded ability to combine smaller units into larger ones, resulting in a "discrete infinity" of expressions. These properties have long made language difficult to account for from a biological systems perspective and within models of cognition. In this talk, I synthesize fundamental insights from the language sciences, computation, and neuroscience that center on the idea that time can be used to combine and separate representations. I describe how a well-supported computational model from a related area of cognition capitalizes on time and rhythm in computation in order to learn symbolic representations from unstructured data. Neuroscientific experiments can then be instrumentalized to determine whether the brain solves the problem in a similar way to the artificial network; I give an example of how this approach can be leveraged to test formal models. Finally, I synthesize evidence from cognitive neuroscience and computational modelling that suggests a formal and mechanistic alignment between structure building and cortical oscillations, and I detail the computational properties a system needs in order to learn and maintain symbolic representations within the constraints of neurophysiology. In this way, basic insights from linguistics and psycholinguistics can be integrated with the currency of neural computation.