Gabriella Vigliocco (Professor of the Psychology of Language)
Department of Experimental Psychology, University College London
Ecological Language: a multimodal approach to the study of human language learning and processing
Abstract
The human brain has evolved the ability to support communication in complex and dynamic environments. In such environments, language is learned and mostly used in face-to-face contexts, in which processing and learning draw on multiple cues, both linguistic and non-linguistic (such as gestures, eye gaze, mouth patterns and prosody). Yet, our understanding of how language is learned and processed - as well as applications of this knowledge - comes mostly from reductionist approaches in which the multimodal signal is reduced to speech or text. I will introduce our current programme of research that investigates language in real-world settings in which the listener/learner has access to -- and therefore can take advantage of -- the multiple cues provided by the speaker. I will then describe studies that aim at characterising the distribution of multimodal cues in the language used by caregivers when interacting with their children (mostly 2-4 years old), and provide data concerning how these cues are differentially distributed depending upon whether the child knows the objects being talked about (allowing us to more clearly isolate learning episodes), and whether the objects being talked about are present. I will then move to a study using EEG addressing the question of how discourse and, crucially, non-linguistic cues modulate predictions about the next word in a sentence. Throughout the talk, I will highlight the ways in which this real-world, more ecologically valid approach to the study of language holds promise across disciplines.
Biography
Christopher Manning (Thomas M. Siebel Professor in Machine Learning)
Departments of Linguistics and Computer Science, Stanford University
Multi-step reasoning for answering complex questions
Abstract
Current neural network systems have had enormous success at matching but still struggle to support multi-step inference. In this talk, I will examine two recent lines of work to address this gap, done with Drew Hudson and Peng Qi. In one line of work we have developed neural networks with explicit structure to support attention, composition, and reasoning, with an explicitly iterative inference architecture. Our Neural State Machine design also emphasizes the use of a more symbolic form of internal computation, represented as attention over symbols, which have distributed representations. Such designs encourage modularity and generalization from limited data. We show the model's effectiveness on visual question answering datasets. The second line of work makes progress in doing multi-step question answering over a large open-domain text collection. Most previous work on open-domain question answering employs a retrieve-and-read strategy, which fails when the question requires complex reasoning, because simply retrieving with the question seldom yields all the necessary supporting facts. I present a model for explainable multi-hop reasoning in open-domain QA that iterates between finding supporting facts and reading the retrieved context. This GoldEn Retriever model is not only explainable but also shows strong performance on the recent HotpotQA dataset for multi-step reasoning.
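For readers unfamiliar with the retrieve-and-read framing, the sketch below illustrates the iterative loop in plain Python. It is a toy illustration only: the retrieve, generate_followup_query, and read_answer functions are hypothetical stand-ins, not the learned components of the actual GoldEn Retriever model.

```python
# Minimal sketch of an iterative retrieve-and-read loop for multi-hop QA.
# All components here are toy, hypothetical stand-ins for learned models.

from typing import List, Optional


def retrieve(query: str, corpus: List[str], top_k: int = 2) -> List[str]:
    # Toy retriever: rank documents by naive term overlap with the query.
    q_terms = set(query.lower().split())
    scored = sorted(corpus, key=lambda doc: -len(q_terms & set(doc.lower().split())))
    return scored[:top_k]


def generate_followup_query(question: str, facts: List[str]) -> str:
    # Toy query generator: a real system would learn to write the next-hop
    # search query from the question plus the facts found so far.
    return question + " " + " ".join(facts)


def read_answer(question: str, facts: List[str]) -> Optional[str]:
    # Toy reader: a real system would run a reading-comprehension model
    # over the retrieved context; here we just return the gathered facts.
    return " | ".join(facts) if facts else None


def multi_hop_qa(question: str, corpus: List[str], hops: int = 2) -> Optional[str]:
    """Iterate between retrieving supporting facts and reformulating the query."""
    facts: List[str] = []
    query = question
    for _ in range(hops):
        facts.extend(retrieve(query, corpus))              # find supporting facts
        query = generate_followup_query(question, facts)   # write the next-hop query
    return read_answer(question, facts)                    # read the accumulated context


if __name__ == "__main__":
    docs = [
        "The Eiffel Tower is located in Paris.",
        "Paris is the capital of France.",
        "Mount Everest is the highest mountain.",
    ]
    print(multi_hop_qa("In which country is the Eiffel Tower?", docs))
```

The point of the loop is the alternation itself: each hop's retrieved facts feed the query for the next hop, so later retrievals can find supporting documents that share no terms with the original question.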
Biography
Christopher Manning is the inaugural Thomas M. Siebel Professor in Machine Learning in the Departments of Computer Science and Linguistics at Stanford University and Director of the Stanford Artificial Intelligence Laboratory (SAIL). His research goal is computers that can intelligently process, understand, and generate human language material. Manning is a leader in applying Deep Learning to Natural Language Processing, with well-known research on Tree Recursive Neural Networks, the GloVe model of word vectors, sentiment analysis, neural network dependency parsing, neural machine translation, question answering, and deep language understanding. He also focuses on computational linguistic approaches to parsing, robust textual inference and multilingual language processing, including being a principal developer of Stanford Dependencies and Universal Dependencies. He is an ACM Fellow, an AAAI Fellow, and an ACL Fellow, and a Past President of the ACL (2015). His research has won ACL, COLING, EMNLP, and CHI Best Paper Awards. He holds a B.A. (Hons) from The Australian National University and received his Ph.D. from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford. He is the founder of the Stanford NLP group (@stanfordnlp) and manages development of the Stanford CoreNLP software.