Keynotes 2017

Naomi Feldman

Title: Rational distortions of learners' linguistic input

Abstract:
Language acquisition can be modeled as a statistical inference problem: children use sentences and sounds in their input to infer linguistic structure.  However, in many cases, children learn from data whose statistical structure is distorted relative to the language they are learning.  Such distortions can arise either in the input itself, or as a result of children's immature strategies for encoding their input.  This work examines several cases in which the statistical structure of children's input differs from the language being learned.  Analyses show that these distortions of the input can be accounted for with a statistical learning framework by carefully considering the inference problems that learners solve during language acquisition.
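For illustration only (this is not a model from the talk), the short Python sketch below shows one way a learner could take a known distortion into account when inferring structure from input: it compares a toy learner whose likelihood includes an encoding-noise term with one that ignores it.  The two category means, the variances, and the data are invented for the example.

# Toy sketch (not from the talk): a learner infers which of two categories
# generated each observation, where observations are distorted by additive
# encoding noise.  Category means, variances, and the noise level are
# invented for illustration.
import math
import random

random.seed(0)

CAT_MEANS = [-1.0, 1.0]      # hypothetical category centers
CAT_VAR = 0.25               # within-category variance
NOISE_VAR = 0.5              # variance of the distortion added during encoding

# Simulate distorted input: sample a category, then add encoding noise.
data = [random.gauss(CAT_MEANS[random.randrange(2)],
                     math.sqrt(CAT_VAR + NOISE_VAR))
        for _ in range(200)]

def gaussian(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# A learner that models the distortion uses the combined variance in its
# likelihood; one that ignores it treats the data as undistorted and
# misestimates how peaked the categories are.
def posterior(x, var):
    likes = [gaussian(x, m, var) for m in CAT_MEANS]
    z = sum(likes)
    return [l / z for l in likes]

x = 0.4
print("aware of distortion:", posterior(x, CAT_VAR + NOISE_VAR))
print("ignores distortion: ", posterior(x, CAT_VAR))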

Biography:
Naomi Feldman is an associate professor in the Department of Linguistics and the Institute for Advanced Computer Studies at the University of Maryland.  She received her PhD in Cognitive Science from Brown University in 2011.  Her research lies at the intersection of cognitive science, computer science, and linguistics.  She uses methods from machine learning to create formal models of how people learn and represent the structure of their language, and has been developing methods that take advantage of naturalistic speech corpora to study how listeners encode information from their linguistic environment.

Chris Dyer

Title: Should Neural Network Architecture Reflect Linguistic Structure?

Abstract:
I explore the hypothesis that conventional neural network models (e.g., recurrent neural networks) have the wrong biases for making linguistically sensible generalizations when learning, and that a better class of models is based on architectures that reflect hierarchical structures for which considerable behavioral evidence exists. I focus on the problem of modeling and representing the meanings of sentences. On the generation front, I introduce recurrent neural network grammars (RNNGs), a joint, generative model of phrase-structure trees and sentences. RNNGs operate via a recursive syntactic process reminiscent of probabilistic context-free grammar generation, but decisions are parameterized using RNNs that condition on the entire (top-down, left-to-right) syntactic derivation history, thus relaxing context-free independence assumptions, while retaining a bias toward explaining decisions via "syntactically local" conditioning contexts. Experiments show that RNNGs obtain better results in generating language than models that don't exploit linguistic structure. On the representation front, I explore unsupervised learning of syntactic structures based on distant semantic supervision using a reinforcement-learning algorithm. The learner seeks a syntactic structure that provides a compositional architecture yielding a good representation for a downstream semantic task. Although the inferred structures are quite different from traditional syntactic analyses, the performance on the downstream tasks surpasses that of systems that use sequential RNNs and tree-structured RNNs based on treebank dependencies. This is joint work with Adhi Kuncoro, Dani Yogatama, Miguel Ballesteros, Phil Blunsom, Ed Grefenstette, Wang Ling, and Noah A. Smith.
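As a rough illustration of the joint, top-down generation process described above (this is a toy sketch, not the RNNG implementation from the talk), the Python below scores a derivation of a phrase-structure tree and sentence as a product of action probabilities, with each action conditioned on the full derivation history.  The vocabulary, the scoring function, and the weights are invented stand-ins for the stack, buffer, and action RNNs used in the actual model.

# Illustrative sketch (not the RNNG from the talk): a joint model over trees
# and sentences, factored into per-action probabilities conditioned on the
# entire top-down, left-to-right derivation history.
import math
import random

random.seed(0)

ACTIONS = ["NT(S)", "NT(NP)", "NT(VP)", "GEN(the)", "GEN(dog)",
           "GEN(barks)", "REDUCE"]

# Toy "parameters": one weight per action, standing in for an RNN that reads
# the whole derivation history.
weights = {a: random.uniform(-0.1, 0.1) for a in ACTIONS}

def action_logprob(history, action):
    # A real RNNG encodes the full history with neural networks; here each
    # action is scored by a toy function that peeks at the last action only.
    scores = {a: weights[a] +
                 (0.05 if history and history[-1].startswith("NT")
                          and a.startswith("GEN") else 0.0)
              for a in ACTIONS}
    z = math.log(sum(math.exp(s) for s in scores.values()))
    return scores[action] - z

# Top-down, left-to-right derivation of "(S (NP the dog) (VP barks))".
derivation = ["NT(S)", "NT(NP)", "GEN(the)", "GEN(dog)", "REDUCE",
              "NT(VP)", "GEN(barks)", "REDUCE", "REDUCE"]

logp = 0.0
history = []
for action in derivation:
    logp += action_logprob(history, action)
    history.append(action)

print("log p(tree, sentence) =", logp)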

Biography:
Chris Dyer is a research scientist at DeepMind and an assistant professor in the School of Computer Science at Carnegie Mellon University. In 2017, he received the Presidential Early Career Award for Scientists and Engineers (PECASE). His work has occasionally been nominated for best paper awards in prestigious NLP venues and has, much more occasionally, won them. He lives in London and, in his spare time, plays cello.