EECS Special Seminar: Knowledge from Language via Deep Understanding

Thursday, March 15, 2018 - 4:00pm to Friday, March 16, 2018 - 3:55pm

Event Calendar Category

EECS

Speaker Name

Danqi Chen

Affiliation

Stanford University

Building and Room number

32-G449 Patil/Kiva

Abstract

Almost all of human knowledge is now available online, but the vast majority of it is encoded in the form of human-language explanations. In this talk, I explore novel neural network approaches that open up opportunities for gaining a deep understanding of natural language text. First, I show how distributed representations enabled the building of a smaller, faster, and more accurate dependency parser for finding the structure of sentences. Then I show how related neural technologies can be used to improve the construction of knowledge bases from text. But perhaps we don't need this intermediate step, and can instead acquire knowledge and answer people's questions directly from large text collections? In the third part, I explore this possibility by reading text directly with a simple yet highly effective neural architecture for question answering.

Biography

Danqi Chen is a PhD student in Computer Science at Stanford University, working with Christopher Manning on deep learning approaches to natural language processing. Her research centers on how computers can achieve a deep understanding of human language and the information it contains. Danqi received Outstanding Paper Awards at ACL 2016 and EMNLP 2017, a Facebook Fellowship, a Microsoft Research Women's Fellowship, and an Outstanding Course Assistant Award from Stanford. Previously, she received her B.E. in Computer Science from Tsinghua University.