Tuesday, October 25, 2016 - 4:00pm
LIDS Seminar Series
University of Washington
In this talk, we'll first review how submodular functions are useful in data science for various data manipulation problems (e.g., summarization and partitioning), and how certain submodular functions — in particular, sums of concave composed with modular functions (SCMs) — are especially useful due to their practical viability and scalability. We then introduce a new class of submodular functions called deep submodular functions (DSFs). We motivate DSFs by addressing a limitation of SCMs, and situate DSFs within the broader landscape of submodular function classes, relating them both to various matroid rank functions and to SCMs. Notably, we find that DSFs constitute a strictly broader class than SCMs (thus justifying their mathematical study), while retaining all of the attractive properties of SCMs. Interestingly, some DSFs can be seen as special cases of deep neural networks (DNNs), hence the name, although DSFs still do not comprise all submodular functions. Finally, we show how to learn DSFs in a max-margin framework, and offer preliminary but encouraging empirical learning results on both synthetic and real-world data.
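To make the abstract's terminology concrete, here is a minimal sketch of an SCM function (a sum of concave functions composed with nonnegative modular functions) together with a once-nested "deep" variant in the layered spirit of DSFs. The ground set, weights, and function names below are purely illustrative assumptions, not taken from the talk; the brute-force diminishing-returns check is only practical for tiny ground sets.

```python
import math
from itertools import combinations

# Illustrative ground set and two nonnegative modular weight vectors
# (hypothetical values, one vector per concave term).
GROUND = [0, 1, 2, 3]
W = [
    {0: 1.0, 1: 2.0, 2: 0.5, 3: 1.5},
    {0: 0.3, 1: 0.0, 2: 2.0, 3: 1.0},
]

def scm(S):
    """SCM: f(S) = sum_i sqrt(m_i(S)), sqrt concave, each m_i modular."""
    return sum(math.sqrt(sum(w[e] for e in S)) for w in W)

def dsf2(S):
    """A two-layer sketch: concave compositions nested once, mirroring
    the layered structure that DSFs generalize (illustrative only)."""
    inner = [math.sqrt(sum(w[e] for e in S)) for w in W]  # first layer
    return math.sqrt(1.0 * inner[0] + 2.0 * inner[1])     # second layer

def is_submodular(f, ground):
    """Brute-force check of diminishing returns:
    f(A + e) - f(A) >= f(B + e) - f(B) for all A subset of B, e not in B."""
    sets = [set(c) for r in range(len(ground) + 1)
            for c in combinations(ground, r)]
    for A in sets:
        for B in sets:
            if A <= B:
                for e in ground:
                    if e not in B:
                        gain_A = f(A | {e}) - f(A)
                        gain_B = f(B | {e}) - f(B)
                        if gain_A < gain_B - 1e-9:
                            return False
    return True
```

Both functions pass the diminishing-returns check: a concave function of a nonnegative modular function is submodular, sums preserve submodularity, and the nested composition with nonnegative mixing weights keeps this property, which is the structural observation DSFs build on.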
Joint work with Brian Dolhansky.
Jeffrey A. Bilmes is a professor in the Department of Electrical Engineering at the University of Washington, Seattle, and an adjunct professor in the Department of Computer Science and Engineering and the Department of Linguistics. He received his Ph.D. in Computer Science from the University of California, Berkeley. He is a 2001 NSF CAREER Award winner, a 2002 CRA Digital Government Fellow, a 2008 NAE Gilbreth Lectureship award recipient, and a 2012/2013 ISCA Distinguished Lecturer. Prof. Bilmes has been working on submodularity in machine learning for more than thirteen years. For work in this area, he received best paper awards at ICML 2013, NIPS 2013, and ACM-BCB 2016. He is also a recipient of a 25-year best paper award from the International Conference on Supercomputing for his 1997 paper on high-performance matrix computations. Prof. Bilmes authored the Graphical Models Toolkit (GMTK), an optimized software system based on dynamic graphical models that is widely used in speech and language processing, bioinformatics, and human-activity recognition.