Characterizations of how neural networks learn

Monday, May 13, 2024 - 9:00am to 10:30am

Event Calendar Category

LIDS Thesis Defense

Speaker Name

Enric Boix-Adsera



Training neural network architectures on Internet-scale datasets has driven many recent advances in machine learning, yet the mechanisms by which these networks learn from data remain largely opaque. This thesis develops a mechanistic understanding of how neural networks learn in several settings, along with new tools for analyzing trained networks.

First, we study data where the labels depend on an unknown low-dimensional subspace of the input (i.e., the multi-index setting). We identify the "leap complexity", a quantity that characterizes how much data networks need in order to learn. Our analysis reveals dynamics that we also observe empirically in state-of-the-art transformer models.

Second, we study the ability of language models to learn to reason. On a family of "relational reasoning" tasks, we prove that modern transformer architectures learn to reason given enough data, whereas classical fully-connected architectures do not. Our analysis suggests small architectural modifications that improve data efficiency.

Finally, we construct new tools for interpreting trained networks: (a) a definition of distance between two models that captures their functional similarity, and (b) a distillation algorithm that extracts interpretable decision-tree structure from a trained model whenever such structure exists.

Committee: Guy Bresler, Philippe Rigollet, Constantinos Daskalakis, Emmanuel Abbe