Neural Auto-associative Memory Via Sparse Recovery

Wednesday, September 28, 2016 - 4:30pm

Event Calendar Category

LIDS & Stats Tea

Speaker Name

Ankit Rawat

Affiliation

RLE

Building and Room Number

LIDS Lounge

Abstract

An associative memory is a framework for content-addressable memory that stores a collection of message vectors (a data set) over a neural network while enabling a neurally feasible mechanism to recover any message in the data set from a noisy version of it. Designing an associative memory requires addressing two main tasks: 1) the learning phase: given a data set, learn a concise representation of it in the form of a graphical model (or a neural network); 2) the recall phase: given a noisy version of a message vector from the data set, output the correct message vector via a neurally feasible algorithm over the network learned in the learning phase.

We study the problem of designing a class of neural associative memories that learn a network representation for a large data set while ensuring correction of a large number of adversarial errors during the recall phase. Specifically, the associative memories designed in this work can store a data set containing $\exp(n)$ $n$-length message vectors over a network with $O(n)$ nodes and can tolerate $\Omega(\frac{n}{{\rm polylog}\, n})$ adversarial errors. We carry out this memory design by mapping the learning phase and the recall phase to the tasks of dictionary learning with a square dictionary and iterative error correction in an expander code, respectively.
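To give a concrete feel for the recall-phase idea, the following is a minimal illustrative sketch (not the authors' algorithm) of iterative bit-flipping error correction over a sparse binary constraint matrix, in the spirit of decoding an expander code. The matrix B, the stored-message assumption B x = 0 (mod 2), and the function name are hypothetical stand-ins for the network learned in the learning phase.

import numpy as np

def bit_flip_recall(B, y, max_iters=50):
    """Illustrative recall step: recover a stored message from its noisy copy y.

    B : (m, n) 0/1 array, a sparse constraint ("parity-check") matrix standing
        in for the learned network; every stored message x is assumed to
        satisfy B @ x = 0 (mod 2).
    y : length-n 0/1 array, a corrupted copy of some stored message.
    """
    x = y.copy()
    for _ in range(max_iters):
        violated = (B @ x) % 2      # which constraints are unsatisfied
        if not violated.any():
            return x                # every constraint holds: recall done
        unsat = B.T @ violated      # per-coordinate count of violated constraints
        degree = B.sum(axis=0)      # per-coordinate total number of constraints
        flip = unsat * 2 > degree   # flip bits where a majority of constraints are violated
        if not flip.any():
            break                   # no coordinate qualifies: stop
        x[flip] ^= 1
    return x

When B is the adjacency structure of a sufficiently good expander, this majority flipping rule removes a constant fraction of the remaining errors in each iteration, which is the kind of guarantee the recall phase builds on.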

This is joint work with Arya Mazumdar (UMass Amherst).

Biography