Classification and Dimensionality Reduction Using Geometric Level Set Methods

Shell International uses noisy measurements to distinguish between two types of rock, sandstone and shale, which is useful for oil exploration and production.  The United States Air Force and United States Army use sensor observations to determine whether a target is present or absent.  These problems, along with others such as determining from a blood test whether a patient has a particular disease, or identifying whether an email is spam based on its constituent words, are all problems of decision making under uncertainty.  The measurements and class labels are governed by a probability distribution.  If this distribution is known, then an optimal decision rule may be constructed; moreover, through the use of sufficient statistics, that rule may be specified in a space of lower dimension than the space of measurements.  Often, however, this distribution is not known.  In the field known as machine learning, algorithms instead learn decision rules from a finite set of training examples drawn from the distribution.
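For concreteness, the following is a minimal sketch of the classical known-distribution setting, assuming two Gaussian classes with a shared covariance (all names and parameter values here are illustrative).  The log-likelihood ratio is a scalar sufficient statistic, so the optimal rule reduces a d-dimensional measurement to a single number and compares it to a threshold.

```python
import numpy as np

def log_likelihood_ratio(x, mu0, mu1, cov):
    """Scalar sufficient statistic for the two-class Gaussian problem."""
    prec = np.linalg.inv(cov)
    w = prec @ (mu1 - mu0)                       # discriminant direction
    b = 0.5 * (mu0 @ prec @ mu0 - mu1 @ prec @ mu1)
    return x @ w + b                             # one number per measurement

def bayes_decide(x, mu0, mu1, cov, prior1=0.5):
    """Optimal rule: compare the statistic to the log prior odds."""
    threshold = np.log((1 - prior1) / prior1)
    return (log_likelihood_ratio(x, mu0, mu1, cov) > threshold).astype(int)

# Example: 5-dimensional measurements collapse to a scalar statistic.
rng = np.random.default_rng(0)
mu0, mu1, cov = np.zeros(5), np.ones(5), np.eye(5)
x = rng.multivariate_normal(mu1, cov, size=10)
print(bayes_decide(x, mu0, mu1, cov))
```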

Level set methods are techniques that use an implicit representation to discover boundaries that optimize some objective or solve some differential equation.  They have been used in fluid mechanics, computational geometry, image processing and computer vision (including research performed at LIDS), computer graphics, materials science, and numerous other fields.  Kush Varshney and Alan Willsky are applying level set methods to machine learning, a field in which they had not previously been applied.  The LIDS researchers have developed an algorithm, based on level set methods, that learns decision boundaries separating the regions corresponding to different class labels in the space of measurements.  In addition, they have developed a technique that jointly finds an informative reduced-dimensional linear subspace of the measurement space and decision boundaries within that subspace, via optimization on the Stiefel manifold.  This dimensionality reduction when learning from training examples is analogous to the role of sufficient statistics when the distribution is known.
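To illustrate the general idea of learning a decision boundary as the zero level set of a function, here is a minimal sketch; it is not the researchers' algorithm.  In this sketch the level set function phi is parameterized by radial basis functions centered at the training points, the margin-based empirical risk uses a hinge surrogate, and a simple coefficient-norm penalty stands in for whatever geometric regularization the actual method employs.  All function names and parameter values are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, gamma=10.0):
    """Gaussian RBF kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_level_set(X, y, lam=0.1, lr=0.1, steps=500):
    """Minimize hinge risk + (lam/2)||c||^2, with phi(x) = sum_j c_j k(x, X_j)."""
    K = rbf_kernel(X, X)
    c = np.zeros(len(X))
    for _ in range(steps):
        margins = y * (K @ c)
        active = (margins < 1).astype(float)   # hinge subgradient support
        grad = -(K.T @ (active * y)) / len(X) + lam * c
        c -= lr * grad
    return c

def phi(Xq, X, c):
    """Evaluate the learned level set function; the boundary is phi = 0."""
    return rbf_kernel(Xq, X) @ c

# Toy two-class problem in the plane; labels are +/-1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 0.5, (40, 2)), rng.normal(1.0, 0.5, (40, 2))])
y = np.array([-1.0] * 40 + [1.0] * 40)
c = fit_level_set(X, y)
print("training accuracy:", (np.sign(phi(X, X, c)) == y).mean())
```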
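For the joint dimensionality reduction, the description above says only that the subspace is found by optimization on the Stiefel manifold, the set of matrices with orthonormal columns.  A generic descent step on that manifold might look like the following sketch, which uses a standard tangent-space projection and QR retraction; this is an assumption about the machinery, not the researchers' specific update.

```python
import numpy as np

def stiefel_step(A, grad, step=0.01):
    """One descent step on the Stiefel manifold {A : A^T A = I}.

    Projects the Euclidean gradient onto the tangent space at A, takes
    a step, then retracts back onto the manifold via QR factorization.
    (Illustrative sketch; not the researchers' specific update.)
    """
    sym = 0.5 * (A.T @ grad + grad.T @ A)
    tangent = grad - A @ sym               # tangent-space projection
    Q, R = np.linalg.qr(A - step * tangent)
    d = np.sign(np.diag(R))
    d[d == 0] = 1.0                        # guard against zero diagonal entries
    return Q * d                           # sign fix keeps the retraction smooth

# Example: a point on the Stiefel manifold of 2-dimensional subspaces of R^5.
A = np.linalg.qr(np.random.default_rng(0).normal(size=(5, 2)))[0]
```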
