Dr. Sergey Yekhanin

Microsoft Research, Redmond, USA


Codes with Local Decoding Procedures


Abstract: Error correcting codes allow senders to add redundancy to messages, encoding bit strings representing messages into longer bit strings called codewords, in a way that the message can still be recovered even if a fraction of the codeword bits are corrupted. In certain settings, however, the receiver might not be interested in recovering the entire message, but rather seeks to quickly recover just a few coordinates of it. Codes that allow one to recover individual message coordinates extremely fast (locally), by accessing just a small number of carefully chosen coordinates of a corrupted codeword, are said to admit a local decoding procedure. Such codes have recently played an important role in several areas of theoretical computer science and have also been used in practice to provide reliability in large distributed storage systems. In this course we will survey existing constructions of codes with local decoding procedures as well as their main applications. The course assumes basic familiarity with the properties of finite fields and is otherwise self-contained.
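To make the local decoding idea concrete, here is a minimal sketch using the Hadamard code, a standard textbook example of a locally decodable code (chosen for illustration; the course may cover other constructions). Each message bit can be recovered from just two queries to the corrupted codeword: for a random position a, the XOR of codeword positions a and a XOR e_i equals the i-th message bit whenever both queries are uncorrupted, and a majority vote over independent trials tolerates a small fraction of corruptions.

```python
import random

def hadamard_encode(msg_bits):
    # Position a of the codeword stores the inner product <msg, a> mod 2,
    # for every a in {0,1}^k; the codeword has length 2^k.
    k = len(msg_bits)
    return [sum(b & ((a >> j) & 1) for j, b in enumerate(msg_bits)) % 2
            for a in range(2 ** k)]

def local_decode(codeword, k, i, trials=31):
    # Recover message bit i with only 2 queries per trial: for random a,
    # codeword[a] XOR codeword[a ^ e_i] equals x_i when both queries are clean.
    # A majority vote over trials tolerates a small fraction of corruptions.
    votes = sum(codeword[a] ^ codeword[a ^ (1 << i)]
                for a in (random.randrange(2 ** k) for _ in range(trials)))
    return 1 if 2 * votes > trials else 0

random.seed(1)                                   # reproducible demo
msg = [1, 0, 1, 1]
cw = hadamard_encode(msg)                        # 16 codeword bits
cw[5] ^= 1                                       # corrupt one codeword position
decoded = [local_decode(cw, 4, i) for i in range(4)]
```

Note that each trial touches only two codeword positions, regardless of the message length; the price is the exponential blow-up in codeword length, which is exactly the trade-off that more sophisticated constructions improve.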

Prof. Babak Hassibi

California Institute of Technology, Pasadena, USA


Non-Smooth Convex Optimization and Structured Signal Recovery

Abstract: In the past couple of decades, non-smooth convex optimization has emerged as a powerful tool for the recovery of structured signals (sparse, low rank, etc.) from possibly noisy measurements in a variety of applications in statistics, signal processing, machine learning, etc. In particular, the advent of compressed sensing has led to a flowering of ideas and methods in this area. While the algorithms (basis pursuit, LASSO, etc.) are fairly well established, rigorous frameworks for the exact analysis of the performance of such methods are only just emerging. The goal of this tutorial is to develop and describe a fairly general theory for determining the performance (minimum number of measurements, mean-square error, etc.) of such methods for various measurement ensembles (Gaussian, Haar, etc.). This will allow researchers and practitioners to assess the performance of these methods before actual implementation and to optimally choose parameters such as regularizer coefficients, the number of measurements, etc. The theory includes all earlier results as special cases. It builds on an inconspicuous 1962 lemma of Slepian (on comparing Gaussian processes), as well as on a non-trivial generalization due to Gordon in 1988, and introduces concepts from convex geometry (such as Gaussian widths) in a very natural way. The tutorial will explain all this, and its various implications, in some detail.