Courses

The following courses will be given at the school (tentative titles and abstracts as of January 12, 2017):

  • High Dimensional Probability and the Geometry of Random Matrices, with Applications to Sparse Recovery
    Bernhard G. Bodmann, University of Houston, USA
    This course reviews spectral and geometric results in matrix theory that have become popular in the context of compressed sensing. The first part starts with measure concentration for random variables in high dimensional spaces, including matrices with independent, identically distributed subgaussian entries or sums of random rank-one projections. This leads to the restricted isometry property for certain classes of random matrices, a property well known to be a sufficient condition for sparse recovery by norm minimization. In the second part, a refined analysis establishes recovery guarantees, based on the robust null-space property and the robust width, for suitably randomized linear measurements.
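
    As a minimal illustration of this recovery setup (dimensions and sparsity level are arbitrary choices, not taken from the course), the following Python sketch draws a Gaussian measurement matrix, measures a sparse vector, and recovers it by basis pursuit, i.e. ℓ1-norm minimization cast as a linear program.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(0)
        n, m, s = 200, 80, 8                           # signal length, measurements, sparsity

        A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian (subgaussian) measurement matrix
        x_true = np.zeros(n)
        x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
        b = A @ x_true                                 # noiseless measurements

        # Basis pursuit  min ||x||_1  subject to  A x = b,
        # rewritten as a linear program with x = u - v and u, v >= 0.
        c = np.ones(2 * n)
        res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
        x_hat = res.x[:n] - res.x[n:]

        print("recovery error:", np.linalg.norm(x_hat - x_true))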

  • Low Rank Tensor Formats
    Lars Grasedyck, RWTH Aachen, Germany
    The first half of the lectures gives a general introduction to low rank tensor formats for the data sparse representation of higher order tensors and multivariate functions, using Canonical Polyadic (CP), Tucker, Tensor Train (TT) and Hierarchical Tucker (HT) representations. Each of the formats comes with its own notion of rank in higher dimensions, and all of them collapse to the classical matrix rank in two dimensions.
    Each notion of rank gives rise to a different set or manifold of tensors of fixed rank, and we compare the advantages and drawbacks of the different formats. The concepts are introduced in the discrete setting, where a tensor is a mapping defined on a Cartesian product of finite index sets, but we also point out the relation to multivariate functions, i.e. the continuous setting. We summarize some interesting open questions that lead to new and sometimes very difficult areas of research.
    The second half of the lectures focuses on the application (in the broadest sense) of low rank tensors, e.g. in model reduction, uncertainty quantification, high dimensional partial differential equations and big data analysis. The most challenging question in all of these applications is whether the object of interest, to be represented in one of the data sparse formats, admits a low rank approximation, and how such an approximation can be found.
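
    As a minimal illustration of one of these formats (the tensor, tolerance and sizes are arbitrary choices, not taken from the course), the following Python sketch computes a Tensor Train decomposition by successive truncated SVDs, in the spirit of the TT-SVD idea; the number of retained singular values at each step gives the TT ranks.

        import numpy as np

        def tt_svd(tensor, tol=1e-10):
            """Decompose a d-way tensor into tensor train cores by successive
            truncated SVDs; the retained singular values set the TT ranks."""
            dims = tensor.shape
            cores, rank = [], 1
            mat = tensor.reshape(rank * dims[0], -1)
            for k in range(len(dims) - 1):
                U, S, Vt = np.linalg.svd(mat, full_matrices=False)
                r_new = max(1, int(np.sum(S > tol * S[0])))          # truncation
                cores.append(U[:, :r_new].reshape(rank, dims[k], r_new))
                mat = (S[:r_new, None] * Vt[:r_new]).reshape(r_new * dims[k + 1], -1)
                rank = r_new
            cores.append(mat.reshape(rank, dims[-1], 1))
            return cores

        # A 4-way tensor sampled from a separable function has all TT ranks 1.
        x = np.linspace(0.0, 1.0, 6)
        tensor = np.einsum('i,j,k,l->ijkl', np.sin(x), np.cos(x), x, x ** 2)
        cores = tt_svd(tensor)
        print("TT ranks:", [core.shape[2] for core in cores[:-1]])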

  • Sparsity and Optimization for Large Scale Inverse Problems
    Serge Gratton, University of Toulouse, INP-IRIT, France
    Large scale problems involving data, and possibly models, are of interest in many applications where a system is to be better understood or its behaviour has to be forecast. From the algorithmic point of view, these problems lead to large scale filtering problems that have to be solved in a well controlled computational time, especially when the system to be forecast is a real physical system, as is the case, for instance, in meteorological applications. We will consider both deterministic and randomized solution methods amenable to large scale parallel computers.
    The objective of the course is to present the current landscape of methods in use in this area, putting special emphasis on how the problem structure has to be taken into account when problems with millions, sometimes billions, of unknowns are considered. Whenever possible, solution methods will benefit from a partially separable structure of the problem, as may occur when partial differential equations are involved. They may also track, explicitly or implicitly, low dimensional geometrical objects that capture the dominant behaviour of the system at hand, as a way to reduce the computational complexity. Finally, following the recent trend in data analysis, appropriate sparse representations of signals may be used to improve the observability of the problems where appropriate.
    A key feature of the course will be an extension of successful numerical analysis techniques (preconditioning, stopping criteria, domain decomposition, multigrid, complexity issues) to the more general setting of randomized methods for large scale nonlinear and nonconvex optimization problems.
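
    As a small, self-contained illustration of the randomized flavour of such methods (the sizes and sketch dimension are arbitrary choices, and this is only a toy stand-in for the large scale solvers discussed in the course), the following Python sketch solves an overdetermined least-squares problem approximately by compressing its rows with a Gaussian sketching matrix before solving.

        import numpy as np

        # Overdetermined least-squares problem  min_x ||A x - b||_2 with many
        # more rows than unknowns; sizes are arbitrary illustrative choices.
        rng = np.random.default_rng(0)
        m, n, k = 20000, 50, 400                       # rows, unknowns, sketch size

        A = rng.standard_normal((m, n))
        b = A @ rng.standard_normal(n) + 0.01 * rng.standard_normal(m)

        # Sketch-and-solve: compress the rows with a Gaussian sketching matrix
        # and solve the much smaller k x n problem instead of the full one.
        S = rng.standard_normal((k, m)) / np.sqrt(k)
        x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
        x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)

        print("relative difference:",
              np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact))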

  • Sparsity in Optimization
    Deanna Needell, UCLA, USA
    This course will cover various topics in optimization that rely on sparsity. Sparsity is an abstract concept that describes the ability of an object to be represented in a low dimensional fashion. For example, a large data matrix may be extremely low-rank, a vector may have only a small number of non-zero entries, or an analog signal may be a superposition of only a few frequency components. When an object is known to have such a sparse representation, recent work in compressed sensing and related fields demonstrates that the object can be accurately captured from a small number of measurements or observations. From these observations, efficient optimization techniques are needed to recover the signal. This course will provide the background to understand these techniques as well as their mathematical formulations and theoretical guarantees. Applications, including medical imaging, sensor network monitoring, video, and many others, will also be discussed.
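
    As a minimal illustration of a sparsity-exploiting recovery method of the kind alluded to above (the algorithm choice, dimensions and step size are arbitrary and serve only as an example), the following Python sketch runs iterative hard thresholding on a small synthetic compressed sensing problem.

        import numpy as np

        def iht(A, b, s, n_iters=500):
            """Iterative hard thresholding: take a gradient step on ||Ax - b||^2,
            then keep only the s largest-magnitude entries."""
            step = 1.0 / np.linalg.norm(A, 2) ** 2     # conservative step size
            x = np.zeros(A.shape[1])
            for _ in range(n_iters):
                x = x + step * (A.T @ (b - A @ x))
                x[np.argsort(np.abs(x))[:-s]] = 0.0    # hard threshold to s entries
            return x

        rng = np.random.default_rng(1)
        m, n, s = 100, 256, 8
        A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
        x_true = np.zeros(n)
        x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
        b = A @ x_true

        x_hat = iht(A, b, s)
        print("recovery error:", np.linalg.norm(x_hat - x_true))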