Data Don't Lie: Image Processing via Learned Efficient Representations
In a large number of disciplines, image acquisition has advanced much faster than image analysis. This makes it possible to replace a number of pre-defined concepts with learned ones; in particular, we can learn efficient image representations. Among these, sparse representations have recently drawn much attention from the signal processing and learning communities. The basic underlying model considers that natural images, or signals in general, admit a sparse decomposition in some redundant dictionary: we can find a linear combination of a few atoms from the dictionary that leads to an efficient representation of the original signal. Recent results have shown that learning (overcomplete) non-parametric dictionaries for image representation, instead of using off-the-shelf ones, significantly improves numerous image and video processing tasks.
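As a concrete illustration of the sparse model above (not code from the talk), the following minimal sketch codes a signal in a redundant dictionary via greedy Orthogonal Matching Pursuit; the dictionary, signal sizes, and support are synthetic assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 16, 64                      # signal dimension, number of atoms (overcomplete: k > n)
D = rng.standard_normal((n, k))
D /= np.linalg.norm(D, axis=0)     # unit-norm atoms

# Build a signal that is exactly a combination of 3 atoms (hypothetical support).
x_true = np.zeros(k)
x_true[[5, 20, 41]] = [1.5, -2.0, 0.7]
y = D @ x_true

def omp(D, y, sparsity):
    """Greedy sparse coding: pick the atom most correlated with the
    residual, re-fit coefficients on the selected support, repeat."""
    residual, support = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x[support] = coeffs
    return x

x_hat = omp(D, y, sparsity=3)
print(np.flatnonzero(x_hat))           # at most 3 active atoms
print(np.linalg.norm(y - D @ x_hat))   # residual of the sparse fit
```

The point of the sketch is only the model itself: a few columns of `D` suffice to represent `y`, which is what makes the representation efficient.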
In this talk, I will first briefly present results on learning multiscale overcomplete dictionaries for color image and video restoration, describing the framework and providing numerous examples with state-of-the-art results. I will then show how to extend this to image classification, deriving energies and optimization procedures that lead to learning non-parametric dictionaries for sparse representations optimized for classification. I will conclude with results on extending this framework to sensing and to the learning of incoherent dictionaries. Models derived from universal coding will be presented as well.
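Dictionary learning of the kind discussed above is commonly cast as alternating optimization. The sketch below is a generic scheme in the spirit of MOD (sparse-code with the current dictionary, then refit the dictionary by least squares), not the speaker's algorithm; all sizes, the synthetic training set, and the iteration count are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, m, s = 16, 32, 400, 3   # signal dim, atoms, training signals, sparsity

# Synthetic training set drawn from a hidden ground-truth dictionary.
D_true = rng.standard_normal((n, k))
D_true /= np.linalg.norm(D_true, axis=0)
Y = np.zeros((n, m))
for i in range(m):
    idx = rng.choice(k, size=s, replace=False)
    Y[:, i] = D_true[:, idx] @ rng.standard_normal(s)

def omp(D, y, sparsity):
    """Greedy sparse coding: pick the most correlated atom, refit, repeat."""
    residual, support = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x[support] = coeffs
    return x

# Alternate between sparse coding and a least-squares dictionary update.
D = rng.standard_normal((n, k))
D /= np.linalg.norm(D, axis=0)
for _ in range(10):
    # Sparse-coding step: code every training signal with the current D.
    X = np.column_stack([omp(D, Y[:, i], s) for i in range(m)])
    # Dictionary-update step (MOD-style): least-squares fit of D to the codes.
    D = Y @ X.T @ np.linalg.pinv(X @ X.T)
    D /= np.linalg.norm(D, axis=0) + 1e-12   # keep atoms unit-norm

X = np.column_stack([omp(D, Y[:, i], s) for i in range(m)])
rel_err = np.linalg.norm(Y - D @ X) / np.linalg.norm(Y)
print(rel_err)   # the learned dictionary reconstructs the training set well
```

The classification and incoherence variants mentioned in the abstract modify the energy being minimized (adding discriminative or coherence-penalizing terms), but keep this same alternating structure.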
The work I present in this talk is the result of great collaborations with J. Mairal, F. Rodriguez, J. Martin-Duarte, I. Ramirez, F. Lecumberry, F. Bach, M. Elad, J. Ponce, and A. Zisserman.
Guillermo Sapiro, University of Minnesota