TY - BOOK
T1 - Greedy adaptive algorithms for sparse representations
AU - Onose, Alexandru
N1 - Awarding institution: Tampere University of Technology
PY - 2014/5/13
Y1 - 2014/5/13
N2 - A vector or matrix is said to be sparse if the number of non-zero elements is significantly smaller than the number of zero elements. In estimation theory, the vector of model parameters may be known in advance to have a sparse structure, and solving the estimation problem under this constraint can substantially improve the accuracy of the solution. The theory of sparse models has advanced significantly in recent years, providing many results that guarantee certain properties of the sparse solutions. These performance guarantees can be very powerful in applications and have no counterpart in the estimation theory for non-sparse models.
Model sparsity is an inherent characteristic of many applications in signal processing and related areas (image compression, wireless channel estimation, direction of arrival estimation). Due to continuous technological advances that allow faster numerical computation, optimization problems that were too complex to solve in the past can now be addressed while also taking sparsity constraints into account. However, finding sparse solutions exactly generally requires a combinatorial search for the correct support, which is prohibitive due to its huge numerical complexity. This has motivated a growing interest in batch sparsity-aware algorithms over the past twenty years.
More recently, the main goal of research on sparsity has been the quest for faster, less computationally intensive, adaptive methods able to recursively update the solution. In this thesis we present several such algorithms. They are greedy in nature and minimize the least squares criterion under the constraint that the solution is sparse. As in other greedy sparse methods, two main steps are performed whenever new data become available: the sparse support is updated by changing the positions that contribute to the solution, and the coefficients are computed by minimizing the least squares criterion restricted to the current support. Two classes of adaptive algorithms are proposed.
The first is derived from the batch matching pursuit algorithm. It uses a coordinate descent approach to update the solution, each coordinate being selected by a criterion similar to that of matching pursuit. We devised two algorithms that use a cyclic update strategy to improve the solution at each time instant. Since the solution support and coefficient values are assumed to vary slowly, a faster and better-performing approach is then proposed that spreads the coordinate descent updates over time. It was also adapted to a distributed setup in which nodes communicate with their neighbors to improve their local solutions towards a global optimum.
The second direction is linked to batch orthogonal least squares. The algorithms maintain a partial QR decomposition with pivoting and use a permutation-based support selection strategy to ensure low complexity while allowing slow variations in the support to be tracked. Two versions of the algorithm are proposed; they forget past data using an exponential window and a sliding window, respectively. The former was further modified to improve the solution in the structured sparsity case, when the solution is group sparse.
We also propose mechanisms for estimating the sparsity level online. They are based on information theoretic criteria, namely predictive least squares and the Bayesian information criterion.
The main contributions are the development of the adaptive greedy algorithms and the use of information theoretic criteria that enable the algorithms to behave robustly. The algorithms perform well, require limited prior information and are computationally efficient. In general, the configuration parameters, where they exist, can easily be chosen as a trade-off between the steady-state error and the convergence speed.
AB - A vector or matrix is said to be sparse if the number of non-zero elements is significantly smaller than the number of zero elements. In estimation theory, the vector of model parameters may be known in advance to have a sparse structure, and solving the estimation problem under this constraint can substantially improve the accuracy of the solution. The theory of sparse models has advanced significantly in recent years, providing many results that guarantee certain properties of the sparse solutions. These performance guarantees can be very powerful in applications and have no counterpart in the estimation theory for non-sparse models.
Model sparsity is an inherent characteristic of many applications in signal processing and related areas (image compression, wireless channel estimation, direction of arrival estimation). Due to continuous technological advances that allow faster numerical computation, optimization problems that were too complex to solve in the past can now be addressed while also taking sparsity constraints into account. However, finding sparse solutions exactly generally requires a combinatorial search for the correct support, which is prohibitive due to its huge numerical complexity. This has motivated a growing interest in batch sparsity-aware algorithms over the past twenty years.
More recently, the main goal of research on sparsity has been the quest for faster, less computationally intensive, adaptive methods able to recursively update the solution. In this thesis we present several such algorithms. They are greedy in nature and minimize the least squares criterion under the constraint that the solution is sparse. As in other greedy sparse methods, two main steps are performed whenever new data become available: the sparse support is updated by changing the positions that contribute to the solution, and the coefficients are computed by minimizing the least squares criterion restricted to the current support. Two classes of adaptive algorithms are proposed.
The first is derived from the batch matching pursuit algorithm. It uses a coordinate descent approach to update the solution, each coordinate being selected by a criterion similar to that of matching pursuit. We devised two algorithms that use a cyclic update strategy to improve the solution at each time instant. Since the solution support and coefficient values are assumed to vary slowly, a faster and better-performing approach is then proposed that spreads the coordinate descent updates over time. It was also adapted to a distributed setup in which nodes communicate with their neighbors to improve their local solutions towards a global optimum.
The second direction is linked to batch orthogonal least squares. The algorithms maintain a partial QR decomposition with pivoting and use a permutation-based support selection strategy to ensure low complexity while allowing slow variations in the support to be tracked. Two versions of the algorithm are proposed; they forget past data using an exponential window and a sliding window, respectively. The former was further modified to improve the solution in the structured sparsity case, when the solution is group sparse.
We also propose mechanisms for estimating the sparsity level online. They are based on information theoretic criteria, namely predictive least squares and the Bayesian information criterion.
The main contributions are the development of the adaptive greedy algorithms and the use of information theoretic criteria that enable the algorithms to behave robustly. The algorithms perform well, require limited prior information and are computationally efficient. In general, the configuration parameters, where they exist, can easily be chosen as a trade-off between the steady-state error and the convergence speed.
M3 - Doctoral thesis
SN - 978-952-15-3282-5
T3 - Tampere University of Technology. Publication
BT - Greedy adaptive algorithms for sparse representations
PB - Tampere University of Technology
CY - Tampere
ER -