Stojanac, Željka: Low-rank Tensor Recovery. - Bonn, 2016. - Dissertation, Rheinische Friedrich-Wilhelms-Universität Bonn.
Online edition in bonndoc: https://nbn-resolving.org/urn:nbn:de:hbz:5n-45021
@phdthesis{handle:20.500.11811/6904,
urn = {https://nbn-resolving.org/urn:nbn:de:hbz:5n-45021},
author = {Stojanac, Željka},
title = {Low-rank Tensor Recovery},
school = {Rheinische Friedrich-Wilhelms-Universität Bonn},
year = 2016,
month = oct,

note = {Low-rank tensor recovery is an interesting subject from both the theoretical and the application point of view. On the one hand, it is a natural extension of the sparse vector and low-rank matrix recovery problems. On the other hand, estimating a low-rank tensor has applications in many different areas such as machine learning, video compression, and seismic data interpolation. In this thesis, two approaches are introduced. The first is a convex optimization approach and can be considered a tractable extension of $\ell_1$-minimization for sparse vector recovery and nuclear norm minimization for matrix recovery to the tensor scenario. It is based on theta bodies, a recently introduced tool from real algebraic geometry. In particular, the theta bodies of an appropriately defined polynomial ideal correspond to unit-theta-norm balls, which are relaxations of the unit tensor nuclear norm ball. Thus, in this case, we consider the canonical tensor format. The method requires computing the reduced Gröbner basis (with respect to the graded reverse lexicographic ordering) of the appropriately defined polynomial ideal. Numerical results for third-order tensor recovery via $\theta_1$-norm minimization are provided. The second approach is a generalization of the iterative hard thresholding algorithm for sparse vector and low-rank matrix recovery to the tensor scenario (tensor IHT, or TIHT, algorithm). Here, we consider the Tucker format, the tensor train decomposition, and the hierarchical Tucker decomposition. The analysis of the algorithm is based on a version of the restricted isometry property (tensor RIP, or TRIP) adapted to the tensor decomposition at hand. We show that subgaussian measurement ensembles satisfy TRIP with high probability under an almost optimal condition on the number of measurements. Additionally, we show that partial Fourier maps combined with random sign flips of the tensor entries satisfy TRIP with high probability. Under the assumption that the linear operator satisfies TRIP, and under an additional assumption on the thresholding operator, we provide a linear convergence result for the TIHT algorithm. Finally, we present numerical results on the recovery of low-Tucker-rank third-order tensors via partial Fourier maps combined with random sign flips of the tensor entries, tensor completion, and Gaussian measurement ensembles.},
url = {https://hdl.handle.net/20.500.11811/6904}
}
