440 research outputs found
Iterative Singular Tube Hard Thresholding Algorithms for Tensor Completion
Due to the explosive growth of large-scale data sets, tensors have been a
vital tool to analyze and process high-dimensional data. Different from the
matrix case, tensor decomposition has been defined in various formats, which
can be further used to define the best low-rank approximation of a tensor to
significantly reduce the dimensionality for signal compression and recovery. In
this paper, we consider the low-rank tensor completion problem. We propose a
novel class of iterative singular tube hard thresholding algorithms for tensor
completion based on the low-tubal-rank tensor approximation, including basic,
accelerated deterministic and stochastic versions. Convergence guarantees are
provided, including the special case in which the measurements are linear.
Numerical experiments on tensor compressive sensing and color image inpainting
demonstrate the convergence and computational efficiency of the proposed
algorithms in practice.
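The basic variant described above can be sketched in a few lines: take a gradient step on the observed entries, then project back onto the set of low-tubal-rank tensors via the t-SVD (FFT along the tubes, truncated SVD per frontal slice). This is a minimal illustration of the general idea, not the paper's exact algorithm; the function names, step size, and stopping rule are our assumptions.

```python
import numpy as np

def tubal_rank_truncate(X, r):
    """Project X onto tensors of tubal rank at most r via the t-SVD:
    FFT along the third mode, truncate each frontal slice's SVD, invert."""
    Xf = np.fft.fft(X, axis=2)
    for k in range(X.shape[2]):
        U, s, Vt = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        Xf[:, :, k] = (U[:, :r] * s[:r]) @ Vt[:r]
    return np.real(np.fft.ifft(Xf, axis=2))

def istht(Y, mask, r, step=1.0, iters=500):
    """Sketch of basic iterative singular tube hard thresholding:
    gradient step on the observed entries, then tubal-rank projection."""
    X = np.zeros_like(Y)
    for _ in range(iters):
        X = tubal_rank_truncate(X + step * mask * (Y - X), r)
    return X
```

With enough observed entries relative to the tubal rank, the iterates fit the observations while the projection fills in the missing ones.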
Regularization Methods for Image Denoising and Underwater Image Restoration
Thesis (Ph.D.)--Seoul National University Graduate School: College of Natural Sciences, Department of Mathematical Sciences, February 2020. Advisor: Myungjoo Kang.
In this thesis, we discuss regularization methods for denoising images corrupted by Gaussian or Cauchy noise and for underwater image dehazing. For image denoising, we introduce a second-order extension of structure tensor total variation and propose a hybrid method for additive Gaussian noise. Furthermore, we apply the weighted nuclear norm within a nonlocal framework to remove additive Cauchy noise from images, adopting a nonconvex alternating direction method of multipliers (ADMM) to solve the problem iteratively. Subsequently, building on the color ellipsoid prior, which is effective for restoring hazy images taken in the atmosphere, we propose a novel dehazing method adapted to underwater conditions. Because the attenuation rate of light in water varies with wavelength, we apply the color ellipsoid prior only to the green and blue channels and combine the resulting depth map with the intensity map of the red channel to refine it further. Numerical experiments show that the proposed methods outperform existing methods in both quantitative and qualitative evaluations.
1 Introduction 1
1.1 Image denoising for Gaussian and Cauchy noise 2
1.2 Underwater image dehazing 5
2 Preliminaries 9
2.1 Variational models for image denoising 9
2.1.1 Data-fidelity 9
2.1.2 Regularization 11
2.1.3 Optimization algorithm 14
2.2 Methods for image dehazing in the air 15
2.2.1 Dark channel prior 16
2.2.2 Color ellipsoid prior 19
3 Image denoising for Gaussian and Cauchy noise 23
3.1 Second-order structure tensor and hybrid STV 23
3.1.1 Structure tensor total variation 24
3.1.2 Proposed model 28
3.1.3 Discretization of the model 31
3.1.4 Numerical algorithm 35
3.1.5 Experimental results 37
3.2 Weighted nuclear norm minimization for Cauchy noise 46
3.2.1 Variational models for Cauchy noise 46
3.2.2 Low rank minimization by weighted nuclear norm 52
3.2.3 Proposed method 55
3.2.4 ADMM algorithm 56
3.2.5 Numerical method and experimental results 58
4 Image restoration in underwater 71
4.1 Scientific background 72
4.2 Proposed method 73
4.2.1 Color ellipsoid prior on underwater 74
4.2.2 Background light estimation 78
4.3 Experimental results 80
5 Conclusion 87
Appendices 89
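The weighted nuclear norm step used for Cauchy-noise removal (Section 3.2) has a convenient closed form: weighted singular value thresholding applied to each nonlocal patch matrix. The following is a minimal sketch of that proximal step, not the thesis's full method; the function name and the uniform weights in the example are illustrative.

```python
import numpy as np

def prox_weighted_nuclear(M, weights):
    """Weighted singular value thresholding: shrink the i-th singular
    value of M by weights[i] and reassemble. Choosing smaller weights
    for larger singular values preserves the dominant low-rank structure."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)
    return (U * s_shrunk) @ Vt
```

With uniform weights this reduces to plain singular value thresholding; inside an ADMM loop it is applied to stacks of similar patches gathered by the nonlocal framework.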
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.
Comment: 232 pages
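As a concrete illustration of the tensor train (TT) format emphasized above, the standard TT-SVD construction builds the cores by sequential reshaping and truncated SVDs. This is a minimal sketch under our own naming, with truncation by a single maximum rank rather than an error tolerance:

```python
import numpy as np

def tt_svd(X, max_rank):
    """Decompose X into tensor-train cores via sequential truncated SVDs."""
    dims = X.shape
    cores, r_prev = [], 1
    C = X
    for k in range(len(dims) - 1):
        C = C.reshape(r_prev * dims[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(max_rank, s.size)
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = s[:r, None] * Vt[:r]     # carry the remainder to the next core
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a full tensor."""
    X = cores[0]
    for G in cores[1:]:
        X = np.tensordot(X, G, axes=1)  # contract last axis with first
    return X.reshape([G.shape[1] for G in cores])
```

The cores have shapes (r_{k-1}, d_k, r_k); storage grows linearly in the number of modes for bounded TT ranks, which is the source of the scalability discussed in the text.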
- …