535 research outputs found
A Nonconvex Projection Method for Robust PCA
Robust principal component analysis (RPCA) is a well-studied problem with the
goal of decomposing a matrix into the sum of low-rank and sparse components. In
this paper, we propose a nonconvex feasibility reformulation of the RPCA problem
and apply an alternating projection method to solve it. To the best of our
knowledge, we are the first to propose a method that solves the RPCA problem
without considering any objective function, convex relaxation, or surrogate
convex constraints. We demonstrate through extensive numerical experiments on a
variety of applications, including shadow removal, background estimation, face
detection, and galaxy evolution, that our approach matches and often
significantly outperforms the current state-of-the-art.
Comment: In the proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19).
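Although the paper's own projection operators are more elaborate, the feasibility viewpoint can be illustrated with a plain alternating-projection loop between a rank-r constraint set and a k-sparse constraint set; the function names and parameters below are illustrative, not the authors':

```python
import numpy as np

def proj_rank(M, r):
    # Project onto the set of matrices of rank at most r (truncated SVD).
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def proj_sparse(M, k):
    # Project onto the set of matrices with at most k nonzeros:
    # keep the k largest-magnitude entries, zero the rest.
    S = np.zeros_like(M)
    idx = np.unravel_index(np.argsort(np.abs(M), axis=None)[-k:], M.shape)
    S[idx] = M[idx]
    return S

def alt_proj_rpca(D, r, k, iters=100):
    # Alternate between the two constraint sets so that D ≈ L + S.
    L = np.zeros_like(D)
    for _ in range(iters):
        S = proj_sparse(D - L, k)
        L = proj_rank(D - S, r)
    return L, S
```

Each projection is exact for its (nonconvex) set: truncated SVD for the rank constraint, hard thresholding for the sparsity constraint.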
Machine learning for flow field measurements: a perspective
Advancements in machine-learning (ML) techniques are driving a paradigm shift in image processing, and flow diagnostics with optical techniques is no exception. Considering the existing and foreseeable disruptive developments in flow field measurement techniques, we elaborate this perspective with a particular focus on the field of particle image velocimetry. The driving forces behind the advancements in ML methods for flow field measurements in recent years are reviewed in terms of image preprocessing, data treatment, and conditioning. Finally, possible routes for further developments are highlighted.
Stefano Discetti acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 949085). Yingzheng Liu acknowledges financial support from the National Natural Science Foundation of China (11725209).
The transformative potential of machine learning for experiments in fluid mechanics
The field of machine learning has rapidly advanced the state of the art in
many fields of science and engineering, including experimental fluid dynamics,
which is one of the original big-data disciplines. This perspective will
highlight several aspects of experimental fluid mechanics that stand to benefit
from advances in machine learning, including: 1) augmenting the fidelity and quality of measurement techniques, 2) improving experimental design and surrogate digital-twin models, and 3) enabling real-time estimation and control. In each case, we discuss recent success stories and ongoing challenges, along with caveats and limitations, and outline the potential for new avenues of ML-augmented and ML-enabled experimental fluid mechanics.
On recursive least-squares filtering algorithms and implementations
In many real-time signal processing applications, fast and numerically stable algorithms for solving least-squares problems are necessary and important. In particular, under non-stationary conditions, these algorithms must be able to adapt themselves to reflect changes in the system and make appropriate adjustments to achieve optimum performance. Among existing algorithms, the QR-decomposition (QRD)-based recursive least-squares (RLS) methods have been shown to be useful and effective for adaptive signal processing. In order to increase the speed of processing and achieve a high throughput rate, many algorithms are being vectorized and/or pipelined to facilitate high degrees of parallelism. A time-recursive formulation of RLS filtering employing block QRD is considered first. Several methods, including a new non-continuous windowing scheme based on selectively rejecting contaminated data, are investigated for adaptive processing. Based on systolic triarrays, many other forms of systolic arrays are shown to be capable of implementing different algorithms. Various updating and downdating systolic algorithms and architectures for RLS filtering are examined and compared in detail, including the Householder reflector, the Gram-Schmidt procedure, and Givens rotation. A unified approach encompassing existing square-root-free algorithms is also proposed. For the sinusoidal spectrum estimation problem, a judicious method of separating the noise from the signal is of great interest. Various truncated QR methods are proposed for this purpose and compared to the truncated SVD method. Computer simulations are provided for detailed comparisons and show the effectiveness of these methods. This thesis deals with fundamental issues of numerical stability, computational efficiency, adaptivity, and VLSI implementation for RLS filtering problems.
In all, various new and modified algorithms and architectures are proposed and analyzed; the significance of each new method depends crucially on the specific application.
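Setting aside the QRD and systolic implementations the thesis studies, the underlying recursion can be sketched with the textbook exponentially weighted RLS update (a simplified illustration, not one of the thesis's algorithms; `lam` is the forgetting factor):

```python
import numpy as np

def rls(xs, ds, lam=0.99, delta=100.0):
    # Exponentially weighted recursive least squares: update the weight
    # vector w at each step so that d[n] ≈ w @ x[n], with older samples
    # discounted by the forgetting factor lam.
    n = xs.shape[1]
    w = np.zeros(n)
    P = delta * np.eye(n)               # inverse weighted correlation matrix
    for x, d in zip(xs, ds):
        g = P @ x / (lam + x @ P @ x)   # gain vector
        e = d - w @ x                   # a-priori error
        w = w + g * e
        P = (P - np.outer(g, x @ P)) / lam
    return w
```

Keeping `P` (the inverse correlation matrix) recursively is what makes each update O(n^2) instead of re-solving the normal equations from scratch.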
Adaptive Scattered Data Fitting with Tensor Product Spline-Wavelets
The core of the work we present here is an algorithm that constructs a least squares approximation to a given set of unorganized points. The approximation is expressed as a linear combination of particular B-spline wavelets. It implies a multiresolution setting which constructs a hierarchy of approximations to the data with increasing level of detail, proceeding from the coarsest to the finest scales. It allows for an efficient selection of the degrees of freedom of the problem and avoids the introduction of an artificial uniform grid. In fact, an analysis of the data can be done at each of the scales of the hierarchy, which can be used to adaptively select a set of wavelets that can economically represent the characteristics of the cloud of points at the next level of detail. The data adaptation of our method is twofold, as it takes into account both the horizontal distribution and the vertical irregularities of the data. This strategy can lead to a striking reduction of the problem complexity. Furthermore, among the possible ways to achieve a multiscale formulation, the wavelet approach shows additional advantages, based on good conditioning properties and level-wise orthogonality. We exploit these features to enhance the efficiency of iterative solution methods for the system of normal equations of the problem. The combination of multiresolution adaptivity with the numerical properties of the wavelet basis gives rise to an algorithm well suited to cope with problems requiring fast solution methods. We illustrate this by means of numerical experiments that compare the performance of the method on various data sets working with different multiresolution bases. Afterwards, we use the equivalence relation between wavelets and Besov spaces to formulate the problem of data fitting with regularization. We find that the multiscale formulation allows for a flexible and efficient treatment of some aspects of this problem.
Moreover, we study the problem known as robust fitting, in which the data is assumed to be corrupted by wrong measurements or outliers. We compare classical methods based on re-weighting of residuals to our setting, in which the wavelet representation of the data computed by our algorithm is used to locate the outliers. As a final application that couples two of the main applications of wavelets (data analysis and operator equations), we propose the use of this least squares data fitting method to evaluate the non-linear term in the wavelet-Galerkin formulation of non-linear PDE problems. At the end of this thesis we discuss efficient implementation issues, with special interest in the interplay between solution methods and data structures.
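The classical residual re-weighting baseline mentioned above can be sketched as iteratively reweighted least squares with Huber-type weights; the basis matrix `A` here is generic (e.g. polynomial), not the thesis's wavelet basis:

```python
import numpy as np

def irls_fit(A, y, delta=1.0, iters=30):
    # Iteratively reweighted least squares: repeatedly solve a weighted
    # least squares problem, downweighting points whose residual exceeds
    # delta (the likely outliers). Huber weight: min(1, delta / |r|).
    w = np.ones(len(y))
    for _ in range(iters):
        Aw = A * w[:, None]                      # W A, with W = diag(w)
        coef = np.linalg.solve(Aw.T @ A, Aw.T @ y)
        r = np.abs(y - A @ coef)
        w = np.minimum(1.0, delta / np.maximum(r, 1e-12))
    return coef
```

Unlike a single least squares solve, the refit at each pass lets gross outliers lose almost all influence on the final coefficients.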
Statistical Models and Optimization Algorithms for High-Dimensional Computer Vision Problems
Data-driven and computational approaches are showing significant promise in solving several challenging problems in various fields such as bioinformatics, finance, and many branches of engineering. In this dissertation, we explore the potential of these approaches, specifically statistical data models and optimization algorithms, for solving several challenging problems in computer vision. In doing so, we contribute to the literature on both statistical data models and computer vision. In the context of statistical data models, we propose principled approaches for solving robust regression problems, both linear and kernel, and the missing-data matrix factorization problem. In computer vision, we propose statistically optimal and efficient algorithms for solving the remote face recognition and structure from motion (SfM) problems.
The goal of robust regression is to estimate the functional relation between two variables from a given data set that might be contaminated with outliers. Under the reasonable assumption that there are fewer outliers than inliers in a data set, we formulate the robust linear regression problem as a sparse learning problem, which can be solved using efficient polynomial-time algorithms. We also provide sufficient conditions under which the proposed algorithms correctly solve the robust regression problem. We then extend our robust formulation to the case of kernel regression, specifically proposing a robust version of relevance vector machine (RVM) regression.
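One standard way to cast robust linear regression as sparse learning, in the spirit of the abstract though not necessarily the dissertation's exact algorithm, is to model the outliers as a sparse vector `s` and alternate between refitting and soft-thresholding:

```python
import numpy as np

def soft_threshold(v, lam):
    # Proximal operator of lam * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def robust_regression(X, y, lam=1.0, iters=50):
    # Model y = X b + s + noise with s sparse (the outliers), and solve
    #   min_{b,s} ||y - X b - s||^2 + lam * ||s||_1
    # by alternating minimization over b and s.
    s = np.zeros_like(y)
    for _ in range(iters):
        b, *_ = np.linalg.lstsq(X, y - s, rcond=None)
        s = soft_threshold(y - X @ b, lam)
    return b, s
```

Since the objective is jointly convex in (b, s), this alternating minimization converges to a global optimum; the recovered support of `s` flags the outlying samples.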
Matrix factorization is used for finding a low-dimensional representation of data embedded in a high-dimensional space. Singular value decomposition is the standard algorithm for solving this problem. However, when the matrix has many missing elements, this is a hard problem to solve. We formulate the missing-data matrix factorization problem as a low-rank semidefinite programming problem (essentially a rank-constrained SDP), which allows us to find accurate and efficient solutions for large-scale factorization problems.
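The dissertation solves this via a low-rank semidefinite program; for contrast, a common baseline is alternating least squares over only the observed entries (a sketch under the assumption of a fixed target rank `r`, not the proposed SDP method):

```python
import numpy as np

def masked_als(D, mask, r, iters=50, reg=1e-6):
    # Factor D ≈ U @ V.T using only observed entries (mask == True),
    # alternating exact least squares solves over the rows of U and V.
    m, n = D.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((n, r))
    I = reg * np.eye(r)                  # small ridge for ill-observed rows
    for _ in range(iters):
        for i in range(m):
            Vi = V[mask[i]]              # factors of observed columns in row i
            U[i] = np.linalg.solve(Vi.T @ Vi + I, Vi.T @ D[i, mask[i]])
        for j in range(n):
            Uj = U[mask[:, j]]
            V[j] = np.linalg.solve(Uj.T @ Uj + I, Uj.T @ D[mask[:, j], j])
    return U, V
```

Unlike plain SVD, nothing here touches the missing entries, so the fit is driven entirely by the observed data.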
Face recognition from remotely acquired images is a challenging problem because of variations due to blur and illumination. Using the convolution model for blur, we show that the set of all images obtained by blurring a given image forms a convex set. We then use convex optimization techniques to find the distances between a given blurred (probe) image and the gallery images to find the best match. Further, using a low-dimensional linear subspace model for illumination variations, we extend our theory in a similar fashion to recognize blurred and poorly illuminated faces.
Bundle adjustment is the final optimization step of the SfM problem, where the goal is to obtain the 3-D structure of the observed scene and the camera parameters from multiple images of the scene. The traditional bundle adjustment algorithm, based on minimizing the l_2 norm of the image re-projection error, has cubic complexity in the number of unknowns. We propose an algorithm, based on minimizing the l_infinity norm of the re-projection error, that has quadratic complexity in the number of unknowns. This is achieved by reducing the large-scale optimization problem into many small-scale sub-problems, each of which can be solved using second-order cone programming.
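The re-projection errors of bundle adjustment lead to second-order cone programs; for a purely linear residual model, the same minimax (l_infinity) objective reduces to a linear program, which this hypothetical sketch solves with `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

def linf_fit(A, b):
    # min_x ||A x - b||_inf as an LP over variables (x, t):
    # minimize t subject to A x - b <= t and -(A x - b) <= t.
    m, n = A.shape
    c = np.zeros(n + 1)
    c[-1] = 1.0                           # objective: minimize t
    ones = np.ones((m, 1))
    A_ub = np.block([[A, -ones], [-A, -ones]])
    b_ub = np.concatenate([b, -b])
    bounds = [(None, None)] * n + [(0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n], res.x[-1]           # solution and its max residual
```

The minimax solution bounds every residual by `t`, whereas the l_2 solution only controls their sum of squares, so its worst residual can only be larger.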
Robust Subspace Estimation via Low-Rank and Sparse Decomposition and Applications in Computer Vision
Recent advances in robust subspace estimation have made dimensionality reduction and noise and outlier suppression an active area of research, alongside continuous improvements in computer vision applications. Because image and video signals require a high-dimensional representation, their storage, processing, transmission, and analysis are often difficult. It is therefore desirable to obtain a low-dimensional representation of such signals and at the same time correct for corruptions, errors, and outliers, so that the signals can be readily used for later processing.
Major recent advances in low-rank modelling in this context were initiated by the work of
Candès et al. [17], where the authors provided a solution for the long-standing problem of
decomposing a matrix into low-rank and sparse components in a Robust Principal Component
Analysis (RPCA) framework. However, for computer vision applications RPCA is often too complex and/or may not yield desirable results. The low-rank component obtained by RPCA usually has an unnecessarily high rank, while certain tasks require lower-dimensional representations. RPCA can robustly estimate noise and outliers and separate them from the low-rank component by means of a sparse part, but it provides no insight into the structure of the sparse solution, nor a way to further decompose the sparse part into random noise and a structured sparse component, which would be advantageous in many computer vision tasks. Moreover, as video signals are usually captured by a moving camera, obtaining a low-rank component by RPCA becomes impossible. In this thesis, novel Approximated RPCA algorithms are presented, targeting different shortcomings of the RPCA. The RPCA was analysed to identify its most time-consuming steps, which were replaced with simpler yet tractable alternatives. The proposed method is
able to obtain the exact desired rank for the low-rank component while estimating a
global transformation to describe camera-induced motion. Furthermore, it is able to
decompose the sparse part into a foreground sparse component and a random noise part that contains no useful information for computer vision processing. The foreground sparse component is obtained by several novel structured sparsity-inducing norms that better encapsulate the needed pixel structure in visual signals. Moreover, algorithms for reducing the complexity of low-rank estimation have been proposed that achieve significant
complexity reduction without sacrificing the visual representation of video and image
information. The proposed algorithms are applied to several fundamental computer
vision tasks, namely, high efficiency video coding, batch image alignment, inpainting,
and recovery, video stabilisation, background modelling and foreground segmentation,
robust subspace clustering and motion estimation, face recognition, and ultra-high-definition image and video super-resolution. The algorithms proposed in this thesis, including batch image alignment and recovery, background modelling and foreground segmentation, robust subspace clustering and motion segmentation, and ultra-high-definition image and video super-resolution, achieve results that are either state-of-the-art or comparable to existing methods.
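Structured sparsity-inducing norms such as those mentioned above typically enter algorithms through their proximal operators; the group-lasso (l_{2,1}) case, which zeroes out whole groups of pixels at once, gives a representative sketch (illustrative, not one of the thesis's specific norms):

```python
import numpy as np

def group_soft_threshold(V, lam):
    # Proximal operator of lam * sum_g ||V[g]||_2, with each row of V
    # treated as one group: shrink each row's norm by lam, and zero out
    # groups whose norm falls below lam.
    norms = np.linalg.norm(V, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return V * scale
```

Compared with entrywise soft thresholding, the shrinkage here acts on a whole group's norm, so spatially coherent foreground regions are kept or discarded together.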