Improved Bounds on Restricted Isometry Constants for Gaussian Matrices
The restricted isometry constants (RIC) of a matrix A measure how close to
an isometry the action of A is on vectors with few nonzero entries, measured
in the l2 norm. Specifically, the upper and lower RIC of a matrix A of
size n x N are the maximum and the minimum deviation from unity (one) of
the largest and smallest, respectively, squared singular values of all
N-choose-k matrices formed by taking k columns from A. Calculation of
the RIC is intractable for most matrices due to its combinatorial nature;
however, many random matrices typically have bounded RIC in some range of
problem sizes (k, n, N). We provide the best known bound on the RIC for
Gaussian matrices, which is also the smallest known bound on the RIC for any
large rectangular matrix. Improvements over prior bounds are achieved by
exploiting the similarity of singular values for matrices which share a substantial
number of columns.
Comment: 16 pages, 8 figures
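As a concrete illustration of the definition, the RIC of a small matrix can be computed by brute force over all k-column submatrices. A minimal sketch, with illustrative function names of my own (the enumeration is exponential in N, which is exactly the intractability the abstract mentions):

```python
# Brute-force upper/lower RIC of a small Gaussian matrix.
# Function and variable names are illustrative, not from the paper.
from itertools import combinations
import numpy as np

def rics(A, k):
    """Upper/lower restricted isometry constants of A at sparsity k,
    by enumerating all k-column submatrices (exponential in N)."""
    n, N = A.shape
    upper, lower = 0.0, 0.0
    for cols in combinations(range(N), k):
        s = np.linalg.svd(A[:, cols], compute_uv=False)
        upper = max(upper, s[0] ** 2 - 1.0)   # largest squared singular value above one
        lower = max(lower, 1.0 - s[-1] ** 2)  # smallest squared singular value below one
    return upper, lower

rng = np.random.default_rng(0)
n, N, k = 20, 8, 3
A = rng.standard_normal((n, N)) / np.sqrt(n)  # columns have unit norm in expectation
U, L = rics(A, k)
print(U, L)  # both deviations shrink as n grows relative to k
```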
Low-rank optimization for semidefinite convex problems
We propose an algorithm for solving nonlinear convex programs defined in
terms of a symmetric positive semidefinite matrix variable X. This algorithm
rests on the factorization X = YY^T, where the number of columns of Y fixes
the rank of X. It is thus very effective for solving programs that have a low-rank
solution. The factorization X = YY^T evokes a reformulation of the
original problem as an optimization on a particular quotient manifold. The
present paper discusses the geometry of that manifold and derives a second-order
optimization method. It furthermore provides some conditions on the rank
of the factorization to ensure equivalence with the original problem. The
efficiency of the proposed algorithm is illustrated on two applications: the
maximal cut of a graph and the sparse principal component analysis problem.
Comment: submitted
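The effect of the X = YY^T factorization can be sketched in a few lines. The projected gradient ascent below is a deliberately simplified stand-in for the paper's second-order manifold method, using a max-cut-style objective with a unit-diagonal constraint; all names and step sizes are illustrative:

```python
# Simplified first-order sketch of the X = Y Y^T factorization idea.
# The paper derives a second-order method on a quotient manifold; this
# projected gradient ascent is only illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, p = 6, 2                      # matrix size, factorization rank
C = rng.standard_normal((n, n))
C = (C + C.T) / 2                # symmetric cost matrix

# maximize <C, Y Y^T> subject to unit diagonal of X = Y Y^T
Y = rng.standard_normal((n, p))
Y /= np.linalg.norm(Y, axis=1, keepdims=True)
for _ in range(200):
    Y = Y + 0.1 * (C @ Y)                          # ascent step along C Y (proportional to the gradient)
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)  # re-impose diag(X) = 1

X = Y @ Y.T                      # PSD by construction, rank at most p
print(np.linalg.matrix_rank(X))
```

The constraint set never stores the n x n matrix X explicitly; only the n x p factor Y is iterated, which is the source of the method's efficiency for low-rank solutions.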
Quantitative rainfall analysis of the 2021 mid-July flood event in Belgium
The exceptional flood of July 2021 in central Europe impacted Belgium severely. As rainfall was the triggering factor of this event, this study aims to characterize rainfall amounts in Belgium from 13 to 16 July 2021 based on two types of observational data. First, observations recorded by high-quality rain gauges operated by weather and hydrological services in Belgium have been compiled and quality checked. Second, a radar-based rainfall product has been improved to provide a reliable estimation of quantitative precipitation at high spatial and temporal resolutions over Belgium. Several analyses of these data are performed here to describe the spatial and temporal distribution of rainfall during the event. These analyses indicate that the rainfall accumulations during the event reached unprecedented levels over large areas. Accumulations over durations from 1 to 3 d significantly exceeded the 200-year return level in several places, with exceedances of up to 90 % above the 200-year return level for 2 and 3 d values locally in the Vesdre Basin. Such a record-breaking event needs to be documented as much as possible, and available observational data must be shared with the scientific community for further studies in hydrology, in urban planning and, more generally, in all multi-disciplinary studies aiming to identify and understand the factors leading to such a disaster. The corresponding rainfall data are therefore provided freely in a supplement (Journée et al., 2023; Goudenhoofdt et al., 2023).
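For readers unfamiliar with return levels, the notion of a "200-year return level" can be illustrated on synthetic data: fit an extreme-value (Gumbel) distribution to annual maxima and read off the level exceeded on average once every 200 years. The numbers below are synthetic and have no connection to the Belgian observations:

```python
# Generic illustration of a T-year return level: fit a Gumbel
# distribution to synthetic annual rainfall maxima and compute the
# level exceeded with probability 1/T in any given year.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
# 100 synthetic annual maxima (mm), drawn from a known Gumbel law
annual_max = stats.gumbel_r.rvs(loc=50.0, scale=12.0, size=100, random_state=rng)

loc, scale = stats.gumbel_r.fit(annual_max)   # maximum-likelihood fit
T = 200.0
level_200y = stats.gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)
print(level_200y)  # exceeded with probability 1/200 each year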
Compressed Sensing: How Sharp Is the Restricted Isometry Property?
Compressed sensing (CS) seeks to recover an unknown vector with N entries by making far fewer than N measurements; it posits that the number of CS measurements should be comparable to the information content of the vector, not simply N. CS directly combines the important task of compression with the measurement task. Since its introduction in 2004, there have been hundreds of papers on CS, a large fraction of which develop algorithms to recover a signal from its compressed measurements. Because of the paradoxical nature of CS (exact reconstruction from seemingly undersampled measurements), it is crucial for acceptance of an algorithm that rigorous analyses verify the degree of undersampling the algorithm permits. The restricted isometry property (RIP) has become the dominant tool used for the analysis in such cases. We present here an asymmetric form of RIP that gives tighter bounds than the usual symmetric one. We give the best known bounds on the RIP constants for matrices from the Gaussian ensemble. Our derivations illustrate the way in which the combinatorial nature of CS is controlled. Our quantitative bounds on the RIP allow precise statements as to how aggressively a signal can be undersampled, the essential question for practitioners. We also document the extent to which RIP gives precise information about the true performance limits of CS, by comparison with approaches from high-dimensional geometry. © 2011 Society for Industrial and Applied Mathematics
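The undersampling scenario can be demonstrated with a generic basis-pursuit recovery, min ||x||_1 subject to Ax = b, solved here as a linear program via the standard split x = u - v with u, v >= 0. This is a textbook demonstration of CS recovery, not an algorithm from the paper:

```python
# Compressed-sensing recovery by basis pursuit (min ||x||_1 s.t. Ax = b),
# solved as a linear program. Generic illustration of undersampled
# recovery, not a method from the paper.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
N, n, k = 40, 20, 3              # ambient dimension, measurements, sparsity
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((n, N)) / np.sqrt(n)   # Gaussian measurement matrix
b = A @ x_true                                  # only n << N measurements

# x = u - v with u, v >= 0; minimize sum(u) + sum(v) = ||x||_1
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]
print(np.max(np.abs(x_hat - x_true)))  # near zero when n is large enough
```

For this (k, n, N) the Gaussian ensemble recovers the signal exactly with overwhelming probability, which is precisely the regime the RIP bounds quantify.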
Semi-sparse PCA
It is well-known that the classical exploratory factor analysis (EFA) of data with more observations than variables has several types of indeterminacy. We study the factor indeterminacy and show some new aspects of this problem by considering EFA as a specific data matrix decomposition. We adopt a new approach to the EFA estimation and achieve a new characterization of the factor indeterminacy problem. A new alternative model is proposed, which gives determinate factors and can be seen as a semi-sparse principal component analysis (PCA). An alternating algorithm is developed, where in each step a Procrustes problem is solved. It is demonstrated that the new model/algorithm can act as a specific sparse PCA and as a low-rank-plus-sparse matrix decomposition. Numerical examples with several large data sets illustrate the versatility of the new model, and the performance and behaviour of its algorithmic implementation.
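The Procrustes subproblem mentioned above has a closed-form solution via the SVD. A generic sketch of that building block (the helper name is illustrative; the authors' full alternating scheme is not reproduced here):

```python
# Orthogonal Procrustes subproblem: find orthogonal Q minimizing
# ||A Q - B||_F. Closed-form solution via the SVD of A^T B.
import numpy as np

def procrustes(A, B):
    """Return the orthogonal Q minimizing ||A Q - B||_F."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

rng = np.random.default_rng(3)
A = rng.standard_normal((10, 4))
Q_true = np.linalg.qr(rng.standard_normal((4, 4)))[0]  # a random orthogonal matrix
B = A @ Q_true                                         # B is an exact rotation of A

Q = procrustes(A, B)
print(np.allclose(Q, Q_true))  # True: the rotation is recovered exactly
```

Because each such step has a closed-form optimum, an alternating algorithm built on it decreases its objective monotonically.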
The value of intraoperative neurophysiological monitoring in tethered cord surgery
The value of intraoperative neurophysiological monitoring (IONM) with surgical detethering in dysraphic patients has been questioned. A retrospective analysis of our series of 65 patients is presented with special focus on technical set-up and outcome. All patients were diagnosed with a tethered cord (TC) due to spinal dysraphism. A high-risk group (HRG) was determined consisting of 40 patients with a lipomyelomeningocele and/or a split cord malformation, sometimes in combination with a tight filum terminale. The surgical procedure was a detethering operation in all cases, performed by a single surgeon during a 9-year period (1999-2008). A standard set-up of IONM was used in all patients consisting of motor-evoked potentials (MEP) evoked by transcranial electrical stimulation (TES) and electrical nerve root stimulation. In young patients, conditioning stimulation was applied in order to improve absent or weak MEPs. IONM responses could be obtained in all patients. Postoperative deterioration of symptoms was found in two patients, of whom one belonged to the HRG. Mean maximal follow-up of all 65 patients was 4.6 years (median 4.1 years). Long-term deterioration of symptoms was found in 6 of 65 patients with a mean follow-up of 5 years (median 5.3 years). The use of IONM is feasible in all TC patients. The identification of functional nervous structures and continuous guarding of the integrity of sacral motor roots by IONM may contribute to the safety of surgical detethering.
Noisy independent component analysis as a method of rotating the factor scores
Noisy independent component analysis (ICA) is viewed as a method of factor rotation in exploratory factor analysis (EFA). Starting from an initial EFA solution, rather than rotating the loadings towards simplicity, the factors are rotated orthogonally towards independence. An application to Thurstone's box problem in psychometrics is presented using a new data matrix containing measurement error. Results show that the proposed rotational approach to noisy ICA recovers the components used to generate the mixtures quite accurately and also produces simple loadings.
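The idea of rotating orthogonally towards independence can be shown with a toy stand-in: two independent non-Gaussian sources are mixed by an orthogonal rotation (playing the role of initial factor scores), then the rotation angle maximizing total absolute kurtosis, a crude independence proxy, is found by grid search. This is deliberately simpler than the paper's noisy-ICA estimator:

```python
# Toy "rotation towards independence": recover the mixing angle of two
# independent heavy-tailed sources by maximizing total |kurtosis|.
# A simple stand-in for noisy ICA, not the paper's estimator.
import numpy as np

rng = np.random.default_rng(4)
S = rng.laplace(size=(2, 5000))            # independent non-Gaussian sources
theta_true = 0.6
R = lambda t: np.array([[np.cos(t), -np.sin(t)],
                        [np.sin(t),  np.cos(t)]])
F = R(theta_true) @ S                       # "factor scores" = rotated sources

def kurt(x):
    x = (x - x.mean()) / x.std()
    return np.mean(x ** 4) - 3.0            # excess kurtosis

# grid-search the unmixing angle that maximizes total non-Gaussianity
angles = np.linspace(0.0, np.pi / 2, 721)
scores = [sum(abs(kurt(y)) for y in R(-t) @ F) for t in angles]
theta_hat = angles[int(np.argmax(scores))]
print(theta_hat)  # close to theta_true (up to the pi/2 permutation symmetry)
```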
Geometric optimization methods for independent component analysis applied on gene expression data
peer reviewed
Generalized power method for sparse principal component analysis
In this paper we develop a new approach to sparse principal component analysis (sparse PCA). We propose two single-unit and two block optimization formulations of the sparse PCA problem, aimed at extracting a single sparse dominant principal component of a data matrix, or more components at once, respectively. While the initial formulations involve nonconvex functions, and are therefore computationally intractable, we rewrite them into the form of an optimization program involving maximization of a convex function on a compact set. The dimension of the search space is decreased enormously if the data matrix has many more columns (variables) than rows. We then propose and analyze a simple gradient method suited for the task. It appears that our algorithm has the best convergence properties when either the objective function or the feasible set is strongly convex, which is the case with our single-unit formulations and can be enforced in the block case. Finally, we demonstrate numerically on a set of random and gene expression test problems that our approach outperforms existing algorithms both in quality of the obtained solution and in computational speed.
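A single-unit iteration in the spirit of such an l1-penalized power method can be sketched as follows; the parameter names, initialization, and pattern-extraction step are illustrative choices, not the authors' exact algorithm:

```python
# Sketch of a single-unit l1-penalized power iteration for sparse PCA:
# maximize sum_i max(|a_i^T x| - gamma, 0)^2 over unit vectors x.
# Names, initialization, and the final extraction are illustrative.
import numpy as np

def sparse_pc(A, gamma, iters=200):
    """A: n x p data matrix (columns = variables); returns a sparse loading z."""
    x = A[:, 0] / np.linalg.norm(A[:, 0])            # simple initialization
    for _ in range(iters):
        c = A.T @ x                                  # correlations a_i^T x
        w = np.sign(c) * np.maximum(np.abs(c) - gamma, 0.0)  # soft threshold
        g = A @ w
        nrm = np.linalg.norm(g)
        if nrm == 0:
            break                                    # gamma too large: all-zero loading
        x = g / nrm                                  # power-method style renormalization
    c = A.T @ x
    z = np.sign(c) * np.maximum(np.abs(c) - gamma, 0.0)
    return z / (np.linalg.norm(z) or 1.0)

rng = np.random.default_rng(5)
A = rng.standard_normal((50, 10))
A[:, :3] += 3.0 * rng.standard_normal((50, 1))       # 3 strongly correlated variables
z = sparse_pc(A, gamma=2.0)
print(np.nonzero(z)[0])  # loading concentrates on the correlated block
```

Larger gamma drives more coordinates of the loading to exactly zero, which is the sparsity/variance trade-off the formulations control.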