
    Finding a low-rank basis in a matrix subspace

    For a given matrix subspace, how can we find a basis that consists of low-rank matrices? This is a generalization of the sparse vector problem. It turns out that when the subspace is spanned by rank-1 matrices, the matrices can be obtained by a tensor CP decomposition. For the higher-rank case, the situation is not as straightforward. In this work we present an algorithm based on a greedy process that is applicable to higher-rank problems. Our algorithm first estimates the minimum rank by applying soft singular value thresholding to a nuclear norm relaxation, and then computes a matrix of that rank using the method of alternating projections. We provide local convergence results and compare our algorithm with several alternative approaches. Applications include data compression beyond the classical truncated SVD, computing accurate eigenvectors of a near-multiple eigenvalue, image separation, and graph Laplacian eigenproblems.
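
    A minimal sketch of the alternating-projections phase described above, assuming NumPy and a basis of the subspace that is orthonormal with respect to the trace inner product. The target rank r is taken as given here, whereas the paper estimates it via soft singular value thresholding; this is an illustration, not the authors' implementation.

```python
import numpy as np

def project_onto_subspace(X, basis):
    """Orthogonal projection of X onto span(basis); the basis matrices
    are assumed orthonormal w.r.t. the trace inner product <A, B> = tr(A^T B)."""
    return sum(np.tensordot(B, X) * B for B in basis)

def truncate_rank(X, r):
    """Best rank-r approximation of X via a truncated SVD, i.e. the
    projection onto the set of matrices of rank at most r."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def low_rank_element(basis, r, iters=500, tol=1e-10):
    """Alternate projections between the subspace and the rank-r set,
    starting from a random element of the subspace."""
    X = project_onto_subspace(np.random.randn(*basis[0].shape), basis)
    X /= np.linalg.norm(X)
    for _ in range(iters):
        Y = truncate_rank(X, r)          # step onto the rank-r set
        X_new = project_onto_subspace(Y, basis)  # step back into the subspace
        X_new /= np.linalg.norm(X_new)   # normalize to avoid collapse to 0
        if np.linalg.norm(X_new - X) < tol:
            break
        X = X_new
    return X
```

    Repeating this with deflation (removing each found matrix from the search) is one way to realize the greedy process the abstract mentions; the paper's own local convergence analysis concerns this alternating-projections step.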

    Solving optimal control problems governed by random Navier-Stokes equations using low-rank methods

    Many problems in computational science and engineering are characterized simultaneously by several challenging features: uncertainty, nonlinearity, nonstationarity, and high dimensionality. Existing numerical techniques for such models typically require considerable computational and storage resources. This is the case, for instance, for an optimization problem governed by time-dependent Navier-Stokes equations with uncertain inputs; in particular, the stochastic Galerkin finite element method often leads to a prohibitively high-dimensional saddle-point system with tensor product structure. In this paper, we approximate the solution by a low-rank Tensor Train decomposition and present a numerically efficient algorithm that solves the optimality equations directly in the low-rank representation. We show that the solution of the vorticity minimization problem with a distributed control admits a representation whose ranks depend only modestly on the model and discretization parameters, even for high Reynolds numbers; for lower Reynolds numbers the same holds for a boundary control. This opens the way to reduced-order modeling of stochastic optimal flow control at moderate cost at all stages.
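
    A minimal sketch of the TT-SVD compression underlying the Tensor Train format the abstract refers to, assuming NumPy. The paper solves the optimality equations directly in this format with a tailored solver, which is not reproduced here; this only illustrates how a full tensor is compressed into TT cores.

```python
import numpy as np

def tt_svd(tensor, eps=1e-8):
    """Compress a full tensor into Tensor Train cores by sequential
    truncated SVDs; singular values below eps * s_max are discarded."""
    dims = tensor.shape
    cores, r_prev = [], 1
    M = tensor.reshape(1, -1)
    for n in dims[:-1]:
        M = M.reshape(r_prev * n, -1)        # unfold: (rank * mode) x rest
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))  # truncated TT rank
        cores.append(U[:, :r].reshape(r_prev, n, r))
        M = s[:r, None] * Vt[:r]             # carry the remainder forward
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a full tensor (for checking)."""
    X = cores[0]
    for core in cores[1:]:
        X = np.tensordot(X, core, axes=(X.ndim - 1, 0))
    return X.reshape([c.shape[1] for c in cores])
```

    Here tt_reconstruct(tt_svd(X)) recovers X up to the truncation tolerance, and storage drops from exponential to linear in the number of dimensions provided the TT ranks stay moderate, which is exactly the rank behavior the abstract establishes for the flow-control problem.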

    Latent class analysis for segmenting preferences of investment bonds

    Market segmentation is a key component of conjoint analysis, which addresses consumer preference heterogeneity. Members of a segment are assumed to be homogeneous in their views and preferences when valuing an item, but distinctly heterogeneous from members of other segments. Latent class methodology is one of several conjoint segmentation procedures that overcome the limitations of aggregate analysis and a priori segmentation. The main benefit of latent class models is that market segment membership and the regression parameters of each derived segment are estimated simultaneously. The latent class model presented in this paper uses mixtures of multivariate conditional normal distributions to analyze rating data, where the likelihood is maximized using the EM algorithm. The application focuses on customer preferences for investment bonds described by four attributes: currency, coupon rate, redemption term, and price. A number of demographic variables are used to generate segments that are accessible and actionable.
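
    A minimal sketch of latent-class segmentation of rating data via an EM-fitted mixture of multivariate normals, assuming NumPy and scikit-learn. GaussianMixture is used here as a stand-in for the paper's mixture of multivariate conditional normal distributions (the segment-specific regression on the bond attributes is omitted), and the ratings matrix and its dimensions are hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical (n_respondents x n_profiles) matrix of bond ratings;
# in the paper each profile combines currency, coupon rate,
# redemption term, and price.
rng = np.random.default_rng(0)
ratings = rng.normal(size=(200, 8))

# Fit mixtures of multivariate normals by EM for several candidate
# segment counts and pick the model with the lowest BIC.
models = [GaussianMixture(n_components=k, covariance_type="full",
                          random_state=0).fit(ratings)
          for k in range(1, 6)]
best = min(models, key=lambda m: m.bic(ratings))

segments = best.predict(ratings)          # hard segment assignments
posteriors = best.predict_proba(ratings)  # segment membership probabilities
```

    The posterior membership probabilities are what make the derived segments actionable: they can be cross-tabulated against the demographic variables mentioned above to profile each segment.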