Incremental incomplete LU factorizations with applications
This paper addresses the problem of computing preconditioners for solving linear systems of equations with a sequence of slowly varying matrices. This problem arises in many important applications. For example, a common situation in computational fluid dynamics is when the equations change only slightly, possibly in some parts of the physical domain. In such situations it is wasteful to recompute from scratch any LU or ILU factorization obtained for the previous coefficient matrix. A number of techniques for computing incremental ILU factorizations are examined. For example, we consider methods based on approximate inverses as well as alternating techniques for updating the factors L and U of the factorization.
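The abstract does not spell out the update schemes; purely as an illustration (hypothetical function names, SciPy-based, and not necessarily one of the algorithms analyzed in the paper), a generic alternating idea is sketched below: freeze L from the previous factorization and recompute U against the new matrix, then freeze U and recompute L.

    # Minimal sketch (assumed names, not the paper's algorithms) of the generic
    # alternating idea: keep one triangular factor from the previous matrix and
    # recompute the other against the new, slightly changed matrix.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve_triangular

    def refresh_factors(L_old, U_old, A_new, sweeps=1):
        """Alternately refresh U (L frozen) and L (U frozen) so L @ U tracks A_new.
        A real incomplete factorization would also drop fill outside a prescribed
        sparsity pattern; here we simply keep the triangular parts."""
        L, U = L_old.tocsr(), U_old.tocsr()
        A_dense = A_new.toarray()
        for _ in range(sweeps):
            # U-sweep: solve L * U = A_new for U, keep the upper triangle.
            U = sp.csr_matrix(np.triu(spsolve_triangular(L, A_dense, lower=True)))
            # L-sweep: solve L * U = A_new for L via the transposed system,
            # keep the strict lower triangle and restore the unit diagonal.
            Lt = spsolve_triangular(sp.csr_matrix(U.T), A_dense.T, lower=True)
            L = sp.csr_matrix(np.tril(Lt.T, k=-1)) + sp.identity(A_new.shape[0])
        return L, U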
Comparison of some Reduced Representation Approximations
In the field of numerical approximation, specialists facing highly complex problems have recently proposed various ways to simplify their underlying problems. Depending on the problem being tackled and the community at work, different approaches have been developed with some success and have even reached some maturity; they can now be applied to information analysis or to the numerical simulation of PDEs. At this point, a cross-analysis of the similarities and differences between these approaches, which started from different backgrounds, is of interest. The purpose of this paper is to contribute to this effort by comparing some constructive reduced representations of complex functions. We present in full detail the Adaptive Cross Approximation (ACA) and the Empirical Interpolation Method (EIM), together with other approaches that fall into the same category.
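For readers unfamiliar with ACA, a textbook full-pivoting variant can be sketched in a few lines; the function name, pivoting strategy, and tolerances below are illustrative assumptions, not details taken from the paper.

    # Illustrative sketch of Adaptive Cross Approximation with full pivoting:
    # build a low-rank approximation A ~ U @ V from a few rows and columns of A,
    # chosen greedily from the current residual.
    import numpy as np

    def aca_full_pivot(A, tol=1e-8, max_rank=50):
        """Add one cross (row/column pair) at a time, pivoting on the entry of
        largest magnitude in the residual, until the pivot falls below tol."""
        R = A.astype(float).copy()          # residual
        rows, cols = [], []
        for _ in range(max_rank):
            i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
            pivot = R[i, j]
            if abs(pivot) < tol:
                break
            col = R[:, j].copy() / pivot    # scaled column of the residual
            row = R[i, :].copy()            # row of the residual
            R -= np.outer(col, row)         # remove the rank-1 cross
            cols.append(col)
            rows.append(row)
        U = np.column_stack(cols) if cols else np.zeros((A.shape[0], 0))
        V = np.vstack(rows) if rows else np.zeros((0, A.shape[1]))
        return U, V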
Updating constraint preconditioners for KKT systems in quadratic programming via low-rank corrections
This work focuses on the iterative solution of sequences of KKT linear
systems arising in interior point methods applied to large convex quadratic
programming problems. This task is the computational core of the interior point
procedure and an efficient preconditioning strategy is crucial for the
efficiency of the overall method. Constraint preconditioners are very effective
in this context; nevertheless, their computation may be very expensive for
large-scale problems, and resorting to approximations of them may be
convenient. Here we propose a procedure for building inexact constraint
preconditioners by updating a "seed" constraint preconditioner computed for a
KKT matrix at a previous interior point iteration. These updates are obtained
through low-rank corrections of the Schur complement of the (1,1) block of the
seed preconditioner. The updated preconditioners are analyzed both
theoretically and computationally. The results obtained show that our updating
procedure, coupled with an adaptive strategy for determining whether to
reinitialize or update the preconditioner, can enhance the performance of
interior point methods on large problems.
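The abstract leaves the exact update formulas to the paper; as an illustration only, the sketch below (our own notation: S_seed for the seed Schur complement, W for a low-rank correction factor, with the seed assumed symmetric positive definite) shows how a "seed plus low-rank" Schur complement can be applied cheaply via the Sherman-Morrison-Woodbury identity, reusing one factorization of the seed across interior point iterations.

    # Rough sketch (our notation, not the paper's exact procedure) of applying
    # (S_seed + W @ W.T)^{-1} by reusing a single Cholesky factorization of the
    # seed Schur complement plus a small k-by-k solve.
    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    def make_updated_schur_solver(S_seed, W):
        """Return a function rhs -> (S_seed + W @ W.T)^{-1} rhs."""
        seed = cho_factor(S_seed)                 # factor the seed once
        Y = cho_solve(seed, W)                    # S_seed^{-1} W          (n x k)
        small = np.eye(W.shape[1]) + W.T @ Y      # I + W^T S_seed^{-1} W  (k x k)
        def solve(rhs):
            z = cho_solve(seed, rhs)
            return z - Y @ np.linalg.solve(small, W.T @ z)
        return solve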
Online Unsupervised Multi-view Feature Selection
In the era of big data, it is becoming common to have data with multiple
modalities or coming from multiple sources, known as "multi-view data".
Multi-view data are usually unlabeled and come from high-dimensional spaces (such as language vocabularies), so unsupervised multi-view feature selection is crucial to many applications. However, it is nontrivial due to the following
challenges. First, there are too many instances or the feature dimensionality
is too large. Thus, the data may not fit in memory. How to select useful
features with limited memory space? Second, how to select features from streaming data and handle concept drift? Third, how to leverage the consistent and complementary information from different views to improve feature selection when the data are too large or arrive as streams? To the best of our knowledge, none of the previous works can solve all
the challenges simultaneously. In this paper, we propose an Online unsupervised Multi-View Feature Selection method, OMVFS, which deals with large-scale/streaming
multi-view data in an online fashion. OMVFS embeds unsupervised feature
selection into a clustering algorithm via NMF with sparse learning. It further incorporates graph regularization to preserve the local structure
information and help select discriminative features. Instead of storing all the
historical data, OMVFS processes the multi-view data chunk by chunk and
aggregates all the necessary information into several small matrices. By using
the buffering technique, the proposed OMVFS can reduce the computational and
storage cost while taking advantage of the structure information. Furthermore,
OMVFS can capture the concept drifts in the data streams. Extensive experiments
on four real-world datasets show the effectiveness and efficiency of the
proposed OMVFS method. More importantly, OMVFS is about 100 times faster than
the off-line methods.
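The abstract describes the ingredients (chunk-wise processing, NMF with sparse learning, graph regularization, buffering) without the update rules; the following single-view sketch only illustrates the "aggregate small matrices instead of storing history" idea and is not the OMVFS algorithm itself (all names and the multiplicative updates are our assumptions).

    # Very simplified single-view sketch (hypothetical names; the actual OMVFS
    # handles multiple views, graph regularization, and buffering): process data
    # chunk by chunk, keep only small aggregate matrices, and rank features by
    # the learned basis rather than storing the history.
    import numpy as np

    def online_nmf_feature_scores(chunks, n_factors=10, n_inner=30, eps=1e-10):
        """Stream chunks X_t (samples x features) and maintain a feature factor W
        (features x factors) using multiplicative NMF updates driven only by the
        running sufficient statistics A = sum H_t^T H_t and B = sum X_t^T H_t."""
        rng = np.random.default_rng(0)
        W = A = B = None
        for X in chunks:                          # X: (chunk_size, n_features)
            if W is None:
                n_features = X.shape[1]
                W = rng.random((n_features, n_factors))
                A = np.zeros((n_factors, n_factors))
                B = np.zeros((n_features, n_factors))
            # Encode the chunk: H >= 0 with X ~ H @ W.T (few multiplicative steps).
            H = rng.random((X.shape[0], n_factors))
            for _ in range(n_inner):
                H *= (X @ W) / (H @ (W.T @ W) + eps)
            # Aggregate small sufficient statistics instead of storing X.
            A += H.T @ H
            B += X.T @ H
            # Update the feature factor from the aggregates only.
            for _ in range(n_inner):
                W *= B / (W @ A + eps)
        # Features with large rows in W are the candidates to keep.
        return np.linalg.norm(W, axis=1)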