Adapting the interior point method for the solution of LPs on serial, coarse grain parallel and massively parallel computers
In this paper we describe a unified scheme for implementing an interior point method (IPM) over a range of computer architectures. In the inner iteration of the IPM a search direction is computed using Newton's method. Computationally, this involves solving a sparse symmetric positive definite (SSPD) system of equations. The choice of direct and indirect methods for the solution of this system, and the design of data structures to take advantage of serial, coarse grain parallel and massively parallel computer architectures, are considered in detail. We put forward arguments as to why integration of the system within a sparse simplex solver is important and outline how the system is designed to achieve this integration.
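The inner-iteration linear algebra the abstract refers to can be sketched as follows. This is a minimal dense illustration, not the paper's implementation: it assumes the SSPD system takes the common normal-equations form A·D·Aᵀ with a positive diagonal scaling d, and all names are illustrative.

```python
import numpy as np

def ipm_search_direction(A, d, r):
    """Solve the SSPD system (A D A^T) dy = r arising in one IPM inner
    iteration, via a Cholesky factorization (a direct method; the paper
    also considers iterative alternatives).
    A: m x n constraint matrix, d: positive scaling vector (length n),
    r: right-hand side (length m)."""
    M = A @ np.diag(d) @ A.T        # SSPD when A has full row rank and d > 0
    L = np.linalg.cholesky(M)       # M = L L^T
    y = np.linalg.solve(L, r)       # forward substitution
    return np.linalg.solve(L.T, y)  # back substitution

# Tiny example: 2 constraints, 3 variables.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
d = np.array([1.0, 2.0, 1.0])
r = np.array([1.0, 2.0])
dy = ipm_search_direction(A, d, r)
print(np.linalg.norm(A @ np.diag(d) @ A.T @ dy - r))  # residual, ~0
```

In practice a sparse Cholesky with a fill-reducing ordering replaces the dense factorization, which is where the architecture-specific data-structure design discussed in the paper comes in.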
Automatic Structure Detection in Constraints of Tabular Data
Abstract. Methods for the protection of statistical tabular data (such as controlled tabular adjustment, cell suppression, or controlled rounding) need to solve several linear programming subproblems. For large multi-dimensional linked and hierarchical tables, such subproblems turn out to be computationally challenging. One of the techniques used to reduce the solution time of mathematical programming problems is to exploit the constraint structure using a specialized algorithm. Two of the most usual structures are block-angular matrices with either linking rows (primal block-angular structure) or linking columns (dual block-angular structure). Although constraints associated with tabular data intrinsically have a lot of structure, current software for tabular data protection neither details nor exploits it, and simply provides a single matrix, or at most a set of smaller submatrices. We provide in this work an efficient tool for the automatic detection of primal or dual block-angular structure in constraint matrices. We test it on some of the complex CSPLIB instances, showing that when the number of linking rows or columns is small, the computational savings are significant.
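What "primal block-angular with few linking rows" means can be illustrated with a minimal sketch. The check below assumes a column-block assignment is already known (the paper's tool detects such partitions automatically); the function and variable names are illustrative, not from the paper.

```python
import numpy as np
from scipy import sparse

def linking_rows(A, col_blocks):
    """Count rows of A whose nonzeros span more than one column block.
    A small count relative to the matrix size indicates primal
    block-angular structure: independent diagonal blocks coupled only
    by a few linking rows."""
    A = sparse.csr_matrix(A)
    col_blocks = np.asarray(col_blocks)
    count = 0
    for i in range(A.shape[0]):
        cols = A.indices[A.indptr[i]:A.indptr[i + 1]]
        if len(set(col_blocks[cols])) > 1:
            count += 1
    return count

# Two 2x2 diagonal blocks plus one linking row coupling both blocks.
A = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1],
              [1, 0, 0, 1]])   # the linking row
print(linking_rows(A, [0, 0, 1, 1]))  # → 1
```

Dual block-angular structure is the transpose picture: diagonal blocks plus a few linking columns.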
Almost Block Diagonal Linear Systems: Sequential and Parallel Solution Techniques, and Applications
Almost block diagonal (ABD) linear systems arise in a variety of contexts, specifically in numerical methods for two-point boundary value problems for ordinary differential equations and in related partial differential equation problems. The stable, efficient sequential solution of ABDs has received much attention over the last fifteen years and the parallel solution more recently. We survey the fields of application with emphasis on how ABDs and bordered ABDs (BABDs) arise. We outline most known direct solution techniques, both sequential and parallel, and discuss the comparative efficiency of the parallel methods. Finally, we examine parallel iterative methods for solving BABD systems. Copyright (C) 2000 John Wiley & Sons, Ltd.
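The ABD pattern can be shown with a toy assembly: square blocks placed along the diagonal, with consecutive blocks overlapping in a few rows and columns, as happens when a two-point BVP discretization is written block by block. This is a dense sketch of the structure only, not one of the surveyed solvers, and the assembly scheme is a simplified assumption.

```python
import numpy as np

def almost_block_diagonal(blocks, overlap):
    """Assemble a dense ABD matrix from a list of k x k blocks, where
    consecutive blocks overlap in `overlap` rows/columns (a toy model
    of the structure arising in BVP discretizations)."""
    k = blocks[0].shape[0]
    n = k + (len(blocks) - 1) * (k - overlap)
    A = np.zeros((n, n))
    pos = 0
    for B in blocks:
        A[pos:pos + k, pos:pos + k] += B   # overlapping entries accumulate
        pos += k - overlap
    return A

rng = np.random.default_rng(0)
# Diagonally-shifted random blocks so the toy system is well conditioned.
blocks = [rng.standard_normal((3, 3)) + 3 * np.eye(3) for _ in range(4)]
A = almost_block_diagonal(blocks, overlap=1)
x = np.ones(A.shape[0])
b = A @ x
err = np.linalg.norm(np.linalg.solve(A, b) - x)
print(A.shape, err)
```

At this toy size any dense solver works; the survey's point is that dedicated sequential and parallel ABD algorithms exploit the zero structure to factor the system stably in linear time in the number of blocks.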
Improving the stability and robustness of incomplete symmetric indefinite factorization preconditioners
Sparse symmetric indefinite linear systems of equations arise in numerous practical applications. In many situations, an iterative method is the method of choice but a preconditioner is normally required for it to be effective. In this paper, the focus is on a class of incomplete factorization algorithms that can be used to compute preconditioners for symmetric indefinite systems. A limited memory approach is employed that incorporates a number of new ideas with the goal of improving the stability, robustness and efficiency of the preconditioner. These include the monitoring of stability as the factorization proceeds and the incorporation of pivot modifications when potential instability is observed. Numerical experiments involving test problems arising from a range of real-world applications demonstrate the effectiveness of our approach.
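The general workflow of using an incomplete factorization as a preconditioner can be sketched with SciPy. Note the stand-in: SciPy's `spilu` is a general-purpose incomplete LU, not the limited-memory symmetric indefinite factorization the paper develops; it only illustrates how such a factorization plugs into a Krylov solver. The saddle-point test matrix is an illustrative construction.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spilu, gmres, LinearOperator

# Build a small symmetric indefinite saddle-point system [[H, B^T],[B, 0]].
n = 50
H = (sparse.diags([2.0] * n)
     + sparse.diags([-1.0] * (n - 1), 1)
     + sparse.diags([-1.0] * (n - 1), -1))
B = sparse.random(10, n, density=0.2, random_state=0)
A = sparse.bmat([[H, B.T], [B, None]], format="csc")
b = np.ones(A.shape[0])

# Incomplete LU as a stand-in preconditioner; drop_tol controls how
# much fill is discarded, trading accuracy for memory.
ilu = spilu(A, drop_tol=1e-4, fill_factor=10)
M = LinearOperator(A.shape, ilu.solve)

x, info = gmres(A, b, M=M)          # info == 0 on convergence
print(info, np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

For symmetric indefinite systems a symmetry-respecting factorization (as in the paper) lets one use a short-recurrence solver such as MINRES instead of GMRES.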
Flat Bases of Invariant Polynomials and P-matrices of E7 and E8
Let $G$ be a compact group of linear transformations of a Euclidean space. The $G$-invariant functions can be expressed as functions of a finite basic set of $G$-invariant homogeneous polynomials, called an integrity basis. The mathematical description of the orbit space depends on the integrity basis too: it is realized through polynomial equations and inequalities expressing rank and positive semi-definiteness conditions of the $P$-matrix, a real symmetric matrix determined by the integrity basis. The choice of the basic set of $G$-invariant homogeneous polynomials forming an integrity basis is not unique, so the mathematical description of the orbit space is not unique either. If $G$ is an irreducible finite reflection group, Saito et al. in 1980 characterized some special basic sets of $G$-invariant homogeneous polynomials that they called {\em flat}. They also found explicitly the flat basic sets of invariant homogeneous polynomials of all the irreducible finite reflection groups except for the two largest groups $E_7$ and $E_8$. In this paper the flat basic sets of invariant homogeneous polynomials of $E_7$ and $E_8$ and the corresponding $P$-matrices are determined explicitly. Using the results reported here one is able to determine easily the $P$-matrices corresponding to any other integrity basis of $E_7$ or $E_8$. From the $P$-matrices one may then write down the equations and inequalities defining the orbit spaces of $E_7$ and $E_8$ relative to a flat basis or to any other integrity basis. The results obtained here may be employed concretely to study analytically the symmetry breaking in all theories where the symmetry group is one of the finite reflection groups $E_7$ and $E_8$, or one of the Lie groups $E_7$ and $E_8$ in their adjoint representations.
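For reference, the $P$-matrix can be written down concretely. Under the standard convention of the orbit-space literature (assuming $p_1,\dots,p_q$ denotes the integrity basis on $\mathbb{R}^n$; this notation is not taken from the abstract itself), its entries are the pairwise inner products of the basis gradients, expressed as polynomials in the invariants:

```latex
P_{ab}\bigl(p(x)\bigr) \;=\; \sum_{i=1}^{n}
  \frac{\partial p_a(x)}{\partial x_i}\,
  \frac{\partial p_b(x)}{\partial x_i},
\qquad a,b = 1,\dots,q .
```

Since each $P_{ab}$ is $G$-invariant, it is itself a polynomial in $p_1,\dots,p_q$, which is what makes the rank and positive semi-definiteness conditions statements about the orbit space.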
Reconsidering optimal experimental design for conjoint analysis
The quality of Conjoint Analysis estimations heavily depends on the alternatives presented in the experiment. An efficient selection of the experiment design matrix allows more information to be elicited about consumer preferences from a small number of questions, thus reducing experimental cost and respondents' fatigue. The statistical literature considers optimal design algorithms (Kiefer, 1959), which typically select the same combination of stimuli more than once. However, in the context of conjoint analysis, replications do not make sense for individual respondents. In this paper we present a general approach to compute optimal designs for conjoint experiments in a variety of scenarios and methodologies: continuous, discrete and mixed attribute types, customer panels with random effects, and quantile regression models. We do not merely compute good designs, but the best ones according to the size (determinant or trace) of the information matrix of the associated estimators, without repeating profiles as in Kiefer's methodology. We employ efficient optimization algorithms to achieve our goal, avoiding the use of widespread ad-hoc intuitive rules.
Research funded by two research projects: S-0505/TIC-0230 by the Comunidad de Madrid and ECO2011-30198 by the MICINN agency of the Spanish Government.
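The determinant criterion mentioned above (D-optimality for a linear model) can be illustrated on a tiny example. The brute-force search below picks distinct profiles without repetition, in the spirit of the abstract's departure from Kiefer-style replicated designs, but it is only a sketch: the paper uses efficient optimization algorithms, and all names here are illustrative.

```python
import numpy as np
from itertools import combinations

def d_criterion(X):
    """D-optimality criterion: determinant of the information matrix
    X^T X of a linear model with design matrix X."""
    return np.linalg.det(X.T @ X)

def best_exact_design(candidates, n_runs):
    """Exhaustively choose n_runs *distinct* candidate profiles
    maximizing the D-criterion. Viable only for tiny candidate sets."""
    best, best_det = None, -np.inf
    for idx in combinations(range(len(candidates)), n_runs):
        d = d_criterion(candidates[list(idx)])
        if d > best_det:
            best, best_det = idx, d
    return best, best_det

# Full factorial of two 2-level attributes, coded -1/+1, with intercept.
profiles = np.array([[1, a, b] for a in (-1, 1) for b in (-1, 1)], float)
idx, det = best_exact_design(profiles, 3)
print(idx, det)  # any 3 of the 4 profiles are equally D-optimal here
```

Trace-based (A-optimal) selection follows the same pattern with `np.trace(np.linalg.inv(X.T @ X))` minimized instead, provided the information matrix is nonsingular.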