The Bit Complexity of Efficient Continuous Optimization
We analyze the bit complexity of efficient algorithms for fundamental
optimization problems, such as linear regression, p-norm regression, and
linear programming (LP). State-of-the-art algorithms are iterative, and in
terms of the number of arithmetic operations, they match the current time
complexity of multiplying two n-by-n matrices (up to polylogarithmic
factors). However, previous work has typically assumed infinite precision
arithmetic, and due to complicated inverse maintenance techniques, the actual
running times of these algorithms are unknown. To settle the running time and
bit complexity of these algorithms, we demonstrate that a core common
subroutine, known as \emph{inverse maintenance}, is backward-stable.
Additionally, we show that iterative approaches for solving constrained
weighted regression problems can be accomplished with bounded-error
pre-conditioners. Specifically, we prove that linear programs can be solved
approximately in matrix multiplication time multiplied by polylog factors that
depend on the condition number κ of the matrix and the inner and outer
radius of the LP problem. p-norm regression can be solved approximately in
matrix multiplication time multiplied by polylog factors in κ. Lastly,
linear regression can be solved approximately in input-sparsity time multiplied
by polylog factors in κ. Furthermore, we present results for achieving
lower than matrix multiplication time for p-norm regression by utilizing
faster solvers for sparse linear systems. Comment: 71 pages
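The flavor of "iterative approaches with bounded-error pre-conditioners" mentioned in this abstract can be illustrated with a small, self-contained sketch. The snippet below is only an illustration of that general pattern, not the paper's algorithm: it builds a least-squares preconditioner from a random sketch of A, so the preconditioner only needs to be approximately correct, and then runs preconditioned conjugate gradient on the normal equations. The function names and parameters (sketch_preconditioner, sketch_size, n_iters) are assumptions made for this example.

```python
# A minimal sketch, NOT the paper's algorithm: preconditioned iterative
# regression where the preconditioner is only approximately computed
# (here, from a random sketch of A).
import numpy as np

def sketch_preconditioner(A, sketch_size, rng):
    """R factor of a random sketch S @ A; A @ inv(R) is then well
    conditioned, so modest errors in computing R are tolerable."""
    S = rng.standard_normal((sketch_size, A.shape[0]))
    _, R = np.linalg.qr(S @ A)
    return R

def preconditioned_cg_lstsq(A, b, n_iters=25, seed=0):
    """Approximate argmin_x ||A x - b||_2 via PCG on the normal equations."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    R = sketch_preconditioner(A, sketch_size=8 * n, rng=rng)
    precond = lambda v: np.linalg.solve(R, np.linalg.solve(R.T, v))

    x = np.zeros(n)
    r = A.T @ b                      # residual of the normal equations
    z = precond(r)
    p = z.copy()
    for _ in range(n_iters):
        Hp = A.T @ (A @ p)           # apply A^T A without forming it
        alpha = (r @ z) / (p @ Hp)
        x = x + alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) < 1e-12 * np.linalg.norm(A.T @ b):
            break                    # converged to working precision
        z_new = precond(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((2000, 50))
    b = rng.standard_normal(2000)
    x = preconditioned_cg_lstsq(A, b)
    x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("relative error:", np.linalg.norm(x - x_ref) / np.linalg.norm(x_ref))
```

Because conjugate gradient only needs the preconditioned system to be well conditioned, errors in computing R degrade the iteration count rather than correctness, which loosely mirrors the bounded-error pre-conditioner idea the abstract alludes to.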
Improving the Asymmetric TSP by Considering Graph Structure
Recent work on cost-based relaxations has improved Constraint Programming
(CP) models for the Traveling Salesman Problem (TSP). We provide a short survey
of solving the asymmetric TSP with CP. Then, we suggest new implied propagators
based on general graph properties. We experimentally show that such implied
propagators bring robustness to pathological instances and highlight the fact
that graph structure can significantly improve the behavior of search heuristics.
Finally, we show that our approach outperforms current state-of-the-art
results. Comment: Technical report
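One simple example of the kind of graph-structure reasoning this abstract refers to, offered here only as an illustration and not necessarily one of the propagators proposed in the paper, is a consistency check based on strong connectivity: a Hamiltonian circuit can only exist if the digraph of remaining candidate arcs is strongly connected, so a propagator can fail a search node as soon as that necessary condition is violated. The names below (connectivity_propagator, candidate_arcs) are hypothetical.

```python
# Illustrative sketch of a graph-property based consistency check for the
# asymmetric TSP (an assumption, not the paper's propagators): fail when the
# digraph of remaining candidate arcs is no longer strongly connected.
from collections import deque

def reachable(adj, start):
    """Nodes reachable from `start` in digraph `adj` (node -> successors)."""
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def connectivity_propagator(candidate_arcs, nodes):
    """Return False if no tour can exist in the candidate digraph.

    candidate_arcs: dict mapping each node to the successors still allowed
    by the successor variables of the CP model.
    """
    adj = {u: set(vs) for u, vs in candidate_arcs.items()}
    rev = {u: set() for u in nodes}
    for u, vs in adj.items():
        for v in vs:
            rev[v].add(u)
    start = next(iter(nodes))
    # Strongly connected iff every node is reachable from `start`
    # both forwards and backwards.
    return reachable(adj, start) == set(nodes) and reachable(rev, start) == set(nodes)

if __name__ == "__main__":
    nodes = {0, 1, 2, 3}
    ok = {0: {1}, 1: {2}, 2: {3}, 3: {0}}          # a single directed cycle
    broken = {0: {1}, 1: {0}, 2: {3}, 3: {2}}      # two disjoint cycles
    print(connectivity_propagator(ok, nodes))      # True: still consistent
    print(connectivity_propagator(broken, nodes))  # False: no tour can exist
```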
Report from the MPP Working Group to the NASA Associate Administrator for Space Science and Applications
NASA's Office of Space Science and Applications (OSSA) gave a select group of scientists the opportunity to test and implement their computational algorithms on the Massively Parallel Processor (MPP) located at Goddard Space Flight Center, beginning in late 1985. One year later, the Working Group presented its report, which addressed the following: algorithms, programming languages, architecture, programming environments, the relation to theory, and performance measurements. The findings point to a number of demonstrated computational techniques for which the MPP architecture is ideally suited. For example, besides executing much faster on the MPP than on conventional computers, systolic VLSI simulation (where distances are short), lattice simulation, neural network simulation, and image problems were found to be easier to program on the MPP's architecture than on a CYBER 205 or even a VAX. The report also makes technical recommendations covering all aspects of MPP use, and recommendations concerning the future of the MPP and machines based on similar architectures, expansion of the Working Group, and study of the role of future parallel processors for space station, EOS, and the Great Observatories era.