On Semidefinite Relaxations for Matrix-Weighted State-Estimation Problems in Robotics
In recent years, there has been remarkable progress in the development of
so-called certifiable perception methods, which leverage semidefinite, convex
relaxations to find global optima of perception problems in robotics. However,
many of these relaxations rely on simplifying assumptions that facilitate the
problem formulation, such as an isotropic measurement noise distribution. In
this paper, we explore the tightness of the semidefinite relaxations of
matrix-weighted (anisotropic) state-estimation problems and reveal the
limitations lurking therein: matrix-weighted factors can cause convex
relaxations to lose tightness. In particular, we show that the semidefinite
relaxations of localization problems with matrix weights may be tight only for
low noise levels. We empirically explore the factors that contribute to this
loss of tightness and demonstrate that redundant constraints can be used to
regain tightness, albeit at the expense of real-time performance. As a second
technical contribution of this paper, we show that the state-of-the-art
relaxation of scalar-weighted SLAM cannot be used when matrix weights are
considered. We provide an alternate formulation and show that its SDP
relaxation is not tight (even for very low noise levels) unless specific
redundant constraints are used. We demonstrate the tightness of our
formulations on both simulated and real-world data.
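The rank-one certificate that underlies such certifiable methods can be sketched in a few lines (a hypothetical illustration on synthetic data, not the paper's solver): when the relaxation is tight, the optimal SDP matrix factors as X = xx^T, and the globally optimal estimate is recovered from its leading eigenvector.

```python
import numpy as np

# Hypothetical sketch: a "tight" SDP solution is rank one, X = x x^T,
# and the state estimate x is read off the leading eigenpair of X.
rng = np.random.default_rng(0)
x = rng.normal(size=4)          # synthetic ground-truth state
X = np.outer(x, x)              # the rank-one matrix a tight relaxation returns

eigvals, eigvecs = np.linalg.eigh(X)   # ascending eigenvalues
# Tightness certificate: all but the largest eigenvalue are numerically zero.
rank_one = eigvals[-1] > 1e-9 and np.all(np.abs(eigvals[:-1]) < 1e-9 * eigvals[-1])

# Recover x (up to a global sign) from the leading eigenvector.
x_hat = np.sqrt(eigvals[-1]) * eigvecs[:, -1]
err = min(np.linalg.norm(x_hat - x), np.linalg.norm(x_hat + x))
```

When the relaxation loses tightness, as the abstract reports for matrix-weighted problems at higher noise levels, the smaller eigenvalues are no longer negligible and no such extraction is possible.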
Convex Geometric Motion Planning on Lie Groups via Moment Relaxation
This paper reports a novel result: with proper robot models on matrix Lie
groups, one can formulate the kinodynamic motion planning problem for rigid
body systems as \emph{exact} polynomial optimization problems that can be
relaxed as semidefinite programming (SDP). Due to the nonlinear rigid body
dynamics, the motion planning problem for rigid body systems is nonconvex.
Existing global optimization-based methods do not properly deal with the
configuration space of the 3D rigid body; thus, they do not scale well to
long-horizon planning problems. We use Lie groups as the configuration space in
our formulation and apply the variational integrator to formulate the forced
rigid body systems as quadratic polynomials. Then we leverage Lasserre's
hierarchy to obtain the globally optimal solution via SDP. By constructing the
motion planning problem in a sparse manner, the results show that the proposed
algorithm has \emph{linear} complexity with respect to the planning horizon.
This paper demonstrates that the proposed method can provide rank-one optimal
solutions at relaxation order two for most of the test cases of 1) 3D drone
landing using the full dynamics model and 2) inverse kinematics for serial
manipulators.
Comment: Accepted to Robotics: Science and Systems (RSS), 202
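The polynomial structure the formulation relies on can be sketched on the simplest matrix Lie group (a hypothetical illustration, not the paper's model): on SO(2), group membership R^T R = I is a set of quadratic equations in the matrix entries, and a discrete update R_next = R E (with E a rotation increment, as a variational integrator produces) is bilinear, i.e. degree two, in the unknowns.

```python
import numpy as np

# Hypothetical sketch: rigid-body kinematics as quadratic polynomials on SO(2).
theta, dtheta = 0.7, 0.1
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
E = np.array([[np.cos(dtheta), -np.sin(dtheta)],
              [np.sin(dtheta),  np.cos(dtheta)]])

# The SO(2) constraints, written out as quadratic polynomials in R's entries:
constraints = [R[0, 0]**2 + R[1, 0]**2 - 1.0,
               R[0, 1]**2 + R[1, 1]**2 - 1.0,
               R[0, 0]*R[0, 1] + R[1, 0]*R[1, 1]]

R_next = R @ E                             # bilinear in the entries of R and E
residual = R_next.T @ R_next - np.eye(2)   # the update stays on SO(2)
```

Because every constraint is a low-degree polynomial, the whole planning problem fits the moment/Lasserre relaxation machinery without approximation.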
Cutset Sampling for Bayesian Networks
The paper presents a new sampling methodology for Bayesian networks that
samples only a subset of variables and applies exact inference to the rest.
Cutset sampling is a network structure-exploiting application of the
Rao-Blackwellisation principle to sampling in Bayesian networks. It improves
convergence by exploiting memory-based inference algorithms. It can also be
viewed as an anytime approximation of the exact cutset-conditioning algorithm
developed by Pearl. Cutset sampling can be implemented efficiently when the
sampled variables constitute a loop-cutset of the Bayesian network and, more
generally, when the induced width of the network's graph conditioned on the
observed sampled variables is bounded by a constant w. We demonstrate
empirically the benefit of this scheme on a range of benchmarks.
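The core idea, sample a cutset and apply exact inference to the rest, can be shown on a toy network (a simplified forward-sampling variant of the Rao-Blackwellised scheme; the network and all CPTs below are invented, and real cutset sampling additionally conditions on evidence and Gibbs-samples the cutset):

```python
import random

# Toy network A -> {B, C} -> D with made-up CPTs. Only the cutset variable A
# is sampled; conditioned on A, the remaining variables are handled by exact
# inference (here, plain enumeration).
pA = {1: 0.6, 0: 0.4}                      # P(A=1), P(A=0)
pB = {0: 0.2, 1: 0.7}                      # P(B=1 | A=a)
pC = {0: 0.5, 1: 0.3}                      # P(C=1 | A=a)
pD = {(0, 0): 0.1, (0, 1): 0.6,
      (1, 0): 0.5, (1, 1): 0.9}            # P(D=1 | B=b, C=c)

def p_d1_given_a(a):
    """Exact inference over the non-sampled variables B and C."""
    total = 0.0
    for b in (0, 1):
        for c in (0, 1):
            pb = pB[a] if b else 1.0 - pB[a]
            pc = pC[a] if c else 1.0 - pC[a]
            total += pb * pc * pD[(b, c)]
    return total

random.seed(0)
n = 5000
samples = (1 if random.random() < pA[1] else 0 for _ in range(n))
estimate = sum(p_d1_given_a(a) for a in samples) / n   # Rao-Blackwellised P(D=1)

# Reference value by full enumeration (feasible only for tiny networks).
exact = pA[0] * p_d1_given_a(0) + pA[1] * p_d1_given_a(1)
```

Because each sample contributes an exactly computed conditional probability rather than a 0/1 indicator, the estimator's variance is much lower than that of plain forward sampling, which is the convergence benefit the abstract describes.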
Rigorous numerical approaches in electronic structure theory
Electronic structure theory concerns the description of molecular properties according to the postulates of quantum mechanics. For practical purposes, this is realized entirely through numerical computation, the scope of which is constrained by computational costs that increase rapidly with the size of the system.
The significant progress made in this field over the past decades has been facilitated in part by the willingness of chemists to forego some mathematical rigour in exchange for greater efficiency. While such compromises make calculations on large systems feasible, there are lingering concerns over the impact that these compromises have on the quality of the results produced. This research is motivated by two key issues that contribute to this loss of quality, namely i) the numerical errors accumulated due to the use of finite-precision arithmetic and the application of numerical approximations, and ii) the reliance on iterative methods that are not guaranteed to converge to the correct solution.
Taking the above issues into consideration, the aim of this thesis is to explore ways to perform electronic structure calculations with greater mathematical rigour, through the application of rigorous numerical methods. In particular, we focus on methods based on interval analysis and deterministic global optimization. The Hartree-Fock electronic structure method is used as the subject of this study due to its ubiquity within this domain.
We outline an approach for placing rigorous bounds on numerical error in Hartree-Fock computations. This is achieved through the application of interval analysis techniques, which are able to rigorously bound and propagate quantities affected by numerical errors. Using this approach, we implement a program called Interval Hartree-Fock. Given a closed-shell system and the current electronic state, this program is able to compute rigorous error bounds on quantities including i) the total energy, ii) molecular orbital energies, iii) molecular orbital coefficients, and iv) derived electronic properties.
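The interval-analysis mechanism behind this error bounding can be sketched in a few lines (a minimal, hypothetical model, not the Interval Hartree-Fock code; a rigorous implementation would also round every bound outward to absorb floating-point rounding):

```python
# Minimal interval-arithmetic sketch: every uncertain value is carried as a
# [lo, hi] enclosure, and arithmetic widens the enclosure so that the true
# result is guaranteed to lie inside it.
class Interval:
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, lo if hi is None else hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def width(self):
        return self.hi - self.lo

# A hypothetical integral known only to within +/- 1e-6, used in an
# energy-like expression with an exactly representable coefficient:
integral = Interval(0.528 - 1e-6, 0.528 + 1e-6)
coeff = Interval(-1.25)
energy = coeff * integral + Interval(2.0)   # enclosure of the true result
```

Propagating enclosures like this through every step of a computation is what yields the rigorous bounds on the total energy, orbital energies, and derived properties listed above.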
Interval Hartree-Fock is adapted as an error analysis tool for studying the impact of numerical error in Hartree-Fock computations. It is used to investigate the effect of input related factors such as system size and basis set types on the numerical accuracy of the Hartree-Fock total energy. Consideration is also given to the impact of various algorithm design decisions. Examples include the application of different integral screening thresholds, the variation between single and double precision arithmetic in two-electron integral evaluation, and the adjustment of interpolation table granularity. These factors are relevant to both the usage of conventional Hartree-Fock code, and the development of Hartree-Fock code optimized for novel computing devices such as graphics processing units.
We then present an approach for solving the Hartree-Fock equations to within a guaranteed margin of error. This is achieved by treating the Hartree-Fock equations as a non-convex global optimization problem, which is then solved using deterministic global optimization. The main contribution of this work is the development of algorithms for handling quantum chemistry specific expressions such as the one and two-electron integrals within the deterministic global optimization framework. This approach was implemented as an extension to an existing open source solver.
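The branch-and-bound mechanics of deterministic global optimization can be illustrated on a toy one-dimensional problem (purely illustrative; the thesis extends an existing open-source solver and bounds quantum-chemistry integrals, not this made-up objective):

```python
# Toy interval branch-and-bound: minimize the nonconvex
# f(x) = x^4 - 3x^2 + x on [-2, 2] with a guaranteed pruning rule.
def f(x):
    return x**4 - 3.0 * x**2 + x

def f_bound(lo, hi):
    """A crude but valid lower bound on f over [lo, hi] via interval arithmetic."""
    x2_lo = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)   # range of x^2
    x2_hi = max(lo * lo, hi * hi)
    # Summing the lower end of each term gives a guaranteed lower bound.
    return x2_lo**2 - 3.0 * x2_hi + lo

tol = 1e-6
boxes = [(-2.0, 2.0)]
best = min(f(-2.0), f(2.0))                 # incumbent upper bound
while boxes:
    lo, hi = boxes.pop()
    if f_bound(lo, hi) > best - tol:        # box provably cannot improve: prune
        continue
    best = min(best, f(0.5 * (lo + hi)))    # sample midpoint to improve incumbent
    if hi - lo > tol:                       # otherwise bisect and recurse
        mid = 0.5 * (lo + hi)
        boxes += [(lo, mid), (mid, hi)]

# 'best' now lies within a small multiple of tol of the global minimum,
# because every discarded box carried a certified lower bound.
```

The thesis's contribution is analogous machinery for bounding the one- and two-electron integral expressions that appear in the Hartree-Fock objective, so that the same prune-or-bisect guarantee applies there.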
Proof of concept calculations are performed for a variety of problems within Hartree-Fock theory, including those in i) point energy calculation, ii) geometry optimization, iii) basis set optimization, and iv) excited state calculation. Performance analyses of these calculations are also presented and discussed.
The tensile properties of compatible glassy polyblends based upon poly(2,6-dimethyl-1,4-phenylene oxide).
The mechanical behavior of compatible glassy polyblends based upon poly(2,6-dimethyl-1,4-phenylene oxide) (PPO) was investigated. In particular, the influence of composition, molecular weight, and molecular weight distribution upon the large-deformation tensile properties was assessed. Various possible correlations between the experimentally determined moduli and theory are considered, including correlations with density, packing density, composite theory, and lattice fluid theory. Similarities in behavior of the compatible glassy polyblends to the phenomenon known as antiplasticization are presented. The modeling of the properties of these polymer mixtures via Simplex lattice design is also detailed. Finally, attention is given to the development of compatibility criteria based upon the large-deformation tensile property and density measurements.
It was shown that composite equations cannot adequately describe the mechanical behavior of compatible PPO based polyblends. However, it is possible to generate a second order Simplex equation which will closely model the modulus-compositional empirical trends. Furthermore, there are strong indications that the interaction term in the Simplex equation can serve as a useful gauge for compatibility and level of compatibility.
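The kind of model referred to, a second-order Simplex (Scheffé) polynomial for a binary blend, can be sketched as a least-squares fit; the compositions and moduli below are invented for illustration, not measured PPO/PS data:

```python
import numpy as np

# Hypothetical second-order Simplex (Scheffe) model for a binary blend:
#   E = b1*x1 + b2*x2 + b12*x1*x2,
# where x1 + x2 = 1 and b12 is the interaction term discussed in the text.
x1 = np.array([1.0, 0.75, 0.5, 0.25, 0.0])   # weight fraction of component 1
x2 = 1.0 - x1
E = np.array([2.5, 2.9, 3.0, 2.8, 2.4])      # synthetic moduli (GPa)

A = np.column_stack([x1, x2, x1 * x2])
(b1, b2, b12), *_ = np.linalg.lstsq(A, E, rcond=None)
# A positive b12 means the blend modulus exceeds the rule of mixtures,
# the kind of synergistic deviation the text links to compatibility.
```

With these synthetic data the interaction coefficient comes out positive, mirroring the text's claim that the interaction term can gauge the level of compatibility.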
It was also shown that all the criteria for the phenomenon known as antiplasticization were fulfilled by all the compatible PPO-based systems examined. For example, the high molecular weight antiplasticizer, polystyrene (PS), when dissolved in PPO, decreases the glass transition temperature of the blend while raising the magnitude of the secant modulus and tensile strength above the values predicted by the rule of mixtures. Packing density was found to be useful for explaining antiplasticization and compatibility, and appears to be the key to understanding the moduli of glassy alloys. The density and packing density are the only equilibrium quantities which pass through a maximum similar to the modulus. These results suggest that compatibility might be handled without resorting to specific molecular interactions.
Doctor of Philosophy dissertation
Sparse matrix codes are found in numerous applications ranging from iterative numerical solvers to graph analytics. Achieving high performance on these codes has, however, been a significant challenge, mainly due to array access indirection, for example, of the form A[B[i]]. Indirect accesses make precise dependence analysis impossible at compile-time, and hence prevent many parallelizing and locality optimizing transformations from being applied. The expert user relies on manually written libraries to tailor the sparse code and data representations best suited to the target architecture from a general sparse matrix representation. However, libraries have limited composability, address very specific optimization strategies, and have to be rewritten as new architectures emerge. In this dissertation, we explore the use of the inspector/executor methodology to accomplish the code and data transformations to tailor high performance sparse matrix representations. We devise and embed abstractions for such inspector/executor transformations within a compiler framework so that they can be composed with a rich set of existing polyhedral compiler transformations to derive complex transformation sequences for high performance. We demonstrate the automatic generation of inspector/executor code, which orchestrates code and data transformations to derive high performance representations for the Sparse Matrix Vector Multiply kernel in particular. We also show how the same transformations may be integrated into sparse matrix and graph applications such as Sparse Matrix Matrix Multiply and Stochastic Gradient Descent, respectively. The specific constraints of these applications, such as problem size and dependence structure, necessitate unique sparse matrix representations that can be realized using our transformations. Computations such as Gauss-Seidel, with loop-carried dependences at the outermost loop, necessitate different strategies for high performance.
Specifically, we organize the computation into level sets or wavefronts of irregular size, such that iterations within a wavefront may be scheduled in parallel but different wavefronts have to be synchronized. We demonstrate automatic code generation of high performance inspectors that do explicit dependence testing and level set construction at runtime, as well as high performance executors, which are the actual parallelized computations. For the above sparse matrix applications, we automatically generate inspector/executor code comparable in performance to manually tuned libraries.
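The level-set construction described here can be sketched as a small runtime inspector (an invented example, not the dissertation's generated code): for a sparse lower-triangular solve, iteration i depends on every earlier row j it reads, so each iteration's wavefront index is one more than the deepest iteration it waits on.

```python
# Toy inspector: build wavefronts (level sets) from a sparse dependence
# structure so that iterations within one wavefront can run in parallel,
# with a barrier between consecutive wavefronts (the executor's job).
def build_wavefronts(n, rows):
    """rows[i] = indices j < i whose results iteration i reads."""
    level = [0] * n
    for i in range(n):
        # i sits one level deeper than the deepest iteration it depends on.
        level[i] = 1 + max((level[j] for j in rows[i]), default=-1)
    wavefronts = {}
    for i, lvl in enumerate(level):
        wavefronts.setdefault(lvl, []).append(i)
    return [wavefronts[lvl] for lvl in sorted(wavefronts)]

# Hypothetical dependence structure: iteration 2 reads rows 0 and 1,
# iteration 3 reads row 1, and iteration 4 reads row 2.
rows = {0: [], 1: [], 2: [0, 1], 3: [1], 4: [2]}
fronts = build_wavefronts(5, rows)   # [[0, 1], [2, 3], [4]]
```

The inspector runs once per sparsity pattern at runtime, and its cost is amortized over the many executor invocations that reuse the same wavefront schedule.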
Proceedings of the 18th Irish Conference on Artificial Intelligence and Cognitive Science
These proceedings contain the papers accepted for publication at AICS-2007, the 18th Annual Conference on Artificial Intelligence and Cognitive Science, held at the Technological University Dublin, Ireland, from 29 to 31 August 2007. AICS is the annual conference of the Artificial Intelligence Association of Ireland (AIAI).