Determinantal sets, singularities and application to optimal control in medical imagery
Control theory has recently been applied to the field of nuclear magnetic
resonance imagery. The goal is to control the magnetic field optimally in order
to improve the contrast between two biological matters in the images.
Geometric optimal control leads us here to analyze meromorphic vector fields
depending upon physical parameters, and having their singularities defined by
a determinantal variety. The involved matrix has polynomial entries with
respect to both the state variables and the parameters. Taking into account the
physical constraints of the problem, one needs to classify, with respect to the
parameters, the number of real singularities lying in some prescribed
semi-algebraic set. We develop a dedicated algorithm for real root
classification of the singularities of the rank defects of a polynomial matrix,
cut with a given semi-algebraic set. The algorithm works under some genericity
assumptions which are easy to check. These assumptions are not so restrictive
and are satisfied in the aforementioned application. Like more general
strategies for real root classification, our algorithm needs to compute the
critical loci of some maps, intersections with the boundary of the
semi-algebraic domain, etc. In order to compute these objects, the
determinantal structure is exploited through a stratification by the rank of
the polynomial matrix. This speeds up the computations by a factor of 100.
Furthermore, our implementation is
able to solve the application in medical imagery, which was out of reach of
more general algorithms for real root classification. For instance,
computational results show that the contrast problem where one of the matters
is water is partitioned into three distinct classes.
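The flavor of real root classification can be illustrated on a textbook toy example (invented here for illustration, unrelated to the paper's determinantal systems): classify, by sign conditions on the parameters (b, c), how many roots of x^2 + bx + c lie in the prescribed semi-algebraic set {x > 0}.

```python
def count_positive_roots(b, c):
    """Number of distinct real roots of x^2 + b*x + c lying in the
    semi-algebraic set {x > 0}, decided purely by sign conditions on
    the parameters: the discriminant D = b^2 - 4c and, via Vieta's
    formulas, the root sum -b and the root product c."""
    D = b * b - 4 * c
    if D < 0:
        return 0                       # no real roots at all
    if c < 0:
        return 1                       # product < 0: one positive, one negative root
    if c == 0:
        return 1 if b < 0 else 0      # roots are 0 and -b
    # c > 0: the roots share a sign; they are positive iff their sum -b > 0
    if b < 0:
        return 2 if D > 0 else 1      # a double root is counted once
    return 0

# The (b, c) parameter plane is partitioned into classes by these sign
# conditions, which is the kind of output a real root classification
# algorithm produces.
print(count_positive_roots(-3, 2))    # roots 1 and 2 -> prints 2
print(count_positive_roots(0, -1))    # roots -1 and 1 -> prints 1
print(count_positive_roots(3, 2))     # roots -1 and -2 -> prints 0
```

In the paper's setting the role of the discriminant is played by critical loci of maps restricted to the determinantal variety, but the output has the same shape: a partition of parameter space into classes with a constant number of real solutions.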
Deciding the consistency of non-linear real arithmetic constraints with a conflict driven search using cylindrical algebraic coverings
We present a new algorithm for determining the satisfiability of conjunctions
of non-linear polynomial constraints over the reals, which can be used as a
theory solver for satisfiability modulo theory (SMT) solving for non-linear
real arithmetic. The algorithm is a variant of Cylindrical Algebraic
Decomposition (CAD) adapted for satisfiability, where solution candidates
(sample points) are constructed incrementally, either until a satisfying sample
is found or sufficiently many samples have been taken to conclude unsatisfiability.
The choice of samples is guided by the input constraints and previous
conflicts.
The key idea behind our new approach is to start with a partial sample;
demonstrate that it cannot be extended to a full sample; and, from the reasons
for that failure, rule out a larger space around the partial sample. These
excluded regions build up incrementally into a cylindrical algebraic covering
of the space. There are
similarities with the incremental variant of CAD, the NLSAT method of Jovanovic
and de Moura, and the NuCAD algorithm of Brown; but we present worked examples
and experimental results on a preliminary implementation to demonstrate the
differences from these, and the benefits of the new approach.
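The covering idea can be sketched in one dimension (a drastically simplified toy with invented helper names, not the paper's algorithm, which works over n variables with CAD-style projection): pick a sample point; if some constraint fails there, exclude not just the point but a whole interval around it on which the constraint provably fails; repeat until either a satisfying sample is found or the excluded intervals cover the real line.

```python
import math

def quad_roots(a, b, c):
    """Real roots of a*x^2 + b*x + c (handles the linear case a == 0)."""
    if a == 0:
        return [] if b == 0 else [-c / b]
    d = b * b - 4 * a * c
    if d < 0:
        return []
    s = math.sqrt(d)
    return sorted([(-b - s) / (2 * a), (-b + s) / (2 * a)])

def conflict_interval(poly, x):
    """If the constraint 'poly(x) < 0' fails at sample x, return an
    interval around x on which it keeps failing; else None."""
    a, b, c = poly
    if a * x * x + b * x + c < 0:
        return None
    lo, hi = -math.inf, math.inf
    for r in quad_roots(a, b, c):      # roots bound the sign-invariant cells
        if r <= x:
            lo = max(lo, r)
        else:
            hi = min(hi, r)
    return (lo, hi)

def pick_sample(covered):
    """Return a point outside the union of the covered intervals, or
    None if they already cover the whole real line."""
    if not covered:
        return 0.0
    merged = []
    for lo, hi in sorted(covered):
        if merged and lo <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], hi)
        else:
            merged.append([lo, hi])
    if merged[0][0] > -math.inf:
        return merged[0][0] - 1.0
    for (_, h1), (l2, _) in zip(merged, merged[1:]):
        return (h1 + l2) / 2.0         # midpoint of the first gap
    return merged[-1][1] + 1.0 if merged[-1][1] < math.inf else None

def sat_conjunction(constraints):
    """Decide satisfiability of a conjunction of strict constraints
    poly(x) < 0, each poly given as quadratic coefficients (a, b, c)."""
    covered = []                       # conflict intervals found so far
    while True:
        x = pick_sample(covered)
        if x is None:
            return False, None         # covering of R is complete: UNSAT
        for poly in constraints:
            iv = conflict_interval(poly, x)
            if iv is not None:
                covered.append(iv)     # exclude a whole interval, not just x
                break
        else:
            return True, x             # x satisfies every constraint
```

For instance, x^2 < 2 together with x < 1 is found satisfiable at the first sample, while x^2 < 2 together with x > 3 terminates once the conflict intervals merge to cover all of R. The distinctive feature mirrored here is that each conflict excludes a region, not a single candidate.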
Cylindrical algebraic decomposition with equational constraints
Cylindrical Algebraic Decomposition (CAD) has long been one of the most
important algorithms within Symbolic Computation, as a tool to perform
quantifier elimination in first-order logic over the reals. More recently, it
has found prominence in the Satisfiability Checking community as a tool to
identify satisfying solutions of problems in nonlinear real arithmetic.
The original algorithm produces decompositions according to the signs of
polynomials, when what is usually required is a decomposition according to the
truth of a formula containing those polynomials. One approach to achieve that
coarser (but hopefully cheaper) decomposition is to reduce the polynomials
identified in the CAD to reflect a logical structure which reduces the solution
space dimension: the presence of Equational Constraints (ECs).
This paper may act as a tutorial for the use of CAD with ECs: we describe all
necessary background and the current state of the art. In particular, we
present recent work on how McCallum's theory of reduced projection may be
leveraged to make further savings in the lifting phase: both to the polynomials
we lift with and the cells lifted over. We give a new complexity analysis to
demonstrate that the double exponent in the worst case complexity bound for CAD
reduces in line with the number of ECs. We show that the reduction can apply to
both the number of polynomials produced and their degree.

Comment: Accepted into the Journal of Symbolic Computation.
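The headline complexity claim can be stated schematically, in notation assumed here rather than taken verbatim from the paper: if a worst-case CAD for polynomials in $n$ variables of maximum degree $d$ has a cell count doubly exponential in $n$, of the shape

```latex
\#\text{cells} \;=\; O\!\big((2d)^{2^{\,n}}\big),
```

then exploiting $\ell$ equational constraints lowers the double exponent in line with $\ell$:

```latex
\#\text{cells} \;=\; O\!\big((2d)^{2^{\,n-\ell}}\big).
```

This is the sense in which "the double exponent reduces in line with the number of ECs": each EC confines the solution set to a hypersurface, removing one dimension's worth of doubling in the projection-and-lifting process.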
Data-Discriminants of Likelihood Equations
Maximum likelihood estimation (MLE) is a fundamental computational problem in
statistics. The problem is to maximize the likelihood function with respect to
given data on a statistical model. An algebraic approach to this problem is to
solve a very structured parameterized polynomial system called likelihood
equations. For general choices of data, the number of complex solutions to the
likelihood equations is finite and called the ML-degree of the model. The only
solutions to the likelihood equations that are statistically meaningful are the
real/positive solutions. However, the number of real/positive solutions is not
characterized by the ML-degree. We use discriminants to classify data according
to the number of real/positive solutions of the likelihood equations. We call
these discriminants data-discriminants (DD). We develop a probabilistic
algorithm for computing DDs. Experimental results show that, for the benchmarks
we have tried, the probabilistic algorithm is more efficient than the standard
elimination algorithm. Based on the computational results, we discuss the real
root classification problem for the 3 by 3 symmetric matrix model.
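The objects involved can be made concrete on the simplest textbook model (an illustration only, not the paper's 3 by 3 symmetric matrix model): for the Bernoulli model p(theta) = (theta, 1 - theta) with data counts (u0, u1), the likelihood equation is a single degree-one polynomial condition, so the ML-degree is 1 and every data vector yields exactly one real, positive solution.

```python
from fractions import Fraction

def bernoulli_mle(u0, u1):
    """Solve the likelihood equation of the Bernoulli model
    p(theta) = (theta, 1 - theta) for counts (u0, u1).

    The log-likelihood is u0*log(theta) + u1*log(1 - theta); setting
    its derivative to zero and clearing denominators gives the
    likelihood equation
        u0*(1 - theta) - u1*theta = 0,
    a degree-1 polynomial in theta, so this model has ML-degree 1."""
    return Fraction(u0, u0 + u1)

theta = bernoulli_mle(3, 1)
assert 3 * (1 - theta) - 1 * theta == 0   # solves the likelihood equation exactly
print(theta)                               # prints 3/4
```

For richer models the likelihood equations have many complex solutions, and the data-discriminant is precisely what separates regions of data space with different numbers of real/positive ones.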