Using the distribution of cells by dimension in a cylindrical algebraic decomposition
We investigate the distribution of cells by dimension in cylindrical
algebraic decompositions (CADs). We find that they follow a standard
distribution which seems largely independent of the underlying problem or CAD
algorithm used. Rather, the distribution is inherent to the cylindrical
structure and determined mostly by the number of variables.
This insight is then combined with an algorithm that produces only
full-dimensional cells to give an accurate method of predicting the number of
cells in a complete CAD. Since constructing only full-dimensional cells is
relatively inexpensive (involving no costly algebraic number calculations) this
leads to heuristics for helping with various questions of problem formulation
for CAD, such as choosing an optimal variable ordering. Our experiments
demonstrate that this approach can be highly effective.
Comment: 8 pages
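The prediction idea can be sketched in a few lines. The sketch below is our illustration only: it assumes, hypothetically, that cells of dimension d occur in proportion to the binomial coefficient C(n, d) in the number of variables n, so that the cheap count of full-dimensional cells can be scaled up to a total; the paper's actual distribution is determined empirically.

```python
from math import comb

def predict_cell_counts(full_dim_cells: int, n_vars: int) -> dict:
    """Toy predictor: assume (hypothetically) that the number of cells of
    dimension d is proportional to C(n, d), normalised so that dimension n
    matches the observed count of full-dimensional cells."""
    return {d: full_dim_cells * comb(n_vars, d) for d in range(n_vars + 1)}

# E.g. 16 full-dimensional cells in 3 variables predicts 16 * 2**3 = 128 cells
# in total under this (assumed) binomial pattern.
counts = predict_cell_counts(full_dim_cells=16, n_vars=3)
total = sum(counts.values())
```

Because only the full-dimensional cells must actually be constructed, such a predictor can be evaluated for every candidate variable ordering and the ordering with the smallest predicted total chosen.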
Program Verification in the presence of complex numbers, functions with branch cuts etc
In considering the reliability of numerical programs, it is normal to "limit
our study to the semantics dealing with numerical precision" (Martel, 2005). On
the other hand, there is a great deal of work on the reliability of programs
that essentially ignores the numerics. The thesis of this paper is that there
is a class of problems that fall between these two, which could be described as
"does the low-level arithmetic implement the high-level mathematics". Many of
these problems arise because mathematics, particularly the mathematics of the
complex numbers, is more difficult than expected: for example the complex
function log is not continuous, writing down a program to compute an inverse
function is more complicated than just solving an equation, and many algebraic
simplification rules are not universally valid.
The good news is that these problems are theoretically solvable and, in
several real-world examples, close to being solved in practice, though not yet
solved. However, there is still a long way to go before implementations match
the theoretical possibilities.
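The discontinuity of the complex logarithm, and the resulting failure of familiar simplification rules, can be seen directly with Python's standard cmath module (nothing here is specific to the paper; it merely illustrates the class of problem described):

```python
import cmath
import math

# cmath.log uses the principal branch, whose cut lies along the negative real
# axis: approaching -1 from just above and just below the axis gives imaginary
# parts near +pi and -pi respectively, so log is not continuous there.
above = cmath.log(complex(-1.0, 1e-12))
below = cmath.log(complex(-1.0, -1e-12))
jump = above.imag - below.imag   # close to 2*pi

# The "simplification" sqrt(z**2) == z is not universally valid on the
# principal branch: for z = -1 it returns +1, not z.
z = -1 + 0j
assert cmath.sqrt(z ** 2) == 1 + 0j
```

A low-level arithmetic library that silently applies sqrt(z**2) -> z therefore fails to implement the high-level mathematics, which is exactly the gap between numerical-precision analysis and numerics-free verification that the paper targets.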
A "Piano Movers" Problem Reformulated
It has long been known that cylindrical algebraic decompositions (CADs) can
in theory be used for robot motion planning. However, in practice even the
simplest examples can be too complicated to tackle. We consider in detail a
"Piano Mover's Problem": moving an infinitesimally thin piano (or ladder)
through a right-angled corridor.
Producing a CAD for the original formulation of this problem is still
infeasible after 25 years of improvements in both CAD theory and computer
hardware. We review some alternative formulations in the literature which use
differing levels of geometric analysis before input to a CAD algorithm. Simpler
formulations allow CAD to easily address the question of the existence of a
path. We provide a new formulation for which both a CAD can be constructed and
from which an actual path could be determined if one exists, and analyse the
CADs produced using this approach for variations of the problem.
This emphasises the importance of the precise formulation of such problems
for CAD. We analyse the formulations and their CADs considering a variety of
heuristics and general criteria, leading to conclusions about tackling other
problems of this form.
Comment: 8 pages. Copyright IEEE 201
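One way to see what "formulation" means here is to write the configuration constraints down directly. The sketch below is our own illustration (not any of the formulations analysed in the paper): it checks whether a ladder of length r, given by its endpoints, lies inside a right-angled corridor of width w, testing sample points along the segment as a numeric stand-in for the exact semi-algebraic condition a CAD would work with.

```python
def in_corridor(x: float, y: float, w: float = 1.0) -> bool:
    """Right-angled corridor (illustrative geometry): the union of a
    horizontal arm {y in [0, w], x <= w} and a vertical arm
    {x in [0, w], y <= w}."""
    horizontal = 0.0 <= y <= w and x <= w
    vertical = 0.0 <= x <= w and y <= w
    return horizontal or vertical

def valid_configuration(x1, y1, x2, y2, r, w=1.0, samples=100):
    """A configuration is valid if the endpoints are distance r apart and
    the whole segment lies in the corridor (checked only at sample points
    here; a CAD formulation would encode this exactly as polynomial
    inequalities in x1, y1, x2, y2)."""
    if abs((x1 - x2) ** 2 + (y1 - y2) ** 2 - r ** 2) > 1e-9:
        return False
    return all(
        in_corridor(x1 + t * (x2 - x1), y1 + t * (y2 - y1), w)
        for t in (i / samples for i in range(samples + 1))
    )
```

Already in this toy version the choice of how to encode "the segment lies in the corridor" (sampled points, endpoint conditions, or exact quantification over the segment parameter) changes the polynomial system handed to CAD, which is the sensitivity to formulation the paper emphasises.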
Choosing a variable ordering for truth-table invariant cylindrical algebraic decomposition by incremental triangular decomposition
Cylindrical algebraic decomposition (CAD) is a key tool for solving problems
in real algebraic geometry and beyond. In recent years a new approach has been
developed, where regular chains technology is used to first build a
decomposition in complex space. We consider the latest variant of this, which
builds the complex decomposition incrementally, polynomial by polynomial, and
produces CADs on whose cells each formula in a sequence is truth-invariant. As
with all CAD algorithms, the user must provide a variable ordering, which can
have a profound impact on the tractability of a problem. We evaluate existing
heuristics to help with the choice for this algorithm, suggest improvements,
and then derive a new heuristic more closely aligned with the mechanics of the
new algorithm.
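As a concrete example of an existing ordering heuristic of the kind evaluated here (this is the well-known Brown-style heuristic, not the new heuristic derived in the paper, and tie-breaking conventions vary between implementations), variables can be compared by maximum degree, then maximum total degree of terms containing them, then the number of such terms:

```python
def brown_key(var, polys):
    """Sort key for one variable over a set of polynomials, where each
    polynomial is a list of monomials and each monomial is a dict mapping
    variable names to exponents. Smaller key = 'simpler' variable, placed
    earlier in the ordering (conventions differ between systems)."""
    monomials = [m for p in polys for m in p if var in m]
    max_deg = max((m[var] for m in monomials), default=0)
    max_total_deg = max((sum(m.values()) for m in monomials), default=0)
    return (max_deg, max_total_deg, len(monomials))

def brown_order(variables, polys):
    """Order variables by the Brown-style key above."""
    return sorted(variables, key=lambda v: brown_key(v, polys))

# E.g. for the single polynomial x**2*y + z, z is simplest and x hardest.
polys = [[{'x': 2, 'y': 1}, {'z': 1}]]
ordering = brown_order(['x', 'y', 'z'], polys)
```

Such measures are cheap syntactic proxies; the paper's point is that a heuristic aligned with the mechanics of the incremental regular-chains construction can do better than proxies of this kind.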
Flow-cytometric quantification of microbial cells on sand from water biofilters
Rapid quantification of absolute microbial cell abundances is important for a comprehensive interpretation of microbiome surveys and crucial to support theoretical modelling and the design of engineered systems. In this paper, we propose a protocol specifically optimised for the quantification of microbial abundances in water biofilters using flow cytometry (FCM). We optimised cell detachment from sand biofilter particles for FCM quantification through the evaluation of five chemical dispersants (NaCl, Triton X-100, CaCl2, sodium pyrophosphate (PP), and Tween 80 combined with PP), different mechanical pre-treatments (low- and high-energy sonication and shaking) and two fixation methods (glutaraldehyde and ethanol). The developed protocol was cross-compared with other established and commonly employed methods for biomass quantification in water filter samples (adenosine triphosphate (ATP) quantification, real-time quantitative PCR (qPCR) and volatile solids (VS)). The highest microbial count was obtained by detaching the biofilm from biofilter grains and dispersing clusters into single cells using Tween 80 and sodium pyrophosphate combined with four steps of high-energy sonication (27 W, 80 s per step); glutaraldehyde was shown to be the best fixative solution. The developed protocol was reliable and highly reproducible and produced results comparable to data from alternative quantification methods. Indeed, high correlations were found with trends obtained through ATP (ρ = 0.98) and qPCR (ρ = 0.91) measurements. The VS content was confirmed to be an inaccurate measure of biomass in sand samples, since it correlated poorly with all three other methods (ρ = 0.005 with FCM, 0.002 with ATP and 0.177 with qPCR). FCM and ATP showed the strongest agreement between absolute counts, with a correlation slope of 0.7, while qPCR appeared to overestimate cell counts by a factor of ten.
The rapidity and reproducibility of the developed method make it ideal for routine quantification of microbial cell abundances on sand from water biofilters, and thus useful in revealing the ecological patterns and quantifying the metabolic kinetics involved in such systems.
Cylindrical algebraic decomposition with equational constraints
Cylindrical Algebraic Decomposition (CAD) has long been one of the most
important algorithms within Symbolic Computation, as a tool to perform
quantifier elimination in first-order logic over the reals. More recently it
has gained prominence in the Satisfiability Checking community as a tool to
identify satisfying solutions of problems in nonlinear real arithmetic.
The original algorithm produces decompositions according to the signs of
polynomials, when what is usually required is a decomposition according to the
truth of a formula containing those polynomials. One approach to achieve that
coarser (but hopefully cheaper) decomposition is to reduce the polynomials
identified in the CAD to reflect a logical structure which reduces the solution
space dimension: the presence of Equational Constraints (ECs).
This paper may act as a tutorial for the use of CAD with ECs: we describe all
necessary background and the current state of the art. In particular, we
present recent work on how McCallum's theory of reduced projection may be
leveraged to make further savings in the lifting phase: both to the polynomials
we lift with and the cells lifted over. We give a new complexity analysis to
demonstrate that the double exponent in the worst case complexity bound for CAD
reduces in line with the number of ECs. We show that the reduction can apply to
both the number of polynomials produced and their degree.
Comment: Accepted into the Journal of Symbolic Computation. arXiv admin note:
text overlap with arXiv:1501.0446
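The shape of the saving can be indicated schematically (illustrative only: the constants, and the precise dependence on the number of input polynomials m and their degree d, follow the paper's analysis and are not reproduced here). Writing n for the number of variables and e for the number of ECs, the worst-case cell count falls from doubly exponential in n to doubly exponential in n - e:

\[
(md)^{O(2^{n})} \quad\longrightarrow\quad (md)^{O(2^{\,n-e})}.
\]

Each equational constraint thus removes one level from the tower of projection growth rather than merely shrinking a constant factor.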
The dual endothelin converting enzyme/neutral endopeptidase inhibitor SLV-306 (daglutril), inhibits systemic conversion of big endothelin-1 in humans
Aims - Inhibition of neutral endopeptidases (NEP) results in a beneficial increase in plasma concentrations of natriuretic peptides such as ANP. However, NEP inhibitors have proved ineffective as anti-hypertensives, probably because NEP also degrades vasoconstrictor peptides, including endothelin-1 (ET-1). Dual NEP and endothelin converting enzyme (ECE) inhibition may be more useful. The aims of the study were, first, to determine whether SLV-306 (daglutril), a combined ECE/NEP inhibitor, reduced the systemic conversion of big ET-1 to the mature peptide, and second, to determine whether plasma ANP levels were increased.
Main methods - Following oral administration of three increasing doses of SLV-306 (to reach average target concentrations of 75, 300 and 1200 ng ml⁻¹ of the active metabolite KC-12615), in a randomised, double-blind regimen, big ET-1 was infused into thirteen healthy male volunteers. Big ET-1 was administered at rates of 8 and 12 pmol kg⁻¹ min⁻¹ (20 min each). Plasma samples were collected before, during and after big ET-1 infusion. ET-1, C-terminal fragment (CTF), big ET-1 and atrial natriuretic peptide (ANP) were measured.
Key findings - At the two highest concentrations tested, SLV-306 dose dependently attenuated the rise in blood pressure after big ET-1 infusion. There was a significant increase in circulating big ET-1 levels, compared with placebo, indicating that SLV-306 was inhibiting an increasing proportion of endogenous ECE activity. Plasma ANP concentrations also significantly increased, consistent with systemic NEP inhibition.
Significance - SLV-306 leads to inhibition of both NEP and ECE in humans. Simultaneous augmentation of ANP and inhibition of ET-1 production is of potential therapeutic benefit in cardiovascular disease.
Non-linear Real Arithmetic Benchmarks derived from Automated Reasoning in Economics
We consider problems originating in economics that may be solved
automatically using mathematical software. We present and make freely available
a new benchmark set of such problems. The problems have been shown to fall
within the framework of non-linear real arithmetic, and so are in theory
soluble via Quantifier Elimination (QE) technology as usually implemented in
computer algebra systems. Further, they can all be phrased in prenex normal
form with only existential quantifiers, and so are also admissible to those
Satisfiability Modulo Theories (SMT) solvers that support the QF_NRA logic. There is a
great body of work considering QE and SMT application in science and
engineering, but we demonstrate here that there is potential for this
technology also in the social sciences.
Comment: To appear in Proc. SC-Square 2018. Dataset described is hosted by
Zenodo at: https://doi.org/10.5281/zenodo.1226892 . arXiv admin note:
substantial text overlap with arXiv:1804.1003
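The purely existential form has a practical consequence: a single satisfying assignment settles such a sentence. Purely as an illustration (a made-up constraint system, not one of the benchmark problems), a witness to an existentially quantified nonlinear real arithmetic sentence can be verified by direct evaluation:

```python
def satisfies(x: float, y: float) -> bool:
    """Check a candidate witness for the (hypothetical) QF_NRA-style
    sentence:  exists x, y .  x**2 + y**2 < 1  and  x*y > 1/5."""
    return x ** 2 + y ** 2 < 1 and x * y > 0.2

# Exhibiting one witness proves the existential sentence satisfiable;
# an SMT solver supporting QF_NRA automates the search for such witnesses,
# while QE in a computer algebra system can additionally describe the whole
# solution set.
assert satisfies(0.5, 0.5)
```

This is why benchmarks phrased in prenex existential form suit both technologies: SMT solvers need only find a model, while QE procedures return a quantifier-free equivalent.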