Increasing the Numeric Expressiveness of the Planning Domain Definition Language
The technology of artificial intelligence (AI) planning is being adopted across many different disciplines. This has led to wider use of the Planning Domain Definition Language (PDDL), which is being used to model planning problems of different natures. One area where AI planning is particularly attractive is engineering, where the optimisation problems are mathematically rich. The example used throughout this paper is the optimisation (minimisation) of machine tool measurement uncertainty. This planning problem highlights the limits of PDDL's numerical expressiveness in the absence of a square root function. A workaround using the Babylonian algorithm is then evaluated, before an extension of PDDL to include more mathematical functions is discussed.
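The Babylonian algorithm mentioned above approximates a square root by repeatedly averaging a guess with the quotient of the input and that guess. A minimal sketch in Python (the iteration count and initial guess here are illustrative assumptions, not taken from the paper):

```python
def babylonian_sqrt(s: float, iterations: int = 20) -> float:
    """Approximate sqrt(s) via the Babylonian iteration x <- (x + s/x) / 2."""
    if s < 0:
        raise ValueError("s must be non-negative")
    if s == 0.0:
        return 0.0
    x = s / 2.0 if s > 1 else 1.0  # crude initial guess
    for _ in range(iterations):
        x = 0.5 * (x + s / x)  # average the guess with s divided by the guess
    return x
```

Because the iteration is quadratically convergent, a small fixed iteration count suffices, which is why the method is attractive as a workaround inside a language that only offers basic arithmetic.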
NTFS Permissions Explorer
Administering and monitoring NTFS permissions can be a cumbersome and convoluted task. In today's data-rich world there has never been a more important time to ensure that your data is secured against unwanted access. This software-based solution has been produced to aid users' understanding of the currently implemented permissions and to identify possible problems.
Quintessence reconstructed: new constraints and tracker viability
We update and extend our previous work reconstructing the potential of a quintessence field from current observational data. We extend the cosmological data set to include new supernova data, plus information from the cosmic microwave background and from baryon acoustic oscillations. We extend the modeling by considering Padé approximant expansions as well as Taylor series, and by using observations to assess the viability of the tracker hypothesis. We find that parameter constraints have improved by a factor of 2, with a strengthening of the preference for the cosmological constant over evolving quintessence models. Present data show some signs, though inconclusive, of favoring tracker models over nontracker models under our assumptions.
Representing the Process of Machine Tool Calibration in First-order Logic
Machine tool calibration requires a wide range of measurement techniques that can be carried out in many different sequences. Planning a machine tool calibration is typically performed by a subject expert with a deep understanding of international standards and industrial best-practice guides. However, it is often the case that the planned sequence of measurements is not optimal. Therefore, in an attempt to improve the process, intelligent computing methods can be designed for plan suggestion. As a starting point, this paper presents a way of converting expert knowledge into first-order logic that can be expressed in the PROLOG language. It then shows how queries can be executed against the logic to construct a knowledge base of all the different measurements that can be performed during machine tool calibration.
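The paper's encoding is in PROLOG; purely as a rough illustration of the idea of a queryable fact base, a Python sketch might look like the following. The predicate names and facts below are invented for this sketch, not taken from the paper:

```python
# Hypothetical knowledge base: each fact is a tuple (predicate, args...).
facts = {
    ("instrument", "laser_interferometer"),
    ("instrument", "electronic_level"),
    ("measures", "laser_interferometer", "linear_positioning_error"),
    ("measures", "electronic_level", "angular_error"),
}

def query(pattern):
    """Return all facts matching a pattern; None acts as a logic variable."""
    return [f for f in facts
            if len(f) == len(pattern)
            and all(p is None or p == v for p, v in zip(pattern, f))]
```

A query such as `query(("measures", None, "angular_error"))` plays the role of a PROLOG goal with an unbound variable, returning every fact that unifies with the pattern.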
Constraining the dark fluid
Cosmological observations are normally fit under the assumption that the dark sector can be decomposed into dark matter and dark energy components. However, as long as the probes remain purely gravitational, there is no unique decomposition and observations can only constrain a single dark fluid; this is known as the dark degeneracy. We use observations to directly constrain this dark fluid in a model-independent way, demonstrating in particular that the data cannot be fit by a dark fluid with a single constant equation of state. Parameterizing the dark fluid equation of state by a variety of polynomials in the scale factor, we use current kinematical data to constrain the parameters. While the simplest interpretation of the dark fluid remains that it is comprised of separate dark matter and cosmological constant contributions, our results cover other model types including unified dark energy/matter scenarios.

Comment: 5 pages, 5 figures incorporated. Updated to new observational data including SHOES determination of H0; new citations added.
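A polynomial parameterization of the equation of state of the kind described above can be written schematically as follows; the expansion order and coefficient labels here are illustrative, not the paper's exact choice:

```latex
w(a) = \sum_{i=0}^{N} w_i \, a^i ,
```

where $a$ is the scale factor (with $a = 1$ today) and the constants $w_i$ are the parameters constrained by the kinematical data.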
Unified dark energy and dark matter from a scalar field different from quintessence
We explore unification of dark matter and dark energy in a theory containing a scalar field of non-Lagrangian type, obtained by direct insertion of a kinetic term into the energy-momentum tensor. This scalar is different from quintessence, having an equation of state between -1 and 0 and a zero sound speed in its rest frame. We solve the equations of motion for an exponential potential via a rewriting as an autonomous system, and demonstrate the observational viability of the scenario, for sufficiently small exponential potential parameter λ, by comparison to a compilation of kinematical cosmological data.

Comment: 10 pages RevTeX4 with 5 figures incorporated.
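The exponential potential referred to above is conventionally written as follows; the normalization $\kappa^2 = 8\pi G$ is the standard convention, assumed here rather than quoted from the paper:

```latex
V(\phi) = V_0 \, e^{-\lambda \kappa \phi} , \qquad \kappa^2 \equiv 8\pi G ,
```

where $\lambda$ is the dimensionless slope parameter whose smallness governs the observational viability of the scenario.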
Models for Human Navigation and Optimal Path Planning Using Level Set Methods and Hamilton-Jacobi Equations
We present several models for different physical scenarios which are centered around human movement or optimal path planning, and which use partial differential equations and concepts from control theory. The first model is a game-theoretic model for environmental crime which tracks criminals' movement using the level set method, and improves upon previous continuous models by removing overly restrictive assumptions of symmetry. Next, we design a method for determining optimal hiking paths in mountainous regions using an anisotropic level set equation. After this, we present a model for optimal human navigation under uncertainty which is rooted in dynamic programming and stochastic optimal control theory. Lastly, we consider optimal path planning for simple self-driving cars in the Hamilton-Jacobi formulation. We improve upon previous models which simplify the car to a point mass, and present a reasonably general upwind sweeping scheme to solve the relevant Hamilton-Jacobi equation.
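The level set method underlying several of these models evolves an interface implicitly as the zero level set of a function $\phi$. In its basic form, with a generic normal speed $F$ standing in for whatever the specific model prescribes:

```latex
\phi_t + F(x) \, |\nabla \phi| = 0 ,
```

where the moving front at time $t$ is the set $\{x : \phi(x,t) = 0\}$. This is the standard level set equation; the particular (possibly anisotropic) speed functions used in the paper's models are not reproduced here.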
Reconstructing thawing quintessence with multiple datasets
In this work we model the quintessence potential in a Taylor series expansion, up to second order, around the present-day value of the scalar field. The field is evolved in a thawing regime assuming zero initial velocity. We use the latest data from the Planck satellite, baryon acoustic oscillation observations from the Sloan Digital Sky Survey, and supernova luminosity distance information from Union2.1 to constrain our model's parameters, and also include perturbation growth data from the WiggleZ, BOSS and 6dF surveys. The supernova data provide the strongest individual constraint on the potential parameters. We show that the growth data are competitive with the other datasets in constraining the dark energy parameters we introduce. We also conclude that the combined constraints we obtain for our model parameters, when compared to previous works of nearly a decade ago, show only modest improvement, even with new growth-of-structure data added to previously existing types of data.

Comment: 9 pages, 4 figures and 1 table. Version 2 with minor changes to match the Physical Review D accepted version.
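A second-order Taylor expansion of the potential about the present-day field value, as described above, takes the standard form (the notation $\phi_0$ for the present-day value is an assumption of this sketch):

```latex
V(\phi) \approx V(\phi_0) + V'(\phi_0)\,(\phi - \phi_0)
          + \tfrac{1}{2} V''(\phi_0)\,(\phi - \phi_0)^2 ,
```

so that the model parameters being constrained are, in effect, the value, slope, and curvature of the potential at $\phi_0$.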
Model selection in cosmology
Model selection aims to determine which theoretical models are most plausible given some data, without necessarily considering preferred values of model parameters. A common model selection question is to ask when new data require the introduction of an additional parameter, describing a newly discovered physical effect. We review model selection statistics, then focus on the Bayesian evidence, which implements Bayesian analysis at the level of models rather than parameters. We describe our CosmoNest code, the first computationally efficient implementation of Bayesian model selection in a cosmological context. We apply it to recent WMAP satellite data, examining the need for a perturbation spectral index differing from the scale-invariant (Harrison–Zel'dovich) case.
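The Bayesian evidence referred to above is the likelihood averaged over the prior. For a model $M$ with parameters $\theta$, data $D$, likelihood $\mathcal{L}$, and prior $\pi$:

```latex
Z \equiv P(D \mid M) = \int \mathcal{L}(\theta)\, \pi(\theta)\, d\theta ,
```

and two models are compared via the ratio of their evidences (the Bayes factor), $B_{01} = Z_0 / Z_1$. Evaluating this integral efficiently over cosmological parameter spaces is the computational problem that a code such as CosmoNest addresses.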
The WMAP normalization of inflationary cosmologies
We use the three-year WMAP observations to determine the normalization of the matter power spectrum in inflationary cosmologies. In this context, the quantity of interest is not the normalization marginalized over all parameters, but rather the normalization as a function of the inflationary parameters n and r with marginalization over the remaining cosmological parameters. We compute this normalization and provide an accurate fitting function. The statistical uncertainty in the normalization is 3 percent, roughly half that achieved by COBE. We use the k-l relation for the standard cosmological model to identify the pivot scale for the WMAP normalization. We also quote the inflationary energy scale corresponding to the WMAP normalization.

Comment: 4 pages RevTex4 with two figures.