
    Bayesian Age-Period-Cohort Model of Lung Cancer Mortality

    Background: The objective of this study was to analyze the time trend of lung cancer mortality in the population of the USA in five-year intervals, based on the most recent available data, namely to 2010. Knowledge of temporal trends in mortality rates is necessary to understand the cancer burden. Methods: A Bayesian Age-Period-Cohort model was fitted using Poisson regression with a histogram smoothing prior to decompose mortality rates by age at death, period at death, and birth cohort. Results: Mortality rates from lung cancer increased more rapidly from age 52 years, reaching an average of roughly 325 deaths annually at age 82. The mortality of younger cohorts was lower than that of older cohorts, and the risk of lung cancer declined from the 1993 period to recent periods. Conclusions: The fitted Bayesian Age-Period-Cohort model with a histogram smoothing prior is capable of explaining the mortality rate of lung cancer. The reduction of carcinogens in cigarettes and the increase in smoking cessation from around 1960 might have led to the decreasing trend in lung cancer mortality after the 1993 calendar period.
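
    The decomposition this abstract describes can be illustrated with a toy model. The sketch below uses made-up numbers for illustration only (it is not the paper's model or data): it builds a Poisson age-period-cohort table using the identity cohort = period − age, and uses a second-difference penalty as a simple stand-in for the histogram smoothing prior.

```python
import numpy as np

# Hypothetical illustration of the age-period-cohort (APC) decomposition:
# mortality rate lambda(a, p) = exp(alpha_a + beta_p + gamma_c), where the
# birth-cohort index is c = p - a (cohort = period minus age).
rng = np.random.default_rng(0)

n_age, n_period = 5, 4
alpha = np.linspace(-8.0, -5.0, n_age)     # log-rate rises with age
beta = np.linspace(0.2, -0.1, n_period)    # risk falls in recent periods
n_cohort = n_age + n_period - 1
gamma = np.linspace(0.3, -0.3, n_cohort)   # younger cohorts at lower risk

ages = np.arange(n_age)[:, None]
periods = np.arange(n_period)[None, :]
cohorts = periods - ages + (n_age - 1)     # shifted so indices start at 0

log_rate = alpha[:, None] + beta[None, :] + gamma[cohorts]
exposure = 1e5                             # person-years per table cell
deaths = rng.poisson(exposure * np.exp(log_rate))

# A second-difference (random-walk) penalty on each effect is the simplest
# analogue of the smoothing prior: it is zero for perfectly linear effects.
def smoothness_penalty(effect, tau=1.0):
    return 0.5 * tau * np.sum(np.diff(effect, n=2) ** 2)
```

    Fitting the real model would maximize the Poisson likelihood of `deaths` plus the smoothing penalties, rather than fixing the effects by hand as done here.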

    An iterative approach to precondition inference using constrained Horn clauses

    We present a method for automatic inference of conditions on the initial states of a program that guarantee that the safety assertions in the program are not violated. Constrained Horn clauses (CHCs) are used to model the program and assertions in a uniform way, and we use standard abstract interpretations to derive an over-approximation of the set of unsafe initial states. The precondition is then the constraint corresponding to the complement of that set, under-approximating the set of safe initial states. This idea of complementation is not new, but previous attempts to exploit it have suffered from a loss of precision. Here we develop an iterative specialisation algorithm that gives more precise, and in some cases optimal, safety conditions. The algorithm combines existing transformations, namely constraint specialisation, partial evaluation, and a trace elimination transformation. The last two of these perform polyvariant specialisation, leading to disjunctive constraints which improve precision. The algorithm is implemented and tested on a benchmark suite of programs drawn from the literature on precondition inference and from software verification competitions. Comment: Paper presented at the 34th International Conference on Logic Programming (ICLP 2018), Oxford, UK, July 14 to July 17, 2018. 18 pages, LaTeX.
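
    The complementation idea can be shown on a toy example. The sketch below is a hypothetical illustration, not the paper's algorithm: for the one-variable program `y = x - 1; assert y >= 0`, an over-approximation of the unsafe initial states is complemented to obtain a sound, but possibly non-optimal, precondition.

```python
# Toy illustration of precondition inference by complementing an
# over-approximation of the unsafe initial states (interval domain).
# Program: y = x - 1; assert y >= 0.  The exact safe set is x >= 1.

def unsafe_over_approx():
    # Backward reasoning from the violated assertion y < 0 gives
    # x - 1 < 0, i.e. x < 1.  Suppose abstraction coarsens this to x <= 1.
    return ("-inf", 1, "closed")   # over-approximation of unsafe states

def precondition(unsafe):
    # The complement of the over-approximation under-approximates the
    # safe states: any initial state satisfying it is provably safe.
    lo, hi, kind = unsafe
    assert lo == "-inf"
    return (hi, "inf", "open")     # x > 1: sound, but misses the safe x = 1
```

    The iterative specialisation described in the abstract is what would tighten the over-approximation here so that the complement recovers the optimal precondition x >= 1.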

    Architecture of the Andromeda galaxy : a quantitative analysis of clustering in the inner stellar halo

    We present a quantitative measurement of the amount of clustering present in the inner ∼30 kpc of the stellar halo of the Andromeda galaxy (M31). For this we analyse the angular positions and radial velocities of carefully selected planetary nebulae in the M31 stellar halo. We study the cumulative distribution of pairwise distances in angular position and line-of-sight velocity space, and find that the M31 stellar halo contains substantially more stars in the form of close pairs than a featureless smooth halo would. In comparison to a smoothed/scrambled distribution, we estimate that the clustering excess in the M31 inner halo is roughly 40 per cent at maximum and ∼20 per cent on average. Importantly, comparing against the 11 stellar halo models of Bullock & Johnston, which were simulated within the context of the ΛCDM (Λ cold dark matter) cosmological paradigm, we find that the amount of substructure in the M31 stellar halo closely resembles that of a typical ΛCDM halo.
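
    The pairwise-distance test can be sketched in a few lines. The toy data below (a smooth background plus one artificial clump; nothing here comes from the paper) illustrates how a scrambled sample erases position-velocity correlations while preserving both one-dimensional distributions, so that an excess of close pairs signals substructure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "halo": smooth background plus one tight clump of co-moving tracers.
# Each row is (position, velocity) in arbitrary units.
background = rng.uniform(0, 100, size=(80, 2))
clump = rng.normal(loc=(50.0, 30.0), scale=0.5, size=(20, 2))
sample = np.vstack([background, clump])

def close_pair_count(points, threshold=2.0):
    # Count pairs closer than `threshold` in the combined (position,
    # velocity) space.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    iu = np.triu_indices(len(points), k=1)
    return int(np.sum(d[iu] < threshold))

observed = close_pair_count(sample)

# Scrambling: permute velocities against positions, which destroys the
# substructure while preserving each marginal distribution.
scrambled = sample.copy()
perm = rng.permutation(len(scrambled))
scrambled[:, 1] = scrambled[perm, 1]
smooth = close_pair_count(scrambled)

excess = observed / max(smooth, 1) - 1.0   # fractional clustering excess
```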

    Interpolant tree automata and their application in Horn clause verification

    This paper investigates the combination of abstract interpretation over the domain of convex polyhedra with interpolant tree automata, in an abstraction-refinement scheme for Horn clause verification. These techniques have been applied separately before, but are combined in a new way in this paper. The role of an interpolant tree automaton is to provide a generalisation of a spurious counterexample during refinement, capturing a possibly infinite set of spurious counterexample traces. In our approach these traces are then eliminated using a transformation of the Horn clauses. We compare this approach with two other methods: one uses interpolant tree automata in an algorithm for trace abstraction and refinement, while the other uses abstract interpretation over the domain of convex polyhedra without the generalisation step. Experiments on a number of Horn clause verification problems indicate that combining interpolant tree automata with abstract interpretation gives some increase in the power of the verification tool, while sometimes incurring a performance overhead. Comment: In Proceedings VPT 2016, arXiv:1607.0183

    Solving non-linear Horn clauses using a linear Horn clause solver

    In this paper we show that checking satisfiability of a set of non-linear Horn clauses (also called a non-linear Horn clause program) can be achieved using a solver for linear Horn clauses. We achieve this by interleaving a program transformation with a satisfiability checker for linear Horn clauses (also called a solver for linear Horn clauses). The program transformation is based on the notion of tree dimension, which we apply to a set of non-linear clauses, yielding a set whose derivation trees have bounded dimension. Such a set of clauses can be linearised. The main algorithm then proceeds by applying the linearisation transformation and the linear Horn clause solver to a sequence of sets of clauses with successively increasing dimension bounds. The approach is further developed by using a solution of clauses of lower dimension to (partially) linearise clauses of higher dimension. We constructed a prototype implementation of this approach and performed experiments on a set of verification problems, with promising results. Comment: In Proceedings HCVS2016, arXiv:1607.0403
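
    The tree dimension referred to here is, in essence, the Strahler number of a derivation tree: a leaf has dimension 0, and an internal node has the maximum dimension of its children, plus one if that maximum is attained by at least two children. A minimal sketch, assuming trees are represented as nested tuples `(label, child, ...)`:

```python
# Tree dimension (Strahler number) of a derivation tree represented as
# nested tuples: ("label", child1, child2, ...).  Dimension-0 trees are
# exactly the linear (list-like) derivations that need no linearisation.

def dimension(tree):
    children = tree[1:] if isinstance(tree, tuple) else []
    if not children:
        return 0
    dims = sorted((dimension(c) for c in children), reverse=True)
    # +1 only when the two largest child dimensions coincide.
    return dims[0] + 1 if len(dims) > 1 and dims[0] == dims[1] else dims[0]

# A linear derivation has dimension 0 ...
assert dimension(("a", ("b", ("c",)))) == 0
# ... while a complete binary derivation tree of height 2 has dimension 2.
assert dimension(("a", ("b", ("c",), ("c",)), ("b", ("c",), ("c",)))) == 2
```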

    Theoretical analysis of a single and double reflection atom interferometer in a weakly-confining magnetic trap

    The operation of a BEC-based atom interferometer, where the atoms are held in a weakly-confining magnetic trap and manipulated with counter-propagating laser beams, is analyzed. A simple analytic model is developed to describe the dynamics of the interferometer. It is used to find the regions of parameter space with high and low contrast of the interference fringes for both single and double reflection interferometers. We demonstrate that for a double reflection interferometer the coherence time can be increased by shifting the recombination time. The theory is compared with recent experimental realizations of these interferometers. Comment: 25 pages, 6 figures
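
    A standard two-path fringe model helps fix the notion of contrast used here. The sketch below is a generic textbook relation, not the paper's analytic model: the normalized output population is P(φ) = (1 + C cos φ)/2, and the contrast C is recovered as (Pmax − Pmin)/(Pmax + Pmin).

```python
import numpy as np

# Generic two-path interferometer fringe: the population in one output
# port oscillates with the accumulated phase phi, with fringe contrast C.
def output_population(phi, contrast=0.8):
    return 0.5 * (1.0 + contrast * np.cos(phi))

phi = np.linspace(0, 2 * np.pi, 1001)
p = output_population(phi)

# Contrast read off from the fringe extrema, as done with measured fringes.
measured_contrast = (p.max() - p.min()) / (p.max() + p.min())
```

    In the regimes of low contrast mapped out in the paper, dephasing suppresses C toward zero while the mean population stays at 1/2.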

    The need for speed : escape velocity and dynamical mass measurements of the Andromeda galaxy

    Our nearest large cosmological neighbour, the Andromeda galaxy (M31), is a dynamical system, and an accurate measurement of its total mass is central to our understanding of its assembly history, the life-cycles of its satellite galaxies, and its role in shaping the Local Group environment. Here, we apply a novel approach to determine the dynamical mass of M31 using high-velocity planetary nebulae (PNe), establishing a hierarchical Bayesian model united with a scheme to capture potential outliers and marginalize over the tracers' unknown distances. With this, we derive the escape velocity run of M31 as a function of galacto-centric distance, with both parametric and non-parametric approaches. We determine the escape velocity of M31 to be 470 ± 40 km s⁻¹ at a galacto-centric distance of 15 kpc, and also derive the total potential of M31, estimating the virial mass and radius of the galaxy to be 0.8 ± 0.1 × 10¹² M⊙ and 240 ± 10 kpc, respectively. Our M31 mass is on the low side of the measured range; this supports the lower expected mass of the M31-Milky Way system from the timing and momentum arguments, satisfies the H i constraint on the circular velocity between 10 ≲ R/kpc < 35, and agrees with the stellar-mass Tully-Fisher relation. To place these results in a broader context, we compare them to the key predictions of the ΛCDM cosmological paradigm, including the stellar-mass-halo-mass relation and the dark matter halo concentration-virial mass correlation, and find M31 to be an outlier to this relation.
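
    The quoted numbers can be sanity-checked with a point-mass approximation (our simplification, not the paper's hierarchical model): since v_esc(r) = √(2|Φ(r)|) and |Φ| = GM/r for a point mass, the mass enclosed within r follows from M = v_esc² r / (2G).

```python
# Back-of-the-envelope check of the abstract's numbers using a point-mass
# potential: v_esc = sqrt(2 * G * M / r)  =>  M = v_esc**2 * r / (2 * G).

G = 4.30091e-6    # gravitational constant in kpc * (km/s)^2 / Msun
v_esc = 470.0     # km/s, the quoted escape velocity at r = 15 kpc
r = 15.0          # kpc

mass_enclosed = v_esc**2 * r / (2.0 * G)   # in Msun
# This gives roughly 4e11 Msun inside 15 kpc, consistent in order of
# magnitude with the quoted virial mass of 0.8e12 Msun, since most of the
# halo mass lies beyond 15 kpc.
```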

    Jeans that fit : weighing the mass of the Milky Way analogues in the ΛCDM universe

    The spherical Jeans equation is a widely used tool for the dynamical study of gravitating systems in astronomy. Here, we test its efficacy in robustly weighing the mass of Milky Way analogues, given that they need not be in equilibrium or even spherical. Utilizing Milky Way stellar haloes simulated in accordance with Λ cold dark matter (ΛCDM) cosmology by Bullock and Johnston, and analysing them under the Jeans formalism, we recover the underlying mass distribution of the parent galaxy, within distance r/kpc ∈ [10, 100], with a bias of ∼12 per cent and a dispersion of ∼14 per cent. Additionally, the mass profiles of triaxial dark matter haloes taken from the SURFS simulation, within scaled radius 0.2 < r/r_max < 3, are measured with a bias of ∼−2.4 per cent and a dispersion of ∼10 per cent. The obtained dispersion is not due to Poisson noise from small particle numbers, as it is twice the latter. We interpret the dispersion as arising from the inherent nature of the ΛCDM haloes, for example their being aspherical and out of equilibrium. Hence, the dispersion obtained for stellar haloes sets a limit of about 12 per cent (after adjusting for random uncertainty) on the accuracy with which the mass profiles of Milky Way-like galaxies can be reconstructed using the spherical Jeans equation. This limit is independent of the quantity and quality of the observational data. The reason for the non-zero bias is not clear; hence its interpretation is not obvious at this stage.
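
    The estimator being tested can be sketched under strong simplifying assumptions (a hypothetical toy case, not the paper's analysis): for an isotropic tracer population (β = 0) with constant dispersion σ and power-law density ν ∝ r^(−γ), the spherical Jeans equation reduces to M(<r) = γ σ² r / G.

```python
# Spherical Jeans mass estimator in its simplest closed form.  General form:
#   M(<r) = -(sigma_r**2 * r / G) * (dln nu/dln r + dln sigma_r**2/dln r + 2*beta)
# which, for constant sigma and nu ~ r**-gamma, reduces to
#   M(<r) = (gamma - 2*beta) * sigma**2 * r / G.

G = 4.30091e-6   # gravitational constant in kpc * (km/s)^2 / Msun

def jeans_mass(r_kpc, sigma_kms, gamma=3.0, beta=0.0):
    # gamma: tracer density power-law slope; beta: velocity anisotropy.
    return (gamma - 2.0 * beta) * sigma_kms**2 * r_kpc / G

# e.g. halo stars with sigma ~ 100 km/s at 50 kpc and gamma = 3 imply
# an enclosed mass of a few times 1e11 Msun.
m = jeans_mass(50.0, 100.0)
```

    The paper's point is that even with perfect data this kind of estimator carries a ∼12 per cent floor on accuracy, because real ΛCDM haloes violate the sphericity and equilibrium assumptions baked into the formula.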