
    Computing stable models by program transformation

    In analogy to the Davis-Putnam procedure we develop a new procedure for computing stable models of propositional normal disjunctive logic programs, using case analysis and simplification. Our procedure enumerates all stable models without repetition and without the need for a minimality check. Since it is not necessary to store the set of stable models explicitly, the procedure runs in polynomial space. We allow clauses with empty heads, in order to represent truth or falsity of a proposition as a one-literal clause. In particular, a clause of the form ∼A → expresses that A is constrained to be true, without providing a justification for A. Adding this clause to a program restricts its stable models to those containing A, without introducing new stable models. Together with A → this provides the basis for case analysis. We present our procedure as a set of rules which transform a program into a set of solved forms, resembling the standard method for presenting unification algorithms. The rules are sound in the sense that they preserve the set of stable models. A subset of the rules is shown to be complete in the sense that a solved form can be obtained for each stable model. The method allows for a concise presentation, flexible choice of a control strategy, and simple correctness proofs.
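To make the objects of this abstract concrete, here is a minimal brute-force stable-model finder using the Gelfond-Lifschitz reduct. This is only an illustration of what the paper's procedure computes: the transformation-based procedure itself avoids enumerating candidate sets and never stores all models, whereas this sketch does both. The rule encoding `(head, pos_body, neg_body)` is our own, with `None` as head for an integrity constraint such as ∼A →.

```python
from itertools import chain, combinations

# A rule is (head, pos_body, neg_body); head None encodes an empty head
# (an integrity constraint).  Brute force only -- the paper's procedure
# enumerates stable models without candidate-set enumeration.

def least_model(definite_rules):
    """Forward chaining: least model of a definite (negation-free) program."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if head is not None and head not in model and pos <= model:
                model.add(head)
                changed = True
    return model

def is_stable(program, candidate):
    # Gelfond-Lifschitz reduct: drop rules whose negative body meets the
    # candidate, strip negative literals from the rest.
    reduct = [(h, pos) for h, pos, neg in program if not (neg & candidate)]
    # A surviving constraint (empty head) must not fire under the candidate.
    if any(h is None and pos <= candidate for h, pos in reduct):
        return False
    return least_model(reduct) == candidate

def stable_models(program, atoms):
    subsets = chain.from_iterable(combinations(atoms, r) for r in range(len(atoms) + 1))
    return [set(s) for s in subsets if is_stable(program, set(s))]

# p :- not q.  q :- not p.  -> two stable models, {p} and {q};
# adding the constraint ":- not p" (i.e. the clause ~p ->) keeps only {p},
# exactly the restriction described in the abstract.
prog = [("p", set(), {"q"}), ("q", set(), {"p"})]
print(stable_models(prog, ["p", "q"]))                           # [{'p'}, {'q'}]
print(stable_models(prog + [(None, set(), {"p"})], ["p", "q"]))  # [{'p'}]
```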

    New filter technique improves home television reception

    Program studies and designs combline filters and analyzes their effectiveness in improving TV quality. Signal tracking methods are improved. A combline phase-lock loop provides a significant sensitivity improvement both above and below threshold.

    A Cylindrical, Inner Volume Selecting 2D-T2-Prep Improves GRAPPA-Accelerated Image Quality in MRA of the Right Coronary Artery.

    Two-dimensional (2D) spatially selective radiofrequency (RF) pulses may be used to excite restricted volumes. By incorporating a "pencil beam" 2D pulse into a T2-Prep, one may create a "2D-T2-Prep" that combines T2-weighting with an intrinsic outer volume suppression. This may particularly benefit parallel imaging techniques, where artefacts typically originate from residual foldover signal. By suppressing foldover signal with a 2D-T2-Prep, image quality may therefore improve. We present numerical simulations, phantom and in vivo validations to address this hypothesis. A 2D-T2-Prep and a conventional T2-Prep were used with GRAPPA-accelerated MRI (R = 1–6). The techniques were first compared in numerical phantoms, where per-pixel maps of SNR (SNRmulti), noise, and g-factor were predicted for idealized sequences. Physical phantoms, with compartments doped to mimic blood, myocardium, fat, and coronary vasculature, were scanned with both T2-Preparation techniques to determine the actual SNRmulti and vessel sharpness. For in vivo experiments, the right coronary artery (RCA) was imaged in 10 healthy adults, using accelerations of R = 1, 3, and 6, and vessel sharpness was measured for each. In both simulations and phantom experiments, the 2D-T2-Prep improved SNR relative to the conventional T2-Prep, by an amount that depended on both the acceleration factor and the degree of outer volume suppression. For in vivo images of the RCA, vessel sharpness improved most at higher acceleration factors, demonstrating that the 2D-T2-Prep especially benefits accelerated coronary MRA. Suppressing outer volume signal with a 2D-T2-Prep improves image quality particularly well in GRAPPA-accelerated acquisitions in simulations, phantoms, and volunteers, demonstrating that it should be considered when performing accelerated coronary MRA.
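The SNR benefit described here can be sketched with the standard parallel-imaging SNR relation, SNR_acc = SNR_full / (g·√R): acceleration costs a √R factor plus a coil geometry factor g ≥ 1, and suppressing foldover signal lowers the effective g. This is the generic relation, not the paper's simulation pipeline, and the numbers below are hypothetical.

```python
import math

# Standard parallel-imaging SNR penalty: accelerating by R and unfolding
# with geometry factor g gives SNR_acc = SNR_full / (g * sqrt(R)).
# Outer-volume suppression (the 2D-T2-Prep) reduces the effective g by
# removing the foldover signal the unfolding must separate.
def accelerated_snr(snr_full, r, g):
    return snr_full / (g * math.sqrt(r))

# Hypothetical numbers: same baseline SNR at R = 6, but suppression
# lowers the effective g from 1.4 to 1.1.
conventional = accelerated_snr(100.0, 6, 1.4)
suppressed = accelerated_snr(100.0, 6, 1.1)
print(round(conventional, 1), round(suppressed, 1))  # 29.2 37.1
```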

    Ecological neighborhoods as a framework for umbrella species selection

    Umbrella species are typically chosen because they are expected to confer protection for other species assumed to have similar ecological requirements. Despite its popularity and substantial history, the value of the umbrella species concept has come into question because umbrella species chosen using heuristic methods, such as body or home range size, are not acting as adequate proxies for the metrics of interest: species richness or population abundance in a multi-species community for which protection is sought. How species associate with habitat across ecological scales has important implications for understanding population size and species richness, and therefore may be a better proxy for choosing an umbrella species. We determined the spatial scales of ecological neighborhoods important for predicting abundance of 8 potential umbrella species breeding in Nebraska using Bayesian latent indicator scale selection in N-mixture models accounting for imperfect detection. We compare the conservation value measured as collective avian abundance under different umbrella species selected following commonly used criteria and selected based on identifying spatial land cover characteristics within ecological neighborhoods that maximize collective abundance. Using traditional criteria to select an umbrella species resulted in sub-maximal expected collective abundance in 86% of cases compared to selecting an umbrella species based on land cover characteristics that maximized collective abundance directly. We conclude that directly assessing the expected quantitative outcomes, rather than ecological proxies, is likely the most efficient method to maximize the potential for conservation success under the umbrella species concept

    An exact test to detect geographic aggregations of events

    Background: Traditional approaches to statistical disease cluster detection focus on the identification of geographic areas with high numbers of incident or prevalent cases of disease. Events related to disease may be more appropriate for analysis than disease cases in some contexts. Multiple events related to disease may be possible for each disease case, and the repeated nature of events needs to be incorporated in cluster detection tests. Results: We provide a new approach for the detection of aggregations of events by testing individual administrative areas that may be combined with their nearest neighbours. This approach is based on the exact probabilities for the numbers of events in a tested geographic area. The test is analogous to the cluster detection test given by Besag and Newell and does not require the distributional assumptions of a similar test proposed by Rosychuk et al. Our method incorporates diverse population sizes and population distributions that can differ by important strata. Monte Carlo simulations help assess the overall number of clusters identified. The population and events for each area, as well as a nearest neighbour spatial relationship, are required. We also provide an alternative test applicable to situations when only the aggregate number of events, and not the number of events per individual, is known. The methodology is illustrated on administrative data of presentations to emergency departments. Conclusions: We provide a new method for the detection of aggregations of events that does not rely on distributional assumptions and performs well.
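As a much-simplified sketch of the exact-tail-probability idea (not the paper's test, which handles repeated events, strata, and nearest-neighbour aggregation): under a null model in which event counts in an area are Poisson with mean proportional to its population, the exact p-value for an observed count is the upper tail of that Poisson distribution.

```python
import math

# Exact upper-tail p-value for an observed event count under a Poisson
# null with expectation mu (here mu = population * overall event rate).
# A toy analogue of exact-probability cluster tests; the paper's method
# additionally aggregates areas with their nearest neighbours.
def poisson_upper_tail(observed, mu):
    # P(X >= observed) = 1 - P(X <= observed - 1)
    cdf = sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(observed))
    return 1.0 - cdf

# Hypothetical area: population 1000, overall rate 0.005 -> mu = 5.
# Observing 12 events would be surprising under the null.
p = poisson_upper_tail(12, 1000 * 0.005)
print(round(p, 4))  # 0.0055
```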

    Extracting mathematical semantics from LaTeX documents

    We report on a project to use SGLR parsing and term rewriting with ELAN4 to extract the semantics of mathematical formulas from a LaTeX document and represent them in MathML. The LaTeX document we used is part of the Digital Library of Mathematical Functions (DLMF) project of the US National Institute of Standards and Technology (NIST) and obeys project-specific conventions, which include macros for mathematical constructions, among them 200 predefined macros for special functions, the subject matter of the project. The SGLR parser can parse general context-free languages, which suffices to extract the structure of mathematical formulas from calculus that are written in the usual mathematical style, with most parentheses and multiplication signs omitted. The parse tree is then rewritten into a more concise and uniform internal syntax that is used as the base for extracting MathML or other semantical information.
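The final step of the pipeline, rewriting a uniform internal tree into MathML, can be illustrated with a toy translator. The actual project uses SGLR parsing and ELAN4 term rewriting; the tuple-based AST here (`("op", args...)` for applications, strings for identifiers, numbers for constants) is purely our own stand-in for the internal syntax.

```python
# Toy rewrite of an internal expression tree into Content MathML.
# Applications are tuples ("op", arg1, ...); identifiers are strings;
# numeric literals become <cn> elements.  Illustration only -- the real
# project derives the tree via SGLR parsing and ELAN4 rewriting.
def to_mathml(node):
    if isinstance(node, str):
        return f"<ci>{node}</ci>"
    if isinstance(node, (int, float)):
        return f"<cn>{node}</cn>"
    op, *args = node
    inner = "".join(to_mathml(a) for a in args)
    return f"<apply><{op}/>{inner}</apply>"

# sin(x) + 1, with the application structure already made explicit
tree = ("plus", ("sin", "x"), 1)
print(to_mathml(tree))
# <apply><plus/><apply><sin/><ci>x</ci></apply><cn>1</cn></apply>
```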

    Estimating the Use of Public Lands: Integrated Modeling of Open Populations with Convolution Likelihood Ecological Abundance Regression

    We present an integrated open population model where the population dynamics are defined by a differential equation, and the related statistical model utilizes a Poisson binomial convolution likelihood. Key advantages of the proposed approach over existing open population models include the flexibility to predict related, but unobserved quantities such as total immigration or emigration over a specified time period, and more computationally efficient posterior simulation by elimination of the need to explicitly simulate latent immigration and emigration. The viability of the proposed method is shown in an in-depth analysis of outdoor recreation participation on public lands, where the surveyed populations changed rapidly and demographic population closure cannot be assumed even within a single day
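The convolution likelihood idea can be sketched as follows (our notation, not the paper's full model): if the count at time t is the sum of survivors from the previous count, Binomial(n_prev, φ), and new arrivals, Poisson(γ), then the probability of observing n individuals is the convolution of those two distributions, with no need to simulate the latent arrival and departure counts.

```python
import math

# Binomial-Poisson convolution: P(N_t = n | N_{t-1} = n_prev) where
# survivors ~ Binomial(n_prev, phi) and arrivals ~ Poisson(gamma).
# Summing over the latent number of survivors k marginalizes it out,
# which is what removes the need to simulate latent movement explicitly.
def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def pois_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

def convolution_pmf(n, n_prev, phi, gamma):
    return sum(binom_pmf(k, n_prev, phi) * pois_pmf(n - k, gamma)
               for k in range(min(n, n_prev) + 1))

# Hypothetical site: 5 individuals yesterday, daily survival 0.8,
# arrival rate 1.0 -> probability of counting exactly 4 today.
print(round(convolution_pmf(4, 5, 0.8, 1.0), 4))  # 0.2358
```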

    A Bayesian method for assessing multi-scale species-habitat relationships

    Context: Scientists face several theoretical and methodological challenges in appropriately describing fundamental wildlife-habitat relationships in models. The spatial scales of habitat relationships are often unknown and are expected to follow a multi-scale hierarchy. Typical frequentist or information-theoretic approaches often suffer under collinearity in multi-scale studies, fail to converge when models are complex, or represent an intractable computational burden when candidate model sets are large. Objectives: Our objective was to implement an automated, Bayesian method for inference on the spatial scales of habitat variables that best predict animal abundance. Methods: We introduce Bayesian latent indicator scale selection (BLISS), a Bayesian method to select spatial scales of predictors using latent scale indicator variables that are estimated with reversible-jump Markov chain Monte Carlo sampling. BLISS does not suffer from collinearity and substantially reduces computation time. We present a simulation study to validate our method and apply it to a case study of land cover predictors for ring-necked pheasant (Phasianus colchicus) abundance in Nebraska, USA. Results: Our method returns accurate descriptions of the explanatory power of multiple spatial scales, and unbiased and precise parameter estimates, under commonly encountered data limitations including spatial scale autocorrelation, effect size, and sample size. BLISS outperforms commonly used model selection methods, including stepwise selection and AIC, and reduces runtime by 90%. Conclusions: Given the pervasiveness of scale dependency in ecology, and the implications of mismatches between the scales of analyses and ecological processes, identifying the spatial scales over which species integrate habitat information is an important step in understanding species-habitat relationships. BLISS is a widely applicable method for identifying important spatial scales, propagating scale uncertainty, and testing hypotheses of scaling relationships.
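The latent-indicator idea can be illustrated with a heavily simplified toy (our own construction, not the published sampler, which uses reversible-jump MCMC inside an N-mixture model): each candidate spatial scale contributes one version of the covariate, a discrete indicator selects among them, and with a uniform prior the indicator's full conditional puts probability on each scale proportional to that scale's likelihood.

```python
import math
import random

# Toy full conditional for a discrete scale indicator in a Gaussian
# regression: P(scale = s | data, beta, sigma) is proportional to the
# likelihood of the data using scale s's covariate.  Names and setup are
# hypothetical; BLISS itself operates within N-mixture abundance models.
def scale_indicator_probs(y, covariates_by_scale, beta, sigma):
    def loglik(x):
        return sum(-0.5 * ((yi - beta * xi) / sigma) ** 2 for yi, xi in zip(y, x))
    logliks = {s: loglik(x) for s, x in covariates_by_scale.items()}
    m = max(logliks.values())  # subtract the max for numerical stability
    weights = {s: math.exp(v - m) for s, v in logliks.items()}
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}

# Simulate data truly generated at a (hypothetical) 400 m scale; the
# indicator should then concentrate on "400m" rather than "1600m".
random.seed(1)
x400 = [random.gauss(0, 1) for _ in range(50)]
x1600 = [random.gauss(0, 1) for _ in range(50)]
y = [2.0 * xi + random.gauss(0, 0.5) for xi in x400]
probs = scale_indicator_probs(y, {"400m": x400, "1600m": x1600}, beta=2.0, sigma=0.5)
print(max(probs, key=probs.get))  # 400m
```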