
    Designing screening protocols for amphibian disease that account for imperfect and variable capture rates of individuals

    The amphibian chytrid fungus, Batrachochytrium dendrobatidis, is one of the main factors in global amphibian decline. Accurate knowledge of its presence and prevalence in an area is needed to trigger conservation actions. However, imperfect capture rates limit the number of individuals caught and tested during field surveys, and contribute to the uncertainty surrounding estimates of prevalence. Screening programs should be planned with the objective of minimizing such uncertainty. We show how this can be achieved by using predictive models that incorporate information about population size and capture rates. Using as a case study an existing screening program for three populations of the yellow-bellied toad (Bombina variegata pachypus) in northern Italy, we sought to quantify the effect of seasonal variation in individual capture rates on the uncertainty surrounding estimates of chytrid prevalence. We obtained estimates of population size and capture rates from mark-recapture data, and found wide seasonal variation in individual recapture rates. We then incorporated this information into a binomial model to predict the estimates of prevalence that would be obtained by sampling at different times in the season, assuming no infected individuals were found. Sampling during the period of maximum capture probability was predicted to decrease the upper 95% credible interval by up to 36% compared with the least suitable periods, with greater gains when using uninformative priors. We evaluated model predictions by comparing them with the results of screening surveys in 2012. The observed results closely matched the predictions for all populations, suggesting that this method can be reliably used to maximize the sample size of surveillance programs, thus improving their efficiency.
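
    A minimal sketch of the calculation behind such predictions, assuming a simple Beta-binomial setup with illustrative (not the study's) population size, capture probabilities, and priors: with a Beta(a, b) prior on prevalence and zero infected individuals among n sampled, the posterior is Beta(a, b + n), and the upper 95% credible limit on prevalence shrinks as capture probability, and hence n, grows.

```python
from scipy.stats import beta

def upper_95_credible_limit(n_sampled, a=1.0, b=1.0):
    """One-sided upper 95% credible limit on prevalence after
    observing 0 infected among n_sampled (Beta(a, b) prior)."""
    return beta.ppf(0.95, a, b + n_sampled)

population_size = 120                    # hypothetical population estimate
for capture_prob in (0.1, 0.3, 0.6):     # illustrative seasonal capture rates
    n = round(population_size * capture_prob)    # expected sample size
    flat = upper_95_credible_limit(n)                    # uninformative Beta(1, 1)
    informed = upper_95_credible_limit(n, a=1.0, b=9.0)  # assumed prior, mean 0.1
    print(f"p_capture={capture_prob:.1f}  n={n:3d}  "
          f"upper95 flat={flat:.3f}  informative={informed:.3f}")
```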

    A comparison of cost and quality of three methods for estimating density for wild pig (Sus scrofa)

    A critical element in effective wildlife management is monitoring the status of wildlife populations; however, resources to monitor wildlife populations are typically limited. We compared the cost effectiveness of three common population estimation methods (i.e. non-invasive fecal DNA sampling, camera sampling, and removal sampling from trapping) by applying them to wild pigs (Sus scrofa) across three habitats in South Carolina, USA, where they are invasive. We used mark-recapture analyses for fecal DNA sampling data, spatially explicit capture-recapture analyses for camera sampling data, and a removal analysis for removal sampling from trap data. Density estimates were similar across methods. Camera sampling was the least expensive but had large variances. Fecal DNA sampling was the most expensive, although this technique generally performed well. We examined how reductions in effort by method related to increases in relative bias or imprecision. For removal sampling, the largest cost savings while maintaining unbiased density estimates came from reducing the number of traps. For fecal DNA sampling, a reduction in effort only minimally reduced costs, because maintaining high-quality estimates required additional lab replicates. For camera sampling, effort could only be marginally reduced before inducing bias. We provide a decision tree to help researchers make monitoring decisions.
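
    For illustration only (these are textbook estimators with made-up counts, not the study's analyses), two of the estimator families compared above can be sketched in a few lines: the bias-corrected Lincoln-Petersen (Chapman) mark-recapture estimator and a two-pass removal estimator.

```python
def chapman_estimate(n_marked, n_second, n_recaptured):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimate:
    mark n_marked animals, later capture n_second, of which
    n_recaptured carry marks."""
    return (n_marked + 1) * (n_second + 1) / (n_recaptured + 1) - 1

def two_pass_removal(c1, c2):
    """Moran-Zippin abundance estimate from two removal passes;
    requires c1 > c2 (catches decline as animals are removed)."""
    if c1 <= c2:
        raise ValueError("removal estimator needs declining catches")
    return c1 ** 2 / (c1 - c2)

print(chapman_estimate(n_marked=40, n_second=35, n_recaptured=12))  # ~112.5
print(two_pass_removal(c1=30, c2=18))                               # 75.0
```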

    A FRAMEWORK FOR SOFTWARE RELIABILITY MANAGEMENT BASED ON THE SOFTWARE DEVELOPMENT PROFILE MODEL

    Recent empirical studies of software have shown a strong correlation between the change history of files and their fault-proneness. Statistical data analysis techniques, such as regression analysis, have been applied to validate this finding. While these regression-based models show a correlation between selected software attributes and defect-proneness, in most cases they are inadequate for demonstrating causality. For this reason, we introduce the Software Development Profile Model (SDPM) as a causal model for identifying defect-prone software artifacts based on their change history and software development activities. The SDPM is based on the assumption that human error during software development is the sole cause of defects leading to software failures. The SDPM assumes that whenever a software construct is touched, it has a chance of becoming defective. Software development activities such as inspection, testing, and rework further affect the remaining number of software defects. Under this assumption, the SDPM estimates the defect content of software artifacts from their change history and the development activities applied to them. The SDPM is an improvement over existing defect estimation models because it not only uses evidence from the current project to estimate defect content, but also allows software managers to manage projects quantitatively by making risk-informed decisions early in the software development life cycle. We apply the SDPM to several real-life software development projects, showing how it is used, analyzing its accuracy in predicting defect-prone files, and comparing the results with a Poisson regression model.
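
    A minimal sketch of the SDPM's core assumption, with hypothetical injection and removal parameters (the model itself estimates these from project data): each touch to a file adds expected defects, and each quality activity removes a fraction of the current estimate.

```python
ACTIVITY_EFFECTIVENESS = {   # assumed defect-removal fractions
    "inspection": 0.5,
    "testing": 0.4,
    "rework": 0.3,
}

def expected_remaining_defects(events, p_inject=0.1):
    """Walk a file's change history: each 'touch' adds p_inject
    expected defects; each quality activity removes a fraction
    of the running estimate."""
    defects = 0.0
    for event in events:
        if event == "touch":
            defects += p_inject
        else:
            defects *= 1.0 - ACTIVITY_EFFECTIVENESS[event]
    return defects

history = ["touch", "touch", "inspection", "touch", "testing", "touch"]
print(f"expected residual defects: {expected_remaining_defects(history):.3f}")
```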

    Influence of landscape attributes on Virginia opossum density

    The Virginia opossum (Didelphis virginiana), North America's only marsupial, has a range extending from southern Ontario, Canada, to the Yucatan Peninsula, Mexico, and from the Atlantic seaboard to the Pacific. Despite the Virginia opossum's taxonomic uniqueness among North American mammals and its rapidly expanding distribution, its ecology remains relatively understudied. Our poor understanding of the ecology of this important mesopredator is especially pronounced in the rural southeastern United States. Our goal was to estimate the effects of habitat on opossum density within an extensive multiyear spatial capture-recapture study. Additionally, we compared the results of this spatial capture-recapture analysis with a simple relative abundance index. Opossum densities in the relatively underdeveloped regions of the southeastern United States were lower than in the more human-dominated landscapes of the Northeast and Midwest. In the southeastern United States, Virginia opossums occurred at higher density in bottomland swamp and riparian hardwood forest than in upland pine (Pinus spp.) plantations and isolated wetlands. These results reinforce the notion that the Virginia opossum is commonly associated with land cover types adjacent to permanent water (bottomland swamps, riparian hardwood). The relatively low density of opossums at isolated wetland sites suggests that the large spatial scale of selection demonstrated by opossums gives the species access to preferable cover types within the same landscape.
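
    As a sketch of the kind of simple relative abundance index that density estimates are compared against (illustrative numbers, and assuming a captures-per-trap-night formulation): such an index conflates detection probability with true density, which is exactly what spatial capture-recapture corrects for.

```python
def relative_abundance_index(n_individuals, n_traps, n_nights):
    """Unique individuals captured per 100 trap-nights of effort."""
    return 100.0 * n_individuals / (n_traps * n_nights)

# Hypothetical sites: identical indices can mask different true
# densities if detection probability differs between cover types.
print(relative_abundance_index(n_individuals=18, n_traps=30, n_nights=10))  # 6.0
print(relative_abundance_index(n_individuals=18, n_traps=60, n_nights=5))   # 6.0
```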

    Modelling for Pest Risk Analysis: Spread and Economic Impacts

    The introduction of invasive pests beyond their natural range is one of the main causes of the loss of biodiversity and leads to severe costs. Bioeconomic models that integrate biological invasion spread theory, economic impacts, and invasion management would be of great help in increasing the transparency of pest risk analysis (PRA) and would provide for more effective and efficient management of invasive pests. In this thesis, bioeconomic models of the management of invasive pests are developed and applied to three case studies. The main case looks at the invasion of Europe by the western corn rootworm (WCR), Diabrotica virgifera ssp. virgifera LeConte (Coleoptera: Chrysomelidae). A range of quantitative modelling approaches was employed: (i) dispersal kernels fitted to mark-release-recapture experimental data; (ii) optimal control models combined with info-gap theory; (iii) spatially explicit stochastic simulation models; and (iv) agent-based models. Applying the models yielded new insights into the management of invasive pests and the links between spread and economic impacts: (i) current official management measures to eradicate WCR were found to be ineffective; (ii) eradication and containment programmes that are economically optimal under no uncertainty turned out also to be the policies most robustly immune to unacceptable outcomes under severe uncertainty; (iii) PRA focusing on a single invasive pest might lead to management alternatives that do not correspond to the optimal economic allocation once the other invasive pests sharing the same management budget are considered; (iv) the control of satellite colonies of an invasion occurring by stratified dispersal is ineffective when strong propagule pressure is generated from the main body of the invasion, an effect amplified by human-assisted long-distance dispersal; and (v) agent-based models were shown to be an adequate tool for integrating biological invasion spread models with economic analysis models.
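
    A minimal sketch of the first modelling step listed above, fitting a dispersal kernel to mark-release-recapture data, using a one-dimensional exponential kernel and hypothetical recapture distances (the thesis fitted kernels to WCR field data):

```python
import numpy as np

# Hypothetical distances (km) between release and recapture points
recapture_distances_km = np.array([0.2, 0.5, 0.8, 1.1, 1.6, 2.4, 3.9])

# For an exponential kernel f(d) = (1/m) * exp(-d/m), the maximum
# likelihood estimate of the mean dispersal distance m is the sample mean.
mean_dispersal = recapture_distances_km.mean()
print(f"fitted mean dispersal distance: {mean_dispersal:.2f} km")

# Probability an individual disperses beyond a containment radius r,
# i.e. the survival function of the fitted kernel.
r = 5.0
p_escape = np.exp(-r / mean_dispersal)
print(f"P(dispersal > {r} km) = {p_escape:.4f}")
```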

    Re-thinking the assessment and monitoring of large-scale coastal developments for improved marine megafauna outcomes

    Rachel Groom studied environmental impact assessment (EIA) legislation and regulation for its effectiveness in protecting marine megafauna subject to coastal development pressures. She examined data sufficiency, project constraints, marine megafauna ecology, and the efficacy of monitoring techniques. Challenges were discussed and guidelines were provided for stakeholders to achieve improved conservation outcomes.

    Improving Labor Inspections Systems: Design Options

    [Excerpt] The following paper identifies experimental designs for the evaluation of labor inspection systems in Latin America. It includes six principal sections. Section 1 discusses the main differences between the “Latin model” (Piore and Schrank 2008) of labor inspection and the more familiar approach adopted by enforcement agencies like OSHA and the Wage and Hour Division in the US. Section 2 discusses theories of regulatory noncompliance and develops a logic model that links enforcement strategies to compliance outcomes in the region. Section 3 discusses some of the strategies that are available to Latin American labor inspectors and sets the stage for a discussion of their assignment to experimental subjects. Section 4 identifies five possible subjects of experimentation (e.g., inspectors, firms, jurisdictions) and discusses their respective receptivity to both random assignment and counterfactual analysis (e.g., data needs, estimation procedures, etc.). Section 5 addresses practical considerations involved in the design and conduct of experiments on inspection systems, including their utility, ethics, and viability, and introduces a checklist designed to facilitate their assessment. Section 6 describes three potential experiments, labeled “professionals v. partisans,” “risk-based targeting v. randomized inspection,” and “carrots v. sticks” respectively, and discusses their principal goals and limitations in light of the checklist.