
    Trend detection in source-sink systems: when should sink habitats be monitored?

    Using a stochastic population model, we determine the power of population monitoring in source or sink habitat to detect declining reproductive success in the source. The relative power to detect a trend in the source by monitoring either the source or the sink varies with life history parameters, environmental stochasticity, and observation uncertainty. The power to detect a decline by monitoring either habitat is maximized when the reproductive surplus in the source is low. The power to detect a decline by monitoring the sink increases with increasing reproductive deficit in the sink. If environmental stochasticity in the source increases, the power in the sink initially goes down due to a lower signal-to-noise ratio. However, the power in the sink increases if environmental stochasticity increases further, because increasing stochasticity reduces the geometric mean growth rate in the source. Intriguingly, it is often most efficient to monitor the sink even though the actual reproductive decline occurs in the source. If reproductive success is declining in both habitats, censusing the sink will always have higher power. However, the probability of Type I error is always higher in the sink. Our results have clear implications for optimal population monitoring in source-sink landscapes.
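
    The abstract does not give the model's equations, but the kind of power calculation it describes can be illustrated with a toy simulation. The sketch below (all parameter values are our own assumptions, not the paper's) simulates a source whose reproductive success declines linearly, spills its surplus into a sink with a fixed deficit, and estimates the power of a log-linear trend test on noisy censuses of either habitat.

    ```python
    # Toy sketch, not the paper's model: power to detect a source decline
    # by monitoring source vs. sink counts. All parameters are illustrative.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def simulate_counts(years=20, decline=0.02, env_sd=0.1, obs_sd=0.2):
        """Source grows at a declining rate; its surplus feeds the sink."""
        src, snk = 100.0, 50.0
        log_src, log_snk = [], []
        for t in range(years):
            r_src = 0.1 - decline * t        # declining reproductive success
            eps = rng.normal(0.0, env_sd)    # environmental stochasticity
            src *= np.exp(r_src + eps)
            surplus = max(src - 100.0, 0.0)  # surplus disperses to the sink
            src -= surplus
            snk = snk * np.exp(-0.1 + eps) + surplus  # fixed sink deficit
            # lognormal observation error on each census
            log_src.append(np.log(src) + rng.normal(0.0, obs_sd))
            log_snk.append(np.log(snk) + rng.normal(0.0, obs_sd))
        return log_src, log_snk

    def power(habitat, n_rep=1000, alpha=0.05):
        """Share of replicates with a significant negative log-linear trend."""
        hits = 0
        for _ in range(n_rep):
            series = simulate_counts()[habitat]
            fit = stats.linregress(range(len(series)), series)
            hits += (fit.slope < 0) and (fit.pvalue < alpha)
        return hits / n_rep

    print("power monitoring the source:", power(0))
    print("power monitoring the sink:  ", power(1))
    ```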

    Mixture models for overdispersed data

    Ecological data often do not conform to the assumptions of standard probability distributions, and this has important implications for the validity of statistical inference. A common reason is that the variability of ecological data is often much higher than the standard probability distributions that underpin most statistical inference in ecology can account for. This leads to underestimated variances and biased statistical tests unless the overdispersion is accounted for. Consequently, methods for dealing with overdispersion are an essential component of the ecologist's statistical toolbox. This chapter introduces statistical methods known as mixture models that can deal with overdispersion. Mixture models are powerful because they can not only account for overdispersion but also help to identify the actual ecological or observation processes that drive it. The chapter begins by discussing the causes and consequences of overdispersion in ecological data and how overdispersion can be identified. Mixture models are then described and illustrated using two case studies, one from survival analysis and one from the analysis of population abundance. The chapter ends with a discussion of some limitations of mixture models and pitfalls to look out for.
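
    A minimal, self-contained illustration of the chapter's central idea, using an example of our own choosing: counts drawn from a Poisson-gamma mixture are overdispersed, a single Poisson fits them poorly, and a negative binomial (the marginal distribution of that mixture) absorbs the extra variance. All data here are simulated.

    ```python
    # Simulated example: a negative binomial (a Poisson-gamma mixture)
    # absorbs overdispersion that a plain Poisson cannot.
    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(0)
    lam = rng.gamma(shape=2.0, scale=3.0, size=500)  # latent heterogeneity
    y = rng.poisson(lam)                             # overdispersed counts
    print("mean:", y.mean(), " variance:", y.var())  # variance >> mean

    # Poisson: the MLE of the rate is the sample mean.
    ll_pois = stats.poisson.logpmf(y, y.mean()).sum()

    # Negative binomial: fit (n, p) by maximum likelihood.
    def nb_negll(params):
        n, p = params
        return -stats.nbinom.logpmf(y, n, p).sum()

    fit = optimize.minimize(nb_negll, x0=[1.0, 0.5],
                            bounds=[(1e-6, None), (1e-6, 1 - 1e-6)])
    ll_nb = -fit.fun

    print("AIC Poisson:     ", 2 * 1 - 2 * ll_pois)
    print("AIC neg binomial:", 2 * 2 - 2 * ll_nb)
    ```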

    Emission Line Galaxies in the STIS Parallel Survey II: Star Formation Density

    We present the luminosity function of [OII]-emitting galaxies at a median redshift of z=0.9, as measured in the deep spectroscopic data of the STIS Parallel Survey (SPS). The luminosity function shows strong evolution from the local value, as expected. By using random lines of sight, the SPS measurement complements previous deep single-field studies. We calculate the inferred star formation density at this redshift by converting from [OII] to H-alpha line flux as a function of absolute magnitude and find rho_dot = 0.043 +/- 0.014 Msun/yr/Mpc^3 at a median redshift z~0.9 within the range 0.46 < z < 1.415 (H_0 = 70 km/s/Mpc, Omega_M = 0.3, Omega_Lambda = 0.7). This density is consistent with a (1+z)^4 evolution in global star formation since z~1. Reconciling this density with similar measurements made by surveys targeting H-alpha may require substantial extinction correction.
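
    The conversion step can be sketched as follows, assuming the widely used Kennicutt (1998) calibration between [OII] luminosity and star formation rate and a simple 1/V_max density estimator; the luminosities and volumes below are invented placeholders, not the SPS measurements.

    ```python
    # Illustrative sketch, not the paper's pipeline: convert [OII] line
    # luminosities to star formation rates and sum SFR/V_max over a mock
    # sample to get a volume density.
    import numpy as np

    def sfr_from_oii(L_oii_erg_s):
        """Kennicutt (1998): SFR [Msun/yr] ~ 1.4e-41 * L([OII]) [erg/s]."""
        return 1.4e-41 * L_oii_erg_s

    # Hypothetical sample: line luminosities and per-galaxy max volumes.
    L_oii = np.array([1e41, 3e41, 5e40])   # erg/s (illustrative)
    V_max = np.array([2e5, 5e5, 1e5])      # Mpc^3 (illustrative)

    rho_sfr = np.sum(sfr_from_oii(L_oii) / V_max)  # Msun/yr/Mpc^3
    print(f"rho_SFR ~ {rho_sfr:.3e} Msun/yr/Mpc^3")
    ```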

    Randomized controlled trials in adult traumatic brain injury: A systematic review on the use and reporting of clinical outcome assessments

    As part of efforts to improve study design, the use of outcome measures in randomized controlled trials (RCTs) in traumatic brain injury (TBI) is receiving increasing attention. This review aimed to assess how clinical outcome assessments (COAs) have been used and reported in RCTs in adult TBI. Systematic literature searches were conducted to identify medium to large (n ≄ 100) acute and post-acute TBI trials published since 2000. Data were extracted independently by two reviewers using a set of structured templates. Items from the Consolidated Standards of Reporting Trials (CONSORT) 2010 Statement and the CONSORT patient-reported outcomes (PRO) extension were used to evaluate the reporting quality of COAs. Glasgow Outcome Scale/Extended (GOS/GOSE) data were extracted using a checklist developed specifically for the review. A total of 126 separate COAs were identified in 58 studies. The findings demonstrate heterogeneity in the use of TBI outcomes, limiting comparisons and meta-analyses of RCT findings. The GOS/GOSE was included in 39 studies, but implemented in a variety of ways that may not be equivalent. Multidimensional outcomes were used in 30 studies and were relatively more common in rehabilitation settings. The use of PROs was limited, especially in acute study settings. Quality of reporting was variable, and key information concerning COAs was often omitted, making it difficult to know precisely how outcomes were assessed. Consistency across studies would be increased and future meta-analyses facilitated by (a) using common data elements recommendations for TBI outcomes and (b) following CONSORT guidelines when publishing RCTs.

    Managing risk and uncertainty in systematic conservation planning with insufficient information

    1. Recent advances in systematic conservation planning make use of modern portfolio theory - a framework for constructing and selecting optimal allocations of assets - to address the challenges posed by climate change uncertainty. However, these methods are difficult to implement for fine-scale conservation planning when the information on future climate scenarios is insufficient. Insufficient information makes the estimators of the key inputs in the optimisation procedure unreliable, leading to technical problems for the construction of optimal asset allocations. 2. We identify three statistical methods - the Constant Correlation Model, the Ledoit-Wolf approach, and the weighted non-negative least-squares approach - that can overcome the lack of sufficient information and enable the use of modern portfolio theory for fine-scale conservation planning. 3. We illustrate the use of the three methods for identifying efficient portfolio allocation strategies, i.e. strategies that give the minimum amount of risk for a chosen level of return or the maximum return for a chosen level of risk, using case studies of wetland conservation planning in North America and coastal conservation planning in Australia. We compare conservation planning strategies with complete information using standard portfolio theory and with insufficient information using the three methods to highlight their advantages and disadvantages. We find that the Ledoit-Wolf and weighted non-negative least-squares approaches perform well and can identify risk-return outcomes that are close to those identified with complete information. 4. The methods presented in this study broaden the range of cases where the application of modern portfolio theory is possible in conservation planning, enhancing its uptake and leading to more efficient allocation of conservation resources.
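
    As a rough sketch of the portfolio step (our own toy data, not the case studies), the Ledoit-Wolf shrinkage estimator in scikit-learn yields a well-conditioned covariance matrix even when there are fewer climate scenarios than sites, so minimum-variance weights remain computable:

    ```python
    # Toy data: with fewer climate scenarios than sites, the sample
    # covariance is singular, but a Ledoit-Wolf shrinkage estimate
    # stays invertible.
    import numpy as np
    from sklearn.covariance import LedoitWolf

    rng = np.random.default_rng(42)
    n_scenarios, n_sites = 8, 30                 # insufficient information
    returns = rng.normal(0.05, 0.1, size=(n_scenarios, n_sites))

    cov = LedoitWolf().fit(returns).covariance_  # well-conditioned estimate

    # Global minimum-variance portfolio: w = C^{-1} 1 / (1' C^{-1} 1)
    ones = np.ones(n_sites)
    w = np.linalg.solve(cov, ones)
    w /= w @ ones

    print("weights sum to:", w.sum())
    print("portfolio variance:", w @ cov @ w)
    ```

    Note that these unconstrained weights can be negative; a long-only allocation would add a non-negativity constraint, which is where an approach like the weighted non-negative least squares mentioned in the abstract comes in.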

    A spatially explicit habitat selection model incorporating home range behavior

    Understanding habitat selection is of primary interest in theoretical and applied ecology. One approach is to infer habitat selection processes from differences in population densities between habitats using methods such as isodar and isoleg analysis. Another approach is to directly observe the movements of individuals. However, habitat selection models based on movement data often fail to adequately incorporate spatial processes. This is problematic if the probability of selecting a particular habitat depends upon its spatial context. This would occur, for example, where organisms exhibit home range behavior and the choice of habitat depends on its location relative to the home range. In this paper we present a spatially explicit habitat selection model for movement data that incorporates home range behavior as a spatial process. Our approach extends a previous model by formulating the probability of selecting a habitat as a function of its distance from the animal's current location and home range center. We demonstrate that these enhancements lead to more parsimonious models when applied to a koala radiotracking data set from eastern Australia. This approach could also be applied to modeling other spatial habitat selection processes, leading to more biologically meaningful models for a range of species and applications.
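
    The abstract's formulation can be caricatured with a discrete-choice kernel in which the probability of stepping into a candidate cell decays with distance from both the current position and the home-range centre; the functional form and parameter names below are our guesses for illustration, not the paper's model.

    ```python
    # Minimal sketch of the idea: selection probability falls off with
    # distance from the current position and the home-range centre.
    import numpy as np

    def step_probs(cells_xy, habitat, current_xy, home_xy,
                   beta=1.0, alpha=0.5, psi=0.2):
        """Discrete-choice kernel over candidate cells.

        cells_xy : (n, 2) candidate cell coordinates
        habitat  : (n,) habitat quality covariate
        """
        d_cur = np.linalg.norm(cells_xy - current_xy, axis=1)
        d_home = np.linalg.norm(cells_xy - home_xy, axis=1)
        score = beta * habitat - alpha * d_cur - psi * d_home
        w = np.exp(score - score.max())   # numerically stable softmax
        return w / w.sum()

    # Toy landscape: five cells on a line, one high-quality cell.
    cells = np.array([[0, 0], [1, 0], [2, 0], [3, 0], [4, 0]], float)
    quality = np.array([0.1, 0.1, 1.0, 0.1, 0.1])
    p = step_probs(cells, quality, current_xy=np.array([1.0, 0.0]),
                   home_xy=np.array([2.0, 0.0]))
    print(p.round(3))
    ```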

    Study protocol - A systematic review and meta-analysis of hypothermia in experimental traumatic brain injury: Why have promising animal studies not been replicated in pragmatic clinical trials?

    Traumatic brain injury (TBI) is a major cause of death and permanent disability. Systemic hypothermia, a treatment used in TBI for many decades, has recently been found to be associated with neutral or unfavourable clinical outcomes despite apparently promising preclinical research. Systematic review and meta-analysis are tools to summarize the literature and observe trends in the experimental design and quality that underpin its general conclusions. Here we aim to use these techniques to describe the use of hypothermia in animal TBI models, collating data relating to outcome, study design, and study quality. From there we intend to observe correlations between these features and attempt to explain any discrepancies found between animal and clinical data. This protocol describes the relevant methodology in detail.
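
    For readers unfamiliar with the pooling step such a review would culminate in, here is a minimal random-effects meta-analysis in the DerSimonian-Laird style, with invented effect sizes; the protocol itself does not specify the estimator.

    ```python
    # Hedged sketch of random-effects pooling (DerSimonian-Laird).
    # Effect sizes and variances below are invented for illustration.
    import numpy as np

    def random_effects(effects, variances):
        """DerSimonian-Laird pooled effect and tau^2 heterogeneity."""
        effects, variances = np.asarray(effects), np.asarray(variances)
        w = 1.0 / variances                      # fixed-effect weights
        fixed = np.sum(w * effects) / np.sum(w)
        q = np.sum(w * (effects - fixed) ** 2)   # Cochran's Q
        df = len(effects) - 1
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - df) / c)            # between-study variance
        w_re = 1.0 / (variances + tau2)
        pooled = np.sum(w_re * effects) / np.sum(w_re)
        se = np.sqrt(1.0 / np.sum(w_re))
        return pooled, se, tau2

    pooled, se, tau2 = random_effects(
        effects=[0.30, 0.10, 0.45, 0.20],        # hypothetical improvements
        variances=[0.02, 0.03, 0.05, 0.01])
    print(f"pooled = {pooled:.3f} +/- {1.96 * se:.3f}, tau^2 = {tau2:.3f}")
    ```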

    Cost-efficient fenced reserves for conservation: single large or two small?

    Fences that exclude invasive alien species are used to reduce predation pressure on reintroduced threatened wildlife. Planning these continuously managed systems of reserves raises an important extension of the Single Large or Several Small (SLOSS) reserve planning framework: the added complexity of ongoing management. We investigate the long-term cost-efficiency of a single large versus two small predator exclusion fences in the arid Australian context of reintroducing bilbies Macrotis lagotis, and we highlight the broader significance of our results with sensitivity analysis. A single fence more frequently results in a much larger net cost than two smaller fences. We find that the cost-efficiency of two fences is robust to strong demographic and environmental uncertainty, which can help managers mitigate the risk of incurring high costs over the entire life of the project.
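
    The geometric trade-off at the heart of the question can be sketched with a toy Monte Carlo (every cost and rate below is an invented placeholder, not from the study): two half-sized fences need more total fence line, but each breach exposes a smaller area to re-eradication.

    ```python
    # Toy Monte Carlo: lifetime cost of one large fence vs. two half-sized
    # fences when breaches force costly re-eradication inside a fence.
    # A square fence around area A has perimeter 4*sqrt(A).
    import numpy as np

    rng = np.random.default_rng(7)

    def lifetime_cost(areas, years=50, n_rep=10_000, build_per_km=50.0,
                      upkeep_per_km_yr=1.0, breach_rate=0.05,
                      eradication_per_km2=10.0):
        areas = np.asarray(areas, float)
        perim = 4.0 * np.sqrt(areas)             # km of fence line
        fixed = perim.sum() * (build_per_km + years * upkeep_per_km_yr)
        costs = np.full(n_rep, fixed)
        for area in areas:
            # independent breach events per fence over the horizon
            breaches = rng.poisson(breach_rate * years, size=n_rep)
            costs += breaches * eradication_per_km2 * area
        return costs

    one_large = lifetime_cost([100.0])           # km^2
    two_small = lifetime_cost([50.0, 50.0])
    print("mean cost, one large:", one_large.mean())
    print("mean cost, two small:", two_small.mean())
    print("P(one large costs more):", (one_large > two_small).mean())
    ```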

    The Value of Resolving Uncertainty in Social-Ecological Systems

    Conservation is increasingly framed and analyzed as a coupled social-ecological problem. However, considering the broader links between social and ecological systems reveals additional and increasing dimensions of uncertainty for conservation management. Reducing uncertainty is expected to lead to improved management decisions; however, collecting more data or lengthening project time frames to reduce uncertainty is not without cost. In this study we analyze where conservation managers should invest resources to improve management outcomes by decreasing uncertainty in a coupled social-ecological system. We consider five system components: social nodes, ecological nodes, social links, ecological links, and social-ecological links. We find that the expected value of improving information for any one component is always highest for the component that is most directly acted upon by managers. Our results can help guide conservation investment to reduce uncertainty where improved knowledge of a social-ecological system will provide the greatest improvement in management outcomes.
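
    The calculation behind this kind of analysis is, in essence, an expected value of perfect information; a minimal sketch with an invented payoff table (the paper's exact formulation may differ) looks like this:

    ```python
    # Expected value of perfect information: compare the best action
    # under uncertainty with the average of the best actions once the
    # true system state is known. Payoffs and prior are illustrative.
    import numpy as np

    # Rows: management actions; columns: possible system states.
    payoff = np.array([[6.0, 2.0, 4.0],    # act on an ecological node
                       [3.0, 5.0, 3.0],    # act on a social node
                       [4.0, 4.0, 2.0]])   # act on a linking process
    prior = np.array([0.5, 0.3, 0.2])      # belief over states

    best_under_uncertainty = np.max(payoff @ prior)        # commit now
    best_with_information = prior @ np.max(payoff, axis=0) # learn state first
    evpi = best_with_information - best_under_uncertainty
    print("EVPI:", evpi)
    ```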

    Measurement of Cosmic Shear with the Space Telescope Imaging Spectrograph

    Weak lensing by large-scale structure allows a direct measure of the dark matter distribution. We have used parallel images taken with the Space Telescope Imaging Spectrograph (STIS) on the Hubble Space Telescope to measure weak lensing, or cosmic shear. We measure the shapes of 26036 galaxies in 1292 STIS fields and measure the shear variance at a scale of 0.51 arcminutes. The charge transfer efficiency (CTE) of STIS has degraded over time and introduces a spurious ellipticity into galaxy shapes during the readout process. We correct for this effect as a function of signal-to-noise ratio and CCD position. We further show that the detected cosmic shear signal is nearly constant in time over the approximately four years of observation. We detect cosmic shear at the 5.1 sigma level, and our measurement of the shear variance is consistent with theoretical predictions in a LambdaCDM universe. This provides a measure of the normalization of the mass power spectrum, sigma_8 = (1.02 +/- 0.16) (0.3/Omega_m)^{0.46} (0.21/Gamma)^{0.18}. The one-sigma error includes noise, cosmic variance, systematics, and the redshift uncertainty of the source galaxies. This is consistent with previous cosmic shear measurements, but tends to favor those with a high value of sigma_8. It is also consistent with the recent determination of sigma_8 from the Wilkinson Microwave Anisotropy Probe (WMAP) experiment.
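
    As a schematic of the noise-debiasing idea (not the paper's pipeline), the sketch below recovers a shear variance from per-field mean ellipticities by subtracting the shot-noise contribution of intrinsic galaxy shapes; all numbers are simulated placeholders.

    ```python
    # Schematic sketch: estimate a shear variance from per-field mean
    # ellipticities, subtracting the intrinsic-shape shot noise.
    import numpy as np

    rng = np.random.default_rng(3)
    n_fields, gal_per_field = 1292, 20
    sigma_int = 0.3                  # intrinsic ellipticity dispersion
    true_shear_var = 1e-4            # variance of the lensing signal

    # Each field has one shear value; galaxy shapes scatter around it.
    shear = rng.normal(0.0, np.sqrt(true_shear_var), size=n_fields)
    e = shear[:, None] + rng.normal(0.0, sigma_int,
                                    size=(n_fields, gal_per_field))

    field_mean = e.mean(axis=1)
    noise_var = sigma_int**2 / gal_per_field       # shot noise per field mean
    shear_var_hat = field_mean.var() - noise_var   # noise-debiased estimate

    print(f"recovered shear variance: {shear_var_hat:.2e}"
          f" (input {true_shear_var:.1e})")
    ```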
    • 

    corecore