28 research outputs found

    The Power of Expert Opinion in Ecological Models Using Bayesian Methods: Impact of Grazing on Birds

    One of our greatest challenges as researchers is predicting the impacts of land use on biota, and predicting the impact of livestock grazing on birds is no exception. Insufficient data and poor survey design often yield results that are not statistically significant or are difficult to interpret because researchers cannot disentangle the effects of grazing from other disturbances. This has resulted in few publications on the impact of grazing alone on birds. Ecologists with extensive experience of bird ecology in grazed landscapes can inform an analysis when time and monetary constraints limit the amount of data that can be collected. Using responses from twenty well-recognised ecologists from across Australia, we capture this expert knowledge and incorporate it into a statistical model using Bayesian methods. Although relatively new to ecology, Bayesian methods allow straightforward probability statements to be made about specific models or scenarios, and they allow the integration of different types of information, including scientific judgement, while formally accommodating the uncertainty in the information provided. Data on bird density were collected across three broad levels of grazing (no/low, moderate and high) typical of sub-tropical Australia. These field data were used in conjunction with the expert data to produce estimates of species persistence under grazing. The addition of expert data through priors in our model strengthened results under at least one grazing level for all but one of the bird species examined. When experts were in agreement, credible intervals tightened substantially, whereas when experts were in disagreement, results were similar to those obtained in the absence of expert information. In fields where there is extensive expert knowledge yet little published data, the use of expert information as priors for ecological models is a cost-effective way of making more confident predictions about the effect of management on biodiversity.
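
    The general mechanism can be illustrated with a minimal conjugate sketch: an expert-informed prior, when it broadly agrees with the field data, narrows the posterior credible interval relative to a vague prior. The counts and prior parameters below are hypothetical, and the Beta-Binomial model is a simplified stand-in for the paper's analysis, not a reproduction of it.

```python
# Sketch: how an expert-informed prior can tighten a credible interval
# relative to a vague prior, using a conjugate Beta-Binomial model.
# All numbers below are hypothetical illustrations, not the paper's data.
from scipy import stats

# Field data: sites surveyed under high grazing, and sites where the
# species was recorded as persisting (hypothetical counts).
n_sites, n_present = 12, 4

# Vague prior: Beta(1, 1). Expert-informed prior: experts broadly agree
# the species persists at roughly 20-40% of heavily grazed sites,
# encoded here (loosely) as Beta(6, 14).
priors = {"vague Beta(1,1)": (1.0, 1.0), "expert Beta(6,14)": (6.0, 14.0)}

for label, (a, b) in priors.items():
    posterior = stats.beta(a + n_present, b + n_sites - n_present)
    lo, hi = posterior.ppf([0.025, 0.975])
    print(f"{label}: 95% credible interval = ({lo:.2f}, {hi:.2f}), "
          f"width = {hi - lo:.2f}")
```

    When experts disagree, the elicited prior becomes flatter and more dispersed, and the posterior converges toward the data-only analysis, mirroring the pattern reported in the abstract.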

    Fine-suspended sediment and water budgets for a large, seasonally dry tropical catchment: Burdekin River catchment, Queensland, Australia

    The Burdekin River catchment (~130,400 km²) is a seasonally dry tropical catchment located in north-east Queensland, Australia. It is the single largest source of suspended sediment to the Great Barrier Reef (GBR). Fine sediments are a threat to ecosystems on the GBR, where they contribute to elevated turbidity (reduced light), sedimentation stress, and potential impacts from the associated nutrients. Suspended sediment data collected over a 5-year period were used to construct a catchment-wide sediment source and transport budget. The Bowen River tributary was identified as the major source of end-of-river suspended sediment export, yielding an average of 530 t km⁻² yr⁻¹ during the study period. Sediment trapping within a large reservoir (1.86 million ML) and the preferential transport of clays and fine silts downstream of the structure were also examined. The data reveal that the highest clay and fine silt loads, which are of most interest to environmental managers of the GBR, are not always sourced from areas that yield the largest total suspended sediment load (i.e., all size fractions). Our results demonstrate the importance of incorporating particle size into catchment sediment budget studies undertaken to inform management decisions aimed at reducing downstream turbidity and sedimentation. Our data on sediment sources, reservoir influence, and subcatchment and catchment yields will improve understanding of sediment dynamics in other tropical catchments, particularly those located in seasonally wet-dry tropical savannah/semiarid climates. The influence of climatic variability (e.g., drought and wetter periods) on annual sediment loads within large seasonally dry tropical catchments is also demonstrated by our data.
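
    The budget arithmetic behind figures such as the 530 t km⁻² yr⁻¹ Bowen yield is straightforward: area-specific yield is mean annual load divided by contributing area, and reservoir trapping is the fraction of the inflowing load that does not leave the structure. A minimal sketch with hypothetical load and area values (not the monitored Burdekin figures):

```python
# Sketch: basic arithmetic behind two terms of a catchment sediment budget.
# Area-specific yield (t km^-2 yr^-1) = mean annual load / contributing area.
# The load and area values below are hypothetical placeholders.

def specific_yield(mean_annual_load_t: float, area_km2: float) -> float:
    """Return sediment yield in t km^-2 yr^-1."""
    return mean_annual_load_t / area_km2

def trapped_fraction(inflow_load_t: float, outflow_load_t: float) -> float:
    """Fraction of the inflowing load retained in a reservoir."""
    return 1.0 - outflow_load_t / inflow_load_t

# Hypothetical tributary delivering 4.0 Mt/yr from 7,500 km^2.
print(specific_yield(4.0e6, 7_500))        # ~533 t km^-2 yr^-1
# Hypothetical reservoir: 3.0 Mt/yr flowing in, 1.2 Mt/yr flowing out.
print(trapped_fraction(3.0e6, 1.2e6))      # 0.6 of the load trapped
```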

    Sampling re-design increases power to detect change in the Great Barrier Reef’s inshore water quality

    Monitoring programs are fundamental to understanding the state and trend of aquatic ecosystems. Sampling designs are a crucial component of monitoring programs and ensure that measurements evaluate progress toward clearly stated management objectives, which provides a mechanism for adaptive management. Here, we use a well-established marine monitoring program for inshore water quality in the Great Barrier Reef (GBR), Australia, to investigate whether a sampling re-design has increased the program’s capacity to meet its primary objectives. Specifically, we use bootstrap resampling to assess the change in statistical power to detect temporal water quality trends in a 15-year inshore marine water quality data set that includes data from both before and after the sampling re-design. We perform a comprehensive power analysis for six water quality analytes at four separate study areas in the GBR Marine Park and find that the sampling re-design (i) increased power to detect trends in 23 of the 24 analyte-study area combinations, and (ii) resulted in an average increase in power of 34% to detect increasing or decreasing trends in water quality analytes. This increase in power is attributed more to the addition of sampling locations than to an increase in sampling frequency. The sampling re-design has therefore substantially increased the capacity of the program to detect temporal trends in inshore marine water quality. Further improvements in sampling design should focus on the program’s ability to reliably detect trends within realistic timeframes over which improvements to inshore water quality can be expected to occur.
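
    A minimal sketch of a simulation-based power calculation for a single analyte and study area: generate a monitoring series with a known trend, fit a linear regression, and record how often the slope is detected as significant. The sample sizes, trend magnitude, and noise level below are illustrative assumptions, not the program's values.

```python
# Sketch: estimating power to detect a temporal trend by simulation/resampling.
# Repeatedly generate a monitoring series with a known trend, fit a linear
# regression, and count how often the slope is statistically significant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_years, n_per_year = 15, 8          # sampling occasions per year (assumed)
trend_per_year = 0.03                # assumed annual change in the analyte
noise_sd = 0.25                      # assumed residual variability
alpha = 0.05
n_sim = 2000

years = np.repeat(np.arange(n_years), n_per_year)
detections = 0
for _ in range(n_sim):
    y = trend_per_year * years + rng.normal(0.0, noise_sd, size=years.size)
    result = stats.linregress(years, y)
    if result.pvalue < alpha:
        detections += 1

print(f"Estimated power: {detections / n_sim:.2f}")
```

    Comparing this quantity under the old and new designs (different numbers of locations and sampling rates) gives the kind of before/after power contrast reported above.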

    Reliability measures for local nodes assessment in classification trees

    Most of the modern developments with classification trees are aimed at improving their predictive capacity. This article considers a curiously neglected aspect of classification trees, namely the reliability of predictions that come from a given classification tree. Since a node of a tree represents, in the limit, a point in the predictor space, the aim of this article is to develop localized assessments of the reliability of prediction rules. A classification tree may be used either to provide a probability forecast, where for each node the membership probabilities for each class constitute the prediction, or a true classification, where each new observation is predictively assigned to a unique class. Correspondingly, two types of reliability measure are derived, namely prediction reliability and classification reliability. We use bootstrapping methods as the main tool to construct these measures. We also provide a suite of graphical displays by which they may be easily appreciated. In addition to providing an estimate of the reliability of specific forecasts of each type, these measures can be used to guide future data collection to improve the effectiveness of the tree model. The motivating example has a binary response, namely the presence or absence of a species of eucalypt, Eucalyptus cloeziana, at a given sampling location in response to a suite of environmental covariates, although the methods are not restricted to binary response data.
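
    A minimal sketch of the bootstrap idea, using scikit-learn rather than the authors' implementation: refit the tree on bootstrap resamples of the training data and summarise how much the predicted class probability for a new location varies. The data, tree depth, and query point below are synthetic assumptions.

```python
# Sketch: bootstrap assessment of prediction reliability for a classification
# tree. Refit the tree on resampled training data and examine the spread of
# the predicted presence probability at a query point. Synthetic data; not
# the Eucalyptus cloeziana dataset analysed in the article.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 300
X = rng.uniform(0, 1, size=(n, 2))            # two environmental covariates
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.2, n) > 0.8).astype(int)

query = np.array([[0.6, 0.4]])                # a new sampling location
probs = []
for _ in range(500):
    idx = rng.integers(0, n, size=n)          # bootstrap resample
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(X[idx], y[idx])
    probs.append(tree.predict_proba(query)[0, 1])

probs = np.array(probs)
print(f"Predicted presence probability: mean={probs.mean():.2f}, "
      f"bootstrap SD={probs.std():.2f}, "
      f"2.5-97.5 percentile range: {np.percentile(probs, 2.5):.2f}-"
      f"{np.percentile(probs, 97.5):.2f}")
```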

    Fitting genetic models to twin data with binary and ordered categorical responses: A comparison of structural equation modelling and Bayesian hierarchical models

    We compare Bayesian methodology using the freeware package BUGS (Bayesian Inference Using Gibbs Sampling) with the traditional structural equation modelling approach based on another freeware package, Mx. Dichotomous and ordinal (three-category) twin data were simulated according to different additive genetic and common environment models for phenotypic variation. Practical issues are discussed in using Gibbs sampling, as implemented by BUGS, to fit subject-specific Bayesian generalized linear models, in which the components of variation may be estimated directly. The simulation study (based on 2000 twin pairs) indicated a consistent advantage in using the Bayesian method to detect a correct model under certain specifications of additive genetic and common environmental effects. For binary data, both methods had difficulty detecting the correct model when the additive genetic effect was low (between 10 and 20%) or moderate (between 20 and 40%). Furthermore, neither method could adequately detect a correct model that included a modest common environmental effect (20%), even when the additive genetic effect was large (50%). Power was significantly improved with ordinal data for most scenarios, except in the case of low heritability under a true ACE model. We illustrate and compare both methods using data from 1239 twin pairs over the age of 50 years who were registered with the Australian National Health and Medical Research Council Twin Registry (ATR) and presented with symptoms associated with osteoarthritis occurring in joints of the hand.
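
    For context on the variance components being estimated, a classical moment-based approximation expresses the ACE decomposition directly in terms of MZ and DZ twin correlations for a standardised continuous phenotype. The sketch below illustrates those components only; it is not the BUGS or Mx model fitting compared in the paper, and the correlations used are hypothetical.

```python
# Sketch: crude moment-based ACE decomposition from twin correlations.
# Under a simple additive model, cov(MZ) ~ A + C and cov(DZ) ~ 0.5*A + C,
# giving A ~ 2*(rMZ - rDZ), C ~ 2*rDZ - rMZ, and E ~ 1 - rMZ for a
# standardised phenotype. Hypothetical correlations, for illustration only.

def ace_from_correlations(r_mz: float, r_dz: float) -> dict:
    a = 2.0 * (r_mz - r_dz)   # additive genetic variance share
    c = 2.0 * r_dz - r_mz     # common (shared) environment share
    e = 1.0 - r_mz            # unique environment + error share
    return {"A": a, "C": c, "E": e}

est = ace_from_correlations(r_mz=0.60, r_dz=0.40)
print({k: round(v, 2) for k, v in est.items()})
# {'A': 0.4, 'C': 0.2, 'E': 0.4}
```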

    Comment


    Combining non-parametric models with logistic regression: an application to motor vehicle injury data

    To date, computer-intensive non-parametric modelling procedures such as classification and regression trees (CART) and multivariate adaptive regression splines (MARS) have rarely been used in the analysis of epidemiological studies. Most published studies focus on techniques such as logistic regression to summarise their results simply in the form of odds ratios. However, flexible non-parametric techniques such as CART and MARS can provide more informative and attractive models whose individual components can be displayed graphically. An application of these techniques in the analysis of an epidemiological case-control study of injuries resulting from motor vehicle accidents has been encouraging. They have not only identified potential areas of risk largely governed by age and the number of years of driving experience, but have also identified outlier groups and can be used as a precursor to a more detailed logistic regression analysis.
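
    One common way of combining the two approaches, shown here purely to illustrate the idea rather than to reproduce the authors' procedure, is to let a shallow tree discover risk groups and then feed its terminal-node memberships into a logistic regression so that results can still be reported as odds ratios. A minimal sketch with synthetic case-control-style data (the covariates age and driving experience are assumptions based on the abstract):

```python
# Sketch: combining a classification tree with logistic regression.
# A shallow tree partitions the predictor space (e.g. age, years of driving
# experience); its leaf memberships are then used as categorical inputs to a
# logistic regression whose coefficients are reported as odds ratios.
# Synthetic data below; not the motor vehicle injury study data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 1000
age = rng.uniform(17, 80, n)
experience = np.clip(age - rng.uniform(16, 25, n), 0, None)
logit = -1.0 + 0.03 * (age - 40) - 0.05 * experience
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # injury case
X = np.column_stack([age, experience])

# Step 1: shallow tree to find interpretable risk groups.
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50, random_state=0)
tree.fit(X, y)
leaves = tree.apply(X).reshape(-1, 1)         # terminal node id per subject

# Step 2: logistic regression on leaf membership (first leaf as reference).
encoder = OneHotEncoder(drop="first")
Z = encoder.fit_transform(leaves)
logreg = LogisticRegression(max_iter=1000).fit(Z, y)
odds_ratios = np.exp(logreg.coef_.ravel())
print(dict(zip(encoder.categories_[0][1:], odds_ratios.round(2))))
```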

    Smallset Timelines: A Visual Representation of Data Preprocessing Decisions

    Data preprocessing is a crucial stage in the data analysis pipeline, with both technical and social aspects to consider. Yet the attention it receives is often lacking in research practice and dissemination. We present the Smallset Timeline, a visualisation to help reflect on and communicate data preprocessing decisions. A "Smallset" is a small selection of rows from the original dataset containing instances of dataset alterations. The Timeline is composed of Smallset snapshots representing different points in the preprocessing stage and captions describing the alterations visualised at each point. Edits, additions, and deletions to the dataset are highlighted with colour. We develop the R software package smallsets, which can create Smallset Timelines from R and Python data preprocessing scripts. Constructing the figure prompts practitioners to reflect on and revise decisions as necessary, while sharing it aims to make the process accessible to a diverse range of audiences. We present two case studies to illustrate use of the Smallset Timeline for visualising preprocessing decisions. The case studies involve software defect data and income survey benchmark data, in which preprocessing affects levels of data loss and group fairness in prediction tasks, respectively. We envision Smallset Timelines as a go-to data provenance tool, enabling better documentation and communication of preprocessing tasks at large.
    Comment: In 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22), June 21-24, 2022, Seoul, Republic of Korea.
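
    A conceptual sketch of the underlying idea in plain Python, deliberately not using the smallsets package's own interface (which is not reproduced here): fix a small selection of rows, snapshot it after each preprocessing step, and attach a caption describing the decision. The column names and preprocessing rules below are hypothetical.

```python
# Conceptual sketch of a "Smallset": snapshot a small, fixed selection of rows
# after each preprocessing step, together with a caption describing the
# decision made. This illustrates the idea only and does not use the smallsets
# package's actual interface. Column names and rules are hypothetical.
import pandas as pd

def snapshot(smallset_index, df, caption, log):
    """Record the current state of the Smallset rows plus a caption.
    Rows no longer present in df appear as all-NaN (i.e. deleted)."""
    log.append((caption, df.reindex(smallset_index).copy()))

df = pd.DataFrame({
    "income": [42_000, None, 58_000, 12_000, 95_000],
    "hours":  [38, 40, None, 15, 60],
})
smallset = df.sample(n=3, random_state=7).index   # fixed selection of rows
log = []

snapshot(smallset, df, "Raw data as loaded.", log)

df = df.dropna(subset=["income"])                 # decision: drop missing income
snapshot(smallset, df, "Dropped rows with missing income.", log)

df["hours"] = df["hours"].fillna(df["hours"].median())
snapshot(smallset, df, "Imputed missing hours with the median.", log)

for caption, snap in log:                         # raw material for a timeline
    print(caption)
    print(snap, end="\n\n")
```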

    Combining contemporary and long-term erosion rates to target erosion hot-spots in the Great Barrier Reef, Australia

    Methods for prioritising catchment remediation are typically based on understanding sediment sources over short to medium timescales (10–10² years), using techniques such as sediment fingerprinting, sediment flux monitoring, and catchment modelling. Because such approaches do not necessarily quantify the natural variation in sediment flux over longer timescales, they often represent background or pre-agricultural erosion rates poorly. This study compares long-term (∼100 to >10,000 years) erosion rates derived from terrestrial cosmogenic nuclides (¹⁰Be) with contemporary erosion rates obtained by monitoring sediment fluxes over ~5–10 years. The ratio of these two rates provides a measure of the accelerated erosion factor (AEF), which can be used to identify erosion hot-spots at the sub-catchment scale. The study area is the Burdekin catchment, the largest source of contemporary sediment to the Great Barrier Reef lagoon. Long-term erosion rates are lowest in the Suttor and Belyando sub-catchments and highest in the Bowen sub-catchment (0.0296 mm yr⁻¹). The contemporary erosion rates are highest on small hillslopes with patchy ground cover (0.2726 mm yr⁻¹) and in the Bowen sub-catchment (0.2207 mm yr⁻¹), and lowest in the Belyando sub-catchment (0.0019 mm yr⁻¹). All but two of the sub-catchment sites have an AEF > 1.0, indicating higher contemporary erosion rates than the estimated long-term averages. These results confirm that contemporary, agriculturally induced erosion rates at these sites have increased considerably. Within the context of the Reef Water Quality Protection Plan, they provide justification for water quality targets to be set at the sub-catchment scale, particularly for large and geomorphically diverse catchments.
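
    The accelerated erosion factor itself is simply the ratio of the contemporary erosion rate to the long-term rate; a short sketch using the Bowen figures quoted above:

```python
# Sketch: accelerated erosion factor (AEF) = contemporary erosion rate divided
# by the long-term (cosmogenic-nuclide-derived) rate, both in mm/yr.
def accelerated_erosion_factor(contemporary_mm_yr: float,
                               long_term_mm_yr: float) -> float:
    return contemporary_mm_yr / long_term_mm_yr

# Bowen sub-catchment figures quoted in the abstract.
print(round(accelerated_erosion_factor(0.2207, 0.0296), 1))  # ~7.5
```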